CHANNEL_NAME: stringclasses (1 value)
URL: stringlengths (43 to 43)
TITLE: stringlengths (22 to 99)
DESCRIPTION: stringlengths (1.36k to 3.71k)
TRANSCRIPTION: stringlengths (1.5k to 93.6k)
SEGMENTS: stringlengths (2.35k to 153k)
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=OWvy-fCWTBQ
How to Build a Deep Learning Machine - Everything You Need To Know
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video, I tell you everything you need to know about building your own machine learning workstation! Should you build it? How to do proper research on the components, which websites to use, how to buy components, and much more. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Disclaimer: below are affiliate links - by clicking & buying I earn a percentage at no additional cost to you! :) Core components: * RTX 3090 https://geni.us/CU4C * AMD Ryzen 9 7950x https://geni.us/mJAJX * ARCTIC Liquid Freezer II 280 https://geni.us/7ec1Z * Asus ROG Crosshair X670E Hero https://geni.us/XxIa * Kingston Fury Beast https://geni.us/FB4a * Samsung 980 PRO 2 TB m.2 https://geni.us/AQuFQtB * SAMSUNG 870 QVO SATA III 8TB https://geni.us/iyDmwX * Lian Li PC-O11DW 011 DYNAMIC https://geni.us/2NrKFf * EVGA Supernova 1600 G+ https://geni.us/WGcJ Peripherals I use (super happy with my decision esp. keyboard!): * Dell U2720Q UltraSharp 27 Inch 4K https://geni.us/eyUy * Corsair K70 RGB MK.2 Mechanical w/ MX brown https://geni.us/bgZw * Mouse: Corsair M65 PRO RGB https://geni.us/ghLb List: https://uk.pcpartpicker.com/list/ryMsBj ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Blogs I recommended in the video: https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning https://timdettmers.com/2018/12/16/deep-learning-hardware-guide https://www.emilwallner.com/p/ml-rig https://nonint.com/2022/05/30/my-deep-learning-rig/ https://www.mrdbourke.com/notes-on-building-a-deep-learning-pc/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro - announcing the new series 02:00 Why should you build your deep learning workstation? 03:56 Quick skim of the components 04:45 pcpartpicker is your friend 07:20 Cost comparisons - Lambda Labs workstation & laptop, cloud 15:45 Researching the components - useful resources 15:57 Overview Tim Dettmers' blogs 19:20 Do you need 11+ GBs? 22:55 Don't follow someone's advice blindly! 27:08 Upgrade to NVIDIA 40x series? 28:05 Other resources 30:30 How to pick the right components? 32:10 CPU - Ryzen 9 7950x or Threadripper? 33:50 CPU Cooler 34:45 MoBo 37:50 Memory and storage 40:40 Case 42:00 PSU 42:55 Quick mention of my peripherals 43:15 Advice for targeting your particular budget 44:25 Tips for buying components - site aggregators, Amazon, legitness 48:25 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #deeplearning #machinelearning #workstation #rig #pc #build
What's cracking guys, Alex here. With this video I'm kicking off a brand new series on how to build a deep learning workstation for a particular budget, and on whether you should build one at all. In this first video I'll walk you through how to do the necessary research to pick the right components for your budget and how to buy them. In the second video we'll assemble the machine, and in the third video we'll install the Linux operating system, the necessary CUDA drivers (I know many of you have been complaining about those) and, finally, the necessary libraries, so that we can run a simple ML app and verify that everything is working as expected.

I pretty much already have everything I need to build my own deep learning rig. I have the AMD Ryzen 9 7950X, a beast of a CPU from AMD. I have a 2 TB NVMe SSD, a super fast storage device, and an 8 TB SSD that uses the SATA interface (hopefully I'm not butchering that name). I have 64 gigabytes of RAM from Kingston, and here you can see the liquid cooling for my CPU, the Arctic Liquid Freezer II 280 ARGB. There is a good reason why I picked this one, and it's not just because it's beautiful, which I think it is; we'll get to those details a bit later. I obviously have other components down there as well, including the case, and an RTX 3090 is currently being shipped from the States here to London, a generous gift from Nvidia. So I'm really looking forward to building this machine with you guys and showing you the steps along the journey. Hopefully it's going to be a fun series.

Let's first look at the why: why should you build a deep learning workstation, and should you build one at all? In my opinion there are three main reasons. The first is cost-effectiveness. If you build your own deep learning rig, even for 800 or 1,000 bucks, you'll break even, depending on the number of GPUs and how much you actually use the machine, in literally six months to a year and a half (I'll sketch that math right below). Also keep in mind that at the end of that journey you'll still have a machine you can sell on eBay if you wish, so that money is not going to depreciate to zero dollars in a year. The second reason is learning. You're going to learn so much by doing the research around the components, buying them, assembling the machine and installing the software, so it's a very valuable learning experience in and of itself. If you can save up around 800 bucks, that's already enough to go through this process, learn everything and demystify computers if you're intimidated by how they work. The third important reason is performance. I'm not sure we're all aware of this, but when you're running on a cloud GPU you're paying for overheads: the virtualization overhead and the I/O overhead. In general those machines are much slower than the same hardware sitting locally in your own workstation, which is another reason to build your own. On top of that, you don't have to deal with preemptions or interrupts.
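On that first point, cost-effectiveness, the break-even math is simple enough to sanity-check yourself. Here is a rough sketch in Python; the prices and usage below are placeholder assumptions, not figures from the video, so plug in your own numbers.

    # Rough break-even estimate: DIY rig vs. renting a comparable cloud GPU.
    # Every number here is a placeholder assumption - substitute your own.
    build_cost = 1000.0          # one-off cost of the workstation
    resale_value = 400.0         # what you might recover on eBay later
    cloud_price_per_hour = 0.80  # comparable single-GPU cloud instance, per hour
    hours_per_month = 160        # how much you actually train per month

    effective_cost = build_cost - resale_value
    monthly_cloud_bill = cloud_price_per_hour * hours_per_month
    print(f"Break even after ~{effective_cost / monthly_cloud_bill:.1f} months")
    # (1000 - 400) / (0.80 * 160) is roughly 4.7 months with these assumptions,
    # i.e. the same ballpark as the six-months-to-a-year-and-a-half estimate above.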
And so it's just much easier. Okay guys, having said all of that, let's now look at the particular components I bought, and then do a cost comparison between my machine and something pre-assembled that you could buy on the market. So this is the list of components I bought for my Digital Dragon One deep learning workstation (that's the name I gave it). You can see it here, along with the final price, which is definitely the upper limit; it's not going to cost this much. There are multiple reasons why not. One of them is that there were price swings going on: for example, I paid only 480 for this 8 TB SSD. What sellers sometimes do in the Black Friday season is spike the prices up, only to then reduce them by 20% or so, back to the price the item had a month or two earlier. So this list is definitely the upper limit of what you'll be paying, but I'll touch on cost a bit more later.

For now I just want to get you familiar with the PCPartPicker website. It's a very useful platform, and you'll use it yourself to put together your own deep learning workstation, so let's go through it. The first important tab is Completed Builds. If you know you want, say, an RTX 3060, you go to Completed Builds, apply the RTX 3060 filter, and the website filters out the builds that contain that card, so you can find inspiration for your own build there. That's one way to use the site. The other is to go to the PC Builder tab, which gives you literally a to-do list of the components you need to buy for your workstation. Not only that, it also shows you certain incompatibilities or issues you might run into, so let me demonstrate with a particular example. I'll go to the saved parts list, open the Digital Dragon list, hit "Edit part list" and remove the cooler, just to show you something. You immediately see the compatibility warning, and if we go down here you can read it: the AMD CPU does not include a stock CPU cooler, so adding a CPU cooler to your part list is recommended. That means you definitely need some type of cooling for this particular CPU; they even recommend liquid cooling with at least 240 millimeters of fans, which is quite a lot. So now let's go and pick a particular cooler. Let me find an alternative I was considering before, the Noctua NH-D15. If I paste it right here, we'll see there are zero compatible products, but that's only because of the compatibility filter; if we toggle the filter off, they show up. Now let's add the Noctua, and you'll immediately see that we have an incompatibility. It says the Lian Li case I have is not compatible with this CPU cooler, and indeed, if you go and check the dimensions, you'll see that with this cooler you couldn't put the front panel on the case, which kind of sucks. In any case, PCPartPicker did its job, and that's why it's a great website that you should definitely use. Okay, now let's go back, and let me show you a cost comparison with the Lambda Labs workstation and laptop.
I'm first going to pick this "no peripherals, single GPU" setup and use it as a baseline to compare against the Lambda Labs workstation. For fairness I'll hit "Edit part list" and add an additional video card, an RTX 3090, because their workstation also has two GPUs, although theirs are RTX 3080s, as we'll soon see. The price with these items is, again, an upper limit, because you can always find cheaper items: you can buy on eBay, you can find cheaper websites than Amazon. It comes out to roughly 5,140 pounds. Now keep in mind that I made some mistakes myself, because I decided to go with AMD. Even though in 2020 AMD was clearly better than Intel, now in 2022, if you look at high-performance CPUs, it's actually better to buy Intel: you'd pay around 500 pounds and get an even better CPU than this one. That's the biggest mistake I made in my build.

So let's see what the actual price would be if you didn't buy components as fancy as mine but still ended up with a build of fairly similar quality. Take the 5,140 we have currently and subtract 240, because the Intel processor costs roughly 500. For the motherboard you can certainly find something for around 300 pounds, although it would be harder to fit two GPUs, especially big ones like the RTX 3090, so to be fair let's say you save 250; you can definitely shave off some of the price there. You can buy cheaper memory for sure; this is Kingston, a famous brand, so going cheaper might save you another hundred pounds. You can save at least 440 on the 8 TB SSD by using a hard drive instead, and you wouldn't lose too much, and you could buy a cheaper NVMe SSD and save maybe another 50 pounds. Finally, I think I've done a great job with the case, and the PSU isn't included in their price, so let me quickly check what that roughly costs; let's say it's around 300. I'll open up the calculator, add the 300, and we end up with roughly 4,300 pounds for a powerful build such as this one. Okay, now let's do the side-by-side comparison. I'll take this tab and put it on the side here (my computer is glitching for some reason). On the left-hand side we have the Lambda Labs workstation, and on the right-hand side we have our build.
Remember, the price was around 4,300 if you decide to cut some corners, so to speak. Let's see what the specs are and why theirs costs so much. With Lambda you get a three-year warranty, and they pre-assemble the thing for you: they install Linux, the drivers, the libraries, everything. But you can do all of that yourself and save some money; that's the main trade-off, so think about what you actually want to pay for. They have the AMD Threadripper Pro, a more expensive CPU, actually twice the price of the Ryzen 9, but the thing is, you don't need it unless you go for a three-plus GPU setup; otherwise it's a waste of money, so you could save a bunch there. We'll see why later; I was personally deciding between the Threadripper and the Ryzen 9, I went with the Ryzen, and you'll see the reasons a bit later. They have two RTX 3080s, which are way cheaper than 3090s, so you could save money there as well and the comparison would tilt even further. They have a bit more RAM, which is fine, and they have four terabytes of storage while we have ten. And that's it: you would be paying eight thousand dollars for this build, which is roughly 6,840 pounds, for a weaker setup. If we had picked RTX 3080s we would have saved even more money, and their more expensive CPU is something you don't need. Basically you're saving at least two and a half thousand pounds by building it on your own; that's the summary of this analysis. Whoops, let me just reopen that tab.

They also offer laptops, and some people on LinkedIn asked me whether they should buy one, so here's a quick analysis of that as well. You can see the price: it's still more expensive than our build. As for the specs, you get a single RTX 3080 Ti, which is a bit better than a 3080. They have an Intel i7, which is way cheaper and way lower quality than the CPU we've chosen. They have slightly slower RAM, 4,800 megahertz versus our 5,200, although that's basically a gimmick; you don't care about the clock rate that much and it's pretty much equally good memory. They have only two terabytes of SSD, and so on; they do additionally have the screen. So overall you get a so much better build for less money that it's crazy. Plus laptops are poor with thermals. There is no way this laptop won't have thermal issues; I know this firsthand, having owned an Omen 17 laptop with an RTX 2080. I had severe thermal issues, I never used that machine to train anything for more than two hours max, and after that you can literally boil eggs on it. There is no way to fit a good cooling system into a laptop; the physics just won't allow it. That means a laptop only makes sense if you want to run inference; in that case, sure, buy a laptop, but I think it's definitely overkill.
That's my opinion; just weigh the pros and cons I mentioned. You do get the warranty and you do get the portability: you obviously cannot pack the build we're creating here into your briefcase and go to the local Starbucks. So those are the pros and cons, and you decide for yourself; I'm just trying to give you as much information as possible.

Okay guys, quickly on to this blog post, "Why building your own deep learning computer is 10x cheaper than AWS". There are a lot of posts like this one that show you, for a given setup and number of GPUs, how many months of usage you need before you break even compared to the cloud. And as I said at the beginning of the video, the difference is that you end up with a machine: with the cloud you just pay and have nothing left to sell later on, whereas with your own machine you can sell the components on eBay and get back some of the money you invested. I won't go into much more detail; basically, I think it's more cost-effective if you're serious about machine learning and want to spend some years in this field. It's a no-brainer; I should have done it earlier.

Okay, let's go through some research now. I obviously went through many blog posts and videos to learn and then pick the components. One of the main resources is Tim Dettmers' blog. The first post is titled "Which GPU(s) to get for deep learning: my experience and advice for using GPUs in deep learning". The second is the one where he shows how to build the whole machine, not just how to pick the GPU, "A full hardware guide to deep learning"; in that one he walks you through most of the components beyond the GPUs. Let's quickly skim both posts, and keep in mind that he did make some mistakes: never take everything people say and write at face value, especially if it's an older post; always take it with a grain of salt. First things first: the opening part of the GPU post has a lot of technical detail you probably don't care about if you're just here to build your own PC, so you can ignore things like sparse network training and so on. I think a reasonable place to start is the section on the new fan design and thermal issues; that's where the high-level discussion starts, and before that it's mostly technicalities.
It was just like a lot of technical technicalities that you probably don't care about Okay, so Thermal issues this is relevant if you're building 3 plus GPU setup If not, you can also ignore those parts, but like in any case if you have enough time go through the whole block I think it's a valuable learning experience You can also limit the power of your GPUs through software and doing that you lose some performance But you get you get a lot of benefit thermally so So so so in the in the opposite way if you had thermal throttling you would lose even more performance than doing it Beforehand, but hopefully if you pick a right case and if you pick right cooling you won't have thermal issues And you will not have to limit your power Okay, that's one thing and then he has some cool charts here showing Comparisons between different GPUs and you can see that RTX 3090 is right like up there among the Giants like we 100 and a 100 and I guess these numbers are I'm fairly confident these numbers are from the cloud So that's the thing I mentioned and that's that these GPUs the server GPUs are Suffering from virtualization and I oh and so you get the V 100 is maybe just a bit better than RTX 3090 keep that in mind So that's why I want to have a local machine it's much better although you do pay for maintenance and stuff like that and and you Have to do the research etc. Okay, so here we can see normalized performance per dollar And so so you can see here the RTX 3080 looks to be the best one But keep in mind that the performance here does not take into the account the fact that we have 24 gigabytes in RTX 3090 so that means if you're training bigger models like some bigger transformers That that's very much important. That's very much important, and then the value of this one would be much bigger so here I think they're just not he's just taking into account the performance in the sense of how many numbers can this GPU crunch in a second? Like the the the pure power, but not the memory which is also important So depending on your needs you might you also should take this chart with a grain of salt That's my point here. Okay, so he then goes on to say Whether you should buy more whether you should have more than 11 gigabytes or less so in my experience so far Let me open up my github profile here most of the projects. I work with were Like you could do them with 8 gigabyte VRAM GPU. That was fine So if I open up for example the get project if I open up the deep dream project I made here You can see that the hardware requirements here are fairly fairly like low You only need around two gigabytes to train the get so the graph attention network on some of the simpler Data sets such as core etc. You also need for the deep dream you might need like a Couple of gigabytes literally also so you can see here around two gigabytes But if you get into the territory if you want to train Style GAN or you want to train actual transformer you will need much more memory, and you definitely need to go for 11 Plus gigabytes. That's why I decided to have the RTX 3090 okay So let's continue here. 
You can obviously also fit a model that wants 24 gigabytes onto a smaller device, but then you'll have to deal with mixed precision, gradient checkpointing, model parallelism, data parallelism and so on; well, not data parallelism unless you have multiple GPUs, but you get my point. I covered all of these techniques in my previous videos, which I'll link somewhere here if you're curious to learn more. In general, the rule of thumb is: if you know you're going to train bigger models, just go with more memory; don't play the hero.

Let's continue and see what else is in there. Tim has built clusters and servers for his university, so there are a lot of details if you want to build much more powerful, server-grade workstations, plus discussions about whether you need PCIe 4.0 or not; well, PCIe 5.0 is already a thing, so 4.0 is pretty much the standard nowadays. For a GPU you don't want to run on an x4 link, especially with fancy GPUs like the RTX 30 or 40 series; you want x8 or x16 per GPU, otherwise you lose part of the performance you paid for, which makes no sense. As I said, read the post at your own pace; it's too long for me to cover every detail, I just want to skim it. He gives some very useful tips, and I strongly recommend you check it out. Now let's skim his second post, the full hardware guide: which GPU to pick, the RAM, and so on. He says the RAM clock rate is basically a gimmick; there's also a Linus Tech Tips video that says the same thing, so you can save some money, grab RAM with a lower clock rate, and still get pretty much the same performance. Then there are recommendations on CPUs (how many cores, what frequency), hard drives, the power supply unit, a lot of details; as I said, I won't dig into every particular of the post. He even tells you how many monitors you should have; I currently have two, and I think two or three is the sweet spot, but that depends on your own preferences.

Okay, so having skimmed through Tim's posts, let's now point out some of the details that might not necessarily be correct. One of them is this recommendation: add up the watts of the GPUs plus the CPU, then multiply the total by 110% for the required wattage. If you follow this advice with an RTX 3090, you'll likely end up with a system that crashes every now and then; I've already had people reach out and tell me this formula doesn't work for those GPUs. You can also find it written up; I found this blog post, so let's search for the "600 watt" keyword. Here is the paragraph: naive me thought I would be able to power four 350-watt (quote unquote) GPUs, quote unquote because that's the official TDP, but as we'll soon see the card actually pulls much more. Nope, doesn't work, not even on 240 volts. A lesser-known fact about 3090s is that they actually pull burst currents of up to 600 watts, which means this PSU will power on and even run four of the GPUs, but it will randomly trip the over-current protection when the GPUs simultaneously draw their peak power, which happens every couple of hours. This was a fun one to figure out.
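A quick sanity check of those two sizing approaches: Tim's 110% rule versus budgeting for the roughly 600-watt bursts described in that post. The 350 W and 600 W figures come from the quote above; the CPU and "rest of system" numbers are rough placeholders of my own.

    # PSU sizing sanity check for a dual-RTX-3090 build.
    gpu_tdp = 350          # official RTX 3090 board power (W), per the quote above
    gpu_burst = 600        # transient spikes reported for the 3090 (W)
    num_gpus = 2
    cpu_power = 170        # rough high-end desktop CPU figure (placeholder)
    rest_of_system = 100   # drives, fans, motherboard, margin (placeholder)

    tims_rule = 1.10 * (num_gpus * gpu_tdp + cpu_power)
    burst_aware = num_gpus * gpu_burst + cpu_power + rest_of_system
    print(f"110% rule:   {tims_rule:.0f} W")    # ~957 W - looks fine on paper
    print(f"Burst-aware: {burst_aware:.0f} W")  # ~1470 W - why people reach for 1600 W units
    # The gap between these two numbers is exactly the over-current-protection trap
    # described in the quoted paragraph.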
I also had friends reach out and tell me they hit the same issue. So, as I said, take every piece of information you find with a grain of salt; that's why you want multiple blogs and multiple resources, so you can do some kind of majority voting and figure things out for yourself.

There are some other details worth flagging. He also claims that the computer case does not matter for cooling, but I'll show you the opposite conclusion here. Let me find that part: "Does computer case design matter for cooling?" Tim says no: GPUs are usually perfectly cooled if there is at least a small gap between them, case design will give you one to three degrees Celsius better temperatures, while space between GPUs will provide 10 to 30 degrees of improvement; bottom line, if you have space between GPUs, cooling does not matter, and if you have no space, you need the right cooler design, and so on. This is not actually correct: with a proper case you can get 10 or more degrees of improvement, and that's quite significant. If we open this video from Gamers Nexus, a great channel that benchmarks components, you can see, for the Corsair Crystal case I was personally considering for my setup, the difference between the front panel removed and the front panel on (which is just aesthetics, plus some dust protection): eight degrees. You can also see that some cases run much cooler than others. So cases do matter; as long as you don't pick a completely terrible case you're probably fine, but it's worth a bit of research rather than buying whatever you find first. The final thing he mentions is that the main mistake people make is paying too much attention to the PCIe lanes of a CPU; you shouldn't care much about PCIe lanes, just check whether your CPU and motherboard combination supports the number of GPUs you want to run. That's true only if you don't use NVMe SSDs, because each of those takes four PCIe lanes, so two of them eat eight lanes, which leaves far fewer for your GPUs. So PCIe lanes do matter if you're trying to squeeze all the performance out of your setup.
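The lane arithmetic is easy to write down. Here's a toy lane budget using the figures mentioned in this video (24 usable CPU lanes on the Ryzen 9, x4 per NVMe drive); the exact allocation depends on the CPU and motherboard, so treat it as illustrative only.

    # Toy PCIe lane budget - real allocation depends on your CPU and motherboard.
    cpu_lanes = 24        # usable lanes on the Ryzen 9 class CPU discussed here
    nvme_drives = 2
    lanes_per_nvme = 4    # each NVMe SSD sits on an x4 link
    gpus = 2

    lanes_left = cpu_lanes - nvme_drives * lanes_per_nvme
    print(f"Lanes left for GPUs: {lanes_left} -> x{lanes_left // gpus} per GPU")
    # 24 - 2*4 = 16 lanes: two GPUs still get x8 each, which is fine, but a third
    # GPU or more drives would start to starve them - so "lanes don't matter"
    # only holds until NVMe drives enter the picture.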
Also worth mentioning here is Tim's recent tweet on the RTX 40 series. Again, I would take this with a grain of salt until I see actual numbers, but it's a useful piece of information since it's coming from someone with a reputation in this space: "Finished RTX 4090 modeling, not good. If you have an RTX 3090, probably best to wait four years for chiplets and consumer HBM. This is what a dead Moore's law looks like: you can only scale cost/performance with features, but you can only add Tensor Cores once. We are stuck. More soon." It's kind of pessimistic, to be honest, and until I see the actual numbers I'd stay fairly skeptical, but it goes to show that just because a new series came out doesn't mean you should immediately sell your older GPUs and buy the next new thing. Do your own research, and always question even the authorities, because they sometimes inadvertently make mistakes.

Let's continue. This is another amazing blog post; this guy actually built his own server with, I think, eight GPUs. Even though that's probably not a relevant use case for most of you, he still goes through his rationale and gives a lot of useful information and tips on how to build your setup, so do go through it; I strongly recommend it. Next, somewhere near the bottom of that post I found a setup with two RTX 3090s, which is exactly the setup I care about, so I opened its PCPartPicker list and used it as inspiration for my own build. You'll probably have a different use case, but get into that habit of opening other people's lists and doing side-by-side comparisons; that's the point I'm trying to make. I already mentioned the post where we saw the 600-watt tip about the RTX 3090; that one is a great source of information too. The author now works at OpenAI and used to work at Google; he has two huge servers with around eight GPUs each, rents one out on the Vast.ai platform to earn some money, and uses the other for his personal projects, if I understood him correctly. You can see his machines, and he has a lot of very useful tips; he's been building these setups for quite a few years now, so you can learn a lot from his blog as well.
So I do recommend it And finally, Daniel here, Daniel Bork has a very cool video where where like blog and video So his video is like one hour or something long so you can see here one hour long He's basically assembling a workstation together with his buddy And I think it might be an interesting resource even though the components are fairly obsolete And maybe it's not as informative as some of the other blocks here But I think it's a nice story and I think it's definitely worth checking it out Obviously I am present on YouTube so I consume a lot of YouTube content myself And basically I found a lot of videos and here I just opened two random videos But there are many other good ones that that I've used to just open up their PC part The component lists and just use that as a as a source of inspiration That's basically it Guys, that's pretty much it now that you have now that you've read all of the blogs All of the resources you've watched the video you collected the information Try and take some notes extract some tips that people gave you Now you want to basically start picking your own components And how you do that is you open up the PC part picker You then go and go to the PC builder here You hit the start new and you start building your own components And then you do your research so here's how I would do it So first so now we have the CPU So first thing I have done is I went through all of these blocks And I checked out the recommendations they're giving for the CPU And then I get some like rough intuition Okay and so what I ended up figuring out is that most of them are recommending like the AMD CPU So this guy here go with AMD Okay and then you go to the CPU So here go with AMD okay and then team is also I think team was also mentioning AMD somewhere here Let me just find it It doesn't really matter but like I think it was here AMD GPUs are not competitive but we are not talking about the GPUs We're talking about CPUs let me just find it Okay so here it says AMD CPUs are cheaper and better than Intel CPUs in general for deep learning So as you can tell everyone is recommending AMDs and it kind of fell for the trap As I previously mentioned currently on the high performance end it's better to go with Intel They're cheaper and they have better performance So because of that take everything with a grain of salt do your own research Okay so let's see let me open up the one out here and let me show you some additional details I ultimately ended up by doing this research I ended up with two options The AMD Ryzen 9 which is recommended on a lot of reviews You can see here if I open up a random this tech raider article You can see that the best AMD processor here is precisely the one I've chosen And then let me go back here and then the second option I really had was the Threadripper But then after doing some research and finding this source for example If you go here and if you read this chapter let me just find it Do you need a Threadripper or Ryzen CPU? 
In it they say that before getting started with picking a TRX40 motherboard, you have to make sure you're going with Threadripper out of genuine need and not falling prey to marketing; assuming you do need a high-core-count processor, they then list the reasons why you might choose the Threadripper over the Ryzen 9. The main one is PCIe lanes. My CPU has 24 PCIe lanes, which is enough for two GPUs plus a couple of NVMe drives, but that's it. A Threadripper usually has on the order of a hundred or more PCIe lanes, which is crazy: those chips are made for triple or quad GPU setups, if not more. So if you want three or more GPUs in your workstation in the future, definitely go with the Threadripper; if not, go with the roughly 2x cheaper CPU, which is itself already overpriced compared to Intel. You save money by doing smart research.

Next, the cooler. Again I did my research and found a lot of sources recommending the Arctic Liquid Freezer II and the Noctua NH-D15 air cooler. The first is a liquid cooling system, the second an air cooler. Liquid is usually more effective, but there's the danger of a potential leak, which could be catastrophic but usually doesn't happen, so it's a trade-off. I originally wanted to buy the Noctua, after reading articles showing it's good enough, but it turned out not to be compatible with the case I'd chosen, as we saw before, so I went with the Arctic Liquid Freezer; it's only maybe 10 to 20 pounds more expensive, so it was a good decision in my opinion.

For the motherboard, I went to the Overclockers website, filtered for AM5 motherboards (the ones compatible with the AMD Ryzen 9 CPU), and sorted them by price, going from the cheapest towards the more expensive ones while checking the number of PCIe slots and the clearance between them. The first board that looked good enough was the Asus ProArt, or something like that; let me copy its name, open Amazon and find it. Here it is, I think that's the one. And now let me open the one I actually bought, the Crosshair Hero; as you can see, it's quite a bit more expensive, so let me tell you why I went with it. I first intended to buy the ProArt, went to watch a review video, and somebody in the comments wrote that if you install a single GPU that takes three slots, which means a 30 or 40 series card or even a 20 series, it's almost touching the second PCIe slot (those white bars are the slots where you plug in your GPU). That means that if you were to add a second GPU you would probably have thermal issues, if not clearance issues. Because of that I knew I couldn't buy that motherboard, so I went and found this one. This is maybe a primitive approach, but it works, and the reason it works is that both motherboards use the ATX form factor.
So if you just take a visual look (the manuals don't actually list the exact physical dimensions; I was crazy enough to open one and couldn't find the centimeters anywhere), you can eyeball it through the Amazon pictures. You can see that this board clearly has more space around the first PCIe slot than the ProArt; if we put them side by side, like this, it's much clearer: you can see it here, and here. So this one presumably has enough space for what I need. Again, that's the research I did and the way I did it; unfortunately some of these new motherboards don't have much information available online yet, so I had to do my best to figure things out. Okay, let me close all of these tabs and move to the next component.

The next component is memory. I just went with Kingston, because I've had good experience with Kingston before, and you can see a note here that I bought a much cheaper kit on CCL Online instead of paying 400 pounds on Amazon for the 5,200 megahertz version, which was a big cost saving for me.

Next up, storage. Again, I did my research. I saw that the 990 Pro is coming out soon but is only available for pre-order, and because I don't want to over-optimize I decided to go with the 980 Pro, which is currently one of the best consumer SSDs on the market. I also saw that I don't need the heatsink version, because my motherboard already has heatsinks for the M.2 slots, so that would have been both cost-ineffective and incompatible with my motherboard; take care of details like that. I went with the 2 TB model, where I'll keep the operating system and my video material, and I'll keep bigger datasets on the 8 TB SATA SSD. You could go with a hard drive there if you want; I simply wanted the best component, so I went with an SSD. One of the blogs I mentioned before is where I saw the recommendation for that drive, the Samsung 870, pretty much the same one I bought: he writes that he has an array of them, that they're quite pricey these days, but that he picked them up for under $500 apiece on a lucky Black Friday sale. I also bought mine for less than 500, but actually before the Black Friday sale, because now the price has spiked to 640 or so; that's the funny thing sellers do with prices, so be careful and aware of that phenomenon. I also looked at the FireCuda 530, which is a serious competitor, even outperforming the 980 Pro in some respects, but it lacks software tools like Samsung Magician, so I decided not to go with that brand this time. It's a slight sacrifice, probably performance-wise, although Samsung apparently has better random access speeds than the FireCuda, so they're fairly comparable, I guess.

For the case, I initially wanted to go with the Corsair Carbide Series Air 540, which most of the blogs I showed you recommend, but it's not really big enough
for an RTX 3090. Well, it fits, but it's going to be super tight; I've seen some people make it work, but then you might have heating issues, and I didn't want to go there. So I looked at its successor, the Corsair Crystal Series RGB; let me open an image so you can see it. It's a very nice-looking, beautiful case, but as we saw before it has some heating issues, so I decided to go with the Lian Li case instead, which is also recommended all over the place. It's a strong competitor, it has better thermals, and it's half the price of the Crystal. The only con was really the looks, but since I already have fancy liquid cooling with ARGB lighting, I don't really care.

Finally, the PSU. I did the calculation Tim and others recommend, and according to that I would need around 1,200 watts; but to have a safe margin against those current spikes from the RTX 3090 that can go up to 600 watts, I went with a much stronger PSU and chose 1,600 watts. The brand and the particular model were recommended across many of these blogs; I think even the OpenAI guy is using this same PSU, and the other author is also using EVGA. So apparently it's a very reputable brand, and I decided to go with it.
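Related to that PSU headroom: the software power limiting mentioned earlier in the Tim Dettmers discussion is one more way to keep those 600-watt spikes in check, at a small performance cost. A minimal sketch, assuming an NVIDIA driver with nvidia-smi on the PATH and root privileges; the 300-watt figure is just an example, not a recommendation from the video.

    import subprocess

    # Cap the GPU's sustained power draw so transient spikes stay inside the PSU budget.
    # Run as root; pick a limit that suits your card and cooling.
    subprocess.run(["nvidia-smi", "-pm", "1"], check=True)    # enable persistence mode
    subprocess.run(["nvidia-smi", "-pl", "300"], check=True)  # power limit in watts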
And that's it for the core components; I won't focus much on the peripherals. Just a couple of details: I wanted a mechanical keyboard, so I chose Cherry MX Browns, which are apparently the best switches for typing; the silent ones have no clicky feedback at all, and the clicky ones are too loud. So again, I did some research there and bought the parts.

And guys, that's pretty much it. Let me now quickly show you how you'd go about picking your own setup. Look at which components are the priciest: that's the GPUs and the CPU, and the motherboard as well; if you take a look at my list, which will be open and public so you can use it, you'll see that. Depending on your budget, you want to max out the GPU: if you're doing this for machine learning, the GPU is the name of the game, much more important than the CPU. After that, try to squeeze in the CPU and the other components; honestly, I would max out the money on the GPU and then allocate the rest to everything else. Obviously I can't give each of you a particular recommendation, because everyone has a different set of needs, but by giving you the tools, the ideas, and the way I think and do research, hopefully I'm helping you do it for yourself.

Okay, now as a simple exercise, let me show you how to actually buy the components once you've chosen them. You obviously don't want to buy from the first website you see; you want to do some research. If you search for this case and sort by price, you'll see a lot of websites offering it, and if you were to just go straight to Amazon and buy it without doing any research, you'd lose around 200 pounds, which is why this is a great example. I'll open a couple of these sites: PriceRunner and PriceSpy are aggregators, meaning they don't sell the case themselves, they just point to other shops that offer it. Let's open the Amazon link first; I'll have to type it in manually because this listing doesn't have a price associated with it. You can see it's offered for 340 pounds here, and I bought it for around 100-something pounds. This other listing, which is the same case and has far more reviews, doesn't show a price, which means it's out of stock; it was probably offered for around 150, and now that it's out of stock, competitors can boost their prices, so you don't want to buy it there. If you go to the aggregators you can find other shops, like these, offering it for much lower prices, but keep in mind that you don't want to buy from a random company whose reputation you know nothing about: I've seen plenty of comments, and if you search around various forums, you'll find people complaining about never receiving the item, the item arriving broken, and lots of other problems with random shops. So do some research on the company too. What I did for this case: I googled CCL Online, because back then it had the best price on the aggregators, and I opened Reddit (that's going to be your friend) plus some of the review sites. On Reddit the comments are legit; even though they're five years old, if the company had gone bad since then, somebody would have commented on the thread. You see predominantly positive comments: "I use them all the time", "they're a good company", "I've used them quite a lot", and so on. So, positive feedback on Reddit, and positive feedback here as well: 4.7 across 26,000 reviews is quite a high rating. Keep in mind that these shops are always competing with each other and it's not always fair play: they sometimes hire bots, or actual people, to leave five-star or one-star reviews, depending on whether it's done for their own company or a competitor. You can see a lot of one-star reviews recently, and I strongly suspect that has to do with the Black Friday sales, because at that time it's in a competitor's interest to grab market share and hurt the credibility of rivals; it's a sad practice, but people do it. Judging only from those few recent reviews you'd think the shop is a 3.2 or something, yet it's a 4.7. So it's tricky, but open a couple of links and you'll get a feel for whether the company is legit; for me that was the case, and I had a very good experience dealing with this particular shop.

Okay guys, that's it for the first video of this series. I really tried my best to give you as much information as possible, and I'm going to link all the resources I mentioned down in the video description. In case anything is missing, feel free to comment down below.
Also, any feedback is appreciated; let me know if you have any other questions and I'll do my best to reply to each one of them over the next couple of weeks. In the next video we'll assemble the workstation, and in the final one we'll see how to install all of the software. Until then, cheers!
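Until that software-install video is out, here is the kind of minimal sanity check the series is building towards, a sketch assuming PyTorch with CUDA support is already installed, to confirm the driver and the GPU are visible:

    import torch

    # Quick check that the CUDA install and the GPU are visible to PyTorch.
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x                      # a tiny matmul actually exercises the GPU
        torch.cuda.synchronize()
        print("Matmul OK, mean:", y.mean().item())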
[{"start": 0.0, "end": 3.3000000000000003, "text": " What's cracking guys Alex here with this video?"}, {"start": 3.3000000000000003, "end": 6.88, "text": " I'm kicking off a brand new video series on how to build a deep learning"}, {"start": 7.5600000000000005, "end": 12.38, "text": " Workstation for a particular budget and whether you should build it so in this first video"}, {"start": 12.38, "end": 15.64, "text": " I'm gonna walk you through how to do the necessary research"}, {"start": 16.12, "end": 21.82, "text": " To pick the right components for your budget and how to buy those components in the second video"}, {"start": 21.82, "end": 26.34, "text": " We're gonna see how to basically assemble the machine and in the third video of the series"}, {"start": 26.34, "end": 31.78, "text": " We're going to see how to install the Linux operating system how to install the necessary CUDA drivers"}, {"start": 31.78, "end": 38.06, "text": " I know many of you have been complaining about those and finally how to install the necessary library so that we can run a"}, {"start": 38.32, "end": 42.760000000000005, "text": " Simple ML app and just verify that everything is working as expected"}, {"start": 42.92, "end": 49.92, "text": " So I pretty much have already everything I need to build my own deep learning rig. So I have here the AMD"}, {"start": 50.6, "end": 52.6, "text": " Ryzen 9"}, {"start": 52.6, "end": 58.120000000000005, "text": " 2950 X beast of a CPU from AMD. I have the"}, {"start": 59.0, "end": 63.02, "text": " SSD 2 terabyte NVMe SSD basically super fast"}, {"start": 63.82, "end": 67.42, "text": " Storage device. We also have the 8 terabyte"}, {"start": 68.44, "end": 74.76, "text": " SSD but this one uses the SATA interface. Hopefully I'm not butchering that name and I have"}, {"start": 75.84, "end": 78.24000000000001, "text": " 64 gigabytes of RAM memory right here"}, {"start": 78.92, "end": 80.8, "text": " from Kingston and"}, {"start": 80.8, "end": 85.08, "text": " You can see here some liquid cooling for my CPU. So this is the"}, {"start": 86.12, "end": 88.12, "text": " liquid freezer to"}, {"start": 88.12, "end": 89.32, "text": " 280"}, {"start": 89.32, "end": 92.96, "text": " ARGB liquid cooling system and there is a good reason why I picked this one"}, {"start": 92.96, "end": 95.8, "text": " It's not just because it's beautiful which it is I think"}, {"start": 96.52, "end": 103.2, "text": " But yeah, we'll see those details a bit later. So I obviously have other components down there. I have the case"}, {"start": 103.84, "end": 110.34, "text": " RTX 3090 is currently being shipped from the states here to London a generous gift from Nvidia"}, {"start": 110.34, "end": 117.42, "text": " So I'm really looking forward to building this this machine with you guys showing you the the the steps along along this journey"}, {"start": 117.42, "end": 119.94, "text": " And so yeah, hopefully it's gonna be a fun series"}, {"start": 120.46000000000001, "end": 125.38000000000001, "text": " So let's first see the the why behind why should you build a deep learning?"}, {"start": 126.26, "end": 130.38, "text": " Workstation and whether you should build one. 
So in my opinion, there are three main reasons"}, {"start": 130.38, "end": 132.38, "text": " So the first one is being cost-effective"}, {"start": 132.42000000000002, "end": 139.02, "text": " That means that using if you if you build your own deep learning rig even for 800 bucks or a thousand bucks"}, {"start": 139.02, "end": 145.94, "text": " You'll basically break even depending on the amount of GPUs and the usage your how much you're using your your machine"}, {"start": 145.94, "end": 152.3, "text": " You'll break even in literally six months one year year and a half and you also have to keep in mind that at the end"}, {"start": 152.3, "end": 154.5, "text": " of that journey, you'll still have a"}, {"start": 155.14000000000001, "end": 158.34, "text": " Machine that you can sell on eBay if you wish to do so"}, {"start": 158.34, "end": 162.94, "text": " So that's that money is not gonna like depreciate to zero dollars in a year"}, {"start": 162.94, "end": 165.62, "text": " Obviously the second reason would be learning"}, {"start": 165.62, "end": 172.06, "text": " So you're gonna learn so much by doing the research around the components by buying them by"}, {"start": 172.46, "end": 174.46, "text": " assembling the machine and installing the software"}, {"start": 174.46, "end": 177.58, "text": " So it's a very valuable learning experience in and off itself"}, {"start": 177.58, "end": 185.14000000000001, "text": " And if you can save up like 800 bucks, that's already super enough to go through this process and learn everything and"}, {"start": 185.66, "end": 188.3, "text": " demystify the computers if you're intimidated"}, {"start": 188.3, "end": 194.06, "text": " But by how they work the third important reason would be performance and so I'm not sure whether we are aware"}, {"start": 194.06, "end": 196.38, "text": " But when you're running on a cloud"}, {"start": 197.02, "end": 202.68, "text": " GPU you're paying for the overheads you're paying for the virtualization overhead you're paying for the IO overhead"}, {"start": 202.7, "end": 209.68, "text": " So in general those machines are much slower than if they were here locally in your own workstation"}, {"start": 209.68, "end": 214.38, "text": " So that's additional reason why you would want to build your own workstation"}, {"start": 214.38, "end": 219.46, "text": " So additionally, you don't have to deal with preemptions with interrupts. And so it's just much easier"}, {"start": 219.62, "end": 221.9, "text": " Okay guys having said all of that"}, {"start": 221.9, "end": 225.66, "text": " let's now see the particular components that I bought and"}, {"start": 226.34, "end": 232.82, "text": " Then let's do a cost comparison between my machine and something that you could buy on the market"}, {"start": 232.82, "end": 239.52, "text": " That's pre assembled for you and just compare those two. Okay guys, so this is the list components"}, {"start": 239.52, "end": 243.18, "text": " I bought for my digital dragon one deep learning"}, {"start": 244.06, "end": 249.42000000000002, "text": " Workstation that's the name I gave it and you can you can see it here and you can see the final price here"}, {"start": 249.42, "end": 254.33999999999997, "text": " Which is definitely the upper upper limit. It's not gonna cost this much. 
There are multiple reasons"}, {"start": 254.33999999999997, "end": 257.9, "text": " Why not one of them is that there were price swings going on?"}, {"start": 258.26, "end": 264.26, "text": " For example, I paid this 8 terabyte SSD for only 480 only"}, {"start": 264.82, "end": 268.74, "text": " So what they are doing is sometimes in the Black Friday season"}, {"start": 268.74, "end": 273.21999999999997, "text": " They they start spiking up the prices only to then reduce them by 20%"}, {"start": 273.3, "end": 278.74, "text": " What not to get to the same price that that item had maybe a month or two months ago"}, {"start": 278.74, "end": 282.86, "text": " So this is definitely the upper limit of what you'll be paying"}, {"start": 282.86, "end": 285.64, "text": " But I'm gonna touch on the cost a bit more a bit later"}, {"start": 286.22, "end": 293.14, "text": " For now, I just want to get you familiar with this PC part picker platform website. It's a very useful website"}, {"start": 293.14, "end": 299.38, "text": " You're gonna use it yourself to build your own awesome deep learning workstation. So let's go through it"}, {"start": 299.46000000000004, "end": 303.62, "text": " So first important tab here is the completed build step"}, {"start": 303.62, "end": 311.18, "text": " So in case you you know that you want to have for example RTX 3060 you go here to the completed build step"}, {"start": 311.18, "end": 313.66, "text": " And you hit the RTX"}, {"start": 314.66, "end": 318.18, "text": " 3060 filter basically and then"}, {"start": 318.74, "end": 320.42, "text": " the website"}, {"start": 320.42, "end": 327.34000000000003, "text": " Filters out those builds that have RTX 3060 and so here you can find some inspiration for how to build your own build"}, {"start": 327.34000000000003, "end": 329.34000000000003, "text": " That's one way you can use this this website"}, {"start": 329.34, "end": 336.41999999999996, "text": " the other one is to just go to this PC builder tab and then it offers you literally like a to-do list of the"}, {"start": 336.46, "end": 342.62, "text": " Components you need to buy for your workstation and not only that it also offers you it shows you certain"}, {"start": 343.14, "end": 349.38, "text": " Incompatibilities or issues that you might have so let me take a particular example to to demonstrate that"}, {"start": 349.38, "end": 353.76, "text": " So I'm gonna go to the saved parts list. 
I'm gonna open up the digital dragon"}, {"start": 353.76, "end": 356.94, "text": " I'm gonna hit the edit part list and let's go here"}, {"start": 356.94, "end": 362.3, "text": " so I'm gonna remove the cooler now just for to show you something and you immediately see the"}, {"start": 363.18, "end": 367.14, "text": " Compatibility the warning here and so if we go down here"}, {"start": 367.14, "end": 374.74, "text": " You can see that the AMD CPU does not include a stock CPU cooler adding a CPU cooler to your part list is recommended"}, {"start": 374.86, "end": 379.7, "text": " Okay, so that means you definitely need to have some type of cooling for this particular CPU"}, {"start": 379.7, "end": 382.14, "text": " They even recommend a liquid cooling at least"}, {"start": 382.14, "end": 389.41999999999996, "text": " 20 40 millimeter fans so that's quite a lot and so now let's go and pick a particular cooler"}, {"start": 390.26, "end": 396.38, "text": " Let me just find an alternative that I was considering before and that's this one noctua"}, {"start": 396.9, "end": 399.78, "text": " NHD 15 if I paste it right here"}, {"start": 400.7, "end": 408.02, "text": " We'll see that there are zero compatible products and that's because there are zero compatible products if we toggle off the compatibility filter"}, {"start": 408.02, "end": 414.9, "text": " Then we'll see them and now let's add the noctua and you'll immediately see that we have a compatibility"}, {"start": 414.9, "end": 417.62, "text": " So it says lian-li so that the case I have"}, {"start": 418.06, "end": 424.46, "text": " Is not compatible with the CPU cooler and indeed if you go to and check out the dimensions"}, {"start": 424.46, "end": 430.06, "text": " You'll see that if you were to use the CPU you couldn't put the front panel on your case and that kind of sucks"}, {"start": 430.06, "end": 434.21999999999997, "text": " So so in any case the PC part picker did its job"}, {"start": 434.22, "end": 440.26000000000005, "text": " And that's why it's great website that you should definitely use okay, so now let's go back here"}, {"start": 440.78000000000003, "end": 445.70000000000005, "text": " And let me show you let's do cost comparison with the lambda labs"}, {"start": 446.3, "end": 450.84000000000003, "text": " Workstation and laptop. I'm gonna first pick this no peripheral single GPU"}, {"start": 451.70000000000005, "end": 453.70000000000005, "text": " basically setup and"}, {"start": 453.94000000000005, "end": 460.1, "text": " This one I'm gonna use as a baseline and compare it against the lambda labs workstation. I'm going to for fairness add"}, {"start": 460.1, "end": 466.58000000000004, "text": " Add additional so I'm gonna hit the edit part list here. 
I'm going to add additional video card, okay?"}, {"start": 467.1, "end": 473.86, "text": " And I'm gonna add RTX 3090 and I'm doing this because their workstation also has two"}, {"start": 474.1, "end": 482.38, "text": " GPUs although they have RTX 3080 as we'll soon see okay, so the price here with these items is again"}, {"start": 482.38, "end": 486.24, "text": " This is the upper limit because you can always find cheaper items you can buy on eBay"}, {"start": 486.24, "end": 490.66, "text": " You can find cheaper websites than Amazon and so this is roughly five thousand"}, {"start": 491.5, "end": 498.94, "text": " 140 pounds now keep in mind that I myself made some mistakes because I decided to go with AMD and"}, {"start": 499.5, "end": 505.1, "text": " Even though in 2020 they were much better than Intel now in 2022 if you look at the high-end"}, {"start": 505.78000000000003, "end": 507.78000000000003, "text": " Like high performance CPUs"}, {"start": 507.78000000000003, "end": 511.8, "text": " It's actually better to buy Intel right now if you want to have high performance"}, {"start": 511.8, "end": 517.66, "text": " CPU and it would pay 500 pounds to get even better CPU than this one, so I made a mistake here"}, {"start": 517.66, "end": 519.48, "text": " That's the biggest mistake I made in my build"}, {"start": 519.48, "end": 526.3, "text": " And so let's now kind of see what would be the actual price if you didn't buy as fancy components as I did"}, {"start": 526.3, "end": 528.3, "text": " But it would still have fairly"}, {"start": 529.28, "end": 534.44, "text": " Similar quality build as mine, so let's take the number here, so we have the five"}, {"start": 534.44, "end": 543.12, "text": " Five thousand one hundred forty currently, so let's subtract from that 240 because roughly the Intel processor costs"}, {"start": 543.9200000000001, "end": 545.6800000000001, "text": " 500 so that's"}, {"start": 545.6800000000001, "end": 551.0, "text": " That's that and then motherboard you can find certainly you can find something for like 300 pounds"}, {"start": 551.6, "end": 557.5, "text": " Although it would be harder for you to find the you couldn't fit maybe a two to"}, {"start": 557.5, "end": 565.1, "text": " Two GPUs especially the the big ones like RTX 3090, so this is maybe maybe to be fair. Let's put 250"}, {"start": 565.1, "end": 569.52, "text": " You can definitely shave off some of the price there. 
You can buy cheaper memory for sure"}, {"start": 569.52, "end": 574.48, "text": " This is Kingston their famous brand if you can go cheaper you can maybe save hundred pounds there"}, {"start": 574.86, "end": 577.62, "text": " You can certainly save at least at least"}, {"start": 578.38, "end": 584.58, "text": " 440 here on the SSD so 440 because this is SSD you can you can just have hard drive"}, {"start": 584.58, "end": 589.82, "text": " You wouldn't lose too much, and you would save a bunch of money here as you can see also you could buy cheaper"}, {"start": 590.34, "end": 594.0600000000001, "text": " And the end like cheaper SSD you would say maybe 50 pounds there"}, {"start": 594.5, "end": 598.86, "text": " and then finally I think I've done a great job with case and"}, {"start": 599.98, "end": 605.9200000000001, "text": " They are not including the price of the PSU's I'm gonna add like maybe I forgot the price of that thing"}, {"start": 606.0600000000001, "end": 608.22, "text": " But let's say it's around two two hundred"}, {"start": 608.22, "end": 614.02, "text": " Actually, let me see just for a second. What's roughly the price of this thing?"}, {"start": 615.46, "end": 617.96, "text": " So the price is let's say around"}, {"start": 618.58, "end": 625.26, "text": " Let's say it's three hundred. Let's say it's even three hundred. It's that's fine, so I'm gonna open up the calculator. Let's add the"}, {"start": 626.7, "end": 629.78, "text": " Let's add the 300 and so we end up with roughly"}, {"start": 629.78, "end": 636.86, "text": " 4300 pounds for a powerful build such as this one okay now"}, {"start": 637.1, "end": 643.66, "text": " Let's do this side-by-side comparison. Okay. Let's do this so I'm gonna take this tab put it on the side here"}, {"start": 645.42, "end": 650.3399999999999, "text": " My computer is glitching for some reason okay, so here we are on the left hand side"}, {"start": 650.3399999999999, "end": 656.26, "text": " We have the Lambda labs workstation on the right hand side. We have our built okay remember the price was around"}, {"start": 656.26, "end": 662.8, "text": " 4300 in case you decide to just cut some corners so to speak okay, so here"}, {"start": 662.8, "end": 665.7, "text": " We are let's see what the what specs are here"}, {"start": 665.7, "end": 669.78, "text": " So they the reason they pay so much is because you get warranty for three years"}, {"start": 670.02, "end": 675.52, "text": " They pre assemble the thing for you to install the Linux to install the drivers they install the libraries everything"}, {"start": 675.52, "end": 677.74, "text": " But you can do all that yourself and save some money"}, {"start": 677.74, "end": 683.36, "text": " So that's the main trade if you wanna you want to basically pay for here, so just think about what you want"}, {"start": 683.36, "end": 687.52, "text": " So we have the they have the AMD thread reaper probe"}, {"start": 688.2, "end": 694.08, "text": " That's a more expensive CPU actually twice the price of the AMD Ryzen 9"}, {"start": 694.36, "end": 698.8000000000001, "text": " But the thing is you don't need it unless you go for three plus GPUs"}, {"start": 698.8000000000001, "end": 702.34, "text": " This is a waste of money, so you don't need more than than this one"}, {"start": 702.34, "end": 707.1, "text": " So that's that's gonna you could save money bunch of money there unless you're going for the three plus GPU setup"}, {"start": 707.1, "end": 711.84, "text": " We're gonna see later. 
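To recap the cost-trimming arithmetic above in one place, here is the tally as best I can follow it; the line items are the approximate pound figures mentioned, so treat the result as a ballpark.

```python
# Recap of the rough cost-trimming arithmetic above (figures are the
# approximate pounds mentioned in the video, so treat them as estimates).
upper_limit = 5140          # PCPartPicker total for the dual-RTX-3090 list
savings = {
    "cheaper CPU (Intel instead of Ryzen 9)": 240,
    "cheaper motherboard": 250,
    "cheaper RAM": 100,
    "hard drive instead of the 8 TB SSD": 440,
    "cheaper NVMe SSD": 50,
}
psu_estimate = 300          # the PSU price wasn't in the list, so add it back

total = upper_limit - sum(savings.values()) + psu_estimate
print(f"Trimmed build: ~£{total}")   # ~£4,360, i.e. roughly the £4,300 quoted
```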
Why and because me personally I was I was basically"}, {"start": 711.84, "end": 716.6800000000001, "text": " Deciding whether I should buy thread reaper or this one, and I decided to buy this one"}, {"start": 716.6800000000001, "end": 722.76, "text": " And you'll see the reasons a bit later. They have two RTX 3080 those are way cheaper than 3090"}, {"start": 722.84, "end": 727.72, "text": " So you can save money there as well, so we'll be even even cheaper. They have a bit more memory"}, {"start": 727.72, "end": 731.76, "text": " Okay, so that's that's fine. They have four terabytes. We have ten terabytes and"}, {"start": 732.84, "end": 738.9200000000001, "text": " Finally that's it so that means you would be paying eight thousand for this build, so let's see"}, {"start": 738.92, "end": 743.5999999999999, "text": " So eight thousand this is let's see how many pounds. So it's roughly"}, {"start": 744.3199999999999, "end": 745.88, "text": " 6840"}, {"start": 745.88, "end": 751.8, "text": " pounds for a weaker setup because if you if we were to pick RTX 3080"}, {"start": 752.36, "end": 753.88, "text": " We'd basically"}, {"start": 753.88, "end": 755.3199999999999, "text": " have have"}, {"start": 755.3199999999999, "end": 758.9599999999999, "text": " Saved more money, but they do have more expensive CPU"}, {"start": 758.9599999999999, "end": 763.88, "text": " Which you don't need so it's kind of basically you're saving at least two"}, {"start": 763.88, "end": 771.76, "text": " Two thousand five hundred pounds if you build it you're on your own. That's the the basically the summary of this analysis"}, {"start": 771.8, "end": 774.62, "text": " Okay, whoops. Let me just reopen that one"}, {"start": 775.28, "end": 777.28, "text": " Okay, so let's go here"}, {"start": 777.36, "end": 782.14, "text": " They also offer laptops and some of the people on LinkedIn asked me whether they should buy the laptop"}, {"start": 782.14, "end": 784.6, "text": " So here is the analysis a quick analysis here as well"}, {"start": 784.68, "end": 789.84, "text": " So you can see here the price is this is still more expensive than our build and let's see the specs"}, {"start": 789.84, "end": 795.24, "text": " So the specs are you have a single RTX 3080 Ti that's a bit better than 3080"}, {"start": 795.8000000000001, "end": 798.2, "text": " You have the let's see they have"}, {"start": 798.96, "end": 803.26, "text": " Intel i7 which is way cheaper and way lower quality than the"}, {"start": 804.2, "end": 811.32, "text": " CPU which we've chosen they have a bit worse RAM although this doesn't matter that much because this is like"}, {"start": 812.2800000000001, "end": 813.96, "text": " 4800 we have"}, {"start": 813.96, "end": 819.4000000000001, "text": " 5200 Hertz although this is basically a gimmick you don't care about the the the the hurts that much"}, {"start": 819.4, "end": 821.8, "text": " And it's equally good memory pretty much"}, {"start": 822.24, "end": 828.68, "text": " And then they have only two terabytes of SSD and blah blah blah they additionally have the screen so you get"}, {"start": 829.28, "end": 836.9, "text": " Awful so much better build for less money that it's crazy plus laptops are poor with thermals"}, {"start": 836.9, "end": 840.04, "text": " There is no way that this laptop won't have thermal issues"}, {"start": 840.04, "end": 845.24, "text": " I know this firsthand having had Omen 17 laptop with RTX 2080"}, {"start": 845.24, "end": 853.28, "text": " I had severe severe like thermal issues, 
and I never used my machine for training something for more than like two hours max"}, {"start": 853.28, "end": 860.12, "text": " And then it's literally you can boil eggs on your laptop. There is no way you're gonna have a good cooling system in a laptop"}, {"start": 860.12, "end": 862.12, "text": " It's just the physics of it. It won't allow it"}, {"start": 862.24, "end": 867.6800000000001, "text": " So that means you can use a laptop only in case you want to run inference; only in that case should you buy a laptop"}, {"start": 867.6800000000001, "end": 871.34, "text": " But like I think this is definitely an overkill. That's my opinion"}, {"start": 871.34, "end": 876.38, "text": " Just take the pros and cons I mentioned you do get the warranty you do get the portability"}, {"start": 876.38, "end": 879.86, "text": " You obviously cannot pack the build"}, {"start": 879.86, "end": 885.74, "text": " we're creating here into your briefcase and go to a local Starbucks or whatnot"}, {"start": 885.74, "end": 892.34, "text": " So those are the pros and cons you decide for yourself. I'm just trying to give you as much information as possible here"}, {"start": 892.38, "end": 898.96, "text": " Okay guys so quickly on to this blog: why building your own deep learning computer is 10x cheaper than AWS"}, {"start": 898.96, "end": 903.4200000000001, "text": " So there are a lot of blogs similar to this one where they'll show you for"}, {"start": 903.86, "end": 907.48, "text": " How many months you'll need depending on your setup how many GPUs you have"}, {"start": 907.86, "end": 911.86, "text": " Until you break even compared to using the cloud and as I said previously"}, {"start": 911.86, "end": 916.0400000000001, "text": " I think in the beginning of the video the difference is you will end up with a machine"}, {"start": 916.14, "end": 922.22, "text": " So you'll end up with something whereas here you just pay the cloud and then you just don't have anything"}, {"start": 922.22, "end": 930.38, "text": " Left to sell later on whereas with your machine you can sell your components on eBay and get back some money that you invested"}, {"start": 930.94, "end": 931.78, "text": " beforehand"}, {"start": 931.78, "end": 937.5, "text": " Okay, so I won't get into much more detail. Basically, I think it's more cost-effective"}, {"start": 938.62, "end": 944.98, "text": " If you're serious about machine learning, if you want to spend some years in this field, you definitely need one. I think it's a no-brainer"}, {"start": 944.98, "end": 946.98, "text": " I should have done it before basically"}, {"start": 946.98, "end": 953.54, "text": " Okay, let's go through some research right now. So I obviously went through many blogs and videos to"}, {"start": 953.82, "end": 959.74, "text": " Learn and then pick the components. So one of the main resources is Tim Dettmers' blog"}, {"start": 959.74, "end": 963.66, "text": " So you can see here that the title of this one is which GPUs to get for deep learning"}, {"start": 963.82, "end": 968.02, "text": " My experience and advice for using GPUs in deep learning. 
That's one of his blogs"}, {"start": 968.02, "end": 973.4200000000001, "text": " The second one is this one where he shows how you can build the whole machine"}, {"start": 973.42, "end": 978.9399999999999, "text": " Not just how to pick the GPU so you can see here a full hardware guide to deep learning and in this one"}, {"start": 978.9399999999999, "end": 982.8199999999999, "text": " He walks you through a lot of the components and not just the GPUs. Okay"}, {"start": 983.38, "end": 988.8, "text": " So let's quickly skim through these blogs and also keep in mind that he did make some mistakes"}, {"start": 988.8, "end": 995.0999999999999, "text": " So always take everything people say and write, especially if it's an older blog, with a grain of salt"}, {"start": 995.62, "end": 999.86, "text": " So now let's quickly walk through the blog here. Let's see some important details"}, {"start": 999.86, "end": 1009.66, "text": " First things first, the first part of the blog has a lot of technical details that you probably don't care about if you're here to just build your own PC"}, {"start": 1009.66, "end": 1014.1, "text": " So you can ignore things like sparse network training obviously, etc, etc"}, {"start": 1014.1, "end": 1019.14, "text": " So I think a reasonable thing to do is start from this new fan design thermal issues"}, {"start": 1019.14, "end": 1023.54, "text": " So that's where the high level talk starts before that. It was just like a lot of"}, {"start": 1024.22, "end": 1026.78, "text": " technicalities that you probably don't care about"}, {"start": 1027.54, "end": 1029.14, "text": " Okay, so"}, {"start": 1029.14, "end": 1033.4, "text": " Thermal issues this is relevant if you're building a 3 plus GPU setup"}, {"start": 1033.4, "end": 1038.9, "text": " If not, you can also ignore those parts, but like in any case if you have enough time go through the whole blog"}, {"start": 1038.9, "end": 1040.66, "text": " I think it's a valuable learning experience"}, {"start": 1040.66, "end": 1047.3000000000002, "text": " You can also limit the power of your GPUs through software and doing that you lose some performance"}, {"start": 1047.3000000000002, "end": 1049.5400000000002, "text": " But you get a lot of benefit"}, {"start": 1050.0600000000002, "end": 1051.6200000000001, "text": " thermally so"}, {"start": 1051.62, "end": 1059.06, "text": " The other way around, if you had thermal throttling you would lose even more performance than doing it"}, {"start": 1059.6599999999999, "end": 1065.1, "text": " Beforehand, but hopefully if you pick a right case and if you pick right cooling you won't have thermal issues"}, {"start": 1065.1, "end": 1067.1, "text": " And you will not have to limit your power"}, {"start": 1067.1799999999998, "end": 1070.86, "text": " Okay, that's one thing and then he has some cool charts here showing"}, {"start": 1071.5, "end": 1074.78, "text": " Comparisons between different GPUs and you can see that RTX"}, {"start": 1074.78, "end": 1081.66, "text": " 3090 is right up there among the giants like the V100 and the A"}, {"start": 1081.94, "end": 1086.42, "text": " 100 and I'm fairly confident these numbers are from the cloud"}, {"start": 1086.42, "end": 1091.46, "text": " So that's the thing I mentioned, that these GPUs, the server GPUs, are"}, {"start": 1091.8999999999999, "end": 1098.34, "text": " Suffering from virtualization and I/O overheads and so the V100 is maybe just a
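A minimal sketch of the software power-capping mentioned above, assuming an NVIDIA driver with the nvidia-smi CLI is installed and that you run it with sufficient privileges; the 300 W cap and the GPU index are arbitrary examples, not recommended values.

```python
# Sketch of capping GPU board power from software via the nvidia-smi CLI.
import subprocess

GPU_INDEX = "0"
POWER_LIMIT_WATTS = "300"   # e.g. cap an RTX 3090 below its 350 W default TDP

# Show the currently configured power limits for that GPU
subprocess.run(["nvidia-smi", "-i", GPU_INDEX, "-q", "-d", "POWER"], check=True)

# Apply the new limit (typically needs root / administrator rights)
subprocess.run(["nvidia-smi", "-i", GPU_INDEX, "-pl", POWER_LIMIT_WATTS], check=True)
```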
bit better than RTX"}, {"start": 1098.86, "end": 1100.86, "text": " 3090 keep that in mind"}, {"start": 1100.86, "end": 1107.62, "text": " So that's why I want to have a local machine it's much better although you do pay for maintenance and stuff like that and and you"}, {"start": 1107.62, "end": 1112.2199999999998, "text": " Have to do the research etc. Okay, so here we can see normalized performance per dollar"}, {"start": 1112.2199999999998, "end": 1117.6799999999998, "text": " And so so you can see here the RTX 3080 looks to be the best one"}, {"start": 1117.6999999999998, "end": 1123.3999999999999, "text": " But keep in mind that the performance here does not take into the account the fact that we have"}, {"start": 1123.6599999999999, "end": 1129.9799999999998, "text": " 24 gigabytes in RTX 3090 so that means if you're training bigger models like some bigger transformers"}, {"start": 1129.98, "end": 1136.82, "text": " That that's very much important. That's very much important, and then the value of this one would be much bigger so here"}, {"start": 1136.82, "end": 1144.4, "text": " I think they're just not he's just taking into account the performance in the sense of how many numbers can this GPU crunch in a second?"}, {"start": 1144.4, "end": 1148.3600000000001, "text": " Like the the the pure power, but not the memory which is also important"}, {"start": 1148.3600000000001, "end": 1152.8, "text": " So depending on your needs you might you also should take this chart with a grain of salt"}, {"start": 1153.42, "end": 1158.02, "text": " That's my point here. Okay, so he then goes on to say"}, {"start": 1158.02, "end": 1164.7, "text": " Whether you should buy more whether you should have more than 11 gigabytes or less so in my experience so far"}, {"start": 1164.7, "end": 1169.5, "text": " Let me open up my github profile here most of the projects. I work with were"}, {"start": 1170.02, "end": 1175.22, "text": " Like you could do them with 8 gigabyte VRAM GPU. That was fine"}, {"start": 1175.22, "end": 1181.02, "text": " So if I open up for example the get project if I open up the deep dream project I made here"}, {"start": 1181.66, "end": 1186.1399999999999, "text": " You can see that the hardware requirements here are fairly fairly like low"}, {"start": 1186.14, "end": 1193.18, "text": " You only need around two gigabytes to train the get so the graph attention network on some of the simpler"}, {"start": 1193.5800000000002, "end": 1199.42, "text": " Data sets such as core etc. You also need for the deep dream you might need like a"}, {"start": 1200.26, "end": 1203.8200000000002, "text": " Couple of gigabytes literally also so you can see here around two gigabytes"}, {"start": 1203.8200000000002, "end": 1206.6200000000001, "text": " But if you get into the territory if you want to train"}, {"start": 1206.8600000000001, "end": 1214.18, "text": " Style GAN or you want to train actual transformer you will need much more memory, and you definitely need to go for 11"}, {"start": 1214.18, "end": 1218.02, "text": " Plus gigabytes. That's why I decided to have the RTX"}, {"start": 1218.66, "end": 1220.3400000000001, "text": " 3090 okay"}, {"start": 1220.3400000000001, "end": 1228.14, "text": " So let's continue here. 
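If you want to check how much VRAM your card actually exposes before deciding whether you need 11+ GB, a quick one-off using standard PyTorch APIs (assuming a CUDA build of PyTorch is installed) looks like this:

```python
# Quick sanity check of how much VRAM the first visible GPU exposes.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB VRAM")
    # Rule of thumb from the discussion above: ~2 GB is enough for small GNN /
    # DeepDream experiments, while StyleGAN or big transformers want 11+ GB
    # (ideally the 24 GB of an RTX 3090).
else:
    print("No CUDA device visible")
```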
You can also obviously fit the 24 gigabyte model into smaller device"}, {"start": 1228.14, "end": 1230.74, "text": " But you will have to deal with mixed precision"}, {"start": 1230.74, "end": 1236.3400000000001, "text": " You will have to deal with gradient checkpointing model parallelism data parallelism blah blah blah all of that all of those details"}, {"start": 1236.3400000000001, "end": 1238.54, "text": " Well not data parallelism unless you have multiple"}, {"start": 1239.38, "end": 1241.38, "text": " GPUs, but you get my point"}, {"start": 1241.38, "end": 1245.3000000000002, "text": " Basically I covered all of these techniques in my previous videos"}, {"start": 1245.3000000000002, "end": 1248.1000000000001, "text": " I'm gonna link them somewhere here if you're curious to learn more"}, {"start": 1248.18, "end": 1253.22, "text": " But in general a rule of thumb if you know you're gonna train bigger models. Just go with more memory"}, {"start": 1253.22, "end": 1255.22, "text": " Don't don't don't don't play the hero"}, {"start": 1255.7800000000002, "end": 1257.7800000000002, "text": " Okay, let's continue here"}, {"start": 1258.38, "end": 1262.74, "text": " Let's see what else so he's basically he's built teams built"}, {"start": 1263.94, "end": 1269.18, "text": " Like clusters and servers for his university, and so there is a lot of details here if you want to build"}, {"start": 1269.18, "end": 1271.18, "text": " much more powerful"}, {"start": 1271.18, "end": 1273.18, "text": " server type of workstations"}, {"start": 1274.18, "end": 1275.5, "text": " and"}, {"start": 1275.5, "end": 1282.8600000000001, "text": " Discussions around whether you need a PCI 4 or not well PCI 5.0 is already a thing so 4.0"}, {"start": 1282.8600000000001, "end": 1284.8600000000001, "text": " is pretty much standard nowadays and"}, {"start": 1285.02, "end": 1291.02, "text": " For GPU you don't want to go with 4x especially if you buy a fancy GPUs like like RTX"}, {"start": 1291.18, "end": 1298.46, "text": " 30 or 40 series you don't want to go with 4x with for PCL and you want to go with 8 or 16 you otherwise you will lose"}, {"start": 1298.46, "end": 1302.9, "text": " This performance that you you spend your money for so it doesn't make any sense to be honest"}, {"start": 1304.06, "end": 1310.38, "text": " Okay, as I said go and read through the blog at your own pace. It's too long for me to cover every detail"}, {"start": 1310.38, "end": 1312.14, "text": " I just want to kind of skim it"}, {"start": 1312.14, "end": 1314.3400000000001, "text": " He gives some very useful tips here as well"}, {"start": 1314.78, "end": 1321.04, "text": " And I definitely strongly recommend you check it out. 
Let's not skim his second blog here"}, {"start": 1321.66, "end": 1322.98, "text": " so"}, {"start": 1322.98, "end": 1329.74, "text": " Which GPU you should pick the RAM and basically he says here that the the clock rate is like a"}, {"start": 1329.9, "end": 1332.42, "text": " gimmick and you can you can watch this video from Linus"}, {"start": 1332.94, "end": 1336.98, "text": " Tech tips YouTube channel, and he says that basically it's pretty much gimmicks"}, {"start": 1336.98, "end": 1340.22, "text": " You can save some money and just grab a RAM that has lower"}, {"start": 1341.14, "end": 1346.14, "text": " Basically clock rate, and you'll still have this pretty much the same amount of performance, so yeah"}, {"start": 1347.58, "end": 1349.58, "text": " recommendations on CPUs"}, {"start": 1350.02, "end": 1352.02, "text": " how many cores"}, {"start": 1352.02, "end": 1354.02, "text": " frequency their drive"}, {"start": 1354.54, "end": 1360.54, "text": " Power supply unit a lot of details as I said I will not be digging into into the actual"}, {"start": 1362.34, "end": 1368.34, "text": " Particularities of this blog even he also just tells you how many monitors you should have I currently have two monitors"}, {"start": 1368.34, "end": 1373.3799999999999, "text": " I think two or three are sweet spot, but it also depends on your own preferences basically okay"}, {"start": 1373.86, "end": 1376.86, "text": " so having skim through teams blocks and"}, {"start": 1376.86, "end": 1382.4599999999998, "text": " Let's now point some of the details that might not be necessarily correct"}, {"start": 1382.4599999999998, "end": 1385.4199999999998, "text": " So one of those is this recommendation here"}, {"start": 1385.4199999999998, "end": 1391.82, "text": " So add up watts of GPUs plus CPU then multiply the total by 110 percent for required wattage"}, {"start": 1392.4199999999998, "end": 1400.06, "text": " If you listen to this advice, and you have RTX 3090 you'll likely gonna end up having a system that crashes every now"}, {"start": 1400.06, "end": 1404.54, "text": " And then I already had people reach out and tell me this formula doesn't work for those GPUs"}, {"start": 1404.54, "end": 1410.1399999999999, "text": " And you can also find I found this block here, and if we let's just find the 600 watt"}, {"start": 1410.94, "end": 1415.6599999999999, "text": " Keyword here, okay, so here it is so here is a paragraph from this block"}, {"start": 1416.74, "end": 1419.1, "text": " Naive me thought that I would be able to power for"}, {"start": 1420.98, "end": 1426.3, "text": " 350 watts a quote unquote GPUs, so it's quote unquote because that's the official TDP"}, {"start": 1426.3799999999999, "end": 1431.94, "text": " But as we'll soon see it's actually pulling much more so nope doesn't work not even on"}, {"start": 1431.94, "end": 1434.8600000000001, "text": " 240 volts a lesser-known fact about"}, {"start": 1435.46, "end": 1441.02, "text": " 390s is they actually pull burst currents of up to 600 watts this means that a"}, {"start": 1441.54, "end": 1447.1000000000001, "text": " This PSU will power on and even run four of the of the of the GPUs"}, {"start": 1447.14, "end": 1451.06, "text": " But we will randomly trip the over current protection when all three"}, {"start": 1452.1000000000001, "end": 1457.42, "text": " GPUs simultaneously draw the their peak power which happens every couple of hours"}, {"start": 1457.42, "end": 1464.0600000000002, "text": " This was a fun one to figure out I also had 
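To make the PSU-sizing point concrete, here is the naive "sum of TDPs times 110%" rule next to a budget that allows for the roughly 600 W transient spikes people report per RTX 3090; the wattages are illustrative assumptions for a dual-3090, Ryzen 9 class build, not measured values.

```python
# Two ways of sizing the PSU for a dual RTX 3090 box (illustrative numbers).
gpu_tdp, gpu_burst = 350, 600      # watts per RTX 3090: official TDP vs. reported bursts
cpu_tdp = 170                      # e.g. a Ryzen 9 7950X
rest_of_system = 100               # fans, drives, motherboard, margin
n_gpus = 2

naive = 1.10 * (n_gpus * gpu_tdp + cpu_tdp)
with_burst_headroom = n_gpus * gpu_burst + cpu_tdp + rest_of_system

print(f"naive rule: ~{naive:.0f} W")                     # ~957 W -> 1000 W looks "enough"
print(f"with burst headroom: ~{with_burst_headroom} W")  # ~1470 W -> hence a 1600 W PSU
```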
friends reach out and tell me they they hit the same issue"}, {"start": 1464.0600000000002, "end": 1468.26, "text": " So as I said take every single information you find with a grain of salt"}, {"start": 1468.26, "end": 1471.1000000000001, "text": " That's why you want to have multiple blogs multiple resources"}, {"start": 1471.1000000000001, "end": 1476.1000000000001, "text": " Just see and do some type of majority voting and figure out stuff for your for yourself, okay?"}, {"start": 1476.78, "end": 1484.44, "text": " There are some other details that are we're mentioning so let's go here. I have some so he also mentions that that the"}, {"start": 1484.44, "end": 1490.3600000000001, "text": " That the computer case does not matter for for for cooling, but I'm gonna. I'm gonna show you the opposite"}, {"start": 1490.88, "end": 1497.92, "text": " Conclusion here, so let me just find that thing so does computer case design matter for cooling team says here no"}, {"start": 1498.44, "end": 1504.8400000000001, "text": " GPUs are usually perfectly cooled if there is at least a small gap between GPUs case design will give you one to three"}, {"start": 1505.3200000000002, "end": 1507.3200000000002, "text": " centigrade"}, {"start": 1507.3200000000002, "end": 1512.1200000000001, "text": " better temperatures a space between GPUs will provide you with 10 to 30"}, {"start": 1512.12, "end": 1519.6399999999999, "text": " Basically Celsius improvements the bottom line if you have space between GPUs cooling does not matter if you have no space between GPUs"}, {"start": 1519.6399999999999, "end": 1521.9199999999998, "text": " You need the right color design blah blah blah blah blah"}, {"start": 1522.7199999999998, "end": 1529.04, "text": " Bottom line this is not actually correct with a proper case you can get up to 10 or more degrees"}, {"start": 1529.7199999999998, "end": 1536.08, "text": " Improvements, and that's quite significant, so if we open up this block, so sorry this video. 
I found this video from this great"}, {"start": 1536.08, "end": 1541.96, "text": " Channel that does benchmarking of various components called gamers Nexus and you can see for this particular"}, {"start": 1542.1599999999999, "end": 1547.08, "text": " Corsair crystal case that I was personally considering for my for my setup"}, {"start": 1547.08, "end": 1550.08, "text": " You can see the difference when you remove the front panel"}, {"start": 1550.56, "end": 1557.36, "text": " And when the front panel is on which is just like pure aesthetics and of course case so some some protection against us, etc"}, {"start": 1557.36, "end": 1559.36, "text": " But there is a difference of eight degrees"}, {"start": 1559.6, "end": 1563.8799999999999, "text": " And you can also see that some cases here have much lower"}, {"start": 1563.88, "end": 1569.48, "text": " Temperatures than some other cases so cases do matter as long as you don't take a completely shitty case"}, {"start": 1569.72, "end": 1576.68, "text": " You're you're probably fine, but like this this is something that's worth considering and doing a small research also not just"}, {"start": 1577.0, "end": 1579.88, "text": " By anything you find so just keep that in mind"}, {"start": 1580.8400000000001, "end": 1587.6000000000001, "text": " The final thing he mentions is the main mistake that people make is that people pay too much attention to PCI lanes of a CPU"}, {"start": 1588.0800000000002, "end": 1590.96, "text": " You should not care much about PCI lanes instead"}, {"start": 1590.96, "end": 1597.28, "text": " Just look up if your CPU and motherboard combination supports a number of GPUs that you want to run"}, {"start": 1597.92, "end": 1604.48, "text": " This is true in case you don't want to use the NVMe SSDs and because those guys use"}, {"start": 1604.88, "end": 1611.92, "text": " Four PCI lanes so that means if you're gonna stick two of the SSDs you'll you'll eat eight you'll eat eight"}, {"start": 1612.4, "end": 1615.88, "text": " PCI lanes and that leaves you with much less lanes for your GPUs"}, {"start": 1615.88, "end": 1617.88, "text": " So you need to keep that in mind as well, so"}, {"start": 1617.88, "end": 1624.5200000000002, "text": " PCI and lanes matter if if you're if you're trying to squeeze out the performance from your from your setup, okay?"}, {"start": 1625.16, "end": 1631.4, "text": " Okay worth mentioning here is also Tim's recent tweet on the RTX 40 series and again"}, {"start": 1631.4, "end": 1634.7600000000002, "text": " I would take this with a grain of salt until I see the actual numbers"}, {"start": 1634.7600000000002, "end": 1640.68, "text": " But it's a useful piece of information since it's coming from Tim who does have some reputation in this space"}, {"start": 1640.68, "end": 1644.2800000000002, "text": " So finished RTX 4090 modeling not good"}, {"start": 1644.28, "end": 1649.72, "text": " If you have an RTX 3090 probably best to wait four years for chiplets and consumer HPM"}, {"start": 1649.72, "end": 1654.44, "text": " This is what dead moore's law looks like you can only scale cost per with features"}, {"start": 1654.44, "end": 1657.32, "text": " But you can only add tensor core once we are stuck more soon"}, {"start": 1657.32, "end": 1661.16, "text": " It's kind of pessimistic to be honest and until I see the actual numbers"}, {"start": 1661.16, "end": 1666.76, "text": " I would be fairly skeptical, but this goes on to tell you that just because a new series came out"}, {"start": 1666.76, "end": 
1668.76, "text": " Doesn't mean you should immediately"}, {"start": 1669.32, "end": 1671.32, "text": " Sell your older GPUs"}, {"start": 1671.32, "end": 1675.24, "text": " And go and buy the next new thing you need to do your own research"}, {"start": 1675.24, "end": 1682.2, "text": " And also always question even the authority because they they sometimes inadvertently make mistakes so just keep that in mind, okay?"}, {"start": 1682.76, "end": 1688.2, "text": " Let's continue here. This is our amazing block. I found this guy actually built his own server"}, {"start": 1688.6799999999998, "end": 1694.4399999999998, "text": " With I think eight GPUs or something so even though that's probably for the most for most of you guys"}, {"start": 1694.4399999999998, "end": 1696.4399999999998, "text": " Not gonna be a relevant use case"}, {"start": 1696.44, "end": 1705.72, "text": " He still goes through the rationale and he gives a lot of useful information and tips on how to build your your"}, {"start": 1707.3200000000002, "end": 1711.88, "text": " Setup so do go through this block. I strongly recommend it and so yeah and"}, {"start": 1712.44, "end": 1715.3200000000002, "text": " Next up what I've done is somewhere on the bottom of this block"}, {"start": 1715.3200000000002, "end": 1722.8400000000001, "text": " I found like a setup that has two RTX 3090 which is exactly the setup that I care about and so I opened up the PC part"}, {"start": 1722.84, "end": 1727.24, "text": " Picker here and I use this particular list as an inspiration for my use case"}, {"start": 1727.24, "end": 1728.9199999999998, "text": " But you'll probably have a different use case"}, {"start": 1728.9199999999998, "end": 1734.6799999999998, "text": " So just but like just get that pattern of opening up stuff and like doing side-by-side comparisons"}, {"start": 1734.6799999999998, "end": 1736.6799999999998, "text": " That's that's against the point. I'm trying to make here"}, {"start": 1738.12, "end": 1743.8, "text": " I mentioned this block where we saw the 600 watt tip about the rtx 3090"}, {"start": 1743.8, "end": 1749.08, "text": " This is an amazing piece of information here. This guy actually now works at open AI"}, {"start": 1749.08, "end": 1753.6399999999999, "text": " He used to work at Google. He has two huge servers with like eight GPUs"}, {"start": 1753.8799999999999, "end": 1758.12, "text": " He's like renting one to earn some money like renting on the vest AI platform"}, {"start": 1758.12, "end": 1762.36, "text": " And he's using he's using the second one for his personal projects if I understood him well"}, {"start": 1762.36, "end": 1766.84, "text": " And so you can see his machines, but he has a lot of very useful tips"}, {"start": 1766.84, "end": 1770.52, "text": " He's been building these setups for for for quite some years now"}, {"start": 1770.52, "end": 1774.9199999999998, "text": " And so you might learn a lot from from this blog as well. 
So I do recommend it"}, {"start": 1774.92, "end": 1782.8400000000001, "text": " And finally, Daniel here, Daniel Bork has a very cool video where where like blog and video"}, {"start": 1782.8400000000001, "end": 1786.8400000000001, "text": " So his video is like one hour or something long so you can see here one hour long"}, {"start": 1786.8400000000001, "end": 1790.92, "text": " He's basically assembling a workstation together with his buddy"}, {"start": 1790.92, "end": 1796.3600000000001, "text": " And I think it might be an interesting resource even though the components are fairly obsolete"}, {"start": 1796.3600000000001, "end": 1800.04, "text": " And maybe it's not as informative as some of the other blocks here"}, {"start": 1800.04, "end": 1805.08, "text": " But I think it's a nice story and I think it's definitely worth checking it out"}, {"start": 1805.08, "end": 1810.28, "text": " Obviously I am present on YouTube so I consume a lot of YouTube content myself"}, {"start": 1810.28, "end": 1814.76, "text": " And basically I found a lot of videos and here I just opened two random videos"}, {"start": 1814.76, "end": 1819.96, "text": " But there are many other good ones that that I've used to just open up their PC part"}, {"start": 1819.96, "end": 1824.52, "text": " The component lists and just use that as a as a source of inspiration"}, {"start": 1824.52, "end": 1825.96, "text": " That's basically it"}, {"start": 1825.96, "end": 1831.0, "text": " Guys, that's pretty much it now that you have now that you've read all of the blogs"}, {"start": 1831.0, "end": 1833.88, "text": " All of the resources you've watched the video you collected the information"}, {"start": 1833.88, "end": 1837.16, "text": " Try and take some notes extract some tips that people gave you"}, {"start": 1837.16, "end": 1841.0, "text": " Now you want to basically start picking your own components"}, {"start": 1841.0, "end": 1845.0, "text": " And how you do that is you open up the PC part picker"}, {"start": 1845.64, "end": 1848.6000000000001, "text": " You then go and go to the PC builder here"}, {"start": 1848.6000000000001, "end": 1852.92, "text": " You hit the start new and you start building your own components"}, {"start": 1852.92, "end": 1856.1200000000001, "text": " And then you do your research so here's how I would do it"}, {"start": 1856.1200000000001, "end": 1858.92, "text": " So first so now we have the CPU"}, {"start": 1858.92, "end": 1863.0, "text": " So first thing I have done is I went through all of these blocks"}, {"start": 1863.0, "end": 1866.6000000000001, "text": " And I checked out the recommendations they're giving for the CPU"}, {"start": 1866.6000000000001, "end": 1869.24, "text": " And then I get some like rough intuition"}, {"start": 1869.24, "end": 1875.0800000000002, "text": " Okay and so what I ended up figuring out is that most of them are recommending like the AMD CPU"}, {"start": 1875.0800000000002, "end": 1877.5600000000002, "text": " So this guy here go with AMD"}, {"start": 1877.5600000000002, "end": 1880.52, "text": " Okay and then you go to the CPU"}, {"start": 1880.52, "end": 1887.8, "text": " So here go with AMD okay and then team is also I think team was also mentioning AMD somewhere here"}, {"start": 1887.8, "end": 1888.76, "text": " Let me just find it"}, {"start": 1890.84, "end": 1893.6399999999999, "text": " It doesn't really matter but like I think it was here"}, {"start": 1894.76, "end": 1899.8, "text": " AMD GPUs are not competitive but we are not talking about the 
GPUs"}, {"start": 1899.8, "end": 1901.8799999999999, "text": " We're talking about CPUs let me just find it"}, {"start": 1901.8799999999999, "end": 1907.56, "text": " Okay so here it says AMD CPUs are cheaper and better than Intel CPUs in general for deep learning"}, {"start": 1907.56, "end": 1912.04, "text": " So as you can tell everyone is recommending AMDs and it kind of fell for the trap"}, {"start": 1912.04, "end": 1917.6399999999999, "text": " As I previously mentioned currently on the high performance end it's better to go with Intel"}, {"start": 1917.6399999999999, "end": 1920.9199999999998, "text": " They're cheaper and they have better performance"}, {"start": 1920.9199999999998, "end": 1925.48, "text": " So because of that take everything with a grain of salt do your own research"}, {"start": 1925.48, "end": 1931.24, "text": " Okay so let's see let me open up the one out here and let me show you some additional details"}, {"start": 1931.24, "end": 1936.36, "text": " I ultimately ended up by doing this research I ended up with two options"}, {"start": 1936.36, "end": 1940.76, "text": " The AMD Ryzen 9 which is recommended on a lot of reviews"}, {"start": 1940.76, "end": 1945.8, "text": " You can see here if I open up a random this tech raider article"}, {"start": 1945.8, "end": 1950.36, "text": " You can see that the best AMD processor here is precisely the one I've chosen"}, {"start": 1950.36, "end": 1955.6399999999999, "text": " And then let me go back here and then the second option I really had was the Threadripper"}, {"start": 1955.6399999999999, "end": 1959.1599999999999, "text": " But then after doing some research and finding this source for example"}, {"start": 1959.8799999999999, "end": 1964.52, "text": " If you go here and if you read this chapter let me just find it"}, {"start": 1964.52, "end": 1968.04, "text": " Do you need a Threadripper or Ryzen CPU?"}, {"start": 1968.04, "end": 1973.0, "text": " And then they say before getting started with picking a TRX40 motherboard"}, {"start": 1973.0, "end": 1976.92, "text": " You have to make sure that you're going with Threadripper out of genuine need"}, {"start": 1976.92, "end": 1978.52, "text": " And not falling prey to marketing"}, {"start": 1979.32, "end": 1983.24, "text": " Now I'll assume you do need a high core count processor for your workloads"}, {"start": 1983.24, "end": 1987.16, "text": " So here is a list of reasons why you might choose the Threadripper instead of the Ryzen 9"}, {"start": 1988.04, "end": 1992.04, "text": " And the main details here are if you need a lot of PCI lanes"}, {"start": 1992.04, "end": 1997.08, "text": " So my CPU has 24 PCI lanes which is enough for two GPUs"}, {"start": 1997.08, "end": 2000.44, "text": " And then you can squeeze in a couple of NVMe's but that's it"}, {"start": 2000.44, "end": 2007.1599999999999, "text": " And here the Threadripper usually has like at least I think like around 100 or more PCI lanes"}, {"start": 2007.1599999999999, "end": 2013.6399999999999, "text": " It's crazy these are made to create triple or quad GPU setups if not more"}, {"start": 2014.52, "end": 2016.44, "text": " Like it's three plus GPUs"}, {"start": 2016.44, "end": 2020.04, "text": " So if you want to have three plus GPUs in the future on your workstation"}, {"start": 2020.04, "end": 2022.52, "text": " Then definitely go ahead with Threadripper"}, {"start": 2022.52, "end": 2029.32, "text": " But if not then just go with a 2X cheaper CPU which is already overpriced compared to Intel CPU"}, 
{"start": 2029.32, "end": 2032.28, "text": " So yeah you're saving up some money by doing smart research"}, {"start": 2033.48, "end": 2034.52, "text": " Let's continue on here"}, {"start": 2035.32, "end": 2037.08, "text": " Next up I've chosen the cooler"}, {"start": 2037.08, "end": 2039.3999999999999, "text": " So again I've done my research"}, {"start": 2039.3999999999999, "end": 2043.96, "text": " I found a lot of sources recommending this Arctic Liquid Freezer 2"}, {"start": 2043.96, "end": 2047.3999999999999, "text": " And also this Noctua NH-T15 air cooler"}, {"start": 2047.4, "end": 2050.12, "text": " So this is a liquid cooling system, this is an air cooling system"}, {"start": 2050.12, "end": 2052.28, "text": " Liquid is usually better, more effective"}, {"start": 2052.28, "end": 2059.0, "text": " But there is the danger of potential leak which could be catastrophic"}, {"start": 2059.0, "end": 2060.28, "text": " But usually does not happen"}, {"start": 2060.28, "end": 2062.36, "text": " So it's kind of a trade-off but yeah"}, {"start": 2062.36, "end": 2068.6, "text": " So I wanted to buy Noctua ultimately after having read some of the articles"}, {"start": 2069.4, "end": 2071.2400000000002, "text": " Showing that it's good enough"}, {"start": 2071.2400000000002, "end": 2073.96, "text": " But it turned out it's not compatible as we saw before"}, {"start": 2073.96, "end": 2076.2000000000003, "text": " It's not compatible with the case I've chosen"}, {"start": 2076.2, "end": 2079.3999999999996, "text": " And so I decided to go with the Arctic Liquid Freezer"}, {"start": 2079.3999999999996, "end": 2082.9199999999996, "text": " It's basically only maybe 10-20 pounds more expensive"}, {"start": 2082.9199999999996, "end": 2085.24, "text": " So it was a good decision in my opinion"}, {"start": 2085.8799999999997, "end": 2091.16, "text": " For the motherboard what I've done is I went and opened up this overclockers website"}, {"start": 2091.16, "end": 2093.64, "text": " And I sorted the AM5 motherboards"}, {"start": 2093.64, "end": 2098.7599999999998, "text": " So those are the motherboards that are compatible with the AMD Horizon 9 CPU"}, {"start": 2098.7599999999998, "end": 2102.04, "text": " And then I sorted them by price in a descending fashion"}, {"start": 2102.04, "end": 2106.04, "text": " And then I went from the cheapest one all the way to the more expensive ones"}, {"start": 2106.04, "end": 2109.72, "text": " And I was looking for the number of PCI lanes and I was looking for the clearance"}, {"start": 2109.72, "end": 2116.2799999999997, "text": " And the first motherboard that was good enough was this Art Pro or something"}, {"start": 2116.2799999999997, "end": 2116.84, "text": " Let me find it"}, {"start": 2116.84, "end": 2119.4, "text": " So this one, the Asus Art Pro"}, {"start": 2119.4, "end": 2120.7599999999998, "text": " So if I open up that one"}, {"start": 2120.7599999999998, "end": 2123.96, "text": " So let me just quickly copy paste that one"}, {"start": 2123.96, "end": 2125.56, "text": " And let's open up Amazon"}, {"start": 2125.56, "end": 2126.92, "text": " So let's open up Amazon here"}, {"start": 2126.92, "end": 2128.6, "text": " I'm gonna copy paste the name here"}, {"start": 2128.6, "end": 2129.8, "text": " Let's find it"}, {"start": 2129.8, "end": 2131.4, "text": " And here it is"}, {"start": 2131.4, "end": 2133.16, "text": " I think that's the one"}, {"start": 2133.16, "end": 2134.52, "text": " Yep, let me open it up"}, {"start": 2134.52, "end": 2139.96, 
"text": " And now let me open up the actual one I bought"}, {"start": 2139.96, "end": 2141.48, "text": " And that's the Crosshair Hero"}, {"start": 2141.48, "end": 2142.7599999999998, "text": " So that's the one I bought"}, {"start": 2143.32, "end": 2146.12, "text": " As you can see it's quite more expensive"}, {"start": 2146.12, "end": 2149.4, "text": " But yeah, so I'm gonna tell you why I decided to go with this one"}, {"start": 2149.4, "end": 2152.04, "text": " So I first went to buy this one"}, {"start": 2152.04, "end": 2153.16, "text": " I went to watch the video"}, {"start": 2153.16, "end": 2157.56, "text": " And somebody wrote that basically if you stick a single GPU"}, {"start": 2157.56, "end": 2159.16, "text": " That takes three slots"}, {"start": 2159.16, "end": 2162.36, "text": " So that means 30 or 40 series or even 20 series"}, {"start": 2162.36, "end": 2167.88, "text": " He said, that person said that basically it's almost touching the second PCI slot"}, {"start": 2167.88, "end": 2169.7200000000003, "text": " So these white bars you can see"}, {"start": 2169.7200000000003, "end": 2173.08, "text": " Those are the PCI slots where you plug in your GPU"}, {"start": 2173.08, "end": 2176.84, "text": " And he said it's literally almost touching that slot"}, {"start": 2176.84, "end": 2179.0, "text": " So that means if you were to pick that"}, {"start": 2179.0, "end": 2180.84, "text": " If you were to put the second GPU"}, {"start": 2180.84, "end": 2184.36, "text": " You would probably have thermal issues if not clearance issues"}, {"start": 2184.36, "end": 2189.32, "text": " And so because of that I knew I cannot buy this motherboard"}, {"start": 2189.32, "end": 2191.32, "text": " And so I just went and found this one"}, {"start": 2191.32, "end": 2192.52, "text": " And if you take a look"}, {"start": 2192.52, "end": 2193.88, "text": " So this is maybe a primitive approach"}, {"start": 2193.88, "end": 2194.84, "text": " But it actually works"}, {"start": 2195.48, "end": 2201.0, "text": " The reason it works is because both motherboards are the ATX format"}, {"start": 2201.0, "end": 2203.0, "text": " And if you just take a look, visual look"}, {"start": 2203.0, "end": 2206.04, "text": " Because manuals don't actually have the exact dimensions"}, {"start": 2206.04, "end": 2209.88, "text": " I even opened up, I was so crazy that I even opened up a manual"}, {"start": 2209.88, "end": 2214.6800000000003, "text": " And I couldn't find the centimeters or the actual physical dimensions"}, {"start": 2214.6800000000003, "end": 2218.2000000000003, "text": " So I had to eyeball through Amazon pictures"}, {"start": 2218.2, "end": 2222.68, "text": " And figure out that this one is probably good enough"}, {"start": 2222.68, "end": 2224.04, "text": " So if you take a look here"}, {"start": 2224.04, "end": 2226.68, "text": " You can see that this one has definitely more"}, {"start": 2227.56, "end": 2234.4399999999996, "text": " This one has definitely more space than the ART Pro 1"}, {"start": 2234.4399999999996, "end": 2236.6, "text": " And so I just kind of"}, {"start": 2236.6, "end": 2238.2, "text": " If we put them side by side"}, {"start": 2238.2, "end": 2239.16, "text": " Let me just do it like this"}, {"start": 2239.16, "end": 2241.0, "text": " It's gonna be much more clear"}, {"start": 2242.2, "end": 2243.56, "text": " I'm gonna do it like this"}, {"start": 2243.56, "end": 2244.6, "text": " So you can see it here"}, {"start": 2245.7999999999997, "end": 2247.24, "text": " And you can see it 
here"}, {"start": 2247.24, "end": 2252.68, "text": " So this one has enough space presumably for what I need"}, {"start": 2253.3999999999996, "end": 2255.24, "text": " Again, that's the research I've done"}, {"start": 2255.7999999999997, "end": 2257.0, "text": " That's the way I've done it"}, {"start": 2257.0, "end": 2259.08, "text": " Unfortunately, some of these new motherboards"}, {"start": 2259.08, "end": 2260.7599999999998, "text": " They don't have enough information online"}, {"start": 2260.7599999999998, "end": 2263.64, "text": " So I had to do my best to figure out stuff"}, {"start": 2263.64, "end": 2265.8799999999997, "text": " Okay, I'm gonna close up all of these"}, {"start": 2266.4399999999996, "end": 2268.9199999999996, "text": " And let's go back to the next component"}, {"start": 2269.4799999999996, "end": 2271.0, "text": " The next component is memory"}, {"start": 2271.0, "end": 2272.8399999999997, "text": " Again, I just went for the Kingston"}, {"start": 2272.8399999999997, "end": 2275.16, "text": " Because I previously had experience with Kingston"}, {"start": 2275.16, "end": 2279.64, "text": " And you can see a note here that I bought a much cheaper one on CCL online"}, {"start": 2279.64, "end": 2280.6, "text": " As opposed to buying"}, {"start": 2281.3999999999996, "end": 2286.6, "text": " I would pay 400 pounds on Amazon for the 5200 Hz version"}, {"start": 2286.6, "end": 2290.44, "text": " And so that was kind of a huge cost saving for me"}, {"start": 2290.44, "end": 2292.52, "text": " Okay, next up we have the storage"}, {"start": 2292.52, "end": 2293.64, "text": " So for the storage"}, {"start": 2294.68, "end": 2296.92, "text": " Again, done my research"}, {"start": 2296.92, "end": 2300.52, "text": " I saw that the Pro 990 is coming out soon"}, {"start": 2300.52, "end": 2302.12, "text": " But there are only pre-orders"}, {"start": 2302.12, "end": 2305.7999999999997, "text": " And because I don't really want to optimize that much"}, {"start": 2305.7999999999997, "end": 2309.72, "text": " I just decided to go with the 980 Pro"}, {"start": 2309.72, "end": 2314.12, "text": " Which is currently one of the best SSDs for consumers on the market"}, {"start": 2314.12, "end": 2316.7599999999998, "text": " And so I also saw that I don't need the heat sinks"}, {"start": 2316.7599999999998, "end": 2319.64, "text": " Because my motherboard already has the heat sinks"}, {"start": 2319.64, "end": 2322.6, "text": " So that would be A, cost ineffective"}, {"start": 2322.6, "end": 2325.4, "text": " And B, incompatible with my motherboard"}, {"start": 2325.4, "end": 2327.4, "text": " So take care of details such as that one"}, {"start": 2327.96, "end": 2329.64, "text": " So I decided to go with 2 TB"}, {"start": 2329.64, "end": 2332.2, "text": " Where I'll be keeping, this is kind of obsolete"}, {"start": 2332.2, "end": 2334.52, "text": " I'll be keeping operating system there"}, {"start": 2335.16, "end": 2338.92, "text": " Also my video material is going to be on that SSD"}, {"start": 2338.92, "end": 2340.3599999999997, "text": " And etc, etc"}, {"start": 2340.3599999999997, "end": 2344.6, "text": " And I'm going to keep bigger data sets on the 8 TB SSD"}, {"start": 2344.6, "end": 2346.44, "text": " You can go with the hard drive if you want"}, {"start": 2346.44, "end": 2349.72, "text": " I just wanted to have something that's"}, {"start": 2351.7999999999997, "end": 2355.0, "text": " I simply wanted to have the best component here"}, {"start": 2355.0, "end": 2357.3199999999997, 
"text": " And so I decided to go with SSD"}, {"start": 2357.32, "end": 2360.36, "text": " Also the blog, one of the blogs I mentioned before"}, {"start": 2360.36, "end": 2361.4, "text": " So let me just find it"}, {"start": 2362.1200000000003, "end": 2363.8, "text": " So I think it was this one"}, {"start": 2365.0, "end": 2367.96, "text": " Yeah, I think this was the place where I saw the recommendation"}, {"start": 2367.96, "end": 2371.4, "text": " For this particular SSD"}, {"start": 2371.4, "end": 2373.96, "text": " So you can see the Samsung"}, {"start": 2373.96, "end": 2375.6400000000003, "text": " That's pretty much the same one as I bought"}, {"start": 2375.6400000000003, "end": 2378.36, "text": " I mean, about the 870"}, {"start": 2378.36, "end": 2381.88, "text": " And he said I also have an array of these ones"}, {"start": 2381.88, "end": 2383.1600000000003, "text": " Blah, blah, blah"}, {"start": 2383.1600000000003, "end": 2384.6000000000004, "text": " These are quite pricey these days"}, {"start": 2384.6, "end": 2388.36, "text": " But I picked them up for under $500 per PC"}, {"start": 2388.36, "end": 2390.92, "text": " On a lucky Black Friday sale"}, {"start": 2390.92, "end": 2393.0, "text": " So I also bought them for less than $500"}, {"start": 2393.0, "end": 2395.4, "text": " But it was actually before Friday sale"}, {"start": 2395.4, "end": 2397.0, "text": " Because now we have a spike as you saw"}, {"start": 2397.0, "end": 2399.4, "text": " It's now $640 or something"}, {"start": 2399.4, "end": 2401.4, "text": " So that's kind of funny what people do"}, {"start": 2401.4, "end": 2403.16, "text": " And manipulate the prices"}, {"start": 2403.16, "end": 2407.3199999999997, "text": " You got to be careful and aware of that phenomenon"}, {"start": 2408.12, "end": 2409.4, "text": " Okay, let's go back here"}, {"start": 2410.04, "end": 2413.96, "text": " I also kind of saw this FireCuda 530"}, {"start": 2413.96, "end": 2415.48, "text": " Being a serious competitor"}, {"start": 2415.48, "end": 2419.0, "text": " Even outperforming this particular SSD"}, {"start": 2419.0, "end": 2421.48, "text": " And then I saw that it lacks some software tools"}, {"start": 2421.48, "end": 2423.56, "text": " Like Samsung Magician"}, {"start": 2423.56, "end": 2428.12, "text": " And so I just decided I don't want to go with this brand this time"}, {"start": 2428.12, "end": 2430.52, "text": " So I made a slight sacrifice"}, {"start": 2430.52, "end": 2431.8, "text": " Probably performance-wise"}, {"start": 2432.44, "end": 2434.36, "text": " And decided to go with Samsung"}, {"start": 2434.36, "end": 2438.6, "text": " Although Samsung apparently has better random access speeds"}, {"start": 2438.6, "end": 2439.64, "text": " Than the FireCuda"}, {"start": 2439.64, "end": 2440.68, "text": " So I don't know"}, {"start": 2440.68, "end": 2443.0, "text": " They are fairly comparable, I guess"}, {"start": 2443.0, "end": 2449.0, "text": " For the case, I initially wanted to go with the Corsair Carbide Series Air 540"}, {"start": 2449.0, "end": 2452.44, "text": " Most of the blogs I showed you recommend this one"}, {"start": 2452.44, "end": 2455.32, "text": " But it's not big enough for RTX 3090"}, {"start": 2455.88, "end": 2458.28, "text": " It is, but I think it's going to be super tight"}, {"start": 2458.28, "end": 2459.8, "text": " I saw some people make it"}, {"start": 2459.8, "end": 2461.16, "text": " But it's going to be super tight"}, {"start": 2461.16, "end": 2463.16, "text": " And then you might have 
some heating issues"}, {"start": 2463.16, "end": 2464.76, "text": " And I don't really want to go there"}, {"start": 2464.76, "end": 2466.84, "text": " And so I just wanted to go with the successor"}, {"start": 2466.84, "end": 2471.48, "text": " So I found this one, the Corsair Crystal Series RGB"}, {"start": 2471.48, "end": 2472.36, "text": " So this model"}, {"start": 2472.36, "end": 2474.36, "text": " And this one looks fairly nice"}, {"start": 2474.36, "end": 2476.92, "text": " Like it's a beautiful case"}, {"start": 2476.92, "end": 2478.36, "text": " So let me just open it up here"}, {"start": 2478.92, "end": 2480.36, "text": " So you can see it"}, {"start": 2480.36, "end": 2481.7200000000003, "text": " Let me find an image"}, {"start": 2482.92, "end": 2486.2000000000003, "text": " And you can see it's a very, very nice case"}, {"start": 2486.2000000000003, "end": 2486.92, "text": " Looks very nice"}, {"start": 2487.48, "end": 2490.04, "text": " But the thing is, as we saw before"}, {"start": 2490.04, "end": 2491.4, "text": " It has some heating issues"}, {"start": 2491.4, "end": 2495.32, "text": " And so I decided to go with the Lian Li case"}, {"start": 2495.32, "end": 2497.56, "text": " Which is also recommended all around the place"}, {"start": 2498.2000000000003, "end": 2499.48, "text": " It's a strong competitor"}, {"start": 2499.48, "end": 2500.92, "text": " It has better thermals"}, {"start": 2500.92, "end": 2503.4, "text": " And it's twice the price"}, {"start": 2503.4, "end": 2508.6, "text": " Sorry, it's twice cheaper than the above case"}, {"start": 2508.6, "end": 2511.88, "text": " So the only con was pretty much the way it looks"}, {"start": 2511.88, "end": 2514.92, "text": " But since I already have fancy liquid cooling"}, {"start": 2514.92, "end": 2516.92, "text": " With ARGB lighting and stuff"}, {"start": 2516.92, "end": 2518.04, "text": " I don't really care"}, {"start": 2518.04, "end": 2518.6800000000003, "text": " So yeah"}, {"start": 2519.56, "end": 2521.32, "text": " Finally, PSU"}, {"start": 2521.32, "end": 2524.84, "text": " I have done the calculation that Tim and others recommended"}, {"start": 2524.84, "end": 2527.64, "text": " According to that, I would need around 1200 watts"}, {"start": 2527.64, "end": 2533.3199999999997, "text": " But to have the safe margin against these current pools"}, {"start": 2533.3199999999997, "end": 2535.8799999999997, "text": " From the RTX 3090"}, {"start": 2535.8799999999997, "end": 2538.2799999999997, "text": " That can go up to 600 watts"}, {"start": 2538.2799999999997, "end": 2540.92, "text": " I just went with a much stronger PSU"}, {"start": 2540.92, "end": 2543.7999999999997, "text": " And I decided to go with 1600 watts myself"}, {"start": 2544.7599999999998, "end": 2546.3599999999997, "text": " And that's it"}, {"start": 2546.3599999999997, "end": 2548.52, "text": " The brand and the particular model"}, {"start": 2548.52, "end": 2551.24, "text": " Was recommended across many of these blogs"}, {"start": 2551.24, "end": 2553.4, "text": " So that's why I ended up using them"}, {"start": 2553.4, "end": 2557.4, "text": " So I think even this one is using this particular"}, {"start": 2557.4, "end": 2558.92, "text": " Yeah, it's using the same PSU"}, {"start": 2559.8, "end": 2561.08, "text": " So that's the OpenAI guy"}, {"start": 2561.08, "end": 2565.88, "text": " And also I think this guy is also using EVGA"}, {"start": 2565.88, "end": 2567.4, "text": " Yep, he's also using it"}, {"start": 2567.4, "end": 2572.28, "text": 
" And so apparently it's a very reputable brand"}, {"start": 2572.28, "end": 2575.4, "text": " And I just decided to go with it"}, {"start": 2575.4, "end": 2575.96, "text": " And that's it"}, {"start": 2575.96, "end": 2577.7200000000003, "text": " I'm not going to focus on the peripheries"}, {"start": 2577.7200000000003, "end": 2578.6800000000003, "text": " Some of the details"}, {"start": 2579.2400000000002, "end": 2580.92, "text": " So I wanted the mechanical keyboard"}, {"start": 2580.92, "end": 2583.88, "text": " And so I've chosen the Cherry MX Browns"}, {"start": 2583.88, "end": 2586.84, "text": " Which are apparently the best ones for typing"}, {"start": 2586.84, "end": 2588.92, "text": " Whereas this one is super silent"}, {"start": 2588.92, "end": 2590.36, "text": " It doesn't have the clicky sound"}, {"start": 2590.36, "end": 2591.6400000000003, "text": " And this one is too loud"}, {"start": 2591.6400000000003, "end": 2593.4, "text": " It has too much of the clicky sound"}, {"start": 2593.4, "end": 2595.32, "text": " And so again, I've done some research there"}, {"start": 2595.32, "end": 2596.44, "text": " And bought the components"}, {"start": 2597.0, "end": 2598.92, "text": " And guys, that's pretty much it"}, {"start": 2600.04, "end": 2601.32, "text": " Let me now quickly show you how"}, {"start": 2601.88, "end": 2605.32, "text": " Again, how you would go about picking your own setup"}, {"start": 2605.32, "end": 2609.08, "text": " So go and see what are the priciest components"}, {"start": 2609.08, "end": 2610.6800000000003, "text": " And those are GPUs and CPUs"}, {"start": 2610.6800000000003, "end": 2612.1200000000003, "text": " If you take a look at my setup here"}, {"start": 2612.1200000000003, "end": 2613.48, "text": " You can use my list, of course"}, {"start": 2613.48, "end": 2615.2400000000002, "text": " It's going to be open and public"}, {"start": 2615.24, "end": 2620.9199999999996, "text": " So you can see that the CPU and the GPUs are the priciest components"}, {"start": 2620.9199999999996, "end": 2622.52, "text": " And there is also the motherboard"}, {"start": 2622.52, "end": 2623.7999999999997, "text": " So depending on your budget"}, {"start": 2623.7999999999997, "end": 2625.64, "text": " What you want to do is you want to max out"}, {"start": 2625.64, "end": 2627.64, "text": " You want to take the best possible GPU"}, {"start": 2627.64, "end": 2629.3199999999997, "text": " If you're doing this for machine learning"}, {"start": 2629.3199999999997, "end": 2631.7999999999997, "text": " GPU is the name of the game"}, {"start": 2631.7999999999997, "end": 2633.8799999999997, "text": " Much more important than the CPU"}, {"start": 2633.8799999999997, "end": 2635.3999999999996, "text": " You just want to have the GPU"}, {"start": 2635.3999999999996, "end": 2639.8799999999997, "text": " And so after that, try and squeeze in the basically"}, {"start": 2640.6, "end": 2643.24, "text": " Some of the CPU and other components"}, {"start": 2643.24, "end": 2645.56, "text": " But you want to max out, to be honest"}, {"start": 2645.56, "end": 2647.9599999999996, "text": " I would max out the money for the GPU"}, {"start": 2647.9599999999996, "end": 2650.3599999999997, "text": " And then try and allocate the rest for everything else"}, {"start": 2650.9199999999996, "end": 2652.7599999999998, "text": " Obviously, I can't give you a particular"}, {"start": 2652.7599999999998, "end": 2655.16, "text": " Each one of you is going to have a different set of needs"}, {"start": 2655.16, 
"end": 2658.52, "text": " And so by just giving you the tools, the ideas, the way I think"}, {"start": 2658.52, "end": 2660.68, "text": " And do research, hopefully it's going to help you"}, {"start": 2661.56, "end": 2662.4399999999996, "text": " Do it for yourself"}, {"start": 2663.0, "end": 2666.3599999999997, "text": " Okay, so now just as a simple exercise"}, {"start": 2666.3599999999997, "end": 2667.9599999999996, "text": " Let me show you how you would go about"}, {"start": 2667.9599999999996, "end": 2669.64, "text": " Once you have the set of components"}, {"start": 2669.64, "end": 2670.8399999999997, "text": " You've chosen the components"}, {"start": 2670.8399999999997, "end": 2672.2, "text": " You now want to buy the components"}, {"start": 2672.2, "end": 2674.8399999999997, "text": " You obviously don't want to just buy from the first website"}, {"start": 2674.8399999999997, "end": 2676.2799999999997, "text": " You want to do some research"}, {"start": 2676.2799999999997, "end": 2678.7599999999998, "text": " And so if you go and do this"}, {"start": 2678.7599999999998, "end": 2682.2799999999997, "text": " If you just search for this one and just hit price"}, {"start": 2682.8399999999997, "end": 2685.72, "text": " You'll see a lot of websites offering this item"}, {"start": 2685.72, "end": 2688.3599999999997, "text": " And so if you were to just go and be dumb"}, {"start": 2688.3599999999997, "end": 2691.56, "text": " And you go to Amazon and just buy it without doing any research"}, {"start": 2691.56, "end": 2693.56, "text": " You would lose around 200 pounds"}, {"start": 2693.56, "end": 2695.3999999999996, "text": " And that's why this is a great example"}, {"start": 2695.3999999999996, "end": 2697.96, "text": " So I'm going to open up a couple of these websites"}, {"start": 2697.96, "end": 2702.36, "text": " Price Runner and this website Price Spy are aggregators"}, {"start": 2702.36, "end": 2705.56, "text": " So that means they're not selling the case themselves"}, {"start": 2705.56, "end": 2708.36, "text": " They're just pointing to other sites that have"}, {"start": 2708.92, "end": 2710.6, "text": " That are offering the case"}, {"start": 2711.56, "end": 2713.2400000000002, "text": " So let's see what I see"}, {"start": 2713.2400000000002, "end": 2715.56, "text": " So first let's open up the Amazon link"}, {"start": 2715.56, "end": 2716.6, "text": " So here you can see that"}, {"start": 2717.7200000000003, "end": 2718.2, "text": " Let me just"}, {"start": 2718.76, "end": 2720.84, "text": " I'll actually have to type it in manually here"}, {"start": 2720.84, "end": 2724.2, "text": " Because this one doesn't have any price associated with it"}, {"start": 2724.2, "end": 2727.7200000000003, "text": " So you can see here it's offered for 340 pounds"}, {"start": 2727.72, "end": 2730.12, "text": " And I bought it for around 100 something pounds"}, {"start": 2730.12, "end": 2732.2, "text": " And so you can see that this one here"}, {"start": 2732.2, "end": 2734.12, "text": " Which is the same case"}, {"start": 2734.8399999999997, "end": 2737.08, "text": " And has much more reviews"}, {"start": 2737.72, "end": 2740.12, "text": " Doesn't have the price which means it's out of stock"}, {"start": 2740.12, "end": 2743.7999999999997, "text": " And this one probably was offered for 150"}, {"start": 2743.7999999999997, "end": 2745.8799999999997, "text": " And now because it's out of stock"}, {"start": 2745.8799999999997, "end": 2748.3599999999997, "text": " Some competitors can basically 
offer much more"}, {"start": 2749.08, "end": 2751.64, "text": " Can boost up the prices"}, {"start": 2751.64, "end": 2755.3999999999996, "text": " And you don't want to buy it from here basically"}, {"start": 2755.4, "end": 2758.92, "text": " So if you go here you can find some other companies"}, {"start": 2758.92, "end": 2764.28, "text": " Like these ones that offer it for much lower prices"}, {"start": 2765.08, "end": 2767.88, "text": " But keep in mind you again don't want to just go to a random"}, {"start": 2768.52, "end": 2771.2400000000002, "text": " Company whose reputation you know nothing of"}, {"start": 2771.2400000000002, "end": 2772.2000000000003, "text": " And just buy it"}, {"start": 2772.2000000000003, "end": 2775.08, "text": " Because I've seen a lot of comments and I know people"}, {"start": 2775.08, "end": 2777.2400000000002, "text": " And like you can just go in Google"}, {"start": 2777.2400000000002, "end": 2780.12, "text": " And you'll find in various forums people complaining about"}, {"start": 2780.12, "end": 2781.96, "text": " Like either not getting the item"}, {"start": 2781.96, "end": 2782.92, "text": " The item is broken"}, {"start": 2782.92, "end": 2786.52, "text": " And like a lot of problems if you just go with a random company"}, {"start": 2786.52, "end": 2792.2000000000003, "text": " So you don't want to go and basically do some research"}, {"start": 2792.2000000000003, "end": 2793.0, "text": " On these companies"}, {"start": 2793.0, "end": 2797.2400000000002, "text": " And so what I've done in particular for this case"}, {"start": 2797.8, "end": 2800.28, "text": " I just Googled CCL online"}, {"start": 2801.16, "end": 2803.96, "text": " Because back then on the aggregators"}, {"start": 2803.96, "end": 2806.28, "text": " This was the best price the CCL online"}, {"start": 2806.28, "end": 2807.96, "text": " And so I just opened up the Reddit"}, {"start": 2807.96, "end": 2809.7200000000003, "text": " That's going to be your friend definitely"}, {"start": 2809.7200000000003, "end": 2812.36, "text": " And you open up some of these review sites"}, {"start": 2812.36, "end": 2815.4, "text": " And so on Reddit you can see that the comments are all legit"}, {"start": 2816.28, "end": 2817.96, "text": " Even though they are five years ago"}, {"start": 2817.96, "end": 2820.52, "text": " If something has changed"}, {"start": 2820.52, "end": 2824.1200000000003, "text": " Basically if the company became corrupt since five years ago"}, {"start": 2824.1200000000003, "end": 2827.8, "text": " Somebody would have commented on this thread pretty much"}, {"start": 2827.8, "end": 2830.28, "text": " But you see predominantly positive comments here"}, {"start": 2830.28, "end": 2831.6400000000003, "text": " I use them all the time"}, {"start": 2831.6400000000003, "end": 2832.6800000000003, "text": " They're a good company"}, {"start": 2832.6800000000003, "end": 2833.88, "text": " I've used them quite a lot"}, {"start": 2833.88, "end": 2834.92, "text": " Blah blah blah"}, {"start": 2834.92, "end": 2835.88, "text": " So yeah"}, {"start": 2835.88, "end": 2838.52, "text": " So it seems positive feedback on Reddit"}, {"start": 2838.52, "end": 2840.6, "text": " And you can see positive feedback here"}, {"start": 2840.6, "end": 2845.64, "text": " For 26,000 reviews 4.7 is quite a high grade"}, {"start": 2845.64, "end": 2847.96, "text": " Again keep in mind that these websites"}, {"start": 2847.96, "end": 2849.88, "text": " They are always competing with each other"}, {"start": 2849.88, 
"end": 2851.4, "text": " And it's not always fair play"}, {"start": 2851.4, "end": 2854.52, "text": " And so they sometimes just hire bots or whatnot"}, {"start": 2855.72, "end": 2860.7599999999998, "text": " Or actual people to put either five stars or one stars"}, {"start": 2860.7599999999998, "end": 2863.3199999999997, "text": " Depending on whether it's your company"}, {"start": 2863.3199999999997, "end": 2866.2799999999997, "text": " That you're doing that for or a competitor"}, {"start": 2866.2799999999997, "end": 2869.64, "text": " And so you can see here a lot of ones recently"}, {"start": 2869.64, "end": 2870.6, "text": " So a lot of ones"}, {"start": 2870.6, "end": 2874.44, "text": " And I suspect I highly suspect this has to do with Black Friday sales"}, {"start": 2874.44, "end": 2876.92, "text": " Because at that point of time"}, {"start": 2876.92, "end": 2879.24, "text": " It's in the best interest of the competitor to"}, {"start": 2880.04, "end": 2882.2, "text": " Well gain all of the market"}, {"start": 2882.2, "end": 2884.92, "text": " And just kind of reduce the credibility of the competitors"}, {"start": 2884.92, "end": 2887.48, "text": " Which is kind of sad practice but people do that"}, {"start": 2887.48, "end": 2890.44, "text": " And so according to these couple of reviews"}, {"start": 2890.44, "end": 2893.72, "text": " You would think this is like 3.2 or something"}, {"start": 2893.72, "end": 2894.52, "text": " It's 4.7"}, {"start": 2894.52, "end": 2895.8799999999997, "text": " So it's kind of hard"}, {"start": 2895.8799999999997, "end": 2897.64, "text": " But like go and open up a couple links"}, {"start": 2897.64, "end": 2899.4, "text": " And then you'll know that the company is legit"}, {"start": 2899.4, "end": 2901.88, "text": " And for me that was the case"}, {"start": 2901.88, "end": 2905.4, "text": " I had a very nice experience dealing with this particular company"}, {"start": 2905.4, "end": 2908.28, "text": " Okay guys that's it for the first video of this series"}, {"start": 2908.28, "end": 2913.4, "text": " I really tried my best to give you as much information as possible"}, {"start": 2913.4, "end": 2915.88, "text": " I'm going to link all of the resources I mentioned"}, {"start": 2915.88, "end": 2917.48, "text": " Down in the video description"}, {"start": 2917.48, "end": 2919.08, "text": " In case anything is missing"}, {"start": 2919.08, "end": 2920.52, "text": " Feel free to comment down below"}, {"start": 2921.08, "end": 2923.32, "text": " Also any feedback is appreciated"}, {"start": 2923.32, "end": 2925.2400000000002, "text": " Let me know if you have any other questions"}, {"start": 2925.2400000000002, "end": 2928.44, "text": " I'll try my best and reply to each one of them"}, {"start": 2928.44, "end": 2930.04, "text": " Over the next couple of weeks"}, {"start": 2930.04, "end": 2935.4, "text": " In the next video we'll see how to assemble the workstation"}, {"start": 2935.4, "end": 2938.92, "text": " And then in the final one we'll see how to install all of the software"}, {"start": 2938.92, "end": 2958.92, "text": " Until then, cheers!"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=mV7bhf6b2Hs
High Fidelity Neural Audio Compression | Paper & Code Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the "High Fidelity Neural Audio Compression" paper and code. With 6 kbps they already get the same audio quality (as measured by the subjective MUSHRA metric) as mp3 at 64 kbps! 10x compression rate! This is super important as streaming video+audio makes for ~82% of total internet traffic! Lots of ideas we've already seen in previous paper overview videos such as VQ-VAE, VQ-GAN, and AudioGen applied to the problem of audio compression. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2210.13438 ✅ Code: https://github.com/facebookresearch/encodec ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:37 Paper walk-through: high level overview 12:05 Residual Vector Quantization 18:05 Reducing the BW using arithmetic coding and transformers 20:05 Loss formulations and results 23:40 Code walk-through 26:00 EnCodec architecture 28:20 Residual Vector Quantizer module 32:55 Loading the audio signal 34:35 Compression - a forward pass through the encoder 38:00 Quantization forward pass 42:35 Efficiently packing the bits 45:25 Using LM to further compress audio 57:50 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #neural #audio #compression
What's cracking guys, Alex here. In this video I'll be covering a new paper called EnCodec: High Fidelity Neural Audio Compression from the Meta team, and I'm going to do a code walkthrough as well as a paper walkthrough; hopefully that combination is going to be useful for you guys. What they've shown is that they can preserve the audio quality after compressing and decompressing the file even though they're using neural networks, and in doing so save a lot of space: the bandwidth of signals encoded with this neural method is much lower compared to, say, the MP3 format, so the results are very cool. We're going to see the actual numbers a bit later, but for now go ahead and clone the repo, and take a look at the "installation for development" section in the readme if you want to follow along and step through the codebase. Okay, so there are a couple of samples they've shown here, and I'm going to play a few of them. They have two models. One is the 48 kilohertz stereophonic model that's non-streamable; we'll see what that means in a moment. Then they also have the 24 kilohertz monophonic one. Monophonic just means a single channel; stereophonic means, in this particular case, two channels, though it can be even more. The 24 kilohertz model is streamable, which means you can use it to encode in real time: as the raw samples arrive you can encode them and send the data along, and that's what streamable refers to in this context. So here is the ground truth signal, let me play it. ["I Am You"] ["In The Galaxy Globes"] Now if I play the EnCodec version at 6 kilobits per second: I don't know what the original bitrate of this audio is, but it must be much higher than the numbers you see here. So here we have six kilobits per second, and that's EnCodec, this method. Let's hear it. ["I Am You"] Okay, and finally here is the MP3, and the funniest thing, we'll see this in the table a bit later, is that this method at only six kilobits per second achieves the same subjective score as the MP3 at 64 kilobits per second. So let's hear that 64 kilobits per second MP3. ["I Am You"] ["In The Galaxy Globes"] Cool guys, so let's dig into the paper and see what the magic is about. Again, as I said, High Fidelity Neural Audio Compression by the Meta team, and there are a couple of papers this one builds on. One of those is SoundStream, an end-to-end neural audio codec, and the second one is AudioGen. That's a paper I covered a couple of videos ago, and it could be a very useful prerequisite because there I explained things like what mel spectrograms are, so do check that one out in case you struggle to understand what's happening in this video. Okay, having said that, let's dig into the paper. First of all, why is audio compression important? Well, this sentence clarifies it: recent studies suggest that streaming audio and video accounted for the majority of internet traffic in 2021, 82% according to a Cisco report. That's a huge number, and if we can compress our data better, it's a no-brainer that we can save a lot of internet bandwidth, energy, money, et cetera. So it's a hugely impactful thing to be working on.
Okay, so since we're talking about neural audio compression, what's the alternative? The alternative is the hand-engineered, signal-processing approach, and we'll see some of those baselines a bit later. As you can see here, audio codecs typically employ a carefully engineered pipeline, and I've highlighted the signal processing part because those components are not learned, not neural; they are handcrafted. So what's the problem, the potential pitfall, of the neural models? Here you can see that lossy neural compression faces a couple of problems. The first one is that the model has to represent a wide range of signals, so as not to overfit the training set or produce artifact-laden audio outside its comfort zone, meaning outside of its training distribution. This is something we're well familiar with, and the way they cope with it is by training on a diverse set of datasets and domains, such as general speech, noisy speech, music, et cetera. The other problem is that the model has to compress efficiently, both in compute time and in size. For the compute time they mention: "for the former, we limit ourselves to models that run in real time on a single CPU core." I think they tested all of these models on a 2019 MacBook, which is kind of cool. And "for the latter, we use residual vector quantization of the neural encoder floating point output." Some of these ideas are probably familiar to you if you've watched some of my videos like VQ-VAE or VQ-GAN or any of the derivatives of those models, but I'm still going to explain what they mean in a second. Okay, so let me quickly walk you through the high-level diagram and see how this thing is constructed. On the left side of the diagram we have the raw audio signal, and we also have the mel spectrogram representation. On one axis we have frequency, on the other we have time, and the color of a dot on this diagram tells you the amplitude of the sinusoid at that particular frequency at that point in time, if that makes sense. Again, do watch the AudioGen video in case you're confused, but it's an equivalent representation of the raw audio: one is the time-domain representation, the other is the frequency-domain representation of the audio signal. Okay, and then there are a couple of things they do. First they pass the signal through the encoder, which is built, as you can see here, out of Conv1D layers, and additionally there's a two-layer LSTM submodule which helps integrate temporal information. Given the particular architecture they're using, they encode 320 raw audio samples into a single vector that's 128 dimensions large, and they refer to that vector as a frame. (They also refer to chunks of the raw audio as frames, which is a bit inconvenient terminology-wise.) Okay, so once you have this representation at the output of the encoder, you quantize it. Each of these vector quantization modules is going to have an associated codebook table, and the first dimension of that table is going to be 1024.
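(Quick aside before we get to those codebook tables: if you want a feel for what such an encoder looks like in code, here is a very rough sketch I put together, my own simplification rather than the repo's actual implementation. The channel counts, kernel sizes, non-overlapping strided convolutions and the skip connection around the LSTM are all assumptions chosen to keep it short; the real encoder uses larger overlapping kernels, residual blocks and special padding. The point is just the 320x downsampling into 128-dimensional frames.)

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Rough sketch of a SoundStream/EnCodec-style encoder (not the repo's exact code)."""
    def __init__(self, channels=32, latent_dim=128, strides=(2, 4, 5, 8)):
        super().__init__()
        layers = [nn.Conv1d(1, channels, kernel_size=7, padding=3)]
        for s in strides:                      # 2 * 4 * 5 * 8 = 320x total downsampling
            layers += [nn.ELU(), nn.Conv1d(channels, 2 * channels, kernel_size=s, stride=s)]
            channels *= 2
        self.conv = nn.Sequential(*layers)
        self.lstm = nn.LSTM(channels, channels, num_layers=2, batch_first=True)
        self.proj = nn.Conv1d(channels, latent_dim, kernel_size=3, padding=1)

    def forward(self, x):                      # x: (batch, 1, samples)
        z = self.conv(x)                       # (batch, C, samples / 320)
        z = self.lstm(z.transpose(1, 2))[0].transpose(1, 2) + z   # skip connection around the LSTM
        return self.proj(z)                    # (batch, 128, frames)

enc = TinyEncoder()
print(enc(torch.randn(1, 1, 24000)).shape)     # one second at 24 kHz -> (1, 128, 75)
```

Now, back to those codebook tables.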
Each table holds 1024 vectors, and each of those vectors is 128-dimensional, the same as the output of the encoder; each of the vector quantization modules has its own independent code table like this one. What they do is take a particular vector at the output of the encoder, find the closest vector in the code table, and then just grab its index. When I say close, I think they're using the L2 norm: you find the closest codebook vector, and let's say this vector here is the closest one, then you just grab its index, which might be, I don't know, 238. Now you use 238 as the representation of that whole vector, and that's how you compress down the output representation. Additionally, as you can see, they have multiple of these modules, because they do something called residual quantization. They first snap the vector to the closest codebook entry, then subtract that entry from the original vector, and the residual is encoded by the next vector quantization module; rinse and repeat, and that's how they encode the whole thing. So in this particular drawing, instead of 128 dimensions you end up with four numbers allocated to that vector, and in the background we have four tables. Finally, and I'll ignore the transformer module for a second, you can decode those signals: given the numbers you grab the corresponding vectors from the associated tables, sum them up, and form the representation which you then pass through the decoder. The decoder is symmetric to the encoder, so I'll skip explaining it, and at the end we get the reconstructed audio signal. As for how the thing is trained, there are multiple losses, and they also introduce something called a balancer, which uses gradient information to scale the individual losses as needed. The first loss is Lt, which stands for the time-domain loss: an L1 between the raw waveform of the reconstructed audio and the input audio. Then they have Ls, which is a spectral-domain loss, basically an L1 norm between the reconstructed mel spectrogram and the input mel spectrogram. And finally they have Ld and Lg, the discriminator and generator losses. You can see there are several identical discriminators. What you do is convert the audio signal into the real and imaginary parts of the STFT, which is again a spectral representation, and it's strictly more informative than the mel spectrogram, because given this information you can easily recover the mel spectrogram: you just take the magnitude of those complex numbers, which gives you the amplitudes of the spectrogram. Anyway, you pass that through the discriminator's convolutional encoder and then train with the adversarial loss as usual.
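(To make the reconstruction terms concrete, here is a toy version in PyTorch, under my own simplified assumptions: a single mel scale and plain L1, whereas the paper combines several mel-spectrogram scales, mixes L1 with L2, and adds the adversarial and commitment terms on top. The specific n_fft, hop_length and n_mels values are just placeholders.)

```python
import torch
import torch.nn.functional as F
import torchaudio

# Toy version of the reconstruction losses (time-domain L1 + mel-spectrogram L1), 24 kHz audio assumed.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=24_000, n_fft=1024,
                                           hop_length=256, n_mels=64)

def reconstruction_loss(x_hat, x):
    l_t = F.l1_loss(x_hat, x)              # L_t: waveform L1
    l_s = F.l1_loss(mel(x_hat), mel(x))    # L_s: spectral L1 (single scale here)
    return l_t, l_s

x = torch.randn(1, 1, 24_000)              # one second of "audio"
x_hat = torch.randn(1, 1, 24_000)          # pretend reconstruction
print(reconstruction_loss(x_hat, x))
```

The adversarial part then works in the usual way: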
You give the discriminator the real signal and the generated signal and teach it to discriminate between the two, and you teach the generator, which in this case is the whole encode-quantize-decode pipeline, to produce signals that look like real ones. Finally, a couple more losses. We've covered the ones above; the remaining two are, first, the commitment loss, and if you've watched any of the VQ-VAE or VQ-GAN papers you're familiar with that one, and second, the loss that corresponds to this transformer here. What the transformer does is help us compress the latent representation to an even lower bitrate, at the cost of spending more compute and increasing the latency of your system. If we have enough time, I'm going to show you how the language model part fits into this pipeline once we start walking through the code. Okay guys, let me quickly show you the residual vector quantization logic. The SoundStream paper I mentioned before has a very neat pseudocode listing for residual vector quantization, so let's run through it. You can see here Y denotes the output of the encoder, and we have multiple quantizers, NQ of them; on this particular drawing NQ is four, because we have four of these components. So here is how it works: initially we initialize the residual variable with Y, the output of the encoder; then we quantize it and add the quantized vectors to the reconstructed signal; then we subtract those quantized vectors from the residual; and then we feed the new residual into the next quantizer. That's how it looks at a high level. I also want to walk you through the rationale, why we are doing this, and I think a couple of these paragraphs help elucidate the idea. "The vector quantizer (VQ) proposed in these papers in the context of VQ-VAE meets this requirement. This vector quantizer learns a codebook of N vectors", and N was 1024 in our particular case, "to encode each D-dimensional frame", D being 128, "of the output of the encoder, denoted enc(x)." The encoded audio has dimensionality S times D. If we have a one-second audio signal, S will be 75 because the hop is 320, and I'm using concrete numbers because it makes it easier to create a mental picture: a 24 kilohertz signal means one second is 24,000 samples, divide that by 320 and you end up with 75. So for one second of audio at the input, we end up with 75 frames at the output, and D is, as I said, 128. That is then mapped to a sequence of one-hot vectors of shape S times N, where N is again 1024, which can be represented using log2 of N bits; since N is 1024, that's 10 bits, which is how much you need to index into those codebook tables. So let's see a concrete example: let us consider a codec targeting a bitrate of 6 kilobits per second.
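(Before walking through that example, here is the residual quantization loop written out as a small, self-contained Python sketch. This is my paraphrase of the SoundStream pseudocode, not EnCodec's actual implementation; the codebooks here are random tensors purely for illustration, whereas the real ones are learned.)

```python
import torch

def quantize(residual, codebook):
    """Snap each frame to its nearest codebook entry (L2 distance)."""
    dists = torch.cdist(residual, codebook)          # (frames, codebook_size)
    indices = dists.argmin(dim=-1)                   # (frames,)
    return codebook[indices], indices

def residual_vector_quantize(y, codebooks):
    """y: (frames, dim) encoder output; codebooks: list of (1024, dim) tables."""
    residual = y
    y_hat = torch.zeros_like(y)
    all_indices = []
    for codebook in codebooks:                       # one pass per quantizer, n_q in total
        quantized, indices = quantize(residual, codebook)
        y_hat = y_hat + quantized                    # accumulate the reconstruction
        residual = residual - quantized              # the next quantizer sees what is left over
        all_indices.append(indices)
    return y_hat, torch.stack(all_indices, dim=-1)   # indices: (frames, n_q)

y = torch.randn(75, 128)                             # one second of encoder output
codebooks = [torch.randn(1024, 128) for _ in range(8)]
y_hat, codes = residual_vector_quantize(y, codebooks)
print(codes.shape)                                   # torch.Size([75, 8]) -> 8 indices per frame
```

With that loop in mind, back to the concrete example.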
When using a striding factor of 320, each second of audio at sampling rate 24 kilohertz is represented by 75 frames at the output of the encoder. This corresponds to 80 bits allocated to each frame, right? So if we have the target bandwidth of 6,000 and we have 75 frames, that means we can allocate up to 80 bits and we will still not basically go over the target bandwidth. So using a plain vector quantizer, this requires storing a codebook with two raised to the power of 80 vectors, which is obviously unfeasible. Okay. And then they say to address this issue, we adopt a residual vector quantizer, AKA multi-stage vector quantizer, which cascades NQ layers, so we have four in the drawing below, of VQ as follows. The unquantized input vector is passed through our first VQ and quantization residuals are computed. The residuals are then iteratively quantized by a sequence of additional NQ minus one vector quantizers as described in algorithm one, which was the one we saw here. The total rate budget is uniformly allocated to each VQ, i.e. we have R divided by NQ, so R was 80 bits and then we divide that by the number of quantizers and that's how many bits each of the quantizers has to basically allocate. For example, when using NQ eight, each quantizer uses a codebook of size 1,024 and that's the number we'll be using in this paper as well. Okay. So hopefully that makes sense, so that's why this approach was basically developed. It makes it easy to get to the target bandwidth without having to have huge, enormous basically codebooks. Okay, so let's now quickly go through some of the details. Basically, as I said, they have the non-streamable and the streamable architectures. The streamable ones, so they say here, thanks to this padding scheme, which is basically left padding, the causal padding, the model can output 320 samples, which corresponds to 13 milliseconds, given a 24 kilohertz sampling rate. As soon as the first 320 samples are received. So that means as soon as you have 320 samples on the input, you can already encode it into a frame and then you can quantize that frame and you can send that down the line and then you can just keep on doing that as the signal keeps on coming as you're receiving your input signal. You can keep on encoding it and sending it. So that means that's why it's streamable. Okay, and then they mentioned by selecting a variable number of residual steps at train time, a single model can be used to support multiple bandwidth targets. That's something we saw up there with this particular example. So basically, depending on how many of these quantizer models you add, you can achieve a particular bandwidth. So here we picked eight because we want to have 80 bits per frame and thus we want to have six kilobits per second. Okay. Okay, so I did mention the language model that's being used here. So implemented as a transformer to further reduce the bandwidth. So let's kind of read this paragraph and then later we probably will have enough time to just go through that and see it in code. So for a timestamp T, the discrete representation obtained at time T minus one is transformed into a continuous representation using learned embedding tables, one for each code book and which are then summed. So we'll see that in the code a bit later. But basically, so discrete representation are these numbers I mentioned. So for example, if this is a step T minus one, you can see that the discrete representation is basically these four numbers here. 
And when I say four numbers: even though each of them is initially stored as a float or a 64-bit integer, we know it doesn't have to be larger than 10 bits, because it only needs to index into a table of 1024 vectors. That's why these numbers are effectively just 10-bit numbers. Okay, let's go back here. Then for t equals zero, a special token is used instead. The output of the transformer is fed into Nq linear layers, with as many output channels as the cardinality of each codebook, which is 1024 in our examples, giving us the logits of the estimated distribution over each codebook for time t. So instead of just finding the closest vector in the codebook table and snapping our output vector to it, we form a probability distribution over the whole codebook, that is, over its indices, and then, combined with arithmetic coding, they show you can further compress the latent representation using this approach. Okay, let's continue. Here are the formulas for the losses. I won't spend too much time here, but you can see the L1 norm between the time-domain signals, then a combination of L1 and L2 norms between the mel spectrograms across various scales, and then the adversarial losses: here is the generator loss, and the discriminator loss is somewhere here as well, here it is. I'm not going to spend more time on this because I've covered it before; the AudioGen and VQ-GAN papers have pretty much the same losses as this paper, so I'll skip those details. Here is also the commitment loss, familiar to those of you who watched those videos and know those papers. Finally, the total loss is just a weighted sum of all of these losses, and to avoid having to do a search to find the right coefficients, they make the weighting adaptive: they compute the gradient of each loss with respect to the decoder output and use the norm of that gradient to automatically rescale the importance of each loss. You can see here they keep an exponential moving average of the gradient norm, and if that norm was historically large, the gradients from that particular loss get downweighted, because that large number sits in the denominator. That's the intuition behind the balancer. Okay. As I said, they use diverse domains to train these models; that's how they make sure they don't overfit to a particular domain: speech, noisy speech, music, and general audio. And here are the results. You can see how they compare against some of the baselines, such as Opus and EVS, which are signal-processing codecs, and Lyra from Google, which is a neural codec as well, and EnCodec outperforms all of those models at all of the target bandwidths.
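(If the balancer idea sounds abstract, here is a stripped-down sketch of the gradient-norm reweighting, written from my reading of the paper rather than from the repo's actual Balancer class, so treat the function name, the dict-based interface and the exact update rule as assumptions: each loss's gradient with respect to the decoder output is rescaled by its chosen weight divided by an EMA of its past gradient norms, so that no single loss dominates just because its gradients happen to be large.)

```python
import torch

def balanced_backward(losses, weights, x_hat, ema_norms, beta=0.999, eps=1e-8):
    """losses/weights/ema_norms: dicts keyed by loss name; x_hat: decoder output (requires grad)."""
    total_grad = torch.zeros_like(x_hat)
    for name, loss in losses.items():
        # gradient of this loss with respect to the decoder output only
        g = torch.autograd.grad(loss, x_hat, retain_graph=True)[0]
        ema_norms[name] = beta * ema_norms.get(name, g.norm()) + (1 - beta) * g.norm()
        # a historically "loud" loss gets divided by a large EMA norm, hence downweighted
        total_grad = total_grad + weights[name] * g / (ema_norms[name] + eps)
    x_hat.backward(total_grad)   # propagate the combined, rebalanced gradient into the model

# usage sketch (x_hat = model(x); l_t, l_s computed from x_hat; keep ema_norms across steps):
# balanced_backward({"l_t": l_t, "l_s": l_s}, {"l_t": 0.1, "l_s": 1.0}, x_hat, ema_norms)
```

Back to the results.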
Additionally, the entropy-coded variant, the one that uses the transformer to further compress the representation, outperforms the normal EnCodec without the transformer at the lower target bandwidths. Okay guys, let's see what else is interesting here. Yeah, this result is kind of cool, I think I mentioned it at the beginning of the video: MP3 at a target bandwidth of 64 kilobits per second has the same MUSHRA score, which is a subjective listening score, as EnCodec at six kilobits per second, so those two are on par. And you can see that already at 12 kilobits per second EnCodec heavily outperforms the MP3 format. If you include the entropy coding you can reduce the bitrate even further: at 4.2 kilobits per second you still get subjective metrics comparable to the 64 kilobits per second MP3. That's fairly cool. Okay, having seen the paper, let's now switch our focus to the actual code, and hopefully that's going to help us understand all of this much better. So I created a launch.json and passed, as you can see here, the arguments: I just have this test_24k.wav signal (I'm not sure how the format is pronounced, but "wave" sounds easy enough). test_24k.wav is already part of the codebase, it's checked in, so you just have to clone the repo and you'll have it out of the box. That's everything we have to set; later I'm going to add the LM flag to see how the language model fits into the whole picture. There is also the HQ flag, which selects the 48 kilohertz non-streamable model, but I'm not sure I'll have enough time, I don't want to go over one hour in this video. So let's start. I'm going to enable all of the breakpoints here and run this thing. Again, I'm going to focus only on the crucial parts. First we just make sure that the audio signal exists, and then you can see that, because the input is a .wav file and ECDC is the compressed format suffix, we're going to go into the compression branch. By understanding this compress function, the decompress function becomes fairly trivial to understand, so I'm going to focus on this particular branch of the code. Okay, so we form the output path by changing the suffix; it's going to be test_24k.ecdc. Then we check whether the output already exists, and because it does, I first have to delete that compressed file, and then it's going to work. This is going to happen to you as well, so I'm not going to edit it out; I'm just going to restart the program, and this time it won't fail. So let's skip to this output check, and this time it doesn't fail, so everything's fine there. Cool. Because HQ is set to false, we're going to deal with the 24 kilohertz model. Let's enter the encodec_model_24khz function. You can see here we just set some variables, and then we need to build the model, which consists, as you can see, of the encoder and the decoder. I'm going to step into the encoder just to briefly show you how everything looks.
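(By the way, if you only want to use the model rather than step through it, the high-level flow the CLI goes through is roughly the following. This is a sketch from memory of the repo's README-style usage, so the exact function names and signatures may differ slightly in the current version.)

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()   # 24 kHz, monophonic, streamable model
model.set_target_bandwidth(6.0)              # kbps; at 6 kbps this means 8 codebooks per frame

wav, sr = torchaudio.load("test_24k.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)       # list of (codes, scale) pairs, one per chunk
codes = torch.cat([codes for codes, _ in encoded_frames], dim=-1)
print(codes.shape)                           # roughly (1, 8, num_frames) at 6 kbps
```

OK, back to stepping through the encoder.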
So we are in the encoder now, and you can see just some variables here, and then we have this ratios list, which tells us how the temporal dimensionality is reduced as we propagate through the encoder. If you multiply all of those ratios together, you get a number that's probably familiar at this point: 320. That's exactly why we need 320 raw samples to get a single output frame; it follows from the architecture of the encoder. Okay, let's continue. You can see here we have these Conv1D layers with an S prefix, which just signals that they've modified them a bit; I'll show you how in a second. After that they keep adding these ResNet-style blocks, which consist of a bunch of Conv1D layers and some skip connections. Nothing fundamental there, it's just your usual encoder architecture. Let me quickly show you how the S-prefixed conv looks: depending on whether we're dealing with a causal or a non-causal model, they use slightly different padding, and there are some details like how they wrap the norm layer together with the conv layer. Just some idiosyncrasies, nothing crucial for understanding how this works, so let's treat it as a black box, and we have our encoder. The decoder is fairly similar, so I'm going to skip it completely. And let me zoom in a bit so you can see better; let me know whether the visibility is better than before I zoomed in. By the way, I'm a bit sick, so my throat might sound a bit different, but hopefully the video is going to be high quality nonetheless. So let's see what's happening here. You can see that this n_q is going to be the maximum number of quantizers in the bottleneck of the encoder-decoder, the number the model needs for the largest target bandwidth. The target bandwidths go from 1.5 all the way up to 24 kilobits per second, and when we index with minus one we fetch the largest one, 24. Because it's expressed in kilobits, we multiply by a thousand to get bits per second, so now we have the number of bits per second for the target bandwidth. Then we divide that number by this expression here. So what is it? We have the number of frames: when you divide the sample rate by the hop length you get the number of output frames per second, which is 75 in our case. Then we multiply that by 10, which is hard-coded because they assume the codebook has 1,024 slots and thus 10 bits are needed to index into it. So a single quantizer accounts for 75 times 10 bits, that is 750 bits per second of audio. When we divide the two numbers, we get the number of quantizers needed for the 24 kilobits per second target, and as you can see, that's 32. So we need 32 of these quantizers to be able to achieve that largest target bandwidth; the little snippet below spells out the same arithmetic.
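(A worked version of that calculation in plain Python, for all of the target bandwidths mentioned in the walkthrough; the numbers assumed are 24 kHz audio, a hop of 320 samples, and 1024-entry codebooks.)

```python
sample_rate = 24_000             # Hz
hop_length = 320                 # raw samples folded into one encoder frame
bits_per_codebook = 10           # log2(1024)

frames_per_second = sample_rate // hop_length                 # 75
bits_per_quantizer = frames_per_second * bits_per_codebook    # 750 bits/s per codebook

for target_kbps in (1.5, 3.0, 6.0, 12.0, 24.0):
    n_q = int(target_kbps * 1000 / bits_per_quantizer)
    print(f"{target_kbps:>4} kbps -> {n_q} quantizers")
# 1.5 kbps -> 2, 3 kbps -> 4, 6 kbps -> 8, 12 kbps -> 16, 24 kbps -> 32
```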
So again, just to clarify, since this might be a bit confusing: imagine one second of raw audio, which is 24,000 samples; we encode that and end up with 75 frames; for each frame one quantizer allocates 10 bits, so that's 75 times 10 bits per second; and if you want to figure out how many quantizers you need to reach 24 kilobits, that is 24,000 bits per second, you just divide 24,000 by that number, which is precisely what you saw here. Okay, the second thing I want to show you is the residual vector quantizer, although all the fun happens in the forward pass, not in the constructor; still, let's see how it's built. Again, just a bunch of variables here, and then there is this residual vector quantization module, so let's enter it. You can see we have 32 of these vector quantization modules wrapped in a list. Let's enter one of them and see what's going on. Here is the codebook dimensionality, 128, the number we've been using as a running example throughout the video. Because the dimensionality of the encoder output and of the codebook are the same, we don't need any projection, and that's why we end up with identity layers here. Finally, we have this Euclidean codebook; the fun is going to be in the forward pass, so I'll skip the actual initialization, but this is the implementation of the codebook I've been mentioning the whole time. It's called Euclidean because when we look for the closest vector in the code table we use the L2 norm, hence Euclidean. Okay, I'm going to hit F5 and exit this, disable all of the breakpoints, exit this one, and we're back here; I'll re-enable the breakpoints and continue. So we have the encoder, the decoder, the quantizer, and finally this EncodecModel, which just wraps up all of those components, nothing fancy; I can remove this breakpoint and continue. So we get to here, and that's it, our model is finally ready. Now we fetch the state dictionary with the pretrained weights, initialize the model, set it into eval mode, and continue. Okay guys, now we set the target bandwidth. By default it's six kilobits per second, and all that happens is that the internal bandwidth variable of the EncodecModel is set to six, nothing fancy there. Now let's load the audio file. It's a 24 kilohertz, 20 second audio file; if you look at the root of the repository you can find that signal there, and you can see it's a 20 second file. Let me show you the shape of that WAV tensor: it's 480,000, because if you multiply 20, the number of seconds, by 24,000 samples per second, you end up with 480,000 samples. Okay, let's continue. We now do the channel conversion, but it's going to be a no-op; I'll step in just to show you, but because the target number of channels is set to one, we take a mean across the channel dimension, and since we only have a single channel, nothing changes.
If we had stereo sound, we'd have two channels and we'd just take the mean across them to get a single output channel, because the target number of channels is one here. But in our case it's a no-op, so we can ignore it. Then we do resampling, but again, because both our signal and the target sampling rate are 24 kilohertz, nothing happens, another no-op. Okay, here is the fun part: we compress the loaded signal by passing it through our model, so let's see how that works. Let's skip over these details; what happens here is that we load the whole signal, so this loop is going to have just a single iteration. Worth mentioning: even though the paper describes a streamable setting, it's not implemented here, and neither is the training code; what you find in the repo is the inference path, plus some components such as the balancer and the multi-scale STFT discriminators. Also note that they use the term frame for chunks of the raw input audio as well, not only for the output of the encoder, so keep that in mind and don't get confused by the terminology. The frame here is simply the whole input signal, 480,000 samples, because we fetched every sample. Now the fun happens in the encode-frame function, so let's see how that works. We don't do any normalization, so we can skip that, and now we pass x, the input signal, through the encoder. Again, this is just a series of those Conv1D layers, and some of them have a stride greater than one, so our input signal is gradually reduced along the temporal dimension while the number of channels increases. What I'm going to do is enter the Conv1D layer and put a breakpoint with a particular condition, and this is why I love Visual Studio Code: you can edit the breakpoint and set a condition for when it should be hit. I want to hit it when the stride is bigger than one, because I want to see how the temporal axis is being reduced. Okay, let's hit F5. Here we are; if I print the shape, you can see we already have 32 channels, because the previous Conv1D layers, which didn't have a stride, still increased the number of channels. Now I'll put a regular breakpoint and hit F5 again, and if I do x.shape you can see we have a 2x smaller dimensionality along the temporal axis and 2x more channels. That's a regular pattern: CNNs do this all the time; as soon as you reduce the spatial dimensionality by two, you also increase the number of channels by two. Okay guys, now I'm going to remove all of the breakpoints and exit from this function, exit through all of these layers, and we're done. So this is the encoder; I'll hit F10, and here is the final dimensionality, 1500 by 128. That's the output of our encoder.
And it's 1,500 frames because, again, we have 20 seconds and each second is 75 frames, so we end up with 1,500. Hopefully that makes sense. Okay, now we have to quantize those vectors, so let's see how that works. First we have to figure out, for the desired target bandwidth, how many of the quantizers we actually have to use, and that's what this function does, so let's step into it. They call it "get bandwidth per quantizer"; does that make sense? Let's think about it. We have 75 here, which is the number of frames per second, and it's multiplied by 10 because bins is 1,024, so 10 bits per frame. That product is the number of bits per second for a single quantizer module, and then we divide by 1,000 because we want kilobits. So the bandwidth per quantizer is 0.75 kilobits per second. Now we take the target bandwidth, six kilobits per second, divide it by 0.75, and we find that we need eight of these quantizers to achieve that target bandwidth. And that's it; now we do the actual encoding with those eight quantizers. You can see we just take the first eight quantizers instead of the full list of 32; remember, we only deploy all 32 when the target bandwidth is 24 kilobits per second. Okay, let's enter here and see how the encode works. It's fairly simple: first this pre-process step just does some reshaping, nothing fancy, and we end up with x of shape 1500 by 128. Then the fun happens in the quantize function, so let me enter it. You can see this is the codebook, the codebook table I've been mentioning the whole time; its dimensionality is 1,024 by 128, because we have 1,024 vectors and each of them is 128-dimensional. What we do here is compute a distance matrix which tells us, for each of our output vectors, how close each of the codebook vectors is to that particular vector. Let me show you its dimensionality: it's 1,500 by 1,024, because for each row you have 1,024 columns telling you how far away each codebook vector is from that particular encoder output vector. And because of the minus sign in front of the distances, taking a max actually finds the closest vectors. So we end up, as you can see, with 1,500 indices; let me print the actual numbers. Those are the indices of the closest vectors in the code table. Hopefully that makes sense. Post-process then just does some reshaping, and we're done. Now we do the decoding, which is the second interesting part, so let's enter it. Decoding calls this dequantize method, and dequantize just takes the indices and grabs the vectors from the table that correspond to them, giving us the quantized vectors. Let me show you the dimensionality here: we have 1,500 by 128.
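(In case it helps, that quantize/dequantize pair boils down to something like the following, a simplified sketch of the Euclidean codebook logic rather than the repo's exact code, which also handles batching, reshaping and EMA updates of the codebook; the random codebook here is just a stand-in for the learned one.)

```python
import torch

embed = torch.randn(1024, 128)                   # stand-in for the learned Euclidean codebook

def quantize(x):                                 # x: (frames, 128) encoder output
    # negative squared L2 distance, written in the expanded form
    dist = -(x.pow(2).sum(1, keepdim=True)
             - 2 * x @ embed.t()
             + embed.pow(2).sum(1))              # (frames, 1024)
    return dist.max(dim=-1).indices              # max of the negative distance = closest entry

def dequantize(indices):
    return torch.nn.functional.embedding(indices, embed)   # look the vectors back up

x = torch.randn(1500, 128)                       # 20 seconds of encoder output
codes = quantize(x)                              # 1500 indices, one per frame per quantizer
print(codes.shape, dequantize(codes).shape)      # torch.Size([1500]) torch.Size([1500, 128])
```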
So that's when we snap our output vectors onto the closest vectors in the code table. That's what happened there. Okay, and then the projection is an identity, so we can ignore it, and that's pretty much it. And now that we have the quantized vectors, we compute the residual, which is the input minus the quantized vectors, and then we save the indices from the first quantizer, and we repeat the process, right? We now take the second codebook, we take the residuals, we just encode them, and rinse and repeat. So I'm gonna now disable all of the breakpoints, put a breakpoint here, hit F5, and re-enable the breakpoints, and that's it. So the out indices are now 1,500 by 8. That means for each of the frames, we now have eight numbers. That's how we encoded the output of the encoder. Okay, let's continue here, and that's pretty much it. We can now keep on exiting, and we have everything we need. We have the data, the compressed version of our audio file. We now write some metadata into the header of this ecdc file. I'm gonna ignore that completely, it's not that important for us. And then, because we're not using the LM, we are gonna construct this bitpacker. And so why the bitpacker? Again, the problem is these numbers are gonna be either ints or floats. Let me just check for a second, let me tell you the exact type here. So frames, again, frames is a list. Let me just see. Okay, it's a list, and then I think I just have to do something like this. Yeah, okay, so here are the frames. And if we check the dtype, let's see the data type, it's gonna be int64. So that means each of these eight numbers currently takes 64 bits, even though we only need 10 bits, because out of those 64 bits, only the lower 10 bits carry the valuable information. That's where this bitpacker comes into play, to handle the low-level details. I did debug it and understand how it works, but I think it would be redundant if I did that in this video, because you can do it at your own pace, and there is nothing fundamental there other than the low-level details of how you extract the 10 bits that are actually valuable and pack them into the output byte stream. Okay, and that's it. So here, what we do is just keep on pushing those values, and the values are coming from the frames. And so basically, that's it. We're just taking those numbers from this structure here and encoding them into this byte stream. And so let me hit F5 here. Let me remove this thing. And we are done. We can exit here, we can exit here, we just write the bytes, and that's it, guys. So let me just find the file. The file is here. As you can see, test24k.ecdc is created again, and that contains our compressed audio file. You can see that we started from 937 kilobytes, and we ended up with 14.7 kilobytes. Does that make sense? Let's see. So we have 14.7 times eight, which is how many kilobits we have, and that's roughly 20 times six, right? Which is 120, because we have a six kilobits per second target bandwidth and we have 20 seconds, so that's roughly it. So everything basically fits. Okay, so now I'm gonna go to the launch file. I'm gonna quickly add the LM, and I'm kinda afraid that I won't be able to do it justice, but I'm gonna try nonetheless, because it would require quite a lot of time to actually explain all of the details behind how and why it works.
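To give you a feel for what the bitpacker is doing, here is a minimal sketch of the idea, not the actual encodec BitPacker class: each index only needs 10 bits, so instead of writing int64s we accumulate bits in an integer buffer and flush full bytes to the stream.

import io

def pack_values(values, bits=10):
    out = io.BytesIO()
    buffer, n_bits = 0, 0
    for v in values:
        buffer |= (v & ((1 << bits) - 1)) << n_bits  # append the low `bits` bits of the value
        n_bits += bits
        while n_bits >= 8:                            # flush complete bytes to the stream
            out.write(bytes([buffer & 0xFF]))
            buffer >>= 8
            n_bits -= 8
    if n_bits > 0:                                    # flush whatever bits remain
        out.write(bytes([buffer & 0xFF]))
    return out.getvalue()

packed = pack_values([17, 1023, 238, 512])
print(len(packed))  # 5 bytes for 4 values, instead of 4 * 8 bytes as int64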
Okay, so now I'm gonna start this again, and let's just look at the diff compared to the last run. I'm not gonna focus on the things that are the same, obviously, I just wanna focus on the differences. So the only difference here, as you can see, is that we passed the LM argument to the compress function, so I'm gonna disable the breakpoints, enable this one, and hit F5. Whoops, again, I forgot to delete the file, god damn it. So I'm gonna delete this one and restart. You can add the force argument to avoid these errors, but I was lazy, so I kinda skipped it. So I'm gonna enable the breakpoints, let's now enter this, let's enter here, and let's see, so use LM is where the differences are. There's a couple of places where the differences show up. So I'm gonna get here first, and let's see what the get LM model function does. Basically we instantiate a language model here, so let's see how that thing is implemented. I'm gonna hit F5 here, and here we are. We have something called a Streaming Transformer Encoder, and we have, as you can see here, 32 of these embedding tables, and we also have 32 of these linear layers that project from 200, which is the dimensionality of the transformer, to 1,024, because that's the cardinality, that's what card stands for, the cardinality or the size of the codebook table. That's why we have 1,024 there, right? Okay, I have a breakpoint here already. Let me see whether there is something interesting to mention here, let's quickly walk through the transformer. Basically there are these Streaming Transformer Encoder Layer modules, and let me just see whether those are interesting. Those are basically just, as you can see here, stock PyTorch Transformer Encoder Layers with some differences. This might be too complex to explain in this shorter video, so in any case, I'm gonna skip this, I'm gonna skip the instantiation. So we just have a transformer, we create those embedding tables, we create the linear layers, and we continue here. So that's the LM, okay? Now what happens is we load the actual pre-trained weights, we put the model into eval mode, and we return the language model, okay. So now we do the encoding the same way as before, nothing changes here. We still end up with 1,500 times eight numbers. Let me show you that. We have, oops, again, I have to first do the zero, zero indexing, and then we end up with 1,500 by 8, so that's the same shape as before. And so now, let's see where the difference is. The first difference is here. We form this thing called an Arithmetic Coder, and it's fairly complex to explain, as you can see just by the description of how this thing works and some of the low-level details here of bit shifting, et cetera, et cetera, but I'm gonna get there in a second, so it would be wasteful for me to try and explain it like this. In any case, we have this Arithmetic Coder, and we're gonna see how it works a bit later, okay. So you can see here, we first initialize the input as all zeros, and k is equal to eight, which is the number of our quantizers. So before we get the actual eight numbers that are the encoded version of the first output vector of the encoder, we just assume we have all zeros, right? So again, let me just show you what I mean. Let me find the diagram here. So here we are.
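If it helps, here is a very simplified sketch of that language model's input and output layers. This is a hypothetical module, not the actual encodec Streaming Transformer Encoder: one embedding table and one linear head per codebook, with a small transformer in between.

import torch
import torch.nn as nn

class TinyCodeLM(nn.Module):
    def __init__(self, n_q=32, card=1024, dim=200):
        super().__init__()
        # +1 in the embedding size for the special all-zero "initial" token
        self.emb = nn.ModuleList([nn.Embedding(card + 1, dim) for _ in range(n_q)])
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2)
        self.linears = nn.ModuleList([nn.Linear(dim, card) for _ in range(n_q)])

    def forward(self, codes):               # codes: (B, n_q, T) integer indices
        B, K, T = codes.shape
        x = sum(self.emb[k](codes[:, k]) for k in range(K))   # (B, T, dim), embeddings summed over codebooks
        x = self.transformer(x)
        logits = torch.stack([lin(x) for lin in self.linears], dim=1)  # (B, n_q, T, card)
        return torch.softmax(logits, dim=-1)                           # one distribution per codebook

lm = TinyCodeLM(n_q=8)  # the real model has 32 tables; only the first 8 matter at 6 kbps
probs = lm(torch.zeros(1, 8, 1, dtype=torch.long))  # the special all-zero tokens for the first step
print(probs.shape)  # torch.Size([1, 8, 1, 1024])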
So again, before we have the actual encoding for this first vector, so we'll have like four numbers here, or eight numbers in our particular case, we just assume we have all zeros. We have all zeros, and that's what the program assumes here. Okay, that's what this thing is. So let's now see how this is gonna work. We pass the input, the states, and the offset into the language model, and then we just do a forward pass through the model to get the output probabilities. Okay, let's enter here, let me just show you one thing, and that's the logits part. So after we do a forward pass through the transformer, which I'm gonna skip, I'm gonna put a breakpoint here, disable everything, and enable this one. We do a forward pass through the transformer, and we end up with the output, okay. The output is gonna be of shape 200, because that's the inner dimensionality of the transformer. So we passed in all zeros, and how we convert those all zeros into a continuous representation is on this line here. As you can see, we loop eight times: we grab the number, we index into the corresponding embedding table to get a representation, and then we just sum those up. And I think that was also explained in the paper. So let me just show you where that thing is explained. I think somewhere here, so okay. Okay, for a time step T, the discrete representation obtained at time T minus one is transformed into a continuous representation using learned embedding tables, one for each codebook, which are summed. So that's basically this line. This sentence describes this piece of code here. So again, let's go back to the diagram, it's gonna be easier. We have these, let's say four numbers here, and each of these four numbers is gonna index into a particular table. So we'll have four tables associated with this transformer, we get their corresponding vectors, we sum them up, and then we do a forward pass through the transformer, and we end up with a 200-dimensional vector. That's what we do there, okay? That's the idea. And now we just map those using those linear layers I mentioned before, which have their output dimensionality equal to the cardinality of the codebook, which is 1,024. So if I just step over this, we end up with the logits. Here is the shape: it's 1,024 by 8. And that makes sense, because for each of the eight numbers that we are supposed to output for this particular time step, we have a distribution over 1,024 bins across the corresponding codebook table. I know this might be confusing, so you'll have to go through the code base at your own pace, but hopefully this helps you somewhat. Okay, so the probabilities are there. What we do next is this: we grab the next eight numbers and we add one. And one is added because the zeros, as you saw here, have a special meaning. They're just the initial values, so we can't actually get zero as an output, and that's why there is this plus one here. Anyways, a small detail there. So here is the complex part. We build this quantized CDF, which is a cumulative distribution function. And basically how it works is that the higher the probability at a particular bin, the bigger the range that particular bin is gonna occupy.
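Here is a rough sketch of what building such a quantized CDF could look like. This is not the exact encodec implementation (which also has to guarantee that every symbol gets a non-zero range), it just shows the key idea: higher-probability bins end up owning proportionally larger chunks of the integer range from 0 to 2**24 - 1.

import torch

def quantized_cdf(pdf, total_range_bits=24):
    total = (1 << total_range_bits) - 1
    cdf = torch.cumsum(pdf / pdf.sum(), dim=-1)   # cumulative probabilities in [0, 1]
    return torch.round(cdf * total).long()        # map them to integer range boundaries

pdf = torch.softmax(torch.randn(1024), dim=-1)    # a fake distribution over the codebook indices
cdf = quantized_cdf(pdf)
print(cdf[-1].item())                             # 16777215, i.e. 2**24 - 1
# The width cdf[i] - cdf[i - 1] is roughly proportional to pdf[i].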
So again, I told you this is gonna be fairly complex. It says here: turn the given PDF into a quantized CDF that splits this particular range into chunks of size roughly proportional to the PDF, right? And they use 24 bits here, so that means the range is gonna go up to two raised to the power of 24, minus one. Again, let me try and quickly draw this, and I think we are almost done with the video. Okay, so here is how it's gonna look. We get some distribution as the output, so let's say it looks like this. Okay, and what this is gonna be mapped onto is this range, which goes from zero to two raised to the power of 24, minus one, okay? And then we're trying to split this range. And so here, initially, because we have a peak here, that means this range is gonna be huge, okay? Let me just change the color here. This range is gonna be huge. And then where we have lower probability, whoops, let me change the color again, we'll have much smaller ranges. So these ranges here are gonna be small, but these ones are gonna be bigger, and they're gonna slowly diminish here. So that's the mapping that's happening in that particular function I showed you. And these are all the details behind the arithmetic encoding, so you'll have to dig into that a bit deeper yourself in case you're curious, but that's it. I'm gonna go back here. So we end up with the CDF, and let me just show you. The CDF shape here is 1,024 again, but you can see that we just have numbers from zero to two raised to the power of 24, minus one. And you can see that if the differences here are super big, that means the corresponding bin in the PDF had a higher probability. And that's the idea. And then we use this arithmetic coder, and we push the value. Whoops, here it is. But we also pass the CDF, because the arithmetic coder requires that range information in order to efficiently encode this. And this is gonna occupy fewer than 10 bits in expectation. So before, because we had eight numbers, the best we could do was eight times 10 bits, that's 80 bits. Here we can go below 80 bits, and that's the magic behind the arithmetic coder. So I'm gonna enter this just to show you the complexity, but I'm not gonna try and explain everything here. Let me hit F5 here. And you can see here how the range information is used to finally get the actual encoding for this particular number and this CDF. So you can see here, we are packing bit by bit. This packer is the same packer as the one we had before, but instead of pushing 10 bits into the stream, it pushes just a single bit into the stream. But in expectation, this while loop is gonna have fewer than 10 iterations, i.e. fewer than 10 bits per symbol. And that's why we're gonna be able to compress this even more than without the arithmetic coding. So I'm gonna exit here, I'm gonna exit here, and I'm gonna remove all of the breakpoints, put a breakpoint here, hit F5, and that's it. So you can see it was a bit slower, and that's the trade-off you have to take if you wanna have an even higher compression ratio. And that's it, guys. Everything else remains the same. We just write the output bytes, and that's it. So hopefully this combination of the paper and the code walkthrough was helpful. Do let me know whether you have some questions.
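As a quick sanity check of that "fewer than 10 bits per symbol" claim, here is a tiny example with illustrative numbers only: the expected code length under arithmetic coding is roughly the entropy of the predicted distribution, which drops below the flat 10 bits as soon as the language model is even mildly confident about the next index.

import torch

flat_bits = torch.log2(torch.tensor(1024.0))   # 10 bits per index with no model at all

pdf = torch.full((1024,), 0.5 / 1023)   # most of the codebook is considered unlikely...
pdf[0] = 0.5                            # ...but the model puts half the mass on one index
entropy = -(pdf * torch.log2(pdf)).sum()        # expected bits per symbol under arithmetic coding

print(flat_bits.item(), entropy.item())  # 10.0 vs. roughly 6.0 bits per symbol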
I know I had to kinda rush through some of the details behind it, but hopefully this gives you some intuition, some understanding. Do leave some comments down below. Any feedback you might have for me will be super useful. And guys, until next time, bye bye.
[{"start": 0.0, "end": 2.58, "text": " What's cracking guys, Alex here."}, {"start": 2.58, "end": 5.58, "text": " In this video I'll be covering a new paper called"}, {"start": 5.58, "end": 8.5, "text": " in codec high fidelity neural audio compression"}, {"start": 8.5, "end": 11.58, "text": " from the meta team and basically I'm gonna do"}, {"start": 11.58, "end": 14.9, "text": " a code walkthrough as well as the paper walkthrough"}, {"start": 14.9, "end": 17.400000000000002, "text": " and hopefully that combination is gonna be useful"}, {"start": 17.400000000000002, "end": 18.32, "text": " for you guys."}, {"start": 18.32, "end": 22.18, "text": " So basically what they've done is they've shown"}, {"start": 22.18, "end": 25.38, "text": " how they can preserve the quality after compressing"}, {"start": 25.38, "end": 27.580000000000002, "text": " and decompressing the audio file even though"}, {"start": 27.58, "end": 31.06, "text": " they are using neural networks and they can do so"}, {"start": 31.06, "end": 34.78, "text": " and basically save up a lot of space so like the bandwidth"}, {"start": 34.78, "end": 38.739999999999995, "text": " of signals encoded using this neural method"}, {"start": 38.739999999999995, "end": 43.18, "text": " are much lower compared to say MP3 format, et cetera,"}, {"start": 43.18, "end": 44.78, "text": " et cetera, so it's very cool results."}, {"start": 44.78, "end": 47.3, "text": " We're gonna see the actual numbers a bit later"}, {"start": 47.3, "end": 50.599999999999994, "text": " but for now go ahead and basically clone this repo"}, {"start": 50.599999999999994, "end": 53.54, "text": " and you should probably take a look at the installation"}, {"start": 53.54, "end": 55.72, "text": " for development section in the readme"}, {"start": 55.72, "end": 60.06, "text": " if you wanna follow along and do a stepping through"}, {"start": 60.06, "end": 61.22, "text": " the code base."}, {"start": 61.22, "end": 65.46, "text": " Okay, so there is a couple of samples they've shown here."}, {"start": 65.46, "end": 67.74, "text": " I'm gonna play a couple of them."}, {"start": 67.74, "end": 69.42, "text": " So they have two models basically."}, {"start": 69.42, "end": 73.18, "text": " One is the 48 kilohertz stereophonic model"}, {"start": 73.18, "end": 74.82, "text": " that's non-streamable."}, {"start": 74.82, "end": 76.82, "text": " We'll see what it means in a moment"}, {"start": 76.82, "end": 80.3, "text": " and then they also have the 24 kilohertz monophonic."}, {"start": 80.3, "end": 81.98, "text": " So monophonic just means a single channel."}, {"start": 81.98, "end": 84.7, "text": " Stereophonic means in this particular case two channels"}, {"start": 84.7, "end": 86.46000000000001, "text": " but it can be even more channels"}, {"start": 86.46000000000001, "end": 88.38000000000001, "text": " and basically this one is streamable"}, {"start": 88.38000000000001, "end": 92.46000000000001, "text": " which means you can use it to real time encode,"}, {"start": 92.46000000000001, "end": 94.38, "text": " send the data as it's,"}, {"start": 94.38, "end": 96.58, "text": " so as the raw samples are arriving"}, {"start": 96.58, "end": 98.58, "text": " you can encode it real time"}, {"start": 98.58, "end": 101.58, "text": " and basically that's what streamable refers to"}, {"start": 101.58, "end": 102.66, "text": " in this context."}, {"start": 102.66, "end": 104.98, "text": " So here is the ground truth signal."}, {"start": 104.98, "end": 105.82000000000001, "text": " Let me 
play it."}, {"start": 106.78, "end": 108.62, "text": " [\"I Am You\"]"}, {"start": 108.62, "end": 111.62, "text": " [\"In The Galaxy Globes\"]"}, {"start": 113.06, "end": 114.98, "text": " So now if I play the encodex six,"}, {"start": 114.98, "end": 117.98, "text": " so that's like, I don't know what's the original size"}, {"start": 117.98, "end": 121.78, "text": " of the audio per second but it must be much higher"}, {"start": 121.78, "end": 123.26, "text": " than the numbers you can see here."}, {"start": 123.26, "end": 126.06, "text": " So here we have like six kilobits per second"}, {"start": 126.06, "end": 127.06, "text": " and that's the encodex."}, {"start": 127.06, "end": 127.9, "text": " So that's this method."}, {"start": 127.9, "end": 128.72, "text": " Let's hear it."}, {"start": 130.72, "end": 133.22, "text": " [\"I Am You\"]"}, {"start": 135.34, "end": 137.3, "text": " Okay and finally here is the MP3"}, {"start": 137.3, "end": 139.46, "text": " and the funniest thing is we'll see this in the table"}, {"start": 139.46, "end": 141.52, "text": " a bit later is that this particular method"}, {"start": 141.52, "end": 143.9, "text": " with only six kilobits per second"}, {"start": 143.9, "end": 147.60000000000002, "text": " achieves the same subjective score as the MP3."}, {"start": 147.60000000000002, "end": 150.02, "text": " So let's hear the 64 kilobits per second."}, {"start": 150.02, "end": 152.60000000000002, "text": " [\"I Am You\"]"}, {"start": 152.60000000000002, "end": 156.56, "text": " [\"In The Galaxy Globes\"]"}, {"start": 156.56, "end": 158.52, "text": " Cool guys, so let's dig into the paper"}, {"start": 158.52, "end": 160.84, "text": " and let's see what the magic is about."}, {"start": 160.84, "end": 162.06, "text": " So again, as I said,"}, {"start": 162.06, "end": 165.5, "text": " high fidelity neural audio compression by the meta team"}, {"start": 165.5, "end": 169.54, "text": " and there's a couple of papers that this paper"}, {"start": 169.54, "end": 170.66, "text": " is building off of."}, {"start": 170.66, "end": 172.64, "text": " So one of those is the sound stream"}, {"start": 172.64, "end": 174.74, "text": " and end to end neural audio codec"}, {"start": 174.74, "end": 176.76, "text": " and the second one is audiogen."}, {"start": 176.76, "end": 180.2, "text": " So that's kind of a paper I covered a couple of videos ago"}, {"start": 180.2, "end": 182.6, "text": " and that could be a very useful prerequisite"}, {"start": 182.6, "end": 184.0, "text": " because I've covered some things like"}, {"start": 184.0, "end": 187.08, "text": " what are male spectrograms, et cetera, et cetera."}, {"start": 187.08, "end": 190.3, "text": " So maybe do check out that one in case you struggle"}, {"start": 190.3, "end": 192.9, "text": " understanding what's happening in this video."}, {"start": 192.9, "end": 195.26, "text": " Okay, having said that, let's dig into the paper."}, {"start": 195.26, "end": 198.51999999999998, "text": " So first of all, why is audio compression important?"}, {"start": 198.51999999999998, "end": 200.51999999999998, "text": " Well, this sentence kind of clarifies it."}, {"start": 200.51999999999998, "end": 203.76, "text": " So recent studies suggest that streaming audio and video"}, {"start": 203.76, "end": 206.35999999999999, "text": " have accounted for the majority of the internet traffic"}, {"start": 206.35999999999999, "end": 210.56, "text": " in 2021, 82% according to Cisco report."}, {"start": 210.56, "end": 211.79999999999998, "text": " 
Okay, that's kind of a huge number"}, {"start": 211.79999999999998, "end": 215.1, "text": " and basically if we can manage to better compress our data,"}, {"start": 215.1, "end": 217.23999999999998, "text": " then it's a no brainer that we can save a lot"}, {"start": 217.23999999999998, "end": 218.84, "text": " like the bandwidth of the internet,"}, {"start": 218.84, "end": 220.68, "text": " that's the energy, money, et cetera, et cetera."}, {"start": 220.68, "end": 225.18, "text": " So like that's a hugely impactful thing to be working on."}, {"start": 225.18, "end": 228.44, "text": " Okay, so then because we mentioned"}, {"start": 228.44, "end": 230.48000000000002, "text": " we have a neural audio compression,"}, {"start": 230.48000000000002, "end": 231.36, "text": " so what's the alternative?"}, {"start": 231.36, "end": 233.92000000000002, "text": " Well, the alternative are these signal processing"}, {"start": 233.92000000000002, "end": 236.32, "text": " hand engineered methods"}, {"start": 236.32, "end": 238.4, "text": " and we'll see some of the baselines a bit later,"}, {"start": 238.4, "end": 240.36, "text": " but basically as you can see here,"}, {"start": 240.36, "end": 241.60000000000002, "text": " the audio codecs typically employ"}, {"start": 241.60000000000002, "end": 243.24, "text": " a carefully engineered pipeline"}, {"start": 243.24, "end": 246.28, "text": " and like I've kind of highlighted the signal processing part"}, {"start": 246.28, "end": 248.60000000000002, "text": " because they are not learned, they're not neural,"}, {"start": 248.60000000000002, "end": 250.70000000000002, "text": " instead they are handcrafted."}, {"start": 250.70000000000002, "end": 252.48000000000002, "text": " Okay, so what's the problem?"}, {"start": 252.48000000000002, "end": 254.48000000000002, "text": " What's the potential pitfall of these neural models?"}, {"start": 254.48, "end": 258.32, "text": " So here you can see that the loss in neural compression"}, {"start": 258.32, "end": 259.84, "text": " has a couple of problems."}, {"start": 259.84, "end": 261.84, "text": " So the first one is the model has to represent"}, {"start": 261.84, "end": 263.48, "text": " a wide range of signals,"}, {"start": 263.48, "end": 265.48, "text": " such as not to overfit the training set"}, {"start": 265.48, "end": 270.0, "text": " or produce artifact laden audio outside its comfort zone."}, {"start": 270.0, "end": 273.09999999999997, "text": " So basically outside of its training data set."}, {"start": 273.09999999999997, "end": 275.24, "text": " And this is something we are well familiar with"}, {"start": 275.24, "end": 276.96, "text": " and the way they cope with this problem"}, {"start": 276.96, "end": 279.74, "text": " is that they have a diverse set of data sets"}, {"start": 279.74, "end": 283.36, "text": " and diverse domains such as general speech"}, {"start": 283.36, "end": 285.76, "text": " and noise speech and music, et cetera, et cetera."}, {"start": 285.76, "end": 288.32, "text": " Okay, and then the other problem is that"}, {"start": 288.32, "end": 290.6, "text": " instead of compressing efficiently,"}, {"start": 290.6, "end": 293.42, "text": " both in compute time and in size."}, {"start": 293.42, "end": 295.2, "text": " So for the compute time, they mentioned here,"}, {"start": 295.2, "end": 296.76, "text": " so for the former, we'll limit ourselves"}, {"start": 296.76, "end": 299.72, "text": " to models that run in real time on a single CPU core."}, {"start": 299.72, "end": 301.98, 
"text": " So I think they've tested all of these models"}, {"start": 301.98, "end": 305.76, "text": " on a MacBook 2019 or something and that's kind of cool."}, {"start": 305.76, "end": 306.88, "text": " And then for the latter,"}, {"start": 306.88, "end": 309.40000000000003, "text": " we use residual vector quantization"}, {"start": 309.40000000000003, "end": 313.08000000000004, "text": " of the neural encoder floating point output."}, {"start": 313.08, "end": 318.0, "text": " Okay, so some of these ideas are probably familiar to you"}, {"start": 318.0, "end": 321.64, "text": " in case you watched some of my videos like VQVAE or VQGAN"}, {"start": 321.64, "end": 325.08, "text": " or any of the derivatives of those models."}, {"start": 325.08, "end": 327.08, "text": " But I'm gonna still cover and explain"}, {"start": 327.08, "end": 329.44, "text": " what those mean in a second."}, {"start": 329.44, "end": 331.64, "text": " Okay, so let me quickly walk you through"}, {"start": 331.64, "end": 334.28, "text": " the high level diagram here."}, {"start": 334.28, "end": 336.03999999999996, "text": " So let's see how this thing is constructed."}, {"start": 336.03999999999996, "end": 338.14, "text": " So on the left side of the diagram here,"}, {"start": 338.14, "end": 340.12, "text": " we have a raw audio signal"}, {"start": 340.12, "end": 343.2, "text": " and we also have the male spectrogram representation."}, {"start": 343.2, "end": 346.68, "text": " So that's basically on one axis here, we have frequency."}, {"start": 346.68, "end": 351.42, "text": " On the other axis, we have basically time axis."}, {"start": 351.42, "end": 355.4, "text": " And then the color of the dot on this diagram"}, {"start": 355.4, "end": 358.92, "text": " tells you the amplitude of that particular sinusoid"}, {"start": 358.92, "end": 361.48, "text": " of that particular frequency at that point of time,"}, {"start": 361.48, "end": 362.36, "text": " if that makes sense."}, {"start": 362.36, "end": 364.72, "text": " So again, do watch the audio gen in case you're confused,"}, {"start": 364.72, "end": 368.08, "text": " but that's basically equivalent representation"}, {"start": 368.08, "end": 369.12, "text": " to the raw audio."}, {"start": 369.12, "end": 370.56, "text": " So this is like the time representation,"}, {"start": 370.56, "end": 372.6, "text": " this is the frequency space representation"}, {"start": 372.6, "end": 373.92, "text": " of the audio signal."}, {"start": 373.92, "end": 375.88, "text": " Okay, and then there's a couple of things they do."}, {"start": 375.88, "end": 380.2, "text": " So first they kind of pass the signal through the encoder,"}, {"start": 380.2, "end": 381.74, "text": " which is built up as you can see here"}, {"start": 381.74, "end": 384.48, "text": " out of these COM1D layers."}, {"start": 384.48, "end": 386.12, "text": " And they additionally have the LSTM,"}, {"start": 386.12, "end": 389.44, "text": " a two layer LSTM sub module here,"}, {"start": 389.44, "end": 392.24, "text": " which helps to integrate the temporal information"}, {"start": 392.24, "end": 395.4, "text": " because otherwise we'd be basically,"}, {"start": 395.4, "end": 399.96, "text": " so like given the particular architecture they are using,"}, {"start": 399.96, "end": 404.96, "text": " they basically encode 320 samples from the raw audio"}, {"start": 405.02, "end": 410.02, "text": " into a single vector here, that's 128 dimensions large."}, {"start": 411.47999999999996, "end": 416.47999999999996, "text": " 
Okay, and they refer to this thing here as the frame."}, {"start": 417.44, "end": 419.23999999999995, "text": " So this thing is referred to as frame,"}, {"start": 419.23999999999995, "end": 421.47999999999996, "text": " but it also referred to some of these things here as frames,"}, {"start": 421.47999999999996, "end": 422.47999999999996, "text": " it's kind of inconvenient."}, {"start": 422.48, "end": 425.64000000000004, "text": " Okay, so once you have this representation"}, {"start": 425.64000000000004, "end": 427.3, "text": " at the output of the encoder,"}, {"start": 427.3, "end": 429.32, "text": " what you do is you quantize it."}, {"start": 429.32, "end": 431.76, "text": " So that means each of these vector quantizations"}, {"start": 431.76, "end": 434.68, "text": " is gonna have associated code book table."}, {"start": 434.68, "end": 437.16, "text": " So basically that means they're gonna have something"}, {"start": 437.16, "end": 440.52000000000004, "text": " like this, so like they have a table"}, {"start": 440.52000000000004, "end": 445.40000000000003, "text": " and the dimensions of this table are gonna be 1024."}, {"start": 446.36, "end": 450.40000000000003, "text": " So they have 1024 vectors and each of these vectors are,"}, {"start": 450.4, "end": 453.15999999999997, "text": " so the dimension here is 128,"}, {"start": 453.15999999999997, "end": 456.08, "text": " the same as the output of the encoder, okay?"}, {"start": 456.08, "end": 458.12, "text": " And each of these, so 128,"}, {"start": 458.12, "end": 460.4, "text": " and each of these vector quantization modules"}, {"start": 460.4, "end": 463.0, "text": " is gonna have a particular independent code table"}, {"start": 463.0, "end": 463.84, "text": " like this one."}, {"start": 463.84, "end": 467.79999999999995, "text": " And so what these do then is you take this particular vector"}, {"start": 467.79999999999995, "end": 469.28, "text": " at the output of the encoder"}, {"start": 469.28, "end": 472.64, "text": " and you find the closest vector in this code table"}, {"start": 472.64, "end": 475.59999999999997, "text": " and you then just grab its index, okay?"}, {"start": 475.59999999999997, "end": 479.03999999999996, "text": " So when I say close, I think they're using the L2 norm."}, {"start": 479.04, "end": 481.36, "text": " So basically just find the closest vector here"}, {"start": 481.36, "end": 484.0, "text": " and then let's say for example, this vector here"}, {"start": 485.0, "end": 487.0, "text": " is the one that corresponds,"}, {"start": 487.0, "end": 488.68, "text": " which is the closest one to this one,"}, {"start": 488.68, "end": 490.52000000000004, "text": " then you just grab its index here."}, {"start": 490.52000000000004, "end": 494.44, "text": " So that might be like, I don't know, like 238."}, {"start": 494.44, "end": 497.44, "text": " And now you use 238 as a representation"}, {"start": 497.44, "end": 498.28000000000003, "text": " of this whole vector."}, {"start": 498.28000000000003, "end": 499.76, "text": " And that's how you kind of compress"}, {"start": 499.76, "end": 502.48, "text": " down the output representation here."}, {"start": 502.48, "end": 504.76, "text": " And then additionally they have,"}, {"start": 504.76, "end": 506.6, "text": " as you can see, they have multiple of these"}, {"start": 506.6, "end": 508.24, "text": " and that's because they do something"}, {"start": 508.24, "end": 509.84000000000003, "text": " called residual quantization."}, {"start": 509.84000000000003, "end": 
512.64, "text": " So they first snap, so snap,"}, {"start": 512.64, "end": 515.88, "text": " they find basically, they find the closest vector"}, {"start": 515.88, "end": 517.2, "text": " and then they take this vector"}, {"start": 517.2, "end": 519.92, "text": " and they subtract it from the original vector"}, {"start": 519.92, "end": 521.88, "text": " and then the residue is encoded"}, {"start": 521.88, "end": 525.84, "text": " by the next vector quantization module."}, {"start": 525.84, "end": 527.08, "text": " And then rinse and repeat"}, {"start": 527.08, "end": 529.8, "text": " and that's how they basically encode this thing."}, {"start": 531.24, "end": 533.2, "text": " So you end up in this particular drawing here,"}, {"start": 533.2, "end": 535.5600000000001, "text": " you would end up instead of 128 dimensions,"}, {"start": 535.5600000000001, "end": 537.0600000000001, "text": " you would end up with four dimensions"}, {"start": 537.06, "end": 539.56, "text": " because you have four numbers being allocated"}, {"start": 539.56, "end": 540.8, "text": " to that particular vector."}, {"start": 540.8, "end": 542.1199999999999, "text": " And we have, again, in the background,"}, {"start": 542.1199999999999, "end": 543.8, "text": " we have four tables, okay?"}, {"start": 543.8, "end": 545.7199999999999, "text": " And then finally, so I'm gonna ignore"}, {"start": 545.7199999999999, "end": 549.4, "text": " the transformer module here for a second,"}, {"start": 549.4, "end": 552.92, "text": " but like finally, you can basically decode those signals"}, {"start": 552.92, "end": 554.4799999999999, "text": " because you have the numbers"}, {"start": 554.4799999999999, "end": 556.0799999999999, "text": " and then you can just grab the vector"}, {"start": 556.0799999999999, "end": 558.2399999999999, "text": " from the associated tables, you can sum them up"}, {"start": 558.2399999999999, "end": 560.0, "text": " and you form your representation"}, {"start": 560.0, "end": 561.8399999999999, "text": " which you then pass through the decoder, okay?"}, {"start": 561.8399999999999, "end": 564.06, "text": " And decoder is just basically you can see"}, {"start": 564.06, "end": 566.0799999999999, "text": " symmetric to what we had in the encoder."}, {"start": 566.08, "end": 567.96, "text": " So I'm gonna skip explaining it."}, {"start": 567.96, "end": 571.08, "text": " And finally, we end up with the reconstructed audio"}, {"start": 572.0400000000001, "end": 573.84, "text": " signal here, okay?"}, {"start": 573.84, "end": 577.24, "text": " So how the thing is trained is there are multiple losses"}, {"start": 577.24, "end": 579.64, "text": " and they also introduced this thing called a balancer"}, {"start": 579.64, "end": 582.12, "text": " which uses the gradient information to balance"}, {"start": 582.12, "end": 586.44, "text": " and scale up all of the losses as needed."}, {"start": 586.44, "end": 590.5200000000001, "text": " So first loss here is the LT"}, {"start": 590.5200000000001, "end": 592.8000000000001, "text": " which stands for time domain loss."}, {"start": 592.8000000000001, "end": 596.0, "text": " They basically do L1 between the raw representations"}, {"start": 596.0, "end": 601.0, "text": " of the reconstructed audio here and the input audio here."}, {"start": 601.0, "end": 602.52, "text": " And then they have the LS"}, {"start": 602.52, "end": 607.52, "text": " which is the basically a spectral domain L1 basically norm"}, {"start": 608.84, "end": 612.16, "text": " between the 
reconstructed mouse spectrogram"}, {"start": 612.16, "end": 613.92, "text": " and the input mouse spectrogram."}, {"start": 613.92, "end": 615.28, "text": " So that's what they do here."}, {"start": 615.28, "end": 617.28, "text": " And then finally, they have the LD and LG"}, {"start": 617.28, "end": 621.52, "text": " which are the generator and discriminator losses."}, {"start": 621.52, "end": 623.46, "text": " And you can see here, they're like a couple"}, {"start": 623.46, "end": 625.56, "text": " of identical discriminators."}, {"start": 625.56, "end": 628.8, "text": " And so what you do here is you just convert the audio signal"}, {"start": 628.8, "end": 631.68, "text": " into the real and imaginary STFD"}, {"start": 631.68, "end": 634.4, "text": " which are just again, the spectral representations."}, {"start": 634.4, "end": 636.1999999999999, "text": " And these are strictly more informative."}, {"start": 636.1999999999999, "end": 637.88, "text": " So let me just change the color here."}, {"start": 637.88, "end": 639.5999999999999, "text": " So these are strictly more informative"}, {"start": 639.5999999999999, "end": 641.6999999999999, "text": " than the mouse spectrogram down here"}, {"start": 641.6999999999999, "end": 646.06, "text": " because given this information, you can easily get"}, {"start": 646.06, "end": 647.92, "text": " the mouse spectrogram information"}, {"start": 647.92, "end": 650.16, "text": " because you just have to find the amplitude"}, {"start": 650.16, "end": 652.68, "text": " of these imaginary complex numbers."}, {"start": 652.68, "end": 654.8399999999999, "text": " And that's what gives you the amplitude"}, {"start": 654.84, "end": 656.2800000000001, "text": " of the mouse spectrogram."}, {"start": 656.2800000000001, "end": 659.9200000000001, "text": " Anyways, you kind of pass that through this encoder"}, {"start": 659.9200000000001, "end": 661.96, "text": " to the discriminator."}, {"start": 661.96, "end": 665.4, "text": " And then you train using the adversarial loss as usual."}, {"start": 665.4, "end": 668.72, "text": " So you just basically contrast, you give it the real signal"}, {"start": 668.72, "end": 670.52, "text": " you give it the generated signal"}, {"start": 670.52, "end": 672.88, "text": " and you teach the discriminator to discriminate"}, {"start": 672.88, "end": 675.1600000000001, "text": " between those two and you teach the generator"}, {"start": 675.1600000000001, "end": 677.9200000000001, "text": " which is in this particular case, this whole pipeline here"}, {"start": 677.9200000000001, "end": 682.36, "text": " how to generate signals that look as real ones."}, {"start": 682.36, "end": 683.84, "text": " Finally, a couple more losses here."}, {"start": 683.84, "end": 686.76, "text": " So we covered these ones and the final two ones"}, {"start": 686.76, "end": 687.6, "text": " are this one."}, {"start": 687.6, "end": 688.72, "text": " So this is the commitment loss."}, {"start": 688.72, "end": 692.52, "text": " And again, if you watched any of the VQVA, VQGAN papers"}, {"start": 692.52, "end": 694.08, "text": " you're familiar with this one."}, {"start": 694.08, "end": 697.48, "text": " And finally, there is the loss that corresponds"}, {"start": 697.48, "end": 698.98, "text": " to this transformer here."}, {"start": 698.98, "end": 701.8000000000001, "text": " So what the transformer does is it helps us"}, {"start": 701.8000000000001, "end": 706.14, "text": " to basically compress this latent representation"}, {"start": 706.14, "end": 
708.96, "text": " to even lower dimensionality at the cost"}, {"start": 708.96, "end": 711.88, "text": " of having to spend more compute"}, {"start": 711.88, "end": 716.36, "text": " and basically increase the latency of your system."}, {"start": 716.36, "end": 719.64, "text": " But I'm gonna kind of go through"}, {"start": 719.64, "end": 721.32, "text": " probably if you have enough time, I'm gonna go through"}, {"start": 721.32, "end": 723.5, "text": " and show you how the language model part"}, {"start": 723.5, "end": 724.68, "text": " fits into this pipeline."}, {"start": 724.68, "end": 726.96, "text": " Once we start walking through the code."}, {"start": 726.96, "end": 729.76, "text": " Okay guys, let me quickly show you this"}, {"start": 729.76, "end": 732.28, "text": " residual vector quantization logic."}, {"start": 732.28, "end": 736.12, "text": " There is from this before I mentioned this sound stream"}, {"start": 736.12, "end": 740.04, "text": " paper and they have a very neat pseudocode"}, {"start": 740.04, "end": 742.92, "text": " for this residual vector quantization."}, {"start": 742.92, "end": 744.5999999999999, "text": " So let's kind of run through it."}, {"start": 744.5999999999999, "end": 746.8399999999999, "text": " So you can see here, Y is denoted as the output"}, {"start": 746.8399999999999, "end": 747.92, "text": " of the encoder."}, {"start": 747.92, "end": 749.76, "text": " We have multiple of these quantizers"}, {"start": 749.76, "end": 751.16, "text": " and there is NQ of them."}, {"start": 751.16, "end": 754.3, "text": " So on this particular drawing, NQ is four"}, {"start": 754.3, "end": 758.64, "text": " because we have one, two, three, four"}, {"start": 758.64, "end": 759.7199999999999, "text": " of these components here."}, {"start": 759.7199999999999, "end": 761.12, "text": " Okay."}, {"start": 761.12, "end": 762.92, "text": " So let's go back here."}, {"start": 762.92, "end": 765.24, "text": " And so here is how it works."}, {"start": 765.24, "end": 768.4, "text": " So initially we initialize the residual variable"}, {"start": 768.4, "end": 770.6, "text": " with Y, which is the output from the encoder."}, {"start": 770.6, "end": 773.0, "text": " And then you can see here, we quantize it"}, {"start": 773.84, "end": 776.56, "text": " and then, and basically then we add that"}, {"start": 776.56, "end": 778.28, "text": " to the reconstructed signal."}, {"start": 778.28, "end": 782.64, "text": " Then we subtract that quantize, those quantize vectors"}, {"start": 782.64, "end": 784.56, "text": " from the initial residual vectors."}, {"start": 784.56, "end": 786.92, "text": " And then we use those residual ones"}, {"start": 786.92, "end": 789.62, "text": " and we feed them into the next quantizer."}, {"start": 789.62, "end": 792.62, "text": " And that's basically how it looks like on a high level."}, {"start": 793.52, "end": 796.98, "text": " I also wanna walk you through the rationale."}, {"start": 796.98, "end": 798.1999999999999, "text": " Why are we doing this?"}, {"start": 798.2, "end": 800.44, "text": " So I think a couple of these paragraphs"}, {"start": 800.44, "end": 803.24, "text": " might help elucidate the idea behind it."}, {"start": 803.24, "end": 804.08, "text": " Okay."}, {"start": 804.08, "end": 806.76, "text": " So the vector quantizer VQ proposed in these papers here"}, {"start": 806.76, "end": 809.6, "text": " in the context of VQ VAE meets this requirement."}, {"start": 809.6, "end": 812.84, "text": " This vector quantizer learns a code 
book of N vectors."}, {"start": 812.84, "end": 815.72, "text": " So N was 1024 in our particular case."}, {"start": 815.72, "end": 819.2, "text": " To encode each D dimensional frame, and D was 128"}, {"start": 820.32, "end": 824.1, "text": " of the output of the encoder here denoted as NQ of X."}, {"start": 824.1, "end": 826.84, "text": " The encoded audio, as you can see the dimensionality"}, {"start": 826.84, "end": 828.22, "text": " being S times D."}, {"start": 828.22, "end": 833.22, "text": " So if we have a one second audio signal, S will be 75"}, {"start": 833.8000000000001, "end": 836.9200000000001, "text": " because we have the hop being 320."}, {"start": 836.9200000000001, "end": 838.6, "text": " And I'm just using particular numbers"}, {"start": 838.6, "end": 840.86, "text": " because it's easier to kind of create a mental picture."}, {"start": 840.86, "end": 844.94, "text": " But basically imagine you have a 24 kilohertz signal"}, {"start": 844.94, "end": 849.44, "text": " and then one second means you'll have 24,000 samples."}, {"start": 849.44, "end": 853.0, "text": " If we divide that by 320, we end up with 75."}, {"start": 853.0, "end": 856.6800000000001, "text": " So that means that for a signal of audio,"}, {"start": 856.68, "end": 859.0799999999999, "text": " for one second of the audio and the input"}, {"start": 859.0799999999999, "end": 862.1999999999999, "text": " we'll end up with 75 frames at the output."}, {"start": 862.1999999999999, "end": 865.9599999999999, "text": " And then D is just the, as I said, 128."}, {"start": 865.9599999999999, "end": 866.8, "text": " Okay."}, {"start": 866.8, "end": 868.04, "text": " So that's then mapped to a sequence"}, {"start": 868.04, "end": 870.2399999999999, "text": " of one hot vectors of shape SN."}, {"start": 870.2399999999999, "end": 873.7199999999999, "text": " So N is again 1024, which can be represented"}, {"start": 873.7199999999999, "end": 876.0999999999999, "text": " using this many bits."}, {"start": 876.0999999999999, "end": 876.9399999999999, "text": " Okay."}, {"start": 876.9399999999999, "end": 881.12, "text": " So because this is 1024, this is going to be 10."}, {"start": 881.12, "end": 884.76, "text": " So once you do the log, you'll end up with 10 bits here"}, {"start": 884.76, "end": 886.72, "text": " as that's how much you need to index"}, {"start": 886.72, "end": 888.4, "text": " into those codebook tables."}, {"start": 888.4, "end": 889.22, "text": " Okay."}, {"start": 889.22, "end": 890.72, "text": " So let's see a concrete example here."}, {"start": 890.72, "end": 893.68, "text": " So let us consider a codec targeting a bit rate"}, {"start": 893.68, "end": 897.22, "text": " of six K, so kilo bits per second."}, {"start": 897.22, "end": 900.48, "text": " When using a striding factor of 320,"}, {"start": 900.48, "end": 903.76, "text": " each second of audio at sampling rate 24 kilohertz"}, {"start": 903.76, "end": 907.04, "text": " is represented by 75 frames at the output of the encoder."}, {"start": 907.04, "end": 911.16, "text": " This corresponds to 80 bits allocated to each frame, right?"}, {"start": 911.16, "end": 915.04, "text": " So if we have the target bandwidth of 6,000"}, {"start": 915.04, "end": 917.76, "text": " and we have 75 frames, that means we can allocate"}, {"start": 917.76, "end": 922.52, "text": " up to 80 bits and we will still not basically"}, {"start": 922.52, "end": 925.14, "text": " go over the target bandwidth."}, {"start": 926.0, "end": 928.0, "text": " So using a plain 
vector quantizer,"}, {"start": 928.0, "end": 930.64, "text": " this requires storing a codebook with two raised"}, {"start": 930.64, "end": 934.56, "text": " to the power of 80 vectors, which is obviously unfeasible."}, {"start": 934.56, "end": 935.4, "text": " Okay."}, {"start": 935.4, "end": 936.64, "text": " And then they say to address this issue,"}, {"start": 936.64, "end": 940.0799999999999, "text": " we adopt a residual vector quantizer,"}, {"start": 940.08, "end": 942.2, "text": " AKA multi-stage vector quantizer,"}, {"start": 942.2, "end": 944.96, "text": " which cascades NQ layers, so we have four"}, {"start": 944.96, "end": 948.36, "text": " in the drawing below, of VQ as follows."}, {"start": 948.36, "end": 951.9200000000001, "text": " The unquantized input vector is passed through our first VQ"}, {"start": 951.9200000000001, "end": 954.2800000000001, "text": " and quantization residuals are computed."}, {"start": 954.2800000000001, "end": 956.32, "text": " The residuals are then iteratively quantized"}, {"start": 956.32, "end": 959.76, "text": " by a sequence of additional NQ minus one vector quantizers"}, {"start": 959.76, "end": 961.32, "text": " as described in algorithm one,"}, {"start": 961.32, "end": 962.88, "text": " which was the one we saw here."}, {"start": 964.12, "end": 968.0, "text": " The total rate budget is uniformly allocated to each VQ,"}, {"start": 968.0, "end": 972.44, "text": " i.e. we have R divided by NQ, so R was 80 bits"}, {"start": 972.44, "end": 974.44, "text": " and then we divide that by the number of quantizers"}, {"start": 974.44, "end": 977.0, "text": " and that's how many bits each of the quantizers"}, {"start": 977.0, "end": 980.04, "text": " has to basically allocate."}, {"start": 980.04, "end": 981.96, "text": " For example, when using NQ eight,"}, {"start": 981.96, "end": 985.68, "text": " each quantizer uses a codebook of size 1,024"}, {"start": 985.68, "end": 987.68, "text": " and that's the number we'll be using in this paper as well."}, {"start": 987.68, "end": 988.74, "text": " Okay."}, {"start": 988.74, "end": 991.72, "text": " So hopefully that makes sense, so that's why"}, {"start": 991.72, "end": 994.1, "text": " this approach was basically developed."}, {"start": 994.1, "end": 999.1, "text": " It makes it easy to get to the target bandwidth"}, {"start": 999.22, "end": 1000.94, "text": " without having to have huge,"}, {"start": 1000.94, "end": 1004.5400000000001, "text": " enormous basically codebooks."}, {"start": 1004.5400000000001, "end": 1009.5400000000001, "text": " Okay, so let's now quickly go through some of the details."}, {"start": 1010.4200000000001, "end": 1013.34, "text": " Basically, as I said, they have the non-streamable"}, {"start": 1013.34, "end": 1015.78, "text": " and the streamable architectures."}, {"start": 1015.78, "end": 1017.86, "text": " The streamable ones, so they say here,"}, {"start": 1017.86, "end": 1019.78, "text": " thanks to this padding scheme,"}, {"start": 1019.78, "end": 1022.62, "text": " which is basically left padding, the causal padding,"}, {"start": 1022.62, "end": 1024.72, "text": " the model can output 320 samples,"}, {"start": 1024.72, "end": 1027.18, "text": " which corresponds to 13 milliseconds,"}, {"start": 1027.18, "end": 1031.18, "text": " given a 24 kilohertz sampling rate."}, {"start": 1031.18, "end": 1034.8, "text": " As soon as the first 320 samples are received."}, {"start": 1034.8, "end": 1037.8, "text": " So that means as soon as you have 320 samples on the input,"}, 
{"start": 1037.8, "end": 1039.94, "text": " you can already encode it into a frame"}, {"start": 1039.94, "end": 1041.44, "text": " and then you can quantize that frame"}, {"start": 1041.44, "end": 1043.7, "text": " and you can send that down the line"}, {"start": 1043.7, "end": 1046.26, "text": " and then you can just keep on doing that"}, {"start": 1046.26, "end": 1048.44, "text": " as the signal keeps on coming"}, {"start": 1048.44, "end": 1050.34, "text": " as you're receiving your input signal."}, {"start": 1050.34, "end": 1052.06, "text": " You can keep on encoding it and sending it."}, {"start": 1052.06, "end": 1054.46, "text": " So that means that's why it's streamable."}, {"start": 1054.46, "end": 1057.34, "text": " Okay, and then they mentioned by selecting a variable number"}, {"start": 1057.34, "end": 1059.02, "text": " of residual steps at train time,"}, {"start": 1059.02, "end": 1060.34, "text": " a single model can be used"}, {"start": 1060.34, "end": 1063.28, "text": " to support multiple bandwidth targets."}, {"start": 1063.28, "end": 1065.86, "text": " That's something we saw up there"}, {"start": 1065.86, "end": 1067.5, "text": " with this particular example."}, {"start": 1067.5, "end": 1070.1, "text": " So basically, depending on how many"}, {"start": 1070.1, "end": 1073.6599999999999, "text": " of these quantizer models you add,"}, {"start": 1073.6599999999999, "end": 1076.7, "text": " you can achieve a particular bandwidth."}, {"start": 1076.7, "end": 1077.78, "text": " So here we picked eight"}, {"start": 1077.78, "end": 1080.74, "text": " because we want to have 80 bits per frame"}, {"start": 1080.74, "end": 1084.34, "text": " and thus we want to have six kilobits per second."}, {"start": 1084.34, "end": 1085.58, "text": " Okay."}, {"start": 1085.58, "end": 1088.74, "text": " Okay, so I did mention the language model"}, {"start": 1088.74, "end": 1089.74, "text": " that's being used here."}, {"start": 1089.74, "end": 1091.7, "text": " So implemented as a transformer"}, {"start": 1091.7, "end": 1094.78, "text": " to further reduce the bandwidth."}, {"start": 1094.78, "end": 1096.78, "text": " So let's kind of read this paragraph"}, {"start": 1096.78, "end": 1099.26, "text": " and then later we probably will have enough time"}, {"start": 1099.26, "end": 1101.3, "text": " to just go through that and see it in code."}, {"start": 1101.3, "end": 1103.26, "text": " So for a timestamp T,"}, {"start": 1103.26, "end": 1107.64, "text": " the discrete representation obtained at time T minus one"}, {"start": 1107.64, "end": 1110.54, "text": " is transformed into a continuous representation"}, {"start": 1110.54, "end": 1112.58, "text": " using learned embedding tables,"}, {"start": 1112.58, "end": 1115.58, "text": " one for each code book and which are then summed."}, {"start": 1115.58, "end": 1117.18, "text": " So we'll see that in the code a bit later."}, {"start": 1117.18, "end": 1119.04, "text": " But basically, so discrete representation"}, {"start": 1119.04, "end": 1120.3999999999999, "text": " are these numbers I mentioned."}, {"start": 1120.3999999999999, "end": 1125.3999999999999, "text": " So for example, if this is a step T minus one,"}, {"start": 1125.94, "end": 1127.86, "text": " you can see that the discrete representation"}, {"start": 1127.86, "end": 1130.46, "text": " is basically these four numbers here."}, {"start": 1130.46, "end": 1132.22, "text": " And when I say four numbers,"}, {"start": 1132.22, "end": 1135.94, "text": " even though it's kind of float 
initially,"}, {"start": 1135.94, "end": 1138.32, "text": " which means you have like 64 bits,"}, {"start": 1138.32, "end": 1141.4199999999998, "text": " we know that it doesn't have to be larger than 10 bits"}, {"start": 1141.4199999999998, "end": 1143.9399999999998, "text": " because it just needs to index into this table"}, {"start": 1143.9399999999998, "end": 1146.74, "text": " that has 1024 vectors."}, {"start": 1146.74, "end": 1147.74, "text": " And that's why these numbers"}, {"start": 1147.74, "end": 1149.82, "text": " are actually just 10 bit numbers, right?"}, {"start": 1150.6799999999998, "end": 1152.54, "text": " Okay, so let's go back here."}, {"start": 1154.8999999999999, "end": 1156.1, "text": " And then for T equals zero,"}, {"start": 1156.1, "end": 1157.34, "text": " a special token is used instead."}, {"start": 1157.34, "end": 1158.86, "text": " Okay, the output of the transformer"}, {"start": 1158.86, "end": 1160.8, "text": " is fed into ankyl linear layers"}, {"start": 1160.8, "end": 1162.22, "text": " with as many output channels"}, {"start": 1162.22, "end": 1163.98, "text": " as the cardinality of each code book,"}, {"start": 1163.98, "end": 1166.02, "text": " which is 1024 in our examples,"}, {"start": 1166.02, "end": 1168.78, "text": " giving us the logits of the estimated distribution"}, {"start": 1168.78, "end": 1170.84, "text": " over each code book for time T."}, {"start": 1171.8, "end": 1174.5, "text": " Okay, so basically instead of just finding"}, {"start": 1174.5, "end": 1178.82, "text": " the closest vector in the code book table"}, {"start": 1178.82, "end": 1180.7, "text": " and snapping our output vector"}, {"start": 1180.7, "end": 1183.06, "text": " to that particular code book vector,"}, {"start": 1183.06, "end": 1185.04, "text": " instead of that, we basically do,"}, {"start": 1185.04, "end": 1188.34, "text": " we form like a probability distribution"}, {"start": 1188.34, "end": 1189.5, "text": " across the whole code book."}, {"start": 1189.5, "end": 1191.1, "text": " So this is the code book table"}, {"start": 1191.1, "end": 1194.62, "text": " and we form a probability distribution across its indices"}, {"start": 1194.62, "end": 1198.7199999999998, "text": " and then combined with arithmetic coding,"}, {"start": 1198.7199999999998, "end": 1200.82, "text": " they show that you can further compress"}, {"start": 1200.82, "end": 1204.26, "text": " your latent representation using this approach."}, {"start": 1204.26, "end": 1206.12, "text": " Okay, let's continue here."}, {"start": 1207.2199999999998, "end": 1209.2399999999998, "text": " So here are the formulas for the losses."}, {"start": 1209.2399999999998, "end": 1211.02, "text": " I'm not gonna spend too much time here,"}, {"start": 1211.02, "end": 1212.58, "text": " but you can see here L1 norm"}, {"start": 1212.58, "end": 1214.7399999999998, "text": " between the time domain signals."}, {"start": 1214.7399999999998, "end": 1218.02, "text": " And then you can see here a combination of L1 and L2 norms"}, {"start": 1218.02, "end": 1220.86, "text": " between the mal spectrograms across various scales."}, {"start": 1220.86, "end": 1223.78, "text": " And then there is the discriminative loss."}, {"start": 1223.78, "end": 1226.8999999999999, "text": " Basically, you can see here the generator loss"}, {"start": 1227.76, "end": 1230.44, "text": " and you can also find the discriminator loss"}, {"start": 1230.44, "end": 1231.78, "text": " somewhere in here."}, {"start": 1231.78, "end": 1233.1, "text": " So 
here it is."}, {"start": 1233.1, "end": 1235.3, "text": " But I'm not gonna spend any more time on this"}, {"start": 1235.3, "end": 1236.78, "text": " because I've covered this."}, {"start": 1236.78, "end": 1240.26, "text": " Also, I think the audio gen or VQ gen paper"}, {"start": 1240.26, "end": 1243.3799999999999, "text": " basically have pretty much same losses as this paper."}, {"start": 1243.3799999999999, "end": 1245.26, "text": " So I'm gonna kind of skip all of those details."}, {"start": 1245.26, "end": 1247.58, "text": " Here is also the commitment loss,"}, {"start": 1247.58, "end": 1250.78, "text": " fairly familiar to those of you who watch those videos"}, {"start": 1250.78, "end": 1252.3999999999999, "text": " and who know about those papers."}, {"start": 1252.4, "end": 1256.7, "text": " And finally, the final loss is just a weighted sum"}, {"start": 1256.7, "end": 1258.8200000000002, "text": " of all of these various losses."}, {"start": 1258.8200000000002, "end": 1263.3200000000002, "text": " And in order to avoid having to basically do like a search"}, {"start": 1263.3200000000002, "end": 1266.5400000000002, "text": " and find the right coefficients, they make it adaptive."}, {"start": 1266.5400000000002, "end": 1269.22, "text": " And so they basically calculate the gradients"}, {"start": 1269.22, "end": 1271.3200000000002, "text": " of these losses with respect to the input."}, {"start": 1271.3200000000002, "end": 1273.1000000000001, "text": " And then they use the norm of the gradient"}, {"start": 1273.1000000000001, "end": 1276.6200000000001, "text": " to automatically rescale the importance"}, {"start": 1276.6200000000001, "end": 1277.8200000000002, "text": " of each of these losses."}, {"start": 1277.8200000000002, "end": 1278.9, "text": " So you can see here,"}, {"start": 1278.9, "end": 1282.42, "text": " if a gradient of a particular loss was huge,"}, {"start": 1282.42, "end": 1285.14, "text": " so we have this, this is basically a norm"}, {"start": 1285.14, "end": 1286.7800000000002, "text": " and exponentially,"}, {"start": 1288.14, "end": 1290.0600000000002, "text": " and they've done an exponentially moving average"}, {"start": 1290.0600000000002, "end": 1291.3400000000001, "text": " on the norm of the gradient."}, {"start": 1291.3400000000001, "end": 1294.14, "text": " And you can see if that was huge historically,"}, {"start": 1294.14, "end": 1296.94, "text": " then that means this is gonna be down weighted."}, {"start": 1296.94, "end": 1298.94, "text": " So the gradients from that particular loss"}, {"start": 1298.94, "end": 1299.9, "text": " are gonna be down weighted"}, {"start": 1299.9, "end": 1304.0600000000002, "text": " because this is a big number in the denominator."}, {"start": 1304.0600000000002, "end": 1307.1000000000001, "text": " And because of that, that loss is gonna be down weighted."}, {"start": 1307.1, "end": 1309.1399999999999, "text": " So that's the kind of the intuition behind it."}, {"start": 1309.1399999999999, "end": 1310.36, "text": " Okay."}, {"start": 1310.36, "end": 1311.1999999999998, "text": " So as I said,"}, {"start": 1312.3, "end": 1316.8999999999999, "text": " they use diverse domains to train these models."}, {"start": 1316.8999999999999, "end": 1319.54, "text": " That's how they ensure that they don't overfit"}, {"start": 1319.54, "end": 1321.1799999999998, "text": " to particular domain."}, {"start": 1321.1799999999998, "end": 1324.8999999999999, "text": " And yeah, so they mentioned here speech,"}, {"start": 
1324.8999999999999, "end": 1327.9399999999998, "text": " noisy speech, music, and general audio."}, {"start": 1327.9399999999998, "end": 1328.9399999999998, "text": " And here are the results."}, {"start": 1328.9399999999998, "end": 1331.6999999999998, "text": " So you can see here how they compare"}, {"start": 1331.6999999999998, "end": 1334.78, "text": " against some of the baselines, such as OPUS and EDS,"}, {"start": 1334.78, "end": 1337.42, "text": " which are basically signal processing baselines."}, {"start": 1337.42, "end": 1339.1, "text": " And then we have Lyra from Google,"}, {"start": 1339.1, "end": 1341.46, "text": " which is a neural codec as well."}, {"start": 1341.46, "end": 1345.1, "text": " And you can see it basically outperforms all of those models"}, {"start": 1345.1, "end": 1347.92, "text": " for all of the target bandwidths."}, {"start": 1349.18, "end": 1351.62, "text": " Additionally, this entropy coded one,"}, {"start": 1351.62, "end": 1353.66, "text": " which uses the transformer"}, {"start": 1353.66, "end": 1356.26, "text": " to further compress the representation,"}, {"start": 1356.26, "end": 1360.5, "text": " basically outperforms for a lower target bandwidths,"}, {"start": 1360.5, "end": 1363.1, "text": " outperforms the normal one,"}, {"start": 1363.1, "end": 1365.8999999999999, "text": " the normal encodec without the transformer."}, {"start": 1366.8999999999999, "end": 1368.4199999999998, "text": " Okay, guys."}, {"start": 1368.4199999999998, "end": 1371.4199999999998, "text": " Let's see what else is interesting here."}, {"start": 1371.4199999999998, "end": 1373.56, "text": " Yeah, this result here is kinda cool."}, {"start": 1373.56, "end": 1375.3, "text": " So I think I mentioned this in the beginning of the video."}, {"start": 1375.3, "end": 1377.74, "text": " Basically, you can see here that MP3"}, {"start": 1377.74, "end": 1380.8999999999999, "text": " with the target bandwidth of 64 kilobits per second"}, {"start": 1380.8999999999999, "end": 1383.52, "text": " has the same, this MUSHR score,"}, {"start": 1383.52, "end": 1385.62, "text": " which is just basically subjective score,"}, {"start": 1386.9399999999998, "end": 1390.62, "text": " and has the same score as this particular encodec"}, {"start": 1390.62, "end": 1392.8999999999999, "text": " with six kilobits per second."}, {"start": 1392.9, "end": 1395.98, "text": " So you can see these are fairly on-pair."}, {"start": 1395.98, "end": 1399.14, "text": " And you can see that already with 12 kilobits per second,"}, {"start": 1399.14, "end": 1403.94, "text": " it heavily outperforms the MP3 format."}, {"start": 1403.94, "end": 1406.5800000000002, "text": " And also, if you include the entropy coding,"}, {"start": 1406.5800000000002, "end": 1408.4, "text": " then you can even further reduce."}, {"start": 1408.4, "end": 1410.74, "text": " With 4.2 kilobits per second,"}, {"start": 1410.74, "end": 1413.9, "text": " you outperform 64 kilobits per second MP3,"}, {"start": 1413.9, "end": 1418.9, "text": " and you have comparable basically subjective metrics."}, {"start": 1418.9, "end": 1419.7800000000002, "text": " That's fairly cool."}, {"start": 1419.7800000000002, "end": 1422.8600000000001, "text": " Okay, having seen all of this, having seen the paper,"}, {"start": 1422.86, "end": 1425.86, "text": " let's now switch our focus to the actual code."}, {"start": 1425.86, "end": 1427.9799999999998, "text": " And hopefully that's gonna help us"}, {"start": 1427.9799999999998, "end": 1430.04, "text": " 
understand this much better."}, {"start": 1430.04, "end": 1432.0, "text": " So I just created a long JSON,"}, {"start": 1432.0, "end": 1434.3799999999999, "text": " and I basically passed, as you can see here,"}, {"start": 1434.3799999999999, "end": 1437.26, "text": " the arguments, I just have this test 24K."}, {"start": 1438.12, "end": 1440.54, "text": " I'm gonna pronounce this as wave signal."}, {"start": 1440.54, "end": 1442.1999999999998, "text": " I'm not sure how this format is pronounced,"}, {"start": 1442.1999999999998, "end": 1444.8999999999999, "text": " but wave sounds easy enough."}, {"start": 1444.8999999999999, "end": 1448.3799999999999, "text": " Basically, test 24K is already a part of the codebase."}, {"start": 1448.3799999999999, "end": 1450.82, "text": " It's checked in, so you just have to clone the repo,"}, {"start": 1450.82, "end": 1453.1799999999998, "text": " and you will have it out of the box."}, {"start": 1453.1799999999998, "end": 1454.02, "text": " So that's it."}, {"start": 1454.02, "end": 1455.36, "text": " That's everything we have to set,"}, {"start": 1455.36, "end": 1457.46, "text": " and later I'm just gonna add this LM flag"}, {"start": 1457.46, "end": 1461.46, "text": " to see how the language model fits into this whole picture."}, {"start": 1461.46, "end": 1463.4199999999998, "text": " And there is also the HQ,"}, {"start": 1463.4199999999998, "end": 1467.3799999999999, "text": " which is the 48 kilohertz non-streamable model,"}, {"start": 1467.3799999999999, "end": 1468.74, "text": " but I'm not sure I'll have enough time."}, {"start": 1468.74, "end": 1471.6599999999999, "text": " I don't wanna go over one hour in this video."}, {"start": 1471.6599999999999, "end": 1473.6599999999999, "text": " So let's basically start."}, {"start": 1473.6599999999999, "end": 1476.3799999999999, "text": " I'm gonna enable all of the breakpoints here,"}, {"start": 1476.3799999999999, "end": 1479.2, "text": " and let's run this thing."}, {"start": 1479.2, "end": 1482.54, "text": " Okay, again, I'm gonna focus only on the crucial parts."}, {"start": 1482.54, "end": 1486.18, "text": " So you just make sure that the audio signal exists,"}, {"start": 1486.18, "end": 1491.06, "text": " and then here you can see that because we have wave signal,"}, {"start": 1491.06, "end": 1493.3400000000001, "text": " and this is basically ECDC,"}, {"start": 1493.3400000000001, "end": 1496.3, "text": " which is the compressed format suffix."}, {"start": 1496.3, "end": 1497.78, "text": " Because of that, we are gonna go"}, {"start": 1497.78, "end": 1499.42, "text": " into the compression branch."}, {"start": 1499.42, "end": 1502.66, "text": " So, but by understanding this compressed function,"}, {"start": 1502.66, "end": 1504.22, "text": " this decompressed function is gonna be"}, {"start": 1504.22, "end": 1505.46, "text": " fairly trivial to understand."}, {"start": 1505.46, "end": 1508.06, "text": " So I'm gonna focus on this particular branch of the code."}, {"start": 1508.06, "end": 1511.26, "text": " Okay, so we formed the output,"}, {"start": 1511.26, "end": 1512.98, "text": " as we just changed the suffix."}, {"start": 1512.98, "end": 1514.6599999999999, "text": " Basically, it's also gonna be, as you can see here,"}, {"start": 1514.6599999999999, "end": 1518.78, "text": " test24k, oops, test24k ECDC."}, {"start": 1518.78, "end": 1519.8999999999999, "text": " That's the output."}, {"start": 1519.8999999999999, "end": 1522.74, "text": " And then we check that the output exists,"}, 
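Going back to the adaptive loss weighting described a little earlier (each loss's gradient is rescaled by an exponential moving average of its norm, so a loss whose gradient has historically been large gets down-weighted), a minimal PyTorch sketch of that idea could look like the following. It is written only from that description, not taken from the official repository; the class name, the weight dictionary and the `beta` decay are made up for illustration.

```python
# Minimal sketch of gradient-norm-based loss balancing (illustration only).
import torch

class SimpleLossBalancer:
    def __init__(self, weights: dict, beta: float = 0.999):
        self.weights = weights            # e.g. {"l1_time": 0.1, "mel": 1.0, "adv": 3.0}
        self.beta = beta                  # EMA decay for the gradient norms
        self.ema_norms = {k: None for k in weights}

    def backward(self, losses: dict, model_output: torch.Tensor):
        total_weight = sum(self.weights.values())
        combined_grad = torch.zeros_like(model_output)
        for name, loss in losses.items():
            # Gradient of this particular loss w.r.t. the decoder output only.
            (grad,) = torch.autograd.grad(loss, model_output, retain_graph=True)
            norm = grad.norm()
            ema = self.ema_norms[name]
            ema = norm if ema is None else self.beta * ema + (1 - self.beta) * norm
            self.ema_norms[name] = ema
            # A historically large gradient norm sits in the denominator,
            # so that loss is down-weighted, as explained above.
            combined_grad += (self.weights[name] / total_weight) * grad / (ema + 1e-8)
        # Push the rebalanced gradient through the rest of the network.
        model_output.backward(combined_grad)
```

In a training loop you would call `balancer.backward(losses, decoder_output)` instead of summing the losses and calling `.backward()` on the total.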
{"start": 1522.74, "end": 1524.26, "text": " and because it already exists,"}, {"start": 1524.26, "end": 1528.72, "text": " I'll just have to first basically delete that signal."}, {"start": 1528.72, "end": 1532.34, "text": " I'm gonna delete this output compressed file,"}, {"start": 1532.34, "end": 1533.22, "text": " and now it's gonna work."}, {"start": 1533.22, "end": 1534.3799999999999, "text": " So this is gonna happen to you as well."}, {"start": 1534.3799999999999, "end": 1536.6599999999999, "text": " So I'm not gonna edit this out."}, {"start": 1536.66, "end": 1539.5, "text": " I'm just gonna restart the program again,"}, {"start": 1539.5, "end": 1541.46, "text": " and this time it will not fail."}, {"start": 1541.46, "end": 1544.46, "text": " So let's skip to this check output,"}, {"start": 1544.46, "end": 1546.26, "text": " and this time this will not fail,"}, {"start": 1546.26, "end": 1547.3000000000002, "text": " so everything's fine there."}, {"start": 1547.3000000000002, "end": 1548.1200000000001, "text": " Cool."}, {"start": 1548.1200000000001, "end": 1549.88, "text": " So because HQ is set to false,"}, {"start": 1549.88, "end": 1552.8600000000001, "text": " we're gonna deal with 24 kilohertz signal."}, {"start": 1552.8600000000001, "end": 1554.2, "text": " Let's enter this."}, {"start": 1554.2, "end": 1559.2, "text": " Let's enter the encodec model 24 kilohertz function."}, {"start": 1560.0600000000002, "end": 1561.5800000000002, "text": " And basically, so you can see here,"}, {"start": 1561.5800000000002, "end": 1562.7, "text": " we just set some variables,"}, {"start": 1562.7, "end": 1566.5800000000002, "text": " and finally we need to first basically get this model."}, {"start": 1566.58, "end": 1568.32, "text": " So the model consists, as you can see here,"}, {"start": 1568.32, "end": 1570.98, "text": " from the encoder and decoder."}, {"start": 1570.98, "end": 1573.3799999999999, "text": " So I'm gonna enter into the encoder"}, {"start": 1573.3799999999999, "end": 1575.54, "text": " just to briefly show you how everything looks like."}, {"start": 1575.54, "end": 1577.74, "text": " So we are in the encoder here,"}, {"start": 1577.74, "end": 1581.74, "text": " and you can see basically just some variables,"}, {"start": 1581.74, "end": 1582.78, "text": " the location there,"}, {"start": 1582.78, "end": 1586.26, "text": " and then we have this ratios list,"}, {"start": 1586.26, "end": 1588.74, "text": " which basically tells us how we are reducing"}, {"start": 1588.74, "end": 1592.06, "text": " the temporal dimensionality"}, {"start": 1592.06, "end": 1596.1, "text": " as we are propagating through the encoder."}, {"start": 1596.1, "end": 1598.1, "text": " And if you sum all of these up,"}, {"start": 1598.1, "end": 1600.1799999999998, "text": " you'll get the number that's probably familiar"}, {"start": 1600.1799999999998, "end": 1602.3, "text": " at this point, and that's 320."}, {"start": 1602.3, "end": 1606.1799999999998, "text": " And that's why we need 320 raw samples"}, {"start": 1606.1799999999998, "end": 1608.06, "text": " to get a single output frame."}, {"start": 1608.06, "end": 1610.62, "text": " That's the particular reason why this is happening,"}, {"start": 1610.62, "end": 1612.82, "text": " because of the architecture of the encoder."}, {"start": 1612.82, "end": 1614.1, "text": " Okay?"}, {"start": 1614.1, "end": 1615.06, "text": " So let's continue."}, {"start": 1615.06, "end": 1616.0, "text": " So you can see here,"}, {"start": 1616.0, "end": 
1620.3799999999999, "text": " we just have these COM1D layers with this S prefix."}, {"start": 1620.3799999999999, "end": 1622.54, "text": " We just suggest that they have,"}, {"start": 1622.54, "end": 1624.06, "text": " they kind of modified it a bit."}, {"start": 1624.06, "end": 1627.5, "text": " I'm gonna show you how it looks like in a second."}, {"start": 1627.5, "end": 1630.4199999999998, "text": " But after that, they just keep on adding"}, {"start": 1631.6599999999999, "end": 1634.26, "text": " these ResNet blocks,"}, {"start": 1635.3, "end": 1639.06, "text": " which consists basically out of bunch of these COM1D layers"}, {"start": 1639.06, "end": 1641.4199999999998, "text": " and some skip connections."}, {"start": 1641.4199999999998, "end": 1644.86, "text": " So basically nothing fundamental there."}, {"start": 1644.86, "end": 1647.6599999999999, "text": " It's just your encoder architecture, and that's it."}, {"start": 1647.6599999999999, "end": 1650.7, "text": " I'm gonna quickly just show you how this thing looks like."}, {"start": 1650.7, "end": 1654.06, "text": " So they have this S prefix, as I mentioned."}, {"start": 1654.06, "end": 1655.66, "text": " And so the thing here is,"}, {"start": 1655.66, "end": 1657.22, "text": " they'll have this,"}, {"start": 1657.22, "end": 1659.26, "text": " depending whether we are dealing with a causal model"}, {"start": 1659.26, "end": 1660.6200000000001, "text": " or non-causal model,"}, {"start": 1660.6200000000001, "end": 1662.5800000000002, "text": " they'll have a bit different padding here."}, {"start": 1662.5800000000002, "end": 1665.3400000000001, "text": " And I think that's pretty much it."}, {"start": 1665.3400000000001, "end": 1667.5, "text": " They also have some details like"}, {"start": 1667.5, "end": 1671.3400000000001, "text": " how they wrap up this norm layer with the COM layer."}, {"start": 1671.3400000000001, "end": 1674.1000000000001, "text": " So you can see here, just some idiosyncrasies,"}, {"start": 1674.1000000000001, "end": 1676.94, "text": " but nothing crucial really to understand how this is working."}, {"start": 1676.94, "end": 1680.5800000000002, "text": " So let's treat it as a black box, and we have our encoder."}, {"start": 1680.58, "end": 1682.22, "text": " The decoder is fairly similar,"}, {"start": 1682.22, "end": 1683.9399999999998, "text": " so I'm gonna skip it completely."}, {"start": 1683.9399999999998, "end": 1687.06, "text": " And finally, let me zoom in a bit"}, {"start": 1687.06, "end": 1689.1399999999999, "text": " in case you can see a bit better now."}, {"start": 1689.1399999999999, "end": 1690.8999999999999, "text": " Okay, hopefully this works."}, {"start": 1690.8999999999999, "end": 1693.82, "text": " Let me know whether the visibility is now better"}, {"start": 1693.82, "end": 1695.1799999999998, "text": " than before I zoomed in."}, {"start": 1695.1799999999998, "end": 1696.34, "text": " By the way, I'm a bit sick,"}, {"start": 1696.34, "end": 1698.74, "text": " so that's why my throat might sound a bit different."}, {"start": 1698.74, "end": 1701.74, "text": " But hopefully the video is gonna be high quality nonetheless."}, {"start": 1701.74, "end": 1703.62, "text": " So let's see what's happening here."}, {"start": 1704.98, "end": 1707.98, "text": " You can see here that this enqueue is gonna be"}, {"start": 1707.98, "end": 1711.42, "text": " the max number of those quantizers we had"}, {"start": 1711.42, "end": 1715.02, "text": " in that bottleneck of the encoder decoder"}, 
{"start": 1715.02, "end": 1718.94, "text": " that this model can have"}, {"start": 1718.94, "end": 1722.06, "text": " in case we have the largest target bandwidth."}, {"start": 1722.06, "end": 1725.42, "text": " And you can see here, target bandwidths go from 1.5 kilohertz"}, {"start": 1725.42, "end": 1727.1, "text": " all the way to 24 kilohertz."}, {"start": 1727.1, "end": 1729.1200000000001, "text": " And when we index by minus one,"}, {"start": 1729.1200000000001, "end": 1732.98, "text": " we fetch the largest bandwidth, which is the 24 kilohertz."}, {"start": 1732.98, "end": 1736.22, "text": " And because it's 24, we multiply with thousands"}, {"start": 1736.22, "end": 1737.78, "text": " such that we end up with hertz."}, {"start": 1737.78, "end": 1741.34, "text": " So this is now the number of hertz for the target bandwidth."}, {"start": 1741.34, "end": 1744.58, "text": " And then we divide that number with this thing."}, {"start": 1744.58, "end": 1745.42, "text": " And so what's this thing?"}, {"start": 1745.42, "end": 1749.34, "text": " So basically we have here the number of frames."}, {"start": 1749.34, "end": 1750.58, "text": " You can see here sample rate."}, {"start": 1750.58, "end": 1752.5, "text": " When you divide that by the hop length,"}, {"start": 1752.5, "end": 1754.18, "text": " you end up with the output number of frames."}, {"start": 1754.18, "end": 1756.98, "text": " It's gonna be like 75 for our particular case here."}, {"start": 1756.98, "end": 1759.3799999999999, "text": " And then we multiply that with 10"}, {"start": 1759.3799999999999, "end": 1762.1399999999999, "text": " because they kind of hard coded that"}, {"start": 1762.1399999999999, "end": 1763.96, "text": " because they assume that the code book"}, {"start": 1763.96, "end": 1766.02, "text": " is gonna be 1,024 slots."}, {"start": 1766.02, "end": 1769.18, "text": " And they basically does assume that we need 10 bits"}, {"start": 1769.18, "end": 1770.82, "text": " to index into that code book."}, {"start": 1770.82, "end": 1773.16, "text": " So again, that means that a single quantizer"}, {"start": 1773.16, "end": 1776.22, "text": " will have 75 times 10 bits."}, {"start": 1776.22, "end": 1781.22, "text": " So that means 750 bits for a second of the video."}, {"start": 1781.46, "end": 1784.06, "text": " And so when we divide these two,"}, {"start": 1784.06, "end": 1786.3799999999999, "text": " we get the number of quantizers we need"}, {"start": 1786.3799999999999, "end": 1788.58, "text": " for the target bandwidth of 24 kilohertz."}, {"start": 1788.58, "end": 1790.62, "text": " As you can see here, it's gonna be 32."}, {"start": 1790.62, "end": 1793.5, "text": " So that means we need 32 of these quantizers"}, {"start": 1793.5, "end": 1797.98, "text": " to be able to achieve this particular target"}, {"start": 1797.98, "end": 1800.14, "text": " bandwidth of 24 kilohertz."}, {"start": 1800.14, "end": 1803.48, "text": " So again, just to clarify as this might be a bit confusing,"}, {"start": 1803.48, "end": 1808.14, "text": " imagine we have a signal, a second of the audio raw signal"}, {"start": 1808.14, "end": 1810.26, "text": " that's gonna be 24,000 samples, right?"}, {"start": 1810.26, "end": 1814.46, "text": " And then we encode that and we end up with 75 frames."}, {"start": 1814.46, "end": 1817.9, "text": " And because for each of the frames, we allocate 10 bits,"}, {"start": 1817.9, "end": 1821.22, "text": " that means we have 75 times 10 bits, right?"}, {"start": 1821.22, "end": 1822.56, "text": 
" And then if you wanna figure out"}, {"start": 1822.56, "end": 1826.26, "text": " how many of these do we need to get to 24K,"}, {"start": 1826.26, "end": 1829.58, "text": " we just divide 24K by that number of bits."}, {"start": 1829.58, "end": 1832.3, "text": " And that's precisely what you've seen here."}, {"start": 1832.3, "end": 1835.34, "text": " Okay, second thing I wanna show you"}, {"start": 1835.34, "end": 1837.04, "text": " is this residual vector quantizer,"}, {"start": 1837.04, "end": 1839.5, "text": " although all of the fun is gonna happen in the forward pass,"}, {"start": 1839.5, "end": 1843.22, "text": " not here, but let's see how it's constructed."}, {"start": 1843.22, "end": 1846.3, "text": " Again, just bunch of variables here."}, {"start": 1846.3, "end": 1849.1, "text": " And then there is this residual vector quantization."}, {"start": 1849.1, "end": 1850.74, "text": " Let's enter that thing."}, {"start": 1850.74, "end": 1853.94, "text": " And you can see here, we have 32 of these"}, {"start": 1853.94, "end": 1858.06, "text": " vector quantizations basically wrapped into a list."}, {"start": 1858.06, "end": 1861.32, "text": " And let me see whether I have a break point here."}, {"start": 1861.32, "end": 1863.58, "text": " So let's enter into one of these"}, {"start": 1863.58, "end": 1865.7, "text": " vector quantization modules"}, {"start": 1865.7, "end": 1867.54, "text": " and let's see what's going on there."}, {"start": 1867.54, "end": 1870.22, "text": " So here is the codebook dimensionality,"}, {"start": 1870.22, "end": 1871.26, "text": " it's gonna be 128."}, {"start": 1871.26, "end": 1873.04, "text": " So that's the number we've been using"}, {"start": 1873.04, "end": 1875.82, "text": " as a running example throughout the video."}, {"start": 1875.82, "end": 1878.34, "text": " And then because we don't require any projection"}, {"start": 1878.34, "end": 1881.3, "text": " because the dimensionality of the output of the encoder"}, {"start": 1881.3, "end": 1883.58, "text": " and the codebook are the same"}, {"start": 1883.58, "end": 1885.3799999999999, "text": " because of that we don't need any projection"}, {"start": 1885.3799999999999, "end": 1887.8999999999999, "text": " and that's why we're gonna end up with identities here."}, {"start": 1887.8999999999999, "end": 1891.22, "text": " Okay, and finally we have this Euclidean codebook"}, {"start": 1891.22, "end": 1893.34, "text": " and the whole fun is gonna be in the forward pass,"}, {"start": 1893.34, "end": 1895.54, "text": " I'm gonna skip the actual initialization,"}, {"start": 1895.54, "end": 1897.86, "text": " but that's the implementation of the codebook"}, {"start": 1897.86, "end": 1899.9399999999998, "text": " I've been mentioning this whole time."}, {"start": 1899.9399999999998, "end": 1902.3799999999999, "text": " And it's called Euclidean because once we do the,"}, {"start": 1903.3, "end": 1905.6599999999999, "text": " once we try to figure out the closest vector"}, {"start": 1905.66, "end": 1908.78, "text": " in that code table we use the L2 norm"}, {"start": 1908.78, "end": 1911.42, "text": " to figure that out, hence Euclidean."}, {"start": 1911.42, "end": 1914.78, "text": " Okay, I'm gonna hit F5 here and let's exit this."}, {"start": 1914.78, "end": 1918.14, "text": " And now I'm gonna disable all of the break points"}, {"start": 1918.14, "end": 1919.64, "text": " and let's exit this one."}, {"start": 1920.74, "end": 1922.26, "text": " And we are back here."}, {"start": 1922.26, "end": 
1924.5800000000002, "text": " I'm gonna re-enable all of the break points"}, {"start": 1924.5800000000002, "end": 1925.78, "text": " and let's continue here."}, {"start": 1925.78, "end": 1928.22, "text": " Okay, so we have the encoder, we have the decoder,"}, {"start": 1928.22, "end": 1930.74, "text": " we have the quantizer, and finally we just have"}, {"start": 1930.74, "end": 1933.8200000000002, "text": " this encodic model which is just basically gonna"}, {"start": 1933.82, "end": 1936.7, "text": " wrap up all of those variables, nothing fancy,"}, {"start": 1936.7, "end": 1940.34, "text": " I can move this break point, I can remove it"}, {"start": 1940.34, "end": 1942.02, "text": " and let's continue here."}, {"start": 1942.02, "end": 1945.1, "text": " So let's just get to here and that's it."}, {"start": 1945.1, "end": 1949.98, "text": " So we are out of here and we have our model finally ready."}, {"start": 1949.98, "end": 1953.82, "text": " So now we just fetch the actual state dictionary,"}, {"start": 1953.82, "end": 1956.26, "text": " so the pre-trained weights, we initialize the model,"}, {"start": 1956.26, "end": 1959.74, "text": " we set it into the eval mode and we continue here."}, {"start": 1959.74, "end": 1963.1, "text": " Okay guys, so now we set the target bandwidth, okay?"}, {"start": 1963.1, "end": 1965.74, "text": " So the target bandwidth by default is gonna be"}, {"start": 1965.74, "end": 1968.8799999999999, "text": " six kilobits per second and what it does is basically"}, {"start": 1968.8799999999999, "end": 1971.3799999999999, "text": " just sets this bandwidth internal variable"}, {"start": 1971.3799999999999, "end": 1974.78, "text": " of the encodic model to six and that's it."}, {"start": 1974.78, "end": 1977.4599999999998, "text": " So nothing fancy happening there."}, {"start": 1977.4599999999998, "end": 1981.26, "text": " Okay, so now let's just load the audio file."}, {"start": 1981.26, "end": 1985.8, "text": " It's gonna be a 24 kilohertz, 20 second audio file."}, {"start": 1985.8, "end": 1989.4399999999998, "text": " So if you take a look at the root of this repository,"}, {"start": 1990.5, "end": 1992.1999999999998, "text": " we can find that signal here."}, {"start": 1992.2, "end": 1993.8, "text": " Let me see where we can hear it."}, {"start": 1993.8, "end": 1998.8, "text": " Okay, you can see it's a 20 second audio file, that's it."}, {"start": 1999.6200000000001, "end": 2000.98, "text": " Okay, so let's continue here."}, {"start": 2000.98, "end": 2004.54, "text": " Let me just show you the shape of that WAV file."}, {"start": 2004.54, "end": 2009.54, "text": " So shape is 480,000 and that's because if you just think"}, {"start": 2012.46, "end": 2016.0, "text": " about it, if you multiply 20, which is the number of seconds"}, {"start": 2016.0, "end": 2019.8400000000001, "text": " of the audio file times the 24,000 samples per second,"}, {"start": 2019.84, "end": 2024.84, "text": " we end up with 480,000 samples."}, {"start": 2025.58, "end": 2027.58, "text": " My brain is glitching, man."}, {"start": 2027.58, "end": 2030.06, "text": " Okay, so let's continue."}, {"start": 2030.06, "end": 2032.52, "text": " We now do the conversion but because,"}, {"start": 2034.1399999999999, "end": 2035.54, "text": " it's gonna be a no-op basically."}, {"start": 2035.54, "end": 2037.6799999999998, "text": " I'm gonna enter here just to show you but basically"}, {"start": 2037.6799999999998, "end": 2039.82, "text": " because the target channel is set to one,"}, 
{"start": 2039.82, "end": 2042.86, "text": " we do a mean across the zero dimension"}, {"start": 2042.86, "end": 2045.4199999999998, "text": " but because we only have a single channel,"}, {"start": 2045.4199999999998, "end": 2046.8799999999999, "text": " it's gonna be a no-op."}, {"start": 2046.88, "end": 2050.38, "text": " If we had a stereo sound, then we'd have two channels"}, {"start": 2050.38, "end": 2052.82, "text": " and we'd just do a mean across the two channels"}, {"start": 2052.82, "end": 2054.84, "text": " to get the final output channel"}, {"start": 2054.84, "end": 2058.26, "text": " because the target number of channels is set to one here."}, {"start": 2058.26, "end": 2060.78, "text": " But in our case, it's just no-ops, we can ignore it"}, {"start": 2060.78, "end": 2063.1, "text": " and then we do resampling but again,"}, {"start": 2063.1, "end": 2067.7400000000002, "text": " because both our signal and the target signal sampling rates"}, {"start": 2067.7400000000002, "end": 2070.38, "text": " are the same 24K, we do nothing."}, {"start": 2070.38, "end": 2072.7400000000002, "text": " So we just basically, it's a no-op."}, {"start": 2072.7400000000002, "end": 2074.78, "text": " Okay, so here is the fun part."}, {"start": 2074.78, "end": 2078.82, "text": " We do the compression of the signal,"}, {"start": 2078.82, "end": 2081.46, "text": " of the loaded signal and we pass it through our model."}, {"start": 2081.46, "end": 2083.1800000000003, "text": " Let's see how that's gonna work."}, {"start": 2083.1800000000003, "end": 2087.1600000000003, "text": " Okay, so let's kinda skip over all of these details."}, {"start": 2087.1600000000003, "end": 2091.94, "text": " What we do here is we are basically going to load"}, {"start": 2091.94, "end": 2093.94, "text": " the whole signal."}, {"start": 2095.1400000000003, "end": 2098.6200000000003, "text": " So this whole loop is gonna have just a single iteration."}, {"start": 2098.6200000000003, "end": 2102.6200000000003, "text": " And I think worth mentioning is even though they support"}, {"start": 2102.62, "end": 2106.54, "text": " the streamable setting, they don't have it implemented here"}, {"start": 2106.54, "end": 2109.06, "text": " neither do they have implemented the training code."}, {"start": 2109.06, "end": 2111.06, "text": " So what you can find here is the inference."}, {"start": 2111.06, "end": 2113.5, "text": " You can also find some details about the balancer"}, {"start": 2113.5, "end": 2118.5, "text": " and about the multi-scale STFT discriminators."}, {"start": 2119.02, "end": 2121.1, "text": " Okay, so basically you can see here,"}, {"start": 2121.1, "end": 2123.58, "text": " they also use the terminology frame"}, {"start": 2123.58, "end": 2126.94, "text": " for the input raw audio signal"}, {"start": 2126.94, "end": 2129.18, "text": " and not only for the output of the encoder."}, {"start": 2129.18, "end": 2131.74, "text": " So just keep that in mind and don't get confused"}, {"start": 2131.74, "end": 2133.7, "text": " by the terminology here."}, {"start": 2133.7, "end": 2137.4599999999996, "text": " So frame is again, is gonna be simply the same"}, {"start": 2137.4599999999996, "end": 2138.3799999999997, "text": " as our input signal."}, {"start": 2138.3799999999997, "end": 2140.66, "text": " So because we fetched every single sample,"}, {"start": 2140.66, "end": 2143.7, "text": " it's just gonna be 480,000 samples."}, {"start": 2143.7, "end": 2146.52, "text": " Okay, and now the fun happens in the code frame 
function."}, {"start": 2146.52, "end": 2149.04, "text": " So let's see how that's gonna work."}, {"start": 2149.04, "end": 2151.3399999999997, "text": " So we don't do it in normalizations, we can skip that."}, {"start": 2151.3399999999997, "end": 2154.8399999999997, "text": " And now we pass the X, so the input signal"}, {"start": 2154.8399999999997, "end": 2155.7999999999997, "text": " through the encoder."}, {"start": 2155.7999999999997, "end": 2157.1, "text": " So let's see how that's gonna work."}, {"start": 2157.1, "end": 2159.66, "text": " So again, this is just gonna be a series"}, {"start": 2159.66, "end": 2162.2599999999998, "text": " of these COM1D layers and some of them"}, {"start": 2162.2599999999998, "end": 2163.74, "text": " are gonna have a stride of two."}, {"start": 2163.74, "end": 2167.8599999999997, "text": " And so our input signal is slowly gonna be reduced"}, {"start": 2167.8599999999997, "end": 2170.66, "text": " across the temporal dimension and the number of channels"}, {"start": 2170.66, "end": 2172.02, "text": " is gonna be increasing."}, {"start": 2172.02, "end": 2174.1, "text": " So what I'm gonna do here is I'm gonna enter"}, {"start": 2174.1, "end": 2178.1, "text": " the COM1D layer here and I'm gonna put the break point"}, {"start": 2178.1, "end": 2179.22, "text": " with a particular condition."}, {"start": 2179.22, "end": 2182.02, "text": " And this is why I love Visual Studio Code."}, {"start": 2182.02, "end": 2184.7, "text": " So you can basically edit the break point"}, {"start": 2184.7, "end": 2186.22, "text": " and you can set a particular condition"}, {"start": 2186.22, "end": 2187.42, "text": " when you wanna hit this one."}, {"start": 2187.42, "end": 2190.94, "text": " And I wanna hit it when stride is bigger than one, right?"}, {"start": 2190.94, "end": 2193.2000000000003, "text": " And so because I wanna see how the number"}, {"start": 2193.2000000000003, "end": 2196.86, "text": " of the temporal axis is being reduced."}, {"start": 2196.86, "end": 2199.06, "text": " Okay, so now let's hit F5."}, {"start": 2199.06, "end": 2202.42, "text": " And here we are, so currently if I just print a shape,"}, {"start": 2202.42, "end": 2203.76, "text": " the shape is, as you can see here,"}, {"start": 2203.76, "end": 2207.98, "text": " we already have 32 channels because we had some previous"}, {"start": 2207.98, "end": 2210.82, "text": " COM1D layers which didn't have the stride"}, {"start": 2210.82, "end": 2213.7200000000003, "text": " but they still had increased the number of channels."}, {"start": 2213.7200000000003, "end": 2216.62, "text": " But once I hit, so now I'm gonna put a regular break point"}, {"start": 2216.62, "end": 2217.74, "text": " and hit F5."}, {"start": 2217.74, "end": 2221.66, "text": " So once we are here, now if I do x.shape,"}, {"start": 2221.66, "end": 2223.3399999999997, "text": " we have, as you can see here,"}, {"start": 2223.3399999999997, "end": 2227.2, "text": " two x, a smaller dimensionality across the temporal axis"}, {"start": 2227.2, "end": 2229.5, "text": " and we have two x more channels."}, {"start": 2229.5, "end": 2232.7799999999997, "text": " And that's a regular pattern we've been seeing all around."}, {"start": 2232.7799999999997, "end": 2234.3199999999997, "text": " CNNs do that all the time."}, {"start": 2234.3199999999997, "end": 2237.1, "text": " As soon as you reduce the spatial dimensionality by two,"}, {"start": 2237.1, "end": 2238.8199999999997, "text": " you also increase the number of channels by 
two."}, {"start": 2238.8199999999997, "end": 2241.46, "text": " That's a common pattern you'll see all around."}, {"start": 2241.46, "end": 2243.98, "text": " Okay guys, so now I'm gonna remove basically"}, {"start": 2243.98, "end": 2246.18, "text": " all of the break points and I'm gonna exit"}, {"start": 2246.18, "end": 2247.02, "text": " from this function."}, {"start": 2247.02, "end": 2250.4199999999996, "text": " I'm gonna exit here, I'm gonna exit here,"}, {"start": 2250.4199999999996, "end": 2254.58, "text": " I'm gonna exit through all of these and we are done."}, {"start": 2254.58, "end": 2256.4199999999996, "text": " Okay, so this is the encoder."}, {"start": 2256.4199999999996, "end": 2260.2599999999998, "text": " I'm gonna hit F10 and here is the final dimensionality."}, {"start": 2260.2599999999998, "end": 2264.4199999999996, "text": " It's gonna be 1,500, 128."}, {"start": 2264.4199999999996, "end": 2266.8599999999997, "text": " And that's the output of our encoder, right?"}, {"start": 2266.8599999999997, "end": 2271.8599999999997, "text": " And it's 1,500 frames because again, we have 20 seconds"}, {"start": 2272.3199999999997, "end": 2274.14, "text": " and each second is 75 frames"}, {"start": 2274.14, "end": 2276.2599999999998, "text": " and that's why we end up with 1,500."}, {"start": 2276.2599999999998, "end": 2277.62, "text": " Hopefully that makes sense."}, {"start": 2277.62, "end": 2281.58, "text": " Okay, so now what we do is we have to quantize"}, {"start": 2281.58, "end": 2284.7, "text": " those vectors and let's see how that's gonna work."}, {"start": 2284.7, "end": 2286.42, "text": " So here we are."}, {"start": 2286.42, "end": 2289.02, "text": " So we first have to figure out"}, {"start": 2289.02, "end": 2291.3799999999997, "text": " for the desired target bandwidth,"}, {"start": 2291.3799999999997, "end": 2294.62, "text": " how many of the quantizers do we have to actually use"}, {"start": 2294.62, "end": 2296.74, "text": " to get that target bandwidth?"}, {"start": 2296.74, "end": 2298.22, "text": " And so that's what this function does."}, {"start": 2298.22, "end": 2300.1, "text": " So let's enter into the function"}, {"start": 2300.1, "end": 2301.7, "text": " and let's see how it works."}, {"start": 2301.7, "end": 2305.74, "text": " So, okay, so they say get bandwidth per quantizer."}, {"start": 2305.74, "end": 2307.7, "text": " So does that make sense?"}, {"start": 2307.7, "end": 2308.74, "text": " Let's think about it."}, {"start": 2308.74, "end": 2311.4199999999996, "text": " So we have 75 here."}, {"start": 2311.4199999999996, "end": 2313.9399999999996, "text": " So that means we have 75 frames."}, {"start": 2313.9399999999996, "end": 2317.3399999999997, "text": " It's gonna be 10 because bins is gonna be 1,024."}, {"start": 2317.3399999999997, "end": 2319.74, "text": " So this is 10 times the number of frames."}, {"start": 2319.74, "end": 2322.22, "text": " So this is basically the number of bits"}, {"start": 2322.22, "end": 2325.46, "text": " for a single quantizer module."}, {"start": 2325.46, "end": 2327.02, "text": " And then we divide by 1,000"}, {"start": 2327.02, "end": 2329.14, "text": " because we wanna have kilobits, right?"}, {"start": 2329.14, "end": 2332.5, "text": " And so you can see here bandwidth per quantizer."}, {"start": 2332.5, "end": 2337.3799999999997, "text": " And that's gonna be 0.75 kilobits per quantizer, okay?"}, {"start": 2337.3799999999997, "end": 2340.2, "text": " And now what we do is we take the target bandwidth,"}, 
{"start": 2340.2, "end": 2343.46, "text": " which is six kilobits per second,"}, {"start": 2343.46, "end": 2345.18, "text": " and we divide by 0.75,"}, {"start": 2345.18, "end": 2348.2, "text": " and we end up that we need eight of these quantizers"}, {"start": 2348.2, "end": 2350.6, "text": " to achieve that particular target bandwidth, okay?"}, {"start": 2350.6, "end": 2351.56, "text": " And that's it."}, {"start": 2351.56, "end": 2353.66, "text": " So now we actually do the encoding"}, {"start": 2353.66, "end": 2356.8599999999997, "text": " with these eight quantizers."}, {"start": 2356.8599999999997, "end": 2358.1, "text": " So let's see how that's gonna work."}, {"start": 2358.1, "end": 2361.42, "text": " So you can see here, we just take the first eight quantizers"}, {"start": 2361.42, "end": 2365.1, "text": " instead of the full list, which is like 32."}, {"start": 2365.1, "end": 2368.2999999999997, "text": " So remember, we have actually, we have 32 of these,"}, {"start": 2368.2999999999997, "end": 2369.62, "text": " and we can only deploy those"}, {"start": 2369.62, "end": 2372.94, "text": " when we have target bandwidth of 24 kilohertz."}, {"start": 2372.94, "end": 2374.2599999999998, "text": " Okay, so let's enter here."}, {"start": 2374.2599999999998, "end": 2375.92, "text": " Let's see how the encode works."}, {"start": 2375.92, "end": 2376.7799999999997, "text": " So it's fairly simple."}, {"start": 2376.7799999999997, "end": 2379.5, "text": " So first of all, this pre-process"}, {"start": 2379.5, "end": 2381.8399999999997, "text": " is just gonna do some reshaping, nothing fancy there."}, {"start": 2381.8399999999997, "end": 2386.8399999999997, "text": " So we end up with X being of shape 1,528."}, {"start": 2386.84, "end": 2388.56, "text": " And then this is where the fun happens"}, {"start": 2388.56, "end": 2389.88, "text": " in the quantize function."}, {"start": 2389.88, "end": 2391.2200000000003, "text": " So in the quantize function,"}, {"start": 2391.2200000000003, "end": 2393.46, "text": " let me enter the quantize function."}, {"start": 2393.46, "end": 2395.32, "text": " You can see this is the codebook,"}, {"start": 2396.4, "end": 2398.6000000000004, "text": " the codebook table I was mentioning the whole time."}, {"start": 2398.6000000000004, "end": 2401.2000000000003, "text": " So let's see the dimensionality of the codebook table."}, {"start": 2401.2000000000003, "end": 2404.6000000000004, "text": " This is gonna be 1,024 because we have 1,024 vectors,"}, {"start": 2404.6000000000004, "end": 2408.76, "text": " and each of those is 128 dimensionality, right?"}, {"start": 2408.76, "end": 2412.36, "text": " And so what we do here is we basically do,"}, {"start": 2412.36, "end": 2414.6000000000004, "text": " we compute this distance matrix,"}, {"start": 2414.6, "end": 2417.88, "text": " which tells us for each of our output vectors,"}, {"start": 2417.88, "end": 2421.16, "text": " how close are each of these codebook vectors"}, {"start": 2421.16, "end": 2422.2999999999997, "text": " from that particular vector."}, {"start": 2422.2999999999997, "end": 2424.24, "text": " And because of that, we're gonna end up,"}, {"start": 2424.24, "end": 2425.52, "text": " let me just show you this."}, {"start": 2425.52, "end": 2428.88, "text": " Let me show you the dimensionality of the distance matrix."}, {"start": 2428.88, "end": 2433.0, "text": " It's gonna be 1,500, 1,024, because as I said,"}, {"start": 2433.0, "end": 2435.04, "text": " you have for each of the rows,"}, 
{"start": 2435.04, "end": 2438.3199999999997, "text": " you have the columns telling you how far away"}, {"start": 2438.3199999999997, "end": 2440.08, "text": " is that particular codebook vector"}, {"start": 2440.08, "end": 2443.8399999999997, "text": " from that particular output encoder vector, right?"}, {"start": 2443.84, "end": 2447.92, "text": " And because of this minus sign, if we do a max,"}, {"start": 2447.92, "end": 2451.0, "text": " we actually find the closest vectors, and that's it."}, {"start": 2451.0, "end": 2453.76, "text": " So we end up with, we basically end up with,"}, {"start": 2453.76, "end": 2456.36, "text": " as you can see here, 1,500 indices,"}, {"start": 2456.36, "end": 2458.2000000000003, "text": " and let me print the actual numbers."}, {"start": 2458.2000000000003, "end": 2461.48, "text": " So those are the indices of the closest vectors"}, {"start": 2461.48, "end": 2464.2000000000003, "text": " in those code tables."}, {"start": 2464.2000000000003, "end": 2465.32, "text": " Hopefully that makes sense."}, {"start": 2465.32, "end": 2468.92, "text": " And again, post-process just does some reshaping,"}, {"start": 2468.92, "end": 2470.76, "text": " and we are done, guys."}, {"start": 2470.76, "end": 2471.6400000000003, "text": " That's it."}, {"start": 2471.6400000000003, "end": 2472.84, "text": " So now we do the decoding."}, {"start": 2472.84, "end": 2474.2000000000003, "text": " That's the second interesting part."}, {"start": 2474.2000000000003, "end": 2476.2400000000002, "text": " So let's enter it into the decoding part."}, {"start": 2476.2400000000002, "end": 2478.1200000000003, "text": " So let's see what happens there."}, {"start": 2478.1200000000003, "end": 2482.96, "text": " So decoding is gonna call this Dequantize method,"}, {"start": 2482.96, "end": 2486.6800000000003, "text": " and Dequantize is basically just gonna fetch the indices,"}, {"start": 2486.6800000000003, "end": 2489.88, "text": " and you can see here, we just grab the vectors"}, {"start": 2489.88, "end": 2493.6800000000003, "text": " from that table that correspond to those indices,"}, {"start": 2493.6800000000003, "end": 2497.8, "text": " and we end up with the quantized vectors."}, {"start": 2497.8, "end": 2500.96, "text": " Whoops, let me just show you the dimensionality here."}, {"start": 2500.96, "end": 2503.32, "text": " We have 1,500, 128."}, {"start": 2503.32, "end": 2506.2400000000002, "text": " So that's when we snap our output vectors"}, {"start": 2506.2400000000002, "end": 2508.2400000000002, "text": " onto the closest vectors in the code table."}, {"start": 2508.2400000000002, "end": 2509.96, "text": " That's what happened there."}, {"start": 2509.96, "end": 2511.96, "text": " Okay, and then projections identity,"}, {"start": 2511.96, "end": 2514.84, "text": " so we can ignore this, and that's pretty much it."}, {"start": 2514.84, "end": 2517.4, "text": " And now that we have the quantized vectors,"}, {"start": 2517.4, "end": 2519.8, "text": " we do the residual minus quantized,"}, {"start": 2519.8, "end": 2523.88, "text": " and then we save the indices from the first quantizer,"}, {"start": 2523.88, "end": 2525.68, "text": " and we repeat the process, right?"}, {"start": 2525.68, "end": 2527.4, "text": " We now take the second code book,"}, {"start": 2527.4, "end": 2530.2400000000002, "text": " and we take the residuals, and we just encode them,"}, {"start": 2530.24, "end": 2531.64, "text": " and rinse and repeat."}, {"start": 2531.64, "end": 2534.4799999999996, "text": 
" So I'm gonna now disable all of the breakpoints,"}, {"start": 2534.4799999999996, "end": 2537.68, "text": " and I'm gonna just put the breakpoint here, hit F5."}, {"start": 2537.68, "end": 2540.16, "text": " I'm gonna re-enable the breakpoints, and that's it."}, {"start": 2540.16, "end": 2543.16, "text": " So the out indices now are 1,508."}, {"start": 2545.3199999999997, "end": 2547.0, "text": " So that means for each of the frames,"}, {"start": 2547.0, "end": 2548.9199999999996, "text": " we now have eight numbers."}, {"start": 2548.9199999999996, "end": 2552.04, "text": " That's how we encoded our output of the encoder."}, {"start": 2552.04, "end": 2556.72, "text": " Okay, let's continue here, and that's pretty much it."}, {"start": 2556.72, "end": 2560.9599999999996, "text": " We can now keep on exiting here,"}, {"start": 2560.9599999999996, "end": 2562.8399999999997, "text": " and we have everything we need."}, {"start": 2562.8399999999997, "end": 2566.9199999999996, "text": " We have the data, the compressed version of our audio file."}, {"start": 2566.9199999999996, "end": 2569.3599999999997, "text": " We now write some metadata into the header"}, {"start": 2569.3599999999997, "end": 2570.7599999999998, "text": " of this ecdc file."}, {"start": 2570.7599999999998, "end": 2572.08, "text": " I'm gonna ignore that completely,"}, {"start": 2572.08, "end": 2574.2, "text": " not that important for us."}, {"start": 2574.2, "end": 2576.9199999999996, "text": " And then because we're not using LM,"}, {"start": 2576.9199999999996, "end": 2579.52, "text": " we are gonna construct this bitpacker."}, {"start": 2579.52, "end": 2580.8799999999997, "text": " And so why the bitpacker?"}, {"start": 2580.8799999999997, "end": 2584.04, "text": " So again, the problem is these numbers"}, {"start": 2584.04, "end": 2587.4, "text": " are gonna be either ints or floats."}, {"start": 2587.4, "end": 2588.8, "text": " Let me just see for a second."}, {"start": 2588.8, "end": 2591.88, "text": " Let me tell you the exact thing here."}, {"start": 2591.88, "end": 2596.2, "text": " So frames, again, frames shape is,"}, {"start": 2596.2, "end": 2597.24, "text": " okay, frames is a list."}, {"start": 2597.24, "end": 2598.52, "text": " Let me just see."}, {"start": 2598.52, "end": 2600.7599999999998, "text": " Okay, it's a list, and then I think I just have"}, {"start": 2600.7599999999998, "end": 2603.12, "text": " to do something like this."}, {"start": 2603.12, "end": 2605.2, "text": " Yeah, okay, so here are the frames."}, {"start": 2605.2, "end": 2608.08, "text": " And if we do the D type, let's see the data type."}, {"start": 2608.08, "end": 2609.88, "text": " So it's gonna be int 64."}, {"start": 2609.88, "end": 2611.92, "text": " So that means each of these eight numbers"}, {"start": 2611.92, "end": 2615.7200000000003, "text": " currently take 64 bits, even though we only need 10 bits."}, {"start": 2615.7200000000003, "end": 2617.6, "text": " And because of that, we only,"}, {"start": 2617.6, "end": 2620.12, "text": " because we only, out of those 64 bits,"}, {"start": 2620.12, "end": 2623.48, "text": " only lower 10 bits have the valuable information."}, {"start": 2623.48, "end": 2625.32, "text": " That's where this bitpacker comes into play"}, {"start": 2625.32, "end": 2627.52, "text": " to handle the low level details."}, {"start": 2627.52, "end": 2629.84, "text": " I did debug it and understand how it works,"}, {"start": 2629.84, "end": 2632.88, "text": " but it's gonna be, I think, redundant"}, {"start": 
2632.88, "end": 2634.08, "text": " if I do that in this video,"}, {"start": 2634.08, "end": 2637.2400000000002, "text": " because you can do it at your own pace,"}, {"start": 2637.2400000000002, "end": 2640.8, "text": " but nothing fundamental there other than low level details"}, {"start": 2640.8, "end": 2642.6800000000003, "text": " of how do you extract the 10 bits"}, {"start": 2642.6800000000003, "end": 2643.6000000000004, "text": " that are actually valuable"}, {"start": 2643.6000000000004, "end": 2647.04, "text": " and pack them into the output bit by stream."}, {"start": 2647.04, "end": 2648.88, "text": " Okay, and that's it."}, {"start": 2648.88, "end": 2651.6800000000003, "text": " So here, what we do, we just keep on pushing those values,"}, {"start": 2651.6800000000003, "end": 2653.88, "text": " and values are coming from the frames."}, {"start": 2653.88, "end": 2655.76, "text": " And so basically, that's it."}, {"start": 2655.76, "end": 2658.52, "text": " We're just taking those numbers from this structure here,"}, {"start": 2658.52, "end": 2662.92, "text": " and we're just basically encoding them into this by stream."}, {"start": 2662.92, "end": 2665.5600000000004, "text": " And so let me hit F5 here."}, {"start": 2665.5600000000004, "end": 2667.04, "text": " Let me remove this thing."}, {"start": 2667.04, "end": 2668.92, "text": " And we are done."}, {"start": 2668.92, "end": 2669.76, "text": " We are done."}, {"start": 2669.76, "end": 2671.0, "text": " We can exit here."}, {"start": 2671.0, "end": 2671.92, "text": " We can exit here."}, {"start": 2671.92, "end": 2674.7200000000003, "text": " We just write the bytes, and that's it, guys."}, {"start": 2674.7200000000003, "end": 2675.5400000000004, "text": " So that's it."}, {"start": 2675.5400000000004, "end": 2678.0, "text": " So let me just find the file."}, {"start": 2678.0, "end": 2680.1200000000003, "text": " So the file is here."}, {"start": 2680.1200000000003, "end": 2684.28, "text": " As you can see here, test24k.ecdc is again created,"}, {"start": 2684.28, "end": 2689.28, "text": " and that contains our compressed audio file."}, {"start": 2689.32, "end": 2693.96, "text": " You can see that we started from 937 kilobytes,"}, {"start": 2693.96, "end": 2697.6800000000003, "text": " and we ended up with 14.7 kilobytes."}, {"start": 2697.6800000000003, "end": 2698.6600000000003, "text": " Does that make sense?"}, {"start": 2698.6600000000003, "end": 2699.5, "text": " Let's see."}, {"start": 2699.5, "end": 2704.5, "text": " So we have 14.7 times, I guess, eight,"}, {"start": 2706.2, "end": 2709.68, "text": " and this is how many kilobits we have."}, {"start": 2709.68, "end": 2714.68, "text": " And that's roughly, well, that's roughly 20 times six, right?"}, {"start": 2715.46, "end": 2719.12, "text": " Which is 120, because we have six kilobits per second"}, {"start": 2719.12, "end": 2721.92, "text": " target bandwidth, and we have 20 seconds,"}, {"start": 2721.92, "end": 2722.84, "text": " so that's roughly it."}, {"start": 2722.84, "end": 2725.66, "text": " So everything basically fits."}, {"start": 2725.66, "end": 2728.96, "text": " Okay, so now I'm gonna go to the launch file."}, {"start": 2728.96, "end": 2733.12, "text": " I'm gonna quickly add the LM, and I'm kinda afraid"}, {"start": 2733.12, "end": 2735.6, "text": " that I will not be able to explain this,"}, {"start": 2735.6, "end": 2738.68, "text": " to give it justice, but I'm gonna try nonetheless,"}, {"start": 2739.84, "end": 2742.16, "text": " because I think it 
would require quite a lot of time"}, {"start": 2742.16, "end": 2744.64, "text": " to actually explain all of the details"}, {"start": 2744.64, "end": 2746.56, "text": " behind how and why it works."}, {"start": 2747.66, "end": 2752.16, "text": " Okay, so now I'm gonna start this again,"}, {"start": 2752.16, "end": 2754.96, "text": " and let's just see the diff between the last ones."}, {"start": 2754.96, "end": 2758.5, "text": " I'm not gonna focus on the things that are the same,"}, {"start": 2758.5, "end": 2761.28, "text": " obviously, I just wanna focus on the differences."}, {"start": 2761.28, "end": 2766.28, "text": " So the differences here are gonna be only,"}, {"start": 2766.88, "end": 2770.04, "text": " as you can see here, we passed the LM argument"}, {"start": 2770.04, "end": 2774.24, "text": " to the compressed function, so I'm gonna disable"}, {"start": 2774.24, "end": 2776.16, "text": " the breakpoints, I'm gonna enable this one."}, {"start": 2776.16, "end": 2778.0, "text": " I'm gonna hit F5."}, {"start": 2778.0, "end": 2782.16, "text": " Whoops, again, I forgot to delete the file, god damn it."}, {"start": 2782.16, "end": 2787.08, "text": " So I'm gonna delete this one, and I'm gonna restart."}, {"start": 2787.08, "end": 2790.64, "text": " So let's hit, you can add the force argument"}, {"start": 2790.64, "end": 2793.34, "text": " to avoid having these errors, but I was lazy,"}, {"start": 2793.34, "end": 2794.88, "text": " so I kinda skipped it."}, {"start": 2794.88, "end": 2796.92, "text": " So I'm gonna enable the breakpoints,"}, {"start": 2796.92, "end": 2799.56, "text": " so let's now enter this, let's enter here,"}, {"start": 2799.56, "end": 2803.68, "text": " and let's see, so use LM is where the differences are."}, {"start": 2803.68, "end": 2804.88, "text": " So there's a couple of places"}, {"start": 2804.88, "end": 2805.84, "text": " where there are the differences."}, {"start": 2805.84, "end": 2809.88, "text": " So I'm gonna get here first, so let's see"}, {"start": 2809.88, "end": 2811.64, "text": " what the get LM model does."}, {"start": 2811.64, "end": 2815.48, "text": " So basically we instantiate a language model here."}, {"start": 2815.48, "end": 2818.76, "text": " So let's see how that thing is implemented."}, {"start": 2818.76, "end": 2821.9, "text": " I'm gonna hit F5 here, here we are."}, {"start": 2821.9, "end": 2825.2, "text": " We have something called Streaming Transformer Encoder,"}, {"start": 2825.2, "end": 2828.28, "text": " and we have, as you can see here, 32 of these"}, {"start": 2828.28, "end": 2831.7, "text": " embedding tables, and we also have 32 of these"}, {"start": 2831.7, "end": 2834.96, "text": " linear layers that project from the 200,"}, {"start": 2834.96, "end": 2838.28, "text": " which is the dimensionality of the transformer,"}, {"start": 2838.28, "end": 2842.2, "text": " to 1024, because that's the cardinality,"}, {"start": 2842.2, "end": 2844.26, "text": " that's what card stands for, the cardinality"}, {"start": 2844.26, "end": 2846.28, "text": " or the size of the codebook table."}, {"start": 2846.28, "end": 2848.4, "text": " That's why we have 1024 there, right?"}, {"start": 2849.6400000000003, "end": 2852.92, "text": " So I'm gonna, okay, I have a breakpoint here already."}, {"start": 2852.92, "end": 2855.0, "text": " Let me see whether there is something interesting"}, {"start": 2855.0, "end": 2857.32, "text": " to mention here, let's quickly walk through"}, {"start": 2857.32, "end": 2859.5200000000004, "text": " the 
transformer here."}, {"start": 2861.8, "end": 2865.0, "text": " Okay, basically there are these"}, {"start": 2865.0, "end": 2869.1000000000004, "text": " Streaming Transformer Encoder Layer modules,"}, {"start": 2869.1000000000004, "end": 2871.84, "text": " and let me just see whether those are interesting."}, {"start": 2871.84, "end": 2873.7000000000003, "text": " So those are basically just, as you can see here,"}, {"start": 2873.7, "end": 2876.4399999999996, "text": " just stock PyTorch Transformer Encoder Layer"}, {"start": 2876.4399999999996, "end": 2880.2799999999997, "text": " with some differences, like, yeah."}, {"start": 2881.7, "end": 2885.1, "text": " This might be too complex to explain in this shorter video,"}, {"start": 2886.3999999999996, "end": 2888.3199999999997, "text": " but in any case, I'm gonna skip this,"}, {"start": 2888.3199999999997, "end": 2889.96, "text": " I'm gonna skip the instantiation,"}, {"start": 2889.96, "end": 2891.72, "text": " so we just have a transformer,"}, {"start": 2891.72, "end": 2893.2, "text": " we create those embedding tables,"}, {"start": 2893.2, "end": 2895.24, "text": " we create the linear layers, and we continue here."}, {"start": 2895.24, "end": 2897.06, "text": " So that's the LM, okay?"}, {"start": 2897.06, "end": 2900.46, "text": " So, now what happens is we'll load"}, {"start": 2900.46, "end": 2903.2599999999998, "text": " the actual pre-trained weights."}, {"start": 2903.26, "end": 2905.6400000000003, "text": " We put it into the eval mode,"}, {"start": 2905.6400000000003, "end": 2908.48, "text": " and we return the language model, okay."}, {"start": 2908.48, "end": 2912.28, "text": " So now we do the encoding the same way as before."}, {"start": 2912.28, "end": 2913.6200000000003, "text": " So nothing changes here."}, {"start": 2913.6200000000003, "end": 2918.6200000000003, "text": " We still end up with 1500 times eight numbers."}, {"start": 2919.76, "end": 2921.1600000000003, "text": " So let me show you that."}, {"start": 2921.1600000000003, "end": 2925.7200000000003, "text": " So we have, oops, again, I have to first do the zero,"}, {"start": 2925.7200000000003, "end": 2930.0, "text": " zero indexing, and then we end up with 1508,"}, {"start": 2930.0, "end": 2932.2200000000003, "text": " so that's the same shape as before."}, {"start": 2932.22, "end": 2935.6, "text": " And so now, let's see where the difference is."}, {"start": 2935.6, "end": 2936.9199999999996, "text": " So the first difference is here."}, {"start": 2936.9199999999996, "end": 2939.9199999999996, "text": " We formed this thing called Arithmetic Coder,"}, {"start": 2939.9199999999996, "end": 2943.0, "text": " and it's fairly complex to explain,"}, {"start": 2943.0, "end": 2945.18, "text": " as you can see just by the description"}, {"start": 2945.18, "end": 2946.56, "text": " of how this thing works,"}, {"start": 2946.56, "end": 2949.72, "text": " and some of the low-level details here"}, {"start": 2949.72, "end": 2951.4199999999996, "text": " of bit shifting, et cetera, et cetera,"}, {"start": 2951.4199999999996, "end": 2955.3999999999996, "text": " but basically in a nutshell, what it does is using the,"}, {"start": 2956.7999999999997, "end": 2959.48, "text": " well, I'm gonna get there in a second,"}, {"start": 2959.48, "end": 2961.56, "text": " so it's gonna be wasteful for me"}, {"start": 2961.56, "end": 2963.12, "text": " to try and explain it like this."}, {"start": 2963.12, "end": 2964.92, "text": " So in any case, we have this Arithmetic Coder."}, 
{"start": 2964.92, "end": 2968.38, "text": " We're gonna see how it works a bit later, okay."}, {"start": 2968.38, "end": 2970.92, "text": " So you can see here, we first initialize the input"}, {"start": 2970.92, "end": 2973.88, "text": " as all zeros, and k is equal to eight,"}, {"start": 2973.88, "end": 2975.68, "text": " which is the number of our quantizers."}, {"start": 2975.68, "end": 2979.68, "text": " So this is kinda, before we get the actual eight numbers"}, {"start": 2979.68, "end": 2983.0, "text": " that are the encoded version"}, {"start": 2983.0, "end": 2986.6, "text": " of the first output vector of the encoder,"}, {"start": 2987.58, "end": 2990.08, "text": " before that, we just assume we have all zeros, right?"}, {"start": 2990.08, "end": 2992.16, "text": " So again, let me just show you what I mean."}, {"start": 2992.16, "end": 2996.4, "text": " Let me find the diagram here."}, {"start": 2998.92, "end": 2999.92, "text": " So here we are."}, {"start": 2999.92, "end": 3003.36, "text": " So again, before we have the actual encoding"}, {"start": 3003.36, "end": 3004.6, "text": " for this first vector,"}, {"start": 3004.6, "end": 3006.36, "text": " so we'll have like four numbers here"}, {"start": 3006.36, "end": 3008.22, "text": " or eight numbers in our particular case,"}, {"start": 3008.22, "end": 3010.46, "text": " we just assume we have all zeros."}, {"start": 3010.46, "end": 3012.12, "text": " We have all zeros,"}, {"start": 3012.12, "end": 3014.52, "text": " and that's what the program assumes here."}, {"start": 3014.52, "end": 3017.14, "text": " Okay, that's what this thing is, okay."}, {"start": 3017.14, "end": 3019.84, "text": " So let's now see how this is gonna work."}, {"start": 3019.84, "end": 3022.84, "text": " So we pass the input, the states,"}, {"start": 3022.84, "end": 3025.96, "text": " and the offset into the language model,"}, {"start": 3025.96, "end": 3028.2000000000003, "text": " and then we just basically do a forward pass"}, {"start": 3028.2000000000003, "end": 3033.2000000000003, "text": " through the model to get the output probabilities."}, {"start": 3033.5, "end": 3034.88, "text": " Okay, let's enter here,"}, {"start": 3034.88, "end": 3037.48, "text": " just to, let me just show you one thing."}, {"start": 3038.32, "end": 3040.84, "text": " Whoops, let me just say one thing,"}, {"start": 3040.84, "end": 3042.6400000000003, "text": " and that's the logits part."}, {"start": 3042.6400000000003, "end": 3045.7200000000003, "text": " So after we just do a forward pass through the transformer,"}, {"start": 3045.7200000000003, "end": 3047.8, "text": " which I'm gonna skip, so I'm gonna skip this part."}, {"start": 3047.8, "end": 3049.2400000000002, "text": " So I'm gonna put breakpoint here."}, {"start": 3049.24, "end": 3052.3199999999997, "text": " So I'm gonna disable everything, enable this one."}, {"start": 3052.3199999999997, "end": 3054.8799999999997, "text": " We're gonna do a forward pass through the transformer,"}, {"start": 3054.8799999999997, "end": 3058.04, "text": " and we end up with the output, okay."}, {"start": 3058.04, "end": 3060.56, "text": " So output is gonna be of the shape, let's see,"}, {"start": 3060.56, "end": 3063.04, "text": " so 200, because that's the inherent,"}, {"start": 3063.04, "end": 3066.0, "text": " that's the inner dimensionality of the transformer."}, {"start": 3066.0, "end": 3068.7999999999997, "text": " So we passed in all zeros,"}, {"start": 3068.7999999999997, "end": 3071.04, "text": " and we basically,"}, 
{"start": 3071.04, "end": 3074.2799999999997, "text": " how we convert those all zeros"}, {"start": 3074.2799999999997, "end": 3077.7599999999998, "text": " into a continuous representation is on this line here."}, {"start": 3077.76, "end": 3080.36, "text": " As you can see here, we just do eight times."}, {"start": 3080.36, "end": 3082.6400000000003, "text": " We grab the number,"}, {"start": 3082.6400000000003, "end": 3086.28, "text": " and we basically grab the corresponding embedding table,"}, {"start": 3086.28, "end": 3088.32, "text": " and then that's how we get the representation,"}, {"start": 3088.32, "end": 3090.1200000000003, "text": " and then we just sum them up."}, {"start": 3090.1200000000003, "end": 3092.36, "text": " And I think that was also explained in the paper here."}, {"start": 3092.36, "end": 3095.7200000000003, "text": " So let me just show you where that thing is explained."}, {"start": 3096.6800000000003, "end": 3099.76, "text": " I think somewhere here, so okay."}, {"start": 3099.76, "end": 3101.5600000000004, "text": " Okay, for a time step T,"}, {"start": 3101.5600000000004, "end": 3103.6800000000003, "text": " the discrete representation obtained at that time,"}, {"start": 3103.6800000000003, "end": 3106.1600000000003, "text": " T minus one is transformed into continuous representation"}, {"start": 3106.16, "end": 3108.08, "text": " using learned embedding tables,"}, {"start": 3108.08, "end": 3110.7999999999997, "text": " one for each codebook, and which are summed."}, {"start": 3110.7999999999997, "end": 3112.08, "text": " So that's basically this line."}, {"start": 3112.08, "end": 3114.6, "text": " This line describes this piece of code here."}, {"start": 3114.6, "end": 3116.64, "text": " So again, let's go back to the diagram."}, {"start": 3116.64, "end": 3118.16, "text": " It's gonna be easier."}, {"start": 3118.16, "end": 3122.64, "text": " So we have these, let's say four numbers here,"}, {"start": 3122.64, "end": 3125.04, "text": " and each of these four numbers is gonna index"}, {"start": 3125.04, "end": 3127.48, "text": " into a particular table."}, {"start": 3127.48, "end": 3132.3599999999997, "text": " So we'll have four tables associated with this transformer,"}, {"start": 3132.3599999999997, "end": 3134.3599999999997, "text": " and we get their corresponding vectors,"}, {"start": 3134.3599999999997, "end": 3135.64, "text": " then we sum them up,"}, {"start": 3135.64, "end": 3139.0, "text": " and then we do a forward pass through the transformer,"}, {"start": 3139.0, "end": 3142.2, "text": " and we end up with a 200-dimensional vector."}, {"start": 3142.2, "end": 3143.8399999999997, "text": " That's what we do there, okay?"}, {"start": 3144.68, "end": 3145.96, "text": " That's the idea."}, {"start": 3145.96, "end": 3148.7999999999997, "text": " And now we just map those using those linear layers"}, {"start": 3148.7999999999997, "end": 3151.3199999999997, "text": " I mentioned before that are gonna have the output"}, {"start": 3151.3199999999997, "end": 3153.8399999999997, "text": " dimensionality being equal to the cardinality"}, {"start": 3153.8399999999997, "end": 3155.64, "text": " of the codebook, which is 1,024."}, {"start": 3157.0, "end": 3159.7999999999997, "text": " So if I do just a forward step over this,"}, {"start": 3159.7999999999997, "end": 3161.8399999999997, "text": " we're gonna end up with logits."}, {"start": 3161.8399999999997, "end": 3163.3599999999997, "text": " Here are the shape."}, {"start": 3163.3599999999997, "end": 
3165.6, "text": " The shape is 1,024, eight."}, {"start": 3165.6, "end": 3168.2799999999997, "text": " And that makes sense because for each of the eight numbers"}, {"start": 3168.2799999999997, "end": 3171.68, "text": " that we are supposed to output for this particular time step,"}, {"start": 3171.68, "end": 3176.44, "text": " we have a distribution, which is 1,024 bins,"}, {"start": 3176.44, "end": 3181.24, "text": " across our codebook table, corresponding codebook table."}, {"start": 3181.24, "end": 3182.52, "text": " I know this might be confusing,"}, {"start": 3182.52, "end": 3187.24, "text": " so you'll have to go through the code base at your own pace,"}, {"start": 3187.24, "end": 3189.24, "text": " but hopefully this helps you somewhat."}, {"start": 3190.3199999999997, "end": 3192.0, "text": " Okay, so probabilities are there."}, {"start": 3192.0, "end": 3193.92, "text": " So what we do next is this."}, {"start": 3193.92, "end": 3197.36, "text": " So we have this, we grab the next eight numbers"}, {"start": 3197.36, "end": 3198.64, "text": " and we add one."}, {"start": 3198.64, "end": 3203.12, "text": " And one is added because the zeros, as you see here,"}, {"start": 3203.12, "end": 3204.6800000000003, "text": " have a special connotation."}, {"start": 3204.6800000000003, "end": 3206.64, "text": " They're just like initial values,"}, {"start": 3206.64, "end": 3211.04, "text": " which cannot actually be, we can't get zero as the output."}, {"start": 3211.04, "end": 3212.88, "text": " And that's why there is this plus one here."}, {"start": 3212.88, "end": 3214.6800000000003, "text": " Anyways, a small detail there."}, {"start": 3215.52, "end": 3217.0, "text": " So here is the complex part."}, {"start": 3217.0, "end": 3219.84, "text": " We build this quantized CDF,"}, {"start": 3219.84, "end": 3222.0, "text": " which is a cumulative distribution function."}, {"start": 3222.0, "end": 3225.52, "text": " And basically how it works is that the higher the probability"}, {"start": 3225.52, "end": 3228.36, "text": " at the particular bin, the bigger the range"}, {"start": 3228.36, "end": 3230.6, "text": " that particular bin is gonna occupy."}, {"start": 3230.6, "end": 3235.08, "text": " So again, I told you this is gonna be fairly complex."}, {"start": 3235.08, "end": 3238.44, "text": " So it says here, turn the given PDF into a quantized CDF"}, {"start": 3238.44, "end": 3242.32, "text": " that splits this particular range into chunks of size"}, {"start": 3242.32, "end": 3245.4, "text": " roughly proportional to the PDF, right?"}, {"start": 3245.4, "end": 3246.96, "text": " And they use 24 bits here."}, {"start": 3246.96, "end": 3248.64, "text": " So that means this is gonna be"}, {"start": 3248.64, "end": 3251.08, "text": " two raised to the power of 24 minus one."}, {"start": 3251.08, "end": 3254.7599999999998, "text": " Again, let me try and quickly draw this,"}, {"start": 3254.7599999999998, "end": 3257.0, "text": " and I think we are almost done with the video."}, {"start": 3258.12, "end": 3260.4, "text": " Okay, so here is how it's gonna look like."}, {"start": 3260.4, "end": 3263.24, "text": " So we get as the output, we get some distribution."}, {"start": 3263.24, "end": 3264.7999999999997, "text": " So let's say it looks like this."}, {"start": 3265.64, "end": 3269.92, "text": " Okay, and so what this is gonna be mapped into"}, {"start": 3269.92, "end": 3272.0, "text": " is this range."}, {"start": 3272.0, "end": 3274.0, "text": " So this range is gonna be from zero"}, 
{"start": 3274.0, "end": 3278.68, "text": " to two raised to the power of 24 minus one, okay?"}, {"start": 3278.68, "end": 3281.24, "text": " And then we're trying to split this range."}, {"start": 3282.24, "end": 3286.6, "text": " And so here, initially, because we have a peak here,"}, {"start": 3286.6, "end": 3290.24, "text": " that means that this range is gonna be huge, okay?"}, {"start": 3291.2799999999997, "end": 3293.52, "text": " Let me just change the color here."}, {"start": 3293.52, "end": 3294.8799999999997, "text": " This range is gonna be huge."}, {"start": 3294.8799999999997, "end": 3296.6, "text": " And then where we have lower probability,"}, {"start": 3296.6, "end": 3299.24, "text": " we'll have much smaller, whoops,"}, {"start": 3299.24, "end": 3300.72, "text": " let me change the color again."}, {"start": 3300.72, "end": 3302.8399999999997, "text": " So we'll have much smaller ranges here."}, {"start": 3302.8399999999997, "end": 3304.8399999999997, "text": " So these ranges here are gonna be small,"}, {"start": 3305.8799999999997, "end": 3307.9199999999996, "text": " but these ones are gonna be bigger,"}, {"start": 3307.92, "end": 3310.08, "text": " and they're gonna slowly diminish here."}, {"start": 3310.08, "end": 3312.04, "text": " So that's the mapping that's happening"}, {"start": 3312.04, "end": 3313.7200000000003, "text": " in that particular function I showed you."}, {"start": 3313.7200000000003, "end": 3314.92, "text": " And these are all of the details"}, {"start": 3314.92, "end": 3316.96, "text": " behind the arithmetic encoding."}, {"start": 3316.96, "end": 3320.56, "text": " And so you'll have to dig into that a bit deeper yourself"}, {"start": 3320.56, "end": 3322.88, "text": " in case you're curious, but that's it."}, {"start": 3322.88, "end": 3324.36, "text": " I'm gonna go back here."}, {"start": 3324.36, "end": 3327.84, "text": " So we end up with the CDF, and let me just show you."}, {"start": 3327.84, "end": 3332.0, "text": " So CDF again, the shape here is 1,024 again."}, {"start": 3332.0, "end": 3335.44, "text": " But you can see here that we just have the numbers"}, {"start": 3335.44, "end": 3339.08, "text": " from zero to two raised to the power of 24 minus one."}, {"start": 3339.08, "end": 3343.56, "text": " And you can see here that the differences here,"}, {"start": 3344.68, "end": 3347.32, "text": " if they are super big, that means that the corresponding bin"}, {"start": 3347.32, "end": 3350.08, "text": " in the PDF had a higher probability."}, {"start": 3350.08, "end": 3351.56, "text": " And that's the idea."}, {"start": 3351.56, "end": 3354.04, "text": " And then we use this arithmetic coder,"}, {"start": 3354.04, "end": 3356.04, "text": " and we push the value."}, {"start": 3356.04, "end": 3357.84, "text": " Whoops, here it is."}, {"start": 3357.84, "end": 3359.56, "text": " But we also pass the CDF,"}, {"start": 3359.56, "end": 3363.36, "text": " because the arithmetic coder requires that information"}, {"start": 3363.36, "end": 3366.52, "text": " of the ranges in order to efficiently encode this."}, {"start": 3366.52, "end": 3371.52, "text": " And this is gonna occupy less than 10 bits in expectation."}, {"start": 3371.52, "end": 3373.7200000000003, "text": " So before, because we had eight numbers,"}, {"start": 3373.7200000000003, "end": 3376.04, "text": " the best we could do was eight times 10 bits,"}, {"start": 3376.04, "end": 3376.88, "text": " that's 80 bits."}, {"start": 3376.88, "end": 3379.2000000000003, "text": " Here we can 
go below 80 bits."}, {"start": 3379.2000000000003, "end": 3381.84, "text": " And that's the magic behind the arithmetic coder."}, {"start": 3381.84, "end": 3385.52, "text": " So I'm gonna basically enter this"}, {"start": 3385.52, "end": 3387.46, "text": " just to show you the complexity,"}, {"start": 3387.46, "end": 3390.48, "text": " but I'm not gonna try and explain everything here."}, {"start": 3390.48, "end": 3392.08, "text": " Let me hit F5 here."}, {"start": 3392.08, "end": 3396.64, "text": " And so you can see here how the numbers are,"}, {"start": 3396.64, "end": 3399.4, "text": " how the range information here is used"}, {"start": 3399.4, "end": 3404.4, "text": " to basically finally here get the actual encoding"}, {"start": 3406.96, "end": 3410.16, "text": " for this particular number and this CDF."}, {"start": 3410.16, "end": 3415.16, "text": " So you can see here, we are packing bit by bit."}, {"start": 3415.24, "end": 3419.3199999999997, "text": " This packer is the same packer as the one we had before,"}, {"start": 3419.3199999999997, "end": 3421.12, "text": " but instead of using 10 bits,"}, {"start": 3421.12, "end": 3422.8399999999997, "text": " instead of pushing 10 bits into the stream,"}, {"start": 3422.8399999999997, "end": 3424.96, "text": " it pushes just a single bit into the stream."}, {"start": 3424.96, "end": 3427.1, "text": " And, but in expectation,"}, {"start": 3427.1, "end": 3431.24, "text": " this while loop is gonna have less than 10 iterations,"}, {"start": 3431.24, "end": 3434.08, "text": " i.e. 10 bits per symbol."}, {"start": 3434.08, "end": 3436.92, "text": " And that's why we're gonna be able to"}, {"start": 3436.92, "end": 3439.6, "text": " basically compress this even more"}, {"start": 3439.6, "end": 3444.6, "text": " than using the, than not using the arithmetic coding."}, {"start": 3444.9, "end": 3446.8399999999997, "text": " So I'm gonna exit here."}, {"start": 3446.8399999999997, "end": 3448.2799999999997, "text": " I'm gonna exit here."}, {"start": 3448.28, "end": 3452.84, "text": " And basically, I'm gonna remove all of the breakpoints,"}, {"start": 3452.84, "end": 3457.84, "text": " put a breakpoint here, hit F5, and that's it."}, {"start": 3458.2000000000003, "end": 3460.6400000000003, "text": " So you can see it was a bit slower."}, {"start": 3460.6400000000003, "end": 3462.6000000000004, "text": " And that's the trade off you have to take"}, {"start": 3462.6000000000004, "end": 3466.7200000000003, "text": " if you wanna have even higher compression ratio."}, {"start": 3466.7200000000003, "end": 3467.78, "text": " And that's it, guys."}, {"start": 3467.78, "end": 3469.26, "text": " Everything else remains the same."}, {"start": 3469.26, "end": 3472.0, "text": " We just write the output bytes, and that's it."}, {"start": 3472.0, "end": 3476.26, "text": " So hopefully this combination of using the paper"}, {"start": 3476.26, "end": 3478.6800000000003, "text": " and the video was helpful."}, {"start": 3478.6800000000003, "end": 3481.28, "text": " Do let me know whether you have some questions."}, {"start": 3481.28, "end": 3482.96, "text": " I know I had to kinda rush through"}, {"start": 3482.96, "end": 3485.36, "text": " some of the details behind it,"}, {"start": 3485.36, "end": 3487.5200000000004, "text": " but hopefully this gives you some intuition,"}, {"start": 3487.5200000000004, "end": 3489.4, "text": " some understanding."}, {"start": 3489.4, "end": 3491.32, "text": " Do leave some comments down below."}, {"start": 3491.32, "end": 
3492.6400000000003, "text": " Any feedback you might have for me,"}, {"start": 3492.6400000000003, "end": 3493.96, "text": " it will be super useful."}, {"start": 3493.96, "end": 3496.7200000000003, "text": " And guys, until next time, bye bye."}]
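The segments above describe how the language model's per-symbol probabilities are turned into a quantized CDF over a 24-bit integer range before being handed to the arithmetic coder. As a rough illustration of that one step (this is not the EnCodec repository's code, and `build_quantized_cdf` is just an illustrative name), here is a minimal PyTorch sketch showing why a peaked distribution ends up owning a very wide chunk of the range, which is why such symbols cost fewer bits in expectation:

```python
import torch

TOTAL_RANGE_BITS = 24  # the walkthrough mentions a range of 2**24 - 1

def build_quantized_cdf(pdf: torch.Tensor, total_range_bits: int = TOTAL_RANGE_BITS) -> torch.Tensor:
    """Turn a 1-D probability vector into a monotonically increasing integer CDF.

    Every bin gets a width of at least 1 so that no symbol ends up with an
    empty range (a corner case real implementations also have to handle).
    """
    total = (1 << total_range_bits) - 1
    pdf = pdf / pdf.sum()                          # normalize, just in case
    widths = (pdf * total).floor().clamp(min=1)    # chunk sizes roughly proportional to the PDF
    return torch.cumsum(widths, dim=0).long()

# Toy example: a peaked distribution over a tiny 8-entry "codebook".
probs = torch.tensor([0.55, 0.20, 0.10, 0.05, 0.04, 0.03, 0.02, 0.01])
cdf = build_quantized_cdf(probs)
widths = torch.diff(cdf, prepend=torch.zeros(1, dtype=torch.long))
print(widths)  # the most likely symbol occupies by far the widest chunk of the 24-bit range
```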
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=uNe5QGOJykE
Ask Me Anything: A Simple Strategy For Prompting Language Models | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this short video I cover the "Ask Me Anything: A Simple Strategy For Prompting Language Models" paper introducing a scalable and effective way to prompt language models. Using this method they manage to outperform GPT3-175B using the open-source GPT-Neo-6B model! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2210.02441 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:09 Pipeline overview 09:05 Results 09:30 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #prompting #promptengineering #languagemodels
What's up guys, Alex here. In this video I wanna briefly go through this new paper on prompting called Ask Me Anything: A Simple Strategy for Prompting Language Models. They show amazing results, so let me read it to you straight away. So: we evaluate this approach, AMA, across open-source model families such as Neo, BLOOM, OPT, and T0, and sizes of 125 million to 175 billion parameters, demonstrating an average performance lift of 10.2% over the few-shot baseline. This simple strategy enables the open-source GPT-Neo 6B model to match and exceed the performance of few-shot GPT-3 175B on 15 out of 20 popular benchmarks. So that's huge. And in this video I'm gonna briefly walk you through the idea behind the paper. I wanna also briefly mention that I've covered each of these open-source models on my channel. I have the papers covered and I also have code walkthrough videos for a couple of them. So if you wanna check those out, I'm gonna link them somewhere there in the video card. Okay, so let's see what the paper is about. So they say here: to mitigate the high degree of effort involved in prompting, we instead ask whether collecting multiple effective yet imperfect prompts and aggregating them can lead to a high-quality prompting strategy. So as you probably all know, people spend a lot of time finding the exact right prompt that's gonna give them the best result on a particular task. And that's bothersome and takes a lot of time. So there is a need for methods that are automatic and that can reliably give better performance across a suite of tasks. And that's exactly what this approach does. So: our approach recursively uses the LLM to transform task inputs into the effective QA format. We apply these prompts to collect several noisy votes for the input's true labels. You can see already it's gonna be some form of aggregating the outputs of imperfect prompts. And what they do is not majority voting, they use something a bit more delicate. We're gonna see the details a bit later. But here is the actual high-level diagram. So on the left-hand side, you can see the input example. So that's the prompt that you would pass, actually tokenize it, you would pass that to an LLM, and then you would sample out the answer to your task. And so let me kind of read it to you. So: is the following claim true or false given the context? And the context is the following: John and his friends went to the theater and saw Jurassic Park. And then the claim is John went to the park, and the model is supposed to answer that this is a false claim given this context. So one thing you could do is you could basically, well, give it multiple shots, multiple examples of how to handle a similar problem like this one. And that usually requires a lot of, as I said, prompt engineering. So here is the alternative approach that they suggest. So the idea is to have multiple of these chains. You can see here, they have three chains in this particular image. And each of these chains is gonna have slight variations in the way they form these claims and questions, et cetera, et cetera. And well, I can show you that immediately. Let me show you what the idea is. So here are some of the variations they do. So for example, you can vary the in-context demonstrations. Here's an example. So instead of having this claim, Jack camped with Mark, did Jack camp with Mark, whatever, they just kind of put in different claims and questions.
And the idea of this one in particular is to learn how to, given a claim, formulate a question. And you can see, you can just vary that, or you can also just vary how you form the question. So here, Jack camped with Mark, you have did Jack camp with Mark. And instead here, they vary the question style to wh, so the who, what, where, et cetera questions. As you can see here, they formulate who did Jack camp with, or what was not hard. So those are some of the variations that each of these chains is gonna contain. So now let me go back to the main diagram. So once we have those chains, let's see what the idea is. The idea comes from their finding, let me go there. So they find the prompts that encourage open-ended answers, like where did John go, to be more effective than prompts that restrict the model output to particular tokens, for example, John went to the park, output true or false. So this is more restrictive as opposed to being more open-ended. And because of this realization, they try and reformulate, via this pipeline, the input task into this more open-ended Q&A format, that's the idea. So you can see here how they are basically prompting the model to formulate the question. So here is a claim, Jack camped with Mark, and then formulate the question, did Jack camp with Mark? And then another pair, another shot here, and then finally, we take our actual claim from our input example, John went to the park, and we prompt the model, hey, give us the question, given the shots that we have shown you before, okay? So that's the idea, we give you some examples, do the same thing. And then the model outputs, did John go to the park? Okay, and so now, this is the first stage of this pipeline, the so-called question prompt. And then the second part is, as you can see here, answer the question from context, okay? So context, Joe's birthday was yesterday, then the model is supposed to first output the question, so instead of going directly from context and claim into the answer, they instead go through the question as the intermediate step. And that, they found out, gives better results. Okay, a little bit of alchemy and dark magic here. And so now what happens is we give the actual context, which is John and his friends went to the theater and saw Jurassic Park, you can see that's the same context as the one in our actual input example. And then we give the actual output from the previous step, from stage one, the question prompt. And so we plug that in, and then we get the answer. So this is how we get to the actual answer. So instead of just inputting this and getting the answer, we instead first formulate the question, then insert the question here, and only then do we answer, okay? So that's the first part of this method. The second part is you somehow have to combine the outputs of multiple chains. And what they do is not a simple majority voting, which they show underperforms this method. So majority voting would be, for example, if we had three of these chains giving false, true, and false, then the majority voting would simply say false. Well, in this particular example, the output is the same for their method, but as you can see here, they infer the underlying graph structure between the chains. And so basically, because these prompts, these chains, are not independent, they try and figure out the dependency and how the output will correlate depending on the changes in the input, okay? Okay, so let's continue here. As I said, let me read this to you.
So: we propose the use of weak supervision to reliably aggregate predictions. We find that the errors produced by the predictions of different chains can be highly varying and correlated, right, and so while majority voting may do well on certain sets of prompts, it performs poorly in the above cases, okay? And additionally, worth pointing out is that this is not a panacea. There are certain tasks that this method prefers. So they say here: we find the largest gains are on the tasks where the knowledge required to complete the task is found in the provided context, okay? And comparatively less on closed-book tasks, for example, factual recall. So the idea is, if the answer is already somewhere in the actual context of your input task, then this performs very nicely. Finally, a couple more details I wanna show you. As I said, this is gonna be a brief video. I mentioned these variations here, so they are kind of strategic in those variations. It's not just random variations. They say here: to vary the style of open-ended prompt questions, we construct question and answer prompts that produce an answer that is either yes-no, wh, multiple choice, or cloze. So that's, as I said here, some of these questions are such that, so these are the wh questions, and then some of these questions, like did Jack camp with Mark, are yes-or-no questions, and they mention some other classes of questions that they force these different chains to produce, and that's the idea. Okay, so finally the results are here. So as they said in the abstract, they outperformed GPT-3 few-shot on 15 out of 20 tasks, which is super impressive. Again, keep in mind that they are comparing against the results published in this paper, Brown et al. from 2020, so the GPT-3 paper. But anyhow, this is super impressive, and yeah, I just wanted to briefly show you this method in case you are not familiar with all of these various advances that are happening in the prompt engineering space. I think this one caught my eye and so I thought of explaining it to you guys. Awesome, so if you like this video, subscribe to this channel, share the video out, and until next time, bye-bye. And I'll see you next time.
[{"start": 0.0, "end": 2.32, "text": " What's up guys, Alex here."}, {"start": 2.32, "end": 4.62, "text": " In this video I wanna briefly go through this new paper"}, {"start": 4.62, "end": 6.42, "text": " on prompting called Ask Me Anything,"}, {"start": 6.42, "end": 9.540000000000001, "text": " a Simple Strategy for Prompting Language Models."}, {"start": 9.540000000000001, "end": 10.8, "text": " They show amazing results,"}, {"start": 10.8, "end": 13.58, "text": " so let me read it to you straight away."}, {"start": 13.58, "end": 16.28, "text": " So we evaluate this approach EMA"}, {"start": 16.28, "end": 17.94, "text": " across open source model families"}, {"start": 17.94, "end": 20.54, "text": " such as Neo, Bloom, Opt, and T0,"}, {"start": 20.54, "end": 25.44, "text": " and size is 125 million to 175 billion parameters,"}, {"start": 25.44, "end": 28.580000000000002, "text": " demonstrating an average performance lift of 10.2%"}, {"start": 28.58, "end": 30.52, "text": " over the few shot baseline."}, {"start": 30.52, "end": 32.6, "text": " This simple strategy enables the open source"}, {"start": 32.6, "end": 36.56, "text": " GPT-Neo 6B model to match and exceed the performance"}, {"start": 36.56, "end": 41.56, "text": " of few shot GPT-370B on 15 out of 20 popular benchmarks."}, {"start": 43.36, "end": 45.28, "text": " So that's huge."}, {"start": 45.28, "end": 48.519999999999996, "text": " And in this video I'm gonna briefly walk you through"}, {"start": 48.519999999999996, "end": 50.12, "text": " the idea behind the paper."}, {"start": 50.12, "end": 52.0, "text": " I wanna also briefly mention that I've covered"}, {"start": 52.0, "end": 54.7, "text": " each of these open source models on my channel."}, {"start": 54.7, "end": 57.36, "text": " I have the paper covered and I also have a code"}, {"start": 57.36, "end": 59.04, "text": " walkthrough videos of a couple of them."}, {"start": 59.04, "end": 60.4, "text": " So if you wanna check those out,"}, {"start": 60.4, "end": 63.8, "text": " I'm gonna link them somewhere there in the video card."}, {"start": 63.8, "end": 66.92, "text": " Okay, so let's see what the paper is about."}, {"start": 66.92, "end": 69.5, "text": " So they say here, to mitigate the high degree of effort"}, {"start": 69.5, "end": 71.68, "text": " involved in prompting, we instead ask"}, {"start": 71.68, "end": 75.12, "text": " whether collecting multiple effective yet imperfect prompts"}, {"start": 75.12, "end": 78.2, "text": " and aggregating them can lead to a high quality"}, {"start": 78.2, "end": 79.36, "text": " prompting strategy."}, {"start": 79.36, "end": 83.24, "text": " So as you probably all know, people spend a lot of time"}, {"start": 83.24, "end": 85.86, "text": " finding the exact right prompt that's gonna give them"}, {"start": 85.86, "end": 87.8, "text": " the best result on a particular task."}, {"start": 87.8, "end": 89.84, "text": " And that's gonna bother some and takes a lot of time."}, {"start": 89.84, "end": 93.2, "text": " So there is a need to create these methods"}, {"start": 93.2, "end": 96.12, "text": " that are automatic and that can reliably give"}, {"start": 96.12, "end": 99.6, "text": " better performance across a suite of tasks."}, {"start": 99.6, "end": 101.6, "text": " And that's exactly what this approach does."}, {"start": 101.6, "end": 104.12, "text": " So our approach recursively uses the LLM"}, {"start": 104.12, "end": 108.32, "text": " to transform task inputs to the effective QA format."}, {"start": 108.32, "end": 
111.76, "text": " We apply these prompts to collect several noisy votes"}, {"start": 111.76, "end": 113.52, "text": " for the inputs true labels."}, {"start": 113.52, "end": 115.44, "text": " You can see already it's gonna be some form"}, {"start": 115.44, "end": 120.44, "text": " of aggregating the outputs of imperfect prompts."}, {"start": 121.36, "end": 123.08, "text": " And what they do is not majority voting,"}, {"start": 123.08, "end": 126.36, "text": " they use something a bit more delicate."}, {"start": 126.36, "end": 129.2, "text": " We're gonna see the details a bit later."}, {"start": 129.2, "end": 133.34, "text": " But here is the actual high level diagram."}, {"start": 133.34, "end": 135.96, "text": " So on the left hand side, you can see the input example."}, {"start": 135.96, "end": 139.92, "text": " So that's the prompt that you would pass,"}, {"start": 139.92, "end": 142.76, "text": " actually tokenize it, you would pass that to an LLM,"}, {"start": 142.76, "end": 145.12, "text": " and then you would sample out the answer"}, {"start": 145.12, "end": 146.44, "text": " to your task."}, {"start": 146.44, "end": 148.88, "text": " And so let me kind of read it to you."}, {"start": 148.88, "end": 152.20000000000002, "text": " So is the following claim true or false given the context?"}, {"start": 152.20000000000002, "end": 153.52, "text": " And the context is the following,"}, {"start": 153.52, "end": 156.20000000000002, "text": " John and his friends went to the theater"}, {"start": 156.20000000000002, "end": 157.96, "text": " and saw Jurassic Park."}, {"start": 157.96, "end": 161.12, "text": " And then the claim is John went to the park"}, {"start": 161.12, "end": 164.20000000000002, "text": " and the model is supposed to answer"}, {"start": 164.20000000000002, "end": 168.84, "text": " that this is a false claim given this context."}, {"start": 168.84, "end": 172.5, "text": " So one thing you could do is you could basically,"}, {"start": 172.5, "end": 176.22, "text": " well, give it multiple shots, multiple examples"}, {"start": 176.22, "end": 179.5, "text": " of how to handle a similar problem like this one."}, {"start": 179.5, "end": 182.14, "text": " And that usually requires a lot of,"}, {"start": 182.14, "end": 184.38, "text": " as I said, prompt engineering."}, {"start": 184.38, "end": 187.72, "text": " So here is the alternative approach that they suggest here."}, {"start": 187.72, "end": 189.74, "text": " So the idea is to have multiple of these chains."}, {"start": 189.74, "end": 191.74, "text": " You can see here, they have three chains"}, {"start": 191.74, "end": 193.26, "text": " in this particular image."}, {"start": 193.26, "end": 197.54, "text": " And each of these chains are gonna have a slight variations"}, {"start": 197.54, "end": 202.1, "text": " in the ways how they could like form these claims"}, {"start": 202.1, "end": 204.46, "text": " and questions, et cetera, et cetera."}, {"start": 204.46, "end": 207.54, "text": " And well, I can show you that immediately basically."}, {"start": 207.54, "end": 210.06, "text": " Let me show you what the idea is."}, {"start": 210.06, "end": 211.62, "text": " So here are some of the variations they do."}, {"start": 211.62, "end": 214.06, "text": " So for example, so you can vary the in context"}, {"start": 214.06, "end": 215.7, "text": " and demonstrations."}, {"start": 215.7, "end": 216.54, "text": " Here's an example."}, {"start": 216.54, "end": 219.06, "text": " So instead of having this claim, Jack camped with 
Mark,"}, {"start": 219.06, "end": 221.45999999999998, "text": " did Jack camp with Mark, whatever,"}, {"start": 221.45999999999998, "end": 226.45999999999998, "text": " they just kind of put in different claims and questions."}, {"start": 227.18, "end": 229.78, "text": " And the idea of this one in particular is to learn"}, {"start": 229.78, "end": 233.26, "text": " how to give an acclaim, formulate a question."}, {"start": 233.26, "end": 235.18, "text": " And you can see, you can just vary that,"}, {"start": 235.18, "end": 238.22, "text": " or you can also just vary how do you form the question."}, {"start": 238.22, "end": 240.7, "text": " So here, Jack camped with Mark,"}, {"start": 240.7, "end": 242.7, "text": " you have did Jack camp with Mark."}, {"start": 242.7, "end": 246.86, "text": " And instead here, they vary the question style to wh,"}, {"start": 246.86, "end": 250.86, "text": " so the who, what, where, et cetera questions."}, {"start": 250.86, "end": 254.1, "text": " As you can see here, they formulate who did Jack camp with"}, {"start": 254.1, "end": 256.48, "text": " or what was not hard."}, {"start": 256.48, "end": 257.78, "text": " So those are some of the variations"}, {"start": 257.78, "end": 260.78, "text": " that each of these chains are gonna contain."}, {"start": 260.78, "end": 264.38, "text": " So now let me go back to the main diagram."}, {"start": 264.38, "end": 267.21999999999997, "text": " So once we have those chains, let's see what's the idea."}, {"start": 267.21999999999997, "end": 271.82, "text": " The idea is because of their finding, let me go there."}, {"start": 271.82, "end": 274.17999999999995, "text": " So they find that the prompts that encourage"}, {"start": 274.17999999999995, "end": 278.17999999999995, "text": " open-ended answers, where did John go to be more effective"}, {"start": 278.17999999999995, "end": 280.78, "text": " than prompts that restrict the model output"}, {"start": 280.78, "end": 282.09999999999997, "text": " to particular tokens."}, {"start": 282.09999999999997, "end": 285.61999999999995, "text": " For example, John went to the park, output true or false."}, {"start": 285.62, "end": 288.26, "text": " So this is more restrictive as opposed to being more open-ended."}, {"start": 288.26, "end": 292.9, "text": " And because of this realization, they try and reformulate"}, {"start": 292.9, "end": 296.74, "text": " via this pipeline, the input task into this more"}, {"start": 296.74, "end": 299.3, "text": " of a open-ended Q&A format, that's the idea."}, {"start": 299.3, "end": 303.54, "text": " So you can see here how they are basically prompting"}, {"start": 303.54, "end": 305.3, "text": " the model to formulate the question."}, {"start": 305.3, "end": 308.86, "text": " So here is a claim, Jack camped with Mark,"}, {"start": 308.86, "end": 312.18, "text": " and then formulate the question, did Jack camp with Mark?"}, {"start": 312.18, "end": 317.18, "text": " And then another pair, another shot here, and then finally,"}, {"start": 317.46, "end": 320.82, "text": " we take our actual claim from our input example,"}, {"start": 320.82, "end": 323.34000000000003, "text": " John went to the park and we prompt the model,"}, {"start": 323.34000000000003, "end": 325.38, "text": " hey, give us the question, given the shots"}, {"start": 325.38, "end": 329.22, "text": " that we have shown you before here, okay?"}, {"start": 329.22, "end": 331.46000000000004, "text": " So that's the idea, we give you some examples,"}, {"start": 
331.46000000000004, "end": 332.62, "text": " do the same thing."}, {"start": 332.62, "end": 335.42, "text": " And then the model outputs, did John go to the park?"}, {"start": 335.42, "end": 338.82, "text": " Okay, and so now, so this is the first stage"}, {"start": 338.82, "end": 342.02, "text": " of this pipeline, the so-called question prompt."}, {"start": 342.02, "end": 344.97999999999996, "text": " And then the second part is, so you can see here,"}, {"start": 344.97999999999996, "end": 347.29999999999995, "text": " answer the question from context, okay?"}, {"start": 347.29999999999995, "end": 350.41999999999996, "text": " So context, Joe's birthday was yesterday,"}, {"start": 350.41999999999996, "end": 352.9, "text": " then the model is supposed to first output the question,"}, {"start": 352.9, "end": 356.78, "text": " instead of going directly from context and claim"}, {"start": 356.78, "end": 359.62, "text": " into the answer, they instead go through the question"}, {"start": 359.62, "end": 361.97999999999996, "text": " as the intermediate step."}, {"start": 361.97999999999996, "end": 365.02, "text": " And that they found out gives better results."}, {"start": 365.02, "end": 367.62, "text": " Okay, a little bit of alchemy and dark magic here."}, {"start": 368.58, "end": 371.38, "text": " And so now what happens here is we give"}, {"start": 371.38, "end": 373.94, "text": " the actual context, which is John and his friends"}, {"start": 373.94, "end": 376.1, "text": " went to the theater and saw Jurassic Park,"}, {"start": 376.1, "end": 378.82, "text": " you can see that's the same context as this one"}, {"start": 378.82, "end": 380.94, "text": " is in our actual input example."}, {"start": 380.94, "end": 383.98, "text": " And then we give the actual output from the previous step"}, {"start": 383.98, "end": 385.9, "text": " from stage one, the question prompt here."}, {"start": 385.9, "end": 389.74, "text": " And so we plug that in, and then we get the answer."}, {"start": 389.74, "end": 391.38, "text": " So this is how we get to the actual answer."}, {"start": 391.38, "end": 394.21999999999997, "text": " So instead of just inputting this and giving the answer,"}, {"start": 394.21999999999997, "end": 395.82, "text": " we instead first formulate the question,"}, {"start": 395.82, "end": 397.34, "text": " then insert the question here,"}, {"start": 397.34, "end": 399.62, "text": " and only then do we answer, okay?"}, {"start": 399.62, "end": 401.58, "text": " So that's the first part of this method."}, {"start": 401.58, "end": 404.82, "text": " The second part is you somehow have to combine"}, {"start": 404.82, "end": 407.5, "text": " the outputs of multiple chains."}, {"start": 407.5, "end": 410.26, "text": " And what they do is not a simple majority voting,"}, {"start": 410.26, "end": 413.58, "text": " which they show underperforms this method."}, {"start": 413.58, "end": 415.82, "text": " So majority voting would be, for example,"}, {"start": 415.82, "end": 418.66, "text": " if we had three of these chains giving false, true,"}, {"start": 418.66, "end": 422.34000000000003, "text": " and false, then the majority voting would simply say false."}, {"start": 422.34000000000003, "end": 423.82, "text": " Well, in this particular example,"}, {"start": 423.82, "end": 425.74, "text": " the output is the same for their method,"}, {"start": 425.74, "end": 428.02, "text": " but as you can see here, they infer"}, {"start": 428.02, "end": 432.06, "text": " the underlying graph structure between 
the chains."}, {"start": 432.06, "end": 436.34, "text": " And so basically, because these prompts,"}, {"start": 436.34, "end": 437.82, "text": " these chains are not independent,"}, {"start": 437.82, "end": 439.58, "text": " they try and figure out the dependency"}, {"start": 439.58, "end": 442.38, "text": " and how the output will correlate"}, {"start": 442.38, "end": 445.26, "text": " depending on the changes in the input, okay?"}, {"start": 445.26, "end": 447.58, "text": " Okay, so let's continue here."}, {"start": 448.58, "end": 450.9, "text": " As I said, so let me read this to you."}, {"start": 450.9, "end": 453.46, "text": " So we propose the use of weak supervision"}, {"start": 453.46, "end": 455.06, "text": " to reliably aggregate predictions."}, {"start": 455.06, "end": 457.5, "text": " We find that the errors produced by the predictions"}, {"start": 457.5, "end": 460.78, "text": " of different chains can be highly varying and correlated,"}, {"start": 460.78, "end": 463.98, "text": " right, and so while majority voting may do well"}, {"start": 463.98, "end": 465.54, "text": " on certain sets of prompts,"}, {"start": 465.54, "end": 470.46, "text": " it performs poorly in the above cases, okay?"}, {"start": 470.46, "end": 473.42, "text": " And additionally, worth pointing out"}, {"start": 473.42, "end": 475.58, "text": " is this is not a panacea."}, {"start": 475.58, "end": 479.22, "text": " There are certain tasks that this method prefers."}, {"start": 479.22, "end": 482.12, "text": " So they say here, we find the largest gains"}, {"start": 482.12, "end": 484.58, "text": " are on the tasks where the knowledge required"}, {"start": 484.58, "end": 489.21999999999997, "text": " to complete the task is found in the provided context, okay?"}, {"start": 489.21999999999997, "end": 492.53999999999996, "text": " And comparatively less on closed books tasks,"}, {"start": 492.53999999999996, "end": 494.02, "text": " for example, factual recall."}, {"start": 494.02, "end": 496.78, "text": " So the idea is if the answer is already somewhere"}, {"start": 496.78, "end": 500.02, "text": " in the actual context of your input task,"}, {"start": 500.02, "end": 502.94, "text": " then this performs very nicely."}, {"start": 502.94, "end": 504.86, "text": " Finally, a couple more details I wanna show you."}, {"start": 504.86, "end": 506.82, "text": " As I said, this is gonna be a brief video."}, {"start": 506.82, "end": 508.88, "text": " I mentioned this variations here,"}, {"start": 508.88, "end": 511.18, "text": " so they are kind of strategic in those variations."}, {"start": 511.18, "end": 512.98, "text": " It's not just random variations."}, {"start": 512.98, "end": 515.36, "text": " They say here to vary the style"}, {"start": 515.36, "end": 517.38, "text": " of open-ended prompt questions,"}, {"start": 517.38, "end": 519.48, "text": " we construct question and answer prompts"}, {"start": 519.48, "end": 522.6, "text": " that produce an answer either yes, no,"}, {"start": 522.6, "end": 525.44, "text": " WH, multiple choice, or close questions."}, {"start": 525.44, "end": 527.04, "text": " So that's, as I said here,"}, {"start": 527.04, "end": 529.32, "text": " some of these questions are such that,"}, {"start": 529.32, "end": 531.72, "text": " so these are the WH questions,"}, {"start": 531.72, "end": 536.72, "text": " and then some of these questions like did Jack Kemp"}, {"start": 536.76, "end": 538.88, "text": " with Mark or yes or no questions,"}, {"start": 538.88, "end": 541.16, "text": " and 
they mentioned some other classes of questions"}, {"start": 541.16, "end": 544.68, "text": " that they force these different chains to produce,"}, {"start": 544.68, "end": 546.0799999999999, "text": " and that's the idea."}, {"start": 546.0799999999999, "end": 549.54, "text": " Okay, so finally the results are here."}, {"start": 549.54, "end": 552.9, "text": " So as they said in the abstract,"}, {"start": 552.9, "end": 557.9, "text": " they outperformed GPT-3 few shot on 15 out of 20 tasks,"}, {"start": 557.92, "end": 559.3199999999999, "text": " which is super impressive."}, {"start": 559.3199999999999, "end": 561.16, "text": " Again, keep in mind that they are comparing"}, {"start": 561.16, "end": 564.16, "text": " against a model that's against the results"}, {"start": 564.16, "end": 566.56, "text": " published in this paper, Brown et al. from 2020,"}, {"start": 566.56, "end": 568.12, "text": " so the GPT-3 paper."}, {"start": 568.12, "end": 572.16, "text": " But anyhow, this is super impressive,"}, {"start": 572.16, "end": 576.0, "text": " and yeah, I just wanted to briefly show you this method"}, {"start": 576.0, "end": 577.92, "text": " in case you are not familiar"}, {"start": 577.92, "end": 580.44, "text": " with all of these various advances"}, {"start": 580.44, "end": 582.5600000000001, "text": " that are happening in the prompt engineering space."}, {"start": 582.5600000000001, "end": 584.24, "text": " I think this one caught my eye"}, {"start": 584.24, "end": 585.88, "text": " and so thought explaining it to you guys."}, {"start": 585.88, "end": 587.76, "text": " Awesome, so if you like this video,"}, {"start": 587.76, "end": 589.8, "text": " subscribe to this channel, share the video out,"}, {"start": 589.8, "end": 592.36, "text": " and until next time, bye-bye."}, {"start": 592.36, "end": 597.36, "text": " And I'll see you next time."}]
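The transcript above walks through the two-stage AMA chain: a question prompt that rewrites the claim as an open-ended question, followed by an answer prompt that answers it from the context, with several such chains voting. Below is a minimal, hypothetical sketch of one chain; `llm` is a placeholder for any text-completion call (not an API from the paper), and the demonstrations reuse the examples mentioned in the video:

```python
# Sketch of a single AMA-style chain (question prompt -> answer prompt).
# Not the authors' code; `llm` is purely a stand-in for a language model call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your favourite language model here")

QUESTION_PROMPT = (
    "Claim: Jack camped with Mark\n"
    "Question: Did Jack camp with Mark?\n\n"
    "Claim: {claim}\n"
    "Question:"
)

ANSWER_PROMPT = (
    "Context: Joe's birthday was yesterday\n"
    "Question: When was Joe's birthday?\n"
    "Answer: yesterday\n\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def run_chain(context: str, claim: str) -> str:
    # Stage 1: reformulate the claim as an open-ended question.
    question = llm(QUESTION_PROMPT.format(claim=claim)).strip()
    # Stage 2: answer that question from the given context.
    return llm(ANSWER_PROMPT.format(context=context, question=question)).strip()

# Several chains with varied demonstrations / question styles produce noisy votes;
# the paper then aggregates those votes with weak supervision rather than a plain
# majority vote.
```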
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=MmAJk2BD6WA
Make-A-Video: Text-To-Video Generation Without Text-Video Data | Paper Explained
🚀 Find out how to get started using Weights & Biases 🚀 http://wandb.me/ai-epiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the latest text-to-video paper from Meta: "Make-A-Video: Text-To-Video Generation Without Text-Video Data". I walk you through the 3-stage approach that consists of: * Training a DALL-E 2 type of a model * Integrating temporal information and tuning on unlabeled videos * Fine-tuning the frame interpolation module. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2209.14792 ✅ Website: https://makeavideo.studio/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:25 (sponsored) Weights & Biases 01:37 Going through the generations 06:15 High-level paper overview 10:50 Results 15:40 Limitations 16:30 Diving deep: DALL-E 2 backbone 23:35 Expanding to 3D - temporal info integration 32:39 Frame interpolation 37:24 3-stage training 41:28 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #makeavideo #meta #texttovideo
What's cracking guys in this video I'm covering make a video from Meta a very cool novel Text to video model and in general over the last months. We've seen an explosion in the multimodal research So other than the text to image models such as the lead to mid-journey stable diffusion, etc Etc we've seen models that do text to audio models that do text to 3d shapes models that do text to video guys before We continue I want to make a huge shout out to weights and biases and thank them for sponsoring the video They're an amazing ML platform for tracking experiments doing data subversion in model management and much more Signing up is as simple as like literally type typing in a couple of lines of code And I want to show you one very powerful feature of weights and biases and that's reading these powerful multi media rich Reports so you can see how you can embed a video here And if you click on this particular link you end up seeing a run that produce those images Other than just producing cool videos and images you can interact with the plots as you can see here So you can see how you have a pop-up and actual numbers being displayed And if you're a power user you literally have like unlimited flexibility you can generate plots bar charts videos You can interact with your plots, and that's very cool So having said all of that there is no reason why you shouldn't check them out because they are completely free for Personal use and academic teams and they have great plans for for enterprises So if you want to support the channel sign up using the link down below and now let's get back to the video So this is not the first paper that's been doing obviously this task so we had a previous baseline such as cog video such as video diffusion Models etc etc, but this is the the one that kind of made it impressive The first one that has like really exciting results. So yeah, also there is a concurrent work I think it was called Fennaki or something that also has fastening results But I'm gonna focus on make a make a video in this one So you can see some of the videos generated here and if I open up the actual Like landing page here we can see the associated prompts So let's focus on some of them so we can see here a dog wearing a superhero outfit with red cape flying through the sky Obviously the video is kind of weird, but it's damn impressive So this is like the first the first paper that achieved results such as this. So I think it's fairly laudable So here they show some some examples of like three categories like surreal so you can see here a teddy bear painting a portrait So you can see how it looks like you can see the hands are kind of weird or something But like it's amazing that the model even had the image of a bear The imagination to generate a content like this. That's kind of cool So then they have like robot dancing in Times Square and then there is the cat watching TV with a remote in hand You can also see the hand is kind of scary But like super cool a fluffy baby sloth with an orange knitted hat Trying to figure out a laptop close-up highly detailed studio lighting screen reflecting in its eyes So so we can see the the prompt engineering culture Like perturbing all spheres of research So here is the here is the result It's fairly fascinating. 
I'm not sure whether this is sloth or some some some very nasty demon But it's cool nonetheless and then they have the realistic category like an artist brush painting on a canvas close-up Clown fish swimming through the coral leaf a young couple walking in in a heavy rain horse drinking water So all of these are very very cool. And then there is a stylized category. Like I'm not sure why the spaceship is stylized but Okay, because it's an alien spaceship, I guess so keep high realistic spaceship landing on the Mars An oil painting of a couple informal evening where going home get caught in a heavy downpour with umbrellas You can see that there is some artistic style here to it. You can also see a bunch of problems with the model like But like considering this is the first one and considering they also don't even have a text video Data set we'll soon see the details behind the paper. This is kind of fascinating There is a table by a window with sunlight streaming through illuminating a pile of books So again, very cool. It looks like some type of uh, like a camera beat like camera panning motion Very nice and then an emoji of a baby panda wearing a red hat So we have the red hat blue gloves so you can see that there are no blue gloves actually and then Green shirt and blue pants again some variable binding issues, but again, I will not complain Because these results are truly fascinating and considering that just like This year we we had the lead two and since then we had a cambering explosion of all of these models. So this is kind of Fairly fascinating where we've come They also can do because of the interpolation module they have which we'll see as soon as I open up the paper They can do uh, they can animate a single image so you can see here an image and then you can literally Create a video out of it. So here a bunch of different ones So again, you can see the sun kind of going through the body So the model is is is having problems with the well common sense and just uh, World understanding which is expected considering how we're training these models And then there is the turtle image here and you can also just do Two pair two images like the the source and the target image and then you can do like a smooth interpolation between the two So results look very fairly cool. There is some like blur effect here, but yeah, super cool. And also you can just do Video variation so you pick a particular video And then you generate a variety of similar videos So the this property the the the make a make a video model is inheriting pretty much from the lead to because as we'll soon see It's leveraging the lead to model as the backbone And yeah, let's let's first let's maybe dig into the paper and then we'll see those details Okay, so let me open it up here So the paper name is make a video text to video generation without text video data And I guess that's a very important point here. So there is no text video data So yeah, let's let's kind of then see how the thing works So our intuition is simple learn what the world looks like and how it is described From paired text image data And learn how the world moves. 
So learn how the world moves from unsupervised video footage, okay So that's going to be kind of they literally have a three-stage approach So if they first train the the text to image model, which is going to be as I said some variation of the lead to Then they they basically Train it on on the on on data sets of video data sets in an unsupervised fashion And finally, they additionally fine-tune. They train the this interpolation app work, which we'll see in a couple of minutes Okay. So as I said, it does not require paired text video data. That's kind of impressive Okay, so let's see A quick remark here So while there is a remarkable progress in the text to image generation the progress of text to video generational lags behind largely due to two main reasons So the first one is the lack of large scale data sets with high quality text video pairs I mentioned this multiple times and some of the previous work such as cog video are using Text video data sets but um to the best of my knowledge They are super small and that's the reason why they did not achieve any any significant results yet And then the second thing the second point they make here and the complexity of modeling higher dimensional video data So again, it's very compute intensive to do anything with videos. So those are two of the reasons why we didn't have any Cambrian explosion when it comes to text to video models as opposed to Text to image. Okay guys, so here is the high level architecture. I'm just gonna first just give you some Yeah, high level overview and then i'm gonna go through the experiments and then i'll later get back and dig a bit deeper into how this thing works So I want you to first recognize some components that are Same as in the delete two models if you if you haven't watched any of the delete two explanation videos Because I haven't created one i'm gonna give you a brief explanation how the model works But for now, let's just kind of identify some components So first we have input text here on the left hand side. You can see a person doing yoga outdoor during the sunrise And so what you do is you basically Convert that using a clip text encoder into a text embedding And then using this friar network, which is implemented as a diffusion model Usually then you end up with some image encoding That's going to going to well because it's trained by the clip in the background It's going to be it's going to have all of the properties of clip image embeddings and then you use that clip image embedding to condition the Diffusional model and then you use that clip image embedding to Uh condition the uh diffusion model that's going to generate so initially just ignore for the sake of argument for now Just ignore the temporal dimension just so just assume we have a single image here That's going to be the way they first train it and so this is going to be a 64 times 64 Um, like generated image from a diffusion model. Okay, then initially they don't have this frame interpolation network They just have additionally two super resolution models that do this one will do 256 256 So from 64 64 to 256 256 resolution and finally 768 or something. 
Let's quickly look at the results, and then I'll get back to the details behind the model. So let's just skim them. On this MSR-VTT benchmark, a video benchmark used by previous baselines (I think CogVideo and other baselines reported on it), you can see that Make-A-Video heavily outperforms them across these two metrics: FID is way lower and CLIPSIM is way higher. CLIPSIM, by the way, works like this: once you generate the video, you take the frames and measure how similar each frame is to the input text prompt that was used to generate that video. The similarity is simply a dot product between the CLIP image embedding of the frame and the CLIP text embedding of the prompt, and then you average that across all of the frames; that's how you get the CLIPSIM metric, to the best of my knowledge. One thing worth mentioning is that because Make-A-Video uses CLIP in the background, that might lead to us overestimating (or, vice versa, underestimating) the number here, because the other baselines are not leveraging CLIP, so this number may not be as reliable as the FID one. Still, it already suggests the model is way better compared to the baselines.
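Here is a minimal sketch of that CLIPSIM computation, assuming an OpenAI-CLIP-style model with encode_image / encode_text plus an external tokenizer and preprocessing function; the exact CLIP variant and preprocessing used in the paper are not something I'm asserting here:

```python
import torch
import torch.nn.functional as F

def clipsim(frames, prompt, clip_model, preprocess, tokenize):
    """Average frame-to-prompt similarity in CLIP space, as described above.
    frames: a list of PIL images (the generated video frames)."""
    with torch.no_grad():
        text_emb = clip_model.encode_text(tokenize([prompt]))      # (1, D)
        image_embs = clip_model.encode_image(
            torch.stack([preprocess(f) for f in frames]))          # (F, D)
    text_emb = F.normalize(text_emb, dim=-1)                       # normalize so the dot
    image_embs = F.normalize(image_embs, dim=-1)                   # product is a cosine similarity
    return (image_embs @ text_emb.T).mean().item()                 # mean over all frames
```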
They also test it on UCF-101, in a zero-shot and a fine-tuning setting, and again Make-A-Video outperforms CogVideo, which, I think, was either trained or fine-tuned on the UCF-101 training set, and it still outperforms it. That's kind of fascinating, even though Make-A-Video did not use images from UCF. Now, I'm not sure about the overlap between LAION, which they use for training Make-A-Video, and UCF-101, so there might be some data pollution there; but if not, these results are super fascinating. And in the fine-tuning setting, Make-A-Video again just outperforms all of the baselines heavily. Okay guys, and then, additionally, they run a human evaluation. They have human raters on Amazon Mechanical Turk, and basically they show videos side by side: here is a video from our model, Make-A-Video, and here is one from the baseline, and they ask which one is higher quality; a score of 50 means the two are pretty much the same. You can see it's always above 50, which means Make-A-Video is way better even in these human-eval experiments. The faithfulness column basically tells you that they asked those reviewers to judge how well the generated video is described by the text that was used to generate it; that's the faithfulness metric, and again they compare both videos side by side. Okay guys, so finally, before I dig a bit deeper into the details of the model, let me show you a couple more results. Here's a comparison: VDM (the video diffusion models) on the top, CogVideo in the middle, and Make-A-Video on the bottom, and you can again see that the two baselines here are way more abstract and blurry; if you zoom in, the grass doesn't have all of the texture information and details, whereas this one looks way better. Just eyeballing it, it does look better compared to the baselines. Then they showcase again that from a static image they generate an animated video, which is kind of cool. Here they show interpolation: this is the target image, this is the source image, and they show how they can interpolate very nicely between those images, whereas some of the baselines, such as FILM, struggle with certain concepts, like the fact that human hands are not supposed to break all of a sudden when you're interpolating between images. So it does take some world knowledge, and you can see that, on the other hand, the Make-A-Video model manages to keep some temporal consistency and understanding, I guess. Cool. And finally, video variation; we already saw actual videos on their landing page, so I'm going to skip that one, but it's very nice to have. Okay, one final thing they mention here: some of the downsides of the way they train the model, namely that they only use static images and text descriptions. Let me read this for you: "As discussed earlier, our approach cannot learn associations between text and phenomena that can only be inferred in videos. How to incorporate these, along with generating longer videos with multiple scenes and events depicting more detailed stories, is left for future work." And obviously the reason for that is that they do not have a video-text dataset; again mentioning that. So yeah, things like generating a video of a person waving their hand left to right, or right to left, the model cannot do reliably, because you obviously cannot learn that from a static image. Okay, guys, let's now see how the model works in a bit more detail. Ultimately, the first thing you have to understand is how DALL·E 2 works, so let me try to digest it. Obviously, it's going to be hard to explain in a couple of minutes, and I'm going to refer you to some of my previous videos, but yeah, stick with me.
Okay, so there are roughly two stages to how you train DALL·E 2. The first one is: train CLIP. For those of you who are not familiar with CLIP, I already covered the paper and I've also done a code walkthrough of the original OpenAI repo; you can check out those videos if you want more detail. But on a very high level, what CLIP does is learn a very rich joint embedding space between images and text, such that you get this cool property: you grab a piece of text such as "a corgi playing a flame-throwing trumpet", you embed it using a text encoder, and you get a text embedding vector. If you do the same thing for some set of images and then take a dot product, so you calculate the similarity between those vectors, you'll find that the highest similarity is exactly for those images which are best described by the text, as decided by human beings. So basically, you can see here there would be a very high similarity between this image and this piece of text; but if I gave you an image of, say, a car, and embedded that car image into a vector, then the dot product, the similarity, between that vector and this particular text would be way lower. That's the nice property that CLIP gives us. Okay, so once you have CLIP (and hopefully you already knew how CLIP works, but if not, hopefully this short description helped), the next step is to train this model called the prior. You can see that in the original paper OpenAI tried a couple of different generative models: one is an autoregressive model and the second one is a diffusion model, and they ended up using diffusion because it gave better results, surprise surprise. Okay. So what does the prior learn? Given a particular text embedding vector, it learns a set of plausible image embedding vectors, as basically decided by CLIP. Let me dissect what I mean by that by reading this passage, because it's kind of important. "For the diffusion prior, we train a decoder-only transformer." Let me stop here, because it's decoder-only: if you are familiar with diffusion models, you've mostly seen U-Net models. The reason they're not using a U-Net here is that we are not trying to generate an image, we're trying to generate an embedding vector, and because of the nature of the input and output of this prior (a text embedding vector in, an image embedding vector out), they use a decoder-only transformer instead of a U-Net. Okay, so let's continue. They train a decoder-only transformer with a causal attention mask on a sequence consisting of, in order: the encoded text, so the BPE-encoded text; the CLIP text embedding, which is this thing here; an embedding for the diffusion timestep, so just t embedded into some multi-dimensional space; and then the noised CLIP image embedding. So that's the noisy version of this one, and it corresponds directly to how, when you're training diffusion models on images, you pass in the noisy image.
Here you pass in the noisy image embedding vector instead. And finally, a final embedding whose output from the transformer is used to predict the un-noised CLIP image embedding. Okay, so that's the idea. Let me try to draw that, though hopefully it was already clear: you have a decoder-only transformer, you pass in all of those different components, which I'm not going to repeat again (the text embeddings and so on), and then, ultimately, you have this final embedding, and on top of it you'll be generating the un-noised version of the image embedding. Now, the important thing to understand here is that the ground truth is provided by the previously trained CLIP model. So let's say this is the text embedding from CLIP, and this is the image embedding of that text-image pair; these two are connected, and that's the ground truth as provided by CLIP. Because of this, the prior learns reasonable image embeddings given a text embedding, and thus learns the prior behind the CLIP model, if that makes sense. So why is it called a prior? Because, given a text embedding vector, it generates some plausible set of image embedding vectors, and, because of the stochastic nature of diffusion models, it will generate a slightly different image embedding vector every time, but the samples will be centered around some mean of that output distribution. It's setting a prior across the possible meaningful image embeddings given that particular text embedding. Okay, hopefully that makes sense. Once you have that, you then train a decoder, which is also, as you can see in the diagram (whenever you see those circles, that's a diffusion model), trained in the regular diffusion fashion: you pass in the noisy image, you try to denoise it, and you additionally condition the model throughout that process on this particular image embedding. By doing that, later on, when you pass in a purely noisy image and a particular image embedding, the diffusion model will generate something that roughly corresponds to that image embedding. Okay, hopefully that makes sense, and that's how DALL·E 2 works in a nutshell. There are a couple more details, obviously: there are two additional super-resolution models, because this generates a 64 by 64 image and you then need to upscale it to two bigger resolutions. Just to recap, on a high level, what you do if you want to generate an image with DALL·E 2: you grab a piece of text, you embed it into the text embedding, you pass the text embedding into the prior, the prior generates a reasonable image embedding, and then you use that image embedding, together with a noisy image, in the diffusion model to generate the 64 by 64 image, which you then upscale twice using the super-resolution models. That's it on a high level; I'm not going to go any deeper than that.
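To make the prior a bit more tangible, here is a rough PyTorch sketch of the input sequence that was just quoted. The dimensions, the vocabulary size, the use of nn.TransformerEncoder, and the omission of the causal mask are all simplifying assumptions on my part, not the actual DALL·E 2 implementation:

```python
import torch
import torch.nn as nn

class DiffusionPriorSketch(nn.Module):
    """Sketch of the diffusion prior's input sequence described above: a
    decoder-only transformer reads, in order, the BPE text tokens, the CLIP
    text embedding, the timestep embedding, the noised CLIP image embedding
    and a learned final token, whose output predicts the clean image embedding."""
    def __init__(self, dim=768, vocab_size=49408):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)               # BPE text tokens
        self.time_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.final_token = nn.Parameter(torch.randn(1, 1, dim))      # learned final embedding
        layer = nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=12)  # causal mask omitted here
        self.to_image_emb = nn.Linear(dim, dim)

    def forward(self, text_tokens, clip_text_emb, t, noised_image_emb):
        b = text_tokens.shape[0]
        seq = torch.cat([
            self.token_emb(text_tokens),                              # (B, L, dim) encoded text
            clip_text_emb.unsqueeze(1),                               # (B, 1, dim) CLIP text embedding
            self.time_emb(t.view(b, 1).float()).unsqueeze(1),         # (B, 1, dim) timestep embedding
            noised_image_emb.unsqueeze(1),                            # (B, 1, dim) noised image embedding
            self.final_token.expand(b, -1, -1),                       # (B, 1, dim) final token
        ], dim=1)
        out = self.transformer(seq)
        return self.to_image_emb(out[:, -1])                          # predicted un-noised image embedding
```

During training, that prediction would be regressed toward the clean CLIP image embedding of the paired image, exactly as the quoted passage describes.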
Okay, so now the question is: how do we get the temporal part? How do we generate videos? The idea is to expand the convolution and attention layers so that they can additionally integrate temporal information. Okay, let me slowly start digging into those details. First, just to recap the DALL·E 2 part, they say here: prior to the addition of the temporal components, they train the backbone of their method, a T2I (text-to-image) model trained on text-image pairs, sharing the core components with the work of Ramesh et al., the DALL·E 2 paper authors. And there are a couple of components, as I said: a prior network P, a decoder network D, and two super-resolution networks, SRl and SRh, standing for lower and higher resolution, I guess. Okay guys, so now let's see how the temporal piece comes into the picture. They say that in order to expand the two-dimensional conditional network into the temporal dimension, they modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos: the convolutional layers and the attention layers. Other layers, like fully connected ones, don't have to be modified. Then they say the temporal modifications are made in most U-Net-based diffusion networks, which means they have to modify the spatiotemporal decoder Dt, which now generates 16 RGB frames. So we are starting from a diffusion model that generated a single 64 by 64 RGB image and extending it to generate 16 RGB frames; we'll see the details of how that works in a second. Not only that, they also modify the newly added frame interpolation network, denoted with an up arrow as ↑F, which increases the effective frame rate by interpolating between the 16 generated frames, as depicted in figure 2, and finally the super-resolution networks. So all of those need to be modified. Regarding interpolating between the 16 generated frames: you can see it's going to have this masking logic, where the network tries to infer what should be generated in the blacked-out frames, and by doing that you can do either interpolation or extrapolation of your generated 16-frame video. That's the job of this interpolation network. Okay, they give a couple more details for the super-resolution modules. The SRl (lower-resolution) module operates across both spatial and temporal dimensions; in qualitative inspection they found this to significantly outperform per-frame super-resolution, where you don't model the temporal information. It is challenging to extend SRh, the high-resolution network, to the temporal dimension due to memory and compute constraints, as well as the scarcity of high-resolution video data, so SRh operates only along the spatial dimensions; but, to encourage consistent detail hallucination across frames, they use the same noise initialization for each frame. That's why in this picture it's denoted as "fixed noise": this one only does per-frame upscaling, literally using the same model on each frame, whereas the other one actually has the temporal information integrated into the architecture. Okay. That out of the way, let's continue slowly and digest what's going on. Let's see how they create these pseudo-3D convolutional layers.
Because regular Conv3D layers are very compute intensive, they opt for this pseudo-3D variant, and if you're familiar with the concept of depthwise separable convolutions, it does a very similar thing: you first do spatial processing and then temporal processing, so you decouple the two, as opposed to doing joint processing of both. So that's one of the reasons for not using full 3D convolutions, the computational load, and the second reason, they say, is that they want to retain the previously learned spatial knowledge in the spatial convolution weights: because you first train on the image-text pairs, you want to keep those weights and just start from there. And that's what they do: they initialize the new temporal part as the identity function, and because of that, the network initially behaves as if you had 16 independent 2D diffusion models. Okay, so here is roughly how the process looks. You grab the input tensor, which is B, C, F, H, W: the batch dimension, number of channels, number of frames, and the spatial extent of the image. They first apply the Conv2D, which is the spatial processing of the frames; after that they do a transpose operation, which lets them process the temporal dimension next, and that's when they apply the Conv1D (I think it could equally be written as a 2D convolution with a 1 by 1 spatial kernel); finally, they apply another transpose so that they get back to the canonical shape. Okay. And, as I previously mentioned, note that at initialization the network will generate K different images, due to random noise, each faithful to the input text but lacking any temporal coherence. Let me explain why that is; to understand it, we need to check out this image here. So here is how this is implemented. We first do the spatial processing; for the sake of argument, you can just treat these as three independent diffusion models, even though they are not. So you first process the spatial information, and then you can see the way they initialize these temporal filters: a one here and zeros there. Because of that, you basically end up just copy-pasting each feature map forward to the next stage and continuing your processing, and as a consequence you literally have 16 independent diffusion models that generate 16 images which correspond to the text prompt but are very different from each other, with no temporal consistency. Temporal consistency will be learned later, using the unlabeled video dataset; we'll get there a bit later. Okay, so that's the first part.
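Here is a minimal PyTorch sketch of such a pseudo-3D convolution, assuming equal in/out channel counts and 3-wide kernels (the official code is not released, so the exact layout and kernel sizes are guesses); the key detail is the Dirac (identity) initialization of the temporal convolution:

```python
import torch.nn as nn

class PseudoConv3d(nn.Module):
    """Factorized 'pseudo-3D' convolution: a spatial Conv2d followed by a
    temporal Conv1d, with the temporal part initialized to the identity so the
    pretrained text-to-image weights are preserved at the start of video training."""
    def __init__(self, channels, spatial_kernel=3, temporal_kernel=3):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, spatial_kernel, padding=spatial_kernel // 2)
        self.temporal = nn.Conv1d(channels, channels, temporal_kernel, padding=temporal_kernel // 2)
        nn.init.dirac_(self.temporal.weight)   # temporal conv starts as an identity mapping
        nn.init.zeros_(self.temporal.bias)

    def forward(self, x):                       # x: (B, C, F, H, W)
        b, c, f, h, w = x.shape
        # spatial pass: fold the frame axis into the batch axis
        x = x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)
        x = self.spatial(x)
        # temporal pass: fold the spatial positions into the batch axis
        x = x.reshape(b, f, c, h, w).permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
        x = self.temporal(x)
        # back to the canonical (B, C, F, H, W) layout
        return x.reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)
```

With that Dirac initialization, at the start of video fine-tuning the block's output is exactly what the pretrained spatial convolution alone would produce for each frame, which is the "16 independent images" behaviour described above.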
Hopefully the convolution part makes sense. They then have to do the same thing for the attention layers, very similarly. You can see that they first do some flattening, so that we have H times W tokens, then they apply the attention (a complete attention pattern, where every spatial token attends to every other spatial token), then they do a transpose, which lets them do the same thing along the temporal extent, and then another transpose so that we're back in the canonical shape; the un-flatten then returns this to H times W, if that makes sense. And they say: similarly to Conv P3D (the P standing for pseudo), to allow for smooth spatio-temporal initialization, the attention 2D layer is initialized from the pretrained T2I model and the attention 1D layer is initialized as the identity function. So let's again look at the diagram here. The diagrams are quite weak, and the code is not released, so I couldn't go and dig into the code to resolve some of the details that are ambiguous or not clear enough from the paper, but I'll do the best I can to infer the correct mechanisms. We first have the spatial attention, so you do the spatial attention, and then you can see that this temporal attention, because of the way it is initialized, is all zeros here; and because we have a skip connection, that means the block initially behaves as if no temporal modules had been added at all, so they preserve the equivalence with the text-to-image model. The parts that are confusing: nothing is labeled, I don't see any dimensions denoted here, and I'm not sure what "embedded image tokens" refers to. It's kind of weak, to be honest; I'm not sure I would be able to reconstruct the paper from just the information they give here. Okay guys, let's continue; there are only a couple more paragraphs worth explaining, so let's focus on the frame interpolation network (I'll also show a small sketch of the attention trick I just described, right below, before digging into the masking details). Let's read this: in addition to the spatiotemporal modifications discussed in the previous section, they train a new masked frame interpolation and extrapolation network, ↑F, capable of increasing the number of frames of the generated video, either by frame interpolation, for a smoother generated video, or by pre/post frame extrapolation, for extending the video length. And: in order to increase the frame rate within memory and compute constraints, they fine-tune the spatiotemporal decoder on the task of masked frame interpolation, by zero-padding the masked input frames, enabling video up-sampling. So this is again where the three-stage approach kicks in: they first train the spatiotemporal decoder Dt on the unlabeled, unsupervised video dataset, and then they use it to fine-tune this particular network.
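Here is that promised sketch of the attention initialization, as I read it; the layer sizes and the exact placement of the residual connections are assumptions on my part. The temporal attention's output projection starts at zero, so together with the skip connection the block initially reduces to the pretrained spatial attention:

```python
import torch.nn as nn

class PseudoAttention3d(nn.Module):
    """Factorized spatial + temporal self-attention ('pseudo-3D' attention sketch).
    In practice the spatial attention would be loaded from the pretrained T2I model."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # zero-init the temporal output projection -> the temporal branch starts as a no-op
        nn.init.zeros_(self.temporal_attn.out_proj.weight)
        nn.init.zeros_(self.temporal_attn.out_proj.bias)

    def forward(self, x):                       # x: (B, C, F, H, W)
        b, c, f, h, w = x.shape
        # spatial attention: every position attends to every position within its own frame
        s = x.permute(0, 2, 3, 4, 1).reshape(b * f, h * w, c)
        s = s + self.spatial_attn(s, s, s, need_weights=False)[0]
        # temporal attention: every frame attends to every frame at the same spatial position
        t = s.reshape(b, f, h * w, c).permute(0, 2, 1, 3).reshape(b * h * w, f, c)
        t = t + self.temporal_attn(t, t, t, need_weights=False)[0]
        # back to the canonical (B, C, F, H, W) layout
        return t.reshape(b, h * w, f, c).permute(0, 3, 2, 1).reshape(b, c, f, h, w)
```

If this reading is right, video fine-tuning can only gradually switch on the temporal mixing, which is exactly the smooth initialization the paper is after.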
So, they mention that when fine-tuning on masked frame interpolation, they add an additional four channels to the input of the U-Net: three channels for the RGB masked video input, and an additional binary channel indicating which frames are masked. Okay, so the three RGB channels, I guess, make sense, although this is somewhat at odds with the drawing they have up above, which makes you think they always completely zero out whole frames; what it tells me is that they have the additional flexibility of masking out an arbitrary patch inside the image instead. So let me go back here again: they have these four channels, three of them for the RGB. That basically means the following: you can grab those three channels of a frame, say the one that corresponds to the red channel, mask out only a particular part of the image, and force the interpolation network to learn what should be inside there, whereas the diagram up there shows them constantly zeroing out whole frames. The part that's confusing to me is the binary channel indicating which frames are masked, which suggests it is either all ones or all zeros depending on whether that frame is masked; that's confusing because, for frames where you don't want any masking, you simply don't mask, and then the three RGB channels should carry enough information on their own. So I'm not sure why the fourth channel is needed; if anyone knows why, feel free to comment down below. Okay. Then they say they fine-tune with variable frame skips and fps (frames-per-second) conditioning, to enable multiple temporal up-sampling rates at inference time. They denote ↑F as the operator that expands a given video tensor through masked frame interpolation, and for all their experiments they apply this interpolation network with frame skip 5, to up-sample a 16-frame video to 76 frames. Here they are referring to the evaluation setting we've seen previously, where they always expand to this exact frame count; in general, during training, they allow additional flexibility by varying the frame skips and the fps, so they don't always have to extrapolate to this particular number of frames. What this means concretely is that they keep a real frame, then the next four positions are masked out, then a real frame again, and so on. With 16 real frames you get 15 such gaps, so the total is 15 gaps of 5 frames each (one real frame plus four interpolated ones) plus the final real frame: 15 times 5, plus 1, which is why they end up with 76. So that's how they do it for the eval: they have these four masked positions in each gap, they infer what goes on in those frames, and, because of the way the decoder was trained, you get the temporal consistency built in.
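Here is what assembling that masked-interpolation input could look like, as a sketch; the channel layout, the shapes, and the assumption that the base input is a 3-channel noisy video are my own reading of the description above, not the official code:

```python
import torch

def build_interpolation_input(noisy_frames, known_frames, frame_mask):
    """Sketch of the masked frame interpolation input described above.
    noisy_frames: (B, 3, F, H, W)  frames the diffusion model is currently denoising
    known_frames: (B, 3, F, H, W)  low-frame-rate video, only valid where frame_mask == 1
    frame_mask:   (B, 1, F, 1, 1)  1 for observed frames, 0 for frames to be hallucinated
    The U-Net input gains 4 extra channels: the zero-padded RGB of the known
    frames plus the binary mask channel."""
    b, _, f, h, w = noisy_frames.shape
    masked_rgb = known_frames * frame_mask              # zero-pad the unknown frames
    mask_channel = frame_mask.expand(b, 1, f, h, w)     # broadcast the binary indicator
    return torch.cat([noisy_frames, masked_rgb, mask_channel], dim=1)   # (B, 7, F, H, W)
```

Whether the mask really is per whole frame or can be per patch is exactly the ambiguity discussed above.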
Okay, let's look at a couple more details, and then I'll try to give a high-level overview of what's going on. So: the different components of Make-A-Video described above are trained independently, and that's very important. The only component that receives text as input is the prior P; they train it on paired text-image data and do not fine-tune it on videos. The decoder, the prior, and the two super-resolution components are first trained on images alone, with no aligned text. Recall that the decoder receives CLIP image embeddings as input, and the super-resolution components receive downsampled images as input during training. Okay, so that's the DALL·E 2 part. Now come the second and the third stage. After training on images, they add and initialize the new temporal layers and fine-tune them on unlabeled video data: 16 frames are sampled from the original video with random fps ranging from 1 to 30; they use a beta function for sampling and, while training the decoder, start from higher fps ranges (which means less motion, because you're taking frames with a very small temporal delta) and then transition to lower fps, so you're then sampling with a bigger time delta between frames and thus naturally get more motion. Finally, the masked frame interpolation component is fine-tuned from the temporal decoder. Okay, that was a mouthful; let me try to explain this part using the diagram, so let's go back to the figure above. The way I understood it: because they don't have a text-video dataset, they grab a video and sample 16 frames from it, with a form of curriculum learning where they initially sample with a very small temporal delta and later with bigger and bigger temporal deltas. Once you have those 16 frames, since you don't have any text, you just embed them using the image encoder of CLIP, you get the image embeddings, and you use those image embeddings to condition this temporal diffusion model. So it's the same logic as before: you start from a noisy input, you condition each of these 16 "diffusion models" with the image embedding that corresponds to its particular frame, and you train the whole thing in the usual diffusion fashion. Because of the temporal integration they're not really 16 independent models, but you can kind of think about it that way, since you provide 16 image embeddings. Once that model, the spatiotemporal decoder, is trained in that diffusion fashion, then, in the third stage of training, you take it and train the frame interpolation network, where you occasionally mask frames out. In the figure it looks like whole frames are masked, but looking at their earlier description with the four additional channels, it seems they also allow partial zeroing-out of frames.
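As a sketch of that unsupervised video stage, here is how a training clip might be sampled and turned into conditioning without any text. The linear fps curriculum and the assumed source frame rate are placeholders (the paper only says a beta function over fps in the 1 to 30 range is used), and the clip_image_encoder is just a stand-in for CLIP's image encoder:

```python
import torch

def sample_training_clip(video, clip_image_encoder, progress):
    """video: (T, 3, H, W) decoded frames of one training video.
    progress: float in [0, 1], fraction of training done, used to move from
    high fps (little motion) early on to low fps (more motion) later."""
    num_frames, max_fps, min_fps, native_fps = 16, 30, 1, 30
    fps = max_fps - (max_fps - min_fps) * progress          # curriculum: 30 -> 1
    step = max(int(round(native_fps / fps)), 1)              # temporal delta between frames
    start = torch.randint(0, max(video.shape[0] - step * (num_frames - 1), 1), (1,)).item()
    idx = (torch.arange(num_frames) * step + start).clamp(max=video.shape[0] - 1)
    frames = video[idx]                                       # (16, 3, H, W)
    with torch.no_grad():
        cond = clip_image_encoder(frames)                      # per-frame CLIP image embeddings
    return frames, cond                                        # embeddings replace the missing text
```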
Now, the part that's confusing to me is how you condition the masked (black) frames, because at inference time you're obviously trying to generate something that does not exist yet, so you do not have any conditioning signal for it, which means you won't have it here either. I'm not sure how they handle that; there is no explanation whatsoever of how exactly this is done in the paper. What they might be doing is some form of plain MSE loss or something, or simply unconditional generation; and because during training you actually do have the ground-truth frames, you can use some form of MSE to make sure that whatever is generated stays close to the target. Again, confusing, and it doesn't make complete sense to me; I'm still a bit puzzled by this, so if anyone has any hypotheses, feel free to comment down below, and I'll be very happy to read through them and discuss what's going on. In any case, guys, that's it. Let me again just quickly show you the videos; they are fairly impressive. I love this one with the doggo: "a dog wearing a superhero outfit with red cape flying through the sky". And with that, I'm going to leave you here. Hopefully you took something away from this video and learned something new. If you did, consider subscribing to the channel and sharing the video, and until next time, bye-bye guys.
[{"start": 0.0, "end": 6.38, "text": " What's cracking guys in this video I'm covering make a video from Meta a very cool novel"}, {"start": 6.66, "end": 13.32, "text": " Text to video model and in general over the last months. We've seen an explosion in the multimodal research"}, {"start": 13.32, "end": 18.3, "text": " So other than the text to image models such as the lead to mid-journey stable diffusion, etc"}, {"start": 18.3, "end": 26.060000000000002, "text": " Etc we've seen models that do text to audio models that do text to 3d shapes models that do text to video guys before"}, {"start": 26.06, "end": 30.64, "text": " We continue I want to make a huge shout out to weights and biases and thank them for sponsoring the video"}, {"start": 30.96, "end": 36.62, "text": " They're an amazing ML platform for tracking experiments doing data subversion in model management and much more"}, {"start": 37.2, "end": 41.66, "text": " Signing up is as simple as like literally type typing in a couple of lines of code"}, {"start": 41.66, "end": 46.76, "text": " And I want to show you one very powerful feature of weights and biases and that's reading these powerful"}, {"start": 47.32, "end": 48.239999999999995, "text": " multi"}, {"start": 48.239999999999995, "end": 49.72, "text": " media rich"}, {"start": 49.72, "end": 52.84, "text": " Reports so you can see how you can embed a video here"}, {"start": 52.84, "end": 58.1, "text": " And if you click on this particular link you end up seeing a run that produce those images"}, {"start": 58.28, "end": 63.900000000000006, "text": " Other than just producing cool videos and images you can interact with the plots as you can see here"}, {"start": 64.4, "end": 68.88, "text": " So you can see how you have a pop-up and actual numbers being displayed"}, {"start": 69.24000000000001, "end": 77.24000000000001, "text": " And if you're a power user you literally have like unlimited flexibility you can generate plots bar charts videos"}, {"start": 77.24000000000001, "end": 80.66, "text": " You can interact with your plots, and that's very cool"}, {"start": 80.66, "end": 86.06, "text": " So having said all of that there is no reason why you shouldn't check them out because they are completely free for"}, {"start": 86.2, "end": 90.72, "text": " Personal use and academic teams and they have great plans for for enterprises"}, {"start": 90.72, "end": 96.28, "text": " So if you want to support the channel sign up using the link down below and now let's get back to the video"}, {"start": 96.44, "end": 104.34, "text": " So this is not the first paper that's been doing obviously this task so we had a previous baseline such as cog video such as video"}, {"start": 104.47999999999999, "end": 105.62, "text": " diffusion"}, {"start": 105.62, "end": 110.56, "text": " Models etc etc, but this is the the one that kind of made it impressive"}, {"start": 110.56, "end": 115.88, "text": " The first one that has like really exciting results. 
So yeah, also there is a concurrent work"}, {"start": 115.88, "end": 119.60000000000001, "text": " I think it was called Fennaki or something that also has fastening results"}, {"start": 119.60000000000001, "end": 123.16, "text": " But I'm gonna focus on make a make a video in this one"}, {"start": 123.56, "end": 128.62, "text": " So you can see some of the videos generated here and if I open up the actual"}, {"start": 129.08, "end": 132.52, "text": " Like landing page here we can see the associated prompts"}, {"start": 132.52, "end": 139.36, "text": " So let's focus on some of them so we can see here a dog wearing a superhero outfit with red cape flying through the sky"}, {"start": 139.36, "end": 143.04000000000002, "text": " Obviously the video is kind of weird, but it's damn impressive"}, {"start": 143.04000000000002, "end": 150.60000000000002, "text": " So this is like the first the first paper that achieved results such as this. So I think it's fairly laudable"}, {"start": 151.36, "end": 158.16000000000003, "text": " So here they show some some examples of like three categories like surreal so you can see here a teddy bear painting a portrait"}, {"start": 158.72000000000003, "end": 163.08, "text": " So you can see how it looks like you can see the hands are kind of weird or something"}, {"start": 163.08, "end": 168.4, "text": " But like it's amazing that the model even had the image of a bear"}, {"start": 168.4, "end": 172.52, "text": " The imagination to generate a content like this. That's kind of cool"}, {"start": 172.52, "end": 179.4, "text": " So then they have like robot dancing in Times Square and then there is the cat watching TV with a remote in hand"}, {"start": 179.4, "end": 181.84, "text": " You can also see the hand is kind of scary"}, {"start": 182.36, "end": 187.64000000000001, "text": " But like super cool a fluffy baby sloth with an orange knitted hat"}, {"start": 187.96, "end": 193.72, "text": " Trying to figure out a laptop close-up highly detailed studio lighting screen reflecting in its eyes"}, {"start": 193.72, "end": 196.68, "text": " So so we can see the the prompt engineering culture"}, {"start": 196.68, "end": 199.6, "text": " Like perturbing all spheres of research"}, {"start": 200.6, "end": 202.6, "text": " So here is the here is the result"}, {"start": 203.6, "end": 209.08, "text": " It's fairly fascinating. I'm not sure whether this is sloth or some some some very nasty demon"}, {"start": 209.56, "end": 216.36, "text": " But it's cool nonetheless and then they have the realistic category like an artist brush painting on a canvas close-up"}, {"start": 216.84, "end": 223.64000000000001, "text": " Clown fish swimming through the coral leaf a young couple walking in in a heavy rain horse drinking water"}, {"start": 223.64, "end": 230.44, "text": " So all of these are very very cool. And then there is a stylized category. Like I'm not sure why the spaceship is stylized but"}, {"start": 231.64, "end": 236.76, "text": " Okay, because it's an alien spaceship, I guess so keep high realistic spaceship landing on the Mars"}, {"start": 237.64, "end": 244.92, "text": " An oil painting of a couple informal evening where going home get caught in a heavy downpour with umbrellas"}, {"start": 245.56, "end": 251.56, "text": " You can see that there is some artistic style here to it. 
You can also see a bunch of problems with the model like"}, {"start": 251.56, "end": 257.16, "text": " But like considering this is the first one and considering they also don't even have a text video"}, {"start": 257.72, "end": 261.72, "text": " Data set we'll soon see the details behind the paper. This is kind of fascinating"}, {"start": 262.28000000000003, "end": 267.56, "text": " There is a table by a window with sunlight streaming through illuminating a pile of books"}, {"start": 268.2, "end": 273.72, "text": " So again, very cool. It looks like some type of uh, like a camera beat like camera panning motion"}, {"start": 274.28, "end": 278.68, "text": " Very nice and then an emoji of a baby panda wearing a red hat"}, {"start": 278.68, "end": 283.88, "text": " So we have the red hat blue gloves so you can see that there are no blue gloves actually and then"}, {"start": 284.28000000000003, "end": 287.88, "text": " Green shirt and blue pants again some variable binding"}, {"start": 288.76, "end": 291.24, "text": " issues, but again, I will not"}, {"start": 292.44, "end": 293.40000000000003, "text": " complain"}, {"start": 293.40000000000003, "end": 297.32, "text": " Because these results are truly fascinating and considering that just like"}, {"start": 297.8, "end": 303.24, "text": " This year we we had the lead two and since then we had a cambering explosion of all of these models. So this is kind of"}, {"start": 303.8, "end": 305.8, "text": " Fairly fascinating where we've come"}, {"start": 305.8, "end": 311.24, "text": " They also can do because of the interpolation module they have which we'll see as soon as I open up the paper"}, {"start": 311.96000000000004, "end": 317.0, "text": " They can do uh, they can animate a single image so you can see here an image and then you can literally"}, {"start": 317.56, "end": 320.52000000000004, "text": " Create a video out of it. So here a bunch of different ones"}, {"start": 320.52000000000004, "end": 322.92, "text": " So again, you can see the sun kind of going through the body"}, {"start": 322.92, "end": 328.44, "text": " So the model is is is having problems with the well common sense and just uh,"}, {"start": 329.16, "end": 333.72, "text": " World understanding which is expected considering how we're training these models"}, {"start": 333.72, "end": 336.92, "text": " And then there is the turtle image here and you can also just do"}, {"start": 337.32000000000005, "end": 343.16, "text": " Two pair two images like the the source and the target image and then you can do like a smooth interpolation between the two"}, {"start": 343.16, "end": 349.72, "text": " So results look very fairly cool. There is some like blur effect here, but yeah, super cool. 
And also you can just do"}, {"start": 350.76000000000005, "end": 353.72, "text": " Video variation so you pick a particular video"}, {"start": 354.52000000000004, "end": 358.36, "text": " And then you generate a variety of similar videos"}, {"start": 358.36, "end": 365.72, "text": " So the this property the the the make a make a video model is inheriting pretty much from the lead to because as we'll soon see"}, {"start": 365.72, "end": 369.32, "text": " It's leveraging the lead to model as the backbone"}, {"start": 370.12, "end": 374.52000000000004, "text": " And yeah, let's let's first let's maybe dig into the paper and then we'll see those details"}, {"start": 375.40000000000003, "end": 377.40000000000003, "text": " Okay, so let me open it up here"}, {"start": 378.36, "end": 383.32, "text": " So the paper name is make a video text to video generation without text video data"}, {"start": 383.32, "end": 388.36, "text": " And I guess that's a very important point here. So there is no text video data"}, {"start": 388.68, "end": 392.2, "text": " So yeah, let's let's kind of then see how the thing works"}, {"start": 392.36, "end": 398.04, "text": " So our intuition is simple learn what the world looks like and how it is described"}, {"start": 398.36, "end": 400.36, "text": " From paired text image data"}, {"start": 400.84, "end": 406.03999999999996, "text": " And learn how the world moves. So learn how the world moves from unsupervised video"}, {"start": 406.76, "end": 408.76, "text": " footage, okay"}, {"start": 408.76, "end": 413.0, "text": " So that's going to be kind of they literally have a three-stage approach"}, {"start": 413.0, "end": 418.68, "text": " So if they first train the the text to image model, which is going to be as I said some variation of the lead to"}, {"start": 419.48, "end": 421.48, "text": " Then they they basically"}, {"start": 421.88, "end": 427.4, "text": " Train it on on the on on data sets of video data sets in an unsupervised fashion"}, {"start": 427.64, "end": 435.32, "text": " And finally, they additionally fine-tune. They train the this interpolation app work, which we'll see in a couple of minutes"}, {"start": 435.32, "end": 439.24, "text": " Okay. So as I said, it does not require paired text video data. That's kind of impressive"}, {"start": 440.12, "end": 442.12, "text": " Okay, so let's see"}, {"start": 443.15999999999997, "end": 444.59999999999997, "text": " A quick remark here"}, {"start": 444.59999999999997, "end": 451.96, "text": " So while there is a remarkable progress in the text to image generation the progress of text to video generational lags behind largely due to two main reasons"}, {"start": 452.44, "end": 456.44, "text": " So the first one is the lack of large scale data sets with high quality text video pairs"}, {"start": 456.44, "end": 461.15999999999997, "text": " I mentioned this multiple times and some of the previous work such as cog video are using"}, {"start": 461.16, "end": 464.84000000000003, "text": " Text video data sets but um to the best of my knowledge"}, {"start": 464.84000000000003, "end": 469.56, "text": " They are super small and that's the reason why they did not achieve any any significant results yet"}, {"start": 470.36, "end": 476.44000000000005, "text": " And then the second thing the second point they make here and the complexity of modeling higher dimensional video data"}, {"start": 476.44000000000005, "end": 482.6, "text": " So again, it's very compute intensive to do anything with videos. 
So those are two of the reasons why we didn't have any"}, {"start": 483.40000000000003, "end": 486.68, "text": " Cambrian explosion when it comes to text to video models as opposed to"}, {"start": 486.68, "end": 492.28000000000003, "text": " Text to image. Okay guys, so here is the high level architecture. I'm just gonna first just give you some"}, {"start": 493.48, "end": 500.28000000000003, "text": " Yeah, high level overview and then i'm gonna go through the experiments and then i'll later get back and dig a bit deeper into how this thing works"}, {"start": 500.92, "end": 503.96000000000004, "text": " So I want you to first recognize some components that are"}, {"start": 504.76, "end": 511.16, "text": " Same as in the delete two models if you if you haven't watched any of the delete two explanation videos"}, {"start": 511.16, "end": 516.44, "text": " Because I haven't created one i'm gonna give you a brief explanation how the model works"}, {"start": 516.44, "end": 519.4, "text": " But for now, let's just kind of identify some components"}, {"start": 520.12, "end": 526.12, "text": " So first we have input text here on the left hand side. You can see a person doing yoga outdoor during the sunrise"}, {"start": 526.44, "end": 529.08, "text": " And so what you do is you basically"}, {"start": 530.12, "end": 535.32, "text": " Convert that using a clip text encoder into a text embedding"}, {"start": 535.32, "end": 540.6800000000001, "text": " And then using this friar network, which is implemented as a diffusion model"}, {"start": 541.48, "end": 544.9200000000001, "text": " Usually then you end up with some image encoding"}, {"start": 545.6400000000001, "end": 549.48, "text": " That's going to going to well because it's trained by the clip in the background"}, {"start": 549.48, "end": 556.6, "text": " It's going to be it's going to have all of the properties of clip image embeddings and then you use that clip image embedding to"}, {"start": 557.4000000000001, "end": 559.0, "text": " condition the"}, {"start": 559.0, "end": 562.36, "text": " Diffusional model and then you use that clip image embedding to"}, {"start": 562.36, "end": 568.76, "text": " Uh condition the uh diffusion model that's going to generate so initially just ignore for the sake of argument for now"}, {"start": 568.76, "end": 572.84, "text": " Just ignore the temporal dimension just so just assume we have a single image here"}, {"start": 573.24, "end": 577.4, "text": " That's going to be the way they first train it and so this is going to be a 64 times 64"}, {"start": 577.96, "end": 584.36, "text": " Um, like generated image from a diffusion model. Okay, then initially they don't have this frame interpolation network"}, {"start": 584.6, "end": 590.28, "text": " They just have additionally two super resolution models that do this one will do 256 256"}, {"start": 590.28, "end": 593.4, "text": " So from 64 64 to 256"}, {"start": 594.4399999999999, "end": 596.4399999999999, "text": " 256 resolution and finally"}, {"start": 597.86, "end": 599.9599999999999, "text": " 768 or something. 
So yeah, so here we can see"}, {"start": 600.5799999999999, "end": 605.4, "text": " 768 and then after they've done that after they train the model in the delete two fashion"}, {"start": 605.56, "end": 612.1999999999999, "text": " What they then do is they extend these architectures the units of these architectures such that they can also"}, {"start": 612.8399999999999, "end": 614.8399999999999, "text": " integrate the temporal information"}, {"start": 614.84, "end": 622.52, "text": " then they train that whole system on the uh video data set without any labels and finally they they train this"}, {"start": 623.0600000000001, "end": 628.12, "text": " Interpolation the frame interpolation module using this masking of frames again"}, {"start": 628.2, "end": 633.5600000000001, "text": " We'll see the details, but that's kind of the rough idea that the model does learns not only the spatial"}, {"start": 634.52, "end": 639.5600000000001, "text": " Structure of the natural images that the model was trained on it also learns how to"}, {"start": 639.56, "end": 645.4, "text": " To how to model the time how the how the objects are morphing from frame to frame. That's roughly"}, {"start": 645.4, "end": 649.8, "text": " That's kind of high level intuition behind behind the thing. But yeah, that's that's it. That's how it works"}, {"start": 650.52, "end": 652.52, "text": " the three-stage approach I would say"}, {"start": 653.0799999999999, "end": 658.1999999999999, "text": " And let's quickly see the results and then i'll get back to these details behind the model"}, {"start": 658.5999999999999, "end": 663.0799999999999, "text": " So let's just kind of scheme the results so you can see on this msr vtt"}, {"start": 663.4799999999999, "end": 667.0799999999999, "text": " Benchmark, which is a video benchmark proposed by previous baselines"}, {"start": 667.08, "end": 674.0400000000001, "text": " I think cog video and other and other baselines introduced it. You can see that make a video basically, uh"}, {"start": 674.9200000000001, "end": 681.48, "text": " Like heavily outperforms them across these two metrics. So f id is way lower and also clip sim is way higher"}, {"start": 681.8000000000001, "end": 688.76, "text": " So clip sim, um, by the way, what it does is so once you generate, uh the video you basically take the frames"}, {"start": 689.4000000000001, "end": 691.4000000000001, "text": " and you see how"}, {"start": 691.4000000000001, "end": 693.5600000000001, "text": " how similar that frame is to the"}, {"start": 693.56, "end": 696.8399999999999, "text": " input text prompt that was used to generate that video"}, {"start": 697.64, "end": 699.2399999999999, "text": " so how you"}, {"start": 699.2399999999999, "end": 702.52, "text": " compute the similarity is simply a dot product between the"}, {"start": 702.76, "end": 708.28, "text": " Image embedding of the frame and text embedding of the prompt and then you just find the dot product"}, {"start": 708.28, "end": 712.1199999999999, "text": " That's the similarity and then what you do is you do average across all of the frames"}, {"start": 712.4399999999999, "end": 716.68, "text": " And that's how you get the clip sim metric to the best of my knowledge. 
That's how it works"}, {"start": 716.68, "end": 723.7199999999999, "text": " One thing worth mentioning is because make a video uses clip in the background, uh, that might lead to us"}, {"start": 724.52, "end": 725.56, "text": " over"}, {"start": 725.56, "end": 731.9599999999999, "text": " Estimating the actual number here or under or vice versa underestimating the numbers here because these other baselines are not"}, {"start": 732.4399999999999, "end": 738.68, "text": " Leveraging clip and thus it might be that um, yeah, we are this this number is not that maybe"}, {"start": 739.0, "end": 743.9599999999999, "text": " Accurate as this one here. So this one is kind of showing already that the model is way better compared to the baselines"}, {"start": 743.96, "end": 749.74, "text": " Cool, uh, let's continue here a couple more results. Um, they also test it on this ucf"}, {"start": 750.6800000000001, "end": 755.02, "text": " 101 in zero shot and fine tuning setting and again it outperforms"}, {"start": 755.5600000000001, "end": 760.6, "text": " The cog video which was I think either trained or fine tuned on ucf"}, {"start": 761.24, "end": 765.0, "text": " 101 training data set and it still outperforms it"}, {"start": 765.0, "end": 769.48, "text": " So that's kind of fascinating even though make a video did not use images from ucf"}, {"start": 769.48, "end": 775.4, "text": " Even though make a video did not use images from ucf. So now i'm not sure about the overlap between"}, {"start": 775.72, "end": 782.9200000000001, "text": " Lion which they are using for training make a video and ucf 101 so there might be some data pollution there"}, {"start": 783.8000000000001, "end": 787.16, "text": " But if no if not, then these results are super fascinating"}, {"start": 787.8000000000001, "end": 793.72, "text": " Uh, and also you can see in the fine tuning approach make a video again, just outperforms all of the baselines, uh"}, {"start": 794.36, "end": 796.12, "text": " heavily"}, {"start": 796.12, "end": 801.88, "text": " Okay, guys, and then what I do additionally is they do human evaluation. So they give um"}, {"start": 803.0, "end": 805.12, "text": " They they have these humans on the mech"}, {"start": 806.64, "end": 807.72, "text": " Amazone Mechanical, uh turk"}, {"start": 807.72, "end": 810.12, "text": " And basically what they do is they show side by side"}, {"start": 810.2, "end": 814.84, "text": " Here is a video from one model from our model for make a video and then the model from the baseline"}, {"start": 815.16, "end": 821.72, "text": " And they ask which one is higher quality and if it's 50 then they're kind of pretty much similar or the same"}, {"start": 821.72, "end": 826.6800000000001, "text": " So you can see here. It's always like above 50. So that means this make a video is is way better"}, {"start": 827.24, "end": 831.32, "text": " Uh, even in these uh human eval like, um experiments"}, {"start": 832.2, "end": 838.9200000000001, "text": " the faithfulness column basically tells you they ask the uh, um, those reviewers to to basically"}, {"start": 839.5600000000001, "end": 843.1600000000001, "text": " Judge how close how nicely is the generated video?"}, {"start": 843.96, "end": 849.96, "text": " Described by the text that was used to generate it and that's that's the faithfulness metric and again they're comparing"}, {"start": 849.96, "end": 853.64, "text": " Uh, they are comparing both videos side by side. 
Okay guys"}, {"start": 853.64, "end": 860.0400000000001, "text": " So finally before I dig into the details of the model implementation a bit deeper. Let me show you a couple more results. So"}, {"start": 860.9200000000001, "end": 862.9200000000001, "text": " Here's again the comparison between vdm"}, {"start": 863.0, "end": 865.64, "text": " So the the video diffusion models on the top"}, {"start": 866.0400000000001, "end": 870.44, "text": " We have cog video in the middle and we have the make a video on the bottom and you can again see that"}, {"start": 870.84, "end": 872.84, "text": " the two baselines here are kind of"}, {"start": 873.32, "end": 875.32, "text": " way more abstract and uh"}, {"start": 875.32, "end": 880.84, "text": " Blurry if you zoom in here the the grass doesn't have all of the like the texture information and the details"}, {"start": 881.24, "end": 886.6, "text": " Whereas this one looks uh way better just eyeballing it. It does look better compared to the baselines"}, {"start": 887.24, "end": 893.8000000000001, "text": " Then they showcased again from a static image. They generate an animated video. That's kind of cool. They here show"}, {"start": 894.34, "end": 899.24, "text": " Interpolation so this is the the target image. This is the source image and they they basically show"}, {"start": 899.24, "end": 905.08, "text": " Um how they can interpolate very nicely between those images whereas some of the baselines such as film"}, {"start": 905.5600000000001, "end": 913.0, "text": " Is struggling with understanding like some concepts like like that human hands are not supposed to to to break all of a sudden"}, {"start": 913.64, "end": 921.0, "text": " When you're interpolating between the images, so it does like some some of the um world knowledge and you can see that on the other hand"}, {"start": 922.52, "end": 926.92, "text": " This model that the make a video is is managing to model the the the"}, {"start": 926.92, "end": 931.9599999999999, "text": " The the the uh, I guess temporal has some temporal consistency and understanding I guess"}, {"start": 932.8399999999999, "end": 939.3199999999999, "text": " Cool, and finally video variation. We already saw uh actual videos on their landing page. So i'm gonna skip that one but it's uh"}, {"start": 940.1999999999999, "end": 942.1999999999999, "text": " Very very nice to have okay"}, {"start": 943.0, "end": 950.28, "text": " final thing they mentioned here, uh, the the the some of the uh downsides of the fact how they train the model and that's"}, {"start": 951.4, "end": 953.0, "text": " That they are only"}, {"start": 953.0, "end": 957.56, "text": " Using static images and text descriptions because of that. 
Let me read this for you"}, {"start": 958.52, "end": 965.8, "text": " So as discussed earlier our approach cannot learn associations between text and phenomenon that can only be inferred in videos"}, {"start": 967.0, "end": 973.8, "text": " So how to incorporate these along with generating longer videos with multiple scenes and events depicting more detailed stories is left for future work"}, {"start": 974.12, "end": 980.2, "text": " And obviously the reason for that is because they do not have a video text data set again mentioning that"}, {"start": 980.2, "end": 984.36, "text": " Um, so yeah things like generating a video of a person"}, {"start": 984.6800000000001, "end": 988.6800000000001, "text": " waving their hand left to right or right to left the model cannot"}, {"start": 989.4000000000001, "end": 991.4000000000001, "text": " do that reliably because"}, {"start": 992.0400000000001, "end": 995.6400000000001, "text": " Obviously static static images you cannot learn that from from a static image"}, {"start": 996.12, "end": 1000.12, "text": " Okay, guys, let's now see how the model works in a bit more detail"}, {"start": 1001.1600000000001, "end": 1005.4000000000001, "text": " And ultimately the first thing you have to understand is how the lead to works"}, {"start": 1005.4, "end": 1009.9599999999999, "text": " So let me try and digest it. Obviously, it's going to be hard to explain it in a couple minutes"}, {"start": 1011.0, "end": 1013.48, "text": " And i'm going to refer you to some of my previous videos"}, {"start": 1014.1999999999999, "end": 1021.56, "text": " But yeah stick with me. Okay, so there is uh, roughly two two stages of how you train the lead to so the first one is"}, {"start": 1021.64, "end": 1023.56, "text": " train clip and"}, {"start": 1023.56, "end": 1025.8799999999999, "text": " For those of you who are not familiar with clip"}, {"start": 1025.8799999999999, "end": 1031.8799999999999, "text": " I already covered the paper and i've also done a code walkthrough of the original open eyes"}, {"start": 1031.88, "end": 1036.44, "text": " Uh repost you can check out those videos if you want to know a bit more detail about clip"}, {"start": 1037.0800000000002, "end": 1041.64, "text": " But on a very high level what clip does is it learns these this very rich"}, {"start": 1042.1200000000001, "end": 1049.5600000000002, "text": " Embedding joint embedding space between images and text such that you have this cool property where you grab a piece of text such as a corgi"}, {"start": 1049.64, "end": 1051.64, "text": " playing a flame throwing trumpet"}, {"start": 1051.96, "end": 1054.38, "text": " You basically embed it using a text encoder"}, {"start": 1054.8400000000001, "end": 1056.8400000000001, "text": " You get this embedding text vector"}, {"start": 1056.84, "end": 1061.9599999999998, "text": " And then if you do the same thing for some set of images and you do basically a dot product"}, {"start": 1061.9599999999998, "end": 1064.1999999999998, "text": " So you calculate the similarity between those vectors"}, {"start": 1064.4399999999998, "end": 1071.56, "text": " You'll find that the highest similarities exactly for those images which are best described by the text as decided by human beings"}, {"start": 1071.9599999999998, "end": 1073.9599999999998, "text": " so, uh, basically you can see here"}, {"start": 1074.36, "end": 1078.76, "text": " There would be a very high similarity between this image and this this piece of text"}, {"start": 1078.9199999999998, "end": 
1080.9199999999998, "text": " But if I gave you an image of like a car"}, {"start": 1080.92, "end": 1088.2, "text": " And if I embedded the image of a car into this vector then the dot product so the similarity between that vector and this"}, {"start": 1088.3600000000001, "end": 1094.1200000000001, "text": " Particular text would be the similarity would be way lower and that's a nice property that clip gives us"}, {"start": 1094.44, "end": 1096.44, "text": " Okay, so once you have the clip"}, {"start": 1096.76, "end": 1103.8000000000002, "text": " And hopefully you already knew how clip works, but yeah, and and if not, hopefully the the the short description helped you"}, {"start": 1104.28, "end": 1109.88, "text": " Uh, okay. So then the the next uh step is to uh, basically train this model called the prior"}, {"start": 1109.88, "end": 1115.3200000000002, "text": " And you can see that in the original paper open ai tried a couple of different model"}, {"start": 1115.8600000000001, "end": 1117.3200000000002, "text": " architectures not architectures"}, {"start": 1117.3200000000002, "end": 1119.16, "text": " So so so basically generative models"}, {"start": 1119.4, "end": 1125.3200000000002, "text": " So one is the autoregressive model here and the second one is the diffusion model here and they ended up using"}, {"start": 1125.4, "end": 1132.6000000000001, "text": " Diffusion because it gave better results surprise surprise. Okay. So what prior does it learn it learns how to given a particular"}, {"start": 1132.8400000000001, "end": 1134.8400000000001, "text": " text embedding vector it learns a"}, {"start": 1135.5600000000002, "end": 1137.5600000000002, "text": " set of of plausible"}, {"start": 1137.8000000000002, "end": 1138.68, "text": " image"}, {"start": 1138.68, "end": 1142.1200000000001, "text": " Embedding vectors as as basically decided by clip"}, {"start": 1142.44, "end": 1147.16, "text": " So let me let me kind of dissect what I mean by that. Let me let me read this passage here. It's kind of important"}, {"start": 1147.72, "end": 1152.76, "text": " So for the diffusion prior we train a decoder only transformer. So let me first break here"}, {"start": 1153.16, "end": 1160.44, "text": " Let me stop here because it's decoder only so if you are familiar with diffusion models, you've mostly seen uh, unit models"}, {"start": 1160.76, "end": 1165.16, "text": " So the reason they're not using unit here is because we are not trying to generate an image"}, {"start": 1165.16, "end": 1169.64, "text": " we're trying to generate an embedding vector and so because of the the way of"}, {"start": 1170.68, "end": 1175.8000000000002, "text": " Because of the nature of the input and output of this prior and that's that we have a text embedding vector here and and image"}, {"start": 1175.8000000000002, "end": 1182.1200000000001, "text": " Embedding vector here. They're not using a unit and instead they are using a decoder only transformer. 
Okay, so let's continue now"}, {"start": 1182.68, "end": 1187.88, "text": " So we train a decoder only transformer with a causal attention mask on a sequence consisting of in order"}, {"start": 1188.3600000000001, "end": 1193.48, "text": " The encoded text so it's going to be the bpe encoded text the clip text embedding"}, {"start": 1193.48, "end": 1198.44, "text": " So that's going to be this thing here and embedding for the diffusion time step"}, {"start": 1198.76, "end": 1202.68, "text": " So again, that's like just the t and then embed it into some multi-dimensional space"}, {"start": 1204.28, "end": 1206.6, "text": " And then the noised clip image embedding"}, {"start": 1206.68, "end": 1206.84, "text": " Okay"}, {"start": 1206.84, "end": 1212.52, "text": " So it's going to be the noisy version of this one and that corresponds directly to how when you're training diffusion models on images"}, {"start": 1212.68, "end": 1215.88, "text": " You pass in the noisy image. So here you pass in the noisy"}, {"start": 1216.52, "end": 1218.52, "text": " image embedding vector, okay"}, {"start": 1218.52, "end": 1225.72, "text": " And finally and a final embedding whose output from the transformer is used to predict the unnoised clip image embedding"}, {"start": 1225.8799999999999, "end": 1231.32, "text": " Okay, so that's the idea you're kind of training get let me let me try and and draw that but uh, hopefully that was clear"}, {"start": 1231.32, "end": 1234.2, "text": " So you kind of have a decoder only transformer here?"}, {"start": 1235.32, "end": 1239.96, "text": " And you pass all of those different components which i'm not going to repeat again"}, {"start": 1239.96, "end": 1245.48, "text": " But like you have a bunch of these components like text embeddings blah blah blah and then ultimately you have"}, {"start": 1245.48, "end": 1253.4, "text": " this final embedding and on top of it you'll be generating the the unnoised version of the"}, {"start": 1254.1200000000001, "end": 1256.1200000000001, "text": " of the image embedding"}, {"start": 1256.68, "end": 1262.28, "text": " Now important thing to understand here is that the ground truth is provided by the previously trained clip model"}, {"start": 1262.28, "end": 1265.88, "text": " So let's assume this is the let's assume this is the oops"}, {"start": 1266.44, "end": 1272.44, "text": " This is the text embedding from clip then this is the image embedding of that"}, {"start": 1272.44, "end": 1277.88, "text": " Text image pair so these two are connected. This is like the ground truth as provided by clip"}, {"start": 1278.1200000000001, "end": 1284.04, "text": " And because of this it's going to learn reasonable image embeddings given a text embedding and"}, {"start": 1285.4, "end": 1289.64, "text": " Thus learn the prior behind the clip model if that makes sense, okay"}, {"start": 1289.64, "end": 1297.48, "text": " So why it's called prior is because given a text embedding vector. It's going to basically generate some plausible"}, {"start": 1297.48, "end": 1301.64, "text": " set of of image embedding vectors and"}, {"start": 1302.28, "end": 1306.44, "text": " Obviously because of the stochastic nature of diffusion models every time will generate a bit different"}, {"start": 1306.84, "end": 1308.84, "text": " Image embedding vector, but it's going to be"}, {"start": 1309.4, "end": 1314.28, "text": " Well centered around some mean of that of that output distribution. 
That's how that's why it's called like a prior"}, {"start": 1314.28, "end": 1319.88, "text": " It's setting a prior across the the possible meaningful image embeddings given that particular"}, {"start": 1320.44, "end": 1322.44, "text": " Text embedding. Okay. Hopefully that makes sense"}, {"start": 1322.44, "end": 1328.44, "text": " So once you have that then you train a decoder which is also as you can see by the diagram"}, {"start": 1328.44, "end": 1333.16, "text": " So so when you see these kind of dots these circles, that's a diffusion model"}, {"start": 1333.88, "end": 1339.0800000000002, "text": " And so what you then do is you again now you have your your regular"}, {"start": 1339.48, "end": 1343.48, "text": " Diffusion model training you pass in the noisy image and you try to denoise it"}, {"start": 1343.72, "end": 1349.3200000000002, "text": " And you just additionally condition that model throughout that process using this particular"}, {"start": 1349.32, "end": 1352.28, "text": " That model throughout that process using this particular"}, {"start": 1352.84, "end": 1354.36, "text": " Image embedding, okay"}, {"start": 1354.36, "end": 1360.28, "text": " And by doing that later on when you pass a purely noisy image and a particular image embedding that"}, {"start": 1360.9199999999998, "end": 1365.8, "text": " That diffusion model is going to generate something that that roughly corresponds to that particular image embedding"}, {"start": 1366.12, "end": 1370.2, "text": " Okay, hopefully that makes sense and that's how how clip works in a nutshell"}, {"start": 1370.6799999999998, "end": 1371.8799999999999, "text": " There is a couple more details"}, {"start": 1371.8799999999999, "end": 1372.36, "text": " obviously"}, {"start": 1372.36, "end": 1377.72, "text": " There is like two additional super resolution models because this is going to generate like a 64 by 64"}, {"start": 1377.72, "end": 1381.0, "text": " image and then you need to kind of upscale it to two bigger resolutions"}, {"start": 1381.8, "end": 1385.88, "text": " Again, just to recap on a high level what you now do if you want to generate an image"}, {"start": 1386.76, "end": 1391.08, "text": " in the lead two is you grab a piece of text you embed it into the"}, {"start": 1392.28, "end": 1399.32, "text": " This this text embedding then you pass the text embedding into the prior prior is going to generate a reasonable"}, {"start": 1399.72, "end": 1400.44, "text": " well"}, {"start": 1400.44, "end": 1405.96, "text": " Image embedding and then you use that image embedding and you pass in a noisy image into the diffusion model"}, {"start": 1405.96, "end": 1412.92, "text": " To generate the 64 by 64 image and then you basically upscale it two times using the super resolution models"}, {"start": 1413.32, "end": 1415.16, "text": " That's it on a high level"}, {"start": 1415.16, "end": 1420.6000000000001, "text": " I'm not going to go any deeper than that. Okay. 
So now the question is how do we get the temporal part?"}, {"start": 1420.6000000000001, "end": 1427.56, "text": " So how do we now generate videos and the the idea is to expand the convolutions and attention layers"}, {"start": 1427.96, "end": 1432.1200000000001, "text": " Such that they can integrate the temporal information additionally"}, {"start": 1432.12, "end": 1436.6, "text": " Okay, so let me slowly start digging into those details"}, {"start": 1437.2399999999998, "end": 1439.6399999999999, "text": " So first just to recap the delete two part"}, {"start": 1439.6399999999999, "end": 1442.76, "text": " So they say here prior to the addition of the temporal components"}, {"start": 1442.76, "end": 1448.1999999999998, "text": " We train the backbone of our method a t2i so text to image model trained on text image pairs"}, {"start": 1448.56, "end": 1453.0, "text": " Sharing the core components with the work of our mesh at all. So that's the delete two paper authors"}, {"start": 1453.0, "end": 1453.4799999999998, "text": " Okay"}, {"start": 1453.4799999999998, "end": 1460.76, "text": " And there is a couple of components as I said prior network P a decoder network D and two super resolution networks"}, {"start": 1460.76, "end": 1466.04, "text": " S r l and and h like standing for I guess lower s and higher s. Okay guys"}, {"start": 1466.04, "end": 1469.48, "text": " So now let's see how the temporal piece comes into the picture"}, {"start": 1470.6, "end": 1475.16, "text": " They say here in order to expand the two-dimensional conditional network into the temporal dimension"}, {"start": 1475.16, "end": 1480.92, "text": " We modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos"}, {"start": 1481.08, "end": 1483.72, "text": " So come convolutional layers and attention layers"}, {"start": 1484.84, "end": 1487.4, "text": " Outer layers like fully connected don't have to be modified"}, {"start": 1487.4, "end": 1492.68, "text": " Okay, and then they say temporal modifications are made in most unit based diffusion networks"}, {"start": 1493.3200000000002, "end": 1497.88, "text": " So that means they have to modify the spatial temporal decoder dt"}, {"start": 1498.3600000000001, "end": 1503.5600000000002, "text": " That is now generating 16 RGB frames. Okay, so we are starting from a diffusion model"}, {"start": 1503.5600000000002, "end": 1511.0, "text": " That was generating a single RGB 64 by 64 image here and we are in extending it to generate"}, {"start": 1511.52, "end": 1513.52, "text": " 16 RGB frames, okay"}, {"start": 1513.52, "end": 1519.12, "text": " We're going to see the details of how that works up in a second and then not only that but they also say new"}, {"start": 1519.12, "end": 1522.24, "text": " They also modify the newly added frame interpolation network"}, {"start": 1523.12, "end": 1529.28, "text": " This upper arrow f denoted as upper arrow f increasing the effective frame rate by interpolating between the 16"}, {"start": 1529.84, "end": 1536.4, "text": " Generated frames as depicted in figure 2 so and finally the super resolution network. 
So all of those will need to be modified"}, {"start": 1536.8799999999999, "end": 1540.6399999999999, "text": " They say this part so interpolating between the 16 generated frames"}, {"start": 1540.64, "end": 1546.0800000000002, "text": " You can see that it's going to have this this this masking logic where it's going to try to infer"}, {"start": 1546.4, "end": 1553.44, "text": " What what should be generated in those in those black frames and by doing that you can do either interpolation or like extrapolation"}, {"start": 1554.24, "end": 1561.2800000000002, "text": " Of your generated 16 frame video. That's going to be the thing with this this interpolation network. Okay"}, {"start": 1562.24, "end": 1566.88, "text": " So they they say a couple more details for the super resolution module"}, {"start": 1566.88, "end": 1572.88, "text": " So this srl so the lower s module operates across spatial and temporal dimensions"}, {"start": 1573.68, "end": 1580.4, "text": " In qualitative inspection we found this to significantly outperform per frame super resolution where you don't model the temporal information"}, {"start": 1580.8000000000002, "end": 1583.0400000000002, "text": " It is challenging to extend the srh"}, {"start": 1583.0400000000002, "end": 1589.8400000000001, "text": " So the high res network to the temporal dimension due to memory and compute constraints as well as the scarcity of higher resolution video data"}, {"start": 1589.8400000000001, "end": 1594.0, "text": " So srh operates only along the spatial dimensions"}, {"start": 1594.0, "end": 1600.08, "text": " But to encourage consistent detailed hallucination across frames we use the same noise initialization for each frame"}, {"start": 1600.08, "end": 1602.64, "text": " Okay, so that's why they have in this picture here"}, {"start": 1602.64, "end": 1608.32, "text": " You can see that it's this is kind of denoted like fixed noise because this one is only doing frame"}, {"start": 1608.32, "end": 1616.72, "text": " You're literally doing you're using the same model and just doing the the upscaling whereas this one is also actually has the temporal information"}, {"start": 1617.76, "end": 1619.76, "text": " integrated into the architecture"}, {"start": 1620.64, "end": 1622.64, "text": " Okay"}, {"start": 1622.64, "end": 1624.48, "text": " Okay"}, {"start": 1624.48, "end": 1628.64, "text": " That out of the way, let's just continue slowly and try and digest what's going on"}, {"start": 1629.76, "end": 1635.2, "text": " Okay, let's see how they create these pseudo 3d convolutional layers. So because"}, {"start": 1636.3000000000002, "end": 1640.42, "text": " Regular come 3d layers are very compute intensive"}, {"start": 1640.8000000000002, "end": 1648.26, "text": " They uh opt on the side of using this pseudo 3d and it's very if you're familiar with the concept of depthwise separable convolutions"}, {"start": 1648.26, "end": 1653.3799999999999, "text": " This does a very similar thing. 
So you first do spatial processing and then you do the temporal processing"}, {"start": 1653.3799999999999, "end": 1657.62, "text": " So you kind of decouple those as opposed to doing a joint processing of both of those"}, {"start": 1657.86, "end": 1658.82, "text": " Okay"}, {"start": 1658.82, "end": 1661.3, "text": " So it's one of the reasons of for not using the 3d"}, {"start": 1661.84, "end": 1664.82, "text": " Computational load and then the second reason they say is well"}, {"start": 1664.9, "end": 1669.46, "text": " They want to retain the previously learned spatial knowledge in the spatial convolutions weights, right?"}, {"start": 1669.62, "end": 1676.34, "text": " So because you first train the on the on the image text pairs you want to retain those weights and just start from there"}, {"start": 1676.34, "end": 1679.6999999999998, "text": " And that's what I do. They they literally do identity"}, {"start": 1680.58, "end": 1682.1, "text": " they have"}, {"start": 1682.1, "end": 1685.9399999999998, "text": " They initialize the network initially using the identity function because of that initially"}, {"start": 1686.02, "end": 1688.74, "text": " It's it's behaving as if you had 16"}, {"start": 1689.52, "end": 1696.58, "text": " Independent 2d diffusion models. Okay. So here is how the how the process roughly looks like you grab the"}, {"start": 1697.62, "end": 1699.62, "text": " the input tensor which is"}, {"start": 1700.6399999999999, "end": 1702.6399999999999, "text": " bcfhw because you have"}, {"start": 1702.64, "end": 1708.8000000000002, "text": " Well see number of channels and this is the image spatial extent number of frames and batch dimension"}, {"start": 1709.0400000000002, "end": 1713.92, "text": " And so what I do is they first apply 2d. 
So it's going to be apply a spatial"}, {"start": 1714.46, "end": 1717.6000000000001, "text": " processing of the of the of the frames and"}, {"start": 1718.24, "end": 1723.92, "text": " After that they do transpose operation which is going to lead to them processing the temporal"}, {"start": 1724.48, "end": 1729.6000000000001, "text": " Dimension now so that's when they apply the com 1d and I think it should be 2d here as well"}, {"start": 1729.6, "end": 1735.36, "text": " But they're just doing like one times one, uh, like uh, uh 2d convolution basically"}, {"start": 1735.4399999999998, "end": 1741.9199999999998, "text": " So yeah, and then finally they also apply the the transpose operation so that they get back into this canonical"}, {"start": 1742.56, "end": 1744.32, "text": " shape here"}, {"start": 1744.32, "end": 1747.4599999999998, "text": " Okay, and as I previously mentioned note that at initialization"}, {"start": 1748.24, "end": 1754.9599999999998, "text": " The network will generate k different images due to random noise each faithful 2d input text"}, {"start": 1754.9599999999998, "end": 1756.9599999999998, "text": " But lacking any temporal coherence"}, {"start": 1756.96, "end": 1762.56, "text": " So let me explain why that is to understand that we need to check out this image here"}, {"start": 1762.8, "end": 1764.8, "text": " Okay, so here is how this is implemented"}, {"start": 1764.88, "end": 1772.1000000000001, "text": " So we first do the the temporal the spatial processing so you can just treat these as as as like three independent"}, {"start": 1772.72, "end": 1776.16, "text": " Diffusion models even though they are not but like for the sake of arguments"}, {"start": 1776.24, "end": 1782.24, "text": " So you first process the spatial information and then you can see here the way they initialize these temporal"}, {"start": 1782.24, "end": 1790.08, "text": " Filters is that you can see one here and then zeros here and because of that you you will basically end up"}, {"start": 1790.24, "end": 1795.68, "text": " Uh, just copy pasting the uh output so these feature maps to the next stage"}, {"start": 1795.68, "end": 1802.48, "text": " So initially because these are initialized the way they are you you'll just end up copy pasting these feature maps"}, {"start": 1802.48, "end": 1808.16, "text": " You'll just copy paste this feature map forward and then you'll continue on your processing and because of that"}, {"start": 1808.16, "end": 1816.0, "text": " You literally have 16 independent diffusion models that are going to generate 16 images that corresponds to correspond to the text prompt"}, {"start": 1816.16, "end": 1820.8000000000002, "text": " But they are very different from each other and there is no temporal consistency consistency. Okay?"}, {"start": 1821.68, "end": 1828.16, "text": " Temporal consistency will be a later learned using the unlabeled video data set. We'll get there a bit later"}, {"start": 1828.3200000000002, "end": 1831.52, "text": " Okay, so that's the first part. 
Hopefully that that kind of makes sense"}, {"start": 1832.24, "end": 1836.16, "text": " And then they have to do the same thing for the attention layers very similarly"}, {"start": 1836.16, "end": 1838.88, "text": " Uh, you can see here they first, um"}, {"start": 1839.76, "end": 1846.72, "text": " Basically do some flattening such that we have uh h times w tokens then they apply the attention which is going to be like a"}, {"start": 1847.0400000000002, "end": 1848.96, "text": " basically, um"}, {"start": 1848.96, "end": 1855.2, "text": " Complete attention pattern every token was going to attend everything every other spatial token then they do transpose that's going to"}, {"start": 1855.3600000000001, "end": 1860.72, "text": " Uh allow us to process that to do the same thing for the temporal extent and then again they do"}, {"start": 1860.72, "end": 1867.2, "text": " Uh transpose such that we are in the back in the canonical shape and then i'm flatten is going to return this into h"}, {"start": 1868.48, "end": 1870.08, "text": " times"}, {"start": 1870.08, "end": 1878.16, "text": " W if that makes sense, okay, and they say similarly to conv p3dp standing for pseudo to allow to allow for smooth"}, {"start": 1878.8, "end": 1883.28, "text": " Spatial temporal initialization the attention to the layer is initialized from the pre-trained"}, {"start": 1883.6000000000001, "end": 1889.2, "text": " T2i model and the attention 1d layer is initialized as the identity function initially, okay"}, {"start": 1889.2, "end": 1891.76, "text": " So let's again see the diagram here"}, {"start": 1892.4, "end": 1896.48, "text": " Uh, the diagrams are quite weak. Uh, and also the the code is not released"}, {"start": 1896.48, "end": 1902.32, "text": " So I couldn't go and dig into the code and understand some of the details that are kind of ambiguous or not clear enough from the paper"}, {"start": 1902.32, "end": 1908.24, "text": " But yeah, I i'll do the best I can to to to infer the the correct mechanisms here"}, {"start": 1909.04, "end": 1911.3600000000001, "text": " So so we first have the spatial attention"}, {"start": 1911.3600000000001, "end": 1917.8400000000001, "text": " So you do do the spatial attention and then you can see that this temporal attention because of the way this is initialized"}, {"start": 1917.84, "end": 1922.32, "text": " There are all zeros here and because of the fact we have a skipped connection"}, {"start": 1922.8799999999999, "end": 1928.08, "text": " That basically means that initially it's behaving as if there is no temporal"}, {"start": 1928.9599999999998, "end": 1935.1999999999998, "text": " Modules being added and because of that they preserve the identity the the equivalence with the text to image"}, {"start": 1935.84, "end": 1937.4399999999998, "text": " model"}, {"start": 1937.4399999999998, "end": 1943.6, "text": " So the parts that are kind of confusing is like nothing is labeled like I don't see any dimensions being being denoted here like"}, {"start": 1943.6, "end": 1950.8, "text": " Embedded image tokens. I'm not sure what this is referring to. It's kind of weak to be honest. Um"}, {"start": 1952.1599999999999, "end": 1955.4399999999998, "text": " I'm not sure whether I'll be able to reconstruct the paper from just like"}, {"start": 1956.48, "end": 1961.12, "text": " From just the informations that they have here. 
So yeah, okay guys, let's continue here"}, {"start": 1961.1999999999998, "end": 1967.6799999999998, "text": " There is only a couple more paragraphs worth explaining here. So let's let's focus on this frame interpolation network"}, {"start": 1967.76, "end": 1968.6399999999999, "text": " Okay"}, {"start": 1968.64, "end": 1974.8000000000002, "text": " So let's kind of read this in addition to the spatial temporal modifications discussed in the previous section. We train a new masked frame"}, {"start": 1975.1000000000001, "end": 1977.92, "text": " Interpolation and extrapolation network arrow up f"}, {"start": 1978.5600000000002, "end": 1986.16, "text": " Capable of increasing the number of frames of the generated video either by frame interpolation for a smoother generated video or by pre"}, {"start": 1986.24, "end": 1988.5600000000002, "text": " Post frame extrapolation for extending the video length"}, {"start": 1988.5600000000002, "end": 1989.2, "text": " Okay"}, {"start": 1989.2, "end": 1995.68, "text": " So in order to increase the frame rate within memory and compute constraints we fine tune a spatio temporal decoder"}, {"start": 1995.68, "end": 2003.76, "text": " On the task of masked frame interpolation by zero padding the masked input frames enabling video up sampling"}, {"start": 2004.64, "end": 2010.3200000000002, "text": " Okay, so this is again where the three-stage approach kicks in so they first train this spatio temporal decoder"}, {"start": 2010.64, "end": 2013.78, "text": " Dt using the unlabeled unsupervised"}, {"start": 2014.4, "end": 2021.2, "text": " Vision data set a video data set and then they they use it to to fine tune this particular model. So"}, {"start": 2021.2, "end": 2026.4, "text": " Um, they mentioned them here when fine tuning on masked frame interpolation"}, {"start": 2026.4, "end": 2033.68, "text": " We add an additional four channels to the input of the unit three channels for the rgb masked video input"}, {"start": 2034.4, "end": 2037.3600000000001, "text": " And an additional binary channel indicating which frames are masked"}, {"start": 2037.92, "end": 2040.48, "text": " Okay, so these three channels, I guess makes sense"}, {"start": 2040.56, "end": 2048.08, "text": " Although this is kind of in discrepancy to the drawing they have up here because this kind of makes you think that they always"}, {"start": 2048.7, "end": 2050.08, "text": " completely"}, {"start": 2050.08, "end": 2058.16, "text": " Zero out the frames and what this is kind of telling me is that they just have additional flexibility of having a"}, {"start": 2058.54, "end": 2064.0, "text": " Arbitrary patch inside of the image being masked out instead. So let me just go back here again"}, {"start": 2065.04, "end": 2067.7599999999998, "text": " So they have these four channels. So three channels"}, {"start": 2068.4, "end": 2073.2799999999997, "text": " For the rgb blah blah blah. So that basically means the following so you can literally"}, {"start": 2074.48, "end": 2075.44, "text": " grab"}, {"start": 2075.44, "end": 2080.88, "text": " Those three frames. Let's take one of them like maybe one that corresponds to our channel like the red channel"}, {"start": 2081.12, "end": 2085.2000000000003, "text": " and then you can literally mask out only a particular part of the image and then"}, {"start": 2086.08, "end": 2093.12, "text": " Forced the the the interpolation network to to learn what should be inside of their inside there. 
Okay, so um"}, {"start": 2093.6, "end": 2098.48, "text": " Whereas the diagram up there shows that they're constantly zeroing out the whole frames"}, {"start": 2099.04, "end": 2103.76, "text": " But the part that's kind of confusing to me is this binary channel indicating which frames are masked"}, {"start": 2103.76, "end": 2106.0800000000004, "text": " so that kind of alludes"}, {"start": 2107.28, "end": 2109.28, "text": " That those are either"}, {"start": 2110.0800000000004, "end": 2113.36, "text": " All ones or all zeros depending on whether we want to"}, {"start": 2114.0800000000004, "end": 2115.6800000000003, "text": " infer"}, {"start": 2115.6800000000003, "end": 2120.32, "text": " on whether that the that image is masked or not, but that's kind of confusing because"}, {"start": 2121.1200000000003, "end": 2123.1200000000003, "text": " obviously for for the"}, {"start": 2123.1200000000003, "end": 2128.96, "text": " Images where you don't want to do any masking you just don't do the masking and then those three channels should have enough information"}, {"start": 2128.96, "end": 2136.08, "text": " So i'm not sure why the fourth channel that's kind of confusing if anyone knows why feel free to comment down below"}, {"start": 2136.56, "end": 2138.56, "text": " Okay, so"}, {"start": 2138.64, "end": 2144.0, "text": " Then they say we're functioning with variable frame skips and fps so the frame uh per second"}, {"start": 2145.1, "end": 2149.36, "text": " Conditioning to enable multiple temporal up sample rates at inference time"}, {"start": 2150.0, "end": 2151.28, "text": " Okay"}, {"start": 2151.28, "end": 2156.96, "text": " So we denote uh blah blah blah as the operator that expands the given video tensor through mass frame interpolation"}, {"start": 2156.96, "end": 2159.12, "text": " Okay for all our experiments we applied"}, {"start": 2159.76, "end": 2165.68, "text": " This interpolation network with frameskip 5 to up sample a 16 frame video to 76 frames"}, {"start": 2165.68, "end": 2171.68, "text": " So here they are referring to just the eval we've seen previously where they always uh expand to this exact"}, {"start": 2172.32, "end": 2178.8, "text": " Frame size but like in general during the training they they allow for additional flexibility by as you can see here by by"}, {"start": 2179.04, "end": 2181.04, "text": " Changing the frame skips and the fps"}, {"start": 2181.04, "end": 2186.4, "text": " Um, they don't have to always extrapolate to this particular number to this particular number of frames"}, {"start": 2186.88, "end": 2191.04, "text": " So what this basically means is that what they do is they they grab an image?"}, {"start": 2192.4, "end": 2194.4, "text": " And then they have four images here"}, {"start": 2194.96, "end": 2200.16, "text": " They're going to be masked out. So these are just going to be masked out and then they have an image again"}, {"start": 2200.8, "end": 2202.8, "text": " and etc, etc"}, {"start": 2202.8, "end": 2205.7599999999998, "text": " And because they have in total 16 of the frames"}, {"start": 2206.4, "end": 2209.52, "text": " you can count that um, you'll have well you'll have"}, {"start": 2209.52, "end": 2212.88, "text": " 15 of these right you'll have 15 of these"}, {"start": 2213.68, "end": 2217.28, "text": " And then the last one starts ends here. 
So it's going to be 15"}, {"start": 2218.08, "end": 2219.52, "text": " times"}, {"start": 2219.52, "end": 2224.24, "text": " Five plus the last one plus one and that's why they end up with 76"}, {"start": 2225.12, "end": 2230.0, "text": " Okay, so that's kind of how they do it for the eval they they kind of"}, {"start": 2230.56, "end": 2234.24, "text": " Have these four here and then they infer what's going on in those frames"}, {"start": 2234.24, "end": 2240.08, "text": " And because of the way how the decoder was trained you you'll have the temporal consistency built in"}, {"start": 2240.56, "end": 2245.2, "text": " Okay, let's see a couple more details and then i'm going to try and make a high level overview. What's going on?"}, {"start": 2246.08, "end": 2251.2799999999997, "text": " So the different components of make a video described above are trained independently. Okay, that's very important"}, {"start": 2251.8399999999997, "end": 2255.2799999999997, "text": " So the only component that receives text as input is the prior p"}, {"start": 2255.6, "end": 2258.8799999999997, "text": " We train it on pair text image data and do not fine tune it on videos"}, {"start": 2258.88, "end": 2264.56, "text": " The decoder prior and two super resolution components are first trained on images alone. No align text"}, {"start": 2264.96, "end": 2268.32, "text": " Recall that the decoder receives clip image embeddings as input"}, {"start": 2268.7200000000003, "end": 2272.88, "text": " And the super resolution components receive down sample images as input during training"}, {"start": 2273.12, "end": 2277.92, "text": " Okay, so that's the dali 2 part. Okay, and now comes the second the second and the third stage"}, {"start": 2278.1600000000003, "end": 2282.48, "text": " So after training on images we add and initialize the new temporal layers"}, {"start": 2283.36, "end": 2285.76, "text": " And fine tune them over on layers"}, {"start": 2285.76, "end": 2288.8, "text": " and fine tune them over unlabeled video data"}, {"start": 2289.6000000000004, "end": 2296.1600000000003, "text": " 16 frames are sampled from the original video with random fps ranging from 1 to 30"}, {"start": 2297.0400000000004, "end": 2303.0400000000004, "text": " We use beta function for sampling and while training the decoder start from higher fps ranges"}, {"start": 2304.0, "end": 2309.0400000000004, "text": " Which is less motion because you're taking frames with a very small temporal delta"}, {"start": 2309.04, "end": 2315.12, "text": " And then transition to lower fps. So that means you're you're now going to be sampling with a lot more space"}, {"start": 2315.68, "end": 2321.84, "text": " Time wise so so with a bigger time delta between those frames and thus you'll have more motion naturally"}, {"start": 2321.84, "end": 2326.4, "text": " Okay, and finally the masked frame interpolation component is fine tuned from the temporal decoder"}, {"start": 2327.52, "end": 2334.48, "text": " Okay, that was a mouthful. Let me try and explain this part with the video. 
So let's go back to the diagram above here"}, {"start": 2335.12, "end": 2337.12, "text": " So how I understood it"}, {"start": 2337.12, "end": 2342.24, "text": " Is that because now they don't have the so they don't have the text video"}, {"start": 2342.88, "end": 2346.24, "text": " data set because of that they what they do is"}, {"start": 2346.72, "end": 2353.12, "text": " They grab a video and then they sample 16 frames and they also have this curriculum learning where they initially just have"}, {"start": 2353.6, "end": 2359.92, "text": " This sample with a very small temporal delta and then they later have bigger and bigger temporal delta"}, {"start": 2359.92, "end": 2361.92, "text": " So that's kind of a form of a curriculum learning"}, {"start": 2362.56, "end": 2364.56, "text": " So once you have those 16 images"}, {"start": 2364.56, "end": 2366.56, "text": " So once you have the 16 images"}, {"start": 2367.04, "end": 2372.7999999999997, "text": " Because you don't have the text what you do is you just embed them using the image encoder of the clip"}, {"start": 2373.52, "end": 2378.08, "text": " You get the image embeddings and then you literally now"}, {"start": 2378.86, "end": 2384.56, "text": " Condition use those image embeddings to condition this this temporal diffusion model, right?"}, {"start": 2384.88, "end": 2389.12, "text": " So hopefully that makes sense. So now we're gonna do the same logic you start from a noisy image"}, {"start": 2390.0, "end": 2392.24, "text": " You condition each of these 16"}, {"start": 2392.24, "end": 2400.0, "text": " diffusion models with with those particular image embeddings that corresponds to two particular frames"}, {"start": 2401.04, "end": 2405.12, "text": " And then you train it in such a fashion. So because there is this temporal integration"}, {"start": 2405.12, "end": 2407.4399999999996, "text": " They're not really 16 independent models"}, {"start": 2407.6, "end": 2412.72, "text": " But you can kind of think about it that way because you will provide them with 16 image embeddings"}, {"start": 2413.3599999999997, "end": 2416.5, "text": " So once you train that model the spatiotemporal decoder"}, {"start": 2417.2799999999997, "end": 2419.2799999999997, "text": " in that diffusion fashion"}, {"start": 2419.28, "end": 2426.48, "text": " Then in the third stage of the training you you grab those and you train this frame interpolation app work where you'll occasionally"}, {"start": 2427.2000000000003, "end": 2429.2000000000003, "text": " be masking out"}, {"start": 2429.44, "end": 2432.8, "text": " Well here it looks like it's like whole frame but like looking at their"}, {"start": 2433.6800000000003, "end": 2439.6000000000004, "text": " Description previously with the with the four additional channels. It seems they are just doing also partial"}, {"start": 2440.96, "end": 2448.0, "text": " Zeroing out of the frames now the part here that's confusing to me is how do you condition the the the black the black frames, right?"}, {"start": 2448.0, "end": 2450.0, "text": " Because in the inference time"}, {"start": 2450.32, "end": 2455.04, "text": " You're obviously trying to generate something that does not exist. 
So you do not have any conditioning signal"}, {"start": 2455.36, "end": 2457.6, "text": " So that means you you won't have it here either"}, {"start": 2457.6, "end": 2463.52, "text": " So i'm not sure how they handle there is no explanation whatsoever of how this is exactly done in the paper"}, {"start": 2464.32, "end": 2467.44, "text": " What they might be doing is some form of um"}, {"start": 2467.84, "end": 2471.92, "text": " Just msc loss or something or simply unconditional generation"}, {"start": 2472.24, "end": 2475.68, "text": " And because during training you actually have the ground truth images"}, {"start": 2475.68, "end": 2481.52, "text": " You can use some form of like msc or something to just make sure that whatever is generated is closer"}, {"start": 2482.0, "end": 2483.52, "text": " to the output"}, {"start": 2483.52, "end": 2486.56, "text": " Again confusing that doesn't make complete sense"}, {"start": 2486.8799999999997, "end": 2492.8799999999997, "text": " I i'm still confused a bit by this if anyone has any any hypothesis feel free to just comment down below i'll be"}, {"start": 2493.2, "end": 2497.2799999999997, "text": " Very happy to to read through and and and discuss what's going on here"}, {"start": 2497.8399999999997, "end": 2499.3599999999997, "text": " In any case guys"}, {"start": 2499.3599999999997, "end": 2504.08, "text": " Uh, that's it. Let me again just show you quickly the the videos. They are fairly impressive"}, {"start": 2504.08, "end": 2507.04, "text": " Um, I I love this one with the with the doggo"}, {"start": 2507.2799999999997, "end": 2512.7999999999997, "text": " It's a dog wearing a superhero outfit with red cape flying through the sky and with that i'm gonna leave you here"}, {"start": 2513.04, "end": 2515.92, "text": " Hopefully you took out something from this video you learned something new"}, {"start": 2515.92, "end": 2534.7200000000003, "text": " If you did consider subscribing to this channel share the video out and until next time. Bye. Bye guys"}]
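As an aside, here is a minimal PyTorch-style sketch of the pseudo-3D (factorized spatial + temporal) convolution walked through in the segments above, including the identity initialization of the temporal part so that, at the start of video fine-tuning, the module behaves like independent per-frame 2D convolutions. The module and variable names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class PseudoConv3d(nn.Module):
    """Factorized spatio-temporal convolution: a spatial 2D conv followed by a
    temporal 1D conv that is identity-initialized, so at init there is no
    temporal mixing and the layer acts like per-frame 2D convolutions."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.temporal = nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2)
        # Identity init: 1 in the center tap of each filter's own channel, zeros elsewhere.
        nn.init.zeros_(self.temporal.weight)
        nn.init.zeros_(self.temporal.bias)
        with torch.no_grad():
            for c in range(channels):
                self.temporal.weight[c, c, kernel_size // 2] = 1.0

    def forward(self, x):                                     # x: (B, C, F, H, W)
        b, c, f, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * f, c, h, w)          # fold frames into batch
        x = self.spatial(x)                                    # spatial processing per frame
        x = x.reshape(b, f, c, h, w).permute(0, 3, 4, 2, 1)    # (B, H, W, C, F)
        x = x.reshape(b * h * w, c, f)
        x = self.temporal(x)                                   # mix along the time axis only
        x = x.reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)    # back to (B, C, F, H, W)
        return x
```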
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=RyIn-DxGF-c
AudioGen: Textually Guided Audio Generation | Text To Audio | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I do a deep dive of the recent "AudioGen: Textually Guided Audio Generation | Paper Explained" paper that introduced text-guided audio synthesis. In a nutshell, it's the VQ-VAE/GAN idea applied to the audio modality. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://felixkreuk.github.io/text2audio_arxiv_samples/paper.pdf ✅ Site: https://felixkreuk.github.io/text2audio_arxiv_samples/ ✅ 3B1B on Fourier transform: https://www.youtube.com/watch?v=spUNpyF58BY ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 01:17 Why is text-to-audio hard? 02:51 Comparison with VQ-GAN 05:15 Comparison with SoundStream 06:20 AudioGen overview 09:10 Deep dive: audio representation, LSTM 14:05 Losses explained 17:40 Complex-valued STFTs 21:57 Audio Language Modeling 23:37 Multi-stream audio inputs 25:32 Data and augmentations 29:05 Results 35:28 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #audiogen #audiosynthesis #multimodal
What's up guys? In this video I'm covering AudioGen, textually guided audio generation. The idea here is that this paper is doing the same thing to audio that DALL-E 2, Midjourney, Stable Diffusion and other models have been doing to images. I.e. it's doing text to audio synthesis, whereas the other models I mentioned were doing text to image synthesis. Okay, so let's hear the samples they have here and then we can dig into the actual paper. The code is still not released, so I might do a code walkthrough in one of the next videos, or I might not. So let's hear this sample.... Okay, so one thing you might have noticed is that the speech, the human speech, was unintelligible. The reason for that is because they've done some filtering where they removed all of those recordings where the speech tag was present. And by doing that they avoided the class imbalance problem, but as you can see they've lost the intelligibility of human speech, which kind of sucks. So yeah, let's dig into the actual paper, let's see the details. I'm gonna first give you a high level overview. I'm gonna compare this paper with VQGAN and the SoundStream papers and then we can see the actual details. So let's... yeah, so first, the paper title is AudioGen: Textually Guided Audio Generation, from the teams of Meta AI and the Hebrew University of Jerusalem. A couple of things worth mentioning is that... my gut feeling is that the difficulty level of audio synthesis lies somewhere in between text to image generation, so image synthesis, and text to video, so papers such as Make-A-Video recently published by Meta AI as well. So I think it's probably way closer to the text to image level of difficulty, but that's my gut feeling. So here are some of the reasons they mention why this is an inherently difficult task. One of them is the lack of big datasets, and we did see that OpenAI Whisper did solve that particular problem for the case of audio transcription and translation. But here they're still struggling, so they didn't actually create a new dataset for this particular task. So they say here, scarce text annotations impose another constraint limiting the ability to scale models. So that's precisely what I've said. And then the second thing is, well, because we're dealing with audio, which depending on the sampling frequency can be very long, they have the problem of just modeling long range dependencies, which is a common problem we are already familiar with. So they say, finally, modeling high-fidelity audio requires encoding audio at a high sampling rate, leading to extremely long sequences. There are other problems with audio, but those are some worth mentioning. OK, so I mentioned VQGAN, I mentioned SoundStream. Let me first explain, give you some background. So I did cover the VQGAN model that was introduced in the Taming Transformers for High-Resolution Image Synthesis paper months ago. So I have a whole playlist where I've covered VQGANs, VQ-VAEs, DALL-E, GLIDE, various text to image models. So if you lack the background knowledge, I encourage you to check out that playlist. But having said that, I think this video is going to be fairly self-contained as well. So let me give you a quick high-level overview. So basically what AudioGen has done is repurpose VQGAN to the audio modality. That would be a simple description of what this paper is doing. So VQGAN had this two-stage approach.
The first thing you do is you train this, basically, oops, you train this auto-encoding structure. So you have this auto-encoding structure here. And after you do that, then you train, you fit an autoregressive transformer on top of the latent space of these discrete representations here. So that's kind of the high-level two-stage structure that VQGAN used, which this AudioGen paper uses as well. So why do we have this encoding, decoding structure? Well, the idea is to compress the representation, starting from a high resolution, high dimensionality image and going into a lower dimensional latent space where you have these basically discrete vectors. So you have this codebook table and you basically fetch the vectors that are output of this encoder and you find the closest vectors in the table here. And then you basically find the index of that closest vector. And that's why you end up with this grid here with tokens. Once you have the tokens, it's fairly trivial to basically fit an autoregressive model such as a transformer to learn how to generate novel images later, unconditionally or conditionally in case you prepend text. So, yeah, also the losses they use to train this auto-encoding structure, so adversarial losses, reconstruction losses, all of those will be reused in the AudioGen paper as we'll soon see. OK, so that's a high-level overview there. So now let's get to the SoundStream paper. So SoundStream was basically inspired by VQGAN and they repurposed it to the audio modality. So you can see here a very similar structure. So here instead you have raw audio instead of an image. You pass it through this auto-encoding structure. Again, you have this quantization, and they did introduce a novelty here, a thing called residual vector quantization. I'll mention a bit later more about how that thing works, although it's not that highly relevant for understanding the AudioGen paper, because AudioGen actually is not using it as far as I understood. So as you can see, the idea is similar. You reconstruct the raw audio waveform here and then you again have the adversarial component and you also have the reconstruction component. So that's kind of the high-level overview. You can see it's a similar structure to the first stage part of VQGAN. OK, so now let me show you the actual architecture of AudioGen. Again, you'll see it's very similar. So we have the auto-encoding structure. We have the raw waveform here. We pass it through the structure. We get the reconstructed waveform and then they have a couple of losses. So we see here the LT, which stands for loss in the time domain, which means you basically do, I think, L1 between the raw sequences. We'll see how audio is represented, but basically it's an array. In case you have a single audio channel and not stereo channels, then you just have a single array of numbers which represents your audio file. And then they also do the LF. LF is just the loss in the frequency domain, where they convert the audio file into mel spectrograms and then they do a weighted average of L1 and L2 losses. We'll see the details a bit later, but that's the idea. So all of those kind of enforce the accurate reconstruction of the output signal. So you want to make sure that this output signal is as close as possible to this one in, well, both the time and frequency sense of the word. And finally, they have this multiscale STFT. STFT stands for short-time Fourier transform.
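To make the quantization step described a moment ago more concrete, here is a small sketch of the codebook lookup: encoder outputs are mapped to their nearest codebook entries, and the indices of those entries are the discrete tokens the transformer is later fitted on. This is a PyTorch illustration with made-up names and sizes, not the paper's code.

```python
import torch

def quantize(z_e, codebook):
    """z_e: encoder outputs, shape (N, D); codebook: (K, D) table of embeddings.
    Returns the nearest codebook vectors and their integer indices (the 'tokens')."""
    # Squared L2 distance between every encoder vector and every codebook entry.
    dists = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ codebook.t()
             + codebook.pow(2).sum(1))
    tokens = dists.argmin(dim=1)          # (N,) discrete indices
    z_q = codebook[tokens]                # (N, D) quantized vectors fed to the decoder
    return z_q, tokens

# Toy usage: a 512-entry codebook of 128-dim vectors, 10 encoder outputs.
codebook = torch.randn(512, 128)
z_q, tokens = quantize(torch.randn(10, 128), codebook)
```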
So basically what they do here is they pass complex valued spectrograms and then they do the adversarial loss. If you're confused, we'll get to the details a bit later. So once you train this model here, then you can grab the components, such as the encoder. So as you can see here, this encoder here corresponds to this structure here. We have the decoder here corresponding to this structure here. And we have the text encoder, for which they use the pre-trained T5, to basically encode text such as "a dog is barking in the park" into a set of embedding vectors. And then they have some additional details like positional encoding of vectors, et cetera. But basically they can then use that to condition the audio tokens via cross attention and generate the set of audio tokens here, which, using the decoder, they can map back into the actual waveform, into the time domain space. So now once you have this structure, you can also just ignore the text conditioning part and start unconditionally generating audio, or you can also condition on text, or you can do continuation, where you have some part of the audio tokens already present and then you just do the continuation using those tokens and using the text tokens. So hopefully that makes sense. That was a kind of high level overview. Now let's dig into a bit more detail behind the paper. So let's start here. Let's start with, I think this is the data part. Okay. So here is the example prompt: a dog barks while somebody plays the trumpet in a busy street. In the above prompt, the model must generate three categories of acoustic content with varying degrees of background, foreground, durations and relative position on the temporal axis, the composition of which is highly unlikely to be present in the training set. Okay. So this is another paragraph stating why this is a difficult task. Although I guess the ultimate reason why it's difficult is because we don't have big enough datasets to cover a vast amount of possible use cases, so that the model can just interpolate between those and get something that looks novel. Okay. So they said three categories. So that's like a dog is barking, somebody's playing a trumpet and there is a busy street. So it's like the sound of cars passing by or whatnot. So you have to decide what's going to be the strength of each of those, what's going to be the foreground versus background. You have to decide what's going to be the duration of those sounds. You have to understand how to place them on the temporal axis. So all of that is the reason why this is intuitively a difficult task, right? Although if we give the model enough data and just use a generic structure, which is something that OpenAI Whisper has shown us, then we can expect much better results, I would assume, from this work. So I already mentioned the first and second stage. So I'm going to skip that part. So let's continue and see what else is here. So again, I mentioned the audio representation. So they say here an audio signal of duration D can be represented by a sequence. So X lies in this range, with shape CA times T. So CA is the number of audio channels. Usually, I think in this work, this is always equal to 1. And then T is just the duration, the number of samples.
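To make that CA times T representation concrete, here is a tiny NumPy illustration of how many samples a clip contains at the 16 kHz sampling rate used in the paper, and what the peak-normalized mono waveform array looks like. The numbers and the random array are just for illustration.

```python
import numpy as np

sample_rate = 16_000          # Hz, as used in the paper
duration_s  = 4.0             # D seconds of audio
num_samples = int(duration_s * sample_rate)   # T = D * f_sr = 64_000 samples

# A mono waveform is just a 1-D array of T amplitudes, peak-normalized to [-1, 1].
waveform = np.random.randn(num_samples).astype(np.float32)
waveform /= np.abs(waveform).max()
print(waveform.shape)          # (64000,) i.e. CA = 1 channel, T samples
```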
So if you have D seconds of audio, and if the sampling frequency is FSR, so they use 16 kilohertz in their example, then you just multiply those two numbers and you get the actual number of samples in your audio signal. You can obviously always normalize the amplitude of the signal such that it always lies in the minus 1 to 1 range. So basically, you end up with something like this. So let me just draw the representation for you, for those of you who are not as familiar. So you can imagine having a signal like this. So here is the time axis. Here is the amplitude axis. Let me denote it as Y. And so you'll just have some signal, some audio signal like this. And let's say the amplitude is between 1 and minus 1 after you've done some normalization. And so what you then do is you just sample with a certain frequency. Maybe you do, I guess, uniform sampling, which is the most common one. And you basically end up with a discrete signal which can be represented as just an array, a 1D array of numbers between minus 1 and 1. And how people usually draw this is basically as discrete samples like this. It's going to be something like this and then this and then this. I'm just mimicking the above signal and sampling it at the points which I've denoted by blue dots here. Okay, so that's one representation. People also usually use mel spectrograms. I did cover that in the OpenAI Whisper paper video if you're curious to learn about that representation. But in this video, we're going to use a simple raw 1D representation here. Okay, so then they say the whole system is trained end-to-end to minimize a reconstruction loss applied over both the time and frequency domain, together with a perceptual loss in the form of several discriminators operating at different temporal resolutions. So we've seen that already on this diagram here. Now, let's see the actual details of those losses. Before we go there, just a minor note. The convolutional blocks are followed by a two-layer LSTM for sequence modeling. So that's a minor change they have as compared to the... what's the name of this paper? I forgot the name because I'll be referencing it. So SoundStream. Okay, let's get back here. So SoundStream does not have this part with the LSTM. They simply use Conv1D blocks and that's it. So this is also kind of peculiar; I haven't seen an LSTM in a while and I strongly assume this is not needed. You can just have a simple architecture, throw a bunch of data at the problem and that's it. You don't really have to have this. And also I didn't see any ablations on why they use an LSTM instead of, I guess, a transformer block. I mean, it's kind of confusing. I'm not sure about that part, why they've done it, and I haven't seen the explanation anywhere. Anyways, let's get back to the losses. Let's see how the losses are constructed. We have the first, time domain loss. So we minimize the L1 distance between the target and reconstructed audio over the time domain. So basically L1 between those two signals. That's it. As simple as that. For the frequency domain loss, we use a linear combination of the L1 and L2 losses over the mel spectrogram using several time scales. Okay, so let's decompose what this means. In practice, they set this alpha to 1 so we can completely ignore the alpha here. Let me just change the color. So we just have an equally weighted sum of the L1 norm and L2 norm between the mel spectrograms. And now the differences. So you can see here, X is just the original signal.
X hat is the reconstructed signal. You grab the mel spectrograms, where the time resolution is basically a function of this i. So they say here: Si is a 64-bin mel spectrogram using a normalized STFT with a window size of 2 raised to the power of i and a hop length of 2 raised to the power of i, divided by 4, where i goes from 5 to 11. Okay, so it's just a set and i goes through that set. So it's going to start at 2 raised to the power of 5, which means we'll have a window size of 32, I guess. And then you just keep on increasing by a factor of 2. So the window sizes will be increasing and the hop lengths will also be increasing, which means the higher the i, the shorter will be the time dimension of the mel spectrogram, if that makes sense. So basically, for example, for i equals 5, you'll end up with a particular mel spectrogram that has the following shape. So it's going to be, as they said, 64, 64 bins for the frequency. And then this is going to be something like, whatever, like N. And then for i equals 6, basically, this is going to be N over 2. So that means we'll end up with a smaller spectrogram, etc, etc. So those will have different resolution of the features and that kind of helps in practice. So, yeah, and then they just do, as you can see here, this simple loss between those mel spectrograms. And again, a mel spectrogram is simply this. I think I have a drawing here. So it's this signal where a single particular dot on the spectrogram tells you that at this particular point of time T, you have a frequency F with a particular intensity that's described by the color of the spectrogram. So that's the amplitude of that frequency. And that all comes from Fourier transform theory. Basically, any signal can be decomposed as a sum of sinusoids with different phases and different amplitudes. And that's where the complex numbers are going to come in. But let me not bother you with that at this point of time. We'll get there a bit later. Also, 3Blue1Brown has an amazing explanation of the Fourier transform. So check out his video. I'm going to link it down in the description, actually, in case you're confused about the Fourier transform. Although I do assume most of you know at least on a high level what it is. OK, let's see the third loss. They say here the MS-STFT discriminator is based on identically structured networks operating on multi-scale complex-valued STFTs where the real and imaginary parts are concatenated. OK, actually, here are the complex numbers already. So let me quickly describe what it is. So those complex-valued STFTs are going to be the same shape as the actual mel spectrogram. It's just that the frequency part here is going to be 2x. So it's going to be two times bigger because it has complex numbers and they're just going to concatenate the complex numbers. So let me explain to you what that means. I already explained what this dot means here. So it's a particular sinusoid of a particular frequency and a particular amplitude. You can imagine, before we got this spectrogram, we actually had a complex signal, and you can represent a complex number in this 2D plane. And so you can imagine we have a complex number here. So maybe this is the imaginary axis and this is the real axis. And so this is your complex number. Right.
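Here is a rough sketch of the time-domain and multi-scale mel-spectrogram reconstruction terms just described, assuming PyTorch and torchaudio. The exact normalization and weighting in the paper may differ, so treat this as an illustration rather than the authors' implementation.

```python
import torch
import torchaudio

def reconstruction_losses(x, x_hat, sample_rate=16_000):
    """Time-domain L1 plus a multi-scale mel-spectrogram term:
    64 mel bins, window 2**i and hop 2**i // 4 for i in 5..11,
    with equally weighted L1 and L2 on each scale (alpha = 1)."""
    l_t = (x - x_hat).abs().mean()                        # time-domain L1
    l_f = 0.0
    for i in range(5, 12):
        # Note: for the smallest windows, 64 mel bands exceed the available FFT
        # bins, so torchaudio will warn about empty filterbanks; kept here only
        # to mirror the scales quoted in the transcript.
        mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=2 ** i,
            win_length=2 ** i, hop_length=2 ** i // 4, n_mels=64)
        s_x, s_xhat = mel(x), mel(x_hat)
        l_f = l_f + (s_x - s_xhat).abs().mean() + ((s_x - s_xhat) ** 2).mean()
    return l_t, l_f

x, x_hat = torch.randn(1, 64_000), torch.randn(1, 64_000)
l_t, l_f = reconstruction_losses(x, x_hat)
```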
And again, a mel spectrogram is simply this, I think I have a drawing here: a representation where a single dot on the spectrogram tells you that at this particular point in time T, you have a frequency F with a particular intensity, described by the color of the spectrogram. So that's the amplitude of that frequency. And that all comes from Fourier transform theory: basically, any signal can be decomposed into a sum of sinusoids with different phases and different amplitudes. And that's where the complex numbers are going to come in, but let me not bother you with that at this point, we'll get there a bit later. Also, 3Blue1Brown has an amazing explanation of the Fourier transform, so check out his video; I'm going to link it down in the description in case you're confused about the Fourier transform, although I do assume most of you know at least on a high level what it is. OK, let's see the third loss. They say here that the MS-STFT discriminator is based on identically structured networks operating on multi-scale complex-valued STFTs, where the real and imaginary parts are concatenated. OK, so actually here are the complex numbers already, so let me quickly describe what this is. Those complex-valued STFTs are going to have the same shape as the actual mel spectrogram, it's just that the frequency part, so this part here, is going to be 2x, so two times bigger, because it holds complex numbers and they're just going to concatenate the real and imaginary components. So let me explain what that means. I already explained what this dot means here: it's a particular sinusoid of a particular frequency and a particular amplitude. You can imagine that before we got this spectrogram, we actually had a complex signal, and you can represent a complex number in this 2D plane. So you can imagine we have a complex number here; maybe this is the I axis, i.e. the imaginary axis, and this is the real axis. And so this is your complex number, right? And by the way, this vector represents a particular sinusoid. So in order to get the amplitude of this particular sinusoid, you just grab the magnitude of this thing, let's denote it as A, and you end up with a scalar here, that's the amplitude. But we ditch the phase information by doing that: by just taking the amplitude, we ignore the phase. So here is the phase. The phase kind of tells you how shifted this sinusoid is compared to the origin point. This is just basic signal processing stuff: let's say this is your normal sinusoid with a phase shift of zero; if you were to introduce a phase shift of, let's say, whatever, pi over two or something, then you end up with a cosine signal, etc., etc. So you basically just change the shift, and that shift information is lost when you deal with mel spectrograms. And in order not to lose it, they just concatenate both the real and the imaginary parts and they get a complex-valued STFT spectrogram representation. Hopefully that makes sense. Let me know if you understood this part nicely.
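As a small illustration, here is a sketch of how such a complex-valued STFT input could be built with torch.stft. The FFT size, the hop length, and the axis along which the real and imaginary parts are concatenated are my assumptions for the demo, not details taken from the paper.

import torch

def complex_stft_features(x, n_fft=1024, hop_length=256):
    # x: (batch, T) waveform
    window = torch.hann_window(n_fft, device=x.device)
    stft = torch.stft(x, n_fft=n_fft, hop_length=hop_length,
                      window=window, return_complex=True)   # (batch, F, frames), complex-valued
    # Keep the phase information by concatenating real and imaginary parts,
    # instead of collapsing to a magnitude the way a mel spectrogram does.
    return torch.cat([stft.real, stft.imag], dim=1)          # (batch, 2F, frames)

x = torch.randn(4, 16_000)
print(complex_stft_features(x).shape)   # torch.Size([4, 1026, 63])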
Okay, so now that we have this representation, what they do is, again, they just use multiple temporal resolutions. So you can see here that they use five different scales with STFT window lengths of blah, blah, blah, which will lead to representations whose time dimensions are of different sizes. And then you just pass those representations to these discriminators, which they've denoted somewhere on the diagram here. So you're just going to pass those representations here, and out comes a scalar that tells you whether the signal comes from real audio or from a fake one. And then it's just your regular adversarial loss, I'm not going to dig into a lot of detail there. So you can see here we are forcing this output to be as close to one as possible, because that means we're tricking the discriminator into thinking that the image is real; and when I say image, I mean this complex spectrogram representation. And then we have this feature matching loss, where they take the features across the discriminator layers and make sure that the L1 difference between the features computed on the real signal and the features computed on the generated signal is as small as possible. And finally, you can see the losses look like this. The discriminators are trained to minimize this thing: a sum over the K discriminators, because we saw we have multiple of those. And you can see here, because we want to minimize it, that means we want this D_k of x to be as close to one as possible, that's when we minimize this part here, and we want this other term to be as close as possible to zero, because that one corresponds to a generated signal. And finally, the generator is trained with this weighted sum of various loss components: we have the time-domain reconstruction loss, we have the frequency-domain reconstruction loss, we have the generator's adversarial loss here, and we have the feature matching loss here. Okay, guys, so that's the loss. And then we have the audio language modeling part. So as I said, the text representation is obtained using a pre-trained T5 text encoder, and then we basically just have a simple cross-entropy here, so we are trying to minimize the cross-entropy. The only potentially confusing part here is that the u's are the text embedding vectors and the v's are the audio embedding vectors. And then you can see we do a sum over the v tokens, which basically means we want to predict the next audio token, which kind of makes sense. And there is also this confusing part with i going from one to N, where N as a symbol has not been introduced anywhere in the paper. But reading it later, I figured out it's basically there because of the multi-stream approach they are using, which is not that important for understanding how the paper works; it's more of a way to get around the problem of having long sequences. And so, yeah, additionally they use classifier-free guidance. We've seen that concept introduced in the diffusion models, I've covered it multiple times. So the idea here is nothing different, except that here, as you can see, we have the unconditional signal, so without any u's, which are the text embedding vectors, and we have the actual conditional signal, and we just create a weighted sum of those two and then sample from that distribution. Also, the DALL-E Mini project uses this very same idea, so you can check out that code base if you're curious to see how it's exactly implemented, until the code for this paper is released.
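As a rough illustration of what that weighted sum can look like at sampling time, here is a hedged sketch of classifier-free guidance over next-token logits. The exact formulation the authors use may differ; the guidance scale of 3.0 is the value they later report as working best, and the codebook size below is just a placeholder.

import torch
import torch.nn.functional as F

def cfg_sample(logits_cond, logits_uncond, guidance_scale=3.0):
    # Mix conditional (text-conditioned) and unconditional log-probabilities, then sample.
    logp_c = F.log_softmax(logits_cond, dim=-1)
    logp_u = F.log_softmax(logits_uncond, dim=-1)
    mixed = logp_u + guidance_scale * (logp_c - logp_u)
    return torch.multinomial(F.softmax(mixed, dim=-1), num_samples=1)

vocab_size = 1024   # size of the audio-token codebook (placeholder value)
next_token = cfg_sample(torch.randn(1, vocab_size), torch.randn(1, vocab_size))
print(next_token)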
And the final detail before quickly showing you the results is the multi-stream audio inputs, and they say it here: in order to generate high-quality audio samples, we downsample the raw audio by a factor of 32, because of those Conv1D blocks they have, which corresponds to two milliseconds for each audio token, assuming the 16 kilohertz sampling rate. This requires us to operate over extremely long sequences, as each second of audio is represented by 500 tokens; to alleviate this problem, we propose a multi-stream representation and modeling paradigm. So let me just quickly describe what it is. The basic idea is this: you start with a super long sequence in the time domain, then you reduce the dimensionality, by 32x usually, and you end up with the audio tokens. And now, when you're training a transformer on top of this space, you can see it's still super long, so the transformer is going to struggle. So what they do instead is the following: they use a much higher downsampling factor, and so they end up with maybe 2x or 8x or whatnot smaller dimensionality here, so it's going to be way smaller, and that means it's much more feasible for transformers to handle that length. But now the problem is that, by compressing it so much more, you're obviously losing a lot of information. And so because of that, they have this multi-transformer approach, where they have multiple transformers running here and then they average the logits of those transformers. Anyways, I'm going to stop there because it's not that relevant, and even the results show they lose some performance and only gain a little bit of speed-up. So I'm really not sure how this is helping, because even the memory footprint will probably not be smaller, since you still have multiple transformers in this case. So, yeah, I'm kind of confused by this whole multi-stream approach and what we actually get from it. Okay, let's quickly go through the experiments. A couple of things are worth mentioning about the dataset. For the textual descriptions, they say: we use two types of annotations. The first one is multi-label annotation, where we form sentences by concatenating the list of labels available per audio sample. For example, if you have three labels for a particular audio clip, like dog, bark and park, that's transformed into "dog bark park". And you can already see the problem with this approach. The second type of annotation is natural language captions, and they say: we apply a preprocessing step to better match the class-label annotation distribution; specifically, we remove stop words and numbers, and finally we lemmatize the remaining words. And so "a dog is barking at the park" is transformed into "dog bark park". And that, again, is the reason why they are hitting some of the limitations they mention here: it still lacks the understanding of temporal ordering in the scene. For example, "a dog is barking, then birds are humming", I guess, there's a typo there, versus "a dog is barking and birds are humming in the background". So is it happening in parallel, or is it that first the dog is barking and then the birds are humming? They also mention that, lastly, as we omit most of the speech samples in our training set, the proposed approach often generates unintelligible speech; I did mention that at the beginning of the video. Anyways, let's get back here. And finally, they end up with only four thousand hours of training data, because they filter out the speech samples; you can see here: as speech is the dominant class in the data, we filter all samples where the tag or caption contains the word speech, to generate a more balanced data set. And as a consequence, they end up with unintelligible speech. They do one more thing I want to mention, the data augmentation part. This is probably just completely unnecessary if you simply have a bigger dataset, although, admittedly, it's not easy to obtain high-quality, bigger datasets. So, yeah: one of the most impressive capabilities of recently proposed generative models is their ability to create unseen object compositions. We know from the text-to-image models that a sentence such as "an astronaut riding a horse in space" is something we can kind of generate, even though the model probably has never seen such an image. So to achieve similar capabilities with regard to audio generation, we propose an augmentation method that fuses pairs of audio samples and their respective text captions, thus creating new concept compositions during training. Formally, given two audio samples X1 and X2 and their respective text captions C1 and C2, we first randomly draw a temporal offset to merge the two audio samples. Next, we draw a random signal-to-noise ratio in the range minus five to five. And finally, we mix the audio samples and concatenate the text captions C1 and C2. So that's how they get more data from the existing data points. So basically, let me just change the color here, you grab two signals: you have a signal here and you have a different signal here. And what you do is you decide on how to combine them. So you decide on some particular offset point, like maybe here, and then you basically do the overlap, so you just overlap those two, and then additionally you need to decide on the amplitude of this one and the amplitude of this one. That's the idea. Okay, that's the augmentation.
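Here is a small numpy sketch of what such a mixing augmentation could look like. The exact way the offset is drawn and the amplitudes are balanced isn't spelled out in the video, so the details below, the offset range, applying the SNR in decibels, and the final peak normalization, are assumptions for illustration only.

import numpy as np

def mix_samples(x1, c1, x2, c2):
    # Random temporal offset at which x2 starts relative to x1.
    offset = np.random.randint(0, len(x1))
    mix = np.zeros(max(len(x1), offset + len(x2)), dtype=np.float32)

    # Draw a target SNR (in dB) of x1 relative to x2 and scale x2 accordingly.
    snr_db = np.random.uniform(-5.0, 5.0)
    p1 = np.mean(x1 ** 2) + 1e-8
    p2 = np.mean(x2 ** 2) + 1e-8
    gain2 = np.sqrt(p1 / (p2 * 10.0 ** (snr_db / 10.0)))

    mix[:len(x1)] += x1
    mix[offset:offset + len(x2)] += gain2 * x2
    mix /= max(1.0, np.max(np.abs(mix)))          # keep the mixture inside [-1, 1]
    return mix, c1 + " " + c2                     # concatenate the two captions

x1 = 0.1 * np.random.randn(16_000).astype(np.float32)
x2 = 0.1 * np.random.randn(8_000).astype(np.float32)
mixed, caption = mix_samples(x1, "dog bark park", x2, "trumpet busy street")
print(mixed.shape, caption)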
Let's see some results. They have two model variants: one is using T5-large, the other one is using T5-base, and obviously the bigger the better, across all of these subjective and objective metrics, which I'm not going to go into. But you can see the numbers are in favor of the large model, surprise, surprise. They also compare favorably with the DiffSound baseline, which is cool. Let me parse the charts for you here. Let's first focus on the right-hand side. On the x-axis we have the audio prompt duration, so basically how many audio tokens we use to prompt the model, and on the y-axis we're looking at this objective KL-divergence metric; in the background they use some pre-trained audio classifier, it doesn't matter, let's just take it as a black box here. And they differentiate between two approaches: one is to condition on text and the other one is not to condition on text, so the conditional and the unconditional case. And you can see that in the case of conditioning on text, with only a couple of audio tokens you already have a signal that's very close to the ground truth, and you get some improvement the more you increase the number of audio tokens. On the other side, without any text conditioning, you can see that we have a very big discrepancy between the ground truth and the actually generated signal. And after a while, which kind of makes sense, if you have enough audio tokens, then the text part becomes less relevant. OK, so let's now focus on the chart on the left. This one might be a bit confusing, so let me try and draw a couple of things first. We have two audio samples. We have one audio signal here, I'm going to denote it as the red signal, and there is some associated text with this particular signal. Okay, and then we have a different audio signal, a different sample. I'm going to denote this first one as one and this sample here as two. So what we do is we grab the text of sample one, we pass it through the T5 encoder, and we end up with some text tokens. OK, so here they are, here are some text tokens. And then we pass this one through the encoder, that's the first stage of the AudioGen model, and we end up with some audio tokens here. And so we now pass those into the ALM, into the audio language modeling model. And we obviously have two conflicting requests: this one is requesting, hey, generate a signal that corresponds to this text, and that is going to be the ground truth; and we have a different set of tokens which are telling us, hey, continue this particular audio clip. And so we can see on this diagram, on the x-axis, the number of audio tokens increases, so the blue tokens here. Let's now focus on, for example, the purple curve. The purple curve is comparing the generated audio signal with this signal here, signal one. And it kind of makes sense that as you're increasing the number of blue tokens, you're getting worse and worse at generating a signal that's close to this one. And vice versa, if we focus on this dashed yellow line, here we're comparing with this other signal, so we're using it as the reference, and obviously, as we're increasing the number of these blue tokens, we'll be closer and closer to generating something that looks like that original signal. So again, just to remind you, we are basically sampling: we pass all of these into our ALM, which is just basically a transformer, and we keep on generating these samples one after another. Then we just pass all of that through the decoder that we trained in the first stage. So let me just go back here: we pass those tokens through this structure here, through this one, and we decode the audio signal; I mean, the generated one, not the original one.
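Purely as an illustration of that prompted-continuation setup, here is a toy sketch: text embeddings plus a short audio-token prompt go into an autoregressive model that keeps emitting tokens, which the first-stage decoder would then map back to a waveform. Every component below is a random stand-in made up for the demo, not the real AudioGen modules.

import torch

vocab_size, d_model = 1024, 16
text_emb = torch.randn(1, 10, d_model)                  # stand-in for T5(text of sample one)
audio_prompt = torch.randint(0, vocab_size, (1, 25))    # stand-in encoder tokens of sample two

def alm_next_token_logits(text_emb, audio_tokens):
    # Stand-in for the transformer ALM; the real model cross-attends to text_emb.
    return torch.randn(audio_tokens.shape[0], vocab_size)

tokens = audio_prompt
for _ in range(100):                                    # autoregressively extend the prompt
    logits = alm_next_token_logits(text_emb, tokens)
    next_tok = torch.distributions.Categorical(logits=logits).sample().unsqueeze(-1)
    tokens = torch.cat([tokens, next_tok], dim=-1)

print(tokens.shape)   # the first-stage decoder would now turn these tokens into a waveform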
And then we just do the comparisons. The green line is just a baseline where you don't even have the text prompt, you just have a couple of tokens from a random signal, and then you're comparing whatever is generated with this blue signal here. Kind of confusing, but the idea is that after roughly one point five seconds, the contribution from the text and the contribution from the audio tokens become roughly similar. And I'm not quite sure what the length of these text prompts was, because otherwise this is not very informative: obviously, if the number of text tokens were bigger, then the intersection would probably not happen at one point five seconds, if that makes sense. OK, in any case, that's pretty much it here. They show some diagrams of how the guidance scale in classifier-free guidance influences the results, so the FAD metric and the KL metric, and they find that the minimum is around three. So for a guidance scale of three they get the best results across metrics; as usual, the guidance scale helps. And finally, they have a table comparing the multi-stream transformer approach I mentioned briefly, and they say the results suggest that increasing the number of streams degrades the performance of both base and large models when compared to a single-stream system, which kind of makes sense because we're losing information. But let's see what we gain. What we gain, as you can see here, is a speed-up: we gain like a 3.6x speed-up when we use such an approach, but the performance drops significantly; if you focus on the FAD metric, we go from around two to around eleven, which is probably a big difference. Cool, guys, that's pretty much it for this video. So, a couple of thoughts here. Just like when analyzing OpenAI's Whisper, which also deals with speech data, I guess the conclusion here would be that for the next paper down the line we probably won't need the LSTMs, we won't need all of these details when it comes to the model architecture. If you just have a simple, generic architecture and a much bigger, higher-quality dataset, because as we saw here, what they are doing is basically concatenating just labels, and thus they are losing a lot of the rich information present in natural language. So by improving the dataset, making it bigger, and just using a generic architecture, I assume that one of the next papers will have way better results. And yeah, if you liked this video, let me know down in the comments, subscribe to this channel if you haven't already, and until next time, bye bye.
[{"start": 0.0, "end": 5.6000000000000005, "text": " What's up guys? In this video I'm covering AudioGen, textually guided audio generation."}, {"start": 5.6000000000000005, "end": 10.5, "text": " The idea here is this paper is doing the same thing to audio what"}, {"start": 10.5, "end": 15.5, "text": " Dali2, Mid Journey, Stable Diffusion and other models have been doing to images."}, {"start": 15.5, "end": 18.6, "text": " I.e. it's doing the text to audio synthesis."}, {"start": 18.6, "end": 23.5, "text": " And the other models I mentioned were doing the text to image synthesis."}, {"start": 23.5, "end": 29.6, "text": " Okay, so let's hear the samples they have here and then we can dig into the actual paper."}, {"start": 29.6, "end": 35.2, "text": " The code is still not released so I might cover the... I might do a code walkthrough in one of the next videos"}, {"start": 35.2, "end": 38.800000000000004, "text": " or I might not as well. So let's hear this sample."}, {"start": 38.800000000000004, "end": 49.2, "text": "..."}, {"start": 49.2, "end": 54.900000000000006, "text": " Okay, so one thing you might have noticed is that the speech, the human speech was unintelligible."}, {"start": 54.9, "end": 60.8, "text": " The reason for that is because they've done some filtering where they removed all of those recordings"}, {"start": 60.8, "end": 64.5, "text": " where the speech tag was present."}, {"start": 64.5, "end": 69.3, "text": " And by doing that they avoided the class imbalance problem"}, {"start": 69.3, "end": 74.7, "text": " but as you can see they've lost the intelligibility of human speech, which kind of sucks."}, {"start": 74.7, "end": 78.2, "text": " So yeah, let's dig into the actual paper, let's see the details."}, {"start": 78.2, "end": 81.2, "text": " I'm gonna first give you a high level overview."}, {"start": 81.2, "end": 88.9, "text": " I'm gonna compare this paper with VQGAN and the sound stream papers and then we can see the actual details."}, {"start": 88.9, "end": 94.7, "text": " So let's... yeah, so first the paper title is Audio Gen Textually Guided Audio Generation"}, {"start": 94.7, "end": 99.0, "text": " from the teams of MetaAI and the Hebrew University of Jerusalem."}, {"start": 99.0, "end": 105.5, "text": " A couple of things worth mentioning is that... 
my gut feeling is that the difficulty level of the audio synthesis"}, {"start": 105.5, "end": 110.7, "text": " lies somewhere in between text to image generation, so image synthesis"}, {"start": 110.7, "end": 116.2, "text": " and text to video, so papers such as Make a Video recently published by MetaAI as well."}, {"start": 116.2, "end": 122.60000000000001, "text": " So I think it's probably way closer to the text to image level of difficulty, but that's my gut feeling."}, {"start": 122.60000000000001, "end": 127.0, "text": " So here are some of the reasons they mentioned why this is an inherently difficult task."}, {"start": 127.0, "end": 133.6, "text": " One of them is the lack of big datasets and we did see that OpenAI Whisper did solve that particular problem"}, {"start": 133.6, "end": 138.4, "text": " for the case of audio transcription and translation."}, {"start": 138.4, "end": 143.5, "text": " But here they're still struggling, so they didn't actually create a new dataset for this particular task."}, {"start": 143.5, "end": 148.5, "text": " So they say here, scarce text annotations impose another constraint limiting the ability to scale models."}, {"start": 148.5, "end": 150.4, "text": " So that's precisely what I've said."}, {"start": 150.4, "end": 156.20000000000002, "text": " And then the second thing is, well, because we're dealing with audio, which depending on the sampling frequency"}, {"start": 156.20000000000002, "end": 160.70000000000002, "text": " can be very long, they have the problem of just modeling long range dependencies,"}, {"start": 160.70000000000002, "end": 163.6, "text": " which is a common problem we are already familiar with."}, {"start": 163.6, "end": 170.79999999999998, "text": " So they say finally modeling high-field audio requires encoding audio at high sampling rate, leading to extremely long sequences."}, {"start": 170.79999999999998, "end": 174.4, "text": " There are other problems with audio, but those are some worth mentioning."}, {"start": 174.4, "end": 178.0, "text": " OK, so I mentioned VQGAN, I mentioned this sound stream."}, {"start": 178.0, "end": 181.6, "text": " Let me first explain, give you some background."}, {"start": 181.6, "end": 190.7, "text": " So I did cover the VQGAN model that was introduced in the Staming Transformers for High Resolution Image Synthesis paper months ago."}, {"start": 190.7, "end": 198.6, "text": " So I have a whole playlist on where I've covered VQGANs, VQVAEs, Dali, Glide, various text to image models."}, {"start": 198.6, "end": 203.39999999999998, "text": " So if you like the background knowledge, I encourage you to check out that playlist."}, {"start": 203.39999999999998, "end": 207.2, "text": " But having said that, I think this video is going to be fairly self-contained as well."}, {"start": 207.2, "end": 210.89999999999998, "text": " So let me give you a quick high-level overview."}, {"start": 210.89999999999998, "end": 219.1, "text": " So why so basically what what AudioGen has done is repurposed VQGAN to audio modality."}, {"start": 219.1, "end": 223.2, "text": " That would be like a simple description of what this paper is doing."}, {"start": 223.2, "end": 227.6, "text": " So the VQGAN had this two-stage approach."}, {"start": 227.6, "end": 235.1, "text": " The first thing you do is you train this basically, oops, you train this auto-encoding structure."}, {"start": 235.1, "end": 238.79999999999998, "text": " So you have this auto-encoding structure here."}, {"start": 238.79999999999998, "end": 247.9, 
"text": " And after you do that, then you train, you fit an autoregressive transformer on top of the latent space of these discrete representations here."}, {"start": 247.9, "end": 255.0, "text": " So that's kind of the high-level two-stage structure that VQGAN used, which this AudioGen paper uses as well."}, {"start": 255.0, "end": 259.3, "text": " So why do we have this encoding, decoding structure?"}, {"start": 259.3, "end": 265.7, "text": " Well, the idea is to compress the representation starting from a high resolution, high dimensionality image"}, {"start": 265.7, "end": 271.2, "text": " and go into a lower dimensional latent space where you have these basically discrete vectors."}, {"start": 271.2, "end": 279.0, "text": " So you have this codebook table and you basically fetch the vectors that are output of this encoder"}, {"start": 279.0, "end": 282.7, "text": " and you find the closest vectors in the table here."}, {"start": 282.7, "end": 286.4, "text": " And then you basically find the index of that closest vector."}, {"start": 286.4, "end": 289.9, "text": " And that's why you end up with this grid here with tokens."}, {"start": 289.9, "end": 296.09999999999997, "text": " Once you have the tokens and it's fairly trivial to basically fit an autoregressive model such as transformer"}, {"start": 296.1, "end": 303.70000000000005, "text": " to learn how to generate then later novel images unconditionally or conditionally in case you prepend text."}, {"start": 303.70000000000005, "end": 308.70000000000005, "text": " So, yeah, also the losses they use to train this auto-encoding structure."}, {"start": 308.70000000000005, "end": 316.40000000000003, "text": " So adversarial losses, reconstruction losses, all of those will be reused in the AudioGen paper as we'll soon see."}, {"start": 316.40000000000003, "end": 319.70000000000005, "text": " OK, so that's a high-level overview there."}, {"start": 319.70000000000005, "end": 322.40000000000003, "text": " So now let's get to the SoundStream paper."}, {"start": 322.4, "end": 330.29999999999995, "text": " So SoundStream was basically inspired by VQGen and they repurposed it to the audio modality."}, {"start": 330.29999999999995, "end": 332.2, "text": " So you can see here a very similar structure."}, {"start": 332.2, "end": 335.7, "text": " So here instead you have a raw audio instead of an image."}, {"start": 335.7, "end": 337.9, "text": " You pass it through this auto-encoding structure."}, {"start": 337.9, "end": 346.09999999999997, "text": " Again, have this quantization and they did introduce a novelty here, a thing called residual vector quantization."}, {"start": 346.09999999999997, "end": 349.9, "text": " I'll mention a bit later more about how that thing works."}, {"start": 349.9, "end": 354.4, "text": " Although it's not that highly relevant for understanding of the AudioGen paper"}, {"start": 354.4, "end": 358.09999999999997, "text": " because AudioGen actually is not using it as far as I understood."}, {"start": 358.09999999999997, "end": 359.7, "text": " So as you can see, the idea is similar."}, {"start": 359.7, "end": 367.09999999999997, "text": " You reconstruct the audio raw waveform here and then you again have the adversarial component"}, {"start": 367.09999999999997, "end": 369.4, "text": " and you also have the reconstruction component."}, {"start": 369.4, "end": 371.79999999999995, "text": " So that's kind of the high-level overview."}, {"start": 371.8, "end": 380.1, "text": " You can see it's a similar structure to the 
VQGen first stage part of the VQGen."}, {"start": 380.1, "end": 384.8, "text": " OK, so now let me show you the actual architecture of AudioGen."}, {"start": 384.8, "end": 386.3, "text": " Again, you'll see it's very similar."}, {"start": 386.3, "end": 388.40000000000003, "text": " So we have the auto-encoding structure."}, {"start": 388.40000000000003, "end": 391.1, "text": " We have the raw waveform here."}, {"start": 391.1, "end": 393.1, "text": " We pass it through the structure."}, {"start": 393.1, "end": 398.2, "text": " We get the reconstructed waveform and then they have a couple of losses."}, {"start": 398.2, "end": 403.59999999999997, "text": " So we see here the LT which stands for loss in the time domain,"}, {"start": 403.59999999999997, "end": 410.7, "text": " which means you basically do, I think that you do L1 between the raw sequences."}, {"start": 410.7, "end": 413.9, "text": " We'll see how audio is represented, but basically it's an array."}, {"start": 413.9, "end": 417.4, "text": " In case you have a single audio channel and not the stereo channels,"}, {"start": 417.4, "end": 423.09999999999997, "text": " then you just have a single array of numbers which represent your audio file."}, {"start": 423.09999999999997, "end": 424.7, "text": " And then they also do the LF."}, {"start": 424.7, "end": 430.8, "text": " LF is just the loss in the frequency domain where they convert the audio file into"}, {"start": 430.8, "end": 436.59999999999997, "text": " male spectrograms and then they do weighted average of L1 and L2 losses."}, {"start": 436.59999999999997, "end": 439.0, "text": " We'll see the details a bit later, but that's the idea."}, {"start": 439.0, "end": 444.9, "text": " So all of those kind of enforce the accurate reconstruction of the output signal."}, {"start": 444.9, "end": 451.4, "text": " So you want to make sure that this output signal is as close as possible as this one in the,"}, {"start": 451.4, "end": 455.9, "text": " well, in both the time and frequency sense of the word."}, {"start": 455.9, "end": 458.4, "text": " And finally, they have this multiscale STFT."}, {"start": 458.4, "end": 462.09999999999997, "text": " STFT stands for short time Fourier transform."}, {"start": 462.09999999999997, "end": 467.29999999999995, "text": " So basically what they do here is they pass complex valued spectrograms"}, {"start": 467.29999999999995, "end": 469.5, "text": " and then they do the adversarial loss."}, {"start": 469.5, "end": 472.0, "text": " If you're confused, we'll get to details a bit later."}, {"start": 472.0, "end": 478.59999999999997, "text": " So once you train this model here, then you can grab the components such as the encoder."}, {"start": 478.6, "end": 487.5, "text": " So as you can see here, this encoder here corresponds to this structure here."}, {"start": 487.5, "end": 494.3, "text": " We have the decoder here corresponding to this structure here."}, {"start": 494.3, "end": 501.0, "text": " And we have the text encoder for which they use the pre-trained T5 to basically encode"}, {"start": 501.0, "end": 505.90000000000003, "text": " text such as a dog is barking in the park into a set of embedding vectors."}, {"start": 505.9, "end": 510.7, "text": " And then they have some additional details like positional encoding of vectors, et cetera."}, {"start": 510.7, "end": 517.0, "text": " But basically what they use then is they can use that to condition via cross attention"}, {"start": 517.0, "end": 521.0, "text": " to condition the audio tokens and 
then generate the set of audio tokens here,"}, {"start": 521.0, "end": 528.9, "text": " which using the decoder, they can return back into the actual waveform into the time domain space."}, {"start": 528.9, "end": 536.5, "text": " So now once you have this structure, you can also just ignore the text conditioning part"}, {"start": 536.5, "end": 541.1, "text": " and you can just start unconditionally generating audio or you can also condition on text"}, {"start": 541.1, "end": 547.1, "text": " or you can do continuation where we have some part of the audio tokens being present"}, {"start": 547.1, "end": 552.1, "text": " and then you just do the continuation using those tokens and using the text tokens."}, {"start": 552.1, "end": 553.1, "text": " So hopefully that makes sense."}, {"start": 553.1, "end": 555.1999999999999, "text": " That was a kind of high level overview."}, {"start": 555.2, "end": 558.9000000000001, "text": " Now let's dig into a bit more details behind the paper."}, {"start": 558.9000000000001, "end": 560.9000000000001, "text": " So let's start here."}, {"start": 560.9000000000001, "end": 563.9000000000001, "text": " Let's start with the I think this is the data part."}, {"start": 563.9000000000001, "end": 569.9000000000001, "text": " Okay. So here we generate the dog barks while somebody plays the trumpet in a busy street."}, {"start": 569.9000000000001, "end": 574.2, "text": " In the above prompt, the model must generate three categories of acoustic content"}, {"start": 574.2, "end": 579.7, "text": " with varying degrees of background, foreground, durations and relative position in the temporal axis,"}, {"start": 579.7, "end": 583.1, "text": " the composition of which is highly unlikely to be present in the training set."}, {"start": 583.1, "end": 587.3000000000001, "text": " Okay. So this is another paragraph stating why this is a difficult task."}, {"start": 587.3000000000001, "end": 592.3000000000001, "text": " Although I guess the ultimate reason why it's difficult is because we don't have big enough data sets"}, {"start": 592.3000000000001, "end": 598.1, "text": " to cover a vast amount of possible use cases and then just basically the model can just interpolate"}, {"start": 598.1, "end": 602.6, "text": " between those and get something that looks novel."}, {"start": 602.6, "end": 607.0, "text": " Okay. 
So they say three, they said three categories."}, {"start": 607.0, "end": 611.7, "text": " So that's like a dog is barking, somebody's playing a trumpet and there is a busy street."}, {"start": 611.7, "end": 614.2, "text": " So it's like the sound of cars passing by or whatnot."}, {"start": 614.2, "end": 618.4000000000001, "text": " So you have to decide like what's going to be the strength of each of those,"}, {"start": 618.4000000000001, "end": 620.6, "text": " what's going to be the foreground versus background."}, {"start": 620.6, "end": 625.2, "text": " You have to decide what's going to be the duration of those sounds."}, {"start": 625.2, "end": 629.5, "text": " You have to understand how to kind of place them on the temporal axis."}, {"start": 629.5, "end": 635.0, "text": " So all of that is the reason why this is kind of intuitively a difficult task, right?"}, {"start": 635.0, "end": 639.7, "text": " Although if we like give the modeling up data and just use a generic structure,"}, {"start": 639.7, "end": 643.5, "text": " which is something that OpenAI Whisper has showed us,"}, {"start": 643.5, "end": 648.5, "text": " then we can expect much better results, I would assume, from this work."}, {"start": 648.5, "end": 651.3000000000001, "text": " So I already mentioned the first and second stage."}, {"start": 651.3000000000001, "end": 654.0, "text": " So I'm going to skip that part."}, {"start": 654.0, "end": 657.6, "text": " So let's continue and see what else is here."}, {"start": 657.6, "end": 659.8000000000001, "text": " So again, I mentioned the audio representation."}, {"start": 659.8000000000001, "end": 664.0, "text": " So they say here an audio signal of duration D can be represented by sequence."}, {"start": 664.0, "end": 668.2, "text": " So X belongs to this range, CA times T."}, {"start": 668.2, "end": 671.7, "text": " So this means, so CA is the number of audio channels."}, {"start": 671.7, "end": 677.8000000000001, "text": " Usually, I think in this work, they just deal with basically, this is always equal to 1."}, {"start": 677.8000000000001, "end": 681.9000000000001, "text": " So and then T is just the duration, the number of samples."}, {"start": 681.9000000000001, "end": 687.2, "text": " So if you have D seconds of audio, if the sampling frequency is FSR,"}, {"start": 687.2, "end": 689.8000000000001, "text": " so they use 16 kilohertz in their example,"}, {"start": 689.8000000000001, "end": 695.4000000000001, "text": " then you just multiply those two numbers and you get the actual number of samples in your audio signal."}, {"start": 695.4, "end": 698.5, "text": " You can obviously always normalize the amplitude of the signal"}, {"start": 698.5, "end": 701.6999999999999, "text": " such that it always lies in the minus 1 to 1 range."}, {"start": 701.6999999999999, "end": 703.9, "text": " So basically, you end up with something like this."}, {"start": 703.9, "end": 707.8, "text": " So let me just draw the representation for you, for those of you who are not as familiar."}, {"start": 707.8, "end": 709.9, "text": " So you can imagine having a signal like this."}, {"start": 709.9, "end": 712.1, "text": " So here is the time axis."}, {"start": 712.1, "end": 714.3, "text": " Here is the amplitude axis."}, {"start": 714.3, "end": 715.6999999999999, "text": " Let me denote it as Y."}, {"start": 715.6999999999999, "end": 720.4, "text": " And so you'll just have some signal, some audio signal like this."}, {"start": 720.4, "end": 725.6, "text": " And let's say the amplitude is 
between 1 and minus 1 after you've done some normalization."}, {"start": 725.6, "end": 730.1999999999999, "text": " And so what you then do is you just sample with certain frequency."}, {"start": 730.1999999999999, "end": 735.0, "text": " Maybe you do like, I guess, uniform sampling is the most common one."}, {"start": 735.0, "end": 739.9, "text": " And you basically end up with a discrete signal of which can be represented as just a ray,"}, {"start": 739.9, "end": 742.9, "text": " a 1D array of numbers between minus 1 and 1."}, {"start": 742.9, "end": 750.4, "text": " And how people usually draw this is basically as this discrete samples like this."}, {"start": 750.4, "end": 754.6, "text": " It's going to be something like this and then this and then this."}, {"start": 754.6, "end": 761.5, "text": " I'm just mimicking the above signal and sampling it at the points which I've denoted by blue dots here."}, {"start": 761.5, "end": 762.9, "text": " Okay, so that's one representation."}, {"start": 762.9, "end": 765.0, "text": " People also usually use male spectrograms."}, {"start": 765.0, "end": 769.6, "text": " I did cover that in the Open Air Whisper paper if you're curious to learn about that representation."}, {"start": 769.6, "end": 774.9, "text": " But in this video, we're going to use a simple raw 1D representation here."}, {"start": 774.9, "end": 778.6, "text": " Okay, so then they say the whole system is trained end-to-end to minimize reconstruction loss"}, {"start": 778.6, "end": 782.1, "text": " applied over both time and frequency domain together with the perceptual loss"}, {"start": 782.1, "end": 785.9, "text": " in the form of several discriminators operating in different temporal resolutions."}, {"start": 785.9, "end": 788.6, "text": " So we've seen that already on this diagram here."}, {"start": 788.6, "end": 792.3000000000001, "text": " Now, let's see the actual details of those losses."}, {"start": 792.3000000000001, "end": 795.1, "text": " Before we go there, just a minor note."}, {"start": 795.1, "end": 798.9, "text": " The convolutional blocks are followed by two-layer LSTM for sequence modeling."}, {"start": 798.9, "end": 802.9, "text": " So that's a minor change they have as compared to the sound."}, {"start": 802.9, "end": 803.8, "text": " What's the name of this paper?"}, {"start": 803.8, "end": 807.5, "text": " I forgot the name because I'll be referencing it."}, {"start": 807.5, "end": 808.4, "text": " So Soundstream."}, {"start": 808.4, "end": 810.1, "text": " Okay, let's get back here."}, {"start": 810.1, "end": 814.0, "text": " So Soundstream does not have this part with the LSTM."}, {"start": 814.0, "end": 818.1, "text": " They simply use COM1D blocks and that's it."}, {"start": 818.1, "end": 825.5, "text": " So this is also kind of peculiar that I haven't seen LSTM in a while and I strongly assume this is not needed."}, {"start": 825.5, "end": 831.8, "text": " You can just have a simple architecture, throw a bunch of data at the problem and that's it."}, {"start": 831.8, "end": 833.5, "text": " You don't really have to have this."}, {"start": 833.5, "end": 841.4, "text": " And also I didn't see any ablations why using LSTM instead of, I guess, transformer block."}, {"start": 841.4, "end": 842.8, "text": " I mean, it's kind of confusing."}, {"start": 842.8, "end": 843.8, "text": " I'm not sure about that part."}, {"start": 843.8, "end": 847.0, "text": " Why they've done it and I haven't seen the explanation anywhere."}, {"start": 847.0, "end": 848.8, "text": " 
Anyways, let's get back to the losses."}, {"start": 848.8, "end": 850.6, "text": " Let's see how the losses are constructed."}, {"start": 850.6, "end": 854.2, "text": " We have the first time domain loss."}, {"start": 854.2, "end": 859.4000000000001, "text": " So we minimize the L1 distance between the target and reconstructed audio over the time domain."}, {"start": 859.4000000000001, "end": 861.8000000000001, "text": " So basically L1 between those two signals."}, {"start": 861.8000000000001, "end": 863.2, "text": " That's it. As simple as that."}, {"start": 863.2, "end": 871.4000000000001, "text": " For the frequency domain loss, we use a linear combination between the L1 and L2 losses over the mouse spectrogram using several time scales."}, {"start": 871.4000000000001, "end": 875.7, "text": " Okay, so let's kind of decompose what this means."}, {"start": 875.7, "end": 882.6, "text": " In practice, they set this alpha to 1 so we can kind of ignore, completely ignore the alpha here."}, {"start": 882.6, "end": 884.6, "text": " Let me just change the color."}, {"start": 884.6, "end": 891.0, "text": " So we just have like a equally weighted sum of the L1 norm and L2 norm between the mouse spectrograms."}, {"start": 891.0, "end": 892.3000000000001, "text": " And now the differences."}, {"start": 892.3000000000001, "end": 893.3000000000001, "text": " So you can see here."}, {"start": 893.3000000000001, "end": 895.3000000000001, "text": " So X is just the original signal."}, {"start": 895.3000000000001, "end": 897.4, "text": " X hat is the reconstructed signal."}, {"start": 897.4, "end": 904.8000000000001, "text": " You grab the mouse spectrograms where the time resolution is basically a function of this I."}, {"start": 904.8000000000001, "end": 906.6, "text": " So they say here."}, {"start": 906.6, "end": 919.8000000000001, "text": " So SI is a 64 bin mouse spectrogram using a normalized STFT with window size of 2 raised to the power of I and hop lengths of 2 raised to the power of I over 4 where E goes from 5 to 11."}, {"start": 919.8000000000001, "end": 922.8000000000001, "text": " Okay, so it's just a set and I goes through that set."}, {"start": 922.8000000000001, "end": 928.8000000000001, "text": " So I was going to have values of 2 raised to the power of 5, which means then we'll have a window size of 32, I guess."}, {"start": 928.8000000000001, "end": 931.3000000000001, "text": " And then you just keep on increasing by 2."}, {"start": 931.3, "end": 945.5999999999999, "text": " So the window sizes will be increasing and the hop lengths will also be increasing, which means the higher the I, the shorter will be the time dimension of the mouse spectrogram, if that makes sense."}, {"start": 945.5999999999999, "end": 957.1999999999999, "text": " So basically, you'll end up for, for example, for I equals 5, you'll end up with a particular mouse spectrogram that has the following shape."}, {"start": 957.2, "end": 962.0, "text": " So it's going to be, as they said, 64, 64 bins for the frequency."}, {"start": 962.0, "end": 966.2, "text": " And then this is going to be something like whatever, like N."}, {"start": 966.2, "end": 972.0, "text": " And then for I equals 6, basically, this is going to be N over 2."}, {"start": 972.0, "end": 976.2, "text": " And then so that means we'll end up with a smaller spectrogram, etc, etc."}, {"start": 976.2, "end": 982.0, "text": " So those will have different resolution of the features and that kind of helps in practice."}, {"start": 982.0, "end": 
986.8000000000001, "text": " So, yeah, and then they just do, as you can see here, this simple loss between those mouse spectrograms."}, {"start": 986.8, "end": 990.4, "text": " And again, a mouse spectrogram is simply this."}, {"start": 990.4, "end": 991.8, "text": " I think I have a drawing here."}, {"start": 991.8, "end": 1008.3, "text": " So this signal where a single particular dot on this spectrogram tells you that at this particular point of time T, you have a frequency F with a particular intensity that's described by the color of the spectrogram."}, {"start": 1008.3, "end": 1010.5999999999999, "text": " So that's the amplitude of that frequency."}, {"start": 1010.5999999999999, "end": 1014.3, "text": " And that just all comes from the Fourier transform theory."}, {"start": 1014.3, "end": 1023.0, "text": " Basically, any signal can be decomposed as a sum of sinusoids and with different phases and different amplitudes."}, {"start": 1023.0, "end": 1025.8, "text": " And that's where the complex numbers are going to come in."}, {"start": 1025.8, "end": 1029.3, "text": " But let me not bother you with that at this point of time."}, {"start": 1029.3, "end": 1030.7, "text": " We'll get there a bit later."}, {"start": 1030.7, "end": 1035.2, "text": " Also, 3Blue1Brown has amazing explanation of the Fourier transform."}, {"start": 1035.2, "end": 1036.2, "text": " So check out his video."}, {"start": 1036.2, "end": 1040.8, "text": " I'm going to link it down in description, actually, in case you're confused about the Fourier transform."}, {"start": 1040.8, "end": 1045.1, "text": " Although I do assume most of you do know at least on a high level what it is."}, {"start": 1045.1, "end": 1046.8, "text": " OK, let's see the third loss."}, {"start": 1046.8, "end": 1057.8999999999999, "text": " They say here the MS STFT discriminator is based on identically structured networks operating on multiscaled complex valued STFT where it's real and imaginary parts are concatenated."}, {"start": 1057.8999999999999, "end": 1060.7, "text": " OK, actually, here are the complex numbers already."}, {"start": 1060.7, "end": 1064.2, "text": " So let me quickly describe what it is."}, {"start": 1064.2, "end": 1071.0, "text": " So those complex valued STFTs are going to be the same shape as the actual male spectrogram."}, {"start": 1071.0, "end": 1072.6000000000001, "text": " It's just that the frequency."}, {"start": 1072.6000000000001, "end": 1074.7, "text": " So this part here is going to be 2x."}, {"start": 1074.7, "end": 1083.7, "text": " So it's going to be two times bigger because it has complex numbers and they're just going to concatenate the complex numbers."}, {"start": 1083.7, "end": 1087.4, "text": " So let me explain to you what that means."}, {"start": 1087.4, "end": 1089.4, "text": " I already explained what this dot means here."}, {"start": 1089.4, "end": 1093.8, "text": " So it's a particular sinusoid of a particular frequency and a particular amplitude."}, {"start": 1093.8, "end": 1105.6, "text": " You can imagine before we got this spectrogram, we actually had a complex signal so you can represent a complex number in this 2D plane."}, {"start": 1105.6, "end": 1107.5, "text": " And so you can imagine we have a complex number here."}, {"start": 1107.5, "end": 1111.0, "text": " So maybe this is the I axis and this is the real axis."}, {"start": 1111.0, "end": 1116.0, "text": " This is kind of the real and the imaginary also describe also denoted as imaginary axis."}, {"start": 1116.0, "end": 
1117.8999999999999, "text": " And so this is your complex number."}, {"start": 1117.8999999999999, "end": 1122.8, "text": " Right. And so in order to get this amplitude of this of this particular and by the way,"}, {"start": 1122.8, "end": 1125.0, "text": " this vector represents a particular sinusoid."}, {"start": 1125.0, "end": 1129.8, "text": " Okay. If you just grab the amplitude of this thing."}, {"start": 1129.8, "end": 1133.5, "text": " So the let's denote it as a then you end up with the scalar here."}, {"start": 1133.5, "end": 1134.6, "text": " That's the amplitude."}, {"start": 1134.6, "end": 1138.6, "text": " But we ditch the phase information by doing that by just taking the amplitude."}, {"start": 1138.6, "end": 1139.6, "text": " We ignore the phase."}, {"start": 1139.6, "end": 1141.3999999999999, "text": " So there is the face here."}, {"start": 1141.3999999999999, "end": 1147.0, "text": " So the face kind of tells you how shifted is this sinusoid compared to the origin point."}, {"start": 1147.0, "end": 1150.8, "text": " So if you're this is just basic signal signal processing stuff."}, {"start": 1150.8, "end": 1158.8, "text": " So basically, let's say this is your normal sinusoid that has phase shift of zero."}, {"start": 1158.8, "end": 1166.0, "text": " If you were to increase introduce a phase shift of let's say like whatever pi over two or something."}, {"start": 1166.0, "end": 1170.5, "text": " Then you end up with like a cosine signal, etc, etc."}, {"start": 1170.5, "end": 1177.0, "text": " So you basically just change the shift and that shift information is lost when you deal with mel spectrograms."}, {"start": 1177.0, "end": 1187.4, "text": " And in order not to lose it, they just concatenate both the real and the imaginary axis and they get a complex valued STFT of spectrogram representation."}, {"start": 1187.4, "end": 1188.4, "text": " Hopefully that makes sense."}, {"start": 1188.4, "end": 1192.2, "text": " Let me know if you if you understood this part nicely."}, {"start": 1192.2, "end": 1199.4, "text": " Okay. 
So now that we have this representation, what they do is they again just have a multiple temporal resolutions."}, {"start": 1199.4, "end": 1205.6, "text": " So they can see you can see here that we use five different scales with STF window lengths of blah, blah, blah, blah, blah, blah,"}, {"start": 1205.6, "end": 1211.1999999999998, "text": " which which will lead to two different representations like in the time domain will be of different size."}, {"start": 1211.1999999999998, "end": 1219.1999999999998, "text": " And then you just pass those representations to these discriminators that they've denoted somewhere on the diagram here."}, {"start": 1219.1999999999998, "end": 1221.3999999999999, "text": " So you're just going to pass those representations here."}, {"start": 1221.3999999999999, "end": 1227.6999999999998, "text": " Out comes a scalar that tells you whether the signal is comes from a real audio or from a fake one."}, {"start": 1227.6999999999998, "end": 1230.8999999999999, "text": " And then it's just your regular adversarial loss."}, {"start": 1230.8999999999999, "end": 1232.6, "text": " I'm not going to dig into a lot of details there."}, {"start": 1232.6, "end": 1239.0, "text": " So you can see here we are forcing this this signal to be as close to one as possible"}, {"start": 1239.0, "end": 1243.5, "text": " because that means we're tricking the discriminator into thinking that the image is real."}, {"start": 1243.5, "end": 1249.0, "text": " I mean when I say image, I mean this spectrogram complex representation."}, {"start": 1249.0, "end": 1255.1, "text": " And finally, we have this feature matching loss where they just take the features across the discriminator"}, {"start": 1255.1, "end": 1261.6999999999998, "text": " and do make sure that the L1 difference between the complex spectrogram that corresponds to the real signal"}, {"start": 1261.7, "end": 1267.9, "text": " and the complex spectrogram that corresponds to the generated signal are as close as possible."}, {"start": 1267.9, "end": 1270.7, "text": " And finally, you can see the losses are like this."}, {"start": 1270.7, "end": 1273.4, "text": " So the discriminators are trained to minimize this thing."}, {"start": 1273.4, "end": 1279.3, "text": " So a sum over k discriminators because we saw we have multiple of those."}, {"start": 1279.3, "end": 1283.5, "text": " And then you just make sure that you can see here because we want to minimize it."}, {"start": 1283.5, "end": 1289.9, "text": " That means we want to have this dk of facts being as close to one as possible."}, {"start": 1289.9, "end": 1292.3000000000002, "text": " That's when we're going to minimize this part here."}, {"start": 1292.3000000000002, "end": 1299.3000000000002, "text": " And we want to have this as close as possible to zero because this is a generated signal."}, {"start": 1299.3000000000002, "end": 1305.8000000000002, "text": " And finally, the generator is trained with this complex weighted sum of various loss components."}, {"start": 1305.8000000000002, "end": 1308.9, "text": " We have the time domain reconstruction loss."}, {"start": 1308.9, "end": 1311.4, "text": " We have the frequency domain reconstruction loss."}, {"start": 1311.4, "end": 1317.4, "text": " We have the generative generator loss here and we have the feature loss here."}, {"start": 1317.4, "end": 1320.2, "text": " Okay, guys, so that's the loss."}, {"start": 1320.2, "end": 1322.7, "text": " And then we have the audio language modeling part."}, {"start": 1322.7, "end": 
1326.9, "text": " So as I said, the text representation is obtained using a pre-trained T5 text encoder."}, {"start": 1326.9, "end": 1330.6000000000001, "text": " And we just have a simple basically cross entropy here."}, {"start": 1330.6000000000001, "end": 1334.5, "text": " So we are trying to minimize the cross entropy."}, {"start": 1334.5, "end": 1342.1000000000001, "text": " So you can see the only potentially confusing part here is so use are the text embedding vectors."}, {"start": 1342.1000000000001, "end": 1345.9, "text": " These are the audio embedding vectors."}, {"start": 1345.9, "end": 1353.4, "text": " And then you can see we do a sum over the TV, which is basically we want to predict the next audio token,"}, {"start": 1353.4, "end": 1355.0, "text": " which kind of makes sense."}, {"start": 1355.0, "end": 1362.2, "text": " And there is also this confusing part with I going from one to N and N as a symbol has not been introduced anywhere in the paper."}, {"start": 1362.2, "end": 1367.8000000000002, "text": " But just reading it later, I figured out it's basically because of this multi stream approach which they are using,"}, {"start": 1367.8000000000002, "end": 1371.6000000000001, "text": " which is not that like important to understand how the paper is working."}, {"start": 1371.6, "end": 1377.6, "text": " It's more of a way to go around the problem of having long sequences."}, {"start": 1377.6, "end": 1381.6999999999998, "text": " And so, yeah, the additional use the classifier pre-guidance."}, {"start": 1381.6999999999998, "end": 1384.8999999999999, "text": " We've seen that concept introduced in the future models."}, {"start": 1384.8999999999999, "end": 1386.5, "text": " I covered that multiple times."}, {"start": 1386.5, "end": 1392.1999999999998, "text": " So the idea here is nothing different except that here we just as you can see here."}, {"start": 1392.1999999999998, "end": 1395.0, "text": " So we have the signal that's unconditional."}, {"start": 1395.0, "end": 1398.3999999999999, "text": " So we don't have any use, which are the text embedding vectors."}, {"start": 1398.4, "end": 1404.4, "text": " And then we have the actual condition signal and then we just create a weighted sum of those two."}, {"start": 1404.4, "end": 1407.4, "text": " And then and then we sample from that distribution."}, {"start": 1407.4, "end": 1410.5, "text": " Also, the Lee Mini project uses this very same idea."}, {"start": 1410.5, "end": 1417.2, "text": " So you can check out that code code base if you're curious to see how it's exactly implemented until the code for this paper is released."}, {"start": 1417.2, "end": 1424.7, "text": " And the final detail before showing you quickly the results is this the multi stream audio inputs and they say it here."}, {"start": 1424.7, "end": 1440.9, "text": " So in order to generate high quality audio samples, we down sample the raw audio by a factor of 32 because of those calm one thing is they have and then which corresponds to two milliseconds for each audio token, assuming the 16 kilohertz sampling rate."}, {"start": 1440.9, "end": 1448.2, "text": " So this requires us to operate over extremely long sequences as each second of audio is represented by 500 tokens to alleviate this problem."}, {"start": 1448.2, "end": 1451.5, "text": " We propose a multi stream representation and modeling paradigm."}, {"start": 1451.5, "end": 1454.3, "text": " So let me just quickly describe what it is."}, {"start": 1454.3, "end": 1455.3, "text": " The basic idea 
is this."}, {"start": 1455.3, "end": 1459.3, "text": " So you start with a super long sequence."}, {"start": 1459.3, "end": 1462.2, "text": " This is the in the time domain."}, {"start": 1462.2, "end": 1467.6, "text": " And then you basically reduce the dimensionality by 32 X usually."}, {"start": 1467.6, "end": 1472.1, "text": " And then you end up with with basically audio tokens."}, {"start": 1472.1, "end": 1478.3, "text": " And now when you're training like a transformer on top of this space, you can see this is super long."}, {"start": 1478.3, "end": 1479.8999999999999, "text": " The transformer is going to struggle."}, {"start": 1479.8999999999999, "end": 1483.1, "text": " So what I do instead is they do the following."}, {"start": 1483.1, "end": 1488.5, "text": " So they just have a much higher down sampling strategy."}, {"start": 1488.5, "end": 1494.1, "text": " And so they end up with maybe 2 X or 8 X or whatnot smaller dimensionality here."}, {"start": 1494.1, "end": 1495.3999999999999, "text": " So it's going to be way smaller."}, {"start": 1495.3999999999999, "end": 1499.3, "text": " And then that means it's much more feasible for transformers to handle that length."}, {"start": 1499.3, "end": 1505.3999999999999, "text": " But now the problem is you're obviously by compressing it so much more, you're losing a lot of data information."}, {"start": 1505.4, "end": 1514.5, "text": " And so because of that, they have this multi transformer approach where they now have multiple transformers running here and then they average the logits of those transformers."}, {"start": 1514.5, "end": 1517.1000000000001, "text": " Anyways, I'm going to stop there because it's not that relevant."}, {"start": 1517.1000000000001, "end": 1523.6000000000001, "text": " And even the results are like they lose some performance and they just gain a little bit of speed up."}, {"start": 1523.6000000000001, "end": 1532.2, "text": " So I'm really not sure how this is helping because even the memory footprint will probably not be smaller because you still have multiple transformers in this case."}, {"start": 1532.2, "end": 1535.2, "text": " So, yeah, I'm kind of confused by this whole multi stream approach."}, {"start": 1535.2, "end": 1537.2, "text": " And what do we get from it actually?"}, {"start": 1537.2, "end": 1540.5, "text": " Okay, let's quickly go through the experiments."}, {"start": 1540.5, "end": 1543.4, "text": " Couple of things worth mentioning about a data set."}, {"start": 1543.4, "end": 1547.6000000000001, "text": " So for textual description, we they say we use two types of annotations."}, {"start": 1547.6000000000001, "end": 1550.6000000000001, "text": " The first one is multi label notation."}, {"start": 1550.6000000000001, "end": 1556.4, "text": " And so we form sentences by concatenating a list of text available per audio sample."}, {"start": 1556.4, "end": 1564.4, "text": " For example, if you have three labels for that particular audio clips like dog bark and park that's transformed into dog bark bark."}, {"start": 1564.4, "end": 1566.9, "text": " And you can already see the problem with this approach."}, {"start": 1566.9, "end": 1570.0, "text": " And then the second type of annotation is natural language captions."}, {"start": 1570.0, "end": 1577.7, "text": " And then they say we apply a preprocessing step to better match the class label annotation distribution specifically where we move stop words and numbers."}, {"start": 1577.7, "end": 1579.9, "text": " Finally, we'll lemmatize the 
remaining words."}, {"start": 1579.9, "end": 1585.0, "text": " And so a dog is barking at the park is transformed to dog bark bark."}, {"start": 1585.0, "end": 1591.6000000000001, "text": " And so, again, that's the reason why they are hitting some of the limitations they mentioned here."}, {"start": 1591.6, "end": 1596.0, "text": " So it still lacks the understanding of temporal ordering the scene."}, {"start": 1596.0, "end": 1600.8999999999999, "text": " For example, a dog is barking, then a then a then birds are humming, I guess."}, {"start": 1600.8999999999999, "end": 1601.8, "text": " It's a typo here."}, {"start": 1601.8, "end": 1605.8999999999999, "text": " So versus the dog is barking and a bird's humming in the background."}, {"start": 1605.8999999999999, "end": 1611.8, "text": " So is it like happening in parallel or is it like first the dog is barking and then the birds are humming?"}, {"start": 1611.8, "end": 1617.6, "text": " And they also mentioned that last as we omit most of the speech samples in our training set, the proposed approach often generates unintelligible speech."}, {"start": 1617.6, "end": 1620.1999999999998, "text": " I did mention that in the beginning of the video."}, {"start": 1620.2, "end": 1622.3, "text": " Anyways, let's get back here."}, {"start": 1622.3, "end": 1631.7, "text": " And finally, they end up with four thousand with only four thousand hours of training data because they filter out the speech text."}, {"start": 1631.7, "end": 1633.7, "text": " You can see here."}, {"start": 1633.7, "end": 1643.6000000000001, "text": " As speech is the dominant class in the data, we we filter all samples where the tag or caption contains the word speech to generate a more balanced data set."}, {"start": 1643.6000000000001, "end": 1647.4, "text": " And as a consequence, they end up with unintelligible speech."}, {"start": 1647.4, "end": 1650.4, "text": " They do one more thing. I want to mention the data augmentation part."}, {"start": 1650.4, "end": 1654.5, "text": " And this is probably just completely unnecessary if you simply have a bigger data set."}, {"start": 1654.5, "end": 1659.5, "text": " Although one thing is like it's not easy to obtain high quality, bigger data sets."}, {"start": 1659.5, "end": 1668.0, "text": " So, yeah. 
So one of the most impressive capabilities of recently proposed generative models is their ability to create unseen object compositions."}, {"start": 1668.0, "end": 1675.7, "text": " We know from the text image models that a sentence such as an astronaut riding a horse in space is something we can we can kind of generate,"}, {"start": 1675.7, "end": 1679.4, "text": " even though we the model probably never has seen such an image."}, {"start": 1679.4, "end": 1682.6000000000001, "text": " So to achieve similar capabilities regards to audio generation,"}, {"start": 1682.6000000000001, "end": 1689.1000000000001, "text": " we propose an augmentation method that fuses pairs of audio samples and their respective text captions,"}, {"start": 1689.1000000000001, "end": 1692.7, "text": " thus creating new concept compositions during training."}, {"start": 1692.7, "end": 1699.7, "text": " Formally, given two audio samples X1 and X2 and their respective text captions C1 and C2,"}, {"start": 1699.7, "end": 1704.9, "text": " we first randomly draw a temporal offset to merge the two audio samples."}, {"start": 1704.9, "end": 1708.8000000000002, "text": " Next, we draw a random signal to noise ratio in the signal minus five to five."}, {"start": 1708.8000000000002, "end": 1713.2, "text": " And finally, we mix the audio samples and concatenate the text captions C1 and C2."}, {"start": 1713.2, "end": 1718.6000000000001, "text": " So that's how they get more data from the existing data points."}, {"start": 1718.6000000000001, "end": 1722.9, "text": " So basically, you grab let me just change the color here."}, {"start": 1722.9, "end": 1725.2, "text": " So you basically grab two signals."}, {"start": 1725.2, "end": 1731.1000000000001, "text": " You have a signal here and you have a different signal here."}, {"start": 1731.1, "end": 1735.1999999999998, "text": " And so what you do is you basically decide on how to combine them."}, {"start": 1735.1999999999998, "end": 1741.1, "text": " So you decide on some particular offset point like maybe here and then you basically do the overlap."}, {"start": 1741.1, "end": 1747.3999999999999, "text": " So just overlap those two and then additionally you need to decide on the amplitude of this one and the amplitude of this one."}, {"start": 1747.3999999999999, "end": 1748.6, "text": " That's the idea."}, {"start": 1748.6, "end": 1750.6, "text": " Okay, that's the augmentation."}, {"start": 1750.6, "end": 1753.6999999999998, "text": " Let's see some results."}, {"start": 1753.6999999999998, "end": 1757.5, "text": " Obviously, they have two baselines."}, {"start": 1757.5, "end": 1763.6, "text": " One is using the T5 large, the other one is using T5 bass and obviously the bigger the better."}, {"start": 1763.6, "end": 1769.1, "text": " Like across all of these subjective and objective metrics, I'm not going to go into what they actually are."}, {"start": 1769.1, "end": 1772.5, "text": " But you can see the numbers are in favor for the large model."}, {"start": 1772.5, "end": 1774.1, "text": " Surprise, surprise."}, {"start": 1774.1, "end": 1778.3, "text": " They also compare favorably with the stiff sound baseline, which is cool."}, {"start": 1778.3, "end": 1781.2, "text": " Let me parse the charts for you here."}, {"start": 1781.2, "end": 1783.9, "text": " Let's first focus on the right hand side here."}, {"start": 1783.9, "end": 1787.7, "text": " So on the x-axis we have the audio prompt duration."}, {"start": 1787.7, "end": 1792.5, "text": " So basically how many audio 
tokens do we use to prompt the model?"}, {"start": 1792.5, "end": 1799.6000000000001, "text": " And then we are looking on the y-axis at this objective KL divergence metric."}, {"start": 1799.6000000000001, "end": 1802.5, "text": " In the background, they use some pre-trained audio classifier."}, {"start": 1802.5, "end": 1803.1000000000001, "text": " It doesn't matter."}, {"start": 1803.1000000000001, "end": 1805.9, "text": " Like let's just take that as a black box here."}, {"start": 1805.9, "end": 1813.1000000000001, "text": " And so the differential between two approaches, one is to condition on text and the other one is to not condition text."}, {"start": 1813.1, "end": 1816.3999999999999, "text": " So the unconditional part, the unconditional case."}, {"start": 1816.3999999999999, "end": 1824.1, "text": " And so you can see that in the case of conditioning with text, you already with only a couple of audio tokens,"}, {"start": 1824.1, "end": 1828.0, "text": " you already have a signal that's very close to the ground truth."}, {"start": 1828.0, "end": 1834.8999999999999, "text": " And you get some improvement the more you add the number of the more you increase the number of audio tokens."}, {"start": 1834.8999999999999, "end": 1838.1999999999998, "text": " On the other side, without conditioning on text,"}, {"start": 1838.2, "end": 1846.4, "text": " you can see that we have a very big discrepancy between the ground truth and actually generated signal if we don't have any text conditioning."}, {"start": 1846.4, "end": 1854.6000000000001, "text": " And after a while, which kind of makes sense, if you have enough audio tokens, then the text part becomes less relevant."}, {"start": 1854.6000000000001, "end": 1858.1000000000001, "text": " OK, so let's now focus on the chart on the left."}, {"start": 1858.1000000000001, "end": 1859.5, "text": " This one might be a bit confusing."}, {"start": 1859.5, "end": 1864.1000000000001, "text": " So let me try and draw a couple of things before that."}, {"start": 1864.1000000000001, "end": 1865.8, "text": " So we have two audio samples."}, {"start": 1865.8, "end": 1869.3, "text": " We have one audio signal here."}, {"start": 1869.3, "end": 1871.7, "text": " I'm going to denote it as as a red signal."}, {"start": 1871.7, "end": 1873.3999999999999, "text": " And there is some associated text."}, {"start": 1873.3999999999999, "end": 1876.5, "text": " So there is some associated text with this particular signal."}, {"start": 1876.5, "end": 1882.2, "text": " OK, and then we have a different audio signal, a different sample."}, {"start": 1882.2, "end": 1886.0, "text": " And what we do is we grab the text of this first sample."}, {"start": 1886.0, "end": 1888.0, "text": " So I'm going to know this one is one."}, {"start": 1888.0, "end": 1891.2, "text": " I'm going to denote this sample here as two."}, {"start": 1891.2, "end": 1894.1, "text": " So what we do is we grab the text here."}, {"start": 1894.1, "end": 1898.3999999999999, "text": " We pass it through the T5 encoder and we end up with some tokens."}, {"start": 1898.3999999999999, "end": 1899.8, "text": " So we end up with some text tokens."}, {"start": 1899.8, "end": 1902.1999999999998, "text": " OK, so here they are."}, {"start": 1902.1999999999998, "end": 1903.6, "text": " Here are some text tokens."}, {"start": 1903.6, "end": 1907.8, "text": " And then what we do here is we pass this one through the encoder."}, {"start": 1907.8, "end": 1910.6999999999998, "text": " That's the first stage of the 
audio gen model."}, {"start": 1910.6999999999998, "end": 1913.3999999999999, "text": " And we end up with some tokens here."}, {"start": 1913.3999999999999, "end": 1921.1999999999998, "text": " OK, and so we now pass those into the ALM, into the audio language modeling like model."}, {"start": 1921.2, "end": 1924.9, "text": " And we obviously have two conflicting requests."}, {"start": 1924.9, "end": 1930.2, "text": " This one is requesting, hey, generate basically generate a signal that corresponds to this text."}, {"start": 1930.2, "end": 1932.6000000000001, "text": " And this is going to be the ground truth."}, {"start": 1932.6000000000001, "end": 1939.0, "text": " And we have a different set of tokens which are telling us, hey, generate this particular audio clip."}, {"start": 1939.0, "end": 1945.6000000000001, "text": " And so we can see on this diagram here on the x-axis as the number of audio tokens increases."}, {"start": 1945.6000000000001, "end": 1947.9, "text": " So the blue tokens here."}, {"start": 1947.9, "end": 1951.1000000000001, "text": " Let's now focus on, for example, the purple curve."}, {"start": 1951.1, "end": 1958.5, "text": " The purple curve is comparing the generated audio signal with basically this signal here, signal one."}, {"start": 1958.5, "end": 1962.3999999999999, "text": " And it kind of makes sense that as you're increasing the number of blue tokens,"}, {"start": 1962.3999999999999, "end": 1968.3999999999999, "text": " you're getting worse and worse at generating a signal that's close to this one, which kind of makes sense."}, {"start": 1968.3999999999999, "end": 1976.1, "text": " And vice versa, if we focus on this dashed yellow line, here we're comparing with this signal."}, {"start": 1976.1, "end": 1978.1999999999998, "text": " So we're using this one as the reference one."}, {"start": 1978.2, "end": 1981.5, "text": " And obviously, as we're increasing the number of these blue tokens,"}, {"start": 1981.5, "end": 1987.1000000000001, "text": " we'll be closer and closer to generating something that looks like this original signal here."}, {"start": 1987.1000000000001, "end": 1991.3, "text": " So again, we are just to to to mind you, we are basically sampling."}, {"start": 1991.3, "end": 1997.9, "text": " So we pass all of these in into our ALM, which is just basically a transformer."}, {"start": 1997.9, "end": 2001.6000000000001, "text": " And we keep on generating these samples one after another."}, {"start": 2001.6000000000001, "end": 2006.1000000000001, "text": " And then we just pass all of that through the decoder that we train in the first stage."}, {"start": 2006.1, "end": 2008.6, "text": " So let me just go back here."}, {"start": 2008.6, "end": 2012.1999999999998, "text": " So we just pass those through this structure here."}, {"start": 2012.1999999999998, "end": 2013.8, "text": " So best was token through this one."}, {"start": 2013.8, "end": 2017.6999999999998, "text": " And we decode the original audio signal."}, {"start": 2017.6999999999998, "end": 2019.1, "text": " Okay."}, {"start": 2019.1, "end": 2020.8999999999999, "text": " I mean, the generator, not the original one."}, {"start": 2020.8999999999999, "end": 2022.8, "text": " And then we just do the comparisons."}, {"start": 2022.8, "end": 2029.1999999999998, "text": " The green line is just like a baseline where where you don't even have the text prompts."}, {"start": 2029.1999999999998, "end": 2032.5, "text": " You just have a couple of tokens from random signal."}, {"start": 2032.5, 
"end": 2037.2, "text": " And then you're comparing whatever is generated with this blue signal here."}, {"start": 2037.2, "end": 2038.4, "text": " Kind of confusing."}, {"start": 2038.4, "end": 2041.8, "text": " But the idea is that like after one point five seconds,"}, {"start": 2041.8, "end": 2049.8, "text": " roughly the contribution from text and the contributions from from the audio tokens become roughly similar."}, {"start": 2049.8, "end": 2053.5, "text": " And I'm not quite sure what was the length of these text tokens,"}, {"start": 2053.5, "end": 2056.8, "text": " because otherwise this is not kind of very informative,"}, {"start": 2056.8, "end": 2061.1, "text": " because obviously this if the number of text tokens is bigger,"}, {"start": 2061.1, "end": 2064.2999999999997, "text": " then this chart would not probably be at one point five."}, {"start": 2064.2999999999997, "end": 2068.7999999999997, "text": " The intersection would not happen at one point five seconds, if that makes sense."}, {"start": 2068.7999999999997, "end": 2071.7999999999997, "text": " OK, in any case, that's pretty much it here."}, {"start": 2071.7999999999997, "end": 2079.0, "text": " They show some diagrams how the guidance scale for the in the classifier free guidance influences the results."}, {"start": 2079.0, "end": 2084.1, "text": " So the FAD metric and the KL metric and they find that the minimum is around three."}, {"start": 2084.1, "end": 2090.7, "text": " So for number three, they get the best rate of across metrics, as usually the guidance scale helps."}, {"start": 2090.7, "end": 2093.1, "text": " And finally, they have some table comparing this,"}, {"start": 2093.1, "end": 2097.6, "text": " the multi stream transformer approach I did mention briefly."}, {"start": 2097.6, "end": 2103.3999999999996, "text": " And they say the results suggest that increasing the number of streams degrades the performance in both base and large models"}, {"start": 2103.3999999999996, "end": 2107.7999999999997, "text": " when compared to a single stream system, which kind of makes sense because we're losing information."}, {"start": 2107.7999999999997, "end": 2109.3999999999996, "text": " But let's see what we gain."}, {"start": 2109.3999999999996, "end": 2112.1, "text": " So what we gain is you can see here a speed up."}, {"start": 2112.1, "end": 2117.2999999999997, "text": " We gain like three point six X speed up when we use such an approach."}, {"start": 2117.2999999999997, "end": 2120.2999999999997, "text": " But you can see like the performance drop significantly."}, {"start": 2120.3, "end": 2128.6000000000004, "text": " That's if you focus on the FAD metric, we get from like two to like eleven, which is probably a big difference."}, {"start": 2128.6000000000004, "end": 2131.5, "text": " Cool, guys. 
That's pretty much it for this for this video."}, {"start": 2131.5, "end": 2133.3, "text": " So a couple of thoughts here."}, {"start": 2133.3, "end": 2139.3, "text": " So just like analyzing open eyes whisper that also deals with speech data."}, {"start": 2139.3, "end": 2144.0, "text": " I guess the conclusion here would be we probably like next paper down the line."}, {"start": 2144.0, "end": 2145.6000000000004, "text": " We won't need the LSTMs."}, {"start": 2145.6000000000004, "end": 2149.7000000000003, "text": " We won't need all of these details when it comes to the model architecture."}, {"start": 2149.7, "end": 2156.0, "text": " If you just have a generic architecture that simple, if you have much bigger data set and of higher quality,"}, {"start": 2156.0, "end": 2161.5, "text": " because as we saw here, we are what they are doing is they are basically concatenating just labels."}, {"start": 2161.5, "end": 2166.7, "text": " And thus they are losing a lot of the information that's kind of rich and present in the language."}, {"start": 2166.7, "end": 2172.1, "text": " And so by improving the data set, but making it bigger and just using a generic like architecture,"}, {"start": 2172.1, "end": 2176.1, "text": " I assume that one of the next papers will just have way better results."}, {"start": 2176.1, "end": 2180.9, "text": " And yeah, so if you like this video, let me know down in the comments."}, {"start": 2180.9, "end": 2183.1, "text": " Subscribe to this channel if you haven't already."}, {"start": 2183.1, "end": 2206.1, "text": " And until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=AwJf8aQfChE
OpenAI Whisper: Robust Speech Recognition via Large-Scale Weak Supervision | Paper and Code
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover Whisper, an ASR system from OpenAI's "Robust Speech Recognition via Large-Scale Weak Supervision" paper. Trained on a huge multi-lingual, multi-task weakly supervised dataset it achieves a very high effective robustness and accuracy closing the gap with the human baseline using only an off-the-shelf transformer. I walk you through both the paper as well as the actual code. Let me know whether the code part helped! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://cdn.openai.com/papers/whisper.pdf ✅ Code: https://github.com/openai/whisper ✅ Nice explanation of mel spectrograms: https://www.youtube.com/watch?v=9GHCiiDLHQ4 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro 00:02:05 Paper overview 00:07:30 Collecting a large scale weakly supervised dataset 00:13:55 Evaluation metric issues (WER) 00:16:05 Effective robustness 00:18:40 Scaling laws in progress 00:26:30 Decoding is hacky 00:28:30 Code walk-through 00:30:25 Model architecture (diagram vs code) 00:33:30 Transcription task 00:34:10 Loading the audio, mel spectrograms 00:37:50 Language detection 00:45:00 Transcription task continued 00:47:35 Suppressing token logits 00:52:00 Voice activity detection 00:53:35 Decoding and heuristics 01:01:56 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #whisper #openai #asr
OpenAI just open sourced Whisper, which is an automatic speech recognition system that's approaching the human-level baseline on both the accuracy and the robustness side for English transcription. And that's super awesome. So in this video, I'm gonna cover the paper, and I'm also gonna go through the code behind the paper, which they luckily open sourced. So the inference code as well as the model checkpoints have all been open sourced, and that's kinda cool. So yeah, without further ado, let's dig into the video. So here is the tweet that announced the release of Whisper. They say: we've trained a neural net called Whisper that approaches human-level robustness and accuracy on English speech recognition. It performs well even on diverse accents and technical language. Whisper is open source for all to use. You can see that Whisper can do many tasks, and it's a multilingual, multitask system. So here's a couple of examples. First, English transcription: given an audio clip in English, "Ask not what your country can do for..." blah, blah, blah, you see the transcription here, "Ask not what your country can do for you." So that's kinda trivial. Then we have any-to-English speech translation, where we have an audio file in, let's say, Spanish, "El rápido zorro marrón salta sobre...", and then you have the translation, "The quick brown fox jumps over...". Then we have non-English transcription, where the audio file is again in some language that's not English, and the transcription is also in that same language. So I'm not gonna try and pronounce the Chinese example, my Mandarin is not at its peak. And the last task here is the no-speech one, where we have background music playing or whatever; what's important is that there are no human voices, and the model outputs a special token signaling that there is no human speech going on. Other than these tasks, the model can also do language detection, which means that given the audio it can detect which language that particular audio file belongs to. Okay, let's jump into the paper. Let's see the abstract, and then I'm gonna first give you a high-level overview before we dig a bit deeper. So the name of the paper is "Robust Speech Recognition via Large-Scale Weak Supervision". We're gonna see that this large-scale weak supervision, i.e. the dataset itself, is the main contribution of the paper. Let's skim through the abstract first. So: we study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results, but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. Okay, so let's start by showing you the high-level diagram of how the architecture looks and the special tokens that you use to prompt the model to do specific tasks. The architecture itself is literally an off-the-shelf encoder-decoder transformer from the 2017 "Attention Is All You Need" paper. They wanted to remove the confounding factor of tweaking the architecture for this particular task of speech recognition, so instead they just took the off-the-shelf model and they've been playing with the data.
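By the way, if you want to try those tasks yourself, it's all a pip install away. Here's a minimal sketch of the high-level API, assuming the openai/whisper package as released; the audio paths are just placeholders:

    import whisper

    # "tiny", "base", "small", "medium", "large" (plus ".en" English-only variants)
    model = whisper.load_model("small")

    # transcription - the language is detected automatically unless you pass language="..."
    result = model.transcribe("audio_en.mp3")
    print(result["text"])

    # any-to-English speech translation - same model, just a different task token
    result = model.transcribe("audio_es.mp3", task="translate")
    print(result["text"])

Under the hood both calls run the exact same encoder-decoder model; the only difference is the task token in the decoder prompt, which is what the next part is all about.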
So as I said, data is the main part here, as well as the special tokens. We'll see those in a second. So again, how the system works on a high level is: you grab an audio file and then you convert it from its raw form into something called a log-Mel spectrogram. Spectrograms are these representations where on the x-axis you have the time information, so the time steps, and on the y-axis you have the frequency. And then the color basically tells you what's the amplitude of that particular frequency at that particular point in time. So like this point here, this will be the particular time, and then you can see the frequency here. The frequency is usually in Hertz, but in the case of Mel spectrograms it's in mels, and the mel scale is roughly logarithmic, because that's how we perceive pitch, not linearly like the Hertz scale. So that's the TLDR, but I'll link some resources that explain very nicely how mel spectrograms work, because that's not the topic of this video. Okay, so once we have that representation of the audio, we literally do a feed-forward pass through the encoder stack. You can see here just blocks of simple transformer layers that consist of self-attention and an MLP, a multilayer perceptron, and out comes the final representation here. So once you have the final audio representation, you use that to condition the decoder, the causal decoder model here, via the cross-attention mechanism. And then the whole magic is in these special tokens. So you can see here, we have this start-of-transcript token, then we have the English token, and then we have the transcribe token, and then the model generates, first, the timestamp, and then out comes the actual transcription of the audio file. You can see that here is the target. So it's a supervised dataset, we do have the label for the audio, obviously, and then we just shift by one and do the classic language modeling objective here. There are some details we can dig into a bit later, like the fact that the decoder uses learned positional encodings, whereas here in the encoder we have sinusoidal ones, but as I said, the architecture is not the main contribution of this paper. It's the dataset, and it's this multitask setup with the tokens. So ignoring this part on the left for the time being, let's focus on the right part of the pipeline. So we start with the start-of-transcript token, and then the model will autoregressively decode either this no-speech token, which basically means that we are not detecting, as I said previously, any speech going on in the audio — the model is only detecting maybe music or something without human vocals — or the model could generate a language tag, and these are, by the way, the only things that the model can generate at this point, because they have various masking mechanisms to make sure this is the case. Okay, so let's say the language tag was decoded. In that case we then basically have two options, either transcribe or translate. We've seen the difference: with the translate option, whatever the source language of the audio file is, we'll end up with the English translation. Okay, and then we can go down the no-timestamps route, or the one with the timestamps, where you additionally predict timestamp information and then you do the transcription inside of that segment, and then rinse and repeat. And finally you have the end-of-transcript token. Okay, so all of these give rise to multiple tasks, and we'll see how those play out a bit later in the code as well.
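Just to make that token pipeline a bit more tangible, here is roughly what one training example looks like from the decoder's point of view. The token strings below are illustrative (based on the format described above), and I'm glossing over the exact tokenization and loss-masking details:

    # prompt: task specification via special tokens
    prompt = ["<|startoftranscript|>", "<|en|>", "<|transcribe|>", "<|0.00|>"]

    # text tokens coming from the ground-truth transcript
    text = ["Ask", " not", " what", " your", " country", " can", " do", " for", " you"]

    # classic next-token prediction: inputs are prompt + text,
    # targets are the same sequence shifted by one, ending with end-of-transcript
    inputs  = prompt + text
    targets = (prompt + text)[1:] + ["<|endoftext|>"]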
Okay, so having explained all of this, let's dig deeper into the paper. Well, let's start in the beginning here. So there are roughly two research directions that people have been pursuing before a whisper was published. So one big research direction was the unsupervised direction. So let me read this one for you. So these pre-trained audio encoders learn high quality representations of speech, but because they are purely unsupervised they lack an equivalently performant decoder mapping those representations to usable outputs necessitating a fine tuning stage in order to actually perform a task such as speech recognition. Okay, so that means on one part of the spectrum we have models that are trained on a huge amount of data. So we have, let me denote the data set like this. So we have huge amount of data. I think they mentioned here, like you can see here scaled up to million hours of training data. So that's like, and that's the worst visualization of data ever. Let me kind of delete it for a second and draw it properly. So as I said, we have a data set of 1 million hours of audio here. This is a bit better. And that's on one part of the spectrum. So let me now show you the other part of the spectrum here. And again, what I've said here is that they train these in a non-supervised fashion, which means you go from audio to audio representation. I guess from male spectrogram to male spectrogram something like that. And then the problem is how do you decode those representations to actual human language? That's the problem with these models. That's why you have to do the fine tuning part. Okay. And then the second direction is this one. So speech recognition systems that are pre-trained in a supervised fashion across many data sets domains exhibit higher robustness and generalize much more effectively to held out data sets than models trained on a single source. Okay, so there's just stating that we wanna have a multilingual multitask setup, which we kind of know already, especially if you have bigger models. However, there is still only a moderate amount of this data easily available. These authors mix together seven pre-existing data sets totaling only 5,000 hours of supervision. Okay, so that's the other part of the spectrum. We have tiny, tiny, tiny models here. I mean, tiny data sets. So this one has like 5K hours compared that to 1 million hours there. Okay. And they say this, so whoops, by relaxing the requirements of gold standard human validated transcripts, these authors make use of sophisticated automated pipelines to scale weekly supervised speech recognition to 10,000 and 30,000 hours of moisture training data. This trade off between quality and quantity is often the right call. Okay, so that means those authors scaled up this by using the weak supervision, but they only went to maybe a bit like a couple of times bigger data sets. So now we have like something like 10 to 30K. And finally, let's get to this paper. So what these authors have done is, whoops, what's going on here? Yet these new data sets are only a few times larger than the sum of existing high quality data sets and still much smaller than prior unsupervised work. In this work, we closed that gap, scaling weekly supervised speech recognition, the next order of magnitude to 680,000 hours of labeled audio data, which means they literally close this gap. They have almost as much data as the unsupervised approach. Obviously there are the trade-offs of the quality and the quantity. 
And so here are some of the heuristics that I've used to make sure that the quality is as high as possible, considering that they are using these automated pipelines. Okay, so initial inspection showed a large amount of sub-pair transcripts in the raw data set. To address this, we developed several automated filtering methods to improve transcript quality. So let's see a couple of them. In order to avoid learning transcriptase, we developed many heuristics to detect and remove machine-generated transcripts from the training data set. And one of those is, if you detect all uppercase or all lowercase, then it's very unlikely that that's a human-generated transcript, and then you're just gonna remove that pair from your data set. A second heuristic they've used is, ensure that the spoken language matches the language of the transcript according to some pre-trained CLD2 model, which does the language detection, basically. And if the two do not match, we don't include the audio transcript pair as a speech recognition training example in the data set. Et cetera, et cetera. And then finally, once they had this big data set, they trained the initial versions of Whisper, and then they used the predictions of Whisper to figure out what's wrong with the data. So it's kind of very iterative process. So they say that here, for an additional filtering pass, after training an initial model, we aggregated information about its error rate on training data sources and performed manual inspection. So I can assume that that was a fairly laborious process, and that's probably one of the reasons that they did not release the actual data set that was used to train Whisper. Here's some details around the MEL Spectrogram. I'm gonna skip that. Maybe we can get back to that later. I don't wanna dig into too many details that are not that vital here. Here are some details worth mentioning, though. During early development and evaluation, we observed that Whisper models had a tendency to transcribe plausible, but almost always incorrect guesses for the names of the speakers. And the reason being is usually in those transcripts, you have something like this predicted. So open bracket and then a name of a person, and then close bracket. And then the model learned to, because of the way how models learn, statistical learning, to just predict these, even though you cannot infer what's the name of the person from a 30-second audio clip that they are using, in most cases, unless the name was actually referenced. So they say here, to avoid this, we fine-tuned Whisper models briefly on the subset of transcripts that do not include speaker annotations, which removes this behavior. Okay, so one more thing, and then we're gonna get to the robustness part. They had problems with this WER metric, so that's the word error rate metric. And it's a similar problem as to what people are encounter with when they deal with machine translation, for example, with the blue metric, et cetera. So let me read this part. So a WER, which is based on string edit distance, penalizes all differences between the model's output and the reference transcript, including innocuous differences in transcript style, which kind of sucks. We wanna focus on the semantics and understand whether the, well, in the case of audio transcription, every single word matters, that's true, but the actual maybe punctuations or style that's superficial and not changing anything, you kind of want to ignore that, right? 
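For reference, since WER shows up in every plot that follows: it's computed from the minimum word-level edit distance between the model output and the reference transcript,

    WER = (S + D + I) / N

where S, D and I are the number of word substitutions, deletions and insertions needed to turn the output into the reference, and N is the number of words in the reference. So a casing or punctuation difference counts exactly as much as getting a word completely wrong, which is precisely the problem they're describing here.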
So they say here, while this poses a problem for all transcribers, it is particularly acute for zero-shot models like Whisper, which do not observe any examples of specific dataset transcript formats. So that means those other baselines, because they're actually trained on that particular dataset, and then basically they do the evaluate on the validation portion, but the distribution is very similar, they kind of can learn and overfit the particular style, and thus they have an advantage compared to zero-shot Whisper model. So because of that, they developed various laborious normalizers, so they say here, our text normalizer was developed through iterative manual inspection to identify common patterns where naive WER penalized Whisper models for innocuous difference. Because I will not cover this in the code bit later, so I'm gonna just quickly show you how those look like. Let me just try and find those normalizers, so here they are, and if I open up English normalizers, you can see how laborious this must have been, like there is just a lot of rules that they had to do to make sure, yeah, you can see it's kind of messy. So let me go back to the paper, and let's continue here. So they introduced this concept of effective robustness, and what it means is it measures the difference in expected performance between a reference dataset, which is usually in distribution, and one or more out of distribution datasets. So obviously, we wanna have this difference to be as small as possible, which means that you are generalizing to out of distribution datasets, which is a desirable behavior, right? We want to have general models. So a model with high effective robustness does better than expected on out of distribution datasets as a function of its performance on the reference dataset, and approaches the ideal of equal performance on all datasets. So here is the actual diagram. So here we can see in an ideal world, we'd have this line here. Basically, the metric on the LibreSpeech dev clean should be whatever the performance of your model there is, you wanna have the same error rate on the other datasets. So here is the average WER on these other datasets. So you can see here, some of these models that have much better performance compared to Whisper on the LibreSpeech. So you can see this point here, for example, whoops, this point here, I'm terrible at drawing today. So it has much lower error rate here compared to let's say this point, but then you can see how poor the performance actually is. The error rate is huge for these data points of these LibreSpeech models. So you can see all in all, when you fit like a line across these data points, basically you see that it's much more robust, Whisper is much more robust compared to the other supervised LibreSpeech models. Even though it's not actually soda on the LibreSpeech, as you can see here, but that's, I guess, less important. We wanna create robust general models. Okay, so they mentioned that even the smallest zero-shot Whisper model, which only has 39 million parameters and a 6.7 WER on LibreSpeech test clean is roughly competitive with the best supervised LibreSpeech model when evaluated on other data sets. So I guess that's like, literally this model here probably is competitive, as you can see here, performance wise, the Y point here. So the average performance on the other data sets is close to these supervised LibreSpeech models. They have much better performance on the actual LibreSpeech dev clean test. Data set, sorry. 
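Coming back to those normalizers for a second: in practice you'd compute the normalized WER with something like the sketch below. I'm using the third-party jiwer package for the edit-distance part purely for illustration, and the two strings are made up; the normalizer class itself ships with the whisper repo:

    from whisper.normalizers import EnglishTextNormalizer
    import jiwer

    normalizer = EnglishTextNormalizer()

    reference  = "Ask not, what your country can do for you!"
    hypothesis = "ask not what your country can do for you"

    # naive WER penalizes purely stylistic differences (casing, punctuation)
    print(jiwer.wer(reference, hypothesis))

    # after normalization the superficial differences disappear
    print(jiwer.wer(normalizer(reference), normalizer(hypothesis)))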
Okay, so here are a couple more interesting observations, probably not that surprising at this point in time. With more data — you can see on the x-axis the hours of transcribed audio — the languages that have more data available in the dataset have a much lower error rate. And you can see it's super predictive: the correlation coefficient here is 0.84. And they mention a couple of outliers — you can see here ZH, which stands for Mandarin Chinese, and Korean are kind of outliers, given that even though they have this much data, the error rate is somewhat higher. And they hypothesize why that might be the case. So: checking the regression coefficient for a linear fit to these log-log values results in an estimate that the error rate halves for every 16x increase in training data (in log-log space that's a slope of roughly log(1/2)/log(16) = -0.25, i.e. the error rate scales roughly as hours to the power of -1/4). So that's the general trend we've seen. And then they say that many of the largest outliers in terms of worse-than-expected performance, according to this trend, are languages that have unique scripts and are more distantly related to the Indo-European languages making up the majority of the training dataset, such as Hebrew, Telugu, Chinese, and Korean. So I did point out these two languages here. They speculate that these differences could be due — I mean, it's not really speculation, it's a valid hypothesis, I guess — could be due to a lack of transfer because of linguistic distance, or simply the tokenizer being a poor match for these languages, or variations in data quality. So all of those factors could be confounding. Okay, so they have an additional interesting diagram here showing that the translation performance is not as predictable: given the hours of translated audio, it's not that easy to infer what the BLEU score will be. And they hypothesize here, somewhere below. So they say: we suspect this is partly caused by the noisier training data due to errors in audio language identification. So as an example, Welsh (CY) is an outlier with much worse than expected performance at only 9 BLEU, despite supposedly having 9,000 hours of translation data. And as it turns out — let me just show you on the diagram here where CY lies — you can see it literally has 9,000 hours, but it gets around 9 BLEU, which is super low compared to what this trend here would suggest. But then they say: inspection shows the majority of supposedly Welsh translation data is actually English audio with English captions, where the English audio was misclassified as Welsh by the language identification system, resulting in it being included as translation training data rather than transcription data according to our dataset creation rules. So that's a problem, because that means you were using the Welsh token, but actually you had English audio and an English transcription, which messes up the results. So I guess that's suggestive of not taking this plot here too seriously, and the general point of the paper is: given enough high-quality data, we can pretty much achieve human-level performance on speech recognition. There is some saturation phenomenon, but I'm gonna show you that in a couple of minutes. Okay, so let's continue here. Let's see a couple more things. One thing worth mentioning is this one. So they do the following thing.
They add some noise on top of the input audio signal, such as for example, in the left diagram, we see white noise and then on the right, the pub noise. So that's kind of background noise from a restaurant or a pub or something. And you can see that the whisper, which is this red star, you can see it definitely does not have the lowest error rate on the LibreSpeech test screen, but then as the noise levels keep increasing, it all of a sudden becomes one of the best models. And especially here on pub noise, you can see that the model outperforms all of the other ones. So it has lower error rates. So that means it's more robust to noise, and especially the natural noise compared to this synthetic white noise. Okay, let's continue here. They also compare whisper to other baselines in this long form transcription, where because the model can only see 30 second audio inputs, they have to do sliding window approach. And they show here that they are on pair and even better compared to many of these even commercial services. You can see the blue candle bars are for most of these long form transcription datasets lower than the baselines. Given there are some exceptions, obviously, like here you can see that this company B is outperforming them, but yeah, that's kind of impressive. This is also cool. They show how they compare to human transcription. So you can see how whisper is very close to other, even human transcription services. So here they took actual, well, humans to transcribe some, I think like they had 25 recordings of text or something, and then they compare them, and you can see it's on pair pretty much. But then there is this computer assisted human transcription which is the best out of all of these approaches. So that again means that this combination of humans and machines is still the best way to go about many, many problems. Let's see a couple of more results here. Obviously model parameter, this is again not that surprising, although it's very noisy. Let me show you what's going on here. So we have English speech recognition on this leftmost plot. You can see that the model parameters as it's scrolling, we can see that the error rate is obviously falling down, but there is some saturation going on. And there is also a lot of variance. If you take a look at these lighter lines, the dark blue one is just the average of those. So that kind of means the performance varies a lot across these 12 data sets. And we can also see saturation. I guess that's the main takeaway here. We see saturation on English in particular. And then on multilingual speech recognition, we can also see like a trend, downward trend, general downward trend. And again, a lot of variance if you take a look at some of these light blue curves. Considering that the models are approaching, that we are doing a supervised learning approach here and that the models are, so where was the diagram here? So that the models are on pair with this human services, with human transcription that might explain it. And the final results I want to show you are these ones here. Basically, you can see two lines here, two fits, and one is the English only training data set. And the second one is multilingual and multitask. And if you're in this regime where the flops are lower, that means you're training smaller models. And you can see that with smaller capacity, you're suffering the, so you wanna have, this is the error rates, you wanna be lower. So you can see the English only version is better. 
But the multilingual multitask models scale better. So that means when you go into the bigger model, like bigger scales, you can see that the error rate is actually lower for the multilingual multitask, which is not that surprising, I guess, and encouraging, given that we can just scale it even more and have higher quality data and we can achieve even better results. These are some candle bars about the talk, the actual normalizer, I'm gonna skip that. And finally, it's important to realize that there are still a lot of heuristics that they had to use to make this decoding reliable. So we have developed a set of heuristics that help avoid failure cases of long form transcription in particular. So they say here, first we use beam search with five beams using the log probabilities as the score function to reduce repetition looping, which happens more frequently in greedy decoding. And that's a common problem you have with greedy decoding in general, like you have the repetition problem. And then they say we start with the temperature zero, I always selecting the tokens with the highest probability, which is basically greedy, and then increase the temperature by zero two up to 1.0, when either the average log probability over the generated tokens is lower than minus one, or the generated text has a GZIP compression rate higher than 2.4, which would probably mean that if it's higher than that, that means that the text is too random and too hard to compress, which means it's probably gibberish. I guess that's the rough reasoning behind the heuristic. And there is a lot more details here, I'm just gonna skip that for now. And you can see here, like in a tabular form, the same thing, adding beam search, adding temperature fallback, adding previous text conditioning, adding this voice activity detection, initial timestamp constraints, all of those help improve the average performance across different transcription, long form transcription data sets here. So I guess the main takeaway from all of this is, if you take a look at the diagrams here, this plot is kind of suggestive that, given higher quality and abundance of audio files, of audio data for a particular language of interest, and even off the shelf model, you can achieve remarkable performance. And we also wanna do the multi-lingual multitask setup. Okay, guys, having seen the main ideas from the paper, let's now switch to the code, and let me walk you through and try and show you the correspondence between the code and the diagram. You see here, I'm gonna see in a bit more detail how this special token pipeline looks like. So I already went ahead and cloned the repo, created a condo environment, so all of that should be fairly trivial. Everything is written down in the README quite nicely. So yeah, let me just show you the types of models they have. So they support a couple of models. You can see here, tiny, base, small, medium, and large. You can also see the.end versions, which are English only, and these ones with the other suffix are basically multi-lingual multitask models. Okay, so you can specify multiple audio files here. We're using small model by default. Default task is transcribe. You can also choose translation. Language is set to none, which means we'll infer automatically what the language is, so that's gonna be interesting. And then bunch of parameters around the coding and these thresholds that help robustify the decoding pipeline. Now they just pop some arguments. 
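If you're calling this from Python rather than from the command line, those same knobs are exposed on model.transcribe. Here's a sketch with the long-form decoding defaults written out explicitly, assuming the released openai/whisper package; the audio path is a placeholder:

    import whisper

    model = whisper.load_model("small")

    result = model.transcribe(
        "long_audio.mp3",
        temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),   # temperature fallback schedule
        logprob_threshold=-1.0,            # fall back if the average token log-prob is below -1
        compression_ratio_threshold=2.4,   # fall back if the gzip compression ratio of the text exceeds 2.4
        no_speech_threshold=0.6,           # mark a window as silence/music via the no-speech token probability
        condition_on_previous_text=True,   # prompt each 30-second window with the previous window's text
        beam_size=5,                       # beam search with 5 beams instead of greedy decoding
    )
    print(result["text"])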
I'm gonna skip all of these parts that are not crucial to the understanding of the model. So this is the temperature going from zero to one with increments of 0.2 — that's the fallback schedule they use to robustify the decoding, if that's even a word — and then load_model. So let's go here and let me jump all the way to here. So I'm gonna disable breakpoints, put a breakpoint here, and let's just step into whisper. Okay, so basically in the background we download the model — actually, I already downloaded it beforehand, so it just loads the weights. So here is how the model looks. Let me now enable the breakpoints and enter here. So we have the audio encoder and we have the text decoder. Those are the two stacks we saw previously in the diagram. So let me now enter the encoder and see how it looks. We can see here we have two Conv1D layers. We have the positional embeddings, which are not learned, so they're sinusoids. And let me just put the code and the diagram side by side, it's gonna be easier. Okay, so here we are, I can see the diagram on the side here. So again, we have the two Conv1D layers, so that's this part here, the stem of the encoder. Then we have the non-learnable positional embeddings, you can see those here. Then we have a bunch of blocks, which are just residual attention blocks, so nothing new there. And finally we end up with a layer norm. And later we'll see, in the actual forward pass, how we pass the mel spectrogram through the Conv1D layers and then the GELU activations, blah blah blah — the stem — and then the encoder part. So everything is fairly standard here. Okay, so let's exit the construction of the audio encoder. Let me just make sure that I have everything enabled, let's hit F5 and enter the decoder. And now let's go back to this side-by-side view. So here is the text decoder. You can see a couple of things. Of course, the embedding table here, because we're dealing with a decoder model that has a vocabulary. We have positional embeddings which, as you can see here, are learned — it says here "learned positional embeddings", and that's why we have a Parameter that we just initialize, as you can see here. So 448 is the context length they are using, and the model dimension is 768. Okay, and then we have a bunch of blocks, and these ones have cross-attention set to true, which means they're additionally gonna have that third component in the transformer layer, the so-called cross-attention block. But yeah, that's it, fairly similar to what we saw in the encoder part. Okay, and we have a mask, which is going to be a causal mask, because this is a causal decoder. So if I were to print it, you can see minus infinities on the upper triangle, which means we're gonna mask out the future tokens, and a particular token can only attend to itself and the previous tokens. Okay, hopefully nothing new there, just standard transformer stuff. And now let's go back. So let me expand this again. So we load the state dictionaries and we just push the model to the device. Let's exit here — and they have a small, well, it's not a bug, but they call .to(device) twice, which is kinda unnecessary. But okay. So here we are, we popped the audio path. I'm gonna use the one that they provided in the code base, and maybe it will be useful for me to show you how that sounds — I'll play it right after one quick aside.
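The aside is just that the causal mask really is nothing more than an upper-triangular matrix of minus infinities. A small sketch mirroring what the whisper code does, up to variable names:

    import torch

    n_ctx = 448  # decoder context length mentioned above

    # -inf above the diagonal: token i can attend to tokens 0..i, but not to future ones
    mask = torch.empty(n_ctx, n_ctx).fill_(float("-inf")).triu_(1)
    print(mask[:4, :4])

Okay, so here's how the audio clip from the repo sounds.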
And so my fellow America, ask not what your country can do for you. Can do for you. Ask what your country can do for your country. Okay, so that's the audio file we are using and let's now transcribe it. So again, let me make sure we are enabling the breakpoints. We hit F5 and here we are. So let's see what's interesting going to happen here. So the first interesting part is the log mal spectrogram. So let's see how we convert an audio file, which is just a path into a mal spectrogram. So let's enter here. So the first part is loading audio files. So we just use FFmpeg for that. And then you can see blah, blah, blah, some from buffer because these are bytes, integer 16. And that's we divide by 32,000 because precisely because we're using int 16. So that's like, I guess two raised to the power of 15 is exactly that number. And that's why we are basically putting the amplitude in the minus one, one range, just normalization detail. Okay, so once we have, so we are in the log mal spectrogram. So once we load the data, which is just raw data at the moment, so that's gonna be just a sequence of numbers, 176,000 floats. Now we form the hand window, just some signal processing stuff. Then we apply the short, basically a Fourier transform. It stands for, what does it stand for? A short time, yeah, short time Fourier transform. Okay, so we applied the short time Fourier transform, which is gonna basically convert this audio file into a spectrogram, which is your regular spectrogram. So if I were to print this particular variable here, the spectrogram, the shape is gonna be, as you can see here, 201 and 1,101. So 201 is the frequency dimension, this one is the time dimension. The reason being they have some, they're doing sliding window approaches. So that's why we don't have as many timestamps as in the original raw audio. Okay, so then we do the magnitudes because we wanna have the magnitude component of the Fourier. And then we form these filters. As you can see here, we just, what they do is they use this Librosa to generate these MEL filters, and then they saved the MEL filters into this particular file. So you can see here, this assets MEL filters, MPZ. So if you go to the assets directory here, you can see that the MPZ, so they just kinda save them there and then they can avoid having the dependency. Anyways, it's already too many details. Let's get out of here, filters, and then we do the matrix multiplication. Again, that's a detail where by, once you multiply using these triangular filters, once you multiply that with the actual magnitude of Fourier spectrogram, then you end up with a MEL spectrogram. And now we're gonna see how this is going to have 80 bands. So if I were to print the shape of this thing, it's gonna have, oops, let's just grab this variable. It's gonna have, as you can see here, 80 bands and we have 1,100, again, time dimension. And if you go to the paper, you'll see all of these details there. So again, let me just quickly jump to that part where they mention it, somewhere here. So all audio is resampled to 16,000 hertz and an 80 channel log magnitude MEL spectrogram representation is computed on 25 millisecond windows with a stride of 10 milliseconds. And because we're using these windows of 25 milliseconds, that's why we end up with a much shorter representation in the time dimension compared to having a 176,000 or something. Okay, let's go back to the code, some normalization. I'm not gonna focus too much on this. 
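All of that loading and mel business is also exposed as plain functions in the package, so outside the debugger you'd get the same features with something like this (placeholder path again, assuming the released openai/whisper package):

    import whisper

    audio = whisper.load_audio("audio.mp3")   # ffmpeg decode, resample to 16 kHz, scale to [-1, 1]
    audio = whisper.pad_or_trim(audio)        # pad or cut to exactly 30 seconds (480,000 samples)
    mel = whisper.log_mel_spectrogram(audio)
    print(mel.shape)                          # torch.Size([80, 3000]) -> 80 mel bands x 3,000 frames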
And finally, because the language is none, we're now going to predict the language. Let's see how that's gonna work. So detecting language using the first 30 seconds, blah, blah, blah, we first do the padding, which is just going to pad this to a 30 second audio file. So that means it's gonna have 3,000, basically dimensions in this time, along the time axis. And now let's basically do the detect language part. So we passed the segment, which is just a pad MEL spectrogram, which means basically we padded with all zeros to the right. And let's now do the detection. So here is how it's gonna look like. We get the tokenizer first. It might be interesting to just briefly jump into the tokenizer. And let me show you how it looks like. So we specify that it's a multilingual tokenizer, task is transcription, we have English, and now we do the build part. Okay. So here is how building the tokenizer looks like. We are basically taking the pre-trained GPT-2 tokenizer fast from the Huggins face library, and we just pre-loaded. And here are the special tokens. So here are the tokens we saw before in the paper. So we have the start of the transcript token, SOT. We have, as you can see here, I think this language's keys is like 99 keys in total or something. Let me just double check that. So you can see here, 99 keys precisely. So that means we'll add for each of those languages, we'll add this special token. For example, this and like the smaller sign and then bar and then N, like EN, and then the rest. And for each of the languages, a similar thing. So you can see here how the keys look like here for the languages. And then we have translate, transcribe, start of LM, start of prev, no captions and no timestamps. So each of these have some meaning. We're gonna see maybe some of them are subset a bit later. So we end up adding those to the pre-trained Huggins face tokenizer and we return back here. Okay, so now let's just see what's interesting. None of this is gonna be interesting for the time being. They just create a SOT sequence, which is gonna have, as you can see here, they start with the SOT and they added the EN token and they added the transcribe token. And so that's basically the sequence we saw before here. So let me just go back to the paper. So that's your classic sequence here. So the start of transcript, language tag and transcribe. Let's go back to the code. But we are not going to use that for the language detection that we are currently doing. So that's why I said it's not that important at the moment. So we just do unsqueeze on the mail, nothing fancy. And then we pass the mail through the encoder stack. So let's now get the features. So you can see here that everything is the same as what the diagram says. We do a forward pass through, I guess, 12 blocks or something. So we can just skip that. Let me just see how many blocks this one has. I guess it's like 12 or something, 12, yeah. Let's just pass through all of that. And we ultimately end up with the following shape. So you can see here, 1,500. So because we have one of these has stride of two, stride is set to two. And that's why we went from 3,000 to 2,500. And the dimensionality is 768. So that's the model dimension of the encoder. Okay, let's exit here. Number of audio is gonna be one because we only have a one audio file. And here you see, so here we just set the SOT token and we create a tensor that has SOT. And now we're gonna pass the mail, so the encoder features from the output of the encoder, and we pass the SOT. 
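By the way, this whole language-detection dance is wrapped up in a single call in the public API, and the tokenizer we just looked at can be constructed directly as well. A sketch, assuming the current openai/whisper interface; the path is a placeholder:

    import whisper
    from whisper.tokenizer import get_tokenizer

    model = whisper.load_model("small")

    mel = whisper.log_mel_spectrogram(whisper.pad_or_trim(whisper.load_audio("audio.mp3")))
    _, probs = model.detect_language(mel.to(model.device))
    print(max(probs, key=probs.get))   # e.g. "en"

    tokenizer = get_tokenizer(multilingual=True, language="en", task="transcribe")
    print(tokenizer.sot_sequence)      # ids for <|startoftranscript|>, <|en|>, <|transcribe|>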
And let's see what the model is gonna generate — let's see which language it gives the highest probability to, and that's how we pick the language. So let me just go into the code instead of explaining like this. Here we are, in the text decoder forward pass. I'm gonna ignore the cache parts; those are just implementation details of how they optimize the execution of the decoder's forward pass. Let's see what's going on. They do the token embedding — again, remember this is just the SOT token, as you can see here — and then they just add the positional embedding. All of that is kinda standard stuff. And then you can see here they're doing the cross attention; XA is the output features of the encoder, as you can see here. That's what's going on here, so I'm just gonna skip all of that and let's get here. Now the output features are multiplied by — we do a matrix multiplication with — the embedding table, and that's gonna project them from 768 dimensions into the vocab space, which has, let's see, what's the dimensionality — so the logits shape is gonna be 51,865. Okay, those are the logits. And now comes the interesting part. We create a mask that initially just has ones for every single token in the vocab. That's the mask initially. Then for the language tokens we set that mask to False, which means we do not want to mask the language tokens — the EN, the DE, all of the 99. I think this should be 99, let me just make sure that's the case; let's see the length of this list — 99, precisely. So that's how we modify the mask. And now we put the logits of every single token, except for the language tokens, to minus infinity, which after we apply the softmax is gonna set them to zero. That means they'll have zero probability and only the language tokens will have non-zero probability. Okay, so here we do argmax to find the highest-probability language token. We end up with, I think, English — yes, this is English. And we also return the probabilities for the other language tokens; we just zip that and return those results. So we return the actual language token and we return the probabilities. But I think, yeah, this one is not even used, so this is kind of wasteful — they could have just passed it directly here, but anyways. So they end up with the language; we just grab the key. You can see the probabilities map each language token to its probability, and you can see that EN, English, has roughly 0.95. That's why we're gonna pick precisely English as the language here, and we output "detected language: English". Okay, that's the first part, so we now have English predicted. Now let's continue doing the actual transcription. I'm gonna skip over all of these details; let me just disable the breakpoints and get to the interesting part. These are just some details — the precision is, as you can see here, 20 milliseconds, that's why this is 0.02; those are, I guess, not that vital. Okay, we just pad the mel again so that we end up with 3,000 here. Let's just check the shape of this segment: you can see it's 80 by 3,000, because we have 80 mel bands and 3,000 frames from the 30-second padded audio clip. We calculate the duration, it's gonna be 30 seconds. And then this is where the magic starts happening: decode with fallback. Let's see how that's gonna work. Okay, here we are.
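The masking trick just described can be condensed into a few lines. This is a hedged sketch rather than the repo's exact function, and the variable names are mine:

```python
import torch

def detect_language(sot_logits, language_token_ids):
    # sot_logits: (batch, vocab) - decoder logits after feeding only the <|startoftranscript|> token
    mask = torch.ones(sot_logits.shape[-1], dtype=torch.bool)
    mask[language_token_ids] = False                       # keep only the 99 language tokens
    masked = sot_logits.masked_fill(mask, float("-inf"))   # everything else -> probability 0 after softmax
    probs = masked.softmax(dim=-1)
    return probs.argmax(dim=-1), probs                     # e.g. the <|en|> id with ~0.95 for this clip
```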
Temperature — again, we have the list of various temperatures. We grab the first one, which is gonna be the zero temperature, which would normally mean greedy decoding — although, actually, in this case we'll be using beam search. So let's just continue here and see what's going on. Decoding task: this is how they design it. They form a decoding task and then they call run on that task, passing the mel spectrogram. So let's see what's going on. I'm gonna skip some of these details because there's too much going on; I just wanna focus on the stuff that's useful for understanding Whisper, of course. So here, this one implements the length penalty. When you're doing beam decoding, you want to take into account how probable a beam is, but you also wanna penalize it if it's too long. That's what this maximum likelihood ranker does here, as you can see: penalty, and we are appending the log prob divided by the penalty. I'll skip this, but in case you want to learn more, I think this repo is great when it comes to understanding how beam decoding works. If you want, I might create a separate video where I go into a lot of depth and explain how this works, but here I'm gonna skip those details. So we form a beam decoder here, and then this is the interesting part: here are some of the heuristics they are using. They're suppressing certain tokens during the decoding process. One of those is gonna be suppress blank, which means we're gonna suppress the blank token as the first token after we start the transcription — we'll get to that a bit later. Then they have a specific list of tokens that they wanna suppress. Let me enter this one. So suppressed tokens is set to minus one because that's our input argument to the program, and now let's see how that's gonna be mapped into actual tokens. Here you can see we extend by these non-speech tokens, and here they are — just a list of tokens that cannot appear in spoken language. Again, I'm gonna skip it because there are a lot of details, but that's the rough point: we wanna help the decoder by masking out those particular tokens. Okay, and they add some more. We don't wanna predict the start-of-transcription token because we already know we are transcribing — that's kinda a no-brainer, but yeah. And finally the no-captions token, which is the token that tells you we don't have any human voice, just music or something going on. Okay, that's the second heuristic they have. And then they also have these apply-timestamp rules, which do what? Okay, I have a breakpoint there; we'll see what it does once we hit the apply. So let's just continue. Let's continue here, let's exit here, and we are now in the run method. All of that was construction of the decoding task; now let's see what the actual run looks like. We first fetch the audio features, and this one is gonna, again, just pass the mel through the encoder and return the audio features. I'm gonna skip all of that, I'm just gonna disable that — let's fetch the audio features by passing the mel through the encoder stack. And by the way, I'm running this on a CPU because my conda environment was not properly set up; that's why it's a bit slower, but it's still working fairly fast, which is kinda cool. Okay, so we set the initial tokens to these three: the SOT, the English token, and the transcribe token.
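Here is a hedged sketch of the two heuristics just mentioned: a logit filter that suppresses a fixed list of token ids, and a length-penalized ranker for beam hypotheses. The ((5 + L) / 6) ** alpha form is the standard Google-NMT-style penalty; treating it as the exact one used here, and the class/function names, are assumptions:

```python
import torch

class SuppressTokens:
    """Logit filter: forbid a fixed list of token ids (non-speech symbols, SOT, etc.)."""
    def __init__(self, suppress_ids):
        self.suppress_ids = list(suppress_ids)

    def apply(self, logits):
        # logits: (n_beams, vocab); -inf becomes probability 0 after the softmax
        logits[:, self.suppress_ids] = float("-inf")
        return logits

def rank_beams(sum_logprobs, lengths, length_penalty=None):
    """Pick the beam with the best length-normalized log probability."""
    def penalty(length):
        # plain length when no penalty exponent is given, Google-NMT-style penalty otherwise
        return length if length_penalty is None else ((5 + length) / 6) ** length_penalty
    scores = [lp / penalty(n) for lp, n in zip(sum_logprobs, lengths)]
    return max(range(len(scores)), key=scores.__getitem__)
```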
That's our initial sequence. And then this is gonna be a no-op: basically we just pass the language, and we already detected the language, we know it's English, so that's gonna be a no-op. Now we repeat by five because we are doing beam search. Again, I'm not gonna focus on the details of beam search, but let's just see the shape before and after. So the shape now is 1,500 because that's the time dimension, and then this is the actual embedding dimension. Let's see what's going on after this call: we have five, because we'll have five hypotheses that we are keeping track of during the beam decoding. And we do the same for the tokens. And now here is the main loop; here is where the magic happens. Okay, let's start here. So we now sample for up to 224 steps, and we first pass the tokens as well as the audio features to grab the logits. It's gonna be a simple pass through the decoder. Again, some hook stuff that's just an optimization detail. Here's the decoder call. What we do is embed our three tokens — let's see what's the shape now: you can see here five, three, 768, because we have three tokens, five because we have the beam, and this is the dimension of the model. We pass all of those, and again we are cross-attending to the output features of the audio encoder. That's the idea. And finally we end up with the logits, so we can skip all of this — this is just regular transformer stuff. Whoops, let's exit here. Let's exit, and we have our logits. Now this is an interesting point, let's see what happens here. Because we are in iteration zero, and because the no-captions token exists, we're going to enter this branch. What we do is grab the logits for the SOT index. That means: once we condition the model on the SOT token, what's the distribution across the next tokens after the SOT? In case there is a high probability for the no-caption token, that means we don't have any human vocals in the audio clip. So again, we index here, we grab the SOT logits, and let's see the shape: we have five, because of the beam, but we only have the logits at the SOT position. And then additionally we grab precisely the probability of this particular token, and that's gonna be the no-caption probability. In our particular case it's very low, but if it was high enough, then we would basically ignore the decoder output and just say, okay, there is no voice happening in this audio clip. And let me remind you, that's this part here: we either have the language tag or the no-speech tag, and that's the voice activity detection. That's how that part of the task works — so now you see it in code. Okay, so having collected that — remember we actually have the SOT, English and transcribe tokens — we're gonna do the decoding. Even if the no-caption probability was higher, we're still gonna do the decoding here as if we wanted to transcribe the audio, but if it was high enough, then we would just ditch and ignore all of that output, which is probably wasteful. Maybe just a break statement here or something would be more optimal, but yeah, that's just an optimization detail. So we grab the logits that predict the fourth token, and we apply a bunch of filters — those are the filters I mentioned before. So let's see.
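Here is a hedged sketch of that voice-activity check — reading the probability of the no-captions token off the logits at the SOT position; the function and variable names are assumptions:

```python
import torch

def no_speech_prob(sot_logits, no_captions_id):
    # sot_logits: (n_beams, vocab) - decoder logits taken at the <|startoftranscript|> position
    probs = sot_logits.float().softmax(dim=-1)
    # later compared against a threshold; if it is high enough, the segment is treated
    # as containing no human speech and its transcription is discarded
    return probs[:, no_captions_id]
```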
The first one is the suppress blank. You can see here that we wanna mask the blank token, and we set it to minus infinity because it doesn't make any sense to start the transcription with a blank token as the very first token — that's why they mask it out. What's probably weird is why it's not a generic whitespace and instead it's just the space character; I'm not sure about that point, but yeah, let's continue. Let me just enable all of the breakpoints and enter here. So here's the second one: the special tokens. That's again a list of special tokens that we wanna suppress; we set all of their logits to minus infinity, which means their probabilities after the softmax will be zero. And we have the final one, here it is: the no-timestamps one. Let's see how that one works — I'm not sure about this one. Okay, so it seems that we can never sample the no-timestamps token because we set it to minus infinity, and that means we kinda cannot ever go down that route, right? So that's kinda weird — in which setting do they actually sample this particular token? I'm not sure about that. Okay, let's continue here. I think this is gonna be a no-op; yeah, it's gonna skip all of that. So let's just put the breakpoint here, hit F5, and let's continue. Let's see what else. We now apply the log softmax, and now this is one of these additional details that's kinda important: if the summed probability over the timestamp tokens is above any other token, then sample a timestamp. So that's how we sample the timestamp — let's see this part, that's kinda important. We take the log probabilities, which we get after applying log softmax, and before that we did the masking, so now only the relevant tokens can be sampled from. As you can see here, we grab only those tokens which correspond to timestamps, and we do a sum across those just to get their cumulative log probability. And we do a similar thing for all the other, non-timestamp tokens — as you can see here we grab them, but actually we take the max, not the sum. And if the timestamp probability is bigger than that one, then we mask all of the non-timestamp tokens, and then we're certain that we'll generate the timestamp information. So as you can see in this particular instance, because this is the first step, we're generating the 0.00 timestamp token. Let me remind you here: we're actually generating this begin-time token as the first token, which kinda makes sense. Okay, so let's go back to the code. Let's exit here — let me just Shift+F11, we're out. Those are the logits. And now we have this update function — what does it do? Okay, so this is actually the beam decoding part, collecting the candidates, blah, blah, blah. As I said, I'm gonna skip all of those; let's just treat it as a black box where we end up sampling, I guess, five candidates. Let's see how that's gonna look. So tokens is gonna be five, four, and before that we had five, three — that's because we now have five new candidates that are highly likely. And we can just skip everything else here; I think we're safe to skip all of this. I'm just gonna hit F5 to the breakpoint, exit, and that's it. So now we just keep on doing the same thing: we get the logits. Whoops, let's just exit from the text decoder.
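As a rough sketch of that timestamp rule — not the repo's exact code; the index layout (timestamp tokens occupying all ids from some timestamp_begin onward) follows the walkthrough above:

```python
import torch

def apply_timestamp_rule(logits, timestamp_begin):
    # logits: (n_beams, vocab); timestamp tokens occupy the ids >= timestamp_begin
    logprobs = torch.log_softmax(logits.float(), dim=-1)
    timestamp_mass = logprobs[:, timestamp_begin:].logsumexp(dim=-1)   # total prob on timestamps (log space)
    best_text = logprobs[:, :timestamp_begin].max(dim=-1).values       # best single non-timestamp token
    force_timestamp = timestamp_mass > best_text
    logits[force_timestamp, :timestamp_begin] = float("-inf")          # only timestamps can be sampled now
    return logits
```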
So we get the logits, and now we'll be skipping this part because we're not in the zero step — that's the only step where we calculate the no-caption probs. And then we just keep on sampling until we hit the end-of-transcription token, pretty much. So let me now remove all of the breakpoints, put a breakpoint here, exit, and let's see what's gonna happen. Okay, so here we are. Tokens — let's see the shape: we have five, and each of them has 33 tokens, okay? We return the probabilities as well as the no-caption probabilities. We can skip all of this, I'm not gonna focus on that too much, and let's just return back to the result. Here we are. So now we have the results, and now we have this fallback mechanism, as you can see here. In our case we'll get directly to the results and we will not enter this branch, which would force us to do the decoding again with a different temperature. That's because we satisfied these conditions: the compression ratio is not higher than the threshold of 2.4, which means we are not generating something super repetitive, stuck in a loop; and we're also passing this test here, the average log probability is not lower than minus one. Because of that we are already good — we have a correct transcription and we can exit here. We check here whether the no-caption threshold is set, and because it is, we now do this check: is the no-caption probability higher than the threshold? If it is, then we should skip, and that means we would literally skip the whole segment here and continue transcribing the next segment. In our case it's not, so we just ignore this part here. Now there is some part about the timestamps here. You can see here: tokens greater than or equal to timestamp begin. The timestamp tokens have this nice property that they are sequential, which means that the beginning one, the 0.00 moment of the transcription, has some ID, say N, and then all of the later timestamps have even bigger indices in the embedding table, and that's why this is gonna work. So this is gonna fetch all of those tokens that are timestamp tokens, and then we use that information to add the segments. I won't go through this in detail — feel free to go through the code at your own pace. But I'm gonna return back to the results, and here we are. So we can now tokenizer-decode all tokens, and if we were to print this, you can see: "and so my fellow Americans, ask not what your country can do for you, ask what you can do for your country." If you remember the clip from the beginning of this coding session, that's precisely what we were playing. So here we just save that as a .txt file, and we also save the format where we have the timestamp information in addition to the transcription. Guys, that's pretty much it. The details of the code are complex, but if you've stuck with me up until this point, congrats — hopefully you've seen how this intricate system works. There are a lot of heuristics, but the system is ultimately very elegant in the sense that it uses an off-the-shelf transformer module and basically just leverages huge data and some decoding heuristics. Cool, so if you liked this video, consider subscribing, sharing it out with your friends, and see you next time. Until then, thanks for watching.
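To recap the fallback logic walked through above, here is a hedged sketch of the accept/retry check. The 2.4 and minus-one thresholds come from the walkthrough; the gzip-style compression-ratio computation and the function names are my assumptions:

```python
import zlib

def needs_fallback(text, avg_logprob,
                   compression_ratio_threshold=2.4, logprob_threshold=-1.0):
    data = text.encode("utf-8")
    compression_ratio = len(data) / len(zlib.compress(data))  # high ratio -> repetitive, looping output
    too_repetitive = compression_ratio > compression_ratio_threshold
    too_unlikely = avg_logprob < logprob_threshold
    return too_repetitive or too_unlikely

# Sketch of the retry loop: decode at increasing temperatures until the result passes both checks.
# for t in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
#     result = decode(mel, temperature=t)            # decode() is a hypothetical stand-in here
#     if not needs_fallback(result.text, result.avg_logprob):
#         break
```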
[{"start": 0.0, "end": 2.64, "text": " OpenAI just open sourced Whisper,"}, {"start": 2.64, "end": 5.48, "text": " which is an automatic speech recognition system"}, {"start": 5.48, "end": 7.7, "text": " that's approaching the human level baseline"}, {"start": 7.7, "end": 11.92, "text": " on both the accuracy as well as on the robustness wise"}, {"start": 11.92, "end": 13.64, "text": " for the English transcription."}, {"start": 13.64, "end": 15.34, "text": " And that's super awesome."}, {"start": 15.34, "end": 17.88, "text": " So in this video, I'm gonna cover the paper"}, {"start": 17.88, "end": 21.0, "text": " as well as I'm gonna go through the code behind the paper,"}, {"start": 21.0, "end": 22.8, "text": " which they luckily open source."}, {"start": 22.8, "end": 25.060000000000002, "text": " So the inference code as well as the model checkpoints"}, {"start": 25.060000000000002, "end": 28.2, "text": " have all been open sourced and that's kinda cool."}, {"start": 28.2, "end": 29.64, "text": " So yeah, without further ado,"}, {"start": 29.64, "end": 31.64, "text": " let's dig into the video."}, {"start": 31.64, "end": 34.0, "text": " So here is the tweet that kinda announced"}, {"start": 34.0, "end": 35.8, "text": " the release of Whisper."}, {"start": 35.8, "end": 38.52, "text": " They say we've trained a neural net called Whisper"}, {"start": 38.52, "end": 40.7, "text": " that approaches human level robustness and accuracy"}, {"start": 40.7, "end": 42.120000000000005, "text": " on English speech recognition."}, {"start": 42.120000000000005, "end": 44.480000000000004, "text": " It performs well even on diverse accents"}, {"start": 44.480000000000004, "end": 45.72, "text": " and technical language."}, {"start": 45.72, "end": 48.24, "text": " Whisper is open source for all to use."}, {"start": 48.24, "end": 51.28, "text": " You can see that Whisper can do many tasks"}, {"start": 51.28, "end": 54.34, "text": " and it's a multilingual multitask system."}, {"start": 54.34, "end": 56.400000000000006, "text": " So here's a couple of examples."}, {"start": 56.400000000000006, "end": 57.68, "text": " So English transcription."}, {"start": 57.68, "end": 60.6, "text": " So given an audio in English, ask not what your country"}, {"start": 60.6, "end": 62.48, "text": " can do for blah, blah, blah."}, {"start": 62.48, "end": 65.22, "text": " You ask and then you see the transcription here,"}, {"start": 65.22, "end": 67.12, "text": " ask not what your country can do for you."}, {"start": 67.12, "end": 67.96, "text": " So that's kinda trivial."}, {"start": 67.96, "end": 71.44, "text": " And then we have any two English speech translation"}, {"start": 71.44, "end": 75.0, "text": " where we have maybe audio file in let's say Spanish,"}, {"start": 75.0, "end": 78.2, "text": " El rapido thorough marron salta sobre."}, {"start": 78.2, "end": 79.86, "text": " And then you have the translation,"}, {"start": 79.86, "end": 82.9, "text": " the quick brown fox jumps over."}, {"start": 82.9, "end": 85.08, "text": " And then we have non-English transcription"}, {"start": 85.08, "end": 87.92, "text": " where the audio file is again in some foreign language"}, {"start": 87.92, "end": 89.12, "text": " that's not English."}, {"start": 89.12, "end": 92.46, "text": " And the transcription is also in that same language."}, {"start": 92.46, "end": 94.12, "text": " So I'm not gonna try and pronounce Chinese."}, {"start": 94.12, "end": 96.96, "text": " My Mandarin, it's not at its peak."}, {"start": 96.96, "end": 100.32, 
"text": " So there is the last task here is the no speech one"}, {"start": 100.32, "end": 104.48, "text": " where given the background music playing or like whatever,"}, {"start": 104.48, "end": 106.64, "text": " it's important that we don't have human voices."}, {"start": 106.64, "end": 109.4, "text": " The model outputs a special token signaling"}, {"start": 109.4, "end": 113.48, "text": " that there is well, no human speech going on."}, {"start": 113.48, "end": 115.12, "text": " Other than these tasks,"}, {"start": 115.12, "end": 118.76, "text": " the model can also do language detection,"}, {"start": 118.76, "end": 120.96000000000001, "text": " which means it can given the audio detect"}, {"start": 120.96000000000001, "end": 125.76, "text": " which language does that particular audio file belong to."}, {"start": 125.76, "end": 128.8, "text": " Okay, let's jump into the paper."}, {"start": 128.8, "end": 130.86, "text": " Let's see the abstract"}, {"start": 130.86, "end": 133.84, "text": " and then I'm gonna first give you some high level overview"}, {"start": 133.84, "end": 136.26, "text": " and then we can dig a bit deeper."}, {"start": 136.26, "end": 138.88, "text": " So the name of the paper is robust speech recognition"}, {"start": 138.88, "end": 141.2, "text": " by a large scale weak supervision."}, {"start": 141.2, "end": 144.11999999999998, "text": " We're gonna see that this is one of the main components"}, {"start": 144.11999999999998, "end": 147.88, "text": " of the paper so just like the data set itself."}, {"start": 147.88, "end": 150.64, "text": " And let's kind of skim through the abstract first."}, {"start": 150.64, "end": 153.79999999999998, "text": " So we studied the capabilities of speech processing systems"}, {"start": 153.79999999999998, "end": 156.76, "text": " trained simply to predict large amounts of transcripts"}, {"start": 156.76, "end": 158.83999999999997, "text": " of audio on the internet."}, {"start": 158.83999999999997, "end": 163.6, "text": " When scaled to 680,000 hours of multilingual"}, {"start": 163.6, "end": 165.48, "text": " and multitask supervision,"}, {"start": 165.48, "end": 168.67999999999998, "text": " the resulting models generalized well to standard benchmarks"}, {"start": 168.68, "end": 172.28, "text": " and are often competitive with prior fully supervised results"}, {"start": 172.28, "end": 174.44, "text": " but in a zero shot transfer setting"}, {"start": 174.44, "end": 176.66, "text": " without the need for any fine tuning."}, {"start": 176.66, "end": 177.72, "text": " When compared to humans,"}, {"start": 177.72, "end": 181.6, "text": " the models approach their accuracy and robustness."}, {"start": 181.6, "end": 184.44, "text": " Okay, so let's first start by just showing you"}, {"start": 184.44, "end": 186.60000000000002, "text": " the high level diagram,"}, {"start": 186.60000000000002, "end": 188.04000000000002, "text": " how the architecture looks like"}, {"start": 188.04000000000002, "end": 190.60000000000002, "text": " and these special tokens that you use"}, {"start": 190.60000000000002, "end": 193.24, "text": " to prompt the model to do specific tasks."}, {"start": 193.24, "end": 196.32, "text": " So the architecture itself is literally"}, {"start": 196.32, "end": 200.35999999999999, "text": " like off the shelf transformer encoder decoder transformer"}, {"start": 200.35999999999999, "end": 204.92, "text": " module from 2017 attention is all you need paper."}, {"start": 204.92, "end": 208.23999999999998, "text": " So they 
wanted to kind of remove that confounding factor"}, {"start": 208.23999999999998, "end": 210.04, "text": " of just tweaking the architecture"}, {"start": 210.04, "end": 212.48, "text": " for this particular task of speech recognition."}, {"start": 212.48, "end": 215.32, "text": " And instead they just took the off the shelf model"}, {"start": 215.32, "end": 217.07999999999998, "text": " and they've been playing with the data."}, {"start": 217.07999999999998, "end": 219.68, "text": " Like data is the main part here"}, {"start": 219.68, "end": 221.24, "text": " as well as the special tokens."}, {"start": 221.24, "end": 222.84, "text": " We'll see those in a second."}, {"start": 222.84, "end": 225.44, "text": " So again, how the system works on the high level"}, {"start": 225.44, "end": 227.44, "text": " is you grab an audio file"}, {"start": 227.44, "end": 230.16, "text": " and then you convert it from its raw form"}, {"start": 230.16, "end": 233.35999999999999, "text": " into something called log male spectrograms."}, {"start": 233.35999999999999, "end": 236.6, "text": " So spectrograms are these representations"}, {"start": 236.6, "end": 241.38, "text": " where here on the X axis, you have the time information."}, {"start": 241.38, "end": 244.84, "text": " So the time steps and then on the Y axis"}, {"start": 244.84, "end": 246.68, "text": " you have the actual frequency."}, {"start": 246.68, "end": 250.12, "text": " And then the color basically tells you"}, {"start": 250.12, "end": 253.64, "text": " what's the amplitude of that particular frequency"}, {"start": 253.64, "end": 254.96, "text": " at that particular point of time."}, {"start": 254.96, "end": 256.68, "text": " So like this point here,"}, {"start": 256.68, "end": 258.96000000000004, "text": " like this will be the particular time."}, {"start": 258.96000000000004, "end": 260.6, "text": " And then you can see the frequency here."}, {"start": 260.6, "end": 262.48, "text": " And the frequency is usually in Hertz"}, {"start": 262.48, "end": 265.16, "text": " but in the case of male spectrograms, it's in males."}, {"start": 265.16, "end": 267.6, "text": " And male is just like a logarithmic scale"}, {"start": 267.6, "end": 270.28000000000003, "text": " because that's how we perceive pitch"}, {"start": 270.28000000000003, "end": 273.40000000000003, "text": " and not like linearly like Hertz scale."}, {"start": 273.40000000000003, "end": 277.52, "text": " So that's a TLDR, but I'll link some resources"}, {"start": 277.52, "end": 280.90000000000003, "text": " for explain very nicely how male spectrograms work"}, {"start": 280.90000000000003, "end": 283.5, "text": " because that's not the topic of this video."}, {"start": 283.5, "end": 287.0, "text": " Okay, so once we have that representation of the audio"}, {"start": 287.0, "end": 289.4, "text": " then we literally do a feed forward pass"}, {"start": 289.4, "end": 291.04, "text": " through the encoder stacks."}, {"start": 291.04, "end": 294.0, "text": " You can see here just blocks of simple transformer layers"}, {"start": 294.0, "end": 296.56, "text": " that consist of self-attention and MLP,"}, {"start": 296.56, "end": 297.88, "text": " multilayer perceptron"}, {"start": 297.88, "end": 301.24, "text": " and outcomes the final representation here."}, {"start": 301.24, "end": 304.44, "text": " So once you have the final audio representation"}, {"start": 304.44, "end": 307.14, "text": " you use that to condition the decoder,"}, {"start": 307.14, "end": 309.12, "text": " the causal 
decoder model here"}, {"start": 309.12, "end": 311.12, "text": " via cross attention mechanism."}, {"start": 311.12, "end": 314.68, "text": " And then the whole magic is in these special tokens."}, {"start": 314.68, "end": 318.36, "text": " So you can see here, we have this start of transcription"}, {"start": 318.36, "end": 320.64, "text": " token, we have then the English token"}, {"start": 320.64, "end": 322.64, "text": " and then we have the transcribe token"}, {"start": 322.64, "end": 325.36, "text": " and then the model generates like this is first"}, {"start": 325.36, "end": 330.36, "text": " the timestamp and then outcomes the actual transcription"}, {"start": 332.14, "end": 333.34000000000003, "text": " of the audio file."}, {"start": 333.34000000000003, "end": 336.72, "text": " You can see that the, here is the target."}, {"start": 336.72, "end": 338.84000000000003, "text": " So it's a supervised dataset."}, {"start": 338.84, "end": 342.96, "text": " We do have the label for the audio obviously"}, {"start": 342.96, "end": 345.15999999999997, "text": " and then we just do, we shift by one"}, {"start": 345.15999999999997, "end": 348.71999999999997, "text": " and do the classic language modeling objective here."}, {"start": 348.71999999999997, "end": 350.91999999999996, "text": " There are some details that we can kind of dig"}, {"start": 350.91999999999996, "end": 352.79999999999995, "text": " into a bit later like the coder uses"}, {"start": 352.79999999999995, "end": 354.2, "text": " learn positionally codings."}, {"start": 354.2, "end": 357.08, "text": " Here we have sinusoidal ones, but as I said"}, {"start": 357.08, "end": 360.67999999999995, "text": " architecture is not the main contribution of this paper."}, {"start": 360.67999999999995, "end": 364.44, "text": " It's the dataset and it's this multi-task setup"}, {"start": 364.44, "end": 365.55999999999995, "text": " with the tokens."}, {"start": 365.55999999999995, "end": 368.73999999999995, "text": " So ignoring this part for the time being"}, {"start": 368.74, "end": 372.12, "text": " on the left, let's focus on the right part of the pipeline."}, {"start": 372.12, "end": 374.68, "text": " So we start with the start of transcript token"}, {"start": 374.68, "end": 378.52, "text": " and then the model will autoregressively decode"}, {"start": 378.52, "end": 381.68, "text": " either this no speech token, which basically means"}, {"start": 381.68, "end": 385.32, "text": " that we are not detecting, as I said previously, I think"}, {"start": 385.32, "end": 388.08, "text": " there is no speech going on the audio."}, {"start": 388.08, "end": 390.92, "text": " The model is only detecting maybe music or something"}, {"start": 390.92, "end": 394.28000000000003, "text": " without human vocals or the model could generate"}, {"start": 394.28000000000003, "end": 397.16, "text": " language tag and these are by the way, the only things"}, {"start": 397.16, "end": 401.16, "text": " that the model can generate because they have various"}, {"start": 401.16, "end": 404.12, "text": " like masking mechanisms to make sure this is the case."}, {"start": 404.12, "end": 408.36, "text": " Okay, so let's say the language tag was decoded."}, {"start": 408.36, "end": 411.96000000000004, "text": " In that case, we basically then have two options"}, {"start": 411.96000000000004, "end": 413.68, "text": " either transcribe or translate."}, {"start": 413.68, "end": 414.68, "text": " We've seen the differences."}, {"start": 414.68, "end": 418.06, "text": " So 
main differences in the translate option"}, {"start": 418.06, "end": 420.96000000000004, "text": " whatever the source language of the audio file is"}, {"start": 420.96000000000004, "end": 422.92, "text": " we'll end up with the English translation."}, {"start": 422.92, "end": 426.90000000000003, "text": " Okay, and then we can do the no timestamps route"}, {"start": 426.9, "end": 429.23999999999995, "text": " or we can do the one with the timestamps"}, {"start": 429.23999999999995, "end": 432.59999999999997, "text": " where you additionally predict timestamp information"}, {"start": 432.59999999999997, "end": 436.15999999999997, "text": " and then you do some transcription inside of that segment"}, {"start": 436.15999999999997, "end": 438.2, "text": " and then rinse and repeat."}, {"start": 438.2, "end": 441.35999999999996, "text": " And finally you have the end of transcription token."}, {"start": 441.35999999999996, "end": 445.79999999999995, "text": " Okay, so all of these give rise to multiple tasks"}, {"start": 445.79999999999995, "end": 448.88, "text": " and we'll see how those play out a bit later"}, {"start": 448.88, "end": 449.91999999999996, "text": " in the code as well."}, {"start": 450.88, "end": 453.76, "text": " Okay, so having explained all of this,"}, {"start": 453.76, "end": 456.15999999999997, "text": " let's dig deeper into the paper."}, {"start": 456.16, "end": 458.66, "text": " Well, let's start in the beginning here."}, {"start": 458.66, "end": 461.08000000000004, "text": " So there are roughly two research directions"}, {"start": 461.08000000000004, "end": 462.44, "text": " that people have been pursuing"}, {"start": 462.44, "end": 464.22, "text": " before a whisper was published."}, {"start": 464.22, "end": 466.84000000000003, "text": " So one big research direction"}, {"start": 466.84000000000003, "end": 469.08000000000004, "text": " was the unsupervised direction."}, {"start": 469.08000000000004, "end": 471.16, "text": " So let me read this one for you."}, {"start": 471.16, "end": 472.96000000000004, "text": " So these pre-trained audio encoders"}, {"start": 472.96000000000004, "end": 475.44000000000005, "text": " learn high quality representations of speech,"}, {"start": 475.44000000000005, "end": 477.52000000000004, "text": " but because they are purely unsupervised"}, {"start": 477.52000000000004, "end": 480.84000000000003, "text": " they lack an equivalently performant decoder"}, {"start": 480.84000000000003, "end": 483.96000000000004, "text": " mapping those representations to usable outputs"}, {"start": 483.96000000000004, "end": 485.84000000000003, "text": " necessitating a fine tuning stage"}, {"start": 485.84, "end": 487.76, "text": " in order to actually perform a task"}, {"start": 487.76, "end": 489.28, "text": " such as speech recognition."}, {"start": 489.28, "end": 492.84, "text": " Okay, so that means on one part of the spectrum"}, {"start": 492.84, "end": 496.2, "text": " we have models that are trained on a huge amount of data."}, {"start": 496.2, "end": 499.32, "text": " So we have, let me denote the data set like this."}, {"start": 499.32, "end": 500.91999999999996, "text": " So we have huge amount of data."}, {"start": 500.91999999999996, "end": 502.23999999999995, "text": " I think they mentioned here,"}, {"start": 502.23999999999995, "end": 504.91999999999996, "text": " like you can see here scaled up to million hours"}, {"start": 504.91999999999996, "end": 505.76, "text": " of training data."}, {"start": 505.76, "end": 507.64, "text": " So 
that's like, and that's the worst"}, {"start": 508.59999999999997, "end": 510.47999999999996, "text": " visualization of data ever."}, {"start": 511.32, "end": 514.24, "text": " Let me kind of delete it for a second and draw it properly."}, {"start": 514.24, "end": 519.24, "text": " So as I said, we have a data set of 1 million hours"}, {"start": 521.04, "end": 523.04, "text": " of audio here."}, {"start": 524.0, "end": 525.52, "text": " This is a bit better."}, {"start": 525.52, "end": 527.44, "text": " And that's on one part of the spectrum."}, {"start": 527.44, "end": 530.28, "text": " So let me now show you the other part of the spectrum here."}, {"start": 531.84, "end": 535.16, "text": " And again, what I've said here is that they train these"}, {"start": 535.16, "end": 537.48, "text": " in a non-supervised fashion, which means you go"}, {"start": 537.48, "end": 542.04, "text": " from audio to audio representation."}, {"start": 542.04, "end": 544.02, "text": " I guess from male spectrogram to male spectrogram"}, {"start": 544.02, "end": 544.86, "text": " something like that."}, {"start": 544.86, "end": 547.3199999999999, "text": " And then the problem is how do you decode those"}, {"start": 547.3199999999999, "end": 551.0, "text": " representations to actual human language?"}, {"start": 551.0, "end": 552.36, "text": " That's the problem with these models."}, {"start": 552.36, "end": 554.3199999999999, "text": " That's why you have to do the fine tuning part."}, {"start": 554.3199999999999, "end": 555.16, "text": " Okay."}, {"start": 555.16, "end": 556.9, "text": " And then the second direction is this one."}, {"start": 556.9, "end": 559.12, "text": " So speech recognition systems that are pre-trained"}, {"start": 559.12, "end": 562.76, "text": " in a supervised fashion across many data sets domains"}, {"start": 562.76, "end": 565.68, "text": " exhibit higher robustness and generalize much more"}, {"start": 565.68, "end": 568.24, "text": " effectively to held out data sets than models trained"}, {"start": 568.24, "end": 569.5799999999999, "text": " on a single source."}, {"start": 569.5799999999999, "end": 571.36, "text": " Okay, so there's just stating that we wanna have"}, {"start": 571.36, "end": 573.42, "text": " a multilingual multitask setup,"}, {"start": 573.42, "end": 574.88, "text": " which we kind of know already,"}, {"start": 574.88, "end": 577.12, "text": " especially if you have bigger models."}, {"start": 578.12, "end": 580.0799999999999, "text": " However, there is still only a moderate amount"}, {"start": 580.0799999999999, "end": 582.52, "text": " of this data easily available."}, {"start": 582.52, "end": 586.0799999999999, "text": " These authors mix together seven pre-existing data sets"}, {"start": 586.0799999999999, "end": 589.1999999999999, "text": " totaling only 5,000 hours of supervision."}, {"start": 589.1999999999999, "end": 591.9599999999999, "text": " Okay, so that's the other part of the spectrum."}, {"start": 591.9599999999999, "end": 595.3, "text": " We have tiny, tiny, tiny models here."}, {"start": 596.4799999999999, "end": 597.76, "text": " I mean, tiny data sets."}, {"start": 597.76, "end": 600.64, "text": " So this one has like 5K hours compared"}, {"start": 600.64, "end": 602.56, "text": " that to 1 million hours there."}, {"start": 602.56, "end": 603.4, "text": " Okay."}, {"start": 603.4, "end": 605.9599999999999, "text": " And they say this, so whoops,"}, {"start": 605.9599999999999, "end": 609.24, "text": " by relaxing the requirements of gold 
standard"}, {"start": 609.24, "end": 611.9599999999999, "text": " human validated transcripts,"}, {"start": 611.9599999999999, "end": 615.66, "text": " these authors make use of sophisticated automated pipelines"}, {"start": 615.66, "end": 618.0, "text": " to scale weekly supervised speech recognition"}, {"start": 618.0, "end": 622.4, "text": " to 10,000 and 30,000 hours of moisture training data."}, {"start": 622.4, "end": 624.12, "text": " This trade off between quality and quantity"}, {"start": 624.12, "end": 625.66, "text": " is often the right call."}, {"start": 625.66, "end": 630.66, "text": " Okay, so that means those authors scaled up this"}, {"start": 630.86, "end": 632.8, "text": " by using the weak supervision,"}, {"start": 632.8, "end": 634.64, "text": " but they only went to maybe a bit"}, {"start": 634.64, "end": 637.0799999999999, "text": " like a couple of times bigger data sets."}, {"start": 637.0799999999999, "end": 640.7199999999999, "text": " So now we have like something like 10 to 30K."}, {"start": 640.7199999999999, "end": 643.1999999999999, "text": " And finally, let's get to this paper."}, {"start": 643.1999999999999, "end": 645.4399999999999, "text": " So what these authors have done is,"}, {"start": 645.4399999999999, "end": 647.16, "text": " whoops, what's going on here?"}, {"start": 647.16, "end": 651.0799999999999, "text": " Yet these new data sets are only a few times larger"}, {"start": 651.0799999999999, "end": 653.4399999999999, "text": " than the sum of existing high quality data sets"}, {"start": 653.4399999999999, "end": 656.5999999999999, "text": " and still much smaller than prior unsupervised work."}, {"start": 656.5999999999999, "end": 658.52, "text": " In this work, we closed that gap,"}, {"start": 658.52, "end": 661.4399999999999, "text": " scaling weekly supervised speech recognition,"}, {"start": 661.44, "end": 666.0400000000001, "text": " the next order of magnitude to 680,000 hours"}, {"start": 666.0400000000001, "end": 667.44, "text": " of labeled audio data,"}, {"start": 667.44, "end": 669.7600000000001, "text": " which means they literally close this gap."}, {"start": 669.7600000000001, "end": 673.0, "text": " They have almost as much data as the unsupervised approach."}, {"start": 673.0, "end": 674.84, "text": " Obviously there are the trade-offs of the quality"}, {"start": 674.84, "end": 675.96, "text": " and the quantity."}, {"start": 675.96, "end": 678.6400000000001, "text": " And so here are some of the heuristics that I've used"}, {"start": 678.6400000000001, "end": 681.6800000000001, "text": " to make sure that the quality is as high as possible,"}, {"start": 681.6800000000001, "end": 682.84, "text": " considering that they are using"}, {"start": 682.84, "end": 685.72, "text": " these automated pipelines."}, {"start": 685.72, "end": 688.08, "text": " Okay, so initial inspection showed a large amount"}, {"start": 688.08, "end": 690.6400000000001, "text": " of sub-pair transcripts in the raw data set."}, {"start": 690.64, "end": 692.64, "text": " To address this, we developed several"}, {"start": 692.64, "end": 696.3199999999999, "text": " automated filtering methods to improve transcript quality."}, {"start": 696.3199999999999, "end": 697.68, "text": " So let's see a couple of them."}, {"start": 697.68, "end": 700.56, "text": " In order to avoid learning transcriptase,"}, {"start": 700.56, "end": 702.84, "text": " we developed many heuristics to detect"}, {"start": 702.84, "end": 705.16, "text": " and remove machine-generated 
transcripts"}, {"start": 705.16, "end": 706.4399999999999, "text": " from the training data set."}, {"start": 706.4399999999999, "end": 709.4399999999999, "text": " And one of those is, if you detect all uppercase"}, {"start": 709.4399999999999, "end": 712.28, "text": " or all lowercase, then it's very unlikely"}, {"start": 712.28, "end": 715.48, "text": " that that's a human-generated transcript,"}, {"start": 715.48, "end": 717.08, "text": " and then you're just gonna remove that pair"}, {"start": 717.08, "end": 718.36, "text": " from your data set."}, {"start": 718.36, "end": 720.36, "text": " A second heuristic they've used is,"}, {"start": 720.36, "end": 722.8000000000001, "text": " ensure that the spoken language matches the language"}, {"start": 722.8000000000001, "end": 726.84, "text": " of the transcript according to some pre-trained CLD2 model,"}, {"start": 726.84, "end": 729.6, "text": " which does the language detection, basically."}, {"start": 729.6, "end": 730.84, "text": " And if the two do not match,"}, {"start": 730.84, "end": 732.96, "text": " we don't include the audio transcript pair"}, {"start": 732.96, "end": 736.2, "text": " as a speech recognition training example in the data set."}, {"start": 736.2, "end": 737.04, "text": " Et cetera, et cetera."}, {"start": 737.04, "end": 740.22, "text": " And then finally, once they had this big data set,"}, {"start": 740.22, "end": 743.0, "text": " they trained the initial versions of Whisper,"}, {"start": 743.0, "end": 745.12, "text": " and then they used the predictions of Whisper"}, {"start": 745.12, "end": 747.0, "text": " to figure out what's wrong with the data."}, {"start": 747.0, "end": 748.88, "text": " So it's kind of very iterative process."}, {"start": 748.88, "end": 752.2, "text": " So they say that here, for an additional filtering pass,"}, {"start": 752.2, "end": 754.4, "text": " after training an initial model,"}, {"start": 754.4, "end": 757.36, "text": " we aggregated information about its error rate"}, {"start": 757.36, "end": 761.4399999999999, "text": " on training data sources and performed manual inspection."}, {"start": 762.96, "end": 766.6, "text": " So I can assume that that was a fairly laborious process,"}, {"start": 766.6, "end": 768.14, "text": " and that's probably one of the reasons"}, {"start": 768.14, "end": 771.48, "text": " that they did not release the actual data set"}, {"start": 771.48, "end": 773.92, "text": " that was used to train Whisper."}, {"start": 773.92, "end": 776.32, "text": " Here's some details around the MEL Spectrogram."}, {"start": 776.32, "end": 777.16, "text": " I'm gonna skip that."}, {"start": 777.16, "end": 779.24, "text": " Maybe we can get back to that later."}, {"start": 779.24, "end": 782.12, "text": " I don't wanna dig into too many details"}, {"start": 782.12, "end": 784.3199999999999, "text": " that are not that vital here."}, {"start": 784.3199999999999, "end": 787.8, "text": " Here are some details worth mentioning, though."}, {"start": 787.8, "end": 789.76, "text": " During early development and evaluation,"}, {"start": 789.76, "end": 791.88, "text": " we observed that Whisper models had a tendency"}, {"start": 791.88, "end": 795.0799999999999, "text": " to transcribe plausible, but almost always incorrect guesses"}, {"start": 795.0799999999999, "end": 796.88, "text": " for the names of the speakers."}, {"start": 796.88, "end": 799.7199999999999, "text": " And the reason being is usually in those transcripts,"}, {"start": 799.7199999999999, "end": 803.8, "text": 
" you have something like this predicted."}, {"start": 803.8, "end": 807.3199999999999, "text": " So open bracket and then a name of a person,"}, {"start": 807.3199999999999, "end": 808.56, "text": " and then close bracket."}, {"start": 808.56, "end": 810.04, "text": " And then the model learned to,"}, {"start": 810.04, "end": 813.52, "text": " because of the way how models learn, statistical learning,"}, {"start": 813.52, "end": 816.4799999999999, "text": " to just predict these, even though you cannot infer"}, {"start": 816.4799999999999, "end": 819.3199999999999, "text": " what's the name of the person from a 30-second audio clip"}, {"start": 819.3199999999999, "end": 821.24, "text": " that they are using, in most cases,"}, {"start": 821.24, "end": 823.7199999999999, "text": " unless the name was actually referenced."}, {"start": 823.7199999999999, "end": 825.4399999999999, "text": " So they say here, to avoid this,"}, {"start": 825.4399999999999, "end": 828.12, "text": " we fine-tuned Whisper models briefly"}, {"start": 828.12, "end": 829.92, "text": " on the subset of transcripts"}, {"start": 829.92, "end": 832.12, "text": " that do not include speaker annotations,"}, {"start": 832.12, "end": 833.96, "text": " which removes this behavior."}, {"start": 833.96, "end": 837.12, "text": " Okay, so one more thing,"}, {"start": 837.12, "end": 839.36, "text": " and then we're gonna get to the robustness part."}, {"start": 839.36, "end": 841.96, "text": " They had problems with this WER metric,"}, {"start": 841.96, "end": 844.5600000000001, "text": " so that's the word error rate metric."}, {"start": 844.5600000000001, "end": 846.12, "text": " And it's a similar problem"}, {"start": 846.12, "end": 848.88, "text": " as to what people are encounter with"}, {"start": 848.88, "end": 851.2, "text": " when they deal with machine translation, for example,"}, {"start": 851.2, "end": 852.8, "text": " with the blue metric, et cetera."}, {"start": 852.8, "end": 854.48, "text": " So let me read this part."}, {"start": 854.48, "end": 859.16, "text": " So a WER, which is based on string edit distance,"}, {"start": 859.16, "end": 862.28, "text": " penalizes all differences between the model's output"}, {"start": 862.28, "end": 863.92, "text": " and the reference transcript,"}, {"start": 863.92, "end": 866.8399999999999, "text": " including innocuous differences in transcript style,"}, {"start": 866.8399999999999, "end": 867.66, "text": " which kind of sucks."}, {"start": 867.66, "end": 869.0799999999999, "text": " We wanna focus on the semantics"}, {"start": 869.0799999999999, "end": 871.24, "text": " and understand whether the,"}, {"start": 871.24, "end": 873.7199999999999, "text": " well, in the case of audio transcription,"}, {"start": 873.7199999999999, "end": 875.7199999999999, "text": " every single word matters, that's true,"}, {"start": 875.7199999999999, "end": 880.24, "text": " but the actual maybe punctuations or style"}, {"start": 880.24, "end": 882.1999999999999, "text": " that's superficial and not changing anything,"}, {"start": 882.1999999999999, "end": 884.48, "text": " you kind of want to ignore that, right?"}, {"start": 884.48, "end": 885.56, "text": " So they say here,"}, {"start": 885.56, "end": 888.0799999999999, "text": " while this poses a problem for all transcribers,"}, {"start": 888.08, "end": 891.44, "text": " it is particularly acute for zero-shot models like Whisper,"}, {"start": 891.44, "end": 893.76, "text": " which do not observe any examples"}, {"start": 893.76, "end": 
896.6, "text": " of specific dataset transcript formats."}, {"start": 896.6, "end": 898.6800000000001, "text": " So that means those other baselines,"}, {"start": 898.6800000000001, "end": 902.5, "text": " because they're actually trained on that particular dataset,"}, {"start": 902.5, "end": 905.88, "text": " and then basically they do the evaluate"}, {"start": 905.88, "end": 907.32, "text": " on the validation portion,"}, {"start": 907.32, "end": 909.36, "text": " but the distribution is very similar,"}, {"start": 909.36, "end": 913.0, "text": " they kind of can learn and overfit the particular style,"}, {"start": 913.0, "end": 915.9200000000001, "text": " and thus they have an advantage"}, {"start": 915.92, "end": 919.1999999999999, "text": " compared to zero-shot Whisper model."}, {"start": 919.1999999999999, "end": 920.04, "text": " So because of that,"}, {"start": 920.04, "end": 923.5999999999999, "text": " they developed various laborious normalizers,"}, {"start": 923.5999999999999, "end": 924.4399999999999, "text": " so they say here,"}, {"start": 924.4399999999999, "end": 926.04, "text": " our text normalizer was developed"}, {"start": 926.04, "end": 928.0799999999999, "text": " through iterative manual inspection"}, {"start": 928.0799999999999, "end": 929.74, "text": " to identify common patterns"}, {"start": 929.74, "end": 933.1999999999999, "text": " where naive WER penalized Whisper models"}, {"start": 933.1999999999999, "end": 935.36, "text": " for innocuous difference."}, {"start": 935.36, "end": 937.56, "text": " Because I will not cover this in the code bit later,"}, {"start": 937.56, "end": 941.28, "text": " so I'm gonna just quickly show you how those look like."}, {"start": 941.28, "end": 943.92, "text": " Let me just try and find those normalizers,"}, {"start": 943.92, "end": 945.5999999999999, "text": " so here they are,"}, {"start": 945.6, "end": 947.48, "text": " and if I open up English normalizers,"}, {"start": 947.48, "end": 951.52, "text": " you can see how laborious this must have been,"}, {"start": 951.52, "end": 953.5600000000001, "text": " like there is just a lot of rules"}, {"start": 953.5600000000001, "end": 956.16, "text": " that they had to do to make sure,"}, {"start": 956.16, "end": 958.52, "text": " yeah, you can see it's kind of messy."}, {"start": 958.52, "end": 961.44, "text": " So let me go back to the paper,"}, {"start": 961.44, "end": 963.0400000000001, "text": " and let's continue here."}, {"start": 963.0400000000001, "end": 967.44, "text": " So they introduced this concept of effective robustness,"}, {"start": 967.44, "end": 970.3000000000001, "text": " and what it means is it measures the difference"}, {"start": 970.3000000000001, "end": 973.86, "text": " in expected performance between a reference dataset,"}, {"start": 973.86, "end": 975.6800000000001, "text": " which is usually in distribution,"}, {"start": 975.6800000000001, "end": 978.5, "text": " and one or more out of distribution datasets."}, {"start": 978.5, "end": 980.0600000000001, "text": " So obviously, we wanna have this difference"}, {"start": 980.0600000000001, "end": 981.52, "text": " to be as small as possible,"}, {"start": 981.52, "end": 983.34, "text": " which means that you are generalizing"}, {"start": 983.34, "end": 984.88, "text": " to out of distribution datasets,"}, {"start": 984.88, "end": 986.8000000000001, "text": " which is a desirable behavior, right?"}, {"start": 986.8000000000001, "end": 989.02, "text": " We want to have general models."}, {"start": 
990.0600000000001, "end": 992.24, "text": " So a model with high effective robustness"}, {"start": 992.24, "end": 995.4, "text": " does better than expected on out of distribution datasets"}, {"start": 995.4, "end": 997.96, "text": " as a function of its performance on the reference dataset,"}, {"start": 997.96, "end": 1000.6, "text": " and approaches the ideal of equal performance"}, {"start": 1000.6, "end": 1003.24, "text": " on all datasets."}, {"start": 1003.24, "end": 1005.6800000000001, "text": " So here is the actual diagram."}, {"start": 1005.6800000000001, "end": 1009.0, "text": " So here we can see in an ideal world,"}, {"start": 1009.0, "end": 1011.0600000000001, "text": " we'd have this line here."}, {"start": 1011.0600000000001, "end": 1016.0600000000001, "text": " Basically, the metric on the LibreSpeech dev clean"}, {"start": 1016.12, "end": 1020.16, "text": " should be whatever the performance of your model there is,"}, {"start": 1020.16, "end": 1024.56, "text": " you wanna have the same error rate on the other datasets."}, {"start": 1024.56, "end": 1029.4, "text": " So here is the average WER on these other datasets."}, {"start": 1029.4, "end": 1030.76, "text": " So you can see here,"}, {"start": 1030.76, "end": 1033.1200000000001, "text": " some of these models that have much better performance"}, {"start": 1033.12, "end": 1036.6799999999998, "text": " compared to Whisper on the LibreSpeech."}, {"start": 1036.6799999999998, "end": 1039.1999999999998, "text": " So you can see this point here, for example, whoops,"}, {"start": 1039.1999999999998, "end": 1041.6, "text": " this point here, I'm terrible at drawing today."}, {"start": 1042.4599999999998, "end": 1045.12, "text": " So it has much lower error rate here"}, {"start": 1045.12, "end": 1046.84, "text": " compared to let's say this point,"}, {"start": 1046.84, "end": 1050.04, "text": " but then you can see how poor the performance actually is."}, {"start": 1050.04, "end": 1053.04, "text": " The error rate is huge for these data points"}, {"start": 1053.04, "end": 1055.3999999999999, "text": " of these LibreSpeech models."}, {"start": 1055.3999999999999, "end": 1056.8, "text": " So you can see all in all,"}, {"start": 1056.8, "end": 1061.8, "text": " when you fit like a line across these data points,"}, {"start": 1061.8, "end": 1065.12, "text": " basically you see that it's much more robust,"}, {"start": 1065.12, "end": 1066.28, "text": " Whisper is much more robust"}, {"start": 1066.28, "end": 1070.2, "text": " compared to the other supervised LibreSpeech models."}, {"start": 1070.2, "end": 1073.96, "text": " Even though it's not actually soda on the LibreSpeech,"}, {"start": 1073.96, "end": 1076.8799999999999, "text": " as you can see here, but that's, I guess, less important."}, {"start": 1076.8799999999999, "end": 1080.54, "text": " We wanna create robust general models."}, {"start": 1080.54, "end": 1084.52, "text": " Okay, so they mentioned that even the smallest"}, {"start": 1084.52, "end": 1085.96, "text": " zero-shot Whisper model,"}, {"start": 1085.96, "end": 1088.7, "text": " which only has 39 million parameters"}, {"start": 1088.7, "end": 1092.24, "text": " and a 6.7 WER on LibreSpeech test clean"}, {"start": 1092.24, "end": 1093.48, "text": " is roughly competitive"}, {"start": 1093.48, "end": 1095.6000000000001, "text": " with the best supervised LibreSpeech model"}, {"start": 1095.6000000000001, "end": 1097.66, "text": " when evaluated on other data sets."}, {"start": 1097.66, "end": 1100.3600000000001, 
"text": " So I guess that's like,"}, {"start": 1100.3600000000001, "end": 1103.3600000000001, "text": " literally this model here probably is competitive,"}, {"start": 1103.3600000000001, "end": 1107.16, "text": " as you can see here, performance wise, the Y point here."}, {"start": 1107.16, "end": 1109.5800000000002, "text": " So the average performance on the other data sets"}, {"start": 1109.5800000000002, "end": 1113.2, "text": " is close to these supervised LibreSpeech models."}, {"start": 1113.2, "end": 1114.52, "text": " They have much better performance"}, {"start": 1114.52, "end": 1117.8, "text": " on the actual LibreSpeech dev clean test."}, {"start": 1117.8, "end": 1120.0, "text": " Data set, sorry."}, {"start": 1120.0, "end": 1123.6, "text": " Okay, so here are a couple more interesting observations."}, {"start": 1123.6, "end": 1126.3999999999999, "text": " Probably not that surprising at this point of time."}, {"start": 1126.3999999999999, "end": 1127.9199999999998, "text": " And that's that with more data,"}, {"start": 1127.9199999999998, "end": 1131.06, "text": " you can see on the x-axis hours of transcribed audio,"}, {"start": 1131.06, "end": 1132.24, "text": " you can see that these languages"}, {"start": 1132.24, "end": 1135.48, "text": " that have more data available in the data set"}, {"start": 1135.48, "end": 1137.24, "text": " have much lower error rate."}, {"start": 1137.24, "end": 1138.96, "text": " And you can see it's super predictive."}, {"start": 1138.96, "end": 1143.96, "text": " Like the correlation coefficient here is 0.84."}, {"start": 1143.96, "end": 1146.8, "text": " And they mentioned somewhere here,"}, {"start": 1146.8, "end": 1149.12, "text": " a couple of these outliers, you can see here,"}, {"start": 1150.32, "end": 1153.56, "text": " ZH, which I think stands for Mandarin Chinese or something,"}, {"start": 1153.56, "end": 1156.56, "text": " Korean are kind of outliers here,"}, {"start": 1156.56, "end": 1160.24, "text": " given that even though they have this much data,"}, {"start": 1160.24, "end": 1162.1599999999999, "text": " the error rate is somewhat higher."}, {"start": 1162.1599999999999, "end": 1165.68, "text": " And they kind of answer that,"}, {"start": 1165.68, "end": 1168.04, "text": " hypothesize why that might be the case here."}, {"start": 1168.04, "end": 1170.12, "text": " So checking the regression coefficient"}, {"start": 1170.12, "end": 1172.2, "text": " for a linear fit to these log log values"}, {"start": 1172.2, "end": 1174.96, "text": " results in an estimate that the error rate"}, {"start": 1174.96, "end": 1179.3600000000001, "text": " has for every 16x increase in training data."}, {"start": 1179.3600000000001, "end": 1181.6000000000001, "text": " So that's the general trend we've seen."}, {"start": 1181.6000000000001, "end": 1184.24, "text": " And then they say that the many of the largest outliers"}, {"start": 1184.24, "end": 1186.32, "text": " in terms of worst than expected performance,"}, {"start": 1186.32, "end": 1187.48, "text": " according to this trend,"}, {"start": 1187.48, "end": 1189.56, "text": " are languages that have unique scripts"}, {"start": 1189.56, "end": 1191.1000000000001, "text": " and are more distantly related"}, {"start": 1191.1000000000001, "end": 1192.66, "text": " to the Indo-European languages,"}, {"start": 1192.66, "end": 1195.6200000000001, "text": " making up the majority of the training data set,"}, {"start": 1195.6200000000001, "end": 1198.6000000000001, "text": " such as Hebrew, Telugu, 
Chinese, and Korean."}, {"start": 1198.6000000000001, "end": 1201.52, "text": " So I did point out to these two languages here."}, {"start": 1201.52, "end": 1204.0, "text": " So they speculate that these differences could be due,"}, {"start": 1204.0, "end": 1205.48, "text": " I mean, it's not a speculation,"}, {"start": 1205.48, "end": 1207.32, "text": " it's a valid hypothesis, I guess,"}, {"start": 1207.32, "end": 1209.28, "text": " could be due to a lack of transfer,"}, {"start": 1209.28, "end": 1211.68, "text": " due to linguistic distance,"}, {"start": 1211.68, "end": 1213.52, "text": " or simply the problem with tokenizer"}, {"start": 1213.52, "end": 1215.0, "text": " being a poor match for these languages"}, {"start": 1215.0, "end": 1216.8, "text": " or variations in data quality."}, {"start": 1216.8, "end": 1219.66, "text": " So all of those factors could be confounding."}, {"start": 1221.48, "end": 1225.4, "text": " Okay, so they have additional interesting diagram here"}, {"start": 1225.4, "end": 1228.68, "text": " showing that the translation performance"}, {"start": 1228.68, "end": 1232.0, "text": " is not as predictive,"}, {"start": 1232.0, "end": 1234.42, "text": " given the hours of translated audio,"}, {"start": 1234.42, "end": 1237.96, "text": " it's not that easy to infer what will be the blue metric."}, {"start": 1237.96, "end": 1242.96, "text": " And then they hypothesize here, somewhere below here."}, {"start": 1243.32, "end": 1244.5, "text": " So they say here,"}, {"start": 1244.5, "end": 1246.76, "text": " so we suspect this is partly caused"}, {"start": 1246.76, "end": 1248.64, "text": " by the noisier training data"}, {"start": 1248.64, "end": 1251.9, "text": " due to errors in audio language identification."}, {"start": 1251.9, "end": 1256.16, "text": " So as an example, Welsh CY is an outlier"}, {"start": 1256.16, "end": 1258.4, "text": " with much worse than expected performance"}, {"start": 1258.4, "end": 1259.8, "text": " at only nine blue,"}, {"start": 1259.8, "end": 1263.8799999999999, "text": " despite supposedly having 9,000 hours of translation data."}, {"start": 1263.8799999999999, "end": 1265.08, "text": " But as it turns out,"}, {"start": 1265.08, "end": 1268.08, "text": " and let me just show you on the diagram here,"}, {"start": 1268.08, "end": 1270.04, "text": " where the CY lies,"}, {"start": 1270.04, "end": 1272.8799999999999, "text": " so you can see here, it literally has 9,000,"}, {"start": 1272.8799999999999, "end": 1275.24, "text": " but it has around nine blue,"}, {"start": 1275.24, "end": 1279.6, "text": " which is like super low compared to what this trend here"}, {"start": 1279.6, "end": 1280.8999999999999, "text": " would suggest."}, {"start": 1280.8999999999999, "end": 1282.3999999999999, "text": " But then they say here,"}, {"start": 1282.3999999999999, "end": 1285.6399999999999, "text": " inspection shows the majority of supposedly Welsh"}, {"start": 1285.6399999999999, "end": 1288.2, "text": " translation data is actually English audio"}, {"start": 1288.2, "end": 1290.0, "text": " with English captions,"}, {"start": 1290.0, "end": 1293.28, "text": " where the English audio was misclassified as Welsh"}, {"start": 1293.28, "end": 1295.56, "text": " by the language identification system,"}, {"start": 1295.56, "end": 1299.0800000000002, "text": " resulting in it being included as translation training data"}, {"start": 1299.0800000000002, "end": 1300.28, "text": " rather than transcription data"}, {"start": 1300.28, "end": 1302.04, 
"text": " according to our data set creation rules."}, {"start": 1302.04, "end": 1302.88, "text": " So that's a problem"}, {"start": 1302.88, "end": 1305.56, "text": " because that means you were using the Welsh token,"}, {"start": 1305.56, "end": 1308.56, "text": " but actually you had English audio and English transcription,"}, {"start": 1308.56, "end": 1309.76, "text": " which messes up the results."}, {"start": 1309.76, "end": 1313.0, "text": " So I guess that's suggestive"}, {"start": 1313.0, "end": 1316.8400000000001, "text": " of not taking this plot here too seriously"}, {"start": 1316.84, "end": 1319.12, "text": " and just like,"}, {"start": 1319.12, "end": 1321.84, "text": " I guess the general point of the paper is,"}, {"start": 1321.84, "end": 1325.08, "text": " given enough data that's high quality,"}, {"start": 1325.08, "end": 1327.08, "text": " we can achieve human performance"}, {"start": 1327.08, "end": 1328.8799999999999, "text": " on speech recognition pretty much."}, {"start": 1330.3799999999999, "end": 1331.9199999999998, "text": " There is some situation phenomenon,"}, {"start": 1331.9199999999998, "end": 1333.9599999999998, "text": " but I'm gonna show you that in a couple of minutes."}, {"start": 1333.9599999999998, "end": 1335.52, "text": " Okay, so let's continue here."}, {"start": 1335.52, "end": 1337.54, "text": " Let's see a couple more things here."}, {"start": 1339.02, "end": 1340.72, "text": " Things worth mentioning is this one."}, {"start": 1340.72, "end": 1342.9599999999998, "text": " So they do the following thing."}, {"start": 1342.9599999999998, "end": 1346.24, "text": " They add some noise on top of the input audio signal,"}, {"start": 1346.24, "end": 1347.84, "text": " such as for example, in the left diagram,"}, {"start": 1347.84, "end": 1351.8, "text": " we see white noise and then on the right, the pub noise."}, {"start": 1351.8, "end": 1353.08, "text": " So that's kind of background noise"}, {"start": 1353.08, "end": 1355.04, "text": " from a restaurant or a pub or something."}, {"start": 1355.04, "end": 1359.08, "text": " And you can see that the whisper, which is this red star,"}, {"start": 1361.32, "end": 1363.72, "text": " you can see it definitely does not have the lowest error rate"}, {"start": 1363.72, "end": 1365.68, "text": " on the LibreSpeech test screen,"}, {"start": 1365.68, "end": 1368.96, "text": " but then as the noise levels keep increasing,"}, {"start": 1368.96, "end": 1372.64, "text": " it all of a sudden becomes one of the best models."}, {"start": 1372.64, "end": 1377.64, "text": " And especially here on pub noise, you can see that"}, {"start": 1377.64, "end": 1379.5600000000002, "text": " the model outperforms all of the other ones."}, {"start": 1379.5600000000002, "end": 1381.3200000000002, "text": " So it has lower error rates."}, {"start": 1381.3200000000002, "end": 1383.3200000000002, "text": " So that means it's more robust to noise,"}, {"start": 1383.3200000000002, "end": 1385.5200000000002, "text": " and especially the natural noise"}, {"start": 1385.5200000000002, "end": 1388.16, "text": " compared to this synthetic white noise."}, {"start": 1390.4, "end": 1392.3600000000001, "text": " Okay, let's continue here."}, {"start": 1393.3200000000002, "end": 1398.0, "text": " They also compare whisper to other baselines"}, {"start": 1398.0, "end": 1399.72, "text": " in this long form transcription,"}, {"start": 1399.72, "end": 1403.92, "text": " where because the model can only see 30 second audio inputs,"}, {"start": 
1403.92, "end": 1406.52, "text": " they have to do sliding window approach."}, {"start": 1406.52, "end": 1410.08, "text": " And they show here that they are on pair"}, {"start": 1410.08, "end": 1412.4, "text": " and even better compared to many of these"}, {"start": 1412.4, "end": 1413.68, "text": " even commercial services."}, {"start": 1413.68, "end": 1418.68, "text": " You can see the blue candle bars are for most of these"}, {"start": 1419.1200000000001, "end": 1424.1200000000001, "text": " long form transcription datasets lower than the baselines."}, {"start": 1424.24, "end": 1426.2, "text": " Given there are some exceptions, obviously,"}, {"start": 1426.2, "end": 1428.64, "text": " like here you can see that this company B"}, {"start": 1428.64, "end": 1431.6000000000001, "text": " is outperforming them, but yeah, that's kind of impressive."}, {"start": 1433.5600000000002, "end": 1434.5200000000002, "text": " This is also cool."}, {"start": 1434.5200000000002, "end": 1439.4, "text": " They show how they compare to human transcription."}, {"start": 1439.4, "end": 1441.68, "text": " So you can see how whisper is very close to other,"}, {"start": 1441.68, "end": 1443.68, "text": " even human transcription services."}, {"start": 1443.68, "end": 1448.68, "text": " So here they took actual, well, humans to transcribe some,"}, {"start": 1450.6000000000001, "end": 1453.2800000000002, "text": " I think like they had 25 recordings of text or something,"}, {"start": 1453.2800000000002, "end": 1454.3600000000001, "text": " and then they compare them,"}, {"start": 1454.3600000000001, "end": 1456.0400000000002, "text": " and you can see it's on pair pretty much."}, {"start": 1456.04, "end": 1459.56, "text": " But then there is this computer assisted human transcription"}, {"start": 1459.56, "end": 1462.1599999999999, "text": " which is the best out of all of these approaches."}, {"start": 1462.1599999999999, "end": 1465.08, "text": " So that again means that this combination of humans"}, {"start": 1465.08, "end": 1466.8799999999999, "text": " and machines is still the best way"}, {"start": 1466.8799999999999, "end": 1469.52, "text": " to go about many, many problems."}, {"start": 1469.52, "end": 1471.6399999999999, "text": " Let's see a couple of more results here."}, {"start": 1472.92, "end": 1474.72, "text": " Obviously model parameter,"}, {"start": 1474.72, "end": 1476.36, "text": " this is again not that surprising,"}, {"start": 1476.36, "end": 1477.44, "text": " although it's very noisy."}, {"start": 1477.44, "end": 1479.92, "text": " Let me show you what's going on here."}, {"start": 1479.92, "end": 1481.6399999999999, "text": " So we have English speech recognition"}, {"start": 1481.6399999999999, "end": 1483.44, "text": " on this leftmost plot."}, {"start": 1483.44, "end": 1486.16, "text": " You can see that the model parameters as it's scrolling,"}, {"start": 1486.16, "end": 1489.04, "text": " we can see that the error rate is obviously falling down,"}, {"start": 1489.04, "end": 1491.1200000000001, "text": " but there is some saturation going on."}, {"start": 1491.1200000000001, "end": 1492.48, "text": " And there is also a lot of variance."}, {"start": 1492.48, "end": 1495.8400000000001, "text": " If you take a look at these lighter lines,"}, {"start": 1495.8400000000001, "end": 1498.8400000000001, "text": " the dark blue one is just the average of those."}, {"start": 1498.8400000000001, "end": 1500.96, "text": " So that kind of means the performance varies a lot"}, {"start": 1500.96, 
"end": 1503.3600000000001, "text": " across these 12 data sets."}, {"start": 1503.3600000000001, "end": 1504.8, "text": " And we can also see saturation."}, {"start": 1504.8, "end": 1509.8, "text": " I guess that's the main takeaway here."}, {"start": 1509.8, "end": 1513.44, "text": " We see saturation on English in particular."}, {"start": 1513.44, "end": 1515.6, "text": " And then on multilingual speech recognition,"}, {"start": 1515.6, "end": 1518.8799999999999, "text": " we can also see like a trend, downward trend,"}, {"start": 1518.8799999999999, "end": 1520.32, "text": " general downward trend."}, {"start": 1520.32, "end": 1521.9199999999998, "text": " And again, a lot of variance"}, {"start": 1521.9199999999998, "end": 1524.6399999999999, "text": " if you take a look at some of these light blue curves."}, {"start": 1524.6399999999999, "end": 1526.8799999999999, "text": " Considering that the models are approaching,"}, {"start": 1526.8799999999999, "end": 1529.24, "text": " that we are doing a supervised learning approach here"}, {"start": 1529.24, "end": 1532.1599999999999, "text": " and that the models are, so where was the diagram here?"}, {"start": 1532.1599999999999, "end": 1537.08, "text": " So that the models are on pair with this human services,"}, {"start": 1537.08, "end": 1540.1599999999999, "text": " with human transcription that might explain it."}, {"start": 1540.1599999999999, "end": 1542.04, "text": " And the final results I want to show you"}, {"start": 1542.04, "end": 1544.32, "text": " are these ones here."}, {"start": 1544.32, "end": 1548.04, "text": " Basically, you can see two lines here, two fits,"}, {"start": 1548.04, "end": 1551.6, "text": " and one is the English only training data set."}, {"start": 1551.6, "end": 1553.9199999999998, "text": " And the second one is multilingual and multitask."}, {"start": 1553.9199999999998, "end": 1557.1599999999999, "text": " And if you're in this regime where the flops are lower,"}, {"start": 1557.1599999999999, "end": 1559.24, "text": " that means you're training smaller models."}, {"start": 1559.24, "end": 1563.8799999999999, "text": " And you can see that with smaller capacity,"}, {"start": 1563.8799999999999, "end": 1566.6399999999999, "text": " you're suffering the, so you wanna have,"}, {"start": 1566.64, "end": 1568.3600000000001, "text": " this is the error rates, you wanna be lower."}, {"start": 1568.3600000000001, "end": 1570.88, "text": " So you can see the English only version is better."}, {"start": 1570.88, "end": 1574.2800000000002, "text": " But the multilingual multitask models scale better."}, {"start": 1574.2800000000002, "end": 1577.5600000000002, "text": " So that means when you go into the bigger model,"}, {"start": 1577.5600000000002, "end": 1580.3200000000002, "text": " like bigger scales, you can see that the error rate"}, {"start": 1580.3200000000002, "end": 1583.3600000000001, "text": " is actually lower for the multilingual multitask,"}, {"start": 1583.3600000000001, "end": 1587.6000000000001, "text": " which is not that surprising, I guess, and encouraging,"}, {"start": 1587.6000000000001, "end": 1589.72, "text": " given that we can just scale it even more"}, {"start": 1589.72, "end": 1591.0, "text": " and have higher quality data"}, {"start": 1591.0, "end": 1593.5600000000002, "text": " and we can achieve even better results."}, {"start": 1593.5600000000002, "end": 1595.48, "text": " These are some candle bars about the talk,"}, {"start": 1595.48, "end": 1597.76, "text": " the actual 
normalizer, I'm gonna skip that."}, {"start": 1597.76, "end": 1599.84, "text": " And finally, it's important to realize"}, {"start": 1599.84, "end": 1601.56, "text": " that there are still a lot of heuristics"}, {"start": 1601.56, "end": 1605.2, "text": " that they had to use to make this decoding reliable."}, {"start": 1605.2, "end": 1607.0, "text": " So we have developed a set of heuristics"}, {"start": 1607.0, "end": 1608.64, "text": " that help avoid failure cases"}, {"start": 1608.64, "end": 1611.4, "text": " of long form transcription in particular."}, {"start": 1611.4, "end": 1614.48, "text": " So they say here, first we use beam search with five beams"}, {"start": 1614.48, "end": 1617.6, "text": " using the log probabilities as the score function"}, {"start": 1617.6, "end": 1619.16, "text": " to reduce repetition looping,"}, {"start": 1619.16, "end": 1621.88, "text": " which happens more frequently in greedy decoding."}, {"start": 1621.88, "end": 1623.4, "text": " And that's a common problem you have"}, {"start": 1623.4, "end": 1625.44, "text": " with greedy decoding in general,"}, {"start": 1625.44, "end": 1627.24, "text": " like you have the repetition problem."}, {"start": 1627.24, "end": 1630.0, "text": " And then they say we start with the temperature zero,"}, {"start": 1630.0, "end": 1633.44, "text": " I always selecting the tokens with the highest probability,"}, {"start": 1633.44, "end": 1634.96, "text": " which is basically greedy,"}, {"start": 1634.96, "end": 1639.96, "text": " and then increase the temperature by zero two up to 1.0,"}, {"start": 1640.0800000000002, "end": 1641.88, "text": " when either the average log probability"}, {"start": 1641.88, "end": 1644.8000000000002, "text": " over the generated tokens is lower than minus one,"}, {"start": 1644.8000000000002, "end": 1648.52, "text": " or the generated text has a GZIP compression rate"}, {"start": 1648.52, "end": 1651.0800000000002, "text": " higher than 2.4, which would probably mean"}, {"start": 1651.0800000000002, "end": 1652.72, "text": " that if it's higher than that,"}, {"start": 1652.72, "end": 1654.76, "text": " that means that the text is too random"}, {"start": 1654.76, "end": 1656.08, "text": " and too hard to compress,"}, {"start": 1656.08, "end": 1657.8, "text": " which means it's probably gibberish."}, {"start": 1657.8, "end": 1662.04, "text": " I guess that's the rough reasoning behind the heuristic."}, {"start": 1662.04, "end": 1663.44, "text": " And there is a lot more details here,"}, {"start": 1663.44, "end": 1665.76, "text": " I'm just gonna skip that for now."}, {"start": 1665.76, "end": 1668.08, "text": " And you can see here, like in a tabular form,"}, {"start": 1668.08, "end": 1669.92, "text": " the same thing, adding beam search,"}, {"start": 1669.92, "end": 1670.96, "text": " adding temperature fallback,"}, {"start": 1670.96, "end": 1672.3600000000001, "text": " adding previous text conditioning,"}, {"start": 1672.3600000000001, "end": 1674.16, "text": " adding this voice activity detection,"}, {"start": 1674.16, "end": 1675.8, "text": " initial timestamp constraints,"}, {"start": 1675.8, "end": 1678.64, "text": " all of those help improve the average performance"}, {"start": 1678.64, "end": 1680.88, "text": " across different transcription,"}, {"start": 1680.88, "end": 1682.72, "text": " long form transcription data sets here."}, {"start": 1682.72, "end": 1686.6000000000001, "text": " So I guess the main takeaway from all of this is,"}, {"start": 1686.6000000000001, "end": 
1690.3200000000002, "text": " if you take a look at the diagrams here,"}, {"start": 1690.3200000000002, "end": 1692.48, "text": " this plot is kind of suggestive that,"}, {"start": 1692.48, "end": 1697.48, "text": " given higher quality and abundance of audio files,"}, {"start": 1698.2, "end": 1702.88, "text": " of audio data for a particular language of interest,"}, {"start": 1702.88, "end": 1705.1200000000001, "text": " and even off the shelf model,"}, {"start": 1705.1200000000001, "end": 1706.92, "text": " you can achieve remarkable performance."}, {"start": 1706.92, "end": 1711.3600000000001, "text": " And we also wanna do the multi-lingual multitask setup."}, {"start": 1711.3600000000001, "end": 1714.52, "text": " Okay, guys, having seen the main ideas from the paper,"}, {"start": 1714.52, "end": 1716.0800000000002, "text": " let's now switch to the code,"}, {"start": 1716.0800000000002, "end": 1717.24, "text": " and let me walk you through"}, {"start": 1717.24, "end": 1719.2, "text": " and try and show you the correspondence"}, {"start": 1719.2, "end": 1721.04, "text": " between the code and the diagram."}, {"start": 1721.04, "end": 1723.6000000000001, "text": " You see here, I'm gonna see in a bit more detail"}, {"start": 1723.6000000000001, "end": 1727.3600000000001, "text": " how this special token pipeline looks like."}, {"start": 1727.3600000000001, "end": 1731.2, "text": " So I already went ahead and cloned the repo,"}, {"start": 1731.2, "end": 1732.6000000000001, "text": " created a condo environment,"}, {"start": 1732.6000000000001, "end": 1734.68, "text": " so all of that should be fairly trivial."}, {"start": 1734.68, "end": 1738.96, "text": " Everything is written down in the README quite nicely."}, {"start": 1738.96, "end": 1743.1200000000001, "text": " So yeah, let me just show you the types of models they have."}, {"start": 1743.1200000000001, "end": 1744.96, "text": " So they support a couple of models."}, {"start": 1744.96, "end": 1747.88, "text": " You can see here, tiny, base, small, medium, and large."}, {"start": 1747.88, "end": 1749.64, "text": " You can also see the.end versions,"}, {"start": 1749.64, "end": 1751.0, "text": " which are English only,"}, {"start": 1751.0, "end": 1753.76, "text": " and these ones with the other suffix"}, {"start": 1753.76, "end": 1758.3600000000001, "text": " are basically multi-lingual multitask models."}, {"start": 1758.3600000000001, "end": 1763.3600000000001, "text": " Okay, so you can specify multiple audio files here."}, {"start": 1763.36, "end": 1766.0, "text": " We're using small model by default."}, {"start": 1766.9199999999998, "end": 1768.28, "text": " Default task is transcribe."}, {"start": 1768.28, "end": 1770.8799999999999, "text": " You can also choose translation."}, {"start": 1770.8799999999999, "end": 1772.12, "text": " Language is set to none,"}, {"start": 1772.12, "end": 1775.56, "text": " which means we'll infer automatically what the language is,"}, {"start": 1775.56, "end": 1777.0, "text": " so that's gonna be interesting."}, {"start": 1777.0, "end": 1779.6399999999999, "text": " And then bunch of parameters around the coding"}, {"start": 1779.6399999999999, "end": 1782.6399999999999, "text": " and these thresholds that help robustify"}, {"start": 1782.6399999999999, "end": 1783.84, "text": " the decoding pipeline."}, {"start": 1783.84, "end": 1785.3999999999999, "text": " Now they just pop some arguments."}, {"start": 1785.3999999999999, "end": 1786.84, "text": " I'm gonna skip all of these parts"}, 
{"start": 1786.84, "end": 1789.4399999999998, "text": " that are not relevant and crucial"}, {"start": 1789.4399999999998, "end": 1791.3999999999999, "text": " to the understanding of the model."}, {"start": 1791.4, "end": 1794.1200000000001, "text": " So this is like temperature goes from zero to one"}, {"start": 1794.1200000000001, "end": 1795.5600000000002, "text": " with increments of 0.2."}, {"start": 1795.5600000000002, "end": 1797.44, "text": " So that's something they said in the model"}, {"start": 1797.44, "end": 1799.92, "text": " they used to robustify the decoding,"}, {"start": 1799.92, "end": 1802.8400000000001, "text": " if that's even a word, and load model."}, {"start": 1802.8400000000001, "end": 1805.5600000000002, "text": " So let's kinda go here"}, {"start": 1805.5600000000002, "end": 1809.0800000000002, "text": " and let me jump all the way to here."}, {"start": 1809.0800000000002, "end": 1812.6000000000001, "text": " So I'm gonna disable breakpoints, put a breakpoint here,"}, {"start": 1812.6000000000001, "end": 1814.2, "text": " and let's just go to whisper."}, {"start": 1815.24, "end": 1817.92, "text": " Okay, so basically in the background,"}, {"start": 1817.92, "end": 1819.8000000000002, "text": " we download the model."}, {"start": 1819.8, "end": 1821.8799999999999, "text": " Actually, I already downloaded beforehand,"}, {"start": 1821.8799999999999, "end": 1824.56, "text": " so it just kinda loads the weights."}, {"start": 1824.56, "end": 1826.9199999999998, "text": " So here is how the model looks like."}, {"start": 1826.9199999999998, "end": 1831.6399999999999, "text": " Let me maybe now enable the breakpoints and enter here."}, {"start": 1831.6399999999999, "end": 1834.6399999999999, "text": " So we have the autoencoder and we have the text decoder."}, {"start": 1834.6399999999999, "end": 1837.44, "text": " So those are the two stacks we saw previously"}, {"start": 1837.44, "end": 1838.68, "text": " in the diagram."}, {"start": 1838.68, "end": 1843.68, "text": " So let me now enter the, let's enter the encoder"}, {"start": 1844.0, "end": 1845.1599999999999, "text": " and let's see how it looks like."}, {"start": 1845.1599999999999, "end": 1848.9199999999998, "text": " So we can see here, we have two comp1d layers."}, {"start": 1848.92, "end": 1851.3600000000001, "text": " We have the positional embeddings which are not learned,"}, {"start": 1851.3600000000001, "end": 1852.64, "text": " so they're sinusoids."}, {"start": 1852.64, "end": 1855.28, "text": " And let me just put the code and the diagram side by side,"}, {"start": 1855.28, "end": 1856.2, "text": " it's gonna be easier."}, {"start": 1856.2, "end": 1857.52, "text": " Okay, so here we are."}, {"start": 1857.52, "end": 1861.04, "text": " I can see the diagrams on the side here."}, {"start": 1861.04, "end": 1864.8000000000002, "text": " So again, we have the two comp1d layer,"}, {"start": 1864.8000000000002, "end": 1867.96, "text": " so that's this part here, the stem of the encoder."}, {"start": 1867.96, "end": 1871.5600000000002, "text": " Then we have the non-learnable positional embeddings."}, {"start": 1871.5600000000002, "end": 1873.3600000000001, "text": " You can see those here."}, {"start": 1873.3600000000001, "end": 1875.16, "text": " And finally, we have bunch of blocks,"}, {"start": 1875.16, "end": 1877.4, "text": " which are just residual attention blocks."}, {"start": 1877.4, "end": 1879.0400000000002, "text": " So that's nothing new there."}, {"start": 1879.0400000000002, "end": 1880.64, 
"text": " And finally, we end up with a layer norm."}, {"start": 1880.64, "end": 1883.0400000000002, "text": " And later we'll see in the actual forward pass"}, {"start": 1883.0400000000002, "end": 1887.6000000000001, "text": " how we pass the male spectrogram through the comp1d"}, {"start": 1887.6000000000001, "end": 1890.48, "text": " and then we have the JALU, blah, blah, blah, the stem,"}, {"start": 1890.48, "end": 1892.24, "text": " and then the encoder part."}, {"start": 1892.24, "end": 1894.24, "text": " So everything is fairly trivial here."}, {"start": 1894.24, "end": 1899.24, "text": " Okay, so let's exit the construction of the audioencoder."}, {"start": 1900.1200000000001, "end": 1905.1200000000001, "text": " Let me just make sure that I have everything enabled."}, {"start": 1905.12, "end": 1908.4399999999998, "text": " Let's hit F5 and enter the decoder."}, {"start": 1908.4399999999998, "end": 1912.9199999999998, "text": " And now let's just go back to this side by side view."}, {"start": 1912.9199999999998, "end": 1914.6399999999999, "text": " So here is the text decoder."}, {"start": 1914.6399999999999, "end": 1916.7199999999998, "text": " You can see a couple of things."}, {"start": 1916.7199999999998, "end": 1919.56, "text": " Of course, embedding table here."}, {"start": 1919.56, "end": 1922.6, "text": " And because we're dealing with obviously a decoder model"}, {"start": 1922.6, "end": 1925.76, "text": " that has a vocabulary, we have positional embeddings,"}, {"start": 1925.76, "end": 1927.84, "text": " which are, as you can see here, these ones are learned."}, {"start": 1927.84, "end": 1929.84, "text": " So it says here, learn positional embeddings."}, {"start": 1929.84, "end": 1931.1999999999998, "text": " And that's why we have parameter."}, {"start": 1931.1999999999998, "end": 1934.0, "text": " And we just initialize, as you can see here."}, {"start": 1934.0, "end": 1938.6, "text": " So 448 is the context length that they are using"}, {"start": 1938.6, "end": 1941.36, "text": " and the state is around, yeah, 768."}, {"start": 1941.36, "end": 1943.8, "text": " Okay, and then we have a bunch of blocks."}, {"start": 1943.8, "end": 1946.32, "text": " These ones are cross-attention true."}, {"start": 1946.32, "end": 1948.14, "text": " So that means they're additionally gonna have"}, {"start": 1948.14, "end": 1952.08, "text": " this third component in the transformer layer,"}, {"start": 1952.08, "end": 1955.6, "text": " the so-called cross-attention block."}, {"start": 1955.6, "end": 1956.44, "text": " But yeah, that's it."}, {"start": 1956.44, "end": 1961.08, "text": " Fairly similar to what we saw in the encoder part."}, {"start": 1961.08, "end": 1964.8799999999999, "text": " Okay, and we have a mask, which is going to be"}, {"start": 1964.8799999999999, "end": 1966.84, "text": " basically a causal mask."}, {"start": 1966.84, "end": 1968.24, "text": " This is gonna be a causal decoder."}, {"start": 1968.24, "end": 1973.24, "text": " So if I were to print this, you can see minus infinities"}, {"start": 1973.58, "end": 1977.62, "text": " on the upper triangle, which means we're gonna mask out"}, {"start": 1977.62, "end": 1980.1999999999998, "text": " the future tokens and this particular token"}, {"start": 1980.1999999999998, "end": 1982.28, "text": " can only attend itself and the previous token"}, {"start": 1982.28, "end": 1984.04, "text": " and the previous token, et cetera."}, {"start": 1984.04, "end": 1986.8, "text": " Okay, hopefully nothing new there,"}, {"start": 
1986.8, "end": 1988.8, "text": " just like standard attention stuff,"}, {"start": 1988.8, "end": 1991.24, "text": " transformer stuff, sorry."}, {"start": 1991.24, "end": 1992.8799999999999, "text": " And now let's go back."}, {"start": 1992.8799999999999, "end": 1996.6399999999999, "text": " So let me now kinda expand this again."}, {"start": 1996.6399999999999, "end": 1998.48, "text": " So we load the dictionaries."}, {"start": 1998.48, "end": 2001.76, "text": " We just push the model to the device."}, {"start": 2001.76, "end": 2004.76, "text": " Let's exit here and they have a small,"}, {"start": 2004.76, "end": 2008.1399999999999, "text": " well, it's not a bug, but they do the two device two times,"}, {"start": 2008.1399999999999, "end": 2010.32, "text": " which is kinda unnecessary."}, {"start": 2010.32, "end": 2012.0, "text": " But yeah, okay."}, {"start": 2012.0, "end": 2014.56, "text": " So here we are, we popped the audio path."}, {"start": 2014.56, "end": 2017.56, "text": " So I'm gonna use the one that they provided"}, {"start": 2017.56, "end": 2019.6799999999998, "text": " in the code base."}, {"start": 2019.6799999999998, "end": 2022.0, "text": " So maybe it will be useful for me to show you"}, {"start": 2023.28, "end": 2024.52, "text": " how that sounds."}, {"start": 2024.52, "end": 2025.36, "text": " So here it is."}, {"start": 2026.32, "end": 2028.26, "text": " And so my fellow America,"}, {"start": 2029.24, "end": 2032.6399999999999, "text": " ask not what your country can do for you."}, {"start": 2032.6399999999999, "end": 2034.3999999999999, "text": " Can do for you."}, {"start": 2034.3999999999999, "end": 2036.52, "text": " Ask what your country can do for your country."}, {"start": 2036.52, "end": 2039.56, "text": " Okay, so that's the audio file we are using"}, {"start": 2039.56, "end": 2041.5, "text": " and let's now transcribe it."}, {"start": 2041.5, "end": 2044.84, "text": " So again, let me make sure we are enabling the breakpoints."}, {"start": 2044.84, "end": 2047.6399999999999, "text": " We hit F5 and here we are."}, {"start": 2047.6399999999999, "end": 2052.04, "text": " So let's see what's interesting going to happen here."}, {"start": 2052.04, "end": 2054.5, "text": " So the first interesting part is the log mal spectrogram."}, {"start": 2054.5, "end": 2056.5, "text": " So let's see how we convert an audio file,"}, {"start": 2056.5, "end": 2058.88, "text": " which is just a path into a mal spectrogram."}, {"start": 2058.88, "end": 2060.16, "text": " So let's enter here."}, {"start": 2060.16, "end": 2062.92, "text": " So the first part is loading audio files."}, {"start": 2062.92, "end": 2064.92, "text": " So we just use FFmpeg for that."}, {"start": 2064.92, "end": 2066.84, "text": " And then you can see blah, blah, blah,"}, {"start": 2066.84, "end": 2070.96, "text": " some from buffer because these are bytes, integer 16."}, {"start": 2070.96, "end": 2073.24, "text": " And that's we divide by 32,000"}, {"start": 2073.24, "end": 2075.9199999999996, "text": " because precisely because we're using int 16."}, {"start": 2075.9199999999996, "end": 2079.9199999999996, "text": " So that's like, I guess two raised to the power of 15"}, {"start": 2082.08, "end": 2083.8799999999997, "text": " is exactly that number."}, {"start": 2083.8799999999997, "end": 2087.56, "text": " And that's why we are basically putting the amplitude"}, {"start": 2087.56, "end": 2091.6, "text": " in the minus one, one range, just normalization detail."}, {"start": 2091.6, "end": 
2094.8199999999997, "text": " Okay, so once we have, so we are in the log mal spectrogram."}, {"start": 2094.8199999999997, "end": 2095.9199999999996, "text": " So once we load the data,"}, {"start": 2095.9199999999996, "end": 2097.68, "text": " which is just raw data at the moment,"}, {"start": 2097.68, "end": 2100.4399999999996, "text": " so that's gonna be just a sequence of numbers,"}, {"start": 2100.44, "end": 2105.36, "text": " 176,000 floats."}, {"start": 2105.36, "end": 2107.44, "text": " Now we form the hand window,"}, {"start": 2107.44, "end": 2109.68, "text": " just some signal processing stuff."}, {"start": 2109.68, "end": 2114.68, "text": " Then we apply the short, basically a Fourier transform."}, {"start": 2114.7200000000003, "end": 2118.7200000000003, "text": " It stands for, what does it stand for?"}, {"start": 2118.7200000000003, "end": 2121.44, "text": " A short time, yeah, short time Fourier transform."}, {"start": 2121.44, "end": 2124.2000000000003, "text": " Okay, so we applied the short time Fourier transform,"}, {"start": 2124.2000000000003, "end": 2127.44, "text": " which is gonna basically convert this audio file"}, {"start": 2127.44, "end": 2131.56, "text": " into a spectrogram, which is your regular spectrogram."}, {"start": 2131.56, "end": 2136.56, "text": " So if I were to print this particular variable here,"}, {"start": 2136.7200000000003, "end": 2139.16, "text": " the spectrogram, the shape is gonna be,"}, {"start": 2139.16, "end": 2143.44, "text": " as you can see here, 201 and 1,101."}, {"start": 2143.44, "end": 2145.52, "text": " So 201 is the frequency dimension,"}, {"start": 2145.52, "end": 2146.8, "text": " this one is the time dimension."}, {"start": 2146.8, "end": 2148.92, "text": " The reason being they have some,"}, {"start": 2148.92, "end": 2150.68, "text": " they're doing sliding window approaches."}, {"start": 2150.68, "end": 2154.44, "text": " So that's why we don't have as many timestamps"}, {"start": 2154.44, "end": 2158.44, "text": " as in the original raw audio."}, {"start": 2158.44, "end": 2160.68, "text": " Okay, so then we do the magnitudes"}, {"start": 2160.68, "end": 2164.52, "text": " because we wanna have the magnitude component of the Fourier."}, {"start": 2164.52, "end": 2167.44, "text": " And then we form these filters."}, {"start": 2167.44, "end": 2169.2000000000003, "text": " As you can see here, we just,"}, {"start": 2169.2000000000003, "end": 2172.8, "text": " what they do is they use this Librosa"}, {"start": 2172.8, "end": 2175.68, "text": " to generate these MEL filters,"}, {"start": 2175.68, "end": 2177.96, "text": " and then they saved the MEL filters"}, {"start": 2177.96, "end": 2180.16, "text": " into this particular file."}, {"start": 2180.16, "end": 2184.7999999999997, "text": " So you can see here, this assets MEL filters, MPZ."}, {"start": 2184.7999999999997, "end": 2188.24, "text": " So if you go to the assets directory here,"}, {"start": 2188.24, "end": 2189.64, "text": " you can see that the MPZ,"}, {"start": 2189.64, "end": 2192.3199999999997, "text": " so they just kinda save them there"}, {"start": 2192.3199999999997, "end": 2195.3199999999997, "text": " and then they can avoid having the dependency."}, {"start": 2195.3199999999997, "end": 2197.08, "text": " Anyways, it's already too many details."}, {"start": 2197.08, "end": 2199.6, "text": " Let's get out of here, filters,"}, {"start": 2199.6, "end": 2202.3199999999997, "text": " and then we do the matrix multiplication."}, {"start": 2202.3199999999997, 
"end": 2204.6, "text": " Again, that's a detail where by,"}, {"start": 2204.6, "end": 2207.0, "text": " once you multiply using these triangular filters,"}, {"start": 2207.0, "end": 2210.36, "text": " once you multiply that with the actual magnitude"}, {"start": 2210.36, "end": 2211.92, "text": " of Fourier spectrogram,"}, {"start": 2211.92, "end": 2213.64, "text": " then you end up with a MEL spectrogram."}, {"start": 2213.64, "end": 2217.92, "text": " And now we're gonna see how this is going to have 80 bands."}, {"start": 2217.92, "end": 2220.36, "text": " So if I were to print the shape of this thing,"}, {"start": 2220.36, "end": 2223.54, "text": " it's gonna have, oops, let's just grab this variable."}, {"start": 2224.48, "end": 2227.08, "text": " It's gonna have, as you can see here,"}, {"start": 2227.08, "end": 2232.08, "text": " 80 bands and we have 1,100, again, time dimension."}, {"start": 2232.52, "end": 2233.6, "text": " And if you go to the paper,"}, {"start": 2233.6, "end": 2235.68, "text": " you'll see all of these details there."}, {"start": 2235.68, "end": 2240.04, "text": " So again, let me just quickly jump to that part"}, {"start": 2240.04, "end": 2243.3599999999997, "text": " where they mention it, somewhere here."}, {"start": 2244.2, "end": 2247.44, "text": " So all audio is resampled to 16,000 hertz"}, {"start": 2247.44, "end": 2250.12, "text": " and an 80 channel log magnitude MEL spectrogram"}, {"start": 2250.12, "end": 2253.44, "text": " representation is computed on 25 millisecond windows"}, {"start": 2253.44, "end": 2255.3999999999996, "text": " with a stride of 10 milliseconds."}, {"start": 2255.3999999999996, "end": 2258.64, "text": " And because we're using these windows of 25 milliseconds,"}, {"start": 2258.64, "end": 2261.6, "text": " that's why we end up with a much shorter representation"}, {"start": 2261.6, "end": 2263.48, "text": " in the time dimension compared to having"}, {"start": 2263.48, "end": 2266.2400000000002, "text": " a 176,000 or something."}, {"start": 2266.2400000000002, "end": 2269.6, "text": " Okay, let's go back to the code, some normalization."}, {"start": 2269.6, "end": 2271.84, "text": " I'm not gonna focus too much on this."}, {"start": 2271.84, "end": 2273.68, "text": " And finally, because the language is none,"}, {"start": 2273.68, "end": 2276.2, "text": " we're now going to predict the language."}, {"start": 2276.2, "end": 2277.6, "text": " Let's see how that's gonna work."}, {"start": 2277.6, "end": 2281.32, "text": " So detecting language using the first 30 seconds,"}, {"start": 2281.32, "end": 2284.12, "text": " blah, blah, blah, we first do the padding,"}, {"start": 2284.12, "end": 2288.44, "text": " which is just going to pad this to a 30 second audio file."}, {"start": 2288.44, "end": 2292.4, "text": " So that means it's gonna have 3,000,"}, {"start": 2292.4, "end": 2297.4, "text": " basically dimensions in this time, along the time axis."}, {"start": 2299.4, "end": 2303.2000000000003, "text": " And now let's basically do the detect language part."}, {"start": 2303.2000000000003, "end": 2304.2000000000003, "text": " So we passed the segment,"}, {"start": 2304.2000000000003, "end": 2305.96, "text": " which is just a pad MEL spectrogram,"}, {"start": 2305.96, "end": 2310.12, "text": " which means basically we padded with all zeros to the right."}, {"start": 2310.12, "end": 2311.92, "text": " And let's now do the detection."}, {"start": 2311.92, "end": 2314.42, "text": " So here is how it's gonna look like."}, {"start": 
2314.42, "end": 2316.6600000000003, "text": " We get the tokenizer first."}, {"start": 2316.6600000000003, "end": 2319.04, "text": " It might be interesting to just briefly"}, {"start": 2319.04, "end": 2320.8, "text": " jump into the tokenizer."}, {"start": 2320.8, "end": 2323.1000000000004, "text": " And let me show you how it looks like."}, {"start": 2323.1000000000004, "end": 2326.36, "text": " So we specify that it's a multilingual tokenizer,"}, {"start": 2326.36, "end": 2328.6000000000004, "text": " task is transcription, we have English,"}, {"start": 2328.6000000000004, "end": 2330.28, "text": " and now we do the build part."}, {"start": 2330.28, "end": 2331.6600000000003, "text": " Okay."}, {"start": 2331.6600000000003, "end": 2334.6200000000003, "text": " So here is how building the tokenizer looks like."}, {"start": 2334.6200000000003, "end": 2337.1000000000004, "text": " We are basically taking the pre-trained"}, {"start": 2337.1000000000004, "end": 2340.04, "text": " GPT-2 tokenizer fast from the Huggins face library,"}, {"start": 2340.04, "end": 2341.96, "text": " and we just pre-loaded."}, {"start": 2341.96, "end": 2343.2400000000002, "text": " And here are the special tokens."}, {"start": 2343.2400000000002, "end": 2346.96, "text": " So here are the tokens we saw before in the paper."}, {"start": 2346.96, "end": 2350.1200000000003, "text": " So we have the start of the transcript token, SOT."}, {"start": 2350.12, "end": 2351.6, "text": " We have, as you can see here,"}, {"start": 2351.6, "end": 2355.48, "text": " I think this language's keys is like 99 keys"}, {"start": 2355.48, "end": 2357.08, "text": " in total or something."}, {"start": 2357.08, "end": 2358.68, "text": " Let me just double check that."}, {"start": 2358.68, "end": 2361.16, "text": " So you can see here, 99 keys precisely."}, {"start": 2361.16, "end": 2363.68, "text": " So that means we'll add for each of those languages,"}, {"start": 2363.68, "end": 2365.5, "text": " we'll add this special token."}, {"start": 2365.5, "end": 2369.6, "text": " For example, this and like the smaller sign"}, {"start": 2369.6, "end": 2373.7599999999998, "text": " and then bar and then N, like EN, and then the rest."}, {"start": 2373.7599999999998, "end": 2375.88, "text": " And for each of the languages, a similar thing."}, {"start": 2375.88, "end": 2377.96, "text": " So you can see here how the keys look like here"}, {"start": 2377.96, "end": 2379.16, "text": " for the languages."}, {"start": 2379.16, "end": 2382.62, "text": " And then we have translate, transcribe, start of LM,"}, {"start": 2382.62, "end": 2385.8199999999997, "text": " start of prev, no captions and no timestamps."}, {"start": 2385.8199999999997, "end": 2388.1, "text": " So each of these have some meaning."}, {"start": 2388.1, "end": 2390.94, "text": " We're gonna see maybe some of them are subset a bit later."}, {"start": 2390.94, "end": 2393.56, "text": " So we end up adding those to the pre-trained"}, {"start": 2393.56, "end": 2397.54, "text": " Huggins face tokenizer and we return back here."}, {"start": 2397.54, "end": 2401.12, "text": " Okay, so now let's just see what's interesting."}, {"start": 2401.12, "end": 2403.52, "text": " None of this is gonna be interesting for the time being."}, {"start": 2403.52, "end": 2406.02, "text": " They just create a SOT sequence,"}, {"start": 2406.02, "end": 2407.7, "text": " which is gonna have, as you can see here,"}, {"start": 2407.7, "end": 2411.7799999999997, "text": " they start with the SOT and they added the EN 
token"}, {"start": 2411.7799999999997, "end": 2414.58, "text": " and they added the transcribe token."}, {"start": 2414.58, "end": 2418.7799999999997, "text": " And so that's basically the sequence we saw before here."}, {"start": 2418.7799999999997, "end": 2420.7999999999997, "text": " So let me just go back to the paper."}, {"start": 2420.7999999999997, "end": 2422.98, "text": " So that's your classic sequence here."}, {"start": 2422.98, "end": 2426.3599999999997, "text": " So the start of transcript, language tag and transcribe."}, {"start": 2427.5, "end": 2429.58, "text": " Let's go back to the code."}, {"start": 2429.58, "end": 2432.18, "text": " But we are not going to use that for the language detection"}, {"start": 2432.18, "end": 2433.9199999999996, "text": " that we are currently doing."}, {"start": 2433.9199999999996, "end": 2436.3799999999997, "text": " So that's why I said it's not that important at the moment."}, {"start": 2436.38, "end": 2440.1800000000003, "text": " So we just do unsqueeze on the mail, nothing fancy."}, {"start": 2440.1800000000003, "end": 2443.46, "text": " And then we pass the mail through the encoder stack."}, {"start": 2443.46, "end": 2445.3, "text": " So let's now get the features."}, {"start": 2445.3, "end": 2448.6600000000003, "text": " So you can see here that everything is the same"}, {"start": 2448.6600000000003, "end": 2451.1, "text": " as what the diagram says."}, {"start": 2451.1, "end": 2453.5, "text": " We do a forward pass through, I guess,"}, {"start": 2453.5, "end": 2455.1400000000003, "text": " 12 blocks or something."}, {"start": 2455.1400000000003, "end": 2456.6600000000003, "text": " So we can just skip that."}, {"start": 2456.6600000000003, "end": 2459.1, "text": " Let me just see how many blocks this one has."}, {"start": 2459.1, "end": 2462.1600000000003, "text": " I guess it's like 12 or something, 12, yeah."}, {"start": 2462.16, "end": 2466.6, "text": " Let's just pass through all of that."}, {"start": 2466.6, "end": 2469.3399999999997, "text": " And we ultimately end up with the following shape."}, {"start": 2470.24, "end": 2473.8799999999997, "text": " So you can see here, 1,500."}, {"start": 2473.8799999999997, "end": 2477.3599999999997, "text": " So because we have one of these has stride of two,"}, {"start": 2477.3599999999997, "end": 2478.3799999999997, "text": " stride is set to two."}, {"start": 2478.3799999999997, "end": 2481.7999999999997, "text": " And that's why we went from 3,000 to 2,500."}, {"start": 2481.7999999999997, "end": 2484.56, "text": " And the dimensionality is 768."}, {"start": 2484.56, "end": 2488.52, "text": " So that's the model dimension of the encoder."}, {"start": 2488.52, "end": 2490.72, "text": " Okay, let's exit here."}, {"start": 2490.72, "end": 2492.7599999999998, "text": " Number of audio is gonna be one because we only have"}, {"start": 2492.7599999999998, "end": 2493.9599999999996, "text": " a one audio file."}, {"start": 2493.9599999999996, "end": 2496.9599999999996, "text": " And here you see, so here we just set the SOT token"}, {"start": 2498.72, "end": 2501.9199999999996, "text": " and we create a tensor that has SOT."}, {"start": 2501.9199999999996, "end": 2504.3599999999997, "text": " And now we're gonna pass the mail,"}, {"start": 2504.3599999999997, "end": 2508.3599999999997, "text": " so the encoder features from the output of the encoder,"}, {"start": 2508.3599999999997, "end": 2509.9599999999996, "text": " and we pass the SOT."}, {"start": 2509.9599999999996, "end": 2512.56, 
"text": " And let's see what the model is gonna generate."}, {"start": 2512.56, "end": 2514.7999999999997, "text": " So let's see which language it gives"}, {"start": 2514.7999999999997, "end": 2516.3599999999997, "text": " the highest probability to."}, {"start": 2516.3599999999997, "end": 2518.3999999999996, "text": " And that's how we pick the language."}, {"start": 2518.4, "end": 2521.36, "text": " So let me just go into the code instead of explaining"}, {"start": 2521.36, "end": 2522.92, "text": " like this."}, {"start": 2522.92, "end": 2527.28, "text": " So here we are, we are in the text decoder forward pass."}, {"start": 2527.28, "end": 2529.08, "text": " I'm gonna ignore the cache parts."}, {"start": 2529.08, "end": 2532.2400000000002, "text": " Those are just implementation details of how they optimize"}, {"start": 2532.2400000000002, "end": 2536.0, "text": " this execution of the forward pass of the decoder."}, {"start": 2536.0, "end": 2537.2200000000003, "text": " And let's see what's going on."}, {"start": 2537.2200000000003, "end": 2539.1, "text": " So they do the token embedding."}, {"start": 2539.1, "end": 2541.6800000000003, "text": " So again, remember this is just the SOT token,"}, {"start": 2541.6800000000003, "end": 2542.56, "text": " as you can see here."}, {"start": 2542.56, "end": 2544.7400000000002, "text": " And then they just add the positional embedding."}, {"start": 2544.7400000000002, "end": 2547.48, "text": " So all of that is kinda standard stuff."}, {"start": 2547.48, "end": 2549.44, "text": " And then you can see here,"}, {"start": 2549.44, "end": 2550.92, "text": " they're doing the cross attention."}, {"start": 2550.92, "end": 2553.48, "text": " So XA, XA is our,"}, {"start": 2555.22, "end": 2558.36, "text": " the output features of the encoder, as you can see here."}, {"start": 2558.36, "end": 2560.36, "text": " And so that's what's going on here."}, {"start": 2560.36, "end": 2563.26, "text": " So because of that, I'm just gonna skip all of that."}, {"start": 2563.26, "end": 2565.2, "text": " And let's get here."}, {"start": 2565.2, "end": 2569.0, "text": " Now the output features are multiplied by the,"}, {"start": 2569.0, "end": 2572.76, "text": " we do matrix multiplication with the embedding table,"}, {"start": 2572.76, "end": 2577.76, "text": " and that's gonna project those from 768 dimensions"}, {"start": 2577.84, "end": 2580.0800000000004, "text": " into the vocab space, which is like,"}, {"start": 2580.0800000000004, "end": 2582.1200000000003, "text": " let's see what's the dimensionality."}, {"start": 2582.1200000000003, "end": 2587.1200000000003, "text": " So logits shape is gonna be 51,865."}, {"start": 2589.5200000000004, "end": 2591.76, "text": " Okay, so those are the logits."}, {"start": 2591.76, "end": 2594.5600000000004, "text": " And now comes the interesting part."}, {"start": 2594.5600000000004, "end": 2599.5600000000004, "text": " So we create a mask that initially just has basically ones"}, {"start": 2599.56, "end": 2603.4, "text": " for every single token in the vocab."}, {"start": 2603.4, "end": 2605.2, "text": " Okay, so that's the mask initially."}, {"start": 2605.2, "end": 2607.68, "text": " And then for the language tokens,"}, {"start": 2607.68, "end": 2609.12, "text": " let's set that mask to false,"}, {"start": 2609.12, "end": 2611.72, "text": " which means we do not want to mask the language token."}, {"start": 2611.72, "end": 2614.48, "text": " So the EN, the DE, all of the 99,"}, {"start": 2614.48, "end": 2616.32, 
"text": " I think this should be 99."}, {"start": 2616.32, "end": 2619.2799999999997, "text": " Let me just make sure that's the case."}, {"start": 2619.2799999999997, "end": 2623.4, "text": " So let's see what's the length of this list, 99,"}, {"start": 2623.4, "end": 2624.56, "text": " precisely 99."}, {"start": 2624.56, "end": 2626.7599999999998, "text": " And so that's how we modify the mask."}, {"start": 2626.76, "end": 2630.96, "text": " And now we put the logits of every single token,"}, {"start": 2630.96, "end": 2634.36, "text": " except for the language tokens to minus infinity,"}, {"start": 2634.36, "end": 2637.6000000000004, "text": " which after we apply the softmax is gonna be,"}, {"start": 2637.6000000000004, "end": 2638.6400000000003, "text": " is gonna set them to zero."}, {"start": 2638.6400000000003, "end": 2640.48, "text": " So that means they'll have zero probability"}, {"start": 2640.48, "end": 2643.36, "text": " and only the language tokens will have non-zero probability."}, {"start": 2643.36, "end": 2645.1600000000003, "text": " Okay, so here we do argmax"}, {"start": 2645.1600000000003, "end": 2648.92, "text": " to find the highest probability language token."}, {"start": 2648.92, "end": 2651.0, "text": " We end up with, I think, English or something."}, {"start": 2651.0, "end": 2653.0400000000004, "text": " So this is, I think, English."}, {"start": 2653.0400000000004, "end": 2656.2400000000002, "text": " And we also return the probabilities"}, {"start": 2656.24, "end": 2659.24, "text": " for the other language tokens."}, {"start": 2660.16, "end": 2663.0, "text": " We just zip that and we return back those results."}, {"start": 2663.0, "end": 2665.24, "text": " So we return the actual language token"}, {"start": 2665.24, "end": 2666.8799999999997, "text": " and we return the probabilities."}, {"start": 2667.7599999999998, "end": 2671.2799999999997, "text": " But I think, yeah, this one is not even used."}, {"start": 2671.2799999999997, "end": 2674.3999999999996, "text": " So we just grab the, this is kind of wasteful."}, {"start": 2674.3999999999996, "end": 2678.06, "text": " They could have just passed it directly here, but anyways."}, {"start": 2679.08, "end": 2681.2799999999997, "text": " So they end up with the language."}, {"start": 2681.2799999999997, "end": 2682.9599999999996, "text": " We just grab the key."}, {"start": 2682.9599999999996, "end": 2685.66, "text": " So you can see the probabilities are language token"}, {"start": 2685.66, "end": 2687.92, "text": " and the probability, you can see that EN,"}, {"start": 2687.92, "end": 2691.24, "text": " English has 0.95, roughly."}, {"start": 2691.24, "end": 2694.68, "text": " And that's why we're gonna pick precisely English"}, {"start": 2694.68, "end": 2695.8599999999997, "text": " as the language here."}, {"start": 2696.8199999999997, "end": 2700.7599999999998, "text": " And we output detected language is English language."}, {"start": 2700.7599999999998, "end": 2702.3799999999997, "text": " Okay, that's the first part."}, {"start": 2702.3799999999997, "end": 2705.08, "text": " So we now have the English predicted."}, {"start": 2705.08, "end": 2709.7999999999997, "text": " Now let's continue doing the actual transcription."}, {"start": 2709.7999999999997, "end": 2712.8399999999997, "text": " I'm gonna skip over all of these details."}, {"start": 2712.84, "end": 2715.88, "text": " Let me just disable the break points."}, {"start": 2715.88, "end": 2718.34, "text": " Let's get to the interesting part."}, 
{"start": 2718.34, "end": 2720.32, "text": " These are just some details where,"}, {"start": 2720.32, "end": 2723.7200000000003, "text": " so the precision is, as you can see here, 20 milliseconds."}, {"start": 2723.7200000000003, "end": 2726.4, "text": " That's why this is 0.02."}, {"start": 2726.4, "end": 2731.08, "text": " Those are, I guess, not that vital."}, {"start": 2731.08, "end": 2733.6800000000003, "text": " Okay, we just pad again the, the MEL"}, {"start": 2733.6800000000003, "end": 2735.88, "text": " so that we end up with 3000 here."}, {"start": 2735.88, "end": 2739.28, "text": " So let's just check the shape of this segment."}, {"start": 2739.28, "end": 2741.1200000000003, "text": " So you can see it's 83,000"}, {"start": 2741.12, "end": 2744.72, "text": " because we have 80 MEL bands and we have 3000"}, {"start": 2744.72, "end": 2749.08, "text": " because we have a 30 second padded audio clip."}, {"start": 2749.08, "end": 2751.2799999999997, "text": " Okay, we calculate the duration."}, {"start": 2751.2799999999997, "end": 2752.88, "text": " It's gonna be 30 seconds."}, {"start": 2752.88, "end": 2755.6, "text": " And then this is where the magic starts happening."}, {"start": 2755.6, "end": 2757.06, "text": " Decode will fall back."}, {"start": 2757.06, "end": 2758.44, "text": " Let's see how that's gonna work."}, {"start": 2758.44, "end": 2761.72, "text": " Okay, here we are."}, {"start": 2761.72, "end": 2765.56, "text": " Temperature, again, we have the list"}, {"start": 2765.56, "end": 2766.72, "text": " of various temperatures."}, {"start": 2766.72, "end": 2768.92, "text": " We grab the first one,"}, {"start": 2768.92, "end": 2770.48, "text": " which is gonna be the zero temperature,"}, {"start": 2770.48, "end": 2774.92, "text": " which means we are basically greedy decoding."}, {"start": 2774.92, "end": 2779.92, "text": " And well, actually, because we'll be actually using the beam."}, {"start": 2781.34, "end": 2783.84, "text": " So let's just continue here and see what's going on."}, {"start": 2785.12, "end": 2786.98, "text": " Decoding task."}, {"start": 2786.98, "end": 2790.48, "text": " So this is how they kind of design this."}, {"start": 2790.48, "end": 2793.54, "text": " They form a decoding task and then they call run"}, {"start": 2793.54, "end": 2796.58, "text": " on that task and they pass the MEL spectrogram."}, {"start": 2796.58, "end": 2798.04, "text": " So let's see what's going on."}, {"start": 2798.04, "end": 2800.8, "text": " I'm gonna skip some of these details"}, {"start": 2800.8, "end": 2803.68, "text": " because there's too many stuff going on."}, {"start": 2803.68, "end": 2808.68, "text": " Just wanna focus on the stuff that's gonna be useful"}, {"start": 2810.84, "end": 2812.32, "text": " for whisper understanding, of course."}, {"start": 2812.32, "end": 2817.32, "text": " So here, this one implements this penalty with the length."}, {"start": 2818.4, "end": 2820.24, "text": " So when you're doing beam decoding,"}, {"start": 2820.24, "end": 2824.36, "text": " you want to take into account how probable that beam is,"}, {"start": 2824.36, "end": 2826.84, "text": " but you also wanna penalize if it's too long."}, {"start": 2826.84, "end": 2831.84, "text": " So this is what this maximum likelihood ranker does here,"}, {"start": 2832.2000000000003, "end": 2833.8, "text": " as you can see here."}, {"start": 2833.8, "end": 2836.46, "text": " So penalty and we are appending the log prop"}, {"start": 2836.46, "end": 2837.96, "text": " divided by the 
penalty."}, {"start": 2837.96, "end": 2841.2000000000003, "text": " I'll skip this, but in case you want to learn more,"}, {"start": 2841.2000000000003, "end": 2843.6800000000003, "text": " and I think this repo is amazing when it comes"}, {"start": 2843.6800000000003, "end": 2846.4, "text": " to understanding how beam decoding works."}, {"start": 2846.4, "end": 2849.1800000000003, "text": " If you wanna, I might create a separate video"}, {"start": 2849.1800000000003, "end": 2851.4, "text": " where I just go in a lot of depth"}, {"start": 2851.4, "end": 2853.84, "text": " and explain how this works,"}, {"start": 2853.84, "end": 2856.1200000000003, "text": " but here I'm gonna skip those details."}, {"start": 2856.12, "end": 2859.16, "text": " So we form a beam decoder here,"}, {"start": 2859.16, "end": 2861.24, "text": " and then this is the interesting part."}, {"start": 2861.24, "end": 2864.14, "text": " So here are some of the heuristics that they are using."}, {"start": 2864.14, "end": 2866.6, "text": " They're suppressing certain tokens"}, {"start": 2866.6, "end": 2868.2599999999998, "text": " during the decoding process."}, {"start": 2868.2599999999998, "end": 2871.4, "text": " So one of those is gonna be suppressed blank,"}, {"start": 2871.4, "end": 2874.3199999999997, "text": " which means we're gonna suppress the blank token"}, {"start": 2874.3199999999997, "end": 2877.98, "text": " as the first token after we start the transcription."}, {"start": 2877.98, "end": 2881.4, "text": " So we'll get to that a bit later."}, {"start": 2881.4, "end": 2885.18, "text": " Then they have a specific list of tokens"}, {"start": 2885.18, "end": 2886.9199999999996, "text": " that they wanna suppress."}, {"start": 2886.9199999999996, "end": 2889.7799999999997, "text": " So because we have, let me enter this one."}, {"start": 2889.7799999999997, "end": 2891.04, "text": " So let me enter this one."}, {"start": 2892.2599999999998, "end": 2896.12, "text": " So suppressed tokens are set to minus one"}, {"start": 2896.12, "end": 2899.58, "text": " because that's our input argument to the program."}, {"start": 2899.58, "end": 2903.94, "text": " And now let's see how that's gonna be mapped"}, {"start": 2903.94, "end": 2905.8799999999997, "text": " into the actual tokens."}, {"start": 2905.8799999999997, "end": 2910.16, "text": " So here you can see we extend by these non-speech tokens"}, {"start": 2910.16, "end": 2911.64, "text": " and here they are."}, {"start": 2911.64, "end": 2914.56, "text": " So just a list of tokens that cannot appear"}, {"start": 2914.56, "end": 2916.7799999999997, "text": " in the spoken language."}, {"start": 2917.7599999999998, "end": 2920.7799999999997, "text": " And again, I'm gonna skip because there is a lot of details,"}, {"start": 2920.7799999999997, "end": 2923.0, "text": " but that's the rough point."}, {"start": 2923.0, "end": 2925.7599999999998, "text": " We wanna help the decoder by just kinda mapping,"}, {"start": 2925.7599999999998, "end": 2928.56, "text": " by masking out those particular tokens."}, {"start": 2928.56, "end": 2929.7999999999997, "text": " Okay."}, {"start": 2929.7999999999997, "end": 2931.16, "text": " And they add some more."}, {"start": 2931.16, "end": 2933.16, "text": " We don't wanna predict the start of transcription"}, {"start": 2933.16, "end": 2935.0, "text": " because we already know we are transcribing."}, {"start": 2935.0, "end": 2937.88, "text": " So that's kinda a no-brainer, but yeah."}, {"start": 2938.7599999999998, "end": 2940.56, 
"text": " And finally, the no-captions,"}, {"start": 2940.56, "end": 2941.92, "text": " which is the token that tells you"}, {"start": 2941.92, "end": 2945.2400000000002, "text": " that we don't have any human voice."}, {"start": 2945.2400000000002, "end": 2948.12, "text": " We just have like music or something going on."}, {"start": 2948.12, "end": 2948.96, "text": " Okay."}, {"start": 2950.04, "end": 2951.8, "text": " That's a second heuristic they have."}, {"start": 2951.8, "end": 2956.8, "text": " And then they also have these applied timestamp rules,"}, {"start": 2958.58, "end": 2960.84, "text": " which do what?"}, {"start": 2960.84, "end": 2962.6, "text": " Okay. I have a break point here."}, {"start": 2962.6, "end": 2965.04, "text": " We'll see what it does once we hit the apply."}, {"start": 2965.04, "end": 2966.44, "text": " So let's just continue."}, {"start": 2967.7400000000002, "end": 2969.32, "text": " Let's continue here."}, {"start": 2969.32, "end": 2971.38, "text": " Let's exit here."}, {"start": 2971.38, "end": 2973.56, "text": " And we are now in the run method."}, {"start": 2973.56, "end": 2976.12, "text": " So all of that was construction of this decoding task."}, {"start": 2976.12, "end": 2980.32, "text": " Now let's see how the actual run looks like."}, {"start": 2980.32, "end": 2982.36, "text": " So we first fetch the audio features."}, {"start": 2982.36, "end": 2983.56, "text": " And this one is gonna again,"}, {"start": 2983.56, "end": 2986.88, "text": " just pass the the mel through the encoder"}, {"start": 2986.88, "end": 2988.48, "text": " and return back the audio features."}, {"start": 2988.48, "end": 2990.88, "text": " So I'm gonna skip all of that."}, {"start": 2990.88, "end": 2992.4, "text": " So I'm just gonna disable that."}, {"start": 2992.4, "end": 2993.88, "text": " Let's fetch the audio features"}, {"start": 2993.88, "end": 2996.36, "text": " by passing the mel through the encoder stack."}, {"start": 2997.36, "end": 2999.88, "text": " And by the way, I'm running this on a CPU"}, {"start": 2999.88, "end": 3002.94, "text": " because my cond environment was not properly set up."}, {"start": 3002.94, "end": 3005.88, "text": " That's why it's a bit slower,"}, {"start": 3005.88, "end": 3008.6800000000003, "text": " but it's still working fairly fast, which is kinda cool."}, {"start": 3009.9, "end": 3013.12, "text": " Okay. 
So we set the initial tokens to these three."}, {"start": 3013.12, "end": 3015.06, "text": " So that's the SOT, that's the English,"}, {"start": 3015.06, "end": 3017.0, "text": " and that's the transcribe."}, {"start": 3017.0, "end": 3018.6, "text": " That's our initial sequence."}, {"start": 3018.6, "end": 3022.88, "text": " And then this is gonna be a no-op."}, {"start": 3022.88, "end": 3024.8, "text": " Basically we just passed the language."}, {"start": 3024.8, "end": 3026.04, "text": " We already detected language."}, {"start": 3026.04, "end": 3026.88, "text": " We know it's English."}, {"start": 3026.88, "end": 3028.1800000000003, "text": " So that's gonna be a no-op."}, {"start": 3028.18, "end": 3033.18, "text": " Now we do repeat by five because we are doing beam."}, {"start": 3034.3599999999997, "end": 3036.6, "text": " Again, I'm not gonna focus on the details of beam,"}, {"start": 3036.6, "end": 3038.8799999999997, "text": " but let's just see the shape before and after."}, {"start": 3038.8799999999997, "end": 3042.52, "text": " So the shape now is 1500 because that's the time."}, {"start": 3042.52, "end": 3046.2, "text": " And then this is the actual embedding dimension."}, {"start": 3046.2, "end": 3049.3399999999997, "text": " So let's see what's going on after this call."}, {"start": 3049.3399999999997, "end": 3050.18, "text": " We have five."}, {"start": 3050.18, "end": 3053.8399999999997, "text": " So we have five because we'll have five hypotheses"}, {"start": 3053.8399999999997, "end": 3057.24, "text": " that we are keeping track of during the beam decoding."}, {"start": 3057.24, "end": 3058.9199999999996, "text": " And we do the same for tokens."}, {"start": 3058.9199999999996, "end": 3060.7999999999997, "text": " And then now here is the main loop."}, {"start": 3060.7999999999997, "end": 3063.06, "text": " So here is where the magic happens."}, {"start": 3063.06, "end": 3066.2799999999997, "text": " Okay. Let's start here."}, {"start": 3066.2799999999997, "end": 3071.02, "text": " So we now sample for 224 samples,"}, {"start": 3071.02, "end": 3076.02, "text": " and we first pass the tokens as well as the audio features"}, {"start": 3076.2, "end": 3078.24, "text": " to grab the logits."}, {"start": 3078.24, "end": 3080.64, "text": " So it's gonna be a simple pass through the decoder."}, {"start": 3080.64, "end": 3084.66, "text": " Again, some hook stuff that's again in optimization detail."}, {"start": 3084.66, "end": 3086.08, "text": " Here's the decoder call."}, {"start": 3086.08, "end": 3088.7599999999998, "text": " So what we do is again, we pass all of the,"}, {"start": 3088.7599999999998, "end": 3091.12, "text": " we embed our three tokens."}, {"start": 3091.12, "end": 3094.4, "text": " So here, let's see what's the shape now."}, {"start": 3094.4, "end": 3096.7599999999998, "text": " So you can see here five, three, 768,"}, {"start": 3096.7599999999998, "end": 3099.22, "text": " because we have three, five because we have beam,"}, {"start": 3099.22, "end": 3101.36, "text": " and this is the dimension of the model."}, {"start": 3101.36, "end": 3103.0, "text": " Now we pass all of those."}, {"start": 3103.0, "end": 3104.7799999999997, "text": " And we also have the, again,"}, {"start": 3104.7799999999997, "end": 3108.2799999999997, "text": " we are cross attending to the output features"}, {"start": 3108.2799999999997, "end": 3110.16, "text": " of the audio encoder."}, {"start": 3110.16, "end": 3111.62, "text": " Okay. 
That's the idea."}, {"start": 3111.62, "end": 3113.2799999999997, "text": " And finally, we end up with the logits."}, {"start": 3113.2799999999997, "end": 3114.4, "text": " So we can skip all of this."}, {"start": 3114.4, "end": 3117.8, "text": " This is just like regular transformer stuff."}, {"start": 3117.8, "end": 3119.4, "text": " Whoops."}, {"start": 3119.4, "end": 3120.88, "text": " And let's exit here."}, {"start": 3122.12, "end": 3123.12, "text": " Let's exit."}, {"start": 3123.12, "end": 3125.56, "text": " And we have our logits."}, {"start": 3125.56, "end": 3127.1600000000003, "text": " Now this is an interesting point."}, {"start": 3127.1600000000003, "end": 3128.04, "text": " Let's see what happens here."}, {"start": 3128.04, "end": 3130.7200000000003, "text": " Because we are in the iteration zero,"}, {"start": 3130.7200000000003, "end": 3135.04, "text": " and because no captions is set to,"}, {"start": 3135.04, "end": 3137.46, "text": " yeah, because no captions token exists,"}, {"start": 3137.46, "end": 3139.48, "text": " we're going to enter this branch."}, {"start": 3139.48, "end": 3143.44, "text": " And what we do is we grab the logits"}, {"start": 3143.44, "end": 3145.12, "text": " for the SOT index."}, {"start": 3145.12, "end": 3148.04, "text": " So that means what's the,"}, {"start": 3148.04, "end": 3152.32, "text": " once we condition the model on the SOT token,"}, {"start": 3152.32, "end": 3155.82, "text": " what's the distribution across the next tokens"}, {"start": 3155.82, "end": 3157.32, "text": " after the SOT."}, {"start": 3157.32, "end": 3160.68, "text": " And that's in case that there is a high probability"}, {"start": 3160.68, "end": 3162.08, "text": " for the no caption probability,"}, {"start": 3162.08, "end": 3164.88, "text": " then that means we don't have any audio,"}, {"start": 3164.88, "end": 3168.36, "text": " any human vocals in the audio clip."}, {"start": 3168.36, "end": 3169.66, "text": " So again, we index here,"}, {"start": 3169.66, "end": 3172.36, "text": " we grab the SOT logits,"}, {"start": 3172.36, "end": 3174.6800000000003, "text": " and that's gonna be, let's see the shape."}, {"start": 3174.6800000000003, "end": 3175.92, "text": " So here is the shape."}, {"start": 3175.92, "end": 3177.28, "text": " So you can see here,"}, {"start": 3177.28, "end": 3178.6400000000003, "text": " because it's being, we have five,"}, {"start": 3178.6400000000003, "end": 3182.28, "text": " but we just have the logits above the SOT token."}, {"start": 3182.28, "end": 3186.32, "text": " And then additionally, we grab precisely this particular,"}, {"start": 3186.32, "end": 3187.6400000000003, "text": " we are interested in the probability"}, {"start": 3187.6400000000003, "end": 3188.96, "text": " of this particular token,"}, {"start": 3188.96, "end": 3191.28, "text": " and that's gonna be the no caption probability."}, {"start": 3191.28, "end": 3193.6200000000003, "text": " And in our particular case, it's very low,"}, {"start": 3193.6200000000003, "end": 3195.08, "text": " but if it was high enough,"}, {"start": 3195.08, "end": 3198.44, "text": " then we would basically ignore the decoder output"}, {"start": 3198.44, "end": 3199.78, "text": " and just say that, okay,"}, {"start": 3199.78, "end": 3202.08, "text": " there is no voice happening in this audio clip."}, {"start": 3202.08, "end": 3204.52, "text": " And let me remind you here,"}, {"start": 3204.52, "end": 3205.36, "text": " that's this part."}, {"start": 3205.36, "end": 3208.52, "text": " So we 
either have the language tag or the no speech tag,"}, {"start": 3208.52, "end": 3210.44, "text": " and that's the voice activity detection."}, {"start": 3210.44, "end": 3213.7599999999998, "text": " So that's how that part, that task breaks."}, {"start": 3213.7599999999998, "end": 3215.22, "text": " So now you see it in code."}, {"start": 3216.6, "end": 3218.58, "text": " Okay, so having collected that,"}, {"start": 3218.58, "end": 3219.88, "text": " let's now grab the,"}, {"start": 3219.88, "end": 3222.88, "text": " because remember we actually have the SOT,"}, {"start": 3222.88, "end": 3227.1, "text": " English and transcribe tokens."}, {"start": 3227.1, "end": 3228.96, "text": " And we're gonna do the decoding,"}, {"start": 3228.96, "end": 3231.02, "text": " even if this was higher probability,"}, {"start": 3231.02, "end": 3232.96, "text": " we're still gonna do the decoding here"}, {"start": 3232.96, "end": 3236.92, "text": " and assume as if we wanna transcribe the audio,"}, {"start": 3236.92, "end": 3238.8, "text": " but in case this was high enough,"}, {"start": 3238.8, "end": 3243.12, "text": " then we would just ditch and ignore all of that output,"}, {"start": 3243.12, "end": 3244.8, "text": " which is probably wasteful."}, {"start": 3244.8, "end": 3246.7599999999998, "text": " Maybe some, just a break statement here"}, {"start": 3246.7599999999998, "end": 3249.14, "text": " or something would be more optimal,"}, {"start": 3249.14, "end": 3251.8, "text": " but yeah, that's just an optimization detail."}, {"start": 3252.64, "end": 3255.48, "text": " So we grab the logits for the fourth token,"}, {"start": 3255.48, "end": 3257.38, "text": " we apply a bunch of filters."}, {"start": 3257.38, "end": 3259.82, "text": " So those are the filters which I mentioned before."}, {"start": 3259.82, "end": 3260.68, "text": " So let's see."}, {"start": 3260.68, "end": 3263.3199999999997, "text": " The first one is the suppressed blank."}, {"start": 3263.3199999999997, "end": 3268.3199999999997, "text": " And you can see here that we wanna mask the blank token"}, {"start": 3268.3199999999997, "end": 3269.96, "text": " and we set it to minus infinity"}, {"start": 3269.96, "end": 3272.64, "text": " because we don't want to generate,"}, {"start": 3272.64, "end": 3274.7599999999998, "text": " as the first token does not,"}, {"start": 3274.7599999999998, "end": 3277.04, "text": " like it doesn't make any sense to store the audio"}, {"start": 3277.04, "end": 3278.2799999999997, "text": " with a blank token."}, {"start": 3278.2799999999997, "end": 3280.3999999999996, "text": " That's why they can mask it out."}, {"start": 3280.3999999999996, "end": 3282.12, "text": " What's probably weird is why is it not"}, {"start": 3282.12, "end": 3283.74, "text": " just like a generic white space"}, {"start": 3283.74, "end": 3285.18, "text": " and instead it's just like a space."}, {"start": 3285.18, "end": 3288.24, "text": " I'm not sure about that point, but yeah, let's continue."}, {"start": 3288.24, "end": 3290.62, "text": " And let me just enable all of the breakpoints."}, {"start": 3290.62, "end": 3291.7599999999998, "text": " Let's enter here."}, {"start": 3291.7599999999998, "end": 3293.08, "text": " So here's the second one."}, {"start": 3293.08, "end": 3295.18, "text": " These are the special tokens."}, {"start": 3295.18, "end": 3297.92, "text": " So that's again a list of these special tokens"}, {"start": 3297.92, "end": 3299.12, "text": " that we wanna suppress."}, {"start": 3299.12, "end": 3301.44, 
"text": " We set all of them to minus infinity, the logits."}, {"start": 3301.44, "end": 3303.88, "text": " So that means their probabilities after the softmax"}, {"start": 3303.88, "end": 3305.22, "text": " will be set to zero."}, {"start": 3306.2799999999997, "end": 3307.96, "text": " And we have the final one."}, {"start": 3307.96, "end": 3308.7999999999997, "text": " Here it is."}, {"start": 3308.7999999999997, "end": 3311.3599999999997, "text": " So here is the no timestamp."}, {"start": 3311.3599999999997, "end": 3312.68, "text": " And let's see how that one works."}, {"start": 3312.68, "end": 3314.3599999999997, "text": " I'm not sure about this one."}, {"start": 3314.3599999999997, "end": 3319.3599999999997, "text": " Okay, so it seems that we can never sample"}, {"start": 3319.36, "end": 3323.4, "text": " the no timestamps because we set it to minus infinity."}, {"start": 3323.4, "end": 3326.8, "text": " And that means we kinda cannot ever go down"}, {"start": 3326.8, "end": 3329.7000000000003, "text": " this route here, right?"}, {"start": 3329.7000000000003, "end": 3331.6400000000003, "text": " So that's kinda weird."}, {"start": 3331.6400000000003, "end": 3334.76, "text": " So in which setting do they actually sample"}, {"start": 3334.76, "end": 3335.76, "text": " this particular one?"}, {"start": 3335.76, "end": 3337.1600000000003, "text": " I'm not sure about that."}, {"start": 3338.32, "end": 3341.08, "text": " Okay, let's continue here."}, {"start": 3341.08, "end": 3343.02, "text": " I think this is gonna be a no op."}, {"start": 3343.02, "end": 3345.44, "text": " It's constantly gonna, yeah, it's gonna skip all of that."}, {"start": 3345.44, "end": 3349.08, "text": " So let's just put the breakpoint here, hit that five."}, {"start": 3349.08, "end": 3350.38, "text": " And let's continue."}, {"start": 3351.96, "end": 3352.7999999999997, "text": " Let's see what else."}, {"start": 3352.7999999999997, "end": 3356.08, "text": " So we now apply the log softmax."}, {"start": 3356.08, "end": 3358.16, "text": " And now this is one of these additional details."}, {"start": 3358.16, "end": 3359.38, "text": " It's kinda important."}, {"start": 3359.38, "end": 3362.2799999999997, "text": " So if the sum of probability over timestamps"}, {"start": 3362.2799999999997, "end": 3365.9, "text": " is above any other token, then sample the timestamp."}, {"start": 3365.9, "end": 3367.36, "text": " So that's how we sample the timestamp."}, {"start": 3367.36, "end": 3368.48, "text": " So let's see this part."}, {"start": 3368.48, "end": 3369.64, "text": " That's kinda important."}, {"start": 3369.64, "end": 3372.94, "text": " So we take the log probabilities,"}, {"start": 3372.94, "end": 3376.44, "text": " which are basically after we apply log softmax,"}, {"start": 3376.44, "end": 3378.52, "text": " and before that we did the masking."}, {"start": 3378.52, "end": 3381.68, "text": " So that means now we have all of the relevant tokens"}, {"start": 3381.68, "end": 3383.52, "text": " can be sampled from."}, {"start": 3383.52, "end": 3386.92, "text": " And as you can see here, we grab only those tokens"}, {"start": 3386.92, "end": 3389.7599999999998, "text": " which correspond to timestamp."}, {"start": 3389.7599999999998, "end": 3391.92, "text": " And we do a sum across those"}, {"start": 3391.92, "end": 3395.84, "text": " just to get their cumulative probability, log probability."}, {"start": 3395.84, "end": 3398.48, "text": " And we do the same thing for all the previous tokens."}, {"start": 
3398.48, "end": 3401.56, "text": " As you can see here, we grab them,"}, {"start": 3401.56, "end": 3404.92, "text": " actually we grab the max one, not the sum."}, {"start": 3404.92, "end": 3408.4, "text": " And if the probability of this one is bigger than that one,"}, {"start": 3408.4, "end": 3413.4, "text": " then we basically mask all of the non timestamp tokens."}, {"start": 3414.84, "end": 3415.88, "text": " And then we're certain"}, {"start": 3415.88, "end": 3417.96, "text": " that we'll generate the timestamp information."}, {"start": 3417.96, "end": 3419.88, "text": " So as you can see in this particular instance,"}, {"start": 3419.88, "end": 3422.08, "text": " we are generating, because this is a first step,"}, {"start": 3422.08, "end": 3424.44, "text": " we're generating the zero zero tokens."}, {"start": 3424.44, "end": 3426.44, "text": " So let me remind you here,"}, {"start": 3426.44, "end": 3429.52, "text": " we're actually generating this begin time"}, {"start": 3430.84, "end": 3433.1600000000003, "text": " as the first token, which kinda makes sense."}, {"start": 3433.1600000000003, "end": 3435.52, "text": " Okay, so let's go back to the code."}, {"start": 3435.52, "end": 3437.56, "text": " Let's exit here."}, {"start": 3437.56, "end": 3442.16, "text": " Let me just shift F11, we're out."}, {"start": 3442.16, "end": 3443.4, "text": " Those are the logits."}, {"start": 3443.4, "end": 3447.4, "text": " And now we have this, what's this update function?"}, {"start": 3448.2, "end": 3452.6, "text": " This is just basically wrapping up the,"}, {"start": 3452.6, "end": 3456.4, "text": " okay, so this is actually the beam decoding part,"}, {"start": 3456.4, "end": 3458.2, "text": " collecting the candidates, blah, blah, blah."}, {"start": 3458.2, "end": 3460.04, "text": " As I said, I'm gonna skip all of those."}, {"start": 3460.04, "end": 3463.12, "text": " Let's just kinda assume that as a black box,"}, {"start": 3463.12, "end": 3467.88, "text": " we end up sampling, I guess, five candidates."}, {"start": 3467.88, "end": 3469.72, "text": " So let's see how that's gonna look like."}, {"start": 3469.72, "end": 3474.24, "text": " So tokens is gonna be five, four,"}, {"start": 3474.24, "end": 3476.16, "text": " and before that we had five, three."}, {"start": 3476.16, "end": 3480.2, "text": " And that's because, yeah, we now have five new candidates"}, {"start": 3480.2, "end": 3482.12, "text": " that are highly likely."}, {"start": 3482.12, "end": 3484.7999999999997, "text": " And we can just skip everything else here."}, {"start": 3484.7999999999997, "end": 3486.7999999999997, "text": " I think we're safe to skip all of this."}, {"start": 3486.7999999999997, "end": 3491.24, "text": " I'm just gonna hit F5 to the break point, exit,"}, {"start": 3491.24, "end": 3492.42, "text": " and that's it."}, {"start": 3492.42, "end": 3494.48, "text": " So now we just keep on doing the same thing."}, {"start": 3494.48, "end": 3499.0, "text": " So we just keep on, so we get the logits."}, {"start": 3499.84, "end": 3502.92, "text": " Whoops, let's just exit from the text decoder."}, {"start": 3504.42, "end": 3509.42, "text": " So we get the logits, and then now we'll be skipping this"}, {"start": 3509.6800000000003, "end": 3511.2000000000003, "text": " because we're not in the zero step."}, {"start": 3511.2000000000003, "end": 3514.36, "text": " That's the only step where we calculate the no caption props."}, {"start": 3514.36, "end": 3517.28, "text": " And then we just keep on sampling until we 
hit"}, {"start": 3517.28, "end": 3521.6800000000003, "text": " the end of transcription token, pretty much."}, {"start": 3521.68, "end": 3525.04, "text": " So let me now remove all of the break points."}, {"start": 3525.04, "end": 3527.3199999999997, "text": " Let's put a break point here."}, {"start": 3527.3199999999997, "end": 3530.3199999999997, "text": " Let's exit and let's see what's gonna happen."}, {"start": 3532.3599999999997, "end": 3533.98, "text": " Okay, so here we are."}, {"start": 3533.98, "end": 3536.2, "text": " So tokens, let's see the shape."}, {"start": 3536.2, "end": 3540.7599999999998, "text": " We have five and each of them has 33 tokens, okay?"}, {"start": 3540.7599999999998, "end": 3542.44, "text": " We return back the probabilities"}, {"start": 3542.44, "end": 3544.44, "text": " as well as the no caption probabilities."}, {"start": 3544.44, "end": 3546.12, "text": " Okay, we can skip all of this."}, {"start": 3549.08, "end": 3551.06, "text": " I'm not gonna focus on that too much."}, {"start": 3551.06, "end": 3553.6, "text": " And let's just return back to result."}, {"start": 3553.6, "end": 3556.0, "text": " So let's return back to result."}, {"start": 3556.0, "end": 3557.52, "text": " Here we are."}, {"start": 3557.52, "end": 3559.2799999999997, "text": " So now we have the results."}, {"start": 3559.2799999999997, "end": 3562.42, "text": " And now we have this fallback mechanism,"}, {"start": 3562.42, "end": 3563.56, "text": " as you can see here."}, {"start": 3563.56, "end": 3567.68, "text": " And in our case, we'll just be directly,"}, {"start": 3567.68, "end": 3569.2799999999997, "text": " we'll get directly to the results"}, {"start": 3569.2799999999997, "end": 3573.12, "text": " and we will not enter this branch,"}, {"start": 3573.12, "end": 3578.12, "text": " which basically would force us to do decoding again"}, {"start": 3578.12, "end": 3581.24, "text": " with a different temperature."}, {"start": 3581.24, "end": 3582.96, "text": " So because we satisfied these conditions"}, {"start": 3582.96, "end": 3585.56, "text": " and that's that the compression ratio"}, {"start": 3585.56, "end": 3590.2, "text": " is not higher than this threshold 2.4,"}, {"start": 3590.2, "end": 3591.7999999999997, "text": " which means we are not generating something"}, {"start": 3591.7999999999997, "end": 3593.3599999999997, "text": " that's super random."}, {"start": 3593.3599999999997, "end": 3595.8399999999997, "text": " And we're also passing this test here."}, {"start": 3595.8399999999997, "end": 3597.88, "text": " The log probability, the average log probability"}, {"start": 3597.88, "end": 3599.2799999999997, "text": " is lower than minus one."}, {"start": 3599.2799999999997, "end": 3601.6, "text": " Because of that, we can just kind of,"}, {"start": 3601.6, "end": 3603.06, "text": " we are already good."}, {"start": 3603.06, "end": 3606.7599999999998, "text": " We have the correct transcription and we can exit here."}, {"start": 3606.76, "end": 3610.36, "text": " We check here whether we have the no caption threshold"}, {"start": 3610.36, "end": 3611.5200000000004, "text": " and it's set."}, {"start": 3611.5200000000004, "end": 3614.6400000000003, "text": " And because of that, we now check, so this is the check."}, {"start": 3614.6400000000003, "end": 3618.1200000000003, "text": " Is the no caption probability higher than the threshold?"}, {"start": 3618.1200000000003, "end": 3620.1600000000003, "text": " If it is, then we should skip."}, {"start": 3620.1600000000003, 
"end": 3622.1800000000003, "text": " And that means that we would literally skip"}, {"start": 3622.1800000000003, "end": 3625.28, "text": " the whole segment here and continue."}, {"start": 3625.28, "end": 3628.0200000000004, "text": " Continue basically transcribing the next segment."}, {"start": 3628.0200000000004, "end": 3630.6800000000003, "text": " Okay, so in our case, it's not the case."}, {"start": 3630.6800000000003, "end": 3634.6000000000004, "text": " So we just ignore this part here."}, {"start": 3634.6, "end": 3637.88, "text": " Now there is some part about the timestamps here."}, {"start": 3637.88, "end": 3640.3199999999997, "text": " You can see here tokens greater than equal"}, {"start": 3640.3199999999997, "end": 3641.8399999999997, "text": " timestamp begin."}, {"start": 3641.8399999999997, "end": 3645.2, "text": " Because the timestamp token have this nice property"}, {"start": 3645.2, "end": 3646.52, "text": " that they are sequential."}, {"start": 3646.52, "end": 3648.66, "text": " So that means that the beginning one,"}, {"start": 3648.66, "end": 3651.56, "text": " the zero zero moment of the transcription"}, {"start": 3651.56, "end": 3656.56, "text": " is maybe on has IDN and then all of the bigger timestamps"}, {"start": 3658.2, "end": 3663.2, "text": " have even bigger basically index in the embedding table."}, {"start": 3663.2, "end": 3664.48, "text": " And that's why this is gonna work."}, {"start": 3664.48, "end": 3667.72, "text": " So this is gonna fetch all of those tokens"}, {"start": 3667.72, "end": 3669.92, "text": " that are timestamp tokens."}, {"start": 3669.92, "end": 3672.04, "text": " And then we basically use that information"}, {"start": 3672.04, "end": 3673.92, "text": " to add some segments."}, {"start": 3673.92, "end": 3675.48, "text": " This is not that by looking,"}, {"start": 3675.48, "end": 3678.76, "text": " go through the code at your own pace."}, {"start": 3678.76, "end": 3683.32, "text": " But I'm gonna return back to results and here we are."}, {"start": 3683.32, "end": 3686.78, "text": " So we can now tokenizer decode all tokens."}, {"start": 3686.78, "end": 3689.48, "text": " So if we were to print this, you can see,"}, {"start": 3689.48, "end": 3690.94, "text": " and so my fellow Americans,"}, {"start": 3690.94, "end": 3693.2400000000002, "text": " ask not what your country can do for you."}, {"start": 3693.24, "end": 3695.2799999999997, "text": " Ask what you can do for your country."}, {"start": 3695.2799999999997, "end": 3698.04, "text": " So that's the, if you remember the clip"}, {"start": 3698.04, "end": 3701.72, "text": " from the beginning of this coding session,"}, {"start": 3701.72, "end": 3706.3399999999997, "text": " that's precisely what we were playing."}, {"start": 3706.3399999999997, "end": 3708.12, "text": " So here we just saved that as the TXT"}, {"start": 3708.12, "end": 3709.9599999999996, "text": " and we also saved this format where we have"}, {"start": 3709.9599999999996, "end": 3713.3199999999997, "text": " the timestamp information additionally"}, {"start": 3713.3199999999997, "end": 3715.2799999999997, "text": " aside from the transcription."}, {"start": 3715.2799999999997, "end": 3717.0, "text": " Guys, that's pretty much it."}, {"start": 3717.0, "end": 3719.64, "text": " Hopefully you saw, I mean, the details of the code"}, {"start": 3719.64, "end": 3723.0, "text": " are complex, but the idea is if you've bear with me,"}, {"start": 3723.0, "end": 3725.36, "text": " up until this point, congrats."}, 
{"start": 3725.36, "end": 3728.52, "text": " Hopefully you've seen how this intricate system here works."}, {"start": 3729.4, "end": 3731.04, "text": " There is a lot of heuristics,"}, {"start": 3731.04, "end": 3733.8, "text": " but the system is ultimately very elegant"}, {"start": 3733.8, "end": 3737.12, "text": " in the sense that it uses off the shelf transformer module"}, {"start": 3737.12, "end": 3740.6, "text": " and basically just leverages huge data"}, {"start": 3740.6, "end": 3742.04, "text": " and some decoding heuristics."}, {"start": 3742.04, "end": 3743.6, "text": " Cool, so if you like this video,"}, {"start": 3743.6, "end": 3747.12, "text": " consider subscribing, sharing it out with your friends"}, {"start": 3747.12, "end": 3749.0, "text": " and see you next time."}, {"start": 3749.0, "end": 3754.0, "text": " So until then, thanks for watching."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=pTChDs5uD8I
BigScience BLOOM | 3D Parallelism Explained | Large Language Models | ML Coding Series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this 4th video of the Large Language Model series I walk you through the BigScience's BLOOM model codebase! The main focus is on understanding the 3D parallelism: * Pipeline parallelism * Model parallelism * Data parallelism A set of beautiful engineering ideas that are behind all of the recent scaling efforts and ML success stories! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ BLOOM code: https://github.com/bigscience-workshop/Megatron-DeepSpeed ✅ Ultimate Guide to Scaling Video: https://www.youtube.com/watch?v=hc0u4avAkuM ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro - focusing on the 3D parallelism! 00:02:00 Quick setup 00:05:00 Stepping through the eval script 00:11:13 3D paralellism - model construction 00:15:00 Sharding the embedding table (model parallelism) 00:20:09 Sharding the transformer layer 00:22:30 LayerNorm fused kernels 00:23:50 Sharding the attention layer 00:25:15 ColumnParallel and RowParallel sharding 00:31:30 Synchronizing input and output embedding tables 00:34:45 Building the dataset (data parallelism) 00:39:15 3D parallelism - forward pass 00:39:25 Pipeline parallelism communication 00:43:35 Pass through the sharded embedding table 00:52:15 Pass through the sharded transformer layer 01:01:36 Sharded logit and cross-entropy computation 01:05:30 Recap 01:11:15 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #bloom #bigscience #3dparallelism #scaling
What's cracking guys, Alex here. This is the fourth video of the large language model series, and I'll be walking you through the BLOOM codebase, the 175-billion-parameter model from the BigScience event. I'm going to focus on showing you how 3D parallelism is implemented in code, meaning model parallelism, pipeline parallelism and data parallelism. You'll see side-by-side comparisons of the diagrams I showed you in my previous video, the ultimate guide to scaling, and the actual code. As a quick recap, here's the playlist. In the first video, the ultimate guide to scaling, I showed you a couple of techniques for scaling models up to hundreds of billions and even trillions of parameters: model parallelism, data parallelism, pipeline parallelism, mixed precision training and the ZeRO optimizer. In the second video I skimmed through the papers behind the most popular publicly shared large language models, such as GPT-NeoX-20B, BigScience BLOOM and OPT (Open Pre-trained Transformer) from Meta. In the third video I went through the codebase behind the OPT model, focusing on mixed precision training and loss scaling, and on how complex these codebases actually become once you want to reach these scales. It's not just a matter of taking n, the number of parameters, bumping it from five to a hundred and expecting everything to work; it's a bit more complex than that. In any case, as I said, I'm going to show you the model parallelism from the ultimate guide to scaling video, the pipeline parallelism, and finally the data parallelism approach, where you replicate the weights across multiple devices, for example GPUs, reduce the gradients using all-reduce, broadcast the weights, et cetera. Okay, enough rambling, let's get started. The first thing you have to do is clone the repository, the Megatron-DeepSpeed one; I'll link it down in the video description. Just follow the instructions to set up the conda environment, and it should be straightforward. I did experience some issues on Windows, so ultimately I had to shift to Windows Subsystem for Linux, and then everything worked out of the box. It's kind of funny, because DeepSpeed is actually from Microsoft and it doesn't work on Windows; a bit ironic. In any case, go through this with me and download the data: we'll need the vocab and merges files for the tokenizer a bit later. You can go through the training script as well. I did go through it myself, but I decided to only show you the eval script, because it contains all the code necessary to showcase the 3D parallelism approach. The training script is more complex because it additionally uses DeepSpeed, which is built on top of Megatron, so the stack of function calls gets very deep and it would be super confusing to show you, unless I wanted to teach you DeepSpeed, which is not the point of this video. Long story short, grab these parameters for the eval script. Let me just find the main .py script, so this one.
So, and get started from here. Okay guys, so now I'm just gonna open up my VS code. So you see here Megatron DeepSpeed, I have this Bloom 3 conda environment and I'm just gonna open up VS code. So that's my ID of choice. And let me first show you a couple of things I had to modify in the arguments here. So let me just see what's important. Again, you'll have to link this vocab file after you download it. Just create a random TXT file and you don't really have to download the Viki text dataset because this is going to work out of the box like this. And yeah, I mean, if your goal is to understand how the code works, if your goal is to actually do something with Viki text and obviously you wanna download the actual dataset. For me, I just literally like pasted some random text inside of a TXT file. Okay, the second thing, again, you'll have to download this file, the GPT-2 merges. I also kinda commented out this load function and then we'll just be loading random weights, which again, does not matter because I don't care about the actual result, I just care about these to step through the code and show you what's going on. Then I had this number of workers set to zero because it's easier again to step through the code. And finally, I commented out the checkpoints activations because this does not make any sense for the eval script. So I guess that's just a minor bug on their side. Having said all that, let's start digging into the code. So I'm gonna open up the main function here, the main script. I'm going to grab the eval Viki text. So that's the config for this file that I've just showed you. And let me run this thing. Okay, guys, let me quickly go through the initialize Megatron function. Let's see what's going on there, although it's not that important for what I wanna show you. Just make sure that we have CUDA, so that we have a GPU device. We set some global variables here. This is kinda interesting. So there is the, whoops, don't show this again, please. Okay, so they first parse the arguments and let me kinda dig into that function first. So just to show you how it looks like. So the idea here is to, they have these global variables. So they parse the arguments, they set the, they assign those arguments to this global variable, and then it's gonna be visible across the code base. So that's kinda how they went about designing this thing. I'm gonna skip through the, I'm gonna skip, yeah. I'm just gonna maybe quickly show you how complex this is. There is a lot of arguments that go into this, but I'm gonna skip all of those, and we're just gonna see how that looks like when we start stepping through the code. So as you can see, a lot of error checking, and I'm gonna hit F5 and hopefully get here, yeah. Okay, so let's continue here. So that's the parsing of arguments. Then I'm gonna skip this part. We don't care about that one. And then we build a tokenizer. Again, that's just going to be GPT tokenizer. You can see it here. So we're gonna kinda go through this GPT-2BPE tokenizer, and that's a common tokenizer used throughout many code bases. And just doing this sample coding series, I realized how pervasive this tokenizer is, and I first covered it in my OpenAI clip video, so do check that one out. I'm gonna link it somewhere above if you are curious to learn more about the tokenizers. But for now, I'm just gonna basically disable the breakpoints and skip over the actual tokenizer. And let's continue. 
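As a side note, the "parse the arguments once, stash them in a module-level global" pattern described above can be sketched in a few lines. The function and argument names here are made up for illustration; they are not the actual Megatron helpers.

```python
import argparse

_GLOBAL_ARGS = None  # module-level singleton holding the parsed arguments

def parse_and_set_global_args(argv=None):
    """Parse CLI arguments once and stash them in the module-level global."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--micro-batch-size", type=int, default=8)
    parser.add_argument("--num-workers", type=int, default=0)
    global _GLOBAL_ARGS
    _GLOBAL_ARGS = parser.parse_args(argv)
    return _GLOBAL_ARGS

def get_args():
    """Any module in the codebase can call this instead of threading args around."""
    assert _GLOBAL_ARGS is not None, "arguments were never parsed"
    return _GLOBAL_ARGS

parse_and_set_global_args([])        # parse defaults for this demo
print(get_args().micro_batch_size)   # -> 8
```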
So we just do some padding of the vocab so that it's suitable for model parallelism; otherwise you'd have problems. And that's it. Now we just set up some TensorBoard stuff. This is kinda cool, the codecarbon tracker: they had this functionality where they track how much they're impacting the environment by measuring the amount of CO2 emissions, but they commented it out, so they don't actually use it; they say it's very unstable. Still a cool idea. Okay, let's step out of this and continue. Let me toggle on the breakpoints again; I'm gonna skip most of these things. So get_args, now this is the cool thing: wherever you are in the codebase, you literally just grab that global variable and you're golden. You don't have to pass the arguments through the function argument list. It's debatable whether that's the best design, but I think it's nice. There are a couple of interesting things in this function, so I'm gonna quickly step through it. There is the initialization of the distributed backend. I'm using NCCL, the NVIDIA Collective Communications Library, which implements the all-reduce and the other collective operations you use to communicate between and synchronize the GPUs. That happens in this function; I'm not gonna step into it because it's fairly complex, just library code. Then we set the random seed, for reproducibility purposes, obviously. And then in compile dependencies there are a couple of interesting things going on. Most of it is not that interesting, but there is one part where they load certain CUDA kernels, which are later used to leverage that C++/CUDA code for extra performance. We actually saw the same code in the previous video, because OPT was also using Megatron in the background. So what we do is load these kernels. I'm gonna step into it quickly just to show you how it looks, but it's nothing crucial, not related to 3D parallelism as such. Here you can see we're loading various kernels: the scaled (upper-triangular) masked softmax ones, layer norm, et cetera. I'm gonna put the breakpoint here, hit F5 and exit; it was more of an FYI than anything else. Okay, then there is a barrier call, which synchronizes the devices: all of the GPUs will wait at that line until the other GPUs hit the same line, and then the barrier releases execution across all of the devices. That's what a barrier does. I have a single GPU, so it's not interesting here, but even with a single GPU I'm still going to be able to show you the 3D parallelism, as we'll soon see. Okay, let's exit here; that's pretty much everything I wanted to show you in the initialization. Now let's get into the main script. Because I'm using WikiText-103, I'm gonna take this particular main function and enter here. Okay, so we fetched the arguments; let's see what's going on. Because we are using WikiText, the eval metric is set to loss. That's not that important. And this is where stuff becomes interesting.
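Earlier in this step the vocabulary was padded so the embedding table divides evenly across tensor-parallel ranks. A rough sketch of that arithmetic, with illustrative names rather than the exact Megatron function, assuming GPT-2's 50,257-token vocab and a tensor-parallel size of two:

```python
def pad_vocab_size(orig_vocab_size: int, make_divisible_by: int, tp_world_size: int) -> int:
    """Pad the vocabulary so the embedding table can be split evenly across
    tensor-parallel ranks (argument names are illustrative)."""
    multiple = make_divisible_by * tp_world_size
    padded = orig_vocab_size
    while padded % multiple != 0:
        padded += 1
    return padded

# GPT-2's 50257 tokens padded for a tensor-parallel world size of 2
print(pad_vocab_size(50257, make_divisible_by=128, tp_world_size=2))  # -> 50432
```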
We're now going to see how the parallelism is implemented in the code. Okay, let me re-enable the breakpoints and continue stepping through this. Here is the model provider, so we enter this get model function; let's see what's going on. We hit this else branch. Because I have a single device, my pipeline is only of length one, a single stage, and that's why both pre-process and post-process are set to true here. But imagine running on a two-by-two topology, four GPUs in total; I'm gonna use that topology as the running example. Let me open up OneNote and show you what I mean. So we have four GPUs arranged in a grid: along one dimension we have model parallelism (MP), with ranks zero and one, and along the other dimension we have pipeline parallelism (PP). Two of the GPUs form pipeline stage zero and the other two form stage one, so if our transformer has 24 layers, 12 of them live on stage zero and 12 on stage one. That's the topology I want you to keep in mind; we'll see why it's useful in a couple of minutes. Again, in that setup the devices in stage zero of the pipeline will have pre-process set to true and post-process set to false, the last stage will have pre-process false and post-process true, and if you had even more stages, the middle devices would have both of these flags set to false, because they are neither the first nor the last stage of the pipeline. Hopefully that makes sense. Now let's enter this function. We just set this flag to true because we're dealing with the loss metric. Let's continue, and here is the GPT model, so the GPT transformer is the underlying architecture. Let's enter and see what's going on. It inherits from MegatronModule, which is a light wrapper around torch modules. I'm not gonna step through the constructor; we fetch the arguments here and skip these details. This is the interesting part, the language model, so we enter that function. Again we just fetch the arguments and continue, and now there is the transformer language model, which is the interesting one. Let me hit F5 and get there. It again inherits from MegatronModule; I'll focus only on the interesting parts. Here's the embedding, and this is where it gets interesting, so let me put a breakpoint here and hit F5. The embedding table has a couple of details; here is the VocabParallelEmbedding, which is what they're using in the background.
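A tiny sketch of the pre-process / post-process flags just discussed, assuming the two-stage pipeline from the running example (the helper name is mine, not Megatron's):

```python
def pipeline_stage_flags(pipeline_rank: int, pipeline_world_size: int):
    """Only the first stage embeds the input tokens (pre_process) and only the
    last stage produces the logits / loss (post_process)."""
    pre_process = pipeline_rank == 0
    post_process = pipeline_rank == pipeline_world_size - 1
    return pre_process, post_process

# 2-stage pipeline from the 2x2 running example
for rank in range(2):
    print(rank, pipeline_stage_flags(rank, 2))
# -> 0 (True, False)
#    1 (False, True)
```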
Parallel suggests that we're going to model-parallelize this embedding table. And by the way, let me step back up: you can see that we only enter here if pre-process is set to true, and that's only true for the first stage of the pipeline, stage zero in my notation. So only the first devices in the pipeline hold this embedding table, which makes sense, because we want to split the model's layers across the different stages. Okay, back to the VocabParallelEmbedding; let me hit F5 and enter. So this is the interesting part: get tensor model parallel world size will, in the topology I showed you a couple of minutes ago, return two. In my particular setup it obviously returns one, because I have a single GPU, but in general it's gonna be two. Now let's see the repercussions of that. We fetch the tensor model parallel rank, which is zero for one column of devices and one for the other column. With that in mind, and with the world size being two in the running example, let's enter this function and see what it does. It takes the global vocab size, roughly 50K slots in the embedding table, and divides it by two, because our model parallel world size is two in the running example, so it will be something like 25K. Then we enter this next function, and you can see that for rank zero the start index would be zero, because we multiply the rank by 25K, and the end index would be zero plus 25K, so 25K. That means we return 0 and 25,000 as the result. In my case I'm obviously returning 0 and 50K, but bear with me. That's the first thing to understand: those variables are used to split the embedding table across multiple devices. That's the whole point. So the number of embeddings per partition would be 25,000 if we had the topology I'm using. We can skip this part; that's just some bitsandbytes stuff where they use int8 precision, et cetera; we're gonna ignore that in this video. Here is the interesting part: we now create the weight of the embedding table. You can see we're passing this variable, which would be 25,000 again, and the embedding dimension, which is 1,024. That's it. So the weight would be 25,000 by 1,024 on the devices that have rank zero, and another 25,000 rows on the devices that have rank one, again talking about model parallelism here. We then just initialize it, and that's it. So now you've seen for the first time what model parallelism looks like. Let me exit here and continue. The second thing we create here are the positional embeddings. Because these are much smaller than the vocab embedding table, they are not sharded; the whole thing is allocated on each of the different devices.
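Before moving on, to make the vocab sharding concrete, here is a simplified, single-process stand-in for the logic walked through above: each tensor-parallel rank stores only its [start, end) rows of the embedding table. The class name and initialization are assumptions, and the cross-rank all-reduce that the real layer performs is only mentioned in a comment.

```python
import torch
import torch.nn as nn

class ShardedVocabEmbedding(nn.Module):
    """Toy vocab-parallel embedding: each rank holds a slice of the 50k-row table."""
    def __init__(self, vocab_size: int, hidden: int, tp_rank: int, tp_world_size: int):
        super().__init__()
        per_partition = vocab_size // tp_world_size
        self.start = tp_rank * per_partition        # e.g. 0 on rank 0, 25_000 on rank 1
        self.end = self.start + per_partition       # e.g. 25_000 or 50_000
        self.weight = nn.Parameter(torch.empty(per_partition, hidden))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # tokens owned by another rank contribute a zero vector here; in the real
        # multi-GPU layer the partial results are summed with an all-reduce
        mask = (token_ids < self.start) | (token_ids >= self.end)
        local_ids = (token_ids - self.start).clamp(0, self.weight.shape[0] - 1)
        out = self.weight[local_ids]
        return out.masked_fill(mask.unsqueeze(-1), 0.0)

emb = ShardedVocabEmbedding(vocab_size=50_000, hidden=1024, tp_rank=0, tp_world_size=2)
print(emb(torch.tensor([5, 30_000])).shape)  # torch.Size([2, 1024])
```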
We can see 1,024 here, because that's going to be our sequence length, so that's how many positional encodings we need. Okay, let's continue. We initialize those, and we can skip the rest; we just add some dropout. We now have the embedding module, so let's continue. Let me just update the diagram, I think it's going to be useful: in a different color, the embedding table is currently placed like this, one 25K shard here and one 25K shard there. Okay, let's now go back to the code. Next is the parallel transformer; let's see how the transformer is going to be split. Let's enter the ParallelTransformer, and here it is, skipping these details until we get here. Here is an interesting assert: the number of layers has to be divisible by the pipeline parallel world size. In the running example that's two, so the number of layers has to be divisible by two; in my actual setup it's one. Now this is the interesting line: we divide the number of layers by the number of stages, and because we have 24 layers, that means 12 layers get allocated on this particular device. Again, keep in mind that this code I'm stepping through would be running in parallel on four devices, and each of those devices has a different context because it lies in a different part of the topology, so they go down different code paths. That's the main idea; hopefully you understand that. Let's continue. Now we create the offset: we grab the pipeline rank and multiply it by the number of layers per stage. That means the offset is zero for the two stage-zero devices, and for the two stage-one devices the rank is one times 12 layers, so the offset is 12. Let's see how the offset plays a role: we now form 12 layers, and we pass that index information into this build layer function. So let's enter the build layer function; here it is, and this is where we start building the actual transformer layers, 12 times. Let me enter this layer, hit F5, and we're here. Again, I'll focus on the important parts and ignore many details, otherwise it would be too bothersome. First things first, we instantiate a layer norm; let's see what it looks like. The layer norm has some fused components, which is interesting: that means we'll be using optimized CUDA kernels, a fancy word for functions implemented in CUDA that run super fast on NVIDIA GPUs. Here you can see we import this fused mixed precision layer norm CUDA module. Let me show you where that was constructed; I actually showed you that at the beginning of the video. So here, let me just move this thing here.
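Stepping back for a second, the layer-to-stage assignment described a bit earlier in this step (offset = pipeline rank times layers per stage) boils down to a couple of lines of arithmetic. This helper is illustrative, not the Megatron code:

```python
def layers_for_stage(total_layers: int, pipeline_rank: int, pipeline_world_size: int):
    """Which transformer layer indices a given pipeline stage owns."""
    assert total_layers % pipeline_world_size == 0
    per_stage = total_layers // pipeline_world_size   # e.g. 24 // 2 = 12
    offset = pipeline_rank * per_stage                 # 0 for stage 0, 12 for stage 1
    return list(range(offset, offset + per_stage))

print(layers_for_stage(24, pipeline_rank=0, pipeline_world_size=2))  # [0, ..., 11]
print(layers_for_stage(24, pipeline_rank=1, pipeline_world_size=2))  # [12, ..., 23]
```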
So if I copy-paste this here, you can see that this exact variable is loaded using the cpp_extension load helper; I showed you that function before, and that's where it was initialized. Now let's get back to the actual code; let me go to the top of the stack. We import that kernel, and we're gonna use it a bit later. Everything else is not that interesting; I'm not gonna focus on the layer norm because it's not split across devices. Okay, let me exit this, move the window a bit to the left, and we're out. Now let's see the parallel attention. This is gonna be the interesting part, so let's enter ParallelAttention, hit F5, and go to the part we care about, the part where we split. We can step over all of these; the magic starts happening right here. Again, the model parallel world size is gonna be two, and we divide the projection size by two, and likewise the number of attention heads by two. Now we get to the interesting part: the column parallel linear layer. That might be a mouthful, but let me remind you what it is. What we are building right now is the self-attention layer, and these are the column-parallel layers: we take the weight matrices and split them column-wise, so Q1, K1 and V1 are used on one device and Q2, K2 and V2 on the other device. That's what this code is implementing. So let me enter the actual ColumnParallelLinear, and you can see again that the weight matrix is split in this column-parallel fashion. Let me hit F5 and enter; let's see where that happens. Here is the world size, again equal to two, and then the output size is divided by two. Let's continue, and here we are allocating the weights. With the topology I'm using as the running example, this dimension would be divided by two, so we're literally splitting the weight column-wise. That's the whole point. Once we have those weights, we just initialize them, and there's some bias handling we can skip; it's not that interesting. So let's exit here. That was the first part. Next we create the softmax part; the only difference is that it's an optimized, fused one, and since there are no learnable weights in it, we're not sharding it, so I'll skip it. Then we create the dropout, and there is the RowParallelLinear, the second component. Let me go back to OneNote: here is the other part of the self-attention layer, the output linear layer, and this time we split it row-wise, B1 and B2. Let me go back to the code and enter the RowParallelLinear; you can see the diagram again. Let me put a breakpoint here and hit F5. Here we are; again, the world size is gonna be two.
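Before looking at the row-parallel layer, here is a toy, single-process illustration of the column-parallel idea just covered (not the real ColumnParallelLinear, which also handles communication and initialization): each rank holds a column slice of the weight and produces a slice of the output features.

```python
import torch
import torch.nn as nn

class ColumnShardedLinear(nn.Module):
    """Toy column-parallel linear: the weight is split along the output (column)
    dimension, so each rank computes a slice of the output features."""
    def __init__(self, in_features: int, out_features: int, tp_world_size: int):
        super().__init__()
        assert out_features % tp_world_size == 0
        self.local = nn.Linear(in_features, out_features // tp_world_size, bias=False)

    def forward(self, x):
        # each rank returns its own slice; gathering (or feeding the following
        # row-parallel layer) happens outside this module
        return self.local(x)

# QKV projection from the walkthrough: hidden 1024 -> 3*1024, sharded over 2 ranks
qkv_rank0 = ColumnShardedLinear(1024, 3 * 1024, tp_world_size=2)
print(qkv_rank0(torch.randn(4, 1024)).shape)  # torch.Size([4, 1536])
```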
Let me continue. This time you can see that it's the input size per partition that's being divided, so the input dimension gets split, not the output size, and that's why this is row-wise splitting. That's the difference. Again, I'm gonna skip everything else here. Let me step out of this function (Shift+F11 does that on a Windows keyboard), and here we are: the RowParallelLinear is instantiated. Let's continue. We don't use rotary embeddings, so we can skip this part as well. And that was the parallel attention. Let me update the drawing again: the parallel attention is sharded across two devices in this running example, so half of the weights are here and half are there, and obviously we'll need 12 of these; I didn't do it justice in the drawing, but you can imagine 12 of these transformer self-attention layers. Let's go back to the code and exit here. We have a layer norm again; I won't bother, I'm just gonna exit it. We skip the decoder part, and now there is the parallel MLP. Recall what the transformer block looks like: we have the self-attention module and then the MLP part. Here we'll again see first the column-wise splitting and then the row-wise splitting; that's the diagram we'll implement. Let me show you this quickly; it's the last thing before we're pretty much done with the model construction. Let me hit F5 and enter. Again, a column parallel layer; I'm just going to disable the breakpoints and skip over it, because it's the same logic as before. Let's get here: some activation stuff, and here is the row parallel linear, the one with the 4H hidden size, the common convention where the hidden layer is four times wider than the input and output dimension. Okay, let's exit here, and that's basically our transformer layer; you can always follow along here to see where we are, and we're currently building the transformer layer. I'm gonna exit, and we do this 12 times, so let me make sure the breakpoints are disabled, enable the one here, hit F5, and we have 12 layers. Again, let me update the drawing: we now have 12 layers here and 12 layers there, and because this is running in parallel, the other two devices would also have their 12 layers each. The difference is that the first two, because they are in stage zero of the pipeline parallelism, additionally have the sharded embedding table. Cool. Guys, let me know how you like the video so far; leave any feedback if you have any, that would be super cool. So again, post-process: that's going to be active only on the last stage of the pipeline, which means only those devices will have the final layer norm, but I won't enter the layer norm because we've already seen it, it's the optimized one. And that's it, that was the parallel transformer. Let's continue. We don't have the decoder part, we don't have the pooler, so we can exit here and exit again, and we have our language model constructed. Again, I'm not sure why I keep entering this thing.
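And the row-parallel counterpart, again a toy single-process sketch rather than the real RowParallelLinear: the weight is split along the input (row) dimension, each rank multiplies its slice of the activation, and the partial outputs are summed, which in the real multi-GPU setting is an all-reduce.

```python
import torch
import torch.nn as nn

class RowShardedLinear(nn.Module):
    """Toy row-parallel linear: each rank consumes a slice of the input features
    and produces a partial result that must be summed across ranks."""
    def __init__(self, in_features: int, out_features: int, tp_world_size: int):
        super().__init__()
        assert in_features % tp_world_size == 0
        self.local = nn.Linear(in_features // tp_world_size, out_features, bias=False)

    def forward(self, x_shard):
        return self.local(x_shard)  # partial result; sum across ranks afterwards

# simulate two ranks: each consumes half of a 4096-dim activation (the "4H" hidden)
x = torch.randn(4, 4096)
rank0, rank1 = RowShardedLinear(4096, 1024, 2), RowShardedLinear(4096, 1024, 2)
full = rank0(x[:, :2048]) + rank1(x[:, 2048:])  # stands in for the all-reduce
print(full.shape)  # torch.Size([4, 1024])
```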
(By the way, whenever I accidentally enter that random function, I just do a step out, and that's how I exit, in case you're wondering.) Okay, guys, here we are, GPT model. We just constructed it. Now there is this initialize word embeddings function. Because in my actual setup I have a single GPU, I'll just hit return. But in general, if we had the actual topology I'm using as the running example, we'd be hitting this code here. And let me show you why this is interesting. So they mention it here: parameters are shared between the word embeddings layer and the output layer. That's the common thing you always do. You have the embedding table mapping the tokens into the embedding vectors, and then at the output you also use that same embedding table to map the embedding vectors back into the vocab, so basically into a distribution across the tokens from your vocabulary. So those are tied. And you can see here, they have a nice trick that they do. If this was running on the last stage of the pipeline, then they would create the embedding table. Again, it's gonna be a parallel embedding table, so it's gonna be sharded across the two devices of the last stage of the pipeline. And then this is the nice trick. Basically what they do is an all reduce, to ensure that the first and last stages have the same initial parameter values. Because these tables were allocated independently and live on different devices, they have to do an all reduce. So you can see here, if we are on either the first stage of the pipeline or the last stage, apply the all reduce method on this word embedding's weight. If you're not familiar with these collective operations, I did not explain them so far, just check out the Wikipedia page, they have a nice visual explanation, it's fairly simple. Roughly what's gonna happen is, assume the first stage literally receives the weights from the last stage and then divides by two. If the last stage does the same thing, you'll end up with the same result. That means you'll have the same weights both in the first stage and in the last stage, and thus you've synchronized the weights. As simple as that. Okay, guys, hopefully that was interesting, and hopefully that makes sense with the drawing I'm using in OneNote. So let me exit here. Now some stuff that's not that vital. I'm gonna skip all of this, put the breakpoint here, hit F5. Let's see what else. So we push all of these models onto the GPU, and we convert them to float 16 weights. Basically, let me enable the breakpoints now. I'm gonna hit this one a bit later, so we will see how that is going to be used. Okay, local DDP, distributed data parallelism. We're gonna ignore that for the time being. And let's just return the model. Okay, so again, because I have load set to none, I will not be loading the checkpoints, which means I'll basically be using random weights. Okay, now we build the data set. Let me quickly walk you through this one, although it's not that relevant. So, tokenizer. Again, I just load this random dummy file I created, and you can see what it has. Literally, I took a single paragraph from the WikiText data set and copy pasted it a couple of times, just to have something there. So now we just do some processing, the tokenization, blah, blah, blah. Okay, let me just exit here, this is not interesting. So we then tokenize the text.
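Going back to the word-embedding tying for a moment: below is a hedged sketch of that synchronization step, not the exact Megatron code. It assumes torch.distributed is already initialized and that embedding_group is a process group containing exactly the first-stage and last-stage ranks; the final division mirrors the averaging described above.

import torch
import torch.distributed as dist

def sync_tied_word_embeddings(word_embedding, embedding_group):
    """Called on both the first and the last pipeline stage so that the two
    copies of the tied input/output embedding start from identical values."""
    weight = word_embedding.weight.data
    # Sum the two independently allocated tables...
    dist.all_reduce(weight, op=dist.ReduceOp.SUM, group=embedding_group)
    # ...and average, as described in the video; both ranks now hold the
    # exact same tensor, which is what tying the embeddings requires.
    weight /= dist.get_world_size(group=embedding_group)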
Okay, I'm just stepping through the breakpoints. So this is some processing that they do. As you can see, a lot of details that I'm trying, unsuccessfully, to ignore. So we now do the tokenization, and you can see here, let me show you how that looks. The tokenized data is just a list of tokens of our text. So the text was tokenized and we have 448 tokens, the smallest data set ever. Okay, now we use that to construct this LM data set. Again, I'm not going to dig into a lot of details, because that's just a language modeling detail and we are trying to focus on the 3D parallelism in this video. Okay, let's exit here. We have the data set. Now we build the data loader. The reason I'm gonna show you this one is because it involves the data parallelism approach. So here it is. In PyTorch, it's fairly trivial. So here you can see get data parallel world size. So now, additionally, I did not show you this so far, but imagine having this same group of devices, like four devices like this. So I'm just gonna change the color. So imagine having this same group duplicated maybe twice or even four times, okay? And we're doing the same thing in each of these groups until now. So now it's gonna be a bit different for each of these groups. Okay, so let's say this is group zero and this is group one. We're gonna have two groups, okay? So let's go back to the code. The world size would be two. And let me show you how this is implemented in the background. It literally calls this distributed get world size, and then you specify the particular group. So here they specify the data parallel group, and that's why they basically get the number two. Okay, some constants there, I will not dig deeper than that. Let's continue, let's get the rank. So the rank would be zero for this group here, and it would be one for these devices here. So that's how we'd have a differentiation. Let's continue. We formed the distributed sampler, okay? And that one is going to be sending one part of our batch to one group and the second part to the other group. That's gonna be handled in the background by PyTorch. Okay, so now the data loader, blah, blah, blah, pinned memory, again, a lot of details I'll have to skip. And now we enter the evaluate and print results. So here is the actual function where we'll be doing a feed forward pass through the model. We just set it to eval mode. We enter the torch.no_grad context, because we're doing inference, we're not trying to train anything here. And now we're going to fetch our first batch. I'm going to ignore how that works, so I'm gonna disable the breakpoints, otherwise I would hit some breakpoints I left there. So let me hit F10, and we have a batch. A batch is simply going to be, as you can see here, the text and the pad mask keys. So text is going to be, I guess, 8 by 1,025. So eight, because we have a batch size of eight, and 1,025 because the sequence we're gonna pass to the transformer is gonna be 1,024, but it's 1,025 because we'll be using the extra token for the targets. You're gonna see that in a second. So yeah, hopefully that makes sense. It's 1,025 because you take the first 1,024 tokens as the input, and then you try to predict that same sequence shifted by one to the right. That's why you have 1,025 here. Okay, let's continue. Let's enter the forward pass. Let's see what's going on here.
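Before going into the forward pass, here is roughly what that data-parallel data loader construction boils down to. This is a hedged sketch: get_data_parallel_world_size and get_data_parallel_rank are stand-ins for the helpers discussed above, and the exact Megatron code differs in its details.

from torch.utils.data import DataLoader, DistributedSampler

def build_eval_data_loader(dataset, micro_batch_size, num_workers=0):
    world_size = get_data_parallel_world_size()   # e.g. 2 data-parallel groups
    rank = get_data_parallel_rank()               # 0 for one group, 1 for the other
    # Each data-parallel group therefore sees a disjoint slice of every batch.
    sampler = DistributedSampler(dataset, num_replicas=world_size,
                                 rank=rank, shuffle=False)
    return DataLoader(dataset, batch_size=micro_batch_size, sampler=sampler,
                      num_workers=num_workers, pin_memory=True)

# The 1,025-token samples mentioned above would then be split as:
#   tokens = batch[:, :-1]   # first 1,024 tokens are the input
#   labels = batch[:,  1:]   # the same sequence shifted right by one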
So, some batch processing. Again, I'm gonna ignore all of that because it's just standard language modeling stuff. This is the interesting part. So this is the receive forward. What's going on here? Let's enter there. Let me now enable the breakpoints. So here is what happens. If we are in the first stage, then the input tensor is none, okay? But if we were in some other stage, then we call this communicate method, and that means we're waiting for the output of the previous stage, to pass that output as the input of this stage. And if that happens, we're going to override the batch that we just loaded from the data loader. So that might be a mouthful, it's gonna hopefully make sense in a couple of minutes. So here is the input tensor. Let's continue here. Okay, some unwrapping of the model, not that interesting. So here's the part where we set the input tensor. Let me dig into this. So, blah, blah, blah, we call the GPT model set input tensor function. Okay, let me exit this one. Then we enter the transformer language model set input tensor. Again, a lot of levels of indirection, and I keep hitting this random function. Okay, so here we are. So finally we get to the parallel transformer, and we set the input tensor to none, because we are now in the first stage, at least in the actual setup I have. But let me show you where this one is used. It's going to be used a bit later in the forward pass. And we'll see that if pre-process is true, that means we're in the first stage, and in that case we'll actually be taking the data from the batch. Otherwise, if we are not in the first stage, we'll be passing the actual input tensor that was passed from the previous stage. So again, we'll see that a bit later. So let me now exit all of this. Let me exit this, let me exit this. And let's continue here. So here is the actual forward pass. Again, just so that we are on the same page, let me show you what's going to happen here. Our distributed data loader fetches a batch, and I'm just going to draw it as a simple 2D square. We're going to basically divide it into two pieces. The first chunk is gonna be passed into the zeroth group here, and the second chunk is going to be passed to the other group, okay? And now what happens is, this is going to be fed here. And while that's happening, these two guys are blocked. The thing is, that communicate method I showed you is a blocking method. So that means until this data has passed through the first stage, these guys are waiting. Only then do they start doing the actual computation. That's how this is gonna look. And obviously the same happens for the second data parallel group, so I will not be drawing it there. Okay, let's enter the feed forward pass. Okay, we have to enter this forward call. So distributed blah, blah, blah, there are a lot of levels of indirection again here. Let me just exit here. Oh my God, oh my God, let me enter here. Okay, so finally we are in the forward pass of the float 16 module. Let's see what's happening here. As you can see, if we're in the first stage of the pipeline, we convert our data to float 16, because we're doing mixed precision. So now the inputs are gonna be converted to float 16, and then we call the actual transformer with the float 16 data. Okay, let's enter here. Let me hit F11.
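One quick aside before stepping further in: here is a hedged sketch of what that blocking receive-forward step boils down to. It is not the real Megatron communicate() implementation (which batches and optimizes the point-to-point calls); it assumes torch.distributed is initialized and that helpers like is_pipeline_first_stage() and get_prev_pipeline_rank() exist.

import torch
import torch.distributed as dist

def recv_forward(tensor_shape, dtype=torch.float16):
    if is_pipeline_first_stage():
        # The first stage has no predecessor: it reads its input from the data loader.
        return None
    # Every other stage blocks here until the previous stage sends its output
    # activations; those activations then replace the batch it loaded itself.
    input_tensor = torch.empty(tensor_shape, dtype=dtype, device="cuda")
    dist.recv(input_tensor, src=get_prev_pipeline_rank())   # blocking point-to-point recv
    return input_tensor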
I think I can just hit F5, and yeah, here we are. We are in the GPT model forward pass, so let's start going through it. Okay, again, I think I can hit F5 because they have a breakpoint there. Yeah, so we are in the transformer language model. Okay, as you can see here, only the first stages, because pre-process is true for the first stages of the pipeline, are going to do the embedding part. Okay, so we do the embedding part. Let me enter this. Whoops, I hate this. Let me enter it. So here we are in the embedding forward. Let me see whether it's useful to enter this one as well. So here we are in the vocab parallel embedding. Let's see how the forward looks. Again, keep in mind that we have sharded that embedding table here and here. So we only have 25,000 of our embedding vectors here, and the other 25,000 are here. So they'll somehow have to communicate with each other, such that we end up with the correct results. So let's see how that's gonna work. Let's enter here. So we would actually be entering this branch if I had the setup I'm using as the running example, because the model parallel size would be two. So what would happen is, the input mask, as you can see here, will be true for all of those indices that are smaller than the start index or bigger than the end index, so the boundary indices here. And those boundaries will be 0 and 25,000 for this device here, and 25,000 and 50,000 for this one, okay? And that's why this is gonna mask out those indices that do not belong to this device, okay? Then there is this subtraction here that we have to do so that we get correct results: imagine we are now on this device, because its start index is 25,000, we want to subtract 25,000 such that we end up with zero, because we are indexing into its local table, and index zero is the first slot of that table on that device, if that makes sense. Sorry for being a bit confusing here. And you can see here, all of the other indices we just set to zero. Maybe let me try and draw this quickly. So here is what's going on. We basically have an input that contains indices from the whole table. Okay, let's quickly check the actual size. Let me use this input as the running example. So we have shape 8 by 1,024, okay? So let's imagine we just have 1,024, as if we had a batch size of one, it's gonna be easier. Okay, so here we are. So we have a vector of length 1,024, okay? And any of these slots, let's say this is dimension i, might contain numbers from zero to 50K. I'm just gonna round the numbers because it's easier, okay? So now let's assume we are on this device here. That device only has the following indices: we have two tables, this one contains from 0 to 25K, and this one contains from 25K to 50K, okay? So what this device is going to do is mask all of those values here. Let me take a different color. So all of the slots holding numbers between 0 and 25K, because this device does not contain those rows, are going to be masked out by setting them to zero. So let's see, this one is also put to zero, this one is also put to zero, blah, blah, blah, okay? And then it's going to take this number here, let's say this one is 26K, and it's going to subtract the 25K, because that's the offset, and we're going to end up with 1K.
And 1K, when you index with 1K into this table, you end up with some embedding vector here. And that vector, if you take a look at the whole table, sits at position 26K, okay? So that's why we do all of this offsetting. Hopefully this now makes a bit more sense. Okay, let's continue. So now we do the embedding and we get the results. Okay, so let's check out the shapes. The shape is going to be 8 by 1,024 by 1,024: this is the sequence length, and this is the embedding dimension. And now what we have to do is set those vectors that were masked out to zero, otherwise we would have incorrect results, okay? And finally, we call this reduce from tensor model parallel region. Let me quickly enter this one, and then I'm going to show you what is going on here. So here we are. Let me actually enter here, reduce. And here is what's going to happen. In case we have multiple shards, so if we have model parallelism of size two, as the running example is using, then we would not take this early exit; we would do the all reduce across the model parallel group. So let me break down what we have done right there. Let's again get back to the drawing and see what happened. Okay, so because these guys here were masked out, so this one was maybe 14K, and we've artificially set its value to zero. Because it was zero, that means it was indexing the zeroth vector from this table, which is incorrect, because we want the vector at 14,000, right? So this one here, again, let me just change the color and explain. This one here was maybe 14K, so that's the token ID, okay? It was indexing somewhere here, but because we've set it to zero, it was actually indexing this vector here. And that's why, oops, it's glitching, that's why we had to do the zeroing out, because we don't want to have this value here. We don't want this embedding vector for this particular token. That's why we do the zeroing, that's why we do this part here. And then what the all reduce does is the following. Let me go back here. So this is going to happen. Let me see whether I can explain this nicely. Basically, this device here mapped this set of tokens into a bunch of embedding vectors. The length here is the same as the length up there, so it's 1,024, but here we now have the hidden dimension as well. Okay, so this vector here is gonna be valid for this device. Let me change the color to green. So this one here, this token, is gonna be valid on this particular device. Okay, so it's valid here, but as we just saw, it's going to be invalid for this one. Okay, so let me draw another table, let me draw the results. So this is the output from this device, and this is the output from the second device. And it contains corrupt data, actually all zeros, because we've zeroed it out. So it's gonna be all zeros. And now what the all reduce does is basically sum these up. If you sum them up, you'll end up with a correct result both here and here, because if you just add zeros onto this device, you keep the same value, but the other device will now have the correct value of the embedding vector. And that's the magic of the all reduce.
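To make that concrete, here is a tiny single-process simulation of the masked lookup plus all-reduce described above. It is a toy sketch with made-up sizes (50 "vocab" entries instead of 50K), and the final addition plays the role of the all-reduce across the two shards.

import torch

torch.manual_seed(0)
vocab, hidden, seq = 50, 16, 8                  # tiny stand-ins for 50K / 1024 / 1024
full_table = torch.randn(vocab, hidden)
shards = full_table.chunk(2, dim=0)             # rows 0..24 on "GPU 0", rows 25..49 on "GPU 1"
token_ids = torch.randint(0, vocab, (seq,))

outputs = []
for rank, shard in enumerate(shards):
    start = rank * (vocab // 2)
    end = start + vocab // 2
    mask = (token_ids < start) | (token_ids >= end)  # ids this shard does NOT own
    local_ids = token_ids - start                    # offset into the local table
    local_ids[mask] = 0                              # out-of-range ids temporarily point at slot 0
    emb = shard[local_ids]
    emb[mask] = 0.0                                  # ...and their (wrong) vectors are zeroed out
    outputs.append(emb)

# The all-reduce (here just a sum over the two shards) restores the correct vectors.
assert torch.allclose(outputs[0] + outputs[1], full_table[token_ids])

In the real code the two partial outputs live on different GPUs, so the sum is performed by torch.distributed.all_reduce over the model-parallel group instead of a plain addition.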
So what's going to happen is these GPUs here are going to do that type of communication and update the embedding vectors. That's a beautiful thing, if you ask me. Hopefully I managed to explain it nicely. And now, finally, let's get out of here. Okay, again, just to update our mental model. Where we are at the moment is the state where these two devices here are waiting for the information, and the embedding vectors on this device and on this device are the same, because they're synchronized by the all reduce we've just done. Okay, so we are now there. Now let's get back to the code. Okay, so after this, we're gonna have some positional embeddings being added, that's less interesting because it's not sharded. Let's continue, we have some dropout, blah, blah, blah, and we end up with the output. Okay, so now let's see what's going on. We now do the call to the encoder, and that's just going to be our transformer, basically. So let's enter there. Let me hit F5. We are now in the parallel transformer. Let's step over all of these. If self pre-process, so again, if we're in the first stage, we'll be doing this. So if we're in the first stage, we'll be grabbing the embedding vectors we just created. Let me show you the shape, 8 by 1,024 by 1,024. We just grab those, we do some transpose operation, we make it contiguous, and we continue. But if you recall this line here, if this was not the first stage of the pipeline, but maybe the second or third or whatever stage, then instead we'd have the hidden states being equal to the input tensor which came from the previous stage. Okay, so we would bypass this part here. Hopefully that makes sense. Now let's continue. We're not doing checkpointing. Okay, now we are just going to do a feed forward through all of the transformer layers. So let's do that. So here is the layer. I can hit F5, I think. So we are now in the parallel transformer layer. And the first thing we do is pass the data through the layer norm. Let me quickly show you how that looks, because it's fused. Oh my God, this sucks. Okay, so let's just step over here. Okay, here we are. And here is the fused layer norm affine function; we call its apply method. So let me enter here. So here you can see how it looks. Oops. What it does is ultimately call this CUDA kernel, which is a global variable that we loaded at the beginning of this program, and then we just call its forward function. We cannot actually step in here because it's just compiled C++ code, so I cannot enter with this current debugger. Maybe it can be tweaked so we can enter and see the C++ code, but I'm not sure how that would work. Okay, in any case, I just wanted to show you how the kernels look. Now let's exit from the layer norm, and now let's hit the actual attention part. Okay, let me hit F5. Here is the attention part. So we are in the parallel attention part. Again, remember, the weights are sharded such that each of the devices in rank zero and rank one of the model parallelism only has a portion of the weights. Okay, so we first hit the column parallel layer. Let me go back up the stack again. So this query key value is actually the column parallel linear layer, and we hit its forward function. So again, let me show you here. We are hitting the column parallel. This is the part I was showing you.
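As an aside, since we just bumped into that compiled kernel again: below is a rough sketch of how such a fused CUDA kernel is typically JIT-compiled and wrapped. The file names, the forward signature of the compiled module, and the Function class are all hypothetical here; the real Megatron fused layer norm differs in its details.

import torch
from torch.utils import cpp_extension

# JIT-compile the C++/CUDA sources on first use; this is what the
# cpp_extension load helper mentioned earlier does behind the scenes.
fused_layer_norm_cuda = cpp_extension.load(
    name="fused_layer_norm_cuda",
    sources=["fused_layer_norm.cpp", "fused_layer_norm_cuda_kernel.cu"],  # hypothetical paths
    extra_cuda_cflags=["-O3"],
)

class FusedLayerNormFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, bias, eps):
        # The heavy lifting happens inside the compiled kernel, which is why
        # the Python debugger cannot step into this call.
        return fused_layer_norm_cuda.forward(x, weight, bias, eps)  # assumed signature

# usage: y = FusedLayerNormFunction.apply(x, weight, bias, 1e-5)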
So let me go back to the code now and let's see what's going on here. So we call this weird function, copy to tensor model parallel region. What is it doing exactly? Let's enter here. And you can see, in the forward pass it's just doing the identity function. So why are we doing this? Well, that's because in the backward we'll be doing the reduce, we'll be calling the all reduce. So let's see why that makes sense. Let me show you the diagram here, it's kind of clear once you see it. What we're doing here is implementing F. F, as you can see here, is just the identity function in the forward pass, because we just pass the same data X to one device as well as to the other device. But once we go backward, we want to all reduce the gradients that came from this portion and this portion, we want to basically combine them and then pass them to the previous layers. Hopefully that makes sense. That's why it's implemented in such a way. Let's now exit this part. Now, this is less interesting. We just apply the actual linear layer and then blah, blah, blah, bias. We can exit there, that's less interesting. Okay, so now we just have some reshaping, some view manipulation, blah, blah, blah. We're splitting into three parts, the queries, keys and values. I'm not gonna focus on the actual transformer logic, because I assume you either know that, or even if you don't, let's focus on the 3D parallelism. So again, just some transformer stuff. We're changing the view so we can do the multiplication between the keys and the queries, okay. So here we just preallocate a tensor that's gonna contain the attention scores, onto which we'll then apply the softmax. As you can see here, we first do the multiplication of queries and keys, so we can skip all of that. And now we have attention scores. In order to convert scores into actual probabilities, we have to apply the softmax. Again, I'm gonna exit this softmax layer, it's not that interesting, we've already seen the CUDA kernels, so I'm gonna just skip that. Okay, attention. We just do the dropout, blah, blah, blah. Let's continue. My computer is kind of struggling, you can see it's lagging behind, so I cannot go faster than this. We just do some view manipulation again, and then we do the batch matrix multiply, which is going to do the weighted sum of the value vectors, and we end up with the final result here. Then we just change the view, we do permute, blah, blah, blah, nothing new there, hopefully. And finally, we hit the dense part. And the dense part is, let me show you here, this part. So that should be the row parallel linear layer, right? Because we're now here. Let me go back to the code. Let me enter here. Whoops. Let me enter here. Oh my God. So we are in the row parallel linear layer, as I said. Let's quickly see how it's going to work. Again, a couple of details there. You can see we first apply the linear layer, and then we call this reduce from tensor model parallel region. So let's see how that thing looks. That one, in the forward pass, is doing the all reduce, and in the backward it's just doing the identity function. So it's literally the polar opposite of what the column-wise one was doing, right? So let's go back here and see why that is.
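Before going back to the diagram, here is a hedged sketch of these two conjugate operators side by side. This is a simplification of what lives in Megatron's mpu module, assuming torch.distributed is initialized and that model_parallel_group is the tensor/model-parallel process group.

import torch
import torch.distributed as dist

class _CopyToModelParallelRegion(torch.autograd.Function):
    """The "f" operator: identity in the forward pass, all-reduce in the backward."""
    @staticmethod
    def forward(ctx, x):
        return x                                   # both shards receive the same X

    @staticmethod
    def backward(ctx, grad_output):
        grad = grad_output.clone()
        dist.all_reduce(grad, group=model_parallel_group)   # combine grads from both shards
        return grad

class _ReduceFromModelParallelRegion(torch.autograd.Function):
    """The "g" operator: all-reduce in the forward pass, identity in the backward."""
    @staticmethod
    def forward(ctx, x):
        out = x.clone()
        dist.all_reduce(out, group=model_parallel_group)    # sum the partial outputs
        return out

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                         # just copy the gradient back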
Let me show you the diagram again. So here you can see, in the forward we want to all reduce these outputs such that we end up with the correct result on both devices, but in the backward we're just copy pasting the gradients. That's the reason why it was implemented in such a manner. Okay. So I'm going to skip all of this because it's not relevant. And we are done with the parallel attention, we are exiting the attention right now. Okay, now we have some residual stuff, blah, blah, blah, biases, we don't care about those. Some of those are JIT compiled, so we wouldn't be able to step through them either way. Post attention layer norm. I think we're going to hit the breakpoint here, so I'm going to remove it and exit this function. Let's exit the function here, and we're back. So we don't have the decoder. Now we hit the MLP. So the MLP is going to be this part. Again, we now pass the data that came out of this part, we're going to pass it here, and we're going to repeat the same thing, so I'm going to skip it. As you can see here, it's column-wise and then row-wise splitting, but it's the same logic, so we can safely skip this part. Okay. So let me disable the breakpoints. We do the same thing, and that's it, we are done here. Let me just exit this part here and let's return back the value. And that was layer one. We obviously repeat that 12 times, so I'm going to put the breakpoint here, hit F5, and now we're here. So let's see where we are now with the diagram. The data has successfully propagated all the way. Let me change, I'm not sure what color is the best one here, so maybe let's do it like this. So we are now here. We successfully propagated our activations all the way to here, and now we'll be passing them to the second stage of the pipeline. So let's go back to the code. In case we are in the last stage of the pipeline, we'd additionally be doing this, passing it through the final layer norm. And that's it. We are now here. We can step over all of this and we return back the results from our transformer language model. Guys, this is pretty much it, we've seen the main concepts already. There are now a couple of additional details here. So the post language model processing was this one. Let me enable the breakpoints. Let me hit F5. Here we are. So again, there's gonna be some synchronization between the devices in these final parts. So now we're gonna project those activations into the vocab space. Let's see what's going on. So we have input parallel. That's gonna be, I guess, 8 by 1,024 by 1,024. Yep. And then once we do a step over, we're gonna end up with the vocab dimension. So yeah, 8 by 1,024 by 50,304. So that's it. Again, there was some communication happening here between the devices, such that we have consistent results on both devices. And that's it. Let's continue here. Whoops. We're now almost at the bottom of the stack. So we are now in this float 16 module. What we do now is, because we're in the last stage, we convert from float 16 back to float 32, and then we return the results. Okay. So let's exit here and we are back. So again, now we're gonna have this send forward function. And you can see here, if it's not the last stage, then communicate. So that means all of the stages will have to pass the data to some other device, unless you're at the last stage of the pipeline, which makes sense. Again, communicate is a blocking method. So that means if you're waiting for the data, you'll be blocked.
That's why you have the concept of micro batches, so that you can minimize the waiting, the idle time of the devices. Hopefully that makes sense. Okay, so in the last stage of the pipeline, you do this additional vocab parallel cross entropy. I could quickly enter this one, although I think it's gonna be fairly detailed. There are a lot of details here, but the concepts are the same, nothing new there. So I'm gonna slowly step out of all of this. So yeah, as you can see, the devices have to communicate such that we end up with the correct logits, and then blah, blah, blah. You can go through this one at your own pace, I think it should be fairly easy to understand. Okay, let's exit that part. Let me exit it. And that's it. We just do some summation, we return back the loss, and we do some additional all-reduces here. And now we keep on doing this. Ultimately, you'll end up with the evaluation metrics. I'm gonna disable the breakpoints, hit F5 here, and exit here. And that's it. Now, if we are on the last rank, we just, and by the way, here's how the last rank is calculated in this particular example. You can see that we just get the rank without specifying any group, which means that, however many devices we have in total, we just find the absolute rank. So the ranks go zero, one, two, three, and so on up to seven in this example. And we only return the results on the last rank. So basically, only on the last rank, so only on a single device in our whole cluster, do we do this operation. So basically, yeah, just some blah, blah, blah, perplexity. And then we print the results, but we don't care about the actual results. So you can see here average loss, perplexity, adjusted perplexity, blah, blah, blah. Nothing interesting there. And now, finally, we're done. Okay, guys, we've seen a lot so far. Let me do just a quick recap and then we can wrap this video up. So we have a couple of differences depending on whether we are dealing with the last stage of the pipeline, the first stage of the pipeline, or the intermediate stages of the pipeline. So let me go and run the eval script again. I'm gonna hit run here, and I have a breakpoint set inside of this forward step function. There are a couple of things I wanna explain here. First things first is this receive forward function, then I'm gonna explain the differences in the forward pass through the model, and then the send forward part. So first things first: here, if we enter, you can see that the first stage in the pipeline will have the input tensor set to none, as you can see here, which means that it will actually be fetching the data from the data loader. So it'll grab the chunk of data from the data loader and pass that as the input into the embedding table and then the transformer layers. So again, let me show you the diagram here. That means those guys, the first stage, will grab the chunk of data from the batch and will pass that data through the embedding table, and only they have the embedding table, if you recall from the explanation earlier. And then they'll pass the data through the transformer layers and ultimately end up with the output activations, denoted in gray color here. Okay. And then all of the other stages will block on the communicate method, and that means they'll be waiting until the previous stage sends the activations to this stage.
And then it will continue the execution. So that means these guys here will be blocked until the gray activations are sent to the second stage here, and then they'll do the forward pass. Okay, so that's the first thing. Then let's go back to the forward step. We have the forward pass through the model. So I'm gonna now just enable all of the breakpoints and hit F5 just to enter the actual forward pass. Whoops, let me just do this. I'm gonna basically disable this and put a breakpoint here. So this is where the difference happens. Basically, as you can see here, in the GPT model forward, depending on whether we're dealing with the last stage or any other stage of the pipeline, we'll be entering this branch here. And as you can see, if I enter this one, this thing converts the activations into the logits. So that means we'll end up with the vocab space dimensionality. Let me digest what I just said there. If you take a look at the LM output, all of the other stages will just be returning the pure activations, so that means 8 by 1,024 by 1,024, the raw activations from the transformer block. But the last stage will be returning this processed information, because it maps the activations into the vocab space. Let me show you what I mean by that. So if we exit here, the output for the last stage will be, whoops, I think I have to do a step over here, and then let's do output shape. So you can see here 50K. It would be 25K per device in the case of having the model sharded. And yeah, that's it. So let me step outside of this thing. And now we're here. Okay, so again, remember, output is one of two things: it's either the raw activations from the transformer layers, or, for the last stage of the pipeline, the activations mapped into the vocab space. And then the send forward: as you can see here, all of the stages will be communicating the activations to the next stage, but the last stage will just skip it. As you can see here, if not the last stage, then communicate; if it is the last stage, we just ignore this, it's a no-op for the last stage. Okay, and finally, as you can see here, we have the last stage branch here, which means that the last stage will pass the logits into this function here, calculate the cross entropy, and then return back the loss. Whereas all of the other stages first communicate the information to the next stage and then return none. And then if we go a level up from this function, so this is the forward step, you can see that the output, again, for the last stage, will be all reduced and then added to this total output, which will finally be returned and used to calculate the perplexity, et cetera, et cetera. So hopefully that makes sense. Again, let me go through the diagram, I think it's fairly straightforward. The first stage here will be grabbing the data from the data loader, passing that data through the embedding table and then through the transformer layers until we have the raw activations, and then it'll be passing those activations on to the next stage. Whereas the next stage will be waiting for the activations, then grabbing the activations from the previous stage, passing them through its 12 transformer layers, and finally calculating the logits, calculating the cross entropy loss, and then we'll be accumulating that information.
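Putting that recap together, here is a hedged, highly simplified sketch of the per-stage control flow. The helper names are hypothetical; the real forward_step also handles micro-batching, fp16 conversion, the pipeline schedule, and so on.

def forward_step(batch, model):
    # None on the first stage; a blocking receive from the previous stage otherwise.
    input_tensor = recv_forward(tensor_shape=activation_shape)
    model.set_input_tensor(input_tensor)       # non-first stages ignore the loaded batch

    # Raw activations on non-last stages; logits (vocab space) on the last stage.
    output = model(batch["text"])

    if is_pipeline_last_stage():
        # Only the last stage computes the loss from the (vocab-parallel) logits.
        return vocab_parallel_cross_entropy(output, batch["labels"])
    send_forward(output)                       # hand the activations to the next stage
    return None

The three branches described next fall out of this: the first stage does no receive and runs the embedding, intermediate stages receive then send, and the last stage receives and computes the loss.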
If we additionally had intermediate stages in this diagram, then those would be receiving the activations from the previous stage and then just passing their activations on to the next stage. So we kind of have three different branches in this pipeline logic. Cool, that was a long video. Hopefully you liked it. If you did, I would really appreciate any feedback. This will be the last video from the large language model series. I might create one more on GPT-NeoX-20B if there is enough interest. If you'd like me to go through the GPT-NeoX-20B code base, do comment down below and I'll be monitoring the comments. In any case, if you found this video useful, consider subscribing and sharing it out. That's the best way you can support the channel. And until next time, bye bye.
[{"start": 0.0, "end": 2.52, "text": " What's cracking guys, Alex here."}, {"start": 2.52, "end": 4.34, "text": " In this video, I'm doing the fourth video"}, {"start": 4.34, "end": 6.16, "text": " of the large language model series"}, {"start": 6.16, "end": 9.32, "text": " and I'll be walking you through the Bloom code base."}, {"start": 9.32, "end": 12.32, "text": " So that's the 175 billion parameter model"}, {"start": 12.32, "end": 14.120000000000001, "text": " from the big science event."}, {"start": 14.120000000000001, "end": 15.780000000000001, "text": " And I'm gonna focus on showing you"}, {"start": 15.780000000000001, "end": 19.080000000000002, "text": " how the 3D parallelism is implemented in code."}, {"start": 19.080000000000002, "end": 20.72, "text": " So that means model parallelism,"}, {"start": 20.72, "end": 22.84, "text": " pipeline parallelism and data parallelism."}, {"start": 22.84, "end": 26.52, "text": " So you're gonna see side by side comparisons of diagrams."}, {"start": 26.52, "end": 28.32, "text": " I showed you in my previous video"}, {"start": 28.32, "end": 30.04, "text": " on the ultimate guide to scaling"}, {"start": 30.04, "end": 31.88, "text": " and you're gonna see the actual code."}, {"start": 31.88, "end": 34.8, "text": " So as a quick recap, here's the playlist."}, {"start": 34.8, "end": 37.72, "text": " As I said, the first video, the ultimate guide to scaling."}, {"start": 37.72, "end": 39.88, "text": " In that one, I showed you a couple of techniques"}, {"start": 39.88, "end": 41.9, "text": " for scaling the models up to scales,"}, {"start": 41.9, "end": 43.92, "text": " up to those scales of hundreds of billions"}, {"start": 43.92, "end": 45.5, "text": " and even trillions of parameters."}, {"start": 45.5, "end": 48.0, "text": " So techniques such as model parallelism,"}, {"start": 48.0, "end": 50.120000000000005, "text": " data parallelism, pipeline parallelism,"}, {"start": 50.120000000000005, "end": 53.24, "text": " mixed precision training and zero optimizer."}, {"start": 53.24, "end": 55.120000000000005, "text": " In the second video, I showed you,"}, {"start": 55.120000000000005, "end": 57.120000000000005, "text": " I went through, I skimmed through the papers"}, {"start": 57.12, "end": 61.64, "text": " behind the most popular publicly shared large language models"}, {"start": 61.64, "end": 64.56, "text": " such as GPT-New, X1EB, Big Science Bloom"}, {"start": 64.56, "end": 67.84, "text": " and OpenPretrain Transformer from Meta."}, {"start": 67.84, "end": 69.2, "text": " And then in the third video,"}, {"start": 69.2, "end": 74.0, "text": " I actually went through the code base behind the opt model"}, {"start": 74.0, "end": 77.02, "text": " and I focused on showing you the mixed precision training,"}, {"start": 77.02, "end": 78.84, "text": " the loss scaling, those details."}, {"start": 78.84, "end": 80.67999999999999, "text": " And additionally, I focused on showing you"}, {"start": 80.67999999999999, "end": 84.28, "text": " how complex these code bases actually are"}, {"start": 84.28, "end": 86.12, "text": " if you want to get to these scales."}, {"start": 86.12, "end": 87.92, "text": " So it's not just a matter of taking n,"}, {"start": 87.92, "end": 89.60000000000001, "text": " where n is the number of parameters"}, {"start": 89.60000000000001, "end": 92.48, "text": " and shifting it from five to 100"}, {"start": 92.48, "end": 93.94, "text": " and expecting everything to work."}, {"start": 93.94, "end": 96.24000000000001, "text": " 
Obviously, it's a bit more complex than that."}, {"start": 96.24000000000001, "end": 98.12, "text": " Okay, in any case, as I said,"}, {"start": 98.12, "end": 100.12, "text": " I'm gonna show you the model parallelism"}, {"start": 100.12, "end": 103.4, "text": " from the ultimate guide to scaling video."}, {"start": 103.4, "end": 106.96000000000001, "text": " I'm gonna show you the pipeline parallelism"}, {"start": 106.96000000000001, "end": 109.84, "text": " and I'm finally gonna show you the data parallelism approach"}, {"start": 109.84, "end": 113.44, "text": " where you replicate the weights across multiple devices,"}, {"start": 113.44, "end": 117.0, "text": " for example, GPUs and then you do the data replication magic."}, {"start": 117.0, "end": 120.12, "text": " So you reduce the gradients using all reduce"}, {"start": 120.12, "end": 122.8, "text": " and then you broadcast the weights, et cetera, et cetera."}, {"start": 122.8, "end": 127.8, "text": " Okay, enough rambling, let's get started."}, {"start": 127.88, "end": 129.6, "text": " So first thing you have to do"}, {"start": 129.6, "end": 131.88, "text": " is obviously clone the repository."}, {"start": 131.88, "end": 134.32, "text": " So that's the Megatron DeepSpeed one."}, {"start": 134.32, "end": 136.44, "text": " I'm gonna obviously link the link down"}, {"start": 136.44, "end": 138.8, "text": " in the video description"}, {"start": 138.8, "end": 142.72, "text": " and just get started through this start fast with me."}, {"start": 142.72, "end": 144.2, "text": " So just follow the instructions"}, {"start": 144.2, "end": 146.04, "text": " to set up the conda environment"}, {"start": 146.04, "end": 147.72, "text": " and it should be straightforward."}, {"start": 147.72, "end": 150.56, "text": " Although I did experience some issues on Windows"}, {"start": 150.56, "end": 152.44, "text": " and so ultimately I had to shift"}, {"start": 152.44, "end": 154.2, "text": " to Windows subsystem for Linux"}, {"start": 154.2, "end": 156.76, "text": " and then everything worked out of the box."}, {"start": 156.76, "end": 158.44, "text": " So it's kind of funny because DeepSpeed"}, {"start": 158.44, "end": 159.78, "text": " is actually from Microsoft folks"}, {"start": 159.78, "end": 161.46, "text": " and it's not working on Windows."}, {"start": 161.46, "end": 162.84, "text": " I guess it's a bit ironic."}, {"start": 162.84, "end": 166.64, "text": " In any case, go through this with me, download the data."}, {"start": 166.64, "end": 169.28, "text": " We'll need these vocab and merges files"}, {"start": 169.28, "end": 171.44, "text": " for the tokenizer a bit later."}, {"start": 171.44, "end": 173.32, "text": " Go through the training script potentially."}, {"start": 173.32, "end": 175.32, "text": " I did go through it myself,"}, {"start": 175.32, "end": 177.84, "text": " but I decided to only show you the eval script"}, {"start": 177.84, "end": 180.07999999999998, "text": " because it contains all the necessary code"}, {"start": 180.07999999999998, "end": 183.68, "text": " for me to showcase the 3D parallelism approach."}, {"start": 183.68, "end": 186.48, "text": " This one is complex because it additionally uses DeepSpeed"}, {"start": 186.48, "end": 188.8, "text": " which is basically built on top of Megatron"}, {"start": 188.8, "end": 190.6, "text": " and so there is a lot of a lot,"}, {"start": 190.6, "end": 193.56, "text": " the stack of function calls is too deep"}, {"start": 193.56, "end": 196.16, "text": " and it will be super 
confusing for me to show you that"}, {"start": 196.16, "end": 198.12, "text": " unless I wanna teach you DeepSpeed"}, {"start": 198.12, "end": 200.0, "text": " which is not the point of this video."}, {"start": 200.0, "end": 201.72, "text": " Okay, long story short,"}, {"start": 202.8, "end": 207.76, "text": " basically go and take these parameters for the eval script."}, {"start": 207.76, "end": 211.76, "text": " Let me just find where the main pie script, so this one."}, {"start": 211.76, "end": 214.72, "text": " So, and get started from here."}, {"start": 214.72, "end": 218.68, "text": " Okay guys, so now I'm just gonna open up my VS code."}, {"start": 218.68, "end": 220.72, "text": " So you see here Megatron DeepSpeed,"}, {"start": 220.72, "end": 223.52, "text": " I have this Bloom 3 conda environment"}, {"start": 223.52, "end": 225.28, "text": " and I'm just gonna open up VS code."}, {"start": 225.28, "end": 228.12, "text": " So that's my ID of choice."}, {"start": 228.12, "end": 230.32, "text": " And let me first show you a couple of things"}, {"start": 230.32, "end": 233.84, "text": " I had to modify in the arguments here."}, {"start": 233.84, "end": 236.8, "text": " So let me just see what's important."}, {"start": 236.8, "end": 238.92000000000002, "text": " Again, you'll have to link this vocab file"}, {"start": 238.92000000000002, "end": 240.16, "text": " after you download it."}, {"start": 241.12, "end": 243.92000000000002, "text": " Just create a random TXT file"}, {"start": 243.92000000000002, "end": 248.6, "text": " and you don't really have to download the Viki text dataset"}, {"start": 248.6, "end": 252.12, "text": " because this is going to work out of the box like this."}, {"start": 252.12, "end": 256.52, "text": " And yeah, I mean, if your goal is to understand"}, {"start": 256.52, "end": 257.36, "text": " how the code works,"}, {"start": 257.36, "end": 259.56, "text": " if your goal is to actually do something with Viki text"}, {"start": 259.56, "end": 261.24, "text": " and obviously you wanna download the actual dataset."}, {"start": 261.24, "end": 264.52000000000004, "text": " For me, I just literally like pasted some random text"}, {"start": 264.52000000000004, "end": 265.96000000000004, "text": " inside of a TXT file."}, {"start": 266.8, "end": 267.8, "text": " Okay, the second thing,"}, {"start": 267.8, "end": 271.76, "text": " again, you'll have to download this file, the GPT-2 merges."}, {"start": 271.76, "end": 274.88, "text": " I also kinda commented out this load function"}, {"start": 274.88, "end": 277.44, "text": " and then we'll just be loading random weights,"}, {"start": 277.44, "end": 278.88, "text": " which again, does not matter"}, {"start": 278.88, "end": 280.6, "text": " because I don't care about the actual result,"}, {"start": 280.6, "end": 283.04, "text": " I just care about these to step through the code"}, {"start": 283.04, "end": 285.04, "text": " and show you what's going on."}, {"start": 285.04, "end": 287.40000000000003, "text": " Then I had this number of workers set to zero"}, {"start": 287.40000000000003, "end": 290.40000000000003, "text": " because it's easier again to step through the code."}, {"start": 290.40000000000003, "end": 293.04, "text": " And finally, I commented out the checkpoints activations"}, {"start": 293.04, "end": 296.16, "text": " because this does not make any sense for the eval script."}, {"start": 296.16, "end": 299.20000000000005, "text": " So I guess that's just a minor bug on their side."}, {"start": 
299.20000000000005, "end": 302.04, "text": " Having said all that, let's start digging into the code."}, {"start": 302.04, "end": 304.52000000000004, "text": " So I'm gonna open up the main function here,"}, {"start": 304.52000000000004, "end": 306.08000000000004, "text": " the main script."}, {"start": 306.08000000000004, "end": 309.76, "text": " I'm going to grab the eval Viki text."}, {"start": 309.76, "end": 312.96000000000004, "text": " So that's the config for this file"}, {"start": 312.96000000000004, "end": 314.44, "text": " that I've just showed you."}, {"start": 314.44, "end": 317.16, "text": " And let me run this thing."}, {"start": 317.16, "end": 319.36, "text": " Okay, guys, let me quickly go through"}, {"start": 319.36, "end": 321.52, "text": " the initialize Megatron function."}, {"start": 321.52, "end": 322.56, "text": " Let's see what's going on there,"}, {"start": 322.56, "end": 326.12, "text": " although it's not that important for what I wanna show you."}, {"start": 327.04, "end": 328.6, "text": " Just make sure that we have CUDA,"}, {"start": 328.6, "end": 330.84, "text": " so that we have a GPU device."}, {"start": 330.84, "end": 332.64, "text": " We set some global variables here."}, {"start": 332.64, "end": 333.46, "text": " This is kinda interesting."}, {"start": 333.46, "end": 338.04, "text": " So there is the, whoops, don't show this again, please."}, {"start": 338.04, "end": 340.92, "text": " Okay, so they first parse the arguments"}, {"start": 340.92, "end": 344.04, "text": " and let me kinda dig into that function first."}, {"start": 344.04, "end": 346.32, "text": " So just to show you how it looks like."}, {"start": 346.32, "end": 349.5, "text": " So the idea here is to, they have these global variables."}, {"start": 349.5, "end": 352.08000000000004, "text": " So they parse the arguments, they set the,"}, {"start": 352.08000000000004, "end": 355.72, "text": " they assign those arguments to this global variable,"}, {"start": 355.72, "end": 357.76000000000005, "text": " and then it's gonna be visible across the code base."}, {"start": 357.76000000000005, "end": 361.44, "text": " So that's kinda how they went about designing this thing."}, {"start": 361.44, "end": 365.26, "text": " I'm gonna skip through the, I'm gonna skip, yeah."}, {"start": 365.26, "end": 368.38, "text": " I'm just gonna maybe quickly show you how complex this is."}, {"start": 368.38, "end": 370.8, "text": " There is a lot of arguments that go into this,"}, {"start": 370.8, "end": 372.08000000000004, "text": " but I'm gonna skip all of those,"}, {"start": 372.08, "end": 374.15999999999997, "text": " and we're just gonna see how that looks like"}, {"start": 375.24, "end": 377.21999999999997, "text": " when we start stepping through the code."}, {"start": 377.21999999999997, "end": 379.97999999999996, "text": " So as you can see, a lot of error checking,"}, {"start": 379.97999999999996, "end": 382.71999999999997, "text": " and I'm gonna hit F5 and hopefully get here, yeah."}, {"start": 382.71999999999997, "end": 384.91999999999996, "text": " Okay, so let's continue here."}, {"start": 384.91999999999996, "end": 386.76, "text": " So that's the parsing of arguments."}, {"start": 386.76, "end": 389.91999999999996, "text": " Then I'm gonna skip this part."}, {"start": 389.91999999999996, "end": 391.64, "text": " We don't care about that one."}, {"start": 391.64, "end": 393.64, "text": " And then we build a tokenizer."}, {"start": 393.64, "end": 398.64, "text": " Again, that's just going to be GPT 
tokenizer."}, {"start": 399.08, "end": 400.76, "text": " You can see it here."}, {"start": 400.76, "end": 405.2, "text": " So we're gonna kinda go through this GPT-2BPE tokenizer,"}, {"start": 405.2, "end": 407.76, "text": " and that's a common tokenizer used"}, {"start": 407.76, "end": 409.52, "text": " throughout many code bases."}, {"start": 409.52, "end": 412.36, "text": " And just doing this sample coding series,"}, {"start": 412.36, "end": 415.8, "text": " I realized how pervasive this tokenizer is,"}, {"start": 415.8, "end": 419.56, "text": " and I first covered it in my OpenAI clip video,"}, {"start": 419.56, "end": 420.52, "text": " so do check that one out."}, {"start": 420.52, "end": 422.15999999999997, "text": " I'm gonna link it somewhere above"}, {"start": 422.15999999999997, "end": 425.15999999999997, "text": " if you are curious to learn more about the tokenizers."}, {"start": 425.15999999999997, "end": 428.8, "text": " But for now, I'm just gonna basically disable"}, {"start": 428.8, "end": 433.04, "text": " the breakpoints and skip over the actual tokenizer."}, {"start": 433.04, "end": 434.64, "text": " And let's continue."}, {"start": 434.64, "end": 437.8, "text": " So we just do some padding of the vocab"}, {"start": 437.8, "end": 442.28000000000003, "text": " such that it's basically suitable to model parallelism."}, {"start": 442.28000000000003, "end": 445.12, "text": " Otherwise, you'd have problems."}, {"start": 445.12, "end": 446.36, "text": " And that's it."}, {"start": 446.36, "end": 448.72, "text": " Now we just set up some TensorBoard."}, {"start": 448.72, "end": 450.84000000000003, "text": " This is kinda cool, the code carbon tracker."}, {"start": 450.84000000000003, "end": 453.28000000000003, "text": " So they did have this functionality"}, {"start": 453.28000000000003, "end": 456.24, "text": " where they track how much are they impacting"}, {"start": 456.24, "end": 460.32, "text": " the environment by just measuring the amount of CO2 emission."}, {"start": 460.32, "end": 462.84000000000003, "text": " But they did comment it out, so as you can see here,"}, {"start": 462.84000000000003, "end": 467.0, "text": " they don't actually use it, but it's a cool idea, I guess."}, {"start": 467.0, "end": 468.76, "text": " Because they say it's very unstable,"}, {"start": 468.76, "end": 469.96000000000004, "text": " blah, blah, blah, whatever."}, {"start": 469.96000000000004, "end": 473.16, "text": " Okay, let's step out of this, and let's continue."}, {"start": 473.16, "end": 474.24, "text": " And that's it."}, {"start": 475.2, "end": 478.2, "text": " Okay, let me toggle on the breakpoints again."}, {"start": 478.2, "end": 479.92, "text": " I'm gonna skip most of these things."}, {"start": 479.92, "end": 481.86, "text": " So get our, so now this is the cool thing."}, {"start": 481.86, "end": 484.0, "text": " Wherever you are in the code base,"}, {"start": 484.0, "end": 486.96, "text": " you'll literally just grab that global variable"}, {"start": 486.96, "end": 488.84, "text": " and you're golden."}, {"start": 488.84, "end": 491.08, "text": " You don't have to pass the arguments"}, {"start": 491.08, "end": 493.68, "text": " through the function argument list."}, {"start": 494.72, "end": 497.08, "text": " Well, it's debatable whether that's the best thing to do,"}, {"start": 497.08, "end": 499.68, "text": " but yeah, I think it's nice."}, {"start": 499.68, "end": 501.88, "text": " There is a couple of interesting things in this function,"}, {"start": 501.88, "end": 
504.4, "text": " so I'm gonna quickly step through it."}, {"start": 504.4, "end": 507.08, "text": " There is some initialization of the,"}, {"start": 507.08, "end": 508.44, "text": " so I'm using the NCCL,"}, {"start": 508.44, "end": 511.36, "text": " which is NVIDIA Collective Communication Library"}, {"start": 511.36, "end": 515.72, "text": " for all of those old reviews and other operations"}, {"start": 515.72, "end": 519.52, "text": " that you use to communicate and synchronize the GPUs."}, {"start": 519.52, "end": 520.72, "text": " So it's gonna happen in this function."}, {"start": 520.72, "end": 523.88, "text": " I'm not gonna step into it because it's fairly complex."}, {"start": 523.88, "end": 525.84, "text": " Just library code."}, {"start": 525.84, "end": 529.1, "text": " And then let me just see what else is interesting here,"}, {"start": 530.04, "end": 531.36, "text": " which set the random seed"}, {"start": 531.36, "end": 534.44, "text": " just for reproducibility purposes, obviously."}, {"start": 534.44, "end": 537.16, "text": " And yeah, that's it."}, {"start": 537.16, "end": 539.78, "text": " And then in the compile dependencies,"}, {"start": 539.78, "end": 541.72, "text": " there is a couple of interesting things going on."}, {"start": 541.72, "end": 544.04, "text": " Again, we are going to,"}, {"start": 544.04, "end": 545.72, "text": " so first there's some,"}, {"start": 545.72, "end": 547.6, "text": " this part is not that interesting,"}, {"start": 547.6, "end": 552.52, "text": " but there's one part where they load certain CUDA kernels,"}, {"start": 552.52, "end": 554.3, "text": " and we're gonna later actually use them"}, {"start": 554.3, "end": 558.92, "text": " and leverage that C++ code for extra performance."}, {"start": 558.92, "end": 561.16, "text": " So I'm gonna skip through most of these."}, {"start": 561.16, "end": 565.48, "text": " We actually saw something similar, if not the same."}, {"start": 565.48, "end": 566.76, "text": " Actually, it's the same code"}, {"start": 566.76, "end": 568.9399999999999, "text": " because Opt was also using Megatron,"}, {"start": 568.94, "end": 570.8800000000001, "text": " so my previous video was also using Megatron"}, {"start": 570.8800000000001, "end": 572.44, "text": " in the background."}, {"start": 572.44, "end": 574.6800000000001, "text": " So we've seen the same code here."}, {"start": 574.6800000000001, "end": 577.6800000000001, "text": " So what we do is we load these kernels."}, {"start": 577.6800000000001, "end": 579.36, "text": " I'm gonna step into it quickly"}, {"start": 579.36, "end": 581.1800000000001, "text": " and just show you how it looks like,"}, {"start": 581.1800000000001, "end": 584.2600000000001, "text": " but it's nothing super crucial."}, {"start": 585.7600000000001, "end": 589.0, "text": " I mean, not related to the 3D parallelism in that sense."}, {"start": 589.0, "end": 592.6, "text": " So here you can see we're just loading various kernels"}, {"start": 594.58, "end": 597.5400000000001, "text": " for like scaled up or for softmax"}, {"start": 597.54, "end": 600.76, "text": " and then for some other softmax and layer norm,"}, {"start": 600.76, "end": 601.5999999999999, "text": " et cetera, et cetera."}, {"start": 601.5999999999999, "end": 603.8, "text": " So I'm gonna put the breakpoint here,"}, {"start": 603.8, "end": 606.0799999999999, "text": " hit F5 and just exit here."}, {"start": 606.0799999999999, "end": 609.28, "text": " So it was more of FYI than anything else."}, {"start": 609.28, 
"end": 612.7199999999999, "text": " Okay, there is some barrier function call"}, {"start": 612.7199999999999, "end": 614.88, "text": " which basically synchronizes across."}, {"start": 614.88, "end": 617.68, "text": " So all of the, it basically waits,"}, {"start": 617.68, "end": 622.5999999999999, "text": " all of the different devices, GPUs will wait at that line"}, {"start": 622.5999999999999, "end": 625.0, "text": " until the other GPUs hit that same line"}, {"start": 625.0, "end": 629.0, "text": " and then the barrier is gonna release the execution"}, {"start": 629.0, "end": 630.28, "text": " across all of the devices."}, {"start": 630.28, "end": 631.68, "text": " That's what the barrier does."}, {"start": 631.68, "end": 635.08, "text": " But I have a single GPU, so that's not interesting."}, {"start": 635.08, "end": 636.98, "text": " And even though I have a single GPU,"}, {"start": 636.98, "end": 640.22, "text": " I'm still going to be able to show you the 3D parallelism"}, {"start": 640.22, "end": 641.46, "text": " as we'll soon see."}, {"start": 641.46, "end": 643.76, "text": " Okay, let's just exit here"}, {"start": 643.76, "end": 646.36, "text": " and that's pretty much everything I wanted to show you."}, {"start": 646.36, "end": 648.6, "text": " Now let's get into the main script."}, {"start": 648.6, "end": 652.4, "text": " So because I'm using Viki Text 103,"}, {"start": 652.4, "end": 656.04, "text": " I'm gonna take this particular main function"}, {"start": 656.04, "end": 658.06, "text": " and then let's now enter here."}, {"start": 658.06, "end": 661.1, "text": " Okay, so we fetched the arguments"}, {"start": 661.1, "end": 662.8, "text": " and let's see what's going on."}, {"start": 662.8, "end": 666.0799999999999, "text": " So because we are using Viki Text,"}, {"start": 666.0799999999999, "end": 668.88, "text": " we're gonna set the eval metric to loss."}, {"start": 668.88, "end": 670.64, "text": " That's not that important."}, {"start": 670.64, "end": 674.02, "text": " And this is where stuff becomes interesting."}, {"start": 674.02, "end": 676.5799999999999, "text": " We're now going to see how the parallelism"}, {"start": 676.5799999999999, "end": 679.0, "text": " is implemented in the code here."}, {"start": 679.0, "end": 682.26, "text": " Okay, so let me re-enable the breakpoints"}, {"start": 682.26, "end": 685.26, "text": " and let's continue stepping through this."}, {"start": 685.26, "end": 687.4, "text": " So here is the model provider."}, {"start": 687.4, "end": 689.3199999999999, "text": " So we enter this get model function."}, {"start": 689.3199999999999, "end": 691.28, "text": " Let's now see what's going on."}, {"start": 691.28, "end": 694.16, "text": " Okay, so we hit this else branch."}, {"start": 694.16, "end": 697.24, "text": " And now, because I have a single device"}, {"start": 697.24, "end": 701.48, "text": " and because of that, my pipeline is only of length one."}, {"start": 701.48, "end": 703.2, "text": " I have a single stage."}, {"start": 703.2, "end": 705.48, "text": " And that's why both the pre-process"}, {"start": 705.48, "end": 708.38, "text": " as well as the post-process here"}, {"start": 708.38, "end": 711.2, "text": " are gonna be set to true, as you can see here."}, {"start": 711.2, "end": 713.44, "text": " But if this was on a,"}, {"start": 713.44, "end": 715.0400000000001, "text": " if you were running on a,"}, {"start": 715.0400000000001, "end": 718.5600000000001, "text": " let's say a two by two topology setup,"}, {"start": 
718.5600000000001, "end": 720.4000000000001, "text": " so we have four GPUs in total"}, {"start": 720.4000000000001, "end": 722.98, "text": " and I'm gonna use that topology as a running example."}, {"start": 722.98, "end": 725.6, "text": " Let me kind of open up the OneNote and show you what I mean."}, {"start": 725.6, "end": 727.4000000000001, "text": " So let me go here."}, {"start": 727.4000000000001, "end": 731.3000000000001, "text": " And this is going to be the running topology I'll be using."}, {"start": 731.3000000000001, "end": 734.32, "text": " So basically, I'm gonna have something like this."}, {"start": 734.32, "end": 737.3000000000001, "text": " So we have a GPU here."}, {"start": 738.34, "end": 740.84, "text": " We have a GPU here."}, {"start": 740.84, "end": 743.12, "text": " We have another GPU here."}, {"start": 743.12, "end": 745.94, "text": " And we have yet another GPU here."}, {"start": 745.94, "end": 748.6, "text": " So we're gonna have three types of parallelism,"}, {"start": 748.6, "end": 750.12, "text": " but I'm gonna show you only the model"}, {"start": 750.12, "end": 751.88, "text": " and the pipeline here on the image."}, {"start": 751.88, "end": 755.76, "text": " So these ones here are going to be stage one."}, {"start": 755.76, "end": 757.8000000000001, "text": " So stage or stage zero,"}, {"start": 757.8000000000001, "end": 760.2, "text": " and this is gonna be stage one, okay?"}, {"start": 760.2, "end": 763.76, "text": " So basically that means we'll have half of the layers"}, {"start": 763.76, "end": 765.44, "text": " of our transformer being here."}, {"start": 765.44, "end": 768.76, "text": " Maybe if we have 24 layers, 12 will be here."}, {"start": 768.76, "end": 770.6, "text": " So we'll have 12 layers here."}, {"start": 770.6, "end": 772.64, "text": " And we'll have 12 layers here."}, {"start": 773.64, "end": 776.28, "text": " Next up, I'm gonna do something like this."}, {"start": 776.28, "end": 777.82, "text": " So we're gonna have model parallelism."}, {"start": 777.82, "end": 780.3000000000001, "text": " So across this dimension, we have MP."}, {"start": 781.22, "end": 783.16, "text": " And across this dimension here,"}, {"start": 783.16, "end": 786.52, "text": " we have pipeline parallelism PP, okay?"}, {"start": 786.52, "end": 789.32, "text": " So this one is gonna be zero and one."}, {"start": 789.32, "end": 791.64, "text": " So this is gonna be the topology"}, {"start": 791.64, "end": 794.0400000000001, "text": " that I want you to have in your mind."}, {"start": 794.0400000000001, "end": 796.1600000000001, "text": " And we'll see how it's gonna be useful"}, {"start": 796.1600000000001, "end": 797.62, "text": " in a couple of minutes."}, {"start": 797.62, "end": 802.62, "text": " Okay, again, keep in mind that in the setup I just showed you,"}, {"start": 803.94, "end": 808.58, "text": " the devices from the stage zero of the pipeline parallelism"}, {"start": 808.58, "end": 811.58, "text": " will have, this will be true and this will be false."}, {"start": 811.58, "end": 814.16, "text": " And then the second stage will have,"}, {"start": 814.16, "end": 816.34, "text": " this will be true and this will be false."}, {"start": 816.34, "end": 817.94, "text": " And if you have even more stages,"}, {"start": 817.94, "end": 821.98, "text": " then some of these devices will have both of these"}, {"start": 821.98, "end": 824.78, "text": " pre and post process flags set to false"}, {"start": 824.78, "end": 826.98, "text": " because they are neither the 
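The two-by-two running example can be made concrete with a few lines of plain Python: four GPUs, tensor-(model-)parallel size two, pipeline-parallel size two. The rank ordering below (tensor-parallel ranks adjacent within a stage) is an assumption for illustration; the point is how each global rank gets a tensor-parallel rank, a pipeline stage, and the pre_process / post_process flags.

```python
TP_SIZE = 2   # tensor (model) parallel world size
PP_SIZE = 2   # pipeline parallel world size

for global_rank in range(TP_SIZE * PP_SIZE):
    tp_rank = global_rank % TP_SIZE        # position within a pipeline stage
    pp_rank = global_rank // TP_SIZE       # which pipeline stage
    pre_process = pp_rank == 0             # first stage embeds the inputs
    post_process = pp_rank == PP_SIZE - 1  # last stage computes the output/loss
    print(global_rank, tp_rank, pp_rank, pre_process, post_process)
```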
first stage"}, {"start": 826.98, "end": 829.14, "text": " nor the last stage of the pipeline."}, {"start": 829.14, "end": 830.44, "text": " Okay, hopefully that makes sense."}, {"start": 830.44, "end": 833.04, "text": " Now let's enter this function here."}, {"start": 834.1, "end": 836.66, "text": " So yeah, we just set this to true"}, {"start": 836.66, "end": 839.2, "text": " because we're dealing with loss metric."}, {"start": 839.2, "end": 840.04, "text": " Let's continue."}, {"start": 840.04, "end": 841.34, "text": " And here is the GPT model."}, {"start": 841.34, "end": 844.1, "text": " So I'm gonna be using the GPT transformer"}, {"start": 844.1, "end": 847.02, "text": " as the underlying architecture."}, {"start": 847.02, "end": 850.1, "text": " So let's enter here and let's see what's going on."}, {"start": 850.1, "end": 852.22, "text": " So it's gonna inherit from Megatron module,"}, {"start": 852.22, "end": 855.26, "text": " which is a light wrapper around torch modules."}, {"start": 855.26, "end": 858.14, "text": " I'm not gonna step through the constructor."}, {"start": 858.14, "end": 859.5, "text": " Let's fetch the arguments here"}, {"start": 859.5, "end": 862.9, "text": " and let's just kind of skip across all of these details."}, {"start": 862.9, "end": 864.58, "text": " This is the interesting part, the language model."}, {"start": 864.58, "end": 868.0, "text": " So now we're gonna enter that function."}, {"start": 869.54, "end": 873.38, "text": " And here, okay, again, we just fetch the arguments."}, {"start": 873.38, "end": 874.78, "text": " We continue here."}, {"start": 874.78, "end": 876.84, "text": " Now there is a transformer language model."}, {"start": 876.84, "end": 878.46, "text": " So that's gonna be interesting one."}, {"start": 878.46, "end": 881.86, "text": " So let me hit F5, let's get here."}, {"start": 881.86, "end": 884.3, "text": " So it's inheriting again from the Megatron module."}, {"start": 884.3, "end": 888.26, "text": " Okay, we can step across all of these."}, {"start": 888.26, "end": 890.8199999999999, "text": " I'm gonna focus only on the parts that are interesting."}, {"start": 890.8199999999999, "end": 891.66, "text": " So here's the embedding."}, {"start": 891.66, "end": 892.8199999999999, "text": " This is the interesting part."}, {"start": 892.8199999999999, "end": 897.14, "text": " So let me kind of put a break point here, hit F5."}, {"start": 897.14, "end": 900.06, "text": " Let's see how this is going to work."}, {"start": 900.06, "end": 903.3399999999999, "text": " Okay, so the embedding table"}, {"start": 903.3399999999999, "end": 905.3, "text": " is gonna have a couple of details."}, {"start": 905.3, "end": 907.3399999999999, "text": " So here is the Voke parallel embedding."}, {"start": 907.3399999999999, "end": 909.28, "text": " That's what they're using in the background."}, {"start": 909.28, "end": 914.28, "text": " Parallel suggests that we are basically gonna do"}, {"start": 915.5, "end": 917.86, "text": " a model parallelism of this embedding table."}, {"start": 917.86, "end": 920.78, "text": " And by the way, let me just go back a step up here."}, {"start": 920.78, "end": 923.42, "text": " You can see that we only enter here"}, {"start": 923.42, "end": 925.9399999999999, "text": " if the pre-process is set to true."}, {"start": 925.9399999999999, "end": 929.8399999999999, "text": " And that's only true for the first stage in the pipeline"}, {"start": 929.8399999999999, "end": 933.4399999999999, "text": " or zeroth index, as I 
kind of made the notation."}, {"start": 933.4399999999999, "end": 936.36, "text": " But like the first devices in the pipeline"}, {"start": 936.36, "end": 939.06, "text": " will only have this embedding table, which makes sense,"}, {"start": 939.06, "end": 941.4599999999999, "text": " because we wanna split the model layers"}, {"start": 941.4599999999999, "end": 944.14, "text": " across different stages."}, {"start": 944.14, "end": 945.9399999999999, "text": " Okay, so let's get back here."}, {"start": 945.9399999999999, "end": 948.3399999999999, "text": " So here is the Voke parallel embedding."}, {"start": 948.3399999999999, "end": 949.6199999999999, "text": " Let's enter here."}, {"start": 949.6199999999999, "end": 951.54, "text": " Let's gonna hit F5."}, {"start": 951.54, "end": 953.8399999999999, "text": " Whoops, let me hit F5 and enter there."}, {"start": 953.8399999999999, "end": 957.8199999999999, "text": " Okay, so again, let's see what's gonna happen here."}, {"start": 957.8199999999999, "end": 959.26, "text": " So this is the interesting part."}, {"start": 959.26, "end": 962.6999999999999, "text": " This get tensor model parallel world size"}, {"start": 962.6999999999999, "end": 964.7199999999999, "text": " is going to in the topology"}, {"start": 964.7199999999999, "end": 966.7399999999999, "text": " that I showed you a couple of minutes ago,"}, {"start": 966.7399999999999, "end": 968.16, "text": " it will return two."}, {"start": 968.16, "end": 969.54, "text": " So for my particular setup,"}, {"start": 969.54, "end": 970.74, "text": " it's obviously gonna return one"}, {"start": 970.74, "end": 972.42, "text": " because they have a single GPU,"}, {"start": 972.42, "end": 974.24, "text": " but in general, it's gonna return two."}, {"start": 974.24, "end": 977.74, "text": " And now let's see what's the repercussion of that."}, {"start": 977.74, "end": 982.74, "text": " So here, what happens is we fetch the rank."}, {"start": 982.86, "end": 985.66, "text": " So the tensor model parallel rank,"}, {"start": 985.66, "end": 990.66, "text": " and that's going to be zero for the basically devices here."}, {"start": 991.5, "end": 993.3399999999999, "text": " So it's gonna be zero for these devices."}, {"start": 993.3399999999999, "end": 996.4599999999999, "text": " It's gonna be one for these devices here, okay?"}, {"start": 996.46, "end": 998.82, "text": " So let me go back to code."}, {"start": 998.82, "end": 1003.22, "text": " Now with that in mind and having this thing,"}, {"start": 1003.22, "end": 1008.22, "text": " so this is going to be two in the example I'm showing you."}, {"start": 1009.62, "end": 1011.14, "text": " Let's now enter this function"}, {"start": 1011.14, "end": 1012.38, "text": " and let me show you what it does."}, {"start": 1012.38, "end": 1015.86, "text": " So it takes the global vocab size,"}, {"start": 1015.86, "end": 1020.86, "text": " which is this big 50K slots in the embedding table."}, {"start": 1021.1, "end": 1023.5400000000001, "text": " And we're going to divide it by two"}, {"start": 1023.54, "end": 1026.82, "text": " because our model parallelism world size"}, {"start": 1026.82, "end": 1030.02, "text": " is actually of size two in the running example I'm using."}, {"start": 1030.02, "end": 1033.82, "text": " Okay, so it will be something like 25K here."}, {"start": 1033.82, "end": 1036.62, "text": " Okay, and then what we do is we enter this function here"}, {"start": 1037.7, "end": 1040.6599999999999, "text": " and you can see here, so for rank 
zero,"}, {"start": 1040.6599999999999, "end": 1043.62, "text": " we would have this thing would be zero"}, {"start": 1043.62, "end": 1046.18, "text": " because we multiply it by zero 25K."}, {"start": 1046.18, "end": 1049.54, "text": " And then this would be 25K plus,"}, {"start": 1050.94, "end": 1053.42, "text": " no, sorry, this would be zero plus 25K."}, {"start": 1053.42, "end": 1054.8600000000001, "text": " So that would be 25K."}, {"start": 1054.8600000000001, "end": 1058.5800000000002, "text": " So that means we would return zero and 25,000 here"}, {"start": 1058.5800000000002, "end": 1059.8400000000001, "text": " as a result."}, {"start": 1061.5, "end": 1063.7, "text": " Obviously, in my case, I'm returning 50K,"}, {"start": 1063.7, "end": 1064.7, "text": " but bear with me."}, {"start": 1064.7, "end": 1069.0, "text": " Okay, so that's the first thing we need to understand."}, {"start": 1069.0, "end": 1070.98, "text": " Now those variables are gonna be used"}, {"start": 1070.98, "end": 1074.66, "text": " to split the embedding table across multiple devices."}, {"start": 1074.66, "end": 1076.38, "text": " That's the whole point."}, {"start": 1076.38, "end": 1077.66, "text": " So now you can see here,"}, {"start": 1077.66, "end": 1079.94, "text": " the number of embeddings would be 25,000"}, {"start": 1079.94, "end": 1082.98, "text": " if we had the topology I'm using."}, {"start": 1082.98, "end": 1085.1, "text": " We can skip this, we can skip this."}, {"start": 1085.1, "end": 1088.74, "text": " Those are just some bits and bytes or something,"}, {"start": 1088.74, "end": 1092.02, "text": " basically where they use integer eight precision,"}, {"start": 1092.02, "end": 1092.8600000000001, "text": " et cetera, et cetera."}, {"start": 1092.8600000000001, "end": 1095.3, "text": " We're gonna ignore that in this video."}, {"start": 1095.3, "end": 1096.6200000000001, "text": " So here is the interesting part."}, {"start": 1096.6200000000001, "end": 1099.3, "text": " So now we create the weight of the embedding table."}, {"start": 1099.3, "end": 1102.78, "text": " So we can see here, we are passing this variable,"}, {"start": 1102.78, "end": 1105.06, "text": " which would be 25,000 again,"}, {"start": 1105.06, "end": 1107.0, "text": " and we're passing the embedding dimension,"}, {"start": 1107.0, "end": 1109.34, "text": " which is 1,024, blah, blah, blah."}, {"start": 1109.34, "end": 1110.18, "text": " That's it."}, {"start": 1110.18, "end": 1113.0600000000002, "text": " So the weight would be, as you can see,"}, {"start": 1113.0600000000002, "end": 1116.94, "text": " 25,000 times 1,000 on one device"}, {"start": 1116.94, "end": 1121.0600000000002, "text": " and on those devices that have a rank zero,"}, {"start": 1121.0600000000002, "end": 1123.98, "text": " and then it will be additionally 25,000"}, {"start": 1123.98, "end": 1127.74, "text": " for those devices that have rank one,"}, {"start": 1127.74, "end": 1130.02, "text": " again, talking about model parallelism here."}, {"start": 1130.02, "end": 1132.8200000000002, "text": " Okay, so we just now initialize it and that's it."}, {"start": 1132.8200000000002, "end": 1135.5800000000002, "text": " So now you've seen for the first time"}, {"start": 1135.5800000000002, "end": 1138.1200000000001, "text": " how the model parallelism looks like."}, {"start": 1138.1200000000001, "end": 1140.0600000000002, "text": " So let me just exit here."}, {"start": 1140.06, "end": 1141.74, "text": " Let's now continue."}, {"start": 1141.74, "end": 
1143.86, "text": " The second thing we want to do here"}, {"start": 1143.86, "end": 1146.98, "text": " is create some positional embeddings."}, {"start": 1146.98, "end": 1149.1399999999999, "text": " Now these, because they are much smaller"}, {"start": 1149.1399999999999, "end": 1151.8, "text": " than the vocab embedding table,"}, {"start": 1151.8, "end": 1154.86, "text": " they're just going to allocate the whole thing"}, {"start": 1154.86, "end": 1157.46, "text": " across all of the different devices."}, {"start": 1157.46, "end": 1161.8999999999999, "text": " So we have like complete, we can see here 1,024,"}, {"start": 1161.8999999999999, "end": 1163.46, "text": " because that's gonna be our sequence"}, {"start": 1163.46, "end": 1166.72, "text": " and that's how many positionally encodings we need."}, {"start": 1166.72, "end": 1169.3600000000001, "text": " Okay, let's continue."}, {"start": 1170.44, "end": 1172.4, "text": " We just initialize those"}, {"start": 1172.4, "end": 1175.4, "text": " and let's see now what's happening here."}, {"start": 1175.4, "end": 1177.0, "text": " We can skip all of those."}, {"start": 1177.0, "end": 1179.2, "text": " We just add some dropout, blah, blah, blah."}, {"start": 1179.2, "end": 1181.52, "text": " I keep entering this function, I'm not sure why,"}, {"start": 1181.52, "end": 1184.68, "text": " but we now have the embedding and let's continue."}, {"start": 1184.68, "end": 1186.72, "text": " Now let me just update the diagram here."}, {"start": 1186.72, "end": 1188.26, "text": " I think this is gonna be useful."}, {"start": 1188.26, "end": 1191.16, "text": " So let me just take some different color."}, {"start": 1191.16, "end": 1194.38, "text": " So we have the embedding table currently placed like this."}, {"start": 1194.38, "end": 1196.7600000000002, "text": " So we have embedding table here"}, {"start": 1196.7600000000002, "end": 1198.48, "text": " and we have embedding table here."}, {"start": 1199.72, "end": 1201.88, "text": " And the size here is 25K"}, {"start": 1202.92, "end": 1205.8000000000002, "text": " and the size here is 25K as well."}, {"start": 1205.8000000000002, "end": 1208.42, "text": " Okay, let's not go back to decode."}, {"start": 1210.48, "end": 1212.5200000000002, "text": " Now we have the parallel transformer."}, {"start": 1212.5200000000002, "end": 1215.64, "text": " Let's see how the transformer is going to be split."}, {"start": 1215.64, "end": 1219.3200000000002, "text": " Okay, so let's enter the parallel transformer"}, {"start": 1220.2, "end": 1221.4, "text": " and here it is."}, {"start": 1221.4, "end": 1225.96, "text": " Okay, again, skipping these details until we get here."}, {"start": 1225.96, "end": 1227.8200000000002, "text": " So here is an interesting assert."}, {"start": 1227.8200000000002, "end": 1231.2800000000002, "text": " So the number of layers has to be divisible"}, {"start": 1231.2800000000002, "end": 1235.44, "text": " by the world size for the pipeline parallelism."}, {"start": 1235.44, "end": 1238.38, "text": " In the running example I'm using, it's gonna be two."}, {"start": 1238.38, "end": 1240.76, "text": " So that means you have to be like,"}, {"start": 1240.76, "end": 1243.0800000000002, "text": " the number of layers has to be divisible by two."}, {"start": 1243.0800000000002, "end": 1246.16, "text": " In my actual setup, I have, this will be one, so yeah."}, {"start": 1246.16, "end": 1248.0400000000002, "text": " Okay, now this is the interesting line."}, {"start": 1248.0400000000002, 
"end": 1251.2, "text": " So we see here that we divide the number of layers"}, {"start": 1251.2, "end": 1253.42, "text": " by the number of stages."}, {"start": 1253.42, "end": 1255.68, "text": " And again, so because we have 24,"}, {"start": 1255.68, "end": 1260.4, "text": " that means we would now have 12 layers"}, {"start": 1260.4, "end": 1263.8400000000001, "text": " being allocated on this particular device, okay?"}, {"start": 1263.8400000000001, "end": 1266.0, "text": " So again, keep in mind, this code,"}, {"start": 1266.0, "end": 1268.02, "text": " so the thing I'm just stepping through,"}, {"start": 1268.02, "end": 1270.16, "text": " this same thing would be happening in parallel"}, {"start": 1270.16, "end": 1271.44, "text": " on four devices."}, {"start": 1271.44, "end": 1273.74, "text": " And each of those devices have different contexts"}, {"start": 1273.74, "end": 1276.48, "text": " because they lie in a different part of the topology"}, {"start": 1276.48, "end": 1279.1200000000001, "text": " and because of that they will be entering different types,"}, {"start": 1279.12, "end": 1281.1599999999999, "text": " different code, they will be going through the different"}, {"start": 1281.1599999999999, "end": 1283.0, "text": " code paths, that's the main idea."}, {"start": 1283.0, "end": 1284.9599999999998, "text": " Hopefully you understand that."}, {"start": 1284.9599999999998, "end": 1286.6799999999998, "text": " Let's continue here."}, {"start": 1286.6799999999998, "end": 1289.2399999999998, "text": " Okay, so now we create the offset."}, {"start": 1289.2399999999998, "end": 1293.08, "text": " So we grab the rank and we multiply it"}, {"start": 1293.08, "end": 1294.4399999999998, "text": " by the number of layers."}, {"start": 1294.4399999999998, "end": 1298.08, "text": " So that means that the offset for these two devices here"}, {"start": 1298.08, "end": 1301.6, "text": " is gonna be zero, for these two is gonna be zero."}, {"start": 1301.6, "end": 1309.6, "text": " And for the two ones here is gonna be 12, right?"}, {"start": 1309.6799999999998, "end": 1310.6799999999998, "text": " So let's go back here."}, {"start": 1310.6799999999998, "end": 1313.5, "text": " So it's gonna be one, the rank will be one for those"}, {"start": 1313.5, "end": 1315.6, "text": " and then times the number of layers, 12."}, {"start": 1315.6, "end": 1317.5, "text": " So offset is gonna be 12 for them."}, {"start": 1317.5, "end": 1319.8799999999999, "text": " So let's see how offset plays a role here."}, {"start": 1319.8799999999999, "end": 1324.5, "text": " So we now form 12 layers and we have the offset here."}, {"start": 1324.5, "end": 1327.6599999999999, "text": " So you can see that we are passing that index information"}, {"start": 1327.6599999999999, "end": 1328.84, "text": " into this build layer function."}, {"start": 1328.84, "end": 1331.0, "text": " So let's enter the build layer function."}, {"start": 1331.0, "end": 1332.6, "text": " Here it is."}, {"start": 1332.6, "end": 1335.52, "text": " And there we start building the actual transformer layers."}, {"start": 1335.52, "end": 1337.64, "text": " We'll do this 12 times."}, {"start": 1337.64, "end": 1341.04, "text": " So let me just enter this layer here."}, {"start": 1341.04, "end": 1344.8, "text": " Let me hit F5 and we are here."}, {"start": 1344.8, "end": 1348.34, "text": " So again, let me just focus on the important parts,"}, {"start": 1348.34, "end": 1350.32, "text": " ignoring many details obviously,"}, {"start": 1350.32, "end": 
1352.76, "text": " otherwise it will be too bothersome."}, {"start": 1352.76, "end": 1354.1, "text": " Okay, so first things first,"}, {"start": 1354.1, "end": 1356.8, "text": " we want to instantiate a layer norm."}, {"start": 1356.8, "end": 1358.48, "text": " Okay, let's see how the layer norm looks like."}, {"start": 1358.48, "end": 1361.76, "text": " The layer norm has some fused components,"}, {"start": 1361.76, "end": 1362.8600000000001, "text": " which is interesting."}, {"start": 1362.8600000000001, "end": 1366.3, "text": " So that means we'll be using optimized CUDA kernels,"}, {"start": 1366.3, "end": 1368.84, "text": " fancy word for saying functions,"}, {"start": 1368.84, "end": 1371.28, "text": " implemented in CUDA library"}, {"start": 1371.28, "end": 1373.64, "text": " that are running super fast on GPUs obviously,"}, {"start": 1373.64, "end": 1375.56, "text": " on Nvidia GPUs."}, {"start": 1375.56, "end": 1377.92, "text": " So here you can see we import"}, {"start": 1377.92, "end": 1381.24, "text": " this fused mixed precision layer norm CUDA."}, {"start": 1381.24, "end": 1383.9, "text": " So let me show you where that was constructed."}, {"start": 1383.9, "end": 1385.1, "text": " And I actually showed you that"}, {"start": 1385.1, "end": 1386.0, "text": " in the beginning of the video."}, {"start": 1386.0, "end": 1388.72, "text": " So here, let me just move this thing here."}, {"start": 1388.72, "end": 1391.12, "text": " So if I were to copy paste this here,"}, {"start": 1391.12, "end": 1393.8, "text": " so you can see here we are loading"}, {"start": 1393.8, "end": 1396.4, "text": " that precise variable is loaded"}, {"start": 1396.4, "end": 1398.84, "text": " using the CPP extension load helper."}, {"start": 1398.84, "end": 1399.76, "text": " So we kind of passed,"}, {"start": 1399.76, "end": 1402.16, "text": " I kind of showed you this function before,"}, {"start": 1402.16, "end": 1404.82, "text": " and that's where it was initialized."}, {"start": 1404.82, "end": 1407.32, "text": " Now let's get back to the actual code."}, {"start": 1407.32, "end": 1409.68, "text": " Let me go to the top of the stack."}, {"start": 1409.68, "end": 1412.08, "text": " We import that kernel"}, {"start": 1412.08, "end": 1415.36, "text": " and yeah, we're gonna then use it a bit later."}, {"start": 1415.36, "end": 1417.28, "text": " So everything else is not that interesting."}, {"start": 1417.28, "end": 1419.6799999999998, "text": " I'm not gonna focus on the layer norm"}, {"start": 1419.6799999999998, "end": 1423.78, "text": " because this one is not going to be split across devices."}, {"start": 1425.6399999999999, "end": 1427.8, "text": " Okay, let me exit this."}, {"start": 1427.8, "end": 1431.26, "text": " And let me just kind of move this a bit to the left."}, {"start": 1431.26, "end": 1432.82, "text": " And we are out."}, {"start": 1432.82, "end": 1434.1599999999999, "text": " Now let's see the parallel attention."}, {"start": 1434.1599999999999, "end": 1435.76, "text": " This is gonna be the interesting part."}, {"start": 1435.76, "end": 1437.9599999999998, "text": " So let's enter the parallel attention."}, {"start": 1437.9599999999998, "end": 1439.28, "text": " I'm gonna hit F5."}, {"start": 1440.1799999999998, "end": 1442.04, "text": " Let's go to the part that we care about."}, {"start": 1442.04, "end": 1444.28, "text": " And that's the part where we split."}, {"start": 1444.28, "end": 1446.46, "text": " So here we can see,"}, {"start": 1448.3999999999999, "end": 1451.26, 
"text": " we can step over all of these."}, {"start": 1451.26, "end": 1453.12, "text": " The magic starts happening right here."}, {"start": 1453.12, "end": 1456.7, "text": " So again, model parallel world size is gonna be equal to,"}, {"start": 1456.7, "end": 1457.8799999999999, "text": " so it's gonna be two."}, {"start": 1457.8799999999999, "end": 1460.36, "text": " And now we divide the projection size by two,"}, {"start": 1461.44, "end": 1463.32, "text": " blah, blah, blah."}, {"start": 1463.32, "end": 1468.32, "text": " So just the devising number of attention heads to two by two."}, {"start": 1468.3999999999999, "end": 1470.74, "text": " And now we get to the interesting part."}, {"start": 1470.74, "end": 1475.74, "text": " So this is the column parallel linear feed forward layer."}, {"start": 1475.84, "end": 1477.04, "text": " That might be a mouthful,"}, {"start": 1477.04, "end": 1479.6, "text": " but let me remind you what that is."}, {"start": 1479.6, "end": 1482.6, "text": " So this is what we are actually building right now."}, {"start": 1482.6, "end": 1485.1200000000001, "text": " This is the self attention layer."}, {"start": 1485.1200000000001, "end": 1488.48, "text": " And what we are building right now are these,"}, {"start": 1488.48, "end": 1492.76, "text": " as you can see, column parallel layers."}, {"start": 1492.76, "end": 1495.1, "text": " So we take the weights and we split them"}, {"start": 1495.1, "end": 1497.74, "text": " in this column wise fashion."}, {"start": 1497.74, "end": 1502.74, "text": " And then we're gonna use, so we're gonna use these here."}, {"start": 1502.98, "end": 1507.7, "text": " The Q1, K1 and V1 are going to be used here,"}, {"start": 1507.7, "end": 1509.36, "text": " as you can see."}, {"start": 1509.36, "end": 1513.72, "text": " And then the Q2, K2 and V2 are gonna be used"}, {"start": 1513.72, "end": 1515.48, "text": " on the different device."}, {"start": 1515.48, "end": 1516.92, "text": " And that's what we are currently building."}, {"start": 1516.92, "end": 1520.1200000000001, "text": " That's how this code, that's what this code is implementing."}, {"start": 1520.1200000000001, "end": 1524.04, "text": " So let me enter the actual column parallel linear."}, {"start": 1524.04, "end": 1527.16, "text": " And you can see again, the weight matrix is being split"}, {"start": 1527.16, "end": 1532.16, "text": " in this column parallelism fashion."}, {"start": 1532.3200000000002, "end": 1535.5600000000002, "text": " So let me hit five, let me enter here."}, {"start": 1535.5600000000002, "end": 1536.78, "text": " Let's see where that's gonna happen."}, {"start": 1536.78, "end": 1537.76, "text": " So here is the world size."}, {"start": 1537.76, "end": 1539.88, "text": " Again, this is gonna be equal to two."}, {"start": 1539.88, "end": 1543.1200000000001, "text": " And then the output size is gonna be divided by two."}, {"start": 1544.48, "end": 1546.72, "text": " Okay, so let's continue."}, {"start": 1546.72, "end": 1550.5600000000002, "text": " And here is the actual, so we are now allocating the weights."}, {"start": 1552.0800000000002, "end": 1554.88, "text": " This is going to, so this would,"}, {"start": 1554.88, "end": 1557.8000000000002, "text": " if we had the topology I'm using as a running example,"}, {"start": 1557.8000000000002, "end": 1560.96, "text": " this would be, as you can see here, divided by two."}, {"start": 1560.96, "end": 1563.3200000000002, "text": " So that means we're literally splitting it column wise."}, {"start": 
1563.3200000000002, "end": 1565.24, "text": " That's the whole point, okay?"}, {"start": 1565.24, "end": 1567.16, "text": " And then once we have those weights,"}, {"start": 1567.16, "end": 1570.4, "text": " we just initialize the weights and there's some bias."}, {"start": 1570.4, "end": 1571.5400000000002, "text": " Let's keep that part."}, {"start": 1571.5400000000002, "end": 1573.98, "text": " It's not that interesting."}, {"start": 1573.98, "end": 1577.0, "text": " So let's exit here, again, entering this thing."}, {"start": 1577.0, "end": 1580.0600000000002, "text": " Okay, so that's the first part."}, {"start": 1580.0600000000002, "end": 1581.4, "text": " Let's see what else."}, {"start": 1581.4, "end": 1586.4, "text": " So now we have to create some softmax stuff"}, {"start": 1587.2800000000002, "end": 1590.8000000000002, "text": " that's going to be this part, the softmax."}, {"start": 1590.8000000000002, "end": 1593.22, "text": " The only difference is this is gonna be optimized."}, {"start": 1593.22, "end": 1597.44, "text": " So obviously there are no learnable weights in this one."}, {"start": 1597.44, "end": 1599.3200000000002, "text": " So I'm gonna just skip it."}, {"start": 1599.3200000000002, "end": 1601.92, "text": " We're not sharding it."}, {"start": 1601.92, "end": 1604.96, "text": " So now we create the dropout, blah, blah, blah."}, {"start": 1604.96, "end": 1606.68, "text": " And there is the row parallel linear."}, {"start": 1606.68, "end": 1608.2, "text": " So this is the second component."}, {"start": 1608.2, "end": 1610.92, "text": " So let me go back again to the OneNote."}, {"start": 1610.92, "end": 1612.52, "text": " And here is the second part."}, {"start": 1612.52, "end": 1614.2, "text": " So here you can see the other part"}, {"start": 1614.2, "end": 1615.8000000000002, "text": " of the self-attention layer."}, {"start": 1615.8000000000002, "end": 1618.3200000000002, "text": " We have this linear layer and we split it"}, {"start": 1618.3200000000002, "end": 1620.6000000000001, "text": " row wise this time, B1 and B2."}, {"start": 1620.6000000000001, "end": 1622.3200000000002, "text": " So you can see here."}, {"start": 1622.3200000000002, "end": 1624.16, "text": " So let me go back to the code"}, {"start": 1624.16, "end": 1626.24, "text": " and let me enter the row parallel linear."}, {"start": 1626.24, "end": 1629.1200000000001, "text": " So if I enter here, you can see again the diagram"}, {"start": 1629.1200000000001, "end": 1631.16, "text": " and let me put a break point here."}, {"start": 1631.16, "end": 1632.68, "text": " Let me hit that five."}, {"start": 1632.68, "end": 1634.1200000000001, "text": " So here we are."}, {"start": 1634.1200000000001, "end": 1636.22, "text": " Again, world size, it's gonna be two."}, {"start": 1637.3600000000001, "end": 1639.44, "text": " And let me continue."}, {"start": 1639.44, "end": 1641.96, "text": " And again, this time you can see"}, {"start": 1641.96, "end": 1644.8, "text": " that the input size per partition"}, {"start": 1644.8, "end": 1647.04, "text": " is the one that's being divided."}, {"start": 1647.04, "end": 1651.68, "text": " So here the input will be divided, not the output size."}, {"start": 1651.68, "end": 1655.1200000000001, "text": " And because of that, we are doing row wise splitting, right?"}, {"start": 1655.1200000000001, "end": 1655.96, "text": " That's it."}, {"start": 1655.96, "end": 1657.3200000000002, "text": " That's the difference."}, {"start": 1657.3200000000002, "end": 1659.76, 
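The difference between the column-parallel QKV projection and the row-parallel output projection comes down to which dimension of the weight is divided by the tensor-parallel world size. A minimal sketch with the 1,024-dim hidden size from the walkthrough; the (out_features, in_features) weight layout, as in torch.nn.Linear, is an assumption for illustration.

```python
import torch

HIDDEN = 1024   # hidden size from the walkthrough
TP_SIZE = 2     # tensor-model-parallel world size

# Column-parallel (the fused QKV projection): the *output* dimension is divided,
# so each rank holds the Q/K/V columns for half of the attention heads.
qkv_rows_per_rank = 3 * HIDDEN // TP_SIZE           # 1536 out of 3072 outputs
qkv_shard = torch.empty(qkv_rows_per_rank, HIDDEN)

# Row-parallel (the output projection): the *input* dimension is divided, so each
# rank multiplies only its local half of the features and the partial results
# are summed with an all-reduce.
dense_cols_per_rank = HIDDEN // TP_SIZE             # 512 out of 1024 inputs
dense_shard = torch.empty(HIDDEN, dense_cols_per_rank)

print(qkv_shard.shape)    # torch.Size([1536, 1024])
print(dense_shard.shape)  # torch.Size([1024, 512])
```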
"text": " Again, I'm gonna skip everything else here."}, {"start": 1659.76, "end": 1661.96, "text": " Let me kind of step out of this function,"}, {"start": 1661.96, "end": 1665.3600000000001, "text": " shift F11 does that on Windows keyboard."}, {"start": 1667.16, "end": 1668.0, "text": " And here we are."}, {"start": 1668.0, "end": 1671.68, "text": " So we have the row parallel linear instantiated."}, {"start": 1671.68, "end": 1672.64, "text": " Let's continue."}, {"start": 1672.64, "end": 1674.08, "text": " We don't have row three embeddings."}, {"start": 1674.08, "end": 1676.4, "text": " So we can skip this part as well."}, {"start": 1676.4, "end": 1677.4, "text": " And here we are."}, {"start": 1677.4, "end": 1679.2, "text": " So that was the parallel attention."}, {"start": 1679.2, "end": 1680.32, "text": " Okay."}, {"start": 1680.32, "end": 1682.44, "text": " Again, let me just update the drawing."}, {"start": 1682.44, "end": 1686.52, "text": " We have the parallel attention being sharded"}, {"start": 1686.52, "end": 1689.68, "text": " across two devices in this running example."}, {"start": 1689.68, "end": 1693.36, "text": " So that means we'll have half of the weights here"}, {"start": 1694.64, "end": 1696.9, "text": " and half of the weights here."}, {"start": 1696.9, "end": 1699.88, "text": " Obviously we'll need 12 of these."}, {"start": 1699.88, "end": 1702.92, "text": " So I kind of didn't do it justice here,"}, {"start": 1702.92, "end": 1705.52, "text": " but yeah, you can imagine we'll have 12 of these"}, {"start": 1705.52, "end": 1708.0, "text": " transformer self-attention layers."}, {"start": 1708.88, "end": 1711.4, "text": " So let's go back to the code."}, {"start": 1711.4, "end": 1713.02, "text": " Let me exit here."}, {"start": 1713.02, "end": 1714.96, "text": " So we have layer norm again."}, {"start": 1714.96, "end": 1716.0400000000002, "text": " I won't bother."}, {"start": 1716.0400000000002, "end": 1718.2, "text": " I'm just gonna exit this layer norm."}, {"start": 1719.8400000000001, "end": 1722.0400000000002, "text": " Let's see what else we skip the decoder part."}, {"start": 1722.0400000000002, "end": 1723.8400000000001, "text": " So now there is the parallel MLP."}, {"start": 1723.8400000000001, "end": 1726.24, "text": " So again, how the transformer block looks like,"}, {"start": 1726.24, "end": 1730.76, "text": " we have the self-attention module"}, {"start": 1730.76, "end": 1732.2, "text": " and then we have the MLP part."}, {"start": 1732.2, "end": 1735.54, "text": " So now here we'll see first the column-wise splitting"}, {"start": 1735.54, "end": 1737.2, "text": " and then the row-wise splitting."}, {"start": 1737.2, "end": 1739.28, "text": " So this is the diagram we'll implement."}, {"start": 1739.28, "end": 1741.8, "text": " Let's go back to the code."}, {"start": 1741.8, "end": 1743.04, "text": " Let me show you this quickly."}, {"start": 1743.04, "end": 1744.6, "text": " And then this is the last thing I'll show you."}, {"start": 1744.6, "end": 1746.08, "text": " And then we are pretty much done"}, {"start": 1746.08, "end": 1747.84, "text": " with this model construction."}, {"start": 1747.84, "end": 1749.52, "text": " So let me hit F5."}, {"start": 1749.52, "end": 1750.44, "text": " Let me enter here."}, {"start": 1750.44, "end": 1752.36, "text": " So again, column parallel."}, {"start": 1753.36, "end": 1754.6, "text": " And I'm just gonna skip."}, {"start": 1754.6, "end": 1758.1599999999999, "text": " I'm just going to disable the 
breakpoints"}, {"start": 1758.1599999999999, "end": 1760.36, "text": " because it's the same logic as before, right?"}, {"start": 1760.36, "end": 1761.48, "text": " We've seen this."}, {"start": 1761.48, "end": 1764.06, "text": " This is the same logic as before."}, {"start": 1764.06, "end": 1766.4599999999998, "text": " So I'm just gonna skip over all of that."}, {"start": 1766.4599999999998, "end": 1767.9599999999998, "text": " Let's get here."}, {"start": 1769.6799999999998, "end": 1771.48, "text": " Now some activation stuff."}, {"start": 1771.48, "end": 1774.6999999999998, "text": " Here is the row parallel linear and that's it."}, {"start": 1775.6, "end": 1777.56, "text": " And that's the one that has the 4H"}, {"start": 1777.56, "end": 1780.6399999999999, "text": " usually is the common convention."}, {"start": 1780.64, "end": 1785.64, "text": " So the hidden layer will have 4X more weights"}, {"start": 1785.8400000000001, "end": 1789.74, "text": " than the input and output, dimension, sorry."}, {"start": 1789.74, "end": 1790.5800000000002, "text": " Okay."}, {"start": 1791.64, "end": 1793.4, "text": " Again, let's exit here."}, {"start": 1793.4, "end": 1796.5200000000002, "text": " And that's our basically transformer layer."}, {"start": 1796.5200000000002, "end": 1798.68, "text": " Again, you can always follow here where we are."}, {"start": 1798.68, "end": 1801.0400000000002, "text": " We're currently implementing the transformer layer."}, {"start": 1801.0400000000002, "end": 1804.72, "text": " So I'm gonna exit here and we do this 12 times."}, {"start": 1804.72, "end": 1808.5800000000002, "text": " So I'm gonna, again, let me just make sure it's disabled."}, {"start": 1808.58, "end": 1812.1999999999998, "text": " I'm gonna enable the breakpoint here, hit F5"}, {"start": 1812.1999999999998, "end": 1813.76, "text": " and we have 12 layers."}, {"start": 1813.76, "end": 1815.6599999999999, "text": " So again, let me update the drawing."}, {"start": 1817.1999999999998, "end": 1818.3999999999999, "text": " Here is the drawing."}, {"start": 1818.3999999999999, "end": 1820.96, "text": " We now have 12 layers here and here."}, {"start": 1822.96, "end": 1825.62, "text": " So we have 12 layers here and here."}, {"start": 1825.62, "end": 1829.48, "text": " And obviously, because this is running in parallel,"}, {"start": 1829.48, "end": 1834.48, "text": " these two devices would also have 12 layers here."}, {"start": 1834.84, "end": 1838.08, "text": " So 12 layers and 12 layers."}, {"start": 1838.08, "end": 1839.6399999999999, "text": " The difference is these two,"}, {"start": 1839.6399999999999, "end": 1842.98, "text": " because they are in the rank zero of the pipeline parallelism,"}, {"start": 1842.98, "end": 1845.8, "text": " they additionally have the embedding table sharded."}, {"start": 1845.8, "end": 1846.6399999999999, "text": " Cool."}, {"start": 1846.6399999999999, "end": 1848.52, "text": " Guys, let me know how you like the video so far."}, {"start": 1848.52, "end": 1850.24, "text": " Leave any feedback if you have any."}, {"start": 1850.24, "end": 1851.1599999999999, "text": " That would be super cool."}, {"start": 1851.1599999999999, "end": 1852.6399999999999, "text": " So again, post-process."}, {"start": 1852.6399999999999, "end": 1855.0, "text": " That's going to be active only"}, {"start": 1855.0, "end": 1857.0, "text": " on the last stage of the pipeline."}, {"start": 1858.36, "end": 1861.28, "text": " So that means only those devices will have the layer norm,"}, 
{"start": 1861.28, "end": 1862.4399999999998, "text": " but I'm gonna skip the,"}, {"start": 1862.4399999999998, "end": 1865.1599999999999, "text": " I won't enter the layer norm because we already seen it."}, {"start": 1865.1599999999999, "end": 1867.32, "text": " That's the optimized layer norm."}, {"start": 1867.32, "end": 1868.84, "text": " And that's it."}, {"start": 1868.84, "end": 1871.6799999999998, "text": " Again, that was the parallel transformer."}, {"start": 1873.04, "end": 1873.8799999999999, "text": " Let's continue."}, {"start": 1873.8799999999999, "end": 1875.56, "text": " We don't have the decoder part."}, {"start": 1875.56, "end": 1878.1799999999998, "text": " We don't have the pooler, so we can exit here."}, {"start": 1878.1799999999998, "end": 1879.24, "text": " We can exit here."}, {"start": 1879.24, "end": 1881.6, "text": " And we have our language model being constructed."}, {"start": 1881.6, "end": 1884.4399999999998, "text": " Again, I'm not sure why I'm entering this thing."}, {"start": 1884.4399999999998, "end": 1887.6799999999998, "text": " So I just, once I enter it, I just do step out"}, {"start": 1887.6799999999998, "end": 1889.96, "text": " and that's how I exit in case you're wondering."}, {"start": 1889.96, "end": 1891.8799999999999, "text": " Okay, guys, here we are, GPT model."}, {"start": 1891.8799999999999, "end": 1894.04, "text": " We just constructed it."}, {"start": 1894.04, "end": 1896.9199999999998, "text": " Now there is this initialized word embedding."}, {"start": 1896.92, "end": 1901.28, "text": " So because I have, in my actual setup,"}, {"start": 1901.28, "end": 1904.24, "text": " I have a single GPU, so I'll just hit return."}, {"start": 1904.24, "end": 1907.04, "text": " But in general, if we had the actual topology I'm using"}, {"start": 1907.04, "end": 1910.64, "text": " as the running example, we'd be hitting this code here."}, {"start": 1910.64, "end": 1912.72, "text": " And let me show you why this is interesting."}, {"start": 1912.72, "end": 1913.88, "text": " So they mentioned it here."}, {"start": 1913.88, "end": 1918.88, "text": " So parameters are shared between the word embeddings layer."}, {"start": 1919.3200000000002, "end": 1921.0800000000002, "text": " So that's the common thing you always do."}, {"start": 1921.0800000000002, "end": 1925.04, "text": " So you have the embedding table being shared between,"}, {"start": 1925.04, "end": 1927.72, "text": " so mapping the tokens into the embedding vectors."}, {"start": 1927.72, "end": 1931.1599999999999, "text": " And then at the output, you also use that embedding table"}, {"start": 1931.1599999999999, "end": 1934.6, "text": " to map the embedding vectors into the vocab."}, {"start": 1934.6, "end": 1938.1599999999999, "text": " So basically distribution across the various words,"}, {"start": 1938.1599999999999, "end": 1941.54, "text": " subtokens from your, or tokens from your vocabulary."}, {"start": 1942.62, "end": 1943.72, "text": " So those are tied."}, {"start": 1943.72, "end": 1947.12, "text": " And you can see here, they have a nice trick that they do."}, {"start": 1947.12, "end": 1952.12, "text": " So if this was running on the last stage of the pipeline,"}, {"start": 1952.12, "end": 1954.9599999999998, "text": " then they would create embedding table."}, {"start": 1954.9599999999998, "end": 1957.1999999999998, "text": " Again, it's gonna be a parallel embedding table."}, {"start": 1957.1999999999998, "end": 1959.32, "text": " So that means it's gonna be shorted across 
two devices"}, {"start": 1959.32, "end": 1961.6399999999999, "text": " of the last stage of the pipeline."}, {"start": 1961.6399999999999, "end": 1963.7199999999998, "text": " And then this is the nice trick."}, {"start": 1963.7199999999998, "end": 1967.2399999999998, "text": " So basically what they do is they do all reduce."}, {"start": 1967.2399999999998, "end": 1968.84, "text": " So ensure that first and last stages"}, {"start": 1968.84, "end": 1970.6799999999998, "text": " have the same initial parameter values."}, {"start": 1970.6799999999998, "end": 1973.52, "text": " So because this was not independently allocated"}, {"start": 1973.52, "end": 1977.32, "text": " and because they are lying on different devices,"}, {"start": 1977.32, "end": 1980.52, "text": " because of that, they have to do all reduce."}, {"start": 1980.52, "end": 1982.56, "text": " So you can see here, if we are hitting"}, {"start": 1982.56, "end": 1986.16, "text": " either the first stage of the pipeline or the last stage,"}, {"start": 1986.16, "end": 1990.16, "text": " apply the all reduce method on this word embedding,"}, {"start": 1990.16, "end": 1992.32, "text": " or word embedding's weight."}, {"start": 1992.32, "end": 1994.92, "text": " If you're not familiar with these collective operations,"}, {"start": 1994.92, "end": 1998.56, "text": " I did not explain them so far, just check out the Viki page."}, {"start": 1998.56, "end": 2000.12, "text": " They have a nice visual explanation."}, {"start": 2000.12, "end": 2000.96, "text": " It's fairly simple."}, {"start": 2000.96, "end": 2003.8799999999999, "text": " Roughly what's gonna happen is assume that the first stage"}, {"start": 2003.8799999999999, "end": 2007.2, "text": " will literally receive the weights from the last stage,"}, {"start": 2007.2, "end": 2009.4, "text": " and then they're gonna divide it by two."}, {"start": 2009.4, "end": 2012.18, "text": " And if the last stage does the same thing,"}, {"start": 2012.18, "end": 2013.72, "text": " you'll end up with the same result."}, {"start": 2013.72, "end": 2015.5600000000002, "text": " And that means you'll have the same weights"}, {"start": 2015.5600000000002, "end": 2018.3600000000001, "text": " both in the first stage and in the last stage,"}, {"start": 2018.3600000000001, "end": 2020.6000000000001, "text": " and thus you've synchronized the weights."}, {"start": 2020.6000000000001, "end": 2021.64, "text": " As simple as that."}, {"start": 2021.64, "end": 2025.24, "text": " Okay, guys, hopefully that was interesting."}, {"start": 2025.24, "end": 2026.5600000000002, "text": " And hopefully that makes sense"}, {"start": 2026.5600000000002, "end": 2028.8000000000002, "text": " with the drawing I'm using in OneNote."}, {"start": 2028.8000000000002, "end": 2029.96, "text": " So let me exit here."}, {"start": 2029.96, "end": 2034.96, "text": " So here is the, now some stuff that's not that vital."}, {"start": 2035.88, "end": 2039.24, "text": " I'm gonna skip all of this, put the break point here,"}, {"start": 2039.24, "end": 2040.42, "text": " hit F5."}, {"start": 2041.32, "end": 2042.52, "text": " Let's see what else."}, {"start": 2042.52, "end": 2047.52, "text": " So we push all of these models onto the GPU."}, {"start": 2049.92, "end": 2054.24, "text": " We use this float, we convert them to float 16 weights."}, {"start": 2054.24, "end": 2056.92, "text": " So I'm going to just, basically let me enable"}, {"start": 2056.92, "end": 2059.24, "text": " the break points now."}, {"start": 2059.24, "end": 
2060.84, "text": " So I'm gonna hit this one a bit later,"}, {"start": 2060.84, "end": 2063.62, "text": " so we will see how that is going to be used."}, {"start": 2063.62, "end": 2068.62, "text": " Okay, local DDP data distributed parallelism."}, {"start": 2069.94, "end": 2072.5, "text": " We're gonna ignore that for the time being."}, {"start": 2072.5, "end": 2074.2999999999997, "text": " And let's just return the model."}, {"start": 2075.22, "end": 2078.7799999999997, "text": " Okay, so again, because I have load set to none,"}, {"start": 2078.7799999999997, "end": 2080.2999999999997, "text": " I will not be loading the checkpoints,"}, {"start": 2080.2999999999997, "end": 2083.46, "text": " which means I'll basically be using random weights."}, {"start": 2083.46, "end": 2085.38, "text": " Okay, now we build the data set."}, {"start": 2085.38, "end": 2087.74, "text": " Let me quickly walk you through this one."}, {"start": 2087.74, "end": 2091.02, "text": " Although that's not that relevant, so tokenizer."}, {"start": 2091.02, "end": 2096.02, "text": " So again, I just load this random dummy file I created."}, {"start": 2097.54, "end": 2099.66, "text": " And you can see what it has."}, {"start": 2099.66, "end": 2100.82, "text": " Literally I copy pasted."}, {"start": 2100.82, "end": 2105.82, "text": " I actually took a single paragraph from WikiText data set"}, {"start": 2106.22, "end": 2107.86, "text": " and just copy pasted a couple of times"}, {"start": 2107.86, "end": 2109.32, "text": " just to have something there."}, {"start": 2110.34, "end": 2112.3, "text": " So now we just do some processing,"}, {"start": 2112.3, "end": 2114.3, "text": " the tokenization, blah, blah, blah."}, {"start": 2114.3, "end": 2115.56, "text": " Okay, let me just exit here."}, {"start": 2115.56, "end": 2117.54, "text": " This is not interesting."}, {"start": 2117.54, "end": 2119.86, "text": " So we then tokenize the text."}, {"start": 2119.86, "end": 2121.9, "text": " Okay, I'm just hitting the breakpoints."}, {"start": 2121.9, "end": 2123.42, "text": " So this is some processing that they do."}, {"start": 2123.42, "end": 2125.26, "text": " As you can see, a lot of details"}, {"start": 2125.26, "end": 2128.78, "text": " that I'm trying to unsuccessfully ignore."}, {"start": 2128.78, "end": 2132.1800000000003, "text": " So we now do the tokenization and you can see here,"}, {"start": 2132.1800000000003, "end": 2133.94, "text": " now we have the, let me show you"}, {"start": 2133.94, "end": 2136.02, "text": " how that's gonna look like."}, {"start": 2136.02, "end": 2140.2400000000002, "text": " Tokenized data is just a list of tokens of our text."}, {"start": 2140.2400000000002, "end": 2144.1, "text": " So the text was tokenized and we have 448 tokens,"}, {"start": 2144.1, "end": 2145.6800000000003, "text": " the smallest data set ever."}, {"start": 2145.68, "end": 2150.0, "text": " Okay, now we use that to construct this LM data set."}, {"start": 2151.1, "end": 2154.7, "text": " Again, I'm not going to dig into a lot of details"}, {"start": 2154.7, "end": 2157.7999999999997, "text": " because that's just the language modeling detail"}, {"start": 2157.7999999999997, "end": 2160.3999999999996, "text": " and we are trying to focus on the 3D parallelism"}, {"start": 2160.3999999999996, "end": 2161.3999999999996, "text": " in this video."}, {"start": 2161.3999999999996, "end": 2163.72, "text": " Okay, let's exit here."}, {"start": 2163.72, "end": 2164.8399999999997, "text": " We have the data set."}, {"start": 
2164.8399999999997, "end": 2166.7999999999997, "text": " Now we build the data loader."}, {"start": 2166.7999999999997, "end": 2168.04, "text": " The reason I'm gonna show you this one"}, {"start": 2168.04, "end": 2171.14, "text": " is because it involves the data parallelism approach."}, {"start": 2171.14, "end": 2172.24, "text": " So here it is."}, {"start": 2172.24, "end": 2174.2, "text": " In PyTorch, it's fairly trivial."}, {"start": 2174.2, "end": 2177.7599999999998, "text": " So here you can see get data parallel world size."}, {"start": 2177.7599999999998, "end": 2180.8999999999996, "text": " So now additionally, I did not show you this so far,"}, {"start": 2180.8999999999996, "end": 2185.8999999999996, "text": " but imagine having this same group of devices,"}, {"start": 2186.4399999999996, "end": 2188.08, "text": " like four devices like this."}, {"start": 2188.08, "end": 2189.8999999999996, "text": " So I'm just gonna change the color."}, {"start": 2189.8999999999996, "end": 2194.12, "text": " So imagine having this same group duplicated maybe twice"}, {"start": 2194.12, "end": 2197.12, "text": " or even four times, okay?"}, {"start": 2197.12, "end": 2199.2, "text": " And we're doing the same thing"}, {"start": 2199.2, "end": 2201.56, "text": " in each of these groups until now."}, {"start": 2201.56, "end": 2203.7599999999998, "text": " So now it's gonna be a bit different"}, {"start": 2203.76, "end": 2205.1800000000003, "text": " for each of these groups."}, {"start": 2205.1800000000003, "end": 2208.6400000000003, "text": " Okay, so let's say this is group zero and this is group one."}, {"start": 2208.6400000000003, "end": 2211.7200000000003, "text": " I'm gonna ignore, we're gonna have two groups, okay?"}, {"start": 2212.6000000000004, "end": 2214.1600000000003, "text": " So let's go back to the code."}, {"start": 2215.44, "end": 2217.88, "text": " We would get world size would be two."}, {"start": 2217.88, "end": 2219.6400000000003, "text": " And let me show you how this is implemented"}, {"start": 2219.6400000000003, "end": 2220.5200000000004, "text": " in the background."}, {"start": 2220.5200000000004, "end": 2223.84, "text": " It literally calls this distributed get world size"}, {"start": 2223.84, "end": 2226.38, "text": " and then you specify the particular group."}, {"start": 2226.38, "end": 2229.1600000000003, "text": " So here they specify the get data parallel group"}, {"start": 2229.16, "end": 2234.16, "text": " and that's why basically they get number two."}, {"start": 2234.16, "end": 2235.52, "text": " Okay, so some constants there,"}, {"start": 2235.52, "end": 2237.8799999999997, "text": " I will not dig deeper than that."}, {"start": 2237.8799999999997, "end": 2239.8799999999997, "text": " Let's continue, let's get the rank."}, {"start": 2239.8799999999997, "end": 2242.52, "text": " So now where the rank is zero for the left group,"}, {"start": 2242.52, "end": 2245.7599999999998, "text": " so rank would be zero again for this group here,"}, {"start": 2245.7599999999998, "end": 2248.24, "text": " it will be one for these devices here."}, {"start": 2248.24, "end": 2250.3599999999997, "text": " So that's how we'd have a differentiation."}, {"start": 2251.66, "end": 2252.6, "text": " Let's continue."}, {"start": 2252.6, "end": 2255.58, "text": " We formed the distributed sampler, okay?"}, {"start": 2255.58, "end": 2258.18, "text": " And that one is going to be shifting"}, {"start": 2258.18, "end": 2261.3599999999997, "text": " a part of our batch to one group"}, 
{"start": 2261.3599999999997, "end": 2263.44, "text": " and the second part to the other group."}, {"start": 2263.44, "end": 2267.72, "text": " So that's gonna handle in the background by the PyTorch."}, {"start": 2267.72, "end": 2270.52, "text": " Okay, so now the data loader, blah, blah, blah,"}, {"start": 2270.52, "end": 2272.44, "text": " pinned memory, again, a lot of details,"}, {"start": 2272.44, "end": 2274.2799999999997, "text": " I'll have to skip those."}, {"start": 2274.2799999999997, "end": 2277.52, "text": " And now we enter the evaluate and print results."}, {"start": 2277.52, "end": 2280.08, "text": " So here is the actual function"}, {"start": 2280.08, "end": 2283.3399999999997, "text": " where we'll be doing a feed forward pass through the model."}, {"start": 2283.3399999999997, "end": 2286.68, "text": " So we just set it to eval mode."}, {"start": 2286.68, "end": 2288.96, "text": " We entered the TorchNodeGrad context"}, {"start": 2288.96, "end": 2290.12, "text": " because we're doing inference,"}, {"start": 2290.12, "end": 2292.3799999999997, "text": " we're not trying to train anything here."}, {"start": 2292.3799999999997, "end": 2296.9199999999996, "text": " And now we're going to fetch our first batch."}, {"start": 2296.9199999999996, "end": 2300.24, "text": " So I'm going to ignore how that works."}, {"start": 2300.24, "end": 2302.0, "text": " So I'm gonna disable the breakpoints,"}, {"start": 2302.0, "end": 2304.72, "text": " otherwise I would hit some breakpoints I left there."}, {"start": 2304.72, "end": 2306.56, "text": " So let me hit F10 and we have a batch."}, {"start": 2306.56, "end": 2310.48, "text": " So a batch is simply going to be,"}, {"start": 2310.48, "end": 2311.9199999999996, "text": " as you can see here, we have text"}, {"start": 2311.9199999999996, "end": 2314.6, "text": " and we have pet mask keys."}, {"start": 2314.6, "end": 2318.2, "text": " So text is going to be, I guess, eight times 1,024"}, {"start": 2318.2, "end": 2319.04, "text": " or something."}, {"start": 2319.7999999999997, "end": 2323.2, "text": " Yeah, so eight because we have a batch of size eight"}, {"start": 2323.2, "end": 2328.2, "text": " and 1,025 because our sequence that we're gonna pass"}, {"start": 2328.92, "end": 2331.02, "text": " to the transformer is gonna be 1,024,"}, {"start": 2331.02, "end": 2335.0, "text": " but it's 1,025 because we'll be using one as the target."}, {"start": 2335.0, "end": 2337.0, "text": " And so you're gonna see that in a second."}, {"start": 2338.0, "end": 2339.4, "text": " So yeah, hopefully that makes sense."}, {"start": 2339.4, "end": 2341.7599999999998, "text": " Like you literally take the, it's 1,025"}, {"start": 2341.76, "end": 2345.0, "text": " because you take the first 1,024 as the input"}, {"start": 2345.0, "end": 2347.6400000000003, "text": " and then you're trying to predict shifted by one"}, {"start": 2347.6400000000003, "end": 2349.96, "text": " to the right, you try to predict that sequence."}, {"start": 2349.96, "end": 2353.48, "text": " And that's why you have 1,025 here."}, {"start": 2353.48, "end": 2354.32, "text": " Okay, let's continue."}, {"start": 2354.32, "end": 2356.1200000000003, "text": " Let's enter the forward pass."}, {"start": 2356.1200000000003, "end": 2357.6000000000004, "text": " Let's see what's going on here."}, {"start": 2357.6000000000004, "end": 2360.5600000000004, "text": " So some batch processing, again,"}, {"start": 2360.5600000000004, "end": 2362.1600000000003, "text": " I'm gonna ignore all of 
that"}, {"start": 2362.1600000000003, "end": 2366.4, "text": " because that's just a language modeling standard stuff."}, {"start": 2366.4, "end": 2367.6000000000004, "text": " This is the interesting part."}, {"start": 2367.6000000000004, "end": 2369.6400000000003, "text": " So this is the receive forward."}, {"start": 2369.6400000000003, "end": 2370.92, "text": " So what's going on here?"}, {"start": 2370.92, "end": 2372.4, "text": " So let's enter there."}, {"start": 2373.7200000000003, "end": 2376.76, "text": " Let me now enable the breakpoints."}, {"start": 2376.76, "end": 2379.4, "text": " So here what happens is the following."}, {"start": 2379.4, "end": 2382.62, "text": " So if we are in the first stage,"}, {"start": 2382.62, "end": 2385.84, "text": " then the input tensor is none, okay?"}, {"start": 2385.84, "end": 2388.38, "text": " But if we were in some other stage,"}, {"start": 2388.38, "end": 2390.64, "text": " then we call this communicate method."}, {"start": 2390.64, "end": 2393.46, "text": " And that means we're waiting for the output"}, {"start": 2393.46, "end": 2396.76, "text": " of the previous stage to pass that output"}, {"start": 2396.76, "end": 2399.08, "text": " as the input of this stage."}, {"start": 2399.08, "end": 2401.7999999999997, "text": " And if that happens, we're going to override"}, {"start": 2401.7999999999997, "end": 2404.12, "text": " the batch that we just loaded from the data loader."}, {"start": 2404.12, "end": 2405.92, "text": " So that might be a mouthful."}, {"start": 2405.92, "end": 2408.2799999999997, "text": " It's gonna hopefully make sense in a couple of minutes."}, {"start": 2408.2799999999997, "end": 2409.96, "text": " So here is the input tensor."}, {"start": 2409.96, "end": 2411.04, "text": " Let's continue here."}, {"start": 2411.04, "end": 2414.24, "text": " Okay, some unwrapping of the model, not that interesting."}, {"start": 2414.24, "end": 2416.88, "text": " So here's the part where we set the input tensor."}, {"start": 2416.88, "end": 2419.18, "text": " And let me kind of dig into this."}, {"start": 2419.18, "end": 2421.68, "text": " So blah, blah, blah, we call the GPT model"}, {"start": 2421.68, "end": 2423.08, "text": " set input tensor function."}, {"start": 2423.08, "end": 2426.64, "text": " And then, okay, let me exit this one."}, {"start": 2426.64, "end": 2430.3599999999997, "text": " So then we enter the transformer language model"}, {"start": 2430.3599999999997, "end": 2431.6, "text": " set input tensor."}, {"start": 2431.6, "end": 2433.62, "text": " Again, a lot of levels of indirection,"}, {"start": 2433.62, "end": 2436.2799999999997, "text": " and I keep hitting this random function."}, {"start": 2436.2799999999997, "end": 2437.52, "text": " Okay, so here we are."}, {"start": 2437.52, "end": 2439.4, "text": " So finally we get to parallel transformer,"}, {"start": 2439.4, "end": 2442.7799999999997, "text": " and we set the input tensor to none,"}, {"start": 2442.7799999999997, "end": 2444.64, "text": " because we are now in the first stage."}, {"start": 2444.64, "end": 2447.2, "text": " I mean, at least in the actual setup I have."}, {"start": 2447.2, "end": 2449.56, "text": " But let me show you where this one is used."}, {"start": 2449.56, "end": 2453.42, "text": " So it's going to be used here a bit later in the forward pass."}, {"start": 2453.42, "end": 2456.24, "text": " And we'll see that if preprocess is true,"}, {"start": 2456.24, "end": 2458.0, "text": " that means we're in the first stage."}, {"start": 
2458.0, "end": 2459.3599999999997, "text": " And in that case,"}, {"start": 2459.3599999999997, "end": 2461.8799999999997, "text": " we'll actually be taking the data from the batch."}, {"start": 2461.8799999999997, "end": 2464.0, "text": " Otherwise, if we are not in the first stage,"}, {"start": 2464.0, "end": 2466.7999999999997, "text": " we'll be passing the actual input tensor"}, {"start": 2467.72, "end": 2469.3599999999997, "text": " that's passed from the previous stage."}, {"start": 2469.3599999999997, "end": 2472.1, "text": " So again, we'll see that a bit later."}, {"start": 2472.1, "end": 2474.4799999999996, "text": " So let me now exit all of this."}, {"start": 2474.4799999999996, "end": 2477.12, "text": " Let me exit this, let me exit this."}, {"start": 2477.12, "end": 2478.8799999999997, "text": " And let's continue here."}, {"start": 2478.8799999999997, "end": 2480.72, "text": " So here is the actual forward pass."}, {"start": 2480.72, "end": 2482.7999999999997, "text": " Again, just so that we are on the same page,"}, {"start": 2482.7999999999997, "end": 2485.6, "text": " let me show you what's going to happen here."}, {"start": 2485.6, "end": 2489.36, "text": " So what's happening is we are having"}, {"start": 2489.36, "end": 2493.92, "text": " our distributed data loader fetching a batch."}, {"start": 2493.92, "end": 2498.92, "text": " And I'm just going to simply draw it as a simple 2D square."}, {"start": 2499.7999999999997, "end": 2504.44, "text": " And we're going to basically divide it into two pieces."}, {"start": 2504.44, "end": 2507.72, "text": " This one is going to be passed to this group."}, {"start": 2507.72, "end": 2511.2, "text": " And the first chunk is gonna be passed"}, {"start": 2511.2, "end": 2513.52, "text": " into the zeroth group here, okay?"}, {"start": 2513.52, "end": 2516.4, "text": " And now what happens is this is going to be fed here."}, {"start": 2517.84, "end": 2521.14, "text": " And while that's happening, these two guys are blocked."}, {"start": 2521.14, "end": 2523.88, "text": " So the thing is that communicate method I showed you"}, {"start": 2523.88, "end": 2525.18, "text": " is a blocking method."}, {"start": 2525.18, "end": 2529.04, "text": " So that means until this has done its feed,"}, {"start": 2529.04, "end": 2532.44, "text": " so until this data has passed through the first stage,"}, {"start": 2532.44, "end": 2533.6, "text": " these guys are waiting."}, {"start": 2533.6, "end": 2535.7599999999998, "text": " And then, only then do they start"}, {"start": 2535.7599999999998, "end": 2537.28, "text": " doing the actual computation."}, {"start": 2537.28, "end": 2538.6, "text": " That's how this is gonna look like."}, {"start": 2538.6, "end": 2542.44, "text": " And obviously the same happens for the rank one,"}, {"start": 2542.44, "end": 2545.96, "text": " for the second group, second data parallel group."}, {"start": 2545.96, "end": 2548.44, "text": " So I will not be drawing it there, obviously."}, {"start": 2548.44, "end": 2551.68, "text": " Okay, let's enter the feed forward pass."}, {"start": 2553.0, "end": 2558.0, "text": " Okay, we have to enter this forward call."}, {"start": 2558.44, "end": 2560.2400000000002, "text": " So distributed blah, blah, blah."}, {"start": 2560.2400000000002, "end": 2563.52, "text": " There is a lot of levels of indirection again here."}, {"start": 2563.52, "end": 2565.44, "text": " Let me just exit here."}, {"start": 2568.4, "end": 2571.26, "text": " Oh my God, oh my God, let me enter here."}, 
{"start": 2571.26, "end": 2574.2000000000003, "text": " Okay, so finally we are in the forward pass"}, {"start": 2574.2000000000003, "end": 2576.2000000000003, "text": " of the float 16 module."}, {"start": 2576.2000000000003, "end": 2577.88, "text": " So let's see what's happening here."}, {"start": 2577.88, "end": 2579.2000000000003, "text": " So as you can see here,"}, {"start": 2579.2000000000003, "end": 2581.46, "text": " if we're in the first stage of the pipeline,"}, {"start": 2581.46, "end": 2584.26, "text": " we convert our data to float 16,"}, {"start": 2584.26, "end": 2586.5600000000004, "text": " because we're gonna do mixed precision."}, {"start": 2586.5600000000004, "end": 2590.4, "text": " So now inputs are gonna be converted to float 16."}, {"start": 2590.4, "end": 2593.42, "text": " And now we call the actual transformer"}, {"start": 2593.42, "end": 2595.32, "text": " with the float 16 data."}, {"start": 2595.32, "end": 2596.5600000000004, "text": " Okay, let's enter here."}, {"start": 2598.76, "end": 2600.5600000000004, "text": " Let me hit F11."}, {"start": 2600.56, "end": 2603.4, "text": " I think I can just hit F5 and I'll hit the, yeah."}, {"start": 2603.4, "end": 2604.24, "text": " So here we are."}, {"start": 2604.24, "end": 2607.84, "text": " We are in the GPT model forward pass,"}, {"start": 2607.84, "end": 2610.32, "text": " and let's start going through it."}, {"start": 2610.32, "end": 2612.12, "text": " Okay, so again, I think I can hit F5"}, {"start": 2612.12, "end": 2614.2799999999997, "text": " because they have a break point there."}, {"start": 2614.2799999999997, "end": 2616.42, "text": " Yeah, so we are in the transformer language model."}, {"start": 2616.42, "end": 2620.04, "text": " Okay, so as you can see here, only the first stages,"}, {"start": 2620.04, "end": 2623.04, "text": " because pre-process is true for first stages of the pipeline,"}, {"start": 2623.04, "end": 2625.04, "text": " are going to do the embedding part."}, {"start": 2625.04, "end": 2626.7999999999997, "text": " Okay, so we do the embedding part."}, {"start": 2626.7999999999997, "end": 2629.04, "text": " Let me kinda enter this."}, {"start": 2629.04, "end": 2630.32, "text": " Whoops, I hate this."}, {"start": 2630.32, "end": 2631.28, "text": " Let me enter it."}, {"start": 2631.28, "end": 2633.36, "text": " So here we are in the embedding forward."}, {"start": 2633.36, "end": 2636.1600000000003, "text": " Let me see whether it's useful to enter this one as well."}, {"start": 2638.84, "end": 2641.44, "text": " So here we are in the vocab parallel embedding."}, {"start": 2641.44, "end": 2643.6000000000004, "text": " Let's see how the forward would look like."}, {"start": 2643.6000000000004, "end": 2647.1200000000003, "text": " So again, keep in mind that we have sharded"}, {"start": 2647.1200000000003, "end": 2648.84, "text": " that embedding table here and here."}, {"start": 2648.84, "end": 2653.06, "text": " So we only have 25,000 of our embedding vectors here,"}, {"start": 2653.06, "end": 2655.2400000000002, "text": " and the other 25,000 is here."}, {"start": 2655.2400000000002, "end": 2658.6400000000003, "text": " So they'll somehow have to communicate between each other,"}, {"start": 2658.64, "end": 2661.58, "text": " such that we can end up with the correct results."}, {"start": 2661.58, "end": 2663.7599999999998, "text": " So let's see how that's gonna work."}, {"start": 2663.7599999999998, "end": 2665.7999999999997, "text": " So let's enter here."}, {"start": 2667.0, "end": 
2669.24, "text": " So we would actually be entering this branch"}, {"start": 2669.24, "end": 2672.3199999999997, "text": " if I had the setup that I'm using as running examples,"}, {"start": 2672.3199999999997, "end": 2676.3599999999997, "text": " because the model parallel size would be two."}, {"start": 2676.3599999999997, "end": 2679.48, "text": " So what would happen is the input mask,"}, {"start": 2679.48, "end": 2681.64, "text": " as you can see here, it will be true"}, {"start": 2681.64, "end": 2686.3199999999997, "text": " for all of those indices that are smaller or bigger"}, {"start": 2686.32, "end": 2690.96, "text": " than the index here, so then the boundary index here."}, {"start": 2690.96, "end": 2694.2000000000003, "text": " And these would be like, these will be zero and 25,000"}, {"start": 2694.2000000000003, "end": 2697.6000000000004, "text": " for this one, for this one here,"}, {"start": 2697.6000000000004, "end": 2700.92, "text": " and 25,000 to 50,000 for this one, okay?"}, {"start": 2700.92, "end": 2705.4, "text": " And that's why this is gonna mask those indices"}, {"start": 2705.4, "end": 2707.92, "text": " that do not belong to this device, okay?"}, {"start": 2707.92, "end": 2711.6400000000003, "text": " So there is this subtraction here that we have to do"}, {"start": 2711.6400000000003, "end": 2713.32, "text": " so that we have correct results,"}, {"start": 2713.32, "end": 2716.44, "text": " because imagine we are now on this device,"}, {"start": 2716.44, "end": 2719.0, "text": " because its start index is 25,000,"}, {"start": 2719.0, "end": 2723.32, "text": " we want to subtract 25,000 such that we end up with zero,"}, {"start": 2723.32, "end": 2725.1600000000003, "text": " because we are indexing in its table"}, {"start": 2725.1600000000003, "end": 2728.6000000000004, "text": " and the zero index is the first slot in that table"}, {"start": 2728.6000000000004, "end": 2730.56, "text": " on that device, if that makes sense."}, {"start": 2730.56, "end": 2732.98, "text": " Sorry for being a bit confusing here."}, {"start": 2734.02, "end": 2736.7400000000002, "text": " And you can see here, for all of the other indices,"}, {"start": 2736.7400000000002, "end": 2738.56, "text": " we just set them to zero."}, {"start": 2738.56, "end": 2740.6400000000003, "text": " Maybe let me try and draw this quickly."}, {"start": 2740.64, "end": 2743.44, "text": " So here is what's going on."}, {"start": 2743.44, "end": 2748.18, "text": " Okay, we have basically input that contains indices"}, {"start": 2748.18, "end": 2751.8599999999997, "text": " from the whole table, so that means it might have numbers."}, {"start": 2753.2799999999997, "end": 2755.58, "text": " Okay, let's quickly check what's the actual size."}, {"start": 2755.58, "end": 2758.68, "text": " So let me use this input as the running example."}, {"start": 2758.68, "end": 2762.3599999999997, "text": " So we have shape 8024, okay?"}, {"start": 2762.3599999999997, "end": 2764.48, "text": " So let's imagine we just have 1024,"}, {"start": 2764.48, "end": 2767.8599999999997, "text": " because we have batch size of one, it's gonna be easier."}, {"start": 2767.8599999999997, "end": 2769.68, "text": " Okay, so here we are."}, {"start": 2769.68, "end": 2773.44, "text": " So we have a vector that's like 1024."}, {"start": 2774.44, "end": 2778.68, "text": " So this is gonna be 1024, okay?"}, {"start": 2778.68, "end": 2781.7999999999997, "text": " And it might contain in any of these slots,"}, {"start": 2781.7999999999997, 
"end": 2784.9199999999996, "text": " in any of the dimensions, let's say this is dimension I,"}, {"start": 2784.9199999999996, "end": 2788.04, "text": " it might contain numbers from zero to 50K."}, {"start": 2788.04, "end": 2791.08, "text": " I'm just gonna run the numbers because it's easier, okay?"}, {"start": 2791.08, "end": 2795.16, "text": " So now let's assume we are on this device here."}, {"start": 2795.16, "end": 2798.6, "text": " Okay, so that device only has the following indices."}, {"start": 2798.6, "end": 2801.24, "text": " So it has, so we have two tables."}, {"start": 2802.12, "end": 2804.92, "text": " This one contains from zero to 25K,"}, {"start": 2804.92, "end": 2809.92, "text": " and this one contains from 25K to 50K, 25K to 50K, okay?"}, {"start": 2812.44, "end": 2814.2, "text": " So what this device is going to do"}, {"start": 2814.2, "end": 2818.04, "text": " is it's going to mask all of those values here."}, {"start": 2818.04, "end": 2819.4, "text": " So let me take a different color."}, {"start": 2819.4, "end": 2822.16, "text": " So all of those values that have numbers"}, {"start": 2822.16, "end": 2824.4, "text": " that are zero to 25K,"}, {"start": 2824.4, "end": 2826.52, "text": " because it does not contain those numbers."}, {"start": 2826.52, "end": 2828.84, "text": " So it's going to mask them out by putting them to zero."}, {"start": 2828.84, "end": 2830.92, "text": " So let's see, this one is also put to zero,"}, {"start": 2830.92, "end": 2833.16, "text": " this one is also put to zero, blah, blah, blah, okay?"}, {"start": 2833.16, "end": 2835.88, "text": " And then it's going to take this number here."}, {"start": 2835.88, "end": 2839.36, "text": " Let's say this one is 26K."}, {"start": 2839.36, "end": 2842.56, "text": " It's going to subtract the 25K because that's the offset,"}, {"start": 2843.72, "end": 2845.8, "text": " and we're going to end up with 1K."}, {"start": 2845.8, "end": 2848.84, "text": " And 1K, when you index with 1K into this table,"}, {"start": 2849.8, "end": 2853.32, "text": " you end up with some vector, embedding vector here."}, {"start": 2853.32, "end": 2856.5, "text": " And that vector is, if you take a look at the whole table,"}, {"start": 2856.5, "end": 2859.08, "text": " is at the 26K, okay?"}, {"start": 2859.08, "end": 2860.64, "text": " So that's why we do all of this offsetting."}, {"start": 2860.64, "end": 2862.64, "text": " Hopefully this now makes a bit more sense."}, {"start": 2862.64, "end": 2864.84, "text": " Okay, let's continue."}, {"start": 2864.84, "end": 2869.84, "text": " So now we do the embedding and we get the results."}, {"start": 2869.84, "end": 2872.08, "text": " Okay, so let's check out the shapes."}, {"start": 2872.96, "end": 2875.88, "text": " Shape is going to be 8024,024."}, {"start": 2875.88, "end": 2878.76, "text": " This is a sequence length, this is the embedding dimension."}, {"start": 2878.76, "end": 2881.08, "text": " And now what we would have to do"}, {"start": 2881.08, "end": 2886.08, "text": " is we would have to set those vectors that were masked out"}, {"start": 2886.88, "end": 2890.88, "text": " to zero, otherwise they would have incorrect results, okay?"}, {"start": 2890.88, "end": 2892.6, "text": " And finally, what we do is we call this"}, {"start": 2892.6, "end": 2894.98, "text": " reduced from tensor model parallel region."}, {"start": 2894.98, "end": 2896.3199999999997, "text": " Let me quickly enter this one,"}, {"start": 2896.3199999999997, "end": 2898.7599999999998, "text": " 
and then I'm going to show you what is going on here."}, {"start": 2898.7599999999998, "end": 2901.16, "text": " So here we are."}, {"start": 2901.16, "end": 2902.3199999999997, "text": " Let me just go like this."}, {"start": 2902.3199999999997, "end": 2904.12, "text": " So let me actually enter here."}, {"start": 2904.12, "end": 2907.12, "text": " Let me enter here, reduce."}, {"start": 2907.12, "end": 2908.7599999999998, "text": " And here is what's going to happen."}, {"start": 2908.76, "end": 2912.5600000000004, "text": " In case that the model that we have multiple stages,"}, {"start": 2912.5600000000004, "end": 2913.6800000000003, "text": " sorry, this is model parallelism."}, {"start": 2913.6800000000003, "end": 2917.96, "text": " In case that we have multiple shards across,"}, {"start": 2917.96, "end": 2920.1800000000003, "text": " if we have model parallelism of size two,"}, {"start": 2920.1800000000003, "end": 2922.84, "text": " as the running example is using,"}, {"start": 2922.84, "end": 2925.1600000000003, "text": " then we would not enter this line."}, {"start": 2925.1600000000003, "end": 2928.96, "text": " We would do the all reduce across the group,"}, {"start": 2928.96, "end": 2930.6800000000003, "text": " across the model parallel group."}, {"start": 2930.6800000000003, "end": 2933.92, "text": " So let me break that down, what we have done right there."}, {"start": 2933.92, "end": 2936.32, "text": " So let's again, get back to the drawing"}, {"start": 2936.32, "end": 2937.32, "text": " and let's see what happened."}, {"start": 2937.32, "end": 2941.92, "text": " Okay, so, okay, because these guys here were masked out,"}, {"start": 2941.92, "end": 2943.6800000000003, "text": " so that means they were, they maybe,"}, {"start": 2943.6800000000003, "end": 2945.32, "text": " this one was maybe 14K."}, {"start": 2945.32, "end": 2948.92, "text": " And we've artificially set its value to zero."}, {"start": 2948.92, "end": 2951.56, "text": " Because it was zero, that means it was indexing"}, {"start": 2951.56, "end": 2954.28, "text": " the zero vector from this table, which is incorrect"}, {"start": 2954.28, "end": 2956.32, "text": " because we want the 14,000, right?"}, {"start": 2956.32, "end": 2958.84, "text": " So this one here, again, let me just change the color"}, {"start": 2958.84, "end": 2959.8, "text": " and let me explain."}, {"start": 2959.8, "end": 2962.0, "text": " So this one here was maybe 14K."}, {"start": 2963.0, "end": 2965.4, "text": " So that's like the token ID, okay?"}, {"start": 2965.4, "end": 2967.92, "text": " So that one was indexing somewhere here,"}, {"start": 2967.92, "end": 2969.96, "text": " but because we've set it to zero,"}, {"start": 2969.96, "end": 2972.6800000000003, "text": " it was actually indexing this vector here."}, {"start": 2972.6800000000003, "end": 2975.14, "text": " And that's why we have, oops, it's glitching."}, {"start": 2975.14, "end": 2977.36, "text": " That's why we had to do the zeroing out"}, {"start": 2977.36, "end": 2980.76, "text": " because we don't want to have this value here."}, {"start": 2980.76, "end": 2982.92, "text": " We don't want to have this embedding vector"}, {"start": 2982.92, "end": 2985.4, "text": " for this particular token."}, {"start": 2985.4, "end": 2986.96, "text": " That's why we do the zeroing."}, {"start": 2986.96, "end": 2988.7200000000003, "text": " That's why we do this part here."}, {"start": 2988.7200000000003, "end": 2992.34, "text": " And then what the all reduce does is the following."}, 
{"start": 2992.34, "end": 2993.46, "text": " Let me go back here."}, {"start": 2993.46, "end": 2994.4, "text": " So this is going to happen."}, {"start": 2994.4, "end": 2996.76, "text": " Let me see whether I can explain this nicely."}, {"start": 2996.76, "end": 3001.76, "text": " Basically, this device here mapped this set of tokens"}, {"start": 3003.02, "end": 3008.02, "text": " into obviously various bunch of embedding vectors."}, {"start": 3009.76, "end": 3013.04, "text": " So the dimensionality here is the same"}, {"start": 3013.04, "end": 3014.7200000000003, "text": " as the dimensionality up there."}, {"start": 3014.7200000000003, "end": 3018.88, "text": " So it's 1024, but here we now have the hidden dimension."}, {"start": 3018.88, "end": 3022.32, "text": " Okay, so this thing, this vector here"}, {"start": 3022.32, "end": 3024.6000000000004, "text": " is gonna be valid for this device."}, {"start": 3024.6000000000004, "end": 3026.2400000000002, "text": " So that vector is gonna be valid."}, {"start": 3026.2400000000002, "end": 3028.0, "text": " So let me change the color to green."}, {"start": 3028.84, "end": 3031.1200000000003, "text": " So this one here, this token is gonna be valid"}, {"start": 3032.48, "end": 3033.88, "text": " on this particular device."}, {"start": 3033.88, "end": 3035.42, "text": " Okay, so it's valid here."}, {"start": 3035.42, "end": 3039.2200000000003, "text": " But as we just saw, it's going to be invalid for this one."}, {"start": 3039.2200000000003, "end": 3042.0800000000004, "text": " Okay, so let me draw another table."}, {"start": 3042.0800000000004, "end": 3043.7200000000003, "text": " So let me draw the results."}, {"start": 3043.7200000000003, "end": 3045.2000000000003, "text": " So this is the output from this device."}, {"start": 3045.2000000000003, "end": 3047.56, "text": " So this is the output from the second device."}, {"start": 3047.56, "end": 3052.56, "text": " And it contains a corrupt data, actually all zeros,"}, {"start": 3056.08, "end": 3057.6, "text": " because we've zeroed it out."}, {"start": 3057.6, "end": 3059.12, "text": " So it's gonna be all zeros."}, {"start": 3059.12, "end": 3063.12, "text": " And so now what the all reduce does is basically,"}, {"start": 3063.12, "end": 3064.7599999999998, "text": " you sum these up."}, {"start": 3064.7599999999998, "end": 3067.88, "text": " If you sum them up, you'll end up with a correct result"}, {"start": 3067.88, "end": 3071.08, "text": " both here and here, because if you just add zeros"}, {"start": 3071.08, "end": 3074.32, "text": " onto this device, you keep the same value."}, {"start": 3074.32, "end": 3077.04, "text": " But this device will now have the correct value"}, {"start": 3077.04, "end": 3078.2, "text": " of the embedding vector."}, {"start": 3078.2, "end": 3080.4, "text": " And that's the magic of the all reduce."}, {"start": 3080.4, "end": 3084.34, "text": " So what's going to happen is these GPUs here"}, {"start": 3084.34, "end": 3086.68, "text": " are going to do that type of a communication"}, {"start": 3086.68, "end": 3089.2799999999997, "text": " and update the embedding vectors."}, {"start": 3089.2799999999997, "end": 3091.64, "text": " So that's a beautiful thing, like if you ask me."}, {"start": 3091.64, "end": 3096.64, "text": " So hopefully I managed to explain it nicely."}, {"start": 3096.8, "end": 3098.7599999999998, "text": " And now finally, let's get out of here."}, {"start": 3098.7599999999998, "end": 3102.88, "text": " Okay, again, just to update our 
mental model."}, {"start": 3102.88, "end": 3106.48, "text": " So where we are at the moment is in the state"}, {"start": 3106.48, "end": 3111.48, "text": " where basically these two devices here are waiting"}, {"start": 3111.72, "end": 3113.2, "text": " for the information."}, {"start": 3113.2, "end": 3116.7400000000002, "text": " And we have the embedding vectors on this device"}, {"start": 3116.7400000000002, "end": 3119.04, "text": " and on this device being the same"}, {"start": 3119.04, "end": 3121.4, "text": " because they're synchronized because of this all reduce"}, {"start": 3121.4, "end": 3122.22, "text": " we've just done."}, {"start": 3122.22, "end": 3124.08, "text": " Okay, so we are now there."}, {"start": 3124.08, "end": 3126.48, "text": " We have that and now let's get back to the code."}, {"start": 3127.4, "end": 3130.32, "text": " Okay, so after this, we're gonna have some"}, {"start": 3130.32, "end": 3134.56, "text": " positional embeddings being added that's less interesting"}, {"start": 3134.56, "end": 3136.92, "text": " because it's not sharded."}, {"start": 3136.92, "end": 3139.7999999999997, "text": " Let's continue we have some drop out, blah, blah, blah."}, {"start": 3139.7999999999997, "end": 3142.4, "text": " And we end up with the output."}, {"start": 3142.4, "end": 3145.36, "text": " Okay, so now let's see what's going on."}, {"start": 3145.36, "end": 3148.4, "text": " So we now do the call to the encoder."}, {"start": 3148.4, "end": 3151.36, "text": " And that's just going to be our transformer basically."}, {"start": 3151.36, "end": 3153.36, "text": " So let's enter there."}, {"start": 3154.44, "end": 3155.7, "text": " Let me hit F5."}, {"start": 3155.7, "end": 3158.16, "text": " So we are now in the parallel transformer."}, {"start": 3158.16, "end": 3160.72, "text": " Let's step over all of these."}, {"start": 3160.72, "end": 3164.2, "text": " If self pre process, so again, if we're in the first stage"}, {"start": 3164.2, "end": 3165.7599999999998, "text": " we'll be doing this."}, {"start": 3165.7599999999998, "end": 3167.2799999999997, "text": " So if we're in the first stage,"}, {"start": 3167.2799999999997, "end": 3171.9199999999996, "text": " we'll be grabbing the embedding vectors we just created."}, {"start": 3171.9199999999996, "end": 3176.12, "text": " So let me show you the shape, 8024 to 1024."}, {"start": 3176.12, "end": 3179.08, "text": " And we just grab those, we do some transpose operation"}, {"start": 3179.08, "end": 3181.8199999999997, "text": " and we do continuous and we continue."}, {"start": 3181.8199999999997, "end": 3184.8799999999997, "text": " But if you recall this line here,"}, {"start": 3184.8799999999997, "end": 3187.3999999999996, "text": " this was from the, if we were,"}, {"start": 3187.3999999999996, "end": 3190.12, "text": " if this was not the first stage of the pipeline,"}, {"start": 3190.12, "end": 3193.0, "text": " but instead if this was maybe second or third"}, {"start": 3193.0, "end": 3194.96, "text": " or whatever stage,"}, {"start": 3194.96, "end": 3197.24, "text": " then instead we'd have hidden states"}, {"start": 3197.24, "end": 3199.38, "text": " being equal to input tensor,"}, {"start": 3199.38, "end": 3201.32, "text": " which came from the previous stage."}, {"start": 3201.32, "end": 3204.8, "text": " Okay, so we would bypass this part here."}, {"start": 3204.8, "end": 3206.16, "text": " Hopefully that makes sense."}, {"start": 3207.12, "end": 3208.6, "text": " Let's not continue."}, {"start": 3208.6, "end": 
3211.2, "text": " We're not doing checkpointing."}, {"start": 3211.2, "end": 3214.52, "text": " Okay, now we are just going to do a feed forward"}, {"start": 3214.52, "end": 3216.96, "text": " through all of the transformer layers."}, {"start": 3216.96, "end": 3219.6, "text": " Okay, so let's do that."}, {"start": 3219.6, "end": 3221.58, "text": " Let's grab the, so here is the layer."}, {"start": 3221.58, "end": 3223.56, "text": " I can hit F5, I think."}, {"start": 3223.56, "end": 3226.08, "text": " So we are now in the parallel transformer layer."}, {"start": 3226.08, "end": 3229.24, "text": " And the first thing we do is we pass the data"}, {"start": 3229.24, "end": 3230.58, "text": " through the layer norm."}, {"start": 3230.58, "end": 3233.52, "text": " Let me quickly show you how that's gonna look like"}, {"start": 3233.52, "end": 3234.88, "text": " because it's fused."}, {"start": 3234.88, "end": 3236.72, "text": " Oh my God, this sucks."}, {"start": 3236.72, "end": 3239.44, "text": " Okay, so let's just step over here."}, {"start": 3239.44, "end": 3240.7999999999997, "text": " Okay, here we are."}, {"start": 3240.7999999999997, "end": 3244.3199999999997, "text": " And here is the fused layer norm,"}, {"start": 3244.3199999999997, "end": 3246.88, "text": " F5M function we call this apply method."}, {"start": 3246.88, "end": 3248.56, "text": " So let me enter here."}, {"start": 3248.56, "end": 3250.84, "text": " So here you can see how it looks like."}, {"start": 3250.84, "end": 3252.0, "text": " Oops."}, {"start": 3252.0, "end": 3256.6000000000004, "text": " So what it does is ultimately it calls this CUDA kernel,"}, {"start": 3256.6000000000004, "end": 3257.6800000000003, "text": " which is a global variable,"}, {"start": 3257.6800000000003, "end": 3260.2000000000003, "text": " which we loaded in the beginning of this program."}, {"start": 3261.08, "end": 3263.08, "text": " And then we just call the forward F5."}, {"start": 3263.08, "end": 3264.6800000000003, "text": " So we cannot actually enter here"}, {"start": 3264.6800000000003, "end": 3266.7200000000003, "text": " because it's just a compiled C++ code."}, {"start": 3266.7200000000003, "end": 3269.3, "text": " So I cannot enter with this current debugger."}, {"start": 3269.3, "end": 3271.92, "text": " Maybe there is some, maybe it can be tweaked"}, {"start": 3271.92, "end": 3273.84, "text": " so we can enter and see the C++ code,"}, {"start": 3273.84, "end": 3276.0, "text": " but I'm not sure how that will work."}, {"start": 3276.0, "end": 3277.32, "text": " Okay, so in any case,"}, {"start": 3277.32, "end": 3279.58, "text": " I just wanted to show you how the kernels look like."}, {"start": 3279.58, "end": 3282.66, "text": " Now let's exit from the layer norm."}, {"start": 3282.66, "end": 3286.16, "text": " And now let's hit the actual attention part."}, {"start": 3286.16, "end": 3288.08, "text": " Okay, let me hit F5."}, {"start": 3288.08, "end": 3289.2, "text": " Here is the attention part."}, {"start": 3289.2, "end": 3291.52, "text": " So we are in the parallel attention part."}, {"start": 3291.52, "end": 3293.88, "text": " Again, remember the weights are sharded"}, {"start": 3293.88, "end": 3296.54, "text": " such that each of these devices"}, {"start": 3296.54, "end": 3299.44, "text": " in the rank zero and rank one of the model parallelism"}, {"start": 3299.44, "end": 3301.52, "text": " only have a portion of the weights."}, {"start": 3301.52, "end": 3304.54, "text": " Okay, so we end up with some..."}, {"start": 
3304.54, "end": 3307.48, "text": " Okay, so we first hit the column parallel layer."}, {"start": 3307.48, "end": 3309.84, "text": " Let me go back up again here up the stack."}, {"start": 3309.84, "end": 3311.88, "text": " So we hit the query key value"}, {"start": 3311.88, "end": 3316.2400000000002, "text": " is actually the column parallel layer for a function."}, {"start": 3316.2400000000002, "end": 3318.12, "text": " So that's again, let me show you here."}, {"start": 3318.12, "end": 3320.48, "text": " So we are hitting the column parallel."}, {"start": 3320.48, "end": 3322.32, "text": " This is the part I was showing you."}, {"start": 3322.32, "end": 3323.76, "text": " So let me go back to the code now"}, {"start": 3323.76, "end": 3324.96, "text": " and let's see what's going on here."}, {"start": 3324.96, "end": 3327.84, "text": " So we call this weird function copy to tensor."}, {"start": 3327.84, "end": 3330.32, "text": " So what is it doing exactly?"}, {"start": 3330.32, "end": 3331.26, "text": " Let's maybe enter."}, {"start": 3331.26, "end": 3332.2400000000002, "text": " So let's enter here."}, {"start": 3332.2400000000002, "end": 3333.32, "text": " Let's enter here."}, {"start": 3333.32, "end": 3335.84, "text": " And you can see here in the forward pass,"}, {"start": 3335.84, "end": 3337.56, "text": " it's just doing identity function."}, {"start": 3337.56, "end": 3339.0, "text": " So why are we doing this?"}, {"start": 3339.0, "end": 3341.2000000000003, "text": " Well, that's because in the backward"}, {"start": 3341.2000000000003, "end": 3343.1600000000003, "text": " we'll be doing the reduce method."}, {"start": 3343.1600000000003, "end": 3344.84, "text": " We'll be calling the all reduce."}, {"start": 3344.84, "end": 3346.28, "text": " So let's see why that makes sense."}, {"start": 3346.28, "end": 3347.52, "text": " Let me show you the diagram here."}, {"start": 3347.52, "end": 3349.78, "text": " It's kind of clear once you see it."}, {"start": 3349.78, "end": 3352.28, "text": " So what we're doing here is this."}, {"start": 3352.28, "end": 3354.08, "text": " We are implementing F."}, {"start": 3354.08, "end": 3356.1200000000003, "text": " F is, as you can see here,"}, {"start": 3356.1200000000003, "end": 3359.6400000000003, "text": " it's just the identity function in the forward pass"}, {"start": 3359.6400000000003, "end": 3361.48, "text": " because we just passed the data,"}, {"start": 3361.48, "end": 3364.32, "text": " the same data X to one device"}, {"start": 3364.32, "end": 3366.02, "text": " as well as to the other device."}, {"start": 3366.02, "end": 3368.04, "text": " But once we go backward,"}, {"start": 3368.04, "end": 3370.06, "text": " we want to all reduce the information."}, {"start": 3370.06, "end": 3371.78, "text": " We want to all reduce the gradients"}, {"start": 3371.78, "end": 3374.96, "text": " that came from this portion and this portion."}, {"start": 3374.96, "end": 3377.0800000000004, "text": " We want to basically average them"}, {"start": 3377.0800000000004, "end": 3380.2400000000002, "text": " and then pass them to the previous layers."}, {"start": 3380.2400000000002, "end": 3381.1800000000003, "text": " Hopefully that makes sense."}, {"start": 3381.1800000000003, "end": 3383.96, "text": " That's why we have it implemented in such a way."}, {"start": 3383.96, "end": 3385.84, "text": " Let's now exit this part."}, {"start": 3386.92, "end": 3387.84, "text": " Now this is less interesting."}, {"start": 3387.84, "end": 3390.96, "text": " We just 
apply the actual linear layer"}, {"start": 3390.96, "end": 3394.44, "text": " and then blah, blah, blah, bias."}, {"start": 3394.44, "end": 3395.28, "text": " We can exit there."}, {"start": 3395.28, "end": 3396.12, "text": " That's less interesting."}, {"start": 3396.12, "end": 3398.7200000000003, "text": " Okay, so now we just have some,"}, {"start": 3398.7200000000003, "end": 3400.88, "text": " what's this, reshaping."}, {"start": 3400.88, "end": 3402.9, "text": " We have some view, blah, blah, blah."}, {"start": 3402.9, "end": 3405.32, "text": " We're splitting into three parts, the query key values."}, {"start": 3405.32, "end": 3407.4, "text": " I'm not gonna focus on the actual transformer logic"}, {"start": 3407.4, "end": 3410.12, "text": " because I do assume you either know that"}, {"start": 3410.12, "end": 3411.28, "text": " or even if you don't know,"}, {"start": 3411.28, "end": 3414.46, "text": " just focus, let's focus on the 3D parallelism."}, {"start": 3414.46, "end": 3418.48, "text": " So again, just some transformer stuff."}, {"start": 3418.48, "end": 3420.06, "text": " We're changing the view"}, {"start": 3420.06, "end": 3422.56, "text": " so we can do multiplication between the keys"}, {"start": 3422.56, "end": 3427.56, "text": " and the queries and then basically use that to, okay."}, {"start": 3428.32, "end": 3431.92, "text": " So here we just preallocate a tensor"}, {"start": 3431.92, "end": 3434.32, "text": " that's gonna contain the scores"}, {"start": 3435.2, "end": 3439.2999999999997, "text": " for the, that we'll then apply the softmax."}, {"start": 3440.96, "end": 3442.84, "text": " We'll then apply a softmax."}, {"start": 3442.84, "end": 3443.68, "text": " As you can see here,"}, {"start": 3443.68, "end": 3447.6, "text": " so we first do the multiplication of queries and keys here"}, {"start": 3447.6, "end": 3450.68, "text": " so we can skip all of that."}, {"start": 3450.68, "end": 3453.68, "text": " And then we do, we now have attention scores."}, {"start": 3453.68, "end": 3457.46, "text": " And in order to convert scores into actual probabilities,"}, {"start": 3457.46, "end": 3459.18, "text": " we have to apply the softmax."}, {"start": 3460.68, "end": 3462.8399999999997, "text": " So again, I'm gonna exit this softmax layer."}, {"start": 3462.8399999999997, "end": 3464.16, "text": " It's not that interesting."}, {"start": 3465.12, "end": 3466.7599999999998, "text": " We've already seen the CUDA kernels."}, {"start": 3466.7599999999998, "end": 3469.0, "text": " So I'm gonna just skip that."}, {"start": 3469.0, "end": 3470.08, "text": " Okay, attention."}, {"start": 3470.08, "end": 3472.68, "text": " We just do the dropout, blah, blah, blah."}, {"start": 3472.68, "end": 3473.88, "text": " Let's continue."}, {"start": 3473.88, "end": 3477.7200000000003, "text": " My computer is kind of struggling."}, {"start": 3477.7200000000003, "end": 3480.12, "text": " You can see it's kind of lagging behind."}, {"start": 3480.12, "end": 3483.04, "text": " So I cannot go faster than this."}, {"start": 3483.04, "end": 3486.2400000000002, "text": " We just do some, again, view manipulation."}, {"start": 3486.2400000000002, "end": 3488.92, "text": " And then we do the batch matrix multiply,"}, {"start": 3488.92, "end": 3491.84, "text": " which is going to basically sum up the value vectors."}, {"start": 3491.84, "end": 3494.2400000000002, "text": " And we end up with the final result here."}, {"start": 3494.2400000000002, "end": 3495.88, "text": " And then we just change the 
view."}, {"start": 3495.88, "end": 3498.2000000000003, "text": " We do pre-mute, blah, blah, blah."}, {"start": 3498.2000000000003, "end": 3501.76, "text": " Nothing new there, hopefully."}, {"start": 3501.76, "end": 3504.46, "text": " And finally, we hit the dense part."}, {"start": 3504.46, "end": 3507.6000000000004, "text": " And the dense part is, let me show you here."}, {"start": 3507.6000000000004, "end": 3508.76, "text": " The dense part is this part."}, {"start": 3508.76, "end": 3512.48, "text": " So that should be the row parallel linear layer, right?"}, {"start": 3512.48, "end": 3514.44, "text": " Because we're splitting the, we're now here."}, {"start": 3514.44, "end": 3515.6000000000004, "text": " We're now here."}, {"start": 3515.6000000000004, "end": 3517.1200000000003, "text": " Let me go back to the code."}, {"start": 3518.5600000000004, "end": 3519.4, "text": " Let me enter here."}, {"start": 3519.4, "end": 3520.2200000000003, "text": " Whoops."}, {"start": 3520.2200000000003, "end": 3521.36, "text": " Let me enter here."}, {"start": 3522.6400000000003, "end": 3523.48, "text": " Oh my God."}, {"start": 3525.0200000000004, "end": 3528.1400000000003, "text": " So we are in the row parallel linear layer, as I said."}, {"start": 3528.1400000000003, "end": 3530.6400000000003, "text": " So let's quickly see how it's going to work."}, {"start": 3530.64, "end": 3533.7599999999998, "text": " Again, a couple of details there."}, {"start": 3533.7599999999998, "end": 3538.56, "text": " So you can see we first applied the linear layer,"}, {"start": 3538.56, "end": 3539.8799999999997, "text": " and then we call this reduce"}, {"start": 3539.8799999999997, "end": 3542.2799999999997, "text": " from tensor model parallel region."}, {"start": 3542.2799999999997, "end": 3544.94, "text": " So let's see how that thing looks like."}, {"start": 3544.94, "end": 3547.56, "text": " That one in the forward pass is doing the all reduce."}, {"start": 3547.56, "end": 3550.7999999999997, "text": " And in the backward, it's just doing the identity function."}, {"start": 3550.7999999999997, "end": 3554.64, "text": " So it's just the, literally the polar opposite"}, {"start": 3554.64, "end": 3558.0, "text": " from what the column-wise was doing, right?"}, {"start": 3558.0, "end": 3559.52, "text": " So let's go back here."}, {"start": 3559.52, "end": 3560.58, "text": " And why is that?"}, {"start": 3560.58, "end": 3562.2799999999997, "text": " Let me show you the diagram again."}, {"start": 3562.2799999999997, "end": 3563.92, "text": " So here you can see in the forward,"}, {"start": 3563.92, "end": 3566.14, "text": " we want to all reduce these weights"}, {"start": 3566.14, "end": 3569.92, "text": " such that we end up with the correct result on both devices."}, {"start": 3569.92, "end": 3571.12, "text": " But in the backward,"}, {"start": 3571.12, "end": 3573.7599999999998, "text": " we're just copy pasting the gradients."}, {"start": 3573.7599999999998, "end": 3576.72, "text": " That's the reason why that was implemented"}, {"start": 3576.72, "end": 3577.9, "text": " in such a manner."}, {"start": 3577.9, "end": 3578.74, "text": " Okay."}, {"start": 3578.74, "end": 3582.0, "text": " So I'm going to skip all of this"}, {"start": 3582.0, "end": 3584.12, "text": " because it's not relevant."}, {"start": 3584.12, "end": 3587.0, "text": " And we are done with the parallel tension."}, {"start": 3587.0, "end": 3589.4, "text": " We are exiting the attention right now."}, {"start": 3589.4, "end": 
3592.2400000000002, "text": " And okay, now we have some residual stuff,"}, {"start": 3592.2400000000002, "end": 3594.6, "text": " blah, blah, blah, biases."}, {"start": 3594.6, "end": 3595.92, "text": " We don't care about those."}, {"start": 3595.92, "end": 3597.4, "text": " Some of those are JIT compiled"}, {"start": 3597.4, "end": 3600.0, "text": " so we wouldn't be able to step through it either way."}, {"start": 3600.96, "end": 3602.2200000000003, "text": " Post attention layer norm."}, {"start": 3602.2200000000003, "end": 3604.2400000000002, "text": " I think we're going to hit the break point here."}, {"start": 3604.2400000000002, "end": 3607.48, "text": " So I'm going to remove it, exit this function."}, {"start": 3607.48, "end": 3609.34, "text": " Let's exit the function here."}, {"start": 3611.12, "end": 3612.04, "text": " And we're back."}, {"start": 3612.04, "end": 3614.44, "text": " So we don't have the coder."}, {"start": 3614.44, "end": 3616.2000000000003, "text": " Now we hit the MLP."}, {"start": 3616.2000000000003, "end": 3618.2200000000003, "text": " So MLP is going to be this part."}, {"start": 3618.22, "end": 3620.6, "text": " Again, we're now past the data"}, {"start": 3620.6, "end": 3622.0, "text": " that came out of this part."}, {"start": 3622.0, "end": 3623.3999999999996, "text": " We're going to pass it here"}, {"start": 3623.3999999999996, "end": 3624.7599999999998, "text": " and we're going to repeat the same thing."}, {"start": 3624.7599999999998, "end": 3625.64, "text": " So I'm going to skip it."}, {"start": 3625.64, "end": 3627.9399999999996, "text": " So we're going to do the, as you can see here,"}, {"start": 3627.9399999999996, "end": 3629.68, "text": " column wise and then row wise,"}, {"start": 3629.68, "end": 3631.2, "text": " but it's the same logic."}, {"start": 3631.2, "end": 3633.64, "text": " So we can safely skip this part."}, {"start": 3633.64, "end": 3634.48, "text": " Okay."}, {"start": 3634.48, "end": 3636.6, "text": " So let me do disable break points."}, {"start": 3636.6, "end": 3640.12, "text": " We do the same thing and that's it."}, {"start": 3640.12, "end": 3641.16, "text": " We are done here."}, {"start": 3642.48, "end": 3644.3999999999996, "text": " Let me just exit this part here"}, {"start": 3644.3999999999996, "end": 3646.3999999999996, "text": " and let's return back the value."}, {"start": 3646.4, "end": 3649.44, "text": " And that was the layer one."}, {"start": 3649.44, "end": 3651.96, "text": " We obviously repeat that 12 times."}, {"start": 3651.96, "end": 3655.1600000000003, "text": " So I'm going to put the break point here, hit F5."}, {"start": 3655.1600000000003, "end": 3656.8, "text": " And now we're here."}, {"start": 3656.8, "end": 3660.14, "text": " So let's see where we are now here with the diagram."}, {"start": 3660.14, "end": 3663.44, "text": " So the data has successfully propagated all the way."}, {"start": 3663.44, "end": 3664.28, "text": " Let me change."}, {"start": 3664.28, "end": 3666.1600000000003, "text": " I'm not sure what color is the best one here."}, {"start": 3666.1600000000003, "end": 3668.52, "text": " So maybe let's do it like this."}, {"start": 3668.52, "end": 3669.58, "text": " So we are now here."}, {"start": 3671.46, "end": 3674.96, "text": " We successfully propagated our activations"}, {"start": 3674.96, "end": 3676.36, "text": " all the way to here."}, {"start": 3676.36, "end": 3678.36, "text": " And now we'll be passing them slowly"}, {"start": 3678.36, "end": 3680.6400000000003, 
"text": " to the second stage of the pipeline."}, {"start": 3680.6400000000003, "end": 3683.0, "text": " So let's go back to the code."}, {"start": 3683.0, "end": 3685.8, "text": " In case we are in the last stage of the pipeline,"}, {"start": 3685.8, "end": 3688.08, "text": " we'd be doing this additionally,"}, {"start": 3688.08, "end": 3690.6800000000003, "text": " passing it through the layer norms."}, {"start": 3690.6800000000003, "end": 3691.92, "text": " And that's it."}, {"start": 3691.92, "end": 3693.7200000000003, "text": " We are now here."}, {"start": 3693.7200000000003, "end": 3696.6, "text": " We can step over all of this"}, {"start": 3696.6, "end": 3699.2400000000002, "text": " and we return back the results"}, {"start": 3699.2400000000002, "end": 3701.2400000000002, "text": " from our transformer language model."}, {"start": 3701.2400000000002, "end": 3703.52, "text": " Guys, this is pretty much it."}, {"start": 3703.52, "end": 3704.58, "text": " Like this is,"}, {"start": 3704.58, "end": 3708.2999999999997, "text": " we've seen the main concepts already."}, {"start": 3708.2999999999997, "end": 3710.7799999999997, "text": " There is now a couple of additional details here."}, {"start": 3710.7799999999997, "end": 3714.2999999999997, "text": " So like post language model processing was this one."}, {"start": 3714.2999999999997, "end": 3716.18, "text": " Let me enable the break points."}, {"start": 3716.18, "end": 3717.58, "text": " Let me hit F5."}, {"start": 3717.58, "end": 3718.86, "text": " Here we are."}, {"start": 3718.86, "end": 3720.74, "text": " So again, there's gonna be some synchronization"}, {"start": 3720.74, "end": 3723.38, "text": " between the devices in these ultimate parts."}, {"start": 3723.38, "end": 3727.1, "text": " So yeah, assume now we're gonna project those weights"}, {"start": 3727.1, "end": 3729.68, "text": " into the vocab space."}, {"start": 3729.68, "end": 3730.74, "text": " So let's see what's going on."}, {"start": 3730.74, "end": 3731.9, "text": " So we have input parallel."}, {"start": 3731.9, "end": 3736.2200000000003, "text": " That's gonna be, I guess, 8,024,024."}, {"start": 3736.2200000000003, "end": 3737.06, "text": " Yep."}, {"start": 3737.06, "end": 3739.3, "text": " And then once we do step over,"}, {"start": 3739.3, "end": 3741.06, "text": " we're gonna end up with the vocab."}, {"start": 3742.34, "end": 3746.38, "text": " So eight, yeah, 1,024,50,304."}, {"start": 3746.38, "end": 3747.7000000000003, "text": " So that's it."}, {"start": 3747.7000000000003, "end": 3749.78, "text": " Again, there was some communication happening here"}, {"start": 3749.78, "end": 3752.86, "text": " between the devices such that we have consistent results"}, {"start": 3752.86, "end": 3754.26, "text": " on both devices."}, {"start": 3754.26, "end": 3755.56, "text": " And that's it."}, {"start": 3755.56, "end": 3756.88, "text": " Let's continue here."}, {"start": 3758.1800000000003, "end": 3759.02, "text": " Whoops."}, {"start": 3759.02, "end": 3761.9, "text": " We're now almost at the bottom of the stack."}, {"start": 3761.9, "end": 3764.2599999999998, "text": " So we are now in this float 16 module."}, {"start": 3764.2599999999998, "end": 3767.22, "text": " What we now do is because we're in the last stage,"}, {"start": 3767.22, "end": 3768.54, "text": " if we're in the last stage,"}, {"start": 3768.54, "end": 3771.14, "text": " we convert from float 16 to float 32,"}, {"start": 3771.14, "end": 3772.86, "text": " and then we return back the results."}, 
{"start": 3772.86, "end": 3773.78, "text": " Okay."}, {"start": 3773.78, "end": 3778.78, "text": " So let's exit here and we are back."}, {"start": 3778.9, "end": 3782.42, "text": " So again, now we're gonna have this send forward function."}, {"start": 3782.42, "end": 3786.62, "text": " And you can see here, if it's not the last stage,"}, {"start": 3786.62, "end": 3788.02, "text": " then communicate."}, {"start": 3788.02, "end": 3790.74, "text": " So that means all of the stages will have to pass the data"}, {"start": 3790.74, "end": 3792.9, "text": " to some other device,"}, {"start": 3792.9, "end": 3795.38, "text": " unless you're at the last stage of the pipeline,"}, {"start": 3795.38, "end": 3796.3, "text": " which makes sense."}, {"start": 3796.3, "end": 3798.64, "text": " Again, communicate is a blocking method."}, {"start": 3798.64, "end": 3800.54, "text": " So that means if you're waiting for the data,"}, {"start": 3800.54, "end": 3801.62, "text": " you'll be blocked."}, {"start": 3801.62, "end": 3803.82, "text": " That's why you have the concept of micro batches"}, {"start": 3803.82, "end": 3805.54, "text": " so that you can minimize the waiting,"}, {"start": 3805.54, "end": 3807.3, "text": " the idle time of the devices."}, {"start": 3807.3, "end": 3809.22, "text": " Hopefully that makes sense."}, {"start": 3809.22, "end": 3813.06, "text": " Okay, so in the last stage of the pipeline,"}, {"start": 3813.06, "end": 3816.94, "text": " you do this additional vocab parallel cross entropy."}, {"start": 3816.94, "end": 3821.5, "text": " And I can quickly maybe enter this one,"}, {"start": 3821.5, "end": 3824.34, "text": " although I think it's gonna be fairly detailed."}, {"start": 3824.34, "end": 3826.14, "text": " There's a lot of details here,"}, {"start": 3826.14, "end": 3828.1, "text": " but like the concepts are the same,"}, {"start": 3828.1, "end": 3829.58, "text": " like nothing new there."}, {"start": 3829.58, "end": 3832.26, "text": " So I'm gonna slowly step out of all of this."}, {"start": 3832.26, "end": 3835.1, "text": " So yeah, as you can see,"}, {"start": 3835.1, "end": 3836.7400000000002, "text": " they have devices you have to communicate"}, {"start": 3836.7400000000002, "end": 3838.78, "text": " such that we end up with the correct logits"}, {"start": 3838.78, "end": 3841.62, "text": " and then blah, blah, blah."}, {"start": 3841.62, "end": 3843.82, "text": " You can go through this one at your own pace."}, {"start": 3843.82, "end": 3848.1800000000003, "text": " I think it should be fairly easy to understand it."}, {"start": 3848.1800000000003, "end": 3851.2200000000003, "text": " Okay, let's exit that part."}, {"start": 3851.2200000000003, "end": 3852.4, "text": " Let me exit it."}, {"start": 3853.44, "end": 3854.28, "text": " And that's it."}, {"start": 3854.28, "end": 3855.82, "text": " We just do some summation."}, {"start": 3855.82, "end": 3858.1800000000003, "text": " We return back the loss."}, {"start": 3858.1800000000003, "end": 3860.6600000000003, "text": " We do some additional overviews here."}, {"start": 3860.6600000000003, "end": 3863.54, "text": " And now we keep on doing this."}, {"start": 3863.54, "end": 3866.6600000000003, "text": " Ultimately, you'll end up with the evaluation metrics."}, {"start": 3866.6600000000003, "end": 3868.6600000000003, "text": " I'm gonna disable the breakpoints,"}, {"start": 3868.6600000000003, "end": 3872.6400000000003, "text": " hit that five here and exit here."}, {"start": 3872.64, "end": 3876.2599999999998, "text": 
" And I'll let's say it's from here and that's it."}, {"start": 3876.2599999999998, "end": 3878.62, "text": " Now, if we are on the last rank,"}, {"start": 3878.62, "end": 3880.22, "text": " we just, and by the way,"}, {"start": 3880.22, "end": 3881.98, "text": " here's how the last rank is calculated"}, {"start": 3881.98, "end": 3883.3399999999997, "text": " in this particular example."}, {"start": 3883.3399999999997, "end": 3887.44, "text": " So here you can see that we just get the rank"}, {"start": 3887.44, "end": 3889.94, "text": " without any group, which means that now,"}, {"start": 3889.94, "end": 3893.56, "text": " however many devices we have like in total,"}, {"start": 3893.56, "end": 3895.98, "text": " we just find its absolute rank."}, {"start": 3895.98, "end": 3898.64, "text": " So that means that for example,"}, {"start": 3898.64, "end": 3900.2, "text": " this one will be ranked zero,"}, {"start": 3900.2, "end": 3902.9399999999996, "text": " this one will be ranked one, two, three,"}, {"start": 3902.9399999999996, "end": 3904.8399999999997, "text": " then five, six, seven, eight."}, {"start": 3904.8399999999997, "end": 3907.48, "text": " We will be returning those results."}, {"start": 3907.48, "end": 3910.8799999999997, "text": " So basically we only on the last rank,"}, {"start": 3910.8799999999997, "end": 3913.52, "text": " so only on a single device in our whole cluster,"}, {"start": 3913.52, "end": 3914.8799999999997, "text": " we do this operation."}, {"start": 3914.8799999999997, "end": 3919.8799999999997, "text": " So basically, yeah, just some blah, blah, blah perplexity."}, {"start": 3920.02, "end": 3922.06, "text": " And then we print the results,"}, {"start": 3922.06, "end": 3923.7799999999997, "text": " but we don't care about the actual results."}, {"start": 3923.7799999999997, "end": 3925.06, "text": " So yeah, so you can see here,"}, {"start": 3925.06, "end": 3928.5, "text": " average loss perplexity adjusted perplexity, blah, blah, blah."}, {"start": 3928.5, "end": 3929.64, "text": " Nothing interesting there."}, {"start": 3929.64, "end": 3931.52, "text": " And now finally we're done."}, {"start": 3931.52, "end": 3933.8599999999997, "text": " Okay, guys, we've seen a lot so far."}, {"start": 3933.8599999999997, "end": 3935.3599999999997, "text": " Let me do just a quick recap"}, {"start": 3935.3599999999997, "end": 3937.42, "text": " and then we can wrap this video up."}, {"start": 3937.42, "end": 3939.8399999999997, "text": " So we have a couple of differences depending on"}, {"start": 3939.8399999999997, "end": 3942.12, "text": " whether we are dealing with the last stage of the pipeline,"}, {"start": 3942.12, "end": 3943.62, "text": " with the first stage of the pipeline,"}, {"start": 3943.62, "end": 3946.2999999999997, "text": " or the intermediate stages of the pipeline."}, {"start": 3946.2999999999997, "end": 3948.96, "text": " So let me go and run the eval script again."}, {"start": 3948.96, "end": 3951.14, "text": " So I'm gonna hit run here."}, {"start": 3951.14, "end": 3952.7999999999997, "text": " And I have a breakpoint set"}, {"start": 3952.7999999999997, "end": 3955.56, "text": " inside of this forward step function."}, {"start": 3955.56, "end": 3957.2799999999997, "text": " There's a couple of things I wanna explain here."}, {"start": 3957.28, "end": 3960.7200000000003, "text": " So first things first is this receive forward function."}, {"start": 3960.7200000000003, "end": 3963.84, "text": " Then I'm gonna explain the differences in the 
forward pass"}, {"start": 3963.84, "end": 3966.7200000000003, "text": " through the model and then the send forward part."}, {"start": 3966.7200000000003, "end": 3969.44, "text": " So first things first, here, if we enter,"}, {"start": 3969.44, "end": 3972.6600000000003, "text": " you can see that the first stage in the pipeline"}, {"start": 3972.6600000000003, "end": 3976.3, "text": " will have the input tensor set to none, as you can see here,"}, {"start": 3976.3, "end": 3978.6400000000003, "text": " which means that they'll actually be fetching the data"}, {"start": 3978.6400000000003, "end": 3979.8, "text": " from the data loader."}, {"start": 3979.8, "end": 3982.6400000000003, "text": " So they'll grab the chunk of data from the data loader"}, {"start": 3982.6400000000003, "end": 3985.48, "text": " and pass that as the input into the embedding table"}, {"start": 3985.48, "end": 3986.96, "text": " and then the transformer layers."}, {"start": 3986.96, "end": 3989.2400000000002, "text": " So again, let me show you the diagram here."}, {"start": 3989.2400000000002, "end": 3991.52, "text": " So that means those guys, the first stage,"}, {"start": 3991.52, "end": 3994.4, "text": " will grab the chunk of data from the batch"}, {"start": 3994.4, "end": 3998.28, "text": " and will pass that data through the, as you can see here,"}, {"start": 3998.28, "end": 4000.88, "text": " embedding table and only they have the embedding table,"}, {"start": 4000.88, "end": 4004.7400000000002, "text": " if you recall from the explanation back then."}, {"start": 4004.7400000000002, "end": 4007.7200000000003, "text": " And then they'll pass the data through the transformer layers"}, {"start": 4007.7200000000003, "end": 4009.8, "text": " and ultimately they end up with this,"}, {"start": 4009.8, "end": 4013.12, "text": " with the output activations denoted in gray color here."}, {"start": 4013.12, "end": 4014.68, "text": " Okay."}, {"start": 4014.68, "end": 4017.72, "text": " And then all of the other stages will block"}, {"start": 4017.72, "end": 4019.0, "text": " on the communicate method."}, {"start": 4019.0, "end": 4022.96, "text": " And that means they'll be waiting until the previous stage"}, {"start": 4022.96, "end": 4026.1, "text": " sends the activations to this stage."}, {"start": 4026.1, "end": 4028.52, "text": " And then it will continue on the execution."}, {"start": 4028.52, "end": 4031.0, "text": " So that means for these guys here,"}, {"start": 4031.0, "end": 4035.44, "text": " they'll be blocked until the gray activations are sent"}, {"start": 4035.44, "end": 4039.96, "text": " to the second stage here, and then they'll do the forward pass."}, {"start": 4039.96, "end": 4041.72, "text": " Okay, so that's first thing."}, {"start": 4041.72, "end": 4046.24, "text": " Then let's go back to the forward step."}, {"start": 4046.24, "end": 4049.3199999999997, "text": " We have the forward pass through the model."}, {"start": 4049.3199999999997, "end": 4052.16, "text": " So I'm gonna now just enable all of the breakpoints"}, {"start": 4052.16, "end": 4053.72, "text": " and let me hit that five just to enter"}, {"start": 4053.72, "end": 4055.7799999999997, "text": " into the actual forward pass."}, {"start": 4055.7799999999997, "end": 4059.24, "text": " So I want to get, whoops, let me just do this."}, {"start": 4059.24, "end": 4062.64, "text": " I'm gonna basically disable this"}, {"start": 4062.64, "end": 4064.3799999999997, "text": " and put a breakpoint here."}, {"start": 4064.3799999999997, "end": 
4066.24, "text": " So this is where the difference happens."}, {"start": 4066.24, "end": 4067.72, "text": " So basically this is, as you can see here,"}, {"start": 4067.72, "end": 4070.7999999999997, "text": " GPT model forward, depending on whether we're dealing"}, {"start": 4070.8, "end": 4074.1200000000003, "text": " with the last stage or any other stage of the pipeline,"}, {"start": 4074.1200000000003, "end": 4076.6000000000004, "text": " we'll be entering this branch here."}, {"start": 4076.6000000000004, "end": 4079.6600000000003, "text": " And as you can see here, if I enter this one,"}, {"start": 4079.6600000000003, "end": 4084.6600000000003, "text": " this thing does converts the activations into the logits."}, {"start": 4086.04, "end": 4088.0, "text": " So that means we'll end up"}, {"start": 4088.0, "end": 4090.76, "text": " with the vocab space dimensionality."}, {"start": 4090.76, "end": 4093.84, "text": " So let me digest what I just said there."}, {"start": 4093.84, "end": 4095.88, "text": " So if you take a look at the LM output,"}, {"start": 4095.88, "end": 4097.66, "text": " all of the stages will be just returning"}, {"start": 4097.66, "end": 4099.02, "text": " the pure activations."}, {"start": 4099.02, "end": 4102.64, "text": " So that means 8024,024."}, {"start": 4102.64, "end": 4106.160000000001, "text": " So those are the raw activations from the transformer block."}, {"start": 4106.160000000001, "end": 4108.4800000000005, "text": " But the last stage will be returning"}, {"start": 4109.320000000001, "end": 4112.56, "text": " this processed information because it maps"}, {"start": 4112.56, "end": 4117.56, "text": " the tokens into the vocab space."}, {"start": 4119.540000000001, "end": 4120.96, "text": " So let me show you what I mean by that."}, {"start": 4120.96, "end": 4125.52, "text": " So if we exit here, the outputs for the last stage will be,"}, {"start": 4125.52, "end": 4130.4400000000005, "text": " whoops, I think I have to do step over here"}, {"start": 4130.4400000000005, "end": 4132.02, "text": " and then let's do output shape."}, {"start": 4132.02, "end": 4133.400000000001, "text": " So you can see here 50K."}, {"start": 4134.52, "end": 4138.96, "text": " It will be 25K in the case of having sharded,"}, {"start": 4138.96, "end": 4140.5, "text": " the model sharding."}, {"start": 4141.4400000000005, "end": 4143.120000000001, "text": " And yeah, that's it."}, {"start": 4143.120000000001, "end": 4145.540000000001, "text": " So let me go outside of this thing."}, {"start": 4146.92, "end": 4148.0, "text": " And now we're here."}, {"start": 4148.0, "end": 4152.160000000001, "text": " Okay, so now, again, remember output is one of two things."}, {"start": 4152.160000000001, "end": 4154.96, "text": " It's either the raw activations from the transformer layer"}, {"start": 4154.96, "end": 4159.44, "text": " or it's the tokens mapped into the vocab space"}, {"start": 4159.44, "end": 4161.0, "text": " for the last stage of the pipeline."}, {"start": 4161.0, "end": 4164.92, "text": " And then the send forward, as you can see here,"}, {"start": 4164.92, "end": 4166.72, "text": " all of the stages will be communicating"}, {"start": 4166.72, "end": 4168.24, "text": " the activations to the next stage,"}, {"start": 4168.24, "end": 4169.76, "text": " but the last stage will just skip."}, {"start": 4169.76, "end": 4173.7, "text": " As you can see here, if not the last stage,"}, {"start": 4173.7, "end": 4174.88, "text": " then communicate."}, {"start": 4174.88, "end": 
4176.84, "text": " If it is the last stage, we'll just ignore this."}, {"start": 4176.84, "end": 4180.96, "text": " This is a no-op operation, no-op for the last stage."}, {"start": 4180.96, "end": 4183.56, "text": " Okay, and finally, as you can see here,"}, {"start": 4183.56, "end": 4186.72, "text": " we have the last stage branch here,"}, {"start": 4186.72, "end": 4190.72, "text": " which means that the last stage will pass the logits"}, {"start": 4192.240000000001, "end": 4195.84, "text": " into this function here and calculate the cross entropy,"}, {"start": 4195.84, "end": 4198.240000000001, "text": " and then it will return back the loss."}, {"start": 4198.240000000001, "end": 4201.0, "text": " Whereas all of the other stages"}, {"start": 4201.0, "end": 4203.320000000001, "text": " first communicated information to the next stage,"}, {"start": 4203.320000000001, "end": 4205.52, "text": " and then they return none."}, {"start": 4205.52, "end": 4208.4400000000005, "text": " And then if we go a level up from this function,"}, {"start": 4208.4400000000005, "end": 4209.72, "text": " so this is the forward step,"}, {"start": 4209.72, "end": 4212.0, "text": " you can see that the output, again, for the last stage"}, {"start": 4212.0, "end": 4214.28, "text": " will be all reduced and then just added"}, {"start": 4214.28, "end": 4216.7, "text": " to this total output, which will finally return"}, {"start": 4216.7, "end": 4220.04, "text": " and used to calculate the perplexity, et cetera, et cetera."}, {"start": 4220.04, "end": 4221.56, "text": " So hopefully that makes sense."}, {"start": 4221.56, "end": 4223.28, "text": " Again, let me go through the diagram."}, {"start": 4223.28, "end": 4225.1, "text": " I think it's fairly straightforward."}, {"start": 4225.1, "end": 4228.46, "text": " So again, the first stage here will be grabbing the data"}, {"start": 4228.46, "end": 4231.6, "text": " from the data loader, who will be passing that data"}, {"start": 4231.6, "end": 4232.96, "text": " through the embedding table"}, {"start": 4232.96, "end": 4234.46, "text": " and then through the transformer layers"}, {"start": 4234.46, "end": 4236.12, "text": " until we have the raw activations,"}, {"start": 4236.12, "end": 4238.7, "text": " and then they'll be passing those activations"}, {"start": 4238.7, "end": 4239.96, "text": " onto the next stage."}, {"start": 4239.96, "end": 4243.58, "text": " Whereas the next stage will be waiting for the activations"}, {"start": 4243.58, "end": 4246.8, "text": " and then grabbing the activations from the previous stage,"}, {"start": 4246.8, "end": 4251.0, "text": " passing those activations through the 12 transformer layers,"}, {"start": 4251.9, "end": 4254.74, "text": " and finally calculating the logits,"}, {"start": 4254.74, "end": 4256.56, "text": " calculating the cross entropy loss,"}, {"start": 4256.56, "end": 4259.24, "text": " and then we'll be accumulating that information."}, {"start": 4259.24, "end": 4262.0, "text": " If we were to have additionally intermediate stages"}, {"start": 4262.0, "end": 4263.38, "text": " in this diagram, obviously,"}, {"start": 4263.38, "end": 4266.0, "text": " then those would be receiving the activations"}, {"start": 4266.0, "end": 4267.84, "text": " from the previous stage and then just passing"}, {"start": 4267.84, "end": 4269.26, "text": " the activations to the next stage."}, {"start": 4269.26, "end": 4272.12, "text": " So we kind of have three different branches"}, {"start": 4272.12, "end": 4274.400000000001, 
"text": " in this pipeline logic."}, {"start": 4274.400000000001, "end": 4277.06, "text": " Cool, that was a long video."}, {"start": 4277.06, "end": 4278.22, "text": " Hopefully you liked it."}, {"start": 4278.22, "end": 4281.2, "text": " If you did, I would really appreciate any feedback."}, {"start": 4281.2, "end": 4282.320000000001, "text": " This will be the last video"}, {"start": 4282.320000000001, "end": 4284.68, "text": " from the large language model series."}, {"start": 4284.68, "end": 4287.8, "text": " I might create one more on the GPT-NewX20B"}, {"start": 4287.8, "end": 4290.24, "text": " if I have any interest."}, {"start": 4290.24, "end": 4294.18, "text": " If you'd like me to go through the GPT-NewX20B code base,"}, {"start": 4294.18, "end": 4295.4400000000005, "text": " do comment down below"}, {"start": 4295.4400000000005, "end": 4298.42, "text": " and I'll be monitoring the comments."}, {"start": 4298.42, "end": 4300.36, "text": " In any case, if you found this video useful,"}, {"start": 4300.36, "end": 4302.6, "text": " consider subscribing, sharing it out."}, {"start": 4302.6, "end": 4305.0, "text": " That's the best way you can support the channel."}, {"start": 4305.0, "end": 4306.92, "text": " And until next time, bye bye."}, {"start": 4306.92, "end": 4331.62, "text": " Ab\u016b La mum\u1e6daff\u0101\u1e6d \ud0d3 rather le,"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=5RUOrXl3nag
OPT-175B: Open Pretrained Transformer | ML Coding Series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I do a deep dive into the metaseq (Open Pretrained Transformer, OPT-175B) codebase. I first show you how to set up the code on your machine, and then I walk you through the codebase behind Meta's large language model (LLM). Along the way I cover key concepts behind mixed precision training such as loss scaling and unscaling, and much more. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My fork of metaseq: https://github.com/gordicaleksa/metaseq ✅ Instructions for Apex: https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/start_fast.md ✅ Ultimate guide to scaling vid: https://www.youtube.com/watch?v=hc0u4avAkuM ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro - open pretrained transformer 00:01:25 Setup (creating the conda env) 00:06:00 Setup (patch the code) 00:11:40 Collecting train script arguments 00:17:49 Training script walk-through 00:25:25 Constructing a dummy task 00:28:30 Building the transformer model 00:35:22 CUDA kernels (C++ code) 00:46:30 Preparing a dummy dataset 00:50:00 Training loop 00:54:05 Zero grad (loss scaling) 00:56:15 Forward pass through a transformer 01:02:20 (IMPORTANT) loss, scaling, mixed precision, error handling 01:13:25 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #opt #meta #largelanguagemodels #transformers
What's cracking guys? Alex here. In this video, I'll be walking you through the code base behind the recent Opt 175B Transformer from Meta and if you're curious to learn about the actual theory about the paper I covered this paper as well as GPT Neo X20B and as well as Big Science blue models So the corresponding papers in the previous video, so I'm gonna link the card somewhere up there in this video I'll be walking you through as I said through the code base behind the opt model basically, that's the meta sec code base and My idea will be to first show you how to set up this on your local machine Because the system was designed the code base was designed to be run on the Linux distributed system So that means multi node multi GPU setting whereas I have windows single GPU machine So that kind of made it a bit harder to get started So that's the first thing the second thing I want to show you some of the concepts I introduced in my Ultimate guide to scaling video. I'm gonna link the card as well somewhere here Basically, I'll focus on the mixed precision training and I'll show you some of the concepts such as loss scaling Etc and also you'll just get to see how this product well not a production level But like a code base behind one of these large language models looks like so hopefully that's gonna be valuable Okay, let's get started. The first thing we need to do is open up the setup guide So there is the setup instructions here, which we're going to follow. It's fairly straightforward There's a couple of caveats, but that's pretty much it. So I'm gonna open up my anaconda prompt here I'm going to first create a conda environment. So conda conda create Name I'm gonna use opt to because I already have Opt environment setup so I have to change the name there and I'm gonna use the Python 3.9 so I'm gonna hit enter there and This will be quickly over and once we have conda Installed will start installing the necessary packages directly inside of it. Okay, this is gonna be quite fast. So activate the environment Clear just to clear the screen a bit and now I'm gonna just copy paste this command here So we do this pip install of basically torch and torch vision etc. So just hit this command and That should execute in a couple of minutes. Okay guys, so that's that's installed now. Let's continue So the next step would be to install the apex library from from Nvidia because I'm on windows This does not work windows is not supported so you can follow these steps on your own if you're on windows Just skip this part. There is a bit better instruction for how to install apex in case you have some mismatch of the CUDA versions, etc, etc and Like some some basically tips on how to overcome those potential problems in this big science Basically repo I'll link that link down in the video description so you can check it out. So I'm skipping the apex Let's now continue on installing the Megatron. So let's just copy paste this part here Well before that, sorry, let's just first clone the the actual repo. So I'm gonna do the following and I'm just create a Directory here. I'm gonna name it opt I'm going to just do a clone operation inside of the directory if I manage to open up it the folder So here's the the git bash Just go and basically copy the link the URL do git clone here and that's gonna download the actual meta sec repo in the in this target directory So once we are there, let me just navigate using my my environment. 
So I'm gonna open up this thing here I'm gonna navigate here in the in the console to that to the root of my code base and Now I'm gonna follow the rest of the instructions. So let's now do the git clone So let's do the Megatron installation You can just copy paste this directly in the terminal everything is gonna execute automatically except for the last line So hit the pip install command here That's gonna install the Megatron. Hopefully fairly quickly. Yeah. Okay, that's cool. Let's continue on the second step now the important thing you need to do here is just change go go back to the root to to meta sec and Don't install don't obviously don't you don't want to install inside of the Megatron the fair scale library So I'm just gonna copy paste this again here. We're installing the fair scale and I hit pip install For the last line and we'll install the fair scale as well. It's kind of smooth so far No, no problems here. Okay, finally Because we've already installed we already cloned the meta sec. We just need to go to the root so change directory to the root and now we do pip install but before that there is a Additional tweak I have to do and that's in the setup file here I have to basically open up this this file and find the Aim library and this one also doesn't work on Windows for whatever reason I just didn't want to hustle around it so I just kind of comment it out and that's it after this you can hit the the pip install command and Everything should work as expected. Let me paste this likely enter and that's gonna be installed now Okay, there is one more command we have to execute and that's not in this manual So that's this pip install setup tools this version 59 point five point. Oh There is some weird version error going on if I don't do this So I had to I kind of found this on the on some random website as a solution and finally This is not necessary unless you want to to like basically contribute to the to the repo this pre-commit install But let's just do it either way. Okay, cool. I'm gonna clear the console here. Everything is ready now now Let's open up the actual code. So I'm gonna use VS code as usually VS code here and I'm gonna just do the whoops open folder Find this meta sec directory and just select folder That should be pretty much it So now what we want to do is select the correct interpreter and that's the opt-to so let's just open something Let me open up some of the files will need a bit later. So this is gonna be the training file Let me hit the nope. I don't want to dead So I just want to hit the ctrl shift P select interpreter and let's select the opt-to Interpreter here and that's now it the final thing we have to do is a couple of patches Obviously I had to sort it out before before the video. So let me just show you what those are. I Have a working code here. So just to get this tool to show you some of the differences Some of the patches I had to apply so first thing was and we've already done This is to comment out the aim this might not be necessary for for your case But just like that's something that I had to do. Okay, then there is this line in trains I'm just gonna copy paste this one. You obviously won't have to do this because I'll basically create a clone Directly and and push it to my to my github, but let me let me return here So this is something we have to add Basically here. 
I'm gonna just save it there Then let's see what else so let me exit here We have a couple more steps Metasec tasks Metasec tasks base task we have to add this weird line to convert it to long Otherwise, he's gonna complain. So that's the base task inside of meta task tasks So let me open up that file again You will have you can skip this all of this because you can just start from my from my code So I'm gonna do this so meta sec find the tasks find the best base task and find this line so this is basically Let me find this thing so filter oops filter examples blah blah blah, so it's here So I just paste that line here. It can be cleaner. Obviously I'm just doing it like this like you should import it in on top of the file But this is just the whole point is for me to enable this code So that I can easily run it and and do a stepping through of the code base Okay, so I think that's one step then we have the sweep We can actually skip this part. We have we don't have to do this Then there is this thing so we have to change from 2 to 1 because I only have one GPU Otherwise this will fail because as you can see here, this is the model parallel argument So I'm just gonna do that. So that's meta sec launcher up job constants. I'm gonna just patch that quickly here so let's go to the Metasec launcher so meta sec here is the launcher here the constants and We just put ones here. These are the small smallest models You can see the 8 million the 125 million and the 350 million the tiny small and medium ones. Okay And the final thing we have to do is in the up baselines We have to comment out this if arcs number of GPUs smaller than 2 then raise an error Because obviously I have just single ones I'll have to do that and there is this one here with the environment thingy So it was obviously a lot of trial and error to to to get this uh, like sorted out but yeah Okay, let me find the uh up baselines so arcs Num of GPUs Here it is. So I have to comment out this line And the final patch that I have to apply is this one here. I'm just gonna do Copy paste of this on top of on the bottom of the file and that's it Okay guys, that's pretty much it Okay, having done all of this, let's go back to the repo and now we want to train this thing So let's see what the instructions say so they have the this training link here You open it up and the only thing you have is this so from here I have to had to work my way around and figure out what's going on. So Let's let's see what i'm gonna do. So i'm gonna instead just copy paste the uh The launch arguments I already have in the in in my code Which I've already previously Been running so i'm just gonna copy paste all of these arguments here and you'll see what's the difference basically Let's just give me a second here So i'm gonna open up this one Of baselines i'm gonna create Uh, basically a config file here for the for the for running the code and let's just paste those arguments So let's see what's the difference between this thing and what they show here So first things first, uh, you can notice I have the benchmark in the local The local makes it so that I can run this on the local machine obviously and the benchmark Was necessary to basically So that you can skip downloading bigdp So that you can skip downloading big data sets and this will just load some use some dummy data loaders and Everything else will stay the same. So that's kind of cool. 
Okay, so number of nodes was obviously I had to change it to one It was two here And number of gpus I had to set it to one instead of eight here And other than that everything else remains the same Okay, so now let's run this code. I'm gonna zoom in a little bit here And i'm going to set a breakpoint here and let's just run this code. Okay, let me run this Hit run And the whole point of this script will be to collect the arguments That then we are going to use for the train script because they don't have any instructions for how to start the actual train script So this is a workaround. I kind of figured out um, let's see what's going on here, so we have to Go here And then sweep main so this is maybe helpful for you to understand my my process and then basically In case you need to figure out something on your own, uh, you can you can follow a similar workflow. So we end up here Then we enter this uh back end main i'm gonna put a breakpoint here Let's just continue the whole point was to find so let's continue here The whole point was to find the actual command line that's being used to call the train script So we'll see how that looks like in a second. Okay, let me do this Let me just find uh the so train commands. We don't care about this code We just want to find the the the final code that's going to be run. So i'm gonna put uh Breakpoint here hit f5 Uh, let's continue here. Let me just find yeah local run. So this is where we want to go um f5 Enter the local run and here it is. So here is the actual piece of code that will call the train script So we have this train command. So it's python blah blah blah. It's calling as you can see here, hopefully um Let me just see whether I can shift this. Okay. So what i'm gonna do is this i'm gonna open up the debug console here and i'm simply going to Capture all of these arguments which we will then use to to to run the the the train script So we don't care about this n thingy. Uh, we only care about about the uh actual commands that were passed to the train script Okay, you can see here python blah blah. It's calling the train script. So this is the thing you can do. So we open um, basically command Uh take take stay uh, and then write mode Uh as f and then we basically want to write to that file This train command and this is going to obviously save This command and capture it and so i'm gonna do that and let's see what's going on. Okay, so we've saved that somewhere here Hopefully, yeah commands And then you can just copy paste all of these and start playing from there So that was my first part of the process So it took some time to figure out how to how to call the train script and this was my workaround To to get the actual commands and then came the tweaking part. So i'm gonna just again Kind of speed this up by by copy pasting the actual Arguments you can see it's super duper verbose Um, i'm gonna copy paste all of this and i'm gonna well i'm actually gonna copy paste everything Including these comments I made here so I can explain what i've done. Uh, so let's go to the code here again You will not have to do this because because um, you'll have this already Uh in my in on my github, uh repo, okay, let's go here. Um Let's rename this as as opt baselines And i'll have to create another config here For the train run, I will just add configuration Python python file and then i'll just add the arguments here. Okay That's it. Hopefully if I haven't made any mistakes here, we don't need this And that's pretty much it. 
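As a rough sketch of the capture trick described here (not metaseq's actual code): pause inside the launcher's local_run in the debugger and dump the assembled command to a file from the debug console. The variable name train_cmd and the output file name below are assumptions standing in for whatever the launcher actually builds.

```python
# Stand-in for the command list the sweep launcher assembles; when paused in
# the debugger you would reference the real variable instead of this placeholder.
train_cmd = ["python", "train.py", "--task", "dummy_lm", "--arch", "transformer_lm"]

# Dump it to a file so the arguments can be reused to launch the train script directly.
with open("captured_train_command.txt", "w") as f:
    f.write(" ".join(str(part) for part in train_cmd))
```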
I'm going to rename this file to This this config to train script Uh, and let me show you what's going on here. So a lot of arguments obviously, uh, this is um What you get when you're working with large language models and in general with with libraries such as such as metasec which is I think cloned and then um Modified version of the fairsec library. Okay so this is pretty much the same thing as um as what we get when we run the Previous script with some modifications So some of those modifications I had to me was like we we I can't use fully sharding because I have a single gpu I can't use the parallel version of cross entropy So I had to kind of change it to cross entropy instead of cross entropy parallel or something I was just using transformer lm instead of transformer lm megatron or something and finally, um Windows was complaining I guess any any operating system will complain about super long paths So there was some super long path that we get dumped from the previous script So I had to kind of shorten them into this one So test v0 and finally I had to remove the checkpoint activations and distribute checkpoint activations This is not well, it's actually necessary Otherwise you you will face some bugs but like I didn't bother to try and figure it out So if you want to kind of play with the checkpointing, uh, which is the the technique where you basically trade off the memory versus the the the the speed of your backward pass Uh, then you can kind of enable this and try and figure it out yourself But like i'm going to focus on the mixed precision a part of the of the code. Okay Made a couple more tweaks. Uh, you will be able to just copy paste this Okay, I think a couple of things I had to change was like the batch size because otherwise you I was getting oom's Out of memory exceptions and I had to change to like have 128 tokens per per batch Uh per sentence instead of like 2,048 by default or something like that, but that's it again You'll copy paste this thing, uh, and you can start from here Okay, guys, let's finally get to the transcript. Let me set a breakpoint here Uh, let me use the transcript config And actually i'm gonna set another breakpoint down below Here and now we are ready to to to run Okay, guys, let's start going through the code here i'm gonna focus on on on the Well, i'm gonna show you details that that are different compared to your uh toy Uh repos so to speak because this is after all using this fairly powerful Library, uh that was developed by meta So yeah, there is a lot of code that that makes this harder to read Uh, but like more robust to various errors. Okay, so let's continue here So first things first i'm gonna enter this parser function and I want to focus on a couple of details Like first of all, I think it's quite kind of interesting how they are picking up, uh various arguments uh, it might not be the most readable way to do it, but it's uh Interesting in the sense that let me let me show what I mean So here there is this common config so common config will be just a data class that contains a bunch of these fields that uh basically define uh various, uh variables which are which belong to the common group of variables of arguments such as uh logging interval tensorboard logdir Uh various uh thingies like this aim repo. So that's the this visualization tool there Then there is the weights and biases project here Azure ml logging bunch of details. 
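On the checkpoint-activations flag mentioned here: it trades memory for recompute in the backward pass. Below is a minimal, generic PyTorch sketch of the idea using torch.utils.checkpoint, not metaseq's own checkpoint wrapper.

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 512),
)
x = torch.randn(8, 512, requires_grad=True)

# Normal forward: every intermediate activation stays alive until backward.
y = layer(x)

# Checkpointed forward: only the input is stored; intermediates are recomputed
# during backward (roughly 2x slower backward, much lower activation memory).
y_ckpt = checkpoint(layer, x)
y_ckpt.sum().backward()
```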
So yeah, I won't focus on those but I just kind of want to stress that that's neat so once we Basically execute this line. We'll pick up all of the arguments and default values from this config Okay Next up they have this these registries. So this is probably less interesting Let me just see here. Uh, so yeah, they're picking up They're picking up values automatically. I mean i'm gonna skip this part because otherwise this will take too long and it's not that vital so Bottom line is now the same thing will happen in all of these other functions. So this is if you enter here You'll see that there is this uh generate parser from data class and then there is the data set config so here is the where they define some of the default values for the uh, data set Group and then uh, yeah, so we can basically continue. Let me just go back here So yeah, by the way, if you don't know how to use vs code, there is this nice call stack like a window here You just hit the uh, whatever is on top of the stack and you get back to wherever your your debugger has stopped That's kind of neat to know. Um, yeah So again bunch of different groups will add different arguments and finally we end up with a parser And that's it um now some conversion to configs it's going to be a dictionary or something and Now because we're not profiling. We're not doing this. We won't enter this branch. We're going here So let's enter the the actual main Function we can skip all of this and we enter the single gpu main branch. So the only difference is um Well, there are some differences but ultimately all of the code pets whether you're running this in distributed setting or on a single gpu We'll call the same main function So that means we'll be actually stepping through the actual code that was used to train the the huge Opt 175 billion parameter model, which is awesome. So let's let me just show you quickly this thingy here Again, there are many details Here that are relevant if you're running this in a distributed setting That might be interesting interesting for you to check it out. But here i'm going to focus on on on on this code path here So let's enter the main and let's keep on stepping through this. So let me see what's interesting. So here this is the patch I applied So finally we save the config. So this is a config that we got from all those parser steps So ultimately we just save it here to config yaml and we'll see how how the config looks like Okay. Let me just now show you how this thing's going to look like. So this is hopefully the right. Yeah, this is the right directory So let's open up the um test v0. Let's open up the config. I'm just gonna um Basically open up another visual studio instance here And let me open that file. So let's open. Oops Let's open the file here This one and here is the config yaml So i'm going to open it up here side by side. Okay. So as you can see here So as you can see here, there is a lot of a lot of arguments like I think over 400 or something like a huge amount of arguments um, this could use some some cleanup and refactoring but um I guess it works Kind of they did struggle a lot. So I don't know Uh, cool. So I won't be going through the arguments here. Obviously it beats the purpose. There is too many things going on So i'm just gonna Explain you those that we are hitting as we are executing along this code path. Um, okay So there is this nice, um Is is master function? I think it just checks so because the distributed rank is sql zero. 
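To make the dataclass-to-parser mechanism concrete, here is a simplified, hypothetical re-implementation of the idea (the real generate-parser-from-dataclass helper in metaseq handles many more cases): each field of a config dataclass, with its default value and help metadata, becomes one command-line flag.

```python
import argparse
from dataclasses import dataclass, field, fields

@dataclass
class CommonConfig:
    # Tiny stand-in for metaseq's much larger "common" argument group.
    log_interval: int = field(default=100, metadata={"help": "log every N updates"})
    tensorboard_logdir: str = field(default="", metadata={"help": "tensorboard output dir"})

def add_dataclass_args(parser: argparse.ArgumentParser, cfg_cls) -> None:
    # Every dataclass field becomes one --flag with its default and help text.
    for f in fields(cfg_cls):
        parser.add_argument(
            "--" + f.name.replace("_", "-"),
            type=type(f.default),
            default=f.default,
            help=f.metadata.get("help", ""),
        )

parser = argparse.ArgumentParser()
add_dataclass_args(parser, CommonConfig)
args = parser.parse_args(["--log-interval", "50"])
print(args.log_interval)  # 50
```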
So rank means basically if you have multiple gpus in a in a node, uh rank zero would be GPU zero in that node the rank one would be gpu one It's just like basically naming for your gpus in the node because here we are zero because we are running a single gpu Uh, we are going to be the master because we are the only gpu. So we are the master automatically. Okay Let's continue here. Um I'm going to skip all of that just some asserts Ignoring all of that seeding the the code so that we can have reproducibility Uh, and then some additional seeding that kind of probably in the background. I think it just seeds the torch thingies Yeah torch manual seed blah blah blah I'm gonna skip those details Um verify checkpoint directory. I think this is kind of neat. So again this video I'm just showing you some details that that go into the engineering behind these llms is not only like the the actual Ml logic. That's what i'm trying to get at is super simple like you have a transformer you do everything You can literally write all of this logic conceptually. It may be Hundred lines of code or something. I might be exaggerating but uh, the point is there is so many things That you have to add to make this robust and to make this uh work on on such a large scale So this is one of the of the of the details. So let me enter this I'm gonna put a breakpoint here hit f5 Uh, okay. So let's see what they're doing. So they are literally testing with this if this test v0 directory exists And we can actually we actually have the access the the right rights to write to that directory So if it exists, uh, if it doesn't exist we create it because it does exist. We don't create it We fetch the global rank. It's going to be zero. I assume because yeah, as I said, uh, we create the temporary file So we just create some random File here and we try to open it up in a write mode that shows us that we have write access So as you can see here, we're not hitting the exception. So everything's fine there. And finally we try to remove the file So you can see here. So if I were to Go here, there is the dummy file. And now if I go Uh f10 so step over you can see how this thing is deleted So that's how they do the check that you can actually write to that directory and that's robust Okay now some logging how much memory I have etc. Etc Logging the the whole config you can see there is a bunch of data here Everything we saw in the actual yaml file in the vs code And yeah, okay. I'm gonna skip this There is the setup task. I'm gonna enter this part of the code as well. So let's see what's going on there um blah blah blah So we have the task name that's called dummylm And this is a consequence of me using the benchmark flag if you recall And um, yeah, so that's gonna be like a super simple task. We're gonna see what it does in a couple of seconds so here We just pick some uh, basically um Associated config with that task. I will not get into how they populate these Uh, there is a neat method they use in the background. Well, i'm not sure whether it's neat. It's kind of hard to Read and understand but again It's understandable Let's enter this one Let's enter the actual constructor of the dummylm task and here we are. Okay, so Let me see whether it's interesting to see the base task. 
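A minimal sketch of the write-access check just described (a simplified stand-in for metaseq's verify-checkpoint-directory logic): create a throwaway file in the checkpoint directory, write to it, then delete it, and fail loudly if any step does not work.

```python
import os
import uuid

def verify_write_access(save_dir: str) -> None:
    os.makedirs(save_dir, exist_ok=True)
    probe = os.path.join(save_dir, f"dummy_{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:          # proves we can create and write files here
            f.write("write-access probe")
    except OSError as e:
        raise RuntimeError(f"Cannot write checkpoints to {save_dir}") from e
    finally:
        if os.path.exists(probe):
            os.remove(probe)                 # clean the dummy file up again

verify_write_access("test_v0")
```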
I think nothing there is fan Yeah, okay So i'm gonna just skip the init function of the base task and let's see how this is gonna work So we create a dictionary and this dictionary is just gonna be adding a bunch of symbols Uh to this to this task, which is not important. It's gonna add like 50 000 or something Of yeah, 51 000 196 Symbols, uh, these will not be important for for for the training as you'll see in a second And then what they do is they pad to multiple fate. So Why do they do this is because sometimes um On the hardware on the underlying hardware if things are divisible by eight things go much faster So that's a constraint that you kind of have to know of the of the underlying gpus you're using So I guess nvidia a 100s or h100s new ones. So yeah again Details that matter when you train on such a large scale Okay, and finally what we do here is we create as you can see here 128 um basically numbers we have plus one because here We'll create one will be the the source sequence and the second one will be the dummy, uh, the the the target sequence So that's basically your transformer logic. You have a sequence and then you shift it You try to predict the next token and that's why you have here You grab the first 128 elements and then you grab the you move by one and then you grab the 128 there And that's what you're trying to predict. Hopefully that makes sense um There is additionally some some some logic here, which is not necessary for this dummy task But um in general, uh, what it does is makes that you're not predicting the padding token So that's why you have this but it doesn't matter for now. That's that's enough. Okay, so that's the setup task As I said super simple. We just have these two, uh dummy sequences That will be later passing through the transformer during the forward pass Okay, because we're not charted i'm gonna skip all of this But you see that if we were running this on a on a on an actual distributed system would be hitting this branch Instead and you can see there is this fsdp Wrap that help helps with the model Training on a distributed system basically, okay build model i'm going to enter this branch as well Uh, let me see what's going on here. So build model We are using the transformer lm uh as again, I cannot run the megatron version because it Assumes I have a at least two gpus and because of that i'm i'm getting Uh, basically exceptions if I do this so Let's just uh pick up the model the actual class is so this is what this registry thing does It takes the string transformer lm and picks up the actual class that we'll be using to instantiate the model. Okay? And then here it just picks up the associated config So transform role language model config It doesn't matter for for for us at this point of time. Let's enter the build model, uh branch Uh, and let's see what's going on here. So We first build the embedding table. Okay, so that's the first part So so that's how transformers work, right? We have a piece of text a sentence We tokenize it into a bunch of integers which are usually of a long data type And then we use those integers to index into this embedding table to extract a particular Uh embedding vector and then we're gonna feed those vectors through the transformer. That's what this thing does okay, and being a bit more detailed than than in general because Yeah, I think it makes sense to do it now Um, let's see how this is gonna work. 
They call some embedding thingy here And once you enter there, you see that ultimately it boils down to torch pi torch embedding So we fetch the device here it's gonna be uh kuda Zero because we have that's the index of my single gpu And because this is set to true we'll be using half precision So everything is going to be in half precision So you can see here what we do is number of embeddings is 51,200 because that's the i3 we call that's the dictionary size. So that's the uh, Even though we're using a dummy task, um, then we have embedding the dimension which is 512 And we are running this on a on a gpu in half precision. So that's float 16 So let's create this empty tensor And then we just basically initialize it normally and then i'm not sure why we're doing this Basically, what I do is they set the padding Vector to be all zeros. So currently if I were to open up this one Let me show you this. So if I use the weight of padding index And if I do this, uh, you can see there is a bunch of values after we step over here Now if I execute it we'll have all zeros. So i'm not sure why they're doing this because ultimately embeddings are learnable. Um, So so so yeah, um, i'm not completely sure what's going on Embeddings Constructed we get back from the call and we are here The next thing we do is we construct the actual decoder. So this is going to be a decoder only transformer So let's enter into this thingy and And let's execute. Okay Um, let me see whether there's something interesting here incremental decoder. I think it's just going to store the dictionary So we don't have to enter the Constructors of the parent classes. I'm going to skip over all of that Uh, let's focus on the important parts. Um, nothing fancy here. We're just storing some variables Blah blah blah Okay So the first thing that we're gonna do is we if the embedding dimension is different from the input embedding dimension, which they are Uh, then we'll create this linear layer That's gonna take our embedding vectors and just uh increase the dimensionality from 512 to to seven some 700 something So let's see what it is. So we have a linear layer So from 512 is gonna map them into 768 dimensions. Okay We're not using a lib in this particular setup so Um, we can ignore all of this we can ignore all of this Um, and let's see where we are now. So now we are generating the positional embeddings Let me enter inside of there. So I just hit f12 to enter. I set a breakpoint here I hit that five and we are in okay. This is learnable. So we enter this branch here um number of embeddings is gonna be Uh, whatever the number of readings here is plus padding plus one Because I assume it's for the beginning of sentence token or something That's why they have to create 130 of these instead of 128. So remember we only need as many uh positional embeddings as as the longest sequence will be feeding to the transformer because we're using this dummy task We're only feeding 128 tokens. So that's why we have this many uh positional embeddings So then we call this learn positional embedding thingy and that's just going to Do what let me see Uh, well, it's just embedding. It's literally just a torch embedding again. So we can just skip over that He's just gonna create embedding table and then we initialize it again and um Again padding index is set to zero and we return back the embedding. So that's it. Okay Similarly to what we've done for the input tokens embedding table okay so So now we're gonna do this precision conversion. 
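Putting the embedding construction into a few lines, as a sketch (the exact padding index and init scale are assumptions; metaseq builds these through its own helpers): a 51,200 x 512 token embedding table whose padding row is zeroed, plus a learned positional embedding table with a couple of extra slots, which is why it ends up with 130 rows for 128 positions.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, max_positions, padding_idx = 51_200, 512, 128, 1

embed_tokens = nn.Embedding(vocab_size, embed_dim, padding_idx=padding_idx)
nn.init.normal_(embed_tokens.weight, mean=0.0, std=embed_dim ** -0.5)
with torch.no_grad():
    embed_tokens.weight[padding_idx].zero_()   # padding vector forced to all zeros

# Learned positions: 128 usable slots plus padding and a +1 offset -> 130 rows.
embed_positions = nn.Embedding(max_positions + padding_idx + 1, embed_dim)

if torch.cuda.is_available():
    # On the real setup both tables live on the GPU in fp16 (half precision).
    embed_tokens = embed_tokens.half().cuda()
    embed_positions = embed_positions.half().cuda()
```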
So we'll be uh grabbing the Embed positions and we'll convert them to fp 16 basically, so let me enter this function here i'm gonna hit hit the f uh, like a breakpoint here hit f5 and uh, let's see what's gonna happen so we assert that it's not the B-float 16. So it's the brain float format Uh check out my um again ultimate guide to scaling to understand how these formats look like how they work And why they're important in a nutshell it just increases the range, but I don't have bf 16 on my machine So we're just going to use the half which is the fp 16 Uh format that's a bit has a smaller range and that means it's uh, the the the the weights during the training and the gradients and everything Is more susceptible to both underflow as well as the overflow Okay, so we basically transform them to have precision and we get back here. Okay, let's continue Next up we generate 12 of these decoder layers By calling a build decoder layer a function here and this one is going to be fairly interesting. So i'm gonna Put a breakpoint here hit f5 enter here Let's first build the base decoder layer. So i'm gonna hit f11 to enter f11 again to enter Here and let's see where we are. So we are in the decoder layer Okay, so we're trying to construct the first one now. This part is fairly interesting So this is something you don't see in a regular code base. That's not super optimized Here we'll be using megatron fused kernels. Well, actually for this particular setup I have I will not be using these kernels but still we'll see how they how they are loading them and how it looks like in practice Okay, so let's go through all of this. That's not important And now this part is interesting. So let's enter the the load the fuse kernels load function F11 and here we are okay So first we pick up some some some metadata about the kuda. That's not that important um, let's just Step over all of these and then this one is fairly interesting. So this one is going to load certain Uh kernels written in c++. That's gonna be super efficient compared to running in python So let me just go here So fused softmax, we just defined the function and this is set to true. So we enter this we just Create some flags and you can see here We are now creating a path towards certain cpp file and this cu file kuda kernel file. So So by the way, i'm not super familiar. I've never had to write any code into any kuda kernels myself. So yeah, um, just fyi um, so here are the sources And now we're gonna load those files using the cpp extension load helper. So this function here This is gonna load from those from those files. Let me show you those those files here So here it is Uh, we have to enter the megatron. We have to enter the megatron again and then diffuse kernels And here are those files. So we are scaled upper triang Mast softmax So here it is. I'm gonna open it up I think that's the one and then we have to open up the kuda one, right? so That one let me just open it up with vs code And then there is this one. So let me open it up with vs code as well. And that's the only two files I'm gonna open up just for two to show you how how it roughly looks like So here is the cpp one. 
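The kernel loading boils down to PyTorch's JIT C++/CUDA extension loader. Below is a hedged sketch of the mechanism only; the file names are placeholders rather than Megatron's actual sources, and it needs a CUDA toolchain plus those source files to actually build.

```python
from torch.utils.cpp_extension import load

# The first call compiles the extension (the several-minute wait mentioned above);
# later calls reuse the cached build directory. If a build hangs, deleting that
# build directory and retrying is the usual fix.
fused_softmax = load(
    name="scaled_masked_softmax",
    sources=[
        "fused_kernels/scaled_masked_softmax.cpp",      # Python bindings (pybind)
        "fused_kernels/scaled_masked_softmax_cuda.cu",  # the actual CUDA kernel
    ],
    extra_cuda_cflags=["-O3"],
    verbose=True,
)

# Afterwards fused_softmax.forward(...) / fused_softmax.backward(...) call
# straight into the compiled C++/CUDA code from Python.
```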
So you can see here they define forward and backward for this particular Softmax layer and they make it super efficient So in this particular part in this file, they just create a binding to python so that we can execute a c++ code directly from python Uh, so here is the actual code behind that file So here is the blah blah blah the backward kuda file Uh, and everything boils down to this dispatch half and and and bfloat function Let me see where I can do f12 and see how that function looks like um Yeah, I won't go there man. That's gonna be wicked in any case This was just more of an fyi than than anything else I don't want to dig too much into the c++ plus code because that's gonna get like messy very quickly Okay, let's go back to the um, let's find my my prompt Let's find the debugger and let's continue execution execution here. So this might take a while Uh, just the loading of this i'm gonna i'm gonna skip this loading part and i'm gonna just hit f5 to To get to the mask softmax here and I will get back to you as soon as yeah here it is because i've actually pre-built it before But the first time we run this it will take probably five minutes, uh in case Uh, you you it's hanging there is one trick you have to do and that's to just delete this build directory again Don't ask me why I know this Uh, yeah, the tears on my keyboard know it Okay, let's continue here Let's load these scaled masks softmax once and as I said, we will not actually be using them But it's kind of funny to to see uh that uh, they are doing that Okay, guys, I actually realized that the actual implementation is in the h file So if we open up this one, let me just quickly show you so you can see the actual code You can find it here. So this is how that these functions are defined in the background just a cpp function Uh c++ function and the optimized version of the of those of that functionality. So yeah, let me go back now here Uh, let me show you one more thing. I'm gonna copy this Use the search functionality here in vs code to find where this one is used And you can see that um, so we have in this file here where it's actually being used is somewhere here So let's just find it Uh, here it is. Let me try and see whether yeah forward f line. Okay, so there is this fused layer norm f1 function Uh, that's inside of the megatron library actually And I think it relies on the apex potentially So so yeah, that's i'm just kind of showing you to so that you know your way around this code But yeah, um, so if we were to actually use this this this class somewhere in our code path Uh in the background this part of the code would actually be c++ code. So that's kind of interesting Okay, let's get back here and let's exit out of this function let's load this one and then let me show you the The creation of this of this Decoder layer. So again, we are in the transformer decoder layer Let's continue We're building the self-attention. I'm going to skip that part because that's that's common common knowledge, I guess um Let's continue here layer norms I initialize blah blah blah again. We're converting the layers the the attention layers to to uh float i'm gonna Basically remove this one Uh, so that we don't have to enter anymore. So everything has been converted to fp16 Uh, so that's the mixed precision thingy Uh, and um, yeah, let's continue here. 
We just fetch the activation function, which is going to be a rally I think Yeah, and then let me just see what this one is So yeah, you can see we're just because activation was set to rally We'll just return the f value the functional from the functional module. We fetch the rally function. Okay Let's go back here Um now we create the fully connected layers So we have obviously two fully connected layers, uh in the transformer block. The first one is after the attention mechanism We have the fc1 and then we have the fc2. No, actually this is the sorry this is the I guess the mlp after after the attention thing is is is is gone but like the actual attention logic also has the Mapping the linear mapping once you do the Attention thingy with value vectors and everything if that makes sense So the reason they split up into fc1 and fc2 is because they use megatron And so I think that's some limitation that they that the megatron imposes on the users You have to kind of split these such that once you do the model parallelism You can split these fully connected layers onto multiple gpus. That's why they have to separately Uh create the function. So if that's confusing you That's the answer Okay, so it's just gonna build in the background as you can see here. It's just gonna build a linear layer But the reason they're doing it like this is again model parallelism And those are all of the details you kind of have to deal with when you're when you're um Working on on large scale there. There are details that you kind of have to to to learn and uh, yeah It's kind of different compared to small scale again conversion to fp16. Let's exit here And that's the base layer um Let me skip over all of this And now additionally if checkpoint was set to true with basically wrap this layer inside of the checkpoint wrapper And again, that's what it does in the back in the background is during the backward pass You didn't store all the activations and instead you'll be recomputing the activations and by doing that you'll be obviously Increasing the time it takes to do the backward. I think it's going to be 2x slower But as a as a consequence you you save up a lot of memory because you don't have to save every activation Along every layer. That's the checkpoint in logic, but we're going to skip it now Uh, let's continue here. This is going to be no op. Nothing's going to happen in this one And let's continue. Okay, so that's going to be repeated 12 times I'm gonna do disable all breakpoints put f put the breakpoint here hit f5 Okay, and let's continue from here. Now. I'm gonna re-enable all of the breakpoints and that's it. Okay guys And we just packed those layers into module list Nothing fancy there Nothing fancy here. We have the project again projection the output dimension projection from I guess 712 to 768 to 512 uh and That's it Okay here because share input output embed it's set to true. 
This is one of the things you again kind of um kept to note and that's that usually people tie The input embedding table with the output and so here instead of having a separate set of weights for this output projection instead We'll just reuse the same weights that we use to embed our our initial tokens And you can see we're literally using this embed tokens, which is our embedding table as you can see 51 200 and 512 that's the initial embedding table we created and we're going to use those weights to populate this linear layer So then you can imagine what this does is you take the output token Which is 512 and once you multiply with this output projection you end up with this 51 000 something which is basically the vocab size Of your of your output. Okay. Hopefully that makes sense Let me exit this. We are not using a lib so we can exit this we have our decoder We have we have the embedding table. That's it model constructed Okay, now we have the um Criterion we're using cross entropy. I don't think this is anything super interesting Let me see what's going on here You can skip all of this. We can skip all of this Let's enter the builder. I think we can skip all of this as well Well Nothing interesting Here, so here's just the cross entropy criterion I'm gonna skip all of this. This is not interesting Okay. So now there is some logging going on if you open up the terminal here. You'll just see what those logs are Uh, it's just logging various stuff. So we're using dummy element task. We're using transformer language model. We're using cross entropy as the loss Let's continue. We are counting the number of parameters here. We have 112 Um million Uh, blah blah blah. Let's continue here. Now we'll load the data set. So let's enter the Data loading part. So i'm gonna hit f11 Here we are. So let me continue here Okay, so what's this batch size is one as I told you i'm using one because otherwise I would be hitting Outer memory exceptions and now we just construct a dummy data set So here is how it looks like this this one we basically just stored the batch. It's a super simple Well, literally dummy data set the name says it itself. So what we do is we just create this Batch you can see here And what we do is we have source tokens. We have uh, we just have the the array of of 128 elements We created if you recall before so we just use that as a source tokens and then um Um, we set as the target tokens those dummy targets and you can see see they're shifted by one So here we have three four five and there we had for the source one. We had two Three four, so they're exactly shifted by one So this is going to be our simple, um dummy data set. Uh, we just have number of sentences is one We only have 128 tokens blah blah blah So we literally have batch size one and only 128 tokens as simple as it gets as simple as it gets So let me just construct this thingy And that's it. So that's the construction of the data set. It's very easy and simple So now the trainer is just a abstraction they use to to sort everything inside So the task and the data set and everything so here they extract the shared parameters because I told you they're binding the input embedding with the output embedding output projection layer Again, that's now less interesting. 
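The weight tying just described, in a few lines as a sketch: instead of allocating a separate output projection, the decoder's final features are multiplied with the input embedding matrix to produce logits over the vocabulary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 51_200, 512
embed_tokens = nn.Embedding(vocab_size, embed_dim)

features = torch.randn(128, 1, embed_dim)           # decoder outputs (seq, batch, dim)
logits = F.linear(features, embed_tokens.weight)    # reuse the embedding matrix as weights
print(logits.shape)                                 # torch.Size([128, 1, 51200])
```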
I'm going to skip over all of these They are storing the criteria and the model Uh blah blah blah Uh, they are basically again casting to uh half precision float fp 16 both the model and the criterion uh, and uh Now they're pushing it to the gpu and That's it Nothing, nothing interesting there. I'm going to skip all of this I'm going to skip all of this. I'm just going to put f5 somewhere here Sorry, put the breakpoint and hit f5. My brain is glitching nice Okay. So again, it's not fetching the environment for some reason so you can see here I have rtx 2080 8 gigabytes, uh 7.5 are the major and minor And they just store that for whatever reason. I'm not sure why they're using it probably just to For the logging purposes so they they know exactly which devices in case you have heterogeneous distributed system Where you're using multiple gpus some are maybe p 100 some are maybe m 40 some are maybe v 100 Some are a 100 so when I have that additional, uh, as you can see here, they're just Logging everything basically. Okay. We're back to the train. This is the the main function In case you're confused where we are. So, um Let's now i'm gonna skip this because this is just gonna Uh give us back this iterator and there is like literally four layers of wrapping they do here They have some counting wrapper and then this wrapper and this wrapper gets very very unwieldy very quickly So now skip that part and just focus on on the train loop now um So here we are here's the train loop. Let's enter the train loop. Okay, so we're actually hitting this Aggregate function i'm going to show you what it is So i'm gonna hit i'm gonna enter i'm gonna hit f12 to train i'm gonna put the breakpoint here You can see there is this um wrapper. They're using this decorator Aggregate and it's that's just going to make sure that when they're logging something inside of that scope That all of those logging details are being stored into this particular train Dictionary and then they'll have multiple of these, uh aggregate Uh, um decorators and that's just gonna gonna like nicely organize where each of the logs is going to so they know exactly From which part of the code which logs came from that's the basic idea Okay, so we're gonna just um hit f5 and exit this and hit the initial part of the train function Let's just grab the iterator. I can skip over all of this Um, basically iterator is just Uh as the name suggests counting iterators, so it's just gonna feed us those 128 Tokens in in on each call That's it. We grab some progress bar again. I'm gonna skip all of that again. I told you this is this is what it How the code looks like when you're training? Uh bigger models like there is a lot of things that you don't care about because they're conceptually not important But then again, they're super important if you want to make sure if you want to be able to debug Uh things when they go wrong, so i'm gonna skip over all of this. Let's get to here Here is the actual train function. So there is train instead of train instead of training again A lot of layers of of kind of complexity. Uh, let's skip over all of this So here is the definition of the train we hit f5 we get here and this is where the actual magic starts happening So we we're not Gonna iterate and fetch the sample. 
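Two small pieces from this part, sketched with plain PyTorch calls: casting the model (and criterion) to half precision and moving them to the GPU, and the per-device logging of name, compute capability, and memory that shows up for the RTX 2080 here.

```python
import torch

model = torch.nn.Linear(512, 512)
criterion = torch.nn.CrossEntropyLoss()

if torch.cuda.is_available():
    model = model.half().cuda()        # fp16 weights on the GPU, like the trainer does
    criterion = criterion.cuda()

    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    # Logs something like: name, compute capability major/minor, total memory.
    print(props.name, props.major, props.minor, props.total_memory // 2**20, "MiB")
```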
We will not enter this particular branch, because the new-profiler flag is set to false, so we just hit this branch here. Now I'll hit F11 and enter this train function, and inside it there is yet another train function, so let me hit F5 and we end up here: train_step. So again, the outer train is the epoch train, and this one is basically the batch train: we pass just the samples and do a single forward pass. Let's see what the samples are. Samples is a dictionary, and as you can see we have all those tokens from the dummy dataset loader. Two interesting things to notice: first, they have this aggregate thingy going on again, which is kind of convenient; it might be confusing if you're not used to it, but it does the job. Secondly, they have this profiler record_function. What this does is, first of all, they enable the profiler, and I'll show you that in a second. Secondly, this record_function names this particular part of the code "train_step-" plus the iteration number, and that's useful because later, when you analyze the profiler output, you'll know exactly how long it took; you have the profiling details for this particular piece of code. That's why they do it, that's why the record_function is here. So again, the profiler should have been set up somewhere here; let me just see. Yeah, when we were here, there was the profiler call, and you can see profile_memory=True, with_stack=True, record_shapes=True. That means they'll be profiling the memory, i.e. what the memory footprint was like; in case they're hitting OOMs, this helps you debug them. They're also recording the shapes of the variables, and the stack; I forgot exactly what that option does, but you can always enter the profiler here and see what "stack" means, so let me just find that variable. Okay: it records source information, file and line number, for the operation, so additional metadata for the operations. Let's go back and enter the train_step. Again, it keeps entering these aggregate functions, which kind of sucks, so I have to enter the train_step using F12, set the breakpoint, and then hit F5. Now we're here. We set the seed, we set the model to train mode and the criterion to train mode, and we call zero_grad. I'll actually enter zero_grad, and there's a particular reason I'm doing that; let me show you why. Let's enter this one, and this one, and this one, oh my god, and here it is. The reason is this loss scaling, which is there because we're doing mixed precision training. The loss scale is set to four, and if you recall from my video on the ultimate guide to scaling, the reason they do this is that you want to shift all of the values so that you're less likely to hit underflow in FP16 precision. So this shifts all of the values, and they store one over four here because the multiply factor is going to do the unscaling: this part does the scaling and the multiply factor does the unscaling. We'll see that a bit later, but that's the detail I wanted to show you. So that's zero_grad.
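For reference, here's roughly what that profiler setup looks like using the public torch.profiler API (a minimal sketch, not the exact metaseq wiring; the model and sizes are made up):

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Linear(512, 51200)
x = torch.randn(128, 512)

with profile(
    activities=[ProfilerActivity.CPU],
    profile_memory=True,   # track memory footprint, useful when debugging OOMs
    with_stack=True,       # record file name and line number for each op
    record_shapes=True,    # record input shapes of each op
) as prof:
    for i in range(3):
        model.zero_grad()
        # Label this region so it shows up by name in the profiler output.
        with record_function(f"train_step-{i}"):
            out = model(x)
            loss = out.sum()
            loss.backward()

print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
```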
Let's continue here. We grab the sample and do prepare_sample, which does two things: one is to push the tensors to CUDA, and the second is to convert them to FP16. Because of that, I'm just going to skip it. Let's enter the final train_step, and it lives inside the task, which is kind of unintuitive; why would the train step be associated with a task like this? This way of structuring the programming model is kind of weird and doesn't resonate with how I think about it. Okay guys, let me enter it; I'll hit F11, and here we are, this is the train_step. Take a look at the stack here; the stack is horrible: we have train, train, train_step inside the trainer, train_step inside the task. But it is what it is. It's kind of ambitious to try to explain this code on YouTube, but I'm giving it my best. So again, record_function("forward"), so we'll know exactly the details of this part of the code: how much memory is spent, the shapes, the lines, everything, which makes it easier to debug. Let's enter with F11. Here we are: we do a forward pass through the model and pass it the input, the actual tokens. So we hit the decoder: we pass the tokens to the decoder, and it calls this extract_features function, which does a forward pass through the decoder transformer. I'm not sure why they called it that; it's not incorrect, but "extract features" is a kind of weird name. Okay, let's continue and enter extract_features, and inside extract_features there is extract_features_scriptable. The reason they do this is because of TorchScript; there was some constraint where, if you want that optimization, you kind of have to do this, and it makes the code horrible. I think they even mention that somewhere here... I don't see it now, but yes: "A scriptable subclass of this class has an extract_features method and calls super().extract_features, but super() is not supported in TorchScript. A copy of this function is made to be used in the subclass instead." Okay, let's continue. This part isn't interesting. The first thing we do is forward_embedding: that takes our tokens, from two onwards, 128 of them, and maps them into a multi-dimensional space. Let me enter this forward part and see what's going on. Basically we just call the torch embedding, so it embeds everything out of the box. Okay, so this was actually the positions; we first embedded the positions. Let me see where the actual token embedding happens. Okay, so here is where we embed the tokens, so now we have token embeddings. Let me show you: the shape of this thing is 128 by 512. Let's see the shape of the positions: it should also be 128, but we have 768 here, so we would have a mismatch, and because of that we first have to do a projection. Okay, the embedding scale is set to one.
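As an aside, that prepare_sample step conceptually boils down to something like this (a hedged sketch of the idea, not metaseq's actual implementation):

```python
import torch

def prepare_sample(sample, device="cuda", fp16=True):
    """Recursively move a (possibly nested) sample to the GPU and cast floats to FP16."""
    if torch.is_tensor(sample):
        sample = sample.to(device)
        if fp16 and sample.dtype == torch.float32:
            sample = sample.half()
        return sample
    if isinstance(sample, dict):
        return {k: prepare_sample(v, device, fp16) for k, v in sample.items()}
    if isinstance(sample, (list, tuple)):
        return type(sample)(prepare_sample(v, device, fp16) for v in sample)
    return sample
```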
So that's basically a no-op. We do the projection from 512 to 768, and now we can add the positions on top of x. That's the initial part of the transformer logic: we do the embedding, we add the positional encodings on top of it, and then we start doing the forward pass. Okay, let's continue and exit; we return the embeddings, the positions, and the combined embeddings plus positions after that additional projection. Let's exit this part. We're back in the forward pass of the transformer decoder; you can always take a look here to figure out where we are: TransformerDecoder, we're doing the forward pass, and we've done the embedding part. Here we create the mask, the attention mask, so let me quickly walk you through this part. torch.triu is going to create a triangular matrix filled with a bunch of minus infinities, and that's used to make the decoder causal: we just add it to the attention scores, and after applying the softmax you end up with tokens that can only attend to themselves and the previous tokens. So here we create a 128 by 128 matrix of zeros, fill it with negative infinity, and then call this function that builds a triangular mask out of it. Let me step over this and show you how it looks: zeros, and then the upper triangle is minus infinities. Okay, let's continue. We're not using ALiBi, so we can step over all of this, and we just return the mask we created. So that's the attention mask. We do a transpose operation here: we go from (batch, number of tokens, number of channels) to (number of tokens, batch, number of channels); again, an optimization detail that a lot of transformer implementations do. Okay, we store some inner states, and now we do a bunch of passes through all of the layers: we have 12 layers, and we do a forward pass through all of them. I'm going to skip the actual layer logic; that would be too much, and it's just classic transformer stuff. So let me exit this, put a breakpoint here, hit F5, and exit the transformer. Now let me show you: we've passed all those vectors through the transformer, 12 layers in total, and we end up with the shape 128 by 768, but you can see the layout is still transposed, so we do the transpose operation again and end up with the usual shape. Okay, that's it, and finally the projection, so we go back from 768 to 512. Let me hit F10. That's it: we return the inner states, we return the attention, and we return x, which are the actual output embeddings. Okay, let's continue; that's that part, and now we call the output layer, which projects this into, I guess, vocab size; yeah, you can see here: 51,200. Let's continue; that's the forward pass through the model, and now we compute the loss. I'm going to skim through this a little bit. First we get the normalized probabilities: this applies the log-softmax on top of our outputs, and then we do some reshaping. Let's see the shapes. And for those of you who are still watching, I'd love to hear your feedback; if you have any, it would be super appreciated.
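Here's what that causal mask construction boils down to in plain PyTorch (a small sketch with a tiny sequence length so it's easy to print; the real code builds a 128×128 one):

```python
import torch

def build_causal_mask(seq_len: int) -> torch.Tensor:
    # Start from a matrix of -inf and keep only the strict upper triangle.
    mask = torch.full((seq_len, seq_len), float("-inf"))
    return torch.triu(mask, diagonal=1)

mask = build_causal_mask(4)
print(mask)
# tensor([[0., -inf, -inf, -inf],
#         [0.,   0., -inf, -inf],
#         [0.,   0.,   0., -inf],
#         [0.,   0.,   0.,   0.]])

# Added to the raw attention scores before softmax, the -inf entries become 0
# after softmax, so each token can only attend to itself and previous tokens.
scores = torch.randn(4, 4) + mask
attn = torch.softmax(scores, dim=-1)
```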
So, 128 by 51,200, and these values are now normalized because we applied the log-softmax. Okay, so we get the targets; this just fetches those sample targets, the 128 tokens that were shifted by one to the right. Let me convince you that that's indeed the case: you can see here three, four, five; that's the target sequence we've been using the whole time. Now we apply the NLL loss, the negative log likelihood loss, and you can see it's a bit more complex than your usual implementation. Your usual implementation would be just this line, but when you're dealing with big dimensionality and a large vocab, you can see they have this if-branching: if the number of elements is smaller than, what is this, roughly 2 billion, then you just execute the standard F.nll_loss; if that's not the case, you have to do some more intricate logic, because otherwise you'd run into numerical issues. Let's see what the number of elements is for us: we have 51,000-something times 128, so it's below the threshold, and we just do the standard negative log likelihood, and that's it. That's everything; that's how transformers are trained: you just apply this NLL loss on the output and compare it with the targets shifted by one, and that's the whole magic behind transformers. Okay, now here's where the actual mixed precision magic happens, and we're almost done with this video; I'll quickly walk you through all of this. We're just logging some outputs, that's not important, so I'll skip it; we're just storing some stuff. Let's go back and return the loss. So here is the loss, and now for the second part: we do the backward pass, and again they use record_function("backward") to profile this piece of code. Let's hit F11 and enter. Because we have scaling, let's see what happens: the scaler is not None, so we enter this branch, and what we do is scale the loss. Whatever the loss is, and it's one thousand four hundred something (it's that big because we used sum as the reduction method in the NLL loss), if I hit F11 here you can see what happens: we simply take that number and multiply it by four. That's what I explained before: when you multiply the loss by four, all of the gradients trivially get multiplied by four; that's how the chain rule works. So let's hit F10 and do the backward pass; I don't think anything special happens there. We've now exited the task's train_step, so we're one level up, in the trainer's train_step, not the task's. Let's hit F10; we just append the outputs, that part isn't interesting, and okay, let's continue. You can see that this whole thing is wrapped inside a try/except for the runtime out-of-memory error, so the whole training procedure is wrapped in that. And I think that's it: we just have a single batch.
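To make the "targets shifted by one" point concrete, here is a tiny self-contained sketch of that loss computation (plain PyTorch with made-up small sizes; the real run uses 128 tokens and a 51,200-token vocab):

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len, dim = 100, 8, 16

tokens = torch.randint(0, vocab_size, (seq_len + 1,))
src, tgt = tokens[:-1], tokens[1:]           # targets are the source shifted by one

decoder_out = torch.randn(seq_len, dim)       # stand-in for the transformer decoder output
output_projection = torch.nn.Linear(dim, vocab_size, bias=False)

logits = output_projection(decoder_out)       # (seq_len, vocab_size)
lprobs = F.log_softmax(logits, dim=-1)        # normalized log-probabilities
loss = F.nll_loss(lprobs, tgt, reduction="sum")  # sum reduction, as in the walkthrough
```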
That's why we exit this for loop over the samples. Let's continue: the sample size was 128, because we only had 128 tokens. This next bit is some logic that would be necessary if we had multiple machines, which we don't, so that's fine. Okay, so now we enter the interesting part, and this is pretty much the end of the video. Try/except: as you can see here, there are many excepts; they're catching the floating point error, they're catching the overflow error, they're catching the out-of-memory error. All of these things are necessary when you're training at these big scales: you want to catch the errors and handle them gracefully. You can see that if there is a floating point error, they have this NaN detector; we'll get to that a bit later. Let me first walk you through reduce_gradients. Here nothing actually happens because we have a single GPU; otherwise, you would take the gradients from across the devices and do the all-reduce: you literally take all of the gradients, push them onto a single device, divide them by the number of devices, and then do that for all devices, so that all of the devices end up with the same state. That's less important for now; let's focus on the loss scaling. multiply_grads: let's see what's going on here. What we do is divide: this multiply_grads call gets one divided by 128, and the reason we divide by 128 is that the NLL loss used sum as its reduction method, and since we had 128 elements we now want to divide by 128. Let's enter and see what happens. Multiply grads: we just take that multiply factor, which is already one over four because we want to do the unscaling of the gradients, and we additionally multiply it by this one over 128, the "c" constant. That's all aggregated inside this one variable, and we'll later apply it to the actual gradients; we'll see that a bit later. Okay, let's exit here. Now gradient clipping happens. I was going to skip this part, but actually, let me quickly enter it and see what's going on; it's fairly intricate, there are a lot of steps, but there might be a part that's interesting for us. Oh yes, there is: here's the multiply factor, here's where it comes into the picture. We grab the norm of the gradients and then multiply that norm by the multiply factor; that's an optimization detail, because you don't have to multiply the actual gradients: you just take the norm, multiply it by this factor, and you get the equivalent result. That's kind of neat. Okay, let's continue: max norm, grad norm, and again we have another multiplication folded into the multiply factor. Why? Because the gradient norm was 16, but we want to clip it to 1, and that means we have to take this ratio and multiply the gradients by it so that they obey the property of having gradient norm 1, if that makes sense. That's why we have another multiplication.
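The trick with that accumulated multiply factor can be sketched like this (a simplified illustration of the idea, not the actual metaseq FP16 optimizer; the numbers mirror the ones from the walkthrough):

```python
import torch

params = [torch.nn.Parameter(torch.randn(10, 10)) for _ in range(3)]
loss_scale = 4.0
sample_size = 128
max_norm = 1.0

# Pretend we already ran backward on the scaled loss.
for p in params:
    p.grad = torch.randn_like(p) * loss_scale

# Instead of touching the gradients several times, fold all scalar factors
# into a single multiplier and apply it once at the end.
multiply_factor = 1.0 / loss_scale           # undo the loss scaling
multiply_factor *= 1.0 / sample_size         # average over tokens (sum reduction was used)

# Gradient clipping: compute the norm the gradients *would* have after the
# pending multiplications, then fold the clip ratio into the same multiplier.
grad_norm = torch.norm(torch.stack([torch.norm(p.grad) for p in params])) * multiply_factor
if grad_norm > max_norm:
    multiply_factor *= max_norm / grad_norm

# Finally apply the accumulated factor to the real gradients, once.
for p in params:
    p.grad.mul_(multiply_factor)
```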
So now the multiply factor has three things going on: we have the loss unscaling, we've divided by the sample size, and finally we've divided by this ratio of the norms. Let's continue: we check for overflow, again a bunch of details, and then we check the grad norms. Let me see what this is; I think it's just going to be skipped, and yeah, it's skipped. Okay, now we make sure that none of the values in the gradients are infinite or NaN. If we had NaNs, this exception would be raised, and it would be caught in this particular context where they have the NaN detector. Let me show you what it does. The NaN detector basically captures some of the values so that you can debug it later: it detects the first NaN or Inf in the forward and/or backward pass and logs it together with the module name. So it just helps you debug where the NaN occurred, and once they catch that, they do another forward pass here under that context and it catches the first NaN. As they say here: rerun the forward and backward pass with hooks attached to print out where it fails. Okay, so that's how they cope with these errors. Let's finally enter the optimizer step; that's where we finally apply the famous multiplier. Let me show you: step, step, step, let's continue. Okay, unscale_grads, let's see what it is; I think that's going to be our logic. Here we are: we enter this part and multiply the grads with the multiply factor. That's finally the moment where we apply the multiply factor, and then we reset it back to one, and the whole procedure continues. You can see here that we just go through the parameters and multiply the gradient data associated with each parameter by c, and c is the multiplier we were accumulating over time; we had three factors, if you recall: the sample size, the gradient unscaling, and finally the ratio of the norms. I'm going to step out of this, so hit step out, and we're done here; we've done the stepping. We update the scaler; the scaler is going to do... okay, nothing interesting here. We can exit all of this, and we're back in our train_step in the trainer. We log some debug information, we didn't have any NaNs, and that's pretty much it. I think I can skip all of this; there are a lot of details. Go at your own pace and explore this code if you're curious to learn more, but I think we saw most of the interesting parts. So let me just do this; let me see... okay, I'll just have to put a breakpoint there and hit F5. Here we are: as you can see, we're finally back in the train.py file, and we're inside the train function. Let me see whether there's anything else interesting. The validation part we're just going to skip; nothing's going to happen, it's basically a no-op. Then we check whether we should stop, and for now we're not stopping, so that means we're again going to fetch the next samples and continue this whole loop, and that's it: we just keep on doing this training. Guys, that's it. You hopefully saw, or at least got a feel for, what it takes to build these LLMs.
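The NaN detector pattern they describe can be sketched with plain forward hooks like this (a simplified illustration of the idea, not metaseq's actual NanDetector class):

```python
import torch
import torch.nn as nn

def attach_nan_detector(model: nn.Module):
    """Register forward hooks that report the first module producing NaN/Inf outputs."""
    handles = []

    def make_hook(name):
        def hook(module, inputs, output):
            if torch.is_tensor(output) and not torch.isfinite(output).all():
                print(f"NaN/Inf detected in output of module: {name}")
        return hook

    for name, module in model.named_modules():
        handles.append(module.register_forward_hook(make_hook(name)))
    return handles  # call h.remove() on each handle to detach the hooks

# Usage: attach the hooks, then rerun the forward pass that produced the bad loss.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
handles = attach_nan_detector(model)
_ = model(torch.randn(4, 8))
for h in handles:
    h.remove()
```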
We did exclude the whole distributed part of the equation, which is probably the hardest part: the model parallelism, the data parallelism, the pipeline parallelism, all of that goodness, and then there's the ZeRO optimizer, where you do the partitioning of the optimizer states and activations; there are so many details. We also skipped the checkpointing. So yeah, it's a lot. Cool, I'm going to stop here. Please let me know what you think; depending on that, I'll probably adjust the videos I'm going to be creating, because by doing this I learned a lot, so walking you through it didn't feel like that big of an extra effort. But going forward I probably won't be doing this unless people find it useful, so do let me know down in the comments whether this was actually interesting and useful for you. Having said that, if you liked this video, please share it out; that's the best way to help this channel. Also subscribe if you haven't already, and see you next time!
[{"start": 0.0, "end": 7.0, "text": " What's cracking guys? Alex here. In this video, I'll be walking you through the code base behind the recent Opt"}, {"start": 7.84, "end": 9.44, "text": " 175B"}, {"start": 9.44, "end": 14.58, "text": " Transformer from Meta and if you're curious to learn about the actual theory about the paper"}, {"start": 14.58, "end": 21.14, "text": " I covered this paper as well as GPT Neo X20B and as well as Big Science blue models"}, {"start": 21.44, "end": 28.080000000000002, "text": " So the corresponding papers in the previous video, so I'm gonna link the card somewhere up there in this video"}, {"start": 28.08, "end": 31.56, "text": " I'll be walking you through as I said through the code base behind the opt model"}, {"start": 32.04, "end": 34.72, "text": " basically, that's the meta sec code base and"}, {"start": 35.239999999999995, "end": 39.9, "text": " My idea will be to first show you how to set up this on your local machine"}, {"start": 40.12, "end": 45.78, "text": " Because the system was designed the code base was designed to be run on the Linux distributed system"}, {"start": 45.78, "end": 52.3, "text": " So that means multi node multi GPU setting whereas I have windows single GPU machine"}, {"start": 52.3, "end": 54.980000000000004, "text": " So that kind of made it a bit harder to get started"}, {"start": 54.98, "end": 60.31999999999999, "text": " So that's the first thing the second thing I want to show you some of the concepts I introduced in my"}, {"start": 60.739999999999995, "end": 64.66, "text": " Ultimate guide to scaling video. I'm gonna link the card as well somewhere here"}, {"start": 65.53999999999999, "end": 73.52, "text": " Basically, I'll focus on the mixed precision training and I'll show you some of the concepts such as loss scaling"}, {"start": 74.14, "end": 78.69999999999999, "text": " Etc and also you'll just get to see how this product well not a production level"}, {"start": 78.69999999999999, "end": 84.8, "text": " But like a code base behind one of these large language models looks like so hopefully that's gonna be valuable"}, {"start": 84.8, "end": 90.53999999999999, "text": " Okay, let's get started. The first thing we need to do is open up the setup guide"}, {"start": 90.7, "end": 96.17999999999999, "text": " So there is the setup instructions here, which we're going to follow. It's fairly straightforward"}, {"start": 96.17999999999999, "end": 101.69999999999999, "text": " There's a couple of caveats, but that's pretty much it. So I'm gonna open up my anaconda prompt here"}, {"start": 102.17999999999999, "end": 106.78, "text": " I'm going to first create a conda environment. So conda conda create"}, {"start": 107.42, "end": 110.52, "text": " Name I'm gonna use opt to because I already have"}, {"start": 110.52, "end": 116.36, "text": " Opt environment setup so I have to change the name there and I'm gonna use the Python 3.9"}, {"start": 116.36, "end": 118.88, "text": " so I'm gonna hit enter there and"}, {"start": 119.39999999999999, "end": 123.39999999999999, "text": " This will be quickly over and once we have conda"}, {"start": 123.88, "end": 130.44, "text": " Installed will start installing the necessary packages directly inside of it. Okay, this is gonna be quite fast. 
So activate the"}, {"start": 131.2, "end": 132.16, "text": " environment"}, {"start": 132.16, "end": 136.76, "text": " Clear just to clear the screen a bit and now I'm gonna just copy paste this command here"}, {"start": 136.76, "end": 144.28, "text": " So we do this pip install of basically torch and torch vision etc. So just hit this command and"}, {"start": 144.6, "end": 151.12, "text": " That should execute in a couple of minutes. Okay guys, so that's that's installed now. Let's continue"}, {"start": 151.12, "end": 157.32, "text": " So the next step would be to install the apex library from from Nvidia because I'm on windows"}, {"start": 157.32, "end": 161.92, "text": " This does not work windows is not supported so you can follow these steps on your own if you're on windows"}, {"start": 161.92, "end": 164.64, "text": " Just skip this part. There is a bit better"}, {"start": 164.64, "end": 170.83999999999997, "text": " instruction for how to install apex in case you have some mismatch of the CUDA versions, etc, etc and"}, {"start": 171.27999999999997, "end": 178.39999999999998, "text": " Like some some basically tips on how to overcome those potential problems in this big science"}, {"start": 179.51999999999998, "end": 185.64, "text": " Basically repo I'll link that link down in the video description so you can check it out. So I'm skipping the apex"}, {"start": 185.64, "end": 191.61999999999998, "text": " Let's now continue on installing the Megatron. So let's just copy paste this part here"}, {"start": 191.62, "end": 198.54, "text": " Well before that, sorry, let's just first clone the the actual repo. So I'm gonna do the following and I'm just create a"}, {"start": 199.26, "end": 201.22, "text": " Directory here. I'm gonna name it opt"}, {"start": 201.22, "end": 207.86, "text": " I'm going to just do a clone operation inside of the directory if I manage to open up it the folder"}, {"start": 207.86, "end": 209.86, "text": " So here's the the git bash"}, {"start": 210.14000000000001, "end": 218.86, "text": " Just go and basically copy the link the URL do git clone here and that's gonna download the actual meta sec"}, {"start": 218.86, "end": 222.42000000000002, "text": " repo in the in this target directory"}, {"start": 223.06, "end": 228.38000000000002, "text": " So once we are there, let me just navigate using my my environment. So I'm gonna open up this thing here"}, {"start": 228.38000000000002, "end": 235.42000000000002, "text": " I'm gonna navigate here in the in the console to that to the root of my code base and"}, {"start": 235.96, "end": 240.94000000000003, "text": " Now I'm gonna follow the rest of the instructions. So let's now do the git clone"}, {"start": 241.42000000000002, "end": 243.9, "text": " So let's do the Megatron installation"}, {"start": 243.9, "end": 249.38, "text": " You can just copy paste this directly in the terminal everything is gonna execute automatically except for the last line"}, {"start": 249.38, "end": 251.38, "text": " So hit the pip install"}, {"start": 251.58, "end": 253.18, "text": " command here"}, {"start": 253.18, "end": 260.18, "text": " That's gonna install the Megatron. Hopefully fairly quickly. Yeah. Okay, that's cool. 
Let's continue on the second step"}, {"start": 261.1, "end": 268.22, "text": " now the important thing you need to do here is just change go go back to the root to to meta sec and"}, {"start": 268.22, "end": 273.34000000000003, "text": " Don't install don't obviously don't you don't want to install inside of the Megatron the fair scale library"}, {"start": 273.34, "end": 279.4, "text": " So I'm just gonna copy paste this again here. We're installing the fair scale and I hit pip install"}, {"start": 280.0, "end": 285.29999999999995, "text": " For the last line and we'll install the fair scale as well. It's kind of smooth so far"}, {"start": 285.65999999999997, "end": 288.29999999999995, "text": " No, no problems here. Okay, finally"}, {"start": 288.97999999999996, "end": 293.47999999999996, "text": " Because we've already installed we already cloned the meta sec. We just need to go"}, {"start": 294.09999999999997, "end": 299.58, "text": " to the root so change directory to the root and now we do pip install but before that there is a"}, {"start": 299.58, "end": 304.3, "text": " Additional tweak I have to do and that's in the setup file here"}, {"start": 304.3, "end": 309.74, "text": " I have to basically open up this this file and find the"}, {"start": 309.97999999999996, "end": 313.94, "text": " Aim library and this one also doesn't work on Windows for whatever reason"}, {"start": 313.94, "end": 316.41999999999996, "text": " I just didn't want to hustle around it"}, {"start": 316.41999999999996, "end": 323.74, "text": " so I just kind of comment it out and that's it after this you can hit the the pip install command and"}, {"start": 323.74, "end": 330.26, "text": " Everything should work as expected. Let me paste this likely enter and that's gonna be installed now"}, {"start": 330.34000000000003, "end": 334.90000000000003, "text": " Okay, there is one more command we have to execute and that's not in this manual"}, {"start": 334.90000000000003, "end": 338.98, "text": " So that's this pip install setup tools this version 59 point five point. Oh"}, {"start": 339.86, "end": 343.1, "text": " There is some weird version error going on if I don't do this"}, {"start": 343.1, "end": 349.14, "text": " So I had to I kind of found this on the on some random website as a solution and finally"}, {"start": 349.14, "end": 355.26, "text": " This is not necessary unless you want to to like basically contribute to the to the repo this pre-commit install"}, {"start": 355.26, "end": 361.46, "text": " But let's just do it either way. Okay, cool. I'm gonna clear the console here. Everything is ready now now"}, {"start": 361.46, "end": 367.58, "text": " Let's open up the actual code. So I'm gonna use VS code as usually VS code here and"}, {"start": 368.62, "end": 371.97999999999996, "text": " I'm gonna just do the whoops open folder"}, {"start": 372.7, "end": 376.82, "text": " Find this meta sec directory and just select folder"}, {"start": 376.82, "end": 378.82, "text": " That should be pretty much it"}, {"start": 378.82, "end": 385.82, "text": " So now what we want to do is select the correct interpreter and that's the opt-to so let's just open something"}, {"start": 386.18, "end": 390.9, "text": " Let me open up some of the files will need a bit later. So this is gonna be the training file"}, {"start": 390.9, "end": 393.34, "text": " Let me hit the nope. 
I don't want to dead"}, {"start": 393.34, "end": 399.7, "text": " So I just want to hit the ctrl shift P select interpreter and let's select the opt-to"}, {"start": 400.18, "end": 404.98, "text": " Interpreter here and that's now it the final thing we have to do is a couple of patches"}, {"start": 404.98, "end": 409.74, "text": " Obviously I had to sort it out before before the video. So let me just show you what those are. I"}, {"start": 410.06, "end": 416.3, "text": " Have a working code here. So just to get this tool to show you some of the differences"}, {"start": 417.3, "end": 421.86, "text": " Some of the patches I had to apply so first thing was and we've already done"}, {"start": 421.86, "end": 426.26, "text": " This is to comment out the aim this might not be necessary for for your case"}, {"start": 426.26, "end": 430.74, "text": " But just like that's something that I had to do. Okay, then there is this line in trains"}, {"start": 430.74, "end": 436.58, "text": " I'm just gonna copy paste this one. You obviously won't have to do this because I'll basically create a clone"}, {"start": 437.18, "end": 442.46000000000004, "text": " Directly and and push it to my to my github, but let me let me return here"}, {"start": 443.26, "end": 445.26, "text": " So this is something we have to add"}, {"start": 445.86, "end": 449.06, "text": " Basically here. I'm gonna just save it there"}, {"start": 450.3, "end": 454.22, "text": " Then let's see what else so let me exit here"}, {"start": 454.90000000000003, "end": 456.90000000000003, "text": " We have a couple more steps"}, {"start": 456.90000000000003, "end": 458.90000000000003, "text": " Metasec tasks"}, {"start": 458.9, "end": 466.09999999999997, "text": " Metasec tasks base task we have to add this weird line to convert it to long"}, {"start": 466.46, "end": 471.53999999999996, "text": " Otherwise, he's gonna complain. So that's the base task inside of meta task tasks"}, {"start": 471.62, "end": 473.62, "text": " So let me open up that file again"}, {"start": 473.62, "end": 478.15999999999997, "text": " You will have you can skip this all of this because you can just start from my from my code"}, {"start": 478.21999999999997, "end": 485.78, "text": " So I'm gonna do this so meta sec find the tasks find the best base task and find this line"}, {"start": 485.85999999999996, "end": 487.85999999999996, "text": " so this is basically"}, {"start": 487.86, "end": 490.78000000000003, "text": " Let me find this thing so filter oops"}, {"start": 491.42, "end": 493.42, "text": " filter examples"}, {"start": 493.98, "end": 495.98, "text": " blah blah blah, so it's here"}, {"start": 495.98, "end": 499.14, "text": " So I just paste that line here. It can be cleaner. Obviously"}, {"start": 499.14, "end": 502.78000000000003, "text": " I'm just doing it like this like you should import it in on top of the file"}, {"start": 502.78000000000003, "end": 506.42, "text": " But this is just the whole point is for me to enable this code"}, {"start": 506.42, "end": 511.3, "text": " So that I can easily run it and and do a stepping through of the code base"}, {"start": 511.3, "end": 515.3000000000001, "text": " Okay, so I think that's one step then we have the sweep"}, {"start": 515.3, "end": 518.66, "text": " We can actually skip this part. 
We have we don't have to do this"}, {"start": 519.3, "end": 524.26, "text": " Then there is this thing so we have to change from 2 to 1 because I only have one GPU"}, {"start": 524.26, "end": 528.42, "text": " Otherwise this will fail because as you can see here, this is the model parallel argument"}, {"start": 528.42, "end": 534.5799999999999, "text": " So I'm just gonna do that. So that's meta sec launcher up job constants. I'm gonna just patch that quickly here"}, {"start": 535.38, "end": 537.38, "text": " so let's go to the"}, {"start": 537.62, "end": 543.62, "text": " Metasec launcher so meta sec here is the launcher here the constants and"}, {"start": 543.62, "end": 546.98, "text": " We just put ones here. These are the small smallest models"}, {"start": 546.98, "end": 554.02, "text": " You can see the 8 million the 125 million and the 350 million the tiny small and medium ones. Okay"}, {"start": 554.5, "end": 558.34, "text": " And the final thing we have to do is in the up baselines"}, {"start": 558.82, "end": 564.42, "text": " We have to comment out this if arcs number of GPUs smaller than 2 then raise an error"}, {"start": 564.74, "end": 567.0600000000001, "text": " Because obviously I have just single ones"}, {"start": 567.0600000000001, "end": 571.94, "text": " I'll have to do that and there is this one here with the environment thingy"}, {"start": 571.94, "end": 578.2600000000001, "text": " So it was obviously a lot of trial and error to to to get this uh, like sorted out"}, {"start": 578.82, "end": 580.2600000000001, "text": " but yeah"}, {"start": 580.2600000000001, "end": 583.46, "text": " Okay, let me find the uh up baselines"}, {"start": 584.2600000000001, "end": 586.2600000000001, "text": " so arcs"}, {"start": 586.4200000000001, "end": 588.4200000000001, "text": " Num of GPUs"}, {"start": 588.74, "end": 591.22, "text": " Here it is. So I have to comment out this line"}, {"start": 591.94, "end": 597.3000000000001, "text": " And the final patch that I have to apply is this one here. I'm just gonna do"}, {"start": 597.3, "end": 602.02, "text": " Copy paste of this on top of on the bottom of the file and that's it"}, {"start": 603.3, "end": 605.3, "text": " Okay guys, that's pretty much it"}, {"start": 605.62, "end": 611.9399999999999, "text": " Okay, having done all of this, let's go back to the repo and now we want to train this thing"}, {"start": 611.9399999999999, "end": 616.3399999999999, "text": " So let's see what the instructions say so they have the this training link here"}, {"start": 616.3399999999999, "end": 620.74, "text": " You open it up and the only thing you have is this so from here"}, {"start": 620.74, "end": 624.3399999999999, "text": " I have to had to work my way around and figure out what's going on. So"}, {"start": 624.34, "end": 628.1800000000001, "text": " Let's let's see what i'm gonna do. 
So i'm gonna instead just copy paste the uh"}, {"start": 628.9, "end": 632.1, "text": " The launch arguments I already have in the in in my code"}, {"start": 632.9, "end": 634.9, "text": " Which I've already previously"}, {"start": 635.5400000000001, "end": 641.14, "text": " Been running so i'm just gonna copy paste all of these arguments here and you'll see what's the difference basically"}, {"start": 641.14, "end": 643.14, "text": " Let's just give me a second here"}, {"start": 643.7, "end": 645.7, "text": " So i'm gonna open up this one"}, {"start": 646.1, "end": 648.26, "text": " Of baselines i'm gonna create"}, {"start": 648.26, "end": 654.66, "text": " Uh, basically a config file here for the for the for running the code and let's just paste those arguments"}, {"start": 654.9, "end": 659.22, "text": " So let's see what's the difference between this thing and what they show here"}, {"start": 659.7, "end": 663.86, "text": " So first things first, uh, you can notice I have the benchmark in the local"}, {"start": 664.26, "end": 668.74, "text": " The local makes it so that I can run this on the local machine obviously and the benchmark"}, {"start": 669.22, "end": 671.22, "text": " Was necessary to basically"}, {"start": 671.9399999999999, "end": 674.66, "text": " So that you can skip downloading bigdp"}, {"start": 674.66, "end": 682.42, "text": " So that you can skip downloading big data sets and this will just load some use some dummy data loaders and"}, {"start": 682.98, "end": 689.3, "text": " Everything else will stay the same. So that's kind of cool. Okay, so number of nodes was obviously I had to change it to one"}, {"start": 689.62, "end": 691.62, "text": " It was two here"}, {"start": 691.6999999999999, "end": 694.98, "text": " And number of gpus I had to set it to one instead of eight here"}, {"start": 695.9399999999999, "end": 698.98, "text": " And other than that everything else remains the same"}, {"start": 699.62, "end": 703.6999999999999, "text": " Okay, so now let's run this code. I'm gonna zoom in a little bit here"}, {"start": 703.7, "end": 709.3000000000001, "text": " And i'm going to set a breakpoint here and let's just run this code. Okay, let me run this"}, {"start": 710.9000000000001, "end": 712.9000000000001, "text": " Hit run"}, {"start": 713.0600000000001, "end": 716.26, "text": " And the whole point of this script will be to collect the arguments"}, {"start": 716.9000000000001, "end": 722.9000000000001, "text": " That then we are going to use for the train script because they don't have any instructions for how to start the actual train script"}, {"start": 723.22, "end": 725.22, "text": " So this is a workaround. I kind of figured out"}, {"start": 725.7, "end": 728.6600000000001, "text": " um, let's see what's going on here, so we have to"}, {"start": 729.46, "end": 731.0600000000001, "text": " Go here"}, {"start": 731.06, "end": 737.78, "text": " And then sweep main so this is maybe helpful for you to understand my my process and then basically"}, {"start": 738.66, "end": 745.78, "text": " In case you need to figure out something on your own, uh, you can you can follow a similar workflow. 
So we end up here"}, {"start": 746.7399999999999, "end": 751.38, "text": " Then we enter this uh back end main i'm gonna put a breakpoint here"}, {"start": 751.6999999999999, "end": 755.14, "text": " Let's just continue the whole point was to find so let's continue here"}, {"start": 755.4599999999999, "end": 760.5799999999999, "text": " The whole point was to find the actual command line that's being used to call the train script"}, {"start": 760.58, "end": 764.5, "text": " So we'll see how that looks like in a second. Okay, let me do this"}, {"start": 764.82, "end": 769.14, "text": " Let me just find uh the so train commands. We don't care about this code"}, {"start": 769.7, "end": 775.7, "text": " We just want to find the the the final code that's going to be run. So i'm gonna put uh"}, {"start": 776.34, "end": 778.34, "text": " Breakpoint here hit f5"}, {"start": 778.9000000000001, "end": 783.86, "text": " Uh, let's continue here. Let me just find yeah local run. So this is where we want to go"}, {"start": 784.82, "end": 786.74, "text": " um f5"}, {"start": 786.74, "end": 793.78, "text": " Enter the local run and here it is. So here is the actual piece of code that will call the train script"}, {"start": 793.94, "end": 800.02, "text": " So we have this train command. So it's python blah blah blah. It's calling as you can see here, hopefully"}, {"start": 800.58, "end": 801.7, "text": " um"}, {"start": 801.7, "end": 808.5, "text": " Let me just see whether I can shift this. Okay. So what i'm gonna do is this i'm gonna open up the debug console here"}, {"start": 809.62, "end": 811.62, "text": " and i'm simply going to"}, {"start": 811.62, "end": 816.98, "text": " Capture all of these arguments which we will then use to to to run the the the train script"}, {"start": 817.3, "end": 824.18, "text": " So we don't care about this n thingy. Uh, we only care about about the uh actual commands that were passed to the train script"}, {"start": 824.5, "end": 831.62, "text": " Okay, you can see here python blah blah. It's calling the train script. So this is the thing you can do. So we open"}, {"start": 832.18, "end": 834.18, "text": " um, basically command"}, {"start": 835.22, "end": 837.22, "text": " Uh take take stay"}, {"start": 837.62, "end": 839.62, "text": " uh, and then write mode"}, {"start": 839.62, "end": 844.18, "text": " Uh as f and then we basically want to write to that file"}, {"start": 844.66, "end": 848.18, "text": " This train command and this is going to obviously save"}, {"start": 848.82, "end": 856.18, "text": " This command and capture it and so i'm gonna do that and let's see what's going on. Okay, so we've saved that somewhere here"}, {"start": 857.54, "end": 859.54, "text": " Hopefully, yeah commands"}, {"start": 859.54, "end": 863.46, "text": " And then you can just copy paste all of these and start playing from there"}, {"start": 863.62, "end": 865.62, "text": " So that was my first part of the process"}, {"start": 865.62, "end": 870.34, "text": " So it took some time to figure out how to how to call the train script and this was my workaround"}, {"start": 870.9, "end": 875.22, "text": " To to get the actual commands and then came the tweaking part. 
So i'm gonna just again"}, {"start": 876.1, "end": 879.54, "text": " Kind of speed this up by by copy pasting the actual"}, {"start": 880.26, "end": 883.86, "text": " Arguments you can see it's super duper verbose"}, {"start": 884.9, "end": 891.38, "text": " Um, i'm gonna copy paste all of this and i'm gonna well i'm actually gonna copy paste everything"}, {"start": 891.38, "end": 898.26, "text": " Including these comments I made here so I can explain what i've done. Uh, so let's go to the code here again"}, {"start": 898.26, "end": 900.26, "text": " You will not have to do this"}, {"start": 900.34, "end": 903.78, "text": " because because um, you'll have this already"}, {"start": 904.26, "end": 908.98, "text": " Uh in my in on my github, uh repo, okay, let's go here. Um"}, {"start": 909.54, "end": 912.36, "text": " Let's rename this as as opt baselines"}, {"start": 913.14, "end": 915.46, "text": " And i'll have to create another config here"}, {"start": 916.34, "end": 918.34, "text": " For the train run, I will just"}, {"start": 918.9, "end": 920.58, "text": " add configuration"}, {"start": 920.58, "end": 925.62, "text": " Python python file and then i'll just add the arguments here. Okay"}, {"start": 926.58, "end": 931.0600000000001, "text": " That's it. Hopefully if I haven't made any mistakes here, we don't need this"}, {"start": 931.7800000000001, "end": 934.74, "text": " And that's pretty much it. I'm going to rename this file to"}, {"start": 935.5400000000001, "end": 937.5400000000001, "text": " This this config to train script"}, {"start": 938.6600000000001, "end": 944.1, "text": " Uh, and let me show you what's going on here. So a lot of arguments obviously, uh, this is um"}, {"start": 944.74, "end": 950.1800000000001, "text": " What you get when you're working with large language models and in general with with libraries such as such as metasec"}, {"start": 950.18, "end": 951.78, "text": " which is I think"}, {"start": 951.78, "end": 953.78, "text": " cloned and then um"}, {"start": 953.78, "end": 955.8599999999999, "text": " Modified version of the fairsec library. 
Okay"}, {"start": 955.8599999999999, "end": 960.66, "text": " so this is pretty much the same thing as um as what we get when we run the"}, {"start": 961.2199999999999, "end": 963.3, "text": " Previous script with some modifications"}, {"start": 963.3, "end": 969.2199999999999, "text": " So some of those modifications I had to me was like we we I can't use fully sharding because I have a single gpu"}, {"start": 969.62, "end": 972.0999999999999, "text": " I can't use the parallel version of cross entropy"}, {"start": 972.0999999999999, "end": 976.42, "text": " So I had to kind of change it to cross entropy instead of cross entropy parallel or something"}, {"start": 976.42, "end": 981.54, "text": " I was just using transformer lm instead of transformer lm megatron or something and finally, um"}, {"start": 982.42, "end": 986.9, "text": " Windows was complaining I guess any any operating system will complain about super long paths"}, {"start": 987.14, "end": 990.74, "text": " So there was some super long path that we get dumped from the previous script"}, {"start": 990.74, "end": 993.06, "text": " So I had to kind of shorten them into this one"}, {"start": 993.38, "end": 999.4, "text": " So test v0 and finally I had to remove the checkpoint activations and distribute checkpoint activations"}, {"start": 1000.02, "end": 1002.0999999999999, "text": " This is not well, it's actually necessary"}, {"start": 1002.1, "end": 1007.22, "text": " Otherwise you you will face some bugs but like I didn't bother to try and figure it out"}, {"start": 1007.7, "end": 1012.66, "text": " So if you want to kind of play with the checkpointing, uh, which is the the technique where you basically"}, {"start": 1013.3000000000001, "end": 1018.5, "text": " trade off the memory versus the the the the speed of your backward pass"}, {"start": 1018.98, "end": 1021.94, "text": " Uh, then you can kind of enable this and try and figure it out yourself"}, {"start": 1021.94, "end": 1025.6200000000001, "text": " But like i'm going to focus on the mixed precision a part of the of the code. Okay"}, {"start": 1026.34, "end": 1030.18, "text": " Made a couple more tweaks. Uh, you will be able to just copy paste this"}, {"start": 1030.18, "end": 1036.42, "text": " Okay, I think a couple of things I had to change was like the batch size because otherwise you I was getting oom's"}, {"start": 1036.66, "end": 1043.3, "text": " Out of memory exceptions and I had to change to like have 128 tokens per per batch"}, {"start": 1043.7, "end": 1048.8200000000002, "text": " Uh per sentence instead of like 2,048 by default or something like that, but that's it again"}, {"start": 1048.8200000000002, "end": 1051.94, "text": " You'll copy paste this thing, uh, and you can start from here"}, {"start": 1052.5800000000002, "end": 1057.54, "text": " Okay, guys, let's finally get to the transcript. 
Let me set a breakpoint here"}, {"start": 1057.54, "end": 1060.82, "text": " Uh, let me use the transcript config"}, {"start": 1061.54, "end": 1064.6599999999999, "text": " And actually i'm gonna set another breakpoint down below"}, {"start": 1065.46, "end": 1069.3, "text": " Here and now we are ready to to to run"}, {"start": 1070.02, "end": 1073.54, "text": " Okay, guys, let's start going through the code here"}, {"start": 1074.26, "end": 1076.34, "text": " i'm gonna focus on on on the"}, {"start": 1077.22, "end": 1083.22, "text": " Well, i'm gonna show you details that that are different compared to your uh toy"}, {"start": 1083.22, "end": 1088.44, "text": " Uh repos so to speak because this is after all using this fairly powerful"}, {"start": 1089.08, "end": 1091.08, "text": " Library, uh that was developed by meta"}, {"start": 1091.72, "end": 1095.4, "text": " So yeah, there is a lot of code that that makes this harder to read"}, {"start": 1095.8, "end": 1100.52, "text": " Uh, but like more robust to various errors. Okay, so let's continue here"}, {"start": 1101.48, "end": 1106.44, "text": " So first things first i'm gonna enter this parser function and I want to focus on a couple of details"}, {"start": 1106.52, "end": 1111.64, "text": " Like first of all, I think it's quite kind of interesting how they are picking up, uh various arguments"}, {"start": 1111.64, "end": 1116.0400000000002, "text": " uh, it might not be the most readable way to do it, but it's uh"}, {"start": 1116.44, "end": 1118.44, "text": " Interesting in the sense that let me let me show what I mean"}, {"start": 1118.8400000000001, "end": 1125.16, "text": " So here there is this common config so common config will be just a data class that contains a bunch of these fields"}, {"start": 1125.64, "end": 1133.5600000000002, "text": " that uh basically define uh various, uh variables which are which belong to the common group of variables of arguments"}, {"start": 1133.8000000000002, "end": 1137.0, "text": " such as uh logging interval tensorboard logdir"}, {"start": 1137.0, "end": 1142.6, "text": " Uh various uh thingies like this aim repo. So that's the this visualization tool there"}, {"start": 1142.68, "end": 1144.76, "text": " Then there is the weights and biases project here"}, {"start": 1145.32, "end": 1151.88, "text": " Azure ml logging bunch of details. So yeah, I won't focus on those but I just kind of want to stress that that's neat"}, {"start": 1152.52, "end": 1154.36, "text": " so once we"}, {"start": 1154.36, "end": 1159.24, "text": " Basically execute this line. We'll pick up all of the arguments and default values from this config"}, {"start": 1160.2, "end": 1161.32, "text": " Okay"}, {"start": 1161.32, "end": 1166.12, "text": " Next up they have this these registries. So this is probably less interesting"}, {"start": 1166.12, "end": 1168.6799999999998, "text": " Let me just see here. Uh, so yeah, they're"}, {"start": 1169.56, "end": 1171.3999999999999, "text": " picking up"}, {"start": 1171.3999999999999, "end": 1178.04, "text": " They're picking up values automatically. I mean i'm gonna skip this part because otherwise this will take too long and it's not that vital so"}, {"start": 1178.84, "end": 1184.36, "text": " Bottom line is now the same thing will happen in all of these other functions. 
So this is if you enter here"}, {"start": 1184.36, "end": 1189.4799999999998, "text": " You'll see that there is this uh generate parser from data class and then there is the data set config"}, {"start": 1189.4799999999998, "end": 1194.28, "text": " so here is the where they define some of the default values for the uh, data set"}, {"start": 1194.28, "end": 1199.3999999999999, "text": " Group and then uh, yeah, so we can basically continue. Let me just go back here"}, {"start": 1199.3999999999999, "end": 1204.84, "text": " So yeah, by the way, if you don't know how to use vs code, there is this nice call stack like a window here"}, {"start": 1204.84, "end": 1210.52, "text": " You just hit the uh, whatever is on top of the stack and you get back to wherever your your debugger has stopped"}, {"start": 1210.52, "end": 1213.0, "text": " That's kind of neat to know. Um, yeah"}, {"start": 1213.0, "end": 1218.6, "text": " So again bunch of different groups will add different arguments and finally we end up with a parser"}, {"start": 1219.3999999999999, "end": 1221.3999999999999, "text": " And that's it"}, {"start": 1221.4, "end": 1226.68, "text": " um now some conversion to configs it's going to be a dictionary or something and"}, {"start": 1227.24, "end": 1232.2, "text": " Now because we're not profiling. We're not doing this. We won't enter this branch. We're going here"}, {"start": 1232.68, "end": 1234.68, "text": " So let's enter the the actual main"}, {"start": 1236.1200000000001, "end": 1242.92, "text": " Function we can skip all of this and we enter the single gpu main branch. So the only difference is"}, {"start": 1243.48, "end": 1244.2800000000002, "text": " um"}, {"start": 1244.28, "end": 1251.48, "text": " Well, there are some differences but ultimately all of the code pets whether you're running this in distributed setting or on a single gpu"}, {"start": 1251.72, "end": 1253.16, "text": " We'll call the same main function"}, {"start": 1253.16, "end": 1258.52, "text": " So that means we'll be actually stepping through the actual code that was used to train the the huge"}, {"start": 1259.0, "end": 1265.6399999999999, "text": " Opt 175 billion parameter model, which is awesome. So let's let me just show you quickly this thingy here"}, {"start": 1266.44, "end": 1268.44, "text": " Again, there are many details"}, {"start": 1269.08, "end": 1272.6, "text": " Here that are relevant if you're running this in a distributed setting"}, {"start": 1272.6, "end": 1278.6799999999998, "text": " That might be interesting interesting for you to check it out. But here i'm going to focus on on on on this code path here"}, {"start": 1278.6799999999998, "end": 1285.8799999999999, "text": " So let's enter the main and let's keep on stepping through this. So let me see what's interesting. So here this is the patch I applied"}, {"start": 1286.84, "end": 1291.8799999999999, "text": " So finally we save the config. So this is a config that we got from all those parser steps"}, {"start": 1292.28, "end": 1298.12, "text": " So ultimately we just save it here to config yaml and we'll see how how the config looks like"}, {"start": 1298.12, "end": 1303.6399999999999, "text": " Okay. Let me just now show you how this thing's going to look like. So this is hopefully the right. Yeah, this is the right directory"}, {"start": 1303.6399999999999, "end": 1309.08, "text": " So let's open up the um test v0. Let's open up the config. 
I'm just gonna um"}, {"start": 1309.7199999999998, "end": 1312.6, "text": " Basically open up another visual studio instance here"}, {"start": 1313.6399999999999, "end": 1316.9199999999998, "text": " And let me open that file. So let's open. Oops"}, {"start": 1317.4799999999998, "end": 1319.4799999999998, "text": " Let's open the file here"}, {"start": 1320.04, "end": 1322.76, "text": " This one and here is the config yaml"}, {"start": 1323.56, "end": 1326.9199999999998, "text": " So i'm going to open it up here side by side. Okay. So as you can see here"}, {"start": 1326.92, "end": 1334.76, "text": " So as you can see here, there is a lot of a lot of arguments like I think over 400 or something like a huge amount of arguments"}, {"start": 1335.3200000000002, "end": 1339.24, "text": " um, this could use some some cleanup and refactoring but um"}, {"start": 1340.28, "end": 1341.88, "text": " I guess it works"}, {"start": 1341.88, "end": 1343.88, "text": " Kind of they did struggle a lot. So I don't know"}, {"start": 1345.0800000000002, "end": 1350.6000000000001, "text": " Uh, cool. So I won't be going through the arguments here. Obviously it beats the purpose. There is too many things going on"}, {"start": 1350.68, "end": 1352.2, "text": " So i'm just gonna"}, {"start": 1352.2, "end": 1357.96, "text": " Explain you those that we are hitting as we are executing along this code path. Um, okay"}, {"start": 1358.3600000000001, "end": 1360.3600000000001, "text": " So there is this nice, um"}, {"start": 1360.76, "end": 1367.96, "text": " Is is master function? I think it just checks so because the distributed rank is sql zero. So rank means"}, {"start": 1368.76, "end": 1372.8400000000001, "text": " basically if you have multiple gpus in a in a node, uh rank zero would be"}, {"start": 1374.28, "end": 1377.24, "text": " GPU zero in that node the rank one would be gpu one"}, {"start": 1377.24, "end": 1384.04, "text": " It's just like basically naming for your gpus in the node because here we are zero because we are running a single gpu"}, {"start": 1384.44, "end": 1389.4, "text": " Uh, we are going to be the master because we are the only gpu. So we are the master automatically. Okay"}, {"start": 1390.6, "end": 1392.6, "text": " Let's continue here. Um"}, {"start": 1393.8, "end": 1395.8, "text": " I'm going to skip all of that"}, {"start": 1396.28, "end": 1398.28, "text": " just some asserts"}, {"start": 1398.28, "end": 1402.7, "text": " Ignoring all of that seeding the the code so that we can have reproducibility"}, {"start": 1402.7, "end": 1410.22, "text": " Uh, and then some additional seeding that kind of probably in the background. I think it just seeds the torch thingies"}, {"start": 1410.38, "end": 1412.38, "text": " Yeah torch manual seed blah blah blah"}, {"start": 1412.8600000000001, "end": 1414.8600000000001, "text": " I'm gonna skip those details"}, {"start": 1415.18, "end": 1420.06, "text": " Um verify checkpoint directory. I think this is kind of neat. So again this video"}, {"start": 1420.06, "end": 1427.5, "text": " I'm just showing you some details that that go into the engineering behind these llms is not only like the the actual"}, {"start": 1427.5, "end": 1432.94, "text": " Ml logic. That's what i'm trying to get at is super simple like you have a transformer you do everything"}, {"start": 1433.1, "end": 1436.22, "text": " You can literally write all of this logic conceptually. It may be"}, {"start": 1436.7, "end": 1441.82, "text": " Hundred lines of code or something. 
I might be exaggerating but uh, the point is there is so many things"}, {"start": 1442.06, "end": 1447.42, "text": " That you have to add to make this robust and to make this uh work on on such a large scale"}, {"start": 1447.66, "end": 1450.7, "text": " So this is one of the of the of the details. So let me enter this"}, {"start": 1451.66, "end": 1453.82, "text": " I'm gonna put a breakpoint here hit f5"}, {"start": 1453.82, "end": 1460.3, "text": " Uh, okay. So let's see what they're doing. So they are literally testing with this if this test v0 directory exists"}, {"start": 1460.46, "end": 1465.58, "text": " And we can actually we actually have the access the the right rights to write to that directory"}, {"start": 1465.58, "end": 1470.9399999999998, "text": " So if it exists, uh, if it doesn't exist we create it because it does exist. We don't create it"}, {"start": 1471.26, "end": 1477.34, "text": " We fetch the global rank. It's going to be zero. I assume because yeah, as I said, uh, we create the temporary file"}, {"start": 1477.5, "end": 1479.5, "text": " So we just create some random"}, {"start": 1479.5, "end": 1485.58, "text": " File here and we try to open it up in a write mode that shows us that we have write access"}, {"start": 1485.98, "end": 1491.66, "text": " So as you can see here, we're not hitting the exception. So everything's fine there. And finally we try to remove the file"}, {"start": 1491.74, "end": 1493.9, "text": " So you can see here. So if I were to"}, {"start": 1494.94, "end": 1498.22, "text": " Go here, there is the dummy file. And now if I go"}, {"start": 1498.86, "end": 1502.54, "text": " Uh f10 so step over you can see how this thing is deleted"}, {"start": 1502.7, "end": 1507.1, "text": " So that's how they do the check that you can actually write to that directory and that's robust"}, {"start": 1507.1, "end": 1511.02, "text": " Okay now some logging how much memory I have etc. Etc"}, {"start": 1511.74, "end": 1515.82, "text": " Logging the the whole config you can see there is a bunch of data here"}, {"start": 1516.3, "end": 1519.98, "text": " Everything we saw in the actual yaml file in the vs code"}, {"start": 1520.9399999999998, "end": 1522.9399999999998, "text": " And yeah, okay. I'm gonna skip this"}, {"start": 1523.4199999999998, "end": 1528.4599999999998, "text": " There is the setup task. I'm gonna enter this part of the code as well. So let's see what's going on there"}, {"start": 1528.9399999999998, "end": 1530.9399999999998, "text": " um blah blah blah"}, {"start": 1531.02, "end": 1533.6599999999999, "text": " So we have the task name that's called dummylm"}, {"start": 1533.66, "end": 1537.3400000000001, "text": " And this is a consequence of me using the benchmark flag if you recall"}, {"start": 1537.74, "end": 1542.78, "text": " And um, yeah, so that's gonna be like a super simple task. We're gonna see what it does in a couple of seconds"}, {"start": 1543.26, "end": 1544.78, "text": " so here"}, {"start": 1544.78, "end": 1546.78, "text": " We just pick some uh, basically"}, {"start": 1547.3200000000002, "end": 1548.6200000000001, "text": " um"}, {"start": 1548.6200000000001, "end": 1553.5800000000002, "text": " Associated config with that task. I will not get into how they populate these"}, {"start": 1553.9, "end": 1557.98, "text": " Uh, there is a neat method they use in the background. Well, i'm not sure whether it's neat. 
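A sketch of the checkpoint-directory write check described above: make sure the directory exists, prove write access by creating a probe file in write mode, then delete it. The probe file name is illustrative.

```python
import os


def verify_write_access(save_dir: str) -> None:
    os.makedirs(save_dir, exist_ok=True)            # create the directory if it does not exist
    probe = os.path.join(save_dir, "dummy_write_probe")
    try:
        with open(probe, "w") as fh:                # opening in write mode proves we can write here
            fh.write("ok")
    except OSError as err:
        raise RuntimeError(f"cannot write to checkpoint dir {save_dir}") from err
    finally:
        if os.path.exists(probe):
            os.remove(probe)                        # clean up the probe, as the debugger showed


verify_write_access("./test_v0")
```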
It's kind of"}, {"start": 1559.02, "end": 1560.38, "text": " hard to"}, {"start": 1560.38, "end": 1562.38, "text": " Read and understand but again"}, {"start": 1562.38, "end": 1564.38, "text": " It's understandable"}, {"start": 1565.0200000000002, "end": 1567.0200000000002, "text": " Let's enter this one"}, {"start": 1567.5800000000002, "end": 1572.46, "text": " Let's enter the actual constructor of the dummylm task and here we are. Okay, so"}, {"start": 1573.3400000000001, "end": 1577.66, "text": " Let me see whether it's interesting to see the base task. I think nothing there is fan"}, {"start": 1577.74, "end": 1578.22, "text": " Yeah, okay"}, {"start": 1578.22, "end": 1582.8600000000001, "text": " So i'm gonna just skip the init function of the base task and let's see how this is gonna work"}, {"start": 1583.0200000000002, "end": 1586.7800000000002, "text": " So we create a dictionary and this dictionary is just gonna be adding"}, {"start": 1587.18, "end": 1589.18, "text": " a bunch of symbols"}, {"start": 1589.18, "end": 1594.54, "text": " Uh to this to this task, which is not important. It's gonna add like 50 000 or something"}, {"start": 1595.18, "end": 1597.18, "text": " Of yeah, 51 000 196"}, {"start": 1598.14, "end": 1603.42, "text": " Symbols, uh, these will not be important for for for the training as you'll see in a second"}, {"start": 1603.98, "end": 1606.94, "text": " And then what they do is they pad to multiple fate. So"}, {"start": 1607.5800000000002, "end": 1610.7, "text": " Why do they do this is because sometimes um"}, {"start": 1611.66, "end": 1616.94, "text": " On the hardware on the underlying hardware if things are divisible by eight things go much faster"}, {"start": 1616.94, "end": 1621.5800000000002, "text": " So that's a constraint that you kind of have to know of the of the underlying gpus you're using"}, {"start": 1621.5800000000002, "end": 1626.14, "text": " So I guess nvidia a 100s or h100s new ones. So yeah"}, {"start": 1626.8600000000001, "end": 1627.8200000000002, "text": " again"}, {"start": 1627.8200000000002, "end": 1631.42, "text": " Details that matter when you train on such a large scale"}, {"start": 1632.46, "end": 1637.02, "text": " Okay, and finally what we do here is we create as you can see here 128"}, {"start": 1637.5800000000002, "end": 1638.6200000000001, "text": " um"}, {"start": 1638.6200000000001, "end": 1642.46, "text": " basically numbers we have plus one because here"}, {"start": 1642.46, "end": 1648.8600000000001, "text": " We'll create one will be the the source sequence and the second one will be the dummy, uh, the the the target sequence"}, {"start": 1649.02, "end": 1652.78, "text": " So that's basically your transformer logic. You have a sequence and then you shift it"}, {"start": 1652.78, "end": 1655.5, "text": " You try to predict the next token and that's why you have here"}, {"start": 1656.46, "end": 1662.22, "text": " You grab the first 128 elements and then you grab the you move by one and then you grab the 128 there"}, {"start": 1662.38, "end": 1665.1000000000001, "text": " And that's what you're trying to predict. 
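Two details from the dummy task above, shown as a small sketch: rounding the vocabulary size up to a multiple of 8 (divisible-by-eight sizes run faster on the underlying GPUs), and building a next-token-prediction pair by creating 128+1 tokens and shifting by one.

```python
import torch


def pad_to_multiple(vocab_size: int, multiple: int = 8) -> int:
    """Round vocab_size up to the nearest multiple, e.g. 51197 -> 51200."""
    remainder = vocab_size % multiple
    return vocab_size if remainder == 0 else vocab_size + (multiple - remainder)


tokens_per_sample = 128
# One extra token so that src and tgt can both be length 128 after the shift.
dummy = torch.arange(2, 2 + tokens_per_sample + 1, dtype=torch.long)

src = dummy[:-1]   # tokens 0 .. 127: the source sequence
tgt = dummy[1:]    # tokens 1 .. 128: the targets, i.e. "predict the next token"

print(pad_to_multiple(51_197))   # 51200
print(src[:4], tgt[:4])          # tgt is src shifted by one
```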
Hopefully that makes sense"}, {"start": 1665.58, "end": 1666.3, "text": " um"}, {"start": 1666.3, "end": 1671.26, "text": " There is additionally some some some logic here, which is not necessary for this dummy task"}, {"start": 1671.26, "end": 1676.46, "text": " But um in general, uh, what it does is makes that you're not predicting the padding token"}, {"start": 1676.46, "end": 1681.9, "text": " So that's why you have this but it doesn't matter for now. That's that's enough. Okay, so that's the setup task"}, {"start": 1682.54, "end": 1686.46, "text": " As I said super simple. We just have these two, uh dummy sequences"}, {"start": 1687.5, "end": 1690.94, "text": " That will be later passing through the transformer during the forward pass"}, {"start": 1692.06, "end": 1694.78, "text": " Okay, because we're not charted i'm gonna skip all of this"}, {"start": 1694.78, "end": 1701.02, "text": " But you see that if we were running this on a on a on an actual distributed system would be hitting this branch"}, {"start": 1701.02, "end": 1703.6, "text": " Instead and you can see there is this fsdp"}, {"start": 1704.86, "end": 1706.94, "text": " Wrap that help helps with the model"}, {"start": 1708.86, "end": 1713.1, "text": " Training on a distributed system basically, okay build model i'm going to enter this branch as well"}, {"start": 1714.1399999999999, "end": 1717.02, "text": " Uh, let me see what's going on here. So build model"}, {"start": 1717.98, "end": 1719.98, "text": " We are using the transformer lm"}, {"start": 1720.22, "end": 1723.5, "text": " uh as again, I cannot run the megatron version because it"}, {"start": 1724.1399999999999, "end": 1728.3, "text": " Assumes I have a at least two gpus and because of that i'm i'm getting"}, {"start": 1728.3, "end": 1730.94, "text": " Uh, basically exceptions if I do this"}, {"start": 1731.82, "end": 1733.02, "text": " so"}, {"start": 1733.02, "end": 1738.86, "text": " Let's just uh pick up the model the actual class is so this is what this registry thing does"}, {"start": 1739.18, "end": 1746.62, "text": " It takes the string transformer lm and picks up the actual class that we'll be using to instantiate the model. Okay?"}, {"start": 1748.06, "end": 1751.18, "text": " And then here it just picks up the associated config"}, {"start": 1751.8999999999999, "end": 1754.22, "text": " So transform role language model config"}, {"start": 1754.22, "end": 1759.58, "text": " It doesn't matter for for for us at this point of time. Let's enter the build model, uh branch"}, {"start": 1759.98, "end": 1762.94, "text": " Uh, and let's see what's going on here. So"}, {"start": 1763.58, "end": 1766.7, "text": " We first build the embedding table. Okay, so that's the first part"}, {"start": 1766.7, "end": 1770.3, "text": " So so that's how transformers work, right? We have a piece of text a sentence"}, {"start": 1770.6200000000001, "end": 1775.42, "text": " We tokenize it into a bunch of integers which are usually of a long data type"}, {"start": 1775.82, "end": 1781.18, "text": " And then we use those integers to index into this embedding table to extract a particular"}, {"start": 1781.18, "end": 1786.8600000000001, "text": " Uh embedding vector and then we're gonna feed those vectors through the transformer. 
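The registry lookup mentioned here, reduced to a toy: a string like "transformer_lm" maps to the class that will be instantiated. This is a generic decorator-based registry sketch, not metaseq's exact implementation.

```python
MODEL_REGISTRY = {}


def register_model(name):
    """Decorator that records a class under a string name."""
    def wrapper(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper


@register_model("transformer_lm")
class TransformerLM:          # placeholder class for illustration
    pass


model_cls = MODEL_REGISTRY["transformer_lm"]   # pick the class from its string name
print(model_cls.__name__)
```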
That's what this thing does"}, {"start": 1787.1000000000001, "end": 1789.98, "text": " okay, and being a bit more detailed than than in general because"}, {"start": 1791.18, "end": 1793.3400000000001, "text": " Yeah, I think it makes sense to do it now"}, {"start": 1793.98, "end": 1798.22, "text": " Um, let's see how this is gonna work. They call some embedding thingy here"}, {"start": 1799.3400000000001, "end": 1804.8600000000001, "text": " And once you enter there, you see that ultimately it boils down to torch pi torch embedding"}, {"start": 1805.1000000000001, "end": 1807.1000000000001, "text": " So we fetch the device here"}, {"start": 1807.42, "end": 1809.02, "text": " it's gonna be uh"}, {"start": 1809.02, "end": 1810.3, "text": " kuda"}, {"start": 1810.3, "end": 1813.26, "text": " Zero because we have that's the index of my single gpu"}, {"start": 1814.22, "end": 1817.34, "text": " And because this is set to true we'll be using half precision"}, {"start": 1817.74, "end": 1819.5, "text": " So everything is going to be in half precision"}, {"start": 1819.5, "end": 1822.54, "text": " So you can see here what we do is number of embeddings is"}, {"start": 1823.48, "end": 1828.54, "text": " 51,200 because that's the i3 we call that's the dictionary size. So that's the uh,"}, {"start": 1829.26, "end": 1833.82, "text": " Even though we're using a dummy task, um, then we have embedding the dimension which is 512"}, {"start": 1834.3, "end": 1839.58, "text": " And we are running this on a on a gpu in half precision. So that's float 16"}, {"start": 1839.58, "end": 1842.46, "text": " So let's create this empty tensor"}, {"start": 1843.1799999999998, "end": 1848.78, "text": " And then we just basically initialize it normally and then i'm not sure why we're doing this"}, {"start": 1849.26, "end": 1851.26, "text": " Basically, what I do is they set the padding"}, {"start": 1851.74, "end": 1855.58, "text": " Vector to be all zeros. So currently if I were to open up this one"}, {"start": 1856.46, "end": 1859.8999999999999, "text": " Let me show you this. So if I use the weight of padding index"}, {"start": 1860.6999999999998, "end": 1866.22, "text": " And if I do this, uh, you can see there is a bunch of values after we step over here"}, {"start": 1866.22, "end": 1874.54, "text": " Now if I execute it we'll have all zeros. So i'm not sure why they're doing this because ultimately embeddings are learnable. Um,"}, {"start": 1875.26, "end": 1878.38, "text": " So so so yeah, um, i'm not completely sure what's going on"}, {"start": 1879.4, "end": 1880.76, "text": " Embeddings"}, {"start": 1880.76, "end": 1884.22, "text": " Constructed we get back from the call and we are here"}, {"start": 1884.78, "end": 1889.58, "text": " The next thing we do is we construct the actual decoder. So this is going to be a decoder only transformer"}, {"start": 1890.8600000000001, "end": 1892.3, "text": " So let's enter"}, {"start": 1892.3, "end": 1894.3, "text": " into this thingy and"}, {"start": 1894.3, "end": 1896.3, "text": " And let's execute. Okay"}, {"start": 1896.7, "end": 1902.06, "text": " Um, let me see whether there's something interesting here incremental decoder. I think it's just going to store the dictionary"}, {"start": 1902.06, "end": 1904.06, "text": " So we don't have to enter the"}, {"start": 1904.12, "end": 1908.1399999999999, "text": " Constructors of the parent classes. I'm going to skip over all of that"}, {"start": 1908.62, "end": 1914.0, "text": " Uh, let's focus on the important parts. 
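A minimal version of the embedding-table construction traced above: a 51,200 x 512 table, normally initialized, cast to half precision on the GPU, with the padding row zeroed out. The padding index value here is an assumption for illustration.

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if device.type == "cuda" else torch.float32

num_embeddings, embed_dim, padding_idx = 51_200, 512, 1   # padding_idx assumed

embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx=padding_idx)
embed_tokens = embed_tokens.to(device=device, dtype=dtype)
nn.init.normal_(embed_tokens.weight, mean=0.0, std=embed_dim ** -0.5)
with torch.no_grad():
    embed_tokens.weight[padding_idx].zero_()   # the step the video points out: zero the padding row

print(embed_tokens.weight.shape, embed_tokens.weight.dtype)
```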
Um, nothing fancy here. We're just storing some variables"}, {"start": 1914.54, "end": 1915.5, "text": " Blah blah blah"}, {"start": 1915.5, "end": 1915.74, "text": " Okay"}, {"start": 1915.74, "end": 1922.06, "text": " So the first thing that we're gonna do is we if the embedding dimension is different from the input embedding dimension, which they are"}, {"start": 1922.06, "end": 1924.62, "text": " Uh, then we'll create this linear layer"}, {"start": 1924.94, "end": 1931.8999999999999, "text": " That's gonna take our embedding vectors and just uh increase the dimensionality from 512 to to seven some 700 something"}, {"start": 1932.1399999999999, "end": 1934.94, "text": " So let's see what it is. So we have a linear layer"}, {"start": 1935.26, "end": 1940.3799999999999, "text": " So from 512 is gonna map them into 768 dimensions. Okay"}, {"start": 1941.5, "end": 1944.1399999999999, "text": " We're not using a lib in this particular setup"}, {"start": 1944.86, "end": 1945.82, "text": " so"}, {"start": 1945.82, "end": 1949.74, "text": " Um, we can ignore all of this we can ignore all of this"}, {"start": 1949.74, "end": 1955.98, "text": " Um, and let's see where we are now. So now we are generating the positional embeddings"}, {"start": 1956.7, "end": 1961.58, "text": " Let me enter inside of there. So I just hit f12 to enter. I set a breakpoint here"}, {"start": 1961.58, "end": 1966.86, "text": " I hit that five and we are in okay. This is learnable. So we enter this branch here"}, {"start": 1968.06, "end": 1969.42, "text": " um"}, {"start": 1969.42, "end": 1971.42, "text": " number of embeddings is gonna be"}, {"start": 1972.14, "end": 1974.54, "text": " Uh, whatever the number of readings here is"}, {"start": 1975.1, "end": 1977.1, "text": " plus padding plus one"}, {"start": 1977.1, "end": 1981.6599999999999, "text": " Because I assume it's for the beginning of sentence token or something"}, {"start": 1982.1399999999999, "end": 1987.98, "text": " That's why they have to create 130 of these instead of 128. So remember we only need as many"}, {"start": 1988.62, "end": 1994.86, "text": " uh positional embeddings as as the longest sequence will be feeding to the transformer because we're using this dummy task"}, {"start": 1994.86, "end": 2000.2199999999998, "text": " We're only feeding 128 tokens. So that's why we have this many uh positional embeddings"}, {"start": 2000.86, "end": 2005.74, "text": " So then we call this learn positional embedding thingy and that's just going to"}, {"start": 2005.74, "end": 2007.74, "text": " Do what let me see"}, {"start": 2007.74, "end": 2013.1, "text": " Uh, well, it's just embedding. It's literally just a torch embedding again. So we can just skip over that"}, {"start": 2013.9, "end": 2020.14, "text": " He's just gonna create embedding table and then we initialize it again and um"}, {"start": 2020.78, "end": 2026.22, "text": " Again padding index is set to zero and we return back the embedding. So that's it. Okay"}, {"start": 2026.78, "end": 2031.66, "text": " Similarly to what we've done for the input tokens embedding table"}, {"start": 2032.3, "end": 2033.42, "text": " okay"}, {"start": 2033.42, "end": 2034.38, "text": " so"}, {"start": 2034.38, "end": 2041.18, "text": " So now we're gonna do this precision conversion. 
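A sketch of the two pieces set up in this stretch: the 512-to-768 input projection used because the token embedding width differs from the decoder width, and a learned positional embedding table sized max_positions + padding_idx + 1, which is why it ends up with 130 rows for 128 positions. The padding_idx value is assumed.

```python
import torch.nn as nn

embed_dim, decoder_dim = 512, 768
max_positions, padding_idx = 128, 1        # padding_idx assumed for illustration

project_in = nn.Linear(embed_dim, decoder_dim, bias=False)      # 512 -> 768
embed_positions = nn.Embedding(max_positions + padding_idx + 1,  # 130 rows
                               decoder_dim, padding_idx=padding_idx)
print(embed_positions.num_embeddings)      # 130
```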
So we'll be uh grabbing the"}, {"start": 2042.22, "end": 2044.7, "text": " Embed positions and we'll convert them to fp"}, {"start": 2045.5, "end": 2048.2200000000003, "text": " 16 basically, so let me enter this function here"}, {"start": 2048.94, "end": 2054.94, "text": " i'm gonna hit hit the f uh, like a breakpoint here hit f5 and uh, let's see what's gonna happen"}, {"start": 2055.02, "end": 2057.02, "text": " so we assert that it's not the"}, {"start": 2057.7400000000002, "end": 2059.98, "text": " B-float 16. So it's the brain float format"}, {"start": 2059.98, "end": 2066.78, "text": " Uh check out my um again ultimate guide to scaling to understand how these formats look like how they work"}, {"start": 2067.26, "end": 2073.1, "text": " And why they're important in a nutshell it just increases the range, but I don't have bf 16 on my machine"}, {"start": 2073.26, "end": 2076.22, "text": " So we're just going to use the half which is the fp 16"}, {"start": 2076.62, "end": 2084.06, "text": " Uh format that's a bit has a smaller range and that means it's uh, the the the the weights during the training and the gradients and everything"}, {"start": 2084.38, "end": 2088.3, "text": " Is more susceptible to both underflow as well as the overflow"}, {"start": 2088.3, "end": 2095.5, "text": " Okay, so we basically transform them to have precision and we get back here. Okay, let's continue"}, {"start": 2096.2200000000003, "end": 2098.94, "text": " Next up we generate 12 of these decoder layers"}, {"start": 2099.9, "end": 2105.82, "text": " By calling a build decoder layer a function here and this one is going to be fairly interesting. So i'm gonna"}, {"start": 2106.46, "end": 2109.02, "text": " Put a breakpoint here hit f5 enter here"}, {"start": 2110.1400000000003, "end": 2116.78, "text": " Let's first build the base decoder layer. So i'm gonna hit f11 to enter f11 again to enter"}, {"start": 2116.78, "end": 2120.38, "text": " Here and let's see where we are. So we are in the decoder layer"}, {"start": 2120.5400000000004, "end": 2125.02, "text": " Okay, so we're trying to construct the first one now. This part is fairly interesting"}, {"start": 2125.1000000000004, "end": 2129.6600000000003, "text": " So this is something you don't see in a regular code base. That's not super optimized"}, {"start": 2130.86, "end": 2135.42, "text": " Here we'll be using megatron fused kernels. Well, actually for this particular setup I have"}, {"start": 2135.82, "end": 2141.98, "text": " I will not be using these kernels but still we'll see how they how they are loading them and how it looks like in practice"}, {"start": 2142.46, "end": 2146.1400000000003, "text": " Okay, so let's go through all of this. That's not important"}, {"start": 2146.14, "end": 2152.2999999999997, "text": " And now this part is interesting. So let's enter the the load the fuse kernels load function"}, {"start": 2152.8599999999997, "end": 2154.8599999999997, "text": " F11 and here we are"}, {"start": 2154.94, "end": 2156.14, "text": " okay"}, {"start": 2156.14, "end": 2161.9, "text": " So first we pick up some some some metadata about the kuda. That's not that important"}, {"start": 2162.54, "end": 2164.54, "text": " um, let's just"}, {"start": 2164.7, "end": 2170.06, "text": " Step over all of these and then this one is fairly interesting. So this one is going to load certain"}, {"start": 2170.06, "end": 2176.7, "text": " Uh kernels written in c++. 
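The precision-conversion step in a nutshell: cast a module to half precision, preferring bfloat16 when the GPU supports it and falling back to fp16 otherwise (the RTX 2080 used here does not support bf16). This mirrors the idea, not metaseq's exact helper.

```python
import torch
import torch.nn as nn


def to_half_precision(module: nn.Module) -> nn.Module:
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return module.to(torch.bfloat16)   # wider exponent range, fewer under/overflows
    return module.half()                   # fp16: smaller range, hence the need for loss scaling


layer = to_half_precision(nn.Linear(512, 768))
print(layer.weight.dtype)
```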
That's gonna be super efficient compared to running in python"}, {"start": 2177.2599999999998, "end": 2179.34, "text": " So let me just go here"}, {"start": 2180.2999999999997, "end": 2186.2999999999997, "text": " So fused softmax, we just defined the function and this is set to true. So we enter this we just"}, {"start": 2187.02, "end": 2189.5, "text": " Create some flags and you can see here"}, {"start": 2190.14, "end": 2197.98, "text": " We are now creating a path towards certain cpp file and this cu file kuda kernel file. So"}, {"start": 2197.98, "end": 2206.32, "text": " So by the way, i'm not super familiar. I've never had to write any code into any kuda kernels myself. So yeah, um, just fyi"}, {"start": 2206.86, "end": 2208.86, "text": " um, so here are the sources"}, {"start": 2209.26, "end": 2215.1, "text": " And now we're gonna load those files using the cpp extension load helper. So this function here"}, {"start": 2215.5, "end": 2220.7, "text": " This is gonna load from those from those files. Let me show you those those files here"}, {"start": 2221.9, "end": 2223.26, "text": " So here it is"}, {"start": 2223.26, "end": 2228.1400000000003, "text": " Uh, we have to enter the megatron. We have to enter the megatron again and then diffuse kernels"}, {"start": 2228.7000000000003, "end": 2232.2200000000003, "text": " And here are those files. So we are scaled upper"}, {"start": 2233.0, "end": 2235.0, "text": " triang"}, {"start": 2235.0200000000004, "end": 2236.86, "text": " Mast softmax"}, {"start": 2236.86, "end": 2239.42, "text": " So here it is. I'm gonna open it up"}, {"start": 2239.5800000000004, "end": 2242.94, "text": " I think that's the one and then we have to open up the kuda one, right?"}, {"start": 2243.5, "end": 2244.3, "text": " so"}, {"start": 2244.3, "end": 2246.78, "text": " That one let me just open it up with vs code"}, {"start": 2246.78, "end": 2253.26, "text": " And then there is this one. So let me open it up with vs code as well. And that's the only two files"}, {"start": 2253.26, "end": 2257.1800000000003, "text": " I'm gonna open up just for two to show you how how it roughly looks like"}, {"start": 2258.0600000000004, "end": 2263.1800000000003, "text": " So here is the cpp one. So you can see here they define forward and backward for this particular"}, {"start": 2263.7400000000002, "end": 2266.46, "text": " Softmax layer and they make it super efficient"}, {"start": 2266.46, "end": 2274.46, "text": " So in this particular part in this file, they just create a binding to python so that we can execute a c++ code directly from python"}, {"start": 2274.46, "end": 2276.62, "text": " Uh, so here is the actual code"}, {"start": 2277.18, "end": 2279.18, "text": " behind that file"}, {"start": 2279.98, "end": 2283.58, "text": " So here is the blah blah blah the backward kuda file"}, {"start": 2284.2200000000003, "end": 2288.94, "text": " Uh, and everything boils down to this dispatch half and and and bfloat function"}, {"start": 2289.18, "end": 2292.46, "text": " Let me see where I can do f12 and see how that function looks like"}, {"start": 2293.1, "end": 2294.54, "text": " um"}, {"start": 2294.54, "end": 2298.54, "text": " Yeah, I won't go there man. 
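How C++/CUDA sources like these get pulled into Python: torch's JIT extension loader compiles them on first use and exposes the pybind11 bindings as a module. This is a sketch that assumes the two source files exist on disk; file names and flags are placeholders for the Megatron fused-kernel sources.

```python
from torch.utils import cpp_extension

# First build can take several minutes; later runs hit the cached build directory
# (typically ~/.cache/torch_extensions or $TORCH_EXTENSIONS_DIR). If a stale build
# hangs, deleting that build directory is the usual fix.
fused_softmax = cpp_extension.load(
    name="scaled_masked_softmax_cuda",
    sources=[
        "scaled_masked_softmax.cpp",       # pybind11 forward/backward bindings
        "scaled_masked_softmax_cuda.cu",   # the actual CUDA kernels
    ],
    extra_cuda_cflags=["-O3"],
    verbose=True,
)
```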
That's gonna be wicked in any case"}, {"start": 2298.54, "end": 2301.82, "text": " This was just more of an fyi than than anything else"}, {"start": 2301.82, "end": 2308.3, "text": " I don't want to dig too much into the c++ plus code because that's gonna get like messy very quickly"}, {"start": 2309.02, "end": 2313.98, "text": " Okay, let's go back to the um, let's find my my prompt"}, {"start": 2314.54, "end": 2320.7000000000003, "text": " Let's find the debugger and let's continue execution execution here. So this might take a while"}, {"start": 2320.86, "end": 2326.38, "text": " Uh, just the loading of this i'm gonna i'm gonna skip this loading part and i'm gonna just hit f5 to"}, {"start": 2326.38, "end": 2332.7000000000003, "text": " To get to the mask softmax here and I will get back to you as soon as yeah here it is because i've actually pre-built it before"}, {"start": 2333.02, "end": 2335.6600000000003, "text": " But the first time we run this it will take probably"}, {"start": 2336.46, "end": 2338.46, "text": " five minutes, uh in case"}, {"start": 2338.86, "end": 2344.94, "text": " Uh, you you it's hanging there is one trick you have to do and that's to just delete this build directory again"}, {"start": 2345.1, "end": 2347.1, "text": " Don't ask me why I know this"}, {"start": 2347.1800000000003, "end": 2349.26, "text": " Uh, yeah, the tears on my keyboard know it"}, {"start": 2351.02, "end": 2353.02, "text": " Okay, let's continue here"}, {"start": 2353.02, "end": 2359.42, "text": " Let's load these scaled masks softmax once and as I said, we will not actually be using them"}, {"start": 2360.3, "end": 2364.22, "text": " But it's kind of funny to to see uh that uh, they are doing that"}, {"start": 2364.7, "end": 2369.2599999999998, "text": " Okay, guys, I actually realized that the actual implementation is in the h file"}, {"start": 2369.5, "end": 2373.82, "text": " So if we open up this one, let me just quickly show you so you can see the actual code"}, {"start": 2373.9, "end": 2378.86, "text": " You can find it here. So this is how that these functions are defined in the background just a cpp function"}, {"start": 2378.86, "end": 2385.6600000000003, "text": " Uh c++ function and the optimized version of the of those of that functionality. So yeah, let me go back now here"}, {"start": 2386.1400000000003, "end": 2388.7000000000003, "text": " Uh, let me show you one more thing. I'm gonna copy this"}, {"start": 2389.34, "end": 2393.34, "text": " Use the search functionality here in vs code to find where this one is used"}, {"start": 2393.82, "end": 2401.6600000000003, "text": " And you can see that um, so we have in this file here where it's actually being used is somewhere here"}, {"start": 2402.86, "end": 2404.38, "text": " So let's just find it"}, {"start": 2404.38, "end": 2411.9, "text": " Uh, here it is. Let me try and see whether yeah forward f line. 
Okay, so there is this fused layer norm f1 function"}, {"start": 2412.54, "end": 2415.1, "text": " Uh, that's inside of the megatron library actually"}, {"start": 2415.6600000000003, "end": 2418.2200000000003, "text": " And I think it relies on the apex potentially"}, {"start": 2419.26, "end": 2424.46, "text": " So so yeah, that's i'm just kind of showing you to so that you know your way around this code"}, {"start": 2424.94, "end": 2430.54, "text": " But yeah, um, so if we were to actually use this this this class somewhere in our code path"}, {"start": 2430.54, "end": 2436.06, "text": " Uh in the background this part of the code would actually be c++ code. So that's kind of interesting"}, {"start": 2436.7, "end": 2441.1, "text": " Okay, let's get back here and let's exit out of this function"}, {"start": 2442.22, "end": 2444.86, "text": " let's load this one and then let me show you the"}, {"start": 2447.34, "end": 2449.34, "text": " The creation of this of this"}, {"start": 2451.82, "end": 2454.54, "text": " Decoder layer. So again, we are in the transformer decoder layer"}, {"start": 2455.42, "end": 2457.18, "text": " Let's continue"}, {"start": 2457.18, "end": 2461.66, "text": " We're building the self-attention. I'm going to skip that part because that's that's common"}, {"start": 2462.8599999999997, "end": 2464.8599999999997, "text": " common knowledge, I guess um"}, {"start": 2465.66, "end": 2467.66, "text": " Let's continue here layer norms"}, {"start": 2467.8999999999996, "end": 2475.1, "text": " I initialize blah blah blah again. We're converting the layers the the attention layers to to uh float i'm gonna"}, {"start": 2475.8999999999996, "end": 2477.66, "text": " Basically remove this one"}, {"start": 2477.66, "end": 2482.0, "text": " Uh, so that we don't have to enter anymore. So everything has been converted to fp16"}, {"start": 2482.8599999999997, "end": 2484.8599999999997, "text": " Uh, so that's the mixed precision thingy"}, {"start": 2484.86, "end": 2491.26, "text": " Uh, and um, yeah, let's continue here. We just fetch the activation function, which is going to be a rally I think"}, {"start": 2491.9, "end": 2494.54, "text": " Yeah, and then let me just see what this one is"}, {"start": 2495.1800000000003, "end": 2498.86, "text": " So yeah, you can see we're just because activation was set to rally"}, {"start": 2499.1, "end": 2504.46, "text": " We'll just return the f value the functional from the functional module. We fetch the rally function. Okay"}, {"start": 2505.5, "end": 2507.02, "text": " Let's go back here"}, {"start": 2507.02, "end": 2510.54, "text": " Um now we create the fully connected layers"}, {"start": 2510.54, "end": 2516.86, "text": " So we have obviously two fully connected layers, uh in the transformer block. The first one is after the attention mechanism"}, {"start": 2516.94, "end": 2521.34, "text": " We have the fc1 and then we have the fc2. 
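The feed-forward sub-layer being built here, as a small sketch: two separately named linear layers (fc1, fc2) with the ReLU fetched from torch.nn.functional in between. Keeping them as two named modules is what later lets Megatron-style model parallelism shard them across GPUs; on a single GPU it is just an MLP. Dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForward(nn.Module):
    def __init__(self, embed_dim: int = 768, ffn_dim: int = 3072):
        super().__init__()
        self.fc1 = nn.Linear(embed_dim, ffn_dim)   # expand
        self.fc2 = nn.Linear(ffn_dim, embed_dim)   # project back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(F.relu(self.fc1(x)))


x = torch.randn(128, 1, 768)          # (tokens, batch, channels) layout used in this codebase
print(FeedForward()(x).shape)
```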
No, actually this is the sorry"}, {"start": 2521.34, "end": 2524.54, "text": " this is the I guess the mlp after after the"}, {"start": 2525.34, "end": 2530.22, "text": " attention thing is is is is gone but like the actual attention logic also has the"}, {"start": 2530.94, "end": 2533.9, "text": " Mapping the linear mapping once you do the"}, {"start": 2534.86, "end": 2537.58, "text": " Attention thingy with value vectors and everything if that makes sense"}, {"start": 2537.58, "end": 2542.14, "text": " So the reason they split up into fc1 and fc2 is because they use megatron"}, {"start": 2542.46, "end": 2548.2999999999997, "text": " And so I think that's some limitation that they that the megatron imposes on the users"}, {"start": 2548.38, "end": 2552.38, "text": " You have to kind of split these such that once you do the model parallelism"}, {"start": 2552.54, "end": 2557.5, "text": " You can split these fully connected layers onto multiple gpus. That's why they have to separately"}, {"start": 2557.98, "end": 2560.7, "text": " Uh create the function. So if that's confusing you"}, {"start": 2561.58, "end": 2563.1, "text": " That's the answer"}, {"start": 2563.1, "end": 2568.2999999999997, "text": " Okay, so it's just gonna build in the background as you can see here. It's just gonna build a linear layer"}, {"start": 2568.54, "end": 2571.52, "text": " But the reason they're doing it like this is again model parallelism"}, {"start": 2572.86, "end": 2576.86, "text": " And those are all of the details you kind of have to deal with when you're when you're um"}, {"start": 2577.74, "end": 2583.58, "text": " Working on on large scale there. There are details that you kind of have to to to learn and uh, yeah"}, {"start": 2583.9, "end": 2589.18, "text": " It's kind of different compared to small scale again conversion to fp16. Let's exit here"}, {"start": 2590.14, "end": 2592.14, "text": " And that's the base layer"}, {"start": 2592.14, "end": 2592.8599999999997, "text": " um"}, {"start": 2592.8599999999997, "end": 2594.8599999999997, "text": " Let me skip over all of this"}, {"start": 2595.66, "end": 2603.42, "text": " And now additionally if checkpoint was set to true with basically wrap this layer inside of the checkpoint wrapper"}, {"start": 2603.98, "end": 2607.8199999999997, "text": " And again, that's what it does in the back in the background is during the backward pass"}, {"start": 2608.14, "end": 2615.2599999999998, "text": " You didn't store all the activations and instead you'll be recomputing the activations and by doing that you'll be"}, {"start": 2616.2799999999997, "end": 2617.5, "text": " obviously"}, {"start": 2617.5, "end": 2622.22, "text": " Increasing the time it takes to do the backward. I think it's going to be 2x slower"}, {"start": 2622.38, "end": 2627.18, "text": " But as a as a consequence you you save up a lot of memory because you don't have to save every activation"}, {"start": 2627.5, "end": 2631.1, "text": " Along every layer. That's the checkpoint in logic, but we're going to skip it now"}, {"start": 2632.14, "end": 2637.1, "text": " Uh, let's continue here. This is going to be no op. Nothing's going to happen in this one"}, {"start": 2638.14, "end": 2642.7, "text": " And let's continue. Okay, so that's going to be repeated 12 times"}, {"start": 2642.7, "end": 2649.02, "text": " I'm gonna do disable all breakpoints put f put the breakpoint here hit f5"}, {"start": 2649.8199999999997, "end": 2656.22, "text": " Okay, and let's continue from here. Now. 
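Activation (gradient) checkpointing, as described above, in its simplest PyTorch form: the layer's activations are not kept during the forward pass and are recomputed during backward, trading extra compute for a large memory saving. The layer here is a stand-in, not the actual decoder layer.

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Sequential(
    torch.nn.Linear(768, 3072), torch.nn.ReLU(), torch.nn.Linear(3072, 768)
)

x = torch.randn(128, 768, requires_grad=True)
y = checkpoint(layer, x, use_reentrant=False)   # activations recomputed in the backward pass
y.sum().backward()
print(x.grad.shape)
```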
I'm gonna re-enable all of the breakpoints and that's it. Okay guys"}, {"start": 2656.7, "end": 2659.3399999999997, "text": " And we just packed those layers into module list"}, {"start": 2660.06, "end": 2662.06, "text": " Nothing fancy there"}, {"start": 2662.22, "end": 2668.8599999999997, "text": " Nothing fancy here. We have the project again projection the output dimension projection from I guess 712 to"}, {"start": 2669.8799999999997, "end": 2671.8799999999997, "text": " 768 to 512"}, {"start": 2671.88, "end": 2673.88, "text": " uh and"}, {"start": 2674.6800000000003, "end": 2676.52, "text": " That's it"}, {"start": 2676.52, "end": 2681.96, "text": " Okay here because share input output embed it's set to true. This is one of the things you again"}, {"start": 2682.92, "end": 2686.36, "text": " kind of um kept to note and that's that usually people tie"}, {"start": 2686.92, "end": 2693.8, "text": " The input embedding table with the output and so here instead of having a separate set of weights for this output projection instead"}, {"start": 2693.88, "end": 2699.0, "text": " We'll just reuse the same weights that we use to embed our our initial tokens"}, {"start": 2699.0, "end": 2704.04, "text": " And you can see we're literally using this embed tokens, which is our embedding table as you can see 51"}, {"start": 2704.5, "end": 2710.84, "text": " 200 and 512 that's the initial embedding table we created and we're going to use those weights to populate this linear layer"}, {"start": 2710.92, "end": 2715.24, "text": " So then you can imagine what this does is you take the output token"}, {"start": 2715.24, "end": 2719.24, "text": " Which is 512 and once you multiply with this output projection you end up with this"}, {"start": 2719.86, "end": 2722.6, "text": " 51 000 something which is basically the vocab size"}, {"start": 2723.32, "end": 2727.48, "text": " Of your of your output. Okay. Hopefully that makes sense"}, {"start": 2727.48, "end": 2734.12, "text": " Let me exit this. We are not using a lib so we can exit this we have our decoder"}, {"start": 2734.36, "end": 2737.96, "text": " We have we have the embedding table. That's it model constructed"}, {"start": 2739.4, "end": 2741.4, "text": " Okay, now we have the um"}, {"start": 2741.46, "end": 2746.14, "text": " Criterion we're using cross entropy. I don't think this is anything super interesting"}, {"start": 2746.92, "end": 2748.92, "text": " Let me see what's going on here"}, {"start": 2749.32, "end": 2751.8, "text": " You can skip all of this. We can skip all of this"}, {"start": 2752.44, "end": 2755.8, "text": " Let's enter the builder. I think we can skip all of this as well"}, {"start": 2755.8, "end": 2757.1600000000003, "text": " Well"}, {"start": 2757.1600000000003, "end": 2759.1600000000003, "text": " Nothing interesting"}, {"start": 2759.1600000000003, "end": 2761.5600000000004, "text": " Here, so here's just the cross entropy criterion"}, {"start": 2763.0800000000004, "end": 2765.2400000000002, "text": " I'm gonna skip all of this. This is not interesting"}, {"start": 2765.96, "end": 2771.32, "text": " Okay. So now there is some logging going on if you open up the terminal here. You'll just see what those logs are"}, {"start": 2772.6000000000004, "end": 2779.0800000000004, "text": " Uh, it's just logging various stuff. So we're using dummy element task. We're using transformer language model. We're using cross entropy as the loss"}, {"start": 2779.8, "end": 2783.82, "text": " Let's continue. 
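Weight tying between the input embedding table and the output projection, as discussed here: the final projection back to vocabulary logits reuses the 51,200 x 512 embedding weights instead of learning a separate matrix. A minimal sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 51_200, 512
embed_tokens = nn.Embedding(vocab_size, embed_dim)      # the input embedding table

hidden = torch.randn(1, 128, embed_dim)                 # decoder outputs, back at 512 dims
logits = F.linear(hidden, embed_tokens.weight)          # (1, 128, 51200): one score per vocab entry
print(logits.shape)
```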
We are counting the number of parameters here. We have 112"}, {"start": 2783.82, "end": 2785.82, "text": " Um million"}, {"start": 2787.02, "end": 2792.6200000000003, "text": " Uh, blah blah blah. Let's continue here. Now we'll load the data set. So let's enter the"}, {"start": 2793.6600000000003, "end": 2796.0800000000004, "text": " Data loading part. So i'm gonna hit f11"}, {"start": 2797.34, "end": 2799.34, "text": " Here we are. So let me continue here"}, {"start": 2800.78, "end": 2806.46, "text": " Okay, so what's this batch size is one as I told you i'm using one because otherwise I would be hitting"}, {"start": 2806.86, "end": 2809.7400000000002, "text": " Outer memory exceptions and now we just construct a dummy data set"}, {"start": 2809.74, "end": 2814.62, "text": " So here is how it looks like this this one we basically just stored the batch. It's a super simple"}, {"start": 2814.9399999999996, "end": 2820.54, "text": " Well, literally dummy data set the name says it itself. So what we do is we just create this"}, {"start": 2821.18, "end": 2823.18, "text": " Batch you can see here"}, {"start": 2823.18, "end": 2829.58, "text": " And what we do is we have source tokens. We have uh, we just have the the array of of 128 elements"}, {"start": 2829.58, "end": 2835.02, "text": " We created if you recall before so we just use that as a source tokens and then um"}, {"start": 2835.02, "end": 2840.54, "text": " Um, we set as the target tokens those dummy targets and you can see see they're shifted by one"}, {"start": 2840.7, "end": 2845.2599999999998, "text": " So here we have three four five and there we had for the source one. We had two"}, {"start": 2845.82, "end": 2848.14, "text": " Three four, so they're exactly shifted by one"}, {"start": 2848.7, "end": 2854.38, "text": " So this is going to be our simple, um dummy data set. Uh, we just have number of sentences is one"}, {"start": 2854.62, "end": 2857.58, "text": " We only have 128 tokens blah blah blah"}, {"start": 2857.66, "end": 2863.1, "text": " So we literally have batch size one and only 128 tokens as simple as it gets as simple as it gets"}, {"start": 2863.1, "end": 2865.3399999999997, "text": " So let me just construct this thingy"}, {"start": 2866.7, "end": 2871.98, "text": " And that's it. So that's the construction of the data set. It's very easy and simple"}, {"start": 2872.14, "end": 2876.54, "text": " So now the trainer is just a abstraction they use to to sort everything inside"}, {"start": 2876.62, "end": 2884.7, "text": " So the task and the data set and everything so here they extract the shared parameters because I told you they're binding the input embedding"}, {"start": 2884.7, "end": 2887.3399999999997, "text": " with the output embedding output projection layer"}, {"start": 2888.38, "end": 2892.14, "text": " Again, that's now less interesting. 
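The parameter count behind the "112 million" log line, as a one-liner: sum the number of elements over all trainable tensors.

```python
import torch.nn as nn


def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print(count_parameters(nn.Linear(512, 768)))   # 512*768 + 768 = 393984
```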
I'm going to skip over all of these"}, {"start": 2892.14, "end": 2894.14, "text": " They are storing the criteria and the model"}, {"start": 2894.7, "end": 2896.2999999999997, "text": " Uh blah blah blah"}, {"start": 2896.2999999999997, "end": 2900.8599999999997, "text": " Uh, they are basically again casting to uh half precision float"}, {"start": 2901.42, "end": 2903.98, "text": " fp 16 both the model and the criterion"}, {"start": 2904.46, "end": 2906.46, "text": " uh, and uh"}, {"start": 2906.8599999999997, "end": 2908.8599999999997, "text": " Now they're pushing it to the gpu"}, {"start": 2909.5, "end": 2910.3799999999997, "text": " and"}, {"start": 2910.3799999999997, "end": 2912.14, "text": " That's it"}, {"start": 2912.14, "end": 2914.7799999999997, "text": " Nothing, nothing interesting there. I'm going to skip all of this"}, {"start": 2915.42, "end": 2918.8599999999997, "text": " I'm going to skip all of this. I'm just going to put f5 somewhere here"}, {"start": 2918.86, "end": 2922.54, "text": " Sorry, put the breakpoint and hit f5. My brain is glitching nice"}, {"start": 2923.58, "end": 2929.82, "text": " Okay. So again, it's not fetching the environment for some reason so you can see here I have rtx 2080"}, {"start": 2930.38, "end": 2933.98, "text": " 8 gigabytes, uh 7.5 are the major and minor"}, {"start": 2935.26, "end": 2939.9, "text": " And they just store that for whatever reason. I'm not sure why they're using it probably just to"}, {"start": 2940.86, "end": 2946.6200000000003, "text": " For the logging purposes so they they know exactly which devices in case you have heterogeneous distributed system"}, {"start": 2946.62, "end": 2953.3399999999997, "text": " Where you're using multiple gpus some are maybe p 100 some are maybe m 40 some are maybe v 100"}, {"start": 2953.9, "end": 2957.98, "text": " Some are a 100 so when I have that additional, uh, as you can see here, they're just"}, {"start": 2958.62, "end": 2963.3399999999997, "text": " Logging everything basically. Okay. We're back to the train. This is the the main function"}, {"start": 2963.98, "end": 2966.46, "text": " In case you're confused where we are. So, um"}, {"start": 2967.58, "end": 2970.62, "text": " Let's now i'm gonna skip this because this is just gonna"}, {"start": 2970.62, "end": 2976.22, "text": " Uh give us back this iterator and there is like literally four layers of wrapping they do here"}, {"start": 2976.22, "end": 2982.94, "text": " They have some counting wrapper and then this wrapper and this wrapper gets very very unwieldy very quickly"}, {"start": 2983.3399999999997, "end": 2986.46, "text": " So now skip that part and just focus on on the train loop now"}, {"start": 2986.94, "end": 2988.54, "text": " um"}, {"start": 2988.54, "end": 2993.02, "text": " So here we are here's the train loop. Let's enter the train loop. Okay, so we're actually hitting this"}, {"start": 2994.38, "end": 2996.62, "text": " Aggregate function i'm going to show you what it is"}, {"start": 2996.62, "end": 3002.06, "text": " So i'm gonna hit i'm gonna enter i'm gonna hit f12 to train i'm gonna put the breakpoint here"}, {"start": 3002.38, "end": 3005.74, "text": " You can see there is this um wrapper. 
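The per-device environment logging mentioned here boils down to querying the local GPU's properties, which is how the logs end up showing "RTX 2080, 8 GB, 7.5" and how heterogeneous clusters can be diagnosed later:

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name,
          round(props.total_memory / 1024**3, 1), "GiB",
          f"compute capability {props.major}.{props.minor}")
else:
    print("no CUDA device visible")
```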
They're using this decorator"}, {"start": 3006.2799999999997, "end": 3012.2999999999997, "text": " Aggregate and it's that's just going to make sure that when they're logging something inside of that scope"}, {"start": 3012.94, "end": 3017.1, "text": " That all of those logging details are being stored into this particular train"}, {"start": 3017.64, "end": 3021.3399999999997, "text": " Dictionary and then they'll have multiple of these, uh aggregate"}, {"start": 3021.34, "end": 3029.28, "text": " Uh, um decorators and that's just gonna gonna like nicely organize where each of the logs is going to so they know exactly"}, {"start": 3029.36, "end": 3033.2000000000003, "text": " From which part of the code which logs came from that's the basic idea"}, {"start": 3033.76, "end": 3039.28, "text": " Okay, so we're gonna just um hit f5 and exit this and hit the initial part of the train function"}, {"start": 3040.4, "end": 3043.52, "text": " Let's just grab the iterator. I can skip over all of this"}, {"start": 3044.1600000000003, "end": 3046.1600000000003, "text": " Um, basically iterator is just"}, {"start": 3046.16, "end": 3051.8399999999997, "text": " Uh as the name suggests counting iterators, so it's just gonna feed us those 128"}, {"start": 3053.44, "end": 3055.44, "text": " Tokens in in on each call"}, {"start": 3056.56, "end": 3063.2799999999997, "text": " That's it. We grab some progress bar again. I'm gonna skip all of that again. I told you this is this is what it"}, {"start": 3063.8399999999997, "end": 3065.8399999999997, "text": " How the code looks like when you're training?"}, {"start": 3066.08, "end": 3071.04, "text": " Uh bigger models like there is a lot of things that you don't care about because they're conceptually not important"}, {"start": 3071.12, "end": 3075.2, "text": " But then again, they're super important if you want to make sure if you want to be able to debug"}, {"start": 3075.2, "end": 3080.72, "text": " Uh things when they go wrong, so i'm gonna skip over all of this. Let's get to here"}, {"start": 3082.16, "end": 3086.16, "text": " Here is the actual train function. So there is train instead of train instead of training again"}, {"start": 3086.64, "end": 3091.68, "text": " A lot of layers of of kind of complexity. Uh, let's skip over all of this"}, {"start": 3092.16, "end": 3097.8399999999997, "text": " So here is the definition of the train we hit f5 we get here and this is where the actual magic starts happening"}, {"start": 3097.9199999999996, "end": 3099.68, "text": " So we we're not"}, {"start": 3099.68, "end": 3107.7599999999998, "text": " Gonna iterate and fetch the sample. We will not enter this particular branch because this new profiler is set to false"}, {"start": 3108.56, "end": 3111.3599999999997, "text": " Um, and so we'll we'll just hit this branch here"}, {"start": 3111.9199999999996, "end": 3117.2799999999997, "text": " And now i'll hit the uh f11 and enter this train function and instead of this train function"}, {"start": 3117.52, "end": 3121.12, "text": " There is yet another train function. So let me hit f5 and we end up here"}, {"start": 3121.9199999999996, "end": 3125.12, "text": " so we end up here train step so again, uh the"}, {"start": 3125.12, "end": 3129.7599999999998, "text": " The outer train is the epoch train. This one is the batch trick"}, {"start": 3129.8399999999997, "end": 3134.88, "text": " This one is going to be like the batch train. 
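A toy version of the "aggregate" logging scope described here, just to convey the idea: anything logged while a named scope is active gets recorded under that scope, so you can later tell which part of the training loop each statistic came from. This is a simplified reimplementation, not the library's actual metrics module.

```python
from collections import defaultdict
from contextlib import contextmanager

_logs = defaultdict(dict)
_scope_stack = []


@contextmanager
def aggregate(name):
    _scope_stack.append(name)
    try:
        yield _logs[name]
    finally:
        _scope_stack.pop()


def log_scalar(key, value):
    for scope in _scope_stack:          # record into every active scope
        _logs[scope][key] = value


with aggregate("train"):
    with aggregate("train_inner"):
        log_scalar("loss", 3.14)

print(dict(_logs))   # both 'train' and 'train_inner' saw the loss
```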
So we pass just the samples and we do a single forward pass"}, {"start": 3135.2, "end": 3137.44, "text": " So let's see what the samples are samples"}, {"start": 3138.08, "end": 3142.88, "text": " Samples is a dictionary as you can see we have all those tokens from the dummy"}, {"start": 3143.7599999999998, "end": 3145.2799999999997, "text": " data set loader"}, {"start": 3145.2799999999997, "end": 3149.6, "text": " Two interesting things to notice again. They have this aggregate thingy going on"}, {"start": 3150.08, "end": 3154.4, "text": " It's kind of convenient. It might be confusing if you're not used to it, but uh, it"}, {"start": 3154.4, "end": 3159.12, "text": " It does the job and secondly they have this profiler record function"}, {"start": 3159.28, "end": 3163.6800000000003, "text": " So what this thing does is first of all, they enable the profiler. I'm going to show you that in a second"}, {"start": 3164.0, "end": 3171.52, "text": " Secondly, this record function basically names this particular part of the code with train underscore step"}, {"start": 3172.1600000000003, "end": 3173.12, "text": " dash"}, {"start": 3173.12, "end": 3180.7200000000003, "text": " Some number of the iteration and that's useful because later on the when you analyze the profiler output you'll you'll exactly know"}, {"start": 3180.72, "end": 3186.08, "text": " How long or like you have the details the profiling details about this particular piece of code?"}, {"start": 3186.08, "end": 3188.9599999999996, "text": " That's why they do it. That's why they have the record function here"}, {"start": 3189.68, "end": 3192.08, "text": " So again, the profiler should have been set up somewhere here"}, {"start": 3192.64, "end": 3194.3199999999997, "text": " Uh, let me just see. Yeah"}, {"start": 3194.3199999999997, "end": 3201.2, "text": " So when we were here there was the profiler call here and you can see profile memory true with stack true record shapes true"}, {"start": 3201.2, "end": 3205.4399999999996, "text": " So that means they'll be profiling the memory. What was the memory footprint like?"}, {"start": 3205.8399999999997, "end": 3208.64, "text": " In case they're hitting oms this helps you debug it"}, {"start": 3208.64, "end": 3213.92, "text": " Recording the shapes of the variables and also stack. Um, if I recall correctly"}, {"start": 3214.56, "end": 3220.96, "text": " Uh, um, yeah, I forgot what the command is. You can always enter the profile here and see what the um"}, {"start": 3221.44, "end": 3224.16, "text": " Stack means so let me just find that variable"}, {"start": 3224.64, "end": 3230.56, "text": " Okay, record source information file and line number for the operation. So additional metadata for for the operations"}, {"start": 3230.56, "end": 3233.68, "text": " Let's go back here and let's enter the train step"}, {"start": 3233.68, "end": 3238.8799999999997, "text": " Enter the train step again. I'll have to it keeps on entering these aggregate"}, {"start": 3239.6, "end": 3246.24, "text": " Functions that kind of sucks. So I have to enter the train step using f12 set the break point and then hit f5"}, {"start": 3246.3999999999996, "end": 3248.3999999999996, "text": " So now we are here. 
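A compact version of the profiling setup referenced above: enable the PyTorch profiler with memory, shape, and stack recording, and label a region of code with record_function so it shows up by name (like train_step-<iteration>) in the trace. The model here is a stand-in.

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Linear(512, 512)
x = torch.randn(128, 512)

with profile(activities=[ProfilerActivity.CPU],
             profile_memory=True, record_shapes=True, with_stack=True) as prof:
    for i in range(3):
        with record_function(f"train_step-{i}"):   # named region in the profiler output
            model(x).sum().backward()

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```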
So let's see what's going on"}, {"start": 3248.56, "end": 3252.24, "text": " So we set the seed we set the models to train criterion to train"}, {"start": 3252.48, "end": 3257.04, "text": " We call zero grad and i'll actually enter zero grad and there is a particular reason i'm doing that"}, {"start": 3257.12, "end": 3260.8799999999997, "text": " Let me let me show you what why is that why that is? Okay, let's enter this one"}, {"start": 3260.88, "end": 3262.88, "text": " Let's enter this one"}, {"start": 3263.6800000000003, "end": 3265.6800000000003, "text": " Let's enter"}, {"start": 3265.6800000000003, "end": 3267.6800000000003, "text": " This one. Oh my god"}, {"start": 3267.6800000000003, "end": 3269.6800000000003, "text": " And here it is"}, {"start": 3269.6800000000003, "end": 3273.6800000000003, "text": " So the reason is we have this lost scaling and that's because we're doing mixed precision"}, {"start": 3273.6800000000003, "end": 3279.6800000000003, "text": " And so lost scale is set to four and if you recall from my video on again the ultimate guide to scaling"}, {"start": 3279.6800000000003, "end": 3289.6800000000003, "text": " Why they do this is because you want to shift all of the values such that you're you're you're less likely to hit the underflow in the fp16 precision"}, {"start": 3289.68, "end": 3293.9199999999996, "text": " So this kind of shifts all of the values and that's why they uh do"}, {"start": 3294.8799999999997, "end": 3296.24, "text": " One over"}, {"start": 3296.24, "end": 3300.08, "text": " Well, they're doing one over four because the multiply factor is going to do the unscaling"}, {"start": 3300.3999999999996, "end": 3304.56, "text": " So we'll see the this is going to do the scaling and the multiply factor is going to do the unscaling"}, {"start": 3305.04, "end": 3309.7599999999998, "text": " Okay, we'll see that a bit later, but that's a detail. I wanted to show you so that's the zero grad. Um,"}, {"start": 3310.3199999999997, "end": 3311.7599999999998, "text": " Let's continue here"}, {"start": 3311.7599999999998, "end": 3316.16, "text": " We grab the sample we do prepare sample which is gonna do two things"}, {"start": 3316.16, "end": 3321.6, "text": " One is to push the cuda and the second one is to convert to fp16 because of that. I'm just gonna skip"}, {"start": 3322.16, "end": 3324.16, "text": " Um, let's enter"}, {"start": 3324.24, "end": 3329.44, "text": " The the final train step and it's happening instead of a task, which is kind of unintuitive"}, {"start": 3329.44, "end": 3333.12, "text": " Why would you why would the train step be associated with a task?"}, {"start": 3333.92, "end": 3335.2799999999997, "text": " like this"}, {"start": 3335.2799999999997, "end": 3342.3199999999997, "text": " Like this way of structuring the the the the programming model is kind of weird and does not resonate with my with my mind"}, {"start": 3342.32, "end": 3346.42, "text": " Okay, guys, let me enter that one. I'm gonna hit f11"}, {"start": 3347.28, "end": 3351.52, "text": " And here we are. This is the train step again. Take a look at the stack here"}, {"start": 3351.84, "end": 3358.48, "text": " Like the stack is horrible. We have train train train step instead of a trainer train step inside of a task"}, {"start": 3358.96, "end": 3360.96, "text": " But it is what it is"}, {"start": 3361.76, "end": 3364.88, "text": " It's kind of ambitious to try to explain this code on the youtube. 
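The fp16 loss-scaling trick discussed here, in its simplest form: multiply the loss by a scale factor before backward so that small gradients do not underflow in fp16, then unscale the gradients (multiply by 1/scale) before the optimizer uses them. The scale of 4.0 matches what the debugger showed; real trainers adjust it dynamically.

```python
import torch

w = torch.randn(512, 512, requires_grad=True)
x = torch.randn(128, 512)
loss_scale = 4.0

loss = (x @ w).pow(2).mean()
(loss * loss_scale).backward()        # scaled backward pass: gradients are 4x larger
w.grad.mul_(1.0 / loss_scale)         # unscale before the optimizer step
```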
So yeah"}, {"start": 3365.6800000000003, "end": 3371.44, "text": " I'm giving my best so again record function forward. So we'll know exactly the details of this part of code"}, {"start": 3371.44, "end": 3376.88, "text": " We'll know how much memory is spent. We'll know the shapes. We'll know the lines everything so that kind of makes it easier to debug"}, {"start": 3377.52, "end": 3379.52, "text": " Let's enter this f11"}, {"start": 3379.92, "end": 3386.2400000000002, "text": " Here we are. We do a forward pass through the model. We passed on that input. That's gonna pass us the the actual tokens"}, {"start": 3386.7200000000003, "end": 3390.48, "text": " So here we are. We hit the coder. We pass the tokens to the decoder"}, {"start": 3391.44, "end": 3397.44, "text": " And now we call this extract features function the extract features function is gonna do a forward pass through the"}, {"start": 3398.2400000000002, "end": 3400.7200000000003, "text": " Decoder transformer. I'm not sure why they called it like that"}, {"start": 3400.72, "end": 3407.52, "text": " I mean, it's not incorrect to to to name it like that but kind of weird extract features. Okay"}, {"start": 3408.16, "end": 3409.8399999999997, "text": " um"}, {"start": 3409.8399999999997, "end": 3411.8399999999997, "text": " Let's continue here"}, {"start": 3411.9199999999996, "end": 3415.68, "text": " Let's enter the extract features and instead of extract features"}, {"start": 3415.68, "end": 3420.72, "text": " There is the extract features scriptable and this the reason why they do this is because of torch script"}, {"start": 3420.72, "end": 3426.3999999999996, "text": " There was again some some constraint that if you want to have optimization you kind of have to do this and"}, {"start": 3426.4, "end": 3430.96, "text": " And it makes the code horrible. I think they know they kind of mentioned that somewhere here"}, {"start": 3431.76, "end": 3433.76, "text": " Blah blah. I don't see it now, but I yeah"}, {"start": 3434.32, "end": 3439.2000000000003, "text": " Um a scriptable subclass of this class has an extract features method and calls super extract features"}, {"start": 3439.2000000000003, "end": 3444.56, "text": " But super is not supported in torch script a copy of this function is made to be used in the subclass instead"}, {"start": 3445.2000000000003, "end": 3446.56, "text": " Okay"}, {"start": 3446.56, "end": 3448.56, "text": " Let's continue here"}, {"start": 3449.92, "end": 3455.52, "text": " Let's continue here. Okay. This part is not interesting. So the first thing we do is forward embedding"}, {"start": 3455.52, "end": 3457.52, "text": " so that's gonna take our our"}, {"start": 3458.0, "end": 3463.04, "text": " Basically tokens so from two to whatever we have 128 of those"}, {"start": 3463.84, "end": 3467.84, "text": " And we are going to now convert them. We're gonna map them into"}, {"start": 3468.46, "end": 3471.52, "text": " Multi-dimensional space. So let's let me enter this part"}, {"start": 3472.64, "end": 3476.88, "text": " So let me enter this forward part and let's see what's going on"}, {"start": 3477.52, "end": 3479.52, "text": " So basically we just call"}, {"start": 3479.52, "end": 3486.64, "text": " All yeah, this is the uh, the torch embedding. So it's just gonna embed basically everything out of the box basically"}, {"start": 3487.7599999999998, "end": 3493.44, "text": " Okay, so this was actually the position. Okay, we first embedded the positions. Let me see. 
Where is the actual"}, {"start": 3494.08, "end": 3495.52, "text": " embedding going on"}, {"start": 3495.52, "end": 3496.56, "text": " um"}, {"start": 3496.56, "end": 3502.0, "text": " Okay, so here's where where we embedded tokens so we embed them here. So now we have token embeddings"}, {"start": 3502.16, "end": 3506.16, "text": " So let me show you this so we have token embeddings the shape of this thing is"}, {"start": 3506.16, "end": 3510.8199999999997, "text": " 128 512 let's see. What's the shape of the position?"}, {"start": 3512.02, "end": 3518.74, "text": " Positions it should be also 128 but we have 768 here. So that means we we would have a mismatch"}, {"start": 3519.14, "end": 3521.7, "text": " So because of that we'll have to first do the projection"}, {"start": 3522.2599999999998, "end": 3523.14, "text": " um"}, {"start": 3523.14, "end": 3526.42, "text": " Okay, embedding scale is set to one. So this is kind of no up"}, {"start": 3527.54, "end": 3529.06, "text": " we do the"}, {"start": 3529.06, "end": 3530.18, "text": " projection"}, {"start": 3530.18, "end": 3535.3799999999997, "text": " From 512 to 768 and now we can add positions on top of x"}, {"start": 3535.38, "end": 3538.26, "text": " So that's the initial part of the transformer logic"}, {"start": 3538.26, "end": 3542.6600000000003, "text": " We do the embedding we add on top of it the positionally encodings and now we start doing the forward pass"}, {"start": 3543.1400000000003, "end": 3546.02, "text": " Okay, let's continue here and let's exit we're returning back"}, {"start": 3546.6600000000003, "end": 3554.98, "text": " The embeddings the positions and the combined embeddings and positions after we do the additional projection. Okay, let's exit this part"}, {"start": 3556.1800000000003, "end": 3559.2200000000003, "text": " We are again in the forward pass of the transformer decoder"}, {"start": 3559.3, "end": 3563.06, "text": " You can always take a look at here to to figure out where we are transformer decoder"}, {"start": 3563.06, "end": 3567.14, "text": " We're doing the forward pass. We've done the embedding part here. We create the"}, {"start": 3568.58, "end": 3570.58, "text": " mask the"}, {"start": 3570.74, "end": 3575.94, "text": " Attention mask. So let me show you how it's going to look like i'm gonna kind of quickly walk you through this part"}, {"start": 3576.42, "end": 3578.42, "text": " It's going to create this"}, {"start": 3578.58, "end": 3581.86, "text": " Tri u is going to create a triangular matrix"}, {"start": 3582.9, "end": 3584.9, "text": " It's going to be filled with bunch of"}, {"start": 3585.94, "end": 3587.46, "text": " minus int"}, {"start": 3587.46, "end": 3592.34, "text": " So negative infinity and that's going to be used to create a causal decoder, right?"}, {"start": 3592.34, "end": 3598.9, "text": " So we'll just add that to the attention scores and that's going to after applying the softmax"}, {"start": 3598.9, "end": 3603.1400000000003, "text": " You'll end up with tokens that can only attend to itself and the previous tokens"}, {"start": 3604.1000000000004, "end": 3607.46, "text": " So again here we create 128 times 128"}, {"start": 3608.02, "end": 3613.46, "text": " Matrix of zeros then we fill it with negative infinity and then we just call this"}, {"start": 3613.46, "end": 3621.2200000000003, "text": " This function that will basically create a triangular map mask out of this. 
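The causal attention mask being built here, as a short sketch: a 128 x 128 matrix with zeros on and below the diagonal and -inf strictly above it. Added to the attention scores before the softmax, it leaves each token able to attend only to itself and earlier tokens.

```python
import torch

seq_len = 128
# torch.triu keeps the upper triangle (future positions) at -inf and zeroes the rest.
attn_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
print(attn_mask[:3, :3])   # zeros on/below the diagonal, -inf above it
```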
So let me step over this and let me show you how this looks like"}, {"start": 3622.7400000000002, "end": 3624.7400000000002, "text": " Here is how it looks like again"}, {"start": 3625.3, "end": 3630.26, "text": " Zeros and then on this upper the upper triangle is minus infinities"}, {"start": 3630.9, "end": 3632.1, "text": " Okay"}, {"start": 3632.1, "end": 3638.98, "text": " Let's continue here. We're not using a lib so we can step over all of this and we just return back that that mask"}, {"start": 3638.98, "end": 3641.46, "text": " We just created. Okay, so that's the attention mask"}, {"start": 3641.46, "end": 3648.98, "text": " We do some transpose here operation. So because we are batch number of tokens number of channels now we are"}, {"start": 3649.7, "end": 3655.62, "text": " Uh number of tokens and then batch and the number of channels again optimization detail the lot of transformer implementations do"}, {"start": 3656.7400000000002, "end": 3660.02, "text": " Okay, we just store some interstates blah blah. Now we do"}, {"start": 3660.58, "end": 3666.58, "text": " A bunch of passes through all of the layers. We have 12 layers and we are going to do four pass through all of them"}, {"start": 3666.82, "end": 3668.82, "text": " i'm going to skip the"}, {"start": 3668.9, "end": 3670.9, "text": " actual layer logic"}, {"start": 3670.9, "end": 3676.82, "text": " Uh, that would be too much i'm going to skip it, but it's just classic transformer stuff. So let me exit this"}, {"start": 3677.3, "end": 3678.98, "text": " Let me exit here"}, {"start": 3678.98, "end": 3680.58, "text": " uh, let me just"}, {"start": 3680.58, "end": 3685.14, "text": " Put breakpoint here hit f5 and let's exit the transformer"}, {"start": 3686.02, "end": 3689.86, "text": " We do some again now we now we have again, let me show you the um"}, {"start": 3690.02, "end": 3694.02, "text": " So we now passed all those vectors through the transformer 12 layers in total"}, {"start": 3694.9, "end": 3696.9, "text": " And we end up with the shape"}, {"start": 3696.9, "end": 3704.28, "text": " 128 768 and so you see it's kind of messed up. So we just do the transpose operation again and now we end up with the"}, {"start": 3704.84, "end": 3705.88, "text": " correct"}, {"start": 3705.88, "end": 3708.04, "text": " um the usual shape"}, {"start": 3708.84, "end": 3714.12, "text": " Okay, that's it. And finally the projection so we have 512 instead of 768"}, {"start": 3714.52, "end": 3719.56, "text": " Let me do f10. That's it. We return the interstates. We return the attention blah blah blah"}, {"start": 3719.88, "end": 3722.76, "text": " And we turn x which are the actual output embeddings"}, {"start": 3722.76, "end": 3728.6200000000003, "text": " Okay, let's continue. That's the that part and now we call the output layer"}, {"start": 3729.8, "end": 3736.2000000000003, "text": " And that's gonna project this into I guess, um vocab size. So yeah, you can see here 51 200"}, {"start": 3737.6400000000003, "end": 3739.32, "text": " Let's continue"}, {"start": 3739.32, "end": 3742.84, "text": " Let's continue and that's the forward pass through the model now we compute the loss"}, {"start": 3743.32, "end": 3746.36, "text": " Um, i'm gonna kind of skip through this a little bit"}, {"start": 3746.36, "end": 3754.52, "text": " So we first to get normal probabilities. 
It's gonna this is gonna apply the log softmax on top of our um outputs here"}, {"start": 3755.4, "end": 3757.4, "text": " then we do some reshaping and"}, {"start": 3758.48, "end": 3760.48, "text": " Let's see the shapes"}, {"start": 3760.84, "end": 3762.84, "text": " And those of you who are still watching"}, {"start": 3763.6400000000003, "end": 3769.56, "text": " I'd love to hear your feedback. Like if you have any feedback, it would be super appreciated. So 128 51"}, {"start": 3769.56, "end": 3776.04, "text": " 1,200 whereas now these are normalized because we applied the softmax. Okay, so"}, {"start": 3776.7599999999998, "end": 3779.56, "text": " We get the targets. This is just going to fetch those"}, {"start": 3780.7599999999998, "end": 3782.7599999999998, "text": " sample targets so that's going to fetch those"}, {"start": 3783.54, "end": 3786.04, "text": " 128 tokens that were shifted by one to the right"}, {"start": 3787.0, "end": 3791.72, "text": " I'm gonna convince you that that's indeed the case so you can see here three four five"}, {"start": 3791.7999999999997, "end": 3796.68, "text": " That's the target sequence we were using the whole time and now we apply the nl loss. So that's the"}, {"start": 3796.68, "end": 3802.7599999999998, "text": " Um negative log likelihood loss and you can see it's a bit more complex than than your usual implementation"}, {"start": 3802.7599999999998, "end": 3806.3599999999997, "text": " So your usual implementation will be just this but when you're dealing with um"}, {"start": 3807.3199999999997, "end": 3811.48, "text": " I guess uh, like big dimensionality and a lot of a big vocab"}, {"start": 3812.2799999999997, "end": 3817.0, "text": " Then you can see here they have this if branching. So if the number of elements is smaller than"}, {"start": 3817.64, "end": 3819.64, "text": " What is this like?"}, {"start": 3819.64, "end": 3827.16, "text": " 2 billion then you just execute this standard f and l loss if it's not the case then you have to do some"}, {"start": 3827.56, "end": 3833.4, "text": " some a bit more intricate logic because otherwise, um, you would encounter some uh,"}, {"start": 3834.52, "end": 3837.48, "text": " Numerical issues so because we are smaller than this"}, {"start": 3838.6, "end": 3842.68, "text": " Let's see what the number of elements is for us. We have 51 000 and then"}, {"start": 3843.7, "end": 3847.24, "text": " Dimensionality is 128. So I guess it's yeah, it's below"}, {"start": 3847.24, "end": 3850.04, "text": " The threshold so we just do this. Um"}, {"start": 3851.08, "end": 3853.7999999999997, "text": " Negative log likelihood and that's it. That's everything"}, {"start": 3853.7999999999997, "end": 3859.56, "text": " That's how transformers work like you apply just you just apply this nl loss on the output and"}, {"start": 3859.7999999999997, "end": 3864.52, "text": " Compared with the targets that are shifted by one and that's it the whole magic behind transformers"}, {"start": 3865.7999999999997, "end": 3870.3599999999997, "text": " Okay. Now here's where the actual mixed precision magic happens and we are almost done with this video"}, {"start": 3871.0, "end": 3875.9599999999996, "text": " I'm gonna quickly walk you through all of this. We're just like logging some outputs. That's not important"}, {"start": 3875.96, "end": 3878.12, "text": " And i'm gonna skip all of this"}, {"start": 3879.4, "end": 3884.12, "text": " We're just storing some stuff blah blah blah. We're just storing some stuff. 
Let's go back here"}, {"start": 3884.44, "end": 3889.56, "text": " Let's return back the loss. So here is the loss and now let's go for the second part"}, {"start": 3889.8, "end": 3891.0, "text": " So this is the interesting part"}, {"start": 3891.0, "end": 3896.6, "text": " So we do backward pass and we again do record function backward to just profile this piece of code"}, {"start": 3897.2400000000002, "end": 3903.7200000000003, "text": " And let's do um, let's do f11 here. Let's enter so now because we have scaling. Let's see what happens. So"}, {"start": 3904.28, "end": 3905.7200000000003, "text": " um"}, {"start": 3905.72, "end": 3907.72, "text": " Scaler is not none. So we enter this branch"}, {"start": 3908.2, "end": 3911.0, "text": " And what we do is we scale the loss, okay"}, {"start": 3911.08, "end": 3917.7999999999997, "text": " So whatever the loss is and it's thousand four hundred something and it's that big because we were just doing summation in the nl loss"}, {"start": 3917.8799999999997, "end": 3919.24, "text": " As the reduction method"}, {"start": 3919.24, "end": 3926.4399999999996, "text": " So if I enter f11 here, you can see what what what we do is we simply take that number and we multiply it by four"}, {"start": 3926.68, "end": 3929.64, "text": " And that's what I previously explained you because of that"}, {"start": 3929.64, "end": 3936.44, "text": " That when you multiply the loss by four you trivially multiply all of the gradients are going to be uh multiplied by four"}, {"start": 3937.0, "end": 3938.92, "text": " that's how the"}, {"start": 3938.92, "end": 3941.3199999999997, "text": " Chain rule works. So let's hit f10"}, {"start": 3942.3599999999997, "end": 3946.52, "text": " Let's now do the backward pass. I don't think anything special is going to happen here"}, {"start": 3947.16, "end": 3948.7599999999998, "text": " I'm just gonna yeah"}, {"start": 3948.7599999999998, "end": 3954.92, "text": " Everything is yeah, we are we exited the the um train step and now let's"}, {"start": 3954.92, "end": 3962.52, "text": " Let's go back. We are not we're now level up in the train step of the trainer not in the train step of the task"}, {"start": 3963.56, "end": 3970.76, "text": " Let's hit f10. We just append the outputs blah blah blah. That part is not interesting and okay, let's continue"}, {"start": 3971.16, "end": 3974.6800000000003, "text": " so here you can see this is kind of uh wrapped up inside of a uh,"}, {"start": 3975.48, "end": 3980.04, "text": " runtime error out of memory exception so that the whole training procedure is wrapped up in that"}, {"start": 3980.84, "end": 3982.52, "text": " uh, and um"}, {"start": 3982.52, "end": 3986.52, "text": " I think this is this is it. We just have a single batch. That's why we exit the"}, {"start": 3987.32, "end": 3989.32, "text": " this this for loop"}, {"start": 3989.4, "end": 3991.4, "text": " through the samples"}, {"start": 3991.48, "end": 3993.08, "text": " and um"}, {"start": 3993.08, "end": 3997.48, "text": " Let's continue here sample size was 128 because we only had 128 tokens"}, {"start": 3998.2, "end": 4003.56, "text": " Um, this is some logic that will be necessary if we had multiple machines, which we don't so that's fine"}, {"start": 4004.52, "end": 4006.92, "text": " Uh, okay, so we now enter this interesting part"}, {"start": 4007.56, "end": 4011.72, "text": " And this is pretty much the end of the video. 
So try accept as you can see here"}, {"start": 4011.72, "end": 4016.3599999999997, "text": " There is so many accepts they're catching so they're catching the floating point error. They're catching the overflow error"}, {"start": 4016.3599999999997, "end": 4018.2, "text": " They're catching the outer memory error"}, {"start": 4018.2, "end": 4021.8799999999997, "text": " So all of these things are necessary when you're training at these big scales"}, {"start": 4021.8799999999997, "end": 4025.16, "text": " You want to catch the errors you want to handle them gracefully so you can see here"}, {"start": 4025.3999999999996, "end": 4031.0, "text": " If there is a floating point error, I think they do have this nan detector. Yeah, we'll get to that a bit later"}, {"start": 4031.08, "end": 4034.12, "text": " Let me now first walk you through the uh reduced gradients"}, {"start": 4034.6, "end": 4040.52, "text": " So here actually nothing is going to happen because we have a single gpu. Otherwise, uh, you would take"}, {"start": 4040.52, "end": 4045.8, "text": " Uh gradients from across the devices and then you would do the all reduce"}, {"start": 4045.88, "end": 4049.96, "text": " So that means you literally take all of the gradients you push them onto a single device"}, {"start": 4050.12, "end": 4055.96, "text": " You divide them by the number of devices and then you do that for all all the devices and all of the devices will have"}, {"start": 4056.04, "end": 4057.64, "text": " The same state now"}, {"start": 4057.64, "end": 4060.7599999999998, "text": " If that makes sense, but less important for now, let's focus on the loss scaling"}, {"start": 4061.4, "end": 4066.36, "text": " Multiply grads. Let's see what's going on here. Okay. So what we do is"}, {"start": 4066.36, "end": 4071.08, "text": " We divide so we have this multiply grads and we divide this is one"}, {"start": 4071.8, "end": 4079.56, "text": " divided by 128 the reason we divide by 128 is because nll loss had as a reduction method had a sum"}, {"start": 4080.1200000000003, "end": 4085.0, "text": " And because we had 128 elements, we now want to divide by 128 if that makes sense"}, {"start": 4086.1200000000003, "end": 4088.1200000000003, "text": " So let's enter here"}, {"start": 4088.2000000000003, "end": 4090.2000000000003, "text": " Let's see what's going to happen here"}, {"start": 4090.2000000000003, "end": 4092.04, "text": " uh, so"}, {"start": 4092.04, "end": 4099.0, "text": " Multiply grads. So we just take that multiply factor, which is now one over four because we want to do the unscaling of gradients"}, {"start": 4099.24, "end": 4104.28, "text": " And we additionally multiply by this one one over 128. This is the c constant"}, {"start": 4105.64, "end": 4113.56, "text": " That's that number and then that's kind of all aggregated inside of this variable and we're going to later apply the variable to actual gradients"}, {"start": 4113.56, "end": 4116.04, "text": " We'll see that a bit later. Okay, let's exit here"}, {"start": 4116.68, "end": 4118.68, "text": " Now gradient clipping is going to happen"}, {"start": 4118.68, "end": 4122.04, "text": " again, I will have to skip this part, but um,"}, {"start": 4122.4400000000005, "end": 4127.96, "text": " Okay. No, actually let me quickly enter it. Let me see what's going on there. 
I think it's kind of fairly intricate"}, {"start": 4129.08, "end": 4134.84, "text": " There's a lot of as you can see a lot of steps, but I think there might be a part that's kind of interesting for us"}, {"start": 4134.84, "end": 4139.16, "text": " Oh, yeah, there is so here is a multiple factor. Here's where it comes into the picture. So we"}, {"start": 4140.200000000001, "end": 4147.400000000001, "text": " we basically grab the gradients of our we grab the norm of the gradients and then we multiply that by the"}, {"start": 4147.4, "end": 4153.639999999999, "text": " The multiply factor and that's kind of optimization detail because you didn't have to multiply the actual gradients"}, {"start": 4153.639999999999, "end": 4155.639999999999, "text": " You just took the norm and then"}, {"start": 4156.28, "end": 4160.04, "text": " Multiply by this factor and you get the equivalent result. That's kind of neat"}, {"start": 4161.16, "end": 4162.36, "text": " Okay"}, {"start": 4162.36, "end": 4166.36, "text": " Let's see what's going on here. Let's continue here"}, {"start": 4167.32, "end": 4169.32, "text": " max norm grad norm"}, {"start": 4169.719999999999, "end": 4174.839999999999, "text": " And again, we have another multiplication for the multiply factor. Why because"}, {"start": 4174.84, "end": 4178.04, "text": " The gradients were the norm was 16"}, {"start": 4178.04, "end": 4182.92, "text": " but we want to clip it to to 1 and that means we have to we have to take this number and"}, {"start": 4183.4800000000005, "end": 4191.0, "text": " Multiply with that number of gradients and then they will obey this property of having gradient norm of 1 if that makes sense"}, {"start": 4191.16, "end": 4195.72, "text": " That's why we have another multiplication. So now the multiply factor has three things going on"}, {"start": 4195.96, "end": 4202.04, "text": " We have the loss on scaling we have we've divided with a sample size and finally we've divided by this"}, {"start": 4202.04, "end": 4204.04, "text": " ratio of the norms"}, {"start": 4204.84, "end": 4208.28, "text": " Let's continue we check some overflow again a bunch of details"}, {"start": 4209.0, "end": 4212.92, "text": " And let's continue we check the grade norms. Let me see what this is"}, {"start": 4215.16, "end": 4216.04, "text": " I"}, {"start": 4216.04, "end": 4222.5199999999995, "text": " Think this is just going to be skipped. So yeah, it's just skipped. Okay. Now we make sure that we don't have"}, {"start": 4223.16, "end": 4225.64, "text": " none of the values in the grads are"}, {"start": 4226.5199999999995, "end": 4230.5199999999995, "text": " infinite so if we had nans then this would be"}, {"start": 4230.52, "end": 4237.160000000001, "text": " This exception would be raised and it would be caught in this particular context here where they have the nan detector"}, {"start": 4237.72, "end": 4241.240000000001, "text": " That kind of let me show you let me show you what it does. So nan detector"}, {"start": 4241.8, "end": 4244.84, "text": " Blah blah blah. It basically is going to capture"}, {"start": 4245.240000000001, "end": 4252.040000000001, "text": " Some of the values so that we can debug it later on detects the first man or in in forward and or backward pass and logs"}, {"start": 4252.280000000001, "end": 4256.76, "text": " Together with the module name. 
Okay, so it's just gonna help her to to debug"}, {"start": 4257.400000000001, "end": 4259.400000000001, "text": " Where the man occurred?"}, {"start": 4259.4, "end": 4265.24, "text": " And then once you have once they do that they do another for pass here"}, {"start": 4266.759999999999, "end": 4269.4, "text": " Under that context and it's gonna catch the first man"}, {"start": 4269.639999999999, "end": 4274.12, "text": " So they say here rerun the forward and backward pass with hooks attached to print out where it fails"}, {"start": 4274.28, "end": 4277.08, "text": " Okay, so that's kind of how they cope with the errors"}, {"start": 4277.719999999999, "end": 4283.48, "text": " Let's finally enter the optimizer step. That's where we're gonna finally apply the multiplier the famous multiplier"}, {"start": 4283.639999999999, "end": 4286.2, "text": " So let me show you here step step step"}, {"start": 4286.2, "end": 4292.84, "text": " Let's continue here. Okay unscaled grads. Let's see what it is. I think that's going to be our our logic"}, {"start": 4293.88, "end": 4298.5199999999995, "text": " So here we are we enter this part multiply grads with the multiply factor"}, {"start": 4298.5199999999995, "end": 4303.72, "text": " Okay, that's finally the moment where we apply the multiply factor and then we reset it back to one"}, {"start": 4304.12, "end": 4306.44, "text": " And then the whole procedure will will continue"}, {"start": 4306.679999999999, "end": 4314.679999999999, "text": " So here you can see here we just go through the parameters and we multiply the so parameter the gradient associated with that parameter with that weight"}, {"start": 4314.68, "end": 4320.52, "text": " Data and then multiply by c and c is the the the multiplier we were accumulating over time"}, {"start": 4320.52, "end": 4328.4400000000005, "text": " So we had three factors again, if you recall the sample size the gradient on scaling and finally the ratio of the norms"}, {"start": 4329.56, "end": 4330.84, "text": " I'm gonna"}, {"start": 4330.84, "end": 4333.320000000001, "text": " Step out of this. So hit this step out"}, {"start": 4334.200000000001, "end": 4338.84, "text": " Error and we are done. We are done here. We've done the stepping"}, {"start": 4339.8, "end": 4343.400000000001, "text": " We update the scalar. So the scalar is going to do what?"}, {"start": 4343.4, "end": 4345.4, "text": " Okay, nothing interesting here"}, {"start": 4346.5199999999995, "end": 4351.4, "text": " We can exit all of this and we are back to our train step in the trainer"}, {"start": 4353.0, "end": 4357.799999999999, "text": " We just debug some information we didn't have any nonce and that's pretty much it"}, {"start": 4357.799999999999, "end": 4362.839999999999, "text": " So I think I can skip all of this. I can skip all of this a lot of details"}, {"start": 4363.639999999999, "end": 4367.879999999999, "text": " Yeah, go at your own pace explore this code if you're curious to learn more"}, {"start": 4367.88, "end": 4373.0, "text": " But uh, I think we saw most of the interesting parts. So let me just do this"}, {"start": 4373.4800000000005, "end": 4379.32, "text": " Let me see why we are okay. So i'll just have to put f breakpoint there hit f5 here. 
We are"}, {"start": 4380.2, "end": 4384.28, "text": " So as you can see, we're finally back in the in the in the in the train.py file"}, {"start": 4385.0, "end": 4387.08, "text": " And we are inside of the train function here"}, {"start": 4387.72, "end": 4389.74, "text": " Let me see whether there is something interesting"}, {"start": 4390.36, "end": 4395.72, "text": " And the feedback, uh, we're gonna just skip this part. Nothing's gonna happen. It's gonna be a no-wisdom"}, {"start": 4395.72, "end": 4397.72, "text": " Um, and uh"}, {"start": 4398.360000000001, "end": 4400.360000000001, "text": " If we should stop"}, {"start": 4400.6, "end": 4406.52, "text": " And for now we are not stopping so that means we are again gonna fetch the next samples and uh"}, {"start": 4407.0, "end": 4408.4400000000005, "text": " Continue this whole"}, {"start": 4408.4400000000005, "end": 4411.4800000000005, "text": " Loop and that's it. So we're just gonna keep on doing this training"}, {"start": 4412.52, "end": 4415.8, "text": " Guys, that's it. Uh, you hopefully saw what it takes to"}, {"start": 4416.360000000001, "end": 4419.88, "text": " Kind of saw what it takes to build these llms. Um, we did"}, {"start": 4419.88, "end": 4425.72, "text": " Exclude the whole distributed part of the equation, which is probably the hardest part"}, {"start": 4425.72, "end": 4428.36, "text": " So like doing the model of parallelism the data"}, {"start": 4428.92, "end": 4431.8, "text": " Parallelism the the pipeline parallelism all of that"}, {"start": 4432.36, "end": 4437.8, "text": " Goodness, and then there is a zero optimizer where you do the partitioning of the activation. So there is so many details"}, {"start": 4438.36, "end": 4440.92, "text": " Again, we also skipped the checkpointing. So like"}, {"start": 4441.8, "end": 4444.36, "text": " Yeah tragic cool. I'm gonna stop here"}, {"start": 4445.24, "end": 4447.4800000000005, "text": " Let me know please let me know what you think"}, {"start": 4447.48, "end": 4454.04, "text": " Um, depending on that i'm gonna probably modify my my my uh, the videos i'm gonna be creating because"}, {"start": 4454.839999999999, "end": 4458.679999999999, "text": " By doing this I learned a lot and so I thought just kind of walking you through"}, {"start": 4459.24, "end": 4461.959999999999, "text": " Uh the learning so it didn't feel like like a big of an effort"}, {"start": 4462.04, "end": 4465.639999999999, "text": " But like going forward I'll probably won't be doing this unless people find it useful"}, {"start": 4465.639999999999, "end": 4469.5599999999995, "text": " So do let me know down in the comments whether this was actually interesting and useful for you"}, {"start": 4469.959999999999, "end": 4473.5599999999995, "text": " Having said that if you like this video, please share it out. That's the best way to do it"}, {"start": 4473.56, "end": 4479.1, "text": " This channel also subscribe to this channel if you haven't already and see you next time"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=DK-QXsiycpk
GPT-NeoX-20B | BigScience BLOOM | OPT-175B | Training Large Language Models | Papers Explained
🚀 Find out how to get started using Weights & Biases 🚀 http://wandb.me/ai-epiphany ❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany In this video I cover 3 publicly shared LLM 🚀 projects/papers and the pain they experience training them (🍿🍿🍿): 1. "What Language Model to Train if You Have One Million GPU Hours?" introducing BLOOM 176 billion parameter model by BigScience! 2. "OPT: Open Pre-trained Transformer Language Models" introducing 175 billion parameter model OPT-175B! 3. "GPT-NeoX-20B: An Open-Source Autoregressive Language Model" introducing, well, GPT-NeoX-20B, a 20 billion parameter LLM! All 3 projects shared their weights, code, and papers, so it's a great way to dig into large language models and understand them better. I walk you through the papers and the chronicles/logs they've shared sharing the pain they experienced training at these scales! :)) Cluster deletion, 2 million consecutive backslash symbols in the dataset, and much more - it's fun! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ BLOOM paper: https://openreview.net/forum?id=rI7BL3fHIZq ✅ OPT paper: https://arxiv.org/abs/2205.01068 ✅ GPT-NeoX-20B paper: https://arxiv.org/abs/2204.06745 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:33 (sponsored) Weights & Biases 02:43 BLOOM paper 19:37 BLOOM chronicles 25:08 OPT paper 31:20 OPT chronicles 33:46 GPT-NeoX-20B paper 45:30 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #gptneox #bigscience #opt
What's up guys in this video I'll be covering three very famous large language model papers And all of them have in common that they've publicly shared both the code as well as the weights So the idea will be to kind of show you Some of the connections the commonalities between them will not be digging into the actual technical details that much that much because it's basically just A transformer architecture and I already assume most of you know how hot transformers work If you don't know you can check out my video. I've covered a long time ago on how transformers work But before we go there this video is sponsored by weights and biases weights and biases is an amazing machine learning platform that helps you Track your experiments do data set versioning and model management You can see how easy it is to get started with weights and biases They are framework agnostics So that means you can you can either pick pytorch keras or whatever you want and it's only a couple of lines of code so you import the weights and biases after installing it by doing pip install and then if After initialize the project here It will start automatically logging both the configs as well as various system metrics such as GPU utilization Etc etc and if you instruct it to do so you can obviously log curves images videos and much more Here is an example for run that I want to show you quickly. That's very representative So here are some of the system metrics we are logging such as GPU temperature memory location Etc if you instruct it to do so you can log obviously losses validation loss is accuracy whatever you want and finally in this example here you can see a hyper parameter sweep optimization experiment Where we can for example select those runs? They have very low loss and then we can draw some conclusions such as we can see here that higher epochs tend to perform best and Then lower learning rate tends to help etc etc You can additionally log videos not only images and even more structured Information and data such as tables and here you can see this is not just like a your dummy UI You can apply various filters such as for example show me those rows where the where the ground truth equals the actual Guess so that's our prediction and after you click apply here. You'll literally see filtered out exactly those data points So very cool. Finally, one of the features are really love and there are many more features I won't show you here but like creating super rich media reports such as this one you can see how interactive it can be and how beautiful and you can make it as complex as you want as you can see here with many cells in the grid and animations, etc, etc weights and biases are free for personal use and academics So check out the link down below to support this channel and now let's get back into the video So, let's see which papers I'm covering. So the first one is What language model to train if you have 1 million GPU hours coming from the big science event and they managed to train a 176 billion parameter large language model, which is huge. 
That's the biggest currently open sourced LLM then we have the opt model for meta so open pre trained transformer large language models, which was 175 billion parameters and here I'll mostly focus on some of the struggles they were experiencing will so that we'll see those a bit later And finally, the third people will be GPT neo X 20 B and open source or regressive language model from the illu theory community, okay So before we go there, let me get kind of quickly show you the the glimpse of what we'll see so I'll be walking you through some of the chronicles that opt folks as well as the bloom folks shared So here is one very interesting diagram where you can see on the wire Because you see the perplexity of the language model So basically the loss and you can see the the part I want to I want you to focus on is on the gaps here And so you can see that like for in this duration of like a month or something. They had these huge like Gaps where the model could not train because of our idea hardware hardware failure or software A box or whatnot. You can also see a bunch of spikes You can see how these are kind of almost almost don't don't don't like for my like a smooth curve And that's because they were oftentimes they had to obviously start from from a previous checkpoint And so there was a lot of struggle in there. Just looking at this graph. It's kind of clear And one interesting detail here as well is like the cluster deletion So that you managed so their cloud provider managed to delete their cluster in the middle of the run And so yeah, those are some of the interesting nuggets. We'll see a bit later Okay, having said that let's get back to the papers I'm gonna quickly go through all of those the reason I'm covering these papers is because over the next videos I'll probably do a code walkthroughs of some of these projects. So stay tuned for that So here I'm gonna give you some context and theoretical background Okay. So first of all, we can see here That the bloom folks come up with the same types of scaling laws here So we have the loss on the y-axis and we have the compute in the petaflop days on the x-axis And we can basically see the very similar curves to what Kaplan at all from the scaling loss paper from open AI Kind of already showed us petaflop days by the way stands for so one petaflop day is sustaining one petaflop of compute for a whole day So that that many operations, okay, so Let me kind of go through here. This is gonna be Basically a recurring theme Everyone all of these three papers literally took inspiration from the GPT-3 model So you can see here they say we follow the architecture and hyper parameters of GPT-3 model That's going to be like a commonality between all of these three papers So let's continue here They say we base this model size on Kaplan at all. So that's the the scaling loss paper However, after this paper was accepted for publication, but before it came out Huffman at all provided an alternative approach for selecting the model size. So that's the chinchilla paper from DeepMind I'm going to quickly just show you the the graphs for those of you who are not familiar with that with the scaling loss paper I strongly recommend you at least skim it. It's a very interesting finding and it's also very empirical and very It's over fitted to a particular type of a transformer. 
So here you can see what the the basic like postulate behind the scaling laws is is if you you can see on the x-axis if we increase compute so here on the on the on this left diagram here we have compute and then we have data set size here and finally we have Number of parameters. So if you increase either three of those You can see that the loss on the y-axis here the test loss is decreasing and we don't see any sign of situation here So that's the that's that was the paper that kind of showed us that we just need to scale the models And we'll get more and more performance now. It's a completely different topic and problematic whether like having a smaller loss Like perplexity actually correlates with the metrics we actually care about but that turns usually to be the case but yeah, okay, so one thing that the this paper the scaling laws paper showed is This one so they say that the the performance penalty depends predictably on the ratio You can see this one here meaning that every time we increase the model size Atex we need we only need to increase the data but roughly five times to avoid a penalty So basically to avoid the over fitting and so this is in starting point Contrast with the chinchilla paper. So what I say here is that you basically need to Increase the number of tokens of your training data set much slower According to this relationship you can see on the screen According to this formula here compared to the number of parameters and then the chinchilla paper actually showed a completely different Finding and that's that so they say specifically given a 10x increase in computational budget They suggest that the size of the model should increase 5.5 By day they refer to the kaplan paper while the number of training tokens should only increase 1.8x Instead we find that the model size so this is the important finding of this paper the model size and the number of training Tokens should be scaled in equal proportions, okay So the problem is there is um, no no strong theoretical underpinning Behind these and and like who knows how how these uh laws will and whether they will be valid for for different Configurations of transformers if you change something if you start using I don't know maybe a libby instead of relative positional encodings Maybe then the laws change and we are not sure how sensitive they are Um, I might be wrong. Maybe the chinchilla paper has like some stronger theoretical insights, but that's i'm recalling from my memory Okay, so let's get back to the paper here So the first thing they show is that obviously it's very important to have high quality data set and by high quality They mean like so so having these diverse cross-domain high quality data Such as the pile data set so that's the data set that we have here And we'll see that their paper a bit later And you can see that using the the pile compared to these other data sets such as the oscar or c4 Gives them basically better better accuracy on on these various nlp tasks. You can see that the blue model here Is on pair and even a bit better compared to the eluthorized paper The gpt neo paper is the same as the The gpt neo x20b and you can also see that basically the the open eyes models to outperform them by by some amount So that's that's um worth noticing Okay, guys, so let's continue here Let's see. What else is interesting in this paper? 
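The contrast between the two scaling prescriptions quoted above can be made explicit with a tiny calculation: Kaplan et al. put roughly 5.5x into parameters and 1.8x into tokens for every 10x of compute, while Chinchilla splits the growth evenly. The exponents below are simply back-derived from those ratios, so treat them as an approximation.

```python
import math

def scale_allocation(compute_factor, param_exp, token_exp):
    """How much to grow model size and token count when compute grows by compute_factor."""
    return compute_factor ** param_exp, compute_factor ** token_exp

# Kaplan et al.: 10x compute -> ~5.5x parameters, ~1.8x tokens
# => exponents of roughly log10(5.5) ~= 0.74 and log10(1.8) ~= 0.26.
kaplan = scale_allocation(10, math.log10(5.5), math.log10(1.8))

# Hoffmann et al. (Chinchilla): parameters and tokens grow in equal proportions
# => exponents of 0.5 each, i.e. ~3.16x for both at 10x compute.
chinchilla = scale_allocation(10, 0.5, 0.5)

print(f"Kaplan:     {kaplan[0]:.2f}x params, {kaplan[1]:.2f}x tokens")
print(f"Chinchilla: {chinchilla[0]:.2f}x params, {chinchilla[1]:.2f}x tokens")
```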
Um, so as I said diverse cross-domain pre-training data combining web calls with high With curated data and with high quality data So as I said diverse cross-domain pre-training data combining web calls with high with curated high quality sources significantly proves zero shot Generalization over pre-trained data sets constructed from common crawl only Okay, that's one finding the second finding they had was that although learned positional embeddings help perform rotary embeddings Alibi yields significant better results than all the other alternatives now This might be as well very very specific to their particular set of high parameters in architecture I wouldn't like take anything fundamental out of this statement But um, yeah, i'm just kind of walking you through some of the findings behind behind the papers, by the way I did cover the alibi paper I think like eight months ago or something when it came out So do check it out if you're curious to know more and we'll see what the rotary embeddings if you're not familiar Um, what those are? Um a bit later in the in the gpt neo x paper Okay, some of the other findings they had was they tried a bunch of different activation functions And then they say here we present our results in table three bubble us swigloo produces slightly better results than jellu However, this comes at a cost of reducing the throughput by approximately a third So I think they ultimately decided not to use it because of the throughput Okay So that's that's one more finding and then they they they mention how the and we already know this basically, it very it matters a lot where you put the Uh layer normalization and how many layer normalization layers do you include in your transformer blocks and and they say here adding layer Normalization after the embedding layer incurs a significant penalty on zero shot generalization But the thing is when you scale the model to very big sizes Uh, then this thing helps stabilize the training Uh, and when you're on the when you're dealing with lower scales Uh in that case, you probably don't want to include it because you'll be sacrificing as you can see here that the zero shot generalization Okay, guys, let's continue here One thing that I kind of noted to myself as well. I was kind of This is a curious finding. Uh, so they say instead of manually writing So this is about the multilingual, uh, basically evaluation So trying to evaluate on multiple different human languages And they say instead of manually writing prompts for each language, which would obviously be very bothersome We followed the strategy proposed by linetal using english prompts for non-english examples This can be viewed as cross-lingual zero shot generalization Uh, they validated this strategy by demonstrating its ability to achieve zero shot performance on pair With and sometimes even better than human written language specific prompts. This is kind of cool. 
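Since ALiBi comes up as the best-performing positional scheme in these ablations, here is a minimal sketch of the bias it adds to the attention scores: each head gets a slope from a geometric sequence and penalizes attention linearly with distance into the past. This follows my reading of the ALiBi paper (slopes 2^(-8/n), 2^(-16/n), ... for n heads), not BLOOM's actual implementation.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear biases added to QK^T attention scores (simplified ALiBi sketch).

    Assumes num_heads is a power of two; head i gets slope 2^(-8*(i+1)/num_heads)."""
    slopes = torch.tensor([2.0 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)])
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)   # i - j for past keys, 0 otherwise
    # Penalty grows linearly the further back the key is; combine with a causal mask in practice.
    return -slopes[:, None, None] * distance[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=5)   # shape: (heads, seq, seq), added to attention scores
print(bias[0])
```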
So you literally take a uh, um, you you have data sets with various different languages and you always take the english prompt and then you concatenate the the Uh, the answer in that different language you repeat it a couple of times and then you evaluate and it's and it's apparently the model Like learn to to understand, uh this which kind of makes sense because you're training the disjoint Uh, like a space of multi disjoint multilingual space Uh when you're training transformer on on on diverse data sets And we always always know that the machine translation kind of emerges as a consequence of this Next token prediction, uh, if you give it enough big data sets with with some presence of different languages So thing worth mentioning and this is something we're probably all aware of at this point And that's that the multilingual pre-training Very significantly diminishes english zero shot generalization. So the thing with this bloom model is that they trained Uh, this is uh, like a multilingual model and because of that it's not as strong Uh on on on english as as uh, basically some of the other baselines Okay, let's continue here. Um A couple things worth mentioning, this is how they've chosen the final configuration of the bloom 176 billion parameter model So they they reference here the the the the scaling loss paper It is notable that this pareto optimal frontier describes very large models trained on few tokens We call this training to optimality So that's basically what I said before basically you want to the the scaling loss paper show that you want to increase the parameters more Than the number of tokens of your data set and then they say this is in stark contrast with the common practice of training much smaller Models on many more tokens to convergence. Okay And now because they're training They have a couple of arguments here But one of them is that they're training on a multilingual data set and because of that They they don't want to strictly follow the the scaling loss. And so this is what they do So we choose to use the optimality front as an upper bound on model size So that's like the biggest model size There will be determined by the scaling loss paper and the lower lower bound on number of training tokens So they'll probably have smaller model and more tokens Okay compared to what the scaling loss paper suggests the second very influential paper that had impact on on the Like modeling decisions in this paper is this levin et al paper So they proposed a theoretically motivated and empirically backed law describing the optimal compromise compromise between width and height Within depth, sorry. So for a gpt3 size model with 175 billion parameters They predict an ideal depth of 80 layers. Okay so now it's just basically Like combining lego bricks you have some constraints from the scaling loss paper You have some constraints from this paper and then given the compute they have and that's Roughly 1 million gp hours on a 100 machines They decide on the on the on the on the hyper parameters of the model. So let's let's see what those are So they they lose this many tokens as you can see here 70 to 80 layers because of this paper and Et cetera, et cetera. So finally they end up with three best configurations And the one they pick at the end is this one here. So they they pick this one with 176 billion parameters 70 layers this many hidden dimensions attention hats Dimensionality of attention hats. This is the memory footprint and the performance. 
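A quick sanity check shows that the chosen configuration (70 layers, 14336 hidden size, as read off the config in the next section) indeed lands near 176B parameters with the standard ~12·L·d² per-block approximation. The ~250k vocabulary size used for the embedding term is an assumption about BLOOM's multilingual tokenizer.

```python
# Back-of-the-envelope parameter count for the quoted BLOOM config.
# Per transformer block: ~4*d^2 (attention) + ~8*d^2 (4x-wide MLP) ~= 12*d^2 parameters.
n_layers = 70
d_model = 14336
vocab_size = 250_000          # assumption: rough size of BLOOM's multilingual vocabulary

block_params = 12 * d_model ** 2
embedding_params = vocab_size * d_model

total = n_layers * block_params + embedding_params
print(f"~{total / 1e9:.0f}B parameters")   # lands close to the quoted 176B
```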
Okay, so that's pretty much it That's the first paper. Let me get kind of show you these numbers side by side in the actual code base And then we'll switch to the second paper. Okay guys, here it is And as I said, i'll i'll do a walkthrough of of the code base. So basically don't worry I'll i'll show you this code in much more depth later on Uh, I just want to kind of briefly show you so this is the big science workshop Repo and you can see the configuration for the 176 billion model And you can see here the numbers are exactly what you see here in the table. So we have 176 layers so you can see that somewhere here. Let me try Nope, sorry, that's the number of uh, that's the number of parameters. Uh, so we have the 70 layers So here is the the variable for 70 layers. We have 14336 you can see that number of hidden is is exactly that we have number of hats 112 etc, etc. So basically yeah, this paper shows you how how they've Decided to to pick the uh, all of the hyper parameters of the model Some of that was inspired by the gpt3 paper. Some of that was inspired by the scaling loss paper. And finally, uh, the The levin et al paper had a had an impact as well on the on the width and depth now I'm gonna quickly walk you through the chronicles they created So basically, uh, i'm gonna start here So first of all one more thing that's kind of common across all of the three papers is that all of them are using the nvdm Megatron library that helps you do model parallelism do check out my scaling Uh, like ultimate guide to scaling video where I cover a couple of these uh influential scaling papers including megatron The second thing they they they all have in common is the deep speed library Again, I did cover that one in my previous video So do check it out and the bloom folks because they had 384 gpus at their disposal basically decided to to Arrange those gpus in this 3d Fashion so the so-called 3d parallelism where one axis is basically pipeline parallelism Where you break the model such that you take a couple of first layers you put them on one gpu Then you take the next uh end layers you put them on the second one, etc, etc Then you have the model parallelism where you break the actual layers uh And then kind of take the weights of that layer and put one part on one gpu the second part on the second one Etc, etc And the third dimension is the the data parallel dimension here where you basically replicate Your models across different gpus and then you feed them different portions of your of your input patch Okay, next up. I want to walk you through some of the chronicles. Uh, they've shared So there is this document called lessons learned. I link the repo down in the description So you can check it out You can go I strongly encourage you to go through uh, at least skim through all of these documents I think it's a valuable learning experience. So a couple of things so they have this title how training divergences were Overcome so how they overcame the the divergences one of the the methods was to just have a different initialization method And it literally means so much like I think they mentioned it somewhere So it has made a huge difference to the training stability and not only stability but the the final Uh actual loss you you converge to and I remember back from my days when I trained the transformer model So I had the original transformer implementation right here and uh Initialization blah blah. I remember this this diagram here. 
So here's what I said here So initialization matters a lot for transformers. I initially thought that the other implementation using savir Uh initialization is again one of those arbitrary heuristics and the pytorch default in it will do I was wrong You can see here three curves that these two are using pytorch default and this one is using the xavier initialization So that was the moment for me when I realized how sensitive the models are to initialization and how vital that is Okay getting back to the lessons learned document here adding embedding layer norms, so Hopefully we're all aware of this basically the position whether it's before there is a jollo connection or after How many of them do you include etc? etc All of that matters a lot for the final stability and finally patience So in some cases in so they say here in some cases in the case of a huge spike It was taking 2k iterations for training to return to the same Loss it spiked from and then it continued training as if nothing happened So sometimes in order to overcome the spike you just have to wait Uh, but more often than not the training won't recover from a spike Yet in another situation as the training diverged slowly without any spikes. Okay, so let me show you the spikes so here is the Uh, this is probably the most interesting chronicle i've seen so the the the way they've trained the 104 billion parameter model They had the most problems with this model and then actually with the 176b model It was the training was much more stable and there are many reasons why that was first of all They I guess they learned a lot during during this this whole trial and then the second thing is I think like they use b Float so the brain float format instead of the the the the usual float 16 and all of that kind of brought the additional stability So I just want to kind of quickly skim through Some of the diagrams here you can see that the problems they had with spikes So during the training, uh, the spike would appear and then the training would diverge And they had a lot of experiments. They tried a bunch of things to try to to kind of cope with those spikes But they kept getting the spikes. They kept getting the spikes like throughout many many like experiments and many many weeks they constantly they couldn't get the model to Uh to converge it was constantly diverging and then I think only at the end Did they start getting some better results after multiple months? Um, i'll show you the the the 176b model a bit later. So let me show you one more thing. See here This is interesting. So the they had two million backslash only samples in the data set um, so Hugh Negan, I don't know how to pronounce this surname Discovered that there are huge records with just backslashes in oscar English version so we set out to look at their occurrence And so what happens when when your model basically encounters this sequence? 
Uh, it can produce the the the spiking behavior the divergence So all of these details are this is the this is the messy reality of training of training large language models And it's very nice that they've shared this Okay, I did mention that the 176b was much more stable They literally say here what makes the the model so so stable and they compare so here is the 104 Uh billion parameter model you can see Uh bunch of curves, uh superposition here and you can see the the the 176b So it's literally like a smooth curve converging and the loss is lowering as expected So, uh, here is the kind of the overlapped image and then they say it's a combination of probably a few of these improvements It's kind of hard to pinpoint a particular thing, but I I have a strong hunch that the bfloat Uh format is what had a very very significant impact here. So very clean data. They have better data set for the for the big model They had the bfloat, uh mixed precision regime The fp32 accumulation for the pipeline blah blah blah very low initialization Word embedding layer norm But it was also using okay, so then I guess that doesn't doesn't explain it Uh, okay, that's guys. That's pretty much it They also have this document where they show, uh how they decided to To pick the parameters of the 3d parallelism. So basically they decide how they decided on on picking the tensor parallelism Parameter the pipeline parallelism the data parallelism all of those details so you can kind of walk through this document at your own pace It's too long for me to kind of extract any any particular insight here But that's pretty much it. Okay guys, so now let's go to the second paper I'm gonna open up the um The opt paper and let's start from the beginning here. So open pretrained transformer language models So these guys had a lot of problems training their models like reading through the the the logs was was such a such a joy So basically let me kind of show you a couple of snippets from a huge document They have like there is like 140 pages or something of their logs Internal logs, that's very cool. They shared those so here here are a couple of those once we identify the bad node We need to document these lists of things to complain to csp and get our money back. Give us our money back you bastards Okay, it looks like 26 tried to immediately upload the checkpoint and failed its cp commands Then it took another step lowered its scalar and tried uploading again and again the humanity We are already at loss scale 0.25 and so on and so on like oh hallelujah Things looked good, but nope just kind of complete confusion So I again encourage you to just kind of skim through the the log book. It's a very interesting read Okay, let's start with this one First of all our models and hyper parameters largely follow brown at all So that's the gpt3 paper again with variations in batch size mostly to obtain increased computational efficiency So this is something I did mention in the previous paper as well Oops, so this is kind of a common thread for all of these three papers They all follow the gpt3 hyper parameters and and uh advices Next up they say we found that the pile was particularly full of duplicate documents and advise future researchers using the pile to perform additional deduplication processing this is in particular relevant to the next paper i'll cover from eluthor ai Because they did use pile and they did not duplicate it because of the lack of resources And they also which is very interesting. 
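The bfloat16 hunch is easy to motivate numerically: bf16 keeps float32's exponent range (at the cost of mantissa precision), so values that overflow fp16 and crash the loss scaler stay finite. A quick check in PyTorch:

```python
import torch

fp16 = torch.finfo(torch.float16)
bf16 = torch.finfo(torch.bfloat16)

# bf16 trades mantissa bits for fp32's exponent range, so large activations/gradients
# that overflow fp16 (max ~65504) remain representable.
print(f"fp16 max: {fp16.max:.3e}, bf16 max: {bf16.max:.3e}")

x = torch.tensor([70000.0])
print(x.to(torch.float16))    # overflows to inf
print(x.to(torch.bfloat16))   # finite (rounded to a nearby representable value)
```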
They've done more than a single epoch, which is the common practice training lms But we'll get there in a couple of minutes a couple of interesting curves again, um, you can see Iterations on the x-axis learning rate on the y-axis. You can see that this was fairly ad hoc like basically the model would crash They would retrieve the the older checkpoint and then lower the low the the learning rate such that in order to avoid Try to avoid the instabilities and as a consequence of that you get this piecewise weird graph here Okay, you can also see the the validation perplexity Has a lot of these bumps and uh, it looks very funny Uh, so here is the part, uh, that's uh joy to read Basically a lot of a lot of problems during during the training So the the thing to mention in metals defense is that they literally had only a couple of months to do this It was a very tight deadline and so yeah, they didn't have the time to to do this properly But here is a couple of insights from from the paper. So in total hardware failures contributed to at least 35 manual restarts and the cycling of over 100 hosts over the course of two months Given the difference between the number of hosts cycled out and the number of manual restarts We estimate 70 plus automatic restarts due to hardware failures. I guess everything that will that can break will break the marfais law thing Okay, we noticed a correlation between loss divergence our dynamic loss scalar crashing to zero and the l2 norm of the activations Of the final layer spiking these observations led us to pick restart points for which our dynamic loss scalar was still in a healthy state So bigger or equal than one and after which our activation norms would trend downward instead of growing Unboundedly, so this is just some detail about how they were handling the um, how What was the heuristic they were using to to pick the best checkpoint in order to avoid the the future spikes? Okay, guys, uh, let's continue here. So the results here as you can see on 14 nlp tasks averaged for the zero shot Evaluation their own pair with gpt. That's very cool on the other side for the multi-shot Evaluation where it does like one or 32 shots. You can see that the uh, the dashed line the gpt model did outperform Them on on on those tasks. So that means It's kind of better few shot learner than the op model and um, yeah They say that the trend follows the gpt3. That's what what I said So let's continue here a couple of more interesting details. First of all, um, hate speech detection. They showed that um That the 175b model the op model considerably outperforms da vinci in all settings So that's very cool And then you think about it and this is all with the Double edged sword. So if it's better at detection, that means it's probably more susceptible to generating toxic And hateful speech so and that's indeed the case. So let me show you that here And they mention it. So overall we see that op 175b has a higher toxicity rate than either palm or da vinci da vinci being the the gpt3 model That's the code name. Okay, so let's continue here Da vinci being the the gpt3 model. That's the code name. Okay, you can see here Prompt toxicity probability toxicity probability of continuation. You can see that the blue curve the op model has the highest toxicity Uh for for for across across the the skip across the whole axis here Okay, guys, um I think that's pretty much it They they kind of summarize saying that we still believe this technology is premature for commercial deployment. 
And yeah, I mean, uh understanding how these models work, uh, Just interpreting what's going on and why the model is up but in certain results all of that We are still in the early days We are much better at at getting raw numbers on benchmarks and then knowing how to handle and align These models with human preferences, but it's a topic for for a different video. I think that's pretty much it. That's the second paper As I said, i'm just quickly giving you Uh the the the overview here and then in the next videos i'll be doing the coding Series, okay. Um, yeah before this let's let me show you the the opt logbook So here's the the big logbook I mentioned this is the this contains a bunch of notes That they were kind of accumulating throughout this period of a couple months I encourage you to check it out at your own pace. You can skim it. It literally took me maybe 20 30 minutes to just skim through this and kind of extract some of the insights Okay, so I did show you this diagram in the beginning of the video So again, you can see the torque or the number of gaps here and how many failures they were experiencing So we saw the number 70 plus or something restarts. I don't want to count but yeah, this might be At least in that range. Okay. Finally, let's go to their final update Uh, they have a couple of things like some some insights here Um, let me pick a couple of them. So scaling to 1024 a 100 to handle a real Workload of this size is highly non-trivial. We will discuss infrastructure pain points below Okay, ensuring training converges at this scale is also highly non-trivial without sufficient ablations at medium scale Results obtained from training at small scale also do not necessarily hold when scaled up We will cover these learnings in a note to be released in the upcoming weeks. Okay So let me quickly read you the cluster deletion story. That's kind of funny So given the holidays we requested a pool of 12 buffer nodes to guard against hardware failures Two machines go down every day on average in the process of replenishing this pool The cloud provider support team accidentally deleted our entire cluster on december 21st First while the cluster was restored fairly quickly it unfortunately came back with 16 machines that did not pass our infrastructure checks Etc, etc. So that's one of the interesting parts Second interesting diagram here is this one. They showed that they didn't quite match the the loss Of the gpt3 model, but we did see that on the on various nlp tasks the zero shot performance of the model did reach gpt3 so that's again the the the discrepancy between Um, like the loss value the final loss value and the actual performance might not always be that clear Although we did see that we did saw that the gpt3 handles much better the few shot prompts And that might be because of lower of lower loss here again just a speculation, but that's usually the case That's it guys again encourage you to go through it at your own pace I'm going to continue on and explore the gpt neo x20b paper So an open source autoregressive language model And let's see what the insights here are Okay, so we trained on a data set that contains duplicated data for more than one epoch But see no evidence of performance loss While these authors claims that the few shot prompting doesn't improve performance on their task We find that this is actually a phenomenon unique to gpt3 And does not apply to either gpt neo x20b or fair sec models. 
Okay, so that's just a note that it's dangerous to take a single model and extrapolate from there, because other models might have different behavior. We'll see later in this paper that the probable explanation is basically the dataset: the underlying dataset they used to train GPT-NeoX-20B, i.e. the Pile, is of different quality and distribution compared to the datasets that were used for GPT-3, and for GPT-3 we don't even know what data was used. So yeah.

Okay, so again they mention here that they largely followed GPT-3: a decoder model "whose architecture largely follows that of GPT-3, with a few notable deviations described below." So that's again the common thread I mentioned multiple times, but this paper did have many more modifications compared to the previous ones we saw. Okay. Before this paper was published and the weights were released, they had a couple of iterations behind them: one of them was GPT-Neo, and the second one was GPT-J, which was basically supported by the EleutherAI community but executed by Ben Wang and Komatsuzaki. That's worth mentioning, because the architecture of this paper closely follows GPT-J.

So let's see what the diff is between GPT-J and this model compared to GPT-3. First of all, they use rotary embeddings. I will not get into a lot of detail about how these work, but here is the classical formulation of attention: you can see the formula here, we take a token and map it into the query (so this is going to be the query), we map this other one into the key, and then we do the dot product, blah blah blah. The rest of the notation is kind of weird, because it shouldn't be a summation: you want to keep these scores in separate spatial locations, you don't want to sum them up. Looking at it, it's not even correct as written; you cannot just sum up these scores.

Let me just show the diff here. The difference with rotary embeddings is that you have this matrix R, which basically rotates your intermediate representation. So here's roughly the idea: let's say we have the text "Enhanced Transformer with Rotary Position Embeddings" here, and each of these words has a corresponding vector, either a key or a query vector. What they do is chunk this vector into d over 2 groups, which means that each group is two-dimensional, and once you have these chunks, you literally rotate each one depending on the position and on the group index. What I mean by that is: this is group one, group two, up to group d over 2. Because this is group one, you apply the theta one angle, and because it's at position m, you multiply the angle by m, and that's how you rotate. Then you get the output representation here, and that's the final representation. So that's the modification, and it happens precisely here.
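If it helps, here is a minimal NumPy sketch of that idea: split a vector into d/2 two-dimensional groups and rotate group i at position m by the angle m * theta_i. This is just my own illustration of rotary embeddings, not the actual GPT-NeoX/Megatron implementation; real implementations differ in how they pair up coordinates and fuse this into the attention computation.

```python
import numpy as np

def rotary_embed(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply a RoPE-style rotation to x of shape (seq_len, d), with d even."""
    seq_len, d = x.shape
    assert d % 2 == 0, "dimension must be even so we can form d/2 two-dimensional groups"
    # One rotation frequency theta_i per two-dimensional group.
    theta = base ** (-2.0 * np.arange(d // 2) / d)        # (d/2,)
    angles = positions[:, None] * theta[None, :]          # (seq_len, d/2): angle = m * theta_i
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                       # the two coordinates of each group
    # 2D rotation applied group-wise: [x1, x2] -> [x1*cos - x2*sin, x1*sin + x2*cos]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Toy usage: rotate key (or query) vectors for a 6-token sequence with d = 8.
keys = np.random.randn(6, 8)
rotated_keys = rotary_embed(keys, positions=np.arange(6))
print(rotated_keys.shape)  # (6, 8)
```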
Okay guys, so that's one of the modifications. The second one is this: they compute the attention and feed-forward layers in parallel and sum the results, rather than running them in series. That's a fairly well-known technique for increasing throughput, and it basically does not sacrifice much of the performance. They say here that "this led to a 15% throughput increase while having comparable loss curves with running them in series during early training," so I guess there is a minor penalty you're paying for getting the additional throughput.

Then they mention that due to an oversight in their code, they unintentionally apply two independent layer norms instead of using a tied layer norm the way these other authors do. So instead of computing a single shared layer norm, they actually decouple it into LN1 and LN2; it's just a bug they had in their code.

Okay, here's the infrastructure they are using, and there is a nice diagram explaining how the actual node setup looks: they train GPT-NeoX-20B on twelve Supermicro servers, each with eight NVIDIA A100 GPUs and configured with two CPUs. So here you can see the diagram (even though it's high level, it's still fairly complex): each group of four GPUs connects to a single CPU here, the other group is connected to the other CPU, and all of this together is what is commonly referred to as a node. So this whole thing here is called a node; you'll see that terminology a lot.

Okay guys, so again, a common thread: they opted to use the values from the GPT-3 paper to guide their choice of hyperparameters, and to achieve a higher training throughput they use the same batch size as OpenAI's 175B model, approximately 3.15 million tokens. They use AdamW together with the ZeRO optimizer. I mention that because I did cover ZeRO in my ultimate guide to scaling video; I'll link that video somewhere in the description if you're curious to know more about ZeRO. But the TL;DR is: instead of replicating the model states and the residual states, so the gradients, the optimizer states, the weights, etc., across devices, you just partition them, and that's how you achieve the zero redundancy; that's what ZeRO stands for.

Finally, a note before we see the results: they say that when comparing results in this work to GPT-3, the training data is almost certainly the biggest known unknown factor. And yeah, data is becoming the name of the game, pretty much.

A couple more differences compared to GPT-3 concern the tokenizer. They use a BPE-based tokenizer similar to that used in GPT-2, with roughly the same total vocabulary size of around 50,000, and with three major changes. First, they train a new BPE tokenizer based on the Pile, which makes sense: because they're training on the Pile, they want to optimize the tokenizer for that particular data. Second, in contrast to the GPT-2 tokenizer, which treats tokenization at the start of a string as a non-space-delimited token, the GPT-NeoX-20B tokenizer applies consistent space delimitation regardless. And third, their tokenizer contains tokens for repeated spaces, all positive integer amounts of repeated spaces up to and including 24. I'll put a small sketch of the parallel attention and feed-forward trick right below, and then we'll get back to the tokenizer.
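Here is that parallel attention plus feed-forward idea as a minimal PyTorch sketch. This is just my own illustration of the residual structure (no causal masking, no rotary embeddings, and not the actual GPT-NeoX/Megatron code), and it uses two independent layer norms to mirror the untied-layer-norm detail mentioned above.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.ln_attn = nn.LayerNorm(d_model)  # independent layer norm #1
        self.ln_mlp = nn.LayerNorm(d_model)   # independent layer norm #2 (the "untied" one)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Serial (GPT-3 style) would be:
        #   x = x + attn(ln(x)); x = x + mlp(ln(x))
        # Parallel (GPT-J / GPT-NeoX style): both sub-layers read the same residual
        # stream, and their outputs are summed in one residual update, which is
        # friendlier for throughput.
        normed = self.ln_attn(x)
        a, _ = self.attn(normed, normed, normed, need_weights=False)
        return x + a + self.mlp(self.ln_mlp(x))

x = torch.randn(2, 10, 64)       # (batch, seq_len, d_model)
print(ParallelBlock()(x).shape)  # torch.Size([2, 10, 64])
```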
Okay, back to the tokenizer. That description is kind of a mouthful, just some idiosyncrasies of tokenizers, but let me show you a graphical representation of what's going on. Here is how the GPT-2 tokenizer would tokenize this piece of text, this snippet of code, and here is how GPT-NeoX-20B would tokenize the same text. This is what I mentioned about the spaces: you can see that all of these spaces here are encoded as a single token, similarly here, etc., up to 24 spaces; literally until you get to 24 repeated spaces it's a single token, at least that's how I understood this part. Whereas on the other side, you can see that GPT-2 literally encodes each of these spaces as a separate token, and that bloats the number of tokens to 55 here, compared to 39 for the tokenizer in this paper.

Okay guys, I mentioned this a couple of times: "We see no drop in test validation loss after crossing the one-epoch boundary." Here are some of the diagrams they have. It's a common practice to always train for only a single epoch, I guess partially because you have so much data that going for multiple epochs would be super compute-intensive, so people almost never go beyond a single epoch. And there are some papers showing that the model starts memorizing very quickly if you go for multiple epochs. So people usually just do a single epoch. In this paper they literally continued training and they didn't see any signs of the validation loss starting to diverge or anything, so they just continued. And yeah, I guess the common wisdom should be questioned, because that wisdom is not based on any solid theoretical understanding of why this works; large language model training is pretty much dark magic and alchemy, as many refer to it. So I think it's completely legit to do something like this if it's giving you better performance. Why not?

Okay, so let's continue and wrap up this paper. Let me show you the results they get. This is the zero-shot performance of GPT-NeoX compared to GPT-J, FairSeq and OpenAI's models. You can see it's pretty much on par with them, but it really starts shining once you use it on mathematical datasets and these arithmetic tasks, which is, I guess, partially because a significant portion of the Pile (not a huge part, but a significant portion) consists of scientific sources such as arXiv papers, etc. So because of the dataset, it seems that this model performs much better compared to the other baselines: you can see the orange dashed line here is outperforming all of the other baselines. If I'm not wrong, I think their few-shot performance, not only on these datasets but in general, is also better compared to the GPT models, but I don't see the diagrams here.
This is actually the one I'm referring to: this is the five-shot evaluation. You can see that, again, GPT-NeoX is much better when you give it multiple shots, so it's apparently a better few-shot learner compared to the GPT model, FairSeq, etc.

And a final note here: they opted to choose hyperparameters based on a mixture of experiments at smaller scales and by interpolating parameters appropriate for their model size based on previously published work. However, they say, "several aspects of both our model architecture and training setup, including the data and the tokenizer, diverged significantly from GPT-3. As such, it is almost certainly the case that the hyperparameters used for our model are no longer optimal, and potentially never were." In the appendix they go on to give an argument for why it's a good idea to release the model weights, because back at the time when they released this model, it was the biggest openly available model out there (now we have BLOOM, now we have OPT). It was a very important contribution to the community and just a very valuable thing for researchers and people who are not as well financed as some of the other research labs.

And that's it guys. Those are the three papers I wanted to cover: so again, the GPT-NeoX-20B, OPT and BLOOM papers. We saw a couple of commonalities between them. They were all following the GPT-3 hyperparameters and recommendations; they were all using pretty much both DeepSpeed and Megatron, except for the OPT paper, which was using only the Megatron library, to scale up these models and train them, which is a completely non-trivial thing to do. Most importantly, the value of these projects is that they've shared the code as well as the process as well as the weights, which is a huge contribution to the community, so kudos for that.

Guys, if you liked this video, subscribe to this channel, share it out with your friends, and expect new coding videos coming up where I'll be walking you through some of the code bases behind these papers. So stay tuned for that, and until next time, bye bye.
[{"start": 0.0, "end": 5.7, "text": " What's up guys in this video I'll be covering three very famous large language model papers"}, {"start": 6.18, "end": 11.92, "text": " And all of them have in common that they've publicly shared both the code as well as the weights"}, {"start": 12.0, "end": 14.5, "text": " So the idea will be to kind of show you"}, {"start": 15.08, "end": 22.56, "text": " Some of the connections the commonalities between them will not be digging into the actual technical details that much that much because it's basically just"}, {"start": 22.56, "end": 27.52, "text": " A transformer architecture and I already assume most of you know how hot transformers work"}, {"start": 27.52, "end": 33.32, "text": " If you don't know you can check out my video. I've covered a long time ago on how transformers work"}, {"start": 33.32, "end": 40.64, "text": " But before we go there this video is sponsored by weights and biases weights and biases is an amazing machine learning platform that helps you"}, {"start": 40.64, "end": 44.16, "text": " Track your experiments do data set versioning and model management"}, {"start": 44.16, "end": 47.04, "text": " You can see how easy it is to get started with weights and biases"}, {"start": 47.239999999999995, "end": 48.82, "text": " They are framework agnostics"}, {"start": 48.82, "end": 54.32, "text": " So that means you can you can either pick pytorch keras or whatever you want and it's only a couple of lines of code"}, {"start": 54.32, "end": 59.28, "text": " so you import the weights and biases after installing it by doing pip install and then if"}, {"start": 59.6, "end": 61.480000000000004, "text": " After initialize the project here"}, {"start": 61.480000000000004, "end": 66.32, "text": " It will start automatically logging both the configs as well as various system metrics such as GPU utilization"}, {"start": 66.44, "end": 72.48, "text": " Etc etc and if you instruct it to do so you can obviously log curves images videos and much more"}, {"start": 72.8, "end": 77.48, "text": " Here is an example for run that I want to show you quickly. 
That's very representative"}, {"start": 77.48, "end": 81.56, "text": " So here are some of the system metrics we are logging such as GPU temperature memory location"}, {"start": 81.56, "end": 85.2, "text": " Etc if you instruct it to do so you can log obviously losses"}, {"start": 85.84, "end": 92.72, "text": " validation loss is accuracy whatever you want and finally in this example here you can see a hyper parameter sweep optimization experiment"}, {"start": 93.2, "end": 96.32000000000001, "text": " Where we can for example select those runs?"}, {"start": 96.4, "end": 103.52000000000001, "text": " They have very low loss and then we can draw some conclusions such as we can see here that higher epochs tend to perform best and"}, {"start": 103.52000000000001, "end": 106.64, "text": " Then lower learning rate tends to help etc etc"}, {"start": 106.64, "end": 112.36, "text": " You can additionally log videos not only images and even more structured"}, {"start": 113.04, "end": 117.92, "text": " Information and data such as tables and here you can see this is not just like a your dummy UI"}, {"start": 118.08, "end": 123.6, "text": " You can apply various filters such as for example show me those rows where the where the ground truth"}, {"start": 124.44, "end": 126.44, "text": " equals the actual"}, {"start": 127.16, "end": 134.92000000000002, "text": " Guess so that's our prediction and after you click apply here. You'll literally see filtered out exactly those data points"}, {"start": 134.92, "end": 138.64, "text": " So very cool. Finally, one of the features are really love and there are many more features"}, {"start": 138.64, "end": 143.27999999999997, "text": " I won't show you here but like creating super rich media"}, {"start": 144.11999999999998, "end": 150.6, "text": " reports such as this one you can see how interactive it can be and how beautiful and you can make it as complex as you want as"}, {"start": 150.6, "end": 153.0, "text": " you can see here with many cells in the grid and"}, {"start": 153.88, "end": 158.16, "text": " animations, etc, etc weights and biases are free for personal use and academics"}, {"start": 158.16, "end": 162.92, "text": " So check out the link down below to support this channel and now let's get back into the video"}, {"start": 162.92, "end": 165.28, "text": " So, let's see which papers I'm covering. So the first one is"}, {"start": 165.83999999999997, "end": 173.0, "text": " What language model to train if you have 1 million GPU hours coming from the big science event and they managed to train a"}, {"start": 174.23999999999998, "end": 180.95999999999998, "text": " 176 billion parameter large language model, which is huge. 
That's the biggest currently open sourced"}, {"start": 181.39999999999998, "end": 184.32, "text": " LLM then we have the opt model for meta"}, {"start": 185.0, "end": 189.35999999999999, "text": " so open pre trained transformer large language models, which was"}, {"start": 189.36, "end": 197.76000000000002, "text": " 175 billion parameters and here I'll mostly focus on some of the struggles they were experiencing will so that we'll see those a bit later"}, {"start": 198.48000000000002, "end": 206.0, "text": " And finally, the third people will be GPT neo X 20 B and open source or regressive language model from the illu theory"}, {"start": 206.72000000000003, "end": 208.56, "text": " community, okay"}, {"start": 208.56, "end": 214.0, "text": " So before we go there, let me get kind of quickly show you the the glimpse of what we'll see so"}, {"start": 214.0, "end": 219.32, "text": " I'll be walking you through some of the chronicles that opt folks as well as the bloom folks shared"}, {"start": 219.32, "end": 223.68, "text": " So here is one very interesting diagram where you can see on the wire"}, {"start": 223.68, "end": 226.52, "text": " Because you see the perplexity of the language model"}, {"start": 226.52, "end": 233.28, "text": " So basically the loss and you can see the the part I want to I want you to focus on is on the gaps here"}, {"start": 233.28, "end": 240.12, "text": " And so you can see that like for in this duration of like a month or something. They had these huge like"}, {"start": 240.12, "end": 245.4, "text": " Gaps where the model could not train because of our idea hardware hardware failure or software"}, {"start": 245.72, "end": 248.52, "text": " A box or whatnot. You can also see a bunch of spikes"}, {"start": 248.52, "end": 255.16, "text": " You can see how these are kind of almost almost don't don't don't like for my like a smooth curve"}, {"start": 255.16, "end": 261.88, "text": " And that's because they were oftentimes they had to obviously start from from a previous checkpoint"}, {"start": 262.68, "end": 266.52, "text": " And so there was a lot of struggle in there. Just looking at this graph. It's kind of clear"}, {"start": 266.52, "end": 270.28, "text": " And one interesting detail here as well is like the cluster deletion"}, {"start": 270.28, "end": 275.24, "text": " So that you managed so their cloud provider managed to delete their cluster in the middle of the run"}, {"start": 275.24, "end": 278.91999999999996, "text": " And so yeah, those are some of the interesting nuggets. We'll see a bit later"}, {"start": 278.91999999999996, "end": 281.32, "text": " Okay, having said that let's get back to the papers"}, {"start": 281.32, "end": 288.28, "text": " I'm gonna quickly go through all of those the reason I'm covering these papers is because over the next videos"}, {"start": 288.28, "end": 294.2, "text": " I'll probably do a code walkthroughs of some of these projects. So stay tuned for that"}, {"start": 294.2, "end": 298.2, "text": " So here I'm gonna give you some context and theoretical background"}, {"start": 299.0, "end": 301.47999999999996, "text": " Okay. 
So first of all, we can see here"}, {"start": 302.28, "end": 307.96, "text": " That the bloom folks come up with the same types of scaling laws here"}, {"start": 307.96, "end": 313.56, "text": " So we have the loss on the y-axis and we have the compute in the petaflop days on the x-axis"}, {"start": 313.56, "end": 321.48, "text": " And we can basically see the very similar curves to what Kaplan at all from the scaling loss paper from open AI"}, {"start": 321.48, "end": 332.36, "text": " Kind of already showed us petaflop days by the way stands for so one petaflop day is sustaining one petaflop of compute for a whole day"}, {"start": 332.36, "end": 335.72, "text": " So that that many operations, okay, so"}, {"start": 336.36, "end": 339.40000000000003, "text": " Let me kind of go through here. This is gonna be"}, {"start": 339.96000000000004, "end": 341.96000000000004, "text": " Basically a recurring theme"}, {"start": 342.52000000000004, "end": 347.48, "text": " Everyone all of these three papers literally took inspiration from the GPT-3 model"}, {"start": 347.48, "end": 354.44, "text": " So you can see here they say we follow the architecture and hyper parameters of GPT-3 model"}, {"start": 354.44, "end": 358.12, "text": " That's going to be like a commonality between all of these three papers"}, {"start": 358.92, "end": 360.92, "text": " So let's continue here"}, {"start": 361.48, "end": 366.92, "text": " They say we base this model size on Kaplan at all. So that's the the scaling loss paper"}, {"start": 366.92, "end": 370.84000000000003, "text": " However, after this paper was accepted for publication, but before it came out"}, {"start": 370.84, "end": 377.79999999999995, "text": " Huffman at all provided an alternative approach for selecting the model size. So that's the chinchilla paper from DeepMind"}, {"start": 377.79999999999995, "end": 383.15999999999997, "text": " I'm going to quickly just show you the the graphs for those of you who are not familiar with that with the scaling loss paper"}, {"start": 383.15999999999997, "end": 388.84, "text": " I strongly recommend you at least skim it. It's a very interesting finding and it's also very"}, {"start": 389.64, "end": 391.64, "text": " empirical and very"}, {"start": 392.2, "end": 397.55999999999995, "text": " It's over fitted to a particular type of a transformer. So here you can see what the the basic"}, {"start": 397.56, "end": 403.88, "text": " like postulate behind the scaling laws is is if you you can see on the x-axis if we increase compute"}, {"start": 403.88, "end": 410.2, "text": " so here on the on the on this left diagram here we have compute and then we have"}, {"start": 411.48, "end": 414.84000000000003, "text": " data set size here and finally we have"}, {"start": 415.88, "end": 419.64, "text": " Number of parameters. So if you increase either three of those"}, {"start": 420.2, "end": 424.04, "text": " You can see that the loss on the y-axis here the test loss is"}, {"start": 424.04, "end": 428.92, "text": " decreasing and we don't see any sign of situation here"}, {"start": 428.92, "end": 434.28000000000003, "text": " So that's the that's that was the paper that kind of showed us that we just need to scale the models"}, {"start": 434.92, "end": 440.28000000000003, "text": " And we'll get more and more performance now. 
It's a completely different topic and problematic whether"}, {"start": 441.64000000000004, "end": 443.64000000000004, "text": " like having a smaller loss"}, {"start": 444.36, "end": 450.44, "text": " Like perplexity actually correlates with the metrics we actually care about but that turns usually to be the case"}, {"start": 450.44, "end": 455.8, "text": " but yeah, okay, so one thing that the this paper the scaling laws paper showed is"}, {"start": 457.08, "end": 462.52, "text": " This one so they say that the the performance penalty depends predictably on the ratio"}, {"start": 463.08, "end": 467.08, "text": " You can see this one here meaning that every time we increase the model size"}, {"start": 467.72, "end": 472.84, "text": " Atex we need we only need to increase the data but roughly five times to avoid a penalty"}, {"start": 472.84, "end": 477.48, "text": " So basically to avoid the over fitting and so this is in starting point"}, {"start": 477.48, "end": 482.76, "text": " Contrast with the chinchilla paper. So what I say here is that you basically need to"}, {"start": 484.28000000000003, "end": 488.84000000000003, "text": " Increase the number of tokens of your training data set much slower"}, {"start": 489.40000000000003, "end": 492.12, "text": " According to this relationship you can see on the screen"}, {"start": 493.08000000000004, "end": 499.72, "text": " According to this formula here compared to the number of parameters and then the chinchilla paper actually showed a completely different"}, {"start": 499.72, "end": 504.68, "text": " Finding and that's that so they say specifically given a 10x increase in computational budget"}, {"start": 504.68, "end": 507.56, "text": " They suggest that the size of the model should increase"}, {"start": 508.28000000000003, "end": 509.40000000000003, "text": " 5.5"}, {"start": 509.40000000000003, "end": 514.84, "text": " By day they refer to the kaplan paper while the number of training tokens should only increase"}, {"start": 515.4, "end": 517.0, "text": " 1.8x"}, {"start": 517.0, "end": 521.8, "text": " Instead we find that the model size so this is the important finding of this paper the model size"}, {"start": 522.28, "end": 524.28, "text": " and the number of training"}, {"start": 524.84, "end": 527.32, "text": " Tokens should be scaled in equal"}, {"start": 528.36, "end": 530.2, "text": " proportions, okay"}, {"start": 530.2, "end": 535.8000000000001, "text": " So the problem is there is um, no no strong theoretical underpinning"}, {"start": 536.5200000000001, "end": 543.72, "text": " Behind these and and like who knows how how these uh laws will and whether they will be valid for for different"}, {"start": 544.2800000000001, "end": 551.1600000000001, "text": " Configurations of transformers if you change something if you start using I don't know maybe a libby instead of relative positional encodings"}, {"start": 551.48, "end": 555.48, "text": " Maybe then the laws change and we are not sure how sensitive they are"}, {"start": 555.48, "end": 563.0, "text": " Um, I might be wrong. 
Maybe the chinchilla paper has like some stronger theoretical insights, but that's i'm recalling from my memory"}, {"start": 563.8000000000001, "end": 566.44, "text": " Okay, so let's get back to the paper here"}, {"start": 566.9200000000001, "end": 573.0, "text": " So the first thing they show is that obviously it's very important to have high quality data set and by high quality"}, {"start": 573.8000000000001, "end": 578.84, "text": " They mean like so so having these diverse cross-domain high quality data"}, {"start": 579.4, "end": 581.88, "text": " Such as the pile data set so that's the data set that we have here"}, {"start": 581.88, "end": 585.08, "text": " And we'll see that their paper a bit later"}, {"start": 585.72, "end": 591.16, "text": " And you can see that using the the pile compared to these other data sets such as the oscar or c4"}, {"start": 591.96, "end": 598.68, "text": " Gives them basically better better accuracy on on these various nlp tasks. You can see that the blue model here"}, {"start": 599.24, "end": 603.64, "text": " Is on pair and even a bit better compared to the eluthorized paper"}, {"start": 604.2, "end": 606.6, "text": " The gpt neo paper is the same as the"}, {"start": 606.6, "end": 615.32, "text": " The gpt neo x20b and you can also see that basically the the open eyes models to outperform them by by some amount"}, {"start": 615.88, "end": 618.36, "text": " So that's that's um worth noticing"}, {"start": 619.5600000000001, "end": 621.5600000000001, "text": " Okay, guys, so let's continue here"}, {"start": 622.28, "end": 628.84, "text": " Let's see. What else is interesting in this paper? Um, so as I said diverse cross-domain pre-training data combining web calls with high"}, {"start": 629.4, "end": 632.12, "text": " With curated data and with high quality data"}, {"start": 632.12, "end": 639.4, "text": " So as I said diverse cross-domain pre-training data combining web calls with high with curated high quality sources significantly proves zero shot"}, {"start": 639.86, "end": 643.5600000000001, "text": " Generalization over pre-trained data sets constructed from common crawl only"}, {"start": 644.52, "end": 650.52, "text": " Okay, that's one finding the second finding they had was that although learned positional embeddings help perform rotary embeddings"}, {"start": 650.76, "end": 655.0, "text": " Alibi yields significant better results than all the other alternatives now"}, {"start": 655.0, "end": 659.96, "text": " This might be as well very very specific to their particular set of high parameters in architecture"}, {"start": 659.96, "end": 663.08, "text": " I wouldn't like take anything fundamental out of this statement"}, {"start": 663.08, "end": 667.32, "text": " But um, yeah, i'm just kind of walking you through some of the findings behind behind the papers, by the way"}, {"start": 667.5600000000001, "end": 669.5600000000001, "text": " I did cover the alibi paper"}, {"start": 670.2, "end": 672.6, "text": " I think like eight months ago or something when it came out"}, {"start": 673.0, "end": 677.72, "text": " So do check it out if you're curious to know more and we'll see what the rotary embeddings if you're not familiar"}, {"start": 677.8000000000001, "end": 682.0400000000001, "text": " Um, what those are? 
Um a bit later in the in the gpt neo x paper"}, {"start": 683.1600000000001, "end": 687.88, "text": " Okay, some of the other findings they had was they tried a bunch of different activation functions"}, {"start": 687.88, "end": 694.52, "text": " And then they say here we present our results in table three bubble us swigloo produces slightly better results than jellu"}, {"start": 694.84, "end": 699.4, "text": " However, this comes at a cost of reducing the throughput by approximately a third"}, {"start": 699.56, "end": 703.56, "text": " So I think they ultimately decided not to use it because of the throughput"}, {"start": 704.2, "end": 705.08, "text": " Okay"}, {"start": 705.08, "end": 710.68, "text": " So that's that's one more finding and then they they they mention how the and we already know this"}, {"start": 711.24, "end": 713.88, "text": " basically, it very it matters a lot where you put the"}, {"start": 713.88, "end": 721.16, "text": " Uh layer normalization and how many layer normalization layers do you include in your transformer blocks and and they say here adding layer"}, {"start": 721.48, "end": 726.52, "text": " Normalization after the embedding layer incurs a significant penalty on zero shot generalization"}, {"start": 726.84, "end": 730.12, "text": " But the thing is when you scale the model to very big sizes"}, {"start": 730.6, "end": 733.64, "text": " Uh, then this thing helps stabilize the training"}, {"start": 733.96, "end": 737.24, "text": " Uh, and when you're on the when you're dealing with lower scales"}, {"start": 737.24, "end": 743.64, "text": " Uh in that case, you probably don't want to include it because you'll be sacrificing as you can see here that the zero shot generalization"}, {"start": 744.6, "end": 746.6, "text": " Okay, guys, let's continue here"}, {"start": 747.08, "end": 751.24, "text": " One thing that I kind of noted to myself as well. I was kind of"}, {"start": 752.12, "end": 756.04, "text": " This is a curious finding. Uh, so they say instead of manually writing"}, {"start": 756.2, "end": 759.48, "text": " So this is about the multilingual, uh, basically evaluation"}, {"start": 759.64, "end": 763.0, "text": " So trying to evaluate on multiple different human languages"}, {"start": 763.0, "end": 767.8, "text": " And they say instead of manually writing prompts for each language, which would obviously be very bothersome"}, {"start": 768.2, "end": 773.48, "text": " We followed the strategy proposed by linetal using english prompts for non-english examples"}, {"start": 773.88, "end": 777.48, "text": " This can be viewed as cross-lingual zero shot generalization"}, {"start": 778.04, "end": 783.64, "text": " Uh, they validated this strategy by demonstrating its ability to achieve zero shot performance on pair"}, {"start": 784.12, "end": 790.04, "text": " With and sometimes even better than human written language specific prompts. This is kind of cool. 
So you literally take a"}, {"start": 790.04, "end": 796.68, "text": " uh, um, you you have data sets with various different languages and you always take the english prompt and"}, {"start": 796.92, "end": 798.92, "text": " then you concatenate the the"}, {"start": 799.0799999999999, "end": 806.4399999999999, "text": " Uh, the answer in that different language you repeat it a couple of times and then you evaluate and it's and it's apparently the model"}, {"start": 806.5999999999999, "end": 812.5999999999999, "text": " Like learn to to understand, uh this which kind of makes sense because you're training the disjoint"}, {"start": 812.92, "end": 817.24, "text": " Uh, like a space of multi disjoint multilingual space"}, {"start": 817.24, "end": 820.6, "text": " Uh when you're training transformer on on on diverse data sets"}, {"start": 821.4, "end": 826.6, "text": " And we always always know that the machine translation kind of emerges as a consequence of this"}, {"start": 827.16, "end": 834.36, "text": " Next token prediction, uh, if you give it enough big data sets with with some presence of different languages"}, {"start": 835.5600000000001, "end": 840.12, "text": " So thing worth mentioning and this is something we're probably all aware of at this point"}, {"start": 840.2, "end": 842.2, "text": " And that's that the multilingual pre-training"}, {"start": 842.2, "end": 849.4000000000001, "text": " Very significantly diminishes english zero shot generalization. So the thing with this bloom model is that they trained"}, {"start": 849.48, "end": 853.8000000000001, "text": " Uh, this is uh, like a multilingual model and because of that it's not as strong"}, {"start": 854.2800000000001, "end": 858.2, "text": " Uh on on on english as as uh, basically some of the other"}, {"start": 859.0, "end": 860.5200000000001, "text": " baselines"}, {"start": 860.5200000000001, "end": 862.9200000000001, "text": " Okay, let's continue here. Um"}, {"start": 863.88, "end": 869.4000000000001, "text": " A couple things worth mentioning, this is how they've chosen the final configuration of the bloom"}, {"start": 869.4, "end": 872.9399999999999, "text": " 176 billion parameter model"}, {"start": 873.5799999999999, "end": 878.14, "text": " So they they reference here the the the the scaling loss paper"}, {"start": 878.22, "end": 883.74, "text": " It is notable that this pareto optimal frontier describes very large models trained on few tokens"}, {"start": 883.98, "end": 885.98, "text": " We call this training to optimality"}, {"start": 885.9, "end": 891.8199999999999, "text": " So that's basically what I said before basically you want to the the scaling loss paper show that you want to increase the parameters more"}, {"start": 892.14, "end": 898.54, "text": " Than the number of tokens of your data set and then they say this is in stark contrast with the common practice of training much smaller"}, {"start": 898.54, "end": 901.18, "text": " Models on many more tokens to convergence. Okay"}, {"start": 901.98, "end": 903.98, "text": " And now because they're training"}, {"start": 905.02, "end": 906.38, "text": " They have a couple of arguments here"}, {"start": 906.38, "end": 910.2199999999999, "text": " But one of them is that they're training on a multilingual data set and because of that"}, {"start": 910.3, "end": 914.4599999999999, "text": " They they don't want to strictly follow the the scaling loss. 
And so this is what they do"}, {"start": 914.54, "end": 918.4599999999999, "text": " So we choose to use the optimality front as an upper bound on model size"}, {"start": 918.6999999999999, "end": 920.62, "text": " So that's like the biggest model size"}, {"start": 921.26, "end": 927.66, "text": " There will be determined by the scaling loss paper and the lower lower bound on number of training tokens"}, {"start": 927.66, "end": 930.4599999999999, "text": " So they'll probably have smaller model and more tokens"}, {"start": 930.78, "end": 937.9, "text": " Okay compared to what the scaling loss paper suggests the second very influential paper that had impact on on the"}, {"start": 938.62, "end": 943.98, "text": " Like modeling decisions in this paper is this levin et al paper"}, {"start": 944.06, "end": 951.74, "text": " So they proposed a theoretically motivated and empirically backed law describing the optimal compromise compromise between width and height"}, {"start": 951.74, "end": 958.38, "text": " Within depth, sorry. So for a gpt3 size model with 175 billion parameters"}, {"start": 958.46, "end": 961.58, "text": " They predict an ideal depth of 80 layers. Okay"}, {"start": 962.38, "end": 964.38, "text": " so now it's just basically"}, {"start": 964.54, "end": 969.66, "text": " Like combining lego bricks you have some constraints from the scaling loss paper"}, {"start": 969.66, "end": 974.38, "text": " You have some constraints from this paper and then given the compute they have and that's"}, {"start": 974.94, "end": 978.3, "text": " Roughly 1 million gp hours on a 100 machines"}, {"start": 978.3, "end": 983.66, "text": " They decide on the on the on the on the hyper parameters of the model. So let's let's see what those are"}, {"start": 983.66, "end": 987.02, "text": " So they they lose this many tokens as you can see here"}, {"start": 987.66, "end": 989.66, "text": " 70 to 80 layers"}, {"start": 989.66, "end": 991.66, "text": " because of this paper and"}, {"start": 992.62, "end": 996.2199999999999, "text": " Et cetera, et cetera. So finally they end up with three"}, {"start": 996.8599999999999, "end": 998.78, "text": " best configurations"}, {"start": 998.78, "end": 1005.0999999999999, "text": " And the one they pick at the end is this one here. So they they pick this one with 176 billion"}, {"start": 1005.66, "end": 1007.66, "text": " parameters 70 layers"}, {"start": 1007.66, "end": 1008.9399999999999, "text": " this many"}, {"start": 1008.9399999999999, "end": 1010.4599999999999, "text": " hidden dimensions"}, {"start": 1010.4599999999999, "end": 1011.98, "text": " attention hats"}, {"start": 1011.98, "end": 1018.14, "text": " Dimensionality of attention hats. This is the memory footprint and the performance. Okay, so that's pretty much it"}, {"start": 1018.2199999999999, "end": 1023.8199999999999, "text": " That's the first paper. Let me get kind of show you these numbers side by side in the actual code base"}, {"start": 1024.22, "end": 1026.94, "text": " And then we'll switch to the second paper. Okay guys, here it is"}, {"start": 1027.26, "end": 1031.8999999999999, "text": " And as I said, i'll i'll do a walkthrough of of the code base. 
So basically don't worry"}, {"start": 1031.98, "end": 1035.26, "text": " I'll i'll show you this code in much more depth later on"}, {"start": 1035.26, "end": 1039.42, "text": " Uh, I just want to kind of briefly show you so this is the big science workshop"}, {"start": 1039.98, "end": 1044.22, "text": " Repo and you can see the configuration for the 176 billion model"}, {"start": 1044.94, "end": 1048.54, "text": " And you can see here the numbers are exactly what you see here in the table. So we have"}, {"start": 1049.48, "end": 1053.58, "text": " 176 layers so you can see that somewhere here. Let me try"}, {"start": 1054.3, "end": 1059.26, "text": " Nope, sorry, that's the number of uh, that's the number of parameters. Uh, so we have the 70 layers"}, {"start": 1059.42, "end": 1062.14, "text": " So here is the the variable for 70 layers. We have"}, {"start": 1062.14, "end": 1068.72, "text": " 14336 you can see that number of hidden is is exactly that we have number of hats"}, {"start": 1069.4, "end": 1074.24, "text": " 112 etc, etc. So basically yeah, this paper shows you how how they've"}, {"start": 1075.0400000000002, "end": 1079.1200000000001, "text": " Decided to to pick the uh, all of the hyper parameters of the model"}, {"start": 1080.0, "end": 1086.0800000000002, "text": " Some of that was inspired by the gpt3 paper. Some of that was inspired by the scaling loss paper. And finally, uh, the"}, {"start": 1086.08, "end": 1091.76, "text": " The levin et al paper had a had an impact as well on the on the width and depth now"}, {"start": 1091.76, "end": 1094.6399999999999, "text": " I'm gonna quickly walk you through the chronicles they created"}, {"start": 1095.6799999999998, "end": 1097.76, "text": " So basically, uh, i'm gonna start here"}, {"start": 1097.84, "end": 1104.8799999999999, "text": " So first of all one more thing that's kind of common across all of the three papers is that all of them are using the nvdm"}, {"start": 1104.8799999999999, "end": 1110.1599999999999, "text": " Megatron library that helps you do model parallelism do check out my scaling"}, {"start": 1110.32, "end": 1114.0, "text": " Uh, like ultimate guide to scaling video where I cover a couple of these"}, {"start": 1114.0, "end": 1116.0, "text": " uh influential scaling papers"}, {"start": 1116.86, "end": 1118.38, "text": " including megatron"}, {"start": 1118.38, "end": 1123.22, "text": " The second thing they they they all have in common is the deep speed library"}, {"start": 1123.58, "end": 1125.82, "text": " Again, I did cover that one in my previous video"}, {"start": 1125.82, "end": 1131.42, "text": " So do check it out and the bloom folks because they had 384 gpus at their disposal"}, {"start": 1132.78, "end": 1134.78, "text": " basically decided to to"}, {"start": 1136.22, "end": 1139.02, "text": " Arrange those gpus in this 3d"}, {"start": 1139.02, "end": 1145.28, "text": " Fashion so the so-called 3d parallelism where one axis is basically pipeline parallelism"}, {"start": 1145.5, "end": 1150.06, "text": " Where you break the model such that you take a couple of first layers you put them on one gpu"}, {"start": 1150.1399999999999, "end": 1153.98, "text": " Then you take the next uh end layers you put them on the second one, etc, etc"}, {"start": 1154.3799999999999, "end": 1157.42, "text": " Then you have the model parallelism where you break the actual layers"}, {"start": 1157.9, "end": 1158.7, "text": " uh"}, {"start": 1158.7, "end": 1164.06, "text": " And then kind of take the weights of that 
layer and put one part on one gpu the second part on the second one"}, {"start": 1164.06, "end": 1165.02, "text": " Etc, etc"}, {"start": 1165.02, "end": 1170.06, "text": " And the third dimension is the the data parallel dimension here where you basically replicate"}, {"start": 1170.46, "end": 1175.9, "text": " Your models across different gpus and then you feed them different portions of your of your input patch"}, {"start": 1176.54, "end": 1181.18, "text": " Okay, next up. I want to walk you through some of the chronicles. Uh, they've shared"}, {"start": 1181.42, "end": 1186.3799999999999, "text": " So there is this document called lessons learned. I link the repo down in the description"}, {"start": 1186.62, "end": 1187.5, "text": " So you can check it out"}, {"start": 1187.5, "end": 1192.46, "text": " You can go I strongly encourage you to go through uh, at least skim through all of these documents"}, {"start": 1192.46, "end": 1198.7, "text": " I think it's a valuable learning experience. So a couple of things so they have this title how training divergences were"}, {"start": 1199.42, "end": 1206.6200000000001, "text": " Overcome so how they overcame the the divergences one of the the methods was to just have a different initialization method"}, {"start": 1207.02, "end": 1211.18, "text": " And it literally means so much like I think they mentioned it somewhere"}, {"start": 1211.26, "end": 1216.3, "text": " So it has made a huge difference to the training stability and not only stability but the the final"}, {"start": 1216.3, "end": 1222.3799999999999, "text": " Uh actual loss you you converge to and I remember back from my days when I trained the transformer model"}, {"start": 1222.54, "end": 1224.54, "text": " So I had the original"}, {"start": 1224.7, "end": 1226.7, "text": " transformer implementation right here"}, {"start": 1227.4199999999998, "end": 1228.76, "text": " and uh"}, {"start": 1228.76, "end": 1233.8999999999999, "text": " Initialization blah blah. I remember this this diagram here. So here's what I said here"}, {"start": 1233.8999999999999, "end": 1238.7, "text": " So initialization matters a lot for transformers. 
I initially thought that the other implementation using savir"}, {"start": 1239.18, "end": 1245.18, "text": " Uh initialization is again one of those arbitrary heuristics and the pytorch default in it will do I was wrong"}, {"start": 1245.18, "end": 1251.5800000000002, "text": " You can see here three curves that these two are using pytorch default and this one is using the xavier initialization"}, {"start": 1251.66, "end": 1258.6200000000001, "text": " So that was the moment for me when I realized how sensitive the models are to initialization and how vital that is"}, {"start": 1259.02, "end": 1261.02, "text": " Okay getting back to the"}, {"start": 1261.66, "end": 1263.66, "text": " lessons learned document here"}, {"start": 1264.14, "end": 1266.38, "text": " adding embedding layer norms, so"}, {"start": 1267.1000000000001, "end": 1273.18, "text": " Hopefully we're all aware of this basically the position whether it's before there is a jollo connection or after"}, {"start": 1273.18, "end": 1275.18, "text": " How many of them do you include etc?"}, {"start": 1275.18, "end": 1275.5800000000002, "text": " etc"}, {"start": 1275.5800000000002, "end": 1279.98, "text": " All of that matters a lot for the final stability and finally patience"}, {"start": 1280.22, "end": 1284.7, "text": " So in some cases in so they say here in some cases in the case of a huge spike"}, {"start": 1285.1000000000001, "end": 1288.46, "text": " It was taking 2k iterations for training to return to the same"}, {"start": 1289.1000000000001, "end": 1293.3400000000001, "text": " Loss it spiked from and then it continued training as if nothing happened"}, {"start": 1293.66, "end": 1296.46, "text": " So sometimes in order to overcome the spike you just have to wait"}, {"start": 1296.94, "end": 1300.38, "text": " Uh, but more often than not the training won't recover from a spike"}, {"start": 1300.38, "end": 1306.8600000000001, "text": " Yet in another situation as the training diverged slowly without any spikes. Okay, so let me show you the spikes"}, {"start": 1306.8600000000001, "end": 1308.3000000000002, "text": " so here is the"}, {"start": 1308.3000000000002, "end": 1315.2600000000002, "text": " Uh, this is probably the most interesting chronicle i've seen so the the the way they've trained the 104 billion parameter model"}, {"start": 1315.66, "end": 1321.66, "text": " They had the most problems with this model and then actually with the 176b model"}, {"start": 1322.0600000000002, "end": 1326.94, "text": " It was the training was much more stable and there are many reasons why that was first of all"}, {"start": 1326.94, "end": 1333.26, "text": " They I guess they learned a lot during during this this whole trial and then the second thing is I think like they use b"}, {"start": 1333.26, "end": 1341.8200000000002, "text": " Float so the brain float format instead of the the the the usual float 16 and all of that kind of brought the additional stability"}, {"start": 1342.38, "end": 1344.38, "text": " So I just want to kind of quickly skim through"}, {"start": 1345.02, "end": 1347.98, "text": " Some of the diagrams here you can see that the problems they had with spikes"}, {"start": 1348.22, "end": 1352.3, "text": " So during the training, uh, the spike would appear and then the training would diverge"}, {"start": 1352.3, "end": 1358.86, "text": " And they had a lot of experiments. 
They tried a bunch of things to try to to kind of cope with those spikes"}, {"start": 1358.86, "end": 1363.8999999999999, "text": " But they kept getting the spikes. They kept getting the spikes like throughout many many"}, {"start": 1364.46, "end": 1369.18, "text": " like experiments and many many weeks they constantly they couldn't get the model to"}, {"start": 1369.82, "end": 1375.1, "text": " Uh to converge it was constantly diverging and then I think only at the end"}, {"start": 1375.5, "end": 1378.78, "text": " Did they start getting some better results after multiple months?"}, {"start": 1378.78, "end": 1384.78, "text": " Um, i'll show you the the the 176b model a bit later. So let me show you one more thing. See here"}, {"start": 1385.02, "end": 1390.06, "text": " This is interesting. So the they had two million backslash only samples in the data set"}, {"start": 1390.54, "end": 1391.98, "text": " um, so"}, {"start": 1391.98, "end": 1392.86, "text": " Hugh"}, {"start": 1392.86, "end": 1394.94, "text": " Negan, I don't know how to pronounce this surname"}, {"start": 1395.58, "end": 1400.22, "text": " Discovered that there are huge records with just backslashes in oscar"}, {"start": 1400.7, "end": 1403.82, "text": " English version so we set out to look at their occurrence"}, {"start": 1403.82, "end": 1408.3799999999999, "text": " And so what happens when when your model basically encounters this sequence?"}, {"start": 1408.78, "end": 1412.78, "text": " Uh, it can produce the the the spiking behavior the divergence"}, {"start": 1412.86, "end": 1419.5, "text": " So all of these details are this is the this is the messy reality of training of training large language models"}, {"start": 1419.5, "end": 1421.5, "text": " And it's very nice that they've shared this"}, {"start": 1421.82, "end": 1425.02, "text": " Okay, I did mention that the 176b was much more stable"}, {"start": 1425.26, "end": 1430.78, "text": " They literally say here what makes the the model so so stable and they compare so here is the 104"}, {"start": 1431.1, "end": 1433.1, "text": " Uh billion parameter model you can see"}, {"start": 1433.1, "end": 1438.86, "text": " Uh bunch of curves, uh superposition here and you can see the the the 176b"}, {"start": 1438.9399999999998, "end": 1443.74, "text": " So it's literally like a smooth curve converging and the loss is lowering as expected"}, {"start": 1444.3, "end": 1450.6999999999998, "text": " So, uh, here is the kind of the overlapped image and then they say it's a combination of probably a few of these improvements"}, {"start": 1450.6999999999998, "end": 1456.3799999999999, "text": " It's kind of hard to pinpoint a particular thing, but I I have a strong hunch that the bfloat"}, {"start": 1456.38, "end": 1464.38, "text": " Uh format is what had a very very significant impact here. So very clean data. They have better data set for the for the big model"}, {"start": 1464.7, "end": 1467.3400000000001, "text": " They had the bfloat, uh mixed precision regime"}, {"start": 1468.5400000000002, "end": 1473.68, "text": " The fp32 accumulation for the pipeline blah blah blah very low initialization"}, {"start": 1475.18, "end": 1477.18, "text": " Word embedding layer norm"}, {"start": 1477.9, "end": 1481.5800000000002, "text": " But it was also using okay, so then I guess that doesn't doesn't explain it"}, {"start": 1482.14, "end": 1483.98, "text": " Uh, okay, that's guys. 
That's pretty much it"}, {"start": 1483.98, "end": 1487.74, "text": " They also have this document where they show, uh how they decided to"}, {"start": 1488.38, "end": 1494.94, "text": " To pick the parameters of the 3d parallelism. So basically they decide how they decided on on picking the tensor parallelism"}, {"start": 1495.48, "end": 1502.14, "text": " Parameter the pipeline parallelism the data parallelism all of those details so you can kind of walk through this document at your own pace"}, {"start": 1502.22, "end": 1506.22, "text": " It's too long for me to kind of extract any any particular insight here"}, {"start": 1507.02, "end": 1511.02, "text": " But that's pretty much it. Okay guys, so now let's go to the second paper"}, {"start": 1511.02, "end": 1513.66, "text": " I'm gonna open up the um"}, {"start": 1514.94, "end": 1521.42, "text": " The opt paper and let's start from the beginning here. So open pretrained transformer language models"}, {"start": 1521.5, "end": 1528.3, "text": " So these guys had a lot of problems training their models like reading through the the the logs was was such a such a joy"}, {"start": 1529.5, "end": 1533.26, "text": " So basically let me kind of show you a couple of snippets from a huge document"}, {"start": 1533.26, "end": 1536.7, "text": " They have like there is like 140 pages or something of their logs"}, {"start": 1536.7, "end": 1543.66, "text": " Internal logs, that's very cool. They shared those so here here are a couple of those once we identify the bad node"}, {"start": 1543.82, "end": 1551.66, "text": " We need to document these lists of things to complain to csp and get our money back. Give us our money back you bastards"}, {"start": 1552.46, "end": 1557.74, "text": " Okay, it looks like 26 tried to immediately upload the checkpoint and failed its cp commands"}, {"start": 1557.98, "end": 1562.94, "text": " Then it took another step lowered its scalar and tried uploading again and again the humanity"}, {"start": 1562.94, "end": 1567.76, "text": " We are already at loss scale 0.25 and so on and so on like oh hallelujah"}, {"start": 1568.46, "end": 1571.8200000000002, "text": " Things looked good, but nope just kind of complete confusion"}, {"start": 1572.46, "end": 1578.6200000000001, "text": " So I again encourage you to just kind of skim through the the log book. 
It's a very interesting read"}, {"start": 1579.5, "end": 1581.5, "text": " Okay, let's start with this one"}, {"start": 1582.8600000000001, "end": 1586.22, "text": " First of all our models and hyper parameters largely follow brown at all"}, {"start": 1586.22, "end": 1593.18, "text": " So that's the gpt3 paper again with variations in batch size mostly to obtain increased computational efficiency"}, {"start": 1593.18, "end": 1595.58, "text": " So this is something I did mention in the previous paper as well"}, {"start": 1595.74, "end": 1600.8600000000001, "text": " Oops, so this is kind of a common thread for all of these three papers"}, {"start": 1601.18, "end": 1605.84, "text": " They all follow the gpt3 hyper parameters and and uh advices"}, {"start": 1607.58, "end": 1615.02, "text": " Next up they say we found that the pile was particularly full of duplicate documents and advise future researchers using the pile to perform"}, {"start": 1615.02, "end": 1622.24, "text": " additional deduplication processing this is in particular relevant to the next paper i'll cover from eluthor ai"}, {"start": 1622.48, "end": 1627.36, "text": " Because they did use pile and they did not duplicate it because of the lack of resources"}, {"start": 1627.68, "end": 1634.24, "text": " And they also which is very interesting. They've done more than a single epoch, which is the common practice training lms"}, {"start": 1635.12, "end": 1637.52, "text": " But we'll get there in a couple of minutes"}, {"start": 1638.32, "end": 1641.2, "text": " a couple of interesting curves again, um, you can see"}, {"start": 1641.2, "end": 1648.72, "text": " Iterations on the x-axis learning rate on the y-axis. You can see that this was fairly ad hoc like basically the model would crash"}, {"start": 1648.72, "end": 1655.52, "text": " They would retrieve the the older checkpoint and then lower the low the the learning rate such that in order to avoid"}, {"start": 1655.52, "end": 1661.68, "text": " Try to avoid the instabilities and as a consequence of that you get this piecewise weird graph here"}, {"start": 1662.48, "end": 1665.28, "text": " Okay, you can also see the the validation perplexity"}, {"start": 1665.92, "end": 1669.8400000000001, "text": " Has a lot of these bumps and uh, it looks very funny"}, {"start": 1669.84, "end": 1673.36, "text": " Uh, so here is the part, uh, that's uh joy to read"}, {"start": 1674.08, "end": 1677.6, "text": " Basically a lot of a lot of problems during during the training"}, {"start": 1677.84, "end": 1682.3999999999999, "text": " So the the thing to mention in metals defense is that they literally had only a couple of months to do this"}, {"start": 1682.72, "end": 1688.48, "text": " It was a very tight deadline and so yeah, they didn't have the time to to do this properly"}, {"start": 1689.04, "end": 1695.28, "text": " But here is a couple of insights from from the paper. So in total hardware failures contributed to at least"}, {"start": 1695.28, "end": 1700.8, "text": " 35 manual restarts and the cycling of over 100 hosts over the course of two months"}, {"start": 1701.6, "end": 1705.68, "text": " Given the difference between the number of hosts cycled out and the number of manual restarts"}, {"start": 1705.68, "end": 1713.2, "text": " We estimate 70 plus automatic restarts due to hardware failures. 
I guess everything that will that can break will break the marfais law thing"}, {"start": 1714.08, "end": 1721.76, "text": " Okay, we noticed a correlation between loss divergence our dynamic loss scalar crashing to zero and the l2 norm of the activations"}, {"start": 1721.76, "end": 1729.28, "text": " Of the final layer spiking these observations led us to pick restart points for which our dynamic loss scalar was still in a healthy state"}, {"start": 1729.28, "end": 1736.0, "text": " So bigger or equal than one and after which our activation norms would trend downward instead of growing"}, {"start": 1736.4, "end": 1741.04, "text": " Unboundedly, so this is just some detail about how they were handling the um, how"}, {"start": 1741.76, "end": 1747.36, "text": " What was the heuristic they were using to to pick the best checkpoint in order to avoid the the future spikes?"}, {"start": 1747.36, "end": 1755.28, "text": " Okay, guys, uh, let's continue here. So the results here as you can see on 14 nlp tasks averaged for the zero shot"}, {"start": 1755.84, "end": 1760.8799999999999, "text": " Evaluation their own pair with gpt. That's very cool on the other side for the multi-shot"}, {"start": 1761.4399999999998, "end": 1769.52, "text": " Evaluation where it does like one or 32 shots. You can see that the uh, the dashed line the gpt model did outperform"}, {"start": 1770.32, "end": 1772.7199999999998, "text": " Them on on on those tasks. So that means"}, {"start": 1772.72, "end": 1777.68, "text": " It's kind of better few shot learner than the op model and um, yeah"}, {"start": 1778.0, "end": 1780.96, "text": " They say that the trend follows the gpt3. That's what what I said"}, {"start": 1781.44, "end": 1787.6000000000001, "text": " So let's continue here a couple of more interesting details. First of all, um, hate speech detection. They showed that um"}, {"start": 1788.32, "end": 1794.48, "text": " That the 175b model the op model considerably outperforms da vinci in all settings"}, {"start": 1794.48, "end": 1795.6000000000001, "text": " So that's very cool"}, {"start": 1795.6000000000001, "end": 1798.32, "text": " And then you think about it and this is all with the"}, {"start": 1798.32, "end": 1804.56, "text": " Double edged sword. So if it's better at detection, that means it's probably more susceptible to generating toxic"}, {"start": 1804.8799999999999, "end": 1809.2, "text": " And hateful speech so and that's indeed the case. So let me show you that here"}, {"start": 1810.0, "end": 1813.04, "text": " And they mention it. So overall we see that op"}, {"start": 1813.6, "end": 1820.56, "text": " 175b has a higher toxicity rate than either palm or da vinci da vinci being the the gpt3 model"}, {"start": 1821.12, "end": 1823.9199999999998, "text": " That's the code name. Okay, so let's continue here"}, {"start": 1823.92, "end": 1828.88, "text": " Da vinci being the the gpt3 model. That's the code name. Okay, you can see here"}, {"start": 1829.92, "end": 1836.88, "text": " Prompt toxicity probability toxicity probability of continuation. 
You can see that the blue curve the op model has the highest toxicity"}, {"start": 1837.44, "end": 1841.76, "text": " Uh for for for across across the the skip across the whole axis here"}, {"start": 1842.5600000000002, "end": 1844.3200000000002, "text": " Okay, guys, um"}, {"start": 1844.3200000000002, "end": 1845.68, "text": " I think that's pretty much it"}, {"start": 1845.68, "end": 1854.0, "text": " They they kind of summarize saying that we still believe this technology is premature for commercial deployment. And yeah, I mean, uh understanding how these models work, uh,"}, {"start": 1854.0800000000002, "end": 1858.4, "text": " Just interpreting what's going on and why the model is up but in certain results all of that"}, {"start": 1858.4, "end": 1859.8400000000001, "text": " We are still in the early days"}, {"start": 1859.8400000000001, "end": 1865.3600000000001, "text": " We are much better at at getting raw numbers on benchmarks and then knowing how to handle and align"}, {"start": 1865.68, "end": 1871.52, "text": " These models with human preferences, but it's a topic for for a different video. I think that's pretty much it. That's the second paper"}, {"start": 1872.4, "end": 1874.4, "text": " As I said, i'm just quickly giving you"}, {"start": 1874.4, "end": 1879.8400000000001, "text": " Uh the the the overview here and then in the next videos i'll be doing the coding"}, {"start": 1880.4, "end": 1885.44, "text": " Series, okay. Um, yeah before this let's let me show you the the opt logbook"}, {"start": 1885.44, "end": 1890.64, "text": " So here's the the big logbook I mentioned this is the this contains a bunch of notes"}, {"start": 1891.52, "end": 1894.5600000000002, "text": " That they were kind of accumulating throughout this period of a couple months"}, {"start": 1894.72, "end": 1899.2, "text": " I encourage you to check it out at your own pace. You can skim it. It literally took me maybe"}, {"start": 1899.2, "end": 1904.0800000000002, "text": " 20 30 minutes to just skim through this and kind of extract some of the insights"}, {"start": 1905.1200000000001, "end": 1908.72, "text": " Okay, so I did show you this diagram in the beginning of the video"}, {"start": 1908.8, "end": 1914.96, "text": " So again, you can see the torque or the number of gaps here and how many failures they were experiencing"}, {"start": 1914.96, "end": 1920.8, "text": " So we saw the number 70 plus or something restarts. I don't want to count but yeah, this might be"}, {"start": 1921.76, "end": 1925.8400000000001, "text": " At least in that range. Okay. Finally, let's go to their final update"}, {"start": 1925.84, "end": 1929.6, "text": " Uh, they have a couple of things like some some insights here"}, {"start": 1930.0, "end": 1932.6399999999999, "text": " Um, let me pick a couple of them. So scaling to"}, {"start": 1933.26, "end": 1935.76, "text": " 1024 a 100 to handle a real"}, {"start": 1936.24, "end": 1941.1999999999998, "text": " Workload of this size is highly non-trivial. We will discuss infrastructure pain points below"}, {"start": 1941.4399999999998, "end": 1947.28, "text": " Okay, ensuring training converges at this scale is also highly non-trivial without sufficient ablations at medium scale"}, {"start": 1947.84, "end": 1952.9599999999998, "text": " Results obtained from training at small scale also do not necessarily hold when scaled up"}, {"start": 1952.96, "end": 1957.2, "text": " We will cover these learnings in a note to be released in the upcoming weeks. 
Okay"}, {"start": 1957.92, "end": 1961.68, "text": " So let me quickly read you the cluster deletion story. That's kind of funny"}, {"start": 1961.76, "end": 1967.1200000000001, "text": " So given the holidays we requested a pool of 12 buffer nodes to guard against hardware failures"}, {"start": 1967.8400000000001, "end": 1973.04, "text": " Two machines go down every day on average in the process of replenishing this pool"}, {"start": 1973.28, "end": 1978.8, "text": " The cloud provider support team accidentally deleted our entire cluster on december 21st"}, {"start": 1978.8, "end": 1985.84, "text": " First while the cluster was restored fairly quickly it unfortunately came back with 16 machines that did not pass our infrastructure checks"}, {"start": 1986.08, "end": 1988.08, "text": " Etc, etc. So that's one of the interesting parts"}, {"start": 1988.8, "end": 1994.08, "text": " Second interesting diagram here is this one. They showed that they didn't quite match the the loss"}, {"start": 1994.6399999999999, "end": 1999.36, "text": " Of the gpt3 model, but we did see that on the on various nlp tasks"}, {"start": 1999.6, "end": 2003.68, "text": " the zero shot performance of the model did reach gpt3 so that's again the"}, {"start": 2004.32, "end": 2006.32, "text": " the the discrepancy between"}, {"start": 2006.32, "end": 2012.72, "text": " Um, like the loss value the final loss value and the actual performance might not always be that clear"}, {"start": 2012.96, "end": 2018.24, "text": " Although we did see that we did saw that the gpt3 handles much better the few shot prompts"}, {"start": 2018.96, "end": 2025.84, "text": " And that might be because of lower of lower loss here again just a speculation, but that's usually the case"}, {"start": 2026.08, "end": 2029.6, "text": " That's it guys again encourage you to go through it at your own pace"}, {"start": 2029.84, "end": 2033.6799999999998, "text": " I'm going to continue on and explore the gpt neo x20b paper"}, {"start": 2033.68, "end": 2036.3200000000002, "text": " So an open source autoregressive language model"}, {"start": 2036.88, "end": 2040.16, "text": " And let's see what the insights here are"}, {"start": 2040.5600000000002, "end": 2045.68, "text": " Okay, so we trained on a data set that contains duplicated data for more than one epoch"}, {"start": 2045.76, "end": 2048.4, "text": " But see no evidence of performance loss"}, {"start": 2048.88, "end": 2053.76, "text": " While these authors claims that the few shot prompting doesn't improve performance on their task"}, {"start": 2054.0, "end": 2057.3, "text": " We find that this is actually a phenomenon unique to gpt3"}, {"start": 2057.3, "end": 2063.1400000000003, "text": " And does not apply to either gpt neo x20b or fair sec models. 
Okay, so"}, {"start": 2063.7000000000003, "end": 2067.7000000000003, "text": " that's just a note that uh, it's dangerous to just take a single model and"}, {"start": 2068.42, "end": 2070.7400000000002, "text": " Draw out extrapolate from from there"}, {"start": 2071.38, "end": 2073.94, "text": " Because other models might have different behavior"}, {"start": 2074.42, "end": 2076.5800000000004, "text": " so we'll see later in this paper that uh,"}, {"start": 2076.82, "end": 2083.6200000000003, "text": " The probable explanation is basically data set the underlying data set for that they use to train gpt neo x20b"}, {"start": 2084.34, "end": 2086.02, "text": " i.e the pile"}, {"start": 2086.02, "end": 2091.72, "text": " Is of different quality and distribution compared to the data sets that were used for gpt3"}, {"start": 2091.94, "end": 2093.94, "text": " We don't even know what data was used. So yeah"}, {"start": 2094.66, "end": 2095.94, "text": " Okay"}, {"start": 2095.94, "end": 2100.02, "text": " So again, they mentioned here that they largely followed the gpt3"}, {"start": 2100.02, "end": 2104.66, "text": " So so blah blah blah the coder model whose architecture largely follows that of gpt3"}, {"start": 2105.14, "end": 2112.02, "text": " With a few notable deviations described below. So that's that's again the common thread that I mentioned"}, {"start": 2112.58, "end": 2113.94, "text": " multiple times"}, {"start": 2113.94, "end": 2120.02, "text": " But this paper did have like, um, many more modifications compared to the previous ones we saw. Okay"}, {"start": 2120.7400000000002, "end": 2125.46, "text": " Before this paper was published and the ways were were released"}, {"start": 2126.1, "end": 2128.02, "text": " They had a couple of iterations"}, {"start": 2128.02, "end": 2131.94, "text": " Behind them. One of them was gpt neo and the second one"}, {"start": 2132.58, "end": 2139.48, "text": " Was gptj which was basically supported by the elutrii community but executed by ban wang and komatsuzaki"}, {"start": 2139.48, "end": 2145.96, "text": " So that's kind of worth mentioning. So the architecture of this paper closely follows the gptj paper"}, {"start": 2146.6, "end": 2152.12, "text": " So let's see what's the diff between those two? So the gptj and this one compared to gpt3"}, {"start": 2152.12, "end": 2154.62, "text": " So first of all, they use rotary embeddings"}, {"start": 2156.28, "end": 2161.16, "text": " I will not get into a lot of details how these work but like here is the here's your uh,"}, {"start": 2161.46, "end": 2168.12, "text": " classical formulation of the of this uh attention so you can see the formula here we have the uh"}, {"start": 2168.12, "end": 2172.68, "text": " Not token here. We map it into the query. So this is going to be the query"}, {"start": 2172.92, "end": 2177.08, "text": " We map this one into the key then we do the dot product blah blah blah"}, {"start": 2177.3199999999997, "end": 2181.3199999999997, "text": " And then the rest of the notation is kind of weird because it's not summation you want to have"}, {"start": 2182.04, "end": 2184.04, "text": " You want to have these?"}, {"start": 2184.3399999999997, "end": 2185.64, "text": " scores"}, {"start": 2185.64, "end": 2190.12, "text": " In separate spatial locations, you don't want to sum them up. So this is kind of weird notation"}, {"start": 2190.44, "end": 2196.12, "text": " Um, I mean looking at it. It's not even correct. Like you cannot just sum up these scores. 
It's kind of weird"}, {"start": 2196.12, "end": 2199.0, "text": " Uh, yeah, let me just make a diff here. So"}, {"start": 2199.72, "end": 2204.3599999999997, "text": " The the the difference with the rotary medics is you you have this this matrix r here"}, {"start": 2205.48, "end": 2210.6, "text": " Which basically rotates your intermediate representation. So here's the here's kind of the idea"}, {"start": 2210.6, "end": 2212.3599999999997, "text": " So if you have a key or a query"}, {"start": 2212.3599999999997, "end": 2217.96, "text": " So let's say we have a text here in hands transformer with rotary position embeddings and let's say that each of these words"}, {"start": 2218.2799999999997, "end": 2222.2799999999997, "text": " Has a corresponding uh vector here. Let's say this is um, either"}, {"start": 2222.28, "end": 2228.1200000000003, "text": " Uh a key or or a query vector, uh, and basically what I do is they kind of chunk"}, {"start": 2228.44, "end": 2231.0800000000004, "text": " Uh this vector into d over two"}, {"start": 2231.8, "end": 2238.6800000000003, "text": " Groups, uh, and that basically means that each of the groups is two-dimensional and then so once you have this chunk here"}, {"start": 2239.5600000000004, "end": 2246.28, "text": " What you then do is you literally rotate it depending on this position and depending on the on the on the group index"}, {"start": 2246.52, "end": 2249.2400000000002, "text": " So what I mean by that is so this is group one group two"}, {"start": 2249.24, "end": 2253.24, "text": " Uh group d over two so because this is from group one"}, {"start": 2253.64, "end": 2259.9599999999996, "text": " You basically apply the theta one angle and because it's position one you apply m and that's how you rotate"}, {"start": 2260.12, "end": 2267.3199999999997, "text": " And then you get the output representation here and that's the final representation here. So that's the modification that happens that happens"}, {"start": 2267.7999999999997, "end": 2269.7999999999997, "text": " Precisely here. Okay guys"}, {"start": 2269.9599999999996, "end": 2273.8799999999997, "text": " Um, so that's one of the modifications they've used the second one is this one"}, {"start": 2273.88, "end": 2280.76, "text": " We compute the attention and few forward layers in parallel and sum the results rather than running them in series"}, {"start": 2281.32, "end": 2282.2000000000003, "text": " Okay"}, {"start": 2282.2000000000003, "end": 2283.96, "text": " uh, that's a"}, {"start": 2283.96, "end": 2285.48, "text": " fairly, uh"}, {"start": 2285.48, "end": 2287.32, "text": " like a known technique"}, {"start": 2287.32, "end": 2289.4, "text": " for for uh increasing the throughput"}, {"start": 2290.2000000000003, "end": 2294.52, "text": " And uh basically does not sacrifice any of the of the of the performance"}, {"start": 2294.52, "end": 2296.94, "text": " So they say here this led to a 15%"}, {"start": 2297.56, "end": 2302.52, "text": " Throughput increase while having comparable loss curves with running them in series during early training"}, {"start": 2302.52, "end": 2308.6, "text": " So I guess there is a minor um penalty you're paying for for for getting the additional throughput. 
Okay"}, {"start": 2309.64, "end": 2312.36, "text": " Then they mentioned due to an oversight in our uh code"}, {"start": 2312.6, "end": 2318.68, "text": " We unintentionally apply two independent layer norms instead of using a tight layer norm the way that um"}, {"start": 2319.24, "end": 2326.36, "text": " These authors here do so instead of computing this thing you can see that they they actually decouple l and one and l and two"}, {"start": 2326.68, "end": 2329.96, "text": " So just just the bug they had in their code. Okay. Um"}, {"start": 2329.96, "end": 2331.4, "text": " um"}, {"start": 2331.4, "end": 2334.84, "text": " Okay, they use here's here's the uh infrastructure they are using"}, {"start": 2335.7200000000003, "end": 2341.96, "text": " And there is a nice diagram explaining how how the setup looks like the actual uh, like node setup looks like this"}, {"start": 2342.28, "end": 2345.0, "text": " So we train gpt new x20b on 12"}, {"start": 2345.7200000000003, "end": 2353.2400000000002, "text": " Super micro servers each with eight nvidia a100 gpus and configured with two cpus. Okay"}, {"start": 2354.44, "end": 2358.04, "text": " So here you can see that the the complex diagram"}, {"start": 2358.04, "end": 2360.84, "text": " Um, even though it's a high level diagram and still very complex"}, {"start": 2360.84, "end": 2364.68, "text": " You can see how uh each of the four gpus connects to a single cpu here"}, {"start": 2365.48, "end": 2368.92, "text": " And then you have the different group here is connected with this cpu"}, {"start": 2369.64, "end": 2376.04, "text": " And all of these here is what is commonly referred to as a node. So this thing here is called a node"}, {"start": 2377.08, "end": 2379.08, "text": " You'll see this terminology"}, {"start": 2379.48, "end": 2381.48, "text": " a lot, okay guys"}, {"start": 2381.88, "end": 2383.88, "text": " So again a common thread"}, {"start": 2383.88, "end": 2389.96, "text": " That we opted to use the values from the gpt3 paper to guide our choice of hyper parameters"}, {"start": 2390.6800000000003, "end": 2395.1600000000003, "text": " To achieve a higher training throughput. We opt to use the same batch size as open ai's"}, {"start": 2395.6400000000003, "end": 2400.04, "text": " Uh 175b model approximately 3.15 million tokens"}, {"start": 2401.2400000000002, "end": 2408.76, "text": " We extend addmw with zero optimizer. So that's um, I kind of mentioned that because I did cover zero in one in in my"}, {"start": 2409.08, "end": 2411.2400000000002, "text": " ultimate scaling guide video"}, {"start": 2411.24, "end": 2416.12, "text": " I'll link that video somewhere in the in the descriptions if you if you're curious to know more about zero"}, {"start": 2416.7599999999998, "end": 2422.2, "text": " Check it out. But the the tldr is instead of replicating the the model states and the residual states"}, {"start": 2422.3599999999997, "end": 2426.2799999999997, "text": " So the gradients the the the optimizer states the weight etc"}, {"start": 2426.6, "end": 2432.6, "text": " Instead of that you just partition them and that's how you achieve the the zero redundancy. 
So that's that's what zero stands for"}, {"start": 2433.08, "end": 2434.04, "text": " um"}, {"start": 2434.04, "end": 2436.04, "text": " Finally a note before we see the results"}, {"start": 2436.04, "end": 2442.12, "text": " Uh, they say when comparing results in this work to gpt3 the training data is almost certainly the biggest known unknown factor"}, {"start": 2442.44, "end": 2445.48, "text": " Okay, and yeah data is becoming the name of the game pretty much"}, {"start": 2448.44, "end": 2453.02, "text": " A couple more differences, uh, they have compared to gpt3 is the tokenizer"}, {"start": 2453.48, "end": 2456.92, "text": " So we use a bp based tokenizer similar to that used in gpt2"}, {"start": 2457.32, "end": 2458.52, "text": " gpt2"}, {"start": 2458.52, "end": 2463.4, "text": " With the same total vocab size of 50 000 with three major changes to the tokenizer"}, {"start": 2463.4, "end": 2470.04, "text": " So first we train a new bpe tokenizer based on the pile which makes sense you want to because they're training on pile they want to"}, {"start": 2470.92, "end": 2474.52, "text": " optimize the tokenizer for that particular data"}, {"start": 2475.32, "end": 2477.2400000000002, "text": " Second in contrast to the gpt2"}, {"start": 2478.02, "end": 2483.88, "text": " Tokenizer which took which treats tokenization as a start of a string as a non-space delimited token"}, {"start": 2484.2000000000003, "end": 2488.7000000000003, "text": " The gpt neo x20b tokenizer applies consistent space limitation"}, {"start": 2488.7, "end": 2494.8799999999997, "text": " Uh regardless and third our tokenizer contains tokens for repeated space tokens"}, {"start": 2495.2, "end": 2501.04, "text": " All positive integer amounts of repeated spaces up to and including 24. Okay, this is kind of a mouthful"}, {"start": 2501.4399999999996, "end": 2504.56, "text": " um, and just some idiosyncrasies of tokenizers"}, {"start": 2505.2, "end": 2509.6, "text": " Um, but like let me show you like a graphical representation of what's going on"}, {"start": 2509.68, "end": 2512.56, "text": " So here is the gpt2 and how gpt2"}, {"start": 2512.56, "end": 2518.64, "text": " gpt2 tokenizer would tokenize this piece of text this snippet of code and here is how gpt neo x20b would"}, {"start": 2518.88, "end": 2522.16, "text": " tokenize that same text so this is what I mentioned about the uh"}, {"start": 2522.72, "end": 2528.56, "text": " Basically the spaces so you can see that basically all of these spaces here are encoded as a single token"}, {"start": 2529.12, "end": 2530.96, "text": " similarly here"}, {"start": 2530.96, "end": 2536.08, "text": " Etc etc up to 24. So literally until you get to 24. It's a single token"}, {"start": 2536.16, "end": 2541.04, "text": " That's that's how I understood this part. 
Whereas on the other side, um, you can see that gpt2"}, {"start": 2541.04, "end": 2549.04, "text": " gpt2 literally encodes each of these spaces as a separate token and that kind of bloats the number of tokens to 55 here compared to 39"}, {"start": 2549.6, "end": 2551.6, "text": " uh in this paper"}, {"start": 2552.16, "end": 2556.08, "text": " Okay, guys, so I mentioned this a couple of times we see no drop"}, {"start": 2556.8, "end": 2560.0, "text": " In test validation loss after crossing the one epoch boundary"}, {"start": 2560.4, "end": 2566.4, "text": " Here are some of the diagrams they have and it's a common practice to always train only for a single epoch"}, {"start": 2566.88, "end": 2569.04, "text": " I guess partially because you have so much data"}, {"start": 2569.04, "end": 2574.72, "text": " That going for multiple epochs would be super compute intensive. So people almost never go to"}, {"start": 2575.36, "end": 2577.36, "text": " more than than a single epoch"}, {"start": 2577.7599999999998, "end": 2582.96, "text": " And there are some papers showing that the model starts memorizing very quickly if you if you start"}, {"start": 2583.52, "end": 2587.68, "text": " Going multiple epochs. So people usually just do single epoch in this paper"}, {"start": 2588.56, "end": 2592.96, "text": " They literally continued training and they didn't see any any any signs of uh"}, {"start": 2592.96, "end": 2598.7400000000002, "text": " Of validation loss starting to diverge or anything. So they just kind of continued and uh, yeah, I mean"}, {"start": 2599.46, "end": 2601.46, "text": " um, I guess"}, {"start": 2601.94, "end": 2603.7, "text": " Yeah, I guess, um"}, {"start": 2603.7, "end": 2605.7, "text": " the common wisdom should be"}, {"start": 2605.7, "end": 2612.58, "text": " Questioned because the wisdom is not based on any solid theoretical understanding of why this works and like llm"}, {"start": 2613.3, "end": 2618.7400000000002, "text": " Large language model training is pretty much like dark magic and alchemy as many as many refer to it"}, {"start": 2618.74, "end": 2623.4799999999996, "text": " And so I think it's completely legit to do something like this if it's giving you better performance"}, {"start": 2624.18, "end": 2625.7799999999997, "text": " Why not?"}, {"start": 2625.7799999999997, "end": 2627.22, "text": " Okay, so"}, {"start": 2627.22, "end": 2630.9799999999996, "text": " Let's continue and let's wrap up this paper. Um, let me show you the results"}, {"start": 2631.7, "end": 2636.4399999999996, "text": " Uh, they get so this is the zero shot performance of gpt new x compared to gptj"}, {"start": 2637.06, "end": 2641.4599999999996, "text": " Fairsec and open eyes models. 
You can see it's pretty much on pair with them"}, {"start": 2641.7799999999997, "end": 2646.58, "text": " But then it really starts shining once you start using it on a"}, {"start": 2646.58, "end": 2653.7999999999997, "text": " day mathematical data sets and like these arithmetic tasks, which is I guess partially because a huge part of"}, {"start": 2654.2, "end": 2658.44, "text": " I mean not a huge part, but like a significant portions of uh of pile"}, {"start": 2659.08, "end": 2664.04, "text": " Are are like these scientific websites such as archive papers, etc, etc"}, {"start": 2664.2799999999997, "end": 2669.56, "text": " So because of the data set, uh, it seems that this model came much better compared to to the other baselines"}, {"start": 2669.56, "end": 2672.7599999999998, "text": " You can see the the orange dashed lines here. It's kind of"}, {"start": 2672.76, "end": 2676.6000000000004, "text": " Uh over outperforming all of the other baselines if i'm not wrong"}, {"start": 2676.6000000000004, "end": 2682.2000000000003, "text": " I think there are few shot performance not only in these data sets, but in general is also better compared to"}, {"start": 2682.8, "end": 2685.5600000000004, "text": " GPT models, but I don't see the diagrams here. So"}, {"start": 2686.6400000000003, "end": 2691.6400000000003, "text": " So yeah, okay. This is actually the one i'm referring to so this is the five shot evaluation"}, {"start": 2691.6400000000003, "end": 2696.0800000000004, "text": " You can see that again gpt new x is much better when you give it multiple shots"}, {"start": 2696.1600000000003, "end": 2701.88, "text": " So it's kind of a better few shot learner apparently compared to the gpt model and fairsec, etc"}, {"start": 2701.88, "end": 2703.08, "text": " And final load here"}, {"start": 2703.08, "end": 2709.6800000000003, "text": " We opted to choose hyper parameters based on on a mixture of experiments at smaller scales and by interpolating parameters"}, {"start": 2709.88, "end": 2713.08, "text": " Appropriate for our model size based on previously published work"}, {"start": 2713.2400000000002, "end": 2720.48, "text": " But because I think they so however several aspects of both our model architecture and training setup including the data and the tokenizer"}, {"start": 2721.12, "end": 2723.88, "text": " Diverged significantly from gpt3 as such"}, {"start": 2723.88, "end": 2731.04, "text": " It is almost certainly the case that the hyper params used for our model are no longer optimal and potentially never were"}, {"start": 2731.04, "end": 2733.6, "text": " In the appendix they go on to say"}, {"start": 2734.0, "end": 2741.4, "text": " And argument why it's a good idea to release the model weights because back at the at the time when they released this model"}, {"start": 2741.4, "end": 2744.98, "text": " This was the biggest model out there now. 
We have bloom now"}, {"start": 2744.98, "end": 2752.84, "text": " We have opt but like this was a very important contribution to the community and just a very valuable thing for"}, {"start": 2753.6, "end": 2756.68, "text": " like researchers and people who are not as as"}, {"start": 2756.68, "end": 2760.56, "text": " well-financed as some other research labs"}, {"start": 2761.24, "end": 2763.08, "text": " And that's it guys"}, {"start": 2763.08, "end": 2770.7999999999997, "text": " So those are the three papers I wanted to cover so again gpt neo x20b opt and bloom paper"}, {"start": 2771.44, "end": 2776.22, "text": " So we saw a couple of commonalities between them. They were all following gpt3"}, {"start": 2777.08, "end": 2784.08, "text": " Hyper parameters and recommendations we saw that they were all using pretty much except for the opt paper"}, {"start": 2784.08, "end": 2786.7999999999997, "text": " Both the deep speed and Megatron"}, {"start": 2786.88, "end": 2791.68, "text": " Whereas opt was using only the Megatron library to scale up these models and train them"}, {"start": 2791.68, "end": 2796.64, "text": " Which is completely non-trivial thing to do most importantly that the value of these"}, {"start": 2797.88, "end": 2798.84, "text": " projects"}, {"start": 2798.84, "end": 2806.2799999999997, "text": " Is that they've shared both the code as well as the process as well as the weights which is a huge contribution to the community"}, {"start": 2806.2799999999997, "end": 2807.96, "text": " So kudos for that"}, {"start": 2807.96, "end": 2809.96, "text": " Guys if you like this video"}, {"start": 2809.96, "end": 2813.64, "text": " Subscribe to this channel share it out with your friends and"}, {"start": 2814.52, "end": 2820.46, "text": " Expect new coding videos coming up where I'll be walking you through some of the code bases behind these papers"}, {"start": 2820.46, "end": 2840.14, "text": " So stay tuned for that and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=f6PtJKdey8E
Stable Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models | ML Coding Series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE If you want to understand how stable diffusion exactly works behind the scenes this video is for you. I do a deep dive into the code behind Stable Diffusion explaining: 1. First stage autoencoder training (autoencoder with KL regularization) 2. Latent Diffusion Model training (UNet + conditioning model) 3. Sampling using PLMS scheduler Stable diffusion directly builds upon the "High-Resolution Image Synthesis with Latent Diffusion Models" paper so I do a deep dive into the code behind this paper. Let me know how you like this one - feedback is welcome as always! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Stable diffusion repo: https://github.com/CompVis/stable-diffusion ✅ LDM repo: https://github.com/CompVis/latent-diffusion ✅ LDM paper: https://arxiv.org/abs/2112.10752 ✅ VQ-GAN (taming transformers) paper: https://arxiv.org/abs/2012.09841 ✅ PLMS paper: https://arxiv.org/abs/2202.09778 ✅ Imagenette dataset: https://github.com/fastai/imagenette ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro: why is Stable Diffusion important 00:03:50 Background knowledge: VQ-GAN, LDM, PLMS papers 00:09:20 Setup for a minimal code walk-through 00:13:30 Autoencoder with KL regularization training 00:17:15 LPIPS (perceptual loss) with discriminator loss 00:21:30 Loading ImageNet data and PyTorch Lightning training loop 00:26:35 Forward pass through the autoencoder 00:30:12 Loss calculation 00:32:08 Perceptual loss 00:36:30 KL and GAN generator loss 00:40:55 Discriminator loss 00:42:45 Summarizing the autoencoder training 00:45:44 LDM training 00:57:00 Encoding the image into the latent space 01:00:12 Forward pass through the LDM 01:01:22 LDM loss 01:04:08 Integrating conditioning via cross attention 01:10:34 Sampling using PLMS 01:16:02 CLIP 01:19:20 Classifier free guidance 01:22:19 Sampling code 01:26:42 Diffusion connection to differential equations (PLMS paper) 01:37:00 Quick glimpse into the safety check function 01:39:20 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #stablediffusion #latentdiffusion #imagesynthesis
What's up guys, Alex here. In this video we are doing a deep dive into stable diffusion model and by the end of this video you'll understand exactly how the training of both stages works, how the sampling works, how to generate images, etc. Having said that, a week ago the weights for the stable diffusion models were published, which is super exciting. So we know that since the beginning of this year with the release of the Dali 2 paper, we had a Cambrian explosion of various image generation models such as, well, Dali 2, we had Mid Journey, we had Imagine, Party, we had Dali Mini, which is the open source implementation of Dali version 1, etc. etc. And the reason why this model is so important is, well, there are multiple reasons. One of them is the images are super high quality and additionally you have much less constraints. So that means you can generate images of human faces. You can also, because the code is open source, remove the safety features and generate whatever you want, although I don't encourage you definitely to share malicious images across the internet, but if you want to experiment you can do that as well. So that's kind of cool. Additionally, and very importantly, you can run this model directly on your machine, even if you only have a GPU that has 8GB of VRAM. So I personally have RTX 2080 on my laptop and I'm able to run this in Flow 16 without a problem and generate awesome images in a couple of seconds. So that's again very cool. So it's much faster, it requires much less memory and it has less constraints and it's high quality. That's why stable diffusion is so interesting. Okay, so I want to showcase a couple of very cool examples that some digital artists such as Xander here have been creating. You can see how cool these videos are and they were created using stable diffusion. You can see that the name of this piece is called Voyage Through Time and what Xander did, and you can kind of go through the video and I strongly encourage you to check out this video, it's amazing, it's really like mind blowing, like that we can create this on like literally consumer grade GPUs and enjoy this piece of art. Okay, so here's an example of the prompts and the seeds that Xander had to create to generate this cool video. So you can see there is a lot of like seed is being used as a hyperparameter, you have to tweak the seeds, you have to tweak the prompts, do some prompt engineering and at the end you end up with something as cool as this video. Okay, so I said I also managed to run this stable diffusion on my GPU that has 8GB of VRAM, I was generating using Float 16 precision, but you can see the images are super high quality as well. So yeah, these images were generated using a prompt, a painting of an AI robot having an epiphany moment. And additionally, I'll basically release a script that I use to generate some cool interpolation, so basically the idea is to you pick, you generate a diverse set of images such as the set you've seen here, and you can pick two images you like and then basically do interpolation between them in the latent space of the model. So that was inspired by Karpathy's gist, but yeah, I'm going to share that script very soon and also cover it in a different video. So you can see here how it looks like interpolation between basically I'm interpolating between this image here and this image here, and I'm going to now show you how the procedure kind of looks like. 
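The latent-space interpolation described above (picking two generated images and morphing between them, inspired by Karpathy's gist) boils down to interpolating the two latent codes before decoding. Here is a small illustrative sketch using spherical interpolation, which is commonly used for Gaussian-like latents; `z_a`, `z_b` and the decoding step are placeholders, not the script's actual API.

```python
# Illustrative sketch of morphing between two latents via spherical interpolation (slerp).
# The latent shapes and the decoding step are placeholders for whatever the model uses.
import torch

def slerp(z1, z2, t):
    z1n, z2n = z1 / z1.norm(), z2 / z2.norm()
    omega = torch.acos((z1n * z2n).sum().clamp(-1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z1 + torch.sin(t * omega) * z2) / torch.sin(omega)

z_a, z_b = torch.randn(4, 64, 64), torch.randn(4, 64, 64)      # two latent codes
frames = [slerp(z_a.flatten(), z_b.flatten(), t).view_as(z_a)
          for t in torch.linspace(0, 1, 10)]
# each frame would then be decoded by the diffusion model / decoder to build the video
```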
So here is how the image is being morphed as we are approaching the target image, and you can see there are some like jumps in the latent space, but all in all, let me kind of move this faster, you can see how cool it is. And yeah, I'm going to share the script a bit later. So having said that, let's jump into the code and well, first, let me show you some prerequisite knowledge here. I'll have three papers which I'll consult. I will not do deep dives of the papers in this video. So one of those papers is a teaming transformers basically for high resolution image synthesis that introduced the VQGAN paper. I previously covered that paper on my on my YouTube channel, so do check it out if you want to have a thorough understanding of how that works. I'm going to consult some of the formulas later when I show you the code. Next up, we have the high resolution image synthesis with latent diffusion models or LDMs, and that's the paper that's behind the stable diffusion model basically. So LDMs is what's powering the stable diffusion. And finally, I'm going to briefly consult this paper, pseudo numerical methods for diffusion models on manifolds that introduced the PLMS. So the pseudo linear multi-step scheduler that makes stable diffusion fast and high quality so you can literally have only 50 steps of diffusion model and still generate very high quality images. OK, so I'm going to briefly walk you through some ideas in this paper, like super briefly. And if you want to learn more about diffusion models, I have a whole diffusion playlist, so do check it out. I've been doing both the paper overviews as well as code walkthroughs of the original repos. So do check those out if you want to have a deeper understanding of how diffusion models work. Here I'm going to do mostly a diff between what has changed compared to those older models. OK, let's start here. So to enable diffusion model training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pre-trained autoencoders. So this is literally the main diff compared to all of the other papers. They train instead of working in the image space and doing the forward diffusion process and then learning the reverse process in the image space. Instead, they do that in the latent space. OK, let me just quickly show you the difference in the loss. So here it is. So here is the original basically construction, how the original loss for the DPM model looked like. You basically do you sample images here, you sample noise from the from the normal distribution, you sample different time steps and you literally just do MSC loss. So the mean squared error loss between the noise and what you predict. So this is so epsilon theta is modeled usually as a unit architecture. So using the unit architecture. And so what you do is take the X of t, which is the original input image, plus t steps of diffusion being applied to it. So you kind of noise it and then you pass this noisy version and the time step that's that. So basically the time step information and then you need to to predict the noise that was literally used to noise that image in the forward process, if that makes sense. So this is just a hopefully a recap for for most of you. And then you just keep on repeating this until you train this unit to predict the noise. And then later you can just use it to denoise the well pure noise images such that you can generate cool images. OK, so this is the only difference between LDM. 
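The DDPM objective recapped above (sample an input, sample Gaussian noise and a timestep, noise the input, and regress the UNet's output onto the noise with MSE) can be written as a few lines. This is a generic sketch of the standard epsilon-prediction loss with a placeholder `unet`, not the repo's code; in the latent diffusion setting `x0` is the autoencoder latent rather than the raw image.

```python
# Minimal sketch of the epsilon-prediction loss recapped above. `unet(x_t, t)` is a
# placeholder noise-prediction network. The closed-form forward process
# x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps is the standard DDPM formulation.
import torch
import torch.nn.functional as F

betas = torch.linspace(1e-4, 0.02, 1000)            # noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # a_bar_t

def ddpm_loss(unet, x0):
    b = x0.shape[0]
    t = torch.randint(0, len(betas), (b,), device=x0.device)   # random timestep per sample
    eps = torch.randn_like(x0)                                  # the noise we must predict
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps        # noised input
    eps_pred = unet(x_t, t)
    return F.mse_loss(eps_pred, eps)

# In LDM / Stable Diffusion, x0 is the latent z produced by the first-stage encoder.
```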
So that this paper and the previous work, they are literally just working in the latent space of this encoder. And you can see everything else remains the same. So literally the difference is the following. So instead of working in the image space, you're going to whoops, you're going to have an like an encoder that's going to train being is going to be trained using the VQ similar to VQ GAN paper. You end up input the image. You end up with a latent here and then everything else remains the same. You're now using this latent to train your diffusion models. So here is the like a snippet from the DDPM paper. So now instead of using X of T will not be an image here. It will it will be instead. Sorry, X of zero will not be an image is going to be instead like a latent representation of the image. And that's pretty much it. Really, really. That's all there is to it. That's the only difference between this paper and and the previous art. So here is how the how the diagram looks like. So we will the code walkthrough will contain the three three parts. First part, I'm going to show you how to train the autoencoder here. It's going to be the first part of the video. The second part, I'm going to show you how to train the unit. So the fusion model such that you can so you can see here we first end up in the latent space. Then we do the forward diffusion process such that we end up with Z of T and then we diffuse it. We will learn how to predict the noise that was added during this diffusion process here. We learn how to predict it as the output. And well, that's it. Everything else is diffusion magic. And also we'll have I'm going to show you how they are using it. We'll also be learning the conditioning model. So we'll mostly be focusing on text as well as classes from the image net data set. OK, that's it. And the third part of the video will be about basically how to sample once we train these models. If you didn't understand everything because you lack some background, feel free to continue watching this video. I think that the code will be fairly self-explanatory. OK, so if you want to follow along what I'm doing here, you'll have to do a couple of steps. Obviously, you have to clone the original repo to the stable diffusion repo here. Just create the con the environment following the instructions here under the requirements section. And after that, go ahead and download like such that we can have a minimal setup and just get something running. You don't have to download the original image net data set, which is huge. You can instead go to this like fast eyes image net. However, we pronounce this thing like GitHub repo and download the smallest version possible. So literally 160 pixels is going to download only 10 classes from ImageNet and is going to make us well set up for it for the training procedure. Basically, this older repo latent diffusion, which preceded their stable diffusion repo, which I'm going to also link in the description, contains the necessary instructions for how you can unpack the data set and where you need to place it such that the script can recognize and the data can be loaded. OK, guys, so let me jump now into the actual code. Here we are. A couple more things we need to sort out. So first of all, obviously, we need to set some input arguments. There is only a couple of them that we care about. First one is we want to pass the like the configuration file for the auto encoder. 
So first we're going to train the auto encoder and then you need to pass the T flag, meaning we want to train it and then GPUs. So I'm passing zero comma because I only have a single GPU and the index of the GPUs here. If you have multiple GPUs, feel free to add one, two, three, whatever, how many GPUs you have there. OK, final thing, because this is a research code base after all, there are some bugs. And so I had to kind of sort them out before getting this to work on a Windows machine. OK, so let me open up my diff tool here. So I just have to do this diff tool D and I'm going to open up the differences to the code I made. So first of all, this is not a bug. This is just like a small modifications you have to do if you want to train this on a single GPU, if you want to be able to do a walkthrough on a VRAM limited system. So basically set batch size to one instead of default 12. Otherwise, you'll get CUDA out of memory exceptions. That's the first week you have to do. The second one is actually a bug. So you have to go to data, image and file. I mean, it's a bug if you're on Windows. So the thing is they kind of hard coded the slash here, assuming that that's how you split the path on an arbitrary system, which is not the case for Windows. So it's much better to use OS.SAP, which is going to resolve automatically depending on the operating system into the correct character sequence for Windows. It's going to be double backslash. And so this now works. Otherwise, you'll have some errors and the training will not work. OK, and the final tweak is in the main script. So if we go here to the main script, you can see I had to make a couple of changes. First of all, I had to set the number of workers to zero because otherwise I was again getting some errors. That's one tweak. The second tweak is set the shuffle to false here for the train data loader. The reason being is we are using, if you recall, we're using that super small subsample of the image net. And so if you just keep if you just do the shuffling, it might happen that you take some super big index and try to index into our data set and we don't have that image. And then you're going to have the well index out of range exception or something. OK, so that's the second tweak I had to do. The third tweak is comment on the DDP in case you only have a single GPU. So that's the distributed data parallel like object from from by Torch. I don't need that. So I had to comment it out. And finally, I had to comment out the signal parts because that does not work on Windows. And I didn't want to bother figuring out how to fix it on Windows because I'm just want to I just want to do a walkthrough and explain you guys how this training procedure looks like. That's it. We are ready. Let's get into the code. I'm going to start in the main file. So main file is the is where the training magic happens. So let me go to the main function here. OK, so here we are. I'm going to set the breakpoint here and let's start debugging this thing. OK, so just hit the training. We're going to use the information we passed in our launch Jason. If you're using this code, if not, you just need to pass those arguments somehow. Here we are. And again, I'm going to focus only only on on important parts. So I'm going to scheme everything else like we're just doing some parsing, blah, blah, blah. We can skip literally everything. We're just creating some directories, doing some configurations. I'm going to skip here. This is a salient point. 
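The Windows fix described above (replacing a hard-coded "/" with os.sep when splitting dataset paths) looks roughly like the snippet below; the path and variable names are illustrative, not the repo's.

```python
# The kind of path-splitting fix described above: hard-coding "/" breaks on Windows,
# while os.sep resolves to the correct separator for the current OS.
import os

rel_path = os.path.join("data", "ILSVRC2012_train", "n01440764", "img_0001.JPEG")
synset = rel_path.split(os.sep)[-2]   # works with both "/" and "\\" separators
```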
So here we are doing the loading from the config file. So let me show you briefly how the config file looks like. So if you go here, if you find the configs directory and then you find under the other encoder, you'll find the one we're using. So that's this one. And you can see everything you need to instantiate various sub modules of the of the LDM is here. So we have the auto encoder here. We have the last we are using. We'll see what this is. It's basically a perceptual loss and the parameters we use to construct that loss. We have information to specify such that we can construct our data loaders. Some callbacks for PyTorch lightning, which is the framework they are using, which is built on top of PyTorch. If you never heard of it and that's it, like some information about accumulating radiance. Nothing, nothing important. Let's go back to the basically main function. That's all you need to see. OK, now I'm going to keep skipping all of this accelerators, GPUs, nothing fancy. I'm going to skip all the way here to a model. So that's where we instantiate the actual model. OK, so you can see here we're going to now construct the auto encoder KL. So let me just see what I've enabled all of the breakpoints. And if so, let me do F10 and I'm going to hit the init function of the auto encoder. OK, a couple of steps here. We because it's an auto encoder, obviously we have to instantiate first the encoder and then decoder. So let's see how those look like. Basically, let me just see what this DDConfig is. Yeah, OK, so it just contains the necessary parameters to construct the encoder. OK, so let's let's go into the encoder encoder. Nothing, nothing really fundamental. Basically, bunch of com layers, bunch of there are some interesting attention layers, which basically what I do is in the latent space of the encoder. So they take the image features and they just do the VAT type of like self attention. So you can you can kind of unroll them and then just do simple like a self attention logic. Where every token attends to every other token, that's it. And then there are some down sampling layers, which are again just a combination like basically you can see here, it's just a com like a layer with the stride of two, what not. That's it. I'm going to jump over all of these again. Just a simple encoder model. Nothing fancy there. OK, let's continue. Let's exit this part and now let's enter the decoder. Similar story. Nothing fundamental to understand here. Bunch of com layers, bunch of res blocks, and then there is some up sampling. I'm going to hit that five. Let's exit this model. That's it. The architecture is not the interesting part of this of training the other encoder. OK, now this is the interesting part. Here is where we instantiate this LP IPS with discriminator loss. It's a mouthful. It's basically a perceptual loss combined with adversarial loss. So let's step into it. OK, so here it is. Here we construct it again. I'm going to focus only on the important parts. We have some loss coefficients. So depending on which component of the loss we are looking at, we'll have a different weight. But most of all, let me let me show you this. So if you're not familiar with this perceptual loss concept, I think it has been introduced at least since the Neural Stuller Transfer Paper. So that was the those were the first papers I saw were using the perceptual loss. 
So the idea is to basically let me step into it to basically take a pre trained VGG 16 network and then pass your image and grab some representation, intermediate representation from the VGG 16 network and then basically do MSC in the latent space, in that representation space instead of in the image space. And by doing that, you can kind of compare like the semantics of your input images and not not like focus on on some maybe superficial noisy details in the image space. That's the basic idea. Additionally, there are these netlin layers, which what they do is they reduce the number of channels from, let's say, 64 to one. And then you can kind of collapse them and do the loss logic. We'll see in a couple of minutes what that exactly means once we start doing the actual forward prop through the models. Now we're just instantiating. So this is enough for you to understand. So we are loading the model here and then we literally grab the pre trained model from the teaming. So this is their older repos. So teaming transformers, which introduced the VQGAM paper, and they basically have a checkpoint there of the of the that's going to initialize this LP IPS loss. OK, so let me just kind of go and do that. And you can see here loaded pre trained loss from this file here, VGG PTH. OK, so now what happens is we just set gradients to false everywhere. And that's pretty much it. So let me now hit F5 and we are out of that function. OK, now the second interesting part is discriminator. So that's going to be used in the adversarial loss component of the final loss. Again, I'm not sure it's worth even digging through it, but it's literally just a bunch of you can see here. So there is some batch norm going on, calm layers, leaky rail use, calm layers, leaky rail use, nothing fundamental there. I'm just going to skip over it. And it's again going to down sample because this is a patch based discriminator. You can kind of check. You can see here that's the patch can discriminator. There was a first described in the PIX2PIX paper. You can check out this link if you want to check it out. But the main difference is instead of having a scalar and then say scalar telling you whether this image is real or fake, which GAN networks do GAN discriminators do here. You will instead have literally for a patch you have maybe 32 by 32 scalars and all of those will tell you. So a particular scalar will tell you whether that patch is real or fake. And that just kind of gives you more information to train. So more information for for for your model to train. That's it. OK, let's exit the discriminator. Again, we have some hinge loss. We'll see how that comes into into play a bit later. We have this weights again. OK, we'll see all of that a bit later. That's it. Now we define some more calm layers. Blah, blah, blah. Nothing fancy. Monitor just tells us which loss are we monitoring. And in this case, validation reconstruction loss is something we care about. OK, that's it. Now I'm going to skip again. We some biases logging blah, blah, blah. There is some model checkpointing. We don't care about model checkpointing in this logic. There is, as you can see, it's kind of fairly well, researchy code base. Some callbacks, logging images, learning rates, blah, blah, blah. Kuda callbacks. Nothing, nothing fundamental for our understanding of how the how the stable diffusion is trained. So I'm going to skip all of that until until I guess until the data part. OK, so it's going to skip until the data part. 
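The perceptual-loss idea explained above (run both the input and the reconstruction through a frozen pretrained VGG16 and compare intermediate activations instead of pixels) can be sketched as follows. The real LPIPS loss additionally unit-normalizes features channel-wise, applies the learned 1x1 "lin" layers that collapse channels, and expects ImageNet-normalized inputs, so treat this as the core idea rather than the exact implementation.

```python
# Simplified sketch of the perceptual (LPIPS-style) loss explained above: MSE between
# intermediate VGG16 activations of the input and the reconstruction. The actual LPIPS
# module also normalizes features and uses learned 1x1 "lin" layers.
import torch
import torchvision

vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(x, y, layer_ids=(3, 8, 15, 22)):   # a few ReLU layers inside VGG16.features
    loss, fx, fy = 0.0, x, y
    for i, layer in enumerate(vgg):
        fx, fy = layer(fx), layer(fy)
        if i in layer_ids:
            loss = loss + torch.mean((fx - fy) ** 2)    # compare in feature space, not pixel space
    return loss
```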
So this is where we load the image net data. And again, if you've set if you downloaded the the image net and E.T.T.E. data set and you've placed it into the correct directory, then everything is going to work as expected here. Plus the minor tweak I made with the OS separator, if you recall that from a couple of minutes ago. OK, so again, I'm going to ignore all of this because it's just going to prepare the data. Nothing, nothing fancy there. So I'm going to do this and click F5 and wait for our data to be loaded. You can see there is some filtering going on. Blah, blah, blah. And we have the data ready. OK, so that's it. We have you can see here train data and validation data. You can see the numbers are super, super big. And that's definitely not the number of images we have in our small image net. And that's the reason why you have to set a shuffle to false. Otherwise, you'll you'll get the index out of range exception. Again, this is just what is needed. What is needed to get a minimal setup up and running. If you actually want to train this and obviously you want to have a full image net, you want to have multiple GPUs, et cetera, et cetera. But if you just want to step through and understand what's going on, this is more than enough. OK, again, I'm going to skip across all of these parts. Not interesting checkpointing, blah, blah, blah, debugging signals. I'm going to enable breakpoint here, hit F5, and this is where the magic will start going on. So I'm going to now enable all the breakpoints and let's start digging into this code. OK, so F10, we end up in this on pre-trained routine start. So that's, again, something that PyTorch Lightning defines for you. PyTorch Lightning is very cool if you're a researcher and you're doing something that's fairly has a fairly common structure in the sense that you don't have to think about zeroing your gradients. You don't have to think about calling the optimizer step. You don't have to think about all of those details. You just have to define a couple of functions that the PyTorch Lightning API requires you to. And then everything kind of works out of the box automatically. OK, this part is not interesting. We're just creating some directories for configs and logging. Nothing interesting really. And finally, we hit the validation data loader. Now, you may be confused. Why are we starting with validation? How does that make sense? And that's, again, PyTorch Lightning detail. What the framework does, and I think this is fairly brilliant, is it first literally loads only one or two batches of data in the validation loop just to make sure that the validation works so that you don't have to waste a bunch of time in the training loop only to find out that your validation loop is broken and then you have to start from scratch. So instead, what I do is literally just verify that validation works. And after that, you resume the actual training and then validation. Everything else remains according to the usual sequence. OK, so because of that, I'm going to disable all of the breakpoints here. I'm just going to just enable the one in the train loader, hit F5, and wait until this validation data set basically check is completed. Here it is. We can see we are hitting the train data loader. Again, that's something that PyTorch Lightning requires you to define. Let's continue here. And let me just see whether I've enabled all the breakpoints. OK, so now we're going to first OK, again, some some some PyTorch Lightning stuff. OK, so here we are. 
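Since the transcript leans on PyTorch Lightning handling the zero_grad / backward / optimizer.step plumbing once a few hooks are defined, here is a bare-bones skeleton of those hooks. This is a generic example, not the repo's LightningModule, which is far richer (two optimizers, EMA, image logging, and so on).

```python
# Bare-bones shape of the PyTorch Lightning hooks mentioned above: you implement the
# steps, Lightning drives the loop (device placement, zero_grad, backward, optimizer.step).
# Generic skeleton only, not the repo's actual module.
import torch
import pytorch_lightning as pl

class TinyAutoencoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Conv2d(3, 3, 3, padding=1))

    def training_step(self, batch, batch_idx):
        x = batch["image"]                                   # assumes NCHW images in the batch dict
        rec = self.net(x)
        loss = torch.nn.functional.mse_loss(rec, x)
        self.log("train/rec_loss", loss)
        return loss                                          # Lightning handles backward + step

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)
```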
So now what's happening is we're loading the data from our training dataset, and here we are: we end up with this example. We've done some preprocessing, you can see here we fetch the image, and we end up with the example, which is a dictionary with multiple keys. We have the image and other ImageNet-specific keys, so let me show you some of those. example['image'] is obviously our input image, processed such that we have 256 by 256 and three channels. If I do example['class_label'] (whoops, I need to make it a string) you can see the class label is zero. Our dataset only has 10 labels, we're not using the full ImageNet, so zero is highly probable. Then let me show you one more, the human label, which is just the human-readable label of this ImageNet class. And that's it. OK, and this is where it starts getting interesting. So here we are, we have a batch that was provided to us by PyTorch Lightning. Again, we have 'image' in there, the shape is the familiar 256 by 256, and now we do this get_input, which just fetches the image, does some permutation, makes sure the memory is contiguous, and so on. And now we do self(inputs), which is a fancy way of saying: call the forward method of this class. Let's see: we're dealing with the autoencoder here, obviously, and the forward method is here, so let me do F10 and we hit this part. The first part is obviously the encoding, then we do the sampling, and then the decoding. OK, so let's dig into the encoder. Here it is, here's the encoding logic, some downsampling and so on. Again, I'm going to skip everything here because, as I already told you, the encoder logic is fairly simple: we just do a forward pass through it and end up with a latent representation. So what's the dimensionality of this representation? You can see it's 6, 64, 64, and we expected three channels, because, if you recall, let me show you, we're using a config where we expect 64, 64, 3. Six is there because we're actually returning the mean and the log variance here, and then we sample before we pass that sample to the decoder. That's just a detail worth mentioning. They did experiment with different types of autoencoders; the one we're currently working with is the KL-regularized autoencoder, and they were also playing with quantized versions, but we really don't care about all of those details, there are just too many things going on. OK, some processing of that representation, and we end up with the moments; when they say moments, they literally mean the mean and the variance. So now we pass that into this DiagonalGaussianDistribution, we form a distribution here and name it posterior. Let me enter there. You can see we just chunk, we split that representation into two parts and end up with the mean and the log variance, so that's what's actually returned, and then some clamping and exponentiation such that we end up with the standard deviation and the variance. And that's it. It's just a Gaussian, that's all. After that we have to sample, and sampling is simple because Gaussians are such a nice mathematical object.
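Before stepping through the sampling, here is a hedged sketch of what this diagonal Gaussian object boils down to; the class and method names are illustrative stand-ins, not the exact ones from the repo:

```python
import torch

# Minimal sketch of the diagonal Gaussian posterior used by the KL autoencoder.
class DiagonalGaussian:
    def __init__(self, moments):                          # moments: (B, 2*C, H, W)
        self.mean, self.logvar = torch.chunk(moments, 2, dim=1)
        self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
        self.std = torch.exp(0.5 * self.logvar)

    def sample(self):
        # reparameterization trick: mean + std * eps, with eps ~ N(0, I)
        return self.mean + self.std * torch.randn_like(self.mean)

    def kl(self):
        # KL divergence to a standard normal, summed over the latent dimensions
        return 0.5 * torch.sum(self.mean ** 2 + self.logvar.exp() - 1.0 - self.logvar,
                               dim=[1, 2, 3])

moments = torch.randn(1, 6, 64, 64)       # 6 channels = 3 mean + 3 logvar channels
posterior = DiagonalGaussian(moments)
print(posterior.sample().shape, posterior.kl().shape)   # (1, 3, 64, 64), (1,)
```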
You basically just take the mean and add to it the standard deviation multiplied by noise drawn from a standard Gaussian. That's it, and we end up with the sampled representation. So now let's see, that should have three channels, I guess. Yes, 3, 64, 64, as expected. OK, so now let's pass this into the decoder stage. Here we are, we do some processing, just a conv layer, and the decoder itself is, again, just upsampling, conv, attention, trivial stuff; I'm going to skip over that, it's kind of common knowledge. So we end up with the output, and because this is an autoencoder, the output should have the same shape as the input image, so 256, 256, 3. Let's just print out the shape to be sure. Yeah, OK, we can see the shape is as expected. Let's continue. We now return that decoded output and the posterior object, which is the Gaussian, and that's this part. OK, so now you'll see there are two if statements here: one for when the optimizer index is zero and the other for when the optimizer index is one. The reason we have those is that we have two different optimizers. Let me just see where that function is: configure_optimizers. I didn't show you this one, so we didn't step into it, but simply put, we have two optimizers. One is Adam, and that Adam optimizes the encoder and decoder here, and the second optimizer optimizes the discriminator weights. So we saw, when we instantiated the discriminator, that it's going to be trainable, and here the other optimizer takes care of updating the discriminator weights, whereas this part can be treated as the generator: the autoencoder part is the generator, and the discriminator is, obviously, the discriminator. That's it. OK, so let's go back here. PyTorch Lightning makes it easy to do this type of GAN loss computation: first we step into this branch, and then it will literally call the training step again with the optimizer index set to one, and then we train the discriminator. All of that is handled for us by the framework. OK, this is where the whole brain of this autoencoder training happens, so let's dig into this code, this is very important. First we get the last layer, which is going to be used for some lambda calculation; you'll see that in a second. So here we are, we're inside of the loss. Let's see what the loss looks like for the autoencoder. First of all, we have the reconstruction loss: as you can see, we simply subtract the inputs, which are the input images (let me just make sure this is a NumPy or PyTorch tensor; yeah, and the shape should be 256, everything's fine there), from the reconstructions. We literally just subtract them, and we obviously want to make this difference as small as possible, so that makes a lot of sense. We're just doing a simple, MSE-type loss in the image space. OK, now we have the perceptual loss, and this is the interesting part. We pass the inputs and the reconstructions again, but this time we do not compare them in the image space; instead, we compare them in the latent space of the pretrained VGG network. So let me show you that as well. OK, so here we are, we're again in this LPIPS loss, and we're in the forward step.
So the first thing they do is apply the scaling layer, which is just going to subtract some mean; let me just find that for a second. The scaling layer is here: literally some shift and scale computation. I'm not sure whether these constants come from ImageNet statistics or not; if somebody knows, let me know. I think they are, and that would make sense since we're using an ImageNet dataset as well. So we just normalize our input tensors and then pass them through the VGG, and you can see here what the VGG forward pass entails: we return some intermediate representations, wrap them up into this namedtuple object, and return it. So let me hit F10, we're here, we return all of that, and we hit it again because we have two computations, one for the input and one for the reconstruction. Now we group all of these layers that reduce the number of channels together, just some syntactic sugar. And finally, here is where the magic happens. What we do is normalize the tensors (normalize just divides by the L2 norm), and then, as you can see, we subtract them and square the result. So it's an MSE loss without the M part, because we're not doing the mean yet. We repeat that for all of the representations; we'll have five representations from the VGG, which is why we have five iterations of this loop, and then we break out of it, and now we're here. OK, so now what happens is we pass those differences through these layers that reduce the number of channels. Let me explain what I mean by that. Here is this diff, the difference we computed between the features, and what is its shape going to be? Here is the shape after we apply this layer: after we apply it, we just end up with a single channel. If I index with zero here and zero here and print the shape, you can see we have 1, 1, 256, 256. OK, so that's the sole purpose of this layer. And finally, we do the spatial average, which is simply a mean across the spatial dimensions, and we end up with a scalar. And that's it, that's how the perceptual loss looks. It's fairly simple. I'm going to keep stepping over, five times, and then hit F5. So now we end up with this array of, well, we've actually aggregated the values here by doing the sum, and we end up with a single number, and that's the perceptual loss. That's it, guys. The only part that's kind of new here, for me personally, are these lin layers; you could just as well skip that part and simply do a mean operation across the differences, so an MSE loss directly in the feature space. This is some type of modification; I'm not sure why, and whether there are ablations, but that's the perceptual loss. OK, finally, we form the reconstruction loss as a weighted sum of the reconstruction loss in the image space plus the perceptual loss, with a weight of 1.0 here, so they're equally weighted. OK, next up, since this is zero, this term is going to be zero. Let me just double check.
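As a rough sketch of the perceptual-loss computation just traced through, assuming, for brevity, a single VGG feature map instead of the five the real LPIPS loss uses, and an untrained 1x1 "lin" layer standing in for the pretrained channel-reduction layers:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Rough sketch of an LPIPS-style perceptual distance on one VGG feature map.
def perceptual_distance(x, y, feat_extractor, lin_layer):
    fx, fy = feat_extractor(x), feat_extractor(y)
    # normalize each feature vector to unit L2 norm along the channel dimension
    fx = fx / (fx.norm(dim=1, keepdim=True) + 1e-10)
    fy = fy / (fy.norm(dim=1, keepdim=True) + 1e-10)
    diff = (fx - fy) ** 2                       # squared difference, no mean yet
    per_pixel = lin_layer(diff)                 # 1x1 conv: C channels -> 1 channel
    return per_pixel.mean(dim=[2, 3])           # spatial average -> one scalar per image

features = vgg16(weights=None).features[:16]    # up to relu3_3 (illustrative cut)
lin = nn.Conv2d(256, 1, kernel_size=1, bias=False)
x, y = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
print(perceptual_distance(x, y, features, lin).shape)   # torch.Size([1, 1])
```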
OK, this is indeed zero, and an exponent raised to the power of zero is one, so it's just a neutral operation; this is doing nothing, and this branch is also skipped. Here we just do some rescaling and summation, and that's our final reconstruction loss. Now, because, remember, we returned the posterior, which is the Gaussian from the latent space of the autoencoder, we're going to compute the KL divergence; that's the regularizer component. I'm going to hit F10 here, and you can see it simply computes the KL divergence using the mean, the variance, and the log variance, and we end up with the KL loss. Some rescaling again, and now we enter the branch where we train the generator, I assume. Let's see, all good so far. So we pass the reconstructions, which are the fake images, through the discriminator and get logits_fake. Then what we do is take a minus sign and a mean across those logits. Let me show you the shape: as I told you, this is a patch-based discriminator, so the shape is going to be maybe 32 by 32 or something. Let me see, it's 30 by 30, so that's how many patches we have, and we just take a mean across them. By putting a minus here, we will be tweaking the generator weights such that the discriminator gives a high value, which means it "thinks" those images are real. So again, that's just standard GAN stuff, nothing super complex there if you're familiar with GANs. OK, and now there is some adaptive weight calculation. Let me see what's going to happen there. Yeah, I remember: this makes sure that we are weighting the GAN part of the loss and the reconstruction loss appropriately. Let me show you the paper formula for this one, it'll make a bit more sense. OK, guys, here we are: this is the VQGAN paper I showed at the beginning of the video, and this is the formula we're computing. We take the gradients of the reconstruction loss with respect to the weights of the last layer of the decoder, and divide that by the gradients of the GAN loss with respect to the weights of the last layer of the decoder. What's the reasoning behind this? If the gradients are super big for the reconstruction loss and smaller for the GAN loss, then this ratio is going to be a big number, which puts a bigger weight on the GAN loss. By doing that, we make sure the network is learning from both losses, and that one loss does not overwhelm the other when it comes to its contribution to the gradients. That's the rough logic, and you can see that's exactly what we compute here: the NLL loss is the reconstruction loss, we get its gradients with respect to the last layer weights, then we compute the gradients for g_loss, which is the GAN loss, and then we just take the norms, divide them, do some clamping, multiply by some weight, and that's that adaptive weight. OK, I'm now going to return to the code here. Now, this factor is going to be zero for the initial 50,000 iterations or something.
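Here is a hedged sketch of that adaptive weight computation, the gradient-norm ratio with respect to the decoder's last layer, with illustrative names and a toy "last layer" just to make it runnable:

```python
import torch

# Sketch of the adaptive GAN weight (lambda) from the VQGAN-style loss:
# ratio of gradient norms of the two losses w.r.t. the decoder's last layer weights.
def adaptive_gan_weight(nll_loss, g_loss, last_layer_weight, max_weight=1e4):
    nll_grads = torch.autograd.grad(nll_loss, last_layer_weight, retain_graph=True)[0]
    g_grads = torch.autograd.grad(g_loss, last_layer_weight, retain_graph=True)[0]
    d_weight = nll_grads.norm() / (g_grads.norm() + 1e-4)
    return torch.clamp(d_weight, 0.0, max_weight).detach()

# Toy usage with a fake "last layer" so the gradients exist:
last = torch.nn.Parameter(torch.randn(4, 4))
nll = (last ** 2).sum()       # stands in for the reconstruction + perceptual loss
g = -last.sum()               # stands in for -mean(logits_fake)
print(adaptive_gan_weight(nll, g, last))
```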
The reason is that they first want to train the autoencoder and ignore the GAN loss, so that it can form some representations, and for stability's sake, and only then slowly start using the GAN loss. Let me show you what I mean by that. Let me enter here: you can see that until we pass the global-step threshold (I'm going to enter here), the weight will be set to zero. So the weight is zero until the global step crosses a certain threshold, and because of that this is zero, and you can see it's used here. That means this term is toggled off for a good portion of the beginning of the training, so we only use the KL-regularized loss here and the reconstruction loss here as the final loss. That's it. I know this was a mouthful, but hopefully it makes sense. Now we just do some accumulation of those terms and return the loss, do some logging, and return the autoencoder loss. And that's it. Now we're going to hit the training step again; this time we're not training the generator, we're training the discriminator. Let's see how that looks. Again, I'm going to skip across these steps and toggle off the breakpoints. We do a forward pass again, which is kind of suboptimal; there must be some way to optimize this, because we're basically sending the same images through again, and that doesn't make much sense. Now let me enable all the breakpoints again, and we enter this branch here. OK, so again we have a loss here, and all of the inputs are the same. What's different is, let me just enter here, that we'll have the same parts, the reconstruction loss, everything else remains the same, but the interesting part we care about is here. So I'm going to disable the breakpoints, enable just this one, hit F5, and we enter this branch this time. OK, so here we are. Now we pass the inputs through the discriminator to get the logits for the real images, and we pass the reconstructions to get the logits for the fake images. Again, we have this factor which is going to be zero initially, which means this loss is not enabled in the first part of the training; later it gradually kicks in. And basically what we do here is a hinge loss between the logits of the real and the fake images, and that's it, that's the GAN loss. This trains the discriminator so that it learns the difference between real and fake images, and consequently that leads to a better autoencoder, because we're using that discriminator to train the autoencoder. Guys, that's it, that was the training of the autoencoder. Hopefully it was interesting and made sense. Now I'm going to stop this training, because that's pretty much it. OK, I've hit F5 just to show you that now we just iterate across batches and keep repeating the same things we've just seen, so that's why I'm going to stop the training right now. We've seen how the autoencoder training looks. I'm going to quickly show you the formulas from the VQGAN paper just to consolidate the knowledge, and then we're going to step into understanding how the diffusion part, the UNet model, is trained.
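Before jumping to the paper, here is a small sketch of the discriminator-side pieces just described: the hinge loss over the patch logits, the generator term that pushes those logits up, and the factor that keeps the GAN part switched off early in training (names and the threshold value are illustrative):

```python
import torch
import torch.nn.functional as F

def hinge_d_loss(logits_real, logits_fake):
    # discriminator hinge loss over the per-patch logits
    loss_real = torch.mean(F.relu(1.0 - logits_real))
    loss_fake = torch.mean(F.relu(1.0 + logits_fake))
    return 0.5 * (loss_real + loss_fake)

def generator_loss(logits_fake):
    # push the discriminator's patch logits up for the reconstructions
    return -torch.mean(logits_fake)

def adopt_weight(weight, global_step, threshold):
    # the GAN term stays off until the autoencoder has trained for a while
    return weight if global_step >= threshold else 0.0

logits_real, logits_fake = torch.randn(1, 1, 30, 30), torch.randn(1, 1, 30, 30)
print(hinge_d_loss(logits_real, logits_fake).item(),
      generator_loss(logits_fake).item(),
      adopt_weight(1.0, global_step=100, threshold=50_001))
```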
OK, guys, quickly coming to the paper: here are the formulas from the VQGAN paper. Originally, these models were trained with codebooks of discrete vectors: they had the reconstruction loss component, you can see here, plus these losses, called commitment losses, which were used to train the codebook and the encoder. What changed in VQGAN is that, as you can see here, instead of an L2 loss they use a perceptual loss, and they introduce an adversarial training procedure with a patch-based discriminator. So that's everything we've seen so far. And you can see that the final loss looks like this: there is the GAN component weighted by this lambda (we've seen all of these), and there is this component here that consists of the reconstruction loss and the perceptual loss. Additionally, in the latent diffusion model, they've introduced the KL-divergence regularization. So, bottom line, the autoencoder used in the LDM paper is a small modification of what the same authors had already done in the VQGAN paper. That's it. Let's go back to the code. Wondering for a second about why these losses make sense, even though they are not, by any stretch of the imagination, necessarily the optimal way to train these models, let's just think about it. We have the reconstruction loss in the image space and the perceptual loss; those make sure we reconstruct images correctly, so that we learn not to lose information when we go through the bottleneck. Then we have the GAN component, which makes sure the images look realistic, so that additionally enforces the reconstruction. And finally, we have the KL-divergence loss, which regularizes the latent space of our autoencoder so that we can later smoothly traverse that space and have meaningful representations. That's the basic idea. OK, having said that, let's go to the launch.json. Let's modify the arguments so that we're now training the diffusion part and not the autoencoder. I'm just going to remove this part here, remove this space, and paste this back; that's everything you need to do, and now we're training the diffusion model. So let's go back here. The difference is that with this config they're not using the same autoencoder; they're using a different one, with quantization, but that doesn't matter, we're going to focus on the important parts only. OK, so I'm going to hit run here, and let's start analyzing the code again. I'm going to focus only on instantiating the models and on the training loop, and skip everything else because we've seen all of that, and we've seen the config as well. So let's go to the model instantiation. Let's go here, and let me just make sure everything is enabled; I think it already is. OK, so here we are: a LatentDiffusion object, and we start instantiating everything we need. One of those things is the UNet architecture. So let's start here and see what's going on. We can ignore all of this. Here is the first interesting part: we're going to initialize the superclass, and the superclass is this DDPM, the denoising diffusion probabilistic model.
So that's the original diffusion paper that made diffusion practical. OK, so let's enter there and see what's going on. Here you can see we're running in 'eps' prediction mode, which means we're predicting the noise; there are other things you could predict, like x0, and so on. We can skip all of this. Now there is this DiffusionWrapper, and that's where we actually start building the UNet: you can see the UNet model is constructed here. So let's construct it and see how it looks. In some of my previous videos I've gone through how the UNet is constructed in a lot of detail, so you can go through those if you want; here I'm just going to skim it. By the way, I love the assert statements in this code base: "Fool, you forgot to include the dimension of your cross attention conditioning". Very cool; you can tell it's production code. OK, so let's continue. I think we can skip all of these details. The important parts are these TimestepEmbedSequential objects. What they make sure of is that we can later pass the timestep information or the conditioning information into the various submodules. Let me hit F12 there and enter the definition: you can see that, depending on the layer type, it will sometimes pass the conditioning information, sometimes the timestep embedding, and sometimes just the input image features. And that's it. It might be interesting for me to show you one small thing: there are these blocks that integrate the conditioning information, and I think those might be interesting. So here, the SpatialTransformer: that's the module that's going to integrate the conditioning information into the UNet. I'm going to hit F12 there and just add a breakpoint. Let's hit F10 and continue; we don't really care about the rest, we can just construct the UNet and go to the end here. So this is the end of the UNet definition, quite a long definition as you can tell, so I'm going to skip over it, and that's it. We're using cross attention to integrate the conditioning information. OK, so let's continue here. Counting the parameters, nothing fancy, we don't care about that. This is just the exponential moving average, not fundamental to why this model works, so I'm going to skip across all of this. Now we register the schedule, and this is an important part. I've covered how this exactly works before; I've done a side-by-side comparison of the formulas and the code, so do check that out. I'm going to link those videos as cards somewhere here, but the diffusion playlist is the best place to start if you want to understand a bit better why this works, otherwise this video would be five hours long. Let me enter here and show you roughly how it looks: you can see a bunch of these alphas, the cumulative products, and all the variations of those formulas. Nothing is learnable here; these are just the constants of the scheduler that we need to get diffusion to work. So I'm going to skip across all of these, and that's it. That's an important part.
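Going back to those TimestepEmbedSequential objects for a moment, here is a hedged sketch of the dispatch idea; the block classes are simplified stand-ins, not the real UNet modules:

```python
import torch
import torch.nn as nn

# Sketch: a Sequential whose forward routes the timestep embedding or the
# cross-attention context only to the layers that know what to do with them.
class TimestepBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.proj = nn.Linear(ch, ch)

    def forward(self, x, emb):
        # add a projected timestep embedding onto the feature map
        return x + self.proj(emb)[:, :, None, None]

class ContextBlock(nn.Module):
    def forward(self, x, context):
        # placeholder for a cross-attention block that mixes in the conditioning
        return x + 0.0 * context.mean()

class TimestepEmbedSequentialSketch(nn.Sequential):
    def forward(self, x, emb=None, context=None):
        for layer in self:
            if isinstance(layer, TimestepBlock):
                x = layer(x, emb)
            elif isinstance(layer, ContextBlock):
                x = layer(x, context)
            else:
                x = layer(x)    # ordinary layers only see the image features
        return x

block = TimestepEmbedSequentialSketch(nn.Conv2d(8, 8, 3, padding=1),
                                      TimestepBlock(8), ContextBlock())
out = block(torch.randn(1, 8, 16, 16), emb=torch.randn(1, 8), context=torch.randn(1, 4, 512))
print(out.shape)   # torch.Size([1, 8, 16, 16])
```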
The schedule registration is something I've covered previously anyway, and it's just a bunch of formulas; you wouldn't get any insight from me stepping through it, so I'm going to skip over that. OK, we're back in the LatentDiffusion class. We've generated the UNet and registered the schedule, so now let's continue and see what else is interesting here. I'm going to skip across all of this. Because we're now training the model in a holistic fashion, we obviously have to instantiate the first stage, and by first stage they mean the autoencoder. Let's again just briefly go through this one. This time we're forming this VQModel and not the AutoencoderKL, so that's a difference. So here we are, we're instantiating the VQModel, and that's going to call the VQModel constructor here, so let's start entering. OK, let's see the main difference. We still have the encoder (I'm going to toggle off all the breakpoints), we have the encoder, we have the decoder, nothing has changed there. The only difference is that the loss is going to be an identity this time, because we're not training the autoencoder; we'll just be loading the pretrained weights. So let me enable all the breakpoints and enter this part. Some embeddings, and well, it's not just "blah, blah, blah", the embeddings are actually what's important in this model. This is the codebook: you can see there are 16,384 codebook vectors, each of dimensionality 4, and that's what's used to do the quantization later in the forward step; we'll see how that looks a bit later. OK, so that's the quantization part. Then we have the conv layer, same as with the AutoencoderKL, nothing has changed there, and we can skip that and all of this. And now we initialize from the pretrained checkpoint. I'm just going to skim over all of that; we're just doing the initialization of the autoencoder, because remember how the whole logic works: you first pretrain the autoencoder, then you freeze it and use its latent space, and only then do you train the UNet and the conditioning model and everything else. That's why we're loading the weights here. OK, and that's it. Now we set the eval mode, set the gradients to false everywhere, and we can just continue with the execution. Let me hit F5; we exit the instantiate-first-stage function, and now we instantiate the conditioning stage. This time let's enter this one. I think we're just going to have class information: you can see here that a ClassEmbedder is the type of conditioning model we're instantiating. Let's enter there. You can see it's simply a thousand classes, because we're dealing with ImageNet, and an embedding dimension, and then it literally just does the embedding lookup in the forward pass. That's it, that's how the conditioning stage model looks. Let me remind you, let me show you the diagram here: that's this part in the image. This part here is what we've just instantiated, and this is the UNet. So we are in stage two. OK, let's go back here, let me keep stepping over, and that's it, guys. Now I'm going to skip across; this is the main function again, and I'm going to skip everything here.
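As a quick sketch of that class-conditioning model, assuming the batch key and the embedding dimension below (both are illustrative):

```python
import torch
import torch.nn as nn

# Minimal sketch of a class-conditional embedder like the one instantiated above:
# an embedding table mapping ImageNet class ids to conditioning vectors.
class ClassEmbedderSketch(nn.Module):
    def __init__(self, n_classes=1000, embed_dim=512, key="class_label"):
        super().__init__()
        self.key = key
        self.embedding = nn.Embedding(n_classes, embed_dim)

    def forward(self, batch):
        labels = batch[self.key]            # integer class ids from the data loader
        return self.embedding(labels)       # (B, embed_dim) conditioning vectors

embedder = ClassEmbedderSketch()
cond = embedder({"class_label": torch.tensor([0])})
print(cond.shape)    # torch.Size([1, 512])
```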
I'm also going to skip the data, because that's again just ImageNet, and I'm going to stop at trainer.fit here. Hitting F5 and waiting for everything to, oops, I'll have to disable the breakpoints first and only then will this work. So disable them, enable just this one, hit F5, get to trainer.fit, and then we can start analyzing how this works. OK, so enabling the breakpoints, and we're going to hit the validation batch as usual. Actually, what I've done here is add a breakpoint in configure_optimizers this time; we're just using AdamW here, and nothing else is important. I'm going to skip over this, again PyTorch Lightning stuff, so not important. Here is the validation loader; I'm going to disable all breakpoints, just end up here, hit F5, and we end up in the training part. OK, here we are. I'm going to step over this and now enable the breakpoints. Let's just go through this idiosyncratic part again, and now we're loading the data. Everything remains the same: we have an example with keys such as the image and the labels. I stepped through that part, and we end up in the important part, the training step; again, a function that PyTorch Lightning requires you to define. You can see that we have our batch here, and everything is the same as when we were training our autoencoder: the batch image shape is a 256 by 256, three-channel image. OK, let's start stepping through this shared step. The first part is that we call this get_input, and I think that's just going to grab us the image. Yes, you can see here we just grab the image, and we end up with x of shape 256, 256, 3. All of that is as usual; now we just push it to the GPU. OK, so here's what we do next: we encode using the first stage. That means we don't want to deal with images anymore; when we're training the diffusion part of the LDM, the latent diffusion model, we want to deal with the latent space instead. That's why we call encode_first_stage. Let's do F10. Here is encode_first_stage, and here's what it does: it basically calls the encode function of the first-stage model. Here's how that looks; it's just going to call the encoder, so let's hit F10, and we're in the encoder. Everything is the same as in our autoencoder from the first part of the video, so I'm going to hit F5: just a bunch of conv layers, ResNet blocks, and downsampling. F10, and we end up with the representation: h has shape 32, 32, 4. Four is the number of latent channels, and this is the spatial dimensionality. Now we do some processing with a conv layer and return that representation, and that's the encoder posterior. You can see that the shape here is this; it's not a Gaussian distribution anymore, because this type of autoencoder works a bit differently, but the logic is fairly similar. So now let's call this get_first_stage_encoding. Let me hit F12 just to see. OK, I'm going to enter here. We can see it's not that distribution object, so we will not sample from it; instead, because it's a tensor, we simply create this variable name binding and just scale it with some constant factor.
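Roughly, the whole "encode first stage plus scale" step boils down to something like the following sketch, where first_stage_model and scale_factor are stand-ins for whatever the config wires up:

```python
import torch

@torch.no_grad()
def encode_to_latent(first_stage_model, image, scale_factor=1.0):
    # frozen autoencoder, so no gradients are needed here
    posterior_or_z = first_stage_model.encode(image)
    if hasattr(posterior_or_z, "sample"):   # KL autoencoder returns a distribution
        z = posterior_or_z.sample()
    else:                                    # VQ-style model already returns a tensor
        z = posterior_or_z
    return scale_factor * z                  # e.g. a (B, 4, 32, 32) latent

class _DummyAE:
    def encode(self, x):                     # stands in for the frozen first-stage encoder
        return torch.randn(x.shape[0], 4, 32, 32)

z = encode_to_latent(_DummyAE(), torch.randn(1, 3, 256, 256))
print(z.shape)    # torch.Size([1, 4, 32, 32])
```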
Let me see what that scale factor actually is and how it's defined; I'm not sure about it. OK, so it's one, so we can just ignore it. Let's continue. So now we have our latent representation. Because we have conditioning, and the conditioning key is, I thought, cross attention or something, but no, it's the class label, although we are going to integrate it using the cross-attention logic. So let's step over here. What we do is just pass the batch, because the batch contains, if you recall, a bunch of keys, and among them it contains the label. That's how they've implemented this: they basically pass more data than is needed, and we'll see how it's integrated a bit later. So let's see what's going on here: we just map xc to c, so that's again the batch information, and then we skip all of this, and finally we return. So this is the latent representation, and this is the conditioning information, plus a bit more stuff, because it's the whole batch. OK, and we return all of that. That's the first part of the shared step function. Again, recall that we're currently in the LatentDiffusion model; if I scroll all the way up here, you'll see it: LatentDiffusion. OK, so let's go back. Now we do the forward prop through the diffusion model, so let's see how that looks. F10, and here we are in the forward pass. We generate some timesteps randomly, basically up to a thousand, so we sample the timestep information randomly. And now we do the conditioning. Because the conditioning model is trainable, we call get_learned_conditioning. Let's see what happens there: we just call its forward pass and pass c, and c is, as you can see, still the batch information. If I enter the forward function of the ClassEmbedder, you can see that we now extract the key, the class label, which is going to be the label of the image, so it's zero, and then we embed it using the embedding table here. So we return some representation that should have however many dimensions this thing had; I think it was 128 or something, so let's check: c.shape is 512. OK, so we return that, we return c, and here we are. We have c now, and finally we pass, sorry, this is not the image, x should be the latent representation, right? Yeah, it is. So we pass the latent representation, we pass the conditioning information, we pass t, and we compute the losses. This is basically what we saw here: we're literally randomly sampling these latent representations, the noise, and the timesteps. That's it, let's go back to VS Code. OK, so now we have the p_losses function, and this is where the whole magic of the training happens. We sample some normal noise, so here we are, we sample the normal noise, and then we do the q_sample. That's the noising process: we start from our clean latent representation and add t steps of noise on top of it. We simulate that; if you recall from my previous videos, there is a formula that lets us do it in a single step, by just combining the starting representation with the noise using these non-learnable parameters from the scheduler.
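Here is a small sketch of that single-step forward noising (q_sample), with an illustrative linear beta schedule standing in for the registered scheduler constants:

```python
import torch

# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
# where alpha_bar_t is the cumulative product of the scheduler's alphas.
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)                 # illustrative beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(1, 4, 32, 32)                        # a clean latent
t = torch.randint(0, T, (1,))
noise = torch.randn_like(x0)
x_t = q_sample(x0, t, noise)
print(x_t.shape, t.item())
```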
We end up with the noisy version, and now we just apply the model: we pass the noisy latent, the t, and the conditioning. So that's literally this formula here, we're passing x_t, t, and c, and let me show you that in the paper: this is formula 3 in the LDM paper. We pass these variables here, and we now pass them through the UNet so that we get the noise as the output. OK, so let's go back to the code, hit F10, and enter the apply_model function. What happens here is just some variable packing, nothing fundamental. We just pass that vector, which was 512-dimensional (whoops, I actually have to extract the first element because we packed it into a list, just a detail); yeah, we're passing the same information, so that's the conditioning vector. OK, and here we are: now we call what should be the UNet. Let me just check: if I do type() on this object (and this code is so nice that I can do this kind of debugging so easily), it's a DiffusionWrapper, which contains the UNet model and the scheduler. OK, so if I do F10, we enter the DiffusionWrapper, and because we have the conditioning key set to crossattn, we call the diffusion model and pass the conditioning, the timesteps, and the latent representation, and all of that is automatically handled and integrated via cross attention. If I hit F10 we should be in the forward pass of the UNet model; let's see whether that's indeed the case, and yeah, you can see here this is the definition of the UNet, we're in the right spot. So let's continue. We first embed the timestep information, and we end up with some processing on top of those temporal representations; you can see what the dimensionality of the temporal information now is. OK, and now we start integrating; now we literally start going through the UNet, and you can see we always pass the representation, the temporal embedding, and the conditioning information. And this is where the trick comes in: this is where that special object I mentioned some 20 minutes ago comes into the picture, because it knows exactly what to pass where. I'm going to put a breakpoint here, click F10, and you can see we immediately hit this one. Depending on what instance this layer is, whether it's a spatial transformer, a timestep block, or just a plain block, it calls one of the three versions of the layer call. And that's it. Now I'm going to hit F5, and it hits the spatial transformer part. OK, so we hit the SpatialTransformer, and this is where the contextual information comes in. This should be 512: yeah, so this is the conditioning information from the label. You can see that we now basically do standard transformer logic, with the conditioning added via cross attention. We do some projections, we rearrange our representation so that it's suitable for transformers (we have the batch size, the sequence size, basically flattening out the height and the width, and the number of channels), and then we pass that through and do the cross attention with the transformer blocks. So let me do this. And that's it, guys, that's the whole logic. I'm going to hit F5 again, and I'll have to remove this breakpoint.
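Coming back to the spatial transformer for a second, here is a hedged sketch of the cross-attention idea, queries from the flattened image features and keys/values from the conditioning, with illustrative dimensions and PyTorch's built-in attention instead of the repo's custom blocks:

```python
import torch
import torch.nn as nn

class CrossAttentionSketch(nn.Module):
    def __init__(self, dim=320, context_dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, kdim=context_dim,
                                          vdim=context_dim, batch_first=True)

    def forward(self, x, context):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C): image tokens
        out, _ = self.attn(tokens, context, context)    # Q from image, K/V from conditioning
        return out.transpose(1, 2).view(b, c, h, w)

x = torch.randn(1, 320, 32, 32)        # a UNet feature map
context = torch.randn(1, 1, 512)       # e.g. one class-embedding token
print(CrossAttentionSketch()(x, context).shape)   # torch.Size([1, 320, 32, 32])
```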
And now let's get out of here; let's exit this function. OK, and I'm going to put a breakpoint here, hit F5, exit this function, and now I'm going to hit F5 again, and we're exiting the UNet forward pass. So let's exit here and see what the output shape is. It should be the same as the input latent representation. Whoops, let's hit F10. Now we have out; I'm not sure whether this is going to, yeah, we have the same shape as the input representation we passed in, but this is now the prediction: this is the noise that was put on top of the input latent representation. So again, simply put: we have a UNet model, we pass the input latent representation, we have some timestep information and some conditioning information, we use the timestep information to noise the input latent, we combine that with the conditioning information, we do a forward pass through the UNet, and we predict back the noise. And this is where we are at the moment: we have the noise. Let's now see what's going on. We return that noise, and here is how the loss looks in the case where we use epsilon prediction as the parameterization: you can see that exactly the noise that was used to noise the initial representation is now the target variable we're trying to predict. So that's it, as simple as that, and this is literally just the MSE loss or something like that. Let me just check here. Yeah, it's literally just the L2 loss, nothing other than that. And that's it, as simple as that. Again, I think this logvar is set to zero, so it will not influence anything; yeah, by doing this we don't change the loss, and then we just do some weighting, and that's it. I'm not sure why we need the VLB loss, the variational lower bound loss, because it's going to be the same computation as what we had up there. Look: this thing here is the same as this thing here. So if I hit F11 and enter, this will again hit the L2 loss branch, so we compute the L2 loss again and return that. OK, I'm not sure why we're computing this, because it literally gives the same result: if I print loss_simple here and loss_vlb, the variational lower bound, we get the same values, computed by literally the same lines. The only difference is that there is a different weight on this loss, and because this factor is zero, it won't even have an impact on the final loss. That part is kind of confusing. And yeah, that's pretty much it. After this, there is additionally some logic with the EMA here, but nothing vital there. OK, so we can skip all of this, and that's it; now we just keep repeating batch after batch. So that's it: just a forward pass through the UNet, and then we basically do an L2 loss between the predicted noise and the noise we used to noise our input representation. Fairly simple, literally the formula we saw in the paper, just played out here in the code. So that's this formula here: we randomly sample the noise and the timesteps, and the conditioning label, which is obviously correlated with the image, and then we just use the latent representation of that image.
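Putting the pieces together, the training objective just described is roughly the following sketch (it reuses the q_sample helper from the earlier snippet, and unet is assumed to take (x_t, t, context) and return a tensor shaped like x_t):

```python
import torch
import torch.nn.functional as F

def ldm_training_loss(unet, z0, context, T=1000):
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    z_t = q_sample(z0, t, noise)            # forward-noise the clean latent
    pred_noise = unet(z_t, t, context)      # the UNet predicts the added noise
    return F.mse_loss(pred_noise, noise)    # simple L2 between true and predicted noise
```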
So formula 3 is everything that happens in stage two of training this system. After we've done this, we can take the pretrained weights and start sampling, so let me show you how we can sample using this code. I'm going to stop this and go to the launch.json, and this time we're going to be using the text-to-image script, this one. The only argument you have to pass is this --plms flag; that's the scheduler we're using. You could also use DDIM, but this one has higher quality, and they showed in the paper that DDIM is just a special case of the PLMS scheduler. OK, so let's open up txt2img and start. OK, guys, let's pick the correct configuration here, I'm going to hit run, and we'll soon start executing the sampling script. Using a textual prompt, you can now generate images; that's how these text-conditioned image generation models became popular, you tweak the prompts and generate corresponding images. So here's the prompt I'm currently using: "a painting of an AI having an epiphany moment". You specify the output directory; this one is not important, skipping all of these. The number of diffusion steps, that's important; we'll be using five in this example, just to make sure this executes very quickly. I'm setting the --plms flag so that we use that scheduler. This one is also not important; this is if you want to always start from the same latent, which constrains your outputs to be less diverse as a consequence, and we don't need that. We don't care about this variable either. This is how many images we're generating, just nine. The input image dimensions are 512 by 512, the number of latent channels is four, the downsampling factor is eight, the number of samples is one, and we can skip all the rest; there are too many of these, oh my God. OK, I'm just going to hit F5 and get here. We can skip this part because it's false. Then we load the configuration, so let's see what that's going to be. We want inference, so let's go to the configs, go to stable-diffusion, and here's how the inference config looks. Again, it literally just specifies the latent model, the UNet, the autoencoder, pretty much everything, and this time we're going to use CLIP for the conditioning stage, not the class-conditional information. All of that is specified inside a config file. A couple of things you need to do if you want to follow along: first, you have to go to this page and download the weights. The best ones are v1.4, or you can play with v1.3 as well; download those and put them in the corresponding directory. After that, there are a couple more tweaks I had to make. So, let's go to the latent diffusion ImageNet config file I'm using: here is where you specify the checkpoint path, so wherever you downloaded the weights, you need to point the checkpoint path there, and there's also the batch size, which I reduced from 64 to 1, otherwise I was getting out-of-memory exceptions.
OK, so that's one thing. And then you also have to modify this txt2img script. What I had to do is, and this is very dirty, just to get it to work on my machine, there are better ways to do this, and it's much better to use the diffusers library than to do what I've done here, because they actually accumulate some parts in FP32 instead of doing everything in FP16 as I'm doing. As a consequence, I'm probably getting slightly lower-quality images, but it works and I can step through the code. So that's one tweak. Then, let me see what else I had to change to get this to work: I set the number of samples to one, because otherwise, with a batch size of three, I was getting out-of-memory exceptions as well. OK, here's the checkpoint information; you don't need this part. And finally, I've explicitly made it such that we're dealing with float16 here and not with mixed precision, because, if I recall correctly, I was hitting errors if I didn't put FP16 here explicitly. And lastly, just for the sake of speed, I commented out the check_safety functionality, which checks whether you have not-safe-for-work or other problematic content. OK, having said that, let's go back to the txt2img script and continue; that's everything you need to know. We now load the model, the diffusion model this time. We've seen all of this, so I'm going to disable the breakpoints; actually, we're going to enter this part, and I'm only going to show you the difference, which is loading the CLIP model. I'm going to hit F10, and let's see how the CLIP model is used to create the conditioning information from the input prompt; that's going to be the only interesting part. OK, I'm now going to toggle on all the breakpoints and hit F10, and here we are: we're now creating the conditioning stage, instantiated from the config, and we have the FrozenCLIPEmbedder. Let's hit F10. So here it is: what they do is use Hugging Face's pretrained CLIP tokenizer and pretrained text model, and that's it. I've covered CLIP in one of my previous videos, so I'm going to link it somewhere here; you can go check it out if you want to understand how CLIP exactly works. I also covered the papers, so there's plenty of information about CLIP on my channel. That's it. I'm going to leave the breakpoint in the forward function and exit here, and we're done. We just set the gradients to false, because we don't want to train this. We're now in the sampling stage, so I'm going to go back to txt2img, put a breakpoint here, disable everything else, and just leave that breakpoint. Let's exit the load_model_from_config function and continue from there. OK, so we push the model to the GPU and instantiate the PLMS sampler. When it comes to its init function, it's not that complicated; we literally just store the LDM model here. Let me show you: the type is going to be LatentDiffusion, you can see LatentDiffusion here, and a thousand steps were used to train the model. That's important information for forming the schedule, and the schedule is going to be a linear one. OK, I'll show you more once we get to the actual sampling.
They additionally have this watermark tool, which encodes a watermark, invisible to human eyes, into the image, so that we know it was a machine-generated image; later, that watermark can be used to exclude those samples from training data, if they decide to do that. I guess that's one of the main reasons they're doing it, as well as to catch someone who generates images without giving proper credit to stable diffusion, I assume. OK, so here's the prompt, "a painting of an AI having an epiphany moment", and we form some output directories; nothing interesting there. And finally, here is the interesting part: we're going to start the sampling here. I'm going to hit F5 to exit that part, and here it is. We have a single prompt, so this loop is going to be trivial. The first thing we do is get the learned conditioning for the empty prompt. This is, again, the classifier-free guidance technique: because our guidance scale is 7.5, we enter this branch. Let me just make sure everything is enabled, enter this, and see how CLIP works here. What we do is encode the prompt, and the prompt in this particular case is an empty prompt. Encoding is just a forward pass through the FrozenCLIPEmbedder: we tokenize the text, and you can see here we get the batch encodings. Since it's an empty prompt, it's going to be fairly trivial; let's see the shape. Yeah, it's a trivial encoding, basically all of the token IDs are the same: this one is the beginning-of-text token, and these are the end-of-text tokens. We can validate that by doing the following: tokenizer.decode, I think, was the name, and we pass this number, and let's see what it is: "start of text"; and the one ending in 7 is "end of text". That's how the sequence of IDs is formed. We push them to the GPU and pass them to the transformer, which is the textual part of the CLIP model, and we end up with a final representation of shape 77 by 768. That's the final representation that came out of CLIP, and we're going to use it to condition the UNet model; that's the idea. OK, so let's get back here. We return that information, and that's what's going to be used to condition the UNet. Now we do the same thing, just with the actual prompt, so I'm going to disable the breakpoints and skip that. c is going to be the same shape as uc, uc standing for unconditional conditioning. And finally, here is where the sampling starts. After the sampling, we simply pass the final latent representation through the decoder, which is just a set of conv layers, attention layers, upsampling, and so on, do some clamping, put the final representation, by which I mean the image, on the CPU, and then we can store the image. Everything else is just arranging the images into grids. So the gist of the logic is in the sample part; I'm going to enable all the breakpoints, enter this part, hit F5, and here we are.
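As a hedged sketch of that CLIP text conditioning, using the Hugging Face classes mentioned above; the model id and the max length of 77 are my assumptions matching the standard CLIP ViT-L/14 setup:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").eval()

@torch.no_grad()
def encode_prompt(prompt):
    # tokenize to a fixed length of 77, then run the frozen text transformer
    tokens = tokenizer(prompt, truncation=True, max_length=77,
                       padding="max_length", return_tensors="pt")
    return text_model(input_ids=tokens["input_ids"]).last_hidden_state

print(encode_prompt("").shape)   # torch.Size([1, 77, 768]): the "unconditional" conditioning
print(encode_prompt("a painting of an AI having an epiphany moment").shape)
```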
So we enter the sample function, and because we have conditioning, we do some error checking there, and then we make the schedule. make_schedule is just going to set the appropriate constants, so let me go into it. First of all, we have a uniform discretization. The number of steps is five, because that's what I set; this is just for demo purposes. Usually you want to use something like 50, or 200 if you want slightly better results, but there is definitely a saturation effect going on, so 50 is completely fine. This, a thousand, is the actual number of steps we used during training; that's vital information for the scheduler, so that we can construct the final set of timesteps. I'm going to disable these, and we end up, as you can see here, with 1, 201, 401, 601, 801. That's how we uniformly subsampled our thousand training timesteps down to only five sampling timesteps. OK, next up, we grab the original alphas from our diffusion model, we create some lambda functions, and then, as I said, we just start creating these non-learnable constants. I'm going to skip across all of these, because it's really hard to explain them without taking an hour or so. OK, so finally we have those constants in place and we've formed our timesteps, so now let's enter the logic here: enable breakpoints, hit F5, and enter this part. Here we are; we're starting to generate the image right now. First we generate the initial random noise tensor, which is going to be 64 by 64 by 4, because that's the size of the latent space of this particular LDM. We start there, and then we form the timesteps; you can see here 1, 201, and so on, and we just reverse them, because when we're generating, we want to start from the end: we start from the noise image and go backwards in time until we generate an actual image from the data distribution we learned. OK, "running PLMS sampling with 5 timesteps", let's continue. We create the timesteps there; the first one should be, I guess, 801, and yes, t is 801, and then we grab the next one, because the PLMS logic needs it; that's going to be, I guess, 601. Yeah. Let's continue. So now we call p_sample_plms. The code is fairly messy and complicated, so I apologize for not being able to explain this more clearly, but I'm giving it my best here, so stick with me. OK, let's see how the final logic looks. I'm setting a breakpoint here (we'll hit that line a bit later), and for now we just grab these constants: alphas, sigmas, and so on. I'm also going to put a breakpoint here, so we're going to enter that function. And here is the final logic that the PLMS sampler does. First we get the model output, and that's basically a forward pass through the UNet, so let's go there and see what's going on. Here it is: because we're doing classifier-free guidance, we have to repeat our input representation, which is currently just pure noise, and our timesteps as well, and we concatenate both the unconditional conditioning and the conditioning from the actual prompt. Then we do a forward pass through the UNet model; I'm going to disable the breakpoints during the forward pass, because nothing insightful happens there.
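A hedged sketch of that classifier-free guidance trick: duplicate the batch, run the UNet once with the empty-prompt conditioning and once with the real prompt, and extrapolate away from the unconditional prediction (names are illustrative):

```python
import torch

def guided_noise(unet, x_t, t, cond, uncond, guidance_scale=7.5):
    x_in = torch.cat([x_t, x_t])            # batch the two passes together
    t_in = torch.cat([t, t])
    c_in = torch.cat([uncond, cond])
    e_uncond, e_cond = unet(x_in, t_in, c_in).chunk(2)
    # move away from the unconditional prediction, towards the prompt-conditioned one
    return e_uncond + guidance_scale * (e_cond - e_uncond)
```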
Coming out of that forward pass, we get the two output predictions, which we then combine: we combine the noise we got when conditioning the UNet on the unconditional (empty) conditioning with the noise we got using the actual prompt conditioning, and that's how we form our final noise prediction. OK, after that; well, it took me some time to figure out how to connect this code with the formulas from the paper, so let me show you a side-by-side comparison of the code and the formulas, and let's figure out what's going on here. OK, guys, so here is the paper: I've opened up the "Pseudo Numerical Methods for Diffusion Models on Manifolds" paper. It's a mouthful, even the title is hard to comprehend. Let's get back to this statement here; oops, let me just find it. Basically, we see four branches here, and the reason is that they're using this linear multi-step method. They say here that they cannot use the linear multi-step method initially, because it cannot start automatically; it needs at least three previous steps' information to generate results, so they use the Runge-Kutta method to compute the first three steps' results and then use the linear multi-step method to calculate the remaining ones. OK, so I've made some annotations so we can find the precise formula for each of these branches. Maybe I'll start with the fourth branch, the final case: once we've had at least three steps of this sampling, we end up hitting this branch every single time. So let's go to Formula 12 and convince ourselves that this makes sense. Here is Formula 12: you can see that the first step is to calculate epsilon theta, and that's what we've done here in the get-model-output step, so we have our epsilon, the noise prediction. The next step, as you can see here, is to calculate this epsilon prime, which is one over 24 times some expression, and you can see it corresponds directly to this line: 55 e_t minus 59 e_{t-delta}, and so on. So this corresponds to this. And here is the third step, this phi function, which I think they call the transfer function or something like that; it's defined here, but we won't use it for this explanation. So let me just get back here. What I figured out is that this function here, which we'll get to in a couple of seconds (let me just enable all of the breakpoints for a second), corresponds to Formula 8, so that's this formula here; we'll convince ourselves of that in a couple of seconds. OK, but let me get back here. We saw that this expression makes sense; now let's make sense of the other branches. For this one, I could not find the corresponding expression in the paper, so if anyone knows what the heck is going on, feel free to comment down below, I'm listening. For this branch here, I found that Formula 23 corresponds to it, so let me find that one; 23 is in the appendix of the paper. It's kind of hard to even find the correspondence between these, let alone have real intuition for it.
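For reference, my reading of that fourth branch in LaTeX: the 1/24, 55 and 59 coefficients are the ones read off the code above, and the remaining two are filled in from the standard fourth-order linear multi-step (Adams-Bashforth-style) form, so treat this as a paraphrase rather than a quote of the paper:

```latex
\epsilon' = \frac{1}{24}\left(55\,\epsilon_t - 59\,\epsilon_{t-\delta}
            + 37\,\epsilon_{t-2\delta} - 9\,\epsilon_{t-3\delta}\right)
```

The first three sampling steps cannot use this yet, since there are no previous epsilons to plug in, which is why the earlier branches fall back to the Runge-Kutta-style estimates.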
It's more of a: OK, we associated this differential equation with this diffusion process, and we can then automatically pick up the tools we already have from a long and rich history of solving differential equations and apply them to diffusion. But intuition-wise, I'm not sure anyone fully understands what's going on here. I might be wrong, but that's my current understanding of things. OK, so I said Formula 23. Here it is. You can see we calculate epsilon again and then we have one half of three epsilon minus epsilon old, and you can see that exact formula here; we are computing that result here. And then finally, let me go to Equation 22; that's the first branch of this complex branching. So first, on a high level, let me step over here. This function here, get_x_prev_and_pred_x0, is calculating whatever phi is; we'll see what it is in a second. Once we have the result, this x_prev, we feed it back into the neural network and again grab the noise prediction, and that corresponds to this step here: whatever is output from the second step, we feed it back into our neural network together with t_next, as you can see here. That's why we have t plus delta here, and we get back the result. OK, and then finally we grab the result from this step, add it to the epsilon from the previous step and divide by two. So that's this part here, and we end up with e_t prime. After that, we again call the phi function here. So now I guess it boils down to figuring out what this phi function is. Let's step inside of it; let me show you what's going on here. Again, these are just some non-learnable expressions, so I'm going to skip those, but let me find Formula 8, which I found corresponds directly to this code here. So let me find that. OK, Formula 8, here it is. Let's convince ourselves that this makes sense. Here we have x minus the square root of one minus a_t, times epsilon. We can see that corresponds to this expression here: we basically have x minus the square root of one minus this cumulative product of alphas, and then we multiply that, as you can see here, with the output of our neural network, and then we divide all of that by the square root of a_t here. That's pretty much the first term. Then we calculate the second part, and the second part, as you can see here, computes this: the square root of one minus a_prev minus sigma squared, all of that times the epsilon. And you can see that's precisely this term here. So let me now continue stepping. Now we compute the noise, and finally let's see how all of that is combined. We have the square root of a_prev, so that's this part, times whatever we predicted up there, which was the first term, plus this second term here, and then finally plus the noise. So that's this part here. Let me just check what the value of sigma_t is. And it's zero. OK, so this part, the noise, will actually be ignored, because it's multiplied by sigma_t. Let me just go into the debug console and convince ourselves that sigma_t is zero. And that makes sense, because here in the paper they mention somewhere that they only care about the case where sigma equals zero. Let me just find that. OK, I found it.
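Here is how that phi / get_x_prev_and_pred_x0 computation (Formula 8) looks as a small sketch, assuming a_t and a_prev are the cumulative alpha products (as tensors) for the current and the previous time step; with sigma_t set to 0, the noise term drops out, which is exactly what the debugger just showed:

```python
import torch

def phi_formula8(x_t, eps, a_t, a_prev, sigma_t=0.0):
    # Sketch of Formula 8: step from x_t towards x at the previous (less noisy) time step.
    pred_x0 = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)  # predicted clean latent
    dir_xt = torch.sqrt(1.0 - a_prev - sigma_t ** 2) * eps           # direction pointing back to x_t
    noise = sigma_t * torch.randn_like(x_t)                          # vanishes when sigma_t == 0 (DDIM case)
    x_prev = torch.sqrt(a_prev) * pred_x0 + dir_xt + noise
    return x_prev, pred_x0
```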
I didn't highlight it initially, so it was hard to find. They say here: "therefore, our work concentrates on the case where sigma equals zero." OK, let me read this for you. Sigma controls the ratio of random noise, because it's modulating the noise term, as you can see here in Formula 8. If sigma equals one, Equation 8 represents the reverse process of DDPMs, the denoising diffusion probabilistic models, and if sigma equals zero, the equation represents the reverse process of DDIMs. And only when sigma equals zero does the equation remove the random term and become the discrete form of a certain ODE, an ordinary differential equation. Theoretically, the numerical methods that can be used on differential equations with random terms are limited. So that's why we want to set sigma to zero: there is a richer set of tools available when we are not dealing with that random term. And empirically, the authors note, DDIMs have a better acceleration effect when the total number of steps is relatively small. Therefore, their work concentrates on the case where sigma equals zero. OK, so guys, that's pretty much it. I'm going to open up the code here. As I said, I cannot provide you with much more intuition than this. So let me step through all of this: I'm going to set a breakpoint here, remove the disabled breakpoints, enable this one, hit F5 and get back to the function. OK, guys, so here we are. We now take that output, we basically average it with the last prediction from the network, dividing by two as per the formulas we saw previously. Next up, we compute the phi function again, and that's it. So we return x_prev, which is the next step in the reverse diffusion process, as well as the noise prediction, so we are slowly getting towards the clean image. And that's pretty much it; I'm not going to keep stepping through every iteration here. As you can see, we append the epsilon to this array of old epsilons, which is used, if you remember, in the four branches. So this is where we collect the old epsilons and then pass them back in here. Then there is some circular-array logic for those old epsilons, and the callbacks are not important, so I'm going to skip all of that. Now we just keep on iterating, and that's it. In this particular example we'll have five steps, because that's how I configured it, but in general you'd have something like 50 steps, or 200 if you want slightly better image quality. But I'm going to stop this here, and I'm going to finally just show you briefly the safety function they added, which might be interesting to some of you. So let me show you how that functions. Let me just find it; it's in the text-to-image script somewhere... OK, here. So here is the check_safety function. Basically, what happens is, once you have generated the image and done the clamping and so on, you convert it into a NumPy array, so basically you have an image. Now they call this check_safety function for you, and what it does is call this safety feature extractor. Let's see what that is: it's some pre-trained model from the Hugging Face hub. And then they call this safety checker, which is also, as you can see here, some pre-trained model from Hugging Face.
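Before going further with the safety checker, here is a rough sketch of how the outer sampling loop would carry those old epsilons around; it's a simplification that reuses `plms_eps_prime` and the guided `eps_at` call from the sketches above, with the circular-buffer trimming being the `pop(0)` at the end:

```python
def plms_sample_loop(eps_at, phi, x_T, timesteps):
    # timesteps already reversed for sampling, e.g. [801, 601, 401, 201, 1]
    x = x_T
    old_eps = []
    for i, t in enumerate(timesteps):
        t_next = timesteps[i + 1] if i + 1 < len(timesteps) else 0
        eps_t = eps_at(x, t)                                           # guided UNet noise prediction
        x = plms_eps_prime(eps_t, old_eps, eps_at, x, t, t_next, phi)  # one reverse-diffusion step
        old_eps.append(eps_t)
        if len(old_eps) >= 4:   # keep only the three most recent noise predictions
            old_eps.pop(0)
    return x
```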
And so what's interesting here is that they do some checks on whether you have a not-safe-for-work concept inside the image, and if so, for that particular image, they load a replacement. And I think this one is fairly cool. Basically, what they do in this repo is take the image and load this replacement image here; I'm going to show you what it is in a second. Basically, they're rickrolling us. So let's find the assets folder, and under assets we are looking for "rick", and that's going to be this one. OK, so they basically load this image in case you have not-safe-for-work content, and you get it back instead. Now, the problem is, I was playing with this code base a bit, and even though I was not generating anything explicit, I was still getting this function triggered, so yeah, it's not perfect; that's the point. We can also see the definition: I actually found it in the diffusers library. Here you can see the safety checker and how all of those functions are defined, and you can go through this if you care about it. But the bottom line is that I couldn't find how the models were trained. I guess that's by design, because they don't want you to know how to get around the filter, although I don't know whether that's the best position on this topic; I guess it's highly debatable. So yeah, you can go through this code and explore it at your own pace. OK guys, this was a super long video. We saw a lot of things: we saw how to train and sample from this class of models, latent diffusion models. We saw how to first train the autoencoder, whose weights are then frozen so that its latent space can be used in the second stage, where we train the LDM itself, so basically the UNet plus the conditioning model. And finally, we saw how to sample from these models, and you saw that some of the formulas and the connections with differential equations make it kind of hard to have a clear intuition. But I'm curious to hear whether and how you understand how diffusion models work, so if you have any intuitive explanation, feel free to comment down below; I'll try to read all of those because I'm super curious. In any case, if you like this video, share it out, subscribe to this channel, and until next time, bye bye.
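As a small addendum on that safety function: roughly, the check-and-replace flow amounts to the sketch below. The `feature_extractor` and `safety_checker` arguments stand in for the pre-trained Hugging Face components mentioned above, and `load_replacement` is a hypothetical helper; the exact classes and signatures in the repo may differ.

```python
import numpy as np
from PIL import Image

def load_replacement(shape):
    # hypothetical helper: load the placeholder from assets/ and resize it to match
    h, w = shape[0], shape[1]
    rick = Image.open("assets/rick.jpeg").resize((w, h))
    return np.asarray(rick, dtype=np.float32) / 255.0

def check_safety(images, feature_extractor, safety_checker):
    # images: batch of HWC float arrays in [0, 1]; both models are assumed to be
    # pre-trained components loaded from the Hugging Face hub (an assumption).
    pil_images = [Image.fromarray((img * 255).astype(np.uint8)) for img in images]
    clip_input = feature_extractor(pil_images, return_tensors="pt").pixel_values
    checked, has_nsfw = safety_checker(images=images, clip_input=clip_input)
    for i, flagged in enumerate(has_nsfw):
        if flagged:
            checked[i] = load_replacement(checked[i].shape)  # swap in the rickroll image
    return checked, has_nsfw
```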
[{"start": 0.0, "end": 16.0, "text": " What's up guys, Alex here. In this video we are doing a deep dive into stable diffusion model and by the end of this video you'll understand exactly how the training of both stages works, how the sampling works, how to generate images, etc."}, {"start": 16.0, "end": 45.0, "text": " Having said that, a week ago the weights for the stable diffusion models were published, which is super exciting. So we know that since the beginning of this year with the release of the Dali 2 paper, we had a Cambrian explosion of various image generation models such as, well, Dali 2, we had Mid Journey, we had Imagine, Party, we had Dali Mini, which is the open source implementation of Dali version 1, etc. etc."}, {"start": 45.0, "end": 60.0, "text": " And the reason why this model is so important is, well, there are multiple reasons. One of them is the images are super high quality and additionally you have much less constraints. So that means you can generate images of human faces."}, {"start": 60.0, "end": 73.0, "text": " You can also, because the code is open source, remove the safety features and generate whatever you want, although I don't encourage you definitely to share malicious images across the internet, but if you want to experiment you can do that as well."}, {"start": 73.0, "end": 94.0, "text": " So that's kind of cool. Additionally, and very importantly, you can run this model directly on your machine, even if you only have a GPU that has 8GB of VRAM. So I personally have RTX 2080 on my laptop and I'm able to run this in Flow 16 without a problem and generate awesome images in a couple of seconds."}, {"start": 94.0, "end": 103.0, "text": " So that's again very cool. So it's much faster, it requires much less memory and it has less constraints and it's high quality. That's why stable diffusion is so interesting."}, {"start": 103.0, "end": 115.0, "text": " Okay, so I want to showcase a couple of very cool examples that some digital artists such as Xander here have been creating. You can see how cool these videos are and they were created using stable diffusion."}, {"start": 115.0, "end": 134.0, "text": " You can see that the name of this piece is called Voyage Through Time and what Xander did, and you can kind of go through the video and I strongly encourage you to check out this video, it's amazing, it's really like mind blowing, like that we can create this on like literally consumer grade GPUs and enjoy this piece of art."}, {"start": 134.0, "end": 154.0, "text": " Okay, so here's an example of the prompts and the seeds that Xander had to create to generate this cool video. 
So you can see there is a lot of like seed is being used as a hyperparameter, you have to tweak the seeds, you have to tweak the prompts, do some prompt engineering and at the end you end up with something as cool as this video."}, {"start": 154.0, "end": 166.0, "text": " Okay, so I said I also managed to run this stable diffusion on my GPU that has 8GB of VRAM, I was generating using Float 16 precision, but you can see the images are super high quality as well."}, {"start": 166.0, "end": 174.0, "text": " So yeah, these images were generated using a prompt, a painting of an AI robot having an epiphany moment."}, {"start": 174.0, "end": 196.0, "text": " And additionally, I'll basically release a script that I use to generate some cool interpolation, so basically the idea is to you pick, you generate a diverse set of images such as the set you've seen here, and you can pick two images you like and then basically do interpolation between them in the latent space of the model."}, {"start": 196.0, "end": 204.0, "text": " So that was inspired by Karpathy's gist, but yeah, I'm going to share that script very soon and also cover it in a different video."}, {"start": 204.0, "end": 215.0, "text": " So you can see here how it looks like interpolation between basically I'm interpolating between this image here and this image here, and I'm going to now show you how the procedure kind of looks like."}, {"start": 215.0, "end": 228.0, "text": " So here is how the image is being morphed as we are approaching the target image, and you can see there are some like jumps in the latent space, but all in all, let me kind of move this faster, you can see how cool it is."}, {"start": 228.0, "end": 231.0, "text": " And yeah, I'm going to share the script a bit later."}, {"start": 231.0, "end": 240.0, "text": " So having said that, let's jump into the code and well, first, let me show you some prerequisite knowledge here."}, {"start": 240.0, "end": 253.0, "text": " I'll have three papers which I'll consult. I will not do deep dives of the papers in this video. So one of those papers is a teaming transformers basically for high resolution image synthesis that introduced the VQGAN paper."}, {"start": 253.0, "end": 261.0, "text": " I previously covered that paper on my on my YouTube channel, so do check it out if you want to have a thorough understanding of how that works."}, {"start": 261.0, "end": 265.0, "text": " I'm going to consult some of the formulas later when I show you the code."}, {"start": 265.0, "end": 273.0, "text": " Next up, we have the high resolution image synthesis with latent diffusion models or LDMs, and that's the paper that's behind the stable diffusion model basically."}, {"start": 273.0, "end": 286.0, "text": " So LDMs is what's powering the stable diffusion. And finally, I'm going to briefly consult this paper, pseudo numerical methods for diffusion models on manifolds that introduced the PLMS."}, {"start": 286.0, "end": 301.0, "text": " So the pseudo linear multi-step scheduler that makes stable diffusion fast and high quality so you can literally have only 50 steps of diffusion model and still generate very high quality images."}, {"start": 301.0, "end": 306.0, "text": " OK, so I'm going to briefly walk you through some ideas in this paper, like super briefly."}, {"start": 306.0, "end": 318.0, "text": " And if you want to learn more about diffusion models, I have a whole diffusion playlist, so do check it out. 
I've been doing both the paper overviews as well as code walkthroughs of the original repos."}, {"start": 318.0, "end": 322.0, "text": " So do check those out if you want to have a deeper understanding of how diffusion models work."}, {"start": 322.0, "end": 328.0, "text": " Here I'm going to do mostly a diff between what has changed compared to those older models."}, {"start": 328.0, "end": 341.0, "text": " OK, let's start here. So to enable diffusion model training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pre-trained autoencoders."}, {"start": 341.0, "end": 352.0, "text": " So this is literally the main diff compared to all of the other papers. They train instead of working in the image space and doing the forward diffusion process and then learning the reverse process in the image space."}, {"start": 352.0, "end": 355.0, "text": " Instead, they do that in the latent space."}, {"start": 355.0, "end": 360.0, "text": " OK, let me just quickly show you the difference in the loss."}, {"start": 360.0, "end": 367.0, "text": " So here it is. So here is the original basically construction, how the original loss for the DPM model looked like."}, {"start": 367.0, "end": 377.0, "text": " You basically do you sample images here, you sample noise from the from the normal distribution, you sample different time steps and you literally just do MSC loss."}, {"start": 377.0, "end": 388.0, "text": " So the mean squared error loss between the noise and what you predict. So this is so epsilon theta is modeled usually as a unit architecture."}, {"start": 388.0, "end": 398.0, "text": " So using the unit architecture. And so what you do is take the X of t, which is the original input image, plus t steps of diffusion being applied to it."}, {"start": 398.0, "end": 414.0, "text": " So you kind of noise it and then you pass this noisy version and the time step that's that. So basically the time step information and then you need to to predict the noise that was literally used to noise that image in the forward process, if that makes sense."}, {"start": 414.0, "end": 422.0, "text": " So this is just a hopefully a recap for for most of you. And then you just keep on repeating this until you train this unit to predict the noise."}, {"start": 422.0, "end": 430.0, "text": " And then later you can just use it to denoise the well pure noise images such that you can generate cool images."}, {"start": 430.0, "end": 440.0, "text": " OK, so this is the only difference between LDM. So that this paper and the previous work, they are literally just working in the latent space of this encoder."}, {"start": 440.0, "end": 447.0, "text": " And you can see everything else remains the same. So literally the difference is the following."}, {"start": 447.0, "end": 460.0, "text": " So instead of working in the image space, you're going to whoops, you're going to have an like an encoder that's going to train being is going to be trained using the VQ similar to VQ GAN paper."}, {"start": 460.0, "end": 469.0, "text": " You end up input the image. You end up with a latent here and then everything else remains the same. You're now using this latent to train your diffusion models."}, {"start": 469.0, "end": 477.0, "text": " So here is the like a snippet from the DDPM paper. So now instead of using X of T will not be an image here."}, {"start": 477.0, "end": 485.0, "text": " It will it will be instead. 
Sorry, X of zero will not be an image is going to be instead like a latent representation of the image."}, {"start": 485.0, "end": 493.0, "text": " And that's pretty much it. Really, really. That's all there is to it. That's the only difference between this paper and and the previous art."}, {"start": 493.0, "end": 501.0, "text": " So here is how the how the diagram looks like. So we will the code walkthrough will contain the three three parts."}, {"start": 501.0, "end": 505.0, "text": " First part, I'm going to show you how to train the autoencoder here. It's going to be the first part of the video."}, {"start": 505.0, "end": 517.0, "text": " The second part, I'm going to show you how to train the unit. So the fusion model such that you can so you can see here we first end up in the latent space."}, {"start": 517.0, "end": 523.0, "text": " Then we do the forward diffusion process such that we end up with Z of T and then we diffuse it."}, {"start": 523.0, "end": 527.0, "text": " We will learn how to predict the noise that was added during this diffusion process here."}, {"start": 527.0, "end": 533.0, "text": " We learn how to predict it as the output. And well, that's it. Everything else is diffusion magic."}, {"start": 533.0, "end": 539.0, "text": " And also we'll have I'm going to show you how they are using it. We'll also be learning the conditioning model."}, {"start": 539.0, "end": 545.0, "text": " So we'll mostly be focusing on text as well as classes from the image net data set."}, {"start": 545.0, "end": 552.0, "text": " OK, that's it. And the third part of the video will be about basically how to sample once we train these models."}, {"start": 552.0, "end": 558.0, "text": " If you didn't understand everything because you lack some background, feel free to continue watching this video."}, {"start": 558.0, "end": 561.0, "text": " I think that the code will be fairly self-explanatory."}, {"start": 561.0, "end": 568.0, "text": " OK, so if you want to follow along what I'm doing here, you'll have to do a couple of steps."}, {"start": 568.0, "end": 572.0, "text": " Obviously, you have to clone the original repo to the stable diffusion repo here."}, {"start": 572.0, "end": 578.0, "text": " Just create the con the environment following the instructions here under the requirements section."}, {"start": 578.0, "end": 585.0, "text": " And after that, go ahead and download like such that we can have a minimal setup and just get something running."}, {"start": 585.0, "end": 589.0, "text": " You don't have to download the original image net data set, which is huge."}, {"start": 589.0, "end": 593.0, "text": " You can instead go to this like fast eyes image net."}, {"start": 593.0, "end": 599.0, "text": " However, we pronounce this thing like GitHub repo and download the smallest version possible."}, {"start": 599.0, "end": 610.0, "text": " So literally 160 pixels is going to download only 10 classes from ImageNet and is going to make us well set up for it for the training procedure."}, {"start": 610.0, "end": 615.0, "text": " Basically, this older repo latent diffusion, which preceded their stable diffusion repo,"}, {"start": 615.0, "end": 622.0, "text": " which I'm going to also link in the description, contains the necessary instructions for how you can unpack the data set"}, {"start": 622.0, "end": 628.0, "text": " and where you need to place it such that the script can recognize and the data can be loaded."}, {"start": 628.0, "end": 632.0, "text": " OK, guys, so let me jump now into the 
actual code."}, {"start": 632.0, "end": 635.0, "text": " Here we are. A couple more things we need to sort out."}, {"start": 635.0, "end": 639.0, "text": " So first of all, obviously, we need to set some input arguments."}, {"start": 639.0, "end": 642.0, "text": " There is only a couple of them that we care about."}, {"start": 642.0, "end": 649.0, "text": " First one is we want to pass the like the configuration file for the auto encoder."}, {"start": 649.0, "end": 656.0, "text": " So first we're going to train the auto encoder and then you need to pass the T flag, meaning we want to train it and then GPUs."}, {"start": 656.0, "end": 663.0, "text": " So I'm passing zero comma because I only have a single GPU and the index of the GPUs here."}, {"start": 663.0, "end": 670.0, "text": " If you have multiple GPUs, feel free to add one, two, three, whatever, how many GPUs you have there."}, {"start": 670.0, "end": 675.0, "text": " OK, final thing, because this is a research code base after all, there are some bugs."}, {"start": 675.0, "end": 679.0, "text": " And so I had to kind of sort them out before getting this to work on a Windows machine."}, {"start": 679.0, "end": 683.0, "text": " OK, so let me open up my diff tool here."}, {"start": 683.0, "end": 689.0, "text": " So I just have to do this diff tool D and I'm going to open up the differences to the code I made."}, {"start": 689.0, "end": 692.0, "text": " So first of all, this is not a bug."}, {"start": 692.0, "end": 697.0, "text": " This is just like a small modifications you have to do if you want to train this on a single GPU,"}, {"start": 697.0, "end": 701.0, "text": " if you want to be able to do a walkthrough on a VRAM limited system."}, {"start": 701.0, "end": 705.0, "text": " So basically set batch size to one instead of default 12."}, {"start": 705.0, "end": 709.0, "text": " Otherwise, you'll get CUDA out of memory exceptions."}, {"start": 709.0, "end": 711.0, "text": " That's the first week you have to do."}, {"start": 711.0, "end": 716.0, "text": " The second one is actually a bug. So you have to go to data, image and file."}, {"start": 716.0, "end": 718.0, "text": " I mean, it's a bug if you're on Windows."}, {"start": 718.0, "end": 721.0, "text": " So the thing is they kind of hard coded the slash here,"}, {"start": 721.0, "end": 726.0, "text": " assuming that that's how you split the path on an arbitrary system, which is not the case for Windows."}, {"start": 726.0, "end": 732.0, "text": " So it's much better to use OS.SAP, which is going to resolve automatically depending on the operating system"}, {"start": 732.0, "end": 735.0, "text": " into the correct character sequence for Windows."}, {"start": 735.0, "end": 737.0, "text": " It's going to be double backslash."}, {"start": 737.0, "end": 742.0, "text": " And so this now works. 
Otherwise, you'll have some errors and the training will not work."}, {"start": 742.0, "end": 746.0, "text": " OK, and the final tweak is in the main script."}, {"start": 746.0, "end": 750.0, "text": " So if we go here to the main script, you can see I had to make a couple of changes."}, {"start": 750.0, "end": 756.0, "text": " First of all, I had to set the number of workers to zero because otherwise I was again getting some errors."}, {"start": 756.0, "end": 758.0, "text": " That's one tweak."}, {"start": 758.0, "end": 763.0, "text": " The second tweak is set the shuffle to false here for the train data loader."}, {"start": 763.0, "end": 770.0, "text": " The reason being is we are using, if you recall, we're using that super small subsample of the image net."}, {"start": 770.0, "end": 773.0, "text": " And so if you just keep if you just do the shuffling,"}, {"start": 773.0, "end": 779.0, "text": " it might happen that you take some super big index and try to index into our data set and we don't have that image."}, {"start": 779.0, "end": 784.0, "text": " And then you're going to have the well index out of range exception or something."}, {"start": 784.0, "end": 787.0, "text": " OK, so that's the second tweak I had to do."}, {"start": 787.0, "end": 791.0, "text": " The third tweak is comment on the DDP in case you only have a single GPU."}, {"start": 791.0, "end": 797.0, "text": " So that's the distributed data parallel like object from from by Torch."}, {"start": 797.0, "end": 799.0, "text": " I don't need that. So I had to comment it out."}, {"start": 799.0, "end": 804.0, "text": " And finally, I had to comment out the signal parts because that does not work on Windows."}, {"start": 804.0, "end": 813.0, "text": " And I didn't want to bother figuring out how to fix it on Windows because I'm just want to I just want to do a walkthrough and explain you guys how this training procedure looks like."}, {"start": 813.0, "end": 817.0, "text": " That's it. We are ready. Let's get into the code."}, {"start": 817.0, "end": 819.0, "text": " I'm going to start in the main file."}, {"start": 819.0, "end": 822.0, "text": " So main file is the is where the training magic happens."}, {"start": 822.0, "end": 825.0, "text": " So let me go to the main function here."}, {"start": 825.0, "end": 831.0, "text": " OK, so here we are. I'm going to set the breakpoint here and let's start debugging this thing."}, {"start": 831.0, "end": 834.0, "text": " OK, so just hit the training."}, {"start": 834.0, "end": 838.0, "text": " We're going to use the information we passed in our launch Jason."}, {"start": 838.0, "end": 842.0, "text": " If you're using this code, if not, you just need to pass those arguments somehow."}, {"start": 842.0, "end": 847.0, "text": " Here we are. And again, I'm going to focus only only on on important parts."}, {"start": 847.0, "end": 852.0, "text": " So I'm going to scheme everything else like we're just doing some parsing, blah, blah, blah."}, {"start": 852.0, "end": 854.0, "text": " We can skip literally everything."}, {"start": 854.0, "end": 857.0, "text": " We're just creating some directories, doing some configurations."}, {"start": 857.0, "end": 860.0, "text": " I'm going to skip here. 
This is a salient point."}, {"start": 860.0, "end": 863.0, "text": " So here we are doing the loading from the config file."}, {"start": 863.0, "end": 867.0, "text": " So let me show you briefly how the config file looks like."}, {"start": 867.0, "end": 874.0, "text": " So if you go here, if you find the configs directory and then you find under the other encoder, you'll find the one we're using."}, {"start": 874.0, "end": 883.0, "text": " So that's this one. And you can see everything you need to instantiate various sub modules of the of the LDM is here."}, {"start": 883.0, "end": 886.0, "text": " So we have the auto encoder here. We have the last we are using."}, {"start": 886.0, "end": 892.0, "text": " We'll see what this is. It's basically a perceptual loss and the parameters we use to construct that loss."}, {"start": 892.0, "end": 899.0, "text": " We have information to specify such that we can construct our data loaders."}, {"start": 899.0, "end": 904.0, "text": " Some callbacks for PyTorch lightning, which is the framework they are using, which is built on top of PyTorch."}, {"start": 904.0, "end": 910.0, "text": " If you never heard of it and that's it, like some information about accumulating radiance."}, {"start": 910.0, "end": 915.0, "text": " Nothing, nothing important. Let's go back to the basically main function."}, {"start": 915.0, "end": 922.0, "text": " That's all you need to see. OK, now I'm going to keep skipping all of this accelerators, GPUs, nothing fancy."}, {"start": 922.0, "end": 927.0, "text": " I'm going to skip all the way here to a model. So that's where we instantiate the actual model."}, {"start": 927.0, "end": 932.0, "text": " OK, so you can see here we're going to now construct the auto encoder KL."}, {"start": 932.0, "end": 935.0, "text": " So let me just see what I've enabled all of the breakpoints."}, {"start": 935.0, "end": 940.0, "text": " And if so, let me do F10 and I'm going to hit the init function of the auto encoder."}, {"start": 940.0, "end": 947.0, "text": " OK, a couple of steps here. We because it's an auto encoder, obviously we have to instantiate first the encoder and then decoder."}, {"start": 947.0, "end": 954.0, "text": " So let's see how those look like. Basically, let me just see what this DDConfig is."}, {"start": 954.0, "end": 959.0, "text": " Yeah, OK, so it just contains the necessary parameters to construct the encoder."}, {"start": 959.0, "end": 964.0, "text": " OK, so let's let's go into the encoder encoder. Nothing, nothing really fundamental."}, {"start": 964.0, "end": 976.0, "text": " Basically, bunch of com layers, bunch of there are some interesting attention layers, which basically what I do is in the latent space of the encoder."}, {"start": 976.0, "end": 982.0, "text": " So they take the image features and they just do the VAT type of like self attention."}, {"start": 982.0, "end": 991.0, "text": " So you can you can kind of unroll them and then just do simple like a self attention logic."}, {"start": 991.0, "end": 994.0, "text": " Where every token attends to every other token, that's it."}, {"start": 994.0, "end": 1005.0, "text": " And then there are some down sampling layers, which are again just a combination like basically you can see here, it's just a com like a layer with the stride of two, what not."}, {"start": 1005.0, "end": 1009.0, "text": " That's it. I'm going to jump over all of these again."}, {"start": 1009.0, "end": 1015.0, "text": " Just a simple encoder model. 
Nothing fancy there."}, {"start": 1015.0, "end": 1018.0, "text": " OK, let's continue. Let's exit this part and now let's enter the decoder."}, {"start": 1018.0, "end": 1022.0, "text": " Similar story. Nothing fundamental to understand here."}, {"start": 1022.0, "end": 1026.0, "text": " Bunch of com layers, bunch of res blocks, and then there is some up sampling."}, {"start": 1026.0, "end": 1029.0, "text": " I'm going to hit that five. Let's exit this model."}, {"start": 1029.0, "end": 1034.0, "text": " That's it. The architecture is not the interesting part of this of training the other encoder."}, {"start": 1034.0, "end": 1042.0, "text": " OK, now this is the interesting part. Here is where we instantiate this LP IPS with discriminator loss."}, {"start": 1042.0, "end": 1046.0, "text": " It's a mouthful. It's basically a perceptual loss combined with adversarial loss."}, {"start": 1046.0, "end": 1049.0, "text": " So let's step into it. OK, so here it is."}, {"start": 1049.0, "end": 1052.0, "text": " Here we construct it again. I'm going to focus only on the important parts."}, {"start": 1052.0, "end": 1058.0, "text": " We have some loss coefficients. So depending on which component of the loss we are looking at, we'll have a different weight."}, {"start": 1058.0, "end": 1060.0, "text": " But most of all, let me let me show you this."}, {"start": 1060.0, "end": 1068.0, "text": " So if you're not familiar with this perceptual loss concept, I think it has been introduced at least since the Neural Stuller Transfer Paper."}, {"start": 1068.0, "end": 1072.0, "text": " So that was the those were the first papers I saw were using the perceptual loss."}, {"start": 1072.0, "end": 1085.0, "text": " So the idea is to basically let me step into it to basically take a pre trained VGG 16 network and then pass your image and grab some representation,"}, {"start": 1085.0, "end": 1094.0, "text": " intermediate representation from the VGG 16 network and then basically do MSC in the latent space, in that representation space instead of in the image space."}, {"start": 1094.0, "end": 1105.0, "text": " And by doing that, you can kind of compare like the semantics of your input images and not not like focus on on some maybe superficial noisy details in the image space."}, {"start": 1105.0, "end": 1113.0, "text": " That's the basic idea. Additionally, there are these netlin layers, which what they do is they reduce the number of channels from, let's say, 64 to one."}, {"start": 1113.0, "end": 1115.0, "text": " And then you can kind of collapse them and do the loss logic."}, {"start": 1115.0, "end": 1121.0, "text": " We'll see in a couple of minutes what that exactly means once we start doing the actual forward prop through the models."}, {"start": 1121.0, "end": 1125.0, "text": " Now we're just instantiating. 
So this is enough for you to understand."}, {"start": 1125.0, "end": 1133.0, "text": " So we are loading the model here and then we literally grab the pre trained model from the teaming."}, {"start": 1133.0, "end": 1135.0, "text": " So this is their older repos."}, {"start": 1135.0, "end": 1145.0, "text": " So teaming transformers, which introduced the VQGAM paper, and they basically have a checkpoint there of the of the that's going to initialize this LP IPS loss."}, {"start": 1145.0, "end": 1148.0, "text": " OK, so let me just kind of go and do that."}, {"start": 1148.0, "end": 1153.0, "text": " And you can see here loaded pre trained loss from this file here, VGG PTH."}, {"start": 1153.0, "end": 1158.0, "text": " OK, so now what happens is we just set gradients to false everywhere."}, {"start": 1158.0, "end": 1161.0, "text": " And that's pretty much it."}, {"start": 1161.0, "end": 1166.0, "text": " So let me now hit F5 and we are out of that function."}, {"start": 1166.0, "end": 1169.0, "text": " OK, now the second interesting part is discriminator."}, {"start": 1169.0, "end": 1175.0, "text": " So that's going to be used in the adversarial loss component of the final loss."}, {"start": 1175.0, "end": 1182.0, "text": " Again, I'm not sure it's worth even digging through it, but it's literally just a bunch of you can see here."}, {"start": 1182.0, "end": 1189.0, "text": " So there is some batch norm going on, calm layers, leaky rail use, calm layers, leaky rail use, nothing fundamental there."}, {"start": 1189.0, "end": 1191.0, "text": " I'm just going to skip over it."}, {"start": 1191.0, "end": 1196.0, "text": " And it's again going to down sample because this is a patch based discriminator."}, {"start": 1196.0, "end": 1200.0, "text": " You can kind of check. You can see here that's the patch can discriminator."}, {"start": 1200.0, "end": 1203.0, "text": " There was a first described in the PIX2PIX paper."}, {"start": 1203.0, "end": 1206.0, "text": " You can check out this link if you want to check it out."}, {"start": 1206.0, "end": 1216.0, "text": " But the main difference is instead of having a scalar and then say scalar telling you whether this image is real or fake, which GAN networks do GAN discriminators do here."}, {"start": 1216.0, "end": 1223.0, "text": " You will instead have literally for a patch you have maybe 32 by 32 scalars and all of those will tell you."}, {"start": 1223.0, "end": 1228.0, "text": " So a particular scalar will tell you whether that patch is real or fake."}, {"start": 1228.0, "end": 1231.0, "text": " And that just kind of gives you more information to train."}, {"start": 1231.0, "end": 1234.0, "text": " So more information for for for your model to train."}, {"start": 1234.0, "end": 1238.0, "text": " That's it. OK, let's exit the discriminator."}, {"start": 1238.0, "end": 1241.0, "text": " Again, we have some hinge loss."}, {"start": 1241.0, "end": 1244.0, "text": " We'll see how that comes into into play a bit later."}, {"start": 1244.0, "end": 1247.0, "text": " We have this weights again."}, {"start": 1247.0, "end": 1249.0, "text": " OK, we'll see all of that a bit later. That's it."}, {"start": 1249.0, "end": 1253.0, "text": " Now we define some more calm layers. Blah, blah, blah."}, {"start": 1253.0, "end": 1257.0, "text": " Nothing fancy. 
Monitor just tells us which loss are we monitoring."}, {"start": 1257.0, "end": 1261.0, "text": " And in this case, validation reconstruction loss is something we care about."}, {"start": 1261.0, "end": 1264.0, "text": " OK, that's it. Now I'm going to skip again."}, {"start": 1264.0, "end": 1268.0, "text": " We some biases logging blah, blah, blah."}, {"start": 1268.0, "end": 1271.0, "text": " There is some model checkpointing."}, {"start": 1271.0, "end": 1274.0, "text": " We don't care about model checkpointing in this logic."}, {"start": 1274.0, "end": 1279.0, "text": " There is, as you can see, it's kind of fairly well, researchy code base."}, {"start": 1279.0, "end": 1284.0, "text": " Some callbacks, logging images, learning rates, blah, blah, blah."}, {"start": 1284.0, "end": 1290.0, "text": " Kuda callbacks. Nothing, nothing fundamental for our understanding of how the how the stable diffusion is trained."}, {"start": 1290.0, "end": 1296.0, "text": " So I'm going to skip all of that until until I guess until the data part."}, {"start": 1296.0, "end": 1298.0, "text": " OK, so it's going to skip until the data part."}, {"start": 1298.0, "end": 1301.0, "text": " So this is where we load the image net data."}, {"start": 1301.0, "end": 1312.0, "text": " And again, if you've set if you downloaded the the image net and E.T.T.E. data set and you've placed it into the correct directory,"}, {"start": 1312.0, "end": 1314.0, "text": " then everything is going to work as expected here."}, {"start": 1314.0, "end": 1320.0, "text": " Plus the minor tweak I made with the OS separator, if you recall that from a couple of minutes ago."}, {"start": 1320.0, "end": 1325.0, "text": " OK, so again, I'm going to ignore all of this because it's just going to prepare the data."}, {"start": 1325.0, "end": 1328.0, "text": " Nothing, nothing fancy there."}, {"start": 1328.0, "end": 1333.0, "text": " So I'm going to do this and click F5 and wait for our data to be loaded."}, {"start": 1333.0, "end": 1335.0, "text": " You can see there is some filtering going on."}, {"start": 1335.0, "end": 1338.0, "text": " Blah, blah, blah. And we have the data ready. 
OK, so that's it."}, {"start": 1338.0, "end": 1341.0, "text": " We have you can see here train data and validation data."}, {"start": 1341.0, "end": 1343.0, "text": " You can see the numbers are super, super big."}, {"start": 1343.0, "end": 1347.0, "text": " And that's definitely not the number of images we have in our small image net."}, {"start": 1347.0, "end": 1350.0, "text": " And that's the reason why you have to set a shuffle to false."}, {"start": 1350.0, "end": 1353.0, "text": " Otherwise, you'll you'll get the index out of range exception."}, {"start": 1353.0, "end": 1356.0, "text": " Again, this is just what is needed."}, {"start": 1356.0, "end": 1359.0, "text": " What is needed to get a minimal setup up and running."}, {"start": 1359.0, "end": 1362.0, "text": " If you actually want to train this and obviously you want to have a full image net,"}, {"start": 1362.0, "end": 1364.0, "text": " you want to have multiple GPUs, et cetera, et cetera."}, {"start": 1364.0, "end": 1368.0, "text": " But if you just want to step through and understand what's going on, this is more than enough."}, {"start": 1368.0, "end": 1371.0, "text": " OK, again, I'm going to skip across all of these parts."}, {"start": 1371.0, "end": 1376.0, "text": " Not interesting checkpointing, blah, blah, blah, debugging signals."}, {"start": 1376.0, "end": 1383.0, "text": " I'm going to enable breakpoint here, hit F5, and this is where the magic will start going on."}, {"start": 1383.0, "end": 1388.0, "text": " So I'm going to now enable all the breakpoints and let's start digging into this code."}, {"start": 1388.0, "end": 1393.0, "text": " OK, so F10, we end up in this on pre-trained routine start."}, {"start": 1393.0, "end": 1397.0, "text": " So that's, again, something that PyTorch Lightning defines for you."}, {"start": 1397.0, "end": 1403.0, "text": " PyTorch Lightning is very cool if you're a researcher and you're doing something that's fairly has a fairly common structure"}, {"start": 1403.0, "end": 1406.0, "text": " in the sense that you don't have to think about zeroing your gradients."}, {"start": 1406.0, "end": 1409.0, "text": " You don't have to think about calling the optimizer step."}, {"start": 1409.0, "end": 1411.0, "text": " You don't have to think about all of those details."}, {"start": 1411.0, "end": 1418.0, "text": " You just have to define a couple of functions that the PyTorch Lightning API requires you to."}, {"start": 1418.0, "end": 1421.0, "text": " And then everything kind of works out of the box automatically."}, {"start": 1421.0, "end": 1425.0, "text": " OK, this part is not interesting."}, {"start": 1425.0, "end": 1428.0, "text": " We're just creating some directories for configs and logging."}, {"start": 1428.0, "end": 1430.0, "text": " Nothing interesting really."}, {"start": 1430.0, "end": 1433.0, "text": " And finally, we hit the validation data loader."}, {"start": 1433.0, "end": 1437.0, "text": " Now, you may be confused. Why are we starting with validation?"}, {"start": 1437.0, "end": 1441.0, "text": " How does that make sense? 
And that's, again, PyTorch Lightning detail."}, {"start": 1441.0, "end": 1449.0, "text": " What the framework does, and I think this is fairly brilliant, is it first literally loads only one or two batches of data"}, {"start": 1449.0, "end": 1453.0, "text": " in the validation loop just to make sure that the validation works"}, {"start": 1453.0, "end": 1456.0, "text": " so that you don't have to waste a bunch of time in the training loop"}, {"start": 1456.0, "end": 1461.0, "text": " only to find out that your validation loop is broken and then you have to start from scratch."}, {"start": 1461.0, "end": 1465.0, "text": " So instead, what I do is literally just verify that validation works."}, {"start": 1465.0, "end": 1469.0, "text": " And after that, you resume the actual training and then validation."}, {"start": 1469.0, "end": 1474.0, "text": " Everything else remains according to the usual sequence."}, {"start": 1474.0, "end": 1480.0, "text": " OK, so because of that, I'm going to disable all of the breakpoints here."}, {"start": 1480.0, "end": 1493.0, "text": " I'm just going to just enable the one in the train loader, hit F5, and wait until this validation data set basically check is completed."}, {"start": 1493.0, "end": 1497.0, "text": " Here it is. We can see we are hitting the train data loader."}, {"start": 1497.0, "end": 1501.0, "text": " Again, that's something that PyTorch Lightning requires you to define."}, {"start": 1501.0, "end": 1507.0, "text": " Let's continue here. And let me just see whether I've enabled all the breakpoints."}, {"start": 1507.0, "end": 1514.0, "text": " OK, so now we're going to first OK, again, some some some PyTorch Lightning stuff."}, {"start": 1514.0, "end": 1519.0, "text": " OK, so here we are. So now what's happening is we are loading the data from our training data set."}, {"start": 1519.0, "end": 1524.0, "text": " And here we are. We end up with this example. We've done some pre-processing."}, {"start": 1524.0, "end": 1527.0, "text": " You can see here we fetch the image, blah, blah, blah."}, {"start": 1527.0, "end": 1531.0, "text": " And we end up with the example, which is a dictionary that has multiple keys."}, {"start": 1531.0, "end": 1538.0, "text": " So we have image and other ImageNet idiosyncratic like keys."}, {"start": 1538.0, "end": 1542.0, "text": " So let me show you some of those. So example image is obviously our input image."}, {"start": 1542.0, "end": 1547.0, "text": " So it's processed such that we have 256, 256, and three channels."}, {"start": 1547.0, "end": 1552.0, "text": " We also have if I do example, let's say class label. Whoops."}, {"start": 1552.0, "end": 1558.0, "text": " I need to make it a string. So class label is going to be, as you can see, label zero."}, {"start": 1558.0, "end": 1563.0, "text": " So our data set only has 10 labels. We are not using the full ImageNet."}, {"start": 1563.0, "end": 1570.0, "text": " That's why zero is highly probable. Then let me show you one more human label."}, {"start": 1570.0, "end": 1576.0, "text": " Obviously, just a human readable label of this ImageNet class."}, {"start": 1576.0, "end": 1579.0, "text": " So think of think of whatever that is. And that's it."}, {"start": 1579.0, "end": 1582.0, "text": " OK, and this is where it starts getting interesting. So so here we are."}, {"start": 1582.0, "end": 1586.0, "text": " We have a batch that was provided to us by PyTorch Lightning."}, {"start": 1586.0, "end": 1591.0, "text": " Again, we have image in there. 
The shape is familiar 256, 256."}, {"start": 1591.0, "end": 1597.0, "text": " And now we do this get input. It's just going to fetch the image, do some permutation,"}, {"start": 1597.0, "end": 1599.0, "text": " make sure that the memory is contiguous, blah, blah, blah."}, {"start": 1599.0, "end": 1602.0, "text": " And now we're doing self and we pass inputs."}, {"start": 1602.0, "end": 1606.0, "text": " So this is a fancy way of saying call the forward method of this class."}, {"start": 1606.0, "end": 1612.0, "text": " And let's see. So we are dealing with we're dealing with all the encoders here, obviously."}, {"start": 1612.0, "end": 1618.0, "text": " And the forward class is here. So let me just do F10 and we hit this part."}, {"start": 1618.0, "end": 1623.0, "text": " So first part is obviously we do the encoding and then we do the sampling and then we do the decoding."}, {"start": 1623.0, "end": 1628.0, "text": " OK, so let's dig into the decoder. Here it is."}, {"start": 1628.0, "end": 1631.0, "text": " Here is the decoding logic, some down sampling, blah, blah, blah."}, {"start": 1631.0, "end": 1636.0, "text": " Again, I'm going to skip everything here because as I already told you, encoder logic is fairly simple."}, {"start": 1636.0, "end": 1641.0, "text": " We just do a forward pass through it and we end up with a latent space representation."}, {"start": 1641.0, "end": 1645.0, "text": " So what's the dimensionality of this representation?"}, {"start": 1645.0, "end": 1652.0, "text": " You can see it's 6, 64, 64. And we expected three because if you recall, let me show you the."}, {"start": 1652.0, "end": 1657.0, "text": " Well, we are using such a config where we expect 64, 64, 3."}, {"start": 1657.0, "end": 1664.0, "text": " Six is there because we are actually returning the mean and the standard deviation or the variance here."}, {"start": 1664.0, "end": 1668.0, "text": " And then we are sampling before we pass that sample to the decoder."}, {"start": 1668.0, "end": 1673.0, "text": " So that's just a detail worth mentioning. They did experiment with different types of auto encoders."}, {"start": 1673.0, "end": 1678.0, "text": " One of them we are currently working with is the KL regularized auto encoder."}, {"start": 1678.0, "end": 1683.0, "text": " They were also playing with quantized versions, but we really don't care about all of those details."}, {"start": 1683.0, "end": 1687.0, "text": " There is just too many things going on. OK, some processing of that representation."}, {"start": 1687.0, "end": 1692.0, "text": " We end up with moments when they say moment, they literally mean zero within first moment."}, {"start": 1692.0, "end": 1700.0, "text": " Like so the mean and the and the and the variance. So now we pass that into this diagonal Gaussian distribution."}, {"start": 1700.0, "end": 1705.0, "text": " So we're going to basically form distribution here. We're going to name it posterior."}, {"start": 1705.0, "end": 1708.0, "text": " So let me kind of enter there. You can see we just chunk."}, {"start": 1708.0, "end": 1713.0, "text": " We just split into two parts that representation and we end up with mean and log variance."}, {"start": 1713.0, "end": 1720.0, "text": " So that's what we're actually returning. And then some clamping exponentiation such that we end up with standard deviation variance."}, {"start": 1720.0, "end": 1723.0, "text": " And that's it. OK, just a Gaussian. 
That's all."}, {"start": 1723.0, "end": 1731.0, "text": " After that, we have to sample and sampling is simply because Gaussians are such a such a nice mathematical object."}, {"start": 1731.0, "end": 1739.0, "text": " You basically just have to take the mean and add to it standard deviation multiplied by the normal noise from the normal Gaussian."}, {"start": 1739.0, "end": 1743.0, "text": " That's it. OK, and we end up with the representation."}, {"start": 1743.0, "end": 1749.0, "text": " So now let's see that should now have three channels, I guess. So three sixty four sixty four as expected."}, {"start": 1749.0, "end": 1754.0, "text": " OK, so now let's pass this into the decoder stage. Here we are."}, {"start": 1754.0, "end": 1757.0, "text": " We do some processing, just a conv layer decoder."}, {"start": 1757.0, "end": 1763.0, "text": " What it does is, again, just up sampling, conf attention again, trivial stuff."}, {"start": 1763.0, "end": 1767.0, "text": " I'm going to skip over that. That's kind of common knowledge."}, {"start": 1767.0, "end": 1774.0, "text": " OK, so we end up with output output should be because this is an autoencoder should have the same shape as the input image."}, {"start": 1774.0, "end": 1780.0, "text": " So that's two fifty six, two fifty six, three. Let's just print out the shape so that we are."}, {"start": 1780.0, "end": 1785.0, "text": " Yeah, OK, we can we can see here that the shape is as expected."}, {"start": 1785.0, "end": 1793.0, "text": " Let's continue. We now return that decoded output and we return the posterior object, which is the Gaussian."}, {"start": 1793.0, "end": 1800.0, "text": " And that's this part. OK, so now you'll see there are two if statements here."}, {"start": 1800.0, "end": 1806.0, "text": " One is when the optimizer index is zero and the other one is when the optimizer index is one."}, {"start": 1806.0, "end": 1812.0, "text": " The reason we have those parameters is because we have two different optimizers."}, {"start": 1812.0, "end": 1815.0, "text": " Let me just see whether where is that function. So configure optimizers."}, {"start": 1815.0, "end": 1818.0, "text": " I didn't show you this one, so we didn't step into it."}, {"start": 1818.0, "end": 1822.0, "text": " But simply, simply put, we have two optimizers."}, {"start": 1822.0, "end": 1826.0, "text": " One is Adam and that Adam is going to be optimizing encoders and decoders here."}, {"start": 1826.0, "end": 1830.0, "text": " And the second optimizer is going to be optimizing the discriminator weights."}, {"start": 1830.0, "end": 1835.0, "text": " So we saw that when we instantiated the discriminator, I told you that it's going to be trainable."}, {"start": 1835.0, "end": 1842.0, "text": " And here the other optimizer cares about updating discriminator weights, whereas this can be treated as a generator."}, {"start": 1842.0, "end": 1847.0, "text": " So the autoencoder part is the generator. The discriminator is obviously discriminator."}, {"start": 1847.0, "end": 1852.0, "text": " And that's that's it. OK, so let's go back here."}, {"start": 1852.0, "end": 1859.0, "text": " So PyTorch Lightning makes it easy for us to do this type of gain loss computation."}, {"start": 1859.0, "end": 1867.0, "text": " So first we'll step into this branch and then it will literally return called the training step again with optimizer index set to one."}, {"start": 1867.0, "end": 1872.0, "text": " And then we'll train the discriminator. 
So that's everything is kind of handled for us by the framework."}, {"start": 1872.0, "end": 1875.0, "text": " OK, this is where the whole."}, {"start": 1875.0, "end": 1879.0, "text": " Brain of this autoencoder training is going to happen."}, {"start": 1879.0, "end": 1883.0, "text": " So let's dig into this code. This is very important case."}, {"start": 1883.0, "end": 1887.0, "text": " We get the last layer that's going to be used for some lambda calculation."}, {"start": 1887.0, "end": 1890.0, "text": " You'll see that in a second. So here we are. We are inside of the loss."}, {"start": 1890.0, "end": 1893.0, "text": " Let's see how the loss looks like for the other encoder."}, {"start": 1893.0, "end": 1900.0, "text": " So first of all, we have the reconstruction loss. It's simply as you can see here, we subtract the inputs, which are the inputs image."}, {"start": 1900.0, "end": 1907.0, "text": " So let's let me just kind of make sure that this is non-py or by Torch tensor, I guess."}, {"start": 1907.0, "end": 1912.0, "text": " Yeah. And then let's just see the shape. The shape should be 256."}, {"start": 1912.0, "end": 1916.0, "text": " Yeah, everything's fine there. We have reconstructions. We literally just subtract them."}, {"start": 1916.0, "end": 1919.0, "text": " And we obviously want to make this difference as small as possible."}, {"start": 1919.0, "end": 1927.0, "text": " So that makes a lot of sense. We're just doing simple image space like emmits type of a loss."}, {"start": 1927.0, "end": 1930.0, "text": " OK, now we have perceptual loss. This is the interesting part."}, {"start": 1930.0, "end": 1934.0, "text": " So we pass the inputs and the reconstructions again."}, {"start": 1934.0, "end": 1942.0, "text": " But this time we do not compare them in the image space. Instead, we compare them in the latent space of the VGG of the pre-trained VGG network."}, {"start": 1942.0, "end": 1946.0, "text": " So let me show you that thing as well. OK, so here we are."}, {"start": 1946.0, "end": 1952.0, "text": " So we are again in this LP IPS loss and we are in the forward step."}, {"start": 1952.0, "end": 1958.0, "text": " So first thing they do is they have the scaling layer and that scaling is just going to subtract some mean."}, {"start": 1958.0, "end": 1962.0, "text": " And so let me just find that for a second. So scaling layer is here."}, {"start": 1962.0, "end": 1968.0, "text": " So literally there is some shift and scale. So computations going on."}, {"start": 1968.0, "end": 1971.0, "text": " I'm not sure whether this is from ImageNet statistics or not."}, {"start": 1971.0, "end": 1979.0, "text": " Like if somebody knows, let me know. I think this is from ImageNet and that will make sense since we are now using ImageNet data set as well."}, {"start": 1979.0, "end": 1986.0, "text": " So that would make sense. We just kind of normalize our input tensors and then we pass them through the VGG."}, {"start": 1986.0, "end": 1992.0, "text": " And so we can see here what the VGG forward passing entails."}, {"start": 1992.0, "end": 2001.0, "text": " So literally we're going to return some intermediate representations and we're going to wrap those up into this named top object and return it back."}, {"start": 2001.0, "end": 2005.0, "text": " So let me hit F10. We are here. 
We're going to return all of that."}, {"start": 2005.0, "end": 2011.0, "text": " We're going to hit it again because we have two computations, both for the input as well as well as for the reconstruction."}, {"start": 2011.0, "end": 2020.0, "text": " Now we kind of cluster all of these layers that are going to reduce the number of channels together, just some syntactic sugar."}, {"start": 2020.0, "end": 2025.0, "text": " And finally, let's see. Here is where the magic is happening."}, {"start": 2025.0, "end": 2031.0, "text": " So what we do is we normalize the tensor, normalize just going to basically divide it by the L2 norm."}, {"start": 2031.0, "end": 2038.0, "text": " OK, and then we do, as you can see here again, we just do we subtract them and we do the square."}, {"start": 2038.0, "end": 2042.0, "text": " So it's an MSC loss without the M part because we're not doing the mean part yet."}, {"start": 2042.0, "end": 2050.0, "text": " We're just doing this and we repeat that for all of the representations because we will have like five representations from the VGG."}, {"start": 2050.0, "end": 2054.0, "text": " So that's why we'll have five iterations of this loop. And now we're going to break out of it."}, {"start": 2054.0, "end": 2065.0, "text": " And now we're here. OK, so now what happens is we are going to pass those differences into these layers that are going to reduce the number of channels."}, {"start": 2065.0, "end": 2069.0, "text": " So let me let me let me let me explain what I mean by that. So here here is this diff."}, {"start": 2069.0, "end": 2076.0, "text": " So here is the difference in the that we that we've done among the features and the shape is going to be what?"}, {"start": 2076.0, "end": 2084.0, "text": " So here is the shape after we apply this layer. So after we apply this, we'll just end up with a single channel."}, {"start": 2084.0, "end": 2093.0, "text": " So if I put zero here and if I put zero here and if I just do the shape, you can see we have one one two fifty six to fifty six."}, {"start": 2093.0, "end": 2096.0, "text": " OK, so that's the sole purpose of this of this layer."}, {"start": 2096.0, "end": 2103.0, "text": " And finally, we do the special spatial average, which is simply a mean a mean across dimensions."}, {"start": 2103.0, "end": 2107.0, "text": " And so we end up with a scalar here. And that's it. That's that's how the perceptual loss looks like."}, {"start": 2107.0, "end": 2114.0, "text": " It's fairly simple. And I'm going to just keep on stepping over five times there and then I'm going to keep the five."}, {"start": 2114.0, "end": 2121.0, "text": " And that's it. So now we end up with this this array of well, it's actually."}, {"start": 2121.0, "end": 2128.0, "text": " Yeah, we've accumulated we've aggregated the values here by doing the sum operator and we end up with a single number."}, {"start": 2128.0, "end": 2135.0, "text": " And that's the perceptual loss. That's it, guys. The only part that's kind of new here for me personally are these layers."}, {"start": 2135.0, "end": 2142.0, "text": " You could as well skip that part and simply do like a mean operation across these differences."}, {"start": 2142.0, "end": 2148.0, "text": " So just MSC loss directly in the feature space. And that's it. 
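To consolidate the perceptual-loss walkthrough, here is a rough sketch using torchvision's VGG16 feature blocks. It follows the simplification mentioned above of averaging the squared feature differences instead of using the learned 1x1 channel-reduction layers; the slice indices, class name and uninitialized weights are assumptions for illustration, not the exact LPIPS implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

def normalize_channels(feat: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    # Divide each feature vector by its L2 norm across the channel dimension.
    norm = torch.sqrt(torch.sum(feat ** 2, dim=1, keepdim=True))
    return feat / (norm + eps)

class SimplePerceptualLoss(nn.Module):
    """Rough LPIPS-style loss: compare VGG features of input and reconstruction."""
    def __init__(self):
        super().__init__()
        # weights=None keeps the sketch self-contained; in practice you would load pretrained VGG weights.
        vgg = vgg16(weights=None).features.eval()
        # Split VGG into a few chunks whose outputs we compare (indices are illustrative).
        self.slices = nn.ModuleList([vgg[:4], vgg[4:9], vgg[9:16], vgg[16:23], vgg[23:30]])
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        total = x.new_zeros(())
        fx, fy = x, y
        for block in self.slices:
            fx, fy = block(fx), block(fy)
            diff = (normalize_channels(fx) - normalize_channels(fy)) ** 2   # squared difference in feature space
            total = total + diff.mean(dim=1).mean(dim=[1, 2]).sum()         # average over channels and space, sum over layers
        return total
```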
But this is some type of modification."}, {"start": 2148.0, "end": 2152.0, "text": " I'm not sure why and whether there are some ablations, but that's the perceptual loss."}, {"start": 2152.0, "end": 2159.0, "text": " OK, finally, we form the reconstruction loss as a weighted sum of the reconstruction loss from the image space."}, {"start": 2159.0, "end": 2164.0, "text": " Plus, we grab the perceptual loss here and we have some weights, 1.0 here."}, {"start": 2164.0, "end": 2170.0, "text": " OK, so they are equally weighted. OK, next up, since this is zero, this is going to be zero."}, {"start": 2170.0, "end": 2175.0, "text": " Let me just kind of double check. And because of that, the exponent raised to the power zero is one."}, {"start": 2175.0, "end": 2179.0, "text": " So it's just going to be a neutral operation. So this is doing nothing."}, {"start": 2179.0, "end": 2185.0, "text": " And this is also going to be skipped here. And here we just do some rescaling. That's it."}, {"start": 2185.0, "end": 2191.0, "text": " Summation, blah, blah, blah. And that's our final reconstruction loss. That's the loss there."}, {"start": 2191.0, "end": 2198.0, "text": " Now, because remember, we return the posterior, which is the Gaussian from the latent space of the autoencoder."}, {"start": 2198.0, "end": 2203.0, "text": " And we're just going to compute the KL divergence. So that's going to be the regularizer component."}, {"start": 2203.0, "end": 2212.0, "text": " So I'm going to hit F10 here. You can see it's simply computing the KL divergence using the mean, variance and log-variance information."}, {"start": 2212.0, "end": 2217.0, "text": " OK, and we end up with a KL loss there. Some rescaling again."}, {"start": 2217.0, "end": 2224.0, "text": " And now we enter the branch where we train the generator, I assume. Let's see. OK, all is good so far."}, {"start": 2224.0, "end": 2233.0, "text": " OK, so we pass the reconstructions, which are the fake images, through the discriminator and we get logits fake. OK."}, {"start": 2233.0, "end": 2239.0, "text": " And then what we do is we have a minus and we do a mean across those logits."}, {"start": 2239.0, "end": 2245.0, "text": " So let me show you the shape. As I told you, that's a special discriminator. That's like a patch-based discriminator."}, {"start": 2245.0, "end": 2252.0, "text": " So that means that the shape is going to be, maybe I think it's 32 times 32 or something. So let me see that."}, {"start": 2252.0, "end": 2257.0, "text": " So it's 30 by 30. So that's how many patches we have. And we just do a mean across them."}, {"start": 2257.0, "end": 2268.0, "text": " And by putting a minus here, we will be tweaking the generator weights in such a way that the discriminator gives a high value,"}, {"start": 2268.0, "end": 2273.0, "text": " which means it thinks, quote unquote, that those images are real."}, {"start": 2273.0, "end": 2281.0, "text": " So, again, that's just your standard GAN stuff. Nothing super complex there if you're familiar with GANs."}, {"start": 2281.0, "end": 2288.0, "text": " OK. And now there is some adaptive weight calculation. Let me see. So what's going to happen there?"}, {"start": 2288.0, "end": 2300.0, "text": " OK. Yeah. 
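A toy sketch of the generator-side GAN term just described, with a stand-in patch-based discriminator (layer sizes are illustrative; the actual discriminator in the codebase differs):

```python
import torch
import torch.nn as nn

# Toy stand-in for a patch-based discriminator: it maps an image to a grid of
# per-patch logits rather than a single scalar.
patch_discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, kernel_size=4, stride=2, padding=1),
)

reconstructions = torch.randn(2, 3, 256, 256)        # fake images coming out of the decoder
logits_fake = patch_discriminator(reconstructions)   # one logit per patch
g_loss = -torch.mean(logits_fake)                    # minimizing this pushes the patch logits up for fakes
print(logits_fake.shape, g_loss.item())
```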
So I remember this is going to make make sure that we we are waiting the GAN part of the loss and the reconstruction loss appropriately."}, {"start": 2300.0, "end": 2305.0, "text": " Let me show you the paper formula for this one is going to make a bit more sense. OK, guys, here we are."}, {"start": 2305.0, "end": 2312.0, "text": " This is the VQ GAN paper I showed that in the beginning of the video. And this is basically the formula we're computing."}, {"start": 2312.0, "end": 2320.0, "text": " We are taking the gradients of the reconstruction loss with respect to the last layer of the decoder weights."}, {"start": 2320.0, "end": 2328.0, "text": " We divide that by the gradients of the GAN loss with respect to the weights of the last layer of the decoder again."}, {"start": 2328.0, "end": 2337.0, "text": " What's the reasoning behind this? The reasoning is the following. If the gradients are super big for the reconstruction loss and they are smaller for the GAN loss,"}, {"start": 2337.0, "end": 2343.0, "text": " then this is going to be a big number, which is going to going to put a bigger weight on the GAN loss."}, {"start": 2343.0, "end": 2355.0, "text": " So by doing that, we make sure that the network is learning from both losses and that one loss is not overwhelming the other loss when it comes to the contribution to the gradients."}, {"start": 2355.0, "end": 2360.0, "text": " So that's the rough logic. And you can see that that's exactly what we're computing here."}, {"start": 2360.0, "end": 2367.0, "text": " So you can see NLL loss is the reconstruction loss of the last layer weights. And that's it. We get the gradients there."}, {"start": 2367.0, "end": 2374.0, "text": " Then we compute the gradients for the G loss, which is the GAN loss. That's it. And now we just normalize them."}, {"start": 2374.0, "end": 2382.0, "text": " We do the norm. We divide them as well. And that's it. This is clamping, blah, blah, blah, times some weight. And that's it. That's the that's that weight."}, {"start": 2382.0, "end": 2391.0, "text": " OK, I'm now going to return back to the code here. Now, this is going to be zero for the initial 50,000 iterations or something."}, {"start": 2391.0, "end": 2404.0, "text": " The reason being they don't want to they first want to train the autoencoder and ignore the GAN loss such that they can form some representations and for the stability sake and only then slowly start using the GAN loss."}, {"start": 2404.0, "end": 2412.0, "text": " So let me let me show you what I mean by that. Let me enter here. You can see that until we pass the global threshold. So I'm going to enter here."}, {"start": 2412.0, "end": 2420.0, "text": " You'll see that the weight will be zero. So the weight will be set to zero until the global step crosses some certain threshold."}, {"start": 2420.0, "end": 2431.0, "text": " And because of that, this is zero. And you can see it's used here. So that means this thing is kind of toggled off for the good portion of the of the of the beginning of the training."}, {"start": 2431.0, "end": 2440.0, "text": " So that means we only use the KL regular regularize the loss here and we use the reconstruction loss here as the final loss. That's it."}, {"start": 2440.0, "end": 2448.0, "text": " I know this was a mouthful, but like hopefully it makes sense. Now we just do some blah, blah, blah accumulation of those and we return back the loss."}, {"start": 2448.0, "end": 2455.0, "text": " And that's it. We do some logging. We return the autoencoder loss. 
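A minimal sketch of that adaptive weight, assuming rec_loss and gan_loss are scalars on the current graph and last_layer is the weight tensor of the decoder's final layer (the names and the clamp range are assumptions):

```python
import torch

def calculate_adaptive_weight(rec_loss: torch.Tensor,
                              gan_loss: torch.Tensor,
                              last_layer: torch.Tensor,
                              max_weight: float = 1e4) -> torch.Tensor:
    # Gradient of each loss with respect to the last decoder layer's weights.
    rec_grads = torch.autograd.grad(rec_loss, last_layer, retain_graph=True)[0]
    gan_grads = torch.autograd.grad(gan_loss, last_layer, retain_graph=True)[0]
    # lambda = ||grad_rec|| / (||grad_gan|| + eps): a big ratio means the GAN term gets a bigger weight.
    d_weight = torch.norm(rec_grads) / (torch.norm(gan_grads) + 1e-4)
    return torch.clamp(d_weight, 0.0, max_weight).detach()
```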
And that's it. Now we're going to hit the training step again."}, {"start": 2455.0, "end": 2465.0, "text": " This time we're going to be training not the generator, but instead we are going to be training the discriminator. Let's see how that's going to look like again."}, {"start": 2465.0, "end": 2476.0, "text": " Blah, blah, blah. I'm going to skip across these steps. I'm going to basically run with disabled breakpoints. We're going to do a forward pass again."}, {"start": 2476.0, "end": 2486.0, "text": " This is kind of suboptimal. There must be some way to optimize this such that we don't have to do it, because we are basically sending the same images here again."}, {"start": 2486.0, "end": 2495.0, "text": " And that doesn't make much sense. And now let me return. Let me enable back all the breakpoints. Now we enter this branch here."}, {"start": 2495.0, "end": 2506.0, "text": " OK, so again, we have a loss here. All of the inputs are the same. What's different is, let me just enter here. So we're going to have the same parts here."}, {"start": 2506.0, "end": 2513.0, "text": " Reconstruction loss. Everything else remains the same. So the interesting part that we care about is here. So we want to go here."}, {"start": 2513.0, "end": 2521.0, "text": " So I'm going to disable breakpoints, enable just this one, hit F5 and we enter this branch this time. OK, so here we are."}, {"start": 2521.0, "end": 2531.0, "text": " Now we pass the inputs to the discriminator to get the logits of the real images and we pass the reconstructions to get the logits of the fake images."}, {"start": 2531.0, "end": 2542.0, "text": " OK, again, we just have this factor which is going to be zero initially, which means this loss will not be enabled in the first part of the training."}, {"start": 2542.0, "end": 2551.0, "text": " Later, it's going to gradually kick in. And basically what we do here is a hinge loss between the logits of the real and the fake images."}, {"start": 2551.0, "end": 2562.0, "text": " And basically that's it. That's the GAN loss. So this is going to train the discriminator such that the discriminator learns the difference between real and fake images."}, {"start": 2562.0, "end": 2569.0, "text": " Consequently, that's going to lead to a better autoencoder, because we are using that discriminator to train the autoencoder."}, {"start": 2569.0, "end": 2578.0, "text": " Guys, that's it. That was the training of the autoencoder. Hopefully that was interesting and made sense."}, {"start": 2578.0, "end": 2589.0, "text": " Now I'm going to just stop this training because that's pretty much it. OK, I've hit F5 just to show you that now we're just going to iterate across batches and keep on repeating the same things we've just seen."}, {"start": 2589.0, "end": 2596.0, "text": " So that's why I'm going to stop this training right now. We've seen how the autoencoder training looks like."}, {"start": 2596.0, "end": 2602.0, "text": " I'm going to show you quickly the formulas from the VQGAN paper just to consolidate the knowledge here."}, {"start": 2602.0, "end": 2606.0, "text": " And then we're going to step into understanding how the diffusion, the UNet model, is being trained."}, {"start": 2606.0, "end": 2611.0, "text": " OK, guys, quickly coming to the paper. 
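A short sketch of the hinge loss for this discriminator step, operating on the per-patch logits of real and fake images (the shapes are the ones seen in the debugger above):

```python
import torch
import torch.nn.functional as F

def hinge_d_loss(logits_real: torch.Tensor, logits_fake: torch.Tensor) -> torch.Tensor:
    # Hinge loss over per-patch logits: reals should score above +1, fakes below -1.
    loss_real = torch.mean(F.relu(1.0 - logits_real))
    loss_fake = torch.mean(F.relu(1.0 + logits_fake))
    return 0.5 * (loss_real + loss_fake)

# Example with dummy 30x30 patch logits.
d_loss = hinge_d_loss(torch.randn(1, 1, 30, 30), torch.randn(1, 1, 30, 30))
print(d_loss.item())
```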
Here are the formulas for the VQGAN paper."}, {"start": 2611.0, "end": 2624.0, "text": " So originally how the VQGAN was trained was it had these code books of discrete basically vectors and they had the reconstruction loss component."}, {"start": 2624.0, "end": 2635.0, "text": " You can see here plus these losses here were called commitment losses and they were used to train the code book and the encoder."}, {"start": 2635.0, "end": 2648.0, "text": " What changed in the VQGAN is that they started using so you can see here they're using instead of L2 loss, they use a perceptual loss and they introduce an adversarial training procedure with a patch based discriminator."}, {"start": 2648.0, "end": 2654.0, "text": " So that's everything we've seen so far. And you can see that the final loss looks like this."}, {"start": 2654.0, "end": 2659.0, "text": " So there is the GAN component weighted by this lambda. We've seen all of these."}, {"start": 2659.0, "end": 2665.0, "text": " And there is this component here that basically consists out of the reconstruction loss and the perceptual loss."}, {"start": 2665.0, "end": 2673.0, "text": " And additionally, in this in the latent diffusion model, they've introduced the KL divergence regularization."}, {"start": 2673.0, "end": 2682.0, "text": " So basically. Bottom line is the order encoder used for the LDM paper is a small modification of what they've already done."}, {"start": 2682.0, "end": 2686.0, "text": " The same authors in the VQGAN paper. That's it. Let's go back to the code."}, {"start": 2686.0, "end": 2698.0, "text": " Wondering for a second about why these losses make sense, even though they are, they are not by any stretch of imagination, probably an optimal solution to how we should be training our models."}, {"start": 2698.0, "end": 2704.0, "text": " Let's just think about it for a second. So we have the reconstruction loss in the image space and we have the perceptual loss."}, {"start": 2704.0, "end": 2714.0, "text": " Those basically make sure that we are we are reconstructing images correctly so that we can we learn how to not lose information when we go through the bottleneck part."}, {"start": 2714.0, "end": 2720.0, "text": " OK, then we have the GAN component, which makes sure that the images look very realistic."}, {"start": 2720.0, "end": 2724.0, "text": " So that's additionally kind of enforcing the reconstruction."}, {"start": 2724.0, "end": 2741.0, "text": " And finally, we have the KL divergence loss that's just going to be regularizing the latent space of our order encoder such that we can later be able to smoothly go through that space and be able to have meaningful representations."}, {"start": 2741.0, "end": 2744.0, "text": " So that's the basic idea. Like we are. Yeah."}, {"start": 2744.0, "end": 2758.0, "text": " OK, having said that, let's go to the long Jason. Let's modify the argument such that we are now training. We're now training the diffusion part and not the order."}, {"start": 2758.0, "end": 2765.0, "text": " So I'm just going to remove this part here, remove this space here and paste this back here."}, {"start": 2765.0, "end": 2771.0, "text": " So that's everything you need to do. Now we're training the diffusion model."}, {"start": 2771.0, "end": 2775.0, "text": " So let's go back here. The difference will now be that they are not using with this config."}, {"start": 2775.0, "end": 2780.0, "text": " They're not using the same order encoder. 
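Putting those pieces together, the autoencoder objective discussed in this walkthrough has roughly the following shape, where the weight symbols $w_{\text{KL}}$, $w_{\text{disc}}$ and the stabilizer $\delta$ are paraphrased names rather than the papers' exact notation, $\theta_L$ denotes the weights of the decoder's last layer, and $w_{\text{disc}}$ stays at zero during the warm-up iterations:

$$
\mathcal{L}_{\text{AE}} \;=\; \underbrace{\lVert x - \hat{x}\rVert_1 + \mathcal{L}_{\text{perc}}(x, \hat{x})}_{\mathcal{L}_{\text{rec}}} \;+\; w_{\text{KL}}\, D_{\mathrm{KL}}\big(q_{\mathcal{E}}(z \mid x)\,\Vert\, \mathcal{N}(0, I)\big) \;-\; \lambda\, w_{\text{disc}}\, \mathbb{E}\big[D(\hat{x})\big],
\qquad
\lambda \;=\; \frac{\lVert \nabla_{\theta_L} \mathcal{L}_{\text{rec}} \rVert}{\lVert \nabla_{\theta_L} \mathcal{L}_{\text{GAN}} \rVert + \delta}
$$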
They're using a different one, with the quantization."}, {"start": 2780.0, "end": 2784.0, "text": " But like that doesn't matter. We're going to focus on important parts only."}, {"start": 2784.0, "end": 2791.0, "text": " OK, so I'm going to hit F5, hit the training here, and let's start analyzing the code again."}, {"start": 2791.0, "end": 2795.0, "text": " I'm going to focus only on instantiating the models and on the training loop."}, {"start": 2795.0, "end": 2801.0, "text": " That's it. I'm going to skip everything else because we've seen all of that. We've seen the config as well."}, {"start": 2801.0, "end": 2805.0, "text": " So let's just go to the model instantiation. So let's go here."}, {"start": 2805.0, "end": 2810.0, "text": " And now let me just make sure that everything is enabled. I think it already is."}, {"start": 2810.0, "end": 2815.0, "text": " But yeah. OK, so here we are, the latent diffusion object."}, {"start": 2815.0, "end": 2822.0, "text": " We're starting to instantiate everything we need. So one of those things is going to be the UNet architecture."}, {"start": 2822.0, "end": 2827.0, "text": " OK, so let's start here. Let's see what's going on. We can ignore all of this."}, {"start": 2827.0, "end": 2834.0, "text": " So here is the first interesting part. We're going to be initializing the superclass and the superclass is this DDPM."}, {"start": 2834.0, "end": 2838.0, "text": " OK, so the DDPM is the denoising diffusion probabilistic model."}, {"start": 2838.0, "end": 2843.0, "text": " So that's the original diffusion paper that made diffusion kind of practical."}, {"start": 2843.0, "end": 2847.0, "text": " OK, so let's enter there. So let's see what's going on there."}, {"start": 2847.0, "end": 2856.0, "text": " OK, so here we are predicting, you can see we're running in EPS prediction mode, which means we're predicting the noise instead of,"}, {"start": 2856.0, "end": 2862.0, "text": " well, there are some other things you can predict like x zero, et cetera. OK, we can skip all of this."}, {"start": 2862.0, "end": 2870.0, "text": " We can skip all of this. Now there is this diffusion wrapper. And that's where we actually start making the UNet."}, {"start": 2870.0, "end": 2875.0, "text": " So you can see here the UNet model is constructed here. So let's construct the UNet."}, {"start": 2875.0, "end": 2878.0, "text": " Let's see how it looks like. Again, in some of my previous videos,"}, {"start": 2878.0, "end": 2884.0, "text": " I've been going in a lot of detail through how the UNet is constructed, so you can go through that if you want."}, {"start": 2884.0, "end": 2890.0, "text": " Here I'm just going to kind of put up the schematic. And by the way, I love the assert statements in this code base."}, {"start": 2890.0, "end": 2895.0, "text": " Fool, you forgot to include the dimension of your cross attention conditioning. Very cool."}, {"start": 2895.0, "end": 2899.0, "text": " Yeah, you can tell it's production code. 
OK, so let's let's continue here."}, {"start": 2899.0, "end": 2903.0, "text": " I think we can skip across all of these details."}, {"start": 2903.0, "end": 2908.0, "text": " The important parts are these time step in bad sequential objects."}, {"start": 2908.0, "end": 2918.0, "text": " What they basically make sure is that we can later pass time step information or conditional information into various sub modules."}, {"start": 2918.0, "end": 2922.0, "text": " So let me kind of click F12 there, enter the definition."}, {"start": 2922.0, "end": 2927.0, "text": " You can see that depending on the layer type, they'll sometimes be passing the conditioning information,"}, {"start": 2927.0, "end": 2934.0, "text": " sometimes the embedded information, sometimes just the input features, image features, and that's it."}, {"start": 2934.0, "end": 2942.0, "text": " So it might be interesting to for me to just show you one small thing."}, {"start": 2942.0, "end": 2949.0, "text": " And that's the following. So there are these blocks that integrate the conditional information, and I think those might be interesting."}, {"start": 2949.0, "end": 2955.0, "text": " So here spatial transformer. So that's the module that's going to be integrating the conditional information into the unit."}, {"start": 2955.0, "end": 2962.0, "text": " So I'm going to hit F12 there and I'm just going to add I'm just going to add like basically a break point there."}, {"start": 2962.0, "end": 2968.0, "text": " And let's hit F10. Let's continue everything else. We don't care about it."}, {"start": 2968.0, "end": 2973.0, "text": " Really, we can just construct the unit and let's go to the end here."}, {"start": 2973.0, "end": 2979.0, "text": " So this is the end of the unit definition. Quite a long definition definition, as you can tell."}, {"start": 2979.0, "end": 2982.0, "text": " So I'm going to skip over it and that's it."}, {"start": 2982.0, "end": 2989.0, "text": " So we're using cross attention to do the conditioning, the conditional information integration."}, {"start": 2989.0, "end": 2993.0, "text": " So, yeah. OK, so let's continue here. OK, that's it."}, {"start": 2993.0, "end": 2999.0, "text": " Counting the parameters, nothing fancy. We don't care about that as well."}, {"start": 2999.0, "end": 3006.0, "text": " That's just the exponential moving average. Not the fundamental part why this model is working."}, {"start": 3006.0, "end": 3010.0, "text": " So I'm going to skip across all of these. Now we're registering the schedule."}, {"start": 3010.0, "end": 3014.0, "text": " So this is the important part. And I've covered how this exactly works."}, {"start": 3014.0, "end": 3020.0, "text": " Like I've been doing side by side comparison of formulas and of code."}, {"start": 3020.0, "end": 3023.0, "text": " So do check out. I'm going to link those video cards somewhere here."}, {"start": 3023.0, "end": 3029.0, "text": " But the diffusion playlist is the best place to start if you want to understand a bit better why those work."}, {"start": 3029.0, "end": 3031.0, "text": " Otherwise, this video would be like five hours long or something."}, {"start": 3031.0, "end": 3038.0, "text": " So let me let me enter here. Let me just show you how this roughly looks like."}, {"start": 3038.0, "end": 3047.0, "text": " So you can see here a bunch of those alphas and alpha like the cumulative products and all of those variations of the formulas."}, {"start": 3047.0, "end": 3054.0, "text": " Basically nothing is learnable here. 
These are just the weights of the scheduler that we need to get diffusion to work."}, {"start": 3054.0, "end": 3057.0, "text": " So I'm going to skip across all of these and that's it."}, {"start": 3057.0, "end": 3065.0, "text": " That's an important part. But like something I've covered previously and just bunch of formulas, you wouldn't get any insight from me going through it."}, {"start": 3065.0, "end": 3070.0, "text": " So, yeah, I'm going to skip over that. OK, we are back in the latent diffusion."}, {"start": 3070.0, "end": 3075.0, "text": " So we generated the unit we generated the schedule."}, {"start": 3075.0, "end": 3079.0, "text": " Now let's continue and see what else is interesting here."}, {"start": 3079.0, "end": 3087.0, "text": " I'm going to skip across all of these. So because now we are training the model in a holistic fashion, we obviously have to instantiate the first stage."}, {"start": 3087.0, "end": 3092.0, "text": " And by first stage, they mean the auto encoder. So let's again just briefly go through this one."}, {"start": 3092.0, "end": 3098.0, "text": " This time we are forming this VQ model and not the auto encoder KL. So that's a that's a difference."}, {"start": 3098.0, "end": 3106.0, "text": " So let's just do that. So here we are. We are instantiating the VQ model."}, {"start": 3106.0, "end": 3112.0, "text": " Here we are. And that's going to call the VQ model here. So let's just kind of start entering there."}, {"start": 3112.0, "end": 3117.0, "text": " OK, so here we are. Let's see what's the main difference. We still have the encoder."}, {"start": 3117.0, "end": 3123.0, "text": " I'm going to just toggle off all the breakpoints. We have the encoder. We have the decoder. Nothing have changed there."}, {"start": 3123.0, "end": 3131.0, "text": " The only difference is so we have a loss, which is going to be identity in this this time because we are not training the auto encoder."}, {"start": 3131.0, "end": 3139.0, "text": " We'll just be we'll be just loading the pre-trained weights. So let me enable all the breakpoints and let's enter this part."}, {"start": 3139.0, "end": 3145.0, "text": " So let's see what's going on there. Some beddings, blah, blah, blah."}, {"start": 3145.0, "end": 3149.0, "text": " Well, it's not blah, blah, blah. Embeddings are actually what's important in this model. So this is a codebook."}, {"start": 3149.0, "end": 3157.0, "text": " You can see there is sixteen thousand three hundred eighty four codebook vectors and each of those has four dimensionality of four."}, {"start": 3157.0, "end": 3164.0, "text": " And that's what's being used to do the quantizations later in the forward step. We'll see how it looks like a bit later."}, {"start": 3164.0, "end": 3169.0, "text": " OK, so that's the quantized part. Now we have the conv layer, same as with the auto encoder KL."}, {"start": 3169.0, "end": 3175.0, "text": " Nothing has changed there. Blah, blah, blah. We can skip that. We can skip all of this."}, {"start": 3175.0, "end": 3182.0, "text": " And now we initialize from the pre-trained checkpoint. 
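For reference, here is a minimal sketch of the kind of fixed, non-learnable quantities such a schedule registers, assuming a simple linear beta schedule and the usual DDPM naming (the exact buffers in the codebase differ):

```python
import torch

def make_linear_schedule(num_timesteps: int = 1000,
                         beta_start: float = 1e-4,
                         beta_end: float = 2e-2):
    # Fixed, non-learnable diffusion schedule in the usual DDPM convention.
    betas = torch.linspace(beta_start, beta_end, num_timesteps)
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)                      # alpha_bar_t
    sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod)                   # multiplies x_0 in q(x_t | x_0)
    sqrt_one_minus_alphas_cumprod = torch.sqrt(1.0 - alphas_cumprod)   # multiplies the noise
    return betas, alphas_cumprod, sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod

betas, a_bar, sqrt_a_bar, sqrt_one_minus_a_bar = make_linear_schedule()
print(betas.shape, a_bar[-1])  # 1000 steps; alpha_bar at the final step is close to zero
```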
I'm just going to scheme over all of that."}, {"start": 3182.0, "end": 3190.0, "text": " We are just basically doing the initialization of the auto encoder because remember how the whole logic works."}, {"start": 3190.0, "end": 3196.0, "text": " You first pre-trained the auto encoder and then you basically freeze it and you use its latent space."}, {"start": 3196.0, "end": 3199.0, "text": " And now you train the unit and the conditional model and everything. That's that's it."}, {"start": 3199.0, "end": 3204.0, "text": " That's why we are loading the weights here. OK, and that's it."}, {"start": 3204.0, "end": 3217.0, "text": " Now we do some set the eval mode and basically now we set the gradients to false everywhere and we can just continue on with the execution here."}, {"start": 3217.0, "end": 3225.0, "text": " Let me hit F5. We exit the condition, instantiate the first stage function and now we instantiate the conditional stage."}, {"start": 3225.0, "end": 3232.0, "text": " So this time let's enter this one. I think we are just going to have a class information."}, {"start": 3232.0, "end": 3240.0, "text": " So you can see here a class embedder is the type of a conditioning model that will be instantiating here."}, {"start": 3240.0, "end": 3252.0, "text": " So let's enter there. You can see it's simply thousand classes because we are dealing with image and embedding dimension and then literally just does the embedding in the forward pass."}, {"start": 3252.0, "end": 3258.0, "text": " That's it. That's how the conditional stage model looks like. Let me remind you what."}, {"start": 3258.0, "end": 3266.0, "text": " So that's basically let me show you the diagram here. That's this part in the image."}, {"start": 3266.0, "end": 3272.0, "text": " So this part here is what we've just instantiated and this is the unit. OK, so we are in the stage two."}, {"start": 3272.0, "end": 3278.0, "text": " OK, let's go back here. Let me keep on stepping over here and that's it, guys."}, {"start": 3278.0, "end": 3283.0, "text": " Now I'm going to skip across. This is the main function again. I'm going to skip everything here."}, {"start": 3283.0, "end": 3290.0, "text": " I'm going to also skip the data because that's again just image net and I'm going to stop at the trainer fit here."}, {"start": 3290.0, "end": 3299.0, "text": " So hitting F5 waiting for everything to. Oops. I'm going to have to disable the break points and only then will this work."}, {"start": 3299.0, "end": 3308.0, "text": " So disable and just enable this one. Hit the five. Get to trainer fit and then we're going to start start analyzing how this works."}, {"start": 3308.0, "end": 3315.0, "text": " OK, so enabling the break points and we're going to hit the validation batch as usually."}, {"start": 3315.0, "end": 3321.0, "text": " OK, no, actually. OK, what I've done here is I've added a break point to the configure optimizers this time."}, {"start": 3321.0, "end": 3330.0, "text": " We're just using Adam W here and nothing else is important. I'm going to skip over this again."}, {"start": 3330.0, "end": 3334.0, "text": " PyTorch lightning stuff. So this is not important. Here is the validation loader."}, {"start": 3334.0, "end": 3340.0, "text": " So I'm going to disable all break points and just end up here hitting F5."}, {"start": 3340.0, "end": 3345.0, "text": " We're going to end up in the training part of the of the training. 
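A minimal sketch of a class-label conditioning module of the kind just described, assuming 1000 ImageNet classes and a 512-dimensional embedding (the actual class embedder also pulls the label out of the batch dictionary by key):

```python
import torch
import torch.nn as nn

class SimpleClassEmbedder(nn.Module):
    """Sketch of class-label conditioning: just a learnable embedding table."""
    def __init__(self, num_classes: int = 1000, embed_dim: int = 512):
        super().__init__()
        self.embedding = nn.Embedding(num_classes, embed_dim)

    def forward(self, class_ids: torch.Tensor) -> torch.Tensor:
        # (batch,) integer labels -> (batch, 1, embed_dim) context tokens for cross-attention.
        return self.embedding(class_ids).unsqueeze(1)

embedder = SimpleClassEmbedder()
context = embedder(torch.tensor([0]))   # e.g. ImageNet class 0
print(context.shape)                    # torch.Size([1, 1, 512])
```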
OK, here we are."}, {"start": 3345.0, "end": 3353.0, "text": " I'm going to step over this and now I'm going to enable the break points."}, {"start": 3353.0, "end": 3359.0, "text": " Let's just go through this idiosyncratic part again. And now we're loading the data."}, {"start": 3359.0, "end": 3362.0, "text": " OK, so we are loading the data. Everything remains the same."}, {"start": 3362.0, "end": 3368.0, "text": " We have example that has keys such as images and labels, blah, blah, blah."}, {"start": 3368.0, "end": 3371.0, "text": " I stepped through that part and we end up in the important part."}, {"start": 3371.0, "end": 3376.0, "text": " And that's the training step. Again, that's a function that PyTorch lightning requires you to define."}, {"start": 3376.0, "end": 3383.0, "text": " You can see that we have our batch here and everything is the same as when we were training our encoder."}, {"start": 3383.0, "end": 3390.0, "text": " So batch image shape. You can see it's a 256 256 three channel image."}, {"start": 3390.0, "end": 3396.0, "text": " OK, let's stop. Start stepping through this shared step. That's the first part."}, {"start": 3396.0, "end": 3402.0, "text": " So we call this get input and I think that's just going to grab us the OK."}, {"start": 3402.0, "end": 3408.0, "text": " So it grabs the back the image. So you can see here we just grab the image."}, {"start": 3408.0, "end": 3416.0, "text": " And we end up with so X shape. So we have 256 256 and three. All of that is as usual."}, {"start": 3416.0, "end": 3421.0, "text": " Now we just push it to the GPU. OK, so now here's what we do."}, {"start": 3421.0, "end": 3426.0, "text": " So we encode using the first stage. So that means we don't want to deal with images anymore."}, {"start": 3426.0, "end": 3433.0, "text": " When we're training the diffusion in the LVM in the latent diffusion models, instead we want to deal with the latent space."}, {"start": 3433.0, "end": 3438.0, "text": " That's why we call the encode first stage. So let's do F10 here."}, {"start": 3438.0, "end": 3444.0, "text": " Here's the encoded first stage. And here's what it does. It basically calls first stage model."}, {"start": 3444.0, "end": 3449.0, "text": " It just calls the encode function of that model. Here is how it looks like."}, {"start": 3449.0, "end": 3457.0, "text": " So it's just going to call the encoder. So let's basically let's hit F10 and we are here."}, {"start": 3457.0, "end": 3463.0, "text": " So we are in the encoder. Everything remains. That's the same as our order encoder from the first part of the video."}, {"start": 3463.0, "end": 3470.0, "text": " So I'm going to hit F5. Just bunch of calm layers, rest blocks and down sampling stuff. So F10."}, {"start": 3470.0, "end": 3477.0, "text": " So we end up representation here. So we have now H shape. We have 32 32 4."}, {"start": 3477.0, "end": 3482.0, "text": " OK, four is the number of of latent channels and this is the spatial dimensionality."}, {"start": 3482.0, "end": 3488.0, "text": " Now we do some processing with a calm layer and we return back that representation. And that's it."}, {"start": 3488.0, "end": 3493.0, "text": " That's the encoder posterior. You can see that the shape here is this."}, {"start": 3493.0, "end": 3501.0, "text": " So it's not anymore. 
It's not a Gaussian distribution because these types of all the encoder models work a bit differently, but the logic is fairly similar."}, {"start": 3501.0, "end": 3507.0, "text": " So now let's call this get first stage encoding. Let's see what this is. So I'm going to hit F12 just to see."}, {"start": 3507.0, "end": 3514.0, "text": " OK, so I'm going to enter here. We can see it's not this object, so we will not sample from it."}, {"start": 3514.0, "end": 3526.0, "text": " Instead, because it's a tensor, we simply just map, create this type of variable name binding and we just scale with some constant factor."}, {"start": 3526.0, "end": 3532.0, "text": " Let me see what that number is and how it was defined. I'm not sure about it."}, {"start": 3532.0, "end": 3538.0, "text": " OK, so it's one. OK, so we can we can just ignore all that. So let's continue. So now we have our representation."}, {"start": 3538.0, "end": 3545.0, "text": " So that's the latent representation now because we have conditioning and the conditioning key is, I think, cross attention or something."}, {"start": 3545.0, "end": 3550.0, "text": " So no, it's it's class label, but we are going to integrate using the cross attention logic."}, {"start": 3550.0, "end": 3561.0, "text": " So let's step over here. And what we do is we just passed a batch because the batch contains, if you recall, a bunch of keys and among them, it contains the label."}, {"start": 3561.0, "end": 3570.0, "text": " So that's how they've implemented this. Basically, they pass more data than is needed, but we'll see how that's going to be integrated a bit later."}, {"start": 3570.0, "end": 3581.0, "text": " So let's see what's going on going on here. So we just map H.C. to C. So that's again the batch information. And then we're going to skip all of this."}, {"start": 3581.0, "end": 3590.0, "text": " And finally, we we return back. So this is the latent representation and this is the conditioning information and a bit more stuff because it's the batch information."}, {"start": 3590.0, "end": 3596.0, "text": " OK, and finally, we return back all of that. So that's the first part of the shared step function."}, {"start": 3596.0, "end": 3603.0, "text": " Again, recall that we are currently in the latent diffusion model. Blah, blah, blah."}, {"start": 3603.0, "end": 3610.0, "text": " If I scroll all the way up here, you'll see that. Oh, my God. Oh, my God. Latent diffusion."}, {"start": 3610.0, "end": 3617.0, "text": " OK, so let's let's go back here. Now we do the forward prop through the diffusion model. So let's see how that looks like."}, {"start": 3617.0, "end": 3628.0, "text": " F10. Here we are in the forward prop. We generate some time steps randomly. Basically, this is going to be thousand. So we generate randomly the time steps information."}, {"start": 3628.0, "end": 3637.0, "text": " And now we do the conditioning. So because this is trainable, we get the learned conditioning."}, {"start": 3637.0, "end": 3648.0, "text": " So let's see what's going to happen there. Basically, we just call the forward pass and we pass C. So C is as you can see, C is still a batch information."}, {"start": 3648.0, "end": 3656.0, "text": " So if I enter inside of the forward function of the class embedder, you can see that we are now going to extract the key."}, {"start": 3656.0, "end": 3661.0, "text": " So the class label from here is going to be some label of the image. 
So it's zero."}, {"start": 3661.0, "end": 3671.0, "text": " And then we're going to embed it using the embedding table here. So we're going to return some representation that should have however many dimensions this embedding table had."}, {"start": 3671.0, "end": 3678.0, "text": " I thought it was 128 or something, but C shape is 512. OK, so we return that back."}, {"start": 3678.0, "end": 3685.0, "text": " We return that back. We return the C and here we are. So we have the C now. And finally, we pass the image."}, {"start": 3685.0, "end": 3694.0, "text": " We pass the, sorry, this is not the image. What was X? X should be,"}, {"start": 3694.0, "end": 3699.0, "text": " X should be the latent representation, right? Yeah, it is. So we pass the latent representation."}, {"start": 3699.0, "end": 3703.0, "text": " We pass the conditioning information. We pass the T and we compute the losses."}, {"start": 3703.0, "end": 3714.0, "text": " So this is basically what we saw here. We are literally randomly sampling these latent representations,"}, {"start": 3714.0, "end": 3721.0, "text": " noise and time steps. That's it. Let's go back to the VS Code. OK, so now we have the P losses."}, {"start": 3721.0, "end": 3726.0, "text": " This is where the whole magic of the training happens. We sample some normal noise."}, {"start": 3726.0, "end": 3733.0, "text": " So here we are. So we sample the normal noise and then we do the Q sample. So let's now do the noising process."}, {"start": 3733.0, "end": 3741.0, "text": " So we start from our pure latent representation and we add up t steps of noise on top of it."}, {"start": 3741.0, "end": 3759.0, "text": " So we simulate that. If you recall from my previous videos, there is a formula that makes us capable of doing that in a single step by just combining the start representation with the noise and using these basically non-learnable parameters from the scheduler."}, {"start": 3759.0, "end": 3765.0, "text": " We end up with a noisy version and now we just apply. We pass the noisy latent, the T and the conditioning."}, {"start": 3765.0, "end": 3772.0, "text": " So that's literally this formula here. We are literally passing that z t, t and c."}, {"start": 3772.0, "end": 3777.0, "text": " So let me show you that version here so you can see this formula three in the LDM paper."}, {"start": 3777.0, "end": 3784.0, "text": " We pass these variables here and we are now passing that through the UNet such that we can get the noise as the output."}, {"start": 3784.0, "end": 3796.0, "text": " OK, so let's go back to the code. Let's hit F10 and enter the apply model function. So here what happens is just some variable packing."}, {"start": 3796.0, "end": 3803.0, "text": " Nothing fundamental there. We just pass that vector that was 512 dimensional."}, {"start": 3803.0, "end": 3808.0, "text": " Whoops, I actually have to extract the first element because we packed it into a list. Just some details."}, {"start": 3808.0, "end": 3814.0, "text": " Nothing. Yeah, we are passing the same information. So that's the conditional vector. OK, and here we are."}, {"start": 3814.0, "end": 3818.0, "text": " Now we pass. We call the, this should be the UNet. 
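A minimal sketch of that single-step noising (the Q sample), assuming the linear schedule sketched earlier: z_t = sqrt(alpha_bar_t) * z_0 + sqrt(1 - alpha_bar_t) * noise.

```python
import torch

def q_sample(z_start: torch.Tensor,
             t: torch.Tensor,
             sqrt_alphas_cumprod: torch.Tensor,
             sqrt_one_minus_alphas_cumprod: torch.Tensor,
             noise: torch.Tensor) -> torch.Tensor:
    # Jump directly to step t of the forward (noising) process in one shot.
    a = sqrt_alphas_cumprod[t].view(-1, 1, 1, 1)
    b = sqrt_one_minus_alphas_cumprod[t].view(-1, 1, 1, 1)
    return a * z_start + b * noise

# Example with a 4-channel 32x32 latent, as in the walkthrough.
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
z0 = torch.randn(1, 4, 32, 32)
t = torch.randint(0, 1000, (1,))
noise = torch.randn_like(z0)
z_t = q_sample(z0, t, alphas_cumprod.sqrt(), (1 - alphas_cumprod).sqrt(), noise)
print(z_t.shape)
```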
This should be the UNet."}, {"start": 3818.0, "end": 3825.0, "text": " Let me just, if I do type on this object, and this code is so nice I can do this debugging so easily."}, {"start": 3825.0, "end": 3828.0, "text": " So diffusion wrapper, which contains the UNet model and the scheduler."}, {"start": 3828.0, "end": 3839.0, "text": " OK, so if I do F10, here we are. We enter the diffusion wrapper and because we have the conditioning key set to cross attention,"}, {"start": 3839.0, "end": 3845.0, "text": " we're going to call the diffusion model, pass the conditioning, pass the time steps, pass the latent representation."}, {"start": 3845.0, "end": 3848.0, "text": " And this is going to be automatically handled and integrated by the cross attention."}, {"start": 3848.0, "end": 3854.0, "text": " If I click F10, we should be in the forward pass of the UNet model. Let's see whether that's indeed the case."}, {"start": 3854.0, "end": 3858.0, "text": " And yeah, you can see here this is the definition of the UNet. We are in the right spot."}, {"start": 3858.0, "end": 3865.0, "text": " And so let's now continue. So we just embed the time step information."}, {"start": 3865.0, "end": 3870.0, "text": " So now we end up with some processing on top of those temporal representations."}, {"start": 3870.0, "end": 3880.0, "text": " You can see that this is what the dimensionality of the temporal information now is."}, {"start": 3880.0, "end": 3886.0, "text": " OK, and now we start integrating. Now we start literally going through the UNet."}, {"start": 3886.0, "end": 3895.0, "text": " And you can see here we always pass the representation, the temporal embedding and the conditioning information."}, {"start": 3895.0, "end": 3906.0, "text": " And now that's where the trick comes. So this is where the special object I mentioned a couple of, like 20 minutes ago or something, is going to play,"}, {"start": 3906.0, "end": 3912.0, "text": " come into the picture, because it's going to know exactly what to pass. So I'm going to put a breakpoint here."}, {"start": 3912.0, "end": 3916.0, "text": " I'm going to click F10 and you can see we immediately hit this one."}, {"start": 3916.0, "end": 3923.0, "text": " And now, depending on what instance this layer is, whether it's a spatial transformer or a timestep block or just a plain block,"}, {"start": 3923.0, "end": 3927.0, "text": " it's going to call one of the three versions of the layer. And that's it."}, {"start": 3927.0, "end": 3934.0, "text": " Like now I'm going to hit F5 and it's going to hit the spatial transformer part. OK."}, {"start": 3934.0, "end": 3938.0, "text": " So we hit a spatial transformer. And this is where the contextual information comes in."}, {"start": 3938.0, "end": 3942.0, "text": " So this should be 512. This is the, yeah."}, {"start": 3942.0, "end": 3945.0, "text": " So this is the conditioning information from the label."}, {"start": 3945.0, "end": 3952.0, "text": " And you can see that you basically now do simple transformer logic with the conditioning additionally here."}, {"start": 3952.0, "end": 3955.0, "text": " So we just pass. You can see here just some projections, blah, blah, blah."}, {"start": 3955.0, "end": 3961.0, "text": " We rearrange our representation such that it's suitable for transformers."}, {"start": 3961.0, "end": 3966.0, "text": " We have batch size. 
We have sequence size, basically flattening out the height and the width."}, {"start": 3966.0, "end": 3968.0, "text": " And we have the number of channels. OK."}, {"start": 3968.0, "end": 3977.0, "text": " And now we just pass and do the cross attention with the basically with the transformer blocks."}, {"start": 3977.0, "end": 3981.0, "text": " So let me do this. And that's it. Guys, that's it. That's it."}, {"start": 3981.0, "end": 3984.0, "text": " That's the that's the whole logic."}, {"start": 3984.0, "end": 3987.0, "text": " I'm going to hit F5 again and I'll have to remove this breakpoint."}, {"start": 3987.0, "end": 3991.0, "text": " And now let's get out of here. Let's get out of this function."}, {"start": 3991.0, "end": 3994.0, "text": " Let's just exit this function. OK."}, {"start": 3994.0, "end": 3998.0, "text": " And I'm going to put a breakpoint here. Hit F5 again."}, {"start": 3998.0, "end": 4003.0, "text": " Exit this function. And basically now I'm going to hit F5 again."}, {"start": 4003.0, "end": 4007.0, "text": " And we are exiting the this is the unit for a pass."}, {"start": 4007.0, "end": 4010.0, "text": " We're basically exiting the unit for a pass. And that's it."}, {"start": 4010.0, "end": 4015.0, "text": " So let's exit here and let's see what the output shape is."}, {"start": 4015.0, "end": 4020.0, "text": " It should be the same as the input latent representation."}, {"start": 4020.0, "end": 4024.0, "text": " So it's going to be. Whoops. Let's let's hit F10."}, {"start": 4024.0, "end": 4027.0, "text": " Now we have out. I'm not sure whether this is going to."}, {"start": 4027.0, "end": 4032.0, "text": " Yeah. So we have the same. You can see the same shape as the input representation we passed."}, {"start": 4032.0, "end": 4039.0, "text": " But this is now the prediction. This is now the noise that was put on top of that input latent representation."}, {"start": 4039.0, "end": 4042.0, "text": " So, again, simply what we've done here, we have a unit model."}, {"start": 4042.0, "end": 4048.0, "text": " We passed the input late representation. We have some time step information, some conditioning information."}, {"start": 4048.0, "end": 4052.0, "text": " We use the time step information to noise the input latent."}, {"start": 4052.0, "end": 4057.0, "text": " We combine them with the with the conditioning information. We pass all of that through the unit."}, {"start": 4057.0, "end": 4060.0, "text": " We do a forward pass and we predict back the noise."}, {"start": 4060.0, "end": 4064.0, "text": " And this is where we are at the moment. We have noise."}, {"start": 4064.0, "end": 4067.0, "text": " And let's now see what's going on. Going on."}, {"start": 4067.0, "end": 4078.0, "text": " We're going to return that noise. And here is how the loss is going to look like in the case where we use Epsilon prediction as parameterization."}, {"start": 4078.0, "end": 4090.0, "text": " You can see that exactly this noise that was used to noise the initial representation to get the noisy version is now going to be the one the variable that we're trying to predict."}, {"start": 4090.0, "end": 4093.0, "text": " So that's that's that's it. Like that's as simple as that."}, {"start": 4093.0, "end": 4098.0, "text": " And this is just going to be literally just the MSC loss or something like that."}, {"start": 4098.0, "end": 4104.0, "text": " So so, yeah, let me just do here. 
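A minimal sketch of that conditioning path, using nn.MultiheadAttention as a stand-in for the codebase's spatial transformer: the feature map is flattened into a sequence of queries, while the keys and values come from the conditioning context (channel sizes are illustrative):

```python
import torch
import torch.nn as nn

class TinyCrossAttention(nn.Module):
    """Sketch of how UNet feature maps can attend to a conditioning sequence."""
    def __init__(self, channels: int = 320, context_dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          kdim=context_dim, vdim=context_dim,
                                          batch_first=True)

    def forward(self, feats: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        seq = feats.flatten(2).transpose(1, 2)                       # (b, h*w, c): flatten the spatial dims
        out, _ = self.attn(query=seq, key=context, value=context)    # queries from the image, keys/values from the conditioning
        return out.transpose(1, 2).reshape(b, c, h, w)               # back to a feature map

feats = torch.randn(1, 320, 32, 32)   # intermediate UNet features
context = torch.randn(1, 1, 512)      # e.g. one class-embedding token
print(TinyCrossAttention()(feats, context).shape)
```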
Yeah, it was literally just L2 loss."}, {"start": 4104.0, "end": 4108.0, "text": " Nothing nothing other than that. And that's it. As simple as that."}, {"start": 4108.0, "end": 4113.0, "text": " Again, I think this log bar is set to zero. So it will not influence."}, {"start": 4113.0, "end": 4119.0, "text": " Yeah, it will not literally change anything by doing this. We don't do we don't change the loss."}, {"start": 4119.0, "end": 4124.0, "text": " And we just now do some waiting. And that's it."}, {"start": 4124.0, "end": 4131.0, "text": " I'm not sure why we need the VLB loss, the lower bound loss, because it's going to be the same computation as what we had up there."}, {"start": 4131.0, "end": 4141.0, "text": " So look, this thing here is the same as this thing here. So if I do have 11 and enter this will again will again hit the L2 loss branch."}, {"start": 4141.0, "end": 4144.0, "text": " So we just compute the L2 loss again and we return that log."}, {"start": 4144.0, "end": 4149.0, "text": " OK, I'm not sure why we're computing this, because it literally gives us the same results."}, {"start": 4149.0, "end": 4159.0, "text": " If I were to print loss simple here and if I were to print loss VLB, so the variational lower bound, we get the same values and we compute literally the same lines here."}, {"start": 4159.0, "end": 4163.0, "text": " There is the only difference is we have a different weight here for this loss."}, {"start": 4163.0, "end": 4168.0, "text": " And then because this is zero, this will not even have the impact on the final loss."}, {"start": 4168.0, "end": 4173.0, "text": " That part is kind of confusing. And yeah, that's pretty much it."}, {"start": 4173.0, "end": 4182.0, "text": " After I do this, there is additionally some some like logic with the with EMA here."}, {"start": 4182.0, "end": 4189.0, "text": " But yeah, nothing, nothing, nothing vital there."}, {"start": 4189.0, "end": 4192.0, "text": " OK, so we can we can skip over all of this and that's it."}, {"start": 4192.0, "end": 4194.0, "text": " And now we just keep on repeating the batch after batch."}, {"start": 4194.0, "end": 4206.0, "text": " So that's it. Just a forward pass through the unit. And then we basically do L2 loss between the predicted noise versus the noise that we used to noise our input representation."}, {"start": 4206.0, "end": 4212.0, "text": " Fairly simple. Literally the formula we saw in the paper just being played out here in the code."}, {"start": 4212.0, "end": 4221.0, "text": " So that's this formula here. We are randomly sampling these noise and conditioning label."}, {"start": 4221.0, "end": 4227.0, "text": " OK, that one is correlated, obviously, with the image. And then we just use the latent representation of that image."}, {"start": 4227.0, "end": 4234.0, "text": " So Formula 3 is everything that we have in the stage two of training this system here."}, {"start": 4234.0, "end": 4239.0, "text": " After we've done this, we literally can now use the pre-trained weights and start sampling."}, {"start": 4239.0, "end": 4243.0, "text": " So let me show you how we can sample using using this this code."}, {"start": 4243.0, "end": 4254.0, "text": " So I'm going to stop this and let's go to the long script. And this time we're going to be using the text to image script, this one."}, {"start": 4254.0, "end": 4260.0, "text": " And the only argument you have to pass is this PLMS. 
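A minimal sketch of that training signal, with an arbitrary callable standing in for the UNet (the actual loss additionally applies the logvar term and the weighting discussed above):

```python
import torch
import torch.nn.functional as F

def eps_prediction_loss(model, z_t, t, context, true_noise):
    # The model predicts the noise that was added to z_0; the training signal is
    # a plain MSE between the predicted and the actual noise.
    predicted_noise = model(z_t, t, context)
    return F.mse_loss(predicted_noise, true_noise)

# Dummy "model" that just returns zeros, to show the shapes only.
dummy_model = lambda z, t, c: torch.zeros_like(z)
loss = eps_prediction_loss(dummy_model, torch.randn(1, 4, 32, 32),
                           torch.randint(0, 1000, (1,)), None, torch.randn(1, 4, 32, 32))
print(loss.item())
```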
So that's going to be the scheduler we are using."}, {"start": 4260.0, "end": 4269.0, "text": " You can also use the DDIM, but like this one has higher quality and they showed in the paper that the DDIM is just a special case of the PLMS scheduler."}, {"start": 4269.0, "end": 4274.0, "text": " OK, so let's open up the text to image and let's start."}, {"start": 4274.0, "end": 4283.0, "text": " OK, guys, let's pick the correct configuration here and I'm going to hit run and we'll soon start executing the sampling script."}, {"start": 4283.0, "end": 4287.0, "text": " So this one is used using a textual prompt. You can now generate the images."}, {"start": 4287.0, "end": 4292.0, "text": " And that's how these text-conditioned image generation models became popular."}, {"start": 4292.0, "end": 4296.0, "text": " You can kind of tweak the prompts and generate corresponding images."}, {"start": 4296.0, "end": 4302.0, "text": " OK, so here's the prompt I'm currently using: a painting of an AI having an epiphany moment."}, {"start": 4302.0, "end": 4306.0, "text": " That's the prompt we're using. You specify the output directory, whether you want to."}, {"start": 4306.0, "end": 4311.0, "text": " This is not important. Skipping all of these. Number of diffusion steps."}, {"start": 4311.0, "end": 4318.0, "text": " That's important. We will be using five in this example just to make sure this is going to execute very, very quickly."}, {"start": 4318.0, "end": 4322.0, "text": " The PLMS flag is set such that we use that scheduler."}, {"start": 4322.0, "end": 4329.0, "text": " This one is also not important. This is if you want to basically always start from the same latent."}, {"start": 4329.0, "end": 4335.0, "text": " And that's going to kind of constrain your outputs to be less diverse as a consequence of that."}, {"start": 4335.0, "end": 4338.0, "text": " We don't need that. We don't care about this variable as well."}, {"start": 4338.0, "end": 4342.0, "text": " This is how many images I'm generating. Just nine."}, {"start": 4342.0, "end": 4345.0, "text": " The input image dimensions, 512 by 512."}, {"start": 4345.0, "end": 4353.0, "text": " The latent number of channels is four. The downsampling factor is eight. Number of samples is one."}, {"start": 4353.0, "end": 4356.0, "text": " Blah, blah, blah. We can skip all of this. There's too many."}, {"start": 4356.0, "end": 4362.0, "text": " Oh, my God. OK, so I'm just going to hit F5 and get here."}, {"start": 4362.0, "end": 4369.0, "text": " We can skip this part because it's false. We seed everything. We load the configuration."}, {"start": 4369.0, "end": 4372.0, "text": " So let's see what that's going to be. OK, we want inference."}, {"start": 4372.0, "end": 4376.0, "text": " So let me see. So let's go to configs. Let's go to configs here."}, {"start": 4376.0, "end": 4382.0, "text": " Let's go to stable diffusion. We want inference. This is how that thing looks like."}, {"start": 4382.0, "end": 4393.0, "text": " So, again, it's literally just going to specify the latent model, specify the UNet, specify all the parameters, the autoencoder, pretty much everything."}, {"start": 4393.0, "end": 4399.0, "text": " And we're going to be using CLIP this time, not the class conditional information, for the conditioning stage."}, {"start": 4399.0, "end": 4403.0, "text": " That's it. 
It's going to need that all of this is specified inside of a config file."}, {"start": 4403.0, "end": 4406.0, "text": " A couple of things you need to do if you want to follow along."}, {"start": 4406.0, "end": 4412.0, "text": " First up is you have to go to this page and download the weights."}, {"start": 4412.0, "end": 4418.0, "text": " And the best ones are v1.4, or you can play with v1.3 as well."}, {"start": 4418.0, "end": 4424.0, "text": " Basically, go ahead, download those, put them in the corresponding directory. And that's it."}, {"start": 4424.0, "end": 4431.0, "text": " After that, there are a couple more things. So I had to, again, make some tweaks."}, {"start": 4431.0, "end": 4435.0, "text": " So one of the tweaks is the following. So let's go to latent diffusion."}, {"start": 4435.0, "end": 4442.0, "text": " I'm using this ImageNet config file. I had to specify, so here's where you specify the checkpoint path."}, {"start": 4442.0, "end": 4448.0, "text": " So wherever you download it, you can see you need to specify the checkpoint path there."}, {"start": 4448.0, "end": 4456.0, "text": " And there is the batch size. I also reduced it from 64 to 1. Otherwise, I'm getting CUDA out of memory exceptions."}, {"start": 4456.0, "end": 4462.0, "text": " OK, so that's one thing. And then you have to also modify this text to image."}, {"start": 4462.0, "end": 4471.0, "text": " So basically what I had to do is to set, and this is very dirty, just to make sure that I'm getting this to work on my computer."}, {"start": 4471.0, "end": 4473.0, "text": " But there are better ways to do this."}, {"start": 4473.0, "end": 4486.0, "text": " And it's much better for you to use the diffusers library than to do what I've done here because they actually have some parts where they accumulate in FP32 instead of doing everything in FP16 as I'm doing."}, {"start": 4486.0, "end": 4493.0, "text": " So as a consequence, I'm probably getting a bit lower quality images, but it works and I can step through my code."}, {"start": 4493.0, "end": 4502.0, "text": " So that's one tweak. And then what else? Let me see what I had to change. That's important to get this to work."}, {"start": 4502.0, "end": 4510.0, "text": " Set the number of samples to one. Otherwise, with a batch size of three, I was getting out of memory exceptions as well."}, {"start": 4510.0, "end": 4515.0, "text": " OK, so here's the checkpoint information. You don't need this part."}, {"start": 4515.0, "end": 4527.0, "text": " And finally, I have explicitly made it such that we are dealing with float16 here and not with mixed precision, because I was hitting errors,"}, {"start": 4527.0, "end": 4545.0, "text": " if I recall correctly, if I don't put the FP16 here explicitly. And finally, just for the sake of speed, I commented out the check safety functionality that basically checks whether you have not safe for work or other problematic content."}, {"start": 4545.0, "end": 4553.0, "text": " OK, having said that, let's go back to the text to image script and we can now continue. That's everything you need to know."}, {"start": 4553.0, "end": 4562.0, "text": " OK, we now load the model. So we load the diffusion model this time. Obviously, I'm going to disable. We've seen all of this."}, {"start": 4562.0, "end": 4567.0, "text": " So actually, we're going to enter this part and I'm only going to show you the difference."}, {"start": 4567.0, "end": 4580.0, "text": " And that's loading the CLIP model. So I'm going to hit F10. 
And let's just see how the clip model is being used to basically create the conditioning information from the input prompt."}, {"start": 4580.0, "end": 4589.0, "text": " And that's going to be the only interesting part. OK, so I'm now going to toggle on all the breakpoints, hit F10."}, {"start": 4589.0, "end": 4599.0, "text": " And here we are. So we are now creating the conditioning stage. Instantiated from config. We have frozen clip embedder. So let's hit F10."}, {"start": 4599.0, "end": 4610.0, "text": " So here it is. So basically what I do is they use hugging faces, pre trained clip tokenizer and pre trained text model. And that's it."}, {"start": 4610.0, "end": 4615.0, "text": " Like everything else, I've covered clip in one of my previous videos, so I'm going to link it somewhere here."}, {"start": 4615.0, "end": 4621.0, "text": " You can you can go and check it out if you want to understand how clip exactly works. I also covered papers."}, {"start": 4621.0, "end": 4627.0, "text": " So, yeah, there is plenty of information about clip on my channel. That's it."}, {"start": 4627.0, "end": 4633.0, "text": " I'm going to leave the forward function, the breakpoint there and let's exit here."}, {"start": 4633.0, "end": 4641.0, "text": " And we are done. We're done. We just set the gradients to false because we don't want to train this. We're now in the sampling stage."}, {"start": 4641.0, "end": 4646.0, "text": " So I'm going to go back to the text to image and I'm just going to put a breakpoint here."}, {"start": 4646.0, "end": 4651.0, "text": " I'm going to basically disable everything and just leave that breakpoint here."}, {"start": 4651.0, "end": 4657.0, "text": " Let's exit the load model from config function and continue from there."}, {"start": 4657.0, "end": 4670.0, "text": " OK, so pushing the model to GPU, we instantiate the PLMS sampler when it comes to well, when it comes to any function, it's not that complicated."}, {"start": 4670.0, "end": 4676.0, "text": " We literally just store the LDM model here. So this is going to be just the let me show you this."}, {"start": 4676.0, "end": 4687.0, "text": " So basically type is going to be LDM. So you can see here late in diffusion and we have thousand steps were used to train the model."}, {"start": 4687.0, "end": 4693.0, "text": " That's an important information to form the schedule and schedule is going to be like a linear schedule."}, {"start": 4693.0, "end": 4700.0, "text": " OK, so I'm going to show you more once we get to the actual forward pass sampling."}, {"start": 4700.0, "end": 4716.0, "text": " They additionally have this border mark tool that basically makes sure that it encodes a watermark that's invisible to humanize basically into the image such that we know that it was machine generated image."}, {"start": 4716.0, "end": 4724.0, "text": " And later we can use that watermark to exclude those samples from our training data if we decide to do that."}, {"start": 4724.0, "end": 4734.0, "text": " I guess that's one of the main reasons they were they were doing that as well as to catch someone who is generating images and not giving proper credit to stable diffusion."}, {"start": 4734.0, "end": 4744.0, "text": " I assume. OK, so here's a prompt. We have a painting of an AI having epiphany moment and we form some output directories."}, {"start": 4744.0, "end": 4750.0, "text": " Nothing interesting there. 
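For reference, the frozen CLIP embedder that was just instantiated boils down to a few lines with Hugging Face's transformers (SD v1 uses the ViT-L/14 text encoder; the prompt below is simply the one from the video):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

version = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(version)
text_encoder = CLIPTextModel.from_pretrained(version).eval()

# empty prompt (unconditional conditioning) plus the actual prompt
prompts = ["", "a painting of an AI having an epiphany moment"]
batch = tokenizer(prompts, truncation=True, max_length=77,
                  padding="max_length", return_tensors="pt")
with torch.no_grad():
    cond = text_encoder(input_ids=batch["input_ids"]).last_hidden_state

print(cond.shape)                                   # torch.Size([2, 77, 768])
print(tokenizer.decode(batch["input_ids"][0][:2]))  # '<|startoftext|><|endoftext|>'
```

The 77 x 768 tensor per prompt is exactly the conditioning that later gets handed to the U-Net's cross-attention layers.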
And finally, here is so this is the interesting part."}, {"start": 4750.0, "end": 4755.0, "text": " We're going to start the sampling here. So I'm going to hit F5 exit that part."}, {"start": 4755.0, "end": 4761.0, "text": " And here it is. So we we have a single problem. So this loop is going to be kind of trivial."}, {"start": 4761.0, "end": 4768.0, "text": " The first part we do is we get the learned conditioning from the for the empty prompt."}, {"start": 4768.0, "end": 4774.0, "text": " So this is the again a classifier free guidance technique."}, {"start": 4774.0, "end": 4780.0, "text": " So because our scale is seven point five, so that's the guidance scale. That's why we basically entered this part."}, {"start": 4780.0, "end": 4788.0, "text": " And now let's let me just make sure this is all enabled. Let's enter this and let's see how clip works here."}, {"start": 4788.0, "end": 4795.0, "text": " So what we do is we encode the prompt and the prompt in this particular case is an empty prompt."}, {"start": 4795.0, "end": 4803.0, "text": " So encoding is just a forward pass through the frozen clipping better. So we just tokenize the text."}, {"start": 4803.0, "end": 4813.0, "text": " You can see here we get batching codings. So if I were to since because it's an empty prompt, it's going to be fairly trivial."}, {"start": 4813.0, "end": 4818.0, "text": " Let's see the shape here. OK, so it's going to be a trivial encoding. Basically, all of the numbers are the same."}, {"start": 4818.0, "end": 4823.0, "text": " Those are the this is the beginning of sentence token. And these are the end of sentence."}, {"start": 4823.0, "end": 4834.0, "text": " OK, we can kind of validate that by doing the following. So tokenizer decode, I think, was the name. And then we just pass this number."}, {"start": 4834.0, "end": 4841.0, "text": " And let's see what it is to start of text. And the one with ending in seven is the end of text. That's it."}, {"start": 4841.0, "end": 4847.0, "text": " That's how the representation is. The sequence of IDs is formed."}, {"start": 4847.0, "end": 4854.0, "text": " We push them to GPU and we just passed them now to the transformer, which is basically the textual part of the clip model."}, {"start": 4854.0, "end": 4864.0, "text": " And we end up with a final representation that's seventy seven five or something. So seventy seven and six seven hundred sixty eight."}, {"start": 4864.0, "end": 4870.0, "text": " That's that's the final representation that came from clip. And we're going to use that to condition the unit model."}, {"start": 4870.0, "end": 4876.0, "text": " That's the idea. OK, so let's get back here. Let's get back here. We're returning the information and that's it."}, {"start": 4876.0, "end": 4883.0, "text": " So that's going to be used to condition the unit. Now we do the same thing just with the with the actual prompt."}, {"start": 4883.0, "end": 4888.0, "text": " So I'm going to disable the breakpoints here and we do the same thing. So I'm just going to skip that again."}, {"start": 4888.0, "end": 4898.0, "text": " C is going to be same shape as you see. You see standing for basically unconditional conditioning."}, {"start": 4898.0, "end": 4908.0, "text": " And finally, here is where the sampling starts. After the sampling, we have a simple we just pass the final latent representation through the decoder,"}, {"start": 4908.0, "end": 4917.0, "text": " which is just a set of layers, attention layers, etc, etc. 
And up sampling, do some clamping, put the put the final representation on the CPU."}, {"start": 4917.0, "end": 4923.0, "text": " When I say final presentation, I mean image, disimplementation, blah, blah, blah. And then we can store the image here."}, {"start": 4923.0, "end": 4929.0, "text": " And that's everything. Everything else is kind of arranging the images into into grids."}, {"start": 4929.0, "end": 4938.0, "text": " So this is where the gist of the logic is. So in the sample part, so I'm going to enable all the breakpoints and let's enter this part."}, {"start": 4938.0, "end": 4947.0, "text": " Let's hit F5. And here we are. So we enter the sample part. And because we have conditioning, let's see what we do."}, {"start": 4947.0, "end": 4955.0, "text": " We do some error checking there. We make a schedule. So make a schedule is just going to make sure that we set the appropriate constants."}, {"start": 4955.0, "end": 4961.0, "text": " So let me let me go into this. So here here we are. So first of all, let's see."}, {"start": 4961.0, "end": 4969.0, "text": " So we have uniform discretization. Number of steps is five because I said this is just dumb."}, {"start": 4969.0, "end": 4978.0, "text": " Usually what you want to use like 50 is OK, 200 if you want to get a bit better results. But there is a saturated saturation going on."}, {"start": 4978.0, "end": 4983.0, "text": " Definitely. So 50 is completely fine. So this is the actual number we use during the training."}, {"start": 4983.0, "end": 4992.0, "text": " That's a vital information for the scheduler so that we can construct the the the final set of time steps."}, {"start": 4992.0, "end": 5000.0, "text": " I'm going to disable these and we're going to end up as you can see here. So one two one four one six one and eight one."}, {"start": 5000.0, "end": 5006.0, "text": " That means that's how we uniformly sampled our thousand time stamps into only five time steps. That's it."}, {"start": 5006.0, "end": 5011.0, "text": " OK, next up, we grab the original alphas from our diffusion model here."}, {"start": 5011.0, "end": 5020.0, "text": " We create some lambda functions. And then, as I said, we just start creating these non learnable constants."}, {"start": 5020.0, "end": 5029.0, "text": " And I'm going to skip across all of these. Because it's really hard to explain this without taking one hour or something."}, {"start": 5029.0, "end": 5034.0, "text": " So, yeah. OK, so finally, we have those constants in place. We formed our time steps."}, {"start": 5034.0, "end": 5041.0, "text": " Now let's enter the logic here. So enable breakpoints. Let's hit F5 and enter this part."}, {"start": 5041.0, "end": 5054.0, "text": " So here we are. We are starting to generate the image right now. So here we generate the initial random basically noise tensor,"}, {"start": 5054.0, "end": 5060.0, "text": " which is going to be sixty four sixty four four, because that's the size of the latent space of this particular LDM."}, {"start": 5060.0, "end": 5066.0, "text": " And we start there and then we we form the time steps. 
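The uniform discretization just described is tiny; roughly what the scheduler computes when 1000 training steps are squeezed into 5 sampling steps (a sketch, the function name is illustrative):

```python
import numpy as np

def make_uniform_timesteps(num_train_steps=1000, num_sampling_steps=5):
    # take every (1000 // 5) = 200th step, then shift by one
    stride = num_train_steps // num_sampling_steps
    return np.asarray(list(range(0, num_train_steps, stride))) + 1

print(make_uniform_timesteps())          # [  1 201 401 601 801]
print(make_uniform_timesteps(1000, 50))  # the 50-step schedule you would normally use
```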
So you can see here 1, 201, etc."}, {"start": 5066.0, "end": 5073.0, "text": " And we just reverse the time steps here because when we are generating, we want to start from the end."}, {"start": 5073.0, "end": 5080.0, "text": " We're starting from the noise image and we're going in reverse through time until we generate the actual image from the data distribution that we learned."}, {"start": 5080.0, "end": 5087.0, "text": " OK, so running PLMS sampling with five time steps. Let's continue there."}, {"start": 5087.0, "end": 5096.0, "text": " Basically, we create the time steps there. The first one should be, I guess, 801 or something. So ts is 801."}, {"start": 5096.0, "end": 5103.0, "text": " And then we grab the next ones because the PLMS logic needs them. So it's going to be, I guess, 601."}, {"start": 5103.0, "end": 5110.0, "text": " Yeah. And then let's continue. So now we end up doing this. We call the p_sample_plms function."}, {"start": 5110.0, "end": 5118.0, "text": " The code is fairly messy and complicated. So, yeah, I apologize for not being able to explain this a bit more clearly, but I'm giving my best here."}, {"start": 5118.0, "end": 5125.0, "text": " So stick with me. So, OK, let's see how the final logic looks. So I'm setting a breakpoint here."}, {"start": 5125.0, "end": 5135.0, "text": " We're going to hit that line a bit later. So for now, we just grab these constants, alphas, sigmas, etc."}, {"start": 5135.0, "end": 5143.0, "text": " I'm also going to put a breakpoint here. So we're going to enter that function. And here is the final logic that the PLMS sampler does."}, {"start": 5143.0, "end": 5151.0, "text": " So we get the model output and that's basically a forward pass through the U-Net."}, {"start": 5151.0, "end": 5158.0, "text": " So let's go there. Let's see what's going on there. So here it is, because we're doing the classifier-free guidance."}, {"start": 5158.0, "end": 5163.0, "text": " We have to repeat our input representation, which is currently just pure noise."}, {"start": 5163.0, "end": 5170.0, "text": " Our time steps as well. We have to concatenate both our unconditional conditioning and the conditioning from the actual prompt."}, {"start": 5170.0, "end": 5180.0, "text": " We do a forward pass through the U-Net model. So I'm going to just disable breakpoints here and do the forward pass, because nothing insightful is happening there."}, {"start": 5180.0, "end": 5184.0, "text": " And finally, we get the output representations here, which we then combine."}, {"start": 5184.0, "end": 5197.0, "text": " So we combine the noise we get when we condition the U-Net with the unconditional conditioning with the noise we get when using the actual prompt conditioning here."}, {"start": 5197.0, "end": 5203.0, "text": " And that's how we form our final noise prediction. OK, after that."}, {"start": 5203.0, "end": 5210.0, "text": " So, OK, it took me some time to figure out how to connect this code with the formulas from the paper."}, {"start": 5210.0, "end": 5216.0, "text": " So let me show you a side by side comparison of the code and the formulas. And let's start and figure out what's going on here."}, {"start": 5216.0, "end": 5224.0, "text": " OK, guys, so here is the paper. So I opened up the Pseudo Numerical Methods for Diffusion Models on Manifolds paper."}, {"start": 5224.0, "end": 5231.0, "text": " It's a mouthful. Even the title is hard to comprehend. So let's get back to this statement here."},
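Before mapping the code onto the paper, here is the classifier-free guidance step that was just described, written out as a small sketch (unet_apply stands in for the model's noise-prediction call):

```python
import torch

def guided_noise(unet_apply, x, t, uncond, cond, scale=7.5):
    """Classifier-free guidance: run the U-Net on both conditionings in one batch."""
    x_in = torch.cat([x] * 2)          # duplicate the (noisy) latent
    t_in = torch.cat([t] * 2)          # duplicate the timestep
    c_in = torch.cat([uncond, cond])   # empty-prompt and real-prompt conditionings
    e_t_uncond, e_t = unet_apply(x_in, t_in, c_in).chunk(2)
    # push the prediction away from the unconditional output, towards the prompt
    return e_t_uncond + scale * (e_t - e_t_uncond)
```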
{"start": 5231.0, "end": 5241.0, "text": " Oops, let me just find it. Basically, we see four branches here. And the reason for that is because they are using this linear multistep method."}, {"start": 5241.0, "end": 5253.0, "text": " And they say here: we cannot use the linear multistep method initially because it cannot start automatically, since it needs at least three previous steps' information to generate results."}, {"start": 5253.0, "end": 5262.0, "text": " So we use the Runge-Kutta method to compute the first three steps' results and then use the linear multistep method to calculate the remaining ones."}, {"start": 5262.0, "end": 5269.0, "text": " OK, so I've done some annotations here so that we can find the precise formula for each of these branches."}, {"start": 5269.0, "end": 5274.0, "text": " So let's start like that. OK, so maybe I'll start with the fourth branch."}, {"start": 5274.0, "end": 5286.0, "text": " So the final step: once we have had at least three steps of this sampling procedure, we'll basically end up hitting this branch every single time."}, {"start": 5286.0, "end": 5292.0, "text": " So let's go to Formula 12 and let's convince ourselves that this makes sense."}, {"start": 5292.0, "end": 5298.0, "text": " OK, so here is Formula 12. You can see that the first step is to calculate the epsilon theta."}, {"start": 5298.0, "end": 5307.0, "text": " So this is what we've done here in this step. So in get_model_output we get our epsilon, the noise prediction here."}, {"start": 5307.0, "end": 5314.0, "text": " The next step, as you can see here, is to calculate this epsilon prime, and you can see it's one over 24 blah blah blah."}, {"start": 5314.0, "end": 5318.0, "text": " Some expression there, and you can see that it corresponds directly to this one here."}, {"start": 5318.0, "end": 5326.0, "text": " So 55 e_t minus 59 e_(t minus delta), etc. So you can kind of see this corresponds to this."}, {"start": 5326.0, "end": 5333.0, "text": " So here is the third step, this phi function, which they call, I think, the transfer function or something like that."}, {"start": 5333.0, "end": 5339.0, "text": " Basically, here is where it is defined, but we will not use it for this explanation."}, {"start": 5339.0, "end": 5350.0, "text": " So let me just get back here. So what I've done is basically figure out what this function here is, which we'll get to in a couple of seconds."}, {"start": 5350.0, "end": 5357.0, "text": " So let me just enable the breakpoints for a second here. So let's enable all of the breakpoints. Let me do this."}, {"start": 5357.0, "end": 5367.0, "text": " So this function basically corresponds to Formula 8. So that's this formula here."}, {"start": 5367.0, "end": 5372.0, "text": " We're going to convince ourselves of that in a couple of seconds. OK, but let me get back here."}, {"start": 5372.0, "end": 5378.0, "text": " So we saw that this expression here makes sense. Now, let's make sense out of the other branches."}, {"start": 5378.0, "end": 5382.0, "text": " So for this one, I could not find the corresponding expression in the paper."}, {"start": 5382.0, "end": 5387.0, "text": " So if anyone knows what the heck is going on, feel free to comment down below.
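Paraphrasing the four branches being walked through here (the coefficients are the ones used by the PLMS sampler; get_x_prev_and_pred_x0 and get_model_output are the surrounding helper functions, passed in so the sketch stays self-contained):

```python
def plms_epsilon(e_t, old_eps, index, t_next, get_x_prev_and_pred_x0, get_model_output):
    """Combine the current and past noise predictions, as in the four PLMS branches."""
    if len(old_eps) == 0:
        # no history yet: take a provisional step, re-evaluate, and average (Runge-Kutta-style start)
        x_prev, _ = get_x_prev_and_pred_x0(e_t, index)
        e_t_next = get_model_output(x_prev, t_next)
        return (e_t + e_t_next) / 2
    if len(old_eps) == 1:
        return (3 * e_t - old_eps[-1]) / 2
    if len(old_eps) == 2:
        return (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
    # three or more previous predictions: 4th-order linear multistep (Formula 12)
    return (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
```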
I'm listening."}, {"start": 5387.0, "end": 5394.0, "text": " So for the for for this branch here, I found the Formula 23 to correspond to this one."}, {"start": 5394.0, "end": 5400.0, "text": " So let's let me let me find that one. So 23 is in the appendix of the paper."}, {"start": 5400.0, "end": 5407.0, "text": " It's kind of hard to, yeah, even find the correspondence between these, let alone have some intuition."}, {"start": 5407.0, "end": 5410.0, "text": " And I guess even the authors of this paper don't have the intuition."}, {"start": 5410.0, "end": 5416.0, "text": " It's more of a OK, we associated this differential equation with this diffusion process."}, {"start": 5416.0, "end": 5426.0, "text": " And we just then can automatically pick up the tools that we already have from a long and rich history of solving differential equations and apply those to solve diffusion."}, {"start": 5426.0, "end": 5430.0, "text": " But like intuition wise, I'm not sure anyone understand what's going on here."}, {"start": 5430.0, "end": 5434.0, "text": " Like I might be wrong, but that's that's my current understanding of the things."}, {"start": 5434.0, "end": 5438.0, "text": " OK, so I said Formula 23. Here it is."}, {"start": 5438.0, "end": 5445.0, "text": " So you can see we calculate Epsilon again and then we have one over half three Epsilon minus Epsilon old."}, {"start": 5445.0, "end": 5450.0, "text": " And you can see that exact formula here. So we are computing that result here."}, {"start": 5450.0, "end": 5454.0, "text": " And then finally, let me go to equation 22."}, {"start": 5454.0, "end": 5458.0, "text": " That's the first branch of this of this complex branching."}, {"start": 5458.0, "end": 5463.0, "text": " So first on a high level, this let me kind of step over here."}, {"start": 5463.0, "end": 5469.0, "text": " So this function here get X previous and prediction X zero is calculating whatever phi is."}, {"start": 5469.0, "end": 5480.0, "text": " We'll see what it is in a second. So once we have the results, so we have this X prev, then we feed that into the neural network and we again grab the find the noise."}, {"start": 5480.0, "end": 5489.0, "text": " And that corresponds to this this step here. So whatever is output from the second step, we feed it back into our neural network."}, {"start": 5489.0, "end": 5496.0, "text": " We feed in the T next as you can see here. So that's why we have T plus Delta here and we get back the results here."}, {"start": 5496.0, "end": 5505.0, "text": " OK, and then finally we grab those results from this step and we just add them up with the Epsilon from the previous step and we divide by two."}, {"start": 5505.0, "end": 5509.0, "text": " So that's this part here and we end up with the E T prime."}, {"start": 5509.0, "end": 5513.0, "text": " And then after that, we're going to again call the five function here."}, {"start": 5513.0, "end": 5518.0, "text": " So now I guess it boils down to figure out what this what this five function is."}, {"start": 5518.0, "end": 5523.0, "text": " So let's step inside of this function. Let me show you what's going on here."}, {"start": 5523.0, "end": 5528.0, "text": " So again, these are just some non learnable expressions."}, {"start": 5528.0, "end": 5537.0, "text": " I'm going to skip those. But let me find the formula eight, which directly I found that that one corresponds directly to this to this code here."}, {"start": 5537.0, "end": 5544.0, "text": " So let me find that. OK, so formula eight. 
Here it is."}, {"start": 5544.0, "end": 5552.0, "text": " So let's convince ourselves that this makes sense. So here we have X minus square root one minus 80 times E T Epsilon."}, {"start": 5552.0, "end": 5556.0, "text": " So we can see that corresponds to this expression here."}, {"start": 5556.0, "end": 5563.0, "text": " We basically have X minus square root one minus this cumulative sum product of alphas."}, {"start": 5563.0, "end": 5569.0, "text": " And then we multiply that, as you can see here, with with the output of our neural network."}, {"start": 5569.0, "end": 5574.0, "text": " And then we divide all of that by the square root of 80 here."}, {"start": 5574.0, "end": 5577.0, "text": " And that's pretty much this first term next up."}, {"start": 5577.0, "end": 5581.0, "text": " So that's the first part. And then we calculate the second part."}, {"start": 5581.0, "end": 5585.0, "text": " The second part is, as you can see here, it's computing this part here."}, {"start": 5585.0, "end": 5592.0, "text": " So one minus a previous minus sigma squared square root of all of that times the Epsilon."}, {"start": 5592.0, "end": 5597.0, "text": " And you can see that's precisely this term here. So let me now continue stepping."}, {"start": 5597.0, "end": 5604.0, "text": " Now we calculate the noise. And finally, let's see how all of that is combined together."}, {"start": 5604.0, "end": 5614.0, "text": " We have a previous square root. So that's this part times whatever we predicted up there, which was the first term, plus this term here."}, {"start": 5614.0, "end": 5619.0, "text": " And then finally, plus the noise. So that's this part here."}, {"start": 5619.0, "end": 5626.0, "text": " Let me just see what the value of sigma t is. And it's zero. OK, so this this part will actually be ignored."}, {"start": 5626.0, "end": 5630.0, "text": " So as you can see here, this part, the noise, because it's multiplied by sigma t."}, {"start": 5630.0, "end": 5637.0, "text": " Let me just kind of go into the debug console here. Let's convince ourselves sigma t is zero."}, {"start": 5637.0, "end": 5642.0, "text": " And that makes sense because here in the paper, they mention it somewhere."}, {"start": 5642.0, "end": 5650.0, "text": " Let me just find it that they only care about the case where sigma is equal to zero."}, {"start": 5650.0, "end": 5655.0, "text": " Let me just find that one. OK, I found it. I didn't highlight it initially, so it was hard to find it."}, {"start": 5655.0, "end": 5661.0, "text": " So they say here, therefore, our work concentrate on the case where sigma equals zero."}, {"start": 5661.0, "end": 5668.0, "text": " OK, let me read this for you. 
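Written out, the phi / transfer function the code implements (Formula 8, where the "a t" heard in the audio is the cumulative product of alphas) is:

\[
\hat{x}_0 = \frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t,t)}{\sqrt{\bar\alpha_t}},
\qquad
x_{t-\delta} = \sqrt{\bar\alpha_{t-\delta}}\,\hat{x}_0
            + \sqrt{1-\bar\alpha_{t-\delta}-\sigma_t^2}\;\epsilon_\theta(x_t,t)
            + \sigma_t z
\]

With sigma_t set to zero the last random-noise term drops out, which is exactly the deterministic DDIM-style update the next paragraph talks about.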
So here sigma controls the ratio of random noise because it's modulating the noise, as you can see here in Formula 8."}, {"start": 5668.0, "end": 5674.0, "text": " If sigma equals one, equation eight represents the reverse process of DDPMs."}, {"start": 5674.0, "end": 5677.0, "text": " So those are the denoising diffusion probabilistic models."}, {"start": 5677.0, "end": 5682.0, "text": " So if sigma equals zero, this equation represents the reverse process of DDIMs."}, {"start": 5682.0, "end": 5692.0, "text": " OK, and only when sigma equals zero, this equation removes the random item and becomes a discrete form of a certain ODE, ordinary differential equation."}, {"start": 5692.0, "end": 5698.0, "text": " Theoretically, the numerical methods that can be used on differential equations with random items are limited."}, {"start": 5698.0, "end": 5706.0, "text": " So that's why we want to escape. We want to set sigma to zero because there are richer set of tools when we are not dealing with that random term."}, {"start": 5706.0, "end": 5715.0, "text": " These authors here have done enough research in this case. Empirically, they have shown that DDIMs have a better acceleration effect when the number of total steps is relatively small."}, {"start": 5715.0, "end": 5719.0, "text": " Therefore, our work concentrates on the case where sigma equals zero."}, {"start": 5719.0, "end": 5725.0, "text": " OK, so guys, that's that's pretty much it. I'm going to open up the code here."}, {"start": 5725.0, "end": 5729.0, "text": " As I said, I cannot provide you with much more intuition than this."}, {"start": 5729.0, "end": 5735.0, "text": " So let me kind of step through all of this. I'm going to set a breakpoint here."}, {"start": 5735.0, "end": 5739.0, "text": " Let me remove the disabled breakpoints. Let me enable this one."}, {"start": 5739.0, "end": 5744.0, "text": " Let's hit F5 and let's get back to the function."}, {"start": 5744.0, "end": 5747.0, "text": " OK, guys, so here we are. We now take that output."}, {"start": 5747.0, "end": 5755.0, "text": " We basically sum it up with the last prediction from the network here, divide by two as per the formulas we saw previously."}, {"start": 5755.0, "end": 5759.0, "text": " Next up, we compute the five function again, and that's it."}, {"start": 5759.0, "end": 5770.0, "text": " So we return back the X previous, which is the basically as well as the noise, which is the next step in the reverse diffusion process."}, {"start": 5770.0, "end": 5774.0, "text": " So we are slowly getting to the pure image. And that's pretty much it."}, {"start": 5774.0, "end": 5777.0, "text": " I think we're not going to keep on iterating here."}, {"start": 5777.0, "end": 5785.0, "text": " As you can see, we append the Epsilon to this to this array of fold Epsilon, which is used, if you remember the four branches."}, {"start": 5785.0, "end": 5790.0, "text": " So this is where we collect the old Epsilon's and then we pass them inside here."}, {"start": 5790.0, "end": 5796.0, "text": " And that's it. So if we start some circular array logic, callbacks are not important."}, {"start": 5796.0, "end": 5800.0, "text": " So I'm going to skip all of that. 
And now we just keep on iterating and that's it."}, {"start": 5800.0, "end": 5806.0, "text": " So we're going to have in this particular example, five steps, because that's how I've basically configured it."}, {"start": 5806.0, "end": 5814.0, "text": " But like in general, you have like 50 steps or 200 steps if you want to have a bit better, basically quality of the image generation."}, {"start": 5814.0, "end": 5820.0, "text": " But I'm going to stop this here. And basically I'm going to stop it here."}, {"start": 5820.0, "end": 5827.0, "text": " And I'm going to finally just show you briefly the safety function they added that might be interesting to some of you."}, {"start": 5827.0, "end": 5836.0, "text": " So let me show you how that functions. Let me just find it basically in the text to image here somewhere we have."}, {"start": 5836.0, "end": 5838.0, "text": " OK, here. So here is the check safety function."}, {"start": 5838.0, "end": 5845.0, "text": " Basically, what happens is once you have generated the image and you've done this clamping, blah, blah, blah, you convert it into an umpire array."}, {"start": 5845.0, "end": 5851.0, "text": " Basically you have an image. So now they call this check safety for you and the check safety function."}, {"start": 5851.0, "end": 5856.0, "text": " What it does is cause this safety feature extractor. So let's see what it does."}, {"start": 5856.0, "end": 5861.0, "text": " Basically, that's some pre-trained model from hugging face hub."}, {"start": 5861.0, "end": 5871.0, "text": " And then they call this safety checker, which is also basically, as you can see here, some pre-trained model from from from hugging face."}, {"start": 5871.0, "end": 5880.0, "text": " And so what's interesting here is so they do some checks whether whether you have a basically not safe for work concept inside of the image."}, {"start": 5880.0, "end": 5887.0, "text": " And if so, for that particular image, they load the replacement. And I think this one is fairly cool."}, {"start": 5887.0, "end": 5893.0, "text": " So basically what I do in this repo is they take the image, they load this image here."}, {"start": 5893.0, "end": 5897.0, "text": " I'm going to show you what it is in a second. Basically, they're recalling us."}, {"start": 5897.0, "end": 5904.0, "text": " So let's let's find the assets folder. And then under the assets, we are looking for Rick."}, {"start": 5904.0, "end": 5914.0, "text": " And that's going to be this one. OK, so they basically load this image in case you have not safe for work content and they get it back."}, {"start": 5914.0, "end": 5917.0, "text": " Now, the problem is I was playing with this code base a bit."}, {"start": 5917.0, "end": 5925.0, "text": " And even though I was not generating anything explicit or anything, I was still getting this this function being triggered."}, {"start": 5925.0, "end": 5928.0, "text": " And so, yeah, it's not perfect. That's that's the point."}, {"start": 5928.0, "end": 5933.0, "text": " So we can see the definition. 
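The check_safety helper being described is roughly the following (numpy_to_pil and load_replacement are small helpers defined in the same script, the latter loading the Rick image; treat the exact signatures as an approximation):

```python
import numpy as np
from transformers import AutoFeatureExtractor
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

safety_model_id = "CompVis/stable-diffusion-safety-checker"
safety_feature_extractor = AutoFeatureExtractor.from_pretrained(safety_model_id)
safety_checker = StableDiffusionSafetyChecker.from_pretrained(safety_model_id)

def check_safety(x_image):
    # x_image: numpy batch of generated images scaled to [0, 1]
    inputs = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt")
    checked, has_nsfw = safety_checker(images=x_image, clip_input=inputs.pixel_values)
    for i, flagged in enumerate(has_nsfw):
        if flagged:
            checked[i] = load_replacement(checked[i])  # swap in the rick.jpeg replacement
    return checked, has_nsfw
```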
Actually, I found it on the in the diffusers library."}, {"start": 5933.0, "end": 5939.0, "text": " Here you can you can see the safety checker, how all of those functions are defined here."}, {"start": 5939.0, "end": 5943.0, "text": " And you can see you can kind of go through this if you if you if you care about it."}, {"start": 5943.0, "end": 5949.0, "text": " But bottom line is I couldn't find how the models were trained."}, {"start": 5949.0, "end": 5953.0, "text": " I guess that's by design because they don't want you to know how to hack the model."}, {"start": 5953.0, "end": 5960.0, "text": " Although I don't know where that's the best position on this topic, but I guess it's very highly debatable."}, {"start": 5960.0, "end": 5965.0, "text": " So, yeah, you can kind of go through this code and basically explore it at your own pace."}, {"start": 5965.0, "end": 5968.0, "text": " If you're guys, this was a super long video."}, {"start": 5968.0, "end": 5977.0, "text": " Like we saw a lot of things we saw how to how to basically train and sample from from these class of LMS, latent diffusion models."}, {"start": 5977.0, "end": 5988.0, "text": " So we saw how to first train this autoencoder who like whose way whose way to basically then freeze and use the latent space in the second stage where we train the LMS."}, {"start": 5988.0, "end": 5992.0, "text": " So basically the unit plus the conditioning model."}, {"start": 5992.0, "end": 5995.0, "text": " And finally, we saw how to sample from these models."}, {"start": 5995.0, "end": 6003.0, "text": " And you saw that you saw that some of the formulas and the connections with differential equations make it kind of hard to have a clear intuition."}, {"start": 6003.0, "end": 6009.0, "text": " But like I'm curious to know and hear whether and how you understand how diffusion models work."}, {"start": 6009.0, "end": 6014.0, "text": " So if you have any intuitive type of an explanation, feel free to comment down below."}, {"start": 6014.0, "end": 6017.0, "text": " I'll try and read all of those because I'm super curious."}, {"start": 6017.0, "end": 6019.0, "text": " In any case, if you like this video, share it out."}, {"start": 6019.0, "end": 6021.0, "text": " Subscribe to this channel."}, {"start": 6021.0, "end": 6035.0, "text": " And until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=epktKtLWgHQ
Get Started With Stable Diffusion! (Code, HF Spaces, Diffusers Notebooks)
🚀 Sign up for Cohere using my link and get 150$ credits 🚀 https://os.cohere.ai/register?utm_source=influencer&utm_medium=&utm_campaign=ALEKSA 👨‍👩‍👧‍👦 And check out Cohere's Discord community! 👨‍👩‍👧‍👦 https://discord.gg/co-mmunity In this video I show you 3 ways to get started with Stable diffusion: 1. Using HuggingFace Spaces (super slow, but super easy) 2. Using diffusers Colab notebooks (mid-ground) 3. Running it locally (my code, most control, and flexibility) Watch out for my deep dive into stable diffusion if you want to learn how it works behind the scenes! (coming soon) Update: deep dive into Stable Diffusion video is out: https://www.youtube.com/watch?v=f6PtJKdey8E ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My code: https://github.com/gordicaleksa/stable_diffusion_playground ✅ My older GAN projects: https://github.com/gordicaleksa/pytorch-GANs ✅ Diffusers notebook: https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb ✅ HF Space: https://huggingface.co/spaces/stabilityai/stable-diffusion Misc: ✅ Xander's awesome video: https://twitter.com/xsteenbrugge/status/1558508866463219712 ✅ Model weights: https://huggingface.co/CompVis/stable-diffusion-v1-4 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:55 (sponsored) - Cohere's NLP toolkit 03:35 First option: HF Spaces 05:30 Second option: Diffusers Colab notebooks 15:10 Third option (more control): my code 16:10 Setup 20:40 How to use the script to generate images 28:00 How to reproduce the image from metadata 31:37 Patching the diffusers lib 33:45 How to interpolate between images 38:10 Results 39:20 Outro (share your images on Twitter/LI!) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #stablediffusion #generativeart #coding
What's up guys, Alex here. In this video, I'm showing you how to get started with stable diffusion. I have another video where I do a super deep dive into how everything works. I take the original code base and I literally step through the code and explain you how the training works, how the sampling works, how to generate images, how to have complete power over your generation basically. So if you're curious to learn more about that, I'm gonna link it somewhere down in the video description once the video is published. So in this video, I show you three ways to get started with stable diffusion. The first one being using the hugging face spaces, the second one being using diffusers notebooks, and the third one is using my code I developed inspired by Karpathis recent gist and basically you can see some of the images I managed to generate with that code base. Stick around to the second part of the video where I literally walk you through how to exactly get started and set it up on your own machine. But before we go there, I wanna thank Cohere for sponsoring this video. They're basically an amazing way to get started building NLP applications. Getting started is as easy as just doing pip install Cohere as you can see here and they also have a very cool no code prototyping platform where you can just explore what features they offer and then get started very easily. So here's an example of generation where you can do for example, hashtag generation. They also support embeddings, classifier stuff and getting started is like super easy. You just do export code, you copy paste the snippet here and you additionally just have to create an API key which you can do in the dashboard here. So it literally takes a couple of minutes to get started. If you wanna learn more about Cohere, I strongly suggest checking out some of the blogs where they use Cohere to generate super cool applications. So this one by Jay Elmer, the same author who kind of famous Illustrator transformer blog basically walks you through how you can use Cohere to generate insights from various hacker news posts and there is also a very nice web app that goes behind it. Finally, the documentation is amazing. You can learn just about the machine learning as much as you can learn about Cohere product. So I really like that type of contribution. By signing up by using the link down below, you can get started with $75 and that's enough in their own words to generate like the whole works of Shakespeare for free. So yeah, check them out, support the channel and now let's get back to the video. Having said that you must have seen the news, basically a week ago, the stable diffusion weights were published and now you can generate awesome images such as these ones here. And what you can probably notice immediately is that with stable diffusion, you can generate human faces, you have much more control and the model is less constrained than the other alternatives, I think, to the best of my knowledge. And basically you can run it much faster and you can even run it on your own machine even if you have only eight gigabytes of VRAM. I'm gonna show you how to do that in a couple of minutes. People have already been using a stable diffusion to generate very cool art. So here is an example you must have seen from Twitter where Xander generated this video called Voyage Through Time using various prompts and prompt engineering and tweaking the seeds. You can see, just yeah, check out the video at your own pace, it's an amazing piece of art. 
So you can see how he managed to generate this video by basically tweaking the prompts, creating a narrative, tweaking the seeds and then stitching all the images together. Okay, so having said that, how do you get started with stable diffusion and how do you generate your own images? So the easiest way you can do is basically use this hugging faces like a space. So this is literally like a web application. You just come here, you don't need to know anything about how stuff works, you just enter a prompt. So I literally just entered this prompt a couple of minutes ago, a horse with glasses. I hit generate image and a couple of minutes later, you get this as the output. So a couple of minutes later statement is the problem. So you literally have to wait a lot depending on the current load on the server. So basically that's the main drawback of this approach. You do have some control, some advanced options you can pick how many images you want, you can pick how many steps of the diffusion process you want. As a general rule of thumb, the more you have, the better the quality. They kind of kept it to 50 here. So you should probably stick it at 50. Well, but if you obviously want the results faster then you need to reduce it. But 50 is kind of recommended value. Then the guidance scale basically is a trade off between how high quality the images, how closely is it following your prompt versus how diverse the inputs are. So like usually people use 7.5 or between three and 10. And basically the more you increase the scale, the higher quality the images will be, they will closely follow the prompt, but you'll get less diverse set of output results. And finally, the seed is also just enables you to control the diversity of your sample. That's it, that's the simplest approach, but it takes a lot of time. So if we try to input something like an AI robot having an epiphany moment, and if I hit generate image, you can see that it will take at least, so 152 seconds current estimated time. So it's two and a half minutes. So if you're kind of a patient to wait, then go ahead. The second best approach is just use the colab. So there is a lot of cool colabs. I linked directly from the diffusers library, which is Hugging Face's new library that basically is trying to get the latest and greatest diffusion models available to general public. So the first thing we need to do here is to connect to the GPU. It requires a lot of RAM, so you just kind of acknowledge that, okay. You make sure that you get the GPU and you make sure by either just checking it here so we can see it here, basically we have GPU enabled. So that's one way to figure out that you have the GPU. The second one is just hit here and then you have this part change runtime type. You can see that the GPU is set as the hardware accelerator by default. Okay, so let's kind of run this one. Just hit run anyway, and you get some output here. The main point is if it's working, that means you have a GPU. Okay, so now we're gonna install some libraries such as diffusers, transformers, SciPy, and some other libraries for processing text. And after that, we'll have to log in to be able to use the model weights. So you basically have to accept an agreement before you use this notebook. So I'm gonna hit run here and then run here and it will ask for credentials. So basically what I have to do is go to this page and then basically just copy the token. You need to have an account and just log in. 
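In code, the setup those first notebook cells run is just a pip install plus the Hugging Face login (the exact package pins may differ in the notebook you open):

```python
# in a Colab cell:
# !pip install diffusers transformers scipy ftfy

from huggingface_hub import notebook_login
notebook_login()  # paste the access token copied from your Hugging Face account settings
```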
If you don't have, just create an account on Hogan face and then you can just kind of copy paste that information and you will be able to log in successfully, okay? So that's it. So now the first cell, what it does is basically creates, instantiates this stable diffusion pipeline. You can see we're using this model V14. I'm gonna quickly show you what it means. So you can see here on this link, basically that V14 says it's V12 plus 225,000 steps at certain resolution on lion aesthetics data set, blah, blah, blah. So that means that's pretty much the best checkpoint. So that one, you can use either that one or V13. You can kind of play if you want with both. So having said that, let's go back to the notebook. This thing is still loading. The second part you have to notice here is that they're using FP16 because basically if we were to run in full precision, that's FP32, you would probably get CUDA out of memory exceptions, even on a colab that has 16 gigabytes of VRAM. So I did play and set, like just delete this part here so you can just delete these two, but I was getting out of memory exceptions if I'm using 512, 512 resolution. So probably the best thing is to just leave it FP16. Later, once I show you the script that you can run on your own machine, if you have a better GPU, then it's recommended to just remove this part and then you'll have even higher quality generated images. Okay, so the final argument here is just the authentication token that's just needed for us to load the weights and that's pretty much it. Okay, once that's loaded, I'm just gonna hit this cell here. It's going to push the loaded weights onto the GPU and now we can start generating the images. So here you can see basically, let me zoom in a little bit here and remove this part here. So we can specify a prompt here, a photograph of an astronaut riding a horse, and then you basically pass the prompt to the pipe and you grab the sample, the zero sample, which will be the image. Then we can save it and display the image and you can see the astronaut here. Okay, so I'm gonna run this now and let's see what we get. So we probably won't have the same image as the one that was there and you will see how long it takes to generate an image. So currently, this is the speed. I think we have 50 steps by default. So now we're at 28, we need to wait a bit more and then we'll get the image out. There is a lot more parameters you can tune. Again, I super encourage you to check out my deep dive if you wanna really understand how to delicately control stable diffusion, that's gonna be probably your best bet. Okay, so we see an image here and the second thing you can do, if you kind of, so if I were to hit Shift Enter again here and generate yet another image, because we didn't set any seeds, because of that, we'll basically generate a completely different image every time we run the cell. So if you don't like that behavior, if you want to have consistency between the runs, then you have to do a small trick, so you can see here a different image. You have to do a small trick and that's to basically set the seed here of your torch generator and then pass that generator to the pipe. So here in this example, if I were to run this, every time you rerun the cell, you're gonna get the same image output, assuming that you're not changing the prompt obviously. So that's the idea with the generator part. 
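Put together, the cells described above look roughly like this (the ["sample"][0] indexing matches the diffusers version used in the video; newer releases return .images[0] instead):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",             # half-precision weights so it fits in ~8-16 GB of VRAM
    torch_dtype=torch.float16,
    use_auth_token=True,         # requires accepting the license and logging in first
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
generator = torch.Generator("cuda").manual_seed(1024)  # fixed seed -> same image on reruns
with torch.autocast("cuda"):
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5,
                 generator=generator)["sample"][0]
image.save("astronaut_rides_horse.png")
```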
So in the background, what happens is, it's going to be used to generate like a specific latent, which is just a noise vector and every time you run it because you've passed the generator, you'll end up with the same noise vector and because the noise vector is the same, the output image is gonna be the same. So here you can see the results are consistent with what we had before I rerun the cell. Okay, cool. Again, you can control the number of inference steps. I already mentioned in the previous here, like in the space here that 50 is probably recommended. You don't wanna go lower than that. You can also see the images that were generated, which are fairly cool. But yeah, so here we have 15 and that's why we get, you can see here much lower quality image compared to the one above. That's pretty much it. So here you can check out this notebook at your own pace. This just creates some utility functions where you can generate multiple images in a grid. Same here, but like the logic remains the same. You can also play with the height and width parameters, whereby, well, you can get a non-square type of an image and there are some rules, like you need to have multiples of eight for your image width and height and you probably don't wanna go lower than 512. Okay, so then the rest of the notebook is a deep dive into how the thing works. But I already told you I have a video that's almost two hours long explaining exactly how stable diffusion is trained, how sampling works, how everything works. So do check out that one. Okay, there's a couple more notebooks. I'm gonna exit here. There's a couple more cool notebooks you can check out. So this one, image to image pipeline, basically enables you to start from a particular image and then using a prompt, guide that image towards a certain, basically appearance. So you can see here that if we take the prompt, a fantasy landscape trending on ArtStation, you get this output here. Okay, so I think worth mentioning here is that you can see, like probably, if you've never seen stable diffusion before, this part might be kinda confusing, like why would you put the trending on ArtStation? And that's the whole art of basically prompt engineering where you find certain words, certain sentences that trigger the model to generate better output or output with specific properties. So people kinda, we globally as a community try a lot of prompts and then somebody figures out a good prompt and then they share it and then the community learns and then we're slowly coevolving with the model in a way. I'm gonna leave you to explore this notebook at your own pace, but those are the basic notebooks you have at your disposal. There is this one as well where you can control exactly the type of a seed you're using. So what I mean by that is the following. So here, basically they generate four latents. So those are those types of noise vectors I mentioned in this cell here and they generate four images here. And then basically, if you like one of them, you can pick that same latent and you can reconstruct the image or generate like use different prompts and have a similar type of appearance in the image. Let me show you what I mean by that. So if we just pick this same latent, you can see here. So they pick seed one because we have four seeds. This corresponds to this image here. Whoops. So it corresponds to this image here, to this one here. So if we pick that same seed and regenerate the latent, you can see we get the same image out. 
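The seed-reuse trick from that notebook can be sketched like this, with pipe being the pipeline created in the previous snippet (shapes, seeds, and the latents argument are illustrative; depending on your diffusers version the pipeline may need the latents support patched in, as discussed later in the video):

```python
import torch

# four candidate seeds -> four fixed 1x4x64x64 latents, one per generated image
seeds = [0, 1, 2, 3]
latents = [torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16,
                       generator=torch.Generator("cuda").manual_seed(s)) for s in seeds]

favourite = latents[1]  # say we liked the image that came out of seed 1
with torch.autocast("cuda"):
    # same latent, new prompt -> similar coarse structure, different content
    image = pipe("a Labrador in the style of Van Gogh", num_inference_steps=50,
                 guidance_scale=7.5, latents=favourite)["sample"][0]
```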
But if you now change the prompt, but reuse the same latent, you can see that you get a similar posture to this dog here, but just a different breed. Similarly here, Labrador in the style of Van Gogh. And you can see that you have some features of the output image, some like course level structures in this particular example are very similar. So namely the posture. And you can even go wild and use like, you don't have to use a dog. You can be as creative as you want. Okay, so that's everything that's at your disposal. Basically, that's the two easiest methods you can use is use the space or just use the notebooks to generate images. But if you want more control, if you want more speed, if you have better hardware locally, I strongly recommend you do the third option. And that's to use the script I generated. And now I'm gonna walk you through the process of how you can set it up and get started generating your own images on your own machine. And also do some fancy things like interpolation, etc. The script itself was inspired by Carpathi script. So this is this gist he made basically using spherical interpolation to generate basically a video such as this one, or similar videos. So you basically pick two images, and then you smoothly interpolate in the latent space of the diffusion model, you do interpolation, and then you get the corresponding images out. And you can see how they are slowly morphing from the source image into the target image. Okay, that's the basic idea. Okay, guys, let's see how we can go through this notebook, how we can set up this on our local machine. So here's what you need to do. So I'm gonna assume, so I do have basically a Windows machine, but it should be similar no matter whether you're using Linux or Mac OS or whatnot. So first thing you need to do is basically find a place where you wanna store this. So I'm gonna randomly create a new folder here on desktop, and name it stable diffusion playground. And I'm going to open it up. I'm going to open up a git bash here. So basically you want to clone this particular repo into this directory. So you can do git clone, and then just copy the URL you copied from the website here, from the GitHub. Okay, so that's one way. You can also just do it here. You should have the option to just download, but people usually use console to do this. So if you hit clone, you're going to clone, as you can see here, the repository. Now let's follow the second step. So you can just follow the steps here. Everything should be self-explanatory. Okay, so now just navigate with your conda prompt to the, so navigate with the anaconda to basically this directory. So let's do cd to that directory, and we're here. And basically the second step, the third step is to just do, as you can see here, conda and create. And that's going to automatically create basically a new environment for you. So because I already have an environment, I'm just going to basically do rename it, and then I'm going to rerun it here. So conda and create, and this is going to create a new environment from scratch. So I'm just doing this to make sure that if I'm not hitting any errors, then probably most other people using Windows will not. So hopefully others as well. If you notice any problems, feel free to submit an issue here. I'll be tracking issues over the next couple of days. I mean, the script is fairly simple, so hopefully it's going to be useful and easy to set up. Okay guys, so here it is, the script executed. 
I can just do the conda activate, blah, blah, blah. We enter the SD playground. And then the next step is to, you will have to run Hugging Face CLI login the first time before you start running the script, because otherwise you will not be able to access the model weights. So this is everything you would do. Basically copy this command, paste it here, execute it, and then it will probably ask you to again, copy that token from the website. You can just copy paste it here, and everything is going to work as expected. Okay guys, so now let's open up an ID of your choice. I'm going to open up VS Code. That's my favorite ID by far. So let me try and open up the corresponding directory. So we just open up this one, I guess. And I just trust the author, blah, blah, blah. I say trust the author because the author, I am the author, man. What's up with this? Okay, so generate images. Here is the main script. I'm going to walk you through it right now. Before that, let me just create, basically, let me just create a JSON, long JSON for the Python file. You don't even need to do this because we will not be passing any arguments through the long JSON. This is where you would usually set the arguments in case you need them. But here I don't need them, so I'm just going to leave them as empty here, okay? So this is a script. Basically, I grabbed this function from Karpathy. Basically, I already previously implemented it when I was developing my GAN projects. You can check out my code there, but this looks neater and supports Torch and NumPy conversion, so I just kind of copy-pasted his version. But it's a fairly similar thing to implement. You can just take a look at the Wiki page and copy-paste everything to your, I mean, translate from human description to code. Fairly trivial. Okay, so I'm going to set a breakpoint here. I'm going to pick a corresponding interpreter we just created, so st-playground-2. So you pick the playground, so we pick the environment we just created. And that should make us, that should make this work out of the box. Sometimes you have to restart VS Code or whatever ID you use in order for the changes to be, and for the interpreter to be correctly loaded. But let's see whether now it's going to crash or not. So I'm going to basically just run, hit run here, and hopefully if everything has worked correctly, I'm going to just hit this breakpoint and everything should work. Okay, let's see whether that's the case, and it is. Awesome. So I'm going to now walk you through the script. It's fairly simple. The first thing I will do is actually I'm going to exit. I'm going to exit the script. I'm going to set generate diverse here. So generate diverse is going to start generating diverse set of images with the prompt you specify. So here is the prompt I've specified, a painting of an AI robot having an epiphany moment. I'm going to save that to the output directory AI epiphany. I'm going to use 50 steps, 7.5 guidance scale, and I'm going to generate, let's say only one image. For the sake of this video, I don't want to make it too slow. And everything else you can kind of leave. You can even put none here. The width and height are 512. I'm using FP16 because my machine only has eight gigabytes of VRAM, and I can basically, I'm going to set none here. This is, I'm going to explain you briefly what this is. Basically, it helps you with reproducibility of this script, but we don't need it now. So I'm just going to leave all of this as none. 
So I'm going to put a comma there, and that's pretty much it. Okay, so let's now rerun the script in the generate mode. So I'm going to hit run here, and let's see what's going to happen. Okay, so first things first, you do need the GPU to run this, otherwise it's going to take like painfully long time, and it's better for you to use the hugging face space than to wait here. Yeah, so basically, first thing first is we want to make sure that the height and width are a multiply of eight. Okay, so device is CUDA, we just set the seed here such that every time if I rerun this, I'll have the same latent generated. If you don't want that, you can just set seed to none, then every time you rerun this, you'll get a completely new set of images. Okay, so I do some logistics here. Basically, I create a specific output file structure such that we are generating images in a suitable location. I'm going to show you what I mean by that in a second. So let me do it like this. So here is the directory, and that's where we'll be generating. You can see I generated these samples, so that's where the images will be stored. We have meta, that's where the metadata that we use to generate the image will be stored. And finally latent, that's going to be the actual latent representation that was fed into the pipeline to generate the image. Okay, let's create the scheduler. Again, if you don't know what this is, I suggest you watch my deep dive, then everything will be clear. So we pick again the B14 version. We pick because FP16 is set to true, we're going to use the FP16 weights. We passed basically the scheduler, and we finally have the pipe. So we'll take some time for the pipe to be constructed because it has to load a bunch of weights. I'm going to show you in a second, let me show you how this is going to function. Basically performance tab, and you can see how the memory is slowly increasing. And after I execute the two device, you can see how the GPU will start, the GPU consumption, the VRAM consumption will increase. That's going to hopefully happen very soon. So let's take a look. So I'm going to do F10 again here, F10. And now you can see how the pipe was basically pushed onto the GPU. You can see a spike here in the VRAM consumption. Okay guys, so let's ignore that now. I'm going to maximize this window again. So we enter the branch with the generation. And because I've set the number of images to one, we'll have only a single image generated. So let's see what we do. The first step is we generate the latent. This is kind of hard coded because that's how the, this is what pipe is also expecting. Like in general, you don't want to hard code the dimensions like this, but in this case, it just works. Okay, we generate the latent. And now I use the AutoCast, which is going to use Mixed Precision. And in my particular case, it's just going to do, I guess, FP16 pretty much for all computations. I'm not sure whether it's going to do FP32 accumulation for some of the operations, but basically that's it. Okay, so now we run the pipe. We pass the prompt, we pass the number of steps, the latent and the guidance scale. And out comes the image. So as you can see so far, so good. We didn't have any errors with the script. Again, I encourage you, if you find any errors, basically submit an issue to my GitHub repo. That would be super useful. Okay, let's continue here. Here is the image. We now just basically store the image. So let's see how that's going to happen. So we have samples here. 
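A condensed sketch of the generate branch just stepped through, with pipe being the pipeline built above (the seed and output name are illustrative; the hard-coded 1x4x64x64 latent matches the 512x512 setting, and passing latents= may require the small diffusers patch mentioned a bit later):

```python
import torch

device = "cuda"
generator = torch.Generator(device).manual_seed(42)  # fixed seed; skip seeding for a fresh image each run
latent = torch.randn((1, 4, 512 // 8, 512 // 8), device=device,
                     dtype=torch.float16, generator=generator)

prompt = "a painting of an AI robot having an epiphany moment"
with torch.autocast(device):
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5,
                 latents=latent)["sample"][0]
image.save("samples/0000.png")  # the real script picks the next free file name instead
```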
After I run this, we have this neat generate-name function that automatically detects all of the images inside this folder and makes sure not to overwrite the existing images. Instead, it increments the file name and stores the image there. I'm going to show it to you a bit later, but for now, let's treat it as a black box. So here is the image. I'm going to switch to extra large icons so you can see the results. Here is the first result, an amazing result. Next up, I'm going to save the metadata. When I say metadata, I mean: let's save the prompt, how many steps we used to generate this image, and what guidance scale we used. I'm going to improve this script over the next days, so feel free to contribute, feel free to create a pull request; I'm going to be tracking what's going on with the repo. So let me open up the meta directory here; I'm expecting a meta file to be generated there. If I hit F10, we can see there is a JSON. If I open it up, you can see that it stores the prompt, the number of steps, and everything else. Wait, let me move this window out of the way. Okay, so we have the guidance scale information there as well. So basically that's it. Now, finally, I store the latents, so that's the latent that was used to generate this image. Here's the latent: I just push it onto the CPU, convert it to NumPy, and then store it here. So that's it. Let's hit F10. Here is the latent, and okay, now I can step out and exit the script. So that's how the script looks in this generation mode. There are a couple of other modes. The second one you can use is the reproduce mode. This one is more for playing with and understanding reproducibility than something that will help you with your art generation. What it does is basically, as you can see here, it now expects me to pass the source latent path and the metadata path, and using the latent and the metadata, it's going to regenerate the same image from scratch. Okay, so that's the idea. So let's do that. I'm going to grab this latent, copy its path, and put that path here. So here it is. I'm just going to do it like this. Okay, comment this out. This is a bit awkward on Windows; oh my God, am I really doing this live? Okay, so let me put the backslash here so that this actually works. Maybe it would work even without it, but yeah. Okay, now I'm going to copy the metadata path as well. So, oops, shift, right click, copy as path, and I'm going to specify that here. Okay, and that should be pretty much it. Now, well, let's see whether this is going to crash or not. I'm curious, to be honest. So this should now work: I have the reproduce mode set, we can just click save here, and let me run the script again. So if I run this... okay guys, I did have to add the additional backslash after all, otherwise it's going to complain. Okay, so again, same logic here, I'm going to skip over all of this: we create the pipe, blah, blah, blah. So let me just put the breakpoint here, hit F5, and get to this point in the code. So again, you'll see how we're going to reuse this particular metadata here and this particular latent here to generate the very same image that we saw before.
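(The generate-name helper plus the metadata and latent saving amount to something like the following sketch. The directory names match what I showed above, but the function names, metadata keys, and the exact collision handling here are illustrative, not the repo's actual code.)

import json
import os
import numpy as np

def get_next_name(dir_path: str, suffix: str) -> str:
    # Pick the next index based on how many files are already there
    # (the real helper can be more careful about gaps and collisions).
    existing = [f for f in os.listdir(dir_path) if f.endswith(suffix)]
    return f"{len(existing):04d}{suffix}"   # e.g. 0000.png, 0001.png, ...

def save_artifacts(out_dir, image, latent, prompt, num_steps, guidance_scale):
    # image is a PIL image, latent is the torch tensor that was fed into the pipeline.
    samples_dir = os.path.join(out_dir, "samples")
    meta_dir = os.path.join(out_dir, "meta")
    latents_dir = os.path.join(out_dir, "latents")
    for d in (samples_dir, meta_dir, latents_dir):
        os.makedirs(d, exist_ok=True)

    image.save(os.path.join(samples_dir, get_next_name(samples_dir, ".png")))

    meta = {"prompt": prompt, "num_inference_steps": num_steps, "guidance_scale": guidance_scale}
    with open(os.path.join(meta_dir, get_next_name(meta_dir, ".json")), "w") as f:
        json.dump(meta, f, indent=2)

    # Move the latent to CPU and store it as a NumPy array, as shown in the video.
    np.save(os.path.join(latents_dir, get_next_name(latents_dir, ".npy")), latent.cpu().numpy())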
Additionally, I have a matplotlib plot call in there, so a plot is going to pop up after the script is done executing. Okay, let's see whether we are there. Here we are, we hit this breakpoint. So I make sure that we are passing the needed paths. I open up the metadata, and you can see that it contains exactly the information that was used to generate the image a couple of minutes ago. I load the latent here, and let me show you the shape of that thing: init.shape gives us 4 x 64 x 64, which is the expected size of the latent for this particular diffusion model. And now let's pass the same metadata and that latent, and let's generate a NumPy image. That's why I set the output type to NumPy; it can basically be anything, because of the way the pipe is implemented in the background: you just need to specify anything other than 'pil' and it will return a NumPy image as the output. Okay, so here we are. Finally, let's generate the image. So image show, I'm going to show you the image we generated here, and you can see that... okay guys, that failed, because there was a very sneaky bug behind this code. Not behind my code, but behind the version of the diffusers library I'm using. Namely, if you take a look at the main branch, the latest commit of this pipeline stable diffusion file, which contains the call function behind the StableDiffusionPipeline object, you can see these lines, 102 to 113, where the latents argument is being used. If we pass the latents, then we use them; otherwise, if we pass None, they are generated on the fly, and latents is also there as an argument to the call function. On my side, I had to manually add the latents argument and copy-paste this piece of code, manually patching the 0.2.4 version of the library I currently have, which is the latest pip version they have; they still haven't pushed the latest changes into the pip package. After you do this fix, everything works. So I went ahead and generated another image. Let me show you what I mean by that. I went ahead and generated this image here, and now I pick its metadata, 0002, and the same for the latent. So if I go and specify that here, 0002, set the reproduce mode, and hit run, I'm going to let it execute and show you the results as soon as we get to this breakpoint here. Okay guys, the moment of truth. Let's display the image, and you can see it's exactly the same image as what we had here. Let me do a side-by-side comparison: we have perfect reproducibility of this thing. Okay, that's very cool. The last thing I want to show you is the interpolation execution mode. That's the fun part. I'm going to set interpolate here, and in that case there are a couple of options; that's the third branch of the code here. Basically, if you don't specify the paths, then we just generate two random latent vectors. Those two latent vectors correspond to two images, and then we're going to smoothly, spherically interpolate between them and generate everything in between. So that's if you don't specify the latents.
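(Roughly, the reproduce branch does something like this. The paths are placeholders, the metadata keys follow the sketch above, and it again assumes a pipeline version that accepts latents and an output_type other than 'pil'; the real script's flags and names may differ.)

import json
import numpy as np
import torch
import matplotlib.pyplot as plt
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="fp16",
    torch_dtype=torch.float16, use_auth_token=True).to("cuda")

# Placeholder paths: point these at a saved latent/metadata pair from a previous run.
latent_path = "output/AI epiphany/latents/0002.npy"
meta_path = "output/AI epiphany/meta/0002.json"

with open(meta_path) as f:
    meta = json.load(f)
init_latent = torch.from_numpy(np.load(latent_path)).to("cuda", torch.float16)

with torch.autocast("cuda"):
    out = pipe(
        meta["prompt"],
        num_inference_steps=meta["num_inference_steps"],
        guidance_scale=meta["guidance_scale"],
        latents=init_latent,
        output_type="np",   # anything other than "pil" gives a NumPy image back
    ).images[0]

plt.imshow(out)
plt.axis("off")
plt.show()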
If you do specify the latents, so if you find two images that you like, you just find the associated NumPy latents and pass their paths here, and then you have more control over which images you are interpolating between. That's the basic idea. Okay, so now I'm going to go down this route. You can always play with the other one, but I'm going to specify both latents, then interpolate, and show you what's going on. So let's do the following: I'm going to interpolate between this robot and this robot here. To do that, we need to set one as the source and one as the target. Because this one is already set as the source, I'm going to take the other one and put it as the target. That means we need to take its corresponding latent, and that's going to be easy because of the directory structure I'm using. So I just have to take the source latent path, which is 0002, paste it here, and set the index to 1 instead. And now we have that robot set as the target, and that's pretty much it. The metadata will be ignored, and I think that's all we need. Now we have the source latent, let me just check, and the target latent. We load them, and then we're going to generate the images. Cool, I'm going to set a breakpoint here, hit execute, and get back to you once we hit this breakpoint. Okay guys, so let's enter. Hopefully we have both paths specified correctly. I'm going to load those latents. We save the metadata again, and we save these latents, which is probably less relevant in this example because I already had them saved; it's probably worth updating the script there. And now let's see what I'm doing. Basically, you can see here I'm doing a linspace: between zero and one, I want to generate 50 images, so I set the number of images to 50 here. Additionally, I prepend a zero, just as a simple hack, and the reason is that for the first iteration I want to generate the target image, so that I'm confident the target image I've chosen is the correct one. Then I generate the source one, and from there it does the smooth interpolation from source to target, if that makes sense. Okay, so that's why in the first iteration we enter here and grab the target image's latent. And now I'm going to generate it. So it's the same logic: we pass the prompt, the number of inference steps, the latent, the guidance scale, and everything, and then we save the image. So let me show you that indeed... which image did we use as the source, just a second. So as the target we are using image one. Okay, so the target is this one, so I'm expecting to generate yet another image that looks like that. Let's verify that. I'm going to hit run here. And if I save... okay, I actually get a different image. And the reason is the bug I had: the metadata saved for these first two images, because of the bug, actually corresponded to this image. So the second one should be this one, and if that's not correct, then something weird is going on, but I'm fairly sure that's not going to happen. So I'm going to hit F5 and get back to you once this image is generated. So that's the source. Okay, so let's do F10. And you can see here, okay, now we're starting the interpolation.
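(The interpolation branch boils down to roughly this, reusing pipe, prompt, and the slerp helper from the sketches above; the real script also saves metadata and latents for every frame and uses its own file naming, so treat the paths and counts here as placeholders.)

import numpy as np
import torch

num_images = 50
src = torch.from_numpy(np.load("output/AI epiphany/latents/0002.npy")).to("cuda", torch.float16)
dst = torch.from_numpy(np.load("output/AI epiphany/latents/0001.npy")).to("cuda", torch.float16)

# Prepend 0.0 as a simple hack: the very first frame renders the target latent directly,
# which makes it easy to sanity-check that the right target image was picked.
ts = np.concatenate(([0.0], np.linspace(0.0, 1.0, num_images)))

for i, t in enumerate(ts):
    latent = dst if i == 0 else slerp(float(t), src, dst)  # t=0 -> source, t=1 -> target
    with torch.autocast("cuda"):
        image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5,
                     latents=latent).images[0]
    image.save(f"interp_{i:04d}.png")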
I'll get back to you once all of the images are generated, and we'll see a smooth interpolation between this image as the source and this image as the target. Okay guys, I've let it run, and here you can see the results. Fairly cool. So here is again the source image, and here is the target image. Again, apologies for the bug there. By the way, you will not have to sort it out yourself, because as soon as the patch lands in the diffusers library, you can just use the script without having to do the patch. But for now, if you try to use the script, you will have to apply the patch first. Okay, so let's see how this looks. Let's start with this image and slowly go to the right. You can see how the image is morphing; it's amazing. And then it slowly morphs into the final robot, and that's it. There are some jumps in the latent space; that's always the case, and I'd have to use maybe 200 or more images to get a smoother interpolation over these last couple of steps. So you can see there are some sudden jumps here. But yeah, guys, that's pretty much it. I showed you how to use the script. There is the small gotcha I just explained about having to patch this file, but everything else should work as expected. Hopefully you liked this video. If you generate some cool images with this script, feel free to tag me on Twitter or LinkedIn; I'll be happy to see your images and reply. And again, I'll be updating the script over the next couple of days, especially if somebody submits an issue or finds a bug; I'll try to address it as soon as possible. Having said that, if you liked this video, share it with your friends, try to generate cool stuff yourself, and subscribe to this YouTube channel, of course. Until next time, bye bye.
[{"start": 0.0, "end": 2.12, "text": " What's up guys, Alex here."}, {"start": 2.12, "end": 5.08, "text": " In this video, I'm showing you how to get started"}, {"start": 5.08, "end": 6.5200000000000005, "text": " with stable diffusion."}, {"start": 6.5200000000000005, "end": 9.24, "text": " I have another video where I do a super deep dive"}, {"start": 9.24, "end": 10.6, "text": " into how everything works."}, {"start": 10.6, "end": 12.6, "text": " I take the original code base"}, {"start": 12.6, "end": 14.42, "text": " and I literally step through the code"}, {"start": 14.42, "end": 15.9, "text": " and explain you how the training works,"}, {"start": 15.9, "end": 17.88, "text": " how the sampling works, how to generate images,"}, {"start": 17.88, "end": 22.52, "text": " how to have complete power over your generation basically."}, {"start": 22.52, "end": 24.36, "text": " So if you're curious to learn more about that,"}, {"start": 24.36, "end": 27.0, "text": " I'm gonna link it somewhere down in the video description"}, {"start": 27.0, "end": 28.400000000000002, "text": " once the video is published."}, {"start": 28.4, "end": 31.0, "text": " So in this video, I show you three ways to get started"}, {"start": 31.0, "end": 32.0, "text": " with stable diffusion."}, {"start": 32.0, "end": 35.04, "text": " The first one being using the hugging face spaces,"}, {"start": 35.04, "end": 38.7, "text": " the second one being using diffusers notebooks,"}, {"start": 38.7, "end": 41.06, "text": " and the third one is using my code"}, {"start": 41.06, "end": 44.66, "text": " I developed inspired by Karpathis recent gist"}, {"start": 44.66, "end": 46.519999999999996, "text": " and basically you can see some of the images"}, {"start": 46.519999999999996, "end": 49.92, "text": " I managed to generate with that code base."}, {"start": 49.92, "end": 52.0, "text": " Stick around to the second part of the video"}, {"start": 52.0, "end": 53.36, "text": " where I literally walk you through"}, {"start": 53.36, "end": 55.18, "text": " how to exactly get started and set it up"}, {"start": 55.18, "end": 56.26, "text": " on your own machine."}, {"start": 56.26, "end": 58.559999999999995, "text": " But before we go there, I wanna thank Cohere"}, {"start": 58.559999999999995, "end": 59.72, "text": " for sponsoring this video."}, {"start": 59.72, "end": 61.96, "text": " They're basically an amazing way to get started"}, {"start": 61.96, "end": 64.48, "text": " building NLP applications."}, {"start": 64.48, "end": 66.64, "text": " Getting started is as easy as just doing"}, {"start": 66.64, "end": 69.1, "text": " pip install Cohere as you can see here"}, {"start": 69.1, "end": 73.24, "text": " and they also have a very cool no code prototyping platform"}, {"start": 73.24, "end": 76.38, "text": " where you can just explore what features they offer"}, {"start": 76.38, "end": 78.1, "text": " and then get started very easily."}, {"start": 78.1, "end": 79.92, "text": " So here's an example of generation"}, {"start": 79.92, "end": 83.32, "text": " where you can do for example, hashtag generation."}, {"start": 83.32, "end": 86.75999999999999, "text": " They also support embeddings, classifier stuff"}, {"start": 86.75999999999999, "end": 89.36, "text": " and getting started is like super easy."}, {"start": 89.36, "end": 92.83999999999999, "text": " You just do export code, you copy paste the snippet here"}, {"start": 92.83999999999999, "end": 96.28, "text": " and you additionally just have to create an API key"}, {"start": 
96.28, "end": 98.61999999999999, "text": " which you can do in the dashboard here."}, {"start": 98.61999999999999, "end": 101.0, "text": " So it literally takes a couple of minutes to get started."}, {"start": 101.0, "end": 103.63999999999999, "text": " If you wanna learn more about Cohere,"}, {"start": 103.63999999999999, "end": 105.8, "text": " I strongly suggest checking out some of the blogs"}, {"start": 105.8, "end": 108.94, "text": " where they use Cohere to generate super cool applications."}, {"start": 108.94, "end": 111.19999999999999, "text": " So this one by Jay Elmer, the same author"}, {"start": 111.2, "end": 114.36, "text": " who kind of famous Illustrator transformer blog"}, {"start": 114.36, "end": 116.72, "text": " basically walks you through how you can use Cohere"}, {"start": 116.72, "end": 120.2, "text": " to generate insights from various hacker news posts"}, {"start": 120.2, "end": 124.12, "text": " and there is also a very nice web app that goes behind it."}, {"start": 124.12, "end": 126.56, "text": " Finally, the documentation is amazing."}, {"start": 126.56, "end": 130.16, "text": " You can learn just about the machine learning"}, {"start": 130.16, "end": 132.56, "text": " as much as you can learn about Cohere product."}, {"start": 132.56, "end": 136.6, "text": " So I really like that type of contribution."}, {"start": 136.6, "end": 140.12, "text": " By signing up by using the link down below,"}, {"start": 140.12, "end": 142.8, "text": " you can get started with $75"}, {"start": 142.8, "end": 145.6, "text": " and that's enough in their own words to generate"}, {"start": 145.6, "end": 148.56, "text": " like the whole works of Shakespeare for free."}, {"start": 148.56, "end": 151.28, "text": " So yeah, check them out, support the channel"}, {"start": 151.28, "end": 152.96, "text": " and now let's get back to the video."}, {"start": 152.96, "end": 154.8, "text": " Having said that you must have seen the news,"}, {"start": 154.8, "end": 157.82, "text": " basically a week ago, the stable diffusion weights"}, {"start": 157.82, "end": 160.88, "text": " were published and now you can generate awesome images"}, {"start": 160.88, "end": 162.38, "text": " such as these ones here."}, {"start": 162.38, "end": 164.92000000000002, "text": " And what you can probably notice immediately"}, {"start": 164.92000000000002, "end": 168.12, "text": " is that with stable diffusion, you can generate human faces,"}, {"start": 168.12, "end": 171.32, "text": " you have much more control and the model is less constrained"}, {"start": 171.32, "end": 173.54, "text": " than the other alternatives, I think,"}, {"start": 173.54, "end": 174.96, "text": " to the best of my knowledge."}, {"start": 174.96, "end": 178.0, "text": " And basically you can run it much faster"}, {"start": 178.0, "end": 180.12, "text": " and you can even run it on your own machine"}, {"start": 180.12, "end": 182.8, "text": " even if you have only eight gigabytes of VRAM."}, {"start": 182.8, "end": 185.1, "text": " I'm gonna show you how to do that in a couple of minutes."}, {"start": 185.1, "end": 187.88, "text": " People have already been using a stable diffusion"}, {"start": 187.88, "end": 189.20000000000002, "text": " to generate very cool art."}, {"start": 189.20000000000002, "end": 191.88, "text": " So here is an example you must have seen from Twitter"}, {"start": 191.88, "end": 196.04000000000002, "text": " where Xander generated this video called Voyage Through Time"}, {"start": 196.04, "end": 198.92, "text": " using 
various prompts and prompt engineering"}, {"start": 198.92, "end": 200.17999999999998, "text": " and tweaking the seeds."}, {"start": 200.17999999999998, "end": 202.92, "text": " You can see, just yeah, check out the video"}, {"start": 202.92, "end": 205.28, "text": " at your own pace, it's an amazing piece of art."}, {"start": 205.28, "end": 208.64, "text": " So you can see how he managed to generate this video"}, {"start": 208.64, "end": 210.76, "text": " by basically tweaking the prompts,"}, {"start": 210.76, "end": 212.79999999999998, "text": " creating a narrative, tweaking the seeds"}, {"start": 212.79999999999998, "end": 215.68, "text": " and then stitching all the images together."}, {"start": 215.68, "end": 217.0, "text": " Okay, so having said that,"}, {"start": 217.0, "end": 218.95999999999998, "text": " how do you get started with stable diffusion"}, {"start": 218.95999999999998, "end": 221.34, "text": " and how do you generate your own images?"}, {"start": 221.34, "end": 224.56, "text": " So the easiest way you can do is basically"}, {"start": 224.56, "end": 227.8, "text": " use this hugging faces like a space."}, {"start": 227.8, "end": 230.58, "text": " So this is literally like a web application."}, {"start": 230.58, "end": 233.14000000000001, "text": " You just come here, you don't need to know anything"}, {"start": 233.14000000000001, "end": 235.6, "text": " about how stuff works, you just enter a prompt."}, {"start": 235.6, "end": 237.2, "text": " So I literally just entered this prompt"}, {"start": 237.2, "end": 239.36, "text": " a couple of minutes ago, a horse with glasses."}, {"start": 239.36, "end": 242.96, "text": " I hit generate image and a couple of minutes later,"}, {"start": 242.96, "end": 244.96, "text": " you get this as the output."}, {"start": 244.96, "end": 248.12, "text": " So a couple of minutes later statement is the problem."}, {"start": 248.12, "end": 249.64000000000001, "text": " So you literally have to wait a lot"}, {"start": 249.64000000000001, "end": 252.36, "text": " depending on the current load on the server."}, {"start": 252.36, "end": 256.36, "text": " So basically that's the main drawback of this approach."}, {"start": 256.36, "end": 258.68, "text": " You do have some control, some advanced options"}, {"start": 258.68, "end": 260.8, "text": " you can pick how many images you want,"}, {"start": 260.8, "end": 262.34000000000003, "text": " you can pick how many steps"}, {"start": 262.34000000000003, "end": 264.1, "text": " of the diffusion process you want."}, {"start": 264.1, "end": 265.32, "text": " As a general rule of thumb,"}, {"start": 265.32, "end": 267.88, "text": " the more you have, the better the quality."}, {"start": 267.88, "end": 269.84000000000003, "text": " They kind of kept it to 50 here."}, {"start": 269.84000000000003, "end": 272.28000000000003, "text": " So you should probably stick it at 50."}, {"start": 272.28000000000003, "end": 274.96000000000004, "text": " Well, but if you obviously want the results faster"}, {"start": 274.96000000000004, "end": 275.92, "text": " then you need to reduce it."}, {"start": 275.92, "end": 277.96000000000004, "text": " But 50 is kind of recommended value."}, {"start": 277.96000000000004, "end": 280.1, "text": " Then the guidance scale basically is a trade off"}, {"start": 280.1, "end": 282.8, "text": " between how high quality the images,"}, {"start": 282.8, "end": 285.52000000000004, "text": " how closely is it following your prompt"}, {"start": 285.52000000000004, "end": 287.6, "text": " 
versus how diverse the inputs are."}, {"start": 287.6, "end": 292.52000000000004, "text": " So like usually people use 7.5 or between three and 10."}, {"start": 292.52000000000004, "end": 296.6, "text": " And basically the more you increase the scale,"}, {"start": 296.6, "end": 298.68, "text": " the higher quality the images will be,"}, {"start": 298.68, "end": 300.40000000000003, "text": " they will closely follow the prompt,"}, {"start": 300.40000000000003, "end": 303.6, "text": " but you'll get less diverse set of output results."}, {"start": 303.6, "end": 306.52000000000004, "text": " And finally, the seed is also just enables you"}, {"start": 306.52000000000004, "end": 308.72, "text": " to control the diversity of your sample."}, {"start": 308.72, "end": 310.6, "text": " That's it, that's the simplest approach,"}, {"start": 310.6, "end": 312.18, "text": " but it takes a lot of time."}, {"start": 312.18, "end": 315.72, "text": " So if we try to input something like an AI robot"}, {"start": 317.66, "end": 321.28000000000003, "text": " having an epiphany moment,"}, {"start": 321.28000000000003, "end": 323.48, "text": " and if I hit generate image,"}, {"start": 323.48, "end": 325.20000000000005, "text": " you can see that it will take at least,"}, {"start": 325.20000000000005, "end": 328.52000000000004, "text": " so 152 seconds current estimated time."}, {"start": 328.52000000000004, "end": 330.02000000000004, "text": " So it's two and a half minutes."}, {"start": 330.96000000000004, "end": 334.32000000000005, "text": " So if you're kind of a patient to wait, then go ahead."}, {"start": 334.32000000000005, "end": 336.52000000000004, "text": " The second best approach is just use the colab."}, {"start": 336.52000000000004, "end": 338.32000000000005, "text": " So there is a lot of cool colabs."}, {"start": 338.32, "end": 340.71999999999997, "text": " I linked directly from the diffusers library,"}, {"start": 340.71999999999997, "end": 343.44, "text": " which is Hugging Face's new library"}, {"start": 343.44, "end": 345.92, "text": " that basically is trying to get the latest"}, {"start": 345.92, "end": 350.54, "text": " and greatest diffusion models available to general public."}, {"start": 350.54, "end": 352.78, "text": " So the first thing we need to do here"}, {"start": 352.78, "end": 355.56, "text": " is to connect to the GPU."}, {"start": 355.56, "end": 356.92, "text": " It requires a lot of RAM,"}, {"start": 356.92, "end": 359.64, "text": " so you just kind of acknowledge that, okay."}, {"start": 359.64, "end": 362.08, "text": " You make sure that you get the GPU"}, {"start": 362.08, "end": 366.0, "text": " and you make sure by either just checking it here"}, {"start": 366.0, "end": 368.96, "text": " so we can see it here,"}, {"start": 368.96, "end": 372.8, "text": " basically we have GPU enabled."}, {"start": 372.8, "end": 375.76, "text": " So that's one way to figure out that you have the GPU."}, {"start": 375.76, "end": 377.56, "text": " The second one is just hit here"}, {"start": 377.56, "end": 381.12, "text": " and then you have this part change runtime type."}, {"start": 381.12, "end": 382.88, "text": " You can see that the GPU is set"}, {"start": 382.88, "end": 385.12, "text": " as the hardware accelerator by default."}, {"start": 385.12, "end": 388.28, "text": " Okay, so let's kind of run this one."}, {"start": 388.28, "end": 392.64, "text": " Just hit run anyway, and you get some output here."}, {"start": 392.64, "end": 395.24, "text": " The main point is if it's working,"}, 
{"start": 395.24, "end": 397.12, "text": " that means you have a GPU."}, {"start": 397.12, "end": 398.84000000000003, "text": " Okay, so now we're gonna install some libraries"}, {"start": 398.84000000000003, "end": 401.76, "text": " such as diffusers, transformers, SciPy,"}, {"start": 401.76, "end": 405.0, "text": " and some other libraries for processing text."}, {"start": 405.0, "end": 407.84000000000003, "text": " And after that, we'll have to log in"}, {"start": 407.84000000000003, "end": 411.16, "text": " to be able to use the model weights."}, {"start": 411.16, "end": 413.28000000000003, "text": " So you basically have to accept an agreement"}, {"start": 413.28000000000003, "end": 414.92, "text": " before you use this notebook."}, {"start": 414.92, "end": 418.08, "text": " So I'm gonna hit run here and then run here"}, {"start": 418.08, "end": 420.02, "text": " and it will ask for credentials."}, {"start": 420.02, "end": 423.34000000000003, "text": " So basically what I have to do is go to this page"}, {"start": 423.34, "end": 428.34, "text": " and then basically just copy the token."}, {"start": 428.52, "end": 430.59999999999997, "text": " You need to have an account and just log in."}, {"start": 430.59999999999997, "end": 433.47999999999996, "text": " If you don't have, just create an account on Hogan face"}, {"start": 433.47999999999996, "end": 436.56, "text": " and then you can just kind of copy paste that information"}, {"start": 436.56, "end": 440.67999999999995, "text": " and you will be able to log in successfully, okay?"}, {"start": 440.67999999999995, "end": 441.79999999999995, "text": " So that's it."}, {"start": 441.79999999999995, "end": 444.67999999999995, "text": " So now the first cell, what it does is basically"}, {"start": 444.67999999999995, "end": 448.91999999999996, "text": " creates, instantiates this stable diffusion pipeline."}, {"start": 448.91999999999996, "end": 452.15999999999997, "text": " You can see we're using this model V14."}, {"start": 452.16, "end": 454.16, "text": " I'm gonna quickly show you what it means."}, {"start": 454.16, "end": 457.56, "text": " So you can see here on this link,"}, {"start": 457.56, "end": 462.56, "text": " basically that V14 says it's V12 plus 225,000 steps"}, {"start": 465.04, "end": 468.6, "text": " at certain resolution on lion aesthetics data set,"}, {"start": 468.6, "end": 469.44000000000005, "text": " blah, blah, blah."}, {"start": 469.44000000000005, "end": 471.84000000000003, "text": " So that means that's pretty much the best checkpoint."}, {"start": 471.84000000000003, "end": 475.36, "text": " So that one, you can use either that one or V13."}, {"start": 475.36, "end": 477.56, "text": " You can kind of play if you want with both."}, {"start": 478.92, "end": 481.64000000000004, "text": " So having said that, let's go back to the notebook."}, {"start": 481.64, "end": 483.84, "text": " This thing is still loading."}, {"start": 483.84, "end": 485.71999999999997, "text": " The second part you have to notice here is that"}, {"start": 485.71999999999997, "end": 490.71999999999997, "text": " they're using FP16 because basically if we were to run"}, {"start": 491.08, "end": 495.32, "text": " in full precision, that's FP32, you would probably get"}, {"start": 495.32, "end": 498.52, "text": " CUDA out of memory exceptions, even on a colab"}, {"start": 498.52, "end": 501.44, "text": " that has 16 gigabytes of VRAM."}, {"start": 501.44, "end": 505.36, "text": " So I did play and set, like just delete this part here"}, 
{"start": 505.36, "end": 506.84, "text": " so you can just delete these two,"}, {"start": 506.84, "end": 509.71999999999997, "text": " but I was getting out of memory exceptions"}, {"start": 509.72, "end": 513.08, "text": " if I'm using 512, 512 resolution."}, {"start": 513.08, "end": 516.48, "text": " So probably the best thing is to just leave it FP16."}, {"start": 516.48, "end": 519.8000000000001, "text": " Later, once I show you the script that you can run"}, {"start": 519.8000000000001, "end": 522.9200000000001, "text": " on your own machine, if you have a better GPU,"}, {"start": 522.9200000000001, "end": 526.2, "text": " then it's recommended to just remove this part"}, {"start": 526.2, "end": 530.5600000000001, "text": " and then you'll have even higher quality generated images."}, {"start": 530.5600000000001, "end": 534.24, "text": " Okay, so the final argument here is just"}, {"start": 534.24, "end": 537.88, "text": " the authentication token that's just needed for us"}, {"start": 537.88, "end": 540.6, "text": " to load the weights and that's pretty much it."}, {"start": 540.6, "end": 543.4, "text": " Okay, once that's loaded, I'm just gonna hit this cell here."}, {"start": 543.4, "end": 548.4, "text": " It's going to push the loaded weights onto the GPU"}, {"start": 548.64, "end": 551.2, "text": " and now we can start generating the images."}, {"start": 551.2, "end": 553.08, "text": " So here you can see basically, let me zoom in"}, {"start": 553.08, "end": 556.68, "text": " a little bit here and remove this part here."}, {"start": 556.68, "end": 559.72, "text": " So we can specify a prompt here, a photograph"}, {"start": 559.72, "end": 563.08, "text": " of an astronaut riding a horse, and then you basically"}, {"start": 563.08, "end": 565.92, "text": " pass the prompt to the pipe and you grab the sample,"}, {"start": 565.92, "end": 568.1999999999999, "text": " the zero sample, which will be the image."}, {"start": 568.1999999999999, "end": 569.8399999999999, "text": " Then we can save it and display the image"}, {"start": 569.8399999999999, "end": 571.88, "text": " and you can see the astronaut here."}, {"start": 571.88, "end": 575.4399999999999, "text": " Okay, so I'm gonna run this now and let's see what we get."}, {"start": 575.4399999999999, "end": 577.3199999999999, "text": " So we probably won't have the same image"}, {"start": 577.3199999999999, "end": 580.3199999999999, "text": " as the one that was there and you will see"}, {"start": 580.3199999999999, "end": 582.1999999999999, "text": " how long it takes to generate an image."}, {"start": 582.1999999999999, "end": 584.28, "text": " So currently, this is the speed."}, {"start": 584.28, "end": 587.24, "text": " I think we have 50 steps by default."}, {"start": 587.24, "end": 590.28, "text": " So now we're at 28, we need to wait a bit more"}, {"start": 590.28, "end": 592.8, "text": " and then we'll get the image out."}, {"start": 592.8, "end": 595.0799999999999, "text": " There is a lot more parameters you can tune."}, {"start": 595.08, "end": 597.96, "text": " Again, I super encourage you to check out my deep dive"}, {"start": 597.96, "end": 601.5600000000001, "text": " if you wanna really understand how to delicately control"}, {"start": 601.5600000000001, "end": 604.64, "text": " stable diffusion, that's gonna be probably your best bet."}, {"start": 605.64, "end": 609.12, "text": " Okay, so we see an image here and the second thing"}, {"start": 609.12, "end": 612.6800000000001, "text": " you can do, if you kind of, so 
if I were to hit"}, {"start": 612.6800000000001, "end": 616.12, "text": " Shift Enter again here and generate yet another image,"}, {"start": 616.12, "end": 619.08, "text": " because we didn't set any seeds, because of that,"}, {"start": 619.08, "end": 622.12, "text": " we'll basically generate a completely different image"}, {"start": 622.12, "end": 624.1600000000001, "text": " every time we run the cell."}, {"start": 624.16, "end": 626.0, "text": " So if you don't like that behavior,"}, {"start": 626.0, "end": 628.92, "text": " if you want to have consistency between the runs,"}, {"start": 628.92, "end": 630.6, "text": " then you have to do a small trick,"}, {"start": 630.6, "end": 632.7199999999999, "text": " so you can see here a different image."}, {"start": 632.7199999999999, "end": 635.7199999999999, "text": " You have to do a small trick and that's to basically"}, {"start": 635.7199999999999, "end": 638.92, "text": " set the seed here of your torch generator"}, {"start": 638.92, "end": 641.04, "text": " and then pass that generator to the pipe."}, {"start": 641.04, "end": 644.4, "text": " So here in this example, if I were to run this,"}, {"start": 644.4, "end": 647.0799999999999, "text": " every time you rerun the cell,"}, {"start": 647.0799999999999, "end": 650.0, "text": " you're gonna get the same image output,"}, {"start": 650.0, "end": 652.72, "text": " assuming that you're not changing the prompt obviously."}, {"start": 652.72, "end": 655.9200000000001, "text": " So that's the idea with the generator part."}, {"start": 655.9200000000001, "end": 657.48, "text": " So in the background, what happens is,"}, {"start": 657.48, "end": 662.1600000000001, "text": " it's going to be used to generate like a specific latent,"}, {"start": 662.1600000000001, "end": 664.84, "text": " which is just a noise vector and every time you run it"}, {"start": 664.84, "end": 667.0, "text": " because you've passed the generator,"}, {"start": 667.0, "end": 669.24, "text": " you'll end up with the same noise vector"}, {"start": 669.24, "end": 671.24, "text": " and because the noise vector is the same,"}, {"start": 671.24, "end": 672.88, "text": " the output image is gonna be the same."}, {"start": 672.88, "end": 675.08, "text": " So here you can see the results are consistent"}, {"start": 675.08, "end": 678.1600000000001, "text": " with what we had before I rerun the cell."}, {"start": 678.1600000000001, "end": 680.64, "text": " Okay, cool."}, {"start": 680.64, "end": 682.96, "text": " Again, you can control the number of inference steps."}, {"start": 682.96, "end": 685.4399999999999, "text": " I already mentioned in the previous here,"}, {"start": 686.36, "end": 689.92, "text": " like in the space here that 50 is probably recommended."}, {"start": 689.92, "end": 692.0, "text": " You don't wanna go lower than that."}, {"start": 692.0, "end": 694.08, "text": " You can also see the images that were generated,"}, {"start": 694.08, "end": 695.96, "text": " which are fairly cool."}, {"start": 695.96, "end": 699.04, "text": " But yeah, so here we have 15 and that's why we get,"}, {"start": 699.04, "end": 701.76, "text": " you can see here much lower quality image"}, {"start": 701.76, "end": 703.6, "text": " compared to the one above."}, {"start": 704.6, "end": 706.16, "text": " That's pretty much it."}, {"start": 706.16, "end": 709.56, "text": " So here you can check out this notebook at your own pace."}, {"start": 709.56, "end": 711.8399999999999, "text": " This just creates some utility functions"}, 
{"start": 711.8399999999999, "end": 714.5999999999999, "text": " where you can generate multiple images in a grid."}, {"start": 714.5999999999999, "end": 718.3199999999999, "text": " Same here, but like the logic remains the same."}, {"start": 718.3199999999999, "end": 721.3199999999999, "text": " You can also play with the height and width parameters,"}, {"start": 721.3199999999999, "end": 726.3199999999999, "text": " whereby, well, you can get a non-square type of an image"}, {"start": 727.0799999999999, "end": 728.0, "text": " and there are some rules,"}, {"start": 728.0, "end": 731.0799999999999, "text": " like you need to have multiples of eight"}, {"start": 731.0799999999999, "end": 733.5999999999999, "text": " for your image width and height"}, {"start": 733.5999999999999, "end": 736.28, "text": " and you probably don't wanna go lower than 512."}, {"start": 736.28, "end": 738.1999999999999, "text": " Okay, so then the rest of the notebook"}, {"start": 738.2, "end": 741.44, "text": " is a deep dive into how the thing works."}, {"start": 741.44, "end": 743.24, "text": " But I already told you I have a video"}, {"start": 743.24, "end": 745.08, "text": " that's almost two hours long"}, {"start": 745.08, "end": 747.96, "text": " explaining exactly how stable diffusion is trained,"}, {"start": 747.96, "end": 749.72, "text": " how sampling works, how everything works."}, {"start": 749.72, "end": 751.72, "text": " So do check out that one."}, {"start": 751.72, "end": 753.4000000000001, "text": " Okay, there's a couple more notebooks."}, {"start": 753.4000000000001, "end": 754.6800000000001, "text": " I'm gonna exit here."}, {"start": 754.6800000000001, "end": 756.9200000000001, "text": " There's a couple more cool notebooks you can check out."}, {"start": 756.9200000000001, "end": 759.44, "text": " So this one, image to image pipeline,"}, {"start": 759.44, "end": 762.76, "text": " basically enables you to start from a particular image"}, {"start": 762.76, "end": 764.24, "text": " and then using a prompt,"}, {"start": 764.24, "end": 768.52, "text": " guide that image towards a certain, basically appearance."}, {"start": 768.52, "end": 771.28, "text": " So you can see here that if we take the prompt,"}, {"start": 771.28, "end": 774.92, "text": " a fantasy landscape trending on ArtStation,"}, {"start": 774.92, "end": 777.2, "text": " you get this output here."}, {"start": 777.2, "end": 779.32, "text": " Okay, so I think worth mentioning here"}, {"start": 779.32, "end": 781.48, "text": " is that you can see, like probably,"}, {"start": 781.48, "end": 784.6800000000001, "text": " if you've never seen stable diffusion before,"}, {"start": 784.6800000000001, "end": 786.28, "text": " this part might be kinda confusing,"}, {"start": 786.28, "end": 789.44, "text": " like why would you put the trending on ArtStation?"}, {"start": 789.44, "end": 793.96, "text": " And that's the whole art of basically prompt engineering"}, {"start": 793.96, "end": 798.96, "text": " where you find certain words, certain sentences"}, {"start": 799.2, "end": 801.6800000000001, "text": " that trigger the model to generate better output"}, {"start": 801.6800000000001, "end": 804.76, "text": " or output with specific properties."}, {"start": 804.76, "end": 808.48, "text": " So people kinda, we globally as a community"}, {"start": 808.48, "end": 810.8000000000001, "text": " try a lot of prompts and then somebody figures out"}, {"start": 810.8000000000001, "end": 812.2, "text": " a good prompt and then they share it"}, 
{"start": 812.2, "end": 813.4000000000001, "text": " and then the community learns"}, {"start": 813.4000000000001, "end": 817.2800000000001, "text": " and then we're slowly coevolving with the model in a way."}, {"start": 818.36, "end": 819.88, "text": " I'm gonna leave you to explore this notebook"}, {"start": 819.88, "end": 822.76, "text": " at your own pace, but those are the basic notebooks"}, {"start": 822.76, "end": 824.3199999999999, "text": " you have at your disposal."}, {"start": 824.3199999999999, "end": 828.16, "text": " There is this one as well where you can control"}, {"start": 828.16, "end": 831.08, "text": " exactly the type of a seed you're using."}, {"start": 831.08, "end": 833.0, "text": " So what I mean by that is the following."}, {"start": 833.0, "end": 837.2, "text": " So here, basically they generate four latents."}, {"start": 837.2, "end": 840.6, "text": " So those are those types of noise vectors I mentioned"}, {"start": 840.6, "end": 844.3199999999999, "text": " in this cell here and they generate four images here."}, {"start": 844.3199999999999, "end": 847.08, "text": " And then basically, if you like one of them,"}, {"start": 847.08, "end": 848.92, "text": " you can pick that same latent"}, {"start": 848.92, "end": 850.92, "text": " and you can reconstruct the image"}, {"start": 850.92, "end": 854.36, "text": " or generate like use different prompts"}, {"start": 854.36, "end": 856.7199999999999, "text": " and have a similar type of appearance in the image."}, {"start": 856.7199999999999, "end": 858.36, "text": " Let me show you what I mean by that."}, {"start": 858.36, "end": 862.0, "text": " So if we just pick this same latent, you can see here."}, {"start": 862.0, "end": 865.24, "text": " So they pick seed one because we have four seeds."}, {"start": 865.24, "end": 867.88, "text": " This corresponds to this image here."}, {"start": 867.88, "end": 868.8399999999999, "text": " Whoops."}, {"start": 868.8399999999999, "end": 873.4399999999999, "text": " So it corresponds to this image here, to this one here."}, {"start": 873.4399999999999, "end": 876.7199999999999, "text": " So if we pick that same seed and regenerate the latent,"}, {"start": 876.7199999999999, "end": 879.28, "text": " you can see we get the same image out."}, {"start": 879.28, "end": 881.56, "text": " But if you now change the prompt,"}, {"start": 881.56, "end": 883.68, "text": " but reuse the same latent,"}, {"start": 883.68, "end": 886.04, "text": " you can see that you get a similar posture"}, {"start": 886.04, "end": 889.24, "text": " to this dog here, but just a different breed."}, {"start": 889.24, "end": 893.1999999999999, "text": " Similarly here, Labrador in the style of Van Gogh."}, {"start": 893.1999999999999, "end": 896.66, "text": " And you can see that you have some features"}, {"start": 896.66, "end": 900.12, "text": " of the output image, some like course level structures"}, {"start": 900.12, "end": 902.56, "text": " in this particular example are very similar."}, {"start": 902.56, "end": 904.68, "text": " So namely the posture."}, {"start": 904.68, "end": 907.4399999999999, "text": " And you can even go wild and use like,"}, {"start": 907.4399999999999, "end": 908.52, "text": " you don't have to use a dog."}, {"start": 908.52, "end": 910.48, "text": " You can be as creative as you want."}, {"start": 910.48, "end": 913.04, "text": " Okay, so that's everything that's at your disposal."}, {"start": 913.04, "end": 917.5799999999999, "text": " Basically, that's the two easiest 
methods you can use"}, {"start": 917.5799999999999, "end": 919.8, "text": " is use the space or just use the notebooks"}, {"start": 919.8, "end": 921.0799999999999, "text": " to generate images."}, {"start": 921.0799999999999, "end": 923.36, "text": " But if you want more control, if you want more speed,"}, {"start": 923.36, "end": 925.8, "text": " if you have better hardware locally,"}, {"start": 925.8, "end": 928.6, "text": " I strongly recommend you do the third option."}, {"start": 928.6, "end": 931.56, "text": " And that's to use the script I generated."}, {"start": 931.56, "end": 933.16, "text": " And now I'm gonna walk you through the process"}, {"start": 933.16, "end": 935.48, "text": " of how you can set it up and get started"}, {"start": 935.48, "end": 938.4399999999999, "text": " generating your own images on your own machine."}, {"start": 938.44, "end": 941.6400000000001, "text": " And also do some fancy things like interpolation, etc."}, {"start": 941.6400000000001, "end": 944.36, "text": " The script itself was inspired by Carpathi script."}, {"start": 944.36, "end": 947.32, "text": " So this is this gist he made basically"}, {"start": 947.32, "end": 950.6, "text": " using spherical interpolation to generate"}, {"start": 950.6, "end": 954.6800000000001, "text": " basically a video such as this one, or similar videos."}, {"start": 954.6800000000001, "end": 956.6800000000001, "text": " So you basically pick two images,"}, {"start": 956.6800000000001, "end": 960.12, "text": " and then you smoothly interpolate in the latent space"}, {"start": 960.12, "end": 962.5200000000001, "text": " of the diffusion model, you do interpolation,"}, {"start": 962.5200000000001, "end": 965.32, "text": " and then you get the corresponding images out."}, {"start": 965.32, "end": 968.2, "text": " And you can see how they are slowly morphing"}, {"start": 968.2, "end": 971.0600000000001, "text": " from the source image into the target image."}, {"start": 971.0600000000001, "end": 973.6600000000001, "text": " Okay, that's the basic idea."}, {"start": 973.6600000000001, "end": 976.44, "text": " Okay, guys, let's see how we can go through this notebook,"}, {"start": 976.44, "end": 979.5600000000001, "text": " how we can set up this on our local machine."}, {"start": 979.5600000000001, "end": 980.94, "text": " So here's what you need to do."}, {"start": 980.94, "end": 984.86, "text": " So I'm gonna assume, so I do have basically a Windows machine,"}, {"start": 984.86, "end": 988.2, "text": " but it should be similar no matter whether you're using Linux"}, {"start": 988.2, "end": 990.48, "text": " or Mac OS or whatnot."}, {"start": 990.48, "end": 994.0400000000001, "text": " So first thing you need to do is basically find a place"}, {"start": 994.0400000000001, "end": 995.5200000000001, "text": " where you wanna store this."}, {"start": 995.52, "end": 1000.52, "text": " So I'm gonna randomly create a new folder here on desktop,"}, {"start": 1001.6, "end": 1006.6, "text": " and name it stable diffusion playground."}, {"start": 1006.72, "end": 1008.68, "text": " And I'm going to open it up."}, {"start": 1008.68, "end": 1012.24, "text": " I'm going to open up a git bash here."}, {"start": 1012.24, "end": 1017.24, "text": " So basically you want to clone this particular repo"}, {"start": 1017.5, "end": 1018.6, "text": " into this directory."}, {"start": 1018.6, "end": 1022.02, "text": " So you can do git clone, and then just copy the URL"}, {"start": 1022.02, "end": 1025.04, "text": " you copied from the 
website here, from the GitHub."}, {"start": 1025.04, "end": 1026.3999999999999, "text": " Okay, so that's one way."}, {"start": 1026.3999999999999, "end": 1028.0, "text": " You can also just do it here."}, {"start": 1028.96, "end": 1032.6, "text": " You should have the option to just download,"}, {"start": 1032.6, "end": 1035.8, "text": " but people usually use console to do this."}, {"start": 1035.8, "end": 1038.68, "text": " So if you hit clone, you're going to clone,"}, {"start": 1038.68, "end": 1040.6399999999999, "text": " as you can see here, the repository."}, {"start": 1040.6399999999999, "end": 1041.94, "text": " Now let's follow the second step."}, {"start": 1041.94, "end": 1043.52, "text": " So you can just follow the steps here."}, {"start": 1043.52, "end": 1045.7, "text": " Everything should be self-explanatory."}, {"start": 1045.7, "end": 1050.7, "text": " Okay, so now just navigate with your conda prompt to the,"}, {"start": 1050.7, "end": 1055.7, "text": " so navigate with the anaconda to basically this directory."}, {"start": 1057.5800000000002, "end": 1062.38, "text": " So let's do cd to that directory, and we're here."}, {"start": 1062.38, "end": 1065.98, "text": " And basically the second step, the third step is to just do,"}, {"start": 1065.98, "end": 1069.02, "text": " as you can see here, conda and create."}, {"start": 1069.02, "end": 1071.42, "text": " And that's going to automatically create"}, {"start": 1071.42, "end": 1075.16, "text": " basically a new environment for you."}, {"start": 1075.16, "end": 1076.8600000000001, "text": " So because I already have an environment,"}, {"start": 1076.86, "end": 1080.9799999999998, "text": " I'm just going to basically do rename it,"}, {"start": 1080.9799999999998, "end": 1083.34, "text": " and then I'm going to rerun it here."}, {"start": 1083.34, "end": 1087.1399999999999, "text": " So conda and create, and this is going to create"}, {"start": 1087.1399999999999, "end": 1089.62, "text": " a new environment from scratch."}, {"start": 1089.62, "end": 1092.1399999999999, "text": " So I'm just doing this to make sure that"}, {"start": 1092.1399999999999, "end": 1093.82, "text": " if I'm not hitting any errors,"}, {"start": 1093.82, "end": 1098.26, "text": " then probably most other people using Windows will not."}, {"start": 1098.26, "end": 1099.8999999999999, "text": " So hopefully others as well."}, {"start": 1099.8999999999999, "end": 1101.86, "text": " If you notice any problems,"}, {"start": 1101.86, "end": 1104.5, "text": " feel free to submit an issue here."}, {"start": 1104.5, "end": 1107.42, "text": " I'll be tracking issues over the next couple of days."}, {"start": 1107.42, "end": 1109.34, "text": " I mean, the script is fairly simple,"}, {"start": 1109.34, "end": 1114.24, "text": " so hopefully it's going to be useful and easy to set up."}, {"start": 1114.24, "end": 1116.9, "text": " Okay guys, so here it is, the script executed."}, {"start": 1116.9, "end": 1121.5, "text": " I can just do the conda activate, blah, blah, blah."}, {"start": 1121.5, "end": 1123.74, "text": " We enter the SD playground."}, {"start": 1123.74, "end": 1126.62, "text": " And then the next step is to,"}, {"start": 1126.62, "end": 1130.56, "text": " you will have to run Hugging Face CLI login"}, {"start": 1130.56, "end": 1135.22, "text": " the first time before you start running the script,"}, {"start": 1135.22, "end": 1136.62, "text": " because otherwise you will not be able"}, {"start": 1136.62, "end": 1138.06, "text": " to access the 
model weights."}, {"start": 1138.06, "end": 1140.8, "text": " So this is everything you would do."}, {"start": 1140.8, "end": 1144.3799999999999, "text": " Basically copy this command, paste it here, execute it,"}, {"start": 1144.3799999999999, "end": 1146.6599999999999, "text": " and then it will probably ask you to again,"}, {"start": 1146.6599999999999, "end": 1148.8999999999999, "text": " copy that token from the website."}, {"start": 1148.8999999999999, "end": 1150.36, "text": " You can just copy paste it here,"}, {"start": 1150.36, "end": 1152.46, "text": " and everything is going to work as expected."}, {"start": 1152.46, "end": 1156.86, "text": " Okay guys, so now let's open up an ID of your choice."}, {"start": 1156.86, "end": 1158.02, "text": " I'm going to open up VS Code."}, {"start": 1158.02, "end": 1160.5, "text": " That's my favorite ID by far."}, {"start": 1160.5, "end": 1165.5, "text": " So let me try and open up the corresponding directory."}, {"start": 1166.34, "end": 1169.46, "text": " So we just open up this one, I guess."}, {"start": 1169.46, "end": 1172.74, "text": " And I just trust the author, blah, blah, blah."}, {"start": 1172.74, "end": 1174.7, "text": " I say trust the author because the author,"}, {"start": 1174.7, "end": 1176.08, "text": " I am the author, man."}, {"start": 1176.08, "end": 1177.7, "text": " What's up with this?"}, {"start": 1177.7, "end": 1179.38, "text": " Okay, so generate images."}, {"start": 1179.38, "end": 1181.94, "text": " Here is the main script."}, {"start": 1181.94, "end": 1184.5, "text": " I'm going to walk you through it right now."}, {"start": 1184.5, "end": 1187.34, "text": " Before that, let me just create, basically,"}, {"start": 1187.34, "end": 1191.26, "text": " let me just create a JSON,"}, {"start": 1191.26, "end": 1193.6, "text": " long JSON for the Python file."}, {"start": 1193.6, "end": 1194.8999999999999, "text": " You don't even need to do this"}, {"start": 1194.8999999999999, "end": 1197.54, "text": " because we will not be passing any arguments"}, {"start": 1197.54, "end": 1198.56, "text": " through the long JSON."}, {"start": 1198.56, "end": 1201.5, "text": " This is where you would usually set the arguments"}, {"start": 1201.5, "end": 1203.0, "text": " in case you need them."}, {"start": 1203.0, "end": 1204.06, "text": " But here I don't need them,"}, {"start": 1204.06, "end": 1207.6, "text": " so I'm just going to leave them as empty here, okay?"}, {"start": 1207.6, "end": 1209.02, "text": " So this is a script."}, {"start": 1209.02, "end": 1212.8999999999999, "text": " Basically, I grabbed this function from Karpathy."}, {"start": 1212.8999999999999, "end": 1215.86, "text": " Basically, I already previously implemented it"}, {"start": 1215.86, "end": 1217.8999999999999, "text": " when I was developing my GAN projects."}, {"start": 1217.8999999999999, "end": 1220.1, "text": " You can check out my code there,"}, {"start": 1220.1, "end": 1224.12, "text": " but this looks neater and supports Torch"}, {"start": 1224.12, "end": 1225.3, "text": " and NumPy conversion,"}, {"start": 1225.3, "end": 1227.1, "text": " so I just kind of copy-pasted his version."}, {"start": 1227.1, "end": 1230.1799999999998, "text": " But it's a fairly similar thing to implement."}, {"start": 1230.1799999999998, "end": 1232.06, "text": " You can just take a look at the Wiki page"}, {"start": 1232.06, "end": 1235.9399999999998, "text": " and copy-paste everything to your,"}, {"start": 1235.9399999999998, "end": 1239.4599999999998, "text": " 
I mean, translate from human description to code."}, {"start": 1239.4599999999998, "end": 1240.74, "text": " Fairly trivial."}, {"start": 1240.74, "end": 1243.6999999999998, "text": " Okay, so I'm going to set a breakpoint here."}, {"start": 1243.7, "end": 1247.6200000000001, "text": " I'm going to pick a corresponding interpreter"}, {"start": 1247.6200000000001, "end": 1250.7, "text": " we just created, so st-playground-2."}, {"start": 1250.7, "end": 1252.22, "text": " So you pick the playground,"}, {"start": 1252.22, "end": 1255.46, "text": " so we pick the environment we just created."}, {"start": 1255.46, "end": 1257.78, "text": " And that should make us,"}, {"start": 1259.02, "end": 1261.14, "text": " that should make this work out of the box."}, {"start": 1261.14, "end": 1263.44, "text": " Sometimes you have to restart VS Code"}, {"start": 1263.44, "end": 1267.26, "text": " or whatever ID you use in order for the changes to be,"}, {"start": 1267.26, "end": 1270.82, "text": " and for the interpreter to be correctly loaded."}, {"start": 1270.82, "end": 1273.8999999999999, "text": " But let's see whether now it's going to crash or not."}, {"start": 1273.8999999999999, "end": 1277.8999999999999, "text": " So I'm going to basically just run, hit run here,"}, {"start": 1277.8999999999999, "end": 1280.4199999999998, "text": " and hopefully if everything has worked correctly,"}, {"start": 1280.4199999999998, "end": 1283.26, "text": " I'm going to just hit this breakpoint"}, {"start": 1283.26, "end": 1284.86, "text": " and everything should work."}, {"start": 1284.86, "end": 1287.4199999999998, "text": " Okay, let's see whether that's the case, and it is."}, {"start": 1287.4199999999998, "end": 1288.26, "text": " Awesome."}, {"start": 1288.26, "end": 1291.26, "text": " So I'm going to now walk you through the script."}, {"start": 1292.4199999999998, "end": 1293.7, "text": " It's fairly simple."}, {"start": 1293.7, "end": 1297.82, "text": " The first thing I will do is actually I'm going to exit."}, {"start": 1297.82, "end": 1299.82, "text": " I'm going to exit the script."}, {"start": 1299.82, "end": 1302.26, "text": " I'm going to set generate diverse here."}, {"start": 1302.26, "end": 1305.3799999999999, "text": " So generate diverse is going to start generating"}, {"start": 1305.3799999999999, "end": 1308.12, "text": " diverse set of images with the prompt you specify."}, {"start": 1308.12, "end": 1309.86, "text": " So here is the prompt I've specified,"}, {"start": 1309.86, "end": 1313.74, "text": " a painting of an AI robot having an epiphany moment."}, {"start": 1313.74, "end": 1317.78, "text": " I'm going to save that to the output directory AI epiphany."}, {"start": 1317.78, "end": 1322.58, "text": " I'm going to use 50 steps, 7.5 guidance scale,"}, {"start": 1322.58, "end": 1325.1399999999999, "text": " and I'm going to generate, let's say only one image."}, {"start": 1325.1399999999999, "end": 1329.1, "text": " For the sake of this video, I don't want to make it too slow."}, {"start": 1329.1, "end": 1331.74, "text": " And everything else you can kind of leave."}, {"start": 1331.74, "end": 1334.02, "text": " You can even put none here."}, {"start": 1334.02, "end": 1335.98, "text": " The width and height are 512."}, {"start": 1335.98, "end": 1339.62, "text": " I'm using FP16 because my machine only has"}, {"start": 1339.62, "end": 1343.78, "text": " eight gigabytes of VRAM, and I can basically,"}, {"start": 1343.78, "end": 1345.54, "text": " I'm going to set none here."}, {"start": 
1345.54, "end": 1348.3, "text": " This is, I'm going to explain you briefly what this is."}, {"start": 1348.3, "end": 1351.26, "text": " Basically, it helps you with reproducibility"}, {"start": 1351.26, "end": 1353.7199999999998, "text": " of this script, but we don't need it now."}, {"start": 1353.7199999999998, "end": 1357.02, "text": " So I'm just going to leave all of this as none."}, {"start": 1357.02, "end": 1360.46, "text": " So I'm going to put a comma there,"}, {"start": 1360.46, "end": 1361.54, "text": " and that's pretty much it."}, {"start": 1361.54, "end": 1365.74, "text": " Okay, so let's now rerun the script in the generate mode."}, {"start": 1365.74, "end": 1368.06, "text": " So I'm going to hit run here,"}, {"start": 1368.06, "end": 1369.9, "text": " and let's see what's going to happen."}, {"start": 1369.9, "end": 1372.46, "text": " Okay, so first things first, you do need the GPU"}, {"start": 1372.46, "end": 1374.78, "text": " to run this, otherwise it's going to take"}, {"start": 1374.78, "end": 1377.94, "text": " like painfully long time, and it's better for you"}, {"start": 1377.94, "end": 1381.86, "text": " to use the hugging face space than to wait here."}, {"start": 1381.86, "end": 1385.42, "text": " Yeah, so basically, first thing first is we want"}, {"start": 1385.42, "end": 1388.66, "text": " to make sure that the height and width are a multiply of eight."}, {"start": 1389.8200000000002, "end": 1394.1000000000001, "text": " Okay, so device is CUDA, we just set the seed here"}, {"start": 1394.1000000000001, "end": 1396.3600000000001, "text": " such that every time if I rerun this,"}, {"start": 1396.3600000000001, "end": 1398.46, "text": " I'll have the same latent generated."}, {"start": 1398.46, "end": 1401.02, "text": " If you don't want that, you can just set seed to none,"}, {"start": 1401.02, "end": 1402.72, "text": " then every time you rerun this,"}, {"start": 1402.72, "end": 1405.5600000000002, "text": " you'll get a completely new set of images."}, {"start": 1405.5600000000002, "end": 1407.5, "text": " Okay, so I do some logistics here."}, {"start": 1407.5, "end": 1411.3400000000001, "text": " Basically, I create a specific output file structure"}, {"start": 1411.3400000000001, "end": 1414.8000000000002, "text": " such that we are generating images in a suitable location."}, {"start": 1414.8, "end": 1416.74, "text": " I'm going to show you what I mean by that in a second."}, {"start": 1416.74, "end": 1418.86, "text": " So let me do it like this."}, {"start": 1418.86, "end": 1423.4199999999998, "text": " So here is the directory,"}, {"start": 1423.4199999999998, "end": 1426.46, "text": " and that's where we'll be generating."}, {"start": 1426.46, "end": 1429.54, "text": " You can see I generated these samples,"}, {"start": 1429.54, "end": 1432.3799999999999, "text": " so that's where the images will be stored."}, {"start": 1432.3799999999999, "end": 1435.06, "text": " We have meta, that's where the metadata that we use"}, {"start": 1435.06, "end": 1436.84, "text": " to generate the image will be stored."}, {"start": 1436.84, "end": 1439.18, "text": " And finally latent, that's going to be the actual"}, {"start": 1439.18, "end": 1442.4199999999998, "text": " latent representation that was fed into the pipeline"}, {"start": 1442.4199999999998, "end": 1444.5, "text": " to generate the image."}, {"start": 1444.5, "end": 1446.46, "text": " Okay, let's create the scheduler."}, {"start": 1446.46, "end": 1448.1, "text": " Again, if you don't know what this 
is,"}, {"start": 1448.1, "end": 1450.5, "text": " I suggest you watch my deep dive,"}, {"start": 1450.5, "end": 1452.22, "text": " then everything will be clear."}, {"start": 1452.22, "end": 1455.1, "text": " So we pick again the B14 version."}, {"start": 1455.1, "end": 1457.18, "text": " We pick because FP16 is set to true,"}, {"start": 1457.18, "end": 1460.08, "text": " we're going to use the FP16 weights."}, {"start": 1460.08, "end": 1463.14, "text": " We passed basically the scheduler,"}, {"start": 1463.14, "end": 1464.34, "text": " and we finally have the pipe."}, {"start": 1464.34, "end": 1467.1, "text": " So we'll take some time for the pipe to be constructed"}, {"start": 1467.1, "end": 1469.22, "text": " because it has to load a bunch of weights."}, {"start": 1469.22, "end": 1470.38, "text": " I'm going to show you in a second,"}, {"start": 1470.38, "end": 1473.46, "text": " let me show you how this is going to function."}, {"start": 1473.46, "end": 1475.54, "text": " Basically performance tab,"}, {"start": 1475.54, "end": 1478.06, "text": " and you can see how the memory is slowly increasing."}, {"start": 1478.06, "end": 1481.74, "text": " And after I execute the two device,"}, {"start": 1481.74, "end": 1483.54, "text": " you can see how the GPU will start,"}, {"start": 1484.42, "end": 1488.26, "text": " the GPU consumption, the VRAM consumption will increase."}, {"start": 1488.26, "end": 1491.18, "text": " That's going to hopefully happen very soon."}, {"start": 1491.18, "end": 1492.9, "text": " So let's take a look."}, {"start": 1492.9, "end": 1495.78, "text": " So I'm going to do F10 again here, F10."}, {"start": 1495.78, "end": 1499.46, "text": " And now you can see how the pipe was basically pushed"}, {"start": 1499.46, "end": 1500.4, "text": " onto the GPU."}, {"start": 1500.4, "end": 1503.6200000000001, "text": " You can see a spike here in the VRAM consumption."}, {"start": 1503.6200000000001, "end": 1506.74, "text": " Okay guys, so let's ignore that now."}, {"start": 1506.74, "end": 1509.94, "text": " I'm going to maximize this window again."}, {"start": 1509.94, "end": 1513.38, "text": " So we enter the branch with the generation."}, {"start": 1513.38, "end": 1516.14, "text": " And because I've set the number of images to one,"}, {"start": 1516.14, "end": 1518.3000000000002, "text": " we'll have only a single image generated."}, {"start": 1518.3000000000002, "end": 1519.14, "text": " So let's see what we do."}, {"start": 1519.14, "end": 1521.74, "text": " The first step is we generate the latent."}, {"start": 1521.74, "end": 1525.52, "text": " This is kind of hard coded because that's how the,"}, {"start": 1525.52, "end": 1527.7800000000002, "text": " this is what pipe is also expecting."}, {"start": 1527.7800000000002, "end": 1529.98, "text": " Like in general, you don't want to hard code"}, {"start": 1529.98, "end": 1534.24, "text": " the dimensions like this, but in this case, it just works."}, {"start": 1535.26, "end": 1536.88, "text": " Okay, we generate the latent."}, {"start": 1536.88, "end": 1539.18, "text": " And now I use the AutoCast,"}, {"start": 1539.18, "end": 1541.14, "text": " which is going to use Mixed Precision."}, {"start": 1541.14, "end": 1543.54, "text": " And in my particular case, it's just going to do, I guess,"}, {"start": 1543.54, "end": 1546.22, "text": " FP16 pretty much for all computations."}, {"start": 1546.22, "end": 1549.3, "text": " I'm not sure whether it's going to do FP32 accumulation"}, {"start": 1549.3, "end": 1553.84, "text": " 
for some of the operations, but basically that's it."}, {"start": 1553.84, "end": 1556.1, "text": " Okay, so now we run the pipe."}, {"start": 1556.1, "end": 1558.42, "text": " We pass the prompt, we pass the number of steps,"}, {"start": 1558.42, "end": 1559.8600000000001, "text": " the latent and the guidance scale."}, {"start": 1559.86, "end": 1561.26, "text": " And out comes the image."}, {"start": 1562.78, "end": 1564.54, "text": " So as you can see so far, so good."}, {"start": 1564.54, "end": 1566.9599999999998, "text": " We didn't have any errors with the script."}, {"start": 1566.9599999999998, "end": 1570.86, "text": " Again, I encourage you, if you find any errors,"}, {"start": 1570.86, "end": 1573.34, "text": " basically submit an issue to my GitHub repo."}, {"start": 1573.34, "end": 1574.86, "text": " That would be super useful."}, {"start": 1574.86, "end": 1576.4599999999998, "text": " Okay, let's continue here."}, {"start": 1576.4599999999998, "end": 1577.6799999999998, "text": " Here is the image."}, {"start": 1578.82, "end": 1581.5, "text": " We now just basically store the image."}, {"start": 1581.5, "end": 1582.9799999999998, "text": " So let's see how that's going to happen."}, {"start": 1582.9799999999998, "end": 1584.24, "text": " So we have samples here."}, {"start": 1584.24, "end": 1588.84, "text": " After I run this, we have this neat generate name function"}, {"start": 1588.84, "end": 1592.58, "text": " that's going to automatically detect all of the images"}, {"start": 1592.58, "end": 1595.9399999999998, "text": " inside of this folder and make sure not to overwrite"}, {"start": 1595.9399999999998, "end": 1596.98, "text": " the existing images."}, {"start": 1596.98, "end": 1599.3, "text": " Instead, it's gonna increment the file name"}, {"start": 1599.3, "end": 1600.6999999999998, "text": " and just store it there."}, {"start": 1600.6999999999998, "end": 1602.78, "text": " So I'm gonna show you the script a bit later,"}, {"start": 1602.78, "end": 1604.82, "text": " but for now, let's treat it as a black box."}, {"start": 1604.82, "end": 1606.06, "text": " So here is the image."}, {"start": 1606.06, "end": 1609.06, "text": " I'm gonna put extra large icons so you can see the results."}, {"start": 1609.06, "end": 1612.1799999999998, "text": " So here is the first result, amazing result."}, {"start": 1613.1399999999999, "end": 1615.1399999999999, "text": " Next up, I'm gonna save the metadata."}, {"start": 1615.1399999999999, "end": 1616.98, "text": " So when I say metadata, I mean,"}, {"start": 1616.98, "end": 1618.4199999999998, "text": " let's save the prompt information,"}, {"start": 1618.42, "end": 1620.8600000000001, "text": " how many steps we used to generate this image"}, {"start": 1620.8600000000001, "end": 1624.1000000000001, "text": " and what guidance scale we used to generate this image."}, {"start": 1624.1000000000001, "end": 1626.6200000000001, "text": " I'm gonna improve this script over the next days."}, {"start": 1626.6200000000001, "end": 1628.1000000000001, "text": " So feel free to contribute,"}, {"start": 1628.1000000000001, "end": 1629.8200000000002, "text": " feel free to create a pull request."}, {"start": 1629.8200000000002, "end": 1631.42, "text": " I'm going to be tracking what's going on"}, {"start": 1631.42, "end": 1633.42, "text": " with the repo over the next days."}, {"start": 1633.42, "end": 1636.3200000000002, "text": " So let me open up the meta directory here,"}, {"start": 1636.3200000000002, "end": 1639.1000000000001, "text": " 
and I'm expecting to generate a meta file there."}, {"start": 1639.1000000000001, "end": 1642.88, "text": " So if I hit F10, we can see here there is a JSON."}, {"start": 1642.88, "end": 1645.46, "text": " If I open it up, you can see that's basically"}, {"start": 1645.46, "end": 1648.54, "text": " storing the prompt information, the number of steps,"}, {"start": 1648.54, "end": 1649.9, "text": " and everything else."}, {"start": 1649.9, "end": 1652.94, "text": " I kinda have, wait, let me move this thingy."}, {"start": 1652.94, "end": 1654.26, "text": " Let me try and move this thingy."}, {"start": 1654.26, "end": 1655.78, "text": " Okay, so we have the scale information there."}, {"start": 1655.78, "end": 1659.3, "text": " Okay, so basically that's it."}, {"start": 1659.3, "end": 1661.52, "text": " Now, finally, I store the latents."}, {"start": 1661.52, "end": 1664.78, "text": " So that's the latent that was used to generate this image."}, {"start": 1664.78, "end": 1667.4, "text": " So here's the latent, I just push it onto the CPU,"}, {"start": 1667.4, "end": 1670.14, "text": " I convert it into NumPy, and then I store it here."}, {"start": 1670.14, "end": 1670.98, "text": " So that's it."}, {"start": 1672.06, "end": 1673.6200000000001, "text": " Let's hit F10."}, {"start": 1673.62, "end": 1677.6599999999999, "text": " Here is the latent, and okay, now I can step"}, {"start": 1678.7399999999998, "end": 1680.6599999999999, "text": " and exit the script."}, {"start": 1680.6599999999999, "end": 1681.5, "text": " So that's it."}, {"start": 1681.5, "end": 1686.1999999999998, "text": " That was how the script looks like in this generation mode."}, {"start": 1686.1999999999998, "end": 1688.58, "text": " There is a couple of different modes."}, {"start": 1688.58, "end": 1692.1, "text": " The second one you can use is reproduce mode."}, {"start": 1692.1, "end": 1695.54, "text": " So here, and this is more for playing"}, {"start": 1695.54, "end": 1697.62, "text": " and understanding the reproducibility"}, {"start": 1697.62, "end": 1701.32, "text": " than something that will help you with your art generation."}, {"start": 1701.32, "end": 1704.98, "text": " So what this does is basically now it expects me to pass,"}, {"start": 1704.98, "end": 1709.86, "text": " as you can see here, so now it expects me to pass"}, {"start": 1709.86, "end": 1712.62, "text": " the source latent path and the metadata path."}, {"start": 1712.62, "end": 1714.86, "text": " And using the latent and the metadata,"}, {"start": 1714.86, "end": 1717.62, "text": " it's going to regenerate the same image from scratch."}, {"start": 1717.62, "end": 1718.9399999999998, "text": " Okay, so that's the idea."}, {"start": 1718.9399999999998, "end": 1720.46, "text": " So let's do that."}, {"start": 1720.46, "end": 1722.82, "text": " I'm going to grab this latent."}, {"start": 1722.82, "end": 1724.78, "text": " I'm gonna just copy this path here."}, {"start": 1725.82, "end": 1729.3, "text": " I'm going to put that path here."}, {"start": 1729.3, "end": 1731.86, "text": " So here it is."}, {"start": 1731.86, "end": 1734.4199999999998, "text": " I'm just gonna do it like this."}, {"start": 1734.4199999999998, "end": 1737.1399999999999, "text": " Okay, comment this out."}, {"start": 1737.1399999999999, "end": 1738.98, "text": " I probably have, this sucks on Windows."}, {"start": 1738.98, "end": 1741.8999999999999, "text": " I'm not sure whether, oh my God,"}, {"start": 1741.8999999999999, "end": 1745.62, "text": " am I doing this really 
live?"}, {"start": 1745.62, "end": 1750.46, "text": " Okay, so let me put the backslash here"}, {"start": 1750.46, "end": 1752.94, "text": " so that this actually works."}, {"start": 1752.94, "end": 1755.9199999999998, "text": " Maybe it can work even without it, but yeah."}, {"start": 1755.92, "end": 1759.54, "text": " Okay, now I'm gonna copy the metadata path as well."}, {"start": 1759.54, "end": 1764.54, "text": " So just, oops, shift, right click, copy as path,"}, {"start": 1765.22, "end": 1768.14, "text": " and I'm gonna specify that thing here."}, {"start": 1768.14, "end": 1772.54, "text": " Okay, and that should be pretty much it."}, {"start": 1772.54, "end": 1775.42, "text": " Now I just have to, well, let's see whether"}, {"start": 1775.42, "end": 1776.7, "text": " this is gonna crash or not."}, {"start": 1776.7, "end": 1777.98, "text": " I'm curious, to be honest."}, {"start": 1777.98, "end": 1779.3400000000001, "text": " So this should now work."}, {"start": 1779.3400000000001, "end": 1783.22, "text": " I have reproduce mode and we can just click save here"}, {"start": 1783.22, "end": 1784.9, "text": " and let me run this script again."}, {"start": 1784.9, "end": 1787.22, "text": " So if I run this, okay guys,"}, {"start": 1787.22, "end": 1790.5800000000002, "text": " I did have to add the additional backslash after all,"}, {"start": 1790.5800000000002, "end": 1794.5, "text": " otherwise it's gonna make me, it's gonna complain."}, {"start": 1794.5, "end": 1796.3000000000002, "text": " Okay, so again, same logic here."}, {"start": 1796.3000000000002, "end": 1798.9, "text": " I'm gonna skip over all of this."}, {"start": 1798.9, "end": 1800.8600000000001, "text": " We generate the pipe, blah, blah, blah."}, {"start": 1800.8600000000001, "end": 1802.7800000000002, "text": " So let me just put the breakpoint here"}, {"start": 1802.7800000000002, "end": 1806.7800000000002, "text": " and let's hit F5 and let's get to this point in the code."}, {"start": 1806.7800000000002, "end": 1810.7800000000002, "text": " So again, you'll see how we're gonna reuse"}, {"start": 1810.7800000000002, "end": 1813.14, "text": " this particular metadata here"}, {"start": 1813.14, "end": 1815.5, "text": " and this particular latent here"}, {"start": 1815.5, "end": 1819.3400000000001, "text": " to generate the very same image that we saw here."}, {"start": 1819.3400000000001, "end": 1822.0800000000002, "text": " And additionally, I have a math plot live plot function"}, {"start": 1822.0800000000002, "end": 1824.3000000000002, "text": " being called, so this is gonna pop up"}, {"start": 1824.3000000000002, "end": 1829.16, "text": " after this script is done with the execution."}, {"start": 1829.16, "end": 1832.0200000000002, "text": " Okay, let's see whether we are there."}, {"start": 1832.0200000000002, "end": 1834.8600000000001, "text": " Here we are, we hit this breakpoint."}, {"start": 1834.8600000000001, "end": 1838.8200000000002, "text": " So I make sure that we are passing the needed paths."}, {"start": 1838.8200000000002, "end": 1840.38, "text": " So I open up the metadata."}, {"start": 1840.38, "end": 1842.3400000000001, "text": " You can see that the metadata contains"}, {"start": 1842.34, "end": 1844.34, "text": " exactly the information that was used"}, {"start": 1844.34, "end": 1848.02, "text": " to generate the image a couple of minutes ago."}, {"start": 1848.02, "end": 1851.6599999999999, "text": " So I load the latent here and now we end up,"}, {"start": 1851.6599999999999, "end": 1853.78, 
"text": " let me show you the shape of that thingy."}, {"start": 1853.78, "end": 1858.04, "text": " So init.shape, we end up with 464.64."}, {"start": 1858.04, "end": 1861.1, "text": " So that's the expected size of the latent space"}, {"start": 1861.1, "end": 1863.9399999999998, "text": " for the diffusion model for this particular one."}, {"start": 1863.9399999999998, "end": 1867.78, "text": " And now let's pass the same metadata, that latent,"}, {"start": 1867.78, "end": 1869.82, "text": " and let's generate a NumPy image."}, {"start": 1869.82, "end": 1872.24, "text": " So that's why I put output NumPy output type"}, {"start": 1872.24, "end": 1874.66, "text": " to NumPy, basically can be anything"}, {"start": 1874.66, "end": 1879.22, "text": " because the way how a pipe is implemented in the background,"}, {"start": 1879.22, "end": 1883.9, "text": " you just need to specify anything other than then pill"}, {"start": 1883.9, "end": 1887.14, "text": " and then it will generate a NumPy image as the output."}, {"start": 1887.14, "end": 1889.3, "text": " Okay, so here we are."}, {"start": 1889.3, "end": 1891.38, "text": " Finally, let's generate the image."}, {"start": 1891.38, "end": 1894.78, "text": " So image show, I'm gonna show you the image"}, {"start": 1894.78, "end": 1898.08, "text": " we generated here and you can see that,"}, {"start": 1898.08, "end": 1899.94, "text": " okay guys, so that failed because there was"}, {"start": 1899.94, "end": 1903.22, "text": " a very sneaky bug behind this code."}, {"start": 1903.22, "end": 1905.74, "text": " Not behind my code, but behind the version"}, {"start": 1905.74, "end": 1907.9, "text": " of the diffusers library I'm using."}, {"start": 1907.9, "end": 1911.8200000000002, "text": " So namely, if you take a look at the main branch,"}, {"start": 1911.8200000000002, "end": 1915.0, "text": " the latest commit in this pipeline stable diffusion file,"}, {"start": 1915.0, "end": 1917.7, "text": " so that's basically the call function"}, {"start": 1917.7, "end": 1920.1200000000001, "text": " behind the stable diffusion pipeline object,"}, {"start": 1920.1200000000001, "end": 1924.42, "text": " you can see that there is this lines 102 to 113."}, {"start": 1924.42, "end": 1927.42, "text": " You can see that the latent is being used here."}, {"start": 1927.42, "end": 1931.6200000000001, "text": " So if we pass the latent, then basically,"}, {"start": 1931.6200000000001, "end": 1933.94, "text": " we are going to use it."}, {"start": 1933.94, "end": 1937.8600000000001, "text": " Otherwise, if we pass none, then we generate it on the fly"}, {"start": 1937.8600000000001, "end": 1940.1000000000001, "text": " and there is a latent as well here"}, {"start": 1940.1000000000001, "end": 1943.26, "text": " as the argument to this call function."}, {"start": 1943.26, "end": 1947.8600000000001, "text": " Okay, on the other side, I had to manually add the latent"}, {"start": 1947.8600000000001, "end": 1951.18, "text": " here and I had to add this piece of code here."}, {"start": 1951.18, "end": 1952.1000000000001, "text": " So let me go back."}, {"start": 1952.1000000000001, "end": 1953.8200000000002, "text": " So I had to copy paste that part"}, {"start": 1953.8200000000002, "end": 1955.5, "text": " and I had to copy paste that part"}, {"start": 1955.5, "end": 1960.5, "text": " and now after adding that, manually patching the 0.2.4"}, {"start": 1960.66, "end": 1962.74, "text": " version of the library I currently have"}, {"start": 1962.74, "end": 1964.52, "text": 
" and that's the latest pip version they have."}, {"start": 1964.52, "end": 1966.54, "text": " So they still haven't pushed the latest changes"}, {"start": 1966.54, "end": 1969.1, "text": " into that pip package."}, {"start": 1969.1, "end": 1971.64, "text": " So after you do this fix, now everything works."}, {"start": 1971.64, "end": 1975.7, "text": " So basically, I went ahead and generated another image."}, {"start": 1975.7, "end": 1977.14, "text": " So let me show you what I mean by that."}, {"start": 1977.14, "end": 1980.58, "text": " So I went ahead and generated this image here"}, {"start": 1980.58, "end": 1983.86, "text": " and now if I pick its metadata,"}, {"start": 1983.86, "end": 1988.54, "text": " so we have its metadata 0.0002, same for the latent."}, {"start": 1988.54, "end": 1993.54, "text": " So if I go and specify that here, 0002,"}, {"start": 1994.2199999999998, "end": 1999.06, "text": " and if I just set the reproduce mode, if I hit run,"}, {"start": 1999.06, "end": 2001.78, "text": " now I'm gonna let it execute and I'm gonna show you"}, {"start": 2001.78, "end": 2006.78, "text": " the results as soon as we get to this break point here."}, {"start": 2007.6999999999998, "end": 2009.4799999999998, "text": " Okay guys, the moment of truth."}, {"start": 2009.4799999999998, "end": 2012.76, "text": " Let's basically display the image"}, {"start": 2012.76, "end": 2015.22, "text": " and you can see that's exactly the same image"}, {"start": 2015.22, "end": 2016.98, "text": " as what we had here."}, {"start": 2016.98, "end": 2019.5, "text": " So that's the same image as what we had here."}, {"start": 2019.5, "end": 2021.46, "text": " Let me do a side by side comparison"}, {"start": 2021.46, "end": 2026.34, "text": " and we have perfect reproducibility of this thingy."}, {"start": 2026.34, "end": 2028.98, "text": " Okay, that's very cool."}, {"start": 2028.98, "end": 2030.86, "text": " The last thing I wanna show you"}, {"start": 2030.86, "end": 2033.7, "text": " is the interpolation execution mode."}, {"start": 2033.7, "end": 2036.6, "text": " So that's the funny part where we basically,"}, {"start": 2036.6, "end": 2041.06, "text": " so I'm gonna do interpolate here and here in that case,"}, {"start": 2041.06, "end": 2042.54, "text": " there is a couple of options."}, {"start": 2042.54, "end": 2045.1, "text": " So that's the third branch of the code here."}, {"start": 2045.1, "end": 2047.86, "text": " Basically, if you don't specify the paths,"}, {"start": 2047.86, "end": 2050.98, "text": " then we just generate two random latent vectors."}, {"start": 2050.98, "end": 2053.7799999999997, "text": " Those two latent vectors correspond to two images"}, {"start": 2053.7799999999997, "end": 2056.7, "text": " and then we're gonna smoothly, spherically interpolate"}, {"start": 2056.7, "end": 2059.62, "text": " between those images and generate everything in between."}, {"start": 2059.62, "end": 2061.34, "text": " So that's if you don't specify the latency."}, {"start": 2061.34, "end": 2062.7799999999997, "text": " If you do specify the latency,"}, {"start": 2062.7799999999997, "end": 2064.82, "text": " if you find two images that you like,"}, {"start": 2064.82, "end": 2068.5, "text": " just find basically the numpy, the associated numpy,"}, {"start": 2068.5, "end": 2070.86, "text": " the latents and pass the paths here"}, {"start": 2070.86, "end": 2073.54, "text": " and then you can generate, you can basically,"}, {"start": 2073.54, "end": 2075.58, "text": " well, you can have more 
control over"}, {"start": 2075.58, "end": 2078.06, "text": " between which images are you interpolating."}, {"start": 2078.06, "end": 2079.1800000000003, "text": " That's the basic idea."}, {"start": 2079.1800000000003, "end": 2082.42, "text": " Okay, so now I'm going to do this route."}, {"start": 2082.42, "end": 2084.5, "text": " You can always play with this one,"}, {"start": 2084.5, "end": 2087.06, "text": " but I'm gonna specify both latents"}, {"start": 2087.06, "end": 2088.94, "text": " and let's then interpolate"}, {"start": 2088.94, "end": 2090.86, "text": " and let me show you what's going on."}, {"start": 2090.86, "end": 2093.1400000000003, "text": " So let's do the following thing."}, {"start": 2093.1400000000003, "end": 2095.78, "text": " So I'm going to interpolate between this robot"}, {"start": 2095.78, "end": 2097.2000000000003, "text": " and this robot here."}, {"start": 2097.2, "end": 2101.4399999999996, "text": " So to do that, we need to set one as the source,"}, {"start": 2101.4399999999996, "end": 2102.48, "text": " one as the target."}, {"start": 2102.48, "end": 2104.96, "text": " So because this is one is already set as the source,"}, {"start": 2104.96, "end": 2107.3199999999997, "text": " I'm gonna take this one and put it as a target."}, {"start": 2107.3199999999997, "end": 2109.3599999999997, "text": " So basically that means we need to take"}, {"start": 2109.3599999999997, "end": 2112.8999999999996, "text": " its corresponding latent and that's gonna be easy"}, {"start": 2112.8999999999996, "end": 2115.04, "text": " because of the structure I'm using."}, {"start": 2115.04, "end": 2116.8399999999997, "text": " So that means I just have to do the following."}, {"start": 2116.8399999999997, "end": 2121.8399999999997, "text": " I just have to take the source latent, which is 002."}, {"start": 2122.6, "end": 2126.3199999999997, "text": " I need to paste it here."}, {"start": 2126.32, "end": 2128.52, "text": " I need to set it to one instead."}, {"start": 2128.52, "end": 2131.8, "text": " And now we have that robot set as the target"}, {"start": 2131.8, "end": 2133.6400000000003, "text": " and that's pretty much it."}, {"start": 2133.6400000000003, "end": 2136.1600000000003, "text": " The metadata will be ignored"}, {"start": 2136.1600000000003, "end": 2139.2000000000003, "text": " and I think we just need to, yeah, that's pretty much it."}, {"start": 2139.2000000000003, "end": 2141.32, "text": " Now we're gonna have source latent."}, {"start": 2141.32, "end": 2142.84, "text": " Let me just check."}, {"start": 2142.84, "end": 2146.34, "text": " And we have the target latent."}, {"start": 2146.34, "end": 2150.2000000000003, "text": " We load them and then we are gonna be generating the images."}, {"start": 2150.2000000000003, "end": 2152.44, "text": " Cool, I'm gonna set a break point here."}, {"start": 2152.44, "end": 2155.84, "text": " I'm gonna hit execute and I'm gonna get back to you"}, {"start": 2155.84, "end": 2157.7200000000003, "text": " once we hit this break point."}, {"start": 2157.7200000000003, "end": 2159.92, "text": " Okay guys, so let's enter."}, {"start": 2159.92, "end": 2163.48, "text": " Hopefully we have both paths specified correctly."}, {"start": 2163.48, "end": 2166.06, "text": " I'm gonna load those latents."}, {"start": 2167.06, "end": 2169.52, "text": " We save the metadata again."}, {"start": 2171.1600000000003, "end": 2173.1200000000003, "text": " And we save these latents,"}, {"start": 2173.1200000000003, "end": 2177.32, "text": " which 
is probably less relevant in this example"}, {"start": 2177.32, "end": 2178.92, "text": " because I already had them saved."}, {"start": 2178.92, "end": 2181.6400000000003, "text": " So probably worth updating the script there."}, {"start": 2181.6400000000003, "end": 2183.46, "text": " And now let's see what I'm doing."}, {"start": 2183.46, "end": 2185.2000000000003, "text": " So basically you can see here,"}, {"start": 2185.2, "end": 2186.2799999999997, "text": " I'm doing a lint space."}, {"start": 2186.2799999999997, "end": 2189.3999999999996, "text": " So between zero and one, I want to generate 50 images."}, {"start": 2189.3999999999996, "end": 2192.6, "text": " So I had to set that number of images to 50 here."}, {"start": 2192.6, "end": 2195.74, "text": " And additionally prepend this zero just as a simple hack."}, {"start": 2195.74, "end": 2199.4199999999996, "text": " And the reason being is I want to, for the first iteration,"}, {"start": 2199.4199999999996, "end": 2201.68, "text": " I want to generate the target image"}, {"start": 2201.68, "end": 2206.2, "text": " such that I'm confident that the target image I've chosen"}, {"start": 2206.2, "end": 2207.3199999999997, "text": " is the correct one."}, {"start": 2207.3199999999997, "end": 2209.2, "text": " And then I'm gonna generate the source one"}, {"start": 2209.2, "end": 2211.6, "text": " and then it's gonna do the smooth interpolation"}, {"start": 2211.6, "end": 2213.6, "text": " from source to target, if that makes sense."}, {"start": 2213.6, "end": 2217.64, "text": " Okay, so that's why in the first iteration we enter here,"}, {"start": 2217.64, "end": 2221.02, "text": " we grab the target images latent."}, {"start": 2221.02, "end": 2222.7599999999998, "text": " And now I'm gonna generate it."}, {"start": 2222.7599999999998, "end": 2224.86, "text": " So let's just do this same logic."}, {"start": 2224.86, "end": 2226.14, "text": " We just passed a prompt,"}, {"start": 2226.14, "end": 2227.96, "text": " we passed the number of inference steps,"}, {"start": 2227.96, "end": 2230.36, "text": " we passed the latent, the guidance scale and everything."}, {"start": 2230.36, "end": 2231.9, "text": " And then we're gonna save the image."}, {"start": 2231.9, "end": 2234.7599999999998, "text": " So let me show you that indeed,"}, {"start": 2235.92, "end": 2238.62, "text": " so if I recall correctly,"}, {"start": 2238.62, "end": 2242.48, "text": " we use which image did we use as the source,"}, {"start": 2242.48, "end": 2244.06, "text": " just a second."}, {"start": 2244.06, "end": 2248.08, "text": " So we use as the target, we are using the image one."}, {"start": 2248.08, "end": 2249.76, "text": " Okay, so the target is this one."}, {"start": 2249.76, "end": 2253.16, "text": " So I'm expecting to generate yet another image"}, {"start": 2253.16, "end": 2255.04, "text": " that looks like that."}, {"start": 2255.04, "end": 2257.2400000000002, "text": " So let's verify that."}, {"start": 2257.2400000000002, "end": 2258.76, "text": " So I'm gonna hit here."}, {"start": 2258.76, "end": 2263.12, "text": " And if I save, okay, I actually get the different image."}, {"start": 2263.12, "end": 2265.4, "text": " And the reason is because of the bug I had,"}, {"start": 2265.4, "end": 2267.58, "text": " basically those metadata that were saved"}, {"start": 2267.58, "end": 2270.36, "text": " for these first two images because of the bug,"}, {"start": 2270.36, "end": 2272.96, "text": " it was actually corresponding to this image."}, {"start": 
2272.96, "end": 2276.1, "text": " So now the second one should be this one."}, {"start": 2276.1, "end": 2277.34, "text": " And if that's not correct,"}, {"start": 2277.34, "end": 2279.26, "text": " then something weird is going on,"}, {"start": 2279.26, "end": 2281.0, "text": " but I'm fairly sure that's not gonna happen."}, {"start": 2281.0, "end": 2283.88, "text": " So I'm gonna hit F5 and get back to you"}, {"start": 2283.88, "end": 2286.02, "text": " once this image is generated."}, {"start": 2287.0, "end": 2288.1, "text": " So that's the source."}, {"start": 2288.1, "end": 2290.6800000000003, "text": " Okay, so let's do F10."}, {"start": 2290.6800000000003, "end": 2292.6800000000003, "text": " And you can see here, okay,"}, {"start": 2292.6800000000003, "end": 2294.36, "text": " now we're starting the interpolation."}, {"start": 2294.36, "end": 2296.7200000000003, "text": " I'll get back to you once all of the images are generated,"}, {"start": 2296.7200000000003, "end": 2298.08, "text": " we'll see a smooth interpolation"}, {"start": 2298.08, "end": 2299.56, "text": " between this image as a source"}, {"start": 2299.56, "end": 2300.84, "text": " and this image as a target."}, {"start": 2300.84, "end": 2302.48, "text": " Okay, guys, I've let it run"}, {"start": 2302.48, "end": 2304.92, "text": " and here you can see the results."}, {"start": 2304.92, "end": 2306.24, "text": " Fairly cool."}, {"start": 2306.24, "end": 2308.56, "text": " So here is again the source image."}, {"start": 2308.56, "end": 2310.24, "text": " Here is the target image."}, {"start": 2310.24, "end": 2313.64, "text": " Again, apologize for the bug there, but yeah."}, {"start": 2313.64, "end": 2315.96, "text": " You will not have to, by the way, sort it out"}, {"start": 2315.96, "end": 2317.9, "text": " because as soon as the patch is submitted"}, {"start": 2317.9, "end": 2319.24, "text": " to the diffusers library,"}, {"start": 2319.24, "end": 2322.2799999999997, "text": " you can just use the script without having to do the patch."}, {"start": 2322.2799999999997, "end": 2324.4, "text": " But for now, if you try and use the script,"}, {"start": 2324.4, "end": 2327.36, "text": " you will have to apply the patch before that."}, {"start": 2327.36, "end": 2330.28, "text": " Okay, so let's see how this is gonna look like."}, {"start": 2331.1200000000003, "end": 2333.32, "text": " Let's start with this image"}, {"start": 2333.32, "end": 2335.32, "text": " and let's slowly go to the right."}, {"start": 2335.32, "end": 2337.1800000000003, "text": " You can see how the image is morphing."}, {"start": 2338.2000000000003, "end": 2339.26, "text": " It's amazing."}, {"start": 2340.42, "end": 2345.32, "text": " And then it's slowly morphing to the final robot"}, {"start": 2345.32, "end": 2346.1600000000003, "text": " and that's it."}, {"start": 2346.1600000000003, "end": 2348.1200000000003, "text": " So there are some jumps in the latent space."}, {"start": 2348.1200000000003, "end": 2349.76, "text": " That's always the case."}, {"start": 2349.76, "end": 2353.96, "text": " And I had to use like maybe 200 or more images"}, {"start": 2353.96, "end": 2356.76, "text": " if I were to have a smoother interpolation"}, {"start": 2356.76, "end": 2358.36, "text": " over these couple of last steps."}, {"start": 2358.36, "end": 2360.36, "text": " So you can see there are some sudden jumps here."}, {"start": 2360.36, "end": 2363.28, "text": " But yeah, guys, that's pretty much it."}, {"start": 2363.28, "end": 2365.38, "text": " I showed you 
how to use the script."}, {"start": 2365.38, "end": 2367.5200000000004, "text": " There is the small gotcha I just explained"}, {"start": 2367.5200000000004, "end": 2369.36, "text": " about having to patch this file,"}, {"start": 2369.36, "end": 2373.0400000000004, "text": " but everything else should work as expected."}, {"start": 2373.0400000000004, "end": 2374.88, "text": " Hopefully you like this video."}, {"start": 2374.88, "end": 2378.2000000000003, "text": " If you try and generate some cool images with this script,"}, {"start": 2378.2000000000003, "end": 2380.6400000000003, "text": " feel free to tag me on Twitter or on LinkedIn"}, {"start": 2380.6400000000003, "end": 2384.0, "text": " and I'll be happy to see your images and reply."}, {"start": 2384.0, "end": 2386.6000000000004, "text": " And yeah, again, I'll be updating the script"}, {"start": 2386.6, "end": 2387.8399999999997, "text": " for the next couple of days,"}, {"start": 2387.8399999999997, "end": 2389.96, "text": " especially if somebody submits an issue,"}, {"start": 2389.96, "end": 2392.02, "text": " a bug somebody finds or something,"}, {"start": 2392.02, "end": 2395.48, "text": " I'm gonna try and address it as soon as possible."}, {"start": 2395.48, "end": 2397.6, "text": " Having said that, if you like this video,"}, {"start": 2397.6, "end": 2399.52, "text": " share it out with your friends,"}, {"start": 2399.52, "end": 2402.04, "text": " try and generate cool stuff yourself"}, {"start": 2402.04, "end": 2405.0, "text": " and subscribe to this YouTube channel, of course."}, {"start": 2405.0, "end": 2406.64, "text": " Until next time, bye bye."}, {"start": 2406.64, "end": 2418.46, "text": " Often when you miss an issue, I try and watch it again."}]
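(For reference, the interpolation mode walked through above boils down to something like the following; this is a minimal sketch, not the script itself. `pipe` and `prompt` are assumed to be the Stable Diffusion pipeline and prompt set up earlier, the pipeline call is assumed to accept a `latents` argument (the video had to patch diffusers 0.2.4 by hand for this; newer releases expose it), the `.images[0]` access assumes the newer output object, and the .npy paths are placeholders for latents saved by a previous run.)

```python
import numpy as np
import torch

def slerp(t, v0, v1, dot_threshold=0.9995):
    """Spherical interpolation between two latent tensors, t in [0, 1]."""
    v0_f, v1_f = v0.float(), v1.float()
    dot = torch.sum(v0_f * v1_f) / (v0_f.norm() * v1_f.norm())
    if dot.abs() > dot_threshold:  # nearly parallel latents: plain lerp is numerically safer
        out = (1 - t) * v0_f + t * v1_f
    else:
        theta = torch.acos(dot)
        out = (torch.sin((1 - t) * theta) * v0_f + torch.sin(t * theta) * v1_f) / torch.sin(theta)
    return out.to(v0.dtype)

# Placeholder paths: each .npy holds a saved 1x4x64x64 latent from a previous generation run.
src = torch.from_numpy(np.load("output/latents/0000.npy")).to("cuda", dtype=torch.float16)
tgt = torch.from_numpy(np.load("output/latents/0002.npy")).to("cuda", dtype=torch.float16)

for i, t in enumerate(np.linspace(0.0, 1.0, 50)):
    latent = slerp(float(t), src, tgt)
    with torch.autocast("cuda"):
        image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5, latents=latent).images[0]
    image.save(f"output/samples/interp_{i:04d}.png")
```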
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=hc0u4avAkuM
Ultimate Guide To Scaling ML Models - Megatron-LM | ZeRO | DeepSpeed | Mixed Precision
🚀 Sign up for AssemblyAI's speech API using my link 🚀 https://www.assemblyai.com/?utm_source=youtube&utm_medium=social&utm_campaign=theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I show you what it takes to scale ML models up to trillions of parameters! I cover the fundamental ideas behind all of the recent big ML models you must have heard of like Meta's OPT-175B, BigScience BLOOM 176B, EleutherAI's GPT-NeoX-20B, GPT-J, OpenAI's GPT-3, Google's PaLM, DeepMind's Chinchilla/Gopher models, etc. I cover the ideas of data parallelism, model/pipeline parallelism (e.g. GPipe, PipeDream, etc.), model/tensor parallelism (Megatron-LM), activation checkpointing, mixed precision training, ZeRO (zero redundancy optimizer) from Microsoft's DeepSpeed library and many more. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Papers: ✅ Megatron-LM paper: https://arxiv.org/abs/1909.08053 ✅ ZeRO (DeepSpeed) paper: https://arxiv.org/abs/1910.02054v3 ✅ Mixed precision training paper: https://arxiv.org/abs/1710.03740 ✅ Gpipe (pipeline parallelism) paper: https://arxiv.org/abs/1811.06965 Articles: ✅ Collective ops: https://en.wikipedia.org/wiki/Collective_operation ✅ IEEE float16 format: https://en.wikipedia.org/wiki/Half-precision_floating-point_format ✅ Google Brain's bfloat16 format: https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro to training Large ML models (trillions of params!) 00:02:04 (sponsored) AssemblyAI's speech transcription API 00:03:31 Data parallelism 00:01:52 Pipeline/model parallelism 00:14:52 Megatron-LM paper (tensor/model parallelism) 00:18:22 Splitting the MLP block vertically 00:30:07 Splitting the attention block vertically 00:39:24 Activation checkpointing 00:42:12Combining data + model parallelism 00:45:42 Scaling is all you need and 3D parallelism 00:47:57 Mixed precision training paper 00:49:57 Single vs half vs bfloat number formats 00:51:32 Storing master weights in single precision 00:55:41 Loss scaling 00:58:13 Arithmetic precision matters 01:00:32 ZeRO optimizer paper (DeepSpeed library) 01:06:37 Partitioning is all you need? 01:11:02 Where did all the memory go? 01:21:42 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #scaling #deepspeed #megatron
What's cracking guys? The idea of this video is going to be extremely ambitious: I'm going to try and give you the most seminal and important ideas behind scaling your models up to hundreds of billions and even trillions of parameters. I'm going to cover multiple papers, as you can see here. We have Megatron-LM from NVIDIA, which introduces tensor (or model) parallelism. I'm going to walk you quickly through the pipeline parallelism idea from the GPipe paper. I'm going to walk you through the data parallelism idea, which is probably something most of you are already familiar with, but just to round things up I'll cover that one as well. I'm going to quickly go through the mixed precision training paper, and finally I'm going to go through the ZeRO optimizer, the zero redundancy optimizer from Microsoft, which enabled training models up to 1 trillion parameters big.

So why should you care about this video? Well, the thing is the following: if you're dealing with, say, a GPT-style transformer and you want to go above roughly 1.5 billion parameters, you literally need more than 32 gigabytes of VRAM, which most of you do not have. Up until roughly 1.5 billion parameters, a single GPU with 32 gigabytes of VRAM can handle the training even without any of these optimizations; beyond that, it's impossible to train bigger models on a single device. By combining all of these ideas, you'll be able to train super big models, some of which you must have seen recently published, such as BLOOM from the BigScience organization and the OPT-175B model from Meta. Other examples are Chinchilla and Gopher from DeepMind, and PaLM from Google.

So yeah, guys, before we go there I want to make a quick shout-out to AssemblyAI and thank them for sponsoring this video. They offer a super powerful speech understanding and transcription API, and you can see how easy it is to get started in Python: literally a couple of lines of code and voila, you get your transcription back, as well as, in this case, sentiment analysis. They offer various other understanding features such as entity detection, et cetera. You can also check out the no-code solution; I guess that's not that interesting for most of you, but if you just want to quickly understand the features they support, you can check out that page as well. If you're struggling to get started, which I doubt because it's super simple, there is even a very detailed tutorial on how to get started, and if you'd like ideas for projects, they have a YouTube channel with cool videos such as this one, literally 30 minutes long, explaining how to summarize various podcasts with their API. Finally, they have a very cool set of blogs and tutorials on how to get started with various other models such as DALL-E. So give them a go and support the channel; I'm linking the free API token down in the video description, so you can get started in a couple of seconds, literally. For toy projects you'll have no problems with the free account, you get three hundred hours of transcription, and if you want to upgrade you can always do that. Having said that, let's get back to the video.
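Before diving into the techniques, here is a quick back-of-the-envelope check on that 32-gigabyte claim; a minimal sketch, assuming the usual ~16 bytes per parameter for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights, momentum and variance, the accounting the ZeRO paper uses), and ignoring activations, buffers and memory fragmentation:

```python
params = 1.5e9  # a ~1.5B-parameter GPT-style model

# Mixed-precision Adam keeps fp16 weights + fp16 gradients (2 + 2 bytes per parameter)
# plus fp32 master weights, momentum and variance (4 + 4 + 4 bytes per parameter).
bytes_per_param = 2 + 2 + 4 + 4 + 4  # = 16

model_states_gib = params * bytes_per_param / 2**30
print(f"~{model_states_gib:.1f} GiB for model states alone")  # ~22.4 GiB, before any activations
```

Activations for long sequences and decent batch sizes easily eat the remaining headroom, which is roughly why ~1.5 billion parameters is where a single 32 GB card gives up.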
So first, let's actually start with data parallelism, because that's probably the simplest idea to explain. The idea is the following: you have a neural network, and you literally take its weights and replicate them, you copy-paste those weights across multiple devices. Say you have N GPUs; you copy-paste that same network onto all of them. Then you take a batch of data and split it along the batch dimension into multiple parts. (By the way, I'm going to simplify things and represent the input tensor as a 2D object, even though depending on your data it's usually going to be 3D, 4D, whatnot.) Let's assume we have just three devices: you grab the first chunk and feed it into the first device, you grab the second chunk and pass it into the second GPU, and you pass the final piece into the third one. Then you do a forward pass on each device and calculate the gradients. Now all of these devices hold different gradients, as a consequence of being fed different parts of the input batch, so we need to reduce those gradients, and when I say reduce I literally mean you sum them up and then divide by the number of devices to get the average gradient.

There are multiple ways you can implement this. In this particular image there is this thing called a parameter server, so you do a reduce and then a scatter operation: all of the devices send their gradients to the parameter server, the gradients get averaged, the weights are updated directly on the parameter server in this implementation, and then the scatter operation sends the new weights back onto your devices. That's how you achieve training with a much higher throughput: even though there is some overhead, you get roughly 3x higher throughput compared to training on a single GPU. And when I say GPU, it can be a TPU, it can be an arbitrary device.
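As a minimal sketch of that gradient-averaging step, here is what one data-parallel update looks like with an all-reduce collective instead of the parameter server shown in the figure (the math is identical: sum the gradients, divide by the number of replicas). The model, optimizer and loss_fn are placeholders, and torch.distributed is assumed to be initialized; in practice you would just use torch.nn.parallel.DistributedDataParallel, which additionally overlaps these all-reduces with the backward pass.

```python
import torch.distributed as dist

def data_parallel_step(model, optimizer, micro_batch, loss_fn, world_size):
    """One data-parallel step: local forward/backward, then average gradients across replicas."""
    inputs, targets = micro_batch              # this replica's shard of the global batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                            # each replica now has gradients for its shard only
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum gradients over all replicas
            p.grad /= world_size                           # ...and turn the sum into an average
    optimizer.step()                           # every replica applies the identical update
```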
So what's the problem with this idea? The problem is that the underlying presupposition is that the whole network can fit onto a single device, and we know that our models are much bigger and usually cannot fit onto a single device. So what do we do then? Well, then we do things such as model parallelism. This is a paper from NVIDIA called Megatron-LM: training multi-billion parameter language models using model parallelism. It's literally the idea behind both BLOOM and OPT-175B from Meta, and I think some of EleutherAI's models were also backed by Megatron, at least implicitly, because I think they're using DeepSpeed and the GPT-NeoX codebase, and DeepSpeed basically leverages Megatron in the background. I could be wrong here, but I'm fairly sure this is correct.

Anyway, let's dig into the paper. They call this an intra-layer model parallel approach, and I want to quickly contrast that (I'll explain it in a bit more depth later) with the pipeline approach, which is inter-layer splitting. The difference is the following. Imagine you have a transformer, and I'm going to use a transformer as the running example, with N blocks. The pipeline approach says: let's say we have four devices; you take layers one through four and put them onto one device, layers five through eight onto the second, the next four layers onto the third, and the last four layers onto the fourth. So pipeline parallelism literally splits the model across layers, a horizontal split, and you can imagine that each of these devices now needs to hold roughly four times less memory, at least when it comes to the weights (there are also the optimizer states, et cetera).

The idea with model parallelism from the Megatron-LM paper is instead the so-called vertical split. Imagine we have just two devices, it's easier to draw: you literally split your transformer vertically, send one half onto one device and the other half onto the other device. That's model parallelism, or tensor parallelism, which is a different name for the same concept.

Because I'm already halfway through explaining pipeline parallelism, let me quickly finish it. One of the papers that introduced the idea was GPipe from Google; others, like PipeDream, implement similar ideas. So what's the problem here? Well, if you just naively train a model that's split using pipeline parallelism, you have a problem with these so-called bubbles. Let me demonstrate what I mean by that. Imagine you feed a batch of data through this pipeline: while you're doing the forward pass through the first device, through the first four layers, the other three devices are idle.
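As a rough sketch (not the paper's code), here is what that naive inter-layer split looks like for a hypothetical 16-layer model spread over four visible GPUs; note that with this schedule only one stage is ever busy at a time, which is exactly the bubble problem discussed next.

```python
import torch.nn as nn

NUM_LAYERS, NUM_STAGES = 16, 4
layers_per_stage = NUM_LAYERS // NUM_STAGES

blocks = [nn.TransformerEncoderLayer(d_model=1024, nhead=16) for _ in range(NUM_LAYERS)]

# Stage i owns a contiguous slice of 4 layers and lives on its own GPU.
stages = [
    nn.Sequential(*blocks[i * layers_per_stage:(i + 1) * layers_per_stage]).to(f"cuda:{i}")
    for i in range(NUM_STAGES)
]

def pipeline_forward(x):
    # Naive pipeline: activations hop from GPU to GPU; while stage i runs, the others sit idle.
    for i, stage in enumerate(stages):
        x = stage(x.to(f"cuda:{i}"))
    return x
```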
You can see that on this diagram: these devices here are doing exactly nothing, there is no computation going on, and that's wasteful. Then, once you've done the forward pass through the first four layers, you send the activations to the second device and start the forward pass there, and at that point in time the first, third and fourth devices are idle. You propagate like that through all four devices, and then you start doing the backprop to calculate the gradients, and again, in the backward pass, while one device is doing its part of the backprop the other three are idle. Super wasteful.

There is a simple way to avoid this, and it's called micro-batching: you split your mini-batch into multiple micro-batches, start feeding them through the pipeline, and increase the throughput. What happens is you feed the first micro-batch into the first device, and the moment it moves on to the second device you immediately feed the second micro-batch into the first device, so now two devices are busy instead of one. As soon as the first device hands its activations over again, you feed the third micro-batch, and so on and so forth, you get the point. The diagram now looks like this: for the duration of the first micro-batch three devices are still idle, then only two, then only one, and finally all of the devices are busy and you get close to the full 4x throughput compared to the naive pipeline schedule. To get as many of these fully busy columns as possible you want a lot of micro-batches, and what the GPipe paper showed empirically is that you want M greater than or equal to 4K, where K is the number of partitions (a fancy name for the number of devices). So if you have four devices you want 32 or more micro-batches, and then they showed you get almost linear scaling of the throughput, which is awesome.

So that's one way to take a huge model, split it across multiple devices so that each device holds only a slice of the network weights, and train the whole system with high throughput; and it's also feasible in the first place, because you couldn't stick the whole model into a single device anyway. So that's the second idea.
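That M >= 4K rule of thumb is easy to sanity-check with the usual back-of-the-envelope bubble estimate (the GPipe paper puts the bubble overhead at roughly (K - 1) / (M + K - 1)); a tiny sketch:

```python
def bubble_fraction(k_stages, m_micro_batches):
    """Approximate fraction of the pipeline schedule wasted in warm-up/drain bubbles."""
    return (k_stages - 1) / (m_micro_batches + k_stages - 1)

for m in (1, 4, 16, 32, 64):
    print(f"K=4 stages, M={m:2d} micro-batches -> {bubble_fraction(4, m):5.1%} lost to bubbles")
# M=1 -> 75.0%, M=4 -> 42.9%, M=16 -> 15.8%, M=32 -> 8.6%, M=64 -> 4.5%
```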
Okay, so now let's go through the actual paper; this one is very fun. Let's get back to the first sentence: they call it an intra-layer model parallel approach, and we now know what that means. It's intra-layer: we're splitting the actual layer, not splitting across layers; we take a single layer and split it into two parts, or four parts, or eight parts, or whatnot. They say: our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. The complementary part is super important, because it means you can combine these two approaches to scale to even bigger model sizes.

Then they say this: we sustain 15.1 petaflops across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 teraflops, which is 30% of peak flops. Let me break this down. First of all, I forgot to mention that they're training a model with 8.3 billion parameters, and they're training on 512 GPUs. What they're saying is that they achieve a throughput of 15.1 petaflops (peta is 10 to the power of 15, so a thousand tera, or 1024 tera, depending on whom you ask; flops are floating point operations per second, so that's a huge number of operations per second) with 76% scaling efficiency. What does that mean in practice? Let me open up a calculator. A single GPU sustains 39 teraflops, so if you multiply 39 by the number of GPUs, with ideal scaling we'd end up with almost 20 petaflops, and you can see they only reach 15.1; that's where the 76% scaling efficiency comes in, because if I multiply the ideal number by 0.76 we end up with exactly 15.1 petaflops. Also note that the single-GPU baseline itself reaches only 30% of the theoretical peak; a GPU will never reach its full theoretical peak, and when you additionally scale across multiple devices you'll never get ideal linear scaling either, but this is close enough, so that's cool.
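Spelling that arithmetic out (the 39 teraflops and 76% figures are the numbers reported in the paper):

```python
per_gpu_tflops = 39      # sustained single-GPU baseline reported by the paper
num_gpus = 512
efficiency = 0.76        # reported scaling efficiency

ideal_pflops = per_gpu_tflops * num_gpus / 1000    # ~19.97 PFLOPs under perfect linear scaling
achieved_pflops = ideal_pflops * efficiency        # ~15.2 PFLOPs, matching the reported 15.1
print(f"ideal: {ideal_pflops:.2f} PFLOPs, achieved: {achieved_pflops:.2f} PFLOPs")
```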
For the second linear layer's weight matrix you do the same thing, except there you split it row-wise instead of column-wise: B_1 goes on one device and B_2 on the other. So all the operations drawn in the top half of the diagram happen on a single device -- that device holds A_1 and B_1 -- while the second device processes the second half of the MLP with A_2 and B_2. In between there is this g operation, which synchronizes the two devices (more on that a bit later). That's how the vertical splitting is achieved, but this is only a high-level diagram; let's understand why it works and why it's equivalent to having everything on a single device. It might be tough to grasp on a first pass, but it's fairly simple once you think about it. Here is what the feed-forward layer does: Y = GeLU(XA), where X is your input data, A is the weight matrix, and GeLU is the activation -- the nonlinearity -- in this particular case. There are multiple ways to split A; we saw column-wise and row-wise, so let's look at the advantages and disadvantages of both. If we split the weight matrix row-wise, we have a problem: we need to synchronize the devices. Splitting A row-wise forces us to split X along its columns, into X_1 and X_2, and the product becomes X_1 A_1 + X_2 A_2, after which we apply the GeLU. The problem is the plus: the two devices must communicate, because that's the only way to add the two partial activations. And since GeLU is a nonlinear function, GeLU(X_1 A_1 + X_2 A_2) is not the same as GeLU(X_1 A_1) + GeLU(X_2 A_2). That's why the paper says this approach requires a synchronization point before the GeLU function. Let me map this matrix multiplication onto the neural-network diagram you're probably familiar with. Assume the batch size is one, so the input X is a single vector, and we split it into two halves, X_1 and X_2; we pass it through the feed-forward layer, i.e. multiply it by the weight matrix A, which is split row-wise into A_1 and A_2. Multiplying X_1 by A_1 corresponds, in the diagram, to connecting the first half of the input units to all of the output units: the first input unit contributes to the first output, then to the second, and so on, until every output unit has received a contribution from the first half of the input (I'm only drawing four output units to keep it simple).
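Here is a tiny PyTorch sketch of exactly this point -- it's my own toy example, not the Megatron-LM code -- showing that the row-wise split needs a sum (synchronization) before the GeLU, because the nonlinearity doesn't distribute over the partial products:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(1, 8)            # one token, hidden size 8
A = torch.randn(8, 16)           # first MLP weight matrix

# A row-wise split of A forces a column-wise split of X.
X1, X2 = X[:, :4], X[:, 4:]
A1, A2 = A[:4, :], A[4:, :]

reference = F.gelu(X @ A)                         # everything on one "device"
synced    = F.gelu(X1 @ A1 + X2 @ A2)             # sum the partials, then GeLU
unsynced  = F.gelu(X1 @ A1) + F.gelu(X2 @ A2)     # GeLU each partial separately

print(torch.allclose(reference, synced))    # True  -> needs a sync before the GeLU
print(torch.allclose(reference, unsynced))  # False -> GeLU is nonlinear
```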
The problem is that all of these outputs are still incomplete, because they are missing the contribution from the X_2 portion -- that's why we have to add up the partial results: the remaining connections in the diagram, X_2 multiplied by A_2, contribute the missing part to each output element. So if you only compute X_1 A_1 you get partial contributions to the output; you additionally have to add X_2 A_2, and only then can you apply the GeLU. Now let's look at the second way of breaking down the weight matrix: column splitting. "Another option is to split A along its columns. This partitioning allows the GeLU nonlinearity to be independently applied to the output of each partitioned GEMM." Notice the benefit, and this is a very important detail: this time each device receives the whole X, not X_1 on one and X_2 on the other. The payoff is that each device can do its entire computation locally and then apply the GeLU; there is no synchronization at all. That's valuable, because synchronization slows us down -- we would have to communicate the partial activations, sum them up, and send the result back. As the paper puts it: "This is advantageous as it removes a synchronization point. Hence, we partition the first GEMM in this column-parallel fashion and split the second GEMM along its rows so it takes the output of the GeLU layer directly without requiring any communication," as shown in figure 3a. That's the trick. Let me redraw the diagram for the column split: again we have our input X, and we multiply it by the weight matrix, but this time the matrix is split column-wise -- before we had a horizontal split, now a vertical one. When you multiply X by the first column you get a complete value for that output element: it already contains all the necessary information from the input. Keep multiplying and you end up with half of the output representation vector, but that half is complete -- it contains everything it needs, so no summation is required. That's the advantage. We do that on the two devices, each ends up with one half of the (complete) hidden representation, and we apply the GeLUs.
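To convince yourself that the full column-parallel-then-row-parallel MLP really is equivalent to the unsplit one, here is a small single-machine simulation of the two-way split (again my own sketch with made-up sizes, not the paper's code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(4, 8)             # a batch of 4 tokens, hidden size 8
A = torch.randn(8, 32)            # first MLP matrix  (h -> 4h)
B = torch.randn(32, 8)            # second MLP matrix (4h -> h)

reference = F.gelu(X @ A) @ B     # the whole MLP on a single "device"

# Megatron-style two-way split: A column-wise, B row-wise.
A1, A2 = A[:, :16], A[:, 16:]
B1, B2 = B[:16, :], B[16:, :]

Y1 = F.gelu(X @ A1) @ B1          # everything device 1 can compute locally
Y2 = F.gelu(X @ A2) @ B2          # everything device 2 can compute locally
combined = Y1 + Y2                # the g operator: the single all-reduce in the forward pass

print(torch.allclose(reference, combined, atol=1e-5))  # True
```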
And now comes the nice part: we take those post-GeLU representations and feed them, as X_1 and X_2, into the second linear layer, whose weight matrix is split row-wise. So when you put the two halves together you get the full MLP, with literally half of the weights on one device and half on the other. As simple as that -- a very simple but very beautiful idea, if you ask me. And that's exactly what the figure shows: we start with X, replicate it on both devices, and because the first GEMM is column-split each device can compute its X A_i locally and apply the GeLU; then each device multiplies by its B_i (remember, the second matrix is split row-wise), and then comes g, the synchronization operation, which sums the two partial outputs into the final representation. That's how the MLP layer is split. One more interesting bit: "This approach splits both GEMMs in the MLP block across GPUs and requires only a single all-reduce operation in the forward pass (the g operator) and a single all-reduce in the backward pass (the f operator)." We already saw the forward part: the only point where we have to synchronize -- do the all-reduce -- is at g. And going backwards, once the gradients reach the replicated input, each device only holds the gradient contribution flowing through its own shard, so those contributions have to be all-reduced (summed) as well -- that's the synchronization primitive in the backward pass, the f operator. Now for the attention part -- and I really hope you're starting to appreciate all the engineering and infrastructure that goes into these big language models; it's not like you just change a hyperparameter in your code from 5 transformer layers to 120 and expect everything to magically work on a single device. There is a lot of engineering going on, and here I'm trying to explain some of the methods. Attention is, in my opinion, a bit easier. You take the query matrix, the key matrix and the value matrix -- I won't explain transformers here; please check out my transformer video, which I'll link somewhere, if you're not familiar with the details -- and you split each of them column-wise, so that each column block computes the queries, keys and values for a single head. Since we have multi-head attention, let's imagine for the sake of argument that we only have two heads. And by the way, that brings up a quick note: even though we used two-way model parallelism here, you can imagine splitting the weight matrices into more parts -- four-way or eight-way model parallelism -- and you can convince yourself the result stays equivalent: if you divide A into eight row blocks and the input into eight column blocks, you get X_1 A_1 + X_2 A_2 + ... + X_8 A_8, then you synchronize and apply the GeLU. Nothing prevents us from splitting into more parts. Okay, back to the attention.
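As a side note, the f and g operators the paper keeps referring to are usually implemented as a pair of "conjugate" autograd functions. Here is a stripped-down sketch of that idea -- it assumes torch.distributed has already been initialized, ignores process groups, and skips the cloning a real implementation would do:

```python
import torch
import torch.distributed as dist

class _F(torch.autograd.Function):
    # f operator: identity in the forward pass, all-reduce of the gradient in the backward pass.
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        dist.all_reduce(grad)   # sum gradient contributions from all tensor-parallel ranks
        return grad

class _G(torch.autograd.Function):
    # g operator: all-reduce in the forward pass, identity in the backward pass.
    @staticmethod
    def forward(ctx, x):
        dist.all_reduce(x)      # sum the partial activations from all tensor-parallel ranks
        return x
    @staticmethod
    def backward(ctx, grad):
        return grad
```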
Let's imagine, for the sake of explanation, that we only have two heads. The device at the top of the diagram computes the first head's representation, and the device at the bottom computes the second head's. Then we feed those head-1 and head-2 representations into the already familiar breakdown of the output linear layer, where we do the row-wise splitting, and you can see that we again need the g operator, because we have to synchronize -- sum -- the partial outputs to get the final output representation. Let me demonstrate this with a small diagram. Assume batch size one and sequence length one, i.e. a single token; that's the easiest case, but it makes the point. So our input X is a single token with some hidden dimensionality H. The first device maps it, using its shard of the query, key and value matrices, into a query, a key and a value of half the dimensionality, H/2; then you do the usual attention computation and end up with a head output that is again of size H/2. The second device does the same thing with its own shard and produces its own H/2 representation (different values, same structure). Now we pass those through the output projection, and this is exactly the X_1/X_2 situation from before: our data is split into two halves. When you multiply the first half by B_1 you get a full-size output vector, but it's only a partial result; you still have to add the contribution computed from the other device's half, and only then do you have the complete representation. That addition is the g operator. Hopefully the idea is clear by now -- that was the main idea. Let me walk you through the text: "partitioning the GEMMs associated with key, query and value in a column-parallel fashion, such that the matrix multiply corresponding to each attention head is done locally on one GPU. This allows us to split per-attention-head parameters and workload across the GPUs, and doesn't require any immediate communication to complete the self-attention." That's very important: no immediate communication is needed. "This approach for both the MLP and self-attention layer fuses groups of two GEMMs (general matrix multiplies), eliminates a synchronization point in between, and results in better scaling."
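Here is the same head-splitting idea as a toy, single-machine check -- my own illustration with tiny sizes, not the Megatron code -- showing that computing each head from its own column block of Wq/Wk/Wv and its own row block of Wo, then summing, matches ordinary multi-head attention:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
H, n_heads = 8, 2                       # hidden size 8, two heads -> head dim 4
d = H // n_heads
x = torch.randn(1, 3, H)                # batch 1, sequence of 3 tokens
Wq, Wk, Wv, Wo = (torch.randn(H, H) for _ in range(4))

def attend(q, k, v):
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Single-device reference: ordinary multi-head attention.
q, k, v = x @ Wq, x @ Wk, x @ Wv
heads = [attend(q[..., i*d:(i+1)*d], k[..., i*d:(i+1)*d], v[..., i*d:(i+1)*d])
         for i in range(n_heads)]
reference = torch.cat(heads, dim=-1) @ Wo

# Tensor-parallel version: "device i" holds column block i of Wq/Wk/Wv and row
# block i of Wo, computes its head entirely locally; the partial outputs are
# then summed, which is the g operator (an all-reduce across devices).
partials = [attend(x @ Wq[:, i*d:(i+1)*d], x @ Wk[:, i*d:(i+1)*d], x @ Wv[:, i*d:(i+1)*d])
            @ Wo[i*d:(i+1)*d, :]
            for i in range(n_heads)]
parallel = sum(partials)

print(torch.allclose(reference, parallel, atol=1e-5))   # True
```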
What I mean by that is: in both of these diagrams we have a general matrix multiply, then the nonlinearity or the attention math, and then another general matrix multiply, and compilers and frameworks can fuse such groups of operations together -- I'm not super familiar with those low-level details, I've never had to do it myself, but that's my understanding -- and that results in better scaling. "This enables us to perform all GEMMs in a simple transformer layer using only two all-reduces in the forward path and two in the backward path." That's because a transformer layer consists of one attention block, which has a single all-reduce at its g operator, and the output of that is fed into the MLP block, which has another all-reduce at its own g -- so two per forward pass through the layer. Okay, that's pretty much the core of it. I did say they also split the output embeddings, the input embeddings, and so on; I'll rush through this because the logic is very similar. "We parallelize the input embedding matrix E (with H the hidden dimension and v the vocabulary size) along the vocabulary dimension, E = [E_1, E_2] (column-wise). Since each partition now only contains a portion of the embedding table, an all-reduce (the g operator) is required after the input embedding." So the E matrix, which is usually huge -- the vocab can easily be 50,000 entries -- is split into pieces, one piece per GPU; you do the lookup locally and then synchronize; same idea as before. For the output embedding it's a bit more intricate: done naively it is very expensive, because "the all-gather will communicate b x s x v elements (b is the batch size, s the sequence length and v the vocabulary size), which is huge due to the vocabulary size being large. To reduce the communication size we fuse the output of the parallel GEMM with the cross-entropy loss, which reduces the dimension to b x s. Communicating scalar losses instead of logits is a huge reduction in communication that improves the efficiency of our model-parallel approach." And finally, the remaining pieces of the transformer -- the residual connections, the layer norms, the dropout: "rather than having one GPU compute part of the dropout, layer normalization, or residual connections and broadcast the results to other GPUs, we choose to duplicate the computation across GPUs." I encourage you to read at your own pace why that is the better choice, but I think you've got the gist of this paper already. A couple more details before jumping to mixed-precision training, so you can appreciate why that method is needed: to train the models efficiently they use mixed-precision training with dynamic loss scaling (we'll learn what that is and why it's needed), plus a bunch of engineering details -- how the model is initialized, how things are scaled, gradient-norm clipping, and so on. Lastly, to better manage the memory footprint, they use activation checkpointing after every transformer layer.
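Just to get a feel for why the cross-entropy fusion matters so much, here is the communication-volume arithmetic with typical (made-up) sizes:

```python
# Rough communication-volume comparison for the output layer (illustrative numbers only).
b, s, v = 32, 1024, 50_000        # batch size, sequence length, vocabulary size (assumed)

naive_elements = b * s * v        # all-gather the full logits: ~1.6 billion elements
fused_elements = b * s            # fuse with cross-entropy: one scalar loss per token

print(f"naive: {naive_elements:,} elements")
print(f"fused: {fused_elements:,} elements ({naive_elements // fused_elements:,}x fewer)")
```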
Activation checkpointing is a super important idea, so let me briefly explain it, because it's used all the time -- you'll hear the term thrown around constantly. The idea is the following. We have a big transformer with a lot of layers. As you do a forward pass through the model, each layer stores its activations. Why? Because during the backward pass, in order to calculate the gradients, you need the activations of each of the corresponding layers. You can probably already see the problem: you're storing a bunch of data at every single layer, so as you progress through the network the memory consumption keeps rising. The idea of activation checkpointing is very simple and beautiful (it's covered in a separate paper which I don't have time to cover here, but if I get enough requests I might cover it together with some more advanced techniques that came after the papers in this video): don't store the activations of every single layer -- only checkpoint a subset of them. Imagine a block of four layers: instead of storing the activations of all the layers inside the block, you only store the activations at the end of the block, i.e. at the transformer-block boundaries. Now, during the backward pass, everything is fine while you still have stored activations, but at some point you hit a stretch where you don't have them -- and then you simply recompute them by re-running the forward pass from the nearest checkpoint, continue the backward pass, hit the next missing stretch, trigger recomputation again, and so on. In practice this is done in a cleverer, overlapping way -- the recomputation is triggered while the backward pass is still busy elsewhere, so you don't have to wait -- but that's the gist of how activation checkpointing works. Okay, now a couple more details and we're done with this paper. "Our infrastructure is optimized for multi-node deep learning applications, with 300 GB/s bandwidth between GPUs inside a server (via NVSwitch) and 100 GB/s of interconnect bandwidth between servers." That's important: 300 GB/s between the GPUs inside a node, and only 100 GB/s -- the smaller number, that's the whole point -- between servers. Why am I mentioning this? Because model parallelism has all of those all-reduce operations, the synchronization points, and you want those synchronization points to happen inside a server, where the bandwidth is super high.
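In PyTorch this recompute-instead-of-store pattern is exposed through torch.utils.checkpoint. A minimal sketch with a toy stack of MLP blocks standing in for transformer layers (my own example, not the Megatron code):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A toy "transformer" stand-in: a stack of 8 MLP blocks.
model = nn.Sequential(*[
    nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
    for _ in range(8)
])

x = torch.randn(16, 512, requires_grad=True)

# Keep activations only at 4 checkpoint boundaries; everything in between
# is recomputed on the fly during the backward pass.
out = checkpoint_sequential(model, 4, x)
out.sum().backward()
```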
That's why, when each node has eight GPUs, you don't want to go beyond eight-way model parallelism: with 16-way, two servers would have to communicate at every synchronization point over the slower interconnect, and that hurts badly. Hopefully that makes sense. Now let me show you some of the results they achieve. Scaling from one to two to four to eight GPUs, the efficiency drops, but not dramatically: it's 77% for eight-way model parallelism, and combined with the data-parallel approach it's still 74% at 512 GPUs, which is huge. Let me quickly explain how model parallelism is combined with data parallelism -- it's a very nice visualization. Because each server has eight densely connected GPUs, you slice the model eight ways within the server, so each GPU holds one eighth of the model weights. Then, remember data parallelism: you replicate that whole arrangement. Specifically they use 64-way data parallelism, so the eight-GPU model-parallel group is replicated 64 times, and 64 x 8 = 512 -- that's the setup: 64 servers, each with eight GPUs, high bandwidth inside a node and lower bandwidth between nodes (intra-node vs. inter-node). Then you apply the logic I just explained: take a huge batch, split it into 64 chunks, send each chunk to one of the servers, run the Megatron-LM tensor-parallel logic with all its synchronization points inside each server, compute the gradients, and then do an all-reduce across the data-parallel replicas to update the whole cluster. That's it -- fairly simple, at least conceptually. One last thing I want to mention from this paper: it came out before GPT-3 was published -- probably the most famous language model, and ML model in general -- and the trend was already clear; everybody knew that scaling leads to better and better performance, where by performance I mean lower perplexity. You can see that the 8.3B model has lower validation perplexity than the 2.5B model, and they say so multiple times: "as the model size increases, the validation perplexity decreases", and "recently, researchers from Microsoft, in collaboration with NVIDIA, trained a 17 billion parameter GPT-2 called Turing-NLG using Megatron and showed that the accuracy further improves as they scale the model, highlighting the value of larger models." It was obvious that somebody was going to push this further; it just happened that OpenAI did it first -- a historical note worth mentioning, I guess.
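Here is a tiny bit of bookkeeping for that 8-way model-parallel x 64-way data-parallel layout; the group sizes are the paper's, the global batch size is a number I made up purely for illustration:

```python
# 8-way model parallelism inside a server, 64-way data parallelism across servers.
model_parallel_size = 8          # GPUs per server; each holds 1/8 of every layer's weights
data_parallel_size  = 64         # number of model replicas (one per server)
total_gpus = model_parallel_size * data_parallel_size          # 512

global_batch = 512               # assumed global batch size, illustration only
per_replica_batch = global_batch // data_parallel_size         # each server gets 8 examples

# GPU g sits at model-parallel rank (g % 8) inside data-parallel replica (g // 8).
for g in (0, 7, 8, 511):
    print(f"GPU {g:3d}: replica {g // model_parallel_size:2d}, shard {g % model_parallel_size}")
```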
Another contribution of this paper is that they show you have to reconfigure where you put the layer norms in order to train much bigger BERT-style models; otherwise you get instabilities -- you can see how the loss explodes if you just use the default BERT architecture. That's not central here, but it's a reminder that everything matters: even the model architecture is sensitive, and if you rearrange something, all of a sudden you can't train the bigger models, and so on. Okay, now on to mixed-precision training. I'll walk you through it quickly and then we'll dig into the ZeRO paper. One important note first: the three approaches I showed you -- pipeline parallelism, data parallelism, and model (tensor) parallelism -- can all be combined; they're complementary, and I think the combination was branded "3D parallelism". When you combine those with mixed precision plus the ZeRO optimizer, which you'll see in a couple of minutes, you can reach truly astounding scales. Now let's dig into the paper. A couple of contributions: "Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step. This copy is rounded to half precision for the forward and backward propagation." That's the first of the three components I want to explain. "Secondly, we propose loss scaling" -- we'll see what that concept means -- "to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs (FP32 rather than FP16), which are converted to half precision before storing to memory." The background is fairly simple: historically, neural networks were trained in the FP32 format, meaning a single number takes 32 bits. People have shown that you do not need 32 bits -- lower precision, like 16 bits, works for neural-network training -- so by halving the memory a single number takes you save a lot of memory and, as this paper shows, still keep the same accuracy as the FP32 baseline. That's very cool. And it's not just memory; it's also faster on modern hardware: "half-precision math throughput in recent GPUs is 2x to 8x higher than single precision. In addition to speed improvements, reduced-precision formats also reduce the amount of memory required for training" -- that part is obvious. Specifically, they train various networks using the IEEE half-precision (FP16) format. Let me quickly show you the formats (I'll link them below): in FP16 the exponent has 5 bits and the fraction (mantissa) has 10 bits, plus a sign bit; float32 obviously has a bigger exponent and a bigger fraction. And while I'm at it, let me quickly mention bfloat16, the "brain float", originally invented at the Google Brain AI lab.
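You can poke at these formats directly from PyTorch; this little snippet just prints the ranges we're about to discuss:

```python
import torch

# Compare the numeric ranges of the three formats discussed here.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    print(f"{str(dtype):15s} max={info.max:.3e}  smallest normal={info.tiny:.3e}")

# float16:  max ~6.6e4,  smallest normal ~6.1e-5 (subnormals reach down to ~6e-8 = 2^-24)
# bfloat16: max ~3.4e38, smallest normal ~1.2e-38 -> float32's range with less precision
```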
You can see the difference: bfloat16 has a much bigger exponent. The main insight is that for neural-network training you mostly care about range, not precision: the significand (mantissa) determines how finely spaced your numbers are, whereas the exponent determines how big or small they can get. With bfloat16 you can represent much bigger and much smaller numbers without overflowing to infinities or hitting NaNs. TPUs natively support bfloat16, and people have observed -- for example the problems Boris Dayma ran into training DALL-E mini on GPUs -- that TPUs tend to be more robust to these loss divergences precisely because of the bfloat16 format. That's my understanding at least; I might be wrong, but I wanted to quickly mention all of these formats. Now back to the paper. First, the FP32 master weights. They note that "the need for FP32 master weights is not universal", but for most of the models they trained it was necessary. Here's the idea: you store the weights in FP32, but the forward and backward passes run entirely in half precision. So you convert the FP32 weights to FP16 (the float-to-half operation), then do the forward pass: the activations are also FP16 -- you can always ensure that, since you control the data; if the input is an image you can feed it in as an FP16 image -- so FP16 activations meet FP16 weights, you get FP16 activations out, and in the backward pass it's the same: from the FP16 activation gradients you compute FP16 weight gradients. Because of that you get both a smaller memory footprint and higher throughput, since, as we saw, modern GPUs run FP16 math faster than FP32. That's the bottom line, and that's what the schematic shows. Now, one explanation of why the FP32 master weights are needed: "one explanation is that updates (weight gradients multiplied by the learning rate) become too small to be represented in FP16 -- any value whose magnitude is smaller than 2^-24 becomes zero in FP16." By the way, if you're confused why it's 2^-24 and not 2^-14: I won't dig into it here, but Google "subnormal numbers" and open the float16 and float32 wiki pages (I'll link them below). "We can see in figure 2b that approximately 5% of weight gradient values have exponents smaller than -24." Figure 2b is a histogram of weight-gradient magnitudes when the model is trained in FP32, and a chunk of the gradients falls below that magic red line at 2^-24, where the representable range of FP16 ends. All of those get rounded to zero, you lose information during training, and because of that the final accuracy of the model may drop. So that's one reason you need FP32.
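Two one-liners make the problem tangible -- a toy illustration of underflow and of why the master copy is kept in FP32:

```python
import torch

# 1) Gradient values below 2^-24 are flushed to zero in FP16.
print(torch.tensor(2.0 ** -26).half())        # tensor(0., dtype=torch.float16)

# 2) Even a representable update can vanish if the weight itself lives in FP16,
#    because the FP16 spacing around 1.0 is ~1e-3.
w16 = torch.tensor(1.0, dtype=torch.float16)
w16 -= 1e-5
w32 = torch.tensor(1.0, dtype=torch.float32)   # FP32 master weight
w32 -= 1e-5

print(bool(w16 == 1.0))   # True  -> the FP16 weight never moved
print(bool(w32 == 1.0))   # False -> the FP32 master copy accumulated the update
```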
There are a couple of other explanations for this, a bit more involved, so I'll skip them for now. One thing worth addressing: now that you have to store this additional FP32 copy, doesn't that blow up the memory? It turns out it doesn't, because the activations are what dominates: "even though maintaining an additional copy of weights increases the memory requirements for the weights by 50% compared with single-precision training, the impact on overall memory usage is much smaller. For training, memory consumption is dominated by activations, due to larger batch sizes and activations of each layer being saved for reuse in the backpropagation pass. Since activations are also stored in half-precision format, the overall memory consumption for training deep neural networks is roughly halved." They're obviously not taking activation checkpointing into account here -- they assume you store all of the activations, in which case the statement holds. Now for loss scaling, the second important concept you need to make mixed-precision training work. "Note that much of the FP16 representable range was left unused, while many values were below the minimum representable range and became zeros. Scaling up the gradients will shift them to occupy more of the representable range and preserve values that are otherwise lost to zeros. This particular network diverges when gradients are not scaled, but scaling them by a factor of 8 (increasing the exponents by 3) is sufficient to match the accuracy achieved with FP32 training. This suggests that activation gradient values below 2^-27 in magnitude were irrelevant to the training of this model, but values in the [2^-27, 2^-24) range were important to preserve." Let me break that down. The figure is a histogram of activation-gradient values, and you can see it sits largely outside of the range FP16 supports: FP16's range effectively ends at 2^-24 (the subnormal, or denormalized, range spans 2^-24 to 2^-14), and a large fraction of these values lies below that. If you multiply the gradients by 8, you shift all of them to the right -- or, equivalently, you move the red line to the left -- and now those values become representable in FP16, they're no longer rounded to zero, and the information isn't lost. It turned out, they say, that this particular range (where there's a lot of data) was important to preserve, so scaling by 8 -- it doesn't even matter which model this was, the concept is what matters -- saved the training. "One efficient way to shift the gradient values into the FP16-representable range is to scale the loss value computed in the forward pass, prior to starting backpropagation. By the chain rule, backpropagation ensures that all gradient values are scaled by the same amount." So that's the easy way to scale up all the gradients: you take the final loss, scale it up by, say, 8, and because of how the chain rule works every gradient computed during backprop gets multiplied by 8 as well, shifting the gradients into the representable range. That's the main intuition I want you to take away from this paper.
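Here is a minimal sketch of static loss scaling in a training step -- the model, optimizer and loss function are placeholders, and in practice you'd reach for the dynamic version that PyTorch wraps in torch.cuda.amp.GradScaler rather than rolling your own:

```python
import torch

loss_scale = 8.0   # static loss scale, like the factor-of-8 example in the paper

def training_step(model, optimizer, loss_fn, x, y):
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), y)          # forward pass runs in FP16
    (loss * loss_scale).backward()           # chain rule: every gradient is scaled by 8

    for p in model.parameters():             # unscale before the FP32 weight update
        if p.grad is not None:
            p.grad.div_(loss_scale)
    optimizer.step()
```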
And here is the third technique they use to maintain model accuracy: "we found that some networks require that the FP16 vector dot-product accumulates the partial products into an FP32 value, which is converted to FP16 before writing to memory. Without this accumulation into FP32, some FP16 models did not match the accuracy of the baseline models." Also, "large reductions (for example, sums across the elements of a vector) should be carried out in FP32. Such reductions mostly come up in batch-normalization layers when accumulating statistics, and in softmax layers. Both of these layer types in our implementation still read and write FP16 tensors from memory, performing the arithmetic in FP32. This did not slow down the training process, since these layers are memory-bandwidth limited and not sensitive to arithmetic speed." Again, just an additional detail that needs to be taken care of. By and large, neural-network arithmetic falls into three categories: vector dot products, reductions, and point-wise operations such as activation functions -- and as they just explained, for the dot products and the reductions you sometimes have to accumulate in FP32 to preserve accuracy. That's it, that's the idea behind mixed-precision training. Now the results: they essentially always match the baselines, and are often even a tiny bit better -- check the numbers at your own pace and you'll see they're frequently slightly higher, so training in FP16 maybe even acts as a mild regularizer for some models. As mentioned, the loss-scaling technique was not required for successful mixed-precision training of some networks, but for others it was crucial: here you can see that mixed-precision training with a loss scale of 1 diverges, while with a loss scale of 128 it does not. And across the different classes of models -- classification, language models, machine translation -- they always match the baseline accuracy, which is awesome. And now the final paper -- last but definitely not least, because it's such an exciting one. It introduces the DeepSpeed library, which you might have heard of: if you look at the popular LLMs, most of them use DeepSpeed -- EleutherAI uses it, BLOOM used it; whether the Meta folks used it for OPT-175B I'm not sure. It's a paper from Microsoft called "ZeRO: Memory Optimizations Toward Training Trillion Parameter Models". I love this paper, so let's go through it step by step. ZeRO stands for Zero Redundancy Optimizer -- we'll see why "zero redundancy": they remove essentially all of the redundancy in the model states and the residual states (we'll see what those are). "ZeRO eliminates memory redundancies in data- and model-parallel training... ZeRO has the potential to scale beyond 1 trillion parameters using today's hardware... ZeRO can train large models of up to 13 billion parameters, for example larger than Megatron GPT 8.3B and T5 11B (Google's model), without requiring model parallelism." That's a big breakthrough.
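A toy illustration of why the FP32 accumulation matters (this is just the classic "FP16 accumulator stalls" demo, not a benchmark from the paper):

```python
import torch

# A running FP16 accumulator stops growing once the addend falls below the
# spacing between representable values (the spacing above 2048 is 2 in FP16).
acc16 = torch.zeros((), dtype=torch.float16)
acc32 = torch.zeros((), dtype=torch.float32)
for _ in range(4096):
    acc16 += 1.0
    acc32 += 1.0

print(acc16.item())   # 2048.0 -- stuck
print(acc32.item())   # 4096.0 -- accumulate in FP32, cast to FP16 only when storing
```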
Because now, with this set of optimizations, you as a practitioner can train models as big as 13B if you have just a single -- albeit very good -- GPU. That's amazing. And obviously, when you add the actual model-parallel optimizations on top, you can scale to much bigger sizes: they show that you can indeed scale up to 1 trillion parameters, with a small caveat -- it becomes very, very slow; for some of the configurations you'd literally need a year to train the model, so that's not really practical. But they did train a model of a hundred-something billion parameters, which is cool. Okay, let's dig in. "MP (model parallelism) splits the model vertically, partitioning the computation and parameters in each layer across multiple devices, requiring significant communication between each layer. As a result, they work well within a single node, where the inter-GPU communication bandwidth is high, but the efficiency degrades quickly beyond a single node: we tested a 40B parameter model using Megatron-LM across two DGX-2 nodes and observe about 5 teraflops per V100 GPU, less than 5% of hardware peak." So throughput drops dramatically as soon as you do model parallelism across multiple nodes -- exactly the point I made earlier. Then they say: "for large models, the majority of the memory is occupied by model states" -- this is an important piece of terminology they reuse throughout the paper -- "which include the optimizer states (such as momentum and variances in Adam), gradients, and parameters. The remaining memory is consumed by activations, temporary buffers, and unusable fragmented memory, which we refer to collectively as residual states." So, two new pieces of terminology -- model states and residual states -- and they address each of them separately. "ZeRO-DP removes the memory redundancies across data-parallel processes by partitioning the model states instead of replicating them." That is literally the main idea of this paper: partition instead of replicate. Conceptually simple, but a lot of work goes into making it efficient. I'll show the diagram in a moment. They have three cumulative optimizations -- optimizer-state partitioning, then gradient partitioning, then parameter partitioning -- and they show that the memory reduction is linear with the DP degree, i.e. the number of devices across which you replicate the model when doing data parallelism; for example, splitting across 64 GPUs yields a 64x memory reduction, with a modest 50% increase in communication volume. "With all three stages enabled, ZeRO can train a trillion-parameter model on just 1024 NVIDIA GPUs", which is crazy. Before the main diagram, one more piece of text: for the activations stored during the forward pass (needed to perform the backward pass), they note that checkpointing helps but is not sufficient for large models; thus ZeRO-R optimizes
activation memory by identifying and removing the activation replication in existing MP approaches through activation partitioning -- partitioning again -- and it can also offload activations to the CPU when appropriate. "ZeRO-DP and ZeRO-R combined together form a powerful system of memory optimizations for DL training that we collectively refer to as ZeRO." More terminology that will be useful if you ever start playing with DeepSpeed. Okay, the main diagram. The baseline row is classical data parallelism: there are three kinds of model states -- parameters, gradients and optimizer states -- and you can see the optimizer states take up the biggest chunk of memory. With plain data parallelism all of these are replicated, copy-pasted onto every device, which is super inefficient. The memory consumed per device is (2 + 2 + K) * psi, and for their running example that comes out to 120 GB per device, which is huge. Here psi is the number of parameters (7.5 billion in this example), the first 2 is there because with mixed-precision training you use two bytes per weight, the second 2 because you use two bytes per gradient, and K = 12 is the number of bytes of Adam optimizer state per parameter (we'll see that calculation a bit later). So (2 + 2 + 12) x 7.5B = 16 x 7.5B = 120 GB -- that's where the number comes from. Now, the first optimization: instead of keeping a full copy of the optimizer states on every device, you partition them -- one shard on the first GPU, the second shard on the second, the n-th shard on the n-th -- so that, taken together, the devices still hold exactly the same information as before. Then you can do the same for the gradients and for the parameters, and you can watch the memory shrink: with only the optimizer states partitioned you get 2*psi + 2*psi + K*psi / N_d, where N_d is the number of devices in the data-parallel setup (64 in their case), because the parameters and gradients are not yet partitioned but the optimizer states are. The memory drops dramatically: you start at 120 GB and, with all three stages, end up at 1.9 GB per device, which is crazy -- an amazing optimization. Let's continue: "the complete set of optimizations in ZeRO could allow us to run models with a trillion parameters on a high-end hardware cluster today, for example with 1000 V100 GPUs; however, the hardware compute capacity is still too limited and training time can be impractically long (over a year)." So always take it with a grain of salt when somebody says "one trillion": you have to read the fine print and understand the details of the method.
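These per-device numbers are easy to reproduce; here's the arithmetic from the figure (the stage formulas are the paper's, I'm just plugging in psi = 7.5B and N_d = 64):

```python
# Per-device model-state memory for the ZeRO figure's example (psi = 7.5B, Nd = 64).
psi, Nd, K = 7.5e9, 64, 12        # parameters, DP degree, Adam bytes per parameter
GB = 1e9

baseline = (2 + 2 + K) * psi                    # everything replicated on every GPU
stage1   = (2 + 2) * psi + K * psi / Nd         # partition optimizer states
stage2   = 2 * psi + (2 + K) * psi / Nd         # + partition gradients
stage3   = (2 + 2 + K) * psi / Nd               # + partition parameters

for name, b in [("baseline", baseline), ("P_os", stage1), ("P_os+g", stage2), ("P_os+g+p", stage3)]:
    print(f"{name:9s} {b / GB:6.1f} GB per device")
# -> 120.0, 31.4, 16.6, 1.9
```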
ZeRO powered the largest language model at the time, with 17B parameters and record-breaking accuracy: Turing-NLG. Worth mentioning: this was literally about a month before GPT-3 was published, which was 10x bigger than this model. "We share ZeRO as a part of our open-source DL training optimization library called DeepSpeed." I've mentioned this library multiple times; check it out, especially if you want to play with large models. By the way, I'm going through these papers precisely because over the next videos I'll be covering some of the largest ML models out there -- do let me know in the comments if you have suggestions for what to cover, and congrats if you've stuck through the whole video up to this point. Okay, a quick memory analysis to understand why K = 12 and so on. "A 1.5 billion parameter GPT-2 model requires 3 GB of memory for its weights (in 16-bit precision), yet it cannot be trained on a single GPU with 32 GB of memory using TensorFlow or PyTorch" -- at least not without any of these optimizations; I assume PyTorch and TensorFlow have since added a bunch of them. One may wonder where all the memory goes, so let's take Adam as a concrete example. "The mixed-precision training of a model with psi parameters using Adam requires enough memory to hold an FP16 copy of the parameters and the gradients, with memory requirements of 2*psi and 2*psi bytes respectively" -- we've seen that in the table already. "In addition, it needs to hold the optimizer states: an FP32 copy of the parameters" -- we know why: because of mixed-precision training we always need the FP32 master copy, which bloats the memory but is necessary -- "plus the momentum and the variance" -- those are Adam idiosyncrasies -- "with memory requirements of 4*psi, 4*psi and 4*psi bytes", four bytes each because these are FP32 values. Sum those three and you get 12 -- that's why K = 12 -- and "in total this results in 16*psi bytes of memory requirement." That's why you cannot train the 1.5B GPT-2 on a single GPU without some of these optimizations. Next they break down the residual memory consumption into three components: activations, temporary buffers, and memory fragmentation. For the activations, with GPT-2 at a sequence length of 1000 and a batch size of 32, you need around 60 GB of memory unless you do activation checkpointing: "activation checkpointing (or activation recomputation) is a common approach to reduce the activation memory by approximately the square root of the total activations, at the expense of 33% recomputation overhead. This would reduce the activation memory consumption of this model to about 8 GB." A huge saving. They also note that "when the size of the model is large, these temporary buffer sizes are non-trivial" -- you need such buffers when doing certain reductions and so on, but that's too much detail for this video, so I'll skip it. Okay, let's get some hints about why ZeRO-DP works.
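Spelling out that K = 12 breakdown for the 1.5B GPT-2 example:

```python
# Where the model-state memory goes for mixed-precision Adam on GPT-2 1.5B.
psi = 1.5e9
GB = 1e9

fp16_params   = 2 * psi      # 3 GB - half-precision weights used in fwd/bwd
fp16_grads    = 2 * psi      # 3 GB - half-precision gradients
fp32_params   = 4 * psi      # 6 GB - FP32 master copy of the weights
fp32_momentum = 4 * psi      # 6 GB - Adam first moment
fp32_variance = 4 * psi      # 6 GB - Adam second moment

total = fp16_params + fp16_grads + fp32_params + fp32_momentum + fp32_variance
print(total / GB, "GB of model states alone")   # 24 GB = 16 * psi bytes, before any activations
```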
"ZeRO-DP partitions the model states instead of replicating them and uses a dynamic communication schedule that exploits the intrinsically temporal nature of the model states, while minimizing the communication volume." Let that sink in: the intrinsically temporal nature of the model states. Let me go back to the diagram, because that's the easiest way to explain it. Take the parameters as an example: during the forward pass you're always at some particular layer of the model, and the only parameters you need at that moment are the ones for the immediately next computation -- you can let go of everything that came before, and you don't yet need anything that comes later. ZeRO exploits exactly that temporal structure: it communicates the parameters, gradients and optimizer states just in time, in a smart way, so that each device has the illusion that everything is already present locally, and it hides the communication overhead behind the latencies that are intrinsic to model training anyway. I won't go into more detail than that. A couple more things and we're done. "The arithmetic intensity -- the ratio of the amount of computation per iteration to the amount of activation checkpoints per iteration -- is very large and increases linearly with the hidden dimension, making it possible to hide the data-movement cost for the activation checkpoints even when the bandwidth is low. For very large models, ZeRO can even choose to move the activation partitions to CPU memory", at which point the activation memory cost on the GPU is essentially zero -- and if you manage to mask that movement of data to the CPU, you get it basically for free. The paper is unfortunately very textual, but hopefully you got some insight from the diagrams and from what I've explained so far. Let me read a couple more passages I highlighted: "For a DP degree of N_d" -- DP again standing for data parallelism -- "we group the optimizer states into N_d equal partitions, such that the i-th data-parallel process only updates the optimizer states corresponding to the i-th partition. Thus, each data-parallel process only needs to store and update 1/N_d of the total optimizer states, and then only update 1/N_d of the parameters. We perform an all-gather across the data-parallel processes at the end of each training step to get the fully updated parameters across all data-parallel processes." It's hard to convey all of this in words, so let me point you to one resource: there is a nice Wikipedia page on collective operations -- reduce, all-reduce, all-gather, scatter, all-to-all -- all of which are implemented in libraries such as NCCL (the NVIDIA Collective Communications Library). The diagrams there will help you follow what's going on; someone should really build a nicer visualization of how ZeRO uses these primitives, but hopefully this gives you some intuition. That's what we're aiming for here: the basic terminology and a basic understanding of these optimization methods, so that you can always dig into the documentation of whichever library you use if you want to learn more. Okay, I'm going to skip the next parts: an interesting piece of theory, but not that practical.
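To make that quoted paragraph a bit more concrete, here is a stripped-down sketch of the stage-1 update pattern -- one rank owns one shard of the optimizer state, updates only its own slice of the parameters, and an all-gather reassembles the full parameter vector. It assumes torch.distributed has already been initialized and that the parameter count divides evenly; it is an illustration, not the DeepSpeed implementation:

```python
import torch
import torch.distributed as dist

def sharded_step(flat_params, flat_grads, shard_optimizer_step):
    world = dist.get_world_size()
    rank = dist.get_rank()
    shard = flat_params.numel() // world            # assume it divides evenly

    lo, hi = rank * shard, (rank + 1) * shard
    my_params = flat_params[lo:hi]
    shard_optimizer_step(my_params, flat_grads[lo:hi])   # Adam state exists only for this shard

    # All-gather at the end of the step so every rank sees the fully updated parameters.
    gathered = [torch.empty_like(my_params) for _ in range(world)]
    dist.all_gather(gathered, my_params)
    flat_params.copy_(torch.cat(gathered))
```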
They show -- we already saw these formulas in the table -- that we ultimately end up with 16*psi / N_d bytes of memory for the model states, which means that as N_d goes to infinity this goes to zero. "This has a profound implication: ZeRO powers DP to fit models of arbitrary size, as long as there are a sufficient number of devices to share the model states" -- with the caveat that after a certain point you hit a saturation point and training becomes very, very slow. This is the last part, so let me walk you through it; it might be insightful. "More specifically, once the forward propagation for a layer of the model is computed, the input activations are partitioned across all the model-parallel processes, until they are needed again during backpropagation." What they're saying is this: imagine we have three devices, and you've just computed some activations at a certain point in the network, and then you proceed with the forward pass. Usually the activation checkpoint would be stored whole on the device that computed it; instead, why not partition it -- one chunk on the first device, a second chunk on the second, a third on the third -- and then, once those activations are needed again, communicate them back to the device that needs them so it can compute the gradients. That's the basic idea; how it actually happens involves a lot of computer science under the hood -- algorithms, data structures, hardware -- but we have to stop at some level of abstraction. "This works in conjunction with activation checkpointing, storing partitioned activation checkpoints only, instead of replicated copies." Furthermore, "in the case of very large models and very limited device memory, these partitioned activation checkpoints can also be offloaded to the CPU, reducing the activation memory overhead to nearly zero, at an additional communication cost" -- something I hadn't mentioned before. "If we checkpoint a single activation for each transformer layer, it will require about 33 GB of memory per GPU just to store the activation checkpoints; but with this Pa optimization of ZeRO-R it can be reduced to about 2 GB per GPU" -- because with a model-parallel degree of 16 you just divide by 16 and get roughly 2 GB -- "and furthermore, this 2 GB can be offloaded to the CPU, reducing the memory footprint for activations to nearly zero." Okay guys, that's pretty much it. The paper is huge and there are a lot of heuristics in the background; for instance: "given the model and hardware characteristics, we leverage the above analysis to decide if and when to apply Pa and Pa+CPU" -- the activation-checkpoint partitioning and the CPU-offloading optimizations -- and that analysis is something I encourage you to read: depending on various latencies, they decide whether each optimization is worth it. So there are a lot of details in there.
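Two quick calculations to tie the numbers in this section together (my own arithmetic, using the figures quoted above):

```python
# Model-state memory per device under full ZeRO-DP (stage 3): 16 * psi / Nd bytes.
psi = 7.5e9
for Nd in (1, 64, 1024):
    print(f"Nd={Nd:5d}: {16 * psi / Nd / 1e9:8.2f} GB per device")
# -> 120 GB, 1.9 GB, 0.12 GB: memory shrinks linearly with the DP degree.

# Partitioned activation checkpoints (Pa): 33 GB of per-GPU checkpoints, split
# across a model-parallel group of 16 GPUs, leaves ~2 GB per GPU, which can
# additionally be offloaded to CPU memory.
print(33 / 16, "GB per GPU")   # ~2.06
```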
But there's nothing conceptually fundamental left that would lead you to a eureka moment -- an epiphany, I guess. Okay guys, hopefully you liked this video; let me end with this nice chart. If you did, please share it with your friends, with anyone you think would benefit from understanding these scaling techniques. We covered a lot of ground here: the data parallelism approach, the tensor parallelism approach, the pipeline parallelism approach, mixed-precision training, and finally the ZeRO optimizer, where we saw how partitioning the various model states and residual states lets us train much bigger models, even on a single GPU. So do let me know whether you found this video interesting, leave any feedback down below, subscribe to the channel, and until next time -- bye bye.
[{"start": 0.0, "end": 4.92, "text": " What's cracking guys? The idea of this video is going to be extremely ambitious."}, {"start": 4.92, "end": 11.36, "text": " I'm going to try and give you the most seminal and important ideas for how to"}, {"start": 11.36, "end": 15.52, "text": " scale your models up to scales of hundreds of billions and even trillions"}, {"start": 15.52, "end": 20.48, "text": " of parameters. So I'm gonna cover multiple papers as you can see here so"}, {"start": 20.48, "end": 26.64, "text": " we have Megatron LM from Nvidia here introducing the tensor or model"}, {"start": 26.64, "end": 30.48, "text": " parallelism. I'm gonna walk you through quickly through pipeline parallelism idea"}, {"start": 30.48, "end": 34.24, "text": " from the GPy paper. I'm gonna walk you through the data parallelism idea which"}, {"start": 34.24, "end": 38.52, "text": " is probably something most of you are already familiar with but just to round"}, {"start": 38.52, "end": 42.78, "text": " them up I'm gonna cover that one as well. I'm gonna quickly go through the mixed"}, {"start": 42.78, "end": 47.24, "text": " precision training paper as well and finally I'm gonna go through the zero"}, {"start": 47.24, "end": 52.56, "text": " optimizer so the zero redundancy optimizer from Microsoft that enabled"}, {"start": 52.56, "end": 58.68, "text": " training models up to 1 trillion parameters big. So why should we why"}, {"start": 58.68, "end": 62.68000000000001, "text": " should you care about this video? Well the thing is the following so if you"}, {"start": 62.68000000000001, "end": 67.92, "text": " ever want to train a model that's bigger than let's say well if you're dealing"}, {"start": 67.92, "end": 73.68, "text": " with say GPT style of a model transformer if you want to go like above"}, {"start": 73.68, "end": 79.52000000000001, "text": " 1.5 billion parameters you literally need more than 32 gigabytes of VRAM which"}, {"start": 79.52, "end": 86.6, "text": " most of you do not have. So up until basically 1.5 billion a single GPU with"}, {"start": 86.6, "end": 91.67999999999999, "text": " 32 gigabytes of VRAM can handle that training if you don't have any of these"}, {"start": 91.67999999999999, "end": 96.24, "text": " optimizations. 
Other than that it's impossible to train bigger models so"}, {"start": 96.24, "end": 100.84, "text": " combining all of these ideas you'll be able to literally train super big models"}, {"start": 100.84, "end": 106.0, "text": " some of them you must have seen recently published models such as the bloom from"}, {"start": 106.0, "end": 115.0, "text": " the big science organization and also opt 175 B model from from meta so all"}, {"start": 115.0, "end": 119.52, "text": " of those models are like hundreds and hundreds of billions of parameters other"}, {"start": 119.52, "end": 125.4, "text": " examples are like chinchilla from deep mind go for poem so yeah guys before we"}, {"start": 125.4, "end": 128.72, "text": " go there I want to make a quick shout out to assembly AI and thank them for"}, {"start": 128.72, "end": 132.6, "text": " sponsoring this video basically they offer a super powerful speech"}, {"start": 132.6, "end": 137.92, "text": " understanding and transcription API you can see how easy it is to get started in"}, {"start": 137.92, "end": 141.16, "text": " Python so literally a couple of lines of code and voila you get your"}, {"start": 141.16, "end": 145.16, "text": " transcription back as well as in this case sentiment analysis they offer"}, {"start": 145.16, "end": 150.68, "text": " various other understanding features such as entity detection etc etc you can"}, {"start": 150.68, "end": 153.79999999999998, "text": " also check out the no code solution although I guess this is not that"}, {"start": 153.79999999999998, "end": 157.0, "text": " interesting for most of you but if you just want to quickly understand the"}, {"start": 157.0, "end": 161.28, "text": " features they support you can check out this page as well if you are struggling"}, {"start": 161.28, "end": 165.2, "text": " to get started which I doubt because it's super simple there is even a"}, {"start": 165.2, "end": 170.12, "text": " tutorial very detailed tutorial on how to get started and if you like ideas for"}, {"start": 170.12, "end": 174.04, "text": " projects they have a YouTube channel with cool videos such as this one which"}, {"start": 174.04, "end": 178.44, "text": " is like literally 30 minutes long explaining how to summarize various"}, {"start": 178.44, "end": 184.88, "text": " podcasts with with their API so that's very cool finally they have a very cool"}, {"start": 184.88, "end": 189.76, "text": " set of blocks in general and tutorials on how to get started with various other"}, {"start": 189.76, "end": 193.88, "text": " models such as the Lee so yeah I think that's that's fairly cool so give them a"}, {"start": 193.88, "end": 197.64, "text": " go support the channel I'm linking the free API token down in the video"}, {"start": 197.64, "end": 200.72, "text": " description so you can use it and get started in a couple of seconds"}, {"start": 200.72, "end": 204.84, "text": " literally you get for the toy projects you have no problems with the free"}, {"start": 204.84, "end": 208.04, "text": " account you get three hundred hours of transcription if you want to upgrade you"}, {"start": 208.04, "end": 212.12, "text": " can always do that and having said that let's go back to the video so first"}, {"start": 212.12, "end": 215.72, "text": " let's start actually with data parallelism because that's probably the"}, {"start": 215.72, "end": 220.32, "text": " simplest idea to explain the idea is the following so you have a neural network"}, {"start": 220.32, "end": 225.2, "text": " and what you do is you 
literally take the weights of your neural network and"}, {"start": 225.2, "end": 232.12, "text": " you just replicate it so you copy paste those weights across multiple devices so"}, {"start": 232.12, "end": 237.72, "text": " you have like let's say and GPUs and you literally copy paste that same network"}, {"start": 237.72, "end": 241.76, "text": " across all of the devices and then what you do is you take a batch of data so"}, {"start": 241.76, "end": 245.92, "text": " you basically take a batch of data and so it's gonna look something like this"}, {"start": 245.92, "end": 252.0, "text": " so literally you have a batch of data here and what do you do is you split it"}, {"start": 252.0, "end": 256.08, "text": " into multiple parts so let's assume here we just have three devices so that means"}, {"start": 256.08, "end": 259.96, "text": " you're gonna split the batch so this is the batch dimension by the way this here"}, {"start": 259.96, "end": 264.56, "text": " I'm gonna simplify it like just gonna represent the input tensor as a 2d"}, {"start": 264.56, "end": 270.08, "text": " object even though it's usually gonna be depending on your data 3d 40 whatnot so"}, {"start": 270.08, "end": 275.08, "text": " yeah so you grab the first chunk the first part and you feed that into the"}, {"start": 275.08, "end": 280.96, "text": " first device here then you grab the second chunk here you grab the second"}, {"start": 280.96, "end": 285.36, "text": " chunk and you pass that second chunk of data into the second GPU GPU 2 and"}, {"start": 285.36, "end": 292.24, "text": " finally you grab the final piece here and you pass that one here then you do a"}, {"start": 292.24, "end": 295.56, "text": " forward pass through your networks you're gonna do a forward pass here for"}, {"start": 295.56, "end": 301.56, "text": " a pass here for a pass here and then you calculate the gradients so now now what"}, {"start": 301.56, "end": 305.48, "text": " happens is that all of these devices have different gradients as a consequence"}, {"start": 305.48, "end": 310.32, "text": " of you feeding different parts of your input batch and now we just need to"}, {"start": 310.32, "end": 316.2, "text": " basically reduce all of those gradients and when I say reduce I literally mean"}, {"start": 316.2, "end": 321.08, "text": " you you sum them up and then divide by the number of devices to get the average"}, {"start": 321.08, "end": 326.35999999999996, "text": " gradient so there are multiple ways how you can implement this in this particular"}, {"start": 326.35999999999996, "end": 331.35999999999996, "text": " image there is this thing called parameter server so you do the basically"}, {"start": 331.35999999999996, "end": 335.84, "text": " reduce and scatter operation so you grab all of the the gradients you're gonna"}, {"start": 335.84, "end": 340.64, "text": " send the gradients here so you're gonna send and by the way I'm sorry I'm kind"}, {"start": 340.64, "end": 344.76, "text": " of glitching with this like highlighter here but hopefully you can you can see"}, {"start": 344.76, "end": 348.68, "text": " what I'm doing so we send the gradients here we send the gradients here we send"}, {"start": 348.68, "end": 353.56, "text": " the gradients here we do the averaging of those gradients and then you literally"}, {"start": 353.56, "end": 357.64, "text": " so in this particular implementation the weights are updated directly on the"}, {"start": 357.64, "end": 361.40000000000003, "text": " parameter server and then you do the 
scatter operation you literally send"}, {"start": 361.40000000000003, "end": 368.36, "text": " back that new the new weights back onto onto your devices you send back the the"}, {"start": 368.36, "end": 374.48, "text": " weights okay and that's how you achieve training with a much higher throughput"}, {"start": 374.48, "end": 379.20000000000005, "text": " because you can imagine here although there is some overhead obviously you"}, {"start": 379.20000000000005, "end": 384.20000000000005, "text": " literally have 3x higher throughput compared to if you were to train on a"}, {"start": 384.20000000000005, "end": 390.88, "text": " single GPU okay and when I say GPU it can be a TPU it can be arbitrary device"}, {"start": 390.88, "end": 398.12, "text": " basically so what's the problem with this idea the problem is is it's the"}, {"start": 398.12, "end": 403.86, "text": " underlines presupposition here is that all of these networks can fit into a"}, {"start": 403.86, "end": 408.2, "text": " single device which is the problem because we know that our models are much"}, {"start": 408.2, "end": 412.64, "text": " bigger and usually cannot fit into a single device so what do we do then well"}, {"start": 412.64, "end": 416.96000000000004, "text": " then we do like things such as model parallelism so I'm gonna show you the"}, {"start": 416.96000000000004, "end": 419.68, "text": " first thing so that's the data parallelism I'm going to show you the"}, {"start": 419.68, "end": 425.12, "text": " the model parallelism idea so this is a similar paper from Nvidia called"}, {"start": 425.12, "end": 429.52000000000004, "text": " Megatron LM training multi-billion parameter language models using model"}, {"start": 429.52, "end": 436.35999999999996, "text": " parallelism it's literally the idea behind both bloom both opt 175 B from"}, {"start": 436.35999999999996, "end": 441.82, "text": " meta I think some of the illu theory eyes models were also backed up by"}, {"start": 441.82, "end": 446.14, "text": " Megatron at least implicitly because I think they're using deep speed and GPT"}, {"start": 446.14, "end": 451.35999999999996, "text": " new X code base and deep speed basically leverages Megatron in the background I"}, {"start": 451.35999999999996, "end": 455.28, "text": " could be wrong here I'm fairly sure this is this is correct anyway so let's dig"}, {"start": 455.28, "end": 461.52, "text": " into the paper so they say here this is called intro layer model parallel"}, {"start": 461.52, "end": 466.03999999999996, "text": " approach I want to contrast that quickly and I'm going to explain this in a bit"}, {"start": 466.03999999999996, "end": 470.35999999999996, "text": " more depth a bit later with the pipeline approach which is the inter layer"}, {"start": 470.35999999999996, "end": 474.84, "text": " splitting so the difference is the following so imagine you have a"}, {"start": 474.84, "end": 479.08, "text": " transformer and I'm gonna be using transformer as the running example so"}, {"start": 479.08, "end": 484.08, "text": " imagine you have a transformer here the main difference between these two"}, {"start": 484.08, "end": 488.08, "text": " approaches is the following so you have n layers here so you have n blocks of"}, {"start": 488.08, "end": 493.28, "text": " your transformer and the pipeline approach does the following thing you"}, {"start": 493.28, "end": 497.4, "text": " literally grab let's say we have four devices if we have four devices then"}, {"start": 497.4, "end": 501.56, "text": " we're 
gonna do the following we're gonna take device the layers one through four"}, {"start": 501.56, "end": 507.06, "text": " and then five through what eight and we're gonna send them on to separate"}, {"start": 507.06, "end": 511.74, "text": " devices so that's the pipeline approach so literally take this and you put that"}, {"start": 511.74, "end": 518.48, "text": " onto a separate device here then you take this one and you put it so let me"}, {"start": 518.48, "end": 521.96, "text": " let me let me draw it here and I'm literally not explaining the pipeline"}, {"start": 521.96, "end": 525.92, "text": " idea so I might as well explain it the whole way so literally you take the"}, {"start": 525.92, "end": 530.12, "text": " layers five through eight and then you feed them into the different device here"}, {"start": 530.12, "end": 536.76, "text": " and then you take the layers the next four layers because I don't know how to"}, {"start": 536.76, "end": 542.4, "text": " add I'm just gonna use yeah whatever so literally you feed that into the third"}, {"start": 542.4, "end": 548.4399999999999, "text": " block and then the last one the last four layers are gonna be fed into the"}, {"start": 548.4399999999999, "end": 554.28, "text": " last block here okay so that's what the what the pipeline parallelism approach"}, {"start": 554.28, "end": 558.92, "text": " does it literally splits the model so called well I think this is yeah this is"}, {"start": 558.92, "end": 564.6, "text": " the horizontal splitting and you can now imagine that each of these devices needs"}, {"start": 564.6, "end": 571.4, "text": " to hold literally four times less memory roughly I mean at least when it comes to"}, {"start": 571.4, "end": 577.64, "text": " weights there is also the up to optimizer states etc etc okay so the idea on the"}, {"start": 577.64, "end": 581.6, "text": " opposite side here with model parallelism from the Megatron LM paper is"}, {"start": 581.6, "end": 587.0, "text": " instead you're splitting you're doing the so-called vertical split so you do"}, {"start": 587.0, "end": 593.76, "text": " the following you instead let me take some color such as green and let's"}, {"start": 593.76, "end": 597.24, "text": " imagine we have just two devices gonna be easier to draw so you literally split"}, {"start": 597.24, "end": 602.8199999999999, "text": " your transformer like this so literally vertical split it and then you send this"}, {"start": 602.8199999999999, "end": 608.88, "text": " piece here on to one device and you send the other piece the other half of the"}, {"start": 608.88, "end": 612.68, "text": " transformer on to the other device so that's the model parallelism or the"}, {"start": 612.68, "end": 616.56, "text": " tensor parallelism as a different different name for that same like"}, {"start": 616.56, "end": 619.96, "text": " concept okay because I'm already halfway through explaining the pipeline"}, {"start": 619.96, "end": 624.6, "text": " parallelism let me let me quickly go through through that so that's the idea"}, {"start": 624.6, "end": 629.96, "text": " I think well one of the papers that mentioned that idea was the G pipe from"}, {"start": 629.96, "end": 634.44, "text": " Google others were I think like pipe dream or something there are multiple"}, {"start": 634.44, "end": 638.88, "text": " papers implementing similar ideas but the idea is as I mentioned this one and"}, {"start": 638.88, "end": 643.6800000000001, "text": " what's the problem here well the problem is if you just naively approach 
training"}, {"start": 643.6800000000001, "end": 648.1600000000001, "text": " a model that's and that's split using the pipeline parallelism you have a"}, {"start": 648.16, "end": 652.88, "text": " problem with with these so-called bubbles so let me try and demonstrate"}, {"start": 652.88, "end": 658.12, "text": " what I mean by that so imagine you feed a batch of data through this pipeline"}, {"start": 658.12, "end": 664.3199999999999, "text": " okay so you have a batch of data here I'm gonna denote it as well I guess like"}, {"start": 664.3199999999999, "end": 670.04, "text": " this just to make a difference between the layers and and the data so the thing"}, {"start": 670.04, "end": 674.56, "text": " is while this while you're doing the feed forward through the first device I"}, {"start": 674.56, "end": 680.4399999999999, "text": " through the first let's say four layers the other three devices are idle and you"}, {"start": 680.4399999999999, "end": 684.76, "text": " can see that on this diagram like this so you can see here you can literally"}, {"start": 684.76, "end": 690.64, "text": " see here that these devices here are idle so these three are doing exactly"}, {"start": 690.64, "end": 695.76, "text": " nothing okay there is no no computation going on so that's kind of wasteful"}, {"start": 695.76, "end": 700.0799999999999, "text": " right and then once you do the feed forward through the first four layers"}, {"start": 700.08, "end": 705.84, "text": " then you end up you basically send the activations to the second device and you"}, {"start": 705.84, "end": 710.2800000000001, "text": " start doing the feed forward here and at this point of time the three devices the"}, {"start": 710.2800000000001, "end": 715.0, "text": " first and the second the third and the fourth one are idle so you can see here"}, {"start": 715.0, "end": 719.2, "text": " literally these devices are doing nothing so you can you can see here this"}, {"start": 719.2, "end": 725.84, "text": " is gonna be idle these two devices are idle etc etc and so then you end up you"}, {"start": 725.84, "end": 729.48, "text": " propagate through all of the four devices and then you then you start doing"}, {"start": 729.48, "end": 733.28, "text": " the backprop so you want to calculate the gradients and when you start doing"}, {"start": 733.28, "end": 738.9200000000001, "text": " that again in the in the backward pass you are you're literally waiting so so"}, {"start": 738.9200000000001, "end": 744.12, "text": " you're doing backprop here but these three devices are being idle so this is"}, {"start": 744.12, "end": 748.72, "text": " super wasteful so there is a simple way how you can avoid this and it's called"}, {"start": 748.72, "end": 754.32, "text": " micro batching so split your mini batch into multiple micro batches and then you"}, {"start": 754.32, "end": 758.52, "text": " start feeding them into your pipeline and increase the throughput let me let"}, {"start": 758.52, "end": 764.1999999999999, "text": " me show you what I mean by that so you can see the diagram is here but let me"}, {"start": 764.1999999999999, "end": 769.28, "text": " explain conceptually what it means so here what you do is you split the simple"}, {"start": 769.28, "end": 774.24, "text": " data into multiple chunks into so-called micro batches and you feed this first"}, {"start": 774.24, "end": 780.36, "text": " part here so you feed it here and while it's running here and then the moment"}, {"start": 780.36, "end": 785.48, "text": " it starts running 
through the through the second device you immediately feed"}, {"start": 785.48, "end": 789.52, "text": " the second part the second micro batch through this device and now you have two"}, {"start": 789.52, "end": 796.0, "text": " devices not being idle and then as soon as this device passes on the activations"}, {"start": 796.0, "end": 800.28, "text": " to the third device you can feed the third micro batch into this device and"}, {"start": 800.28, "end": 804.4, "text": " so on and so forth you get the point and you can see the diagram looks like like"}, {"start": 804.4, "end": 809.9200000000001, "text": " this so initially for the duration of the first micro batch you literally have"}, {"start": 809.9200000000001, "end": 813.72, "text": " you literally and let me just change the color here you literally have four"}, {"start": 813.72, "end": 818.44, "text": " devices being idle so you can still here see that the four devices are being"}, {"start": 818.44, "end": 823.0, "text": " idle which is wasteful and then only two devices are being being idle and then"}, {"start": 823.0, "end": 828.0, "text": " only one and finally here you can see that all of the devices are being busy"}, {"start": 828.0, "end": 832.88, "text": " and you have the the the 4x the throughput compared to the naive"}, {"start": 832.88, "end": 837.24, "text": " implementation of the pipeline parallelism so in order to get as many of"}, {"start": 837.24, "end": 842.48, "text": " these columns where all of the four devices are active you want to have a"}, {"start": 842.48, "end": 847.8000000000001, "text": " lot of micro batches and what the GPy paper showed basically empirically is"}, {"start": 847.8000000000001, "end": 853.64, "text": " that you want to have M greater or equal than 4k where K is the number of"}, {"start": 853.64, "end": 858.0, "text": " partitions or the number of devices a fancy name saying number of devices so"}, {"start": 858.0, "end": 862.64, "text": " that means if you have four devices you want to have 32 plus micro batches and"}, {"start": 862.64, "end": 868.04, "text": " then literally they showed that you almost have linear scaling of the"}, {"start": 868.04, "end": 875.1999999999999, "text": " throughput which is awesome okay so that's one way you can take a huge model"}, {"start": 875.1999999999999, "end": 880.7199999999999, "text": " split it across multiple devices each of the devices will have only a portion a"}, {"start": 880.7199999999999, "end": 888.3199999999999, "text": " slice of the network weights and thus you can train this whole system faster"}, {"start": 888.3199999999999, "end": 891.56, "text": " because you have faster throughput and it's also feasible in the first place"}, {"start": 891.56, "end": 895.8399999999999, "text": " because yeah you cannot kind of stick it into a single device okay so that's the"}, {"start": 895.84, "end": 901.24, "text": " second idea now let's go through the actual paper this one is very fun let's"}, {"start": 901.24, "end": 903.6800000000001, "text": " get back to the first sentence so we started here"}, {"start": 903.6800000000001, "end": 907.12, "text": " intralayer model parallel approach we now know what that means so it's intro"}, {"start": 907.12, "end": 910.44, "text": " we're splitting the actual layer we're not splitting across layers we take a"}, {"start": 910.44, "end": 914.24, "text": " single layer and we split it into two parts or four parts or eight parts or"}, {"start": 914.24, "end": 919.2800000000001, "text": " whatnot our approach 
does not require a new compiler or library changes is"}, {"start": 919.2800000000001, "end": 923.52, "text": " orthogonal and complementary to pipeline model parallelism and can be fully"}, {"start": 923.52, "end": 927.4399999999999, "text": " implemented with insertion of a few communication operations in native"}, {"start": 927.4399999999999, "end": 932.48, "text": " part or so the cool thing is the complementary part is very important so"}, {"start": 932.48, "end": 935.88, "text": " this part is super important because that means you can combine these two"}, {"start": 935.88, "end": 942.4399999999999, "text": " approaches to scale to even bigger model sizes okay then they say this we"}, {"start": 942.4399999999999, "end": 949.72, "text": " sustain 15.1 petaflops across the entire application with 76% scaling"}, {"start": 949.72, "end": 954.8000000000001, "text": " efficiency when compared to a strong single GPU baseline that sustains 39"}, {"start": 954.8000000000001, "end": 960.4, "text": " teraflops which is 30% of peak flops so let me kind of break this down first of"}, {"start": 960.4, "end": 964.6, "text": " all I forgot to mention here they're training a model that has 8.3 billion"}, {"start": 964.6, "end": 969.6800000000001, "text": " parameters and they're training on 512 GPUs okay so what I say here is that"}, {"start": 969.6800000000001, "end": 976.9200000000001, "text": " they achieve a throughput of 15.1 peta so peta is I guess 10 to the power of 15 or"}, {"start": 976.92, "end": 982.8, "text": " something so it's like thousand tera or 1024 depends when you ask flops is the"}, {"start": 982.8, "end": 986.92, "text": " the floating point operations per second so that means you have this many"}, {"start": 986.92, "end": 991.7199999999999, "text": " operations per second that's a huge number okay and what they say is that"}, {"start": 991.7199999999999, "end": 997.36, "text": " they have a 60 76 percent scaling efficiency so what does that mean in"}, {"start": 997.36, "end": 1001.0, "text": " practice that means the following let me open up a calculator here so we have"}, {"start": 1001.0, "end": 1006.88, "text": " that the single device single GPU has 39 teraflops so if you multiply 39"}, {"start": 1006.88, "end": 1012.04, "text": " times the number of GPUs ideally this will be ideal scaling we'd end up with"}, {"start": 1012.04, "end": 1017.8, "text": " 19 petaflops and you can see here that they only end up with 15.1 and that's"}, {"start": 1017.8, "end": 1022.82, "text": " where this 60 76 percent scaling efficiency comes into place if I"}, {"start": 1022.82, "end": 1032.88, "text": " multiply this with 0.76 we end up with exactly 15.1 petaflops okay so basically"}, {"start": 1032.88, "end": 1037.7600000000002, "text": " you can see that the GPU baseline has 30% of the theoretical peak so that the"}, {"start": 1037.7600000000002, "end": 1041.6000000000001, "text": " GPU will never have its its its full theory it will never reach its full"}, {"start": 1041.6000000000001, "end": 1045.3200000000002, "text": " theoretical peak and then when you additionally scale across multiple"}, {"start": 1045.3200000000002, "end": 1050.8400000000001, "text": " devices you'll never have ideal linear scaling but it's close enough so that's"}, {"start": 1050.8400000000001, "end": 1056.96, "text": " cool okay so let's continue here let me see what's interesting we're gonna see"}, {"start": 1056.96, "end": 1064.08, "text": " how we are gonna split the two particular blocks so we're gonna see 
how"}, {"start": 1064.08, "end": 1068.96, "text": " we split the MLP block first and then I'm gonna show you this is probably a"}, {"start": 1068.96, "end": 1074.48, "text": " bit easier how do we split the the attention block and they also split the"}, {"start": 1074.48, "end": 1079.8400000000001, "text": " output embeddings as well as the input embeddings and finally a detail worth"}, {"start": 1079.8400000000001, "end": 1083.8, "text": " mentioning is that this residual connections additions layer norms are"}, {"start": 1083.8, "end": 1088.68, "text": " actually replicated across devices so there is some duplication going on for"}, {"start": 1088.68, "end": 1093.8799999999999, "text": " various reasons they decided that's that's the most optimal thing to do but"}, {"start": 1093.8799999999999, "end": 1098.8, "text": " that's the plan let's see how do we how do we vertically break up the MLP how"}, {"start": 1098.8, "end": 1103.84, "text": " do we vertical split that the attention block that's gonna be the the main gist"}, {"start": 1103.84, "end": 1109.72, "text": " of this paper okay let's start here here you can see the the diagrams of how how"}, {"start": 1109.72, "end": 1115.84, "text": " the splitting happens so the upper diagram here shows you the basically"}, {"start": 1115.84, "end": 1123.96, "text": " oops shows you the MLP block and you can see this is the first linear layer the"}, {"start": 1123.96, "end": 1129.2, "text": " second near layer and this is how it's kind of split down but let me go into"}, {"start": 1129.2, "end": 1133.64, "text": " the formulas I'm gonna slowly understand how this works but first on the high"}, {"start": 1133.64, "end": 1141.2, "text": " level what happens here is that you can see that this weights you take the"}, {"start": 1141.2, "end": 1147.1200000000001, "text": " weights of this linear layer you split them column wise and you keep a 2 on one"}, {"start": 1147.1200000000001, "end": 1152.8400000000001, "text": " device and you keep a a 1 on the second device you split the matrix that you use"}, {"start": 1152.8400000000001, "end": 1157.16, "text": " for your feed forward layer and you split it into two parts you do the same"}, {"start": 1157.16, "end": 1162.4, "text": " thing here but the difference is here you split it row wise so you can see"}, {"start": 1162.4, "end": 1166.6000000000001, "text": " here we don't have column wise splitting we have row wise splitting okay and so"}, {"start": 1166.6000000000001, "end": 1170.96, "text": " you take B1 you put B1 on one device you put B2 on the second device so"}, {"start": 1170.96, "end": 1175.68, "text": " basically you can see here that all of these operations are happening on a"}, {"start": 1175.68, "end": 1181.6000000000001, "text": " single device okay and then there is this G operation going on which is gonna"}, {"start": 1181.6000000000001, "end": 1186.88, "text": " synchronize the two devices more on that a bit later and the second device"}, {"start": 1186.88, "end": 1194.4, "text": " obviously here is processing the second part of the MLP okay so it has the a1"}, {"start": 1194.4, "end": 1198.16, "text": " and B1 so that's and that's how we achieve the vertical splitting but"}, {"start": 1198.16, "end": 1202.0800000000002, "text": " that's only a high-level diagram let's understand why this works and how is it"}, {"start": 1202.0800000000002, "end": 1207.8000000000002, "text": " equivalent to just having everything on the same device it might be like maybe"}, {"start": 
1207.8000000000002, "end": 1211.88, "text": " tough to grasp on the on the first fly but it's fairly simple once you think"}, {"start": 1211.88, "end": 1216.0, "text": " about it a bit okay so here is the implementation here is what the layer"}, {"start": 1216.0, "end": 1221.76, "text": " does here is what the the feed forward layer does you do the this is your input"}, {"start": 1221.76, "end": 1227.96, "text": " data X this is your weight matrix a and this is the activation unit jelly in"}, {"start": 1227.96, "end": 1233.16, "text": " this particular case so there is multiple ways we can split we saw the"}, {"start": 1233.16, "end": 1237.88, "text": " column wise and row wise let's see the advantages and disadvantages of both if"}, {"start": 1237.88, "end": 1245.88, "text": " we split the weight matrix basically row wise as you can see here then we have a"}, {"start": 1245.88, "end": 1250.5600000000002, "text": " problem and that problem is we need to synchronize the devices after this pass"}, {"start": 1250.5600000000002, "end": 1257.5200000000002, "text": " so let's see why that's the case so if we now multiply this you get X one times"}, {"start": 1257.5200000000002, "end": 1263.5200000000002, "text": " a one plus X two times a two and then you need to apply jelly so the problem"}, {"start": 1263.5200000000002, "end": 1268.7600000000002, "text": " here is the plus and that means that the two devices will need to communicate so"}, {"start": 1268.7600000000002, "end": 1274.0400000000002, "text": " that that's the only way you can you can well add those two separate activations"}, {"start": 1274.04, "end": 1279.2, "text": " right and then you apply the jelly the problem is the jelly is not like is a"}, {"start": 1279.2, "end": 1286.76, "text": " nonlinear function so jelly of of adding these two is not the same as jelly of"}, {"start": 1286.76, "end": 1291.32, "text": " the first term plus the jelly of the second term and so they say this"}, {"start": 1291.32, "end": 1296.28, "text": " approach will require a synchronization point before the jelly function okay let"}, {"start": 1296.28, "end": 1304.48, "text": " me try and map this matrix multiplication into the diagram with"}, {"start": 1304.48, "end": 1309.48, "text": " which you're probably familiar with okay so let's start like this we have the"}, {"start": 1309.48, "end": 1315.36, "text": " input data X I'm gonna assume our batch size is only one so we have something"}, {"start": 1315.36, "end": 1323.36, "text": " like this so we have X here and we split X into two parts so we have X one and we"}, {"start": 1323.36, "end": 1330.84, "text": " have X two here okay next up we need to pass that through the feed forward layer"}, {"start": 1330.84, "end": 1336.6399999999999, "text": " so we have a weight matrix we're going to multiply with it so this is going to"}, {"start": 1336.6399999999999, "end": 1341.6, "text": " be just some box here I'm gonna fill in the details a bit later and we end up"}, {"start": 1341.6, "end": 1349.12, "text": " with the final representation I'm gonna denote it as a red rectangle here and"}, {"start": 1349.12, "end": 1356.9599999999998, "text": " that's it okay and now I'm gonna draw like a matrix a here and we said that we"}, {"start": 1356.9599999999998, "end": 1362.52, "text": " are going to split the matrix a basically row wise right so you're gonna"}, {"start": 1362.52, "end": 1371.3999999999999, "text": " split the matrix a row wise so let's see what happens now so if we multiply this"}, {"start": 
1371.3999999999999, "end": 1377.32, "text": " part here and then we can assume in a little bit here if you multiply this"}, {"start": 1377.32, "end": 1384.48, "text": " part here with this part here what does that mean so that means that we do the"}, {"start": 1384.48, "end": 1391.04, "text": " following thing we'll literally do this whoops we'll literally do this so we do"}, {"start": 1391.04, "end": 1396.36, "text": " this is kind of how it's gonna be how it's gonna look like if you if you write"}, {"start": 1396.36, "end": 1400.3999999999999, "text": " like a neural if you if you draw a neural network okay so that's what you've done"}, {"start": 1400.3999999999999, "end": 1405.72, "text": " by multiplying this part with this part and then you multiply this with the"}, {"start": 1405.72, "end": 1409.32, "text": " second part okay so now you multiply it with the second piece here and that"}, {"start": 1409.32, "end": 1414.68, "text": " second piece is gonna give you the second output and you end up with"}, {"start": 1414.68, "end": 1419.6000000000001, "text": " something like this and then you do this here and this here and you end up with"}, {"start": 1419.6000000000001, "end": 1425.92, "text": " four outputs so I'm kind of using only four to to to yeah simplify the drawing"}, {"start": 1425.92, "end": 1431.96, "text": " so the problem is that all of these outputs here are still incomplete"}, {"start": 1431.96, "end": 1436.64, "text": " because they are missing the contribution from the x2 portion that's"}, {"start": 1436.64, "end": 1442.32, "text": " why we have to add the the actual representations so what I do now is let"}, {"start": 1442.32, "end": 1446.44, "text": " me change the color into let's say red so now you'll have to do the"}, {"start": 1446.44, "end": 1452.64, "text": " multiplication between this one and this one to end up with the contribution to"}, {"start": 1452.64, "end": 1457.92, "text": " the first output element okay so this gives us this and then you just repeat"}, {"start": 1457.92, "end": 1461.76, "text": " repeat repeat and you get the contributions to this one to this one and"}, {"start": 1461.76, "end": 1466.68, "text": " to this one okay so that's hopefully kind of helps you understand how this"}, {"start": 1466.68, "end": 1471.92, "text": " maps to the neural network diagrams with which you are used to so as you see we"}, {"start": 1471.92, "end": 1477.5, "text": " if you if you just do x1 times a1 we only get partial contributions to the"}, {"start": 1477.5, "end": 1482.92, "text": " output and because of that we also have to add additionally the x2 a2 and only"}, {"start": 1482.92, "end": 1489.24, "text": " then can we apply the jelly okay now let's go to the second like a way of"}, {"start": 1489.24, "end": 1493.36, "text": " breaking down the weight matrix and that's the column splitting okay so"}, {"start": 1493.36, "end": 1497.2, "text": " another option is to split a along its columns like this this partitioning"}, {"start": 1497.2, "end": 1502.32, "text": " allows the jelly non-linearity to be independently applied to the output of"}, {"start": 1502.32, "end": 1507.48, "text": " each partitioned gem so you can see what's the what's the benefit here you"}, {"start": 1507.48, "end": 1513.04, "text": " can literally send X so this time you're sending not X1 is here so please notice"}, {"start": 1513.04, "end": 1517.04, "text": " this thing this is a very important detail so here we have X1 here we have"}, {"start": 1517.04, "end": 1521.72, "text": " X 
so that means we're passing the whole input and as well here and here we had"}, {"start": 1521.72, "end": 1527.44, "text": " X2 so but a benefit we get here is that we can literally do everything on that"}, {"start": 1527.44, "end": 1531.48, "text": " device and then apply jelly and that's it we don't have any synchronization"}, {"start": 1531.48, "end": 1534.8799999999999, "text": " going on so that's very cool because synchronization slows us down we have"}, {"start": 1534.8799999999999, "end": 1539.76, "text": " to communicate the weights and sum them up and then return them back okay so"}, {"start": 1539.76, "end": 1543.12, "text": " this is advantageous as it removes a synchronization point hence we"}, {"start": 1543.12, "end": 1548.4399999999998, "text": " partition the first gem in this column parallel fashion and split the second"}, {"start": 1548.4399999999998, "end": 1553.4399999999998, "text": " gem along its rows so it takes the output of the jelly layer directly"}, {"start": 1553.4399999999998, "end": 1558.6, "text": " without requiring any communication as shown in figure 3a so that's the hack"}, {"start": 1558.6, "end": 1563.6399999999999, "text": " we're gonna do so let me now do the following let me first show you how the"}, {"start": 1563.6399999999999, "end": 1569.4799999999998, "text": " column splitting is gonna look like so I'm gonna do the same diagram again let"}, {"start": 1569.48, "end": 1574.52, "text": " me just change the color so I'm gonna do this similar diagram again so we have"}, {"start": 1574.52, "end": 1583.2, "text": " our X here so this is our X we have something going on here I'm gonna just"}, {"start": 1583.2, "end": 1589.84, "text": " draw like this like a box here a placeholder and then I'm gonna pick the"}, {"start": 1589.84, "end": 1596.52, "text": " red color so we denote the output representation with red and let's now"}, {"start": 1596.52, "end": 1604.44, "text": " multiply the input X so this is again X let's multiply X with our matrix that"}, {"start": 1604.44, "end": 1609.0, "text": " split column wise this time so that's the main difference okay so now we"}, {"start": 1609.0, "end": 1613.84, "text": " split it so here you saw the horizontal one here we have a vertical splitting"}, {"start": 1613.84, "end": 1618.68, "text": " okay so what happens now let's go through this so now when you multiply"}, {"start": 1618.68, "end": 1624.8, "text": " this with this you literally end up with a full representation so this output"}, {"start": 1624.8, "end": 1632.9199999999998, "text": " here has everything it needs so it's completely has all of the necessary"}, {"start": 1632.9199999999998, "end": 1636.9199999999998, "text": " information from the input then we do the same thing and now you can see that"}, {"start": 1636.9199999999998, "end": 1642.08, "text": " basically what is going to happen let me change the color maybe to gray after you"}, {"start": 1642.08, "end": 1647.52, "text": " keep on multiplying this with this part with this matrix you're gonna end up"}, {"start": 1647.52, "end": 1654.96, "text": " with a half of the output representation vector but this half is for like is"}, {"start": 1654.96, "end": 1658.56, "text": " going to be actually contain it's gonna contain all of the information necessary"}, {"start": 1658.56, "end": 1663.32, "text": " so we don't have to do any summation that's the advantage here okay so we do"}, {"start": 1663.32, "end": 1668.0, "text": " that on two devices we end up with these representations we apply the 
gel use and"}, {"start": 1668.0, "end": 1672.16, "text": " then and then we feed so that so now this is this is the nice part we'll"}, {"start": 1672.16, "end": 1676.32, "text": " literally we'll literally just take these representations here so we apply"}, {"start": 1676.32, "end": 1684.4399999999998, "text": " jellies again so we apply the jelly and we then feed them as x1 to the this"}, {"start": 1684.4399999999998, "end": 1690.8, "text": " layer where we split the weight matrix horizontally so that's the nice part so"}, {"start": 1690.8, "end": 1695.24, "text": " now when you concatenate these two you get an MLP and you have literally half"}, {"start": 1695.24, "end": 1699.56, "text": " of the weights on one device and half of the weights on the second device that's"}, {"start": 1699.56, "end": 1704.24, "text": " it as simple as that beautiful idea if you ask me very simple but very"}, {"start": 1704.24, "end": 1709.88, "text": " beautiful and that's drawn here again you can see it here so we start with X"}, {"start": 1709.88, "end": 1715.1200000000001, "text": " we replicate X then we we have because this one is column split we can just do"}, {"start": 1715.1200000000001, "end": 1720.92, "text": " X times a 2 and then we apply jellies and then we just feed that here and then"}, {"start": 1720.92, "end": 1725.9, "text": " we multiply with b2 and here remember we have horizontal so row wise splitting"}, {"start": 1725.9, "end": 1730.36, "text": " and then we have the G which is the synchronization operation so that means"}, {"start": 1730.36, "end": 1734.24, "text": " we're gonna sum them up and finally we end up with the final representation is"}, {"start": 1734.24, "end": 1739.56, "text": " that here that's it that's how we split the MLP layer okay let me see where"}, {"start": 1739.56, "end": 1744.36, "text": " there is something interesting here this approach splits both gems in the MLP"}, {"start": 1744.36, "end": 1748.24, "text": " block across GPUs and requires only a single all reduce operation in the"}, {"start": 1748.24, "end": 1752.34, "text": " forward pass so the G operator and a single already using the backward pass"}, {"start": 1752.34, "end": 1757.56, "text": " the F operator so let me show you how that looks like so again we saw that"}, {"start": 1757.56, "end": 1761.1599999999999, "text": " already so in the forward pass the only point where we have to synchronize to do"}, {"start": 1761.1599999999999, "end": 1765.84, "text": " the already use is here the G and when we go backwards once we get the"}, {"start": 1765.84, "end": 1769.12, "text": " gradients here we somehow have to combine them right to get the"}, {"start": 1769.12, "end": 1773.76, "text": " representations here so we literally have to well I guess just add them and"}, {"start": 1773.76, "end": 1778.6, "text": " divide by two whatever we have to have the synchronization primitive there okay"}, {"start": 1778.6, "end": 1782.9199999999998, "text": " now for the attention part and I really hope now you're you're kind of"}, {"start": 1782.92, "end": 1788.3200000000002, "text": " appreciating the whole engineering the infrastructure that goes into these big"}, {"start": 1788.3200000000002, "end": 1793.52, "text": " language models it's not just like you set like a hyper parameter in your code"}, {"start": 1793.52, "end": 1798.64, "text": " to from from like five layers to like 120 layers of transformer and expect"}, {"start": 1798.64, "end": 1801.52, "text": " everything is gonna work magically on a single 
device there is a lot of"}, {"start": 1801.52, "end": 1806.4, "text": " engineering going on and here trying to explain you some of the methods okay"}, {"start": 1806.4, "end": 1811.6000000000001, "text": " attention is a bit easier I think in my opinion so the thing is you literally"}, {"start": 1811.6, "end": 1815.04, "text": " take the query matrix and I'm gonna I'm not going to explain the transformers"}, {"start": 1815.04, "end": 1818.12, "text": " please do check out my transformer video I'm gonna link it somewhere here in"}, {"start": 1818.12, "end": 1822.4399999999998, "text": " case you're not familiar with the details of the transformer model you"}, {"start": 1822.4399999999998, "end": 1827.1999999999998, "text": " take the query matrix you take the key matrix you take the value matrix here and"}, {"start": 1827.1999999999998, "end": 1833.36, "text": " basically you split them column wise and each of these columns is gonna"}, {"start": 1833.36, "end": 1838.56, "text": " literally compute the queries for a single head so we have the multi head"}, {"start": 1838.56, "end": 1842.52, "text": " attention let's imagine for the sake of argument here that we only have two heads"}, {"start": 1842.52, "end": 1846.32, "text": " and by the way that brings me let me just quickly mention this so even though"}, {"start": 1846.32, "end": 1850.44, "text": " we just had two here so we were splitting the so-called two-way model"}, {"start": 1850.44, "end": 1854.8799999999999, "text": " parallelism you can imagine splitting the weight matrices into much more like"}, {"start": 1854.8799999999999, "end": 1859.6399999999999, "text": " well parts so you can have like a four-way model of parallelism or eight"}, {"start": 1859.6399999999999, "end": 1864.08, "text": " way and you can kind of convince yourself that you're going to keep the"}, {"start": 1864.08, "end": 1870.3999999999999, "text": " equivalent so if we have eight if we if we divide this into eight rows and if we"}, {"start": 1870.3999999999999, "end": 1876.52, "text": " divide the input data into eight pieces as well you'll just have x1 a1 plus x2"}, {"start": 1876.52, "end": 1883.8, "text": " a2 plus x3 a3 plus etc etc until you get to x8 a8 and then you have to"}, {"start": 1883.8, "end": 1887.8799999999999, "text": " synchronize them and then you apply jelly so nothing is preventing us from"}, {"start": 1887.8799999999999, "end": 1893.28, "text": " from splitting this into multiple parts okay so let's get back to the attention"}, {"start": 1893.28, "end": 1898.92, "text": " so here let's imagine for the sake of explanation we only have two heads and"}, {"start": 1898.92, "end": 1904.48, "text": " basically what will happen is that this device up here so this device up here is"}, {"start": 1904.48, "end": 1912.36, "text": " going to compute the first head so the the representation for the first head"}, {"start": 1912.36, "end": 1918.16, "text": " and the bottom layer the bottom part of this diagram is going to compute the"}, {"start": 1918.16, "end": 1922.32, "text": " second head and now we have this is the head one representations and these are"}, {"start": 1922.32, "end": 1926.76, "text": " the head two representations and then we just feed them here and this is the the"}, {"start": 1926.76, "end": 1932.84, "text": " already familiar hopefully breakdown of the feed forward layer where we do the"}, {"start": 1932.84, "end": 1936.02, "text": " row by splitting and you can see here that we again have to have the G"}, {"start": 1936.02, 
"end": 1939.96, "text": " because we have to synchronize them and then we get the out-propresentations"}, {"start": 1939.96, "end": 1945.08, "text": " okay let me try and demonstrate this again with a small diagram quickly so we"}, {"start": 1945.08, "end": 1951.04, "text": " start with the input so let's assume we have that one and sequence size one so"}, {"start": 1951.04, "end": 1954.2, "text": " we literally have a single token so let's that's gonna be the easiest case"}, {"start": 1954.2, "end": 1961.84, "text": " but it's gonna make hopefully make the point okay so here is our input data X"}, {"start": 1961.84, "end": 1968.92, "text": " again it's a single token so B the the batch size is one S which is usually"}, {"start": 1968.92, "end": 1972.52, "text": " along this dimension is one we just have a single token that has some"}, {"start": 1972.52, "end": 1979.44, "text": " dimensionality okay so some maybe H the hidden dimension okay so what we do is"}, {"start": 1979.44, "end": 1988.3200000000002, "text": " we basically map it using the key query and values so the first device will"}, {"start": 1988.3200000000002, "end": 1996.76, "text": " literally map this one into let's say like this we have we have the the query"}, {"start": 1996.76, "end": 2005.56, "text": " we have the key we have the value and you can see that the it's like twice"}, {"start": 2005.56, "end": 2011.76, "text": " smaller dimensionality compared to the input one and now you're literally gonna"}, {"start": 2011.76, "end": 2016.6399999999999, "text": " do your your magic of the transformer and you end up with the final"}, {"start": 2016.6399999999999, "end": 2023.6399999999999, "text": " representation which is again 2x smaller so this is H2 if this is H this is H2"}, {"start": 2023.6399999999999, "end": 2028.72, "text": " okay and you just do the same thing on the separate device so you'll do the"}, {"start": 2028.72, "end": 2033.9199999999998, "text": " same logic I'm just gonna kind of do it like this and you end up with the same"}, {"start": 2033.92, "end": 2037.4, "text": " well it's gonna be a different representation but the same structure"}, {"start": 2037.4, "end": 2043.6000000000001, "text": " here so we end up with this H over 2 okay so now we pay pass those tokens we"}, {"start": 2043.6000000000001, "end": 2048.92, "text": " pass them here into the feed forward layer we do the multiplication so now"}, {"start": 2048.92, "end": 2054.92, "text": " we literally have the the same example with X1 and X2 so we split our data into"}, {"start": 2054.92, "end": 2061.2400000000002, "text": " two parts that's what we've done here and now you literally do the when you"}, {"start": 2061.24, "end": 2065.3599999999997, "text": " multiply would be one you're gonna end up with this so let me let me change the"}, {"start": 2065.3599999999997, "end": 2070.4799999999996, "text": " color to like blue so you're gonna end up with this so something like this blah"}, {"start": 2070.4799999999996, "end": 2075.8799999999997, "text": " blah blah blah blah so we end up with the full representation again H but if"}, {"start": 2075.8799999999997, "end": 2080.3599999999997, "text": " you recall we literally have a partial representation so we'll have to add up"}, {"start": 2080.3599999999997, "end": 2085.6, "text": " on top of this one the the mapping that we get from so the representation that"}, {"start": 2085.6, "end": 2091.16, "text": " we get from this representation here so now we'll have to add up this thing here"}, {"start": 
2091.16, "end": 2096.64, "text": " and only now do we have the full representation and that's the the G"}, {"start": 2096.64, "end": 2102.3599999999997, "text": " the G operator here so hopefully that the idea is clear until now that was the"}, {"start": 2102.3599999999997, "end": 2105.3599999999997, "text": " main idea let me kind of walk you through the text here so partitioning the"}, {"start": 2105.3599999999997, "end": 2110.2799999999997, "text": " gems associated with the with key query and value in a column parallel fashion"}, {"start": 2110.2799999999997, "end": 2113.8799999999997, "text": " such that the matrix multiply corresponding to each attention head is"}, {"start": 2113.8799999999997, "end": 2120.12, "text": " done locally on one GPU this allows us to split per attention head parameters"}, {"start": 2120.12, "end": 2126.04, "text": " and workload across the GPUs and doesn't require any immediate communication to"}, {"start": 2126.04, "end": 2129.3599999999997, "text": " complete the self-attention that's very important we don't need any immediate"}, {"start": 2129.3599999999997, "end": 2134.44, "text": " communication okay so this approach for both the MLP and self-attention layer"}, {"start": 2134.44, "end": 2140.08, "text": " fuses groups of two gems so general matrix multiplies eliminates a"}, {"start": 2140.08, "end": 2144.96, "text": " synchronization point in between and results in better scaling so what I mean"}, {"start": 2144.96, "end": 2149.2799999999997, "text": " by that is you can see here in both of these diagrams you can see that we have"}, {"start": 2149.28, "end": 2153.48, "text": " a like a general matrix multiply and then something happens and then general"}, {"start": 2153.48, "end": 2158.2000000000003, "text": " matrix multiply and compilers are smart enough to optimize these to fuse these"}, {"start": 2158.2000000000003, "end": 2161.88, "text": " operations together so that you literally have a single matrix multiply"}, {"start": 2161.88, "end": 2165.6800000000003, "text": " or something I'm not super familiar with those low-level details I've never had"}, {"start": 2165.6800000000003, "end": 2170.32, "text": " to do that but like that's my understanding at the moment so that's"}, {"start": 2170.32, "end": 2173.4, "text": " what I see here and this results in better scaling this enables us to"}, {"start": 2173.4, "end": 2178.2000000000003, "text": " perform all gems in a simple transformer layer using only two all reduces in the"}, {"start": 2178.2, "end": 2183.04, "text": " forward path and two in the backward path again that's because we have"}, {"start": 2183.04, "end": 2187.0, "text": " because the transformer layer consists of one attention we have a single"}, {"start": 2187.0, "end": 2191.96, "text": " already use here and then we pass the data that comes out of here we pass it"}, {"start": 2191.96, "end": 2197.0, "text": " here and then we have another already use here the G so that's two in the in"}, {"start": 2197.0, "end": 2201.68, "text": " the in the single single pass through the transformer layer okay guys that's"}, {"start": 2201.68, "end": 2209.2799999999997, "text": " pretty much it now I did say that they also have to split the output embeddings"}, {"start": 2209.2799999999997, "end": 2212.6, "text": " the input embeddings etc etc I'm just gonna kind of rush you through this the"}, {"start": 2212.6, "end": 2216.6, "text": " logic is very similar I'm not gonna bother explaining this one we parallelize"}, {"start": 2216.6, "end": 
2222.68, "text": " the input embedding matrix denoted as E and then the subsequent is H with a"}, {"start": 2222.68, "end": 2227.4199999999996, "text": " hidden dimension and V the vocabulary size along the vocab dimension so e1"}, {"start": 2227.42, "end": 2233.04, "text": " e2 column wise since since each partition now only contains a portion of"}, {"start": 2233.04, "end": 2237.76, "text": " the embedding table and already use the G operator is required after the input"}, {"start": 2237.76, "end": 2242.28, "text": " embedding so again we'll be able to split the e matrix which is usually huge"}, {"start": 2242.28, "end": 2248.12, "text": " like the vocab can sometimes be like 50,000 etc etc so you can split it into"}, {"start": 2248.12, "end": 2252.92, "text": " multiple pieces send each of those pieces onto the GPUs then you do the"}, {"start": 2252.92, "end": 2257.52, "text": " mapping and then you'll just have to synchronize again same idea as what we"}, {"start": 2257.52, "end": 2263.52, "text": " saw before okay then they also mentioned for the output it's a bit more intricate"}, {"start": 2263.52, "end": 2268.2400000000002, "text": " so blah blah blah if you just do it simply then it's super expensive but"}, {"start": 2268.2400000000002, "end": 2272.44, "text": " then they say so however for this case the all getter will communicate B times"}, {"start": 2272.44, "end": 2276.86, "text": " s times V elements B is the batch size s is the sequence length and B is the"}, {"start": 2276.86, "end": 2280.88, "text": " vocab size which is huge due to the vocab size being large to reduce the"}, {"start": 2280.88, "end": 2285.56, "text": " communication size we fuse the output of the parallel gem with the cross-entropy"}, {"start": 2285.56, "end": 2289.88, "text": " loss which reduces the dimensions to B times s communicating scalar losses"}, {"start": 2289.88, "end": 2293.7200000000003, "text": " instead of logits is a huge reduction in communication that improves the"}, {"start": 2293.7200000000003, "end": 2299.2000000000003, "text": " efficiency of our model parallel approach okay and finally they they"}, {"start": 2299.2000000000003, "end": 2304.84, "text": " mentioned this so those are the almost all the pieces that make the transformer"}, {"start": 2304.84, "end": 2307.96, "text": " but there is also the residual connections the layer norms all of that"}, {"start": 2307.96, "end": 2312.28, "text": " so let me just mention that briefly rather than having one GPU compute part"}, {"start": 2312.28, "end": 2316.0, "text": " of the dropout layer normalization or residual connections and broadcast the"}, {"start": 2316.0, "end": 2320.2, "text": " results to other GPUs we choose to duplicate the communication across"}, {"start": 2320.2, "end": 2325.6, "text": " GPUs I encourage you to read at your own pace why this is the case but I think"}, {"start": 2325.6, "end": 2330.32, "text": " you got the gist of this of this paper already a couple more details before"}, {"start": 2330.32, "end": 2335.52, "text": " before jumping to the mixed precision training so you can understand and"}, {"start": 2335.52, "end": 2340.88, "text": " appreciate why we need that method let me just kind of go here so to train our"}, {"start": 2340.88, "end": 2345.0, "text": " models efficiently we utilize mixed precision training as I said with"}, {"start": 2345.0, "end": 2348.96, "text": " dynamic loss scaling we're gonna learn what it is and why we need it then you"}, {"start": 2348.96, "end": 2352.52, "text": " can 
see a bunch of details like how do we initialize the model how do you scale"}, {"start": 2352.52, "end": 2357.24, "text": " it you do some norm clipping blah blah blah a lot of engineering going on here"}, {"start": 2357.24, "end": 2361.44, "text": " lastly to better manage our memory footprint we utilize activation"}, {"start": 2361.44, "end": 2366.28, "text": " checkpointing after every transformer layer this is a super important idea I'm"}, {"start": 2366.28, "end": 2370.84, "text": " gonna briefly explain it here because it's used all like all the time all the"}, {"start": 2370.84, "end": 2376.16, "text": " time so activation checkpointing you'll hear it you'll hear this concept being"}, {"start": 2376.16, "end": 2380.2400000000002, "text": " thrown around all the time so the idea is the following again we have a"}, {"start": 2380.2400000000002, "end": 2387.28, "text": " transformer here okay it's a big model has a lot of layers so now you do a feed"}, {"start": 2387.28, "end": 2392.1600000000003, "text": " forward through that model okay so you do a feed forward through the model and"}, {"start": 2392.1600000000003, "end": 2396.0400000000004, "text": " you keep on collecting the activations okay so each of the layers will be"}, {"start": 2396.0400000000004, "end": 2401.28, "text": " storing the activations why well because when you do the back prop in order to"}, {"start": 2401.28, "end": 2405.1200000000003, "text": " calculate the gradients you need to have activations of each of the"}, {"start": 2405.1200000000003, "end": 2410.8, "text": " corresponding layers so you can already probably notice the problem with this"}, {"start": 2410.8, "end": 2415.6000000000004, "text": " approach you're literally storing bunch of data here here here and as you're"}, {"start": 2415.6, "end": 2420.3199999999997, "text": " progressing through the network the memory peak so the memory consumption is"}, {"start": 2420.3199999999997, "end": 2424.48, "text": " rising rising rising so the idea with the activation checkpointing is the"}, {"start": 2424.48, "end": 2427.72, "text": " following it's very simple and it's a beautiful idea it's covered in one of"}, {"start": 2427.72, "end": 2431.7599999999998, "text": " the other papers I obviously don't have enough time to cover it but if I get"}, {"start": 2431.7599999999998, "end": 2436.04, "text": " enough requests I might cover it in the future merge with some more advanced"}, {"start": 2436.04, "end": 2440.44, "text": " techniques that came after all these papers I am explaining in this video so"}, {"start": 2440.44, "end": 2445.7200000000003, "text": " did is fairly simple don't store activations for every single layer just"}, {"start": 2445.7200000000003, "end": 2450.76, "text": " checkpoint them so basically let's imagine we have four let's imagine we"}, {"start": 2450.76, "end": 2459.44, "text": " have four layers okay and instead of storing the activations for all of the"}, {"start": 2459.44, "end": 2464.94, "text": " feed forward layers inside of the block you just stored the activations at the"}, {"start": 2464.94, "end": 2469.16, "text": " at the at the end of the block so you literally just store the the ultimate"}, {"start": 2469.16, "end": 2475.6, "text": " activations okay so like suggest the activations that are at the end of the"}, {"start": 2475.6, "end": 2479.68, "text": " transformer block so just store these ones okay so you just store these ones"}, {"start": 2479.68, "end": 2483.24, "text": " and now when you're doing the back prop so 
now let's see what happens when you"}, {"start": 2483.24, "end": 2487.3599999999997, "text": " do the back prop okay so you start doing the let me change the color into maybe"}, {"start": 2487.3599999999997, "end": 2492.0, "text": " gray so we start doing the back prop here we are good we are good we're good"}, {"start": 2492.0, "end": 2496.7999999999997, "text": " now we hit this part where we don't have the activations and what we do is we"}, {"start": 2496.8, "end": 2501.32, "text": " just recompute them so now we trigger the recomputation from this checkpoint"}, {"start": 2501.32, "end": 2505.6000000000004, "text": " here okay so we trigger the recomputation from here and then we can"}, {"start": 2505.6000000000004, "end": 2510.6000000000004, "text": " continue our back prop and then we hit this part here no activations we hit the"}, {"start": 2510.6000000000004, "end": 2515.4, "text": " we hit the recomputation again and we end up with activations obviously this is"}, {"start": 2515.4, "end": 2519.5600000000004, "text": " gonna happen in a bit more clever way so while they're doing the back prop we are"}, {"start": 2519.5600000000004, "end": 2523.1600000000003, "text": " computing the we're triggering the recomputation so that we don't have to"}, {"start": 2523.16, "end": 2527.12, "text": " wait so all of those optimization details but this is the gist of it this"}, {"start": 2527.12, "end": 2530.96, "text": " is how the activation checkpointing works okay hopefully you can appreciate"}, {"start": 2530.96, "end": 2535.72, "text": " that now let me quickly walk you through a couple more details and we are done"}, {"start": 2535.72, "end": 2540.2799999999997, "text": " with this paper so our infrastructure is optimized for multi-node deep learning"}, {"start": 2540.2799999999997, "end": 2544.3199999999997, "text": " applications with three hundred gigabytes per second bandwidth between"}, {"start": 2544.3199999999997, "end": 2550.3999999999996, "text": " GPUs inside a server so that's important by this NS and VS switch and hundred"}, {"start": 2550.4, "end": 2555.0, "text": " gigabytes of interconnect bandwidth between servers okay that's important so"}, {"start": 2555.0, "end": 2561.32, "text": " we have we have 300 gigabytes per second between the GPUs and we have only hundred"}, {"start": 2561.32, "end": 2565.44, "text": " only hundred a smaller number that's that's the whole point between the"}, {"start": 2565.44, "end": 2570.62, "text": " servers so why am I mentioning this the reason being is you want when you're"}, {"start": 2570.62, "end": 2574.88, "text": " doing the metal the model of parallelism because you have all of those already"}, {"start": 2574.88, "end": 2578.36, "text": " use operations so the synchronization points you want to have those"}, {"start": 2578.36, "end": 2583.0, "text": " synchronization points happening inside of a server where the bandwidth is super"}, {"start": 2583.0, "end": 2587.96, "text": " high okay so that's why when you have a cluster where your node only has eight"}, {"start": 2587.96, "end": 2591.92, "text": " GPUs you don't want to go more than eight way model parallelism you don't"}, {"start": 2591.92, "end": 2596.1600000000003, "text": " want to go 16 because if you have 16 then you have two servers communicating"}, {"start": 2596.1600000000003, "end": 2600.88, "text": " that during the synchronization points and that sucks okay so hopefully that"}, {"start": 2600.88, "end": 2606.2400000000002, "text": " that makes sense now let me show you 
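The recompute-on-backward idea described above maps directly onto PyTorch's torch.utils.checkpoint utility; a minimal sketch assuming a list of transformer blocks (Megatron-LM ships its own variant of this):

import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedTransformer(torch.nn.Module):
    def __init__(self, blocks):
        super().__init__()
        self.blocks = torch.nn.ModuleList(blocks)

    def forward(self, x):
        for block in self.blocks:
            # Only the input of each block is kept; the block's inner activations
            # are recomputed during the backward pass instead of being stored.
            x = checkpoint(block, x)
        return x

Activation memory then shrinks roughly to the checkpoints alone, at the cost of one extra forward computation per block during backprop (the ~33% recomputation overhead quoted later in the ZeRO paper).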
some some results they achieve you can"}, {"start": 2606.24, "end": 2611.24, "text": " see here that scaling to from one to two to four to eight GPUs you can see that"}, {"start": 2611.24, "end": 2617.04, "text": " the the efficiency kind of drops but not significantly so here is the 77% for the"}, {"start": 2617.04, "end": 2621.3599999999997, "text": " eight way model parallelism and if you have if you combine it with the data"}, {"start": 2621.3599999999997, "end": 2630.4399999999996, "text": " parallel approach then you have still 74% for 512 GPUs that's huge that's huge"}, {"start": 2630.4399999999996, "end": 2634.9199999999996, "text": " I'm going to quickly explain how we combine the model parallelism with the"}, {"start": 2634.92, "end": 2639.64, "text": " data parallelism it's a very very nice visualization so quickly this is how it's"}, {"start": 2639.64, "end": 2643.48, "text": " going to look like as I said you're gonna because the servers let's say have"}, {"start": 2643.48, "end": 2647.56, "text": " eight GPUs basically you're gonna do the following thing you're gonna have"}, {"start": 2647.56, "end": 2652.52, "text": " something like this and then you're gonna slice this server into eight pieces"}, {"start": 2652.52, "end": 2657.6800000000003, "text": " because all the pieces are a single GPU and they are very densely connected and"}, {"start": 2657.6800000000003, "end": 2662.84, "text": " so you literally do you have one eighth of the of the model weights here one"}, {"start": 2662.84, "end": 2667.52, "text": " eighth of the model weights here etc etc and now remember the data parallelism"}, {"start": 2667.52, "end": 2673.1600000000003, "text": " you just replicate you copy paste these multiple times so precisely they use the"}, {"start": 2673.1600000000003, "end": 2681.48, "text": " 64 way data parallelism so you literally you're going to replicate this so that"}, {"start": 2681.48, "end": 2688.56, "text": " you have so that you end up with let me draw it like this so you have eight here"}, {"start": 2688.56, "end": 2697.96, "text": " whoops so yeah eight here and then you have eight here and you end up with 64"}, {"start": 2697.96, "end": 2703.44, "text": " times eight and that's 512 and this is gonna be your setup so you have"}, {"start": 2703.44, "end": 2709.36, "text": " literally 64 servers each of the servers has eight GPUs with high bandwidth"}, {"start": 2709.36, "end": 2714.36, "text": " internally and a bit lower bandwidth externally so internode versus"}, {"start": 2714.36, "end": 2720.28, "text": " intranode and yeah then you basically do the logic I just explained so in this"}, {"start": 2720.28, "end": 2725.04, "text": " particular example you have a huge batch you're gonna split it into 64 chunks and"}, {"start": 2725.04, "end": 2728.8, "text": " then you're gonna pass each of the chunks to one of the servers and then"}, {"start": 2728.8, "end": 2733.2000000000003, "text": " you do the logic we just saw from the Megatron LM paper with all of the"}, {"start": 2733.2000000000003, "end": 2736.96, "text": " synchronization points etc etc and then calculate the gradients and then you're"}, {"start": 2736.96, "end": 2741.6800000000003, "text": " gonna do the all reviews and update the whole cluster that's it fairly simple"}, {"start": 2741.68, "end": 2746.3999999999996, "text": " okay at least conceptually one thing I want to the last thing I want to mention"}, {"start": 2746.3999999999996, "end": 2751.96, "text": " from this paper is it came before GPT-3 was 
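A sketch of how the 8-way model parallel times 64-way data parallel layout described above can be expressed with torch.distributed process groups; the constants and helper name here are illustrative assumptions, not the actual Megatron-LM setup code:

import torch.distributed as dist

WORLD_SIZE = 512                   # 64 servers x 8 GPUs
TP_SIZE = 8                        # tensor/model parallel group = one server
DP_SIZE = WORLD_SIZE // TP_SIZE    # 64 data parallel replicas

def build_parallel_groups(rank):
    tp_group, dp_group = None, None
    # Ranks 0-7, 8-15, ... sit on the same server: high-bandwidth model parallel groups.
    for start in range(0, WORLD_SIZE, TP_SIZE):
        group = dist.new_group(list(range(start, start + TP_SIZE)))
        if start <= rank < start + TP_SIZE:
            tp_group = group
    # Ranks holding the same model shard on different servers form data parallel groups
    # (gradient all-reduces happen over the slower inter-server links).
    for offset in range(TP_SIZE):
        group = dist.new_group(list(range(offset, WORLD_SIZE, TP_SIZE)))
        if rank % TP_SIZE == offset:
            dp_group = group
    return tp_group, dp_group

Each global batch is then split into 64 chunks, one per data parallel replica; within a replica the eight GPUs cooperate on every layer via the all-reduces from the Megatron-LM section, and gradients are finally all-reduced across the data parallel group.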
published which is probably the most"}, {"start": 2751.96, "end": 2757.08, "text": " famous language model and the ML model in general and they already noticed the"}, {"start": 2757.08, "end": 2762.44, "text": " trend everybody knew already that scaling is leading to better and better"}, {"start": 2762.44, "end": 2767.8799999999997, "text": " performance so here by performance I mean lower perplexity for example so you"}, {"start": 2767.88, "end": 2773.12, "text": " can see that 8.3 billion model has lower perplexity than the 2.5 billion etc etc"}, {"start": 2773.12, "end": 2776.98, "text": " they mentioned that multiple times as the model size increases the validation"}, {"start": 2776.98, "end": 2780.1600000000003, "text": " perplexity decreases and reaches a validation perplexity of blah blah blah"}, {"start": 2780.1600000000003, "end": 2783.96, "text": " we observed the trend that increasing model size also leads to lower perplexity"}, {"start": 2783.96, "end": 2787.6, "text": " blah blah blah recently researchers from Microsoft in collaboration with Nvidia"}, {"start": 2787.6, "end": 2792.6400000000003, "text": " trained a 17 billion parameter GPT-2 called Turing-NLG using Megatron and"}, {"start": 2792.6400000000003, "end": 2795.44, "text": " showed that the accuracy is further improved as they scale the model"}, {"start": 2795.44, "end": 2799.64, "text": " highlighting the value of larger models it was super obvious that somebody's"}, {"start": 2799.64, "end": 2805.4, "text": " gonna do it it just happened that OpenAI was the first one to do it and yeah"}, {"start": 2805.4, "end": 2810.36, "text": " kind of worth mentioning a historical note I guess yeah another"}, {"start": 2810.36, "end": 2814.08, "text": " contribution of this paper is they show that you kind of have to reconfigure"}, {"start": 2814.08, "end": 2817.84, "text": " where you put your layer norms so that you can train much bigger"}, {"start": 2817.84, "end": 2821.48, "text": " BERTs otherwise you have these instabilities you can see how the"}, {"start": 2821.48, "end": 2826.16, "text": " model loss explodes if you just use the default BERT architecture but"}, {"start": 2826.16, "end": 2831.2400000000002, "text": " that's not important for our like well it is you kind of have to be aware that"}, {"start": 2831.2400000000002, "end": 2837.08, "text": " everything matters that even the model architecture is very sensitive if you"}, {"start": 2837.08, "end": 2840.48, "text": " change or rearrange something all of a sudden you cannot train the bigger"}, {"start": 2840.48, "end": 2845.56, "text": " models and so on and so forth okay guys now on to the mixed precision training"}, {"start": 2845.56, "end": 2848.96, "text": " I'm gonna quickly walk you through this one and then we're gonna dig into the"}, {"start": 2848.96, "end": 2853.84, "text": " zero paper one important quick note before this all the three"}, {"start": 2853.84, "end": 2858.52, "text": " approaches I showed you the pipeline parallelism the data parallelism and the"}, {"start": 2858.52, "end": 2862.52, "text": " model or the tensor parallelism all of them can be combined together they are"}, {"start": 2862.52, "end": 2868.92, "text": " complementary and I think it was branded as 3D parallelism so when you combine"}, {"start": 2868.92, "end": 2872.88, "text": " all of these three you can achieve like huge scales and then when you combine it"}, {"start": 2872.88, "end": 2876.56, "text": " with mixed precision plus the
zero optimizer which you're gonna see in a"}, {"start": 2876.56, "end": 2881.64, "text": " couple of minutes then you can achieve truly astounding scales okay that's"}, {"start": 2881.64, "end": 2886.56, "text": " that's it now let's dig into this paper so a couple of contributions from this"}, {"start": 2886.56, "end": 2891.2599999999998, "text": " paper so let's gonna read it firstly we recommend maintaining a single"}, {"start": 2891.2599999999998, "end": 2894.52, "text": " precision copy of weights that accumulates the gradients after each"}, {"start": 2894.52, "end": 2898.0, "text": " optimizer step this copy is rounded to half precision for the forward and back"}, {"start": 2898.0, "end": 2901.64, "text": " prop so this is the first part so you're gonna literally have three components"}, {"start": 2901.64, "end": 2906.04, "text": " that I want to explain and then we're done the second part is we propose loss"}, {"start": 2906.04, "end": 2910.04, "text": " scaling we'll see what that concept means to preserve gradient values with"}, {"start": 2910.04, "end": 2915.48, "text": " small small magnitudes thirdly we use half precision arithmetic that accumulates"}, {"start": 2915.48, "end": 2921.7599999999998, "text": " into single precision outputs that's FP 32 versus FP 16 which are converted to"}, {"start": 2921.7599999999998, "end": 2926.68, "text": " half precision before storing to memory okay so the idea is fairly simple usually"}, {"start": 2926.68, "end": 2932.04, "text": " when you train neural networks historically people use the FP 32 format"}, {"start": 2932.04, "end": 2936.6, "text": " that means a single number literally needs 32 bits instead of that"}, {"start": 2936.6, "end": 2941.56, "text": " people have shown that you do not need 32 bits you can have lower"}, {"start": 2941.56, "end": 2947.4, "text": " accuracy for a neural network training so like 16 works and that's it by just"}, {"start": 2947.4, "end": 2952.68, "text": " halving the amount of memory that a single"}, {"start": 2952.68, "end": 2957.2799999999997, "text": " number takes you literally can save a bunch of memory and still"}, {"start": 2957.28, "end": 2962.1200000000003, "text": " they show in this paper keep the same accuracy the same performance of the"}, {"start": 2962.1200000000003, "end": 2967.96, "text": " model as the baseline which uses the FP 32 that's very cool okay so why does it"}, {"start": 2967.96, "end": 2972.88, "text": " matter not only do we save the memory it's also faster on the modern hardware"}, {"start": 2972.88, "end": 2979.76, "text": " so half precision math throughput in recent GPUs is 2x to 8x higher than"}, {"start": 2979.76, "end": 2983.6000000000004, "text": " single precision in addition to speed improvements reduce precision formats"}, {"start": 2983.6000000000004, "end": 2986.1600000000003, "text": " also reduce the amount of memory required for training that's obvious I"}, {"start": 2986.16, "end": 2991.3999999999996, "text": " already said that one okay okay let's see what's going on so specifically we"}, {"start": 2991.3999999999996, "end": 2996.3199999999997, "text": " train various neural networks using the IEEE half precision format let me"}, {"start": 2996.3199999999997, "end": 3003.04, "text": " quickly show you I'll link these somewhere below in the video but this is"}, {"start": 3003.04, "end": 3007.12, "text": " the format they're referring to so you have the"}, {"start": 3007.12, "end": 3010.96, "text": " exponent has five bits the fraction has ten bits and then there is a sign bit"}, {"start": 3010.96, "end": 3017.6, "text": " and this is how you represent numbers like the float 32 obviously"}, {"start": 3017.6, "end": 3023.2, "text": " has a bigger exponent and bigger fraction and well while I'm here"}, {"start": 3023.2, "end": 3027.7200000000003, "text": " explaining the formats let me quickly mention that the bfloat16 or the brain"}, {"start": 3027.7200000000003, "end": 3034.48, "text": " float because it was originally invented at the Google Brain AI lab and you can see"}, {"start": 3034.48, "end": 3038.48, "text": " what's the difference the bfloat has much bigger exponent and the main"}, {"start": 3038.48, "end": 3043.48, "text": " insight here is for neural network training you care about the range you"}, {"start": 3043.48, "end": 3047.68, "text": " don't care about the actual accuracy and the significand or mantissa is what"}, {"start": 3047.68, "end": 3052.92, "text": " determines how fine-grained your numbers are whereas the exponent"}, {"start": 3052.92, "end": 3058.52, "text": " determines how big can you go so obviously with bfloat you can have much"}, {"start": 3058.52, "end": 3062.36, "text": " bigger and much smaller numbers without getting the overflow the infinities or"}, {"start": 3062.36, "end": 3068.04, "text": " the NaNs or whatnot and so basically TPUs natively support this"}, {"start": 3068.04, "end": 3073.56, "text": " bfloat format and like people have shown like when you take a look at the"}, {"start": 3073.56, "end": 3079.2599999999998, "text": " problems that for example Boris Dayma training DALL-E mini had compared to"}, {"start": 3079.2599999999998, "end": 3085.4, "text": " training on GPUs TPUs basically are much more robust to these divergences of"}, {"start": 3085.4, "end": 3090.36, "text": " losses because of the bfloat format so that's basically my"}, {"start": 3090.36, "end": 3094.46, "text": " understanding I might be wrong here but I wanted to quickly mention"}, {"start": 3094.46, "end": 3099.12, "text": " all of these formats now let's go back to the paper okay so now that you know"}, {"start": 3099.12, "end": 3104.2400000000002, "text": " that let's see what's going on so we have to store the master weights they"}, {"start": 3104.2400000000002, "end": 3109.16, "text": " showed for most models this is kind of necessary so they say here the need for"}, {"start": 3109.16, "end": 3114.44, "text": " FP32 master weights is not universal but for most models they trained they"}, {"start": 3114.44, "end": 3121.52, "text": " did need to do this so here is the idea you store the weights as FP32 but then"}, {"start": 3121.52, "end": 3125.8, "text": " when you do the forward prop the back prop you do everything in half precision"}, {"start": 3125.8, "end": 3129.24, "text": " so you literally do this float to half operation so you literally kind of"}, {"start": 3129.24, "end": 3134.88, "text": " convert the FP32 to FP16 then you do the feed forward you can see here you start"}, {"start": 3134.88, "end": 3139.0, "text": " with the activations which are also FP16 so you can kind of always make sure"}, {"start": 3139.0, "end": 3143.72, "text": " that's the case because you control your data so let's say the input"}, {"start": 3143.72, "end": 3149.04, "text": " is your image you can literally feed the image as FP16 and then combine that with"},
3149.04, "end": 3153.64, "text": " FP16 weights do the forward pass you get the activations and then in the"}, {"start": 3153.64, "end": 3157.36, "text": " backward you do the same thing the weights are FP16 you calculate the"}, {"start": 3157.36, "end": 3161.72, "text": " activation grads from the activation grads you get the weight grads and all"}, {"start": 3161.72, "end": 3166.64, "text": " of those are in FP16 and because of that you benefit from like a smaller memory"}, {"start": 3166.64, "end": 3174.6, "text": " like a blueprint as well as the well higher throughput as we saw that some"}, {"start": 3174.6, "end": 3179.2, "text": " modern GPUs basically have higher throughput for FP16 compared to FP32"}, {"start": 3179.2, "end": 3183.6, "text": " that's that's the bottom line okay so this this is a schematic how everything"}, {"start": 3183.6, "end": 3188.48, "text": " works so they now mention one explanation why they need the FP32"}, {"start": 3188.48, "end": 3193.18, "text": " master weights one explanation is that updates weight gradients multiplied by"}, {"start": 3193.18, "end": 3198.7799999999997, "text": " the learning rate become too small to be represented in FP16 any value whose"}, {"start": 3198.7799999999997, "end": 3204.36, "text": " magnitude is smaller than 2 raised to the power of minus 24 becomes 0 in FP"}, {"start": 3204.36, "end": 3210.76, "text": " 16 and by the way you might be confused why 24 and not 14 I'm not gonna dig into"}, {"start": 3210.76, "end": 3218.0, "text": " that but just Google for the sub normalized so sub normal or something"}, {"start": 3218.0, "end": 3227.6, "text": " like that sub normal values just open up the the the float 16 and float 32 Viki"}, {"start": 3227.6, "end": 3232.6800000000003, "text": " pages I'm gonna link those down below and you'll see what those are so we can"}, {"start": 3232.68, "end": 3237.08, "text": " see in figure 2b that approximately 5% of weight gradient values have exponents"}, {"start": 3237.08, "end": 3242.7999999999997, "text": " smaller than minus 24 okay let me show you what I mean by that so let's go to"}, {"start": 3242.7999999999997, "end": 3249.0, "text": " figure 2b you can see here this is the training when the model is trained in in"}, {"start": 3249.0, "end": 3254.2599999999998, "text": " FP32 and you can see that most of the gradients here fall below this"}, {"start": 3254.2599999999998, "end": 3260.72, "text": " magic red line the the minus 24 which is the where the representation range of"}, {"start": 3260.72, "end": 3266.08, "text": " the FP 16 ends and so that means that all of these are gonna be rounded to 0"}, {"start": 3266.08, "end": 3269.9199999999996, "text": " and you lose a lot of information during training and because of that the"}, {"start": 3269.9199999999996, "end": 3275.6, "text": " accuracy the final accuracy for model may or may not drop okay so that's one"}, {"start": 3275.6, "end": 3284.8799999999997, "text": " kind of idea why why you need FP 32 there is a couple of other explanations"}, {"start": 3284.8799999999997, "end": 3289.16, "text": " here a bit more complex to explain here I'm gonna I'm gonna kind of skip it for"}, {"start": 3289.16, "end": 3293.56, "text": " now so worth mentioning now that you have to store this additional copy"}, {"start": 3293.56, "end": 3297.08, "text": " doesn't that blow up the memory and here to explain that's not the case"}, {"start": 3297.08, "end": 3301.0, "text": " because the activations are what matters so even though maintaining an 
additional"}, {"start": 3301.0, "end": 3304.3199999999997, "text": " copy of weights increases the memory requirements for the weights by 50%"}, {"start": 3304.3199999999997, "end": 3309.2799999999997, "text": " compared with single precision training impact on overall memory usage is much"}, {"start": 3309.2799999999997, "end": 3313.44, "text": " smaller for training memory consumption is dominated by activations due to"}, {"start": 3313.44, "end": 3318.56, "text": " larger batch sizes and activations of each layer being saved for reuse in the"}, {"start": 3318.56, "end": 3323.96, "text": " back prop pass since activations are also stored in half precision format the"}, {"start": 3323.96, "end": 3327.68, "text": " overall memory consumption for training deep neural networks is roughly halved"}, {"start": 3327.68, "end": 3332.12, "text": " okay they are not obviously taking into account activation checkpointing they"}, {"start": 3332.12, "end": 3336.08, "text": " just assume that you have to store all of the activations in that case this"}, {"start": 3336.08, "end": 3343.04, "text": " thing that they've just said here holds but yeah okay now let's go into the law"}, {"start": 3343.04, "end": 3346.7599999999998, "text": " scaling this is a second important concept you need to get the mixed"}, {"start": 3346.76, "end": 3351.44, "text": " precision training to work know that much of the fp16 representable range was left"}, {"start": 3351.44, "end": 3356.36, "text": " unused while many values were below the minimum representable range and became"}, {"start": 3356.36, "end": 3360.0600000000004, "text": " zeros scaling up the gradients will shift them to occupy more of the"}, {"start": 3360.0600000000004, "end": 3364.0400000000004, "text": " representable range and preserve values that are otherwise lost to zeros this"}, {"start": 3364.0400000000004, "end": 3367.2400000000002, "text": " particular network diverges when gradients are not scaled but scaling"}, {"start": 3367.2400000000002, "end": 3371.0, "text": " them by a factor of 8 increasing the exponents by 3 is sufficient to make"}, {"start": 3371.0, "end": 3377.56, "text": " the accuracy achieved with fp32 training this suggests that activation gradient"}, {"start": 3377.56, "end": 3382.88, "text": " below mine to raise to the power of minus 27 in magnitude were irrelevant to"}, {"start": 3382.88, "end": 3387.2, "text": " the training of this model but values in this range were important to preserve"}, {"start": 3387.2, "end": 3392.88, "text": " okay let me break this down for you okay so here is a histogram of basically"}, {"start": 3392.88, "end": 3398.3, "text": " activation gradient values you can see what I said it's shifted outside of the"}, {"start": 3398.3, "end": 3403.5800000000004, "text": " range that fp16 basically support so this is the range that fp16 loves okay"}, {"start": 3403.5800000000004, "end": 3409.04, "text": " this is this subnormal or the denormalized range so from minus 24 to"}, {"start": 3409.04, "end": 3414.2000000000003, "text": " minus 14 you can see that most of these values are outside of the range and if"}, {"start": 3414.2000000000003, "end": 3421.2000000000003, "text": " you multiply the gradients by 8 you literally shift all of these rightwards"}, {"start": 3421.2000000000003, "end": 3427.48, "text": " or equivalently we move the red line here and now you have all of these being"}, {"start": 3427.48, "end": 3432.96, "text": " representable in fp16 thus we will not round them to zero we will not lose 
this"}, {"start": 3432.96, "end": 3439.36, "text": " information and it turned out they say here blah blah blah that that this"}, {"start": 3439.36, "end": 3442.96, "text": " particular range here and you can see there is a lot of data here was very"}, {"start": 3442.96, "end": 3448.04, "text": " important to preserve and so scaling by 8 for this particular model it doesn't"}, {"start": 3448.04, "end": 3452.08, "text": " even matter what the model is just the concept is important it kind of saved"}, {"start": 3452.08, "end": 3457.08, "text": " the training okay so one efficient way to shift the gradient values into fp16"}, {"start": 3457.08, "end": 3461.96, "text": " representable range is to scale the last value computed in the forward pass prior"}, {"start": 3461.96, "end": 3466.52, "text": " to starting back propagation by chain rule back propagation ensures that all"}, {"start": 3466.52, "end": 3470.4, "text": " gradient values are scaled by the same amount so that's a easy way how you can"}, {"start": 3470.4, "end": 3474.08, "text": " scale up all these gradients you just take the final loss you scale it up by"}, {"start": 3474.08, "end": 3478.44, "text": " let's say 8 and by doing that once you start doing the back prop because how"}, {"start": 3478.44, "end": 3482.56, "text": " chain rule works you literally multiply all of the gradients by 8 and thus you'll"}, {"start": 3482.56, "end": 3487.72, "text": " be shifting this the gradients into the representable range this is the yeah"}, {"start": 3487.72, "end": 3491.7999999999997, "text": " this is the main the main intuition I want you to take out of this paper and"}, {"start": 3491.7999999999997, "end": 3497.44, "text": " here is the here's the third technique they do to maintain model accuracy we"}, {"start": 3497.44, "end": 3501.52, "text": " found that some networks require that the fp16 vector dot product accumulates"}, {"start": 3501.52, "end": 3506.58, "text": " the partial products into an fp32 value which is converted to fp16 before"}, {"start": 3506.58, "end": 3511.36, "text": " writing to memory without this accumulation into fp32 some fp16 models"}, {"start": 3511.36, "end": 3515.88, "text": " did not match the accuracy of the baseline models okay also for large"}, {"start": 3515.88, "end": 3519.92, "text": " reductions for example sums across elements of a vector they should all be"}, {"start": 3519.92, "end": 3524.28, "text": " carried out in fp32 such reductions mostly come up in batch normalization"}, {"start": 3524.28, "end": 3529.1200000000003, "text": " layers when accumulating statistics and softmax layers as well so both of the"}, {"start": 3529.1200000000003, "end": 3532.88, "text": " layer types in our implementation still read and write fp16 tensors from memory"}, {"start": 3532.88, "end": 3537.38, "text": " performing the arithmetic in fp32 this did not slow down the training process"}, {"start": 3537.38, "end": 3540.3, "text": " since these layers are memory bandwidth limited and not sensitive to"}, {"start": 3540.3, "end": 3546.4, "text": " arithmetic speed okay again just a additional detail that needs to be"}, {"start": 3546.4, "end": 3551.6400000000003, "text": " taken care of basically there are three categories so they say here this is this"}, {"start": 3551.6400000000003, "end": 3556.5600000000004, "text": " cannot relevant by and by and large neural network arithmetic falls into"}, {"start": 3556.5600000000004, "end": 3560.84, "text": " three categories so the vector dot products the reductions and 
the point"}, {"start": 3560.84, "end": 3564.7200000000003, "text": " wise operations such as the activation functions and that they just mentioned"}, {"start": 3564.7200000000003, "end": 3569.6800000000003, "text": " for these two so they mentioned that for vector dot products and for"}, {"start": 3569.68, "end": 3575.2599999999998, "text": " reductions you sometimes have to convert to fp32 and do the operations there such"}, {"start": 3575.2599999999998, "end": 3580.0, "text": " that you can preserve the accuracy that's it guys that's the idea behind"}, {"start": 3580.0, "end": 3584.02, "text": " mixed precision training now let me show you the results they show that"}, {"start": 3584.02, "end": 3588.68, "text": " literally baseline they compare they are almost always even better than the"}, {"start": 3588.68, "end": 3592.56, "text": " baselines so here you can if you just check out these numbers at your own base"}, {"start": 3592.56, "end": 3597.24, "text": " you'll see that the numbers are always a bit better so it maybe works as a like a"}, {"start": 3597.24, "end": 3604.7599999999998, "text": " regularizer almost to to to do the training in fp16 for some models as I"}, {"start": 3604.7599999999998, "end": 3608.08, "text": " mentioned the last skill and technique was not required for successful mix"}, {"start": 3608.08, "end": 3612.4799999999996, "text": " precision training of these networks but for some others it was very important"}, {"start": 3612.4799999999996, "end": 3617.68, "text": " let me show you one so here you can see that mixed precision training when the"}, {"start": 3617.68, "end": 3623.56, "text": " last scale is one diverges when the last scale is 128 it does not diverge and"}, {"start": 3623.56, "end": 3627.08, "text": " they literally show for every four different classes of models like this"}, {"start": 3627.08, "end": 3633.24, "text": " again here language models machine translation they always match the"}, {"start": 3633.24, "end": 3637.68, "text": " baseline when it comes to accuracy so that's awesome and the final idea least"}, {"start": 3637.68, "end": 3642.7999999999997, "text": " last but not least definitely not least because this is such an exciting paper"}, {"start": 3642.7999999999997, "end": 3650.1, "text": " this paper is basically introduces deep speed library you might have heard of it"}, {"start": 3650.1, "end": 3654.96, "text": " if you've seen if you've went through any of the popular LLMs most of them use"}, {"start": 3654.96, "end": 3662.3399999999997, "text": " deep speed illutri I know they use deep speed bloom use deep speed opt 175 be"}, {"start": 3662.3399999999997, "end": 3665.7999999999997, "text": " that the meta folks I'm not sure whether they used it but yeah so it's a"}, {"start": 3665.7999999999997, "end": 3670.3199999999997, "text": " literally a paper from Microsoft it's called zero memory optimizations toward"}, {"start": 3670.3199999999997, "end": 3675.48, "text": " training trillion parameter models okay I love this paper let's go through it"}, {"start": 3675.48, "end": 3681.84, "text": " step by step let's think step by step zero redundancy optimizer that's that's"}, {"start": 3681.84, "end": 3685.8, "text": " what the acronym stands for we'll see why zero redundancy they literally have"}, {"start": 3685.8, "end": 3690.08, "text": " well they reduce all of the redundancy when it comes to model states and"}, {"start": 3690.08, "end": 3694.2, "text": " residual states we're gonna see what those are but that's the idea zero"}, 
{"start": 3694.2, "end": 3698.52, "text": " eliminates memory redundancies in data and model parallel training zero has the"}, {"start": 3698.52, "end": 3702.96, "text": " potential to scale beyond one to them parameters using today's hardware zero"}, {"start": 3702.96, "end": 3707.88, "text": " can train large models of up to 13 billion parameters for example larger"}, {"start": 3707.88, "end": 3712.92, "text": " than Megatron GPT 8.3 B so that's a model that was trained in the Megatron LM"}, {"start": 3712.92, "end": 3717.6, "text": " paper the first paper I covered and the t5 the Google's model that was 11"}, {"start": 3717.6, "end": 3722.88, "text": " billion parameter big without even requiring model parallelism so that's a"}, {"start": 3722.88, "end": 3727.7200000000003, "text": " big breakthrough because now with with this set of optimizations you as a"}, {"start": 3727.72, "end": 3733.6, "text": " practitioner can literally train models as big as 13 B if you have just a single"}, {"start": 3733.6, "end": 3740.52, "text": " but very good albeit a very good like GPU but that's amazing and obviously"}, {"start": 3740.52, "end": 3746.24, "text": " when you use the actual optimizations you can you can scale to much bigger"}, {"start": 3746.24, "end": 3750.3999999999996, "text": " sizes they show they have shown indeed that you can scale up to 1 trillion"}, {"start": 3750.3999999999996, "end": 3755.16, "text": " parameters with a small caveat and that's that it becomes very slow I mean"}, {"start": 3755.16, "end": 3758.7999999999997, "text": " very slow you literally for some of the models they've shown you need a year to"}, {"start": 3758.7999999999997, "end": 3764.96, "text": " train it so that's practically not not really practical but yeah they train a"}, {"start": 3764.96, "end": 3769.0, "text": " model of hundred something billion so that's that's kind of cool okay so let's"}, {"start": 3769.0, "end": 3772.7999999999997, "text": " let's dig into it MP so the model parallelism splits the"}, {"start": 3772.7999999999997, "end": 3776.2, "text": " model vertically partitioning the computation and parameters in each layer"}, {"start": 3776.2, "end": 3779.8399999999997, "text": " across multiple devices requiring significant communication between each"}, {"start": 3779.8399999999997, "end": 3784.8399999999997, "text": " layer okay as a result they work well within a single node I did mention this"}, {"start": 3784.84, "end": 3789.8, "text": " where the inter GPU communication bandwidth is high but the efficiency"}, {"start": 3789.8, "end": 3795.6400000000003, "text": " degree degrades quickly beyond a single node we tested a 40 B parameter model"}, {"start": 3795.6400000000003, "end": 3803.1200000000003, "text": " using Megatron LM across 2d GX 2 nodes and observe about 5 tera flops per V 100"}, {"start": 3803.1200000000003, "end": 3808.08, "text": " GPU less than 5% of hardware peak so you can see it drops dramatically as soon as"}, {"start": 3808.08, "end": 3815.2, "text": " you start doing the model parallelism across multiple nodes okay then they say"}, {"start": 3815.2, "end": 3819.72, "text": " this for large models the majority of the memory is occupied by model states"}, {"start": 3819.72, "end": 3823.72, "text": " this is an important piece of terminology they're gonna be reusing it"}, {"start": 3823.72, "end": 3828.3199999999997, "text": " throughout the paper so let's kind of stress that so the model states which"}, {"start": 3828.3199999999997, "end": 3833.24, "text": " 
include the optimizer states such as momentum and variances in atom"}, {"start": 3833.24, "end": 3840.64, "text": " gradients and parameters so they consider those to be the model states the"}, {"start": 3840.64, "end": 3845.7999999999997, "text": " remaining memory is consumed by activation temporary buffers and"}, {"start": 3845.7999999999997, "end": 3851.7599999999998, "text": " unusable whoops some glitch unusable fragmented memory which we refer to"}, {"start": 3851.7599999999998, "end": 3856.8399999999997, "text": " collectively as residual states so again two pieces of terminology new"}, {"start": 3856.8399999999997, "end": 3862.72, "text": " terminology here we have residual states and we have model states and they're"}, {"start": 3862.72, "end": 3869.24, "text": " gonna address each one of those separately okay so as your DPU removes"}, {"start": 3869.24, "end": 3872.9599999999996, "text": " the memory state redundancies across data parallel processes by partitioning"}, {"start": 3872.9599999999996, "end": 3877.4399999999996, "text": " the model states instead of replicating them so that's the main idea that's"}, {"start": 3877.4399999999996, "end": 3881.3199999999997, "text": " literally the main idea of this paper they are partitioning instead of"}, {"start": 3881.3199999999997, "end": 3886.2, "text": " replicating I mean simple conceptually but there is a lot of work that goes"}, {"start": 3886.2, "end": 3893.2799999999997, "text": " into making this to work efficiently that's important so I'm gonna show you"}, {"start": 3893.2799999999997, "end": 3896.52, "text": " the diagram I'm not gonna explain it yet what this means I'm gonna see in a"}, {"start": 3896.52, "end": 3901.52, "text": " couple of seconds so they have a couple of optimizations optimizer state"}, {"start": 3901.52, "end": 3904.68, "text": " partitioning so when you partition the optimizer states instead of replicating"}, {"start": 3904.68, "end": 3908.3199999999997, "text": " it then you can do the gradient partitioning and then you can also do"}, {"start": 3908.3199999999997, "end": 3912.4399999999996, "text": " the parameter partition and they show that the memory reduction is linear with"}, {"start": 3912.44, "end": 3918.04, "text": " DP degree so that means the number of devices you use to replicate your models"}, {"start": 3918.04, "end": 3923.44, "text": " across when you do the data parallelism for example splitting across 64 GPUs"}, {"start": 3923.44, "end": 3929.92, "text": " will yield a 64 X memory reduction is huge there is a modest 50% increase in"}, {"start": 3929.92, "end": 3934.04, "text": " communication volume though okay with all three stages enabled zero can train a"}, {"start": 3934.04, "end": 3942.32, "text": " Trillium parameter model on just 1024 Nvidia GPUs which is crazy okay this is"}, {"start": 3942.32, "end": 3946.48, "text": " the main diagram let me first walk you through that this piece of text here and"}, {"start": 3946.48, "end": 3951.68, "text": " then I'm gonna basically yeah jump into explaining the dire for activations"}, {"start": 3951.68, "end": 3954.84, "text": " stored from forward pass in order to perform regular pass we noticed"}, {"start": 3954.84, "end": 3958.92, "text": " checkpointing helps I did explain activation checkpointing maybe half an"}, {"start": 3958.92, "end": 3964.4, "text": " hour ago but not sufficient for large models thus zero are optimizes"}, {"start": 3964.4, "end": 3968.36, "text": " activation memory by identifying and removing the activation 
replication in"}, {"start": 3968.36, "end": 3972.64, "text": " existing empty approaches through activation partition again partitioning"}, {"start": 3972.64, "end": 3976.8, "text": " partitioning it also offloads activations to CPU when appropriate okay"}, {"start": 3976.8, "end": 3982.16, "text": " that's another detail that can be done okay zero DP and zero are combined"}, {"start": 3982.16, "end": 3985.92, "text": " together forms a powerful system of memory optimizations for DL training"}, {"start": 3985.92, "end": 3990.96, "text": " that we collectively refer to as zero again just another piece of terminology"}, {"start": 3990.96, "end": 3995.7200000000003, "text": " if you ever start playing with deep speed this might be useful to know okay"}, {"start": 3995.7200000000003, "end": 4002.84, "text": " so let's see what's going on here is how the that the opera the baseline is how"}, {"start": 4002.84, "end": 4011.08, "text": " your classical change the color here this is how your data parallelism works"}, {"start": 4011.08, "end": 4017.84, "text": " you have three types of basically what you have parameters radiance and"}, {"start": 4017.84, "end": 4022.6, "text": " optimizer states you can see that optimizer states take that the biggest"}, {"start": 4022.6, "end": 4028.4, "text": " part of the memory and you can see across when you do the data parallelism I did"}, {"start": 4028.4, "end": 4032.72, "text": " mention that we literally replicate all of these you copy paste them and that's"}, {"start": 4032.72, "end": 4037.88, "text": " super inefficient okay you can see that the memory consumed is 2 plus 2 plus K"}, {"start": 4037.88, "end": 4042.44, "text": " times psi I'm going to explain what these are in a second and for I think"}, {"start": 4042.44, "end": 4047.6400000000003, "text": " they're using some GPT model here whatnot they show that you need 120"}, {"start": 4047.6400000000003, "end": 4054.0, "text": " gigabytes of memory per device that's huge so psi is the number of parameters"}, {"start": 4054.0, "end": 4058.36, "text": " I think it's 1.5 billion in this particular example they're using the"}, {"start": 4058.36, "end": 4065.52, "text": " the GPT model and the GPT-2 2 is because you use two bytes when you use mixed"}, {"start": 4065.52, "end": 4070.8, "text": " precision training you use two bytes for weights you use two bytes for gradients"}, {"start": 4070.8, "end": 4076.6, "text": " so that's why you have 2 plus 2 and then K is equal to 12 because these are the"}, {"start": 4076.6, "end": 4081.9, "text": " optimizer states because we'll see the calculation a bit later but for Adam K"}, {"start": 4081.9, "end": 4087.16, "text": " equals 12 and so you have 16 times 1.5 B you end up with with I guess this number"}, {"start": 4087.16, "end": 4095.48, "text": " is that correct just see what I messed something up at 1.5 times 16 wait I'm"}, {"start": 4095.48, "end": 4102.92, "text": " not sure yeah we'll see why yeah well sorry it's 7.5 B it says here okay that's"}, {"start": 4102.92, "end": 4107.8, "text": " why we get 100 now the first optimization is instead of having a"}, {"start": 4107.8, "end": 4112.32, "text": " copy of the optimizer states everywhere you instead partition it so you store"}, {"start": 4112.32, "end": 4117.48, "text": " one piece of the optimizer states on one GPU you store the second part here you"}, {"start": 4117.48, "end": 4124.36, "text": " store the the the the nth part here and you can see so sorry we have NGP's here"}, {"start": 4124.36, "end": 
4128.92, "text": " so that's why we are gonna have ultimately when you cannot combine all"}, {"start": 4128.92, "end": 4134.44, "text": " of them together you'll have still the same the same information as here right"}, {"start": 4134.44, "end": 4139.04, "text": " and then you can do this same thing for everything else for gradients you can"}, {"start": 4139.04, "end": 4142.679999999999, "text": " partition them for parameters you can partition them and you can see how the"}, {"start": 4142.679999999999, "end": 4147.04, "text": " memory is slowly decreasing so here we have 2 psi plus 2 psi because we are"}, {"start": 4147.04, "end": 4153.299999999999, "text": " not doing the the parameter and gradient optimization but we do save up on the"}, {"start": 4153.3, "end": 4158.96, "text": " optimizer state and that's why we have K times psi divided by ND where ND is the"}, {"start": 4158.96, "end": 4163.08, "text": " number of devices in your data parallel setup it's gonna be 64 in in their"}, {"start": 4163.08, "end": 4167.16, "text": " particular case and you can see how the memory dramatically reduces so you"}, {"start": 4167.16, "end": 4172.2, "text": " started with 120 gigabytes and you ultimately end up with 1.9 gigabytes"}, {"start": 4172.2, "end": 4178.2, "text": " which is crazy I mean this is an amazing optimization okay let's continue here"}, {"start": 4178.2, "end": 4183.599999999999, "text": " this is what I mentioned before the complete set of optimizations in Xero"}, {"start": 4183.599999999999, "end": 4187.08, "text": " could allow us to run models with trillion parameters on the high-end"}, {"start": 4187.08, "end": 4193.2, "text": " hardware cluster today for example with 1000 V100 GPUs however the hardware"}, {"start": 4193.2, "end": 4197.84, "text": " compute capacity is still too limited and training time can be in practically"}, {"start": 4197.84, "end": 4201.92, "text": " long over a year so always take with a grain of salt when somebody says one"}, {"start": 4201.92, "end": 4206.5599999999995, "text": " trillion you have to kind of read the fine print and understand the details of"}, {"start": 4206.56, "end": 4213.4800000000005, "text": " the method zero power is the largest language model with 17 D parameters and"}, {"start": 4213.4800000000005, "end": 4218.68, "text": " record-breaking accuracy cheering energy important to mention this was"}, {"start": 4218.68, "end": 4223.4400000000005, "text": " literally I think a month before GPT-3 was published which was 10x bigger than"}, {"start": 4223.4400000000005, "end": 4228.72, "text": " this model which is like a huge we share zero as a part of our open source DL"}, {"start": 4228.72, "end": 4234.64, "text": " training optimization library called deep speed so I did mention this library"}, {"start": 4234.64, "end": 4237.400000000001, "text": " multiple times you can check it out especially if you want to play with the"}, {"start": 4237.400000000001, "end": 4241.8, "text": " models by the way I'm making and I'm going through these papers precisely"}, {"start": 4241.8, "end": 4246.08, "text": " because over the next videos I'll be covering some of the largest ML models"}, {"start": 4246.08, "end": 4252.68, "text": " out there and do let me know if you if you if you have any any suggestions what"}, {"start": 4252.68, "end": 4257.200000000001, "text": " you want to cover or not feel free to comment down in the comments also"}, {"start": 4257.200000000001, "end": 4262.56, "text": " congrats if you if you sticked up through the 
whole video until here okay"}, {"start": 4262.56, "end": 4270.240000000001, "text": " so let's do a quick memory analysis to understand why case 12 etc etc so a 1.5"}, {"start": 4270.240000000001, "end": 4275.280000000001, "text": " billion parameter GPT-2 model requires 3 gigabytes of memory for its weights if"}, {"start": 4275.280000000001, "end": 4281.160000000001, "text": " we assume FB 16 or in 16-bit precision yet it cannot be trained on a single"}, {"start": 4281.160000000001, "end": 4287.0, "text": " GPU with 32 gigabytes of memory using tensorflow or PyTorch at least without"}, {"start": 4287.0, "end": 4290.64, "text": " any of the optimizations I assume that PyTorch and tensorflow now have a bunch"}, {"start": 4290.64, "end": 4296.200000000001, "text": " of optimizations included one may wonder where all the memory goes so now let's"}, {"start": 4296.200000000001, "end": 4299.92, "text": " see where it goes let's take Adam as a concrete example so the mixed precision"}, {"start": 4299.92, "end": 4304.360000000001, "text": " training of a model with psi parameters using Adam requires enough memory to"}, {"start": 4304.360000000001, "end": 4309.8, "text": " hold an FB 16 copy of the parameters and gradients with memory requirements of"}, {"start": 4309.8, "end": 4313.320000000001, "text": " two psi and two psi bytes respectively so we've seen that one in the table"}, {"start": 4313.320000000001, "end": 4319.4800000000005, "text": " before basically yeah so in addition it needs to hold the optimizer states and"}, {"start": 4319.48, "end": 4324.44, "text": " FB 32 copy of the parameters so we know why this is the case because of the"}, {"start": 4324.44, "end": 4329.44, "text": " mixed precision training we always need to store the master copy in FB 32 okay"}, {"start": 4329.44, "end": 4334.44, "text": " and that's kind of gonna bloat up the memory but it's necessary the momentum"}, {"start": 4334.44, "end": 4340.36, "text": " and the variance so these are Adam idiosyncrasies and then with memory"}, {"start": 4340.36, "end": 4345.879999999999, "text": " requirements of 4 psi 4 psi and 4 psi bytes so 4 because here we have FB 32"}, {"start": 4345.88, "end": 4350.2, "text": " okay and because of that if you sum them up you end up with 12 and that's why"}, {"start": 4350.2, "end": 4356.32, "text": " K equals 12 so in total this results in 16 psi bytes of memory requirements so"}, {"start": 4356.32, "end": 4362.2, "text": " yeah that's why you cannot train a GPT 1.5b on the GPU unless you have some of"}, {"start": 4362.2, "end": 4366.4400000000005, "text": " these optimizations okay now they break up the residual memory consumption blah"}, {"start": 4366.4400000000005, "end": 4369.84, "text": " blah blah they mention for some set of parameters like if you're using GPT 2"}, {"start": 4369.84, "end": 4376.6, "text": " with a sequence length of 1000 that size of 32 you require 60 gigabytes of memory"}, {"start": 4376.6, "end": 4380.72, "text": " unless you do activation checkpoint then they say here activation checkpoint here"}, {"start": 4380.72, "end": 4384.24, "text": " or activation computation or computation is a common approach to reduce the"}, {"start": 4384.24, "end": 4388.52, "text": " creation memory by approximately the square root of the total activations at"}, {"start": 4388.52, "end": 4393.72, "text": " the expense of 33% of recomputation overhead this would reduce the activation"}, {"start": 4393.72, "end": 4399.24, "text": " memory consumption of this model to about 8 
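The 2 psi + 2 psi + K psi accounting above (K = 12 for Adam: FP32 parameters, momentum and variance at 4 bytes each) and the per-stage savings from the ZeRO figure can be reproduced with a few lines of arithmetic; a back-of-the-envelope sketch, not DeepSpeed code:

def model_state_bytes(psi, nd=1, stage=0):
    # 2*psi FP16 weights + 2*psi FP16 gradients + 12*psi optimizer states (FP32 params, momentum, variance)
    weights, grads, optim = 2 * psi, 2 * psi, (4 + 4 + 4) * psi
    if stage >= 1: optim /= nd      # ZeRO-1: partition optimizer states
    if stage >= 2: grads /= nd      # ZeRO-2: also partition gradients
    if stage >= 3: weights /= nd    # ZeRO-3: also partition parameters
    return weights + grads + optim

print(model_state_bytes(7.5e9) / 1e9)                  # ~120 GB per GPU, the baseline in the figure
print(model_state_bytes(7.5e9, nd=64, stage=1) / 1e9)  # ~31.4 GB with optimizer state partitioning
print(model_state_bytes(7.5e9, nd=64, stage=3) / 1e9)  # ~1.9 GB with all three stages

{"start": 4393.72, "end": 4399.24, "text": " memory consumption of this model to about 8 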
gigabytes that's a huge saving"}, {"start": 4399.24, "end": 4404.12, "text": " okay when the size of the model is large these temporary buffer sizes are"}, {"start": 4404.12, "end": 4407.679999999999, "text": " non-trivial okay so they go on to say that there are three components to the"}, {"start": 4407.679999999999, "end": 4411.639999999999, "text": " residual number consumption so the activations the temporary buffers and"}, {"start": 4411.639999999999, "end": 4417.48, "text": " the memory fragmentation yeah I'm gonna skip this one it's not that vital but"}, {"start": 4417.48, "end": 4421.08, "text": " yeah you you kind of have to have these temporary buffers when you want to do"}, {"start": 4421.08, "end": 4428.12, "text": " some reductions etc etc yeah that's too much detail for this video okay let's"}, {"start": 4428.12, "end": 4432.92, "text": " get some hints why this works zero DP partitions the model states instead of"}, {"start": 4432.92, "end": 4438.08, "text": " replicating them and uses a dynamic communication schedule that exploits the"}, {"start": 4438.08, "end": 4442.96, "text": " intrinsically temporal nature of the model states while minimizing the"}, {"start": 4442.96, "end": 4449.32, "text": " communication volume okay so let that sink in the intrinsically temporal"}, {"start": 4449.32, "end": 4454.04, "text": " nature so let me quickly go back to the diagram here I think that's gonna be the"}, {"start": 4454.04, "end": 4459.88, "text": " easiest thing to do so the thing is let's say let's take parameters as an"}, {"start": 4459.88, "end": 4464.64, "text": " example so the the blue the blue line here literally when you're doing a"}, {"start": 4464.64, "end": 4470.04, "text": " forward pass obviously the forward pass is always at a certain point at a"}, {"start": 4470.04, "end": 4474.44, "text": " certain layer in your model and the only parameters you care about are the"}, {"start": 4474.44, "end": 4479.96, "text": " parameters that are needed to do the immediately next computation right so"}, {"start": 4479.96, "end": 4482.88, "text": " you can kind of discard everything before and you don't care about"}, {"start": 4482.88, "end": 4487.96, "text": " everything that's about to come so they literally explore that temporal thingy"}, {"start": 4487.96, "end": 4492.76, "text": " and they in a smart way communicate the parameters the gradients the optimizer"}, {"start": 4492.76, "end": 4500.28, "text": " states such that you have basically an illusion that everything is already"}, {"start": 4500.28, "end": 4506.52, "text": " present there and they kind of mask the the overhead by the well because of"}, {"start": 4506.52, "end": 4512.96, "text": " certain intrinsic latencies during the model training I'm not gonna go into"}, {"start": 4512.96, "end": 4519.360000000001, "text": " much more detail there so let's go back here let's go back here we are here I"}, {"start": 4519.360000000001, "end": 4523.160000000001, "text": " think okay a couple more things and I think we're done here the arithmetic"}, {"start": 4523.160000000001, "end": 4527.0, "text": " dense intensity ratio of the amount of computation per iteration to amount of"}, {"start": 4527.0, "end": 4532.0, "text": " activation checkpoints per iteration is very large and increases linearly with"}, {"start": 4532.0, "end": 4536.400000000001, "text": " hidden dimension making it possible to hide so this is the thing I just said"}, {"start": 4536.4, "end": 4541.96, "text": " the data movement cost for the activation 
checkpoints even when the"}, {"start": 4541.96, "end": 4546.42, "text": " bandwidth is low for very large model zero can even choose to move the"}, {"start": 4546.42, "end": 4550.96, "text": " activation partitions to the CPU memory and then you literally have like a zero"}, {"start": 4550.96, "end": 4557.92, "text": " cost of memory older and and in case you manage to mask that transitioning the"}, {"start": 4557.92, "end": 4565.08, "text": " movement of the data to the CPU then you literally like you get it for free so"}, {"start": 4565.08, "end": 4569.6, "text": " unfortunately the paper is very textual hopefully you got some insight from the"}, {"start": 4569.6, "end": 4573.48, "text": " diagrams and the thing I things I explained so far let me try and read you"}, {"start": 4573.48, "end": 4577.88, "text": " a couple more things I highlighted with which I think are insightful so for a"}, {"start": 4577.88, "end": 4582.6, "text": " DP degree again DP is standing for a data parallelism of ND we group the"}, {"start": 4582.6, "end": 4588.48, "text": " optimizer states into ND equal partitions okay such that the I data"}, {"start": 4588.48, "end": 4593.16, "text": " parallel process only updates the optimizer states corresponding to the"}, {"start": 4593.16, "end": 4598.92, "text": " I partition thus each data parallel process only needs to store and update"}, {"start": 4598.92, "end": 4605.36, "text": " one over ND of the total optimizer states and then only update one over ND"}, {"start": 4605.36, "end": 4610.76, "text": " of the parameters we perform an all gatherer across the data parallel"}, {"start": 4610.76, "end": 4616.48, "text": " process at the end of each step to get the fully updated parameters across all"}, {"start": 4616.48, "end": 4621.9, "text": " data parallel process okay so it's kind of hard to explain everything I'm going"}, {"start": 4621.9, "end": 4626.48, "text": " to show you one thing here so there's this nice Vicky page where you can see"}, {"start": 4626.48, "end": 4631.719999999999, "text": " these collective operations such as reduce or reduce blah blah blah all"}, {"start": 4631.719999999999, "end": 4638.44, "text": " gather scatter all to all all these are implemented in libraries such as NCCL"}, {"start": 4638.44, "end": 4642.24, "text": " from from Nvidia so that stands for Nvidia collective communications"}, {"start": 4642.24, "end": 4646.0, "text": " library so you can kind of check it out the diagrams here are useful it's gonna"}, {"start": 4646.0, "end": 4651.679999999999, "text": " help you kind of understand this but ultimately someone needs to create maybe"}, {"start": 4651.68, "end": 4657.4400000000005, "text": " to me something that visualizes this much like nicely such it so they can see"}, {"start": 4657.4400000000005, "end": 4660.240000000001, "text": " understand what's going on but yeah hopefully this gives you some intuition"}, {"start": 4660.240000000001, "end": 4664.56, "text": " and that's what we are aiming for just like have at least the basic terminology"}, {"start": 4664.56, "end": 4667.92, "text": " and the basic understanding of all of these optimization methods in place and"}, {"start": 4667.92, "end": 4671.400000000001, "text": " then you can always read on the documentation of the particular library"}, {"start": 4671.400000000001, "end": 4678.64, "text": " you're using if you care to learn more okay I'm gonna skip all of these here"}, {"start": 4678.64, "end": 4684.04, "text": " interesting piece of theory but not that practical 
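A rough sketch of the partition-then-all-gather scheme quoted above, where each data parallel process updates only its 1/Nd slice of the (flattened) parameters and optimizer state and a single all-gather at the end of the step rebuilds the full parameter vector on every rank. Purely illustrative, with a plain momentum update standing in for Adam, and assuming torch.distributed is initialized and the parameter count divides evenly:

import torch
import torch.distributed as dist

def zero1_step(flat_params, flat_grads, local_momentum, lr=1e-3, beta=0.9):
    world, rank = dist.get_world_size(), dist.get_rank()
    n = flat_params.numel() // world
    lo, hi = rank * n, (rank + 1) * n
    # Only this rank's 1/Nd slice of the optimizer state lives in its memory.
    local_momentum.mul_(beta).add_(flat_grads[lo:hi])
    updated_slice = flat_params[lo:hi] - lr * local_momentum
    # The all-gather at the end of the step gives every rank the fully updated parameters.
    gathered = [torch.empty_like(updated_slice) for _ in range(world)]
    dist.all_gather(gathered, updated_slice)
    flat_params.copy_(torch.cat(gathered))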
they show that because of we"}, {"start": 4684.04, "end": 4688.76, "text": " saw these formulas already in the table but we ultimately end up with 16 psi"}, {"start": 4688.76, "end": 4694.84, "text": " over MD formula for the amount of memory which means is Andy in the limit of"}, {"start": 4694.84, "end": 4699.56, "text": " going to infinity this goes to zero but yeah it becomes impractical body this"}, {"start": 4699.56, "end": 4703.6, "text": " shows that this has a profound implication zero powers DP to fit model"}, {"start": 4703.6, "end": 4707.400000000001, "text": " with arbitrary size as long as there are sufficient number of devices to share"}, {"start": 4707.4, "end": 4710.799999999999, "text": " the model states but yeah it becomes after a certain point you hear the"}, {"start": 4710.799999999999, "end": 4716.04, "text": " situation point and this becomes very very slow okay this is the last last"}, {"start": 4716.04, "end": 4720.44, "text": " part let me kind of read and walk you through this it might be insightful so"}, {"start": 4720.44, "end": 4724.0, "text": " more specifically once the forward propagation for a layer of a model is"}, {"start": 4724.0, "end": 4729.32, "text": " computed the input activations are partitioned across all the model"}, {"start": 4729.32, "end": 4734.94, "text": " parallel process until it is needed again during the back propagation okay"}, {"start": 4734.94, "end": 4741.16, "text": " so what they say here is let me try and cannot draw this down so let's say we"}, {"start": 4741.16, "end": 4747.719999999999, "text": " have a bunch of devices here something like this let's let's say we have three"}, {"start": 4747.719999999999, "end": 4752.4, "text": " devices and let's say you just computed you're just you're currently here in"}, {"start": 4752.4, "end": 4757.32, "text": " this part of the network and you computed some of the activations okay and"}, {"start": 4757.32, "end": 4761.679999999999, "text": " then you proceed forward so after that you proceed forward you continue your"}, {"start": 4761.68, "end": 4767.96, "text": " forward prop here okay so instead of having that checkpoint so what's usually"}, {"start": 4767.96, "end": 4774.240000000001, "text": " done that checkpoint is stored here instead of doing that why not partition"}, {"start": 4774.240000000001, "end": 4779.320000000001, "text": " it and basically why not store the literally change the color so you"}, {"start": 4779.320000000001, "end": 4784.64, "text": " instead store only a chunk here so you only store a chunk here you store the"}, {"start": 4784.64, "end": 4789.4800000000005, "text": " second time here and you store the third time here so that's the idea and then"}, {"start": 4789.48, "end": 4795.48, "text": " once we need these activations again we just communicate back those activations"}, {"start": 4795.48, "end": 4800.08, "text": " back to this device so that we can compute the gradients etc etc so that's"}, {"start": 4800.08, "end": 4803.2, "text": " the basic idea I mean now how that happens obviously there is a lot of"}, {"start": 4803.2, "end": 4805.719999999999, "text": " computer science going on in the background the algorithms data"}, {"start": 4805.719999999999, "end": 4809.44, "text": " structures all of that hardware electronics but yeah we have to kind of"}, {"start": 4809.44, "end": 4814.36, "text": " stop at a certain level of abstraction okay so it works in conjunction with"}, {"start": 4814.36, "end": 4819.639999999999, "text": " activation checkpointing 
storing partitioned activation checkpoints only"}, {"start": 4819.639999999999, "end": 4823.639999999999, "text": " instead of replicated copy so that's what I told you here so furthermore in"}, {"start": 4823.639999999999, "end": 4828.839999999999, "text": " the case of very large models and very limited device memory these partitioned"}, {"start": 4828.839999999999, "end": 4833.719999999999, "text": " activation checkpoints can also be offloaded to the CPU reducing the"}, {"start": 4833.719999999999, "end": 4838.16, "text": " activation memory overhead to nearly zero at an additional communication cost I"}, {"start": 4838.16, "end": 4842.08, "text": " didn't mention this before okay if we check point a single activation for each"}, {"start": 4842.08, "end": 4847.04, "text": " transformer layer it will require about 33 gigabytes of memory per GPU just to"}, {"start": 4847.04, "end": 4852.0, "text": " store the activation checkpoints but with this optimization the PA and zero"}, {"start": 4852.0, "end": 4857.48, "text": " it can be reduced to about 2 gigabytes per GPU because yeah if you have 16"}, {"start": 4857.48, "end": 4861.08, "text": " degree of 16 of your model parallelism then you just divide by 16 you get"}, {"start": 4861.08, "end": 4865.6, "text": " roughly 2 gigabytes furthermore this 2 gigabytes can be offloaded to the CPU"}, {"start": 4865.6, "end": 4870.32, "text": " reducing the memory footprint for activations to nearly zero okay guys this"}, {"start": 4870.32, "end": 4876.2, "text": " is this is pretty much it the paper is huge obviously there is a lot of"}, {"start": 4876.2, "end": 4880.4, "text": " heuristics in the background so they mentioned something here so given model"}, {"start": 4880.4, "end": 4883.4, "text": " and hardware characteristics will leverage the above analysis to decide if"}, {"start": 4883.4, "end": 4888.5199999999995, "text": " and when to apply PA and PA plus CPU so the activation checkpointing the"}, {"start": 4888.5199999999995, "end": 4892.4, "text": " partitioning of the activation checkpoints optimization and the CPU"}, {"start": 4892.4, "end": 4896.48, "text": " offloading optimization and the above analysis is something I encourage you"}, {"start": 4896.48, "end": 4900.5199999999995, "text": " to read but the idea is depending on like various latencies they decide"}, {"start": 4900.5199999999995, "end": 4904.48, "text": " whether this is a good idea or not so there is a lot of details there but"}, {"start": 4904.48, "end": 4909.28, "text": " nothing conceptually fundamental that that yeah it will lead you to an Eureka"}, {"start": 4909.28, "end": 4916.719999999999, "text": " moment to epiphany I guess yeah okay guys hopefully you like this video let"}, {"start": 4916.719999999999, "end": 4921.24, "text": " me find let me end up with this nice chart here okay so hopefully you like"}, {"start": 4921.24, "end": 4925.5599999999995, "text": " this video if you did please share it out with with your friends with anyone"}, {"start": 4925.56, "end": 4930.92, "text": " for whom you think would benefit from understanding these scaling techniques so"}, {"start": 4930.92, "end": 4934.96, "text": " we covered a lot of ground here we covered the data parallelism approach we"}, {"start": 4934.96, "end": 4938.4800000000005, "text": " covered the tensor parallelism approach we covered the pipeline parallelism"}, {"start": 4938.4800000000005, "end": 4942.360000000001, "text": " approach we covered the mixed precision training and finally we covered the 
zero"}, {"start": 4942.360000000001, "end": 4947.280000000001, "text": " optimizer where we saw how we can basically cut down and do the"}, {"start": 4947.280000000001, "end": 4952.6, "text": " partitioning of various model states and digital states so that we can train much"}, {"start": 4952.6, "end": 4957.96, "text": " bigger models and even on a single GPU so yeah do let me know whether you found"}, {"start": 4957.96, "end": 4962.0, "text": " this video interesting leave any feedback down below subscribe to the"}, {"start": 4962.0, "end": 4982.64, "text": " channel and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=KTZbi7-f09Q
30k subscribers AMA (Ask Me Anything) | Channel Update
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE Celebrating 30.000 subscribers and doing an ask-me-anything session! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:26 Your feedback matters! 00:52 AMA session kick off 01:08 Question1 - Research engineer vs research scientist? 03:10 Question2 - PhD? 04:36 Question3 - How is living in London? 05:00 Question4 - Most important skills for AI? 06:18 Question5 - Will you be doing podcasts on this channel? 06:50 Question6 - What are you most excited about in ML today? 08:41 Question7 - What do you do to relax? 09:08 Question8 - Replicability crisis, large models? 11:15 Question9 - When will AI finally become conscious? 11:35 Question10 - Experiences so far in DeepMind? 12:26 Question11 - How has my view of the ML changed? 14:41 Question12 - Next big thing? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #ama #milestone #youtube
What's up guys, the channel just hit 30,000 subscribers and I'm making this video to answer some of your questions and just to celebrate and reflect over the last period. I'm super thankful for all of you following my channel. I'm learning so much from you as well, just reading the feedback, reading the comments, interacting with you guys is super valuable for me as well, so I'm very grateful for each one of you. Having said that, over the last period I've been focusing more, I guess you noticed, on these ML code walkthrough videos, so do let me know whether you find those interesting. Any comment, feedback, whatnot, what can I improve there, what you want to see more of, whatever pops on your mind, feel free to just kind of comment down below, I'll answer every single comment and yeah, I'm looking forward to your thoughts. So I'm making this video also to just answer some of your questions, so this morning I posted a couple of posts to LinkedIn and Twitter and I collected some of your questions, so I thought just going through them and answering them and hopefully you get something useful out of this video. Okay guys, so let's start with the first one. Ahmet Tek here says, so congratulations, this is a great channel, thanks Ahmet. My question is, should I go for a research engineer position or a research scientist position? What is the difference in roles and who generally has more impact? This is one of the questions where the answer is really, it depends. It depends on the team you're in, it depends on the project you have, it depends on the phase of the project and it depends on your particular role. So there is a lot of overlap between what a research engineer and research scientists do, so again, it varies a lot and sometimes a research engineer will be doing more research than a research scientist in particular phases of the project and vice versa. So it really depends on a particular team you land in. And the second related question from Ahmet as well, also what do you think of being a research engineer for one to three years then becoming a research scientist with a PhD? I feel this is the best thing to do. So I personally encourage you to, so this is just my advice, my personal like thinking on this topic is only if you are passionate about a particular topic and you really want to invest all like four or five plus years investigating this particular research topic then you should go and do the PhD route. Otherwise you probably don't want to do PhD, but that's just my two cents. Touching on the impact topic a little bit, again basically the impact inside of a company is usually measured, so the proxy we use is the the level you're on, so the seniority. And basically I can give you REs who are of higher levels than some RSs and thus they've been more impactful using that that metric. So it really really depends and I wouldn't consider any each like any of these two as a superior to the other one. They're just different and ultimately there is a lot of overlap. So that's those are my two cents. The second question comes from Kevin Crow. The first one is, so if you were doing a machine learning PhD what would be your preferred area of research and why? So obviously I haven't been I haven't done a PhD because that's my personal like choice. I think that it's much better for me to do this self-education path and just have a broader overview of the field rather than zooming into a particular topic. 
That's just my personal preference but if I were to choose I would probably be well this is kind of not not really ML but related especially nowadays. So somewhere on the intersection like scaling or high performance computing, so HPC, there will be an interest like the compilers or just like how to scale models to well much bigger sizes. That's interesting. Reconciling, learning and causality. That's something that's very interesting and currently people are working on that very hard. So I think I would maybe try that one. Geometric ML is something I'm super passionate about and finally RL. So there is a lot of cool recent projects coming coming from OpenAI and others where like basically we are learning the human preferences using certain RL methods. So I think RL still has a lot to say in the future of AI. The second question from from Kevin is how is living in London? It's super great. I love it. It's such a multicultural city. You can find anything really what you want to do. Like I live in Camden so that area is very nice. I have like Regions Park which is a huge park here in London, fairly nearby and like yeah I'm loving it so far. Thanks for the question Kevin. The next one is from Hightam. So I have one question. What are the skills and knowledge that you think are most important for AI scientists to have in this new era? I'm gonna go a bit meta here and not focus on any particular technology or skill set. So I'm gonna focus on things such as you need to have patience. You need to be willing to learn to constantly learn and self-improve. You need decent communication skills and a lot of I think a lot of engineers and scientists struggle with this one but this is super important. Ultimately you're going to be collaborating with people and you don't want the bottleneck to be your like a lack of communication skills. Some software engineer skills are super recommended no matter the role to be honest. Even if you're RE, RS or well if you're a software engineer obviously you need software engineer skills but like that's maybe like a basic skill, a fundamental skill and of course some domain expertise depending on the role you're working in. There'll be different things you'll want to learn depending whether you're doing computer vision or RL or this or that you'll want to learn different frameworks, different technologies etc etc. Okay so the next one is from Marco. Have you thought about doing interviews on your channel and by interviews I assume Marco means podcasts here. I'm planning to do those maybe some sometime in the future or I might occasionally surprise you with some interesting guests on this channel but so far I don't have any any clear plans on that. I will have to skip the the second one here because I do work a deep mind so yeah. So the next one comes from Igor. What about are you most excited in deep learning today and I have to say so again my personal preference here is just scaling various existing approaches and the reasoning behind this and I don't want to sound like that like AGI is coming guy but my reasoning here is we haven't even scaled to the scale of a human brain. 
The human brain has roughly, let's say, 80 billion neurons, and on average each has maybe 10,000 connections, so all in all there are something like a quadrillion synapses in the brain, and we are nowhere close to that scale. And since we are still not seeing any saturation, why would we stop? Plus the brain size is an arbitrary limit, why not go 10x the brain size, or 100x, as long as we don't see saturation? I think it's a viable path forward, not the only one, but one that looks very promising, and so I think we need to kind of build up those muscles. I like to think about that similarly to how gaming inadvertently was the reason we built the GPU muscles, and then we could use those GPU muscles to do deep learning; similarly here, if we develop this skill of building huge models, then maybe later on, when some novel research idea pops into the consciousness of the research community, we can easily scale it up. And ultimately we know that all of the biological intelligence that exists in this world has a bunch of neurons and a bunch of connections, so we know that scale is a necessary factor, probably not sufficient, but yeah, that was a bit longer answer there. The next one: what do you do after work in London, before you go to make AI videos, how do you relax? So for me it's usually power naps, working out, I started running a lot here in London, I also watch podcasts with my girlfriend, and every now and then I try and travel around, so literally a week ago I came back from my Italy trip, that was fairly refreshing, just kind of charging the batteries. Okay, let's go for the next one: in a lot of papers there is a replicability issue as the models get bigger and bigger and significantly more powerful hardware is required; if you were to organize conferences, would you try to do something regarding that? Do you believe that good papers that required an insane amount of computational power would still have been possible if the authors hadn't had access to the required hardware, for example DALL-E 2 or GPT-3? So there are multiple questions bundled into this one, I'm going to try and dissect it, maybe starting from the last one. It's obvious that these models would not have been possible without the scale, because we do know that certain skills emerge with scale, and amidst all of this "scale is all you need" hype going on, I think that the BigScience effort is a good example of what an open-source community can do when we come together and build something together. So if you're not familiar with this, there was this huge language model, 176 billion parameters, called BLOOM, that came from BigScience, and that's an example of how people coming together can build awesome stuff. EleutherAI is also a good example, where just passionate individuals applied for, I think it was the TPU Research Cloud, something you can apply for to get some free TPUs, and they literally built a model that had 20 billion parameters, GPT-NeoX-20B. So I do think this is a problem, especially when we realize that skills do emerge with scale. So I gave you some examples, like EleutherAI and the BigScience effort, as examples of what we can do, and there are also a lot of research directions that don't need that type of scale, but I don't want to be
the guy downplaying the importance of having access to a nice amount of resources, so yeah, these are just some ideas of how you can maybe join those organizations, and hopefully that answers your question. The one from Stefan here: when will AI finally become conscious, asking for a friend's grandmother who is very concerned. So I'd tell the grandma not to be worried, it won't be conscious anytime soon, but slightly conscious, who knows. Okay, so Brian Pulfer here says: congratulations Aleksa, I'd love to know more about your experience in a top AI company like DeepMind and how it has affected your view about ML, if at all. I'd also be curious to know your opinion about the next big thing in the field; as a DeepMind employee you probably have a sense for the field that others don't. Congrats again and keep up the amazing work you're doing. Thanks Brian. So my experience so far with DeepMind is that the company really cares about the employees, the environment is amazing, the people are amazing, I mean you do have a bunch of world-class experts around you, and just learning by osmosis and reaching out to those folks is super valuable, they're usually super responsive, like I literally had a conversation with Ian Goodfellow, actually just yesterday, so yeah, I really love it so far. Okay, on to the second part of your question, basically how has it affected my view on the ML field. I think just being present on the edge of what's going on in the research community, I realized how much we have this commonality of engineering experience, and when I say that I mean suffering. If you take a look at some of the recent chronicles, for example let me show you, let me open up a page here on GitHub, here are the chronicles from Meta's training of this huge LLM, it was like 175 billion parameters big, and you can see the problems they were facing, like cluster deletion. Let me read this for you: given the holidays, we requested a pool of 12 buffer nodes to guard against hardware failures (two machines go down every day); in the process of replenishing this pool, the cloud provider's support team accidentally deleted our entire cluster on December 21st. So things like this happen all too often, even in companies such as Meta and others. Let me show you another example from the BigScience effort, I mentioned the BLOOM model. These guys literally had problems with stability, you can see the spikes in the loss, so the loss would go up and never return back to its initial value, nor go down obviously, so the training would diverge. And one of the things they noticed analyzing the spikes is that they literally had, as they said here, two million backslash-only samples in their data set, so literally a string of two million backslashes that somehow nobody noticed up until the point when the training started diverging. So problems like this pop up all the time, even if you work in the best companies in the world; wherever you are, all engineers and scientists are struggling and having to deal with various different issues, and I think getting that perspective, not just internally in DeepMind, but, as I showed you here with some external examples, you can see how the everyday work of an engineer or a scientist looks completely different compared to what people usually think. As for the next big thing, I can only give my personal opinion here;
obviously I'm not representing DeepMind here on this channel. As I said, I'm a believer in scaling the current existing approaches, not saying those are sufficient, I'm just saying those are necessary, and building those muscles is something that's going to be super useful over the long run. That's it guys, those are the questions I selected, thanks a lot for asking them. Again, leave feedback down below if you have any ideas or suggestions on what I should add to my channel, or improve, or change, feel free to just comment down below. And having said that, subscribe to this channel if you haven't, let's go to 50k, let's go to 100k and beyond. So until next time, bye bye!
[{"start": 0.0, "end": 6.12, "text": " What's up guys, the channel just hit 30,000 subscribers and I'm making this"}, {"start": 6.12, "end": 10.56, "text": " video to answer some of your questions and just to celebrate and reflect over"}, {"start": 10.56, "end": 16.04, "text": " the last period. I'm super thankful for all of you following my channel. I'm"}, {"start": 16.04, "end": 19.88, "text": " learning so much from you as well, just reading the feedback, reading the"}, {"start": 19.88, "end": 24.2, "text": " comments, interacting with you guys is super valuable for me as well, so I'm"}, {"start": 24.2, "end": 28.2, "text": " very grateful for each one of you. Having said that, over the last period"}, {"start": 28.2, "end": 33.24, "text": " I've been focusing more, I guess you noticed, on these ML code walkthrough"}, {"start": 33.24, "end": 38.879999999999995, "text": " videos, so do let me know whether you find those interesting. Any comment,"}, {"start": 38.879999999999995, "end": 43.24, "text": " feedback, whatnot, what can I improve there, what you want to see more of,"}, {"start": 43.24, "end": 46.96, "text": " whatever pops on your mind, feel free to just kind of comment down below, I'll"}, {"start": 46.96, "end": 51.519999999999996, "text": " answer every single comment and yeah, I'm looking forward to your thoughts."}, {"start": 51.519999999999996, "end": 55.519999999999996, "text": " So I'm making this video also to just answer some of your questions, so this"}, {"start": 55.52, "end": 60.64, "text": " morning I posted a couple of posts to LinkedIn and Twitter and I collected"}, {"start": 60.64, "end": 64.60000000000001, "text": " some of your questions, so I thought just going through them and answering them"}, {"start": 64.60000000000001, "end": 68.4, "text": " and hopefully you get something useful out of this video. Okay guys, so let's"}, {"start": 68.4, "end": 74.08, "text": " start with the first one. Ahmet Tek here says, so congratulations, this is a great"}, {"start": 74.08, "end": 78.56, "text": " channel, thanks Ahmet. My question is, should I go for a research engineer"}, {"start": 78.56, "end": 82.48, "text": " position or a research scientist position? What is the difference in roles"}, {"start": 82.48, "end": 88.0, "text": " and who generally has more impact? This is one of the questions where the"}, {"start": 88.0, "end": 92.68, "text": " answer is really, it depends. It depends on the team you're in, it depends on the"}, {"start": 92.68, "end": 96.92, "text": " project you have, it depends on the phase of the project and it depends on your"}, {"start": 96.92, "end": 100.68, "text": " particular role. So there is a lot of overlap between what a research"}, {"start": 100.68, "end": 106.04, "text": " engineer and research scientists do, so again, it varies a lot and sometimes a"}, {"start": 106.04, "end": 109.96000000000001, "text": " research engineer will be doing more research than a research scientist in"}, {"start": 109.96, "end": 114.44, "text": " particular phases of the project and vice versa. So it really depends on a"}, {"start": 114.44, "end": 119.32, "text": " particular team you land in. And the second related question from Ahmet as"}, {"start": 119.32, "end": 124.08, "text": " well, also what do you think of being a research engineer for one to three years"}, {"start": 124.08, "end": 128.44, "text": " then becoming a research scientist with a PhD? I feel this is the best thing to"}, {"start": 128.44, "end": 135.24, "text": " do. 
So I personally encourage you to, so this is just my advice, my personal like"}, {"start": 135.24, "end": 139.92, "text": " thinking on this topic is only if you are passionate about a particular topic"}, {"start": 139.92, "end": 146.07999999999998, "text": " and you really want to invest all like four or five plus years investigating"}, {"start": 146.07999999999998, "end": 151.32, "text": " this particular research topic then you should go and do the PhD route. Otherwise"}, {"start": 151.32, "end": 157.67999999999998, "text": " you probably don't want to do PhD, but that's just my two cents. Touching on the"}, {"start": 157.67999999999998, "end": 162.95999999999998, "text": " impact topic a little bit, again basically the impact inside of a company"}, {"start": 162.95999999999998, "end": 167.27999999999997, "text": " is usually measured, so the proxy we use is the the level you're on, so the"}, {"start": 167.28, "end": 172.0, "text": " seniority. And basically I can give you REs who are of higher levels than"}, {"start": 172.0, "end": 177.28, "text": " some RSs and thus they've been more impactful using that that metric. So it"}, {"start": 177.28, "end": 182.84, "text": " really really depends and I wouldn't consider any each like any of these two"}, {"start": 182.84, "end": 187.56, "text": " as a superior to the other one. They're just different and ultimately there is a"}, {"start": 187.56, "end": 191.88, "text": " lot of overlap. So that's those are my two cents. The second question comes from"}, {"start": 191.88, "end": 198.6, "text": " Kevin Crow. The first one is, so if you were doing a machine learning PhD what"}, {"start": 198.6, "end": 204.16, "text": " would be your preferred area of research and why? So obviously I haven't been I"}, {"start": 204.16, "end": 209.88, "text": " haven't done a PhD because that's my personal like choice. I think that it's"}, {"start": 209.88, "end": 214.12, "text": " much better for me to do this self-education path and just have a"}, {"start": 214.12, "end": 217.72, "text": " broader overview of the field rather than zooming into a particular topic."}, {"start": 217.72, "end": 223.12, "text": " That's just my personal preference but if I were to choose I would probably be"}, {"start": 223.12, "end": 229.36, "text": " well this is kind of not not really ML but related especially nowadays. So"}, {"start": 229.36, "end": 234.12, "text": " somewhere on the intersection like scaling or high performance computing, so"}, {"start": 234.12, "end": 238.8, "text": " HPC, there will be an interest like the compilers or just like how to scale"}, {"start": 238.8, "end": 244.64, "text": " models to well much bigger sizes. That's interesting. Reconciling, learning and"}, {"start": 244.64, "end": 248.67999999999998, "text": " causality. That's something that's very interesting and currently people are"}, {"start": 248.67999999999998, "end": 254.88, "text": " working on that very hard. So I think I would maybe try that one. Geometric ML is"}, {"start": 254.88, "end": 259.12, "text": " something I'm super passionate about and finally RL. So there is a lot of cool"}, {"start": 259.12, "end": 264.64, "text": " recent projects coming coming from OpenAI and others where like basically"}, {"start": 264.64, "end": 270.0, "text": " we are learning the human preferences using certain RL methods. So I think RL"}, {"start": 270.0, "end": 275.72, "text": " still has a lot to say in the future of AI. 
The second question from from Kevin"}, {"start": 275.72, "end": 280.52, "text": " is how is living in London? It's super great. I love it. It's such a multicultural"}, {"start": 280.52, "end": 286.8, "text": " city. You can find anything really what you want to do. Like I live in"}, {"start": 286.8, "end": 292.64, "text": " Camden so that area is very nice. I have like Regions Park which is a huge park"}, {"start": 292.64, "end": 299.08, "text": " here in London, fairly nearby and like yeah I'm loving it so far. Thanks for the"}, {"start": 299.08, "end": 305.28, "text": " question Kevin. The next one is from Hightam. So I have one"}, {"start": 305.28, "end": 308.15999999999997, "text": " question. What are the skills and knowledge that you think are most"}, {"start": 308.15999999999997, "end": 314.4, "text": " important for AI scientists to have in this new era? I'm gonna go a bit meta"}, {"start": 314.4, "end": 319.32, "text": " here and not focus on any particular technology or skill set. So I'm gonna"}, {"start": 319.32, "end": 325.28, "text": " focus on things such as you need to have patience. You need to be willing to learn"}, {"start": 325.28, "end": 331.4, "text": " to constantly learn and self-improve. You need decent communication skills and a"}, {"start": 331.4, "end": 334.64, "text": " lot of I think a lot of engineers and scientists struggle with this one but"}, {"start": 334.64, "end": 337.91999999999996, "text": " this is super important. Ultimately you're going to be collaborating with"}, {"start": 337.91999999999996, "end": 342.0, "text": " people and you don't want the bottleneck to be your like a"}, {"start": 342.0, "end": 347.03999999999996, "text": " lack of communication skills. Some software engineer skills are super"}, {"start": 347.03999999999996, "end": 351.44, "text": " recommended no matter the role to be honest. Even if you're RE, RS or"}, {"start": 351.44, "end": 354.08, "text": " well if you're a software engineer obviously you need software"}, {"start": 354.08, "end": 359.59999999999997, "text": " engineer skills but like that's maybe like a basic skill, a fundamental skill"}, {"start": 359.59999999999997, "end": 365.76, "text": " and of course some domain expertise depending on the role you're"}, {"start": 365.76, "end": 369.59999999999997, "text": " working in. There'll be different things you'll want to learn depending whether"}, {"start": 369.59999999999997, "end": 374.0, "text": " you're doing computer vision or RL or this or that you'll want to learn"}, {"start": 374.0, "end": 379.52, "text": " different frameworks, different technologies etc etc. Okay so the next"}, {"start": 379.52, "end": 385.59999999999997, "text": " one is from Marco. Have you thought about doing interviews on your channel and by"}, {"start": 385.59999999999997, "end": 391.12, "text": " interviews I assume Marco means podcasts here. I'm planning to do those maybe"}, {"start": 391.12, "end": 396.4, "text": " some sometime in the future or I might occasionally surprise you with some"}, {"start": 396.4, "end": 401.52, "text": " interesting guests on this channel but so far I don't have any any clear plans on"}, {"start": 401.52, "end": 406.47999999999996, "text": " that. I will have to skip the the second one here because I do work a deep mind"}, {"start": 406.48, "end": 415.04, "text": " so yeah. So the next one comes from Igor. 
What about are you most"}, {"start": 415.04, "end": 420.96000000000004, "text": " excited in deep learning today and I have to say so again my personal"}, {"start": 420.96000000000004, "end": 425.36, "text": " preference here is just scaling various existing approaches and the reasoning"}, {"start": 425.36, "end": 430.8, "text": " behind this and I don't want to sound like that like AGI is coming guy but"}, {"start": 430.8, "end": 435.76, "text": " my reasoning here is we haven't even scaled to the scale of a human"}, {"start": 435.76, "end": 440.56, "text": " brain. So human brain has roughly let's say 80 billion neurons and on average"}, {"start": 440.56, "end": 444.32, "text": " there is maybe 10,000 connections so all in all there is like what one"}, {"start": 444.32, "end": 450.56, "text": " quadrillion like synapses in the brain and we are nowhere close to that to that"}, {"start": 450.56, "end": 456.4, "text": " scale and since we are still not seeing any situation why would we stop plus the"}, {"start": 456.4, "end": 461.44, "text": " brain size is a fictional limit why not go 10x the brain size or 100x the brain"}, {"start": 461.44, "end": 466.64, "text": " size as long as we don't see the situation I think it's a viable path"}, {"start": 466.64, "end": 472.16, "text": " forward not the only one but like a one that looks to be very promising and so I"}, {"start": 472.16, "end": 476.4, "text": " think we we need to to kind of build up those muscles so I like to think about"}, {"start": 476.4, "end": 483.12, "text": " that similarly to how gaming like inadvertently was the cause for us to"}, {"start": 483.12, "end": 487.4, "text": " build the GPU muscles and then we could use the GPU muscles to do deep learning"}, {"start": 487.4, "end": 492.91999999999996, "text": " and similarly here if we just develop this skill of building huge models maybe"}, {"start": 492.91999999999996, "end": 498.79999999999995, "text": " later on when when some novel research idea comes like pops into the well into"}, {"start": 498.79999999999995, "end": 503.71999999999997, "text": " the consciousness of the of the research community then we can easily scale it up"}, {"start": 503.71999999999997, "end": 509.32, "text": " and ultimately we know that all of the biological intelligence that exists on"}, {"start": 509.32, "end": 513.1999999999999, "text": " this world does have bunch of neurons and bunch of bunch of connections so we"}, {"start": 513.1999999999999, "end": 517.36, "text": " knew we know that scale is a necessary factor probably not sufficient but"}, {"start": 517.36, "end": 521.28, "text": " but yeah there was a bit longer answer there the next one what do you do after"}, {"start": 521.28, "end": 526.36, "text": " work in London before you go to make AI videos how do you relax so for me it's"}, {"start": 526.36, "end": 532.52, "text": " usually doing power naps working out I started running a lot here in London I"}, {"start": 532.52, "end": 538.72, "text": " also watch podcasts with my girlfriend and every now and then I try and travel"}, {"start": 538.72, "end": 543.04, "text": " around so I recently like literally a week ago came back from from my Italy"}, {"start": 543.04, "end": 548.04, "text": " trip so that was very fairly refreshing just kind of charging the batteries okay"}, {"start": 548.04, "end": 552.0999999999999, "text": " let's go for the next one in a lot of papers there is a replicability issue as"}, {"start": 552.0999999999999, "end": 555.88, "text": " the models to get bigger 
and bigger and significantly more powerful hardware is"}, {"start": 555.88, "end": 559.88, "text": " required if you were to organize conferences would you try to do"}, {"start": 559.88, "end": 563.76, "text": " something regarding that do you believe that good papers that required insane"}, {"start": 563.76, "end": 568.3199999999999, "text": " amount of computational power would have still been possible if they weren't to"}, {"start": 568.32, "end": 573.08, "text": " have access to the required hardware for example the lead to or GPT 3 so there is"}, {"start": 573.08, "end": 577.0400000000001, "text": " multiple questions bundled into this one I'm gonna try and dissect it may be"}, {"start": 577.0400000000001, "end": 582.32, "text": " starting from the from the last one so it's obvious that these models would not"}, {"start": 582.32, "end": 586.9200000000001, "text": " have been possible without the scale because we do know that certain like"}, {"start": 586.9200000000001, "end": 591.12, "text": " skills emerge with the scale and has all of the all of this scale is all you need"}, {"start": 591.12, "end": 599.88, "text": " hype going on I think that like a big science like effort is a good"}, {"start": 599.88, "end": 604.76, "text": " example of what like an open source what a community could do when we come"}, {"start": 604.76, "end": 607.68, "text": " together and and build something together so if you're not familiar with"}, {"start": 607.68, "end": 613.64, "text": " this there was this huge language model 176 billion parameters called bloom that"}, {"start": 613.64, "end": 618.12, "text": " came from big science and that's an example of how people coming together"}, {"start": 618.12, "end": 622.72, "text": " can build awesome stuff you lose the AI is also a good example where we're just"}, {"start": 622.72, "end": 627.28, "text": " passionate individuals applying for I think it was the TPU research crawler"}, {"start": 627.28, "end": 632.0, "text": " something you can apply for that and get some free TPUs and then literally they"}, {"start": 632.0, "end": 637.24, "text": " meet they built a model that had 20 billion parameters the GPT Neo X 20b so"}, {"start": 637.24, "end": 642.88, "text": " I do think this is a problem especially when we when we realize that skills do"}, {"start": 642.88, "end": 647.32, "text": " emerge with scale so here are some I gave you some examples like you lose"}, {"start": 647.32, "end": 652.2, "text": " very I am big science effort as examples of what we can do but there is also a"}, {"start": 652.2, "end": 655.72, "text": " lot of things we can do a lot of research directions that don't need that"}, {"start": 655.72, "end": 660.1600000000001, "text": " type of a scale but like I don't want to be that guy saying so I don't want to be"}, {"start": 660.1600000000001, "end": 666.72, "text": " that guy downplaying the importance of having like access to a nice amount of"}, {"start": 666.72, "end": 671.6800000000001, "text": " resources so yeah these are just some ideas of how you can maybe join those"}, {"start": 671.6800000000001, "end": 676.96, "text": " organizations or yeah hopefully that answers your question the one from"}, {"start": 676.96, "end": 681.6800000000001, "text": " Stefan here when will AI finally become conscious asking for a friend's"}, {"start": 681.6800000000001, "end": 687.76, "text": " grandmother who is very concerned so I tell the grandma not to be like worried"}, {"start": 687.76, "end": 694.36, "text": " it won't be conscious 
anytime soon but slightly conscious who knows okay so"}, {"start": 694.36, "end": 700.6, "text": " Brian Pulfer here says congratulations Alexa I'd love to know more about your"}, {"start": 700.6, "end": 704.48, "text": " experience in a top AI company like DeepMind and how it has affected your"}, {"start": 704.48, "end": 709.32, "text": " view about ML if at all I'd also be curious to know your opinion about the"}, {"start": 709.32, "end": 713.08, "text": " next big thing in the field as a DeepMind employee you probably have a"}, {"start": 713.08, "end": 716.8000000000001, "text": " sense for the field that others don't congrats again and keep up the amazing"}, {"start": 716.8000000000001, "end": 722.36, "text": " work you're doing thanks Brian so so my experience so far with DeepMind is that"}, {"start": 722.36, "end": 725.84, "text": " the company really cares about the employees that the environment is"}, {"start": 725.84, "end": 730.0, "text": " amazing the people are amazing I mean you do have a bunch of world-class"}, {"start": 730.0, "end": 734.6, "text": " exports around you and just learning by osmosis and reaching out to those to"}, {"start": 734.6, "end": 738.56, "text": " those folks is super valuable you can you can they're usually super"}, {"start": 738.56, "end": 741.96, "text": " responsible like I literally had a conversation with I am good fellow a"}, {"start": 741.96, "end": 747.4, "text": " couple actually yesterday so so yeah I really love it so far okay on to the"}, {"start": 747.4, "end": 753.04, "text": " second part of your question basically how has it affected my view on the ML"}, {"start": 753.04, "end": 758.6, "text": " field I think just being present on the on the edge of what's going on in the in"}, {"start": 758.6, "end": 765.48, "text": " the research in our community I realized like how crazy we have this commonality"}, {"start": 765.48, "end": 771.72, "text": " of engineers experience when when I say that I mean suffering if you take a look"}, {"start": 771.72, "end": 775.8000000000001, "text": " at some of the of the recent Chronicles that for example let me show you let me"}, {"start": 775.8000000000001, "end": 782.44, "text": " open up a page here on github so here is the like Chronicles from the like"}, {"start": 782.44, "end": 786.86, "text": " metas training of this huge LLM that this was like hundred seventy five"}, {"start": 786.86, "end": 790.4, "text": " billion parameter big you can see it like problems that they are facing like"}, {"start": 790.4, "end": 794.0, "text": " like cluster deletion let me read this for you so given the holidays we"}, {"start": 794.0, "end": 798.14, "text": " requested a pool of 12 buffer nodes to guard against hardware failures two"}, {"start": 798.14, "end": 801.98, "text": " machines go down every day in the process of replenishing this pool the"}, {"start": 801.98, "end": 806.16, "text": " cloud providers support team accidentally deleted our entire cluster"}, {"start": 806.16, "end": 811.52, "text": " on December 21st so things like this happen all too often even in companies"}, {"start": 811.52, "end": 815.96, "text": " such as meta and and others let me show you another example from from the big"}, {"start": 815.96, "end": 820.58, "text": " science effort I mentioned the blue model so these guys literally had some"}, {"start": 820.58, "end": 824.76, "text": " problems with the stability you can see the the spikes in the loss so the the"}, {"start": 824.76, "end": 829.52, "text": " loss would go up and never 
return back to its initial value neither go down"}, {"start": 829.52, "end": 833.74, "text": " obviously so the training would diverge and one of the things they noticed"}, {"start": 833.74, "end": 837.08, "text": " analyzing the spikes is that they literally have so they said here two"}, {"start": 837.08, "end": 841.64, "text": " million backslash only samples in our data set so literally like a string of"}, {"start": 841.64, "end": 847.9399999999999, "text": " two million backslashes that that somehow nobody noticed up until this"}, {"start": 847.9399999999999, "end": 851.14, "text": " point when the when the when the training started diverging and so"}, {"start": 851.14, "end": 855.96, "text": " problems like this pop up all the time even if you work on in the best"}, {"start": 855.96, "end": 860.8, "text": " companies in the world wherever you are all people like all engineers and"}, {"start": 860.8, "end": 864.96, "text": " scientists are struggling and having to deal with various different issues and I"}, {"start": 864.96, "end": 869.68, "text": " think having that getting that perspective not just internally in deep"}, {"start": 869.68, "end": 874.7199999999999, "text": " mind but this is I showed you here some external examples you can see how how"}, {"start": 874.7199999999999, "end": 879.0799999999999, "text": " the everyday work of an engineer or a scientist looks completely different"}, {"start": 879.0799999999999, "end": 883.9599999999999, "text": " compared to what people usually think as for the next big thing I can only give"}, {"start": 883.9599999999999, "end": 887.7199999999999, "text": " my personal opinion here obviously I'm not representing deep mind here on this"}, {"start": 887.7199999999999, "end": 895.3199999999999, "text": " channel I as I said I'm a believer in in scaling the current existing approaches"}, {"start": 895.32, "end": 900.72, "text": " not saying those are sufficient I'm just saying those are necessary and like"}, {"start": 900.72, "end": 904.8000000000001, "text": " building those muscles is something that's gonna be super useful over the"}, {"start": 904.8000000000001, "end": 909.84, "text": " long run that's it guys those are the questions I kind of selected thanks a"}, {"start": 909.84, "end": 914.32, "text": " lot for asking those questions again leave the feedback down below if you"}, {"start": 914.32, "end": 918.6800000000001, "text": " have any ideas any suggestions what I should add to my channel or improve or"}, {"start": 918.6800000000001, "end": 923.8000000000001, "text": " change or whatnot feel free to just comment down below and yeah having said"}, {"start": 923.8, "end": 928.12, "text": " that subscribe to this channel if you haven't let's go to 50k let's go to 100"}, {"start": 928.12, "end": 957.24, "text": " K and beyond so until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=x_8uHX5KngE
DALL-E mini explained | min(DALL-E) | Craiyon | ML Coding Series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In the 6th video of the ML coding series I start explaining the DALL-E mini project - the open-source implementation of DALL-E. I start with its minimal port into PyTorch called min(DALL-E). I first give you the necessary context by walking you through the VQ-GAN, BART, GLU, and DALL-E papers as well as the Weights & Biases report on DALL-E mini, and then I dig into the actual code! Let me know how you like this video! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ min-dalle code: https://github.com/kuprel/min-dalle My previous relevant videos: ✅ VQ-GAN: https://www.youtube.com/watch?v=j2PXES-liuc ✅ VQ-VAE: https://www.youtube.com/watch?v=VZFVUrYcig0 ✅ DALL-E: https://www.youtube.com/watch?v=jMqLTPcA9CQ ✅ Weights & Biases DALL-E mini report: https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA ✅ Rivers Have Wings tweet: https://twitter.com/RiversHaveWings/status/1478093658716966912 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro 00:02:12 VQGAN overview 00:08:42 Conditioning in VQGAN 00:14:00 BART transformer 00:18:25 DALL-E 1 overview 00:24:13 DALL-E mini Weights & Biases report 00:30:35 [code] min-dalle 00:34:23 Text tokenizer 00:41:50 BART encoder 00:44:22 GLU explained (paper + code) 00:51:22 BART decoder 00:58:35 Image latent vector autoregressive generation 01:05:10 Super conditioning, top-k sampling 01:09:53 VQGAN decoder 01:15:05 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dallemini #imagesynthesis #mindalle
What's up guys, in this video I'm continuing on with the machine learning coding series, and in this one I'll be covering DALL-E mini. So that's the open-source implementation of OpenAI's DALL-E model, and when I say I'll be covering DALL-E mini, what I mean is I'll be covering this minimal port of DALL-E mini that's inference only and basically written in PyTorch, and the reason is it's much easier to go through this one and to set it up on my Windows machine. So we'll start with this one, but in the next video I'll actually be covering the full original JAX code base of DALL-E mini, which is much more involved; this video will give you enough context to know what's going on. Okay, so as usually, I'll first give you the additional context in the form of explaining the papers behind this, and one thing that's very important to understand is that DALL-E mini is not actually an implementation of the DALL-E model, it's more of a mixture. Let me open up my OneNote. So it's more of a mixture of this paper called VQGAN, or "Taming Transformers for High-Resolution Image Synthesis", so that's the paper that introduced VQGANs, and I've covered this one and pretty much all of these papers in my previous videos, so I'll be linking those as we go along. So this is the VQGAN paper, and they also leverage this model, an encoder-decoder transformer model called BART, and they use some modification of BART, they use this GLU from Noam Shazeer, we'll see all of those details a bit later. And here is the original DALL-E paper, and then I'm going to kind of contrast what DALL-E mini is with the actual DALL-E paper. Weights & Biases also have a report which we'll literally skim through; I'm going to show you some of it, they actually have very nice training and inference diagrams of how DALL-E mini works, and they also have comparisons between DALL-E and DALL-E mini, and you can see there are quite a lot of bullet points there, so I'm going to go through all of those a bit later. Okay, so let's first start with the actual papers. I'm going to skim, I'm going to literally just give you the gist, because otherwise there are like four papers and that would take too much time. So let me go straight to it. So this is the VQGAN paper, and as I said, I previously already covered it, so if you want a deep dive go ahead and check it out, I'm going to link it somewhere here. Okay guys, so again on the high level, keeping it high level because I've covered this in great depth in a separate video, we have two components of VQGAN basically. One is learning the visual codebook, and that's this component here, and then the second part is, after you learn the visual codebook, you basically train a transformer model to autoregressively model the latent space of visual vectors, so this is the transformer part. So basically, as I said, we have the first part here and we have the transformer part here. How this is trained is very similar to VQ-VAE, again a paper I've covered before, and a couple of differences are that they are using a perceptual loss as the reconstruction loss, and the second thing is they're using a discriminator here, so basically an adversarial approach, and because of that the images from VQGAN are much crisper compared to what you get using VQ-VAEs, which is something that DALL-E version 1 was actually using. Quickly running through the pipeline here:
What you do is you feed an image here, you then encode it into a lower-dimensional latent space, so you end up with some vectors, as you can tell here, and then what you do is you kind of snap them onto the closest visual vector that you learned in this codebook. So you can see that, for example, for this vector here the closest vector, using L2 as the definition of closeness, is vector number one, so you snap it onto vector number one from the visual codebook. Okay, so once you do that for all of the vectors from your low-dimensional space here, you snap them onto discrete vectors, then you pass them through a convolutional decoder to get the image out, and, again, I'm going to show you the losses in a second, but you train this using a couple of losses such as a reconstruction loss, the perceptual loss, and the adversarial loss here. And then the transformer part is, I guess, very self-explanatory: once you have this whole pathway here that allows you to map your data set of images into a data set of basically tokens, because you can now flatten this in raster order, you can directly map your image into a sequence of tokens, and then you can just train your transformer on this new data set of sequences, and that's how you learn the data distribution of interest. Okay, so that's it on a high level. Here are a couple of formulas I'm going to show you. So basically, as I said, what you do is you snap your vectors, so — and I have some glitch with my OneNote, so you'll be seeing this thing kind of staying here, unfortunately, for whatever reason OneNote is glitching today — here is the vector that you estimate, then you snap it onto the closest codebook vector here, that's the quantization step. So you first encode, then you quantize, and then you have the decoder part, and that's how you get the reconstructed image x hat out. And with that notation under our belt, here is the actual loss. The VQ loss is, as you can see here, a sum of a couple of losses. One of them is the reconstruction loss, which is L2 in this case, and then there are a couple of these terms which I think they call the codebook and commitment losses. So the idea, as you can see here, is you put a stop-gradient, in this particular case, on top of your codebook vector, and then you try to make your estimated vectors closer to the codebook, and here is the opposite: you put a stop-gradient operator on top of your estimated vectors and then you try to make your codebook closer to the output of your encoder, as you can see here. So that's that, and then VQGAN actually adds a couple of modifications. The first one is they replace the L2 loss, as you can see here, with the perceptual loss, and they introduce an adversarial training procedure as well, with a patch-based discriminator. So you don't have a single scalar; as you could see on the image here, you're basically doing a patch-based discrimination of whether each patch is real or fake, stemming from a real or fake image, and that additionally increases the bandwidth of the learning signal compared to just using a classical single scalar. Okay, and that's pretty much it.
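To make that snapping step a bit more tangible, here is a minimal PyTorch-style sketch of the quantization; the shapes, the codebook size and the function name are placeholders of mine, not the actual VQGAN or min-dalle code.

```python
import torch

def quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    # z_e: encoder outputs (B, H, W, D); codebook: K learned visual vectors (K, D)
    flat = z_e.reshape(-1, z_e.shape[-1])          # (B*H*W, D)
    dists = torch.cdist(flat, codebook)            # pairwise L2 distances (B*H*W, K)
    indices = dists.argmin(dim=1)                  # id of the closest codebook vector
    z_q = codebook[indices].reshape(z_e.shape)     # "snapped" (quantized) latents
    return z_q, indices.reshape(z_e.shape[:-1])    # the indices are what the transformer models

z_e = torch.randn(1, 16, 16, 256)                  # assumed latent grid
codebook = torch.randn(1024, 256)                  # assumed codebook size
z_q, token_ids = quantize(z_e, codebook)
```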
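And the stop-gradient terms from the loss above could be sketched like this; beta and the names are assumptions, and the perceptual loss plus the patch-based discriminator that VQGAN adds on top are left out.

```python
import torch.nn.functional as F

def vq_losses(z_e, z_q, beta: float = 0.25):
    # codebook loss: pull the codebook entries toward the (detached) encoder outputs
    codebook_loss = F.mse_loss(z_q, z_e.detach())
    # commitment loss: pull the encoder outputs toward the (detached) codebook entries
    commitment_loss = F.mse_loss(z_e, z_q.detach())
    return codebook_loss + beta * commitment_loss
```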
That's what we can changes So we have the VQ component and then we have now the GAN component and again The difference here is that they are using the perceptual loss if you're not familiar with what that is Check out some of my neural style transfer videos But basically the idea is to take like a pre-trained convolutional neural network such as like a maybe VGG pre-trained on like image net Or something and then you instead of doing the L2 in the pixel space of the of your reconstructed image you actually feed it inside of that like a Network and then you do L2 in the latent space of that network where the features are much more abstract and you're you're basically trying to find Well, you're basically capturing the semantics much more than than than high level noise in the picture space Again refer to my video if you want to know more details But this will be enough more than enough for the for the for the code walkthrough Okay, guys, so let me show you one more thing in this paper and that's conditional synthesis So dimension here. So if the conditioning information C has spatial extent we first learn in other VQ GAN to obtain again an index-based representation as you can see here So you'll have what they say here you they'll have A height times width of these vectors where the set of vectors has is this big there is like big Z sub C Number of those vectors. So that's the size of the code book for that Condition conditional information. So they're there in concrete. Let me give you a concrete example here This could be maybe like semantic segmentation Maps that you can then basically learn a VQ GAN on top of that type of a data set. Okay Well blind then you have this newly obtained code book and then due to autoregressive structure of the transformer we can then simply prepend R to S which is the Well, that's your your your your image from your target data set and R is stems from the conditioning information and then basically you do the Negative log likelihood to these entries. So in case that was not that clear Let me quickly draw it here. So again, imagine you have Basically some semantic segmentation Masks here. So maybe something like this. I'm gonna kind of try and draw it somehow so This is some type of a semantic segmentation image and you have a big data set of such images Big data set of such images and now what you do is basically train a VQ GAN Such that you'll end up with a particular code book For that vector for for that data set so this is going to be your your code book For this particular Image here, so you then encode it you end up with basically Well, you end up with HC times WC vectors you then flatten them out and You end up with a sequence and This sequence that which they denote as R is something you can prepend while training on your on your data set Okay. So now obviously for each of these images you have an actual image you care about which is the Image from your from the data set you're trying to for which you're trying to learn the data distribution off So that there will be some image here, which is actually original image And this is its semantic segmentation and then you do the same thing here. So you'll you first learn the VQ GAN component So something like this and I'm really terrible at drawing today And and then you encode it here You flatten out this sequence. I'm going to denote it with a different color. 
So we now have a sequence here And now how you're going to train is you're going to prepend this vector R so we'll you'll grab the vector R from the semantic segmentation map and Here you have the target image which they denote as S And now you just collect these tuples for your whole data set So you basically collect this and then you learn how to to model that type of uh of of of sequence And so how do you later when you want to generate? A new image you literally you literally grab an arbitrary semantic segmentation map And you you pass it through the encoder here you fetch the This vector R and then you pass it through the Decoder you fetch the uh this vector R and then you put it inside of your transformer So here you'll have a transformer You put the R here And then you ought to aggressively keep on Generating the sequence and then you pass that through the decoder here So once you generate that sequence here you pass it through the decoder and out comes the image So this will maybe be in practice like 256 tokens Which you can then rearrange into 16 by 16 Tokens something like this And then that's that's like that's the that's the low uh dimensional representation And then you can as I said you pass it through the decoder and you get your image out So why I took some time to explain this is because um Dali and and uh dali mini, uh use a very similar approach instead of uh, and instead of using the uh Semantic segmentation masks they're actually using text but the logic is super similar. So just instead of um uh doing this whole thing you um Pass a piece of text that's associated with your image Here and then you learn how to model them together and that's it. So literally by doing that you learn the um You learn the data distribution uh p x y and that's where the bart paper comes into picture. So that's that's where this paper comes into play I'm gonna explain to you. Whoops Um, i'm gonna explain to you in a second like it's a it's a very simple It's basically like an encoder decoder transformer. There is not too much to say there Okay So again how bart will come into play here is instead of doing this pipeline with semantic segmentation masks You'll instead have uh an image And you'll have associated caption here. So some piece of text here. Okay, so have some piece of text here And you'll basically feed that text through uh encoder part of of uh of bart So let me kind of denote the uh bart Such as this like we have encoder part of bart and we hope we have the decoder part Of bart decoder is obviously attending Uh the encoder part that's how the transformers work. And so you you grab this text here You basically feed it directly into the encoder part and out comes something from the decoder Out comes this thing here and now you use this output of the encoder as r and that's it And now you just again you you'll now have these tuples of uh, this conditioning vectors And your uh images from your data set and you just learn how to model the p x y So the the distribution of your uh, like images and uh and captions and by the way, this is literally this this These two boxes is everything you need to have like that's a mental model of bart I have when i'm when i'm discussing it. 
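A minimal sketch of that conditioning scheme: prepend the conditioning tokens R to the target sequence S and only score the predictions for S. Tensor names are mine, and padding and special tokens are ignored for brevity.

```python
import torch
import torch.nn.functional as F

def conditional_nll(transformer, r, s):
    # r: (B, Lr) conditioning tokens; s: (B, Ls) target image tokens
    seq = torch.cat([r, s], dim=1)              # prepend the conditioning to the target sequence
    logits = transformer(seq[:, :-1])           # (B, Lr+Ls-1, vocab): next-token prediction
    targets = seq[:, 1:]
    start = r.size(1) - 1                       # first position whose prediction is a token of s
    return F.cross_entropy(
        logits[:, start:].reshape(-1, logits.size(-1)),
        targets[:, start:].reshape(-1),
    )
```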
So the only thing we need to See to understand how bart works compared to all of the other transformers is I guess this thing here so basically in your regular, uh, for example in inbert model so inbert What the pre-training objective looked like is you masked out so it's first is bidirectional So it's only encoder part of the of the uh encoder decoder original transformer structure Whereas bart is both encoder as well as the decoder So you're trying to just you just kind of mask out some of the code So you're trying to just you just kind of mask out certain tokens and I think like you do that with like 10 20 percent Probability for each of the tokens And then you try to predict what the token at that particular position was so it's a simple, uh, like a auto encoding Objective, uh, and what's different with bart is you can kind of model You can kind of take a span of of tokens and then mask them out with a single Mask token and then make sure to predict the original sequence here So because of this thing because of you can mask it out here However, um many tokens you want and then predict them here you have additional flexibility Which you don't have with a pure encoder, um, like transformers such as such as bert And I think they mentioned it somewhere here. Let me find that piece of text So the objective is called text in filling and they mentioned here a number of text spans are sampled with span lengths drawn from a post Poisson distribution with lambda Parameter equals three and each span is replaced with a single mask token zero length spans correspond to the insertion of mask tokens and Okay, it's inspired blah blah blah So text in filling teaches the model to predict how many tokens are missing from a span And that's all like it's literally encoder decoder transformer with this particular denoising objective And they show a bit later here how it compares against the baselines using different types of Like retraining objectives such as just token masking a la bert or token deletion or text in filling So a bunch of different sentence shuffling document rotation, etc, etc, and they compare that against the Language model so that's just predicting the next token or mask language models such as birth, etc, etc, and they show Well competitive results Okay, that's the bird part the bard part and then there's a small modification called glue Which i'll show you a bit later. So glue variance improved transformer from this author noam schazer Who is actually the co-author of the original transformer paper. So it's a very prolific author But before that, let me just quickly skim over over dali version one Which is which came from the paper zero shot text image generation again Super similar pipeline to vq again Again, they mentioned here they have two stages. So in the first stage they say here we train a discrete variational autoencoder to compress each of the 256 256 rgb images into 32 by 32 grid of image tokens each elements of which can assume 8192 possible values so that that means that the codebook size is this big and this reduces the context size of the transformer by a factor of 192 without a large degradation in visual quality because if you divide basically 256 times 256 times 3 because you have rgb image into 232 times 32 you end up with 192. Okay and The same authors of this paper. 
Well, I think this is from yeah, this is from open.ai So basically they they they previously trained this model called image gpt and in that model They were training they were autoregressively learning how to model the the images in the pixel space, which is much more complex because Transformers have quadratic complexity and obviously the image space can can grow like up to like Well thousand by thousand images and so it becomes very quickly impossible to model in the pixel space So that's why this reduction this mapping into a lower dimensional space here. Okay So again, that's stage one and it's literally the same as vq again without the Perceptual loss and without the adversarial component and then they say here In stage two we concatenate up to 256 vpe By pair encoding text tokens with the 32 times 32 image tokens that come from the stage one and Train an autoregressive transformer to model the joint distribution over text and image tokens. So basically what I said here Is that so again they they train the vqvae which you can think as as same as vqgan So yeah, they have the encoder part. They have the decoder part They kind of have a low dimensional representation here and so what they did what they do instead so so they can kind of unroll this in a raster order to end up with a Sequence of tokens s so to get r so to get the conditioning vector And I don't know what why they're using r like I guess c c makes more sense But i'll just keep on using the same uh terminology So to get the r component That you'll use to to prepend here And then train or progressively train a transformer on top of it Uh, they just use as as you saw here They just have uh, basically they take a piece of text and they bpe encoded so you start with the text And then you apply the tokenizer which gives you a sequence of tokens It just gives you a sequence of tokens and then I assume they just embed those using like a shallow like a simple Well, you have an embedding table you map that into tokens and then you you place that directly here to start um modeling whereas Um, as you saw the lini is using bart encoder. So that means they'll have additional Layers of processing before they they output this r vector. So the the conditioning vector Okay, guys, that's that's pretty much it. Um, let me quickly show you what they say here There is a lot of complexity hidden in here As I said, I covered this one the lee one, uh, you know in a separate video so you can check it out They use stuff such as gumbell softmax relaxation To train this model, etc, etc. And also it's worth Keeping in mind. I think they mentioned it somewhere here Let me just show you this so getting the model To train in 16-bit precision past 1 billion parameters without diverging was the most challenging part of this project so training this um Big model well back in back a couple of years ago. 
This was considered a big model Was was extremely hard and um, that's something I think boris the the main developer of the um, the lee mini Project is also struggling with but he's using jacks and he he has like free tpus From google so that kind of helps Because tpus have they did mention somewhere here that tpus have some less problems than than than gpus in certain aspects Let me quickly show you one more thing again Uh, let me just explain this conditioning part So given a text image pair we bpe encode with lowercase captions using at most 256 tokens with woke up size 16 384 and encode the image using 32 times 32 tokens with woke up size 8192 The image tokens are obtained using argmax sampling from the dvee Via encoded logits without adding any gumble noise. Uh, finally the text and image tokens are concatenated and model Regressively as a single stream of data. So that's something we saw multiple times already Okay, guys, so that's it. It's a very simple model as you saw here Uh, there is a lot of overlap between the ideas between vq gam paper and the lee Uh version one Um, they took the ideas from bar. They took some other ideas Uh boris is experimenting a lot as soon as a new paper comes out such as the deep net paper With novel initializations. He tries an experiment with it. Okay, so let me now show you the uh report that Uh boris wrote with the other other co-authors Again They have a very nice drawing of the training pipeline of the lee mini Model as you can see here again, they have the Bart encoder so you as you can see you you first have these Puppets of images and associated captions then you pass the caption through the bart encoder you get the the conditioning vector here You take the image you pass it through the vq again encoder you get the image tokens you feed them here So you can see here vq again, uh tokens And then you pass all of that through a bar decoder and you learn how to autoregressively model these sequences Uh simple stuff as for the inference we can see it here Uh, so you grab a piece of text you pass it through the bart encoder You feed it into the decoder and then you autoregressively generate the image tokens Uh, you can generate multiple Uh series of these image tokens because uh, the process of generation is inherently stochastic Then you pass all of those through the vq again Uh decoder you get some associated images So the token sequence from the low dimensional embedding space is projected into Uh the image in the pixel space And then you can use something like clip to filter out the best images So you again feed the same caption to clip and finds the image that That is best described by this particular caption you can see in this particular example White snow covered mountain under blue sky during daytime clip found that this image here is the best this is is uh Well best described by this by this particular caption. Okay That's it. 
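That CLIP filtering step can be sketched roughly like this with the Hugging Face CLIP wrappers; the checkpoint name and preprocessing here are my assumptions, not what the DALL-E mini repo actually uses.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

def rerank_with_clip(caption, pil_images, name="openai/clip-vit-base-patch32"):
    model, processor = CLIPModel.from_pretrained(name), CLIPProcessor.from_pretrained(name)
    inputs = processor(text=[caption], images=pil_images, return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_text[0]      # similarity of the caption to each candidate
    order = scores.argsort(descending=True)
    return [pil_images[i] for i in order]                # best-matching image first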
Let me quickly Walk you through and I encourage you to check out the whole report In this in this awesome, uh weight and biases report, but let me show you some some differences that the uh authors Of the lee Mini model themselves mentioned so we are grateful for the research and pre-trained models published by open.ai which we're essentially building our model Uh, not all the uh, the details on the leak our public knowledge But here what we consider to be the main differences So the lee uses a 12 billion parameter version of gpt3 In comparison our model is 27 times smaller with about 0.4 billion parameters Now this report was written a while ago. And so this was the the lee mini Like they now have the lee mega as well And also I think there is a mistake here because i'm fairly sure the original dali Paper used gpt2 not gpt3 Which are fairly similar but still Um, then they say we heavily leverage pre-trained models such as vq-gan barding coder and clip We mentioned all of those so far Uh, and while open.ai had to train all their models from scratch our model architecture takes into account pre-trained models available And their efficiency The lee encodes images using a larger number of tokens So they use 32 times 32 whereas the uh, dali mini use uh uses 16 uh by 16 from a smaller vocab And the lee uses a vqvae while we use a vq-gan Okay, then they say the lee encodes text using fewer tokens at most 256 Uh, whereas they where the lee mini uses 1024 And the lee reads text and images as a single stream of data while we split them between these uh, well bard encoder and decoder Uh, this also lets us use independent vocab for text and images The lee reads the text through an autoregressive model while we use a bi-directional encoder Okay, so I made a small mistake when I was explaining how the lee wevon works. I'm gonna Show you what I mean by that in a second, but let me finish here So the lee was trained on 250 million pairs of image and text while we used only 15 million pairs Okay So the difference is scale obviously the model scale the dataset scale etc, etc But also a lot of difference in the architecture and the training method like they don't have the gumbo softmax They don't have the vqvae. They're using vq-gan. They're using bard instead of using gpt, etc, etc So there is so the lee mini is um, well not really the lee model It's a it's a mixture of of vq-gans and bard and everything else So so yeah, but still and that's why they are now called crayon not the lee mini But in any case, let me quickly go back to notebook and then we're jumping to code So I think I mentioned somewhere here that they bpe encode Text and then they just learn how to autoregressively Model that and well, I kind of did correctly explain it but not quite so what they do is so let me just draw this again here so we have the Conditioning part and we have the target sequence So what they do how they learn this is they feed all of these into the transformer And then you learn how to predict all of the tokens. So basically here So yeah again as a target you just take this same sequence and you shift it leftwards by by one uh by one step and that's what you're trying to to predict so this is what you're trying to predict and then here ultimately you have the uh end of Image token in this particular case because in this particular example the green part Are the image tokens from the vq vae portion? 
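The "shift leftwards by one" target construction is plain teacher forcing; a rough sketch, assuming a decoder that takes the encoder output for cross-attention (names are illustrative, not the port's):

```python
import torch
import torch.nn.functional as F

def decoder_loss(decoder, encoder_state, image_tokens, bos_id):
    # decoder input starts with <bos>; the labels are the same tokens shifted left by one position
    bos = torch.full_like(image_tokens[:, :1], bos_id)
    decoder_input = torch.cat([bos, image_tokens[:, :-1]], dim=1)
    logits = decoder(decoder_input, encoder_state)       # (B, L, image_vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), image_tokens.reshape(-1))
```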
Okay, so that's it The main thing I want to stress here is that the the embeddings for the conditioning part are not shallow they're using uh deep transformer layers because as you as you can as you know, like the uh transformer works by Cross attending um previous tokens here. So it will be attending to deeper representations of this particular Text here. Okay guys this there was a bit more than I wanted to To give you when it comes to context and and paper overviews, but it is what it is Is let's now dig into the actual code. Okay, so here we are in my vs code I'm gonna open up the i'm gonna hit debug here and we're gonna slowly start stepping over the code again I'm gonna skip all of the unnecessary details. I just want to show you The similarity between what we just saw theoretically and the actual code implementation again this uh I'm showing you the code from a mindaly uh repo and uh, they only have the inference portion of the pipeline and they're um, It's it's a port in into pytorch and it's whereas the original repo was written in jax We have some arguments here as you can see, uh, so i'm using the mega version and not the mini version Uh fp16 set to false so that we have higher quality images, but the footprint will be bigger So I have a 8 gigabyte tpu and uh Otherwise, you'll have problems executing this thing. So let me show you uh, we'll see how the um GPU memory is slowly increasing as we're going through the code There is a couple of optimization in the code as well such as as soon as the encoder Is created and and the image is encoded they uh dispose of the encoder So that's controlled by this parameter. Let me see whether They have it here. I don't think it's in here. They have a parameter that literally controls that that that behavior Okay. So again, we have uh, we're using mega we have a piece of text I just inputted a random caption here a photo of a funny dog. Let's see what we get There is some seed there is a grid size. We're just using we're just generating a single image top cape for top case sampling Is set to 256? Uh image path, uh, so this is going to be directly dumped into the root root of of this repo so the image will be saved there Uh, the models are uh, well already downloaded into this pre-trained I went ahead and downloaded them beforehand so that we don't have to wait and again as I said, we're using fp16 So let's continue here. So first off we're gonna generate the we're gonna create this Uh mindaly model and then we are gonna do this generation process. So let's see what happens here So again is reusable is set to false That's the parameter that's going to control that optimization behavior. We'll see that in a second. So let me continue here So here it is Nothing fancy there just storing these variables we have number of text tokens that's gonna specify the the maximal size of the caption Number of layers is going to be 24 for mega Uh, as you can see the difference between mega and mini is just in the number of layers number of attention heads Also the size of the embedding space The size of the glue embedding space we'll get once we get there I'll briefly show you the paper as well for glue and then the text vocab count Uh, and the image vocab count kind of differ between these two models in any case, we just uh form the paths here for the uh Well by dali here, they are literally referring to the bart part of the dali model. So that's kind of um Potentially confusing. Um, okay, so Basically, i'm gonna skip over all of this. 
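For reference, here are the hyperparameters mentioned so far collected in one place; the numbers are the ones read off during this walkthrough for the mega checkpoint and should be treated as approximate.

```python
from dataclasses import dataclass

@dataclass
class DalleBartMegaConfig:
    layer_count: int = 24            # encoder and decoder depth
    embed_dim: int = 2048            # model dimensionality
    text_token_count: int = 64       # captions are padded/cropped to 64 tokens
    text_vocab_count: int = 50_265   # BART text vocabulary
    image_token_count: int = 256     # 16 x 16 latent grid from the VQ-GAN
    image_vocab_count: int = 16_384  # VQ-GAN codebook size (the decoder head is slightly larger)
```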
We have a vocab json and merges text. So that's gonna Basically, uh be used to form the tokenizer the text tokenizer. Okay Okay, and now we just form some some um paths which contain the the pre-trained models weights And now we hit in initialize tokenizer. Let me just see what I have. Yep. I have a breakpoint here. So Basically, it's already downloaded so we can ignore all of this Uh, and now we just open the vocab and here here it is So let's see what's the size of the vocab. So length of the vocab is 50,265 50,265 Now we go and read this merge merges file that contains the basically all of the merges that happen through during the training of the bpe uh tokenizer Do check out my open ai clip video if you if you're confused by the tokenizer details i've covered that they are in a bit greater depth So let's continue here. We just stored the vocab. We loaded a couple of seconds ago We basically grabbed the the pairs from this Like uh merges object and we split them into well into doubles To get the pair. So let me show you an example of what the pair looked like so Pairs and i'm gonna print the first three pairs. You can see Inn characters are merged together Uh, and this weird I don't know how to pronounce this one, etc. Etc. So there is there's going to be a lot of those bpe merges Stored in the in the pairs this rank from pair is basically going to associate integer From zero to the length of pairs with each of the pairs and that's going to be used during the tokenization step I'm going to quickly Later show you how the tokenization Uh works on a very high level but i'm gonna skip this part here because it's fairly intricate to to to describe And uh, you can you can read on bpe yourself. We can just treat the tokenizer as a black box. That's fairly Common structure between all of these models. So i'm not gonna keep on explaining it every single time I did cover it in an open ai's clip, uh video okay, so Continuing on because the easy reusable flag is set. We're not going to beforehand create encoder decoder and the uh, The tokenizer which is the vqgen Uh decoder because otherwise the memory would literally explode so I don't have enough memory in my machine to do that So we're going to skip over that and now we have the generate image part Okay, so we have the model here and let's now generate the image again the encoder and everything else will now be built Uh on the fly so we have the text again. We have the photo of a funny dog blah blah same parameters before Let's see what this does. So there is this um, uh generator, uh function that they uh, Developed let me that's not that vital. So we're just gonna generate the stream and then grab doing next We're gonna grab a single image from the stream, but let's dig into the actual function Okay, nothing happened there because it's a generator So once then this line once we hit this line when they go f10, we're gonna enter the actual function. Okay, so Similarly here everything interesting happens in the generate draw stream part. So i'm gonna ignore everything else here So now let's see what's going on. Okay. So first off we um Depending on the grid size we estimate the number of images because the grid size is one for us We just generate a single image. 
So this grid they basically this project offers an Like an option to generate like a grid of multiple images here because of memory constraints I'm just dealing with single image so we can ignore the grid every single time Okay, so because we are in a verbose mode We are printing out some details here tokenizing text etc. Etc and now we hit the tokenization part So now we tokenize our input caption before we feed it into the bart encoder so let's see what's going on here, so they grab the Like a separation token the cls token And the unknown token here, uh, there's some processing that they do on on top of the text So they do lower casing and then they encode it as ascii with this error set to ignore and then they do decoding So this is just going to do some simple primitive processing of text Nothing fancy there. Let me let me print the text for you here Let's see what happened. Basically nothing happened with this particular sentence because it was fairly simple And then you literally kind of split the sentence into words And splitting is happening. So basically you use spaces to split the sentence into words And then each of the word of the words is going to be split into the sub words here Because the this get byte pair encoding which i'll skip height It's fairly complex to explain but it's just gonna split the word into bpe Like sub words and then each of the sub words is basically associated with the like with the specific index integer from from the vocab here and that's it on a high level and then we just kind of Bracket that with the cls token in the beginning and that this uh see a separation token at the at the end and that's how we form Uh, basically a sequence of tokens for our particular text Okay, then they do optionally here the cropping but because we are less than 64 tokens. I think this is set to 64 Let me see what this number is, but i'm fairly sure it was 64 Yep. So if if it's like a longer than that length then they crop the the the tokens But for us, it's not Okay, so just some printing because it's verbose mode blah blah blah Uh now As you can see, this is the interesting part. So they they create this variable Text tokens which is initialized as all ones and it has you can be confused by a couple of details here So first why why is once is because one is a bad token in this vocab? Uh, and that's kind of implicit here because this is as I said just a port from the original repo So sometimes sometimes things are hard to understand unless you well unless you understand what's going on So here here am I trying to explain to you that so that's one detail The second important detail is there is this number two? and why they do that is because they have a basically analog thing to Classify our free guidance that we saw in diffusion model So you can basically do the same thing for the class of autoregressive generative models and that's why they have two so as you see here they create the first sequence will just contain the cls and the Separator token so it's going to be literally empty sequence everything else going to be pad paddings So that's what we do on this line. And then on the second line we add the actual tokens. Okay So let me show you what this thing is so text tokens If I print text tokens zero and then first maybe seven tokens You can see it's an empty sequence because this is a cls. This is a Separator token and everything else is padding and the second one contains the actual Uh tokens for our caption. 
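Concretely, the two-row token tensor described above looks roughly like this (pad id 1 as hard-coded in the port, cls/sep ids as loaded from the vocab; row 0 is the "empty" caption used for the super-conditioning trick):

```python
import torch

def build_text_tokens(token_ids, cls_id, sep_id, pad_id=1, max_len=64, device="cuda"):
    text_tokens = torch.full((2, max_len), pad_id, dtype=torch.long)
    text_tokens[0, :2] = torch.tensor([cls_id, sep_id])            # row 0: just <cls> <sep>, rest padding
    seq = [cls_id] + list(token_ids)[: max_len - 2] + [sep_id]     # row 1: the actual caption tokens
    text_tokens[1, : len(seq)] = torch.tensor(seq)
    return text_tokens.to(device)
```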
Okay, that's it Now we just convert that into tensor with a long data type We put it onto gpu gpu and we continue on okay Now here is the optimization I was mentioning because we're using this not reusable We're because we said this flag is reusable to false. We now on the fly initialize the encoder So the encoder is going to be the vq-gan encoder. So let me step inside of here and let's see what's going on So again, uh, because I already downloaded the model We can skip over all of these steps and we're going to focus on how this dali birth encoder is constructed So the interesting part as usually is going to be in the forward pass of the of the model and After we construct it you can see here. We just load the parameters Of the pre-trained model that was trained in the actual dali mini repository We're gonna load those parameters and then we're gonna push to do this this model onto the gpu, okay So let's see how dali birth encoder looks like Here we are. We have the the vocab size of 50k something we form the embedding tables for the Well, that's the the text vocab Because encoder again remember has the text vocab and the decoder will be generating the image token So it will have a separate image vocab Okay, so we generate that table the embedding vectors will be 2048 we then generate the positional Encodings here. This is going to be only 64 because if you recall before We had the cropping so that we make sure that the number of tokens that we feed into the bart Encoder is always up to 64 never more than that. And that's why we never need to go Uh above 64 positional encodings. Okay now we start forming the basically Encoder layers and we'll have 24 of these for the Bart encoder for dali mega I'm gonna hit go here and put a small breakpoint In an encoder layer, which is a simple transformer Uh block nothing fancy. The interesting part is this glue. So i'm gonna set uh one breakpoint there Uh, and that's it. So let's now continue execution here So we end up in the encoder layer as you can see, it's just a simple transformer Block we have the encoder. We have the self-attention module we have the some layer norms and we have glue which is just a modification of the the uh, your regular like um mlp that's used of the feed forward, uh, like a layer of the transformer block, okay So let's see how glue looks like here is glue It's a very simple modification by noam scherzer And i'm gonna now open up the paper side by side to show you What this is and to make sense out of it, but it's a very experimental finding So there is nothing. Uh, well fundamental understanding you can get from this. Okay guys here it is side by side here are the equations from the glue paper by noam scherzer And you can see that this layer here implements exactly this line here So the idea here was they they kind of like he ablated a couple of um, he made a couple of modifications Why not using different activation functions instead of rally? Why not using gelu or swish etc? Etc. 
Why not combine him combine them in a bit different ways using um Similarly to the gated linear units so you can see so Like the lee mini ended up using this variant here And let me just kind of convince you that that's indeed the case So let's see so we have we get the input x this is x and then we pass it through layer norm So ignoring that part we we pass it we then pass it through a linear layer so that's that corresponds to basically this bracket here and after that we we pass it through Gelu, so that's this part the sigma of this so that's this term And then we have v which is formed from x as well from the input. We also pass it through the linear layer so that's that's this part the x times big v and finally we do element wise product so hadamard product here and then we pass that product through a layer norm and So that's that's that's basically what happened here and finally we we do a feed forward through a linear layer So that's the w that's the that's the sorry. That's the w2 here And that's thus you can see that this simple forward pass of this glue model implements exactly This ffn glue from norm shimseyer's paper and that's it. Now. Let's go back to the code I'm gonna Remove this break point. So it's a simple simple simple modification instead of having just like a like a feed forward Like a basically a linear layer for all but a relu followed by a linear layer. It's a bit more intricate Uh, and they showed experimentally that this gives a better performance But there is no ultimately no theory behind why this should work and that's it. That's I guess deep learning Let's continue I'm gonna now We're now back into encoder layer and again encoder layer is simply a transformer block So we have as you can see here we have some um layer norm and then attention and then norm and then the residual connection and then Then we pass through the feed forward part and then we have again the residual connection and we return back the variable So that's it. So i'm gonna Basically ignore all of these break points and let's form the 24 layers Of this birth of bart Encoder so i'm gonna hit f5 i'm gonna form 24 layers and then We are now here We form additional layer norm Additional layer norm and then we form uh these token indices which are going to be used to index the positional embeddings here and that's it and we just Do times two because remember we are doing the classifier free Trick for all progressive models. That's why we have Batch size of two here Okay That's it. That's the that's the initialization of the bart encoder. Now we are loading the weights here and then we kind of Push those weights into the encoder structure and then we delete the parameters now You can already see that this is pushing my gpu somewhat. Well, well actually not that much Okay, so i'm surprised. This is actually not that big of a memory footprint I think some of the later components will be spiking it up. But yeah, let's continue so After the initialization of the encoder, let's see what else happens Okay, and now we grab the text tokens and we pass them through the bart encoder as we saw in the paper part of the video Okay, let's step into it so we're gonna hit the for function of the Bart encoder here So let's see what happens. So we ask where the text tokens are not equal to one and one is again remember They're not using a constant here. They're just using one to represent the pad tokens It's kind of hard coded and dirty but it works Okay, so not equal to one That's the actual content. 
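Before continuing with the encoder forward pass, here is that GLU variant written out as a small module; it mirrors the FFN_GEGLU line from Shazeer's paper plus the layer norms used in this port (dimension names are placeholders).

```python
import torch.nn as nn
import torch.nn.functional as F

class GLUFeedForward(nn.Module):
    # FFN_GEGLU(x) = (GELU(x W) * x V) W2, where * is the elementwise (Hadamard) product
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.ln_in, self.ln_mid = nn.LayerNorm(d_model), nn.LayerNorm(d_hidden)
        self.w = nn.Linear(d_model, d_hidden, bias=False)
        self.v = nn.Linear(d_model, d_hidden, bias=False)
        self.w2 = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x):
        x = self.ln_in(x)
        gated = F.gelu(self.w(x)) * self.v(x)
        return self.w2(self.ln_mid(gated))
```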
That's where attention mask is going to say true So let's now do something like this. Let me print this you'll see that It basically for the second component of the text tokens where we actually passed the the the caption tokens We have a bunch of truths here But for the first one we only have true true for the first two because those are the cls and the separator token Okay now what we do is we pass those tokens into the We embed them and then we just add the positional encodings using the these uh, well the the pose Those tokens indices that were previously constructed and that's it. That's simple stuff there And now we just pass it through the layer norm and then we keep on processing those embeddings Via the transformer Bart transformer layers and that's it. I'm just gonna hit that five here We're gonna do a forward pass through the transformer and we end up with the final representation here called encoder state Okay, and because again, this is the optimization I was mentioning. We now delete the weights of the encoder Let's see how the the memory footprint is gonna reduce. Okay, so the memory footprint went up You can see it here and after we hit delete and empty cache, you're gonna see how it kind of dips down Hopefully, I'm not sure what's going on Why is my gpu not reacting here? Not sure in any case Let's continue so now we have the initialized decoder part i'm gonna hit Step over and we enter the init decoder So again, it's already downloaded so we don't care about that part Here we construct it and again, we'll load the weights a similar structure as as with the encoder Now we are passing the image vocab, etc. So let's enter the constructor here So let's see what's the difference? So here we form again the embedding table this time. We have the image vocab size plus one Because let me just think here because they also have the they pass this this beginning of sentence token And that's the additional token that they need in addition to the image to image code book So I think they mentioned this somewhere in the in the report Let me show you this so basically It's called bos token Okay, so they say here at inference time blah blah. So a bos token is fed through the bar decoder And yeah, that's so that's basically why there is plus one as I said not not the cleanest way to do something But yeah, this is just a port of the original repo Now we have we form the table for the position in encodings and there is only 256 of those because as you recall from the paper part It's going to the latent space is going to be 16 times 16 of those basically vectors That are then going to be fed into dvq again decoder. Okay, let's continue here So we now generate a bunch of decoder layers And the coder layers have additionally the cross attention component because they are the cross attention component because this is again encoder decoder transformer so, let me see whether it makes sense to To kind of show you that part Let me just think Okay, i'm gonna put a break point into the mid part here and we can maybe do a single pass Here because they're doing something interesting With this attention state we'll see that a bit later And here as well i'm going to put the break point here And now let's continue. 
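Condensing the encoder forward pass we just stepped through (the pad id really is hard-coded to 1 in the port; everything else here is simplified, with illustrative names):

```python
import torch

def bart_encoder_forward(text_tokens, tok_embed, pos_embed, layers, final_ln, pad_id=1):
    attention_mask = text_tokens != pad_id                   # True wherever there is real content
    positions = torch.arange(text_tokens.size(1), device=text_tokens.device)
    state = tok_embed(text_tokens) + pos_embed(positions)    # token + positional embeddings
    for layer in layers:                                     # 24 transformer blocks for mega
        state = layer(state, attention_mask)
    return final_ln(state)                                   # (2, 64, 2048) encoder state
```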
Okay, so let's hit the first decoder layer It's again Simply there is a self-attention part There is the decoder there is a cross attention part and there is the feed forward part So the glue variant in this particular case, so that's pretty much it So i'm going to now remove the break point from the init And let's just generate a bunch of these layers layers oops Let's now do that and we do that for 24 times again. I think let me just print the Layer count 24. Okay, so there is 24 decoder layers as well Okay, we have some layer norms nothing fancy and finally we have we take the final embedding vectors and we map them into the The space that has this dimensionality because remember we now want to sample those image tokens That's why we need to to to go from 2048 which was the internal model dimension. We want to go into into this dimension here Okay, let's continue on here And that's it. Okay, that's pretty much it. We load the weights again 24 layers of decoder blocks and finally then we we map into the image vocab Size dimensionality space output space so that we can sample from it later Okay Again, we delete the parameters. We put the decoder on top. We push it to our to my gpu. So let's see whether Something changes here. Nope Uh, it's a bit different behavior compared to what I've seen before usually there is a spike um Bigger than this one happening and i'm tracking this uh slot here by the way Okay In any case now comes the interesting part. So here we are going to now start sampling from the decoder Those image tokens and this one is already trained So everything it samples is going to be meaningful and then we're just going to pass it through the vq GAN decoder and that's the that's the end of the program. Okay, let's go here. So um We're dealing with float 32. So this doesn't make much sense, but with float 30, uh float 16 Uh, this kind of puts it puts all the operations inside of this context in the float float 16 regime which helps us save some memory So in the case when you have multiple images Uh, this expanded indices make sense because you will not replicate The the textual captions for multiple images because you'll be generating multiple images for one Particular caption, but we don't care about that here so we just basically Grab the uh encoder state and the and the and the text tokens and those are of dimensionality So this is going to be 264 if you recall and this one is going to be I guess uh to 2000 something 2048 because that's the Output of the encoder, right? So to oh actually yeah. Yeah 264 2048 because we just embedded this into this dimensionality here. Okay, so we formed the mask again This is the padding token. So mask is going to be true for for the non-pad tokens Uh, here is the we form this attention state which is gonna contain the Keys and values from each of the layers and for each of the positions in the bart decoder That's just how they decided to implement this and finally image tokens is gonna contain That's the output array that's gonna contain two 256 tokens Plus one because the first one will be the I think yeah the bos Uh token the beginning of uh sentence token. 
So let's continue we're gonna initialize this um Vector initially as you can see here with this value here, which is going to be the beginning of sentence uh tokens id so let me show you what That looks like i'm going to take the first five elements and all of them are going to be equal like this So that's how how they initially Um, well initialize the image tokens and then later we're gonna keep on as we produce the image tokens We're gonna feed them back into this image tokens array. Okay So nothing fancy there Uh continuing on we form these uh token indices And we have some settings like temperature is set to one top k 256 superconditioning factor Which is used for the classifier free, uh Uh guidance part and that's it. Those are the settings and now we start iterating basically a for loop 256 times we're gonna generate a novel image vector Uh, and uh, basically then we're gonna feed that into the vq-gan decoder as I said multiple times already so We care about this part of the loop because this one we'll enter this one once we have 256 tokens Um, we'll enter the image grid from tokens function So let's focus on this part here. So what happens here as you can see whatever we take the image So i is initially zero, which means we initially feed the b.os So the beginning of of of sentence token And we feed the output result into into the first slot of this image tokens And that's how we start populating this the image tokens Array again, its dimensionality is one plus 256 It's precisely because of this behavior here We pass the tokens we pass the attention mask the encoder state the attention state Which is going to contain the keys and values from the decoder And that's it and we we pass the token indices. So we basically pass Where we are in the generation? uh In the generation sequence, okay in the generated sequence. Okay. Now let's step inside of the uh forward function of the bar decoder so We grab the number of images uh, we then basically Make sure that the batch dimension is corresponds to to the super conditioning Thingy, so it's going to be two Okay, so it's going to be two there We grab the the list of previous tokens uh, so previous tokens dimensionality is Uh one so that's going to be initially just a b.os token and then Because we only have a single image. This is just going to replicate Uh the previous tokens we're going to have two b.os tokens And that's it Again, the reason being the super conditioning That they are doing so the classifier free guidance thingy for autoregressive models Uh some clamping. I don't think this actually is needed And then so now we do the embedding but remember this time. This is the these are the image embeddings Okay, so these are the as you can see here. These are this is the embedding table This is a separate vocabulary compared to the text that we use for the bar encoder part. Okay, so we do the encoding Then we add the positional Encoding here And now we have the coder state. It's going to be I guess What two? One 2000 something maybe Yeah, no two two two thousand forty eight makes sense Okay, because we have b.os a single token we have two because of super conditioning and we just embedded them into this Dimensionality here. So all of that makes sense. Let's continue on here Now we add the additional dimensions now. It's going to be 21 2048 and now we just iterate through the decoder layers of this decoder So we pass the decoder state The encoder state because we have the cross attention. Remember we passed the attention state. 
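Before unpacking the attention state, here is the overall shape of the 256-step sampling loop we just entered, heavily condensed; `decoder_step` stands in for one forward pass through the BART decoder plus the sampling logic we will see in a moment.

```python
import torch

@torch.no_grad()
def sample_image_tokens(decoder_step, encoder_state, attention_mask, bos_id, steps=256, device="cuda"):
    image_tokens = torch.full((steps + 1,), bos_id, dtype=torch.long, device=device)
    for i in range(steps):
        image_tokens[i + 1] = decoder_step(        # next image token, fed back in on the next iteration
            prev_token=image_tokens[i],
            encoder_state=encoder_state,
            attention_mask=attention_mask,
            token_index=i,
        )
    return image_tokens[1:]                        # drop <bos>; these 256 tokens go to the VQ-GAN decoder
```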
So attention state is going to be Basically um number of layers uh, and then four because we're going to keep keys and values and times two because we have the superconditioning and then finally we're going to have 256 and 2048 so again 24 because we have 24 layers four because we are saving key and value For each of the layers but times two because we are using Uh two sequences the one with uh empty sequence and the one with the actual caption because of superconditioning and then 256 is because we have 256 image tokens in the decoder and 2048 because that's the model dimension. Okay, just kind of breaking down. The shape is always useful we pass the attention mask for the Input caption and we pass the token now, let's hit the forward function here So here we just generate the attention mask depending on where we are in the generation process So because token token index currently is zero We are we're just we're we're just have passed the b.os. So the beginning of sentence token And that's why the self-attention mask will currently just be active in the first In the zero slot and everything else is false. So you can see here That the first will be true and then everything else is false is false and then we just kind of make sure that we have batch dimension of two because of superconditioning And then we just pass the equator state through the layer normalization. Nothing fancy there and now we pass it through the Attention part. So i'm gonna skip the attention part is fairly. Um Well, actually, let me see This is where we will store the keys and values. Let me show you this part So let me just see whether I have uh Okay, so we have a self-attention we have a break point there so i'm gonna hit That break point and let's see what we do. So we store the keys and values And let's see what their dimensionality is. So we have two one 2048 and the same for for a value vector We basically concatenate them and then we store it inside of this attention state And that's it. And now grabbing from the attention state We're going to grab all of the previous keys up to this token Which in this particular case is only these keys and values because we are just starting the generation But that's how the how the logic is implemented using this attention state We're basically fetching all of the previous keys and values and that's it And now we just pass it through the we just do the attention logic and we return back here That's it now we have the uh cross attention part we passed the decoder and encoder Uh, and that's it. Everything else is standard transformer logic. We pass it now through the glue and that's a single step through the Decoder layer i'm gonna now remove the break points here I'm gonna remove the break point here as well And let's continue on here I'm gonna hit f5 we went through all of the layers and now we pass it through uh, again Uh, this final l n is just like layer norm Uh, and finally here is where the mapping happens So here we're gonna take the vector that's 2 1 2048 and we're gonna map that into the corresponding into the image space. So basically now the logits Will be of 2 1 16 000 416 Because that's the basically the image space Dimensionality we grab the settings such as top k temperature is super conditioning vector now this part is kind of um Not completely clear because it's so so hard coded. 
So let me try and uh, break it down so first we grab all of the um Our long the batch dimension we grab everything and that makes sense we take minus one because we only only only want to we only care about the last the logis from the from the Um from the last token because we're trying to sample from there that makes sense, but this part doesn't make sense. Why is it? um this number Which is smaller than the number here And my hypothesis is that well it might be that the original vq. Gan checkpoint they used Um had a bigger vocab compared to their target fine tuning After they've done the fine tuning of of the gans not completely clear About about this part in any case. So here is the super conditioning part So we grab the logits and as you can see here we grab now we grab the so these are the logis that correspond To the empty sequence and these are the logis that correspond to the actual caption And we do the logic we saw in the diffusion models as well and that's how we combine the logis to To form the final Supercondition distribution here. Okay, so I think let me just check this Fairly sure so this is this reverse half wings Twitter profile I think was the first one to have suggested this type of super conditioning for Uh, ultra-regressive models. So so she said you can apply a similar trick to classify our free guidance to other regressive transformers To sample from a synthetic supercondition distribution Uh blah blah blah. I trained to try this and you can see how well, basically you trade off the variance, uh, with the um Quality of the images And finally, this is what we do So you take the unconditional logits and you just basically add up on top of that This difference times the conditional scale. So this is the same expression. We are now using in in the code Okay, let me go back to the code here So that's basically if you cannot decompose this you get that same expression um Now we do some this is the top case sampling part So we do we sort and we sort in a descending fashion such that the first Logit will be the biggest one and then they start descending Okay, and then is kept so we take the original logits and we find basically the 256th biggest logits here because here's a sorted one and because we have 256 here, so we're going to grab the 255th Biggest logit and use that as a threshold and all of those logits which are bigger than that threshold are kept That's how the top case sampling works Then we do some uh, this is just for for stabilization purposes temperature Exponential and then we just multiply with uh with this mask. So again, we are just keeping those logits that are that are Basically big enough where big enough is defined by the top k parameter. Okay, and Finally, we just sample from the distribution and we get the image token. Okay, so we now have image token here That's the that's the token we sampled and in the next iteration of this loop Basically now we're gonna feed that token as the input and then we're gonna keep on autoregressively generating the image tokens So now i'm gonna try and skip all of this So i'm gonna go and disable all breakpoints and i'm gonna enable only this one here I'm gonna hit f5 and i'm gonna let it run until we generate all of the 256 image tokens basically Okay, here we are uh now let's enter the Vq.GAN decoder As you can see here, we only pass we ignore the first token in the image tokens because that was the BOS So that was the beginning of sentence token. So we ignore it. So let's jump here. 
I'm gonna uh enable all of the breakpoints Let's see where I have a breakpoint here. Yes, I do So let's continue doing this Okay, here we are We delete the decoder that's gonna hopefully release some memory and empty we empty the cache So here finally we have this tip in memory This is what i'm used to and i'm not sure why that didn't happen previously when we were deleting the encoder Um, yeah, that was weird in any case now we can see it here. Okay, so we now initialize the um Detokenizer or the Vq.GAN decoder So let's see what I have. Yeah, I have a breakpoint there again similar structure as before Uh, we create the tokenizer object and then we just load the parameters. So let's see how it looks like Again, we pass it the vocab count and the embedding count So it's gonna have 256 tokens and this is the vocab count which corresponds To the this is the same Size as what the bar decoder had right? Okay, we form the embedding table Uh, just some processing layers and decoder. I'm gonna ignore how the architecture looks like It's basically a bunch of like up sampling layers and attention blocks and resonant blocks Nothing super, um I guess Informative so we just have resonant blocks some attention blocks which literally do attention uh treating image tokens as as well Treating images as as sequences and thus we can just apply attention Um, just some up sampling As I said nothing worth digging deeper into to be honest Uh, you can go and check it out at your own pace if you want. Okay, so we now load the parameters And we push the the tokenizer that we can again on the gpu And now we do the forward pass through the uh, this is the interesting part Not that complex but still interesting. So we have z that these are 256 tokens. So is that Shape is 256 26 again, i'm not sure why they're doing the clamping because we we we are certain that this will be inside of these boundaries already So there is some weird thing happening there Uh, that's a consequence of that cropping we saw in the when the with the logits so that that might explain it Okay, grid size is one because uh, as I said, we're generating just a single image and now there is a difference whether um Well, basically the the shape of this, um, how do we feed the the the image tokens into the actual convolutional decoder part? And we're going to enter this branch because the seamless is set to to false Uh, and we just embed using the uh embedding table embed our tokens and I assume this is the same Embedding table as the one that we had in the bart decoder. That's that should be shared weights and now we just Do this view thingy and we end up with 16 16 256 And that's precisely the latent space volume that we saw in the papers that we're going to now feed into dvq again uh convolutional, uh decoder part, okay Some permutation some processing with the cnn Basically, this is a like a single, uh channel cnn Sorry single, uh, the spatial extent of the cnn is one times one and then we just up sample This this image and we end up finally with uh z of shape Uh 256 256 and three channels and that's our image and now we do some clipping Uh renormalization back into the 0255 range And that's it Some again manipulations on the shape and let's see what we whoops. Let's see what we end up with here So 256 256 3 so that's our rgb image and that's it guys basically, we now uh push the image we convert it into uh Unsigned integer, uh 8-bit format. 
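Putting the detokenization path together, roughly (the exact output normalization depends on how the VQ-GAN decoder was trained; this sketch assumes outputs in [-1, 1]):

```python
import torch

@torch.no_grad()
def tokens_to_image(image_tokens, codebook_embed, conv_decoder):
    z = codebook_embed(image_tokens)                  # (256, D): one embedding per sampled image token
    z = z.view(1, 16, 16, -1).permute(0, 3, 1, 2)     # back to a 16x16 latent grid, channels first
    x = conv_decoder(z)                               # upsampling CNN -> (1, 3, 256, 256)
    x = (x.clamp(-1, 1) + 1) * 127.5                  # map to the 0..255 range
    return x[0].permute(1, 2, 0).to(torch.uint8).cpu().numpy()   # (256, 256, 3) RGB array
```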
We then push the image onto the CPU from CUDA, convert it into a NumPy array, and yield back a PIL image, and that's it; then we just save the image. There is also this print-ASCII-from-image function, which is actually quite neat: it makes sense when you're running this from a console, since it just prints the image in ASCII format. Okay, that's it. Let me step over this and we are pretty much done; that's the end of the program. Let me now show you the generated image. I should have it somewhere here, let me open it up. And here it is. What was our caption? Our caption was "a photo of a funny dog", and here is a dog. I'm not sure whether it's funny, but basically that's it. Let me try a different caption and I'll get back to you once it's done, maybe something a bit more complex. This might break DALL-E mega, but it's worth a shot: "a photo of a funny dog riding a bicycle". Let me run this one and let's see what we get. Okay guys, here is the actual image. It's kind of weird: you can see it's trying to merge the concept of a dog with the concept of a bicycle, but the model is still not expressive and big enough to achieve that, so scale is vital for these types of models. And that's it. Do let me know whether you found this video useful, and as usual leave any feedback for me down in the comments below, and until next time. Bye bye.
[{"start": 0.0, "end": 5.62, "text": " What's up guys in this video I'm continuing on with the machine learning coding series and in this one"}, {"start": 5.62, "end": 12.26, "text": " I'll be covering the Lee mini. So that's the open source implementation of the of open AI's"}, {"start": 12.64, "end": 17.1, "text": " Dali model and when actually say I'll be covering the Lee mini what I mean"}, {"start": 17.1, "end": 22.78, "text": " I'll be cover covering this minimal port of the Lee mini that's inference only and that's"}, {"start": 22.78, "end": 29.54, "text": " Basically written in pytorch and the reason is it's much easier to go through this one and to set it up on my"}, {"start": 29.92, "end": 35.78, "text": " Windows machine, so we'll start with this one, but in the next video. I'll actually be covering the full the original"}, {"start": 36.56, "end": 40.480000000000004, "text": " Jack's code base of the Lee mini which is much more involved"}, {"start": 40.96, "end": 43.92, "text": " But this video will give you enough context to know what's going on"}, {"start": 45.2, "end": 47.0, "text": " Okay, so as"}, {"start": 47.0, "end": 53.04, "text": " Usually I'll first give you the additional context in the form of explaining the papers behind this"}, {"start": 53.64, "end": 60.6, "text": " And one thing that's very important to understand is that the Lee mini is not actually an implementation of the Lee model"}, {"start": 60.6, "end": 64.03999999999999, "text": " It's more of a mixture and let me open up my one note"}, {"start": 64.28, "end": 70.96000000000001, "text": " So it's more of a mixture of this paper called VQ GANs or taming transformers for high resolution image synthesis"}, {"start": 70.96000000000001, "end": 73.2, "text": " So that's the paper that introduced the VQ GANs"}, {"start": 73.2, "end": 77.92, "text": " And I've covered this one and all of these papers actually pretty much in in my previous videos"}, {"start": 77.92, "end": 80.08, "text": " So I'll be linking those as we go along"}, {"start": 80.44, "end": 88.44, "text": " So this is the VQ GAN paper, and they also leverage this model which is encoder decoder transformer model called BART"}, {"start": 89.24000000000001, "end": 96.4, "text": " And they use some modification of BART. They use this glue from Noam Shazir. We'll see all of those details a bit later and"}, {"start": 97.48, "end": 101.0, "text": " Here is the original the Lee paper, and then I'm gonna kind of contrast"}, {"start": 101.0, "end": 106.0, "text": " What the Lee mini is with the actual the Lee paper?"}, {"start": 107.24, "end": 114.12, "text": " They also have like bits and biases also have a report which will like literally skim through I'm gonna show you"}, {"start": 114.8, "end": 118.44, "text": " Some of the they actually have very nice training and inference"}, {"start": 119.12, "end": 122.02, "text": " Diagrams here how the Lee mini works and they also have"}, {"start": 122.84, "end": 128.96, "text": " Comparisons between the Lee and the Lee mini and you can see there is quite a lot of bullet points here"}, {"start": 128.96, "end": 134.38, "text": " So I'm gonna go through all of those a bit later. Okay, so let's first start with with actual papers"}, {"start": 135.56, "end": 140.88, "text": " I'm gonna skim. I'm gonna literally just give you the gist because otherwise there's like four papers"}, {"start": 140.88, "end": 145.08, "text": " That's that would take too much time. 
So let me go straight to it"}, {"start": 145.44, "end": 149.10000000000002, "text": " So this is the VQ GAN paper and as I said, I previously already covered it"}, {"start": 149.10000000000002, "end": 153.04000000000002, "text": " So if you want a deep dive go ahead and check it out. I'm gonna link it somewhere here"}, {"start": 154.0, "end": 156.0, "text": " Okay guys, so again on the high level"}, {"start": 156.0, "end": 161.12, "text": " Keeping a high level because I've covered this in great depth in a separate video"}, {"start": 161.64, "end": 164.92, "text": " We have two components of VQ GAN basically"}, {"start": 166.0, "end": 168.66, "text": " One is the learning the visual"}, {"start": 169.48, "end": 171.68, "text": " Codebook here and that's this"}, {"start": 172.64, "end": 174.16, "text": " component here and"}, {"start": 174.16, "end": 181.08, "text": " Then the second part is after you learn the visual codebook. Then you basically train a transformer model"}, {"start": 181.08, "end": 186.68, "text": " to just to autoregressively model the latent space of visual vectors here"}, {"start": 186.68, "end": 194.0, "text": " So this is the transformer part. So basically as I said, we have the first part here and we have the transformer part here"}, {"start": 195.04000000000002, "end": 199.28, "text": " So how this is trained is very similar to VQ VAE again"}, {"start": 199.28, "end": 205.92000000000002, "text": " That's a paper I've covered before and a couple of differences are that they are using this thing called reconstruction loss"}, {"start": 205.92, "end": 210.39999999999998, "text": " And the second thing is they're using discriminator here"}, {"start": 210.39999999999998, "end": 216.51999999999998, "text": " So basically adversarial approach and because of that the images from VQ GAN are much crispier"}, {"start": 217.16, "end": 222.64, "text": " Compared to what you get using VQ VAEs, which is something that actually the lead version one"}, {"start": 223.39999999999998, "end": 225.35999999999999, "text": " was using"}, {"start": 225.35999999999999, "end": 231.51999999999998, "text": " Quickly running through the pipeline here. What you do is you feed an image here you then encode it into a"}, {"start": 231.52, "end": 236.16, "text": " Lower result into a lower dimensional latent space here"}, {"start": 236.28, "end": 243.20000000000002, "text": " So you end up with some vectors as you can tell here and then what what what you do is you kind of snap them"}, {"start": 243.52, "end": 245.52, "text": " onto the closest"}, {"start": 245.68, "end": 247.52, "text": " visual"}, {"start": 247.52, "end": 253.68, "text": " Vector that you learn in this in this codebook so you can see that for example for this vector here the closest vector"}, {"start": 254.28, "end": 259.44, "text": " In as you can see here L2 by using the L2 as the definition of closeness"}, {"start": 259.44, "end": 267.0, "text": " You snap it onto a vector number one. So that means this one was closest to this one to this vector"}, {"start": 267.8, "end": 270.78, "text": " Number one from the visual codebook. 
Okay"}, {"start": 271.36, "end": 276.2, "text": " So once you do that for all of the vectors from your low dimensional space here"}, {"start": 276.2, "end": 278.8, "text": " You snap them onto discrete vectors here"}, {"start": 278.84, "end": 283.72, "text": " then you pass them through a convolutional decoder to get the image out and"}, {"start": 284.15999999999997, "end": 287.14, "text": " Then again, I'm going to show you the losses in a second"}, {"start": 287.14, "end": 294.0, "text": " But you train them using a couple of losses such as a reconstruction loss the perceptual loss and the and the"}, {"start": 294.47999999999996, "end": 298.52, "text": " Adversarial loss here and then the transformer part is I guess very self-explanatory"}, {"start": 299.64, "end": 305.96, "text": " Once you have this whole pathway here that allows you to map your data set of images"}, {"start": 306.47999999999996, "end": 309.36, "text": " into a data set of basically"}, {"start": 309.88, "end": 312.8, "text": " tokens because you can flip now this in our"}, {"start": 312.8, "end": 318.44, "text": " Raster order and you end up with you can directly map your image into a sequence of tokens"}, {"start": 318.56, "end": 324.08, "text": " Then you basically can just train your transformer on this new data set of"}, {"start": 324.68, "end": 331.62, "text": " Basically sequences and that's how you learn how to well the data distribution of interest"}, {"start": 332.40000000000003, "end": 336.88, "text": " Okay, so that's that's it on a high level here a couple of formulas"}, {"start": 336.88, "end": 342.0, "text": " I'm going to show you so basically as I said, what do you do is you snap your your vectors? So"}, {"start": 342.0, "end": 349.16, "text": " Basically and I have some glitch with my one node so you can see you'll you'll be seeing this thing kind of staying here"}, {"start": 349.64, "end": 352.88, "text": " Unfortunately, but yeah for whatever reason one out is glitching today"}, {"start": 353.28, "end": 359.3, "text": " So here is the vector that you estimate then you snap it on to the closest codebook vector here"}, {"start": 360.04, "end": 366.92, "text": " That's that the quantization step here. 
So you first encoded then you quantize it and then you have the"}, {"start": 366.92, "end": 371.8, "text": " Basically decoder part and that's how you get the reconstructed image"}, {"start": 372.2, "end": 376.76, "text": " X hat out and with that notation under our belt here is the actual loss"}, {"start": 377.32, "end": 381.0, "text": " The weak you lost is just as you can see here are some of couple of losses"}, {"start": 381.0, "end": 384.36, "text": " One of them is reconstruction loss, which is L2 in this case here"}, {"start": 384.36, "end": 388.8, "text": " And then there there are a couple of these losses which they I think they call commitment loss"}, {"start": 389.32, "end": 393.12, "text": " This one so the idea as you can see here is you put a stop gradient?"}, {"start": 393.12, "end": 400.52, "text": " in this particular case here on on top of your codebook vector and then you try and and and make sure to"}, {"start": 400.8, "end": 406.08, "text": " Make that to make your vectors closer to the codebook and here is the opposite here"}, {"start": 406.08, "end": 410.46, "text": " You you kind of put a stop gradient operator on top of your estimated"}, {"start": 411.56, "end": 417.8, "text": " Vectors and then you try and make your codebook closer to the output of your encoder as you can see here"}, {"start": 417.8, "end": 424.28000000000003, "text": " So that's the that's that and then the VQ again actually adds a couple of modifications"}, {"start": 424.6, "end": 430.96000000000004, "text": " So first one is they replaced the L2 loss as you can see here for the"}, {"start": 432.40000000000003, "end": 434.08000000000004, "text": " perceptual loss and"}, {"start": 434.08000000000004, "end": 438.48, "text": " They introduce another serial training procedure as well with the patch based discriminator"}, {"start": 438.48, "end": 444.56, "text": " So you don't have a single scalar you have as you can as you could saw see on the on the image here"}, {"start": 444.56, "end": 451.6, "text": " You actually have basically you're doing a patch based discrimination of whether that patch is real or or false"}, {"start": 452.84, "end": 458.84000000000003, "text": " Stemming from real or false image and that helps additionally that can increases the bandwidth of the learning signal"}, {"start": 459.4, "end": 461.04, "text": " compared to just using a"}, {"start": 461.04, "end": 462.96, "text": " classical scalar"}, {"start": 462.96, "end": 466.16, "text": " Okay, and that's pretty much it. 
That's what we can changes"}, {"start": 466.16, "end": 471.88, "text": " So we have the VQ component and then we have now the GAN component and again"}, {"start": 471.88, "end": 476.2, "text": " The difference here is that they are using the perceptual loss if you're not familiar with what that is"}, {"start": 476.52, "end": 478.8, "text": " Check out some of my neural style transfer videos"}, {"start": 478.92, "end": 486.08, "text": " But basically the idea is to take like a pre-trained convolutional neural network such as like a maybe VGG pre-trained on like image net"}, {"start": 486.08, "end": 493.4, "text": " Or something and then you instead of doing the L2 in the pixel space of the of your reconstructed image"}, {"start": 493.52, "end": 496.2, "text": " you actually feed it inside of that like a"}, {"start": 496.2, "end": 504.2, "text": " Network and then you do L2 in the latent space of that network where the features are much more abstract and you're you're basically"}, {"start": 505.2, "end": 507.2, "text": " trying to find"}, {"start": 507.32, "end": 513.0, "text": " Well, you're basically capturing the semantics much more than than than high level noise in the picture space"}, {"start": 513.8, "end": 516.8, "text": " Again refer to my video if you want to know more details"}, {"start": 516.84, "end": 521.6, "text": " But this will be enough more than enough for the for the for the code walkthrough"}, {"start": 521.6, "end": 526.76, "text": " Okay, guys, so let me show you one more thing in this paper and that's conditional"}, {"start": 527.36, "end": 528.5600000000001, "text": " synthesis"}, {"start": 528.5600000000001, "end": 537.0400000000001, "text": " So dimension here. So if the conditioning information C has spatial extent we first learn in other VQ GAN to obtain again an index-based"}, {"start": 537.96, "end": 539.96, "text": " representation as you can see here"}, {"start": 540.96, "end": 544.12, "text": " So you'll have what they say here you they'll have"}, {"start": 544.12, "end": 552.16, "text": " A height times width of these vectors where the set of vectors has is this big there is like"}, {"start": 552.8, "end": 554.8, "text": " big Z sub C"}, {"start": 555.28, "end": 558.92, "text": " Number of those vectors. So that's the size of the code book for that"}, {"start": 560.2, "end": 566.32, "text": " Condition conditional information. So they're there in concrete. Let me give you a concrete example here"}, {"start": 566.32, "end": 568.32, "text": " This could be maybe like"}, {"start": 568.6, "end": 570.6, "text": " semantic segmentation"}, {"start": 570.6, "end": 576.96, "text": " Maps that you can then basically learn a VQ GAN on top of that type of a data set. Okay"}, {"start": 578.08, "end": 583.4, "text": " Well blind then you have this newly obtained code book and then due to autoregressive structure of the transformer"}, {"start": 583.4, "end": 587.62, "text": " we can then simply prepend R to S which is the"}, {"start": 588.36, "end": 592.8000000000001, "text": " Well, that's your your your your image from your target data set and R is"}, {"start": 593.28, "end": 597.22, "text": " stems from the conditioning information and then basically you do the"}, {"start": 597.22, "end": 602.26, "text": " Negative log likelihood to these entries. So in case that was not that clear"}, {"start": 602.26, "end": 605.46, "text": " Let me quickly draw it here. 
So again, imagine you have"}, {"start": 606.1, "end": 608.4200000000001, "text": " Basically some semantic segmentation"}, {"start": 609.22, "end": 614.0600000000001, "text": " Masks here. So maybe something like this. I'm gonna kind of try and"}, {"start": 614.82, "end": 616.82, "text": " draw it somehow so"}, {"start": 618.82, "end": 624.4200000000001, "text": " This is some type of a semantic segmentation image and you have a big data set of such images"}, {"start": 624.42, "end": 629.9, "text": " Big data set of such images and now what you do is basically train a VQ GAN"}, {"start": 631.5, "end": 635.6999999999999, "text": " Such that you'll end up with a particular code book"}, {"start": 637.4599999999999, "end": 641.78, "text": " For that vector for for that data set so this is going to be your your code book"}, {"start": 643.2199999999999, "end": 645.2199999999999, "text": " For this particular"}, {"start": 645.86, "end": 650.62, "text": " Image here, so you then encode it you end up with basically"}, {"start": 650.62, "end": 652.62, "text": " Well, you end up with"}, {"start": 653.02, "end": 659.34, "text": " HC times WC vectors you then flatten them out and"}, {"start": 659.82, "end": 661.82, "text": " You end up with a sequence and"}, {"start": 662.3, "end": 667.82, "text": " This sequence that which they denote as R is something you can prepend while training"}, {"start": 668.38, "end": 670.38, "text": " on your on your data set"}, {"start": 670.62, "end": 676.94, "text": " Okay. So now obviously for each of these images you have an actual image you care about which is the"}, {"start": 676.94, "end": 682.7, "text": " Image from your from the data set you're trying to for which you're trying to learn the data distribution off"}, {"start": 683.1, "end": 687.1800000000001, "text": " So that there will be some image here, which is actually original image"}, {"start": 687.1800000000001, "end": 694.0600000000001, "text": " And this is its semantic segmentation and then you do the same thing here. So you'll you first learn the VQ GAN component"}, {"start": 695.2600000000001, "end": 698.3000000000001, "text": " So something like this and I'm really terrible at drawing today"}, {"start": 699.0200000000001, "end": 701.9000000000001, "text": " And and then you encode it here"}, {"start": 701.9, "end": 707.66, "text": " You flatten out this sequence. I'm going to denote it with a different color. 
So we now have a sequence here"}, {"start": 709.26, "end": 712.62, "text": " And now how you're going to train is you're going to prepend this vector R"}, {"start": 713.26, "end": 717.9, "text": " so we'll you'll grab the vector R from the semantic segmentation map and"}, {"start": 718.38, "end": 721.98, "text": " Here you have the target image which they denote as S"}, {"start": 722.6999999999999, "end": 725.98, "text": " And now you just collect these tuples for your whole data set"}, {"start": 725.98, "end": 732.54, "text": " So you basically collect this and then you learn how to to model that type of uh of of of sequence"}, {"start": 732.86, "end": 735.82, "text": " And so how do you later when you want to generate?"}, {"start": 736.46, "end": 742.38, "text": " A new image you literally you literally grab an arbitrary semantic segmentation"}, {"start": 743.26, "end": 744.46, "text": " map"}, {"start": 744.46, "end": 748.14, "text": " And you you pass it through the encoder here"}, {"start": 748.62, "end": 750.3000000000001, "text": " you fetch the"}, {"start": 750.3000000000001, "end": 753.26, "text": " This vector R and then you pass it through the"}, {"start": 753.26, "end": 759.5, "text": " Decoder you fetch the uh this vector R and then you put it inside of your transformer"}, {"start": 760.62, "end": 762.62, "text": " So here you'll have a transformer"}, {"start": 763.5, "end": 765.5, "text": " You put the R here"}, {"start": 765.9, "end": 767.9, "text": " And then you ought to aggressively"}, {"start": 768.62, "end": 770.62, "text": " keep on"}, {"start": 771.34, "end": 774.9399999999999, "text": " Generating the sequence and then you pass that through the decoder here"}, {"start": 775.26, "end": 780.46, "text": " So once you generate that sequence here you pass it through the decoder and out comes the image"}, {"start": 780.46, "end": 783.5, "text": " So this will maybe be in practice like 256 tokens"}, {"start": 784.14, "end": 787.26, "text": " Which you can then rearrange into 16 by 16"}, {"start": 788.38, "end": 790.38, "text": " Tokens something like this"}, {"start": 792.3000000000001, "end": 797.58, "text": " And then that's that's like that's the that's the low uh dimensional representation"}, {"start": 797.58, "end": 801.5, "text": " And then you can as I said you pass it through the decoder and you get your image out"}, {"start": 801.5, "end": 805.4200000000001, "text": " So why I took some time to explain this is because um"}, {"start": 805.42, "end": 811.8199999999999, "text": " Dali and and uh dali mini, uh use a very similar approach instead of uh, and instead of using the uh"}, {"start": 812.38, "end": 818.9399999999999, "text": " Semantic segmentation masks they're actually using text but the logic is super similar. So just instead of um"}, {"start": 819.74, "end": 822.54, "text": " uh doing this whole thing you um"}, {"start": 823.3399999999999, "end": 826.38, "text": " Pass a piece of text that's associated with your image"}, {"start": 826.86, "end": 833.0999999999999, "text": " Here and then you learn how to model them together and that's it. So literally by doing that you learn the um"}, {"start": 833.1, "end": 835.1, "text": " You learn the data distribution"}, {"start": 835.82, "end": 837.82, "text": " uh p"}, {"start": 837.82, "end": 844.0600000000001, "text": " x y and that's where the bart paper comes into picture. So that's that's where this paper comes into play"}, {"start": 844.5400000000001, "end": 847.1, "text": " I'm gonna explain to you. 
Whoops"}, {"start": 847.66, "end": 851.34, "text": " Um, i'm gonna explain to you in a second like it's a it's a very simple"}, {"start": 851.34, "end": 855.1, "text": " It's basically like an encoder decoder transformer. There is not too much to say there"}, {"start": 855.34, "end": 855.98, "text": " Okay"}, {"start": 855.98, "end": 861.74, "text": " So again how bart will come into play here is instead of doing this pipeline with semantic segmentation masks"}, {"start": 861.74, "end": 863.74, "text": " You'll instead have uh an image"}, {"start": 864.78, "end": 870.3, "text": " And you'll have associated caption here. So some piece of text here. Okay, so have some piece of text here"}, {"start": 871.98, "end": 878.54, "text": " And you'll basically feed that text through uh encoder part of of uh of bart"}, {"start": 879.02, "end": 881.58, "text": " So let me kind of denote the uh bart"}, {"start": 882.22, "end": 887.82, "text": " Such as this like we have encoder part of bart and we hope we have the decoder part"}, {"start": 887.82, "end": 891.34, "text": " Of bart decoder is obviously attending"}, {"start": 891.98, "end": 896.94, "text": " Uh the encoder part that's how the transformers work. And so you you grab this text here"}, {"start": 897.9000000000001, "end": 902.86, "text": " You basically feed it directly into the encoder part and out comes something from the decoder"}, {"start": 903.58, "end": 909.1800000000001, "text": " Out comes this thing here and now you use this output of the encoder as r and that's it"}, {"start": 909.1800000000001, "end": 915.1, "text": " And now you just again you you'll now have these tuples of uh, this conditioning vectors"}, {"start": 915.1, "end": 920.94, "text": " And your uh images from your data set and you just learn how to model the p x y"}, {"start": 920.94, "end": 928.38, "text": " So the the distribution of your uh, like images and uh and captions and by the way, this is literally this this"}, {"start": 928.62, "end": 932.38, "text": " These two boxes is everything you need to have like that's a mental model of bart"}, {"start": 932.38, "end": 936.0600000000001, "text": " I have when i'm when i'm discussing it. 
So the only thing we need to"}, {"start": 937.1, "end": 940.94, "text": " See to understand how bart works compared to all of the other transformers is"}, {"start": 940.94, "end": 949.74, "text": " I guess this thing here so basically in your regular, uh, for example in inbert model so inbert"}, {"start": 951.0200000000001, "end": 956.46, "text": " What the pre-training objective looked like is you masked out so it's first is bidirectional"}, {"start": 956.46, "end": 961.9000000000001, "text": " So it's only encoder part of the of the uh encoder decoder original transformer structure"}, {"start": 961.9000000000001, "end": 964.62, "text": " Whereas bart is both encoder as well as the decoder"}, {"start": 965.1, "end": 968.1400000000001, "text": " So you're trying to just you just kind of mask out some of the code"}, {"start": 968.14, "end": 973.26, "text": " So you're trying to just you just kind of mask out certain tokens and I think like you do that with like"}, {"start": 973.9, "end": 975.9, "text": " 10 20 percent"}, {"start": 976.92, "end": 978.92, "text": " Probability for each of the tokens"}, {"start": 979.42, "end": 986.06, "text": " And then you try to predict what the token at that particular position was so it's a simple, uh, like a auto encoding"}, {"start": 987.5, "end": 992.14, "text": " Objective, uh, and what's different with bart is you can kind of model"}, {"start": 992.14, "end": 998.3, "text": " You can kind of take a span of of tokens and then mask them out with a single"}, {"start": 998.6999999999999, "end": 1002.14, "text": " Mask token and then make sure to predict the original sequence here"}, {"start": 1002.46, "end": 1005.74, "text": " So because of this thing because of you can mask it out here"}, {"start": 1005.8199999999999, "end": 1011.1, "text": " However, um many tokens you want and then predict them here you have additional flexibility"}, {"start": 1011.1, "end": 1015.98, "text": " Which you don't have with a pure encoder, um, like transformers such as such as bert"}, {"start": 1016.22, "end": 1019.42, "text": " And I think they mentioned it somewhere here. 
Let me find that piece of text"}, {"start": 1019.42, "end": 1027.18, "text": " So the objective is called text in filling and they mentioned here a number of text spans are sampled with span lengths drawn from a post"}, {"start": 1027.34, "end": 1028.7, "text": " Poisson"}, {"start": 1028.7, "end": 1030.46, "text": " distribution with lambda"}, {"start": 1030.46, "end": 1036.54, "text": " Parameter equals three and each span is replaced with a single mask token zero length spans"}, {"start": 1037.08, "end": 1039.6599999999999, "text": " correspond to the insertion of mask tokens and"}, {"start": 1040.7, "end": 1042.3799999999999, "text": " Okay, it's inspired blah blah blah"}, {"start": 1042.3799999999999, "end": 1047.34, "text": " So text in filling teaches the model to predict how many tokens are missing from a span"}, {"start": 1047.34, "end": 1052.4599999999998, "text": " And that's all like it's literally encoder decoder transformer with this particular"}, {"start": 1053.34, "end": 1055.34, "text": " denoising objective"}, {"start": 1055.4199999999998, "end": 1059.98, "text": " And they show a bit later here how it compares against the baselines"}, {"start": 1061.34, "end": 1063.34, "text": " using different types of"}, {"start": 1063.34, "end": 1069.6599999999999, "text": " Like retraining objectives such as just token masking a la bert or token deletion or text in filling"}, {"start": 1069.6599999999999, "end": 1074.3799999999999, "text": " So a bunch of different sentence shuffling document rotation, etc, etc, and they compare"}, {"start": 1075.1799999999998, "end": 1076.9399999999998, "text": " that against the"}, {"start": 1076.94, "end": 1083.18, "text": " Language model so that's just predicting the next token or mask language models such as birth, etc, etc, and they show"}, {"start": 1084.38, "end": 1086.3, "text": " Well competitive results"}, {"start": 1086.3, "end": 1092.06, "text": " Okay, that's the bird part the bard part and then there's a small modification called glue"}, {"start": 1092.06, "end": 1097.18, "text": " Which i'll show you a bit later. So glue variance improved transformer from this author noam schazer"}, {"start": 1097.98, "end": 1102.78, "text": " Who is actually the co-author of the original transformer paper. So it's a very prolific author"}, {"start": 1102.78, "end": 1107.58, "text": " But before that, let me just quickly skim over over dali version one"}, {"start": 1108.06, "end": 1111.34, "text": " Which is which came from the paper zero shot text image generation"}, {"start": 1112.1399999999999, "end": 1113.74, "text": " again"}, {"start": 1113.74, "end": 1116.3, "text": " Super similar pipeline to vq again"}, {"start": 1116.86, "end": 1123.82, "text": " Again, they mentioned here they have two stages. 
So in the first stage they say here we train a discrete variational autoencoder"}, {"start": 1124.54, "end": 1126.54, "text": " to compress each of the"}, {"start": 1126.62, "end": 1131.42, "text": " 256 256 rgb images into 32 by 32 grid of image tokens"}, {"start": 1131.42, "end": 1134.94, "text": " each elements of which can assume"}, {"start": 1136.92, "end": 1144.94, "text": " 8192 possible values so that that means that the codebook size is this big and this reduces the context size of the transformer"}, {"start": 1145.26, "end": 1150.6200000000001, "text": " by a factor of 192 without a large degradation in visual quality because if you divide"}, {"start": 1151.18, "end": 1152.8400000000001, "text": " basically"}, {"start": 1152.8400000000001, "end": 1158.6200000000001, "text": " 256 times 256 times 3 because you have rgb image into"}, {"start": 1158.62, "end": 1163.34, "text": " 232 times 32 you end up with 192. Okay"}, {"start": 1164.2199999999998, "end": 1165.26, "text": " and"}, {"start": 1165.26, "end": 1170.6999999999998, "text": " The same authors of this paper. Well, I think this is from yeah, this is from open.ai"}, {"start": 1170.86, "end": 1176.6999999999998, "text": " So basically they they they previously trained this model called image gpt and in that model"}, {"start": 1176.6999999999998, "end": 1179.9799999999998, "text": " They were training they were autoregressively learning how to model"}, {"start": 1180.6999999999998, "end": 1184.9399999999998, "text": " the the images in the pixel space, which is much more complex because"}, {"start": 1184.94, "end": 1191.5800000000002, "text": " Transformers have quadratic complexity and obviously the image space can can grow like up to like"}, {"start": 1192.14, "end": 1197.98, "text": " Well thousand by thousand images and so it becomes very quickly impossible to model in the pixel space"}, {"start": 1197.98, "end": 1203.26, "text": " So that's why this reduction this mapping into a lower dimensional space here. Okay"}, {"start": 1203.98, "end": 1205.98, "text": " So again, that's stage one"}, {"start": 1205.98, "end": 1209.18, "text": " and it's literally the same as vq again without the"}, {"start": 1209.74, "end": 1213.74, "text": " Perceptual loss and without the adversarial component and then they say here"}, {"start": 1213.74, "end": 1218.54, "text": " In stage two we concatenate up to 256 vpe"}, {"start": 1218.6200000000001, "end": 1222.54, "text": " By pair encoding text tokens with the 32 times 32"}, {"start": 1223.1, "end": 1225.98, "text": " image tokens that come from the stage one and"}, {"start": 1226.46, "end": 1232.7, "text": " Train an autoregressive transformer to model the joint distribution over text and image tokens. So basically what I said here"}, {"start": 1233.42, "end": 1239.5, "text": " Is that so again they they train the vqvae which you can think as as same as vqgan"}, {"start": 1239.5, "end": 1244.22, "text": " So yeah, they have the encoder part. 
They have the decoder part"}, {"start": 1245.18, "end": 1248.3, "text": " They kind of have a low dimensional representation here"}, {"start": 1249.58, "end": 1253.5, "text": " and so what they did what they do instead so so they can kind of"}, {"start": 1254.3, "end": 1257.1, "text": " unroll this in a raster order to end up with a"}, {"start": 1259.18, "end": 1263.82, "text": " Sequence of tokens s so to get r so to get the conditioning vector"}, {"start": 1264.06, "end": 1267.82, "text": " And I don't know what why they're using r like I guess c c makes more sense"}, {"start": 1267.82, "end": 1270.9399999999998, "text": " But i'll just keep on using the same uh terminology"}, {"start": 1271.8999999999999, "end": 1273.8999999999999, "text": " So to get the r component"}, {"start": 1274.3799999999999, "end": 1276.62, "text": " That you'll use to to prepend here"}, {"start": 1277.34, "end": 1280.3, "text": " And then train or progressively train a transformer on top of it"}, {"start": 1280.78, "end": 1283.6599999999999, "text": " Uh, they just use as as you saw here"}, {"start": 1283.8999999999999, "end": 1290.22, "text": " They just have uh, basically they take a piece of text and they bpe encoded so you start with the text"}, {"start": 1290.86, "end": 1294.86, "text": " And then you apply the tokenizer which gives you a sequence of tokens"}, {"start": 1294.86, "end": 1301.82, "text": " It just gives you a sequence of tokens and then I assume they just embed those using like a shallow like a simple"}, {"start": 1302.1399999999999, "end": 1308.78, "text": " Well, you have an embedding table you map that into tokens and then you you place that directly here to start"}, {"start": 1309.4199999999998, "end": 1311.4199999999998, "text": " um modeling whereas"}, {"start": 1311.4199999999998, "end": 1316.3, "text": " Um, as you saw the lini is using bart encoder. So that means they'll have additional"}, {"start": 1316.9399999999998, "end": 1322.54, "text": " Layers of processing before they they output this r vector. So the the conditioning vector"}, {"start": 1322.54, "end": 1327.8999999999999, "text": " Okay, guys, that's that's pretty much it. Um, let me quickly show you what they say here"}, {"start": 1327.98, "end": 1330.94, "text": " There is a lot of complexity hidden in here"}, {"start": 1331.02, "end": 1335.8999999999999, "text": " As I said, I covered this one the lee one, uh, you know in a separate video so you can check it out"}, {"start": 1335.98, "end": 1338.78, "text": " They use stuff such as gumbell softmax relaxation"}, {"start": 1339.5, "end": 1343.18, "text": " To train this model, etc, etc. And also it's worth"}, {"start": 1343.82, "end": 1346.3799999999999, "text": " Keeping in mind. I think they mentioned it somewhere here"}, {"start": 1347.5, "end": 1350.1399999999999, "text": " Let me just show you this so getting the model"}, {"start": 1350.14, "end": 1357.18, "text": " To train in 16-bit precision past 1 billion parameters without diverging was the most challenging part of this project"}, {"start": 1357.5, "end": 1359.5, "text": " so training this um"}, {"start": 1359.74, "end": 1363.66, "text": " Big model well back in back a couple of years ago. 
This was considered a big model"}, {"start": 1364.22, "end": 1371.42, "text": " Was was extremely hard and um, that's something I think boris the the main developer of the um, the lee mini"}, {"start": 1371.9, "end": 1377.66, "text": " Project is also struggling with but he's using jacks and he he has like free tpus"}, {"start": 1377.66, "end": 1379.66, "text": " From google so that kind of helps"}, {"start": 1379.98, "end": 1387.3400000000001, "text": " Because tpus have they did mention somewhere here that tpus have some less problems than than than gpus in certain aspects"}, {"start": 1388.22, "end": 1390.78, "text": " Let me quickly show you one more thing again"}, {"start": 1391.26, "end": 1393.5, "text": " Uh, let me just explain this conditioning part"}, {"start": 1393.5, "end": 1401.42, "text": " So given a text image pair we bpe encode with lowercase captions using at most 256 tokens with woke up size"}, {"start": 1401.98, "end": 1404.8600000000001, "text": " 16 384 and encode the image using"}, {"start": 1404.86, "end": 1407.9199999999998, "text": " 32 times 32 tokens with woke up size 8192"}, {"start": 1412.78, "end": 1416.1599999999999, "text": " The image tokens are obtained using argmax sampling from the dvee"}, {"start": 1417.58, "end": 1424.78, "text": " Via encoded logits without adding any gumble noise. Uh, finally the text and image tokens are concatenated and model"}, {"start": 1425.34, "end": 1429.34, "text": " Regressively as a single stream of data. So that's something we saw multiple times already"}, {"start": 1429.34, "end": 1434.54, "text": " Okay, guys, so that's it. It's a very simple model as you saw here"}, {"start": 1435.5, "end": 1439.58, "text": " Uh, there is a lot of overlap between the ideas between vq gam paper and the lee"}, {"start": 1440.1399999999999, "end": 1441.6599999999999, "text": " Uh version one"}, {"start": 1441.6599999999999, "end": 1445.02, "text": " Um, they took the ideas from bar. They took some other ideas"}, {"start": 1445.1799999999998, "end": 1450.1399999999999, "text": " Uh boris is experimenting a lot as soon as a new paper comes out such as the deep net paper"}, {"start": 1450.54, "end": 1456.6999999999998, "text": " With novel initializations. He tries an experiment with it. 
Okay, so let me now show you the uh report that"}, {"start": 1456.7, "end": 1460.48, "text": " Uh boris wrote with the other other co-authors"}, {"start": 1462.06, "end": 1463.3400000000001, "text": " Again"}, {"start": 1463.3400000000001, "end": 1467.5800000000002, "text": " They have a very nice drawing of the training pipeline of the lee"}, {"start": 1468.38, "end": 1469.3400000000001, "text": " mini"}, {"start": 1469.3400000000001, "end": 1471.3400000000001, "text": " Model as you can see here"}, {"start": 1471.9, "end": 1473.9, "text": " again, they have the"}, {"start": 1474.54, "end": 1478.46, "text": " Bart encoder so you as you can see you you first have these"}, {"start": 1479.1000000000001, "end": 1485.26, "text": " Puppets of images and associated captions then you pass the caption through the bart encoder you get the"}, {"start": 1485.26, "end": 1487.26, "text": " the conditioning vector here"}, {"start": 1487.9, "end": 1493.02, "text": " You take the image you pass it through the vq again encoder you get the image tokens you feed them here"}, {"start": 1493.1, "end": 1495.5, "text": " So you can see here vq again, uh tokens"}, {"start": 1496.06, "end": 1502.78, "text": " And then you pass all of that through a bar decoder and you learn how to autoregressively model these sequences"}, {"start": 1503.34, "end": 1507.18, "text": " Uh simple stuff as for the inference we can see it here"}, {"start": 1507.9, "end": 1512.06, "text": " Uh, so you grab a piece of text you pass it through the bart encoder"}, {"start": 1512.06, "end": 1518.3799999999999, "text": " You feed it into the decoder and then you autoregressively generate the image tokens"}, {"start": 1519.1, "end": 1521.1, "text": " Uh, you can generate multiple"}, {"start": 1521.34, "end": 1527.98, "text": " Uh series of these image tokens because uh, the process of generation is inherently stochastic"}, {"start": 1528.62, "end": 1530.78, "text": " Then you pass all of those through the vq again"}, {"start": 1531.1799999999998, "end": 1534.3799999999999, "text": " Uh decoder you get some associated images"}, {"start": 1534.78, "end": 1539.4199999999998, "text": " So the token sequence from the low dimensional embedding space is projected into"}, {"start": 1539.42, "end": 1542.38, "text": " Uh the image in the pixel space"}, {"start": 1543.74, "end": 1545.74, "text": " And then you can use something like clip"}, {"start": 1546.22, "end": 1548.54, "text": " to filter out the best images"}, {"start": 1548.78, "end": 1554.22, "text": " So you again feed the same caption to clip and finds the image that"}, {"start": 1554.7, "end": 1559.3400000000001, "text": " That is best described by this particular caption you can see in this particular example"}, {"start": 1559.5800000000002, "end": 1565.42, "text": " White snow covered mountain under blue sky during daytime clip found that this image here is the best"}, {"start": 1565.9, "end": 1567.98, "text": " this is is uh"}, {"start": 1567.98, "end": 1570.94, "text": " Well best described by this by this particular caption. Okay"}, {"start": 1571.9, "end": 1573.58, "text": " That's it. 
Let me quickly"}, {"start": 1573.58, "end": 1576.22, "text": " Walk you through and I encourage you to check out the whole report"}, {"start": 1577.1, "end": 1584.54, "text": " In this in this awesome, uh weight and biases report, but let me show you some some differences that the uh authors"}, {"start": 1585.18, "end": 1586.38, "text": " Of the lee"}, {"start": 1586.38, "end": 1593.5, "text": " Mini model themselves mentioned so we are grateful for the research and pre-trained models published by open.ai which we're essentially building our model"}, {"start": 1593.5, "end": 1598.22, "text": " Uh, not all the uh, the details on the leak our public knowledge"}, {"start": 1598.22, "end": 1601.1, "text": " But here what we consider to be the main differences"}, {"start": 1601.18, "end": 1606.16, "text": " So the lee uses a 12 billion parameter version of gpt3"}, {"start": 1606.86, "end": 1612.38, "text": " In comparison our model is 27 times smaller with about 0.4 billion parameters"}, {"start": 1612.54, "end": 1617.42, "text": " Now this report was written a while ago. And so this was the the lee mini"}, {"start": 1618.14, "end": 1620.14, "text": " Like they now have the lee mega as well"}, {"start": 1620.14, "end": 1624.94, "text": " And also I think there is a mistake here because i'm fairly sure the original dali"}, {"start": 1625.5800000000002, "end": 1627.5800000000002, "text": " Paper used gpt2 not gpt3"}, {"start": 1628.7800000000002, "end": 1630.7800000000002, "text": " Which are fairly similar but still"}, {"start": 1630.7800000000002, "end": 1636.14, "text": " Um, then they say we heavily leverage pre-trained models such as vq-gan barding coder and clip"}, {"start": 1636.46, "end": 1638.46, "text": " We mentioned all of those so far"}, {"start": 1638.46, "end": 1645.5800000000002, "text": " Uh, and while open.ai had to train all their models from scratch our model architecture takes into account pre-trained models available"}, {"start": 1645.98, "end": 1647.8200000000002, "text": " And their efficiency"}, {"start": 1647.82, "end": 1650.86, "text": " The lee encodes images using a larger number of tokens"}, {"start": 1651.26, "end": 1656.46, "text": " So they use 32 times 32 whereas the uh, dali mini use uh uses"}, {"start": 1657.1599999999999, "end": 1660.54, "text": " 16 uh by 16 from a smaller vocab"}, {"start": 1661.26, "end": 1664.46, "text": " And the lee uses a vqvae while we use a vq-gan"}, {"start": 1664.9399999999998, "end": 1670.3999999999999, "text": " Okay, then they say the lee encodes text using fewer tokens at most 256"}, {"start": 1671.1, "end": 1674.08, "text": " Uh, whereas they where the lee mini uses 1024"}, {"start": 1674.08, "end": 1681.84, "text": " And the lee reads text and images as a single stream of data while we split them between these uh, well bard encoder and decoder"}, {"start": 1682.56, "end": 1686.24, "text": " Uh, this also lets us use independent vocab for text and images"}, {"start": 1686.8799999999999, "end": 1691.4399999999998, "text": " The lee reads the text through an autoregressive model while we use a bi-directional encoder"}, {"start": 1691.52, "end": 1696.08, "text": " Okay, so I made a small mistake when I was explaining how the lee wevon works. 
I'm gonna"}, {"start": 1696.72, "end": 1700.3999999999999, "text": " Show you what I mean by that in a second, but let me finish here"}, {"start": 1700.4, "end": 1707.2, "text": " So the lee was trained on 250 million pairs of image and text while we used only 15 million pairs"}, {"start": 1707.44, "end": 1707.68, "text": " Okay"}, {"start": 1707.68, "end": 1712.48, "text": " So the difference is scale obviously the model scale the dataset scale etc, etc"}, {"start": 1712.64, "end": 1717.52, "text": " But also a lot of difference in the architecture and the training method like they don't have the gumbo softmax"}, {"start": 1717.6000000000001, "end": 1722.96, "text": " They don't have the vqvae. They're using vq-gan. They're using bard instead of using gpt, etc, etc"}, {"start": 1722.96, "end": 1727.3600000000001, "text": " So there is so the lee mini is um, well not really the lee model"}, {"start": 1727.36, "end": 1731.28, "text": " It's a it's a mixture of of vq-gans and bard and everything else"}, {"start": 1731.6, "end": 1736.4799999999998, "text": " So so yeah, but still and that's why they are now called crayon not the lee mini"}, {"start": 1737.04, "end": 1740.3999999999999, "text": " But in any case, let me quickly go back to notebook and then we're jumping to code"}, {"start": 1741.1999999999998, "end": 1744.8799999999999, "text": " So I think I mentioned somewhere here that they bpe encode"}, {"start": 1745.6799999999998, "end": 1749.2199999999998, "text": " Text and then they just learn how to autoregressively"}, {"start": 1750.56, "end": 1755.1999999999998, "text": " Model that and well, I kind of did correctly explain it but not quite"}, {"start": 1755.2, "end": 1757.68, "text": " so what they do is so let me just"}, {"start": 1758.48, "end": 1760.8, "text": " draw this again here so we have the"}, {"start": 1762.16, "end": 1764.56, "text": " Conditioning part and we have the target"}, {"start": 1765.42, "end": 1767.42, "text": " sequence"}, {"start": 1768.0800000000002, "end": 1773.8600000000001, "text": " So what they do how they learn this is they feed all of these into the transformer"}, {"start": 1775.04, "end": 1778.64, "text": " And then you learn how to predict all of the tokens. So basically here"}, {"start": 1778.96, "end": 1784.0, "text": " So yeah again as a target you just take this same sequence and you shift it leftwards by by one"}, {"start": 1784.0, "end": 1787.44, "text": " uh by one step and that's what you're trying to"}, {"start": 1788.32, "end": 1790.0, "text": " to predict"}, {"start": 1790.0, "end": 1794.16, "text": " so this is what you're trying to predict and then here ultimately you have the"}, {"start": 1794.88, "end": 1796.88, "text": " uh end of"}, {"start": 1797.84, "end": 1801.76, "text": " Image token in this particular case because in this particular example the green part"}, {"start": 1802.4, "end": 1807.2, "text": " Are the image tokens from the vq vae portion? Okay, so that's it"}, {"start": 1807.44, "end": 1813.2, "text": " The main thing I want to stress here is that the the embeddings for the conditioning part are not shallow they're using"}, {"start": 1813.2, "end": 1815.3600000000001, "text": " uh deep transformer layers"}, {"start": 1816.0, "end": 1820.16, "text": " because as you as you can as you know, like the uh transformer works by"}, {"start": 1821.6000000000001, "end": 1829.04, "text": " Cross attending um previous tokens here. 
So it will be attending to deeper representations of this particular"}, {"start": 1829.6000000000001, "end": 1833.04, "text": " Text here. Okay guys this there was a bit more than I wanted to"}, {"start": 1833.52, "end": 1838.32, "text": " To give you when it comes to context and and paper overviews, but it is what it is"}, {"start": 1838.32, "end": 1842.96, "text": " Is let's now dig into the actual code. Okay, so here we are in my vs code"}, {"start": 1844.0, "end": 1850.3999999999999, "text": " I'm gonna open up the i'm gonna hit debug here and we're gonna slowly start stepping over the code again"}, {"start": 1850.48, "end": 1853.9199999999998, "text": " I'm gonna skip all of the unnecessary details. I just want to show you"}, {"start": 1854.56, "end": 1859.28, "text": " The similarity between what we just saw theoretically and the actual code"}, {"start": 1859.8999999999999, "end": 1861.8999999999999, "text": " implementation again this uh"}, {"start": 1862.1599999999999, "end": 1864.56, "text": " I'm showing you the code from a mindaly"}, {"start": 1864.56, "end": 1870.48, "text": " uh repo and uh, they only have the inference portion of the pipeline and they're um,"}, {"start": 1870.72, "end": 1876.8799999999999, "text": " It's it's a port in into pytorch and it's whereas the original repo was written in jax"}, {"start": 1877.52, "end": 1883.44, "text": " We have some arguments here as you can see, uh, so i'm using the mega version and not the mini version"}, {"start": 1884.0, "end": 1889.52, "text": " Uh fp16 set to false so that we have higher quality images, but the footprint will be bigger"}, {"start": 1889.76, "end": 1892.32, "text": " So I have a 8 gigabyte tpu"}, {"start": 1892.96, "end": 1894.1599999999999, "text": " and uh"}, {"start": 1894.16, "end": 1897.52, "text": " Otherwise, you'll have problems executing this thing. So let me show you"}, {"start": 1898.4, "end": 1900.4, "text": " uh, we'll see how the um"}, {"start": 1901.28, "end": 1904.4, "text": " GPU memory is slowly increasing as we're going through the code"}, {"start": 1904.5600000000002, "end": 1909.0400000000002, "text": " There is a couple of optimization in the code as well such as as soon as the encoder"}, {"start": 1909.44, "end": 1914.88, "text": " Is created and and the image is encoded they uh dispose of the encoder"}, {"start": 1915.44, "end": 1919.3600000000001, "text": " So that's controlled by this parameter. Let me see whether"}, {"start": 1919.36, "end": 1927.04, "text": " They have it here. I don't think it's in here. They have a parameter that literally controls that that that behavior"}, {"start": 1927.4399999999998, "end": 1931.52, "text": " Okay. So again, we have uh, we're using mega we have a piece of text"}, {"start": 1931.52, "end": 1935.6, "text": " I just inputted a random caption here a photo of a funny dog. Let's see what we get"}, {"start": 1936.1599999999999, "end": 1942.56, "text": " There is some seed there is a grid size. 
We're just using we're just generating a single image top cape for top case sampling"}, {"start": 1942.9599999999998, "end": 1944.9599999999998, "text": " Is set to 256?"}, {"start": 1944.96, "end": 1949.44, "text": " Uh image path, uh, so this is going to be directly dumped into the root"}, {"start": 1950.24, "end": 1953.52, "text": " root of of this repo so the image will be"}, {"start": 1955.04, "end": 1956.56, "text": " saved there"}, {"start": 1956.56, "end": 1960.56, "text": " Uh, the models are uh, well already downloaded into this pre-trained"}, {"start": 1960.8, "end": 1966.8, "text": " I went ahead and downloaded them beforehand so that we don't have to wait and again as I said, we're using fp16"}, {"start": 1966.8, "end": 1971.68, "text": " So let's continue here. So first off we're gonna generate the we're gonna create this"}, {"start": 1971.68, "end": 1978.16, "text": " Uh mindaly model and then we are gonna do this generation process. So let's see what happens here"}, {"start": 1978.3200000000002, "end": 1980.3200000000002, "text": " So again is reusable is set to false"}, {"start": 1981.04, "end": 1986.8, "text": " That's the parameter that's going to control that optimization behavior. We'll see that in a second. So let me continue here"}, {"start": 1987.76, "end": 1989.76, "text": " So here it is"}, {"start": 1989.8400000000001, "end": 1992.0800000000002, "text": " Nothing fancy there just storing these"}, {"start": 1992.94, "end": 1994.4, "text": " variables"}, {"start": 1994.4, "end": 1996.4, "text": " we have"}, {"start": 1996.4, "end": 1999.68, "text": " number of text tokens that's gonna specify the"}, {"start": 1999.68, "end": 2000.88, "text": " the"}, {"start": 2000.88, "end": 2002.88, "text": " maximal size of the caption"}, {"start": 2003.52, "end": 2006.48, "text": " Number of layers is going to be 24 for mega"}, {"start": 2007.04, "end": 2012.72, "text": " Uh, as you can see the difference between mega and mini is just in the number of layers number of attention heads"}, {"start": 2013.1200000000001, "end": 2015.52, "text": " Also the size of the embedding space"}, {"start": 2016.4, "end": 2019.44, "text": " The size of the glue embedding space we'll get once we get there"}, {"start": 2019.6000000000001, "end": 2024.48, "text": " I'll briefly show you the paper as well for glue and then the text vocab count"}, {"start": 2024.64, "end": 2028.4, "text": " Uh, and the image vocab count kind of differ between these two models"}, {"start": 2028.4, "end": 2032.88, "text": " in any case, we just uh form the paths here for the uh"}, {"start": 2033.68, "end": 2040.72, "text": " Well by dali here, they are literally referring to the bart part of the dali model. So that's kind of um"}, {"start": 2041.9, "end": 2044.5600000000002, "text": " Potentially confusing. Um, okay, so"}, {"start": 2045.74, "end": 2051.44, "text": " Basically, i'm gonna skip over all of this. We have a vocab json and merges text. So that's gonna"}, {"start": 2052.08, "end": 2057.12, "text": " Basically, uh be used to form the tokenizer the text tokenizer. Okay"}, {"start": 2057.12, "end": 2063.44, "text": " Okay, and now we just form some some um paths which contain the the pre-trained models weights"}, {"start": 2064.88, "end": 2071.04, "text": " And now we hit in initialize tokenizer. Let me just see what I have. Yep. I have a breakpoint here. 
So"}, {"start": 2071.7599999999998, "end": 2074.7999999999997, "text": " Basically, it's already downloaded so we can ignore all of this"}, {"start": 2075.3599999999997, "end": 2079.3599999999997, "text": " Uh, and now we just open the vocab and here here it is"}, {"start": 2079.44, "end": 2084.74, "text": " So let's see what's the size of the vocab. So length of the vocab is 50,265"}, {"start": 2084.74, "end": 2086.9799999999996, "text": " 50,265"}, {"start": 2086.9799999999996, "end": 2091.22, "text": " Now we go and read this merge merges file"}, {"start": 2091.8599999999997, "end": 2094.58, "text": " that contains the basically all of the"}, {"start": 2095.3799999999997, "end": 2098.58, "text": " merges that happen through during the training of the bpe"}, {"start": 2099.3599999999997, "end": 2101.3599999999997, "text": " uh tokenizer"}, {"start": 2101.7799999999997, "end": 2108.66, "text": " Do check out my open ai clip video if you if you're confused by the tokenizer details i've covered that they are in a bit"}, {"start": 2109.4599999999996, "end": 2110.8199999999997, "text": " greater depth"}, {"start": 2110.82, "end": 2116.42, "text": " So let's continue here. We just stored the vocab. We loaded a couple of seconds ago"}, {"start": 2116.98, "end": 2119.94, "text": " We basically grabbed the the pairs from this"}, {"start": 2120.98, "end": 2125.78, "text": " Like uh merges object and we split them into well into doubles"}, {"start": 2126.34, "end": 2129.86, "text": " To get the pair. So let me show you an example of what the pair looked like so"}, {"start": 2130.34, "end": 2133.7000000000003, "text": " Pairs and i'm gonna print the first three pairs. You can see"}, {"start": 2134.56, "end": 2137.2200000000003, "text": " Inn characters are merged together"}, {"start": 2137.22, "end": 2143.2999999999997, "text": " Uh, and this weird I don't know how to pronounce this one, etc. Etc. So there is there's going to be a lot of those"}, {"start": 2144.18, "end": 2145.4599999999996, "text": " bpe"}, {"start": 2145.4599999999996, "end": 2146.74, "text": " merges"}, {"start": 2146.74, "end": 2150.98, "text": " Stored in the in the pairs this rank from pair is basically going to associate"}, {"start": 2151.8599999999997, "end": 2153.06, "text": " integer"}, {"start": 2153.06, "end": 2159.22, "text": " From zero to the length of pairs with each of the pairs and that's going to be used during the tokenization step"}, {"start": 2159.54, "end": 2161.22, "text": " I'm going to quickly"}, {"start": 2161.22, "end": 2163.22, "text": " Later show you how the tokenization"}, {"start": 2163.22, "end": 2169.4599999999996, "text": " Uh works on a very high level but i'm gonna skip this part here because it's fairly intricate to to to describe"}, {"start": 2169.9399999999996, "end": 2176.5, "text": " And uh, you can you can read on bpe yourself. We can just treat the tokenizer as a black box. That's fairly"}, {"start": 2177.14, "end": 2182.4199999999996, "text": " Common structure between all of these models. So i'm not gonna keep on explaining it every single time"}, {"start": 2182.74, "end": 2185.62, "text": " I did cover it in an open ai's clip, uh video"}, {"start": 2186.3399999999997, "end": 2188.02, "text": " okay, so"}, {"start": 2188.02, "end": 2194.66, "text": " Continuing on because the easy reusable flag is set. 
We're not going to beforehand create encoder decoder and the uh,"}, {"start": 2194.74, "end": 2196.74, "text": " The tokenizer which is the vqgen"}, {"start": 2197.22, "end": 2203.38, "text": " Uh decoder because otherwise the memory would literally explode so I don't have enough memory in my machine to do that"}, {"start": 2203.54, "end": 2206.34, "text": " So we're going to skip over that and now we have the generate image part"}, {"start": 2206.42, "end": 2213.46, "text": " Okay, so we have the model here and let's now generate the image again the encoder and everything else will now be built"}, {"start": 2213.46, "end": 2219.46, "text": " Uh on the fly so we have the text again. We have the photo of a funny dog blah blah same parameters before"}, {"start": 2219.78, "end": 2225.54, "text": " Let's see what this does. So there is this um, uh generator, uh function that they uh,"}, {"start": 2226.18, "end": 2232.34, "text": " Developed let me that's not that vital. So we're just gonna generate the stream and then grab doing next"}, {"start": 2232.34, "end": 2236.82, "text": " We're gonna grab a single image from the stream, but let's dig into the actual function"}, {"start": 2236.98, "end": 2239.7, "text": " Okay, nothing happened there because it's a generator"}, {"start": 2239.7, "end": 2246.74, "text": " So once then this line once we hit this line when they go f10, we're gonna enter the actual function. Okay, so"}, {"start": 2247.54, "end": 2254.18, "text": " Similarly here everything interesting happens in the generate draw stream part. So i'm gonna ignore everything else here"}, {"start": 2255.22, "end": 2258.8199999999997, "text": " So now let's see what's going on. Okay. So first off we um"}, {"start": 2259.62, "end": 2264.58, "text": " Depending on the grid size we estimate the number of images because the grid size is one for us"}, {"start": 2264.58, "end": 2270.18, "text": " We just generate a single image. So this grid they basically this project offers an"}, {"start": 2270.74, "end": 2276.02, "text": " Like an option to generate like a grid of multiple images here because of memory constraints"}, {"start": 2276.02, "end": 2280.18, "text": " I'm just dealing with single image so we can ignore the grid every single time"}, {"start": 2281.14, "end": 2283.62, "text": " Okay, so because we are in a verbose mode"}, {"start": 2283.62, "end": 2289.06, "text": " We are printing out some details here tokenizing text etc. Etc and now we hit the tokenization part"}, {"start": 2289.06, "end": 2294.74, "text": " So now we tokenize our input caption before we feed it into the bart encoder"}, {"start": 2294.98, "end": 2297.38, "text": " so let's see what's going on here, so they grab the"}, {"start": 2298.82, "end": 2301.86, "text": " Like a separation token the cls token"}, {"start": 2302.58, "end": 2307.22, "text": " And the unknown token here, uh, there's some processing that they do on on top of the text"}, {"start": 2307.22, "end": 2314.18, "text": " So they do lower casing and then they encode it as ascii with this error set to ignore and then they do decoding"}, {"start": 2314.2599999999998, "end": 2317.46, "text": " So this is just going to do some simple primitive processing of text"}, {"start": 2317.46, "end": 2320.1, "text": " Nothing fancy there. Let me let me print the text for you here"}, {"start": 2321.3, "end": 2327.38, "text": " Let's see what happened. 
Basically nothing happened with this particular sentence because it was fairly simple"}, {"start": 2327.94, "end": 2332.98, "text": " And then you literally kind of split the sentence into words"}, {"start": 2334.02, "end": 2338.9, "text": " And splitting is happening. So basically you use spaces to split the sentence into words"}, {"start": 2339.38, "end": 2343.7, "text": " And then each of the word of the words is going to be split into the sub words here"}, {"start": 2343.7, "end": 2348.02, "text": " Because the this get byte pair encoding which i'll skip height"}, {"start": 2348.4199999999996, "end": 2353.22, "text": " It's fairly complex to explain but it's just gonna split the word into bpe"}, {"start": 2353.8599999999997, "end": 2361.22, "text": " Like sub words and then each of the sub words is basically associated with the like with the specific index"}, {"start": 2361.7799999999997, "end": 2365.62, "text": " integer from from the vocab here and that's it on a high level and then we just kind of"}, {"start": 2365.9399999999996, "end": 2368.8999999999996, "text": " Bracket that with the cls token in the beginning and that this"}, {"start": 2368.9, "end": 2374.7400000000002, "text": " uh see a separation token at the at the end and that's how we form"}, {"start": 2375.14, "end": 2378.42, "text": " Uh, basically a sequence of tokens for our particular text"}, {"start": 2378.82, "end": 2385.46, "text": " Okay, then they do optionally here the cropping but because we are less than 64 tokens. I think this is set to 64"}, {"start": 2386.7400000000002, "end": 2390.1, "text": " Let me see what this number is, but i'm fairly sure it was 64"}, {"start": 2390.6600000000003, "end": 2397.3, "text": " Yep. So if if it's like a longer than that length then they crop the the"}, {"start": 2397.3, "end": 2399.3, "text": " the tokens"}, {"start": 2399.46, "end": 2400.98, "text": " But for us, it's not"}, {"start": 2400.98, "end": 2404.9, "text": " Okay, so just some printing because it's verbose mode blah blah blah"}, {"start": 2405.3, "end": 2406.9, "text": " Uh now"}, {"start": 2406.9, "end": 2410.34, "text": " As you can see, this is the interesting part. 
So they they create this variable"}, {"start": 2411.3, "end": 2417.86, "text": " Text tokens which is initialized as all ones and it has you can be confused by a couple of details here"}, {"start": 2417.94, "end": 2423.38, "text": " So first why why is once is because one is a bad token in this vocab?"}, {"start": 2423.38, "end": 2429.06, "text": " Uh, and that's kind of implicit here because this is as I said just a port from the original repo"}, {"start": 2429.1400000000003, "end": 2433.86, "text": " So sometimes sometimes things are hard to understand unless you well unless you understand what's going on"}, {"start": 2434.42, "end": 2438.1, "text": " So here here am I trying to explain to you that so that's one detail"}, {"start": 2438.1800000000003, "end": 2440.58, "text": " The second important detail is there is this number two?"}, {"start": 2441.54, "end": 2446.5, "text": " and why they do that is because they have a basically analog thing to"}, {"start": 2447.2200000000003, "end": 2450.02, "text": " Classify our free guidance that we saw in diffusion model"}, {"start": 2450.02, "end": 2455.38, "text": " So you can basically do the same thing for the class of autoregressive generative models and that's why they have two"}, {"start": 2455.7, "end": 2461.86, "text": " so as you see here they create the first sequence will just contain the cls and the"}, {"start": 2463.7, "end": 2469.3, "text": " Separator token so it's going to be literally empty sequence everything else going to be pad paddings"}, {"start": 2469.46, "end": 2475.14, "text": " So that's what we do on this line. And then on the second line we add the actual tokens. Okay"}, {"start": 2475.78, "end": 2478.42, "text": " So let me show you what this thing is"}, {"start": 2478.42, "end": 2480.42, "text": " so text tokens"}, {"start": 2480.7400000000002, "end": 2486.1800000000003, "text": " If I print text tokens zero and then first maybe seven tokens"}, {"start": 2486.1800000000003, "end": 2491.2200000000003, "text": " You can see it's an empty sequence because this is a cls. This is a"}, {"start": 2492.02, "end": 2496.34, "text": " Separator token and everything else is padding and the second one contains the actual"}, {"start": 2496.7400000000002, "end": 2499.06, "text": " Uh tokens for our caption. Okay, that's it"}, {"start": 2499.38, "end": 2503.54, "text": " Now we just convert that into tensor with a long data type"}, {"start": 2503.86, "end": 2505.86, "text": " We put it onto gpu"}, {"start": 2505.86, "end": 2508.58, "text": " gpu and we continue on okay"}, {"start": 2509.46, "end": 2514.9, "text": " Now here is the optimization I was mentioning because we're using this not reusable"}, {"start": 2515.7000000000003, "end": 2521.46, "text": " We're because we said this flag is reusable to false. We now on the fly initialize the encoder"}, {"start": 2521.6200000000003, "end": 2527.6200000000003, "text": " So the encoder is going to be the vq-gan encoder. 
So let me step inside of here and let's see what's going on"}, {"start": 2527.94, "end": 2530.9, "text": " So again, uh, because I already downloaded the model"}, {"start": 2530.9, "end": 2537.38, "text": " We can skip over all of these steps and we're going to focus on how this dali birth encoder is constructed"}, {"start": 2538.1800000000003, "end": 2543.62, "text": " So the interesting part as usually is going to be in the forward pass of the of the model"}, {"start": 2544.34, "end": 2545.62, "text": " and"}, {"start": 2545.62, "end": 2548.6, "text": " After we construct it you can see here. We just load the parameters"}, {"start": 2549.38, "end": 2554.26, "text": " Of the pre-trained model that was trained in the actual dali mini"}, {"start": 2555.36, "end": 2556.9, "text": " repository"}, {"start": 2556.9, "end": 2562.02, "text": " We're gonna load those parameters and then we're gonna push to do this this model onto"}, {"start": 2562.6600000000003, "end": 2564.58, "text": " the gpu, okay"}, {"start": 2564.58, "end": 2568.34, "text": " So let's see how dali birth encoder looks like"}, {"start": 2568.98, "end": 2576.34, "text": " Here we are. We have the the vocab size of 50k something we form the embedding tables for the"}, {"start": 2577.3, "end": 2579.3, "text": " Well, that's the the text vocab"}, {"start": 2579.62, "end": 2585.78, "text": " Because encoder again remember has the text vocab and the decoder will be generating the image token"}, {"start": 2585.78, "end": 2587.78, "text": " So it will have a separate"}, {"start": 2588.02, "end": 2590.02, "text": " image vocab"}, {"start": 2590.34, "end": 2594.02, "text": " Okay, so we generate that table the embedding vectors will be"}, {"start": 2594.88, "end": 2597.7000000000003, "text": " 2048 we then generate the positional"}, {"start": 2598.34, "end": 2602.26, "text": " Encodings here. This is going to be only 64 because if you recall before"}, {"start": 2602.5800000000004, "end": 2607.2200000000003, "text": " We had the cropping so that we make sure that the number of tokens that we feed into the bart"}, {"start": 2607.3, "end": 2612.5800000000004, "text": " Encoder is always up to 64 never more than that. And that's why we never need to go"}, {"start": 2612.58, "end": 2616.02, "text": " Uh above 64 positional encodings. Okay"}, {"start": 2616.58, "end": 2618.58, "text": " now we start forming the"}, {"start": 2619.62, "end": 2621.14, "text": " basically"}, {"start": 2621.14, "end": 2624.42, "text": " Encoder layers and we'll have 24 of these"}, {"start": 2625.06, "end": 2626.18, "text": " for the"}, {"start": 2626.18, "end": 2628.42, "text": " Bart encoder for dali mega"}, {"start": 2629.2999999999997, "end": 2633.4, "text": " I'm gonna hit go here and put a small breakpoint"}, {"start": 2634.34, "end": 2637.56, "text": " In an encoder layer, which is a simple transformer"}, {"start": 2637.56, "end": 2643.4, "text": " Uh block nothing fancy. The interesting part is this glue. So i'm gonna set uh one"}, {"start": 2644.44, "end": 2646.2799999999997, "text": " breakpoint there"}, {"start": 2646.2799999999997, "end": 2650.68, "text": " Uh, and that's it. So let's now continue execution here"}, {"start": 2651.88, "end": 2656.06, "text": " So we end up in the encoder layer as you can see, it's just a simple transformer"}, {"start": 2656.7599999999998, "end": 2660.2, "text": " Block we have the encoder. 
We have the self-attention module"}, {"start": 2660.52, "end": 2665.48, "text": " we have the some layer norms and we have glue which is just a modification of the"}, {"start": 2665.48, "end": 2667.56, "text": " the uh, your regular"}, {"start": 2668.68, "end": 2675.48, "text": " like um mlp that's used of the feed forward, uh, like a layer of the transformer block, okay"}, {"start": 2676.6, "end": 2679.48, "text": " So let's see how glue looks like here is glue"}, {"start": 2680.36, "end": 2684.14, "text": " It's a very simple modification by noam scherzer"}, {"start": 2684.52, "end": 2688.44, "text": " And i'm gonna now open up the paper side by side to show you"}, {"start": 2689.16, "end": 2694.2, "text": " What this is and to make sense out of it, but it's a very experimental finding"}, {"start": 2694.2, "end": 2701.48, "text": " So there is nothing. Uh, well fundamental understanding you can get from this. Okay guys here it is side by side"}, {"start": 2702.04, "end": 2704.04, "text": " here are the equations from the"}, {"start": 2704.3599999999997, "end": 2706.3599999999997, "text": " glue paper by noam scherzer"}, {"start": 2706.52, "end": 2711.48, "text": " And you can see that this layer here implements exactly this line here"}, {"start": 2711.8799999999997, "end": 2717.64, "text": " So the idea here was they they kind of like he ablated a couple of um, he made a couple of modifications"}, {"start": 2717.7999999999997, "end": 2722.7599999999998, "text": " Why not using different activation functions instead of rally? Why not using gelu or swish etc?"}, {"start": 2722.76, "end": 2727.5600000000004, "text": " Etc. Why not combine him combine them in a bit different ways using um"}, {"start": 2728.1200000000003, "end": 2731.1600000000003, "text": " Similarly to the gated linear units so you can see so"}, {"start": 2732.0400000000004, "end": 2734.6800000000003, "text": " Like the lee mini ended up using this variant here"}, {"start": 2735.5600000000004, "end": 2738.5200000000004, "text": " And let me just kind of convince you that that's indeed the case"}, {"start": 2739.0800000000004, "end": 2745.88, "text": " So let's see so we have we get the input x this is x and then we pass it through layer norm"}, {"start": 2746.6800000000003, "end": 2751.0, "text": " So ignoring that part we we pass it we then pass it through a linear layer"}, {"start": 2751.0, "end": 2753.16, "text": " so that's that corresponds to"}, {"start": 2753.88, "end": 2759.72, "text": " basically this bracket here and after that we we pass it through"}, {"start": 2760.12, "end": 2765.0, "text": " Gelu, so that's this part the sigma of this so that's this term"}, {"start": 2765.56, "end": 2772.68, "text": " And then we have v which is formed from x as well from the input. 
We also pass it through the linear layer"}, {"start": 2772.76, "end": 2775.56, "text": " so that's that's this part the x times"}, {"start": 2776.44, "end": 2778.36, "text": " big v"}, {"start": 2778.36, "end": 2780.36, "text": " and finally we"}, {"start": 2780.36, "end": 2782.1200000000003, "text": " do"}, {"start": 2782.1200000000003, "end": 2785.7200000000003, "text": " element wise product so hadamard product here"}, {"start": 2786.52, "end": 2789.56, "text": " and then we pass that product through"}, {"start": 2790.36, "end": 2792.36, "text": " a layer norm and"}, {"start": 2792.84, "end": 2799.96, "text": " So that's that's that's basically what happened here and finally we we do a feed forward through a linear layer"}, {"start": 2800.04, "end": 2803.7200000000003, "text": " So that's the w that's the that's the sorry. That's the w2 here"}, {"start": 2803.72, "end": 2811.08, "text": " And that's thus you can see that this simple forward pass of this glue model implements exactly"}, {"start": 2811.56, "end": 2817.9599999999996, "text": " This ffn glue from norm shimseyer's paper and that's it. Now. Let's go back to the code"}, {"start": 2818.9199999999996, "end": 2820.12, "text": " I'm gonna"}, {"start": 2820.12, "end": 2826.52, "text": " Remove this break point. So it's a simple simple simple modification instead of having just like a like a feed forward"}, {"start": 2827.08, "end": 2832.9199999999996, "text": " Like a basically a linear layer for all but a relu followed by a linear layer. It's a bit more intricate"}, {"start": 2832.92, "end": 2838.04, "text": " Uh, and they showed experimentally that this gives a better performance"}, {"start": 2838.04, "end": 2844.52, "text": " But there is no ultimately no theory behind why this should work and that's it. That's I guess deep learning"}, {"start": 2845.08, "end": 2846.84, "text": " Let's continue"}, {"start": 2846.84, "end": 2848.84, "text": " I'm gonna now"}, {"start": 2849.08, "end": 2854.2000000000003, "text": " We're now back into encoder layer and again encoder layer is simply a transformer block"}, {"start": 2854.2000000000003, "end": 2856.84, "text": " So we have as you can see here we have some"}, {"start": 2856.84, "end": 2862.6800000000003, "text": " um layer norm and then attention and then norm and then the residual connection and then"}, {"start": 2863.96, "end": 2869.6400000000003, "text": " Then we pass through the feed forward part and then we have again the residual connection and we return back the variable"}, {"start": 2869.6400000000003, "end": 2871.2400000000002, "text": " So that's it. 
So i'm gonna"}, {"start": 2871.2400000000002, "end": 2875.7200000000003, "text": " Basically ignore all of these break points and let's form the 24 layers"}, {"start": 2876.44, "end": 2878.6000000000004, "text": " Of this birth of bart"}, {"start": 2879.4, "end": 2881.4, "text": " Encoder so i'm gonna hit f5"}, {"start": 2882.28, "end": 2884.28, "text": " i'm gonna form 24 layers"}, {"start": 2885.1600000000003, "end": 2886.76, "text": " and then"}, {"start": 2886.76, "end": 2888.6000000000004, "text": " We are now here"}, {"start": 2888.6000000000004, "end": 2890.6000000000004, "text": " We form additional layer norm"}, {"start": 2891.0800000000004, "end": 2898.44, "text": " Additional layer norm and then we form uh these token indices which are going to be used to index the positional embeddings"}, {"start": 2898.92, "end": 2900.1200000000003, "text": " here"}, {"start": 2900.1200000000003, "end": 2902.1200000000003, "text": " and that's it and we just"}, {"start": 2902.92, "end": 2906.5200000000004, "text": " Do times two because remember we are doing the classifier free"}, {"start": 2907.4, "end": 2910.1200000000003, "text": " Trick for all progressive models. That's why we have"}, {"start": 2911.1600000000003, "end": 2912.84, "text": " Batch size of two here"}, {"start": 2912.84, "end": 2914.36, "text": " Okay"}, {"start": 2914.36, "end": 2921.0, "text": " That's it. That's the that's the initialization of the bart encoder. Now we are loading the weights here"}, {"start": 2921.8, "end": 2923.2400000000002, "text": " and then"}, {"start": 2923.2400000000002, "end": 2925.2400000000002, "text": " we kind of"}, {"start": 2925.4, "end": 2929.88, "text": " Push those weights into the encoder structure and then we delete the parameters now"}, {"start": 2929.88, "end": 2935.6400000000003, "text": " You can already see that this is pushing my gpu somewhat. Well, well actually not that much"}, {"start": 2935.8, "end": 2941.88, "text": " Okay, so i'm surprised. This is actually not that big of a memory footprint"}, {"start": 2941.88, "end": 2946.76, "text": " I think some of the later components will be spiking it up. But yeah, let's continue"}, {"start": 2948.12, "end": 2949.88, "text": " so"}, {"start": 2949.88, "end": 2952.84, "text": " After the initialization of the encoder, let's see what else happens"}, {"start": 2953.4, "end": 2961.1600000000003, "text": " Okay, and now we grab the text tokens and we pass them through the bart encoder as we saw in the paper part of the video"}, {"start": 2961.56, "end": 2964.2000000000003, "text": " Okay, let's step into it"}, {"start": 2965.08, "end": 2967.48, "text": " so we're gonna hit the for function of the"}, {"start": 2968.36, "end": 2970.36, "text": " Bart encoder here"}, {"start": 2970.36, "end": 2977.48, "text": " So let's see what happens. So we ask where the text tokens are not equal to one and one is again remember"}, {"start": 2978.04, "end": 2982.2000000000003, "text": " They're not using a constant here. They're just using one to represent the pad tokens"}, {"start": 2982.52, "end": 2984.92, "text": " It's kind of hard coded and dirty but it works"}, {"start": 2985.4, "end": 2987.32, "text": " Okay, so not equal"}, {"start": 2987.32, "end": 2989.0, "text": " to one"}, {"start": 2989.0, "end": 2994.1200000000003, "text": " That's the actual content. That's where attention mask is going to say true"}, {"start": 2994.52, "end": 2998.52, "text": " So let's now do something like this. 
Let me print this you'll see that"}, {"start": 2998.52, "end": 3005.16, "text": " It basically for the second component of the text tokens where we actually passed the the the caption tokens"}, {"start": 3005.32, "end": 3007.08, "text": " We have a bunch of truths here"}, {"start": 3007.08, "end": 3013.16, "text": " But for the first one we only have true true for the first two because those are the cls and the separator token"}, {"start": 3013.32, "end": 3014.44, "text": " Okay"}, {"start": 3014.44, "end": 3017.4, "text": " now what we do is we pass those tokens into the"}, {"start": 3018.12, "end": 3025.0, "text": " We embed them and then we just add the positional encodings using the these uh, well the the pose"}, {"start": 3025.0, "end": 3031.0, "text": " Those tokens indices that were previously constructed and that's it. That's simple stuff there"}, {"start": 3031.64, "end": 3035.56, "text": " And now we just pass it through the layer norm and then we keep on"}, {"start": 3036.9, "end": 3039.16, "text": " processing those embeddings"}, {"start": 3039.96, "end": 3045.56, "text": " Via the transformer Bart transformer layers and that's it. I'm just gonna hit that five here"}, {"start": 3045.88, "end": 3052.52, "text": " We're gonna do a forward pass through the transformer and we end up with the final representation here called encoder state"}, {"start": 3052.52, "end": 3058.2, "text": " Okay, and because again, this is the optimization I was mentioning. We now delete the weights of the encoder"}, {"start": 3058.2, "end": 3063.56, "text": " Let's see how the the memory footprint is gonna reduce. Okay, so the memory footprint went up"}, {"start": 3063.56, "end": 3070.68, "text": " You can see it here and after we hit delete and empty cache, you're gonna see how it kind of dips down"}, {"start": 3071.56, "end": 3073.72, "text": " Hopefully, I'm not sure what's going on"}, {"start": 3075.24, "end": 3079.92, "text": " Why is my gpu not reacting here? Not sure in any case"}, {"start": 3079.92, "end": 3086.2400000000002, "text": " Let's continue so now we have the initialized decoder part i'm gonna hit"}, {"start": 3087.76, "end": 3090.96, "text": " Step over and we enter the init decoder"}, {"start": 3091.44, "end": 3095.6800000000003, "text": " So again, it's already downloaded so we don't care about that part"}, {"start": 3096.2400000000002, "end": 3101.36, "text": " Here we construct it and again, we'll load the weights a similar structure as as with the encoder"}, {"start": 3102.08, "end": 3107.12, "text": " Now we are passing the image vocab, etc. So let's enter the constructor here"}, {"start": 3107.12, "end": 3109.6, "text": " So let's see what's the difference?"}, {"start": 3110.16, "end": 3116.56, "text": " So here we form again the embedding table this time. 
We have the image vocab size plus one"}, {"start": 3117.2, "end": 3124.0, "text": " Because let me just think here because they also have the they pass this this beginning of sentence token"}, {"start": 3124.72, "end": 3128.72, "text": " And that's the additional token that they need in addition to the"}, {"start": 3129.68, "end": 3131.68, "text": " image to image code book"}, {"start": 3132.08, "end": 3135.04, "text": " So I think they mentioned this somewhere in the in the report"}, {"start": 3135.04, "end": 3137.6, "text": " Let me show you this so basically"}, {"start": 3138.08, "end": 3140.08, "text": " It's called bos token"}, {"start": 3140.48, "end": 3145.84, "text": " Okay, so they say here at inference time blah blah. So a bos token is fed through the bar decoder"}, {"start": 3148.16, "end": 3155.36, "text": " And yeah, that's so that's basically why there is plus one as I said not not the cleanest way to do something"}, {"start": 3155.36, "end": 3158.4, "text": " But yeah, this is just a port of the original repo"}, {"start": 3158.96, "end": 3163.04, "text": " Now we have we form the table for the position"}, {"start": 3163.04, "end": 3165.2799999999997, "text": " in encodings and there is only"}, {"start": 3166.08, "end": 3170.08, "text": " 256 of those because as you recall from the paper part"}, {"start": 3170.64, "end": 3174.96, "text": " It's going to the latent space is going to be 16 times 16 of those"}, {"start": 3175.68, "end": 3177.12, "text": " basically"}, {"start": 3177.12, "end": 3178.72, "text": " vectors"}, {"start": 3178.72, "end": 3183.04, "text": " That are then going to be fed into dvq again decoder. Okay, let's continue here"}, {"start": 3183.36, "end": 3185.84, "text": " So we now generate a bunch of decoder layers"}, {"start": 3186.4, "end": 3190.88, "text": " And the coder layers have additionally the cross attention component because they are"}, {"start": 3190.88, "end": 3195.28, "text": " the cross attention component because this is again encoder decoder transformer"}, {"start": 3195.92, "end": 3198.1600000000003, "text": " so, let me see whether it makes sense to"}, {"start": 3200.7200000000003, "end": 3202.7200000000003, "text": " To kind of show you that part"}, {"start": 3203.12, "end": 3204.88, "text": " Let me just think"}, {"start": 3204.88, "end": 3210.96, "text": " Okay, i'm gonna put a break point into the mid part here and we can maybe do a single pass"}, {"start": 3212.08, "end": 3214.7400000000002, "text": " Here because they're doing something interesting"}, {"start": 3215.36, "end": 3218.08, "text": " With this attention state we'll see that a bit later"}, {"start": 3218.08, "end": 3220.88, "text": " And here as well i'm going to put the break point here"}, {"start": 3221.2799999999997, "end": 3225.68, "text": " And now let's continue. 
Okay, so let's hit the first decoder layer"}, {"start": 3226.64, "end": 3228.0, "text": " It's again"}, {"start": 3228.0, "end": 3230.0, "text": " Simply there is a self-attention part"}, {"start": 3230.0, "end": 3234.72, "text": " There is the decoder there is a cross attention part and there is the feed forward part"}, {"start": 3234.72, "end": 3238.24, "text": " So the glue variant in this particular case, so that's pretty much it"}, {"start": 3238.24, "end": 3241.6, "text": " So i'm going to now remove the break point from the init"}, {"start": 3242.24, "end": 3245.7599999999998, "text": " And let's just generate a bunch of these layers"}, {"start": 3245.76, "end": 3247.36, "text": " layers"}, {"start": 3247.36, "end": 3248.48, "text": " oops"}, {"start": 3248.48, "end": 3254.48, "text": " Let's now do that and we do that for 24 times again. I think let me just print the"}, {"start": 3255.44, "end": 3259.6000000000004, "text": " Layer count 24. Okay, so there is 24 decoder layers as well"}, {"start": 3261.2000000000003, "end": 3267.2000000000003, "text": " Okay, we have some layer norms nothing fancy and finally we have we take the final"}, {"start": 3267.6000000000004, "end": 3270.8, "text": " embedding vectors and we map them into the"}, {"start": 3270.8, "end": 3278.1600000000003, "text": " The space that has this dimensionality because remember we now want to sample those image tokens"}, {"start": 3278.1600000000003, "end": 3280.6400000000003, "text": " That's why we need to to to go from"}, {"start": 3281.5800000000004, "end": 3288.4, "text": " 2048 which was the internal model dimension. We want to go into into this dimension here"}, {"start": 3288.7200000000003, "end": 3291.28, "text": " Okay, let's continue on here"}, {"start": 3292.0, "end": 3298.8, "text": " And that's it. Okay, that's pretty much it. We load the weights again 24 layers of decoder"}, {"start": 3298.8, "end": 3300.48, "text": " blocks"}, {"start": 3300.48, "end": 3303.28, "text": " and finally then we we map into the"}, {"start": 3304.2400000000002, "end": 3305.84, "text": " image vocab"}, {"start": 3305.84, "end": 3311.2000000000003, "text": " Size dimensionality space output space so that we can sample from it later"}, {"start": 3312.1600000000003, "end": 3313.44, "text": " Okay"}, {"start": 3313.44, "end": 3320.96, "text": " Again, we delete the parameters. We put the decoder on top. We push it to our to my gpu. So let's see whether"}, {"start": 3321.6000000000004, "end": 3323.6000000000004, "text": " Something changes here. Nope"}, {"start": 3323.6, "end": 3328.7999999999997, "text": " Uh, it's a bit different behavior compared to what I've seen before usually there is a spike"}, {"start": 3329.52, "end": 3330.4, "text": " um"}, {"start": 3330.4, "end": 3334.88, "text": " Bigger than this one happening and i'm tracking this uh slot here by the way"}, {"start": 3335.52, "end": 3336.88, "text": " Okay"}, {"start": 3336.88, "end": 3343.2, "text": " In any case now comes the interesting part. So here we are going to now start sampling from the decoder"}, {"start": 3343.44, "end": 3346.4, "text": " Those image tokens and this one is already trained"}, {"start": 3346.48, "end": 3350.7999999999997, "text": " So everything it samples is going to be meaningful and then we're just going to pass it through the vq"}, {"start": 3350.8, "end": 3356.5600000000004, "text": " GAN decoder and that's the that's the end of the program. Okay, let's go here. 
So um"}, {"start": 3357.6800000000003, "end": 3363.6800000000003, "text": " We're dealing with float 32. So this doesn't make much sense, but with float 30, uh float 16"}, {"start": 3364.1600000000003, "end": 3371.2000000000003, "text": " Uh, this kind of puts it puts all the operations inside of this context in the float float 16 regime which helps us save"}, {"start": 3371.84, "end": 3373.2000000000003, "text": " some memory"}, {"start": 3373.2000000000003, "end": 3375.6000000000004, "text": " So in the case when you have multiple images"}, {"start": 3375.6, "end": 3380.16, "text": " Uh, this expanded indices make sense because you will not replicate"}, {"start": 3380.64, "end": 3386.7999999999997, "text": " The the textual captions for multiple images because you'll be generating multiple images for one"}, {"start": 3387.36, "end": 3390.0, "text": " Particular caption, but we don't care about that here"}, {"start": 3390.64, "end": 3392.64, "text": " so we just basically"}, {"start": 3393.52, "end": 3399.36, "text": " Grab the uh encoder state and the and the and the text tokens and those are of dimensionality"}, {"start": 3399.44, "end": 3401.7599999999998, "text": " So this is going to be 264 if you recall"}, {"start": 3402.4, "end": 3404.4, "text": " and this one is going to be I guess"}, {"start": 3404.4, "end": 3406.4, "text": " uh to"}, {"start": 3406.46, "end": 3408.46, "text": " 2000 something 2048"}, {"start": 3409.12, "end": 3411.12, "text": " because that's the"}, {"start": 3411.52, "end": 3417.2000000000003, "text": " Output of the encoder, right? So to oh actually yeah. Yeah 264"}, {"start": 3418.06, "end": 3423.76, "text": " 2048 because we just embedded this into this dimensionality here. Okay, so we formed the mask again"}, {"start": 3423.84, "end": 3426.0, "text": " This is the padding token. So mask is going to be"}, {"start": 3426.7200000000003, "end": 3428.48, "text": " true"}, {"start": 3428.48, "end": 3430.4, "text": " for for the"}, {"start": 3430.4, "end": 3432.4, "text": " non-pad tokens"}, {"start": 3432.4, "end": 3437.2000000000003, "text": " Uh, here is the we form this attention state which is gonna contain the"}, {"start": 3437.84, "end": 3443.76, "text": " Keys and values from each of the layers and for each of the positions in the bart decoder"}, {"start": 3443.92, "end": 3449.12, "text": " That's just how they decided to implement this and finally image tokens is gonna contain"}, {"start": 3449.52, "end": 3451.92, "text": " That's the output array that's gonna contain two"}, {"start": 3452.7000000000003, "end": 3454.7000000000003, "text": " 256 tokens"}, {"start": 3455.2000000000003, "end": 3460.56, "text": " Plus one because the first one will be the I think yeah the bos"}, {"start": 3460.56, "end": 3467.04, "text": " Uh token the beginning of uh sentence token. 
So let's continue we're gonna initialize"}, {"start": 3467.84, "end": 3469.84, "text": " this um"}, {"start": 3470.08, "end": 3476.96, "text": " Vector initially as you can see here with this value here, which is going to be the beginning of sentence"}, {"start": 3477.44, "end": 3480.32, "text": " uh tokens id so let me show you what"}, {"start": 3481.12, "end": 3485.92, "text": " That looks like i'm going to take the first five elements and all of them are going to be equal like this"}, {"start": 3485.92, "end": 3487.92, "text": " So that's how how they initially"}, {"start": 3487.92, "end": 3494.08, "text": " Um, well initialize the image tokens and then later we're gonna keep on as we produce the image tokens"}, {"start": 3494.08, "end": 3496.88, "text": " We're gonna feed them back into this image tokens array. Okay"}, {"start": 3497.52, "end": 3499.2000000000003, "text": " So nothing fancy there"}, {"start": 3499.2000000000003, "end": 3503.28, "text": " Uh continuing on we form these uh token indices"}, {"start": 3503.92, "end": 3508.42, "text": " And we have some settings like temperature is set to one top k 256"}, {"start": 3509.02, "end": 3511.02, "text": " superconditioning factor"}, {"start": 3512.08, "end": 3514.32, "text": " Which is used for the classifier free, uh"}, {"start": 3514.32, "end": 3520.56, "text": " Uh guidance part and that's it. Those are the settings and now we start iterating basically a for loop"}, {"start": 3521.42, "end": 3523.84, "text": " 256 times we're gonna generate a novel"}, {"start": 3524.48, "end": 3526.1600000000003, "text": " image vector"}, {"start": 3526.1600000000003, "end": 3532.4, "text": " Uh, and uh, basically then we're gonna feed that into the vq-gan decoder as I said multiple times already"}, {"start": 3533.28, "end": 3534.88, "text": " so"}, {"start": 3534.88, "end": 3540.96, "text": " We care about this part of the loop because this one we'll enter this one once we have 256 tokens"}, {"start": 3540.96, "end": 3544.0, "text": " Um, we'll enter the image grid from tokens function"}, {"start": 3544.88, "end": 3550.56, "text": " So let's focus on this part here. So what happens here as you can see whatever we take the image"}, {"start": 3550.64, "end": 3554.2400000000002, "text": " So i is initially zero, which means we initially feed the b.os"}, {"start": 3554.32, "end": 3556.7200000000003, "text": " So the beginning of of of sentence token"}, {"start": 3557.36, "end": 3562.32, "text": " And we feed the output result into into the first slot of this image tokens"}, {"start": 3562.56, "end": 3565.6, "text": " And that's how we start populating this the image tokens"}, {"start": 3566.32, "end": 3570.32, "text": " Array again, its dimensionality is one plus 256"}, {"start": 3570.32, "end": 3573.28, "text": " It's precisely because of this behavior here"}, {"start": 3574.2400000000002, "end": 3578.4, "text": " We pass the tokens we pass the attention mask the encoder state the attention state"}, {"start": 3578.48, "end": 3581.2000000000003, "text": " Which is going to contain the keys and values from the decoder"}, {"start": 3581.76, "end": 3586.0, "text": " And that's it and we we pass the token indices. So we basically pass"}, {"start": 3586.6400000000003, "end": 3588.6400000000003, "text": " Where we are in the generation?"}, {"start": 3588.6400000000003, "end": 3589.92, "text": " uh"}, {"start": 3589.92, "end": 3596.32, "text": " In the generation sequence, okay in the generated sequence. Okay. 
Now let's step inside of the uh forward function"}, {"start": 3597.2000000000003, "end": 3599.2000000000003, "text": " of the bar decoder"}, {"start": 3599.2, "end": 3600.48, "text": " so"}, {"start": 3600.48, "end": 3602.48, "text": " We grab the number of images"}, {"start": 3602.56, "end": 3604.56, "text": " uh, we then basically"}, {"start": 3605.68, "end": 3610.66, "text": " Make sure that the batch dimension is corresponds to to the super conditioning"}, {"start": 3611.68, "end": 3613.68, "text": " Thingy, so it's going to be two"}, {"start": 3614.08, "end": 3616.08, "text": " Okay, so it's going to be two there"}, {"start": 3616.3999999999996, "end": 3619.2, "text": " We grab the the list of previous tokens"}, {"start": 3620.3999999999996, "end": 3623.2799999999997, "text": " uh, so previous tokens dimensionality is"}, {"start": 3624.24, "end": 3627.8399999999997, "text": " Uh one so that's going to be initially just a b.os token"}, {"start": 3627.84, "end": 3629.84, "text": " and then"}, {"start": 3629.92, "end": 3633.1200000000003, "text": " Because we only have a single image. This is just going to replicate"}, {"start": 3633.76, "end": 3637.04, "text": " Uh the previous tokens we're going to have two b.os tokens"}, {"start": 3638.08, "end": 3639.36, "text": " And that's it"}, {"start": 3639.36, "end": 3641.44, "text": " Again, the reason being the super conditioning"}, {"start": 3642.1600000000003, "end": 3645.76, "text": " That they are doing so the classifier free guidance thingy for autoregressive models"}, {"start": 3646.32, "end": 3648.48, "text": " Uh some clamping. I don't think this actually"}, {"start": 3649.44, "end": 3651.1200000000003, "text": " is needed"}, {"start": 3651.12, "end": 3657.8399999999997, "text": " And then so now we do the embedding but remember this time. This is the these are the image embeddings"}, {"start": 3658.0, "end": 3661.68, "text": " Okay, so these are the as you can see here. These are this is the embedding table"}, {"start": 3662.16, "end": 3669.2, "text": " This is a separate vocabulary compared to the text that we use for the bar encoder part. Okay, so we do the encoding"}, {"start": 3669.7599999999998, "end": 3671.7599999999998, "text": " Then we add the positional"}, {"start": 3672.3199999999997, "end": 3673.92, "text": " Encoding here"}, {"start": 3673.92, "end": 3676.16, "text": " And now we have the coder state. It's going to be I guess"}, {"start": 3677.2, "end": 3679.2, "text": " What two?"}, {"start": 3679.2, "end": 3681.2, "text": " One"}, {"start": 3681.66, "end": 3683.2799999999997, "text": " 2000 something maybe"}, {"start": 3683.2799999999997, "end": 3686.24, "text": " Yeah, no two two two thousand forty eight makes sense"}, {"start": 3686.48, "end": 3692.7999999999997, "text": " Okay, because we have b.os a single token we have two because of super conditioning and we just embedded them into this"}, {"start": 3693.8399999999997, "end": 3697.68, "text": " Dimensionality here. So all of that makes sense. Let's continue on here"}, {"start": 3698.3999999999996, "end": 3700.7999999999997, "text": " Now we add the additional dimensions now. 
It's going to be"}, {"start": 3700.8, "end": 3708.0800000000004, "text": " 21 2048 and now we just iterate through the decoder"}, {"start": 3709.44, "end": 3711.44, "text": " layers of this decoder"}, {"start": 3711.6800000000003, "end": 3714.1600000000003, "text": " So we pass the decoder state"}, {"start": 3714.7200000000003, "end": 3721.2000000000003, "text": " The encoder state because we have the cross attention. Remember we passed the attention state. So attention state is going to be"}, {"start": 3722.94, "end": 3724.1600000000003, "text": " Basically"}, {"start": 3724.1600000000003, "end": 3726.1600000000003, "text": " um number of layers"}, {"start": 3726.16, "end": 3732.0, "text": " uh, and then four because we're going to keep keys and values and times two because we have the"}, {"start": 3732.62, "end": 3736.96, "text": " superconditioning and then finally we're going to have 256 and"}, {"start": 3737.8199999999997, "end": 3745.44, "text": " 2048 so again 24 because we have 24 layers four because we are saving key and value"}, {"start": 3746.3999999999996, "end": 3750.08, "text": " For each of the layers but times two because we are using"}, {"start": 3750.08, "end": 3757.36, "text": " Uh two sequences the one with uh empty sequence and the one with the actual caption because of superconditioning and then 256 is because we have"}, {"start": 3757.66, "end": 3760.56, "text": " 256 image tokens in the decoder and"}, {"start": 3761.2599999999998, "end": 3766.3199999999997, "text": " 2048 because that's the model dimension. Okay, just kind of breaking down. The shape is always useful"}, {"start": 3766.96, "end": 3769.2799999999997, "text": " we pass the attention mask for the"}, {"start": 3771.44, "end": 3776.0, "text": " Input caption and we pass the token now, let's hit the forward function here"}, {"start": 3776.0, "end": 3781.44, "text": " So here we just generate the attention mask depending on where we are in the generation process"}, {"start": 3781.44, "end": 3784.48, "text": " So because token token index currently is zero"}, {"start": 3784.96, "end": 3790.0, "text": " We are we're just we're we're just have passed the b.os. So the beginning of sentence token"}, {"start": 3790.32, "end": 3794.32, "text": " And that's why the self-attention mask will currently just be active in the first"}, {"start": 3794.88, "end": 3798.96, "text": " In the zero slot and everything else is false. So you can see here"}, {"start": 3799.92, "end": 3803.28, "text": " That the first will be true and then everything else is false"}, {"start": 3803.28, "end": 3809.92, "text": " is false and then we just kind of make sure that we have batch dimension of two because of superconditioning"}, {"start": 3810.88, "end": 3816.4, "text": " And then we just pass the equator state through the layer normalization. Nothing fancy there"}, {"start": 3817.6800000000003, "end": 3819.6800000000003, "text": " and now we pass it through the"}, {"start": 3819.9, "end": 3823.84, "text": " Attention part. So i'm gonna skip the attention part is fairly. Um"}, {"start": 3824.96, "end": 3826.96, "text": " Well, actually, let me see"}, {"start": 3827.0400000000004, "end": 3830.32, "text": " This is where we will store the keys and values. 
Let me show you this part"}, {"start": 3830.32, "end": 3833.04, "text": " So let me just see whether I have uh"}, {"start": 3836.0, "end": 3839.76, "text": " Okay, so we have a self-attention we have a break point there so i'm gonna hit"}, {"start": 3840.96, "end": 3844.6400000000003, "text": " That break point and let's see what we do. So we store the keys and values"}, {"start": 3845.92, "end": 3850.0, "text": " And let's see what their dimensionality is. So we have two one"}, {"start": 3850.94, "end": 3854.48, "text": " 2048 and the same for for a value vector"}, {"start": 3855.04, "end": 3859.28, "text": " We basically concatenate them and then we store it inside of this attention state"}, {"start": 3859.28, "end": 3862.48, "text": " And that's it. And now grabbing from the attention state"}, {"start": 3862.88, "end": 3866.5600000000004, "text": " We're going to grab all of the previous keys up to this token"}, {"start": 3866.96, "end": 3871.84, "text": " Which in this particular case is only these keys and values because we are just starting the generation"}, {"start": 3872.1600000000003, "end": 3875.92, "text": " But that's how the how the logic is implemented using this attention state"}, {"start": 3875.92, "end": 3880.2400000000002, "text": " We're basically fetching all of the previous keys and values and that's it"}, {"start": 3880.2400000000002, "end": 3885.1200000000003, "text": " And now we just pass it through the we just do the attention logic and we return back here"}, {"start": 3886.48, "end": 3888.48, "text": " That's it now we have the"}, {"start": 3888.48, "end": 3891.76, "text": " uh cross attention part we passed the decoder and encoder"}, {"start": 3892.56, "end": 3900.72, "text": " Uh, and that's it. Everything else is standard transformer logic. We pass it now through the glue and that's a single step through the"}, {"start": 3901.2, "end": 3904.32, "text": " Decoder layer i'm gonna now remove the break points here"}, {"start": 3905.44, "end": 3907.84, "text": " I'm gonna remove the break point here as well"}, {"start": 3908.8, "end": 3910.8, "text": " And let's continue on here"}, {"start": 3910.8, "end": 3917.84, "text": " I'm gonna hit f5 we went through all of the layers and now we pass it through uh, again"}, {"start": 3917.84, "end": 3921.76, "text": " Uh, this final l n is just like layer norm"}, {"start": 3922.2400000000002, "end": 3925.2000000000003, "text": " Uh, and finally here is where the mapping happens"}, {"start": 3925.52, "end": 3928.7200000000003, "text": " So here we're gonna take the vector that's 2 1"}, {"start": 3929.34, "end": 3936.5, "text": " 2048 and we're gonna map that into the corresponding into the image space. So basically now the logits"}, {"start": 3937.92, "end": 3941.7000000000003, "text": " Will be of 2 1 16 000 416"}, {"start": 3942.56, "end": 3946.0, "text": " Because that's the basically the image space"}, {"start": 3946.0, "end": 3952.9, "text": " Dimensionality we grab the settings such as top k temperature is super conditioning vector"}, {"start": 3953.54, "end": 3955.54, "text": " now this part is kind of"}, {"start": 3956.1, "end": 3957.06, "text": " um"}, {"start": 3957.06, "end": 3962.74, "text": " Not completely clear because it's so so hard coded. 
So let me try and uh, break it down"}, {"start": 3963.06, "end": 3965.7, "text": " so first we grab all of the um"}, {"start": 3966.5, "end": 3970.1, "text": " Our long the batch dimension we grab everything and that makes sense"}, {"start": 3970.1, "end": 3976.3399999999997, "text": " we take minus one because we only only only want to we only care about the last the logis from the from the"}, {"start": 3976.98, "end": 3983.7, "text": " Um from the last token because we're trying to sample from there that makes sense, but this part doesn't make sense. Why is it?"}, {"start": 3984.18, "end": 3986.18, "text": " um this number"}, {"start": 3986.18, "end": 3988.42, "text": " Which is smaller than the number here"}, {"start": 3989.38, "end": 3996.66, "text": " And my hypothesis is that well it might be that the original vq. Gan checkpoint they used"}, {"start": 3996.66, "end": 4001.54, "text": " Um had a bigger vocab compared to their target fine tuning"}, {"start": 4002.1, "end": 4006.2599999999998, "text": " After they've done the fine tuning of of the gans not completely clear"}, {"start": 4006.8999999999996, "end": 4011.06, "text": " About about this part in any case. So here is the super conditioning part"}, {"start": 4011.06, "end": 4017.7799999999997, "text": " So we grab the logits and as you can see here we grab now we grab the so these are the logis that correspond"}, {"start": 4018.18, "end": 4022.74, "text": " To the empty sequence and these are the logis that correspond to the actual caption"}, {"start": 4022.74, "end": 4028.66, "text": " And we do the logic we saw in the diffusion models as well and that's how we combine the logis to"}, {"start": 4029.2999999999997, "end": 4031.2999999999997, "text": " To form the final"}, {"start": 4031.8399999999997, "end": 4036.5, "text": " Supercondition distribution here. Okay, so I think let me just check this"}, {"start": 4037.9399999999996, "end": 4041.2999999999997, "text": " Fairly sure so this is this reverse half wings"}, {"start": 4041.8599999999997, "end": 4048.02, "text": " Twitter profile I think was the first one to have suggested this type of super conditioning for"}, {"start": 4048.02, "end": 4055.4, "text": " Uh, ultra-regressive models. So so she said you can apply a similar trick to classify our free guidance to other regressive transformers"}, {"start": 4055.4, "end": 4057.88, "text": " To sample from a synthetic supercondition distribution"}, {"start": 4058.52, "end": 4061.48, "text": " Uh blah blah blah. I trained to try this and you can see how"}, {"start": 4062.2, "end": 4065.16, "text": " well, basically you trade off the variance, uh,"}, {"start": 4065.8, "end": 4067.8, "text": " with the um"}, {"start": 4068.2, "end": 4070.12, "text": " Quality of the images"}, {"start": 4070.12, "end": 4071.96, "text": " And finally, this is what we do"}, {"start": 4072.04, "end": 4076.84, "text": " So you take the unconditional logits and you just basically add up on top of that"}, {"start": 4076.84, "end": 4082.52, "text": " This difference times the conditional scale. So this is the same expression. 
We are now using in in the code"}, {"start": 4082.6000000000004, "end": 4084.6000000000004, "text": " Okay, let me go back to the code here"}, {"start": 4085.6400000000003, "end": 4089.6400000000003, "text": " So that's basically if you cannot decompose this you get that same"}, {"start": 4090.1800000000003, "end": 4091.08, "text": " expression"}, {"start": 4091.08, "end": 4091.88, "text": " um"}, {"start": 4091.88, "end": 4094.6000000000004, "text": " Now we do some this is the top case sampling part"}, {"start": 4094.84, "end": 4099.96, "text": " So we do we sort and we sort in a descending fashion such that the first"}, {"start": 4100.360000000001, "end": 4104.360000000001, "text": " Logit will be the biggest one and then they start descending"}, {"start": 4104.36, "end": 4111.88, "text": " Okay, and then is kept so we take the original logits and we find basically the"}, {"start": 4112.58, "end": 4117.5599999999995, "text": " 256th biggest logits here because here's a sorted one and because we have"}, {"start": 4118.339999999999, "end": 4120.339999999999, "text": " 256 here, so we're going to grab the"}, {"start": 4121.46, "end": 4122.92, "text": " 255th"}, {"start": 4122.92, "end": 4129.48, "text": " Biggest logit and use that as a threshold and all of those logits which are bigger than that threshold are kept"}, {"start": 4129.639999999999, "end": 4131.639999999999, "text": " That's how the top case sampling works"}, {"start": 4131.64, "end": 4134.4400000000005, "text": " Then we do some uh, this is just for for"}, {"start": 4135.54, "end": 4137.46, "text": " stabilization purposes"}, {"start": 4137.46, "end": 4138.68, "text": " temperature"}, {"start": 4138.68, "end": 4144.76, "text": " Exponential and then we just multiply with uh with this mask. So again, we are just keeping those logits that are that are"}, {"start": 4145.4800000000005, "end": 4150.84, "text": " Basically big enough where big enough is defined by the top k parameter. Okay, and"}, {"start": 4151.8, "end": 4158.12, "text": " Finally, we just sample from the distribution and we get the image token. Okay, so we now have image token here"}, {"start": 4158.12, "end": 4163.4, "text": " That's the that's the token we sampled and in the next iteration of this"}, {"start": 4164.36, "end": 4165.48, "text": " loop"}, {"start": 4165.48, "end": 4172.84, "text": " Basically now we're gonna feed that token as the input and then we're gonna keep on autoregressively generating the image tokens"}, {"start": 4172.84, "end": 4176.36, "text": " So now i'm gonna try and skip all of this"}, {"start": 4176.76, "end": 4182.84, "text": " So i'm gonna go and disable all breakpoints and i'm gonna enable only this one here"}, {"start": 4182.84, "end": 4188.92, "text": " I'm gonna hit f5 and i'm gonna let it run until we generate all of the 256"}, {"start": 4189.400000000001, "end": 4191.400000000001, "text": " image tokens basically"}, {"start": 4192.04, "end": 4193.56, "text": " Okay, here we are"}, {"start": 4193.56, "end": 4195.56, "text": " uh now let's enter the"}, {"start": 4196.52, "end": 4198.52, "text": " Vq.GAN decoder"}, {"start": 4198.76, "end": 4203.8, "text": " As you can see here, we only pass we ignore the first token in the image tokens because that was the BOS"}, {"start": 4203.88, "end": 4209.96, "text": " So that was the beginning of sentence token. So we ignore it. So let's jump here. 
I'm gonna"}, {"start": 4209.96, "end": 4213.02, "text": " uh enable all of the breakpoints"}, {"start": 4214.06, "end": 4216.54, "text": " Let's see where I have a breakpoint here. Yes, I do"}, {"start": 4217.18, "end": 4219.18, "text": " So let's continue doing this"}, {"start": 4219.74, "end": 4221.74, "text": " Okay, here we are"}, {"start": 4222.22, "end": 4227.5, "text": " We delete the decoder that's gonna hopefully release some memory and empty we empty the cache"}, {"start": 4228.3, "end": 4231.1, "text": " So here finally we have this tip in memory"}, {"start": 4231.1, "end": 4237.18, "text": " This is what i'm used to and i'm not sure why that didn't happen previously when we were deleting the encoder"}, {"start": 4237.18, "end": 4244.780000000001, "text": " Um, yeah, that was weird in any case now we can see it here. Okay, so we now initialize the um"}, {"start": 4245.400000000001, "end": 4247.58, "text": " Detokenizer or the Vq.GAN decoder"}, {"start": 4248.700000000001, "end": 4253.26, "text": " So let's see what I have. Yeah, I have a breakpoint there again similar structure as before"}, {"start": 4253.740000000001, "end": 4258.860000000001, "text": " Uh, we create the tokenizer object and then we just load the parameters. So let's see"}, {"start": 4259.5, "end": 4261.26, "text": " how it looks like"}, {"start": 4261.26, "end": 4265.26, "text": " Again, we pass it the vocab count and the embedding count"}, {"start": 4265.26, "end": 4270.860000000001, "text": " So it's gonna have 256 tokens and this is the vocab count which corresponds"}, {"start": 4271.66, "end": 4273.66, "text": " To the this is the same"}, {"start": 4274.54, "end": 4280.7, "text": " Size as what the bar decoder had right? Okay, we form the embedding table"}, {"start": 4281.58, "end": 4286.780000000001, "text": " Uh, just some processing layers and decoder. I'm gonna ignore how the architecture looks like"}, {"start": 4286.780000000001, "end": 4292.62, "text": " It's basically a bunch of like up sampling layers and attention blocks and resonant blocks"}, {"start": 4292.62, "end": 4295.099999999999, "text": " Nothing super, um"}, {"start": 4295.66, "end": 4297.26, "text": " I guess"}, {"start": 4297.26, "end": 4301.9, "text": " Informative so we just have resonant blocks some attention blocks which literally do attention"}, {"start": 4302.38, "end": 4303.98, "text": " uh treating"}, {"start": 4303.98, "end": 4306.3, "text": " image tokens as as well"}, {"start": 4307.74, "end": 4311.82, "text": " Treating images as as sequences and thus we can just apply attention"}, {"start": 4312.38, "end": 4314.38, "text": " Um, just some up sampling"}, {"start": 4315.099999999999, "end": 4319.0199999999995, "text": " As I said nothing worth digging deeper into to be honest"}, {"start": 4319.02, "end": 4324.860000000001, "text": " Uh, you can go and check it out at your own pace if you want. Okay, so we now load the parameters"}, {"start": 4325.660000000001, "end": 4330.46, "text": " And we push the the tokenizer that we can again on the gpu"}, {"start": 4331.02, "end": 4335.26, "text": " And now we do the forward pass through the uh, this is the interesting part"}, {"start": 4336.06, "end": 4342.22, "text": " Not that complex but still interesting. So we have z that these are 256 tokens. 
So is that"}, {"start": 4343.1, "end": 4345.1, "text": " Shape is 256"}, {"start": 4345.1, "end": 4353.58, "text": " 26 again, i'm not sure why they're doing the clamping because we we we are certain that this will be inside of these boundaries already"}, {"start": 4353.820000000001, "end": 4355.820000000001, "text": " So there is some weird thing happening there"}, {"start": 4356.22, "end": 4362.620000000001, "text": " Uh, that's a consequence of that cropping we saw in the when the with the logits so that that might explain it"}, {"start": 4363.820000000001, "end": 4369.18, "text": " Okay, grid size is one because uh, as I said, we're generating just a single image and now there is a difference whether"}, {"start": 4370.46, "end": 4371.5, "text": " um"}, {"start": 4371.5, "end": 4380.54, "text": " Well, basically the the shape of this, um, how do we feed the the the image tokens into the actual convolutional decoder part?"}, {"start": 4380.78, "end": 4384.14, "text": " And we're going to enter this branch because the seamless is set to to false"}, {"start": 4384.62, "end": 4391.74, "text": " Uh, and we just embed using the uh embedding table embed our tokens and I assume this is the same"}, {"start": 4392.62, "end": 4398.14, "text": " Embedding table as the one that we had in the bart decoder. That's that should be shared weights"}, {"start": 4399.02, "end": 4400.62, "text": " and now we just"}, {"start": 4400.62, "end": 4403.099999999999, "text": " Do this view thingy and we end up with"}, {"start": 4404.2, "end": 4406.2, "text": " 16 16 256"}, {"start": 4406.78, "end": 4414.54, "text": " And that's precisely the latent space volume that we saw in the papers that we're going to now feed into dvq again"}, {"start": 4414.92, "end": 4417.74, "text": " uh convolutional, uh decoder part, okay"}, {"start": 4418.54, "end": 4421.099999999999, "text": " Some permutation some processing with the cnn"}, {"start": 4422.3, "end": 4425.58, "text": " Basically, this is a like a single, uh channel cnn"}, {"start": 4425.58, "end": 4432.0599999999995, "text": " Sorry single, uh, the spatial extent of the cnn is one times one and then we just up sample"}, {"start": 4432.54, "end": 4435.98, "text": " This this image and we end up finally with"}, {"start": 4437.18, "end": 4439.18, "text": " uh z of shape"}, {"start": 4440.38, "end": 4445.5, "text": " Uh 256 256 and three channels and that's our image and now we do some clipping"}, {"start": 4446.04, "end": 4449.5, "text": " Uh renormalization back into the 0255 range"}, {"start": 4450.0599999999995, "end": 4451.5, "text": " And that's it"}, {"start": 4451.5, "end": 4457.58, "text": " Some again manipulations on the shape and let's see what we whoops. Let's see what we end up with here"}, {"start": 4458.3, "end": 4463.5, "text": " So 256 256 3 so that's our rgb image and that's it guys"}, {"start": 4464.14, "end": 4468.06, "text": " basically, we now uh push the image we convert it into uh"}, {"start": 4468.7, "end": 4473.5, "text": " Unsigned integer, uh 8-bit format. We push it onto cpu from the kuda"}, {"start": 4474.22, "end": 4475.98, "text": " and we"}, {"start": 4475.98, "end": 4479.9, "text": " Basically convert it into non-py and now we basically yield back"}, {"start": 4479.9, "end": 4485.82, "text": " Uh pil image pill image and that's it and then we just save the image and that's it"}, {"start": 4485.82, "end": 4490.62, "text": " Now there is also this print ascii from image function. 
That's very neat actually"}, {"start": 4490.62, "end": 4496.139999999999, "text": " So again ascii from image makes sense when you're running this from a console. It's just gonna print the image"}, {"start": 4497.58, "end": 4501.179999999999, "text": " Um in the ascii format, I guess. Okay, that's it"}, {"start": 4501.66, "end": 4506.139999999999, "text": " Uh, let me uh step over this and we are pretty much done"}, {"start": 4506.14, "end": 4509.660000000001, "text": " That's the end of the program. Let me now show you the generated image"}, {"start": 4511.740000000001, "end": 4513.740000000001, "text": " I should have it somewhere here"}, {"start": 4514.22, "end": 4516.22, "text": " Let me open it up"}, {"start": 4516.22, "end": 4524.06, "text": " And here it is. So here is the uh, what was our caption? Our caption was um, a photo of a funny dog"}, {"start": 4524.62, "end": 4527.900000000001, "text": " And here is a dog. I'm not sure whether it's funny, but"}, {"start": 4528.54, "end": 4535.1, "text": " Uh, basically, that's it. Let me try a different caption and i'll get back to you once we once once it's done"}, {"start": 4535.1, "end": 4539.5, "text": " Maybe like something a bit more complex. This might break the dali mega"}, {"start": 4540.06, "end": 4545.660000000001, "text": " But it's worth a shot a photo of a funny dog. Uh, maybe riding a bicycle"}, {"start": 4546.860000000001, "end": 4548.46, "text": " Bicycle"}, {"start": 4548.46, "end": 4550.46, "text": " Let me run this one"}, {"start": 4550.700000000001, "end": 4554.780000000001, "text": " And let's see what we get. Okay guys, here is the actual image"}, {"start": 4554.780000000001, "end": 4556.3, "text": " It's kind of weird"}, {"start": 4556.3, "end": 4561.660000000001, "text": " You can kind of see it's trying to merge the concept of a dog with a concept of a bicycle"}, {"start": 4561.66, "end": 4569.18, "text": " But it's still not that expressive and big to achieve that so scale is vital for these types of models"}, {"start": 4569.74, "end": 4574.78, "text": " And that's it. Uh, do let me know whether you found this video useful as as usually"}, {"start": 4574.78, "end": 4592.139999999999, "text": " Uh, leave down in the comments below if you have any feedback for me, uh, and until next time. Bye. Bye"}]
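To make the two pieces dissected at length in the segments above concrete, namely the GLU feed-forward variant used inside each BART encoder/decoder layer and the super-conditioned top-k sampling of image tokens, here is a minimal PyTorch sketch. The names (GLUFeedForward, sample_next_image_token) and the default settings are illustrative assumptions; this is a sketch of the described logic, not the actual code of the DALL-E mini/mega port stepped through above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUFeedForward(nn.Module):
    # Feed-forward block in the spirit of Shazeer's "GLU Variants Improve Transformer":
    # out = W2( LayerNorm( GELU(x W) * (x V) ) ), with an input layer norm as described above.
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.ln_in = nn.LayerNorm(dim)
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)   # the "W" branch, passed through GELU
        self.value_proj = nn.Linear(dim, hidden_dim, bias=False)  # the "V" branch
        self.ln_mid = nn.LayerNorm(hidden_dim)
        self.out_proj = nn.Linear(hidden_dim, dim, bias=False)    # the "W2" projection back to the model dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.ln_in(x)
        gated = F.gelu(self.gate_proj(x)) * self.value_proj(x)    # element-wise (Hadamard) product
        return self.out_proj(self.ln_mid(gated))

def sample_next_image_token(logits: torch.Tensor, cond_scale: float = 10.0,
                            top_k: int = 256, temperature: float = 1.0) -> torch.Tensor:
    # logits has shape (2, image_vocab): row 0 comes from the empty caption, row 1 from the
    # real caption - the batch-of-two "super-conditioning" trick described in the walkthrough.
    uncond, cond = logits[0], logits[1]
    # Classifier-free guidance for autoregressive models: unconditional logits plus the scaled difference.
    guided = uncond + cond_scale * (cond - uncond)
    # Top-k: use the k-th largest logit as a threshold and mask everything below it.
    kth_value = torch.topk(guided, top_k).values[-1]
    guided = guided.masked_fill(guided < kth_value, float("-inf"))
    probs = torch.softmax(guided / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)  # id of the next image token

In the walkthrough above, a step like this is repeated 256 times, each sampled token being fed back into the BART decoder, before the resulting 16x16 grid of image codes is handed to the VQGAN decoder that produces the final 256x256 RGB image.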
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=c1GwVg3lt1c
OpenAI GLIDE (Diffusion) | ML Coding series | Towards Photorealistic Image Generation and Editing
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE 5th video of the ML coding series! In this one I cover OpenAI's GLIDE model from the "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models" paper. It directly builds upon the "Diffusion Models Beat GANs on Image Synthesis" paper that I've covered in the previous video of the series. It is also a direct precursor to DALL-E 2! I explain classifier-free guidance as well as CLIP guidance in this one. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2112.10741 ✅ GitHub: https://github.com/openai/glide-text2im ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 [paper] GLIDE - quick overview 00:03:30 Classifier free guidance explained 00:07:25 CLIP guidance explained 00:10:49 Training details, etc. 00:14:20 [coding] CLIP guidance script 00:26:25 Generating CLIP text target embedding 00:32:32 Reverse process (p_sample_loop) 00:34:35 How is text conditioning implemented 00:43:00 CLIP guidance code 00:46:26 Classifier-free guidance script 00:51:47 Main logic 00:56:35 Showing generated images 00:58:23 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #glide #diffusion #imagesynthesis
What's cracking guys! In this video I'm continuing on with the diffusion model track in the machine learning coding series, and in this one I'll be covering GLIDE, a model that was a precursor to DALL-E 2, coming from OpenAI. I've previously covered the paper in great depth, so if you want a deep dive into the paper itself you can check it out, I'm gonna link it somewhere here. But in this video I'll obviously be focusing on analyzing and understanding the code. I have three Jupyter notebooks prepared for you guys, and so we'll be going through those. So what I've done is, as always, I've cloned the repo here. You can find it, I'll link it in the video description. But before we go there, I first want to walk you through the paper as usual, just to give you all of the necessary context. Hopefully that's gonna be like a 10-15 minute quick overview compared to the deep dive I've already previously made. So without further ado, let's start understanding how GLIDE works, and then we'll start doing the coding part. Cool. So first things first, as I said, GLIDE is a precursor to the DALL-E model. So they mention here: when evaluated by human judges, our samples are preferred to those from DALL-E (so this is version 1) 87% of the time when evaluated for photorealism and 69% of the time when evaluated for caption similarity. DALL-E version 1 actually did not even use diffusion models, but DALL-E 2 did, and it built directly upon GLIDE. So yeah, we're slowly building our way towards more complex models. Cool. You can see some of the images here. Again, compared to my previous video where I showed you the guided DDPM, a model where you could condition using only a class, for example an ImageNet class, here GLIDE can actually cope with text, with natural language. So that's much, much cooler, and you can see some very cool examples here, like a corgi wearing a red bowtie and a purple party hat. You can see that you can composite different concepts together in a plausible way, you can bind attributes such as a purple party hat and a red bowtie to certain objects. So it's a fairly powerful model and, as I said, it was a precursor to DALL-E 2, and we all know what DALL-E 2 can now make. Aside from image generation, it can also do image inpainting. You can see here, you put a mask here and then you set some natural language prompt, "zebras roaming in the field", and out comes an image that's modified. Similarly here, "a girl hugging a corgi on a pedestal": you can see how this dog here is swapped by a corgi, etc., etc. You can see a bunch of examples there. So the model itself stands for Guided Language to Image Diffusion for Generation and Editing. Generation is the part that we all know about, and editing is the inpainting part. Cool. Then there is this background about DDPMs; I'm gonna skip all of that because hopefully you've watched some of my previous videos, this one is gonna build heavily upon those. So do check out the whole series if you haven't so far. You can also iteratively do this image inpainting, so you can see here how slowly they are adding: they first had the corgi image and they add the table here, then they add the vase here, etc.
Etc So you can cannot iteratively improve your images Okay, so the parts which I want to focus on in this video are mostly about clip guidance and Classifier free guidance so I want to focus on those and I'll probably if I have enough time focus on in painting as well So without further ado, let's first introduce the classifier free guidance. So in the previous video, we saw how we can do classifier based guidance where you use a pre trained Well classifier and you use its gradients So the classifier was trained on noisy images. That's an important detail and then you just use the gradients from that classifier to kind of guide your your class conditional diffusion generation Here the cool thing about classifier free guidance is you do not have to train a separate classifier on pre trained on noisy images It's it's much easier and and that's better Okay, so let me try to briefly explain how classifier free guidance works Again, as I just mentioned, this method does not require a separate classifier That's a big plus for for this method compared to the classifier guidance And then they say here so for classifier free guidance the label in a class conditional diffusion model is replaced with a null label with a fixed probability during training This is I think usually 10 or 20 percent during sampling the output of the model is extrapolated further in the direction of the model The output of the model is extrapolated further in the direction of the class conditional model and away from the null conditional model like so And you can see here the expression we basically have the one where we condition on those on the on the null label And then we just add this this basically this difference between the class conditional and the null conditional and we just use this guidance scale so the parameter s to well to to to modulate how much we want to go in in that direction And how I think about this in my head the mental model I have is basically you have this thing here tells you gives you the the noise in the unconditional case So it's going to give you some noise in some is going to be some point in the end dimensional space where and is basically the number of dimensions in your image So the resolution times the number of channels and so now once you have that point you then plot this point of where the model thinks the class conditional noise should be So if it outputs point like here then you can form the vector by just kind of subtracting one from another and you get something like this And then what what this final expression is going to be about is OK the model outputs the null conditioned noise and it's here And we know that if you want to move towards that particular class on upon which we are conditioning we need to move in this direction And because of that we just add that vector and we modulated with the with the scalar s to basically well to to start generating images that are from that particular class So that's the I literally have a geometric interpretation now whether this is completely correct or not I don't know but like I think I think it does sound like a like a good mental model In any case let's now see how we can replace the simple labels by the natural language because that's what we care about when it comes to to glide So what what they do is so to implement classifier free guidance with generic text prompts we sometimes replace text captions with an empty sequence which we also refer to as this Well zero crossed I guess during the training we then guide towards the 
caption using the modified prediction blah blah blah OK so you can see that everything remains the same except that we are now conditioning on caption Like and not on the simple label and that's pretty much it so the second thing they do is they use clip for guidance because here you cannot have like a thousand classes or whatnot because languages obviously has infinite possibilities of compositing And yeah there is infinite number of sentences I guess so you have to use something like clip and the only difference compared to the classifier guidance we saw in the previous video is that here you're looking for a gradient of the product between the caption and the image So let me read this for you perturbing the so when applied to the future models these techniques are similar to classifier guidance perturbing the reverse process mean along the gradient of the text image product with respect to the image OK So what that means is basically if you recall how we how we've done the classifier guidance we output like a mean and then on top of that we shift that mean by the scale version of the covariance matrix times the gradient of our classifier So now the only thing that differs in this in this clip guidance is that this these gradients here because we are now dealing with text and not with with classes are going to be this thing here So basically you encode a certain caption and you want to make sure you want to find you want to learn how to tweak the input image such that such that the product between the image feature from that image and the text embedding here is maximized And that means that the image is belonging more and more to that class or differently said it's becoming better and better described by that particular caption So that's that's all like that's a small tweak that we have to make in order to to make the clip thing and the text conditioning to work Again let me just quickly draw a picture here it might be a bit easier to understand so we have an input image here and we have our clip model here OK So here's clip So now we now have some type of a prompt like I don't know like maybe something simple like a dog OK So we encode that thing we pass it through the clip again we pass it through clip here so this same model basically and outcomes outcomes some some in there here so this is going to be an embedding that corresponds to this prompt here Next up we pass the image through the clip and we get a different embedding here OK This is going to be the image embedding let me maybe choose a different color just for the sake of like not to come in order not to confuse this So here we have a prompt for the So we have the text embedding and here we have the image embedding and now the gradients are going to tell us how we should tweak the image how should we tweak the image such that this vector is much closer in the image text embedding space OK Hopefully that was clear enough Now let me briefly tell you a couple of things about their training so they mentioned here for our main experiments we train a 3.5 billion parameter text conditional diffusion model at 64 times 64 resolution and another 1.5 billion parameter text conditional up sampling diffusion model to increase the resolution to 256 times 256 For clip guidance we also train a noise aware 64 times 64 VAT clip model OK So the only difference compared to the previous video on the guided DDPMs is basically here we have additional transformer that's going to be used to text condition our diffusion model instead of just doing a 
simple class conditioning It's not that like big of a difference to be honest so a couple more details here to condition on the text we first encoded into sequence of key tokens and feed these tokens into transformer model OK The output of this transformer is used in two ways so first the final token embedding is used in place of a class embedding in the ADM model so that's the model from the previous paper again the paper is called I think diffusion models beat GANs on image synthesis And second the last layer of token embeddings a sequence of key feature vectors is separately projected to the dimensionality of each attention layer throughout the ADM model so the fusion model and then concatenated to the attention context at each layer OK So that's how we integrate this textual conditioning information into the ADM into the diffusion model That's an additional detail I want you to understand OK Finally here they mentioned how like let me read this for you so after the initial training run we fine tuned our base models to support unconditional image generation this training procedure is exactly like pre training except 20% of token of text token sequences are replaced with the empty sequence this way the model retains its ability to generate text conditional outputs but can also generate images unconditionally and we need this ability so that we can such that we can well use the the the basically the classifier free guidance effectively OK guys that's pretty much it I'm going to skip everything else again the results are basically better when we use classifier free guidance compared to clip guidance so that's one of the reasons this is so cool I'm gonna skip all of this this is not that vital and finally worth mentioning the only model that they've open source is the so called glide filtered where they say here we filtered out training images containing people to reduce the capabilities of the model in many people sent you problematic use cases the model is also way smaller compared to their biggest version and they also filter out some other images such as like hate symbols et cetera et cetera so just worth having that in mind finally the model such as glide was still struggling a lot with with certain well let's call it weird phrases whereas the lead to became much better with these still not perfect but but better than then then what glide could achieve here we can see an example a mouse hunting a lion and we can see that the model does does not know that there does not understand the direction so here it appears as if the lion was hunting the mouse and not the other way around so it understands the association but not the the the causal relationship here cool okay that's pretty much it now let's go back to decode let's understand how this looks in actual code okay I'm gonna first start with this simple clip guided script so we're going to use the clip the pre trained clip model on on that's like noise up there and we're going to use it to to guide our generation so let me run this thing and again as usually I'm going to skip over everything that's not core to understanding what I'm trying to explain in a particular script here we want to understand how clip is used to guide the generation process so we just grab the devices and I'm going to have I have a GPU so it's going to be a GPU here some options default options for the model and diffusion we saw this in the first video like they're obviously are reusing a lot of the code, which is a good thing. 
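To make that extrapolation rule concrete, here is a minimal sketch of classifier-free guidance as described above; it is a simplification rather than the repo's exact code, and the names eps_cond, eps_uncond and guidance_scale are just illustrative:

```python
import torch

def classifier_free_guidance(eps_cond: torch.Tensor,
                             eps_uncond: torch.Tensor,
                             guidance_scale: float) -> torch.Tensor:
    """Extrapolate away from the null-conditioned prediction.

    eps_cond       - noise predicted with the caption (or class) conditioning
    eps_uncond     - noise predicted with the empty / null conditioning
    guidance_scale - the scalar s from the paper; s = 1 recovers eps_cond
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

With guidance_scale = 1 this collapses to the ordinary conditional prediction; larger values push the sample further along the "from null towards caption" direction, trading diversity for prompt fidelity.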
And so I'm going to scheme across all of this. So time a step through spacing. So what it does is if the model was trained on like 1000 steps, you can now instead during the sampling procedure have only hundred steps and still get good results, good high quality samples. Then we're going to form the model and the diffusion here. So model is basically a unit model and diffusion is contains all the constants that are necessary for the diffusion. Okay, so we'll need to step into this part because the model is a bit different compared to previous papers. So the model now contains also the transformer because of the text conditioning. So let's see how that looks like diffusion itself looks pretty much the same as in previous videos. So I'm going to skip over that. But we're going to jump into this create model function. Let me just hit F5 and let's get into the function itself. Okay, so a couple of details here depending on the image size. We're going to set up the unit in a particular way number number of channels is going to be different compared to depending on the image resolution. Again, we've seen this multiple times and they just have these attention blocks for particular resolutions. Not that important. And finally, we're going to have this text to image unit. So again, it's just a simple unit plus the transformer. That's everything that Glide changed compared to previous papers. Okay, now we're going to form the text to image unit here. There is a lot of parameters. I don't want to explain every single one of those. There is also the encoder part. I'm going to quickly show you what this is. Basically, they have a pre trained BPE encoder that are going that they are going to use. And I explained this in the open AI clip video. So I'm going to link it somewhere here. You can check it out if you want to understand a bit better what's going on here. How the actual tokenization works, how that magic looks like. Then you can check out that video. But here I'm just going to skip all of that and hit that five to enter the constructor of the text to image unit. Okay, so let's see what's different here. We call the constructor of the super class and that's going to be just a simple unit model. I'm not going to show you the unit because we already saw that in previous videos. As I said, I have to kind of depend on previous videos because otherwise this would be three or four hours long video. I kid you not. Okay, so we construct the we construct the unit model here. I'm going to skip over everything here. I'm just going to hit F5 and we have our unit constructed. Let's go outside here. And now, so this is the different part compared to previous papers. They have a transformer. So they form a transformer that has a particular context 128 tokens, certain width 512 dimensions, number of layers 16, etc, etc. Then they have they use some layer norm, not that important. They create some token embedding layer here. So this one is basically going to embed our tokens into certain well into a certain vector space. And then we also have positional embeddings. We don't need more than 128 because that's going to be the max context that this transformer is going to use. And we finally have this transformer projection linear layer. We're going to see how that one is used a bit later. Okay, finally, there is this padding embedding. We'll see how that all fits together a bit later. Okay, so cash stuff, not that vital. I'm going to skip all of that. 
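As a rough illustration of that timestep respacing idea, here is a simplified stand-in: the model was trained with 1000 noise levels, but sampling only visits an evenly spaced subset of them. The actual repo has its own respacing logic (which also rebuilds the noise schedule for the chosen subsequence), so treat this purely as a sketch:

```python
import numpy as np

def respace_timesteps(num_train_steps: int = 1000, num_sample_steps: int = 100):
    """Pick an evenly spaced subset of the training timesteps for sampling."""
    stride = num_train_steps / num_sample_steps
    return sorted({int(round(i * stride)) for i in range(num_sample_steps)})

print(respace_timesteps()[:5])   # e.g. [0, 10, 20, 30, 40]
```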
And finally, I'm going to skip the diffusion model because we've seen that already. I'll have to skip this. And here we are. So again, we have a unit with transformer and we have our diffusion model here. So basically those two together form form a diffusion model. Okay, let's go back into the main script. We just do some some boilerplate for machine learning, just in conversion to FP 16. We put the model onto GPU will load the actual weights, and then we print the number of parameters here. Okay, we repeat the same thing for the up sampling model. I'm going to skip all of this because we will not be focusing on the up sampling model. So I'm going to kind of skip over all of this until we get back into the main script. I should have toggled off the break points. But yeah, I guess next time. Cool. I think we are there almost. Basically, okay, so we created the up sampling model. Now we create the clip model. Okay, let's see how clip looks like. So clip, as you can see here, I'm going to just step into here. We grab some default config path so they have this config YAML file that specifies certain properties of our model. So of the clip model so you can see it here again too many details to cover. I'm just going to continue on execution in the in the create clip function. So here we are. We create a simple tokenizer so clip will have a different a bit different tokenizer compared to the diffusion model. Again, we can treat it as a black box for all practical purposes. Okay. Now, as I already said, I covered clip in one of the previous videos. If you really want to understand how the code works and how clip works to check it out or or also check out my paper video on clip. I've done that as well. I can link those somewhere here. We can briefly jump into the text encoder. Let me see whether I have basically. Okay, so let's kind of let me let me quickly show you how this text encoder is going to look like. I'm going to skip some parts so they just form the causal mask here because well does that make sense. Yeah, it does because we are here dealing with with text. But then again, we are not generating text. So I guess you could use bi-directional you could use non causal masks as well here. But in any case, let me get a scheme over all of this. So now they start stacking these blocks. The first block is just going to be a text embedding block, which means you're going to take tokens and then you're going to embed them into certain space. I think I'm going to show you this in the in the I'm going to just put a well, I don't want to even put a breakpoint here. It's just a simple embedding layer. I'm going to skip that. And then they form they had a bunch of transformer blocks and then at the end they just extract the basically embedding that's above the last token in the sequence. So that's how clip works. I'm going to have to skip all of that and let's continue. So we have the text encoders. Basically, it's a simple transformer. It first embeds the tokens, then it processes them using the transformer blocks. And then at the end, it just extracts, as I said, the last embedding. That's it. Next up, we have image encoder. Image encoder is VAT. So they're using vision transformer. Again, I'll have to just treat this as a black box. Let me see whether there's something interesting here. OK, this time they're using a non causal mask, as you can see here, which makes sense for images. Let's see what else. So again, similar structure. They form these blocks. The first one is going to be image embedding. 
So we're going to basically do the patchify like layer of the VAT, which means you're going to treat image as a bunch of patches. And then you're going to embed those patches into certain into certain well, vector space. And then you're going to just treat those as tokens. And then it's pretty much the same as if you were to just use transformer. And you can see here they do have a bunch of transformer blocks. And at the end, they have image feature extractor that's going to basically extract. Let me just see what's going on there. OK, we're going to I'm going to put a like a simple break point in the four functions. We're going to see that a bit later once we start using clip actually. OK, so that's it. We have the image, you go to we have the text encoder. We form this logic scale. That's a learnable parameter. We might see how it's used a bit later. And finally, they form the actual clip model. So he has a bunch of helper functions. What we care about is, well, for now, the I can just keep the function. That's not that important. We're going to see everything a bit later during the actual four pass. OK, so now we just load the checkpoints for the image encoder. We load the checkpoint for the text encoder. This is where the actual fun starts. We're going to take the prompt and oil painting of a corgi. We're going to encode that and we're going to generate an image that's going to have that's going to be well described by this prompt. So the guidance scale is going to be three. So that's the S parameter that we use to multiply the gradients coming from the clip model. We have some temperature that's not that important. And now we just tokenize. Again, we're going to treat this as a black box. We take the prompt and we talk and I said using the diffusion models tokenizer into tokens. OK, so let me show you how that thing is going to look like. It's going to be as you can see here, there is seven tokens in total. And then there's this special utility function that's going to just pad the these tokens such that we fill in the context for the transformer. That's a part of our diffusion model. OK, let me show you what that means. In practice, that means we're going to have I think we are using hundred twenty eight. So that's this text context parameter. So basically if I were to print the number of tokens, you can see there is hundred twenty eight tokens now. And the mask just tells you which ones are actually legit. And the first seven ones are actually true tokens. Everything else is just padding, which is specified by setting false for all of those other tokens. OK, so so the reason we're doing this is so that we can basically take multiple prompts and do this in a single for a pass. And that's why we cannot pad to a fixed dimension. OK, so now we form this model keyword arguments dictionary. We just store the tokens of exercise is just one. So that means we're going to we won't do anything smart here. We just store the tokens and the mask. And this is going to be later used to condition the diffusion model. We're going to see that in a second. OK, so there is this condition function. I'm just going to enable basically all of the breakpoints. And this is going to we're going to hit this function a bit later. It's going to be very important. We are actually going to have to enter this function. So let me click F10 and enter this one. So we're going to form this. Well, basically the embedding for the prompts and then we're going to return the this conditioning function. 
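A minimal sketch of what such a padding helper does, assuming the 128-token context mentioned above; the token ids and the pad value are made up for illustration:

```python
from typing import List, Tuple
import torch

def pad_tokens_and_mask(tokens: List[int],
                        text_ctx: int = 128,
                        pad_token: int = 0) -> Tuple[torch.Tensor, torch.Tensor]:
    """Pad a token sequence to the fixed transformer context and build a mask.

    The mask is True for real tokens and False for padding, so prompts of
    different lengths can be batched into a single forward pass.
    """
    tokens = tokens[:text_ctx]
    mask = [True] * len(tokens) + [False] * (text_ctx - len(tokens))
    padded = tokens + [pad_token] * (text_ctx - len(tokens))
    return torch.tensor(padded), torch.tensor(mask)

toks, mask = pad_tokens_and_mask([12, 345, 6789, 42, 7, 99, 1])   # 7 "real" tokens
print(toks.shape, mask[:9])   # torch.Size([128]), True for the first 7, then False
```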
OK, so let's see what's going to happen here. So we just basically pass prompts through the textual pathway of the clip model. Nothing fancy there. We've seen this multiple times. We've seen this in my previous clip video. So we've called this in code prompts that's just going to, as you can see here, iterate through the list of prompts. We only have a single prompt, so nothing fancy will happen there. We're going to treat the tokenizer as a black box. So this is going to return us some sequence of tokens. Let me let me show you what we're going to get here. So if I run this, we have like, as you can see here, six tokens and then we're going to call again this padded tokens and length function, which is going to just pad everything. OK, I kind of entered here. Let me just skip over this. We don't want to go there. So again, we end up with these subtokens, which are going to be just padded versions. Whoops. So if I just grab the subtokens here and if I print this, you can see we have the same tokens as above. Hmm. For some reason, there is some difference here. Let me just see. Oh, no. OK, we have the start of sentence token here and the end of sentence token here. And we have our tokens here. And then we have additionally the padding. So it's a bit different. As I said, Clip has a bit different tokenizer compared to the tokenizer that we have in our diffusion model. And now we're just going to append that and we are done. We just converted to torch tensors and we are back here. So now we have basically tokens and their associated lengths. Next up, we're going to pass these tokens through the text pathway. So that's again just going to embed the tokens, then apply a bunch of transformer blocks. And ultimately, we are going to extract a particular a particular token. So let's see how the extractor part works. So again, this is the this is the forward pass of our of our text encoder. And we don't care about most of these parts because that's just going to do as I said, just going to do the embedding and then number of transformer blocks. And ultimately, we'll have this. I think this is the part that we care about. Let's see how this is going to work. So we call the text extractor here. And here it is. So the text feature extractor. Let's see what it does. Basically, as you can see here, text length tells us how big how many tokens do we have fed into the transformer. And it's going to be equal to seven. So that tells you that that's the index of the last token. And then we just gather. So we let me show you what text is. So text is a print shape here. So we have 77 tokens. Some of those tokens are obviously padding. Some of those tokens are start and end of sentence tokens and 512 is just a dimensionality. So what we do by by doing the gather, we are basically going to extract the last token here. So that means we're going to extract the seventh, the well, the eighth token. Let's step over there and let me convince you that that's indeed the case if I were to take the text. And then if I were to take the seventh here and let's take the first three elements, you can see them here. And now let's let me show you the let me just see what the shape of this thing is. OK, so let's now take zero zero. And let's take also first three elements. And those. Yeah, as you can see, these are the same because we literally just extracted whatever the embeddings are on top of that seventh token. That's pretty much it. And finally, we pass that X through the layer norm. 
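Here is roughly what that last-token extraction looks like in isolation; proj and ln stand in for the encoder's final projection and layer norm, and the whole thing is a simplified re-implementation rather than the repo's code:

```python
import torch
import torch.nn.functional as F

def extract_text_feature(token_embeddings: torch.Tensor,
                         last_token_idx: torch.Tensor,
                         proj: torch.nn.Linear,
                         ln: torch.nn.LayerNorm) -> torch.Tensor:
    """Grab the embedding sitting above the final (end-of-text) token of each prompt.

    token_embeddings: (batch, seq_len, width) output of the text transformer
    last_token_idx:   (batch,) index of the final token for each prompt (e.g. 7)
    """
    batch, _, width = token_embeddings.shape
    idx = last_token_idx.view(batch, 1, 1).expand(batch, 1, width)
    last = token_embeddings.gather(1, idx).squeeze(1)   # (batch, width)
    z_t = proj(ln(last))                                # map into the joint space
    return F.normalize(z_t, dim=-1)                     # L2-normalise, CLIP style

B, S, W = 2, 77, 512
feats = torch.randn(B, S, W)
z = extract_text_feature(feats, torch.tensor([7, 12]),
                         torch.nn.Linear(W, W), torch.nn.LayerNorm(W))
print(z.shape)   # torch.Size([2, 512])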
And finally, we just return the basically the mapped version of that particular token. So let's see what F is. I think it's just a simple. Yeah, fine. It's just a basically a simple linear layer with some tweaks. But I will not get into those details because as you can imagine, this is a very huge code base and I cannot cover everything. OK, so let's go outside here and we end up with H being what H shape is going to be one five hundred twelve. So we have a single token that is supposed to be our representation of our input prompt. OK, continuing on here, just some asserts. And that's it. We have our ZT tokens. That's the same thing. So again, one five twelve. And we're going to now normalize it just using the well, I guess this is just L2 norm as as previously as we've seen previously in the open clip video. OK, so that's the CT is that the and now we return this con function. I have a breakpoint here. So we're going to see it a bit later when the time comes. But for now, let's continue on with the main script. So a recap, because this was fairly overwhelming potentially. So we formed our diffusion models and we loaded the actual checkpoints here. We can ignore the sampling model. Then we we loaded the clip model. We have a prompt and we basically encoded a prompt here. We have a mask and tokens for that prompt. And now we ended up creating this this embedding vector for our prompt using the clip model. OK, so that's it. Now we're going to go through the P sample loop. So that's going to be the reverse process of our diffusion model. Let's see what we're going to do here. We want to generate an image that's going to be of this particular resolution. So it's going to be a sixty four sixty four RGB. And let's see what's interesting here. We're going to use the tokens and mask to condition the model. So that's going to be different compared to our previous videos with the DPMs. In any case, let's now jump into this P sample loop. Again, I kind of skipped the P sample loop because everything it does is just basically iterates through this dysfunction loop progressive and starts generating multiple samples. But I think we are generating just a single sample so that it doesn't even matter. So we can just kind of skip over all of this. And now the actual fun starts. So we start as we as you may know from the completely like a normal distribution. We can just sample our image from a normal distribution. That means we have initially completely noisy image. And then we're going to slowly work our way from there and denoise that image until we get the image that corresponds to the prompt that we are using for this for this program. OK, so we'll have a hundred steps of the reverse process and just some DQDM stuff for for monitoring the progress. But we're going to now do hundred times in order to hundred denoising steps so that we end up with an image. OK, so initially T is going to be 99. As you can see here. And now the whole fun happens in this P sample function. So we're going to pass the fusion model so that's unit with transformer. We're going to pass the noisy image time step and a couple of other basically. Parameters here and let's see how the sample is going to look like. OK, so this is the funny part. OK, here let's see how the text information is used to condition the model. That's going to be the interesting part. OK, let me step into here. So that's going to be again. That's going to be the text to image unit. And what we're going to do is first we're going to embed the time steps. 
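Before going further into the UNet, here is what the reverse-process loop we just stepped into boils down to once logging and progress bars are stripped away; p_sample and model_kwargs are assumed to follow the usage described above, so treat this as a skeleton rather than the actual p_sample_loop:

```python
import torch

def sample_loop(p_sample, model, shape, num_steps, model_kwargs,
                device="cuda", cond_fn=None):
    """Skeleton of the reverse diffusion process.

    Start from pure Gaussian noise and repeatedly call `p_sample` (one
    denoising step) for t = num_steps-1 ... 0; `p_sample` is assumed to
    return a dict holding the partially denoised image under "sample".
    """
    img = torch.randn(*shape, device=device)           # x_T ~ N(0, I): pure noise
    for t in reversed(range(num_steps)):                # e.g. 99, 98, ..., 0
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        out = p_sample(model, img, t_batch,
                       model_kwargs=model_kwargs,       # tokens + mask conditioning
                       cond_fn=cond_fn)                 # optional CLIP guidance
        img = out["sample"]
    return img                                          # the generated image
```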
So first sinusoidal embeddings and then we just map them using simple linear layers here. And this is the interesting and different part. We're going to pass the tokens and the mask through the transformer part of our what's the name again of this thing? Text to image unit. So let's see how that thing is going to look like. Let me set a break point here. So it's going to be simply a transformer. So I don't think anything fancy will happen here. We get the embeddings, we do the positionally encoding, blah, blah, blah. We apply the transformer here. Let me just see whether there is something interesting. OK, so here is the interesting part. We are returning, as you recall from the paper, not just like the embedding for the last token, we're also returning all of the tokens, all of the embedding vectors from the previous layer or something like that. Let me just see what exactly is going on here. So we passed it through the transformer. We get this XF out and then we applied this layer norm. And as you can see here, so let me just show you the shape of XF out first. So the shape of this thing is going to be 128, 512. OK, and so we are going to do the following. We're going to take the last token and then map it using this transformer projection layer. And that's going to be simply, I assume, yeah, a linear layer. OK, and that's how we add we end up with XF projection, which is going to be 11512 in shape, I assume. Let me just verify that's indeed the case. Because the linear layer actually has a different output dimensionality. That's why we end up with 768. Not that important, but in any case. And we take here, we just do some permutation, but we take all of the tokens. And that's the part from the paper. So again, this is going to be 512 tokens. And this is the part we saw in the paper. Let me let me show you somewhere here. They mentioned it. So the output of this transformer is used in two ways. First, the final token embedding is used in place of a class embedding in the ADM model. Second, the last layer of token embeddings, a sequence of K feature vectors, is separately projected to the dimensionality of each attention layer throughout the ADM model and then concatenated to the attention context of each layer. So that's we're using this twofold. And that's why we're returning this dictionary containing both of these. OK, let's continue on. And now you can see we are adding to the temporal embedding. We're just having this this this projection. So this is this simple embedding vector as per the as the paper already mentioned. So we're doing this in the same way as if we were if we had just a simple class information and not text. So that's everything remains the same as in previous papers. But the different part is going to be how do we use this XF out. And we see here that we are passing out XF out throughout various modules of our unit model. So that's the difference. That's how we are conditioning using the temporal information plus the textual information. OK, let's go here. Finally, the whole magic is going to happen here. Let's find a module that's actually integrating the text information. I'm not sure which ones are used. Let me just double check that. So let's go to unit model here and let's find the model that's going to be using the textual information. I think it's going to be either textual block. Yeah, I think it's going to be the textual block. So this is the part is going to be using the textual information. The rest blocks. So here is the residual block. 
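To summarise that wiring with a tiny runnable toy (these stand-in modules are not the repo's classes; the real model uses sinusoidal timestep features, a full transformer, and also consumes the padding mask):

```python
import torch
import torch.nn as nn

width, emb_dim, seq_len = 512, 768, 128

class ToyTextTransformer(nn.Module):
    """Toy stand-in that only shows the two outputs: xf_proj and xf_out."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(50257, width)
        self.proj = nn.Linear(width, emb_dim)
    def forward(self, tokens):
        xf = self.tok(tokens)             # (B, seq_len, width)
        xf_proj = self.proj(xf[:, -1])    # last token, used like a class embedding
        xf_out = xf.permute(0, 2, 1)      # (B, width, seq_len) for the attention blocks
        return xf_proj, xf_out

text_transformer = ToyTextTransformer()
time_embed = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(), nn.Linear(emb_dim, emb_dim))

tokens = torch.randint(0, 50257, (1, seq_len))
timesteps = torch.tensor([[99.0]])

xf_proj, xf_out = text_transformer(tokens)
emb = time_embed(timesteps) + xf_proj     # text embedding added to the time embedding
print(emb.shape, xf_out.shape)            # torch.Size([1, 768]) torch.Size([1, 512, 128])
```

The emb vector then conditions every residual block, while xf_out is handed to every attention block, which is the twofold use the paper describes.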
That one is only going to be using the temporal information as previously we saw this in in previous videos as well. So let's just step over. Whoops. Where am I? OK, so the first module is not the attention module. So we're not hitting the we'll have to kind of keep on doing this or I can alternatively hit F5 until we hit the first attention block where we're going to finally integrate this text information. OK, so we can verify that this is 512, 128 as expected. And you can hear see how it's integrated into the attention. So what we do with those tokens is we apply this encoder key value. So that's just going to be a convolutional like a layer applied. So we do some processing here first and then we end up with a shape that's going to be what one seven sixty eight hundred twenty eight hundred twenty eight tokens of this dimensionality. And then we're going to pass that into the attention and attention is this QKB attention object. Let me get kind of enter into it and let's just add a break point there. So let's just see how the information is integrated. We can see here. So we care about this encoder encoder variable. We form the queries, keys and values the usual way using the image features. And now we're trying to integrate the textual features and how they do this is as you can see, they grab they just do some reshaping and then they concatenate those textual basically keys to image keys. And they concatenate the textual values to image values. And then they just do the attention. So it's kind of bothersome. This is obviously not the only way you can do it, but it's basically exactly what the paper here said. So the output of this transformer is using two ways. Blah, blah, blah. And then second, the last layer of token embeddings, a sequence of key feature vectors is separately projected to the dimensionality of each attention layer. Throughout the ADI model and then concatenated to the attention context at each layer. So we just saw that that's indeed the case. And let me just continue here. We don't need to go through the this is the implementation of a transformer. I'm going to skip all of this, assuming you already know how this thing works. And that's it. So basically we saw how we are taking the textual information stored in this dictionary here, temporal information and passing all of that through UNET and forming the model output model. This is again going to contain the Epsilon. So the noise plus the covariance matrix is going to be one six, sixty four, sixty four, six, because we have three channels predicting the Epsilon and three channels predicting Sigma or the covariance matrix. OK, so next up. So everything else remains the same as in the first DDPM, the coding video I've done. The only difference here was the textual basically conditioning. OK, so we split the output into two parts, the Epsilon and the model var values. Again, there is this V vector that's actually predicted and then we form the model variance by by using these expressions here. Not as important. We saw that multiple times in previous videos. And finally, we predict. So we now pass the noisy image, the time steps and the output. And then we just predict the X zero here. Just some clamping. And finally, we predict the model mean. So everything here remains the same as in the original, as in the improved DDPM paper. So I'm going to skip all of this. And now this is where the difference will will will kick in. So here we just a sample from a normal distribution. We create this non zero mask. 
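A simplified single-head version of that "concatenate the text keys and values into the attention context" trick might look like this; the repo's QKVAttention does the same thing with multiple heads and a different tensor layout:

```python
import torch

def attention_with_text_context(q_img, k_img, v_img, k_txt, v_txt):
    """Image self-attention whose context is extended with caption tokens.

    q_img, k_img, v_img: (B, n_img_tokens, d) from the image feature map
    k_txt, v_txt:        (B, n_txt_tokens, d) projected from the caption tokens
    """
    k = torch.cat([k_txt, k_img], dim=1)      # text keys joined with image keys
    v = torch.cat([v_txt, v_img], dim=1)      # same for the values
    attn = torch.softmax(q_img @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                            # (B, n_img_tokens, d)

B, d = 1, 64
out = attention_with_text_context(torch.randn(B, 256, d), torch.randn(B, 256, d),
                                  torch.randn(B, 256, d), torch.randn(B, 128, d),
                                  torch.randn(B, 128, d))
print(out.shape)   # torch.Size([1, 256, 64])
```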
And this is the interesting part. So we now want to see how the conditioning is done using the clip model. Again, this is very similar in structure to the guided DDPM that I've covered in my previous video. So let's step inside here. I'm going to hit F 10. We are going to form the gradient and then we're just going to shift the predicted mean by the as you can see here. So Sigma, the covariance times the gradient. OK, and the grading itself already contains the scaling coefficient. So that's why we don't see it here. OK, let's hit F 10. And here are we in the conditioning function. So we previously saw how we created this textual embedding. Now we're going to do the fun part. So we're going to take the input noisy image, detach it from the computational graph and set requires graded true because we want to compute the gradients for this particular image. We're going to embed this image using the image pathway of the clip. So such that we end up with embedding image embedding here. So that's again, let me show you what we're doing there. We're doing the drawing I showed you a couple of minutes ago. So I think it's somewhere here. So we are just now forming this red embedding vector. And now we're going to basically, well, find the gradients of the product between the text embedding vector and the image embedding vector. OK, let's go back here. Let's step over image embeddings. So nothing. Well, it's again just walking me through the I have a bunch of break points here, but we can kind of ignore all of those safely. And what's this? My God, my God, I'm hitting every single I'm hitting every single break point here. Nothing interesting here. Nothing interesting here. So we form the embedding vector. We normalize it. And here we are. So we have the embedding vector for the image. And now we form the product and we additionally have this logic scale. So we form the loss and we just then apply. We just find the gradient of that loss and that gradient is now going to be returned. We just multiply with the gradient scale. So it's going to be three because that's the parameter we set in the main script. And we just return that gradient. But everything. So basically this is what we saw in the paper. And this is the only difference compared to your regular classifier guidance we saw in the previous video on diffusion models beat GANs on image synthesis paper. So this gradient is going to move our image towards the part of the space where that image resembles the prompt much more closely compared to if we were to just do it unconditionally. OK, let's step over here. We have the gradients. We find the new mean. And that's it, guys. Then we just sample from that new mean. Again, we have this this standard expression. We just have mean plus sigma and we just multiply by the by the noise here. This is how we sample from the Gaussian. And that's it. And this now just keeps on repeating. We're now going to repeat the same process for the next time step, which is going to be 98 and all the way to zero. And that's pretty much it. OK, guys, next up, I want to show you how the classifier free guidance works. So let me run the script text to image and let's start. A lot of the structure is shared between this script and the previous one. So I'll just be skipping even more than before. So we're creating the model and the diffusion here. Let me disable the break points such that we are just stepping over all of this. 
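Putting the CLIP guidance pieces together, a sketch of such a conditioning function could look as follows; clip_image_encoder and z_text are placeholders for the noise-aware CLIP image pathway and the caption embedding computed earlier, and the returned gradient already carries the guidance scale, just as observed in the debugger:

```python
import torch

def make_clip_cond_fn(clip_image_encoder, z_text, guidance_scale=3.0):
    """Build a cond_fn that nudges the predicted mean toward the caption.

    clip_image_encoder: callable mapping a (noised) image batch and timestep to
                        L2-normalised CLIP image embeddings
    z_text:             (B, d) L2-normalised CLIP embedding of the caption
    """
    def cond_fn(x, t, **kwargs):
        with torch.enable_grad():
            x_var = x.detach().requires_grad_(True)
            z_img = clip_image_encoder(x_var, t)     # image pathway of CLIP
            loss = (z_img * z_text).sum()            # image-text dot product
            grad = torch.autograd.grad(loss, x_var)[0]
        return guidance_scale * grad                 # later applied as mean + sigma * grad
    return cond_fn
```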
So we have modeling, diffusion, blah, blah, blah, shifting into GPU, loading the checkpoint, counting the number of parameters. Then we are forming the up sampling model here, which we don't care about. So we're going to skip all of that until we hit the prompt here and all painting of accordi. And now let's start. So same hyper parameters as before, same encoding of the tokens and then adding the padding and the mask. And this is where it's now starting to become a bit different. This is where the structure starts to deviate from the previous script. So now we encode the null prompt. So that's the empty prompt here, as you can see. And that's the part. I'll show you again in the paper. So let's find the OK, here it is. Here's the equation. So we have to pass the empty prompt and then we have to pass captions. So that means we'll effectively have two passes through the diffusion model. Or in this particular implementation, what I've done is they've sacrificed some memory for a single pass for time. And so they basically do a single pass, as we'll soon see, and they compute both of these expressions here. So both conditional caption as well as as on the empty well, null caption or empty caption. OK, so let's go back to the code. Here we can see that we now tokenize the empty empty sequence and we end up with tokens. We just contain pretty much just the I think end of tokens. So all of these, as you can see here, just contain the padding token. I think this is the padding token. Let me double check. That's indeed the case. So if I were to enter here, so this padded tokens, let me just find that thing. I think it's going to be somewhere in here. So there is a tokenizer part and we care about the BPE file. And here it is. So you can see here. Because we have passed empty tokens, we're basically going to pad the end token for the full context. Basically, that's what happens. We have end token as the result here. That's that's what these this fifty thousand two fifty six represents. It's just a pet token. OK. Now we're going to store both the tokens as well as the this unconditional tokens. And we're also going to store mask as well as the unconditional mask. So this mask is kind of boring. I assume it's going to have all forces. Yeah, as you could have expected. And now let's continue. So that's the key word arguments that we're going to use to condition our model during the four pass. OK, so here we have an interesting function. So let me now enable all of the breakpoints. We'll enter this one a bit later. This is where the whole magic happens when it comes to computing the classifier free guidance. And again, we have the P sample loop standard stuff. We want to generate an image that's going to be I think this is going to be sixty four sixty four resolution. Yep, that's the case. And let's continue here. We are passing the conditioning information here. And finally, you can see here that we are extracting batch size, which is one, whereas the full batch size here is going to be two. And the reason is this is the optimization they've done such that they have a single four pass and they end up with the with the result, even though they have to compute the and the output both for the empty token string as well as for the actual prompt we're using. OK, let's now dig into the code again. I'm skipping some parts because it's just boilerplate, some for loops and stuff. So we are generating the noisy image we're sampling from the I guess normal distribution here. We are forming the indices again. 
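The kwargs layout for that single-forward-pass trick boils down to stacking the caption tokens on top of the empty-caption tokens; here is a sketch, with the tokenizer calls as stand-ins for whatever tokenizer is actually used:

```python
import torch

def build_cfg_kwargs(tokenizer, prompt, text_ctx=128, device="cpu"):
    """Stack the caption tokens and the empty-caption tokens into one batch.

    tokenizer.encode / tokenizer.padded_tokens_and_mask are assumed helpers;
    the point is only the [conditioned; unconditioned] layout of the batch.
    """
    tokens, mask = tokenizer.padded_tokens_and_mask(tokenizer.encode(prompt), text_ctx)
    uncond_tokens, uncond_mask = tokenizer.padded_tokens_and_mask([], text_ctx)
    return dict(
        tokens=torch.tensor([tokens, uncond_tokens], device=device),
        mask=torch.tensor([mask, uncond_mask], dtype=torch.bool, device=device),
    )
```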
We have how many steps? Hundred. We are logging the progress. Nothing fancy there. And now we're starting the actual diffusion reverse process. OK, so here is the P sample function. This is the important function. Really, nothing has changed. So the model is going to be different here. That's the one that was defined in the main script. We're going to hit it a bit later. So you're going to see it otherwise. Nothing interesting here. No conditioning here because we are doing the classifier free guidance. And this is the conditional information. OK, we're going to see all of that in a second. Let me step over all of this. And this is where the magic of the classifier free guidance is going to kick in. So here we hit the model function that we define in the main script here. So let's see what we're doing. So we take the input images and we just grab the first part. So again, remember that we have two images here. So let me show you the shape. So we have two. So we're going to take the first part only. Because only the first part is going to contain the actual information of interest. And that's again the consequence of the way of them wanted to have a single forward pass, but like sacrifice the memory. And that's why there is this weird burden that we as someone analyzing this code base have to kind of suffer because of that optimization trick. But like it's fairly simple. Basically, you just take the first part of the of the image. We we know just as you can see here, we form this combined variable, which is just basically copy paste. We copy paste that first portion. And now we pass that basically together with time steps and together with conditioning information. We pass all of that through the unit with the transformer. So I'm going to disable the break points and just skip over this whole step over this this this function. Because we already saw this in the first script. We are basically just integrating the temporal information, integrating the textual information into the attention context, etc, etc. And as the output, we get the epsilons and the sigmas for both images. So let's see the model output. Basically, it should be 264, 64, because we have two images, six because we have both the epsilon and as well as the covariance matrix, so the sigma. And now we're going to split those, as you can see here, into epsilons and and and well, they call it rest. And then finally, we do another splitting into conditional epsilon and unconditional epsilon, because as you recall, the first image is going to be conditioned here with the conditional information. The second one is just going to have the tokens for the for the empty prompt. That's why we have these two variables here. So these two, again, are just are just these two variables here. And the epsilon conditioned on the empty prompt as well as the epsilon conditioned on the on the caption on the prompt. OK, let's go back to the code. And now here's the step. Here's the actual formula. We have the unconditional epsilon. We add up the scaled version of this difference. And that's it. That's the whole magic. And now we just again do this. Well, as I said, it's a consequence of justice of the optimization they've implemented. And then we return all of that. And that's it. Everything else remains pretty much the same. We split into we form the model variance. We form the let's see where there is something different here. We just predict the X start or the X zero, which is the I guess the newest image. And then we do some pre-processing. 
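And the wrapper that consumes that layout, again written as a sketch of the idea rather than a verbatim copy of the script:

```python
import torch

def make_cfg_model_fn(model, guidance_scale=3.0):
    """Wrap the diffusion UNet for classifier-free guidance.

    The batch is assumed to be laid out as [conditioned half; unconditioned half],
    matching the kwargs built above. Only epsilon is mixed; the predicted
    variance channels are passed through untouched.
    """
    def model_fn(x_t, ts, **kwargs):
        half = x_t[: len(x_t) // 2]                     # the two halves are identical
        combined = torch.cat([half, half], dim=0)
        model_out = model(combined, ts, **kwargs)       # one forward pass for both
        eps, rest = model_out[:, :3], model_out[:, 3:]  # epsilon vs. variance channels
        cond_eps, uncond_eps = torch.split(eps, len(eps) // 2, dim=0)
        half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
        eps = torch.cat([half_eps, half_eps], dim=0)
        return torch.cat([eps, rest], dim=1)
    return model_fn
```

Because both halves of the batch end up carrying the same half_eps, only the first half needs to be read out after sampling, which is exactly the "only the zeroth index of the batch matters" behaviour seen while stepping through the code.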
So nothing fancy there again will only care. So this model output contains copy pasted same results. So we only care. And let me kind of show you that's indeed the case. So the shape of this thing is two, three, sixty four, sixty four. If I were to take all zeros, so that means I'm going to just grab a single element from the first batch. If I were to take the second, the same element from the second batch, you can see they are the same because that's how, again, the optimization they're doing is playing out. OK, so all in all, at all times, only the zeros image contains the results of interest. The second one is just basically a consequence of the optimization. Now they form the mean and then they return all of these variables back. And that's it. Everything else remains the same. We just we don't have the conditioning now and we just form the final sample by sampling from this distribution sample. Here is still two, two, three, sixty four, sixty four. But only the first part contains the sort of zeros index of the batch dimension contains the actual information that we care about. And that's it, guys. I'm going to now go back into the script. I'm going to disable all of the breakpoints and I'm going to set a breakpoint to show images here. Now let's hit F5 and let's see how this is going to be generated. You can see here the progress is being tracked using the TQDM library, which is so cool. And now let me show you the image. So if I click F10 here, let me show you the image. So we have a corgi, as you can see here. So what was the prompt? Our prompt was, let me just find it, an oil painting of a corgi. And you can see indeed this is a low resolution version of that particular prompt because the models are already pre-trained for us. That's why we are getting these cool results. Keep in mind that OpenAI has not open source the training code, so we can only play with the sampling and generation in general and not with training. Let me quickly show you what's going to happen after we do the up sampling. I'm going to set another breakpoint here. Let me hit F5 here. And now we're generating the up sampled version. So if I hit F10 here, let me just see whether that's going to exit the program or not. Okay, unfortunately that exited the whole program, so I'm going to have to set something here like print blah blah, for example. And now let's hit run again. And we're going to sample both the corgi as well as the up sampled version of the corgi, although it may take some time. Cool, guys. That's pretty much it. I know there is a lot of information in this video. Do let me know if you have any feedback. Did you find this one useful or not? All of that's going to help me modify my future videos. And hopefully you learned something from this. Hopefully you like this combination of using paper, using code. There is a lot of information, but I think it might be useful for all of you. So here's the corgi again. This is the low res version. And now we're going to see the up sampled version. So here is the up sampled version. So that's it. I'm going to quickly also show you how the inpainting script looks like. Let me move it here. Let me try and run it. Let me see where the show images are. So I'm going to set a break point there and then there is the up sampling part. But yeah, I think we can just set a couple of break points here to visualize what's going on. So let me hit run and let's see what's going on. So first things first after the loading has happened. 
So again, the logic is actually not that complicated. It's a bit harder to form a mental model for how this inpainting works, but I encourage you to check out the code at your own pace and understand what's going on. So here is the image we are trying to impaint. So we basically have this grass image and then we masked all of the rows here. So I'm going to exit here. Let's hit F5. Now we're going to generate impaint this particular image. And let me now hit this. We're going to hit this break point here after 100 steps. So let me show you what we get as a result. We get a corgi on this on this on this on the grass. And that's because the prompt we are using is let me just find a prompt a corgi in a field, which makes sense. And finally, if I were to hit well F5 here, well, we'll just get the super like the up sampled version. But that's pretty much it. Again, guys, do let me know whether you find this video interesting and useful. Any feedback is super appreciated. And do subscribe to this channel if you liked this video. Share it out with your friends. And until next time, bye bye.
[{"start": 0.0, "end": 7.26, "text": " What's cracking guys in this video? I'm continuing on with the diffusion model tracking the machine learning coding series and this one"}, {"start": 7.26, "end": 11.4, "text": " I'll be covering glide, so that's a model that was a precursor to delete to"}, {"start": 11.96, "end": 16.78, "text": " Coming from open AI and I've previously covered the paper in great depth"}, {"start": 16.78, "end": 20.28, "text": " So if you want to have a deep dive into the paper itself, you can check it out"}, {"start": 20.28, "end": 22.28, "text": " I'm gonna link it somewhere here"}, {"start": 22.36, "end": 26.8, "text": " But in this video, I'll obviously be focusing on analyzing understanding the code"}, {"start": 26.8, "end": 32.36, "text": " I have three Jupyter notebooks prepared for you guys. And so we'll be going through those"}, {"start": 33.04, "end": 40.44, "text": " So what I've done is as always adjust I've downloaded I've cloned the repo here. You can find it"}, {"start": 40.44, "end": 43.040000000000006, "text": " I'll kind of link it in the video description"}, {"start": 43.6, "end": 46.480000000000004, "text": " But before we go there, I first want to"}, {"start": 47.2, "end": 53.28, "text": " Walk you through the paper as usually just to give you all of the necessary context"}, {"start": 53.28, "end": 60.36, "text": " Hopefully that's gonna be like 10 15 minutes quick overview compared to the deep dive. I've already previously made so without further ado"}, {"start": 60.36, "end": 63.68, "text": " let's start understanding how glide works and"}, {"start": 64.72, "end": 68.6, "text": " Then we'll we'll start doing the coding part cool"}, {"start": 69.28, "end": 70.4, "text": " so"}, {"start": 70.4, "end": 75.6, "text": " First things first as I said glide is a precursor to like a Dali model"}, {"start": 75.6, "end": 82.6, "text": " So they mentioned here when evaluated by human judges our samples are preferred to those from Dali. So this is the version 1"}, {"start": 83.47999999999999, "end": 89.97999999999999, "text": " 87% of the time when evaluated for photorealism and 69% of the time when evaluated for caption similarity"}, {"start": 90.32, "end": 93.91999999999999, "text": " The Dali version 1 actually did not even use diffusion models"}, {"start": 94.39999999999999, "end": 102.08, "text": " But the lead to did and it built directly up on glide. So yeah, we're slowly building our way towards more complex models"}, {"start": 102.08, "end": 109.64, "text": " Cool, you can see the some of the images here again here instead of compared to my previous video where I showed you the guided"}, {"start": 109.8, "end": 112.72, "text": " DDPM a model where you can could condition"}, {"start": 113.24, "end": 118.6, "text": " But using only like a class and for example image net class here"}, {"start": 119.0, "end": 121.88, "text": " Glide can actually cope with text with natural language"}, {"start": 121.88, "end": 128.88, "text": " So that's much much cooler and you can see some very cool examples here like a corgi varying a red bowtie and a purple"}, {"start": 128.88, "end": 131.12, "text": " party hat you can see that like you can"}, {"start": 131.12, "end": 133.04, "text": " composite"}, {"start": 133.04, "end": 137.92000000000002, "text": " different concepts together in a plausible way you can bind attributes such as a"}, {"start": 138.44, "end": 143.56, "text": " Purple party hat to to certain objects and red bowtie. 
So it's a fairly"}, {"start": 144.36, "end": 150.96, "text": " Powerful model and as I said, it was a precursor to delete you and we all know what delete you can now make"}, {"start": 151.56, "end": 155.84, "text": " Aside from image generation. It can also do image in painting"}, {"start": 155.84, "end": 162.28, "text": " You can see here you put a mask here and then you set some natural language prompt here zebras roaming in the field and"}, {"start": 162.48000000000002, "end": 167.6, "text": " Outcomes an image that's modified similarly here a girl hugging a corgi and a pedestal"}, {"start": 167.6, "end": 171.48000000000002, "text": " You can see how this dog here is is swapped by a corgi"}, {"start": 171.8, "end": 174.24, "text": " Etc etc. You can see a bunch of examples there"}, {"start": 174.24, "end": 178.8, "text": " So the model itself stands for guided language to image diffusion for generation and editing"}, {"start": 178.8, "end": 183.92000000000002, "text": " So generation is the the part that we all know about and anything is the in painting part cool"}, {"start": 183.92, "end": 187.44, "text": " so there is not this background about DDP M's are gonna skip all of that because"}, {"start": 188.16, "end": 192.83999999999997, "text": " Hopefully you watch some of my previous videos. This one is gonna build heavily upon those"}, {"start": 192.83999999999997, "end": 195.29999999999998, "text": " So do check out the whole series if you haven't so far"}, {"start": 195.72, "end": 201.6, "text": " You can also iteratively do this image in painting so you can see here how slowly they are adding"}, {"start": 201.6, "end": 206.35999999999999, "text": " They first had the corgi image and they add the table here then they add the ways here, etc. Etc"}, {"start": 206.35999999999999, "end": 209.35999999999999, "text": " So you can cannot iteratively improve your images"}, {"start": 209.36, "end": 216.36, "text": " Okay, so the parts which I want to focus on in this video are mostly about clip guidance and"}, {"start": 216.36, "end": 222.36, "text": " Classifier free guidance so I want to focus on those and I'll probably if I have enough time focus on in painting as well"}, {"start": 222.36, "end": 229.36, "text": " So without further ado, let's first introduce the classifier free guidance. So in the previous video, we saw how we can do classifier"}, {"start": 229.36, "end": 232.36, "text": " based guidance where you use a pre trained"}, {"start": 232.36, "end": 235.36, "text": " Well classifier and you use its gradients"}, {"start": 235.36, "end": 242.36, "text": " So the classifier was trained on noisy images. 
That's an important detail and then you just use the gradients from that classifier to kind of guide"}, {"start": 242.36, "end": 244.36, "text": " your your"}, {"start": 244.36, "end": 247.36, "text": " class conditional diffusion generation"}, {"start": 247.36, "end": 256.36, "text": " Here the cool thing about classifier free guidance is you do not have to train a separate classifier on pre trained on noisy images"}, {"start": 256.36, "end": 259.36, "text": " It's it's much easier and and that's better"}, {"start": 259.36, "end": 262.36, "text": " Okay, so let me try to briefly explain how classifier free guidance works"}, {"start": 262.36, "end": 268.36, "text": " Again, as I just mentioned, this method does not require a separate classifier"}, {"start": 268.36, "end": 272.36, "text": " That's a big plus for for this method compared to the classifier guidance"}, {"start": 272.36, "end": 282.36, "text": " And then they say here so for classifier free guidance the label in a class conditional diffusion model is replaced with a null label with a fixed probability during training"}, {"start": 282.36, "end": 289.36, "text": " This is I think usually 10 or 20 percent during sampling the output of the model is extrapolated further in the direction of the model"}, {"start": 289.36, "end": 299.36, "text": " The output of the model is extrapolated further in the direction of the class conditional model and away from the null conditional model like so"}, {"start": 299.36, "end": 306.36, "text": " And you can see here the expression we basically have the one where we condition on those on the on the null label"}, {"start": 306.36, "end": 322.36, "text": " And then we just add this this basically this difference between the class conditional and the null conditional and we just use this guidance scale so the parameter s to well to to to modulate how much we want to go in in that direction"}, {"start": 322.36, "end": 332.36, "text": " And how I think about this in my head the mental model I have is basically you have this thing here tells you gives you the the noise in the unconditional case"}, {"start": 332.36, "end": 341.36, "text": " So it's going to give you some noise in some is going to be some point in the end dimensional space where and is basically the number of dimensions in your image"}, {"start": 341.36, "end": 358.36, "text": " So the resolution times the number of channels and so now once you have that point you then plot this point of where the model thinks the class conditional noise should be"}, {"start": 358.36, "end": 367.36, "text": " So if it outputs point like here then you can form the vector by just kind of subtracting one from another and you get something like this"}, {"start": 367.36, "end": 377.36, "text": " And then what what this final expression is going to be about is OK the model outputs the null conditioned noise and it's here"}, {"start": 377.36, "end": 384.36, "text": " And we know that if you want to move towards that particular class on upon which we are conditioning we need to move in this direction"}, {"start": 384.36, "end": 395.36, "text": " And because of that we just add that vector and we modulated with the with the scalar s to basically well to to start generating images that are from that particular class"}, {"start": 395.36, "end": 406.36, "text": " So that's the I literally have a geometric interpretation now whether this is completely correct or not I don't know but like I think I think it does sound like a like a good mental model"}, {"start": 
406.36, "end": 416.36, "text": " In any case let's now see how we can replace the simple labels by the natural language because that's what we care about when it comes to to glide"}, {"start": 416.36, "end": 427.36, "text": " So what what they do is so to implement classifier free guidance with generic text prompts we sometimes replace text captions with an empty sequence which we also refer to as this"}, {"start": 427.36, "end": 440.36, "text": " Well zero crossed I guess during the training we then guide towards the caption using the modified prediction blah blah blah OK so you can see that everything remains the same except that we are now conditioning on caption"}, {"start": 440.36, "end": 461.36, "text": " Like and not on the simple label and that's pretty much it so the second thing they do is they use clip for guidance because here you cannot have like a thousand classes or whatnot because languages obviously has infinite possibilities of compositing"}, {"start": 461.36, "end": 476.36, "text": " And yeah there is infinite number of sentences I guess so you have to use something like clip and the only difference compared to the classifier guidance we saw in the previous video is that here you're looking for a gradient of the product between the caption and the image"}, {"start": 476.36, "end": 491.36, "text": " So let me read this for you perturbing the so when applied to the future models these techniques are similar to classifier guidance perturbing the reverse process mean along the gradient of the text image product with respect to the image OK"}, {"start": 491.36, "end": 511.36, "text": " So what that means is basically if you recall how we how we've done the classifier guidance we output like a mean and then on top of that we shift that mean by the scale version of the covariance matrix times the gradient of our classifier"}, {"start": 511.36, "end": 524.36, "text": " So now the only thing that differs in this in this clip guidance is that this these gradients here because we are now dealing with text and not with with classes are going to be this thing here"}, {"start": 524.36, "end": 547.36, "text": " So basically you encode a certain caption and you want to make sure you want to find you want to learn how to tweak the input image such that such that the product between the image feature from that image and the text embedding here is maximized"}, {"start": 547.36, "end": 567.36, "text": " And that means that the image is belonging more and more to that class or differently said it's becoming better and better described by that particular caption So that's that's all like that's a small tweak that we have to make in order to to make the clip thing and the text conditioning to work"}, {"start": 567.36, "end": 582.36, "text": " Again let me just quickly draw a picture here it might be a bit easier to understand so we have an input image here and we have our clip model here OK So here's clip"}, {"start": 582.36, "end": 607.36, "text": " So now we now have some type of a prompt like I don't know like maybe something simple like a dog OK So we encode that thing we pass it through the clip again we pass it through clip here so this same model basically and outcomes outcomes some some in there here so this is going to be an embedding that corresponds to this prompt here"}, {"start": 607.36, "end": 627.36, "text": " Next up we pass the image through the clip and we get a different embedding here OK This is going to be the image embedding let me maybe choose a different color just for 
the sake of like not to come in order not to confuse this So here we have a prompt for the"}, {"start": 627.36, "end": 648.36, "text": " So we have the text embedding and here we have the image embedding and now the gradients are going to tell us how we should tweak the image how should we tweak the image such that this vector is much closer in the image text embedding space OK Hopefully that was clear enough"}, {"start": 648.36, "end": 668.36, "text": " Now let me briefly tell you a couple of things about their training so they mentioned here for our main experiments we train a 3.5 billion parameter text conditional diffusion model at 64 times 64 resolution and another 1.5 billion parameter text conditional up sampling diffusion model to increase the resolution to 256 times 256"}, {"start": 668.36, "end": 688.36, "text": " For clip guidance we also train a noise aware 64 times 64 VAT clip model OK So the only difference compared to the previous video on the guided DDPMs is basically here we have additional transformer that's going to be used to text condition our diffusion model instead of just doing a simple class conditioning"}, {"start": 688.36, "end": 717.36, "text": " It's not that like big of a difference to be honest so a couple more details here to condition on the text we first encoded into sequence of key tokens and feed these tokens into transformer model OK The output of this transformer is used in two ways so first the final token embedding is used in place of a class embedding in the ADM model so that's the model from the previous paper again the paper is called I think diffusion models beat GANs on image synthesis"}, {"start": 717.36, "end": 739.36, "text": " And second the last layer of token embeddings a sequence of key feature vectors is separately projected to the dimensionality of each attention layer throughout the ADM model so the fusion model and then concatenated to the attention context at each layer OK So that's how we integrate this textual conditioning information into the ADM into the diffusion model"}, {"start": 739.36, "end": 769.36, "text": " That's an additional detail I want you to understand OK Finally here they mentioned how like let me read this for you so after the initial training run we fine tuned our base models to support unconditional image generation this training procedure is exactly like pre training except 20% of token of text token sequences are replaced with the empty sequence this way the model retains its ability to generate text conditional outputs but can also generate images unconditionally and we need this ability so that we can"}, {"start": 769.36, "end": 791.36, "text": " such that we can well use the the the basically the classifier free guidance effectively OK guys that's pretty much it I'm going to skip everything else again the results are basically better when we use classifier free guidance compared to clip guidance so that's one of the reasons this is so cool"}, {"start": 791.36, "end": 811.36, "text": " I'm gonna skip all of this this is not that vital and finally worth mentioning the only model that they've open source is the so called glide filtered where they say here we filtered out training images containing people to reduce the capabilities of the model in many people sent you problematic use cases"}, {"start": 811.36, "end": 840.36, "text": " the model is also way smaller compared to their biggest version and they also filter out some other images such as like hate symbols et cetera et cetera so just worth having that 
in mind finally the model such as glide was still struggling a lot with with certain well let's call it weird phrases whereas the lead to became much better with these still not perfect but but better than then then what glide could achieve"}, {"start": 840.36, "end": 860.36, "text": " here we can see an example a mouse hunting a lion and we can see that the model does does not know that there does not understand the direction so here it appears as if the lion was hunting the mouse and not the other way around so it understands the association but not the the the causal relationship here cool"}, {"start": 860.36, "end": 887.36, "text": " okay that's pretty much it now let's go back to decode let's understand how this looks in actual code okay I'm gonna first start with this simple clip guided script so we're going to use the clip the pre trained clip model on on that's like noise up there and we're going to use it to to guide our generation so let me run this thing"}, {"start": 887.36, "end": 909.36, "text": " and again as usually I'm going to skip over everything that's not core to understanding what I'm trying to explain in a particular script here we want to understand how clip is used to guide the generation process so we just grab the devices and I'm going to have I have a GPU so it's going to be a GPU here"}, {"start": 909.36, "end": 938.36, "text": " some options default options for the model and diffusion we saw this in the first video like they're obviously are reusing a lot of the code, which is a good thing. And so I'm going to scheme across all of this. So time a step through spacing. So what it does is if the model was trained on like 1000 steps, you can now instead during the sampling procedure have only hundred steps and still get good results, good high quality samples."}, {"start": 938.36, "end": 961.36, "text": " Then we're going to form the model and the diffusion here. So model is basically a unit model and diffusion is contains all the constants that are necessary for the diffusion. Okay, so we'll need to step into this part because the model is a bit different compared to previous papers. So the model now contains also the transformer because of the text conditioning."}, {"start": 961.36, "end": 984.36, "text": " So let's see how that looks like diffusion itself looks pretty much the same as in previous videos. So I'm going to skip over that. But we're going to jump into this create model function. Let me just hit F5 and let's get into the function itself. Okay, so a couple of details here depending on the image size."}, {"start": 984.36, "end": 1003.36, "text": " We're going to set up the unit in a particular way number number of channels is going to be different compared to depending on the image resolution. Again, we've seen this multiple times and they just have these attention blocks for particular resolutions. Not that important."}, {"start": 1003.36, "end": 1024.3600000000001, "text": " And finally, we're going to have this text to image unit. So again, it's just a simple unit plus the transformer. That's everything that Glide changed compared to previous papers. Okay, now we're going to form the text to image unit here. There is a lot of parameters. I don't want to explain every single one of those."}, {"start": 1024.36, "end": 1044.36, "text": " There is also the encoder part. I'm going to quickly show you what this is. Basically, they have a pre trained BPE encoder that are going that they are going to use. And I explained this in the open AI clip video. 
So I'm going to link it somewhere here. You can check it out if you want to understand a bit better what's going on here."}, {"start": 1044.36, "end": 1059.36, "text": " How the actual tokenization works, how that magic looks like. Then you can check out that video. But here I'm just going to skip all of that and hit that five to enter the constructor of the text to image unit. Okay, so let's see what's different here."}, {"start": 1059.36, "end": 1076.36, "text": " We call the constructor of the super class and that's going to be just a simple unit model. I'm not going to show you the unit because we already saw that in previous videos. As I said, I have to kind of depend on previous videos because otherwise this would be three or four hours long video."}, {"start": 1076.36, "end": 1095.36, "text": " I kid you not. Okay, so we construct the we construct the unit model here. I'm going to skip over everything here. I'm just going to hit F5 and we have our unit constructed. Let's go outside here."}, {"start": 1095.36, "end": 1112.36, "text": " And now, so this is the different part compared to previous papers. They have a transformer. So they form a transformer that has a particular context 128 tokens, certain width 512 dimensions, number of layers 16, etc, etc."}, {"start": 1112.36, "end": 1127.36, "text": " Then they have they use some layer norm, not that important. They create some token embedding layer here. So this one is basically going to embed our tokens into certain well into a certain vector space."}, {"start": 1127.36, "end": 1136.36, "text": " And then we also have positional embeddings. We don't need more than 128 because that's going to be the max context that this transformer is going to use."}, {"start": 1136.36, "end": 1147.36, "text": " And we finally have this transformer projection linear layer. We're going to see how that one is used a bit later. Okay, finally, there is this padding embedding. We'll see how that all fits together a bit later."}, {"start": 1147.36, "end": 1159.36, "text": " Okay, so cash stuff, not that vital. I'm going to skip all of that. And finally, I'm going to skip the diffusion model because we've seen that already. I'll have to skip this."}, {"start": 1159.36, "end": 1173.36, "text": " And here we are. So again, we have a unit with transformer and we have our diffusion model here. So basically those two together form form a diffusion model."}, {"start": 1173.36, "end": 1189.36, "text": " Okay, let's go back into the main script. We just do some some boilerplate for machine learning, just in conversion to FP 16. We put the model onto GPU will load the actual weights, and then we print the number of parameters here."}, {"start": 1189.36, "end": 1203.36, "text": " Okay, we repeat the same thing for the up sampling model. I'm going to skip all of this because we will not be focusing on the up sampling model. So I'm going to kind of skip over all of this until we get back into the main script."}, {"start": 1203.36, "end": 1211.36, "text": " I should have toggled off the break points. But yeah, I guess next time."}, {"start": 1211.36, "end": 1212.36, "text": " Cool."}, {"start": 1212.36, "end": 1226.36, "text": " I think we are there almost. Basically, okay, so we created the up sampling model. Now we create the clip model. Okay, let's see how clip looks like. 
So clip, as you can see here, I'm going to just step into here."}, {"start": 1226.36, "end": 1246.36, "text": " We grab some default config path so they have this config YAML file that specifies certain properties of our model. So of the clip model so you can see it here again too many details to cover. I'm just going to continue on execution in the in the create clip function."}, {"start": 1246.36, "end": 1260.36, "text": " So here we are. We create a simple tokenizer so clip will have a different a bit different tokenizer compared to the diffusion model. Again, we can treat it as a black box for all practical purposes."}, {"start": 1260.36, "end": 1261.36, "text": " Okay."}, {"start": 1261.36, "end": 1276.36, "text": " Now, as I already said, I covered clip in one of the previous videos. If you really want to understand how the code works and how clip works to check it out or or also check out my paper video on clip. I've done that as well. I can link those somewhere here."}, {"start": 1276.36, "end": 1288.36, "text": " We can briefly jump into the text encoder. Let me see whether I have basically. Okay, so let's kind of let me let me quickly show you how this text encoder is going to look like."}, {"start": 1288.36, "end": 1306.36, "text": " I'm going to skip some parts so they just form the causal mask here because well does that make sense. Yeah, it does because we are here dealing with with text. But then again, we are not generating text. So I guess you could use bi-directional you could use non causal masks as well here."}, {"start": 1306.36, "end": 1320.36, "text": " But in any case, let me get a scheme over all of this. So now they start stacking these blocks. The first block is just going to be a text embedding block, which means you're going to take tokens and then you're going to embed them into certain space."}, {"start": 1320.36, "end": 1331.36, "text": " I think I'm going to show you this in the in the I'm going to just put a well, I don't want to even put a breakpoint here. It's just a simple embedding layer. I'm going to skip that."}, {"start": 1331.36, "end": 1345.36, "text": " And then they form they had a bunch of transformer blocks and then at the end they just extract the basically embedding that's above the last token in the sequence."}, {"start": 1345.36, "end": 1354.36, "text": " So that's how clip works. I'm going to have to skip all of that and let's continue. So we have the text encoders. Basically, it's a simple transformer."}, {"start": 1354.36, "end": 1365.36, "text": " It first embeds the tokens, then it processes them using the transformer blocks. And then at the end, it just extracts, as I said, the last embedding. That's it."}, {"start": 1365.36, "end": 1380.36, "text": " Next up, we have image encoder. Image encoder is VAT. So they're using vision transformer. Again, I'll have to just treat this as a black box. Let me see whether there's something interesting here."}, {"start": 1380.36, "end": 1388.36, "text": " OK, this time they're using a non causal mask, as you can see here, which makes sense for images."}, {"start": 1388.36, "end": 1401.36, "text": " Let's see what else. So again, similar structure. They form these blocks. The first one is going to be image embedding. So we're going to basically do the patchify like layer of the VAT, which means you're going to treat image as a bunch of patches."}, {"start": 1401.36, "end": 1410.36, "text": " And then you're going to embed those patches into certain into certain well, vector space. 
And then you're going to just treat those as tokens."}, {"start": 1410.36, "end": 1418.36, "text": " And then it's pretty much the same as if you were to just use transformer. And you can see here they do have a bunch of transformer blocks."}, {"start": 1418.36, "end": 1426.36, "text": " And at the end, they have image feature extractor that's going to basically extract. Let me just see what's going on there."}, {"start": 1426.36, "end": 1434.36, "text": " OK, we're going to I'm going to put a like a simple break point in the four functions. We're going to see that a bit later once we start using clip actually."}, {"start": 1434.36, "end": 1444.36, "text": " OK, so that's it. We have the image, you go to we have the text encoder. We form this logic scale. That's a learnable parameter."}, {"start": 1444.36, "end": 1453.36, "text": " We might see how it's used a bit later. And finally, they form the actual clip model. So he has a bunch of helper functions."}, {"start": 1453.36, "end": 1460.36, "text": " What we care about is, well, for now, the I can just keep the function. That's not that important."}, {"start": 1460.36, "end": 1468.36, "text": " We're going to see everything a bit later during the actual four pass. OK, so now we just load the checkpoints for the image encoder."}, {"start": 1468.36, "end": 1474.36, "text": " We load the checkpoint for the text encoder. This is where the actual fun starts."}, {"start": 1474.36, "end": 1478.36, "text": " We're going to take the prompt and oil painting of a corgi."}, {"start": 1478.36, "end": 1487.36, "text": " We're going to encode that and we're going to generate an image that's going to have that's going to be well described by this prompt."}, {"start": 1487.36, "end": 1495.36, "text": " So the guidance scale is going to be three. So that's the S parameter that we use to multiply the gradients coming from the clip model."}, {"start": 1495.36, "end": 1501.36, "text": " We have some temperature that's not that important. And now we just tokenize. Again, we're going to treat this as a black box."}, {"start": 1501.36, "end": 1511.36, "text": " We take the prompt and we talk and I said using the diffusion models tokenizer into tokens. OK, so let me show you how that thing is going to look like."}, {"start": 1511.36, "end": 1515.36, "text": " It's going to be as you can see here, there is seven tokens in total."}, {"start": 1515.36, "end": 1526.36, "text": " And then there's this special utility function that's going to just pad the these tokens such that we fill in the context for the transformer."}, {"start": 1526.36, "end": 1534.36, "text": " That's a part of our diffusion model. OK, let me show you what that means. In practice, that means we're going to have I think we are using hundred twenty eight."}, {"start": 1534.36, "end": 1544.36, "text": " So that's this text context parameter. So basically if I were to print the number of tokens, you can see there is hundred twenty eight tokens now."}, {"start": 1544.36, "end": 1551.36, "text": " And the mask just tells you which ones are actually legit. And the first seven ones are actually true tokens."}, {"start": 1551.36, "end": 1563.36, "text": " Everything else is just padding, which is specified by setting false for all of those other tokens. OK, so so the reason we're doing this is so that we can basically take multiple prompts and do this in a single for a pass."}, {"start": 1563.36, "end": 1571.36, "text": " And that's why we cannot pad to a fixed dimension. 
OK, so now we form this model keyword arguments dictionary."}, {"start": 1571.36, "end": 1577.36, "text": " We just store the tokens of exercise is just one. So that means we're going to we won't do anything smart here."}, {"start": 1577.36, "end": 1585.36, "text": " We just store the tokens and the mask. And this is going to be later used to condition the diffusion model. We're going to see that in a second."}, {"start": 1585.36, "end": 1592.36, "text": " OK, so there is this condition function. I'm just going to enable basically all of the breakpoints."}, {"start": 1592.36, "end": 1597.36, "text": " And this is going to we're going to hit this function a bit later. It's going to be very important."}, {"start": 1597.36, "end": 1603.36, "text": " We are actually going to have to enter this function. So let me click F10 and enter this one."}, {"start": 1603.36, "end": 1611.36, "text": " So we're going to form this. Well, basically the embedding for the prompts and then we're going to return the this conditioning function."}, {"start": 1611.36, "end": 1619.36, "text": " OK, so let's see what's going to happen here. So we just basically pass prompts through the textual pathway of the clip model."}, {"start": 1619.36, "end": 1624.36, "text": " Nothing fancy there. We've seen this multiple times. We've seen this in my previous clip video."}, {"start": 1624.36, "end": 1633.36, "text": " So we've called this in code prompts that's just going to, as you can see here, iterate through the list of prompts."}, {"start": 1633.36, "end": 1638.36, "text": " We only have a single prompt, so nothing fancy will happen there. We're going to treat the tokenizer as a black box."}, {"start": 1638.36, "end": 1645.36, "text": " So this is going to return us some sequence of tokens. Let me let me show you what we're going to get here."}, {"start": 1645.36, "end": 1654.36, "text": " So if I run this, we have like, as you can see here, six tokens and then we're going to call again this padded tokens and length function,"}, {"start": 1654.36, "end": 1663.36, "text": " which is going to just pad everything. OK, I kind of entered here. Let me just skip over this. We don't want to go there."}, {"start": 1663.36, "end": 1670.36, "text": " So again, we end up with these subtokens, which are going to be just padded versions. Whoops."}, {"start": 1670.36, "end": 1677.36, "text": " So if I just grab the subtokens here and if I print this, you can see we have the same tokens as above."}, {"start": 1677.36, "end": 1686.36, "text": " Hmm. For some reason, there is some difference here. Let me just see. Oh, no. OK, we have the start of sentence token here"}, {"start": 1686.36, "end": 1692.36, "text": " and the end of sentence token here. And we have our tokens here. And then we have additionally the padding."}, {"start": 1692.36, "end": 1701.36, "text": " So it's a bit different. As I said, Clip has a bit different tokenizer compared to the tokenizer that we have in our diffusion model."}, {"start": 1701.36, "end": 1708.36, "text": " And now we're just going to append that and we are done. We just converted to torch tensors and we are back here."}, {"start": 1708.36, "end": 1720.36, "text": " So now we have basically tokens and their associated lengths. 
Next up, we're going to pass these tokens through the text pathway."}, {"start": 1720.36, "end": 1725.36, "text": " So that's again just going to embed the tokens, then apply a bunch of transformer blocks."}, {"start": 1725.36, "end": 1733.36, "text": " And ultimately, we are going to extract a particular a particular token. So let's see how the extractor part works."}, {"start": 1733.36, "end": 1741.36, "text": " So again, this is the this is the forward pass of our of our text encoder."}, {"start": 1741.36, "end": 1749.36, "text": " And we don't care about most of these parts because that's just going to do as I said, just going to do the embedding and then number of transformer blocks."}, {"start": 1749.36, "end": 1755.36, "text": " And ultimately, we'll have this. I think this is the part that we care about. Let's see how this is going to work."}, {"start": 1755.36, "end": 1763.36, "text": " So we call the text extractor here. And here it is. So the text feature extractor. Let's see what it does."}, {"start": 1763.36, "end": 1773.36, "text": " Basically, as you can see here, text length tells us how big how many tokens do we have fed into the transformer."}, {"start": 1773.36, "end": 1779.36, "text": " And it's going to be equal to seven. So that tells you that that's the index of the last token."}, {"start": 1779.36, "end": 1786.36, "text": " And then we just gather. So we let me show you what text is. So text is a print shape here."}, {"start": 1786.36, "end": 1792.36, "text": " So we have 77 tokens. Some of those tokens are obviously padding."}, {"start": 1792.36, "end": 1798.36, "text": " Some of those tokens are start and end of sentence tokens and 512 is just a dimensionality."}, {"start": 1798.36, "end": 1803.36, "text": " So what we do by by doing the gather, we are basically going to extract the last token here."}, {"start": 1803.36, "end": 1809.36, "text": " So that means we're going to extract the seventh, the well, the eighth token."}, {"start": 1809.36, "end": 1815.36, "text": " Let's step over there and let me convince you that that's indeed the case if I were to take the text."}, {"start": 1815.36, "end": 1823.36, "text": " And then if I were to take the seventh here and let's take the first three elements, you can see them here."}, {"start": 1823.36, "end": 1830.36, "text": " And now let's let me show you the let me just see what the shape of this thing is. OK, so let's now take zero zero."}, {"start": 1830.36, "end": 1842.36, "text": " And let's take also first three elements. And those. Yeah, as you can see, these are the same because we literally just extracted whatever the embeddings are on top of that seventh token."}, {"start": 1842.36, "end": 1849.36, "text": " That's pretty much it. And finally, we pass that X through the layer norm."}, {"start": 1849.36, "end": 1855.36, "text": " And finally, we just return the basically the mapped version of that particular token."}, {"start": 1855.36, "end": 1863.36, "text": " So let's see what F is. I think it's just a simple. Yeah, fine. It's just a basically a simple linear layer with some tweaks."}, {"start": 1863.36, "end": 1870.36, "text": " But I will not get into those details because as you can imagine, this is a very huge code base and I cannot cover everything."}, {"start": 1870.36, "end": 1885.36, "text": " OK, so let's go outside here and we end up with H being what H shape is going to be one five hundred twelve. 
So we have a single token that is supposed to be our representation of our input prompt."}, {"start": 1885.36, "end": 1894.36, "text": " OK, continuing on here, just some asserts. And that's it. We have our ZT tokens. That's the same thing."}, {"start": 1894.36, "end": 1908.36, "text": " So again, one five twelve. And we're going to now normalize it just using the well, I guess this is just L2 norm as as previously as we've seen previously in the open clip video."}, {"start": 1908.36, "end": 1915.36, "text": " OK, so that's the CT is that the and now we return this con function. I have a breakpoint here."}, {"start": 1915.36, "end": 1921.36, "text": " So we're going to see it a bit later when the time comes. But for now, let's continue on with the main script."}, {"start": 1921.36, "end": 1930.36, "text": " So a recap, because this was fairly overwhelming potentially. So we formed our diffusion models and we loaded the actual checkpoints here."}, {"start": 1930.36, "end": 1935.36, "text": " We can ignore the sampling model. Then we we loaded the clip model."}, {"start": 1935.36, "end": 1943.36, "text": " We have a prompt and we basically encoded a prompt here. We have a mask and tokens for that prompt."}, {"start": 1943.36, "end": 1951.36, "text": " And now we ended up creating this this embedding vector for our prompt using the clip model. OK, so that's it."}, {"start": 1951.36, "end": 1959.36, "text": " Now we're going to go through the P sample loop. So that's going to be the reverse process of our diffusion model."}, {"start": 1959.36, "end": 1965.36, "text": " Let's see what we're going to do here. We want to generate an image that's going to be of this particular resolution."}, {"start": 1965.36, "end": 1976.36, "text": " So it's going to be a sixty four sixty four RGB. And let's see what's interesting here. We're going to use the tokens and mask to condition the model."}, {"start": 1976.36, "end": 1981.36, "text": " So that's going to be different compared to our previous videos with the DPMs."}, {"start": 1981.36, "end": 1987.36, "text": " In any case, let's now jump into this P sample loop."}, {"start": 1987.36, "end": 1999.36, "text": " Again, I kind of skipped the P sample loop because everything it does is just basically iterates through this dysfunction loop progressive and starts generating multiple samples."}, {"start": 1999.36, "end": 2005.36, "text": " But I think we are generating just a single sample so that it doesn't even matter. So we can just kind of skip over all of this."}, {"start": 2005.36, "end": 2015.36, "text": " And now the actual fun starts. So we start as we as you may know from the completely like a normal distribution."}, {"start": 2015.36, "end": 2020.36, "text": " We can just sample our image from a normal distribution. That means we have initially completely noisy image."}, {"start": 2020.36, "end": 2032.36, "text": " And then we're going to slowly work our way from there and denoise that image until we get the image that corresponds to the prompt that we are using for this for this program."}, {"start": 2032.36, "end": 2045.36, "text": " OK, so we'll have a hundred steps of the reverse process and just some DQDM stuff for for monitoring the progress."}, {"start": 2045.36, "end": 2052.3599999999997, "text": " But we're going to now do hundred times in order to hundred denoising steps so that we end up with an image."}, {"start": 2052.3599999999997, "end": 2060.3599999999997, "text": " OK, so initially T is going to be 99. As you can see here. 
And now the whole fun happens in this P sample function."}, {"start": 2060.36, "end": 2071.36, "text": " So we're going to pass the fusion model so that's unit with transformer. We're going to pass the noisy image time step and a couple of other basically."}, {"start": 2071.36, "end": 2079.36, "text": " Parameters here and let's see how the sample is going to look like. OK, so this is the funny part."}, {"start": 2079.36, "end": 2085.36, "text": " OK, here let's see how the text information is used to condition the model. That's going to be the interesting part."}, {"start": 2085.36, "end": 2096.36, "text": " OK, let me step into here. So that's going to be again. That's going to be the text to image unit."}, {"start": 2096.36, "end": 2101.36, "text": " And what we're going to do is first we're going to embed the time steps."}, {"start": 2101.36, "end": 2107.36, "text": " So first sinusoidal embeddings and then we just map them using simple linear layers here."}, {"start": 2107.36, "end": 2118.36, "text": " And this is the interesting and different part. We're going to pass the tokens and the mask through the transformer part of our what's the name again of this thing?"}, {"start": 2118.36, "end": 2123.36, "text": " Text to image unit. So let's see how that thing is going to look like."}, {"start": 2123.36, "end": 2130.36, "text": " Let me set a break point here. So it's going to be simply a transformer. So I don't think anything fancy will happen here."}, {"start": 2130.36, "end": 2140.36, "text": " We get the embeddings, we do the positionally encoding, blah, blah, blah. We apply the transformer here. Let me just see whether there is something interesting."}, {"start": 2140.36, "end": 2153.36, "text": " OK, so here is the interesting part. We are returning, as you recall from the paper, not just like the embedding for the last token, we're also returning all of the tokens, all of the embedding vectors from the previous layer or something like that."}, {"start": 2153.36, "end": 2163.36, "text": " Let me just see what exactly is going on here. So we passed it through the transformer. We get this XF out and then we applied this layer norm."}, {"start": 2163.36, "end": 2172.36, "text": " And as you can see here, so let me just show you the shape of XF out first. So the shape of this thing is going to be 128, 512."}, {"start": 2172.36, "end": 2185.36, "text": " OK, and so we are going to do the following. We're going to take the last token and then map it using this transformer projection layer. And that's going to be simply, I assume, yeah, a linear layer."}, {"start": 2185.36, "end": 2197.36, "text": " OK, and that's how we add we end up with XF projection, which is going to be 11512 in shape, I assume. Let me just verify that's indeed the case."}, {"start": 2197.36, "end": 2204.36, "text": " Because the linear layer actually has a different output dimensionality. That's why we end up with 768. Not that important, but in any case."}, {"start": 2204.36, "end": 2214.36, "text": " And we take here, we just do some permutation, but we take all of the tokens. And that's the part from the paper."}, {"start": 2214.36, "end": 2223.36, "text": " So again, this is going to be 512 tokens. And this is the part we saw in the paper. Let me let me show you somewhere here."}, {"start": 2223.36, "end": 2231.36, "text": " They mentioned it. So the output of this transformer is used in two ways. 
First, the final token embedding is used in place of a class embedding in the ADM model."}, {"start": 2231.36, "end": 2243.36, "text": " Second, the last layer of token embeddings, a sequence of K feature vectors, is separately projected to the dimensionality of each attention layer throughout the ADM model and then concatenated to the attention context of each layer."}, {"start": 2243.36, "end": 2257.36, "text": " So that's we're using this twofold. And that's why we're returning this dictionary containing both of these. OK, let's continue on. And now you can see we are adding to the temporal embedding."}, {"start": 2257.36, "end": 2265.36, "text": " We're just having this this this projection. So this is this simple embedding vector as per the as the paper already mentioned."}, {"start": 2265.36, "end": 2276.36, "text": " So we're doing this in the same way as if we were if we had just a simple class information and not text. So that's everything remains the same as in previous papers."}, {"start": 2276.36, "end": 2285.36, "text": " But the different part is going to be how do we use this XF out. And we see here that we are passing out XF out throughout various modules of our unit model."}, {"start": 2285.36, "end": 2293.36, "text": " So that's the difference. That's how we are conditioning using the temporal information plus the textual information. OK, let's go here."}, {"start": 2293.36, "end": 2301.36, "text": " Finally, the whole magic is going to happen here. Let's find a module that's actually integrating the text information."}, {"start": 2301.36, "end": 2313.36, "text": " I'm not sure which ones are used. Let me just double check that. So let's go to unit model here and let's find the model that's going to be using the textual information."}, {"start": 2313.36, "end": 2322.36, "text": " I think it's going to be either textual block. Yeah, I think it's going to be the textual block. So this is the part is going to be using the textual information."}, {"start": 2322.36, "end": 2335.36, "text": " The rest blocks. So here is the residual block. That one is only going to be using the temporal information as previously we saw this in in previous videos as well."}, {"start": 2335.36, "end": 2343.36, "text": " So let's just step over. Whoops. Where am I? OK, so the first module is not the attention module."}, {"start": 2343.36, "end": 2355.36, "text": " So we're not hitting the we'll have to kind of keep on doing this or I can alternatively hit F5 until we hit the first attention block where we're going to finally integrate this text information."}, {"start": 2355.36, "end": 2365.36, "text": " OK, so we can verify that this is 512, 128 as expected. And you can hear see how it's integrated into the attention."}, {"start": 2365.36, "end": 2377.36, "text": " So what we do with those tokens is we apply this encoder key value. So that's just going to be a convolutional like a layer applied."}, {"start": 2377.36, "end": 2389.36, "text": " So we do some processing here first and then we end up with a shape that's going to be what one seven sixty eight hundred twenty eight hundred twenty eight tokens of this dimensionality."}, {"start": 2389.36, "end": 2397.36, "text": " And then we're going to pass that into the attention and attention is this QKB attention object."}, {"start": 2397.36, "end": 2405.36, "text": " Let me get kind of enter into it and let's just add a break point there. 
So let's just see how the information is integrated."}, {"start": 2405.36, "end": 2410.36, "text": " We can see here. So we care about this encoder encoder variable."}, {"start": 2410.36, "end": 2433.36, "text": " We form the queries, keys and values the usual way using the image features. And now we're trying to integrate the textual features and how they do this is as you can see, they grab they just do some reshaping and then they concatenate those textual basically keys to image keys."}, {"start": 2433.36, "end": 2444.36, "text": " And they concatenate the textual values to image values. And then they just do the attention. So it's kind of bothersome."}, {"start": 2444.36, "end": 2450.36, "text": " This is obviously not the only way you can do it, but it's basically exactly what the paper here said."}, {"start": 2450.36, "end": 2461.36, "text": " So the output of this transformer is using two ways. Blah, blah, blah. And then second, the last layer of token embeddings, a sequence of key feature vectors is separately projected to the dimensionality of each attention layer."}, {"start": 2461.36, "end": 2470.36, "text": " Throughout the ADI model and then concatenated to the attention context at each layer. So we just saw that that's indeed the case."}, {"start": 2470.36, "end": 2480.36, "text": " And let me just continue here. We don't need to go through the this is the implementation of a transformer. I'm going to skip all of this, assuming you already know how this thing works."}, {"start": 2480.36, "end": 2494.36, "text": " And that's it. So basically we saw how we are taking the textual information stored in this dictionary here, temporal information and passing all of that through UNET and forming the model output model."}, {"start": 2494.36, "end": 2510.36, "text": " This is again going to contain the Epsilon. So the noise plus the covariance matrix is going to be one six, sixty four, sixty four, six, because we have three channels predicting the Epsilon and three channels predicting Sigma or the covariance matrix."}, {"start": 2510.36, "end": 2531.36, "text": " OK, so next up. So everything else remains the same as in the first DDPM, the coding video I've done. The only difference here was the textual basically conditioning. OK, so we split the output into two parts, the Epsilon and the model var values."}, {"start": 2531.36, "end": 2544.36, "text": " Again, there is this V vector that's actually predicted and then we form the model variance by by using these expressions here. Not as important. We saw that multiple times in previous videos."}, {"start": 2544.36, "end": 2556.36, "text": " And finally, we predict. So we now pass the noisy image, the time steps and the output. And then we just predict the X zero here."}, {"start": 2556.36, "end": 2567.36, "text": " Just some clamping. And finally, we predict the model mean. So everything here remains the same as in the original, as in the improved DDPM paper."}, {"start": 2567.36, "end": 2580.36, "text": " So I'm going to skip all of this. And now this is where the difference will will will kick in. So here we just a sample from a normal distribution."}, {"start": 2580.36, "end": 2587.36, "text": " We create this non zero mask. And this is the interesting part. So we now want to see how the conditioning is done using the clip model."}, {"start": 2587.36, "end": 2596.36, "text": " Again, this is very similar in structure to the guided DDPM that I've covered in my previous video. 
So let's step inside here."}, {"start": 2596.36, "end": 2610.36, "text": " I'm going to hit F 10. We are going to form the gradient and then we're just going to shift the predicted mean by the as you can see here. So Sigma, the covariance times the gradient."}, {"start": 2610.36, "end": 2617.36, "text": " OK, and the grading itself already contains the scaling coefficient. So that's why we don't see it here."}, {"start": 2617.36, "end": 2626.36, "text": " OK, let's hit F 10. And here are we in the conditioning function. So we previously saw how we created this textual embedding."}, {"start": 2626.36, "end": 2641.36, "text": " Now we're going to do the fun part. So we're going to take the input noisy image, detach it from the computational graph and set requires graded true because we want to compute the gradients for this particular image."}, {"start": 2641.36, "end": 2649.36, "text": " We're going to embed this image using the image pathway of the clip. So such that we end up with embedding image embedding here."}, {"start": 2649.36, "end": 2658.36, "text": " So that's again, let me show you what we're doing there. We're doing the drawing I showed you a couple of minutes ago. So I think it's somewhere here."}, {"start": 2658.36, "end": 2670.36, "text": " So we are just now forming this red embedding vector. And now we're going to basically, well, find the gradients of the product between the text embedding vector and the image embedding vector."}, {"start": 2670.36, "end": 2685.36, "text": " OK, let's go back here. Let's step over image embeddings. So nothing. Well, it's again just walking me through the I have a bunch of break points here, but we can kind of ignore all of those safely."}, {"start": 2685.36, "end": 2696.36, "text": " And what's this? My God, my God, I'm hitting every single I'm hitting every single break point here. Nothing interesting here. Nothing interesting here."}, {"start": 2696.36, "end": 2703.36, "text": " So we form the embedding vector. We normalize it. And here we are. So we have the embedding vector for the image."}, {"start": 2703.36, "end": 2711.36, "text": " And now we form the product and we additionally have this logic scale. So we form the loss and we just then apply."}, {"start": 2711.36, "end": 2719.36, "text": " We just find the gradient of that loss and that gradient is now going to be returned. We just multiply with the gradient scale."}, {"start": 2719.36, "end": 2726.36, "text": " So it's going to be three because that's the parameter we set in the main script. And we just return that gradient. But everything."}, {"start": 2726.36, "end": 2739.36, "text": " So basically this is what we saw in the paper. And this is the only difference compared to your regular classifier guidance we saw in the previous video on diffusion models beat GANs on image synthesis paper."}, {"start": 2739.36, "end": 2754.36, "text": " So this gradient is going to move our image towards the part of the space where that image resembles the prompt much more closely compared to if we were to just do it unconditionally."}, {"start": 2754.36, "end": 2760.36, "text": " OK, let's step over here. We have the gradients. We find the new mean. And that's it, guys."}, {"start": 2760.36, "end": 2771.36, "text": " Then we just sample from that new mean. Again, we have this this standard expression. We just have mean plus sigma and we just multiply by the by the noise here."}, {"start": 2771.36, "end": 2778.36, "text": " This is how we sample from the Gaussian. And that's it. 
And this now just keeps on repeating."}, {"start": 2778.36, "end": 2787.36, "text": " We're now going to repeat the same process for the next time step, which is going to be 98 and all the way to zero. And that's pretty much it."}, {"start": 2787.36, "end": 2799.36, "text": " OK, guys, next up, I want to show you how the classifier free guidance works. So let me run the script text to image and let's start."}, {"start": 2799.36, "end": 2806.36, "text": " A lot of the structure is shared between this script and the previous one. So I'll just be skipping even more than before."}, {"start": 2806.36, "end": 2814.36, "text": " So we're creating the model and the diffusion here. Let me disable the break points such that we are just stepping over all of this."}, {"start": 2814.36, "end": 2823.36, "text": " So we have modeling, diffusion, blah, blah, blah, shifting into GPU, loading the checkpoint, counting the number of parameters."}, {"start": 2823.36, "end": 2833.36, "text": " Then we are forming the up sampling model here, which we don't care about. So we're going to skip all of that until we hit the prompt here and all painting of accordi."}, {"start": 2833.36, "end": 2844.36, "text": " And now let's start. So same hyper parameters as before, same encoding of the tokens and then adding the padding and the mask."}, {"start": 2844.36, "end": 2852.36, "text": " And this is where it's now starting to become a bit different. This is where the structure starts to deviate from the previous script."}, {"start": 2852.36, "end": 2860.36, "text": " So now we encode the null prompt. So that's the empty prompt here, as you can see. And that's the part."}, {"start": 2860.36, "end": 2868.36, "text": " I'll show you again in the paper. So let's find the OK, here it is. Here's the equation."}, {"start": 2868.36, "end": 2874.36, "text": " So we have to pass the empty prompt and then we have to pass captions."}, {"start": 2874.36, "end": 2878.36, "text": " So that means we'll effectively have two passes through the diffusion model."}, {"start": 2878.36, "end": 2884.36, "text": " Or in this particular implementation, what I've done is they've sacrificed some memory for a single pass for time."}, {"start": 2884.36, "end": 2891.36, "text": " And so they basically do a single pass, as we'll soon see, and they compute both of these expressions here."}, {"start": 2891.36, "end": 2899.36, "text": " So both conditional caption as well as as on the empty well, null caption or empty caption."}, {"start": 2899.36, "end": 2909.36, "text": " OK, so let's go back to the code. Here we can see that we now tokenize the empty empty sequence and we end up with tokens."}, {"start": 2909.36, "end": 2918.36, "text": " We just contain pretty much just the I think end of tokens. So all of these, as you can see here, just contain the padding token."}, {"start": 2918.36, "end": 2922.36, "text": " I think this is the padding token. Let me double check. That's indeed the case."}, {"start": 2922.36, "end": 2929.36, "text": " So if I were to enter here, so this padded tokens, let me just find that thing."}, {"start": 2929.36, "end": 2938.36, "text": " I think it's going to be somewhere in here. So there is a tokenizer part and we care about the BPE file."}, {"start": 2938.36, "end": 2952.36, "text": " And here it is. So you can see here. Because we have passed empty tokens, we're basically going to pad the end token for the full context."}, {"start": 2952.36, "end": 2956.36, "text": " Basically, that's what happens. 
We have end token as the result here."}, {"start": 2956.36, "end": 2961.36, "text": " That's that's what these this fifty thousand two fifty six represents. It's just a pet token."}, {"start": 2961.36, "end": 2970.36, "text": " OK. Now we're going to store both the tokens as well as the this unconditional tokens."}, {"start": 2970.36, "end": 2974.36, "text": " And we're also going to store mask as well as the unconditional mask."}, {"start": 2974.36, "end": 2978.36, "text": " So this mask is kind of boring. I assume it's going to have all forces."}, {"start": 2978.36, "end": 2983.36, "text": " Yeah, as you could have expected. And now let's continue."}, {"start": 2983.36, "end": 2992.36, "text": " So that's the key word arguments that we're going to use to condition our model during the four pass."}, {"start": 2992.36, "end": 2999.36, "text": " OK, so here we have an interesting function. So let me now enable all of the breakpoints."}, {"start": 2999.36, "end": 3007.36, "text": " We'll enter this one a bit later. This is where the whole magic happens when it comes to computing the classifier free guidance."}, {"start": 3007.36, "end": 3017.36, "text": " And again, we have the P sample loop standard stuff. We want to generate an image that's going to be I think this is going to be sixty four sixty four resolution."}, {"start": 3017.36, "end": 3022.36, "text": " Yep, that's the case. And let's continue here. We are passing the conditioning information here."}, {"start": 3022.36, "end": 3033.36, "text": " And finally, you can see here that we are extracting batch size, which is one, whereas the full batch size here is going to be two."}, {"start": 3033.36, "end": 3042.36, "text": " And the reason is this is the optimization they've done such that they have a single four pass and they end up with the with the result,"}, {"start": 3042.36, "end": 3052.36, "text": " even though they have to compute the and the output both for the empty token string as well as for the actual prompt we're using."}, {"start": 3052.36, "end": 3062.36, "text": " OK, let's now dig into the code again. I'm skipping some parts because it's just boilerplate, some for loops and stuff."}, {"start": 3062.36, "end": 3069.36, "text": " So we are generating the noisy image we're sampling from the I guess normal distribution here."}, {"start": 3069.36, "end": 3076.36, "text": " We are forming the indices again. We have how many steps? Hundred. We are logging the progress. Nothing fancy there."}, {"start": 3076.36, "end": 3084.36, "text": " And now we're starting the actual diffusion reverse process. OK, so here is the P sample function. This is the important function."}, {"start": 3084.36, "end": 3090.36, "text": " Really, nothing has changed. So the model is going to be different here. That's the one that was defined in the main script."}, {"start": 3090.36, "end": 3097.36, "text": " We're going to hit it a bit later. So you're going to see it otherwise. Nothing interesting here."}, {"start": 3097.36, "end": 3103.36, "text": " No conditioning here because we are doing the classifier free guidance. And this is the conditional information."}, {"start": 3103.36, "end": 3107.36, "text": " OK, we're going to see all of that in a second. Let me step over all of this."}, {"start": 3107.36, "end": 3112.36, "text": " And this is where the magic of the classifier free guidance is going to kick in."}, {"start": 3112.36, "end": 3119.36, "text": " So here we hit the model function that we define in the main script here. 
So let's see what we're doing."}, {"start": 3119.36, "end": 3127.36, "text": " So we take the input images and we just grab the first part. So again, remember that we have two images here."}, {"start": 3127.36, "end": 3133.36, "text": " So let me show you the shape. So we have two. So we're going to take the first part only."}, {"start": 3133.36, "end": 3138.36, "text": " Because only the first part is going to contain the actual information of interest."}, {"start": 3138.36, "end": 3148.36, "text": " And that's again the consequence of the way of them wanted to have a single forward pass, but like sacrifice the memory."}, {"start": 3148.36, "end": 3160.36, "text": " And that's why there is this weird burden that we as someone analyzing this code base have to kind of suffer because of that optimization trick."}, {"start": 3160.36, "end": 3167.36, "text": " But like it's fairly simple. Basically, you just take the first part of the of the image."}, {"start": 3167.36, "end": 3173.36, "text": " We we know just as you can see here, we form this combined variable, which is just basically copy paste."}, {"start": 3173.36, "end": 3182.36, "text": " We copy paste that first portion. And now we pass that basically together with time steps and together with conditioning information."}, {"start": 3182.36, "end": 3198.36, "text": " We pass all of that through the unit with the transformer. So I'm going to disable the break points and just skip over this whole step over this this this function."}, {"start": 3198.36, "end": 3210.36, "text": " Because we already saw this in the first script. We are basically just integrating the temporal information, integrating the textual information into the attention context, etc, etc."}, {"start": 3210.36, "end": 3217.36, "text": " And as the output, we get the epsilons and the sigmas for both images. So let's see the model output."}, {"start": 3217.36, "end": 3229.36, "text": " Basically, it should be 264, 64, because we have two images, six because we have both the epsilon and as well as the covariance matrix, so the sigma."}, {"start": 3229.36, "end": 3236.36, "text": " And now we're going to split those, as you can see here, into epsilons and and and well, they call it rest."}, {"start": 3236.36, "end": 3244.36, "text": " And then finally, we do another splitting into conditional epsilon and unconditional epsilon, because as you recall,"}, {"start": 3244.36, "end": 3252.36, "text": " the first image is going to be conditioned here with the conditional information."}, {"start": 3252.36, "end": 3256.36, "text": " The second one is just going to have the tokens for the for the empty prompt."}, {"start": 3256.36, "end": 3264.36, "text": " That's why we have these two variables here. So these two, again, are just are just these two variables here."}, {"start": 3264.36, "end": 3275.36, "text": " And the epsilon conditioned on the empty prompt as well as the epsilon conditioned on the on the caption on the prompt."}, {"start": 3275.36, "end": 3281.36, "text": " OK, let's go back to the code. And now here's the step. Here's the actual formula."}, {"start": 3281.36, "end": 3287.36, "text": " We have the unconditional epsilon. We add up the scaled version of this difference. And that's it."}, {"start": 3287.36, "end": 3295.36, "text": " That's the whole magic. And now we just again do this. Well, as I said, it's a consequence of justice of the optimization they've implemented."}, {"start": 3295.36, "end": 3300.36, "text": " And then we return all of that. And that's it. 
Everything else remains pretty much the same."}, {"start": 3300.36, "end": 3308.36, "text": " We split into we form the model variance. We form the let's see where there is something different here."}, {"start": 3308.36, "end": 3317.36, "text": " We just predict the X start or the X zero, which is the I guess the newest image. And then we do some pre-processing."}, {"start": 3317.36, "end": 3325.36, "text": " So nothing fancy there again will only care. So this model output contains copy pasted same results."}, {"start": 3325.36, "end": 3333.36, "text": " So we only care. And let me kind of show you that's indeed the case. So the shape of this thing is two, three, sixty four, sixty four."}, {"start": 3333.36, "end": 3339.36, "text": " If I were to take all zeros, so that means I'm going to just grab a single element from the first batch."}, {"start": 3339.36, "end": 3350.36, "text": " If I were to take the second, the same element from the second batch, you can see they are the same because that's how, again, the optimization they're doing is playing out."}, {"start": 3350.36, "end": 3357.36, "text": " OK, so all in all, at all times, only the zeros image contains the results of interest."}, {"start": 3357.36, "end": 3367.36, "text": " The second one is just basically a consequence of the optimization. Now they form the mean and then they return all of these variables back."}, {"start": 3367.36, "end": 3380.36, "text": " And that's it. Everything else remains the same. We just we don't have the conditioning now and we just form the final sample by sampling from this distribution sample."}, {"start": 3380.36, "end": 3394.36, "text": " Here is still two, two, three, sixty four, sixty four. But only the first part contains the sort of zeros index of the batch dimension contains the actual information that we care about."}, {"start": 3394.36, "end": 3406.36, "text": " And that's it, guys. I'm going to now go back into the script. I'm going to disable all of the breakpoints and I'm going to set a breakpoint to show images here."}, {"start": 3406.36, "end": 3419.36, "text": " Now let's hit F5 and let's see how this is going to be generated. You can see here the progress is being tracked using the TQDM library, which is so cool."}, {"start": 3419.36, "end": 3426.36, "text": " And now let me show you the image. So if I click F10 here, let me show you the image. So we have a corgi, as you can see here."}, {"start": 3426.36, "end": 3442.36, "text": " So what was the prompt? Our prompt was, let me just find it, an oil painting of a corgi. And you can see indeed this is a low resolution version of that particular prompt because the models are already pre-trained for us."}, {"start": 3442.36, "end": 3457.36, "text": " That's why we are getting these cool results. Keep in mind that OpenAI has not open source the training code, so we can only play with the sampling and generation in general and not with training."}, {"start": 3457.36, "end": 3467.36, "text": " Let me quickly show you what's going to happen after we do the up sampling. I'm going to set another breakpoint here. Let me hit F5 here."}, {"start": 3467.36, "end": 3480.36, "text": " And now we're generating the up sampled version. 
So if I hit F10 here, let me just see whether that's going to exit the program or not."}, {"start": 3480.36, "end": 3489.36, "text": " Okay, unfortunately that exited the whole program, so I'm going to have to set something here like print blah blah, for example."}, {"start": 3489.36, "end": 3501.36, "text": " And now let's hit run again. And we're going to sample both the corgi as well as the up sampled version of the corgi, although it may take some time."}, {"start": 3501.36, "end": 3511.36, "text": " Cool, guys. That's pretty much it. I know there is a lot of information in this video. Do let me know if you have any feedback. Did you find this one useful or not?"}, {"start": 3511.36, "end": 3522.36, "text": " All of that's going to help me modify my future videos. And hopefully you learned something from this. Hopefully you like this combination of using paper, using code."}, {"start": 3522.36, "end": 3530.36, "text": " There is a lot of information, but I think it might be useful for all of you. So here's the corgi again. This is the low res version."}, {"start": 3530.36, "end": 3536.36, "text": " And now we're going to see the up sampled version. So here is the up sampled version. So that's it."}, {"start": 3536.36, "end": 3548.36, "text": " I'm going to quickly also show you how the inpainting script looks like. Let me move it here. Let me try and run it. Let me see where the show images are."}, {"start": 3548.36, "end": 3560.36, "text": " So I'm going to set a break point there and then there is the up sampling part. But yeah, I think we can just set a couple of break points here to visualize what's going on."}, {"start": 3560.36, "end": 3574.36, "text": " So let me hit run and let's see what's going on. So first things first after the loading has happened. So again, the logic is actually not that complicated."}, {"start": 3574.36, "end": 3583.36, "text": " It's a bit harder to form a mental model for how this inpainting works, but I encourage you to check out the code at your own pace and understand what's going on."}, {"start": 3583.36, "end": 3593.36, "text": " So here is the image we are trying to impaint. So we basically have this grass image and then we masked all of the rows here."}, {"start": 3593.36, "end": 3602.36, "text": " So I'm going to exit here. Let's hit F5. Now we're going to generate impaint this particular image."}, {"start": 3602.36, "end": 3611.36, "text": " And let me now hit this. We're going to hit this break point here after 100 steps. So let me show you what we get as a result."}, {"start": 3611.36, "end": 3624.36, "text": " We get a corgi on this on this on this on the grass. And that's because the prompt we are using is let me just find a prompt a corgi in a field, which makes sense."}, {"start": 3624.36, "end": 3636.36, "text": " And finally, if I were to hit well F5 here, well, we'll just get the super like the up sampled version. But that's pretty much it."}, {"start": 3636.36, "end": 3643.36, "text": " Again, guys, do let me know whether you find this video interesting and useful. Any feedback is super appreciated."}, {"start": 3643.36, "end": 3666.36, "text": " And do subscribe to this channel if you liked this video. Share it out with your friends. And until next time, bye bye."}]
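To make the classifier-free guidance logic walked through in the segments above easier to follow, here is a minimal sketch of the single-forward-pass trick: the batch is doubled (caption-conditioned half plus empty-prompt half), the predicted epsilons are split, and the guided epsilon is formed as eps_uncond + s * (eps_cond - eps_uncond). This is a simplified reconstruction rather than the repo's exact function; the `model` callable, the channel split at 3, and the name `guidance_scale` are assumptions for illustration.

```python
import torch

def cfg_model_fn(model, x_t, ts, guidance_scale, **cond_kwargs):
    # x_t has shape (2*B, 3, H, W); only the first B images carry the real state,
    # the second half is a duplicate used for the unconditional (empty-prompt) branch.
    half = x_t[: len(x_t) // 2]
    combined = torch.cat([half, half], dim=0)            # one forward pass covers both branches
    model_out = model(combined, ts, **cond_kwargs)       # (2*B, 6, H, W): epsilon + variance channels
    eps, rest = model_out[:, :3], model_out[:, 3:]       # split epsilon from the predicted variance part
    cond_eps, uncond_eps = torch.chunk(eps, 2, dim=0)    # caption-conditioned vs. empty-prompt epsilon
    guided_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
    eps = torch.cat([guided_eps, guided_eps], dim=0)     # copy back so the output keeps the 2*B batch shape
    return torch.cat([eps, rest], dim=1)
```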
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=hAp7Lk7W4QQ
Diffusion Models Beat GANs on Image Synthesis | ML Coding Series | Part 2
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE 4th video in the ML coding series! In this one I continue explaining diffusion models! I cover the "Diffusion Models Beat GANs on Image Synthesis" paper and the code behind it. I focus on how classifier guidance works. I cover both the training of the noise-aware classifier as well as the actual sampling (the mean shift method). I also walk you through a minor bug in their code. Let me know how you find this format! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GitHub: https://github.com/openai/guided-diffusion ✅ My issue: https://github.com/openai/guided-diffusion/issues/51 ✅ Paper: https://arxiv.org/abs/2105.05233 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro 00:01:30 Paper overview part - U-Net architecture improvements 00:05:38 Classifier guidance explained 00:15:18 Intuition behind classifier guidance 00:20:10 Scaling classifier guidance 00:24:10 Diversity vs quality tradeoff and future work 00:26:15 Coding part - training a noise-aware classifier 00:35:35 Main training loop 00:44:26 Visualizing timestep conditioning 00:46:00 Sampling using classifier guidance 00:52:35 Core of the sampling logic 00:59:20 Shifting the mean - classifier guidance 01:05:03 Minor bug in their code and my GitHub issue 01:07:53 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #diffusion #generativemodeling #coding
What's up guys, in this video I'm continuing with the machine learning coding series and I'll be covering a follow-up paper to the previous video. So in the last one we covered the Improved DDPMs, or improved denoising diffusion probabilistic models, paper; in this one I'm going to cover the follow-up work called the Diffusion Models Beat GANs on Image Synthesis paper and the associated code. The idea will again be for me to give you a brief context, like a 10-15 minute overview of the most salient parts of the paper, and after that we're going to dig into the code and analyze two scripts: one is the classifier train script and the other one is classifier sample. So basically I want to focus on the guiding: how do we guide diffusion using pre-trained noise-aware classifiers? You'll understand what that means in 10-15 minutes. In any case, what I've done is I cloned this repo, the guided-diffusion repo from OpenAI. They have all the instructions you'll need. I had to do some modifications again; if I get enough votes down in the comments, I'll just push my changes. So I had to make minor changes to make this work on my machine. In any case, you can basically download various checkpoints here and just follow the instructions they have in the readme, which are super nice, and you should be ready to go. Without further ado, let's first start with the paper. Okay, so as I said, it's Diffusion Models Beat GANs on Image Synthesis, and the two main innovations here are: they improve the UNet architecture from their previous work, so from the improved DDPMs paper, and the second improvement is they find a way to additionally condition the model so that they can achieve higher quality, so they can trade off the diversity versus the quality of a sample, which is something that previous generative models such as GANs used to achieve using this truncation trick, and some autoregressive models can simply modify the temperature and thus control the diversity versus the quality of the sample. So we'll see in this paper how they achieve the same thing for the class of generative models called diffusion models. I strongly suggest you watch my previous video if you haven't, because I'm going to assume some knowledge from that one, but you can continue watching this one as well. Okay, so they say here: we hypothesize that the gap between diffusion models and GANs stems from at least two factors. The first one is that the model architectures used by recent GAN literature have been heavily explored and refined. And we saw the same thing happening in the ConvNeXt paper, if you recall that one, where basically people focused a lot on transformers and there was a lot of innovation that happened on that side of the spectrum, whereas the CNNs were not as explored, and then what ConvNeXt showed is that if you take some of the innovations that were invented in the context of transformers and you adopt them into CNNs, you get much better models. So similarly here, they took some ideas from other GAN papers and integrated those ideas into the diffusion models, into the architecture, and that improved the results quite nicely. Anyways, that's the first thing. The second thing is that GANs are able to trade off diversity for fidelity, producing high quality samples but not covering the whole distribution.
So that's the second thing I mentioned previously So they do that using this truncation trick. You can check out the the style GAN paper I think that's that's the one where I recall that they have that they're using the the truncation trick in any case Let me quickly show you the Dimensions across which they they kind of explored And how to improve the architecture so one is increasing depth versus width holding model size relatively constant Increasing the number of attention heads using attention at 32 times 32 resolution as well as 16 By 16 and not and 8 by 8 Rather than only at 16 by 16 using big GAN residual block for up sampling and down sampling the activations And rescaling residual connections with one over square root of two. Okay. So those are some of the ideas they they kind of played with And you can see here some of the curves here after you Take the best of out of those changes after a lot of experimentation I guess they end up having the the the best architecture here So you can see here number of channels 160 number of blocks two blah blah blah Anyways, not that important. I'm going to focus on the um like guiding the diffusion much more than on the architecture So here again just to to wrap this up the architecture part in the rest of the paper We use this final improved model architecture as our default Variable width with two residual blocks per resolution multiple heads with 64 channels per head attention at 32 16 and 8 resolutions Big GAN residual blocks for up and down sampling and adaptive group normalization for injecting time step and Class embeddings into residual blocks you might recall this one from my previous video So this adaptive group normalization for injecting So this one adaptive for injecting time step and class embeddings So do check it out if you want to know the the details how that is how that works Okay, so this is the important part So classifier guidance, that's what we want to understand a bit better They meant to be a little bit more complex That's what we want to understand a bit better. They mentioned here. We already incorporate class information into normalization layers So if you recall from the previous video basically the same way we we incorporate the the time step information Through that same pathway They incorporate the class information. So that means they already had class condition diffusion models But the thing is they show in this paper that using this classifier guidance is complementary to the class conditioning And because of that they achieve the best results With with class conditioning obviously so with classifier conditioning So let's let's see what it is basically So here we explore a different approach compared to the one I just mentioned so exploiting a classifier p Uh y uh conditioned on x so basically you input x and you get like a distribution across labels That's how you're going to implement this abstract distribution here To improve a diffusion generator So blah blah blah the previous paper show one way to achieve this wherein a pre-trained diffusion model can be conditioned using the Gradients of a classifier. Okay, that's the important part. So so in particular we can train a classifier p parameterized by parameters phi and conditioned on x of t And t so xt is basically the the if you recall from the that this is the forward so again diffusion uh, just a quick reminder is you start with uh, like a Fresh image. Let's let's uh, let me draw it like here. 
So we have a fresh image here And then what you do is you gradually stop adding more and more noise so Basically here you had like some image and then you have here the same image And basically you start adding gaussian noise So you're going to add some noise here and then you you keep repeating that that that process uh multiple times Uh, basically, uh, like for example 1000 4000 are our numbers which makes sense And so at the end well actually at the end you'll end up with a completely noised image So at the end you'll just have a pure gaussian noise here I'm just gonna like draw it like this basically it's pure noise and what you're trying to learn is how to Go backwards. So this is the forward part. This is the forward direction and we want to learn how to go backwards So we want to learn how to start from noise and then gradually basically generate the image from the underlying Probability distribution which we have learned during the training procedure. Okay? So x of t is basically the t-th step of this forward process. So we we take the x-th image here So there's going to be like some image here and that's like t-th time step This is going to be this is usually denoted as 0-th. This is usually denoted as big t And that's or t minus 1 whatever in any case So that's that's what this x of t is and t is just a scalar which we're gonna obviously Encode using the usually they use the like sinusoid embeddings same as for the original transformer paper Okay, so in particular we can train a classifier which I just explained on noisy images And then use gradients of that classifier as you can see here So to guide the diffusion sampling process towards an arbitrary class label y So the idea here is This gradient here tells you how you need to tweak the current image So x of t how you need to tweak it such that you maximize the class label of your interest So let's say you want to you want to create a dog image Then you want to make sure that so this will tell you how to tweak the image such that the dog class is maximized i.e. how to yeah basically how to tweak the diffusion process such that Basically, you start guiding it towards this part of the space where the dog images lie in We're gonna see those details in a second, but like that's that's that's a rough intuition for now. Okay, let's continue here Here just briefly they briefly mentioned the notation they're gonna omit the t part because it's kind of Redundant in the sense that well just for concise and sake basically. It's not redundant It's just easier to write it down like that. Okay, so So here's how we're gonna do this. We start with the diffusion model with an unconditional reverse Noising process so p parameters by theta and the idea is to given x t plus one So that's the image with more noise You want to predict the next step of the reverse process as I previously explained Here, okay. So that's that's this part. We want to learn how to go how to you how to go from more noise to less noise in any case To condition this on label y it suffices to sample each transition according to This distribution here. So if we want to sample from Class conditions so you can see here. We additionally have the class information here It can be shown and they have a like a whole derivation in the appendix h So if you want to dig into the maths you can go there. I'm gonna kind of assume this to be like true Like as an axiom because otherwise will take too much time for me to explain the maths And in any in any case you can see here. 
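For reference, the class-conditioned transition the transcript is describing at this point, as derived in appendix H of the paper, can be written as:

```latex
p_{\theta,\phi}(x_t \mid x_{t+1}, y) \;=\; Z \, p_\theta(x_t \mid x_{t+1}) \, p_\phi(y \mid x_t)
```

where Z is a normalizing constant, p_theta is the learned unconditional reverse step, and p_phi is the noise-aware classifier; sampling from the left-hand side exactly is intractable, which is what the next part of the derivation addresses.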
So that Z is just an arbitrary constant, and we already have this part here. So this is the unconditional reverse process which we learned, and here, this is going to be the classifier that we're going to train on the noisy images, where by noisy images I mean images taken from an arbitrary time step of the forward diffusion process. Okay, that's the basic idea. So if you take this as given, now let's derive how we can actually sample from this distribution on the right-hand side, because as they mention here, it is typically intractable to sample from this distribution exactly. Okay, so that's the idea. Now let's see how we can break this down and make it simple and easy to sample from. They say here, well, we basically know that our reverse process can be parameterized like this. This is what we saw in the previous paper: we're going to learn how to predict mu and the covariance matrix. Now, if we take the log of this expression here, since this is e raised to blah blah blah something, that's the Gaussian definition, you end up with this expression here. Okay, so we're working our way here, we're basically trying to understand how we can break this down, and that's going to be on the next page, and then we're going to have a simple expression for sampling from a class-conditioned distribution. Okay. So under certain assumptions which they mention here, we can assume that the log of our classifier has low curvature compared to the inverse of this covariance matrix, which we're going to predict from the unconditional diffusion model. This assumption is reasonable in the limit of infinite diffusion steps, blah blah blah, where, I guess, this Frobenius norm converges to zero, and in this case we can approximate the log of the classifier using a Taylor expansion around mu, and mu is basically the mean of the x of t. So let me go back here. So that's the same mu as this one here. Okay, so that's basically the mean of the distribution from which we're going to sample x of t. That's it, fairly simple. And if we do that, they end up with this expression here. This component here is going to be very, very important, and under this approximation and then the derivation here, we can finally get the final expression. So we just take the log of the distribution we just saw above, and we can omit Z because that is a constant, so we don't care about it, and you can see that after some derivation here we end up with this expression. So again, we can ignore C4, it's just a constant. So basically, if we want to sample from that complex intractable distribution here, what we in practice need to do is just sample from this Gaussian here. So as you can see, the Gaussian just has a shifted mean: we have mu, which comes from the unconditional diffusion model, and we simply add this offset here. So we take the covariance matrix and we just multiply it with g, and g is, as you can see here, the gradient of our classifier for x of t equals mu.
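Written out cleanly, the Taylor-expansion step just described is:

```latex
\log p_\phi(y \mid x_t) \;\approx\; (x_t - \mu)^\top g + C,
\qquad g = \nabla_{x_t} \log p_\phi(y \mid x_t)\big|_{x_t = \mu},
```

and plugging this into the Gaussian reverse step gives the sampling distribution with the shifted mean:

```latex
p_{\theta,\phi}(x_t \mid x_{t+1}, y) \;\approx\; \mathcal{N}\!\left(\mu + \Sigma\, g,\; \Sigma\right).
```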
So i'm going to give you some intuition for behind these formulas But first let me just tell you this so so we have thus found that the conditional transition operator So that's the the one we saw above here Can be approximated by a gaussian similar to the unconditional transition operator But with its mean shifted by sigma g Okay, so let me try and Give you some intuition for why this works. It's going to be a very hand-wavy explanation But like I think the intuition is is completely correct on some level of abstraction. It's a correct So let's just say that we have a condition But I think the intuition is is completely correct on some level of abstraction. It's it's a correct Hand-wavy explanation. Okay. So g again it tells you how you should tweak X of t so the the image such that you can maximize certain class y. Okay, so Let's see what we do. So we we pass in we have our model and Let me kind of draw it like this. So let me just change the color here So we have an image That's x t plus one. So that's the x t plus one image. We pass that image into our diffusion model So that's our diffusion model here and out comes what outcomes mu? and out comes this Covariance matrix and these two together basically parameterize the Gaussian distribution from which we would sample the Next so x of t so from from this from this distribution here. We sample x of t Okay, so we can kind of visualize the Gaussian as just Like a point in a in a in a multi-dimensional space for the sake of explanation I'm just gonna assume we're we're dealing with 3d space But obviously this is going to be much higher dimensionality mu is actually the dimension the same dimensionality as our our image so basically, uh, it's gonna be um, well h Times w times three so that's the dimension of our image and similarly for for sigma But just for the sake of understanding what's going on with this shift here. So the the sigma g part I'm gonna kind of assume a coordinate system like this and Like assume we have a like a Gaussian distribution. The mean is here. So this is the mu. Okay and uh, we have some we basically have some some variance So under normal circumstances if we just have like an unconditional diffusion model We would basically sample a point some from from this distribution somewhere from from this distribution Like maybe maybe this would be our next uh, this would be our x of t. Okay, but Because we we additionally have this information coming from from the classifier What it does is the following so now you take this mu here Let me change the color Okay, so so you take the mu and you pass that as an image to like our classifier So we're gonna have something like this. So we have an image That's gonna be the same as this one here. So that's gonna be mu We pass that inside of our classifier This is our classifier And out comes uh, well for image net they train obviously it has like thousand classes And We basically learn how we should tweak this image So the classifier tells us how we should tweak the image such that a particular class is maximized maybe this one Okay, and with that information with that gradient, so so those blue dots. I'm just gonna denote that as g With that information which is of the same dimensionality as the mu we're going to shift this mu So we're going to shift this maybe here And we're going to end up here. So our new distribution Is going to be actually here instead of here. 
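In code, the mean shift this intuition is describing boils down to something like the following sketch. This is a simplified reconstruction rather than the repo's exact functions; the classifier(x, t) callable and the classifier_scale argument (the gradient scale s discussed a bit further down) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cond_fn(classifier, x_t, t, y, classifier_scale=1.0):
    # Gradient of log p(y | x_t) with respect to the noisy image, evaluated at x_t.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = F.log_softmax(classifier(x_in, t), dim=-1)
        selected = log_probs[torch.arange(len(x_in)), y]        # log-prob of the target class per image
        grad = torch.autograd.grad(selected.sum(), x_in)[0]
    return classifier_scale * grad                               # this is s * g

def condition_mean(mean, variance, gradient):
    # Shift the unconditional reverse-process mean by Sigma * (s * g).
    return mean + variance * gradient
```

Scaling the gradient by s like this is, up to a constant, the same as guiding with p(y|x) raised to the power s, which is the sharpening trick discussed just below.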
Okay, and now we have and we have the same variance So i'm going to kind of change the color because otherwise you will not be able to see anything So we shifted our original color And we're not going to be able to see anything so we shifted our original distribution and now we are somewhere here Okay, it's kind of messy, but you get the point so this Shift vector we just saw there. So that's this blue one is just this expression here. So Sigma big sigma times g And this is the like the hand way we explanation what's going on So the classifier knows where you should kind of move this image In which direction should you move this image such such that you maximize the class of interest and by just doing that by just kind of moving the The mean in the direction of the gradient of the classifier You basically increase the probability of the class label here and that's that's basically everything you need to understand. Okay Uh, hopefully that's clear enough. That's how we can combine again The classifier here and the unconditional diffusion model here to form images from a particular class Again, i'm kind of lying here because as I said, they are still class conditioning this model here But using the classifier we additionally complementary, uh improve, uh the uh, well this class conditioned, uh generation Cool, uh, that's it. Let me briefly, um Explain a couple more things. Um, aside from using the the the classifier The classifier gradient they additionally scale it and this is just empirically they found out that this gives them much better results So here they mentioned our classifier architecture is simply the down sampling trunk of the unit model With an attention pool at the 8 times 8 layer to produce the final output We train these classifiers on the same noising distribution as the corresponding diffusion model Then they say in initial experiments with unconditional image map models We found it necessary to scale the classifier gradients by constant factor larger than one So to understand the effect of the scaling classifier gradients note that if we scale by s So this is the what we denoted as g previously. This is these are the gradients of our classifier So if we do this you can show quite easily that that's equivalent to this expression here So well, basically, um, you can convince yourself This is true because log of a product is log of sums and because this is delta x We don't care about the this part here and then we would end up with the same expression Basically, these two are equivalent Okay, and you can see that we are basically just raising this to the power of s which which means we're going to sharpen this distribution So let me continue reading what they said here So so z is an arbitrary constant and as a result the conditioning process is still theoretically grounded in our renormalized classifier distribution proportional to P raised to the power of s when s is bigger than one. This is distribution becomes sharper than The original distribution. Okay in the above derivations. 
We assume that the underlying diffusion model was unconditional modeling P of x it is also possible to train conditional diffusion models And use classifier guidance in the exact same way table 4 shows that the sample quality of both unconditional and conditional models can be greatly improved by classifier guidance So this is the the thing I was mentioning we we can both have conditional model plus the classifier guidance because they are complementary We see that with a high enough scale the guided Unconditional model can get quite close to the fid of an unguided conditional model Although training directly with the class label still helps finally guiding a conditional model further improves fid So that's what I mentioned multiple times using the class conditional model plus using the guiding is what gave them the best Result on various metrics such as fid is inception score Precision recall, etc, etc. Okay Um quickly to let me just quickly address this because we're going to see it in the code later on So how they train the so why do they need to train the model? Uh, the the the classifier on on noisy images because they say here we train these classifiers on the same Noising distribution as the corresponding diffusion model Well Because we're going to be using it on images on x of t so x of t which can be super noisy And if we just train our classifier on normal images normal when I say normal I mean images without any noise Then the model will probably struggle because these images would be out of distribution for the model And then it would not give us correct class for a noisy version of the original image because of that we want to train model For all so basically on the whole process. So this is our diffusion Like for example, if we want to train the model Our diffusion, uh, like forward process something like this. So this is the step zero. This is the step Let's say big t minus one We want to train on all of these images So we'll be randomly sampling images Um, and uh making sure that the model knows how to predict the correct class And uh, because of that this is kind of bothersome and some of the later papers such as glide Introduce this concept of a classifier free guidance. I'm gonna, um, well tell you more about that one Uh in one of the future videos. Okay, let's go back here Uh here we can see that the trade-off we mentioned in the beginning of the video And that's that we want to trade out between diversity and quality of the sample They show here using various metrics that that's indeed the case So you can see here as we increase this gradient scale. So that's the hyper parameter s Uh, you can see that the recall is uh falling down which means that the diversity the the the mode coverage Of our generative model is falling down But the precision which basically um, well, uh, it's a proxy for the image quality Goes up and you can see similarly here is goes up which means that the image quality is increasing And f id is is also going up which means that the diversity is is uh, basically, uh, diminishing Okay, that's pretty much it. I'm gonna skip all of this and i'm just gonna briefly, uh Uh mention a couple sentences here and then we're gonna jump into the code Our proposed classifier guidance technique is currently limited to labeled datasets. 
So keep that in mind We do have to train it on on on labeled, uh images um Well, the effectiveness of classifier guidance demonstrates that we can obtain powerful generative models from the gradients of a classification function This could be used to condition pre-trained models in a plethora of ways for example by conditioning an image generator with a text caption Using a noisy version of clip and because of this sentence After reading this paper You could have assumed that open ai would start working on something like glide or dali models and they did indeed And finally it also suggests that large unlabeled datasets could be leveraged in the future to pre-train powerful diffusion models that can later be improved By using a classifier with desirable properties. Okay again You could have if you read this paper You could have anticipated open ai is going to try and just scale up scale up diffusion models Uh such as what they've done with with dali and when I say dali I basically mean dali Two because dali one didn't even use diffusion models. Okay, let's jump to the code now um, I I did expect this Uh paper overview part to be a bit shorter But like do let me know whether you find this type of combination useful or not Just feel free to comment down below. What do you think about this format? In any case, let's now Start working on on the classifier train script. So this is how we trained This is how we train this this noisy classifier Uh that I just mentioned so let me start and walk you through the actual script for training it So i'm gonna obviously again, uh abstract a lot of the details because otherwise this would not be tractable uh for for a video That has an ambition to be less than an hour long. Uh, so just some Setting up some distributed stuff, which we don't care about logging blah blah blah Um this part part we do care about because I just want to briefly show you basically how the classifier is constructed We did sign the paper that it's simply the first part of the unit model up to the bottleneck part and then we just stitch Attention pooling and we ditch the the second part of the unit model. That's how we that's how they set up the uh, the classifier The architecture so let's kind of go inside here. So this just kind of gives you some Um default arguments i'm gonna try and zoom in a little bit in case you don't see this I'm gonna close this part. Okay, so let's step inside this Line of code and let's see how the classifier works Okay, so here's the classifier Image size 128 128 blah blah. Those are just some uh hyper parameters obviously can be different Uh, we are not using fp16 Uh, there is some width depth Various hyper parameters, uh, we saw this is going to be um, well basically this is going to enable Uh the attention to be at resolutions 32 times 32 in the latent space as well as 16 by 16 and 8 by 8 Um, this is the scale shift norm that I showed you in the previous video uh, this basically, um enables us to Integrate to incorporate the the class as well as the time step information into the model um In any case i'm going to scheme over those details Um, let me just kind of go through here because we have image size 128 We have some specification for the number of channels in the unit And finally, we just uh transform, uh this from the resolution. 
So so remember this is like, um, Basically 6 32 16 and 8 after we step through this, uh for loop We're basically and end up with the same information But just like and now it it tells us how many down sampling steps you need to take before you apply the attention Uh, we saw that in the previous video as well. So i'm gonna just kind of skip over this Blah blah blah and here here it comes. So here's the encoder unit model Basically, let me see what interesting is here. So we have the output number of channels is hard coded to 2000 because this is trained on on on image net Uh, i'm gonna skip over all of those not that important Uh classifier pool is going to be attention. We're gonna see that in a second. So let me Go to the construction constructor here So i'm gonna again just scheme over they're just kind of setting all of these arguments to um local variables here Uh, nothing nothing fancy there Uh, we construct the uh time embedding. This is the the mlp that we use to transform The sinusoidal embeddings for our time step time steps um Nothing, nothing interesting really we can just kind of scheme over all of this. So Um, if we recall the from the previous video we had the same structure So we initially just have number of input block. We just have the conv, uh, basically 2d Uh, so the convolution as the first processing layer then we have this uh part where we uh add various Red residual residual blocks and attention blocks and then we have the middle block And the main difference is we don't have here the output blocks. We just have as you can see here We just have the pooling operator. So let me go To there obviously these details are not that important. The only important part lies in the residual block where you can see how the um How the information such as time steps and class information is incorporated. I'm going to show you that in a second I'm going to kind of skip over all of this again. You can see here that uh Sometimes we're going to add attention blocks and that's going to happen precisely for the resolutions 32 16 and 8 And there are these wrapper functions which just enable us to uh, sometimes pass the temporal information Sometimes just pass the the feature vectors. Okay, in any case i'm going to skip all of this because i'm just um There's too much information there and it's not that vital for your understanding It's just like the architectural details of the unit model. I'm just going to show you this one So here is the let me just kind of uh remind you from the previous video uh There is this part where we embed so here we embed the uh temporal information and then because this is set to true Use scale shift norm what we're gonna do is we're gonna take the temporal information here as in a form of a scale and shift And we are going to combine that this is how we merge the temporal information Uh into into the uh features here and this is just as a reminder For the what happened in the previous video. Let me just see whether they have class information here Um doesn't seem to be the case so embedding They passed embedding here No, I don't think this is going to have well, it doesn't make any sense to have uh like a class Um conditioning in a classifier. 
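Since this scale-shift ("adaptive group norm") merging comes up again in the res-block walkthrough below, here is a minimal sketch of what it computes, assuming a 32-group GroupNorm and a channel count divisible by 32; the class name and layer sizes are illustrative, not the repo's exact module.

```python
import torch
import torch.nn as nn

class ScaleShiftBlock(nn.Module):
    def __init__(self, channels, emb_dim):
        super().__init__()
        self.norm = nn.GroupNorm(32, channels)
        self.emb_layers = nn.Sequential(nn.SiLU(), nn.Linear(emb_dim, 2 * channels))

    def forward(self, h, emb):
        # h: image features (B, C, H, W); emb: timestep (+ class) embedding of shape (B, emb_dim)
        emb_out = self.emb_layers(emb)[..., None, None]   # (B, 2C, 1, 1) so it broadcasts over H and W
        scale, shift = torch.chunk(emb_out, 2, dim=1)     # per-channel scale and shift
        return self.norm(h) * (1 + scale) + shift         # merge the embedding into the image features
```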
So this is going to make sense for the diffusion model and in any case Okay, let's continue here so we Uh, we have the the unit part and now we just end up uh with this um Sequential model that has some normalization some activation function and finally this attention pooling 2d 2d this is basically just going to take the output features from the unit model And using a tension mechanism is going to form a resultant feature vector which we're going to then use to predict the the logits and you can see that the output channels here is Thousands is going to give us a thousand Thousand dimensional vector which is going to be just the logits that come out of this classifier and that's it guys Um, I kind of schemed a lot of details because they are not really that important. So Uh, you just need to have like a rough mental model how the architecture looks like it has this this basically encoder type of a structure it has this attention pooling at the end and That's pretty much it Okay So I skipped the the gaussian diffusion model here because we saw that in the previous video I just want to focus on the differences compared to the previous video Uh, so here we just uh shift the model onto the gpu Uh, we create the uniform sampler here. So this one is just going to help us sample from the forward Process we're just gonna pick the random uh time step using the uniform distribution Obviously you can kind of bias this and and form various different distributions One of those which I showed you in the previous video was depending on the loss You're gonna focus more on those time steps where we have higher loss, which makes sense intuitively Um, okay, so we can skip over all of these parts. We don't want to resume from the checkpoint We just have some mixed precision trainer wrapper here. We don't care about that part. We have the distributed Data parallel object that's for distributed training, but we don't care about it either I'm gonna skip over all of that. We have some data loading Basically, we're gonna have batch size 256 image size 120 128 Uh, we have the data directory which I downloaded before the video started and that's it We just create the training and validation datasets I'm not gonna dig into the actual details of how that works because it's just like a side thing Um, so logging, addmw optimizer. We're not resuming from a checkpoint blah blah blah And here this this is going to be the main function to understand how to classify a string So we have forward backward log. I've added a breakpoint to that function. We're gonna step into it a bit later So here we defined it And now here we have we step for a number of iterations. Okay, so we're gonna do this for a number of steps So as specified by the number of iterations there I've just set it to 30 because I want to have a short training here just for for Debugging purposes in any case just some logging nothing interesting I can skip all of this and this is where the whole magic of the classifier training happens after that. We just do the Optimization step and we do the same logic for on the validation. We just do the forward backward on the validation Set and finally we just do some logging here So basically this is where the whole magic is in so the forward backward log function is everything that we care about So let's step into it So we first gonna grab like a batch of images and associated labels. 
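For the attention-pooling head mentioned above, here is a rough, simplified stand-in (not the repo's actual AttentionPool2d): a global-average query attends over the spatial positions of the encoder features, and a linear layer maps the pooled vector to the ImageNet class logits. All layer choices here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SimpleAttentionPool2d(nn.Module):
    def __init__(self, channels, num_classes=1000, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.out = nn.Linear(channels, num_classes)

    def forward(self, h):
        # h: (B, C, H, W) features from the encoder half of the UNet
        tokens = h.flatten(2).transpose(1, 2)            # (B, H*W, C)
        query = tokens.mean(dim=1, keepdim=True)         # (B, 1, C) global-average query
        pooled, _ = self.attn(query, tokens, tokens)     # attend over the spatial positions
        return self.out(pooled.squeeze(1))               # (B, num_classes) logits
```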
Let me show you the shapes As always that's a very important thing to do So let's see what batch looks like so if I do batch shape So we can see here Bad shape so we have 200 256 images RGB images of resolution 128 128 We have the labels as well. So after I do this, so basically this extra here Just contains this y And let's see the shape So shape is 256. Obviously we have for each of the images we have associated label If I were to step over this And print labels We can see there is a bunch of labels here because i'm using cipher 10 even though this is This is image net training That's why we only see labels from zero through nine, but we don't care about this detail for the Purpose of understanding how the classified training looks like. Okay, so we we put the as you can see here We put the labels onto our target device, which is gpu in my case same with images here And now what we do is we do we take the uniform sampler and we just sample 256 time steps. That's it. So we now have let's see if I were to print t you can see Uh, like it's a it's a pile of numbers Of the following length. Let me just print this so we have 256 time steps for each of the images in the batch and it's just randomly sampled between uh zero and And and 999 after that we just do the q sample So we take the image we take the time step and we just do the noising process So we create from the x0 image we create we create the x t image. I'm gonna quickly remind you I showed you this in the previous video. So i'm going to just quickly show you how this thing looks like So here it is. Uh, there is literally a formula which I showed you in the previous paper, which is fairly easy you just take uh these Uh specific constants here you multiply it with the x start x start is the x0 the original image And we just add noise According to this square blah blah. Again, there's some content constant. Let me try to find a formula. It's going to be easier Okay, guys, here is the formula from the previous video. So here is how we sample So given x0 the original image, here's how we get x of t We can just basically sample from this distribution here or equivalently. We just compute this expression here So you can see here we multiply by some constant constant We multiply the x0 or x start as we saw in the code We just add up on on top of that. We add this constant times epsilon, which is just your noise from your normal Distribution now, let's go back to the code. Let me see whether you can you can see resemblance between this formula here And between this thing here. So again constant times x start We add on top of that some constant times noise and that's how we get xt and that's it That's because that's why i'm going to just skip over this function and we have finally we now have a batch of Noisy xt samples and that's what we're going to use to train our classifier Uh on and again the reason we are doing this is because otherwise if we didn't have these noisy images if we just trained on the original images then the classifier wouldn't be able to To estimate the gradients correctly for for an arbitrary noisy image and that's why we want to train it on noisy images Okay So here now we just um instead of training on the whole batch This is just for for for memory optimization stuff. 
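Putting the pieces from this walkthrough together, a single training step for the noise-aware classifier looks roughly like the sketch below. It assumes a classifier(x, t) callable and a precomputed alphas_cumprod schedule tensor; micro-batching and logging are left out.

```python
import torch
import torch.nn.functional as F

def classifier_training_step(classifier, x_start, labels, alphas_cumprod):
    # x_start: clean images (B, 3, H, W); labels: (B,); alphas_cumprod: (T,) cumulative alpha-bar schedule.
    B = x_start.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x_start.device)   # uniform timestep sampling
    noise = torch.randn_like(x_start)
    a_bar = alphas_cumprod.to(x_start.device)[t].view(B, 1, 1, 1)
    x_t = a_bar.sqrt() * x_start + (1.0 - a_bar).sqrt() * noise              # q_sample: the formula shown above
    logits = classifier(x_t, t)                                              # forward pass on the noisy image
    return F.cross_entropy(logits, labels)                                   # standard classification loss
```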
Uh, we're gonna kind of chunk the batch size That's 256 into smaller batches so-called micro batches Okay, and here it is and now we just call we just pass so we do a for pass through our Our classifier again, the the classifier is basically just a unit model Uh, well basically the first portion of the unit model Uh, and first of all, let me just show you the shapes here. I think it's gonna be four No, it's gonna be actually just one. Okay, so the micro batch size is one So we basically pass just a single image through the classifier as an optimization Thingy and we pass in the corresponding Like basically time step. So again, if this is like five That means this is going to be x5 and we pass that through the model and we get back to logits. Okay, let me uh step through there and here we end up in the forward pass of our model so that's the that's the That's the classifier and we do the time step embedding which is just going to embed it Like using the the sinusoidal embeddings and then we just pass it through the mlp. And so we ultimately end up with Uh something that's going to be of shape one 512. Yeah, okay So and then we just do forward pass through the input blocks through the middle blocks and finally we pass through the attention Uh pooling layer and that's it depending on the module some of them are going to encode the temporal Uh, the time step information into our features some of the other modules will not do that such as the conf2d layer But in any case, that's pretty much it so I can now get back to the classifier training Uh, we can skip all the way to the loss here. So let's Skip skip over and here we are. So we got some logits and now basically we just do a cross entropy For the label we have and that's it. That's our loss. We just do the cross entropy loss Which is basically just a standard classifier training loss Okay, and now we just collect some information such as the accuracy at one accuracy at five We do some logging blah blah blah. We do the mean on top of the loss And basically here because we have micro batches What they do is in the in the zeroth micro batch We just clear up the gradients and then we keep on accumulating the gradients for each of the micro batches And we don't do the update until we do we go through all of the micro batches inside of uh, Instead of inside of a single mini batch and that's just again, uh minor optimization detail But we saw everything we needed to know basically that the main thing is that we have this Q-samp we call this function here, which makes us train not on the original images, but instead we train on x Uh t images so noisy versions of the images. Um Okay, guys, that's it. Uh just for um being detailed here, I want to show you the part with merging the um The temporal information I want to show you that part here So res block i'm going to enter there and i'm going to set i'm going to set a breakpoint inside of the forward function here Uh, just so that I can show you how we merge the information from the embedding here Okay, so i'm going to set a breakpoint here And let's now step let's do another pass through another micro batch and here we are So we have the embedding and we are gonna hit The res block at one point of time and here we are. So here is the um part where we merge the temporal information I want to show you this once more Again, it's fairly simple. We just do some additional processing. 
So let's see what these In m layers are so again, it's just like a like a basically a linear layer and an activation function Nothing fancy there And we end up with embedding Here of the shape 1 to 56 and now we just as you can see here We start adding dummy dimensions such that we can merge this temporal information into the Um image features, okay, so we can see here because this is set to true we enter this branch here. We have some layers Here and we split them into two parts. The first part is the normalization layer and then the rest And you can see this out there. I think it's just like a sequential. Yeah, you can see here how it's constructed Um, not that important. So the important part is here. We take our embedding out So this vector and we split it into two parts one is scale one is shift And I guess it's going to be 128 is going to be the shape here So one 128 and we added the dummy dimensions And so now if we do this we're going to have some broadcasting going on And we are going to going to merge them with the image features and this image features are going to be of shape As you can see here 128 128 128 and that's it guys I'm going to briefly show you how this looks like inside of my one note So again fairly simple thing. We have so we have something that came out of the So we have a volume that came out of the unit model. That's going to look like something like this So here is the our volume So those are the image features From the latent space of our encoder And we saw that that's going to be like this is 128 Number of channels and we saw that this is also 128 so the height and the width all of this is 128 Whereas on the other hand we just have this is our vector. This is our temporal embedding So this is how it looks like it's only going to be 128 here But it's only one one here and then what broadcasting is going to do Is basically just copy paste this multiple times so that we end up with something like this So that we end up with multiple copies here here here Etc so we're just going to copy paste it here and then after we do that we can freely add it up So we can freely do addition on top of the image features and that's how we incorporate the temporal information Cool. Let's go back here I'm going to stop the execution of the classifier train script. We saw how to train the classifier Do let me know whether you found this explanation useful whether I can improve something any feedback is always appreciated And also if you if you like this video share it out with your friends Let's now focus on the sampling function. So I have prepared Some arguments again. I just took the arguments from the read me and I created I just kind of extracted them here So here are the arguments i'm using but you can find all of that You can find all of that basically in the read me file itself So i'm going to go back here and i'm going to now focus on the sampling function. Okay guys, let's run this script and Again, i'm going to scheme a lot of the i'm going to skip explaining a lot of details Which are irrelevant to understanding the actual Well, uh got in this particular case We want to see how the classifier guidance works like and that's what i'm going to focus on So what this function is going to return is the model which is going to be the unit model That's going to predict the epsilon. 
So the the noise And diffusion just contains bunch of those constants necessary to do the diffusion process So i'm going to skip over all of that We're just going to load a certain checkpoint We like as I said, I have a model path here So, let me just see whether I can show you yeah, so I went ahead and downloaded the model Into this models directory inside of my root directory and Everything is already explained in the read me so you don't have to worry about it So we just push the model to the gpu We convert it into fp16 Again, just some Optimization details. I'm not going to focus on that We create the classifier. So that's again our encoder The the the half portion of the unit You can see here and we already Schemed through that code previously in the in the training script So i'm going to just ignore all of that and uh, whoops Well, obviously not so i'll have to just ask Let me just kind of do the Disable all the points all the breakpoints and then i'm going to click f5 and here we are Now i'm going to enable all of the breakpoints again And we just load the weights We load the state dictionary from the classifier path So I went ahead and downloaded this this model again And I put it inside of the models directory. That's it Nothing smart there We push the model the classifier on the to the gpu as well And we set it inside into the eval mode That's it. Now this is going to be the most important part This is where the gradients are calculated for the classifier I'm going to analyze that a bit later once we hit once we enter that function So here is where it's defined later. We're going to see how it basically works So that's it. We just define this function as well The model function and Now we're going to do a bunch of iterations until we create the number of samples that we can specify And you can see here I specified 100 but that's just arbitrary numbers so not that important Okay, so we create random numbers so random classes and depending on the batch size We have like four random classes being generated here So let me show you what those are If I go to debug console it's going to be a bunch of random classes If I go to debug console if I print that we can see we have four random classes And we don't I don't know directly what these correspond to But like let's say these are just classes 33 918 etc Okay, we don't care about the actual semantics behind the numbers We are going to store those the class information inside of this model Like keyword arguments dictionary So I'm gonna go there and because we're not using this ddim again ddim is just just it's just like This optimize it like this sampling method whereby if you have less than 50 diffusion steps They showed in that paper that they achieve better results compared to the original ddpm paper So only in the mode where you only sample for less than 50 steps roughly then ddim makes sense But here I think we have like I think we set like thousand steps so we're going to use the p sample loop instead of the ddim sample loop I'm not going to focus on the ddim part because there is a lot of formulas It's hard to explain the logic but like on the high level just treating it as a black box When you would want to use it is when you have less than 50 diffusion steps Okay, that's it Here is where the whole magic happens After that there's just some normalization going on permutation blah blah blah And we just collect the images here and finally yeah basically just a bunch of boilerplate code This is where the main logic is in so we're 
So we're going to step inside of here. We basically specify the target shape here, so our images are going to be 64 by 64; later on they have the upsampling modules, which we're also going to ignore. Let's focus on this part here. We pass in the model keyword arguments that hold the class information, and we pass these special functions: the cond_fn, which we defined earlier, is going to calculate the gradients; we'll see how that comes into play in a second.

So here is the p_sample_loop. What it does is just keep on generating samples; I'm going to jump directly inside this function because everything else is basically boilerplate code. Here we generate the random noise image: we have the desired shape, which is four, because we have four images in the batch, followed by the resolution of our target image, and we just sample from a normal distribution. Then we generate the indices. Depending on the number of time steps, and we have 250 here, we generate these indices, and remember, that's because we want to start from the completely noisy image, so from index 249, and work our way all the way down to zero, where we end up with completely denoised images from the underlying data distribution we learned during training; this is now the sampling part. Okay, so indices: let's see what t is, it should be 249 as I said. And here is where the whole magic happens, in this p_sample step. After 250 steps we end up with our final image. That's one of the drawbacks of current diffusion models: we need much more compute compared to, say, GANs, where you just need a single forward pass and end up with a very realistic, very crispy image.

So let's step inside the p_sample function. The main function here is going to be this p_mean_variance, which is basically the reverse process that we learned. After we calculate the out variable here, we're going to generate noise again, and then we do the conditioning, which I'll get to; this is the important part. This is where we do the conditioning, where we shift the mean via the sigma-times-gradients expression, and then we form the image here. So this is going to contain x_{t-1} if we condition on x_t, or alternatively, if we condition on x_{t+1}, this is going to be x_t; in any case it's a single step closer to the final image.

Okay, let's jump into p_mean_variance. Here it is; this is where we calculate, as I said, our reverse process, and let's quickly skim over this part. I'm going to set a breakpoint here; we enter here, and now we're going to pass a couple of things. First we pass the noise we just sampled, so that's going to be four images, i.e. shape (4, 3, 64, 64); we pass the time steps, so those are going to be 249, 249, 249, 249, one for each image in the batch; and we additionally pass the class information, because if you recall from the paper explanation, we are not only going to use the classifier to guide, we are also going to use the class information to condition the model. Those two do a similar thing but are complementary, so don't be confused by that. Okay, let's step in there.
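Stripped of the boilerplate, the sampling loop I just walked through boils down to starting from pure noise and walking the timestep indices backwards. A minimal sketch, assuming `p_sample(x, t)` returns a dict with the next, slightly less noisy image under the key `"sample"` (illustrative names, not the repo's exact code):

```python
import torch

@torch.no_grad()
def p_sample_loop_sketch(p_sample, shape=(4, 3, 64, 64), num_timesteps=250, device="cuda"):
    img = torch.randn(*shape, device=device)        # x_T: pure Gaussian noise
    indices = list(range(num_timesteps))[::-1]      # 249, 248, ..., 0
    for i in indices:
        t = torch.tensor([i] * shape[0], device=device)
        out = p_sample(img, t)                      # one reverse (denoising) step
        img = out["sample"]                         # x_{t-1}, one step closer to the image
    return img                                      # fully denoised samples
```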
We hit this model_fn that we saw previously, and because class conditioning is set to true, we pass x, so the noisy images, the time steps, and the conditioning information, all of that. Let's now quickly step into the UNet model's forward pass and see how the class information is incorporated with the image features; hint, hint, it's pretty much the same thing as with the temporal information, as we'll see in a second.

So here it is: we have the time steps, we embed those, so sinusoidal embeddings followed by some learned layers, and we end up with a vector whose shape, let's check, is going to be (4, 768), because we're dealing with batches of four images. Now, because we have the classes set here, the only thing we do with our classes (here they are, four classes) is embed them using this label embedding. Oh, I hate these warnings; I should have toggled them off, there is some config line you can set in launch.json if you don't want to see those, but I forgot to do that. Let's see what this label embedding does: you can see it's just a simple embedding layer with some number of dimensions for each of the classes, nothing fancy there. Let's see what it gives us: if I print the shape, you can see it transforms our four scalars into four vectors of dimensionality 768, and then we simply add that on top of the temporal embedding. Everything else proceeds as usual: we keep incorporating that information, so both the class information and the temporal information are merged directly with the image features using the scale-and-shift logic we saw in the ResBlock. That's it.

I'm going to skip over all of this, we don't care about it. Now let's continue: we ended up with the model outputs, and if sigma is learned this should be (4, 6, ...), so six channels, because we predict the epsilon, that's the noise, and we also predict the sigma, that's the covariance. We saw all of this logic in the previous video, so I'm going to skim through it. We split the model outputs into two parts, the epsilon and the variance; well, actually it's not the variance, it's the v vector, let me show you the formula in a second. If you recall from the previous video, this is the formula we used: we predict the v's, and then we form the variance by computing this expression, which is the calculation we just saw. We calculate all of those, we ultimately get the variance here, and now we need to predict the mean. Here is how we predict the mean; I'm going to skim through this: we first predict x_start, so that's x_0, and then we use this posterior mean-variance function to get the model mean. All of this is pretty much the same as in the previous video. So we have the mean, we have the variance, and we finally return all of those variables: the mean, the variance, the model log variance, etc.

By the way, let me quickly show you what those are. Here we are back in the OneNote explanation I showed you at the beginning of the video.
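Two of the pieces just walked through can be summarized in a few lines: the class label is embedded and simply added to the time embedding, and, when sigma is learned, the model output is split into the predicted noise and a vector v that interpolates between the two fixed log-variance extremes. A simplified sketch with my own names, following the improved-DDPM formulation rather than the repo verbatim:

```python
import torch
import torch.nn as nn

# Class conditioning: an nn.Embedding maps each class id to a 768-dim vector
# which is added on top of the (sinusoidal + MLP) time embedding.
label_emb = nn.Embedding(1000, 768)
time_emb = torch.randn(4, 768)             # stands in for the processed timestep embedding
y = torch.tensor([33, 918, 5, 640])        # four target classes
emb = time_emb + label_emb(y)              # (4, 768), injected into every ResBlock

def split_eps_and_log_variance(model_output, log_beta, log_beta_tilde):
    """model_output: (B, 6, H, W) when sigma is learned -> epsilon and v.
    The log-variance is the interpolation v*log(beta_t) + (1-v)*log(beta~_t)."""
    eps, v = torch.split(model_output, model_output.shape[1] // 2, dim=1)
    frac = (v + 1) / 2                     # map v from [-1, 1] to [0, 1]
    log_variance = frac * log_beta + (1 - frac) * log_beta_tilde
    return eps, log_variance
```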
So what we've currently generated are these two things: we have the mean and we have the sigma, and now we need to do the shift part, and then we can sample our image. That's it, fairly simple. Okay, let me go back here; we return all of these variables. We sample from a normal distribution, and we have this masking stuff going on because, if you recall, there is a discrepancy between the t-equals-zero time step and all of the other steps; things look a bit different there, but that's just a minor detail.

Again, this is where the whole magic happens: this is where the shifting of the mean happens, using the sigma and the gradients of our classifier. Also, just a note, there is a bug here. I submitted an issue and the authors from OpenAI basically confirmed that it's indeed an issue; I'm going to explain a bit later what it exactly is, but the funny thing is that it did not even matter. That's the thing with training machine learning models: sometimes you have a bug but the models are forgiving, and you'll still get something working, maybe a bit suboptimal, with no obvious way to figure out there is a bug. It's not like traditional software systems, where if something does not work, something crashes and the program will not run. ML is a bit different.

Okay, so here we form a new mean, and the new mean is formed by taking our old mean and adding the sigma times the gradient. Again, that's the whole idea. We pass the cond function, which we defined in the main script, and we pass the mean and the covariance we just calculated above. So this basically contains the x_t, this is the x_{t+1}, and this is the time step. In any case, let's step inside this function and see what's going on. First we calculate the gradient, and then, as we recall from the equation, on top of our old mean we add the variance we calculated times the gradient. That's it, that's easy. Now let's look at this part here; let's step into this function.

Okay, so the first thing we do is take our noisy image, detach it from the computational graph, and set requires_grad to True so that we can calculate the gradients with respect to that x, because that's what we ultimately care about: we want the gradients of x with respect to the log probability of our target class from the classifier. We pass it through the classifier and get the logits. Unfortunately I stepped inside the classifier; we don't really care about it, let's just continue. So we have the logits, and now we form the log softmax, so we have the log probs. First, let's see what the dimensionality of the log probs is: if we print the shape we get four, because we have four images, and a thousand entries, because we're dealing with ImageNet. What we do here is extract the target ones. So y, remember, holds the targets, those are the ground-truth classes, and we use those to select only the corresponding log probs. If I print this, these are the selected values for the target classes.
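Here is a compact, hedged sketch of that gradient computation and of how it shifts the mean; it mirrors the logic I just described rather than reproducing the repo's code verbatim:

```python
import torch
import torch.nn.functional as F

def cond_fn_sketch(classifier, x, t, y, classifier_scale=1.0):
    """Gradient of log p(y | x_t) with respect to the noisy image x, scaled by s."""
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)       # track grads w.r.t. the image
        logits = classifier(x_in, t)                  # (B, 1000) for ImageNet
        log_probs = F.log_softmax(logits, dim=-1)
        selected = log_probs[range(len(logits)), y]   # log-prob of each image's target class
        # Summing is fine: each x_i only influences its own log-prob, so the gradient
        # of the sum decomposes into one gradient per image in the batch.
        grad = torch.autograd.grad(selected.sum(), x_in)[0]
    return classifier_scale * grad

def condition_mean_sketch(mean, variance, gradient):
    """New mean = old mean + Sigma * (s * grad log p(y | x))."""
    return mean.float() + variance * gradient.float()
```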
So these are the log probabilities we need to maximize if we want an image from that particular class, if that makes sense. And what we do here, and this is the part that kind of confused me, is sum up all of these different log probabilities (not logits, the log probs) and then find the gradient with respect to the input. The input is again a batch, so it's going to be (4, 3, 64, 64), and we just multiply by this classifier scale I mentioned, which is denoted as s in the paper. Let me show you that; let's go back to the paper and find that part, I think it's down here. Here is the part: we have the s, which is on the right-hand side in the code, and we have the gradients here. So, going back to the code, this is the s and this is the gradient we are computing.

The reason this works is that, if you do a sum, our loss here is going to be L1 (L1 is just my shorthand for the log probability of the first image) plus L2 plus L3 plus L4. If we take the gradient with respect to x: x1, the first image from the batch, only influences the first loss; x2, the second image from the batch, only influences the second one; and so on, because each image only influences the log prob that corresponds to its index. Because of that, if we take the gradient of the sum, the first image's gradient will be dL1/dx1, the second image's will be dL2/dx2, etc. So we return a batch of gradients where each image's slice is the gradient with respect to its corresponding loss. If I go back here, we end up with a gradient of the same shape as the images, so (4, 3, 64, 64), and that's pretty much it.

Okay, so now we just do the logic we saw in the paper, and guys, that's pretty much it, we're done. Using the new mean and the log variance, we sample by adding noise scaled by the standard deviation (the exponentiated half log variance), and that's how we get the next image in the reverse process. This just repeats on and on, but this was the gist of the paper.

So where is the bug I mentioned? I'm going to show you the issue in a second, but basically what happens here is that they pass x_{t+1} and t+1; they should have passed the out mean instead of x_{t+1}, because this is x_t and that is x_{t+1}. It might be a bit subtle, so let me try to explain it. As you can see here, they pass x_{t+1}; if I go back, you can see that they then call this cond function with x_{t+1} and t+1, so we end up using x_{t+1} and t+1 here. That means we pass x_{t+1} and t+1 and get the logits, but if you recall from the actual paper (let me find the expression), we want to condition on x_t and not on x_{t+1}. Otherwise, the gradients we get here are again with respect to x_{t+1}, not x_t, and all of that kind of messes up the logic I just explained.
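That last step, sampling x_{t-1} from the shifted Gaussian with no noise added at the very last timestep, can be sketched roughly like this (my own names, a simplification of what the code does):

```python
import torch

def sample_from_shifted_gaussian(new_mean, log_variance, t):
    noise = torch.randn_like(new_mean)
    # 1 for t > 0, 0 at the final step, broadcast over the (C, H, W) dims
    nonzero_mask = (t != 0).float().view(-1, *([1] * (new_mean.ndim - 1)))
    return new_mean + nonzero_mask * torch.exp(0.5 * log_variance) * noise
```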
So mu is the mean for x_t, and we want to use that same mu and pass it into the classifier, not the previous image. Despite all of this, it's very subtle; I don't even know how I noticed it, I was fairly pedantic stepping through this code. They get pretty much the same results either way; they didn't even notice it at first, they noticed it post hoc, after the paper was published or something. I'm going to show you the issue in a second.

So here is the issue I opened, I think a couple of days ago; I think it's closed already. Here's my issue: as you can see, I thought I had found two bugs. The first one: shouldn't we pass the out mean, so x_t, instead of x_{t+1} here, and similarly the smaller time step instead of t? And then I linked to the particular line of code, which is the same line I showed you, so that's here. One of the authors replied: yes, this is indeed a slight bug which we noticed shortly after releasing our work; however, we did try ablating using the correct formula and found that it did not noticeably change results. Again, that's the tricky part about training machine learning models. The second part, where I was confused by the summation of those losses, turned out to make a lot of sense; it was just a minor mistake on my side. In any case, I'll link this issue if you want to dig deeper into it; I'll have it down in the video description.

So guys, that's pretty much it. I showed you what the paper introduced: the two main things are improving the UNet architecture and adding this classifier guidance technique. Then I showed you how to train the classifier: in the classifier train script we saw how to train it on noisy images, and then we used that very same classifier inside the classifier sample script to shift the mean and thus sample images from the class-conditioned diffusion model. There are a lot of details; hopefully you picked up something useful from this video. If you did, consider subscribing, also share the video out, and until next time. Bye bye.
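For completeness, a hedged sketch of what the corrected call discussed in the issue might look like: the classifier gradient is evaluated at the predicted (less noisy) mean, together with the matching smaller timestep, instead of at x_{t+1}. The names are illustrative; this is not the repo's actual fix, just the idea:

```python
def condition_mean_fixed_sketch(cond_fn, p_mean_var, t_smaller, model_kwargs=None):
    # Evaluate the classifier at the predicted mean (an estimate of x_t),
    # not at the noisier input x_{t+1} that produced it.
    gradient = cond_fn(p_mean_var["mean"], t_smaller, **(model_kwargs or {}))
    return p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float()
```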
[{"start": 0.0, "end": 4.44, "text": " What's up guys in this video I'm continuing with the machine learning coding series and"}, {"start": 4.76, "end": 11.06, "text": " I'll be covering a follow-up paper to the previous video. So in the last one we covered the improved"}, {"start": 11.8, "end": 16.36, "text": " DDPMs or denoising diffusion probabilistic models paper in this one"}, {"start": 16.36, "end": 22.02, "text": " I'm gonna cover the follow-up work called the diffusion models speed cans on image synthesis"}, {"start": 22.72, "end": 26.12, "text": " Paper and the associated code. So the idea will be to"}, {"start": 26.12, "end": 33.52, "text": " To again for me to give you a brief context like 10-15 minutes short overview of the most salient parts of the paper and"}, {"start": 34.4, "end": 38.0, "text": " After that we're gonna dig into the code and analyze two scripts"}, {"start": 38.0, "end": 43.6, "text": " So one is this classifier train and the other one is classifier sample. So basically I want to focus on"}, {"start": 44.120000000000005, "end": 46.56, "text": " The guiding so how do we guide?"}, {"start": 47.28, "end": 50.8, "text": " diffusion using pre trained noise classifiers"}, {"start": 51.480000000000004, "end": 54.64, "text": " So understand what that means in 10-15 minutes"}, {"start": 54.64, "end": 61.76, "text": " In any case what I what I've done is I cloned this repo so the guided diffusion from open AI"}, {"start": 62.64, "end": 68.68, "text": " They have like everything all the instructions you'll need I had to do some modifications again"}, {"start": 68.68, "end": 75.24000000000001, "text": " If I get enough enough votes down in the in the comments, I'll just push my my changes"}, {"start": 75.24000000000001, "end": 81.04, "text": " So I had to make minor changes to make this to work on my machine in any case you can you can basically download"}, {"start": 81.04, "end": 84.4, "text": " various checkpoints here and just follow the"}, {"start": 85.12, "end": 91.68, "text": " Instructions they they have in the readme which are super nice and yeah, you should be ready to go without further"}, {"start": 91.68, "end": 98.28, "text": " Do let's first start with the paper. 
Okay, so as I said, it's diffusion models beat cans on image synthesis and"}, {"start": 98.80000000000001, "end": 103.80000000000001, "text": " Two main innovations here are they improve the unit architecture from their previous work"}, {"start": 103.80000000000001, "end": 109.04, "text": " So from the improved DDP M's paper and the second improvement is they find a way how to"}, {"start": 109.04, "end": 116.04, "text": " condition an additionally condition the model so that they can achieve higher quality so they can trade off the"}, {"start": 116.56, "end": 123.28, "text": " Diversity versus the quality of a sample which is something that previous generative models such as GANs used to achieve"}, {"start": 123.76, "end": 125.76, "text": " using this truncation"}, {"start": 126.24000000000001, "end": 131.92000000000002, "text": " Truncation trick as well as some autoregressive models can simply modify the temperature and thus"}, {"start": 132.4, "end": 136.72, "text": " Control the like diversity versus the quality of the of the sample"}, {"start": 136.72, "end": 143.04, "text": " So we'll see in this paper how they achieve the same thing for for the class of generative models called diffusion models"}, {"start": 143.84, "end": 151.44, "text": " so I strongly suggest you you watch my previous video if you haven't because I'm gonna kind of assume some knowledge from that one"}, {"start": 152.07999999999998, "end": 156.4, "text": " But yeah, you can you can continue watching this one as well. Okay, so"}, {"start": 157.04, "end": 162.96, "text": " So they say here we hypothesize that the gap between diffusion models and GANs stems from at least two factors"}, {"start": 162.96, "end": 170.16, "text": " So first one that the model architecture is used by recent GAN literature have been heavily explored and refined"}, {"start": 170.8, "end": 175.60000000000002, "text": " And we saw the same thing happening with in the Comnex paper if you recall that one"}, {"start": 176.24, "end": 182.8, "text": " Where like basically they people showed like people focused a lot on transformers and there was a lot of innovation that happened"}, {"start": 183.28, "end": 186.4, "text": " on that side of the spectrum whereas the CNNs were"}, {"start": 186.4, "end": 192.8, "text": " not as as explored and then what Comnex showed that if you take some of the innovations that were"}, {"start": 193.36, "end": 198.96, "text": " Well invented in the context of transformers and you adopt that into CNNs you get much better models"}, {"start": 198.96, "end": 203.36, "text": " And so similarly here they took some ideas from other GAN papers"}, {"start": 203.36, "end": 208.8, "text": " And they took those ideas and then integrated them into the fusion models into the architecture and that"}, {"start": 209.36, "end": 211.92000000000002, "text": " Improved the results quite quite nicely"}, {"start": 211.92, "end": 218.0, "text": " Anyways, so that's the first thing the second thing is that GANs are able to trade off diversity for fidelity"}, {"start": 218.48, "end": 224.48, "text": " Producing high quality samples, but not covering the whole distribution. So that's the second thing I mentioned previously"}, {"start": 224.48, "end": 228.79999999999998, "text": " So they do that using this truncation trick. 
You can check out the the style GAN paper"}, {"start": 228.79999999999998, "end": 235.83999999999997, "text": " I think that's that's the one where I recall that they have that they're using the the truncation trick in any case"}, {"start": 237.04, "end": 239.04, "text": " Let me quickly show you the"}, {"start": 239.04, "end": 242.07999999999998, "text": " Dimensions across which they they kind of explored"}, {"start": 243.12, "end": 248.95999999999998, "text": " And how to improve the architecture so one is increasing depth versus width holding model size relatively constant"}, {"start": 249.51999999999998, "end": 255.92, "text": " Increasing the number of attention heads using attention at 32 times 32 resolution as well as 16"}, {"start": 256.64, "end": 259.12, "text": " By 16 and not and 8 by 8"}, {"start": 259.68, "end": 265.28, "text": " Rather than only at 16 by 16 using big GAN residual block for up sampling and down sampling the activations"}, {"start": 265.28, "end": 272.23999999999995, "text": " And rescaling residual connections with one over square root of two. Okay. So those are some of the ideas they they kind of played with"}, {"start": 273.2, "end": 276.32, "text": " And you can see here some of the curves here after you"}, {"start": 277.03999999999996, "end": 280.71999999999997, "text": " Take the best of out of those changes after a lot of experimentation"}, {"start": 280.96, "end": 285.28, "text": " I guess they end up having the the the best architecture here"}, {"start": 285.28, "end": 289.2, "text": " So you can see here number of channels 160 number of blocks two blah blah blah"}, {"start": 289.76, "end": 291.35999999999996, "text": " Anyways, not that important. I'm going to focus on the"}, {"start": 291.36, "end": 295.52000000000004, "text": " um like guiding the diffusion much more than on the architecture"}, {"start": 296.16, "end": 300.96000000000004, "text": " So here again just to to wrap this up the architecture part in the rest of the paper"}, {"start": 300.96000000000004, "end": 303.68, "text": " We use this final improved model architecture as our default"}, {"start": 304.08000000000004, "end": 311.28000000000003, "text": " Variable width with two residual blocks per resolution multiple heads with 64 channels per head attention at 32 16 and 8 resolutions"}, {"start": 311.68, "end": 316.8, "text": " Big GAN residual blocks for up and down sampling and adaptive group normalization for injecting time step and"}, {"start": 316.8, "end": 322.16, "text": " Class embeddings into residual blocks you might recall this one from my previous video"}, {"start": 322.16, "end": 325.04, "text": " So this adaptive group normalization for injecting"}, {"start": 325.6, "end": 328.72, "text": " So this one adaptive for injecting time step and class embeddings"}, {"start": 329.04, "end": 333.2, "text": " So do check it out if you want to know the the details how that is how that works"}, {"start": 334.08000000000004, "end": 336.08000000000004, "text": " Okay, so this is the important part"}, {"start": 336.48, "end": 340.72, "text": " So classifier guidance, that's what we want to understand a bit better"}, {"start": 341.6, "end": 343.6, "text": " They meant to be a little bit more complex"}, {"start": 343.6, "end": 350.8, "text": " That's what we want to understand a bit better. They mentioned here. 
We already incorporate class information into normalization layers"}, {"start": 351.12, "end": 356.08000000000004, "text": " So if you recall from the previous video basically the same way we we"}, {"start": 356.94, "end": 358.72, "text": " incorporate the"}, {"start": 358.72, "end": 360.72, "text": " the time step information"}, {"start": 360.72, "end": 362.72, "text": " Through that same pathway"}, {"start": 362.72, "end": 368.32000000000005, "text": " They incorporate the class information. So that means they already had class condition diffusion models"}, {"start": 368.32, "end": 376.32, "text": " But the thing is they show in this paper that using this classifier guidance is complementary to the class conditioning"}, {"start": 376.71999999999997, "end": 379.44, "text": " And because of that they achieve the best results"}, {"start": 380.56, "end": 384.32, "text": " With with class conditioning obviously so with classifier conditioning"}, {"start": 385.28, "end": 387.28, "text": " So let's let's see what it is"}, {"start": 387.36, "end": 388.71999999999997, "text": " basically"}, {"start": 388.71999999999997, "end": 393.76, "text": " So here we explore a different approach compared to the one I just mentioned so exploiting a classifier p"}, {"start": 393.76, "end": 399.76, "text": " Uh y uh conditioned on x so basically you input x and you get like a distribution across labels"}, {"start": 400.24, "end": 403.03999999999996, "text": " That's how you're going to implement this abstract"}, {"start": 404.4, "end": 406.0, "text": " distribution here"}, {"start": 406.0, "end": 408.0, "text": " To improve a diffusion generator"}, {"start": 408.08, "end": 415.36, "text": " So blah blah blah the previous paper show one way to achieve this wherein a pre-trained diffusion model can be conditioned using the"}, {"start": 415.42, "end": 420.9, "text": " Gradients of a classifier. Okay, that's the important part. So so in particular we can train a classifier"}, {"start": 420.9, "end": 423.7, "text": " p parameterized by parameters phi"}, {"start": 424.73999999999995, "end": 425.85999999999996, "text": " and"}, {"start": 425.85999999999996, "end": 427.85999999999996, "text": " conditioned on x of t"}, {"start": 428.09999999999997, "end": 434.9, "text": " And t so xt is basically the the if you recall from the that this is the forward so again diffusion"}, {"start": 435.38, "end": 439.7, "text": " uh, just a quick reminder is you start with uh, like a"}, {"start": 440.26, "end": 445.29999999999995, "text": " Fresh image. Let's let's uh, let me draw it like here. 
So we have a fresh image here"}, {"start": 446.09999999999997, "end": 449.38, "text": " And then what you do is you gradually stop adding more and more noise"}, {"start": 449.38, "end": 450.58, "text": " so"}, {"start": 450.58, "end": 456.18, "text": " Basically here you had like some image and then you have here the same image"}, {"start": 456.65999999999997, "end": 460.1, "text": " And basically you start adding gaussian noise"}, {"start": 460.1, "end": 465.62, "text": " So you're going to add some noise here and then you you keep repeating that that that process"}, {"start": 466.18, "end": 468.18, "text": " uh multiple times"}, {"start": 468.18, "end": 470.65999999999997, "text": " Uh, basically, uh, like for example"}, {"start": 471.21999999999997, "end": 474.02, "text": " 1000 4000 are our numbers which makes sense"}, {"start": 474.02, "end": 479.53999999999996, "text": " And so at the end well actually at the end you'll end up with a completely noised image"}, {"start": 479.53999999999996, "end": 482.65999999999997, "text": " So at the end you'll just have a pure gaussian noise here"}, {"start": 482.65999999999997, "end": 487.94, "text": " I'm just gonna like draw it like this basically it's pure noise and what you're trying to learn is how to"}, {"start": 488.26, "end": 494.26, "text": " Go backwards. So this is the forward part. This is the forward direction and we want to learn how to go backwards"}, {"start": 494.26, "end": 500.26, "text": " So we want to learn how to start from noise and then gradually basically generate the image from the underlying"}, {"start": 500.26, "end": 505.62, "text": " Probability distribution which we have learned during the training procedure. Okay?"}, {"start": 506.42, "end": 512.42, "text": " So x of t is basically the t-th step of this forward process. So we we take the x-th image here"}, {"start": 512.42, "end": 519.3, "text": " So there's going to be like some image here and that's like t-th time step"}, {"start": 519.3, "end": 523.7, "text": " This is going to be this is usually denoted as 0-th. This is usually denoted as big t"}, {"start": 523.7, "end": 527.3, "text": " And that's or t minus 1 whatever in any case"}, {"start": 527.3, "end": 534.5, "text": " So that's that's what this x of t is and t is just a scalar which we're gonna obviously"}, {"start": 535.4599999999999, "end": 542.8199999999999, "text": " Encode using the usually they use the like sinusoid embeddings same as for the original transformer paper"}, {"start": 542.9, "end": 548.18, "text": " Okay, so in particular we can train a classifier which I just explained on noisy images"}, {"start": 548.8199999999999, "end": 551.8599999999999, "text": " And then use gradients of that classifier as you can see here"}, {"start": 551.86, "end": 557.46, "text": " So to guide the diffusion sampling process towards an arbitrary class label y"}, {"start": 557.46, "end": 559.46, "text": " So the idea here is"}, {"start": 559.46, "end": 564.26, "text": " This gradient here tells you how you need to tweak the current image"}, {"start": 564.26, "end": 570.9, "text": " So x of t how you need to tweak it such that you maximize the class label of your interest"}, {"start": 570.9, "end": 574.26, "text": " So let's say you want to you want to create a dog image"}, {"start": 574.26, "end": 580.66, "text": " Then you want to make sure that so this will tell you how to tweak the image such that the dog class is"}, {"start": 580.66, "end": 587.2199999999999, "text": " maximized i.e. 
how to yeah basically how to tweak the diffusion process such that"}, {"start": 587.78, "end": 593.54, "text": " Basically, you start guiding it towards this part of the space where the dog images lie in"}, {"start": 594.26, "end": 600.5, "text": " We're gonna see those details in a second, but like that's that's that's a rough intuition for now. Okay, let's"}, {"start": 601.38, "end": 602.98, "text": " continue here"}, {"start": 602.98, "end": 607.9399999999999, "text": " Here just briefly they briefly mentioned the notation they're gonna omit the t part because it's kind of"}, {"start": 607.94, "end": 612.9000000000001, "text": " Redundant in the sense that well just for concise and sake basically. It's not redundant"}, {"start": 612.9000000000001, "end": 615.94, "text": " It's just easier to write it down like that. Okay, so"}, {"start": 616.5, "end": 621.7, "text": " So here's how we're gonna do this. We start with the diffusion model with an unconditional reverse"}, {"start": 622.2600000000001, "end": 628.2600000000001, "text": " Noising process so p parameters by theta and the idea is to given x t plus one"}, {"start": 628.2600000000001, "end": 630.5, "text": " So that's the image with more noise"}, {"start": 630.5, "end": 635.3000000000001, "text": " You want to predict the next step of the reverse process as I previously explained"}, {"start": 635.3, "end": 642.26, "text": " Here, okay. So that's that's this part. We want to learn how to go how to you how to go from more noise to less noise"}, {"start": 643.38, "end": 645.38, "text": " in any case"}, {"start": 646.02, "end": 651.54, "text": " To condition this on label y it suffices to sample each transition according to"}, {"start": 652.0999999999999, "end": 656.0999999999999, "text": " This distribution here. So if we want to sample from"}, {"start": 656.9, "end": 660.0999999999999, "text": " Class conditions so you can see here. We additionally have the class information here"}, {"start": 660.1, "end": 664.74, "text": " It can be shown and they have a like a whole derivation in the appendix h"}, {"start": 664.74, "end": 670.02, "text": " So if you want to dig into the maths you can go there. I'm gonna kind of assume this to be like true"}, {"start": 670.66, "end": 674.82, "text": " Like as an axiom because otherwise will take too much time for me to explain the maths"}, {"start": 675.38, "end": 679.46, "text": " And in any in any case you can see here. So that is just an arbitrary context"}, {"start": 680.1, "end": 685.46, "text": " Constant and we we already have this part here. So this is the unconditioned"}, {"start": 687.3000000000001, "end": 689.3000000000001, "text": " Reverse process"}, {"start": 689.3, "end": 694.9799999999999, "text": " Reverse process which we learned and here this is going to be the classifier that we're going to train"}, {"start": 695.4599999999999, "end": 698.42, "text": " On the noisy images where by noisy images, I mean"}, {"start": 699.14, "end": 704.66, "text": " Images taken from arbitrary time step of the forward diffusion process. 
Okay, that's the basic idea"}, {"start": 704.66, "end": 706.66, "text": " So if you take this as as given"}, {"start": 707.2199999999999, "end": 714.02, "text": " Now let's kind of derive how can we actually sample from this distribution on the right hand side because as they mentioned here"}, {"start": 714.26, "end": 718.5799999999999, "text": " Uh, it is typically interactable to sample from this distribution exactly"}, {"start": 718.58, "end": 720.9000000000001, "text": " Okay, so that's that's the that's the idea"}, {"start": 722.5, "end": 726.82, "text": " Now let's see how we can break this down and make it simple and easy to sample from"}, {"start": 728.4200000000001, "end": 729.94, "text": " They say here"}, {"start": 729.94, "end": 735.3000000000001, "text": " Well, we basically know that our our reverse process can be parameterized like this"}, {"start": 735.38, "end": 741.86, "text": " This is what we saw in the previous paper. So we got we're going to learn how to predict mu and covariance matrix"}, {"start": 741.86, "end": 747.94, "text": " Now if we take log of this expression here since this is like e raised to blah blah blah something"}, {"start": 747.94, "end": 753.54, "text": " That's the gaussian definition you end up with this expression here. Okay, so we're working our way here"}, {"start": 753.54, "end": 756.34, "text": " so we are basically trying to understand how we can break this down and"}, {"start": 756.98, "end": 759.38, "text": " now we're going to see how we can break this down and"}, {"start": 759.94, "end": 765.46, "text": " that's going to be on the next page and then we're going to have a simple expression for for uh, like a"}, {"start": 766.26, "end": 769.38, "text": " Sampling from a class conditioned distribution. Okay"}, {"start": 769.38, "end": 773.3, "text": " So under certain assumptions which they mentioned here so we can assume that the log"}, {"start": 774.02, "end": 777.9399999999999, "text": " of this of our classifier has low curvature compared to the"}, {"start": 778.5, "end": 783.62, "text": " Basically inverse of this covariance matrix, which are which we're going to predict from the unconditional"}, {"start": 784.42, "end": 790.42, "text": " Diffusion model. So this assumption is reasonable in the limit of infinite diffusion steps blah blah blah where the"}, {"start": 791.14, "end": 793.14, "text": " Well, I guess this is for binius norm"}, {"start": 793.54, "end": 798.02, "text": " Converges to zero and in this case we can approximate this a log of"}, {"start": 798.02, "end": 800.8199999999999, "text": " the classifier using a Taylor expansion around"}, {"start": 801.38, "end": 803.38, "text": " mu and mu is basically the"}, {"start": 803.86, "end": 808.18, "text": " Mean of the x of t. So basically let me go back here"}, {"start": 809.46, "end": 813.54, "text": " So that's the same u as this one here. Okay, so that's that's the"}, {"start": 814.34, "end": 818.9, "text": " Basically the mean of the distribution from which we're going to sample x of t. That's it"}, {"start": 819.62, "end": 822.98, "text": " Fairly simple and if we do that they end up with this expression here"}, {"start": 822.98, "end": 828.82, "text": " This component here is going to be very very important and under this like"}, {"start": 829.7, "end": 837.22, "text": " Basically approximation and then derivation here. 
We can finally get the the the the final expression here"}, {"start": 837.22, "end": 840.34, "text": " So we just take the log of the distribution"}, {"start": 840.34, "end": 843.94, "text": " We just saw above and we can omit z because that is a constant"}, {"start": 843.94, "end": 848.02, "text": " So we don't care about it and you can see that after some derivation here"}, {"start": 848.02, "end": 850.02, "text": " We end up with this expression here"}, {"start": 850.02, "end": 852.66, "text": " So again, we can ignore a c4 is just a constant"}, {"start": 852.66, "end": 857.86, "text": " So we basically if we want to sample from that complex intractable distribution here"}, {"start": 858.34, "end": 862.18, "text": " What we in practice need to do is just sample from this gaussian here"}, {"start": 862.18, "end": 868.66, "text": " So as you can see the Gaussian just has a mean a shifted mean so we have mu which is from the unconditional"}, {"start": 869.38, "end": 871.78, "text": " Diffusion model and we just simply add"}, {"start": 872.8199999999999, "end": 876.26, "text": " This offset here. So we have the covariance matrix. We just"}, {"start": 876.26, "end": 879.7, "text": " basically do like we"}, {"start": 880.26, "end": 885.9399999999999, "text": " Multiply it with the g and g is just the as you can see here the gradient of our classifier"}, {"start": 886.9, "end": 892.5, "text": " For x of t equals mu. So i'm going to give you some intuition for behind these formulas"}, {"start": 892.5, "end": 897.7, "text": " But first let me just tell you this so so we have thus found that the conditional transition operator"}, {"start": 897.7, "end": 900.02, "text": " So that's the the one we saw above here"}, {"start": 900.02, "end": 905.3, "text": " Can be approximated by a gaussian similar to the unconditional transition operator"}, {"start": 905.3, "end": 908.74, "text": " But with its mean shifted by sigma g"}, {"start": 909.3, "end": 911.78, "text": " Okay, so let me try and"}, {"start": 912.5, "end": 917.06, "text": " Give you some intuition for why this works. It's going to be a very hand-wavy explanation"}, {"start": 917.06, "end": 922.9, "text": " But like I think the intuition is is completely correct on some level of abstraction. It's a correct"}, {"start": 922.9, "end": 924.98, "text": " So let's just say that we have a condition"}, {"start": 924.98, "end": 930.1, "text": " But I think the intuition is is completely correct on some level of abstraction. It's it's a correct"}, {"start": 931.14, "end": 935.86, "text": " Hand-wavy explanation. Okay. So g again it tells you how you should tweak"}, {"start": 936.82, "end": 942.34, "text": " X of t so the the image such that you can maximize certain class y. Okay, so"}, {"start": 943.38, "end": 947.62, "text": " Let's see what we do. So we we pass in we have our model and"}, {"start": 948.26, "end": 952.1, "text": " Let me kind of draw it like this. So let me just change the color here"}, {"start": 952.1, "end": 954.1, "text": " So we have an image"}, {"start": 954.66, "end": 962.02, "text": " That's x t plus one. So that's the x t plus one image. 
We pass that image into our diffusion model"}, {"start": 962.02, "end": 966.4200000000001, "text": " So that's our diffusion model here and out comes what outcomes mu?"}, {"start": 967.38, "end": 969.38, "text": " and out comes this"}, {"start": 969.86, "end": 978.34, "text": " Covariance matrix and these two together basically parameterize the Gaussian distribution from which we would sample the"}, {"start": 978.34, "end": 984.02, "text": " Next so x of t so from from this from this distribution here. We sample x of t"}, {"start": 985.14, "end": 988.82, "text": " Okay, so we can kind of visualize the Gaussian as just"}, {"start": 989.5400000000001, "end": 994.1, "text": " Like a point in a in a in a multi-dimensional space for the sake of explanation"}, {"start": 994.1, "end": 996.98, "text": " I'm just gonna assume we're we're dealing with 3d space"}, {"start": 996.98, "end": 1004.1800000000001, "text": " But obviously this is going to be much higher dimensionality mu is actually the dimension the same dimensionality as our our image"}, {"start": 1004.18, "end": 1007.6999999999999, "text": " so basically, uh, it's gonna be um,"}, {"start": 1008.3399999999999, "end": 1009.3, "text": " well"}, {"start": 1009.3, "end": 1010.0999999999999, "text": " h"}, {"start": 1010.0999999999999, "end": 1015.06, "text": " Times w times three so that's the dimension of our image and similarly for for sigma"}, {"start": 1015.38, "end": 1021.14, "text": " But just for the sake of understanding what's going on with this shift here. So the the sigma g part"}, {"start": 1021.54, "end": 1024.26, "text": " I'm gonna kind of assume a coordinate system like this and"}, {"start": 1024.98, "end": 1030.02, "text": " Like assume we have a like a Gaussian distribution. The mean is here. So this is the mu. Okay"}, {"start": 1030.02, "end": 1034.5, "text": " and uh, we have some we basically have some some variance"}, {"start": 1036.02, "end": 1040.74, "text": " So under normal circumstances if we just have like an unconditional diffusion model"}, {"start": 1040.98, "end": 1046.5, "text": " We would basically sample a point some from from this distribution somewhere from from this distribution"}, {"start": 1046.5, "end": 1051.62, "text": " Like maybe maybe this would be our next uh, this would be our x of t. Okay, but"}, {"start": 1052.26, "end": 1056.42, "text": " Because we we additionally have this information coming from from the classifier"}, {"start": 1056.42, "end": 1060.8200000000002, "text": " What it does is the following so now you take this mu here"}, {"start": 1061.7, "end": 1063.7, "text": " Let me change the color"}, {"start": 1064.1000000000001, "end": 1069.38, "text": " Okay, so so you take the mu and you pass that as an image to like our classifier"}, {"start": 1069.38, "end": 1072.3400000000001, "text": " So we're gonna have something like this. So we have an image"}, {"start": 1072.9, "end": 1075.94, "text": " That's gonna be the same as this one here. 
So that's gonna be mu"}, {"start": 1076.74, "end": 1079.3000000000002, "text": " We pass that inside of our classifier"}, {"start": 1080.02, "end": 1082.02, "text": " This is our classifier"}, {"start": 1082.02, "end": 1087.78, "text": " And out comes uh, well for image net they train obviously it has like thousand classes"}, {"start": 1089.3799999999999, "end": 1090.26, "text": " And"}, {"start": 1090.26, "end": 1093.3799999999999, "text": " We basically learn how we should tweak this image"}, {"start": 1094.18, "end": 1100.18, "text": " So the classifier tells us how we should tweak the image such that a particular class is maximized maybe this one"}, {"start": 1100.5, "end": 1106.66, "text": " Okay, and with that information with that gradient, so so those blue dots. I'm just gonna denote that as g"}, {"start": 1106.66, "end": 1112.02, "text": " With that information which is of the same dimensionality as the mu we're going to shift this mu"}, {"start": 1112.02, "end": 1114.5800000000002, "text": " So we're going to shift this maybe here"}, {"start": 1115.8600000000001, "end": 1118.9, "text": " And we're going to end up here. So our new distribution"}, {"start": 1119.6200000000001, "end": 1125.14, "text": " Is going to be actually here instead of here. Okay, and now we have and we have the same variance"}, {"start": 1125.14, "end": 1128.5800000000002, "text": " So i'm going to kind of change the color because otherwise you will not be able to see anything"}, {"start": 1129.5400000000002, "end": 1132.42, "text": " So we shifted our original color"}, {"start": 1132.42, "end": 1139.38, "text": " And we're not going to be able to see anything so we shifted our original distribution and now we are somewhere here"}, {"start": 1140.26, "end": 1142.9, "text": " Okay, it's kind of messy, but you get the point so this"}, {"start": 1143.6200000000001, "end": 1149.54, "text": " Shift vector we just saw there. So that's this blue one is just this expression here. So"}, {"start": 1150.9, "end": 1152.98, "text": " Sigma big sigma times g"}, {"start": 1155.3000000000002, "end": 1158.3400000000001, "text": " And this is the like the hand way we explanation what's going on"}, {"start": 1158.34, "end": 1162.74, "text": " So the classifier knows where you should kind of move this image"}, {"start": 1163.62, "end": 1170.1799999999998, "text": " In which direction should you move this image such such that you maximize the class of interest and by just doing that by just kind of"}, {"start": 1170.58, "end": 1172.74, "text": " moving the"}, {"start": 1172.74, "end": 1175.8799999999999, "text": " The mean in the direction of the gradient of the classifier"}, {"start": 1176.26, "end": 1184.1799999999998, "text": " You basically increase the probability of the class label here and that's that's basically everything you need to understand. Okay"}, {"start": 1184.18, "end": 1188.26, "text": " Uh, hopefully that's clear enough. That's how we can combine again"}, {"start": 1189.0600000000002, "end": 1196.9, "text": " The classifier here and the unconditional diffusion model here to form images from a particular class"}, {"start": 1197.6200000000001, "end": 1202.5, "text": " Again, i'm kind of lying here because as I said, they are still class conditioning this model here"}, {"start": 1202.5, "end": 1211.14, "text": " But using the classifier we additionally complementary, uh improve, uh the uh, well this class conditioned, uh generation"}, {"start": 1211.14, "end": 1213.6200000000001, "text": " Cool, uh, that's it. 
Let me briefly, um"}, {"start": 1214.3400000000001, "end": 1219.38, "text": " Explain a couple more things. Um, aside from using the the the classifier"}, {"start": 1219.38, "end": 1226.3400000000001, "text": " The classifier gradient they additionally scale it and this is just empirically they found out that this gives them much better results"}, {"start": 1226.66, "end": 1231.94, "text": " So here they mentioned our classifier architecture is simply the down sampling trunk of the unit model"}, {"start": 1232.26, "end": 1236.66, "text": " With an attention pool at the 8 times 8 layer to produce the final output"}, {"start": 1236.66, "end": 1241.14, "text": " We train these classifiers on the same noising distribution as the corresponding diffusion model"}, {"start": 1241.7, "end": 1245.22, "text": " Then they say in initial experiments with unconditional image map models"}, {"start": 1245.46, "end": 1250.9, "text": " We found it necessary to scale the classifier gradients by constant factor larger than one"}, {"start": 1250.9, "end": 1255.8600000000001, "text": " So to understand the effect of the scaling classifier gradients note that if we scale by s"}, {"start": 1256.3400000000001, "end": 1261.8600000000001, "text": " So this is the what we denoted as g previously. This is these are the gradients of our classifier"}, {"start": 1261.86, "end": 1267.3, "text": " So if we do this you can show quite easily that that's equivalent to this expression here"}, {"start": 1267.3, "end": 1271.06, "text": " So well, basically, um, you can convince yourself"}, {"start": 1271.06, "end": 1276.82, "text": " This is true because log of a product is log of sums and because this is delta x"}, {"start": 1276.82, "end": 1281.54, "text": " We don't care about the this part here and then we would end up with the same expression"}, {"start": 1281.54, "end": 1283.06, "text": " Basically, these two are equivalent"}, {"start": 1283.06, "end": 1288.6599999999999, "text": " Okay, and you can see that we are basically just raising this to the power of s which which means we're going to sharpen this distribution"}, {"start": 1288.66, "end": 1292.1000000000001, "text": " So let me continue reading what they said here"}, {"start": 1292.1000000000001, "end": 1299.5400000000002, "text": " So so z is an arbitrary constant and as a result the conditioning process is still theoretically grounded in our renormalized classifier distribution proportional to"}, {"start": 1300.18, "end": 1306.18, "text": " P raised to the power of s when s is bigger than one. This is distribution becomes sharper than"}, {"start": 1306.98, "end": 1313.6200000000001, "text": " The original distribution. Okay in the above derivations. 
We assume that the underlying diffusion model was unconditional modeling"}, {"start": 1313.62, "end": 1318.1799999999998, "text": " P of x it is also possible to train conditional diffusion models"}, {"start": 1318.8999999999999, "end": 1328.34, "text": " And use classifier guidance in the exact same way table 4 shows that the sample quality of both unconditional and conditional models can be greatly improved by classifier guidance"}, {"start": 1328.34, "end": 1335.3799999999999, "text": " So this is the the thing I was mentioning we we can both have conditional model plus the classifier guidance because they are complementary"}, {"start": 1335.3799999999999, "end": 1338.1799999999998, "text": " We see that with a high enough scale the guided"}, {"start": 1338.18, "end": 1344.02, "text": " Unconditional model can get quite close to the fid of an unguided conditional model"}, {"start": 1344.02, "end": 1349.94, "text": " Although training directly with the class label still helps finally guiding a conditional model further improves fid"}, {"start": 1349.94, "end": 1356.1000000000001, "text": " So that's what I mentioned multiple times using the class conditional model plus using the guiding is what gave them the best"}, {"start": 1356.74, "end": 1360.42, "text": " Result on various metrics such as fid is inception score"}, {"start": 1361.22, "end": 1363.22, "text": " Precision recall, etc, etc. Okay"}, {"start": 1363.22, "end": 1367.6200000000001, "text": " Um quickly to let me just quickly address this because we're going to see it in the code later on"}, {"start": 1367.94, "end": 1371.54, "text": " So how they train the so why do they need to train the model?"}, {"start": 1371.54, "end": 1377.46, "text": " Uh, the the the classifier on on noisy images because they say here we train these classifiers on the same"}, {"start": 1377.46, "end": 1379.7, "text": " Noising distribution as the corresponding diffusion model"}, {"start": 1380.5, "end": 1381.14, "text": " Well"}, {"start": 1381.14, "end": 1388.58, "text": " Because we're going to be using it on images on x of t so x of t which can be super noisy"}, {"start": 1388.58, "end": 1394.98, "text": " And if we just train our classifier on normal images normal when I say normal I mean images without any noise"}, {"start": 1395.62, "end": 1400.34, "text": " Then the model will probably struggle because these images would be out of distribution for the model"}, {"start": 1401.22, "end": 1408.26, "text": " And then it would not give us correct class for a noisy version of the original image because of that we want to train model"}, {"start": 1408.26, "end": 1411.86, "text": " For all so basically on the whole process. So this is our diffusion"}, {"start": 1413.22, "end": 1415.9399999999998, "text": " Like for example, if we want to train the model"}, {"start": 1415.94, "end": 1422.74, "text": " Our diffusion, uh, like forward process something like this. So this is the step zero. 
This is the step"}, {"start": 1423.14, "end": 1425.14, "text": " Let's say big t minus one"}, {"start": 1425.94, "end": 1428.66, "text": " We want to train on all of these images"}, {"start": 1430.42, "end": 1432.42, "text": " So we'll be randomly sampling images"}, {"start": 1432.74, "end": 1437.22, "text": " Um, and uh making sure that the model knows how to predict the correct class"}, {"start": 1437.6200000000001, "end": 1443.06, "text": " And uh, because of that this is kind of bothersome and some of the later papers such as glide"}, {"start": 1443.06, "end": 1448.74, "text": " Introduce this concept of a classifier free guidance. I'm gonna, um, well tell you more about that one"}, {"start": 1448.74, "end": 1451.78, "text": " Uh in one of the future videos. Okay, let's go back here"}, {"start": 1452.82, "end": 1457.1399999999999, "text": " Uh here we can see that the trade-off we mentioned in the beginning of the video"}, {"start": 1457.46, "end": 1461.86, "text": " And that's that we want to trade out between diversity and quality of the sample"}, {"start": 1462.34, "end": 1465.46, "text": " They show here using various metrics that that's indeed the case"}, {"start": 1465.46, "end": 1469.3799999999999, "text": " So you can see here as we increase this gradient scale. So that's the hyper parameter s"}, {"start": 1469.38, "end": 1476.3400000000001, "text": " Uh, you can see that the recall is uh falling down which means that the diversity the the the mode coverage"}, {"start": 1476.66, "end": 1478.66, "text": " Of our generative model is falling down"}, {"start": 1479.0600000000002, "end": 1484.1000000000001, "text": " But the precision which basically um, well, uh, it's a proxy for the image quality"}, {"start": 1484.5800000000002, "end": 1490.8200000000002, "text": " Goes up and you can see similarly here is goes up which means that the image quality is increasing"}, {"start": 1491.14, "end": 1498.18, "text": " And f id is is also going up which means that the diversity is is uh, basically, uh, diminishing"}, {"start": 1498.18, "end": 1503.54, "text": " Okay, that's pretty much it. I'm gonna skip all of this and i'm just gonna briefly, uh"}, {"start": 1504.02, "end": 1506.9, "text": " Uh mention a couple sentences here and then we're gonna jump into the code"}, {"start": 1507.6200000000001, "end": 1512.42, "text": " Our proposed classifier guidance technique is currently limited to labeled datasets. 
So keep that in mind"}, {"start": 1512.5, "end": 1515.54, "text": " We do have to train it on on on labeled, uh images"}, {"start": 1516.5800000000002, "end": 1517.6200000000001, "text": " um"}, {"start": 1517.6200000000001, "end": 1524.1000000000001, "text": " Well, the effectiveness of classifier guidance demonstrates that we can obtain powerful generative models from the gradients of a classification function"}, {"start": 1524.1, "end": 1531.62, "text": " This could be used to condition pre-trained models in a plethora of ways for example by conditioning an image generator with a text caption"}, {"start": 1531.9399999999998, "end": 1535.9399999999998, "text": " Using a noisy version of clip and because of this sentence"}, {"start": 1536.74, "end": 1537.78, "text": " After reading this paper"}, {"start": 1537.78, "end": 1544.5, "text": " You could have assumed that open ai would start working on something like glide or dali models and they did indeed"}, {"start": 1545.3, "end": 1552.8999999999999, "text": " And finally it also suggests that large unlabeled datasets could be leveraged in the future to pre-train powerful diffusion models that can later be improved"}, {"start": 1552.9, "end": 1557.46, "text": " By using a classifier with desirable properties. Okay again"}, {"start": 1558.26, "end": 1560.02, "text": " You could have if you read this paper"}, {"start": 1560.02, "end": 1565.46, "text": " You could have anticipated open ai is going to try and just scale up scale up diffusion models"}, {"start": 1565.8600000000001, "end": 1570.9, "text": " Uh such as what they've done with with dali and when I say dali I basically mean dali"}, {"start": 1571.38, "end": 1576.66, "text": " Two because dali one didn't even use diffusion models. Okay, let's jump to the code now"}, {"start": 1576.98, "end": 1578.98, "text": " um, I I did expect this"}, {"start": 1578.98, "end": 1582.34, "text": " Uh paper overview part to be a bit shorter"}, {"start": 1582.58, "end": 1587.46, "text": " But like do let me know whether you find this type of combination useful or not"}, {"start": 1588.02, "end": 1591.54, "text": " Just feel free to comment down below. What do you think about this format?"}, {"start": 1592.18, "end": 1594.18, "text": " In any case, let's now"}, {"start": 1594.66, "end": 1599.22, "text": " Start working on on the classifier train script. So this is how we trained"}, {"start": 1599.8600000000001, "end": 1602.74, "text": " This is how we train this this noisy classifier"}, {"start": 1602.74, "end": 1608.5, "text": " Uh that I just mentioned so let me start and walk you through the actual script for training it"}, {"start": 1609.6200000000001, "end": 1615.8, "text": " So i'm gonna obviously again, uh abstract a lot of the details because otherwise this would not be tractable"}, {"start": 1616.26, "end": 1618.26, "text": " uh for for a video"}, {"start": 1619.06, "end": 1623.14, "text": " That has an ambition to be less than an hour long. 
Uh, so just some"}, {"start": 1624.1, "end": 1628.1, "text": " Setting up some distributed stuff, which we don't care about logging blah blah blah"}, {"start": 1628.1, "end": 1634.82, "text": " Um this part part we do care about because I just want to briefly show you basically how the classifier is constructed"}, {"start": 1635.78, "end": 1642.1, "text": " We did sign the paper that it's simply the first part of the unit model up to the bottleneck part and then we just stitch"}, {"start": 1642.6599999999999, "end": 1648.98, "text": " Attention pooling and we ditch the the second part of the unit model. That's how we that's how they set up the uh, the classifier"}, {"start": 1649.2199999999998, "end": 1654.98, "text": " The architecture so let's kind of go inside here. So this just kind of gives you some"}, {"start": 1654.98, "end": 1659.94, "text": " Um default arguments i'm gonna try and zoom in a little bit in case you don't see this"}, {"start": 1660.34, "end": 1664.02, "text": " I'm gonna close this part. Okay, so let's step inside this"}, {"start": 1665.3, "end": 1667.8600000000001, "text": " Line of code and let's see how the classifier works"}, {"start": 1668.66, "end": 1670.66, "text": " Okay, so here's the classifier"}, {"start": 1670.82, "end": 1676.58, "text": " Image size 128 128 blah blah. Those are just some uh hyper parameters obviously can be different"}, {"start": 1677.06, "end": 1679.06, "text": " Uh, we are not using fp16"}, {"start": 1679.94, "end": 1681.94, "text": " Uh, there is some width depth"}, {"start": 1681.94, "end": 1688.66, "text": " Various hyper parameters, uh, we saw this is going to be um, well basically this is going to enable"}, {"start": 1688.8200000000002, "end": 1696.66, "text": " Uh the attention to be at resolutions 32 times 32 in the latent space as well as 16 by 16 and 8 by 8"}, {"start": 1697.14, "end": 1700.8200000000002, "text": " Um, this is the scale shift norm that I showed you in the previous video"}, {"start": 1701.14, "end": 1704.18, "text": " uh, this basically, um enables us to"}, {"start": 1704.8200000000002, "end": 1709.8600000000001, "text": " Integrate to incorporate the the class as well as the time step information into the model"}, {"start": 1710.42, "end": 1711.54, "text": " um"}, {"start": 1711.54, "end": 1713.78, "text": " In any case i'm going to scheme over those details"}, {"start": 1714.26, "end": 1718.58, "text": " Um, let me just kind of go through here because we have image size 128"}, {"start": 1718.58, "end": 1722.1, "text": " We have some specification for the number of channels in the unit"}, {"start": 1722.82, "end": 1729.62, "text": " And finally, we just uh transform, uh this from the resolution. So so remember this is like, um,"}, {"start": 1730.1, "end": 1734.74, "text": " Basically 6 32 16 and 8 after we step through this, uh for loop"}, {"start": 1734.98, "end": 1737.7, "text": " We're basically and end up with the same information"}, {"start": 1737.7, "end": 1743.6200000000001, "text": " But just like and now it it tells us how many down sampling steps you need to take before you apply the attention"}, {"start": 1743.94, "end": 1747.78, "text": " Uh, we saw that in the previous video as well. So i'm gonna just kind of skip over this"}, {"start": 1748.26, "end": 1751.8600000000001, "text": " Blah blah blah and here here it comes. So here's the encoder unit model"}, {"start": 1753.14, "end": 1760.18, "text": " Basically, let me see what interesting is here. 
So we have the output number of channels is hard coded to 2000 because this is"}, {"start": 1760.98, "end": 1762.98, "text": " trained on on on image net"}, {"start": 1762.98, "end": 1767.8600000000001, "text": " Uh, i'm gonna skip over all of those not that important"}, {"start": 1768.58, "end": 1772.82, "text": " Uh classifier pool is going to be attention. We're gonna see that in a second. So let me"}, {"start": 1773.8600000000001, "end": 1776.02, "text": " Go to the construction constructor here"}, {"start": 1776.5, "end": 1783.22, "text": " So i'm gonna again just scheme over they're just kind of setting all of these arguments to um local"}, {"start": 1783.94, "end": 1785.6200000000001, "text": " variables here"}, {"start": 1785.6200000000001, "end": 1787.7, "text": " Uh, nothing nothing fancy there"}, {"start": 1787.7, "end": 1793.94, "text": " Uh, we construct the uh time embedding. This is the the mlp that we use to transform"}, {"start": 1794.42, "end": 1798.42, "text": " The sinusoidal embeddings for our time step time steps"}, {"start": 1798.9, "end": 1799.8600000000001, "text": " um"}, {"start": 1799.8600000000001, "end": 1803.78, "text": " Nothing, nothing interesting really we can just kind of scheme over all of this. So"}, {"start": 1804.3400000000001, "end": 1807.94, "text": " Um, if we recall the from the previous video we had the same structure"}, {"start": 1807.94, "end": 1813.22, "text": " So we initially just have number of input block. We just have the conv, uh, basically 2d"}, {"start": 1813.22, "end": 1817.3, "text": " Uh, so the convolution as the first processing layer"}, {"start": 1817.6200000000001, "end": 1819.38, "text": " then we have this uh"}, {"start": 1819.38, "end": 1822.34, "text": " part where we uh add various"}, {"start": 1822.88, "end": 1827.78, "text": " Red residual residual blocks and attention blocks and then we have the middle block"}, {"start": 1828.18, "end": 1832.34, "text": " And the main difference is we don't have here the output blocks. We just have as you can see here"}, {"start": 1832.34, "end": 1834.82, "text": " We just have the pooling operator. So let me go"}, {"start": 1835.7, "end": 1841.14, "text": " To there obviously these details are not that important. The only important part lies in the residual block"}, {"start": 1841.14, "end": 1843.14, "text": " where you can see how the um"}, {"start": 1843.5400000000002, "end": 1850.18, "text": " How the information such as time steps and class information is incorporated. I'm going to show you that in a second"}, {"start": 1850.26, "end": 1854.18, "text": " I'm going to kind of skip over all of this again. You can see here that uh"}, {"start": 1854.18, "end": 1860.9, "text": " Sometimes we're going to add attention blocks and that's going to happen precisely for the resolutions 32 16 and 8"}, {"start": 1861.6200000000001, "end": 1868.26, "text": " And there are these wrapper functions which just enable us to uh, sometimes pass the temporal information"}, {"start": 1868.26, "end": 1874.9, "text": " Sometimes just pass the the feature vectors. Okay, in any case i'm going to skip all of this because i'm just um"}, {"start": 1875.3, "end": 1878.98, "text": " There's too much information there and it's not that vital for your understanding"}, {"start": 1878.98, "end": 1882.82, "text": " It's just like the architectural details of the unit model. 
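For a rough mental model of that encoder-style classifier, here is a heavily simplified sketch. It is a hypothetical stand-in, not the repo's EncoderUNetModel: the real blocks are residual blocks with attention at the 32/16/8 stages, and the real pooling head uses attention.

import math
import torch
import torch.nn as nn

def timestep_embedding(t, dim):
    # Sinusoidal embedding of integer timesteps t -> (N, dim), DDPM-style.
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half).to(t.device)
    args = t.float()[:, None] * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

class TinyNoisyClassifier(nn.Module):
    # Hypothetical, stripped-down stand-in for "first half of the UNet + pooling head".
    def __init__(self, in_ch=3, ch=64, num_classes=1000, t_dim=256):
        super().__init__()
        self.t_dim = t_dim
        self.t_mlp = nn.Sequential(nn.Linear(t_dim, t_dim), nn.SiLU(), nn.Linear(t_dim, ch))
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)              # first conv processing layer
        self.down = nn.Sequential(                                   # stands in for the res/attention down blocks
            nn.SiLU(), nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1),
            nn.SiLU(), nn.Conv2d(2 * ch, 4 * ch, 3, stride=2, padding=1))
        self.head = nn.Sequential(                                   # stands in for the attention-pooling head
            nn.GroupNorm(32, 4 * ch), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4 * ch, num_classes))

    def forward(self, x, t):
        emb = self.t_mlp(timestep_embedding(t, self.t_dim))          # (N, ch)
        h = self.stem(x) + emb[:, :, None, None]                     # merge the time info into the features
        return self.head(self.down(h))                               # (N, num_classes) logits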
I'm just going to show you this one"}, {"start": 1883.06, "end": 1886.98, "text": " So here is the let me just kind of uh remind you from the previous video"}, {"start": 1888.18, "end": 1889.22, "text": " uh"}, {"start": 1889.22, "end": 1896.42, "text": " There is this part where we embed so here we embed the uh temporal information and then because this is set to true"}, {"start": 1896.42, "end": 1903.0600000000002, "text": " Use scale shift norm what we're gonna do is we're gonna take the temporal information here as in a form of a scale and shift"}, {"start": 1903.3000000000002, "end": 1907.54, "text": " And we are going to combine that this is how we merge the temporal information"}, {"start": 1907.94, "end": 1912.5800000000002, "text": " Uh into into the uh features here and this is just as a reminder"}, {"start": 1913.14, "end": 1918.74, "text": " For the what happened in the previous video. Let me just see whether they have class information here"}, {"start": 1919.46, "end": 1922.5800000000002, "text": " Um doesn't seem to be the case so embedding"}, {"start": 1923.3000000000002, "end": 1925.3000000000002, "text": " They passed embedding here"}, {"start": 1925.3, "end": 1931.46, "text": " No, I don't think this is going to have well, it doesn't make any sense to have uh like a class"}, {"start": 1932.26, "end": 1938.1, "text": " Um conditioning in a classifier. So this is going to make sense for the diffusion model and in any case"}, {"start": 1938.1, "end": 1940.1, "text": " Okay, let's continue here"}, {"start": 1940.1, "end": 1941.3, "text": " so we"}, {"start": 1941.3, "end": 1946.8999999999999, "text": " Uh, we have the the unit part and now we just end up uh with this um"}, {"start": 1947.46, "end": 1952.4199999999998, "text": " Sequential model that has some normalization some activation function and finally this attention pooling 2d"}, {"start": 1952.42, "end": 1956.9, "text": " 2d this is basically just going to take the output features from the unit model"}, {"start": 1957.94, "end": 1962.02, "text": " And using a tension mechanism is going to form a resultant"}, {"start": 1962.5800000000002, "end": 1970.18, "text": " feature vector which we're going to then use to predict the the logits and you can see that the output channels here is"}, {"start": 1970.18, "end": 1972.18, "text": " Thousands is going to give us a thousand"}, {"start": 1972.8200000000002, "end": 1978.66, "text": " Thousand dimensional vector which is going to be just the logits that come out of this classifier and that's it guys"}, {"start": 1978.66, "end": 1984.1000000000001, "text": " Um, I kind of schemed a lot of details because they are not really that important. 
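Since the attention pooling head is the piece that replaces the UNet decoder, here is a minimal sketch of the idea: a learned query attends over the bottleneck feature map and the attended vector is fed to a linear head. This is an approximation of the AttentionPool2d idea, not a copy of the repo's class.

import torch
import torch.nn as nn

class SimpleAttentionPool(nn.Module):
    def __init__(self, channels=256, num_classes=1000, num_heads=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, channels))       # one learned pooling query
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.out = nn.Linear(channels, num_classes)

    def forward(self, h):                         # h: (N, C, H, W) bottleneck features
        tokens = h.flatten(2).transpose(1, 2)     # (N, H*W, C) spatial tokens
        q = self.query.expand(h.shape[0], -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)  # (N, 1, C) attended summary vector
        return self.out(pooled.squeeze(1))        # (N, num_classes) logits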
So"}, {"start": 1984.8200000000002, "end": 1989.78, "text": " Uh, you just need to have like a rough mental model how the architecture looks like it has this this basically"}, {"start": 1990.42, "end": 1995.78, "text": " encoder type of a structure it has this attention pooling at the end and"}, {"start": 1997.3000000000002, "end": 1998.3400000000001, "text": " That's pretty much it"}, {"start": 1998.3400000000001, "end": 1998.5800000000002, "text": " Okay"}, {"start": 1998.5800000000002, "end": 2003.8600000000001, "text": " So I skipped the the gaussian diffusion model here because we saw that in the previous video"}, {"start": 2003.8600000000001, "end": 2007.5400000000002, "text": " I just want to focus on the differences compared to the previous video"}, {"start": 2007.54, "end": 2011.8799999999999, "text": " Uh, so here we just uh shift the model onto the gpu"}, {"start": 2012.42, "end": 2018.84, "text": " Uh, we create the uniform sampler here. So this one is just going to help us sample from the forward"}, {"start": 2019.62, "end": 2024.6599999999999, "text": " Process we're just gonna pick the random uh time step using the uniform distribution"}, {"start": 2024.74, "end": 2028.34, "text": " Obviously you can kind of bias this and and form various different distributions"}, {"start": 2028.58, "end": 2031.78, "text": " One of those which I showed you in the previous video was depending on the loss"}, {"start": 2031.78, "end": 2037.78, "text": " You're gonna focus more on those time steps where we have higher loss, which makes sense intuitively"}, {"start": 2038.42, "end": 2043.22, "text": " Um, okay, so we can skip over all of these parts. We don't want to resume from the checkpoint"}, {"start": 2043.62, "end": 2048.5, "text": " We just have some mixed precision trainer wrapper here. We don't care about that part. We have the distributed"}, {"start": 2049.06, "end": 2054.18, "text": " Data parallel object that's for distributed training, but we don't care about it either"}, {"start": 2055.06, "end": 2058.74, "text": " I'm gonna skip over all of that. We have some data loading"}, {"start": 2058.74, "end": 2063.4599999999996, "text": " Basically, we're gonna have batch size 256 image size 120 128"}, {"start": 2063.7799999999997, "end": 2069.22, "text": " Uh, we have the data directory which I downloaded before the video started and that's it"}, {"start": 2069.22, "end": 2071.7, "text": " We just create the training and validation datasets"}, {"start": 2071.7, "end": 2076.8199999999997, "text": " I'm not gonna dig into the actual details of how that works because it's just like a side thing"}, {"start": 2077.3799999999997, "end": 2083.14, "text": " Um, so logging, addmw optimizer. We're not resuming from a checkpoint blah blah blah"}, {"start": 2083.3799999999997, "end": 2087.22, "text": " And here this this is going to be the main function to understand how to classify a string"}, {"start": 2087.22, "end": 2092.8999999999996, "text": " So we have forward backward log. I've added a breakpoint to that function. We're gonna step into it a bit later"}, {"start": 2093.14, "end": 2094.8199999999997, "text": " So here we defined it"}, {"start": 2094.8199999999997, "end": 2101.4599999999996, "text": " And now here we have we step for a number of iterations. 
Okay, so we're gonna do this for a number of steps"}, {"start": 2101.54, "end": 2104.02, "text": " So as specified by the number of iterations there"}, {"start": 2104.18, "end": 2109.2999999999997, "text": " I've just set it to 30 because I want to have a short training here just for for"}, {"start": 2110.02, "end": 2113.8599999999997, "text": " Debugging purposes in any case just some logging nothing interesting"}, {"start": 2113.86, "end": 2120.58, "text": " I can skip all of this and this is where the whole magic of the classifier training happens after that. We just do the"}, {"start": 2121.6, "end": 2129.2200000000003, "text": " Optimization step and we do the same logic for on the validation. We just do the forward backward on the validation"}, {"start": 2130.6600000000003, "end": 2134.02, "text": " Set and finally we just do some logging here"}, {"start": 2134.34, "end": 2139.86, "text": " So basically this is where the whole magic is in so the forward backward log function is everything that we care about"}, {"start": 2140.1800000000003, "end": 2142.1800000000003, "text": " So let's step into it"}, {"start": 2142.18, "end": 2148.02, "text": " So we first gonna grab like a batch of images and associated labels. Let me show you the"}, {"start": 2148.8999999999996, "end": 2150.02, "text": " shapes"}, {"start": 2150.02, "end": 2152.02, "text": " As always that's a very important thing to do"}, {"start": 2152.8199999999997, "end": 2156.4199999999996, "text": " So let's see what batch looks like so if I do batch shape"}, {"start": 2157.7799999999997, "end": 2159.46, "text": " So we can see here"}, {"start": 2159.46, "end": 2162.5, "text": " Bad shape so we have 200 256 images"}, {"start": 2163.22, "end": 2166.3599999999997, "text": " RGB images of resolution 128 128"}, {"start": 2166.36, "end": 2172.28, "text": " We have the labels as well. So after I do this, so basically this extra here"}, {"start": 2172.92, "end": 2174.92, "text": " Just contains this y"}, {"start": 2175.56, "end": 2177.32, "text": " And let's see the shape"}, {"start": 2177.32, "end": 2182.1200000000003, "text": " So shape is 256. Obviously we have for each of the images we have associated label"}, {"start": 2182.6800000000003, "end": 2184.92, "text": " If I were to step over this"}, {"start": 2186.52, "end": 2188.6, "text": " And print labels"}, {"start": 2188.6, "end": 2193.88, "text": " We can see there is a bunch of labels here because i'm using cipher 10 even though this is"}, {"start": 2193.88, "end": 2195.88, "text": " This is image net training"}, {"start": 2196.44, "end": 2202.76, "text": " That's why we only see labels from zero through nine, but we don't care about this detail for the"}, {"start": 2203.32, "end": 2209.08, "text": " Purpose of understanding how the classified training looks like. Okay, so we we put the as you can see here"}, {"start": 2209.08, "end": 2214.44, "text": " We put the labels onto our target device, which is gpu in my case same with images here"}, {"start": 2215.0, "end": 2220.36, "text": " And now what we do is we do we take the uniform sampler and we just sample"}, {"start": 2220.36, "end": 2227.26, "text": " 256 time steps. That's it. So we now have let's see if I were to print t you can see"}, {"start": 2227.7400000000002, "end": 2229.9, "text": " Uh, like it's a it's a pile of numbers"}, {"start": 2230.7000000000003, "end": 2234.06, "text": " Of the following length. 
Let me just print this so we have 256"}, {"start": 2234.6200000000003, "end": 2241.82, "text": " time steps for each of the images in the batch and it's just randomly sampled between uh zero and"}, {"start": 2242.3, "end": 2246.78, "text": " And and 999 after that we just do the q sample"}, {"start": 2246.78, "end": 2251.26, "text": " So we take the image we take the time step and we just do the noising process"}, {"start": 2251.5, "end": 2257.6600000000003, "text": " So we create from the x0 image we create we create the x t image. I'm gonna quickly remind you"}, {"start": 2257.7400000000002, "end": 2262.2200000000003, "text": " I showed you this in the previous video. So i'm going to just quickly show you how this thing looks like"}, {"start": 2262.5400000000004, "end": 2268.38, "text": " So here it is. Uh, there is literally a formula which I showed you in the previous paper, which is fairly easy"}, {"start": 2268.6200000000003, "end": 2270.6200000000003, "text": " you just take uh these"}, {"start": 2270.62, "end": 2277.8199999999997, "text": " Uh specific constants here you multiply it with the x start x start is the x0 the original image"}, {"start": 2278.06, "end": 2280.06, "text": " And we just add noise"}, {"start": 2280.54, "end": 2286.8599999999997, "text": " According to this square blah blah. Again, there's some content constant. Let me try to find a formula. It's going to be easier"}, {"start": 2287.2599999999998, "end": 2290.94, "text": " Okay, guys, here is the formula from the previous video. So here is how we sample"}, {"start": 2291.18, "end": 2294.38, "text": " So given x0 the original image, here's how we get x of t"}, {"start": 2294.7, "end": 2300.54, "text": " We can just basically sample from this distribution here or equivalently. We just compute this expression here"}, {"start": 2300.54, "end": 2303.1, "text": " So you can see here we multiply by some constant constant"}, {"start": 2303.18, "end": 2306.54, "text": " We multiply the x0 or x start as we saw in the code"}, {"start": 2306.86, "end": 2311.9, "text": " We just add up on on top of that. We add this constant times epsilon, which is just your"}, {"start": 2312.54, "end": 2314.54, "text": " noise from your normal"}, {"start": 2314.84, "end": 2321.02, "text": " Distribution now, let's go back to the code. Let me see whether you can you can see resemblance between this formula here"}, {"start": 2321.66, "end": 2325.42, "text": " And between this thing here. 
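The formula being compared against the code here is only shown on screen in the video, so reconstructing it: given the noise schedule and its cumulative product \bar\alpha_t, the forward (noising) process gives

x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),

equivalently x_t \sim q(x_t \mid x_0) = \mathcal{N}\big(\sqrt{\bar\alpha_t}\, x_0,\, (1 - \bar\alpha_t) I\big).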
So again constant times x start"}, {"start": 2325.42, "end": 2330.54, "text": " We add on top of that some constant times noise and that's how we get xt and that's it"}, {"start": 2330.54, "end": 2336.62, "text": " That's because that's why i'm going to just skip over this function and we have finally we now have a batch of"}, {"start": 2337.26, "end": 2342.0, "text": " Noisy xt samples and that's what we're going to use to train our classifier"}, {"start": 2342.2200000000003, "end": 2348.46, "text": " Uh on and again the reason we are doing this is because otherwise if we didn't have these noisy images if we just trained on"}, {"start": 2348.46, "end": 2351.7400000000002, "text": " the original images then the classifier wouldn't be able to"}, {"start": 2351.74, "end": 2358.8599999999997, "text": " To estimate the gradients correctly for for an arbitrary noisy image and that's why we want to train it on noisy images"}, {"start": 2359.2599999999998, "end": 2360.3799999999997, "text": " Okay"}, {"start": 2360.3799999999997, "end": 2364.14, "text": " So here now we just um instead of training on the whole batch"}, {"start": 2364.2999999999997, "end": 2369.9799999999996, "text": " This is just for for for memory optimization stuff. Uh, we're gonna kind of chunk the batch size"}, {"start": 2369.9799999999996, "end": 2373.74, "text": " That's 256 into smaller batches so-called micro batches"}, {"start": 2373.8999999999996, "end": 2380.2999999999997, "text": " Okay, and here it is and now we just call we just pass so we do a for pass through our"}, {"start": 2380.3, "end": 2384.46, "text": " Our classifier again, the the classifier is basically just a unit model"}, {"start": 2384.78, "end": 2387.5, "text": " Uh, well basically the first portion of the unit model"}, {"start": 2387.82, "end": 2391.82, "text": " Uh, and first of all, let me just show you the shapes here. I think it's gonna be four"}, {"start": 2392.38, "end": 2395.42, "text": " No, it's gonna be actually just one. Okay, so the micro batch size is one"}, {"start": 2395.5800000000004, "end": 2399.6800000000003, "text": " So we basically pass just a single image through the classifier as an optimization"}, {"start": 2401.1000000000004, "end": 2403.6000000000004, "text": " Thingy and we pass in the corresponding"}, {"start": 2404.3, "end": 2407.5800000000004, "text": " Like basically time step. So again, if this is like five"}, {"start": 2407.58, "end": 2413.5, "text": " That means this is going to be x5 and we pass that through the model and we get back to logits. Okay, let me"}, {"start": 2414.06, "end": 2416.06, "text": " uh step through there"}, {"start": 2416.7, "end": 2420.46, "text": " and here we end up in the forward pass of our"}, {"start": 2421.02, "end": 2423.02, "text": " model so that's the that's the"}, {"start": 2423.02, "end": 2427.5, "text": " That's the classifier and we do the time step embedding which is just going to embed it"}, {"start": 2427.98, "end": 2434.62, "text": " Like using the the sinusoidal embeddings and then we just pass it through the mlp. And so we ultimately end up with"}, {"start": 2434.62, "end": 2437.8199999999997, "text": " Uh something that's going to be of shape one"}, {"start": 2438.8599999999997, "end": 2440.46, "text": " 512. 
Yeah, okay"}, {"start": 2440.46, "end": 2446.7799999999997, "text": " So and then we just do forward pass through the input blocks through the middle blocks and finally we pass through the attention"}, {"start": 2447.18, "end": 2453.1, "text": " Uh pooling layer and that's it depending on the module some of them are going to encode the temporal"}, {"start": 2453.42, "end": 2460.14, "text": " Uh, the time step information into our features some of the other modules will not do that such as the conf2d layer"}, {"start": 2460.14, "end": 2465.98, "text": " But in any case, that's pretty much it so I can now get back to the classifier training"}, {"start": 2466.3799999999997, "end": 2469.74, "text": " Uh, we can skip all the way to the loss here. So let's"}, {"start": 2470.3799999999997, "end": 2476.14, "text": " Skip skip over and here we are. So we got some logits and now basically we just do a cross entropy"}, {"start": 2476.7, "end": 2481.3399999999997, "text": " For the label we have and that's it. That's our loss. We just do the cross entropy loss"}, {"start": 2481.9, "end": 2484.8599999999997, "text": " Which is basically just a standard classifier training loss"}, {"start": 2484.86, "end": 2490.46, "text": " Okay, and now we just collect some information such as the accuracy at one accuracy at five"}, {"start": 2490.78, "end": 2495.1800000000003, "text": " We do some logging blah blah blah. We do the mean on top of the loss"}, {"start": 2495.6600000000003, "end": 2499.02, "text": " And basically here because we have micro batches"}, {"start": 2499.6600000000003, "end": 2503.42, "text": " What they do is in the in the zeroth micro batch"}, {"start": 2503.42, "end": 2510.06, "text": " We just clear up the gradients and then we keep on accumulating the gradients for each of the micro batches"}, {"start": 2510.06, "end": 2515.82, "text": " And we don't do the update until we do we go through all of the micro batches inside of uh,"}, {"start": 2516.06, "end": 2522.14, "text": " Instead of inside of a single mini batch and that's just again, uh minor optimization detail"}, {"start": 2522.38, "end": 2527.2599999999998, "text": " But we saw everything we needed to know basically that the main thing is that we have this"}, {"start": 2527.58, "end": 2535.2599999999998, "text": " Q-samp we call this function here, which makes us train not on the original images, but instead we train on"}, {"start": 2535.9, "end": 2536.86, "text": " x"}, {"start": 2536.86, "end": 2540.3, "text": " Uh t images so noisy versions of the images. Um"}, {"start": 2541.02, "end": 2544.2200000000003, "text": " Okay, guys, that's it. 
Uh just for um"}, {"start": 2544.78, "end": 2548.2200000000003, "text": " being detailed here, I want to show you the part with merging the"}, {"start": 2548.94, "end": 2549.9, "text": " um"}, {"start": 2549.9, "end": 2552.6200000000003, "text": " The temporal information I want to show you that part here"}, {"start": 2552.6200000000003, "end": 2560.1400000000003, "text": " So res block i'm going to enter there and i'm going to set i'm going to set a breakpoint inside of the forward function here"}, {"start": 2560.7000000000003, "end": 2564.38, "text": " Uh, just so that I can show you how we merge the information from the embedding here"}, {"start": 2564.38, "end": 2566.86, "text": " Okay, so i'm going to set a breakpoint here"}, {"start": 2567.9, "end": 2572.86, "text": " And let's now step let's do another pass through another micro batch and here we are"}, {"start": 2573.58, "end": 2576.54, "text": " So we have the embedding and we are gonna hit"}, {"start": 2577.26, "end": 2583.9, "text": " The res block at one point of time and here we are. So here is the um part where we merge the temporal information"}, {"start": 2583.9, "end": 2585.9, "text": " I want to show you this once more"}, {"start": 2585.9, "end": 2589.82, "text": " Again, it's fairly simple. We just do some additional processing. So let's see what these"}, {"start": 2589.82, "end": 2597.1000000000004, "text": " In m layers are so again, it's just like a like a basically a linear layer and an activation function"}, {"start": 2597.5800000000004, "end": 2599.42, "text": " Nothing fancy there"}, {"start": 2599.42, "end": 2601.42, "text": " And we end up with embedding"}, {"start": 2601.7400000000002, "end": 2607.02, "text": " Here of the shape 1 to 56 and now we just as you can see here"}, {"start": 2607.02, "end": 2611.84, "text": " We start adding dummy dimensions such that we can merge this temporal information"}, {"start": 2612.46, "end": 2614.46, "text": " into the"}, {"start": 2614.46, "end": 2621.98, "text": " Um image features, okay, so we can see here because this is set to true we enter this branch here. We have some layers"}, {"start": 2622.7, "end": 2627.58, "text": " Here and we split them into two parts. The first part is the normalization layer and then the rest"}, {"start": 2627.9, "end": 2632.62, "text": " And you can see this out there. I think it's just like a sequential. Yeah, you can see here how it's constructed"}, {"start": 2633.1, "end": 2637.9, "text": " Um, not that important. So the important part is here. We take our embedding out"}, {"start": 2638.46, "end": 2642.38, "text": " So this vector and we split it into two parts one is scale one is shift"}, {"start": 2642.38, "end": 2647.82, "text": " And I guess it's going to be 128 is going to be the shape here"}, {"start": 2648.46, "end": 2651.42, "text": " So one 128 and we added the dummy dimensions"}, {"start": 2651.42, "end": 2655.1800000000003, "text": " And so now if we do this we're going to have some broadcasting going on"}, {"start": 2655.1800000000003, "end": 2661.34, "text": " And we are going to going to merge them with the image features and this image features are going to be of shape"}, {"start": 2661.82, "end": 2665.82, "text": " As you can see here 128 128 128 and that's it guys"}, {"start": 2665.82, "end": 2669.02, "text": " I'm going to briefly show you how this looks like inside of my one note"}, {"start": 2669.02, "end": 2676.14, "text": " So again fairly simple thing. 
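The shape bookkeeping being described can be checked in a few lines; the numbers are chosen to match the ones quoted above (a (1, 256) embedding split into scale and shift, merged into (1, 128, 128, 128) features):

import torch

feat = torch.randn(1, 128, 128, 128)          # (N, C, H, W) image features inside the res block
emb = torch.randn(1, 256)                      # processed time embedding, shape (1, 256)
scale, shift = emb.chunk(2, dim=1)             # two (1, 128) halves
h = feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]   # dummy dims make it broadcast over H, W
print(h.shape)                                 # torch.Size([1, 128, 128, 128])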
We have so we have something that came out of the"}, {"start": 2677.1, "end": 2682.22, "text": " So we have a volume that came out of the unit model. That's going to look like something like this"}, {"start": 2682.78, "end": 2684.78, "text": " So here is the our volume"}, {"start": 2685.34, "end": 2687.66, "text": " So those are the image features"}, {"start": 2688.78, "end": 2690.78, "text": " From the latent space of our encoder"}, {"start": 2691.5, "end": 2694.72, "text": " And we saw that that's going to be like this is 128"}, {"start": 2694.72, "end": 2702.0, "text": " Number of channels and we saw that this is also 128 so the height and the width all of this is 128"}, {"start": 2702.64, "end": 2708.0, "text": " Whereas on the other hand we just have this is our vector. This is our temporal embedding"}, {"start": 2709.7599999999998, "end": 2713.3599999999997, "text": " So this is how it looks like it's only going to be 128 here"}, {"start": 2715.6, "end": 2720.56, "text": " But it's only one one here and then what broadcasting is going to do"}, {"start": 2720.56, "end": 2725.68, "text": " Is basically just copy paste this multiple times so that we end up with something like this"}, {"start": 2726.32, "end": 2730.16, "text": " So that we end up with multiple copies here here here"}, {"start": 2731.04, "end": 2736.56, "text": " Etc so we're just going to copy paste it here and then after we do that we can freely add it up"}, {"start": 2736.56, "end": 2742.0, "text": " So we can freely do addition on top of the image features and that's how we incorporate the temporal information"}, {"start": 2743.12, "end": 2745.12, "text": " Cool. Let's go back here"}, {"start": 2745.12, "end": 2750.7999999999997, "text": " I'm going to stop the execution of the classifier train script. We saw how to train the classifier"}, {"start": 2751.12, "end": 2756.88, "text": " Do let me know whether you found this explanation useful whether I can improve something any feedback is always appreciated"}, {"start": 2757.44, "end": 2760.16, "text": " And also if you if you like this video share it out with your friends"}, {"start": 2761.6, "end": 2765.52, "text": " Let's now focus on the sampling function. So I have prepared"}, {"start": 2766.16, "end": 2773.2799999999997, "text": " Some arguments again. I just took the arguments from the read me and I created I just kind of extracted them here"}, {"start": 2773.28, "end": 2776.32, "text": " So here are the arguments i'm using but you can find all of that"}, {"start": 2776.32, "end": 2780.0800000000004, "text": " You can find all of that basically in the read me file itself"}, {"start": 2780.0800000000004, "end": 2786.2400000000002, "text": " So i'm going to go back here and i'm going to now focus on the sampling function. 
Okay guys, let's run this script"}, {"start": 2787.0400000000004, "end": 2789.0400000000004, "text": " and"}, {"start": 2789.0400000000004, "end": 2793.0400000000004, "text": " Again, i'm going to scheme a lot of the i'm going to skip explaining a lot of details"}, {"start": 2793.0400000000004, "end": 2796.1600000000003, "text": " Which are irrelevant to understanding the actual"}, {"start": 2796.96, "end": 2799.6800000000003, "text": " Well, uh got in this particular case"}, {"start": 2799.68, "end": 2804.48, "text": " We want to see how the classifier guidance works like and that's what i'm going to focus on"}, {"start": 2805.2, "end": 2809.2799999999997, "text": " So what this function is going to return is the model which is going to be the unit model"}, {"start": 2809.2799999999997, "end": 2812.48, "text": " That's going to predict the epsilon. So the the noise"}, {"start": 2813.2799999999997, "end": 2818.72, "text": " And diffusion just contains bunch of those constants necessary to do the diffusion process"}, {"start": 2818.72, "end": 2821.12, "text": " So i'm going to skip over all of that"}, {"start": 2821.7599999999998, "end": 2824.96, "text": " We're just going to load a certain checkpoint"}, {"start": 2824.96, "end": 2829.28, "text": " We like as I said, I have a model path here"}, {"start": 2829.76, "end": 2834.56, "text": " So, let me just see whether I can show you yeah, so I went ahead and downloaded the model"}, {"start": 2834.56, "end": 2839.52, "text": " Into this models directory inside of my root directory and"}, {"start": 2839.52, "end": 2843.84, "text": " Everything is already explained in the read me so you don't have to worry about it"}, {"start": 2843.84, "end": 2846.88, "text": " So we just push the model to the gpu"}, {"start": 2846.88, "end": 2848.88, "text": " We convert it into fp16"}, {"start": 2849.68, "end": 2851.68, "text": " Again, just some"}, {"start": 2851.68, "end": 2854.72, "text": " Optimization details. I'm not going to focus on that"}, {"start": 2854.72, "end": 2857.68, "text": " We create the classifier. So that's again our encoder"}, {"start": 2857.68, "end": 2860.7999999999997, "text": " The the the half portion of the unit"}, {"start": 2861.7599999999998, "end": 2863.8399999999997, "text": " You can see here and we already"}, {"start": 2865.44, "end": 2869.44, "text": " Schemed through that code previously in the in the training script"}, {"start": 2869.44, "end": 2872.72, "text": " So i'm going to just ignore all of that and uh, whoops"}, {"start": 2872.72, "end": 2876.0, "text": " Well, obviously not so i'll have to just ask"}, {"start": 2876.64, "end": 2878.24, "text": " Let me just kind of do the"}, {"start": 2878.24, "end": 2883.4399999999996, "text": " Disable all the points all the breakpoints and then i'm going to click f5 and here we are"}, {"start": 2883.4399999999996, "end": 2886.4799999999996, "text": " Now i'm going to enable all of the breakpoints again"}, {"start": 2887.3599999999997, "end": 2889.3599999999997, "text": " And we just load the weights"}, {"start": 2889.3599999999997, "end": 2892.16, "text": " We load the state dictionary from the classifier path"}, {"start": 2892.16, "end": 2896.16, "text": " So I went ahead and downloaded this this model again"}, {"start": 2896.16, "end": 2898.8799999999997, "text": " And I put it inside of the models directory. 
That's it"}, {"start": 2899.4399999999996, "end": 2900.64, "text": " Nothing smart there"}, {"start": 2900.64, "end": 2904.4799999999996, "text": " We push the model the classifier on the to the gpu as well"}, {"start": 2904.48, "end": 2908.4, "text": " And we set it inside into the eval mode"}, {"start": 2908.4, "end": 2911.68, "text": " That's it. Now this is going to be the most important part"}, {"start": 2911.68, "end": 2916.2400000000002, "text": " This is where the gradients are calculated for the classifier"}, {"start": 2916.2400000000002, "end": 2920.08, "text": " I'm going to analyze that a bit later once we hit once we enter that function"}, {"start": 2920.08, "end": 2925.12, "text": " So here is where it's defined later. We're going to see how it basically works"}, {"start": 2925.12, "end": 2928.32, "text": " So that's it. We just define this function as well"}, {"start": 2928.32, "end": 2930.4, "text": " The model function and"}, {"start": 2930.4, "end": 2937.6, "text": " Now we're going to do a bunch of iterations until we create the number of samples that we can specify"}, {"start": 2938.48, "end": 2943.04, "text": " And you can see here I specified 100 but that's just arbitrary numbers so not that important"}, {"start": 2943.04, "end": 2949.84, "text": " Okay, so we create random numbers so random classes and depending on the batch size"}, {"start": 2949.84, "end": 2952.1600000000003, "text": " We have like four random classes being generated here"}, {"start": 2952.1600000000003, "end": 2953.76, "text": " So let me show you what those are"}, {"start": 2953.76, "end": 2957.84, "text": " If I go to debug console it's going to be a bunch of random classes"}, {"start": 2957.84, "end": 2963.84, "text": " If I go to debug console if I print that we can see we have four random classes"}, {"start": 2964.48, "end": 2968.0, "text": " And we don't I don't know directly what these correspond to"}, {"start": 2968.0, "end": 2972.56, "text": " But like let's say these are just classes 33 918 etc"}, {"start": 2972.56, "end": 2976.1600000000003, "text": " Okay, we don't care about the actual semantics behind the numbers"}, {"start": 2976.1600000000003, "end": 2980.2400000000002, "text": " We are going to store those the class information inside of this model"}, {"start": 2981.04, "end": 2984.4, "text": " Like keyword arguments dictionary"}, {"start": 2984.4, "end": 2990.7200000000003, "text": " So I'm gonna go there and because we're not using this ddim again ddim is just just it's just like"}, {"start": 2990.7200000000003, "end": 2997.2000000000003, "text": " This optimize it like this sampling method whereby if you have less than 50 diffusion steps"}, {"start": 2997.84, "end": 3004.32, "text": " They showed in that paper that they achieve better results compared to the original ddpm paper"}, {"start": 3004.32, "end": 3010.32, "text": " So only in the mode where you only sample for less than 50 steps roughly then ddim makes sense"}, {"start": 3010.32, "end": 3014.96, "text": " But here I think we have like I think we set like thousand steps so we're going to use the"}, {"start": 3014.96, "end": 3018.0800000000004, "text": " p sample loop instead of the ddim sample loop"}, {"start": 3018.0800000000004, "end": 3021.76, "text": " I'm not going to focus on the ddim part because there is a lot of formulas"}, {"start": 3021.76, "end": 3025.92, "text": " It's hard to explain the logic but like on the high level just treating it as a black box"}, {"start": 3026.48, "end": 3030.88, "text": " When 
you would want to use it is when you have less than 50 diffusion steps"}, {"start": 3030.88, "end": 3031.76, "text": " Okay, that's it"}, {"start": 3032.56, "end": 3034.6400000000003, "text": " Here is where the whole magic happens"}, {"start": 3034.6400000000003, "end": 3039.44, "text": " After that there's just some normalization going on permutation blah blah blah"}, {"start": 3039.44, "end": 3047.04, "text": " And we just collect the images here and finally yeah basically just a bunch of boilerplate code"}, {"start": 3047.04, "end": 3051.6, "text": " This is where the main logic is in so we're going to step inside of here"}, {"start": 3052.8, "end": 3060.16, "text": " We basically specify here the target shape so our images are going to be 64 by 64"}, {"start": 3060.16, "end": 3064.4, "text": " Later on they have the up sampling modules which we're also going to ignore"}, {"start": 3064.4, "end": 3066.0, "text": " But let's focus on this part here"}, {"start": 3066.0, "end": 3073.28, "text": " Okay so we pass in the model keyword arguments that have the class information"}, {"start": 3073.92, "end": 3078.88, "text": " We pass these special functions so the cond fn which we define here is going to calculate the"}, {"start": 3078.88, "end": 3082.8, "text": " gradients we're going to see how that comes into the play in a second"}, {"start": 3082.8, "end": 3088.4, "text": " So here is the p sample loop what it does is we're just going to keep on generating samples"}, {"start": 3088.96, "end": 3092.72, "text": " I'm just going to directly jump inside of this function"}, {"start": 3092.72, "end": 3096.72, "text": " Because everything else is just like basically boilerplate code"}, {"start": 3098.3999999999996, "end": 3103.2, "text": " So here we generate the random noise image"}, {"start": 3103.2, "end": 3107.9199999999996, "text": " Okay so we have the desired shape and the desired shape is as you can see here"}, {"start": 3108.48, "end": 3113.12, "text": " Four because we have four images in the batch and here we have the resolution of our target image"}, {"start": 3113.12, "end": 3118.16, "text": " And we just generate we just sample from a normal distribution here"}, {"start": 3119.4399999999996, "end": 3122.08, "text": " And then we just generate indices"}, {"start": 3122.08, "end": 3125.36, "text": " So depending on the number of time steps so we have 250 here"}, {"start": 3126.0, "end": 3130.56, "text": " We just generate these indices and remember that's because we want to start from the"}, {"start": 3130.56, "end": 3136.72, "text": " Completely noisy image so from the index 249 and we're going to work our way all the way to zero"}, {"start": 3136.72, "end": 3141.7599999999998, "text": " Where we're going to end up with completely denoised images from our underlying data"}, {"start": 3141.7599999999998, "end": 3145.84, "text": " Distribution which we learned during the training but this is now the sampling part"}, {"start": 3145.84, "end": 3154.4, "text": " Okay so indices let's see what t is it should be 249 as I said"}, {"start": 3154.4, "end": 3159.04, "text": " And here is where the whole magic happens in this p sample step"}, {"start": 3159.04, "end": 3164.1600000000003, "text": " So after 250 steps we are going to end up with our final image"}, {"start": 3164.1600000000003, "end": 3167.6000000000004, "text": " And that's one of the drawbacks of current diffusion models"}, {"start": 3167.6000000000004, "end": 3172.7200000000003, "text": " We need much more compute compared to 
like say GANs where you just need a single four pass"}, {"start": 3172.72, "end": 3177.68, "text": " And you end up with a like a very realistic image very crispy image"}, {"start": 3177.68, "end": 3180.48, "text": " Here we have much more we need much more compute okay"}, {"start": 3182.08, "end": 3187.9199999999996, "text": " So let's step inside of the p sample function and the main function here is going to be this p mean"}, {"start": 3187.9199999999996, "end": 3193.8399999999997, "text": " Variance that's just basically our reverse process that we learned"}, {"start": 3194.72, "end": 3200.48, "text": " After we calculate the out variable here we're going to generate noise again"}, {"start": 3200.48, "end": 3203.68, "text": " We're going to do the conditioning I'm going to get here so this is going to be important part here"}, {"start": 3203.68, "end": 3208.16, "text": " This is where we do the conditioning this is where we do the where we shift the mean"}, {"start": 3209.2, "end": 3217.44, "text": " Via the sigma times gradients expression and then we just form the image here okay"}, {"start": 3217.44, "end": 3222.8, "text": " So the this is the this is going to contain the x t minus one"}, {"start": 3224.16, "end": 3229.28, "text": " If the if we condition on the x t or alternatively if we condition off on x t plus one"}, {"start": 3229.28, "end": 3234.4, "text": " This is going to be x t in any case it's just like a single step closer to the final image"}, {"start": 3235.84, "end": 3241.84, "text": " Okay let's jump into the p mean variance here it is this is what we calculate as I said our"}, {"start": 3241.84, "end": 3247.1200000000003, "text": " Reverse process and let's kind of quickly scheme over over this part so I'm going to"}, {"start": 3247.1200000000003, "end": 3254.96, "text": " Set a break point here here we are we enter here and now we're going to pass what we're going to"}, {"start": 3254.96, "end": 3260.16, "text": " Pass a couple things first we're going to pass the the the noise we just sampled so that's going to"}, {"start": 3260.16, "end": 3267.92, "text": " Be I guess uh four images right so four three sixty four sixty four we're going to pass the time steps"}, {"start": 3267.92, "end": 3273.36, "text": " So those are going to be the two forty nine two forty nine two forty nine for each image in the batch"}, {"start": 3273.92, "end": 3278.4, "text": " And we're going to additionally pass the class information because if you recall"}, {"start": 3278.4, "end": 3282.64, "text": " From the paper explanation we are not only going to use classifier to guide"}, {"start": 3282.64, "end": 3288.24, "text": " We are also going to use class information to condition the model so those two do the same"}, {"start": 3288.24, "end": 3296.0, "text": " Thing but are kind of complementary so don't be confused by that okay let's step there we hit the"}, {"start": 3296.64, "end": 3303.3599999999997, "text": " Basically the this model fn that we saw previously and because class conditioning is set to true"}, {"start": 3303.3599999999997, "end": 3310.7999999999997, "text": " We're going to pass x so the noise images the time steps and the conditioning information all of that"}, {"start": 3310.8, "end": 3316.4, "text": " Let's now quickly step into the unit model into the forward pass and see how the class information"}, {"start": 3316.4, "end": 3323.2000000000003, "text": " Is incorporated into uh with the image features and hint hint it's literally the same thing as with"}, {"start": 
3323.2000000000003, "end": 3327.76, "text": " The temporal information pretty much we're going to see that in a second so here it is we have the"}, {"start": 3327.76, "end": 3334.6400000000003, "text": " Time steps we're going to embed those so sinusoidal embeddings then some learned stuff here and we end"}, {"start": 3334.64, "end": 3341.92, "text": " Up with a vector that's of shape i guess let's see what it is um it's going to be four seven sixty"}, {"start": 3341.92, "end": 3348.24, "text": " Eight because we're dealing with uh batches with four images okay and now because we have the the"}, {"start": 3348.24, "end": 3354.64, "text": " Classes set here the only thing we're going to do with our classes so here here are classes so we"}, {"start": 3354.64, "end": 3362.56, "text": " Have four classes here we're going to um embed them using this label embed oh i hate these warnings"}, {"start": 3362.56, "end": 3368.72, "text": " I should have toggled them off there is some config uh line that you can set to lon jason if you if you"}, {"start": 3368.72, "end": 3375.6, "text": " Don't want to see those but yeah i forgot to do that uh so let's see what this this label embed"}, {"start": 3375.6, "end": 3382.72, "text": " Does okay so you can see here is just a simple embedding layer and it's going to specify uh it's"}, {"start": 3382.72, "end": 3388.56, "text": " Going to have some number of dimensions for each of the classes nothing fancy there you can literally"}, {"start": 3388.56, "end": 3395.68, "text": " Treat that as um let's see what it gives us basically uh so if we do this and i do shape"}, {"start": 3396.24, "end": 3402.64, "text": " Yeah you can see it's going to transform uh our four scalars into four vectors of dimensionality"}, {"start": 3402.64, "end": 3409.7599999999998, "text": " 768 and then we're just going to simply add that on top of the temporal embedding and then everything"}, {"start": 3409.7599999999998, "end": 3415.84, "text": " Else proceeds as as usual we're just going to keep on incorporating that information so both the class"}, {"start": 3415.84, "end": 3420.7200000000003, "text": " Information as well as the temporal information directly merge them with the image features"}, {"start": 3420.7200000000003, "end": 3426.8, "text": " Using the logic i showed you with the scaling and the offset we saw in the res block and that's it"}, {"start": 3427.6000000000004, "end": 3434.2400000000002, "text": " I'm going to skip over all of this uh we don't care about it and now let's continue on so we are"}, {"start": 3434.2400000000002, "end": 3441.36, "text": " Here and we ended up with the model outputs if sigma is learned this should be four six"}, {"start": 3441.36, "end": 3447.52, "text": " Uh something right so four six because six because we all we predict the epsilon so that's the noise"}, {"start": 3447.52, "end": 3454.08, "text": " And we also predict the sigma so that's the the covariance um and now let's continue here"}, {"start": 3454.08, "end": 3457.6, "text": " We saw all of this logic in the previous video so i'm going to kind of skim through this"}, {"start": 3457.6, "end": 3463.6, "text": " We split the model outputs into two parts so again the epsilon and the variance and then"}, {"start": 3464.1600000000003, "end": 3468.8, "text": " Well actually it's not the variance it's it's the v vector let me show you in a second formula"}, {"start": 3468.8, "end": 3475.28, "text": " So again if you recall from the previous video this is the formula we used so we are 
predicting the v's here"}, {"start": 3475.28, "end": 3480.8, "text": " And then we're going to form the variance by doing this expression here so this is the expression calculation we just saw"}, {"start": 3480.8, "end": 3486.32, "text": " So we calculate all of those we get the variance ultimately here and now we need to predict the mean"}, {"start": 3486.32, "end": 3489.84, "text": " So here is how we predict the mean i'm going to skim through all of this here"}, {"start": 3490.8, "end": 3494.32, "text": " We just need to first predict the x start so that's the x0"}, {"start": 3494.32, "end": 3501.36, "text": " And then we use this posterior mean variance to get the model mean so this is all of this is same as in the previous video pretty much"}, {"start": 3501.36, "end": 3507.36, "text": " So we got the mean we have the variance and we finally return all of those variables so the mean"}, {"start": 3508.48, "end": 3513.6000000000004, "text": " Uh variance model log variance etc etc by the way let me just quickly show you what those are"}, {"start": 3514.0800000000004, "end": 3518.2400000000002, "text": " So here are we in the one note explanation I showed you in the beginning of the video"}, {"start": 3518.48, "end": 3521.36, "text": " So this is what we currently generated we generated the"}, {"start": 3521.36, "end": 3528.08, "text": " Um these two so we have the mean and we have the sigma and now we need to do this shift part"}, {"start": 3528.2400000000002, "end": 3532.4, "text": " So we need to do the shift part and then we can sample our our our image and that's it"}, {"start": 3532.96, "end": 3537.52, "text": " Fairly simple. Okay. Let me go back here. I'm going to return all of these variables"}, {"start": 3538.08, "end": 3540.08, "text": " I'm going to return all of those"}, {"start": 3540.2400000000002, "end": 3544.26, "text": " We generate we sample from a normal distribution"}, {"start": 3545.36, "end": 3547.36, "text": " We have this masking stuff going on"}, {"start": 3547.36, "end": 3554.96, "text": " Uh that um because we have if you recall we have a discrepancy for t equals zero time step versus all of the other steps"}, {"start": 3555.28, "end": 3561.36, "text": " Uh things look a bit different, but like I guess that's just a minor detail again. 
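The on-screen formulas referenced in this part, reconstructed from the Improved DDPM parameterization the repo follows: the network outputs a vector v that interpolates the log-variance, and the mean is recovered through the predicted x_0:

\Sigma_\theta(x_t, t) = \exp\big(v \log \beta_t + (1 - v) \log \tilde\beta_t\big)

\hat{x}_0 = \frac{1}{\sqrt{\bar\alpha_t}}\big(x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)\big), \qquad \mu_\theta(x_t, t) = \tilde\mu_t(x_t, \hat{x}_0),

where \tilde\beta_t and \tilde\mu_t are the closed-form variance and mean of the posterior q(x_{t-1} \mid x_t, x_0).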
This is where the whole magic happens"}, {"start": 3561.52, "end": 3567.04, "text": " This is where the shifting of the mean happens using the sigma using the gradients of our classifier"}, {"start": 3567.52, "end": 3570.8, "text": " Uh, and also like just a note, uh, there is a bug here"}, {"start": 3570.96, "end": 3575.52, "text": " I submitted an issue and the authors of open from opening I basically confirmed that's indeed"}, {"start": 3575.52, "end": 3578.56, "text": " Uh an issue i'm gonna um explain you a bit later"}, {"start": 3578.56, "end": 3582.56, "text": " Uh what that exactly is but the funny thing is it did not even matter"}, {"start": 3582.56, "end": 3585.52, "text": " It's like that's the thing with uh training machine learning models"}, {"start": 3585.52, "end": 3589.2, "text": " Sometimes you have a bug but the models are forgiving and you will just get"}, {"start": 3589.68, "end": 3594.8, "text": " Something working maybe a bit suboptimal, but still there is no way you can figure out there is a bug"}, {"start": 3594.8, "end": 3601.12, "text": " It's not like with software systems where something when you when you when you like a basically develop a traditional software system"}, {"start": 3601.12, "end": 3602.48, "text": " If something does not work"}, {"start": 3602.48, "end": 3609.2, "text": " You will know that something will crash and the program will not work. So yeah ml ml is a bit different. Okay, so"}, {"start": 3610.16, "end": 3615.2, "text": " Here we form a new mean and a new mean is formed by adding"}, {"start": 3615.68, "end": 3622.08, "text": " On top of our old mean we just add the sigma times the gradient. So again, that's that's that's the whole idea"}, {"start": 3622.4, "end": 3630.2400000000002, "text": " So here again, we just have uh, we pass the um, we have the conditional con function which we defined in the main script"}, {"start": 3630.24, "end": 3632.24, "text": " we pass our"}, {"start": 3632.24, "end": 3637.7599999999998, "text": " Uh means the mean and the and the covariance we just calculated uh here above"}, {"start": 3638.24, "end": 3643.68, "text": " So this basically contains the x t and this is the x t plus one and this is the time step in any case"}, {"start": 3643.7599999999998, "end": 3646.3999999999996, "text": " Let's step inside of this function and let's see what's going on"}, {"start": 3646.56, "end": 3646.8799999999997, "text": " Okay"}, {"start": 3646.8799999999997, "end": 3652.08, "text": " So first we calculate the gradient and then as we recall from the equation"}, {"start": 3652.3999999999996, "end": 3655.8399999999997, "text": " we just add on our new on our old mean we add the"}, {"start": 3655.84, "end": 3662.0, "text": " The variance we calculated times the gradient and that's it. So that's that's easy. Let's now see that this part here"}, {"start": 3662.08, "end": 3664.4, "text": " Let's step into this function here. 
Okay, so"}, {"start": 3665.2000000000003, "end": 3667.2000000000003, "text": " First thing we do is we take our"}, {"start": 3668.32, "end": 3669.76, "text": " noisy image"}, {"start": 3669.76, "end": 3671.76, "text": " Uh, we detach it from the computational graph"}, {"start": 3671.76, "end": 3677.76, "text": " We said it requires graph to true so that we can calculate the gradients for that x because that's what we ultimately care about"}, {"start": 3677.76, "end": 3681.28, "text": " right, we want to have the gradients of x with respect to the"}, {"start": 3682.2400000000002, "end": 3683.84, "text": " log"}, {"start": 3683.84, "end": 3688.08, "text": " Probability of of our target class from the classifier. Okay, so"}, {"start": 3688.7200000000003, "end": 3691.28, "text": " We pass it through the classifier. We get the logits"}, {"start": 3692.48, "end": 3698.4, "text": " And unfortunately, I stepped inside of the classifier. We don't really care about it. Let's just kind of um"}, {"start": 3699.04, "end": 3704.98, "text": " Continue here. We don't care about it. So we have logits now we just form the log softmax"}, {"start": 3705.84, "end": 3707.76, "text": " So we have the log props"}, {"start": 3707.76, "end": 3710.88, "text": " So first, let's see what what the dimensionality of log props is"}, {"start": 3710.88, "end": 3717.2000000000003, "text": " So if we do shape here we have four because if we have four images and we have thousand lodges because we're dealing with image"}, {"start": 3717.2000000000003, "end": 3724.48, "text": " And and so what we do here is we basically extract the target. So why is again remember? Why is the the target?"}, {"start": 3725.04, "end": 3732.1600000000003, "text": " So those are the ground truth classes. We use those to take only those ledges logits. So we have the selected ones"}, {"start": 3732.1600000000003, "end": 3734.8, "text": " so now we if I print this these are the"}, {"start": 3734.8, "end": 3741.1200000000003, "text": " The um, uh target classes. So so these are the logits we need to maximize if we want to have"}, {"start": 3741.84, "end": 3744.6400000000003, "text": " An image from that particular class if that makes sense"}, {"start": 3744.96, "end": 3748.32, "text": " And so what we do here was just and this is a part that kind of confused me"}, {"start": 3748.5600000000004, "end": 3752.1600000000003, "text": " We sum up all of these uh different, um, like, um"}, {"start": 3753.04, "end": 3756.9, "text": " Logits, sorry not logits the log props the log probabilities"}, {"start": 3757.28, "end": 3762.8, "text": " And then we find the gradient with respect to the input. So the input is again a batch"}, {"start": 3762.8, "end": 3769.28, "text": " So it's going to be four three sixty four sixty four and we just multiply so this is the classifier scale I mentioned"}, {"start": 3769.28, "end": 3773.1200000000003, "text": " So that's denoted as s in the paper. So let me show you that in a second here"}, {"start": 3773.44, "end": 3777.6000000000004, "text": " Let's go back to the paper and let's find the uh, that part"}, {"start": 3778.2400000000002, "end": 3783.04, "text": " Somewhere here. So I think it's down here. Okay, so here's here is the part"}, {"start": 3783.04, "end": 3785.04, "text": " We're just computing we have the s"}, {"start": 3785.76, "end": 3791.92, "text": " Which is on the right hand side in the code and we have the gradients here. 
So again, let's go back to the"}, {"start": 3791.92, "end": 3799.36, "text": " Uh code here. So this is the s and this is the gradient we are we are computing and the reason this works is because um"}, {"start": 3799.92, "end": 3803.92, "text": " If you do a sum you'll end up with so our loss here is going to be"}, {"start": 3804.56, "end": 3810.96, "text": " L1 so that's like l1 is just my uh, shorthand notation for log probability for the first image"}, {"start": 3811.36, "end": 3816.64, "text": " So for the for the log probability for the first image plus l2 plus l3 plus l4"}, {"start": 3816.64, "end": 3822.96, "text": " And if we do a gradient with respect to x because x1 so the first image from this batch only"}, {"start": 3823.2, "end": 3830.16, "text": " Influences this particular loss. Okay x2. So that's the second image from from the batch only influences the second one"}, {"start": 3830.64, "end": 3839.2799999999997, "text": " Okay, so because they only influence the log probes that correspond to to their index and because of that if we do gradient here"}, {"start": 3839.92, "end": 3845.8399999999997, "text": " Uh, each of the images here will have only the so basically the first image"}, {"start": 3845.84, "end": 3847.84, "text": " will have um"}, {"start": 3847.84, "end": 3849.1200000000003, "text": " l1"}, {"start": 3849.1200000000003, "end": 3850.2400000000002, "text": " dx"}, {"start": 3850.2400000000002, "end": 3855.44, "text": " The second image will be uh l2 dx etc. Etc. Okay"}, {"start": 3855.44, "end": 3861.84, "text": " So basically we'll return a batch of gradients where each of the uh images inside of that batch"}, {"start": 3862.2400000000002, "end": 3867.52, "text": " Is basically gradients with respect to the corresponding loss. So if I go back here"}, {"start": 3868.56, "end": 3873.76, "text": " We basically end up with a gradient here and the gradient is going to be of the same shape as the images"}, {"start": 3873.76, "end": 3878.0, "text": " So here 436464 and that's pretty much it"}, {"start": 3878.1600000000003, "end": 3884.5600000000004, "text": " Okay, so now we just do the the the logic we saw in the paper and guys that's pretty much it"}, {"start": 3884.96, "end": 3886.96, "text": " We're done. We now form"}, {"start": 3887.2000000000003, "end": 3889.92, "text": " Uh using the new mean and the log variance"}, {"start": 3890.2400000000002, "end": 3895.0400000000004, "text": " We just sample by doing this expression by just adding the log variance times the noise"}, {"start": 3895.36, "end": 3899.6800000000003, "text": " And that's how we get the uh next image in the reverse process"}, {"start": 3899.68, "end": 3903.52, "text": " Now this just repeats on and on but this was the main this was the gist of the paper"}, {"start": 3903.8399999999997, "end": 3907.2799999999997, "text": " So where is the bug I mentioned? I'm going to show you the issue in a second"}, {"start": 3907.2799999999997, "end": 3911.2799999999997, "text": " But like basically what happens here is they passed xt plus one and t plus one"}, {"start": 3911.6, "end": 3915.2, "text": " So these are the uh, they should have passed the out mean"}, {"start": 3915.68, "end": 3920.96, "text": " Instead of xt plus one because this is xt and this is xt plus one and so"}, {"start": 3921.44, "end": 3925.52, "text": " It might be a bit subtle. 
So let's let's let's try and see whether I can explain it"}, {"start": 3925.52, "end": 3932.08, "text": " So as you can see here they pass the the the the the xt plus one. So if I go back here"}, {"start": 3932.64, "end": 3938.32, "text": " Uh, you can see that then they call this conditional function with xt plus one and t plus one"}, {"start": 3938.88, "end": 3940.64, "text": " So we end up"}, {"start": 3940.64, "end": 3943.12, "text": " using xt plus one t plus one here"}, {"start": 3943.68, "end": 3945.68, "text": " Uh, so that means"}, {"start": 3945.68, "end": 3952.8, "text": " That we calculate we pass xt plus one t plus one here. We get the logits, but if you recall from the actual paper"}, {"start": 3952.8, "end": 3958.7200000000003, "text": " Uh, we do not need to we we need to condition on on x. Let me just find the the expression here"}, {"start": 3959.44, "end": 3960.96, "text": " blah blah blah"}, {"start": 3960.96, "end": 3962.0800000000004, "text": " so"}, {"start": 3962.0800000000004, "end": 3964.0, "text": " You can see here"}, {"start": 3964.0, "end": 3969.1200000000003, "text": " That we want to condition on xt and not on xt plus one. Otherwise the gradients we get here"}, {"start": 3970.0800000000004, "end": 3975.84, "text": " Are again from the xt plus one not from the xt and all of that kind of messes up the logic"}, {"start": 3975.84, "end": 3977.84, "text": " I just explained it here. So mu"}, {"start": 3977.84, "end": 3984.2400000000002, "text": " Mu is from xt and we want to use that same mu and pass it inside of the classifier and not the previous image"}, {"start": 3984.8, "end": 3992.88, "text": " And despite all of this, it's very subtle. I don't even know how I noticed it. I was fairly I guess pedantic stepping through this code"}, {"start": 3993.84, "end": 3995.04, "text": " Uh"}, {"start": 3995.04, "end": 3997.6000000000004, "text": " They they get the same results pretty much. They didn't even notice it"}, {"start": 3997.6000000000004, "end": 4001.76, "text": " They noticed it post-hoc after the paper was published or something. I'm going to show you the issue in a second"}, {"start": 4001.76, "end": 4003.76, "text": " So here is the issue"}, {"start": 4003.76, "end": 4006.4, "text": " I opened up I think a couple of days ago"}, {"start": 4006.4, "end": 4008.4, "text": " um"}, {"start": 4008.4, "end": 4012.48, "text": " So I think it's closed already so bugs blah blah blah. 
So here's my issue"}, {"start": 4012.48, "end": 4014.48, "text": " So I created an issue here as you can see here"}, {"start": 4014.48, "end": 4016.0, "text": " I think I found two bugs"}, {"start": 4016.0, "end": 4022.8, "text": " So the first one we shouldn't shouldn't we pass the out of mean so the x of t instead of x of t plus one here"}, {"start": 4022.8, "end": 4026.48, "text": " Uh, and similarly the the smaller time step instead of t"}, {"start": 4026.48, "end": 4032.48, "text": " Uh, and then I linked to the particular line of code which is the same line of code I showed to you guys"}, {"start": 4032.48, "end": 4034.48, "text": " So that's here"}, {"start": 4034.48, "end": 4043.68, "text": " And uh, basically, um, one of the authors replied yes, this is indeed a slight bug we which we noticed shortly after releasing our work"}, {"start": 4043.68, "end": 4050.0, "text": " However, we did try ablating using the correct formula and found that it did not noticeably change results again"}, {"start": 4050.0, "end": 4052.8, "text": " That's the tricky part about training machine learning models"}, {"start": 4052.8, "end": 4057.68, "text": " And then the second part I was kind of confused with the summation of of of those losses"}, {"start": 4057.68, "end": 4062.08, "text": " But it turned out it's quite. Um, it makes a lot of sense. Uh, it was just a minor"}, {"start": 4062.08, "end": 4068.4, "text": " Uh, mistake on my on my side anyway in any case I can kind of link this issue if you want to dig deeper into it"}, {"start": 4068.4, "end": 4070.4, "text": " Uh, I'll have it down in the video description"}, {"start": 4070.4, "end": 4078.48, "text": " So guys, that's pretty much it. Um, I showed you how the how the paper what the paper introduced the two main things are"}, {"start": 4078.48, "end": 4084.08, "text": " Improving the unit architecture. The second thing is uh, adding this classifier guidance technique"}, {"start": 4084.08, "end": 4086.4, "text": " And then I showed you how to train the classifier"}, {"start": 4086.4, "end": 4090.4, "text": " Uh in in this classifier transcript we saw how to train the classifier"}, {"start": 4090.4, "end": 4097.76, "text": " This classifier transcript we saw how to train it on on basically noisy images and then use that very same classifier"}, {"start": 4098.88, "end": 4104.88, "text": " Inside of this classifier sample script to basically shift the mean and thus"}, {"start": 4106.56, "end": 4111.36, "text": " Like sample images from the class conditioned diffusion model"}, {"start": 4112.4, "end": 4114.32, "text": " There is a lot of details"}, {"start": 4114.32, "end": 4117.52, "text": " Hopefully you picked up something useful from this video if you did"}, {"start": 4117.52, "end": 4122.580000000001, "text": " Uh, consider subscribing also share the video out and until next time. Bye. Bye"}]
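To make the classifier-guidance logic stepped through above concrete, here is a minimal PyTorch sketch of the three pieces: the gradient of the target-class log-probability with respect to the noisy image, the mean shift, and the final reverse-process sample. Function and argument names are illustrative rather than the repo's exact API; the classifier scale is the "s" hyperparameter mentioned in the walkthrough.

```python
import torch
import torch.nn.functional as F

def cond_grad(classifier, x_t, t, y, classifier_scale=1.0):
    # Gradient of log p(y | x_t) with respect to the noisy image x_t.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)                  # leaf tensor we differentiate w.r.t.
        logits = classifier(x_in, t)                               # (B, num_classes)
        log_probs = F.log_softmax(logits, dim=-1)
        selected = log_probs[torch.arange(len(y), device=y.device), y]  # log-prob of each image's target class
        # Summing is safe: image i only influences its own term, so the gradient
        # w.r.t. x_in decomposes into per-image gradients (the point made above).
        grad = torch.autograd.grad(selected.sum(), x_in)[0]
    return grad * classifier_scale                                 # the "s" scale from the paper

def guided_reverse_step(mean, variance, log_variance, grad, t):
    # Shift the mean: mu_new = mu + Sigma * grad, then sample x_{t-1} from
    # N(mu_new, exp(log_variance)); no noise is added at the final step (t == 0).
    new_mean = mean + variance * grad
    noise = torch.randn_like(mean)
    nonzero_mask = (t != 0).float().view(-1, *([1] * (mean.dim() - 1)))
    return new_mean + nonzero_mask * torch.exp(0.5 * log_variance) * noise
```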
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=y7J6sSO1k50
Ultimate Guide to Diffusion Models | ML Coding Series | Denoising Diffusion Probabilistic Models
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this 3rd video of my ML coding series, we do a deep dive into diffusion models! Diffusion is the powerhouse behind recent text-to-image generation models such as OpenAI's DALL-E 2, Google's Imagen, etc. I first give you some context by going over 2 seminal diffusion papers: * Denoising Diffusion Probabilistic Models * Improved Denoising Diffusion Probabilistic Models And then we dig deep into the actual code analysis comparing mathematical formulas behind diffusion with actual code implementation. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ DDPM paper: https://arxiv.org/abs/2006.11239 ✅ Improved DDPM paper: https://arxiv.org/abs/2102.09672 ✅ Improved DDPM code: https://github.com/openai/improved-diffusion ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 (Paper) Denoising Diffusion Probabilistic Models 00:16:00 (Paper) Improved DDPMs 00:23:10 (Coding starts) Training DDPMs 00:24:50 UNet model creation walk-through 00:35:05 Gaussian Diffusion model creation walk-through 00:43:50 Training loop 00:56:08 Computing noise and variance (forward prop through UNet) 01:04:00 Variational lower bound loss 01:17:25 MSE loss 01:19:23 Sampling from diffusion models 01:26:50 Sampling an actual image 01:28:03 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #diffusion #ddpm #coding
What's up guys, in this video I'm focusing on covering diffusion models. So basically the family of models that are powering some of the most famous AI models over the last couple of months such as I guess the LEED 2, Imagine from Google, Glide from OpenAI as well and many other models. So the idea, the video will be quite ambitious so I'll try and first walk you through two of the seminal papers behind diffusion models and then I'm going to actually go through the code base. So the skimming of the paper just serves the purpose of me showing you the formulas, the mathematical formulas which we can then map and relate to actual code. So it's not going to be a deep dive into the papers per se but hopefully this gives you the necessary context to kind of later on cope with the actual code. Okay, having said that, I'm covering two papers. One is denoising diffusion probabilistic models. That's pretty much the paper that made diffusion models practical and then I'm going to cover improved denoising diffusion probabilistic models from OpenAI and I'm actually going to be covering the code base behind this paper. Okay, but let's start with this one. So this is going to introduce the necessary basics if you haven't already learned anything about diffusion models, hopefully that's going to be some context for you. I did cover the glide paper so I did cover some diffusion models there so do check it out. I'm going to link it somewhere here but this video, having said that, this video is fairly self-contained so yeah, you can continue watching. Okay, so here is how the diffusion model looks like on a high level. So the idea is to start from an image here and then you slowly, gradually start adding Gaussian noise on top of your image. So that's called the forward diffusion process and ultimately you end up with an image such as this one which is, as you can see here, basically a complete noise of an image. And now if you learn how to reverse this process, so this is the forward process here as you can see, if you learn how to reverse this process, so denote it as P theta, if we learn how to do that, then basically if we start a training procedure and we learn to do this for all of the images in our dataset, you eventually learn the underlying data distribution and then you can basically start from a random noise sample and start denoising that image until you end up with a hallucinated new image from your underlying data distribution. So it's going to be a novel image obviously. Okay, so that's a very high level explanation there. Let me now walk you through the formulas. So again, this is how the forward process looks like. This is basically the joint distribution and if we want to sample, if we want to do one step of the reverse process, here is how we do it. So basically we are going to learn a neural network that's going to predict the mu of theta and the sigma of theta, which are basically the mean and the variance of the Gaussian. So now there is a bunch of theory for why this is possible. So they show that if you have steps small enough, basically that means that if your forward process is adding Gaussians, they show that you can also approximate the reverse process as sampling from a Gaussian. So it's not kind of completely obvious why this is the fact, but yeah, we'll have to take it for granted. So the idea is to learn those two. Let's continue here. So here is the actual forward process. You can see how we sample from the forward process here. 
Basically you kind of downscale the current image and then, so this is how you form the mean. You take the current image, you downscale it and basically then you have this like, covariance matrix and then you just sample from this Gaussian here to end up with the X of t. Okay? So again, we condition on the X t minus one and by sampling from this distribution here, we end up with X t. Okay? So that's basically going from here all the way to here. That's one step. What's next? Now, they show that you can basically train these models by optimizing this variational bound. The idea is, so here we have the log likelihood of our data. We want to obviously maximize. We want to tweak our model such that all of the data points from our data set are super likely under our model. So that's the idea of how we train most of these, like all of these pretty much generative models, not only diffusion models. And now this is a standard thing with variational bounds. Basically you find a surrogate loss, which is like a lower bound basically for this loss here. And then by maximizing it, basically you're certain that the likelihood of your data is going to be at least as big. So that's the main idea there. Now I'm not going to dig into formulas here. We're going to later see how they decompose this into an actual expression that's going to be leveraged later on in the code. But here is the equation for now. Now this is an important, this here is a super important finding. So instead of having to sample every, so during the forward process, instead of having to sample multiple times, and in practice they use thousand steps, they show you can literally sample arbitrary X of t starting from X zero, where X zero is your original image, by sampling from this Gaussian here. So these are some important coefficients we'll be seeing in the code as well. So we have these alpha t's, which are one minus beta t's. And beta is basically, as you can see here, it's basically just like your covariance matrix here. In practice, the original DDPM, so this paper here, used fixed, like fixed schedules, whereas the later papers, such as the one from OpenAI, the improved one, used learnable like schedules. And when I say schedule, I just mean how does beta vary when we go across the whole forward process. Then there is this alpha t bar, which is just a product of all of these alphas, which are defined here, starting from one, all the way to t. So one is the, we start from the, usually one is denoted as the start of the process before the image has been deteriorated, and then as t grows, we're going towards image becoming pure Gaussian noise. So here is the expression, you basically can use square root of this alpha t bar, multiply that, so we multiply the original image here, this is how we form the variance, and then we just sample from this distribution and we get the x of t. So that means we immediately get arbitrarily noisy image. Nice. I mentioned the loss we'll be using, so here is the loss just reshaped into different forms. So this is the form that's going to be actionable, so this is the form we'll be using in the code. So we have L0, LT minus one, and LT, so these are kind of three classes of similar components here. This one is super important, basically KL divergence between this here, as you can see, is the one step of the reverse process, and we're going to do KL divergence with this basically forward process posterior. 
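As a quick sketch of the forward-process formulas just discussed, here are both ways of producing x_t: stepping once from x_{t-1}, and jumping straight from x_0 using the cumulative product of the alphas (the "nice property"). Names and shapes are illustrative, not the repo's exact implementation.

```python
import torch

def forward_step(x_prev, beta_t):
    # One step of the forward process: q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I).
    return (1.0 - beta_t) ** 0.5 * x_prev + beta_t ** 0.5 * torch.randn_like(x_prev)

def q_sample(x0, t, betas):
    # Jump directly to step t: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    # where alpha_bar_t is the cumulative product of alpha_s = 1 - beta_s up to t.
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))    # broadcast over image dims
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps, eps
```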
And then we have this component, which will actually, in this paper, they can ignore it because this is going to be like pure Gaussian, and because they don't learn that the variance is basically, you can ignore this term here, and this one here is basically the negative log likelihood of the image conditioned on the previous step. Anyways, a lot of details I'm going to have to kind of hand wave explain all of this just for you to understand and see the formulas, that's the important part. Okay, so let's see what's up here. So we can actually calculate, that's a cool thing, we can calculate this posterior of the forward process analytically here, and you can see how this mu t tilde is computed, and you can see how beta tilde is computed here. So these expressions are all going to be here in the code, so just kind of take mental notes here. So I'll be comparing the formulas with code like side by side, so that's going to probably be useful for you guys. So anyways, because we have this is a Gaussian, and this is a Gaussian, that means we'll end up when you do the KL divergence between two Gaussians, you end up with simple analytical expressions, we're going to see those a bit later. Okay, so what's up next? So actually here, so here is how these LT minus ones are going to simplify to, so we simplify them to these expressions here. Basically we just find the MSE, so the mean squared error between the means, so this is going to be our learnable one, this is the one we get from the forward posterior. And that can be further, like basically just simple algebra here, because we know that Xt equals this, and that's from the so called nice property here, so that's this property here. So you can kind of imagine that sampling from this Gaussian is actually equivalent to computing the following expression. So if you want to get Xt, you basically do the following. So you do this square root alpha t bar X zero, and you basically add plus square root because this is variance, we want to have like standard deviation, so minus alpha bar t, so it's going to be under square root, and then we just multiply times epsilon, where epsilon is just basically your normal distribution. Okay, so that's the Gaussian with the mean of zero and variance of one. Okay, so let's go back here, and then we just plug in Xt here, so we, sorry, we just calculate X zero here, we just kind of do the algebra, and then we plug it in here, and this is the next expression, and then this simplifies because we know how this is computed up here, so here is how we compute that one, so it's just like bunch of manipulations of symbols, and I'm kind of scheming over it. So you end up with this expression, and then you basically, this is what you want your neural network to learn. So this is going to be your diffusion model, it's going to learn how to predict the noise here. Okay, so let's see, that simplifies to what, basically we want to have the learnable mean needs to be equal to whatever we had here, so we basically want to have this thing, we want it to be like the same as this term here, because then the loss will obviously go to zero, that's what I show here, and if we achieve that, if we learn that mean, then how we sample, how we'll be sampling from the reverse process is again a simple computation, so here's the mean, so that's the same expression as this one here, and then we just add the, basically standard deviation times this vector here, which is going to be sampled from a normal distribution, okay? 
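The forward-process posterior mentioned above has a closed form, and it is worth seeing the mu-tilde / beta-tilde expressions as code. This is a hedged sketch for a single timestep t, with my own function names.

```python
import torch

def q_posterior(x0, x_t, t, betas):
    # q(x_{t-1} | x_t, x_0) = N(mu_tilde, beta_tilde * I) with
    #   mu_tilde   = [sqrt(a_bar_{t-1}) * beta_t / (1 - a_bar_t)] * x_0
    #              + [sqrt(alpha_t) * (1 - a_bar_{t-1}) / (1 - a_bar_t)] * x_t
    #   beta_tilde = (1 - a_bar_{t-1}) / (1 - a_bar_t) * beta_t
    alphas = 1.0 - betas
    a_bar = torch.cumprod(alphas, dim=0)
    a_bar_prev = torch.cat([torch.ones(1), a_bar[:-1]])            # right-shifted, a_bar_0 := 1
    coef_x0 = betas[t] * a_bar_prev[t].sqrt() / (1.0 - a_bar[t])
    coef_xt = alphas[t].sqrt() * (1.0 - a_bar_prev[t]) / (1.0 - a_bar[t])
    mean = coef_x0 * x0 + coef_xt * x_t
    variance = (1.0 - a_bar_prev[t]) / (1.0 - a_bar[t]) * betas[t]
    return mean, variance
```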
And ultimately, when you simplify even further that expression, you end up with this type of parameterization, so you can either learn a neural network that's going to predict the mean, or you can learn instead just this term here, just the actual noise, and that's what I end up doing, so this expression, as you can see, is just a mean squared error between the sample noise, so we're basically learning what type of a noise was added to our image, and then we have these weights for each of our terms, LT minus one, so this is again, this is LT minus one, those are the last components we saw, and we're going to see that this weight basically is a function of the step in the diffusion process. Okay, I know this is a lot of formulas, bear with me, it's going to become much easier as the video progresses, because I'm going to start introducing code as well, but let me quickly just explain what this means, so we start from an image here, this is some image, and then basically we add on top of it some noise, it's going to be something inside of there, let me just draw some human being, and then after adding the noise, like this, let me change the color, so let's imagine we added a bunch of noise here, so now your diffusion model is learning this green stuff, and if it learns the green stuff, then you basically know how to go backwards, okay? So you learn the green stuff, the noise, that's the epsilon here, and you know how to denoise your images. Cool, it's kind of magical, it doesn't click, you probably won't understand it immediately, it's not completely straightforward to understand why this works, I'm still struggling to be honest, but the formulas are here, and we're going to follow them for the time being. Okay, I'm going to skip this, and starting from this LT minus one, they basically derive empirically this simplified objective, where they drop the term here, so the term that depends on the basically time step of the diffusion process, and we end up with this one here, and so as you can see, basically we're doing simple MSC loss between the noise, so we're trying to predict this noise added on top of the image, and we do that for various time steps of the diffusion process. So basically what we're doing again is we start from an image here, we add some noise on top of it, we end with an image, and then we keep on doing that for like, let's say a thousand steps, and we end up here, and basically we'll be trying to, at each step of the way, understand which noise was added here, which noise was added here, which noise was added here, and by doing that we learn the reverse process of the diffusion, and that's going to lead us to like a powerful generative model. Cool, let's continue here, let me just show you, so here are some images they get from the model, not that important because you already know that these models are super powerful, so here you can see how the diffusion process looks like in practice, you start with the noisy image, and then gradually keep on denoising it until you get to the sampled image, which is the image sample from the underlying data distribution, okay? 
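Putting the epsilon parameterization together, the simplified training objective described above boils down to a few lines. This is a hedged sketch: `model(x_t, t)` is assumed to return a noise prediction with the same shape as the image batch.

```python
import torch
import torch.nn.functional as F

def simple_loss(model, x0, t, alphas_cumprod):
    # L_simple: noise x_0 directly to step t, then regress the injected noise with MSE
    # (the per-step weights of the L_{t-1} terms are dropped, as discussed above).
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return F.mse_loss(model(x_t, t), eps)
```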
Here they show how, depending on from which latent you start from, if you start from a latent like X thousand, so that means thousand steps of diffusion, and then you start three independent reverse processes because they are stochastic, you end up with three different images, but as soon as we start taking like latents that come from later in the reverse process, so let's say X 500, then you can see that three independent reverse processes lead to images that are quite similar. Having seen all of that, now I'm going to quickly walk you through the innovation that the second paper brought, so it's basically building directly on top of the denoising diffusion probabilistic models or DDPMs for short. Let me show you main contributions of this paper, so first of all, they have a learnable variance schedule, so this is how they are going to do it, remember this formula, we're going to see it a bit later, so basically they predict this vector V, and then they kind of do interpolation between beta T and beta tilde T, which is the posterior variance, and this is the forward process variance. Okay, so that's one thing they've done here, so this formula here, the second thing they've done is they use this hybrid loss, so L simple we saw what the one is, that's when you drop the terms that depend on the time step, and then we have the LVLB, which is the variational lower bound, so that's the actual original loss with all of those complex terms, and by just creating this type of weighted average between those two, and using this one to learn the variance, and using this one to learn the mean, basically they show this is the best, this was the best trade off, so there is a lot of experimentation going on here, so it's kind of a lot of hacks put on top of the DTPM in order to make this work, as well as DTPM itself had a bunch of hacks, such as using constant variances instead of learning them, etc., etc., so there is a lot of hacks going on in diffusion models, at least in these earlier papers. Okay, so they say here along the same line of reasoning, we also apply a stop gradient to the mu of theta output for the LVLB term, which basically translates to, so this component is only going to be training this variance expression here. Okay, so that's one thing, so that's the second thing actually, and then the third thing is instead of using linear, like basically noise schedule, so those betas being a simple linear sequence, instead of that they propose this cosine schedule, and what that brings is that you can see that alpha bar theta, alpha bar t, sorry, basically has much more gradual drop compared to linear, and because of that, that determines directly the amount of noise, and thus if you use the linear schedule, so basically using the linear schedule will lead to noisier images earlier on in the forward process of the diffusion. Okay, that's a third thing they do that helps a lot, and then let me show you a couple more expressions here. One very cool thing is using basically the values of the loss for each of the time step to understand how much weight we want to put on top of that time step. 
So this is kind of middle ground between using the simple objective where all of those constants with all of the LT minus ones are basically constant, like an equal to, I guess, one, then you have the LVLB which had those complex expressions, and finally we have this type of important sampling where depending on the loss, so if, for example, if you're struggling with one particular image in the process, let's say this one, the ith image here, basically what you do is you increase the loss for that particular XT, so you'll put additional focus on trying to predict that noise that was put on top of this image. So that's the idea with this expression here. Let me continue, and we are almost done. Finally this expression 19, not that important, basically what they show is that during training and during sampling you don't have to use the same type, the same length of a diffusion process. For example, your training chain has like 4,000 images whereas you can, during the sampling time you can just have 100 images, 100 latents, and basically they show that how to remap the betas and beta tildes, the posterior variances such that this actually works out nicely, and they get high quality images and they obviously save a lot of computation which is super important because you don't want to sample 4,000 images every time you need to generate an image. That's going to be super expensive. Okay, guys that's pretty much it, couple more things here and we are done with the papers. They show, and this is super interesting, they show the scaling loss for diffusion models and this was back I think in 2020, so if you kind of read this paper you could have expected that they are going to do the same thing as with GPT-3 and that's to scale up these models and that eventually led to Glide and then the Li and the Li2. And you can see here again we have the power law, we see that the FID which is the metric that shows you how high quality your samples are, you can see that with the increasing size we keep on getting smaller and smaller FIDs whereas with NLL which is the negative loss likelihood it does not exactly follow the power law but it's kind of still going down here, so yeah. In any case these two charts are indicative that scaling up diffusion models will probably be a good avenue for future research. Okay let's see the conclusion, the likelihood is improved by learning the variances using our parameterization and hybrid objective. Furthermore we have investigated how DDPM scale with the amount of available training compute and found that more training compute trivially leads to better sample quality and log likelihood. Okay guys, so you hopefully got the gist of diffusion models, we saw how we have this forward process of noising the images and then we have the backward, the reverse process of denoising the images, we learned that and then plus add to that a bunch of hacks to get this thing to work but ultimately they are now the best generality models we have because they have compared to GANs they are much better at covering covering all of the modes of your data distribution whereas we know that GANs suffer from mode collapse, we also have much more like it's much more stable to train diffusion models as compared to GANs so yeah basically diffusion models are currently what GANs were a couple of years ago. Cool guys let's now switch to the code, I'm gonna show you how this thing is actually trained and how we sample from the diffusion models. 
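Before the code walkthrough, here are minimal sketches of three of the Improved-DDPM ideas just covered: the linear vs. cosine beta schedule, the learned variance as a log-space interpolation, and loss-aware weighting of the timesteps. These are illustrative reconstructions from the formulas above, not the repo's exact implementation (for instance, the real loss-aware resampler also keeps a per-step loss history and mixes in some uniform probability).

```python
import numpy as np

def beta_schedule(schedule, num_steps):
    if schedule == "linear":
        # Linearly spaced betas, rescaled with the number of diffusion steps.
        scale = 1000 / num_steps
        return np.linspace(scale * 1e-4, scale * 0.02, num_steps)
    if schedule == "cosine":
        # Define alpha_bar(u) with a squared cosine and derive betas from its ratios.
        s = 0.008
        alpha_bar = lambda u: np.cos((u + s) / (1 + s) * np.pi / 2) ** 2
        t = np.arange(num_steps)
        betas = 1 - alpha_bar((t + 1) / num_steps) / alpha_bar(t / num_steps)
        return np.clip(betas, 0, 0.999)
    raise ValueError(f"unknown schedule: {schedule}")

def learned_variance(v, beta_t, beta_tilde_t):
    # The network outputs v; the variance interpolates between beta_t and beta_tilde_t
    # in log space: Sigma = exp(frac * log(beta_t) + (1 - frac) * log(beta_tilde_t)).
    frac = (v + 1.0) / 2.0          # assuming v comes out of the model roughly in [-1, 1]
    return np.exp(frac * np.log(beta_t) + (1.0 - frac) * np.log(beta_tilde_t))

def loss_aware_weights(loss_history):
    # Importance-sample timesteps: steps with larger recent losses get sampled more often.
    # loss_history: array of shape (num_timesteps, history_len).
    w = np.sqrt((loss_history ** 2).mean(axis=-1))
    return w / w.sum()
```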
Before that let me just show you the architecture they're using to actually learn the epsilon so the noise during the diffusion model training so this is a simple unit, this is how the architecture looks like, we're gonna see this in the code, this is less important arguably you could use some other architectures as well, they're just kind of stuck with the unit. Okay let's start doing the training, okay I went on and downloaded the github repo, had to create some modifications to get this thing to work on my single GPU Windows machine, I can kind of submit that to my github, do let me know if you want me to do that, if so I can easily create a like a I'll easily just push this code on to my github repo. So I just created this launch script so I basically used the recommended settings that they've showed in their readme file here so you can kind of take a look at the readme of the repo which we're gonna be using the improved diffusion repo and basically yeah so that's what I've set in my launch and now we can start training here. That's less important we're gonna see what's going on here in a minute. Okay so let me start training, we're first gonna obviously have a bunch of arguments so here they are I'm gonna kind of print them out but don't try and understand what's going on there's too many of them, we'll gradually start analyzing what's going on so we have the Cypher data set which I downloaded using their Cypher script then there is like bunch of other stuff learning rate blah blah blah we'll see all of that later. So I'm gonna kind of ignore the parts that are not crucial to understanding diffusion models which means I'm gonna be ignoring the distributed training, I'm gonna be ignoring the loggers all of that and I'm gonna focus only on diffusion. So here is the first important step we basically take these arguments convert them into dictionary and we start creating the actual unit model and then the diffusion object here. So let's see how the model is gonna be constructed so we basically pass image size is gonna be 64, usually they have lower resolutions because remember the latents are of the same like dimensionality as the X0 as the original image. Contrast that with VAE is contrast that with GANs with any other model the latents are always much of smaller dimensionality whereas here it's a bit more computational intensive and so because of that what they end up doing in practice is training models for smaller images and then training additionally super resolution models and they actually have like here scripts as you can see here basically super res sample super res train I'm gonna skip those because they really work very similarly to how diffusion works I'm gonna focus on these two scripts here the image sample and the image train. 
Okay anyways number of channels that's gonna be internal dimension of the unit number of res blocks nothing interesting there learn Sigma set to true means we are learning the variances instead of hard coding them instead of using the fixed ones so that's the innovation from the improved diffusion paper class conditioning so we will not be using that but it's very easy to have class conditioning we'll see how they use temporal conditioning so how do we pass the T because T is a scalar how do we pass that into a neural network we're gonna see they are just using simple sinusoids like simple similar embeddings as what the original transformer paper used so and then they can alert I'll show you how they can fuse that into the unit model and we would be doing the same thing with this class conditioning if we were using it which we are not at the moment. Okay checkpointing is just an optimization technique I'm gonna kind of ignore what checkpointing does is during the forward pass instead of storing the activations for every single layer you don't do that and because of that you save bunch of memory but as on the con side you'll have to kind of when you do the back prop in order to calculate the gradients you'll have to do your computations again so you're trading off the memory for time so you'll spend up much more time but you'll save up memory by doing this we'll not be using the checkpointing so yeah just FYI certain layers of unit have basically VAT type of attention so that means each of the token of the image is gonna be attending to each of the other tokens and that's what they show here so 16 means at resolution 16 by 16 they'll be doing the this attention and at 8 by 8 resolution they'll be doing the same thing so when I say 8 by 8 it's inside of the actual unit because you know that unit has that characteristic shape okay so number of heads again just the parameter for that attention layer inside of unit nothing important there we're gonna see how this parameter is used basically this is how we depending on this flag they'll have two different ways of combining of conditioning the images with the time steps so yeah we'll see how this plays out a bit later okay so here is the create model function basically because image size is 64 they specify this how they specify how unit will be constructed then we have this attention DS attention DS basically converts this attention resolution into how many down sampling layers we have to wait inside of the unit before we start using these attentional layers so just another way of specifying when do we start inserting those attention layers into our unit okay not that vital you can ignore that if you didn't understand it so input number of channels obviously three we're dealing with RGB images we specify number of channels here because we are learning Sigma because of that this we end up with six output channels and the first three channels will be predicting the epsilon so that's the noise and the second three channels will be predicting the actual variances so that's why we have six here okay number of blocks blah blah blah nothing special drop out blah blah blah we'll not be using class conditioning so we end up with none here for number of classes we're not using checkpointing okay specifying details of attention etc okay let's enter the constructor I'm going to quickly walk you through unit so we are starting step by step we'll first see how unit works and then we're going to see how it fits into the whole like training loop later on okay so 
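Two of the points above in sketch form: with learn_sigma the UNet's six output channels split into a noise prediction and a variance parameterization, and the scalar timesteps are turned into vectors with transformer-style sinusoids before being fed into the network. These are hedged reconstructions with illustrative names, not the repo's exact code.

```python
import math
import torch

def split_unet_output(model_out):
    # learn_sigma=True doubles the output channels: the first half predicts epsilon,
    # the second half parameterizes the per-pixel variance.
    eps_pred, var_values = torch.split(model_out, model_out.shape[1] // 2, dim=1)
    return eps_pred, var_values

def timestep_embedding(timesteps, dim, max_period=10000):
    # Map scalar timesteps (B,) to sinusoidal vectors (B, dim), assuming dim is even.
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) * torch.arange(half, dtype=torch.float32) / half)
    args = timesteps[:, None].float() * freqs[None, :]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
```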
let's see here we just store all of these parameters inside of like internal fields here nothing fancy there so we here we create this layer the sequential layer which consists out of this basically inverse bottleneck shape of MLP this is a simple MLP that's going to be used to transform the sinusoids and before we use them to condition the model we'll see that a bit later okay so we can ignore this because we're not not using classes and now we start adding blocks to form the unit again we have three types of blocks we have the input blocks then we have the middle blocks here and finally we have the output blocks so that corresponds to what we saw in this diagram here basically whoops my one note is glitching so you can see it here so we have we kind of have the first part here the input blocks then we have the middle blocks and then we have the output blocks here that's roughly how how this code is structured so let me now get back to it and let me quickly walk you through there is i'm not going to dig into the actual details there is a couple important things i want to show you how they fuse the temporal information with the image information that's the vital thing i want to show you here okay so there is this wrapper object called time step embed sequential we're going to see that a lot what it does basically is the following depending if the layer inherits from this time step block which is just a simple interface a dummy interface that where where the for a function supports both the x which is the image representation and the embedding the temporal embeddings in that case we'll be we'll be calling the layer passing both arguments whereas if it's not inheriting from time step block then we'll just be passing the x and ignoring like the embeddings so as i said a simple wrapper nothing too interesting okay so the first thing we do is we create this a com2d layer why com2d because this is com and d a generic layer they created and then number of dimensions is two and because of that we end up with a com2d so that's how the unit model starts next up let me show you what is going on here so we have channel multiplication we're going to iterate through this array and then this is the interesting part we start adding residual blocks so the interesting part about residual blocks is actually only the forward function and i'll i'm gonna put a breakpoint here and later on i'm going to show you how this temporal fusion is is happening but for the time being let me quickly step through the constructor of the of the of the resnet block just quickly so we just stored the number of channels the the the number of channels for for the basically temporal vectors blah blah blah nothing fancy here let me see whether there is something interesting i need to focus on here so we have we specify these input layers we specify the embedding layers we'll see how these are used a bit later so bear with me here and we finally have out layers normalization blah blah blah clu is just activation unit there is like a zillion of these activation units and it's not even worth mentioning okay so basically all of the fun will happen later on during the forward prop and that's when i'm gonna kind of step into the resnet block and show you how the how the temporal information is mixed into the network okay so after that sometimes we'll be adding as as i said here so if this thing is if we are if we don't sample four times then we're gonna add the attention block which we haven't at this step so we're gonna skip this part 
for now and then we add as you can see here to input blocks we just add the layers which we accumulated during the loop and we just wrap it up into this time step embed sequential which is again that useful wrapper we saw a couple minutes ago that's it guys nothing fancy there so i'm gonna kind of skip to middle layer here okay let me ignore this thing here so here we are this middle block consists of a res block attention block and additional res block i can skip all of that let's continue here and finally the output blocks are literally what what we just saw in the input blocks so it's the same pretty much list of objects and here you can see by doing this we are reversing and we're creating a symmetric construction for the output blocks okay so because of that i'm just going to skip all of this and and end up here and here we add the normalization layer which is i think group normalization but again not that important to understand and finally we end up with a con layer and this zero module just zeros out the weights of these kernels i'm not sure why they're doing that if anyone knows that feel free to comment down below why do they initialize some of the layers with all zero weights i'm not completely sure cool again main takeaway from the from the unit model is like it has this very interesting type of a shape and with the input blocks middle blocks and our blocks and the most important thing i want you to remember here is they have this part where they are mixing in the temporal information we're gonna see that a bit later but just keep that in mind for now okay that was the creation of the model we do the same thing for diffusion so we have like i've set 50 steps it was 4000 by default this is just so that i can train a bit quicker on my machine otherwise it's very slow to to actually train this okay so we have learned sigma nothing special there noise schedule is going to be linear so the betas for the forward process are going to be basically sampled as we sample the linear function instead of the cosine we don't use the kl basically this if we were to set this to true we'd be using the variational lower bound loss instead of this we're going to be using the hybrid loss so we'll see all of that a bit later we are not predicting the x start x start is just a starting image the original image from a data set and we will be basically rescaling time step all of this is not that important we'll see what those parameters are a bit later but they're not that interesting okay so here we have we first form the the beta schedule so we pass the like name of the schedule and number of steps and this just returns those betas we saw in the paper in the beginning of the video so let me quickly show you the two like schedules they support one is linear layer as mentioned it's just a simple in space between the beta start bet beta end and we linearly interpolate using the number of diffusion steps here for the for the cosine schedule you have this a bit more complex expression where you create those we saw these equations in the paper i'm going to kind of ignore those for now not that important okay so let's see which loss we pick we are going to use something called rescaled msc mean squared error that's because we are using the hybrid loss we'll see how that plays out a bit later time steps re spacing again this is important only when you want to reduce the number of steps during the sampling process which we will not be using so we can kind of ignore all of that again this is the function that 
does that type of a logic and basically for our case we can skip this because we end up with let me show you how this all steps look like it's basically just mp our range so you end up with as you can see here zero through 49 because we have 50 steps and so there is nothing no like sub sampling that will be doing in this training okay we pass the betas we'll be model mean type is going to be epsilons we'll be we'll be predicting epsilon instead of predicting x start so in some of the ablations they actually tried predicting the x start which means the original image so instead of trying to predict the noise that the that was added on top of the original image why not try and predict the original image but empirically they showed that it's better to predict the epsilon okay let's continue so what's the model variance type so because this is set to true that means we'll be using the learned range so we'll be learning the variances our last type is as we saw rescaled msc and that's pretty much it now let's step into this space diffusion basically the important piece here is the construction of this Gaussian diffusion object so let's see how that thing is going to look like so that's the Gaussian diffusion class that's important this is the file where all of the magic will happen basically this Gaussian diffusion dot pi that's the most important file in this whole code base so again we specify the mean it's going to be the epsilon the variance which is going to be learned and then the last type rescaled msc blah blah blah and next up we we form the we just basically convert betas into numpy array we do some error checking assert asserting that that's just a like a 1d array and that all of the betas are bigger than zero and smaller or equal than one that's all we grab the number of steps here and here we start creating those equations we saw the coefficients not the equations we saw in the paper now for this formula so i'm just going to put things side by side so it's going to be easier for you to understand what's going on okay let's start so here are the alphas the alphas we saw those coefficients here basically one minus betas that's how we form the alphas and then we have alpha bars which is just the products of alphas and you can see how we form those alphas it's called com prod like a cumulative product we basically just call this mp com prod and we end up with arrays so this thing is still going to be 50 the length of this thing is going to be as you can see here it's going to be 50 that means that by indexing into this array we end up with up like a basically well we basically specify t of this product so it's not like we just calculated the whole product of all of the alphas we actually have well we actually have the array that contains alpha t bars for all of the t's which means zero through 49 okay next up we have these this this parameter called this coefficient called alphas com prod prev like basically what we do here is we what we do the right shifting so we add one as the first element and then we take the first n minus one elements and and pre like append them to this list and we also have this com prod next so where those are used is let me just see i'm fairly sure it's going to be used for this mean yeah it's going to be used for the posterior so it's basically used for this posterior like forward process so for the mean and for the as you can see here for the for the variance okay so we're going to see how we construct those in a second so we we now construct square root of 
alphas com prod so those are going to be used as you can see here so this let me kind of change the color here and you can see this is now the term we constructed here then we create square root one minus alphas com prod which is going to be let's see whether we have that one somewhere here i don't think so but like yeah it's going to be used somewhere later on okay we do the same thing for log square root blah blah blah there is a lot of these so this is square root reciprocal alphas com prod so that's square root of one over this okay so the square root so this expression here is actually used for the component in the loss here you can see it here as i said for each of these we'll have we'll basically calculate all of the coefficients we have the posterior variance which is again this expression here let me just kind of let me show you that this is indeed the case so we have betas here are the betas we have one minus alphas com prod prep so that's this expression here so that's the numerator of our expression here and then we have denominator which is one minus alphas com prod so that's it as i said they are now literally going through formulas in this paper and calculating all of the necessary coefficients we do the same thing here and then we have this i'm going to show you quickly these two so we have the posterior mean coefficient basically it's equal to um betas as you can see here and then time square root alphas com prod prep so that's this one square root of alphas previous uh and bar okay and then we divide that by one minus this expression here so hopefully i this this convinced you that indeed they're just going on here and calculating all of these coefficients and that's pretty much it i'm going to go back into the full screen here and continue with the code okay so this is again not interesting because this is only vital in the case of we doing a sub sampling during sampling which we will not be doing i using less steps during the sampling procedure which we will not be using so i can kind of safely ignore all of this and finally uh they use uh those betas to construct the uh this gossian diffusion model so this one was just used for for this process here for all practical purposes this init function here is going to construct the same gossian object we just saw so i'm going to skip i'm going to skip everything here and so that's that's it we end up with a gossian diffusion object okay guys so we have unit we saw how they literally go formula by formula and compute all of that uh inside of the constructor of the gossian diffusion object now let's go on further here let's exit all these functions we are back to our main function what they do is they just push the model to gpu in case in case you have gpu uh they just now uh create this schedule sampler so this one is fairly interesting i'm going to show you how this one looks like it basically um specifies how do we want during the training sample uh so which steps i which loss components those lt minus ones do we want to train and so what they use by default is just a uniform sampler which means all of the steps are equally likely but then they also have that other sample which i did mention where depending on the loss of particular lt minus one they'll maybe increase like upscale that weight so that we focus more on uh basically yeah learning that particular uh part of the diffusion process okay let me just kind of go through through here so here is the uniform sampler uh basically here is how that sampler looks like so the 
weights of the uniform sample are obviously all ones so we have number of steps so we have like 50 ones and then during the sampling what happens is this sample function is called because as you can see here uniform sampler and here is from schedule sampler the subject here and here's what happens so we have weights so this is all ones and then we divide here so we normalize and actually have a probability distribution because the sum needs to be equal to one as you know uh and then we just call mp random choice so we basically do uh this is how we weight all of the uh 50 uh time steps and we just uh basically sample uh whatever the batch size is of those of those time steps so that's what's going to happen and then some conversion here uh nothing fancy so the interesting sampler is that second one which i'll just kind of briefly show you but i will not go into it maybe later if we have enough time so last second moment resampler so this is the one where they basically use the loss history to uh decide on how so you can see here depending on the loss you'll have bigger weights for for those components where the loss is bigger so that's roughly it but we'll not be using this one so i'm going to kind of skip it for the for the sake of time okay let's continue here we have our uniform sampler we now create the data set so basically i'll be just using cipher that doesn't need to be the case they also used image net 64 by 64 in the paper and you can use whatever data set you want and that's it now we have the training loop this is where the actual training is going to happen we pass in the model we put like that's the unit we pass in the diffusion um model on top of it we feed in the data uh batch size micro batch so this is going to be um like uh we're going to be chunking the actual batch into micro batches because we like as i said this is fairly memory intensive that's why they have all of these optimization methods such as checkpointing and micro batching to kind of cope with all of that uh excess memory okay ema so that's your exponential moving average because uh they'll actually be using um uh so ema type of weights for for sampling later on nothing fancy here logging saving resuming checkpointing blah blah fp 16 we don't care about mixed precision training i just want to focus on diffusion uh we pass in our uniform sampler blah blah blah some optimization details like weight decay for the atom w okay we can enter this uh train loop let's see what's going on here so we pass in all of these we kind of store them into the internal fields okay um we have all of this i'm going to kind of skip through all of this nothing interesting okay we have our model parameters we have the master parameters again this is a uh like a consequence of the training this in a distributed fashion of course multiple machines so we for our like for all practical purposes we don't care about the this we just have single set of parameters okay syncing again a consequence of of distributed training we don't care about it they create the atom w optimizer okay uh now they have for each of the ema uh rates specified uh they create uh like a deep copy of the parameters so because we only have one ema rate we'll basically just have a single um like copy of the parameters again not that important for you to understand let's go on here this is distributed uh data parallel a wrapper in pytorch again we'll not be using that so for all practical purposes we can ignore it let's jump into the actual meat of the code of the 
training code so we first go on to sample a single batch of images and potentially the classic the classes that correspond to those images uh let me quickly walk you through how this load data looks like basically this is gonna collect all of the um images all of the paths from my uh data set so i did go on and kind of download those as i told you so you can see them here and if i kind of step over that uh it's gonna collect all of my images and um let me now show you how it looks like whoops if i copy paste the all files and i print the first let's say two paths you can see 006 png and 13 png and those are exactly those are exactly these two first images here so here is that one well it's super small so you will not see the cipher after all okay um okay then they go on to form this uh image data cell uh where just they just do some resizing cropping but nothing too interesting and here we have the data uh loader with batch sizes of 128 and we finally yield the batch from our data set okay that ends up giving us a batch of size i guess it's gonna be i expect it's gonna be 128 uh 364 64 because it's cipher it's rbg rgb images and 128 is the batch size so let's see whether that's the case uh indeed it is the case and conditioning we don't have anything so it's gonna be just empty dictionary as you can see here okay cool so here's the first step let's now this is the this is where all the magic happens i'm gonna ignore everything afterwards because it's just kind of it's just your common machine learning boilerplate code so i'm gonna ignore all of that so we have a forward forward backward call here okay so we just use zero gradients of our unit we want to clean those gradients before we recompute them again and then do the update okay so here we load the first micro batch so you can see we just sub sample this batch and we end up with like two three sixty four sixty four because our micro batch dimension is two and we did the same thing for conditioning which because it's an empty dictionary we don't really care we just check whether this is a last patch which it's not because we just started the loop and now this is where we do the uniform sampling of the diffusion process okay so this is this is the idea we we now sample t's so the time steps for each of the images in our batch okay so let's do that so we we we do that and we end up with two random t's so 44 and three so that means we'll take the first image we'll basically add noise 44 times in practice we'll just have a single step because of that nice property we saw in the paper and we'll get there and then this second one just has three steps of noise and then we try and predict back the noise so that's going to be the goal okay so this is the main function in this training code this training losses function we'll see it in a moment we just kind of wrap it up using the func func tools partial so that we don't have to pass these parameters every single time so just kind of convenience wrapper there and again we just you can ignore all of this because that's just distributed code nothing interesting let's focus on the compute losses this is the most important function here okay so what's going on first of all we generate noise that has the same shape as our input image uh our input micro batch so because x star is again two three sixty four sixty four we're going to end up with basically normal gossian tensor of the same shape okay so that's again our noise of the same shape as the input images and then we do the q sample so let me 
Let me remind you what q_sample computes. Here is the formula we're using: we take the square root of alpha-bar t (via this extract_into_tensor helper, which we'll look at in a second), multiply it element-wise with x_start (called x here), and then add the square root of one minus alpha-bar t multiplied by the noise. We are literally just evaluating the formulas from the paper, nothing fancy.

Let me show you once how extract_into_tensor works, for the sake of understanding the code, so I'll add a breakpoint there. We take the array, say the alpha-bar values, and index it with the timesteps; since our timesteps were 44 and 3, we take alpha-bar 44 and alpha-bar 3, which contain, respectively, the product of 44 alpha terms and the product of 3 alpha terms. Then the only thing we do is add dummy dimensions until we have the same number of dimensions as the image tensor we're doing the element-wise multiplication with, and then expand. Why are we doing this? Because we want to broadcast this per-example scalar across the whole image: we end up with a tensor all of whose elements contain that particular alpha-bar 44 value (and similarly for alpha-bar 3). Hopefully that was clear enough; I'll stop stepping into this function from now on. And that's it, guys: we have x_t, our noisy version of the image.

Let's see the next steps. Because our loss is the hybrid one, we ignore this first branch; we'll see that we actually compute the same thing a bit later down below, since the hybrid loss also contains the variational lower bound term. Our loss type, if you recall, is rescaled MSE, so we end up here. This step is supposed to output the epsilon: we need to learn to predict the noise that we just used to form the x_t's. That's the term we'll be trying to learn to predict, the green dots I showed you in the hands-on paper overview. Those will be the first three output channels; the last three channels are about the variance, and we'll see those in a bit. For now, let me show you how this forward prop works. This is again the UNet, and we're going to do something to the timesteps and merge them with the image representation.
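Putting the pieces together, here is a sketch of q_sample and the broadcasting helper, i.e. x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. It assumes the coefficient arrays are 1-D NumPy arrays precomputed in the diffusion constructor; names mirror the walkthrough.

```python
import torch

def extract_into_tensor(arr, timesteps, broadcast_shape):
    # arr: 1-D numpy array of per-timestep coefficients, e.g. alpha_bar
    res = torch.from_numpy(arr).to(timesteps.device)[timesteps].float()
    while len(res.shape) < len(broadcast_shape):
        res = res[..., None]                 # [B] -> [B, 1, 1, 1]
    return res.expand(broadcast_shape)       # broadcast the per-example scalar spatially

def q_sample(x_start, t, noise, sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod):
    return (
        extract_into_tensor(sqrt_alphas_cumprod, t, x_start.shape) * x_start
        + extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
    )
```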
Let me quickly show you why we pass both x and t. This is the particular expression we're dealing with: epsilon_theta(x_t, t). The UNet is the epsilon_theta, x_t is the thing we just calculated, and t are the timesteps; that's exactly what this line of code evaluates. Stepping through: we can ignore this wrapped model, it only matters if you sub-sample timesteps during the sampling procedure, which we won't be doing, so none of that really matters, and then there's some rescaling, nothing super interesting, just legacy stuff so the results stay comparable with the original DDPM paper.

Here is the actual UNet forward pass. The new ts are just rescaled, but still scalar, versions of our original timesteps 44 and 3, and the keyword arguments are empty, so we just pass the timesteps and the images; the image shape is again [2, 3, 64, 64]. Now let's step through the forward pass of the UNet model; this is where the magic happens. First there is the timestep embedding, which maps the scalars into vectors using sinusoids, basically the same heuristic as in the original Transformer paper. I'm not going to dig into the details, you can see a bunch of sines and cosines; what ultimately matters is that instead of two scalars we end up with a [2, 128] tensor, i.e. two vectors, and it's much easier for a neural network to work with vectors than with scalars. (I accidentally stepped into something here; it was the activation function of time_embed.) If you recall, time_embed is this inverse bottleneck MLP: just two linear layers, where the innermost layer has four times the dimensionality of the input and output layers, hence I call it an inverse bottleneck; that times-four has been with us since the 2017 Transformer paper, people just keep using it. After that simple transformation we end up with an embedding vector of shape [2, 512]. That's our time information.
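Here is a sketch of that sinusoidal timestep embedding (the same idea as the Transformer positional encoding), followed by the inverse-bottleneck MLP mentioned above. The dimensions 128 and 512 are the values from this walkthrough, not guaranteed defaults.

```python
import math
import torch
import torch.nn as nn

def timestep_embedding(timesteps, dim, max_period=10000):
    half = dim // 2
    freqs = torch.exp(
        -math.log(max_period) * torch.arange(half, dtype=torch.float32) / half
    ).to(timesteps.device)
    args = timesteps[:, None].float() * freqs[None]               # [B, half]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)  # [B, dim], e.g. [2, 128]

# the "inverse bottleneck" MLP that follows it, e.g. 128 -> 512 -> 512
time_embed = nn.Sequential(nn.Linear(128, 512), nn.SiLU(), nn.Linear(512, 512))
```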
Now let's insert that time information. We go through the input blocks; remember, the first block is just a conv layer and then come the ResBlocks. The first one is nothing super interesting because it's just a conv layer, so let's find the residual block; here it is, and this is where the magic of merging happens. First we do some processing on the image: those in_layers are just normalization and convolutions, nothing interesting there. Next up, we take our embedding vectors, which are again [2, 512], and pass them through the emb_layers, which is just a linear layer that does some additional processing on top of those representations, nothing super vital; the resulting shape is [2, 256]. Now we just add dummy dimensions until we match the shape of the image features h, which, as you can see, are [2, 128, 64, 64]; because the added dimensions are 1x1, the temporal vectors will simply be broadcast, i.e. copy-pasted spatially, before we combine them with the image features.

Here's where the magic happens, controlled by the boolean use_scale_shift_norm we saw before. One option is to simply add the temporal information to the image features; that alone is enough to condition the model. But they, I guess empirically, found that this scale-shift approach works a bit better. Instead of just adding the temporal representations to the image features, we chunk the [2, 256, 1, 1] tensor along the channel dimension, splitting the 256 into two halves of 128, so scale and shift are each [2, 128, 1, 1]. Then the image features are combined with the temporal features as norm(h) * (1 + scale) + shift: we normalize the image features beforehand, multiply by one plus the scale, add the shift, and then do some additional processing. That's it, guys, that's really everything you need to know about how this UNet merges the time information. I'll remove that breakpoint and continue: we keep going through the input blocks, middle blocks, and output blocks, adding and merging the temporal information at every residual block, so I'm just going to skip over all of this.

Okay, we've done the forward prop and we're exiting the function. The output shape should be, I expect, [2, 6, 64, 64], and indeed it is. Why six channels? Because the first three channels are again the epsilon, the noise we're trying to predict, and the next three channels are the variance; we'll see how those get used in a bit. Because we use the learned-range variance, we enter this branch. We extract some dimensions; x_t is again our tensor of noised images, [2, 3, 64, 64], and we split the model output into two groups of three channels each, which is exactly what I just explained: model_output holds the epsilon, i.e. the noise, and the other holds the variance values. Both are now [2, 3, 64, 64], whereas before the split we had [2, 6, 64, 64].
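Here is a hedged sketch of that scale-shift conditioning inside a residual block. Layer sizes and module names are illustrative (channels=128 as in the example above), not the repo's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlockCore(nn.Module):
    def __init__(self, channels, emb_dim):
        super().__init__()
        self.in_layers = nn.Sequential(
            nn.GroupNorm(32, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # project the time embedding to 2 * channels: one half scale, one half shift
        self.emb_layers = nn.Sequential(nn.SiLU(), nn.Linear(emb_dim, 2 * channels))
        self.out_norm = nn.GroupNorm(32, channels)
        self.out_conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, emb):
        h = self.in_layers(x)                          # image features, [B, C, H, W]
        emb_out = self.emb_layers(emb)                 # [B, 2C]
        while emb_out.dim() < h.dim():
            emb_out = emb_out[..., None]               # [B, 2C] -> [B, 2C, 1, 1]
        scale, shift = torch.chunk(emb_out, 2, dim=1)  # each [B, C, 1, 1]
        h = self.out_norm(h) * (1 + scale) + shift     # scale-shift norm
        return x + self.out_conv(F.silu(h))
```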
Now we take the epsilon and detach it from the PyTorch computational graph, which means the variational-bound term will not be updating the epsilon, whereas we pass the variance through as is. Why? If you recall from the paper, and I'll show it side by side in a second, in the hybrid loss the variational lower bound term is used to train the variances, whereas the simplified objective is used to train the mean, or in this case the epsilon, which is equivalent since those are just different parameterizations. Here it is in the paper: the hybrid loss consists of L_simple and L_vlb, the variational lower bound objective, and they mention that along this same line of reasoning they also apply a stop-gradient to the mu_theta (or equivalently the epsilon_theta) output for the L_vlb term. So that term is frozen, detached from the computational graph, and that's exactly why we do this detach here.

Now let's go on and compute the actual L_vlb terms: we are literally computing this KL expression, and a couple of seconds later you'll see we also compute L_simple, again just executing the formulas from the paper. Do let me know whether this side-by-side comparison between formulas and code helps you or not; these videos are super long and take me a lot of time to create, so any feedback is very much appreciated. Continuing on, we concatenate the detached epsilon and the variance and call this _vb_terms_bpd function; let me check whether I have a breakpoint there, yes I do.

So what we do here is pass a dummy model. Why dummy? Because this function will later call the model, but we won't actually be doing a forward prop through the UNet: as you can see, the model is defined as a lambda that, no matter what you pass in, just returns r, which is frozen_out, i.e. the output we already computed. So we'll see something that looks like a forward pass, but it's really just this dummy return of what we already have; keep that in mind. We also pass x_start, the original images, and x_t, the images that were noised using t steps. Stepping into the function: we compute the posterior mean and variance, which gives us the forward-process posterior, and then we take a KL divergence, as you can see here, between that and our learned reverse process. Again, let me show you the paper to make this a bit more concrete before we dig into the code. Here is the expression we're calculating, nothing more than this: the posterior of the forward process, which corresponds to the q_posterior_mean_variance line, the learned reverse process, which corresponds to these lines here, and then the KL divergence between those two distributions; those are the L_{t-1} terms, the variational lower bound terms. Nothing too fancy.
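Here is a sketch of the stop-gradient / dummy-model trick just described. The helper name `_vb_terms_bpd` and the lambda follow the walkthrough, but treat the exact signature and return keys as assumptions.

```python
import torch

def vb_with_frozen_eps(diffusion, model_output, x_start, x_t, t):
    B, C = x_t.shape[:2]
    model_eps, model_var_values = torch.split(model_output, C, dim=1)
    # detach epsilon: the VLB term only trains the variance channels
    frozen_out = torch.cat([model_eps.detach(), model_var_values], dim=1)
    vb = diffusion._vb_terms_bpd(
        model=lambda *args, r=frozen_out: r,   # ignores its inputs, returns the precomputed output
        x_start=x_start, x_t=x_t, t=t, clip_denoised=False,
    )["output"]
    return vb, model_eps
```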
Now let's dig in and see how that's computed. We pass x_start, x_t and t, which are the original image, the noised image, and the timesteps, and enter the function. It even has a comment saying exactly what it computes; it's very nice to add this type of comment, it helps a lot, to be honest. Here is how: we take posterior_mean_coef1, multiply it with x_start, take posterior_mean_coef2, multiply it with x_t, and that's it, we end up with the posterior mean. Again, side by side with the paper: posterior_mean_coef1 is just this first coefficient, which we actually already computed, if you recall, in the constructor of the GaussianDiffusion object; we multiply it with x_0 (x_start is just different notation), and then posterior_mean_coef2 is this second coefficient, which multiplies x_t. That's how the formula corresponds to this line. Let me step over the rest: we do the same thing for the posterior variance, which is just those beta-tilde values we computed earlier, and also the posterior log variance; I'm not sure yet how that one is used, we'll hopefully see a bit later, or maybe not. Some assertions, and that's it: we've computed this distribution, and because it's a Gaussian, returning the mean and the variance describes it perfectly.
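Here is a sketch of that forward-process posterior computation, reusing the extract_into_tensor helper from the q_sample sketch earlier; the coefficient arrays are the ones precomputed in the GaussianDiffusion constructor.

```python
def q_posterior_mean_variance(x_start, x_t, t,
                              posterior_mean_coef1, posterior_mean_coef2,
                              posterior_variance, posterior_log_variance_clipped):
    mean = (
        extract_into_tensor(posterior_mean_coef1, t, x_t.shape) * x_start
        + extract_into_tensor(posterior_mean_coef2, t, x_t.shape) * x_t
    )
    variance = extract_into_tensor(posterior_variance, t, x_t.shape)
    log_variance = extract_into_tensor(posterior_log_variance_clipped, t, x_t.shape)
    return mean, variance, log_variance
```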
Now we do p_mean_variance, i.e. the other part, the learned reverse-process distribution; that's what we're trying to compute at the moment. Let me check whether I have a breakpoint inside, yes I do. We pass the model (remember, it's a dummy that just returns what we already computed beforehand, the epsilon and the variances), along with x_t and t, the noised image and the timesteps. Entering: it says here we're computing the p mean and variance. Stepping over, the model call does nothing, it just returns, so we end up with the [2, 6, 64, 64] tensor we previously computed. Because our model_var_type says we are learning the variances, we enter this branch and split the model output again into the epsilon and the variance values; and because it's learned-range rather than learned, we take the else branch, where, as you can see, we calculate equation 15 from the improved DDPM paper. Let me show that side by side. So when I said earlier that we "have the variance", I was kind of lying: what the network outputs is just a component that, when combined like this, gives you the variance.

Here's what they do. They have the min log, which is the posterior log variance clipped, that's where the log part comes into play, and they compute the max log, which is the log of the betas (it reads the other way around here because these are the betas, but you get the point). Then there's a normalization of the output: the network output is in [-1, 1], so adding one gives [0, 2] and dividing by two brings it into the [0, 1] range, and then we just evaluate the equation: the fraction times this plus one minus the fraction times that, so this line is literally that equation, and finally we add the exponent part to get the variance itself. So we've successfully computed equation 15.

Let's continue; there's some clipping here we can ignore for now. We are not predicting the previous x, we are predicting epsilon, so we enter this branch, and we first want to predict x_start given the epsilon, i.e. recover the clean image from the currently noised images and the timestep; we'll see why we're using that in a second. Let me step into this function with the equation on the side so we can see what's going on. Recall that x_t is computed from x_0 and epsilon; if we just rearrange the terms, x_0 turns out to be this expression, and that's what we compute here: the square-root reciprocal alphas-cumprod term, that's literally this square root factor, times x_t, minus the square-root reciprocal-minus-one alphas-cumprod term times epsilon. You can read it word by word and match each factor to the formula. That's how we calculate x_0, or x_start.

Now we use that to find the model mean, and by model mean I literally mean (no pun intended) this posterior-mean expression. This is the same q_posterior_mean_variance computation we already stepped through before, so I'll skip through it; we end up with our mean, and we return the mean, the variance, the log variance, and the predicted x_start. After that, we pass the true mean and true log variance, that's the ground truth, together with our mean and log variance, compute the KL divergence (we want our distribution to be as close as possible to that one), and do some normalization.
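Here is a sketch of the two pieces we just stepped through: the equation-15 variance interpolation (min_log being the extracted posterior log variance clipped, max_log the extracted log of the betas) and the recovery of x_0 from the predicted epsilon. It again reuses extract_into_tensor from earlier; names are mine.

```python
import torch

def learned_range_log_variance(model_var_values, min_log, max_log):
    frac = (model_var_values + 1) / 2                  # map the [-1, 1] output to [0, 1]
    log_var = frac * max_log + (1 - frac) * min_log    # equation 15
    return log_var, torch.exp(log_var)                 # also return the variance itself

def predict_xstart_from_eps(x_t, t, eps,
                            sqrt_recip_alphas_cumprod, sqrt_recipm1_alphas_cumprod):
    # rearranging x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps for x_0
    return (
        extract_into_tensor(sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t
        - extract_into_tensor(sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps
    )
```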
Finally there is the L_0 case: if the timestep is zero we do a different computation, as you may recall from the variational lower bound loss. Side by side with the paper: these are the L_{t-1} terms, the KL divergences we were computing, and for t = 0 we instead compute -log p_theta(x_0 | x_1), which is exactly what this discretized Gaussian likelihood gives us: the decoder negative log likelihood. So, depending on whether t equals zero, we use the decoder NLL as the loss; otherwise we use the KL. Fairly simple. Let me zoom in: we now have our first term, the VB (variational lower bound) term, and we apply some rescaling to it, which is why the loss is called rescaled MSE.

Let's continue. I'll ignore this part here because we're predicting epsilon, i.e. the noise, so we can ignore everything else; let me make sure there's no breakpoint and step over. The target is just the noise, and here we do the MSE between the model output, the epsilons predicted by the UNet, and that target. And that noise, let me see whether I can find it, is the noise we initially created before we even started forming the x_t's. Mechanically it's actually fairly simple; to be super frank, it's not easy to understand why exactly this works, it's still kind of magical to me, but I'm slowly learning more and more about diffusion models. I think the first time I really took some time to understand them was when I was preparing the GLIDE paper video, so do check that out, the short introduction there may be even better than the one I made in this video. In any case, finally, because we have the hybrid loss, we just combine the MSE and the variational lower bound loss, and that's our loss. After this we apply the per-timestep weights, which are all ones because we use uniform sampling, do some logging, and finally compute the gradients by calling backward on the hybrid loss. Next we just keep on repeating this over micro-batches. I'm going to stop here; this is everything you need to know to understand how the training procedure works. Hopefully that was useful.
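As a recap of the objective we just finished stepping through, here is a hedged sketch of how the hybrid loss is assembled; the num_timesteps / 1000 rescaling factor is my reading of the "rescaled" variant, so treat it as an assumption rather than the repo's exact constant.

```python
def hybrid_loss(model_eps, noise, vb_term, num_timesteps):
    # simple objective: MSE between the true noise and the predicted epsilon
    mse = ((noise - model_eps) ** 2).mean(dim=[1, 2, 3])
    # rescale the variational-lower-bound term so it does not dominate the MSE
    vb = vb_term * (num_timesteps / 1000.0)
    return mse + vb            # per-example loss; weighted and averaged outside
```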
Now let's quickly dig into the sampling code and then we're done. I'll stop this training, open the sampling script, which is the image_sample script, and start from its main function. I'll select the sample configuration from my launch.py in VS Code (by the way, I love VS Code: beautiful design, amazing debugging experience, and I don't know why you would use anything else unless you don't have a choice, which is a reasonable excuse, I guess). Stepping over: a bunch of arguments, and we again ignore distributed training and logging. I'll skip the model creation as well, disable all the breakpoints, and step over all of that since we've already seen it. Now we load the actual model: they provide the checkpoints in their GitHub repo, so do check it out; you can download one and just set the path here. I have mine stored locally: it's an ImageNet 64x64 unconditional model trained for, I guess, 100 million steps. We load it, put it onto the GPU, set it into eval mode, and now start creating the samples.

Because this is not a class-conditional model, we can ignore this part, and now comes the sample function, where the magic happens. First of all, there is this paper called DDIM; I might cover it in the next video, but for now we ignore it because the flag is false, and we'll be using p_sample_loop instead, which is just the method from the improved DDPM paper, the one I showed you at the beginning of the video. In a nutshell, DDIM works better when you only have 50 or fewer timesteps during your sampling procedure; as soon as you pass roughly the 50-step threshold, this method operates better.

So here's the sampling function. This is our desired shape: we want a [1, 3, 64, 64] image, the image we're about to generate, and the model kwargs are kind of empty. Before entering, let me re-enable all the breakpoints. Here we are: first there is p_sample_loop_progressive, which keeps on producing the samples; we pass in the model and the desired shape, noise is None, we don't specify it, and the rest is boilerplate I'm going to ignore. Inside, we pick a device, which is going to be the GPU in my case, and then we generate the starting image: we literally start with Gaussian noise, your normal distribution with mean zero and variance one, of the desired shape on the desired device. The number of timesteps is set to 100; I think I modified this for the sake of time, the default was a bit bigger, like 4000. We generate the indices and, as you can see, reverse them, because now we are running the reverse process: we start at index 99 and go all the way down to zero, which is the "original" image. When I say original image, I mean a generated image from the underlying data distribution we learned during the training procedure; that's the idea. Stepping over: we loop over the indices, starting with 99. Because the batch dimension is one, nothing fancy happens here, but if you wanted to generate, say, five images in a batch, you would just copy-paste the 99 five times, since all of the noisy images follow the same process. And in this p_sample function is where all the magic actually happens, everything else was boilerplate: we pass the image, which at the initial step is just pure noise, and the timestep. Here is how the sampling works, and we actually already saw most of this during training, so you pretty much know what to expect.
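Here is a sketch of that ancestral sampling loop: start from pure Gaussian noise and walk the timesteps in reverse, calling p_sample at every step. It assumes p_sample returns a dict with a "sample" key, as in the walkthrough.

```python
import torch

@torch.no_grad()
def p_sample_loop(diffusion, model, shape, device):
    img = torch.randn(*shape, device=device)                # x_T ~ N(0, I)
    indices = list(range(diffusion.num_timesteps))[::-1]    # e.g. 99, 98, ..., 0
    for i in indices:
        t = torch.tensor([i] * shape[0], device=device)     # same timestep for the whole batch
        out = diffusion.p_sample(model, img, t, clip_denoised=True)
        img = out["sample"]                                  # feed the sample into the next step
    return img
```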
Let me enter. This is the same code where we split the model output into epsilon and variance values, compute the min log and max log (equation 15 from the paper) to get the variance, and then, further down, predict x_0 and use it to compute the model mean; we already stepped through all of that, so I won't do it again. I'll disable all the breakpoints, step over with F5, and then enable the breakpoints again. Next, we generate noise from a normal distribution, and we generate this mask, which is always one when t is different from zero, because the zeroth step is treated differently: recall that even during training we had the NLL loss for t = 0 as opposed to the variational lower bound loss, and similarly here the sampling behaves a bit differently depending on the timestep. Then we take the mean we computed, take the log variance, exponentiate half of it (the exp and the log cancel out, giving us the standard deviation), multiply that with the noise, and add it to the mean: that's our sample. So now we're at step 98, and we keep doing this until we get a completely denoised image; at the very end, at the zeroth timestep, this mask is equal to zero, which means the only thing we return is the final mean. Again, that's just a detail of how this works. We return the sample, and that sample becomes the image we feed into the next iteration of this p_sample function; that's pretty much how it works.

I'm going to stop here because you've pretty much seen everything, go to launch.py, pick the sample configuration, increase the number of diffusion steps to 500, and run the script again, this time with all breakpoints disabled, just to show you the results. I added an image-show / plot-show line so we can visualize the actual sample, and I'll get back to you as soon as the image is generated. Okay guys, here's the image we got, some dog image. Do keep in mind that I only used 500 steps; it would be much better if I used more, like a thousand. Although there is a bug with this code when you use 4000 steps (you can check out the issue in their repo): you basically get a completely saturated image. So the optimum image quality is maybe around one to two thousand steps; at four thousand the images hit the saturation point and you get pretty much junk out of the model.
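For reference, here is a sketch of the single reverse step we just described: add noise scaled by the standard deviation to the predicted mean, except at t = 0, where only the mean is returned.

```python
import torch

def p_sample_step(mean, log_variance, t):
    noise = torch.randn_like(mean)
    # mask is 0 where t == 0 and 1 elsewhere: no noise is added on the final step
    nonzero_mask = (t != 0).float().view(-1, *([1] * (mean.dim() - 1)))
    return mean + nonzero_mask * torch.exp(0.5 * log_variance) * noise
```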
Guys, this is pretty much it. Hopefully you found this format of video useful; do let me know, and if so I'll keep making videos such as this one, combining papers and code, putting them side by side, and trying to make these abstract mathematical ideas a bit more grounded. Hopefully I've done a decent job at that. In any case, if you found this video useful, do share it with others who want to learn more about diffusion processes, subscribe to this channel, and join the Discord community; you can find the link down below in the video description. Until next time, bye bye!
[{"start": 0.0, "end": 6.04, "text": " What's up guys, in this video I'm focusing on covering diffusion models."}, {"start": 6.04, "end": 11.200000000000001, "text": " So basically the family of models that are powering some of the most famous AI models"}, {"start": 11.200000000000001, "end": 18.64, "text": " over the last couple of months such as I guess the LEED 2, Imagine from Google, Glide from"}, {"start": 18.64, "end": 22.16, "text": " OpenAI as well and many other models."}, {"start": 22.16, "end": 28.72, "text": " So the idea, the video will be quite ambitious so I'll try and first walk you through two"}, {"start": 28.72, "end": 34.44, "text": " of the seminal papers behind diffusion models and then I'm going to actually go through"}, {"start": 34.44, "end": 36.06, "text": " the code base."}, {"start": 36.06, "end": 40.2, "text": " So the skimming of the paper just serves the purpose of me showing you the formulas, the"}, {"start": 40.2, "end": 44.68, "text": " mathematical formulas which we can then map and relate to actual code."}, {"start": 44.68, "end": 49.28, "text": " So it's not going to be a deep dive into the papers per se but hopefully this gives you"}, {"start": 49.28, "end": 53.72, "text": " the necessary context to kind of later on cope with the actual code."}, {"start": 53.72, "end": 56.739999999999995, "text": " Okay, having said that, I'm covering two papers."}, {"start": 56.74, "end": 60.160000000000004, "text": " One is denoising diffusion probabilistic models."}, {"start": 60.160000000000004, "end": 66.36, "text": " That's pretty much the paper that made diffusion models practical and then I'm going to cover"}, {"start": 66.36, "end": 72.36, "text": " improved denoising diffusion probabilistic models from OpenAI and I'm actually going"}, {"start": 72.36, "end": 77.0, "text": " to be covering the code base behind this paper."}, {"start": 77.0, "end": 79.0, "text": " Okay, but let's start with this one."}, {"start": 79.0, "end": 84.36, "text": " So this is going to introduce the necessary basics if you haven't already learned anything"}, {"start": 84.36, "end": 88.84, "text": " about diffusion models, hopefully that's going to be some context for you."}, {"start": 88.84, "end": 93.12, "text": " I did cover the glide paper so I did cover some diffusion models there so do check it"}, {"start": 93.12, "end": 94.12, "text": " out."}, {"start": 94.12, "end": 98.92, "text": " I'm going to link it somewhere here but this video, having said that, this video is fairly"}, {"start": 98.92, "end": 101.96000000000001, "text": " self-contained so yeah, you can continue watching."}, {"start": 101.96000000000001, "end": 107.12, "text": " Okay, so here is how the diffusion model looks like on a high level."}, {"start": 107.12, "end": 114.08, "text": " So the idea is to start from an image here and then you slowly, gradually start adding"}, {"start": 114.08, "end": 116.2, "text": " Gaussian noise on top of your image."}, {"start": 116.2, "end": 122.03999999999999, "text": " So that's called the forward diffusion process and ultimately you end up with an image such"}, {"start": 122.03999999999999, "end": 128.48, "text": " as this one which is, as you can see here, basically a complete noise of an image."}, {"start": 128.48, "end": 133.44, "text": " And now if you learn how to reverse this process, so this is the forward process here as you"}, {"start": 133.44, "end": 142.24, "text": " can see, if you learn how to reverse this process, so denote it as P theta, if we learn"}, {"start": 
142.24, "end": 147.04000000000002, "text": " how to do that, then basically if we start a training procedure and we learn to do this"}, {"start": 147.04000000000002, "end": 152.20000000000002, "text": " for all of the images in our dataset, you eventually learn the underlying data distribution"}, {"start": 152.20000000000002, "end": 158.36, "text": " and then you can basically start from a random noise sample and start denoising that image"}, {"start": 158.36, "end": 164.54000000000002, "text": " until you end up with a hallucinated new image from your underlying data distribution."}, {"start": 164.54000000000002, "end": 166.32000000000002, "text": " So it's going to be a novel image obviously."}, {"start": 166.32000000000002, "end": 170.88, "text": " Okay, so that's a very high level explanation there."}, {"start": 170.88, "end": 172.79999999999998, "text": " Let me now walk you through the formulas."}, {"start": 172.79999999999998, "end": 178.32, "text": " So again, this is how the forward process looks like."}, {"start": 178.32, "end": 188.4, "text": " This is basically the joint distribution and if we want to sample, if we want to do one"}, {"start": 188.4, "end": 193.24, "text": " step of the reverse process, here is how we do it."}, {"start": 193.24, "end": 198.84, "text": " So basically we are going to learn a neural network that's going to predict the mu of"}, {"start": 198.84, "end": 204.76, "text": " theta and the sigma of theta, which are basically the mean and the variance of the Gaussian."}, {"start": 204.76, "end": 208.12, "text": " So now there is a bunch of theory for why this is possible."}, {"start": 208.12, "end": 216.52, "text": " So they show that if you have steps small enough, basically that means that if your"}, {"start": 216.52, "end": 222.46, "text": " forward process is adding Gaussians, they show that you can also approximate the reverse"}, {"start": 222.46, "end": 226.7, "text": " process as sampling from a Gaussian."}, {"start": 226.7, "end": 231.72, "text": " So it's not kind of completely obvious why this is the fact, but yeah, we'll have to"}, {"start": 231.72, "end": 233.67999999999998, "text": " take it for granted."}, {"start": 233.67999999999998, "end": 236.16, "text": " So the idea is to learn those two."}, {"start": 236.16, "end": 237.16, "text": " Let's continue here."}, {"start": 237.16, "end": 240.04, "text": " So here is the actual forward process."}, {"start": 240.04, "end": 243.32, "text": " You can see how we sample from the forward process here."}, {"start": 243.32, "end": 249.6, "text": " Basically you kind of downscale the current image and then, so this is how you form the"}, {"start": 249.6, "end": 250.6, "text": " mean."}, {"start": 250.6, "end": 256.68, "text": " You take the current image, you downscale it and basically then you have this like,"}, {"start": 256.68, "end": 264.04, "text": " covariance matrix and then you just sample from this Gaussian here to end up with the"}, {"start": 264.04, "end": 265.04, "text": " X of t."}, {"start": 265.04, "end": 266.04, "text": " Okay?"}, {"start": 266.04, "end": 271.02, "text": " So again, we condition on the X t minus one and by sampling from this distribution here,"}, {"start": 271.02, "end": 272.2, "text": " we end up with X t."}, {"start": 272.2, "end": 273.2, "text": " Okay?"}, {"start": 273.2, "end": 278.08, "text": " So that's basically going from here all the way to here."}, {"start": 278.08, "end": 281.24, "text": " That's one step."}, {"start": 281.24, "end": 282.24, "text": " What's 
next?"}, {"start": 282.24, "end": 287.12, "text": " Now, they show that you can basically train these models by optimizing this variational"}, {"start": 287.12, "end": 288.12, "text": " bound."}, {"start": 288.12, "end": 292.92, "text": " The idea is, so here we have the log likelihood of our data."}, {"start": 292.92, "end": 294.6, "text": " We want to obviously maximize."}, {"start": 294.6, "end": 300.2, "text": " We want to tweak our model such that all of the data points from our data set are super"}, {"start": 300.2, "end": 302.44, "text": " likely under our model."}, {"start": 302.44, "end": 306.84000000000003, "text": " So that's the idea of how we train most of these, like all of these pretty much generative"}, {"start": 306.84000000000003, "end": 310.16, "text": " models, not only diffusion models."}, {"start": 310.16, "end": 315.12, "text": " And now this is a standard thing with variational bounds."}, {"start": 315.12, "end": 322.84000000000003, "text": " Basically you find a surrogate loss, which is like a lower bound basically for this loss"}, {"start": 322.84000000000003, "end": 323.84000000000003, "text": " here."}, {"start": 323.84000000000003, "end": 329.6, "text": " And then by maximizing it, basically you're certain that the likelihood of your data is"}, {"start": 329.6, "end": 331.82000000000005, "text": " going to be at least as big."}, {"start": 331.82000000000005, "end": 334.40000000000003, "text": " So that's the main idea there."}, {"start": 334.40000000000003, "end": 336.88, "text": " Now I'm not going to dig into formulas here."}, {"start": 336.88, "end": 341.12, "text": " We're going to later see how they decompose this into an actual expression that's going"}, {"start": 341.12, "end": 343.8, "text": " to be leveraged later on in the code."}, {"start": 343.8, "end": 345.84, "text": " But here is the equation for now."}, {"start": 345.84, "end": 349.52, "text": " Now this is an important, this here is a super important finding."}, {"start": 349.52, "end": 354.28, "text": " So instead of having to sample every, so during the forward process, instead of having to"}, {"start": 354.28, "end": 360.6, "text": " sample multiple times, and in practice they use thousand steps, they show you can literally"}, {"start": 360.6, "end": 367.72, "text": " sample arbitrary X of t starting from X zero, where X zero is your original image, by sampling"}, {"start": 367.72, "end": 369.96000000000004, "text": " from this Gaussian here."}, {"start": 369.96000000000004, "end": 373.96000000000004, "text": " So these are some important coefficients we'll be seeing in the code as well."}, {"start": 373.96000000000004, "end": 378.48, "text": " So we have these alpha t's, which are one minus beta t's."}, {"start": 378.48, "end": 384.92, "text": " And beta is basically, as you can see here, it's basically just like your covariance matrix"}, {"start": 384.92, "end": 386.20000000000005, "text": " here."}, {"start": 386.2, "end": 394.52, "text": " In practice, the original DDPM, so this paper here, used fixed, like fixed schedules, whereas"}, {"start": 394.52, "end": 401.28, "text": " the later papers, such as the one from OpenAI, the improved one, used learnable like schedules."}, {"start": 401.28, "end": 408.76, "text": " And when I say schedule, I just mean how does beta vary when we go across the whole forward"}, {"start": 408.76, "end": 409.76, "text": " process."}, {"start": 409.76, "end": 418.08, "text": " Then there is this alpha t bar, which is just a product of all of these alphas, 
which are"}, {"start": 418.08, "end": 421.88, "text": " defined here, starting from one, all the way to t."}, {"start": 421.88, "end": 427.56, "text": " So one is the, we start from the, usually one is denoted as the start of the process"}, {"start": 427.56, "end": 431.88, "text": " before the image has been deteriorated, and then as t grows, we're going towards image"}, {"start": 431.88, "end": 434.92, "text": " becoming pure Gaussian noise."}, {"start": 434.92, "end": 440.64000000000004, "text": " So here is the expression, you basically can use square root of this alpha t bar, multiply"}, {"start": 440.64000000000004, "end": 445.56, "text": " that, so we multiply the original image here, this is how we form the variance, and then"}, {"start": 445.56, "end": 448.52000000000004, "text": " we just sample from this distribution and we get the x of t."}, {"start": 448.52000000000004, "end": 453.72, "text": " So that means we immediately get arbitrarily noisy image."}, {"start": 453.72, "end": 454.72, "text": " Nice."}, {"start": 454.72, "end": 460.12, "text": " I mentioned the loss we'll be using, so here is the loss just reshaped into different forms."}, {"start": 460.12, "end": 462.88, "text": " So this is the form that's going to be actionable, so this is the form we'll be using in the"}, {"start": 462.88, "end": 463.88, "text": " code."}, {"start": 463.88, "end": 472.0, "text": " So we have L0, LT minus one, and LT, so these are kind of three classes of similar components"}, {"start": 472.0, "end": 474.2, "text": " here."}, {"start": 474.2, "end": 478.96, "text": " This one is super important, basically KL divergence between this here, as you can see,"}, {"start": 478.96, "end": 484.48, "text": " is the one step of the reverse process, and we're going to do KL divergence with this"}, {"start": 484.48, "end": 488.6, "text": " basically forward process posterior."}, {"start": 488.6, "end": 492.28, "text": " And then we have this component, which will actually, in this paper, they can ignore it"}, {"start": 492.28, "end": 500.35999999999996, "text": " because this is going to be like pure Gaussian, and because they don't learn that the variance"}, {"start": 500.35999999999996, "end": 506.0, "text": " is basically, you can ignore this term here, and this one here is basically the negative"}, {"start": 506.0, "end": 511.79999999999995, "text": " log likelihood of the image conditioned on the previous step."}, {"start": 511.79999999999995, "end": 516.68, "text": " Anyways, a lot of details I'm going to have to kind of hand wave explain all of this just"}, {"start": 516.68, "end": 519.8399999999999, "text": " for you to understand and see the formulas, that's the important part."}, {"start": 519.84, "end": 522.6800000000001, "text": " Okay, so let's see what's up here."}, {"start": 522.6800000000001, "end": 528.5600000000001, "text": " So we can actually calculate, that's a cool thing, we can calculate this posterior of"}, {"start": 528.5600000000001, "end": 536.0, "text": " the forward process analytically here, and you can see how this mu t tilde is computed,"}, {"start": 536.0, "end": 539.2800000000001, "text": " and you can see how beta tilde is computed here."}, {"start": 539.2800000000001, "end": 544.64, "text": " So these expressions are all going to be here in the code, so just kind of take mental notes"}, {"start": 544.64, "end": 545.64, "text": " here."}, {"start": 545.64, "end": 550.92, "text": " So I'll be comparing the formulas with code like side by side, so that's going to 
probably"}, {"start": 550.92, "end": 552.92, "text": " be useful for you guys."}, {"start": 552.92, "end": 558.48, "text": " So anyways, because we have this is a Gaussian, and this is a Gaussian, that means we'll end"}, {"start": 558.48, "end": 563.04, "text": " up when you do the KL divergence between two Gaussians, you end up with simple analytical"}, {"start": 563.04, "end": 566.28, "text": " expressions, we're going to see those a bit later."}, {"start": 566.28, "end": 568.48, "text": " Okay, so what's up next?"}, {"start": 568.48, "end": 572.4399999999999, "text": " So actually here, so here is how these LT minus ones are going to simplify to, so we"}, {"start": 572.44, "end": 577.0600000000001, "text": " simplify them to these expressions here."}, {"start": 577.0600000000001, "end": 583.2, "text": " Basically we just find the MSE, so the mean squared error between the means, so this is"}, {"start": 583.2, "end": 588.9200000000001, "text": " going to be our learnable one, this is the one we get from the forward posterior."}, {"start": 588.9200000000001, "end": 594.98, "text": " And that can be further, like basically just simple algebra here, because we know that"}, {"start": 594.98, "end": 601.0, "text": " Xt equals this, and that's from the so called nice property here, so that's this property"}, {"start": 601.0, "end": 602.0, "text": " here."}, {"start": 602.0, "end": 609.52, "text": " So you can kind of imagine that sampling from this Gaussian is actually equivalent to computing"}, {"start": 609.52, "end": 610.96, "text": " the following expression."}, {"start": 610.96, "end": 614.16, "text": " So if you want to get Xt, you basically do the following."}, {"start": 614.16, "end": 623.0, "text": " So you do this square root alpha t bar X zero, and you basically add plus square root because"}, {"start": 623.0, "end": 628.24, "text": " this is variance, we want to have like standard deviation, so minus alpha bar t, so it's going"}, {"start": 628.24, "end": 634.04, "text": " to be under square root, and then we just multiply times epsilon, where epsilon is just"}, {"start": 634.04, "end": 636.8, "text": " basically your normal distribution."}, {"start": 636.8, "end": 642.6, "text": " Okay, so that's the Gaussian with the mean of zero and variance of one."}, {"start": 642.6, "end": 652.44, "text": " Okay, so let's go back here, and then we just plug in Xt here, so we, sorry, we just calculate"}, {"start": 652.44, "end": 657.52, "text": " X zero here, we just kind of do the algebra, and then we plug it in here, and this is the"}, {"start": 657.52, "end": 663.48, "text": " next expression, and then this simplifies because we know how this is computed up here,"}, {"start": 663.48, "end": 668.24, "text": " so here is how we compute that one, so it's just like bunch of manipulations of symbols,"}, {"start": 668.24, "end": 671.28, "text": " and I'm kind of scheming over it."}, {"start": 671.28, "end": 675.6, "text": " So you end up with this expression, and then you basically, this is what you want your"}, {"start": 675.6, "end": 676.6, "text": " neural network to learn."}, {"start": 676.6, "end": 681.4399999999999, "text": " So this is going to be your diffusion model, it's going to learn how to predict the noise"}, {"start": 681.4399999999999, "end": 682.4399999999999, "text": " here."}, {"start": 682.44, "end": 692.6, "text": " Okay, so let's see, that simplifies to what, basically we want to have the learnable mean"}, {"start": 692.6, "end": 698.48, "text": " needs to be equal to 
whatever we had here, so we basically want to have this thing, we"}, {"start": 698.48, "end": 704.4000000000001, "text": " want it to be like the same as this term here, because then the loss will obviously go to"}, {"start": 704.4000000000001, "end": 711.72, "text": " zero, that's what I show here, and if we achieve that, if we learn that mean, then how we sample,"}, {"start": 711.72, "end": 716.5600000000001, "text": " how we'll be sampling from the reverse process is again a simple computation, so here's the"}, {"start": 716.5600000000001, "end": 722.36, "text": " mean, so that's the same expression as this one here, and then we just add the, basically"}, {"start": 722.36, "end": 728.88, "text": " standard deviation times this vector here, which is going to be sampled from a normal"}, {"start": 728.88, "end": 731.0, "text": " distribution, okay?"}, {"start": 731.0, "end": 736.96, "text": " And ultimately, when you simplify even further that expression, you end up with this type"}, {"start": 736.96, "end": 740.0400000000001, "text": " of parameterization, so you can either learn a neural network that's going to predict"}, {"start": 740.04, "end": 746.0, "text": " the mean, or you can learn instead just this term here, just the actual noise, and that's"}, {"start": 746.0, "end": 751.3199999999999, "text": " what I end up doing, so this expression, as you can see, is just a mean squared error"}, {"start": 751.3199999999999, "end": 757.9, "text": " between the sample noise, so we're basically learning what type of a noise was added to"}, {"start": 757.9, "end": 764.28, "text": " our image, and then we have these weights for each of our terms, LT minus one, so this"}, {"start": 764.28, "end": 770.0799999999999, "text": " is again, this is LT minus one, those are the last components we saw, and we're going"}, {"start": 770.0799999999999, "end": 775.56, "text": " to see that this weight basically is a function of the step in the diffusion process."}, {"start": 775.56, "end": 779.68, "text": " Okay, I know this is a lot of formulas, bear with me, it's going to become much easier"}, {"start": 779.68, "end": 784.0, "text": " as the video progresses, because I'm going to start introducing code as well, but let"}, {"start": 784.0, "end": 789.1999999999999, "text": " me quickly just explain what this means, so we start from an image here, this is some"}, {"start": 789.2, "end": 796.48, "text": " image, and then basically we add on top of it some noise, it's going to be something"}, {"start": 796.48, "end": 803.32, "text": " inside of there, let me just draw some human being, and then after adding the noise, like"}, {"start": 803.32, "end": 811.2800000000001, "text": " this, let me change the color, so let's imagine we added a bunch of noise here, so now your"}, {"start": 811.2800000000001, "end": 817.72, "text": " diffusion model is learning this green stuff, and if it learns the green stuff, then you"}, {"start": 817.72, "end": 821.24, "text": " basically know how to go backwards, okay?"}, {"start": 821.24, "end": 826.5600000000001, "text": " So you learn the green stuff, the noise, that's the epsilon here, and you know how to denoise"}, {"start": 826.5600000000001, "end": 827.5600000000001, "text": " your images."}, {"start": 827.5600000000001, "end": 834.12, "text": " Cool, it's kind of magical, it doesn't click, you probably won't understand it immediately,"}, {"start": 834.12, "end": 837.76, "text": " it's not completely straightforward to understand why this works, I'm still struggling to 
be"}, {"start": 837.76, "end": 842.0400000000001, "text": " honest, but the formulas are here, and we're going to follow them for the time being."}, {"start": 842.04, "end": 848.88, "text": " Okay, I'm going to skip this, and starting from this LT minus one, they basically derive"}, {"start": 848.88, "end": 854.0, "text": " empirically this simplified objective, where they drop the term here, so the term that"}, {"start": 854.0, "end": 860.68, "text": " depends on the basically time step of the diffusion process, and we end up with this"}, {"start": 860.68, "end": 867.36, "text": " one here, and so as you can see, basically we're doing simple MSC loss between the noise,"}, {"start": 867.36, "end": 872.6800000000001, "text": " so we're trying to predict this noise added on top of the image, and we do that for various"}, {"start": 872.6800000000001, "end": 874.92, "text": " time steps of the diffusion process."}, {"start": 874.92, "end": 880.4, "text": " So basically what we're doing again is we start from an image here, we add some noise"}, {"start": 880.4, "end": 884.88, "text": " on top of it, we end with an image, and then we keep on doing that for like, let's say"}, {"start": 884.88, "end": 890.8000000000001, "text": " a thousand steps, and we end up here, and basically we'll be trying to, at each step"}, {"start": 890.8000000000001, "end": 894.96, "text": " of the way, understand which noise was added here, which noise was added here, which noise"}, {"start": 894.96, "end": 901.24, "text": " was added here, and by doing that we learn the reverse process of the diffusion, and"}, {"start": 901.24, "end": 905.2800000000001, "text": " that's going to lead us to like a powerful generative model."}, {"start": 905.2800000000001, "end": 909.84, "text": " Cool, let's continue here, let me just show you, so here are some images they get from"}, {"start": 909.84, "end": 915.72, "text": " the model, not that important because you already know that these models are super powerful,"}, {"start": 915.72, "end": 920.2800000000001, "text": " so here you can see how the diffusion process looks like in practice, you start with the"}, {"start": 920.28, "end": 926.36, "text": " noisy image, and then gradually keep on denoising it until you get to the sampled image, which"}, {"start": 926.36, "end": 931.6, "text": " is the image sample from the underlying data distribution, okay?"}, {"start": 931.6, "end": 935.98, "text": " Here they show how, depending on from which latent you start from, if you start from a"}, {"start": 935.98, "end": 940.76, "text": " latent like X thousand, so that means thousand steps of diffusion, and then you start three"}, {"start": 940.76, "end": 944.8399999999999, "text": " independent reverse processes because they are stochastic, you end up with three different"}, {"start": 944.84, "end": 951.12, "text": " images, but as soon as we start taking like latents that come from later in the reverse"}, {"start": 951.12, "end": 958.52, "text": " process, so let's say X 500, then you can see that three independent reverse processes"}, {"start": 958.52, "end": 960.76, "text": " lead to images that are quite similar."}, {"start": 960.76, "end": 965.52, "text": " Having seen all of that, now I'm going to quickly walk you through the innovation that"}, {"start": 965.52, "end": 971.0, "text": " the second paper brought, so it's basically building directly on top of the denoising"}, {"start": 971.0, "end": 975.64, "text": " diffusion probabilistic models or DDPMs for short."}, {"start": 975.64, 
"end": 982.52, "text": " Let me show you main contributions of this paper, so first of all, they have a learnable"}, {"start": 982.52, "end": 988.04, "text": " variance schedule, so this is how they are going to do it, remember this formula, we're"}, {"start": 988.04, "end": 994.68, "text": " going to see it a bit later, so basically they predict this vector V, and then they"}, {"start": 994.68, "end": 1001.56, "text": " kind of do interpolation between beta T and beta tilde T, which is the posterior variance,"}, {"start": 1001.56, "end": 1004.52, "text": " and this is the forward process variance."}, {"start": 1004.52, "end": 1010.4399999999999, "text": " Okay, so that's one thing they've done here, so this formula here, the second thing they've"}, {"start": 1010.4399999999999, "end": 1017.52, "text": " done is they use this hybrid loss, so L simple we saw what the one is, that's when you drop"}, {"start": 1017.52, "end": 1023.16, "text": " the terms that depend on the time step, and then we have the LVLB, which is the variational"}, {"start": 1023.16, "end": 1030.56, "text": " lower bound, so that's the actual original loss with all of those complex terms, and"}, {"start": 1030.56, "end": 1039.1599999999999, "text": " by just creating this type of weighted average between those two, and using this one to learn"}, {"start": 1039.1599999999999, "end": 1044.3999999999999, "text": " the variance, and using this one to learn the mean, basically they show this is the"}, {"start": 1044.3999999999999, "end": 1049.2, "text": " best, this was the best trade off, so there is a lot of experimentation going on here,"}, {"start": 1049.2, "end": 1055.1200000000001, "text": " so it's kind of a lot of hacks put on top of the DTPM in order to make this work, as"}, {"start": 1055.1200000000001, "end": 1059.8, "text": " well as DTPM itself had a bunch of hacks, such as using constant variances instead of"}, {"start": 1059.8, "end": 1064.2, "text": " learning them, etc., etc., so there is a lot of hacks going on in diffusion models, at"}, {"start": 1064.2, "end": 1066.24, "text": " least in these earlier papers."}, {"start": 1066.24, "end": 1072.2, "text": " Okay, so they say here along the same line of reasoning, we also apply a stop gradient"}, {"start": 1072.2, "end": 1079.0800000000002, "text": " to the mu of theta output for the LVLB term, which basically translates to, so this component"}, {"start": 1079.08, "end": 1083.4399999999998, "text": " is only going to be training this variance expression here."}, {"start": 1083.4399999999998, "end": 1088.32, "text": " Okay, so that's one thing, so that's the second thing actually, and then the third thing is"}, {"start": 1088.32, "end": 1096.24, "text": " instead of using linear, like basically noise schedule, so those betas being a simple linear"}, {"start": 1096.24, "end": 1104.22, "text": " sequence, instead of that they propose this cosine schedule, and what that brings is that"}, {"start": 1104.22, "end": 1112.4, "text": " you can see that alpha bar theta, alpha bar t, sorry, basically has much more gradual"}, {"start": 1112.4, "end": 1119.1000000000001, "text": " drop compared to linear, and because of that, that determines directly the amount of noise,"}, {"start": 1119.1000000000001, "end": 1123.52, "text": " and thus if you use the linear schedule, so basically using the linear schedule will lead"}, {"start": 1123.52, "end": 1129.32, "text": " to noisier images earlier on in the forward process of the diffusion."}, {"start": 1129.32, "end": 1136.24, 
"text": " Okay, that's a third thing they do that helps a lot, and then let me show you a couple more"}, {"start": 1136.24, "end": 1137.84, "text": " expressions here."}, {"start": 1137.84, "end": 1145.4399999999998, "text": " One very cool thing is using basically the values of the loss for each of the time step"}, {"start": 1145.4399999999998, "end": 1149.08, "text": " to understand how much weight we want to put on top of that time step."}, {"start": 1149.08, "end": 1154.36, "text": " So this is kind of middle ground between using the simple objective where all of those constants"}, {"start": 1154.36, "end": 1161.3999999999999, "text": " with all of the LT minus ones are basically constant, like an equal to, I guess, one,"}, {"start": 1161.3999999999999, "end": 1166.84, "text": " then you have the LVLB which had those complex expressions, and finally we have this type"}, {"start": 1166.84, "end": 1171.78, "text": " of important sampling where depending on the loss, so if, for example, if you're struggling"}, {"start": 1171.78, "end": 1177.4399999999998, "text": " with one particular image in the process, let's say this one, the ith image here, basically"}, {"start": 1177.44, "end": 1184.72, "text": " what you do is you increase the loss for that particular XT, so you'll put additional focus"}, {"start": 1184.72, "end": 1189.6000000000001, "text": " on trying to predict that noise that was put on top of this image."}, {"start": 1189.6000000000001, "end": 1194.56, "text": " So that's the idea with this expression here."}, {"start": 1194.56, "end": 1196.96, "text": " Let me continue, and we are almost done."}, {"start": 1196.96, "end": 1201.56, "text": " Finally this expression 19, not that important, basically what they show is that during training"}, {"start": 1201.56, "end": 1205.92, "text": " and during sampling you don't have to use the same type, the same length of a diffusion"}, {"start": 1205.92, "end": 1206.92, "text": " process."}, {"start": 1206.92, "end": 1212.5600000000002, "text": " For example, your training chain has like 4,000 images whereas you can, during the sampling"}, {"start": 1212.5600000000002, "end": 1220.76, "text": " time you can just have 100 images, 100 latents, and basically they show that how to remap"}, {"start": 1220.76, "end": 1228.16, "text": " the betas and beta tildes, the posterior variances such that this actually works out nicely,"}, {"start": 1228.16, "end": 1232.44, "text": " and they get high quality images and they obviously save a lot of computation which"}, {"start": 1232.44, "end": 1235.96, "text": " is super important because you don't want to sample 4,000 images every time you need"}, {"start": 1235.96, "end": 1236.96, "text": " to generate an image."}, {"start": 1236.96, "end": 1238.88, "text": " That's going to be super expensive."}, {"start": 1238.88, "end": 1244.96, "text": " Okay, guys that's pretty much it, couple more things here and we are done with the papers."}, {"start": 1244.96, "end": 1249.4, "text": " They show, and this is super interesting, they show the scaling loss for diffusion models"}, {"start": 1249.4, "end": 1254.8400000000001, "text": " and this was back I think in 2020, so if you kind of read this paper you could have expected"}, {"start": 1254.8400000000001, "end": 1259.6000000000001, "text": " that they are going to do the same thing as with GPT-3 and that's to scale up these models"}, {"start": 1259.6000000000001, "end": 1264.3600000000001, "text": " and that eventually led to Glide and then the Li and the 
Li2."}, {"start": 1264.36, "end": 1269.84, "text": " And you can see here again we have the power law, we see that the FID which is the metric"}, {"start": 1269.84, "end": 1273.76, "text": " that shows you how high quality your samples are, you can see that with the increasing"}, {"start": 1273.76, "end": 1279.36, "text": " size we keep on getting smaller and smaller FIDs whereas with NLL which is the negative"}, {"start": 1279.36, "end": 1285.04, "text": " loss likelihood it does not exactly follow the power law but it's kind of still going"}, {"start": 1285.04, "end": 1286.62, "text": " down here, so yeah."}, {"start": 1286.62, "end": 1291.9199999999998, "text": " In any case these two charts are indicative that scaling up diffusion models will probably"}, {"start": 1291.92, "end": 1296.48, "text": " be a good avenue for future research."}, {"start": 1296.48, "end": 1302.3200000000002, "text": " Okay let's see the conclusion, the likelihood is improved by learning the variances using"}, {"start": 1302.3200000000002, "end": 1306.48, "text": " our parameterization and hybrid objective."}, {"start": 1306.48, "end": 1310.6000000000001, "text": " Furthermore we have investigated how DDPM scale with the amount of available training"}, {"start": 1310.6000000000001, "end": 1314.5600000000002, "text": " compute and found that more training compute trivially leads to better sample quality and"}, {"start": 1314.5600000000002, "end": 1315.5600000000002, "text": " log likelihood."}, {"start": 1315.5600000000002, "end": 1321.2, "text": " Okay guys, so you hopefully got the gist of diffusion models, we saw how we have this"}, {"start": 1321.2, "end": 1325.48, "text": " forward process of noising the images and then we have the backward, the reverse process"}, {"start": 1325.48, "end": 1333.48, "text": " of denoising the images, we learned that and then plus add to that a bunch of hacks to"}, {"start": 1333.48, "end": 1338.0800000000002, "text": " get this thing to work but ultimately they are now the best generality models we have"}, {"start": 1338.0800000000002, "end": 1342.88, "text": " because they have compared to GANs they are much better at covering covering all of the"}, {"start": 1342.88, "end": 1349.2, "text": " modes of your data distribution whereas we know that GANs suffer from mode collapse, we"}, {"start": 1349.2, "end": 1356.44, "text": " also have much more like it's much more stable to train diffusion models as compared to GANs"}, {"start": 1356.44, "end": 1362.24, "text": " so yeah basically diffusion models are currently what GANs were a couple of years ago."}, {"start": 1362.24, "end": 1366.2, "text": " Cool guys let's now switch to the code, I'm gonna show you how this thing is actually"}, {"start": 1366.2, "end": 1370.68, "text": " trained and how we sample from the diffusion models."}, {"start": 1370.68, "end": 1375.16, "text": " Before that let me just show you the architecture they're using to actually learn the epsilon"}, {"start": 1375.16, "end": 1381.92, "text": " so the noise during the diffusion model training so this is a simple unit, this is how the"}, {"start": 1381.92, "end": 1386.3600000000001, "text": " architecture looks like, we're gonna see this in the code, this is less important arguably"}, {"start": 1386.3600000000001, "end": 1390.28, "text": " you could use some other architectures as well, they're just kind of stuck with the"}, {"start": 1390.28, "end": 1391.28, "text": " unit."}, {"start": 1391.28, "end": 1399.3200000000002, "text": " Okay let's start doing 
the training, okay I went on and downloaded the github repo, had"}, {"start": 1399.3200000000002, "end": 1404.64, "text": " to create some modifications to get this thing to work on my single GPU Windows machine,"}, {"start": 1404.64, "end": 1411.0400000000002, "text": " I can kind of submit that to my github, do let me know if you want me to do that, if"}, {"start": 1411.0400000000002, "end": 1420.0800000000002, "text": " so I can easily create a like a I'll easily just push this code on to my github repo."}, {"start": 1420.0800000000002, "end": 1425.48, "text": " So I just created this launch script so I basically used the recommended settings that"}, {"start": 1425.48, "end": 1430.5200000000002, "text": " they've showed in their readme file here so you can kind of take a look at the readme"}, {"start": 1430.52, "end": 1435.6, "text": " of the repo which we're gonna be using the improved diffusion repo and basically yeah"}, {"start": 1435.6, "end": 1439.76, "text": " so that's what I've set in my launch and now we can start training here."}, {"start": 1439.76, "end": 1443.24, "text": " That's less important we're gonna see what's going on here in a minute."}, {"start": 1443.24, "end": 1450.28, "text": " Okay so let me start training, we're first gonna obviously have a bunch of arguments"}, {"start": 1450.28, "end": 1454.6399999999999, "text": " so here they are I'm gonna kind of print them out but don't try and understand what's going"}, {"start": 1454.6399999999999, "end": 1460.16, "text": " on there's too many of them, we'll gradually start analyzing what's going on so we have"}, {"start": 1460.16, "end": 1466.8000000000002, "text": " the Cypher data set which I downloaded using their Cypher script then there is like bunch"}, {"start": 1466.8000000000002, "end": 1470.42, "text": " of other stuff learning rate blah blah blah we'll see all of that later."}, {"start": 1470.42, "end": 1474.48, "text": " So I'm gonna kind of ignore the parts that are not crucial to understanding diffusion"}, {"start": 1474.48, "end": 1478.6000000000001, "text": " models which means I'm gonna be ignoring the distributed training, I'm gonna be ignoring"}, {"start": 1478.6000000000001, "end": 1482.52, "text": " the loggers all of that and I'm gonna focus only on diffusion."}, {"start": 1482.52, "end": 1488.76, "text": " So here is the first important step we basically take these arguments convert them into dictionary"}, {"start": 1488.76, "end": 1496.52, "text": " and we start creating the actual unit model and then the diffusion object here."}, {"start": 1496.52, "end": 1501.04, "text": " So let's see how the model is gonna be constructed so we basically pass image size is gonna be"}, {"start": 1501.04, "end": 1507.12, "text": " 64, usually they have lower resolutions because remember the latents are of the same like"}, {"start": 1507.12, "end": 1513.2, "text": " dimensionality as the X0 as the original image."}, {"start": 1513.2, "end": 1517.72, "text": " Contrast that with VAE is contrast that with GANs with any other model the latents are"}, {"start": 1517.72, "end": 1522.92, "text": " always much of smaller dimensionality whereas here it's a bit more computational intensive"}, {"start": 1522.92, "end": 1528.3, "text": " and so because of that what they end up doing in practice is training models for smaller"}, {"start": 1528.3, "end": 1533.24, "text": " images and then training additionally super resolution models and they actually have like"}, {"start": 1533.24, "end": 1538.8, "text": " here scripts as 
you can see here basically super res sample super res train I'm gonna"}, {"start": 1538.8, "end": 1544.16, "text": " skip those because they really work very similarly to how diffusion works I'm gonna focus on"}, {"start": 1544.16, "end": 1546.84, "text": " these two scripts here the image sample and the image train."}, {"start": 1546.84, "end": 1553.6799999999998, "text": " Okay anyways number of channels that's gonna be internal dimension of the unit number of"}, {"start": 1553.6799999999998, "end": 1558.1999999999998, "text": " res blocks nothing interesting there learn Sigma set to true means we are learning the"}, {"start": 1558.1999999999998, "end": 1564.1999999999998, "text": " variances instead of hard coding them instead of using the fixed ones so that's the innovation"}, {"start": 1564.1999999999998, "end": 1571.04, "text": " from the improved diffusion paper class conditioning so we will not be using that but it's very"}, {"start": 1571.04, "end": 1575.56, "text": " easy to have class conditioning we'll see how they use temporal conditioning so how"}, {"start": 1575.56, "end": 1579.8799999999999, "text": " do we pass the T because T is a scalar how do we pass that into a neural network we're"}, {"start": 1579.8799999999999, "end": 1584.72, "text": " gonna see they are just using simple sinusoids like simple similar embeddings as what the"}, {"start": 1584.72, "end": 1589.56, "text": " original transformer paper used so and then they can alert I'll show you how they can"}, {"start": 1589.56, "end": 1595.96, "text": " fuse that into the unit model and we would be doing the same thing with this class conditioning"}, {"start": 1595.96, "end": 1598.6799999999998, "text": " if we were using it which we are not at the moment."}, {"start": 1598.6799999999998, "end": 1605.36, "text": " Okay checkpointing is just an optimization technique I'm gonna kind of ignore what checkpointing"}, {"start": 1605.36, "end": 1609.12, "text": " does is during the forward pass instead of storing the activations for every single"}, {"start": 1609.12, "end": 1614.9599999999998, "text": " layer you don't do that and because of that you save bunch of memory but as on the con"}, {"start": 1614.9599999999998, "end": 1620.4799999999998, "text": " side you'll have to kind of when you do the back prop in order to calculate the gradients"}, {"start": 1620.4799999999998, "end": 1625.32, "text": " you'll have to do your computations again so you're trading off the memory for time"}, {"start": 1625.32, "end": 1629.24, "text": " so you'll spend up much more time but you'll save up memory by doing this we'll not be"}, {"start": 1629.24, "end": 1638.04, "text": " using the checkpointing so yeah just FYI certain layers of unit have basically VAT type of"}, {"start": 1638.04, "end": 1643.3, "text": " attention so that means each of the token of the image is gonna be attending to each"}, {"start": 1643.3, "end": 1650.92, "text": " of the other tokens and that's what they show here so 16 means at resolution 16 by 16 they'll"}, {"start": 1650.92, "end": 1656.76, "text": " be doing the this attention and at 8 by 8 resolution they'll be doing the same thing"}, {"start": 1656.76, "end": 1662.84, "text": " so when I say 8 by 8 it's inside of the actual unit because you know that unit has that characteristic"}, {"start": 1662.84, "end": 1669.0, "text": " shape okay so number of heads again just the parameter for that attention layer inside"}, {"start": 1669.0, "end": 1674.48, "text": " of unit nothing important there we're gonna 
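For the checkpointing trade-off described above, a generic PyTorch illustration may help; this uses torch.utils.checkpoint directly and is not the repo's own helper:

    import torch
    from torch.utils.checkpoint import checkpoint

    class CheckpointedBlock(torch.nn.Module):
        # activations inside `block` are not stored during the forward pass;
        # they are recomputed during backward, trading extra compute for memory
        def __init__(self, block, use_checkpoint=False):
            super().__init__()
            self.block, self.use_checkpoint = block, use_checkpoint

        def forward(self, x):
            if self.use_checkpoint and x.requires_grad:
                return checkpoint(self.block, x)
            return self.block(x)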
see how this parameter is used basically this"}, {"start": 1674.48, "end": 1680.72, "text": " is how we depending on this flag they'll have two different ways of combining of conditioning"}, {"start": 1680.72, "end": 1688.44, "text": " the images with the time steps so yeah we'll see how this plays out a bit later okay so"}, {"start": 1688.44, "end": 1694.48, "text": " here is the create model function basically because image size is 64 they specify this"}, {"start": 1694.48, "end": 1700.3600000000001, "text": " how they specify how unit will be constructed then we have this attention DS attention DS"}, {"start": 1700.3600000000001, "end": 1706.38, "text": " basically converts this attention resolution into how many down sampling layers we have"}, {"start": 1706.38, "end": 1712.2800000000002, "text": " to wait inside of the unit before we start using these attentional layers so just another"}, {"start": 1712.2800000000002, "end": 1717.8400000000001, "text": " way of specifying when do we start inserting those attention layers into our unit okay"}, {"start": 1717.8400000000001, "end": 1722.8000000000002, "text": " not that vital you can ignore that if you didn't understand it so input number of channels"}, {"start": 1722.8000000000002, "end": 1728.2800000000002, "text": " obviously three we're dealing with RGB images we specify number of channels here because"}, {"start": 1728.2800000000002, "end": 1734.1200000000001, "text": " we are learning Sigma because of that this we end up with six output channels and the"}, {"start": 1734.12, "end": 1738.9599999999998, "text": " first three channels will be predicting the epsilon so that's the noise and the second"}, {"start": 1738.9599999999998, "end": 1744.52, "text": " three channels will be predicting the actual variances so that's why we have six here okay"}, {"start": 1744.52, "end": 1749.9799999999998, "text": " number of blocks blah blah blah nothing special drop out blah blah blah we'll not be using"}, {"start": 1749.9799999999998, "end": 1754.04, "text": " class conditioning so we end up with none here for number of classes we're not using"}, {"start": 1754.04, "end": 1761.36, "text": " checkpointing okay specifying details of attention etc okay let's enter the constructor I'm going"}, {"start": 1761.36, "end": 1765.32, "text": " to quickly walk you through unit so we are starting step by step we'll first see how"}, {"start": 1765.32, "end": 1770.6399999999999, "text": " unit works and then we're going to see how it fits into the whole like training loop"}, {"start": 1770.6399999999999, "end": 1779.6799999999998, "text": " later on okay so let's see here we just store all of these parameters inside of like internal"}, {"start": 1779.6799999999998, "end": 1786.1999999999998, "text": " fields here nothing fancy there so we here we create this layer the sequential layer"}, {"start": 1786.2, "end": 1793.52, "text": " which consists out of this basically inverse bottleneck shape of MLP this is a simple MLP"}, {"start": 1793.52, "end": 1801.0, "text": " that's going to be used to transform the sinusoids and before we use them to condition the model"}, {"start": 1801.0, "end": 1805.64, "text": " we'll see that a bit later okay so we can ignore this because we're not not using classes"}, {"start": 1805.64, "end": 1811.56, "text": " and now we start adding blocks to form the unit again we have three types of blocks we"}, {"start": 1811.56, "end": 1817.8, "text": " have the input blocks then we have the middle blocks here and finally we have the 
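Roughly how those numbers could be derived, as an illustrative snippet (the variable names are mine, not necessarily the repo's exact ones):

    image_size = 64
    attention_resolutions = "16,8"
    learn_sigma = True

    # convert "apply attention at 16x16 and 8x8" into downsampling factors inside the U-Net
    attention_ds = [image_size // int(res) for res in attention_resolutions.split(",")]
    # -> [4, 8]: attention kicks in once the feature map has been downsampled 64->16 and 64->8

    in_channels = 3                               # RGB input
    out_channels = 3 if not learn_sigma else 6    # 3 channels for epsilon + 3 for the variance head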
output"}, {"start": 1817.8, "end": 1825.56, "text": " blocks so that corresponds to what we saw in this diagram here basically whoops my one"}, {"start": 1825.56, "end": 1831.28, "text": " note is glitching so you can see it here so we have we kind of have the first part here"}, {"start": 1831.28, "end": 1835.36, "text": " the input blocks then we have the middle blocks and then we have the output blocks here that's"}, {"start": 1835.36, "end": 1841.32, "text": " roughly how how this code is structured so let me now get back to it and let me quickly"}, {"start": 1841.32, "end": 1845.04, "text": " walk you through there is i'm not going to dig into the actual details there is a couple"}, {"start": 1845.04, "end": 1849.6, "text": " important things i want to show you how they fuse the temporal information with the image"}, {"start": 1849.6, "end": 1854.8799999999999, "text": " information that's the vital thing i want to show you here okay so there is this wrapper"}, {"start": 1854.8799999999999, "end": 1860.22, "text": " object called time step embed sequential we're going to see that a lot what it does basically"}, {"start": 1860.22, "end": 1864.84, "text": " is the following depending if the layer inherits from this time step block which is just a"}, {"start": 1864.84, "end": 1869.24, "text": " simple interface a dummy interface that where where the for a function supports both the"}, {"start": 1869.24, "end": 1874.56, "text": " x which is the image representation and the embedding the temporal embeddings in that"}, {"start": 1874.56, "end": 1881.28, "text": " case we'll be we'll be calling the layer passing both arguments whereas if it's not inheriting"}, {"start": 1881.28, "end": 1886.6, "text": " from time step block then we'll just be passing the x and ignoring like the embeddings so"}, {"start": 1886.6, "end": 1892.88, "text": " as i said a simple wrapper nothing too interesting okay so the first thing we do is we create"}, {"start": 1892.88, "end": 1898.56, "text": " this a com2d layer why com2d because this is com and d a generic layer they created"}, {"start": 1898.56, "end": 1904.08, "text": " and then number of dimensions is two and because of that we end up with a com2d so that's how"}, {"start": 1904.08, "end": 1910.6399999999999, "text": " the unit model starts next up let me show you what is going on here so we have channel"}, {"start": 1910.6399999999999, "end": 1915.76, "text": " multiplication we're going to iterate through this array and then this is the interesting"}, {"start": 1915.76, "end": 1922.08, "text": " part we start adding residual blocks so the interesting part about residual blocks is"}, {"start": 1922.08, "end": 1927.8, "text": " actually only the forward function and i'll i'm gonna put a breakpoint here and later"}, {"start": 1927.8, "end": 1934.48, "text": " on i'm going to show you how this temporal fusion is is happening but for the time being"}, {"start": 1934.48, "end": 1939.44, "text": " let me quickly step through the constructor of the of the of the resnet block just quickly"}, {"start": 1939.44, "end": 1946.68, "text": " so we just stored the number of channels the the the number of channels for for the basically"}, {"start": 1946.68, "end": 1952.68, "text": " temporal vectors blah blah blah nothing fancy here let me see whether there is something"}, {"start": 1952.68, "end": 1958.8400000000001, "text": " interesting i need to focus on here so we have we specify these input layers we specify"}, {"start": 1958.8400000000001, "end": 1964.04, "text": 
" the embedding layers we'll see how these are used a bit later so bear with me here and"}, {"start": 1964.04, "end": 1969.52, "text": " we finally have out layers normalization blah blah blah clu is just activation unit there"}, {"start": 1969.52, "end": 1976.16, "text": " is like a zillion of these activation units and it's not even worth mentioning okay so"}, {"start": 1976.16, "end": 1981.28, "text": " basically all of the fun will happen later on during the forward prop and that's when"}, {"start": 1981.28, "end": 1986.56, "text": " i'm gonna kind of step into the resnet block and show you how the how the temporal information"}, {"start": 1986.56, "end": 1993.6, "text": " is mixed into the network okay so after that sometimes we'll be adding as as i said here"}, {"start": 1993.6, "end": 2000.08, "text": " so if this thing is if we are if we don't sample four times then we're gonna add the"}, {"start": 2000.08, "end": 2005.28, "text": " attention block which we haven't at this step so we're gonna skip this part for now and"}, {"start": 2005.28, "end": 2010.84, "text": " then we add as you can see here to input blocks we just add the layers which we accumulated"}, {"start": 2010.84, "end": 2016.08, "text": " during the loop and we just wrap it up into this time step embed sequential which is again"}, {"start": 2016.08, "end": 2021.6799999999998, "text": " that useful wrapper we saw a couple minutes ago that's it guys nothing fancy there so"}, {"start": 2021.6799999999998, "end": 2028.48, "text": " i'm gonna kind of skip to middle layer here okay let me ignore this thing here so here"}, {"start": 2028.48, "end": 2035.12, "text": " we are this middle block consists of a res block attention block and additional res block"}, {"start": 2035.12, "end": 2041.6399999999999, "text": " i can skip all of that let's continue here and finally the output blocks are literally"}, {"start": 2041.6399999999999, "end": 2047.36, "text": " what what we just saw in the input blocks so it's the same pretty much list of objects"}, {"start": 2047.36, "end": 2052.8399999999997, "text": " and here you can see by doing this we are reversing and we're creating a symmetric construction"}, {"start": 2052.8399999999997, "end": 2057.64, "text": " for the output blocks okay so because of that i'm just going to skip all of this and and"}, {"start": 2057.64, "end": 2064.56, "text": " end up here and here we add the normalization layer which is i think group normalization"}, {"start": 2064.56, "end": 2068.56, "text": " but again not that important to understand and finally we end up with a con layer and"}, {"start": 2068.56, "end": 2075.06, "text": " this zero module just zeros out the weights of these kernels i'm not sure why they're"}, {"start": 2075.06, "end": 2079.96, "text": " doing that if anyone knows that feel free to comment down below why do they initialize"}, {"start": 2079.96, "end": 2084.7999999999997, "text": " some of the layers with all zero weights i'm not completely sure cool again main takeaway"}, {"start": 2084.7999999999997, "end": 2091.12, "text": " from the from the unit model is like it has this very interesting type of a shape and"}, {"start": 2091.12, "end": 2095.12, "text": " with the input blocks middle blocks and our blocks and the most important thing i want"}, {"start": 2095.12, "end": 2099.88, "text": " you to remember here is they have this part where they are mixing in the temporal information"}, {"start": 2099.88, "end": 2104.3599999999997, "text": " we're gonna see that a bit later but 
just keep that in mind for now okay that was the"}, {"start": 2104.3599999999997, "end": 2110.88, "text": " creation of the model we do the same thing for diffusion so here i've set 50"}, {"start": 2110.88, "end": 2116.6, "text": " steps it was 4000 by default this is just so that i can train a bit quicker on my machine"}, {"start": 2116.6, "end": 2121.96, "text": " otherwise it's very slow to actually train this okay so we have learn sigma nothing"}, {"start": 2121.96, "end": 2126.0, "text": " special there noise schedule is going to be linear so the betas for the forward process"}, {"start": 2126.0, "end": 2132.88, "text": " are going to be basically sampled from the linear function instead of the cosine one"}, {"start": 2132.88, "end": 2138.16, "text": " we don't use the kl basically if we were to set this to true we'd be using the variational"}, {"start": 2138.16, "end": 2142.2799999999997, "text": " lower bound loss instead of this we're going to be using the hybrid loss so we'll see all"}, {"start": 2142.28, "end": 2146.96, "text": " of that a bit later we are not predicting the x start x start is just the starting image"}, {"start": 2146.96, "end": 2153.0, "text": " the original image from the data set and we will be basically rescaling the time steps all"}, {"start": 2153.0, "end": 2157.7200000000003, "text": " of this is not that important we'll see what those parameters are a bit later but they're"}, {"start": 2157.7200000000003, "end": 2164.1200000000003, "text": " not that interesting okay so here we first form the beta schedule so we"}, {"start": 2164.1200000000003, "end": 2170.0400000000004, "text": " pass the name of the schedule and the number of steps and this just returns those betas"}, {"start": 2170.04, "end": 2176.16, "text": " we saw in the paper in the beginning of the video so let me quickly show you the two"}, {"start": 2176.16, "end": 2181.48, "text": " schedules they support one is linear as mentioned it's just a simple linspace between"}, {"start": 2181.48, "end": 2187.8, "text": " the beta start and beta end and we linearly interpolate using the number of diffusion"}, {"start": 2187.8, "end": 2192.8, "text": " steps here for the cosine schedule you have this a bit more complex expression"}, {"start": 2192.8, "end": 2199.04, "text": " where you create those we saw these equations in the paper i'm going to kind of ignore those"}, {"start": 2199.04, "end": 2204.92, "text": " for now not that important okay so let's see which loss we pick we are going to use something"}, {"start": 2204.92, "end": 2212.32, "text": " called rescaled MSE mean squared error that's because we are using the hybrid loss we'll"}, {"start": 2212.32, "end": 2219.64, "text": " see how that plays out a bit later timestep respacing again this is important only when"}, {"start": 2219.64, "end": 2224.88, "text": " you want to reduce the number of steps during the sampling process which we will not be"}, {"start": 2224.88, "end": 2229.92, "text": " using so we can kind of ignore all of that again this is the function that does that"}, {"start": 2229.92, "end": 2236.8, "text": " type of logic and basically for our case we can skip this because we end up with let"}, {"start": 2236.8, "end": 2241.6, "text": " me show you how this all steps array looks like it's basically just np.arange so you end up"}, {"start": 2241.6, "end": 2248.96, "text": " with as you can see here zero through 49 because we have 50 steps and so there is no"}, {"start": 
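The linear schedule mentioned above boils down to a single linspace call; a sketch with the standard DDPM endpoints (the repo rescales these endpoints when the number of steps is not 1000):

    import numpy as np

    def linear_betas(num_diffusion_steps, beta_start=0.0001, beta_end=0.02):
        # "a simple linspace between beta_start and beta_end"
        return np.linspace(beta_start, beta_end, num_diffusion_steps, dtype=np.float64)

    timesteps = np.arange(50)   # the 0..49 "all steps" array from the walkthrough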
2248.96, "end": 2256.0, "text": " like sub sampling that will be doing in this training okay we pass the betas we'll be model"}, {"start": 2256.0, "end": 2260.52, "text": " mean type is going to be epsilons we'll be we'll be predicting epsilon instead of predicting"}, {"start": 2260.52, "end": 2265.04, "text": " x start so in some of the ablations they actually tried predicting the x start which means the"}, {"start": 2265.04, "end": 2270.32, "text": " original image so instead of trying to predict the noise that the that was added on top of"}, {"start": 2270.32, "end": 2274.48, "text": " the original image why not try and predict the original image but empirically they showed"}, {"start": 2274.48, "end": 2280.36, "text": " that it's better to predict the epsilon okay let's continue so what's the model variance"}, {"start": 2280.36, "end": 2287.28, "text": " type so because this is set to true that means we'll be using the learned range so we'll"}, {"start": 2287.28, "end": 2293.28, "text": " be learning the variances our last type is as we saw rescaled msc and that's pretty much"}, {"start": 2293.28, "end": 2299.72, "text": " it now let's step into this space diffusion basically the important piece here is the"}, {"start": 2299.72, "end": 2304.04, "text": " construction of this Gaussian diffusion object so let's see how that thing is going to look"}, {"start": 2304.04, "end": 2309.08, "text": " like so that's the Gaussian diffusion class that's important this is the file where all"}, {"start": 2309.08, "end": 2313.12, "text": " of the magic will happen basically this Gaussian diffusion dot pi that's the most important"}, {"start": 2313.12, "end": 2319.08, "text": " file in this whole code base so again we specify the mean it's going to be the epsilon the"}, {"start": 2319.08, "end": 2323.44, "text": " variance which is going to be learned and then the last type rescaled msc blah blah"}, {"start": 2323.44, "end": 2329.92, "text": " blah and next up we we form the we just basically convert betas into numpy array we do some"}, {"start": 2329.92, "end": 2335.0, "text": " error checking assert asserting that that's just a like a 1d array and that all of the"}, {"start": 2335.0, "end": 2342.08, "text": " betas are bigger than zero and smaller or equal than one that's all we grab the number"}, {"start": 2342.08, "end": 2347.12, "text": " of steps here and here we start creating those equations we saw the coefficients not the"}, {"start": 2347.12, "end": 2350.48, "text": " equations we saw in the paper now for this formula so i'm just going to put things side"}, {"start": 2350.48, "end": 2354.6800000000003, "text": " by side so it's going to be easier for you to understand what's going on okay let's start"}, {"start": 2354.68, "end": 2360.3199999999997, "text": " so here are the alphas the alphas we saw those coefficients here basically one minus betas"}, {"start": 2360.3199999999997, "end": 2364.2799999999997, "text": " that's how we form the alphas and then we have alpha bars which is just the products"}, {"start": 2364.2799999999997, "end": 2369.3999999999996, "text": " of alphas and you can see how we form those alphas it's called com prod like a cumulative"}, {"start": 2369.3999999999996, "end": 2375.3999999999996, "text": " product we basically just call this mp com prod and we end up with arrays so this thing"}, {"start": 2375.3999999999996, "end": 2380.56, "text": " is still going to be 50 the length of this thing is going to be as you can see here it's"}, {"start": 2380.56, "end": 
2389.32, "text": " going to be 50 that means that by indexing into this array we end up with up like a basically"}, {"start": 2389.32, "end": 2395.56, "text": " well we basically specify t of this product so it's not like we just calculated the whole"}, {"start": 2395.56, "end": 2402.44, "text": " product of all of the alphas we actually have well we actually have the array that contains"}, {"start": 2402.44, "end": 2410.04, "text": " alpha t bars for all of the t's which means zero through 49 okay next up we have these"}, {"start": 2410.04, "end": 2417.0, "text": " this this parameter called this coefficient called alphas com prod prev like basically"}, {"start": 2417.0, "end": 2423.96, "text": " what we do here is we what we do the right shifting so we add one as the first element"}, {"start": 2423.96, "end": 2432.4, "text": " and then we take the first n minus one elements and and pre like append them to this list"}, {"start": 2432.4, "end": 2438.6, "text": " and we also have this com prod next so where those are used is let me just see i'm fairly"}, {"start": 2438.6, "end": 2444.96, "text": " sure it's going to be used for this mean yeah it's going to be used for the posterior so"}, {"start": 2444.96, "end": 2452.36, "text": " it's basically used for this posterior like forward process so for the mean and for the"}, {"start": 2452.36, "end": 2457.16, "text": " as you can see here for the for the variance okay so we're going to see how we construct"}, {"start": 2457.16, "end": 2462.2799999999997, "text": " those in a second so we we now construct square root of alphas com prod so those are going"}, {"start": 2462.28, "end": 2470.0, "text": " to be used as you can see here so this let me kind of change the color here and you can"}, {"start": 2470.0, "end": 2477.5600000000004, "text": " see this is now the term we constructed here then we create square root one minus alphas"}, {"start": 2477.5600000000004, "end": 2483.48, "text": " com prod which is going to be let's see whether we have that one somewhere here i don't think"}, {"start": 2483.48, "end": 2492.0400000000004, "text": " so but like yeah it's going to be used somewhere later on okay we do the same thing for log"}, {"start": 2492.04, "end": 2498.48, "text": " square root blah blah blah there is a lot of these so this is square root reciprocal"}, {"start": 2498.48, "end": 2504.16, "text": " alphas com prod so that's square root of one over this okay so the square root so this"}, {"start": 2504.16, "end": 2511.2, "text": " expression here is actually used for the component in the loss here you can see it here as i"}, {"start": 2511.2, "end": 2517.32, "text": " said for each of these we'll have we'll basically calculate all of the coefficients we have"}, {"start": 2517.32, "end": 2524.2000000000003, "text": " the posterior variance which is again this expression here let me just kind of let me"}, {"start": 2524.2000000000003, "end": 2528.88, "text": " show you that this is indeed the case so we have betas here are the betas we have one"}, {"start": 2528.88, "end": 2534.56, "text": " minus alphas com prod prep so that's this expression here so that's the numerator of"}, {"start": 2534.56, "end": 2541.56, "text": " our expression here and then we have denominator which is one minus alphas com prod so that's"}, {"start": 2541.56, "end": 2546.84, "text": " it as i said they are now literally going through formulas in this paper and calculating"}, {"start": 2546.84, "end": 2552.48, "text": " all of the necessary coefficients we do the 
same thing here and then we have this i'm"}, {"start": 2552.48, "end": 2557.56, "text": " going to show you quickly these two so we have the posterior mean coefficient basically"}, {"start": 2557.56, "end": 2566.28, "text": " it's equal to um betas as you can see here and then time square root alphas com prod"}, {"start": 2566.28, "end": 2571.92, "text": " prep so that's this one square root of alphas previous uh and bar okay and then we divide"}, {"start": 2571.92, "end": 2578.7200000000003, "text": " that by one minus this expression here so hopefully i this this convinced you that indeed"}, {"start": 2578.7200000000003, "end": 2582.84, "text": " they're just going on here and calculating all of these coefficients and that's pretty"}, {"start": 2582.84, "end": 2587.04, "text": " much it i'm going to go back into the full screen here and continue with the code okay"}, {"start": 2587.04, "end": 2593.7400000000002, "text": " so this is again not interesting because this is only vital in the case of we doing a sub"}, {"start": 2593.7400000000002, "end": 2599.98, "text": " sampling during sampling which we will not be doing i using less steps during the sampling"}, {"start": 2599.98, "end": 2605.6, "text": " procedure which we will not be using so i can kind of safely ignore all of this and"}, {"start": 2605.6, "end": 2613.36, "text": " finally uh they use uh those betas to construct the uh this gossian diffusion model so this"}, {"start": 2613.36, "end": 2619.76, "text": " one was just used for for this process here for all practical purposes this init function"}, {"start": 2619.76, "end": 2624.56, "text": " here is going to construct the same gossian object we just saw so i'm going to skip i'm"}, {"start": 2624.56, "end": 2630.04, "text": " going to skip everything here and so that's that's it we end up with a gossian diffusion"}, {"start": 2630.04, "end": 2636.4, "text": " object okay guys so we have unit we saw how they literally go formula by formula and compute"}, {"start": 2636.4, "end": 2641.84, "text": " all of that uh inside of the constructor of the gossian diffusion object now let's go"}, {"start": 2641.84, "end": 2647.52, "text": " on further here let's exit all these functions we are back to our main function what they"}, {"start": 2647.52, "end": 2654.12, "text": " do is they just push the model to gpu in case in case you have gpu uh they just now uh create"}, {"start": 2654.12, "end": 2659.44, "text": " this schedule sampler so this one is fairly interesting i'm going to show you how this"}, {"start": 2659.44, "end": 2666.52, "text": " one looks like it basically um specifies how do we want during the training sample uh so"}, {"start": 2666.52, "end": 2674.24, "text": " which steps i which loss components those lt minus ones do we want to train and so what"}, {"start": 2674.24, "end": 2678.72, "text": " they use by default is just a uniform sampler which means all of the steps are equally likely"}, {"start": 2678.72, "end": 2683.2, "text": " but then they also have that other sample which i did mention where depending on the"}, {"start": 2683.2, "end": 2689.3599999999997, "text": " loss of particular lt minus one they'll maybe increase like upscale that weight so that"}, {"start": 2689.3599999999997, "end": 2695.2, "text": " we focus more on uh basically yeah learning that particular uh part of the diffusion process"}, {"start": 2695.2, "end": 2701.8399999999997, "text": " okay let me just kind of go through through here so here is the uniform sampler uh basically"}, {"start": 
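And the posterior mean coefficients being read off here, continuing the same NumPy sketch (so betas, alphas_cumprod and alphas_cumprod_prev are as defined above; the formula is DDPM Eq. 7):

    import numpy as np

    # mean of q(x_{t-1} | x_t, x_0) = coef1 * x_0 + coef2 * x_t
    posterior_mean_coef1 = betas * np.sqrt(alphas_cumprod_prev) / (1.0 - alphas_cumprod)
    posterior_mean_coef2 = (1.0 - alphas_cumprod_prev) * np.sqrt(1.0 - betas) / (1.0 - alphas_cumprod)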
2701.8399999999997, "end": 2706.3399999999997, "text": " here is how that sampler looks like so the weights of the uniform sample are obviously"}, {"start": 2706.3399999999997, "end": 2711.3999999999996, "text": " all ones so we have number of steps so we have like 50 ones and then during the sampling"}, {"start": 2711.4, "end": 2716.0, "text": " what happens is this sample function is called because as you can see here uniform sampler"}, {"start": 2716.0, "end": 2722.44, "text": " and here is from schedule sampler the subject here and here's what happens so we have weights"}, {"start": 2722.44, "end": 2726.4, "text": " so this is all ones and then we divide here so we normalize and actually have a probability"}, {"start": 2726.4, "end": 2731.48, "text": " distribution because the sum needs to be equal to one as you know uh and then we just call"}, {"start": 2731.48, "end": 2738.92, "text": " mp random choice so we basically do uh this is how we weight all of the uh 50 uh time"}, {"start": 2738.92, "end": 2745.76, "text": " steps and we just uh basically sample uh whatever the batch size is of those of those time steps"}, {"start": 2745.76, "end": 2750.12, "text": " so that's what's going to happen and then some conversion here uh nothing fancy so the"}, {"start": 2750.12, "end": 2754.7200000000003, "text": " interesting sampler is that second one which i'll just kind of briefly show you but i will"}, {"start": 2754.7200000000003, "end": 2759.7200000000003, "text": " not go into it maybe later if we have enough time so last second moment resampler so this"}, {"start": 2759.7200000000003, "end": 2765.84, "text": " is the one where they basically use the loss history to uh decide on how so you can see"}, {"start": 2765.84, "end": 2770.28, "text": " here depending on the loss you'll have bigger weights for for those components where the"}, {"start": 2770.28, "end": 2773.84, "text": " loss is bigger so that's roughly it but we'll not be using this one so i'm going to kind"}, {"start": 2773.84, "end": 2781.08, "text": " of skip it for the for the sake of time okay let's continue here we have our uniform sampler"}, {"start": 2781.08, "end": 2785.76, "text": " we now create the data set so basically i'll be just using cipher that doesn't need to"}, {"start": 2785.76, "end": 2791.88, "text": " be the case they also used image net 64 by 64 in the paper and you can use whatever data"}, {"start": 2791.88, "end": 2797.2400000000002, "text": " set you want and that's it now we have the training loop this is where the actual training"}, {"start": 2797.2400000000002, "end": 2801.8, "text": " is going to happen we pass in the model we put like that's the unit we pass in the diffusion"}, {"start": 2801.8, "end": 2807.6400000000003, "text": " um model on top of it we feed in the data uh batch size micro batch so this is going"}, {"start": 2807.6400000000003, "end": 2812.4, "text": " to be um like uh we're going to be chunking the actual batch into micro batches because"}, {"start": 2812.4, "end": 2818.12, "text": " we like as i said this is fairly memory intensive that's why they have all of these optimization"}, {"start": 2818.12, "end": 2823.3599999999997, "text": " methods such as checkpointing and micro batching to kind of cope with all of that uh excess"}, {"start": 2823.3599999999997, "end": 2829.2799999999997, "text": " memory okay ema so that's your exponential moving average because uh they'll actually"}, {"start": 2829.2799999999997, "end": 2837.52, "text": " be using um uh so ema type of weights 
for for sampling later on nothing fancy here logging"}, {"start": 2837.52, "end": 2842.56, "text": " saving resuming checkpointing blah blah fp 16 we don't care about mixed precision training"}, {"start": 2842.56, "end": 2847.64, "text": " i just want to focus on diffusion uh we pass in our uniform sampler blah blah blah some"}, {"start": 2847.64, "end": 2853.56, "text": " optimization details like weight decay for the atom w okay we can enter this uh train"}, {"start": 2853.56, "end": 2857.52, "text": " loop let's see what's going on here so we pass in all of these we kind of store them"}, {"start": 2857.52, "end": 2863.7599999999998, "text": " into the internal fields okay um we have all of this i'm going to kind of skip through"}, {"start": 2863.7599999999998, "end": 2870.04, "text": " all of this nothing interesting okay we have our model parameters we have the master parameters"}, {"start": 2870.04, "end": 2875.44, "text": " again this is a uh like a consequence of the training this in a distributed fashion of"}, {"start": 2875.44, "end": 2881.44, "text": " course multiple machines so we for our like for all practical purposes we don't care about"}, {"start": 2881.44, "end": 2888.32, "text": " the this we just have single set of parameters okay syncing again a consequence of of distributed"}, {"start": 2888.32, "end": 2894.48, "text": " training we don't care about it they create the atom w optimizer okay uh now they have"}, {"start": 2894.48, "end": 2901.2400000000002, "text": " for each of the ema uh rates specified uh they create uh like a deep copy of the parameters"}, {"start": 2901.24, "end": 2907.24, "text": " so because we only have one ema rate we'll basically just have a single um like copy"}, {"start": 2907.24, "end": 2912.56, "text": " of the parameters again not that important for you to understand let's go on here this"}, {"start": 2912.56, "end": 2917.6, "text": " is distributed uh data parallel a wrapper in pytorch again we'll not be using that"}, {"start": 2917.6, "end": 2922.8399999999997, "text": " so for all practical purposes we can ignore it let's jump into the actual meat of the"}, {"start": 2922.8399999999997, "end": 2930.7599999999998, "text": " code of the training code so we first go on to sample a single batch of images and potentially"}, {"start": 2930.76, "end": 2936.48, "text": " the classic the classes that correspond to those images uh let me quickly walk you through"}, {"start": 2936.48, "end": 2944.0400000000004, "text": " how this load data looks like basically this is gonna collect all of the um images all"}, {"start": 2944.0400000000004, "end": 2949.5200000000004, "text": " of the paths from my uh data set so i did go on and kind of download those as i told"}, {"start": 2949.5200000000004, "end": 2954.7200000000003, "text": " you so you can see them here and if i kind of step over that uh it's gonna collect all"}, {"start": 2954.72, "end": 2963.2, "text": " of my images and um let me now show you how it looks like whoops if i copy paste the all"}, {"start": 2963.2, "end": 2970.7599999999998, "text": " files and i print the first let's say two paths you can see 006 png and 13 png and those"}, {"start": 2970.7599999999998, "end": 2977.16, "text": " are exactly those are exactly these two first images here so here is that one well it's"}, {"start": 2977.16, "end": 2982.68, "text": " super small so you will not see the cipher after all okay um okay then they go on to"}, {"start": 2982.68, "end": 2989.44, "text": " form this uh image data cell uh where 
just they just do some resizing cropping but nothing"}, {"start": 2989.44, "end": 2996.0, "text": " too interesting and here we have the data uh loader with batch sizes of 128 and we finally"}, {"start": 2996.0, "end": 3003.96, "text": " yield the batch from our data set okay that ends up giving us a batch of size i guess"}, {"start": 3003.96, "end": 3015.16, "text": " it's gonna be i expect it's gonna be 128 uh 364 64 because it's cipher it's rbg rgb images"}, {"start": 3015.16, "end": 3021.12, "text": " and 128 is the batch size so let's see whether that's the case uh indeed it is the case and"}, {"start": 3021.12, "end": 3024.52, "text": " conditioning we don't have anything so it's gonna be just empty dictionary as you can"}, {"start": 3024.52, "end": 3029.56, "text": " see here okay cool so here's the first step let's now this is the this is where all the"}, {"start": 3029.56, "end": 3033.32, "text": " magic happens i'm gonna ignore everything afterwards because it's just kind of it's"}, {"start": 3033.32, "end": 3037.56, "text": " just your common machine learning boilerplate code so i'm gonna ignore all of that so we"}, {"start": 3037.56, "end": 3044.2400000000002, "text": " have a forward forward backward call here okay so we just use zero gradients of our"}, {"start": 3044.2400000000002, "end": 3049.56, "text": " unit we want to clean those gradients before we recompute them again and then do the update"}, {"start": 3049.56, "end": 3055.6000000000004, "text": " okay so here we load the first micro batch so you can see we just sub sample this batch"}, {"start": 3055.6, "end": 3063.2799999999997, "text": " and we end up with like two three sixty four sixty four because our micro batch dimension is two and we"}, {"start": 3063.2799999999997, "end": 3067.2799999999997, "text": " did the same thing for conditioning which because it's an empty dictionary we don't really care"}, {"start": 3067.92, "end": 3072.88, "text": " we just check whether this is a last patch which it's not because we just started the loop and"}, {"start": 3073.7599999999998, "end": 3079.92, "text": " now this is where we do the uniform sampling of the diffusion process okay so this is this is the"}, {"start": 3079.92, "end": 3088.96, "text": " idea we we now sample t's so the time steps for each of the images in our batch okay so let's do"}, {"start": 3088.96, "end": 3096.96, "text": " that so we we we do that and we end up with two random t's so 44 and three so that means we'll"}, {"start": 3096.96, "end": 3103.6, "text": " take the first image we'll basically add noise 44 times in practice we'll just have a single step"}, {"start": 3103.6, "end": 3108.88, "text": " because of that nice property we saw in the paper and we'll get there and then this second one just"}, {"start": 3108.88, "end": 3113.44, "text": " has three steps of noise and then we try and predict back the noise so that's going to be the"}, {"start": 3113.44, "end": 3119.36, "text": " goal okay so this is the main function in this training code this training losses function we'll"}, {"start": 3119.36, "end": 3124.7200000000003, "text": " see it in a moment we just kind of wrap it up using the func func tools partial so that we don't"}, {"start": 3124.7200000000003, "end": 3131.52, "text": " have to pass these parameters every single time so just kind of convenience wrapper there and again"}, {"start": 3131.52, "end": 3137.2000000000003, "text": " we just you can ignore all of this because that's just distributed code nothing interesting let's"}, 
{"start": 3137.2, "end": 3142.3999999999996, "text": " focus on the compute losses this is the most important function here okay so what's going on"}, {"start": 3143.3599999999997, "end": 3151.6, "text": " first of all we generate noise that has the same shape as our input image uh our input micro batch"}, {"start": 3151.6, "end": 3159.2799999999997, "text": " so because x star is again two three sixty four sixty four we're going to end up with basically"}, {"start": 3159.2799999999997, "end": 3166.08, "text": " normal gossian tensor of the same shape okay so that's again our noise of the same shape as the"}, {"start": 3166.08, "end": 3171.52, "text": " input images and then we do the q sample so let me remind you what that thing is okay so here is"}, {"start": 3171.52, "end": 3177.12, "text": " a formula that we are going to use uh we are basically going to do this computation here"}, {"start": 3177.12, "end": 3181.84, "text": " actually this computation i showed you here so let's see that that's indeed the case so let me"}, {"start": 3181.84, "end": 3187.04, "text": " just kind of see whether i have a break point here i do have a break point so here we are we are in"}, {"start": 3187.04, "end": 3194.56, "text": " the in that function you can see that uh this is everything we do so we have this extract into"}, {"start": 3194.56, "end": 3199.84, "text": " tensor we'll see what that does but we have this square root alpha scum prod which is this part here"}, {"start": 3199.84, "end": 3204.4, "text": " let me just change the color so okay so we have this part here"}, {"start": 3206.72, "end": 3212.24, "text": " and then we multiply that element wise with x start which is x here which is this thing here"}, {"start": 3212.24, "end": 3220.0, "text": " okay and then we add a square root one minus alpha scum prod which is this part here"}, {"start": 3220.0, "end": 3229.12, "text": " we add up this and we multiply it with with noise as you can see here so we literally are just"}, {"start": 3229.12, "end": 3236.0, "text": " computing formulas from the paper nothing fancy there okay now i'm gonna one time just show you"}, {"start": 3236.0, "end": 3243.92, "text": " how this uh extract into tensor function looks like uh for the sake of your um i guess understanding"}, {"start": 3243.92, "end": 3251.28, "text": " of the code so i'm gonna kind of add a break point there so what we do is the following so we uh take"}, {"start": 3251.28, "end": 3258.56, "text": " that um so the the array we have like the let's say let's say the alpha t bar or whatnot and we"}, {"start": 3258.56, "end": 3265.76, "text": " basically just extract using the time steps so time steps are if you recall like we had um whoops"}, {"start": 3265.76, "end": 3274.4, "text": " whoops time steps we end up with 44.3 so we're gonna end up taking the alpha 44 bar and we're"}, {"start": 3274.4, "end": 3282.5600000000004, "text": " gonna end up taking the alpha 3 bar which are gonna contain uh respectively 44 uh terms uh the"}, {"start": 3282.5600000000004, "end": 3288.6400000000003, "text": " product of 44 terms and the product of three terms and then the only thing we do is literally"}, {"start": 3288.6400000000003, "end": 3294.4, "text": " we add these dummy dimensions as many times as possible so that we have the same dimensionality"}, {"start": 3294.4, "end": 3301.04, "text": " as the image we are doing element wise multiplication with and then we just expand here"}, {"start": 3301.6, "end": 3308.88, "text": " uh the the vector so we 
we just want to have the same shape of this of this uh yeah so why are we"}, {"start": 3308.88, "end": 3314.08, "text": " doing this well because we cannot multiply a scalar with the tensor and because of that we're just"}, {"start": 3314.08, "end": 3318.8, "text": " basically just copy pasting and broadcasting this this scalar so we end up with a tensor that"}, {"start": 3318.8, "end": 3325.6800000000003, "text": " contains whose all elements contain this particular alpha 44 bar whatnot okay hopefully that was clear"}, {"start": 3325.6800000000003, "end": 3334.0800000000004, "text": " enough and now i'm gonna kind of uh let me first uh ignore this i'm gonna not will not be stepping"}, {"start": 3334.0800000000004, "end": 3340.6400000000003, "text": " into this function anymore and that's it that's it guys we have we have the x t so we have our"}, {"start": 3340.6400000000003, "end": 3347.92, "text": " noisy version of the image right here okay let's see what the next steps are first of all uh because"}, {"start": 3347.92, "end": 3354.4, "text": " our loss is uh gonna be hybrid we're gonna ignore this part and we'll see that we'll actually be"}, {"start": 3354.4, "end": 3359.6, "text": " computing the same thing a bit later down below because hybrid also contains this variational"}, {"start": 3359.6, "end": 3366.16, "text": " lower bound loss as well okay so here we are we have the our loss type is if you recall rescale"}, {"start": 3366.16, "end": 3373.52, "text": " msc and we end up now here so this step is supposed to output the epsilon so we need to learn to"}, {"start": 3373.52, "end": 3381.44, "text": " predict the noise that if you recall uh we basically took that noise and we now we basically using that"}, {"start": 3381.44, "end": 3387.12, "text": " noise we form the x t's now we're trying to predict especially like particularly this noise so this is"}, {"start": 3387.12, "end": 3391.04, "text": " the term we'll be trying to learn how to predict those are the green dots i showed you in the one"}, {"start": 3391.04, "end": 3394.96, "text": " on paper so that's going to be the first three channels the last three channels are going to be"}, {"start": 3394.96, "end": 3402.88, "text": " about the variance and we'll see those uh uh in a bit uh but for now let me show you how this forward"}, {"start": 3402.88, "end": 3408.48, "text": " prop is going to work so this is again unit we're going to do something to the time steps so we're"}, {"start": 3408.48, "end": 3414.32, "text": " going to somehow merge them into this uh together with this uh with the image representation and let"}, {"start": 3414.32, "end": 3419.44, "text": " me quickly show you why we are doing this why are we passing x and t so this is the particular"}, {"start": 3419.44, "end": 3426.08, "text": " expression we are now dealing with here we have our unit so this epsilon t in practice so let me"}, {"start": 3426.08, "end": 3434.88, "text": " again change the color so this thing here is our unit we have this expression is the x t so that's"}, {"start": 3434.88, "end": 3443.2, "text": " the thing we just calculated so that's the x t again x t and we also pass t which are the time"}, {"start": 3443.2, "end": 3447.68, "text": " steps and that's why we have this particular expression that's why we are computing by"}, {"start": 3447.68, "end": 3452.72, "text": " executing this line of code here okay let me let me go back here and let me step uh through this"}, {"start": 3452.72, "end": 3461.8399999999997, "text": " particular 
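A sketch of that broadcasting helper, modeled on the description above (arr is assumed to be a 1-D NumPy array of per-timestep coefficients):

    import torch

    def extract_into_tensor(arr, timesteps, broadcast_shape):
        # pick arr[t] for every element of the batch, then add trailing singleton
        # dims so the result broadcasts against an image-shaped tensor
        res = torch.from_numpy(arr).to(device=timesteps.device)[timesteps].float()
        while res.dim() < len(broadcast_shape):
            res = res[..., None]
        return res.expand(broadcast_shape)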
thingy uh we can ignore this rep model um basically it serves the purpose of again"}, {"start": 3462.3999999999996, "end": 3468.0, "text": " only if you have the the sub sampling during this the your sampling procedure which will not be"}, {"start": 3468.0, "end": 3473.7599999999998, "text": " doing so none of this really matters and then we do some rescaling nothing that's super interesting"}, {"start": 3473.7599999999998, "end": 3479.12, "text": " just some legacy stuff so that they are comparable with the ddpm original paper etc so here is the"}, {"start": 3479.12, "end": 3486.08, "text": " actual unit forward pass so we have the new ts so this is just kind of some rescaled versions"}, {"start": 3486.64, "end": 3494.4, "text": " but still scalars of our original 44 and 3 if we recall these time steps here and now let's let's"}, {"start": 3494.4, "end": 3499.52, "text": " pass that and this is just empty like the key word arguments are just empty we don't have anything"}, {"start": 3499.52, "end": 3504.24, "text": " there so we just pass the time steps and we pass the images so let me show you the shape here"}, {"start": 3504.24, "end": 3511.8399999999997, "text": " shape here is again 2364 64 okay now let's step through the forward pass of the uh unit model"}, {"start": 3513.2799999999997, "end": 3519.52, "text": " here is where the magic happens first of all we have this time step embedding which is going to"}, {"start": 3520.8799999999997, "end": 3527.6, "text": " map the scalars into certain uh vectors which are just like heuristics basically we're just using"}, {"start": 3527.6, "end": 3532.16, "text": " some sinusoids same thing as in the original transformer paper so let's see how that looks"}, {"start": 3532.16, "end": 3537.92, "text": " like okay so here it is again not gonna dig into the details you can see a bunch of sinusoids"}, {"start": 3537.92, "end": 3545.44, "text": " cosines what ultimately matters is that we end up instead of with scalars we end up with 228"}, {"start": 3545.44, "end": 3550.08, "text": " so we end up with two vectors so now we can we can kind of work with vectors in neural networks"}, {"start": 3550.08, "end": 3557.8399999999997, "text": " it's kind of hard to work with scalars after that okay i stepped into something uh whatever"}, {"start": 3557.84, "end": 3563.04, "text": " okay so now we have this it was actually the the activation function of this time"}, {"start": 3563.04, "end": 3570.2400000000002, "text": " embed and if you recall time embed is this inverse bottleneck of a of a of a mlp so let me just find"}, {"start": 3570.2400000000002, "end": 3577.04, "text": " that thing quickly so time embed where are you here it is it's just a combination of two linear"}, {"start": 3577.04, "end": 3585.2000000000003, "text": " layers uh whereas the this uh inner most uh like layer has forex the uh dimensionality of the input"}, {"start": 3585.2, "end": 3591.7599999999998, "text": " and output layers hence i call it inverse bottleneck and this is just like this times four is stuck with"}, {"start": 3591.7599999999998, "end": 3598.7999999999997, "text": " us since 2017 the original paper people just keep using that again just a simple transformation"}, {"start": 3598.7999999999997, "end": 3604.64, "text": " and we end up with some we'll end up with some uh vector of different dimensionality so let me"}, {"start": 3604.64, "end": 3610.64, "text": " show you that let's kind of step over here we end up with what we end up embedding vector here being"}, 
{"start": 3610.64, "end": 3616.7999999999997, "text": " two five twelve okay so that's it that's our time information now let's let's insert that time"}, {"start": 3616.7999999999997, "end": 3621.7599999999998, "text": " information somehow so we go through the input blocks remember the first block is going to be"}, {"start": 3621.7599999999998, "end": 3625.8399999999997, "text": " like the com layer and then we're going to have the rest blocks so let's see the the first thing"}, {"start": 3625.8399999999997, "end": 3629.6, "text": " so the first thing nothing nothing super interesting because it's just a com layer"}, {"start": 3630.4, "end": 3637.68, "text": " but if i have hopefully i've added a break point to the rest block let's just find the rest block here"}, {"start": 3637.68, "end": 3644.7999999999997, "text": " uh where are you okay here it is here is the residual block here is where the magic of of"}, {"start": 3644.7999999999997, "end": 3652.3199999999997, "text": " merging happens let me show you this so okay here we are we do some processing on the image that's"}, {"start": 3652.3199999999997, "end": 3658.16, "text": " not that important so these in layers are just like normalization some convolutions whatnot"}, {"start": 3658.64, "end": 3663.68, "text": " and nothing interesting there next up we take our embedding vectors which are again two five twelve"}, {"start": 3663.68, "end": 3671.2, "text": " right so that's going to be two five twelve and we pass them through the embed layers so let's see"}, {"start": 3671.2, "end": 3677.2799999999997, "text": " what those are again that's just like a linear layer uh that does some additional processing on"}, {"start": 3677.2799999999997, "end": 3684.96, "text": " top of those representations nothing super vital let me show you the shape okay two two fifty six"}, {"start": 3685.68, "end": 3692.48, "text": " and now what we do is we just add the dummy dimensions until we have the same shape as this"}, {"start": 3692.48, "end": 3700.08, "text": " as these image features h so h has the shape of as you can see 228 64 64 because of that we're"}, {"start": 3700.08, "end": 3706.56, "text": " going to keep on adding dummy dimensions to to our temporal tensor until we end up with the same"}, {"start": 3708.32, "end": 3714.96, "text": " uh number of dimensions you can see here and now because these two are one one we'll basically be"}, {"start": 3714.96, "end": 3719.76, "text": " broadcasting those so copy pasting the temporal vectors before we combine them with the image"}, {"start": 3719.76, "end": 3724.5600000000004, "text": " features so here's where the magic happens so this is the parameter we saw before the boolean use"}, {"start": 3724.5600000000004, "end": 3730.1600000000003, "text": " scale shift norm that's one option how we can do it you can just also just simply add you can simply"}, {"start": 3730.1600000000003, "end": 3735.84, "text": " add the temporal information to the image features and that's how you can condition the model okay"}, {"start": 3736.96, "end": 3742.88, "text": " but they i guess empirically uh found out that this approach works a bit better instead of just"}, {"start": 3742.88, "end": 3749.2000000000003, "text": " adding up the temporal representations to the image features instead of that let's kind of chunk"}, {"start": 3749.2, "end": 3756.3199999999997, "text": " this so we start with two 256 one one by doing this chunking along the first dimension that means"}, {"start": 3756.3199999999997, "end": 3761.52, 
"text": " we're going to split this 256 into 128 okay so let me kind of step over there let me show you the"}, {"start": 3761.52, "end": 3768.3999999999996, "text": " scale and shift are going to be 228 11 right the same thing for shift and then this is how they"}, {"start": 3768.3999999999996, "end": 3773.2, "text": " kind of combine the image features with the temporal features one plus scale and then we"}, {"start": 3773.2, "end": 3778.0, "text": " add a shift and we just have some normalization for the image features beforehand and then we do some"}, {"start": 3778.0, "end": 3784.56, "text": " um additional processing that's it guys that's everything there is to to to how this unit works"}, {"start": 3784.56, "end": 3789.84, "text": " i think this is like everything you really need to know i'm gonna remove that breakpoint and"}, {"start": 3789.84, "end": 3797.12, "text": " continue on here uh and that's pretty much it and now we're gonna keep on doing this input blocks"}, {"start": 3797.12, "end": 3802.32, "text": " middle blocks apple blocks we're gonna keep on adding and merging the the temporal information"}, {"start": 3802.32, "end": 3809.44, "text": " and that's it i'm gonna just skip over all of this okay so we've done the for a prop um here we are"}, {"start": 3809.44, "end": 3816.6400000000003, "text": " we are exiting the uh the function so we end up with let's see the shape should be as i said we"}, {"start": 3816.6400000000003, "end": 3827.36, "text": " should have i guess um 264 64 that's the prediction let's see whether that's indeed the case so 264"}, {"start": 3827.36, "end": 3835.6800000000003, "text": " 64 indeed why is that well because basically um uh we we have six channels because the first three"}, {"start": 3835.6800000000003, "end": 3840.6400000000003, "text": " channels again are epsilon which are the noise which we're trying to predict and the next three"}, {"start": 3840.6400000000003, "end": 3848.4, "text": " channels are the variance so let's see how that's gonna be used here um later on so because we have"}, {"start": 3848.4, "end": 3855.36, "text": " uh learned range we're gonna enter this part of the uh this branch here uh we just extract some"}, {"start": 3855.36, "end": 3864.2400000000002, "text": " dimensions so xt is again that's our tensor of noise images 23 64 64 so we're gonna split the"}, {"start": 3864.2400000000002, "end": 3870.4, "text": " output uh into two groups each of with three channels that's what i just explained so model"}, {"start": 3870.4, "end": 3874.88, "text": " output is gonna have the epsilon so that's gonna contain the noise whereas this is gonna contain"}, {"start": 3874.88, "end": 3881.44, "text": " the variance and let me now show you the shapes basically it's gonna be 23 64 64 whereas previously"}, {"start": 3881.44, "end": 3890.88, "text": " we had 26 64 64 okay now what we do here is uh basically we take the epsilon we detach it from"}, {"start": 3890.88, "end": 3899.12, "text": " the computational pie tort graph which means we'll not be updating uh the the the epsilon"}, {"start": 3899.12, "end": 3904.16, "text": " whereas we pass the variance like this and why is that well if you recall from the paper and"}, {"start": 3904.16, "end": 3910.32, "text": " i'll show you that in a second as well we basically just uh in the hybrid loss we use the um"}, {"start": 3910.32, "end": 3917.28, "text": " um this variational uh bound loss to train the variances whereas we use the simplified"}, {"start": 3917.28, "end": 3922.8, 
"text": " a loss objective to train the mean or in this case to train the epsilon which is equivalent"}, {"start": 3922.8, "end": 3927.28, "text": " because we have that different representations parameterizations so let me show you again the"}, {"start": 3927.28, "end": 3933.76, "text": " paper and compare this side by side okay so here it is basically as i mentioned we have the hybrid"}, {"start": 3933.76, "end": 3940.7200000000003, "text": " loss consists out of a simple and so the variational lower bound objective and they mentioned here"}, {"start": 3940.7200000000003, "end": 3947.5200000000004, "text": " along this same line of reasoning we also apply a stop gradient to the mu of theta or equivalently"}, {"start": 3947.5200000000004, "end": 3953.5200000000004, "text": " to the epsilon theta output for the lvlb term so that means this term is gonna we're gonna freeze"}, {"start": 3953.5200000000004, "end": 3957.76, "text": " we're gonna detach from the computational graph that's why we are doing this this part here okay"}, {"start": 3957.76, "end": 3963.6000000000004, "text": " now let's go on and compute the actual uh lvlb terms so we are now computing literally this"}, {"start": 3963.6000000000004, "end": 3968.32, "text": " we're computing this thing here right that's what we are currently doing and later you'll see in a"}, {"start": 3968.32, "end": 3973.76, "text": " couple seconds we'll be computing the l simple again just executing the formulas from the paper"}, {"start": 3973.76, "end": 3980.0, "text": " that's it do let me know whether this side by side comparison between formulas and code helps you or"}, {"start": 3980.0, "end": 3986.0, "text": " not because these videos are super long and take me a lot of time to to create them so any feedback"}, {"start": 3986.0, "end": 3993.52, "text": " is very much appreciated continuing on here we do that we concatenate those two and then we call"}, {"start": 3993.52, "end": 4000.24, "text": " this uh vb terms bbd so let me just kind of uh see where i have a breakpoint there yes i do"}, {"start": 4001.76, "end": 4010.16, "text": " let's go back to the code okay so what we do here is we pass this dummy model why dummy because"}, {"start": 4010.16, "end": 4016.3199999999997, "text": " because the this function here will later be calling this model but we will not actually be"}, {"start": 4016.3199999999997, "end": 4021.6, "text": " doing a for a prop through the unit as you can see this is how we define the model you basically"}, {"start": 4021.6, "end": 4027.52, "text": " see this is just a like a dummy lambda function that just returns uh so no matter what you pass"}, {"start": 4027.52, "end": 4032.72, "text": " in it's just going to return r which is the frozen out which is this thing here so we'll see some"}, {"start": 4033.6, "end": 4037.68, "text": " model like we'll see a for a prop but it's not going to be a for a prop it's just going to be"}, {"start": 4037.68, "end": 4043.7599999999998, "text": " this dummy return of of what we already computed just keep that in mind okay so what we pass we"}, {"start": 4043.7599999999998, "end": 4049.6, "text": " pass the x start which are the original images we pass the x t which are the images that were noised"}, {"start": 4049.6, "end": 4057.3599999999997, "text": " using uh like t number of steps which we have here and now let's kind of step into this uh"}, {"start": 4057.3599999999997, "end": 4064.96, "text": " function and see how it's computed so what we do is we compute the um posterior 
mean and variance"}, {"start": 4064.96, "end": 4072.7200000000003, "text": " here by doing that we basically compute the um the forward process posterior and we're going to"}, {"start": 4073.52, "end": 4080.96, "text": " basically do a kl divergence as you can see here between that and between our learned reverse"}, {"start": 4080.96, "end": 4084.8, "text": " process again let me show you the paper to make this a bit more concrete and then we're going to"}, {"start": 4084.8, "end": 4089.92, "text": " dig into the code okay guys so here is the expression we are calculating it's nothing more"}, {"start": 4089.92, "end": 4097.12, "text": " than this so we have the as i said posterior here it is like of the of the uh forward process"}, {"start": 4097.84, "end": 4106.0, "text": " that's this thing here here we have the learned reverse process so this thing here corresponds to"}, {"start": 4106.0, "end": 4114.0, "text": " this lines here the q part the posterior of the of the for process corresponds to this line here"}, {"start": 4114.0, "end": 4119.84, "text": " q posterior mean variance and then we have the kl divergence which is just this part here again"}, {"start": 4119.84, "end": 4125.28, "text": " this is just this part here kl of those two distributions and those are the l t minus one"}, {"start": 4125.28, "end": 4132.48, "text": " terms so the variational lower bound terms that's it nothing nothing too fancy now let's kind of dig"}, {"start": 4132.48, "end": 4139.12, "text": " and see how that's going to be computed so we pass the x start x t and t which are the original"}, {"start": 4139.12, "end": 4148.16, "text": " image noist image and t the time steps so let me kind of start uh enter this function again so it"}, {"start": 4148.16, "end": 4153.76, "text": " even says here what it's computing you can see here so it's very nice to add this type of comments"}, {"start": 4153.76, "end": 4161.12, "text": " that helps a lot to be honest so here it is how we compute this is we take this mean coefficient"}, {"start": 4161.12, "end": 4167.04, "text": " one we multiply it with x start we take the mean coefficient two we multiply it with x t and that's"}, {"start": 4167.04, "end": 4171.84, "text": " it we end up with a posterior mean again let me show you this side by side with the paper here it"}, {"start": 4171.84, "end": 4178.96, "text": " is guys so we saw this already actually so we have again posterior mean coefficient one is just this"}, {"start": 4178.96, "end": 4184.24, "text": " thing here so we actually computed that if you recall in the constructor of the Gaussian diffusion"}, {"start": 4185.04, "end": 4190.8, "text": " object so this is the the first coefficient here then we multiply that with x zero as you can see"}, {"start": 4190.8, "end": 4196.24, "text": " here x start just a different notation and then we have posterior mean coefficient two so that's"}, {"start": 4196.24, "end": 4202.16, "text": " this part here and we multiply with x t so that's this part here so that's it that's how this formula"}, {"start": 4202.16, "end": 4209.76, "text": " corresponds to this line here let me now continue here let me step over all of this we end up with"}, {"start": 4209.76, "end": 4217.12, "text": " the posterior mean we do the same thing with a posterior uh variance so that's just those betas"}, {"start": 4217.12, "end": 4222.88, "text": " again here are those betas that's what we just computed here that's the posterior variance and"}, {"start": 4222.88, "end": 4228.24, "text": " the 
posterior log variance i'm not sure how this one is going to be used we'll see that a bit later"}, {"start": 4228.24, "end": 4235.6, "text": " hopefully or maybe not okay just some assertions and that's it we computed uh the uh basically"}, {"start": 4235.6, "end": 4240.16, "text": " this distribution we because it's a Gaussian we can just return the mean and the variance and that's"}, {"start": 4240.16, "end": 4247.12, "text": " it that's perfectly describing the Gaussian okay now we do the p mean variance so we're now doing"}, {"start": 4247.12, "end": 4253.2, "text": " um that other step so again that's that's this part here this is what we're trying to compute at the"}, {"start": 4253.2, "end": 4259.599999999999, "text": " moment here let's see how that's gonna look like uh let me see whether i have a break point inside"}, {"start": 4259.599999999999, "end": 4266.5599999999995, "text": " of here yes i do so we passed uh again model remember here model is just uh dummy it's just"}, {"start": 4266.5599999999995, "end": 4271.84, "text": " going to return what we already computed beforehand so the epsilon and the variances and we pass in x"}, {"start": 4271.84, "end": 4281.04, "text": " t and t so that's the the noist image and uh time steps okay let's enter here again uh it says here"}, {"start": 4281.04, "end": 4289.12, "text": " we are basically computing this we are computing the um p mean variance okay so let me step over"}, {"start": 4289.12, "end": 4296.24, "text": " this nothing fancy there again uh this does nothing this just returns this just returns"}, {"start": 4296.24, "end": 4299.68, "text": " we're going to see how we are not going to enter the for function we're just going to return here"}, {"start": 4299.68, "end": 4307.360000000001, "text": " okay so we just end up with the uh tensor we already previously computed so it's 264 64 let's"}, {"start": 4307.360000000001, "end": 4315.360000000001, "text": " see how how this is going to be computed so because we are actually our model var type so we are we"}, {"start": 4315.360000000001, "end": 4319.76, "text": " have the we as i said we are we are learning the variances that's why we enter this branch here"}, {"start": 4319.76, "end": 4329.360000000001, "text": " we split the model output again into the epsilon and variances and now because we are um learned"}, {"start": 4329.360000000001, "end": 4334.400000000001, "text": " range and not learned we enter the else branch and as you can see here we're now calculated the"}, {"start": 4334.400000000001, "end": 4339.6, "text": " equation 15 from the improved ddpm so uh paper okay let me show you that side by side okay so"}, {"start": 4339.6, "end": 4345.2, "text": " here it is guys here's what we are computing this equation 15 so when i said we have variance i was"}, {"start": 4345.2, "end": 4349.92, "text": " we have variance i was kind of lying because that's not the case that's just a component"}, {"start": 4349.92, "end": 4355.04, "text": " that when you combine it like this then you end up with the variance okay so let me let me show you"}, {"start": 4355.04, "end": 4362.16, "text": " what they do here let me kind of remove this uh panel here uh okay so they have the mean log so"}, {"start": 4362.16, "end": 4368.5599999999995, "text": " they have the posterior log variance clipped so that's going to be this part here so that's where"}, {"start": 4368.56, "end": 4375.120000000001, "text": " the log part comes into play so this is the the part that they compute here it says the 
mean log"}, {"start": 4375.120000000001, "end": 4378.96, "text": " and then they compute the max log which is this part here"}, {"start": 4381.4400000000005, "end": 4386.400000000001, "text": " actually because this is betas it's why it's the other way around but okay doesn't matter that much"}, {"start": 4386.400000000001, "end": 4393.92, "text": " you get the point uh finally we have this um just normalization of the output the output is minus one"}, {"start": 4393.92, "end": 4399.36, "text": " one they add plus one so that's zero two and then divide dividing by two we bring it into the zero"}, {"start": 4399.36, "end": 4405.92, "text": " one uh range i guess and then we just do literally this equation here so you can see v times this"}, {"start": 4405.92, "end": 4412.64, "text": " plus this blah blah same equation so this line is literally this equation here and then we just add"}, {"start": 4412.64, "end": 4418.24, "text": " the additionally this exponent part so that's this part and we've successfully computed equation 15"}, {"start": 4418.24, "end": 4424.96, "text": " okay let's continue on here this is just some clipping we can ignore it for now let me zoom in"}, {"start": 4424.96, "end": 4431.679999999999, "text": " here uh we are not predicting previous x we are predicting epsilon that's why we enter here"}, {"start": 4432.88, "end": 4440.16, "text": " and we enter this branch so we first want to predict the the x start uh given the um"}, {"start": 4440.16, "end": 4448.96, "text": " um epsilon so the noise given the currently noisy images and time step we want to predict"}, {"start": 4448.96, "end": 4454.88, "text": " the x here i will see why we're using that in a second so let me kind of step into this and show"}, {"start": 4454.88, "end": 4461.76, "text": " you this part so here we are we're predicting x star from epsilon uh here is how we do that"}, {"start": 4463.44, "end": 4467.84, "text": " let me again have the equation on the side so we'll be able to understand what's going on here"}, {"start": 4467.84, "end": 4473.28, "text": " okay guys this is the equation we are computing so we recall that xt is computed like this"}, {"start": 4474.4800000000005, "end": 4480.400000000001, "text": " if we just rearrange the terms and we compute x zero x zero turns out to be this expression here"}, {"start": 4481.4400000000005, "end": 4486.64, "text": " and that's what we are currently computing here okay so here are the square root reciprocal"}, {"start": 4486.64, "end": 4492.64, "text": " alphas come prods that's literally this square root blah blah this thing here then we multiply"}, {"start": 4492.64, "end": 4501.52, "text": " that with xt you can see xt here then we have minus uh square root reciprocal minus alpha blah blah"}, {"start": 4501.52, "end": 4507.52, "text": " so that's literally this term here you can just read it kind of word by word and understand that"}, {"start": 4507.52, "end": 4513.04, "text": " that's this term here and we multiply that by epsilon and that's it so that's how we calculate"}, {"start": 4513.04, "end": 4525.12, "text": " the x zero or x start uh let's go back here and now we're going to use that to find the model mean"}, {"start": 4525.12, "end": 4530.48, "text": " and by model mean i literally mean this expression here no pun intended so this expression here okay"}, {"start": 4531.36, "end": 4539.12, "text": " so let's see how this is going to work uh posterior mean variance i think this is well yeah this is"}, {"start": 4539.12, "end": 
4544.5599999999995, "text": " the expression we already saw uh we already kind of computed this thing here before let me kind of"}, {"start": 4544.5599999999995, "end": 4551.28, "text": " zoom in uh so i can kind of skip through this yeah i'm gonna skip through this part here we're going"}, {"start": 4551.28, "end": 4557.68, "text": " to ignore this because we already saw it and that's it we end up with our mean and that's what we"}, {"start": 4557.68, "end": 4563.76, "text": " return back we return back the mean the variance the log variance and the predicted so the x start"}, {"start": 4563.76, "end": 4571.04, "text": " and after that we just here pass the true mean true log variance that's the the ground truth"}, {"start": 4571.68, "end": 4576.64, "text": " and we want to minimize the scale divergence so we want to basically be as close as possible to"}, {"start": 4576.64, "end": 4582.4800000000005, "text": " that distribution we pass our mean log variance we calculate the k l do some normalization blah"}, {"start": 4582.4800000000005, "end": 4587.92, "text": " blah okay finally there is this step this is l zero so in case that the time step was zero we do"}, {"start": 4587.92, "end": 4593.280000000001, "text": " some different computation if you recall from the variational lower bound loss uh let me show you"}, {"start": 4593.28, "end": 4597.44, "text": " uh let me show you that side by side okay guys so here it is so here are the l t minus ones we"}, {"start": 4597.44, "end": 4604.88, "text": " were computing so those are the k l divergence we here compute this expression minus log p uh"}, {"start": 4605.759999999999, "end": 4611.2, "text": " data x zero condition like x one so that's what we compute here this discretized gaussian likelihood"}, {"start": 4611.2, "end": 4616.4, "text": " blah blah blah that's how we get the negative log likelihood decoder negative log likelihood"}, {"start": 4616.4, "end": 4623.5199999999995, "text": " and that's pretty much it and now we uh compute depending on whether the as i said depending on"}, {"start": 4623.5199999999995, "end": 4630.719999999999, "text": " whether t equals zero if it's equal zero then we have decoder nl that's what we use as the loss"}, {"start": 4630.719999999999, "end": 4638.48, "text": " otherwise we use k l fairly simple fair enough okay let me kind of zoom in here and that's it"}, {"start": 4638.48, "end": 4643.36, "text": " we have our first term that's the vb the variational lower bound term and we just do some"}, {"start": 4643.36, "end": 4651.28, "text": " rescaling that's why the loss is called rescaled msc let's continue i'm going to ignore this part"}, {"start": 4651.28, "end": 4659.28, "text": " here uh because we'll be extracting epsilon which is noise so we can kind of ignore everything else"}, {"start": 4659.28, "end": 4664.16, "text": " here so let me just uh make sure this doesn't have any yep doesn't have any break point so we can"}, {"start": 4664.16, "end": 4671.04, "text": " kind of step over this we end up with target it's just going to be the noise uh and here we just do"}, {"start": 4671.04, "end": 4677.84, "text": " msc between the model output which are the epsilons that we predicted from the unit and we just do msc"}, {"start": 4677.84, "end": 4682.72, "text": " here between that and the target which is the noise and that noise let me see whether i can find"}, {"start": 4682.72, "end": 4688.0, "text": " it whether it's in this function okay and this is the noise that we initially created here 
before"}, {"start": 4688.0, "end": 4695.36, "text": " we even started creating these xt's that's it it's fairly actually simple um to be super frank here"}, {"start": 4695.36, "end": 4701.599999999999, "text": " it's not easy to understand why this exactly works it's kind of still magical to me but i'm slowly"}, {"start": 4701.599999999999, "end": 4706.16, "text": " learning about these diffusion models more and more i think the first time i kind of took some"}, {"start": 4706.16, "end": 4710.88, "text": " time to understand these was when i was preparing the glide paper so again do check out that paper"}, {"start": 4710.88, "end": 4715.839999999999, "text": " i do have some short introduction maybe even better than the one i made in this video when it"}, {"start": 4715.839999999999, "end": 4722.48, "text": " comes to understanding diffusion models but in any case finally because we have the hybrid loss we"}, {"start": 4722.48, "end": 4729.04, "text": " just combine the msc and the variational lower bound loss and that's it that's our loss after this"}, {"start": 4729.04, "end": 4736.24, "text": " we just do the um uh like basically do some waiting uh and for each of the time steps"}, {"start": 4737.04, "end": 4743.44, "text": " uh and because we use uniform sampling we'll just have ones everywhere uh do some logging blah blah"}, {"start": 4743.44, "end": 4747.5199999999995, "text": " blah and finally we do the computation of the gradients by calling the backward function"}, {"start": 4747.52, "end": 4754.56, "text": " on our hybrid loss here so that's pretty much it next up we just keep on repeating micro batches"}, {"start": 4754.56, "end": 4760.56, "text": " blah blah blah uh i'm gonna kind of stop here this is everything you need to know to understand how"}, {"start": 4760.56, "end": 4766.0, "text": " the how the training procedure works hopefully that was useful now let's quickly dig into the"}, {"start": 4766.0, "end": 4773.120000000001, "text": " sampling code and then we are done so i'm gonna stop this training here i'm gonna enter the"}, {"start": 4773.12, "end": 4780.24, "text": " uh sampling script so that's gonna be let me find it here so it's gonna be the image sample function"}, {"start": 4780.24, "end": 4786.72, "text": " here the script here uh and let's start uh with the main so i'm gonna now start and run this one"}, {"start": 4787.84, "end": 4793.5199999999995, "text": " i'm gonna set the sample configuration from my launchpy in vs code by the way i love vs code"}, {"start": 4793.5199999999995, "end": 4798.96, "text": " he has beautiful design uh amazing debugging experience and i mean i don't know why you would"}, {"start": 4798.96, "end": 4804.8, "text": " use anything else unless you don't have a choice i guess that's that's a reasonable uh excuse i guess"}, {"start": 4805.76, "end": 4810.24, "text": " okay let's step over this again bunch of arguments we're gonna ignore distribution"}, {"start": 4810.24, "end": 4819.2, "text": " blob uh distributional training logging um so i'll skip the model creation as well so i'm just gonna"}, {"start": 4819.2, "end": 4825.2, "text": " disable all of the breakpoints and i'm gonna step over all of this because we already saw all of"}, {"start": 4825.2, "end": 4830.5599999999995, "text": " that so i'm gonna ignore it now we'll load the actual model they do provide the checkpoints"}, {"start": 4831.44, "end": 4837.76, "text": " in their github repost to do check it out you can download it and then just set it here so i can"}, 
{"start": 4837.76, "end": 4844.639999999999, "text": " have it stored somewhere so it's an image net 64 64 unconditional model trained on 100 million steps"}, {"start": 4844.639999999999, "end": 4853.84, "text": " i guess so we load it we put it onto to gpu we set it into eval mode and we now start creating the"}, {"start": 4853.84, "end": 4859.360000000001, "text": " samples okay so because this is not a class conditional model we can ignore all of this part"}, {"start": 4859.360000000001, "end": 4867.68, "text": " and now this is where the magic happens the sample function first of all there is this paper called"}, {"start": 4867.68, "end": 4873.2, "text": " ddim i might cover this in the next uh video but for now we're just going to ignore it because this"}, {"start": 4873.2, "end": 4878.64, "text": " is false we're going to be using the p sample loop this function here that's the one we're going to"}, {"start": 4878.64, "end": 4886.400000000001, "text": " be using not the ddim it turns out that ddim in a nutshell works better when you only have 50 or"}, {"start": 4886.400000000001, "end": 4891.84, "text": " less time steps during your sampling procedure as soon as you pass the 50 threshold roughly"}, {"start": 4891.84, "end": 4898.88, "text": " then this method here operates better and this is just like the the method from the improved"}, {"start": 4898.88, "end": 4904.240000000001, "text": " ddpm paper the one i showed you in the beginning of the video okay so here's the sampling function"}, {"start": 4904.24, "end": 4910.639999999999, "text": " this is our desired shape we want to have one three sixty four sixty four images this is the image we"}, {"start": 4910.639999999999, "end": 4916.639999999999, "text": " want to generate uh this is kind of empty okay let's enter this function and let's see how it"}, {"start": 4916.639999999999, "end": 4923.12, "text": " works okay before that let me just enable the break points so enable all breakpoints okay now"}, {"start": 4923.12, "end": 4930.48, "text": " we can enter that here let me enter and here we are so first of all there's going to be this um"}, {"start": 4930.48, "end": 4937.679999999999, "text": " um p sample loop progressive uh basically this thing is going to keep on producing the samples"}, {"start": 4938.5599999999995, "end": 4945.759999999999, "text": " we pass in the model uh we pass in the desired shape uh noise is none we don't pass anything here"}, {"start": 4945.759999999999, "end": 4950.08, "text": " we don't specify this blah blah blah this is just boilerplate i'm going to ignore all of that"}, {"start": 4950.08, "end": 4957.599999999999, "text": " so here we are p sample loop progressive again we just pick a device uh it's going to be gpu"}, {"start": 4957.6, "end": 4964.96, "text": " in my case because i have uh like gpu here um then we generate the image so here's how we start we"}, {"start": 4964.96, "end": 4970.320000000001, "text": " literally start with the gaussian noise so this is your your normal distribution so so basically"}, {"start": 4970.320000000001, "end": 4977.120000000001, "text": " gaussian with mean zero and variance one okay with the desired shape on the desired device"}, {"start": 4977.92, "end": 4984.0, "text": " let's generate that image now we have some number of time steps which is set to 100"}, {"start": 4984.0, "end": 4989.52, "text": " again i think i just modified this for the sake of time otherwise i think default was a bit bigger"}, {"start": 4989.52, "end": 4995.36, "text": " like 4 
000 um in any case so we generate the indices and you can see here we reverse them"}, {"start": 4995.36, "end": 5003.2, "text": " because now we are doing the uh the reverse process we start with the uh 99 with the index 99 and we"}, {"start": 5003.2, "end": 5008.48, "text": " go all the way to zero which is the original image when i say original image here this is a"}, {"start": 5008.48, "end": 5013.6, "text": " generated image from the underlying data distribution that we learned during the training procedure"}, {"start": 5013.6, "end": 5020.08, "text": " that's the that's the idea there okay let's step over here so let's go over the indices we start"}, {"start": 5020.08, "end": 5028.64, "text": " with the index 99 so this is going to be first 99 okay we kind of um well you can see here because"}, {"start": 5028.64, "end": 5033.76, "text": " batch dimension is one nothing fancy happens here but otherwise if you wanted to generate like five"}, {"start": 5033.76, "end": 5039.52, "text": " images in the batch then you would just copy paste uh 99 five times because for all of the all of the"}, {"start": 5039.52, "end": 5044.080000000001, "text": " noisy images will be following the same process so that's where we start with with the last time"}, {"start": 5044.080000000001, "end": 5049.200000000001, "text": " step okay and in this function is where all the magic actually happens everything else was kind"}, {"start": 5049.200000000001, "end": 5055.120000000001, "text": " of boilerplate let's let's focus on this one so we pass the image which is again in the initial"}, {"start": 5055.120000000001, "end": 5060.8, "text": " point so now in the initial step it's going to be just pure image pure sorry pure noise we pass the"}, {"start": 5060.8, "end": 5066.72, "text": " time step uh blah blah blah nothing fancy there let's enter the p sample and here is how the"}, {"start": 5066.72, "end": 5071.84, "text": " sampling works and actually we already saw this during the training so you pretty much know how"}, {"start": 5071.84, "end": 5078.240000000001, "text": " this is going to work let me enter here uh this is that part of code where we were literally"}, {"start": 5078.240000000001, "end": 5086.400000000001, "text": " computing the um we split the model output the model variance then we compute those mean log max"}, {"start": 5086.400000000001, "end": 5091.52, "text": " log so that's the equation i saw the equation 15 from the paper we get the variance and then later"}, {"start": 5091.52, "end": 5100.72, "text": " down below what we do is we basically predict the x0 and then use that to to compute model mean so we"}, {"start": 5100.72, "end": 5106.96, "text": " we already kind of stepped through this code so i'm not going to do that now so yeah because of"}, {"start": 5106.96, "end": 5113.52, "text": " that let me kind of step over here and ignore this so i'm going to just kind of disable all of the"}, {"start": 5113.52, "end": 5122.8, "text": " breakpoints let me disable all of the breakpoints let me step to here and let me click f5 to step"}, {"start": 5122.8, "end": 5132.240000000001, "text": " over okay we end up here uh now i'm going to enable the breakpoints again in any case so we"}, {"start": 5132.240000000001, "end": 5139.68, "text": " generate the noise again this is your normal distribution uh we generate these masks uh which"}, {"start": 5139.68, "end": 5146.8, "text": " basically are always going to be one if t is equal is different from zero because we treat the zero"}, {"start": 
5146.8, "end": 5153.4400000000005, "text": " the zero step differently if you recall even during the training we had that nl loss uh as opposed to"}, {"start": 5153.4400000000005, "end": 5159.04, "text": " variational lower bound loss similarly here when we when we sample we're going to have a bit different"}, {"start": 5159.04, "end": 5165.68, "text": " behavior depending on the number of time steps so we just take the mean that's the mean we computed"}, {"start": 5165.68, "end": 5172.0, "text": " and then we take the log variance we do the xp xp and log cancel out we get the variance we"}, {"start": 5172.0, "end": 5179.4400000000005, "text": " multiply that with the noise and we end up with a sample okay so this is now we are in step 98"}, {"start": 5179.4400000000005, "end": 5185.84, "text": " okay and we're going to keep on doing this until we get a completely denoised image and at the end"}, {"start": 5185.84, "end": 5192.64, "text": " once we end up with the zeroth time step this is going to be equal to zero and that means the only"}, {"start": 5192.64, "end": 5198.64, "text": " result we'll return is the final mean again that's just a detail of how this thing works and we return"}, {"start": 5198.64, "end": 5205.92, "text": " the sample that's it basically the sample now becomes image that we feed into the next iteration"}, {"start": 5205.92, "end": 5211.68, "text": " of this p sample function and that's how it pretty much works i'm gonna now stop this because you"}, {"start": 5211.68, "end": 5217.84, "text": " pretty much saw everything i'm gonna go to launch i'm going to go to sample i'm gonna increase the"}, {"start": 5217.84, "end": 5225.04, "text": " number of diffusion steps to let's say 500 and i'm gonna run the script again and just show you the"}, {"start": 5225.04, "end": 5230.400000000001, "text": " results we get this time we're going to disable all of the breakpoints i'm gonna run this and"}, {"start": 5230.400000000001, "end": 5236.16, "text": " just show you how the results look like let me show you yeah i added this image show plot show"}, {"start": 5237.12, "end": 5243.360000000001, "text": " line so that we can visualize the actual sample and yeah i'll i'll get back to you as soon as the"}, {"start": 5243.36, "end": 5252.719999999999, "text": " image is generated okay guys here's the image we got some dog image again do do keep in mind that"}, {"start": 5252.719999999999, "end": 5261.2, "text": " i only have 500 steps it'd be much better if i used more steps obviously like thousand although"}, {"start": 5261.2, "end": 5265.839999999999, "text": " there is some bug with this code when you use 4 000 you can check out the issue in their repo"}, {"start": 5266.639999999999, "end": 5272.96, "text": " basically you get like completely saturated image so that means that the optimum quality of an image"}, {"start": 5272.96, "end": 5277.92, "text": " is maybe around thousand two thousand or something but like at four thousand the images hit the"}, {"start": 5277.92, "end": 5284.4, "text": " saturation point and you get like just pretty much junk out of these of this model guys this is pretty"}, {"start": 5284.4, "end": 5290.64, "text": " much it hopefully you found this format of a video useful do let me know whether you find it useful"}, {"start": 5291.44, "end": 5297.6, "text": " if so i'll keep on making videos such as this one combining papers combining code putting code in"}, {"start": 5297.6, "end": 5306.400000000001, "text": " paper side by side trying to make these 
abstract mathematical ideas a bit more grounded and hopefully"}, {"start": 5306.400000000001, "end": 5311.92, "text": " i've done a job at doing that in any case if you found this video useful do share it out with others"}, {"start": 5311.92, "end": 5317.68, "text": " who want to learn more about diffusion processes subscribe to this channel join the discord"}, {"start": 5317.68, "end": 5335.200000000001, "text": " community you can find the link down below in the video description and until next time bye bye"}]
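To make the formulas from the diffusion walkthrough above a bit more concrete, here is a rough, self-contained sketch of the three pieces that code keeps computing: predicting x0 from the predicted noise, the forward-process posterior mean, and the equation-15 interpolation between beta_t and the posterior variance. All names here (predict_xstart_from_eps, learned_range_variance, the linear beta schedule) are illustrative assumptions, not the actual improved-diffusion API.

```python
import torch

# Illustrative sketch only - helper names and the beta schedule are assumptions,
# not the repo's actual code.
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cumprod = torch.cumprod(alphas, dim=0)
alphas_cumprod_prev = torch.cat([torch.tensor([1.0]), alphas_cumprod[:-1]])
# beta~_t (the original code clips the t = 0 entry so its log stays finite)
posterior_variance = betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod)

def predict_xstart_from_eps(x_t, t, eps):
    # x_0 = sqrt(1 / acp_t) * x_t - sqrt(1 / acp_t - 1) * eps
    acp_t = alphas_cumprod[t]
    return torch.sqrt(1.0 / acp_t) * x_t - torch.sqrt(1.0 / acp_t - 1.0) * eps

def posterior_mean(x_start, x_t, t):
    # mean of q(x_{t-1} | x_t, x_0): coef1 * x_0 + coef2 * x_t
    coef1 = betas[t] * torch.sqrt(alphas_cumprod_prev[t]) / (1.0 - alphas_cumprod[t])
    coef2 = (1.0 - alphas_cumprod_prev[t]) * torch.sqrt(alphas[t]) / (1.0 - alphas_cumprod[t])
    return coef1 * x_start + coef2 * x_t

def learned_range_variance(v_raw, t):
    # Eq. 15 of the improved DDPM paper: interpolate between beta_t and beta~_t in log space.
    v = (v_raw + 1.0) / 2.0                     # model output in [-1, 1] -> [0, 1]
    min_log = torch.log(posterior_variance[t])  # log beta~_t
    max_log = torch.log(betas[t])               # log beta_t
    return torch.exp(v * max_log + (1.0 - v) * min_log)
```

The hybrid loss then combines a plain MSE on the predicted noise with a variational-lower-bound term, and only the variance channels receive gradients from the latter (the mean path is detached), exactly as described in the segments above.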
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=xkuoZ50gK4Q
Facebook DETR | ML Coding Series | End to end object detection with transformers
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE The second video in the ML coding series! The longest video I ever made! In this video I do a code walkthrough of Facebook's DETR model from the "End-to-End Object Detection with Transformers" paper. Let me know what you'd like me to cover next! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ DETR GitHub: https://github.com/facebookresearch/detr ✅ DETR Paper: https://arxiv.org/abs/2005.12872 ✅ Mini COCO: https://github.com/giddyyupp/coco-minitrain I edited the download/sampling scripts and ended up downloading a valid dataset that has only 100 images. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 DETR model recap 00:07:30 DETR demo notebook 00:18:55 Visualizing attention notebook 00:29:50 Visualizing encoder attention 00:36:45 Going through the training script 00:38:45 Backbone construction 00:42:05 DETR construction 00:54:00 Data loading and nested tensors 01:02:30 Forward pass through ResNet backbone 01:09:00 Forward pass through the transformer 01:22:00 Hungarian matching algorithm 01:32:50 Loss calculation 01:40:50 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #detr #transformers #coding
What's cracking guys? This is the second video in the machine learning coding series and I'll be covering Facebook Deter, so that's a short for Detection Transformer and some of you might recall that I've already covered the paper in great depth I think a year ago, so if you're not familiar with what Deter is maybe consider checking out that video first, but I think this one will be like self-sufficient as well. So as I said it's a model for object detection and just for those of you who are new to the channel, object detection is this task where you have some salient objects in the image, as you can see we have some cats and remote controllers here and you can see that we have a bounding box around this cat here and we also have a label associated with the bounding box similarly for the remote controller here, for the second one here and for the other cat this blue box and finally we can see a purple box around this whole image and basically the class being couch which is the correct class in this particular case. So what I've done is I went ahead and basically cloned the official repo and as I said I'm gonna walk you through the code base in a lot of details. Before we go there I want to quickly walk you through the pipeline on the high level so that you have a mental model going into this video. The model itself is fairly simple and we'll be focusing on three components in this particular video. So first is understanding how the actual architecture works. So how do we basically, let me take a pen here, so how do we do this transformation here? How do we go from the image all the way to this set of box predictions? So understanding the architecture of the actual model. The second part is given the predictions, given the the bounding boxes predictions and the associated classes, how do we match those with the set of ground truth bounding boxes here? Okay so basically what this model outputs as we'll soon see is, and this is a hyper parameter but the model they've used 100 as the as the default, so that means we are always outputting 100 bounding boxes and so we'll have two vectors as the output from our model. One is going to be a 104 so that's going to be our bounding boxes. Obviously the four points are enough to describe the actual bounding box. The format is I think first the center point coordinates XY and then the height and the width but that's less important. The second vector that the model will output is just like 100 basically containing the classes of those associated bounding boxes here. Now as I said the hard part will be how do we match these outputs with like correct ground truth bounding boxes? So imagine we have this red bounding box represented as this red square and consider we have, I'll just pick another color like green because I don't have yellow at disposal. So imagine this yellow is this green and so the idea is how do we decide how do we know that we need to connect this one with this particular ground truth bounding box and how do we know that we need to connect this red one with this red one here? And also we need to take all of the other predictions and map them into this no object into this special class that no object class and okay so that's the step number two. And the way they do that is using something called Hungarian matching algorithm. We'll see how it works in a very like a very in a lot of details. Finally once we have the matching we now need to calculate some sort of a loss and then do the back prop. So they'll have three types of losses. 
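Before getting to those losses, here is a tiny sketch of the matching step just described. This is illustrative only: it uses scipy's Hungarian solver on an L1-only cost matrix, whereas DETR's actual matcher also mixes a class-probability term and a generalized-IoU term into the cost, each with its own weight.

```python
import torch
from scipy.optimize import linear_sum_assignment

# Toy bipartite matching between predicted and ground-truth boxes (a sketch, not
# DETR's HungarianMatcher): cost[i, j] = cost of assigning prediction i to GT box j.
num_queries, num_gt = 5, 2
pred_boxes = torch.rand(num_queries, 4)        # normalized (cx, cy, w, h)
gt_boxes = torch.rand(num_gt, 4)

cost = torch.cdist(pred_boxes, gt_boxes, p=1)  # (5, 2) pairwise L1 distances

pred_idx, gt_idx = linear_sum_assignment(cost.numpy())  # Hungarian algorithm
print(list(zip(pred_idx, gt_idx)))  # two (prediction, ground-truth) pairs, one per GT box
# Every prediction left unmatched is trained towards the special "no object" class.
```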
The first one is gonna be the label loss so that means for this for the match boxes we want to make sure that the ground truth that the probability of the ground truth label is as high as possible. That's first loss. The second loss is gonna be L1 loss. So basically how far off are our predictions from from the associated ground truth boxes. So imagine we have a prediction here and imagine we have a different prediction basically maybe sorry the ground truth prediction being here. We calculate the L1 loss and I'll get into the details of how that's calculated but basically well you just treat the bounding boxes as a four-dimensional vector and you just find the L1 between the difference of those two vectors. That's how we calculate L1 loss. And the final one is the generalized intersection over union which is basically a fancy version of your theme of the of the classic IOU metric. IOU is just a way to so so what it does is it finds the area of this intersection here and it basically divides that with the union so hence the intersection of a union and you can imagine that as this tends to one that means that these two boxes are completely overlapping and you can see as the less the overlap the smaller the intersection area and the and when you divide that by the actual union that goes towards zero as the boxes don't so so yeah it will be ultimately equals zero once we don't have any overlap. So that's the third type of a loss again don't worry if you didn't understand everything we'll go into a lot of details in this particular video. Okay let me just show you a bit more detailed view of the architecture here so the idea will be to encode so we'll pass the first we pass the image to a CNN we get some set of image features we are going to then basically tokenize those image features so we'll just treat them as tokens for the NLP transformer so that part is here we'll have some so so basically I said we're gonna pass them here and then encode them using the encoder stack of the transformer we'll add some spatial positional encodings and on the other hand side we have this very interesting thing going on so we have these let me just change the color so we basically have these object fairies which are going to be learnable like hundred learnable embedding vectors and it's hundred because as I said that's just a hyper parameter and because we have hundred we'll also have output hundred like bounding boxes and you can see here what happens is that in this middle lab layer of the transformer layer we have cross attention going on so we have that these object queries are basically each of them are attending to this encoded version of the image features and each of them are looking for something in the in the image so that's and that's how on the high level obviously very hand baby how this pipeline works once we get the output set of hundred tokens here we just map them individually using MLPs or just a simple linear layer and we map so we take this token we map it into basically a class so we'll have associated class and we also map it into a four dimensional vector here and that's gonna be that's going to represent the bounding box that this particular token encodes and that's it on a very high level with this prerequisite knowledge we can now jump into the actual true piter notebook so let's see what's going on here let's start from the beginning of the notebook I'm just going to run these imports here I'm going to import the necessary libraries we're gonna be working with pytorch because this 
is Facebook after all or meta as they are now known to as here is a small minimal implementation of debtor it's not as complicated and it's not the same as the one they actually have in the official code base but it's good enough and gives an understanding of what's going on so I'm gonna run the cell and then explain what's going on again quickly we have a backbone resident 50 we have this a comb layer which is going to take the 2048 channels of the output embedding volume from the from the CNN and just reduce the number of dimensions to 256 and that's going to be the ultimate dimension of the tokens that we pass into the transformer so let me quickly just kind of explain what that exactly means so as I said we have an image here so this is an image we have a CNN block here I'm just gonna denote it as a box here we pass the image and out comes some some volume basically obviously the spatial so here the spatial extent was we had three channels here so this was this was three here and after encoding it through a CNN we're gonna end up with something that has like a smaller spatial extent so let me try and denote that like the following so basically we have a smaller spatial extent but then we have much more channels okay so something like this I'm really bad at drawing today so something like this basically this number of channels here is gonna be 248 for resonance 50 and this is just like your your down sample spatial extent so this is like small H small W whereas this was like big H here and big W so the the convolutional layer I just mentioned has the kernel of size one times one which means each is just going to reduce the number of channels of this of this volume here from 2048 into I think 256 and then the idea is to just like treat these as a set of tokens so we can in a raster order we can just do the following we can just do this and we take this vector this is gonna be the first embedding vector which we pass into the transformer then we take the second one here so this is gonna be the second vector this one here is the second one and so yeah we basically do that in the raster order let me can zoom in here a little bit and let's show you what I mean exactly so this is vector number one vector number two vector number three vector number four five six etc etc and that's basically how this this part of the pipeline looks like okay let me go back to the code so that's this this what this conversion layer does then we have a simple transformer again encoder and decoder stacks and finally we have the two linear layers linear class and linear b-box I showed you what these do these take the output embeddings from the decoder and just map them into either the distribution over classes for the cocoa data set or this second one just gives us the bounding boxes for each of the output tokens okay output embedding vectors here the query at the query vectors the learnable query vectors we have hundreds of them and that's it and finally we have a row embed and call embed these are just going to be the the positional encodings okay and once we do so here is how the forward path looks like we basically pass the inputs so that's a set of images we pass that through the comp one then we apply some batch normalization ReLU max pooling whatnot and then we apply the four layers of the of the resonance 50 and finally we apply this small this this convolutional layer where we we basically reduce number of dimensions as I previously mentioned and what happens here is we construct the positional encodings 
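Roughly, what the next couple of steps boil down to is the sketch below. This is a paraphrase with hard-coded shapes, not a copy of the notebook cell: learnable row and column embeddings are concatenated per spatial location and then added to the image tokens taken in raster order.

```python
import torch
import torch.nn as nn

# Sketch of the learnable 2D positional encoding from the minimal demo (paraphrased):
# one embedding per row index and one per column index, each hidden_dim // 2 wide,
# concatenated so every spatial location (i, j) gets a unique hidden_dim vector.
hidden_dim = 256
row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))  # supports up to 50 feature-map rows
col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))  # supports up to 50 feature-map columns

h = torch.rand(1, hidden_dim, 19, 29)  # CNN features after the 1x1 conv (2048 -> 256 channels)
H, W = h.shape[-2:]
pos = torch.cat([
    col_embed[:W].unsqueeze(0).repeat(H, 1, 1),  # (H, W, 128)
    row_embed[:H].unsqueeze(1).repeat(1, W, 1),  # (H, W, 128)
], dim=-1).flatten(0, 1).unsqueeze(1)            # (H*W, 1, 256) - one vector per location

src = pos + h.flatten(2).permute(2, 0, 1)        # tokens in raster order, shape (H*W, 1, 256)
print(src.shape)                                 # torch.Size([551, 1, 256])
```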
then we're going to take those positional encodings just sum them up with these tokens here so that's this part here don't worry about the actual details we'll get to those a bit later so what is important is we have some tokens from the image we add the positional encodings we pass that into the encoder stack and then through the decoder we pass the query the query embedding vectors that gives us as the output a set of embedding vectors here which we then map using the linear class and linear b-box linear layers we additionally apply sigmoid because we want to have normalized bounding box coordinates so from 0 to 1 okay that's it now let's see what happens let's just run this this cell is going to load a pre-trained data model so I'm going to just run this and here we're just going to run the this cell that has that defines the classes from the cocoa data set and some associated colors which are going to help us visualize stuff so again the number of classes in cocoa is let's see what we have here so if I add a code cell let's bring the number of classes so shape well shape will not work because this is what this is like a list so we're gonna do length is that so length and this should be I think 91 or 91 yeah 91 and then we'll add the additional no object class we'll see that a bit later okay here's some helper functions this one helps us just do some type of processing to the input images we do some resize operator then we convert the images into pie torch tensors and then we do image net normalization so these are statistics calculated from the actual image net data set this is going to be like the mean and this is going to be the standard deviation statistic of over the calculated over the image net data set these helper functions just map as you can see from the format CX CY WH which is as you specify the center coordinate of the bounding box and width and height and instead of that it's going to convert it into a format where we specify the top left point and the bottom right point and those are obviously equally informative representations that take equal amount of space it's just that this representation is a bit more suitable to some of the API's they're actually using and we'll see that later not that important rescale boxes so as I said we do predict the bounding boxes which are normalized which means from 0 to 1 this is just going to return a map those predictions into the original input image space like a range okay did we run the cell and just run the cell and finally we have the detection function here what we do is we pass an image we apply the transforms we just saw we unsqueeze so that we add like the batch dimension so this adds a dummy batch dimension one because that's what by torch models expect just some error checking we pass the images through the model we get the outputs and finally what we do here is we take the prediction logits we apply the softmax because remember this is just going to apply linear layer so these are still unnormalized logits so we apply softmax after we've normalized the logits using softmax we just extract all of the classes except for the no object class here that's what we have up to minus one and then what we do is we take those probabilities we find the max across all of the hundred outputs and we find those predictions where the peak of the distribution across classes is at least 0.7 in that way we kind of take out only those very confident bounding box predictions okay that's the idea there and then we use that vector the key vector 
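In code, that confidence filtering looks roughly like the sketch below (illustrative tensors and shapes, not the notebook's exact variables), using the keep mask

```python
import torch

# Sketch of the post-processing: softmax the class logits, drop the trailing "no object"
# column, and keep only the queries whose best class probability clears the threshold.
num_queries, num_classes = 100, 92           # 91 COCO classes + 1 no-object class
pred_logits = torch.randn(1, num_queries, num_classes)
pred_boxes  = torch.rand(1, num_queries, 4)  # normalized (cx, cy, w, h)

probas = pred_logits.softmax(-1)[0, :, :-1]  # (100, 91), no-object column removed
keep = probas.max(-1).values > 0.7           # boolean mask over the 100 queries

img_w, img_h = 640, 480                      # hypothetical input image size
confident_boxes = pred_boxes[0, keep] * torch.tensor([img_w, img_h, img_w, img_h])
print(probas[keep].shape, confident_boxes.shape)  # (K, 91) and (K, 4) for the K confident queries
```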
We then use that keep mask to extract only those bounding boxes, rescale them, and return them together with the associated probabilities. If I run this and call the function, we get the scores and the bounding boxes as output. Let's quickly examine the shapes: printing the scores shape gives (5, 91), five because only five of the hundred predictions were confident enough. The boxes should then be (5, 4), and indeed they are: five boxes that are good enough. Let me quickly modify the function to also return the raw output logits. If I add a logits variable and print its shape, we get 100 by 92; it's 92 because the raw output still contains the "no object" class, whereas if we instead returned the probas variable (an admittedly weird name), which is taken after slicing that class away, we'd get 91 instead of 92. Okay, now we can finally visualize the output of the detect function: we iterate over the probabilities and the bounding boxes, plot a rectangle using each box's coordinates, and label it with the most probable class from the distribution. If I run this, we get the image we started this video with, the one I showed you at the beginning. So far so good. Let's now jump to the second notebook; the first was just a quick, high-level overview of how a minimal implementation of DETR could look, and it turns out it already works fairly nicely. The second notebook is where we'll see some attention visualizations. I again run the necessary imports and the COCO classes; we have the same kind of helper functions, the transforms and the functions that change the box format and do the rescaling, so I'll just run those, and the same plot_results function as in the previous notebook, which is just matplotlib stuff and not important. I'm going to focus on the core and the gist, which is the actual DETR model and how the three components I showed you at the beginning of the video, the architecture, the Hungarian matching, and the loss calculation, look. That will be the focus. We load a pretrained model and set it into eval mode; for those of you not familiar with PyTorch, that is important because some layers, such as batch norm, behave differently depending on whether they are in train or eval mode. We load an image, and then we have a similar forward prop to the detect function from the previous notebook.
We transform the image, unsqueeze it, pass it through the model, get the outputs, and apply the same kind of post-processing to get the probabilities; this time the threshold is a bit higher, 0.9 (although I think I may be the one who modified that). Then we rescale the boxes. For the most part this is the same as the previous notebook, and we get the output predictions. Now that we have all of that computed, let's add some hooks to our model. What a hook does is log certain activations as the image passes through the DETR model. The ones we're interested in are the activations from the backbone, i.e. the conv features, and some of the attention coefficients. From the last layer of the encoder (there are six layers in practice) we grab the self-attention module, and in particular the matrix you get when you multiply the query vectors with the key vectors and apply the softmax: the attention scores. That's what we'll visualize. The second one comes from the last layer of the decoder, the multi-head attention module, and I'm fairly sure that's the cross-attention one. Let's double-check in the transformer file: in TransformerDecoderLayer we have self_attn, which is the self-attention, and multihead_attn, which is indeed the second one, the cross-attention layer. So we're grabbing the attention weights from that particular cross-attention module. Okay, enough rambling, back to the notebook. We attach these hooks (that's just PyTorch syntax, I won't get into the details) and run the inference, so all of those activations get appended into the associated lists; then we remove the hooks and extract the zeroth element from each list, because that's how the collected data was appended. Let me run this cell and see what we end up with.
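For reference, this is roughly how the notebook wires up those forward hooks; the module paths and variable names follow the official DETR visualization notebook, but treat them as a sketch rather than the exact cell:

```python
# lists that the hooks will append into
conv_features, enc_attn_weights, dec_attn_weights = [], [], []

hooks = [
    # backbone output: the dict of ResNet-50 feature maps
    model.backbone[-2].register_forward_hook(
        lambda module, inp, out: conv_features.append(out)),
    # encoder, last layer: softmaxed query-key attention scores
    model.transformer.encoder.layers[-1].self_attn.register_forward_hook(
        lambda module, inp, out: enc_attn_weights.append(out[1])),
    # decoder, last layer: cross-attention weights (queries attending encoder tokens)
    model.transformer.decoder.layers[-1].multihead_attn.register_forward_hook(
        lambda module, inp, out: dec_attn_weights.append(out[1])),
]

outputs = model(img)          # run inference; the hooks capture activations on the way
for h in hooks:
    h.remove()                # detach the hooks afterwards

# the hooks append, so keep the captured tensors rather than the one-element lists
conv_features, enc_attn_weights, dec_attn_weights = (
    conv_features[0], enc_attn_weights[0], dec_attn_weights[0])
```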
We end up with certain shapes, so let me quickly analyze them. For the conv features I'd expect something like batch, 2048 channels, and then some spatial extent, maybe around 19 by 29. There's a particular implementation detail, though: we first have to index by the '0' key and then additionally access .tensors, because what the hook captured is a nested tensor and we only want the tensor part. Having done that, we get the expected format (I misguessed the exact small h and small w, but that's not the point). Let me do the same for the attention weights. The encoder attention weights are 850 by 850; as I said, those are the scores you get by multiplying the queries and the key vectors. For the decoder attention weights the shape is 100 by 850, not 850 by 850, because here we are in the decoder stack: we have a hundred learnable query vectors and 850 keys coming from the encoder stack. It all makes sense. Nice, we've seen the shapes; now let's visualize the attention, focusing in particular on the decoder attention weights. They grab the height and width from the conv features, which are 25 and 34 here, and they iterate through the bounding boxes we previously predicted and visualized above, the five boxes: two remote controls, two cats, and a couch. For each one they grab a particular index into that (100, 850) tensor; remember it has a hundred rows because we have a hundred query embeddings and 850 columns because we have 850 keys coming from the image features at the top of the encoder stack. So this grabs the 850 coefficients corresponding to the scores that this particular query vector has when it attends over the keys. To quickly clarify what I mean: the encoder output has 850 tokens, and in the decoder's cross-attention layer we have a hundred query vectors in total. We take the particular query associated with the bounding box we care about, and we effectively take the dot product between that query vector and all 850 key vectors; those are the 850 numbers we now visualize. That's the basic idea. If we run the cell, we see that for this particular remote control, these are the image features that it attends to, and there is an obvious correlation between the two; similarly for the second remote. For the couch, it attends to the outer parts of the image, and that's how it figures out that this is the couch we're seeing, and you can make out a faint figure of a cat; and similarly for the last one. Hopefully that gave you some understanding of what's going on, and I hope the drawing helped. Let me just recap one part: the conv feature map here is 25 by 34.
Multiply those two numbers and you get 850; that's why we have 850 here. Remember that we started from image features that were exactly 25 by 34, took them in raster order, and flattened them out, which is how we got the 850 tokens. So by attending over those 850 tokens you're actually attending over the original image, just a downsampled version of it, and that's why we get these interesting attention maps. That's how you connect the 850 numbers with the images we see here. That's the best I can do for now, but bear with me, we'll understand all of the details a bit later. Next they just print those shapes again, which we've already seen, so I'll skip that. What they additionally do is take the encoder attention weights instead of the decoder ones and reshape them so that we end up with shape (25, 34, 25, 34) instead of (850, 850); that makes it a bit easier to understand what's going on when we index into this tensor. Now let's visualize the encoder attention patterns. They pick four points on the original image, and using those points they visualize where each point attends. You can see the original image with the four points marked by red circles, and then we visualize where those points attend in the original input image; because the encoder does self-attention over the image tokens, we get these image-like patterns. The interesting fact about DETR is that the encoder already learns to do some sort of instance segmentation, which is super useful if you later want to put bounding boxes around those objects, and it was not explicitly trained for this; it just emerges from the way DETR was trained, which is super nice. Let's dig through the plotting code, keeping it at a high level since it's a bit messy. They create a bunch of subplots, which is not that interesting, and then the important part: the picked points are initially in image space, and because the backbone downsamples by 32, we divide by this factor (fact = 32) to get the indices into the downsampled feature grid, and then we use those indices to index into the attention tensor.
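A small sketch of that indexing, assuming the hook variables from before; the point coordinates are just illustrative examples:

```python
# reshape the encoder self-attention from (850, 850) to (h, w, h, w) so we can index by 2-D points
f_map = conv_features['0']
shape = f_map.tensors.shape[-2:]                  # e.g. (25, 34)
sattn = enc_attn_weights[0].reshape(shape + shape)

fact = 32                                         # the backbone downsamples the image by 32
points = [(200, 200), (280, 400), (200, 600), (440, 800)]   # (row, col) pixel coordinates (examples)
for (y, x) in points:
    attn_map = sattn[..., y // fact, x // fact]   # how every image token attends to this point
    # plt.imshow(attn_map) would reproduce the instance-segmentation-like masks
```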
By doing that, we figure out the following; let me use OneNote again, because I realize this can be hard to follow from just me talking. We have this volume of tokens that came out of the CNN; let me draw a grid over it to make it easier to visualize. We take a particular point, i.e. the token at that position, which is fed into the encoder. Somewhere in the transformer that token gets mapped into a query vector, and we then take that query vector and do a dot product with the key vectors of all of the other tokens. By doing that, we see how this image token attends to all of the other image tokens in the volume. That's the idea. Obviously this is all flattened inside the actual transformer; I'm just visualizing it in this spatial format because it's easier to map onto what's happening and onto the images we get. Hopefully that was somewhat understandable. The final cell just makes this interactive: you can pick your own image and do the same thing with a nice widget where you move the red point around and watch the attention patterns change. If I move it directly onto the cat we get some weird patterns; apparently when we are closer to the boundary of the cat we get the nicest attention patterns. You can also grab some other image from the internet and this will still work. That's the idea of this notebook; hopefully you got some understanding of the things this DETR model is learning. We saw interesting patterns in the encoder, instance-segmentation-like masks, and nice patterns in the decoder, where we saw where the query tokens from the decoder stack attend over the original, downsampled image features. Again, quickly: in the decoder, the query embedding vectors attend over the embedding vectors at the top of the encoder stack, and by visualizing that you visualize where a query vector is attending; the second kind of image is about grabbing an image token and visualizing where that image token attends over its neighborhood. That should be fairly simple and easy to understand. Cool, now let's start digging through the actual code base. For that I first have to check out the main branch, because the Jupyter notebooks are checked into the colab branch; if I do git checkout main, I can find the code I need. Let's start debugging the main file. I'm going to skip a lot of unnecessary details: the code was developed to be trained in a multi-node, multi-GPU setup, and we don't care about that, we just care about understanding how DETR works. There are also a lot of details about panoptic segmentation, since DETR can easily be adjusted to work for that task; we'll ignore all of that too. Finally, the actual COCO evaluation details are not part of this code base, they use an existing API, so we'll just skim through that as well. The main focus is understanding the three components of DETR: the architecture, the Hungarian matching algorithm, and how we calculate the three types of losses we saw at the beginning of the video. Without further ado, let's start.
We ignore the first parts; they just print the SHA of the current commit, which is not vital. We grab the device; I have a GPU, so the GPU will be used. For reproducibility they set the seeds for various libraries: PyTorch, NumPy, Python's random, etc. Not that important. Now the actually important part starts: let's see how the model is built. First we pick the number of classes, which will be 91 because we're dealing with COCO, and as I said we'll ignore the panoptic path. Now let's build the backbone, which is ResNet-50, but with a bit more detail. First we build the positional encodings; the interesting part of those is in their forward function, so we can skip the constructor. Because the backbone learning rate is bigger than zero, the backbone is made trainable; nothing fancy there. Then we form the backbone and this Joiner object, which, as we'll see in a second, basically just joins the backbone (the ResNet-50) and the positional embeddings. Let's step into the constructor of the Backbone: the name is resnet50, so we fetch the ResNet-50 class from torchvision.models, and dilation is set to False, so we get a regular ResNet-50 rather than one with dilated convolutions. If you're wondering, a dilated convolution has a kernel that only attends to certain, spread-out pixels, and it has a similar effect on the spatial resolution as using strides; but that's just a fun fact, it's not important here. They also use FrozenBatchNorm2d layers, another detail that's not crucial, just a common thing in detection pipelines. The number of channels will be 2048, and finally we construct the BackboneBase. Skimming through it: they freeze some of the parameters, so the only trainable blocks are layer2, layer3, and layer4, and we only extract the features from layer4, storing them as a value under the '0' key. That's the reason why, during the pass through the Jupyter notebook, we had to index with that weird '0' key and then .tensors; this is the detail behind it. Finally we take the ResNet-50 and wrap it in this IntermediateLayerGetter, which just makes sure we only fetch the layer4 representations from the backbone. That's it. The Joiner itself is nothing fancy; the interesting part lies not in its init function but in its forward.
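A tiny sketch of what the backbone construction boils down to, with the frozen batch norm and the selective weight freezing omitted for brevity:

```python
import torch
import torchvision
from torchvision.models._utils import IntermediateLayerGetter

# roughly what Backbone / BackboneBase do
resnet = torchvision.models.resnet50(pretrained=True)   # dilation disabled -> plain strided convs
body = IntermediateLayerGetter(resnet, return_layers={'layer4': '0'})  # expose layer4 under key "0"

images = torch.randn(2, 3, 608, 928)                      # dummy batch of two images
features = body(images)['0']                              # (2, 2048, 19, 29): stride-32 features
```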
Most of the interesting things happen in the actual forward prop, but the init still gives us an idea of how the model is implemented, so let's continue. Next up we build the transformer; it's your standard transformer, nothing interesting in the init, so we'll focus on its forward a bit later. The DETR part is worth skimming, though. The number of queries is a hundred, as previously mentioned; we store the transformer and the model dimension, 256, which is the internal transformer dimension. We create class_embed and bbox_embed, which, if you recall, map the decoder's output embedding vectors into the distribution over classes and into the bounding boxes. class_embed is a linear layer from 256, the dimensionality of the output embedding vectors, into 92, because we have 91 COCO classes plus one for the special "no object" class. For the bounding boxes we use an MLP instead of a simple linear layer; the API here is a bit convoluted, but what it says is: the output of this MLP should be 4, it should have three layers, these are the hidden dimensions, and this is the input dimension. So it's an MLP that outputs four-dimensional vectors at the end. We form the query embeddings, just an embedding table with a hundred entries of dimensionality 256. Then there's the input projection, the 1x1 conv that maps from 2048 to 256 channels, and we store the backbone. The aux_loss flag is set to True; what it does, as we saw in the notebook, is make sure we fetch the outputs of every decoder layer, not just the last one, so all six of them. The paper showed that having these intermediate losses, rather than only the loss at the end of the decoder stack, improves performance. Continuing, we again ignore the part that's only relevant for panoptic segmentation, and then we construct the Hungarian matcher, the second important component of the pipeline; the whole magic happens in its forward pass, so we ignore it for now. The weight dictionary defines the weights for the associated losses: the label (classification) loss, the bounding-box L1 loss, and so on; the box loss gets a weight of 5 and the label loss a weight of 1, hyperparameters they figured out during training. We skip the panoptic part again, and because aux_loss is True we also have to define weights for all of the intermediate losses, which is what the next bit does. If I print the weight dictionary at the end, you can see weights for a bunch of different losses, around 18 of them, which is a lot: three times six, because we have six decoder layers and three losses for each of them.
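Here is a sketch of those DETR heads and embeddings; the MLP class below is a simplified version of DETR's own helper, and the numbers are the defaults discussed above:

```python
import torch
from torch import nn

class MLP(nn.Module):
    """Simplified DETR-style multi-layer perceptron used as the box head."""
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
        super().__init__()
        dims = [input_dim] + [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(nn.Linear(i, o)
                                    for i, o in zip(dims, dims[1:] + [output_dim]))
    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x) if i == len(self.layers) - 1 else torch.relu(layer(x))
        return x

hidden_dim, num_queries, num_classes = 256, 100, 91
class_embed = nn.Linear(hidden_dim, num_classes + 1)        # 92 logits, incl. "no object"
bbox_embed  = MLP(hidden_dim, hidden_dim, 4, num_layers=3)  # 3-layer MLP -> (cx, cy, w, h)
query_embed = nn.Embedding(num_queries, hidden_dim)         # 100 learnable object queries
input_proj  = nn.Conv2d(2048, hidden_dim, kernel_size=1)    # backbone channels -> transformer dim
```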
We then define the types of losses we care about; this "cardinality" entry is not actually a loss, we'll see what it is a bit later. Skipping ahead, the final important object is the criterion: during its forward pass we do the matching plus the loss calculation, and that's the third component of DETR we care about. Its init stores the Hungarian matcher and a bunch of other objects, nothing super vital, but one interesting detail is this empty_weight vector. Its dimensionality should be 92, the number of classes plus the "no object" class, and if I print it, it is. All of the weights are one except the one for the "no object" class, which is set to 0.1, and that puts a smaller weight on that class during the label loss. The reason is that for the most part we'll have only a couple of real box predictions, and all of the other ones, out of the hundred predictions, will be "no object"; reducing the weight for those is just a common thing people do to counter this class imbalance. We register it as a buffer, meaning it's not a trainable vector, it's a constant, but it's still part of the model and should be stored with it; that's what register_buffer does, again PyTorch syntax. Continuing on, we're almost done building the whole thing: we place this object onto the GPU and store some post-processors, which we'll see in the forward pass; they're more relevant during evaluation than during training. Finally we return the model, the criterion (which contains the Hungarian matcher and the loss calculation), and the post-processors. Back in main, we move the model weights onto the GPU; there's some variable renaming for the distributed case, which we skip. Then we calculate the number of parameters: we iterate through all the parameters, and for those that require grad we count the number of elements and sum them up, which gives about 41 million trainable parameters for the DETR model. The next piece of code separates the trainable weights into two groups, the backbone group and the non-backbone group, so that we can use a different learning rate for the backbone than for the other parameters; that's all that logic does, so let's not lose time there. They use AdamW, which additionally adds weight decay to the optimization, and finally a learning rate scheduler that drops the learning rate by 10x after 200 epochs.
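Putting those pieces together, the criterion's class-imbalance trick and the optimizer setup look roughly like this; the learning rates mirror the defaults discussed above and `model` is assumed to be the built DETR model:

```python
import torch

num_classes, eos_coef = 91, 0.1
# inside the criterion's __init__ (roughly): down-weight the "no object" class in the CE loss
empty_weight = torch.ones(num_classes + 1)
empty_weight[-1] = eos_coef
# self.register_buffer('empty_weight', empty_weight)  # a constant saved with the model, never trained

# in main.py (roughly): a smaller learning rate for the backbone than for the rest
param_dicts = [
    {"params": [p for n, p in model.named_parameters()
                if "backbone" not in n and p.requires_grad]},
    {"params": [p for n, p in model.named_parameters()
                if "backbone" in n and p.requires_grad], "lr": 1e-5},
]
optimizer = torch.optim.AdamW(param_dicts, lr=1e-4, weight_decay=1e-4)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200)  # 10x drop after 200 epochs
```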
Now we need to build the COCO datasets, so let me quickly go through what's going on. We build the COCO dataset, and each of our images will go through various transforms, such as normalization using the ImageNet statistics, random horizontal flipping, and then either random resizing or a combination of resizing and cropping, etc.; a bunch of details, not that interesting. We store those transforms along with some other processing, and we'll see all of that a bit later during the forward pass. By the way, let me quickly show how the dataset is formed: there is the image directory path and the annotation file. What I've done here is the following: I found a repo (I'll link it in the video description) that downsampled the COCO training set so that it contains only 25,000 images instead of roughly 117,000, and I additionally extracted just a hundred images and created a valid annotation file containing only those hundred annotations; those are the small 391-kilobyte annotation files. That gives me a minimal COCO dataset so I can run and debug this code without downloading about 20 gigabytes of data. Do let me know if you'd like me to share the small modifications I made for this mini dataset. So that's just a minor detail: the dataset has now loaded our hundred images and the associated annotations. We do the same for validation, so I'll skip through that and we're back in the main function. We have the datasets, the optimizer, and the model; now we need the data loaders. They use random sampling for the training data and sequential sampling for the validation data, nothing too interesting, and then a batch sampler, so we batch two training images during training, just standard PyTorch stuff. Finally we create the data loaders using the training dataset, this batch sampler, and a collate function, which is going to be interesting, we'll see how it works a bit later. With that, the data loaders are ready. Then come some COCO-specific idiosyncrasies, which I'll skip as not vital; we don't have any frozen weights (that's relevant only for panoptic segmentation); we don't resume from a checkpoint and we don't want eval, we just focus on training. The training is set up to run for 300 epochs; obviously I'm not going to show that in this video, I'll just do a single iteration, a single loop through the training, and see how things work. This part here, train_one_epoch, is the most important part of the whole codebase, so let's step into it. We set the model to train mode, as well as the criterion; as a reminder, it contains the Hungarian matcher plus the loss calculation logic.
I'll skip all of the logging functionality, nothing super vital there, and finally we start loading the images, which is where the fun starts. Let's step through and see what happens. We're in CocoDetection, the dataset class, grabbing a single image. We end up with the image, which is a PIL image, and the target, which is a list containing all of the associated annotation dicts: segmentation, iscrowd, bounding box, and so on; we only care about the actual bounding box. Some less important details: the image ID is 42415, which is the file name of one of the images in the training set; the dataset just grabbed that image and uses its ID. The prepare step processes this set of annotations into something a bit cleaner, nothing fancy, and the transform applies the horizontal flipping and so on to the image. So again: we fetch the image and the annotations, and we process both. Then the same thing happens again, because we have batch size set to two, so __getitem__ is called twice and the two samples are collated to form the final batch; I'll skip through the second call since we've seen how it works. Now the collate function runs. The batch is a list of pairs, (image, annotations), (image, annotations), and we want to regroup it so that we have a tuple of images and a tuple of annotations; that's what the zip and list calls do, so the batch becomes (image, image) and (annotations, annotations). Now comes an interesting and important part: images in object detection are often of different shapes, and we don't want to resize them all to the same dimension, we want to operate on the original images. So let's quickly step into this nested_tensor_from_tensor_list function and see what it does. We index with zero, which fetches the tuple containing the image tensors, and we land in that function. The images have three dimensions; if I print the first one, we get (3, 768, 1151), the number of channels plus the spatial extent of the image. Next, we iterate through all of the images in the batch and find the maximum extents: call it a convex hull or a bounding box if you like, the smallest box that encompasses the spatial resolutions of all of the images in the batch. To show what I mean, we just saw the shape of the zeroth image; let's look at the shape of the second one.
You can see that each of the first image's spatial dimensions is at least as big as the second image's, which is why the resulting max_size ends up being the same shape as the first tensor. If I step over this and print max_size, we get, as I said, the smallest box that encompasses all of these spatial extents: we basically take a max across all of the images in the batch, per dimension, and that gives the final max_size. Then we form the batch shape into which we'll copy all of the individual images, so we end up with a batch shape of two followed by that max size, and we extract the batch size, the channel count, the height, and the width. We grab the dtypes and devices, nothing fancy, and we create the container tensor, initialized with zeros, with that batch shape. Next, we form a mask that will later tell us which tokens we need to ignore when we pass them through the transformer, because the second image is smaller, so some of the tokens coming from its padded region will have to be ignored; that's why we have a mask, and it's initialized with all ones (True). Now we iterate through the tensor list (the list containing our images) and through the container tensor, copy each image into the container, and fill the mask with False over the valid region, so that the parts that remain True are exactly the parts that had to be padded for that image. For the first image everything ends up False, because the bigger image has the same shape as the container; if I print the mask you can see all entries are False, which makes sense. Stepping through again for the second image, the mask is a bit different: some entries are True, because we do need padding for the second image, which is smaller than the container tensor. That's the idea, and that's it: we've successfully created the container tensor, stored our image data inside, and built the mask that knows which tokens to ignore later during the forward pass. The NestedTensor itself is a simple object that just stores the tensors and the mask, nothing fancy. That's what the collate function does, and now we can return to the main training loop. Skipping ahead to where we were: samples contains the nested tensor we just created. If I print the type of samples, it's a NestedTensor; samples.tensors has the shape containing our images, and samples.mask shows we have a
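Condensed into code, the collate function and the padding logic look roughly like this (a simplified sketch of DETR's utilities, not the exact code):

```python
import torch

class NestedTensor:
    """Minimal stand-in for DETR's NestedTensor: a tensor plus its padding mask."""
    def __init__(self, tensors, mask):
        self.tensors, self.mask = tensors, mask

def nested_tensor_from_tensor_list(tensor_list):
    # the smallest (C, H, W) "container" that fits every image in the batch
    max_size = [max(sizes) for sizes in zip(*[img.shape for img in tensor_list])]
    batch_shape = [len(tensor_list)] + max_size             # e.g. (2, 3, 768, 1151)
    b, c, h, w = batch_shape
    tensor = torch.zeros(batch_shape, dtype=tensor_list[0].dtype)
    mask = torch.ones((b, h, w), dtype=torch.bool)          # True = padding, ignored later
    for img, pad_img, m in zip(tensor_list, tensor, mask):
        pad_img[:, :img.shape[1], :img.shape[2]].copy_(img) # copy the image into the container
        m[:img.shape[1], :img.shape[2]] = False             # mark the valid (non-padded) region
    return NestedTensor(tensor, mask)

def collate_fn(batch):
    # [(img1, tgt1), (img2, tgt2)] -> (NestedTensor of images, (tgt1, tgt2))
    batch = list(zip(*batch))
    batch[0] = nested_tensor_from_tensor_list(batch[0])
    return tuple(batch)
```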
mask stored as well. The mask obviously doesn't need a channel dimension, because we only care about masking out certain spatial positions in the image space. The next two lines just push the labels and the images to the GPU, if you have one, and now comes the fun part: this is literally the most important part of the code base. Inside train_one_epoch, these two lines are the gist of what's going on: we do the forward pass, which gives us the bounding box predictions, and then the criterion does the Hungarian matching followed by the loss calculation. Let's see how that works. We're in DETR's forward pass; because samples is already a nested tensor, we skip the conversion, and the first step is to take the samples and pass them through the backbone. First thing: remember the Joiner object. self[0] is simply the backbone, the ResNet-50 (if I print its type, it's a Backbone), and self[1] does the positional encoding: it's PositionEmbeddingSine, the same kind of sinusoidal positional embeddings as in the original transformer paper. So we're passing the nested tensor (the type annotation says so) through the backbone; let's look at the backbone's forward. In BackboneBase we extract the tensors, i.e. the images, and pass them through the body, which, as you recall, fetches the features from layer4 of the ResNet-50. We end up with xs, which has a single key, the '0' key we specified earlier, so xs['0'] gives us the features. They have 2048 channels and a visibly reduced spatial extent: we started from two images of shape (3, H, W) at the original resolution, and we got more channels and a smaller spatial resolution, which is exactly what you expect from a convolutional network such as ResNet-50. Next, we grab the features from xs and take the mask, and here's what we do with it: we downsample the mask so that it's no longer at the spatial resolution of the original image but at the spatial resolution of the image features that came out of the ResNet-50. Remember, we want to know which of those image features to mask out, because we're going to pass them through the transformer later; that's why we do this step. Right now the mask still has the original image resolution.
After the downsampling we end up with a mask of shape (2, 19, 29), the same resolution as the image features that came out of the ResNet-50. If I print the mask, again everything is False for the first image and some entries are True for the second. Finally, we store the image features and the associated downsampled mask into a nested tensor and return that in the out dictionary. That's the first step of the backbone. Now we just need to calculate the positional embeddings. We grab the image features and the downsampled mask and store them in the out list; to remind you, x.tensors contains the image features and x.mask contains the downsampled masks. Then there's the positional encoding calculation itself, nothing too fancy: we give it x and it does its magic, a bunch of code, but it's just your sines and cosines computation, so that we end up with positional embedding vectors of shape (2, 256, 19, 29): 2 because the batch dimension is two, the same spatial resolution as the image features, and 256 because that's the hidden dimension the transformer will use. I encourage you to check out that code if you're interested, but I'll treat it as a black box for now. We store that in the pos list and return it. So that's the first step of DETR's forward pass: we get back the image features, the positional vectors, and the mask. Next we decompose the nested tensor into src and mask: src is the image features and mask is the downsampled mask. Now the interesting part: src has 2048 channels at the downsampled resolution, and after we apply the input projection, the 1x1 conv I explained earlier, the number of channels drops to 256. We then pass src and the mask (which go to the encoder), the query vectors (which go to the decoder), and the positional encodings into the transformer. Stepping in, we hit the transformer's forward, and you can see that src indeed now has 256 channels, the consequence of applying that convolutional layer. Next we do some flattening and permutation to get a shape suitable for the transformer; this is what I mentioned before: you take the volume and, in raster order, flatten it into a sequence of tokens. The shape becomes (551, 2, 256): the number of tokens, the batch size, and the hidden dimension.
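Here is a small sketch of that reshaping step, gathered into one helper; the argument names are illustrative stand-ins for the variables we've been stepping through:

```python
import torch
import torch.nn.functional as F

def prepare_transformer_inputs(features, mask, pos_embed, input_proj, query_embed):
    """Sketch of the reshaping DETR does before calling the transformer."""
    # downsample the padding mask to the resolution of the backbone features
    mask = F.interpolate(mask[None].float(), size=features.shape[-2:]).to(torch.bool)[0]
    src = input_proj(features)                # (2, 2048, 19, 29) -> (2, 256, 19, 29)
    bs = src.shape[0]
    src = src.flatten(2).permute(2, 0, 1)     # (551, 2, 256): tokens, batch, channels
    pos = pos_embed.flatten(2).permute(2, 0, 1)
    mask = mask.flatten(1)                    # (2, 551): one entry per image token
    query = query_embed.weight.unsqueeze(1).repeat(1, bs, 1)   # (100, 2, 256)
    tgt = torch.zeros_like(query)             # decoder input tokens start as all zeros
    return src, mask, pos, query, tgt
```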
It's a somewhat odd permutation, with the sequence dimension first instead of the batch dimension, but that's what the original PyTorch transformer implementation expects, for optimization reasons. We do the same for the positional embeddings; if I print their shape, it's exactly the same as src, because we want to add them up later. We do a similar thing with the query embeddings: initially they are (100, 256), and we add a dimension at position one and repeat it for however many images are in the batch, so we end up with (100, 2, 256). We flatten the mask so it can be passed into the transformer, and finally we form the target: src goes to the encoder, while the targets are the tokens passed into the decoder part of the transformer, and the query vectors will be used as the positional encodings that get summed with those target tokens; we'll see that in a second. So now we have the encoder part and the decoder part; let's step through them. I'll add a breakpoint in forward_post, because layer norm is applied after the residual connection, not before, so this post-norm version of the function is the one that gets used; let me also add a breakpoint in the layer itself. Stepping over: this is the encoder layer, applied six times (if I print the length, there are indeed six of them), and we pass in the source, which is the flattened image tokens, the mask we got as output from the backbone, and the positional encodings. Inside the layer: we sum the positional encodings with the image tokens, and that gives the queries and the keys; then we just do self-attention. The only difference from a vanilla layer is that the value vectors do not contain the positional encodings. Then there's the part where we mask things out: the padding mask is mostly False, but some entries are True, which means those tokens are ignored because that part of the image is non-existent. Let me quickly explain what I mean: remember we passed two images into DETR; the second image was smaller, so all of the tokens coming from the padded region have to be ignored, and that information is stored in this mask, which is set to True for exactly those image tokens. That's the visualization I have in my mind when I think about it.
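A compact sketch of that post-norm encoder layer, with the pieces just described (positions added to queries and keys only, values left as-is, padding mask passed to the attention):

```python
import torch
from torch import nn

class EncoderLayerSketch(nn.Module):
    """Post-norm DETR-style encoder layer, roughly what forward_post does (simplified)."""
    def __init__(self, d_model=256, nhead=8, dim_feedforward=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
        self.linear1 = nn.Linear(d_model, dim_feedforward)   # inverse-bottleneck MLP
        self.linear2 = nn.Linear(dim_feedforward, d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src, src_key_padding_mask=None, pos=None):
        q = k = src + pos                      # positions go into queries/keys, not into values
        src2 = self.self_attn(q, k, value=src,
                              key_padding_mask=src_key_padding_mask)[0]
        src = self.norm1(src + src2)           # post-norm: LayerNorm after the residual
        src2 = self.linear2(self.dropout(torch.relu(self.linear1(src))))
        return self.norm2(src + src2)
```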
Stepping through, that's a single step of self-attention of the image tokens over each other; the rest is just your regular transformer logic, the inverse-bottleneck MLP, nothing fancy, so I won't lose time on it. Let's continue with the decoder portion. Pressing F5, we end up with the memory, which has the same dimensionality as the input source tokens: the source tokens were (551, 2, 256), and the memory has the same shape. In terms of the figure, the memory is the set of tokens at the top of the encoder: we started with image features and ended up with a tensor of the same dimensionality as those input features. Now we pass through the decoder part of the transformer. We pass the target, which is initially all zeros; the memory, the output of the encoder; the mask, so we know which tokens not to attend to when doing the cross-attention; the positional embeddings; and the query vectors. Basically everything in the figure, the object queries, positional encodings, memory, and the target tokens (not displayed there, but all zeros initially), gets passed into the decoder. Let me add a breakpoint and step into a particular decoder layer. We have the query positional embeddings, i.e. those query embedding vectors, and we add them to the target tokens, which are all zeros initially, so the queries and keys end up equal to the query embedding vectors; again, there are a hundred of them, and printing the shape gives (100, 2, 256). Then we apply self-attention, so those query vectors attend to each other, nothing super fancy. The interesting part of this decoder layer is the cross-attention module: as the queries we use the target tokens plus the query positions, as the keys we use the memory (the output of the encoder) plus the positional encodings, and as the value vectors we use the memory vectors themselves. So we combine the outputs of the encoder to form our new representations, tgt2. And again we pass the padding information, the masking, because we do not want to attend to all of the encoder outputs: some of them are invalid, since they stem from parts of the image that were just padding (black, if you look at it in image space). Then there's the regular transformer MLP part, and that's pretty much it. We repeat this six times and, as you can see, store the intermediate representations for the auxiliary loss I mentioned.
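The decoder layer's two attention blocks, written out as a sketch; `layer` is assumed to be a module with the self_attn / multihead_attn / norm submodules described above, and the feed-forward block is left out:

```python
def decoder_layer_forward(layer, tgt, memory, pos, query_pos, memory_key_padding_mask):
    """Sketch of a post-norm DETR decoder layer, focusing on the two attention steps."""
    # 1) self-attention over the 100 query tokens (tgt is all zeros in the first layer)
    q = k = tgt + query_pos
    tgt = layer.norm1(tgt + layer.self_attn(q, k, value=tgt)[0])
    # 2) cross-attention: queries from the decoder side, keys/values from the encoder memory
    tgt2 = layer.multihead_attn(query=tgt + query_pos,
                                key=memory + pos,       # encoder output + positional encodings
                                value=memory,           # values are the raw encoder output
                                key_padding_mask=memory_key_padding_mask)[0]
    tgt = layer.norm2(tgt + tgt2)
    # 3) the usual feed-forward block (and its norm) would follow here
    return tgt
```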
Let me now step over and get to the next part; if I press F5 and remove the breakpoint, we've finished the decoder and have the decoder output. Let me show its shape: (6, 2, 100, 256). Why does this make sense? 100 because we have a hundred query embedding vectors, 2 because the batch dimension is two, 256 because that's the dimensionality of the embedding vectors throughout both the decoder and encoder stacks, and 6 because we have six decoder layers and we store all of the intermediate representations, since we'll apply the auxiliary loss a bit later. This is hs, which we use after a transpose operation. Then we apply class_embed, which maps hs from 256 dimensions into 92, the number of COCO classes plus the "no object" class; printing the output shape confirms the 92. Then bbox_embed, the MLP, gives us the bounding box coordinates, so the last dimension of its output is 4. Finally we grab only the output from the last decoder layer for the main predictions and store all of the other intermediate ones, excluding the last, under the aux_outputs key, so the output dictionary has the logits, the boxes, and that list of intermediate outputs. I know this is a lot of details, so do let me know if you're getting anything out of this video, and if you've watched all the way to here, congrats; this is intense, and I'd be happy to hear feedback from those of you who stuck with it. That was one of the most important parts of the code base; now comes the second part, which is much easier, and we'll soon finish up: the Hungarian matching algorithm and the loss calculation. Let's dig in. We fetch just the outputs from the last decoder layer, ignoring aux_outputs for now, and then comes the matcher, the Hungarian matching algorithm, which has fairly simple logic. First we grab the batch size and the number of queries, which are 2 and 100. Then we flatten the prediction logits: their shape was (2, 100, 92), we flatten the first two dimensions and apply a softmax instead of keeping unnormalized logits, so we end up with (200, 92). We do the same for the bounding boxes, giving (200, 4). So we've flattened across the batch and across the query tokens, and now we can compare against the ground truth. For the ground truth, we iterate through the targets dictionary that contains the annotations.
From those annotations we grab the labels and concatenate them; we have two images, and we end up with 16 labels, because there are 2 labels for the first image and 14 for the second, meaning 2 ground-truth bounding boxes for the first image and 14 for the second. We do the same for the ground-truth boxes, so their shape is (16, 4). Now for the fun part. We take those 16 ground-truth class indices, which tell us, as you can see on the screen, that the first box's class is 7, the second is 1, then 53, 53, 53, etc. For all 200 boxes we predicted (for the batch of two), we take the predicted probabilities of exactly those 16 ground-truth classes and gather them into the cost_class matrix. So we start from out_prob with shape (200, 92) and end up with a (200, 16) subset; we'll see in a moment why that makes sense. Then we calculate the distance between all of our predicted boxes and the target boxes; this is just an L1 distance, nothing fancy, and again we end up with a (200, 16) matrix containing the distance between each of our 200 predictions and each of the 16 ground-truth boxes, so we know how far apart each pair is, like a fully connected bipartite graph. Stepping over, you can confirm the shape is (200, 16). Quickly, what this distance does: it's the L1 distance between two bounding boxes. Imagine one box represented in the xyxy format as the vector (0, 1, 1, 0), and a second box represented as (2, 1, 3, 0). To compute the L1 distance you subtract the two vectors and take absolute values, which gives (2, 0, 2, 0), and then sum the entries, so the distance between these two boxes is 4. We do this for every pair of predicted and ground-truth boxes; that's the whole logic. Then there's the generalized intersection-over-union term, whose implementation I'll skip, but it's the same IoU-style logic I mentioned an hour ago, and it gives another (200, 16) cost matrix. Finally we sum up all of these matrices with their associated weights: 5 for the box distance, 1 for the class term, and 2 for the GIoU term.
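Gathered into one place, the matcher's forward looks roughly like this; box_cxcywh_to_xyxy and generalized_box_iou are DETR's box_ops helpers, and the per-image split and assignment at the end are what's described next:

```python
import torch
from scipy.optimize import linear_sum_assignment
from util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou  # DETR repo helpers

@torch.no_grad()
def hungarian_match(outputs, targets, w_class=1, w_bbox=5, w_giou=2):
    """Sketch of the Hungarian matcher's forward pass (simplified)."""
    bs, num_queries = outputs['pred_logits'].shape[:2]           # 2, 100
    out_prob = outputs['pred_logits'].flatten(0, 1).softmax(-1)  # (200, 92)
    out_bbox = outputs['pred_boxes'].flatten(0, 1)               # (200, 4)

    tgt_ids = torch.cat([t['labels'] for t in targets])          # (16,)
    tgt_bbox = torch.cat([t['boxes'] for t in targets])          # (16, 4)

    cost_class = -out_prob[:, tgt_ids]                           # (200, 16): high prob -> low cost
    cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)             # (200, 16): L1 box distance
    cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox),
                                     box_cxcywh_to_xyxy(tgt_bbox))

    C = w_bbox * cost_bbox + w_class * cost_class + w_giou * cost_giou
    C = C.view(bs, num_queries, -1).cpu()                        # (2, 100, 16)

    # one assignment problem per image: split the columns by the number of ground-truth boxes
    sizes = [len(t['boxes']) for t in targets]                   # e.g. [2, 14]
    return [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]
```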
We could focus only on which predicted box is the closest, but then if the boxes are close yet the class is completely off, is that a good match? Let me show you what I mean by that. Imagine this is a ground truth box here, and imagine we have one predicted box like this: it has significant overlap, but its class is maybe cat while the class of the ground truth is maybe dog. Now assume we have a different predicted box that has a bit less overlap but the correct class, cat. So which one of these is the better match? There is no single correct answer, and that's why we have to combine the intersection over union, the distance, and the class to form the final cost matrix, and then solve what is basically a graph flow problem to find the optimal assignment. So here we finally have the cost matrix, and then we just change the shape of that matrix; let me show you here, so we get 2 by 100 by 16. Now we collect the sizes, because we want to separately match the predicted bounding boxes from the first element of the batch with the ground truth boxes from the first element of the batch, and similarly for the second one; those are completely independent problems, so that's why we have to do this separation. Sizes is just going to be 2 and 14, because we have two bounding boxes for the first image and 14 for the second one. And now we just do this split over sizes, which is basically going to split this into the two separate problems I mentioned before, of shapes 100 by 2 and 100 by 14 once we pick out the right batch element. Let me see whether this is going to work: if I print this and assign it to a and b, let's print the shape of a and the shape of b; as you can see here, this is what we get. After splitting you can see that we have this indexing by i, which is going to be 0 in the first iteration and then 1 in the second one, so that means we're grabbing only the zeroth batch element here and the first one here, and then basically solving the assignment problem on a 100 by 2 cost matrix and a 100 by 14 cost matrix. It's literally a minimal-cost flow type of algorithm: that's what this linear_sum_assignment is going to find, it's going to find such a choice of edges in the bipartite graph as to minimize the total cost. What I mean by that: let's for example take the 100 by 14 one. We have a matrix that's 100 by 14, and a row of it tells you the cost between that particular predicted box and all of the 14 ground truth boxes, so you have the cost for each pair. We can visualize this matrix in the following way: imagine you literally have 100 nodes here and only 14 nodes here, and we have a fully connected bipartite graph, all of the edges connecting the two sides. The idea is to find such a set of edges that minimizes the cost over this bipartite graph, hopefully that makes sense, plus there is a constraint that you cannot pick edges that go into the same node, and you have to cover all of the ground truth nodes, so you have to cover both this node and all of the other ones. By doing that, for each of those nodes you literally find some ideal match.
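As a concrete illustration of that split-and-assign step, here is a minimal sketch using SciPy's linear_sum_assignment on a random cost matrix with the same shapes as in this example. The numbers are made up; the point is only the reshape, the split over sizes, and the per-image Hungarian matching.

```python
import torch
from scipy.optimize import linear_sum_assignment

bs, num_queries = 2, 100
sizes = [2, 14]                                  # ground-truth boxes per image
C = torch.rand(bs * num_queries, sum(sizes))     # (200, 16) total cost matrix
C = C.view(bs, num_queries, -1)                  # (2, 100, 16)

indices = []
for i, c in enumerate(C.split(sizes, -1)):       # chunks of shape (2, 100, 2) and (2, 100, 14)
    # c[i] keeps only the costs for image i, e.g. (100, 2) or (100, 14), and the
    # Hungarian algorithm picks the cheapest one-to-one assignment for that image.
    pred_idx, tgt_idx = linear_sum_assignment(c[i].numpy())
    indices.append((pred_idx, tgt_idx))

print(indices[0])   # which of the 100 predictions got matched to the 2 targets of image 0
```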
So maybe for this one you found that this one is ideal, and for this one some of these here is the ideal one. That's the idea, that's the Hungarian matching algorithm; if you just think about it a little bit, this should make sense, it's not that hard. Okay, let's get back to the code. After we run that, we end up with a set of indices, and you can see what we get here: basically the predicted bounding box 17 is matched with the ground truth bounding box 1, the predicted box 66 is matched with the ground truth box 0, and all of this is for the first image in the batch; then for the second image we have 4 and 11 matched, 7 and 1, 26 and 10, etc., etc. So we got our matching; that's it, that's the Hungarian matching, that's the second component, hopefully this was clear enough. Let's continue on. Now we just need to calculate the losses and we are pretty much done, that's everything there is to DETR. So we first calculate the number of boxes so we can use it for normalization, nothing fancy there; it's going to be 16 for our particular batch. And now we do the loss computation, so let's dig into this. We have three losses we're calculating: one is labels, one is boxes (we'll see that boxes actually encompasses two losses), and then cardinality, which is not a loss per se, but that's just an implementation detail. Let's dig into this. So loss_labels is the first loss. What do we do here? We grab the predicted logits, so that's going to be the output of our decoder stack, and the shape is going to be this one: we have 100 tokens, 92 logits each, and a batch size of 2. We grab this and call this function, which is basically going to give us how we should index this output so as to get those boxes we just found using the Hungarian matching algorithm, so the predicted boxes that are matched to the ground truth boxes; that's what this idx index is going to contain. You can see here boxes 17, 66, all the way through 99; those are the ones that were matched with our ground truth boxes. Next up, we take the indices we got from our Hungarian matching algorithm and focus just on the ground truth bounding box indices; let me remind you, so these are the ground truth bounding box indices, we're just going to focus on this one and on this one. Why we do that: this is just going to be a permutation for the labels, because we now want to permute the ground truth labels so that they exactly line up with the predicted boxes that were matched to those ground truth boxes. So let me go through that; we end up with 16 classes there, permuted in exactly the right manner. Next up we create this tensor of target classes, which is going to be of shape 2 by 100, and it's basically going to contain 91 everywhere, which means that, for now, these are the classes that our output bounding boxes should have; but then we just copy over the actual target classes for the matched predictions. So now let me print the target classes here, and you can see that some of them are now correctly... whoops, what have I done, man, let me just do that again and not click anything. So let me show you what we have here: now you can see that for the matched boxes, let's find the first one, so 53, counting 0, 1, 2, 3, 4, the predicted box at index 4 should be of class 53, because that predicted box is matched with a certain ground truth box.
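To make that target-classes construction concrete, here is a minimal sketch, with made-up matched indices and labels, of filling a tensor with the no-object class, scattering in the permuted ground truth classes, and applying a weighted cross entropy. The 0.1 no-object weight and the tensor names are illustrative, not the repo's exact code.

```python
import torch
import torch.nn.functional as F

# pred_logits: (batch=2, queries=100, classes=92), where class 91 is "no object".
pred_logits = torch.randn(2, 100, 92)
num_classes = 91

# Matched (prediction indices, target indices) per image, as the Hungarian step would return
# them; the concrete numbers here are invented for the example.
indices = [(torch.tensor([17, 66]), torch.tensor([1, 0])),
           (torch.tensor([4, 7, 26]), torch.tensor([0, 2, 1]))]
tgt_labels = [torch.tensor([7, 1]), torch.tensor([53, 53, 15])]   # ground-truth classes per image

# Start with "no object" (91) everywhere, then write the permuted ground-truth classes
# into the slots of the matched predictions.
target_classes = torch.full(pred_logits.shape[:2], num_classes, dtype=torch.long)  # (2, 100)
for img, (pred_idx, tgt_idx) in enumerate(indices):
    target_classes[img, pred_idx] = tgt_labels[img][tgt_idx]

# Down-weight the ubiquitous no-object class in the cross entropy.
class_weights = torch.ones(num_classes + 1)
class_weights[num_classes] = 0.1
loss_ce = F.cross_entropy(pred_logits.transpose(1, 2), target_classes, weight=class_weights)
print(loss_ce)
```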
Okay, that's the basic idea. Again, I'm just going to kind of skim through this, this is a lot of details already, but hopefully it gives you some understanding. Now we just do the cross entropy loss, which is going to push the probability of each of these target classes towards 1, and we also have this weighting over the different classes such that the no object class, which is this 91 thing, has a lesser weight in the cross entropy, which makes sense because you can see how many of these there are. Okay, let me quickly show you in OneNote what this exactly means, just to make it a bit clearer. Imagine we have our output here; let me just change the color and first erase this thing. So this is our output, we have 100 output tokens here, I'm going to picture that a bit differently, so imagine we have 100 outputs here and I'll just draw maybe four of them. And imagine we have two ground truth bounding boxes, one blue one and one green one. Now imagine that we matched this output with this box, and that we matched maybe the second output with that box. If the class for this one is 23 and the class for this one is 15 (let me change the color), then for the one whose class is 15 we want to make sure that, over its 92 logits, we push the distribution so that we have a peak around index 15, because that's the ground truth class. That's what this loss enforces: it pushes the logit distribution for this particular output token, because remember we have output tokens which we then map into both the boxes and this class distribution, so that we predict the correct class. Similarly here, we'll make sure that for this particular token the distribution has a peak at 23. That's what this loss is about, hopefully this helps a little bit. Okay, let's continue on and see the second loss. The second loss is the boxes loss, so I'm just going to quickly go through this. We get the predicted boxes, we get the target boxes, we calculate the loss between the source boxes and the target boxes by doing the L1, which I already explained in OneNote, and then we do some normalization with the number of boxes. We do a similar thing here with the generalized box IoU, that's the second component of the loss, and we again do some normalization. Those are the two additional losses that we have, that's it. And the final one is this cardinality, which just tells us how many errors we make when it comes to predicting the no object boxes. So basically it tells us, out of each of the hundred predictions, how many of them were different from the no object class; that's what this thing here does. Let me show you, let me print this here: you can see that, because the model is still not trained, pretty much all of these, i.e. none of these, predict the no object bounding box, because currently it's just kind of random. So if I were to print this thing, let me see what we get; as you can see, none of these is predicting 91, which is the no object class, and that's why we get 100 and 100 here. And now we just do the L1 loss between that and the target lengths, so we expect 16 boxes, 2 and 14, and that's it, we're just calculating that type of discrepancy between the ground truth and the predictions, nothing fancy there.
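Here is a minimal sketch of those two box losses and of the cardinality error, using torchvision's generalized_box_iou as a stand-in for the repo's own box helper and random boxes in xyxy format; the shapes and numbers mirror this example but are otherwise made up.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou

num_boxes = 16                                   # total ground-truth boxes in the batch
src_boxes = torch.rand(num_boxes, 4)             # matched predictions, (x1, y1, x2, y2)
tgt_boxes = torch.rand(num_boxes, 4)
# make the boxes valid (x1 < x2, y1 < y2) so the GIoU computation is well defined
src_boxes[:, 2:] = src_boxes[:, :2] + 0.1
tgt_boxes[:, 2:] = tgt_boxes[:, :2] + 0.1

# L1 box loss, normalized by the number of ground-truth boxes.
loss_bbox = F.l1_loss(src_boxes, tgt_boxes, reduction="none").sum() / num_boxes

# GIoU loss: 1 minus the diagonal of the pairwise generalized IoU matrix, also normalized.
loss_giou = (1 - torch.diag(generalized_box_iou(src_boxes, tgt_boxes))).sum() / num_boxes

# Cardinality "error": how far the number of non-"no object" predictions is from the true
# number of boxes per image (just logged, not backpropagated).
pred_logits = torch.randn(2, 100, 92)            # untrained model -> essentially random
num_pred = (pred_logits.argmax(-1) != 91).sum(1) # e.g. tensor([100, 100]) before training
tgt_lengths = torch.tensor([2, 14])
card_err = F.l1_loss(num_pred.float(), tgt_lengths.float())
print(loss_bbox, loss_giou, card_err)
```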
That's it, guys. We just repeat the same procedure for all of the other five layers of the decoder, I'm just going to skip all of that. Okay, here we are, those losses are now computed, and finally we just use that weight dictionary to sum everything up: for all of the losses, as you can see here, we go through the loss dictionary, and if we have a corresponding weight in our weight dictionary, we multiply the two and sum them up, and we get the total loss. You can imagine that this is probably the most complex loss I've seen in my life; let me check the length of this thing, the number of keys here: 18, so we have 18 losses in total being computed. That's it, and after this it's just some boilerplate, to be honest: we grab the final loss, we do zero grad to clean the gradients of the associated parameters in the torch computational graph, we do the backward, which computes the gradients for the loss we just computed, we do some gradient clipping with a 0.1 max norm, and we do a step of the optimizer. Guys, that's it. This is the longest video I've ever made, I'm experimenting, and I am super curious to know how many of you followed through the whole video. This is probably the last video I'll make that is this long, unless I get very good feedback on this one, because it takes a lot of time to prepare these and a lot of time to film and edit super huge videos such as this one. In any case, I really hope you learned something new. I would super appreciate your feedback, do let me know if you liked this video, please share it out, that's the best way you can help out this channel, and finally subscribe to this channel if you haven't. In any case, until next time, bye bye.
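For completeness, here is a minimal sketch of that final weighting-and-step boilerplate, with a toy model and placeholder loss names and weights rather than the repo's actual dictionaries.

```python
import torch

model = torch.nn.Linear(4, 4)                       # toy stand-in for the DETR model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

out = model(torch.rand(8, 4))
# Placeholder losses and weights, just to show the aggregation pattern described above.
loss_dict = {"loss_ce": out.mean() ** 2, "loss_bbox": out.abs().mean(), "loss_giou": out.std()}
weight_dict = {"loss_ce": 1.0, "loss_bbox": 5.0, "loss_giou": 2.0}

# Sum only the losses that have a corresponding weight in the weight dictionary.
losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict if k in weight_dict)

optimizer.zero_grad()                               # clean the old gradients
losses.backward()                                   # compute gradients for the summed loss
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)   # clip with 0.1 max norm
optimizer.step()                                    # optimizer step
```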
[{"start": 0.0, "end": 4.12, "text": " What's cracking guys? This is the second video in the machine learning coding"}, {"start": 4.12, "end": 9.200000000000001, "text": " series and I'll be covering Facebook Deter, so that's a short for Detection"}, {"start": 9.200000000000001, "end": 15.4, "text": " Transformer and some of you might recall that I've already covered the paper in"}, {"start": 15.4, "end": 20.68, "text": " great depth I think a year ago, so if you're not familiar with what Deter is"}, {"start": 20.68, "end": 26.28, "text": " maybe consider checking out that video first, but I think this one will be like"}, {"start": 26.28, "end": 31.96, "text": " self-sufficient as well. So as I said it's a model for object detection and"}, {"start": 31.96, "end": 36.56, "text": " just for those of you who are new to the channel, object detection is this"}, {"start": 36.56, "end": 41.04, "text": " task where you have some salient objects in the image, as you can see we have some"}, {"start": 41.04, "end": 45.0, "text": " cats and remote controllers here and you can see that we have a bounding box"}, {"start": 45.0, "end": 49.6, "text": " around this cat here and we also have a label associated with the bounding box"}, {"start": 49.6, "end": 53.96, "text": " similarly for the remote controller here, for the second one here and for the"}, {"start": 53.96, "end": 58.96, "text": " other cat this blue box and finally we can see a purple box around this"}, {"start": 58.96, "end": 63.92, "text": " whole image and basically the class being couch which is the correct class"}, {"start": 63.92, "end": 69.2, "text": " in this particular case. So what I've done is I went ahead and basically"}, {"start": 69.2, "end": 74.24000000000001, "text": " cloned the official repo and as I said I'm gonna walk you through the code base"}, {"start": 74.24000000000001, "end": 81.72, "text": " in a lot of details. Before we go there I want to quickly walk you through the"}, {"start": 81.72, "end": 85.52, "text": " pipeline on the high level so that you have a mental model going into this"}, {"start": 85.52, "end": 90.92, "text": " video. The model itself is fairly simple and we'll be focusing on three"}, {"start": 90.92, "end": 94.84, "text": " components in this particular video. So first is understanding how the actual"}, {"start": 94.84, "end": 100.92, "text": " architecture works. So how do we basically, let me take a pen here, so how"}, {"start": 100.92, "end": 105.64, "text": " do we do this transformation here? How do we go from the image all the way to this"}, {"start": 105.64, "end": 113.16, "text": " set of box predictions? So understanding the architecture of the actual model. The"}, {"start": 113.16, "end": 118.16, "text": " second part is given the predictions, given the the bounding boxes predictions"}, {"start": 118.16, "end": 123.72, "text": " and the associated classes, how do we match those with the set of ground truth"}, {"start": 123.72, "end": 128.68, "text": " bounding boxes here? Okay so basically what this model outputs as we'll soon"}, {"start": 128.68, "end": 135.2, "text": " see is, and this is a hyper parameter but the model they've used 100 as the"}, {"start": 135.2, "end": 139.04, "text": " as the default, so that means we are always outputting 100 bounding boxes and"}, {"start": 139.04, "end": 144.39999999999998, "text": " so we'll have two vectors as the output from our model. 
One is going to be a"}, {"start": 144.39999999999998, "end": 149.23999999999998, "text": " 104 so that's going to be our bounding boxes. Obviously the four points are"}, {"start": 149.23999999999998, "end": 154.0, "text": " enough to describe the actual bounding box. The format is I think"}, {"start": 154.0, "end": 160.2, "text": " first the center point coordinates XY and then the height and the width but"}, {"start": 160.2, "end": 164.79999999999998, "text": " that's less important. The second vector that the model will output is just like"}, {"start": 164.8, "end": 171.68, "text": " 100 basically containing the classes of those associated bounding boxes here. Now"}, {"start": 171.68, "end": 176.4, "text": " as I said the hard part will be how do we match these outputs"}, {"start": 176.4, "end": 182.36, "text": " with like correct ground truth bounding boxes? So imagine we have this"}, {"start": 182.36, "end": 188.48000000000002, "text": " red bounding box represented as this red square and consider we have, I'll"}, {"start": 188.48000000000002, "end": 193.04000000000002, "text": " just pick another color like green because I don't have yellow at disposal."}, {"start": 193.04, "end": 197.32, "text": " So imagine this yellow is this green and so the idea is how do we decide how do"}, {"start": 197.32, "end": 201.76, "text": " we know that we need to connect this one with this particular ground truth"}, {"start": 201.76, "end": 207.56, "text": " bounding box and how do we know that we need to connect this red one with this"}, {"start": 207.56, "end": 212.88, "text": " red one here? And also we need to take all of the other predictions and map"}, {"start": 212.88, "end": 217.95999999999998, "text": " them into this no object into this special class that no object class and"}, {"start": 217.96, "end": 223.08, "text": " okay so that's the step number two. And the way they do that is using something"}, {"start": 223.08, "end": 228.24, "text": " called Hungarian matching algorithm. We'll see how it works in a very like a"}, {"start": 228.24, "end": 234.0, "text": " very in a lot of details. Finally once we have the matching we now need to"}, {"start": 234.0, "end": 238.8, "text": " calculate some sort of a loss and then do the back prop. So they'll have three"}, {"start": 238.8, "end": 243.48000000000002, "text": " types of losses. The first one is gonna be the label loss so that means for this"}, {"start": 243.48000000000002, "end": 247.64000000000001, "text": " for the match boxes we want to make sure that the ground truth that the"}, {"start": 247.64, "end": 252.48, "text": " probability of the ground truth label is as high as possible. That's first loss."}, {"start": 252.48, "end": 258.28, "text": " The second loss is gonna be L1 loss. So basically how far off are our"}, {"start": 258.28, "end": 262.68, "text": " predictions from from the associated ground truth boxes. So imagine we have a"}, {"start": 262.68, "end": 269.12, "text": " prediction here and imagine we have a different prediction basically maybe"}, {"start": 269.12, "end": 274.12, "text": " sorry the ground truth prediction being here. We calculate the L1 loss and I'll"}, {"start": 274.12, "end": 279.72, "text": " get into the details of how that's calculated but basically well you just"}, {"start": 279.72, "end": 283.08, "text": " treat the bounding boxes as a four-dimensional vector and you just"}, {"start": 283.08, "end": 286.4, "text": " find the L1 between the difference of those two vectors. 
That's how we"}, {"start": 286.4, "end": 291.0, "text": " calculate L1 loss. And the final one is the generalized intersection over union"}, {"start": 291.0, "end": 296.72, "text": " which is basically a fancy version of your theme of the of the classic IOU"}, {"start": 296.72, "end": 303.68, "text": " metric. IOU is just a way to so so what it does is it finds the area of this"}, {"start": 303.68, "end": 309.04, "text": " intersection here and it basically divides that with the union so hence the"}, {"start": 309.04, "end": 313.56, "text": " intersection of a union and you can imagine that as this tends to one that"}, {"start": 313.56, "end": 316.88, "text": " means that these two boxes are completely overlapping and you can see"}, {"start": 316.88, "end": 322.72, "text": " as the less the overlap the smaller the intersection area and the and when you"}, {"start": 322.72, "end": 327.32, "text": " divide that by the actual union that goes towards zero as the boxes don't so"}, {"start": 327.32, "end": 331.84000000000003, "text": " so yeah it will be ultimately equals zero once we don't have any overlap. So"}, {"start": 331.84, "end": 335.35999999999996, "text": " that's the third type of a loss again don't worry if you didn't understand"}, {"start": 335.35999999999996, "end": 340.2, "text": " everything we'll go into a lot of details in this particular video. Okay"}, {"start": 340.2, "end": 345.08, "text": " let me just show you a bit more detailed view of the architecture here so the"}, {"start": 345.08, "end": 350.55999999999995, "text": " idea will be to encode so we'll pass the first we pass the image to a CNN we get"}, {"start": 350.55999999999995, "end": 356.52, "text": " some set of image features we are going to then basically tokenize those"}, {"start": 356.52, "end": 361.82, "text": " image features so we'll just treat them as tokens for the NLP transformer"}, {"start": 361.82, "end": 367.0, "text": " so that part is here we'll have some so so basically I said we're gonna pass"}, {"start": 367.0, "end": 370.84, "text": " them here and then encode them using the encoder stack of the transformer we'll"}, {"start": 370.84, "end": 375.52, "text": " add some spatial positional encodings and on the other hand side we have this"}, {"start": 375.52, "end": 379.2, "text": " very interesting thing going on so we have these let me just change the color"}, {"start": 379.2, "end": 385.12, "text": " so we basically have these object fairies which are going to be learnable"}, {"start": 385.12, "end": 389.4, "text": " like hundred learnable embedding vectors and it's hundred because as I said"}, {"start": 389.4, "end": 393.0, "text": " that's just a hyper parameter and because we have hundred we'll also have"}, {"start": 393.0, "end": 399.0, "text": " output hundred like bounding boxes and you can see here what happens is that in"}, {"start": 399.0, "end": 403.88, "text": " this middle lab layer of the transformer layer we have cross attention going on"}, {"start": 403.88, "end": 409.15999999999997, "text": " so we have that these object queries are basically each of them are attending to"}, {"start": 409.15999999999997, "end": 413.76, "text": " this encoded version of the image features and each of them are looking"}, {"start": 413.76, "end": 417.91999999999996, "text": " for something in the in the image so that's and that's how on the high level"}, {"start": 417.92, "end": 422.8, "text": " obviously very hand baby how this pipeline works once we get the output"}, {"start": 422.8, "end": 
428.76, "text": " set of hundred tokens here we just map them individually using MLPs or just a"}, {"start": 428.76, "end": 434.12, "text": " simple linear layer and we map so we take this token we map it into basically"}, {"start": 434.12, "end": 437.40000000000003, "text": " a class so we'll have associated class and we also map it into a four"}, {"start": 437.40000000000003, "end": 443.16, "text": " dimensional vector here and that's gonna be that's going to represent the"}, {"start": 443.16, "end": 448.72, "text": " bounding box that this particular token encodes and that's it on a very high"}, {"start": 448.72, "end": 453.74, "text": " level with this prerequisite knowledge we can now jump into the actual true"}, {"start": 453.74, "end": 459.40000000000003, "text": " piter notebook so let's see what's going on here let's start from the beginning"}, {"start": 459.40000000000003, "end": 464.68, "text": " of the notebook I'm just going to run these imports here I'm going to import"}, {"start": 464.68, "end": 468.28000000000003, "text": " the necessary libraries we're gonna be working with pytorch because this is"}, {"start": 468.28, "end": 474.96, "text": " Facebook after all or meta as they are now known to as here is a small minimal"}, {"start": 474.96, "end": 480.76, "text": " implementation of debtor it's not as complicated and it's not the same as"}, {"start": 480.76, "end": 483.52, "text": " the one they actually have in the official code base but it's good enough"}, {"start": 483.52, "end": 486.52, "text": " and gives an understanding of what's going on so I'm gonna run the cell and"}, {"start": 486.52, "end": 490.15999999999997, "text": " then explain what's going on again quickly we have a backbone"}, {"start": 490.16, "end": 498.16, "text": " resident 50 we have this a comb layer which is going to take the 2048 channels"}, {"start": 498.16, "end": 503.16, "text": " of the output embedding volume from the from the CNN and just reduce the number"}, {"start": 503.16, "end": 507.40000000000003, "text": " of dimensions to 256 and that's going to be the ultimate dimension of the tokens"}, {"start": 507.40000000000003, "end": 511.28000000000003, "text": " that we pass into the transformer so let me quickly just kind of explain what"}, {"start": 511.28000000000003, "end": 518.9200000000001, "text": " that exactly means so as I said we have an image here so this is an image we"}, {"start": 518.92, "end": 523.4799999999999, "text": " have a CNN block here I'm just gonna denote it as a box here we pass the"}, {"start": 523.4799999999999, "end": 529.7199999999999, "text": " image and out comes some some volume basically obviously the spatial so here"}, {"start": 529.7199999999999, "end": 536.12, "text": " the spatial extent was we had three channels here so this was this was three"}, {"start": 536.12, "end": 541.36, "text": " here and after encoding it through a CNN we're gonna end up with something that"}, {"start": 541.36, "end": 546.48, "text": " has like a smaller spatial extent so let me try and denote that like the"}, {"start": 546.48, "end": 551.0, "text": " following so basically we have a smaller spatial extent but then we have much"}, {"start": 551.0, "end": 556.96, "text": " more channels okay so something like this I'm really bad at drawing today so"}, {"start": 556.96, "end": 563.28, "text": " something like this basically this number of channels here is gonna be 248"}, {"start": 563.28, "end": 568.5600000000001, "text": " for resonance 50 and this is just like your your down 
sample spatial extent so"}, {"start": 568.5600000000001, "end": 575.96, "text": " this is like small H small W whereas this was like big H here and big W so the"}, {"start": 575.96, "end": 581.0, "text": " the convolutional layer I just mentioned has the kernel of size one times one"}, {"start": 581.0, "end": 585.5600000000001, "text": " which means each is just going to reduce the number of channels of this of this"}, {"start": 585.5600000000001, "end": 594.6800000000001, "text": " volume here from 2048 into I think 256 and then the idea is to just like treat"}, {"start": 594.6800000000001, "end": 599.08, "text": " these as a set of tokens so we can in a raster order we can just do the"}, {"start": 599.08, "end": 603.5600000000001, "text": " following we can just do this and we take this vector this is gonna be the"}, {"start": 603.56, "end": 608.8, "text": " first embedding vector which we pass into the transformer then we take the"}, {"start": 608.8, "end": 615.3599999999999, "text": " second one here so this is gonna be the second vector this one here is the"}, {"start": 615.3599999999999, "end": 619.8, "text": " second one and so yeah we basically do that in the raster order let me can zoom"}, {"start": 619.8, "end": 624.26, "text": " in here a little bit and let's show you what I mean exactly so this is vector"}, {"start": 624.26, "end": 629.28, "text": " number one vector number two vector number three vector number four five six"}, {"start": 629.28, "end": 634.56, "text": " etc etc and that's basically how this this part of the pipeline looks like"}, {"start": 634.56, "end": 639.24, "text": " okay let me go back to the code so that's this this what this conversion"}, {"start": 639.24, "end": 644.56, "text": " layer does then we have a simple transformer again encoder and decoder"}, {"start": 644.56, "end": 650.6, "text": " stacks and finally we have the two linear layers linear class and linear"}, {"start": 650.6, "end": 654.76, "text": " b-box I showed you what these do these take the output embeddings from the"}, {"start": 654.76, "end": 660.24, "text": " decoder and just map them into either the distribution over classes for the"}, {"start": 660.24, "end": 666.4399999999999, "text": " cocoa data set or this second one just gives us the bounding boxes for each of"}, {"start": 666.4399999999999, "end": 672.64, "text": " the output tokens okay output embedding vectors here the query at the query"}, {"start": 672.64, "end": 677.56, "text": " vectors the learnable query vectors we have hundreds of them and that's it and"}, {"start": 677.56, "end": 682.52, "text": " finally we have a row embed and call embed these are just going to be the"}, {"start": 682.52, "end": 686.8, "text": " the positional encodings okay and once we do so here is how the forward path"}, {"start": 686.8, "end": 691.4, "text": " looks like we basically pass the inputs so that's a set of images we pass that"}, {"start": 691.4, "end": 696.14, "text": " through the comp one then we apply some batch normalization ReLU max pooling"}, {"start": 696.14, "end": 701.0, "text": " whatnot and then we apply the four layers of the of the resonance 50 and"}, {"start": 701.0, "end": 708.12, "text": " finally we apply this small this this convolutional layer where we we basically"}, {"start": 708.12, "end": 713.32, "text": " reduce number of dimensions as I previously mentioned and what happens"}, {"start": 713.32, "end": 718.8, "text": " here is we construct the positional encodings then we're going to take those"}, {"start": 
718.8, "end": 723.36, "text": " positional encodings just sum them up with these tokens here so that's this"}, {"start": 723.36, "end": 728.36, "text": " part here don't worry about the actual details we'll get to those a bit later"}, {"start": 728.36, "end": 732.8, "text": " so what is important is we have some tokens from the image we add the"}, {"start": 732.8, "end": 737.04, "text": " positional encodings we pass that into the encoder stack and then through the"}, {"start": 737.04, "end": 741.68, "text": " decoder we pass the query the query embedding vectors that gives us as the"}, {"start": 741.68, "end": 746.04, "text": " output a set of embedding vectors here which we then map using the linear class"}, {"start": 746.04, "end": 751.4599999999999, "text": " and linear b-box linear layers we additionally apply sigmoid because we"}, {"start": 751.4599999999999, "end": 758.28, "text": " want to have normalized bounding box coordinates so from 0 to 1 okay that's"}, {"start": 758.28, "end": 766.52, "text": " it now let's see what happens let's just run this this cell is going to load a"}, {"start": 766.52, "end": 773.68, "text": " pre-trained data model so I'm going to just run this and here we're just going"}, {"start": 773.68, "end": 778.6, "text": " to run the this cell that has that defines the classes from the cocoa data"}, {"start": 778.6, "end": 783.24, "text": " set and some associated colors which are going to help us visualize stuff so"}, {"start": 783.24, "end": 789.64, "text": " again the number of classes in cocoa is let's see what we have here so if I add"}, {"start": 789.64, "end": 798.08, "text": " a code cell let's bring the number of classes so shape well shape will not"}, {"start": 798.08, "end": 803.64, "text": " work because this is what this is like a list so we're gonna do length is that so"}, {"start": 803.64, "end": 809.68, "text": " length and this should be I think 91 or 91 yeah 91 and then we'll add the"}, {"start": 809.68, "end": 816.48, "text": " additional no object class we'll see that a bit later okay here's some helper"}, {"start": 816.48, "end": 823.48, "text": " functions this one helps us just do some type of processing to the input images"}, {"start": 823.48, "end": 829.8000000000001, "text": " we do some resize operator then we convert the images into pie torch"}, {"start": 829.8000000000001, "end": 834.76, "text": " tensors and then we do image net normalization so these are statistics"}, {"start": 834.76, "end": 838.24, "text": " calculated from the actual image net data set this is going to be like the"}, {"start": 838.24, "end": 844.24, "text": " mean and this is going to be the standard deviation statistic of over the"}, {"start": 844.24, "end": 849.08, "text": " calculated over the image net data set these helper functions just map as you"}, {"start": 849.08, "end": 855.72, "text": " can see from the format CX CY WH which is as you specify the center coordinate"}, {"start": 855.72, "end": 859.6, "text": " of the bounding box and width and height and instead of that it's going to"}, {"start": 859.6, "end": 864.52, "text": " convert it into a format where we specify the top left point and the bottom"}, {"start": 864.52, "end": 869.04, "text": " right point and those are obviously equally informative representations"}, {"start": 869.04, "end": 873.52, "text": " that take equal amount of space it's just that this representation is a bit"}, {"start": 873.52, "end": 877.72, "text": " more suitable to some of the API's they're actually using and we'll 
see that"}, {"start": 877.72, "end": 882.8, "text": " later not that important rescale boxes so as I said we do predict the bounding"}, {"start": 882.8, "end": 886.88, "text": " boxes which are normalized which means from 0 to 1 this is just going to return"}, {"start": 886.88, "end": 893.92, "text": " a map those predictions into the original input image space like a range"}, {"start": 893.92, "end": 899.84, "text": " okay did we run the cell and just run the cell and finally we have the"}, {"start": 899.84, "end": 905.36, "text": " detection function here what we do is we pass an image we apply the transforms"}, {"start": 905.36, "end": 909.84, "text": " we just saw we unsqueeze so that we add like the batch dimension so this adds a"}, {"start": 909.84, "end": 915.5600000000001, "text": " dummy batch dimension one because that's what by torch models expect just some"}, {"start": 915.5600000000001, "end": 920.72, "text": " error checking we pass the images through the model we get the outputs and"}, {"start": 920.72, "end": 926.36, "text": " finally what we do here is we take the prediction logits we apply the softmax"}, {"start": 926.36, "end": 931.2, "text": " because remember this is just going to apply linear layer so these are still"}, {"start": 931.2, "end": 935.92, "text": " unnormalized logits so we apply softmax after we've normalized the logits using"}, {"start": 935.92, "end": 941.8000000000001, "text": " softmax we just extract all of the classes except for the no object class"}, {"start": 941.8000000000001, "end": 947.4, "text": " here that's what we have up to minus one and then what we do is we take those"}, {"start": 947.4, "end": 955.52, "text": " probabilities we find the max across all of the hundred outputs and we find those"}, {"start": 955.52, "end": 961.16, "text": " predictions where the peak of the distribution across classes is at least"}, {"start": 961.16, "end": 967.48, "text": " 0.7 in that way we kind of take out only those very confident bounding box"}, {"start": 967.48, "end": 972.4399999999999, "text": " predictions okay that's the idea there and then we use that vector the key"}, {"start": 972.4399999999999, "end": 978.0, "text": " vector to basically extract only those bounding boxes and then we just rescale"}, {"start": 978.0, "end": 983.3199999999999, "text": " those and return back the bounding boxes we additionally return the associated"}, {"start": 983.32, "end": 989.0400000000001, "text": " probabilities here so if I run this and if we run this here we're going to get"}, {"start": 989.0400000000001, "end": 992.72, "text": " as the output the scores and the bounding boxes so let's kind of quickly"}, {"start": 992.72, "end": 997.24, "text": " examine what's going on here let's see what the shapes are here so if I do"}, {"start": 997.24, "end": 1004.5200000000001, "text": " print score shape that's going to give us 591 so it's five because only five of"}, {"start": 1004.5200000000001, "end": 1012.5600000000001, "text": " the hundred bounding boxes were like confident enough and if I was to"}, {"start": 1012.56, "end": 1018.4799999999999, "text": " print the boxes this should be what this should be five four that's the"}, {"start": 1018.4799999999999, "end": 1024.24, "text": " expectation so let's run it's five four indeed so five boxes that are good"}, {"start": 1024.24, "end": 1033.6399999999999, "text": " enough so I'm gonna quickly just modify this and return back like this"}, {"start": 1033.6399999999999, "end": 1039.96, "text": " probability 
vector so this should contain if I'm not wrong so let me just"}, {"start": 1039.96, "end": 1046.16, "text": " actually return back this original output so like this and I'm just gonna"}, {"start": 1046.16, "end": 1052.56, "text": " call that logits so I'm gonna run that cell I'm gonna add a logits whoops"}, {"start": 1052.56, "end": 1061.44, "text": " variable here so just add logits logits run that and see what the shape of that"}, {"start": 1061.44, "end": 1067.8400000000001, "text": " thing is so that should be a hundred ninety one I assume right so if I were"}, {"start": 1067.84, "end": 1074.04, "text": " to run this we have 192 so it's 192 because as I said here we extract the"}, {"start": 1074.04, "end": 1081.12, "text": " no object box so if we were to take this probus whatever like it's a weird"}, {"start": 1081.12, "end": 1087.24, "text": " variable name if we were to take that variable and return it instead that's"}, {"start": 1087.24, "end": 1092.9199999999998, "text": " gonna be that's going to give us 91 instead of 92 so let me show you here"}, {"start": 1092.9199999999998, "end": 1096.8, "text": " and as you can see here that's that's pretty much it okay now we can finally"}, {"start": 1096.8, "end": 1103.36, "text": " visualize what's the output of this of this function detect function and what"}, {"start": 1103.36, "end": 1109.9199999999998, "text": " we do is we iterate over this over the probabilities over the bounding boxes"}, {"start": 1109.9199999999998, "end": 1116.0, "text": " and we just plot rectangles using the coordinates from those boxes and we"}, {"start": 1116.0, "end": 1123.6399999999999, "text": " plot the basically the most probable class from the distribution there and"}, {"start": 1123.64, "end": 1128.5600000000002, "text": " if I were to run this I mean we already had the image so let me let me show you"}, {"start": 1128.5600000000002, "end": 1132.8400000000001, "text": " this is what we get so this is the image with which we started this video I"}, {"start": 1132.8400000000001, "end": 1137.16, "text": " showed you this one in the beginning okay so far so good let's now jump to"}, {"start": 1137.16, "end": 1143.2800000000002, "text": " the second notebook this was just like a quick like a high level overview of how"}, {"start": 1143.2800000000002, "end": 1149.8000000000002, "text": " minimal implementation of a debtor could look like and it turns out it works"}, {"start": 1149.8, "end": 1154.68, "text": " already fairly fairly nicely let's now go to this one where you're gonna see"}, {"start": 1154.68, "end": 1158.76, "text": " some attention visualization so I'm gonna again run the necessary imports I'm"}, {"start": 1158.76, "end": 1165.72, "text": " gonna again run the cocoa classes here we have the same type of helper"}, {"start": 1165.72, "end": 1170.68, "text": " functions that transform functions we have the these functions that change the"}, {"start": 1170.68, "end": 1176.24, "text": " box format they do the rescaling so I'm just gonna run those again we have the"}, {"start": 1176.24, "end": 1180.04, "text": " same plot results function as from the previous two Python notebook the details"}, {"start": 1180.04, "end": 1184.1200000000001, "text": " are not important it's just like your math plotlib stuff I'm gonna focus on"}, {"start": 1184.1200000000001, "end": 1188.56, "text": " the decor and the gist and that's the actual data model and how those three"}, {"start": 1188.56, "end": 1191.24, "text": " components that I showed you in the beginning 
of the video so that the"}, {"start": 1191.24, "end": 1195.92, "text": " architecture the Hungarian matching and the loss calculation look like that's"}, {"start": 1195.92, "end": 1201.04, "text": " gonna be the focus of the video again let's load some of the pre-trained"}, {"start": 1201.04, "end": 1207.36, "text": " models here we set the model into eval mode so eval just for those of you who"}, {"start": 1207.36, "end": 1211.48, "text": " are not familiar with pytorch sets so that's very important because some of"}, {"start": 1211.48, "end": 1214.8799999999999, "text": " the layers such as batch norm behave differently whether they are in the"}, {"start": 1214.8799999999999, "end": 1221.72, "text": " depending on whether they are in the train or eval setting okay and we did"}, {"start": 1221.72, "end": 1226.52, "text": " the same thing we just load load an image here and then we have a similar"}, {"start": 1226.52, "end": 1230.96, "text": " type of a forward a prop as in the previous detect function from the"}, {"start": 1230.96, "end": 1234.48, "text": " previous notebook so we just pass the images we transform them and squeeze"}, {"start": 1234.48, "end": 1239.52, "text": " them pass through the through the model we get the outputs we do the same type"}, {"start": 1239.52, "end": 1243.56, "text": " of transform here so we get the probabilities and this time the"}, {"start": 1243.56, "end": 1249.04, "text": " threshold is a bit higher we use 0.9 here although I think I am the one who"}, {"start": 1249.04, "end": 1254.4, "text": " modified this not sure let's let's assume this is 0.9 and then we do the"}, {"start": 1254.4, "end": 1259.48, "text": " rescaling so let's see what we get so yeah for the most part this part was the"}, {"start": 1259.48, "end": 1264.16, "text": " same as in the previous notebook so we get the output predictions now that we"}, {"start": 1264.16, "end": 1271.0400000000002, "text": " have all of that calculated let's go and add some hooks to our model so what the"}, {"start": 1271.0400000000002, "end": 1276.8400000000001, "text": " hook does is as the image is passing through the actual data model we're"}, {"start": 1276.8400000000001, "end": 1283.3600000000001, "text": " going to log certain activations and the ones we are going to be interested in is"}, {"start": 1283.36, "end": 1287.84, "text": " going to be the activations from the actual backbone model so that's going to"}, {"start": 1287.84, "end": 1290.84, "text": " be the calm features let me let me show you what I mean by that so it's going to"}, {"start": 1290.84, "end": 1297.04, "text": " be we're going to log these features here better yet so we're gonna log"}, {"start": 1297.04, "end": 1303.6399999999999, "text": " actually these features here and we're going to log some of the basically"}, {"start": 1303.6399999999999, "end": 1307.1999999999998, "text": " attention coefficients I think from the let's see I think from the multi hat"}, {"start": 1307.2, "end": 1313.76, "text": " attention like the cross attention component here so let's just make sure"}, {"start": 1313.76, "end": 1318.0800000000002, "text": " that's the case so we get we take from from the encoder from the last layer of"}, {"start": 1318.0800000000002, "end": 1322.6000000000001, "text": " the encoder we're gonna grab the self attention module so that means we're"}, {"start": 1322.6000000000001, "end": 1327.4, "text": " going to grab from the last layer so as you can see we have and here I think in"}, {"start": 1327.4, "end": 1332.76, 
"text": " practice they use six so from the sixth one we're gonna grab the activations"}, {"start": 1332.76, "end": 1338.0, "text": " from this module here okay multi had self attention in particular we're gonna"}, {"start": 1338.0, "end": 1342.52, "text": " take the matrix like when you multiply query vectors with with key vectors you"}, {"start": 1342.52, "end": 1348.2, "text": " get a matrix of scores which after you which after you apply the softmax you"}, {"start": 1348.2, "end": 1351.56, "text": " get basically the attention scores so that's the one we're gonna visualize and"}, {"start": 1351.56, "end": 1356.0, "text": " the second one if I'm not wrong I think it's this one here but let's double"}, {"start": 1356.0, "end": 1362.56, "text": " check if I go back to the code here we can see we again we take the decoder the"}, {"start": 1362.56, "end": 1367.84, "text": " last layer of decoder multi had attention and I'm fairly sure that's the"}, {"start": 1367.84, "end": 1373.8799999999999, "text": " second the cross attention one so let me quickly go to the transformer file here"}, {"start": 1373.8799999999999, "end": 1379.6799999999998, "text": " let me just make sure that's the case so transformer decoder layer we are going"}, {"start": 1379.6799999999998, "end": 1386.24, "text": " to have let's see so self attention and multi head attention the multi head yeah"}, {"start": 1386.24, "end": 1391.0, "text": " the multi head attention is the second is the cross attention layer okay so we"}, {"start": 1391.0, "end": 1396.56, "text": " are basically taking the activations from this particular multi head attention"}, {"start": 1396.56, "end": 1401.88, "text": " head okay not yet but the module sorry okay let's go back that's enough"}, {"start": 1401.88, "end": 1408.96, "text": " rambling let's go back to the notebook after we attach these hooks and this is"}, {"start": 1408.96, "end": 1413.66, "text": " just your pie tour syntax I will not get into details there and we run the"}, {"start": 1413.66, "end": 1419.88, "text": " inference all of those are gonna be locked in the associated lists then we"}, {"start": 1419.88, "end": 1424.6000000000001, "text": " just remove the hooks and then we extract the zeroth element from the list"}, {"start": 1424.6000000000001, "end": 1427.92, "text": " because here as you can see we just append it so we by doing this we just"}, {"start": 1427.92, "end": 1432.8000000000002, "text": " extract the collected data so I'm gonna run this cell and let's see what happens"}, {"start": 1432.8000000000002, "end": 1438.9, "text": " okay so we end up with those shapes I'm going to quickly analyze what's going on"}, {"start": 1438.9, "end": 1446.2, "text": " so con features let's print what we get here so what I expect is I assume"}, {"start": 1446.2, "end": 1451.68, "text": " something like the following shape like maybe one to 48 and then some spatial"}, {"start": 1451.68, "end": 1455.92, "text": " extent which is gonna be I think something like age is gonna be 19 and"}, {"start": 1455.92, "end": 1459.0, "text": " double is gonna be 29 that's less important but let's kind of print this"}, {"start": 1459.0, "end": 1465.3600000000001, "text": " and see what we get okay so we get that this uh-huh oh yeah or I remember so"}, {"start": 1465.3600000000001, "end": 1469.76, "text": " there is this particular detail of the implementation so we have to actually"}, {"start": 1469.76, "end": 1477.8, "text": " index we have to index by this key and only then do we actually get the 
okay"}, {"start": 1477.8, "end": 1481.32, "text": " and then we have to do additional tensors because additional implementation"}, {"start": 1481.32, "end": 1485.0, "text": " detail because this is gonna give us nested tensor and we just want the tensor"}, {"start": 1485.0, "end": 1490.72, "text": " info so finally okay this is what we get as I said I kind of missed the small H"}, {"start": 1490.72, "end": 1496.2, "text": " and small W but that was not the point that the same overall format is as we"}, {"start": 1496.2, "end": 1504.04, "text": " expected let's copy paste this and let me do the following here I'm not sure"}, {"start": 1504.04, "end": 1507.64, "text": " whether this is gonna be nested vectors let me just try and do shapes like this"}, {"start": 1507.64, "end": 1512.4, "text": " I'm gonna comment this out comment this out see whether this works and it does"}, {"start": 1512.4, "end": 1520.04, "text": " so 850 850 that's the I said the scores but what you get by multiplying the"}, {"start": 1520.04, "end": 1526.8799999999999, "text": " queries and key vectors similarly here if I were to do this and we're gonna"}, {"start": 1526.8799999999999, "end": 1531.72, "text": " visualize the detection attention weights let's see the shape here so we're"}, {"start": 1531.72, "end": 1536.8, "text": " gonna have I guess the same exact shape oh no actually it's hundred because here"}, {"start": 1536.8, "end": 1542.24, "text": " we are in the decoder stack so because of that we have a hundred eight hundred"}, {"start": 1542.24, "end": 1548.12, "text": " fifty because we have hundred very the learnable vectors and we have eight"}, {"start": 1548.12, "end": 1553.36, "text": " hundred fifty keys from the encoder stack that's why we get this this"}, {"start": 1553.36, "end": 1558.6, "text": " different shape it all makes sense nice we've seen the shapes now let's"}, {"start": 1558.6, "end": 1562.8799999999999, "text": " visualize the attention so we're gonna in particular focus on visualizing the"}, {"start": 1562.8799999999999, "end": 1569.1599999999999, "text": " decoder attention weights and as you can see here we they just grab the height"}, {"start": 1569.1599999999999, "end": 1577.4399999999998, "text": " and width from the com features that's gonna be the 2534 here and the each way"}, {"start": 1577.44, "end": 1580.92, "text": " through the bounding boxes we previously predicted so the five bounding boxes we"}, {"start": 1580.92, "end": 1585.96, "text": " previously predicted and visualized above here so these are the two remote"}, {"start": 1585.96, "end": 1592.44, "text": " controllers two cats and a couch and they are then going to as you can see"}, {"start": 1592.44, "end": 1598.0800000000002, "text": " here grab a particular index so remember this is gonna be of the shape hundred"}, {"start": 1598.0800000000002, "end": 1605.04, "text": " because we have hundred query embeddings and 850 because we have 850 of the"}, {"start": 1605.04, "end": 1611.24, "text": " basically keys coming from the image features of the 3d encoder stack and so"}, {"start": 1611.24, "end": 1621.24, "text": " this is going to grab 850 coefficients that basically correspond to the scores"}, {"start": 1621.24, "end": 1627.36, "text": " that this particular query vector has when it attends to the keys okay just to"}, {"start": 1627.36, "end": 1633.0, "text": " quickly clarify what I mean by that is it means the following basically so"}, {"start": 1633.0, "end": 1640.68, "text": " again remember we are we have 
encoder output here so we have basically 850"}, {"start": 1640.68, "end": 1646.72, "text": " outputs here and then we have the decoder stack here and as I said we"}, {"start": 1646.72, "end": 1653.88, "text": " grab the cross attention layer and that cross attention layer has hundred"}, {"start": 1653.88, "end": 1660.32, "text": " hundred query vectors we have in total hundred query vectors there and we take"}, {"start": 1660.32, "end": 1663.56, "text": " a particular one that's associated with the bounding box we care about so we"}, {"start": 1663.56, "end": 1670.08, "text": " take for example we take this one here and we see how that query vector"}, {"start": 1670.08, "end": 1674.46, "text": " basically we do the dot product between that query vector and all of these key"}, {"start": 1674.46, "end": 1679.28, "text": " vectors and those are the 850 numbers that we're now going to visualize okay"}, {"start": 1679.28, "end": 1685.72, "text": " that's that's the basic idea so let's see what we get so after we do that and"}, {"start": 1685.72, "end": 1691.28, "text": " run this cell we have this so for this particular remote controller you can see"}, {"start": 1691.28, "end": 1699.52, "text": " that this is the pattern that that we attend to in the these are the image"}, {"start": 1699.52, "end": 1703.56, "text": " features that we attend to and you can see there is obviously a correlation"}, {"start": 1703.56, "end": 1709.06, "text": " between these two similarly for this image here similarly for the this is I"}, {"start": 1709.06, "end": 1713.6000000000001, "text": " guess the couch and so we are basically attending the ends of the image that's"}, {"start": 1713.6, "end": 1719.1999999999998, "text": " how it figures out that this is the couch that we are seeing and then you"}, {"start": 1719.1999999999998, "end": 1725.0, "text": " can see kind of a faint figure of the cat here and finally similarly here we"}, {"start": 1725.0, "end": 1729.0, "text": " can see a faint figure of the cat here okay so hopefully this gave you some"}, {"start": 1729.0, "end": 1733.9399999999998, "text": " understanding of what's what's going on and that I hope that this this image"}, {"start": 1733.9399999999998, "end": 1739.34, "text": " here helped you understand again what's going on so let me just recap one part"}, {"start": 1739.34, "end": 1745.76, "text": " so the shape here is 2534 if you multiply those two numbers you get 850"}, {"start": 1745.76, "end": 1752.6399999999999, "text": " that's why we have 850 here so 2534 so remember that we started here from the"}, {"start": 1752.6399999999999, "end": 1759.86, "text": " image features that were precisely 2534 and then we took those in a"}, {"start": 1759.86, "end": 1766.12, "text": " rest order and we kind of flattened those out and that's how we got these 850"}, {"start": 1766.12, "end": 1770.04, "text": " tokens so that means by attending to these 850 tokens you're actually"}, {"start": 1770.04, "end": 1775.08, "text": " attending to the original image but just a down sampled version that's why we are"}, {"start": 1775.08, "end": 1780.6, "text": " able to get these interesting attention maps here so that's how you connect"}, {"start": 1780.6, "end": 1789.9599999999998, "text": " these 850 with the images we get here so with these images here okay so yeah"}, {"start": 1789.9599999999998, "end": 1795.4799999999998, "text": " that's the best I can do for now but bear with me we'll get into we'll"}, {"start": 1795.48, "end": 1800.88, "text": " understand all 
of the details a bit later so what I do here is just they"}, {"start": 1800.88, "end": 1807.24, "text": " just plot the they just print these shapes and we saw all of this already so"}, {"start": 1807.24, "end": 1812.4, "text": " I'm just gonna skip through all of that because we already saw it what they"}, {"start": 1812.4, "end": 1816.68, "text": " additionally do is they take the now the encoder attention weights instead of"}, {"start": 1816.68, "end": 1823.8, "text": " decoder ones and they just reshape those such that we end up with 2534 2534"}, {"start": 1823.8, "end": 1830.36, "text": " instead of with 850 850 so this is just going to be a bit easier to understand"}, {"start": 1830.36, "end": 1837.96, "text": " what's going on when we index into this array into this tensor okay let's now"}, {"start": 1837.96, "end": 1844.54, "text": " visualize the encoder attention patterns so they grab four points from the"}, {"start": 1844.54, "end": 1851.12, "text": " original image and then using those points they're going to visualize the"}, {"start": 1851.12, "end": 1855.8799999999999, "text": " encoder attention so let's see what exactly happens here so you can see"}, {"start": 1855.8799999999999, "end": 1861.12, "text": " original image they picked four points denoted here by these red circles and"}, {"start": 1861.12, "end": 1869.8799999999999, "text": " then we visualize the where these points attend to in the original input image"}, {"start": 1869.8799999999999, "end": 1875.28, "text": " because we have the self-attention in the encoder layer which are the image"}, {"start": 1875.28, "end": 1878.84, "text": " tokens and that's why we get these image patterns and you can see the"}, {"start": 1878.84, "end": 1883.9199999999998, "text": " interesting fact about data is that the encoder already learns to do some sort"}, {"start": 1883.9199999999998, "end": 1888.9199999999998, "text": " of instance segmentation which is super useful if you later want to create a"}, {"start": 1888.9199999999998, "end": 1894.1999999999998, "text": " bounding boxes around those objects and this was not explicitly trained for so"}, {"start": 1894.1999999999998, "end": 1897.9599999999998, "text": " this is just something that emerges from the from the way that data was trained"}, {"start": 1897.9599999999998, "end": 1902.56, "text": " which is super nice so let's kind of dig through this code and understand what's"}, {"start": 1902.56, "end": 1908.32, "text": " going on it's a bit messy here so I'm gonna just keep it on a high level"}, {"start": 1908.32, "end": 1916.56, "text": " basically they create a bunch of subplots that's not that interesting and"}, {"start": 1916.56, "end": 1925.12, "text": " then the important part is let's see so the important part is this so we have"}, {"start": 1925.12, "end": 1932.2, "text": " image show of the attention pattern so we take the index so indices are again"}, {"start": 1932.2, "end": 1937.24, "text": " the points we picked right and so we grab those points those points are"}, {"start": 1937.24, "end": 1943.6, "text": " initially in the image space and so because of that we want to down sample"}, {"start": 1943.6, "end": 1948.76, "text": " because that's what the calm layer does the backbone is going to don't sample"}, {"start": 1948.76, "end": 1953.92, "text": " by 32 that's why we divide by fact and this fact is going to be equal 32 that's"}, {"start": 1953.92, "end": 1959.84, "text": " how we get the indices for the down sampled image features and then we just"}, 
{"start": 1959.84, "end": 1967.16, "text": " basically grab the points and index into this attention tensor okay and by"}, {"start": 1967.16, "end": 1971.48, "text": " doing that we basically figure out the following I'm gonna again use my one"}, {"start": 1971.48, "end": 1974.8400000000001, "text": " note because this is very hard to I realize this can be hard to understand"}, {"start": 1974.8400000000001, "end": 1980.3600000000001, "text": " but just me speaking about it so what what we've done is the following so"}, {"start": 1980.3600000000001, "end": 1987.76, "text": " again remember we have this volume here and"}, {"start": 1987.76, "end": 1996.68, "text": " so what we are going to do with it is the following so this is our volume"}, {"start": 1996.68, "end": 2003.16, "text": " these are our tokens basically so this is the volume that came out from the"}, {"start": 2003.16, "end": 2009.16, "text": " CNN so we take a particular point so let me kind of create a grid structure here"}, {"start": 2009.16, "end": 2016.12, "text": " so make it easier to visualize we take a particular point so this one here so"}, {"start": 2016.12, "end": 2023.9799999999998, "text": " that's as you can recall that's the tens that's the token that is fed into the"}, {"start": 2023.9799999999998, "end": 2029.08, "text": " encoder transformer so somewhere in the transformer we're gonna map this into a"}, {"start": 2029.08, "end": 2034.08, "text": " query vector and then we're gonna take that very vector and basically do a"}, {"start": 2034.08, "end": 2040.0, "text": " dot product with the key vectors of all of these other tokens right all of these"}, {"start": 2040.0, "end": 2044.7399999999998, "text": " other tokens and by doing that we basically understand how this image"}, {"start": 2044.74, "end": 2051.6, "text": " token is attending all of the other image tokens in this in this volume here"}, {"start": 2051.6, "end": 2056.24, "text": " so that's the idea by the way obviously this is flattened in the actual"}, {"start": 2056.24, "end": 2062.44, "text": " transformer I'm just visualizing this as in this in this spatial format just"}, {"start": 2062.44, "end": 2067.56, "text": " because it's easier to map what's happening here and between and why do we"}, {"start": 2067.56, "end": 2072.56, "text": " have the images that we get here so hopefully that was somewhat"}, {"start": 2072.56, "end": 2077.72, "text": " understandable and now this final cell just makes this basically interactive"}, {"start": 2077.72, "end": 2082.48, "text": " so you can pick your own image and do the same thing in an interactive fashion"}, {"start": 2082.48, "end": 2087.88, "text": " so if I were to run this they created this nice widget where you basically"}, {"start": 2087.88, "end": 2092.56, "text": " have let me show you what happens so you can see here you can kind of move this"}, {"start": 2092.56, "end": 2099.4, "text": " red point here and you can see how the attention patterns change so if I were"}, {"start": 2099.4, "end": 2104.1600000000003, "text": " to move this a bit more directly on to the cat well we get some weird patterns"}, {"start": 2104.1600000000003, "end": 2111.08, "text": " so apparently when we are closer to the boundary of the cat that's when we we"}, {"start": 2111.08, "end": 2118.2400000000002, "text": " get the nicest patterns of attention here so you can also play and just use"}, {"start": 2118.2400000000002, "end": 2122.2400000000002, "text": " some other image from the internet and this is gonna work so 
that's the idea"}, {"start": 2122.2400000000002, "end": 2129.04, "text": " with this with this Jupyter hopefully you got some understanding of how of the"}, {"start": 2129.04, "end": 2132.7599999999998, "text": " things that this data model is learning so you can see that we have interesting"}, {"start": 2132.7599999999998, "end": 2137.96, "text": " patterns going on in the encoder so instance segmentation types of of masks"}, {"start": 2137.96, "end": 2144.92, "text": " here and we also saw nice patterns in the actual decoder where we saw where"}, {"start": 2144.92, "end": 2150.0, "text": " the query tokens from the decoder stack are attending to in the original"}, {"start": 2150.0, "end": 2156.32, "text": " down sampled image features again quickly this is decoder so the query"}, {"start": 2156.32, "end": 2163.6000000000004, "text": " embedding vectors attending the embedding vectors from the encoder stack"}, {"start": 2163.6000000000004, "end": 2170.44, "text": " so that's this part here so again query vector attending the embedding vectors"}, {"start": 2170.44, "end": 2175.1200000000003, "text": " from the top of the encoding stack and by visualizing that you basically"}, {"start": 2175.1200000000003, "end": 2182.52, "text": " visualize the where this query token is attending to the query vector and the"}, {"start": 2182.52, "end": 2191.44, "text": " second image is about grabbing the image token and visualizing where that image"}, {"start": 2191.44, "end": 2197.68, "text": " token is attending to over the over the neighborhood so that's that's this"}, {"start": 2197.68, "end": 2202.8, "text": " particular image here so should be fairly simple and easy to understand"}, {"start": 2202.8, "end": 2206.74, "text": " cool now let's start digging through the actual code base and for that I'll"}, {"start": 2206.74, "end": 2211.2, "text": " first have to check out the main branch so I'm gonna have to check out the main"}, {"start": 2211.2, "end": 2216.8399999999997, "text": " branch because the colabs are so the the Jupyter notebooks are checked into this"}, {"start": 2216.8399999999997, "end": 2222.64, "text": " colab branch so if I do get check out main now I'll be able to find the code"}, {"start": 2222.64, "end": 2228.68, "text": " that I need so let's go back here and let's start debugging the main file so"}, {"start": 2228.68, "end": 2232.2, "text": " I'm gonna skip a lot of unnecessary details like because the code was"}, {"start": 2232.2, "end": 2239.12, "text": " developed to be trained on a you know like a multi node multi GPU setup we"}, {"start": 2239.12, "end": 2242.56, "text": " don't care about that we just care about understanding how data works also there"}, {"start": 2242.56, "end": 2246.92, "text": " is a lot of details about pen optic segmentation because data can easily be"}, {"start": 2246.92, "end": 2250.56, "text": " adjusted so that it works for the pen optic segmentation desk we're also"}, {"start": 2250.56, "end": 2255.8399999999997, "text": " going to ignore everything there and finally the actual cocoa implementation"}, {"start": 2255.8399999999997, "end": 2259.8399999999997, "text": " details are not a part of this code base but they are using an existing API so"}, {"start": 2259.8399999999997, "end": 2264.24, "text": " we'll kind of skim through that as well so the main focus is let's understand"}, {"start": 2264.24, "end": 2268.52, "text": " the three components of data the architecture the Hungarian matching"}, {"start": 2268.52, "end": 2272.32, "text": " 
algorithm and finally how do we calculate those three types of losses we"}, {"start": 2272.32, "end": 2277.44, "text": " saw in the beginning of the video so we offered our do let's start here so we"}, {"start": 2277.44, "end": 2282.92, "text": " are going to just ignore these parts this just prints the basically the shot"}, {"start": 2282.92, "end": 2288.72, "text": " of the of the current commit but that's again not vital I'm gonna skip all of"}, {"start": 2288.72, "end": 2293.96, "text": " this we just grab the device so I have a GPU so it's gonna be GPU is gonna be"}, {"start": 2293.96, "end": 2299.6, "text": " used so for reproducibility reasons they they set the seeds for various libraries"}, {"start": 2299.6, "end": 2306.04, "text": " pytorch non pythons random etc so that not that important now this is where"}, {"start": 2306.04, "end": 2310.84, "text": " the actual important part starts let's see how the model is built so we first"}, {"start": 2310.84, "end": 2317.6, "text": " have basically we picked a number of classes because we'll have 91 here"}, {"start": 2317.6, "end": 2324.16, "text": " because we are dealing with cocoa so as you can see here we have 91 and then as"}, {"start": 2324.16, "end": 2328.36, "text": " I said we're gonna ignore the pen optic part so now let's build the backbone so"}, {"start": 2328.36, "end": 2333.48, "text": " the belt bone is the resonance 50 but let's see the a bit more details there"}, {"start": 2333.48, "end": 2339.16, "text": " so first of all we're gonna build the position in coatings and the interesting"}, {"start": 2339.16, "end": 2342.56, "text": " part of this positioning coatings is going to be in the actual for a function"}, {"start": 2342.56, "end": 2349.08, "text": " of this of this module so we can just kind of skip through that because the"}, {"start": 2349.08, "end": 2355.32, "text": " learning rate is bigger than zero we are gonna make backbone trainable nothing"}, {"start": 2355.32, "end": 2360.04, "text": " fancy there so now we form the backbone and this join our object built which"}, {"start": 2360.04, "end": 2364.64, "text": " will see what it is in a second so it's basically just gonna join the backbone"}, {"start": 2364.64, "end": 2368.64, "text": " which is gonna be the resonant 50 and the positioning beddings so let's step"}, {"start": 2368.64, "end": 2372.68, "text": " into the constructor of the backbone here you can see what happens is that"}, {"start": 2372.68, "end": 2378.16, "text": " name is going to be the resident 50 so we get the we fetch the class the"}, {"start": 2378.16, "end": 2382.68, "text": " resident 50 class from the torch vision models and you can see here that"}, {"start": 2382.68, "end": 2389.08, "text": " sedulation is set to false and because of that we'll just fetch your regular"}, {"start": 2389.08, "end": 2393.2, "text": " resident 50 not the one with dilated convolution and if you're wondering this"}, {"start": 2393.2, "end": 2398.3599999999997, "text": " is what dilated convolution is all about basically here is a nice visualization"}, {"start": 2398.36, "end": 2403.2000000000003, "text": " of how it works so it has a similar effect when it comes to down sampling"}, {"start": 2403.2000000000003, "end": 2408.4, "text": " the spatial extent so it's instead of using strides to down sample the output"}, {"start": 2408.4, "end": 2414.84, "text": " map you just do this type of you create this type of a kernel where you only"}, {"start": 2414.84, "end": 2419.04, "text": " where you basically well 
it's basically dilated and you're attending only"}, {"start": 2419.04, "end": 2424.56, "text": " certain pixels of the image here and that's the way you you you manage to"}, {"start": 2424.56, "end": 2428.64, "text": " down sample the output map okay but that's not that important just like a"}, {"start": 2428.64, "end": 2434.68, "text": " fun fact okay finally they're using these frozen batch norms again just in"}, {"start": 2434.68, "end": 2439.4, "text": " other detail not that crucial it's just a common thing in these detection"}, {"start": 2439.4, "end": 2447.7999999999997, "text": " pipelines okay so number of channel is gonna be 2048 and finally we construct"}, {"start": 2447.7999999999997, "end": 2453.72, "text": " the backbone base some more details here let's kind of scheme through this they're"}, {"start": 2453.72, "end": 2458.16, "text": " gonna freeze some of the parameters basically the only parameters that are"}, {"start": 2458.16, "end": 2463.7599999999998, "text": " going to be trainable are these layer 2 layer 3 and layer 4 blocks let me kind"}, {"start": 2463.7599999999998, "end": 2469.2, "text": " of scheme over that click f5 to jump over all of that and finally we are"}, {"start": 2469.2, "end": 2474.12, "text": " gonna care we're just going to extract the features from this layer 4 and we"}, {"start": 2474.12, "end": 2478.9199999999996, "text": " are going to take those features and basically put them as a value with the"}, {"start": 2478.9199999999996, "end": 2483.2, "text": " associated key that's gonna be this zero key okay so that's the reason we"}, {"start": 2483.2, "end": 2488.6, "text": " during the the past through the Jupiter we had to have that weird zero key and"}, {"start": 2488.6, "end": 2492.8799999999997, "text": " then the tensors will see that in a second but yeah this is the detail why"}, {"start": 2492.8799999999997, "end": 2496.96, "text": " that had to happen okay so finally we just take the"}, {"start": 2496.96, "end": 2502.2799999999997, "text": " resonance 50 and we call this intermediate layer gatherer which is"}, {"start": 2502.2799999999997, "end": 2507.72, "text": " just gonna make sure that we only fetch layer 4 representations from this"}, {"start": 2507.72, "end": 2513.3599999999997, "text": " resonance 50 backbone that's that's that's it nothing fancy there joiner"}, {"start": 2513.3599999999997, "end": 2517.6, "text": " nothing fancy there as well we'll see the interesting part actually lies not"}, {"start": 2517.6, "end": 2521.68, "text": " in the neat function of the joiner but in the for function again most of the"}, {"start": 2521.68, "end": 2525.08, "text": " things are gonna happen in the actual for a prop but this is just giving us"}, {"start": 2525.08, "end": 2529.72, "text": " some idea of how the model is implemented so let's continue here next"}, {"start": 2529.72, "end": 2534.12, "text": " up we build the transformer your standard transformer not nothing"}, {"start": 2534.12, "end": 2537.2, "text": " interesting in the in the neat function so we're just gonna focus on the"}, {"start": 2537.2, "end": 2542.0, "text": " forward prop a bit later now this is interesting that the better part is"}, {"start": 2542.0, "end": 2548.48, "text": " gonna be probably worth kind of skimming through the neat function a number of"}, {"start": 2548.48, "end": 2551.64, "text": " queries is gonna be hundred as previously mentioned we just stored a"}, {"start": 2551.64, "end": 2557.12, "text": " transformer here we store the model dimension which 
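A sketch of the backbone construction described above: a torchvision ResNet-50 with everything frozen except layer2/3/4, wrapped so that only the layer4 features come out under the key "0". The FrozenBatchNorm2d swap, pretrained-weight loading and the positional-encoding joiner are omitted here; the example input size is illustrative.

```python
import torch
import torchvision
from torchvision.models._utils import IntermediateLayerGetter

resnet = torchvision.models.resnet50()

# freeze everything except layer2, layer3 and layer4
for name, p in resnet.named_parameters():
    if not any(name.startswith(l) for l in ('layer2', 'layer3', 'layer4')):
        p.requires_grad_(False)

# only return the layer4 activations, stored under the key "0"
body = IntermediateLayerGetter(resnet, return_layers={'layer4': '0'})

feats = body(torch.randn(2, 3, 768, 1151))
print(feats['0'].shape)   # [2, 2048, 24, 36]: 2048 channels, spatial size / 32
```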
is 256 the internal"}, {"start": 2557.12, "end": 2562.7999999999997, "text": " transformer dimension we create these class in bed and d-box in bed if you"}, {"start": 2562.8, "end": 2568.92, "text": " recall those are gonna map the output decoder output embedding vectors they're"}, {"start": 2568.92, "end": 2573.5600000000004, "text": " gonna map those into the distribution of classes and the four in the bounding"}, {"start": 2573.5600000000004, "end": 2578.2000000000003, "text": " boxes okay so you can see here we map from 256 which is the side"}, {"start": 2578.2000000000003, "end": 2584.04, "text": " dimensionality of the output embedding vector we map that into 92 because we"}, {"start": 2584.04, "end": 2589.0, "text": " have 91 classes in cocoa plus one for the no object for the special class okay"}, {"start": 2589.0, "end": 2594.08, "text": " for the b-box so for the body box we'll have MLP instead instead of a simple"}, {"start": 2594.08, "end": 2600.2, "text": " linear layer so this is kind of the API here is kind of convoluted but what this"}, {"start": 2600.2, "end": 2604.8, "text": " says is hey I want the output of this MLP to be four I want to have three"}, {"start": 2604.8, "end": 2608.78, "text": " layers this is these are the hidden dimensions of the layer and this is the"}, {"start": 2608.78, "end": 2613.08, "text": " input dimension so basically this is some type of an MLP that outputs number"}, {"start": 2613.08, "end": 2618.48, "text": " four dimensional vectors at the end okay we form the query embeddings just"}, {"start": 2618.48, "end": 2624.72, "text": " embedding table so hundred of these and dimensionality is again 256 finally this"}, {"start": 2624.72, "end": 2632.18, "text": " is the input projection so that's the one that maps from 2048 into 256 and we"}, {"start": 2632.18, "end": 2636.16, "text": " just stored the backbone and this auxiliary losses is set to true we'll"}, {"start": 2636.16, "end": 2640.16, "text": " see what it is basically what it does is the following let me let me go through"}, {"start": 2640.16, "end": 2647.98, "text": " the notebook again it's going to make sure that we fetch the outputs of every"}, {"start": 2647.98, "end": 2653.8, "text": " single out so so not just for the last layer of the decoder but for all of the"}, {"start": 2653.8, "end": 2659.6, "text": " six decoder transformer layers so that's they showed in the paper that that"}, {"start": 2659.6, "end": 2663.58, "text": " helps improve the performance by having intermediate losses and not just the"}, {"start": 2663.58, "end": 2668.72, "text": " loss at the end of the decoder stack that's the idea there okay so let's"}, {"start": 2668.72, "end": 2673.4, "text": " continue here again ignoring this part because this is just relevant for the"}, {"start": 2673.4, "end": 2677.28, "text": " panoptic segmentation pathway we don't care about that one and this is gonna"}, {"start": 2677.28, "end": 2681.52, "text": " construct the Hungarian matter so that's the second important part of the of the"}, {"start": 2681.52, "end": 2684.92, "text": " of the pipeline but again the whole magic happens in the forward pass so"}, {"start": 2684.92, "end": 2690.48, "text": " we're just gonna ignore all of this for now weighted dictionary is just gonna"}, {"start": 2690.48, "end": 2695.76, "text": " obviously define the weights for the associated loss so this loss is the"}, {"start": 2695.76, "end": 2699.36, "text": " label loss this is gonna be the bounding box loss etc etc so this one is"}, 
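A sketch of the prediction heads just described: a linear layer mapping the 256-dim decoder outputs to 92 class logits (91 COCO classes plus "no object"), a small 3-layer MLP producing the 4 box coordinates, the 100 learned query embeddings, and the 1x1 conv projecting 2048 backbone channels down to 256. The `MLP` class here is a stand-in written for this sketch.

```python
import torch.nn as nn

hidden_dim, num_classes, num_queries = 256, 91, 100

class MLP(nn.Module):
    """Small feed-forward head: 256 -> 256 -> 256 -> 4."""
    def __init__(self, in_dim, hid_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))
    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x) if i == len(self.layers) - 1 else layer(x).relu()
        return x

class_embed = nn.Linear(hidden_dim, num_classes + 1)      # 92 = 91 classes + no-object
bbox_embed  = MLP(hidden_dim, hidden_dim, 4, 3)           # (cx, cy, w, h) per query
query_embed = nn.Embedding(num_queries, hidden_dim)       # 100 learned object queries
input_proj  = nn.Conv2d(2048, hidden_dim, kernel_size=1)  # 2048 -> 256 channels
```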
{"start": 2699.36, "end": 2704.36, "text": " gonna be five this is one again these are just some hyper parameters that they"}, {"start": 2704.36, "end": 2709.6600000000003, "text": " figured out during the training okay we skip again this part because that's the"}, {"start": 2709.6600000000003, "end": 2715.6400000000003, "text": " panoptic segmentation and because auxiliary loss is set to true we are"}, {"start": 2715.6400000000003, "end": 2720.1600000000003, "text": " also gonna have to define weights for all of the intermediate losses as well"}, {"start": 2720.1600000000003, "end": 2724.48, "text": " so that's what this is gonna do I'm just gonna skip all over this and let me let"}, {"start": 2724.48, "end": 2729.4, "text": " me print you what our weighted deck is gonna look like at the end so here it is"}, {"start": 2729.4, "end": 2734.56, "text": " if I kind of pass it here you can see that we have various different weights"}, {"start": 2734.56, "end": 2738.7200000000003, "text": " for a bunch of different losses that we'll have so let's see how many losses"}, {"start": 2738.7200000000003, "end": 2743.32, "text": " we have so like 18 losses that's that's a lot like that's three times six"}, {"start": 2743.32, "end": 2746.84, "text": " because we have six layers in the decoder and we have three light layers"}, {"start": 2746.84, "end": 2752.92, "text": " in each of those three losses in each of those okay so again we defined here the"}, {"start": 2752.92, "end": 2756.7000000000003, "text": " types of losses we care about this cardinality is not actually a loss we'll"}, {"start": 2756.7, "end": 2764.52, "text": " see what it is a bit later and yeah let's keep over this and finally this is"}, {"start": 2764.52, "end": 2770.96, "text": " the the final important layer during the for prop of this object here we're gonna"}, {"start": 2770.96, "end": 2774.12, "text": " do the matching plus the loss calculation so that's the third"}, {"start": 2774.12, "end": 2778.8799999999997, "text": " component of our data model that we are gonna care about okay let's continue"}, {"start": 2778.8799999999997, "end": 2786.54, "text": " here so here is how the in the function looks like we store the Hungarian"}, {"start": 2786.54, "end": 2792.12, "text": " metric here we stored a bunch of different objects nothing super vital"}, {"start": 2792.12, "end": 2797.96, "text": " there so one interesting detail is that we have this empty weight matrix vector"}, {"start": 2797.96, "end": 2802.36, "text": " and it has so as you can see the dimensionality of it it's gonna be I"}, {"start": 2802.36, "end": 2808.24, "text": " think it should be 92 so if I were to print this 92 so that's the number of"}, {"start": 2808.24, "end": 2814.2, "text": " classes plus the no object class and the reason we set the last so so all of the"}, {"start": 2814.2, "end": 2818.3599999999997, "text": " weights are gonna be one except that for the no object class so you can see here"}, {"start": 2818.3599999999997, "end": 2825.2, "text": " that's why we specify it so for the 90 so for the last element of this array"}, {"start": 2825.2, "end": 2830.16, "text": " we're gonna set the coefficient to 0.1 and that's going to help us put our"}, {"start": 2830.16, "end": 2836.3199999999997, "text": " smaller weight on this class during the label loss and the reason we have to do"}, {"start": 2836.3199999999997, "end": 2841.2, "text": " that is because for the for the most part we'll have only a couple of box"}, {"start": 2841.2, "end": 
2846.3999999999996, "text": " predictions and all of the other ones are gonna be the no object prediction"}, {"start": 2846.3999999999996, "end": 2849.6, "text": " and remember we have hundreds of predictions so that's why we want to"}, {"start": 2849.6, "end": 2855.16, "text": " kind of reduce the weight for these no object predictions and that's just a"}, {"start": 2855.16, "end": 2863.3999999999996, "text": " common thing people do for to to kind of counter this class imbalance basically"}, {"start": 2863.3999999999996, "end": 2869.8799999999997, "text": " okay we registered the buffer meaning we this is not a trainable vector it's a"}, {"start": 2869.88, "end": 2874.58, "text": " constant but it's still a part of the model so we want to store it in the"}, {"start": 2874.58, "end": 2879.08, "text": " model so that's why we do register buffer again pie torch syntax"}, {"start": 2879.08, "end": 2883.8, "text": " continuing on we are almost done with the data model so we are almost done"}, {"start": 2883.8, "end": 2892.2400000000002, "text": " building the the whole thing so we just placed that this this object onto GPU and"}, {"start": 2892.2400000000002, "end": 2896.2400000000002, "text": " then we store some post-processing we're gonna see what those are in the forward"}, {"start": 2896.24, "end": 2900.6, "text": " pass okay finally we return back the model so that's the better criterion"}, {"start": 2900.6, "end": 2905.3199999999997, "text": " that contains the Hungarian matter and the loss calculation and we turn back"}, {"start": 2905.3199999999997, "end": 2908.9199999999996, "text": " this these post processors which are gonna be more relevant during the"}, {"start": 2908.9199999999996, "end": 2913.9199999999996, "text": " evaluation than during training but yeah bear with me for now okay so again we"}, {"start": 2913.9199999999996, "end": 2920.9199999999996, "text": " move the model weights on to the GPU and here yeah just some variable like"}, {"start": 2920.9199999999996, "end": 2925.8399999999997, "text": " renaming we don't do distributed so I can skip this here we calculate number"}, {"start": 2925.84, "end": 2930.32, "text": " of parameters basically we trade through all the parameters and if they require"}, {"start": 2930.32, "end": 2936.0, "text": " grad then we just find the number of elements in that particular layer and we"}, {"start": 2936.0, "end": 2940.4, "text": " sum up all of those numbers and we get that number is equal to forty something"}, {"start": 2940.4, "end": 2943.92, "text": " million I think so let's find where the output is so we can see here we have"}, {"start": 2943.92, "end": 2952.2400000000002, "text": " 41 million trainable parameters for the data model okay what this piece of code"}, {"start": 2952.24, "end": 2958.56, "text": " here does is separates the trainable weights into two groups one group is the"}, {"start": 2958.56, "end": 2963.2799999999997, "text": " backbone group the other one is the non backbone a group of parameters so we'll"}, {"start": 2963.2799999999997, "end": 2966.64, "text": " have a different learning rate for for the backbone and a different learning"}, {"start": 2966.64, "end": 2971.08, "text": " rate for the other parameters so again this is what this logic does let's not"}, {"start": 2971.08, "end": 2977.7599999999998, "text": " lose too much time there and they're using Adam W which additionally adds the"}, {"start": 2977.76, "end": 2982.44, "text": " weight decay logic into into the optimization procedure finally this 
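A sketch of the class-imbalance trick just described: the "no object" class gets a 0.1 weight in the label cross-entropy so that the many background predictions among the 100 queries do not dominate, and the weight vector is registered as a buffer so it moves with the model without being trained. Shapes and names here follow the transcript, not the exact repo code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Criterion(nn.Module):
    def __init__(self, num_classes=91, eos_coef=0.1):
        super().__init__()
        empty_weight = torch.ones(num_classes + 1)   # 92 entries, all 1.0
        empty_weight[-1] = eos_coef                  # down-weight the no-object class
        # constant, not trainable, but part of the model -> register as a buffer
        self.register_buffer('empty_weight', empty_weight)

    def loss_labels(self, pred_logits, target_classes):
        # pred_logits: [batch, num_queries, 92]; target_classes: [batch, num_queries]
        return F.cross_entropy(pred_logits.transpose(1, 2), target_classes,
                               weight=self.empty_weight)
```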
is"}, {"start": 2982.44, "end": 2987.6400000000003, "text": " just a special learning rate scheduler that's going to drop but I think 10x"}, {"start": 2987.6400000000003, "end": 2992.84, "text": " after 200 epochs so after 200 epochs it's going to drop the learning rate by"}, {"start": 2992.84, "end": 2999.0400000000004, "text": " 10x that's the idea okay now we need to build the cocoa datasets so let me"}, {"start": 2999.0400000000004, "end": 3005.44, "text": " quickly go through what's going on here so we build the cocoa we we form we are"}, {"start": 3005.44, "end": 3009.84, "text": " going to transform each of our images using various transforms such as"}, {"start": 3009.84, "end": 3014.36, "text": " normalization using the image net statistics and then you can see here"}, {"start": 3014.36, "end": 3019.52, "text": " that we'll have some random horizontal flipping and then either random"}, {"start": 3019.52, "end": 3024.68, "text": " resizing or this combination of resizing cropping etc etc so just bunch of details"}, {"start": 3024.68, "end": 3031.12, "text": " not that interesting okay so we just store those transforms as well as some"}, {"start": 3031.12, "end": 3036.24, "text": " other processing here again we'll see all of that a bit later during the"}, {"start": 3036.24, "end": 3041.2799999999997, "text": " forward pass okay this is how the data set was formed by the way I kind of"}, {"start": 3041.2799999999997, "end": 3045.7999999999997, "text": " skipped through this part but let me let me quickly show you so image folder is"}, {"start": 3045.7999999999997, "end": 3052.3199999999997, "text": " let me print you the actual paths so we can see here the actual path towards the"}, {"start": 3052.3199999999997, "end": 3057.16, "text": " image directory and then we have the annotation file and so what I've done"}, {"start": 3057.16, "end": 3063.44, "text": " here is the following thing so I kind of took I found this repo I'm gonna link it"}, {"start": 3063.44, "end": 3069.12, "text": " down in the video description and they they've down sampled the training set"}, {"start": 3069.12, "end": 3075.08, "text": " of the of the cocoa data set into so that it only contains 25,000 images"}, {"start": 3075.08, "end": 3081.72, "text": " instead of 117,000 or something and I've additionally literally just extracted"}, {"start": 3081.72, "end": 3087.52, "text": " hundred images and then created a valid like a valid annotation file that"}, {"start": 3087.52, "end": 3093.04, "text": " contains only hundred annotations so those are these 391 kilobytes these"}, {"start": 3093.04, "end": 3096.64, "text": " small annotation files which are gonna be valid annotation files for those"}, {"start": 3096.64, "end": 3102.7799999999997, "text": " hundred images so by doing that I have a minimal cocoa data set that helps me be"}, {"start": 3102.7799999999997, "end": 3106.52, "text": " able to run and debug this code without having to download like 20 gigabytes of"}, {"start": 3106.52, "end": 3111.92, "text": " data or something so do let me know if you want me to share the small"}, {"start": 3111.92, "end": 3117.36, "text": " modifications I've done so that I have this mini data set ready for this video"}, {"start": 3117.36, "end": 3122.4, "text": " okay so that's just a minor detail so again this data set now loaded all of"}, {"start": 3122.4, "end": 3126.88, "text": " our hundred images and the associated annotations that's the idea let's"}, {"start": 3126.88, "end": 3132.04, "text": " continue here we do 
the same thing for validation I'm just gonna skip to here"}, {"start": 3132.04, "end": 3136.96, "text": " I'm just gonna skip all of this let's keep all of this and we are back to the"}, {"start": 3136.96, "end": 3141.56, "text": " main function so we have the data sets we have the optimizers we have the model"}, {"start": 3141.56, "end": 3150.48, "text": " here we have everything now we need to form the data loaders so for that they"}, {"start": 3150.48, "end": 3153.52, "text": " just create some random sampling of the training data and they're gonna have"}, {"start": 3153.52, "end": 3158.36, "text": " sequential sampling of the validation data nothing too interesting then they"}, {"start": 3158.36, "end": 3162.92, "text": " create this batch sampler so we we basically batch two training images"}, {"start": 3162.92, "end": 3168.2000000000003, "text": " during during training nothing fancy there just standard pie-trot stuff"}, {"start": 3168.2000000000003, "end": 3173.7200000000003, "text": " finally we create the data loaded loaders here using the training data set"}, {"start": 3173.7200000000003, "end": 3178.52, "text": " using this batch sampler and we'll have this collate function which is going to"}, {"start": 3178.52, "end": 3183.32, "text": " be interesting and we'll see how it works a bit later again we have data"}, {"start": 3183.32, "end": 3188.08, "text": " loaders ready and that's it okay finally some cocoa specific idiosyncrasies so"}, {"start": 3188.08, "end": 3191.92, "text": " I'll just skip this as well not that vital we don't have any frozen weights"}, {"start": 3191.92, "end": 3197.04, "text": " that's again relevant only for the panoptic segmentation let's continue"}, {"start": 3197.04, "end": 3201.0, "text": " here we don't want to resume because that's when we have a checkpoint which"}, {"start": 3201.0, "end": 3205.24, "text": " we don't we don't want to do eval we just want to focus on the training okay"}, {"start": 3205.24, "end": 3210.64, "text": " here we start so we are gonna run this for 300 epochs obviously I'm not gonna"}, {"start": 3210.64, "end": 3214.4, "text": " show you that in this video I'm just gonna do a single iteration a single"}, {"start": 3214.4, "end": 3218.7200000000003, "text": " like a loop through the training and let's see how that things work so this"}, {"start": 3218.7200000000003, "end": 3223.64, "text": " is this part here is the the most important part of the whole codebase"}, {"start": 3223.64, "end": 3229.88, "text": " let's step into the training for one epoch and here we are so we set the"}, {"start": 3229.88, "end": 3235.2000000000003, "text": " model to trainable as well as the criterion again as a reminder it"}, {"start": 3235.2000000000003, "end": 3241.2000000000003, "text": " contains the Hungarian matter plus the loss calculation logic okay I'm gonna"}, {"start": 3241.2, "end": 3245.7999999999997, "text": " skip all of these logging functionalities nothing super vital there"}, {"start": 3245.7999999999997, "end": 3250.0, "text": " and finally here we start loading the images okay so this is where the fun"}, {"start": 3250.0, "end": 3258.24, "text": " starts basically let's step through this and see what happens okay so we are in"}, {"start": 3258.24, "end": 3261.96, "text": " the cocoa detection so this is the data set class and we are grabbing a single"}, {"start": 3261.96, "end": 3266.7, "text": " image so let's see what happens here so we're gonna end up with I think this"}, {"start": 3266.7, "end": 3272.68, "text": " 
image is gonna be a pillow yeah type so image pillow exactly and target is just"}, {"start": 3272.68, "end": 3277.3599999999997, "text": " gonna be the set addiction like a list containing all of the associated"}, {"start": 3277.3599999999997, "end": 3281.12, "text": " annotations and you can see there is much fun tations like segmentation is"}, {"start": 3281.12, "end": 3286.5, "text": " crowd bonding box blah blah blah we only care about the actual bounty box so"}, {"start": 3286.5, "end": 3292.2, "text": " that's it let's continue here less important details like image ID is gonna"}, {"start": 3292.2, "end": 3298.4399999999996, "text": " be 42 415 I think that's gonna be the file name so if we if you take a look at"}, {"start": 3298.4399999999996, "end": 3304.7599999999998, "text": " the actual training images here I think it just grabbed yeah four two four one"}, {"start": 3304.7599999999998, "end": 3308.7599999999998, "text": " five it just grabbed one of these images from the training data set took its ID"}, {"start": 3308.7599999999998, "end": 3314.52, "text": " and that's what it's using as this image ID here nothing too interesting there"}, {"start": 3314.52, "end": 3322.0, "text": " let's continue so prepare just kind of processes the target this this set of"}, {"start": 3322.0, "end": 3330.04, "text": " annotations it's a bit cleaner nothing nothing fancy there let's continue here"}, {"start": 3330.68, "end": 3335.68, "text": " transform is gonna apply the set of horizontal flipping etc etc to the image"}, {"start": 3335.68, "end": 3341.04, "text": " so that's it so again we just fetch the image we fetch the notations we process"}, {"start": 3341.04, "end": 3345.56, "text": " the image we process the annotations and now let's continue let's see what"}, {"start": 3345.56, "end": 3350.16, "text": " happens now so we do that again the reason is remember we have batch size"}, {"start": 3350.16, "end": 3355.56, "text": " set to two so that's why we're gonna call get item of the data set twice and"}, {"start": 3355.56, "end": 3361.44, "text": " then we're gonna collate those to form the final batch okay so I'm gonna skip"}, {"start": 3361.44, "end": 3364.92, "text": " through all of this because we saw how it works already and now we have the"}, {"start": 3364.92, "end": 3370.2, "text": " call a function going on okay so let's see what's going on here so batch is"}, {"start": 3370.2, "end": 3375.92, "text": " going to be basically a list of couples containing as you can see image"}, {"start": 3375.92, "end": 3382.28, "text": " annotations image annotations and instead of doing that we want to regroup"}, {"start": 3382.28, "end": 3386.0, "text": " it so that we have a couple of images and a couple of annotations so that's"}, {"start": 3386.0, "end": 3390.2400000000002, "text": " what this align does here we unpack the batch we do the zipping we do the"}, {"start": 3390.2400000000002, "end": 3394.84, "text": " listing and then we end up with batch that's gonna be as you can see here"}, {"start": 3394.84, "end": 3399.7400000000002, "text": " image image and then annotations annotations that's that's all this is an"}, {"start": 3399.7400000000002, "end": 3405.0, "text": " interesting part and an important part because images in the case of object"}, {"start": 3405.0, "end": 3409.32, "text": " detection are gonna be oftentimes of different shapes and we don't want to"}, {"start": 3409.32, "end": 3414.64, "text": " resize them into the same dimension we want to operate on the original 
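A sketch of the collate function being described: the incoming batch is a list of (image, annotations) pairs, the zip regroups it into all images and all annotations, and the images are padded into one batched "nested tensor" (the padding helper is sketched a bit further below).

```python
def collate_fn(batch):
    images, targets = list(zip(*batch))                      # regroup the pairs
    samples = nested_tensor_from_tensor_list(list(images))   # pad + build the mask
    return samples, targets
```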
images"}, {"start": 3414.64, "end": 3418.72, "text": " so let's quickly step into this nasty tensor from tensor list function and see"}, {"start": 3418.72, "end": 3423.48, "text": " what it does so here we index with zero which means we're gonna fetch the top"}, {"start": 3423.48, "end": 3429.4, "text": " all that contains the tensors of images so let's do f10 here and we end up in"}, {"start": 3429.4, "end": 3436.36, "text": " that function okay so again the images are obviously gonna be of dimensionality"}, {"start": 3436.36, "end": 3441.28, "text": " they're gonna have three dimensions if I'm going to let me just plot this let"}, {"start": 3441.28, "end": 3447.2000000000003, "text": " me just print this oh my god so we have three seven sixty eight eleven fifty one"}, {"start": 3447.2000000000003, "end": 3450.4, "text": " so that's the number of channels and this is a spatial extent of the image"}, {"start": 3450.4, "end": 3458.44, "text": " okay let's continue here and basically here what we do is we iterate through"}, {"start": 3458.44, "end": 3464.08, "text": " all of the images in the batch and we find the maximum basically well let's"}, {"start": 3464.08, "end": 3471.64, "text": " call it the like a convex hull or a bounding box so we find the biggest box"}, {"start": 3471.64, "end": 3475.78, "text": " that encompasses all of the spatial resolutions of all of the images in our"}, {"start": 3475.78, "end": 3479.52, "text": " data set if that makes sense so let me show you what I mean by that so tensor"}, {"start": 3479.52, "end": 3484.9, "text": " list we saw we just saw how it looks how it looks like for the zeroth image"}, {"start": 3484.9, "end": 3489.08, "text": " let's see the shape for the second one and you can see that this dimension is"}, {"start": 3489.08, "end": 3492.48, "text": " bigger than this one and this dimension is bigger than this one and that's what"}, {"start": 3492.48, "end": 3498.48, "text": " why our result in resultant max size is gonna be the same shape as this tensor"}, {"start": 3498.48, "end": 3504.84, "text": " here so if I am to step over this and if I were to print the max size we'll see"}, {"start": 3504.84, "end": 3511.36, "text": " we get as I said the biggest possible bounding box that encompasses all of"}, {"start": 3511.36, "end": 3515.36, "text": " these spatial extents so it has to be bigger than we basically do a max"}, {"start": 3515.36, "end": 3518.88, "text": " across all of the images in the batch across this dimension and across this"}, {"start": 3518.88, "end": 3524.04, "text": " dimension and that's how we get this final max size okay so finally we we"}, {"start": 3524.04, "end": 3529.28, "text": " form the the the batch shape into which we're going to now copy paste all of"}, {"start": 3529.28, "end": 3536.44, "text": " these individual images so we end up with a batch shape of two and then the"}, {"start": 3536.44, "end": 3542.64, "text": " same shape as this one here then we just extract individual so the batch size the"}, {"start": 3542.64, "end": 3546.68, "text": " channel size the height and the width so let's continue here we just grab the"}, {"start": 3546.68, "end": 3550.68, "text": " types and the devices nothing fancy there and we create that this is gonna"}, {"start": 3550.68, "end": 3557.56, "text": " be the like the container tensor and we just initialize it with zeros so it's"}, {"start": 3557.56, "end": 3561.92, "text": " going to have this patch shape as we just saw here next up we're gonna form a"}, {"start": 3561.92, 
"end": 3568.46, "text": " mask that's gonna tell us later on which tokens we need to ignore when we pass"}, {"start": 3568.46, "end": 3574.2000000000003, "text": " them through the transformer because some of these you saw that the second"}, {"start": 3574.2000000000003, "end": 3578.6, "text": " image is smaller one and so some of the the tokens will have to be ignored from"}, {"start": 3578.6, "end": 3582.48, "text": " that smaller image that's why we have a mask and we initialize it initially with"}, {"start": 3582.48, "end": 3587.2400000000002, "text": " all ones let's see now what happens so we iterate through the tens or lists so"}, {"start": 3587.24, "end": 3592.08, "text": " that's a list containing our images and we trade through the basically this this"}, {"start": 3592.08, "end": 3596.4799999999996, "text": " place this this container tensor and we're just gonna like we're just gonna"}, {"start": 3596.4799999999996, "end": 3602.68, "text": " copy our image into the into that container tensor and then we just take"}, {"start": 3602.68, "end": 3610.68, "text": " the mask and we fill in the false such that the parts that remain true are the"}, {"start": 3610.68, "end": 3614.3599999999997, "text": " part are the parts that need to be padded for that image if that makes sense"}, {"start": 3614.36, "end": 3620.44, "text": " so for this first one I assume that everything is gonna be false because the"}, {"start": 3620.44, "end": 3625.48, "text": " bigger image has the same shape as the as the container so that's why if I were"}, {"start": 3625.48, "end": 3629.96, "text": " to print the mask you'll see that all of these are false and that makes sense"}, {"start": 3629.96, "end": 3635.7200000000003, "text": " because again the bigger image is of the same shape as the container object so"}, {"start": 3635.7200000000003, "end": 3641.08, "text": " let's now step through it again so now this time mask is gonna be a bit"}, {"start": 3641.08, "end": 3645.48, "text": " different so you can see here that we now have true here because we do need"}, {"start": 3645.48, "end": 3649.2799999999997, "text": " padding for the second image because it is smaller than the container tensor"}, {"start": 3649.2799999999997, "end": 3655.7599999999998, "text": " that's the idea and that's it like after this you can see that we've created"}, {"start": 3655.7599999999998, "end": 3662.24, "text": " successfully created like this container tensor we've stored our image data"}, {"start": 3662.24, "end": 3666.84, "text": " inside and we have the mask that also knows which tokens we need to ignore"}, {"start": 3666.84, "end": 3672.92, "text": " later on during during the the forward pass okay let's see what this nested"}, {"start": 3672.92, "end": 3677.96, "text": " tensor is basically we just store it's a simple simple object that source that"}, {"start": 3677.96, "end": 3683.88, "text": " tensors and the masks nothing fancy there okay so that was this part that's"}, {"start": 3683.88, "end": 3688.6400000000003, "text": " that was the what this collate function does and now we can return back to the"}, {"start": 3688.6400000000003, "end": 3695.96, "text": " main training loop okay so we can just skip all of this let me just go back to"}, {"start": 3695.96, "end": 3703.04, "text": " the file here so we were here we just need to skip all the way through here so"}, {"start": 3703.04, "end": 3711.12, "text": " f5 nice so samples just contains the nested tensors we just saw so we just"}, {"start": 3711.12, "end": 3715.96, 
"text": " created those so let me just show you that that's indeed the case if I were"}, {"start": 3715.96, "end": 3721.76, "text": " to print type of samples we're gonna have a nested tensor and if I were to"}, {"start": 3721.76, "end": 3727.44, "text": " reach for the tensors here and then do the shape we'll see that that contains"}, {"start": 3727.44, "end": 3733.92, "text": " our images and if I were to do instead of tensors I think a mask yeah mask and"}, {"start": 3733.92, "end": 3739.0800000000004, "text": " then shape here we can see that we have a mask stored so mask obviously doesn't"}, {"start": 3739.0800000000004, "end": 3743.44, "text": " need to have the number of channels because we just carry care about masking"}, {"start": 3743.44, "end": 3749.88, "text": " out certain tokens from the from the image space that's it okay so these two"}, {"start": 3749.88, "end": 3755.96, "text": " lines are just gonna push the labels and the images to the GPU in case you have"}, {"start": 3755.96, "end": 3762.2400000000002, "text": " one and now comes the fun part so this is literally the most important part of"}, {"start": 3762.2400000000002, "end": 3767.76, "text": " the code base so inside of the train one epoch these two lines are the main that"}, {"start": 3767.76, "end": 3772.6, "text": " the gist of what's going on so we have we do the the forward pass here so we"}, {"start": 3772.6, "end": 3776.36, "text": " get the bounding box predictions and then the criterion is gonna do the"}, {"start": 3776.36, "end": 3780.48, "text": " Hungarian matching followed by loss calculation so let's see how that's"}, {"start": 3780.48, "end": 3787.08, "text": " gonna work okay so we are in the data for pass here as you can see here so"}, {"start": 3787.08, "end": 3793.48, "text": " because samples is already a nested list we're just gonna skip this and the first"}, {"start": 3793.48, "end": 3798.8, "text": " step is to take the images so so the samples and pass them through the"}, {"start": 3798.8, "end": 3803.6, "text": " backbone so let's see how that's gonna work okay so here is the backbone first"}, {"start": 3803.6, "end": 3807.56, "text": " things first thing that this backbone is gonna do again remember there is this"}, {"start": 3807.56, "end": 3812.92, "text": " joiner object it's nothing fancy so this self zero let me show you what it is so"}, {"start": 3812.92, "end": 3817.46, "text": " self zero is simply your backbone so that's your resonant 50 so if I were to"}, {"start": 3817.46, "end": 3821.4, "text": " print the type of this you can see it's a backbone you later see this self one"}, {"start": 3821.4, "end": 3825.88, "text": " this self one is just gonna do the positionally encoding so if we were to"}, {"start": 3825.88, "end": 3830.36, "text": " if I were to replace this zero by one we can see that this is just position"}, {"start": 3830.36, "end": 3835.48, "text": " embeddings sign so the sinusoid from the like the same type of positionally"}, {"start": 3835.48, "end": 3839.76, "text": " embeddings as from the original transformer paper okay so that means"}, {"start": 3839.76, "end": 3845.6400000000003, "text": " that by doing this we are passing the nested I think this is the nested tensor"}, {"start": 3845.6400000000003, "end": 3851.6, "text": " let me kind of double check this again so let's just take the tensor list yeah"}, {"start": 3851.6, "end": 3855.84, "text": " it says here the notation says it already it's a nested tensor we passed"}, {"start": 3855.84, "end": 3859.48, 
"text": " it through the backbone so let's see the forward pass of the backbone okay here"}, {"start": 3859.48, "end": 3864.92, "text": " it is the backbone base we basically pass the so we extract the tensions"}, {"start": 3864.92, "end": 3868.84, "text": " which are the images and we just pass that through the body which is gonna"}, {"start": 3868.84, "end": 3873.16, "text": " fetch us the as you recall here it's just gonna fetch us the features from"}, {"start": 3873.16, "end": 3880.2, "text": " the layer 4 of the resonant 50 so let's do that and we end up with XS XS is"}, {"start": 3880.2, "end": 3885.04, "text": " just gonna have I think one key and that's this key zero and and we we've"}, {"start": 3885.04, "end": 3889.56, "text": " specified it here exactly so that this tells us hey if you want access layer"}, {"start": 3889.56, "end": 3894.32, "text": " four features access me via key zero and that's what we are going to do so if I"}, {"start": 3894.32, "end": 3899.12, "text": " if I were to take excess of zero we're gonna access the features and the"}, {"start": 3899.12, "end": 3902.92, "text": " features are gonna be as you can see of the following dimension so we have 2048"}, {"start": 3902.92, "end": 3906.64, "text": " channels and we can see that the number of the spatial extent kind of reduced"}, {"start": 3906.64, "end": 3911.6, "text": " here so it started with we started with this shape let me show you what we had"}, {"start": 3911.6, "end": 3919.04, "text": " here so we started with two three and the original image resolution and we"}, {"start": 3919.04, "end": 3925.48, "text": " got two more channels and smaller spatial resolution which is expected from"}, {"start": 3925.48, "end": 3931.24, "text": " a convolutional neural network such as resonant 50 okay next up what we do is"}, {"start": 3931.24, "end": 3938.36, "text": " the following we grab the features from access and we take the mask and now see"}, {"start": 3938.36, "end": 3944.44, "text": " what we do so we basically now down sample the mask such that this mask is"}, {"start": 3944.44, "end": 3950.76, "text": " no longer of the same spatial resolution as the original image but it's of the"}, {"start": 3950.76, "end": 3955.04, "text": " same spatial resolution as the as the image features that came out of resonant"}, {"start": 3955.04, "end": 3959.1200000000003, "text": " 50 because remember we want to know which of those image features we want to"}, {"start": 3959.1200000000003, "end": 3963.56, "text": " mask out because we are gonna later on pass them through the transformer so"}, {"start": 3963.56, "end": 3968.64, "text": " that's why we do this step so basically you can see that mask currently has the"}, {"start": 3968.64, "end": 3976.7999999999997, "text": " following shape whoops let me see why what it so it's called them M shape so"}, {"start": 3976.7999999999997, "end": 3982.08, "text": " this is the original shape after we do the down sampling we end up with mask"}, {"start": 3982.08, "end": 3990.24, "text": " shape so to 1929 which is of the same resolution as the image features that"}, {"start": 3990.24, "end": 3995.68, "text": " came out of resonant 50 so let me now print mask and you can see again we have"}, {"start": 3995.68, "end": 4000.7599999999998, "text": " all false for the first image and we have some of these being true for the"}, {"start": 4000.7599999999998, "end": 4006.8399999999997, "text": " second image and that's it okay finally we store the image features and the"}, {"start": 
4006.8399999999997, "end": 4013.2, "text": " associated mask into a nested vector and we return that back basically in this"}, {"start": 4013.2, "end": 4020.04, "text": " out dictionary that's it that's the first step of the of the backbone so now"}, {"start": 4020.04, "end": 4025.8399999999997, "text": " we just want to calculate the positional embeddings let's see how we do that"}, {"start": 4025.8399999999997, "end": 4031.72, "text": " again we grab the image features and the down sample mask here we just store that"}, {"start": 4031.72, "end": 4037.6, "text": " into this out dictionary so again let me remind you that X is just if I were to"}, {"start": 4037.6, "end": 4042.64, "text": " access tensors and bring the shape that contains the image features and if I"}, {"start": 4042.64, "end": 4047.04, "text": " were to ask excess mask that contains the down sample masks that's the X"}, {"start": 4047.04, "end": 4055.52, "text": " object here and we store it into this out list here we do just a simple"}, {"start": 4055.52, "end": 4061.7599999999998, "text": " positional encoding calculation nothing too fancy basically we give it the X and"}, {"start": 4061.7599999999998, "end": 4069.4, "text": " it's gonna do its own magic so a bunch of code but that's your just your like"}, {"start": 4069.4, "end": 4075.28, "text": " signs and cosines calculation so that we ultimately end up with this positional"}, {"start": 4075.28, "end": 4082.2400000000002, "text": " embedding like set of vectors of the shape as you can see the following so he"}, {"start": 4082.2400000000002, "end": 4087.52, "text": " has two because we have remember we have batch dimension of two we have the same"}, {"start": 4087.52, "end": 4091.96, "text": " resolution here as the as the as the image features that came out of the"}, {"start": 4091.96, "end": 4097.4800000000005, "text": " resonant and we have 256 because that's going to be the hidden dimension that"}, {"start": 4097.48, "end": 4101.919999999999, "text": " the transformers are gonna be using that's why we have all of this okay so I"}, {"start": 4101.919999999999, "end": 4105.44, "text": " encourage you to check out this this code if you're interested but I'm just"}, {"start": 4105.44, "end": 4110.719999999999, "text": " gonna treat it as a black box for now it's nothing to fancy there okay we"}, {"start": 4110.719999999999, "end": 4117.08, "text": " store that in this positional list and we return that back so that's the first"}, {"start": 4117.08, "end": 4124.759999999999, "text": " step of the data for pass so data model again for pass and we return back the"}, {"start": 4124.76, "end": 4131.360000000001, "text": " image features the positional vectors and the mask next up we just decompose"}, {"start": 4131.360000000001, "end": 4136.72, "text": " the nested tensor into the source and masks the source again image features and"}, {"start": 4136.72, "end": 4141.2, "text": " mask the down sample mask nothing fancy there now this is the interesting part"}, {"start": 4141.2, "end": 4148.320000000001, "text": " so we take the source which is again recall this is going to have two thousand"}, {"start": 4148.32, "end": 4157.04, "text": " twenty four sorry wait 2048 and then the down sampled image resolution so again"}, {"start": 4157.04, "end": 4162.48, "text": " source shape here it is and after we apply the input projection that's going"}, {"start": 4162.48, "end": 4168.4, "text": " to reduce this number of channels as I previously explained to 256 so that's"}, {"start": 
4168.4, "end": 4173.0, "text": " going to happen there and then we just passed the mask so this is what we pass"}, {"start": 4173.0, "end": 4179.36, "text": " to the encoder and then we also pass the query vectors to the decoder and we pass"}, {"start": 4179.36, "end": 4185.56, "text": " the positional encodings so let's step through this piece of code let's do f10"}, {"start": 4185.56, "end": 4191.0, "text": " here and we are hitting the transformer so this is a transformer layer this is"}, {"start": 4191.0, "end": 4197.7, "text": " the for pass and let me now show you the source that it indeed is now has 256 when"}, {"start": 4197.7, "end": 4201.64, "text": " it comes to a number of channels so you can see here to 256 so that's the"}, {"start": 4201.64, "end": 4207.04, "text": " consequence of applying that convolutional layer before passing it"}, {"start": 4207.04, "end": 4213.52, "text": " into the transformer so nothing fancy there now we're going to do some"}, {"start": 4213.52, "end": 4220.68, "text": " flattening and permutation so that we have a shape that's suitable for passing"}, {"start": 4220.68, "end": 4224.08, "text": " it into the transformer so that's something I mentioned here previously"}, {"start": 4224.08, "end": 4230.08, "text": " basically you take this and in the rest order you're gonna flatten out this this"}, {"start": 4230.08, "end": 4237.44, "text": " volume into a sequence of tokens that's that's the basic idea and so again let"}, {"start": 4237.44, "end": 4243.48, "text": " me show you the shape the shape is going to be 551 to 256 so this is the batch"}, {"start": 4243.48, "end": 4248.32, "text": " size this is the hidden dimension and this is the number of tokens and it's"}, {"start": 4248.32, "end": 4255.2, "text": " kind of weird permutation but that's what the this original pie torch"}, {"start": 4255.2, "end": 4259.5599999999995, "text": " implementation of the transformer expects okay instead of having batch as"}, {"start": 4259.56, "end": 4265.240000000001, "text": " the first dimensionality and the reason is some optimization tricks we do the"}, {"start": 4265.240000000001, "end": 4269.64, "text": " same thing for the positional embeddings so if I were to show you the positional"}, {"start": 4269.64, "end": 4275.52, "text": " embedding shape here we're gonna end up with literally the same shape as as the"}, {"start": 4275.52, "end": 4280.4800000000005, "text": " source because we want to add them up later on and finally we do a similar"}, {"start": 4280.4800000000005, "end": 4286.04, "text": " thing with query embedding basically they are initially just going to be 100"}, {"start": 4286.04, "end": 4293.0, "text": " 256 I think so hundred two fifty six and here we just add dimension like at the"}, {"start": 4293.0, "end": 4297.84, "text": " location one and then we repeat it whatever the number of patches is that's"}, {"start": 4297.84, "end": 4302.68, "text": " how many times we just copy paste these very actors and then we end up with"}, {"start": 4302.68, "end": 4310.04, "text": " basically the same so hundred to 256 okay we do the flattening of the mask"}, {"start": 4310.04, "end": 4314.6, "text": " that we can so we can pass it into interest former and finally we form this"}, {"start": 4314.6, "end": 4318.280000000001, "text": " target so targets is going to be so we have a source we have the targets"}, {"start": 4318.280000000001, "end": 4322.68, "text": " targets are going to be the tokens that we pass into the decoder part of the"}, {"start": 
4322.68, "end": 4327.200000000001, "text": " transformer layer whereas we're gonna just be using these query vectors as the"}, {"start": 4327.200000000001, "end": 4331.400000000001, "text": " positionally coatings that we're gonna sum up with the target tokens we're"}, {"start": 4331.400000000001, "end": 4337.4400000000005, "text": " gonna see that in a second okay so finally we now have encoding part and"}, {"start": 4337.4400000000005, "end": 4344.400000000001, "text": " decoding part let's jump and see how those are gonna work let me just see"}, {"start": 4344.4, "end": 4348.96, "text": " whether I have a breakpoint here I'm gonna add a breakpoint to forward post"}, {"start": 4348.96, "end": 4354.04, "text": " post because we're gonna have layer norm applied after the residual connection"}, {"start": 4354.04, "end": 4357.719999999999, "text": " not before that's why we have this for a priest we're gonna just be using this"}, {"start": 4357.719999999999, "end": 4363.96, "text": " version of the function here let's see let's see whether I should add in honor"}, {"start": 4363.96, "end": 4370.4, "text": " I should probably add breakpoint here as well so now let's do f10 just step over"}, {"start": 4370.4, "end": 4375.92, "text": " so as you can see here this is your encoder layer we do for like six times"}, {"start": 4375.92, "end": 4381.5199999999995, "text": " because let me show you the the length we have like six of these as you can see"}, {"start": 4381.5199999999995, "end": 4385.759999999999, "text": " here so we're gonna six times just apply the transformer layer and we pass"}, {"start": 4385.759999999999, "end": 4392.5199999999995, "text": " basically the source here so those are the image tokens just flattened out and"}, {"start": 4392.5199999999995, "end": 4397.5599999999995, "text": " we passed the mask so that's the mask that we got as the odd but if you recall"}, {"start": 4397.56, "end": 4405.200000000001, "text": " from the back bone and finally we we add the positional encodings let me now go"}, {"start": 4405.200000000001, "end": 4409.160000000001, "text": " into the actual layer so let's see what's going on here so we sum up the"}, {"start": 4409.160000000001, "end": 4413.0, "text": " positional encoding so those are the sinusoid we computed we sum them up"}, {"start": 4413.0, "end": 4418.320000000001, "text": " with the image tokens and we get the queries and the keys and then we're just"}, {"start": 4418.320000000001, "end": 4423.240000000001, "text": " going to do a self-attention as you can see here so the only difference is that"}, {"start": 4423.24, "end": 4428.32, "text": " the value vectors will not contain the positional encodings that's it and then"}, {"start": 4428.32, "end": 4433.28, "text": " we have this the part where we mask out some parts so let me show you how this"}, {"start": 4433.28, "end": 4437.0599999999995, "text": " looks like so it's mostly false but then there are some parts that are true which"}, {"start": 4437.0599999999995, "end": 4442.76, "text": " means we're going to ignore those tokens because if you recall the that part of"}, {"start": 4442.76, "end": 4446.719999999999, "text": " the image is non-existent let me quickly explain what I mean by that exactly so"}, {"start": 4446.72, "end": 4453.8, "text": " imagine so remember that we passed two images into our like debtor so we had"}, {"start": 4453.8, "end": 4462.38, "text": " image one and we had I'm gonna just draw like something like this that's of the"}, {"start": 4462.38, "end": 
4467.6, "text": " same shape as this one here but instead so the second image was maybe like this"}, {"start": 4467.6, "end": 4475.04, "text": " okay so that means that we have to pad all of these tokens out we have to"}, {"start": 4475.04, "end": 4482.4, "text": " ignore all of these tokens and that information is now stored in this mask"}, {"start": 4482.4, "end": 4488.2, "text": " because it's gonna be set to true for all of these image tokens so that's kind"}, {"start": 4488.2, "end": 4491.84, "text": " of just a visualization that I have in my mind when I think about it so let's"}, {"start": 4491.84, "end": 4499.96, "text": " go back here let's step through this and that's just gonna be a single step of"}, {"start": 4499.96, "end": 4507.16, "text": " self-attention of image tokens across each other then this is just your like"}, {"start": 4507.16, "end": 4513.24, "text": " regular transform logic we have this inverse bottleneck MLP so so nothing"}, {"start": 4513.24, "end": 4518.88, "text": " nothing fancy there I'm not gonna lose any time doing that so let's step back"}, {"start": 4518.88, "end": 4526.2, "text": " here and let's continue with the decoder portion so I'm gonna click f5 and we're"}, {"start": 4526.2, "end": 4530.28, "text": " gonna get to the decoder okay so we end up with memory the memory is gonna be of"}, {"start": 4530.28, "end": 4534.24, "text": " the same dimensionality as the input source token so source tokens again"}, {"start": 4534.24, "end": 4540.36, "text": " those are the image tokens are of this shape and you can see that the memory is"}, {"start": 4540.36, "end": 4545.92, "text": " gonna be of the same shape here okay and again memory is just let me show you the"}, {"start": 4545.92, "end": 4551.679999999999, "text": " the image here so that's gonna be the tokens we got we get here so we start"}, {"start": 4551.68, "end": 4556.72, "text": " with image features here and we end up with with the tensor that has the same"}, {"start": 4556.72, "end": 4561.52, "text": " dimensionality as the input image features so that's where we are at at"}, {"start": 4561.52, "end": 4564.84, "text": " the moment so now we need to pass through the decoder part of the"}, {"start": 4564.84, "end": 4569.280000000001, "text": " transform so let me show you how that's gonna look like we passed the target"}, {"start": 4569.280000000001, "end": 4573.56, "text": " which is gonna be initially all zeros we passed the memory so that's the output"}, {"start": 4573.56, "end": 4581.280000000001, "text": " of the decoder we then pass what we passed the mask so that we know which"}, {"start": 4581.280000000001, "end": 4586.200000000001, "text": " tokens we should not attend to when doing the cross-attention positional"}, {"start": 4586.200000000001, "end": 4591.160000000001, "text": " embeddings query vectors bunch of details there basically everything that"}, {"start": 4591.160000000001, "end": 4595.52, "text": " we have in this image here is gonna be passed into the decoder so object"}, {"start": 4595.52, "end": 4602.76, "text": " varies positional encodings memory targets tokens which are not displayed"}, {"start": 4602.76, "end": 4607.2, "text": " here but all zeros initially so all that needs to be passed into the decoder"}, {"start": 4607.2, "end": 4613.8, "text": " layer so now let's go back here and let me step over the let me just kind of add"}, {"start": 4613.8, "end": 4621.6, "text": " like a breakpoint somewhere here so here okay let's now click f10 and we're gonna"}, {"start": 
4621.6, "end": 4631.0, "text": " hit decoder part I'm gonna ignore this so let's step through that code so here"}, {"start": 4631.0, "end": 4636.0, "text": " we are in a particular decoder layer so here it probably works we have the query"}, {"start": 4636.0, "end": 4643.7, "text": " position positional embedding so those are these positional embedding vectors"}, {"start": 4643.7, "end": 4651.16, "text": " here so we are going to add those to the target tokens which are gonna be all"}, {"start": 4651.16, "end": 4657.16, "text": " zeros initially so we end up with query and keys being equal to those query"}, {"start": 4657.16, "end": 4661.68, "text": " embedding vectors again we have hundred of those let me print a shape here so we"}, {"start": 4661.68, "end": 4667.72, "text": " have hundred of those and dimensionality is 256 okay so then we apply as you can"}, {"start": 4667.72, "end": 4672.36, "text": " see here we just apply a self-attention so those query vectors are gonna"}, {"start": 4672.36, "end": 4678.84, "text": " basically self attend to each other and nothing nothing super fancy there the"}, {"start": 4678.84, "end": 4683.08, "text": " interesting part in this decoder our layer happens actually at the cross"}, {"start": 4683.08, "end": 4688.88, "text": " attention module here so here as you can see again we are adding the query"}, {"start": 4688.88, "end": 4696.36, "text": " position to the target tokens and we use as keys memory so that's the output of"}, {"start": 4696.36, "end": 4701.48, "text": " the encoder and we add the positional encodings on top of the memory tokens"}, {"start": 4701.48, "end": 4705.24, "text": " and we use as the value vectors the memory vectors so we're gonna somehow"}, {"start": 4705.24, "end": 4711.16, "text": " combine the outputs of the encoder to form our novel representations after"}, {"start": 4711.16, "end": 4719.4, "text": " this so target two okay and again we'll be passing the padding information so"}, {"start": 4719.4, "end": 4723.4, "text": " the masking information because we do not want to attend all of the encoder"}, {"start": 4723.4, "end": 4728.92, "text": " outputs because some of them are not non valid because some of them stem from the"}, {"start": 4728.92, "end": 4733.24, "text": " parts of the image where we just added padding so it's like gonna be black if"}, {"start": 4733.24, "end": 4739.16, "text": " you look at the image in the image space okay so that's that part and then we"}, {"start": 4739.16, "end": 4746.16, "text": " just have regular transformer logic the MLP part and that's pretty much it okay"}, {"start": 4746.16, "end": 4751.96, "text": " guys so we're just gonna repeat this six times and as you can see we are storing"}, {"start": 4751.96, "end": 4758.12, "text": " the intermediate representations for the auxiliary loss I mentioned but yeah let"}, {"start": 4758.12, "end": 4765.8, "text": " me now step over and let's get to this part here see if I were to click f5 let"}, {"start": 4765.8, "end": 4771.4400000000005, "text": " me remove the breakpoint here we are now finished decoder we got decoder output"}, {"start": 4771.4400000000005, "end": 4778.68, "text": " so let me show you the shape of the output here we have 600 to 256 so why"}, {"start": 4778.68, "end": 4783.0, "text": " does this make sense so 100 is because we have 100 query embedding vectors to"}, {"start": 4783.0, "end": 4789.56, "text": " become because we have batch dimension of 2 to 56 is the dimensionality of the"}, {"start": 4789.56, "end": 
4793.360000000001, "text": " embedding vectors throughout the decoder stack and throughout the encoder stack"}, {"start": 4793.36, "end": 4797.24, "text": " and finally we have six because we have six decoder layers and we're storing or"}, {"start": 4797.24, "end": 4800.5599999999995, "text": " all of the intermediate representations because we are gonna apply this"}, {"start": 4800.5599999999995, "end": 4805.4, "text": " auxiliary auxiliary loss a bit later so that's it so this is the thing we're"}, {"start": 4805.4, "end": 4813.339999999999, "text": " gonna be using this HS after we do this transpose operation okay so finally we"}, {"start": 4813.339999999999, "end": 4819.28, "text": " applied the class embedding so that's gonna map our representation HS from"}, {"start": 4819.28, "end": 4823.96, "text": " this shape into we're gonna have instead of 256 92 because that's the number of"}, {"start": 4823.96, "end": 4829.28, "text": " classes plus the no object class so let me show you the the shape of the output"}, {"start": 4829.28, "end": 4835.5599999999995, "text": " class so this is gonna be as I told you 92 here then we do b-box embeds that's"}, {"start": 4835.5599999999995, "end": 4840.599999999999, "text": " the MLP that's gonna give us the outputs like the bounding box coordinates so"}, {"start": 4840.599999999999, "end": 4845.639999999999, "text": " let me do f10 there and let me show you the shape so here is the shape we have"}, {"start": 4845.64, "end": 4850.6, "text": " four here is the output and that's it and finally we grab only the last so"}, {"start": 4850.6, "end": 4856.76, "text": " this is gonna grab only the output from the last decoder layer and then we're"}, {"start": 4856.76, "end": 4862.0, "text": " gonna store all of the intermediate representations inside of coupled with"}, {"start": 4862.0, "end": 4866.700000000001, "text": " this this particular key here auxiliary outputs okay so we store the classes the"}, {"start": 4866.700000000001, "end": 4871.280000000001, "text": " logits we store the bounding boxes and then we store all of the other ones"}, {"start": 4871.28, "end": 4878.4, "text": " excluding the last one inside of inside of inside of this basically dictionary"}, {"start": 4878.4, "end": 4887.639999999999, "text": " so let me kind of step over that and if I were to show you the keys here these"}, {"start": 4887.639999999999, "end": 4894.44, "text": " are the keys we get but then inside of this key we have this list with all of"}, {"start": 4894.44, "end": 4899.32, "text": " the other outputs yeah hopefully yeah I know this is a bunch of details but do"}, {"start": 4899.32, "end": 4902.88, "text": " let me know if you if you if you get anything out of this video and even if"}, {"start": 4902.88, "end": 4907.759999999999, "text": " you if you watched all the way to here like congrats like this was like very I"}, {"start": 4907.759999999999, "end": 4913.5599999999995, "text": " can assume very like intensive and so I'd be super happy to hear the feedback"}, {"start": 4913.5599999999995, "end": 4919.92, "text": " of all of you guys who who stuck all the way to here so let's continue that was"}, {"start": 4919.92, "end": 4924.96, "text": " the one of the most important parts of this whole code base and now there is a"}, {"start": 4924.96, "end": 4930.0, "text": " second part which is much easier we're gonna soon finish up this video so let's"}, {"start": 4930.0, "end": 4934.68, "text": " see the the Hungarian matching algorithm and the loss calculation so 
let's dig"}, {"start": 4934.68, "end": 4940.6, "text": " into it so here we fetch just the outputs from the last layer of the"}, {"start": 4940.6, "end": 4947.4800000000005, "text": " decoder we ignore this auxiliary outputs and finally comes the matcher so this is"}, {"start": 4947.4800000000005, "end": 4951.78, "text": " the Hungarian matching algorithm let's see how this thing works fairly simple"}, {"start": 4951.78, "end": 4957.92, "text": " logic to be honest so let me kind of explain what's going on okay so first we"}, {"start": 4957.92, "end": 4962.32, "text": " are gonna grab the batch size and the number of various it's gonna be two and"}, {"start": 4962.32, "end": 4967.48, "text": " hundred here then we're gonna flatten out the prediction logic so again"}, {"start": 4967.48, "end": 4971.12, "text": " prediction logic so let me show you the shape here and this is how it looks like"}, {"start": 4971.12, "end": 4977.4, "text": " 292 we're gonna flat nod the first two and we're gonna apply the softmax"}, {"start": 4977.4, "end": 4983.2, "text": " instead of using unnormal as logits so let me do f10 here and so we end up with"}, {"start": 4983.2, "end": 4989.879999999999, "text": " a shape 292 as you can see here we do the same thing for the bounding boxes so"}, {"start": 4989.879999999999, "end": 4996.0, "text": " I do it here let's not do this it's gonna be 204 as you can see here so we"}, {"start": 4996.0, "end": 5000.5199999999995, "text": " just flatten across all of the batches and across all of the like tokens and"}, {"start": 5000.5199999999995, "end": 5004.799999999999, "text": " now we'll be able to compare that to ground truth so here's the ground truth"}, {"start": 5004.8, "end": 5009.16, "text": " we trade through this targets dictionary that contains the annotations the ground"}, {"start": 5009.16, "end": 5015.18, "text": " truth annotations we grab the labels and we concatenate those so we'll have we'll"}, {"start": 5015.18, "end": 5022.2, "text": " have two of these for the two of images and we end up with I think 16 I already"}, {"start": 5022.2, "end": 5026.92, "text": " went through this code so it's 16 because we have like I think two labels"}, {"start": 5026.92, "end": 5031.28, "text": " for the first image and 14 labels for the second image so that means two"}, {"start": 5031.28, "end": 5035.599999999999, "text": " bounding boxes for the first image and 14 for the second one we do the same"}, {"start": 5035.599999999999, "end": 5038.759999999999, "text": " thing here we just grab the bounding boxes the ground truth money boxes that"}, {"start": 5038.759999999999, "end": 5044.24, "text": " we end up with 16 for let me show you the shape here so 64 and now for the"}, {"start": 5044.24, "end": 5049.88, "text": " fun part so this is the fun part of this whole logic so what this thing is going"}, {"start": 5049.88, "end": 5056.2, "text": " to do is so we take these 16 ground truth indices so this tells us as you"}, {"start": 5056.2, "end": 5060.12, "text": " can see on the screen here that the first bounding blocks class is 7 and"}, {"start": 5060.12, "end": 5066.76, "text": " then the second is 1 and then 53 53 53 etc so for all of the 200 boxes that we"}, {"start": 5066.76, "end": 5072.36, "text": " predicted for the batch size of 2 we're going to take those particular 16 lodges"}, {"start": 5072.36, "end": 5077.04, "text": " for the 16 ground truth classes and extract them into this cost class"}, {"start": 5077.04, "end": 5083.8, "text": " matrix okay so we're 
gonna end up with what 216 instead of 292 so again we"}, {"start": 5083.8, "end": 5091.4800000000005, "text": " start with out prop being 292 and we end up with having a subset of those with"}, {"start": 5091.4800000000005, "end": 5099.28, "text": " 216 okay so we'll see why that makes sense then we do the following we"}, {"start": 5099.28, "end": 5104.400000000001, "text": " calculate the distance between all of our predicted boxes and the target"}, {"start": 5104.400000000001, "end": 5109.8, "text": " bounding boxes this is just going to do L1 distance nothing nothing fancy there"}, {"start": 5109.8, "end": 5115.64, "text": " and again we're going to end up with a matrix that's 216 and it contains"}, {"start": 5115.64, "end": 5122.04, "text": " basically the distance between each of our 200 predictions and each of the 16"}, {"start": 5122.04, "end": 5128.320000000001, "text": " ground truth labels so we'll know basically yeah how far away each of them"}, {"start": 5128.320000000001, "end": 5133.56, "text": " are like a fully connected graph okay if I click F10 let me show you and convince"}, {"start": 5133.56, "end": 5139.0, "text": " you that we have the shape 216 so quickly what this distance function does"}, {"start": 5139.0, "end": 5145.56, "text": " is let me show you so it's a L1 distance between two bounding boxes so imagine we"}, {"start": 5145.56, "end": 5150.84, "text": " have one bounding box that's like maybe here is one here's one so we have a"}, {"start": 5150.84, "end": 5157.08, "text": " bounding box here and let's imagine we have a second one here okay so okay"}, {"start": 5157.08, "end": 5161.96, "text": " that's it and now if you were to represent this bounding box as a vector"}, {"start": 5161.96, "end": 5166.84, "text": " let's use the XY XY representation so that's gonna we're gonna end up with"}, {"start": 5166.84, "end": 5171.2, "text": " having so that what's the coordinate of this X coordinate is 0 y coordinate is"}, {"start": 5171.2, "end": 5176.2, "text": " 1 then we have X coordinate is 1 y coordinate is 0 so this is going to be"}, {"start": 5176.2, "end": 5180.6, "text": " a representation for this bounding box and then we have this bounding box here"}, {"start": 5180.6, "end": 5186.32, "text": " which is going to be what X coordinate is 2 y coordinate is 1 we're taking a"}, {"start": 5186.32, "end": 5193.84, "text": " look at this point here and then we have 3 and we have 0 okay so now as you can"}, {"start": 5193.84, "end": 5200.32, "text": " imagine the how L1 is calculated is you basically subtract these two vectors and"}, {"start": 5200.32, "end": 5204.12, "text": " so you'll end up with what you'll end up with like and you take the absolute"}, {"start": 5204.12, "end": 5212.22, "text": " value so you'll have 2 0 you'll have 2 and you'll have 0 and then I think you"}, {"start": 5212.22, "end": 5216.72, "text": " just add this up and so you end up with like 4 so the distance between these two"}, {"start": 5216.72, "end": 5222.16, "text": " is gonna be 4 and so we do this for all of the predicted boxes and the ground"}, {"start": 5222.16, "end": 5227.639999999999, "text": " truth boxes that's it that's the logic we apply and then we have this and I'm"}, {"start": 5227.639999999999, "end": 5231.84, "text": " gonna ignore the implementation but we just find the same type of so the"}, {"start": 5231.84, "end": 5237.4, "text": " intersectional reunion logic I just I think I mentioned an hour ago or"}, {"start": 5237.4, "end": 5245.76, "text": " something okay 
so we end up with this cost matrix that's also 216 so after"}, {"start": 5245.76, "end": 5253.08, "text": " that we sum up all of those matrices and we have associated weights here 5 and we"}, {"start": 5253.08, "end": 5257.72, "text": " have 1 and we have 2 so what's the semantics what's the idea why are we"}, {"start": 5257.72, "end": 5262.16, "text": " doing this well the idea is the following we care about we're trying to"}, {"start": 5262.16, "end": 5267.76, "text": " match the predicted box with the ground truth box there is multiple angles at"}, {"start": 5267.76, "end": 5271.6, "text": " which we could look at this so we can only focus maybe on what's the closest"}, {"start": 5271.6, "end": 5276.92, "text": " one but then if they are close but the class is completely off is that a good"}, {"start": 5276.92, "end": 5280.88, "text": " match so if on the other hand we have so let me show you what I mean by that so"}, {"start": 5280.88, "end": 5286.56, "text": " imagine this is imagine this is a ground truth box here imagine we have a ground"}, {"start": 5286.56, "end": 5293.14, "text": " truth box here and imagine we have one box that's predicted like this it has"}, {"start": 5293.14, "end": 5299.200000000001, "text": " some significant overlap but the class of this one is maybe cat and the class"}, {"start": 5299.2, "end": 5303.66, "text": " of this one is maybe dog and now let's assume we have a different bounding box"}, {"start": 5303.66, "end": 5310.28, "text": " so that bounding box is a little bit less overlap but it has a correct class"}, {"start": 5310.28, "end": 5314.639999999999, "text": " it's cat so which one is a better one which better is a which one of these is"}, {"start": 5314.639999999999, "end": 5318.44, "text": " better match so there is no correct answer and that's why we have to kind of"}, {"start": 5318.44, "end": 5324.36, "text": " combine both the intersectional reunion both the distance both the class to form"}, {"start": 5324.36, "end": 5329.599999999999, "text": " the final matrix and then solve this basically graph flow problem to find the"}, {"start": 5329.599999999999, "end": 5338.32, "text": " optimal assignment so here so we finally have the cost matrix and then we just"}, {"start": 5338.32, "end": 5343.92, "text": " kind of change the change the shape of that matrix so that we have let me show"}, {"start": 5343.92, "end": 5348.88, "text": " you here so we have two hundred and sixteen now we collect sizes because we"}, {"start": 5348.88, "end": 5354.4800000000005, "text": " want to separately match the prediction bounding boxes from the first batch with"}, {"start": 5354.4800000000005, "end": 5357.68, "text": " the ground truth boxes from the first batch from the first element of the"}, {"start": 5357.68, "end": 5361.32, "text": " batch and similarly for the second one so we want to have those are completely"}, {"start": 5361.32, "end": 5365.400000000001, "text": " orthogonal so that's why we have to do this separation so size is just going to"}, {"start": 5365.400000000001, "end": 5370.0, "text": " be 2 and 14 so 2 and 14 because we have two bounding boxes for the first image"}, {"start": 5370.0, "end": 5376.36, "text": " and 14 for the second one okay and now we just do this split over sizes which"}, {"start": 5376.36, "end": 5381.44, "text": " is basically going to split this into 102 and hundred fourteen so the two"}, {"start": 5381.44, "end": 5384.759999999999, "text": " separate problems as I mentioned before so let me see whether this is 
gonna"}, {"start": 5384.759999999999, "end": 5393.0, "text": " work if I were to print this and allocate it to a and B let's print"}, {"start": 5393.0, "end": 5401.4, "text": " shape of a and let's print shape of B as you can see here this is what we got"}, {"start": 5401.4, "end": 5406.96, "text": " okay and after splitting you can see that we have this indexing by I it's"}, {"start": 5406.96, "end": 5410.96, "text": " gonna be zero in the first iteration and then one in the in the second one so"}, {"start": 5410.96, "end": 5418.4, "text": " that means we'll be grabbing only the zero with batch here and the first one"}, {"start": 5418.4, "end": 5425.879999999999, "text": " here and then basically doing this assignment problem on 102 cost matrix"}, {"start": 5425.88, "end": 5432.96, "text": " and hundred fourteen and it's literally our minimal flow type of an algorithm so"}, {"start": 5432.96, "end": 5437.08, "text": " this is what this linear sum assignment is gonna find it's gonna find such a"}, {"start": 5437.08, "end": 5441.56, "text": " choice of edges in the bipartite graph so as to minimize the cost and so what I"}, {"start": 5441.56, "end": 5446.04, "text": " mean by that let's for example take the hundred two hundred fourteen one so what"}, {"start": 5446.04, "end": 5452.8, "text": " is gonna happen is we have a matrix that's like let's say hundred fourteen"}, {"start": 5452.8, "end": 5459.68, "text": " okay and it tells you how so this row here tells you the cost between this"}, {"start": 5459.68, "end": 5465.84, "text": " particular predicted box and all of the fourteen ground truth boxes and so you're"}, {"start": 5465.84, "end": 5470.92, "text": " gonna have like what's the cost between each of those and so we can visualize"}, {"start": 5470.92, "end": 5474.56, "text": " this this matrix is the following way as well so you imagine you have like"}, {"start": 5474.56, "end": 5481.04, "text": " literally hundred nodes here and you only have 14 nodes here and so we have"}, {"start": 5481.04, "end": 5486.92, "text": " a fully connected graph okay something like this something like this and then"}, {"start": 5486.92, "end": 5491.16, "text": " we have it all of the edges are connected here we have a by part of a"}, {"start": 5491.16, "end": 5497.96, "text": " graph and so the the idea is to find two such edges that will minimize the the"}, {"start": 5497.96, "end": 5503.64, "text": " cost of this of this bipartite graph so hopefully that makes sense plus there is"}, {"start": 5503.64, "end": 5507.96, "text": " a constraint that you cannot pick edges that go to the same notes you have to"}, {"start": 5507.96, "end": 5512.16, "text": " cover all of the output nodes so you have to cover both this node and all of"}, {"start": 5512.16, "end": 5516.44, "text": " the other nodes and this one by doing that you literally found for each of"}, {"start": 5516.44, "end": 5521.56, "text": " the node you found some ideal match so maybe you found for this one maybe this"}, {"start": 5521.56, "end": 5526.08, "text": " one is ideal and for this one some of these ones here is ideal one so that's"}, {"start": 5526.08, "end": 5530.36, "text": " the idea that's the Hungarian match algorithm if you just think about it a"}, {"start": 5530.36, "end": 5534.8, "text": " little bit this should make sense it's not that hard okay let's get back to the"}, {"start": 5534.8, "end": 5541.8, "text": " code after we run that we'll end up with a set of indices and you will see what"}, {"start": 5541.8, "end": 5549.68, 
"text": " we get here basically you can see that the bounding box 17 is matched with the"}, {"start": 5549.68, "end": 5554.64, "text": " bonding ground truth bonding box one the bonding box 66 is matched with the"}, {"start": 5554.64, "end": 5558.92, "text": " ground box zero and all of this is for the first image in the batch and then"}, {"start": 5558.92, "end": 5564.6, "text": " for the second image we have four and the 11 are matched seven and one twenty six"}, {"start": 5564.6, "end": 5569.32, "text": " and ten etc etc so we got our matching that's it that's the Hungarian matching"}, {"start": 5569.32, "end": 5572.88, "text": " that's the second component hopefully this was clear enough let's continue on"}, {"start": 5572.88, "end": 5577.2, "text": " now we just need to calculate the losses and we are pretty much done that's"}, {"start": 5577.2, "end": 5580.88, "text": " that's everything there is to data so we just calculate the number of boxes so we"}, {"start": 5580.88, "end": 5585.32, "text": " can use that for normalization nothing fancy there is gonna be 16 for our"}, {"start": 5585.32, "end": 5591.719999999999, "text": " particular batch so that's 16 and now we do the loss computation so let's dig"}, {"start": 5591.719999999999, "end": 5596.12, "text": " into this so we have three losses we're calculating one is labels one is boxes"}, {"start": 5596.12, "end": 5599.759999999999, "text": " we'll see that boxes actually encompasses two losses and then"}, {"start": 5599.759999999999, "end": 5604.32, "text": " cardinality is not a loss per se but that's just an implementation detail"}, {"start": 5604.32, "end": 5610.28, "text": " let's let's kind of dig into this okay so loss labels is the first loss what do"}, {"start": 5610.28, "end": 5615.5199999999995, "text": " we do here we grab the prediction the predicted logits so that's gonna be the"}, {"start": 5615.5199999999995, "end": 5620.16, "text": " output of our decoder stack and the shape is gonna be this one okay so we"}, {"start": 5620.16, "end": 5627.04, "text": " have hundred tokens nine two logits and we have two batches we grab this we call"}, {"start": 5627.04, "end": 5633.04, "text": " this function which is going to basically give us how we should index"}, {"start": 5633.04, "end": 5639.5, "text": " this this output here so as to get those boxes we just found using the Hungarian"}, {"start": 5639.5, "end": 5643.72, "text": " matching algorithm so those predict the boxes that are matched to the ground"}, {"start": 5643.72, "end": 5648.66, "text": " truth boxes that's what this IDX index is gonna contain so you can see here"}, {"start": 5648.66, "end": 5655.16, "text": " box 1766 all the way through 99 those are the ones that were matched with with"}, {"start": 5655.16, "end": 5659.68, "text": " our ground truth boxes next up here we are just going to take the indices we"}, {"start": 5659.68, "end": 5663.92, "text": " got from our Hungarian matching algorithm we're just gonna focus on the"}, {"start": 5663.92, "end": 5669.96, "text": " ground truth indices ground truth bounding boxes indices let me remind you"}, {"start": 5669.96, "end": 5674.04, "text": " so this is the ground truth bounding box indices we're just gonna focus on this"}, {"start": 5674.04, "end": 5679.28, "text": " one and on this one and why we do that is this is just gonna be a permutation"}, {"start": 5679.28, "end": 5686.36, "text": " for the labels so we want to now permute the ground truth labels so that we have"}, {"start": 5686.36, "end": 5692.68, 
"text": " exact match with the predicted boxes that match those ground truth boxes so"}, {"start": 5692.68, "end": 5698.0, "text": " let me go through that so we end up with basically 16 classes there that are"}, {"start": 5698.0, "end": 5704.0, "text": " permuted in the exactly the right manner next up we create this tensor of target"}, {"start": 5704.0, "end": 5709.72, "text": " classes that's going to be of the following shape so 200 and it's"}, {"start": 5709.72, "end": 5716.320000000001, "text": " basically going to contain 91 across all so it's gonna just contain 91s"}, {"start": 5716.320000000001, "end": 5722.4400000000005, "text": " everywhere and that basically means that currently these are the classes that are"}, {"start": 5722.44, "end": 5726.759999999999, "text": " output output bounding boxes should should should have but then we just"}, {"start": 5726.759999999999, "end": 5731.4, "text": " copy paste the target the actual target classes so now let me print you that the"}, {"start": 5731.4, "end": 5736.32, "text": " target classes here you can see that some of them are now correctly whoops"}, {"start": 5736.32, "end": 5743.04, "text": " what have I done man let me just do that again and not click anything so let me"}, {"start": 5743.04, "end": 5748.0, "text": " show you what we have here so now you can see that we basically for the match"}, {"start": 5748.0, "end": 5753.44, "text": " boxes so let's find the first one so 53 this is the first predicted box it's"}, {"start": 5753.44, "end": 5760.8, "text": " gonna be 0 1 2 3 4 so the fourth predicted box should be of class 53"}, {"start": 5760.8, "end": 5765.7, "text": " because that predicted box is matched with a certain ground truth box okay"}, {"start": 5765.7, "end": 5770.08, "text": " that's that's the basic idea again I'm just gonna kind of skim through this"}, {"start": 5770.08, "end": 5773.2, "text": " this is a lot of details already but hopefully this gives you some"}, {"start": 5773.2, "end": 5777.24, "text": " understanding and now we just do the cross entropy loss which is going to"}, {"start": 5777.24, "end": 5782.76, "text": " force such that all of these classes have a probability that goes up to 1 and"}, {"start": 5782.76, "end": 5789.32, "text": " we also have this waiting for the all of these different classes such that the"}, {"start": 5789.32, "end": 5795.639999999999, "text": " no object class which is this 91 thing has lesser weight in the cross entropy"}, {"start": 5795.639999999999, "end": 5799.96, "text": " which makes sense because you can see how many of these are there okay let me"}, {"start": 5799.96, "end": 5805.48, "text": " quickly show you in the one note what this exactly means so just to make it a"}, {"start": 5805.48, "end": 5811.2, "text": " bit clear imagine we have our output here let me just change the color here"}, {"start": 5811.2, "end": 5819.799999999999, "text": " let me first erase this thing so this is our output we have hundred outputs here"}, {"start": 5819.799999999999, "end": 5825.04, "text": " I'm gonna just picture that a bit differently so imagine we have hundred"}, {"start": 5825.04, "end": 5832.799999999999, "text": " outputs here we have I'm gonna just write down maybe four okay and imagine"}, {"start": 5832.8, "end": 5839.24, "text": " we have two bonding boxes so we have one blue one and we have one that's a green"}, {"start": 5839.24, "end": 5848.96, "text": " one so now imagine that we matched this one with this one and imagine that we've"}, {"start": 5848.96, "end": 
5857.16, "text": " matched maybe the second one with this one so if the class for this one is 23"}, {"start": 5857.16, "end": 5863.72, "text": " and is the class for this one is like 15 let me change the color so if the class"}, {"start": 5863.72, "end": 5873.68, "text": " for this one is 15 we'll want to make sure that the logits here so that for"}, {"start": 5873.68, "end": 5880.48, "text": " the whatever the 15 indexes we will have like 92 here so this is gonna be 92"}, {"start": 5880.48, "end": 5886.0, "text": " outputs we'll want to make sure that we push the distribution such that we have"}, {"start": 5886.0, "end": 5889.76, "text": " a peak around 15 because that's the ground truth class so that's what this"}, {"start": 5889.76, "end": 5894.28, "text": " loss is going to enforce it's going to enforce that the logic distribution for"}, {"start": 5894.28, "end": 5901.32, "text": " this particular output token right because remember we have output tokens"}, {"start": 5901.32, "end": 5906.96, "text": " which then we map into both the boxes as well as this distribution so this is"}, {"start": 5906.96, "end": 5911.36, "text": " gonna push that distribution so that we have the correct class similarly for"}, {"start": 5911.36, "end": 5916.16, "text": " here so we'll make sure that for this particular class for this particular"}, {"start": 5916.16, "end": 5926.36, "text": " token that at 23 we're gonna have a peak so that's what this this loss is about"}, {"start": 5926.36, "end": 5934.839999999999, "text": " hopefully this kind of helps it a little bit okay let's continue on and see the"}, {"start": 5934.839999999999, "end": 5938.92, "text": " second loss the second loss is gonna be the boxes loss so I'm just gonna kind of"}, {"start": 5938.92, "end": 5944.28, "text": " quickly go through this we get the predicted boxes here we get the target"}, {"start": 5944.28, "end": 5948.52, "text": " boxes we just calculate the loss between the sources boxes and the target boxes"}, {"start": 5948.52, "end": 5953.56, "text": " by doing the L1 I already explained that in one note and then we just do some"}, {"start": 5953.56, "end": 5957.52, "text": " normalization with the number of boxes we do a similar thing here so just"}, {"start": 5957.52, "end": 5962.4800000000005, "text": " generalized box IOU we hit that's the second component of the loss and we just"}, {"start": 5962.4800000000005, "end": 5966.36, "text": " do some realization and those are the two additional losses that we we have"}, {"start": 5966.36, "end": 5971.04, "text": " that's it and the final one is this cardinality that is just going to tell"}, {"start": 5971.04, "end": 5976.28, "text": " us how many of the of the how many errors do we have when it comes to"}, {"start": 5976.28, "end": 5985.2, "text": " computing the no object boxes okay so basically it tells us for each of the"}, {"start": 5985.2, "end": 5993.36, "text": " hundred predictions how many of those were different from the no object class"}, {"start": 5993.36, "end": 5997.679999999999, "text": " so that's what this this thing here does so let me show you let me print this"}, {"start": 5997.679999999999, "end": 6003.16, "text": " here you can see that because the model is still not trained pretty much all of"}, {"start": 6003.16, "end": 6007.799999999999, "text": " these IE none of these predicts the no object bounding box because currently"}, {"start": 6007.799999999999, "end": 6016.5599999999995, "text": " is just kind of random so basically if I were to print this 
thing let me see what"}, {"start": 6016.5599999999995, "end": 6022.08, "text": " we get so as you can see none of these is predicting 91 which is the no object"}, {"start": 6022.08, "end": 6027.48, "text": " class and that's where we get hundred hundred here and now we just do l1 loss"}, {"start": 6027.48, "end": 6034.76, "text": " between that and the target lengths so we expect 16 boxes 2 and 14 and so that's"}, {"start": 6034.76, "end": 6039.6, "text": " it like just calculating that type of discrepancy between between the ground"}, {"start": 6039.6, "end": 6044.36, "text": " truth and the predictions nothing fancy there that's it guys we just repeat the"}, {"start": 6044.36, "end": 6048.32, "text": " same procedure for all of the other five layers of the decoder I'm just gonna"}, {"start": 6048.32, "end": 6055.04, "text": " skip all of that okay here we are those losses are not computed and finally we"}, {"start": 6055.04, "end": 6060.24, "text": " just use that a weighted dictionary to sum up so for all of the losses as you"}, {"start": 6060.24, "end": 6063.679999999999, "text": " can see here we go through the radio dictionary we go through the last"}, {"start": 6063.679999999999, "end": 6067.48, "text": " dictionary and if if we have a corresponding weight in our weighted"}, {"start": 6067.48, "end": 6072.12, "text": " dictionary we just multiply those and we sum them up and we get losses so you"}, {"start": 6072.12, "end": 6075.799999999999, "text": " can imagine that this is like probably the most complex loss I've seen in my"}, {"start": 6075.8, "end": 6080.84, "text": " life like let me let me see what's the length of this thing number of keys here"}, {"start": 6080.84, "end": 6087.400000000001, "text": " 18 so we have like 18 losses in total here being computed that's it and after"}, {"start": 6087.400000000001, "end": 6094.2, "text": " this this is just some boilerplate to be honest we just grab the loss final loss"}, {"start": 6094.2, "end": 6098.72, "text": " we do the zero grads so clean the gradients of the associated parameters"}, {"start": 6098.72, "end": 6102.2, "text": " in the in the torch computational graph we do the backward which is gonna"}, {"start": 6102.2, "end": 6107.32, "text": " compute the gradients for the last we just computed we just do some gradient"}, {"start": 6107.32, "end": 6115.96, "text": " clipping with 0.1 max norm and we do a step of the optimizer guys that's it"}, {"start": 6115.96, "end": 6122.8, "text": " this is like the longest video I ever made I'm experimenting I am super"}, {"start": 6122.8, "end": 6127.44, "text": " curious to know how many of you followed through the whole video this is probably"}, {"start": 6127.44, "end": 6132.32, "text": " the last video I'll make that this is that's this long unless I get a very"}, {"start": 6132.32, "end": 6135.919999999999, "text": " good feedback on this one because it takes a lot of time to prepare these and"}, {"start": 6135.919999999999, "end": 6141.16, "text": " it takes a lot of time to film and to edit super huge videos such as this one"}, {"start": 6141.16, "end": 6146.879999999999, "text": " in any case I really hope you learn something new I would super appreciate"}, {"start": 6146.879999999999, "end": 6151.12, "text": " your feedback do let me know if you like this video please share it out that's"}, {"start": 6151.12, "end": 6155.08, "text": " the best way you can help out this channel and finally subscribe to this"}, {"start": 6155.08, "end": 6161.16, "text": " channel if you 
haven't in any case until next time bye bye"}]
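The DETR walkthrough captured in the timestamped segments above centers on the Hungarian matching step: build a cost matrix between the 100 predicted boxes and the ground-truth boxes of each image, then solve the assignment problem. The snippet below is a rough, simplified sketch of that idea rather than the repository's actual code: the `hungarian_match` name and the restriction to a classification cost plus an L1 box cost (the GIoU term DETR also uses is omitted) are my own simplifications, and the 1/5 weights echo the weights mentioned in the walkthrough.

```python
# Minimal sketch (not the repo's exact matcher) of Hungarian matching for one image:
# combine a classification cost and an L1 box-distance cost, then solve the assignment.
import torch
from scipy.optimize import linear_sum_assignment

@torch.no_grad()
def hungarian_match(pred_logits, pred_boxes, tgt_labels, tgt_boxes,
                    w_class=1.0, w_bbox=5.0):
    """pred_logits: [Q, C+1], pred_boxes: [Q, 4] (cx, cy, w, h),
    tgt_labels: [T], tgt_boxes: [T, 4] -- a single image at a time."""
    prob = pred_logits.softmax(-1)                       # [Q, C+1]
    cost_class = -prob[:, tgt_labels]                    # [Q, T]: higher prob -> lower cost
    cost_bbox = torch.cdist(pred_boxes, tgt_boxes, p=1)  # [Q, T]: L1 distance between boxes
    cost = w_class * cost_class + w_bbox * cost_bbox
    pred_idx, tgt_idx = linear_sum_assignment(cost.cpu().numpy())
    return pred_idx, tgt_idx   # prediction pred_idx[k] is matched to ground truth tgt_idx[k]

# Toy usage: 100 query predictions, 2 ground-truth boxes (e.g. classes 7 and 1).
logits, boxes = torch.randn(100, 92), torch.rand(100, 4)
labels, gt_boxes = torch.tensor([7, 1]), torch.rand(2, 4)
print(hungarian_match(logits, boxes, labels, gt_boxes))
```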
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=jwZQD0Cqz4o
OpenAI CLIP | Machine Learning Coding Series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE Kicking off a series of videos where I'll be going through the actual code of many of the papers I've covered over the last few years! In this video I do a code walkthrough of OpenAI's CLIP model from the "Learning Transferable Visual Models From Natural Language Supervision" paper. Let me know what you'd like me to cover next! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GitHub: https://github.com/openai/CLIP ✅ Paper: https://arxiv.org/abs/2103.00020 Learn about Byte-Pair Encoding: ✅ https://en.wikipedia.org/wiki/Byte_pair_encoding ✅ https://leimao.github.io/blog/Byte-Pair-Encoding/ ✅ Video: https://www.youtube.com/watch?v=tOMjTCO0htA&ab_channel=FromLanguagestoInformation ✅ https://www.youtube.com/watch?v=zjaRNfvNMTs&ab_channel=AbhishekThakur Unicode: ✅ https://dmitripavlutin.com/what-every-javascript-developer-should-know-about-unicode/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro 00:02:00 High level overview: Interacting with CLIP 00:26:11 High level overview: Prompt engineering for ImageNet 00:40:25 Deep dive starts: vocabulary and byte-pair encoding 00:49:00 Vision Transformer & Text Transformer explained 01:02:00 Tokenization walkthrough 01:09:25 Encoding the image 01:15:15 Encoding the text 01:23:15 Learning a linear probe 01:27:00 Tokenization of the (brain emoji) 01:29:56 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #CLIP #contrastive #codewalkthrough
What's cracking guys? In this video I'm doing something a bit different. I'll be kicking off a series of videos where I'll be going through the code of various papers that I've covered over the last months, and I'll be starting with OpenAI's CLIP. Basically I cloned OpenAI's CLIP implementation, created a conda environment locally, and I'll be debugging and stepping through the code in VS Code to show you how CLIP works. If you don't want to go through the hassle of creating the environment, you can just use their Colab here: the environment is already set up, you can copy-paste some of the code from here into the Colab, and that way you avoid the conda environment creation, which I won't be explaining in this video. Having said that, let's jump into the actual code. First I'll go through the Jupyter notebooks to give you a high-level understanding of how CLIP works, and then we'll literally step through the code and see how the details of CLIP are implemented: vision transformers, transformers, contrastive loss, all of that. Actually, the training details are not implemented in OpenAI's open-source code; there is a separate repository that does that, so I'll be skipping that part, but it's probably less relevant for what I'm trying to explain here. I did create a video covering CLIP, so do check that one out if you want the theoretical understanding. I won't be combining paper and code right now, I'll just be focusing on code, so if you haven't watched that one, go ahead and watch it, I'll link it somewhere here. Anyways, let's start here and import a couple of things. VS Code is fairly annoying with these prompts, but it's probably the nicest environment for debugging the code, both visually and because of the many cool features, which you'll hopefully see over this video. So I imported the necessary dependencies, numpy, torch, etc. Let's see which models we have available. Again, this is how CLIP is constructed: you have the visual pathway and the textual pathway, so to speak. The visual one encodes the image into a vector, the textual side takes a caption and embeds it into a vector, and then you just look at the similarity, the dot product between normalized embeddings. That's CLIP on a high level. As you can see, they've been using various visual baselines: here are some modified versions of ResNets, and they also use a ViT. We'll be using the ViT in this Jupyter notebook. I can also go through the definition and show you how this looks. They've defined this models dictionary, a key-value dictionary, where the key is the model name and the value is the URL where the model is stored, and what this function shows us is all of the available pre-trained models. Okay, let's get back here and run this cell. First it's going to load the ViT-B/32, so patches of 32 by 32, and it's going to give us the model and this preprocessing structure that contains the PyTorch transforms we'll be using to transform the images, like cropping and so on. We'll see that in a second.
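For readers following along outside the video, this is roughly the setup being described, as a minimal sketch: listing the pretrained checkpoints the clip package ships and loading the ViT-B/32 variant together with its preprocessing transforms. It assumes the openai/CLIP repo is installed (e.g. `pip install git+https://github.com/openai/CLIP.git`).

```python
# List available pretrained CLIP models and load ViT-B/32 with its preprocessing pipeline.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
print(clip.available_models())             # e.g. ['RN50', 'RN101', ..., 'ViT-B/32', ...]
model, preprocess = clip.load("ViT-B/32", device=device)
print(preprocess)                          # Resize, CenterCrop, convert to RGB, ToTensor, Normalize
```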
So let's run this cell and see what's going on. Okay, it's going to count the number of parameters in the model. We can see we have 151 million parameters in this ViT. The input resolution the ViT expects is 224 by 224, and they've artificially set the context length to 77, which means the number of tokens going into the text-encoding pathway will always be 77, so that's fixed. The vocab size is, as you can see here, roughly 49,000, and they are using BPE, that is byte-pair encoding. I'll link some resources on BPE which you can check out, because I won't get into the details here, otherwise I could create a whole video explaining how that thing works. Okay, so this is some metadata about the model. Let me just run this to see which transformations we're using. We have a resize to 224, because images come in arbitrary aspect ratios and resolutions and we want them in the format the ViT expects, which is 224 by 224. So we resize the image to 224, then do the center crop, then convert the image into RGB, convert it into a tensor, and then do the normalization. These are, I assume, just ImageNet-like means and standard deviations: you literally go through the training set, collect these statistics, find the mean and the standard deviation of the images, and use them to center the input images before feeding them into the ViT portion of CLIP. Okay, let's continue. First we're going to tokenize the sentence "Hello World", and we'll later see how this tokenization works. As I said, it's using BPE in the background. If we run this, we expect to get a list of integers. You can see the result here, this is the BPE magic: we get the start-of-sentence token, the end-of-sentence token, and the subtokens that BPE broke this sentence into. And it's padded out because, as I said, it's going to have 77 tokens, since that's the constraint they are artificially using. Let me store this into a temporary variable and print its shape; it's going to be, yeah, as you can see, 77. Okay, so let's get to this part. What they do here is create this dictionary of descriptions: the key is the name of an image and the value is the associated caption. We'll be using these names to fetch the images from, I think, scikit-image or something, you'll see that in a second. So let me run this cell and form the dictionary, and now let's get here. Okay, what's going on here? As you can see, we're going to iterate through scikit-image; the scikit-image package has certain images that come pre-installed with it, so we iterate through those file names, and if a file name ends with PNG or JPEG, meaning it's an image, we enter the loop. Then we grab the actual name, and if it's not in the descriptions we just skip it, because we only want to extract these eight images. If it is in the descriptions, we grab the path of that image, open it up as a Pillow image and convert it into RGB. Then we just show all of those on a nice figure here.
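As a rough reproduction of the model-stats and tokenization cells discussed above, a sketch like the following should work; the attribute names (`visual.input_resolution`, `context_length`, `vocab_size`) are the ones the openai/CLIP model object exposes in the notebook, but treat this as illustrative rather than verbatim notebook code.

```python
# Print basic model metadata and inspect the BPE tokenization of a short sentence.
import numpy as np
import torch
import clip

model, preprocess = clip.load("ViT-B/32")
n_params = np.sum([int(np.prod(p.shape)) for p in model.parameters()])
print(f"parameters: {n_params:,}")                            # ~151M for ViT-B/32
print("input resolution:", model.visual.input_resolution)     # 224
print("context length:", model.context_length)                # 77
print("vocab size:", model.vocab_size)                        # ~49k BPE tokens

tokens = clip.tokenize("Hello World!")
print(tokens.shape)   # torch.Size([1, 77]): SOT token, BPE subtokens, EOT token, zero padding
```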
I won't get into the Matplotlib details here. We're just going to store the PIL images as well as the processed images and the associated captions. So for the text, the descriptions dictionary is here, we index into it with the keywords, and we fetch the associated captions. Let me run this, enough rambling. You can see here the images we got from the scikit-image package, and, well, let me see whether I can change the color here. I'm not sure whether you can see the associated caption, but I can read one for you: "a portrait of an astronaut with the American flag", that's this one, and then "a person looking at a camera on a tripod", blah, blah, blah. So we now have the dataset as CLIP expects it: a small eight-image dataset of images and associated captions. Now let's see how CLIP is going to use these. First of all, we're going to stack the processed images. Remember, those are the ones where we've applied the preprocessing, so the cropping, conversion to tensor, normalization, all of those goodies. We stack them into one big tensor, which we then move to CUDA, which basically means pushing the tensor onto your GPU. Let me run this, and let me now create another cell here. Whoops, that's not what I wanted to do, let me just create a cell. Okay, mixing up shortcuts from Colab, so I'll just manually add a code cell here. What I want to show you is just the shape, because always understanding the shapes of your objects is super important. Let me run this: 8, 3, 224, 224. So that's your image, and because we stacked eight of those, we get a torch tensor like this. Similarly, let me do the same thing for the text tokens. What we've done here is we've gone through the captions, and we create this artificial prompt, "This is ", and then we append the captions we had here, so "This is a page of text about segmentation", etc. Then we tokenize that, which means we get those 77 context tokens. Let me run this and see what we get: we have 8, 77. Eight because we have eight captions, and 77 because that's the context length the textual transformer pathway in CLIP expects. Okay. Finally, we're going to pass the image tensor to the CLIP model. Model is just CLIP, and we call this encode_image function. We'll see later what exactly it does, but it's basically just a forward prop of the images through your ViT, and as the output we'll get, I guess, a shape of something like 8 and then I think either 512 or 768 or something like that, we'll see in a second. And here, similarly, we'll pass the text tokens, and I expect we'll get something like 8, 512, because we need to be able to compare those, to do the dot product between the textual and visual embeddings. So let me run this, and then let me test the hypothesis whether I got the shapes right by heart. Okay, let me add an additional code cell here and just print the shapes of these objects. The shape is 8, 512, that's correct, and then this must be the same. Let me run this: 8, 512. Okay.
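The encode step just described can be sketched as below. To keep the snippet self-contained it uses a dummy grey image repeated eight times and a single repeated caption; in the notebook the inputs are the eight scikit-image pictures and their hand-written descriptions.

```python
# Encode a batch of preprocessed images and tokenized captions into 512-d CLIP embeddings.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

dummy = Image.new("RGB", (640, 480), color=(128, 128, 128))          # stand-in for a real photo
image_input = torch.stack([preprocess(dummy)] * 8).to(device)        # [8, 3, 224, 224]
text_tokens = clip.tokenize(
    ["This is a page of text about segmentation"] * 8).to(device)    # [8, 77]

with torch.no_grad():
    image_features = model.encode_image(image_input).float()         # [8, 512]
    text_features = model.encode_text(text_tokens).float()           # [8, 512]
print(image_features.shape, text_features.shape)
```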
Again, pretty simple here, because on the text side the transformer will just take whatever the feature vector is above the end-of-sentence token; we fetch that vector and that's what we use as the embedding for that particular caption. We'll see the details a bit later. That's it. The next step is normalization: we normalize all of these embedding vectors, visual and textual, and then we do a matrix multiply, which computes dot products between all combinations of visual and textual embeddings, and that gives us the similarities between those images and all of these captions. Let me run this first and then analyze what we're getting out of it. The similarity should be 8 by 8, so I'm expecting shape 8 by 8, because for each image we want to know the similarity score between that image and all of the captions. Let's see whether that's correct: if I print the similarity shape here and run this, it's 8 by 8 as expected. Okay, so what is this thing about? The best thing to do, since we have interactivity here in Jupyter, is to just use that fact. Let me print this and see what we get. Basically what this does is find, I assume, the L2 norm for each of the embedding vectors and then divide; as you can see here, we're just going to divide the vectors by their L2 norm. So if we have a vector x, we convert it into x divided by the L2 norm of x, and that's just a normalization that makes the norm equal to 1, so all of your vectors will have norm 1. Okay, let's run this cell and see what we get. Well, because I already ran the cell, all of the norms are already 1, so not that interesting. Let me just rerun this for a second, rerun the image features and text features to get the original ones, and now, before I run this cell, let's see what we get: you can see the associated L2 norms for each of the eight visual embedding vectors, and we just divide by those L2 norms to normalize the vectors. We do a similar thing for the text. Let me see if I can go to the definition and... okay, no definition found for this, so I guess we'll use some good old Google search. Let's see what's going on here: torch.norm. Let me see what the default is. The default order of the norm is the Frobenius norm, which is just a fancy way of saying L2 for a matrix: it squares all of the elements, sums them up, and takes the square root. We can quickly verify that hypothesis, let's create some artificial vector. Let me maybe create something like this, a dummy tensor. My PyTorch skills are a bit rusty, so hopefully I get everything correct here. torch.tensor, or just zeros... can we do zeros? Well, that won't be that interesting. Let me think what we can do here. Okay, I'm just going to use ones, and for the size, let's say we just have a vector of size 2.
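Two quick checks related to this passage, as a small sketch: torch.norm with default arguments behaves as the Frobenius/L2 norm (so a vector of two ones gives sqrt(2) ≈ 1.41), and dividing a feature matrix by its per-row L2 norm leaves every row with unit length, after which a single matmul with a transpose yields the 8 by 8 matrix of pairwise cosine similarities. Random tensors stand in for the real CLIP embeddings here.

```python
# Verify the default norm and the L2-normalization + matmul pattern used for similarities.
import torch

print(torch.norm(torch.ones(2)))                      # tensor(1.4142) == sqrt(2)

image_features = torch.randn(8, 512)                  # stand-ins for the real CLIP embeddings
text_features = torch.randn(8, 512)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)

similarity = text_features @ image_features.T         # [8, 8]: row i = caption i vs all 8 images
print(similarity.shape, image_features.norm(dim=-1))  # shape checks out, norms are all 1
```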
So it's going to be one, one, and if it's the L2 norm we expect the square root of 2, which is roughly 1.41. If I now pass the dummy tensor here and find the norm, the dimension argument and all of that is not that important, but you see, we basically have L2 here, and that's pretty much it. Now this thing is probably interesting: we take those tensors, push them to CPU, convert them to numpy, and then do a matrix multiplication, with a transpose. So why does that make sense? Well, if we think about it, text features is 8 by 512, and we do a matrix multiplication with a matrix that's 512 by 8 because of the transpose, and that obviously gives us 8 by 8. What you're doing is taking a textual embedding, and by multiplying you're doing dot products, because you're multiplying a row with a column, and the column is a visual embedding vector. If you do all of that, you get the similarities between this particular text vector and all of these image vectors. Hopefully that makes sense. We can finally print the similarity, and do let me know whether you find this type of explanation useful, any feedback is super useful because this is the first time I'm doing something like this. As you can see, these are similarities, so they are not normalized, which means if I do a summation, will this work, axis minus one, and print this, you can see it's not one, because we haven't applied a softmax or anything. That's what we get here. Okay, let's continue and see what we have here. Descriptions, that's our dictionary, we'll get, I guess, eight here, and this is going to plot something. Let's see what we're doing: for the original images we plot the image, and then we plot the scores. Okay, let's just run the cell, it's going to be easier to understand once we've plotted it. As you can see, what they've done here is plot the images here, plot the captions here, and then plot this similarity matrix. So we're basically visualizing the similarity matrix I printed a couple of cells before, and you can see that, as expected, because this is a trained CLIP model, the biggest similarity is between an image and its correct caption, and that's why we have this diagonal here. That's pretty much it. I'll ignore the details of this code because it's just Matplotlib idiosyncrasies; what I'm going to focus on is the semantics and what the CLIP model is doing. So this is it, let's see what else. Now we're going to load the CIFAR-100 dataset, so that's probably going to take a while... oh okay, I already have it cached, that's why it was so fast. So we have the CIFAR-100 dataset, which is just an image dataset with a hundred classes, hence the suffix 100. What we're going to do is iterate through all of those classes and form a caption out of each class. If you read the paper, this is how you can do zero-shot image classification: we form a caption like "This is a photo of a" and then the label, for example horse, dog, or whatnot. I'm not sure what the CIFAR-100 classes are, so we can print that.
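A small sketch of the caption construction just described, assuming torchvision's CIFAR100 dataset (its `.classes` attribute holds the 100 class names); the download path is arbitrary.

```python
# Build one "This is a photo of a ..." caption per CIFAR-100 class and tokenize them.
import clip
from torchvision.datasets import CIFAR100

cifar100 = CIFAR100(root="./data", train=False, download=True)
print(cifar100.classes[:5])        # ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver']

captions = [f"This is a photo of a {label}" for label in cifar100.classes]
text_tokens = clip.tokenize(captions)
print(text_tokens.shape)           # torch.Size([100, 77]): 100 captions, context length 77
```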
I can add a cell here and print the CIFAR-100 classes. Let's see what's going on if I run this: apple, aquarium fish, baby, bear, beaver, bed, and so on. Nice. So, as I said, we form captions from the classes like this and then do the tokenization. The shape of this should be 100 by 77, because we have a hundred labels, hence a hundred captions, and 77 is the expected context length. Let me run this and print text_tokens.shape just to be sure - yeah, as expected. Now we do the same thing as before: we pass these tensors - okay, wait, is this actually a tensor? Let me check the type. Okay, it says it's a torch tensor. We pass that to encode_text, so we end up with, I assume, 100 by 512, because each of those captions gets embedded into a vector. We get 100 by 512, then we do the normalization as before, which means all of those embedding vectors will have norm equal to one, where by norm I mean the L2 norm in particular. And then, again, image features times text features - we're not touching the images, we're still using those eight images from before, and we're just trying to find the most similar CIFAR-100 class for each of them. This computes the pairwise similarity score between each of the eight images and each of the hundred captions. We multiply by 100 - that factor effectively plays the role of the logit scale, the temperature - and then we apply softmax. That means the scores now sum to one for a particular image: the sum of the similarity scores between that image and all of the captions equals one. Okay, let me run this. Finally, let me print what we get: if I take text_probs and print the shape, it should be 8 by 100 - yeah. If I print the actual text_probs it's not that intelligible, just a bunch of small numbers. So this vector here is the similarity score between the first image and all of the hundred CIFAR-100 classes. Now we're trying to find the maximum, because that's the highest similarity, and we do that with topk: we find the five highest similarity scores for each of the images, and that's what we get here. Let me print the top probs and the top labels - actually, just the top labels. These are the indices of the CIFAR-100 classes that have the highest similarity score for the associated image. Hopefully that makes sense. Okay, having done that, let me run this final cell, which goes through the original images, plots each image, takes the probability scores, and just makes bar plots with those scores and the associated labels. Again, let me run this because it's too much detail to analyze in a video, but basically this is what happens: we plot the images and the probabilities, as well as the labels we just found in the cell above. That's what this magic here does.
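For reference, the zero-shot step we just ran boils down to roughly this (a sketch: `model`, `preprocess` and the normalized `image_features` from the earlier cells are assumed, and variable names follow the notebook loosely):

```python
import clip
import torch
from torchvision.datasets import CIFAR100

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
cifar100 = CIFAR100(root="~/.cache", download=True, train=False)

captions = [f"This is a photo of a {label}" for label in cifar100.classes]
text_tokens = clip.tokenize(captions).to(device)               # (100, 77)

with torch.no_grad():
    text_features = model.encode_text(text_tokens).float()     # (100, 512)
text_features /= text_features.norm(dim=-1, keepdim=True)

# image_features: (8, 512) normalized image embeddings from the earlier cells (assumed here).
# (8, 512) @ (512, 100) -> (8, 100); the 100x factor acts as the logit scale / temperature.
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
top_probs, top_labels = text_probs.topk(5, dim=-1)             # 5 best classes per image
```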
You can see that for this image the closest class in CIFAR-100 is woman; the closest class for this picture here is man; the closest class for the cat image is sweet pepper - okay, a false positive there. Then we have cup here - that's fairly decent - then motorcycle, then cattle, which I guess is the closest thing they have. Let me check whether there's a horse class: I'll print whether 'horse' is in the CIFAR-100 classes, and it's not. So we don't have a horse label, which means cattle is probably the closest label we could get; motorcycle makes sense again. Then bed - that's complete gibberish unless there is some word 'bed' or something inside the image. Let me zoom in - no, this is probably just a false positive. And finally here we have a rocket, which is also a good class. That's it, that's the first Jupyter notebook. Do let me know how you're liking this video so far; comment down below if you find it useful. Okay, now let's go to the second Jupyter notebook. We'll go through this one fast because it's much simpler, I assume. Let me run the cell - okay, the available models again; we're just going to load ViT-B/32, the same model we used in the previous notebook: 151 million parameters, 224 by 224 input resolution, context length 77 for the transformer, and the vocab size as shown. Now, here's what they've done: they have the thousand ImageNet classes, with a small caveat - they've actually modified some of the class names so that CLIP has an easier time recognizing those images. I think they mention it here; let me read it through: a subset of these class names are modified from the default ImageNet class names sourced from Anish Athalye's imagenet-simple-labels. These edits were made by trial and error and concentrated on the lowest performing classes according to top-1 and top-5 accuracy on the ImageNet training set for these variations of the models. These tweaks improved top-1 by 1.5% on ViT-B/32 over using the default class names. Alec - that's Alec Radford from the CLIP paper, I assume - got bored somewhere along the way as gains started to diminish and never finished updating and tweaking the list. He also didn't revisit this with the better performing models or any of the ViT models; he thinks it's likely another 0.5% to 1% top-1 could be gained from further work here, and it would be interesting to more rigorously study and understand this. So this is super interesting. It means that by tweaking the class names and then embedding them into those caption templates we saw - so this is the template we're using - we form better text embeddings, and consequently those make for a better zero-shot classifier on top of our image embedding vectors. Let me quickly explain what I mean by that before proceeding. Imagine what we've done here: we took a hundred classes, embedded them into captions, so we have a hundred strings; we formed embedding vectors for each of the captions, so after embedding we end up with a 100 by 512 matrix. Let me explain why this is so cool.
I'm going to open up my OneNote for a second and draw a couple of things. Okay, so let's see what we've done. We embedded our image into an embedding vector with 512 dimensions. Then, for the classes, we formed a collection of embedding vectors: a hundred of them, each 512-dimensional as well. Now, because of the way we calculate the similarity scores, we do a dot product between the image vector and the first class embedding, then a dot product between the image vector and the next class embedding - the next representation of a class - and so on. And if you think about what you're doing when you take dot products with these weights: this collection represents the weights of an ad hoc MLP, a zero-shot MLP used as a prediction head to predict the classes. So again, if you do a dot product between the image vector and the first class embedding, you form one logit; then you take the next class embedding - let me change the color - do the dot product, and you do the same for the third vector. I'm drawing this as if it had four dimensions, but it has 512, so just bear with me. As you can see, these class embeddings are basically the weights of this particular MLP. You ad hoc construct the prediction head, and by doing that you can take an arbitrary dataset - literally an arbitrary dataset - take its classes, form the captions, embed those captions by doing a forward pass through the transformer, and you get an extra MLP: a set of N class vectors, each of dimensionality 512, and you can use that to classify the image into N classes. That's super cool, and that's why CLIP is such an awesome model - because of all of these zero-shot capabilities. Having said that, let's go back to the code, back to where we stopped. Now you can appreciate why tweaking the class names can improve those prediction head weights and thus give us higher top-1 and top-5 accuracy on an arbitrary dataset. That's why they've been doing that. Okay, they give examples of some of the tweaks they've done. One example is "crane": a crane can be a construction crane, or it can actually be a bird, so a crane bird - that's why they made those modifications. Then they've done things like "nail": CLIP interprets nail as fingernail, but we actually want it interpreted as a metal nail, and that's why they explicitly changed the class to metal nail. Similarly, ImageNet's kite class refers to the bird of prey, not the flying toy, so they changed kite to kite (bird of prey), et cetera. Okay, so that's it. Next, instead of using that single template we've been using here - "This is a photo of a" - they also experimented with different templates. You see here "a bad photo of a", "a photo of many", "a sculpture of a", and so on - there's a whole creative set of these.
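To make the "ad hoc prediction head" idea concrete, here's a tiny sketch (shapes and values are invented, no real CLIP weights involved):

```python
import torch

num_classes, dim = 100, 512

# Stand-ins: one normalized image embedding and a matrix of normalized class-caption embeddings.
image_embedding = torch.nn.functional.normalize(torch.randn(dim), dim=-1)                 # (512,)
class_embeddings = torch.nn.functional.normalize(torch.randn(num_classes, dim), dim=-1)   # (100, 512)

# The class-caption embeddings act exactly like the weight matrix of a linear prediction head:
logits = class_embeddings @ image_embedding   # (100,), one dot product per class
pred = logits.argmax()                        # zero-shot predicted class index
```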
I think there are literally at least 80 of them - let me run this: yes, we have 80 templates. So they devised 80 of these templates, and now they're going to do some type of ensembling to get better results on ImageNet. Let's see how that works, and let's quickly go through the rationale behind doing this. Again: a similar intuition-guided trial and error based on the ImageNet training set was used for the templates. The list is pretty haphazard and was gradually expanded over the course of about a year of the project, and was revisited and tweaked every few months. A surprising, weird thing was that adding templates intended to help ImageNet-R performance - specifying different possible renditions of an object - improved standard ImageNet accuracy too. After the 80 templates were locked for the paper, they ran sequential forward selection over the list of 80 templates; the search terminated after ensembling seven templates and selected them in the order below. So combining these seven templates gave them the best results. Then they say, speculating, they think it's interesting to see different scales (large and small) - that's why they have templates like "a photo of the large" - a difficult view - that's why they have "a bad photo of the" - and abstract versions (origami, video game, art) were all selected for, but they haven't studied this in any detail. This subset performs a bit better than the full 80-template ensemble reported in the paper, especially for the smaller models. So, what do I mean by ensemble? We're going to see that now. Let me load the ImageNet dataset here - I think it's cached, so it shouldn't take too long. And here's what they're doing: they form a zero-shot classifier using the following technique. We pass all of the ImageNet templates and the ImageNet classes, and then they iterate through all of the classes, and for each class through all of the templates. That means a single class will this time be embedded using not a single template but 80 of them. They tokenize those and pass them to the encoder, which means after the tokenization we'll end up with 80 by 77, right? I think so - the tokenizer gives us that; we can print those shapes later and check. Then, after doing the embeddings, they get 80 by 512. And now the smart part: after doing the normalization, which is the standard thing to do, they form the final representation for that class by averaging across the embeddings for all of these different templates. Then they additionally divide by the norm of the class embedding to make sure it again has L2 norm equal to one. Then they just stack those zero-shot weights - this is going to be the zero-shot classifier I explained in OneNote a couple of minutes ago, and that's it. These are our classifier's zero-shot weights. If I run this it might take a while, but it's such a neat idea. Ensembling is a very famous technique: people use it to trade off higher performance against somewhat longer inference, more memory, et cetera.
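Here is roughly what that ensembling loop does - a paraphrase of the notebook's zero-shot classifier builder, assuming `model`, `clip`, and the `imagenet_classes` / `imagenet_templates` lists are already defined:

```python
import torch
import clip

def zeroshot_classifier(classnames, templates, model, device="cuda"):
    weights = []
    with torch.no_grad():
        for classname in classnames:
            texts = [template.format(classname) for template in templates]  # 80 captions per class
            tokens = clip.tokenize(texts).to(device)                        # (80, 77)
            class_embeddings = model.encode_text(tokens)                    # (80, 512)
            class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)
            class_embedding = class_embeddings.mean(dim=0)                  # ensemble: average the 80 embeddings
            class_embedding /= class_embedding.norm()                       # re-normalize to unit L2 norm
            weights.append(class_embedding)
    return torch.stack(weights, dim=1)                                      # (512, num_classes)

# zeroshot_weights = zeroshot_classifier(imagenet_classes, imagenet_templates, model)
```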
But just by combining them - doing this averaging, majority-voting-like logic - we get higher-quality embeddings. Okay, this ran, so now we can print what we get. Again, I'm expecting a shape of 1000 by 512, because we have 1000 ImageNet classes and embedding vectors of size 512. Let me print zeroshot_weights.shape: it's 512 by 1000 - okay, it's just stacked in a different way, that's why we get this shape. We can also quickly print the intermediate shapes: I'll print the shape of the tokenized texts, add a print for the class embeddings shape as well, and then just break out of the whole loop so it runs quickly. Whoops - okay, because of the break statement everything crashed, but we verified that this makes sense: 80 by 77, and 80 by 512. Let me now remove these statements and rerun this to get the actual zero-shot weights. While that's running, let me walk you through this part here. We have the accuracy function - I won't go through how the accuracy is calculated; you can go through that piece of code on your own. What we do here is go through batches of ImageNet images, push them to CUDA, the associated labels as well - and when I say to CUDA I mean to a GPU, if you have one. We then encode the images, which gives us 32 by 512: 32 because we specified the batch size somewhere here, and 512 because, as I said, the visual embedding vector has 512 dimensions. Then we do the regular normalization and compute the similarity - so this is again the ad hoc constructed prediction head, the zero-shot classifier head - and we get the logits. Then we pass the logits and the targets and see how well we perform in terms of top-1 and top-5 accuracy. Let me run this cell and the next one. This might take a while, so I'll just skip to the end. Okay, this ran, and we can see a top-1 accuracy of about 55 and top-5 of 83.4 just by doing this zero-shot classification. Because we're passing logits - just a minor detail - you'd expect a softmax here somewhere, so let me see how this logic works. We just grab the outputs and - okay, we don't even need the softmax: we just find the top five (or whatever k is; five in our case) logits, which correlates with the five highest probabilities if we were to apply softmax, which we won't because it would be redundant. Then it's just the usual logic - a bit convoluted because it's a one-liner, but it calculates the top-1 and top-5 accuracy. Okay, that's it. Do let me know whether you found this Jupyter walkthrough useful. Now I'm going to dig into the details of how everything works behind the scenes; hopefully you'll find this interesting as well. So let's start with this playground: I created a simple playground.py file and copy-pasted some of the examples from the readme file of their official repository, and now we're going to walk through this particular example and see the details behind how the image encoding, the tokenization, all of that works.
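Before moving on to the playground walkthrough, that evaluation loop boils down to roughly this (a sketch assuming a torchvision-style `loader` over ImageNet, the `model`, and the `zeroshot_weights` from above):

```python
import torch
from torch.utils.data import DataLoader  # loader is assumed to be a DataLoader over ImageNet

correct1, correct5, total = 0, 0, 0
with torch.no_grad():
    for images, targets in loader:                     # batches of (32, 3, 224, 224)
        images, targets = images.cuda(), targets.cuda()
        feats = model.encode_image(images)             # (32, 512)
        feats /= feats.norm(dim=-1, keepdim=True)
        logits = 100.0 * feats @ zeroshot_weights      # (32, 1000) -- the ad hoc prediction head

        # topk on logits gives the same ranking as topk on softmax probabilities,
        # so the explicit softmax is redundant here.
        top5 = logits.topk(5, dim=-1).indices
        correct1 += (top5[:, 0] == targets).sum().item()
        correct5 += (top5 == targets[:, None]).any(dim=-1).sum().item()
        total += targets.size(0)

print(f"top-1: {100 * correct1 / total:.2f}%, top-5: {100 * correct5 / total:.2f}%")
```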
So without further ado, let me run the debugger. Let me zoom in a little bit so hopefully you can see everything nicely. What's going on is that this tokenizer is created before we even hit the first line in simple(). We have a tokenizer defined somewhere at module level - let me find it, I think it's in clip or somewhere - okay, here it is, here is the constructor. So basically the constructor is called before we even hit the first line of our function in the playground. Again, let me show you: I have the main function calling simple(), and simple is this one, so you'd expect this to be the first line we hit, but that's not the case - we instead hit this one, for the reason I just explained. Okay, so the tokenizer - what's going on? I'll have to skim through some of the details because it gets fairly complicated, and to be honest I also don't understand every single detail. This bytes_to_unicode function creates a dictionary from 0 to 255 - the values a byte can take - and maps each onto a Unicode string. There is some convoluted logic about which ranges they take, et cetera; that's why I'm going to skip some of these details, but we'll analyze what we get as the output and treat it as a semi black box, if that's even a term. Okay, let me go to the debug console - and this is why Visual Studio Code is such an awesome environment. If I print this, you can see that for keys 0 to 255 we get the associated Unicode characters. The byte decoder then does the opposite: it maps from the Unicode character back onto the byte. I'm just going to skim over that. F10, and then we open this BPE path - let me see where it's stored; just a second - okay, here it is, this file, bpe_simple_vocab_16e6. So that's your BPE, the byte pair encoding; as I said, I won't get into all the details. What they've done is take some dataset - I'm not sure how to track down the actual dataset that was used to train this BPE - and run the BPE logic, which is basically: find the pair of tokens with the highest frequency, merge those two tokens into one, add the merged token to the vocab, and repeat those merges until you have the final BPE vocabulary. That's already done for us, which is why we can just read this file, decode it as UTF-8, and split on the newline characters. I'll show you in a second what these merges are doing. Let me first close this so we have more space. merges is a list - I'll show you the first five, for example. You can see here: these are the merges that happened, meaning each of these pairs of tokens was the most frequent pair in that corpus of text at some point during the BPE algorithm. If you don't know what BPE is, I strongly suggest you find some article on it.
I'm going to link one down in the video description, and then my explanations will make sense. So we have these merges now, and I'll step over this. Finally, we start forming the vocab. First we grab the Unicode strings we formed up there - let me hit F10 - so the vocab size will be 256; if I print the vocab length here, it's 256. Next - again, this is a BPE idiosyncrasy - we have to concatenate this end-of-word token: we form new tokens in the vocab by concatenating each of the existing tokens with this particular end-of-word marker. If I hit F10 and print the length now, it's 512, because we just duplicated everything. Then we go through the merges, because each of those merges is also a unique token in the BPE vocab - that's why we do this join logic here. So instead of 'i' and 'n' separately - let me see whether I can step through this quickly - okay, if I hit F10 and print the merge directly in the debug console, you can see we're going to add 'in' to the vocab. Again, that's a BPE idiosyncrasy; I'm going to skip over all of that, click F5, and get to the next line. Okay, let me collapse this - bear with me, I'll summarize the findings in a second. We then extend the vocab with two special tokens, start-of-text and end-of-text - so like start-of-sentence and end-of-sentence tokens. After adding all of that we have a pretty big vocab: I think it's almost 50k - yeah, 49,408. That's it. Now we form the encoder by zipping those vocab tokens with their indices, and the decoder is the inverse: the encoder converts from a token into an int, and the decoder from an int back into the token. If I hit F10 and step over, we get all of this, and then they form this special variable bpe_ranks, which takes those merges and stores them - we'll see why that's useful a bit later. Then there's this cache, again an optimization detail the BPE needs, otherwise it would be super slow. And finally there's this regex pattern they're going to use to parse the input text - we'll get to that in a second. Okay, so as a summary: we have a vocabulary formed using BPE - lots of details you don't need to know. What is important is that we have the encoder and the decoder; the rest are implementation details - important ones, but we can kind of squint and ignore them. Okay, let me go to the next breakpoint. We're now in the first line of the simple() method. We do the standard stuff: we check whether we have a GPU device, and I do happen to have a GPU, so that's cool. Now we'll load the model onto the device, and we'll hit some interesting breakpoints - basically, before the video I put breakpoints on certain salient parts of the codebase that I want to go through with you. The codebase is fairly big, so you can do the rest in your own time, but I'm going to focus on the most salient parts - the first one being the construction of the CLIP model.
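As a rough sanity check of where that 49,408 comes from (the merge count below is inferred from the totals mentioned here, not read out of the file):

```python
# CLIP's BPE vocabulary, roughly:
base_bytes = 256     # one unicode symbol per possible byte value
with_eow   = 256     # the same symbols again, suffixed with the '</w>' end-of-word marker
num_merges = 48_894  # merged token pairs from bpe_simple_vocab_16e6 (inferred: 49_408 - 512 - 2)
special    = 2       # '<|startoftext|>' and '<|endoftext|>'

vocab_size = base_bytes + with_eow + num_merges + special
print(vocab_size)    # 49408
```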
So, context length: that's 77, the number we've been seeing all around, and that's because it's a parameter passed during the construction of the CLIP model. Let's see what's going on here. We're going to form the ViT model, so I'll hit the ViT construction right now. The input image resolution is 224, the internal width of the vectors will be 768, we have 12 layers and 12 heads, and the embedding dimension at the output is 512 - so even though the inner dimension is a bit larger, 768, the output, as you recall, is 512; we saw that in the Jupyter notebook. Let me click F5 and go into the VisionTransformer init function. We just store some of these dimensions - this is everything the vision transformer needs. This conv layer is the patchify layer of the vision transformer: as you can see, the patch size is 32, so we set the kernel size to 32 and the stride to the same exact value. The reason is that we want to take non-overlapping patches of the image and embed each one from the initial dimension of 3 channels into 768. Let me quickly go through that in OneNote; maybe it will help you understand what's going on. What happens is the following: we have an input image with three channels. This first conv layer takes the kernel, puts it here, and maps this whole patch volume into a corresponding vector of dimension 768. Then we move the kernel over - let me change the color - and embed the next patch into another 768-dimensional vector, and we repeat the whole procedure until we hit the end of the image. You do that for the whole image and end up with however many patches there are: the resolution is 224, we divide by 32, which gives us 7, so we end up with a 7 by 7 grid of these 768-dimensional vectors - a 7 by 7 by 768 volume. We'll verify that hypothesis a bit later, but that's what this layer is doing; hopefully this helped. Let's continue. If I hit F10 we form this scale, which is just used for initialization. Then we form the CLS embedding - that's the CLS token of our vision transformer, something BERT and many other transformer models also use - and it's a learnable 768-dimensional vector. We form the positional embeddings, whose size is 7 squared - that's the number of patch tokens, so 49 - plus one because we have the CLS token, so 50 in total, and the width is 768, because we want to add those positional encodings on top of each of the embedding vectors. If you're confused by what that means, let me go back to OneNote. What it does is construct an embedding table here whose dimensionality is 7 times 7 plus 1 rows by 768 columns. So this is our output volume, 7 by 7, with this dimensionality of 768, and we have a corresponding positional embedding vector for each position.
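Going back to the patchify layer for a moment, here's a minimal shape-check sketch (mirroring the dimensions, not the actual CLIP weights):

```python
import torch
import torch.nn as nn

# Non-overlapping 32x32 patches: kernel_size == stride == patch size.
patchify = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=32, stride=32, bias=False)

image = torch.randn(1, 3, 224, 224)   # a single RGB image
patches = patchify(image)
print(patches.shape)                  # torch.Size([1, 768, 7, 7]) -- 224 / 32 = 7 patches per side
```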
We'll be learning these positional vectors and then adding them on top of the patch embeddings. So what happens is that this first positional vector will always be added to the first patch vector: we take the two and sum them up, and that summation happens in the forward pass of the ViT. Okay, let's go back to the code. We have some layer norms, and we form the transformer. Because the transformer will also be used in what I dubbed the textual pathway - not sure whether anyone else uses that terminology - let me quickly skim over what the transformer is doing. If I click through, we get to the transformer logic. What the transformer does is fairly simple: we specify the width, which here is also 768, we have the number of layers, and we form a sequential stack of these residual attention blocks. What each block does is your regular transformer logic: we have the multi-head attention layer, then a layer norm, and then the MLP - a linear layer, followed by GELU, followed by another linear layer - and you can see the regular expansion ratio of four: the innermost part of the MLP in a transformer layer is usually 4x wider than the input. Then we construct the second layer norm, and we optionally have an attention mask in case we want a causal transformer model. The vision transformer uses a non-causal transformer, because each token - each patch - needs to attend to every other patch, whereas for the textual transformer we'll use a causal one; that's why we have this attention mask part. The forward pass - sorry, this is just a particular layer of the transformer, not the whole thing - applies, of course, layer norms, the attention, residual connections, and then the MLP part. That's your transformer layer. And finally, we just do a forward pass through all 12 of these transformer layers, and that's the transformer model. Okay, so that was a quick skim over what's going on there. Anyway, let's proceed. I'm going to ignore all of this, and we're back in the vision transformer: we form all of this, and finally we have the projection layer that projects from 768, the inner model dimension, into our desired output dimension, 512. That's it. I'm going to hit run and get to our next stop. So this is another point I wanted to make: this is the attention mask that's going to be formed for the causal transformer. If I step over and print the mask, you can see it's the classical mask used to create a causal attention pattern. Okay, let's continue. We're back in the CLIP init function: we created the visual part, the vision transformer, and now we create the transformer. As you can see, this particular transformer gets build_attention_mask, so it's going to be causal, whereas we did not pass the same argument to the vision transformer, so that one stays non-causal.
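For reference, one of those residual attention blocks looks roughly like this (a simplified sketch: CLIP actually uses a QuickGELU activation and fp16 handling that I'm leaving out):

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, width: int = 768, heads: int = 12, attn_mask: torch.Tensor = None):
        super().__init__()
        self.attn = nn.MultiheadAttention(width, heads)
        self.ln_1 = nn.LayerNorm(width)
        self.mlp = nn.Sequential(
            nn.Linear(width, width * 4),   # the usual 4x expansion
            nn.GELU(),
            nn.Linear(width * 4, width),
        )
        self.ln_2 = nn.LayerNorm(width)
        self.attn_mask = attn_mask         # None -> non-causal (ViT); a causal mask for the text side

    def forward(self, x):                  # x: (seq_len, batch, width)
        h = self.ln_1(x)
        a, _ = self.attn(h, h, h, need_weights=False, attn_mask=self.attn_mask)
        x = x + a                          # residual around attention
        x = x + self.mlp(self.ln_2(x))     # residual around the MLP
        return x
```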
Then we have the vocab size - that's the BPE vocab we just created - and we create a token embedding table, which the transformer, the textual pathway, will use to embed the tokenized text. We also have a positional embedding for the transformer, of shape 77 (the context length) by the transformer width, 512. So again, it may be worth quickly going through that in OneNote. That was the vision transformer; now let me show you how this side works. We have the vocab size, so we'll have a table that's whatever the vocab size is - around 49k - by, let me check what that dimension is - the transformer width, so 512. Then we have an additional table that's the context length, 77, by 512 as well. So the first one is the table we use to embed our tokens, and the second one is the positional encoding table. What that means in practice is the following. Imagine you have a piece of text. You take the text, tokenize it, and convert it into a list of integers. Say you start with something like "hello world": first you tokenize it, which means it gets split somehow - we'll see that logic a bit later - maybe here, here, and here, and then each of those tokens is mapped to an integer. So we end up with something like 23, 47, 56, 37. We then use those integers to index the embedding table and get the associated vectors: 23 means "grab the embedding vector at index 23", and that's the representation for that particular token; then we take 47, find its corresponding embedding vector, put it next to the first one, et cetera. And remember, we pad this with zeros - there will be a bunch of zeros at the end - because we want the total length to be 77. Whoops, sorry, let me redraw that, that was some terrible drawing. So this whole thing has length 77, and then we just add the positional embeddings on top of these vectors. That's how all of these structures come together; hopefully that clarifies it. Okay, back to the code. We formed the positional embedding table and the layer norm, and finally we have the text projection, which transforms from the transformer width into the embedding width - which happen to have the same dimensionality, since this transformer internally uses 512, not 768 like the vision transformer. And there is this logit scale - I'm not sure why these exact numbers; they kind of seem like magic numbers, but yeah. Then just some initialization, and that's it. Okay, let me click F5 and get to the next salient breakpoint, which is here.
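To make those two text-side tables concrete, here's a toy version of that lookup-plus-positional-encoding step (the token IDs are made up):

```python
import torch
import torch.nn as nn

vocab_size, context_length, width = 49408, 77, 512

token_embedding = nn.Embedding(vocab_size, width)   # the big vocab table
positional_embedding = nn.Parameter(torch.empty(context_length, width).normal_(std=0.01))

# Pretend tokenization of one caption gave us these IDs, zero-padded out to 77 slots.
ids = torch.zeros(1, context_length, dtype=torch.long)
ids[0, :4] = torch.tensor([23, 47, 56, 37])          # hypothetical token IDs

x = token_embedding(ids)                             # (1, 77, 512)
x = x + positional_embedding                         # slot i always gets the same learned position vector
print(x.shape)                                       # torch.Size([1, 77, 512])
```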
So we load the image, which is just this CLIP diagram - some diagram image - we do the pre-processing, and we end up with a batch of size one. If I were to print the image shape - actually, before I do, let's predict it: it should be 1, 3, 224, 224, because that's the channel-first format PyTorch usually uses, with the channel dimension up front. And if I print the shape, we get exactly that. Okay. Next up, we need to tokenize these captions: "a diagram", "a dog", and "a cat". Let me quickly recap what we've seen so far: I showed you how CLIP is constructed, we saw how the vision transformer and the transformer models are instantiated and created, and we've seen the image pre-processing. Finally, let's jump into the tokenization part. Let me click F5. We pass our texts here and we need to tokenize them - and when I say tokenize, I mean we literally need to split these captions into subwords and then map those onto the corresponding integers according to our vocab mapping. Okay, let me go through this. First of all, we look up the special tokens, because, as you can see, whatever the text is, we'll be iterating through our captions and encoding each one - which gives us a list of integers - and then we put the start-of-text token at the beginning and the end-of-text token at the end. Now let's jump into the actual encode function. What it does is, well, fairly complicated. I have to say, the tokenization part - even though it's always abstracted away from us when we read the papers - I found to be the hardest part of this whole codebase to understand; the level of detail is amazing. Having said that, let's go through it. This line basically cleans the text - I won't go into the details of the actual cleaning; they use some external libraries for that - and then they just lowercase the whole caption. That's all the preprocessing that's going on here. Next, they use this pattern - the regex expression they compiled in the tokenizer - to find the next piece of text from the caption, which I don't fully understand. Let me show you the pattern again: here it is, fairly complicated, and I'm not sure why all of these details. There are no comments, so it's kind of dark magic, but we'll have to roll with it and treat it as a black box for now. Okay, so what happens next is that the token is encoded using UTF-8. Currently the token is the small letter 'a', which is encoded as a single byte, but some characters might be encoded into two, three, or four bytes - that's how UTF-8 works. I'm fairly sure three-byte characters exist as well, and I'll show you a bit later that when we use an emoji, the UTF-8 encoding gives us four bytes. For this simple example it's a single byte, and then we use this byte_encoder dictionary to fetch the corresponding Unicode character. So if I hit F10, we get that character.
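Conceptually, the tokenize helper does something like this (a stripped-down sketch of what lives in clip/clip.py and clip/simple_tokenizer.py in the repo being walked through; truncation handling is omitted):

```python
import torch
from clip.simple_tokenizer import SimpleTokenizer

_tokenizer = SimpleTokenizer()

def tokenize(texts, context_length=77):
    sot = _tokenizer.encoder["<|startoftext|>"]
    eot = _tokenizer.encoder["<|endoftext|>"]
    all_tokens = [[sot] + _tokenizer.encode(text) + [eot] for text in texts]

    result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)  # zero-padded
    for i, tokens in enumerate(all_tokens):
        result[i, :len(tokens)] = torch.tensor(tokens)
    return result                                                            # e.g. (3, 77)

print(tokenize(["a diagram", "a dog", "a cat"]).shape)   # torch.Size([3, 77])
```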
Again, for this particular token it's an identity mapping; we'll see later that it gets a bit more complicated for some other Unicode characters. What happens now is the BPE magic: in the general case it splits this token into BPE subword tokens - in this case it's just the small 'a' again, trivially - and then we encode that using the encoder. Remember, the encoder just contains our vocabulary: the keys are the tokens and the values are the associated IDs. If I click F10, we end up with the BPE tokens being, as you can see, just the single ID 320, which corresponds to the small letter 'a'. If I continue the loop, we get 'diagram'; again this gets encoded, and - yeah - it's a single token, since there's a single entry corresponding to 'diagram' in our vocabulary. That's it. Now we repeat this whole procedure for the other captions, so let me skip this and click F5. And here we are: we end up with all_tokens being these lists - as you can see, each always begins and ends with the start-of-text and end-of-text tokens, and in between is the tokenization we get from our BPE tokenizer. Okay, let's continue. Again, if you didn't quite understand everything that's going on here, don't worry, because honestly I also don't understand every single detail. Next, we pre-allocate a tensor of the following shape: we want the context length, 77, along one axis, and the number of captions along the zeroth axis, so we end up with a tensor that's 3 by 77. Let me click F10 - just for sanity, the shape is 3 by 77 - and here we just fill in the tokens we got from the encoding into this tensor. I'll skip over these parts and show you the resulting tensor: as you can see, each row begins with the IDs of our tokens and is then padded with zeros until we have the full 77 tokens worth of context. That's it. Let me step over and continue. So that was the tokenization: we ended up with a tensor of shape 3 by 77, and we have an image of shape 1, 3, 224, 224. Now we want to encode the image and encode the text, which means passing them through the vision transformer and through the transformer, respectively. Let me show you how that works. Here, visual is again the vision transformer, and there is some type conversion going on: we want to make sure that whatever the dtype of the conv1 weights is - that's the patchify layer - we convert our image into the same type, whether that's float32 or float16. In this case I was fairly sure it would be float32 - let me check - okay, it's actually float16, because during model loading they call this convert-to-fp16 function, which explains it. Anyway, let me go to the forward pass. What's going to happen here is the logic I explained in OneNote; let me remind you in case you forgot about that.
So this is the logic that's going to happen: we patchify, extract these vectors, and end up with a 7 by 7 grid of 768-dimensional vectors - or, with the batch dimension and PyTorch's channel-first layout, a tensor of shape 1, 768, 7, 7. If I click F10, we patchify our tensor and check the shape: yeah, it's 1, 768, 7, 7 - so modulo a permutation we got the shape right. Okay, next there is some reshaping going on. Let me just step through all of this without explaining every detail - it's just a permutation and a reshape. We flatten the 7 by 7 grid so that all of the tokens are flattened out, ending up with batch size, number of tokens, and finally the size of the embedding vector - so 1, 49, 768. (I don't know why the editor keeps auto-expanding this, by the way.) Cool, so far so good. Next we take the class token, which is a learnable embedding vector, and concatenate it with this tensor x. Let me show you what I mean: if I take self.class_embedding, its shape is just 768. Once we sum it with a zero tensor of the right shape, it gets broadcast to batch size, 1, 768 - I guess this is just a way to broadcast the CLS token across the batch. Let me copy-paste this whole expression into a temporary variable - oops, it overflowed; I could maybe remove the part with dtype and device - actually, it's important to preserve that part, so I'll do it like this. tmp.shape is 1, 1, 768, so we've done the broadcasting, and once you concatenate that with x, which is of shape 1, 49, 768, what do we get? Let me think - yeah, we end up with 1, 50, 768, because we prepend the CLS token to our token sequence. And if I click F10, x's shape is indeed 1, 50, 768, as I said. Then we just add the positional embeddings - that's this step here; let me show you - that's the part where we add the positional encodings onto the vectors. Let me step over that. Then there's a layer norm and a permutation of the dimensions, because that's the layout this transformer expects, and then we just process the tokens: you can now literally treat this as a simple NLP problem - a bunch of tokens with their channel dimensions - pass it through the transformer, and permute back. And that's it: a vision transformer is basically patchify plus transformer; that's all there is to it. Finally, we want to project onto the common dimensionality of the joint multimodal space shared by the visual and textual parts, and that's what this self.proj does: it projects us from 768 to 512 - you can see the corresponding dimensions there.
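Putting the image pathway together, here's a toy sketch of that forward pass - note this uses PyTorch's built-in TransformerEncoder instead of CLIP's own residual blocks, and leaves out the layer norms and fp16 handling, so it's only meant to show the shape flow:

```python
import torch
import torch.nn as nn

class TinyViTEncoder(nn.Module):
    """Toy visual encoder: patchify -> prepend CLS -> add pos. emb. -> transformer -> project."""
    def __init__(self, width=768, out_dim=512, patch=32, res=224, layers=2, heads=12):
        super().__init__()
        self.conv1 = nn.Conv2d(3, width, kernel_size=patch, stride=patch, bias=False)
        n_patches = (res // patch) ** 2                       # 49 for 224 / 32
        self.class_embedding = nn.Parameter(torch.randn(width))
        self.positional_embedding = nn.Parameter(torch.randn(n_patches + 1, width))
        layer = nn.TransformerEncoderLayer(width, heads, 4 * width, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, layers)
        self.proj = nn.Parameter(torch.randn(width, out_dim))

    def forward(self, x):                                     # (B, 3, 224, 224)
        x = self.conv1(x)                                     # (B, 768, 7, 7)
        x = x.flatten(2).permute(0, 2, 1)                     # (B, 49, 768)
        cls = self.class_embedding.expand(x.shape[0], 1, -1)  # broadcast CLS across the batch
        x = torch.cat([cls, x], dim=1)                        # (B, 50, 768)
        x = x + self.positional_embedding
        x = self.transformer(x)
        return x[:, 0, :] @ self.proj                         # CLS token -> (B, 512)

print(TinyViTEncoder()(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 512])
```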
If we run that projection line, we end up with a single vector with 512 dimensions, because here we extract only the CLS token and project it. So basically what happened is that we took our image and ended up with a single 512-dimensional vector. Let's continue - that was the encode_image function; now for the encode_text function. Again, let me know whether this is too much detail for you; any feedback is super appreciated since it's the first time I'm doing this. So, encoding the text: we have the token embedding, and we use the text tensor to index into our vocab table, the token embedding table. Let me remind you what this table is: token_embedding is just vocab size by transformer width - our embedding table. If I run this line, we end up with the number of captions, the size of the context - which is 77 - and then 512. Let me print the shape: we started with 3 by 77, and after embedding all of those IDs (I'm butchering the terminology here, sorry - I'm not used to explaining this type of code out loud), we get 3, 77, 512. Then we add the positional embeddings, which are of shape 77 by 512, I assume - let me check - yeah, 77 by 512. So we just add them up; that's the step we saw in OneNote: you have the positional embedding table, you align it with your 77 token slots, and you add the corresponding vectors pairwise - you add these two, you add these two, et cetera. Okay, that's what happens. Let me step over: we do the permutation again, pass through the transformer, permute back into the desired shape, then pass through the layer normalization. And finally there is this interesting piece of code: it extracts the token features at the position of the end-of-text token. Let me break that down. If I evaluate text.argmax(dim=-1), we end up with a vector of threes, and that's because, in the text tensor itself - positions zero, one, two, three - the end-of-text token has the highest associated ID, which is why this argmax works. So what we do is extract the position of the end-of-text token in each caption, and then we use those positions to index into x. Let me show you the shape of x again: x is 3, 77, 512, and out of all of those 77 token vectors we extract just a single one per caption. Then we do the projection into the final 512 dimensions - it's actually already in the correct dimension, but they still do this projection, to increase the capacity of the model, I guess. Let me step over this, and we end up with x of shape 3, 512. Okay, that's it: we passed in three captions and got out three vectors, each with 512 dimensions. That's everything we need - we can just do a simple normalization and then the dot products to get the similarity scores. Okay, let's go back.
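That EOT-extraction trick in isolation looks roughly like this (toy tensors, not real CLIP activations; the content token IDs are made up):

```python
import torch

SOT, EOT = 49406, 49407        # start/end-of-text IDs in CLIP's 49,408-token vocab
text = torch.zeros(3, 77, dtype=torch.long)
text[:, 0] = SOT
text[:, 1:3] = 1               # two made-up "content" token IDs per caption
text[:, 3] = EOT

x = torch.randn(3, 77, 512)    # pretend transformer output: one 512-dim vector per token slot

eot_positions = text.argmax(dim=-1)                       # EOT has the largest ID -> tensor([3, 3, 3])
features = x[torch.arange(x.shape[0]), eot_positions]     # (3, 512): one vector per caption
print(eot_positions, features.shape)
```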
Now this helper is kind of redundant for us, because if we just call the model directly it does exactly this. Let me show you what I mean: if I click F5, the forward function again just does encode_image and encode_text, so I'm going to skip over those two because we've seen them already. Whoops - that was a mistake, let me go back, put a breakpoint here and click F5 to skip all of this. Okay, so we're here. We have the image features, which are 1 by 512 - that's our vector - and the text features, which are 3 by 512 - three vectors for the three captions. Now we do the normalization; we've seen this in the Jupyter notebook already, so I'll skip it and assume you understand it. Finally, we take this logit scale, which is a learnable parameter. I wasn't sure where this magic number 0.07 comes from - let me show you what I mean: here is that logit scale; they make it learnable and initialize it as the log of 1 over 0.07. I'm not sure why this exact number - if anyone knows, feel free to comment down below. I guess it doesn't matter that much, but knowing how these NLP things work, it probably matters a lot: it can be the difference between the training loss spiking or not; a single change of a variable can do that. Let's step over. What we do next is compute the similarity matrix between the image vectors and the text vectors, and we multiply by the logit scale because, I guess, it's going to be passed into a softmax later on - that's why they're doing this. In any case, we get logits_per_image and logits_per_text, where you can get the latter by just transposing - you don't have to redo the matrix multiplication with the text features times the transposed image features; you can just transpose the result, and you can verify that that's correct. Okay, let's click F10: we get the logits, and now, as I said, we do the softmax along the last dimension, dimension -1, and that gives us the probabilities. The logits - let me copy-paste this - should be of shape 1 by 3... yep. And if I print them, here are the results: as you can see, they are not normalized. After doing the softmax - if I click F10 - we end up with the probabilities, and we find that caption number one - or zero, depending on how you index your arrays - is the one that corresponds to our image, and we'll see that that makes a lot of sense. Okay, I'm going to exit this, and - yeah, this is actually the end of the program. So we got the result that the zeroth caption is the best one. Let's see what it is: the zeroth caption is "a diagram", and let me show you the corresponding image - I think it's somewhere here - CLIP.png, where are you - okay, here. As you can see, it's a diagram of the CLIP architecture, so this makes a lot of sense. Okay, guys, this was pretty much it - a very, very long video; I hope you found something useful in it. Before we wrap up, I just want to show you one more thing.
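For reference, the tail end of that forward pass looks roughly like this when sketched with stand-in features (the 1/0.07 initialization is the one used in the CLIP code):

```python
import numpy as np
import torch

logit_scale = torch.nn.Parameter(torch.ones([]) * np.log(1 / 0.07))   # learnable temperature

image_features = torch.nn.functional.normalize(torch.randn(1, 512), dim=-1)
text_features = torch.nn.functional.normalize(torch.randn(3, 512), dim=-1)

logits_per_image = logit_scale.exp() * image_features @ text_features.t()  # (1, 3)
logits_per_text = logits_per_image.t()                                     # (3, 1), just the transpose

probs = logits_per_image.softmax(dim=-1)   # (1, 3): one probability per caption
print(probs)
```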
So there is one more cool function I found in the README file, and that's the linear probe. Everything starts the same: we download CIFAR-100, then we pass through the dataset and encode all of the images into feature vectors. That means we end up with a bunch of training features and the associated labels, and the same for the test dataset. And then, instead of building the zero-shot classifier, you can just train a simple logistic regression on top of the train features. Again, train features - let me show you the shape - the shape is the number of data points in the training dataset by 512. And because this is a supervised dataset, we have the associated ground-truth labels, and using these you can train a linear probe - that's this classifier here. Instead of doing what? Instead of doing the zero-shot approach, where you take the class name, embed it into the template, and then encode that - which is how you create those ad hoc classifiers, if you recall what I showed you before. You could do zero-shot, but it's obviously much better performance-wise to do a linear probe - although, again, you do have to train it, whereas zero-shot is, well, zero-shot, so no training needed. Okay, this regularization hyperparameter was found on some validation set, and then once you train the classifier you can see how it performs on your test features and report the accuracy. That's it - that's CLIP for you. Let me try to quickly recap what we've seen; I'll walk you through the function here again. We load the CLIP model; we saw that the CLIP model consists of the vision transformer and the transformer. Aside from the trainable transformer parameters, those models have these associated tables: the vision transformer has its positional encoding table plus the learnable CLS token, and the transformer has a positional encoding table as well, plus the vocab table - the one we use to embed our tokens, or rather the IDs; we have integers, so it's maybe better to refer to them as IDs, although people call them tokens as well. So that's that part. After we create the CLIP model, we do the tokenization, and this is arguably the hardest part - in my humble opinion it's kind of dark magic: you have to process the text, decide how to split it using those regex expressions, then use BPE to split it into subwords, and after that encode everything into the corresponding integers. That's how you get the text tensor. Finally, once we have those two, we pass the image through the vision transformer and the text through the transformer, get the associated vectors, normalize them, and then a simple dot product gives us the scores, the similarity. That's it. Actually, I realize I want to show you one more thing quickly. As I said, the tokenization is the hardest part, so I'm just going to add this emoji here and run the program again. Let me remove these breakpoints and debug the file again.
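To recap the linear-probe idea in code form before the emoji detour, here's a sketch loosely following the README example (the C value is the regularization strength they report from a validation sweep; the dataset and model objects are assumed):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader
from sklearn.linear_model import LogisticRegression

def get_features(dataset, model, device="cuda"):
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in DataLoader(dataset, batch_size=100):
            feats.append(model.encode_image(images.to(device)).cpu().numpy())
            labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# train_features, train_labels = get_features(cifar100_train, model)   # (N_train, 512), (N_train,)
# test_features, test_labels = get_features(cifar100_test, model)

# classifier = LogisticRegression(C=0.316, max_iter=1000)              # C found on a validation set
# classifier.fit(train_features, train_labels)
# accuracy = (classifier.predict(test_features) == test_labels).mean() * 100
```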
Let me show you what happens when we pass the brain emoji. Okay, here we are. Let me go through the tokenize function again: we look up the special tokens, and then the magic happens, so let me step into the encode function - the hardest part of the whole codebase, if you ask me; I'm still trying to understand every detail here. We do some cleaning - the cleaning won't change this caption, so we end up with the same caption, nothing changed. Then we first grab the token 'a' - nothing interesting there, I'll skip it. Finally, the interesting part: now we have the brain token. This emoji is a legit Unicode character, and there is a unique code point that Unicode assigns to every character you can encounter, in whatever language - that's Unicode's very nice encoding principle: map each character onto a unique code point, which is just a unique integer. So what happens here is that once you encode this particular token with UTF-8, you get four bytes. Let me show you what I mean: if I take this and print it, we end up with four bytes. That means we have four iterations here, and in each one we use the mapping table to fetch the associated Unicode character, and then we join them. So we end up with a very weird-looking token - let me show you - here's what it looks like; very weird. This is the representation we'll use for our brain emoji. Then comes the BPE part, which splits this string into subwords - let me show you what we get - so this is how it decides to split this particular character, and after that we just encode those pieces. That means we end up with two integers; as you can see, the brain emoji is encoded using two integers. Let me show you what the BPE tokens are: these two numbers, 8792 and 510 - that's how we encode the brain. As I said, it's fairly magic. I think people refer to tokenization as a dark horse behind NLP's success, because nobody talks about it, but it's fairly complicated to understand and to handle all of those edge cases. Anyway, enough rambling, guys. I hope you found this video useful - if you did, comment down below and let me know whether I can improve something or not. Consider subscribing to this channel, share the video around, and until next time - bye bye!
[{"start": 0.0, "end": 3.36, "text": " What's cracking guys? In this video I'm doing something a bit different."}, {"start": 3.36, "end": 9.72, "text": " So I'll be kicking off this series of videos where I'll be going through various,"}, {"start": 9.72, "end": 13.72, "text": " through the code of various papers that I've covered over the last months."}, {"start": 13.72, "end": 17.72, "text": " And so I'll be starting with OpenAI's clip."}, {"start": 17.72, "end": 24.92, "text": " So basically I cloned this repository OpenAI's clip implementation,"}, {"start": 24.92, "end": 32.04, "text": " created a conda environment locally and I'll be just debugging and going through VS code"}, {"start": 32.04, "end": 34.040000000000006, "text": " and showing you how how clip works."}, {"start": 34.040000000000006, "end": 37.44, "text": " But if you don't want to go through the hustle of creating the environment,"}, {"start": 37.44, "end": 42.84, "text": " you can just use their basically like colab here."}, {"start": 42.84, "end": 48.32000000000001, "text": " The environment is already set up, you can kind of copy paste some of the code from here into the colab"}, {"start": 48.32000000000001, "end": 53.040000000000006, "text": " and that's how you can kind of avoid having to go through the conda environment creation,"}, {"start": 53.04, "end": 56.64, "text": " which I'll not be explaining how to do in this video."}, {"start": 56.64, "end": 62.64, "text": " So having said that, let's jump into the actual code."}, {"start": 62.64, "end": 68.64, "text": " So first I'll go through the Jupyter notebooks to give you a high level understanding of how clip works."}, {"start": 68.64, "end": 75.24, "text": " And then we'll literally step through the code and see how the details of clip are implemented."}, {"start": 75.24, "end": 80.84, "text": " So vision transformers, transformers, contrastive loss, all of that."}, {"start": 80.84, "end": 89.04, "text": " So no, actually the training details are not implemented in the OpenAI's like open source code."}, {"start": 89.04, "end": 91.44, "text": " There is a separate repository that does that."}, {"start": 91.44, "end": 97.84, "text": " So I'll be skipping that part, but it's probably less relevant for what I'm trying to explain you here."}, {"start": 97.84, "end": 101.24000000000001, "text": " And that's how clip works in general."}, {"start": 101.24000000000001, "end": 105.24000000000001, "text": " OK, so I did create a video covering clips."}, {"start": 105.24000000000001, "end": 109.44, "text": " So do check that one out if you want to get the theoretical understanding."}, {"start": 109.44, "end": 113.64, "text": " I'll not be combining paper and code right now."}, {"start": 113.64, "end": 115.03999999999999, "text": " I'll just be focusing on code."}, {"start": 115.03999999999999, "end": 117.64, "text": " So if you haven't watched that one, go ahead and watch it."}, {"start": 117.64, "end": 119.03999999999999, "text": " I'll link it somewhere here."}, {"start": 119.03999999999999, "end": 121.03999999999999, "text": " So yeah, do check it out."}, {"start": 121.03999999999999, "end": 122.44, "text": " Anyways, let's start here."}, {"start": 122.44, "end": 126.64, "text": " So let's basically import a couple of things here."}, {"start": 126.64, "end": 127.64, "text": " No, thanks."}, {"start": 127.64, "end": 130.44, "text": " VS code is fairly annoying with these prompts,"}, {"start": 130.44, "end": 134.64, "text": " but it's probably the nicest environment to be 
debugging the code,"}, {"start": 134.64, "end": 137.24, "text": " both visually and just like there's so many cool features,"}, {"start": 137.24, "end": 140.64000000000001, "text": " which you'll hopefully see over this video."}, {"start": 140.64000000000001, "end": 146.84, "text": " So I imported necessary dependencies, numpy, torch, etc."}, {"start": 146.84, "end": 150.84, "text": " So let's see which models we have available."}, {"start": 150.84, "end": 154.44, "text": " So clip basically, again, how clip is constructed."}, {"start": 154.44, "end": 160.04000000000002, "text": " You have the visual like pathway, you have the textual pathway, so to speak."}, {"start": 160.04000000000002, "end": 163.84, "text": " So the visual one will encode the image into a vector."}, {"start": 163.84, "end": 170.64000000000001, "text": " The textual side will basically take a caption and embed it into a vector."}, {"start": 170.64000000000001, "end": 173.84, "text": " And basically then you just want to see the similarity"}, {"start": 173.84, "end": 176.84, "text": " between the dot product between normalized embeddings."}, {"start": 176.84, "end": 178.44, "text": " That's clip on a high level."}, {"start": 178.44, "end": 184.44, "text": " So as you can see, they've been using various different visual baselines."}, {"start": 184.44, "end": 187.44, "text": " So here are some modified versions of Fresnets."}, {"start": 187.44, "end": 188.84, "text": " They also use VAT."}, {"start": 188.84, "end": 192.24, "text": " We'll be using VAT in this Jupyter notebook."}, {"start": 192.24, "end": 196.24, "text": " So I can also kind of go through the definition and show you how this looks."}, {"start": 196.24, "end": 200.44, "text": " So basically they've defined this models dictionary."}, {"start": 200.44, "end": 202.44, "text": " It's a key value dictionary, obviously."}, {"start": 202.44, "end": 207.44, "text": " So we have the key here and we have the associated URL."}, {"start": 207.44, "end": 209.44, "text": " So that's where the model is stored."}, {"start": 209.44, "end": 217.24, "text": " So here and basically what this function shows us is all of the available,"}, {"start": 217.24, "end": 220.24, "text": " like pre-trained models, basically."}, {"start": 220.24, "end": 223.24, "text": " Okay, let's get back here."}, {"start": 223.24, "end": 225.84, "text": " Let me now run this cell."}, {"start": 225.84, "end": 229.84, "text": " So first what it does is again, it's going to load this VAT."}, {"start": 229.84, "end": 234.64000000000001, "text": " So the big version, the patches will be 32 by 32."}, {"start": 234.64000000000001, "end": 239.64000000000001, "text": " And it's going to basically give us the model and this pre-processing"}, {"start": 239.64000000000001, "end": 244.64000000000001, "text": " like a structure that contains the PyTorch transforms that we'll be using"}, {"start": 244.64000000000001, "end": 247.04000000000002, "text": " to transform the images like cropping and stuff."}, {"start": 247.04000000000002, "end": 248.44, "text": " We'll see that in a second."}, {"start": 248.44, "end": 253.44, "text": " So let's run this cell and let's see what's going on here."}, {"start": 253.44, "end": 256.24, "text": " Okay, so it's going to count the number of parameters in the model."}, {"start": 256.24, "end": 261.64, "text": " We can see here we have 151 millions of parameters in this VAT."}, {"start": 261.64, "end": 266.84, "text": " The input resolution that the VAT is expecting is 224 by 224"}, 
{"start": 266.84, "end": 271.24, "text": " and they've artificially set this context length to 77."}, {"start": 271.24, "end": 275.04, "text": " And so that means the number of tokens that will go into the,"}, {"start": 275.04, "end": 279.64000000000004, "text": " like text encoding pathway will be 77 tokens always."}, {"start": 279.64000000000004, "end": 281.04, "text": " So that's kind of fixed."}, {"start": 281.04, "end": 287.24, "text": " The woke up size is, as you can see here, 49,000 roughly."}, {"start": 287.24, "end": 289.84000000000003, "text": " And they are using the BPE encoding."}, {"start": 289.84000000000003, "end": 292.04, "text": " So that's the byte pairing coding."}, {"start": 292.04, "end": 294.44, "text": " I'll link some of the resources when it comes to BPE,"}, {"start": 294.44, "end": 297.04, "text": " which you can check out because I will not get into the details."}, {"start": 297.04, "end": 301.64000000000004, "text": " Otherwise, I could just create a video explaining how that thing works."}, {"start": 301.64, "end": 305.64, "text": " Okay, so this is some metadata about the model."}, {"start": 305.64, "end": 309.64, "text": " This is, okay, let me just run this to see which transformation we're using."}, {"start": 309.64, "end": 313.24, "text": " So we have resize to 224 because again,"}, {"start": 313.24, "end": 317.64, "text": " images will be of arbitrary aspect ratios and arbitrary resolutions."}, {"start": 317.64, "end": 324.24, "text": " We want to make it basically of the format that the VAT is expecting and that's 224 by 224."}, {"start": 324.24, "end": 328.64, "text": " So we're going to resize the image to 224, then do the central crop,"}, {"start": 328.64, "end": 333.84, "text": " then convert the image into RGB, convert it into tensor, and then do the normalization."}, {"start": 333.84, "end": 341.03999999999996, "text": " So these are just, I assume this is just ImageNet, like mean and standard deviations."}, {"start": 341.03999999999996, "end": 346.64, "text": " You literally go through the ImageNet training set and you collect these statistics."}, {"start": 346.64, "end": 349.44, "text": " You find the mean for all of the images, you find the standard deviation,"}, {"start": 349.44, "end": 358.44, "text": " and then you use that to center the input images before feeding them into the VAT portion of the clip."}, {"start": 358.44, "end": 361.64, "text": " Okay, so let's continue."}, {"start": 361.64, "end": 365.64, "text": " So we are first going to tokenize the sentence, hello world,"}, {"start": 365.64, "end": 369.24, "text": " and we'll later see how this tokenization works."}, {"start": 369.24, "end": 372.24, "text": " So as I said, it's using BPE in the background."}, {"start": 372.24, "end": 376.84, "text": " If we run this, we expect to get like a list of integers basically."}, {"start": 376.84, "end": 379.24, "text": " So you can see here, here is the result."}, {"start": 379.24, "end": 382.24, "text": " So this is the BPE magic we somehow get."}, {"start": 382.24, "end": 387.24, "text": " So this is going to be the start of sentence token, the end of sentence token,"}, {"start": 387.24, "end": 395.24, "text": " and these are the subtokens that BPE kind of broke this sentence down into these."}, {"start": 395.24, "end": 400.24, "text": " And it has all of these because, as I said, it's going to have 77 tokens"}, {"start": 400.24, "end": 403.44, "text": " because that's the constraint they artificially are using."}, {"start": 403.44, "end": 
407.64, "text": " So let me just kind of maybe store this into a temporary variable"}, {"start": 407.64, "end": 410.64, "text": " and then print the shape of this thing."}, {"start": 410.64, "end": 413.84000000000003, "text": " It's going to be 7, yeah, as you can see, it's 77."}, {"start": 413.84, "end": 418.64, "text": " Okay, so let's get to this part here."}, {"start": 418.64, "end": 424.44, "text": " What they do here is they create this dictionary of descriptions."}, {"start": 424.44, "end": 431.03999999999996, "text": " So this is going to be a name of the image and this is going to be associated caption."}, {"start": 431.03999999999996, "end": 435.84, "text": " So we'll be using these names to just fetch the images from,"}, {"start": 435.84, "end": 439.23999999999995, "text": " I think, scikit-learn or something."}, {"start": 439.23999999999995, "end": 440.44, "text": " You'll see that in a second."}, {"start": 440.44, "end": 442.03999999999996, "text": " So let me run this cell."}, {"start": 442.04, "end": 445.44, "text": " Let's form the dictionary and now let's get here."}, {"start": 445.44, "end": 448.04, "text": " Okay, so what's going on here?"}, {"start": 448.04, "end": 455.44, "text": " So as you can see, we're going to iterate through this scikit-image."}, {"start": 455.44, "end": 461.84000000000003, "text": " So scikit-image package has certain images that come pre-installed with it."}, {"start": 461.84000000000003, "end": 463.64000000000004, "text": " So we're going to iterate through those file names."}, {"start": 463.64000000000004, "end": 471.44, "text": " And if the file name ends with PNG or JPEG, meaning if the file name is an image,"}, {"start": 471.44, "end": 473.64, "text": " we're going to enter this loop."}, {"start": 473.64, "end": 478.64, "text": " Then we're going to just grab the actual name here."}, {"start": 478.64, "end": 483.84, "text": " And if it's not in the description, so we're going to only extract these eight images here."}, {"start": 483.84, "end": 488.04, "text": " If it's not in the description, we'll just kind of skip it."}, {"start": 488.04, "end": 493.24, "text": " If it's in the description, then we're going to grab the path of that image,"}, {"start": 493.24, "end": 498.64, "text": " open it up as a pillow image and convert it into RGB."}, {"start": 498.64, "end": 504.44, "text": " Then we're going to just show all of those on a nice figure here."}, {"start": 504.44, "end": 507.64, "text": " I will not get into the Matplotlib details here."}, {"start": 507.64, "end": 514.4399999999999, "text": " And we're just going to store the pill images as well as the processed images and associated captions."}, {"start": 514.4399999999999, "end": 517.84, "text": " So this text is going to have so descriptions is here."}, {"start": 517.84, "end": 520.04, "text": " We're going to have the keywords here."}, {"start": 520.04, "end": 522.84, "text": " So basically we'll fetch the associated captions."}, {"start": 522.84, "end": 526.24, "text": " Let me run this enough rambling."}, {"start": 526.24, "end": 534.04, "text": " So you can see here the images we got from the scikit image package."}, {"start": 534.04, "end": 540.44, "text": " And well, unfortunately, let me see whether I can change the color here."}, {"start": 540.44, "end": 543.24, "text": " Yeah, I'm not sure whether you see the associated caption,"}, {"start": 543.24, "end": 548.24, "text": " but like I can read one for you, basically a portrait of an astronaut with the American flag."}, 
{"start": 548.24, "end": 549.24, "text": " This is this one."}, {"start": 549.24, "end": 552.84, "text": " And then a person looking at a camera on a tripod, blah, blah, blah."}, {"start": 552.84, "end": 556.24, "text": " So we now have the data set as clip expects."}, {"start": 556.24, "end": 560.44, "text": " So it's a small eight image data set image and associated caption."}, {"start": 560.44, "end": 565.84, "text": " And now let's see how clip is going to use these."}, {"start": 565.84, "end": 568.0400000000001, "text": " So let's continue here."}, {"start": 568.0400000000001, "end": 572.0400000000001, "text": " First of all, we're going to stack the processed images."}, {"start": 572.0400000000001, "end": 575.64, "text": " So remember images are the ones where we've applied the processing."}, {"start": 575.64, "end": 580.24, "text": " So the cropping and conversion to tensor normalization, all of that goodies."}, {"start": 580.24, "end": 582.0400000000001, "text": " Now we're going to stack it."}, {"start": 582.04, "end": 586.8399999999999, "text": " We're going to convert it into into into like a big tensor,"}, {"start": 586.8399999999999, "end": 589.24, "text": " which then we're going to move to to CUDA."}, {"start": 589.24, "end": 595.64, "text": " So that means basically pushing this tensor onto your GPU."}, {"start": 595.64, "end": 598.8399999999999, "text": " So let me run this."}, {"start": 598.8399999999999, "end": 601.04, "text": " And let me now create another cell here."}, {"start": 601.04, "end": 603.4399999999999, "text": " Whoops."}, {"start": 603.4399999999999, "end": 604.8399999999999, "text": " That's not what I wanted to do."}, {"start": 604.8399999999999, "end": 609.3399999999999, "text": " Let me just basically create a cell."}, {"start": 609.3399999999999, "end": 611.64, "text": " Okay, mixing up shortcuts from Colab."}, {"start": 611.64, "end": 615.24, "text": " So I'll just have to manually add a code cell here."}, {"start": 615.24, "end": 617.04, "text": " What I want to show you is just the shape."}, {"start": 617.04, "end": 622.04, "text": " So always understanding the shapes of your object is super important."}, {"start": 622.04, "end": 623.04, "text": " Let me run this."}, {"start": 623.04, "end": 626.4399999999999, "text": " So 8, 3, 2, 24, 2, 24."}, {"start": 626.4399999999999, "end": 628.14, "text": " So that's your image."}, {"start": 628.14, "end": 631.34, "text": " And because we have eight of those, we stacked eight of those."}, {"start": 631.34, "end": 634.84, "text": " We have like a torch tensor like this."}, {"start": 634.84, "end": 638.9399999999999, "text": " And similarly, Remy do the same thing for text tokens."}, {"start": 638.94, "end": 642.34, "text": " So what we've done here is we've gone through the captions."}, {"start": 642.34, "end": 644.34, "text": " Okay, so that's the text."}, {"start": 644.34, "end": 647.84, "text": " And then we just create this artificial prompt."}, {"start": 647.84, "end": 653.0400000000001, "text": " So this is and then we append the captions, the ones we had here."}, {"start": 653.0400000000001, "end": 656.24, "text": " So this is a page of text about segmentation, etc."}, {"start": 656.24, "end": 656.94, "text": " etc."}, {"start": 656.94, "end": 662.5400000000001, "text": " And then we're going to tokenize that which means we'll get those 77 context tokens."}, {"start": 662.5400000000001, "end": 664.84, "text": " So let me just run this again."}, {"start": 664.84, "end": 665.6400000000001, "text": " 
Let's see what we get."}, {"start": 665.6400000000001, "end": 667.34, "text": " So we have 8, 77."}, {"start": 667.34, "end": 673.0400000000001, "text": " So 8 because we have 8 captions and 77 because that's the basically context length"}, {"start": 673.0400000000001, "end": 677.64, "text": " that the transformer textual pathway in the clip is expecting."}, {"start": 677.64, "end": 679.44, "text": " Okay."}, {"start": 679.44, "end": 682.44, "text": " Finally, what we're going to do is we're going to patch."}, {"start": 682.44, "end": 686.74, "text": " We're going to pass the image tensor to the clip model."}, {"start": 686.74, "end": 690.64, "text": " So model is just clip and we're going to call this encode image function."}, {"start": 690.64, "end": 696.0400000000001, "text": " We'll see later what exactly does but basically just a forward prop of the images"}, {"start": 696.04, "end": 702.04, "text": " through your VAT and as the output will get I guess something probably the shape will"}, {"start": 702.04, "end": 709.74, "text": " be something like 8 and then I think it's either 512 or 756 or something like that."}, {"start": 709.74, "end": 711.54, "text": " We're going to see that in a second."}, {"start": 711.54, "end": 718.74, "text": " And here what we're expecting is similarly will pass these text tokens"}, {"start": 718.74, "end": 724.3399999999999, "text": " and I expect we'll have something like well, I guess 8 512 because we need to be able"}, {"start": 724.34, "end": 729.44, "text": " to compare those do dot product between the textual and visual embeddings."}, {"start": 729.44, "end": 734.84, "text": " So let me run this and then let me test the hypothesis whether I got the shapes right"}, {"start": 734.84, "end": 738.34, "text": " from like by the heart."}, {"start": 738.34, "end": 746.24, "text": " Okay, let me add an additional code cell here and let's just print the shapes of these objects."}, {"start": 746.24, "end": 749.74, "text": " So shape will be 8 512."}, {"start": 749.74, "end": 750.44, "text": " That's correct."}, {"start": 750.44, "end": 751.94, "text": " And then this must be the same."}, {"start": 751.94, "end": 753.34, "text": " So let me run this."}, {"start": 753.34, "end": 755.14, "text": " So 8 512."}, {"start": 755.14, "end": 755.5400000000001, "text": " Okay."}, {"start": 755.5400000000001, "end": 762.34, "text": " So again, pretty simple here because this is the transformer will be just taking the"}, {"start": 762.34, "end": 767.64, "text": " whatever the feature vector is above the end of sentence token."}, {"start": 767.64, "end": 773.44, "text": " So we just fetch that vector and that's what we're going to be using as a embedding for"}, {"start": 773.44, "end": 774.74, "text": " that particular caption."}, {"start": 774.74, "end": 777.44, "text": " Again, we'll see the details a bit later."}, {"start": 777.44, "end": 778.64, "text": " That's it."}, {"start": 778.64, "end": 782.5400000000001, "text": " Now the next steps are normalization."}, {"start": 782.54, "end": 786.64, "text": " So we'll normalize all of these embedding vectors."}, {"start": 786.64, "end": 793.14, "text": " So visual and textual embedding vectors and then we're going to do matrix multiply which"}, {"start": 793.14, "end": 801.04, "text": " is going to do dot products between all of these combinations of visual and textual embeddings"}, {"start": 801.04, "end": 807.64, "text": " and that's going to give us the similarities between those images and all of these captions."}, 
{"start": 807.64, "end": 816.84, "text": " So let me run this first like this and then I'm going to analyze what we are getting out"}, {"start": 816.84, "end": 817.24, "text": " of here."}, {"start": 817.24, "end": 819.34, "text": " So similarity should be 8 by 8."}, {"start": 819.34, "end": 824.4399999999999, "text": " So I'm expecting shape 8 by 8 because we want to know for each image."}, {"start": 824.4399999999999, "end": 827.74, "text": " What's the similarity score for that image and all of the other captions."}, {"start": 827.74, "end": 829.9399999999999, "text": " So let's see whether that's correct."}, {"start": 829.9399999999999, "end": 836.9399999999999, "text": " So if I print similarity shape here and if I run this it's 8 by 8 as expected."}, {"start": 836.94, "end": 840.74, "text": " Okay, so what's this thing about?"}, {"start": 840.74, "end": 845.84, "text": " The best thing to do is to just kind of because we are we have the interactivity here in Jupyter"}, {"start": 845.84, "end": 848.74, "text": " so why not use that fact."}, {"start": 848.74, "end": 851.74, "text": " So let me just print this and see what we get here."}, {"start": 851.74, "end": 857.34, "text": " Basically what this is going to do is find the I assume this is going to do L2 norm."}, {"start": 857.34, "end": 863.6400000000001, "text": " So it's going to find the L2 norm for each of the embedding vectors and then just divide"}, {"start": 863.64, "end": 868.14, "text": " as you can see here, we're just going to divide the vectors by their L2."}, {"start": 868.14, "end": 874.84, "text": " So basically if we have a vector X what we do is we convert it into X divided by L2 norm."}, {"start": 874.84, "end": 875.9399999999999, "text": " How can I denote that?"}, {"start": 875.9399999999999, "end": 883.24, "text": " Maybe like this L2 norm of that X and that's just your regularization that normalization"}, {"start": 883.24, "end": 887.34, "text": " that that makes it makes it so that the norm will now be 1."}, {"start": 887.34, "end": 890.9399999999999, "text": " So all of your vectors will be of norm 1."}, {"start": 890.94, "end": 895.94, "text": " Okay, let's run this cell and see what we get."}, {"start": 895.94, "end": 902.44, "text": " Well because I've already done the because I've already done the I already ran the cell."}, {"start": 902.44, "end": 906.24, "text": " I'm getting basically all of them are already 1."}, {"start": 906.24, "end": 907.74, "text": " So not that interesting."}, {"start": 907.74, "end": 909.74, "text": " So let me just rerun this for a second."}, {"start": 909.74, "end": 914.94, "text": " So let me rerun the image features and text features get the original ones and now before"}, {"start": 914.94, "end": 917.74, "text": " I run this cell, let me see what we get here."}, {"start": 917.74, "end": 923.14, "text": " So you can see here the associated L2 norms for each of the eight visual embedding vectors"}, {"start": 923.14, "end": 927.84, "text": " and we just divide by those L2 norms and we normalize the vectors by doing that."}, {"start": 927.84, "end": 933.44, "text": " We do the similar thing for basically for for the text."}, {"start": 933.44, "end": 937.74, "text": " So let me see if I can go to definition and see whether okay, no definition found for this."}, {"start": 937.74, "end": 942.14, "text": " So I guess we're going to use just some good old Google search."}, {"start": 942.14, "end": 943.74, "text": " So let's see what's going on here."}, {"start": 943.74, "end": 
946.54, "text": " So norm by torch."}, {"start": 946.54, "end": 950.4399999999999, "text": " I think it should be like basically okay for a binius norm."}, {"start": 950.4399999999999, "end": 953.4399999999999, "text": " Let me see what's the default here."}, {"start": 953.4399999999999, "end": 958.74, "text": " So the order of norm default is for binius norm, which is just a fancy way of saying"}, {"start": 958.74, "end": 960.74, "text": " L2 for for matrix."}, {"start": 960.74, "end": 966.24, "text": " It's going to do the square root and then basically squared of all of the elements and"}, {"start": 966.24, "end": 968.4399999999999, "text": " then you just do the summation."}, {"start": 968.4399999999999, "end": 971.8399999999999, "text": " So yeah, we can quickly verify that hypothesis."}, {"start": 971.8399999999999, "end": 975.54, "text": " Just let's create some artificial vector."}, {"start": 975.54, "end": 980.9399999999999, "text": " So let's let me maybe create something like this."}, {"start": 980.9399999999999, "end": 986.4399999999999, "text": " So we have like a dummy dummy tensor."}, {"start": 986.4399999999999, "end": 987.8399999999999, "text": " My pi torch skills are bit rusty."}, {"start": 987.8399999999999, "end": 990.24, "text": " So hopefully I get everything correctly here."}, {"start": 990.24, "end": 995.9399999999999, "text": " So torch tensor or just zeros."}, {"start": 995.9399999999999, "end": 997.8399999999999, "text": " Can we do zeros?"}, {"start": 997.8399999999999, "end": 999.9399999999999, "text": " Well that will not be that interesting."}, {"start": 999.9399999999999, "end": 1001.54, "text": " Let me just think what we can do here."}, {"start": 1001.54, "end": 1007.04, "text": " Okay, so I'm gonna just do once and then I'm gonna take like size."}, {"start": 1007.04, "end": 1009.4399999999999, "text": " Let's say we just have a vector of size 2."}, {"start": 1009.4399999999999, "end": 1014.54, "text": " So there's going to be one one if it's L2 norm we expect square root of 2 which is roughly"}, {"start": 1014.54, "end": 1016.3399999999999, "text": " 1.41."}, {"start": 1016.3399999999999, "end": 1024.1399999999999, "text": " So if I now pass dummy tensor here and let's find the norm probably dimension and all of"}, {"start": 1024.1399999999999, "end": 1025.84, "text": " this is not that important."}, {"start": 1025.84, "end": 1031.04, "text": " But yeah, you see basically we have L2 here and that's pretty much it."}, {"start": 1031.04, "end": 1033.6399999999999, "text": " Now this thing is probably interesting."}, {"start": 1033.6399999999999, "end": 1035.6399999999999, "text": " We convert that at tensors."}, {"start": 1035.6399999999999, "end": 1040.94, "text": " We just push them to CPU, convert them to non-pi and then we do matrix multiplication."}, {"start": 1040.94, "end": 1042.34, "text": " So why does this make sense?"}, {"start": 1042.34, "end": 1043.54, "text": " And we have to transpose here."}, {"start": 1043.54, "end": 1044.6399999999999, "text": " So why does that make sense?"}, {"start": 1044.6399999999999, "end": 1052.94, "text": " Well because the following basically if we think about it text features is what 8 512"}, {"start": 1052.94, "end": 1060.1399999999999, "text": " and then we do matrix multiplication with a matrix that's 512 8 because we do the transpose"}, {"start": 1060.14, "end": 1066.64, "text": " here operation and that's gonna obviously give us if you do your matrix multiplication"}, {"start": 1066.64, "end": 1067.64, 
"text": " 8 by 8."}, {"start": 1067.64, "end": 1073.94, "text": " So what you're doing is you're taking a vector like you're taking like a textual embedding"}, {"start": 1073.94, "end": 1078.3400000000001, "text": " and by multiplying what you're doing is you're doing the products because you're multiplying"}, {"start": 1078.3400000000001, "end": 1082.8400000000001, "text": " the row with the column and the column is the visual embedding vector."}, {"start": 1082.84, "end": 1090.6399999999999, "text": " So if you do all of that you're basically getting the similarities between this particular"}, {"start": 1090.6399999999999, "end": 1093.24, "text": " text vector with all of these image vectors."}, {"start": 1093.24, "end": 1094.4399999999998, "text": " Hopefully that makes sense."}, {"start": 1094.4399999999998, "end": 1103.28, "text": " We can finally print we can print the similarity and do let me know whether you find this type"}, {"start": 1103.28, "end": 1104.58, "text": " of explanation useful."}, {"start": 1104.58, "end": 1107.84, "text": " Any feedback is super useful because this is the first time I'm doing something like"}, {"start": 1107.84, "end": 1108.84, "text": " this."}, {"start": 1108.84, "end": 1116.1399999999999, "text": " So as you can see all of these are similarities so they are not normalized which means if"}, {"start": 1116.1399999999999, "end": 1123.1, "text": " I were to do a summation will this work x is minus one if I print this."}, {"start": 1123.1, "end": 1127.98, "text": " So as you can see it's not one because we haven't applied softmax or anything."}, {"start": 1127.98, "end": 1129.74, "text": " So that's what we get here."}, {"start": 1129.74, "end": 1131.78, "text": " Okay let's continue."}, {"start": 1131.78, "end": 1132.78, "text": " Let's see what we have here."}, {"start": 1132.78, "end": 1139.62, "text": " So descriptions that's going to be our dictionaries we'll get I guess eight here which is going"}, {"start": 1139.62, "end": 1140.98, "text": " to plot something."}, {"start": 1140.98, "end": 1142.34, "text": " Let's see what we're doing here."}, {"start": 1142.34, "end": 1152.06, "text": " So original images we're going to plot the image and then we are going to plot the scores."}, {"start": 1152.06, "end": 1153.3799999999999, "text": " Okay let's just run the cell."}, {"start": 1153.3799999999999, "end": 1157.1, "text": " It's going to be easier to understand what's going on once we plotted."}, {"start": 1157.1, "end": 1161.46, "text": " So as you can see so what they've what they've done here is basically they have they plot"}, {"start": 1161.46, "end": 1167.46, "text": " the images here they plot the captions here and then they just plot this similarity matrix."}, {"start": 1167.46, "end": 1172.1000000000001, "text": " So we are basically visualizing the similarity matrix that I just printed a couple of cells"}, {"start": 1172.1000000000001, "end": 1178.8600000000001, "text": " before and you can see that as expected because this is a train clip model the biggest similarity"}, {"start": 1178.8600000000001, "end": 1185.94, "text": " is between the image and its correct caption and that's why we have this diagonal here."}, {"start": 1185.94, "end": 1187.8, "text": " That's pretty much it."}, {"start": 1187.8, "end": 1193.6599999999999, "text": " So yeah I guess I'll just ignore the details of this code because it's just idiosyncrasies"}, {"start": 1193.6599999999999, "end": 1195.78, "text": " of Matplotlib."}, {"start": 1195.78, "end": 
1199.9199999999998, "text": " What I'm going to focus on is on the semantics and on what clip model is doing."}, {"start": 1199.9199999999998, "end": 1201.98, "text": " So this is it."}, {"start": 1201.98, "end": 1203.12, "text": " Let's see what else."}, {"start": 1203.12, "end": 1209.6599999999999, "text": " So now we are going to basically load a cipher hundred data set."}, {"start": 1209.6599999999999, "end": 1211.86, "text": " So that's going to probably take a while."}, {"start": 1211.86, "end": 1215.6599999999999, "text": " Oh okay I already have it cached so that's why it was so fast."}, {"start": 1215.66, "end": 1220.3400000000001, "text": " So we have a cipher hundred data set which is just your image data set that has a hundred"}, {"start": 1220.3400000000001, "end": 1223.9, "text": " classes hence the suffix hundred here."}, {"start": 1223.9, "end": 1228.02, "text": " And so what we're going to do is we're going to iterate through the classes so through"}, {"start": 1228.02, "end": 1232.42, "text": " all of those classes and we are going to form a caption out of that class."}, {"start": 1232.42, "end": 1235.66, "text": " So if you read the paper this is what I've been doing this is how you can do the zero"}, {"start": 1235.66, "end": 1238.0800000000002, "text": " shot image classification."}, {"start": 1238.0800000000002, "end": 1239.78, "text": " So we form a caption like this."}, {"start": 1239.78, "end": 1244.26, "text": " This is a photo of and then goes a label for example horse dog or whatnot."}, {"start": 1244.26, "end": 1246.06, "text": " So I'm not sure what the cipher 10 classes are."}, {"start": 1246.06, "end": 1247.46, "text": " We can print that."}, {"start": 1247.46, "end": 1253.66, "text": " I can kind of add a cell here and then print cipher 10."}, {"start": 1253.66, "end": 1254.94, "text": " Let's see what's going on here."}, {"start": 1254.94, "end": 1255.94, "text": " If I run this."}, {"start": 1255.94, "end": 1256.94, "text": " Okay."}, {"start": 1256.94, "end": 1261.66, "text": " Apple aquarium fish baby bear beaver bad blah blah blah."}, {"start": 1261.66, "end": 1262.66, "text": " Nice."}, {"start": 1262.66, "end": 1266.26, "text": " So we're going to as I said form captions from the classes like this and then do the"}, {"start": 1266.26, "end": 1267.26, "text": " tokenization."}, {"start": 1267.26, "end": 1272.3, "text": " So again the shape of this should be I guess hundred seventy seven because we have hundred"}, {"start": 1272.3, "end": 1278.3799999999999, "text": " captions hundred labels and hundred captions and seventy seven is the expected context"}, {"start": 1278.3799999999999, "end": 1279.3799999999999, "text": " length."}, {"start": 1279.3799999999999, "end": 1284.7, "text": " So let me run this and let me print the text tokens shape just to be sure."}, {"start": 1284.7, "end": 1287.8999999999999, "text": " So yeah as expected."}, {"start": 1287.8999999999999, "end": 1295.02, "text": " And now what we're going to do is the same thing as before past the textual these these"}, {"start": 1295.02, "end": 1298.02, "text": " these like tensors."}, {"start": 1298.02, "end": 1299.02, "text": " Okay this is not a tensor."}, {"start": 1299.02, "end": 1300.7, "text": " This is just as a wait."}, {"start": 1300.7, "end": 1302.26, "text": " What's the what's the type of this."}, {"start": 1302.26, "end": 1303.26, "text": " It's interesting."}, {"start": 1303.26, "end": 1304.26, "text": " What."}, {"start": 1304.26, "end": 1305.26, "text": " Okay."}, 
{"start": 1305.26, "end": 1306.26, "text": " It says here it's a torch tensor."}, {"start": 1306.26, "end": 1308.06, "text": " We're going to pass that we're going to encode a text."}, {"start": 1308.06, "end": 1314.74, "text": " So we are going to end up here with I assume hundred five twelve because all of those captions"}, {"start": 1314.74, "end": 1317.54, "text": " are going to be embedded into a vector."}, {"start": 1317.54, "end": 1319.48, "text": " So we get hundred five twelve."}, {"start": 1319.48, "end": 1321.72, "text": " Then we do the normalization as before."}, {"start": 1321.72, "end": 1326.3799999999999, "text": " So that means all of the norms of those embedding vectors going to be equal to one."}, {"start": 1326.3799999999999, "end": 1330.34, "text": " Whereas by norm I mean L2 norm in particular."}, {"start": 1330.34, "end": 1332.9399999999998, "text": " And then what we do is again image features text feature."}, {"start": 1332.9399999999998, "end": 1334.54, "text": " So we're not teaching the images."}, {"start": 1334.54, "end": 1337.4199999999998, "text": " So we're still using those eight images from before."}, {"start": 1337.4199999999998, "end": 1344.1, "text": " And we're just trying to find the most similar class from Cypher hundred for those images."}, {"start": 1344.1, "end": 1349.6999999999998, "text": " And again this thing is just going to calculate pair by similarity score between each of the"}, {"start": 1349.6999999999998, "end": 1354.26, "text": " images each of the eight images and each of the hundred captions."}, {"start": 1354.26, "end": 1358.1, "text": " We have to multiply by 100 just for numerical stability."}, {"start": 1358.1, "end": 1359.78, "text": " And then we apply softmax."}, {"start": 1359.78, "end": 1365.94, "text": " So that that means that now we'll basically have a sum sum would be equal to one for a"}, {"start": 1365.94, "end": 1367.62, "text": " particular image."}, {"start": 1367.62, "end": 1373.1, "text": " And so the sum of the similarity scores for this particular image and all of the captions"}, {"start": 1373.1, "end": 1375.3, "text": " needs to be equal to one."}, {"start": 1375.3, "end": 1376.3, "text": " OK."}, {"start": 1376.3, "end": 1380.58, "text": " So let me run this."}, {"start": 1380.58, "end": 1383.1, "text": " And finally what we do is once we find those."}, {"start": 1383.1, "end": 1385.18, "text": " So let me maybe let me print this."}, {"start": 1385.18, "end": 1393.18, "text": " So if I do this if I take the text props and they print those let's first bring the shape"}, {"start": 1393.18, "end": 1399.46, "text": " the shape should be what the shape should be eight times hundred I guess."}, {"start": 1399.46, "end": 1400.46, "text": " Yeah."}, {"start": 1400.46, "end": 1406.42, "text": " And if I print the actual text props it's going to be just not that intelligible because"}, {"start": 1406.42, "end": 1408.1000000000001, "text": " bunch of small numbers."}, {"start": 1408.1000000000001, "end": 1413.8200000000002, "text": " So this here this vector here is basically the similarity score for the first image and"}, {"start": 1413.82, "end": 1418.54, "text": " all of the hundred classes of the Cypher 10 Cypher 100."}, {"start": 1418.54, "end": 1422.98, "text": " And now we're trying to find the maximum because that's the high similarity."}, {"start": 1422.98, "end": 1425.3799999999999, "text": " And we find that by doing the top case."}, {"start": 1425.3799999999999, "end": 1431.4199999999998, "text": " So we 
find the five highest similarity scores for each of the images."}, {"start": 1431.4199999999998, "end": 1432.4199999999998, "text": " And that's what we get here."}, {"start": 1432.4199999999998, "end": 1436.86, "text": " So let me now print the top props and the top labels."}, {"start": 1436.86, "end": 1438.62, "text": " So let me just bring the top labels actually."}, {"start": 1438.62, "end": 1439.82, "text": " So we're going to get."}, {"start": 1439.82, "end": 1444.58, "text": " So these are the indices of the classes from Cypher 100."}, {"start": 1444.58, "end": 1447.82, "text": " They have the highest similarity score for the associated image."}, {"start": 1447.82, "end": 1449.9399999999998, "text": " So hopefully that makes sense."}, {"start": 1449.9399999999998, "end": 1451.5, "text": " OK."}, {"start": 1451.5, "end": 1458.34, "text": " So having done that let me run this final cell which is going to do the following."}, {"start": 1458.34, "end": 1463.86, "text": " So we are going to go through the original images."}, {"start": 1463.86, "end": 1466.8999999999999, "text": " We're going to plot that image."}, {"start": 1466.9, "end": 1471.46, "text": " We're going to take the probability scores."}, {"start": 1471.46, "end": 1473.14, "text": " OK."}, {"start": 1473.14, "end": 1477.6200000000001, "text": " And we're just going to do bar plots with those scores and with the associated images."}, {"start": 1477.6200000000001, "end": 1482.3400000000001, "text": " So again let me run this because this is too much detail to analyze in a video."}, {"start": 1482.3400000000001, "end": 1485.5800000000002, "text": " But basically this is what happens."}, {"start": 1485.5800000000002, "end": 1492.3400000000001, "text": " We just plot the images and the probabilities as well as the labels we just found in the"}, {"start": 1492.3400000000001, "end": 1493.3400000000001, "text": " cell above."}, {"start": 1493.3400000000001, "end": 1496.02, "text": " So this is what this magic here does."}, {"start": 1496.02, "end": 1503.1399999999999, "text": " You can see that this image is the closest class in Cypher 100 is woman."}, {"start": 1503.1399999999999, "end": 1507.34, "text": " The closest class for this picture here is man."}, {"start": 1507.34, "end": 1510.86, "text": " The closest class for the cat images sweet pepper."}, {"start": 1510.86, "end": 1511.86, "text": " OK."}, {"start": 1511.86, "end": 1516.54, "text": " We have a false like positive there."}, {"start": 1516.54, "end": 1518.02, "text": " And now we have cop here."}, {"start": 1518.02, "end": 1519.02, "text": " OK."}, {"start": 1519.02, "end": 1521.22, "text": " That's fairly decent motorcycle cattle."}, {"start": 1521.22, "end": 1522.98, "text": " I guess that's the closest thing they have."}, {"start": 1522.98, "end": 1524.82, "text": " Let me see where they have a horse."}, {"start": 1524.82, "end": 1527.26, "text": " So I guess the horse would be the closest label."}, {"start": 1527.26, "end": 1529.06, "text": " I'm going to do the following."}, {"start": 1529.06, "end": 1535.34, "text": " So I'm going to print his horse in Cypher 10 classes."}, {"start": 1535.34, "end": 1536.34, "text": " And it's not."}, {"start": 1536.34, "end": 1537.34, "text": " So we don't have a horse label."}, {"start": 1537.34, "end": 1542.3799999999999, "text": " So that means that this is probably the closest label we got to hear a motorcycle make sense"}, {"start": 1542.3799999999999, "end": 1543.58, "text": " again bed."}, {"start": 1543.58, "end": 
1548.4199999999998, "text": " So this is completely gibberish unless there is some word better something inside of this"}, {"start": 1548.4199999999998, "end": 1550.7, "text": " image."}, {"start": 1550.7, "end": 1551.7, "text": " Let me zoom in."}, {"start": 1551.7, "end": 1552.7, "text": " Whoops."}, {"start": 1552.7, "end": 1558.66, "text": " Let us first determine my blah blah blah."}, {"start": 1558.66, "end": 1560.66, "text": " No this is probably just false positive."}, {"start": 1560.66, "end": 1564.22, "text": " And finally here we have a rocket which is also a good class."}, {"start": 1564.22, "end": 1565.22, "text": " That's it."}, {"start": 1565.22, "end": 1566.22, "text": " That's the first Jupiter notebook."}, {"start": 1566.22, "end": 1569.26, "text": " Do let me know how you're liking this video so far."}, {"start": 1569.26, "end": 1572.3, "text": " Comment down below if you find it useful."}, {"start": 1572.3, "end": 1575.74, "text": " OK now let's go to the second Jupiter notebook."}, {"start": 1575.74, "end": 1579.82, "text": " This one we're going to go fast through it because it's much simpler I assume."}, {"start": 1579.82, "end": 1583.3799999999999, "text": " Let me run the cell."}, {"start": 1583.3799999999999, "end": 1586.26, "text": " OK let me again available models."}, {"start": 1586.26, "end": 1592.54, "text": " We've seen that we're just going to load the VAT big 32 by 32."}, {"start": 1592.54, "end": 1594.82, "text": " That size is that's that model."}, {"start": 1594.82, "end": 1600.6599999999999, "text": " So that's the same model we used in the previous Jupiter again 151 million 224 2424 context"}, {"start": 1600.6599999999999, "end": 1605.1, "text": " length is 77 for the transformer woke up sizes like this."}, {"start": 1605.1, "end": 1607.26, "text": " So here what I've done is the following."}, {"start": 1607.26, "end": 1613.3799999999999, "text": " So they have a thousand image net classes with a small caveat."}, {"start": 1613.3799999999999, "end": 1621.7, "text": " So they've actually modified some of the classes such that clip has easier time basically recognizing"}, {"start": 1621.7, "end": 1622.7, "text": " those images."}, {"start": 1622.7, "end": 1624.86, "text": " So I think they mentioned it here."}, {"start": 1624.86, "end": 1626.08, "text": " Let me kind of read it through."}, {"start": 1626.08, "end": 1630.2, "text": " So subset of these class names are modified from the default image and class names source"}, {"start": 1630.2, "end": 1634.52, "text": " from any athletes image and simple labels."}, {"start": 1634.52, "end": 1639.76, "text": " These edits were made by trial and error and concentrated on the lowest performing classes"}, {"start": 1639.76, "end": 1646.06, "text": " according to top one and top five accuracy on the image net training set for the for"}, {"start": 1646.06, "end": 1647.98, "text": " these variations of the models."}, {"start": 1647.98, "end": 1653.7, "text": " So these weeks improved up one by one point five percent on the ATB 32 over using the"}, {"start": 1653.7, "end": 1660.22, "text": " default class names Alec which is the author of the Alec Redford assume from the clip paper"}, {"start": 1660.22, "end": 1665.18, "text": " got bored somewhere along the way as gain started to diminish and never finished updating"}, {"start": 1665.18, "end": 1666.5, "text": " tweaking the list."}, {"start": 1666.5, "end": 1671.38, "text": " He also didn't revisit this with the better performing models or any of the VAT 
models"}, {"start": 1671.38, "end": 1676.3, "text": " he thinks it's likely in other zero point five percent to one percent up one could be"}, {"start": 1676.3, "end": 1677.82, "text": " gained from further work here."}, {"start": 1677.82, "end": 1682.0, "text": " It'd be interesting to more rigorously study understand this."}, {"start": 1682.0, "end": 1683.32, "text": " So this is super interesting."}, {"start": 1683.32, "end": 1687.9, "text": " So this means that by tweaking the classes and then embedding them into those templates"}, {"start": 1687.9, "end": 1690.1200000000001, "text": " that the caption templates we saw."}, {"start": 1690.12, "end": 1692.6, "text": " So let me show you what I mean here."}, {"start": 1692.6, "end": 1696.0, "text": " So this is the template we are using."}, {"start": 1696.0, "end": 1702.8999999999999, "text": " So by just tweaking that we are forming better text embeddings and then consequently those"}, {"start": 1702.8999999999999, "end": 1708.8799999999999, "text": " make for a better classifiers your shot classifier on top of our image embedding vectors."}, {"start": 1708.8799999999999, "end": 1712.5, "text": " So let me quickly explain what I mean by that before proceeding here."}, {"start": 1712.5, "end": 1719.86, "text": " So imagine we call up we've done here we basically took hundred classes we embedded them into"}, {"start": 1719.86, "end": 1727.1399999999999, "text": " captions we have hundred like strings now we formed embedding vectors for each of the"}, {"start": 1727.1399999999999, "end": 1730.62, "text": " captions and now we have hundred hundred seventy."}, {"start": 1730.62, "end": 1737.82, "text": " So we basically have this we have sorry we have after embedding we end up with hundred"}, {"start": 1737.82, "end": 1738.86, "text": " five twelve."}, {"start": 1738.86, "end": 1741.06, "text": " So let me explain why this is so cool."}, {"start": 1741.06, "end": 1745.58, "text": " I'm going to open up my one note for a second here and let me just draw you a couple of"}, {"start": 1745.58, "end": 1746.58, "text": " things here."}, {"start": 1746.58, "end": 1747.58, "text": " OK."}, {"start": 1747.58, "end": 1748.58, "text": " So let's see what we've done."}, {"start": 1748.58, "end": 1752.9399999999998, "text": " So we have we embedded our image here into an embedding vector that's like five hundred"}, {"start": 1752.9399999999998, "end": 1757.26, "text": " twelve dimensions we have five hundred and twelve dimensions here."}, {"start": 1757.26, "end": 1763.22, "text": " Then we saw what we've done with the with the with the classes we've we formed basically"}, {"start": 1763.22, "end": 1766.26, "text": " a collection of these embedding vectors."}, {"start": 1766.26, "end": 1771.8999999999999, "text": " So this is hundred we have hundred and we have five twelve here as well."}, {"start": 1771.8999999999999, "end": 1776.74, "text": " So now because of the way how we calculate the similarity scores we basically multiply"}, {"start": 1776.74, "end": 1780.66, "text": " with your dot product between this one and this one with your dot product between the"}, {"start": 1780.66, "end": 1786.34, "text": " image vector again and the next and the next embedding vector the next representation of"}, {"start": 1786.34, "end": 1788.86, "text": " a class and the next one next one."}, {"start": 1788.86, "end": 1793.42, "text": " And what do you do if you think about it what do you do when you do dot products with these"}, {"start": 1793.42, "end": 1794.42, 
"text": " weights here."}, {"start": 1794.42, "end": 1800.22, "text": " This represents the weights of ad hoc MLP of a zero shot MLP that's used as a prediction"}, {"start": 1800.22, "end": 1802.14, "text": " head to predict the classes."}, {"start": 1802.14, "end": 1806.34, "text": " So again if you do dot product between this and this what you're effectively doing is"}, {"start": 1806.34, "end": 1807.34, "text": " the following."}, {"start": 1807.34, "end": 1812.6999999999998, "text": " So you're doing this and you form one weight and then you take this one."}, {"start": 1812.6999999999998, "end": 1814.82, "text": " So let me change the color here."}, {"start": 1814.82, "end": 1819.1, "text": " Then you do this you take this vector you do dot product between this and this and you"}, {"start": 1819.1, "end": 1820.54, "text": " effectively do the following."}, {"start": 1820.54, "end": 1824.6999999999998, "text": " So you just I'm kind of assuming this has four dimensions but it has five twelve so"}, {"start": 1824.6999999999998, "end": 1826.26, "text": " just bear with me here."}, {"start": 1826.26, "end": 1831.3, "text": " So you do this and then you do the same thing for the third vector."}, {"start": 1831.3, "end": 1834.52, "text": " So you're going to do dot product which is going to do this."}, {"start": 1834.52, "end": 1838.98, "text": " And as you can see this is basically the weights of this particular MLP here."}, {"start": 1838.98, "end": 1845.98, "text": " So you ad hoc construct the prediction head and by doing that you can take arbitrary data"}, {"start": 1845.98, "end": 1850.48, "text": " set you can literally take arbitrary data set here."}, {"start": 1850.48, "end": 1857.42, "text": " Take the classes form those captions embed those captions through just kind of do a forward"}, {"start": 1857.42, "end": 1859.54, "text": " pass through the transformer."}, {"start": 1859.54, "end": 1862.0, "text": " You get additional MLP."}, {"start": 1862.0, "end": 1867.04, "text": " So now you have MLP so this this is going to be a set of end classes with dimensionality"}, {"start": 1867.04, "end": 1873.5, "text": " of five twelve and you can basically use this to now classify this image into end classes."}, {"start": 1873.5, "end": 1878.54, "text": " So that's super cool and that's what a white clip is such an awesome model because of all"}, {"start": 1878.54, "end": 1880.74, "text": " of these zero shot capabilities."}, {"start": 1880.74, "end": 1883.98, "text": " So having said that let's go back to the code."}, {"start": 1883.98, "end": 1887.74, "text": " Let me go back to the where we stop here."}, {"start": 1887.74, "end": 1894.14, "text": " So now you can appreciate why why tweaking this can improve that prediction had weights"}, {"start": 1894.14, "end": 1899.8, "text": " and thus give us higher top one and top five accuracy on arbitrary data set data set."}, {"start": 1899.8, "end": 1901.7, "text": " So that's why they've been doing that."}, {"start": 1901.7, "end": 1902.7, "text": " OK."}, {"start": 1902.7, "end": 1905.46, "text": " So they give an example of what the tweaks some of the tweaks they've done."}, {"start": 1905.46, "end": 1910.14, "text": " So some of the examples beyond the crane crane so crane is going to be so one crane can be"}, {"start": 1910.14, "end": 1911.14, "text": " construction crane."}, {"start": 1911.14, "end": 1913.66, "text": " The other one is actually like a bird so bird crane."}, {"start": 1913.66, "end": 1919.14, "text": " So that's why they did 
those modifications and then they've done stuff like mail convert"}, {"start": 1919.14, "end": 1923.5800000000002, "text": " that to fingernail or like metal nail depending."}, {"start": 1923.5800000000002, "end": 1925.66, "text": " Yes I guess."}, {"start": 1925.66, "end": 1929.74, "text": " OK clip interprets nail as fingernail but we actually want it to interpret it as a metal"}, {"start": 1929.74, "end": 1934.38, "text": " nail and that's why they explicitly change the class to metal nail."}, {"start": 1934.38, "end": 1939.18, "text": " And similarly image net kite classes refer to the bird of prey not the flying toy."}, {"start": 1939.18, "end": 1943.02, "text": " So we change kite to kite bird of prey etc etc."}, {"start": 1943.02, "end": 1944.02, "text": " OK."}, {"start": 1944.02, "end": 1945.06, "text": " So that's that's it."}, {"start": 1945.06, "end": 1949.62, "text": " Now they have instead of using that single template we've been using here this is a photo"}, {"start": 1949.62, "end": 1953.74, "text": " of they also experimented with trying different templates."}, {"start": 1953.74, "end": 1957.86, "text": " You see here a bad photo of a photo of many a sculpture of."}, {"start": 1957.86, "end": 1961.1, "text": " And so there is a creative set of these here."}, {"start": 1961.1, "end": 1963.66, "text": " I think there is like literally at least 100 of them."}, {"start": 1963.66, "end": 1965.82, "text": " Let me let me see if you run this."}, {"start": 1965.82, "end": 1967.22, "text": " So we have 80 templates."}, {"start": 1967.22, "end": 1970.9, "text": " So they devised 80 of these templates."}, {"start": 1970.9, "end": 1977.5400000000002, "text": " And now they're going to do some type of in sampling to get better results for for image"}, {"start": 1977.5400000000002, "end": 1978.5400000000002, "text": " net."}, {"start": 1978.5400000000002, "end": 1980.3400000000001, "text": " Let's see how how that works."}, {"start": 1980.3400000000001, "end": 1984.18, "text": " Let's quickly go through the Russian rationale behind doing this again."}, {"start": 1984.18, "end": 1988.98, "text": " A similar intuition guided trial and error based on the image and training set was used"}, {"start": 1988.98, "end": 1990.22, "text": " for templates."}, {"start": 1990.22, "end": 1995.5, "text": " This list is pretty haphazard and was gradually made expanded over the course of about a year"}, {"start": 1995.5, "end": 1998.74, "text": " of the project and was revisited tweak every few months."}, {"start": 1998.74, "end": 2004.42, "text": " A surprising weird thing was adding templates intended to help image net our performance"}, {"start": 2004.42, "end": 2008.9, "text": " specifying different possible renditions of an object improved standard image net accuracy"}, {"start": 2008.9, "end": 2015.98, "text": " to after the 80 templates were locked for the paper we ran sequential forward selection"}, {"start": 2015.98, "end": 2018.06, "text": " over the list of 80 templates."}, {"start": 2018.06, "end": 2023.34, "text": " The search terminated after in sampling seven templates and selected them in the order below."}, {"start": 2023.34, "end": 2026.32, "text": " So combining these seven templates gave them the best results."}, {"start": 2026.32, "end": 2030.7, "text": " So then they say speculating we think it's interesting to see different scales large"}, {"start": 2030.7, "end": 2031.7, "text": " and small."}, {"start": 2031.7, "end": 2037.54, "text": " That's why they have templates like this 
working very nicely a photo of the large a difficult"}, {"start": 2037.54, "end": 2038.54, "text": " view."}, {"start": 2038.54, "end": 2042.74, "text": " That's why they have a bad photo of the and abstract versions origami video game art were"}, {"start": 2042.74, "end": 2044.06, "text": " all selected for."}, {"start": 2044.06, "end": 2046.34, "text": " But we haven't studied this in any detail."}, {"start": 2046.34, "end": 2051.64, "text": " This subset performs a bit better than the full 80 ensemble reported in the paper especially"}, {"start": 2051.64, "end": 2053.36, "text": " for the smaller models."}, {"start": 2053.36, "end": 2056.52, "text": " So what I mean by ensemble let me let me kind of break that down."}, {"start": 2056.52, "end": 2058.88, "text": " So we're going to see that now."}, {"start": 2058.88, "end": 2064.7000000000003, "text": " So let me load the image net data set here."}, {"start": 2064.7000000000003, "end": 2067.98, "text": " I think it's cash so shouldn't take too much."}, {"start": 2067.98, "end": 2071.42, "text": " So we're going to have the image net and here is what what they are doing."}, {"start": 2071.42, "end": 2079.9, "text": " So basically they form a zero shot classifier using the following technique."}, {"start": 2079.9, "end": 2085.3, "text": " So we pass all of the image and templates and we pass the image and classes."}, {"start": 2085.3, "end": 2086.6, "text": " And so let's see what they do."}, {"start": 2086.6, "end": 2091.98, "text": " So they go through all of the classes and then they go through all of the templates."}, {"start": 2091.98, "end": 2096.82, "text": " So that means that a single class will this time be embedded using not a single template"}, {"start": 2096.82, "end": 2103.56, "text": " but 80 of those and then they tokenize those the encoders which means we'll hear after"}, {"start": 2103.56, "end": 2108.9, "text": " doing the encoding we'll end up with what we'll end up with because we have 80 templates"}, {"start": 2108.9, "end": 2113.04, "text": " we'll end up with 80 77 right."}, {"start": 2113.04, "end": 2115.7000000000003, "text": " I think let me think here."}, {"start": 2115.7000000000003, "end": 2118.5, "text": " So we're doing the tokenization no sorry."}, {"start": 2118.5, "end": 2121.76, "text": " So here we'll end up with 80 77."}, {"start": 2121.76, "end": 2125.36, "text": " So tokenizer is going to give us this."}, {"start": 2125.36, "end": 2128.98, "text": " We can kind of print those shapes later on and see whether that's true."}, {"start": 2128.98, "end": 2135.7200000000003, "text": " And now after doing the embeddings they get 80 512 here."}, {"start": 2135.72, "end": 2140.4599999999996, "text": " And now the smart part is after doing the normalization which is which is like standard"}, {"start": 2140.4599999999996, "end": 2148.3399999999997, "text": " thing to do they form the final representation for that class by averaging across all of"}, {"start": 2148.3399999999997, "end": 2151.08, "text": " these embeddings for all of these different templates."}, {"start": 2151.08, "end": 2156.02, "text": " And then additionally they divide by the by the norm of the class embedding to make sure"}, {"start": 2156.02, "end": 2160.7, "text": " it's again has L2 norm equals to one."}, {"start": 2160.7, "end": 2162.3599999999997, "text": " And then they use."}, {"start": 2162.3599999999997, "end": 2164.8999999999996, "text": " So now they just stack those zero shot weights."}, {"start": 2164.9, "end": 2169.14, "text": " 
So this is going to be the zero shot classifier I explained in the OneNote a couple of minutes"}, {"start": 2169.14, "end": 2170.98, "text": " ago and that's it."}, {"start": 2170.98, "end": 2173.7400000000002, "text": " So this is going to be our classifier zero shot weights."}, {"start": 2173.7400000000002, "end": 2179.62, "text": " So if I run this it might take a while but it's such a neat idea."}, {"start": 2179.62, "end": 2184.94, "text": " Basically ensembling is a very very famous technique that people use to trade"}, {"start": 2184.94, "end": 2190.5, "text": " off between higher performance but like somewhat obviously longer inference more memory etc."}, {"start": 2190.5, "end": 2197.22, "text": " But just by kind of combining the majority like logic just doing this majority voting"}, {"start": 2197.22, "end": 2202.3, "text": " type of logic we get higher quality embeddings."}, {"start": 2202.3, "end": 2204.38, "text": " And and yeah."}, {"start": 2204.38, "end": 2214.38, "text": " OK this ran now we can print what we get here again we're expecting I guess a shape of 1000"}, {"start": 2214.38, "end": 2221.2200000000003, "text": " 512 because we have 1000 ImageNet classes we have embedding vectors of size 512."}, {"start": 2221.2200000000003, "end": 2225.6400000000003, "text": " So let me just print the zero shot weights shape here."}, {"start": 2225.6400000000003, "end": 2232.6600000000003, "text": " So if I print whoops if I print the shape here let me see whether my hypothesis was"}, {"start": 2232.6600000000003, "end": 2234.12, "text": " correct."}, {"start": 2234.12, "end": 2240.46, "text": " So 512 1000 OK it's just not it's stacked in a different way so that's why we get this"}, {"start": 2240.46, "end": 2242.82, "text": " shape."}, {"start": 2242.82, "end": 2255.34, "text": " OK we can maybe quickly just print these shapes here so text shape and I'm going to add print"}, {"start": 2255.34, "end": 2259.7000000000003, "text": " class embeddings here the shape as well."}, {"start": 2259.7000000000003, "end": 2265.2400000000002, "text": " And then we're just going to break out of this whole logic so that this is run quickly."}, {"start": 2265.2400000000002, "end": 2272.1000000000004, "text": " So whoops I'm getting OK because of the break statement everything crashed but like we are"}, {"start": 2272.1, "end": 2276.7, "text": " verifying that this makes sense so 80 77 80 512."}, {"start": 2276.7, "end": 2283.3399999999997, "text": " Let me now remove these statements and I'll have to rerun this to get the actual zero"}, {"start": 2283.3399999999997, "end": 2284.5, "text": " shot weights."}, {"start": 2284.5, "end": 2288.42, "text": " So after that while that's running let me walk you through this part here."}, {"start": 2288.42, "end": 2292.96, "text": " So we have the accuracy I will not go through how the accuracy is calculated you can go"}, {"start": 2292.96, "end": 2296.56, "text": " through that piece of code on your own."}, {"start": 2296.56, "end": 2302.56, "text": " So what we're going to do is we're going to here go through the image net batches of image"}, {"start": 2302.56, "end": 2303.56, "text": " net images."}, {"start": 2303.56, "end": 2309.2599999999998, "text": " We're going to push those to CUDA the associated labels as well."}, {"start": 2309.2599999999998, "end": 2312.7, "text": " So when I say to CUDA I mean to a GPU if you have one."}, {"start": 2312.7, "end": 2321.9, "text": " We are going to then encode the images so there's going to be 
32 512 so 32 because we"}, {"start": 2321.9, "end": 2328.58, "text": " specified the batch size somewhere here so batch size going to be 32 and then 512 because"}, {"start": 2328.58, "end": 2335.7400000000002, "text": " as I said the visual embedding vector is going to be a 512 dimensionality."}, {"start": 2335.7400000000002, "end": 2340.78, "text": " Then we just do the regular normalization here and then we find the similarity."}, {"start": 2340.78, "end": 2347.06, "text": " So basically this is again this is your prediction head that was ad hoc constructed ad hoc."}, {"start": 2347.06, "end": 2355.46, "text": " So this is the zero shot classifier head and we get the logits so and then after we pass"}, {"start": 2355.46, "end": 2364.1, "text": " the logits and the target we'll see how good do we perform here when it comes to accuracy"}, {"start": 2364.1, "end": 2365.2999999999997, "text": " top one and top five accuracy."}, {"start": 2365.2999999999997, "end": 2368.56, "text": " So let me run this cell and let me run this one."}, {"start": 2368.56, "end": 2372.9, "text": " This might take a while so maybe I'll just kind of skip to the end."}, {"start": 2372.9, "end": 2381.06, "text": " Okay so this thing ran and we can see top one accuracy 55 top five 83.4 by just having"}, {"start": 2381.06, "end": 2383.82, "text": " by just doing this zero shot classification."}, {"start": 2383.82, "end": 2387.62, "text": " Again because we're passing logits just a minor detail obviously we'll have some soft"}, {"start": 2387.62, "end": 2388.82, "text": " max here or something."}, {"start": 2388.82, "end": 2392.7000000000003, "text": " Let me let me see how this logic here works."}, {"start": 2392.7000000000003, "end": 2394.7400000000002, "text": " So we just grab the outputs we find."}, {"start": 2394.7400000000002, "end": 2402.06, "text": " Okay we don't even have to do the soft max we just need to find the top five or whatever"}, {"start": 2402.06, "end": 2408.2999999999997, "text": " the case is five in our case top five logits which is going to correlate with the top five"}, {"start": 2408.2999999999997, "end": 2412.48, "text": " highest probabilities if we were to do soft max which we will not because that would be"}, {"start": 2412.48, "end": 2416.74, "text": " redundant and then just your usual logic."}, {"start": 2416.74, "end": 2420.62, "text": " It's going to convolute it because it's a one liner but it just kind of calculates the"}, {"start": 2420.62, "end": 2423.2599999999998, "text": " top one and top five accuracy."}, {"start": 2423.2599999999998, "end": 2424.2599999999998, "text": " Okay that's it."}, {"start": 2424.2599999999998, "end": 2430.42, "text": " Do let me know whether you found this Jupyter walkthrough useful and now I'm going to dig"}, {"start": 2430.42, "end": 2436.62, "text": " into going through some details of how everything works behind the scenes."}, {"start": 2436.62, "end": 2439.14, "text": " Hopefully you'll find this interesting as well."}, {"start": 2439.14, "end": 2440.82, "text": " So let's start with this playground."}, {"start": 2440.82, "end": 2447.66, "text": " So I just created a simple playground.py file then I just copy pasted pretty much some of"}, {"start": 2447.66, "end": 2454.58, "text": " the examples here from the readme file of their official repository and now we're going"}, {"start": 2454.58, "end": 2460.7, "text": " to walk through this particular example and see the details behind how the imaging code"}, {"start": 2460.7, "end": 2462.24, 
"text": " tokenization all of that works."}, {"start": 2462.24, "end": 2467.58, "text": " So without further ado let me run the debugger here."}, {"start": 2467.58, "end": 2473.14, "text": " So let me maybe zoom in a little bit more here hopefully you can see everything nicely."}, {"start": 2473.14, "end": 2478.9, "text": " So what's going on is that this tokenizer is just being created initially before we"}, {"start": 2478.9, "end": 2481.7, "text": " even hit this first line in simple."}, {"start": 2481.7, "end": 2484.8199999999997, "text": " We have a tokenizer somewhere here."}, {"start": 2484.8199999999997, "end": 2487.18, "text": " Let me just find it."}, {"start": 2487.18, "end": 2490.9399999999996, "text": " It's kind of defined I think in model or somewhere."}, {"start": 2490.9399999999996, "end": 2492.7799999999997, "text": " The object is independently defined."}, {"start": 2492.7799999999997, "end": 2493.7799999999997, "text": " Let me find it."}, {"start": 2493.7799999999997, "end": 2494.7799999999997, "text": " Oh my god."}, {"start": 2494.7799999999997, "end": 2495.7799999999997, "text": " Oh my god."}, {"start": 2495.7799999999997, "end": 2496.7799999999997, "text": " Okay tokenizer."}, {"start": 2496.7799999999997, "end": 2497.7799999999997, "text": " Yeah here."}, {"start": 2497.7799999999997, "end": 2498.7799999999997, "text": " So here is the construct."}, {"start": 2498.7799999999997, "end": 2504.48, "text": " So basically the constructor is called before we even hit the first line of our function"}, {"start": 2504.48, "end": 2505.48, "text": " in the playground."}, {"start": 2505.48, "end": 2506.8999999999996, "text": " So again let me just kind of show you."}, {"start": 2506.8999999999996, "end": 2511.3199999999997, "text": " I have the main function I'm calling the simple here and simple is this one."}, {"start": 2511.32, "end": 2515.42, "text": " So you'd expect this to be the first line that we hit but that's not the case."}, {"start": 2515.42, "end": 2519.7400000000002, "text": " We instead hit this one because as I already explained why that's going on."}, {"start": 2519.7400000000002, "end": 2521.42, "text": " Okay so tokenizer."}, {"start": 2521.42, "end": 2522.42, "text": " What's going on?"}, {"start": 2522.42, "end": 2528.44, "text": " So I'll have to kind of skim through some of the details because it gets fairly complicated"}, {"start": 2528.44, "end": 2532.52, "text": " and I also don't understand every single detail as well to be honest."}, {"start": 2532.52, "end": 2539.1400000000003, "text": " So this bytes to unicode what it does is just creates like a dictionary between 0 to 255"}, {"start": 2539.14, "end": 2548.06, "text": " which are like values that a byte can have and then it maps it onto a unicode string."}, {"start": 2548.06, "end": 2553.04, "text": " There is some convoluted logic here of which ranges they take etc."}, {"start": 2553.04, "end": 2557.14, "text": " So that's why I'm saying I'm going to kind of skip some of these details but like we'll"}, {"start": 2557.14, "end": 2559.96, "text": " analyze what we get as the output."}, {"start": 2559.96, "end": 2563.42, "text": " So we'll treat it like as a semi black box."}, {"start": 2563.42, "end": 2565.24, "text": " So if that's even a word."}, {"start": 2565.24, "end": 2569.54, "text": " Okay let me go to debug console and this is why Visual Studio Code is such an awesome"}, {"start": 2569.54, "end": 2572.2599999999998, "text": " environment."}, {"start": 2572.2599999999998, "end": 2574.14, 
"text": " Let me show you what we have here."}, {"start": 2574.14, "end": 2582.3799999999997, "text": " So if I print this we get as you can see here we get basically so from 0 to 255 we get the"}, {"start": 2582.3799999999997, "end": 2587.06, "text": " keys and then associated unicode characters."}, {"start": 2587.06, "end": 2589.7599999999998, "text": " Okay that's pretty much it."}, {"start": 2589.7599999999998, "end": 2592.4599999999996, "text": " Now the byte encoder is going to do the opposite."}, {"start": 2592.46, "end": 2600.92, "text": " It's just going to map from the unicode character onto the byte here."}, {"start": 2600.92, "end": 2603.96, "text": " So I'm just going to kind of skim over that."}, {"start": 2603.96, "end": 2607.2, "text": " So F10 then we open up so there is this BPE path."}, {"start": 2607.2, "end": 2611.98, "text": " So there is basically if I open it up here I think it stores somewhere in here."}, {"start": 2611.98, "end": 2614.42, "text": " Let me let me let me see where it is."}, {"start": 2614.42, "end": 2615.42, "text": " Just a second."}, {"start": 2615.42, "end": 2616.46, "text": " Okay so here it is."}, {"start": 2616.46, "end": 2623.0, "text": " So this is this file BP simple vocab 16E6 takes that."}, {"start": 2623.0, "end": 2626.42, "text": " So that's basically your BPE."}, {"start": 2626.42, "end": 2630.5, "text": " That's the byte pair encoding as I said I will not get into the details."}, {"start": 2630.5, "end": 2632.42, "text": " So what they've done they had some data set."}, {"start": 2632.42, "end": 2639.14, "text": " I'm not sure how I can track back the actual data set that was used to train this BPE."}, {"start": 2639.14, "end": 2645.58, "text": " And then you do the BPE logic which is basically finding the pair of tokens with the highest"}, {"start": 2645.58, "end": 2649.86, "text": " frequency and then merging those two tokens and adding that token to the vocab and then"}, {"start": 2649.86, "end": 2657.5, "text": " just repeating those merges until we have the final vocabulary that that's the BPE vocab."}, {"start": 2657.5, "end": 2660.74, "text": " So that's kind of already done for us."}, {"start": 2660.74, "end": 2665.86, "text": " That's why we can just read this and then basically decode."}, {"start": 2665.86, "end": 2672.1, "text": " Consider thinking like decode as a UTF-8 and then just split on the new line characters."}, {"start": 2672.1, "end": 2676.22, "text": " And then I'll show you in a second what this merges are doing."}, {"start": 2676.22, "end": 2678.18, "text": " So the merges are doing the following."}, {"start": 2678.18, "end": 2681.7799999999997, "text": " So let me first close this so we have more space."}, {"start": 2681.7799999999997, "end": 2684.06, "text": " So merges is a list."}, {"start": 2684.06, "end": 2686.96, "text": " So I'm going to just show you the first five for example."}, {"start": 2686.96, "end": 2688.38, "text": " So you can see here."}, {"start": 2688.38, "end": 2690.46, "text": " So here these are the merges that were happening."}, {"start": 2690.46, "end": 2694.94, "text": " So that means this this this whoops why is this okay."}, {"start": 2694.94, "end": 2701.18, "text": " So this means this particular these two tokens were the most frequent tokens in that corpus"}, {"start": 2701.18, "end": 2705.3399999999997, "text": " of text at some point of time during the BPE algorithm."}, {"start": 2705.3399999999997, "end": 2709.54, "text": " So if you don't know what BPE is I strongly suggest 
you just kind of find some article."}, {"start": 2709.54, "end": 2713.7, "text": " I'm going to link it down in the YouTube description actually in the video description and you"}, {"start": 2713.7, "end": 2714.7, "text": " can check it out."}, {"start": 2714.7, "end": 2717.22, "text": " And so this will make sense my explanations."}, {"start": 2717.22, "end": 2723.1, "text": " So so yeah we have these merges now and I'm going to go over this."}, {"start": 2723.1, "end": 2725.62, "text": " And finally we start forming the vocab."}, {"start": 2725.62, "end": 2732.46, "text": " So first of all we're going to grab the the the the Unicode strings that we formed up"}, {"start": 2732.46, "end": 2733.46, "text": " there."}, {"start": 2733.46, "end": 2734.46, "text": " So let me go F10."}, {"start": 2734.46, "end": 2736.94, "text": " So that means the vocab size will be 256."}, {"start": 2736.94, "end": 2741.08, "text": " So if I print the vocab size here it's going to be 256."}, {"start": 2741.08, "end": 2744.7599999999998, "text": " Now the following again this is BPE idosyncrasy."}, {"start": 2744.7599999999998, "end": 2751.22, "text": " So basically we have to to concat this this end of word like token."}, {"start": 2751.22, "end": 2760.18, "text": " So if I click we'll just do we'll just basically form new tokens in our vocab by concatenating"}, {"start": 2760.18, "end": 2765.06, "text": " all of the existing like tokens with this particular end of word token."}, {"start": 2765.06, "end": 2766.06, "text": " OK."}, {"start": 2766.06, "end": 2771.74, "text": " So if I click F10 if I print the length now it's going to be 512 because we just duplicate"}, {"start": 2771.74, "end": 2776.3799999999997, "text": " everything and then we go through the merges and we because all of those merges basically"}, {"start": 2776.3799999999997, "end": 2780.04, "text": " are now a unique token in the BPE vocab."}, {"start": 2780.04, "end": 2781.42, "text": " That's why we do this logic here."}, {"start": 2781.42, "end": 2782.68, "text": " So we basically join."}, {"start": 2782.68, "end": 2784.5, "text": " So that means we're going to join these."}, {"start": 2784.5, "end": 2789.66, "text": " So we're going to have in instead of instead of so I can maybe let me see whether I can"}, {"start": 2789.66, "end": 2794.9, "text": " kind of step through this quickly or just do something like this."}, {"start": 2794.9, "end": 2795.9, "text": " OK."}, {"start": 2795.9, "end": 2802.14, "text": " So if I do this if I do F10 and let me print you the merge here directly in the debug console."}, {"start": 2802.14, "end": 2805.22, "text": " So as you can see we're going to add the in to the vocab."}, {"start": 2805.22, "end": 2807.2599999999998, "text": " And again that's BPE idosyncrasy."}, {"start": 2807.2599999999998, "end": 2809.42, "text": " I'm going to skip over all of that."}, {"start": 2809.42, "end": 2812.66, "text": " Click F5 here and just get to the next line here."}, {"start": 2812.66, "end": 2813.66, "text": " OK."}, {"start": 2813.66, "end": 2816.62, "text": " Let me collapse this."}, {"start": 2816.62, "end": 2817.62, "text": " So bear with me."}, {"start": 2817.62, "end": 2818.62, "text": " It's a bit more."}, {"start": 2818.62, "end": 2821.66, "text": " I'll summarize the findings in a second."}, {"start": 2821.66, "end": 2823.66, "text": " So we extend with these two special tokens."}, {"start": 2823.66, "end": 2824.66, "text": " Start of text and of text."}, {"start": 2824.66, "end": 2827.7400000000002, 
"text": " So it's like start of sentence and of sentence tokens."}, {"start": 2827.7400000000002, "end": 2831.14, "text": " So if I add all that now we have a pretty big vocab."}, {"start": 2831.14, "end": 2833.34, "text": " So I think it's almost 50."}, {"start": 2833.34, "end": 2834.34, "text": " Yeah."}, {"start": 2834.34, "end": 2835.34, "text": " 49,408."}, {"start": 2835.34, "end": 2837.12, "text": " That's it."}, {"start": 2837.12, "end": 2843.9, "text": " Now we form the encoder by basically zipping the those tokens from the vocab with the associated"}, {"start": 2843.9, "end": 2846.3399999999997, "text": " indices and we do the decoder."}, {"start": 2846.3399999999997, "end": 2852.02, "text": " So that's going to basically convert from from from a token into an int and vice versa"}, {"start": 2852.02, "end": 2855.58, "text": " the decoder will do from int into into the token."}, {"start": 2855.58, "end": 2862.2999999999997, "text": " So if I do F10 here just step over we get all of this and then they form the special"}, {"start": 2862.2999999999997, "end": 2864.8199999999997, "text": " variable BP ranks."}, {"start": 2864.8199999999997, "end": 2866.7999999999997, "text": " It's going to take those merges."}, {"start": 2866.8, "end": 2869.86, "text": " So and and and kind of store them."}, {"start": 2869.86, "end": 2872.1400000000003, "text": " We'll see why that's useful a bit later."}, {"start": 2872.1400000000003, "end": 2877.5, "text": " And then they have this cache again some optimization detail that the BP will need otherwise will"}, {"start": 2877.5, "end": 2878.88, "text": " be super slow."}, {"start": 2878.88, "end": 2885.86, "text": " And finally this pattern that they're going to use to kind of parse the input text."}, {"start": 2885.86, "end": 2886.86, "text": " We'll get to that in a second."}, {"start": 2886.86, "end": 2887.86, "text": " OK."}, {"start": 2887.86, "end": 2891.7000000000003, "text": " So as a summary we have a vocabulary formed using the BP."}, {"start": 2891.7000000000003, "end": 2892.7000000000003, "text": " Lots of details."}, {"start": 2892.7000000000003, "end": 2894.6600000000003, "text": " You don't need to know all of these."}, {"start": 2894.66, "end": 2899.22, "text": " What is important is that we have the encoder and decoder and all of these are some other"}, {"start": 2899.22, "end": 2901.2999999999997, "text": " implementation details."}, {"start": 2901.2999999999997, "end": 2905.2999999999997, "text": " Important ones but we can we can kind of kind of squint and ignore them."}, {"start": 2905.2999999999997, "end": 2906.2999999999997, "text": " OK."}, {"start": 2906.2999999999997, "end": 2908.3799999999997, "text": " Let me go to our next breakpoint."}, {"start": 2908.3799999999997, "end": 2909.3799999999997, "text": " OK."}, {"start": 2909.3799999999997, "end": 2911.54, "text": " So we are now in the first line of the simple method."}, {"start": 2911.54, "end": 2912.54, "text": " OK."}, {"start": 2912.54, "end": 2914.06, "text": " So we just do the standard stuff."}, {"start": 2914.06, "end": 2919.7799999999997, "text": " We see whether we have like a GPU device and I do have to do and I do happen to have a"}, {"start": 2919.7799999999997, "end": 2920.7799999999997, "text": " GPU."}, {"start": 2920.7799999999997, "end": 2921.8999999999996, "text": " So that's that's cool."}, {"start": 2921.8999999999996, "end": 2923.04, "text": " Now we'll load the device."}, {"start": 2923.04, "end": 2926.3, "text": " So now we'll have some some some 
break point."}, {"start": 2926.3, "end": 2927.3, "text": " That's interesting."}, {"start": 2927.3, "end": 2932.66, "text": " So basically before the video I put some salient break points on certain parts of the code"}, {"start": 2932.66, "end": 2936.7799999999997, "text": " base that I want to dig into and kind of go through with you."}, {"start": 2936.7799999999997, "end": 2940.82, "text": " The code base is fairly big so you can kind of do that in your own time."}, {"start": 2940.82, "end": 2943.7799999999997, "text": " But I'm going to focus on the most salient parts of the code base."}, {"start": 2943.7799999999997, "end": 2944.7799999999997, "text": " OK."}, {"start": 2944.7799999999997, "end": 2949.42, "text": " So the first one being the construction of the clip model."}, {"start": 2949.42, "end": 2950.94, "text": " So context length."}, {"start": 2950.94, "end": 2952.1, "text": " That's a 77."}, {"start": 2952.1, "end": 2954.1, "text": " That's a number we've been seeing all around."}, {"start": 2954.1, "end": 2959.06, "text": " And that's because that's a parameter that's passed during the construction of the"}, {"start": 2959.06, "end": 2960.06, "text": " clip model."}, {"start": 2960.06, "end": 2961.54, "text": " So let's see what's going on here."}, {"start": 2961.54, "end": 2966.54, "text": " So we're going to form the ViT model here."}, {"start": 2966.54, "end": 2968.7599999999998, "text": " So I'll hit the ViT construction right now."}, {"start": 2968.7599999999998, "end": 2970.9, "text": " So we have image net."}, {"start": 2970.9, "end": 2978.46, "text": " The image resolution is 224 and the internal vectors will be 768."}, {"start": 2978.46, "end": 2987.12, "text": " We'll have 12 layers 12 heads and the embedding dimension at the output is 512."}, {"start": 2987.12, "end": 2989.98, "text": " So even though the inner dimensions are a bit larger."}, {"start": 2989.98, "end": 2993.3, "text": " So that's here 768."}, {"start": 2993.3, "end": 2996.18, "text": " The output as you can recall is 512."}, {"start": 2996.18, "end": 2997.7400000000002, "text": " We saw that in the Jupyter."}, {"start": 2997.7400000000002, "end": 3007.18, "text": " So let me click F5 here and go to the vision transformer basically init function."}, {"start": 3007.18, "end": 3009.68, "text": " So we're just going to store some of these dimensions."}, {"start": 3009.68, "end": 3012.14, "text": " So this is everything that vision transformer needs."}, {"start": 3012.14, "end": 3016.44, "text": " This is the conv layer that you have, this is the patchify layer basically"}, {"start": 3016.44, "end": 3018.3999999999996, "text": " of your vision transformer."}, {"start": 3018.3999999999996, "end": 3020.58, "text": " As you can see here the patch size is 32."}, {"start": 3020.58, "end": 3027.46, "text": " So we set the kernel size to 32 and we set a stride the same exact width."}, {"start": 3027.46, "end": 3033.66, "text": " And the reason is because we want to take non overlapping patches of the image and then"}, {"start": 3033.66, "end": 3039.62, "text": " embed them from the initial dimension of three into 768."}, {"start": 3039.62, "end": 3044.06, "text": " So let me quickly go through that in the OneNote maybe it will help you understand what's"}, {"start": 3044.06, "end": 3045.94, "text": " going on."}, {"start": 3045.94, "end": 3049.3399999999997, "text": " Again what is going on is the following."}, {"start": 3049.3399999999997, "end": 3053.8599999999997, "text": " So if you 
have an image here."}, {"start": 3053.8599999999997, "end": 3055.2599999999998, "text": " This is our input image."}, {"start": 3055.2599999999998, "end": 3057.1, "text": " We have three channels."}, {"start": 3057.1, "end": 3065.54, "text": " It's going to be three dimensions here so this is three and then this first conv layer"}, {"start": 3065.54, "end": 3066.54, "text": " is doing the following."}, {"start": 3066.54, "end": 3074.02, "text": " It's going to take the kernel put it here and map this whole volume into a vector into"}, {"start": 3074.02, "end": 3080.66, "text": " a corresponding vector that's going to be of dimension 768."}, {"start": 3080.66, "end": 3082.9, "text": " Then we're going to move here."}, {"start": 3082.9, "end": 3085.38, "text": " So let me change the color."}, {"start": 3085.38, "end": 3091.38, "text": " We'll then move here and then embed that into additional vector here 768."}, {"start": 3091.38, "end": 3096.2200000000003, "text": " And we're going to repeat the whole procedure until we hit the end of the image."}, {"start": 3096.2200000000003, "end": 3097.58, "text": " So now we have here etc. etc."}, {"start": 3097.58, "end": 3103.5, "text": " You do that for the whole image and you end up with what you end up with whatever the"}, {"start": 3103.5, "end": 3106.5, "text": " number of patches is and it's going to be 224."}, {"start": 3106.5, "end": 3109.82, "text": " We divide that by what by 32."}, {"start": 3109.82, "end": 3112.3, "text": " That's going to give us I think 16."}, {"start": 3112.3, "end": 3117.46, "text": " So we're going to end up with 16 by 16 of these 768 vectors."}, {"start": 3117.46, "end": 3125.94, "text": " So we're going to end up with like 16, 16, 768 volumes here."}, {"start": 3125.94, "end": 3129.34, "text": " So let's kind of verify that hypothesis."}, {"start": 3129.34, "end": 3132.7000000000003, "text": " Well we'll get to that a bit later but that's what this layer is doing."}, {"start": 3132.7000000000003, "end": 3134.98, "text": " Hopefully this helped."}, {"start": 3134.98, "end": 3135.98, "text": " Let's continue."}, {"start": 3135.98, "end": 3140.7400000000002, "text": " So if I hit f10 we just form this scale."}, {"start": 3140.74, "end": 3143.66, "text": " This is just for normalization."}, {"start": 3143.66, "end": 3146.3399999999997, "text": " Basically what we do here is we form the CLS embedding."}, {"start": 3146.3399999999997, "end": 3149.9399999999996, "text": " So that's the CLS token of our vision transformer."}, {"start": 3149.9399999999996, "end": 3154.4399999999996, "text": " That's something that BERT is also using and many other transformer models."}, {"start": 3154.4399999999996, "end": 3160.9399999999996, "text": " And we are basically going to it's going to be a 768 dimensional vector."}, {"start": 3160.9399999999996, "end": 3163.6, "text": " We form the positional embeddings."}, {"start": 3163.6, "end": 3165.7799999999997, "text": " It's going to be of the size."}, {"start": 3165.7799999999997, "end": 3168.6, "text": " So basically 16 squared."}, {"start": 3168.6, "end": 3170.74, "text": " That's the number of tokens we'll have."}, {"start": 3170.74, "end": 3173.66, "text": " So that's like 256 right?"}, {"start": 3173.66, "end": 3175.86, "text": " And plus one because we have CLS."}, {"start": 3175.86, "end": 3181.12, "text": " And then the width is 768 because we want to basically add those positional encodings"}, {"start": 3181.12, "end": 3185.7999999999997, "text": " on top of each of the 
embedding vectors."}, {"start": 3185.7999999999997, "end": 3191.8199999999997, "text": " So if you're confused by the means is basically let me again go back here."}, {"start": 3191.8199999999997, "end": 3197.38, "text": " So what it does is it's going to construct it's going to construct embedding table here."}, {"start": 3197.38, "end": 3200.94, "text": " So we'll have an embedding table here."}, {"start": 3200.94, "end": 3209.54, "text": " The dimensionality will be again 16 times 16 plus one for all of these vectors here."}, {"start": 3209.54, "end": 3213.7200000000003, "text": " So let me kind of try and draw them some somehow here."}, {"start": 3213.7200000000003, "end": 3219.5, "text": " So this is going to be our output volume 16 16."}, {"start": 3219.5, "end": 3224.42, "text": " And this is going to be this dimensionality here 768."}, {"start": 3224.42, "end": 3225.7000000000003, "text": " Okay."}, {"start": 3225.7, "end": 3229.1, "text": " And we'll have corresponding embedding vectors."}, {"start": 3229.1, "end": 3234.14, "text": " And now we'll basically be learning these and then adding them on top of these ones."}, {"start": 3234.14, "end": 3240.54, "text": " So basically what will happen is that this vector here, this position vector here will"}, {"start": 3240.54, "end": 3242.54, "text": " always be added to this vector here."}, {"start": 3242.54, "end": 3243.9399999999996, "text": " So to the first vector here."}, {"start": 3243.9399999999996, "end": 3247.46, "text": " So we take this one and we combine it."}, {"start": 3247.46, "end": 3250.1, "text": " We'll sum it up."}, {"start": 3250.1, "end": 3253.4199999999996, "text": " We'll do the summation between those two."}, {"start": 3253.42, "end": 3256.02, "text": " And that's going to happen in the forward pass of the VAT."}, {"start": 3256.02, "end": 3257.02, "text": " Okay."}, {"start": 3257.02, "end": 3258.98, "text": " Let's go back to the code."}, {"start": 3258.98, "end": 3260.2200000000003, "text": " That's that."}, {"start": 3260.2200000000003, "end": 3261.54, "text": " We have some layer norms."}, {"start": 3261.54, "end": 3263.06, "text": " We form some transformers."}, {"start": 3263.06, "end": 3269.58, "text": " And then yeah, so because transformer will also be used in this second, I dubbed it as"}, {"start": 3269.58, "end": 3271.04, "text": " a textual pathway."}, {"start": 3271.04, "end": 3275.1, "text": " Not sure whether anyone else is using that terminology, but like let me kind of just"}, {"start": 3275.1, "end": 3277.2000000000003, "text": " quickly scheme over what transformer is doing."}, {"start": 3277.2000000000003, "end": 3282.34, "text": " So if I click this, we'll get to the transformer logic."}, {"start": 3282.34, "end": 3285.78, "text": " So what transformer does is fairly simple."}, {"start": 3285.78, "end": 3289.54, "text": " We just specify the width, which is going to be also 768."}, {"start": 3289.54, "end": 3290.9, "text": " We have number of layers."}, {"start": 3290.9, "end": 3292.48, "text": " We just form a sequential layer."}, {"start": 3292.48, "end": 3296.02, "text": " So we're going to form these residual attention blocks."}, {"start": 3296.02, "end": 3300.46, "text": " And what they do is, I mean, your regular transformer logic."}, {"start": 3300.46, "end": 3302.58, "text": " So let me show you what it does."}, {"start": 3302.58, "end": 3305.6200000000003, "text": " We have the multi-head attention head."}, {"start": 3305.6200000000003, "end": 3308.6000000000004, "text": " Wait, 
multi-head attention layer."}, {"start": 3308.6000000000004, "end": 3310.7000000000003, "text": " And then we have the layer norm."}, {"start": 3310.7000000000003, "end": 3311.7000000000003, "text": " And then we have the MLP."}, {"start": 3311.7, "end": 3315.8999999999996, "text": " So we have the linear, followed by GELU, followed by linear."}, {"start": 3315.8999999999996, "end": 3320.62, "text": " And you can see this is your regular expansion ratio of four."}, {"start": 3320.62, "end": 3325.74, "text": " The innermost part of that MLP in that particular transformer layer is usually"}, {"start": 3325.74, "end": 3329.02, "text": " 4x bigger than the input part."}, {"start": 3329.02, "end": 3330.02, "text": " Okay?"}, {"start": 3330.02, "end": 3335.58, "text": " And so then we construct the second LN layer and we optionally have an attention mask in case"}, {"start": 3335.58, "end": 3338.8199999999997, "text": " we want to have a causal transformer model."}, {"start": 3338.82, "end": 3344.7000000000003, "text": " So the vision transformer will be using a non-causal transformer model because you want to attend"}, {"start": 3344.7000000000003, "end": 3348.9, "text": " each token, each patch needs to attend to each other patch."}, {"start": 3348.9, "end": 3353.5, "text": " And in the case of a textual transformer, we'll be using a causal one."}, {"start": 3353.5, "end": 3355.1800000000003, "text": " So that's why we have this attention part."}, {"start": 3355.1800000000003, "end": 3364.42, "text": " And now what the forward pass of the transformer does is basically applies the attention."}, {"start": 3364.42, "end": 3369.54, "text": " So sorry, this is just a particular layer of the transformer, not the whole transformer."}, {"start": 3369.54, "end": 3376.9, "text": " So it does the attention and then just basically, okay, of course, layer norms, attention, residual"}, {"start": 3376.9, "end": 3379.54, "text": " connections, and then there is the MLP part."}, {"start": 3379.54, "end": 3382.02, "text": " So that's your transformer logic here."}, {"start": 3382.02, "end": 3388.86, "text": " And finally, we just kind of do a forward pass through all of the 12 of these transformer"}, {"start": 3388.86, "end": 3389.86, "text": " layers."}, {"start": 3389.86, "end": 3390.86, "text": " And that's your transformer model."}, {"start": 3390.86, "end": 3391.86, "text": " Okay?"}, {"start": 3391.86, "end": 3395.98, "text": " So that's a quick skim, like a quick skim over what's going on there."}, {"start": 3395.98, "end": 3399.7400000000002, "text": " Anyways, let's proceed here."}, {"start": 3399.7400000000002, "end": 3402.78, "text": " I'm going to ignore all of this and we are back to vision transformer."}, {"start": 3402.78, "end": 3404.82, "text": " We form all of this."}, {"start": 3404.82, "end": 3409.38, "text": " And finally, we have the projection layer that's going to project from 768, which is"}, {"start": 3409.38, "end": 3413.7400000000002, "text": " the inner model dimension into our desired output dimension, which is 512."}, {"start": 3413.7400000000002, "end": 3414.7400000000002, "text": " That's it."}, {"start": 3414.7400000000002, "end": 3419.1, "text": " I'm going to now hit run and get to our next step."}, {"start": 3419.1, "end": 3423.58, "text": " So this is another point I wanted to make."}, {"start": 3423.58, "end": 3430.14, "text": " Basically, this is the attention mask that's going to be formed for the causal transformers."}, {"start": 3430.14, "end": 3436.42, "text": 
" If I hit step over and if I print the mask here, let's see what we get."}, {"start": 3436.42, "end": 3442.54, "text": " So as you can see here is that your classical mask that's used to create a causal attention"}, {"start": 3442.54, "end": 3443.54, "text": " pattern."}, {"start": 3443.54, "end": 3444.54, "text": " Okay?"}, {"start": 3444.54, "end": 3445.54, "text": " Let's continue."}, {"start": 3445.54, "end": 3446.54, "text": " Okay."}, {"start": 3446.54, "end": 3451.9, "text": " So we're back in the clip in this function before we created the visual parts."}, {"start": 3451.9, "end": 3453.34, "text": " That's the vision transformer."}, {"start": 3453.34, "end": 3457.46, "text": " And now we just create the transformer here."}, {"start": 3457.46, "end": 3463.46, "text": " So as you see here, this particular transformer is going to have the build attention mask."}, {"start": 3463.46, "end": 3467.98, "text": " It's going to be a causal one, whereas we do not pass the same argument to the vision"}, {"start": 3467.98, "end": 3468.98, "text": " transformer."}, {"start": 3468.98, "end": 3469.98, "text": " So that one's going to be non-causal."}, {"start": 3469.98, "end": 3471.1, "text": " Okay?"}, {"start": 3471.1, "end": 3472.38, "text": " And then we have the vocab size."}, {"start": 3472.38, "end": 3473.7, "text": " So that's the token."}, {"start": 3473.7, "end": 3478.9399999999996, "text": " So we saw that this is the BPE vocab we just created."}, {"start": 3478.9399999999996, "end": 3485.5, "text": " And we're going to obviously create a token embedding table, which is going to be used"}, {"start": 3485.5, "end": 3493.74, "text": " by the transformer, by the textual pathway to basically embed the tokenized text."}, {"start": 3493.74, "end": 3494.7799999999997, "text": " Okay?"}, {"start": 3494.7799999999997, "end": 3500.7999999999997, "text": " And we'll also have a positional embedding also for the transformer."}, {"start": 3500.8, "end": 3504.6400000000003, "text": " And that's going to be 77 transformer width is 5."}, {"start": 3504.6400000000003, "end": 3505.6400000000003, "text": " Okay."}, {"start": 3505.6400000000003, "end": 3508.9, "text": " So that's going to be the positional embeddings."}, {"start": 3508.9, "end": 3512.94, "text": " So again, it may be quickly worth going through that again here."}, {"start": 3512.94, "end": 3513.94, "text": " So that was vision transformer."}, {"start": 3513.94, "end": 3516.78, "text": " Now let me show you how this thing will work."}, {"start": 3516.78, "end": 3518.0600000000004, "text": " So we have vocab size."}, {"start": 3518.0600000000004, "end": 3520.44, "text": " We'll have a table."}, {"start": 3520.44, "end": 3523.3, "text": " So we'll have a table here."}, {"start": 3523.3, "end": 3529.78, "text": " That's going to be whatever the vocab size is, which is like 49k or something."}, {"start": 3529.78, "end": 3533.7400000000002, "text": " And we'll have this size being, so I forgot what it is."}, {"start": 3533.7400000000002, "end": 3535.78, "text": " Let me check what exactly that part is."}, {"start": 3535.78, "end": 3537.38, "text": " It's going to be transformer width."}, {"start": 3537.38, "end": 3538.38, "text": " Okay."}, {"start": 3538.38, "end": 3539.38, "text": " So it's going to be 512."}, {"start": 3539.38, "end": 3540.38, "text": " Okay."}, {"start": 3540.38, "end": 3541.38, "text": " Let's go back here."}, {"start": 3541.38, "end": 3544.1000000000004, "text": " That thing is going to be 512."}, {"start": 
3544.1000000000004, "end": 3548.26, "text": " And then we have additional table that's going to be like context length, which is going"}, {"start": 3548.26, "end": 3550.0600000000004, "text": " to be 77."}, {"start": 3550.0600000000004, "end": 3552.7400000000002, "text": " And then also it's going to be 512."}, {"start": 3552.7400000000002, "end": 3559.1400000000003, "text": " So those are going to be, this is the, basically the table we use to embed our tokens."}, {"start": 3559.14, "end": 3565.2599999999998, "text": " And this is the positional, the positional encoding table."}, {"start": 3565.2599999999998, "end": 3566.74, "text": " So what it means in practice is the following."}, {"start": 3566.74, "end": 3568.94, "text": " So imagine you have a piece of text."}, {"start": 3568.94, "end": 3574.3399999999997, "text": " You take the text, you tokenize it, you convert it into a list of integers."}, {"start": 3574.3399999999997, "end": 3579.7, "text": " So let's imagine you started with like a hello world."}, {"start": 3579.7, "end": 3583.3399999999997, "text": " And then what you do is you first tokenize it."}, {"start": 3583.3399999999997, "end": 3586.94, "text": " So that means it's going to be somehow split."}, {"start": 3586.94, "end": 3588.2799999999997, "text": " We'll see that logic a bit later."}, {"start": 3588.28, "end": 3590.6200000000003, "text": " Maybe some like here, here, here."}, {"start": 3590.6200000000003, "end": 3592.6000000000004, "text": " And then we're going to somehow map it into integer."}, {"start": 3592.6000000000004, "end": 3596.02, "text": " So all of those tokens will have corresponding integers."}, {"start": 3596.02, "end": 3598.78, "text": " And then that means we're going to end up with something like this."}, {"start": 3598.78, "end": 3608.1400000000003, "text": " We'll end up with like, I don't know, like 23, 47, 56, 37."}, {"start": 3608.1400000000003, "end": 3616.42, "text": " And we now use this to index this embedding table to get the associated vector."}, {"start": 3616.42, "end": 3625.86, "text": " So 23 means, hey, let's grab token, let's grab the embedding vector that has index 23."}, {"start": 3625.86, "end": 3629.38, "text": " And that's going to be your representation for this particular token."}, {"start": 3629.38, "end": 3630.38, "text": " Okay."}, {"start": 3630.38, "end": 3633.06, "text": " Then we say, let's take this 47."}, {"start": 3633.06, "end": 3639.06, "text": " Let's take the, this token here and let's find its corresponding embedding vector."}, {"start": 3639.06, "end": 3640.3, "text": " So that's this one."}, {"start": 3640.3, "end": 3643.14, "text": " And then we put it here, et cetera, et cetera."}, {"start": 3643.14, "end": 3647.7, "text": " And then once we form this and remember we're going to pad this with all zeros."}, {"start": 3647.7, "end": 3650.9, "text": " So this is going to have bunch of zeros here."}, {"start": 3650.9, "end": 3657.2599999999998, "text": " Zero, zero, zero, zeros, because we want to have the size of this is going to be 77."}, {"start": 3657.2599999999998, "end": 3658.8199999999997, "text": " Whoops."}, {"start": 3658.8199999999997, "end": 3661.9, "text": " Sorry, let me redraw that."}, {"start": 3661.9, "end": 3665.3399999999997, "text": " That was some terrible drawing."}, {"start": 3665.3399999999997, "end": 3671.2599999999998, "text": " So if I do this, basically this whole thing is going to be 77."}, {"start": 3671.2599999999998, "end": 3673.02, "text": " And now we just add."}, {"start": 
3673.02, "end": 3677.82, "text": " So now we're going to add these positional embeddings on top of these vectors"}, {"start": 3677.82, "end": 3678.82, "text": " here."}, {"start": 3678.82, "end": 3682.1, "text": " That's how this whole, this is how all of these structures come together."}, {"start": 3682.1, "end": 3683.54, "text": " Hopefully that kind of clarifies it."}, {"start": 3683.54, "end": 3684.54, "text": " Okay."}, {"start": 3684.54, "end": 3689.88, "text": " Let's go back to the text, to the code."}, {"start": 3689.88, "end": 3692.94, "text": " We formed the positional embedding table."}, {"start": 3692.94, "end": 3694.94, "text": " We formed the layer norm."}, {"start": 3694.94, "end": 3699.62, "text": " We finally have the text projection, which is going to be used to transform from the"}, {"start": 3699.62, "end": 3703.2599999999998, "text": " transformer width into the embedding width, which happens to have the same dimensionality."}, {"start": 3703.2599999999998, "end": 3709.2599999999998, "text": " So this transformer is actually using internally 512, not 768 as this vision transformer."}, {"start": 3709.2599999999998, "end": 3710.7799999999997, "text": " Okay."}, {"start": 3710.7799999999997, "end": 3712.22, "text": " And there is this logit scale."}, {"start": 3712.22, "end": 3717.9, "text": " I'm not sure why these exact numbers, it kind of seems like, yeah, magic numbers, but yeah."}, {"start": 3717.9, "end": 3720.2999999999997, "text": " And then just some initialization and that's it."}, {"start": 3720.2999999999997, "end": 3721.2999999999997, "text": " Okay."}, {"start": 3721.2999999999997, "end": 3722.48, "text": " Let me click F5."}, {"start": 3722.48, "end": 3726.22, "text": " Let's get to this next salient break point."}, {"start": 3726.22, "end": 3727.22, "text": " And that's here."}, {"start": 3727.22, "end": 3728.22, "text": " Okay."}, {"start": 3728.22, "end": 3732.66, "text": " So we made the image, which is going to be just this clip image, just some random diagram."}, {"start": 3732.66, "end": 3739.06, "text": " We're going to do the pre-processing and basically we end up with a batch of size one."}, {"start": 3739.06, "end": 3746.22, "text": " So if I were to print the image shape, it's going to be probably, okay, before we even"}, {"start": 3746.22, "end": 3748.14, "text": " do it, let's kind of do the predictions."}, {"start": 3748.14, "end": 3752.7799999999997, "text": " It's going to be one, I guess, three, two, two, four, two, two, four, because this is"}, {"start": 3752.7799999999997, "end": 3754.4199999999996, "text": " the format that PyTorch usually does."}, {"start": 3754.42, "end": 3759.02, "text": " So you have this channel dimension here instead of here."}, {"start": 3759.02, "end": 3762.9, "text": " And if I print the shape, we're going to get, yeah, exactly that."}, {"start": 3762.9, "end": 3763.9, "text": " Okay."}, {"start": 3763.9, "end": 3769.1800000000003, "text": " So next up, we need to do the tokenization of these captions here."}, {"start": 3769.1800000000003, "end": 3771.9, "text": " So a diagram, a dog, and a cat."}, {"start": 3771.9, "end": 3774.54, "text": " And let me quickly recap what we've seen so far."}, {"start": 3774.54, "end": 3777.32, "text": " I showed you how the clip is constructed."}, {"start": 3777.32, "end": 3784.94, "text": " We saw how the vision transformer and the transformer models are instantiated and created."}, {"start": 3784.94, "end": 3787.54, "text": " We've seen the image pre-processing 
here."}, {"start": 3787.54, "end": 3792.98, "text": " And finally, let's kind of jump into this part here with the tokenization."}, {"start": 3792.98, "end": 3795.0800000000004, "text": " So let me click F5."}, {"start": 3795.0800000000004, "end": 3799.02, "text": " So we pass our text here and we need to tokenize them."}, {"start": 3799.02, "end": 3805.38, "text": " And when I say tokenize, we literally need to transform to basically split these captions"}, {"start": 3805.38, "end": 3813.1400000000003, "text": " into sub words and then map those into corresponding integers, depending on our vocab mapping."}, {"start": 3813.1400000000003, "end": 3814.1400000000003, "text": " Okay."}, {"start": 3814.1400000000003, "end": 3816.98, "text": " So let me kind of go through this."}, {"start": 3816.98, "end": 3825.5, "text": " First of all, we want to tokenize these special tokens because as you can see here, whatever"}, {"start": 3825.5, "end": 3827.54, "text": " the text is, we'll be trading through our captions."}, {"start": 3827.54, "end": 3828.6600000000003, "text": " We're going to encode them."}, {"start": 3828.6600000000003, "end": 3831.7000000000003, "text": " So this is going to give us a list of integers."}, {"start": 3831.7, "end": 3837.9399999999996, "text": " And then we basically put the start of text token in the beginning and end of text token"}, {"start": 3837.9399999999996, "end": 3839.6, "text": " at the end."}, {"start": 3839.6, "end": 3843.66, "text": " And now let's kind of jump into the actual encode function."}, {"start": 3843.66, "end": 3847.62, "text": " What it does is, well, fairly complicated."}, {"start": 3847.62, "end": 3854.1, "text": " I have to say the tokenization part, even though it's always abstracted away from us"}, {"start": 3854.1, "end": 3859.3399999999997, "text": " when you read the papers, et cetera, et cetera, I found this to be the hardest part of this"}, {"start": 3859.34, "end": 3863.58, "text": " whole code base for understanding."}, {"start": 3863.58, "end": 3866.94, "text": " And yeah, just like the level of details is amazing."}, {"start": 3866.94, "end": 3870.82, "text": " Having said that, let's go through this."}, {"start": 3870.82, "end": 3871.82, "text": " Okay."}, {"start": 3871.82, "end": 3874.1800000000003, "text": " So this line is basically going to clean the text."}, {"start": 3874.1800000000003, "end": 3877.1800000000003, "text": " I won't go into the details of the actual cleaning."}, {"start": 3877.1800000000003, "end": 3880.06, "text": " They're using some external libraries to do that."}, {"start": 3880.06, "end": 3883.3, "text": " And then they just lowercase the whole caption."}, {"start": 3883.3, "end": 3886.06, "text": " And that's the whole processing that's going on here."}, {"start": 3886.06, "end": 3889.02, "text": " Next up, they are using this pattern."}, {"start": 3889.02, "end": 3896.02, "text": " So this regex expression they compiled in the tokenizer to find the next piece of text"}, {"start": 3896.02, "end": 3900.42, "text": " from the caption, which I don't fully understand."}, {"start": 3900.42, "end": 3904.54, "text": " If somebody understands the exact details of why this particular..."}, {"start": 3904.54, "end": 3906.18, "text": " So let me show you the pattern again."}, {"start": 3906.18, "end": 3912.78, "text": " So here's the pattern, fairly complicated, not sure why all of these details."}, {"start": 3912.78, "end": 3915.22, "text": " There is no comments, so it's kind of dark magic."}, {"start": 
3915.22, "end": 3916.82, "text": " But yeah, we'll have to roll with it."}, {"start": 3916.82, "end": 3919.54, "text": " We can kind of treat it as a black box for now."}, {"start": 3919.54, "end": 3920.6200000000003, "text": " Okay."}, {"start": 3920.6200000000003, "end": 3929.6200000000003, "text": " So what happens here is the token is encoded using the UTF-8 encoding, which for this simple..."}, {"start": 3929.6200000000003, "end": 3935.2400000000002, "text": " So currently token is this small letter A, which is going to be encoded as a single byte."}, {"start": 3935.2400000000002, "end": 3941.2200000000003, "text": " But some of the characters might be encoded into two, three, or four bytes."}, {"start": 3941.2200000000003, "end": 3942.7000000000003, "text": " That's how UTF-8 works."}, {"start": 3942.7, "end": 3947.14, "text": " Well, I'm not sure about the three, whether there are characters that have three bytes."}, {"start": 3947.14, "end": 3949.5, "text": " Although, yeah, I'm fairly sure that's the case."}, {"start": 3949.5, "end": 3953.8199999999997, "text": " I'm going to show you a bit later when you use emoji, this is going to give us four bytes,"}, {"start": 3953.8199999999997, "end": 3955.3799999999997, "text": " the encoding into UTF-8."}, {"start": 3955.3799999999997, "end": 3958.7799999999997, "text": " But for this simple example, it's going to be a single byte."}, {"start": 3958.7799999999997, "end": 3964.18, "text": " And then we use this dictionary byte encoder to fetch the corresponding..."}, {"start": 3964.18, "end": 3966.9399999999996, "text": " Basically to catch the corresponding Unicode character."}, {"start": 3966.9399999999996, "end": 3970.3799999999997, "text": " So if I do F10 here, we're going to end up with this..."}, {"start": 3970.38, "end": 3973.9, "text": " Again, this is identity mapping for this particular token."}, {"start": 3973.9, "end": 3979.06, "text": " We'll see later that it's going to be a bit more complicated for some other Unicode characters."}, {"start": 3979.06, "end": 3982.1800000000003, "text": " So what happens now is the BPE magic."}, {"start": 3982.1800000000003, "end": 3985.1, "text": " It's going to basically give us back..."}, {"start": 3985.1, "end": 3986.1, "text": " It's going to..."}, {"start": 3986.1, "end": 3990.5, "text": " In the general case, it's going to split this token into BPE tokens."}, {"start": 3990.5, "end": 3995.04, "text": " In this case, it's going to be small a, again, just trivial."}, {"start": 3995.04, "end": 3998.1400000000003, "text": " And then we're going to encode that using the encoder."}, {"start": 3998.14, "end": 4001.74, "text": " So again, remember encoder just contains our vocabulary."}, {"start": 4001.74, "end": 4010.7799999999997, "text": " The keys are the tokens and the values are the associated IDs, I guess."}, {"start": 4010.7799999999997, "end": 4015.42, "text": " If I click F10 here, we just end up with BPE tokens being..."}, {"start": 4015.42, "end": 4019.14, "text": " As you can see here, we just encode 320."}, {"start": 4019.14, "end": 4023.2999999999997, "text": " That's going to be the ID that corresponds to small letter a."}, {"start": 4023.2999999999997, "end": 4027.42, "text": " If I continue the loop here, we get diagram."}, {"start": 4027.42, "end": 4030.98, "text": " Again, this is going to be encoded."}, {"start": 4030.98, "end": 4034.7000000000003, "text": " And finally, we're going to end up with, I think..."}, {"start": 4034.7000000000003, "end": 4036.58, "text": " Yeah, 
again, just a single..."}, {"start": 4036.58, "end": 4041.58, "text": " There is a single token that corresponds to diagram in our vocabulary."}, {"start": 4041.58, "end": 4042.62, "text": " And that's going to be it."}, {"start": 4042.62, "end": 4049.02, "text": " So now we basically repeat this whole procedure for the other captions."}, {"start": 4049.02, "end": 4053.66, "text": " And let me kind of skip this and click F5."}, {"start": 4053.66, "end": 4056.1800000000003, "text": " So let me go over this."}, {"start": 4056.1800000000003, "end": 4057.1800000000003, "text": " And here we are."}, {"start": 4057.18, "end": 4060.58, "text": " So we end up with all tokens being these realists."}, {"start": 4060.58, "end": 4066.18, "text": " So as you can see, it's always a beginning and ending with a start and end of text tokens."}, {"start": 4066.18, "end": 4072.74, "text": " And then in between is the tokenization that we get from our BPE tokenizer."}, {"start": 4072.74, "end": 4076.3399999999997, "text": " Okay, let's continue here."}, {"start": 4076.3399999999997, "end": 4080.46, "text": " So again, if you didn't quite understand everything that's going on here, don't worry because"}, {"start": 4080.46, "end": 4083.4199999999996, "text": " I did not also understand every single detail, to be honest."}, {"start": 4083.4199999999996, "end": 4085.3999999999996, "text": " Okay, let's go on here."}, {"start": 4085.4, "end": 4092.02, "text": " So what we do is we just pre-allocate a tensor of a corresponding..."}, {"start": 4092.02, "end": 4095.14, "text": " Of a following shape."}, {"start": 4095.14, "end": 4100.64, "text": " Basically we want to have the context length, 77 here, and then we want to have whatever"}, {"start": 4100.64, "end": 4104.9400000000005, "text": " the number of captions is along the zeroth axis."}, {"start": 4104.9400000000005, "end": 4110.3, "text": " So we end up with a tensor that's like, I guess, 377."}, {"start": 4110.3, "end": 4113.5, "text": " Okay, let me go click F10."}, {"start": 4113.5, "end": 4120.86, "text": " So just for sanity here, the shape is going to be 377."}, {"start": 4120.86, "end": 4122.1, "text": " And here we're just going to fill it in."}, {"start": 4122.1, "end": 4128.32, "text": " Basically we're just going to fill in the tokens that we got from the encoding here,"}, {"start": 4128.32, "end": 4131.3, "text": " and we're going to fill into this tensor."}, {"start": 4131.3, "end": 4136.42, "text": " So I'm going to skip over all of these parts, and let me show you how the resulting tensor"}, {"start": 4136.42, "end": 4137.42, "text": " looks like."}, {"start": 4137.42, "end": 4144.86, "text": " Basically, as you can see here, it begins with these IDs for our tokens, and then it's"}, {"start": 4144.86, "end": 4151.9, "text": " just like padded with zeros until we have the full 77 tokens worth of context."}, {"start": 4151.9, "end": 4152.9, "text": " That's it."}, {"start": 4152.9, "end": 4155.34, "text": " Let me step over here and let's continue."}, {"start": 4155.34, "end": 4156.34, "text": " So that was a tokenization."}, {"start": 4156.34, "end": 4159.46, "text": " We ended up with a tensor here."}, {"start": 4159.46, "end": 4163.06, "text": " Again, the shape is 377."}, {"start": 4163.06, "end": 4165.1, "text": " We have an image."}, {"start": 4165.1, "end": 4166.46, "text": " The image is like this."}, {"start": 4166.46, "end": 4169.78, "text": " And now we want to encode the image and we want to encode the text."}, {"start": 4169.78, 
"end": 4175.46, "text": " And that means passing them through the visual transformer and through the transformer respectively."}, {"start": 4175.46, "end": 4178.94, "text": " So let me show you how that is going to work."}, {"start": 4178.94, "end": 4180.94, "text": " Here I think we're going to go through the..."}, {"start": 4180.94, "end": 4184.06, "text": " So visual is again a vision transformer."}, {"start": 4184.06, "end": 4187.34, "text": " There is some type of conversion going on here."}, {"start": 4187.34, "end": 4192.22, "text": " Basically we want to make sure that whatever the weights of our com1, so that's the patchify"}, {"start": 4192.22, "end": 4196.820000000001, "text": " there, whatever the type of that thing is, we want to convert our image into the same"}, {"start": 4196.820000000001, "end": 4197.820000000001, "text": " type."}, {"start": 4197.820000000001, "end": 4200.9800000000005, "text": " So whether it's float32 or float16."}, {"start": 4200.9800000000005, "end": 4204.740000000001, "text": " In this case, I'm fairly sure it's going to be float32."}, {"start": 4204.740000000001, "end": 4206.46, "text": " Not that important, but let me just kind of see."}, {"start": 4206.46, "end": 4207.820000000001, "text": " Okay, it's actually float16."}, {"start": 4207.820000000001, "end": 4216.58, "text": " Yeah, because they have during the model loading, they have this conversion into FP16 function"}, {"start": 4216.58, "end": 4217.58, "text": " being called."}, {"start": 4217.58, "end": 4218.820000000001, "text": " So that explains it."}, {"start": 4218.820000000001, "end": 4222.14, "text": " Anyway, so let me go to the forward pass."}, {"start": 4222.14, "end": 4228.34, "text": " Again, what's going to happen here is the logic I explained in my OneNote."}, {"start": 4228.34, "end": 4230.62, "text": " Let me remind you if you forgot about that."}, {"start": 4230.62, "end": 4232.820000000001, "text": " So this is the logic that's going to happen."}, {"start": 4232.820000000001, "end": 4236.5, "text": " Basically we're going to patchify and extract these vectors and we're going to end up with"}, {"start": 4236.5, "end": 4239.42, "text": " like I think 16, 16, 7, 68."}, {"start": 4239.42, "end": 4241.700000000001, "text": " That should be the tensor we end up with."}, {"start": 4241.700000000001, "end": 4250.14, "text": " So if I click F10, we're going to patchify our tensor and let's see what the shape is."}, {"start": 4250.14, "end": 4253.62, "text": " Or maybe we'll have additionally prepended one because we have a batch of one."}, {"start": 4253.62, "end": 4260.06, "text": " So yeah, so it's basically one, whoops, it's one, 7, 68, 7, 7."}, {"start": 4260.06, "end": 4266.18, "text": " So there is some basically permutation going on, but we kind of got the shape right."}, {"start": 4266.18, "end": 4269.9400000000005, "text": " Okay, so now there is the reshaping going on here."}, {"start": 4269.9400000000005, "end": 4274.820000000001, "text": " We're going to, let me just kind of step through all of this, not explain all of the details,"}, {"start": 4274.820000000001, "end": 4277.62, "text": " just some permutation and reshape going on."}, {"start": 4277.62, "end": 4281.38, "text": " So we now just flatten the 7, 7 so we end up with all of the tokens being flattened"}, {"start": 4281.38, "end": 4282.38, "text": " out."}, {"start": 4282.38, "end": 4284.66, "text": " So this is the batch size, the number of tokens."}, {"start": 4284.66, "end": 4287.46, "text": " I don't know 
why it's expanding this constantly."}, {"start": 4287.46, "end": 4293.82, "text": " Okay, so batch size, number of tokens, and finally the size of the embedding vector."}, {"start": 4293.82, "end": 4294.82, "text": " Cool."}, {"start": 4294.82, "end": 4296.22, "text": " So far so good."}, {"start": 4296.22, "end": 4302.38, "text": " What we do here is we now take the class token, which is a learnable embedding vector, and"}, {"start": 4302.38, "end": 4310.1, "text": " we are going to basically concatenate it with this vector X here."}, {"start": 4310.1, "end": 4313.06, "text": " Okay, so let me show you what I mean by that."}, {"start": 4313.06, "end": 4320.1, "text": " So if I take the self class embedding, this is going to be just, the shape should be 1,"}, {"start": 4320.1, "end": 4322.14, "text": " 7, 6, 8, I think."}, {"start": 4322.14, "end": 4327.38, "text": " So the shape of this is, the shape is actually just 7, 6, 8."}, {"start": 4327.38, "end": 4334.1, "text": " And then once we sum it up with zeros, with a zero vector here, this is going to convert"}, {"start": 4334.1, "end": 4341.62, "text": " it into whatever the batch size is, and then 1, and then we'll end up with 7, 6, 8 here"}, {"start": 4341.62, "end": 4343.12, "text": " as well."}, {"start": 4343.12, "end": 4346.58, "text": " I guess this is just a way to broadcast the CLS token."}, {"start": 4346.58, "end": 4349.14, "text": " I can kind of copy paste this whole thing."}, {"start": 4349.14, "end": 4352.26, "text": " I'm going to store it into a temporary variable."}, {"start": 4352.26, "end": 4355.18, "text": " Oops, did I copy paste it?"}, {"start": 4355.18, "end": 4356.18, "text": " Yeah, I did."}, {"start": 4356.18, "end": 4357.860000000001, "text": " It just kind of overflowed."}, {"start": 4357.860000000001, "end": 4361.9400000000005, "text": " I can maybe remove this part with D type and device."}, {"start": 4361.9400000000005, "end": 4365.42, "text": " Let me see where it's going to work."}, {"start": 4365.42, "end": 4366.42, "text": " Maybe or maybe not."}, {"start": 4366.42, "end": 4369.54, "text": " I think, yeah, it's important to actually preserve this part."}, {"start": 4369.54, "end": 4372.34, "text": " So I'm going to add, do it like this."}, {"start": 4372.34, "end": 4376.66, "text": " TMP and TMP shape is going to be 1, 1, 7, 6, 8."}, {"start": 4376.66, "end": 4383.3, "text": " So basically we've done broadcasting, and then once you concatenate that with the X,"}, {"start": 4383.3, "end": 4394.74, "text": " which is of shape 1, 4, 9, 7, 6, 8, what is that going to do is let me think."}, {"start": 4394.74, "end": 4405.46, "text": " Yeah, basically we're going to end up with 1, 5, 7, 6, 8, because we're going to concatenate"}, {"start": 4405.46, "end": 4409.78, "text": " the CLS to prepend the CLS to our token sequence."}, {"start": 4409.78, "end": 4410.9400000000005, "text": " That should be the case here."}, {"start": 4410.94, "end": 4417.139999999999, "text": " So if I click F10, we end up with X shape 1, 5, 7, 6, 8, as I said."}, {"start": 4417.139999999999, "end": 4421.219999999999, "text": " And then we just add the positional embeddings."}, {"start": 4421.219999999999, "end": 4424.099999999999, "text": " So that's this step here."}, {"start": 4424.099999999999, "end": 4426.139999999999, "text": " Oops, let me show you that."}, {"start": 4426.139999999999, "end": 4430.94, "text": " So that's going to be basically, let me just find it."}, {"start": 4430.94, "end": 4438.86, "text": " That's this part, 
adding the positional encodings onto the vectors there."}, {"start": 4438.86, "end": 4440.259999999999, "text": " Let me step over that."}, {"start": 4440.26, "end": 4445.34, "text": " Then there is some LNNorm going on, some permutation of the dimensions, because that's how this"}, {"start": 4445.34, "end": 4447.02, "text": " transformer expects it."}, {"start": 4447.02, "end": 4449.46, "text": " Then we just process basically these."}, {"start": 4449.46, "end": 4453.58, "text": " So now you can literally treat this as a simple NLP problem."}, {"start": 4453.58, "end": 4456.1, "text": " We have a bunch of tokens with the corresponding channel dimensions."}, {"start": 4456.1, "end": 4461.54, "text": " We just pass that through the transformer, and then we do the permutation again here."}, {"start": 4461.54, "end": 4462.54, "text": " And that's it."}, {"start": 4462.54, "end": 4465.34, "text": " So Vision Transformer is basically Patchify plus Transformer."}, {"start": 4465.34, "end": 4468.54, "text": " That's all there is to it."}, {"start": 4468.54, "end": 4475.78, "text": " And finally, we just want to project onto the common dimensionality of this joint multimodal"}, {"start": 4475.78, "end": 4479.04, "text": " space between the visual part and the textual part."}, {"start": 4479.04, "end": 4482.42, "text": " And that's why we have this self-projection."}, {"start": 4482.42, "end": 4486.94, "text": " So it's going to project us from basically 756 to 512."}, {"start": 4486.94, "end": 4489.74, "text": " So as you can see here, that's the corresponding dimensions."}, {"start": 4489.74, "end": 4492.66, "text": " And if we run this, we end up with what?"}, {"start": 4492.66, "end": 4497.98, "text": " We end up with a single vector that has 512 dimensions, because here we extracted some"}, {"start": 4497.98, "end": 4499.5, "text": " of the other dimensions."}, {"start": 4499.5, "end": 4505.0199999999995, "text": " So basically what happened is we took our image and we ended up with a single vector"}, {"start": 4505.0199999999995, "end": 4506.7, "text": " that has 512 dimensions."}, {"start": 4506.7, "end": 4509.0599999999995, "text": " Let's continue."}, {"start": 4509.0599999999995, "end": 4510.86, "text": " That's the encoding image function."}, {"start": 4510.86, "end": 4512.94, "text": " Now for the encoding text function."}, {"start": 4512.94, "end": 4516.299999999999, "text": " Again, let me know whether this is too much detail for you."}, {"start": 4516.299999999999, "end": 4518.5599999999995, "text": " Any feedback is super appreciated."}, {"start": 4518.5599999999995, "end": 4523.08, "text": " The first time I'm doing this, so it will be super useful to hear some feedback."}, {"start": 4523.08, "end": 4529.08, "text": " Embedding text, what's going to happen here is, so we have token embedding."}, {"start": 4529.08, "end": 4530.7, "text": " So let's see what's going to happen here."}, {"start": 4530.7, "end": 4539.42, "text": " Okay, so basically we are going to use the text tensor to index into our vocab, into"}, {"start": 4539.42, "end": 4541.22, "text": " our token embedding table."}, {"start": 4541.22, "end": 4542.22, "text": " Okay."}, {"start": 4542.22, "end": 4544.82, "text": " And let me remind you what this table is about."}, {"start": 4544.82, "end": 4548.74, "text": " So token embedding is just a vocab size transformer width."}, {"start": 4548.74, "end": 4552.34, "text": " So it's going to be just our basically embedding table."}, {"start": 4552.34, "end": 4561.34, "text": 
" If I enter here, if I run this code, we end up with bit size number of tokens."}, {"start": 4561.34, "end": 4565.74, "text": " Sorry, the size of the context, which is 77 and then 512."}, {"start": 4565.74, "end": 4567.76, "text": " So let me bring the shape."}, {"start": 4567.76, "end": 4570.4800000000005, "text": " So we started with this."}, {"start": 4570.4800000000005, "end": 4577.46, "text": " We started with 377 and after embedding all of those IDs, and I'm butchering the terminology"}, {"start": 4577.46, "end": 4578.860000000001, "text": " here, sorry for that."}, {"start": 4578.86, "end": 4584.62, "text": " I'm not used to explaining these types of codes out loud, but yeah."}, {"start": 4584.62, "end": 4591.38, "text": " Then we just add the self-positional embeddings, which are going to be of the shape 77, 512,"}, {"start": 4591.38, "end": 4592.66, "text": " I assume."}, {"start": 4592.66, "end": 4595.7, "text": " So let me remind myself here, 77, 512."}, {"start": 4595.7, "end": 4596.7, "text": " Yeah."}, {"start": 4596.7, "end": 4598.0, "text": " So we just add them up."}, {"start": 4598.0, "end": 4600.179999999999, "text": " So that's the step we saw again."}, {"start": 4600.179999999999, "end": 4604.62, "text": " We saw that in the one note here."}, {"start": 4604.62, "end": 4606.46, "text": " So basically that's this step."}, {"start": 4606.46, "end": 4609.9, "text": " You have this embedding table and you're going to just add it up."}, {"start": 4609.9, "end": 4611.38, "text": " You're just going to kind of put it here."}, {"start": 4611.38, "end": 4617.1, "text": " So after the embedding has happened, you just kind of align it with your tokens."}, {"start": 4617.1, "end": 4624.3, "text": " So it's going to be 77 and you just kind of add the corresponding vectors."}, {"start": 4624.3, "end": 4628.46, "text": " You just add these two, you add these two, et cetera, et cetera."}, {"start": 4628.46, "end": 4629.46, "text": " Okay."}, {"start": 4629.46, "end": 4630.46, "text": " That's what happens."}, {"start": 4630.46, "end": 4632.6, "text": " That's all."}, {"start": 4632.6, "end": 4634.02, "text": " Let me step over."}, {"start": 4634.02, "end": 4637.22, "text": " We do the, again, the permutation."}, {"start": 4637.22, "end": 4638.9400000000005, "text": " We pass through the transformer."}, {"start": 4638.9400000000005, "end": 4642.540000000001, "text": " Again, return back into the shape that's desired."}, {"start": 4642.540000000001, "end": 4645.38, "text": " Then we pass through the layer normalization."}, {"start": 4645.38, "end": 4650.06, "text": " And finally, there is this interesting piece of code."}, {"start": 4650.06, "end": 4659.02, "text": " So what's this going to do is extract those tokens that are above the end of text token."}, {"start": 4659.02, "end": 4660.5, "text": " Let me break that down for you."}, {"start": 4660.5, "end": 4662.740000000001, "text": " Again, let me show you what I mean by that."}, {"start": 4662.74, "end": 4671.86, "text": " So if I do text argmax, we're going to end up with basically a vector of threes."}, {"start": 4671.86, "end": 4672.86, "text": " Okay."}, {"start": 4672.86, "end": 4677.5599999999995, "text": " And that's because the text tensor itself."}, {"start": 4677.5599999999995, "end": 4680.099999999999, "text": " So we have zero, one, two, three."}, {"start": 4680.099999999999, "end": 4688.5599999999995, "text": " You can see that the end of text token has the highest associated ID and that's why this"}, {"start": 
4688.5599999999995, "end": 4690.219999999999, "text": " argmax is going to work."}, {"start": 4690.22, "end": 4697.7, "text": " So basically what we do is we extract the location of where this end of text token is"}, {"start": 4697.7, "end": 4699.02, "text": " in our tensor."}, {"start": 4699.02, "end": 4704.26, "text": " And then we use those positions to extract the tokens from our vector x."}, {"start": 4704.26, "end": 4706.4800000000005, "text": " So let me again show you the shape of x."}, {"start": 4706.4800000000005, "end": 4709.46, "text": " So x is 3, 77, 5, 12."}, {"start": 4709.46, "end": 4716.18, "text": " And what we're going to do out of all of these 77 tokens, we extract just a single one."}, {"start": 4716.18, "end": 4717.18, "text": " And that's it."}, {"start": 4717.18, "end": 4723.1, "text": " And then we do the projection into the final 512 dimension."}, {"start": 4723.1, "end": 4728.3, "text": " So actually it's already in the correct dimension, but still they do this projection just to"}, {"start": 4728.3, "end": 4731.860000000001, "text": " increase the capacity of the model, I guess."}, {"start": 4731.860000000001, "end": 4737.02, "text": " Let me step over this and we end up with x shape being 3, 5, 12."}, {"start": 4737.02, "end": 4738.02, "text": " Okay."}, {"start": 4738.02, "end": 4739.02, "text": " So that's it."}, {"start": 4739.02, "end": 4745.820000000001, "text": " We passed in three captions and we got out three vectors that all have 512 dimensions."}, {"start": 4745.820000000001, "end": 4746.820000000001, "text": " That's everything we need."}, {"start": 4746.82, "end": 4752.62, "text": " We can just do a simple normalization and then do the dot product to get the similarity"}, {"start": 4752.62, "end": 4753.62, "text": " score."}, {"start": 4753.62, "end": 4754.62, "text": " Okay."}, {"start": 4754.62, "end": 4756.0599999999995, "text": " Let's go back here."}, {"start": 4756.0599999999995, "end": 4762.179999999999, "text": " Now this function is kind of silly because we will not be using this, but if we just"}, {"start": 4762.179999999999, "end": 4767.04, "text": " pass this model, if we just call the model, it's going to do exactly this."}, {"start": 4767.04, "end": 4768.34, "text": " So let me show you what I mean by that."}, {"start": 4768.34, "end": 4775.28, "text": " So if we go and if I click F5, so the for function again just does the encode image"}, {"start": 4775.28, "end": 4776.78, "text": " and encode text."}, {"start": 4776.78, "end": 4780.78, "text": " So I'm just going to skip over those two because we've seen it already."}, {"start": 4780.78, "end": 4782.58, "text": " So I'm just going to skip over all of that."}, {"start": 4782.58, "end": 4783.58, "text": " Whoops."}, {"start": 4783.58, "end": 4787.54, "text": " Let me just, my God, this was a mistake."}, {"start": 4787.54, "end": 4788.82, "text": " Let me go back here."}, {"start": 4788.82, "end": 4790.86, "text": " Let me go back here."}, {"start": 4790.86, "end": 4795.66, "text": " Let me just put a break point here and click F5, skip all of this."}, {"start": 4795.66, "end": 4796.66, "text": " Okay."}, {"start": 4796.66, "end": 4797.66, "text": " So we are here."}, {"start": 4797.66, "end": 4801.38, "text": " So we've done, we have, again, we have the image features, which is going to be 1, 5,"}, {"start": 4801.38, "end": 4802.38, "text": " 12."}, {"start": 4802.38, "end": 4803.38, "text": " That's our vector."}, {"start": 4803.38, "end": 4807.18, "text": " And then we're going to 
have text features, which is going to be 3, 5, 12."}, {"start": 4807.18, "end": 4810.4800000000005, "text": " So that's a three vectors for the three captions."}, {"start": 4810.4800000000005, "end": 4812.58, "text": " Now we do the normalization."}, {"start": 4812.58, "end": 4814.86, "text": " We've seen this in the Drupiter already."}, {"start": 4814.86, "end": 4818.3, "text": " So I'm just going to skip and assume you understand all of this."}, {"start": 4818.3, "end": 4824.18, "text": " And finally, we, we take this, this is a learnable parameter that we did."}, {"start": 4824.18, "end": 4828.7, "text": " I kind of wasn't sure what this magic number 0.7 is."}, {"start": 4828.7, "end": 4830.38, "text": " Let me show you what I mean by that."}, {"start": 4830.38, "end": 4831.9400000000005, "text": " So here is that logit scale."}, {"start": 4831.94, "end": 4835.58, "text": " They kind of make it learnable and they initialize it like this."}, {"start": 4835.58, "end": 4840.9, "text": " I'm not sure why this exact number, if anyone knows, feel free to comment down below."}, {"start": 4840.9, "end": 4849.62, "text": " I guess it doesn't matter that much, but probably knowing how these NLP things work, it probably"}, {"start": 4849.62, "end": 4850.62, "text": " matters a lot."}, {"start": 4850.62, "end": 4855.74, "text": " Like it's a, it's a, it's a difference between this, this training loss spiking or not a"}, {"start": 4855.74, "end": 4859.919999999999, "text": " single, a single change of a variable can do that."}, {"start": 4859.92, "end": 4863.1, "text": " So let's step over here."}, {"start": 4863.1, "end": 4872.38, "text": " And what we do is we find the similarity matrix between the image vectors and the text vectors."}, {"start": 4872.38, "end": 4876.84, "text": " And we just do the multiplication here because I guess they're going to pass it later on"}, {"start": 4876.84, "end": 4878.18, "text": " into soft max."}, {"start": 4878.18, "end": 4879.7, "text": " That's why they're doing this."}, {"start": 4879.7, "end": 4885.66, "text": " So in any case, we get the logits per image and logits per text, which you can get by"}, {"start": 4885.66, "end": 4891.82, "text": " just transposing so you don't have to do again, text features and do matrix multiplication"}, {"start": 4891.82, "end": 4893.26, "text": " with image features transpose."}, {"start": 4893.26, "end": 4895.26, "text": " You can just transpose result."}, {"start": 4895.26, "end": 4898.86, "text": " You can, you can kind of verify that that's going to be correct."}, {"start": 4898.86, "end": 4899.86, "text": " Okay."}, {"start": 4899.86, "end": 4901.0199999999995, "text": " Let's click F10."}, {"start": 4901.0199999999995, "end": 4902.46, "text": " So we get the logits."}, {"start": 4902.46, "end": 4907.58, "text": " And now, as I said, we do the soft max along the last dimension."}, {"start": 4907.58, "end": 4911.86, "text": " So dimension minus one, and that's going to give us the probabilities."}, {"start": 4911.86, "end": 4916.98, "text": " So logits, whoops, let me copy paste this."}, {"start": 4916.98, "end": 4920.58, "text": " This should be, I guess, one three shape or something."}, {"start": 4920.58, "end": 4921.58, "text": " Yep."}, {"start": 4921.58, "end": 4926.94, "text": " And then if I kind of just print it, so here are the results."}, {"start": 4926.94, "end": 4929.219999999999, "text": " As you can see, they are not normalized."}, {"start": 4929.219999999999, "end": 4936.58, "text": " So after doing soft 
max here, if I click F10 here, we end up with probabilities."}, {"start": 4936.58, "end": 4942.54, "text": " So probabilities, and we find that the caption number one or zero, depending on how you index"}, {"start": 4942.54, "end": 4948.78, "text": " your arrays, is going to be the one that corresponds to our image."}, {"start": 4948.78, "end": 4951.18, "text": " And we'll see that that makes a lot of sense."}, {"start": 4951.18, "end": 4956.78, "text": " So okay, I'm just going to exit this."}, {"start": 4956.78, "end": 4960.44, "text": " And okay, I think, yeah, this is actually the end of the program."}, {"start": 4960.44, "end": 4965.64, "text": " So we got the result that the zeroth caption is the best one."}, {"start": 4965.64, "end": 4966.64, "text": " Let's see what it is."}, {"start": 4966.64, "end": 4973.1, "text": " Well, because the zeroth caption is this caption here, it's a diagram."}, {"start": 4973.1, "end": 4975.18, "text": " And let me show you the corresponding image."}, {"start": 4975.18, "end": 4976.780000000001, "text": " I think it's somewhere here."}, {"start": 4976.780000000001, "end": 4978.9400000000005, "text": " So clip image, where are you?"}, {"start": 4978.9400000000005, "end": 4979.9400000000005, "text": " Where are you?"}, {"start": 4979.9400000000005, "end": 4981.18, "text": " Okay, clip image here."}, {"start": 4981.18, "end": 4988.26, "text": " So as you can see, it's a diagram of the clip infrastructure, of the clip architecture,"}, {"start": 4988.26, "end": 4991.18, "text": " and thus this makes a lot of sense."}, {"start": 4991.18, "end": 4994.0, "text": " Okay, guys, this was pretty much it."}, {"start": 4994.0, "end": 4996.62, "text": " This was a very, very long video."}, {"start": 4996.62, "end": 4999.4, "text": " I hope you found something useful out of it."}, {"start": 4999.4, "end": 5002.76, "text": " Before we wrap up, I just want to show you one more thing."}, {"start": 5002.76, "end": 5006.38, "text": " So there is one more cool function I found in the in the reading file, and that's the"}, {"start": 5006.38, "end": 5007.7, "text": " linear probe."}, {"start": 5007.7, "end": 5009.36, "text": " So everything is the same."}, {"start": 5009.36, "end": 5012.5, "text": " So what we do, we download the cipher 100."}, {"start": 5012.5, "end": 5018.78, "text": " What we then do is we would pass through the data set, and we encode all of the images"}, {"start": 5018.78, "end": 5020.54, "text": " into these feature vectors."}, {"start": 5020.54, "end": 5026.66, "text": " So basically, what that means is we end up with, at the end, we end up with a bunch of"}, {"start": 5026.66, "end": 5033.1, "text": " like training features and associated labels, and same for the test data set."}, {"start": 5033.1, "end": 5039.14, "text": " And then instead of doing this zero shop classifier, you can instead just train a simple logistic"}, {"start": 5039.14, "end": 5041.5, "text": " regression on top of the train feature."}, {"start": 5041.5, "end": 5044.14, "text": " So again, train features, let me show you the shape."}, {"start": 5044.14, "end": 5048.82, "text": " Shape is going to be whatever the number of data points in the training data set is, 512."}, {"start": 5048.82, "end": 5050.0199999999995, "text": " So this is the shape."}, {"start": 5050.02, "end": 5054.540000000001, "text": " And then because we have the data set and we have the associated labels, this is a supervised"}, {"start": 5054.540000000001, "end": 5059.3, "text": " data set will have 
basically an associated ground truth labels."}, {"start": 5059.3, "end": 5063.02, "text": " And using these, you can train a linear probe."}, {"start": 5063.02, "end": 5065.620000000001, "text": " So that's this classifier here."}, {"start": 5065.620000000001, "end": 5067.4400000000005, "text": " Instead of doing what?"}, {"start": 5067.4400000000005, "end": 5073.900000000001, "text": " Instead of doing the zero shot, where you would basically take the class, you would"}, {"start": 5073.900000000001, "end": 5077.5, "text": " embed it into the template, and then you would encode that."}, {"start": 5077.5, "end": 5079.84, "text": " And that's how you create those ad hoc classifiers."}, {"start": 5079.84, "end": 5083.6, "text": " If you recall what I showed you before here."}, {"start": 5083.6, "end": 5085.26, "text": " So that's basically this logic here."}, {"start": 5085.26, "end": 5092.900000000001, "text": " You could do zero shot, but obviously it's much better perf wise to do like a linear"}, {"start": 5092.900000000001, "end": 5093.900000000001, "text": " probe."}, {"start": 5093.900000000001, "end": 5097.38, "text": " Although again, you would you do have to train to get that."}, {"start": 5097.38, "end": 5099.5, "text": " Whereas zero shot obviously is zero shot."}, {"start": 5099.5, "end": 5101.7, "text": " Hence you don't have to do the training."}, {"start": 5101.7, "end": 5106.34, "text": " Okay, so these variables are just found on some validation set."}, {"start": 5106.34, "end": 5110.42, "text": " And then once you train the classifier, you can then see how it performs on your test"}, {"start": 5110.42, "end": 5114.34, "text": " features and report the accuracy."}, {"start": 5114.34, "end": 5115.34, "text": " That's it."}, {"start": 5115.34, "end": 5116.34, "text": " That's clip for you."}, {"start": 5116.34, "end": 5121.9400000000005, "text": " Let me try and maybe quickly recap what we've seen."}, {"start": 5121.9400000000005, "end": 5125.6, "text": " I'm going to walk you through the function here again."}, {"start": 5125.6, "end": 5130.02, "text": " We load the clip model, we saw that clip model consists out of the vision transformer and"}, {"start": 5130.02, "end": 5131.46, "text": " the transformer."}, {"start": 5131.46, "end": 5137.54, "text": " Aside from the training parameters, those models have these associated tables."}, {"start": 5137.54, "end": 5143.86, "text": " Basically it's going to the division transformer is going to have this associated position"}, {"start": 5143.86, "end": 5148.88, "text": " encoding table plus the CLS token that's going to be learnable."}, {"start": 5148.88, "end": 5155.18, "text": " Transformer is going to have this positional encoding table as well, as well as the vocab"}, {"start": 5155.18, "end": 5156.18, "text": " table."}, {"start": 5156.18, "end": 5163.9400000000005, "text": " So that's the one that we use to embed our tokens here or the well tokens or IDs."}, {"start": 5163.9400000000005, "end": 5170.42, "text": " We have the integers, so maybe better to refer to those as IDs, although people call those"}, {"start": 5170.42, "end": 5172.18, "text": " tokens as well."}, {"start": 5172.18, "end": 5174.34, "text": " And that's that part."}, {"start": 5174.34, "end": 5178.1, "text": " So after we create the clip, we can then do the tokenization."}, {"start": 5178.1, "end": 5179.9400000000005, "text": " And this is arguably the hardest part."}, {"start": 5179.9400000000005, "end": 5183.22, "text": " In my humble opinion, it's kind of 
dark magic."}, {"start": 5183.22, "end": 5186.56, "text": " You have to process this text and you have to decide how to kind of split it using those"}, {"start": 5186.56, "end": 5188.46, "text": " reg ex expressions."}, {"start": 5188.46, "end": 5196.14, "text": " And then after you do the reg ex expression, you use this PP to kind of split it into into"}, {"start": 5196.14, "end": 5197.360000000001, "text": " sub words."}, {"start": 5197.360000000001, "end": 5204.42, "text": " And after that, you have to do the encoding into the corresponding integers."}, {"start": 5204.42, "end": 5206.22, "text": " And that's how you get the text."}, {"start": 5206.22, "end": 5207.22, "text": " Okay."}, {"start": 5207.22, "end": 5211.860000000001, "text": " Finally, once we have those two, we can basically pass them through the vision transformer and"}, {"start": 5211.86, "end": 5216.5, "text": " we can pass them through our transformer and get associated vectors."}, {"start": 5216.5, "end": 5220.7, "text": " And then we can just do simple dot product after we normalize them to get the scores"}, {"start": 5220.7, "end": 5223.0599999999995, "text": " and thus similarity."}, {"start": 5223.0599999999995, "end": 5224.0599999999995, "text": " That's it."}, {"start": 5224.0599999999995, "end": 5225.98, "text": " I realized I want to show you one thing quickly."}, {"start": 5225.98, "end": 5228.78, "text": " As I said, this is the hardest part of this tokenization."}, {"start": 5228.78, "end": 5234.92, "text": " So I'm just going to add this emoji here and let's run the program again."}, {"start": 5234.92, "end": 5235.92, "text": " So let me focus."}, {"start": 5235.92, "end": 5239.339999999999, "text": " Let me just remove these breakpoints and let me run this again."}, {"start": 5239.34, "end": 5242.1, "text": " So let me debug the file."}, {"start": 5242.1, "end": 5245.22, "text": " Let me show you what happens when we pass the brain emoji."}, {"start": 5245.22, "end": 5246.3, "text": " Okay."}, {"start": 5246.3, "end": 5247.76, "text": " Here we are."}, {"start": 5247.76, "end": 5252.74, "text": " Let me go through the tokenize function again."}, {"start": 5252.74, "end": 5257.5, "text": " Again we encode the special tokens and this is where the magic happens."}, {"start": 5257.5, "end": 5262.02, "text": " So let me enter it into this function encode."}, {"start": 5262.02, "end": 5265.66, "text": " So this is the hardest part of the whole code base if you ask me."}, {"start": 5265.66, "end": 5269.08, "text": " I'm still trying to understand every detail here."}, {"start": 5269.08, "end": 5270.98, "text": " We do some cleaning."}, {"start": 5270.98, "end": 5271.98, "text": " Cleaning will not change this."}, {"start": 5271.98, "end": 5275.22, "text": " This captions will end up with the same caption here."}, {"start": 5275.22, "end": 5276.7, "text": " So we end up with the same caption."}, {"start": 5276.7, "end": 5278.14, "text": " Nothing changed."}, {"start": 5278.14, "end": 5281.5, "text": " Then we're going to first grab token A. 
So that's nothing interesting there."}, {"start": 5281.5, "end": 5282.98, "text": " I'm just going to skip that."}, {"start": 5282.98, "end": 5283.98, "text": " Finally this is the interesting part."}, {"start": 5283.98, "end": 5285.86, "text": " So now we have a token brain."}, {"start": 5285.86, "end": 5291.58, "text": " So this is emoji and this is your this is a legit Unicode character and there is a unique"}, {"start": 5291.58, "end": 5298.22, "text": " code point that Unicode assigns to each of the characters you can encounter wherever."}, {"start": 5298.22, "end": 5303.860000000001, "text": " So in whatever language you always have a Unicode is this this very nice."}, {"start": 5303.860000000001, "end": 5310.22, "text": " Well how would they call it like encoding principle to to map each of the characters Unicode characters"}, {"start": 5310.22, "end": 5313.9400000000005, "text": " onto unique code points which is just a unique integer."}, {"start": 5313.9400000000005, "end": 5315.02, "text": " Okay."}, {"start": 5315.02, "end": 5319.9400000000005, "text": " And so what happens here is once you encode this particular token you're going to end"}, {"start": 5319.9400000000005, "end": 5321.1, "text": " up with four bytes."}, {"start": 5321.1, "end": 5325.38, "text": " So this particular token once you encoded using UTF eight it's going to give you four"}, {"start": 5325.38, "end": 5326.38, "text": " bytes."}, {"start": 5326.38, "end": 5328.18, "text": " So I'll show you what I mean by that."}, {"start": 5328.18, "end": 5333.22, "text": " If I take this and just print it here we end up with four bytes."}, {"start": 5333.22, "end": 5337.9400000000005, "text": " And so that means we have four iterations here and then we're going to use this mapping"}, {"start": 5337.9400000000005, "end": 5346.42, "text": " table to fetch the associated to fetch the associated Unicode character and then we're"}, {"start": 5346.42, "end": 5347.64, "text": " going to just join them."}, {"start": 5347.64, "end": 5349.86, "text": " And so we'll end up with some very weird token."}, {"start": 5349.86, "end": 5351.52, "text": " Let me do show you that."}, {"start": 5351.52, "end": 5354.0, "text": " So here's what this token looks like."}, {"start": 5354.0, "end": 5355.0, "text": " Very weird."}, {"start": 5355.0, "end": 5358.86, "text": " This is the representation we'll use to represent our brain emoji."}, {"start": 5358.86, "end": 5363.18, "text": " And then now we have to do the BPE token part."}, {"start": 5363.18, "end": 5369.34, "text": " So this is going to split this this character into sub words."}, {"start": 5369.34, "end": 5370.9, "text": " So let me show you what we get."}, {"start": 5370.9, "end": 5377.78, "text": " So this is what how he decides to split this particular character."}, {"start": 5377.78, "end": 5380.38, "text": " And after that we just encode all of those."}, {"start": 5380.38, "end": 5382.42, "text": " So that means we'll end up with two integers."}, {"start": 5382.42, "end": 5387.84, "text": " So as you can see that the brain will be encoded using two integers."}, {"start": 5387.84, "end": 5390.0, "text": " So let me show you what the BPE tokens are."}, {"start": 5390.0, "end": 5393.7, "text": " So these two numbers eight seven nine two and five ten."}, {"start": 5393.7, "end": 5395.96, "text": " That's what we how we encode the brain."}, {"start": 5395.96, "end": 5399.1, "text": " As I said fairly fairly magic."}, {"start": 5399.1, "end": 5405.58, "text": " I think they're 
referring to it as a dark horse behind the NLP like success because"}, {"start": 5405.58, "end": 5410.7, "text": " nobody talks about organization but it's fairly complicated to understand and to kind"}, {"start": 5410.7, "end": 5413.179999999999, "text": " of capture all of those edge cases."}, {"start": 5413.179999999999, "end": 5415.22, "text": " Anyways enough rambling guys."}, {"start": 5415.22, "end": 5416.22, "text": " Hope you found this video useful."}, {"start": 5416.22, "end": 5418.22, "text": " If you did comment down below."}, {"start": 5418.22, "end": 5422.66, "text": " Let me know how whether I can improve something or what not."}, {"start": 5422.66, "end": 5424.139999999999, "text": " Consider subscribing to this channel."}, {"start": 5424.139999999999, "end": 5426.5, "text": " Share the video art and until next time."}, {"start": 5426.5, "end": 5441.5, "text": " Bye bye."}]
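To tie the whole CLIP walkthrough above together, here is a minimal sketch of the zero-shot similarity computation that the debugger session steps through, using the public API of the openai/CLIP package (clip.load, clip.tokenize, encode_image, encode_text, logit_scale). The image path "CLIP.png" and the three captions are placeholders mirroring the repo's README example, not something taken verbatim from the video.

```python
import torch
import clip
from PIL import Image

# Load the ViT-B/32 CLIP model; on CUDA the weights are kept in fp16,
# which is why the walkthrough above observes float16 activations.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Preprocess one image and tokenize three candidate captions into a (3, 77) tensor:
# start-of-text token, BPE ids, end-of-text token, then zero padding up to context length 77.
image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)   # (1, 3, 224, 224)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)     # (3, 77)

with torch.no_grad():
    image_features = model.encode_image(image)   # (1, 512)
    text_features = model.encode_text(text)      # (3, 512)

    # Normalize, scale by the learnable logit scale (initialized to log(1/0.07)),
    # and take dot products to get one similarity logit per caption.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    logits_per_image = model.logit_scale.exp() * image_features @ text_features.t()  # (1, 3)
    probs = logits_per_image.softmax(dim=-1)      # highest probability on "a diagram"

print(probs)
```

Calling the model directly, model(image, text), performs the same encode-normalize-scale steps internally and returns the logits per image and their transpose per text, which is exactly what the forward pass in the walkthrough does.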
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=JiQmkhsbRwk
Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover "Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs" paper. The paper takes ideas from the sheaf theory - a branch of algebraic topology - and combines them with GNNs, enriching them with a rich geometric structure (sheaves) achieving provably more expressive diffusion-based graph neural networks! It took a lot of time to prepare this video and read everything that was necessary as a background reading - check out the resource section below for how to get started! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/pdf/2202.04579.pdf Truly gentle introduction to sheaves: ✅ A Very Elementary Introduction to Sheaves: https://arxiv.org/pdf/2202.01379.pdf ✅ Opinion dynamics on discourse sheaves: https://arxiv.org/pdf/2005.12798.pdf ✅ Sheaf Neural Networks: https://arxiv.org/pdf/2012.06333.pdf Blogs: ✅ Blog accompanying the paper: https://towardsdatascience.com/neural-sheaf-diffusion-for-deep-learning-on-graphs-bfa200e6afa6 ✅ Differential geometry and algebraic topology papers overview: https://towardsdatascience.com/graph-neural-networks-through-the-lens-of-differential-geometry-and-algebraic-topology-3a7c3c22d5f ✅ Beginner intro to topology: https://www.cantorsparadise.com/what-is-topology-963ef4cc6365 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Gentle intro to sheaf theory and algebraic topology 00:12:00 Making it less abstract: examples from sheaf theory 00:23:50 Formal terminology 00:27:40 Sheaf Laplacian dissected 00:40:00 The separation power of sheaf diffusion 00:53:00 Dirichlet energy and converging to harmonic space 01:01:05 Neural Sheaf Diffusion GNN 01:06:00 Results and outro (feedback appreciated) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #sheaftheory #algebraictopology #oversmoothing #heterophily
What's up you guys, in this video I'll be covering neural sheaf diffusion, a topological perspective on heterophily and oversmoothing in GNNs. This is a paper by Christian Bodnar, Francesco Di Giovanni, Benjamin Chamberlain, Pietro Leo and Michael Bronstein. And like you'll probably notice here, like most of you are probably not familiar with this word sheaf, and there is a whole theory called sheaf theory that's coming from the field of algebraic topology, and all of that may sound mouthful, and that's because it is. And I'll first try and give you a lot of context and understanding of what this theory is about, and then we'll start digging into the paper. So let's start with the abstract, but you'll probably won't understand anything here, but still let's kind of read it out and work our way from there. So cellular sheaves equip graphs with a geometrical structure by assigning vector spaces and linear maps to nodes and edges. Graph neural networks, or GNNs for short, implicitly assume a graph with a trivial underlying sheaf. So this is an important point. So the neural graph neural networks just assume a trivial sheaf structure. So this choice is reflected in the structure of the graph laplation operator, the properties of the associated diffusion equation, and the characteristics of the convolutional models that discretize this equation. So one example would be the GCN model. So in this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in the heterophilic settings and their over-smoothing behavior, and you may be familiar with this already, but GNNs are usually not as great for the heterophilic setting. And what I mean by a heterophilic setting is that, okay, let me maybe explain the homophilic setting. So in homophilic setting, the nodes which are close together and connected usually share the same label. So there is a saying that goes like, birds of feather flock together. So that's basically your homophilic setting. So the heterophilic setting is different. So that basically means even though if some nodes are connecting, that doesn't mean they have the same label. They may have completely different labels. Traditionally, GNNs struggled with this setting, and that's what this paper is trying to tackle, as well as over-smooting, which is just a consequence of the fact that how most GNNs work is they take some type of an average of the neighborhood features. And by just doing that and iterating that through multiple GNN layers, you end up with feature vectors converging to some value. And that's the over-smoothing phenomena, like a very hand-wavy of explaining it. Okay, so let me start by just kind of showing you a couple of snippets from this paper that's called A Gentle Introduction to Sheaves on Graphs. And I want to stress the gentle part, because the paper is like, I could maybe understand the first couple of pages, and then it just kind of spiraled out. And basically, unless you're a mathematician that has some understanding of topology and algebraic topology, preferably, you'll have a tough time understanding this gentle introduction. So even though I consider myself to be very comfortable with various branches of mathematics, I still struggled a lot. So let me read you a couple of snippets anyways. 
So while these concepts may seem even more hopelessly abstract than the already abstract notions of sheaves and morphisms, the reader should rest assured that the exposition here is far simpler than nearly any other explanation of sheave cohomology or homological algebra. Okay. And then they proceed to say the interested reader might consult a book like this one, but a warning is in order. A typical prerequisite for reading and understanding a book on homological algebra is to have read and understood a book on homological algebra. Okay, so this kind of hopefully gives you some understanding of what's going on in this field is fairly abstract. And worst of all, there is not that many like beginner friendly resources out there. There is one paper I found which is absolutely great, and that's called A Very Elementary Introduction to Sheaves this time. So we can see the progression here. We have the classical theory, then we have the gentle introduction, and only the very elementary introduction will actually work. And it will give you some mental models of what's going on in this when we mention sheaves and we mention all of these abstract concepts. So I'll get to this a bit later. First of all, let me let me start and explain this like starting from a more specific setting, and that's the setting we'll actually care about. So that's the setting here. So we have a graph here. You can see two nodes. We have a node V and a node U. We have an edge connecting those two nodes. And what we do is we associate this algebraical structure. So in this particular case, we're going to use a vector space and we're going to associate a vector space to both of these nodes and as well as to the edge. So other than the vector spaces, we'll also associate these restriction maps. I mean, not associate, we'll define these restriction maps for for all of the incident node edge pairs. So that's on a high level what's going on. So we have a topological object, which is a graph in this in this particular example. So let me take a pen here. So this is our topological object and we associate certain algebraic objects with with our topology. So in this particular case, we associate I can see here a vector space in another vector space and yet another vector space with an edge with nodes and with nodes here as well. And we define these as I said, restriction maps, which are simple linear maps between these different vector spaces. So there is a nice basically analogy to if we go to the world of differential geometry and we start thinking about manifolds. And in this particular case, we have a 2D sphere embedded into a 3D space. Here we have a similar notion. So we have a we have a vector space associated with this particular point on the manifold. So this is the tangent space of that manifold. And this can be roughly treated as the as the vector space associated with this particular node here. So here we associate the tangent space with the point on the manifold and basically the optimal transport. So not the optimal, sorry, the parallel transport on this sphere is an equivalent of this not equivalent, but an analog of this restriction map in our in our setup here on the sheaf here. And you can see so the parallel transport is going to convert a vector from this tangent space to a vector in this other tangent space associated with some other vector space. So this is going to convert from this tangent space to some other point on the manifold. So there is a kind of loose analogy between these these two. 
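Written out in the paper's notation, the setup just described attaches a vector space (a stalk, the "stock" above) to each node and to each edge, plus one linear restriction map per incident node-edge pair. This is a compact restatement of that picture, not something shown verbatim in the talk:

```latex
% Restriction maps along a single edge e = (u, v):
\mathcal{F}_{v \trianglelefteq e} : \mathcal{F}(v) \to \mathcal{F}(e), \qquad
\mathcal{F}_{u \trianglelefteq e} : \mathcal{F}(u) \to \mathcal{F}(e)

% The two node vectors "agree" over e when their images in the edge stalk coincide:
\mathcal{F}_{v \trianglelefteq e}\, x_v \;=\; \mathcal{F}_{u \trianglelefteq e}\, x_u
```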
So that's maybe worth mentioning. OK, so now let me step back and try and give you a bit more context and what this algebraic topology is about, because this is just a particular specific like setup of algebraic topology that we'll care about. So this is a vector from Michael Bronstein's blog, and it kind of introduces the wider terminology. So the idea is the following. So here you have like multiple stocks of grain which are like basically tied together. Getting back to our setup, these vector spaces here and algebraic structures more generally are called stocks. So this is going to be called a stock. And so that's that's the stock part. And then the restriction maps are what's tying together all of these stocks. So that's why you'll often see this like basically a bundle of grains used to represent ideas from from algebraic from from sheaf theory, which is a branch of this field of algebraic topology. And that may or may not help you memorize what the sheaf structure is about. OK, but let me generalize. Let me take even like a step further and let's generalize this even more. So this is how the what the informal definition of a sheaf would be. So we assign to each open subset of the underlying topological space a certain algebraic structure and we specify restriction maps between them. So what's a topological space? Well, topological space is a very, very general notion. So that can be anything from like a line from a real line to graphs, to manifolds, to simple complexes, which are themselves a generalization of specific types of graphs. We then have cellular complexes as another topological object, etc. Etc. You can see that by saying this sentence, you basically mean a lot of things. It's very general. It's very abstract. Then they see a certain algebraic structure. And most of you are probably familiar with vector spaces, but not that much with groups or rings. But like those are some other types of objects you could kind of associate with this open subset on your topological space. And finally, we specify the restriction maps so you can see that we get our set up by specifying a particular topological space. So we care about graphs and we care about vector spaces on those graphs. And even more specifically, we're going to be using vector spaces of the same dimensionality for each of our stocks. That's additional additional detail. OK, so again, briefly, topology is for those of you who are not familiar. This is a loose informal definition, but it will maybe help you create some some mental model. So what is important in topology is the connectivity and not the distances and or angles, whereas in geometry, we care about the distances and angles. So the connectivity is the name of the game. And that's why the donut and a coffee mug are the same thing for a topologist. So that's like a usual joke in for at least for mathematicians like the like basically a coffee mug and a donut are the same thing. But because as you can see here, both of these objects can be morphed into each other. So there are these mappings. I think they are called homomorphism, I guess, between like a coffee mug and a donut. And the reason is we only have a single hole here, and that's what what what makes a difference. So if we had two holes, then we have a different object or if we had zero holes, that would be a different object. But everything else can kind of be, as you can see, morphed from from one shape to another. And because of that, they are considered equivalent. 
I also mentioned these objects called simplicial complexes. If you take a simple graph, whereas by simple graph, what I mean is a graph that doesn't have any self edges nor does it have any multiple edges between two nodes. So this would be this would be a simple graph. Something like this would be a simple graph and a generalization of the simple graph is something called simplicial complexes. You'll probably see that term thrown around. So just to kind of make it to define it here. And finally, if we have graphs with self edges and with multiple edges between two nodes, this is known as a multigraph with self edges and a generalization of these is called cellular complex or cellular complexes. So, yeah, that's just additional context for you. Finally, there is this nice statement from from from Maurice Auslender saying that she theory is the subject in which you do topology horizontally and algebra vertically. OK, so that was kind of various terms and concepts related to this like branch of mathematics. And now let me try and make this a bit more concrete. OK, I already mentioned this paper, a very elementary introduction to sheaves. And I want to make like a huge shout out to the author of this paper. I think that like doing this is incredibly important, especially if you have some very abstract and for most people like alien type of a theory. You really want to create a paper where you dump all of your like mental models, all of your visualizations and all of the concrete examples you use to ground some abstract concepts from that theory and make them digestible to your like brain. I guess I would really love to see more of these papers for each of like different branches of more abstract pure mathematics. And I guess the reason people are not doing this usually is because it's not that rigorous and then maybe your peers will judge you or whatnot. But like like I urge you, this is like so important. This paper made it much like more visceral for me what she theory is about. So let's read this abstract and then let's see a couple of figures from from this particular paper. So here we give a simple and approachable explanation to the mathematical objects called sheaves. This paper presents tangible and concrete examples so that readers will be able to further explore these concepts on their own in more abstraction and application. So for me and I know for for many people, I know it's much easier to start from a concrete example and then abstract away rather than start in the abstract space and then try and understand what's going on and where can I relate this to. So I think this is that's why this paper is so important. So as they say here that the paper this paper is very is a very non rigorous. It's a non rigorous loose and extremely basic. And I would also add extremely useful introduction to sheaves. This is meant to be a guide to gaining intuition, which is so important about sheaves, what they look like and how they work. So after reading this paper, someone can jump into the extremely abstract definitions and examples seen in textbooks with this some idea of what is going on. OK, so let's get back to our particular example of a sheaf where we care about graphs as that underlying topological object and where we care about vector spaces with the same number of dimensions as our particular algebraic space object. OK, so here we can see that we have, for example, this this particular stock for this node is going to have two dimensions. 
It's going to be (2, 1), and this other stalk is going to be (-2, 10, 3). So in this particular example they did use different dimensionalities, but that does not matter much. Then they've specified particular restriction maps; you can see they're just simple linear maps, represented by this matrix, and we have a different one here. If we take this vector and map it, we end up with this vector here. OK, so let me just change the color. So we end up with this one here. And if we take this other vector and map it with its restriction map, we end up with this particular vector here. The important thing I want you to notice is that these two agree, which is not the case in general, obviously. And this agreement is a vital idea in sheaf theory. To make this a bit more concrete, let me give you a particular interpretation of what this could mean. So imagine this graph represents a group of people, a social network, and imagine these stalks here, the vectors associated with our nodes, which represent certain individuals in the society. So this stalk is just an opinion space; you can think of this as an opinion vector. Basically, minus two would mean this particular person thinks negatively about some topic X, I don't know, politics. Then this means the same person thinks extremely positively of some other topic Y, and so on; we have three for some other topic. Same here. And what this model does is, by mapping each opinion vector into this so-called discourse space (this thing here, they refer to it as the discourse space), we achieve an agreement. So even though the personal, private opinion vectors are not the same, after we express them we get to an agreement. And that's pretty much the only way a society can function: we have to roughly agree across certain topics, otherwise everything explodes. So this could be one particular interpretation of what is going on and why it is important to find such vectors, such stalks, such that we have an agreement across all of the edges of our graph. So that's the idea there. Okay, so now again, let me take a step back and make this a bit more general, because I assume most of you have no idea what algebraic topology is; that's why I'm giving you a lot of examples, and I also link a bunch of resources down in the video description, so do check them out. Okay, let me give you another example and then we'll get to the rest of the paper; hopefully it's going to be easier to understand after this. So, for a topological space X (we are now more general, we're not talking about graphs but about general topological spaces), a sheaf of abelian groups on X assigns to each open set of X an abelian group. Okay, so this time our algebraic object is an abelian group, and open sets of a topological space are our underlying "horizontal" objects. So which abelian group, you ask? Here it will be the group of continuous functions from U, the open set, to R. That is, for an open set U, which is a subset of our topological space, the stalk of U is this group: the set of continuous functions f that map from that open subset to R. Okay, just to make this a bit more concrete, here is a particular example of what that means.
We have an open subset U, as you can see here, and this is a set of continuous functions defined on it. So this thing here would be the stalk, and picking a particular function from this space is the same thing as picking a particular vector back in the graph example. We're just dealing with different objects: that was a concrete instantiation in a vector space, and this is a concrete instantiation in this particular abelian group. So, along with pointwise addition, this is basically a group. And for an open set V, which is a subset of U, the restriction map from U to V, denoted like this, is actually really simple: for an element f of F(U), which is a continuous function, the restriction map is just the restriction of f from U to V. So what that means is the following. After we apply this restriction map, you basically map this thing, let me take a color like this, onto this thing here. So you just constrain: before, this was the domain of your function, and after the restriction map you redefine it on the smaller domain, but you have the same function there. It's kind of trivial, but we'll see why it makes sense. Again, by doing this, what we've done is we took this function, which in the earlier analogy is like a particular vector, and we mapped it all the way to here. This time we don't have the intermediate step; we just find the mapping and map directly from here to here. Okay. Now, with this definition, let's see how we define the agreement this time. Here you can see a bunch of open sets, which play the role of the nodes in our previous example, and for each of those open sets we associated a particular function, a concrete instance from our group, the algebraic structure we care about in this example. So where before we would associate a particular vector with a particular node, here for each open set we associate a particular function. So how do we get to an agreement? Here is what the agreement looks like for this particular sheaf: we basically get a continuous function. And if you think about it, it makes sense. Let's take one example: take this purple open set, from here to here. If we apply the restriction map to this other, orange set, so this one here, let me just change the color, we know that the function needs to remain the same, because that's how we defined the mapping. And the only way this small function agrees with this function here is if they perfectly align. And because you can have multiple of these open sets which intersect with each other, the only way we get agreement for this particular sheaf is if we have a continuous function on the whole space. OK, so hopefully that gives you some understanding of what's going on. I did get into a bit more nitty-gritty detail than I needed to, but the whole point I want you to take away from this explanation is that agreement in sheaf theory is a very, very important concept.
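To make that edge-wise agreement check concrete in the vector-space setting of the opinion example above, here is a tiny numpy sketch. The opinion vectors and restriction matrices are made-up illustrative numbers (chosen so the check passes), not the ones on the slide.

```python
import numpy as np

# Private "opinion" vectors living in the stalks over two nodes u and v.
x_u = np.array([2.0, 1.0])
x_v = np.array([1.0, 3.0])

# Restriction maps into the shared edge ("discourse") space.
F_u_e = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
F_v_e = np.array([[3.0, 0.0],
                  [2.0, 0.0]])

# Agreement over the edge means both images in the edge stalk coincide.
print(F_u_e @ x_u)                              # [3. 2.]
print(F_v_e @ x_v)                              # [3. 2.]
print(np.allclose(F_u_e @ x_u, F_v_e @ x_v))    # True -> u and v agree in discourse space
```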
So agreement here gives us a very nice continuous function, and agreement there keeps people and society functional instead of breaking apart because people are polarized. OK guys, let me quickly give you one more take on how this works, in case you did not understand it; I'm going to make it much more pictorial. So imagine we focus on these two open sets: we have this open set here, and we have this orange one here. Now imagine instead that on this orange open set we had a function defined like this, so we pick a value from our stalk and it looks like this, and imagine that for this bigger open subset we have exactly this function here, so this is the value we took. After we take the function from this bigger open set, U3, we apply the restriction map onto the smaller open set. And what do we get? Well, we get this thing here, because that's how we defined the restriction map. And you can see that there is no agreement between that and this thing here. The only way we can get an agreement is if we have a continuous function, like this. Hopefully that now makes it a bit more visceral and understandable. Let's get back to the paper, and let me briefly introduce the formal terminology. So this is how it looks. Definition one: a cellular sheaf, a tuple consisting of a graph and this big F, the sheaf structure on an undirected graph G with a set of vertices and a set of edges. It consists of: one, a vector space F(v) for each node v in the set of nodes; a vector space F(e) for each edge e in the set of edges (these are called stalks); and a linear map denoted like this. That funny symbol with the triangle means incidence: this edge e is incident to this node v. So the map goes from the stalk of v onto the stalk of e, and we define one for each incident node-edge pair. That's what a sheaf is: you define these spaces, you define these maps, and you've got a sheaf. So again, these are the stalks, and these linear maps are the restriction maps. Now we have the concept of a zero-cochain. The space of zero-cochains, denoted C^0, is defined as this circled plus, a direct sum of the different stalks, and you can think of it as basically a concatenation of these various vector spaces. Then you can define a signal, an element of that zero-cochain space, this bold x here, which is basically a collection of stalk vectors, one for each of the nodes. Okay, that's one important space you need to have in mind. Let's continue: a particularly important subspace of this zero-cochain space is the space of global sections. You'll understand what this exactly means in a second, but this is the agreement thing I was mentioning. The global section space is denoted H^0, which stands for zeroth cohomology; that's also how you'll see this space referred to. It's basically the set of those signals x, where x belongs to the zero-cochains, such that we have an agreement along every single edge of our graph. OK, that's the idea.
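For reference, here are those two spaces written out in the paper's notation; this is a compact restatement of the definitions just described, not something quoted from the talk:

```latex
% 0-cochains: one stalk vector per node, stacked together.
C^0(G; \mathcal{F}) \;=\; \bigoplus_{v \in V} \mathcal{F}(v)

% Global sections (zeroth cohomology): the signals that agree across every edge.
H^0(G; \mathcal{F}) \;=\;
  \bigl\{\, \mathbf{x} \in C^0(G; \mathcal{F}) \;:\;
    \mathcal{F}_{v \trianglelefteq e}\, x_v = \mathcal{F}_{u \trianglelefteq e}\, x_u
    \ \text{for all } e = (u, v) \in E \,\bigr\}
```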
and we apply a different restriction map to node u's feature vector, and the two results have to be equal. That's when we achieve agreement, and as they mention here, this space contains those private opinions x for which all neighbors agree with each other in the discourse space. That would be one particular interpretation of what this abstract global section space means. So, again, remember this space of global sections: those are all of the possible signals on our nodes that lead to agreement across all of the edges of our graph. That's it, as simple as that. Okay, let's continue. Definition two, and this is very important, so I'm going to spend some time here. The sheaf Laplacian of a sheaf is a linear map from the space of 0-cochains to the space of 0-cochains, defined node-wise, roughly as L_F(x)_v = sum over edges e = (v, u) incident to v of F_v^T (F_v x_v - F_u x_u), where F_v and F_u are the two restriction maps onto edge e. This equation might seem scary, but it's not, because if you know anything about graph ML, GCNs for example, this is just a generalization of the very same thing we already saw there. If you're not familiar with those papers and that theory, go check out my graph ML playlist; it will be immensely helpful for understanding this video. So imagine these restriction maps were just scalars: if this was a one, this was a one, and this was a one, you would just have a sum of differences between the central node v and its neighbors, and that's exactly what the graph Laplacian matrix gives you. If you multiply your node feature vectors by the graph Laplacian, you end up with such an equation. Let me make this more complete with a particular example, because this sheaf Laplacian is one of the crucial objects in this paper. So imagine we have a graph like this: three nodes, connected like this; this is node one, this is node two, this is node three. Let's form the Laplacian matrix. There are multiple ways to define it, but one way, the so-called combinatorial Laplacian, is to take the degree matrix D and subtract the adjacency matrix A. Let's make that concrete for this example. D is a three-by-three diagonal matrix, and on the diagonal you put how many neighbors each node has; off the diagonal we only have zeros. So let me fill that in. Node one has only one neighbor. Node two also has only one neighbor, node three, and that's why we put a one there. And finally node three has two neighbors, which is why its degree is two. So that's the degree matrix. The A matrix is just your adjacency matrix, your connectivity, who is connected to whom. Node one is only connected to node three, so we have a one in that slot and zeros elsewhere in the row. Node two is only connected to node three again, so we have a one there and zeros elsewhere.
And finally node three is connected to node one, as you can see here, and it's connected to node two, so we put ones in those slots and a zero on the diagonal. Finally, to get the Laplacian matrix we just subtract these two, and we end up with the following matrix: one, one, two on the diagonal, then minus one here, minus one here, minus one, minus one, and zeros elsewhere. Okay, so why is this important? Why is this interesting? Well, let's see what happens if we multiply the node feature vectors by this Laplacian. So this is L; let's construct the x. For the sake of simplicity, say we only associate scalars with these nodes, so we have scalars x1, x2, x3, and we stack them together: x1, x2, x3. What do we get if we multiply these? I'll ignore the first two rows and focus on this one: we get two x3 minus x1 minus x2. And now, the moment I've been building up to: let's compare this thing here with this thing here. They have the same structure; this one is just a generalization of that one. As I said, if these restriction maps are just scalars, so if our sheaf structure is trivial, we fall back to the graph Laplacian. So let's set this to one, this to one, this to one, and let's focus on node v equals three. We get a sum over the incident edges, which are this edge and this edge here. For the first edge we get x3 minus x1, and then we have an additional term, x3 minus x2, for the second edge. As you can see, we end up with two x3 minus x1 minus x2, which is exactly the same thing as this expression here. Okay, so hopefully the connection is now clear: by introducing the sheaf structure, we generalized the graph Laplacian into this thing called the sheaf Laplacian. That's everything you need to know for now. Let's see a bit more information about the sheaf Laplacian. It's an nd-by-nd matrix. Why nd by nd? Well, in the particular case of a trivial sheaf structure we had n by n, as we just saw; the d comes in because the dimensionality of a stalk is d, which is why in general we get nd by nd. Okay. Then they say: when the vector spaces are set to R, so the stalk dimensionality is one, and the linear maps are identity maps over R, the underlying sheaf is trivial and one recovers the well-known n-by-n graph Laplacian matrix and its normalized version. And we'll care about the normalized version; this here is the normalized version of the sheaf Laplacian. What this sentence says is basically the thing I just explained, the connection we just made: the graph Laplacian is a special case of the sheaf Laplacian. Okay. Next up they say the following, and this is important: harmonic signals have perfect agreement of opinions with respect to the sheaf structure, i.e. if you multiply those signals by the sheaf Laplacian, you end up with a zero vector. And that makes sense, right?
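If you want to sanity-check this little example yourself, here is a short sketch assuming Python with NumPy; the feature values are arbitrary:

```python
import numpy as np

# The 3-node example from above: node 1 -- node 3 -- node 2.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]])              # adjacency matrix
D = np.diag(A.sum(axis=1))             # degree matrix diag(1, 1, 2)
L = D - A                              # combinatorial graph Laplacian

x = np.array([5.0, 7.0, 2.0])          # scalar features x1, x2, x3
print(L @ x)                           # third entry is 2*x3 - x1 - x2, as derived above

# Harmonic-signal check: for the trivial sheaf, "agreement" just means all nodes
# carry the same value, and then L x is the zero vector.
x_harmonic = np.array([4.0, 4.0, 4.0])
print(L @ x_harmonic)                  # [0, 0, 0]
```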
So if we have agreement here, if x1 and x2 are in agreement with x3, then we'll have zeros here. So if we're in agreement, all of these are going to be, let me change the color, all of these entries are going to be zero. And that's why L times x equals zero, which is what they state here, just with the sheaf Laplacian instead of the simple graph Laplacian. And the name is, I guess, indicative: the fact that we have agreement, that all of these signals are in a kind of equilibrium, is presumably why they are called harmonic signals. I'm not completely sure that's where the name comes from, but I strongly suspect it is. Theorem 3 just states that the space of global sections, which we saw a couple of minutes ago, and the kernel of the sheaf Laplacian are isomorphic as vector spaces. Let me break that down, don't be scared. Isomorphic is just a fancy way of saying these two vector spaces are the same; these two objects are really the same thing. And why the kernel space? The kernel is the set of x's that the sheaf Laplacian maps to the zero vector, and that's essentially how we defined H^0 as well. If you remember, these two conditions mean the same thing. Okay, enough rambling, let me try to make this a bit more concrete and make some connections. This is how the sheaf Laplacian looks: it's a block matrix, and the blocks are d by d because the stalks are d-dimensional. You can see the structure: on the main diagonal, for each node v, we have a sum over its incident edges of the restriction map transposed times the restriction map. Off the main diagonal, in the block that corresponds to the pair (v, u), we have minus the transpose of node v's restriction map times node u's restriction map. If you think about it, this construction makes sense, because it's exactly how you end up with the node-wise equation from before. Ignore the x's for a second: we have the sum of these F-transpose-F terms and we have this minus F_v-transpose-F_u term. Now append a vector, a vector x from the 0-cochain space, so a stack of d-dimensional stalk vectors, one per node; I drew it a bit wide, but it's just a single column. Multiply it out and you'll see you end up with the same equation: this slot is x_v, and a bit further down, maybe the penultimate slot, is x_u, so this block multiplies this piece of the vector and that block multiplies that piece, and that's how we recover the equation.
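Here is a small sketch of that block assembly, again my own illustration with made-up restriction maps; the last two lines preview the coboundary operator that comes up next:

```python
import numpy as np

def sheaf_laplacian(n, d, edges, F):
    """Assemble the nd x nd sheaf Laplacian from per-(node, edge) restriction maps.

    F[(v, e)] is the d x d restriction map of node v for edge index e.
    Diagonal blocks: sum over incident edges of F_v^T F_v.
    Off-diagonal block (v, u) for edge e = (v, u): -F_v^T F_u.
    """
    L = np.zeros((n * d, n * d))
    for e, (u, v) in enumerate(edges):
        Fu, Fv = F[(u, e)], F[(v, e)]
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# Tiny made-up example: 2 nodes, 1 edge, 2-dimensional stalks.
n, d = 2, 2
edges = [(0, 1)]
F = {(0, 0): np.array([[1., 0.], [2., 1.]]),
     (1, 0): np.array([[0., 1.], [1., 1.]])}
L_F = sheaf_laplacian(n, d, edges, F)
print(L_F.shape)                           # (4, 4): a block matrix with d x d blocks

# The same matrix from the coboundary operator discussed next: one block row per
# edge, [F_u, -F_v], and then L_F = delta^T delta.
delta = np.hstack([F[(0, 0)], -F[(1, 0)]])
print(np.allclose(delta.T @ delta, L_F))   # True
```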
The sheaf Laplacian itself can also be constructed from these coboundary operators. I won't go into a lot of detail there, you can check it out in your own time, but for now just be aware that these coboundary maps exist and that, as you can see here, they measure the discrepancy in the discourse space, i.e. in the edge stalk. Okay, let's make it a bit more concrete. Here is a particular graph, and here is the sheaf structure: here are the restriction maps, here are the stalks, and you can see the concrete numbers. Using those numbers you can plug them into this coboundary operator; this is how you form the coboundary matrix. This is the restriction map F1, and this is the restriction map F2 that corresponds to node v2, and that's why we have, as you can see here, F1 and minus F2: once we multiply this row by the stalk vectors x1 and x2, we end up with F1 x1 minus F2 x2, and that is precisely measuring the disagreement along this particular edge. So that's how you form it, and you can again see the connection between this and this expression here; we get precisely this equation by forming a matrix such as this one. And if we multiply coboundary-transpose times coboundary, we end up with the sheaf Laplacian; you can imagine that with these F1s and F2s, if you do the math, you'll end up with an equation such as this one, a sum of these F-transpose-F terms and so on. Okay, I'll stop there. I'm aware this is packed with details, but it is how it is. Hopefully this video gives you a glimpse, an overview of this whole field, and you can go and explore at your own pace and then come back and watch it again, I guess. Let's get to the actual neural networks now. An important concept is this diffusion process. All of those Laplacians, including the sheaf Laplacian, can be thought of as discretizations of a diffusion process. It's a continuous process governed by a PDE, a partial differential equation: the rate of change of X equals minus the sheaf Laplacian times the current value of X. Let me briefly explain what this diffusion process is about. As far as I know, this whole theory came from Fourier analyzing heat dissipation in solid bodies. So imagine you have a 3D solid object, and a certain point inside it, and that point has an associated temperature, a certain scalar. Now imagine a dense little sphere of points around it, and all of those neighboring points are colder. What's going to happen, as you know from real life, is that this green point, which I probably should have drawn in red to mark it as hot, is going to slowly cool down. And the reason it cools down is exactly this diffusion equation: from its value, some T zero, we subtract the average of the temperatures in the neighborhood, so a sum of the neighbors' temperatures, where i ranges over the neighbor set denoted by this very ugly N. If you think about it, this is exactly the same thing we're doing with our graphs. We took this process, which is continuous both in space and in time, and we discretized it: we run it on a discrete domain, which is a graph, and we run it in discrete steps, so in every iteration we do the same thing. And what this eventually leads to is that everything ends up at the same temperature. That's the end state of this process.
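Here is what that discretized diffusion looks like in a few lines, a rough sketch with an arbitrary step size, using the same 3-node graph and the plain graph Laplacian (i.e. the trivial sheaf):

```python
import numpy as np

# Euler-discretized diffusion, x <- x - alpha * (L @ x).
# For the trivial-sheaf Laplacian of a connected graph every channel flows toward
# a constant vector, i.e. toward the kernel of L: "same temperature everywhere".
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

x = np.array([5.0, 7.0, 2.0])     # initial "temperatures"
alpha = 0.1                       # step size
for _ in range(500):
    x = x - alpha * (L @ x)

print(x)                                   # roughly [4.67, 4.67, 4.67]
print(np.allclose(L @ x, 0, atol=1e-6))    # harmonic: L x ~ 0, the diffusion has stopped
```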
And we took exactly that discretized version of the process and we're piggybacking on it, using it as the workhorse for most of these graph neural networks. Now they say here that it can be shown that in the time limit, each feature channel is projected into the kernel of the sheaf Laplacian. That again means the updates become zero vectors, so there is no more rate of change: the diffusion has stopped, the temperature is stable everywhere. Continuing: as described above, this space contains the signals that agree with the restriction maps of the sheaf along all the edges. In what follows, we analyze the ability of certain classes of sheaves to linearly separate the features in the limit of the diffusion processes they induce. Restating this: by repeating this diffusion process we eventually end up with harmonic signals, i.e. signals that are in alignment with their neighbors. In the general case that signal might be trivial, so all zeros, which is not that interesting, but we're now going to see that if we pick specific sheaf classes, we end up with very interesting harmonic spaces. So what would constitute an interesting signal from that harmonic space? A signal such that we can linearly separate our classes, so that we can classify; we care about node classification, edge classification, regression, graph classification, whatnot. If the diffusion process eventually leads to such a harmonic signal, one that gives us linear separability of our classes, then we're good, and we want to ride the diffusion all the way to that equilibrium point. Now, let's look at a couple of sheaf structures and at the harmonic signals they result in, basically how powerful those signals are. The first class they consider is this very simple symmetric invertible class. It's defined as follows: it's a sheaf structure such that the two restriction maps of the two nodes incident to an edge are the same, and the determinant of those restriction maps is different from zero, which means they are invertible, as the name says, and symmetric because the two maps are equal, okay? Then they mention: we note that for d equals one, so stalk dimension one, the sheaf Laplacians induced by this class of sheaves coincide with the set of well-known weighted graph Laplacians with strictly positive weights, which also includes the usual graph Laplacian. Therefore this hypothesis class is of particular interest, since it includes the graph Laplacians typically used by graph convolutional models such as GCN. So this is the minimal generalization we can make, okay?
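That d equals one claim is easy to check numerically; here is a small sketch of my own, with arbitrary edge scalars:

```python
import numpy as np

# d = 1 symmetric invertible sheaf on the 3-node graph from before: both endpoints
# of an edge share the same nonzero 1x1 restriction map a_e. The resulting sheaf
# Laplacian is exactly a weighted graph Laplacian with positive weights a_e**2.
edges = [(0, 2), (1, 2)]
a = {0: 2.0, 1: -0.5}                  # one nonzero scalar per edge (sign does not matter)

n = 3
L_sheaf = np.zeros((n, n))
for e, (u, v) in enumerate(edges):
    w = a[e] * a[e]                    # F_u^T F_v collapses to a_e^2 in 1D
    L_sheaf[u, u] += w
    L_sheaf[v, v] += w
    L_sheaf[u, v] -= w
    L_sheaf[v, u] -= w

# Weighted combinatorial Laplacian with weights a_e**2: identical matrix.
W = np.zeros((n, n))
W[0, 2] = W[2, 0] = a[0] ** 2
W[1, 2] = W[2, 1] = a[1] ** 2
L_weighted = np.diag(W.sum(1)) - W
print(np.allclose(L_sheaf, L_weighted))   # True
```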
They then first show that this class of sheaf Laplacians can linearly separate the classes in binary classification settings under certain homophily assumptions, and that's very important. I'm going to go through this proposition, but then I'll start slowly skimming some of the details, because there are too many of them and not all are equally important here. Let G be the set of connected graphs with two classes; we have two classes because it's a binary classification problem. So nodes A and nodes B belong to the node set of the graph, such that for each node from one class there exists a node u from the same class and an edge connecting them. That means nodes with the same label are connected to each other, and that's the homophily assumption they mentioned. Then this sheaf class has linear separation power over this particular family of graphs, okay? But then they show that it's not powerful enough to linearly separate two classes under heterophilic conditions, and that's what's shown here. Basically, let G be the set of connected bipartite graphs with partitions A and B forming the two classes, where the two classes have the same number of elements; then this sheaf class cannot linearly separate any graph in G, for any initial conditions, okay? So that means that if we have something like this, a bipartite graph such as this one, where we don't have any connections within the partitions, only these cross connections, then in this particular setup this class of sheaf structures will not work. And that's why we have to resort to more powerful sheaf classes. Let's see what those are. Next they introduce the non-symmetric invertible class, where we only keep the invertibility constraint and drop the symmetry constraint. Here they show, and I'll skip reading it out, that even under heterophilic conditions this sheaf class has linear separation power over those graphs, which is super nice. So slowly, by allowing more powerful sheaf structures, we can solve more difficult problems, in this case binary classification in the heterophilic setting, okay? And they continue in this fashion. Here they state a fundamental limitation of sheaf diffusion when d equals one, so when we only have one-dimensional stalks. They basically say: let G be a connected graph with three or more classes; then this previous class of sheaves cannot linearly separate any signal on that graph, and that's a powerful statement. It means we need to go deeper, we need more dimensions in the stalk, along the vector space dimension. So they say: from a GNN perspective, this means that in the infinite-depth setting, sufficient stalk width, i.e. dimension d, is needed in order to solve tasks involving more than two classes. Note that d is different from the classical notion of feature channels f. As the result above shows, the latter has no effect on the linear separability of the classes when d equals one, whereas they showed that the former does. So these sheaf neural networks still admit the usage of feature channels; it's just that those do not increase the expressivity of the model in this sense, whereas increasing the stalk dimension does. Next up, they introduce this diagonal invertible family of sheaves, where the restriction maps are diagonal: if you look at the matrix representation, you only see elements on the diagonal and everything else is zero, and again it's invertible, so we have the determinant condition.
And they say: let G be the set of connected graphs with nodes belonging to three or more classes; then for d greater than or equal to the number of classes, this particular class of sheaves has linear separation power. So as long as d is at least the number of classes, we have separation power, again more powerful than the previous families. Then they do the same thing for orthogonal families, where the restriction maps belong to the group of orthogonal matrices, which basically means they only rotate your vectors, they don't do any scaling. And here they show that this class has even more power, in the sense that with stalk dimension two, for example, you can correctly linearly separate up to four classes, so it's even more powerful. The interesting part is that the proof is crazy: they use quaternions and complex numbers for d equals two, and this statement may even hold for higher dimensions, it's just that they did not make the proofs for those other dimensions. If you're curious about the very heavy mathematics, the appendix of the paper is full of proofs of all of the propositions and theorems you saw here, so it's a very mathematically rich paper. So finally, the summary: different sheaf classes give rise to different behaviors of the diffusion process and consequently to different separation capabilities. Taken together, these results show that solving any node classification task can be reduced to performing diffusion with the right sheaf. A very powerful statement. Again: by taking a particular sheaf structure, constructing the corresponding sheaf Laplacian, and just iterating it in this diffusion process, we end up with certain harmonic signals, and depending on the richness of the class you picked, those harmonic signals may give you very nice linear separability of your classes. That's the idea here. Okay, we're slowly wrapping up this paper. There is not a lot more, so maybe pause the video and stretch a bit, I know this was super heavy. Here you can see a previous sheaf neural network devised in one of the earlier papers. It again uses the diffusion process as the workhorse and just adds learnable parameters and non-linearities on top of it to get a more powerful neural network. But what they did there is hard-code the sheaf structure, so they are not learning the sheaf, whereas, as we'll soon see, in this paper the sheaf structure is learned as well. So let's see what this thing does. We have X, which holds our signals, and the dimensionality of that matrix is nd times f: we have nd rows because we have n nodes and the stalk dimensionality is d, and we have potentially f feature channels. That's the structure of X. What W2 does first is a linear transform: it maps these feature vectors into some other representation. Then you take this W1 matrix, which is of dimensionality d times d, and you take a Kronecker product with the n-by-n identity matrix, which is a fancy way of doing the following thing.
You end up with an nd-by-nd matrix, which is what you need if you want to left-multiply this particular structure here. Why does that happen? Well, W1 is a d-by-d matrix, and what the Kronecker product with the n-by-n identity does is the following: you end up with a block-diagonal matrix where, wherever the identity had a one, you copy-paste the d-by-d matrix W1 along the diagonal. That's why we end up with an nd-by-nd matrix. That matrix takes the signals here, so let's say this is one stalk vector of dimension d, and it linearly transforms the stalks, for each of the feature channels. That's what this part does. And finally we apply the sheaf Laplacian, which again does the averaging-over-the-neighbors logic and subtracts that from the node of interest, and then you apply the non-linearity. So again, you're basically using the diffusion logic as your workhorse and building learnable components around it. That's how this network works. Then they say it is natural to call this model a sheaf convolutional network, since when the sheaf Laplacian is the usual normalized graph Laplacian, W1 becomes a scalar and one recovers the GCN of Kipf and Welling, so that's the famous GCN paper. This shows that GCNs, and more generally SCNs, are a non-linear, parametric, and discrete version of sheaf diffusion. In what follows, we analyze how expressive these non-linear models are compared to their base diffusion process, okay? So let's look at a couple of things. First, they define something called the Dirichlet energy. For the sake of time it's not crucial to understand the exact details; what you need to know is that the Dirichlet energy measures how far the current signal is from being a harmonic signal. Then they state the following: if we have a sheaf belonging to this particular symmetric orthogonal class, which is fairly restricted, and the non-linearity is a ReLU or a leaky ReLU, then they can show that the Dirichlet energy of the next state, so of the signal after we apply the layer, is upper bounded by this expression here times the energy of the old signal, okay? In particular, this means that if this whole term is smaller than one, the energy goes down with every step, and as I said, the energy going down means the signal is moving toward being a harmonic signal. So the signal converges exponentially fast to the kernel of the sheaf Laplacian, i.e. to being a harmonic signal.
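Two quick sketches of the pieces we just discussed, the Kronecker trick and an unnormalised form of the Dirichlet energy; the shapes and values are made up, only the structure matters, and the paper itself works with a normalised Laplacian:

```python
import numpy as np

n, d, f = 3, 2, 4                    # n nodes, d-dimensional stalks, f feature channels
X = np.random.randn(n * d, f)        # the (nd) x f signal matrix described above
W1 = np.random.randn(d, d)           # the d x d stalk-mixing matrix

# I_n kron W1 is an nd x nd block-diagonal matrix with W1 copied n times along the
# diagonal, so multiplying by it applies W1 to every node's stalk separately.
via_kron = np.kron(np.eye(n), W1) @ X
per_node = np.vstack([W1 @ X[i*d:(i+1)*d] for i in range(n)])
print(np.allclose(via_kron, per_node))   # True

def dirichlet_energy(L_F, X):
    # trace(X^T L_F X): the summed squared disagreement over all edges and channels.
    # It is zero exactly on harmonic signals.
    return np.trace(X.T @ L_F @ X)

# With the trivial sheaf on the 3-node graph, L_F is just L_graph kron I_d.
A = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
L_graph = np.diag(A.sum(1)) - A
L_F = np.kron(L_graph, np.eye(d))
print(dirichlet_energy(L_F, X) >= -1e-9)   # non-negative, since L_F is positive semi-definite
```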
Okay, and this is usually going to be the case, because this lambda star is usually smaller than one, and because of regularization and everything else done to reduce overfitting, these norms are also going to be small, so the condition is usually satisfied. That means that for this particular class of sheaves, the signal is necessarily going to drift toward being a harmonic signal, and that's not always desirable; we'll see why. Let's look at Proposition 17 first. They mention that as soon as we have an asymmetric transport map, and remember, the previous class had the symmetry constraint, we can find an arbitrarily small linear transformation W that increases the energy, okay? So this means this class is more powerful: it is not forced to monotonically move toward a harmonic signal. And we need that flexibility because we are learning the sheaf. We don't know the ground-truth sheaf, which means we may end up with some bad sheaf whose harmonic signals are not an interesting solution, meaning we would not get linear separability, and that's why we sometimes want to move away, further away, from being a harmonic signal. That's why we want more powerful sheaf classes. They state here that for any connected graph G and epsilon greater than zero, there exists a sheaf that is not symmetric, and that's the important part, such that for an arbitrarily small norm of this W matrix and feature vector X, the energy actually increases instead of decreasing. That's the power of these more expressive sheaf classes. So, as a summary: not only is sheaf diffusion more expressive than heat diffusion, as shown in section four, but sheaf convolutional networks are also more expressive than GCNs, in the sense that they are generally not constrained to decrease the Dirichlet energy. This gives them greater control over their asymptotic behavior than GCNs. And let's finally wrap up this paper, it's becoming a very, very long video. So here is the actual model they propose. Instead of using the sheaf diffusion equation we saw moments ago, let me just find it, just a second, where are you, okay, here it is, this was the diffusion equation we were looking at. Instead of that one, they generalize the diffusion equation and end up with something like this. You can easily show that with particular values of W1 and W2, and for example if you're in the linear regime of the ReLU non-linearity, you recover the original diffusion equation. Then they take this continuous, generalized diffusion equation, discretize it, and end up with the model they actually use. And here it is. It's somewhat similar to the sheaf convolutional network we just saw, with a couple of important differences. We still have the f-by-f matrix mapping the features, we have the d-by-d matrix mapping the stalk vectors into new representations, and we have the sheaf Laplacian. The important detail is this: in contrast to that earlier work, they learn the sheaf, which makes the model applicable to any real-world graph dataset, even in the absence of a known sheaf structure.
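Putting the pieces together, one discretized layer might look roughly like the sketch below. To be clear, this is my reconstruction of the structure described above, not the paper's exact equation: the placement of the non-linearity, the normalization of the Laplacian, and the per-layer rebuilding of the learned sheaf are all simplified here.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sheaf_diffusion_layer(X, L_F, W1, W2, n):
    """Rough residual sketch: X_next = X - sigma( L_F (I_n kron W1) X W2 ).
    L_F would be rebuilt from learned restriction maps at every layer in the paper."""
    return X - relu(L_F @ np.kron(np.eye(n), W1) @ X @ W2)

# Shapes only, with a random positive semi-definite stand-in for the sheaf Laplacian.
n, d, f = 4, 2, 8
B = np.random.randn(n * d, n * d)
L_F = B.T @ B                         # placeholder nd x nd sheaf Laplacian
X = np.random.randn(n * d, f)
W1 = np.random.randn(d, d)            # stalk-mixing matrix
W2 = np.random.randn(f, f)            # feature-channel-mixing matrix
print(sheaf_diffusion_layer(X, L_F, W1, W2, n).shape)   # (nd, f), same as the input
```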
So the previous paper actually hard-coded the sheaf structure: they manually set the restriction maps to certain values and so on, whereas here they are learning the sheaf, because, as we'll see in a couple of seconds, there is an additional learned matrix in there. So we basically have three sets of matrices, W1, W2, and the one that's hidden here, which I'll denote W3, for each layer of this neural network. The second thing they mention is that their model uses a more residual parameterization of the discretized diffusion process, which empirically improves its performance. Again, residual connections usually work; we've known that since 2015 and ResNet. Okay, so here is how the sheaf is actually learned. They parameterize the restriction maps like this: we have this phi, and we feed in the two nodes, and depending on the features of those nodes, it outputs a particular restriction map. So it's a map that looks like this: we concatenate the features of the two nodes, we linearly map them with W, and then we apply some non-linearity, and that's how we end up with a restriction map.
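In code, such a learned restriction map might look something like this rough sketch; the weight shape, the reshape into a d-by-d matrix, and the choice of ReLU are my placeholders, and the paper additionally constrains the output to one of the sheaf classes discussed earlier:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def learned_restriction_map(x_v, x_u, W, d):
    """Sketch of phi(v, u): concatenate the two node features, apply a linear map W,
    apply a non-linearity, and read the result as a d x d restriction map."""
    z = W @ np.concatenate([x_v, x_u])     # W assumed to have shape (d*d, 2*d)
    return relu(z).reshape(d, d)

d = 3
x_v, x_u = np.random.randn(d), np.random.randn(d)
W = np.random.randn(d * d, 2 * d)
F_ve = learned_restriction_map(x_v, x_u, W, d)
print(F_ve.shape)   # (3, 3): one d x d restriction map per incident (node, edge) pair
```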
Now you use these learned restriction maps to form the sheaf Laplacian matrix, which is again that block-type matrix with blocks here and blocks here. And as I said, the parameters are learnable this time, so you are learning the sheaf structure, which is very, very important. Okay, that's pretty much it. They then state that if you let G be a finite graph with features X, and the features are different across different edges, so sufficiently distinct features, and phi is an MLP with sufficient capacity, then phi can learn any sheaf. So this is in all likelihood a very expressive architecture. Finally, they experimented with different sheaf classes, and we saw why one would want to do that: they used the diagonal ones, then the orthogonal ones, and finally the general ones. As you go toward the general ones, the computational cost grows a bit: for the general one you pick up an extra factor that's cubic in d, on top of the usual number-of-nodes-plus-number-of-edges cost that regular GNNs have. So yes, there is a cubic term, but in practice you'll almost never need a huge d; d is going to be a small number, so that factor can be treated as a constant, a bigger one, but still a constant. Here are the results. They show that on pretty much all of the datasets that are more heterophilic, as measured by this homophily level, and you can see 0.11 means a very, very heterophilic dataset, they outperform all of the previous baselines. There are a couple of exceptions: on Chameleon, this GDCN, which was designed specifically for the heterophilic setting, actually outperforms their model, and on homophilic datasets such as Citeseer, PubMed, and Cora here, they are outperformed by some of the other baselines, which have inductive biases that make them perform well in the homophilic setting by design. That's why it's hard to compete with them, even though in theory these networks should be able to learn even better representations and thus outperform those models. But that's, I guess, the same story as with ResNets and VGGs: a VGG can theoretically learn the same thing, but unless you provide those hints, the identity mappings, the skip connections, it will basically not learn the desired representations. Okay, that was pretty much it. Please let me know if you watched this video all the way to the end, I would really appreciate that. If you did, please comment down below, so that I know, for future videos, whether it's worth going into this much detail, because this video took a lot of time to prepare. I literally read a lot of papers to get decent enough knowledge to be comfortable explaining this to you guys. So hopefully you found it useful. If you did, again, please leave a comment down below if you actually watched the video to the very end. If you did, congrats. Consider subscribing to this channel, share the video out, and also consider joining our Discord community. Until next time, bye bye.
[{"start": 0.0, "end": 8.4, "text": " What's up you guys, in this video I'll be covering neural sheaf diffusion, a topological perspective on heterophily and oversmoothing in GNNs."}, {"start": 8.4, "end": 16.8, "text": " This is a paper by Christian Bodnar, Francesco Di Giovanni, Benjamin Chamberlain, Pietro Leo and Michael Bronstein."}, {"start": 16.8, "end": 23.8, "text": " And like you'll probably notice here, like most of you are probably not familiar with this word sheaf,"}, {"start": 23.8, "end": 33.2, "text": " and there is a whole theory called sheaf theory that's coming from the field of algebraic topology, and all of that may sound mouthful, and that's because it is."}, {"start": 33.2, "end": 41.6, "text": " And I'll first try and give you a lot of context and understanding of what this theory is about, and then we'll start digging into the paper."}, {"start": 41.6, "end": 49.8, "text": " So let's start with the abstract, but you'll probably won't understand anything here, but still let's kind of read it out and work our way from there."}, {"start": 49.8, "end": 58.599999999999994, "text": " So cellular sheaves equip graphs with a geometrical structure by assigning vector spaces and linear maps to nodes and edges."}, {"start": 58.599999999999994, "end": 65.0, "text": " Graph neural networks, or GNNs for short, implicitly assume a graph with a trivial underlying sheaf."}, {"start": 65.0, "end": 71.8, "text": " So this is an important point. So the neural graph neural networks just assume a trivial sheaf structure."}, {"start": 71.8, "end": 76.6, "text": " So this choice is reflected in the structure of the graph laplation operator,"}, {"start": 76.6, "end": 85.0, "text": " the properties of the associated diffusion equation, and the characteristics of the convolutional models that discretize this equation."}, {"start": 85.0, "end": 88.19999999999999, "text": " So one example would be the GCN model."}, {"start": 88.19999999999999, "end": 98.6, "text": " So in this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in"}, {"start": 98.6, "end": 109.39999999999999, "text": " the heterophilic settings and their over-smoothing behavior, and you may be familiar with this already, but GNNs are usually not as great for the heterophilic setting."}, {"start": 109.39999999999999, "end": 114.6, "text": " And what I mean by a heterophilic setting is that, okay, let me maybe explain the homophilic setting."}, {"start": 114.6, "end": 120.6, "text": " So in homophilic setting, the nodes which are close together and connected usually share the same label."}, {"start": 120.6, "end": 125.39999999999999, "text": " So there is a saying that goes like, birds of feather flock together."}, {"start": 125.4, "end": 130.6, "text": " So that's basically your homophilic setting. 
So the heterophilic setting is different."}, {"start": 130.6, "end": 135.4, "text": " So that basically means even though if some nodes are connecting, that doesn't mean they have the same label."}, {"start": 135.4, "end": 137.4, "text": " They may have completely different labels."}, {"start": 137.4, "end": 143.8, "text": " Traditionally, GNNs struggled with this setting, and that's what this paper is trying to tackle, as well as over-smooting,"}, {"start": 143.8, "end": 151.4, "text": " which is just a consequence of the fact that how most GNNs work is they take some type of an average of the neighborhood features."}, {"start": 151.4, "end": 160.4, "text": " And by just doing that and iterating that through multiple GNN layers, you end up with feature vectors converging to some value."}, {"start": 160.4, "end": 165.6, "text": " And that's the over-smoothing phenomena, like a very hand-wavy of explaining it."}, {"start": 165.6, "end": 175.70000000000002, "text": " Okay, so let me start by just kind of showing you a couple of snippets from this paper that's called A Gentle Introduction to Sheaves on Graphs."}, {"start": 175.7, "end": 185.0, "text": " And I want to stress the gentle part, because the paper is like, I could maybe understand the first couple of pages, and then it just kind of spiraled out."}, {"start": 185.0, "end": 194.0, "text": " And basically, unless you're a mathematician that has some understanding of topology and algebraic topology, preferably,"}, {"start": 194.0, "end": 197.39999999999998, "text": " you'll have a tough time understanding this gentle introduction."}, {"start": 197.39999999999998, "end": 204.79999999999998, "text": " So even though I consider myself to be very comfortable with various branches of mathematics, I still struggled a lot."}, {"start": 204.8, "end": 206.9, "text": " So let me read you a couple of snippets anyways."}, {"start": 206.9, "end": 214.20000000000002, "text": " So while these concepts may seem even more hopelessly abstract than the already abstract notions of sheaves and morphisms,"}, {"start": 214.20000000000002, "end": 223.8, "text": " the reader should rest assured that the exposition here is far simpler than nearly any other explanation of sheave cohomology or homological algebra."}, {"start": 223.8, "end": 233.4, "text": " Okay. And then they proceed to say the interested reader might consult a book like this one, but a warning is in order."}, {"start": 233.4, "end": 242.0, "text": " A typical prerequisite for reading and understanding a book on homological algebra is to have read and understood a book on homological algebra."}, {"start": 242.0, "end": 248.3, "text": " Okay, so this kind of hopefully gives you some understanding of what's going on in this field is fairly abstract."}, {"start": 248.3, "end": 254.4, "text": " And worst of all, there is not that many like beginner friendly resources out there."}, {"start": 254.4, "end": 261.6, "text": " There is one paper I found which is absolutely great, and that's called A Very Elementary Introduction to Sheaves this time."}, {"start": 261.6, "end": 270.40000000000003, "text": " So we can see the progression here. 
We have the classical theory, then we have the gentle introduction, and only the very elementary introduction will actually work."}, {"start": 270.40000000000003, "end": 281.20000000000005, "text": " And it will give you some mental models of what's going on in this when we mention sheaves and we mention all of these abstract concepts."}, {"start": 281.20000000000005, "end": 283.8, "text": " So I'll get to this a bit later."}, {"start": 283.8, "end": 292.0, "text": " First of all, let me let me start and explain this like starting from a more specific setting, and that's the setting we'll actually care about."}, {"start": 292.0, "end": 294.90000000000003, "text": " So that's the setting here. So we have a graph here."}, {"start": 294.90000000000003, "end": 298.6, "text": " You can see two nodes. We have a node V and a node U."}, {"start": 298.6, "end": 300.7, "text": " We have an edge connecting those two nodes."}, {"start": 300.7, "end": 304.40000000000003, "text": " And what we do is we associate this algebraical structure."}, {"start": 304.40000000000003, "end": 313.3, "text": " So in this particular case, we're going to use a vector space and we're going to associate a vector space to both of these nodes and as well as to the edge."}, {"start": 313.3, "end": 320.0, "text": " So other than the vector spaces, we'll also associate these restriction maps."}, {"start": 320.0, "end": 328.40000000000003, "text": " I mean, not associate, we'll define these restriction maps for for all of the incident node edge pairs."}, {"start": 328.40000000000003, "end": 330.0, "text": " So that's on a high level what's going on."}, {"start": 330.0, "end": 334.7, "text": " So we have a topological object, which is a graph in this in this particular example."}, {"start": 334.7, "end": 336.40000000000003, "text": " So let me take a pen here."}, {"start": 336.4, "end": 343.7, "text": " So this is our topological object and we associate certain algebraic objects with with our topology."}, {"start": 343.7, "end": 354.2, "text": " So in this particular case, we associate I can see here a vector space in another vector space and yet another vector space with an edge with nodes and with nodes here as well."}, {"start": 354.2, "end": 362.7, "text": " And we define these as I said, restriction maps, which are simple linear maps between these different vector spaces."}, {"start": 362.7, "end": 372.9, "text": " So there is a nice basically analogy to if we go to the world of differential geometry and we start thinking about manifolds."}, {"start": 372.9, "end": 377.2, "text": " And in this particular case, we have a 2D sphere embedded into a 3D space."}, {"start": 377.2, "end": 383.59999999999997, "text": " Here we have a similar notion. 
So we have a we have a vector space associated with this particular point on the manifold."}, {"start": 383.59999999999997, "end": 386.3, "text": " So this is the tangent space of that manifold."}, {"start": 386.3, "end": 392.09999999999997, "text": " And this can be roughly treated as the as the vector space associated with this particular node here."}, {"start": 392.1, "end": 399.5, "text": " So here we associate the tangent space with the point on the manifold and basically the optimal transport."}, {"start": 399.5, "end": 406.8, "text": " So not the optimal, sorry, the parallel transport on this sphere is an equivalent of this not equivalent,"}, {"start": 406.8, "end": 412.90000000000003, "text": " but an analog of this restriction map in our in our setup here on the sheaf here."}, {"start": 412.90000000000003, "end": 421.90000000000003, "text": " And you can see so the parallel transport is going to convert a vector from this tangent space to a vector in this other tangent space associated with some other vector space."}, {"start": 421.9, "end": 428.0, "text": " So this is going to convert from this tangent space to some other point on the manifold."}, {"start": 428.0, "end": 433.5, "text": " So there is a kind of loose analogy between these these two."}, {"start": 433.5, "end": 443.5, "text": " So that's maybe worth mentioning. OK, so now let me step back and try and give you a bit more context and what this algebraic topology is about,"}, {"start": 443.5, "end": 450.0, "text": " because this is just a particular specific like setup of algebraic topology that we'll care about."}, {"start": 450.0, "end": 458.3, "text": " So this is a vector from Michael Bronstein's blog, and it kind of introduces the wider terminology."}, {"start": 458.3, "end": 467.7, "text": " So the idea is the following. So here you have like multiple stocks of grain which are like basically tied together."}, {"start": 467.7, "end": 476.3, "text": " Getting back to our setup, these vector spaces here and algebraic structures more generally are called stocks."}, {"start": 476.3, "end": 482.2, "text": " So this is going to be called a stock. And so that's that's the stock part."}, {"start": 482.2, "end": 487.2, "text": " And then the restriction maps are what's tying together all of these stocks."}, {"start": 487.2, "end": 497.3, "text": " So that's why you'll often see this like basically a bundle of grains used to represent ideas from from algebraic from from sheaf theory,"}, {"start": 497.3, "end": 500.3, "text": " which is a branch of this field of algebraic topology."}, {"start": 500.3, "end": 507.1, "text": " And that may or may not help you memorize what the sheaf structure is about."}, {"start": 507.1, "end": 511.90000000000003, "text": " OK, but let me generalize. Let me take even like a step further and let's generalize this even more."}, {"start": 511.90000000000003, "end": 517.9, "text": " So this is how the what the informal definition of a sheaf would be."}, {"start": 517.9, "end": 529.6, "text": " So we assign to each open subset of the underlying topological space a certain algebraic structure and we specify restriction maps between them."}, {"start": 529.6, "end": 534.8000000000001, "text": " So what's a topological space? 
Well, topological space is a very, very general notion."}, {"start": 534.8000000000001, "end": 546.1, "text": " So that can be anything from like a line from a real line to graphs, to manifolds, to simple complexes, which are themselves a generalization of specific types of graphs."}, {"start": 546.1, "end": 550.4, "text": " We then have cellular complexes as another topological object, etc."}, {"start": 550.4, "end": 557.2, "text": " Etc. You can see that by saying this sentence, you basically mean a lot of things. It's very general. It's very abstract."}, {"start": 557.2, "end": 563.9000000000001, "text": " Then they see a certain algebraic structure. And most of you are probably familiar with vector spaces, but not that much with groups or rings."}, {"start": 563.9000000000001, "end": 571.1, "text": " But like those are some other types of objects you could kind of associate with this open subset on your topological space."}, {"start": 571.1, "end": 578.0, "text": " And finally, we specify the restriction maps so you can see that we get our set up by specifying a particular topological space."}, {"start": 578.0, "end": 582.6, "text": " So we care about graphs and we care about vector spaces on those graphs."}, {"start": 582.6, "end": 590.2, "text": " And even more specifically, we're going to be using vector spaces of the same dimensionality for each of our stocks."}, {"start": 590.2, "end": 591.9, "text": " That's additional additional detail."}, {"start": 591.9, "end": 598.4, "text": " OK, so again, briefly, topology is for those of you who are not familiar."}, {"start": 598.4, "end": 603.1, "text": " This is a loose informal definition, but it will maybe help you create some some mental model."}, {"start": 603.1, "end": 611.8000000000001, "text": " So what is important in topology is the connectivity and not the distances and or angles, whereas in geometry, we care about the distances and angles."}, {"start": 611.8, "end": 614.1999999999999, "text": " So the connectivity is the name of the game."}, {"start": 614.1999999999999, "end": 618.9, "text": " And that's why the donut and a coffee mug are the same thing for a topologist."}, {"start": 618.9, "end": 628.0, "text": " So that's like a usual joke in for at least for mathematicians like the like basically a coffee mug and a donut are the same thing."}, {"start": 628.0, "end": 632.3, "text": " But because as you can see here, both of these objects can be morphed into each other."}, {"start": 632.3, "end": 633.9, "text": " So there are these mappings."}, {"start": 633.9, "end": 638.5999999999999, "text": " I think they are called homomorphism, I guess, between like a coffee mug and a donut."}, {"start": 638.6, "end": 643.2, "text": " And the reason is we only have a single hole here, and that's what what what makes a difference."}, {"start": 643.2, "end": 649.2, "text": " So if we had two holes, then we have a different object or if we had zero holes, that would be a different object."}, {"start": 649.2, "end": 654.6, "text": " But everything else can kind of be, as you can see, morphed from from one shape to another."}, {"start": 654.6, "end": 657.2, "text": " And because of that, they are considered equivalent."}, {"start": 657.2, "end": 660.9, "text": " I also mentioned these objects called simplicial complexes."}, {"start": 660.9, "end": 669.5, "text": " If you take a simple graph, whereas by simple graph, what I mean is a graph that doesn't have any self edges nor does it have any multiple edges between two nodes."}, {"start": 
669.5, "end": 671.9, "text": " So this would be this would be a simple graph."}, {"start": 671.9, "end": 679.8, "text": " Something like this would be a simple graph and a generalization of the simple graph is something called simplicial complexes."}, {"start": 679.8, "end": 682.3, "text": " You'll probably see that term thrown around."}, {"start": 682.3, "end": 686.3, "text": " So just to kind of make it to define it here."}, {"start": 686.3, "end": 692.9, "text": " And finally, if we have graphs with self edges and with multiple edges between two nodes,"}, {"start": 692.9, "end": 702.0999999999999, "text": " this is known as a multigraph with self edges and a generalization of these is called cellular complex or cellular complexes."}, {"start": 702.0999999999999, "end": 705.3, "text": " So, yeah, that's just additional context for you."}, {"start": 705.3, "end": 716.0999999999999, "text": " Finally, there is this nice statement from from from Maurice Auslender saying that she theory is the subject in which you do topology horizontally and algebra vertically."}, {"start": 716.1, "end": 724.2, "text": " OK, so that was kind of various terms and concepts related to this like branch of mathematics."}, {"start": 724.2, "end": 728.3000000000001, "text": " And now let me try and make this a bit more concrete."}, {"start": 728.3000000000001, "end": 731.9, "text": " OK, I already mentioned this paper, a very elementary introduction to sheaves."}, {"start": 731.9, "end": 735.2, "text": " And I want to make like a huge shout out to the author of this paper."}, {"start": 735.2, "end": 746.0, "text": " I think that like doing this is incredibly important, especially if you have some very abstract and for most people like alien type of a theory."}, {"start": 746.0, "end": 761.8, "text": " You really want to create a paper where you dump all of your like mental models, all of your visualizations and all of the concrete examples you use to ground some abstract concepts from that theory and make them digestible to your like brain."}, {"start": 761.8, "end": 770.4, "text": " I guess I would really love to see more of these papers for each of like different branches of more abstract pure mathematics."}, {"start": 770.4, "end": 777.8, "text": " And I guess the reason people are not doing this usually is because it's not that rigorous and then maybe your peers will judge you or whatnot."}, {"start": 777.8, "end": 781.4, "text": " But like like I urge you, this is like so important."}, {"start": 781.4, "end": 788.0, "text": " This paper made it much like more visceral for me what she theory is about."}, {"start": 788.0, "end": 792.3, "text": " So let's read this abstract and then let's see a couple of figures from from this particular paper."}, {"start": 792.3, "end": 797.8, "text": " So here we give a simple and approachable explanation to the mathematical objects called sheaves."}, {"start": 797.8, "end": 807.3, "text": " This paper presents tangible and concrete examples so that readers will be able to further explore these concepts on their own in more abstraction and application."}, {"start": 807.3, "end": 820.3, "text": " So for me and I know for for many people, I know it's much easier to start from a concrete example and then abstract away rather than start in the abstract space and then try and understand what's going on and where can I relate this to."}, {"start": 820.3, "end": 823.0, "text": " So I think this is that's why this paper is so important."}, {"start": 823.0, "end": 828.0, "text": " So 
as they say here that the paper this paper is very is a very non rigorous."}, {"start": 828.0, "end": 832.2, "text": " It's a non rigorous loose and extremely basic."}, {"start": 832.2, "end": 835.4, "text": " And I would also add extremely useful introduction to sheaves."}, {"start": 835.4, "end": 844.4, "text": " This is meant to be a guide to gaining intuition, which is so important about sheaves, what they look like and how they work."}, {"start": 844.4, "end": 853.1999999999999, "text": " So after reading this paper, someone can jump into the extremely abstract definitions and examples seen in textbooks with this some idea of what is going on."}, {"start": 853.1999999999999, "end": 871.0, "text": " OK, so let's get back to our particular example of a sheaf where we care about graphs as that underlying topological object and where we care about vector spaces with the same number of dimensions as our particular algebraic space object."}, {"start": 871.0, "end": 879.6, "text": " OK, so here we can see that we have, for example, this this particular stock for this node is going to have two dimensions."}, {"start": 879.6, "end": 883.9, "text": " It's going to be two one and this other stock is going to be minus two ten three."}, {"start": 883.9, "end": 889.0, "text": " So in this particular example, they did use different dimensionalities, but that does not matter that much."}, {"start": 889.0, "end": 892.7, "text": " Then we have they've specified a particular restriction maps."}, {"start": 892.7, "end": 896.1, "text": " You can see they're just simple linear maps as represented by this matrix."}, {"start": 896.1, "end": 897.6, "text": " We have a different one here."}, {"start": 897.6, "end": 902.2, "text": " And if we take this vector and we map it, we get we end up with this vector here."}, {"start": 902.2, "end": 904.7, "text": " OK, so let me just change the color."}, {"start": 904.7, "end": 906.4, "text": " So we end up with this one here."}, {"start": 906.4, "end": 913.8000000000001, "text": " And if we take this vector and we map it with its restriction map, we end up with this particular vector here."}, {"start": 913.8000000000001, "end": 921.7, "text": " OK, and so the important thing I want you to notice here is that these two agree, which is not the case in general, obviously."}, {"start": 921.7, "end": 927.5, "text": " And this is super like this is a vital idea in this in this sheaf theory."}, {"start": 927.5, "end": 935.4, "text": " This agreement and to just to make this a bit more concrete, let me give you a particular interpretation of what this could mean."}, {"start": 935.4, "end": 942.6, "text": " So imagine this graph represents a group of people, a social network, and imagine these stocks here."}, {"start": 942.6, "end": 952.0, "text": " So the vectors associated with our nodes, which themselves are basically represent certain individuals in the society."}, {"start": 952.0, "end": 955.0, "text": " So this stock is just opinion space."}, {"start": 955.0, "end": 958.3, "text": " So you can imagine of this as an opinion vector."}, {"start": 958.3, "end": 964.3, "text": " Basically, minus two would mean this particular person thinks negatively about some topic X."}, {"start": 964.3, "end": 966.3, "text": " I don't know, whatever like politics."}, {"start": 966.3, "end": 975.7, "text": " Then this means that this same person thinks extremely positively of some other topic Y and so on, etc."}, {"start": 975.7, "end": 978.1, "text": " So we have three for some other 
topics."}, {"start": 978.1, "end": 978.8, "text": " Same here."}, {"start": 978.8, "end": 989.1999999999999, "text": " And what this this model does is by mapping by mapping this opinion vector into this so-called discourse space."}, {"start": 989.1999999999999, "end": 994.1999999999999, "text": " So this thing here, they refer to this as a discourse space."}, {"start": 994.1999999999999, "end": 995.8, "text": " Here we achieved an agreement."}, {"start": 995.8, "end": 1004.5999999999999, "text": " So even though the personal private opinion vectors are not the same after we express them, we get to an agreement."}, {"start": 1004.5999999999999, "end": 1007.6999999999999, "text": " And that's pretty much the only way how a society can function."}, {"start": 1007.7, "end": 1013.0, "text": " We have to roughly have similar like agreements across across certain topics."}, {"start": 1013.0, "end": 1014.6, "text": " Otherwise, everything explodes."}, {"start": 1014.6, "end": 1024.2, "text": " And so this could be one particular interpretation of what is going on and why it is important to find such vectors here."}, {"start": 1024.2, "end": 1032.4, "text": " So such vectors, such stocks, such that we have an agreement across all of the edges of our graph."}, {"start": 1032.4, "end": 1033.6000000000001, "text": " So that's the idea there."}, {"start": 1033.6, "end": 1044.8, "text": " Okay, so now again, let me take a step back and just make this a bit more general because I assume that most of you have no idea of what algebraic topology is."}, {"start": 1044.8, "end": 1050.1999999999998, "text": " And so that's why I'm giving you just a lot of examples and I also link a bunch of resources down in the video description."}, {"start": 1050.1999999999998, "end": 1051.6999999999998, "text": " So do check them out."}, {"start": 1051.6999999999998, "end": 1055.8, "text": " Okay, let me give you another example and then we'll get to the rest of the paper."}, {"start": 1055.8, "end": 1058.0, "text": " And hopefully it's going to be easier to understand after this."}, {"start": 1058.0, "end": 1061.0, "text": " So for a topological space X."}, {"start": 1061.0, "end": 1063.1999999999998, "text": " So we are again now more general."}, {"start": 1063.2, "end": 1064.4, "text": " We're not talking about graphs."}, {"start": 1064.4, "end": 1068.0, "text": " We're talking about general topological spaces."}, {"start": 1068.0, "end": 1075.5, "text": " A sheaf of a billion groups on X assigns each open set of X to an abelian group."}, {"start": 1075.5, "end": 1086.5, "text": " Okay, so this time our abstract object is in a billion group and we have some open set of a topological space as our underlying horizontal object."}, {"start": 1086.5, "end": 1088.8, "text": " So which a billion group you ask."}, {"start": 1088.8, "end": 1097.3, "text": " So here it will be the group of continuous functions from U, which is the open set to R."}, {"start": 1097.3, "end": 1102.3, "text": " That is for an open set U, which is a subset of our topological space."}, {"start": 1102.3, "end": 1105.6, "text": " The stock of U is the group this one."}, {"start": 1105.6, "end": 1113.2, "text": " So it's a set of functions F of continuous functions F that map from that open subset to R."}, {"start": 1113.2, "end": 1115.8, "text": " Okay, just to make this a bit more concrete."}, {"start": 1115.8, "end": 1118.8999999999999, "text": " Here is a particular example of what that means."}, {"start": 1118.8999999999999, "end": 1124.8, "text": " 
We have an open subset here U as you can see here."}, {"start": 1124.8, "end": 1127.1, "text": " And we you can see like us."}, {"start": 1127.1, "end": 1130.8, "text": " This is a set of continuous functions defined there."}, {"start": 1130.8, "end": 1138.5, "text": " So this thing here would be the stock and by picking so picking a particular function from this space here."}, {"start": 1138.5, "end": 1144.3999999999999, "text": " So maybe picking this one is the same thing as picking a particular vector here."}, {"start": 1144.4, "end": 1147.4, "text": " Okay, we're just dealing with different objects."}, {"start": 1147.4, "end": 1151.0, "text": " But like this isn't a concrete instantiation on the vector space."}, {"start": 1151.0, "end": 1155.1000000000001, "text": " And this is a concrete instantiation in this particular a billion group."}, {"start": 1155.1000000000001, "end": 1164.0, "text": " Okay, so so along with the point wise addition, this is this is basically a group and for an open set V,"}, {"start": 1164.0, "end": 1173.7, "text": " which is a subset of U the restriction map from U to V as denoted like this is actually really simple for an"}, {"start": 1173.7, "end": 1178.9, "text": " element F of F of U, which is a continuous function."}, {"start": 1178.9, "end": 1184.5, "text": " This restriction map is the restriction of F from U to V."}, {"start": 1184.5, "end": 1185.9, "text": " Okay, so I'm going to stop there."}, {"start": 1185.9, "end": 1187.4, "text": " So what it means is the following."}, {"start": 1187.4, "end": 1194.8, "text": " So after we apply this restriction map, we end up so you basically map the following thing is you still map."}, {"start": 1194.8, "end": 1196.5, "text": " So let me take a color like this."}, {"start": 1196.5, "end": 1199.8, "text": " So you'll map this thing."}, {"start": 1199.8, "end": 1204.7, "text": " On to this thing here. 
So basically you just constrain."}, {"start": 1204.7, "end": 1207.2, "text": " So before that your function was defined."}, {"start": 1207.2, "end": 1210.2, "text": " This was a domain of your function after this restriction map."}, {"start": 1210.2, "end": 1217.3, "text": " You just redefine it here, but you have the same you have the same function in this in this particular domain."}, {"start": 1217.3, "end": 1221.3, "text": " So it's kind of trivial, but we'll see why why that why that makes it makes sense."}, {"start": 1221.3, "end": 1229.6, "text": " But again, by doing this, what we've done is we took this function, which is in this analogy like this particular vector."}, {"start": 1229.6, "end": 1233.1999999999998, "text": " And then we mapped it all the way to here."}, {"start": 1233.1999999999998, "end": 1235.1, "text": " So this time we don't have this intermediate step."}, {"start": 1235.1, "end": 1236.1999999999998, "text": " We just map it."}, {"start": 1236.1999999999998, "end": 1239.6999999999998, "text": " We just find the inverse mapping here and we map from here to here."}, {"start": 1239.6999999999998, "end": 1241.1, "text": " Okay."}, {"start": 1241.1, "end": 1245.1999999999998, "text": " Now with this definition, let's see how we define the agreement this time."}, {"start": 1245.1999999999998, "end": 1254.5, "text": " So here you can see here is a bunch of a bunch of open sets, which you can see those were nodes in our previous example."}, {"start": 1254.5, "end": 1259.3, "text": " And for each of those open sets, we associated a particular function."}, {"start": 1259.3, "end": 1264.6, "text": " So that's a concrete instance from our group, which is the algebraic structure we care about in this example."}, {"start": 1264.6, "end": 1269.8, "text": " So that will be a particular vector in this for this particular node."}, {"start": 1269.8, "end": 1274.5, "text": " We'd associate a particular vector here for this particular open set."}, {"start": 1274.5, "end": 1276.8, "text": " We associate a particular function."}, {"start": 1276.8, "end": 1278.6, "text": " So how do we get to an agreement?"}, {"start": 1278.6, "end": 1283.2, "text": " So here is what the agreement looks like for this particular sheath."}, {"start": 1283.2, "end": 1286.3999999999999, "text": " We basically get a continuous function."}, {"start": 1286.4, "end": 1292.9, "text": " And if you think about it, it makes sense because so after you might say, let's take let's take one example."}, {"start": 1292.9, "end": 1299.5, "text": " Let's take, for example, this this purple open space from here to here."}, {"start": 1299.5, "end": 1306.9, "text": " And if we do the restriction map, if we apply the mapping to this other orange space, so this one here, let me just change the color."}, {"start": 1306.9, "end": 1311.3000000000002, "text": " So if we map there, we know that the function needs to remain the same."}, {"start": 1311.3000000000002, "end": 1314.4, "text": " So that's that's how we define our mapping is going to be here."}, {"start": 1314.4, "end": 1320.4, "text": " And the only way that this small function agrees with this function here is that they perfectly align."}, {"start": 1320.4, "end": 1324.9, "text": " And because you can have multiple of these open sets here which intersect with each other,"}, {"start": 1324.9, "end": 1332.6000000000001, "text": " the only solution where we have this agreement for this particular sheath is if we have a continuous function on this on this space."}, 
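A small numerical sketch of the same idea for the sheaf of continuous functions, assuming we represent each function by its samples on a grid; the intervals and functions below are made up for illustration. Restriction is just re-evaluating the function on the smaller open set, and agreement on the overlap is exactly what lets the local pieces glue into one continuous function.

```python
import numpy as np

xs = np.linspace(0.0, 3.0, 301)          # a grid on which we sample functions
U = (xs > 0.0) & (xs < 2.0)              # open interval (0, 2)
V = (xs > 1.0) & (xs < 3.0)              # open interval (1, 3)
overlap = U & V                          # their intersection (1, 2)

f_U = lambda x: np.sin(x)                # a section over U
f_V_good = lambda x: np.sin(x)           # a section over V that agrees on the overlap
f_V_bad = lambda x: np.sin(x) + 1.0      # a section over V that does not agree

def restrict(f, mask):
    # The restriction map: the same function, re-evaluated on the smaller open set.
    return f(xs[mask])

print(np.allclose(restrict(f_U, overlap), restrict(f_V_good, overlap)))  # True: they glue
print(np.allclose(restrict(f_U, overlap), restrict(f_V_bad, overlap)))   # False: no agreement
```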
{"start": 1332.6000000000001, "end": 1335.8000000000002, "text": " OK, so hopefully that gives you some understanding of what's going on."}, {"start": 1335.8000000000002, "end": 1338.5, "text": " I did get into a bit more nitty gritty details than I needed."}, {"start": 1338.5, "end": 1341.5, "text": " But like I just wanted you to understand the whole point."}, {"start": 1341.5, "end": 1349.7, "text": " I want you to take out from this explanation is agreement in in in sheath theory is a very, very important concept."}, {"start": 1349.7, "end": 1362.7, "text": " So agreement here leads to a very nice like continuous function agreement here leads to people and society being a functional society instead of just breaking apart because people are polarized."}, {"start": 1362.7, "end": 1368.0, "text": " OK, guys, let me quickly give you one more take of how this works in case you did not understand it."}, {"start": 1368.0, "end": 1369.5, "text": " I'm going to make it much more pictorial."}, {"start": 1369.5, "end": 1372.9, "text": " So imagine we focus on on these two open sets."}, {"start": 1372.9, "end": 1375.8, "text": " So we have this open set here."}, {"start": 1375.8, "end": 1377.3, "text": " And we have this orange one here."}, {"start": 1377.3, "end": 1385.5, "text": " OK, so imagine instead if on this orange open space, we had a function defined like this."}, {"start": 1385.5, "end": 1388.1, "text": " So we pick a value from our stock and it looks like this."}, {"start": 1388.1, "end": 1394.7, "text": " OK, and imagine for that this this bigger open subset, we have exactly this function here."}, {"start": 1394.7, "end": 1396.1, "text": " So this is the value we took."}, {"start": 1396.1, "end": 1401.1999999999998, "text": " So after we take so we take the function from you from this you three."}, {"start": 1401.1999999999998, "end": 1407.6999999999998, "text": " So from this bigger open space, we apply the restriction map onto this smaller open space."}, {"start": 1407.6999999999998, "end": 1408.6, "text": " And what do we get?"}, {"start": 1408.6, "end": 1414.6, "text": " Well, we get this we get this thing here because that's how we define the restriction map."}, {"start": 1414.6, "end": 1420.1999999999998, "text": " And you can see here that there is no agreement between that and this thing here."}, {"start": 1420.1999999999998, "end": 1425.6, "text": " OK, and the only way we can get an agreement is if we have a continuous function like this."}, {"start": 1425.6, "end": 1430.6999999999998, "text": " And yeah, hopefully that now makes it a bit more visceral and understandable."}, {"start": 1430.6999999999998, "end": 1432.3999999999999, "text": " Let's get back to the paper."}, {"start": 1432.3999999999999, "end": 1437.1, "text": " Let me briefly just introduce you to the formal terminology here."}, {"start": 1437.1, "end": 1439.3, "text": " So this is how it looks like."}, {"start": 1439.3, "end": 1446.1999999999998, "text": " Definition one, a cellular sheaf, which is a tuple containing a graph and this big F,"}, {"start": 1446.1999999999998, "end": 1453.5, "text": " which is the sheaf structure on an undirected graph G, which consists of a set of vertices"}, {"start": 1453.5, "end": 1463.3, "text": " and set of edges consists of so one a vector space F of V for each node from the basically set of nodes,"}, {"start": 1463.3, "end": 1469.5, "text": " a vector space F of E for each edge in the set of edges."}, {"start": 1469.5, "end": 1475.5, "text": " And these are called stocks 
again, a linear map as denoted like this."}, {"start": 1475.5, "end": 1481.8, "text": " So it maps so that the this means edge E, which is incident."}, {"start": 1481.8, "end": 1487.8, "text": " So that's this this this funny symbol with a triangle stuff."}, {"start": 1487.8, "end": 1489.5, "text": " So this means they are incident."}, {"start": 1489.5, "end": 1492.1, "text": " So this edge is incident to this node."}, {"start": 1492.1, "end": 1497.5, "text": " So it maps from the stock of V onto the stock of E."}, {"start": 1497.5, "end": 1501.7, "text": " So we define that for each incident node node edge pair."}, {"start": 1501.7, "end": 1503.8, "text": " So this is what what the sheaf is."}, {"start": 1503.8, "end": 1509.3, "text": " You define these maps, you define these spaces and you've got a sheaf."}, {"start": 1509.3, "end": 1513.2, "text": " Okay, so again, I mentioned these are the stocks."}, {"start": 1513.2, "end": 1514.6, "text": " These are the restriction maps."}, {"start": 1514.6, "end": 1516.6, "text": " The linear map is a restriction map."}, {"start": 1516.6, "end": 1519.2, "text": " And now we have this concept of a zero coaching."}, {"start": 1519.2, "end": 1528.3, "text": " So zero coaching as denoted with C subscript zero is basically defined as this O plus,"}, {"start": 1528.3, "end": 1531.1, "text": " which is a direct sum of different stocks."}, {"start": 1531.1, "end": 1538.2, "text": " And you can think of this as basically a concatenation of these various vector vector spaces."}, {"start": 1538.2, "end": 1544.9, "text": " And then you can define a signal, an element of that zero coaching this bold X here,"}, {"start": 1544.9, "end": 1549.2, "text": " which is basically a collection of stocks for each of the nodes."}, {"start": 1549.2, "end": 1556.3, "text": " Okay, that's one important space you need to to to have in mind."}, {"start": 1556.3, "end": 1564.0, "text": " Let's continue on a particularly important subspace of this zero coaching space is the space of global sections."}, {"start": 1564.0, "end": 1568.6, "text": " And we've seen this and you'll understand what this exactly means in a second."}, {"start": 1568.6, "end": 1571.9, "text": " So this is the this is the agreement thing I was I was mentioning."}, {"start": 1571.9, "end": 1580.6, "text": " So this global section space denoted as H zero, which stands for zero cohomology."}, {"start": 1580.6, "end": 1585.0, "text": " That's also how you'll see this space referred to."}, {"start": 1585.0, "end": 1593.0, "text": " So it's basically a set of these signals of these axes such that and those axes belong to this zero coaching."}, {"start": 1593.0, "end": 1600.1, "text": " So it's a set of these axes such that we have an agreement along every single edge of our graph."}, {"start": 1600.1, "end": 1610.6, "text": " OK, that's the that's the idea. 
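As a quick sketch of these definitions, here is a hypothetical check of whether a signal from the zero cochain space lies in the space of global sections H zero, i.e. whether it agrees along every edge; the graph, the stalk dimension, and the random restriction maps are assumptions for illustration only.

```python
import numpy as np

# A path graph 0 - 1 - 2 with d-dimensional stalks on every node and edge.
edges = [(0, 1), (1, 2)]
num_nodes, d = 3, 2

# One restriction map F_{v <| e} per incident (node, edge) pair; random for illustration.
rng = np.random.default_rng(0)
F = {(v, e): rng.standard_normal((d, d))
     for e, (u, w) in enumerate(edges) for v in (u, w)}

def in_global_sections(x, tol=1e-8):
    # x has shape (num_nodes, d): one stalk vector per node, i.e. a 0-cochain.
    # It lies in H^0 exactly when the two restrictions agree along every edge.
    return all(np.allclose(F[(u, e)] @ x[u], F[(w, e)] @ x[w], atol=tol)
               for e, (u, w) in enumerate(edges))

print(in_global_sections(np.zeros((num_nodes, d))))             # True: zero always agrees
print(in_global_sections(rng.standard_normal((num_nodes, d))))  # almost surely False
```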
{"start": 1610.6, "end": 1617.3, "text": " So again, one restriction map acts on node V's feature vector and a different restriction map on node U's feature vector,"}, {"start": 1617.3, "end": 1620.5, "text": " and we require equality between the two here."}, {"start": 1620.5, "end": 1632.1, "text": " That's when we achieve the agreement, and as they mention here, this space contains those private opinions X for which all neighbors agree with each other in the discourse space."}, {"start": 1632.1, "end": 1640.6, "text": " So that would be one particular interpretation of what this whole abstract global section space means."}, {"start": 1640.6, "end": 1645.5, "text": " So, again, remember this space of global sections."}, {"start": 1645.5, "end": 1656.1, "text": " Those are going to be all of the possible signals on our nodes that lead to agreement across all of the edges of our graph."}, {"start": 1656.1, "end": 1658.5, "text": " That's it. As simple as that."}, {"start": 1658.5, "end": 1661.0, "text": " OK, let's continue here."}, {"start": 1661.0, "end": 1663.0, "text": " Definition two. And this is very important."}, {"start": 1663.0, "end": 1665.0, "text": " I'm going to spend some time here."}, {"start": 1665.0, "end": 1674.9, "text": " So the sheaf Laplacian of a sheaf is a linear map from the space of zero cochains to the space of zero cochains, defined node-wise"}, {"start": 1674.9, "end": 1676.3000000000002, "text": " via this equation."}, {"start": 1676.3000000000002, "end": 1682.9, "text": " And now this might seem scary, but it's not, because if you know anything about graph ML,"}, {"start": 1682.9, "end": 1686.7, "text": " if you're familiar with GCNs, for example,"}, {"start": 1686.7, "end": 1691.9, "text": " this is just a generalization of the very same thing we already saw there."}, {"start": 1691.9, "end": 1698.5, "text": " If you're not familiar with those papers and with that theory, go check out my Graph ML playlist."}, {"start": 1698.5, "end": 1703.2, "text": " It's going to be immensely helpful for understanding this video."}, {"start": 1703.2, "end": 1706.7, "text": " So imagine these restriction maps were just scalars."}, {"start": 1706.7, "end": 1718.3, "text": " If this was a one, this was a one, and this was a one, you would just have a sum of differences between the central node V and its neighbors."}, {"start": 1718.3, "end": 1722.7, "text": " And that's something we've seen in the graph Laplacian matrix."}, {"start": 1722.7, "end": 1730.3, "text": " So if you multiply your set of feature vectors with the graph Laplacian, you end up with exactly such an equation."}, {"start": 1730.3, "end": 1732.8, "text": " So I'm going to make this a bit more concrete."}, {"start": 1732.8, "end": 1736.5, "text": " So let me give you a particular example, and let's take our time here, because this is very important."}, {"start": 1736.5, "end": 1740.7, "text": " This sheaf Laplacian is one of the crucial objects in this paper."}, {"start": 1740.7, "end": 1745.6, "text": " So let me give you a more concrete example here."}, {"start": 1745.6, "end": 1747.7, "text": " So let's imagine we have a graph like this."}, {"start": 1747.7, "end": 1751.8999999999999, "text": " We have three nodes and they're connected like this."}, {"start": 1751.8999999999999, "end": 1754.6, "text": " So this is node one. 
This is no two."}, {"start": 1754.6, "end": 1760.3999999999999, "text": " This is no three. Let's form a loplation matrix basically."}, {"start": 1760.4, "end": 1762.8000000000002, "text": " So loplation is defined the following way."}, {"start": 1762.8000000000002, "end": 1764.3000000000002, "text": " The plation is usually defined."}, {"start": 1764.3000000000002, "end": 1766.3000000000002, "text": " There are multiple ways how you can define it."}, {"start": 1766.3000000000002, "end": 1770.0, "text": " But this is one way. This is so-called I think combinatorial loplation."}, {"start": 1770.0, "end": 1776.0, "text": " So we take the degree matrix D and you subtract away the adjacency matrix A."}, {"start": 1776.0, "end": 1778.7, "text": " So what's the what's a let's make that more concrete."}, {"start": 1778.7, "end": 1781.2, "text": " So let me give you for this particular example."}, {"start": 1781.2, "end": 1785.0, "text": " This means the following. So D is going to be the following matrix."}, {"start": 1785.0, "end": 1787.2, "text": " So it's going to be a three by three matrix."}, {"start": 1787.2, "end": 1792.8, "text": " It's a diagonal matrix and on each of the diagonals you basically contain"}, {"start": 1792.8, "end": 1796.0, "text": " how many neighbors does that particular node has."}, {"start": 1796.0, "end": 1799.9, "text": " So one has only one neighbor and we're going to have zeros here."}, {"start": 1799.9, "end": 1803.3, "text": " So we're going to only have basically elements on the on the diagonal."}, {"start": 1803.3, "end": 1805.7, "text": " So let me just kind of fill that in."}, {"start": 1805.7, "end": 1807.3, "text": " So what does what goes here?"}, {"start": 1807.3, "end": 1809.3, "text": " So here we take a look at node two."}, {"start": 1809.3, "end": 1810.5, "text": " It has only one neighbor."}, {"start": 1810.5, "end": 1812.8, "text": " That's node three. 
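The sketch below carries this three-node example through to the end in numpy: it builds the degree matrix, the adjacency matrix, the combinatorial Laplacian L = D - A, and multiplies it with a feature vector (the feature values here are arbitrary, chosen only for illustration).

```python
import numpy as np

# Nodes 1 and 2 are each connected only to node 3 (0-indexed below as 0, 1, 2).
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]])
D = np.diag(A.sum(axis=1))      # degree matrix: diag(1, 1, 2)
L = D - A                       # combinatorial graph Laplacian

x = np.array([4.0, 7.0, 5.0])   # one scalar feature per node (arbitrary values)
print(L)                        # [[ 1  0 -1], [ 0  1 -1], [-1 -1  2]]
print(L @ x)                    # last entry: 2*x3 - x1 - x2, as derived in the text
```

With every restriction map set to the scalar one, the sheaf Laplacian reduces to exactly this matrix, which is the comparison made a bit further down.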
That's why we input one here."}, {"start": 1812.8, "end": 1815.0, "text": " And finally node three has two neighbors."}, {"start": 1815.0, "end": 1817.9, "text": " And that's why the degree of this node is two."}, {"start": 1817.9, "end": 1819.7, "text": " Okay, so we have two."}, {"start": 1819.7, "end": 1821.0, "text": " So that's the degree matrix."}, {"start": 1821.0, "end": 1823.5, "text": " The A matrix is just your adjacency matrix."}, {"start": 1823.5, "end": 1826.4, "text": " So basically your connectivity who is connected to whom."}, {"start": 1826.4, "end": 1829.3, "text": " So this is going to be a."}, {"start": 1829.3, "end": 1832.3, "text": " So node one is only connected to node three."}, {"start": 1832.3, "end": 1835.6, "text": " That's why we're going to have one here and we have zero and zero here."}, {"start": 1835.6, "end": 1837.8, "text": " No two is only connected to node three again."}, {"start": 1837.8, "end": 1840.0, "text": " So we have one here zero zero."}, {"start": 1840.0, "end": 1843.2, "text": " And finally node three is connected to node one as you can see here"}, {"start": 1843.2, "end": 1847.5, "text": " and it's connected to node two and that's why we add ones here and we have zero here."}, {"start": 1847.5, "end": 1850.1000000000001, "text": " Finally in order to get the loplation matrix."}, {"start": 1850.1000000000001, "end": 1851.2, "text": " Let's see what we do."}, {"start": 1851.2, "end": 1854.5, "text": " We just subtract away these two and we get the following thing."}, {"start": 1854.5, "end": 1858.0, "text": " We basically end up with this matrix."}, {"start": 1858.0, "end": 1859.6000000000001, "text": " Let me just kind of make it here."}, {"start": 1859.6000000000001, "end": 1867.6000000000001, "text": " So we have one one two and then we have minus one here minus one here minus one minus one."}, {"start": 1867.6000000000001, "end": 1869.1000000000001, "text": " And we have zeros elsewhere."}, {"start": 1869.1000000000001, "end": 1872.2, "text": " Okay, so why is this important?"}, {"start": 1872.2, "end": 1873.2, "text": " Why is this interesting?"}, {"start": 1873.2, "end": 1881.2, "text": " Well, let's see what happens if we multiply the set of feature vectors of node feature vectors with this loplation."}, {"start": 1881.2, "end": 1882.8, "text": " So this is L."}, {"start": 1882.8, "end": 1884.6000000000001, "text": " Let's construct the X."}, {"start": 1884.6000000000001, "end": 1888.5, "text": " So if we associate let's let's say for the sake of simplicity,"}, {"start": 1888.5, "end": 1895.0, "text": " we only have basically scalars associated with all of these like nodes."}, {"start": 1895.0, "end": 1898.7, "text": " So we have scalar X one X two X three."}, {"start": 1898.7, "end": 1901.4, "text": " Let's kind of stick them together here."}, {"start": 1901.4, "end": 1902.7, "text": " We have X one."}, {"start": 1902.7, "end": 1903.9, "text": " We have X two."}, {"start": 1903.9, "end": 1905.3000000000002, "text": " We have X three."}, {"start": 1905.3000000000002, "end": 1907.1000000000001, "text": " What do we get if we multiply these?"}, {"start": 1907.1000000000001, "end": 1908.8000000000002, "text": " So what do we get here?"}, {"start": 1908.8000000000002, "end": 1912.4, "text": " So we basically have and I'm going to just ignore the first two rows."}, {"start": 1912.4, "end": 1914.0, "text": " I'm just going to focus on this one."}, {"start": 1914.0, "end": 1916.7, "text": " So we have two X three."}, {"start": 1916.7, "end": 
1921.8000000000002, "text": " Okay, minus X one minus X two."}, {"start": 1921.8000000000002, "end": 1925.6000000000001, "text": " And now for the moment,"}, {"start": 1925.6, "end": 1932.1999999999998, "text": " I've been trying to get to is let's make a comparison between this thing here."}, {"start": 1932.1999999999998, "end": 1934.0, "text": " And this thing here."}, {"start": 1934.0, "end": 1935.3999999999999, "text": " They have the same structure."}, {"start": 1935.3999999999999, "end": 1939.1, "text": " This thing is just a generalization of this thing here."}, {"start": 1939.1, "end": 1939.8999999999999, "text": " So as I said,"}, {"start": 1939.8999999999999, "end": 1943.3999999999999, "text": " if these restriction maps were just a simple scalars,"}, {"start": 1943.3999999999999, "end": 1947.8999999999999, "text": " so that means if our sheath structure is trivial in that particular case,"}, {"start": 1947.8999999999999, "end": 1950.6999999999998, "text": " we fall back to a graph loplation."}, {"start": 1950.6999999999998, "end": 1953.8, "text": " So let's imagine this is one."}, {"start": 1953.8, "end": 1954.5, "text": " This is one."}, {"start": 1954.5, "end": 1955.5, "text": " This is one."}, {"start": 1955.5, "end": 1958.2, "text": " And let's imagine we focus on node V."}, {"start": 1958.2, "end": 1959.7, "text": " V equals three."}, {"start": 1959.7, "end": 1964.0, "text": " So we end up with a sum where V is equal to three."}, {"start": 1964.0, "end": 1966.2, "text": " And we look at the incident edges,"}, {"start": 1966.2, "end": 1969.1, "text": " which are going to be this edge and this edge here."}, {"start": 1969.1, "end": 1970.2, "text": " So we have a sum."}, {"start": 1970.2, "end": 1975.5, "text": " So we have here we have X three minus first edge."}, {"start": 1975.5, "end": 1977.4, "text": " So minus X one."}, {"start": 1977.4, "end": 1978.9, "text": " And then we have additional term."}, {"start": 1978.9, "end": 1983.3, "text": " So plus X three minus this time X two."}, {"start": 1983.3, "end": 1987.3, "text": " And as you can see, we end up with two X three minus X one minus X two,"}, {"start": 1987.3, "end": 1990.1, "text": " which is the same thing as this thing here."}, {"start": 1990.1, "end": 1995.2, "text": " Okay, so hopefully now the connection is clear."}, {"start": 1995.2, "end": 1997.1, "text": " By introducing this sheath structure,"}, {"start": 1997.1, "end": 2001.6, "text": " we generalized our graph loplation into this thing called sheath loplation."}, {"start": 2001.6, "end": 2004.1, "text": " And that's everything you need to know for now."}, {"start": 2004.1, "end": 2006.3, "text": " Let's see a bit more information about sheath loplation."}, {"start": 2006.3, "end": 2009.5, "text": " So it's an ND times ND matrix."}, {"start": 2009.5, "end": 2011.1, "text": " Why ND times ND?"}, {"start": 2011.1, "end": 2013.5, "text": " Well, in general, I mean, not in general."}, {"start": 2013.5, "end": 2017.3999999999999, "text": " In this particular case where we have a trivial sheath structure,"}, {"start": 2017.3999999999999, "end": 2019.3999999999999, "text": " then we have N times N."}, {"start": 2019.3999999999999, "end": 2021.1, "text": " So we saw that right here."}, {"start": 2021.1, "end": 2024.6999999999998, "text": " So it's D because the dimensionality of a stock is D."}, {"start": 2024.6999999999998, "end": 2026.8, "text": " That's why we have ND times ND."}, {"start": 2026.8, "end": 2028.5, "text": " Okay."}, {"start": 2028.5, "end": 
2031.8999999999999, "text": " So then they say when the vector spaces are set to R,"}, {"start": 2031.8999999999999, "end": 2034.6999999999998, "text": " so the dimensionality is equal to one."}, {"start": 2034.6999999999998, "end": 2038.0, "text": " And the linear maps are identity maps over R."}, {"start": 2038.0, "end": 2042.5, "text": " The underlying sheath is trivial and one recovers the well-known N times N"}, {"start": 2042.5, "end": 2045.5, "text": " graph loplation matrix and its normalized version."}, {"start": 2045.5, "end": 2047.6, "text": " And we'll care about the normalized version."}, {"start": 2047.6, "end": 2049.6, "text": " And the same thing for,"}, {"start": 2049.6, "end": 2053.7, "text": " this is the normalized version of the graph of the sheath loplation."}, {"start": 2053.7, "end": 2057.6, "text": " And what this sentence says is basically the thing I just explained."}, {"start": 2057.6, "end": 2058.9, "text": " We just made this connection."}, {"start": 2058.9, "end": 2060.1, "text": " This is a special case."}, {"start": 2060.1, "end": 2063.1, "text": " The graph loplation is a special case of sheath loplation."}, {"start": 2063.1, "end": 2064.2, "text": " Okay."}, {"start": 2064.2, "end": 2066.4, "text": " Next up, they say the following, and this is important."}, {"start": 2066.4, "end": 2072.6, "text": " Harmonic signals have perfect agreement of opinions with respect to the sheath"}, {"start": 2072.6, "end": 2079.3, "text": " structure, i.e. if you multiply those signals with the sheath loplation,"}, {"start": 2079.3, "end": 2082.4, "text": " you end up having a zero vector."}, {"start": 2082.4, "end": 2083.5, "text": " And that makes sense, right?"}, {"start": 2083.5, "end": 2085.2000000000003, "text": " So if we have an agreement here,"}, {"start": 2085.2000000000003, "end": 2090.8, "text": " if X1 and X2 are in agreement with X3, then we'll have zeros here."}, {"start": 2090.8, "end": 2093.5, "text": " So if we're in agreement, all of these are going to be,"}, {"start": 2093.5, "end": 2094.5, "text": " let me change the color."}, {"start": 2094.5, "end": 2095.7000000000003, "text": " So all of these are going to be zero."}, {"start": 2095.7, "end": 2097.7999999999997, "text": " Zero, zero, if we have agreement."}, {"start": 2097.7999999999997, "end": 2100.3999999999996, "text": " And that's why, so L times X equals zero."}, {"start": 2100.3999999999996, "end": 2101.6, "text": " That's what I state here."}, {"start": 2101.6, "end": 2105.2999999999997, "text": " This is just like sheath loplation instead of a simple loplation."}, {"start": 2105.2999999999997, "end": 2108.3999999999996, "text": " And the name is kind of, I guess, indicative."}, {"start": 2108.3999999999996, "end": 2111.1, "text": " I'm not sure whether the name came from this fact,"}, {"start": 2111.1, "end": 2115.0, "text": " but like the fact that we have an agreement that all of these signals are"}, {"start": 2115.0, "end": 2119.7999999999997, "text": " kind of in equilibrium, that's why these are called harmonic signals."}, {"start": 2119.7999999999997, "end": 2124.8999999999996, "text": " Not completely sure, but I suspect, I strongly suspect that may be the case."}, {"start": 2124.9, "end": 2128.7000000000003, "text": " Theorem 3 just states that this space of global sections,"}, {"start": 2128.7000000000003, "end": 2131.1, "text": " which we saw a couple of minutes ago,"}, {"start": 2131.1, "end": 2136.7000000000003, "text": " and the kernel of this sheath loplation are isomorphic as 
vector spaces."}, {"start": 2136.7000000000003, "end": 2138.0, "text": " So let me kind of break this down."}, {"start": 2138.0, "end": 2139.4, "text": " Don't be scared here."}, {"start": 2139.4, "end": 2143.8, "text": " So isomorphic just means a fancy way of saying these two vector spaces are the same."}, {"start": 2143.8, "end": 2146.4, "text": " So these two objects are the same thing, really."}, {"start": 2146.4, "end": 2148.9, "text": " So why the kernel space?"}, {"start": 2148.9, "end": 2150.1, "text": " So this is the kernel space."}, {"start": 2150.1, "end": 2155.2, "text": " So basically the set of X's that are mapped by this map,"}, {"start": 2155.2, "end": 2157.9, "text": " by the sheath loplation into the zero vector,"}, {"start": 2157.9, "end": 2160.0, "text": " that's what's known as the kernel space."}, {"start": 2160.0, "end": 2163.1, "text": " And that's how we define the H0 as well."}, {"start": 2163.1, "end": 2166.9, "text": " If you remember, these basically mean the same thing."}, {"start": 2166.9, "end": 2169.5, "text": " Okay, enough rambling."}, {"start": 2169.5, "end": 2172.2999999999997, "text": " Let me try and make this a bit more concrete."}, {"start": 2172.2999999999997, "end": 2175.2999999999997, "text": " And let's make some connections."}, {"start": 2175.2999999999997, "end": 2177.7999999999997, "text": " This is how the sheath loplation looks like."}, {"start": 2177.8, "end": 2180.3, "text": " So it's going to be a block matrix."}, {"start": 2180.3, "end": 2185.9, "text": " And these blocks are D times D because the stock is D dimensional."}, {"start": 2185.9, "end": 2188.3, "text": " And you can see how the structure looks like."}, {"start": 2188.3, "end": 2191.1000000000004, "text": " So we have, so on the main diagonal,"}, {"start": 2191.1000000000004, "end": 2196.0, "text": " we're going to have these sums of basically,"}, {"start": 2196.0, "end": 2197.5, "text": " this is the restriction map,"}, {"start": 2197.5, "end": 2201.8, "text": " multiplied by the transpose of that restriction map."}, {"start": 2201.8, "end": 2207.5, "text": " Off the main diagonal, we're going to have minus restriction map of node V."}, {"start": 2207.5, "end": 2211.1, "text": " So this is V. 
And here we have a restriction map of node U,"}, {"start": 2211.1, "end": 2215.2, "text": " because here we have, this slot corresponds to UV."}, {"start": 2215.2, "end": 2216.7, "text": " That's why we have these two."}, {"start": 2216.7, "end": 2219.8, "text": " And if you think about it, this construct makes sense"}, {"start": 2219.8, "end": 2222.5, "text": " because that's how you end up with particular,"}, {"start": 2222.5, "end": 2224.4, "text": " with this equation here."}, {"start": 2224.4, "end": 2227.9, "text": " That's how we end up with this whole expression here."}, {"start": 2227.9, "end": 2233.8, "text": " So you can see that if you just kind of ignore the X's,"}, {"start": 2233.8, "end": 2236.6, "text": " we have the sum of these restriction maps"}, {"start": 2236.6, "end": 2242.0, "text": " and we have this minus restriction map of U and restriction map of V."}, {"start": 2242.0, "end": 2245.5, "text": " And if you just kind of add a vector here,"}, {"start": 2245.5, "end": 2249.9, "text": " so if we add a vector here,"}, {"start": 2249.9, "end": 2255.7999999999997, "text": " and that's going to be our Rx from this zero co-chain space."}, {"start": 2255.7999999999997, "end": 2257.9, "text": " So we'll have vectors here."}, {"start": 2257.9, "end": 2260.9, "text": " So this is going to be D dimensional."}, {"start": 2260.9, "end": 2263.7999999999997, "text": " So this is going to be D dimensional."}, {"start": 2263.7999999999997, "end": 2264.7999999999997, "text": " This is D dimensional."}, {"start": 2264.7999999999997, "end": 2266.2999999999997, "text": " And here we just have a single dimension."}, {"start": 2266.3, "end": 2267.8, "text": " I kind of made it a bit wider,"}, {"start": 2267.8, "end": 2270.0, "text": " but like this is just a single element there."}, {"start": 2270.0, "end": 2272.9, "text": " And then if you just kind of multiply this,"}, {"start": 2272.9, "end": 2280.2000000000003, "text": " you'll understand that you basically end up with this same equation here."}, {"start": 2280.2000000000003, "end": 2284.0, "text": " And because here you have like X, V,"}, {"start": 2284.0, "end": 2286.8, "text": " and then a bit further down,"}, {"start": 2286.8, "end": 2290.6000000000004, "text": " maybe the penultimate slot is going to be X, U."}, {"start": 2290.6000000000004, "end": 2292.8, "text": " And that's why this thing is going to multiply this thing,"}, {"start": 2292.8, "end": 2294.3, "text": " and this thing is going to multiply this thing,"}, {"start": 2294.3, "end": 2296.1000000000004, "text": " and that's how we end up with the equation."}, {"start": 2296.1, "end": 2300.2999999999997, "text": " Sheaf loplation itself can be constructed from these co-boundary operators,"}, {"start": 2300.2999999999997, "end": 2303.7, "text": " and I'll not get into a lot of details there."}, {"start": 2303.7, "end": 2306.2999999999997, "text": " You can check them out in your own time,"}, {"start": 2306.2999999999997, "end": 2309.4, "text": " but for now just be aware that these co-boundary maps exist,"}, {"start": 2309.4, "end": 2311.2999999999997, "text": " and they basically, as you can see here,"}, {"start": 2311.2999999999997, "end": 2316.6, "text": " they measure the discrepancy again in the discourse space,"}, {"start": 2316.6, "end": 2318.6, "text": " so in the edge stock."}, {"start": 2318.6, "end": 2320.5, "text": " Okay, let's make it a bit more concrete."}, {"start": 2320.5, "end": 2323.0, "text": " So here is a particular graph,"}, {"start": 2323.0, "end": 
2326.0, "text": " and here is the sheaf structure."}, {"start": 2326.0, "end": 2327.7, "text": " So here are the restriction maps."}, {"start": 2327.7, "end": 2329.8, "text": " Here are the stocks,"}, {"start": 2329.8, "end": 2332.1, "text": " and you see the concrete numbers."}, {"start": 2332.1, "end": 2334.0, "text": " And by using those numbers,"}, {"start": 2334.0, "end": 2337.0, "text": " you can plug them in into this co-boundary operator."}, {"start": 2337.0, "end": 2340.0, "text": " So this is how you form the co-boundary matrix."}, {"start": 2340.0, "end": 2341.1, "text": " You take the..."}, {"start": 2341.1, "end": 2345.1, "text": " So this is the restriction map F1,"}, {"start": 2345.1, "end": 2349.7, "text": " and this is the restriction map that corresponds to node 2, V2,"}, {"start": 2349.7, "end": 2354.8, "text": " and that's why we have, as you can see here, F1 minus F2,"}, {"start": 2354.8, "end": 2359.0, "text": " because once we multiply this with stocks,"}, {"start": 2359.0, "end": 2360.8, "text": " so we have X1, X2,"}, {"start": 2360.8, "end": 2365.3, "text": " we'll end up with F1, X1, minus F2, X2,"}, {"start": 2365.3, "end": 2367.3, "text": " and that's precisely measuring the disagreement"}, {"start": 2367.3, "end": 2369.1000000000004, "text": " along this particular edge."}, {"start": 2369.1000000000004, "end": 2370.3, "text": " So this is how you form them,"}, {"start": 2370.3, "end": 2372.4, "text": " and you can see again a connection"}, {"start": 2372.4, "end": 2375.1000000000004, "text": " between this and this thing here."}, {"start": 2375.1000000000004, "end": 2377.2000000000003, "text": " We get precisely this equation"}, {"start": 2377.2000000000003, "end": 2380.3, "text": " by forming a matrix such as this one."}, {"start": 2380.3, "end": 2384.3, "text": " And if we were to multiply this co-boundary transpose"}, {"start": 2384.3, "end": 2386.9, "text": " times co-boundary, we end up with the sheaf,"}, {"start": 2386.9, "end": 2390.5, "text": " and you can kind of imagine that with these Fs here and F2s,"}, {"start": 2390.5, "end": 2392.4, "text": " if you kind of do your math,"}, {"start": 2392.4, "end": 2395.8, "text": " you're gonna end up with an equation such as this one,"}, {"start": 2395.8, "end": 2398.3, "text": " a sum of these Fs and stuff."}, {"start": 2398.3, "end": 2399.9, "text": " Okay, I'm gonna stop there."}, {"start": 2399.9, "end": 2402.8, "text": " I'm aware this is packed with details,"}, {"start": 2402.8, "end": 2404.2000000000003, "text": " but it is how it is."}, {"start": 2404.2000000000003, "end": 2406.3, "text": " Hopefully this video is just gonna give you"}, {"start": 2406.3, "end": 2408.5, "text": " like a glimpse, an overview of this whole field,"}, {"start": 2408.5, "end": 2411.1000000000004, "text": " and you can go on and explore at your own pace,"}, {"start": 2411.1000000000004, "end": 2412.7000000000003, "text": " and then return back to this video"}, {"start": 2412.7, "end": 2415.2999999999997, "text": " and watch it again, I guess."}, {"start": 2415.2999999999997, "end": 2418.3999999999996, "text": " Let's get to the actual neural networks now."}, {"start": 2418.3999999999996, "end": 2423.0, "text": " So an important concept is this diffusion process."}, {"start": 2423.0, "end": 2426.3999999999996, "text": " Okay, now, so all of those laplations"}, {"start": 2426.3999999999996, "end": 2429.5, "text": " and the sheaf laplations can be thought of"}, {"start": 2429.5, "end": 2434.5, "text": " as a discretization of 
this diffusion process."}, {"start": 2434.5, "end": 2438.2999999999997, "text": " So it's basically a continuous process governed by a PDE,"}, {"start": 2438.2999999999997, "end": 2440.0, "text": " so a partial differential equation."}, {"start": 2440.0, "end": 2442.0, "text": " So we can see that the rate of change of X"}, {"start": 2442.0, "end": 2446.9, "text": " equals minus sheaf laplation times the current values of X."}, {"start": 2446.9, "end": 2450.1, "text": " And let me try and briefly explain"}, {"start": 2450.1, "end": 2452.1, "text": " what this diffusion process is about."}, {"start": 2452.1, "end": 2455.6, "text": " So again, how it came to be, this whole theory,"}, {"start": 2455.6, "end": 2458.2, "text": " is I'm fairly sure it came from Fourier"}, {"start": 2458.2, "end": 2462.5, "text": " analyzing the dissipation on solid bodies,"}, {"start": 2462.5, "end": 2464.5, "text": " the heat dissipation on solid bodies."}, {"start": 2464.5, "end": 2469.5, "text": " So imagine you have like a 3D solid object here,"}, {"start": 2469.5, "end": 2473.3, "text": " and you have a certain point inside here."}, {"start": 2473.3, "end": 2474.7, "text": " So you have a certain point,"}, {"start": 2474.7, "end": 2477.1, "text": " and that certain point has associated temperature,"}, {"start": 2477.1, "end": 2478.8, "text": " so a certain scalar."}, {"start": 2478.8, "end": 2481.5, "text": " And imagine now that you have around it,"}, {"start": 2481.5, "end": 2485.7, "text": " you have like a dense, like a minisphere"}, {"start": 2485.7, "end": 2488.3, "text": " around that point, the green point,"}, {"start": 2488.3, "end": 2493.3, "text": " and all of those points in the neighborhood are colder."}, {"start": 2493.9, "end": 2496.1, "text": " So what's gonna happen is that,"}, {"start": 2496.1, "end": 2498.9, "text": " as you know from your real life,"}, {"start": 2498.9, "end": 2503.9, "text": " this red, the green point is gonna slowly cool down."}, {"start": 2504.2000000000003, "end": 2506.9, "text": " And I should have probably used the red color"}, {"start": 2506.9, "end": 2508.5, "text": " to denote it as being hot."}, {"start": 2508.5, "end": 2510.8, "text": " It's gonna slowly cool down."}, {"start": 2510.8, "end": 2512.7000000000003, "text": " And the reason it cools down is because"}, {"start": 2512.7000000000003, "end": 2514.7000000000003, "text": " of this diffusion equation."}, {"start": 2514.7000000000003, "end": 2517.5, "text": " So basically you can imagine what happens is,"}, {"start": 2517.5, "end": 2521.0, "text": " we subtract from this value, whatever that value is,"}, {"start": 2521.0, "end": 2525.9, "text": " so some T zero, we subtract the average"}, {"start": 2525.9, "end": 2528.3, "text": " of the temperatures in the neighborhood."}, {"start": 2528.3, "end": 2531.8, "text": " And that means we're gonna subtract the average,"}, {"start": 2531.8, "end": 2536.1000000000004, "text": " so like a sum of the temperatures of the neighbors"}, {"start": 2536.1000000000004, "end": 2540.0, "text": " where I is gonna be from the neighbor set,"}, {"start": 2540.0, "end": 2543.1000000000004, "text": " just denoted as this very ugly N."}, {"start": 2543.1000000000004, "end": 2546.9, "text": " And if you think about it, this is exactly the same thing"}, {"start": 2546.9, "end": 2548.5, "text": " we're doing with our graphs."}, {"start": 2548.5, "end": 2551.9, "text": " So we just made this continuous,"}, {"start": 2551.9, "end": 2555.8, "text": " both in space and in time process, 
and we discretize it."}, {"start": 2555.8, "end": 2557.8, "text": " So we're doing it in a discrete domain,"}, {"start": 2557.8, "end": 2560.5, "text": " which is a graph, and we do it in discrete steps."}, {"start": 2560.5, "end": 2563.0, "text": " So in every iteration, we're gonna be doing the same thing."}, {"start": 2563.0, "end": 2566.5, "text": " Okay, so what this eventually leads to is basically"}, {"start": 2566.5, "end": 2568.9, "text": " everything is gonna have the same temperature."}, {"start": 2568.9, "end": 2572.5, "text": " Okay, that's the end goal of this process."}, {"start": 2572.5, "end": 2574.5, "text": " And we just took the discretized version"}, {"start": 2574.5, "end": 2577.8, "text": " of this same process, and we are piggybacking that"}, {"start": 2577.8, "end": 2581.1000000000004, "text": " as we're using it as the workhorse for all of our,"}, {"start": 2581.1000000000004, "end": 2583.9, "text": " for most of these graph neural networks."}, {"start": 2583.9, "end": 2586.4, "text": " Now they say here, it can be shown that in the time limit,"}, {"start": 2586.4, "end": 2589.2000000000003, "text": " each feature channel is projected into kernel"}, {"start": 2589.2000000000003, "end": 2591.3, "text": " of that sheaf laplation."}, {"start": 2591.3, "end": 2596.3, "text": " Okay, that means again, that this X is gonna be,"}, {"start": 2596.7000000000003, "end": 2598.9, "text": " that these updates are gonna be zero vectors,"}, {"start": 2598.9, "end": 2602.4, "text": " and that means that we won't have any rate of change."}, {"start": 2602.4, "end": 2604.6, "text": " So the diffusion stopped,"}, {"start": 2604.6, "end": 2607.6, "text": " the temperature is stable everywhere."}, {"start": 2607.6, "end": 2610.8, "text": " And that means continuing here, as described above,"}, {"start": 2610.8, "end": 2613.3, "text": " this space contains the signals that agree"}, {"start": 2613.3, "end": 2618.2000000000003, "text": " with the restriction maps of the sheaf along all the edges."}, {"start": 2618.2000000000003, "end": 2620.6000000000004, "text": " In what follows, we analyze the ability of certain classes"}, {"start": 2620.6000000000004, "end": 2623.2000000000003, "text": " of sheaves to linearly separate the features"}, {"start": 2623.2000000000003, "end": 2626.9, "text": " in the limit of the diffusion processes they induce."}, {"start": 2626.9, "end": 2628.6000000000004, "text": " Okay, restating this again,"}, {"start": 2629.9, "end": 2632.2000000000003, "text": " by repeating this diffusion process,"}, {"start": 2632.2000000000003, "end": 2635.2000000000003, "text": " we'll eventually end up being a harmonic signal."}, {"start": 2635.2000000000003, "end": 2638.0, "text": " So we'll eventually end up with these harmonic signals,"}, {"start": 2638.0, "end": 2643.3, "text": " i.e. 
signals that basically lead to alignment"}, {"start": 2643.3, "end": 2645.5, "text": " with their neighbors."}, {"start": 2645.5, "end": 2649.1, "text": " Now, in general case, that signal might be trivial,"}, {"start": 2649.1, "end": 2652.0, "text": " so it might be all zeros, which is not that interesting,"}, {"start": 2652.0, "end": 2655.7, "text": " but like we're gonna now see specific classes"}, {"start": 2655.7, "end": 2660.3, "text": " and how for specific, if we pick specific sheaf structures,"}, {"start": 2660.3, "end": 2663.7, "text": " we'll end up with very interesting harmonic spaces."}, {"start": 2663.7, "end": 2665.7, "text": " So what would constitute an interesting signal"}, {"start": 2665.7, "end": 2666.9, "text": " from that harmonic space?"}, {"start": 2666.9, "end": 2670.0, "text": " Well, a signal such that we can linearly separate"}, {"start": 2670.0, "end": 2671.8, "text": " our classes, so we can classify it,"}, {"start": 2671.8, "end": 2674.0, "text": " we care about like node classification,"}, {"start": 2674.0, "end": 2675.6, "text": " edge classification, regression,"}, {"start": 2675.6, "end": 2677.0, "text": " graph classification, whatnot."}, {"start": 2677.0, "end": 2679.8, "text": " And so if this diffusion process eventually leads"}, {"start": 2679.8, "end": 2681.4, "text": " to such a harmonic signal,"}, {"start": 2681.4, "end": 2684.3, "text": " such that we have linear separability of our classes,"}, {"start": 2684.3, "end": 2687.2000000000003, "text": " then we're good, then we want to kind of ride"}, {"start": 2687.2000000000003, "end": 2691.9, "text": " the diffusion process all the way to this equilibrium point."}, {"start": 2692.9, "end": 2696.5, "text": " Now, let's see a couple of sheaf structures"}, {"start": 2696.5, "end": 2701.5, "text": " and what the harmonic signals they result with,"}, {"start": 2702.6, "end": 2705.1, "text": " basically how powerful are those signals."}, {"start": 2705.1, "end": 2707.2, "text": " Okay, so first class they consider"}, {"start": 2707.2, "end": 2710.2, "text": " is this very simple symmetric invertible class."}, {"start": 2710.2, "end": 2712.0, "text": " Okay, it's defined the following way."}, {"start": 2712.9, "end": 2717.5, "text": " It's such a sheaf structure such that we have"}, {"start": 2717.5, "end": 2723.6, "text": " the restriction map for two nodes that connect one edge."}, {"start": 2723.6, "end": 2726.3, "text": " So these two nodes are gonna have the same,"}, {"start": 2726.3, "end": 2727.2000000000003, "text": " they're gonna be the same."}, {"start": 2727.2000000000003, "end": 2729.4, "text": " These two restriction maps are gonna be the same"}, {"start": 2729.4, "end": 2733.5, "text": " and the determinant of those restriction maps"}, {"start": 2733.5, "end": 2735.3, "text": " is gonna be different than zero,"}, {"start": 2735.3, "end": 2739.3, "text": " which means this is invertible as it says here"}, {"start": 2739.3, "end": 2742.9, "text": " and it's symmetric because these two are equal, okay?"}, {"start": 2742.9, "end": 2745.8, "text": " Then they mentioned here, we note that for D equals one,"}, {"start": 2745.8, "end": 2747.8, "text": " so stock dimension equals one,"}, {"start": 2747.8, "end": 2750.7000000000003, "text": " the sheaf loplations induced by this class of sheaves"}, {"start": 2750.7000000000003, "end": 2753.8, "text": " coincides with the set of the well-known weighted"}, {"start": 2753.8, "end": 2756.6000000000004, "text": " graph loplations with strictly positive 
weights,"}, {"start": 2756.6000000000004, "end": 2759.4, "text": " which also includes the usual graph loplation."}, {"start": 2759.4, "end": 2763.1000000000004, "text": " Therefore, this hypothesis class is of particular interest"}, {"start": 2763.1000000000004, "end": 2765.3, "text": " since it includes those graph loplations"}, {"start": 2765.3, "end": 2768.5, "text": " typically used by graph convolutional models such as GCN."}, {"start": 2768.5, "end": 2772.5, "text": " So this is the minimal generalization we can make, okay?"}, {"start": 2772.5, "end": 2775.7000000000003, "text": " And we first show that this class of sheaf loplations"}, {"start": 2775.7000000000003, "end": 2777.9, "text": " can linearly separate the classes"}, {"start": 2777.9, "end": 2780.5, "text": " in binary classification settings"}, {"start": 2780.5, "end": 2783.6000000000004, "text": " under certain homophily assumptions."}, {"start": 2783.6, "end": 2784.7999999999997, "text": " So that's very important."}, {"start": 2784.7999999999997, "end": 2786.2, "text": " I'm gonna go through this proposition,"}, {"start": 2786.2, "end": 2788.7999999999997, "text": " but then I'll start slowly scheming some of the details"}, {"start": 2788.7999999999997, "end": 2790.0, "text": " because there is too many details"}, {"start": 2790.0, "end": 2792.7, "text": " and not all of them are equally important here."}, {"start": 2792.7, "end": 2795.7, "text": " Let G be the set of connected graphs with two classes."}, {"start": 2795.7, "end": 2797.9, "text": " We have two classes because it's a binary classification"}, {"start": 2797.9, "end": 2801.9, "text": " problem, so nodes A and nodes B belong into the set of nodes"}, {"start": 2801.9, "end": 2805.4, "text": " of the graph such that for each node from one class,"}, {"start": 2805.4, "end": 2810.0, "text": " there exists a node U from the same class"}, {"start": 2810.0, "end": 2811.6, "text": " and an edge connecting them."}, {"start": 2811.6, "end": 2814.9, "text": " So that means that the nodes with the same label"}, {"start": 2814.9, "end": 2816.7, "text": " are connected to each other."}, {"start": 2816.7, "end": 2819.4, "text": " And that's the homophily assumption they mentioned."}, {"start": 2819.4, "end": 2824.4, "text": " Then this sheaf has linear separation power"}, {"start": 2824.7, "end": 2826.7, "text": " over this particular graph, okay?"}, {"start": 2826.7, "end": 2829.7, "text": " But then they show that it's not powerful enough"}, {"start": 2829.7, "end": 2831.2999999999997, "text": " to linearly separate two classes"}, {"start": 2831.2999999999997, "end": 2834.7999999999997, "text": " if basically we have heterophilic conditions."}, {"start": 2834.7999999999997, "end": 2836.5, "text": " And that's what I show here."}, {"start": 2836.5, "end": 2841.5, "text": " Basically, if let G be the set of connected bipartite graphs"}, {"start": 2842.3, "end": 2844.6, "text": " with partitions A and B forming two classes"}, {"start": 2844.6, "end": 2848.4, "text": " and being of, like, those two classes"}, {"start": 2848.4, "end": 2850.2, "text": " have the same number of elements,"}, {"start": 2850.2, "end": 2854.4, "text": " then this sheaf cannot linearly separate any graph in G"}, {"start": 2854.4, "end": 2856.2, "text": " for any initial conditions, okay?"}, {"start": 2856.2, "end": 2859.3, "text": " So that means that if we have something like this,"}, {"start": 2859.3, "end": 2863.6, "text": " so we have a bipartite graph such as this one,"}, {"start": 2863.6, 
"end": 2867.7999999999997, "text": " and we don't have any connections here."}, {"start": 2867.7999999999997, "end": 2869.2999999999997, "text": " We just connect like this."}, {"start": 2869.2999999999997, "end": 2871.1, "text": " We just have these connections, okay?"}, {"start": 2871.1, "end": 2875.6, "text": " And they can show that in this particular setup,"}, {"start": 2875.6, "end": 2879.6, "text": " this class of sheaf structures will not work."}, {"start": 2879.6, "end": 2881.2999999999997, "text": " And that's why we have to resort"}, {"start": 2881.2999999999997, "end": 2883.9, "text": " to some more powerful sheaf classes."}, {"start": 2883.9, "end": 2886.0, "text": " And let's see what those are."}, {"start": 2886.0, "end": 2889.1, "text": " So then they introduce this non-symmetric invertible"}, {"start": 2889.1, "end": 2892.0, "text": " where we only have this constraint,"}, {"start": 2892.0, "end": 2895.9, "text": " so it's invertible, but we don't have the symmetry constraint."}, {"start": 2895.9, "end": 2898.5, "text": " Here they show, I'm gonna skip reading this,"}, {"start": 2898.5, "end": 2903.5, "text": " but they basically say that even in heterophilic conditions,"}, {"start": 2904.5, "end": 2909.5, "text": " this sheaf class is gonna have the linear separation power"}, {"start": 2909.7, "end": 2913.1, "text": " over those graphs, which is super nice."}, {"start": 2913.1, "end": 2916.6, "text": " So we're slowly, by adding more powerful sheaf structures,"}, {"start": 2916.6, "end": 2919.2, "text": " we can solve more difficult problems."}, {"start": 2919.2, "end": 2922.7, "text": " So in this particular case, basically,"}, {"start": 2922.7, "end": 2925.7, "text": " binary classification in heterophilic setting, okay?"}, {"start": 2926.7, "end": 2928.0, "text": " And now they continue doing this."}, {"start": 2928.0, "end": 2931.7, "text": " So here they state a fundamental limitation"}, {"start": 2931.7, "end": 2933.8999999999996, "text": " of sheaf diffusion when D equals one,"}, {"start": 2933.8999999999996, "end": 2937.2999999999997, "text": " so when we only have single-dimensional stocks."}, {"start": 2939.2999999999997, "end": 2941.7, "text": " They basically say that let G be a connected graph"}, {"start": 2941.7, "end": 2945.7, "text": " with more than three classes, so three or more,"}, {"start": 2945.7, "end": 2950.2, "text": " then this previous class of sheafs cannot linearly separate"}, {"start": 2950.2, "end": 2955.2, "text": " any signal on that graph, and that's a powerful statement."}, {"start": 2956.5, "end": 2958.7, "text": " So that means we need to go deeper."}, {"start": 2958.7, "end": 2963.3999999999996, "text": " We need to have more dimensions along the stock,"}, {"start": 2963.3999999999996, "end": 2965.5, "text": " along the vector space dimension."}, {"start": 2965.5, "end": 2967.5, "text": " So they say here, from a GNM perspective,"}, {"start": 2967.5, "end": 2970.0, "text": " this means that in the infinite-depth setting,"}, {"start": 2970.0, "end": 2973.0, "text": " sufficient stock width, i.e. 
dimension D,"}, {"start": 2973.0, "end": 2975.8, "text": " is needed in order to solve tasks involving"}, {"start": 2975.8, "end": 2978.8, "text": " more than two classes, so more than two classes."}, {"start": 2978.8, "end": 2981.1, "text": " Note that D is different from the classical notion"}, {"start": 2981.1, "end": 2983.1, "text": " of feature, channels F."}, {"start": 2983.1, "end": 2986.2, "text": " As a result, above shows, the letter has no effect"}, {"start": 2986.2, "end": 2989.8, "text": " on the linear separability of the classes in D equals one,"}, {"start": 2989.8, "end": 2991.8, "text": " whereas they showed that the former does."}, {"start": 2991.8, "end": 2995.8, "text": " So you can still have, so this sheaf structure,"}, {"start": 2997.6, "end": 3001.2, "text": " this sheaf neural networks are still going to admit"}, {"start": 3001.2, "end": 3003.0, "text": " the usage of features."}, {"start": 3003.0, "end": 3006.7999999999997, "text": " It's just that those do not help increase the expressivity"}, {"start": 3006.7999999999997, "end": 3010.0, "text": " of our models, whereas increasing the stock dimension"}, {"start": 3010.0, "end": 3011.5, "text": " on the other hand does."}, {"start": 3011.5, "end": 3013.6, "text": " So next up, they introduce this diagonal,"}, {"start": 3013.6, "end": 3018.6, "text": " invertible family of sheafs where the restriction maps"}, {"start": 3019.0, "end": 3021.2999999999997, "text": " are diagonal, so basically, if you take a look"}, {"start": 3021.2999999999997, "end": 3023.7999999999997, "text": " at their representation of the matrix,"}, {"start": 3023.7999999999997, "end": 3025.3999999999996, "text": " you'll just see diagonal elements,"}, {"start": 3025.3999999999996, "end": 3028.0, "text": " and everything else is zero, and again, it's invertible,"}, {"start": 3028.0, "end": 3030.3999999999996, "text": " so we have this condition of the determinant condition."}, {"start": 3030.4, "end": 3034.5, "text": " And they say that let G be the set of connected graphs"}, {"start": 3034.5, "end": 3037.6, "text": " with nodes belonging to more than three,"}, {"start": 3037.6, "end": 3042.0, "text": " so three or more classes, then for D equals,"}, {"start": 3042.0, "end": 3045.5, "text": " or greater or equal than the number of classes,"}, {"start": 3045.5, "end": 3050.5, "text": " this particular class of sheafs has linear separation power."}, {"start": 3050.6, "end": 3053.9, "text": " So as long as the D is bigger than the,"}, {"start": 3053.9, "end": 3056.4, "text": " or equal to the number of classes,"}, {"start": 3056.4, "end": 3058.1, "text": " we have separation power."}, {"start": 3058.1, "end": 3061.2999999999997, "text": " So again, more powerful than the previous families."}, {"start": 3061.2999999999997, "end": 3063.7, "text": " Then they do the same thing for orthogonal families"}, {"start": 3063.7, "end": 3067.9, "text": " where basically the restriction maps now belong"}, {"start": 3067.9, "end": 3072.9, "text": " to this group of orthogonal matrices,"}, {"start": 3073.7999999999997, "end": 3075.9, "text": " which basically means they just do rotations,"}, {"start": 3075.9, "end": 3078.9, "text": " they don't do any scaling, they just rotate your vectors."}, {"start": 3078.9, "end": 3081.4, "text": " And here they show that this class has even more powers"}, {"start": 3081.4, "end": 3084.0, "text": " in the sense that here you can have, for example,"}, {"start": 3084.0, "end": 3087.7, "text": " two for the dimensionality of the 
stock,"}, {"start": 3087.7, "end": 3092.2, "text": " and you can classify, you can correctly linearly separate"}, {"start": 3092.2, "end": 3096.0, "text": " up to four classes, so that means it's even more powerful."}, {"start": 3096.0, "end": 3099.3999999999996, "text": " And the interesting part here is that the proof is crazy,"}, {"start": 3099.3999999999996, "end": 3103.5, "text": " like they use quaternions and complex numbers for two,"}, {"start": 3103.5, "end": 3108.3999999999996, "text": " and basically this statement here may hold"}, {"start": 3108.3999999999996, "end": 3110.8999999999996, "text": " even for higher dimensions, it's just that they did not"}, {"start": 3110.8999999999996, "end": 3114.0, "text": " make the proofs for those other dimensions."}, {"start": 3114.0, "end": 3118.3, "text": " If you're curious in the very heavy mathematics,"}, {"start": 3118.3, "end": 3121.9, "text": " the appendix of the paper is full of various proofs"}, {"start": 3121.9, "end": 3125.3, "text": " of all of these propositions and theorems you saw here."}, {"start": 3125.3, "end": 3129.0, "text": " So very, very mathematically rich paper."}, {"start": 3129.0, "end": 3130.7, "text": " So finally, the summary."}, {"start": 3130.7, "end": 3133.4, "text": " Different shift classes give rise to different behaviors"}, {"start": 3133.4, "end": 3136.3, "text": " of the diffusion process and consequently"}, {"start": 3136.3, "end": 3138.7, "text": " to different separation capabilities."}, {"start": 3138.7, "end": 3141.7, "text": " Taken together, these results show that solving"}, {"start": 3141.7, "end": 3143.8, "text": " any node classification test can be reduced"}, {"start": 3143.8, "end": 3146.5, "text": " to performing diffusion with the right sheaf."}, {"start": 3146.5, "end": 3149.3, "text": " A very powerful statement."}, {"start": 3149.3, "end": 3153.4, "text": " So again, by taking a particular sheaf structure"}, {"start": 3153.4, "end": 3158.6000000000004, "text": " and by just constructing the corresponding sheaf loplation"}, {"start": 3158.6000000000004, "end": 3161.5, "text": " and by just iterating, applying the sheaf loplation"}, {"start": 3161.5, "end": 3163.9, "text": " in this diffusion process, we'll end up"}, {"start": 3163.9, "end": 3166.9, "text": " with certain harmonic signals and depending"}, {"start": 3166.9, "end": 3170.7000000000003, "text": " on the basically richness of the class you picked,"}, {"start": 3170.7, "end": 3175.7, "text": " those harmonic signals may lead to a very nice"}, {"start": 3175.7999999999997, "end": 3178.5, "text": " linear separability of your classes."}, {"start": 3178.5, "end": 3181.2, "text": " So that's the idea here."}, {"start": 3181.2, "end": 3183.2999999999997, "text": " Okay, we're slowly wrapping up this paper."}, {"start": 3183.2999999999997, "end": 3186.8999999999996, "text": " So there is not a lot more, so maybe pause the video"}, {"start": 3186.8999999999996, "end": 3190.1, "text": " and stretch a bit, I know this was super heavy."}, {"start": 3190.1, "end": 3195.1, "text": " So here you can see a previous sheaf neural network"}, {"start": 3195.2999999999997, "end": 3198.3999999999996, "text": " devised in one of the previous papers."}, {"start": 3198.4, "end": 3203.1, "text": " And it again uses the diffusion process as the workhorse"}, {"start": 3203.1, "end": 3206.2000000000003, "text": " and just adds additional learnable parameters"}, {"start": 3206.2000000000003, "end": 3210.3, "text": " and non-linearities to this 
diffusion process"}, {"start": 3210.3, "end": 3213.8, "text": " to make these more powerful neural networks."}, {"start": 3213.8, "end": 3216.4, "text": " But what they've done is they actually hard-coded"}, {"start": 3216.4, "end": 3218.2000000000003, "text": " the sheaf structure, so they are not learning"}, {"start": 3218.2000000000003, "end": 3220.8, "text": " the sheaf structure, whereas as we'll soon see"}, {"start": 3220.8, "end": 3222.5, "text": " in this paper, they're actually learning"}, {"start": 3222.5, "end": 3224.3, "text": " the sheaf structure as well."}, {"start": 3224.3, "end": 3227.1, "text": " So let's see what this thing does."}, {"start": 3227.1, "end": 3232.1, "text": " So we have X, which is basically, those are our signals,"}, {"start": 3232.2999999999997, "end": 3236.7, "text": " and the dimensionality of that matrix is gonna be basically"}, {"start": 3236.7, "end": 3241.7, "text": " Nd times F, so that means we'll have Nd dimensions here"}, {"start": 3244.2, "end": 3248.1, "text": " because we have N nodes, dimensionality of the stokes is D,"}, {"start": 3248.1, "end": 3250.6, "text": " and we have potentially F features here."}, {"start": 3250.6, "end": 3252.1, "text": " That's why we have something like this."}, {"start": 3252.1, "end": 3253.7, "text": " This is the structure of X."}, {"start": 3253.7, "end": 3258.7, "text": " So what W2 does is first, you just linearly transform,"}, {"start": 3258.7, "end": 3262.1, "text": " you map these feature vectors into some other representations."}, {"start": 3262.1, "end": 3264.2999999999997, "text": " That's the first transformation you do."}, {"start": 3264.2999999999997, "end": 3267.7, "text": " Then what you do is you take this W1 matrix,"}, {"start": 3267.7, "end": 3270.1, "text": " which is of dimensionality D times D."}, {"start": 3270.1, "end": 3271.7999999999997, "text": " This is gonna be D times D."}, {"start": 3271.7999999999997, "end": 3275.7999999999997, "text": " You're gonna do a Kronecker product with identity matrix N,"}, {"start": 3275.7999999999997, "end": 3277.8999999999996, "text": " which is a fancy way of doing the following thing."}, {"start": 3277.8999999999996, "end": 3281.2, "text": " So you're gonna end up with Nd times Nd matrix,"}, {"start": 3281.2, "end": 3285.8999999999996, "text": " which is basically necessary if you want to multiply,"}, {"start": 3285.8999999999996, "end": 3288.3999999999996, "text": " left multiply this particular structure here."}, {"start": 3288.3999999999996, "end": 3289.6, "text": " So why does that happen?"}, {"start": 3289.6, "end": 3293.3999999999996, "text": " Well, you have a matrix that's T times D,"}, {"start": 3293.3999999999996, "end": 3297.0, "text": " and what this Kronecker product does is the following."}, {"start": 3297.0, "end": 3302.0, "text": " By multiplying this by basically, just a second,"}, {"start": 3302.1, "end": 3303.8999999999996, "text": " sorry, it's on the other side,"}, {"start": 3303.8999999999996, "end": 3305.8999999999996, "text": " so it's gonna be from the left side."}, {"start": 3306.7, "end": 3310.5, "text": " So by multiplying this with this identity matrix"}, {"start": 3310.5, "end": 3312.5, "text": " that has dimensions N times N,"}, {"start": 3312.5, "end": 3315.5, "text": " you're gonna end up with the following structure."}, {"start": 3315.5, "end": 3317.9, "text": " Basically along this diagonal,"}, {"start": 3317.9, "end": 3321.7, "text": " we're gonna have, we're just gonna copy paste"}, {"start": 3321.7, "end": 
3323.3, "text": " these matrices D."}, {"start": 3323.3, "end": 3326.6, "text": " So where there was one in this matrix,"}, {"start": 3326.6, "end": 3328.7, "text": " we're gonna have a D times D matrix here."}, {"start": 3328.7, "end": 3332.0, "text": " And that's why we end up with N times D matrix here, okay?"}, {"start": 3333.2, "end": 3338.2, "text": " So that matrix is gonna basically take the signals here."}, {"start": 3338.2, "end": 3343.2, "text": " So let's assume this is one vector here, this is D."}, {"start": 3344.3999999999996, "end": 3349.3999999999996, "text": " It's going to linearly transform stocks"}, {"start": 3350.1, "end": 3351.7999999999997, "text": " for each of the features."}, {"start": 3351.7999999999997, "end": 3353.7999999999997, "text": " That's what this part is gonna do."}, {"start": 3353.7999999999997, "end": 3358.2, "text": " And finally, we apply the shift-loplation here,"}, {"start": 3358.2, "end": 3360.2, "text": " which is going to do the,"}, {"start": 3360.2, "end": 3363.1, "text": " again, the averaging of the neighbors logic,"}, {"start": 3363.1, "end": 3367.2999999999997, "text": " and then you subtract that value from the node of interest."}, {"start": 3367.3, "end": 3369.9, "text": " And then you just apply the non-linearity here."}, {"start": 3369.9, "end": 3373.3, "text": " So again, you're basically in a way"}, {"start": 3374.5, "end": 3378.0, "text": " using this diffusion logic as your workhorse,"}, {"start": 3378.0, "end": 3382.0, "text": " and then you're building these learnable components"}, {"start": 3382.0, "end": 3383.7000000000003, "text": " and putting those learnable components"}, {"start": 3383.7000000000003, "end": 3386.2000000000003, "text": " around your diffusion logic."}, {"start": 3386.2000000000003, "end": 3388.6000000000004, "text": " That's how this network works."}, {"start": 3388.6000000000004, "end": 3391.8, "text": " And then they say it is natural to call this model"}, {"start": 3391.8, "end": 3393.8, "text": " a shift convolutional network,"}, {"start": 3393.8, "end": 3397.4, "text": " since when the shift-loplation is the usual"}, {"start": 3397.4, "end": 3400.0, "text": " normalized graph-loplation,"}, {"start": 3400.0, "end": 3403.6000000000004, "text": " W1 becomes a scalar and one recovers the GCN"}, {"start": 3403.6000000000004, "end": 3407.5, "text": " of Kipf and Welling, so that's the famous GCN paper."}, {"start": 3407.5, "end": 3411.5, "text": " So this shows that GCNs and more generally SCNs"}, {"start": 3411.5, "end": 3414.3, "text": " are a non-linear parametric and discrete version"}, {"start": 3414.3, "end": 3415.9, "text": " of shift diffusion."}, {"start": 3415.9, "end": 3418.0, "text": " In what follows, we analyze how expressive"}, {"start": 3418.0, "end": 3420.5, "text": " these non-linear models are compared to their"}, {"start": 3420.5, "end": 3424.1, "text": " base diffusion process, okay?"}, {"start": 3424.1, "end": 3427.3, "text": " So let's see a couple of things here."}, {"start": 3427.3, "end": 3432.5, "text": " So first of all, they form something called"}, {"start": 3432.5, "end": 3433.7, "text": " Dirichlet energy."}, {"start": 3433.7, "end": 3437.7, "text": " So for the sake of time, it's not even important"}, {"start": 3437.7, "end": 3440.8, "text": " for you to understand the exact details."}, {"start": 3440.8, "end": 3443.4, "text": " What you need to know is that this Dirichlet energy"}, {"start": 3443.4, "end": 3446.9, "text": " basically measures how far is the current 
signal"}, {"start": 3446.9, "end": 3449.2, "text": " from being a harmonic signal."}, {"start": 3449.2, "end": 3452.6, "text": " So from being, yeah, basically a harmonic signal."}, {"start": 3452.6, "end": 3455.7999999999997, "text": " So then they state the following here."}, {"start": 3455.7999999999997, "end": 3460.7999999999997, "text": " If we have a shift class belonging to this particular"}, {"start": 3461.1, "end": 3462.8999999999996, "text": " orthogonal symmetric shift class,"}, {"start": 3462.8999999999996, "end": 3465.8999999999996, "text": " which is fairly restricted, if we have a non-linearity"}, {"start": 3465.8999999999996, "end": 3469.0, "text": " that's either a ReLU or a Leaky ReLU,"}, {"start": 3469.0, "end": 3473.1, "text": " then they can show that the Dirichlet energy"}, {"start": 3473.1, "end": 3476.3999999999996, "text": " in the next state, so this is a signal,"}, {"start": 3476.3999999999996, "end": 3479.1, "text": " the transformed signal after we apply the logic"}, {"start": 3479.1, "end": 3482.7999999999997, "text": " from the neural network, is gonna be upper bounded"}, {"start": 3482.7999999999997, "end": 3485.7, "text": " by this expression here, and this is the old signal."}, {"start": 3485.7, "end": 3489.2999999999997, "text": " This is the old energy of the old signal, okay?"}, {"start": 3489.2999999999997, "end": 3492.7999999999997, "text": " So in particular, this means that if this whole term here"}, {"start": 3492.7999999999997, "end": 3497.2, "text": " is smaller than one, that means that with every step,"}, {"start": 3497.2, "end": 3500.2999999999997, "text": " the energy will go down, and as I said,"}, {"start": 3500.2999999999997, "end": 3503.0, "text": " the energy going down means the signal is going towards"}, {"start": 3503.0, "end": 3505.7999999999997, "text": " becoming a harmonic signal, okay?"}, {"start": 3505.7999999999997, "end": 3508.7999999999997, "text": " So this means that the signal converges exponentially"}, {"start": 3508.8, "end": 3511.6000000000004, "text": " fast to the kernel of the shift laplation,"}, {"start": 3511.6000000000004, "end": 3513.9, "text": " i.e. 
to being the harmonic signal."}, {"start": 3513.9, "end": 3515.7000000000003, "text": " Okay, and this is usually going to be the case"}, {"start": 3515.7000000000003, "end": 3520.7000000000003, "text": " because this lambda star is usually smaller than one,"}, {"start": 3520.8, "end": 3524.1000000000004, "text": " and because of the regularization and everything,"}, {"start": 3524.1000000000004, "end": 3525.6000000000004, "text": " in order to reduce the fitting,"}, {"start": 3525.6000000000004, "end": 3527.3, "text": " the norms here are gonna be also small,"}, {"start": 3527.3, "end": 3531.2000000000003, "text": " so this is usually satisfied, which means that"}, {"start": 3531.2000000000003, "end": 3534.8, "text": " for this particular class of sheaves,"}, {"start": 3534.8, "end": 3539.8, "text": " the signal is necessarily going to go towards"}, {"start": 3540.2000000000003, "end": 3543.8, "text": " the actual becoming a harmonic signal,"}, {"start": 3543.8, "end": 3545.8, "text": " and that's not always desirable."}, {"start": 3545.8, "end": 3547.5, "text": " We'll see why that is."}, {"start": 3547.5, "end": 3552.2000000000003, "text": " So let's see the proposition 17 here first."}, {"start": 3553.6000000000004, "end": 3556.4, "text": " They mentioned that as soon as we have"}, {"start": 3556.4, "end": 3559.7000000000003, "text": " an asymmetric transport map, so here remember,"}, {"start": 3559.7000000000003, "end": 3562.7000000000003, "text": " this class had the symmetry constraint,"}, {"start": 3562.7, "end": 3566.6, "text": " so as soon as we have the asymmetric transport map,"}, {"start": 3566.6, "end": 3569.8999999999996, "text": " we can find an arbitrarily small linear transformation W"}, {"start": 3569.8999999999996, "end": 3571.8999999999996, "text": " that increases the energy, okay?"}, {"start": 3571.8999999999996, "end": 3576.0, "text": " So this means that this class is gonna be more powerful"}, {"start": 3576.0, "end": 3579.2, "text": " and does not necessarily need to monotonically go towards"}, {"start": 3579.2, "end": 3583.0, "text": " becoming a harmonic signal, and we need that flexibility"}, {"start": 3583.0, "end": 3585.3999999999996, "text": " because we are learning the sheave."}, {"start": 3585.3999999999996, "end": 3587.8999999999996, "text": " We don't know the true sheave,"}, {"start": 3587.8999999999996, "end": 3589.7999999999997, "text": " like we don't know the ground truth sheave,"}, {"start": 3589.8, "end": 3593.5, "text": " and that means that we may end up with some bad sheave"}, {"start": 3593.5, "end": 3596.6000000000004, "text": " whose harmonic signals are not gonna end up"}, {"start": 3596.6000000000004, "end": 3598.8, "text": " becoming an interesting solution,"}, {"start": 3598.8, "end": 3601.4, "text": " meaning we will not have the linear separability,"}, {"start": 3601.4, "end": 3604.6000000000004, "text": " and that's why sometimes we want to actually go away,"}, {"start": 3604.6000000000004, "end": 3608.4, "text": " further away from becoming a harmonic signal."}, {"start": 3608.4, "end": 3611.8, "text": " That's why we wanna have more powerful sheave classes."}, {"start": 3611.8, "end": 3614.8, "text": " So they mentioned here for any connected graph G"}, {"start": 3614.8, "end": 3617.0, "text": " and epsilon bigger than zero,"}, {"start": 3617.0, "end": 3619.8, "text": " there exists a sheave class that is not symmetric,"}, {"start": 3619.8, "end": 3622.8, "text": " that's important, such that basically"}, {"start": 3622.8, "end": 
3627.2, "text": " for arbitrarily small norm of this W matrix"}, {"start": 3627.2, "end": 3632.2, "text": " and feature vector X, the energy is gonna actually increase"}, {"start": 3632.4, "end": 3635.6, "text": " instead of decreasing, and that's the power"}, {"start": 3635.6, "end": 3640.6, "text": " of these more expressive, like basically sheave classes."}, {"start": 3641.0, "end": 3644.1, "text": " So as a summary, not only the sheave diffusion"}, {"start": 3644.1, "end": 3646.5, "text": " is more expressive than heat diffusion,"}, {"start": 3646.5, "end": 3647.9, "text": " as shown in section four,"}, {"start": 3647.9, "end": 3650.5, "text": " but the sheave convolutional networks"}, {"start": 3650.5, "end": 3652.6, "text": " are also more expressive than GCNs"}, {"start": 3652.6, "end": 3655.3, "text": " in the sense that they are generally not constrained"}, {"start": 3655.3, "end": 3657.7, "text": " to decrease the Dirichlet energy."}, {"start": 3657.7, "end": 3659.4, "text": " This gives them greater control"}, {"start": 3659.4, "end": 3662.9, "text": " over their asymptotic behavior than GCNs."}, {"start": 3664.0, "end": 3666.4, "text": " And let's finally wrap up this paper."}, {"start": 3666.4, "end": 3669.0, "text": " It's becoming a very, very long video."}, {"start": 3669.0, "end": 3671.8, "text": " So here is the actual model they propose."}, {"start": 3671.8, "end": 3676.8, "text": " So instead of using the sheave equation,"}, {"start": 3676.8, "end": 3680.1000000000004, "text": " diffusion equation we saw moments ago,"}, {"start": 3680.1000000000004, "end": 3682.6000000000004, "text": " which is, let me just find the one"}, {"start": 3682.6000000000004, "end": 3686.5, "text": " we were looking at before."}, {"start": 3686.5, "end": 3689.6000000000004, "text": " So just a second here, where are you?"}, {"start": 3692.5, "end": 3694.0, "text": " Okay, here it is."}, {"start": 3694.0, "end": 3696.5, "text": " This was the diffusion equation we were looking at."}, {"start": 3697.4, "end": 3699.2000000000003, "text": " So now instead of this one,"}, {"start": 3699.2, "end": 3701.7999999999997, "text": " they actually generalize that diffusion equation"}, {"start": 3701.7999999999997, "end": 3704.8999999999996, "text": " and they end up with something like this."}, {"start": 3704.8999999999996, "end": 3709.8999999999996, "text": " So you can easily show that by using particular values"}, {"start": 3710.3999999999996, "end": 3712.2999999999997, "text": " for W1 and W2."}, {"start": 3712.2999999999997, "end": 3715.3999999999996, "text": " And for example, if you're in the linear regime"}, {"start": 3715.3999999999996, "end": 3717.7, "text": " of the allel nonlinearity,"}, {"start": 3717.7, "end": 3719.7999999999997, "text": " you can show that this is,"}, {"start": 3719.7999999999997, "end": 3724.7999999999997, "text": " you recover the original diffusion equation."}, {"start": 3725.0, "end": 3726.7999999999997, "text": " So they, and then they start from,"}, {"start": 3726.8, "end": 3729.6000000000004, "text": " and then they take this continuous diffusion equation"}, {"start": 3729.6000000000004, "end": 3731.8, "text": " and they discretize it and they end up"}, {"start": 3731.8, "end": 3733.3, "text": " with the model that they use."}, {"start": 3733.3, "end": 3735.0, "text": " And here it is."}, {"start": 3735.0, "end": 3736.2000000000003, "text": " Here's the model they use."}, {"start": 3736.2000000000003, "end": 3738.5, "text": " It's somewhat similar to the previous,"}, 
{"start": 3738.5, "end": 3742.7000000000003, "text": " like basically a sheave convolutional network we saw."}, {"start": 3743.6000000000004, "end": 3745.5, "text": " There is a couple of important differences."}, {"start": 3745.5, "end": 3750.5, "text": " So we still have this F times F matrix"}, {"start": 3750.6000000000004, "end": 3752.2000000000003, "text": " that's mapping the features."}, {"start": 3752.2000000000003, "end": 3753.9, "text": " We have the D times D matrix"}, {"start": 3753.9, "end": 3758.8, "text": " that's like basically mapping the stock vectors"}, {"start": 3758.8, "end": 3760.2000000000003, "text": " into novel representations."}, {"start": 3760.2000000000003, "end": 3762.3, "text": " We have the sheaf loplation."}, {"start": 3762.3, "end": 3763.7000000000003, "text": " So important details are this."}, {"start": 3763.7000000000003, "end": 3765.4, "text": " In contrast, we'll learn a sheaf,"}, {"start": 3765.4, "end": 3766.9, "text": " which makes our model applicable"}, {"start": 3766.9, "end": 3769.1, "text": " to any real world graph data set,"}, {"start": 3769.1, "end": 3771.2000000000003, "text": " even in the absence of a sheave structure."}, {"start": 3771.2000000000003, "end": 3772.9, "text": " So the previous paper,"}, {"start": 3772.9, "end": 3775.1, "text": " they actually hard coded the sheave structure."}, {"start": 3775.1, "end": 3777.1, "text": " So they manually set the restriction maps"}, {"start": 3777.1, "end": 3778.7000000000003, "text": " to certain values, et cetera."}, {"start": 3778.7000000000003, "end": 3780.7000000000003, "text": " Whereas here they are learning the sheaf"}, {"start": 3780.7, "end": 3784.6, "text": " because we'll see in a couple of seconds"}, {"start": 3784.6, "end": 3787.7, "text": " that this contains additional matrix."}, {"start": 3787.7, "end": 3790.8999999999996, "text": " So we'll basically have a three set of matrices,"}, {"start": 3790.8999999999996, "end": 3794.2, "text": " W1, W2, and the one that's hidden here,"}, {"start": 3794.2, "end": 3796.6, "text": " I'm gonna denote it as W3,"}, {"start": 3796.6, "end": 3800.5, "text": " for each of the layers of this neural network."}, {"start": 3800.5, "end": 3802.7, "text": " And the second thing they mentioned is"}, {"start": 3802.7, "end": 3805.2, "text": " our model uses a more residual parameterization"}, {"start": 3805.2, "end": 3807.7999999999997, "text": " of the discretized diffusion process,"}, {"start": 3807.7999999999997, "end": 3809.7999999999997, "text": " which empirically improves its performance."}, {"start": 3809.8, "end": 3813.6000000000004, "text": " Again, residual connections usually work."}, {"start": 3813.6000000000004, "end": 3817.1000000000004, "text": " We know that since 2015, we'd resonate."}, {"start": 3818.0, "end": 3823.0, "text": " Okay, so here is how the sheaf is actually learned."}, {"start": 3824.3, "end": 3828.7000000000003, "text": " They parameterize these restriction maps like this."}, {"start": 3828.7000000000003, "end": 3830.8, "text": " We have this phi of,"}, {"start": 3830.8, "end": 3833.7000000000003, "text": " and we basically input the two nodes."}, {"start": 3833.7000000000003, "end": 3836.1000000000004, "text": " And depending on the features of those nodes,"}, {"start": 3836.1000000000004, "end": 3838.2000000000003, "text": " we're gonna basically output"}, {"start": 3838.2, "end": 3840.3999999999996, "text": " a particular restriction map."}, {"start": 3840.3999999999996, "end": 3843.0, "text": " So it's a map that 
looks like this."}, {"start": 3843.0, "end": 3846.8999999999996, "text": " Basically we concatenate the features of those two nodes."}, {"start": 3846.8999999999996, "end": 3850.7, "text": " We linearly map them with W"}, {"start": 3850.7, "end": 3852.7, "text": " and then we apply some non-linearity."}, {"start": 3852.7, "end": 3855.8999999999996, "text": " And that's how we end up with a restriction map."}, {"start": 3855.8999999999996, "end": 3857.7, "text": " So now you're gonna use these"}, {"start": 3857.7, "end": 3861.7, "text": " to form the graph loplation matrix,"}, {"start": 3861.7, "end": 3866.7, "text": " which is gonna be your basically block type of a matrix"}, {"start": 3866.7, "end": 3870.0, "text": " that's gonna have some blocks here, some blocks here."}, {"start": 3870.0, "end": 3872.2999999999997, "text": " So that's your sheaf loplation."}, {"start": 3872.2999999999997, "end": 3874.5, "text": " And as I said, the parameters here"}, {"start": 3874.5, "end": 3875.8999999999996, "text": " are gonna be learnable this time."}, {"start": 3875.8999999999996, "end": 3877.7, "text": " So you're learning the sheaf structure,"}, {"start": 3877.7, "end": 3879.3999999999996, "text": " which is very, very important."}, {"start": 3879.3999999999996, "end": 3880.3999999999996, "text": " Okay, that's pretty much it."}, {"start": 3880.3999999999996, "end": 3884.0, "text": " They then state here that if you let G be,"}, {"start": 3884.0, "end": 3885.7999999999997, "text": " so this is your graph."}, {"start": 3885.7999999999997, "end": 3887.8999999999996, "text": " That's a final graph with features X."}, {"start": 3887.8999999999996, "end": 3892.8999999999996, "text": " Then if basically the features are different"}, {"start": 3893.0, "end": 3894.5, "text": " for different edges."}, {"start": 3894.5, "end": 3897.3, "text": " So if we have sufficiently different features"}, {"start": 3897.3, "end": 3900.4, "text": " and this phi is an MLP with sufficient capacity,"}, {"start": 3900.4, "end": 3903.6, "text": " they show that phi can learn any sheaf."}, {"start": 3903.6, "end": 3907.6, "text": " So this is very, very probably very expressive architecture."}, {"start": 3908.6, "end": 3912.1, "text": " Finally, they experimented with different sheaf classes."}, {"start": 3912.1, "end": 3916.0, "text": " We saw why one would want to do something like that."}, {"start": 3916.0, "end": 3919.9, "text": " And the ones they've used is diagonal."}, {"start": 3919.9, "end": 3923.1, "text": " Then they've used again the orthogonal ones."}, {"start": 3923.1, "end": 3926.0, "text": " And finally the general ones."}, {"start": 3926.0, "end": 3928.9, "text": " And as you go towards the general ones,"}, {"start": 3928.9, "end": 3931.9, "text": " the computational cost becomes a bit greater."}, {"start": 3931.9, "end": 3934.2999999999997, "text": " For the general one, you have something like this."}, {"start": 3934.2999999999997, "end": 3938.4, "text": " So D cubed and plus M, whereas your regular GCNs,"}, {"start": 3938.4, "end": 3941.6, "text": " your GNNs will have M plus M, just this part here."}, {"start": 3941.6, "end": 3944.2, "text": " So this is the number of nodes plus the number of edges."}, {"start": 3944.2, "end": 3947.2, "text": " So yeah, cubed term, but in practice,"}, {"start": 3947.2, "end": 3949.9, "text": " you'll almost never have to use D that's huge."}, {"start": 3949.9, "end": 3951.2999999999997, "text": " So D is gonna be a small number."}, {"start": 3951.3, "end": 3954.8, 
"text": " So this can be considered in a way as a constant,"}, {"start": 3954.8, "end": 3957.0, "text": " a bigger one, but like still a constant."}, {"start": 3958.0, "end": 3959.2000000000003, "text": " Here are the results."}, {"start": 3960.2000000000003, "end": 3963.8, "text": " They showed that they pretty much on all of these data sets"}, {"start": 3963.8, "end": 3968.0, "text": " that are more heterophilic as basically determined"}, {"start": 3968.0, "end": 3970.5, "text": " by this homophily level."}, {"start": 3970.5, "end": 3974.2000000000003, "text": " You can see 0.11 means this is a very,"}, {"start": 3974.2000000000003, "end": 3975.7000000000003, "text": " very heterophilic data set."}, {"start": 3975.7000000000003, "end": 3978.2000000000003, "text": " They showed that they outperform all of the previous"}, {"start": 3978.2000000000003, "end": 3979.7000000000003, "text": " baselines pretty much."}, {"start": 3979.7, "end": 3982.7, "text": " There are a couple of exceptions, maybe on Chameleon,"}, {"start": 3982.7, "end": 3985.2999999999997, "text": " this GDCN, which was particularly designed"}, {"start": 3985.2999999999997, "end": 3989.2999999999997, "text": " for heterophilic setting, actually outperforms their model."}, {"start": 3989.2999999999997, "end": 3993.1, "text": " And also on homophilic data sets such as sites here,"}, {"start": 3993.1, "end": 3996.2, "text": " PubMed and Quora, they're outperformed by some"}, {"start": 3996.2, "end": 3999.6, "text": " of the other baselines, which have biases that bias them"}, {"start": 3999.6, "end": 4003.2999999999997, "text": " to perform well for the homophilic setting by design."}, {"start": 4003.2999999999997, "end": 4005.7999999999997, "text": " And that's why it's hard to kind of compete with them,"}, {"start": 4005.7999999999997, "end": 4009.6, "text": " even though in theory, these networks can be able to learn"}, {"start": 4009.6, "end": 4013.1, "text": " even better representations and thus outperform these models."}, {"start": 4013.1, "end": 4015.1, "text": " But that's, I guess, the same thing as with Resonance"}, {"start": 4015.1, "end": 4018.9, "text": " and like VGGs, like VGGs can theoretically learn something,"}, {"start": 4018.9, "end": 4022.6, "text": " but unless you provide these hints, like these identity"}, {"start": 4022.6, "end": 4027.2999999999997, "text": " mappings, the skip connections, you will basically not learn"}, {"start": 4027.2999999999997, "end": 4030.5, "text": " the desired representations."}, {"start": 4030.5, "end": 4033.1, "text": " Okay, that was pretty much it."}, {"start": 4033.1, "end": 4035.7999999999997, "text": " Please let me know if you watched this video all the way"}, {"start": 4035.7999999999997, "end": 4037.7999999999997, "text": " to the end, I would really appreciate that."}, {"start": 4037.8, "end": 4040.6000000000004, "text": " If you did, please comment down below so that I know"}, {"start": 4040.6000000000004, "end": 4043.8, "text": " for the future videos where I wanna go into so much details"}, {"start": 4043.8, "end": 4046.0, "text": " and this video took so much time to prepare."}, {"start": 4046.0, "end": 4050.1000000000004, "text": " Like I literally read a lot of papers, a lot of papers"}, {"start": 4050.1000000000004, "end": 4054.0, "text": " to get the decent knowledge so that I'm comfortable"}, {"start": 4054.0, "end": 4055.7000000000003, "text": " explaining this to you guys."}, {"start": 4055.7000000000003, "end": 4057.7000000000003, "text": " So 
hopefully you found it useful."}, {"start": 4057.7000000000003, "end": 4060.0, "text": " If you did, again, please leave a comment down below"}, {"start": 4060.0, "end": 4063.1000000000004, "text": " if you actually watched the video to the very end."}, {"start": 4063.1000000000004, "end": 4065.1000000000004, "text": " If you did, congrats."}, {"start": 4065.1000000000004, "end": 4067.1000000000004, "text": " Consider subscribing to this channel,"}, {"start": 4067.1, "end": 4069.5, "text": " share the video out and also consider joining"}, {"start": 4069.5, "end": 4071.0, "text": " our Discord community."}, {"start": 4071.0, "end": 4072.7, "text": " Until next time, bye bye."}, {"start": 4072.7, "end": 4099.7, "text": " Caitlin's"}]
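For readers skimming the sheaf-diffusion segments above: the sheaf convolution layer they describe (stalk mixing through a Kronecker product, feature mixing through a second weight matrix, then the sheaf Laplacian and a non-linearity) and the learned restriction maps fit in a few lines of NumPy. This is a minimal sketch under the assumption that the layer has the form σ((I − Δ_F)(I_n ⊗ W1) X W2) and that a normalized sheaf Laplacian has already been assembled; the helper names are mine, not from any library.

```python
import numpy as np

def sheaf_conv_layer(X, W1, W2, L_F, sigma=np.tanh):
    """One sheaf convolution layer: sigma((I - L_F) (I_n kron W1) X W2).

    X   : (n*d, f)   n nodes, each with a d-dimensional stalk vector per feature channel
    W1  : (d, d)     mixes stalk dimensions (applied to every node via the Kronecker product)
    W2  : (f, f_out) mixes feature channels
    L_F : (n*d, n*d) normalized sheaf Laplacian built from the restriction maps (assumed given)
    """
    n = X.shape[0] // W1.shape[0]
    block_w1 = np.kron(np.eye(n), W1)                  # block-diagonal copies of W1, (n*d, n*d)
    return sigma((np.eye(X.shape[0]) - L_F) @ block_w1 @ X @ W2)

def learned_restriction_map(x_v, x_u, W_phi, d, sigma=np.tanh):
    """Learned restriction map F = sigma(W_phi [x_v ; x_u]), reshaped to a (d, d) block."""
    return sigma(W_phi @ np.concatenate([x_v, x_u])).reshape(d, d)
```

The second helper is the part that makes the sheaf learnable: each edge gets its own d-by-d restriction map predicted from the two endpoint feature vectors, and those blocks are then stitched into the block-structured sheaf Laplacian described in the segments.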
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=gPMA5CmFcXc
Hyperbolic Graph Convolutional Networks | Geometric ML Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video we dig deep into the hyperbolic graph convolutional networks paper introducing a class of GCNs operating in the hyperbolic space. Hyperbolic GCNs give exceptional results for the class of scale-free/hierarchical/tree-like graphs. I dive deep into differential geometry theory and explain how the method works. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/1910.12933 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro - why the hyperbolic space? 04:00 Graph Convolutional Networks recap 08:50 Hyperbolic space and curvature theory 15:25 Geodesics, exp, and log maps 23:00 Mapping from Euclidean to hyperbolic space 26:35 Feature transform in hyperbolic space 32:47 Aggregregation on the hyperboloid manifold 35:25 Non-linear activation with different curvatures 36:30 Holistic overview of the method 38:20 Results, Ablations, and curvature analysis 41:00 Why does curvature help? 42:05 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphs #graphconvolutionalnetwork #hyperbolicspace
What's cracking guys? In this video I'll be covering hyperbolic graph convolutional neural networks by the authors from the Jure Leskovec group at Stanford and we're going to see that the hyperbolic keyword here in the title is what makes this paper particularly scary and math heavy and I'm gonna also argue that it's not just mathematics for the sake of mathematics. I'm gonna show you why they have chosen to use this exotic geometric space and use it to learn useful embeddings later on for node classification and link prediction tasks. So let me start with the abstract here and then I'm gonna show you a nice visualization that's gonna give you a gut feeling for why we are using hyperbolic spaces in the first place and I'm gonna also get into the mathematical formulas and so if you're not familiar with what curvature of spaces or hyperbolic space is, don't worry. We'll get there. Okay, so let's start here. Graph convolutional neural networks or GCNs for short embed nodes in a graph into Euclidean space which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure, okay? And then they say we derive GCN operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with different trainable curvature at each layer. So my whole goal of this video will be to decode what they just said in this particular sentence here. Okay, so let me just read this one and then I'm gonna tell you what this scale-free graph is. So in particular scale-free graphs have tree-like structure and in such graphs the graph volume, defined as the number of nodes within some radius of a center node, grows exponentially as a function of radius. So basically scale-free networks are just a particular type of networks that have this power law behavior where nodes that have a lot of connections become less and less frequent. As you can see here, asymptotically we have K, which is the number of connections, and the probability of a node having K connections, as K grows, drops according to this power law here. So yeah, basically the main idea actually is this exponential keyword, and if we have graphs that are crowded because of this exponential growth of neighbors, that's when we want to use the hyperbolic space. So now let me give you a visual intuition for why that may be. Okay, let's get to this chart here. Okay, so let's imagine we have our nodes and we have the associated node features and we can always basically visualize node features as points in Euclidean space. So let's now imagine our Euclidean space is this plane here. We have the points here and you can imagine that if we have exponentially more neighbors here, if it's kind of crowded, then mapping to this geometric space which we call hyperbolic space is going to make them more spread out, and that's the basic intuition. Basically by spreading the points which are densely clustered together in the Euclidean space, so by spreading them out in the hyperbolic space, you make it easier to discern, to discriminate between different node features and thus do a better job at classification etc. So that's the main mental model I want you to have throughout this video. And having said that, now let me dig into the mathematical formalisms and let me try and explain what hyperbolic GCNs basically are.
Okay, I'm gonna start with some basic notation and an explanation of what GCNs are. If you've never watched graph ML videos so far, go ahead and check out my playlist on graph ML. In particular go and watch the GCN, so graph convolutional network, paper as well as maybe the graph attention network paper, so the GAT paper. That's gonna get you up to speed. But here I'm just gonna briefly recap what we've seen in those videos. So first of all, the whole point of graph representation learning is to learn these useful representations of nodes and sometimes edges, which we can then use to do classification tasks on nodes, on graphs, on edges, etc. So let me just show you the formalism here. So graph representation learning is this mapping F. So from this set V of nodes, set E of edges, and then we have associated node features. Sometimes we also have edge features, but here they just ignored the edge features. So you can see the notation is the following. So X of I, which means node I. So these are the features, the raw features, because basically zero means the zeroth layer, before we even started applying the graph neural network. And also we have this symbol E which means these are Euclidean raw features of this particular node I. So that's how you treat this symbol here. Okay, and the goal is to find a mapping to this representation Z which is of dimension the number of nodes times D prime, which is the dimensionality of your output node features. Okay, so the whole point is to find good features so that we can discriminate between different classes or do whatever arbitrary task. Okay, so that's graph representation learning. Now quickly on to how graph convolutional neural networks work. Let me just maybe draw a simple example graph here. So we have a node I here. We have a couple of neighbors. So we have some neighbors here. They are connected to node I. Of course these neighbors could have their own neighbors, etc. But let's focus on this particular node I here and let's see how we learn the representation of that particular node. Okay, so what they first do is they create this intermediate representation H. As you can see here, from layer L minus 1, which is the previous layer, we do the mapping using the weight matrix and then we add the bias. So we'll have a separate set of these W's and B's in every single layer of the graph neural network, of the GCN in this particular example. So once we form these intermediate representations, what we do in order to update the feature vector of this node I is we simply sum. So we sum this intermediate representation of node I with a sum over neighbors of weighted intermediate representations. So let me just kind of break it down quickly. Basically what I've said there is the following. So let's imagine we have, so these are the neighbors, we have three neighbors in this particular example. We have certain feature vectors associated with the neighbors. So something like that, and let's imagine these are the H's. So these are the representations that we have after we do this particular mapping here. So we just do a weighted average of those particular feature vectors, we sum them up and then apply the nonlinear activation function in order to get the novel, so layer L, representation of node I. So we're gonna finally end up having H, I for layer L and these are just Euclidean features now.
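If you want to make that recap concrete, here is a minimal NumPy sketch of one such GCN layer. The 1 over square root of d_i times d_j neighbor weighting spelled out right after this passage is baked in as a comment-level assumption, and the function name is just for illustration, not from any library.

```python
import numpy as np

def gcn_layer(X, A, W, b, sigma=np.tanh):
    """One GCN layer: x_i' = sigma(h_i + sum_j w_ij * h_j), with h_i = W x_i + b.

    X : (n, f_in)        node features from the previous layer
    A : (n, n)           adjacency matrix without self-loops
    W : (f_in, f_out), b : (f_out,)  learnable parameters of this layer
    """
    H = X @ W + b                                        # intermediate representations h_i
    deg = A.sum(axis=1)                                  # node degrees d_i
    weights = A / np.sqrt(np.outer(deg, deg) + 1e-12)    # w_ij = 1 / sqrt(d_i * d_j) on edges
    return sigma(H + weights @ H)                        # own term plus weighted neighbor sum
```

Stacking a few of these, with a fresh W and b per layer, gives exactly the Euclidean GCN that the hyperbolic version will later mimic on the manifold.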
Okay, now these W IJs, depending on the architecture, these could be learnable, but that's now getting into the attention spectrum. What the original GCN did here is basically this W IJ was simply 1 over square root of DI times DJ, where D is just the degree of a node. So that means you basically normalize this particular representation depending on the degree of this node and the degree of your target node J. So let's say this is node J in this particular example. Okay, so that's how they've done it in the original paper and that's the GCN formalism. Now for the more important part, let me introduce you to the formalism of hyperbolic spaces and let's see what it's all about. Okay, so hyperbolic spaces have different models, the most famous one is this so-called Lorentz model, also known as the hyperboloid model as you can see here, and there is also this Poincaré ball model of hyperbolic space. So here they are gonna work with the hyperboloid model because they showed it's more stable and so they basically stuck with it. So we are going to be working with the hyperboloid model of hyperbolic space. So they introduce this inner product called the Minkowski inner product. It's a simple mapping: you map a tuple of these D plus one dimensional points into a real number, and this is how the Minkowski product is defined. Basically it's similar to the dot product you're used to and the only difference is basically this minus sign for the first coordinate of the points in this space. Okay, next up, they denote by this HDK the hyperboloid manifold in D dimensions with constant negative curvature minus 1 over K, where K is always bigger than 0. So briefly, just what the curvature of a space is, that may seem very scary but it's not. Basically you're usually used to working in the Euclidean space, so let's imagine we have an example of a Euclidean space, a 2D space like a plane here. If we shoot parallel geodesics, where a geodesic is just a generalization of a straight line to arbitrary spaces, so if we were to shoot them in the Euclidean space here in the plane, you can see that the distance between these two lines is always going to be constant and these two lines will never intersect nor diverge. They'll always stay at the same distance from each other. That's not the case in every single space and that's why we have the notion of curvature. In the case of a sphere, if you shoot two parallel geodesics, you can see they are going to intersect at the North Pole in this particular example, which means when the parallel geodesics intersect we call that space positively curved, and on the other hand if the parallel geodesics diverge then we call that space negatively curved, and you can see here an example of a hyperbolic space. So you can imagine if I were to extrapolate these points here, they are going to diverge, each in its own direction, across this hyperbolic space. So that's the idea with curvature. It's not that fancy, once you have this visual mental picture then everything becomes much easier. Okay, so next up they have this tangent space at point X of the manifold HDK, and let me just now show you how they define those two formally. So again, this hyperboloid model is defined as a set of D plus one dimensional points such that the Minkowski inner product of a point with itself is always going to be constant and equal to minus K, and additionally the first coordinate of these points needs to be positive.
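Since the Minkowski inner product and the hyperboloid membership condition are easy to lose track of, here is a tiny NumPy sketch of both, plus a check that the North Pole point used later in the video really sits on the manifold. Function names and the tolerance are my own choices, not from the paper's code.

```python
import numpy as np

def minkowski_inner(x, y):
    """Like the ordinary dot product, but with a minus sign on the first coordinate."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def on_hyperboloid(x, K, tol=1e-6):
    """x belongs to the hyperboloid H^{d,K} iff <x, x>_M = -K and x_0 > 0."""
    return abs(minkowski_inner(x, x) + K) < tol and x[0] > 0

# The "North Pole" o = (sqrt(K), 0, ..., 0) used later in the video is a valid point:
K, d = 1.0, 2
o = np.concatenate([[np.sqrt(K)], np.zeros(d)])
assert on_hyperboloid(o, K)
```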
So that's how we define the hyperboloid model. Following up, we have this tangent space which is just a set of perpendicular vectors, so a set of vectors orthogonal to your vector X. So again, basically it's a set of D plus one dimensional vectors V such that the inner product, again the Minkowski inner product, equals zero. So the tangent space is a concept you may already be familiar with, so let me give you an analogous case with a sphere. Let's imagine we have a sphere here, and basically this is maybe the North Pole, this is the South Pole. The tangent space at the North Pole would be just a simple plane. It's a plane such that it touches the North Pole. It basically touches the sphere only at a single point and that's at the North Pole. And you can see here that all of the vectors in this particular plane, so let me draw a couple of vectors here, all of these vectors are going to be perpendicular. So if I take the center of the sphere as the origin and if I were to draw, so this is the point X in this formalism, so this is point X, you can see that all of these vectors in the plane are perpendicular, are orthogonal, to this particular vector X. So this is basically a nice way to algebraically describe this notion of a tangent space. We'll soon see why it's convenient, instead of working on the manifold, to be working on a particular tangent space of that manifold. We'll see why that is convenient. Okay, let me continue here and let me introduce a couple more details. So I'm gonna quickly skim over these formalisms because they're not as vital for this paper, but let me just introduce you to the mathematics, it may be interesting. So now for V and W which lie in the tangent space of the hyperboloid model at point X, we define this G of X, which is something called a Riemannian metric tensor, and it's basically defined as the Minkowski inner product, and it's gonna later on allow us to define distances in the tangent space. That's why it's an important construct, and we call this tuple of the hyperboloid model and this Riemannian metric tensor a Riemannian manifold with negative curvature minus 1 over K. Okay, so not as important, just introducing you to some notation. So finally, the tangent space is useful to perform Euclidean operations undefined in hyperbolic space, and we induce the norm of a vector in the tangent space by taking the square root of the inner product, where the inner product is the Minkowski inner product, and that's how we define the norm of vectors, and by doing that we basically have a nice way to define a distance in the tangent space of this particular hyperboloid manifold. Okay, so all of that formalism was basically so that we can understand what exponential and logarithmic maps are. This is going to be the main construct we need to understand in order to understand how hyperbolic GCNs work. Before I get there I need to briefly introduce geodesics which, as they say here, are generalizations of shortest paths in graphs or straight lines in Euclidean geometry. So that's just a generalization of the notion of a shortest path between two points to arbitrary spaces. So let me show you how they define it. Let X be a point on the hyperboloid manifold. Let U be a vector in the tangent space of that point X.
We call that U the unit speed because they make sure that the norm of that particular vector is equal to 1. And then they say the unique unit speed geodesic, denoted like this, is such that the geodesic at point 0 equals this point X on the manifold. Next up we have gamma dot, which just means the velocity vector, evaluated at 0, is going to be equal exactly to that unit speed U. Okay, so let me give you the mental model I have when I think about these geodesics. So imagine we have a curved space such as this one, something like this, and basically imagine we had some type of sheet of paper or something and I'm looking at that sheet from this side, and that's where we get something like this. So now imagine we have point X, which is the point we've been using in this formalism of the geodesics here, and let's imagine we have a unit speed vector U, and note that this U lies in the tangential plane of this point X. So imagine we have a tangential plane here and U just lies in that plane. And so what the geodesic is: imagine we start from this point and we have this velocity vector, and for some amount of time you basically shoot a point from X using this velocity vector and you just trace out where that point will go on this curved space. So basically imagine if we were to travel for one second, maybe we'd end up somewhere here. And we'll see why that is useful, because that basically allows us to map vectors from the tangent space, as you can see here. So we basically just map this vector here, we mapped it onto a particular point on the manifold here. And that's how I like to think about geodesics and exponential and logarithmic maps, which I'm going to introduce in a second. So now, for this particular hyperboloid model, we can see how the geodesic is defined here, basically some complex equation using the hyperbolic cosine and the hyperbolic sine, and it's not even that important, you can treat this as a black box. What is important for you to understand is that as we are changing this parameter T, we're going to be tracing a path along the manifold. So we're going to be tracing a point that's always going to belong to the manifold and not to the tangent space. And here they show how you can calculate the distance between two points on this hyperboloid manifold. Again some complicated equation, we have the Minkowski inner product, we have the hyperbolic arc cosine, so not that important, you can treat it as a black box. What is important is that we have a geodesic and we have a distance. Okay, so now to the more important part and that's the exponential and logarithmic maps. Let's see how these are defined. So we are given a point X on the manifold and a tangent vector V that belongs to the tangent space defined by point X. The exponential map maps from the tangent space of X onto the manifold, onto the hyperboloid manifold. And you can see how it assigns the point: it's basically the geodesic evaluated when you set t equal to 1. And that's precisely what I just explained here. So let me just map this directly here. So here in this notation we have X and V, which means this thing here, we now call it V instead of U. So this is V, this is X, and what we do is we trace out this geodesic.
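Before moving on, the two "black box" formulas mentioned above are short enough to write out. This is a sketch using the standard hyperboloid expressions gamma(t) = cosh(t/sqrt(K)) x + sqrt(K) sinh(t/sqrt(K)) u and d(x, y) = sqrt(K) arcosh(-<x, y>_M / K); I believe these match the ones flashed on screen, but treat the exact constants as an assumption rather than a quote from the paper.

```python
import numpy as np

def minkowski_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def geodesic(x, u, t, K):
    """Unit-speed geodesic starting at x with unit tangent vector u (||u||_M = 1)."""
    s = np.sqrt(K)
    return np.cosh(t / s) * x + s * np.sinh(t / s) * u

def hyperboloid_distance(x, y, K):
    """Distance between two points on the hyperboloid with curvature -1/K."""
    arg = np.clip(-minkowski_inner(x, y) / K, 1.0, None)   # clamp for numerical safety
    return np.sqrt(K) * np.arccosh(arg)
```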
So gamma is the unique geodesic satisfying that at t equals 0 it equals X and the velocity is described by vector V. Which means, as I said, we have a point here at X with the velocity V and we just trace out its path along the manifold, and that's how we map vector V to a novel point on the manifold. So that's your exponential map, basically. Vice versa, we can define a logarithmic map which has the property that if, after applying the exponential map, you apply the log map, you'll end up with the initial vector V. And then they say here: on general Riemannian manifolds these operations are only defined locally, but in the hyperbolic space they form a bijection between the hyperbolic space and the tangent space at a point. Now it might not be apparent why this is relevant, but it is. I'm gonna briefly tell you what this means and that's the following. So for this particular hyperbolic model, you can see it here, even if this tangent space were infinite, every single point on this plane would map to a unique point on this particular hyperbolic manifold. And that would not be the case for an arbitrary general Riemannian manifold. So let me show you a counterexample. Let's imagine we have a sphere here and let's imagine that at the North Pole we have a tangent space, and let's imagine it's just an infinite tangent space. And so you can imagine that if we had a vector such as this one, okay, so if we were to trace out the geodesic here by applying the exponential map, we'd end up maybe mapping this vector to this point here. Okay, but now the thing is, because of how this manifold looks, it's a sphere, if we had maybe 3x the size of this vector we'd end up doing a full circle and then one more half, and we'd end up at the same point here, which means we've mapped these two vectors from the plane onto the same point. So both of these map to the same point and we no longer have a bijection, and that's a problematic property if you want to learn embeddings. Okay, so that's everything you need to understand. Now they just show how these exponential and logarithmic maps look for this particular example of using a hyperboloid manifold. You can just see they're using hyperbolic cosines etc., but the main idea is the thing I just explained to you. Okay, now that we have this differential geometry under our belt, let's now dig into the actual model and understand how it works. Okay, so this should now be fairly straightforward. The first step we need to do is, given our node feature vectors which are in the Euclidean space initially, we first want to map them into hyperbolic space. Okay, so how we do that is the following. They define something called the North Pole of this hyperboloid model and they define it like this. You can see this bold O: the first coordinate is the square root of K, everything else is zero, and this is the North Pole of the hyperboloid.
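And here is what the exponential and logarithmic maps look like as code, again a sketch using the standard hyperboloid formulas rather than anything lifted from the authors' repo: exp shoots a tangent vector along its geodesic, log recovers the tangent vector at x that points towards y.

```python
import numpy as np

def minkowski_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v, K):
    """Map a tangent vector v at x onto the hyperboloid (the 'shooting' picture above)."""
    s = np.sqrt(K)
    nv = np.sqrt(max(minkowski_inner(v, v), 1e-15))      # Minkowski norm of v
    return np.cosh(nv / s) * x + s * np.sinh(nv / s) * v / nv

def log_map(x, y, K):
    """Inverse of exp_map: the tangent vector at x pointing towards y, with length d(x, y)."""
    s = np.sqrt(K)
    u = y + (1.0 / K) * minkowski_inner(x, y) * x        # project y onto the tangent space at x
    nu = np.sqrt(max(minkowski_inner(u, u), 1e-15))
    dist = s * np.arccosh(np.clip(-minkowski_inner(x, y) / K, 1.0, None))
    return dist * u / nu
```

A round trip exp_map(x, log_map(x, y, K), K) should give back y up to floating point error, which is exactly the bijection property emphasized above and the reason the hyperboloid (unlike the sphere counterexample) is safe for learning embeddings.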
Okay, now that we have this differential geometry under our belt, let's dig into the actual model and understand how it works. This should now be fairly straightforward. The first step: given our node feature vectors, which initially live in Euclidean space, we want to map them into hyperbolic space, and we do that as follows. They define something called the North Pole of the hyperboloid model, the bold o, whose first coordinate is the square root of K and all other coordinates are zero. Why that is important: if we augment our Euclidean feature vector by prepending a zero as the zeroth coordinate, then the Minkowski inner product between this augmented point and the origin is zero, because the zeroth coordinates multiply to zero and all of the origin's remaining coordinates are zero, so no matter what is in X we're summing up a bunch of zeros, which yields zero. The semantics behind that expression is that the augmented point is orthogonal to the origin, i.e. it lies in the tangent space of the North Pole of the hyperboloid; that's what an inner product of zero means. So they interpret this augmented point as a point in the tangent space of the North Pole, and then they apply the exponential map at the North Pole to finally get the hyperboloid embeddings. Let me quickly show you how to think about this, since it's fairly easy given the diagram up here. This plane is the tangent space of the North Pole; imagine the red dot is the North Pole of this hyperboloid model. We now want to map an arbitrary point from the tangent space onto the hyperboloid. So imagine this is the point we're trying to map, the one for which we want a corresponding point on the manifold. We have this vector here, the plane touches the hyperboloid at a single point (it's a tangent space, as I said), and we just do the exponential map: we start from the North Pole, shoot a point in that direction with that velocity vector, it traces out this particular geodesic, and we end up at this point, which is why this point maps onto that one. Once you understand the visualization it really is fairly trivial. Okay, so that's the first step of the hyperbolic GCN model: we map Euclidean points into hyperboloid points.
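In code, that first Euclidean-to-hyperboloid step could look something like the following sketch (reusing the expmap helper from above; the function name is mine):

```python
def to_hyperboloid(x_euclidean, K=1.0):
    # The "North Pole" (origin) of the hyperboloid: o = (sqrt(K), 0, ..., 0).
    o = np.zeros(x_euclidean.shape[0] + 1)
    o[0] = np.sqrt(K)
    # Prepend a 0 so the augmented feature lies in the tangent space at o ...
    v = np.concatenate(([0.0], x_euclidean))
    # ... and shoot it onto the manifold with the exponential map at the origin.
    return expmap(o, v, K)
```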
The next step is to find operations in hyperbolic space that are equivalent to what a GCN does in Euclidean space, which means we first need to understand how to do feature transforms in hyperbolic space. Let's see what they say: we now want to learn transformations of points on the hyperboloid manifold, however there is no notion of vector space structure in hyperbolic space. The main thing to have in mind here, I think, is that the closure property is violated. What I mean by that is: if you are on a plane and you add two arbitrary vectors, you still end up in the plane; that's the closure property. Going back up to the drawing, if we take these two vectors in the plane and add them, we end up with another vector that still lies in the plane. But if we did the same thing on the hyperboloid model, we would violate the closure property, and that's why we don't have a notion of a vector space on this hyperbolic space. That's how I understand it; I may be wrong, but that's my best understanding of it. Okay, let's continue. The main idea is to leverage the exponential and log maps so that we can use the tangent space to perform Euclidean transformations. Let me break it down. What they do is use the log map to get from the hyperboloid point back onto the tangent space of the North Pole, so we are always mapping onto the tangent space of the North Pole. Once we have a point there, we apply the linear mapping W, which may reduce or increase the dimension; that's why we have d prime. Finally we apply the exponential map again: we take whatever point we ended up with, form the vector between the origin and that point, and do the usual shooting metaphor, which puts us back on the manifold. So again: we map from the manifold onto the tangent space, do the linear transform there, and map the result back onto the manifold. From here on I'll stop re-explaining the exponential and log maps and assume you understand how we move between the manifold and the tangent space. Now we need to add the concept of a bias, and here is how. They define the bias b as a Euclidean vector located in the tangent space of the North Pole of the hyperboloid model, and then they do something called parallel transport: parallel transport takes the bias vector that lies in the tangent space of the North Pole and moves it, in a geometry-preserving way (I won't get into the formalism of parallel transport), into the tangent space of the point X, the point of interest that we're currently transforming, and then we apply the exponential map starting from that point X.
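Here is a rough sketch of that feature transform and bias, in the same spirit as the snippets above. The parallel-transport expression is a standard closed form for the hyperboloid that I'm writing from memory, so treat the whole block as an assumption-laden illustration rather than the paper's exact implementation; in particular, real implementations handle the zeroth tangent coordinate more carefully than this.

```python
def origin(dim, K=1.0):
    o = np.zeros(dim)
    o[0] = np.sqrt(K)
    return o

def hyperbolic_linear(x_h, W, K=1.0):
    # W (x) x_h := exp_o( W @ log_o(x_h) ): the linear map is done in the origin's tangent space.
    v = W @ logmap(origin(x_h.shape[0], K), x_h, K)
    v[0] = 0.0   # crude way to keep the result tangent at the (possibly new-dimensional) origin
    return expmap(origin(v.shape[0], K), v, K)

def parallel_transport(x, y, v, K=1.0):
    # Transport the tangent vector v from T_x H to T_y H along the connecting geodesic.
    return v + minkowski_dot(y, v) / (K - minkowski_dot(x, y)) * (x + y)

def hyperbolic_bias(x_h, b, K=1.0):
    # b lives in the tangent space of the North Pole; transport it to T_{x_h} H,
    # then shoot it with the exponential map based at x_h.
    b_at_x = parallel_transport(origin(x_h.shape[0], K), x_h, b, K)
    return expmap(x_h, b_at_x, K)
```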
Now this may sound confusing, but it has a very simple visual interpretation and semantics; it's just hard to get from the equation on a first read, so let me go back to our diagram. Let me delete a couple of things to reduce the clutter, and let's try to understand how the biasing works. So this is the tangent space of the North Pole, and imagine we have a bias vector somewhere here, maybe something like this; remember it's a learnable vector, since we are trying to learn weights and biases. Now imagine we have a point that we are trying to transform; I'll pick a single one, say this one, which is mapped onto this point on the manifold, and that point has an associated tangent space. I'll try to draw it, and it will probably fail miserably: a tangent space that touches the manifold only at this particular point. What we're going to do is take the bias vector b and parallel transport it onto the tangent space of that point, so it now lies maybe somewhere here, and once we have that we just apply the exponential map, which means we take this point and, thanks to the bias vector and the exponential map, we end up over here. We have just successfully added a bias in hyperbolic space; that's what we've done. Let me explain this once more, because it's hard to visualize and I did not do a great job of drawing it: we have a bias vector in the tangent space of the North Pole, we parallel transport it into the different tangent space that corresponds to this particular point of interest, and once the bias vector is there we do the shooting, i.e. the exponential map, so that from this orange point we end up here, and thus, as I said, we've successfully managed to add a bias in hyperbolic space. That's the best I can do in one attempt. Okay, so let's get back to section 4.2: we've successfully defined a linear mapping in the tangent space and we've successfully defined the bias as well. Next up we need to define neighborhood aggregation, so let me jump straight to the equations. Given two nodes x_i and x_j, we apply the log map, which means we end up in the tangent space of the North Pole; once we are there we concatenate the two vectors, as the symbol here indicates, and pass them through an MLP. We do that for all j's in the neighborhood of node i and then apply a softmax, and that's how we end up with the attention coefficients w_ij. Once we have those coefficients, we use them to do the aggregation in the following way: we take the hyperbolic embedding vectors of the neighbors x_j and apply the logarithmic map, but this time the map is based at the point x_i and not at the North Pole (we'll see why in a moment). Now that we're in that tangent space, we can do the simple scaling with the w_ij's, sum everything up, end up with a resultant vector in the tangent space, and map it back using the exponential map. That's how we end up with a new representation for this particular node x_i.
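A rough sketch of that attention-based aggregation, with my own names and a generic mlp callable standing in for whatever attention network is actually used; it is only meant to mirror the equations, not the paper's code:

```python
def attention_aggregate(x_i, neighbors, mlp, K=1.0):
    # neighbors: list of hyperbolic embeddings x_j of the neighbors of node i.
    # mlp: callable mapping a concatenated tangent-space pair to a scalar logit.
    o = origin(x_i.shape[0], K)
    logits = np.array([
        mlp(np.concatenate([logmap(o, x_i, K), logmap(o, x_j, K)]))
        for x_j in neighbors
    ])
    w = np.exp(logits - logits.max())
    w = w / w.sum()                                  # softmax over the neighborhood
    # Weighted sum carried out in the tangent space of x_i itself, not of the origin.
    agg = sum(w_ij * logmap(x_i, x_j, K) for w_ij, x_j in zip(w, neighbors))
    return expmap(x_i, agg, K)                       # map the result back onto the manifold
```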
Because this was the first time we did not use the North Pole as the base of the mapping, let me explain this part a bit better. The paper notes that the proposed aggregation is performed directly in the tangent space of each center point x_i, as this is where the Euclidean approximation is best, and their ablation experiments show that this local aggregation outperforms aggregation in the tangent space at the origin, because relative distances have lower distortion in their approach. What this means, going back to the example I showed you before, is the following: focus again on that particular orange point and its tangent space; we map all of the relevant points on the manifold onto that tangent space and then do the aggregation there, instead of using the tangent space of the origin. That's the difference, and as they said, their ablations show that doing this is better than aggregating in the origin's tangent space. Okay, let me go back and finish by explaining the nonlinear activations, i.e. how a nonlinearity is defined on a curved space. We take the embedding vector of interest, which lies on the hyperboloid, do the log map so that it lands in the tangent space of the North Pole, apply the nonlinearity (usually something like ReLU), and then apply the exp map, returning it back onto the manifold. The thing to notice here is that the two curvatures, K_{l-1} and K_l, might be different, and they are in fact learnable parameters in the hyperbolic GCN model. They can do this because the mathematics works out; as they say, fortunately tangent spaces of the North Pole are shared across hyperboloid manifolds of the same dimension that have different curvatures, which makes this equation (equation 10) mathematically correct. Okay, that's pretty much it; now let's glue everything back together and get a holistic overview of what's going on, because this was a lot of mathematics, differential geometry and details. There are a couple of steps. First, we map from Euclidean space onto hyperbolic space. Second, once the feature vectors are in hyperbolic space, we apply these special feature transforms and the bias; there is a lot of back and forth between the manifold and the tangent space, and most of the work happens in the tangent space: we map the feature from hyperbolic space into the tangent space, apply the linear transformation, map it back onto the manifold, and then shift it using the bias vector, after which it still lies on the manifold. We do that for all of the node features of our graph. Third, we do the smart aggregation, whereby again we map onto tangent spaces, but this time the tangent space is defined by the particular node i whose representation we're computing, not by the North Pole; we do the weighted sum in that tangent space and map the resultant point back onto the manifold, and that's how we get the y_i's. Finally, we apply the nonlinearity, again going back and forth between the tangent space and the manifold, squeezing your regular non-linearity in between these mappings.
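Gluing the earlier snippets together, one full HGCN-style layer might then look roughly like this sketch (hypothetical function and variable names, fixed rather than learnable curvatures, and a plain ReLU; it is only meant to summarize the four steps above, not to reproduce the authors' code):

```python
def hyperbolic_activation(x_h, sigma, K_prev, K_next):
    # Log-map with the previous layer's curvature, apply the Euclidean nonlinearity
    # in the origin's tangent space, then exp-map with the next layer's curvature.
    d = x_h.shape[0]
    return expmap(origin(d, K_next), sigma(logmap(origin(d, K_prev), x_h, K_prev)), K_next)

def hgcn_layer(X_h, adjacency, W, b, mlp, K_prev, K_next):
    # X_h: list of hyperbolic node embeddings; adjacency: dict node id -> neighbor ids.
    relu = lambda v: np.maximum(v, 0.0)
    # Step 2: hyperbolic feature transform followed by the hyperbolic bias.
    H = [hyperbolic_bias(hyperbolic_linear(x, W, K_prev), b, K_prev) for x in X_h]
    out = []
    for i, h_i in enumerate(H):
        # Step 3: attention aggregation in the tangent space of node i.
        y_i = attention_aggregate(h_i, [H[j] for j in adjacency[i]], mlp, K_prev)
        # Step 4: nonlinearity squeezed between the two curvatures.
        out.append(hyperbolic_activation(y_i, relu, K_prev, K_next))
    return out
```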
Okay, that's it. It looks complicated, but it actually is not that complicated once you're familiar with differential geometry and once this sinks in. So let me now briefly walk you through the results. They showed that for a particular class of graphs with a low hyperbolicity value delta, which is a fancy way of saying graphs that are tree-like in nature, HGCN, the model introduced in this paper, outperforms all of the previous baselines: GNNs such as GCN, GAT, GraphSAGE and SGC, as well as plain neural networks and some shallow embedding methods. They also show that as the hyperbolicity constant gets higher, meaning the graphs become less and less tree-like, the results get worse. The final results I want to show you are the ablations. The ablation they've done is performing the attention aggregation in the North Pole's tangent space instead of using the x_i's to form the tangent space, and the column C just denotes whether they use trainable curvatures or not. They show that using both the trainable curvatures and the aggregation in the tangent spaces of the x_i's gives the best results across various datasets. Additionally, they show that for this particular dataset, Disease, the higher the curvature of the hyperboloid model, basically the better the results. Let me show you how to parse this chart: if K is large, say 10 to the power of 3, i.e. a thousand, you end up at minus 3 on the horizontal axis, and you can see that for K of a thousand the metric here is lower than if K were much smaller; if K were, say, 1 over 10, that's 10 to the power of minus 1, which maps over here, so for 1 over 10, i.e. for a high curvature, we get better performance. And this ties back nicely to the visualization and explanation I started with: if we have crowded points in Euclidean space, then the higher the curvature of this particular hyperboloid model (I'm having such a hard time pronouncing that word, so let me just draw it), i.e. if the surface were even more curved, something like this, the bigger the separation would be, because even a small perturbation in the Euclidean space would cause two points to be mapped onto very distant points on the actual manifold; maybe this one would be mapped here and that one even further out. The more the curvature grows in the negative direction, the bigger this distance grows, so higher curvatures basically help us spread out the feature vectors. Okay, that was my best attempt to explain this paper. There is a lot of mathematics, and I'm not an expert in differential geometry; I know enough to understand on an intuitive level how this works, and it's hard to visualize all of this. I hope this video helped you understand this model a bit better. If it did, consider sharing the video, consider subscribing to this channel, and finally, join our Discord community.
Until next time, bye bye!
[{"start": 0.0, "end": 4.2, "text": " What's cracking guys? In this video I'll be covering hyperbolic graph"}, {"start": 4.2, "end": 9.200000000000001, "text": " convolutional neural networks by the authors from the Yura Leskovitz group at"}, {"start": 9.200000000000001, "end": 15.0, "text": " Stanford and we're going to see that hyperbolic keyword here in the title is"}, {"start": 15.0, "end": 20.28, "text": " what makes this paper particularly scary and math heavy and I'm gonna also argue"}, {"start": 20.28, "end": 23.28, "text": " that it's not just mathematics for the sake of mathematics. I'm gonna show you"}, {"start": 23.28, "end": 30.720000000000002, "text": " why they have chosen to use this exotic geometric space and use it"}, {"start": 30.720000000000002, "end": 36.24, "text": " to learn like useful embeddings later on for node classification and link"}, {"start": 36.24, "end": 41.2, "text": " prediction tasks. So let me start with abstract here and then I'm gonna show"}, {"start": 41.2, "end": 45.16, "text": " you a nice visualization that's gonna give you like a gut feeling for why we"}, {"start": 45.16, "end": 49.8, "text": " are using hyperbolic spaces in the first place and I'm gonna also"}, {"start": 49.8, "end": 53.64, "text": " get into the mathematical formulas and so if you're not familiar with what like"}, {"start": 53.64, "end": 58.28, "text": " curvature of spaces, what hyperbolic space is, don't worry. So we'll get there."}, {"start": 58.28, "end": 63.4, "text": " Okay, so let's start here. Graph convolutional neural networks or GCNs"}, {"start": 63.4, "end": 68.84, "text": " for short embed nodes in a graph into Euclidean space which has been shown to"}, {"start": 68.84, "end": 75.8, "text": " incur a large distortion when embedding real-world graphs with scale-free or"}, {"start": 75.8, "end": 81.44, "text": " hierarchical structure, okay? And then they say we derive GCNs operations in"}, {"start": 81.44, "end": 86.96, "text": " the hyperboloid model of hyperbolic space and map Euclidean input features to"}, {"start": 86.96, "end": 91.0, "text": " embeddings in hyperbolic spaces with different trainable curvature at each"}, {"start": 91.0, "end": 96.88, "text": " layer. So my whole goal of this video will be to decode what like the thing"}, {"start": 96.88, "end": 102.16, "text": " they just said in this in this particular sentence here. Okay, so let me"}, {"start": 102.16, "end": 106.67999999999999, "text": " just read this one and then I'm gonna tell you what this scale-free graph is."}, {"start": 106.67999999999999, "end": 111.47999999999999, "text": " So in particular scale-free graphs have tree-like structure and in such graphs"}, {"start": 111.47999999999999, "end": 116.08, "text": " the graph volume defined as the number of nodes with some radius to a center"}, {"start": 116.08, "end": 123.08, "text": " node grows exponentially as a function of radius. So basically scale-free"}, {"start": 123.08, "end": 128.68, "text": " networks are just a particular type of networks that have this this power law"}, {"start": 128.68, "end": 134.36, "text": " behavior where nodes that have a lot of connections become less and less"}, {"start": 134.36, "end": 138.08, "text": " frequent. As you can see here asymptotically we have K which is the"}, {"start": 138.08, "end": 143.08, "text": " number of nodes. 
We have that the probability of a node having K"}, {"start": 143.08, "end": 148.20000000000002, "text": " connections as K grows you can see that like basically that probability drops"}, {"start": 148.20000000000002, "end": 154.12, "text": " according to this power law here. So yeah, basically the main idea actually is"}, {"start": 154.12, "end": 160.16, "text": " that this exponential keyword and if we have like graphs that are crowded"}, {"start": 160.16, "end": 164.52, "text": " because of this exponential growth of neighbors that's when we want to use the"}, {"start": 164.52, "end": 168.4, "text": " hyperbolic space. So now let me give you like a visual intuition for why that may"}, {"start": 168.4, "end": 173.6, "text": " be. Okay, let's get to this chart here. Okay, so let's imagine we have our nodes"}, {"start": 173.6, "end": 178.6, "text": " and we have the associated node features and we can always basically visualize"}, {"start": 178.6, "end": 183.08, "text": " node features as points in like Euclidean space. So let's now imagine our"}, {"start": 183.08, "end": 187.84, "text": " Euclidean space is this plane here. We have the points here and you can"}, {"start": 187.84, "end": 192.8, "text": " imagine that if we have like exponentially more neighbors here if"}, {"start": 192.8, "end": 197.8, "text": " it's kind of crowded then mapping to this like geometric space which we call"}, {"start": 197.8, "end": 205.04000000000002, "text": " hyperbolic space is going to make them more spread out and like that's the"}, {"start": 205.04000000000002, "end": 209.96, "text": " basic intuition. Basically by spreading the points which are densely clustered"}, {"start": 209.96, "end": 214.28, "text": " together in the Euclidean space so by spreading them out in the hyperbolic"}, {"start": 214.28, "end": 219.12, "text": " space you make it easier to discern to discriminate between different node"}, {"start": 219.12, "end": 225.36, "text": " features and thus do better job at classification etc. So that's the"}, {"start": 225.36, "end": 232.36, "text": " main mental model I want you to have throughout this video. And having said"}, {"start": 232.36, "end": 237.52, "text": " that now let me dig into like mathematical formalisms and let me try"}, {"start": 237.52, "end": 242.56, "text": " and explain what the hyperbolic GCNs basically are. Okay, I'm gonna start"}, {"start": 242.56, "end": 248.88, "text": " with some basic notation and explanation of what GCNs are. If you've"}, {"start": 248.88, "end": 253.44, "text": " never watched graph ML videos so far go ahead and check out my playlist on graph"}, {"start": 253.44, "end": 257.92, "text": " ML. Like in particular go and watch the GCNs, so graph convolutional network"}, {"start": 257.92, "end": 264.0, "text": " paper as well as maybe graph attention and network paper, so GATT paper. So"}, {"start": 264.0, "end": 268.96, "text": " that's gonna get you up to speed. But here I'm just gonna briefly recap what"}, {"start": 268.96, "end": 274.12, "text": " we've seen in those videos. So first of all the whole point of graph like"}, {"start": 274.12, "end": 279.28, "text": " representation learning is to learn these useful representations of nodes"}, {"start": 279.28, "end": 283.92, "text": " and sometimes edges which we can then use to do classification tasks on nodes"}, {"start": 283.92, "end": 289.0, "text": " on graphs on edges etc. So let me just show you the formalism here. 
So graph"}, {"start": 289.0, "end": 295.44, "text": " representation learning is this mapping F. So from this set V of nodes, set E"}, {"start": 295.44, "end": 300.8, "text": " of edges and then we have associated node features. Sometimes we also have edge"}, {"start": 300.8, "end": 305.24, "text": " features but here I'm just gonna they just ignored the edge features. So you"}, {"start": 305.24, "end": 310.88, "text": " can see the notation is the following. So X of I which means node I. So these are"}, {"start": 310.88, "end": 317.14, "text": " the features at the raw features because basically zero means zero layer before"}, {"start": 317.14, "end": 322.47999999999996, "text": " we even started applying the graph neural network. So and also we have this"}, {"start": 322.47999999999996, "end": 327.68, "text": " symbol E which means these are Euclidean raw features of this particular node I."}, {"start": 327.68, "end": 333.2, "text": " So that's how you treat this symbol here. Okay and the goal is to find like a"}, {"start": 333.2, "end": 339.58, "text": " mapping to this basically a representation Z which is of dimension"}, {"start": 339.58, "end": 344.44, "text": " the number of nodes times D prime which is the dimensionality of your output"}, {"start": 344.44, "end": 351.52, "text": " node features. Okay so the whole point is to find like a basically good features"}, {"start": 351.52, "end": 355.56, "text": " so that we can discriminate between different classes or do whatever"}, {"start": 355.56, "end": 360.92, "text": " arbitrary task. Okay so that's the graph representation learning. Now quickly on"}, {"start": 360.92, "end": 366.68, "text": " to how graph convolutional neural networks work. Let me just maybe draw a"}, {"start": 366.68, "end": 372.24, "text": " simple example graph here. So we have a node I here. We have a couple of"}, {"start": 372.24, "end": 377.44, "text": " neighbors. So we have some neighbors here. They are connected to node I. Of course"}, {"start": 377.44, "end": 382.6, "text": " these neighbors could have their own neighbors etc etc. But let's focus on"}, {"start": 382.6, "end": 387.08, "text": " this particular node I here and let's see how do we learn the representation"}, {"start": 387.08, "end": 391.84000000000003, "text": " of that particular node. Okay so what they first do is they create this"}, {"start": 391.84000000000003, "end": 397.68, "text": " intermediate representation H. So I can see here from layer L minus 1 which is"}, {"start": 397.68, "end": 402.36, "text": " the previous layer. We do the mapping using the weight matrix and then we add"}, {"start": 402.36, "end": 408.16, "text": " the bias. So we'll have like a separate set of these W's and B's in every single"}, {"start": 408.16, "end": 413.64, "text": " layer of the graph neural network of the GCN in this particular example. So once"}, {"start": 413.64, "end": 417.4, "text": " we form these intermediate representations what we do in order to"}, {"start": 417.4, "end": 424.16, "text": " update the feature vector of this node I is we simply sum. So we sum the"}, {"start": 424.16, "end": 431.48, "text": " this intermediate representation of node I with like a sum over neighbors of"}, {"start": 431.48, "end": 435.64000000000004, "text": " weighted intermediate representations. So let me just kind of break it down"}, {"start": 435.64000000000004, "end": 439.92, "text": " quickly. Basically what I've said there is the following. 
So let's imagine we"}, {"start": 439.92, "end": 443.84000000000003, "text": " have so these are the neighbors. We have three neighbors in this particular"}, {"start": 443.84000000000003, "end": 448.92, "text": " example. We have certain basically feature vectors associated with the"}, {"start": 448.92, "end": 453.40000000000003, "text": " neighbors. So something like that and let's imagine these are the H's. So these"}, {"start": 453.4, "end": 458.35999999999996, "text": " are the the representations that we have after we do this particular mapping here."}, {"start": 458.35999999999996, "end": 463.4, "text": " So we just do weighted average of those of those particular feature vectors. We"}, {"start": 463.4, "end": 469.47999999999996, "text": " sum them up and then apply the nonlinear activation function in order to get"}, {"start": 469.47999999999996, "end": 475.84, "text": " basically novel so layer L representation of node I. So we're gonna"}, {"start": 475.84, "end": 484.44, "text": " finally end up having like H, I for layer L and these are just Euclidean"}, {"start": 484.44, "end": 491.15999999999997, "text": " features now. Okay now these W IJs depending on the architecture these"}, {"start": 491.15999999999997, "end": 495.91999999999996, "text": " could be learnable but that's now getting into the attention spectrum. What"}, {"start": 495.91999999999996, "end": 503.67999999999995, "text": " the original GCN did here is basically so this this W IJ was simply 1 over"}, {"start": 503.68, "end": 511.12, "text": " square root DI times DJ where D is just a degree of a node. So that means you"}, {"start": 511.12, "end": 517.32, "text": " basically normalize this particular presentation depending on the degree of"}, {"start": 517.32, "end": 521.52, "text": " this node and degree of your target node J. So let's say this is node J in this"}, {"start": 521.52, "end": 525.76, "text": " particular example. Okay so that's how they've done it in the"}, {"start": 525.76, "end": 530.26, "text": " original paper and that's the GCN formalism. Now for the more important"}, {"start": 530.26, "end": 534.52, "text": " part let me introduce you to that formalism of hyperbolic spaces and let's"}, {"start": 534.52, "end": 539.14, "text": " see what it's all about. Okay so hyperbolic spaces have different models"}, {"start": 539.14, "end": 543.48, "text": " the most famous ones is this so-called Lorentz model also known as the"}, {"start": 543.48, "end": 546.88, "text": " hyperboloid model as you can see here and there is also this Poincar\u00e9"}, {"start": 546.88, "end": 551.8, "text": " ball model of hyperbolic space. So here they are gonna work with the hyperboloid"}, {"start": 551.8, "end": 556.06, "text": " model because they showed it's more stable and so they basically stuck"}, {"start": 556.06, "end": 560.9599999999999, "text": " with it. So we are going to be working with the hyperboloid model of hyperbolic"}, {"start": 560.9599999999999, "end": 567.7199999999999, "text": " space. So they introduced this like inner product called Minkowski inner product."}, {"start": 567.7199999999999, "end": 574.78, "text": " So it's a simple mapping of you map two topple of these D plus one dimensional"}, {"start": 574.78, "end": 580.56, "text": " points into the real basically into a real number and this is how Minkowski"}, {"start": 580.56, "end": 585.56, "text": " product is defined. 
Basically it's similar to your dot product to"}, {"start": 585.56, "end": 590.4399999999999, "text": " which you're used to and the only difference is basically this addition"}, {"start": 590.4399999999999, "end": 596.92, "text": " of a minus sign for the first coordinate of the points in this space. Okay next up"}, {"start": 596.92, "end": 602.5999999999999, "text": " they denote by this HDK they denote the hyperboloid manifold in D"}, {"start": 602.5999999999999, "end": 608.1199999999999, "text": " dimensions with constant negative curvature minus 1 over K where K is"}, {"start": 608.1199999999999, "end": 613.8, "text": " always bigger than 0. So briefly just what a curvature of space is that may"}, {"start": 613.8, "end": 619.1999999999999, "text": " seem like very scary but it's not. Basically you're usually used to working"}, {"start": 619.1999999999999, "end": 624.28, "text": " in the Euclidean space so let's imagine we have like a example of a Euclidean"}, {"start": 624.28, "end": 630.56, "text": " space a 2d space like a plane here. If we shoot parallel geodesics where"}, {"start": 630.56, "end": 635.56, "text": " geodesic is just a generalization of like a straight line to arbitrary spaces."}, {"start": 635.56, "end": 640.04, "text": " So if we were to shoot them in the Euclidean space here in the plane you"}, {"start": 640.04, "end": 643.8399999999999, "text": " can see that the distance between these two lines is always going to be"}, {"start": 643.8399999999999, "end": 647.56, "text": " constant and these two lines will never intersect nor diverge. They'll always"}, {"start": 647.56, "end": 651.28, "text": " stay at the same distance from each other. That's not the case in every single"}, {"start": 651.28, "end": 655.2199999999999, "text": " space and that's why we have the notion of curvature. In the case of a sphere if"}, {"start": 655.2199999999999, "end": 659.24, "text": " you shoot two parallel geodesics you can see they are going to intersect at the"}, {"start": 659.24, "end": 664.48, "text": " North Pole in this particular example which means when the"}, {"start": 664.48, "end": 669.88, "text": " parallel geodesics intersect we call that space positively curved and on the"}, {"start": 669.88, "end": 675.16, "text": " other hand if the parallel geodesics diverge then we call that space"}, {"start": 675.16, "end": 679.48, "text": " negatively curved and you can see here an example of a hyperbolic space here. So"}, {"start": 679.48, "end": 684.24, "text": " you can imagine if I were to extrapolate these points here they are going to"}, {"start": 684.24, "end": 688.88, "text": " diverge each in its own direction here across this hyperbolic space. So that's"}, {"start": 688.88, "end": 693.0, "text": " the idea with curvature. It's not that fancy just once you have this"}, {"start": 693.0, "end": 698.4, "text": " visual mental picture then everything becomes much easier. Okay so"}, {"start": 698.4, "end": 704.56, "text": " next up they have this tangent space at point X of the manifold HDK and"}, {"start": 704.56, "end": 710.12, "text": " let me just now show you how they define those two formally. 
So again this"}, {"start": 710.12, "end": 716.4, "text": " hyperboloid model is defined as a set of D plus one dimensional points such that"}, {"start": 716.4, "end": 721.88, "text": " the Minkowski inner product between a point with itself is basically always"}, {"start": 721.88, "end": 726.6, "text": " going to be constant and equals to minus K and additionally the first coordinate"}, {"start": 726.6, "end": 731.64, "text": " of these points needs to be positive. So that's how we define the"}, {"start": 731.64, "end": 737.08, "text": " hyperboloid model. Following up we have this tangent space which is just like a"}, {"start": 737.08, "end": 743.52, "text": " set of perpendicular vectors. So a set of orthogonal vectors to your vector X. So"}, {"start": 743.52, "end": 749.44, "text": " again basically it's a set of vectors V, D plus one dimensional vectors V such"}, {"start": 749.44, "end": 753.52, "text": " that the inner product again Minkowski inner product equals zero. So tangent"}, {"start": 753.52, "end": 757.48, "text": " space is a concept you're familiar with like even though this is just like a let"}, {"start": 757.48, "end": 763.52, "text": " me give you a like an analogous case in the case of like a sphere. Let's"}, {"start": 763.52, "end": 771.12, "text": " imagine we have a sphere here. So we have a sphere here and basically this is like"}, {"start": 771.12, "end": 776.68, "text": " maybe North Pole, this is like South Pole. Tangent space to the North Pole"}, {"start": 776.68, "end": 783.7199999999999, "text": " would be just like a simple plane. It's a plane such that it touches the North"}, {"start": 783.7199999999999, "end": 787.1999999999999, "text": " Pole. It basically touches a sphere only at a single point and that's at the"}, {"start": 787.1999999999999, "end": 791.1999999999999, "text": " North Pole. And you can see here that all of the vectors in this particular plane,"}, {"start": 791.1999999999999, "end": 795.88, "text": " so let me draw a couple of vectors here, all of these vectors are going to be"}, {"start": 795.88, "end": 800.52, "text": " perpendicular. So if I take the center of the sphere as the origin and"}, {"start": 800.52, "end": 805.1999999999999, "text": " if I were to draw, so this is the point X in this formalism, so this is point X"}, {"start": 805.2, "end": 809.6, "text": " and so basically you can see that this vector here, so all of these vectors in"}, {"start": 809.6, "end": 813.8000000000001, "text": " the plane are perpendicular, are orthogonal to this particular vector X."}, {"start": 813.8000000000001, "end": 819.88, "text": " So this is basically a nice way to form you to algebraically describe this"}, {"start": 819.88, "end": 824.44, "text": " notion of a tangent space. We'll soon see why it's convenient to instead of"}, {"start": 824.44, "end": 829.36, "text": " working on the manifold to be working on on a particular tangent space of that"}, {"start": 829.36, "end": 834.8000000000001, "text": " manifold. We'll see why that is convenient. Okay, let me continue here and"}, {"start": 834.8, "end": 839.68, "text": " let me introduce a couple more details. So I'm gonna quickly skim over over"}, {"start": 839.68, "end": 843.3199999999999, "text": " these formalisms because they're not as vital for this paper, but let me just"}, {"start": 843.3199999999999, "end": 848.3599999999999, "text": " introduce you to the mathematics. It may be interesting. 
So now for V and W which"}, {"start": 848.3599999999999, "end": 853.3199999999999, "text": " lie in the tangent space of the hyperboloid model at point X, we define"}, {"start": 853.3199999999999, "end": 860.0, "text": " this G of X which is basically something called a Riemannian metric"}, {"start": 860.0, "end": 865.4, "text": " tensor or it's a simple it's basically defined as a Minkowski inner product and"}, {"start": 865.4, "end": 871.68, "text": " it's gonna later on allow us to define distances in the tangent space. That's"}, {"start": 871.68, "end": 876.08, "text": " why it's an important contract and we call this tuple of the"}, {"start": 876.08, "end": 882.72, "text": " hyperboloid model and this Riemannian metric tensor, we call this a Riemannian"}, {"start": 882.72, "end": 888.8, "text": " manifold with negative curvature minus 1 over K. Okay, so not as important just"}, {"start": 888.8, "end": 893.7199999999999, "text": " like introducing you to some notation. So finally the tangent space is"}, {"start": 893.7199999999999, "end": 898.7199999999999, "text": " useful to perform Euclidean operations undefined in hyperbolic space and we"}, {"start": 898.7199999999999, "end": 905.1999999999999, "text": " denote by the norm of we basically induce the norm of a vector in tangent"}, {"start": 905.1999999999999, "end": 909.3199999999999, "text": " space by doing a square root of the inner product where inner product is"}, {"start": 909.3199999999999, "end": 913.3199999999999, "text": " Minkowski inner product and that's how we define the norm of vectors and by"}, {"start": 913.3199999999999, "end": 918.7199999999999, "text": " doing that we basically have a nice way to define a distance in tangent space"}, {"start": 918.72, "end": 925.44, "text": " of this particular hyperboloid manifold. Okay, so all of that formalism was"}, {"start": 925.44, "end": 932.52, "text": " basically so that we can understand what exponential and logarithmic maps are. So"}, {"start": 932.52, "end": 936.28, "text": " this is going to be the main construct we need to understand in order to"}, {"start": 936.28, "end": 940.4, "text": " understand how hyperbolic GCNs work. Before I get there I need to briefly"}, {"start": 940.4, "end": 945.4, "text": " introduce geodesics which are basically just and they say here which are"}, {"start": 945.4, "end": 951.6, "text": " generalizations of shortest paths in graphs or straight lines in Euclidean"}, {"start": 951.6, "end": 956.9599999999999, "text": " geometry. So that's just a generalization a notion of like a shortest path"}, {"start": 956.9599999999999, "end": 962.4, "text": " between two points in arbitrary spaces. So let me show you how they define it."}, {"start": 962.4, "end": 967.92, "text": " Let X be a point on the hyperboloid manifold. Let U be like a basically a"}, {"start": 967.92, "end": 974.76, "text": " point in a vector in the tangent space of that point X. We call that U the unit"}, {"start": 974.76, "end": 978.96, "text": " speed and because they're going to basically make sure that the inner"}, {"start": 978.96, "end": 982.24, "text": " product here or the norm of that particular vector is going to be equal"}, {"start": 982.24, "end": 987.88, "text": " to 1. And then they say the unique unit speed geodesic denoted like this such"}, {"start": 987.88, "end": 994.48, "text": " that basically that geodesic at a point 0 equals to this point X on the"}, {"start": 994.48, "end": 998.84, "text": " manifold. 
Next up we have gamma dot which just basically means a velocity vector"}, {"start": 998.84, "end": 1004.48, "text": " at point evaluated at 0 is going to be equal exactly to that U unit speed."}, {"start": 1004.48, "end": 1008.4, "text": " Okay so let me give you like a mental model I have when I think about these"}, {"start": 1008.4, "end": 1014.5600000000001, "text": " geodesics. So imagine we have a curved space such as this one. So something like"}, {"start": 1014.5600000000001, "end": 1021.64, "text": " this and that's just like a basically imagine we had some type of like a sheet"}, {"start": 1021.64, "end": 1025.68, "text": " of paper or something and I'm looking at that sheet from this side and that's"}, {"start": 1025.68, "end": 1029.68, "text": " where we get something like this. So now now imagine we have point X which is the"}, {"start": 1029.68, "end": 1035.04, "text": " point we've been using in this formalism of the geodesics here and let's imagine"}, {"start": 1035.04, "end": 1044.16, "text": " we have like a unit vector unit speed vector U and now what geodesic is is and"}, {"start": 1044.16, "end": 1048.96, "text": " and that this U lies in the tangential plane of this point X. So imagine we"}, {"start": 1048.96, "end": 1053.8400000000001, "text": " have like a tangential plane here and U just lies in that plane. And so what the"}, {"start": 1053.8400000000001, "end": 1057.96, "text": " geodesic is is imagine we start from this point and we have this velocity"}, {"start": 1057.96, "end": 1063.4, "text": " vector and for some amount of time you basically shoot like a point from you"}, {"start": 1063.4, "end": 1069.28, "text": " shoot a point from X using this velocity vector V and you just trace out where"}, {"start": 1069.28, "end": 1073.92, "text": " that point will go on this on this curved space. So basically imagine if we"}, {"start": 1073.92, "end": 1079.96, "text": " were to travel for one second maybe we'd end up somewhere here. And we'll see why"}, {"start": 1079.96, "end": 1085.44, "text": " that is useful because that basically allows us to map vectors from tangent"}, {"start": 1085.44, "end": 1089.4, "text": " space as you can see here. So we basically just map this vector here we"}, {"start": 1089.4, "end": 1093.72, "text": " mapped it onto a particular point on the on the manifold here. And that's how I"}, {"start": 1093.72, "end": 1098.76, "text": " like to think about geodesics and exponential and logarithmic maps which"}, {"start": 1098.76, "end": 1103.3600000000001, "text": " I'm going to introduce in a second. So and now we have for this particular"}, {"start": 1103.3600000000001, "end": 1108.68, "text": " hyperboloid model we can see how the geodesic is defined here basically some"}, {"start": 1108.68, "end": 1115.64, "text": " complex equation using hyperbolic cosine and the hyperbolic sine and it's not"}, {"start": 1115.64, "end": 1119.16, "text": " even important you can treat this as a black box. So what is important for you"}, {"start": 1119.16, "end": 1123.48, "text": " to understand is that as we are as we are basically changing this parameter T"}, {"start": 1123.48, "end": 1128.24, "text": " we're going to be tracing a path along the manifold. So we're going to be"}, {"start": 1128.24, "end": 1132.04, "text": " tracing a point that's always going to belong to manifold and not to the"}, {"start": 1132.04, "end": 1135.8400000000001, "text": " tangent space. 
And here they show how you can calculate the distance between two"}, {"start": 1135.84, "end": 1139.48, "text": " points on this hyperboloid manifold. Again some complicated equation we have"}, {"start": 1139.48, "end": 1146.12, "text": " Minkowski inner product we have arc cosine hyperbolic so not that important"}, {"start": 1146.12, "end": 1150.56, "text": " you can treat it as a black box. So what is important is that we have like"}, {"start": 1150.56, "end": 1154.8, "text": " geodesic and we have a distance. Okay so now to the more important part and"}, {"start": 1154.8, "end": 1158.8, "text": " that's the exponential and logarithmic maps. Let's see how these are defined. So"}, {"start": 1158.8, "end": 1164.6, "text": " given a point on the manifold X and a tangent vector V that belongs to the"}, {"start": 1164.6, "end": 1170.6399999999999, "text": " tangent space as defined by point X. So the exponential map maps from the"}, {"start": 1170.6399999999999, "end": 1176.7199999999998, "text": " tangent space of X on to the manifold on to the hyperboloid manifold. And you can"}, {"start": 1176.7199999999998, "end": 1182.1999999999998, "text": " see how it assigns the point it's basically evaluated as as geodesic at"}, {"start": 1182.1999999999998, "end": 1187.36, "text": " when you set t equals to 1. And that's precisely what I just explained here. So"}, {"start": 1187.36, "end": 1192.3999999999999, "text": " you have basically let me just map this directly here. So here in this notation"}, {"start": 1192.4, "end": 1198.16, "text": " we have X and V which means this thing here we now call it so instead of U we"}, {"start": 1198.16, "end": 1205.92, "text": " call it V. So this is V this is X and we basically what we do is we trace out"}, {"start": 1205.92, "end": 1211.76, "text": " this geodesic. So gamma is a unique geodesic satisfying that at t equals 0"}, {"start": 1211.76, "end": 1217.72, "text": " it equals X and the velocity is described by vector V. Which means as I"}, {"start": 1217.72, "end": 1223.28, "text": " said so we have a point here at X with the velocity V and we just trace out"}, {"start": 1223.28, "end": 1228.8, "text": " its path along the manifold and that's how we map vector V to a novel point on"}, {"start": 1228.8, "end": 1233.64, "text": " the manifold. So that's your exponential basically exponential map."}, {"start": 1233.64, "end": 1238.76, "text": " Vice versa we can define a logarithmic map which has this property that if you"}, {"start": 1238.76, "end": 1243.72, "text": " then apply after applying exponential if you apply a log then you'll"}, {"start": 1243.72, "end": 1248.52, "text": " end up with the initial vector V. And then they say here in general Romanian"}, {"start": 1248.52, "end": 1252.64, "text": " manifolds these operations are only defined locally but in the hyperbolic"}, {"start": 1252.64, "end": 1256.44, "text": " space they form a bijection between the hyperbolic space and the tangent space"}, {"start": 1256.44, "end": 1262.6000000000001, "text": " at a point. Now this might not be apparent why this is relevant but like"}, {"start": 1262.6000000000001, "end": 1267.52, "text": " it is. I'm gonna briefly tell you what this means and that's the following. 
So"}, {"start": 1267.52, "end": 1273.64, "text": " for this particular hyperbolic model you can see it here this tangent space even"}, {"start": 1273.64, "end": 1277.8400000000001, "text": " if it was infinite we'd have a unique point so for every single point on this"}, {"start": 1277.8400000000001, "end": 1282.44, "text": " plane so we basically have a situation where we can map any arbitrary point"}, {"start": 1282.44, "end": 1287.96, "text": " here to some point unique point on this particular hyperbolic manifold. And"}, {"start": 1287.96, "end": 1293.68, "text": " that would not be the case for arbitrary general Romanian manifold. So let me for"}, {"start": 1293.68, "end": 1299.2, "text": " example show you a contra example. So let's imagine we have a sphere. So let's"}, {"start": 1299.2, "end": 1303.5200000000002, "text": " imagine we have a sphere here and let's imagine that at the North Pole we have"}, {"start": 1303.52, "end": 1308.08, "text": " like a tangent space and let's imagine it's just an infinite tangent space. And"}, {"start": 1308.08, "end": 1313.24, "text": " so you can imagine that if we had a vector such as this one okay so this"}, {"start": 1313.24, "end": 1319.2, "text": " vector would maybe if we were to trace out the geodesic here by doing"}, {"start": 1319.2, "end": 1324.56, "text": " exponential by applying the exponential map we'd end up maybe mapping this"}, {"start": 1324.56, "end": 1331.8799999999999, "text": " vector to this point here. Okay but now the thing is because of how this"}, {"start": 1331.88, "end": 1339.24, "text": " manifold looks like it's a sphere if we had like maybe like 3x the size of this"}, {"start": 1339.24, "end": 1345.64, "text": " vector we'd end up doing like doing a full circle and then one more half and"}, {"start": 1345.64, "end": 1350.64, "text": " we'd end up at the same point here which means we've basically mapped we don't"}, {"start": 1350.64, "end": 1354.48, "text": " have a projection anymore we map these two vectors from the plane on to the"}, {"start": 1354.48, "end": 1358.2800000000002, "text": " same point. So both of these map to the same point and we don't have a projection"}, {"start": 1358.28, "end": 1363.48, "text": " and that's a problematic property if you want to learn embeddings. Okay so"}, {"start": 1363.48, "end": 1368.84, "text": " that's everything you need to understand. Now they just show how these"}, {"start": 1368.84, "end": 1373.44, "text": " exponential and logarithmic maps look like for this particular example of using"}, {"start": 1373.44, "end": 1377.48, "text": " a hyperboloid manifold. You can just see they're using cosine hyperboloid cosines"}, {"start": 1377.48, "end": 1382.8799999999999, "text": " etc etc but the main idea is the thing I just explained to you. Okay now that we"}, {"start": 1382.88, "end": 1389.1200000000001, "text": " have this differential geometry under our belt let's now dig into the actual"}, {"start": 1389.1200000000001, "end": 1392.92, "text": " model and understand how it works. Okay so this should now be fairly"}, {"start": 1392.92, "end": 1398.92, "text": " straightforward. The first step we need to do is given our node feature vectors"}, {"start": 1398.92, "end": 1403.68, "text": " which are in the Euclidean space initially we first want to map them into"}, {"start": 1403.68, "end": 1408.44, "text": " hyperbolic space. Okay so how we do that is the following. 
So they define"}, {"start": 1408.44, "end": 1413.24, "text": " something called North Pole of this hyperboloid model and they define it"}, {"start": 1413.24, "end": 1418.4, "text": " like this. You can see this like bold O the first coordinate is a square root of"}, {"start": 1418.4, "end": 1423.6000000000001, "text": " K everything else is zero and this is the North Pole of the of the hyperboloid."}, {"start": 1423.6000000000001, "end": 1429.72, "text": " And so why that is important is because if we were to construct a point if we"}, {"start": 1429.72, "end": 1436.28, "text": " were to augment our Euclidean feature vector here by just prepending zero as"}, {"start": 1436.28, "end": 1441.68, "text": " the zeroth dimension we can see that if we were to do inner product between this"}, {"start": 1441.68, "end": 1447.96, "text": " augmented Euclidean point with the origin we'd get a zero because basically"}, {"start": 1447.96, "end": 1453.32, "text": " we have zero here which means zero times zero is gonna be zero and because here"}, {"start": 1453.32, "end": 1458.04, "text": " we have all zeros no matter what we have in X we're gonna end up with zeros once"}, {"start": 1458.04, "end": 1463.2, "text": " you sum that up a bunch of zeros yields a zero and so that's why we have this"}, {"start": 1463.2, "end": 1467.8600000000001, "text": " fact here. Interesting property if you understand what this what's the"}, {"start": 1467.8600000000001, "end": 1472.3600000000001, "text": " semantics behind this expression that's that we we now know that this is a lot"}, {"start": 1472.3600000000001, "end": 1478.68, "text": " that this augmented point lies in the tangent space of this of this particular"}, {"start": 1478.68, "end": 1483.68, "text": " North Pole of the hyperboloid. Okay so that's what we have when we have zero"}, {"start": 1483.68, "end": 1489.0, "text": " that means we have orthogonal like vectors and they say therefore we"}, {"start": 1489.0, "end": 1494.4, "text": " interpret this point here as a point in the tangent space of the North Pole and"}, {"start": 1494.4, "end": 1500.64, "text": " basically then they show how how you can map from from the from that augmented"}, {"start": 1500.64, "end": 1506.42, "text": " point in the tangent space by just applying exponential map at North Pole"}, {"start": 1506.42, "end": 1511.92, "text": " to get the finally to get the hyperboloid embeddings. Okay let me quickly show you"}, {"start": 1511.92, "end": 1517.44, "text": " how you should think about this as it's fairly easy given the diagram up here so"}, {"start": 1517.44, "end": 1521.8, "text": " this is a tangent space of the North Pole imagine that this red dot here is"}, {"start": 1521.8, "end": 1525.1200000000001, "text": " the North Pole so this is the tangent space of the North Pole of this"}, {"start": 1525.1200000000001, "end": 1531.2, "text": " hyperboloid model here. Okay so now we want to map an arbitrary point from the"}, {"start": 1531.2, "end": 1536.1200000000001, "text": " tangent space on to the hyperboloid space. So what we are going to do"}, {"start": 1536.1200000000001, "end": 1540.24, "text": " is the following so imagine this is the point we're trying to to map so this is"}, {"start": 1540.24, "end": 1545.04, "text": " the one we're trying to find like a corresponding point on the manifold for"}, {"start": 1545.04, "end": 1548.8, "text": " this particular point here in the Euclidean space. 
So what we do is you"}, {"start": 1548.8, "end": 1555.68, "text": " can see we have this vector here and imagine this this plane is now touching"}, {"start": 1555.68, "end": 1560.1599999999999, "text": " this hyperboloid model at a single point so it's a tangent space as I said and"}, {"start": 1560.1599999999999, "end": 1564.84, "text": " then we just do the exponential map which is we shoot a point we start here"}, {"start": 1564.84, "end": 1568.56, "text": " from the North Pole and we just shoot a point in that direction with that"}, {"start": 1568.56, "end": 1573.92, "text": " velocity vector and then it's going to trace out this particular geodesic and"}, {"start": 1573.92, "end": 1577.3600000000001, "text": " we're gonna end up with this point here and that's why this point maps to this"}, {"start": 1577.3600000000001, "end": 1583.3200000000002, "text": " one. It's fairly easy really once you understand the visualization of this it"}, {"start": 1583.3200000000002, "end": 1588.92, "text": " should be fairly trivial. Okay so that's the first step of the hyperboloid GCN"}, {"start": 1588.92, "end": 1595.3200000000002, "text": " model we map from Euclidean points into hyperboloid points. Okay the next step is"}, {"start": 1595.3200000000002, "end": 1601.24, "text": " we need to now do we need to find equivalent operations in the hyperboloid"}, {"start": 1601.24, "end": 1606.08, "text": " space to what GCN is doing in Euclidean space. So that means we first need to"}, {"start": 1606.08, "end": 1611.04, "text": " understand how to do feature transforms in the hyperbolic space. Let's see what"}, {"start": 1611.04, "end": 1614.36, "text": " they say. So we now want to learn transformations of points on the"}, {"start": 1614.36, "end": 1619.08, "text": " hyperboloid manifold. However there is no notion of vector space structure in"}, {"start": 1619.08, "end": 1624.04, "text": " hyperbolic space. So I think the main thing here you should you should have in"}, {"start": 1624.04, "end": 1629.84, "text": " your mind is like the closure property is violated. What I mean by that is"}, {"start": 1629.84, "end": 1635.72, "text": " if you are on a plane and you were to add two arbitrary vectors you end up"}, {"start": 1635.72, "end": 1640.56, "text": " being in the plane still. So that's the closure property. So basically let me go"}, {"start": 1640.56, "end": 1646.72, "text": " up here again. So if you have two vectors here let's imagine we have this vector"}, {"start": 1646.72, "end": 1650.9199999999998, "text": " here and this vector here. If we add them up we'll end up with additional vector"}, {"start": 1650.9199999999998, "end": 1655.9599999999998, "text": " that lies in the plane. But if we were to do if we were to do the same thing for"}, {"start": 1655.96, "end": 1662.28, "text": " this hyperboloid model we'd basically be violating the closure property and that's"}, {"start": 1662.28, "end": 1666.64, "text": " why we don't have a notion of vector space on this on this hyperbolic space."}, {"start": 1666.64, "end": 1671.0, "text": " That's how I understand it. I may be wrong here but like that's my"}, {"start": 1671.0, "end": 1678.28, "text": " best understanding of this thing. Okay so let's continue here. So the main idea is"}, {"start": 1678.28, "end": 1685.6000000000001, "text": " to leverage the exponential and log maps so that we can use the tangent space to"}, {"start": 1685.6, "end": 1689.6, "text": " perform Euclidean transformations. So let me break it down for you. 
What they now"}, {"start": 1689.6, "end": 1696.24, "text": " do is they're gonna use the log operator to now get from from the"}, {"start": 1696.24, "end": 1701.1599999999999, "text": " hyperboloid point on to the tangent space again of the North Pole. So we are"}, {"start": 1701.1599999999999, "end": 1705.4399999999998, "text": " always mapping on to the tangent space of the North Pole. Then we apply once we"}, {"start": 1705.4399999999998, "end": 1711.1599999999999, "text": " have a point there then we apply this this this linear mapping W which may be"}, {"start": 1711.1599999999999, "end": 1715.48, "text": " which may reduce or increase the dimension. So that's why we have D prime"}, {"start": 1715.48, "end": 1720.08, "text": " here. Finally we apply again exponential mapping which means we take this whatever"}, {"start": 1720.08, "end": 1727.3600000000001, "text": " this point here is and we form a vector between origin and this point here and"}, {"start": 1727.3600000000001, "end": 1732.4, "text": " we do the usual like the shooting metaphor and that's how we end up on the"}, {"start": 1732.4, "end": 1736.96, "text": " manifold again. So again we we get from the we map from manifold onto the"}, {"start": 1736.96, "end": 1741.26, "text": " tangent space we do the transform there the linear transform and then we map it"}, {"start": 1741.26, "end": 1746.32, "text": " back on to the manifold. I'm gonna stop explaining now the exponential and"}, {"start": 1746.32, "end": 1750.36, "text": " log maps and assume you understand how we map from manifold to tangent space"}, {"start": 1750.36, "end": 1757.16, "text": " and back. Now we need to add the concept of a bias and how we do"}, {"start": 1757.16, "end": 1763.92, "text": " that is the following. So bias is a vector that's so we they define B so the"}, {"start": 1763.92, "end": 1767.48, "text": " bias as an Euclidean vector located in the tangent space of the North Pole"}, {"start": 1767.48, "end": 1772.92, "text": " again of the hyperboloid model and so what they do is they do something called"}, {"start": 1772.92, "end": 1777.92, "text": " basically parallel transport and the parallel transport is going to take the"}, {"start": 1777.92, "end": 1781.4, "text": " bias vector that lies in the tangent space of the North Pole it's gonna"}, {"start": 1781.4, "end": 1785.3600000000001, "text": " transport it in a smart way and I will not get into the formalism of parallel"}, {"start": 1785.3600000000001, "end": 1789.16, "text": " transport. We can imagine just kind of shifting that bias vector from the"}, {"start": 1789.16, "end": 1796.04, "text": " tangent space of the North Pole into the tangent space of this point X the point"}, {"start": 1796.04, "end": 1799.52, "text": " of interest the point we're currently trying to transform and then we're just"}, {"start": 1799.52, "end": 1805.2, "text": " going to apply exponential map starting from that point X. Now this may be"}, {"start": 1805.2, "end": 1811.32, "text": " confusing but again it has a very like simple and simple visual interpretation"}, {"start": 1811.32, "end": 1816.68, "text": " and semantics. It's just hard to maybe understand it from the first"}, {"start": 1816.68, "end": 1824.04, "text": " attempt from this equation so let me try and get back to our diagram here. So"}, {"start": 1824.04, "end": 1829.0, "text": " imagine we have let me just delete a couple of things here so as to reduce"}, {"start": 1829.0, "end": 1834.28, "text": " the clutter. 
Let me delete all of this and let's now try and understand how"}, {"start": 1834.28, "end": 1839.52, "text": " the biasing works. Okay so let's imagine we have so this is the the tangent space"}, {"start": 1839.52, "end": 1842.84, "text": " of the North Pole and let's imagine we have a bias vector somewhere here so"}, {"start": 1842.84, "end": 1846.68, "text": " maybe it's something like this and it's a learnable vector remember that so we"}, {"start": 1846.68, "end": 1851.76, "text": " were trying to learn weights and biases. So what we do is the following so now"}, {"start": 1851.76, "end": 1856.4, "text": " imagine we have a point that we are trying to transform and I'm going to"}, {"start": 1856.4, "end": 1862.68, "text": " pick just a just a single one like let's take this one and basically that point"}, {"start": 1862.68, "end": 1865.98, "text": " is mapped onto this point on the manifold and you can now imagine that"}, {"start": 1865.98, "end": 1870.68, "text": " this point here we have associated tangent space I'm gonna try and do that"}, {"start": 1870.68, "end": 1875.04, "text": " it's gonna probably fail miserably so we have a tangent space that touches only"}, {"start": 1875.04, "end": 1879.36, "text": " this point so it's a tangent space of this particular point. Okay so what we're"}, {"start": 1879.36, "end": 1884.4399999999998, "text": " going to do is take this bias vector B and we're gonna transport it onto this"}, {"start": 1884.4399999999998, "end": 1890.84, "text": " tangent space of this point here okay so we're gonna transport it to here"}, {"start": 1890.84, "end": 1897.6599999999999, "text": " let's imagine maybe it's now going to lie maybe somewhere here and once we"}, {"start": 1897.6599999999999, "end": 1902.56, "text": " have that we just apply the exponential map which means we're going to shift"}, {"start": 1902.56, "end": 1906.4399999999998, "text": " which means we're gonna do the following we're gonna take this point and now"}, {"start": 1906.44, "end": 1910.64, "text": " because of the bias vector and applying the exponential map we're gonna end up"}, {"start": 1910.64, "end": 1917.14, "text": " here so we add it we just successfully added a bias in the hyperbolic space so"}, {"start": 1917.14, "end": 1921.8, "text": " that's what we've done. Okay let me try and explain this once more because it's"}, {"start": 1921.8, "end": 1926.04, "text": " hard to visualize this and I did not do a great job of drawing this so we have a"}, {"start": 1926.04, "end": 1930.56, "text": " bias vector here in the tangent space we map it we parallel transport it into"}, {"start": 1930.56, "end": 1934.28, "text": " this different tangent space that correspond to this particular point of"}, {"start": 1934.28, "end": 1938.68, "text": " interest and once we have the bias vector there then we do the shooting so"}, {"start": 1938.68, "end": 1943.2, "text": " that's the exponential map and we end up from this orange point we end up here"}, {"start": 1943.2, "end": 1948.2, "text": " and thus we as I said successfully managed to add a bias in the hyperbolic"}, {"start": 1948.2, "end": 1954.92, "text": " space. Okay that's the best I can do in one attempt. Okay so let's get back to"}, {"start": 1954.92, "end": 1962.16, "text": " the section 4.2 we've defined we've successfully defined a linear mapping"}, {"start": 1962.16, "end": 1967.92, "text": " in the tangent space and we've successfully defined bias as well. 
Next"}, {"start": 1967.92, "end": 1973.6000000000001, "text": " up we need to define neighborhood aggregation so I'm gonna just jump to"}, {"start": 1973.6000000000001, "end": 1980.5600000000002, "text": " the equations straight ahead so here is what they do given two notes XI and XJ"}, {"start": 1980.5600000000002, "end": 1986.52, "text": " we're gonna apply log mapping and that means we end up in the tangent space of"}, {"start": 1986.52, "end": 1990.96, "text": " the North Pole and once we are there we're going to concatenate them as you"}, {"start": 1990.96, "end": 1995.1200000000001, "text": " can see by the symbol here and then we just pass them into the MLP and we do"}, {"start": 1995.1200000000001, "end": 1999.28, "text": " that for all of the J's from the neighborhood of node I and then we just"}, {"start": 1999.28, "end": 2005.68, "text": " apply a softmax that's how we basically end up with WIJ's. Once we have those"}, {"start": 2005.68, "end": 2010.6000000000001, "text": " coefficients we use those coefficients to do the aggregation the following way so"}, {"start": 2010.6000000000001, "end": 2017.8, "text": " again we have these are the hyperbolic embedding vectors of neighbors XJ's"}, {"start": 2017.8, "end": 2024.3999999999999, "text": " we're going to do logarithmic map but this time the map starts the map is"}, {"start": 2024.3999999999999, "end": 2032.32, "text": " based in point XI and not the North Pole we're gonna see why that is and once we"}, {"start": 2032.32, "end": 2037.8799999999999, "text": " have that that means now we're in the tangent space now we can do the simple"}, {"start": 2037.8799999999999, "end": 2044.44, "text": " scaling with WIJ's we sum those up and finally we end up with some resultant"}, {"start": 2044.44, "end": 2048.44, "text": " vector in the tangent space and then we map it back using the exponential"}, {"start": 2048.44, "end": 2052.64, "text": " map and that's how we end up with a new representation for this particular"}, {"start": 2052.64, "end": 2060.2400000000002, "text": " vector XI. 
So because this was the first time we did not use the North Pole as"}, {"start": 2060.2400000000002, "end": 2066.64, "text": " the basis of mapping let me just kind of explain this part a bit better so know"}, {"start": 2066.64, "end": 2070.0, "text": " that our proposed aggregation is directly performed in the tangent space"}, {"start": 2070.0, "end": 2076.6, "text": " of each center point XIH as this is where the Euclidean approximation is"}, {"start": 2076.6, "end": 2081.08, "text": " best we show in our ablation experiments that this local aggregation outperforms"}, {"start": 2081.08, "end": 2085.08, "text": " aggregation in tangent space at the origin due to the fact that relative"}, {"start": 2085.08, "end": 2088.96, "text": " distances have lower distortion in our approach basically what this means is"}, {"start": 2088.96, "end": 2094.88, "text": " let me go back to the example I showed you before so let's again focus on this"}, {"start": 2094.88, "end": 2101.36, "text": " particular orange point and its tangent space so now we're gonna map all of the"}, {"start": 2101.36, "end": 2106.12, "text": " points on the manifold onto this particular tangent space and then do the"}, {"start": 2106.12, "end": 2110.92, "text": " basically aggregation in that tangent space instead of using the tangent space"}, {"start": 2110.92, "end": 2115.12, "text": " of the origin that's the difference and they as they said they've done"}, {"start": 2115.12, "end": 2119.84, "text": " ablations and it turns out that doing this is better than doing aggregation in"}, {"start": 2119.84, "end": 2125.08, "text": " the in the in this particular tangent space here okay so that's it let me go"}, {"start": 2125.08, "end": 2133.0, "text": " back here and let me end up explaining the nonlinear activations so here is"}, {"start": 2133.0, "end": 2138.1600000000003, "text": " how is how the nonlinearities defined on a curved space so we take the"}, {"start": 2138.1600000000003, "end": 2143.7200000000003, "text": " particular embedding vector of interest that lies on the hyperboloid model we do"}, {"start": 2143.7200000000003, "end": 2147.6400000000003, "text": " the log mapping which means we map it on to the tangent space of the North Pole"}, {"start": 2147.64, "end": 2152.8399999999997, "text": " then we apply the nonlinearity whatever that is like value usually and then we"}, {"start": 2152.8399999999997, "end": 2159.0, "text": " apply the x map returning it back onto the manifold the thing to notice here is"}, {"start": 2159.0, "end": 2166.92, "text": " that these curvatures so KL minus 1 and KL these curvatures might be different"}, {"start": 2166.92, "end": 2174.72, "text": " and they are actually learnable parameter in hyperbolic GCN model so"}, {"start": 2174.72, "end": 2177.9599999999996, "text": " they can do this because the mathematics adds up they say here"}, {"start": 2177.9599999999996, "end": 2183.0, "text": " fortunately 10 tangent spaces of the North Pole are shared across hyperboloid"}, {"start": 2183.0, "end": 2188.16, "text": " model manifolds of the same dimension that have different curvatures making"}, {"start": 2188.16, "end": 2192.68, "text": " equation 10 so this is this equation mathematically correct okay that's"}, {"start": 2192.68, "end": 2197.6, "text": " pretty much it now let's glue everything back together and try to get a holistic"}, {"start": 2197.6, "end": 2201.04, "text": " overview what's going on here I know this was a lot of mathematics"}, {"start": 2201.04, "end": 2205.8, "text": 
" differential geometry a lot of details let's try and get like a high level"}, {"start": 2205.8, "end": 2210.04, "text": " mental model of what just happened here okay so there is a couple of steps okay"}, {"start": 2210.04, "end": 2214.64, "text": " so first step is we map from the Euclidean space on to the hyperbolic"}, {"start": 2214.64, "end": 2220.7599999999998, "text": " space once we have the feature vectors in the hyperbolic space then we do these"}, {"start": 2220.7599999999998, "end": 2228.4, "text": " basically special types of feature transforms and bias so there is a lot of"}, {"start": 2228.4, "end": 2232.2400000000002, "text": " back and forth between manifold and tangent space we mostly do we do"}, {"start": 2232.2400000000002, "end": 2236.12, "text": " everything in tangent space so we map this feature from hyperbolic space into"}, {"start": 2236.12, "end": 2240.64, "text": " tangent space we apply the linear transformation we get we return it we"}, {"start": 2240.64, "end": 2245.44, "text": " map it back onto the manifold and then we just shift it using this particular"}, {"start": 2245.44, "end": 2250.7200000000003, "text": " bias vector and it still remains on the manifold so that's this first step once"}, {"start": 2250.7200000000003, "end": 2255.12, "text": " we have those features we do that for all of the node features of our graph"}, {"start": 2255.12, "end": 2259.92, "text": " once we have that we do the smart aggregation whereby again we're going to"}, {"start": 2259.92, "end": 2265.04, "text": " be mapping onto tangent spaces of those and and this time tangent space is"}, {"start": 2265.04, "end": 2269.56, "text": " defined by particular node I for which we're trying to find representation"}, {"start": 2269.56, "end": 2274.52, "text": " okay we're not using the North Pole tangent space this time we do the weighted"}, {"start": 2274.52, "end": 2279.2799999999997, "text": " sum in that tangent space and then we make the result we map the resultant"}, {"start": 2279.2799999999997, "end": 2284.12, "text": " point back onto the manifold that's how we get these wise and then finally we"}, {"start": 2284.12, "end": 2290.16, "text": " apply this nonlinear basically mapping again we have back and forth between"}, {"start": 2290.16, "end": 2295.7599999999998, "text": " tangent space and manifold and we just apply we kind of squeeze in the your"}, {"start": 2295.7599999999998, "end": 2302.96, "text": " your regular non-linearity between these mappings okay that's that's it it looks"}, {"start": 2302.96, "end": 2306.88, "text": " complicated it actually is not that complicated when you're familiar with"}, {"start": 2306.88, "end": 2313.04, "text": " differential geometry and and this kind of sinks in and yeah okay so let me now"}, {"start": 2313.04, "end": 2319.96, "text": " briefly walk you through the results they showed that for a particular class"}, {"start": 2319.96, "end": 2326.96, "text": " of graphs that have this low hyperbolicity value Delta which basically"}, {"start": 2326.96, "end": 2331.36, "text": " means it's a fancy way of saying these are graphs that are tree like in nature"}, {"start": 2331.36, "end": 2336.72, "text": " and they showed that basically HGCN so that's that this model in this"}, {"start": 2336.72, "end": 2341.96, "text": " introducing this paper outperforms all of the previous space lines so even GNN"}, {"start": 2341.96, "end": 2348.16, "text": " such as GCN and get and sage and SGC and as well as neural networks and some"}, {"start": 
2348.16, "end": 2352.88, "text": " shallow embeddings so they show better results there and they show that as we"}, {"start": 2352.88, "end": 2360.7200000000003, "text": " go to higher hyperbolicity constant which means that the graphs are less and"}, {"start": 2360.7200000000003, "end": 2367.2, "text": " less exponentially nature and less tree like they show worse results here okay"}, {"start": 2367.2, "end": 2373.8799999999997, "text": " and final results I want to show you are here they've done some oblations"}, {"start": 2373.8799999999997, "end": 2381.06, "text": " basically the oblations they've done is doing this attention aggregation in the"}, {"start": 2381.06, "end": 2388.6, "text": " North Pole's tangent space instead of using X eyes to form the tangent space so"}, {"start": 2388.6, "end": 2392.62, "text": " that's the difference and here C just stands for whether they use trainable"}, {"start": 2392.62, "end": 2397.4, "text": " curvatures or not and they showed that by by by both using the trainable"}, {"start": 2397.4, "end": 2404.4, "text": " curvatures as well as using the aggregation in the tangent space of X eyes"}, {"start": 2404.4, "end": 2409.2, "text": " that gives them the best results across various different datasets so yeah just"}, {"start": 2409.2, "end": 2413.3199999999997, "text": " some oblations they've done additionally they showed that for some for this"}, {"start": 2413.3199999999997, "end": 2418.08, "text": " particular data set disease the harder curvature of the hyperboloid model"}, {"start": 2418.08, "end": 2422.14, "text": " basically the better the results let me just show you how to parse this this"}, {"start": 2422.14, "end": 2428.72, "text": " chart basically if K is large let's say 10 to the power of 3 so that's like"}, {"start": 2428.72, "end": 2434.08, "text": " thousand if you plug in 10 to the power of 3 you'll end up with minus 3 so that's"}, {"start": 2434.08, "end": 2439.6, "text": " means we are here and you can see that for BK for like thousand we have like"}, {"start": 2439.6, "end": 2448.7599999999998, "text": " lower metric here compared to if K was much smaller so if case maybe maybe 1"}, {"start": 2448.76, "end": 2454.4, "text": " over 10 that would be 10 raised to power of minus 1 which means we map to here so"}, {"start": 2454.4, "end": 2461.0400000000004, "text": " that means for 1 over 10 for for for high curvature basically we have better"}, {"start": 2461.0400000000004, "end": 2466.2000000000003, "text": " performance and again this ties back nicely to the visualization to the"}, {"start": 2466.2000000000003, "end": 2473.2400000000002, "text": " explanation I started with basically if we have crowded points in the Euclidean"}, {"start": 2473.24, "end": 2478.8799999999997, "text": " space the more the so so so the higher the curvature of this particular hyper"}, {"start": 2478.8799999999997, "end": 2484.3199999999997, "text": " hyperboloid model I'm having such a hard time pronouncing that word so let me"}, {"start": 2484.3199999999997, "end": 2488.9199999999996, "text": " just draw it like this so if this was even more curved so something like this"}, {"start": 2488.9199999999996, "end": 2493.6, "text": " that means that basically the separation would be even bigger because you can"}, {"start": 2493.6, "end": 2497.08, "text": " imagine that us even a small perturbation here in the in the Euclidean"}, {"start": 2497.08, "end": 2503.84, "text": " space would cause those two points to be mapped onto very like distant points on"}, 
{"start": 2503.84, "end": 2508.2799999999997, "text": " the actual manifold so maybe this one here would be mapped here and this one"}, {"start": 2508.2799999999997, "end": 2512.16, "text": " here would be mapped even here so and the higher the curvature the bigger this"}, {"start": 2512.16, "end": 2517.36, "text": " distance will grow and so that's why you can kind of imagine that higher"}, {"start": 2517.36, "end": 2523.04, "text": " curvatures like in the negative direction basically help help us spread"}, {"start": 2523.04, "end": 2529.72, "text": " out the feature vectors okay that was my best attempt to explain this paper there"}, {"start": 2529.72, "end": 2535.08, "text": " is a lot of mathematics I'm not an expert in differential geometry I know"}, {"start": 2535.08, "end": 2541.0, "text": " enough to basically understand on an intuitive level how this works it's hard"}, {"start": 2541.0, "end": 2545.08, "text": " to visualize all of this I hope that this paper helped you understand"}, {"start": 2545.08, "end": 2550.12, "text": " basically this this this this model a bit better if it did consider sharing"}, {"start": 2550.12, "end": 2554.92, "text": " the video out also consider subscribing to this channel and finally join our"}, {"start": 2554.92, "end": 2582.0, "text": " discord community until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=zpDdvI95igc
Understanding over-squashing and bottlenecks on graphs via curvature | Ricci Flow | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the "Understanding over-squashing and bottlenecks on graphs via curvature" paper that introduces a new graph rewiring method that alleviates the problem of over-squashing by borrowing ideas from the field of differential geometry. The main workhorse behind the method is the discrete version of Ricci curvature (balanced Froman curvature) and an analog of Ricci Flow as applied to graphs. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2111.14522 ✅ M. Bronstein's blog: https://towardsdatascience.com/over-squashing-bottlenecks-and-graph-ricci-curvature-c238b7169e16 ✅Excellent Ricci-flow video: https://www.youtube.com/watch?v=PwRl5W-whTs&ab_channel=Aleph0 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Bottlenecks and over-squashing in graphs 04:42 The concept of Ricci curvature and flow 11:10 Message passing neural networks and notation 16:45 Formalizing the concept of over-squashing 22:05 Balanced Forman Curvature 31:15 Connecting curvature with over-squashing 37:10 Connection with the Cheeger constant and spectral theory 41:20 SDRF - Stochastic Discrete Ricci Flow 47:47 Results, graph statistics, and limitations ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #ricciflow #geometric #graphs
What's up guys in this video I'm covering understanding over squashing and bottlenecks on graphs via curvature paper by Jake topping Francesco DiGiovanni Benjamin Tamboulin Xiao Wen Dong and Michael Bronstein So before we even start I guess it will only make sense for me to explain to you what over squashing Over squashing exactly means what our bottlenecks and finally what is curvature? So let's see this sentence in in the abstract that connects these these concepts And then I'm gonna give you an explanation of what those exactly are So we provide a precise description of the over squashing phenomena in graph neural networks and analyze how it arises from bottlenecks in the graph for this purpose we introduce a new edge based combinatorial Curvature and prove that negatively curved edges are responsible for the over squashing issue So negatively curved edges whatever that means I'm gonna explain that briefly Are responsible for the over squashing issue, okay? So let me first start with the preliminaries So let's first see what this bottleneck thing is in graph neural networks And what over squashing is which is a very connected phenomena So you may be familiar with the RNN so recurrent neural networks And you may know that we have this squashing phenomenon occurring in RNN as well, so So this is just a rolled out version of an RNN model And you can see that so some information some input is fed into the RNN here And then this information is passed on to the next step here where we have another piece of information Mixing together with this previous step and then in the third step We'll be having this additional information mixing with the information from the previous two steps and so as you can imagine so already here the like the this this information from from the first step Constitutes only like maybe roughly one-third of the informational content that you can be they can be find found in this particular Time step of RNN and so the more we go towards right the this this information from this step constitutes a minor and minor percentage of the informational content of that fixed size vector and Basically, that's what bottleneck is all about so as you can see here because we have this Bottleneck where all of these informations have to pass and be squeezed inside of this fixed size vector So that bottleneck causes the over squashing of information, so that's how these two concepts are related Okay, so we know that RNNs were susceptible to this particular problem, and we know the later some models such as LSTM or GRU's managed to basically focus on on some parts of the information more on some parts of the information less and Thus they managed to you know in a way avoid this bottleneck problem, but not completely Well with graph neural networks the problem is even more severe and the reason can be seen in this example here So you can see that like compared to RNNs we have exponentially more Informational vectors coming in in every single step so because of how the spatial GNNs work, and that's by basically aggregating One hop neighborhood. 
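To make the exponential blow-up concrete, here is a tiny toy comparison (my own example, not from the video): the number of nodes that feed into a single node's fixed-size vector after r rounds of one-hop aggregation, on a chain (the RNN-like case) versus a binary tree (the branching case sketched above).

```python
import networkx as nx

chain = nx.path_graph(32)            # each step adds one new ancestor, like an RNN
tree = nx.balanced_tree(r=2, h=5)    # every node brings in two fresh neighbors per hop

def receptive_field_size(g, node, r):
    # all nodes within r hops of `node`, i.e. everything an r-layer GNN
    # squeezes into that node's fixed-size representation
    return len(nx.single_source_shortest_path_length(g, node, cutoff=r))

for r in range(1, 6):
    print(r, receptive_field_size(chain, 0, r), receptive_field_size(tree, 0, r))
# The chain grows linearly with r; the tree roughly doubles each hop.
# That exponential growth is exactly what gets over-squashed into one vector.
```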
That's how usually GNNs are implemented So because of that if you had a graph such as maybe let me just draw some examples So if you have a graph like this where this node is connected to two more nodes, and then these two nodes are connected to two more nodes So like this so then if you were to start calculating features for this one so in the first step you'd be including So let me just draw this so you'd include information from these two and These two would include information from these two which means that information from this vector here from this node here will in the first step propagate here And then in the second step it will propagate all the way here, and that's what we can see visually on this graph exactly so basically for Topology such as this one where we have with every where every node will have two additional neighbors Will basically have exponential grow growth of Information content that we need to squeeze in ultimately into into a fixed size representation of this particular node here So that's the that's the bottleneck and over squashing problem And it's as you can see intimately related to the topology of a particular graph you're dealing with okay Next up second concept that I want to explain before we start analyzing the paper, and that's the curvature so Let me read this this Statement here so richie curvature of a manifold can be characterized by geodesic dispersion I whether two parallel geodesics shot from nearby points converge Which happens in a positively curved spaces locally resembling a sphere such as this example here Or they remain parallel in the flat or Euclidean case as you can see here Or they diverge negative curvature giving rise to hyperbolic geometry So most of you are probably just familiar with Euclidean geometry, but like back in I think 19th century already people such as Riemann and And others started devising these other types of geometry So you can see here that if you were to shot like two parallel lines so two geodesics where geodesic is just a fancy name Of saying like basically the shortest path on a particular geometry Which in the case of a Euclidean plane will be like like a straight line But in the case of like sphere or hyperbolic type of surfaces It's not going to be a straight line, so you can see here if you shoot two Two geodesics they're going to ultimately converge here on the North Pole And that's why so when the lines convert when the geodesics converge that space is basically described as being positively curved Here you can see that these geodesics are gonna. Let me just take a pen here, so they are going to diverge So this is going to track this line here, and this geodesic is going to be tracking this line here So you can see they are diverging and because of that this hyperbolic space has negative curvature, okay? So this is also so this Ricci curvature basically is a scalar that tells you whether the space is Positively curved in the case of a sphere or zero or negatively curved in the case of this particular geometric object here Okay, so aside from the Ricci curvature. We need to know a couple more concepts and the main one will be Ricci flow Which basically tells you the rate of change of a metric tensor? Is directly proportional to the Ricci curvature now that may seem and sound very complex because it probably is But I want to show you a short video that's going to give you like all the necessary intuition that you need to have before proceeding with this paper, so Just as a trivia. 
I want to mention a couple of people here, so this guy here called Gregorio Ricci Curbastro Was the guy who invented the Ricci curvature and after whom the Ricci curvature was named. This is Hamilton who discovered the Ricci flow Equation and finally this is Gregory Perelman a guy who managed to solve the famous Millennium problem in mathematics called the Poincare Conjecture and he solved it using basically building on top of Hamilton's work And he used Ricci flow to solve this Poincare conjecture also as a fun fact he decided to Basically decline both the Fields Medal which is an equivalent of a Nobel prize in in Mathematics as well as the Millennium prize like a one million dollar prize for solving one of these seven Millennium problems as proposed by the Clay Institute So Poincare Conjecture was one of those those. Okay, so Now check out this video where you'll understand what Ricci flow is a lobar and then we'll continue Ricci flow is a way of changing the metric tensor over time so that the manifold becomes rounder So how do we express Ricci flow concretely? I want you to focus on this region over here the Ricci flow inflates it like a balloon By convention, we say that it has negative Ricci curvature So if the curvature is negative the length increases Now focus on this region here Ricci flow deflates it So if the Ricci curvature is positive the length decreases If the Ricci curvature is positive the length decreases We can phrase this differently g decreases means the derivative of g is negative g increases just means the derivative of g is positive These two guys always have opposite sides So we might guess an equation like this And that's it That is the equation describing Ricci flow. Okay, cool So I strongly suggest you go ahead and watch the whole video as it's super informative And but this this clip I just showed you is enough for you to understand the paper so we can continue Cool. So as you just saw, we have Ricci flow is basically this differential equation connecting metric tensor with Ricci curvature And you can see so in the example of a body you can see here on the screen We have that this we see that this part has very negative curvature, which is denoted by this blue color And by by by applying Ricci flow because of the negative Like negative value of the curvature that part is going to be inflated Whereas this part which which has uh, like positive curvature is going to be deflated Basically, and we end up with something like this So we can see that Ricci flow in this continue on this continuous manifold can reduce these bottlenecks And now let's just translate this idea onto the discrete case Which are the graphs and we can see that intuitively what happens is is a similar thing So we can see we have a bottleneck here. So let me just change the color So we see a bottleneck here, which is denoted again by a blue edge and by inflating that space by adding Edges here, uh, basically we are making this part of the space more positive and we are also They are also removing some of the edges here Which means we'll have less edges here and then the statistics of the graph stays roughly the same But like we reduce bottleneck and you can see that the colors are Homogenized meaning that this became more reddish and this part that was super red now became Also has some blue Colors to it. 
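For reference, the equation that clip builds up to is Hamilton's Ricci flow: the rate of change of the metric tensor is proportional to minus the Ricci curvature, so positively curved regions deflate and negatively curved regions inflate. The factor of 2 is just the usual convention:

\[
\frac{\partial g}{\partial t} \;=\; -2\,\mathrm{Ric}(g)
\]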
So that's that's what we we managed to accomplish a similar thing to this Analogous thing to this what happened in the continuous space uh, cool So now let me just show you some preliminaries and then we'll we'll we'll uh, See how they formalize the notion of over squashing and then we'll see how they formalize the notion of negative curvature on graphs And finally how they connect the two so that they can devise an algorithm where they'll be picking these negative edges and rewiring the graph to get Uh, uh less problems with over squashing so, um Let's see some piece of notation here first this s sub r of i Is basically the the the neighborhood So it's a set of these j's of nodes j's that belong to the set of all nodes denoted as v That have this geodesic distance between i and j that's precisely equal to r So it's a fancy way of saying this is a set of nodes j which are exactly r hops away from node i okay Uh, and then this b sub r of i is the same thing but here we have like a smaller or equal Which means we'll be including all of the intermediate Intermediate nodes and thus this is uh, like a receptive field of this particular, uh node Okay, just to visualize the notation. It's going to be probably a bit easier so if this node 2 here is our node i then the The s1 neighborhood of i is basically nodes 3 and nodes 1 Uh, then the the the s2 neighborhood So if we were to draw s2 neighborhood of i that's going to be nodes 0 and 4 As you can see we need exactly the shortest path takes two Edges to get to node 0 and the same for node 4 Finally the the b neighborhood. So let's say we have uh neighborhood b2 of i Is going to encompass all of these nodes we saw so that's going to be b2 And let me just check whether I think it also includes. Yeah, it also includes node 2 as well So this every all of these nodes are now going to be b2 neighborhood of i and just an additional detail I assume b stands for ball and s stands for surface which makes intuitive sense As you can see here, we can include the whole volume all of these nodes Whereas in s we just take the front so we basically ignore all all of these inner nodes inside of the s sets cool, so next up, uh just a Short mention of message passing neural networks and if you're not familiar with graph neural networks I have a whole playlist i'm going to link it somewhere here. You can check it out. I'm going to just briefly Recap what this is all about And you can see that in order to calculate the features of a node i, uh in layer l plus one What we do is the following so we take the previous Feature of that node i so in the previous layer a layer l and we basically combine it So maybe concatenation with this expression here and then we we apply this function. 
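The S and B notation above is easy to play with in code. A small sketch follows, assuming the example graph in the video is the path 0-1-2-3-4, which matches the neighborhoods read off above:

```python
import networkx as nx

def S(g, i, r):
    # nodes exactly r hops away from i (the "surface")
    d = nx.single_source_shortest_path_length(g, i)
    return {j for j, dist in d.items() if dist == r}

def B(g, i, r):
    # nodes at most r hops away from i (the "ball", i.e. the receptive field)
    return set(nx.single_source_shortest_path_length(g, i, cutoff=r))

g = nx.path_graph(5)   # assumed stand-in for the small example graph, i = node 2
print(S(g, 2, 1))      # {1, 3}
print(S(g, 2, 2))      # {0, 4}
print(B(g, 2, 2))      # {0, 1, 2, 3, 4}, including node 2 itself
```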
That's called the update function phi Okay, so let me just now decouple What this term is all about so you can see we have a Some j goes from one to two n Which is basically n is the the cardinality of the node set So that means there's a fancy way of saying it's the number of nodes in your graph A hat ij is basically your adjacency matrix So the connectivity matrix, but this is with additional twist They added self edges and that means basically every node will have additionally uh implied edge like this Although if it's undirected it's going to be uh, like both directions basically so every node will have this self edge And plus they also have normalization using these d matrices, which are basically your um, well, yeah degree matrices and this just normalizes the adjacency the augmented adjacency matrix a tilde So what this will effectively do is filter out so this is going to be Um non-zero only for immediate nodes of node i so that means we're just going to look at the one hop neighborhood of node i and Otherwise this will be zero and thus we won't care about this thing Uh, whereas this is just your message passing function psi and these are the representations of node i and its neighbors node j Uh, so Let me just show you a diagram to make it a bit Uh easier so that means we're going to take a look at the one hop neighbors of node h of node 1 And we're going to aggregate somehow combine this feature with this feature using some uh function psi Which is usually like mlp like a multilayer perception We're going to form representations in such a way which are called messages and then we're going to aggregate them Uh like this so in this particular case they they've chosen a particular Uh permutation invariant function i sum in this particular case but in general it can be uh whatever like whatever function that's that's permutation invariant like maybe Uh like average or or even a max function is permutation invariant and so people usually denote that you'll probably see that in the literature as like o plus Uh which is a general a generic permutation invariant operator instead of using sum So that's some uh preliminaries uh the notation and uh and uh these generic message passing neural networks And now let me show you how they formalize the concept of over squashing Uh but before that why do we care like well the reason we care is because as we saw as soon as we have long dependencies The information is going to be over squashed and thus the two remote nodes cannot communicate effectively with each other And so they say here we say that a graph learning problem has long range dependencies when the output of a mpn n depends on representations of distant nodes interacting with each other And so in these types of graphs where which are called heterophilic uh as compared to homophilic Uh you basically want to have this uh remote communication going on if you want to have accurate predictions And so that's why we need to solve the over squashing problem if we are to be performant on these heterophilic types of graphs So let me show you how they formalize this this notion of over squashing Okay let's first uh let me show you the the notation here before I start parsing the equation uh the inequality So what they have is the following so they have nodes i and s from the set of nodes v such that s belongs to the set s sub r plus one of i Which means uh s is r plus one uh hops away from node i so that's something like this maybe we have a node i here And then we have a bunch of nodes here 
bunch of nodes and then somewhere here we have node s And this shortest path to get from from uh basically i to s is equal to r plus one So let me just draw that so that's r plus one length okay so we have node s here and node i here and um basically then they say if the If you can bound the gradient of the phi and psi functions which are the message and uh and update functions Basically if you can bound them by some some some scalars alpha and beta uh then uh they show that this here this expression here holds And this what this tells us is the following so the sensitivity of this representation of node i in layer r plus one Uh and sensitivity with respect to uh node s so to the raw features of node s is upper bounded So we take the absolute value of this of this uh derivative here and we can see it's bounded by alpha beta Raise to the power of r plus one times a hat raised to the power of r plus one and then we just take the uh index by i and s Okay so there is a couple of things worth mentioning here first of all if you're not familiar with taking powers of these adjacency matrices There is a special semantics behind uh a raised to the power r plus one namely if you index this matrix here with i s You'll get the number of shortest paths between i nodes i and nodes s which makes intuitive sense So basically the more such paths we had that means that s so the features from from from the information from node s Had multiple paths by which they could reach node i and thus you can intuitively understand that that means that this upper bound is going to become bigger Because the end the over squashing will will be uh like uh less um prominent That's observation number one observation number two here is the following so you do have to have uh you do have to to observe the representation of node i Uh only uh after applying r plus one uh layers of of your graph neural network the reason is underreaching If you were to just apply let's say if like less than r plus one then this information from node s would have not reached node i yet So that means maybe the information from from this node s only have reached until here and that means that this is going to be uh zero Because like there is no influence of hs of the of the raw vector of node s uh on the representation Because the information still hasn't propagated to node i so that's something to keep in mind So that means that here you have to have r plus one because then you're basically calculating non-zero sensitivity of node i And how the information from s influences node i so i think that's that's pretty much everything you need to understand about about this formula So it's a nice way of connecting over smoothing on one hand with topological uh topology of your graph on the other hand okay So as an example they show here for example if the uh shortest path between i and s is r plus one and the sub graph induced on on this uh b set is a binary tree Uh then this adjacency matrix uh like indexed at is is going to have this particular shape and what's interesting here is that as r grows Which means as you're looking at at further away nodes you're going to have exponential decrease of this term Which means in turn that you're going to have small this will be exponentially decreasing and thus the sensitivity will be exponentially decreasing Which means that the over squashing will become more and more prominent Okay now let's continue so let me show you now how they formalize how they calculate the curvature of a graph that may be unintuitive 
to some of you As well as to me on the first pass of reading this paper Okay so we saw how the how these curved spaces look like when they are continuous Now let's see how this thing works for discrete case i.e. for graphs So intuitively the the notion is the following so if you take these two nodes p and q and you take two edges you'll see that in graphs such as this one They'll tend to converge to this node k and thus form a triangle and basically triangles will be a feature that indicates that the space is positively curved in graphs On the other hand if you take two edges and they stay on the same distance then and they form four cycles by doing so then this space is going to be classified as having zero curvature And finally if we have edges that start diverging then in that case the space is negative Now this is kind of hand wavy because unless you attach geometric like information to these edges like the fact we just chosen to draw the graph like this doesn't mean anything But like you can imagine that also like you can see here that the path between this node and this node starts becoming larger and larger whereas here it's like zero and here it stays constant So that's maybe a better way to think about this but like just visually as well you can see that these two kind of converge these two stay parallel and these two diverge This is probably yeah probably better to explain it like this Okay so now intuitively what this curvature calculation needs to do is count these topological elements and by counting them decide on the actual curvature of an edge So let's see how they choose to do that So first worth mentioning there are two discrete variants of calculating rigid curvature for graphs one is known as the Olivier curvature denoted as this italic weird font K symbol And we also have this from a Foreman curvature denoted as big F here So there are trade-offs between these two curvature Olivier is much more expressive but much more computationally expensive Foreman has the opposite properties basically it's much more efficient computationally but it's not as expressive as Olivier's curvature What they've done is they've taken the Foreman curvature and they've upgraded it to this balanced Foreman curvature And in order to understand the equation let me first break down a couple of sets we'll need to take care about So the first set is this one which basically tells you the number of triangles that are formed on this edge ij And you can formally basically calculate that by finding an intersection between S1 sets of node i and node j So these are the triangles that are based at the edge ij So let me show you an example so if we have here this particular graph and let me just Okay so basically this is node i here and this is node j we find the S1 sets first so for node i so S1 of i is going to be this It's going to be this element here because you can see the distance is just one this one this one and this one as well as j so this is the S1 set On the other hand if we want to calculate S1 set for j that's going to be 5, 6 and 0 And what they do is they find an intersection and you can see that the only intersection we have in this particular example is node 6 So this one here and as you can see it does form a triangle and that's the only triangle we have in this particular graph So that's the first thing we need to care about triangles the second thing we need to care about is these four cycles And you can see here a formulaic description i'm going to just read you what it represents 
and that's that these are the neighbors of i forming a four cycle based at the edge ij without the diagonals inside Okay so basically let me again show you this for node i the four cycles are going to be so that those are going to be 2 and 3 because you can see here we have a four cycle here going from i to j and then back So it's a four cycle we have a second four cycle going over 3 as you can see here going to j and backwards so those are four cycles And you may wonder why have I excluded 4 from this set and the reason is because we have this diagonal here and that's why we will not include node 4 into this set of four cycles of node i Okay now let me try and connect that with the formula just for fun so you can see here what the formula states and let's check whether this why the node 4 basically will not belong to a set So we can see that this set of four cycles for node i is a set of these k's such that k is basically belongs to S1 set of node i but does not belong to S1 set of j neither can k be equal to j Okay let me now just delete this to make this clearer so let me delete everything here and let me basically show you how this is going to function so k belongs to S1 of i so that means k is one of these elements here okay It's going to be one of those elements here and then we exclude basically we exclude j which means we exclude this one and we also exclude everything that belongs to node S1 set of j so that means we're going to exclude this one as well So we end up with 2, 3 and 4 and we already saw that 2 and 3 do satisfy all of the conditions and thus we included those in this set but we for some reason we did not include number 4 and the reason is because of this other condition So that k needs to be such that there exists this node w that belongs so that means that this set here is non-zero not empty set so you can see that S1 of k so let's find S1 of k so k is 4 in our example so this is our k this is our potential candidate at this point of time So this is these are the S1 this is the S1 neighborhood of k and intersection with S1 of j is going to be just this node here so this is going to be the intersection But at the same time we need to find something that does not belong to S1 and because this thing belongs to S1 that's why we have to digit again and that's a very complicated way of telling you that hey if we have a diagonal here we cannot count in this 4 cycle that's the whole point Finally I'm going to skip this one basically gamma max tells you the number of these degenerate cycles i.e. 
cycles that pass over a common node and for this particular graph it's going to be 2 and that's because you can see that 2 cycles 2 4 cycles are passing over this node 5 here and that's why this gamma is going to be 2 for this particular example Okay so enough rambling let's see the formula so first in line with the discussion about geodesic dispersion one expects the number of triangles to be related to positive curvature complete graph number of 4 cycles to 0 curvature grid and the remaining outgoing edges to negative curvature 3 Our new curvature formulation reflects such an intuition and here is finally the formula so you can see the Ricci curvature to the balance form on curvature implementation of the Ricci curvature has this very very like detailed shape and so I'm not going to explain every single detail but you can understand basically that it sums You can see here we have a sum of triangles which is going to contribute to the positive value of this curvature and everything is normalized by the degrees of nodes i and j and a fun property that emerges from this is because we have this minus 2 is that the curvature is always going to be bigger than minus 2 So that's and I think it's also upper bounded because of the normalization by one but don't quote me on that one okay cool so now let's see a couple of very interesting theorems that are going to connect the this this negative curvature the curvature calculation with the over squashing So because of this theorem that the Olivier curvature is lower bounded by this balanced form on the curvature because of that there is a very interesting corollary that's very interesting for this paper and the corollary says the following thing if our curvature is positive For any edge i j and I'm really sure that you should state all here because you need to have this condition satisfied for every single edge which is a very strong condition but if you have that then there exists a polynomial P such that So that the B set of this of this node i so the the R hop receptive field is going to be upper bounded by a polynomial in R and that's super important because we just saw before that if we have for example like a binary tree type of branching then we're going to have exponentially growing receptive field Whereas here if the edges are positive we can have a guarantee that they are bounded by a polynomial and here we for the first time see the connection between over squashing and curvature so there is a notion that if your graph is positively curved then because of this polynomial thingy indirectly you're going to have less over squashing because you're going to have less bottlenecks and less over squashing Okay now let's take that one step forward and find an explicit connection between curvature on one hand side and over squashing on the other side and that's what this theorem for is all about so this is going to be probably the the the main the main contribution of the paper and let me start and dissect this theorem for you There is a huge proof in the appendix you can go and check it out if you're into math but like this is going to give you a rough intuitive understanding of what's going on here and why this works so consider a MPN so a message passing neural network so the one we saw in equation one that's the generic MPN Let I be connected with J and the degree of node I is smaller or equal than DJ and then assume two conditions so the first condition assumes that you can basically upper bound the gradient of the update function and the 
gradient of the psi function of the message function for any layer for any L between 0 and capital L minus 1 which is L is the number of layers in our MPN With the depth being at least two layers then they say in two there exists Delta such that Delta is bounded like this and bounded like this so this is the this gamma coefficient we saw a couple minutes ago and we have that the curvature is upper bounding by upper bounded by minus two plus Delta Okay so this is so far very like maybe not not that clear I'm going to give you the gist in a second so then there exists this set Q sub J which is a subset of S to set of five satisfying that the size of that set is bigger than one over Delta blah blah and for all of these L zeros between zero and capital L minus two we have this equation here Okay so what this tells us here is that if we sum over elements of that of that subset Q of S2 and if we if we basically sum up all of these sensitivities of various nodes K from that subset that's going to be upper bounded by this expression here so alpha beta raised to the power of two times Delta raised to the power of one over four so now let me let me try and visualize this and break it down so first of all we have node I here so let's say let's say we have node I here we have one hop neighborhood here and finally we have two hop neighborhood here so what this theorem states is that basically there exists a subset so this is S2 so this this thing here let me try and draw it this thing here is S2 and what they state is that there exists some subset of these nodes so let me just kind of take a subset such that if you were to sum so now take some nodes from from this subset and if we were to calculate their sensitivities they're going to be upper bounded by this expression here so you can also read this as over squashing so over squashing of that over squashing of that neighborhood so the I'm going to denote that as like over squashing OS for example yeah and now what you can see is that if the edge is very negative so if Delta goes down to zero if you start in the limit when Delta goes to zero this edge IJ is going to be very negative because it's going to be upper bounded by minus two plus a small number which means it's going to converge almost to minus two which is super negative edge if that happens that means that this expression here so Delta goes to zero here and that means that you can see this becomes the whole expression goes to zero and that means we have over squashing so again the the the chain of thoughts here is the following if we have negative edges that means that we directly have over squashing problem and that's the direct connection they made there is a lot of details which follow from many other theories there is a lot of math in the in the appendix but this is a roughly what they're they're saying I'm going to make one more connection here with this chigar constant which is very interesting constant in in graph theory and then I'm going to show you the final algorithm the rewiring algorithm they applied in this paper so back to chigar constant so chigar constant intuitively captures the notion of of how bottlenecked the the graph is so if I were to draw a simple graph like this so let's imagine we have some some some nodes inside of here which are densely connected and then we have another community of nodes which are densely connected and if we have a single edge here you can see intuitively that this barbell shaped graph has a very high like it's very bottlenecked like there is a 
huge bottleneck, and it's exactly this single edge here. And you'll see that this Cheeger constant correlates with our intuition very nicely because of how it's calculated. (There is a small mistake in the formula as printed; I had a discussion with one of the main authors of the paper, Francesco, and he told me it's just a typo.) In the numerator we take the number of elements in this set, which is the set of edges (i, j) such that i belongs to S and j belongs to its complement. In our example, if this community is S and this one is the complement, then this bridge edge is the only edge that belongs to that set, so the numerator is 1, a very small number. In the denominator we take the minimum of the volumes of the two sets, where the volume of a set S is defined as the sum of the degrees of all the nodes in S. Here that's going to be some huge number, and a small number divided by a huge number means the Cheeger constant goes towards zero for this edge case of a very bottlenecked graph. And you can imagine that if we started adding more edges across the cut, that numerator set would have more and more elements, so the Cheeger constant would consequently rise from zero to some bigger number; that's why it correlates with how bottlenecked the graph is. You can also see a bunch of problems with it: it's a single scalar, so what if we had many more communities connected to each other? It fails to capture more interesting topologies. It's a number that gives you a global statistic, a global understanding of what's going on in the graph, but it's not that detailed. They then show that the spectral gap of a graph is bounded by the Cheeger constant, and the spectral gap again correlates with how well connected the graph is. They say here that lambda one is the first non-zero eigenvalue of the normalized graph Laplacian, often referred to as the spectral gap, so that's a nice connection between spectral theory and this graph-theoretic quantity, the Cheeger constant. More interestingly for us, they show that if the curvature is positive for all of the edges, then you can lower bound the Cheeger constant, and also the spectral gap, by this k, which is the same k as in the curvature condition. That means that by controlling how positive the curvature is, we can control the Cheeger constant, i.e. how bottlenecked the graph is, and the spectral gap of the graph.
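Written out, the Cheeger constant just described (crossing edges in the numerator, the smaller of the two volumes in the denominator) and the classical Cheeger inequality that ties it to the spectral gap of the normalized Laplacian are:

\[
h(G) \;=\; \min_{S \subseteq V}\; \frac{\lvert \{ (i,j) \in E \,:\, i \in S,\; j \in \bar S \} \rvert}{\min\!\big(\mathrm{vol}(S),\, \mathrm{vol}(\bar S)\big)},
\qquad \mathrm{vol}(S) = \sum_{i \in S} d_i,
\]
\[
\tfrac{1}{2}\, h(G)^2 \;\le\; \lambda_1 \;\le\; 2\, h(G).
\]

The inequality itself is the standard spectral graph theory statement, quoted here from general knowledge rather than from the paper.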
So those are some nice connections, not necessary for understanding this paper, but I made a small tangent here and hopefully you can appreciate it. Okay, so finally let's get to the meat of the paper, at least procedurally: the final rewiring algorithm. Before going into the details, let me just tell you what rewiring methods are. They say that, more recently, there is a trend to decouple the input graph from the graph used for information propagation, which is an important thing to notice; such methods are often generically referred to as graph rewiring. We see an example here: on the left-hand side we have the original graph, with some coloring depending on the curvature, so again blue means negatively curved edges and red means positively curved edges. More importantly, you can see that if we apply another baseline, called DIGL, which stands for Diffusion Improves Graph Learning, it completely changes the statistics of this particular graph: it adds a lot of edges and thus completely skews the degree distribution, etc. On the other hand, this here is their proposed method, which we're going to see in a second, and you can see it preserves the statistics while managing to make the curvature of the graph higher. Okay, now let's see how the actual thing works. The important part to notice is that instead of running your graph neural network on top of the original graph, you first rewire the graph and then use those rewired edges for your message passing; that's the idea behind these rewiring methods, they serve as a kind of pre-processing step. So here is the Stochastic Discrete Ricci Flow (SDRF) algorithm, which is a super fancy name, but it's a very cool algorithm, so let's see what happens. As input we have a graph G, a positive temperature tau (it's a temperature because we are going to be sampling in this algorithm), a max number of iterations, and an optional curvature upper bound called C plus; we're going to see what that means in a second. The algorithm repeats the following steps until convergence or until the max number of iterations has been reached. Let me just draw an accompanying graph that's going to help us grasp this a bit better: let's say we have a bridge between some densely connected communities, something like this, and let's see how the algorithm operates. For the edge ij with minimal Ricci curvature, which in this example is going to be this one here, our most negatively curved edge, they say: calculate the vector x, whose element x sub kl, defined like this, is the improvement to the curvature of this edge ij from adding the edge kl, where k belongs to the B1 set of i and l belongs to the B1 set of j. Okay, let me explain what that means. We're going to try and add an edge, so basically here, maybe this one: this is node i, this is node j, and you can see that I took a node from the B1 set of i
and connected it with a node from the B1 set of j, and now I check how much of an improvement this has on the curvature of edge ij; that's this term here. So we're going to do this multiple times: we exhaustively connect this one with this one, and this one with this one, etc., and we calculate this improvement for all of those candidate edges. Then we sample with probability given by the softmax of that vector x, where tau modulates the temperature, and we add the sampled edge to the graph. So maybe one of these edges gets sampled, maybe this one is chosen, and that is the end of step one. You can imagine that if you push tau to some big positive number, then instead of a smooth distribution you get an essentially deterministic one, where the edge with the highest improvement is chosen and added to the graph; so you can use tau to tweak the algorithm in that sense. Okay, so we've added the edge, and that increased the positivity of the initially negatively curved edge denoted in blue. The second step is: remove the edge with maximal Ricci curvature, but only if that curvature is bigger than C plus, the upper bound I mentioned. Again you have some control here: if you make C plus arbitrarily big, you'll never remove positive edges and you'll just keep adding edges that increase the positivity of the graph. The reason they have the removal step is to preserve the statistics of the graph: as much as you add edges, you also want to remove some, so the number of edges stays balanced and the node degrees are less skewed. Cool, and you do this until convergence, where convergence means that whatever candidate edge you add, you no longer get an improvement; that's how convergence is defined. And that's it, that's the Stochastic Discrete Ricci Flow algorithm: a fairly simple idea if you think about it, and very neat because of this awesome connection with differential geometry.
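To make the two steps of the loop concrete, here is a minimal sketch of SDRF as just described. This is an assumption-heavy illustration rather than the authors' implementation: `balanced_forman_curvature(G, u, v)` is a placeholder you would have to implement yourself from the triangle and four-cycle counts discussed earlier, the candidate search naively copies the graph, and the convergence check is simplified.

```python
# Minimal sketch of the SDRF rewiring loop (not the authors' code).
# `curvature` is any function curvature(G, u, v) -> float, e.g. a
# balanced Forman curvature implementation supplied separately.
import numpy as np
import networkx as nx

def sdrf(G, curvature, tau=20.0, max_iters=50, c_plus=None, seed=0):
    rng = np.random.default_rng(seed)
    G = G.copy()
    for _ in range(max_iters):
        curv = {e: curvature(G, *e) for e in G.edges()}
        i, j = min(curv, key=curv.get)  # edge with the most negative curvature
        # Candidate edges (k, l) with k in B1(i), l in B1(j), not already in G.
        cands, gains = [], []
        for k in list(G.neighbors(i)) + [i]:
            for l in list(G.neighbors(j)) + [j]:
                if k != l and not G.has_edge(k, l):
                    H = G.copy()
                    H.add_edge(k, l)
                    cands.append((k, l))
                    gains.append(curvature(H, i, j) - curv[(i, j)])
        if not cands or max(gains) <= 0:
            break  # nothing improves the worst edge any more: converged
        # Step 1: sample a candidate with probability softmax(tau * gain), add it.
        gains = np.array(gains)
        p = np.exp(tau * (gains - gains.max()))
        p /= p.sum()
        G.add_edge(*cands[rng.choice(len(cands), p=p)])
        # Step 2: remove the most positively curved edge if it exceeds C+.
        curv = {e: curvature(G, *e) for e in G.edges()}
        e_max = max(curv, key=curv.get)
        if c_plus is not None and curv[e_max] > c_plus:
            G.remove_edge(*e_max)
    return G
```

Note that tau multiplies the improvement scores inside the softmax, so a large tau makes the choice of the best candidate nearly deterministic, matching the description above, while C plus controls how aggressively positively curved edges get pruned.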
Okay, so finally they compare how SDRF works against DIGL, the competing approach, which works much better on homophilic graphs, but I'm going to skip that math and those theorems because they're arguably a tangent, not necessary for understanding the core of this paper. So here are the results, on various graph datasets: Cora, Citeseer, and Pubmed might be familiar to most of you who have had any exposure to the graph ML field. We have the homophily coefficient associated with each of these datasets, denoted by this h here; these ones have low h, which means low homophily, i.e. high heterophily, and these ones are much more homophilic. Again, homophily just means birds of a feather flock together, i.e. if you have a node connected to its neighboring nodes, there is a high chance they'll share the same label, whereas in heterophilic datasets it is often nodes that are more distant from each other that share the same label. You can imagine that for heterophilic datasets it's very important to have an effective strategy for communicating over those long distances, and over-squashing is exactly the thing that ruins that communication; that's what this paper is addressing, and that's why it's better suited to heterophilic datasets than homophilic ones. The results support the claim: for the heterophilic datasets SDRF basically outperforms all of the other baselines. We have DIGL here, and we have the +FA approach, which simply uses a fully adjacent last layer in the graph neural network, a very simple modification that also tries to cope with the over-squashing effect. As I said, better performance on the heterophilic datasets, and not as great, although competitive, performance on the homophilic datasets, where DIGL performs somewhat better than SDRF, which is expected. Okay, finally, some distributions: they show that their rewiring method is much better at preserving various graph statistics. Comparing the degree distributions, the blue one is the original graph, the green one is SDRF, and the orange one is DIGL, and you can see that DIGL completely changes the degree distributions across different datasets. You can also see that the number of edges DIGL adds is huge, and it does not remove edges, whereas SDRF both adds and removes edges, which is why it preserves the statistics of the graph. Finally, I'm going to end on this note: they mention a limitation that is really something to think about. They write that one limitation of their work is that the theoretical results presented do not currently extend to multigraphs; that's one thing, because the whole business of counting triangles and four-cycles breaks down for multigraphs, where you can have multiple edges between the same pair of nodes. Even more importantly, they say that the current methodology is agnostic to information beyond the graph topology, such as node features. Why is that important? Imagine a social media graph representing who follows whom: if you start rewiring it purely based on the curvature of the edges, you're changing the underlying semantics of that social graph, and that can lead to false predictions later in your pipeline. Because you're not taking node features into account, you are less nuanced in how you change the semantics of the underlying graph, if that makes sense. Okay, so on that note, this is a very, very cool paper and I loved it. If you found the video useful, consider sharing it and subscribing to the channel, and do join our Discord community; you can find the links down in the description. Until next time, bye bye!
[{"start": 0.0, "end": 6.66, "text": " What's up guys in this video I'm covering understanding over squashing and bottlenecks on graphs via curvature paper"}, {"start": 7.16, "end": 9.06, "text": " by Jake topping"}, {"start": 9.06, "end": 13.74, "text": " Francesco DiGiovanni Benjamin Tamboulin Xiao Wen Dong and Michael Bronstein"}, {"start": 14.02, "end": 21.26, "text": " So before we even start I guess it will only make sense for me to explain to you what over squashing"}, {"start": 21.900000000000002, "end": 26.86, "text": " Over squashing exactly means what our bottlenecks and finally what is curvature?"}, {"start": 26.86, "end": 32.28, "text": " So let's see this sentence in in the abstract that connects these these concepts"}, {"start": 32.28, "end": 35.34, "text": " And then I'm gonna give you an explanation of what those exactly are"}, {"start": 35.54, "end": 43.1, "text": " So we provide a precise description of the over squashing phenomena in graph neural networks and analyze how it arises from"}, {"start": 43.3, "end": 49.08, "text": " bottlenecks in the graph for this purpose we introduce a new edge based combinatorial"}, {"start": 49.6, "end": 55.56, "text": " Curvature and prove that negatively curved edges are responsible for the over squashing issue"}, {"start": 55.56, "end": 59.260000000000005, "text": " So negatively curved edges whatever that means I'm gonna explain that briefly"}, {"start": 59.980000000000004, "end": 63.78, "text": " Are responsible for the over squashing issue, okay?"}, {"start": 64.62, "end": 67.46000000000001, "text": " So let me first start with the preliminaries"}, {"start": 67.46000000000001, "end": 72.56, "text": " So let's first see what this bottleneck thing is in graph neural networks"}, {"start": 72.56, "end": 76.9, "text": " And what over squashing is which is a very connected phenomena"}, {"start": 77.18, "end": 81.86, "text": " So you may be familiar with the RNN so recurrent neural networks"}, {"start": 81.86, "end": 87.58, "text": " And you may know that we have this squashing phenomenon occurring in RNN as well, so"}, {"start": 88.38, "end": 92.26, "text": " So this is just a rolled out version of an RNN model"}, {"start": 92.26, "end": 97.06, "text": " And you can see that so some information some input is fed into the RNN here"}, {"start": 97.06, "end": 102.92, "text": " And then this information is passed on to the next step here where we have another piece of information"}, {"start": 103.3, "end": 107.36, "text": " Mixing together with this previous step and then in the third step"}, {"start": 107.36, "end": 115.5, "text": " We'll be having this additional information mixing with the information from the previous two steps and so as you can imagine so already here"}, {"start": 115.62, "end": 119.74, "text": " the like the this this information from from the first step"}, {"start": 120.42, "end": 128.02, "text": " Constitutes only like maybe roughly one-third of the informational content that you can be they can be find found in this particular"}, {"start": 128.57999999999998, "end": 135.74, "text": " Time step of RNN and so the more we go towards right the this this information from this step"}, {"start": 135.74, "end": 141.26000000000002, "text": " constitutes a minor and minor percentage of the informational content of that fixed size vector and"}, {"start": 142.3, "end": 147.54000000000002, "text": " Basically, that's what bottleneck is all about so as you can see here because we have this"}, {"start": 148.54000000000002, "end": 
153.62, "text": " Bottleneck where all of these informations have to pass and be squeezed inside of this fixed size vector"}, {"start": 154.26000000000002, "end": 160.52, "text": " So that bottleneck causes the over squashing of information, so that's how these two concepts are related"}, {"start": 160.52, "end": 167.96, "text": " Okay, so we know that RNNs were susceptible to this particular problem, and we know the later some models such as LSTM or GRU's"}, {"start": 169.08, "end": 170.72, "text": " managed to"}, {"start": 170.72, "end": 171.92000000000002, "text": " basically"}, {"start": 171.92000000000002, "end": 177.52, "text": " focus on on some parts of the information more on some parts of the information less and"}, {"start": 178.04000000000002, "end": 183.76000000000002, "text": " Thus they managed to you know in a way avoid this bottleneck problem, but not completely"}, {"start": 183.76, "end": 191.07999999999998, "text": " Well with graph neural networks the problem is even more severe and the reason can be seen in this example here"}, {"start": 191.07999999999998, "end": 192.92, "text": " So you can see that"}, {"start": 192.92, "end": 195.56, "text": " like compared to RNNs we have"}, {"start": 196.07999999999998, "end": 198.07999999999998, "text": " exponentially more"}, {"start": 198.23999999999998, "end": 205.18, "text": " Informational vectors coming in in every single step so because of how the spatial GNNs work, and that's by basically aggregating"}, {"start": 205.84, "end": 210.23999999999998, "text": " One hop neighborhood. That's how usually GNNs are implemented"}, {"start": 210.24, "end": 215.92000000000002, "text": " So because of that if you had a graph such as maybe let me just draw some examples"}, {"start": 215.92000000000002, "end": 221.92000000000002, "text": " So if you have a graph like this where this node is connected to two more nodes, and then these two nodes are connected to two"}, {"start": 221.92000000000002, "end": 223.52, "text": " more nodes"}, {"start": 223.52, "end": 230.28, "text": " So like this so then if you were to start calculating features for this one so in the first step you'd be including"}, {"start": 230.44, "end": 234.48000000000002, "text": " So let me just draw this so you'd include information from these two and"}, {"start": 235.08, "end": 238.96, "text": " These two would include information from these two which means that information"}, {"start": 238.96, "end": 245.16, "text": " from this vector here from this node here will in the first step propagate here"}, {"start": 245.16, "end": 251.64000000000001, "text": " And then in the second step it will propagate all the way here, and that's what we can see visually on this graph exactly so"}, {"start": 252.68, "end": 254.12, "text": " basically for"}, {"start": 254.12, "end": 258.84000000000003, "text": " Topology such as this one where we have with every where every node will have two additional neighbors"}, {"start": 259.36, "end": 262.96000000000004, "text": " Will basically have exponential grow growth of"}, {"start": 262.96, "end": 270.59999999999997, "text": " Information content that we need to squeeze in ultimately into into a fixed size representation of this particular node here"}, {"start": 270.79999999999995, "end": 274.68, "text": " So that's the that's the bottleneck and over squashing problem"}, {"start": 275.12, "end": 282.71999999999997, "text": " And it's as you can see intimately related to the topology of a particular graph you're dealing with okay"}, {"start": 
283.67999999999995, "end": 289.67999999999995, "text": " Next up second concept that I want to explain before we start analyzing the paper, and that's the curvature so"}, {"start": 290.24, "end": 292.24, "text": " Let me read this this"}, {"start": 292.24, "end": 298.32, "text": " Statement here so richie curvature of a manifold can be characterized by geodesic dispersion"}, {"start": 298.32, "end": 303.78000000000003, "text": " I whether two parallel geodesics shot from nearby points converge"}, {"start": 304.04, "end": 309.56, "text": " Which happens in a positively curved spaces locally resembling a sphere such as this example here"}, {"start": 309.56, "end": 314.44, "text": " Or they remain parallel in the flat or Euclidean case as you can see here"}, {"start": 314.44, "end": 319.16, "text": " Or they diverge negative curvature giving rise to hyperbolic geometry"}, {"start": 319.16, "end": 324.96000000000004, "text": " So most of you are probably just familiar with Euclidean geometry, but like back in I think 19th century already"}, {"start": 325.68, "end": 327.68, "text": " people such as Riemann and"}, {"start": 328.28000000000003, "end": 331.68, "text": " And others started devising these other types of geometry"}, {"start": 331.68, "end": 338.0, "text": " So you can see here that if you were to shot like two parallel lines so two geodesics where geodesic is just a fancy name"}, {"start": 338.0, "end": 341.96000000000004, "text": " Of saying like basically the shortest path on a particular geometry"}, {"start": 342.16, "end": 346.3, "text": " Which in the case of a Euclidean plane will be like like a straight line"}, {"start": 346.3, "end": 351.18, "text": " But in the case of like sphere or hyperbolic type of surfaces"}, {"start": 351.38, "end": 354.86, "text": " It's not going to be a straight line, so you can see here if you shoot two"}, {"start": 355.22, "end": 359.78000000000003, "text": " Two geodesics they're going to ultimately converge here on the North Pole"}, {"start": 359.78000000000003, "end": 367.1, "text": " And that's why so when the lines convert when the geodesics converge that space is basically described as being"}, {"start": 367.82, "end": 369.26, "text": " positively curved"}, {"start": 369.26, "end": 374.62, "text": " Here you can see that these geodesics are gonna. Let me just take a pen here, so they are going to diverge"}, {"start": 374.62, "end": 379.98, "text": " So this is going to track this line here, and this geodesic is going to be tracking this line here"}, {"start": 379.98, "end": 385.78000000000003, "text": " So you can see they are diverging and because of that this hyperbolic space has negative curvature, okay?"}, {"start": 385.94, "end": 391.42, "text": " So this is also so this Ricci curvature basically is a scalar that tells you whether the space is"}, {"start": 391.66, "end": 398.46, "text": " Positively curved in the case of a sphere or zero or negatively curved in the case of this particular"}, {"start": 399.14, "end": 400.78000000000003, "text": " geometric object here"}, {"start": 400.78, "end": 407.38, "text": " Okay, so aside from the Ricci curvature. 
We need to know a couple more concepts and the main one will be Ricci flow"}, {"start": 407.94, "end": 412.38, "text": " Which basically tells you the rate of change of a metric tensor?"}, {"start": 412.97999999999996, "end": 420.21999999999997, "text": " Is directly proportional to the Ricci curvature now that may seem and sound very complex because it probably is"}, {"start": 420.26, "end": 425.44, "text": " But I want to show you a short video that's going to give you like all the necessary intuition that you need to have"}, {"start": 426.05999999999995, "end": 428.26, "text": " before proceeding with this paper, so"}, {"start": 428.26, "end": 432.98, "text": " Just as a trivia. I want to mention a couple of people here, so this guy here called"}, {"start": 433.7, "end": 435.7, "text": " Gregorio Ricci Curbastro"}, {"start": 436.42, "end": 443.21999999999997, "text": " Was the guy who invented the Ricci curvature and after whom the Ricci curvature was named. This is Hamilton who"}, {"start": 443.86, "end": 445.86, "text": " discovered the Ricci flow"}, {"start": 447.06, "end": 453.06, "text": " Equation and finally this is Gregory Perelman a guy who managed to solve the famous Millennium"}, {"start": 453.62, "end": 456.42, "text": " problem in mathematics called the Poincare"}, {"start": 456.42, "end": 461.86, "text": " Conjecture and he solved it using basically building on top of Hamilton's work"}, {"start": 462.34000000000003, "end": 468.18, "text": " And he used Ricci flow to solve this Poincare conjecture also as a fun fact he decided to"}, {"start": 468.90000000000003, "end": 474.58000000000004, "text": " Basically decline both the Fields Medal which is an equivalent of a Nobel prize in in"}, {"start": 475.14, "end": 480.66, "text": " Mathematics as well as the Millennium prize like a one million dollar prize for solving one of these seven"}, {"start": 481.22, "end": 484.58000000000004, "text": " Millennium problems as proposed by the Clay Institute"}, {"start": 484.58, "end": 487.7, "text": " So Poincare Conjecture was one of those those. 
Okay, so"}, {"start": 488.41999999999996, "end": 495.14, "text": " Now check out this video where you'll understand what Ricci flow is a lobar and then we'll continue"}, {"start": 500.18, "end": 506.02, "text": " Ricci flow is a way of changing the metric tensor over time so that the manifold becomes rounder"}, {"start": 507.7, "end": 510.02, "text": " So how do we express Ricci flow concretely?"}, {"start": 510.02, "end": 515.78, "text": " I want you to focus on this region over here the Ricci flow inflates it like a balloon"}, {"start": 516.74, "end": 520.26, "text": " By convention, we say that it has negative Ricci curvature"}, {"start": 520.9, "end": 525.06, "text": " So if the curvature is negative the length increases"}, {"start": 526.5, "end": 528.5, "text": " Now focus on this region here"}, {"start": 529.14, "end": 531.14, "text": " Ricci flow deflates it"}, {"start": 531.78, "end": 535.62, "text": " So if the Ricci curvature is positive the length decreases"}, {"start": 535.62, "end": 539.14, "text": " If the Ricci curvature is positive the length decreases"}, {"start": 541.0600000000001, "end": 546.58, "text": " We can phrase this differently g decreases means the derivative of g is negative"}, {"start": 547.62, "end": 551.62, "text": " g increases just means the derivative of g is positive"}, {"start": 552.66, "end": 554.98, "text": " These two guys always have opposite sides"}, {"start": 555.62, "end": 557.62, "text": " So we might guess an equation like this"}, {"start": 558.58, "end": 560.02, "text": " And that's it"}, {"start": 560.02, "end": 562.98, "text": " That is the equation describing Ricci flow. Okay, cool"}, {"start": 562.98, "end": 568.26, "text": " So I strongly suggest you go ahead and watch the whole video as it's super informative"}, {"start": 568.5, "end": 573.14, "text": " And but this this clip I just showed you is enough for you to understand the paper so we can continue"}, {"start": 573.62, "end": 580.9, "text": " Cool. So as you just saw, we have Ricci flow is basically this differential equation connecting metric tensor with Ricci curvature"}, {"start": 581.38, "end": 584.98, "text": " And you can see so in the example of a body you can see here on the screen"}, {"start": 585.3000000000001, "end": 590.9, "text": " We have that this we see that this part has very negative curvature, which is denoted by this blue color"}, {"start": 590.9, "end": 595.4599999999999, "text": " And by by by applying Ricci flow because of the negative"}, {"start": 596.66, "end": 600.02, "text": " Like negative value of the curvature that part is going to be inflated"}, {"start": 600.26, "end": 604.8199999999999, "text": " Whereas this part which which has uh, like positive curvature is going to be deflated"}, {"start": 605.22, "end": 607.54, "text": " Basically, and we end up with something like this"}, {"start": 607.6999999999999, "end": 613.06, "text": " So we can see that Ricci flow in this continue on this continuous manifold can reduce these bottlenecks"}, {"start": 613.22, "end": 616.5, "text": " And now let's just translate this idea onto the discrete case"}, {"start": 616.5, "end": 621.54, "text": " Which are the graphs and we can see that intuitively what happens is is a similar thing"}, {"start": 621.62, "end": 625.54, "text": " So we can see we have a bottleneck here. 
So let me just change the color"}, {"start": 625.86, "end": 633.54, "text": " So we see a bottleneck here, which is denoted again by a blue edge and by inflating that space by adding"}, {"start": 633.94, "end": 639.62, "text": " Edges here, uh, basically we are making this part of the space more positive and we are also"}, {"start": 640.26, "end": 642.5, "text": " They are also removing some of the edges here"}, {"start": 642.5, "end": 647.38, "text": " Which means we'll have less edges here and then the statistics of the graph stays roughly the same"}, {"start": 648.18, "end": 651.54, "text": " But like we reduce bottleneck and you can see that the colors are"}, {"start": 652.16, "end": 657.3, "text": " Homogenized meaning that this became more reddish and this part that was super red now became"}, {"start": 658.34, "end": 660.34, "text": " Also has some blue"}, {"start": 660.42, "end": 664.82, "text": " Colors to it. So that's that's what we we managed to accomplish a similar thing to this"}, {"start": 665.38, "end": 667.86, "text": " Analogous thing to this what happened in the continuous space"}, {"start": 668.74, "end": 670.26, "text": " uh, cool"}, {"start": 670.26, "end": 675.22, "text": " So now let me just show you some preliminaries and then we'll we'll we'll uh,"}, {"start": 675.3, "end": 681.62, "text": " See how they formalize the notion of over squashing and then we'll see how they formalize the notion of negative curvature on graphs"}, {"start": 681.78, "end": 689.38, "text": " And finally how they connect the two so that they can devise an algorithm where they'll be picking these negative edges and rewiring the graph to get"}, {"start": 689.54, "end": 692.02, "text": " Uh, uh less problems with over squashing"}, {"start": 693.22, "end": 694.5, "text": " so, um"}, {"start": 694.5, "end": 698.74, "text": " Let's see some piece of notation here first this s sub r of i"}, {"start": 698.74, "end": 701.86, "text": " Is basically the the the neighborhood"}, {"start": 702.34, "end": 708.9, "text": " So it's a set of these j's of nodes j's that belong to the set of all nodes denoted as v"}, {"start": 709.54, "end": 714.66, "text": " That have this geodesic distance between i and j that's precisely equal to r"}, {"start": 715.14, "end": 722.9, "text": " So it's a fancy way of saying this is a set of nodes j which are exactly r hops away from node i"}, {"start": 723.46, "end": 724.66, "text": " okay"}, {"start": 724.66, "end": 732.5, "text": " Uh, and then this b sub r of i is the same thing but here we have like a smaller or equal"}, {"start": 732.74, "end": 735.62, "text": " Which means we'll be including all of the intermediate"}, {"start": 736.26, "end": 741.86, "text": " Intermediate nodes and thus this is uh, like a receptive field of this particular, uh node"}, {"start": 742.26, "end": 745.86, "text": " Okay, just to visualize the notation. 
It's going to be probably a bit easier"}, {"start": 746.02, "end": 750.26, "text": " so if this node 2 here is our node i then the"}, {"start": 750.26, "end": 755.86, "text": " The s1 neighborhood of i is basically nodes 3 and nodes 1"}, {"start": 756.26, "end": 758.9, "text": " Uh, then the the the s2 neighborhood"}, {"start": 758.98, "end": 764.58, "text": " So if we were to draw s2 neighborhood of i that's going to be nodes 0 and 4"}, {"start": 764.8199999999999, "end": 769.06, "text": " As you can see we need exactly the shortest path takes two"}, {"start": 769.86, "end": 772.5, "text": " Edges to get to node 0 and the same for node 4"}, {"start": 773.38, "end": 776.74, "text": " Finally the the b neighborhood. So let's say we have"}, {"start": 776.74, "end": 779.54, "text": " uh neighborhood b2 of i"}, {"start": 780.58, "end": 785.54, "text": " Is going to encompass all of these nodes we saw so that's going to be b2"}, {"start": 786.02, "end": 791.46, "text": " And let me just check whether I think it also includes. Yeah, it also includes node 2 as well"}, {"start": 791.54, "end": 796.9, "text": " So this every all of these nodes are now going to be b2 neighborhood of i and just an additional detail"}, {"start": 797.14, "end": 801.7, "text": " I assume b stands for ball and s stands for surface which makes intuitive sense"}, {"start": 801.78, "end": 805.14, "text": " As you can see here, we can include the whole volume all of these nodes"}, {"start": 805.14, "end": 809.62, "text": " Whereas in s we just take the front so we basically ignore all all of these"}, {"start": 810.18, "end": 813.06, "text": " inner nodes inside of the s sets"}, {"start": 814.34, "end": 816.02, "text": " cool, so"}, {"start": 816.02, "end": 818.02, "text": " next up, uh just a"}, {"start": 818.26, "end": 822.9, "text": " Short mention of message passing neural networks and if you're not familiar with graph neural networks"}, {"start": 822.9, "end": 827.38, "text": " I have a whole playlist i'm going to link it somewhere here. You can check it out. I'm going to just briefly"}, {"start": 828.1, "end": 830.1, "text": " Recap what this is all about"}, {"start": 830.1, "end": 836.9, "text": " And you can see that in order to calculate the features of a node i, uh in layer l plus one"}, {"start": 837.5400000000001, "end": 840.98, "text": " What we do is the following so we take the previous"}, {"start": 841.38, "end": 847.22, "text": " Feature of that node i so in the previous layer a layer l and we basically combine it"}, {"start": 847.3000000000001, "end": 854.58, "text": " So maybe concatenation with this expression here and then we we apply this function. 
That's called the update function phi"}, {"start": 854.82, "end": 857.72, "text": " Okay, so let me just now decouple"}, {"start": 857.72, "end": 861.0, "text": " What this term is all about so you can see we have a"}, {"start": 861.72, "end": 864.28, "text": " Some j goes from one to two n"}, {"start": 864.6800000000001, "end": 868.84, "text": " Which is basically n is the the cardinality of the node set"}, {"start": 868.84, "end": 872.2, "text": " So that means there's a fancy way of saying it's the number of nodes in your graph"}, {"start": 873.1600000000001, "end": 877.48, "text": " A hat ij is basically your adjacency matrix"}, {"start": 877.48, "end": 880.12, "text": " So the connectivity matrix, but this is with additional twist"}, {"start": 880.12, "end": 886.52, "text": " They added self edges and that means basically every node will have additionally"}, {"start": 886.52, "end": 888.52, "text": " uh implied edge like this"}, {"start": 889.0, "end": 894.84, "text": " Although if it's undirected it's going to be uh, like both directions basically so every node will have this self edge"}, {"start": 894.84, "end": 900.52, "text": " And plus they also have normalization using these d matrices, which are basically your"}, {"start": 901.24, "end": 908.76, "text": " um, well, yeah degree matrices and this just normalizes the adjacency the augmented adjacency matrix a tilde"}, {"start": 909.96, "end": 915.16, "text": " So what this will effectively do is filter out so this is going to be"}, {"start": 915.16, "end": 925.3199999999999, "text": " Um non-zero only for immediate nodes of node i so that means we're just going to look at the one hop neighborhood of node i and"}, {"start": 926.04, "end": 929.8, "text": " Otherwise this will be zero and thus we won't care about this thing"}, {"start": 929.8, "end": 937.8, "text": " Uh, whereas this is just your message passing function psi and these are the representations of node i and its neighbors node j"}, {"start": 937.8, "end": 939.8, "text": " Uh, so"}, {"start": 939.8, "end": 941.9599999999999, "text": " Let me just show you a diagram to make it a bit"}, {"start": 941.96, "end": 950.0400000000001, "text": " Uh easier so that means we're going to take a look at the one hop neighbors of node h of node 1"}, {"start": 950.0400000000001, "end": 956.6800000000001, "text": " And we're going to aggregate somehow combine this feature with this feature using some uh function psi"}, {"start": 956.6800000000001, "end": 959.72, "text": " Which is usually like mlp like a multilayer perception"}, {"start": 959.72, "end": 965.1600000000001, "text": " We're going to form representations in such a way which are called messages and then we're going to aggregate them"}, {"start": 965.1600000000001, "end": 970.6, "text": " Uh like this so in this particular case they they've chosen a particular"}, {"start": 970.6, "end": 981.4, "text": " Uh permutation invariant function i sum in this particular case but in general it can be uh whatever like whatever function that's that's permutation invariant like maybe"}, {"start": 981.4, "end": 991.72, "text": " Uh like average or or even a max function is permutation invariant and so people usually denote that you'll probably see that in the literature as like o plus"}, {"start": 991.72, "end": 997.4, "text": " Uh which is a general a generic permutation invariant operator instead of using sum"}, {"start": 997.4, "end": 1006.76, "text": " So that's some uh preliminaries uh the notation and uh and uh these generic message 
passing neural networks"}, {"start": 1006.76, "end": 1011.24, "text": " And now let me show you how they formalize the concept of over squashing"}, {"start": 1011.24, "end": 1019.4, "text": " Uh but before that why do we care like well the reason we care is because as we saw as soon as we have long dependencies"}, {"start": 1019.4, "end": 1025.96, "text": " The information is going to be over squashed and thus the two remote nodes cannot communicate effectively with each other"}, {"start": 1025.96, "end": 1039.48, "text": " And so they say here we say that a graph learning problem has long range dependencies when the output of a mpn n depends on representations of distant nodes interacting with each other"}, {"start": 1039.48, "end": 1046.28, "text": " And so in these types of graphs where which are called heterophilic uh as compared to homophilic"}, {"start": 1046.28, "end": 1052.3600000000001, "text": " Uh you basically want to have this uh remote communication going on if you want to have accurate predictions"}, {"start": 1052.36, "end": 1059.8, "text": " And so that's why we need to solve the over squashing problem if we are to be performant on these heterophilic types of graphs"}, {"start": 1059.8, "end": 1063.8, "text": " So let me show you how they formalize this this notion of over squashing"}, {"start": 1063.8, "end": 1070.36, "text": " Okay let's first uh let me show you the the notation here before I start parsing the equation uh the inequality"}, {"start": 1070.36, "end": 1079.8799999999999, "text": " So what they have is the following so they have nodes i and s from the set of nodes v such that s belongs to the set s sub r plus one of i"}, {"start": 1079.88, "end": 1087.48, "text": " Which means uh s is r plus one uh hops away from node i so that's something like this maybe we have a node i here"}, {"start": 1088.5200000000002, "end": 1094.44, "text": " And then we have a bunch of nodes here bunch of nodes and then somewhere here we have node s"}, {"start": 1095.0, "end": 1103.24, "text": " And this shortest path to get from from uh basically i to s is equal to r plus one"}, {"start": 1103.24, "end": 1113.32, "text": " So let me just draw that so that's r plus one length okay so we have node s here and node i here and um basically then they say if the"}, {"start": 1113.32, "end": 1121.96, "text": " If you can bound the gradient of the phi and psi functions which are the message and uh and update functions"}, {"start": 1122.6, "end": 1132.36, "text": " Basically if you can bound them by some some some scalars alpha and beta uh then uh they show that this here this expression here holds"}, {"start": 1132.36, "end": 1140.28, "text": " And this what this tells us is the following so the sensitivity of this representation of node i in layer r plus one"}, {"start": 1141.0, "end": 1149.3999999999999, "text": " Uh and sensitivity with respect to uh node s so to the raw features of node s is upper bounded"}, {"start": 1149.3999999999999, "end": 1155.3999999999999, "text": " So we take the absolute value of this of this uh derivative here and we can see it's bounded by alpha beta"}, {"start": 1155.4, "end": 1164.2800000000002, "text": " Raise to the power of r plus one times a hat raised to the power of r plus one and then we just take the uh index by i and s"}, {"start": 1164.2800000000002, "end": 1171.4, "text": " Okay so there is a couple of things worth mentioning here first of all if you're not familiar with taking powers of these adjacency matrices"}, {"start": 1172.0400000000002, 
"end": 1180.52, "text": " There is a special semantics behind uh a raised to the power r plus one namely if you index this matrix here with i s"}, {"start": 1180.52, "end": 1188.12, "text": " You'll get the number of shortest paths between i nodes i and nodes s which makes intuitive sense"}, {"start": 1188.12, "end": 1195.48, "text": " So basically the more such paths we had that means that s so the features from from from the information from node s"}, {"start": 1195.48, "end": 1204.92, "text": " Had multiple paths by which they could reach node i and thus you can intuitively understand that that means that this upper bound is going to become bigger"}, {"start": 1204.92, "end": 1211.8000000000002, "text": " Because the end the over squashing will will be uh like uh less um prominent"}, {"start": 1211.8000000000002, "end": 1221.88, "text": " That's observation number one observation number two here is the following so you do have to have uh you do have to to observe the representation of node i"}, {"start": 1221.88, "end": 1230.76, "text": " Uh only uh after applying r plus one uh layers of of your graph neural network the reason is underreaching"}, {"start": 1230.76, "end": 1240.52, "text": " If you were to just apply let's say if like less than r plus one then this information from node s would have not reached node i yet"}, {"start": 1240.52, "end": 1250.12, "text": " So that means maybe the information from from this node s only have reached until here and that means that this is going to be uh zero"}, {"start": 1250.12, "end": 1257.48, "text": " Because like there is no influence of hs of the of the raw vector of node s uh on the representation"}, {"start": 1257.48, "end": 1262.52, "text": " Because the information still hasn't propagated to node i so that's something to keep in mind"}, {"start": 1262.52, "end": 1271.48, "text": " So that means that here you have to have r plus one because then you're basically calculating non-zero sensitivity of node i"}, {"start": 1271.48, "end": 1278.76, "text": " And how the information from s influences node i so i think that's that's pretty much everything you need to understand about about this formula"}, {"start": 1278.76, "end": 1288.04, "text": " So it's a nice way of connecting over smoothing on one hand with topological uh topology of your graph on the other hand okay"}, {"start": 1288.04, "end": 1299.8, "text": " So as an example they show here for example if the uh shortest path between i and s is r plus one and the sub graph induced on on this uh b set is a binary tree"}, {"start": 1299.8, "end": 1309.72, "text": " Uh then this adjacency matrix uh like indexed at is is going to have this particular shape and what's interesting here is that as r grows"}, {"start": 1309.72, "end": 1315.96, "text": " Which means as you're looking at at further away nodes you're going to have exponential decrease of this term"}, {"start": 1315.96, "end": 1324.2, "text": " Which means in turn that you're going to have small this will be exponentially decreasing and thus the sensitivity will be exponentially decreasing"}, {"start": 1324.2, "end": 1327.3999999999999, "text": " Which means that the over squashing will become more and more prominent"}, {"start": 1327.4, "end": 1338.6000000000001, "text": " Okay now let's continue so let me show you now how they formalize how they calculate the curvature of a graph that may be unintuitive to some of you"}, {"start": 1338.6000000000001, "end": 1341.8000000000002, "text": " As well as to me on the first pass of 
reading this paper"}, {"start": 1342.6000000000001, "end": 1348.8400000000001, "text": " Okay so we saw how the how these curved spaces look like when they are continuous"}, {"start": 1348.8400000000001, "end": 1353.48, "text": " Now let's see how this thing works for discrete case i.e. for graphs"}, {"start": 1353.48, "end": 1367.48, "text": " So intuitively the the notion is the following so if you take these two nodes p and q and you take two edges you'll see that in graphs such as this one"}, {"start": 1367.48, "end": 1379.48, "text": " They'll tend to converge to this node k and thus form a triangle and basically triangles will be a feature that indicates that the space is positively curved in graphs"}, {"start": 1379.48, "end": 1393.48, "text": " On the other hand if you take two edges and they stay on the same distance then and they form four cycles by doing so then this space is going to be classified as having zero curvature"}, {"start": 1393.48, "end": 1401.48, "text": " And finally if we have edges that start diverging then in that case the space is negative"}, {"start": 1401.48, "end": 1415.48, "text": " Now this is kind of hand wavy because unless you attach geometric like information to these edges like the fact we just chosen to draw the graph like this doesn't mean anything"}, {"start": 1415.48, "end": 1427.48, "text": " But like you can imagine that also like you can see here that the path between this node and this node starts becoming larger and larger whereas here it's like zero and here it stays constant"}, {"start": 1427.48, "end": 1435.48, "text": " So that's maybe a better way to think about this but like just visually as well you can see that these two kind of converge these two stay parallel and these two diverge"}, {"start": 1435.48, "end": 1439.48, "text": " This is probably yeah probably better to explain it like this"}, {"start": 1439.48, "end": 1451.48, "text": " Okay so now intuitively what this curvature calculation needs to do is count these topological elements and by counting them decide on the actual curvature of an edge"}, {"start": 1451.48, "end": 1455.48, "text": " So let's see how they choose to do that"}, {"start": 1455.48, "end": 1470.48, "text": " So first worth mentioning there are two discrete variants of calculating rigid curvature for graphs one is known as the Olivier curvature denoted as this italic weird font K symbol"}, {"start": 1470.48, "end": 1476.48, "text": " And we also have this from a Foreman curvature denoted as big F here"}, {"start": 1476.48, "end": 1492.48, "text": " So there are trade-offs between these two curvature Olivier is much more expressive but much more computationally expensive Foreman has the opposite properties basically it's much more efficient computationally but it's not as expressive as Olivier's curvature"}, {"start": 1492.48, "end": 1501.48, "text": " What they've done is they've taken the Foreman curvature and they've upgraded it to this balanced Foreman curvature"}, {"start": 1501.48, "end": 1509.48, "text": " And in order to understand the equation let me first break down a couple of sets we'll need to take care about"}, {"start": 1509.48, "end": 1517.48, "text": " So the first set is this one which basically tells you the number of triangles that are formed on this edge ij"}, {"start": 1517.48, "end": 1525.48, "text": " And you can formally basically calculate that by finding an intersection between S1 sets of node i and node j"}, {"start": 1525.48, "end": 1529.48, "text": " So these are the triangles 
that are based at the edge ij"}, {"start": 1529.48, "end": 1535.48, "text": " So let me show you an example so if we have here this particular graph and let me just"}, {"start": 1535.48, "end": 1549.48, "text": " Okay so basically this is node i here and this is node j we find the S1 sets first so for node i so S1 of i is going to be this"}, {"start": 1549.48, "end": 1558.48, "text": " It's going to be this element here because you can see the distance is just one this one this one and this one as well as j so this is the S1 set"}, {"start": 1558.48, "end": 1567.48, "text": " On the other hand if we want to calculate S1 set for j that's going to be 5, 6 and 0"}, {"start": 1567.48, "end": 1577.48, "text": " And what they do is they find an intersection and you can see that the only intersection we have in this particular example is node 6"}, {"start": 1577.48, "end": 1584.48, "text": " So this one here and as you can see it does form a triangle and that's the only triangle we have in this particular graph"}, {"start": 1584.48, "end": 1590.48, "text": " So that's the first thing we need to care about triangles the second thing we need to care about is these four cycles"}, {"start": 1590.48, "end": 1604.48, "text": " And you can see here a formulaic description i'm going to just read you what it represents and that's that these are the neighbors of i forming a four cycle based at the edge ij without the diagonals inside"}, {"start": 1604.48, "end": 1619.48, "text": " Okay so basically let me again show you this for node i the four cycles are going to be so that those are going to be 2 and 3 because you can see here we have a four cycle here going from i to j and then back"}, {"start": 1619.48, "end": 1627.48, "text": " So it's a four cycle we have a second four cycle going over 3 as you can see here going to j and backwards so those are four cycles"}, {"start": 1627.48, "end": 1643.48, "text": " And you may wonder why have I excluded 4 from this set and the reason is because we have this diagonal here and that's why we will not include node 4 into this set of four cycles of node i"}, {"start": 1643.48, "end": 1659.48, "text": " Okay now let me try and connect that with the formula just for fun so you can see here what the formula states and let's check whether this why the node 4 basically will not belong to a set"}, {"start": 1659.48, "end": 1676.48, "text": " So we can see that this set of four cycles for node i is a set of these k's such that k is basically belongs to S1 set of node i but does not belong to S1 set of j neither can k be equal to j"}, {"start": 1676.48, "end": 1695.48, "text": " Okay let me now just delete this to make this clearer so let me delete everything here and let me basically show you how this is going to function so k belongs to S1 of i so that means k is one of these elements here okay"}, {"start": 1695.48, "end": 1712.48, "text": " It's going to be one of those elements here and then we exclude basically we exclude j which means we exclude this one and we also exclude everything that belongs to node S1 set of j so that means we're going to exclude this one as well"}, {"start": 1712.48, "end": 1728.48, "text": " So we end up with 2, 3 and 4 and we already saw that 2 and 3 do satisfy all of the conditions and thus we included those in this set but we for some reason we did not include number 4 and the reason is because of this other condition"}, {"start": 1728.48, "end": 1750.48, "text": " So that k needs to be such that there exists this node w that belongs so 
that means that this set here is non-zero not empty set so you can see that S1 of k so let's find S1 of k so k is 4 in our example so this is our k this is our potential candidate at this point of time"}, {"start": 1750.48, "end": 1762.48, "text": " So this is these are the S1 this is the S1 neighborhood of k and intersection with S1 of j is going to be just this node here so this is going to be the intersection"}, {"start": 1762.48, "end": 1780.48, "text": " But at the same time we need to find something that does not belong to S1 and because this thing belongs to S1 that's why we have to digit again and that's a very complicated way of telling you that hey if we have a diagonal here we cannot count in this 4 cycle that's the whole point"}, {"start": 1780.48, "end": 1805.48, "text": " Finally I'm going to skip this one basically gamma max tells you the number of these degenerate cycles i.e. cycles that pass over a common node and for this particular graph it's going to be 2 and that's because you can see that 2 cycles 2 4 cycles are passing over this node 5 here and that's why this gamma is going to be 2 for this particular example"}, {"start": 1805.48, "end": 1825.48, "text": " Okay so enough rambling let's see the formula so first in line with the discussion about geodesic dispersion one expects the number of triangles to be related to positive curvature complete graph number of 4 cycles to 0 curvature grid and the remaining outgoing edges to negative curvature 3"}, {"start": 1825.48, "end": 1848.48, "text": " Our new curvature formulation reflects such an intuition and here is finally the formula so you can see the Ricci curvature to the balance form on curvature implementation of the Ricci curvature has this very very like detailed shape and so I'm not going to explain every single detail but you can understand basically that it sums"}, {"start": 1848.48, "end": 1871.48, "text": " You can see here we have a sum of triangles which is going to contribute to the positive value of this curvature and everything is normalized by the degrees of nodes i and j and a fun property that emerges from this is because we have this minus 2 is that the curvature is always going to be bigger than minus 2"}, {"start": 1871.48, "end": 1890.48, "text": " So that's and I think it's also upper bounded because of the normalization by one but don't quote me on that one okay cool so now let's see a couple of very interesting theorems that are going to connect the this this negative curvature the curvature calculation with the over squashing"}, {"start": 1890.48, "end": 1912.48, "text": " So because of this theorem that the Olivier curvature is lower bounded by this balanced form on the curvature because of that there is a very interesting corollary that's very interesting for this paper and the corollary says the following thing if our curvature is positive"}, {"start": 1912.48, "end": 1928.48, "text": " For any edge i j and I'm really sure that you should state all here because you need to have this condition satisfied for every single edge which is a very strong condition but if you have that then there exists a polynomial P such that"}, {"start": 1928.48, "end": 1950.48, "text": " So that the B set of this of this node i so the the R hop receptive field is going to be upper bounded by a polynomial in R and that's super important because we just saw before that if we have for example like a binary tree type of branching then we're going to have exponentially growing receptive field"}, {"start": 1950.48, 
"end": 1976.48, "text": " Whereas here if the edges are positive we can have a guarantee that they are bounded by a polynomial and here we for the first time see the connection between over squashing and curvature so there is a notion that if your graph is positively curved then because of this polynomial thingy indirectly you're going to have less over squashing because you're going to have less bottlenecks and less over squashing"}, {"start": 1976.48, "end": 1998.48, "text": " Okay now let's take that one step forward and find an explicit connection between curvature on one hand side and over squashing on the other side and that's what this theorem for is all about so this is going to be probably the the the main the main contribution of the paper and let me start and dissect this theorem for you"}, {"start": 1998.48, "end": 2016.48, "text": " There is a huge proof in the appendix you can go and check it out if you're into math but like this is going to give you a rough intuitive understanding of what's going on here and why this works so consider a MPN so a message passing neural network so the one we saw in equation one that's the generic MPN"}, {"start": 2016.48, "end": 2045.48, "text": " Let I be connected with J and the degree of node I is smaller or equal than DJ and then assume two conditions so the first condition assumes that you can basically upper bound the gradient of the update function and the gradient of the psi function of the message function for any layer for any L between 0 and capital L minus 1 which is L is the number of layers in our MPN"}, {"start": 2045.48, "end": 2068.48, "text": " With the depth being at least two layers then they say in two there exists Delta such that Delta is bounded like this and bounded like this so this is the this gamma coefficient we saw a couple minutes ago and we have that the curvature is upper bounding by upper bounded by minus two plus Delta"}, {"start": 2068.48, "end": 2094.48, "text": " Okay so this is so far very like maybe not not that clear I'm going to give you the gist in a second so then there exists this set Q sub J which is a subset of S to set of five satisfying that the size of that set is bigger than one over Delta blah blah and for all of these L zeros between zero and capital L minus two we have this equation here"}, {"start": 2094.48, "end": 2120.48, "text": " Okay so what this tells us here is that if we sum over elements of that of that subset Q of S2 and if we if we basically sum up all of these sensitivities of various nodes K from that subset that's going to be upper bounded by this expression here so alpha beta raised to the power of two times Delta"}, {"start": 2120.48, "end": 2141.48, "text": " raised to the power of one over four so now let me let me try and visualize this and break it down so first of all we have node I here so let's say let's say we have node I here we have one hop neighborhood here and finally we have two hop neighborhood here so what this theorem states is that basically there exists a subset so this is"}, {"start": 2141.48, "end": 2170.48, "text": " S2 so this this thing here let me try and draw it this thing here is S2 and what they state is that there exists some subset of these nodes so let me just kind of take a subset such that if you were to sum so now take some nodes from from this subset and if we were to calculate their sensitivities they're going to be upper bounded by this expression here so you can also read this as over squashing so over squashing of that"}, {"start": 
2170.48, "end": 2199.48, "text": " over-squashing of that neighborhood, which I'm going to denote as OS for example. And now what you can see is that if the edge is very negatively curved, so if delta goes down to zero, in the limit as delta goes to zero this edge ij is going to be very negative, because it's upper bounded by minus two plus a small number, which means it converges almost to minus two, which is a super negative edge. If that happens"}, {"start": 2199.48, "end": 2221.48, "text": " that means that this expression here, with delta going to zero, the whole expression goes to zero, and that means we have over-squashing. So again, the chain of thought here is the following: if we have very negative edges, that means we directly have an over-squashing problem, and that's the direct connection they made"}, {"start": 2221.48, "end": 2250.48, "text": " there are a lot of details and other theorems, and there is a lot of math in the appendix, but this is roughly what they're saying. I'm going to make one more connection here, with the Cheeger constant, which is a very interesting constant in graph theory, and then I'm going to show you the final algorithm, the rewiring algorithm they applied in this paper. So, back to the Cheeger constant: the Cheeger constant intuitively captures the notion of how bottlenecked the graph is"}, {"start": 2250.48, "end": 2279.48, "text": " so if I were to draw a simple graph like this: let's imagine we have some nodes here which are densely connected, then we have another community of nodes which are densely connected, and there is a single edge between them. You can see intuitively that this barbell-shaped graph is very bottlenecked, there is a huge bottleneck, and that's this edge here. Okay, and you'll see that this"}, {"start": 2279.48, "end": 2309.48, "text": " h(G), this Cheeger constant, correlates with our intuition here very nicely because of how we calculate it. And there is a mistake here, this term should not be here; I had a discussion with one of the main authors of the paper, Francesco, and he told me this is just a typo. So basically, you can see that we take the number of elements in this set, and this set, as you can see here, consists of the edges such that i belongs to S and j belongs to its complement. So in this particular example that means the following: we're going to"}, {"start": 2309.48, "end": 2338.48, "text": " say this is S and this is the complement, and then this edge here is going to be a part of that set, it's the only edge that belongs to it, and that means we have one here, a very small number. And then we take the minimum of the volumes, so whatever the minimal volume between these two sets is, we find that and we divide by it, and the volume itself is defined as the sum"}, {"start": 2338.48, "end": 2360.48, "text": " across all of the nodes of that set S of their degrees. So you can see this is going to be some huge number, while we have a small number in the numerator, and that means this is going to go towards zero for this edge case of a very bottlenecked graph. And as we start adding more edges, you can imagine, if I were to start adding more edges here"}, {"start": 2360.48, "end": 2390.48, "text": " and add more edges, this thing is going to grow, it's going to, not explode, but it's going to have more and more elements, and thus this Cheeger constant is going to consequently rise from zero to some bigger number, and thus it correlates in a way with how bottlenecked the graph is. And now you can see a bunch of problems: it's a single scalar, so what if we had many more communities, what if we had even more communities here and they"}, {"start": 2390.48, "end": 2407.48, "text": " are connected; it fails to capture more interesting topologies. It's a number that gives you some global statistic, a global understanding of what's going on in the graph, but it's not that detailed"}, {"start": 2407.48, "end": 2437.48, "text": " so then they show the following thing, and that's that the spectral gap of a graph is bounded by the Cheeger constant, and this spectral gap again correlates with how connected the graph is. And what is this spectral gap? They say here: lambda one is the first non-zero eigenvalue of the normalized graph Laplacian, often referred to as the spectral gap. So that's a nice connection between"}, {"start": 2437.48, "end": 2466.48, "text": " spectral theory and this graph theory, i.e. the Cheeger constant. But more interestingly for us, they show that if the curvature is positive for all of the edges, then you can lower bound the Cheeger constant, and also the spectral gap, by this k, which is the same k as we have here, which means that by controlling the curvature, how positive it is, we can control the"}, {"start": 2466.48, "end": 2480.48, "text": " Cheeger constant, i.e. how bottlenecked the graph is, and the spectral gap of the graph. So those are some nice connections, not necessary to understand this paper, but I made a small tangent here, hopefully you can appreciate it"}, {"start": 2480.48, "end": 2510.48, "text": " okay, so finally let's get to the meat of the paper, at least procedurally: this is the final rewiring algorithm. Before going into the details, let me just tell you what rewiring methods are. They say here that more recently there is a trend to decouple the input graph from the graph used for information propagation, so that's an important thing to notice; such methods are often generically referred to as graph rewiring, and we see here an example. So here on the left we have a graph"}, {"start": 2510.48, "end": 2540.48, "text": " on the left-hand side we have the original graph, and we have some coloring depending on the curvature, so again blue means negatively curved edges and red means positively curved edges. But more importantly, you can see that if we apply some other baseline, and this one is called the DIGL method, which stands for Diffusion Improves Graph Learning, it completely changes the statistics of this particular graph: it adds a lot of edges and thus completely skews the degree"}, {"start": 2540.48, "end": 2570.4, "text": " distributions, etc. On the other hand, this thing here is their proposed method, which we're going to see in a second, and you can see it preserves the statistics but still manages to make the curvature of this graph higher. Okay, now let's see how the actual thing works. The important part to notice is that now, instead of operating your graph neural network on top of the original graph, what you do instead is you rewire"}, {"start": 2570.4, "end": 2600.4, "text": " the graph and then 
you use those edges to do your message passing on your graph; that's the idea behind these rewiring methods, they serve as a type of pre-processing step. Okay, so here is the Stochastic Discrete Ricci Flow (SDRF) algorithm, which is a super fancy name, but it's a very cool algorithm, so let's see what happens here. As input we have a graph G, we have a temperature tau, which is positive, and it's there because we are going to be"}, {"start": 2600.4, "end": 2630.4, "text": " sampling in this algorithm, that's why we have the notion of temperature; we have a max number of iterations, and an optional curvature upper bound called C plus, and we're going to see what that means in a second. Okay, so the algorithm proceeds as follows: we're going to repeat the following steps until convergence or until the max number of iterations has been reached. Let me just draw an accompanying graph, that's going to help us grasp this a bit better: let's say we have two edges here, and let's say they form maybe"}, {"start": 2630.4, "end": 2660.2400000000002, "text": " a bridge between some densely connected communities, maybe something like this, and let's see how the algorithm operates. So, for the edge ij with minimal Ricci curvature, which in this example is going to be this one here, our most negatively curved edge, they say: calculate the vector x, where the element of that vector x denoted as"}, {"start": 2660.24, "end": 2684.7999999999997, "text": " x sub kl is defined like this; it is basically the improvement to the curvature of this edge ij from adding an edge kl, where k belongs to the one-hop neighborhood B1 of i and l belongs to the one-hop neighborhood B1 of j. Okay, so let me just explain what that means, so let's"}, {"start": 2684.8, "end": 2714.0800000000004, "text": " say I'm going to try and add an edge, basically here, maybe this one: you can see this is node i, this is node j, and I took a node from the B1 neighborhood of i and connected it with a node from the B1 neighborhood of j, and now I check how much of an improvement this has on the curvature of this edge, and that's this term here, okay"}, {"start": 2714.08, "end": 2734.56, "text": " so we're basically going to do the same thing multiple times, exhaustively connecting this one with this one, and this one with this one, etc., and we're going to calculate this improvement for all of those candidates, and then we're going to sample with probability"}, {"start": 2734.56, "end": 2764.48, "text": " given by the softmax of that vector x, where we use tau to modulate the temperature, and we're going to sample from that distribution and then add the sampled edge to the graph. So maybe one of these edges is going to be sampled, maybe this one is going to be chosen, and that will be the end of step one. You can imagine that if you push tau towards positive infinity, towards some big"}, {"start": 2764.48, "end": 2782.2400000000002, "text": " positive number, then instead of a smooth distribution you're going to have a deterministic distribution, where the edge that had the highest improvement is always going to be chosen and added to the graph. So you can use tau to"}, {"start": 2782.24, "end": 2809.7599999999998, "text": " tweak your algorithm in that sense. Okay, so we've added the edge, and that increased the positivity of this initially very negatively curved edge, denoted in blue. The second step is: remove the edge with maximal Ricci curvature, but only if that curvature is bigger than this C plus, the upper bound I mentioned. Again, you have some control here: if you make C plus arbitrarily big"}, {"start": 2809.76, "end": 2839.6800000000003, "text": " then that will mean that you'll never be removing positive edges, and you'll just be adding these edges that increase the positivity of the graph. The reason they have this part here is to preserve the statistics of the graph: as much as you add, and thus increase the number of edges, you also want to remove, and thus balance out the number of edges, so the degree distribution ends up less skewed"}, {"start": 2839.68, "end": 2869.52, "text": " in that way. Cool, and you do this until convergence, where convergence means that whatever combination of edges you try to add, you no longer get an improvement here; that's how convergence is defined. And that's it, that's the Stochastic Discrete Ricci Flow algorithm, a fairly simple idea if you think about it, and very neat because it has this awesome connection with differential geometry. Okay, so finally they do compare"}, {"start": 2869.52, "end": 2888.08, "text": " how SDRF works compared to DIGL, which is this competitive approach that works much better on homophilic graphs, but I'm going to skip this math and these theorems because it's arguably a tangent, not necessary to understand the core of this paper"}, {"start": 2888.08, "end": 2903.84, "text": " and so I'm going to skip that part this time. So here are the results, on various different graph datasets; Cora, Citeseer, Pubmed might be familiar to most of you who have had any exposure to the graph ML field"}, {"start": 2903.84, "end": 2933.76, "text": " and you can see we have the homophily coefficient associated with each of these graph datasets, denoted by this h here, and you can see these ones have low h, which means low homophily, which means high heterophily, and these ones are much more homophilic. And again, homophily just means birds of a feather flock together, i.e. if you have a node that is connected to its neighboring nodes, there is a high chance they'll share the same label"}, {"start": 2933.76, "end": 2963.5200000000004, "text": " whereas in heterophilic datasets, oftentimes very distant nodes, nodes which are far apart from each other, are going to share the same label. And you can imagine that for these heterophilic datasets it's very important to have an effective strategy for communicating over those long distances, and over-squashing is one of the things that's going to ruin that communication; that's what this paper is handling, and that's basically why it's better suited for heterophilic datasets compared to homophilic ones"}, {"start": 2963.52, "end": 2986.88, "text": " and the results support the claim: you can see that for these heterophilic datasets it outperforms all of the other baselines. We have DIGL here, and we have this +FA approach, which basically just uses a fully adjacent last layer in the graph neural network, a very simple modification that also tries to cope with the over-squashing effect"}, {"start": 2986.88, "end": 3014.96, "text": " and yeah, as I said, better performance on these heterophilic datasets, and competitive although not as great performance on these homophilic datasets, where DIGL seems to perform somewhat better compared to SDRF, which is expected. Okay, so finally, just some distributions here: they show that their"}, {"start": 3014.96, "end": 3038.32, "text": " rewiring method is much better at preserving various statistics. You can see here, comparing the degree distributions, that the blue one is the original one, the green one is SDRF and DIGL is the orange one, and you can see DIGL completely changes the degree distributions for various different datasets. You can also see that"}, {"start": 3038.32, "end": 3056.7200000000003, "text": " the number of edges that DIGL adds is huge, whereas it does not remove edges; on the other hand, SDRF both adds and removes edges, and that's why it preserves the statistics of the graph. Finally, I'm going to end on this note: they mention a limitation which is something to think about"}, {"start": 3056.72, "end": 3076.9599999999996, "text": " so, one limitation of our work is that the theoretical results presented here do not currently extend to multigraphs. Okay, that's one thing, because this whole business of counting triangles and four-cycles breaks down in the case of multigraphs, where you can have multiple edges between nodes"}, {"start": 3076.96, "end": 3106.88, "text": " and finally they say, even more importantly, that in addition the current methodology is agnostic to information beyond the graph topology, such as node features. So why is that important? Imagine you have, for example, a social media graph, and that graph represents who is following whom; if you were to start rewiring it based just on the curvature of the edges, you're changing the underlying semantics of the social media"}, {"start": 3106.88, "end": 3136.8, "text": " graph, and that can lead to false predictions later in your pipeline. Because you're not taking node features into account, you are less nuanced in how you change the semantics of the underlying graph, if that makes sense. Okay, so on that note, this is a very 
very cool paper i loved it and uh if you found the video useful yourself uh consider sharing the video um and subscribe"}, {"start": 3136.8, "end": 3157.44, "text": " to this channel and finally do join our discord community you'll check you can find the links down in the description until next time bye bye"}]
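For illustration, here is a minimal sketch of the curvature-based rewiring loop described in the segments above, written in the spirit of SDRF but not the authors' reference implementation. It assumes a networkx-style graph and a user-supplied `curvature(G, i, j)` helper playing the role of the paper's balanced Forman curvature; the parameter names `tau`, `max_iters` and `c_plus` mirror the inputs discussed in the transcript.

```python
import numpy as np
import networkx as nx


def softmax(x, tau):
    """Temperature-scaled softmax: a larger tau makes the sampling greedier."""
    z = tau * (x - x.max())
    e = np.exp(z)
    return e / e.sum()


def curvature_rewiring(G, curvature, tau=10.0, max_iters=100, c_plus=None):
    """Curvature-based rewiring sketch: patch the most negatively curved edge each round."""
    for _ in range(max_iters):
        # step 0: find the most negatively curved edge, i.e. the worst bottleneck
        curv = {(i, j): curvature(G, i, j) for i, j in G.edges()}
        i, j = min(curv, key=curv.get)
        base = curv[(i, j)]

        # step 1a: score every candidate edge (k, l), with k around i and l around j,
        # by how much adding it would improve the curvature of the bottleneck edge (i, j)
        candidates, gains = [], []
        for k in list(G.neighbors(i)) + [i]:
            for l in list(G.neighbors(j)) + [j]:
                if k == l or G.has_edge(k, l):
                    continue
                G.add_edge(k, l)
                gains.append(curvature(G, i, j) - base)
                G.remove_edge(k, l)
                candidates.append((k, l))
        if not candidates or max(gains) <= 0:
            break  # no candidate improves the bottleneck any more: treat as converged

        # step 1b: sample one candidate from softmax(tau * gains) and add it to the graph
        probs = softmax(np.array(gains), tau)
        k, l = candidates[np.random.choice(len(candidates), p=probs)]
        G.add_edge(k, l)

        # step 2: optionally remove the most positively curved edge, but only if its
        # curvature exceeds c_plus, so edge counts and degree statistics stay balanced
        if c_plus is not None:
            (a, b), c_max = max(curv.items(), key=lambda kv: kv[1])
            if c_max > c_plus and G.has_edge(a, b):
                G.remove_edge(a, b)
    return G


# usage sketch on the barbell example from the transcript (my_curvature is hypothetical):
# G = nx.barbell_graph(5, 0); curvature_rewiring(G, my_curvature, tau=10.0, c_plus=6.0)
```

Pushing `tau` towards a very large value makes step 1b effectively deterministic (the best-scoring edge is always chosen), and making `c_plus` very large disables edge removal, which are exactly the two control knobs discussed in the segments above.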
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=vf18FLdKkY4
NeuroEvolution of Augmenting Topologies (NEAT) and Compositional Pattern Producing Networks (CPPN)
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover 2 papers: 1) NEAT: NeuroEvolution of Augmenting Topologies - a seminal paper from 2002 that evolves not just the network weights but also network architectures 2) CPPN: Compositional Pattern Producing Networks: A Novel Abstraction of Development - an interesting model of developmental biology with a completely different approach compared to e.g. cellular automata models ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ NEAT paper: http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf ✅ CPPN paper: http://eplex.cs.ucf.edu/papers/stanley_gpem07.pdf ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to NEAT and CPPNs 02:35 Basic ideas behind NEAT 07:55 NEAT genome explained 11:05 Competing conventions problem 13:25 NEAT mutations explained 15:30 NEAT genome mating explained 19:20 Maintaining innovations via speciation 25:25 Explicit fitness sharing 29:45 NEAT on XOR task 31:30 CPPNs and neural automata 36:40 Spatial signal as a chemical gradient abstraction 39:20 Composing functions 45:10 CPPN main idea recap 46:45 Breeding "images" using CPPNs 49:20 CPPNs are highly expressive (symmetries, repetition...) 54:17 HyperNEAT idea explained 57:43 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #NEAT #CPPN #evolution
What's cracking guys, in this video I'll be covering two very interesting older seminal papers. One of them is this one, which introduces a method called NeuroEvolution of Augmenting Topologies, or NEAT for short, so you may know it by the acronym NEAT. And if you've ever watched some of those "neural network learns to play a game" types of videos on YouTube, such as these two very popular ones, the fun fact is that both of them used the NEAT algorithm introduced in this paper to train a neural network to play the Snake game, or to play Super Mario in this video here. The second paper I'm going to cover is the Compositional Pattern Producing Networks paper, or CPPNs for short, and the reason I'm so excited about this paper, which I discovered quite recently, is the following. I've personally been thinking about approaching machine learning from various different perspectives, one of those being genetics, the other being neuroscience. As you may know, there are a lot of people doing genetics as their primary job who then additionally learn machine learning or deep learning and fuse those two fields together, and that's super exciting. But if genetics is step one, and neuroscience, i.e. adult human brains, is step three, then for whatever reason we are missing out on this whole step in between: after fertilization, after the sperm cell fertilizes the egg cell and you get the zygote, there is this whole process of development whereby from this single cell called the zygote you end up with an adult, I mean a baby human brain, and we kind of don't know what's going on there. In my opinion, just understanding that whole procedure can help us design better architectures; that's at least my intuition. So basically, instead of handcrafting neural architectures, why not understand the actual biological process that leads to such complex architectures as the human brain, and then train the resulting architecture. That's the rough idea; the CPPN paper is one model of this development process, so it covers this number two here on this drawing, and I'll be covering it after NEAT. Before that, let's dig into NEAT and understand how this optimization method works. Okay, so let's start here. The goal of fixed-topology neuroevolution is to optimize the connection weights that determine the functionality of a network, and that's something we are very familiar with: we usually have a fixed architecture, a fixed topology, and we try to tune the weights. Now, they say here, the basic question however remains: can evolving topologies along with weights provide an advantage over evolving weights on a fixed topology? So again, let me just reiterate. Usually what you end up with in machine learning, or at least in deep learning, is this: if you have a neural network, let me just draw, for simplicity's sake, a simple MLP with a single hidden layer, it's going to be connected something like this, with connections going from input neurons to output neurons. What you usually do is, depending on your task, you define a loss function and then you apply some gradient-based optimization method, usually Adam, which is very popular, or just plain old SGD, etc. What this paper instead proposes is: why not learn, I mean discover, architectures and then tune them along the way as well.
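For contrast with the NEAT approach described next, here is roughly what the usual "fixed topology, tune the weights only" setup looks like: a tiny two-layer MLP whose architecture is chosen up front and whose weights are fitted with plain gradient descent. This is an illustrative numpy sketch on the XOR toy task (which also comes up later in the video), not code from either paper.

```python
import numpy as np

# XOR toy task: the topology (2 inputs -> 4 hidden -> 1 output) is fixed by hand,
# and learning only ever touches the weight values.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass through the fixed architecture
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for a squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient step on the weights only; the topology never changes
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]] (may need another seed or learning rate)
```

NEAT, described next, starts from the opposite end: the simplest possible topology, with structure added gradually over generations while the weights are evolved alongside it.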
So instead What it does is it starts with the simplest possible neural networks where depending on the on the task up obviously you have for example Let's say we have such a task where the input is a scalar and the output is a scalar in that case It will start with a simple neural networks as this one where you have a just a single Connection directly from the input output. There is some weight here and that's that's it very simple and then what they do is basically they have a population of these neural networks and They'll basically be evolving them. They'll be mutating them. They will be discovering new architectures tuning the parameters and By doing that solving the task, so it's a very rough idea behind needs now Let's let's dig into a bit more details. So first let's let's kind of open it up with some questions They say here so evolving structure incrementally presents several technical challenges first Is there a genetic representation that allows disparate? Topologies to cross over in a meaningful way because you can imagine if you have very different topologies like for example Let's take this one and let's take this super simple one. It's not quite clear. How do you go about? doing cross mating between these two architectures, how do you Mix up the genes behind these architectures and if the terminology is not clear enough now I'll get to that in a second, but like Just have some rough mental map for now The second question is how can topological innovation that needs a few generations to be optimized be protected? So that it does not disappear from the population prematurely So what they say here is the following so imagine you have a Imagine you have again a set of population of neural networks and I'm gonna just know them by these simple rectangles just for abstraction sake for for simplicity sake, so Let's say we have like three neural networks and let's say you mutate one of these so these are just MLPs So this is like neural network one two three and let's say you mutate this one by adding additional node here. Okay, and Now what will happen is if you were to continue comparing this novel like this mutated neural network with the Networks two and three it would be a suboptimal probably because initially This is just randomly initialized or something and then it cannot compete and so what they do how needs solves This is they basically do this speciation process whereby they group similar networks together And then they form niches and then these networks compare between each other, but they don't compare with other Global like arbitrary network from from the whole population So that will be the the solution that need introduces to cope with this problem. So basically speciation The third question is how can topologies be minimized throughout evolution without the need for a specially contrived fitness function? 
that measures complexity and I kind of hinted at the solution already here and the solution is to start with the simplest networks possible and then gradually keep adding the complexity and Because they the optimization procedure of need has this this bias this this inherent bias That's why it's going to be looking for simpler architectures In the first place and this aligns very neatly with the with the principle of the OCam's razor Whereby the the simpler the simplest hypothesis the simplest model that performs Like good enough is probably the best one Okay And the first question will be solved by these historical markings But we're gonna get at that point in a couple of minutes Okay Let me first show you how they represent the networks in the background because if you're not familiar with genetic algorithms Did this all all this may be a bit like weird? And so because of that, let me just start here So what they do is for every neural network in the population of neural networks They keep up they keep this genome in the background So also known as genotype and each genotype corresponds one to two one to one to the to the phenotype Corresponding phenotype. So this genotype basically determines this this phenotype this neural network here And this is just a terminology from genetics So phenotype basically means any observable trait for example in humans that may be like the eye color That means like maybe like blood type whatnot and there is some gene that's going up for that particular phenotype for that observable trait So that's the terminology they are taking here because this is this whole optimization field is roughly loosely inspired by the genetics and and Thus the terminology Okay, so you can see here they have two different lists of of uh, like genes versus the node genes So we have node one through node five corresponding to these nodes here one through five And you can see there are different types of nodes So some of these are sensor nodes, which means input nodes Some of these are output nodes like node four here and then we have the hidden nodes the node five here More interestingly, they have these connections genes and you can see how each of these squares basically describes these connections So let's take this one for example, so you can see here. We have one. Let me just change the color to red here So we have one as the input and then four as the output so that basically specifies this particular connection here You can see the weight is 0.7. 
You can see that it's enabled I'm going to tell you what that is in a minute and we have this innovation number also known as this historical marker Which is a very important part of this whole algorithm and i'll get into a bit more details a bit later Okay, so this enabled thing so let me contrast that with disabled So we have this connection between two and four and if you take a look There is no connection between two or four because it's disabled and again this terminology comes from the genetics Genes can be expressed or not expressed if a gene is not expressed then basically you don't have the associated phenotype so for example You don't want to so for example if you have you have obviously different cell types in your body You have like skin cells you have like muscle cells you have neurons or neural cells and obviously a muscle cell Doesn't need a certain proteins that the neural that the neural cell needs and thus those are the Neural that the neural cell needs and thus those genes are not expressed in that particular cell So that's the rough Like yeah background in genetics behind all of this terminology so That's that's roughly it. So now that we have this list this list This genotype completely specifies a particular network and then you can kind of mutate these and we'll get to that in a second Okay, so before that let me just uh first mention this problem of competing conventions so Let me tell you what this is and why should we care? You can see two neural networks here on the left and the right And basically these two neural networks have the same function And the reason is you can see here Basically c is the same as c here b's are the same nodes and a's are the same node They're just permuted and because we have a sum function here because we as you know How these neurons work you have a summation here, which is permutation invariant Which means we don't really care about the order of these nodes and because of that the function will be the same and you can Just kind of think about it and you'll see that what I just said makes sense So it doesn't matter like no matter the permutation you'll you'll end up with the same functionality in the neural network Now why this is problematic for genetic algorithms is the following So if you want to cross these two genomes, so let's say we have genome a b and c and we have c b and a If you were to do a point wise mutation So that means you'll take a certain gene from let's say we have a b c and we take Instead of c we take the gene a we end up with a b and a if we had a point mutation That's happening in this genome. So if this one was the dominant genome and we were to mutate a into c by randomly taking the gene from this genome instead We end up with c b c now why this is bad is because as you may notice here We just lost one third of the information in from the from the from the original Genotypes and that's obviously undesirable. 
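To picture the genotype being described here in code, a minimal sketch follows; the class and field names are mine, not taken from any NEAT implementation, and the only values carried over from the example are the enabled 1-to-4 connection with weight 0.7 and the disabled 2-to-4 connection, the rest are illustrative.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NodeGene:
    node_id: int
    node_type: str   # "sensor" (input), "hidden" or "output"


@dataclass
class ConnectionGene:
    in_node: int     # source node id
    out_node: int    # destination node id
    weight: float
    enabled: bool    # a disabled gene stays in the genome but is not expressed in the phenotype
    innovation: int  # historical marking: global id of when this connection first appeared


@dataclass
class Genome:
    nodes: List[NodeGene]
    connections: List[ConnectionGene]


genome = Genome(
    nodes=[NodeGene(1, "sensor"), NodeGene(2, "sensor"), NodeGene(3, "sensor"),
           NodeGene(4, "output"), NodeGene(5, "hidden")],
    connections=[
        ConnectionGene(1, 4, 0.7, True, 1),   # weight 0.7, enabled, as in the example above
        ConnectionGene(2, 4, 0.5, False, 2),  # disabled: present in the genotype, absent from the phenotype
        ConnectionGene(2, 5, 0.2, True, 3),   # illustrative values from here on
        ConnectionGene(5, 4, 0.4, True, 4),
    ],
)
```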
So we want to be able to recognize and match corresponding genes Because we don't want to lose them and that's where these historical markings come into play and we'll get to those in a minute So let me just kind of reiterate when one of these permutations crosses over with another critical information is likely to be lost For example crossing a b c and c b a can result in c b c a representation that has lost one third of the information that both of the parents had Okay, so neat basically solves this problem by doing this historical Markings stuff Okay, let's continue here and let me show you the mutation procedure And then we're going to slowly build up the whole holistic picture of how this need algorithm works So i'm kind of going bottom up which is contrary to my usual approach of going top to bottom But like in this particular paper, I guess that makes some sense There are two types of mutations we can we can imagine here one is Adding like a connection and the other one is adding a node. So Let's focus on the first one adding connection. You can see here that we add this connection between neurons three between nodes Three and five and you can see that we added so starting from this genotype here After we added the connection between three and five with the historical marking seven or innovation number as it's also called So we end up with this novel genome here. We have the other Mutation having a node Um mutation happening here here. Uh, we add the node so you can see here. We add node six which basically Like disables this previous connection and that's two novel connections and that's why we have so between three and four This one will be disabled and you can see that we added from three to six So we have this one and we have from six to four So we added two novel connections and we have the novel innovation numbers here So this is the way in which um basically Neat adds the additional complexity to the current population So again, retracing the top number in each genome is the innovation number of that gene The innovation numbers are historical markers that identify the original historical ancestor of each gene New genes are signed new increasingly higher numbers In case in other instance of a neural network adds the exact same connection. Let's say Uh this connection three to five then They would reuse the this innovation number seven for that particular instance as well So there would not be like a blow up artificial blow up of these of this historical markings and that's why they they work Okay So that's some background now. Let's dig a bit deeper Um, so they say here i'm just going to skip this part because we already covered it So in the future whenever these genomes meet the offspring will inherit the same innovation numbers on each gene Innovation numbers are never changed Okay, they are never changed thus the historical origin of every gene in the system is known throughout evolution Okay So when crossing over the genes in both genomes with the same innovation numbers are lined up So we line them up we're going to see an image in a second and these genes are called matching genes So we have the matching genes genes that do not match are either disjoint or excess Depending on whether they occur within or outside the range of other parents innovation numbers In composing the offspring genes are randomly chosen from either parent and matching genes Whereas all excess or disjoint genes are always included from the more fit parent. 
So that's an important detail here Okay So let's see all of that pictorially here Let's assume we have two different topologies from the same population from the same subpopulation Same species if you want, although they sometimes do interspecies and intra species. They do both intra and interspecies Crossing but here so you can see two genomes and what they do. Let me just kind of zoom in a bit They align these two genomes according to these Like innovation numbers so you can see one one two two three three four four five five All of these are aligned and then you have the disjoint ones because six and seven from this one does not exist In this genotype and then we have eight being disjoint and finally we have these excess genes Because they are out of the range of the parent one Okay So now how the crossing works is you basically randomly take gene from either parent one or parent two Uh when it comes to these matched genes, so you can see here This is kind of gray, which means we took that one from the parent one then this is uh Okay darker gray, which is maybe from either of the parents and this is uh, White-ish which means we took it from here then we took it from here and finally we took it from here So that means we are randomly taking genes from from either parent Finally, we take the the genes from the more fit parent and because in this particular case They kind of have equal fitness both of these they they kind of end up taking all of the genes and we end up with the offspring Genotype here and you can see the corresponding phenotype exactly here Okay So that's cool now We can slowly start and build up the whole picture Let me first read a couple more sentences and then i'm gonna kind of draw a diagram explaining the whole the big picture Okay. So again retreating in this case equal fitnesses are assumed. So the disjoint and excess genes are also inherited randomly so the reason I kind of highlighted this is to show that one of the the cons of this approach is that it's super complicated compared to gradient-based methods such as sgd Where it's quite straightforward and uh, there is not a lot of hyper parameters Whereas here in need there is a lot of things you have to kind of tune the population size how to speciate how to set The thresholds we'll see that in a second But it's there's a much more details to to think about compared to gradient-based optimizations so For example, the disabled genes may become enabled again in future generations So that means some of these may be randomly enabled again Also, there's a preset chance that an inherited gene is disabled if it is disabled in either parent So that's another detail to think about okay Now as I mentioned these mutations will create suboptimal neural networks After the the actual mutation happens and before the optimization of that neural network happens There's going to be a suboptimal architecture So how do we make sure that the innovations survive and the answer is here? 
By adding new genes to the population and sensibly meeting genomes representing different structures The systems can form a population of diverse topologies However, it turns out that such a population on its own cannot maintain topological innovation and that's quite important to maintain Because smaller structures optimize faster than larger structures and adding nodes and connections usually initially decreases the fitness of the network Recently augmented structures have little hope of surviving more than one generation Even though the innovations they represent may be crucial towards solving the task in the long run The solution is to protect innovation by speciating the population as explained in the next section And this is the most important part of the paper Probably, I mean it's one of the most important details. So let's dig a bit deeper So let's see how they define how do they discriminate between how do they separate a special neural a new neural network a new offspring into into a species How do they assign a species to that particular neural network? And we can see it here So the number of excess and disjoint genes between a pair of genomes is a natural measure of their compatibility distance The more disjoint two genomes are the less evolutionary history they share and thus the less compatible they are Therefore we can measure the compatibility distance delta of different structures in need as a simple linear combination of the number of excess and disjoint genes As well as the average weight differences of matching genes including disabled genes So you can see a simpler a very simple way to determine how different how incompatible to or compatible to genotypes are So you can see the more of excess genes we have between two genotypes and or the more disjoint genes we have Or even if we have matching like genes if the actual weights of those connections are quite different than depending on c3 and this variable here this delta can explode So once we have this once we have this way of measuring compatibility we can just they just set a threshold So the distance measure delta allows us to speciate using a compatibility threshold delta t And that's how you basically speciate So continuing on a given genome g in the current generation is placed in the first species in which g is compatible with the representative genome of that species If g is not compatible with any existing species a new species is created with g as its representative Okay now it's a good time to draw some diagram and explain what's going on here So imagine we have multiple subpopulations so let's imagine we have multiple species so here is a species one Then we have some bit bigger species we have another species here and so what happens is the following Now these are the most like these are the fittest individuals from the last generation and I'm going to explain how do we determine fitness and how do we prune based on fitness a bit later But for now just assume all of these subpopulations are left with the fittest individuals Now what you do is you do some basically crossing so you'll take two parents here and you cross those two and you end up with another with the offspring And so this offspring now needs to be compared with representative genomes in order to determine which species it belongs to So even though it originates from this species it now because of these mutations and because of the crossing of the genes from the parents It may end up being a different species altogether which does not make a 
lot of sense when it comes to this like discussing I guess biological creatures At least not the vertebrates but yeah so let me denote that like this so let's let me just do this So basically what I do is you randomly pick a genotype from this population and that's going to be G and you do the same thing here just randomly pick one and then you do the same thing here and for every single subpopulation And now what you do is the following so you take the freshly created offspring here and you calculate the compatibility measure so this thing here So you can basically you'll basically end up with a couple of numbers you'll have some some delta here you'll have and you'll see whether this delta is below the threshold If it's below the threshold that means that we need to assign this particular offspring to this species So what I do they have some canonical ordering of these subpopulations so that means maybe maybe this one's going to be like subpopulation one then two then three And the first subpopulation where the compatibility is lower than the threshold they just kind of allocate they just kind of assign that offspring to that subspecies So for example these two were not compatible if this G the representative genome from this subpopulation was not compatible with the offspring then we would continue with with the representative G from the second subpopulation Which is in this particular case the same species from which this offspring originates from so now if the compatibility was lower than threshold with would kind of assign again the offspring to this subpopulation so that's the rough idea So after this crossing happens I guess what happens next is just some random mutations and then they assert the fitness of all of the individuals of individual genotypes genomes in the whole population Let's see how the fitness is determined so fitness is basically depends on the particular task at hand so depends what you're trying to solve with need so fitness will be very very domain specific basically The important detail is that they use something called explicit fitness sharing which means they do this and that means that so not only do you care about the particular individual You also care how big is the species that that individual belongs to and that's denoted like algebraically by this formula so basically summation from one to through n where n is the whole population size so including all the species And then we have this sh function which is basically set to one when the this compatibility threshold is below below the this compatibility measure is below the threshold which basically means hey this is going to be the size of the species that this individual belongs to So that's a long long story short so because of that even if you have very very fit individuals if the species it belongs to is very big the this adapted I think is adjusted fitness is going to be lower so that means some other less fit individuals from smaller subpopulations have the chance of surviving to the next generation Let me just read you a couple of more sentences here so they say every species is assigned a potentially different number of offspring in proportion to the sum of adjusted fitness is of its member organisms species then reproduce by first eliminating the lowest performing members from the population The entire population is then replaced by the offspring of the remaining organisms in each species okay let me recap what I think is going on here so the bad thing about the paper is that they don't have 
like a like a pseudo algorithm specifying the steps in the exact order so I kind of have to infer what's going on without digging into the actual software packages which are as I said quite complex compared to just your regular back back grading based optimizations so as I said let's let's assume we have these subpopulations here and then we determine the fitness of each of the individuals and then basically we're going to prune the less fit individuals in each of these subpopulations so we'll end up with some subsets of each of these subpopulations okay and now depending on the size of those like subsets each of these species will have a certain number of offsprings they can generate so that means now we'll take the parents we'll do the crossing we'll have an offspring and then we'll just kind of alloc assign it to your species depending on the as I previously discussed on this compatibility measure and we rinse and repeat for all of the subpopulations that same algorithm after we do that I guess what happens is inside of the before you start evaluating them and trying to figure out the fitness of these novel individuals you do some mutations so adding connections adding notes and then only then do you evaluate the fitness and then the circle repeats you prune out the weakest individuals the ones with the lowest fitness value and rinse and repeat and rinse and then you rinse and repeat okay final detail worth mentioning is that so as discussed in section 2.4 blah blah blah so this just means topology and weight evolution artificial neural networks so basically evolving both the topology and the weights and not just the weights as we are used to so these twins if that's the way you pronounce this typically start with an initial population of random topologies in order to introduce diversity from the outset so that's how these other approaches do it in contrast neat biases the search towards minimal dimensional spaces by starting out with a uniform population of networks with zero hidden nodes and we saw that in the beginning of video but I thought worth mentioning that okay so that's pretty much it that's neat they then showed some results on a couple of tasks one of those tasks is this XOR so your your logical gate using all over computer science and they show that they can basically find a topology that can perform the XOR function for that you need to have at least one hidden node and you can see here so basically this is the initial individual of the of the population and then after applying meet they end up with this phenotype with this individual they can solve the XOR problem you can see it had to add some novel nodes you see a novel note here you can see novel connections here and basically the weights which you can see on this on this diagram are also changed so that this performs the XOR function okay so hopefully you like the idea behind the neat paper I think it's very cool because it's inspired by evolution and it has this progressive building of build up of complexity and you're trying to find the topologies you're trying to at the same time tweak those weights and find the simplest individual that can solve the task that you care about now the the con side about this whole approach as you can see it's quite intricate there's a lot of details you need to care about and all of that makes it a bit less attractive and since this paper was published in two thousand two we haven't seen we really haven't seen the push towards this research direction but nonetheless I think it's a 
very valuable like idea to keep in your mind okay next up let me walk you through the compositional pattern producing networks okay so on to compositional pattern producing networks a novel abstraction of development paper and before we dig into CPPN's deeper let me give you another example of a very popular like model of development with which you may be familiar with already and that's the cellular automata models so let me open up this very cool distilled pop and let me just kind of slow this down so that you can see what's going on so you can see here that through local interactions so each of the cell is communicating only with the neighboring cells and via a couple of simple rules and in this case the rules are learned and hence this is called neural cellular automata you can see that this phenotype this body plan this morphology albeit a 2D one is built up and this could easily be generalizable to 3D space and thus to the space where we live in and we could be building like real 3D bodies using cellular automata so this is a model of development as well so you can see you can imagine starting from a zygote from a single cell and then through some type of communication local communication that's very important you end up with this particular morphology so now what CPPN show is that you don't need to have this local communication going on and temporal unfolding and that's just like a consequence of the fact that we live in a physical world with physics where the laws of physics are at play and this constrains the developmental process to do it like this so let me now go back to CPPN and let's see how they actually approach this let's start by reading the abstract a little bit natural DNA can encode complexity on an enormous scale researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings ie encodings that map the genotype to the phenotype through a process of growth from a small starting point to mature form a major challenge in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies okay so the rest of the paper will show how they found a different model how can a Stanley found a different model to for this developmental process so basically the main contribution of the paper is to establish that CPPN are legitimate abstraction of natural developmental encoding which may not be super obvious on the first glance of this paper so let's start here so what like the example here shows that you can have basically this phenotype being generated by this function F instead of having so basically instead of having local interactions are having this temporal unfolding to get the final to the morphology in this case the morphology is just a triangle so this is not like a biological like morphology but nonetheless you can just kind of abstract it away and let's deal with mathematical objects into the space so it's the same thing so F is a developmental model of this of this triangle if you will okay so so basically they say here that a developmental chronology is only one way of producing a particular constellation of particles it follows that encoding and unfolding process may be unnecessary to produce complex phenotypes because a functional description could alternatively have been evolved such description would not necessitate a long unfolding chain of interactions and productions so the reasoning is why go through why 
simulate every single intermediate step with what when what we care about is the actual like phenotype and if you could catch capture the whole development inside of a function why not do just that so they state here that so looking at this diagram here so this observation implies that any phenotype produced through a temporal progression is also possible to represent through a functional description so in fact this paper from I think 1989 show that any function can be approximated by neural network with two hidden layers that's the universal approximation theorem and the more neurons in networks to the wider network the more accurate the approximation can be that's any morphology when viewed as a distribution of particle in space is possible to represent as a function without the notion of time so that's the thing they're trying to kind of skip over here and just kind of understanding that that's a more of a physical constraint and something we need to care about when we're trying to develop mathematical models such as this one okay let's let's see this sentence a significant function and early stage of development in natural embryo Jenny is to define a coordinate frame i.e. a set of virtual coordinate axes upon which future stages of development will be based the simplest and most basic of these coordinates frames are the main axis of the body which are defined at the very beginning of development inside the egg itself these axes include the anterior posterior axis i.e. head to feet and the doors or ventral i.e. back to front so what are what they are basically saying here is the following so so the gist of this is that during the development we have to somehow form this intricate patterns and those are usually just chemical gradients and if you if you if you manage to form those chemical gradients then the cells then each cell depending on where it is located in depend on this and this and this chemical gradient map will know what to do so we'll know what exactly to do depending on the on the on that scalar of the of the chemical gradient so if you are able to capture the pattern of this chemical gradient without even having the local interactions or the temporal unfolding then we can capture the development itself and you can see that because of this left to right gradient which is in the form of a Gaussian function this this if this basically implies that we'll have will end up with a bilateral symmetry and you can see that this fly here has a symmetry bilateral symmetry so if I were to draw a vertical axis through this fly here you can notice that we have a symmetry here can just flip it and nothing would change the underlying structure which is in this case that the fly the to the image of a fly would not change and similarly there is a gradient along this anterior posterior axis and different patterns along along those those along the axis produce different cells so basically you can ignore all of this and you can imagine so you can ignore all of this and you can imagine we have a certain like just draw this so we have a certain to the spatial pattern here so it will be hard to draw but you can imagine going from so around this axis here it's going to be quite symmetric left and right and that's so basically this to the spatial signal is going to directly in code those chemical gradients and that's gonna basically allow us to to to form this this this this morphology here and that means we can ignore we can just focus on trying to generate these intricate to the spatial patterns 
okay so now let's see how do we generate these to the spatial signals which are going to loosely represent these chemical gradients and that's gonna be a blueprint for how to generate the morphology like the body of some biological organism or or whatnot so because we're trying to remember we're trying to build a developmental model for biological organisms so so here's one way we could be composing functions so here we have a symmetric function here we have a periodic function you can see we can generate arbitrary patterns by doing this they mentioned here that coordinate frames created through a developmental process interact with each other to produce complex patterns with regularities in the same way functionally represented frames can be composed to create complex regularities so in this way and this has been mentioned over and over again throughout the paper so the composition the function composition replaces local interactions okay so instead of having some some local cellular automata like communication we can just compose the the the various functions to end up with these to the spatial signals we we care about I mean I'm saying to the but those could be 3d or or even 40 as we'll later see and it's just a pattern we're trying to to to generate so here is the how cppn's look like and they say that the main idea is that the order in which functions are composited is an abstraction for the chronology of events over the course of development without the need for simulating such events locally so in a way signal flowing from from here to here is is an abstraction for the chronology of time so even though we don't have any notion of time in this network per se we are kind of capturing the time and all of these local interactions by doing this functional compositing so starting from X and Y which are just the coordinate coordinates of your of your Cartesian plane we can generate the output the phenotype by by by composing functions and here compared to your regular artificial neural networks what cppn's do is instead of like just using rally or sigmoid for the activation function they use arbitrary like functions which are which have some nice properties so it's either like a Gaussian or some symmetric function or like identity function or a periodic function etc etc so this is what how cppn's look like okay they mentioned here that providing the initial coordinate axis as inputs to the graph is what allows local interaction to be eliminated in physical space there are no intrinsic coordinates that are individual cell can access to determine its location and hence its identity therefore local interaction becomes a way of asking where am i that is through the collective negotiation for Jason cells that interact with each other it is possible to derive a coordinate frame however by composing functions and that take as arguments and absolute frame of reference the need for such negotiation is eliminated and all identities and relative locations can be determined completely independently so basically what they said here is we can kind of feed these absolute coordinates into the cppn to generate this this output pattern and instead you can imagine the difficulty that a cell has like it needs to figure out where it is based on the local communication with the with the neighboring cells and this this this cppn model basically replaces a little bit of cppn model basically replaces eliminates the need for for that type of communication to happen so as I mentioned there is a lot of similarity 
between like neural networks your regular neural networks and cppn's and they say here interesting a graph of such compositions is very similar to an artificial neural network with arbitrary topology the only difference between the two is that artificial neural networks generally use sigmoid functions and sometimes gossians or relius or whatnot as activation functions in each note whereas the function composition graph may use any of a variety of biological functions at each note as I previously mentioned they then say that the analogy between a function composition graph and a neural network is so strong in fact that is stamping to equate the two however while from an external objective standpoint they are closely related so from the mathematical standpoint I guess using the term artificial neural network would be misleading in the context of this discussion because neural networks were so named in order to establish a metaphor with a different biological phenomena and that's the brain and the terminology should avoid making the implication that biological thinking brains are in fact the same as developing embryos this is kind of loose because in one of their follow up papers in one of Kenneth Stanley's follow up papers called a hyper neat he shows that you can basically use cppn's to model neural networks by just modeling for the spatial signals you can map that to neural networks so which means that in the fact this can be a model of a brain as well so these are very abstract objects cppn's are very abstract and they are very abstract and they can capture arbitrary spatial patterns which can be mapped to various things so they can be mapped to like chemical gradients fields which then can be used as a guide for formulating morphology like the body of an organism or it can be just a like a plan for how to construct construct the actual brain the architecture the design of a of a of a neural network okay so let me recap what what we've seen so far it's a bit harder to follow along because there is a lot of text and not that many images so yeah stick with me basically the idea is the following so if you want to produce a certain morphology beat the 2d body like in this case or a 3d body all you want to do is be able to specify arbitrarily complex spatial signals so that means you want to have certain properties like you want to be able for your your system to generate symmetric patterns you want to be able to generate imperfect symmetry you want to be able to basically a model a repetitiveness etc etc and once you have that spatial signal you can imagine you can you can basically treat that as a as an abstraction for actual physical processes such as chemical gradients which is cell uses to communicate and build up the body so if you can get to the spatial signal then you're basically you've solved the task and you've successfully modeled the developmental process and so how we can do that that's what the cpn paper shows is by just composing these arbitrary not arbitrary these special functions such as gaussians symmetric functions periodic functions etc and you can generate those those patterns by doing this we basically skip through the whole local communication temporal unfolding thingies that are going on and we just end up with a final plan with the final map that's eventually going to build up the phenotype okay so now for the fun part let me show you what I've done they basically used and that's why I cut where I covered neat so they basically is need to evolve cpn such that we can 
Okay, now for the fun part: let me show you what they've done. They used NEAT (and that's why I covered NEAT first) to evolve CPPNs so that we can produce very complex patterns with all of the necessary properties, showcasing that this is indeed a good model of development. Let me show you a couple of things. Here are the experiments they've done: various people created these platforms, so on the left you can see the DelphiNEAT-based genetic art platform and on the right the SharpNEAT-based one. What you can see is the following: each of these images is a phenotype, basically a chemical-gradient-like map, created by a CPPN in the background. So each image is a 2D spatial signal which, as I have said multiple times, we can interpret as chemical gradients, and each is produced by a particular CPPN; the same holds on the right. What you do here is use NEAT, with people selecting the parents, to evolve more and more complex spatial signals. Concretely, this particular pattern has some CPPN number one in the background and this one has CPPN number two; a user takes these two, the genes get matched up, we do the crossing and then all the mutations, and we end up with a novel CPPN, which in turn means a novel pattern. Because humans are in the loop, they can select interesting patterns and cross-breed them to create even more complex ones, and if you keep doing that you end up with very cool patterns; let me show you some of those. (A minimal code sketch of this gene-matching-and-crossing step is included a bit further below.)

By the way, just a slight detail: aside from x and y, they additionally input this signal d, which is just the distance from the center of the domain. This could in theory be learned by the CPPN by composing some functions, but providing it makes things a bit easier; it is a shortcut that biases the CPPN towards being radially symmetric, as I think they mention somewhere. However, since d is radially symmetric, it does not automatically provide a bilaterally symmetric coordinate frame. So d only biases the CPPN towards radial symmetry.

Let's see some results. You can see that people managed to generate symmetric patterns such as this one, which is a very important finding because, as we saw with the fly example, once you have a bilaterally symmetric pattern you can form bilaterally symmetric bodies; and this is the corresponding CPPN that generates this particular pattern. They also showed that quite intricate patterns can be evolved. Here the idea is to create a spaceship: again, people were just playing, taking certain parents (certain CPPNs), cross-breeding and mutating them, and we end up with this sequence. This is maybe generation number one, and with each generation you get more and more intricate patterns. What happens is that certain details get elaborated while the overarching pattern, the bilateral symmetry, is preserved throughout the whole sequence, all the way to the end, where this one looks more like a manta ray than a spaceship and this one maybe looks like an airplane. In any case, you can see that we can generate very complex patterns with all of the necessary properties using CPPNs. An important thing is to be able to represent not just perfect symmetry but also imperfect symmetry.
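Here is the gene-matching-and-crossing step referenced above as a hedged sketch in Python. A genome is reduced to a dictionary mapping innovation numbers to connection weights; matching genes are inherited at random from either parent, disjoint and excess genes come from the fitter parent, and a small weight mutation follows. The data structures, the a_is_fitter flag, and the mutation scheme are my own simplifications for illustration, not the API of any particular NEAT implementation.

import random

def crossover(parent_a, parent_b, a_is_fitter=True):
    """parent_a, parent_b: dicts of {innovation_number: connection_weight}."""
    fitter, other = (parent_a, parent_b) if a_is_fitter else (parent_b, parent_a)
    child = {}
    for innov, weight in fitter.items():
        if innov in other:
            # matching gene: inherit randomly from either parent
            child[innov] = random.choice([weight, other[innov]])
        else:
            # disjoint or excess gene: keep it from the fitter parent
            child[innov] = weight
    return child

def mutate_weights(genome, sigma=0.1):
    # perturb each connection weight a little (structural mutations omitted)
    return {i: w + random.gauss(0.0, sigma) for i, w in genome.items()}

child = mutate_weights(crossover({1: 0.7, 2: -0.3, 4: 1.1}, {1: 0.5, 2: 0.2, 3: 0.9}))

In the interactive-art setting described above, the user's selection of parent patterns plays the role of the fitness signal.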
Take the human body as an example: your heart is not perfectly symmetric, it is shifted a bit to the left instead of being centrally positioned, so that is a very good example of imperfect symmetry in our bodies, and we want to be able to represent that in these signals. Here they show that aside from the perfectly symmetric "sunglasses" pattern that somebody produced, we can also get imperfect symmetry, where you can see that this part is of a different size compared to the left part of the signal. That is another important argument for the expressivity of CPPNs.

Repetition is also super important, and so is repetition with variation. Similarly to how it is important to have imperfect symmetry and not just perfect symmetry, we also want some variation in the repetition, not just perfect repetition. Here they show that if you input additional special signals, sin(10x), sin(10y), and d (the distance from the center), you generate very complex patterns. Now you may think this is cheating: why would you input sines instead of just the regular x and y coordinates? The thing is, this is just a convenience, because CPPNs could in theory learn the same thing without any problem; they have sines at their disposal when the CPPN itself is being composed. And you can see amazing patterns popping up here: there is a lot of repetition, obviously, but there is some variation as well. If we zoom into this region we get this one here, and you can see that this kernel is clearly different from that one; a bunch of further images testify that this is the case. Going forward, you can see that in some cases the CPPN learns to ignore those sinusoids and focuses more on the d signal, that is, the distance from the center of the image, and then you end up with very intricate patterns which are much more radially symmetric compared to the previous ones. They also show that this is not a cherry-picked example; a lot of the time the CPPNs end up with signals that have all of these nice properties. A small sketch of the periodic-input trick follows below.
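To make the sin(10x), sin(10y) trick concrete, here is a toy sketch (again my own illustration with arbitrary coefficients, not one of the evolved CPPNs from the paper): the sinusoidal inputs give perfect repetition on their own, and mixing in the non-periodic d input makes each repeated tile slightly different, which is exactly the repetition-with-variation property discussed above.

import numpy as np

xs, ys = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
d = np.sqrt(xs ** 2 + ys ** 2)             # distance from the center of the image
sx, sy = np.sin(10 * xs), np.sin(10 * ys)  # the extra periodic inputs

def toy_cppn(sx, sy, d):
    h = np.tanh(1.5 * sx + 1.5 * sy)  # pure repetition from the sinusoids
    g = np.exp(-(2.0 * d) ** 2)       # smooth, non-periodic radial gradient
    return np.tanh(h * (0.5 + g))     # repetition modulated by position

pattern = toy_cppn(sx, sy, d)         # repeating kernels, each slightly different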
So that is pretty much it. Finally, they state an important sentence: the importance of local interaction to the developmental abstraction is an open question. In short, they show that CPPNs are an interesting alternative to cellular automata and other models of developmental biology.

Now, I did briefly mention that even though they use CPPNs as a developmental model here, you can also use them to model various neural network architectures, so let me show you how that could be done with a simple interpretation of the spatial signals that CPPNs generate. Take this simple CPPN (let me just find the diagram, this one here). Imagine that instead of feeding in the signal d, we feed in x1, y1 and x2, y2, and that we somehow learn the CPPN so that the output scalar has properties similar to what we have seen in the 2D signals: symmetry, repetition, and so on. If we constrain the range of these inputs to be from 0 to 1, this is basically a signal defined on a 4D hypercube. What is interesting is how you can interpret it. Imagine we have a grid (let me just draw the vertical and horizontal lines), and you input the coordinates of two cells, so this is (x1, y1) and this is (x2, y2). You plug in the numbers, say (0, 0) for the first cell and maybe (0, 0.2) for the second, because I have normalized the length of the grid so that this is where 0 is and this is where 1 is, and similarly on the other axis. You get some number as the output, and you can interpret that number as the weight of the connection between those two cells. That means you can now build neural networks that have symmetric connections, repetitive connections, and so on, just by finding appropriate CPPNs. That is the whole idea behind the HyperNEAT paper, the follow-up work that came after this one. So again, let's say the output is 0.7 for these particular coordinates; then this connection gets weight 0.7. Then you just do an exhaustive search, inputting all the possible combinations of cell pairs, and you set a certain threshold: if a weight is below, I don't know, maybe 0.3, there is no connection there; otherwise you have a connection with the given weight. That is how you form a neural network using a CPPN (a small sketch of this querying-and-thresholding procedure is included right after the outro). Why this is very cool is that you can use a much smaller model: the CPPN may have hundreds or thousands of nodes, yet you may end up with millions or even billions of connections in your neural network. It is basically a low-dimensional representation, the gist of your phenotype, which in this particular case is a neural network.

Cool. Hopefully you found some interesting ideas in these two papers. If you did, consider subscribing, share the video, and join the Discord community as well. Until next time, bye bye!
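As referenced above, here is a hedged sketch of that HyperNEAT-style substrate construction in Python: every pair of nodes on a small 2D grid is used to query a CPPN over (x1, y1, x2, y2), the output becomes the connection weight, and outputs below the 0.3 threshold from the discussion are pruned. The cppn function here is a hand-written stand-in (in HyperNEAT it would itself be evolved with NEAT), and the grid size is an arbitrary choice for illustration.

import numpy as np

def cppn(x1, y1, x2, y2):
    # Placeholder 4D pattern: depends only on the distance between the two
    # endpoints, so the resulting connectivity is symmetric and repetitive.
    dist = np.hypot(x2 - x1, y2 - y1)
    return np.cos(4 * np.pi * dist) * np.exp(-dist)

grid = [(x, y) for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)]
threshold = 0.3
weights = {}
for i, (x1, y1) in enumerate(grid):
    for j, (x2, y2) in enumerate(grid):
        w = cppn(x1, y1, x2, y2)
        if abs(w) >= threshold:    # below the threshold -> no connection
            weights[(i, j)] = w    # otherwise connect with the queried weight

# 'weights' now defines a network over the 5x5 substrate; a much larger grid
# would give millions of connections encoded by the same small CPPN.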
[{"start": 0.0, "end": 6.76, "text": " What's cracking guys in this video I'll be covering two very interesting older seminal papers"}, {"start": 7.04, "end": 14.36, "text": " one of them being this one that introduces a method called new revolution of augmented topologies or"}, {"start": 15.040000000000001, "end": 18.740000000000002, "text": " Neat for short so you may know it by the by the acronym neat"}, {"start": 18.740000000000002, "end": 23.400000000000002, "text": " And if you ever watched some of those neural network learns to play a game"}, {"start": 23.92, "end": 27.76, "text": " Types of videos and YouTube such as these two are very very popular"}, {"start": 27.76, "end": 30.200000000000003, "text": " So the fun fact is both of these"}, {"start": 31.240000000000002, "end": 36.68, "text": " videos used the neat algorithm that was introduced in this paper to"}, {"start": 37.44, "end": 43.56, "text": " Train a neural network to play the snake game or to play the Super Mario in this video here"}, {"start": 44.28, "end": 47.480000000000004, "text": " the second video I'm gonna the second paper I'm gonna cover is the"}, {"start": 48.400000000000006, "end": 51.120000000000005, "text": " compositional pattern producing networks or"}, {"start": 51.12, "end": 58.239999999999995, "text": " Cpp ends for short and the reason I'm so excited about this paper and that I've discovered it quite recently is that"}, {"start": 58.76, "end": 60.76, "text": " Well, I've been personally thinking about"}, {"start": 61.559999999999995, "end": 67.08, "text": " Like approaching machine learning from various different perspectives one of those being genetics"}, {"start": 67.64, "end": 71.72, "text": " The other one being neuroscience and as you may know like there's a lot of people"}, {"start": 72.32, "end": 75.68, "text": " Like doing genetics as their as their primary job"}, {"start": 75.68, "end": 82.52000000000001, "text": " And maybe then additionally learning machine learning or deep learning and infusing those two fields together and that's super exciting"}, {"start": 82.64, "end": 85.12, "text": " but like if genetics is step one and"}, {"start": 86.16000000000001, "end": 93.24000000000001, "text": " Neuroscience ie adult human brains are like step three then like for whatever reason we are missing out on this whole"}, {"start": 93.56, "end": 98.60000000000001, "text": " Step in between where after the fertilization after you have the sperm cell"}, {"start": 99.44000000000001, "end": 104.16000000000001, "text": " Fertilizing the XL and you get the zygote. So then this whole process of development"}, {"start": 104.16, "end": 111.2, "text": " development whereby from this single cell called zygote you end up with a like a adult I mean baby"}, {"start": 111.72, "end": 115.75999999999999, "text": " brain baby human brain we kind of don't know what's going on there and"}, {"start": 116.39999999999999, "end": 120.32, "text": " In my opinion just understanding that whole procedure"}, {"start": 121.24, "end": 127.28, "text": " Can help us design better architectures. So that's at least my intuition. 
So basically instead of"}, {"start": 128.0, "end": 129.8, "text": " crafting neural architectures"}, {"start": 129.8, "end": 134.4, "text": " why not understand the actual process a biological process that leads to"}, {"start": 135.0, "end": 141.8, "text": " To like such a complex architectures as the human brain and then train that that architecture"}, {"start": 141.8, "end": 145.12, "text": " So that's the rough idea the CPPM paper"}, {"start": 145.64000000000001, "end": 149.32000000000002, "text": " Is one of the models of this development process?"}, {"start": 149.32000000000002, "end": 155.48000000000002, "text": " So covering this this number two here on this drawing and I'll be covering it after need"}, {"start": 155.48, "end": 162.84, "text": " So before that let's just dig into neat and understand how this like optimization method works. Okay"}, {"start": 163.51999999999998, "end": 165.12, "text": " So let's start here"}, {"start": 165.12, "end": 172.95999999999998, "text": " The goal of fixed topology neural evolution is to optimize the connection weights that determine the functionality of a network"}, {"start": 172.95999999999998, "end": 179.28, "text": " And that's something we are we are very familiar with we usually have a fixed architecture of this fixed topology and we try and"}, {"start": 179.51999999999998, "end": 183.85999999999999, "text": " We try and tune the weights now they see here the basic question"}, {"start": 183.86, "end": 192.34, "text": " However remains can evolving topologies along with weights provide an advantage over evolving weights on a fixed topology"}, {"start": 192.48000000000002, "end": 194.84, "text": " So again, let me just kind of reiterate here"}, {"start": 195.72000000000003, "end": 199.12, "text": " Usually what you end up with in machine learning is at least in deep learning"}, {"start": 199.16000000000003, "end": 206.28000000000003, "text": " So if you have a neural network, let me just kind of draw for simplicity sake a simple NLP model a single hidden layer"}, {"start": 206.44000000000003, "end": 209.8, "text": " So it's gonna be connected something like this. So you have"}, {"start": 209.8, "end": 213.36, "text": " Connection going from input neurons to output neurons"}, {"start": 214.28, "end": 216.28, "text": " something like this and"}, {"start": 216.44, "end": 220.84, "text": " Now what you usually end up doing is you have depending on your task"}, {"start": 220.84, "end": 225.68, "text": " You define a loss function and then you apply some gradient based optimization method"}, {"start": 225.68, "end": 231.96, "text": " Usually Adam is very popular or just a plain old SGD, etc. Etc. So what this"}, {"start": 232.64000000000001, "end": 234.8, "text": " like paper instead proposes is"}, {"start": 234.8, "end": 239.20000000000002, "text": " Why not learn I mean learn when I discover?"}, {"start": 239.84, "end": 241.60000000000002, "text": " architectures and then"}, {"start": 241.60000000000002, "end": 244.96, "text": " Tune them along the way as well. 
So instead"}, {"start": 245.60000000000002, "end": 253.44, "text": " What it does is it starts with the simplest possible neural networks where depending on the on the task up obviously you have for example"}, {"start": 253.44, "end": 258.6, "text": " Let's say we have such a task where the input is a scalar and the output is a scalar in that case"}, {"start": 258.6, "end": 263.28000000000003, "text": " It will start with a simple neural networks as this one where you have a just a single"}, {"start": 263.28, "end": 267.67999999999995, "text": " Connection directly from the input output. There is some weight here and that's that's it"}, {"start": 267.67999999999995, "end": 273.71999999999997, "text": " very simple and then what they do is basically they have a population of these neural networks and"}, {"start": 274.35999999999996, "end": 280.47999999999996, "text": " They'll basically be evolving them. They'll be mutating them. They will be discovering new architectures tuning the parameters and"}, {"start": 282.15999999999997, "end": 287.0, "text": " By doing that solving the task, so it's a very rough idea behind needs now"}, {"start": 287.0, "end": 292.58, "text": " Let's let's dig into a bit more details. So first let's let's kind of open it up with some questions"}, {"start": 292.58, "end": 299.09999999999997, "text": " They say here so evolving structure incrementally presents several technical challenges first"}, {"start": 299.09999999999997, "end": 302.74, "text": " Is there a genetic representation that allows disparate?"}, {"start": 303.18, "end": 310.62, "text": " Topologies to cross over in a meaningful way because you can imagine if you have very different topologies like for example"}, {"start": 310.78, "end": 316.59999999999997, "text": " Let's take this one and let's take this super simple one. It's not quite clear. How do you go about?"}, {"start": 317.38, "end": 321.06, "text": " doing cross mating between these two architectures, how do you"}, {"start": 321.06, "end": 326.78000000000003, "text": " Mix up the genes behind these architectures and if the terminology is not clear enough now"}, {"start": 326.78000000000003, "end": 329.18, "text": " I'll get to that in a second, but like"}, {"start": 329.9, "end": 333.26, "text": " Just have some rough mental map for now"}, {"start": 333.78000000000003, "end": 340.78, "text": " The second question is how can topological innovation that needs a few generations to be optimized be protected?"}, {"start": 340.78, "end": 343.78, "text": " So that it does not disappear from the population"}, {"start": 344.54, "end": 346.06, "text": " prematurely"}, {"start": 346.06, "end": 350.26, "text": " So what they say here is the following so imagine you have a"}, {"start": 350.26, "end": 355.94, "text": " Imagine you have again a set of population of neural networks and I'm gonna just know them by these"}, {"start": 356.5, "end": 361.34, "text": " simple rectangles just for abstraction sake for for simplicity sake, so"}, {"start": 362.21999999999997, "end": 367.62, "text": " Let's say we have like three neural networks and let's say you mutate one of these so these are just MLPs"}, {"start": 367.62, "end": 375.14, "text": " So this is like neural network one two three and let's say you mutate this one by adding additional node here. 
Okay, and"}, {"start": 376.3, "end": 379.14, "text": " Now what will happen is if you were to continue"}, {"start": 379.14, "end": 381.14, "text": " comparing this"}, {"start": 381.7, "end": 385.58, "text": " novel like this mutated neural network with the"}, {"start": 386.58, "end": 390.78, "text": " Networks two and three it would be a suboptimal probably because initially"}, {"start": 390.78, "end": 396.38, "text": " This is just randomly initialized or something and then it cannot compete and so what they do how needs solves"}, {"start": 396.38, "end": 403.02, "text": " This is they basically do this speciation process whereby they group similar networks together"}, {"start": 403.02, "end": 409.78, "text": " And then they form niches and then these networks compare between each other, but they don't compare with other"}, {"start": 410.41999999999996, "end": 413.41999999999996, "text": " Global like arbitrary network from from the whole population"}, {"start": 413.41999999999996, "end": 418.7, "text": " So that will be the the solution that need introduces to cope with this problem. So basically speciation"}, {"start": 419.53999999999996, "end": 427.06, "text": " The third question is how can topologies be minimized throughout evolution without the need for a specially contrived fitness function?"}, {"start": 427.38, "end": 429.38, "text": " that measures complexity and"}, {"start": 429.38, "end": 432.74, "text": " I kind of hinted at the solution already here"}, {"start": 432.74, "end": 438.86, "text": " and the solution is to start with the simplest networks possible and then gradually keep adding the complexity and"}, {"start": 439.34, "end": 444.82, "text": " Because they the optimization procedure of need has this this bias this this inherent bias"}, {"start": 445.14, "end": 448.34, "text": " That's why it's going to be looking for simpler architectures"}, {"start": 448.9, "end": 454.9, "text": " In the first place and this aligns very neatly with the with the principle of the OCam's razor"}, {"start": 454.9, "end": 458.94, "text": " Whereby the the simpler the simplest hypothesis the simplest model"}, {"start": 459.46, "end": 461.26, "text": " that performs"}, {"start": 461.26, "end": 463.85999999999996, "text": " Like good enough is probably the best one"}, {"start": 464.85999999999996, "end": 466.29999999999995, "text": " Okay"}, {"start": 466.29999999999995, "end": 470.7, "text": " And the first question will be solved by these historical markings"}, {"start": 470.7, "end": 473.94, "text": " But we're gonna get at that point in a couple of minutes"}, {"start": 474.26, "end": 474.73999999999995, "text": " Okay"}, {"start": 474.73999999999995, "end": 480.38, "text": " Let me first show you how they represent the networks in the background because if you're not familiar with genetic algorithms"}, {"start": 480.38, "end": 485.1, "text": " Did this all all this may be a bit like weird?"}, {"start": 485.1, "end": 487.9, "text": " And so because of that, let me just start here"}, {"start": 487.9, "end": 493.3, "text": " So what they do is for every neural network in the population of neural networks"}, {"start": 493.3, "end": 495.9, "text": " They keep up they keep this genome in the background"}, {"start": 495.9, "end": 502.86, "text": " So also known as genotype and each genotype corresponds one to two one to one to the to the phenotype"}, {"start": 502.86, "end": 508.46, "text": " Corresponding phenotype. 
So this genotype basically determines this this phenotype this neural network here"}, {"start": 508.46, "end": 510.46, "text": " And this is just a terminology from genetics"}, {"start": 511.09999999999997, "end": 517.34, "text": " So phenotype basically means any observable trait for example in humans that may be like the eye color"}, {"start": 517.34, "end": 525.1, "text": " That means like maybe like blood type whatnot and there is some gene that's going up for that particular phenotype for that observable trait"}, {"start": 525.1, "end": 530.62, "text": " So that's the terminology they are taking here because this is this whole optimization field is roughly"}, {"start": 531.34, "end": 533.74, "text": " loosely inspired by the genetics and and"}, {"start": 534.38, "end": 536.38, "text": " Thus the terminology"}, {"start": 536.38, "end": 543.42, "text": " Okay, so you can see here they have two different lists of of uh, like genes versus the node genes"}, {"start": 543.42, "end": 548.3, "text": " So we have node one through node five corresponding to these nodes here one through five"}, {"start": 548.3, "end": 550.3, "text": " And you can see there are different types of nodes"}, {"start": 550.3, "end": 553.74, "text": " So some of these are sensor nodes, which means input nodes"}, {"start": 553.74, "end": 559.26, "text": " Some of these are output nodes like node four here and then we have the hidden nodes the node five here"}, {"start": 559.26, "end": 567.18, "text": " More interestingly, they have these connections genes and you can see how each of these squares basically describes these connections"}, {"start": 567.18, "end": 573.58, "text": " So let's take this one for example, so you can see here. We have one. Let me just change the color to red here"}, {"start": 573.58, "end": 581.18, "text": " So we have one as the input and then four as the output so that basically specifies this particular connection here"}, {"start": 581.9, "end": 585.5, "text": " You can see the weight is 0.7. 
You can see that it's enabled"}, {"start": 585.5, "end": 591.66, "text": " I'm going to tell you what that is in a minute and we have this innovation number also known as this historical marker"}, {"start": 591.66, "end": 597.66, "text": " Which is a very important part of this whole algorithm and i'll get into a bit more details a bit later"}, {"start": 598.22, "end": 602.3, "text": " Okay, so this enabled thing so let me contrast that with disabled"}, {"start": 602.86, "end": 606.46, "text": " So we have this connection between two and four and if you take a look"}, {"start": 606.46, "end": 612.38, "text": " There is no connection between two or four because it's disabled and again this terminology comes from the genetics"}, {"start": 612.38, "end": 619.02, "text": " Genes can be expressed or not expressed if a gene is not expressed then basically you don't have the associated phenotype"}, {"start": 619.42, "end": 621.42, "text": " so for example"}, {"start": 621.42, "end": 625.98, "text": " You don't want to so for example if you have you have obviously different cell types in your body"}, {"start": 625.98, "end": 633.18, "text": " You have like skin cells you have like muscle cells you have neurons or neural cells and obviously a muscle cell"}, {"start": 633.5, "end": 639.5, "text": " Doesn't need a certain proteins that the neural that the neural cell needs and thus those are the"}, {"start": 639.5, "end": 644.7, "text": " Neural that the neural cell needs and thus those genes are not expressed in that particular cell"}, {"start": 645.18, "end": 647.18, "text": " So that's the rough"}, {"start": 647.18, "end": 649.5, "text": " Like yeah background in genetics"}, {"start": 650.14, "end": 652.14, "text": " behind all of this terminology"}, {"start": 652.54, "end": 653.5, "text": " so"}, {"start": 653.5, "end": 656.62, "text": " That's that's roughly it. 
So now that we have this list this list"}, {"start": 657.5, "end": 664.62, "text": " This genotype completely specifies a particular network and then you can kind of mutate these and we'll get to that in a second"}, {"start": 665.9, "end": 667.9, "text": " Okay, so before that let me just"}, {"start": 667.9, "end": 668.9399999999999, "text": " uh"}, {"start": 668.9399999999999, "end": 673.18, "text": " first mention this problem of competing conventions"}, {"start": 674.14, "end": 675.1, "text": " so"}, {"start": 675.1, "end": 677.66, "text": " Let me tell you what this is and why should we care?"}, {"start": 678.9399999999999, "end": 681.74, "text": " You can see two neural networks here on the left and the right"}, {"start": 682.3, "end": 685.5, "text": " And basically these two neural networks have the same function"}, {"start": 686.22, "end": 688.22, "text": " And the reason is you can see here"}, {"start": 688.22, "end": 693.5799999999999, "text": " Basically c is the same as c here b's are the same nodes and a's are the same node"}, {"start": 693.58, "end": 698.5400000000001, "text": " They're just permuted and because we have a sum function here because we as you know"}, {"start": 698.5400000000001, "end": 702.38, "text": " How these neurons work you have a summation here, which is permutation invariant"}, {"start": 702.7, "end": 708.86, "text": " Which means we don't really care about the order of these nodes and because of that the function will be the same and you can"}, {"start": 709.1800000000001, "end": 712.94, "text": " Just kind of think about it and you'll see that what I just said makes sense"}, {"start": 713.1800000000001, "end": 718.3000000000001, "text": " So it doesn't matter like no matter the permutation you'll you'll end up with the same functionality in the neural network"}, {"start": 719.0200000000001, "end": 722.7, "text": " Now why this is problematic for genetic algorithms is the following"}, {"start": 722.7, "end": 729.26, "text": " So if you want to cross these two genomes, so let's say we have genome a b and c and we have c b and a"}, {"start": 729.98, "end": 732.1400000000001, "text": " If you were to do a point wise mutation"}, {"start": 732.22, "end": 737.1, "text": " So that means you'll take a certain gene from let's say we have a b c and we take"}, {"start": 737.74, "end": 742.86, "text": " Instead of c we take the gene a we end up with a b and a if we had a point mutation"}, {"start": 742.86, "end": 749.26, "text": " That's happening in this genome. So if this one was the dominant genome and we were to mutate a into c"}, {"start": 749.26, "end": 751.98, "text": " by randomly taking the gene from this genome instead"}, {"start": 752.3, "end": 757.18, "text": " We end up with c b c now why this is bad is because as you may notice here"}, {"start": 757.18, "end": 761.9, "text": " We just lost one third of the information in from the from the from the original"}, {"start": 762.38, "end": 769.26, "text": " Genotypes and that's obviously undesirable. 
So we want to be able to recognize and match"}, {"start": 769.74, "end": 771.26, "text": " corresponding genes"}, {"start": 771.26, "end": 777.8199999999999, "text": " Because we don't want to lose them and that's where these historical markings come into play and we'll get to those in a minute"}, {"start": 777.82, "end": 785.4200000000001, "text": " So let me just kind of reiterate when one of these permutations crosses over with another critical information is likely to be lost"}, {"start": 785.4200000000001, "end": 793.82, "text": " For example crossing a b c and c b a can result in c b c a representation that has lost one third of the information that both of the parents had"}, {"start": 793.82, "end": 799.2600000000001, "text": " Okay, so neat basically solves this problem by doing this historical"}, {"start": 801.0200000000001, "end": 803.0200000000001, "text": " Markings stuff"}, {"start": 803.02, "end": 807.1, "text": " Okay, let's continue here and let me show you the mutation procedure"}, {"start": 807.1, "end": 811.42, "text": " And then we're going to slowly build up the whole holistic picture of how this need algorithm works"}, {"start": 811.9, "end": 817.1, "text": " So i'm kind of going bottom up which is contrary to my usual approach of going top to bottom"}, {"start": 817.42, "end": 820.3, "text": " But like in this particular paper, I guess that makes some sense"}, {"start": 821.26, "end": 824.62, "text": " There are two types of mutations we can we can imagine here one is"}, {"start": 825.26, "end": 828.86, "text": " Adding like a connection and the other one is adding a node. So"}, {"start": 828.86, "end": 835.82, "text": " Let's focus on the first one adding connection. You can see here that we add this connection between neurons three between nodes"}, {"start": 836.46, "end": 841.66, "text": " Three and five and you can see that we added so starting from this genotype here"}, {"start": 842.3000000000001, "end": 849.1800000000001, "text": " After we added the connection between three and five with the historical marking seven or innovation number as it's also called"}, {"start": 849.5, "end": 853.34, "text": " So we end up with this novel genome here. We have the other"}, {"start": 854.78, "end": 856.78, "text": " Mutation having a node"}, {"start": 856.78, "end": 864.14, "text": " Um mutation happening here here. Uh, we add the node so you can see here. 
We add node six which basically"}, {"start": 865.8199999999999, "end": 871.5799999999999, "text": " Like disables this previous connection and that's two novel connections and that's why we have so between three and four"}, {"start": 871.8199999999999, "end": 875.74, "text": " This one will be disabled and you can see that we added from three to six"}, {"start": 875.8199999999999, "end": 878.6999999999999, "text": " So we have this one and we have from six to four"}, {"start": 878.9399999999999, "end": 883.5799999999999, "text": " So we added two novel connections and we have the novel innovation numbers here"}, {"start": 883.58, "end": 886.62, "text": " So this is the way in which um"}, {"start": 887.4200000000001, "end": 888.7800000000001, "text": " basically"}, {"start": 888.7800000000001, "end": 892.86, "text": " Neat adds the additional complexity to the current population"}, {"start": 893.82, "end": 897.58, "text": " So again, retracing the top number in each genome is the innovation number of that gene"}, {"start": 897.98, "end": 904.7, "text": " The innovation numbers are historical markers that identify the original historical ancestor of each gene"}, {"start": 905.1800000000001, "end": 908.46, "text": " New genes are signed new increasingly higher numbers"}, {"start": 908.46, "end": 914.3000000000001, "text": " In case in other instance of a neural network adds the exact same connection. Let's say"}, {"start": 914.7, "end": 916.7, "text": " Uh this connection three to five"}, {"start": 917.1800000000001, "end": 918.0600000000001, "text": " then"}, {"start": 918.0600000000001, "end": 923.82, "text": " They would reuse the this innovation number seven for that particular instance as well"}, {"start": 923.98, "end": 930.3000000000001, "text": " So there would not be like a blow up artificial blow up of these of this historical markings and that's why they they work"}, {"start": 931.34, "end": 932.46, "text": " Okay"}, {"start": 932.46, "end": 936.38, "text": " So that's some background now. Let's dig a bit deeper"}, {"start": 936.38, "end": 940.3, "text": " Um, so they say here i'm just going to skip this part because we already covered it"}, {"start": 940.46, "end": 947.66, "text": " So in the future whenever these genomes meet the offspring will inherit the same innovation numbers on each gene"}, {"start": 948.12, "end": 950.12, "text": " Innovation numbers are never changed"}, {"start": 950.54, "end": 956.3, "text": " Okay, they are never changed thus the historical origin of every gene in the system is known throughout evolution"}, {"start": 956.38, "end": 957.02, "text": " Okay"}, {"start": 957.02, "end": 962.38, "text": " So when crossing over the genes in both genomes with the same innovation numbers are lined up"}, {"start": 962.38, "end": 967.58, "text": " So we line them up we're going to see an image in a second and these genes are called matching genes"}, {"start": 967.58, "end": 972.38, "text": " So we have the matching genes genes that do not match are either disjoint or excess"}, {"start": 973.02, "end": 978.78, "text": " Depending on whether they occur within or outside the range of other parents innovation numbers"}, {"start": 979.34, "end": 983.9, "text": " In composing the offspring genes are randomly chosen from either parent and matching genes"}, {"start": 984.22, "end": 991.5, "text": " Whereas all excess or disjoint genes are always included from the more fit parent. 
So that's an important detail here"}, {"start": 991.5, "end": 992.94, "text": " Okay"}, {"start": 992.94, "end": 995.02, "text": " So let's see all of that pictorially here"}, {"start": 995.9, "end": 1001.26, "text": " Let's assume we have two different topologies from the same population from the same subpopulation"}, {"start": 1001.74, "end": 1009.18, "text": " Same species if you want, although they sometimes do interspecies and intra species. They do both intra and interspecies"}, {"start": 1010.3, "end": 1015.74, "text": " Crossing but here so you can see two genomes and what they do. Let me just kind of zoom in a bit"}, {"start": 1016.14, "end": 1018.14, "text": " They align these two genomes"}, {"start": 1018.68, "end": 1020.38, "text": " according to these"}, {"start": 1020.38, "end": 1025.26, "text": " Like innovation numbers so you can see one one two two three three four four five five"}, {"start": 1025.34, "end": 1030.78, "text": " All of these are aligned and then you have the disjoint ones because six and seven from this one does not exist"}, {"start": 1031.18, "end": 1036.06, "text": " In this genotype and then we have eight being disjoint and finally we have these excess genes"}, {"start": 1036.3, "end": 1038.94, "text": " Because they are out of the range of the parent one"}, {"start": 1039.5, "end": 1040.7, "text": " Okay"}, {"start": 1040.7, "end": 1047.74, "text": " So now how the crossing works is you basically randomly take gene from either parent one or parent two"}, {"start": 1047.74, "end": 1050.86, "text": " Uh when it comes to these matched genes, so you can see here"}, {"start": 1050.94, "end": 1055.58, "text": " This is kind of gray, which means we took that one from the parent one then this is uh"}, {"start": 1056.3, "end": 1060.7, "text": " Okay darker gray, which is maybe from either of the parents and this is uh,"}, {"start": 1061.1, "end": 1066.14, "text": " White-ish which means we took it from here then we took it from here and finally we took it from here"}, {"start": 1066.3, "end": 1069.74, "text": " So that means we are randomly taking genes from from either parent"}, {"start": 1070.6200000000001, "end": 1075.02, "text": " Finally, we take the the genes from the more fit parent and because in this particular case"}, {"start": 1075.02, "end": 1081.34, "text": " They kind of have equal fitness both of these they they kind of end up taking all of the genes and we end up with the offspring"}, {"start": 1081.8799999999999, "end": 1084.24, "text": " Genotype here and you can see the corresponding"}, {"start": 1085.32, "end": 1087.32, "text": " phenotype exactly here"}, {"start": 1087.98, "end": 1089.34, "text": " Okay"}, {"start": 1089.34, "end": 1091.34, "text": " So that's cool now"}, {"start": 1091.42, "end": 1094.94, "text": " We can slowly start and build up the whole picture"}, {"start": 1095.98, "end": 1102.06, "text": " Let me first read a couple more sentences and then i'm gonna kind of draw a diagram explaining the whole the big picture"}, {"start": 1102.06, "end": 1109.34, "text": " Okay. So again retreating in this case equal fitnesses are assumed. 
So the disjoint and excess genes are also inherited randomly"}, {"start": 1110.46, "end": 1113.1799999999998, "text": " so the reason I kind of highlighted this is to show that"}, {"start": 1113.82, "end": 1120.3, "text": " one of the the cons of this approach is that it's super complicated compared to gradient-based methods such as sgd"}, {"start": 1120.78, "end": 1124.94, "text": " Where it's quite straightforward and uh, there is not a lot of hyper parameters"}, {"start": 1125.1799999999998, "end": 1131.26, "text": " Whereas here in need there is a lot of things you have to kind of tune the population size how to speciate how to set"}, {"start": 1131.26, "end": 1132.94, "text": " The thresholds we'll see that in a second"}, {"start": 1132.94, "end": 1138.3, "text": " But it's there's a much more details to to think about compared to gradient-based optimizations"}, {"start": 1139.02, "end": 1140.22, "text": " so"}, {"start": 1140.22, "end": 1144.22, "text": " For example, the disabled genes may become enabled again in future generations"}, {"start": 1144.3, "end": 1147.34, "text": " So that means some of these may be randomly enabled again"}, {"start": 1148.06, "end": 1154.46, "text": " Also, there's a preset chance that an inherited gene is disabled if it is disabled in either parent"}, {"start": 1155.02, "end": 1157.34, "text": " So that's another detail to think about okay"}, {"start": 1157.34, "end": 1160.62, "text": " Now as I mentioned these mutations will create suboptimal"}, {"start": 1161.1799999999998, "end": 1163.1799999999998, "text": " neural networks"}, {"start": 1163.1799999999998, "end": 1167.98, "text": " After the the actual mutation happens and before the optimization of that neural network happens"}, {"start": 1167.98, "end": 1169.98, "text": " There's going to be a suboptimal architecture"}, {"start": 1169.98, "end": 1174.86, "text": " So how do we make sure that the innovations survive and the answer is here?"}, {"start": 1175.6599999999999, "end": 1181.02, "text": " By adding new genes to the population and sensibly meeting genomes representing different structures"}, {"start": 1181.02, "end": 1183.98, "text": " The systems can form a population of diverse topologies"}, {"start": 1183.98, "end": 1190.94, "text": " However, it turns out that such a population on its own cannot maintain topological innovation and that's quite important"}, {"start": 1191.82, "end": 1193.34, "text": " to maintain"}, {"start": 1193.34, "end": 1201.1, "text": " Because smaller structures optimize faster than larger structures and adding nodes and connections usually initially decreases the fitness of the network"}, {"start": 1201.58, "end": 1205.42, "text": " Recently augmented structures have little hope of surviving more than one generation"}, {"start": 1205.42, "end": 1211.18, "text": " Even though the innovations they represent may be crucial towards solving the task in the long run"}, {"start": 1211.18, "end": 1216.54, "text": " The solution is to protect innovation by speciating the population as explained in the next section"}, {"start": 1216.54, "end": 1218.54, "text": " And this is the most important part of the paper"}, {"start": 1219.26, "end": 1223.98, "text": " Probably, I mean it's one of the most important details. 
So let's dig a bit deeper"}, {"start": 1223.98, "end": 1234.54, "text": " So let's see how they define how do they discriminate between how do they separate a special neural a new neural network a new offspring into into a species"}, {"start": 1234.54, "end": 1237.74, "text": " How do they assign a species to that particular neural network?"}, {"start": 1237.74, "end": 1238.8600000000001, "text": " And we can see it here"}, {"start": 1238.86, "end": 1246.3799999999999, "text": " So the number of excess and disjoint genes between a pair of genomes is a natural measure of their compatibility distance"}, {"start": 1246.3799999999999, "end": 1252.78, "text": " The more disjoint two genomes are the less evolutionary history they share and thus the less compatible they are"}, {"start": 1253.26, "end": 1262.2199999999998, "text": " Therefore we can measure the compatibility distance delta of different structures in need as a simple linear combination of the number of excess and disjoint genes"}, {"start": 1262.22, "end": 1269.02, "text": " As well as the average weight differences of matching genes including disabled genes"}, {"start": 1269.02, "end": 1279.58, "text": " So you can see a simpler a very simple way to determine how different how incompatible to or compatible to genotypes are"}, {"start": 1279.58, "end": 1288.78, "text": " So you can see the more of excess genes we have between two genotypes and or the more disjoint genes we have"}, {"start": 1288.78, "end": 1303.66, "text": " Or even if we have matching like genes if the actual weights of those connections are quite different than depending on c3 and this variable here this delta can explode"}, {"start": 1303.66, "end": 1311.8999999999999, "text": " So once we have this once we have this way of measuring compatibility we can just they just set a threshold"}, {"start": 1311.8999999999999, "end": 1317.5, "text": " So the distance measure delta allows us to speciate using a compatibility threshold delta t"}, {"start": 1317.5, "end": 1320.86, "text": " And that's how you basically speciate"}, {"start": 1320.86, "end": 1332.06, "text": " So continuing on a given genome g in the current generation is placed in the first species in which g is compatible with the representative genome of that species"}, {"start": 1332.06, "end": 1339.26, "text": " If g is not compatible with any existing species a new species is created with g as its representative"}, {"start": 1339.26, "end": 1345.02, "text": " Okay now it's a good time to draw some diagram and explain what's going on here"}, {"start": 1345.02, "end": 1352.94, "text": " So imagine we have multiple subpopulations so let's imagine we have multiple species so here is a species one"}, {"start": 1352.94, "end": 1359.34, "text": " Then we have some bit bigger species we have another species here and so what happens is the following"}, {"start": 1359.34, "end": 1370.06, "text": " Now these are the most like these are the fittest individuals from the last generation and I'm going to explain how do we determine fitness and how do we prune based on fitness a bit later"}, {"start": 1370.06, "end": 1375.74, "text": " But for now just assume all of these subpopulations are left with the fittest individuals"}, {"start": 1375.74, "end": 1388.06, "text": " Now what you do is you do some basically crossing so you'll take two parents here and you cross those two and you end up with another with the offspring"}, {"start": 1388.06, "end": 1397.82, "text": " And so this offspring now needs to be compared with 
representative genomes in order to determine which species it belongs to"}, {"start": 1397.82, "end": 1405.4199999999998, "text": " So even though it originates from this species it now because of these mutations and because of the crossing of the genes from the parents"}, {"start": 1405.4199999999998, "end": 1414.1399999999999, "text": " It may end up being a different species altogether which does not make a lot of sense when it comes to this like discussing I guess biological creatures"}, {"start": 1414.1399999999999, "end": 1422.1399999999999, "text": " At least not the vertebrates but yeah so let me denote that like this so let's let me just do this"}, {"start": 1422.14, "end": 1434.3000000000002, "text": " So basically what I do is you randomly pick a genotype from this population and that's going to be G and you do the same thing here just randomly pick one and then you do the same thing here and for every single subpopulation"}, {"start": 1434.3000000000002, "end": 1446.3000000000002, "text": " And now what you do is the following so you take the freshly created offspring here and you calculate the compatibility measure so this thing here"}, {"start": 1446.3, "end": 1456.86, "text": " So you can basically you'll basically end up with a couple of numbers you'll have some some delta here you'll have and you'll see whether this delta is below the threshold"}, {"start": 1456.86, "end": 1463.74, "text": " If it's below the threshold that means that we need to assign this particular offspring to this species"}, {"start": 1463.74, "end": 1473.74, "text": " So what I do they have some canonical ordering of these subpopulations so that means maybe maybe this one's going to be like subpopulation one then two then three"}, {"start": 1473.74, "end": 1483.66, "text": " And the first subpopulation where the compatibility is lower than the threshold they just kind of allocate they just kind of assign that offspring to that subspecies"}, {"start": 1483.66, "end": 1497.1, "text": " So for example these two were not compatible if this G the representative genome from this subpopulation was not compatible with the offspring then we would continue with with the representative G from the second subpopulation"}, {"start": 1497.1, "end": 1511.26, "text": " Which is in this particular case the same species from which this offspring originates from so now if the compatibility was lower than threshold with would kind of assign again the offspring to this subpopulation so that's the rough idea"}, {"start": 1511.26, "end": 1527.02, "text": " So after this crossing happens I guess what happens next is just some random mutations and then they assert the fitness of all of the individuals of individual genotypes genomes in the whole population"}, {"start": 1527.02, "end": 1541.74, "text": " Let's see how the fitness is determined so fitness is basically depends on the particular task at hand so depends what you're trying to solve with need so fitness will be very very domain specific basically"}, {"start": 1541.74, "end": 1554.06, "text": " The important detail is that they use something called explicit fitness sharing which means they do this and that means that so not only do you care about the particular individual"}, {"start": 1554.06, "end": 1571.74, "text": " You also care how big is the species that that individual belongs to and that's denoted like algebraically by this formula so basically summation from one to through n where n is the whole population size so including all the species"}, {"start": 
1571.74, "end": 1590.6200000000001, "text": " And then we have this sh function which is basically set to one when the this compatibility threshold is below below the this compatibility measure is below the threshold which basically means hey this is going to be the size of the species that this individual belongs to"}, {"start": 1590.62, "end": 1612.54, "text": " So that's a long long story short so because of that even if you have very very fit individuals if the species it belongs to is very big the this adapted I think is adjusted fitness is going to be lower so that means some other less fit individuals from smaller subpopulations have the chance of surviving to the next generation"}, {"start": 1612.54, "end": 1629.98, "text": " Let me just read you a couple of more sentences here so they say every species is assigned a potentially different number of offspring in proportion to the sum of adjusted fitness is of its member organisms species then reproduce by first eliminating the lowest performing members from the population"}, {"start": 1629.98, "end": 1650.6200000000001, "text": " The entire population is then replaced by the offspring of the remaining organisms in each species okay let me recap what I think is going on here so the bad thing about the paper is that they don't have like a like a pseudo algorithm specifying the steps in the exact order so I kind of have to infer what's going on"}, {"start": 1650.62, "end": 1668.54, "text": " without digging into the actual software packages which are as I said quite complex compared to just your regular back back grading based optimizations so as I said let's let's assume we have these subpopulations here and then we determine the fitness of each of the individuals"}, {"start": 1668.54, "end": 1694.78, "text": " and then basically we're going to prune the less fit individuals in each of these subpopulations so we'll end up with some subsets of each of these subpopulations okay and now depending on the size of those like subsets each of these species will have a certain number of offsprings they can generate"}, {"start": 1694.78, "end": 1712.7, "text": " so that means now we'll take the parents we'll do the crossing we'll have an offspring and then we'll just kind of alloc assign it to your species depending on the as I previously discussed on this compatibility measure and we rinse and repeat for all of the subpopulations that same algorithm"}, {"start": 1712.7, "end": 1741.42, "text": " after we do that I guess what happens is inside of the before you start evaluating them and trying to figure out the fitness of these novel individuals you do some mutations so adding connections adding notes and then only then do you evaluate the fitness and then the circle repeats you prune out the weakest individuals the ones with the lowest fitness value and rinse and repeat and rinse"}, {"start": 1741.42, "end": 1760.38, "text": " and then you rinse and repeat okay final detail worth mentioning is that so as discussed in section 2.4 blah blah blah so this just means topology and weight evolution artificial neural networks so basically evolving both the topology and the weights and not just the weights as we are used to"}, {"start": 1760.38, "end": 1786.38, "text": " so these twins if that's the way you pronounce this typically start with an initial population of random topologies in order to introduce diversity from the outset so that's how these other approaches do it in contrast neat biases the search towards minimal dimensional spaces by 
starting out with a uniform population of networks with zero hidden nodes and we saw that in the beginning of video but I thought worth mentioning that"}, {"start": 1786.38, "end": 1816.3000000000002, "text": " okay so that's pretty much it that's neat they then showed some results on a couple of tasks one of those tasks is this XOR so your your logical gate using all over computer science and they show that they can basically find a topology that can perform the XOR function for that you need to have at least one hidden node and you can see here so basically this is the initial individual of the of the"}, {"start": 1816.3, "end": 1846.22, "text": " population and then after applying meet they end up with this phenotype with this individual they can solve the XOR problem you can see it had to add some novel nodes you see a novel note here you can see novel connections here and basically the weights which you can see on this on this diagram are also changed so that this performs the XOR function okay so hopefully you like the idea behind the neat paper"}, {"start": 1846.22, "end": 1876.14, "text": " I think it's very cool because it's inspired by evolution and it has this progressive building of build up of complexity and you're trying to find the topologies you're trying to at the same time tweak those weights and find the simplest individual that can solve the task that you care about now the the con side about this whole approach as you can see it's quite intricate there's a lot of details you need to care about and all of that makes it a bit less attractive"}, {"start": 1876.14, "end": 1887.98, "text": " and since this paper was published in two thousand two we haven't seen we really haven't seen the push towards this research direction but nonetheless I think it's a very valuable"}, {"start": 1887.98, "end": 1890.46, "text": " like idea to keep in your mind"}, {"start": 1890.46, "end": 1899.38, "text": " okay next up let me walk you through the compositional pattern producing networks okay so on to compositional pattern producing networks"}, {"start": 1899.38, "end": 1919.94, "text": " a novel abstraction of development paper and before we dig into CPPN's deeper let me give you another example of a very popular like model of development with which you may be familiar with already and that's the cellular automata models so let me open up this very cool distilled pop"}, {"start": 1919.94, "end": 1940.3400000000001, "text": " and let me just kind of slow this down so that you can see what's going on so you can see here that through local interactions so each of the cell is communicating only with the neighboring cells and via a couple of simple rules and in this case the rules are learned and hence this is called neural cellular automata"}, {"start": 1940.34, "end": 1964.58, "text": " you can see that this phenotype this body plan this morphology albeit a 2D one is built up and this could easily be generalizable to 3D space and thus to the space where we live in and we could be building like real 3D bodies using cellular automata so this is a model of development as well so you can see"}, {"start": 1964.58, "end": 1994.34, "text": " you can imagine starting from a zygote from a single cell and then through some type of communication local communication that's very important you end up with this particular morphology so now what CPPN show is that you don't need to have this local communication going on and temporal unfolding and that's just like a consequence of the fact that we live in a 
physical world with physics where the laws of physics"}, {"start": 1994.34, "end": 2021.06, "text": " are at play and this constrains the developmental process to do it like this so let me now go back to CPPN and let's see how they actually approach this let's start by reading the abstract a little bit natural DNA can encode complexity on an enormous scale researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings"}, {"start": 2021.06, "end": 2040.1, "text": " ie encodings that map the genotype to the phenotype through a process of growth from a small starting point to mature form a major challenge in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies"}, {"start": 2040.1, "end": 2068.3399999999997, "text": " okay so the rest of the paper will show how they found a different model how can a Stanley found a different model to for this developmental process so basically the main contribution of the paper is to establish that CPPN are legitimate abstraction of natural developmental encoding which may not be super obvious on the first glance of this paper so let's start here so"}, {"start": 2068.34, "end": 2095.2200000000003, "text": " what like the example here shows that you can have basically this phenotype being generated by this function F instead of having so basically instead of having local interactions are having this temporal unfolding to get the final to the morphology in this case the morphology is just a triangle so this is not like a biological like morphology but nonetheless you can just kind of abstract it away"}, {"start": 2095.22, "end": 2125.14, "text": " and let's deal with mathematical objects into the space so it's the same thing so F is a developmental model of this of this triangle if you will okay so so basically they say here that a developmental chronology is only one way of producing a particular constellation of particles it follows that encoding and unfolding process may be unnecessary to produce complex phenotypes because a functional description could alternatively have"}, {"start": 2125.14, "end": 2155.06, "text": " been evolved such description would not necessitate a long unfolding chain of interactions and productions so the reasoning is why go through why simulate every single intermediate step with what when what we care about is the actual like phenotype and if you could catch capture the whole development inside of a function why not do just that so they state here that so looking at this diagram here so this observation implies"}, {"start": 2155.06, "end": 2184.98, "text": " that any phenotype produced through a temporal progression is also possible to represent through a functional description so in fact this paper from I think 1989 show that any function can be approximated by neural network with two hidden layers that's the universal approximation theorem and the more neurons in networks to the wider network the more accurate the approximation can be that's any morphology when viewed as a distribution of particle in space is possible to represent as a function without the notion of time"}, {"start": 2184.98, "end": 2214.98, "text": " so that's the thing they're trying to kind of skip over here and just kind of understanding that that's a more of a physical constraint and something we need to care about when we're trying to develop mathematical models such as this one okay 
let's let's see this sentence a significant function and early stage of development in natural embryo Jenny is to define a coordinate frame i.e. a set of virtual coordinate axes upon which future stages of development will be"}, {"start": 2214.98, "end": 2244.9, "text": " based the simplest and most basic of these coordinates frames are the main axis of the body which are defined at the very beginning of development inside the egg itself these axes include the anterior posterior axis i.e. head to feet and the doors or ventral i.e. back to front so what are what they are basically saying here is the following so so the gist of this is that during the development we have to somehow form this intricate patterns"}, {"start": 2244.9, "end": 2265.9, "text": " and those are usually just chemical gradients and if you if you if you manage to form those chemical gradients then the cells then each cell depending on where it is located in depend on this and this and this chemical gradient map will know what to do so we'll know what exactly to do depending on the on the on that scalar of the of the chemical gradient"}, {"start": 2265.9, "end": 2295.86, "text": " so if you are able to capture the pattern of this chemical gradient without even having the local interactions or the temporal unfolding then we can capture the development itself and you can see that because of this left to right gradient which is in the form of a Gaussian function this this if this basically implies that we'll have will end up with a bilateral symmetry and you can see that this fly here has a symmetry bilateral symmetry so if I were to draw a vertical axis"}, {"start": 2295.86, "end": 2325.82, "text": " through this fly here you can notice that we have a symmetry here can just flip it and nothing would change the underlying structure which is in this case that the fly the to the image of a fly would not change and similarly there is a gradient along this anterior posterior axis and different patterns along along those those along the axis produce different cells so basically you can ignore all of this and you can imagine so you can ignore"}, {"start": 2325.82, "end": 2347.82, "text": " all of this and you can imagine we have a certain like just draw this so we have a certain to the spatial pattern here so it will be hard to draw but you can imagine going from so around this axis here it's going to be quite symmetric left and right and that's so basically this to the spatial signal is going to directly in code those chemical"}, {"start": 2347.82, "end": 2373.82, "text": " gradients and that's gonna basically allow us to to to form this this this this morphology here and that means we can ignore we can just focus on trying to generate these intricate to the spatial patterns okay so now let's see how do we generate these to the spatial signals which are going to loosely represent these chemical gradients and that's gonna be a blueprint for how to generate the morphology"}, {"start": 2373.82, "end": 2395.82, "text": " like the body of some biological organism or or whatnot so because we're trying to remember we're trying to build a developmental model for biological organisms so so here's one way we could be composing functions so here we have a symmetric function here we have a periodic function you can see we can generate arbitrary"}, {"start": 2395.82, "end": 2424.82, "text": " patterns by doing this they mentioned here that coordinate frames created through a developmental process interact with each other to produce complex 
patterns with regularities in the same way functionally represented frames can be composed to create complex regularities so in this way and this has been mentioned over and over again throughout the paper the function composition replaces local interactions okay"}, {"start": 2424.82, "end": 2449.82, "text": " so instead of having some local cellular automata like communication we can just compose the various functions to end up with these 2D spatial signals we care about I mean I'm saying 2D but those could be 3D or even 4D as we'll later see and it's just a pattern we're trying to generate so here is"}, {"start": 2449.82, "end": 2478.82, "text": " how cppn's look like and they say that the main idea is that the order in which functions are composited is an abstraction for the chronology of events over the course of development without the need for simulating such events locally so in a way a signal flowing from here to here is an abstraction for the chronology of time so even though we don't have any notion of time in this network per se we are kind of capturing the time"}, {"start": 2478.82, "end": 2501.82, "text": " and all of these local interactions by doing this functional compositing so starting from X and Y which are just the coordinates of your Cartesian plane we can generate the output the phenotype by composing functions and here compared to your regular artificial neural networks what cppn's do is instead of just using ReLU or sigmoid for the activation function they use arbitrary functions which"}, {"start": 2501.82, "end": 2520.82, "text": " have some nice properties so it's either like a Gaussian or some symmetric function or an identity function or a periodic function etc etc so this is how cppn's look like okay they mentioned here that providing the initial coordinate axes as inputs to the graph is what allows local interaction to be eliminated in physical space there are no intrinsic coordinates that an"}, {"start": 2520.82, "end": 2539.82, "text": " individual cell can access to determine its location and hence its identity therefore local interaction becomes a way of asking where am i that is through the collective negotiation of adjacent cells that interact with each other it is possible to derive a coordinate frame however by composing functions"}, {"start": 2539.82, "end": 2558.82, "text": " that take as arguments an absolute frame of reference the need for such negotiation is eliminated and all identities and relative locations can be determined completely independently so basically what they said here is we can kind of feed these absolute coordinates into the"}, {"start": 2558.82, "end": 2584.82, "text": " cppn to generate this output pattern and in contrast you can imagine the difficulty that a cell has like it needs to figure out where it is based on the local communication with the neighboring cells and this"}, {"start": 2584.82, "end": 2603.82, "text": " cppn model basically eliminates the need for that type of communication to happen so as I mentioned there is a lot of similarity between your regular neural networks and cppn's and they say here interestingly a graph of such compositions is very similar to an artificial neural"}, {"start": 2603.82, "end": 2618.82, "text": " network with arbitrary topology the only difference between the two is 
that artificial neural networks generally use sigmoid functions and sometimes Gaussians or ReLUs or whatnot as activation functions in each node whereas the function composition graph may use any of a variety of"}, {"start": 2618.82, "end": 2635.82, "text": " canonical functions at each node as I previously mentioned they then say that the analogy between a function composition graph and a neural network is so strong in fact that it is tempting to equate the two however while from an external objective standpoint they are closely related"}, {"start": 2635.82, "end": 2657.82, "text": " so from the mathematical standpoint I guess using the term artificial neural network would be misleading in the context of this discussion because neural networks were so named in order to establish a metaphor with a different biological phenomenon and that's the brain and the terminology should avoid making the implication that biological thinking brains are in fact the same as developing embryos"}, {"start": 2657.82, "end": 2686.82, "text": " this is kind of loose because in one of Kenneth Stanley's follow up papers called HyperNEAT he shows that you can basically use cppn's to model neural networks by just modeling 4D spatial signals you can map that to neural networks which means that in fact this can be a model of a brain as well so these are very abstract objects cppn's are very abstract"}, {"start": 2686.82, "end": 2710.82, "text": " and they can capture arbitrary spatial patterns which can be mapped to various things so they can be mapped to chemical gradient fields which then can be used as a guide for forming the morphology like the body of an organism or it can be just a plan for how to construct the actual brain the architecture the design"}, {"start": 2710.82, "end": 2730.82, "text": " of a neural network okay so let me recap what we've seen so far it's a bit harder to follow along because there is a lot of text and not that many images so yeah stick with me basically the idea is the following so if you want to produce a certain morphology be it the 2d body like in this case or a 3d body"}, {"start": 2730.82, "end": 2749.82, "text": " all you want to do is be able to specify arbitrarily complex spatial signals so that means you want to have certain properties like you want your system to be able to generate symmetric patterns you want to be able to generate imperfect symmetry you want to be able to"}, {"start": 2749.82, "end": 2770.82, "text": " basically model repetitiveness etc etc and once you have that spatial signal you can basically treat that as an abstraction for actual physical processes such as chemical gradients which a cell uses to communicate and build up the body so if you can get to that spatial signal"}, {"start": 2770.82, "end": 2799.82, "text": " then you've basically solved the task and you've successfully modeled the developmental process and how we can do that that's what the cppn paper shows is by just composing these special functions such as gaussians symmetric functions periodic functions etc and you can generate those patterns by doing this we basically skip the whole local communication temporal unfolding thingies"}, {"start": 2799.82, "end": 2828.82, "text": " that are going on and we just end up with a final plan with the final map that's eventually going to build up 
the phenotype okay so now for the fun part let me show you what they've done and that's why I first covered neat they basically use neat to evolve cppn's such that we can produce very complex patterns with all of the necessary properties that showcase that this is indeed a good model of development"}, {"start": 2828.82, "end": 2858.82, "text": " and now we're going to show you a couple things so here are the experiments they've done so various people created these platforms so on the left you can see this DelphiNEAT platform on the right you can see this SharpNEAT platform and basically what you can see here is the following so this is a phenotype so basically this chemical gradient map that was created by a cppn in the background so each of these images"}, {"start": 2858.82, "end": 2888.78, "text": " you can see are 2d spatial signals which we can interpret as I said multiple times as chemical gradients and each of these is in the background created by a particular cppn and the same thing on the right and so what you do here is using neat and using people to select the parents we can evolve more and more complex spatial signals so what that means is the following so let's say this particular pattern is going to have a cppn"}, {"start": 2888.78, "end": 2918.7400000000002, "text": " in the background so it's going to be some cppn number one this is going to be cppn number two so some user takes these two and we match the genes and we do the crossover and then we do all the mutations etc etc we're going to end up with a novel cppn which means that in turn we're going to end up with a novel pattern and now because humans are in the loop here they can select interesting patterns and thus crossbreed interesting patterns to create even more complex patterns and if you do that you end up with very"}, {"start": 2918.74, "end": 2947.74, "text": " cool patterns and let me show you some of those by the way just a slight detail so aside from x and y what they additionally input is this signal d which is just your distance from the center of your domain here and this could be in theory learned by the cppn by just composing some functions there but this makes it a bit easier this is just kind of a shortcut that biases this cppn towards being"}, {"start": 2947.74, "end": 2961.74, "text": " i guess radially symmetric i think they mentioned somewhere so however since d is radially symmetric it does not automatically provide a bilaterally symmetric coordinate frame okay so d is biasing the cppn towards radial symmetry"}, {"start": 2961.74, "end": 2988.74, "text": " let's see some results so you can see that people managed to generate symmetric patterns such as this one which is a very important finding because as we saw with the fly example when you have this bilateral symmetry pattern then that means you can form bilaterally symmetric bodies and this is the corresponding cppn"}, {"start": 2988.74, "end": 3013.74, "text": " that is generating this particular pattern they also showed that they can evolve quite intricate patterns so you can see here the idea is to create a spaceship and again people were just playing taking certain parents certain cppn's crossbreeding them and mutating them and we end up with this sequence so"}, {"start": 3013.74, "end": 3036.74, "text": " these are like maybe generation number one and then basically we are evolving and with each generation you have more and more
intricate patterns and you can see what happens is that we're kind of elaborating on certain details whereas this overarching pattern this bilateral symmetry is preserved"}, {"start": 3036.74, "end": 3058.74, "text": " throughout this whole sequence all the way to the end where we have this this looks more like a manta ray than like a spaceship maybe this looks like an airplane but in any case you can see that we can generate very complex patterns with all of the necessary properties using cppn's"}, {"start": 3058.74, "end": 3079.74, "text": " an important thing is to be able to represent not just perfect symmetry but also imperfect symmetry because for example take the human body as an example so your heart is not perfectly symmetric your heart is moved a bit to the left instead of being centrally positioned so that's a very good example of imperfect symmetry"}, {"start": 3079.74, "end": 3105.74, "text": " in our bodies so we want to be able to represent that in these signals so here they show that aside from having a perfectly symmetric sunglasses pattern that somebody produced we can also have this imperfect symmetry where you can see that this part is of a different size compared to this left"}, {"start": 3105.74, "end": 3128.74, "text": " I guess part of the signal so that's another important argument for the expressivity of the cppn's now again repetition is something that's super important and repetition with variation is also important so similarly to how it's important to have imperfect symmetry and not just"}, {"start": 3128.74, "end": 3147.74, "text": " perfect symmetry we will also want to have some variation in the repetition not just perfect repetition so here they show that if you input these additional special signals like sine of 10x and sine of 10y and then d which is the distance from the center if you input these signals we generate"}, {"start": 3147.74, "end": 3176.74, "text": " very complex patterns and now you may think to yourself now this is cheating because why would you input sinusoids instead of just your regular x and y coordinates and the thing is this is just a convenience because cppn's could in theory learn this without any problem because they do have sines at their disposal when composing the cppn itself so they could learn the same thing this is just for convenience and you can see amazing patterns popping up"}, {"start": 3176.74, "end": 3202.74, "text": " here so here you can see there is a lot of repetition obviously but there is some variation as well so if we were to zoom into this region then you get this one here and you can see that obviously this kernel here is different from this one here and yeah a bunch of further images testify that this is the case"}, {"start": 3202.74, "end": 3231.74, "text": " yeah going forward you can see that in some cases the cppn learns to ignore those sinusoids and focuses more on the d signal i.e. the distance from the center of the image and then you end up with these very intricate patterns which are much more radially symmetric compared to the previous ones and indeed they showcase that that's not like a cherry picked example a lot of times the cppn's end up with these signals that have all of these nice patterns"}, {"start": 3231.74, "end": 3260.74, "text": " that have all of these nice properties so that's pretty much it they mention finally this is an important sentence so the importance of local 
interaction to developmental abstraction is an open question they basically show that cpns are an interesting alternative to cellular automata or other models of developmental biology now I did briefly mention that even though they are using cpns as a as a developmental"}, {"start": 3260.74, "end": 3286.74, "text": " model they can also use it to just model various neural network architectures and let me show you how that could be done like a simple interpretation of these spatial signals that are generated by by by cpns so if you if I were to take this simple cpn let me just find the diagram so this one here now let's imagine that instead of feeding in the signal deep let's imagine we feed in x1"}, {"start": 3286.74, "end": 3315.74, "text": " y1 and x2 y2 and then we somehow learn cpn so that the outputs this output scalar is has such prop like the similar properties to what we've seen in 2d signals so being symmetric being being repetitive etc etc now if we were to constrain the the range of these signals to be from 0 to 1 this is basically a signal defined"}, {"start": 3315.74, "end": 3337.74, "text": " on on a 4d hypercube and what's now interesting is how you can interpret this so you can interpret it the following way so imagine we have a grid imagine we have a grid like this and so let me just draw the vertical lines and the horizontal lines and let's now imagine you input the coordinate"}, {"start": 3337.74, "end": 3364.74, "text": " so this is x1 y1 and this is x2 y2 and you plug in these numbers so this is going to be for example 0 0 this is going to be maybe 0 and this is going to be 0.2 because like basically what I've done is I've normalized the the the length of this grid to be so this is where 0 is this is 1 and similarly here"}, {"start": 3364.74, "end": 3392.74, "text": " so we have 0 being here and 1 being here and then we just take the coordinates and you get output and you get some number as the output and you can interpret that as a as a as a weight of the connection between these two cells and that means you can now build neural networks that have symmetric connections that have repetitive connections etc etc"}, {"start": 3392.74, "end": 3412.74, "text": " by just finding appropriate cpp ends so that's the whole idea behind the hyper need paper that's a follow-up work after that came after this paper so again let's imagine this is maybe output 0.7 for these particular coordinates so that means this is going to be of weight 0.7 and then you just start"}, {"start": 3412.74, "end": 3434.74, "text": " tracing you just do an extensive search you basically input all the possible combinations here and for example you set a certain threshold so if a certain weight is below I don't know like maybe 0.3 then there is no connection there otherwise you have a connection with the given weight and that's how you form a neural network by using a cppn"}, {"start": 3434.74, "end": 3455.74, "text": " so why this is very cool is because you can use a much smaller model so this cppn may have like hundreds or thousands of nodes and you may end up with like millions or even billions of connections in your in your neural networks this is like basically a low dimensional representation"}, {"start": 3455.74, "end": 3475.74, "text": " like the gist of your of your phenotype which is in this particular case a neural network cool hopefully you found some interesting ideas in these two papers if you did consider subscribing share the video out and join the discord community as well and until next 
time bye bye"}, {"start": 3485.74, "end": 3488.74, "text": " you"}]
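The CPPN discussion above lends itself to a very small code sketch. Below is a minimal, illustrative CPPN in Python/NumPy, not the paper's implementation: the fixed two-layer topology, the particular activation set, and the random weights are assumptions chosen just to mirror the ideas from the video (in the paper the topology and weights are evolved with NEAT). The extra inputs d, sin(10x) and sin(10y), and the 0.3 weight threshold in the HyperNEAT-style reading, come from the discussion above.

```python
import numpy as np

def gaussian(x):
    return np.exp(-x ** 2)            # symmetric bump, useful for (bilateral) symmetry

# periodic, sigmoid-like, symmetric and identity node functions, as discussed above
ACTIVATIONS = [np.sin, np.tanh, gaussian, lambda x: x]

class TinyCPPN:
    """A fixed-topology stand-in for an evolved CPPN: a small composition of
    functions over absolute coordinates (no local interaction, no time)."""
    def __init__(self, n_inputs=5, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=1.5, size=(n_inputs, n_hidden))
        self.w2 = rng.normal(scale=1.5, size=(n_hidden, 1))
        self.node_fn = rng.integers(0, len(ACTIVATIONS), size=n_hidden)  # one function per hidden node

    def __call__(self, coords):                        # coords: (N, n_inputs)
        h = coords @ self.w1
        for j, k in enumerate(self.node_fn):            # apply each node's own activation
            h[:, j] = ACTIVATIONS[k](h[:, j])
        return np.tanh(h @ self.w2)                      # one "intensity" value per queried point

# 1) Query the CPPN over a 2D grid: the output image plays the role of the "chemical gradient" pattern.
res = 64
xs, ys = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res))
d = np.sqrt(xs ** 2 + ys ** 2)                          # distance from the centre, biases towards radial symmetry
coords = np.stack([xs, ys, d, np.sin(10 * xs), np.sin(10 * ys)], axis=-1).reshape(-1, 5)
pattern = TinyCPPN(seed=42)(coords).reshape(res, res)    # 64x64 2D spatial signal, no unfolding over time

# 2) HyperNEAT-style reading: query a 4-input CPPN on (x1, y1, x2, y2) and treat the output
#    as the weight of the connection between substrate cells (x1, y1) and (x2, y2).
cppn4d = TinyCPPN(n_inputs=4, seed=7)
cells = [(x, y) for x in np.linspace(0, 1, 4) for y in np.linspace(0, 1, 4)]
weights = {}
for x1, y1 in cells:
    for x2, y2 in cells:
        w = cppn4d(np.array([[x1, y1, x2, y2]])).item()
        if abs(w) > 0.3:                                 # prune weak links via the threshold mentioned above
            weights[((x1, y1), (x2, y2))] = w
```

Running the first part produces a 64x64 array whose symmetry and repetition come directly from the symmetric and periodic node functions; the second part shows how the same kind of network, queried on pairs of coordinates, can be read out as a neural-network connectivity plan.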
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=1MnkDLHzzvQ
POET: Paired Open-Ended Trailblazer | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover "Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions" paper. They simultaneously keep a collection of multiple environments and their associated agents. Environments are occasionally mutated, agents are optimized (using evolutionary search algorithm) w.r.t. their envs. Finally, periodically a competition happens that decides the best agent (out of all agents in the collection) for each env individually. This forms a potent curriculum that leads to agents mastering even tougher envs. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/1901.01753 ✅ Kenneth Stanley on open-endedness: https://www.youtube.com/watch?v=dXQPL9GooyI ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to POET 08:00 Evolutionary search explained 17:15 POET algorithm 18:40 Environment encoding 21:35 Agent and reward modeling 24:35 POET main loop 27:30 Environment mutation algorithm 30:35 Results 33:20 Direct-path curriculum comparison 39:45 POET "unstucks" agents 41:30 Extensions and outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Kevin Stone Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #poet #evolution #openended
What's cracking guys? In this video I'll be covering an older paper called Paired Open-Ended Trailblazer or POET for short, endlessly generating increasingly complex and diverse learning environments and their solutions by this group of people that used to work at Uber AI labs and you may recognize Kenneth Stanley as the current open-endedness AI lead at OpenAI. So the reason I'm covering this paper is because it's super interesting. I think it's a very nice model of the evolutionary process itself and the main differentiation point between POET and many of the like common and mainstream approaches nowadays is that they do keep like the not a single environment that's that's kind of crafted and it's very hard but they instead keep up set of environments and corresponding agents and they keep on mutating the environments they keep on optimizing the agents for their corresponding environments and additionally they have this transferring where they do like a type of a competition where you try to find the best agent for a particular environment and if it outperforms the current niche champion then you replace the niche champion with that better agent and you keep on repeating this process so that's a high level kind of picture. Now let me show you their video and then we'll get back to the paper. Okay now that you have a rough idea of what's going on in the paper let's take a bit deeper. They say here that while the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions an important question is whether the problems themselves can be generated by algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula so diverse and expanding curricula and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. So let me contrast this this description here with what's usually done in for example reinforcement learning. So usually what you end up with is you have a complex environment like let me denote that as a is this big big square here. It's a complex environment for example Dota or Starcraft or like Go or chess or all of those environments we've used to seeing and we know are very challenging and what I do is they basically train a very specialized agent so that's that's kind of very performant at that particular environment and so it's a very convergent behavior so you have one agent and you're training it and then you converge to a single set of weights and then you can just deploy those that model in the environment and that's it the process stops. So on the other hand what this open-endedness is all about is instead of having this single environment and a single agent that's adapted that environment why not instead keep like a set of a collection of various different environments and let me denote those by different rectangles here like this is one environment and then we have like a second different environment and then we have like a third environment here etc etc and you keep on optimizing corresponding agents for each of these environments as you can see these are very adapt to these particular environments. Okay I'm gonna denote them so yeah they kind of fit perfectly together that's that's that's kind of a pictorial way to for me to to tell you that this agent is very adapt for this particular environment here. 
So other than having a set of environments and agents which wouldn't be that interesting in and of itself what they do is they occasionally mutate these environments so let me denote that something like this so this environment is gonna be changed it's gonna become maybe a bit different maybe it's gonna have like something like this and then this environment here may become like a bit different after the mutation etc etc and now you keep on optimizing the agents for these novel environments so now you'll end up with an agent that's kind of fits perfectly together into this environment here and the same goes for all of the other environments here so you do this for all of the pairs of environments and agents and the final step in there in this open ended process is to every now and then take all of the agents you have so take all of these agents and basically make them compete on one of the environments so for example you take this one and this one and this one and you see which one of these is the best for let's say for this particular environment and if this one if this like second agent here is better than this one then you're just gonna kill this one and you replace this agent with this one here and that's the those are the high level components that are needed for this open-endedness poet process we'll see more details as I go deeper into the paper but that's that's the rough idea okay let's continue so the results show that poet produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges many of which cannot be solved so that this is an important point here as they'll be comparing poet with these more direct types of solutions so they cannot be solved by direct optimization alone or even through a direct path curriculum building control algorithm we'll see what that means in a second introduced to highlight the critical role of open-endedness in solving ambiguous challenges so as you probably noticed by the very description of this framework I did not specify any particular class optimization algorithms that is going to be used to adapt to to adapt the agents for their corresponding environments what they use in practice here is the evolutionary strategy algorithm so let me start by explaining what those what that class of algorithm is all about for those of you who are not familiar with evolutionary algorithms so the idea is actually fairly simple what do you have is the following so let's imagine we represent our agent as an as a simple MLP so multi-layer perceptron network so let's imagine we have like some input observation and then we have the agent represented as this MLP with multiple hidden layers and then we have the final linear layer and now it comes the output like for example this is the action space of the agent and as you can imagine this contains bunch of learning weights and we can kind of flatten those out into a vector and let's denote that vector as s theta so this is a vector theta and this theta is obviously a multi-dimensional vector depending on how many parameters you've got and we can represent it as a point in space like as a way of visualizing that that theta vector we can represent it as a point in 2d space so obviously because it's multi-dimensional would have to do some type of a dimensionality reduction method in order to plot it like this but you like bear with me this is this is the visualizations I'm going to use so now what evolutionary algorithms do is in their most common form is you fit a Gaussian like 
on top of this of this theta vector so this is let me just denote it again this is theta vector this is where we start from and then you fit a Gaussian on top of this vector so basically what it means you take it as a mean you take this vector as the mean of the Gaussian and you have some some some basically covariance matrix it's usually just isotropic like Gaussian and so what the idea here is is now you're going to sample randomly sample novel agents so according to the Gaussian distribution so because it's Gaussian is more dense here you're gonna have more agents here you'll have some agents here you'll maybe have one agent here and so on and now what you do is you take all of those like novel random agents and you evaluate them at the environment at hand and you get the associated fitness you get the associated score of that agent in this particular environment at hand so let's now imagine that for example this agent here got a score of 150 this one got a score of let's say 60 this one got a score of like 35 this one like 40 or like I don't know like five or whatnot and you can imagine immediately see that probably this part of environment is for some reason so the weights that are in this part of the space are kind of more valuable for this particular environment and so you intuitively want to move your vector your theta vector in this direction and so how you do that practically in the in these class of evolutionary algorithms is the following so you have associated perturbation vector that you used in order to get to this particular agent from this theta initial state of set of weights so and you have like corresponding perturbation vector for all of these novel agents here in the environment and so what you'll do is a simple in practice you just do a simple weighted average of all of these vectors and you can imagine that after you do that so after you multiply this vector here with with 150 and this vector here with 60 and you do the same for these other vectors you'll end up with a resultant vector that maybe goes that maybe looks like this maybe you end up going in this direction here and so you move and because you have a learning step you'll you won't move too much you'll maybe move like here and so this is your novel theta so this is theta prime okay again re-trating what we've done here is we've we've taken the initial set of weights theta we've fit a Gaussian on top of that theta vector and what I mean by that is basically take the theta as the mean of the Gaussian and you somehow pick the covariance it's usually isotropic Gaussian then you sample n random points from that Gaussian and what you do by doing that is you sample n novel agents and then you evaluate those agents on the environment at hand you get some associated like scores and you just create this weighted sum of all of these perturbation vector and when I when I say a perturbation vector what I mean is basically this vector here so what it took to get from theta so from from from the original like vector here to get to this novel candidate agent so that's the high level understanding of what's going on now let me show you how the formulas look so that you are not confused next time when you see the formula I think it's always good to have this type of grounding and understand like visual understanding of what's going on or like just the semantics behind the notation okay let's first start and understand this this more general formula here where instead of a modeling distribution as a Gaussian you you kind of 
you parameterize your distribution using the the weights theta so those are the the weights you start from so this vector here and you somehow model that probability distribution and so the idea is so this this tells you the same story I just told you about so this e theta of I just means how well does this theta I agent perform on this particular environment e so this will be a score like for example hundred fifty as we saw before so let me just connect this with the drawing here so this particular set of weights here so this novel candidate vector is going to be theta I and E of theta I so that the performance of that agent on this particular environment is going to be hundred fifty as we saw here okay so that's how those two connect next up let's imagine that under the current probability distribution that's parameterized by theta this agent theta I has the probability of 0.4 and because it has such a high score like intuitively we would love to increase this probability the next time we sample from this probability distribution so optimally when we do a single step of evolutionary strategy algorithm will boost up this one to maybe 0.6 so we'll have more probability put on to on top of this particular set of weights which makes intuitively sense so again this term here tells you what's the what's the fastest way to increase the probability of this particular theta I so that's what gradient does it shows you the direction of the steepest ascent so that we can maximize this this term here and so it's weighted by the the score that that particular agent achieves and then you just do a sum across all of the end samples so and samples are are those random the random samples we take given the the the initial set of weights theta here okay that's how those two connect so that's just a more general notion of this whole evolutionary strategy setup if we assume now that the probability distribution is a Gaussian this is how this expression like transforms you can see again we have a sum across and samples this time we find how performant are these agents so this here is now going to be theta I let me just change the color so this here is a new set of weights called theta I and we just have to so epsilon subscript I is just a sample from the standard Gaussian distribution so this is basically sampling from your Gaussian and you then evaluate the that novel model on the on the environment you get the score and so as you can see here you just use those as the weights for these resultant vectors so those resultant vectors are expect are exactly these dotted lines I showed here and so yeah basically this equation is the formulaic way of describing how evolutionary search breaks okay hopefully that helped you better understand the evolutionary strategy algorithm as a final note this theta plus sigma times epsilon sub I can be denoted as this so it's a it's a Gaussian with mean equal to theta and with covariance matrix equal to sigma identity matrix so you're basically sampling your agent from this distribution here you evaluate and find the score on this environment e and then you use that as the weight to multiply these perturbation vectors epsilon sub eyes okay enough ramble about evolutionary strategy let's continue and explain understand the poet algorithm itself okay let's see how it proceeds so you initialize this list of environment agents as as an empty list you add the initial environment and you add the initial theta so that's just gonna be a random vector and the initial environment is gonna 
be basically this thing here so they've been using this by by pedal Walker environment except that this one is already fairly complex let me show you how the initial one will probably look like so it will look something like this it's completely flat you don't even have the gap here you you will have like nothing you'll just have a flat environment that's gonna be the initial environment of the poet algorithm okay so let's go back here and then once you add that initial pair of environment environment agent pair so then you do basically capital T number of iterations and in each of those iterations you do the following so every now and then you are going to mutate your environments and so as you can see here so T is the current iteration and every now and then which is dictated by the end mutate you're going to execute this so what does it mean to mutate an environment such as the bipedal by pedal Walker environment we saw here and to understand that let me let me get to this table here so basically each environment is encoded as a as a as a vector and you can treat that vector as a like a gene of the environment and let's understand the format of that of that gene so it looks like something like this so here is our vector that encodes the environment and the first two coordinates here are gonna encode the stump the stump height as you can see here the initial values are gonna be like zero and then 0.4 and that's going to determine the stump heights throughout the environment so just to clarify what this here means is that your stump height so let me draw that as this so here you have like stump height here so stump height and you're gonna have the stump height be between 0 and 0.4 and you're just going to have like a uniform distribution so that means all of these heights here are gonna be of equal probability and throughout the the as the environment progresses you're gonna be sampling from this distribution okay next up you have gap width so the next two coordinates are gonna be gap width and then the next two are gonna be step width and then you're gonna have the step numbers and the roughness let me just kind of quickly explain the semantics behind these so gap width is I guess fairly clear so that's this thing here so this is your gap width this is the stump height then what else they have let me just check so we have step height and number of steps so let me let me see what I can find an environment with steps I think there is one here so you can see here so this is here we have like six steps and you can see what this is the step height so all of these various different parameters that describe the environment are encoded in this in this vector here okay so an important detail here is this mutation step row which tells you how are you going to with each duration which each mutation step how are you going to mutate these these parameters that describe the environment so for this stump height you have 0.2 which means you're going to every now and then with some probability boost up the 0 to like 0.2 and this 0.4 to 0.6 and now you you can imagine you now have a much harder environment because all of the stumps are going to be between like higher on average than than than before and that's how you're progressively and randomly mutating and making environments harder you can do that for all of the parameters that describe the environment here so all across all of these five columns dimensions cool now that we understand the environment let me first before getting back to the algorithm 
explain how the actual agent is constructed so you can see here how the agent looks like and you basically have four control signals you're controlling the hip torque you're controlling like so here you're controlling the knee torque you're controlling the other hip torque and the other knee torque that's the output so as for the input I think they mentioned it somewhere here let me just if I can zoom out they mentioned here so the agent has 10 lighter range finders for perceiving obstacles and terrain whose measurements are included in the state space in other 14 state variables include hull angle hull angular velocity horizontal and vertical speeds positions of joins blah blah blah blah blah okay so all in all you have its model the following way you have 24 dimensional input observation or state then you have an MLP here a couple of hidden layers and you end up with four dimensional output vector so this is for this is 24 this is just an MLP and that's everything you need to know about the agent the final detail that's important is how do you define the fitness how do you find the score and here is the formulaic description of the reward you can see that if the robot falls then you like the robot gets minus hundred reward so negative reward you can see here that we are encouraging the robot to move like rightward so here Delta X because the X axis basically as you can assume goes from left to right so you want to move as much as possible so that's the Delta as much as possible to the right then it's penalized for the hull angle whereas the hull angle is this thing so basically you want to make sure let me zoom in a bit more so you want to make sure that this particular angle here so this angle here let me know it does maybe alpha you want to minimize it and that means you want to have this this hull being as horizontal as possible and the final detail they have here is they are also penalizing for the applied torque so if we were to explain what this means in human language is it's basically like a like a like a formal description of this informal statement please don't fall move to the right as much as possible try and keep your hull as horizontal as possible and try to minimize the energy consumption while doing all of that so that would be like an informal description of what this this this reward is encoding cool now that we understand all of that let's get back to the algorithm put main loop so we got to this line six here we're somehow mutating the environments we saw the mechanism roughly there is a pseudo algorithm in the appendix I'm going to show you in a minute but like for now just imagine you have like a set of vectors which are encoding for particular environments you can you can kind of imagine you mutate those vectors according to the rules we saw and then you get novel novel environments okay now that we have that environment a novel set of mutated environments we just iterate through those environments and we optimize the agents here as you can see in this step here we're going to take the the environment agent pair we're gonna do a single evolutionary strategy step so this as you remember produces the resultant vector so we just add that resultant vector to the theta that was used as the mean of the Gaussian and we get the novel the novel set of weights for that agent so that's how the agents are optimized here obviously you can see that we have like the learning rate alpha here we have Sigma the standard deviation for the Gaussian at hand and all of that combined 
goes into the input of this ES step and that's how we update the weights okay finally and I mentioned this every now and then we're going to so every now and then is determined by this end transfer we're going to evaluate candidates as you can see here we take all of the agents except for the theta t plus 1 M except for the Mth agent where M is the current agent in this loop here and we evaluate all of these agents on the Mth environment and we find the best performer for that environment and if the score that the best performer achieved is better than the than the current agents then the niche champion then you replace you basically overwrite so basically bye bye the old agent and here comes the new one which was the best agent which is the most adapt agent for this environment even though it came from some different environment so that's the this interesting cross pollination that's going on okay so on a high level the Poet algorithm is fairly simple you keep this collection of environment agent pairs you keep mutating the environments you keep optimizing the corresponding agents for those particular environments and every now and then you do this cross pollination step whereby for each of the environments you're going to find the best agent overall which may come from some different environment and replace the current niche champion with that one if that happens okay so that's that's everything and now let me just go to appendix and show you the mutation pseudo algorithm in a bit more detail so here we are let's see how it works so we have some list already we iterate through the list of agent environment pairs and we see if they are eligible to reproduce and what this means is they basically evaluate the agent M on this environment M and they see what the score is and if the score is like if it's not in this in this in this range between like for example 50 and 200 they discard that agent for that particular environment I mean both the agent and the environment and why is that well because that means that if you're outside of this range that means that you're either here which means you achieve super low score which means that the environment is probably very hard or it means that you're here which means it's you have a very high score and the environment is very very easy and so you want to kind of filter out those either too hard or too simple of environment after we've done that we have this parent list so basically all of the eligible environment agent pairs are there then we do the environment to produce so this step here is gonna do these random mutations you're gonna take the vector that describes the environment you're gonna do the mutations according to certain rules and you end up with this child list then you again just check this minimal condition IE you check whether they are in this range you discard the ones which are not after all of that is done there is this line 13 where there is something interesting going on so they rank by novelty and how they define novelty is using L2 metric basically they take the encoding vector and they compare it with all of the other environment vectors as well as some of the older like they have this archive of older environments and the further away it is from all of the other environments as defined by L2 the more novel it is so that's how they define the concept of novelty it's it's it's very different compared to the previous environments we've seen and they basically sort this list according to to not according to novelty and once 
we've done all this they also here they have this evaluation of candidates so they find the best agent for that particular environment and they and if this minimal condition is satisfied then they add the environment agent pair to the list so what can potentially happen here again is that some of the other agents is better at the environment that can this at this child environment and then you're just gonna overwrite the agent with this better agent here so yeah there is a it's kind of intricate but you got the gist of it already you have before I showed you this this pseudo algorithm okay let's get back to the main paper and see where we stopped okay let's get to the results the first comparison they do is they compare poet with the evolutionary strategy from scratch which means you find some fairly complicated environment as the one you can see here and you either train the agent using poet or you directly train evolutionary search so you start from a random agent and you try by random perturbations to find the agent that performs well for this particular environment and it turns out that as soon as the environment gets like decently complicated the evolutionary strategy from scratch starts failing and they mention here some of the failure cases so they say here in effect to obtain positive scores these agents learn to move forward but also to freeze before challenging obstacles which helped them avoid the penalty of minus hundred for falling down and this behavior is a local optimum the agents could in principle learn to overcome the obstacles but instead converge on playing it safe by not moving it's a very interesting behavior like a like a like a risk adversity like being manifested here in these simple agents in the most simple of environments and yeah I guess we see that in the in the real happening in the real world as well so here you can see the results you can see that that that the ES kind of gets stuck here and stays in the same place because otherwise it will fall down and get a minus hundred reward which is suboptimal whereas poet can actually yeah can actually kind of jump across these these gaps and and and continue on doing its job similarly for these other types of environments you can see here as soon as a big obstacle comes here the agent gets stuck whereas the poet keeps on going to the right same here it just stops in front of a big obstacle and does not move and that's the local optimum that they were referring to okay let's continue here you can see that poet can solve even some of very difficult environments and that's that's very very cool results okay here is just the quantitative visualization of how like of this huge gap between poet performance and evolutionary search by the way I'm butchering the names here like I'm not sure whether you can use interchangeably strategy search and like just evolutionary system because I think opening I had some evolutionary strategy paper so it may be the name may be overloaded but I explained the semantics between behind the evolutionary search like a couple minutes ago so hopefully that won't be confusing okay now we've seen the direct comparison between poet and evolutionary search now you can do something a bit more smart and that's doing a curriculum based learning but like not using poet which is highly open-ended but doing a more direct path curriculum learning let's see what what that means so a natural question then is whether the environments created and solved by poet can also be solved by an explicit direct path 
curriculum building control algorithm the sequence of environment starts with an environment of only flat ground without any obstacles which is easy enough for any randomly initialized agent to quickly learn to complete then each of the subsequent environments are constructed by slightly modifying the current environment okay let me try and explain what that means and what's the difference in approach so on one hand you have poet so you start with some random environment you start with the easy environment and then you randomly start protruding that environment so it's environment number one you do some mutations you get to environment number two and then you do some like mutations you get to three and then you get to four and then etc etc and you end up with environment after n steps which is somewhere here okay so now what they propose is let's start from this very same so so this is the original environment which is the flat ground everything super simple let me just duplicate it here for the sake of easier visualization so we have the same environment so these two here are the same so that was that's the simplest environment possible now instead of having this highly nonlinear mutation path towards M why not just go directly towards N and when I say directly as you know the environment described by a vector and you can literally linearly interpolate between these two vectors and create a set of environments that goes from one to N but following a much more direct path and they mentioned they explained here how exactly they do that they say more specifically to get a new environment each obstacle parameter of the current environment has an equal chance of staying the same value or increasing by corresponding mutation step value in table one until that obstacle parameter reaches that of the target environment so they do introduce a slight stochasticity in this process is not just a pure linear interpolation but like it's very close to that okay so this is the approach they take and let's now see the results they got comparing to this baseline okay here are the results these are something called rose plots and you can see that so run one is a single run of the poet algorithm and as poet was was running you can imagine that the environments were getting harder and harder let me first explain to you how you can parse this this this diagram so each of these five axes here correspond to one of the five parameters we saw in the table before so that means one is coding for like gap one is coding for the stump size one is coding for the roughness etc etc so the hardest environments are the ones which would be touching basically the end points here so these would be the this would be the hardest environment possible so this one here this is the hardest environment okay so looking at these these plots here we can see that so we have here three independent fronts let me first explain what the columns mean and what the what the rows mean so we have three independent runs as and the run like the the environments were getting harder and harder as the as the runs were progressing and then we can notice that the blue pentagons is the direct path curriculum and we can see that it's not able to to achieve the same to solve the same complexity level environments as poet given the similar amount of compute so again connecting this diagram with the thing I draw here basically the the the poet does not achieve never gets to end it maybe gets to like manages to solve like some of the environments the more the 
more these easier environments but never manages to actually solve these harder environments that's what we can see visually on this on these rose plots here continuing on this is just another representation of that same fact and one thing that's worth mentioning about poet is the following so while poet cannot guarantee reaching a particular preconceived target these results suggested in some environment spaces poets unique ability to generate multiple different challenges and solve them may actually still provide a more promising path towards solving some preconceived target challenges so what this sentence here is telling us is the following so the thing with poet is you can solve arbitrarily complicated environments but like it's not like you can specify like up in front like you cannot upfront specify this particular environment and tell poet hey go and solve this environment which I deeply care about no you can't do that so that's that's that's what this approach here try to do and we show that it failed so the whole point is you have to have this open-ended process and you have to be collecting milestones and maybe some of the milestones turn out to be very useful for the problem you care about so Kenneth Stanley one of the co-authors of this paper built like a whole philosophy behind this this this discovery here he even has a book why greatness cannot be planned where he is discussing this this tension that exists between having clear objectives and where you want to get to versus having this more divergent approach and he's arguing very very very very strongly for this more open-ended approach as opposed to these direct optimization methods and we saw like and he uses results such as this one to to argument why that might be the best thing to do in the long run and that's something that evolution itself is doing it's not like evolution is going towards a particular solution okay let me explain you this very very interesting diagram that that kind of shows the power of the poet algorithm so here you can see the parent environment and this particular agent at iteration 400 learned to do this type of like it's dragging its leg very close to the ground so it's a very we can see it's a very suboptimal behavior so the agent ended up in some local optimum and it cannot get unstuck so what poet does is at one point of time this agent is gonna perform outperform some other agent on some other environment so that's the transfer we saw multiple times and you see that in this environment there are stumps and after training on this so after training this same agent so the same set of weights on this novel environment the agent learned how to straighten up the legs you can see after iteration 1175 the agent is already walking quite like the legs are straightened out and then what happens is after you now copy that agent into the old environment and you keep on iterating you can see that here in the parent environment the agent finally learned how to walk straight so if you did not have this this this dynamics where the agent was sent to another environment trained there and then returned back to this environment you would never get unstuck from this local optimum we see here and so that's that's the whole that's a very powerful thing about poet I guess the main the main con side of poet is you cannot as we saw we cannot directly solve the environments we care about we kind of have to be collecting milestones and hope for what and hope that some of the milestones make sense for what we care 
Okay, let me wrap up this paper by stating some possible extensions of POET, although obviously the paper was published in 2019, so I'm not quite sure whether any of these extensions happened or not. They say the following: an additional constraint that exists in this initial work is that the body of the agent is fixed, ultimately limiting the sort of obstacles it can overcome, for example how big of a gap it can jump over. Here too, a more powerful and expressive encoding of morphologies (when I say morphologies, they mean body shapes), including those like CPPNs that are based on developmental biology, could allow us to co-evolve the morphology and the body of the agent along with its brain, in addition to the environment it is solving. This is a very straightforward extension of the research presented in this paper: you can imagine that not only can you mutate the environment and then adapt the agents to that environment, but you can also have an encoding vector for the morphology of the agent and mutate that encoding vector, and by doing those mutations you can maybe end up with an agent that has longer legs. You can also imagine mutating the actual brain architecture of the agent, so instead of having that fixed MLP I described a couple of minutes ago, you can be modifying and mutating this architecture as well; so you're mutating the architecture of the brain and of the body, and you can also imagine mutating the rewards for each of the environments. Why keep the reward constant, as we saw there, with the minus hundred penalty for falling, etc.? You could also be mutating the reward, and all of that additional diversity could probably either end up going towards some space that we don't care about, or it could be very interesting, although you probably would not want to give the reward too much flexibility; you want to scope it: you roughly know what you care about, and then you form a space, maybe a probability distribution across rewards, so that some rewards are more likely than others. Okay, those were some candies for your thought. Hopefully you liked this video and found this idea of POET and open-endedness interesting, and if you did, share the video out, consider subscribing, join the Discord community, and until next time, bye bye.
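As a small footnote to the extension ideas above (none of this is in the paper, it is purely illustrative), here is a toy sketch of what an extended genome could look like if the environment vector were joined by morphology parameters and scoped reward weights that all mutate together; every name and number is made up.

```python
import copy
import random

# Hypothetical extended genome: environment, morphology, and (scoped) reward weights.
genome = {
    "environment": {"stump_height": 0.4, "gap_width": 0.8, "roughness": 1.2},
    "morphology":  {"leg_length": 1.0, "hull_width": 1.0},
    "reward":      {"fall_penalty": -100.0, "torque_cost": 0.00035},
}

def mutate_extended_genome(g, rel_step=0.1, p=0.5, rng=random):
    """Co-evolution sketch: with probability p, nudge each parameter up or down by a
    relative step, so the environment, the body, and the reward all drift together."""
    child = copy.deepcopy(g)
    for group in child.values():
        for name in group:
            if rng.random() < p:
                group[name] *= 1.0 + rng.choice((-rel_step, rel_step))
    return child

print(mutate_extended_genome(genome))
```

In practice you would want to keep the reward weights inside a small, pre-scoped range, echoing the point above about not giving the reward too much flexibility.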
[{"start": 0.0, "end": 4.48, "text": " What's cracking guys? In this video I'll be covering an older paper called"}, {"start": 4.48, "end": 10.200000000000001, "text": " Paired Open-Ended Trailblazer or POET for short, endlessly generating"}, {"start": 10.200000000000001, "end": 14.16, "text": " increasingly complex and diverse learning environments and their solutions"}, {"start": 14.16, "end": 19.080000000000002, "text": " by this group of people that used to work at Uber AI labs and you may"}, {"start": 19.080000000000002, "end": 24.88, "text": " recognize Kenneth Stanley as the current open-endedness AI lead at OpenAI."}, {"start": 24.88, "end": 28.8, "text": " So the reason I'm covering this paper is because it's super interesting. I think"}, {"start": 28.8, "end": 33.32, "text": " it's a very nice model of the evolutionary process itself and the main"}, {"start": 33.32, "end": 38.36, "text": " differentiation point between POET and many of the like common and mainstream"}, {"start": 38.36, "end": 43.36, "text": " approaches nowadays is that they do keep like the not a single environment"}, {"start": 43.36, "end": 48.36, "text": " that's that's kind of crafted and it's very hard but they instead keep up set"}, {"start": 48.36, "end": 51.96, "text": " of environments and corresponding agents and they keep on mutating the"}, {"start": 51.96, "end": 55.44, "text": " environments they keep on optimizing the agents for their corresponding"}, {"start": 55.44, "end": 59.239999999999995, "text": " environments and additionally they have this transferring where they do like a"}, {"start": 59.239999999999995, "end": 64.44, "text": " type of a competition where you try to find the best agent for a particular"}, {"start": 64.44, "end": 69.2, "text": " environment and if it outperforms the current niche champion then you replace"}, {"start": 69.2, "end": 73.0, "text": " the niche champion with that better agent and you keep on repeating this"}, {"start": 73.0, "end": 77.16, "text": " process so that's a high level kind of picture. Now let me show you their"}, {"start": 77.16, "end": 104.96, "text": " video and then we'll get back to the paper."}, {"start": 197.16, "end": 211.76, "text": " Okay now that you have a rough idea of what's going on in the paper let's take"}, {"start": 211.76, "end": 216.3, "text": " a bit deeper. They say here that while the history of machine learning so far"}, {"start": 216.3, "end": 221.6, "text": " largely encompasses a series of problems posed by researchers and algorithms that"}, {"start": 221.6, "end": 225.12, "text": " learn their solutions an important question is whether the problems"}, {"start": 225.12, "end": 229.44, "text": " themselves can be generated by algorithm at the same time as they are being"}, {"start": 229.44, "end": 234.08, "text": " solved. Such a process would in effect build its own diverse and expanding"}, {"start": 234.08, "end": 238.64000000000001, "text": " curricula so diverse and expanding curricula and the solutions to problems"}, {"start": 238.64000000000001, "end": 243.92000000000002, "text": " at various stages would become stepping stones towards solving even more"}, {"start": 243.92000000000002, "end": 248.0, "text": " challenging problems later in the process. So let me contrast this this"}, {"start": 248.0, "end": 253.4, "text": " description here with what's usually done in for example reinforcement"}, {"start": 253.4, "end": 257.88, "text": " learning. 
So usually what you end up with is you have a complex environment like"}, {"start": 257.88, "end": 262.6, "text": " let me denote that as a is this big big square here. It's a complex environment"}, {"start": 262.6, "end": 268.22, "text": " for example Dota or Starcraft or like Go or chess or all of those"}, {"start": 268.22, "end": 273.2, "text": " environments we've used to seeing and we know are very challenging and"}, {"start": 273.2, "end": 281.0, "text": " what I do is they basically train a very specialized agent so that's that's kind"}, {"start": 281.0, "end": 285.08, "text": " of very performant at that particular environment and so it's a very"}, {"start": 285.08, "end": 289.12, "text": " convergent behavior so you have one agent and you're training it and then"}, {"start": 289.12, "end": 293.0, "text": " you converge to a single set of weights and then you can just deploy those that"}, {"start": 293.0, "end": 297.76, "text": " model in the environment and that's it the process stops. So on the other hand"}, {"start": 297.76, "end": 301.92, "text": " what this open-endedness is all about is instead of having this single"}, {"start": 301.92, "end": 305.76, "text": " environment and a single agent that's adapted that environment why not"}, {"start": 305.76, "end": 310.72, "text": " instead keep like a set of a collection of various different environments and"}, {"start": 310.72, "end": 314.12, "text": " let me denote those by different rectangles here like this is one"}, {"start": 314.12, "end": 318.6, "text": " environment and then we have like a second different environment and then"}, {"start": 318.6, "end": 325.24, "text": " we have like a third environment here etc etc and you keep on optimizing"}, {"start": 325.24, "end": 329.04, "text": " corresponding agents for each of these environments as you can see these are"}, {"start": 329.04, "end": 334.48, "text": " very adapt to these particular environments. Okay I'm gonna denote them"}, {"start": 334.48, "end": 339.6, "text": " so yeah they kind of fit perfectly together that's that's that's kind of a"}, {"start": 339.6, "end": 346.28000000000003, "text": " pictorial way to for me to to tell you that this agent is very adapt for this"}, {"start": 346.28000000000003, "end": 351.96000000000004, "text": " particular environment here. 
So other than having a set of environments and"}, {"start": 351.96000000000004, "end": 355.52000000000004, "text": " agents which wouldn't be that interesting in and of itself what they"}, {"start": 355.52000000000004, "end": 359.04, "text": " do is they occasionally mutate these environments so let me denote that"}, {"start": 359.04, "end": 362.8, "text": " something like this so this environment is gonna be changed it's gonna become"}, {"start": 362.8, "end": 367.40000000000003, "text": " maybe a bit different maybe it's gonna have like something like this and then"}, {"start": 367.4, "end": 373.71999999999997, "text": " this environment here may become like a bit different after the mutation etc"}, {"start": 373.71999999999997, "end": 378.08, "text": " etc and now you keep on optimizing the agents for these novel environments so"}, {"start": 378.08, "end": 381.44, "text": " now you'll end up with an agent that's kind of fits perfectly together into"}, {"start": 381.44, "end": 387.35999999999996, "text": " this environment here and the same goes for all of the other environments here"}, {"start": 387.35999999999996, "end": 392.56, "text": " so you do this for all of the pairs of environments and agents and the final"}, {"start": 392.56, "end": 399.36, "text": " step in there in this open ended process is to every now and then take all of the"}, {"start": 399.36, "end": 403.96, "text": " agents you have so take all of these agents and basically make them compete"}, {"start": 403.96, "end": 407.08, "text": " on one of the environments so for example you take this one and this one"}, {"start": 407.08, "end": 410.96, "text": " and this one and you see which one of these is the best for let's say for this"}, {"start": 410.96, "end": 415.96, "text": " particular environment and if this one if this like second agent here is better"}, {"start": 415.96, "end": 421.28, "text": " than this one then you're just gonna kill this one and you replace this agent"}, {"start": 421.28, "end": 426.4, "text": " with this one here and that's the those are the high level components that are"}, {"start": 426.4, "end": 431.46, "text": " needed for this open-endedness poet process we'll see more details as I go"}, {"start": 431.46, "end": 435.23999999999995, "text": " deeper into the paper but that's that's the rough idea okay let's continue so"}, {"start": 435.23999999999995, "end": 439.28, "text": " the results show that poet produces a diverse range of sophisticated"}, {"start": 439.28, "end": 443.53999999999996, "text": " behaviors that solve a wide range of environmental challenges many of which"}, {"start": 443.53999999999996, "end": 447.55999999999995, "text": " cannot be solved so that this is an important point here as they'll be"}, {"start": 447.56, "end": 453.56, "text": " comparing poet with these more direct types of solutions so they cannot be"}, {"start": 453.56, "end": 458.48, "text": " solved by direct optimization alone or even through a direct path curriculum"}, {"start": 458.48, "end": 462.86, "text": " building control algorithm we'll see what that means in a second introduced to"}, {"start": 462.86, "end": 466.72, "text": " highlight the critical role of open-endedness in solving ambiguous"}, {"start": 466.72, "end": 471.86, "text": " challenges so as you probably noticed by the very description of this framework I"}, {"start": 471.86, "end": 475.96, "text": " did not specify any particular class optimization algorithms that is going"}, {"start": 475.96, "end": 481.12, "text": " to be used to 
adapt to to adapt the agents for their corresponding"}, {"start": 481.12, "end": 484.56, "text": " environments what they use in practice here is the evolutionary strategy"}, {"start": 484.56, "end": 490.4, "text": " algorithm so let me start by explaining what those what that class of algorithm"}, {"start": 490.4, "end": 494.44, "text": " is all about for those of you who are not familiar with evolutionary algorithms"}, {"start": 494.44, "end": 498.4, "text": " so the idea is actually fairly simple what do you have is the following so"}, {"start": 498.4, "end": 504.03999999999996, "text": " let's imagine we represent our agent as an as a simple MLP so multi-layer"}, {"start": 504.04, "end": 508.88, "text": " perceptron network so let's imagine we have like some input observation and"}, {"start": 508.88, "end": 513.52, "text": " then we have the agent represented as this MLP with multiple hidden layers and"}, {"start": 513.52, "end": 517.96, "text": " then we have the final linear layer and now it comes the output like for example"}, {"start": 517.96, "end": 522.2, "text": " this is the action space of the agent and as you can imagine this contains"}, {"start": 522.2, "end": 526.6800000000001, "text": " bunch of learning weights and we can kind of flatten those out into a vector"}, {"start": 526.68, "end": 534.8399999999999, "text": " and let's denote that vector as s theta so this is a vector theta and this theta"}, {"start": 534.8399999999999, "end": 538.1999999999999, "text": " is obviously a multi-dimensional vector depending on how many parameters you've"}, {"start": 538.1999999999999, "end": 543.0799999999999, "text": " got and we can represent it as a point in space like as a way of"}, {"start": 543.0799999999999, "end": 548.4, "text": " visualizing that that theta vector we can represent it as a point in 2d space"}, {"start": 548.4, "end": 551.0799999999999, "text": " so obviously because it's multi-dimensional would have to do some"}, {"start": 551.0799999999999, "end": 555.4399999999999, "text": " type of a dimensionality reduction method in order to plot it like this but"}, {"start": 555.44, "end": 559.8000000000001, "text": " you like bear with me this is this is the visualizations I'm going to use so"}, {"start": 559.8000000000001, "end": 566.84, "text": " now what evolutionary algorithms do is in their most common form is you fit a"}, {"start": 566.84, "end": 571.6400000000001, "text": " Gaussian like on top of this of this theta vector so this is let me just"}, {"start": 571.6400000000001, "end": 576.5600000000001, "text": " denote it again this is theta vector this is where we start from and then you"}, {"start": 576.5600000000001, "end": 583.0, "text": " fit a Gaussian on top of this vector so basically what it means you take it as a"}, {"start": 583.0, "end": 586.56, "text": " mean you take this vector as the mean of the Gaussian and you have some some"}, {"start": 586.56, "end": 591.52, "text": " some basically covariance matrix it's usually just isotropic like Gaussian"}, {"start": 591.52, "end": 597.44, "text": " and so what the idea here is is now you're going to sample randomly sample"}, {"start": 597.44, "end": 601.44, "text": " novel agents so according to the Gaussian distribution so because it's"}, {"start": 601.44, "end": 605.08, "text": " Gaussian is more dense here you're gonna have more agents here you'll have some"}, {"start": 605.08, "end": 610.2, "text": " agents here you'll maybe have one agent here and so on and now what you do is"}, {"start": 610.2, 
"end": 616.08, "text": " you take all of those like novel random agents and you evaluate them at the"}, {"start": 616.08, "end": 620.2800000000001, "text": " environment at hand and you get the associated fitness you get the"}, {"start": 620.2800000000001, "end": 625.5200000000001, "text": " associated score of that agent in this particular environment at hand so let's"}, {"start": 625.5200000000001, "end": 632.88, "text": " now imagine that for example this agent here got a score of 150 this one got a"}, {"start": 632.88, "end": 639.72, "text": " score of let's say 60 this one got a score of like 35 this one like 40 or"}, {"start": 639.72, "end": 645.2, "text": " like I don't know like five or whatnot and you can imagine immediately see"}, {"start": 645.2, "end": 650.5600000000001, "text": " that probably this part of environment is for some reason so the weights that"}, {"start": 650.5600000000001, "end": 654.0400000000001, "text": " are in this part of the space are kind of more valuable for this particular"}, {"start": 654.0400000000001, "end": 658.76, "text": " environment and so you intuitively want to move your vector your theta vector"}, {"start": 658.76, "end": 664.36, "text": " in this direction and so how you do that practically in the in these class of"}, {"start": 664.36, "end": 667.72, "text": " evolutionary algorithms is the following so you have associated"}, {"start": 667.72, "end": 672.6800000000001, "text": " perturbation vector that you used in order to get to this particular agent"}, {"start": 672.6800000000001, "end": 679.08, "text": " from this theta initial state of set of weights so and you have like"}, {"start": 679.08, "end": 683.52, "text": " corresponding perturbation vector for all of these novel agents here in the"}, {"start": 683.52, "end": 688.88, "text": " environment and so what you'll do is a simple in practice you just do a simple"}, {"start": 688.88, "end": 693.0, "text": " weighted average of all of these vectors and you can imagine that after you do"}, {"start": 693.0, "end": 697.52, "text": " that so after you multiply this vector here with with 150 and this vector here"}, {"start": 697.52, "end": 701.84, "text": " with 60 and you do the same for these other vectors you'll end up with a"}, {"start": 701.84, "end": 708.16, "text": " resultant vector that maybe goes that maybe looks like this maybe you end up"}, {"start": 708.16, "end": 712.96, "text": " going in this direction here and so you move and because you have a learning"}, {"start": 712.96, "end": 716.96, "text": " step you'll you won't move too much you'll maybe move like here and so this"}, {"start": 716.96, "end": 720.84, "text": " is your novel theta so this is theta prime okay again re-trating what we've"}, {"start": 720.84, "end": 726.24, "text": " done here is we've we've taken the initial set of weights theta we've fit"}, {"start": 726.24, "end": 731.32, "text": " a Gaussian on top of that theta vector and what I mean by that is basically"}, {"start": 731.32, "end": 734.5600000000001, "text": " take the theta as the mean of the Gaussian and you somehow pick the"}, {"start": 734.5600000000001, "end": 738.84, "text": " covariance it's usually isotropic Gaussian then you sample n random points"}, {"start": 738.84, "end": 743.0, "text": " from that Gaussian and what you do by doing that is you sample n novel agents"}, {"start": 743.0, "end": 748.2, "text": " and then you evaluate those agents on the environment at hand you get some"}, {"start": 748.2, "end": 754.44, "text": " associated 
like scores and you just create this weighted sum of all of these"}, {"start": 754.44, "end": 758.6, "text": " perturbation vector and when I when I say a perturbation vector what I mean is"}, {"start": 758.6, "end": 764.24, "text": " basically this vector here so what it took to get from theta so from from"}, {"start": 764.24, "end": 772.32, "text": " from the original like vector here to get to this novel candidate agent so"}, {"start": 772.32, "end": 776.9200000000001, "text": " that's the high level understanding of what's going on now let me show you how"}, {"start": 776.9200000000001, "end": 780.4200000000001, "text": " the formulas look so that you are not confused next time when you see the"}, {"start": 780.4200000000001, "end": 784.1600000000001, "text": " formula I think it's always good to have this type of grounding and"}, {"start": 784.16, "end": 787.76, "text": " understand like visual understanding of what's going on or like just the"}, {"start": 787.76, "end": 792.8399999999999, "text": " semantics behind the notation okay let's first start and understand this this"}, {"start": 792.8399999999999, "end": 797.68, "text": " more general formula here where instead of a modeling distribution as a"}, {"start": 797.68, "end": 801.9, "text": " Gaussian you you kind of you parameterize your distribution using the"}, {"start": 801.9, "end": 806.68, "text": " the weights theta so those are the the weights you start from so this vector"}, {"start": 806.68, "end": 812.92, "text": " here and you somehow model that probability distribution and so the idea"}, {"start": 812.92, "end": 818.16, "text": " is so this this tells you the same story I just told you about so this e theta of"}, {"start": 818.16, "end": 824.1999999999999, "text": " I just means how well does this theta I agent perform on this particular"}, {"start": 824.1999999999999, "end": 828.04, "text": " environment e so this will be a score like for example hundred fifty as we saw"}, {"start": 828.04, "end": 833.92, "text": " before so let me just connect this with the drawing here so this particular set"}, {"start": 833.92, "end": 840.0, "text": " of weights here so this novel candidate vector is going to be theta I and E of"}, {"start": 840.0, "end": 844.68, "text": " theta I so that the performance of that agent on this particular environment is"}, {"start": 844.68, "end": 849.44, "text": " going to be hundred fifty as we saw here okay so that's how those two connect"}, {"start": 849.44, "end": 855.68, "text": " next up let's imagine that under the current probability distribution that's"}, {"start": 855.68, "end": 866.44, "text": " parameterized by theta this agent theta I has the probability of 0.4 and because"}, {"start": 866.44, "end": 870.48, "text": " it has such a high score like intuitively we would love to increase"}, {"start": 870.48, "end": 874.2, "text": " this probability the next time we sample from this probability distribution so"}, {"start": 874.2, "end": 880.72, "text": " optimally when we do a single step of evolutionary strategy algorithm will"}, {"start": 880.72, "end": 885.7600000000001, "text": " boost up this one to maybe 0.6 so we'll have more probability put on to on top"}, {"start": 885.7600000000001, "end": 890.6, "text": " of this particular set of weights which makes intuitively sense so again this"}, {"start": 890.6, "end": 896.9200000000001, "text": " term here tells you what's the what's the fastest way to increase the"}, {"start": 896.9200000000001, "end": 903.0, "text": " probability of 
this particular theta I so that's what gradient does it shows you"}, {"start": 903.0, "end": 907.84, "text": " the direction of the steepest ascent so that we can maximize this this term here"}, {"start": 907.84, "end": 912.6, "text": " and so it's weighted by the the score that that particular agent achieves and"}, {"start": 912.6, "end": 918.1600000000001, "text": " then you just do a sum across all of the end samples so and samples are are those"}, {"start": 918.16, "end": 924.76, "text": " random the random samples we take given the the the initial set of weights theta"}, {"start": 924.76, "end": 929.56, "text": " here okay that's how those two connect so that's just a more general notion of"}, {"start": 929.56, "end": 936.52, "text": " this whole evolutionary strategy setup if we assume now that the probability"}, {"start": 936.52, "end": 943.48, "text": " distribution is a Gaussian this is how this expression like transforms you can"}, {"start": 943.48, "end": 948.28, "text": " see again we have a sum across and samples this time we find how"}, {"start": 948.28, "end": 952.84, "text": " performant are these agents so this here is now going to be theta I let me just"}, {"start": 952.84, "end": 958.8000000000001, "text": " change the color so this here is a new set of weights called theta I and we"}, {"start": 958.8000000000001, "end": 964.54, "text": " just have to so epsilon subscript I is just a sample from the standard Gaussian"}, {"start": 964.54, "end": 968.5600000000001, "text": " distribution so this is basically sampling from your Gaussian and you"}, {"start": 968.56, "end": 973.76, "text": " then evaluate the that novel model on the on the environment you get the score"}, {"start": 973.76, "end": 978.56, "text": " and so as you can see here you just use those as the weights for these resultant"}, {"start": 978.56, "end": 983.8, "text": " vectors so those resultant vectors are expect are exactly these dotted lines I"}, {"start": 983.8, "end": 989.0799999999999, "text": " showed here and so yeah basically this equation is the formulaic way of"}, {"start": 989.0799999999999, "end": 993.64, "text": " describing how evolutionary search breaks okay hopefully that helped you"}, {"start": 993.64, "end": 999.24, "text": " better understand the evolutionary strategy algorithm as a final note this"}, {"start": 999.24, "end": 1008.96, "text": " theta plus sigma times epsilon sub I can be denoted as this so it's a it's a"}, {"start": 1008.96, "end": 1017.08, "text": " Gaussian with mean equal to theta and with covariance matrix equal to sigma"}, {"start": 1017.08, "end": 1021.0, "text": " identity matrix so you're basically sampling your agent from this"}, {"start": 1021.0, "end": 1026.52, "text": " distribution here you evaluate and find the score on this environment e and then"}, {"start": 1026.52, "end": 1031.16, "text": " you use that as the weight to multiply these perturbation vectors epsilon sub"}, {"start": 1031.16, "end": 1036.08, "text": " eyes okay enough ramble about evolutionary strategy let's continue and"}, {"start": 1036.08, "end": 1043.12, "text": " explain understand the poet algorithm itself okay let's see how it proceeds so"}, {"start": 1043.12, "end": 1049.96, "text": " you initialize this list of environment agents as as an empty list you add the"}, {"start": 1049.96, "end": 1054.6000000000001, "text": " initial environment and you add the initial theta so that's just gonna be a"}, {"start": 1054.6000000000001, "end": 1059.64, "text": " random vector and the initial 
environment is gonna be basically this"}, {"start": 1059.64, "end": 1065.44, "text": " thing here so they've been using this by by pedal Walker environment except that"}, {"start": 1065.44, "end": 1069.0, "text": " this one is already fairly complex let me show you how the initial one will"}, {"start": 1069.0, "end": 1072.3600000000001, "text": " probably look like so it will look something like this it's completely flat"}, {"start": 1072.3600000000001, "end": 1076.24, "text": " you don't even have the gap here you you will have like nothing you'll just have"}, {"start": 1076.24, "end": 1079.08, "text": " a flat environment that's gonna be the initial environment of the poet"}, {"start": 1079.08, "end": 1086.1999999999998, "text": " algorithm okay so let's go back here and then once you add that initial pair of"}, {"start": 1086.1999999999998, "end": 1092.24, "text": " environment environment agent pair so then you do basically capital T number"}, {"start": 1092.24, "end": 1096.6, "text": " of iterations and in each of those iterations you do the following so every"}, {"start": 1096.6, "end": 1103.1399999999999, "text": " now and then you are going to mutate your environments and so as you can see"}, {"start": 1103.1399999999999, "end": 1108.04, "text": " here so T is the current iteration and every now and then which is dictated by"}, {"start": 1108.04, "end": 1113.24, "text": " the end mutate you're going to execute this so what does it mean to mutate an"}, {"start": 1113.24, "end": 1119.72, "text": " environment such as the bipedal by pedal Walker environment we saw here and to"}, {"start": 1119.72, "end": 1124.32, "text": " understand that let me let me get to this table here so basically each"}, {"start": 1124.32, "end": 1130.68, "text": " environment is encoded as a as a as a vector and you can treat that vector as"}, {"start": 1130.68, "end": 1135.1599999999999, "text": " a like a gene of the environment and let's understand the format of that of"}, {"start": 1135.16, "end": 1141.76, "text": " that gene so it looks like something like this so here is our vector that"}, {"start": 1141.76, "end": 1147.8000000000002, "text": " encodes the environment and the first two coordinates here are gonna encode"}, {"start": 1147.8000000000002, "end": 1152.48, "text": " the stump the stump height as you can see here the initial values are gonna be"}, {"start": 1152.48, "end": 1158.96, "text": " like zero and then 0.4 and that's going to determine the stump heights"}, {"start": 1158.96, "end": 1164.76, "text": " throughout the environment so just to clarify what this here means is that"}, {"start": 1164.76, "end": 1171.08, "text": " your stump height so let me draw that as this so here you have like stump height"}, {"start": 1171.08, "end": 1178.04, "text": " here so stump height and you're gonna have the stump height be between 0 and"}, {"start": 1178.04, "end": 1183.56, "text": " 0.4 and you're just going to have like a uniform distribution so that means all"}, {"start": 1183.56, "end": 1188.0, "text": " of these heights here are gonna be of equal probability and throughout the the"}, {"start": 1188.0, "end": 1191.12, "text": " as the environment progresses you're gonna be sampling from this distribution"}, {"start": 1191.12, "end": 1196.6399999999999, "text": " okay next up you have gap width so the next two coordinates are gonna be gap"}, {"start": 1196.6399999999999, "end": 1200.4399999999998, "text": " width and then the next two are gonna be step width and then you're gonna have"}, 
{"start": 1200.4399999999998, "end": 1204.84, "text": " the step numbers and the roughness let me just kind of quickly explain the"}, {"start": 1204.84, "end": 1212.1999999999998, "text": " semantics behind these so gap width is I guess fairly clear so that's this thing"}, {"start": 1212.1999999999998, "end": 1219.84, "text": " here so this is your gap width this is the stump height then what else they"}, {"start": 1219.84, "end": 1226.72, "text": " have let me just check so we have step height and number of steps so let me let"}, {"start": 1226.72, "end": 1230.6799999999998, "text": " me see what I can find an environment with steps I think there is one here so"}, {"start": 1230.6799999999998, "end": 1234.72, "text": " you can see here so this is here we have like six steps and you can see what this"}, {"start": 1234.72, "end": 1238.9599999999998, "text": " is the step height so all of these various different parameters that"}, {"start": 1238.9599999999998, "end": 1245.1999999999998, "text": " describe the environment are encoded in this in this vector here okay so an"}, {"start": 1245.2, "end": 1250.64, "text": " important detail here is this mutation step row which tells you how are you"}, {"start": 1250.64, "end": 1254.3600000000001, "text": " going to with each duration which each mutation step how are you going to"}, {"start": 1254.3600000000001, "end": 1258.72, "text": " mutate these these parameters that describe the environment so for this"}, {"start": 1258.72, "end": 1265.48, "text": " stump height you have 0.2 which means you're going to every now and then with"}, {"start": 1265.48, "end": 1273.66, "text": " some probability boost up the 0 to like 0.2 and this 0.4 to 0.6 and now you you"}, {"start": 1273.66, "end": 1276.72, "text": " can imagine you now have a much harder environment because all of the stumps"}, {"start": 1276.72, "end": 1282.3200000000002, "text": " are going to be between like higher on average than than than before and that's"}, {"start": 1282.3200000000002, "end": 1286.88, "text": " how you're progressively and randomly mutating and making environments harder"}, {"start": 1286.88, "end": 1290.3200000000002, "text": " you can do that for all of the parameters that describe the environment"}, {"start": 1290.3200000000002, "end": 1296.4, "text": " here so all across all of these five columns dimensions cool now that we"}, {"start": 1296.4, "end": 1302.92, "text": " understand the environment let me first before getting back to the algorithm"}, {"start": 1302.92, "end": 1309.8000000000002, "text": " explain how the actual agent is constructed so you can see here how the"}, {"start": 1309.8000000000002, "end": 1314.76, "text": " agent looks like and you basically have four control signals you're controlling"}, {"start": 1314.76, "end": 1320.16, "text": " the hip torque you're controlling like so here you're controlling the knee"}, {"start": 1320.16, "end": 1325.8400000000001, "text": " torque you're controlling the other hip torque and the other knee torque"}, {"start": 1326.16, "end": 1331.2, "text": " that's the output so as for the input I think they mentioned it somewhere here"}, {"start": 1331.2, "end": 1336.92, "text": " let me just if I can zoom out they mentioned here so the agent has 10"}, {"start": 1336.92, "end": 1342.32, "text": " lighter range finders for perceiving obstacles and terrain whose measurements"}, {"start": 1342.32, "end": 1347.6000000000001, "text": " are included in the state space in other 14 state variables include hull angle"}, 
{"start": 1347.6000000000001, "end": 1352.3600000000001, "text": " hull angular velocity horizontal and vertical speeds positions of joins blah"}, {"start": 1352.3600000000001, "end": 1356.72, "text": " blah blah blah blah okay so all in all you have its model the following way you"}, {"start": 1356.72, "end": 1366.28, "text": " have 24 dimensional input observation or state then you have an MLP here a couple"}, {"start": 1366.28, "end": 1371.8, "text": " of hidden layers and you end up with four dimensional output vector so this"}, {"start": 1371.8, "end": 1376.48, "text": " is for this is 24 this is just an MLP and that's everything you need to know"}, {"start": 1376.48, "end": 1381.1200000000001, "text": " about the agent the final detail that's important is how do you define the"}, {"start": 1381.1200000000001, "end": 1385.64, "text": " fitness how do you find the score and here is the formulaic description of the"}, {"start": 1385.64, "end": 1392.8000000000002, "text": " reward you can see that if the robot falls then you like the robot gets minus"}, {"start": 1392.8000000000002, "end": 1397.5200000000002, "text": " hundred reward so negative reward you can see here that we are encouraging the"}, {"start": 1397.5200000000002, "end": 1405.0400000000002, "text": " robot to move like rightward so here Delta X because the X axis basically as"}, {"start": 1405.0400000000002, "end": 1409.2800000000002, "text": " you can assume goes from left to right so you want to move as much as possible"}, {"start": 1409.28, "end": 1415.96, "text": " so that's the Delta as much as possible to the right then it's penalized for the"}, {"start": 1415.96, "end": 1419.96, "text": " hull angle whereas the hull angle is this thing so basically you want to make"}, {"start": 1419.96, "end": 1425.2, "text": " sure let me zoom in a bit more so you want to make sure that this particular"}, {"start": 1425.2, "end": 1432.32, "text": " angle here so this angle here let me know it does maybe alpha you want to"}, {"start": 1432.32, "end": 1436.8, "text": " minimize it and that means you want to have this this hull being as horizontal"}, {"start": 1436.8, "end": 1442.52, "text": " as possible and the final detail they have here is they are also penalizing"}, {"start": 1442.52, "end": 1451.6399999999999, "text": " for the applied torque so if we were to explain what this means in human"}, {"start": 1451.6399999999999, "end": 1455.86, "text": " language is it's basically like a like a like a formal description of this"}, {"start": 1455.86, "end": 1462.2, "text": " informal statement please don't fall move to the right as much as possible"}, {"start": 1462.2, "end": 1467.8, "text": " try and keep your hull as horizontal as possible and try to minimize the energy"}, {"start": 1467.8, "end": 1471.96, "text": " consumption while doing all of that so that would be like an informal"}, {"start": 1471.96, "end": 1477.5, "text": " description of what this this this reward is encoding cool now that we"}, {"start": 1477.5, "end": 1483.96, "text": " understand all of that let's get back to the algorithm put main loop so we got to"}, {"start": 1483.96, "end": 1489.44, "text": " this line six here we're somehow mutating the environments we saw the"}, {"start": 1489.44, "end": 1494.16, "text": " mechanism roughly there is a pseudo algorithm in the appendix I'm going to"}, {"start": 1494.16, "end": 1499.16, "text": " show you in a minute but like for now just imagine you have like a set of"}, {"start": 1499.16, "end": 1502.56, "text": " 
vectors which are encoding for particular environments you can you can"}, {"start": 1502.56, "end": 1507.3200000000002, "text": " kind of imagine you mutate those vectors according to the rules we saw and then"}, {"start": 1507.3200000000002, "end": 1511.88, "text": " you get novel novel environments okay now that we have that environment a"}, {"start": 1511.88, "end": 1516.0800000000002, "text": " novel set of mutated environments we just iterate through those environments"}, {"start": 1516.08, "end": 1521.1999999999998, "text": " and we optimize the agents here as you can see in this step here we're going to"}, {"start": 1521.1999999999998, "end": 1526.08, "text": " take the the environment agent pair we're gonna do a single evolutionary"}, {"start": 1526.08, "end": 1530.6599999999999, "text": " strategy step so this as you remember produces the resultant vector so we just"}, {"start": 1530.6599999999999, "end": 1537.3999999999999, "text": " add that resultant vector to the theta that was used as the mean of the Gaussian"}, {"start": 1537.3999999999999, "end": 1541.9199999999998, "text": " and we get the novel the novel set of weights for that agent so that's how the"}, {"start": 1541.92, "end": 1547.0800000000002, "text": " agents are optimized here obviously you can see that we have like the learning"}, {"start": 1547.0800000000002, "end": 1552.76, "text": " rate alpha here we have Sigma the standard deviation for the Gaussian at"}, {"start": 1552.76, "end": 1558.72, "text": " hand and all of that combined goes into the input of this ES step and that's how"}, {"start": 1558.72, "end": 1565.16, "text": " we update the weights okay finally and I mentioned this every now and then we're"}, {"start": 1565.16, "end": 1569.8400000000001, "text": " going to so every now and then is determined by this end transfer we're"}, {"start": 1569.84, "end": 1574.04, "text": " going to evaluate candidates as you can see here we take all of the agents"}, {"start": 1574.04, "end": 1581.84, "text": " except for the theta t plus 1 M except for the Mth agent where M is the current"}, {"start": 1581.84, "end": 1587.52, "text": " agent in this loop here and we evaluate all of these agents on the Mth"}, {"start": 1587.52, "end": 1592.8, "text": " environment and we find the best performer for that environment and if"}, {"start": 1592.8, "end": 1597.6399999999999, "text": " the score that the best performer achieved is better than the than the"}, {"start": 1597.64, "end": 1603.6000000000001, "text": " current agents then the niche champion then you replace you basically overwrite"}, {"start": 1603.6000000000001, "end": 1609.44, "text": " so basically bye bye the old agent and here comes the new one which was the"}, {"start": 1609.44, "end": 1613.96, "text": " best agent which is the most adapt agent for this environment even though it came"}, {"start": 1613.96, "end": 1616.6000000000001, "text": " from some different environment so that's the this interesting cross"}, {"start": 1616.6000000000001, "end": 1620.5200000000002, "text": " pollination that's going on okay so on a high level the Poet algorithm is fairly"}, {"start": 1620.52, "end": 1627.44, "text": " simple you keep this collection of environment agent pairs you keep"}, {"start": 1627.44, "end": 1632.0, "text": " mutating the environments you keep optimizing the corresponding agents for"}, {"start": 1632.0, "end": 1636.24, "text": " those particular environments and every now and then you do this cross pollination"}, {"start": 1636.24, "end": 1641.04, 
"text": " step whereby for each of the environments you're going to find the"}, {"start": 1641.04, "end": 1644.72, "text": " best agent overall which may come from some different environment and replace"}, {"start": 1644.72, "end": 1650.04, "text": " the current niche champion with that one if that happens okay so that's that's"}, {"start": 1650.04, "end": 1657.12, "text": " everything and now let me just go to appendix and show you the mutation"}, {"start": 1657.12, "end": 1664.3999999999999, "text": " pseudo algorithm in a bit more detail so here we are let's see how it works so we"}, {"start": 1664.3999999999999, "end": 1668.92, "text": " have some list already we iterate through the list of agent environment"}, {"start": 1668.92, "end": 1674.44, "text": " pairs and we see if they are eligible to reproduce and what this means is they"}, {"start": 1674.44, "end": 1679.68, "text": " basically evaluate the agent M on this environment M and they see what the"}, {"start": 1679.68, "end": 1685.2, "text": " score is and if the score is like if it's not in this in this in this range"}, {"start": 1685.2, "end": 1691.44, "text": " between like for example 50 and 200 they discard that agent for that particular"}, {"start": 1691.44, "end": 1696.64, "text": " environment I mean both the agent and the environment and why is that well"}, {"start": 1696.64, "end": 1702.44, "text": " because that means that if you're outside of this range that means that"}, {"start": 1702.44, "end": 1706.0, "text": " you're either here which means you achieve super low score which means that"}, {"start": 1706.0, "end": 1710.32, "text": " the environment is probably very hard or it means that you're here which means"}, {"start": 1710.32, "end": 1715.88, "text": " it's you have a very high score and the environment is very very easy and so you"}, {"start": 1715.88, "end": 1721.2, "text": " want to kind of filter out those either too hard or too simple of environment"}, {"start": 1721.2, "end": 1725.36, "text": " after we've done that we have this parent list so basically all of the"}, {"start": 1725.36, "end": 1729.8, "text": " eligible environment agent pairs are there then we do the environment to"}, {"start": 1729.8, "end": 1733.88, "text": " produce so this step here is gonna do these random mutations you're gonna take"}, {"start": 1733.88, "end": 1736.96, "text": " the vector that describes the environment you're gonna do the"}, {"start": 1736.96, "end": 1742.0400000000002, "text": " mutations according to certain rules and you end up with this child list then you"}, {"start": 1742.0400000000002, "end": 1746.3200000000002, "text": " again just check this minimal condition IE you check whether they are in this"}, {"start": 1746.3200000000002, "end": 1751.5600000000002, "text": " range you discard the ones which are not after all of that is done there is this"}, {"start": 1751.5600000000002, "end": 1755.68, "text": " line 13 where there is something interesting going on so they rank by"}, {"start": 1755.68, "end": 1763.38, "text": " novelty and how they define novelty is using L2 metric basically they take the"}, {"start": 1763.38, "end": 1768.8400000000001, "text": " encoding vector and they compare it with all of the other environment vectors as"}, {"start": 1768.8400000000001, "end": 1773.2800000000002, "text": " well as some of the older like they have this archive of older environments and"}, {"start": 1773.2800000000002, "end": 1779.4, "text": " the further away it is from all of the other environments as defined 
by L2 the"}, {"start": 1779.4, "end": 1783.5200000000002, "text": " more novel it is so that's how they define the concept of novelty it's it's"}, {"start": 1783.5200000000002, "end": 1786.8000000000002, "text": " it's very different compared to the previous environments we've seen and"}, {"start": 1786.8000000000002, "end": 1793.2, "text": " they basically sort this list according to to not according to novelty and once"}, {"start": 1793.2, "end": 1798.0, "text": " we've done all this they also here they have this evaluation of candidates so"}, {"start": 1798.0, "end": 1803.04, "text": " they find the best agent for that particular environment and they and if"}, {"start": 1803.04, "end": 1809.6000000000001, "text": " this minimal condition is satisfied then they add the environment agent pair to"}, {"start": 1809.6000000000001, "end": 1813.92, "text": " the list so what can potentially happen here again is that some of the other"}, {"start": 1813.92, "end": 1820.44, "text": " agents is better at the environment that can this at this child environment and"}, {"start": 1820.44, "end": 1826.96, "text": " then you're just gonna overwrite the agent with this better agent here so"}, {"start": 1826.96, "end": 1831.64, "text": " yeah there is a it's kind of intricate but you got the gist of it already you"}, {"start": 1831.64, "end": 1836.0800000000002, "text": " have before I showed you this this pseudo algorithm okay let's get back to"}, {"start": 1836.0800000000002, "end": 1843.44, "text": " the main paper and see where we stopped okay let's get to the results the first"}, {"start": 1843.44, "end": 1848.8, "text": " comparison they do is they compare poet with the evolutionary strategy from"}, {"start": 1848.8, "end": 1853.84, "text": " scratch which means you find some fairly complicated environment as the one you"}, {"start": 1853.84, "end": 1860.8, "text": " can see here and you either train the agent using poet or you directly train"}, {"start": 1860.8, "end": 1864.6, "text": " evolutionary search so you start from a random agent and you try by random"}, {"start": 1864.6, "end": 1868.36, "text": " perturbations to find the agent that performs well for this particular"}, {"start": 1868.36, "end": 1872.86, "text": " environment and it turns out that as soon as the environment gets like"}, {"start": 1872.86, "end": 1877.44, "text": " decently complicated the evolutionary strategy from scratch starts failing and"}, {"start": 1877.44, "end": 1881.96, "text": " they mention here some of the failure cases so they say here in effect to"}, {"start": 1881.96, "end": 1886.52, "text": " obtain positive scores these agents learn to move forward but also to freeze"}, {"start": 1886.52, "end": 1890.56, "text": " before challenging obstacles which helped them avoid the penalty of minus"}, {"start": 1890.56, "end": 1895.4, "text": " hundred for falling down and this behavior is a local optimum the agents"}, {"start": 1895.4, "end": 1899.4, "text": " could in principle learn to overcome the obstacles but instead converge on"}, {"start": 1899.4, "end": 1904.2, "text": " playing it safe by not moving it's a very interesting behavior like a like a"}, {"start": 1904.2, "end": 1910.6000000000001, "text": " like a risk adversity like being manifested here in these simple agents"}, {"start": 1910.6000000000001, "end": 1915.24, "text": " in the most simple of environments and yeah I guess we see that in the in the"}, {"start": 1915.24, "end": 1920.52, "text": " real happening in the real world as well so here you 
can see the results you can"}, {"start": 1920.52, "end": 1925.6000000000001, "text": " see that that that the ES kind of gets stuck here and stays in the same place"}, {"start": 1925.6000000000001, "end": 1929.6000000000001, "text": " because otherwise it will fall down and get a minus hundred reward which is"}, {"start": 1929.6, "end": 1934.9199999999998, "text": " suboptimal whereas poet can actually yeah can actually kind of jump across"}, {"start": 1934.9199999999998, "end": 1940.1599999999999, "text": " these these gaps and and and continue on doing its job similarly for these other"}, {"start": 1940.1599999999999, "end": 1945.36, "text": " types of environments you can see here as soon as a big obstacle comes here the"}, {"start": 1945.36, "end": 1951.32, "text": " agent gets stuck whereas the poet keeps on going to the right same here it just"}, {"start": 1951.32, "end": 1955.28, "text": " stops in front of a big obstacle and does not move and that's the local"}, {"start": 1955.28, "end": 1962.28, "text": " optimum that they were referring to okay let's continue here you can see that"}, {"start": 1962.28, "end": 1967.44, "text": " poet can solve even some of very difficult environments and that's that's"}, {"start": 1967.44, "end": 1972.76, "text": " very very cool results okay here is just the quantitative visualization of how"}, {"start": 1972.76, "end": 1978.08, "text": " like of this huge gap between poet performance and evolutionary search by"}, {"start": 1978.08, "end": 1981.2, "text": " the way I'm butchering the names here like I'm not sure whether you can use"}, {"start": 1981.2, "end": 1987.0, "text": " interchangeably strategy search and like just evolutionary system because I think"}, {"start": 1987.0, "end": 1991.32, "text": " opening I had some evolutionary strategy paper so it may be the name may be"}, {"start": 1991.32, "end": 1997.4, "text": " overloaded but I explained the semantics between behind the evolutionary search"}, {"start": 1997.4, "end": 2003.0800000000002, "text": " like a couple minutes ago so hopefully that won't be confusing okay now we've"}, {"start": 2003.0800000000002, "end": 2008.88, "text": " seen the direct comparison between poet and evolutionary search now you can do"}, {"start": 2008.88, "end": 2015.1200000000001, "text": " something a bit more smart and that's doing a curriculum based learning but"}, {"start": 2015.1200000000001, "end": 2019.68, "text": " like not using poet which is highly open-ended but doing a more direct path"}, {"start": 2019.68, "end": 2023.88, "text": " curriculum learning let's see what what that means so a natural question then is"}, {"start": 2023.88, "end": 2027.7600000000002, "text": " whether the environments created and solved by poet can also be solved by an"}, {"start": 2027.7600000000002, "end": 2033.16, "text": " explicit direct path curriculum building control algorithm the sequence of"}, {"start": 2033.16, "end": 2036.48, "text": " environment starts with an environment of only flat ground without any"}, {"start": 2036.48, "end": 2040.2, "text": " obstacles which is easy enough for any randomly initialized agent to quickly"}, {"start": 2040.2, "end": 2044.64, "text": " learn to complete then each of the subsequent environments are constructed"}, {"start": 2044.64, "end": 2048.64, "text": " by slightly modifying the current environment okay let me try and explain"}, {"start": 2048.64, "end": 2053.52, "text": " what that means and what's the difference in approach so on one hand"}, {"start": 2053.52, 
"end": 2058.88, "text": " you have poet so you start with some random environment you start with the"}, {"start": 2058.88, "end": 2062.0, "text": " easy environment and then you randomly start protruding that environment so"}, {"start": 2062.0, "end": 2064.84, "text": " it's environment number one you do some mutations you get to environment number"}, {"start": 2064.84, "end": 2070.1200000000003, "text": " two and then you do some like mutations you get to three and then you get to"}, {"start": 2070.1200000000003, "end": 2075.6400000000003, "text": " four and then etc etc and you end up with environment after n steps which is"}, {"start": 2075.6400000000003, "end": 2080.76, "text": " somewhere here okay so now what they propose is let's start from this very"}, {"start": 2080.76, "end": 2086.2000000000003, "text": " same so so this is the original environment which is the flat ground"}, {"start": 2086.2000000000003, "end": 2089.88, "text": " everything super simple let me just duplicate it here for the sake of easier"}, {"start": 2089.88, "end": 2093.52, "text": " visualization so we have the same environment so these two here are the"}, {"start": 2093.52, "end": 2097.0, "text": " same so that was that's the simplest environment possible now instead of"}, {"start": 2097.0, "end": 2102.08, "text": " having this highly nonlinear mutation path towards M why not just go directly"}, {"start": 2102.08, "end": 2108.8, "text": " towards N and when I say directly as you know the environment described by a"}, {"start": 2108.8, "end": 2113.12, "text": " vector and you can literally linearly interpolate between these two vectors"}, {"start": 2113.12, "end": 2118.8, "text": " and create a set of environments that goes from one to N but following a much"}, {"start": 2118.8, "end": 2123.12, "text": " more direct path and they mentioned they explained here how exactly they do that"}, {"start": 2123.12, "end": 2127.44, "text": " they say more specifically to get a new environment each obstacle parameter of"}, {"start": 2127.44, "end": 2131.52, "text": " the current environment has an equal chance of staying the same value or"}, {"start": 2131.52, "end": 2135.92, "text": " increasing by corresponding mutation step value in table one until that"}, {"start": 2135.92, "end": 2141.12, "text": " obstacle parameter reaches that of the target environment so they do introduce"}, {"start": 2141.12, "end": 2147.0, "text": " a slight stochasticity in this process is not just a pure linear interpolation"}, {"start": 2147.0, "end": 2151.72, "text": " but like it's very close to that okay so this is the approach they take and let's"}, {"start": 2151.72, "end": 2159.16, "text": " now see the results they got comparing to this baseline okay here are the"}, {"start": 2159.16, "end": 2165.12, "text": " results these are something called rose plots and you can see that so run one is"}, {"start": 2165.12, "end": 2170.0, "text": " a single run of the poet algorithm and as poet was was running you can imagine"}, {"start": 2170.0, "end": 2175.04, "text": " that the environments were getting harder and harder let me first explain"}, {"start": 2175.04, "end": 2180.64, "text": " to you how you can parse this this this diagram so each of these five axes here"}, {"start": 2180.64, "end": 2185.04, "text": " correspond to one of the five parameters we saw in the table before so that means"}, {"start": 2185.04, "end": 2192.64, "text": " one is coding for like gap one is coding for the stump size one is coding for the"}, {"start": 2192.64, 
"end": 2198.04, "text": " roughness etc etc so the hardest environments are the ones which would be"}, {"start": 2198.04, "end": 2203.14, "text": " touching basically the end points here so these would be the this would be the"}, {"start": 2203.14, "end": 2207.92, "text": " hardest environment possible so this one here this is the hardest environment"}, {"start": 2207.92, "end": 2213.96, "text": " okay so looking at these these plots here we can see that so we have here"}, {"start": 2213.96, "end": 2217.28, "text": " three independent fronts let me first explain what the columns mean and what"}, {"start": 2217.28, "end": 2222.64, "text": " the what the rows mean so we have three independent runs as and the run like the"}, {"start": 2222.64, "end": 2226.04, "text": " the environments were getting harder and harder as the as the runs were"}, {"start": 2226.04, "end": 2233.16, "text": " progressing and then we can notice that the blue pentagons is the direct path"}, {"start": 2233.16, "end": 2239.12, "text": " curriculum and we can see that it's not able to to achieve the same to solve"}, {"start": 2239.12, "end": 2245.2, "text": " the same complexity level environments as poet given the similar amount of"}, {"start": 2245.2, "end": 2251.12, "text": " compute so again connecting this diagram with the thing I draw here"}, {"start": 2251.12, "end": 2257.24, "text": " basically the the the poet does not achieve never gets to end it maybe gets"}, {"start": 2257.24, "end": 2262.2, "text": " to like manages to solve like some of the environments the more the more these"}, {"start": 2262.2, "end": 2265.52, "text": " easier environments but never manages to actually solve these harder"}, {"start": 2265.52, "end": 2269.7999999999997, "text": " environments that's what we can see visually on this on these rose plots"}, {"start": 2269.7999999999997, "end": 2277.6, "text": " here continuing on this is just another representation of that same fact and one"}, {"start": 2277.6, "end": 2281.8799999999997, "text": " thing that's worth mentioning about poet is the following so while poet cannot"}, {"start": 2281.8799999999997, "end": 2286.2799999999997, "text": " guarantee reaching a particular preconceived target these results"}, {"start": 2286.2799999999997, "end": 2290.12, "text": " suggested in some environment spaces poets unique ability to generate"}, {"start": 2290.12, "end": 2294.7599999999998, "text": " multiple different challenges and solve them may actually still provide a more"}, {"start": 2294.7599999999998, "end": 2299.2, "text": " promising path towards solving some preconceived target challenges so what"}, {"start": 2299.2, "end": 2304.52, "text": " this sentence here is telling us is the following so the thing with poet is you"}, {"start": 2304.52, "end": 2309.64, "text": " can solve arbitrarily complicated environments but like it's not like you"}, {"start": 2309.64, "end": 2315.04, "text": " can specify like up in front like you cannot upfront specify this particular"}, {"start": 2315.04, "end": 2319.12, "text": " environment and tell poet hey go and solve this environment which I deeply"}, {"start": 2319.12, "end": 2323.2799999999997, "text": " care about no you can't do that so that's that's that's what this approach"}, {"start": 2323.2799999999997, "end": 2328.68, "text": " here try to do and we show that it failed so the whole point is you have to"}, {"start": 2328.68, "end": 2332.64, "text": " have this open-ended process and you have to be collecting milestones and"}, {"start": 
2332.64, "end": 2336.68, "text": " maybe some of the milestones turn out to be very useful for the problem you care"}, {"start": 2336.68, "end": 2341.88, "text": " about so Kenneth Stanley one of the co-authors of this paper built like a"}, {"start": 2341.88, "end": 2346.52, "text": " whole philosophy behind this this this discovery here he even has a book why"}, {"start": 2346.52, "end": 2353.24, "text": " greatness cannot be planned where he is discussing this this tension that"}, {"start": 2353.24, "end": 2357.44, "text": " exists between having clear objectives and where you want to get to versus"}, {"start": 2357.44, "end": 2364.0, "text": " having this more divergent approach and he's arguing very very very very"}, {"start": 2364.0, "end": 2368.18, "text": " strongly for this more open-ended approach as opposed to these direct"}, {"start": 2368.18, "end": 2374.84, "text": " optimization methods and we saw like and he uses results such as this one to to"}, {"start": 2374.84, "end": 2380.0, "text": " argument why that might be the best thing to do in the long run and that's"}, {"start": 2380.0, "end": 2383.6000000000004, "text": " something that evolution itself is doing it's not like evolution is going towards"}, {"start": 2383.6000000000004, "end": 2391.52, "text": " a particular solution okay let me explain you this very very interesting"}, {"start": 2391.52, "end": 2396.8, "text": " diagram that that kind of shows the power of the poet algorithm so here you"}, {"start": 2396.8, "end": 2400.82, "text": " can see the parent environment and this particular agent at iteration 400"}, {"start": 2400.82, "end": 2406.28, "text": " learned to do this type of like it's dragging its leg very close to the"}, {"start": 2406.28, "end": 2410.36, "text": " ground so it's a very we can see it's a very suboptimal behavior so the agent"}, {"start": 2410.36, "end": 2415.84, "text": " ended up in some local optimum and it cannot get unstuck so what poet does is"}, {"start": 2415.84, "end": 2422.32, "text": " at one point of time this agent is gonna perform outperform some other agent on"}, {"start": 2422.32, "end": 2426.04, "text": " some other environment so that's the transfer we saw multiple times and you"}, {"start": 2426.04, "end": 2431.16, "text": " see that in this environment there are stumps and after training on this so"}, {"start": 2431.16, "end": 2434.56, "text": " after training this same agent so the same set of weights on this novel"}, {"start": 2434.56, "end": 2439.64, "text": " environment the agent learned how to straighten up the legs you can see after"}, {"start": 2439.64, "end": 2446.24, "text": " iteration 1175 the agent is already walking quite like the legs are"}, {"start": 2446.24, "end": 2451.88, "text": " straightened out and then what happens is after you now copy that agent into"}, {"start": 2451.88, "end": 2456.56, "text": " the old environment and you keep on iterating you can see that here in the"}, {"start": 2456.56, "end": 2460.48, "text": " parent environment the agent finally learned how to walk straight so if you"}, {"start": 2460.48, "end": 2465.52, "text": " did not have this this this dynamics where the agent was sent to another"}, {"start": 2465.52, "end": 2468.36, "text": " environment trained there and then returned back to this environment you"}, {"start": 2468.36, "end": 2473.6, "text": " would never get unstuck from this local optimum we see here and so that's that's"}, {"start": 2473.6, "end": 2478.0, "text": " the whole that's a very powerful thing about 
poet I guess the main the main"}, {"start": 2478.0, "end": 2483.28, "text": " con side of poet is you cannot as we saw we cannot directly solve the"}, {"start": 2483.28, "end": 2486.72, "text": " environments we care about we kind of have to be collecting milestones and"}, {"start": 2486.72, "end": 2490.88, "text": " hope for what and hope that some of the milestones make sense for what we care"}, {"start": 2490.88, "end": 2495.12, "text": " about okay let me wrap up this paper by stating the possum some possible"}, {"start": 2495.12, "end": 2501.08, "text": " extensions of poet although obviously the paper was published in 2019 so I'm"}, {"start": 2501.08, "end": 2506.48, "text": " not quite sure whether any of these extensions happened or did not okay so"}, {"start": 2506.48, "end": 2510.32, "text": " they say the following an additional constraint that exists in this initial"}, {"start": 2510.32, "end": 2515.36, "text": " work is that the body of the agent is fixed ultimately limiting the sort of"}, {"start": 2515.36, "end": 2521.2400000000002, "text": " obstacles it can overcome for example how big of a gap it can jump over here"}, {"start": 2521.2400000000002, "end": 2525.88, "text": " too a more powerful and expressive encoding of morphologies including when"}, {"start": 2525.88, "end": 2530.52, "text": " I say morphologies they mean body shapes including those like CPP ends"}, {"start": 2530.52, "end": 2535.72, "text": " they're based on developmental biology could allow us to co-evolve the"}, {"start": 2535.72, "end": 2540.16, "text": " morphology and the body of the agent along with its brain in addition to the"}, {"start": 2540.16, "end": 2544.3599999999997, "text": " environment it is solving I mean this is a very straightforward extension of the"}, {"start": 2544.3599999999997, "end": 2549.8799999999997, "text": " of the research that was presented in this paper you can imagine that not only"}, {"start": 2549.8799999999997, "end": 2556.9199999999996, "text": " can you mutate the environment and then adapt the agents to that environment but"}, {"start": 2556.9199999999996, "end": 2561.2, "text": " you can imagine that you can also have a encoding vector for the morphology of"}, {"start": 2561.2, "end": 2566.3999999999996, "text": " the agent and you could be mutating the more that encoding vector and by doing"}, {"start": 2566.3999999999996, "end": 2569.9199999999996, "text": " those mutations you can maybe end up with an agent that has like longer legs"}, {"start": 2569.9199999999996, "end": 2575.8399999999997, "text": " or you can also imagine that you could be mutating the actual like brain"}, {"start": 2575.8399999999997, "end": 2580.4399999999996, "text": " architecture of the agent so that means instead of instead of like having that"}, {"start": 2580.4399999999996, "end": 2587.72, "text": " fixed MLP I described a couple of minutes ago so something like this you"}, {"start": 2587.72, "end": 2591.52, "text": " can be modifying this architecture as well mutating the architecture so you're"}, {"start": 2591.52, "end": 2595.9599999999996, "text": " mutating the architecture of the brain of the body and you can also imagine you"}, {"start": 2595.9599999999996, "end": 2600.52, "text": " could be mutating the rewards for each of the environments so not why just keep"}, {"start": 2600.52, "end": 2606.48, "text": " the why keep the reward constant as we saw there like with minus hundred"}, {"start": 2606.48, "end": 2611.7999999999997, "text": " penalty for for falling etc etc let 
me just find this is a super long paper so"}, {"start": 2611.7999999999997, "end": 2617.22, "text": " you could be also mutating the reward and all of that additional diversity"}, {"start": 2617.22, "end": 2622.7599999999998, "text": " could yeah probably either end up going towards some space that we don't care"}, {"start": 2622.7599999999998, "end": 2627.12, "text": " about or it could be very interesting although you probably would not want to"}, {"start": 2627.12, "end": 2632.08, "text": " give the reward too much flexibility you want to kind of scope it you roughly"}, {"start": 2632.08, "end": 2635.4399999999996, "text": " know what you care about and then you form a space you maybe form a"}, {"start": 2635.4399999999996, "end": 2640.3999999999996, "text": " probability distribution across rewards so that some rewards are more likely"}, {"start": 2640.3999999999996, "end": 2645.24, "text": " than some other ones okay those were some candies for your thought hopefully"}, {"start": 2645.24, "end": 2651.24, "text": " you liked this this video hopefully found this this idea of poet an open"}, {"start": 2651.24, "end": 2655.72, "text": " endedness interesting and if you did share the video out consider"}, {"start": 2655.72, "end": 2675.64, "text": " subscribing join the discord community and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=MNOJQINH-qw
Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (μTransfer)
👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer (μTransfer)" paper that makes optimal hyperparameters stable w.r.t. width scaling! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2203.03466 ✅ Their previous paper: https://arxiv.org/abs/2011.14522 ✅ μTransfer tool: https://github.com/microsoft/mup ✅ DeepNet paper: https://arxiv.org/abs/2203.00555 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 uTransfer introduced 04:30 Previous work (tensor programs IV) 10:00 NTK - neural tangent kernel recap 16:30 abc parametrization 19:25 How does learning happen in NTK? 33:50 Connections to Central Limit Theorem 39:30 Maximal Update Parametrization in Practice 45:50 DeepNet paper connection 48:20 Results (width is all you need?) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ THE AI EPIPHANY PATREONS WALL ❤️ Eli Mahler Kevin Stone Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #μTransfer #MaximalUpdateParametrization #scalingl
What's cracking guys? In this video I'm covering Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer, by Greg Yang, Edward Hu and others from Microsoft and OpenAI. As a quick note, I've been uploading somewhat less regularly over the last month and a half. Basically, I've been moving around London and a lot of other stuff has been going on. But going forward I'm going to be much more consistent, and also a very cool series is coming up, so stay tuned for that one. Without further ado, let's jump back to this paper. It's one of the most interesting theoretical and practical papers I've read in a while. There's a bunch of theory going on behind it, as you can tell by this Roman numeral V here. This is basically the fifth paper in this sequence of Tensor Programs papers. There's a bunch of theory. I'm going to try and give you the necessary background theory, although it's going to be a lot of hand-waving, but hopefully you're going to get the gist of it. Let's see what's the main punchline of the paper. We show that in the recently discovered maximal update parameterization, so do remember this concept, I'm going to try and explain what exactly this thing is in this video, many optimal HPs, hyperparameters, remain stable even as model size changes. Okay? So then they go and say: by transferring pre-training HPs from a model of 13 million parameters, we outperform published numbers of BERT-large (350 million parameters), with a total tuning cost equivalent to pre-training BERT-large once. So this is quite a big deal. Basically, what they do, they have a nice diagram here. They take a huge model, like let's say BERT-large or GPT-3, then they scale it down, as you can see on the diagram here. They tune this smaller model, so they do the hyperparameter tuning on the smaller model, which is, as you can tell, much more cost-effective, time-effective, and easier. It's easier to just kind of sweep the search space. And then what they show is that with this maximal update parameterization, those optimal hyperparameters are going to stay optimal even for the much bigger model. So you can see here, they can scale it up to the original size, just apply the hyperparameters they found on the smaller model, and it's going to be optimal for the bigger model as well. So that's the punchline, and that's super promising, because so far, as you may know, people usually just find hyperparameters directly on the bigger model, and that's often nowhere near an exhaustive search, so it's suboptimal. Here are some other diagrams where they show that. So on the x-axis, we have learning rate. On the y-axis, we have training loss, and we can see that as we are changing the widths of our neural networks, basically for the standard parameterization, you can see that the optimum is shifting here. So that means that depending on the size of your model, your optimal learning rate is going to be different. And here on the right-hand side, you can see that for this novel parameterization, once you find the optimal learning rate, it stays pretty much constant with width for all of these models, and you can see that progressively as we add more and more neurons along the width dimension, we get better performance, i.e. a lower training loss. Here, it's not quite the same situation, because you can see that this model with a width of about 2,000 is actually the optimal model here, it has the lowest loss, so we are not benefiting from having bigger models.
Now, there is a gotcha here. So they actually have a theoretical guarantee that this is going to be the case when you're trying to scale the model along the width dimension. But here, Greg Yang, one of the main authors of the paper, says in his Twitter thread: wait, can we shrink the model only in width? And he says, bad news, there is not much theoretical guarantee for non-width stuff, but good news is we empirically tested transfer across depth, batch size, sequence length, and training time, and all of those work within reasonable ranges on pre-LN transformers. Okay, so let's understand what kind of theory lies behind this paper, and for that purpose, I have a bunch of snippets from their last paper, the fourth paper in this series of papers, and let's see some definitions and understand what's going on and what type of parameterization this microP parameterization thingy actually is. Okay, so they say here: this paper studies a natural class of parameterizations, which we call the ABC parameterization and describe here. Consider an L-hidden-layer perceptron. For weight matrices, W1, which has N times D dimensions, where D is basically your input dimensionality and N is the hidden layer dimensionality. Then we have layers W2 through WL, which are all of the N times N dimensionality, which means we keep the same number of neurons across each of the hidden layers. We have a non-linearity phi, just a scalar mapping from real numbers to real numbers, so that's your ReLU, sigmoid, tanh, whatnot. Such a neural network on input Xi, which is D-dimensional, as I mentioned, is given by this H1, and you just get that by multiplying W1 with the input. So you multiply your input with the weights of your first layer. Those give you the preactivations, and then after you apply the activation function here, you get the activations, which they denote with X. L just stands for the index of the layer in this deep MLP. Basically, in order to get the next layer's preactivations, you just multiply the activations from your last layer with the next layer's weights here. And they say that the network output, also called logits, is denoted as F of Xi, and that's basically, finally, you just apply this linear mapping on the last activations, and you get your logits, and that's it. So now, the ABC parameterization is specified by a set of numbers, (AL, BL) tuples. So for each of the layers in the MLP, we're going to have one of these tuples. And then we have this number C, which is going to be the same across the whole network, because it's going to modulate the learning rate, as we're going to see in a second. So let's understand why ABC, what does ABC mean. And ABC basically comes from these three statements here. So we have the A statement, B statement, and C statement. The first one says: we parameterize each weight WL as N raised to the power of minus AL times wL, where the wLs are the actual learnable, trainable parameters, and N is just basically the width of your layer, and this scaling is just a constant. So that means, for a given network architecture and for a given set of ALs, this factor is going to be constant, and this wL is learnable. So that's the A part of this parameterization. The B part is: we initialize each weight, so they just index using alpha and beta. I'm not sure why they're not using something like i and j, which is much more common, but yeah, I guess fancy Greek symbols are cool.
So they initialize that as a Gaussian that has zero mean and a variance of N raised to the power of minus 2 BL. So that's where the BL parameters come into play. And finally, the statement C here: the SGD learning rate is nu times N raised to the power of minus C. Let me delete this so you can see it a bit better. So if I delete this, you can see we have nu times N raised to the power of minus C for some width-independent nu. So that's like the master, I think they kind of call that the master learning rate. Now, as you can understand, this ABC parameterization is very general. So depending on how you set these parameters, you can get various different behaviors in this limit. And this whole paper is going to be about this infinite-width limit analysis. So keep that in mind. So here they mention a couple of instantiations, like specific instantiations of this ABC parameterization. So one of those is the NTK parameterization, so that's neural tangent kernel. And I'm fairly sure most of you don't know this, so I'm going to do a small digression and explain what it is, because it's fairly important to understand NTK to understand the theory behind this paper. So NTK has you set A1 to 0, and you set AL to one half for all of the other layers. So the first layer will have A1 equal to 0, and then all of the other layers are going to have AL equal to 0.5. BL equals 0 for all L, and then C equals 0. Similarly, setting some other set of As, Bs, and Cs, you get this mean field parameterization, which is not that important for our discussion at the moment. And we also have the standard parameterization, where by standard they mean just the thing that's currently implemented as the default in PyTorch. And I guess TensorFlow has pretty much the same parameterization. OK, a brief detour into NTK. Bear with me. This whole paper is a bit tougher when it comes to theory, but hopefully it's going to pay off and you'll understand stuff a bit better after this video. So what's NTK all about? NTK is basically this thing where people noticed that if you have a network with increasingly bigger and bigger width, what happens during training, as you're trying to learn on a particular dataset, is that the weights of the neural network are changing very, very slightly. And extrapolating from there, they understood that we can basically linearize our neural network, which is highly nonlinear, by just doing a Taylor expansion and keeping only the first-order term of the Taylor expansion. So you basically end up with something like this. So you have your highly nonlinear neural network F, and basically you feed the input X and you parameterize it by some weights W. So if we were to now just do the Taylor expansion, we'll get this. We'll get F of X expanded at these weights W0, which are basically our initialization weights. So now we add the first term, which is the first-order derivatives, basically. So plus the gradient of F at this same point W0, so basically the gradients of F with respect to the neural network parameters. And we need to multiply this with W minus W0. And we just have to transpose this because this is going to be obviously a vector, and so this is a row vector, this is a column vector, and we get our result. So the Taylor expansion in and of itself would now have higher-order derivatives, so basically you'd now have a Hessian here and then third-order derivatives, et cetera, et cetera.
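Before finishing the Taylor-expansion detour, here is a tiny PyTorch sketch of the ABC parameterization as just described. Only the n^(-a_l) weight scaling, the n^(-2 b_l) init variance, and the n^(-c) learning-rate factor come from the paper; the helper names, network size, and the choice of tanh are mine, picked purely for illustration.

```python
import torch

def init_abc_weights(d_in, n, L, b):
    # Trainable weights w^l, initialized as N(0, n^(-2 b_l)); the effective weight is W^l = n^(-a_l) w^l.
    shapes = [(n, d_in)] + [(n, n)] * (L - 1) + [(1, n)]   # W^1, W^2..W^L, and the output map
    return [torch.randn(s) * n ** (-b[l]) for l, s in enumerate(shapes)]

def forward_abc(ws, xi, n, a, phi=torch.tanh):
    h = n ** (-a[0]) * (ws[0] @ xi)                        # h^1 = W^1 xi  (pre-activations)
    for l in range(1, len(ws)):
        x = phi(h)                                         # x^l = phi(h^l)  (activations)
        h = n ** (-a[l]) * (ws[l] @ x)                     # h^{l+1} = W^{l+1} x^l
    return h                                               # logits f(xi)

# NTK parameterization as quoted above: a_1 = 0, a_l = 1/2 for the other layers, b_l = 0, c = 0.
d_in, n, L = 16, 1024, 3
a, b, c = [0.0] + [0.5] * L, [0.0] * (L + 1), 0.0
ws = init_abc_weights(d_in, n, L, b)
logits = forward_abc(ws, torch.randn(d_in), n, a)
lr = 0.1 * n ** (-c)                                       # SGD learning rate eta * n^(-c)
```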
But if in certain conditions you can ignore all of the other terms and this will be a good enough approximation. And so you've basically managed to linearize your neural network. The approximation becomes better and better as the width goes to infinity. So we have smaller and smaller approximation error. So where does the kernel come into play? Why a neural tangent kernel? So the trick is the following. So if you know your linear regression and if you notice here, this thing, this whole thing here, since we have a set of W0. So W0 is kind of constant. We initialize an approach somehow. And so this is going to map X into some representation. So this is going to be your F of X, Phi of X, sorry. And basically in linear regression, you usually just have like W's here. We have minus W0, but it doesn't change anything. So we end up with a linear regression problem. Obviously the features are nonlinear, but this is still called a linear regression because basically it's linear in the parameters. W. The kernel comes into play if you do. So kernel is basically if you take Phi of X and you do an outer product with Phi of X transposed. So why would you want to do that is because sometimes it's too expensive to calculate these Phi of X's. And so people just use directly this kernel. And so this is your neural tangent kernel. And I'm going to break it down a little bit more now. So neural tangent kernel because, so kernel because it's a kernel, neural because we are basically using neural network to form these nonlinear transformations of our input X. So that's the neural part. And then we have the tangent part. The tangent part is because we are basically creating this linear model, the tangent model of a neural network. Now the reason I mentioned this neural tangent kernel, the reason I did this regression is because it turns out that during this neural, when we're in this NTK regime, the network F is changing. So we are learning something. But the thing is the weights and the features are not changing. And that may be super counterintuitive because it is. And there is a whole snippet I'm going to explain in a couple of minutes here that is going to kind of give you a rough intuition for why this is happening. And because of that, precisely, we don't want to be in this kernel regime because we usually want to do transfer learning. And in order to do transfer learning, you need to learn some useful weights. You need to modify the weights and then transfer them to other tasks. So that's some hand wavy explanation here. I'm going to dig into a bit more details. But to just kind of drive home the point of how this thing works quickly, if we have a one dimensional function. So let's assume we have a one dimensional function here, something like this. Let me zoom in a little bit more. The reason why this Taylor approximation works is the following. So this is your neural network. Imagine this is your F, just a single dimensional case, but still it's your nonlinear function. And you're trying to to basically describe this function using just these two terms. So what you end up doing is if we have if this is w. So this here is w zero. We're basically doing the following thing. So we are describing the network as. So this is F of w zero F of w zero, and then we just add this term here, which is the derivative of this function at this point. So this is basically this this this tangent here. And then if you want to and you can see that for very close points, so for the points are very close to two w zero. 
This approximation that I've just done. So this tangent is going to be fairly close to the actual function. So let me just change the color here to use green instead. You can see the green approximates our function very well when we are in the in the in the neighborhood in the Epsilon neighborhood of w w zero. OK, that's enough with NTK. Let's get back to the last subroutine. And that's explaining ABC parameterization. OK, so it turns out and this is a simple simplistic representation of this of this ABC parameterization space. But it turns out roughly that we want to be so we want to find ABC parameterization such that we have certain properties. So first of all, we don't want to be in the part of the space where the network training is unstable in this infinite width limit. And that's this whole space, this gray area here. So what an unstable means in this context is that the logits, so the last basically pre activations of the of the network of the output of the network start diverging. So they start going towards infinity in this in this regime. So that's bad. We want to have stable converging logits. Next up, it's not enough that we just like are the logits are not exploding. We also don't want to be in the trivial regime. Trivial would mean, OK, imagine you have logits are all like set to zero and you're learning your training your network, but nothing is changing. The weights are just kind of frozen to zero. And that's stable. OK, because the logits are not exploding. But obviously it's trivial because we're not learning anything smart. So we don't want to be in that regime either. So we end up with being inside of this of this polygon shape part of the space. And again, this is what I explained. This is the kernel regime where we are actually not learning any any useful features. So that means we can do any useful transfer learning. And so we don't want to be here either. And I want to explain why, as I said, I'm going to give you some intuition why how is this possible that we are learning F but not learning features. It's kind of confusing. I'm still trying to grasp this theory completely myself. I'm not an expert here, but like hopefully going to get a better feeling after this video. So what we end up so we end up wanting to be in this part of the space. So on these brown lines here, as you can see here. So these are the lines where we have this feature learning regime where basically we're stable. So the logits are not exploding. We are doing something non trivial. So we are learning features and we also want to be learning the features as much as possible. But just below like the divergence mode where we were the logical start exploding. So that's this maximal update parameterization point. So that's the apex of this of this polygon here. It turns out that the classical the standard parameterization, as they call it here, is actually unstable in the infinite width limit, which is the reason why we have that the instability of the optimal hyperparameters as we are changing the width dimension. OK, so now brace yourselves. I'm going to try and explain this the intuition behind why why is this possible? How can we be learning function F and not learning the actual features? So the features are not changing, but the F is changing. OK, let's see what's going on there. And this is from the appendix of their fourth paper, which I'm going to link down in the video description. So how does the function change if the limit does not allow features to evolve? 
Then how does learning occur to answer this question? Note that Delta and I'm going to explain all of these symbols and what they mean in a second. So Delta F T T means we are we are looking at function F after T steps of gradient of SGD of stochastic gradient descent. Delta means that we are basically looking at the difference between F T and F zero. So F zero is your neural network. Sorry, your your logits F represents the logits of your network during the initialization. So zero steps of SGD. That's the initialization. So that's equal to V zero, blah, blah, blah, blah, blah. And before I go there, let me just paint a picture here. What is we what is what is all of this? So basically they're just looking at an MLP. And so we have something like this. So we have an MLP. So this is your input. We have an MLP, a hidden layer, and we have another projection here. And here are the logits. This matrix here is called V. This matrix here is called U. And that's all we care about. So now let's see. Let's get back here. Yeah. One thing I forgot to mention. The activations here are called X. So now we have our terminology, our symbols grounded into the physical network here. OK, so equals V zero. And this this very fancy expression where they they take so easier is the the the weights of that network. We during initialization, then they have XT, which are the the activations here basically because it's delta. That's going to be XT minus X zero. And then we have plus a couple of other terms. So instead of me jabbering here, let me just expand this so that we understand what's going on. We're going to end up with the following. So we zero. XT minus X zero. OK, so that's the first term here. Then we have plus we have this part here. So we have VT minus V zero times X zero. And then we have this this this last term, which is basically VT minus V zero times XT. Minus X zero. And if we were to just kind of cancel out the terms here, what we end up with here very trivially is this. So we have VT XT minus V zero. X zero. And now the semantics behind this expression is the following. So here we take the matrix at initialization, so matrix F. So the weights at initialization, we multiply them with the signal here with the activations at initialization. And we subtract that from the same the same thing just at step T. So after T steps of SGD. And so this is kind of quantifying how are the logits of our network evolving throughout training? OK, so we want this to change. We want to to to if we are learning something, this thing is changing. It's evolving. OK, let's continue here. In short, then the evolution of FT of psi where psi is the the input. In the NTK limit is predominantly due to V zero delta XT and delta VT X zero. So why is that? That's because they show that this delta is not changing by a lot in the NTK regime. So it's a very small quantity. And that means that this when you have two small quantities multiplied with one another, this is going to rapidly go down to zero. And so it doesn't it does not contribute to the evolution of our logits. OK, next up, they give a concrete example. So let's imagine now that T equals one. So we've applied only a single SGD update to our network and we get the following expression. So we get delta F one is going to be this fancy expression. So all of these and raised to the power of minus two AV blah, blah, blah comes from our ABC parameterization. 
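As a quick sanity check on the algebra just derived, before plugging in the ABC scalings: the three-term expansion really is just a rewriting of V_t x_t minus V_0 x_0. Here is a short numerical check of that identity; the shapes are arbitrary and chosen only for the demo.

```python
import torch

n = 8                                                # toy width, just for the check
V0, Vt = torch.randn(1, n), torch.randn(1, n)        # output weights at init and after t steps
x0, xt = torch.randn(n), torch.randn(n)              # activations at init and after t steps
lhs = V0 @ (xt - x0) + (Vt - V0) @ x0 + (Vt - V0) @ (xt - x0)
rhs = Vt @ xt - V0 @ x0                              # delta f_t, the change in the logits
print(torch.allclose(lhs, rhs))                      # True
```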
If you recall correctly, basically by just plugging in the ABC parameterization into this expression here, we get the following three terms. Don't worry about the details here, let's just see the big picture. So in NTP, so that's the neural tangent parameterization, every a_V equals one half. So the term N raised to the power of blah, blah, blah is theta of one. OK, confusing piece of notation here as well, I'm going to try and dissect that too. So basically, that means that this term here is theta of one, whatever that means. And it roughly means that for this random vector, if you take a coordinate, the standard deviation is roughly theta of one. So that means in the limit, it's going to be upper bounded and lower bounded by some constant function. Then they say, on the other hand, this expression here, N raised to the power of blah, blah, blah, so that's this expression here, is O of one over square root of N. OK, so that means that as N grows, this is going to go to zero, it's going to converge to zero. And finally, they say likewise for this last term. So for this term here, they kind of manage to reformulate it into this different shape here, and they show that this is also theta of one. No, sorry, that's the term C actually. So all in all, this term here is going to be theta of one over N. So that's this here. And because we have a sum of N elements, that's going to cancel out, and this is also theta of one. And so, in the grand scheme of things, we end up with this term also being theta of one. So we end up with the following. We end up with this being theta of one, this being theta of one, and this being O of one over square root of N, which means this is going to go to zero. So in summary, even though this delta X, as N is approaching infinity, is converging towards zero, which means that with a big enough width of the network, even after a single step, because we have one here, even after a single step of SGD, we don't have any change whatsoever in the features, so this X stays the same after a single step. On the other hand, we see that F, so the logits of the neural network, are actually evolving, and that's why we are able to learn the task, because of this weird nonlinear relationship. Okay, that was the best I could do for now. Let's now continue, hopefully you got something out of that. So I mentioned this theta expression. Basically, they define it quite formally here. Basically, what that means is that for your vector X, and N is kind of unfortunate here, it should be a subscript, because N means the Nth element in the sequence and not X raised to the power of N. So we take the norm of the Nth random vector in our sequence, we square it, we divide it by N, and we take a square root of all of that. And basically this sequence here is theta of N raised to the power of minus A, which means, by the usual definition of big theta, we can find constants A and B such that we can bound that sequence of elements here. So how I like to think about it, and it's a great simplification, is: imagine all of the coordinates here are the same. Then if you just expand this, this is going to be X1, so the first coordinate of your random vector X, raised to the power of 2, plus the second coordinate squared, and because they're all the same, let's just put X1 here, squared. And we'll have, so plus plus plus, so we'll have N of these. So we end up with N X1 squared. And because we have N here, this thing here cancels out with this one.
And so we end up with something like this. So you end up with the result that the standard deviation is roughly equal to, of a single coordinate of this vector here, X is equal to this. So that's how you can parse this result that delta F of t is theta of 1. Okay, let me show you just a couple more results from that paper. And we're going back to the main paper, which is going to be shorter than the explanation, this explanation I just gave you here. So for L, so for layers L from 0 to big L, which is the depth of your MLP, we say that this WL, so that the weights of a particular layer L is updated maximally if this delta WTL XTL-1, so basically you take the activations from your previous layer at step t and you multiply that with the weights of your L layer at step t of SGD, but you just have the delta here. So basically if this thing has theta of 1 coordinates for some training routine, some time that's greater than or equal to 1, and some input Xi. So what this roughly means is that if you think about what this is, you're multiplying with weight matrix, you're multiplying the activations, you're getting the pre-activations, so you're basically saying that the pre-activations are evolving as the training progresses. And then they find this complex expression here that shows that you need, so the A, Bs and Cs need to respect this complicated equation here if you were to get this maximal update parameterization. They finally state that this is, there is a unique stable ABC parameterization that has this property, and that's what we saw on this diagram here as the apex of this ABC space. Okay, I'm well aware that this was all very hand-wavy. It's hard to explain this amount of math in a single video, but hopefully you got some understanding of this previous thing that came before this paper. So yeah, in a nutshell, we have this ABC parameterization. We somehow find, they somehow find the relationship between A, Bs and Cs such that we have this maximal learning of features and being stable and all of these nice properties. Now it's much easier to just understand what you need to do as a practitioner. You just need to set a couple of numbers, and this is all going to work magically for you in the background. Okay, so let's get back to the paper. So they say here algorithm one is the following. So they first parameterize target model in maximal update parameterization. So that means pick A, Bs and Cs in a particular way. Then you just reduce the model, and because you parameterize your model, you'll know how to change your initialization, you'll know how to change your weight, your parameter coefficients, and you'll know how to change your learning rate. Then you tune a smaller version of your model, and finally you copy the tuned hyperparameters to the target model, to the big model. Because of all of this, you can now have all of your compute budget used to search for the optimal, for the most optimal hyperparameters, and then you can just apply them to the bigger model and have the ABC parameterization, the maximal update parameterization do all the work for you. So let's see what parameters, which hyperparameters are transferable, and they say here, and we saw this already, so initialization ones, parameter multipliers, and optimization related, especially learning rate. Transferred across our, so we see width here, so this means we can transfer these hyperparameters across these different scaling dimensions. 
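To make that four-step recipe a bit more concrete, here is a rough sketch of what a μTransfer-style sweep could look like in code. Note that make_model and train_and_eval are hypothetical placeholder functions, not from the paper or from any library; the only part taken from the paper is the overall flow of algorithm one: parameterize in microP, tune a narrow proxy, then copy the winning hyperparameters to the wide target.

```python
# make_model and train_and_eval are hypothetical placeholders, not real APIs.
candidate_lrs = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2]

best_lr, best_loss = None, float("inf")
for lr in candidate_lrs:                                    # cheap sweep on a narrow proxy model
    proxy = make_model(width=256, parameterization="muP")   # hypothetical constructor
    loss = train_and_eval(proxy, lr=lr)                     # hypothetical training routine
    if loss < best_loss:
        best_lr, best_loss = lr, loss

target = make_model(width=8192, parameterization="muP")     # the big model, same muP parameterization
train_and_eval(target, lr=best_lr)                          # reuse the tuned LR directly, no re-tuning
```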
So we do have the theoretical guarantee for width, and we already saw that we don't have the same theoretical guarantee for depth or batch size or any of these, but they empirically, so that's what asterisk means here, the empirically shows that this roughly remains true even when scaling across these different, like scaling dimensions such as depth. What is not transferable are regularization techniques such as dropout, weight decay, because intuitive explanation would be the parameterization only cares about the model, not about the actual data set size, and because of that you cannot count in the regularization correctly. Okay, let's continue here. Worth mentioning, they say that in this work, we primarily focus on hyperparameter transfer with respect to training loss. In settings where regularization is not the bottleneck to test performance, as in all of our experiments here, this also translates to efficacy in terms of test loss. Okay, and that's what we saw in all of these diagrams here, so we had the training loss, and we saw that the training loss remains stable, but if the regularization is not, as they say here, if the regularization is not the bottleneck, then basically even for the test, the test loss will stay similarly stable across various scales. Let's continue here, and let me give you some more intuition for why this whole theory works the way it does. So a bit more math ahead, so brace yourself, let's try and take this one. So the central limit theorem says that if x1 through xn are IID samples, so independent identical distributions, so samples from a zero mean unit variance distribution, so this can be arbitrary distribution, it's just important that we have zero mean and unit variance, and then if we form this novel random variable like this, so basically a sum of all of those random samples, divided by the square root of n, where n is the number of the samples, this converges to a standard Gaussian distribution as n goes to infinity. Therefore, we can say that 1 over square root of n is the right order of scaling factor cn, such that cn times x1 through xn converges to something non-trivial. So in contrast, if we were to change cn to 1 over n, this whole thing would converge to zero, or if cn was 1, then this whole thing will blow up in variance as n goes to infinity. So why would we care about this central limit theorem right now, and this sum and all of this? Well, the answer is kind of simple once you understand it. Basically, let's imagine we have an MLP here, so let's imagine we have our hidden layer, and we have our previous layer, and so how the MLP connectivity pattern looks like is we have, as you can see here, a bunch of connections from previous layer to the next layer. And you can see here that this exactly has that same form as this one here. So we have some weights here, so we have x1, x2, all the way to here, x4, and this will be xn in general when we have much more nodes here. Assuming that the inputs are also zero mean and variance 1, then this multiplication between these two will end up having zero mean and finite variance samples here, and then we sum them up, and that's why we, and now once we sum them up, if we don't want this thing to explode or converge to zero, so if we don't want this to go to zero or infinity in the infinite width limit, then we have to scale this with 1 over square root of n. That's how this mathematics translates to what we care about. Okay, let's continue here, so let's see what's going on here. 
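Here is a tiny numerical version of that c_n argument, just to see the three behaviours (blow-up, stay put, vanish); the sample counts and widths are arbitrary.

```python
import torch

for n in [64, 1024, 16384]:
    x = torch.randn(1000, n)                       # 1000 draws of (x_1, ..., x_n), zero mean, unit variance
    s = x.sum(dim=1)                               # x_1 + ... + x_n for each draw
    print(n,
          s.std().item(),                          # c_n = 1         -> grows like sqrt(n)
          (s / n ** 0.5).std().item(),             # c_n = 1/sqrt(n) -> stays around 1
          (s / n).std().item())                    # c_n = 1/n       -> shrinks towards 0
```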
They say now suppose we would like to minimize the function fn of c, which is defined as an expectation across all of these random samples, f, and then as the input we have this expression c times x1 plus all the way through xn. So we're trying to find that c such that we can minimize this function, and then they say if we reparameterize c as alpha over square root of n, for alpha being a real number, then by the central limit theorem we can see that this converges to basically your Gaussian distribution with variance alpha squared. In the case that alpha is equal to 1, then we converge to standard Gaussian here. Basically any other parameterization would lead to this either exploding or going to zero, and that's not something that we care about, it's not interesting, and that's why we have to kind of choose this type of parameterization inspired by the CLT theorem. Finally they say then for sufficiently large n, the optimal alpha, so alpha n, this star means it's optimal, n means we have n of these samples, and that's the argument of this function, so that optimal alpha n star should be close to alpha big n star for any n big n that's bigger than this small n, and like in particular when n is equal to infinity. And this precisely means that we can transfer the optimal, so cn or alpha n, because those two are kind of connected directly here, they're proportional because this is a constant, so we can transfer from the optimal parameter from a smaller problem to a larger problem. Finally, because the transfer algorithm is simply copying alpha, we say that parameterization alpha over square root of n is the correct parameterization for this problem. So what I've basically said here is that this c is some hyperparameter, whatever that hyperparameter is, and with this parameterization it's going to give us the minimum of this function f, even for smaller as well as for the bigger models, so as n goes to this n which goes to infinity. That's a rough explanation, so this is fairly hard to understand even for me who've been reading a bunch of papers over the many last years, and I can't imagine how this might look to beginners, so just having some writing some blog around this whole work would be super, super beneficial, so if the authors see this video, I strongly encourage them to write a blog and try and explain this with less math and with more pictures. It would be super useful, just like this thing I tried to do here basically. Okay, let's continue on and see the rest of the paper. Okay, I think we're through with most of the math, so this part now is going to be hopefully a lot simpler compared to what we've seen previously. Okay, the point they're trying to make throughout this whole paper is that hyperparameters do not transfer conventionally, and we've seen that in the first graphs. We've seen that you cannot just take a set of hyperparameters you found for a small model and just assume they're going to work for the bigger model, and they say that here, so on the other hand, a non-trivial fraction of papers in deep learning fixes all HPs when comparing against baselines, which reflects an assumption that the optimal HPs should be stable, and my comment to that would be basically it reflects a lack of resources and hoping for the best rather than people thinking this is going to work. That's usually the case, I guess. Okay, so having said that, let's now see some more experiments they've done. 
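The same argument can be poked at numerically. The test function f below is my own toy choice, not from the paper; the point is only that once you write c = alpha / sqrt(n), the alpha minimizing E[f(c(x_1 + ... + x_n))] barely moves as n grows, which is exactly the "transfer the tuned value upward" claim.

```python
import torch

def f(z):                                          # assumed toy objective, not from the paper
    return (z ** 2 - 2) ** 2

alphas = torch.linspace(0.1, 2.0, 39)              # grid over the reparameterized constant alpha
for n in [16, 256, 1024]:
    s = torch.randn(20_000, n).sum(dim=1)          # Monte Carlo draws of x_1 + ... + x_n
    losses = torch.stack([f(a / n ** 0.5 * s).mean() for a in alphas])
    print(n, round(alphas[losses.argmin()].item(), 2))   # stays close to 0.8 for every n
```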
On the left-hand side here, we can see the MLP, I think a two-hidden-layer MLP, trained with the standard parameterization, so SP stands for standard parameterization, and microP stands for maximal update parameterization. By the way, I have no idea why they're using microP to describe the maximal update parameterization. In any case, we can see here that as they're training with different widths, going from 256 all the way to 8,192, the optimum when it comes to the learning rate is shifting, like literally by an order of magnitude at least, here from minus 7 to minus 9, even more. And this is a log scale, so it's multiple orders of magnitude. On the other hand, we can see that for microP, for their method, it's basically stable here. So this graph is the MLP one; the one we saw above, at the beginning, that one was actually transformers, so they do show that for various architectures this thing actually works in practice. It's not just a theoretical paper without any results. It actually has awesome results to back up all of this theory, which is one of the nicest parts about the paper. Okay, so let's see that famous maximal update parameterization. Here is what you need to do in practice. You initialize your weights in a particular manner, and the only difference compared to your standard parameterization is a couple of tweaks. So here, in the third layer, so for the third set of weights, you just have to do n squared instead of n. And this is something you regularly see, this is just 1 over fan-in, so the number of neurons entering your neuron. This intuitively makes a lot of sense, because if you have, again, a neuron here, and you have n neurons going into this neuron here, so if you have f_in, so fan-in, number of neurons here, it makes sense that you want to initialize these weights with variance 1 over f_in, so as to compensate for the sum over these f_in elements. So that's the intuitive idea for why these methods work, and what they've shown is that these methods do not work as expected in the infinite-width limit, so that's why we have the maximal update parameterization here. Finally, the SGD learning rates are modified per layer. You can see that the only modifications are that for the first layer you multiply by n here, and for the last layer you multiply by 1 over n, which means we have per-layer learning rates in this parameterization. So practically it's very simple to actually implement this, and you don't even have to care about this, because they actually open-sourced a package that does this for you. I'm going to link it down in the video description as well so you can check it out. Here are some empirical validations they've done for different hyperparameters and scaling across width and depth. So as I said, for width, this whole theory was developed in the infinite-width limit, whereas depth is not guaranteed theoretically, but you can see that for various different hyperparameters, like the learning rate schedule, the learning rate, these initialization standard deviations, etc., the curves are fairly stable across all of these diagrams. I guess this is the only thing that kind of deviates compared to the width analog here, so it doesn't have a clear minimum as up here.
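For reference, here is a minimal hand-rolled version of those two tweaks, exactly as read off the table above, for a two-hidden-layer MLP trained with SGD. This is only an illustration of "output-layer init variance 1/n^2 instead of 1/n, plus per-layer learning-rate multipliers n and 1/n"; the widths and base learning rate are arbitrary, the real table has more entries, and in practice you would use the authors' mup package linked in the description (https://github.com/microsoft/mup) rather than wiring it up by hand.

```python
import torch
import torch.nn as nn

d_in, n, d_out, base_lr = 32, 4096, 10, 3e-4          # sizes and base LR picked arbitrarily

w1 = nn.Linear(d_in, n, bias=False)                   # input layer
w2 = nn.Linear(n, n, bias=False)                      # hidden layer
w3 = nn.Linear(n, d_out, bias=False)                  # output ("readout") layer

nn.init.normal_(w1.weight, std=(1 / d_in) ** 0.5)     # usual 1/fan_in variance
nn.init.normal_(w2.weight, std=(1 / n) ** 0.5)
nn.init.normal_(w3.weight, std=(1 / n ** 2) ** 0.5)   # muP tweak: variance 1/n^2 instead of 1/n

optimizer = torch.optim.SGD([                         # muP tweak: per-layer SGD learning rates
    {"params": w1.parameters(), "lr": base_lr * n},   # first layer: multiply by n
    {"params": w2.parameters(), "lr": base_lr},
    {"params": w3.parameters(), "lr": base_lr / n},   # last layer: multiply by 1/n
])
```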
Finally, one interesting fact is that, so you're all familiar, I guess, with transformers and how the actual attention mechanism works, and you know that in the original paper they used to divide by square root of D so to avoid the explosion and keep the variance controlled, and it turns out that in this infinite width analysis you actually want to divide by D, not by square root of D, which is a super interesting result considering that if you think, like, the original transformer paper did give a nice explanation for why square root of D works, and now all of a sudden one over D is actually better in this infinite width limit. Yeah, this whole paper feels like magic, but the best thing of all is that it actually works. It's not just a pure theoretical paper without the results to back it up. In these diagrams they show how logits, attention logits, and word embeddings are stable across various widths, as you can see on the x-axis, for the proposed parameterization versus the standard parameterization, where we can see that logits and attention logits start exploding with width, whereas word embeddings are actually constant, and that's why it's problematic to only try, and I think they try to just manipulate, like, a learning rate, and then this becomes stable, but then this diverges or converges to zero or something, and so it's only, yeah, obviously only their parameterization leads to consistently stable results across all of the widths. Okay, let's continue here and let's see what else is interesting. Okay, this one. The transfer across depth only works for the pre-layer norm transformer, and there was a very cool recent paper called DeepNet that addressed this problem, actually. It introduced this deep norm layer, and they managed to train transformers that are super deep, like, 1,000 plus layers deep, as you can see here, and it's a very interesting chart, because you have on the x-axis chronology, like, time, and then on the y-axis, we have the metric of depth, and they are definitely like new soda when it comes to depth on this benchmark. Yeah, so they mentioned somewhere in the paper the following thing. So these guys here find that the pre-norm residual connections, pre-LN, improve the stability of transformers compared to post-norm connections. However, the gradients of pre-LN at bottom layers tend to be larger than at top layers, leading to a degradation in performance compared with post-LN. So here it was already shown that pre-LN improves stability, whereas it loses some of the performance, and here you can see that exactly that's what they're dealing with here. The transfer across definitely works for pre-LN transformers. So the DeepNet paper shows how to reconcile these two worlds, so to take the best out of the two worlds, the performance and the stability, so the stability of pre-LN and the performance of post-LN module, by just applying a couple of tricks, so just kind of multiplying the signal here by alpha, so we have the residual plus the signal, and this is the pre-LN form, as you can see, because we have the sum here before the LN, that's why it's called pre-LN, and just modifying it with alpha here and initializing some of these weights by beta, they manage to get to amazing depths. 
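Concretely, the attention tweak being described is just a different divisor on the query-key logits. A hedged one-liner version is below; the shapes are arbitrary, and this deliberately ignores everything else microP changes in a transformer.

```python
import torch

def scaled_attention_logits(q, k, d_head, use_mup_scaling=True):
    scale = d_head if use_mup_scaling else d_head ** 0.5   # divide by d instead of the usual sqrt(d)
    return (q @ k.transpose(-2, -1)) / scale

q = torch.randn(2, 8, 16, 64)                              # (batch, heads, seq, d_head)
k = torch.randn(2, 8, 16, 64)
attn = torch.softmax(scaled_attention_logits(q, k, d_head=64), dim=-1)
```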
And so combining this paper, totally understanding the theory behind the DeepNet paper and this maximal update parameterization, sounds like a very interesting research idea that promises both infinite depth and infinite width, and so the stack-more-layers team is winning again. Okay, guys, I thought that this connection with the DeepNet paper was quite valuable, so I thought it was worth mentioning. So now let's see the results. There's a bunch of tables, I'm going to focus only on a couple of these. So here we have BERT pre-training, here we can see BERT-large, and we can see that the model speedup for this μTransfer technique is 22x, which means the smaller model takes 22x less time to train compared to the big model, and all in all, the total speedup for the whole hyperparameter tuning procedure is such that it takes hundreds and hundreds of times less time than tuning directly on BERT-large, on the big model. Interestingly, we can also see that it transfers perfectly, it even achieves better results than tuning the big model directly, and that's a consequence of this amazing theory behind this paper. For GPT-3, they showed that if we take the smaller 6.7 billion parameter model and train it with μTransfer, you get better results even compared to the 13 billion parameter model, and that's a consequence of OpenAI not having enough resources to try out different hyperparameter combinations for such a big model. And now with this technique, you can do a much more thorough HP search and then just apply the same hyperparameters to the bigger model, and as you can see, they get pretty much better results across most of these datasets. There are some outliers here, for some reason it performs way worse compared to these two on this LAMBADA few-shot task, not sure what's going on there. Finally, interestingly enough, the same as what DeepNet did for depth, these guys showed that with increasing width, they always get better and better performance, so a lower training loss in this example, compared to the standard parameterization, where it's not stable. You can see that the pattern holds up until a certain point depending on the learning rate, and then, given enough width, it's going to explode. So here, because the learning rate is bigger, it explodes even sooner than on the middle chart here. And in another example, another chart showing that wider is better, you can basically see that throughout the whole training, we have the training tokens on the x-axis, so this is plotting during the actual training procedure, and you can see that the validation loss is getting smaller and smaller as we're increasing the width, although there is some saturation going on. That's pretty much it, guys. This was a super heavy paper, lots of equations, and the previous paper is even harder than this one, there is so much math, but it looks like a super impactful work, and it would be very useful to make some visualizations to make this a bit easier to understand. But yeah, hopefully you found this explanation somewhat useful. I'm aware it was very hand-wavy and not rigorous, but hopefully some of the mental models of how I think about all of this helped you out. If it did, consider sharing the video out and subscribing to this channel, join the Discord community, and until next time, bye-bye.
[{"start": 0.0, "end": 2.7600000000000002, "text": " What's cracking guys? In this video I'm covering"}, {"start": 2.7600000000000002, "end": 6.26, "text": " Tensor Programs 5 Tuning Large Neural Networks"}, {"start": 6.26, "end": 9.06, "text": " by a Zero-Shot Hyperparameter Transfer"}, {"start": 9.06, "end": 14.06, "text": " by Greg Young, Edward Hu and others from Microsoft and OpenAI."}, {"start": 14.06, "end": 18.26, "text": " As a quick note, I've been uploading somewhat less regularly"}, {"start": 18.26, "end": 21.26, "text": " over the last month and a half."}, {"start": 21.26, "end": 23.26, "text": " Basically, I've been moving around London"}, {"start": 23.26, "end": 25.76, "text": " and a lot of other stuff has been going on."}, {"start": 25.76, "end": 30.76, "text": " But going forward, basically I'm going to be much more consistent"}, {"start": 30.76, "end": 35.760000000000005, "text": " and also a very cool series is coming up, so stay tuned for that one."}, {"start": 35.760000000000005, "end": 38.760000000000005, "text": " Without further ado, let's jump back to this paper."}, {"start": 38.760000000000005, "end": 41.760000000000005, "text": " It's one of the most interesting theoretical and practical papers"}, {"start": 41.760000000000005, "end": 43.760000000000005, "text": " I've read in a while."}, {"start": 43.760000000000005, "end": 46.760000000000005, "text": " There's a bunch of theory going behind it,"}, {"start": 46.760000000000005, "end": 49.760000000000005, "text": " as you can tell by this Roman numeral 5 here."}, {"start": 49.760000000000005, "end": 53.760000000000005, "text": " This is basically a fifth paper in this sequence of papers"}, {"start": 53.76, "end": 55.76, "text": " with Tensor Programs papers."}, {"start": 55.76, "end": 57.76, "text": " There's a bunch of theory."}, {"start": 57.76, "end": 60.76, "text": " I'm going to try and give you the necessary background theories,"}, {"start": 60.76, "end": 62.76, "text": " although it's going to be a lot of hand-waving,"}, {"start": 62.76, "end": 64.75999999999999, "text": " but hopefully you're going to get the gist of it."}, {"start": 64.75999999999999, "end": 68.75999999999999, "text": " Let's see what's the main punchline of the paper."}, {"start": 68.75999999999999, "end": 71.75999999999999, "text": " We showed that in the recently discovered"}, {"start": 71.75999999999999, "end": 73.75999999999999, "text": " maximal update parameterization,"}, {"start": 73.75999999999999, "end": 75.75999999999999, "text": " so do remember this concept."}, {"start": 75.75999999999999, "end": 80.75999999999999, "text": " I'm going to try and explain what exactly this thing is in this video."}, {"start": 80.76, "end": 83.76, "text": " So many optimal HPs, hyperparameters,"}, {"start": 83.76, "end": 87.76, "text": " remain stable even as model size changes."}, {"start": 87.76, "end": 88.76, "text": " Okay?"}, {"start": 88.76, "end": 90.76, "text": " So then they go and say,"}, {"start": 90.76, "end": 95.76, "text": " by transferring pre-training HPs from a model of 13 million parameters,"}, {"start": 95.76, "end": 100.76, "text": " we outperform published numbers of birth large 350 million parameters"}, {"start": 100.76, "end": 105.76, "text": " with a total tuning cost equivalent to pre-training birth large ones."}, {"start": 105.76, "end": 107.76, "text": " So this is quite a big deal."}, {"start": 107.76, "end": 110.76, "text": " Basically, what they do, they have a nice diagram here."}, {"start": 110.76, "end": 115.76, 
"text": " They take a huge model, like let's say birth large or GPT-3,"}, {"start": 115.76, "end": 119.76, "text": " then they scale it down, as you can see on the diagram here."}, {"start": 119.76, "end": 122.76, "text": " They tune this smaller model,"}, {"start": 122.76, "end": 125.76, "text": " so they do the parameter tuning on the smaller models,"}, {"start": 125.76, "end": 130.76, "text": " which is, as you can tell, much more cost-effective, time-effective,"}, {"start": 130.76, "end": 131.76, "text": " and easier."}, {"start": 131.76, "end": 134.76, "text": " It's easier to just kind of sweep the search space."}, {"start": 134.76, "end": 139.76, "text": " And then what they show is that by doing this microparametrization,"}, {"start": 139.76, "end": 143.76, "text": " the thing is those optimal hyperparameters are going to stay optimal"}, {"start": 143.76, "end": 145.76, "text": " even for this much bigger model."}, {"start": 145.76, "end": 148.76, "text": " So you can see here, they can scale it up to the original size,"}, {"start": 148.76, "end": 153.76, "text": " just apply the hyperparameters they found on the smaller model,"}, {"start": 153.76, "end": 156.76, "text": " and it's going to be optimal for the bigger model as well."}, {"start": 156.76, "end": 160.76, "text": " So that's the punchline, and that's super promising,"}, {"start": 160.76, "end": 162.76, "text": " because so far, as you may know,"}, {"start": 162.76, "end": 166.76, "text": " they usually just kind of find hyperparameters directly on the bigger model,"}, {"start": 166.76, "end": 171.76, "text": " and it's nowhere near often exhaustive search, so it's suboptimal."}, {"start": 171.76, "end": 175.76, "text": " Here are some other diagrams where they show that,"}, {"start": 175.76, "end": 178.76, "text": " so on the x-axis, we have learning rate."}, {"start": 178.76, "end": 181.76, "text": " On the y-axis, we have training loss,"}, {"start": 181.76, "end": 186.76, "text": " and we can see that as we are changing widths of our neural networks,"}, {"start": 186.76, "end": 189.76, "text": " basically for the standard parameterization,"}, {"start": 189.76, "end": 192.76, "text": " you can see that the optimum is shifting here."}, {"start": 192.76, "end": 195.76, "text": " So that means that depending on the size of your model,"}, {"start": 195.76, "end": 198.76, "text": " your optimal learning rate is going to be different."}, {"start": 198.76, "end": 204.76, "text": " And here on the right-hand side, you can see that for this novel parameterization,"}, {"start": 204.76, "end": 206.76, "text": " once you find the optimal learning rate,"}, {"start": 206.76, "end": 211.76, "text": " it stays pretty much constant with width for all of these models,"}, {"start": 211.76, "end": 217.76, "text": " and you can see that progressively as we add more and more neurons along the width dimension,"}, {"start": 217.76, "end": 221.76, "text": " we get better performance, i.e., a lower training loss."}, {"start": 221.76, "end": 223.76, "text": " Here, it's not quite the same situation,"}, {"start": 223.76, "end": 227.76, "text": " because you can see that this model that has 2,000 parameters"}, {"start": 227.76, "end": 231.76, "text": " is actually the optimal model here as the lowest loss,"}, {"start": 231.76, "end": 234.76, "text": " so we are not benefiting from having bigger models."}, {"start": 234.76, "end": 236.76, "text": " Now, there is a gotcha here."}, {"start": 236.76, "end": 241.76, "text": " So they actually have a 
theoretical guarantee that this is going to be the case"}, {"start": 241.76, "end": 246.76, "text": " when you're trying to scale the model along the width dimension."}, {"start": 246.76, "end": 252.76, "text": " But here, Greg Yang, one of the main authors of the paper, says in his Twitter thread,"}, {"start": 252.76, "end": 254.76, "text": " wait, can we shrink the model only in width?"}, {"start": 254.76, "end": 259.76, "text": " and he says, bad news, there is not much theoretical guarantee for non-width stuff,"}, {"start": 259.76, "end": 263.76, "text": " but good news is we empirically tested transfer across depth,"}, {"start": 263.76, "end": 266.76, "text": " batch size, sequence length, and training time,"}, {"start": 266.76, "end": 271.76, "text": " and all of those work within reasonable ranges on pre-LN transformers."}, {"start": 271.76, "end": 275.76, "text": " Okay, so let's understand what kind of theory lies behind this paper,"}, {"start": 275.76, "end": 279.76, "text": " and for that purpose, I have a bunch of snippets from their last paper,"}, {"start": 279.76, "end": 282.76, "text": " the fourth paper in this series of papers,"}, {"start": 282.76, "end": 285.76, "text": " and let's see some definitions and understand what's going on"}, {"start": 285.76, "end": 291.76, "text": " and what type of parameterization is this microparameterization thingy."}, {"start": 291.76, "end": 296.76, "text": " Okay, so they say here, this paper studies a natural class of parameterizations,"}, {"start": 296.76, "end": 299.76, "text": " which we call the ABC parameterization and describe here."}, {"start": 299.76, "end": 302.76, "text": " Consider an L hidden layer perceptron."}, {"start": 302.76, "end": 307.76, "text": " For weight matrices, W1, which has N times D dimensions,"}, {"start": 307.76, "end": 311.76, "text": " whereas D is basically your input dimensionality,"}, {"start": 311.76, "end": 316.76, "text": " and N is the hidden layer dimensionality."}, {"start": 316.76, "end": 319.76, "text": " Then we have layers W2 through WL,"}, {"start": 319.76, "end": 322.76, "text": " which are all of the N times N dimensionality,"}, {"start": 322.76, "end": 328.76, "text": " which means we keep the same number of neurons across each of the hidden layers."}, {"start": 328.76, "end": 333.76, "text": " We have a non-linearity phi, just a scalar like mapping from real numbers to real numbers,"}, {"start": 333.76, "end": 337.76, "text": " so that's your ReLU, sigmoid, tanh, whatnot."}, {"start": 337.76, "end": 342.76, "text": " Such a neural network on input, Xi, which is D dimensional, as I mentioned,"}, {"start": 342.76, "end": 348.76, "text": " is given by this H1, and you just get that by multiplying W1 with the input."}, {"start": 348.76, "end": 354.76, "text": " So your matrix, you multiply your input with your weights of your first layer."}, {"start": 354.76, "end": 359.76, "text": " Those give you the preactivations, and then after you apply the activation function here,"}, {"start": 359.76, "end": 362.76, "text": " you get the activations, which they denote with X."}, {"start": 362.76, "end": 368.76, "text": " L just stands as the index into the layer in this deep MLP."}, {"start": 368.76, "end": 371.76, "text": " Basically, in order to get the next layer preactivations,"}, {"start": 371.76, "end": 379.76, "text": " you just multiply your activations from your last layer with the next layer's weights here."}, {"start": 379.76, "end": 386.76, "text": " And they say that the network 
output, also called logits, is denoted as F of Xi,"}, {"start": 386.76, "end": 390.76, "text": " and that's basically, finally, you just apply on the last activations,"}, {"start": 390.76, "end": 396.76, "text": " you apply this linear mapping, and you get your logits, and that's it."}, {"start": 396.76, "end": 402.76, "text": " So now, ABC parameterization is specified by a set of numbers, AL, BL tuples."}, {"start": 402.76, "end": 407.76, "text": " So we have for each of the layers in the MLP, we're going to have one of these tuples."}, {"start": 407.76, "end": 411.76, "text": " And then we have this number C, which is going to be the same across the whole network,"}, {"start": 411.76, "end": 416.76, "text": " because it's going to modulate the learning rate, as we're going to see in a second."}, {"start": 416.76, "end": 420.76, "text": " So let's understand why ABC, what does ABC mean."}, {"start": 420.76, "end": 423.76, "text": " And ABC basically comes from these three statements here."}, {"start": 423.76, "end": 426.76, "text": " So we have the A statement, B statement, and C statement."}, {"start": 426.76, "end": 434.76, "text": " The first one says, we parameterize each weight, so WL, as N raised to the power of minus AL WL,"}, {"start": 434.76, "end": 438.76, "text": " where WLs are the actual learnable, trainable parameters,"}, {"start": 438.76, "end": 446.76, "text": " and N is just basically the width of your layer, and this is just a constant."}, {"start": 446.76, "end": 450.76, "text": " So that means this is going to be for a given network architecture,"}, {"start": 450.76, "end": 455.76, "text": " and for a given set of ALs and these parameters, this is going to be constant,"}, {"start": 455.76, "end": 458.76, "text": " and this is learnable, so W is learnable."}, {"start": 458.76, "end": 461.76, "text": " So that's the A part of this parameterization."}, {"start": 461.76, "end": 468.76, "text": " The B part is we initialize each weight, so they just index using alpha and beta."}, {"start": 468.76, "end": 473.76, "text": " I'm not sure why they're not using something like I and J, which is much more common,"}, {"start": 473.76, "end": 476.76, "text": " but yeah, I guess fancy Greek symbols are cool."}, {"start": 476.76, "end": 481.76, "text": " So they initialize that as a Gaussian that has zero mean,"}, {"start": 481.76, "end": 485.76, "text": " and it has a variance of N raised to the power of minus 2 BL."}, {"start": 485.76, "end": 489.76, "text": " So that's where the BL parameters come into play."}, {"start": 489.76, "end": 496.76, "text": " And finally, the statement C here, the SGD learning rate is this nu times N minus C."}, {"start": 496.76, "end": 499.76, "text": " Let me delete this so you can see it a bit better."}, {"start": 499.76, "end": 505.76, "text": " So if I delete this, you can see we have nu times N raised to the power of minus C"}, {"start": 505.76, "end": 507.76, "text": " for some width independent nu."}, {"start": 507.76, "end": 509.76, "text": " So that's like the master."}, {"start": 509.76, "end": 513.76, "text": " I think they kind of call that the master learning rate."}, {"start": 513.76, "end": 518.76, "text": " Now, as you can understand, this ABC parameterization is very general."}, {"start": 518.76, "end": 525.76, "text": " So depending on how you set these parameters, you can get various different behaviors in this limit."}, {"start": 525.76, "end": 530.76, "text": " And this whole paper is going to be about this infinite width limit 
analysis."}, {"start": 530.76, "end": 532.76, "text": " So keep that in mind."}, {"start": 532.76, "end": 537.76, "text": " So here they mention a couple of instantiations,"}, {"start": 537.76, "end": 540.76, "text": " like specific instantiations of this ABC parameterization."}, {"start": 540.76, "end": 543.76, "text": " So one of those is NTK parameterization."}, {"start": 543.76, "end": 545.76, "text": " So that's neural tangent kernel."}, {"start": 545.76, "end": 547.76, "text": " And I'm fairly sure most of you don't know this."}, {"start": 547.76, "end": 550.76, "text": " So I'm going to do a small digression and explain what it is,"}, {"start": 550.76, "end": 556.76, "text": " because it's fairly important to understand NTK, to understand the theory behind this paper."}, {"start": 556.76, "end": 563.76, "text": " So NTK has you set A1 to 0, you set AL to 1 half for all of the outer layers."}, {"start": 563.76, "end": 566.76, "text": " So the first layer will have A1 equals to 0,"}, {"start": 566.76, "end": 571.76, "text": " and then all of the outer layers are going to have AL equals to 0.5."}, {"start": 571.76, "end": 576.76, "text": " BL equals 0 for all L, and then C equals 0."}, {"start": 576.76, "end": 581.76, "text": " Similarly, setting some other set of As, Bs, and Cs, you get this mean field parameterization,"}, {"start": 581.76, "end": 584.76, "text": " which is not that important for our discussion at the moment."}, {"start": 584.76, "end": 592.76, "text": " And we have also standard parameterization, whereby standard, they mean just the thing that's currently implemented as a default in PyTorch."}, {"start": 592.76, "end": 598.76, "text": " And I guess TensorFlow has pretty much the same parameterization."}, {"start": 598.76, "end": 601.76, "text": " OK, a brief detour into NTK."}, {"start": 601.76, "end": 602.76, "text": " Bear with me."}, {"start": 602.76, "end": 606.76, "text": " It's going to be, this whole paper is a bit tougher when it comes to theory,"}, {"start": 606.76, "end": 612.76, "text": " but hopefully it's going to pay off and you'll understand stuff a bit more after this video."}, {"start": 612.76, "end": 615.76, "text": " So what's NTK all about?"}, {"start": 615.76, "end": 625.76, "text": " NTK is basically this thing where people notice that if you have like a network with increasingly bigger and bigger width,"}, {"start": 625.76, "end": 629.76, "text": " what happens is during the training, as you're trying to learn on a particular dataset,"}, {"start": 629.76, "end": 635.76, "text": " people observe that the weights of the neural network are changing very, very slightly."}, {"start": 635.76, "end": 642.76, "text": " And extrapolating from there, they understood that we can basically linearize our neural network,"}, {"start": 642.76, "end": 651.76, "text": " which is highly unlinear, by just doing up like a Taylor expansion and just keeping the first term of the Taylor expansion."}, {"start": 651.76, "end": 653.76, "text": " So you basically end up with something like this."}, {"start": 653.76, "end": 665.76, "text": " So you have your highly nonlinear neural network F and it's basically you feed the input X and you parameterize it by some weights W."}, {"start": 665.76, "end": 669.76, "text": " So if you now, if we were to now just do the Taylor expansion, we'll get this."}, {"start": 669.76, "end": 679.76, "text": " We'll get F of X at, we're going to expand it at this, these weights W0, which are basically our initialization weights."}, {"start": 
679.76, "end": 683.76, "text": " So now we add the first term, which is the first order derivatives, basically."}, {"start": 683.76, "end": 692.76, "text": " So plus gradient of F at this same point W0."}, {"start": 692.76, "end": 698.76, "text": " So basically gradients of F with respect to neural network parameters."}, {"start": 698.76, "end": 703.76, "text": " And we need to multiply this with W minus W0."}, {"start": 703.76, "end": 707.76, "text": " And we just have to transpose this because this is going to be obviously a vector."}, {"start": 707.76, "end": 710.76, "text": " And so this is a row vector."}, {"start": 710.76, "end": 713.76, "text": " This is a column vector and we get our results."}, {"start": 713.76, "end": 716.76, "text": " So Taylor expansion in and of itself will now have higher order derivatives."}, {"start": 716.76, "end": 722.76, "text": " So basically you'll now have a Hessian here and then third order derivatives, et cetera, et cetera."}, {"start": 722.76, "end": 731.76, "text": " But if in certain conditions you can ignore all of the other terms and this will be a good enough approximation."}, {"start": 731.76, "end": 735.76, "text": " And so you've basically managed to linearize your neural network."}, {"start": 735.76, "end": 740.76, "text": " The approximation becomes better and better as the width goes to infinity."}, {"start": 740.76, "end": 743.76, "text": " So we have smaller and smaller approximation error."}, {"start": 743.76, "end": 746.76, "text": " So where does the kernel come into play?"}, {"start": 746.76, "end": 748.76, "text": " Why a neural tangent kernel?"}, {"start": 748.76, "end": 750.76, "text": " So the trick is the following."}, {"start": 750.76, "end": 761.76, "text": " So if you know your linear regression and if you notice here, this thing, this whole thing here, since we have a set of W0."}, {"start": 761.76, "end": 762.76, "text": " So W0 is kind of constant."}, {"start": 762.76, "end": 764.76, "text": " We initialize an approach somehow."}, {"start": 764.76, "end": 770.76, "text": " And so this is going to map X into some representation."}, {"start": 770.76, "end": 775.76, "text": " So this is going to be your F of X, Phi of X, sorry."}, {"start": 775.76, "end": 779.76, "text": " And basically in linear regression, you usually just have like W's here."}, {"start": 779.76, "end": 783.76, "text": " We have minus W0, but it doesn't change anything."}, {"start": 783.76, "end": 786.76, "text": " So we end up with a linear regression problem."}, {"start": 786.76, "end": 793.76, "text": " Obviously the features are nonlinear, but this is still called a linear regression because basically it's linear in the parameters."}, {"start": 793.76, "end": 796.76, "text": " W. 
The kernel comes into play if you do."}, {"start": 796.76, "end": 807.76, "text": " So kernel is basically if you take Phi of X and you do an outer product with Phi of X transposed."}, {"start": 807.76, "end": 813.76, "text": " So why would you want to do that is because sometimes it's too expensive to calculate these Phi of X's."}, {"start": 813.76, "end": 816.76, "text": " And so people just use directly this kernel."}, {"start": 816.76, "end": 820.76, "text": " And so this is your neural tangent kernel."}, {"start": 820.76, "end": 822.76, "text": " And I'm going to break it down a little bit more now."}, {"start": 822.76, "end": 835.76, "text": " So neural tangent kernel because, so kernel because it's a kernel, neural because we are basically using neural network to form these nonlinear transformations of our input X."}, {"start": 835.76, "end": 837.76, "text": " So that's the neural part."}, {"start": 837.76, "end": 838.76, "text": " And then we have the tangent part."}, {"start": 838.76, "end": 845.76, "text": " The tangent part is because we are basically creating this linear model, the tangent model of a neural network."}, {"start": 845.76, "end": 859.76, "text": " Now the reason I mentioned this neural tangent kernel, the reason I did this regression is because it turns out that during this neural, when we're in this NTK regime, the network F is changing."}, {"start": 859.76, "end": 860.76, "text": " So we are learning something."}, {"start": 860.76, "end": 864.76, "text": " But the thing is the weights and the features are not changing."}, {"start": 864.76, "end": 868.76, "text": " And that may be super counterintuitive because it is."}, {"start": 868.76, "end": 876.76, "text": " And there is a whole snippet I'm going to explain in a couple of minutes here that is going to kind of give you a rough intuition for why this is happening."}, {"start": 876.76, "end": 885.76, "text": " And because of that, precisely, we don't want to be in this kernel regime because we usually want to do transfer learning."}, {"start": 885.76, "end": 891.76, "text": " And in order to do transfer learning, you need to learn some useful weights."}, {"start": 891.76, "end": 894.76, "text": " You need to modify the weights and then transfer them to other tasks."}, {"start": 894.76, "end": 896.76, "text": " So that's some hand wavy explanation here."}, {"start": 896.76, "end": 898.76, "text": " I'm going to dig into a bit more details."}, {"start": 898.76, "end": 906.76, "text": " But to just kind of drive home the point of how this thing works quickly, if we have a one dimensional function."}, {"start": 906.76, "end": 912.76, "text": " So let's assume we have a one dimensional function here, something like this."}, {"start": 912.76, "end": 913.76, "text": " Let me zoom in a little bit more."}, {"start": 913.76, "end": 918.76, "text": " The reason why this Taylor approximation works is the following."}, {"start": 918.76, "end": 920.76, "text": " So this is your neural network."}, {"start": 920.76, "end": 926.76, "text": " Imagine this is your F, just a single dimensional case, but still it's your nonlinear function."}, {"start": 926.76, "end": 932.76, "text": " And you're trying to to basically describe this function using just these two terms."}, {"start": 932.76, "end": 937.76, "text": " So what you end up doing is if we have if this is w."}, {"start": 937.76, "end": 939.76, "text": " So this here is w zero."}, {"start": 939.76, "end": 941.76, "text": " We're basically doing the following thing."}, 
{"start": 941.76, "end": 944.76, "text": " So we are describing the network as."}, {"start": 944.76, "end": 956.76, "text": " So this is F of w zero F of w zero, and then we just add this term here, which is the derivative of this function at this point."}, {"start": 956.76, "end": 959.76, "text": " So this is basically this this this tangent here."}, {"start": 959.76, "end": 968.76, "text": " And then if you want to and you can see that for very close points, so for the points are very close to two w zero."}, {"start": 968.76, "end": 970.76, "text": " This approximation that I've just done."}, {"start": 970.76, "end": 973.76, "text": " So this tangent is going to be fairly close to the actual function."}, {"start": 973.76, "end": 976.76, "text": " So let me just change the color here to use green instead."}, {"start": 976.76, "end": 986.76, "text": " You can see the green approximates our function very well when we are in the in the in the neighborhood in the Epsilon neighborhood of w w zero."}, {"start": 986.76, "end": 987.76, "text": " OK, that's enough with NTK."}, {"start": 987.76, "end": 990.76, "text": " Let's get back to the last subroutine."}, {"start": 990.76, "end": 994.76, "text": " And that's explaining ABC parameterization."}, {"start": 994.76, "end": 1002.76, "text": " OK, so it turns out and this is a simple simplistic representation of this of this ABC parameterization space."}, {"start": 1002.76, "end": 1011.76, "text": " But it turns out roughly that we want to be so we want to find ABC parameterization such that we have certain properties."}, {"start": 1011.76, "end": 1018.76, "text": " So first of all, we don't want to be in the part of the space where the network training is unstable in this infinite width limit."}, {"start": 1018.76, "end": 1021.76, "text": " And that's this whole space, this gray area here."}, {"start": 1021.76, "end": 1034.76, "text": " So what an unstable means in this context is that the logits, so the last basically pre activations of the of the network of the output of the network start diverging."}, {"start": 1034.76, "end": 1037.76, "text": " So they start going towards infinity in this in this regime."}, {"start": 1037.76, "end": 1043.76, "text": " So that's bad. We want to have stable converging logits."}, {"start": 1043.76, "end": 1053.76, "text": " Next up, it's not enough that we just like are the logits are not exploding. We also don't want to be in the trivial regime."}, {"start": 1053.76, "end": 1062.76, "text": " Trivial would mean, OK, imagine you have logits are all like set to zero and you're learning your training your network, but nothing is changing."}, {"start": 1062.76, "end": 1064.76, "text": " The weights are just kind of frozen to zero."}, {"start": 1064.76, "end": 1067.76, "text": " And that's stable. OK, because the logits are not exploding."}, {"start": 1067.76, "end": 1073.76, "text": " But obviously it's trivial because we're not learning anything smart. 
So we don't want to be in that regime either."}, {"start": 1073.76, "end": 1078.76, "text": " So we end up with being inside of this of this polygon shape part of the space."}, {"start": 1078.76, "end": 1081.76, "text": " And again, this is what I explained."}, {"start": 1081.76, "end": 1088.76, "text": " This is the kernel regime where we are actually not learning any any useful features."}, {"start": 1088.76, "end": 1090.76, "text": " So that means we can't do any useful transfer learning."}, {"start": 1090.76, "end": 1092.76, "text": " And so we don't want to be here either."}, {"start": 1092.76, "end": 1099.76, "text": " And I want to explain why, as I said, I'm going to give you some intuition why how is this possible that we are learning F but not learning features."}, {"start": 1099.76, "end": 1103.76, "text": " It's kind of confusing. I'm still trying to grasp this theory completely myself."}, {"start": 1103.76, "end": 1108.76, "text": " I'm not an expert here, but like hopefully going to get a better feeling after this video."}, {"start": 1108.76, "end": 1114.76, "text": " So what we end up so we end up wanting to be in this part of the space."}, {"start": 1114.76, "end": 1117.76, "text": " So on these brown lines here, as you can see here."}, {"start": 1117.76, "end": 1124.76, "text": " So these are the lines where we have this feature learning regime where basically we're stable."}, {"start": 1124.76, "end": 1126.76, "text": " So the logits are not exploding."}, {"start": 1126.76, "end": 1128.76, "text": " We are doing something non trivial."}, {"start": 1128.76, "end": 1135.76, "text": " So we are learning features and we also want to be learning the features as much as possible."}, {"start": 1135.76, "end": 1140.76, "text": " But just below like the divergence mode where the logits start exploding."}, {"start": 1140.76, "end": 1145.76, "text": " So that's this maximal update parameterization point."}, {"start": 1145.76, "end": 1148.76, "text": " So that's the apex of this of this polygon here."}, {"start": 1148.76, "end": 1157.76, "text": " It turns out that the classical the standard parameterization, as they call it here, is actually unstable in the infinite width limit,"}, {"start": 1157.76, "end": 1166.76, "text": " which is the reason why we have the instability of the optimal hyperparameters as we are changing the width dimension."}, {"start": 1166.76, "end": 1176.76, "text": " OK, so now brace yourselves. 
I'm going to try and explain this the intuition behind why why is this possible?"}, {"start": 1176.76, "end": 1182.76, "text": " How can we be learning function F and not learning the actual features?"}, {"start": 1182.76, "end": 1185.76, "text": " So the features are not changing, but the F is changing."}, {"start": 1185.76, "end": 1188.76, "text": " OK, let's see what's going on there."}, {"start": 1188.76, "end": 1193.76, "text": " And this is from the appendix of their fourth paper, which I'm going to link down in the video description."}, {"start": 1193.76, "end": 1199.76, "text": " So how does the function change if the limit does not allow features to evolve?"}, {"start": 1199.76, "end": 1203.76, "text": " Then how does learning occur to answer this question?"}, {"start": 1203.76, "end": 1208.76, "text": " Note that Delta and I'm going to explain all of these symbols and what they mean in a second."}, {"start": 1208.76, "end": 1220.76, "text": " So Delta F T T means we are we are looking at function F after T steps of gradient of SGD of stochastic gradient descent."}, {"start": 1220.76, "end": 1226.76, "text": " Delta means that we are basically looking at the difference between F T and F zero."}, {"start": 1226.76, "end": 1229.76, "text": " So F zero is your neural network."}, {"start": 1229.76, "end": 1237.76, "text": " Sorry, your your logits F represents the logits of your network during the initialization."}, {"start": 1237.76, "end": 1240.76, "text": " So zero steps of SGD. That's the initialization."}, {"start": 1240.76, "end": 1243.76, "text": " So that's equal to V zero, blah, blah, blah, blah, blah."}, {"start": 1243.76, "end": 1250.76, "text": " And before I go there, let me just paint a picture here. What is we what is what is all of this?"}, {"start": 1250.76, "end": 1253.76, "text": " So basically they're just looking at an MLP."}, {"start": 1253.76, "end": 1257.76, "text": " And so we have something like this. So we have an MLP."}, {"start": 1257.76, "end": 1263.76, "text": " So this is your input. We have an MLP, a hidden layer, and we have another projection here."}, {"start": 1263.76, "end": 1268.76, "text": " And here are the logits. This matrix here is called V."}, {"start": 1268.76, "end": 1273.76, "text": " This matrix here is called U. And that's all we care about."}, {"start": 1273.76, "end": 1276.76, "text": " So now let's see. Let's get back here."}, {"start": 1276.76, "end": 1281.76, "text": " Yeah. One thing I forgot to mention. The activations here are called X."}, {"start": 1281.76, "end": 1287.76, "text": " So now we have our terminology, our symbols grounded into the physical network here."}, {"start": 1287.76, "end": 1295.76, "text": " OK, so equals V zero. And this this very fancy expression where they they take so easier is the the the weights of that network."}, {"start": 1295.76, "end": 1305.76, "text": " We during initialization, then they have XT, which are the the activations here basically because it's delta."}, {"start": 1305.76, "end": 1308.76, "text": " That's going to be XT minus X zero."}, {"start": 1308.76, "end": 1311.76, "text": " And then we have plus a couple of other terms."}, {"start": 1311.76, "end": 1317.76, "text": " So instead of me jabbering here, let me just expand this so that we understand what's going on."}, {"start": 1317.76, "end": 1320.76, "text": " We're going to end up with the following. So we zero."}, {"start": 1320.76, "end": 1326.76, "text": " XT minus X zero. 
OK, so that's the first term here."}, {"start": 1326.76, "end": 1329.76, "text": " Then we have plus we have this part here."}, {"start": 1329.76, "end": 1337.76, "text": " So we have VT minus V zero times X zero."}, {"start": 1337.76, "end": 1350.76, "text": " And then we have this this this last term, which is basically VT minus V zero times XT."}, {"start": 1350.76, "end": 1358.76, "text": " Minus X zero. And if we were to just kind of cancel out the terms here, what we end up with here very trivially is this."}, {"start": 1358.76, "end": 1369.76, "text": " So we have VT XT minus V zero. X zero."}, {"start": 1369.76, "end": 1372.76, "text": " And now the semantics behind this expression is the following."}, {"start": 1372.76, "end": 1378.76, "text": " So here we take the matrix at initialization, so matrix F."}, {"start": 1378.76, "end": 1385.76, "text": " So the weights at initialization, we multiply them with the signal here with the activations at initialization."}, {"start": 1385.76, "end": 1391.76, "text": " And we subtract that from the same the same thing just at step T."}, {"start": 1391.76, "end": 1400.76, "text": " So after T steps of SGD. And so this is kind of quantifying how are the logits of our network evolving throughout training?"}, {"start": 1400.76, "end": 1406.76, "text": " OK, so we want this to change. We want to to to if we are learning something, this thing is changing."}, {"start": 1406.76, "end": 1408.76, "text": " It's evolving. OK, let's continue here."}, {"start": 1408.76, "end": 1414.76, "text": " In short, then the evolution of FT of psi where psi is the the input."}, {"start": 1414.76, "end": 1423.76, "text": " In the NTK limit is predominantly due to V zero delta XT and delta VT X zero."}, {"start": 1423.76, "end": 1430.76, "text": " So why is that? That's because they show that this delta is not changing by a lot in the NTK regime."}, {"start": 1430.76, "end": 1432.76, "text": " So it's a very small quantity."}, {"start": 1432.76, "end": 1438.76, "text": " And that means that this when you have two small quantities multiplied with one another, this is going to rapidly go down to zero."}, {"start": 1438.76, "end": 1446.76, "text": " And so it doesn't it does not contribute to the evolution of our logits."}, {"start": 1446.76, "end": 1448.76, "text": " OK, next up, they give a concrete example."}, {"start": 1448.76, "end": 1451.76, "text": " So let's imagine now that T equals one."}, {"start": 1451.76, "end": 1455.76, "text": " So we've applied only a single SGD update to our network and we get the following expression."}, {"start": 1455.76, "end": 1460.76, "text": " So we get delta F one is going to be this fancy expression."}, {"start": 1460.76, "end": 1468.76, "text": " So all of these and raised to the power of minus two AV blah, blah, blah comes from our ABC parameterization."}, {"start": 1468.76, "end": 1476.76, "text": " If you recall correctly, basically by just plugging in the ABC parameterization into this expression here, we get the following three terms."}, {"start": 1476.76, "end": 1480.76, "text": " So the details here don't don't don't don't worry about the details."}, {"start": 1480.76, "end": 1483.76, "text": " Let's just see the big picture here."}, {"start": 1483.76, "end": 1486.76, "text": " So in NTP, so that's the neural tangent parameterization."}, {"start": 1486.76, "end": 1494.76, "text": " Every AV equals one half. 
So the term N raised to the power of blah, blah, blah is theta of one."}, {"start": 1494.76, "end": 1496.76, "text": " OK, confusing piece of notation here as well."}, {"start": 1496.76, "end": 1499.76, "text": " I'm going to I'm going to try and dissect that as well."}, {"start": 1499.76, "end": 1504.76, "text": " So basically, that means that this term here is theta of one, whatever that means."}, {"start": 1504.76, "end": 1512.76, "text": " And it roughly means that the this random vector, if you take the coordinate, the standard deviation is roughly theta of one."}, {"start": 1512.76, "end": 1518.76, "text": " So that means in the limit, it's going to be upper bounded and lower bounded by by some constant function."}, {"start": 1518.76, "end": 1524.76, "text": " Then they say on the other hand, this expression here N raised to the power of blah, blah, blah."}, {"start": 1524.76, "end": 1531.76, "text": " So that's that's this expression here is of O of one over square root of N."}, {"start": 1531.76, "end": 1535.76, "text": " OK, so that means that as N grows, this is going to go to zero."}, {"start": 1535.76, "end": 1537.76, "text": " It's going to converge to zero."}, {"start": 1537.76, "end": 1540.76, "text": " And finally, they say likewise for this last term."}, {"start": 1540.76, "end": 1546.76, "text": " So for this term here, they kind of manage to reform it into this different shape here."}, {"start": 1546.76, "end": 1549.76, "text": " And they show that this is also theta of one."}, {"start": 1549.76, "end": 1551.76, "text": " No, sorry, that's the term C actually."}, {"start": 1551.76, "end": 1556.76, "text": " So all in all, this term here is going to be theta of one over N."}, {"start": 1556.76, "end": 1564.76, "text": " So that's this here. 
And because we have a sum of N elements that's going to cancel out and this is also theta of one."}, {"start": 1564.76, "end": 1570.76, "text": " And so we end up in the grand scheme of things, we end up that this term is also theta of one."}, {"start": 1570.76, "end": 1573.76, "text": " So we end up with the following."}, {"start": 1573.76, "end": 1581.76, "text": " We end up with this being theta of one, this being theta of one and this being one over square root of N, which means this is going to go to zero."}, {"start": 1581.76, "end": 1590.76, "text": " So in summary, even though this delta X as N is approaching infinity is converging towards zero,"}, {"start": 1590.76, "end": 1596.76, "text": " which means that with big enough width of a network, even after a single step, because we have one here,"}, {"start": 1596.76, "end": 1602.76, "text": " even after a single step of this, we don't have any change whatsoever in the features."}, {"start": 1602.76, "end": 1607.76, "text": " So this X stays the same after a single step."}, {"start": 1607.76, "end": 1612.76, "text": " On the other hand, we see that the F, so the logits of the neural network are actually evolving."}, {"start": 1612.76, "end": 1618.76, "text": " And that's why we are able to learn the task because of this weird nonlinear relationship."}, {"start": 1618.76, "end": 1621.76, "text": " Okay, that was the best I could do for now."}, {"start": 1621.76, "end": 1623.76, "text": " Let's now continue."}, {"start": 1623.76, "end": 1625.76, "text": " Hopefully you got something out of that."}, {"start": 1625.76, "end": 1628.76, "text": " So I mentioned this theta expression."}, {"start": 1628.76, "end": 1631.76, "text": " Basically, they define it quite formally here."}, {"start": 1631.76, "end": 1637.76, "text": " Basically, what that means is that your vector X, and N is kind of unfortunate here,"}, {"start": 1637.76, "end": 1642.76, "text": " it should be subscript because N means Nth element in the sequence and not X raised to the power of N."}, {"start": 1642.76, "end": 1650.76, "text": " So we take the norm of the Nth random vector in our sequence, we square it, we divide it by N,"}, {"start": 1650.76, "end": 1653.76, "text": " and we take a square root of all of that."}, {"start": 1653.76, "end": 1663.76, "text": " And basically this sequence here is theta of N raised to the power of minus A,"}, {"start": 1663.76, "end": 1668.76, "text": " which means we can find by the usual definition of big theta,"}, {"start": 1668.76, "end": 1674.76, "text": " we can find constants A and B such that we can bound that sequence of elements here."}, {"start": 1674.76, "end": 1680.76, "text": " So how I like to think about it is, and it's a great simplification,"}, {"start": 1680.76, "end": 1683.76, "text": " is imagine all of the coordinates here are the same."}, {"start": 1683.76, "end": 1687.76, "text": " Then if you just can expand this, this is going to be X1,"}, {"start": 1687.76, "end": 1693.76, "text": " so the first coordinate of your random vector X raised to the power of 2 plus second coordinate."}, {"start": 1693.76, "end": 1697.76, "text": " And because they're the same, let's just kind of put X1 here, square."}, {"start": 1697.76, "end": 1701.76, "text": " And we'll have, so plus plus plus, so we'll have N of these."}, {"start": 1701.76, "end": 1705.76, "text": " So we end up with NX1 squared."}, {"start": 1705.76, "end": 1709.76, "text": " And so because we have N here, so this thing here cancels out with this one."}, 
{"start": 1709.76, "end": 1711.76, "text": " And so we end up with something like this."}, {"start": 1711.76, "end": 1715.76, "text": " So you end up with the result that the standard deviation is roughly equal to,"}, {"start": 1715.76, "end": 1719.76, "text": " of a single coordinate of this vector here, X is equal to this."}, {"start": 1719.76, "end": 1727.76, "text": " So that's how you can parse this result that delta F of t is theta of 1."}, {"start": 1727.76, "end": 1730.76, "text": " Okay, let me show you just a couple more results from that paper."}, {"start": 1730.76, "end": 1735.76, "text": " And we're going back to the main paper, which is going to be shorter than the explanation,"}, {"start": 1735.76, "end": 1737.76, "text": " this explanation I just gave you here."}, {"start": 1737.76, "end": 1744.76, "text": " So for L, so for layers L from 0 to big L, which is the depth of your MLP,"}, {"start": 1744.76, "end": 1756.76, "text": " we say that this WL, so that the weights of a particular layer L is updated maximally if this delta WTL XTL-1,"}, {"start": 1756.76, "end": 1761.76, "text": " so basically you take the activations from your previous layer at step t"}, {"start": 1761.76, "end": 1768.76, "text": " and you multiply that with the weights of your L layer at step t of SGD,"}, {"start": 1768.76, "end": 1770.76, "text": " but you just have the delta here."}, {"start": 1770.76, "end": 1776.76, "text": " So basically if this thing has theta of 1 coordinates for some training routine,"}, {"start": 1776.76, "end": 1781.76, "text": " some time that's greater than or equal to 1, and some input Xi."}, {"start": 1781.76, "end": 1785.76, "text": " So what this roughly means is that if you think about what this is,"}, {"start": 1785.76, "end": 1789.76, "text": " you're multiplying with weight matrix, you're multiplying the activations,"}, {"start": 1789.76, "end": 1795.76, "text": " you're getting the pre-activations, so you're basically saying that the pre-activations are evolving as the training progresses."}, {"start": 1795.76, "end": 1799.76, "text": " And then they find this complex expression here that shows that you need,"}, {"start": 1799.76, "end": 1804.76, "text": " so the A, Bs and Cs need to respect this complicated equation here"}, {"start": 1804.76, "end": 1811.76, "text": " if you were to get this maximal update parameterization."}, {"start": 1811.76, "end": 1816.76, "text": " They finally state that this is, there is a unique stable ABC parameterization that has this property,"}, {"start": 1816.76, "end": 1823.76, "text": " and that's what we saw on this diagram here as the apex of this ABC space."}, {"start": 1823.76, "end": 1828.76, "text": " Okay, I'm well aware that this was all very hand-wavy."}, {"start": 1828.76, "end": 1831.76, "text": " It's hard to explain this amount of math in a single video,"}, {"start": 1831.76, "end": 1838.76, "text": " but hopefully you got some understanding of this previous thing that came before this paper."}, {"start": 1838.76, "end": 1843.76, "text": " So yeah, in a nutshell, we have this ABC parameterization."}, {"start": 1843.76, "end": 1850.76, "text": " We somehow find, they somehow find the relationship between A, Bs and Cs"}, {"start": 1850.76, "end": 1858.76, "text": " such that we have this maximal learning of features and being stable and all of these nice properties."}, {"start": 1858.76, "end": 1863.76, "text": " Now it's much easier to just understand what you need to do as a practitioner."}, {"start": 1863.76, "end": 
1868.76, "text": " You just need to set a couple of numbers, and this is all going to work magically for you in the background."}, {"start": 1868.76, "end": 1871.76, "text": " Okay, so let's get back to the paper."}, {"start": 1871.76, "end": 1874.76, "text": " So they say here algorithm one is the following."}, {"start": 1874.76, "end": 1878.76, "text": " So they first parameterize target model in maximal update parameterization."}, {"start": 1878.76, "end": 1882.76, "text": " So that means pick A, Bs and Cs in a particular way."}, {"start": 1882.76, "end": 1887.76, "text": " Then you just reduce the model, and because you parameterize your model,"}, {"start": 1887.76, "end": 1891.76, "text": " you'll know how to change your initialization, you'll know how to change your weight,"}, {"start": 1891.76, "end": 1896.76, "text": " your parameter coefficients, and you'll know how to change your learning rate."}, {"start": 1896.76, "end": 1899.76, "text": " Then you tune a smaller version of your model,"}, {"start": 1899.76, "end": 1903.76, "text": " and finally you copy the tuned hyperparameters to the target model, to the big model."}, {"start": 1903.76, "end": 1910.76, "text": " Because of all of this, you can now have all of your compute budget used to search for the optimal,"}, {"start": 1910.76, "end": 1916.76, "text": " for the most optimal hyperparameters, and then you can just apply them to the bigger model"}, {"start": 1916.76, "end": 1921.76, "text": " and have the ABC parameterization, the maximal update parameterization do all the work for you."}, {"start": 1921.76, "end": 1927.76, "text": " So let's see what parameters, which hyperparameters are transferable, and they say here,"}, {"start": 1927.76, "end": 1932.76, "text": " and we saw this already, so initialization ones, parameter multipliers, and optimization related,"}, {"start": 1932.76, "end": 1934.76, "text": " especially learning rate."}, {"start": 1934.76, "end": 1941.76, "text": " Transferred across our, so we see width here, so this means we can transfer these hyperparameters"}, {"start": 1941.76, "end": 1944.76, "text": " across these different scaling dimensions."}, {"start": 1944.76, "end": 1949.76, "text": " So we do have the theoretical guarantee for width, and we already saw that we don't have"}, {"start": 1949.76, "end": 1953.76, "text": " the same theoretical guarantee for depth or batch size or any of these,"}, {"start": 1953.76, "end": 1957.76, "text": " but they empirically, so that's what asterisk means here,"}, {"start": 1957.76, "end": 1966.76, "text": " the empirically shows that this roughly remains true even when scaling across these different,"}, {"start": 1966.76, "end": 1969.76, "text": " like scaling dimensions such as depth."}, {"start": 1969.76, "end": 1974.76, "text": " What is not transferable are regularization techniques such as dropout, weight decay,"}, {"start": 1974.76, "end": 1979.76, "text": " because intuitive explanation would be the parameterization only cares about the model,"}, {"start": 1979.76, "end": 1985.76, "text": " not about the actual data set size, and because of that you cannot count in the regularization correctly."}, {"start": 1985.76, "end": 1990.76, "text": " Okay, let's continue here."}, {"start": 1990.76, "end": 1996.76, "text": " Worth mentioning, they say that in this work, we primarily focus on hyperparameter transfer"}, {"start": 1996.76, "end": 1998.76, "text": " with respect to training loss."}, {"start": 1998.76, "end": 2002.76, "text": " In settings where 
regularization is not the bottleneck to test performance,"}, {"start": 2002.76, "end": 2008.76, "text": " as in all of our experiments here, this also translates to efficacy in terms of test loss."}, {"start": 2008.76, "end": 2013.76, "text": " Okay, and that's what we saw in all of these diagrams here, so we had the training loss,"}, {"start": 2013.76, "end": 2018.76, "text": " and we saw that the training loss remains stable, but if the regularization is not,"}, {"start": 2018.76, "end": 2022.76, "text": " as they say here, if the regularization is not the bottleneck,"}, {"start": 2022.76, "end": 2030.76, "text": " then basically even for the test, the test loss will stay similarly stable across various scales."}, {"start": 2030.76, "end": 2038.76, "text": " Let's continue here, and let me give you some more intuition for why this whole theory works the way it does."}, {"start": 2038.76, "end": 2044.76, "text": " So a bit more math ahead, so brace yourself, let's try and take this one."}, {"start": 2044.76, "end": 2054.76, "text": " So the central limit theorem says that if x1 through xn are IID samples, so independent identical distributions,"}, {"start": 2054.76, "end": 2060.76, "text": " so samples from a zero mean unit variance distribution, so this can be arbitrary distribution,"}, {"start": 2060.76, "end": 2063.76, "text": " it's just important that we have zero mean and unit variance,"}, {"start": 2063.76, "end": 2071.76, "text": " and then if we form this novel random variable like this, so basically a sum of all of those random samples,"}, {"start": 2071.76, "end": 2075.76, "text": " divided by the square root of n, where n is the number of the samples,"}, {"start": 2075.76, "end": 2083.76, "text": " this converges to a standard Gaussian distribution as n goes to infinity."}, {"start": 2083.76, "end": 2089.76, "text": " Therefore, we can say that 1 over square root of n is the right order of scaling factor cn,"}, {"start": 2089.76, "end": 2096.76, "text": " such that cn times x1 through xn converges to something non-trivial."}, {"start": 2096.76, "end": 2103.76, "text": " So in contrast, if we were to change cn to 1 over n, this whole thing would converge to zero,"}, {"start": 2103.76, "end": 2109.76, "text": " or if cn was 1, then this whole thing will blow up in variance as n goes to infinity."}, {"start": 2109.76, "end": 2115.76, "text": " So why would we care about this central limit theorem right now, and this sum and all of this?"}, {"start": 2115.76, "end": 2120.76, "text": " Well, the answer is kind of simple once you understand it."}, {"start": 2120.76, "end": 2125.76, "text": " Basically, let's imagine we have an MLP here, so let's imagine we have our hidden layer,"}, {"start": 2125.76, "end": 2131.76, "text": " and we have our previous layer, and so how the MLP connectivity pattern looks like is we have,"}, {"start": 2131.76, "end": 2136.76, "text": " as you can see here, a bunch of connections from previous layer to the next layer."}, {"start": 2136.76, "end": 2143.76, "text": " And you can see here that this exactly has that same form as this one here."}, {"start": 2143.76, "end": 2149.76, "text": " So we have some weights here, so we have x1, x2, all the way to here, x4,"}, {"start": 2149.76, "end": 2154.76, "text": " and this will be xn in general when we have much more nodes here."}, {"start": 2154.76, "end": 2162.76, "text": " Assuming that the inputs are also zero mean and variance 1, then this multiplication between these two"}, {"start": 2162.76, "end": 
2170.76, "text": " will end up having zero mean and finite variance samples here, and then we sum them up,"}, {"start": 2170.76, "end": 2179.76, "text": " and that's why we, and now once we sum them up, if we don't want this thing to explode or converge to zero,"}, {"start": 2179.76, "end": 2184.76, "text": " so if we don't want this to go to zero or infinity in the infinite width limit,"}, {"start": 2184.76, "end": 2189.76, "text": " then we have to scale this with 1 over square root of n."}, {"start": 2189.76, "end": 2195.76, "text": " That's how this mathematics translates to what we care about."}, {"start": 2195.76, "end": 2200.76, "text": " Okay, let's continue here, so let's see what's going on here."}, {"start": 2200.76, "end": 2204.76, "text": " They say now suppose we would like to minimize the function fn of c,"}, {"start": 2204.76, "end": 2211.76, "text": " which is defined as an expectation across all of these random samples,"}, {"start": 2211.76, "end": 2220.76, "text": " f, and then as the input we have this expression c times x1 plus all the way through xn."}, {"start": 2220.76, "end": 2225.76, "text": " So we're trying to find that c such that we can minimize this function,"}, {"start": 2225.76, "end": 2231.76, "text": " and then they say if we reparameterize c as alpha over square root of n,"}, {"start": 2231.76, "end": 2238.76, "text": " for alpha being a real number, then by the central limit theorem we can see that this converges to basically"}, {"start": 2238.76, "end": 2242.76, "text": " your Gaussian distribution with variance alpha squared."}, {"start": 2242.76, "end": 2247.76, "text": " In the case that alpha is equal to 1, then we converge to standard Gaussian here."}, {"start": 2247.76, "end": 2253.76, "text": " Basically any other parameterization would lead to this either exploding or going to zero,"}, {"start": 2253.76, "end": 2256.76, "text": " and that's not something that we care about, it's not interesting,"}, {"start": 2256.76, "end": 2261.76, "text": " and that's why we have to kind of choose this type of parameterization inspired by the CLT theorem."}, {"start": 2261.76, "end": 2267.76, "text": " Finally they say then for sufficiently large n, the optimal alpha, so alpha n,"}, {"start": 2267.76, "end": 2273.76, "text": " this star means it's optimal, n means we have n of these samples,"}, {"start": 2273.76, "end": 2281.76, "text": " and that's the argument of this function, so that optimal alpha n star should be close to alpha big n star"}, {"start": 2281.76, "end": 2288.76, "text": " for any n big n that's bigger than this small n, and like in particular when n is equal to infinity."}, {"start": 2288.76, "end": 2294.76, "text": " And this precisely means that we can transfer the optimal, so cn or alpha n,"}, {"start": 2294.76, "end": 2299.76, "text": " because those two are kind of connected directly here, they're proportional because this is a constant,"}, {"start": 2299.76, "end": 2304.76, "text": " so we can transfer from the optimal parameter from a smaller problem to a larger problem."}, {"start": 2304.76, "end": 2308.76, "text": " Finally, because the transfer algorithm is simply copying alpha,"}, {"start": 2308.76, "end": 2313.76, "text": " we say that parameterization alpha over square root of n is the correct parameterization for this problem."}, {"start": 2313.76, "end": 2319.76, "text": " So what I've basically said here is that this c is some hyperparameter, whatever that hyperparameter is,"}, {"start": 2319.76, "end": 2325.76, "text": " 
and with this parameterization it's going to give us the minimum of this function f,"}, {"start": 2325.76, "end": 2333.76, "text": " even for smaller as well as for the bigger models, so as n goes to this n which goes to infinity."}, {"start": 2333.76, "end": 2343.76, "text": " That's a rough explanation, so this is fairly hard to understand even for me who've been reading a bunch of papers over the many last years,"}, {"start": 2343.76, "end": 2346.76, "text": " and I can't imagine how this might look to beginners,"}, {"start": 2346.76, "end": 2353.76, "text": " so just having some writing some blog around this whole work would be super, super beneficial,"}, {"start": 2353.76, "end": 2362.76, "text": " so if the authors see this video, I strongly encourage them to write a blog and try and explain this with less math and with more pictures."}, {"start": 2362.76, "end": 2367.76, "text": " It would be super useful, just like this thing I tried to do here basically."}, {"start": 2367.76, "end": 2372.76, "text": " Okay, let's continue on and see the rest of the paper."}, {"start": 2372.76, "end": 2380.76, "text": " Okay, I think we're through with most of the math, so this part now is going to be hopefully a lot simpler compared to what we've seen previously."}, {"start": 2380.76, "end": 2387.76, "text": " Okay, the point they're trying to make throughout this whole paper is that hyperparameters do not transfer conventionally,"}, {"start": 2387.76, "end": 2391.76, "text": " and we've seen that in the first graphs."}, {"start": 2391.76, "end": 2402.76, "text": " We've seen that you cannot just take a set of hyperparameters you found for a small model and just assume they're going to work for the bigger model,"}, {"start": 2402.76, "end": 2411.76, "text": " and they say that here, so on the other hand, a non-trivial fraction of papers in deep learning fixes all HPs when comparing against baselines,"}, {"start": 2411.76, "end": 2416.76, "text": " which reflects an assumption that the optimal HPs should be stable,"}, {"start": 2416.76, "end": 2425.76, "text": " and my comment to that would be basically it reflects a lack of resources and hoping for the best rather than people thinking this is going to work."}, {"start": 2425.76, "end": 2427.76, "text": " That's usually the case, I guess."}, {"start": 2427.76, "end": 2434.76, "text": " Okay, so having said that, let's now see some more experiments they've done."}, {"start": 2434.76, "end": 2443.76, "text": " On the left-hand side here, we can see the MLP, I think a two-hidden layer MLP, trained with the standard parameterization,"}, {"start": 2443.76, "end": 2448.76, "text": " so SP stands for standard parameterization, and microP stands for maximal."}, {"start": 2448.76, "end": 2457.76, "text": " By the way, I have no idea why they're using microP to describe maximal update parameterization."}, {"start": 2457.76, "end": 2462.76, "text": " In any case, we can see here that as they're training with different widths,"}, {"start": 2462.76, "end": 2468.76, "text": " so going from 256 all the way to 8K192,"}, {"start": 2468.76, "end": 2473.76, "text": " we can see that the optimum when it comes to learning rate is shifting,"}, {"start": 2473.76, "end": 2479.76, "text": " like literally for an order of magnitude at least here from minus 7 to minus 9, even more."}, {"start": 2479.76, "end": 2483.76, "text": " And this is a log scale, so it's multiple orders of magnitude."}, {"start": 2483.76, "end": 2492.76, "text": " On the other hand, we can see that 
for microP, for their method, it's basically stable here."}, {"start": 2492.76, "end": 2499.76, "text": " So this graph is MLP, the one we saw above, so this one here was actually transformers,"}, {"start": 2499.76, "end": 2504.76, "text": " so they do show that for various architectures, this thing actually works in practice."}, {"start": 2504.76, "end": 2508.76, "text": " It's not just a theoretical paper without any results."}, {"start": 2508.76, "end": 2516.76, "text": " It actually has awesome results to back up all of this theory, which is one of the nicest parts about the paper."}, {"start": 2516.76, "end": 2521.76, "text": " Okay, so let's see that famous maximal update parameterization."}, {"start": 2521.76, "end": 2524.76, "text": " Here is what you need to do in practice."}, {"start": 2524.76, "end": 2532.76, "text": " You initialize your weights in a particular manner, and the only difference compared to your standard parameterization is a couple of tweaks."}, {"start": 2532.76, "end": 2539.76, "text": " So here in the third layer, so for the third set of weights, you just have to do n squared instead of n."}, {"start": 2539.76, "end": 2547.76, "text": " And this is something you regularly see. This is just 1 over fan in, so the number of neurons entering your neuron."}, {"start": 2547.76, "end": 2558.76, "text": " This intuitively makes a lot of sense because if you have, basically, if you have, again, a neuron here and you have n neurons going into this neuron here,"}, {"start": 2558.76, "end": 2569.76, "text": " then if you, so if you have f in, so fan in, number of neurons here, so it makes sense that you want to initialize these weights as 1 over f in,"}, {"start": 2569.76, "end": 2572.76, "text": " so as to compensate for the sum of this f in element."}, {"start": 2572.76, "end": 2581.76, "text": " So that's the intuitive idea for why these methods work, and what they've shown is that these methods do not work as expected in the infinite width limit,"}, {"start": 2581.76, "end": 2585.76, "text": " so that's why we have the maximal update parameterization here."}, {"start": 2585.76, "end": 2589.76, "text": " Finally, the SGD learning rates are modified per layer."}, {"start": 2589.76, "end": 2598.76, "text": " You can see that the only modifications are that for the first layer you multiply by n here, and for the last layer you multiply by 1 over n,"}, {"start": 2598.76, "end": 2603.76, "text": " which means we have per layer learning rate in this parameterization."}, {"start": 2603.76, "end": 2611.76, "text": " So practically it's very simple to actually implement this, and you don't have to care about this because they actually open sourced a package that does this for you."}, {"start": 2611.76, "end": 2616.76, "text": " I'm going to link it down in the video description as well so you can check it out."}, {"start": 2616.76, "end": 2623.76, "text": " Here are some empirical validations they've done for different hyperparameters and scaling across width and depth."}, {"start": 2623.76, "end": 2628.76, "text": " So as I said, for width this whole theory was developed for the infinite width limit,"}, {"start": 2628.76, "end": 2631.76, "text": " whereas the depth is not guaranteed theoretically,"}, {"start": 2631.76, "end": 2642.76, "text": " but you can see that for various different hyperparameters like learning rate schedule and learning rate and these initializations, standard deviation, etc."}, {"start": 2642.76, "end": 2647.76, "text": " we can see that the curves are fairly 
stable across all of these diagrams."}, {"start": 2647.76, "end": 2661.76, "text": " I guess this is the only thing that kind of deviates compared to the width analog here, so it doesn't have a clear minimum as up here."}, {"start": 2661.76, "end": 2669.76, "text": " Finally, one interesting fact is that, so you're all familiar, I guess, with transformers and how the actual attention mechanism works,"}, {"start": 2669.76, "end": 2678.76, "text": " and you know that in the original paper they used to divide by square root of D so to avoid the explosion and keep the variance controlled,"}, {"start": 2678.76, "end": 2685.76, "text": " and it turns out that in this infinite width analysis you actually want to divide by D, not by square root of D,"}, {"start": 2685.76, "end": 2694.76, "text": " which is a super interesting result considering that if you think, like, the original transformer paper did give a nice explanation for why square root of D works,"}, {"start": 2694.76, "end": 2699.76, "text": " and now all of a sudden one over D is actually better in this infinite width limit."}, {"start": 2699.76, "end": 2704.76, "text": " Yeah, this whole paper feels like magic, but the best thing of all is that it actually works."}, {"start": 2704.76, "end": 2709.76, "text": " It's not just a pure theoretical paper without the results to back it up."}, {"start": 2709.76, "end": 2716.76, "text": " In these diagrams they show how logits, attention logits, and word embeddings are stable across various widths,"}, {"start": 2716.76, "end": 2724.76, "text": " as you can see on the x-axis, for the proposed parameterization versus the standard parameterization,"}, {"start": 2724.76, "end": 2731.76, "text": " where we can see that logits and attention logits start exploding with width, whereas word embeddings are actually constant,"}, {"start": 2731.76, "end": 2736.76, "text": " and that's why it's problematic to only try, and I think they try to just manipulate, like, a learning rate,"}, {"start": 2736.76, "end": 2741.76, "text": " and then this becomes stable, but then this diverges or converges to zero or something,"}, {"start": 2741.76, "end": 2750.76, "text": " and so it's only, yeah, obviously only their parameterization leads to consistently stable results across all of the widths."}, {"start": 2750.76, "end": 2753.76, "text": " Okay, let's continue here and let's see what else is interesting."}, {"start": 2753.76, "end": 2754.76, "text": " Okay, this one."}, {"start": 2754.76, "end": 2758.76, "text": " The transfer across depth only works for the pre-layer norm transformer,"}, {"start": 2758.76, "end": 2763.76, "text": " and there was a very cool recent paper called DeepNet that addressed this problem, actually."}, {"start": 2763.76, "end": 2769.76, "text": " It introduced this deep norm layer, and they managed to train transformers that are super deep,"}, {"start": 2769.76, "end": 2775.76, "text": " like, 1,000 plus layers deep, as you can see here, and it's a very interesting chart,"}, {"start": 2775.76, "end": 2781.76, "text": " because you have on the x-axis chronology, like, time, and then on the y-axis, we have the metric of depth,"}, {"start": 2781.76, "end": 2786.76, "text": " and they are definitely like new soda when it comes to depth on this benchmark."}, {"start": 2786.76, "end": 2793.76, "text": " Yeah, so they mentioned somewhere in the paper the following thing."}, {"start": 2793.76, "end": 2804.76, "text": " So these guys here find that the pre-norm residual connections, pre-LN, 
improve the stability of transformers compared to post-norm connections."}, {"start": 2804.76, "end": 2809.76, "text": " However, the gradients of pre-LN at bottom layers tend to be larger than at top layers,"}, {"start": 2809.76, "end": 2813.76, "text": " leading to a degradation in performance compared with post-LN."}, {"start": 2813.76, "end": 2820.76, "text": " So here it was already shown that pre-LN improves stability, whereas it loses some of the performance,"}, {"start": 2820.76, "end": 2824.76, "text": " and here you can see that exactly that's what they're dealing with here."}, {"start": 2824.76, "end": 2829.76, "text": " The transfer across definitely works for pre-LN transformers."}, {"start": 2829.76, "end": 2835.76, "text": " So the DeepNet paper shows how to reconcile these two worlds, so to take the best out of the two worlds,"}, {"start": 2835.76, "end": 2842.76, "text": " the performance and the stability, so the stability of pre-LN and the performance of post-LN module,"}, {"start": 2842.76, "end": 2847.76, "text": " by just applying a couple of tricks, so just kind of multiplying the signal here by alpha,"}, {"start": 2847.76, "end": 2852.76, "text": " so we have the residual plus the signal, and this is the pre-LN form, as you can see,"}, {"start": 2852.76, "end": 2856.76, "text": " because we have the sum here before the LN, that's why it's called pre-LN,"}, {"start": 2856.76, "end": 2864.76, "text": " and just modifying it with alpha here and initializing some of these weights by beta,"}, {"start": 2864.76, "end": 2868.76, "text": " they manage to get to amazing depths."}, {"start": 2868.76, "end": 2875.76, "text": " And so combining this paper, understanding, totally understanding the theory behind DeepNet paper"}, {"start": 2875.76, "end": 2881.76, "text": " and this maximal object parameterization sounds like a very interesting research idea"}, {"start": 2881.76, "end": 2886.76, "text": " that promises both depth and both infinite depth and infinite width,"}, {"start": 2886.76, "end": 2889.76, "text": " and so StackMoreLayers team is winning again."}, {"start": 2889.76, "end": 2894.76, "text": " Okay, guys, I thought that this connection with DeepNet paper was quite valuable,"}, {"start": 2894.76, "end": 2897.76, "text": " so thought mentioning it."}, {"start": 2897.76, "end": 2899.76, "text": " So now let's see the results."}, {"start": 2899.76, "end": 2902.76, "text": " There's a bunch of tables, I'm going to focus only on a couple of these,"}, {"start": 2902.76, "end": 2905.76, "text": " so here we have BERT pre-training, here we can see BERT large,"}, {"start": 2905.76, "end": 2912.76, "text": " and we can see that the model speedup goes for this microtransfer technique at 22x,"}, {"start": 2912.76, "end": 2918.76, "text": " which means the smaller model takes 22x less time to train compared to the big model,"}, {"start": 2918.76, "end": 2923.76, "text": " and all in all, the total speedup for the whole hyperparameter tuning procedure"}, {"start": 2923.76, "end": 2928.76, "text": " takes hundreds and hundreds of times less time to train than the BERT large,"}, {"start": 2928.76, "end": 2931.76, "text": " than directly training the big model."}, {"start": 2931.76, "end": 2934.76, "text": " Interestingly, we can also see that it transfers perfectly,"}, {"start": 2934.76, "end": 2937.76, "text": " it even achieves better results than training it directly,"}, {"start": 2937.76, "end": 2942.76, "text": " and that's the consequence of this amazing theory behind this 
paper."}, {"start": 2942.76, "end": 2949.76, "text": " GPT-3, they showed that if we take the smaller 6.7 billion parameter model"}, {"start": 2949.76, "end": 2957.76, "text": " and train it with microtransfer, you get better results even compared to the 13 billion size model,"}, {"start": 2957.76, "end": 2965.76, "text": " and that's the consequence of OpenAI not having enough resources to try out different hyperparameter combinations for such a big model."}, {"start": 2965.76, "end": 2969.76, "text": " And now with this technique, you can do much more thorough HP search"}, {"start": 2969.76, "end": 2974.76, "text": " and then just apply the same hyperparameters to the bigger model,"}, {"start": 2974.76, "end": 2981.76, "text": " and as you can see, they get pretty much better results across most of these datasets."}, {"start": 2981.76, "end": 2984.76, "text": " There are some outliers here for some reason,"}, {"start": 2984.76, "end": 2992.76, "text": " it performs way worse compared to these two on this Lamberta, few shots, not sure what's going on there."}, {"start": 2992.76, "end": 2997.76, "text": " Finally, interestingly enough, the same as what DeepNet did for depth,"}, {"start": 2997.76, "end": 3003.76, "text": " these guys showed that with increasing width, they always get better and better performance,"}, {"start": 3003.76, "end": 3009.76, "text": " so lower training loss in this example compared to standard parameterization where it's not stable."}, {"start": 3009.76, "end": 3014.76, "text": " You can see that the pattern holds up until a certain point depending on the learning rate,"}, {"start": 3014.76, "end": 3018.76, "text": " and then given enough width, it's going to explode."}, {"start": 3018.76, "end": 3024.76, "text": " So here, because the learning rate is bigger, it explodes even sooner than on the middle chart here."}, {"start": 3024.76, "end": 3031.76, "text": " And in another example, another chart showing that wider is better,"}, {"start": 3031.76, "end": 3036.76, "text": " basically you can see that throughout the whole training, we have the training tokens on the x-axis,"}, {"start": 3036.76, "end": 3040.76, "text": " so this is plotting during the actual training procedure,"}, {"start": 3040.76, "end": 3046.76, "text": " and you can see that the validation loss is getting smaller and smaller as we're increasing width,"}, {"start": 3046.76, "end": 3049.76, "text": " although there is some saturation going on."}, {"start": 3049.76, "end": 3055.76, "text": " That's pretty much it, guys. This was a super heavy paper, lots of equations,"}, {"start": 3055.76, "end": 3059.76, "text": " the previous paper is even harder than this one, there is so much math,"}, {"start": 3059.76, "end": 3068.76, "text": " and it looks like a super impactful work, and it'd be very useful to make some visualizations"}, {"start": 3068.76, "end": 3072.76, "text": " and make this a bit easier to understand."}, {"start": 3072.76, "end": 3075.76, "text": " But yeah, hopefully you found this explanation somewhat useful,"}, {"start": 3075.76, "end": 3078.76, "text": " I'm aware it was very hand-wavy and not rigorous,"}, {"start": 3078.76, "end": 3082.76, "text": " but hopefully some of the mental models of how I think about all of this helped you out."}, {"start": 3082.76, "end": 3087.76, "text": " If it did, consider sharing the video out and subscribe to this channel,"}, {"start": 3087.76, "end": 3091.76, "text": " join the Discord community, and until next time, bye-bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=3EQJAo_k8Ak
VOS: Learning What You Don't Know By Virtual Outlier Synthesis | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this paper I cover the "VOS: Learning What You Don't Know By Virtual Outlier Synthesis" paper - where they introduce a clever way of sampling OOD (out-of-distribution) data in the feature space in order to produce a more robust ID (in-distribution) image classification/object detection that's OOD aware. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2202.01197 ✅ Code: https://github.com/deeplearning-wisc/vos ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to the OOD problem 04:40 High-level VOS explanation 09:38 Alternative synthesis approach (GANs) 11:40 Diving deeper into the method 17:40 Uncertainty loss component 22:45 Inference-time OOD detection 24:05 Method step-by-step overview 26:35 Results 27:45 Computational cost 29:00 Ablations, visualization 30:23 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #generalization #OOD #virtualoutliersynthesis
What's up guys, in this video I'm covering this paper called VOS, Learning What You Don't Know by Virtual Outlier Synthesis, by this nice group of people here whose names I don't want to butcher with my poor pronunciation. And what the paper is all about is handling this out of distribution data in the context of image classification and in the context of object detection. So here we can see on this left hand side image, basically we see a moose being detected and then labeled as a pedestrian with high confidence. And this image may not even be the best one to present here because pedestrian is semantically very close to what this is. I mean the moose is crossing the street so it is a pedestrian in a way. But you get a point, like here it could have been mislabeled as like a truck or whatnot. And that would be very bad, especially if the model is very confident about that. So why this happens is because these models are trained on certain ID data, so the in-distribution data, data sets. And then once you show them certain objects which were not a part of their training set, so the OOD data, they usually are very confident and very incorrect at the same time. So this paper is trying to address exactly that. So why does this type of behavior happen in the first place? And the reason is due to the training procedure. You're basically training these object detection models in such a way that you're just feeding them the ID data. So as soon as you give them some object that was not seen in the training data set, then you have no idea and no guarantees of how the model is going to behave with that new data point because the model's decision boundaries may be completely unknown outside of your domain, of your initial training data set. So to understand that better, let's shift our focus to the two diagrams on the right-hand side here. So we have two different models here. And let me first help you understand how to parse these diagrams. So we have our data set, which is a contrived data set of two deep points sampled from a Gaussian mixture model here. So we have these three clusters here, here, here, and here. And the model is trained to predict whether the data is in distribution or out of distribution. And you can see that this color bar on the right-hand side, so if it's very red, that means model is fairly certain that data point belongs, is in the ID data indeed. And if the color is like very deep blue, then basically it thinks that it's OOD data, so out of distribution data. And here you can see that this landscape, that these decision boundaries are quite bad, because if we were to take, and let me change the color to something that's not blue or red here, let's say green. So if we were to take a data point here, this model, however it was trained, would say that this data point is most certainly ID, so that means in distribution, which obviously doesn't make any sense. As you can see here, as a human looking at this, it's very far off from all of these three clusters and doesn't make any sense. On the other hand, the model arbitrarily decided to, that these data points here are OOD. So that's a behavior that's not desired. And then on the right hand side here, we can see the VOS, so that's the model that was proposed in this paper. We can see a much nicer behavior, because we have that model is predicting that the nearby data points here are actually very likely to be ID. 
And then that slowly dropping as we're radially going outwards from the clusters, you can see that it goes towards the white color, which means it's becoming less and less certain. It's becoming more and more certain that the data point is OOD data point. And so we can see that these data points here are clearly marked as OOD. And that's a behavior, that's the type of behavior we want to enforce. So now this is, we can all agree that this is the behavior we want to see. This is the type of decision boundaries we want to see in the models, but how do we enforce such decision boundaries? And that's what I'm going to explain you next up. I'm going to focus a lot on the method in this paper and just briefly show you the results a bit later. So let's focus on this high level diagram here. So I mentioned that this paper works in the context of image detection, object detection, and image classification. For the sake of argument, and for the sake of this method, it doesn't even matter. We can focus on image classification because it's even simpler or we can just do object detection. It doesn't matter both ways. So you can see here the pipeline looks the following. You feed an image into some backbone network, which is usually, I guess, either like a CNN or some type of a transformer, like vision transformer or some hybrid or whatnot. So that's less important for the sake of this paper. And we have this proposal generator, we generate some bounding boxes. And the main point here is we somehow take our input images, our input data points, which they denote as x. So let me just change the color. So they denote those as x. And they feed also the bounding box here. And they somehow form these embeddings that they denote as h. And if you take images from various different classes, so let's say we have, let me zoom in a bit here. So let's say we have some class like a that's blue circle class, whatever that means. And then we take some images that are like yellow square. And we basically feed all of those. So we take a bunch of images that belong to the blue circle class, we feed them through this like basically backbone networks. And we can see that they tend to cluster together here. Now that will obviously not happen initially as the network is highly random, like random. But like they actually start doing this process a bit later throughout the training. Initially, they just use these losses for the classification and for the basically location or the bounding box loss here. And only later do they include the most important component and the most important contribution of this paper. And that's this uncertainty loss here. But I'll get to those details in a bit later. Now let's just keep it high level. So for the time being, let's assume that the embedding vectors are indeed clustered. So basically for a particular class, the embedding vectors are going to be clustered roughly together in the same part of the space. The main idea is to form these class conditional Gaussians. So these here, using the embedding vectors here. So you can imagine calculating a mean vector for this cluster, and basically a covariance matrix. And then using those two matrices to model like a Gaussian. And what this distribution tells you is whether a certain embedding is likely to belong to that class. So if you fit it, as you can see here on the image like this, then this model is fairly certain then that if we were to take this point here, it's highly unlikely that this data point here belongs to this class here. 
So they form the same types of class conditional Gaussians for all of the classes. So you can see here for the yellow squares, they form a Gaussian. For the blue circles, they form a Gaussian as well. And now the cool thing is you can, in this feature space, you can start sampling the outliers. Because by definition, we know that the ID data for this class lies somewhere in this radius here. So then that means that we can sample these less likely samples and use them as the outliers for that particular class. And that's exactly what I do. And now the final idea is to just use both those outliers as well as the ID data, form these energy scores. So energy terms, which they also call uncertainty scores. And make sure to push the energy for the ID data towards minus infinity and to push the scores or the energy term for the outliers towards plus infinity. So you just want to make that distinction. And they kind of depict that here as this separate, you can see like a fairly nice separation between the ID data and the OOD data as a consequence of having this uncertainty loss here. So yeah, we'll get to the details in a couple of seconds. But like that's the high level overview here. So this is the main component. This uncertainty loss component is the main one. And finally, we can see that the model this time correctly labels moose as OOD. So whatever type of object we had here, if it wasn't in the training data set, the model is going to correctly label all of those as OOD. Ideally, of course, that's what we're trying to get at. And this model seems to have a fairly nice results. So now let's start digging a bit deeper. They say here, while a straightforward idea is to train generative models such as GANs, synthesizing images in the high dimensional pixel space can be difficult to optimize. Instead, our key idea is to synthesize virtual outliers in the future space, which is more tractable given lower dimensionality. So we've seen that the system is actually dealing with embedding vectors in this so-called feature space. And one like alternative way how you could approach this problem, which is a fairly intuitive one, is to use GANs. So that means you would train a GAN architecture here additionally, that would generate OOD data. So you'd have a GAN here. And you would basically be generating images which are OOD and then feeding those into your pipeline. And then basically want to push this uncertainty score to different parts of the spectrum for the ID data versus the OOD data. But it's hard to train GANs. And also it's a question how you would, how would you cover the whole space of OOD data? Like would you just train GAN on some different data set or multiple different data sets that have the OOD classes? Or would you just try to push the image generation towards something that's not ID? So I guess those two approaches pop to my head. Although I'm not sure whether those images generated in such a manner would even be looking like natural images and whether that would be helpful. This method here definitely seems to have its own appeal in the computational sense because it's dealing with embedding vectors which are inherently lower dimensional as well as just the semantics. Just you're basically sampling data right near to your decision boundary. I mean this is a two-dimensional data set so that's the case. But in the case of multi-dimensional vectors it's going to be a bit different. But still this definitely seems to be a better approach than generating in the image space. Okay, nice. 
Let's get back to the formula here. So this is just a like a formula explaining what I just explained visually. Basically this probability distribution here for a condition on a specific class k, it's modeling the probability of these embedding vectors as and it's modeling those as a Gaussian. And now the question is how do you form the mu case so that's the means of this of this Gaussian distribution and as well as the covariance. And the trick is to, as I said, to just use the clusters that belong to a specific class. So you can see here if we want to generate the mean for a particular class we take only those embedding vectors whose class is k, which makes a lot of sense I guess. You just average them out and that's how you get the mu k and then you just form the covariance matrix actually using all of the classes. So here I'm a bit confused why not just use the specific class here. So my gut feeling would be to first try to form these covariance matrices using only class specific information and not all of the classes. You can see this sum over k. It's kind of confusing. I didn't see any ablations on this. If anybody has an idea why this might be better feel free to comment down below. So you can see one potential problem with this thing here because after every single weight update of your system you'll have to recompute these and that's quite expensive. So you'll have to go through all of to pass your all of your data points again because this time after the weight update you'll have different embedding vectors h and so you'll have to keep recomputing this. And so how they make this computationally tractable is the following. They say that we use online estimation for efficient training where we maintain a class conditional queue with this many object instances from each class. And I think in practice they use something around thousand of these object instances and when they say object instances what I mean is they basically feed these embedding vectors h in separate queues. So every class will have a dedicated queue basically. So they mention here that in each iteration we enqueue the embeddings of objects to their corresponding class conditional queues and the queue the same number of object embeddings. Cool that's how we form the class conditional gaussians now in the feature space. So now let's see how we sample the virtual outliers and it's fairly simple. So you can see here that the set of virtual outliers for a class k denoted like this. So it's a set of vks such that basically the probability of those vks has to be smaller than some epsilon. And this whole ugly expression is just your multi-dimensional multivariate Gaussian distribution and this one is parameterized using the mean and the covariance for this particular class k. So what this tells you basically is to sample these outliers from low probability regions of the class conditional Gaussian. So that may be a mouthful but it's fairly simple. Basically you can imagine in this simple case of a 2D data set here that the peak of this Gaussian is somewhere here and then as you're radially moving outwards here, so if you're radially moving outwards the probability is dropping down. So if I were to do a cross cut through this distribution here and basically let's take this as the origin point then how this would look like is the following. So you have something like this, you have the peak at zero, so this is zero and you'd be sampling your outliers starting at certain cutoff probabilities. 
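To make equations (1) through (3) concrete before finishing that epsilon intuition, here is a minimal sketch of how I would implement the per-class means, the shared covariance, and the low-likelihood virtual outlier sampling. The names (queues, eps, num_candidates) are my own placeholders, not the authors' code, and I threshold the log-density rather than the density, which is the same condition up to a log.

import torch

def estimate_gaussians(queues, reg=1e-4):
    # queues: dict mapping class id -> tensor [Q, D] of queued embeddings h
    mus = {k: q.mean(dim=0) for k, q in queues.items()}            # eq. (1): per-class mean
    centered = torch.cat([q - mus[k] for k, q in queues.items()])  # eq. (2): covariance shared
    sigma = centered.T @ centered / centered.shape[0]              # across all classes
    sigma = sigma + reg * torch.eye(sigma.shape[0])                # small ridge for stability
    return mus, sigma

def sample_virtual_outliers(mu_k, sigma, eps, num_candidates=10000):
    # eq. (3): draw candidates from the class-conditional Gaussian and keep only the
    # low-likelihood ones (log-density below the threshold eps, chosen as discussed later)
    dist = torch.distributions.MultivariateNormal(mu_k, covariance_matrix=sigma)
    candidates = dist.sample((num_candidates,))
    log_probs = dist.log_prob(candidates)
    return candidates[log_probs < eps]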
So they denote that cutoff as epsilon, meaning that, let's say, somewhere here is your epsilon, and you'd be sampling only these data points here. Okay, that explanation was more for those of you who are new to this channel. Let's now continue and understand how they leverage these outliers to form the uncertainty loss component I mentioned above on the high-level diagram. And the main idea is to form this log partition function that looks something like this. So we have a minus log of a sum over the classes, where big K denotes the number of classes in your image classification or object detection task, and you sum up these terms here. So I guess the main question is what f_k is exactly, and f_k is just a logit: you take the embedding vector h and you linearly project it to the logits, so you basically have a linear layer here, and as the output you have the logits. The length of this vector is big K, the number of classes, so this is just your dense or feedforward layer, and that's how you form the f_k's. And now they're going to make sure that E for ID data goes to minus infinity and that E for OOD data goes to plus infinity, and that's going to force these logits to have a different distribution depending on whether you have an ID or an OOD data point. That's the high-level intuitive idea for why this works. So let's see this formula here. Here is the uncertainty component I was mentioning; you can see how it's constructed. We have this indicator function here: when this expression is true, it equals one, otherwise it equals zero. So you want to make sure that this one is false almost all of the time, and you also want to make sure that this other one is false almost all of the time, because that way you're going to minimize the uncertainty loss component. Your goal is to minimize this, and in order to minimize this, as written, you'd want the energy component for the outliers to go towards minus infinity and the energy component for the ID data to go to plus infinity, which is completely the opposite of what I mentioned a couple of seconds ago, because I think there is a sign error in this formula. I think the signs here should be switched, so this should be "smaller than" and this should be "greater than or equal to". I don't want to confuse you here: the whole point is to make sure that the energy terms go to opposite parts of the spectrum. Whether the ID data goes to minus infinity or to plus infinity does not matter, as long as the OOD data goes to the opposite part of the spectrum. But I'm fairly sure this is a mistake here; somebody may correct me if I'm wrong. Okay, so that was the intuitive explanation. You obviously want to include the expectations here, so you have an expectation over the data distribution and an expectation over the virtual outlier distribution, and you're trying to minimize this whole sum.
Now, we don't know how to optimize an indicator function directly, so what they do is use a smooth approximation of this indicator function (they also call it the 0/1 loss) using the binary sigmoid loss. So here is the same formula, just in a smoothed form you can actually optimize, and you can notice a couple of things. First off, if you were to just replace this term here with x, this is your familiar sigmoid function. So let's do a simple limit analysis: if E were to go to minus infinity, then both of these terms would go to plus infinity, and so the numerator over the denominator converges to one in the limit. That means the log of one goes to zero, and thus we are minimizing this. So this formula here is forcing the energy of the ID data towards minus infinity: as E of x goes to minus infinity (I'm just going to ignore the theta part, it's obviously parameterized by theta), the loss goes to zero, at least for this term here. And vice versa here: when E for the virtual outliers goes to plus infinity, this term goes to zero, because e raised to the power of minus infinity goes to zero, and then you have log of one, which is zero. So this term goes down to zero as the energy of the virtual outliers approaches plus infinity. And that's it, it's fairly simple. Obviously the sign does not matter; it's just important that you separate the energy terms for ID and OOD data. Okay, let's continue. This here is basically the same thing, just for the object detection task. Obviously we were ignoring the bounding boxes so far because we were dealing with image classification; they keep switching between image classification and object detection, but for the purpose of understanding the fundamental idea, which is the virtual outliers and how to use them to separate ID from OOD data, you don't need the specifics, so I'm going to skip over it. This is the final loss: it's basically a sum of your classical losses, so this is the classification loss, this is the location, i.e. bounding box, loss in case you're dealing with object detection (if you're just doing image classification you can ditch this one), plus a weighted uncertainty loss here, which is the main part of this paper. And you're trying to minimize the expected value of this loss across your data set by tweaking the thetas, and that's it. Now, once you train the model in such a way, you can use that energy term to get a probabilistic output of whether your data is ID or OOD. In the case of this formula here, you can see that if the model was properly trained, then for ID data E goes to minus infinity, which means this whole thing goes to one. So the output lives on a spectrum between zero and one, where one indicates that it's ID and zero indicates that it's OOD. Finally, you can set a certain threshold gamma: when you're above the threshold you say, hey, this is ID, otherwise you say, hey, this is an OOD data point. And that's the final thing we need in order to do the type of classifications we saw in the example above.
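Here is how I would sketch that energy term, the smoothed uncertainty loss, and the inference-time score in code. This is my own sketch, not the authors' implementation; the sign convention follows the "ID towards minus infinity" reading above, and as noted, only the separation between ID and outlier energies really matters. The logits are simply the output of the dense layer applied to an embedding h (for ID data) or to a virtual outlier v.

import torch
import torch.nn.functional as F

def energy(logits):
    # free energy: minus log-sum-exp over the K class logits f_k
    return -torch.logsumexp(logits, dim=-1)

def uncertainty_loss(logits_id, logits_outlier):
    e_id = energy(logits_id)
    e_out = energy(logits_outlier)
    # binary sigmoid surrogate of the 0/1 indicator loss:
    loss_id = -F.logsigmoid(-e_id).mean()   # goes to 0 as E(ID data) -> -infinity
    loss_out = -F.logsigmoid(e_out).mean()  # goes to 0 as E(virtual outliers) -> +infinity
    return loss_id + loss_out

def ood_score(logits):
    # inference-time score in [0, 1]: close to 1 for ID data, close to 0 for OOD data,
    # compared against a threshold gamma to make the final ID vs OOD decision
    return torch.sigmoid(-energy(logits))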
So let me show you what I mean by that. Basically, for this thing here, the moose, the output of that G model was towards zero, so lower than gamma, and that's why it was labeled as an OOD object. And that's it. Okay, let me go back and show you the algorithm; the whole picture is fairly simple. We are given the ID data, so the data set consisting of data points: images, corresponding bounding boxes (again, in the case of the object detection task) and corresponding labels, and we have N of those data points. We have a randomly initialized detector with parameters theta, so that's our neural network. We have the queue size for the Gaussian density estimation; all of those are parameters of the algorithm. We have the weight for the uncertainty regularization, beta, and we have epsilon, which is, if you remember, the threshold we use to determine how to sample the virtual outliers: the probability under the class-conditional Gaussian has to be lower than this parameter epsilon for a sample to count as a virtual outlier. Okay, so let's see what the algorithm does. During training, we update the ID queues with the training objects. That is a somewhat weird statement; you basically embed a certain number of data points for each of the classes, take the embedding vectors, and update the corresponding queues. Then you estimate the multivariate distributions based on these ID embedding vectors using equations one and two, and equations one and two are these ones, so let me go back up here: you form the mean and the covariance matrix from your embedding vectors. That's the next step of the algorithm. Then they say: sample virtual outliers V using equation three. That means we take equation three here, so having formed the class-conditional Gaussians using the parameters above, the mean and the covariance, we sample a set of these outliers from those distributions. That's the next step. And then they say: calculate the regularization loss using equation five and update the parameters using equation seven. Equation five is basically how we calculate the uncertainty component, and the whole loss consists of the classification loss, the localization loss in case we have object detection, and the uncertainty component. (I'll show a small pseudocode sketch of this loop below.) Finally, once you train the model like that, you can use equations eight and nine to determine whether the object at hand is OOD or ID, and that's pretty much it. I'm aware there were a lot of nitty-gritty details. Now let me show you the results. The first thing to keep in mind is that they are only comparing against methods that are not using external outlier data sets. If you do have an external outlier data set, those methods tend to perform better, and they did have some results comparing against those methods where VOS is a bit worse. Now, the problem with those data sets is that they are usually very expensive; depending on your domain of application they're going to be more or less expensive to acquire. So having a method like this one, where you don't need any additional data set, you just need your ID data plus this clever way of sampling the virtual outliers in the feature space, is very appealing, and it works very nicely compared to other baselines. Here you can see that across all of these metrics it outperforms the previous methods.
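As promised, here is a schematic pseudocode version of the training step just described, reusing the estimate_gaussians, sample_virtual_outliers, and uncertainty_loss helpers sketched earlier. The backbone, head, update_queues, cls_loss and loc_loss names, and the beta and eps values, are placeholders for whatever the real implementation uses, so this is an outline of the control flow rather than the authors' code.

import torch

def vos_training_step(images, boxes, labels, backbone, head, queues, optimizer,
                      beta=0.1, eps=-20.0):
    h = backbone(images, boxes)                          # embeddings of the ID objects
    update_queues(queues, h, labels)                     # step 1: refresh the per-class queues
    mus, sigma = estimate_gaussians(queues)              # step 2: equations (1) and (2)
    outliers = torch.cat([sample_virtual_outliers(mus[k], sigma, eps)
                          for k in queues])              # step 3: equation (3)
    logits_id, box_preds = head(h)
    logits_out, _ = head(outliers)
    loss = (cls_loss(logits_id, labels)                  # step 4: full objective, with the
            + loc_loss(box_preds, boxes)                 # uncertainty term weighted by beta
            + beta * uncertainty_loss(logits_id, logits_out))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # step 5: update theta
    return loss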
But again, keep in mind what they are comparing against; I think they even say it here: all baseline methods are based on a model trained on ID data only. That's the important part here. Okay, cool. Now, one thing worth thinking about is the following: for each in-distribution class, they use a thousand samples to estimate the class-conditional Gaussians, so that means they have to take a thousand images, embed them, and then calculate the mean and the covariance matrix. And then they say: since the threshold epsilon can be infinitesimally small, we instead choose epsilon based on the t-th smallest likelihood in a pool of 10,000 samples per class generated from the class-conditional Gaussian distribution. So once you form the Gaussian distribution using those thousand samples, then, as far as I understand, they take another 10,000 samples, and whatever was the lowest probability among those samples, that's what they use as the epsilon, and then they use that epsilon to sample the virtual outliers. Now, the problem here is that this can be very expensive, and the thing is, in none of these tables do they compare the computational budget used to train these methods. So that's something to definitely keep in mind if you're on a budget. Anyways, let's continue here and see what other results they have. Here they compare against some other types of synthesis methods. I did mention the GAN approach, where you try to generate the outliers, the OOD data, in the image space and then use those to train the system; they also compare against these negative proposal methods, etc., and they again show superior results here. Okay, let's continue. They also do all these ablations, like having K plus one classes instead of K classes, where that extra plus-one class is used as a detector of whether the embedding vector belongs to ID data or to OOD data; that's the trick there. Okay, let me just show you the visualizations and we are pretty much done. On the top here you can see Faster R-CNN and on the bottom you can see VOS, and you can see that VOS successfully detects that this object here and some other objects are OOD, whereas Faster R-CNN fails completely and is confident in its prediction: it's confident that this object is a car with 97 percent certainty, which is weird. Cool. Let me just show you one single detail here in the appendix, if I can find it, and that's this part here. They say that, importantly, we show that uncertainty regularization should be added in the middle of the training; if it is added too early, the feature space is not sufficiently discriminative for Gaussian distribution estimation. And that's something I mentioned at the beginning of the video: initially you're just minimizing the classification loss (or the object detection loss if you're doing object detection), and only later during training, when the feature space becomes sufficiently discriminative, i.e. the embedding vectors from the same class tend to cluster together, can you start using this method, and then it works. In any case, hopefully you liked this video; if you did, consider subscribing, leave a comment down below if you have anything to ask or any feedback, and until next time, bye-bye.
[{"start": 0.0, "end": 4.16, "text": " What's up guys, in this video I'm covering this paper called VOS,"}, {"start": 4.16, "end": 7.28, "text": " Learning What You Don't Know by Virtual Outlier Synthesis,"}, {"start": 8.08, "end": 14.16, "text": " by this nice group of people here whose names I don't want to butcher with my poor pronunciation."}, {"start": 14.88, "end": 21.76, "text": " And what the paper is all about is handling this out of distribution data in the context of"}, {"start": 21.76, "end": 26.96, "text": " image classification and in the context of object detection."}, {"start": 26.96, "end": 35.92, "text": " So here we can see on this left hand side image, basically we see a moose being detected and then"}, {"start": 35.92, "end": 41.92, "text": " labeled as a pedestrian with high confidence. And this image may not even be the best one to"}, {"start": 41.92, "end": 48.88, "text": " present here because pedestrian is semantically very close to what this is. I mean the moose is"}, {"start": 48.88, "end": 54.32, "text": " crossing the street so it is a pedestrian in a way. But you get a point, like here it could have"}, {"start": 54.32, "end": 60.0, "text": " been mislabeled as like a truck or whatnot. And that would be very bad, especially if the model"}, {"start": 60.0, "end": 65.84, "text": " is very confident about that. So why this happens is because these models are trained on certain"}, {"start": 66.4, "end": 74.48, "text": " ID data, so the in-distribution data, data sets. And then once you show them certain objects which"}, {"start": 74.48, "end": 79.92, "text": " were not a part of their training set, so the OOD data, they usually are very confident and"}, {"start": 79.92, "end": 84.96000000000001, "text": " very incorrect at the same time. So this paper is trying to address exactly that. So why does this"}, {"start": 84.96000000000001, "end": 89.92, "text": " type of behavior happen in the first place? And the reason is due to the training procedure."}, {"start": 89.92, "end": 94.72, "text": " You're basically training these object detection models in such a way that you're just feeding"}, {"start": 94.72, "end": 102.08, "text": " them the ID data. So as soon as you give them some object that was not seen in the training data set,"}, {"start": 102.08, "end": 108.4, "text": " then you have no idea and no guarantees of how the model is going to behave with that new data"}, {"start": 108.4, "end": 116.4, "text": " point because the model's decision boundaries may be completely unknown outside of your domain,"}, {"start": 116.4, "end": 123.84, "text": " of your initial training data set. So to understand that better, let's shift our focus to the two"}, {"start": 123.84, "end": 131.04000000000002, "text": " diagrams on the right-hand side here. So we have two different models here. And let me first help"}, {"start": 131.04000000000002, "end": 137.36, "text": " you understand how to parse these diagrams. So we have our data set, which is a contrived data set"}, {"start": 137.36, "end": 143.92000000000002, "text": " of two deep points sampled from a Gaussian mixture model here. So we have these three clusters here,"}, {"start": 144.48000000000002, "end": 151.28, "text": " here, here, and here. And the model is trained to predict whether the data is in distribution or"}, {"start": 151.28, "end": 156.96, "text": " out of distribution. 
And you can see that this color bar on the right-hand side, so if it's very"}, {"start": 156.96, "end": 164.4, "text": " red, that means model is fairly certain that data point belongs, is in the ID data indeed. And if the"}, {"start": 164.4, "end": 172.8, "text": " color is like very deep blue, then basically it thinks that it's OOD data, so out of distribution"}, {"start": 172.8, "end": 177.68, "text": " data. And here you can see that this landscape, that these decision boundaries are quite bad,"}, {"start": 177.68, "end": 183.36, "text": " because if we were to take, and let me change the color to something that's not blue or red here,"}, {"start": 183.36, "end": 189.20000000000002, "text": " let's say green. So if we were to take a data point here, this model, however it was trained,"}, {"start": 189.2, "end": 196.48, "text": " would say that this data point is most certainly ID, so that means in distribution, which obviously"}, {"start": 196.48, "end": 201.76, "text": " doesn't make any sense. As you can see here, as a human looking at this, it's very far off from"}, {"start": 201.76, "end": 207.51999999999998, "text": " all of these three clusters and doesn't make any sense. On the other hand, the model arbitrarily"}, {"start": 207.51999999999998, "end": 216.72, "text": " decided to, that these data points here are OOD. So that's a behavior that's not desired. And then"}, {"start": 216.72, "end": 221.52, "text": " on the right hand side here, we can see the VOS, so that's the model that was proposed in this paper."}, {"start": 221.52, "end": 227.28, "text": " We can see a much nicer behavior, because we have that model is predicting that the nearby data"}, {"start": 227.28, "end": 233.76, "text": " points here are actually very likely to be ID. And then that slowly dropping as we're radially"}, {"start": 233.76, "end": 240.56, "text": " going outwards from the clusters, you can see that it goes towards the white color, which means"}, {"start": 240.56, "end": 245.84, "text": " it's becoming less and less certain. It's becoming more and more certain that the data point is OOD"}, {"start": 245.84, "end": 255.36, "text": " data point. And so we can see that these data points here are clearly marked as OOD. And that's"}, {"start": 255.36, "end": 260.88, "text": " a behavior, that's the type of behavior we want to enforce. So now this is, we can all agree that"}, {"start": 260.88, "end": 267.44, "text": " this is the behavior we want to see. This is the type of decision boundaries we want to see in the"}, {"start": 267.44, "end": 272.56, "text": " models, but how do we enforce such decision boundaries? And that's what I'm going to explain"}, {"start": 272.56, "end": 279.2, "text": " you next up. I'm going to focus a lot on the method in this paper and just briefly show you the"}, {"start": 279.2, "end": 286.8, "text": " results a bit later. So let's focus on this high level diagram here. So I mentioned that this paper"}, {"start": 286.8, "end": 292.88, "text": " works in the context of image detection, object detection, and image classification. For the sake"}, {"start": 292.88, "end": 297.44, "text": " of argument, and for the sake of this method, it doesn't even matter. We can focus on image"}, {"start": 297.44, "end": 301.84000000000003, "text": " classification because it's even simpler or we can just do object detection. It doesn't matter"}, {"start": 301.84, "end": 306.88, "text": " both ways. So you can see here the pipeline looks the following. 
You feed an image into some backbone"}, {"start": 306.88, "end": 313.28, "text": " network, which is usually, I guess, either like a CNN or some type of a transformer, like vision"}, {"start": 313.28, "end": 318.71999999999997, "text": " transformer or some hybrid or whatnot. So that's less important for the sake of this paper."}, {"start": 319.59999999999997, "end": 327.2, "text": " And we have this proposal generator, we generate some bounding boxes. And the main point here is"}, {"start": 327.2, "end": 335.28, "text": " we somehow take our input images, our input data points, which they denote as x. So let me just"}, {"start": 335.28, "end": 341.2, "text": " change the color. So they denote those as x. And they feed also the bounding box here. And they"}, {"start": 341.2, "end": 352.48, "text": " somehow form these embeddings that they denote as h. And if you take images from various different"}, {"start": 352.48, "end": 357.84000000000003, "text": " classes, so let's say we have, let me zoom in a bit here. So let's say we have some class like a"}, {"start": 357.84000000000003, "end": 363.52000000000004, "text": " that's blue circle class, whatever that means. And then we take some images that are like yellow"}, {"start": 363.52000000000004, "end": 369.12, "text": " square. And we basically feed all of those. So we take a bunch of images that belong to the blue"}, {"start": 369.68, "end": 376.24, "text": " circle class, we feed them through this like basically backbone networks. And we can see that"}, {"start": 376.24, "end": 381.76, "text": " they tend to cluster together here. Now that will obviously not happen initially as the network is"}, {"start": 381.76, "end": 388.56, "text": " highly random, like random. But like they actually start doing this process a bit later throughout"}, {"start": 388.56, "end": 393.84, "text": " the training. Initially, they just use these losses for the classification and for the"}, {"start": 394.64, "end": 400.64, "text": " basically location or the bounding box loss here. And only later do they include the most important"}, {"start": 400.64, "end": 405.03999999999996, "text": " component and the most important contribution of this paper. And that's this uncertainty loss here."}, {"start": 405.68, "end": 411.36, "text": " But I'll get to those details in a bit later. Now let's just keep it high level. So for the time"}, {"start": 411.36, "end": 416.32, "text": " being, let's assume that the embedding vectors are indeed clustered. So basically for a particular"}, {"start": 416.32, "end": 420.40000000000003, "text": " class, the embedding vectors are going to be clustered roughly together in the same part of"}, {"start": 420.40000000000003, "end": 430.16, "text": " the space. The main idea is to form these class conditional Gaussians. So these here, using the"}, {"start": 430.16, "end": 436.48, "text": " embedding vectors here. So you can imagine calculating a mean vector for this cluster,"}, {"start": 436.48, "end": 447.84000000000003, "text": " and basically a covariance matrix. And then using those two matrices to model like a Gaussian. And"}, {"start": 447.84000000000003, "end": 453.76, "text": " what this distribution tells you is whether a certain embedding is likely to belong to that"}, {"start": 453.76, "end": 460.40000000000003, "text": " class. 
So if you fit it, as you can see here on the image like this, then this model is fairly"}, {"start": 460.40000000000003, "end": 466.32, "text": " certain then that if we were to take this point here, it's highly unlikely that this data"}, {"start": 466.32, "end": 474.15999999999997, "text": " point here belongs to this class here. So they form the same types of class conditional Gaussians for"}, {"start": 474.15999999999997, "end": 480.24, "text": " all of the classes. So you can see here for the yellow squares, they form a Gaussian. For the"}, {"start": 480.24, "end": 485.28, "text": " blue circles, they form a Gaussian as well. And now the cool thing is you can, in this feature"}, {"start": 485.28, "end": 490.8, "text": " space, you can start sampling the outliers. Because by definition, we know that the"}, {"start": 490.8, "end": 499.2, "text": " ID data for this class lies somewhere in this radius here. So then that means that we can sample"}, {"start": 499.2, "end": 505.68, "text": " these less likely samples and use them as the outliers for that particular class. And that's"}, {"start": 505.68, "end": 513.6, "text": " exactly what I do. And now the final idea is to just use both those outliers as well as the ID data,"}, {"start": 513.6, "end": 520.96, "text": " form these energy scores. So energy terms, which they also call uncertainty scores. And make sure"}, {"start": 520.96, "end": 529.9200000000001, "text": " to push the energy for the ID data towards minus infinity and to push the scores or the energy term"}, {"start": 529.9200000000001, "end": 536.08, "text": " for the outliers towards plus infinity. So you just want to make that distinction. And they kind of"}, {"start": 536.08, "end": 541.6800000000001, "text": " depict that here as this separate, you can see like a fairly nice separation between the ID data"}, {"start": 541.68, "end": 546.7199999999999, "text": " and the OOD data as a consequence of having this uncertainty loss here. So yeah, we'll get to the"}, {"start": 546.7199999999999, "end": 552.0799999999999, "text": " details in a couple of seconds. But like that's the high level overview here. So this is the main"}, {"start": 552.0799999999999, "end": 558.64, "text": " component. This uncertainty loss component is the main one. And finally, we can see that the model"}, {"start": 558.64, "end": 567.12, "text": " this time correctly labels moose as OOD. So whatever type of object we had here, if it wasn't in the"}, {"start": 567.12, "end": 572.5600000000001, "text": " training data set, the model is going to correctly label all of those as OOD. Ideally, of course,"}, {"start": 572.5600000000001, "end": 579.2, "text": " that's what we're trying to get at. And this model seems to have a fairly nice results. So now let's"}, {"start": 579.2, "end": 584.96, "text": " start digging a bit deeper. They say here, while a straightforward idea is to train generative"}, {"start": 584.96, "end": 590.24, "text": " models such as GANs, synthesizing images in the high dimensional pixel space can be difficult to"}, {"start": 590.24, "end": 595.84, "text": " optimize. Instead, our key idea is to synthesize virtual outliers in the future space, which is"}, {"start": 595.84, "end": 600.72, "text": " more tractable given lower dimensionality. So we've seen that the system is actually"}, {"start": 600.72, "end": 608.08, "text": " dealing with embedding vectors in this so-called feature space. 
And one like alternative way how"}, {"start": 608.08, "end": 613.44, "text": " you could approach this problem, which is a fairly intuitive one, is to use GANs. So that means you"}, {"start": 613.44, "end": 621.12, "text": " would train a GAN architecture here additionally, that would generate OOD data. So you'd have a GAN"}, {"start": 621.12, "end": 631.12, "text": " here. And you would basically be generating images which are OOD and then feeding those into your"}, {"start": 631.12, "end": 636.48, "text": " pipeline. And then basically want to push this uncertainty score to different parts of the"}, {"start": 636.48, "end": 644.88, "text": " spectrum for the ID data versus the OOD data. But it's hard to train GANs. And also it's a question"}, {"start": 644.88, "end": 652.0, "text": " how you would, how would you cover the whole space of OOD data? Like would you just train GAN on some"}, {"start": 652.0, "end": 657.84, "text": " different data set or multiple different data sets that have the OOD classes? Or would you just try"}, {"start": 657.84, "end": 663.92, "text": " to push the image generation towards something that's not ID? So I guess those two approaches"}, {"start": 663.92, "end": 669.2, "text": " pop to my head. Although I'm not sure whether those images generated in such a manner would even be"}, {"start": 669.2, "end": 674.72, "text": " looking like natural images and whether that would be helpful. This method here definitely seems to"}, {"start": 674.72, "end": 680.24, "text": " have its own appeal in the computational sense because it's dealing with embedding vectors which"}, {"start": 680.24, "end": 687.6, "text": " are inherently lower dimensional as well as just the semantics. Just you're basically sampling data"}, {"start": 687.6, "end": 693.84, "text": " right near to your decision boundary. I mean this is a two-dimensional data set so that's the case."}, {"start": 693.84, "end": 697.5200000000001, "text": " But in the case of multi-dimensional vectors it's going to be a bit different. But still"}, {"start": 697.52, "end": 703.28, "text": " this definitely seems to be a better approach than generating in the image space. Okay, nice."}, {"start": 703.28, "end": 710.8, "text": " Let's get back to the formula here. So this is just a like a formula explaining what I just explained"}, {"start": 710.8, "end": 716.88, "text": " visually. Basically this probability distribution here for a condition on a specific class k,"}, {"start": 718.3199999999999, "end": 726.24, "text": " it's modeling the probability of these embedding vectors as and it's modeling those as a Gaussian."}, {"start": 726.24, "end": 731.84, "text": " And now the question is how do you form the mu case so that's the means of this of this"}, {"start": 731.84, "end": 737.76, "text": " Gaussian distribution and as well as the covariance. And the trick is to, as I said, to just use the"}, {"start": 739.52, "end": 746.0, "text": " clusters that belong to a specific class. So you can see here if we want to generate the mean"}, {"start": 746.0, "end": 752.96, "text": " for a particular class we take only those embedding vectors whose class is k, which makes a lot of"}, {"start": 752.96, "end": 758.64, "text": " sense I guess. You just average them out and that's how you get the mu k and then you just"}, {"start": 758.64, "end": 764.32, "text": " form the covariance matrix actually using all of the classes. 
So here I'm a bit confused why not"}, {"start": 764.32, "end": 773.36, "text": " just use the specific class here. So my gut feeling would be to first try to form these"}, {"start": 773.36, "end": 778.72, "text": " covariance matrices using only class specific information and not all of the classes. You can"}, {"start": 778.72, "end": 784.0, "text": " see this sum over k. It's kind of confusing. I didn't see any ablations on this. If anybody"}, {"start": 784.0, "end": 791.52, "text": " has an idea why this might be better feel free to comment down below. So you can see one potential"}, {"start": 791.52, "end": 797.2, "text": " problem with this thing here because after every single weight update of your system"}, {"start": 798.0, "end": 802.64, "text": " you'll have to recompute these and that's quite expensive. So you'll have to go through"}, {"start": 802.64, "end": 809.28, "text": " all of to pass your all of your data points again because this time after the weight update you'll"}, {"start": 809.28, "end": 815.1999999999999, "text": " have different embedding vectors h and so you'll have to keep recomputing this. And so how they"}, {"start": 815.1999999999999, "end": 819.76, "text": " make this computationally tractable is the following. They say that we use online estimation"}, {"start": 819.76, "end": 826.24, "text": " for efficient training where we maintain a class conditional queue with this many object instances"}, {"start": 826.24, "end": 832.96, "text": " from each class. And I think in practice they use something around thousand of these object instances"}, {"start": 833.84, "end": 839.28, "text": " and when they say object instances what I mean is they basically feed these embedding vectors h"}, {"start": 840.0, "end": 846.08, "text": " in separate queues. So every class will have a dedicated queue basically. So they mention here"}, {"start": 846.08, "end": 851.2, "text": " that in each iteration we enqueue the embeddings of objects to their corresponding class conditional"}, {"start": 851.2, "end": 856.96, "text": " queues and the queue the same number of object embeddings. Cool that's how we form the class"}, {"start": 856.96, "end": 864.88, "text": " conditional gaussians now in the feature space. So now let's see how we sample the virtual outliers"}, {"start": 864.88, "end": 871.9200000000001, "text": " and it's fairly simple. So you can see here that the set of virtual outliers for a class k denoted"}, {"start": 871.92, "end": 884.24, "text": " like this. So it's a set of vks such that basically the probability of those vks has to be smaller"}, {"start": 884.24, "end": 890.88, "text": " than some epsilon. And this whole ugly expression is just your multi-dimensional multivariate"}, {"start": 891.5999999999999, "end": 899.1999999999999, "text": " Gaussian distribution and this one is parameterized using the mean and the covariance for this"}, {"start": 899.2, "end": 907.2, "text": " particular class k. So what this tells you basically is to sample these outliers from"}, {"start": 908.5600000000001, "end": 913.44, "text": " low probability regions of the class conditional Gaussian. So that may be a mouthful but it's fairly"}, {"start": 913.44, "end": 919.2800000000001, "text": " simple. 
Basically you can imagine in this simple case of a 2D data set here that the peak of this"}, {"start": 919.2800000000001, "end": 924.8000000000001, "text": " Gaussian is somewhere here and then as you're radially moving outwards here, so if you're"}, {"start": 924.8, "end": 929.8399999999999, "text": " radially moving outwards the probability is dropping down. So if I were to do a cross cut"}, {"start": 930.4799999999999, "end": 937.04, "text": " through this distribution here and basically let's take this as the origin point then how this would"}, {"start": 937.04, "end": 944.0, "text": " look like is the following. So you have something like this, you have the peak at zero, so this is zero"}, {"start": 946.4, "end": 954.0, "text": " and you'd be sampling your outliers starting at certain cutoff probabilities. So they denote"}, {"start": 954.0, "end": 961.04, "text": " that one as epsilon, so that means let's say somewhere here this is your epsilon and that"}, {"start": 961.04, "end": 968.64, "text": " means you'd be sampling only these data points here. Okay that explanation was more for those"}, {"start": 968.64, "end": 975.36, "text": " of you who are new to this channel. Let's now continue and understand how do they leverage these"}, {"start": 975.36, "end": 983.44, "text": " outliers to form the uncertainty loss component I just mentioned above on the high level diagram."}, {"start": 984.32, "end": 988.64, "text": " And the main idea is to form this log partition function that looks something like this. So we"}, {"start": 988.64, "end": 995.04, "text": " have a minus log sum over the number of classes where big K denotes the number of classes in your"}, {"start": 995.04, "end": 1000.96, "text": " image classification or object detection task and you sum up these terms here. So I guess the main"}, {"start": 1000.96, "end": 1008.32, "text": " question is what is FK exactly and what FK is it's just a logit so it's basically you just take the"}, {"start": 1009.52, "end": 1019.44, "text": " embedding vector so HK you take that one and you linearly project that to the logit so you"}, {"start": 1019.44, "end": 1027.3600000000001, "text": " you basically have a linear layer here and as the output you have the logits so this here the length"}, {"start": 1027.36, "end": 1032.8799999999999, "text": " of this vector is big K which is the number of classes so this is just your dense or feedforward"}, {"start": 1032.8799999999999, "end": 1040.24, "text": " layer and that's how you form the FKs and now they're going to make sure that E for ID data"}, {"start": 1040.24, "end": 1047.28, "text": " goes to minus infinity and that E for OOD data goes to plus infinity and thus that's going to"}, {"start": 1047.84, "end": 1054.4799999999998, "text": " force these logits to have a different distribution depending whether your ID or OOD data point."}, {"start": 1054.48, "end": 1062.4, "text": " And that's a high level intuitive idea why this works. So let's see this formula here."}, {"start": 1062.4, "end": 1067.84, "text": " So here is the uncertainty component I was mentioning. So you can see how it's constructed"}, {"start": 1067.84, "end": 1073.28, "text": " here. 
So we have these in index this is known as an index function so basically when this expression"}, {"start": 1073.28, "end": 1082.48, "text": " here is true so when this is true then this equals one otherwise it equals zero so that means you"}, {"start": 1082.48, "end": 1088.4, "text": " want to make sure that this is like false almost all of the time and you also want to make sure"}, {"start": 1088.4, "end": 1095.1200000000001, "text": " that this is false almost all of the time because that way you're going to minimize the"}, {"start": 1095.1200000000001, "end": 1099.68, "text": " uncertainty loss components. Your goal is to minimize this and in order to minimize this you"}, {"start": 1099.68, "end": 1106.48, "text": " want to make sure that the energy component for the outliers goes towards minus infinity and you"}, {"start": 1106.48, "end": 1112.32, "text": " want to make sure that the energy component for the ID data goes to plus infinity which is completely"}, {"start": 1112.32, "end": 1119.9199999999998, "text": " the opposite as what I've just mentioned a couple of seconds ago because I think there is a missed"}, {"start": 1119.9199999999998, "end": 1123.9199999999998, "text": " error in this formula. I think the signs here should be switched so this should be like"}, {"start": 1124.72, "end": 1130.48, "text": " smaller than and this should be greater or equal than. I don't want to confuse you here the whole"}, {"start": 1130.48, "end": 1138.32, "text": " point is to make sure that the energy terms go to opposite parts of the spectrum whether the the ID"}, {"start": 1138.32, "end": 1144.32, "text": " data goes to minus infinity or to plus infinity does not matter as long as the OOD data goes to"}, {"start": 1144.32, "end": 1149.2, "text": " the opposite part of the spectrum basically but I'm fairly sure this is a mistake here somebody"}, {"start": 1149.2, "end": 1154.0, "text": " may correct me if I'm wrong here. Okay so there was an intuitive explanation you obviously want"}, {"start": 1154.0, "end": 1158.32, "text": " to include the expectations here so you have expectation over the data distribution here you"}, {"start": 1158.32, "end": 1164.1599999999999, "text": " have an expectation over the virtual outlier distribution here and you're trying to minimize"}, {"start": 1164.16, "end": 1171.6000000000001, "text": " this whole component this whole sum here. 
Now we don't know how to optimize index function"}, {"start": 1171.6000000000001, "end": 1177.6000000000001, "text": " and so what I've done is they did a smooth approximation of this of this index function"}, {"start": 1178.16, "end": 1185.2, "text": " they also call it 0 0 slash 1 loss using the binary sigmoid loss so here is the exact same"}, {"start": 1185.2, "end": 1191.6000000000001, "text": " formula just a realistic implementation and you can notice a couple of things here so first off"}, {"start": 1191.6, "end": 1198.1599999999999, "text": " if if you were to just replace this with x so this term here with x this is your familiar"}, {"start": 1198.1599999999999, "end": 1205.52, "text": " sigmoid function so let's just do a simple limit analysis here so if if e was to go to minus"}, {"start": 1205.52, "end": 1212.9599999999998, "text": " infinity then both of these terms would go to plus infinity and so this term the numerator over"}, {"start": 1212.9599999999998, "end": 1219.28, "text": " denominator is going to converge in the limit is going to go to one that means that log of one is"}, {"start": 1219.28, "end": 1225.44, "text": " going to go to zero and thus we are minimizing this so that means that this current this formula"}, {"start": 1225.44, "end": 1235.36, "text": " here is forcing the energy of the id data to go towards minus infinity so as e of x and i'm just"}, {"start": 1235.36, "end": 1242.3999999999999, "text": " going to ignore the this part the theta part so it's obviously parameterized by by theta so if"}, {"start": 1242.4, "end": 1250.64, "text": " e goes to minus infinity then that implies that the the loss goes to zero at least for for this"}, {"start": 1250.64, "end": 1261.0400000000002, "text": " term here and vice versa here you can see that when e for the virtual outliers so when e for the"}, {"start": 1261.0400000000002, "end": 1269.68, "text": " virtual outliers goes to plus infinity then this term goes to zero because e raised to the power of"}, {"start": 1269.68, "end": 1276.72, "text": " minus infinity goes to zero and then you have log of one which is zero so this term goes down to zero"}, {"start": 1276.72, "end": 1283.28, "text": " as as e of as the energy of the virtual outliers is approaching plus infinity and that's it it's"}, {"start": 1283.28, "end": 1290.0, "text": " fairly simple obviously the sign does not matter it's just important that you separate the energy"}, {"start": 1290.0, "end": 1300.48, "text": " terms for id and ood data that's it okay let's continue here basically this is just the same"}, {"start": 1300.48, "end": 1304.56, "text": " thing just for the object detection task so obviously here we were ignoring the bounding"}, {"start": 1304.56, "end": 1307.68, "text": " boxes because we were dealing with image classification so they're switching between"}, {"start": 1307.68, "end": 1312.48, "text": " image classification and object detection but for the purpose of understanding their fundamental idea"}, {"start": 1313.12, "end": 1319.12, "text": " and those are the virtual outliers and how to combine them to do separation between id and ood"}, {"start": 1319.12, "end": 1324.2399999999998, "text": " data you don't need to know the specifics i'm just going to skip over it this is the final loss"}, {"start": 1324.2399999999998, "end": 1332.08, "text": " component the final loss so it's basically a sum of your classical so this is a classification loss"}, {"start": 1332.08, "end": 1337.6799999999998, "text": " this is your location so 
bounding box loss in case you have in case you're dealing with object"}, {"start": 1337.6799999999998, "end": 1342.0, "text": " detection you'll have this one in case you're just dealing with image classification you can just"}, {"start": 1342.0, "end": 1352.24, "text": " ditch this one so you have those classic losses plus a weighted uncertainty loss here which is the"}, {"start": 1352.24, "end": 1358.24, "text": " main part of this paper and you're trying to minimize this the expected value across your"}, {"start": 1358.24, "end": 1367.84, "text": " data set of this loss here by tweaking thetas and that's that's it now once you train the model"}, {"start": 1367.84, "end": 1375.28, "text": " in such a way you can use that energy term to basically have a probabilistic output of whether"}, {"start": 1375.28, "end": 1384.1599999999999, "text": " your data is id or ood so in case of this formula here you can see that for if the model was properly"}, {"start": 1384.1599999999999, "end": 1393.52, "text": " trained for id data e goes to minus infinity which means this whole thing goes to one and that means"}, {"start": 1393.52, "end": 1398.16, "text": " this basically signals if it's going towards one so it's going to be on a spectrum between zero and"}, {"start": 1398.16, "end": 1405.12, "text": " one so zero let me just do it like this so zero and one so this indicates that it's id and this"}, {"start": 1405.12, "end": 1411.68, "text": " indicates that it's ood finally you can set a certain threshold and when you're above certain"}, {"start": 1411.68, "end": 1418.6399999999999, "text": " threshold gamma you say hey this is id otherwise you say hey this is ood data point and that's the"}, {"start": 1418.64, "end": 1425.2, "text": " final thing we need in order to do the type of classifications we saw in the example above so"}, {"start": 1425.2, "end": 1433.44, "text": " let me show you what what i mean by that basically this thing here so this moose was obviously the"}, {"start": 1434.0800000000002, "end": 1439.6000000000001, "text": " the output of that g model was towards zero so lower than gamma and that's why the model was"}, {"start": 1439.6000000000001, "end": 1447.6000000000001, "text": " labeled as ood object and that's it okay let me go back here let me show you the algorithm finally"}, {"start": 1447.6, "end": 1454.1599999999999, "text": " the whole picture is fairly simple so given the id data here so the data set that consists out of"}, {"start": 1454.1599999999999, "end": 1459.84, "text": " data points images corresponding bounding boxes again in the case of object detection task and"}, {"start": 1459.84, "end": 1466.6399999999999, "text": " corresponding labels so we have n of those data points i images we have randomly initialized"}, {"start": 1466.6399999999999, "end": 1472.08, "text": " detector with parameter theta so that's our neural network we have q size for gaussian density"}, {"start": 1472.08, "end": 1476.3999999999999, "text": " estimation so all of those are parameters for the algorithm we have the weight for uncertainty"}, {"start": 1476.4, "end": 1482.5600000000002, "text": " regularization beta and we have the epsilon which is if you remember the threshold that we use"}, {"start": 1482.5600000000002, "end": 1489.3600000000001, "text": " to determine how to sample the virtual outliers so the probability under the class conditional"}, {"start": 1489.3600000000001, "end": 1495.1200000000001, "text": " gaussian has to be lower than this parameter epsilon if we were to 
sample those virtual"}, {"start": 1495.1200000000001, "end": 1501.68, "text": " outliers okay so let's see what the algorithm does so when we were training we update the idq"}, {"start": 1501.68, "end": 1508.48, "text": " with the training object so this is kind of weird statement you basically take the you embed a"}, {"start": 1508.48, "end": 1512.8, "text": " certain amount of data points for each of the classes you take the embedding vectors and you"}, {"start": 1512.8, "end": 1518.48, "text": " update those corresponding cues then you estimate the multivariate distributions based on the id"}, {"start": 1519.1200000000001, "end": 1525.1200000000001, "text": " on these embedding vectors using using equations one and two and equations one and two are these"}, {"start": 1525.12, "end": 1532.4799999999998, "text": " ones so let me go back up here basically you form the the mean and you form the the covariance matrix"}, {"start": 1533.52, "end": 1538.7199999999998, "text": " for your given embedding vectors so that's the next step of the algorithm and then they say"}, {"start": 1538.7199999999998, "end": 1545.4399999999998, "text": " sample virtual outliers v using equation three so that means we take the equation three here"}, {"start": 1545.4399999999998, "end": 1551.4399999999998, "text": " so having formed the class conditional gaussians using the parameters above so the the mean here"}, {"start": 1551.44, "end": 1560.16, "text": " and the covariance we can basically sample a set of these outliers from those distributions so that's"}, {"start": 1560.16, "end": 1565.3600000000001, "text": " the next step here and then they say calculate the regularization loss using equation five"}, {"start": 1566.0, "end": 1573.6000000000001, "text": " and update the parameters using equation seven so regularization number five basically that's how"}, {"start": 1573.6000000000001, "end": 1578.8, "text": " we calculate the loss the uncertainty component and then finally the whole loss consists out of"}, {"start": 1578.8, "end": 1585.04, "text": " the classification loss in case we have object detection we have this localization loss i guess"}, {"start": 1585.04, "end": 1589.28, "text": " and uncertainty component okay and finally once you train the model like that you can use"}, {"start": 1590.0, "end": 1597.2, "text": " the equations eight and nine to determine whether the object at hand is od or id and that's pretty"}, {"start": 1597.2, "end": 1601.52, "text": " much it there was i'm aware there was a lot of nitty-gritty details now let me show you the"}, {"start": 1601.52, "end": 1607.04, "text": " results so first thing to keep in mind is that they are only comparing against methods that are"}, {"start": 1607.04, "end": 1615.76, "text": " not using external outlier data sets if you do have external outlier data set those models those"}, {"start": 1616.3999999999999, "end": 1621.92, "text": " yeah those methods tend to perform better and they did have some results comparing against those"}, {"start": 1621.92, "end": 1627.68, "text": " methods and they are a bit worse now the problem with those data sets is they are very usually very"}, {"start": 1627.68, "end": 1632.8799999999999, "text": " expensive depends on your domain of application they're going to be more or less expensive to"}, {"start": 1632.88, "end": 1639.0400000000002, "text": " acquire so having methods such as this one where you don't need any additional data set you just"}, {"start": 1639.0400000000002, "end": 1643.8400000000001, "text": 
" need your id and then you have this clever way of sampling the the virtual outliers in the space in"}, {"start": 1643.8400000000001, "end": 1650.16, "text": " the feature space and this just works very nicely compared to other baselines here you can see that"}, {"start": 1650.8000000000002, "end": 1655.68, "text": " across all of these metrics it outperforms the previous methods but again keep in mind"}, {"start": 1655.68, "end": 1660.64, "text": " they are not comparing i think they even say here yeah they say it here all baseline methods are"}, {"start": 1660.64, "end": 1668.16, "text": " based on a model train on id data only that's the important part here okay cool now one thing worth"}, {"start": 1669.0400000000002, "end": 1674.3200000000002, "text": " thinking about is the following for each in distribution class we use thousand samples"}, {"start": 1674.3200000000002, "end": 1678.8000000000002, "text": " to estimate the class conditional gossians so that means they have to take thousand images"}, {"start": 1679.6000000000001, "end": 1685.2800000000002, "text": " embed them and then calculate the the mean and the and the and the covariance matrix and then they"}, {"start": 1685.28, "end": 1692.08, "text": " say since the threshold epsilon can be infinitesimally small we instead choose epsilon based on the teeth"}, {"start": 1692.6399999999999, "end": 1698.6399999999999, "text": " smallest likelihood in a pool of 10 000 samples per class generated from the class conditional"}, {"start": 1698.6399999999999, "end": 1704.32, "text": " gossian distribution so once you form the the gossian distribution using these thousand samples"}, {"start": 1704.32, "end": 1711.04, "text": " then what they do is as far as i understand here they take another 10 000 samples and then whatever"}, {"start": 1711.04, "end": 1717.28, "text": " out of those 10 000 samples what whatever whatever was the the lowest probability of a sample"}, {"start": 1717.28, "end": 1723.76, "text": " that's what they use as the epsilon and then they use that epsilon to sample the virtual outliers"}, {"start": 1723.76, "end": 1729.92, "text": " now the problem here is that this can be very expensive and the thing is in none of these"}, {"start": 1729.92, "end": 1736.72, "text": " tables did they compare the computational budget used to train these methods so that's something"}, {"start": 1736.72, "end": 1743.1200000000001, "text": " to definitely keep in mind if you're if you're on budget anyways let's continue here and see"}, {"start": 1743.1200000000001, "end": 1747.92, "text": " what other results they have so here they compare against some other types of synthesis methods"}, {"start": 1747.92, "end": 1754.64, "text": " i did mention the gan approach where you're trying to in the image space generate the the the outliers"}, {"start": 1754.64, "end": 1762.96, "text": " so the ood data and then use those to to train the system they also compare against these negative"}, {"start": 1762.96, "end": 1769.68, "text": " proposal methods etc etc and they again show up like superior results here okay let's continue"}, {"start": 1770.32, "end": 1777.04, "text": " yeah yeah they do all these ablations like having k plus one classes instead of k classes so basically"}, {"start": 1777.04, "end": 1783.44, "text": " this plus one is where the training happens so they use that plus one class as a detector of"}, {"start": 1783.44, "end": 1791.92, "text": " whether the data whether the um embedding vector is belongs to id data or to od data 
that's the"}, {"start": 1791.92, "end": 1795.92, "text": " trick there okay let me just show you the visualizations and we are pretty much done"}, {"start": 1797.1200000000001, "end": 1804.0800000000002, "text": " on the top here you can see faster rcnn and on the bottom you can see vos and you can see that"}, {"start": 1804.0800000000002, "end": 1812.48, "text": " eos successfully detects that the bet here and some other objects are od whereas the faster rcnn"}, {"start": 1812.48, "end": 1818.24, "text": " fails completely and is and it's confident in the prediction so it's confident that the the the bet"}, {"start": 1818.24, "end": 1827.04, "text": " here is car with 90 97 percent certainty that's that's weird um cool let me just show you one"}, {"start": 1827.04, "end": 1835.28, "text": " single detail here in the appendix if i can find it um and that's this part here so they say that"}, {"start": 1835.28, "end": 1840.32, "text": " uh importantly we show that uncertainty regularization should be added in the middle of"}, {"start": 1840.32, "end": 1845.2, "text": " the training if it is added too early the feature space is not sufficiently discriminative for"}, {"start": 1845.2, "end": 1849.3600000000001, "text": " gossian distribution estimation and that's something i mentioned in the beginning of the video"}, {"start": 1850.0, "end": 1854.24, "text": " basically initially you're just training for the classif you're just minimizing the classification"}, {"start": 1854.24, "end": 1859.1200000000001, "text": " loss the object detection loss if you're doing object detection and only later during the training"}, {"start": 1859.1200000000001, "end": 1865.2, "text": " when when basically the feature space becomes sufficiently discriminative i.e. the the the"}, {"start": 1865.2, "end": 1870.48, "text": " embedding vectors from the same class tend to cluster together only then can you start using"}, {"start": 1870.48, "end": 1876.72, "text": " this method and then it works in any case hopefully like this video if you did consider subscribing"}, {"start": 1876.72, "end": 1880.64, "text": " uh leave a comment down below if you have anything to ask or or any feedback"}, {"start": 1880.64, "end": 1900.4, "text": " uh in any case until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=idiIllIQOfU
ConvNeXt: A ConvNet for the 2020s | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the recently published "A ConvNet for the 2020s" paper. They show that ConvNets are still in the game! - by adding new design ideas and training procedures they outperform vision transformers even in big data regimes and without any attention layers. Convolutional prior continues to stand the test of time in the field of computer vision. Note: I also partially cover the Swin transformer paper in case you missed out on that one. :) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2201.03545 ✅ GitHub: https://github.com/facebookresearch/ConvNeXt ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro - convergence of transformers and CNNs 05:05 Main diagram explained 07:40 Main diagram corrections 10:10 Swin transformer recap 20:20 Modernizing ResNets 24:10 Diving deeper: stage ratio 27:20 Diving deeper: misc (inverted bottleneck, depthwise conv...) 34:45 Results (classification, object detection, segmentation) 37:35 RIP DanNet 38:40 Summary and outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #convnext #visiontransformers #computervision
What's cracking guys? In this video I'm covering this new paper called A ConvNet for the 2020s. The paper itself was followed by a lot of hype on social media, especially on Twitter, so let's see what it's all about. It's a paper by Facebook AI Research, I guess it's now Meta AI, and UC Berkeley folks. Let's dig straight into it. The roaring 20s of visual recognition began with the introduction of vision transformers, or ViTs, which quickly superseded ConvNets as the state-of-the-art image classification model. The important detail here is that it was state of the art, but only on the image classification task. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical transformers, like Swin Transformers, that reintroduced several ConvNet priors, making transformers practically viable as a generic vision backbone. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of transformers rather than the inherent inductive biases of convolutions. So let me try and put this into a broader historical perspective. The story basically begins around 2011. That's when the first successful CNN with superhuman performance appeared, and the name of that CNN was not AlexNet, it was actually DanNet. So let me put it here: we have 2011 here and we have this branch of CNNs, I'm gonna draw it like this. So, CNNs. Then in 2012 we have the most famous model, which was AlexNet, and since then we had a bunch of different models: VGGs, ResNets, which were a big thing back in 2015, so maybe somewhere around this time. All in all, that trend continued until the 2020s. Then what happened is that in late 2020 we had the Vision Transformer published. I've covered the ViT paper, by the way; if you're not familiar, you can check it out, I'm gonna link it somewhere here. So let me draw that as another thread in these visual perception models: we have transformers here and they all started with ViT. What ViT did is it patchified the image and then passed those tokens into a pure transformer, and it just turned out to work very nicely for the image classification task. More important for our story here is the appearance of the Swin Transformer. That was somewhere around, so this is late 2020, this is early 2021, so around March 2021 is when the Swin Transformer appears, and it stands for shifted window transformer, I think that would be the abbreviation. From that point onward, transformers were successful on semantic segmentation as well, and on all of those types of visual tasks where you need dense predictions, such as semantic segmentation, instance segmentation, et cetera. So that's the timeline. Now let me draw how these two branches of models started converging in a way. What actually happened is that the Swin Transformer kind of went in this direction, towards the CNN line. And what this paper does is the same thing: it takes the most recent and best practices when it comes to training CNNs and combines those with the design ideas that transformers introduced. So it does a similar thing: what this paper does is it starts going towards the transformer branch of models. That's the mental model I have in my head at the moment.
And now there is a boundary between these two. The reason is that ConvNeXt, the model from this paper, is still a pure CNN. So there is a boundary here because these are still pure CNNs. ConvNeXt is a pure CNN with some ideas from the transformer branch, whereas the Swin Transformer arguably even crosses this virtual boundary, because it has a lot of CNN priors, as we are going to see very soon. So that's the basic mental model. And the whole idea is for me to show you the steps that happened here: how did they go from ResNet-50, which is the baseline model they're using, and which steps did they take, one, two, three, et cetera, to outperform vision transformers, and specifically the Swin Transformer, which was the baseline they were comparing against. So let's start with this diagram. It's one of the most important diagrams in this paper, and you can see a couple of things here. First, we have two groups: we have ImageNet-1K pre-trained models here, let me just change the color here to red, and we have ImageNet-22K pre-trained models. That's the superset of the ImageNet dataset, which has around 22,000 classes, so it's way bigger compared to this one. The whole point is to see whether the models scale with additional data, which is something that ViTs are notoriously famous for, because they can learn the priors that CNNs have built in. You can check out my ViT video for the details, but for now that's pretty much enough. What we see on the Y-axis is the top-1 accuracy on ImageNet. You can see that ResNet here has lower accuracy compared to DeiT, which is pretty much a plain ViT model, so that means little to no ConvNet biases, except for the input patchify layer, which is arguably just a convolutional layer. Then we have the Swin Transformer here, and you can see that it outperforms the ViT. And finally, we can see ConvNeXt here, let me zoom in a little bit: you can see that it outperforms Swin Transformers. And not only do we have higher performance, you can also notice that by increasing the amount of computation, so by making models bigger, which you can see by the diameter of these circles here expressed in gigaflops, the performance goes up. So here for the smaller models the accuracy was somewhere here, and then as they were increasing the model size, you can see that the center of these circles went up, which means they scale fairly well with model size. But the second dimension they scale well across is the data, and that's what we arguably care about even more. Here you can see that the ConvNeXt architecture benefits from additional data as well, and that's an important finding. Again, comparing ViT, Swin Transformer and ConvNeXt, you can see that the accuracy is better and the scaling is there, which is what we are looking for. Now, the thing is, this diagram is a little bit misleading, because as I said there was a lot of hype following this paper and there were a lot of Twitter threads; let me just show you this one. So Lucas Beyer, who is a co-author of the ViT paper, showed that they were actually comparing against the old ViT that still didn't have all of the modern augmentations.
And if you add those additional augmentations, we see the top-1 accuracy translating upwards here, becoming almost on par with ConvNeXt. So this convergence further supports my claim that we have a huge convergence happening between these two separate branches of perception models. A couple more comments here. The creator of the famous timm library and co-author of the ResNet Strikes Back paper, which I've covered in one of my previous videos, also showed that if you train ResNets with modern training techniques and recipes you get way better performance, and we ought to be comparing with those modern ResNets and not with the old results from back in 2015. He says here that while they did mention better ResNet training in the core of the paper, that headline-grabbing plot used the originals; he rewrote parts of it, layer norm and general stuff, and got the extra boost. Basically, that means we need to shift the ResNets upwards as well, and that this gap here is pretty much closed with Swin Transformers. It would be nice to see a visualization, but just keep that in your head. And finally, a co-author of both EfficientNet papers, version one and version two, says here that the AutoML models are already ahead by a couple of months, but this paper chooses to compare against the 2019 EfficientNet V1 instead of the 2021 EfficientNet V2, which is a paper I also covered, so you can check it out if you're curious. The interesting thing is the comparison itself. Let's take a look at the models pre-trained only on ImageNet-1K, which has 1000 classes. You can see that EfficientNet V2 has higher accuracy, a much smaller number of parameters, less computation, and 3x higher throughput, which means that EfficientNet V2 is arguably the ConvNet for the 2020s. Cool, that's the necessary context before I start digging into the paper, but there is one more thing I want to cover, and that's the Swin Transformer, because it's arguably the model they were taking a lot of inspiration from, as we'll soon see. I haven't covered it before on my YouTube channel, so I'm gonna give you a quick introduction here. So what's the main idea? I already mentioned that they are introducing certain CNN biases, and now let me show you why that is. First things first, we know that transformers scale poorly with the number of input tokens. In the limit where a token is a single pixel, you have height times width input tokens, and more importantly we have quadratic scaling with the number of input tokens, which does not scale when you have huge resolutions; it's too computationally intensive. What the Swin Transformer paper introduced is the windowed MSA, so windowed multi-head self-attention, plus the shifted window MSA, which is where they got the name from, basically sliding windows, that's why they have the Swin part. And you can see here, let me zoom in a little bit, that they have a linear dependency on the number of input tokens, but a quadratic dependency on this M, which is usually kept constant at around seven, so seven times seven is the number of tokens inside a window.
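To make that complexity difference concrete, here is a minimal sketch (not from the video) of the two FLOP estimates as they are usually stated in the Swin Transformer paper, assuming C-dimensional tokens on an h-by-w feature map and M-by-M windows; the exact constants matter less than the (h*w)^2 term versus the M^2 * h * w term:

def msa_flops(h, w, C):
    # Global self-attention over all h*w tokens: the 2*(h*w)^2*C term is quadratic
    # in the number of tokens, which blows up at high resolutions.
    return 4 * h * w * C**2 + 2 * (h * w)**2 * C

def wmsa_flops(h, w, C, M=7):
    # Windowed self-attention: linear in h*w, quadratic only in the window size M.
    return 4 * h * w * C**2 + 2 * M**2 * h * w * C

# For a 56x56 feature map with 96 channels (roughly Swin-T's first stage),
# global attention costs on the order of 14x more FLOPs than windowed attention.
print(msa_flops(56, 56, 96) / wmsa_flops(56, 56, 96))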
That's gonna be more clear in a couple of seconds. So all in all, this windowed MSA solves the quadratic relationship here. Okay, let's quickly see how the thing works, and it's fairly simple. On the right-hand side here you can see the original ViT, and you can see that it's fairly uniform in the sense that the spatial resolution is kept the same across every single stage of the ViT. So what happens is, initially you patchify the image and then you start processing the tokens using global self-attention, which means every single token here, so this one here, will be able to attend to all of the other tokens, so this one, and this one, and also this one. Every token can attend to every single token; you basically have a fully connected graph here. Contrast that with the Swin Transformer, where they introduced local computation. That means that here you run your self-attention module, the MSA module, on these smaller parts of the input image. So you can see here four by four, and on each one of these windows, so the window is the terminology here, they'll be running a self-attention module. You can already see that this local computation is what the CNN prior is all about: you have kernels which you slide, you share the weights, and you have local computation, which implicitly assumes that a local neighborhood of pixels is more important than attending to pixels which are far away. And that's something that, at least for natural images, is usually the case: there is a strong correlation between neighboring pixels, and that's what CNNs are exploiting. So that understanding, that human knowledge, is baked into the architecture itself. The second prior I can see here, off the top of my mind, is the hierarchical approach, and that's why they call these hierarchical transformers. You can see that here we have self-attention acting on these fine-grained features, and then what they do is they take a pool of these windows, let me change the color here, so let's say they take four of these windows and they merge all of those into this new window. So now here we are processing the data on a coarser, coarse-grained level, and then they repeat the procedure here and they have even more coarse-grained features. Contrast that with the uniformity of the original transformers, and you'll see that all of these ideas stem pretty much from the CNN branch of models. Now, you may see a problem here, and that's that we are not sharing information across the various spatial locations, and that can be problematic, because as you can see here, these features cannot take in information from the other spatial locations. So what they additionally do is they have this window-shifting technique. Let's understand why this is important and how the mechanism actually works. As I said, we need to have some mixing between various spatial locations, and this technique solves the problem in the following way. In the initial layout, once you apply the self-attention module, all of these tokens will contain information from all of the other tokens in the window, by the very way self-attention works.
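As a rough illustration of the windowing and shifting mechanics just described, here is a minimal PyTorch sketch, assuming a (B, H, W, C) feature map with H and W divisible by the window size M; it ignores the attention masking that the real Swin implementation needs for the wrapped-around borders:

import torch

def window_partition(x, M=7):
    # x: (B, H, W, C) feature map -> (num_windows * B, M*M, C), i.e. each window
    # becomes its own little token sequence that self-attention is run on independently.
    B, H, W, C = x.shape
    x = x.view(B, H // M, M, W // M, M, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, M * M, C)

def shifted_window_partition(x, M=7):
    # Cyclically roll the feature map by M//2 before partitioning, so the new windows
    # straddle the old window boundaries and information gets mixed across them.
    shifted = torch.roll(x, shifts=(-(M // 2), -(M // 2)), dims=(1, 2))
    return window_partition(shifted, M)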
So that means that this token here will contain information from all of the other tokens, from this one, from this one, from this one, from all of the tokens; whoops, let me draw it like this, and from this one, and that's it. Now in the next stage, once we are here, because we have a different window layout, the information that's contained here, which stems from this quadrant and contains information from all of the tokens in that quadrant, will now be shared with the other tokens. And that means the following, let me change the color here: after we apply self-attention here, the information contained here, which originally stems from here, will be shared with all of the other tokens, so this one, and this one, and all of the other tokens, which means that now this token here contains information from these tokens here. When I say token, I mean the feature vector. And finally, once we get to the next stage and we apply self-attention using this layout again, you can see how this information will be propagated to all of the other tokens here, so we'll be propagating it even to the top left. And that's how the information traverses the spatial extent; that's the basic idea. Now, I did mention that Swin first works on a fine-grained scale, and that means that each of these tokens contains four by four pixels. So if you take a single token out here, it contains, let me change the color, four times four pixels, whereas ViT has a more aggressive downsampling strategy, which means that a token there actually contains information from a 16 times 16 patch of the original image. As for how they form the features, this is a minor detail but I'm gonna mention it either way. They say: each patch is treated as a token, and its feature is set as a concatenation of the raw pixel RGB values; in our implementation we use a patch size of four by four, as I just mentioned, and thus the feature dimension of each patch is four by four times three, which is 48. We have three because it's an RGB image. So we just take those four times four pixels, take the RGB values, concatenate those, and that's what is fed into the Swin Transformer. So here is the architecture quickly; again, you can see some resemblance with ConvNets. After the initial patch partition layer, as they call it, the spatial resolution is decreased by four, because we have four times four pixels in a patch, and we have 48 channels, because I just explained how we're forming those features; that's why I explained this, so that you're able to parse the numbers here. Then they apply a linear embedding, which effectively maps this 48 to C, even though they didn't mark that here. So what goes into the first transformer block has feature size C, not 48. Now, why do they have 2x here? The reason is that they are combining the windowed self-attention with the shifted window self-attention, and they apply those in a sequential manner. Then they do the patch merging, which I explained here, this merging of patches that gives them the coarse-grained representations, and then they just keep doing the same thing: windowed attention followed by shifted window attention, et cetera, et cetera.
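Here is a small PyTorch sketch of that patch-partition-plus-linear-embedding step, assuming a 224x224 RGB image, 4x4 patches and C = 96 (Swin-T's width); in practice both steps are typically fused into a single strided convolution, as shown in the last line:

import torch
import torch.nn as nn

B, H, W, C = 1, 224, 224, 96
img = torch.randn(B, 3, H, W)

# Patch partition: every non-overlapping 4x4 patch becomes one token whose raw
# feature is the concatenation of its 4 * 4 * 3 = 48 RGB values.
patches = img.unfold(2, 4, 4).unfold(3, 4, 4)                  # (B, 3, 56, 56, 4, 4)
tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, 48)  # (B, 3136, 48)

# Linear embedding: project the 48-dim raw features to C channels.
tokens = nn.Linear(48, C)(tokens)                              # (B, 3136, 96)

# Equivalent, and how it is usually implemented: one strided convolution.
tokens_conv = nn.Conv2d(3, C, kernel_size=4, stride=4)(img)    # (B, 96, 56, 56)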
And what happens additionally, as you can see here, is that C goes to 2C, goes to 4C, goes to 8C, which means we are increasing the number of channels as we are reducing the spatial dimensions, which is also a pattern from CNNs. The transformer blocks themselves consist of these two modules; this is your regular transformer block, the only difference being that instead of the regular MSA they're using the windowed MSA and the shifted window MSA. I may be butchering the terminology here; I think it's, yeah, the shifted windowing configuration. Got it. Cool. Okay, now that I've hopefully properly motivated the reason for this paper and covered some of the background knowledge like the Swin Transformer, plus some corrections to the main diagram, let's get to the rest of the paper. So: our research is intended to bridge the gap between the pre-ViT and the post-ViT eras for ConvNets. To do this, we start with a standard ResNet, ResNet-50 for example, trained with an improved procedure. We gradually modernize the architecture toward the construction of a hierarchical vision transformer, Swin-T, where T stands for tiny. So again, what they're saying here is: hey, we're taking some design inspiration from Swin Transformers. Our exploration is directed by a key question: how do design decisions in transformers impact ConvNets' performance? Okay, so that's the motivating statement. Now, before I show this diagram, which is the second most important diagram, if not the most important one, let me just start with this. A recent paper, which is basically this one, ResNet Strikes Back, which I've covered in one of the previous videos, demonstrates how a set of modern training techniques can significantly enhance the performance of a simple ResNet-50 model. In our study, we use a training recipe that is close to DeiT's and Swin Transformer's. I'm not quite sure why they did this, because ResNet Strikes Back should have the best training procedure for ResNet-50. So I'm not sure why they're using training procedures from transformers, probably for compatibility reasons, because they are later taking design decisions from transformers, and so the compatibility is, I guess, better compared to just taking the training ideas from that paper. By itself, they say here, the enhanced training recipe increased the performance of the ResNet-50 model from 76.1% to 78.8%, so plus 2.7% in top-1 accuracy, which is a huge boost. So with that out of the way, let's see the modernization procedure they applied. First, let me dissect this diagram for you. What we can see here at the bottom is the Swin-T transformer, and in the upper part we have ResNet-50. Just ignore the gray bars for now, because those correspond to the bigger alternatives, so to ResNet-200 and to Swin-B, the base (bigger) variant. Let's focus on these blue bars, and let's first see this orange bar. This orange bar tells us that the top-1 accuracy, which is on this horizontal axis, is 81.3, and this star here, not sure if you can see it, says that we have 4.5 gigaflops of computation for the Swin-T model. Now let's see what happened with ResNet, what's the progression here. We start with 78.8, which is the number they got with the improved training procedure. Then they apply various techniques, and they organize those across five groups.
We have the macro design, the ResNeXt ideas, the inverted bottleneck, the large kernel, and the micro design here. You can see that all of those contribute to the final performance, and the final models basically outperform the Swin-T transformer. You can also see the stars here; again, they denote the gigaflops, the computation needed for a single forward pass. You can see it varies throughout these ablations, but it roughly oscillates around 4.5, which means the models are comparable in that sense. Now, I'm gonna focus in more detail on the first couple of techniques, and then I'm gonna just skim through the other ones, because there are a lot of details and I'll have to abstract some of them away. So let's start with this stage ratio here. What is the stage ratio, what does that mean? They say here: Swin-T, on the other hand, follows the same principle but with a slightly different stage compute ratio of 1:1:3:1. We're gonna see what this means in a second. For larger Swin Transformers the ratio is this one, and following the design, we adjust the number of blocks in each stage from (3, 4, 6, 3) in ResNet-50 to (3, 3, 9, 3). You can see that when you divide all of those by three you get the same ratio as in Swin-T right here. So the idea is basically that the amount of compute, the number of flops across the stages of the model, should have roughly the same ratio as in the Swin Transformer, because some of the other papers they reference here show that this plays an important role in the design of novel architectures. So let me show you some code here, and remember these numbers: three, four, six, three. That's the number of blocks in each stage. If you take a look at the API here, let me zoom in a little bit, when you're constructing ResNet-50, or ResNet-101, or whatnot, you always call this underlying common API function, this is PyTorch code by the way, and you pass a different number of blocks per stage, and that's what determines how big the model is, how many layers and how many parameters it has. Let me quickly show you this _resnet function. So it's here, and that sequence of numbers is named layers, and we pass layers to this ResNet class here. So let me find the class, class ResNet; okay, here it is, so it's passed as layers. Now let me search for layers, and you can see here that those numbers, so three, four, six, three, are passed to these _make_layer calls, and what they effectively do is say: take this block and repeat it this many times. If we were to find the _make_layer function, I could probably just go by reference, but I'm being lazy here. We can see that here are the blocks, so let's find the blocks variable. Okay, the blocks variable is used here, and you can see we have this layers.append, and for the number of blocks we're just going to be appending that many blocks in this stage. So that's how they manage to regulate the amount of computation across each stage of this ResNet and mirror the same ratios that are present in the Swin Transformer.
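As a concrete illustration of that block-count knob, here is roughly how it looks through torchvision's ResNet API; the (3, 3, 9, 3) variant is just a sketch of the stage-ratio change on its own, not the full ConvNeXt model:

from torchvision.models.resnet import ResNet, Bottleneck

# Standard ResNet-50: (3, 4, 6, 3) Bottleneck blocks across the four stages.
resnet50 = ResNet(Bottleneck, [3, 4, 6, 3])

# The "stage ratio" step changes the block counts to (3, 3, 9, 3), i.e. roughly the
# 1:1:3:1 compute ratio of Swin-T, while keeping everything else about ResNet intact.
resnet50_swin_ratio = ResNet(Bottleneck, [3, 3, 9, 3])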
Cool, let me know whether you find this type of mixing of code and paper explanation useful or not. Let's continue here. So this design decision falls under the umbrella term of macro design; you can understand why, since we are changing the macro structure of the architecture. Then they have this change of the stem to patchify, and they mention here that a common stem cell will aggressively downsample the input images. That's something that's common in most of the older CNN architectures, and they say here that the stem cell in a standard ResNet contains a seven times seven convolution layer with stride two, followed by a max pooling layer, which results in a 4x downsampling of the input images, which is fairly aggressive downsampling. They say: we replace the ResNet-style stem cell with a patchify layer implemented using a four by four, stride-four convolutional layer. And that's basically how a ViT-like model works. You can interpret this patchifying strategy like this: you have an image here, they group these pixels into patches, and then they do a linear projection of those pixels into some feature vector. You can model that by thinking of it as if you had, let me change the color, a kernel that's, in this case they said four by four, so a kernel that's four by four, with a stride of size four, which means the next step will be here, the next step will be here. If you take a look at any code base, that's how you implement the patchify layer, using a convolutional operation. Cool, so that's the second design decision. Then they mention ResNeXt-ify; they take some ideas from the ResNeXt paper. They say here: more precisely, ResNeXt employs grouped convolution for the three by three conv layer in a bottleneck block. As this significantly reduces the FLOPs, the network width is expanded to compensate for the capacity loss. In our case, we use depthwise convolution, a special case of grouped convolution where the number of groups equals the number of channels. A quick recap of how depthwise convolution works: let's say you have an activation volume, so that's the stack of feature maps that comes out of your layer after you apply the activation function. You get an activation volume which looks something like this, let me try and draw it here. In a regular convolution you have a filter, which is usually three by three in most of the older CNN architectures, and the depth of this filter is the same as the depth of your activation volume; that's your regular convolution. What depthwise convolution does instead is slice this activation volume, so you have a single feature map here, then a second feature map, et cetera, and you take a filter whose depth dimension is just one and apply that filter to this slice, then you apply a second filter to the second slice, et cetera. So again, let me draw that here: the filter for the depthwise case will be something like this, let's say three by three again, but the depth component is just one.
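For reference, here is a minimal PyTorch sketch of the three pieces just discussed: the classic ResNet-style stem, the 4x4 stride-4 patchify stem, and a depthwise convolution where the number of groups equals the number of channels; the channel widths 64 and 96 are just typical values, not anything prescribed by the video:

import torch.nn as nn

# ResNet-style stem: 7x7 stride-2 conv followed by a stride-2 max pool -> 4x downsampling.
resnet_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# "Patchify" stem: a single non-overlapping 4x4 stride-4 convolution, ViT/Swin style.
patchify_stem = nn.Conv2d(3, 96, kernel_size=4, stride=4)

# Depthwise convolution: groups == number of channels, so each filter is one channel
# deep and only mixes information spatially, within its own channel slice.
depthwise = nn.Conv2d(96, 96, kernel_size=7, padding=3, groups=96)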
So the filter is basically a single channel deep. Now, there is a problem with using depthwise convolution, and that's that you break the correlation between FLOPs and throughput, which means that for the same amount of compute, because of memory-access issues, the throughput, i.e. the number of images you can process per second or whatever unit of time, drops dramatically. We're gonna address that a little bit later; they did show that the throughput is competitive with and even better than Swin Transformers, so that's cool. Now, there are a lot more details here and I'm gonna skip some of them. They also introduced the inverted bottleneck, and I'm actually going to cover this one. So: one important design in every transformer block is that it creates an inverted bottleneck, i.e. the hidden dimension of the MLP block is four times wider than the input dimension. And interestingly, this transformer design is connected to the inverted bottleneck design with an expansion ratio of four used in ConvNets; the idea was popularized by MobileNetV2. So here we see this cross-pollination again, taking ideas from CNNs and implementing them even in the original transformer; the idea was informed by the MobileNet family of architectures, the first of which appeared before the transformer paper was originally published in 2017. If you did not understand what I said there, it's just the basic way transformers work, at least the usual architectures: you have your MSA module here, you have your input tokens, you have your output tokens, and then you apply an MLP. Each of these is a feature vector with a certain dimension, like D, and you apply the MLP with shared weights, so you basically do a for loop across these feature vectors and apply the MLP, which maps from D to 4D. So this feature vector is gonna be 4x longer, one, two, three, four, compared to this one, and then they map it back down to D dimensions. You can imagine that you start from D, upscale it to 4D, and then downscale it back to D, something like this; that's the inverted bottleneck. A regular bottleneck layer would look the opposite, something like this. Okay, so this is a fun fact, I wasn't aware of that. Now, here they just show the progression of how they were additionally rearranging the modules, and I think there is a small error here: this should be, I think, 384 mapped down to 96 and not the other way around, but anyway, that's a small detail. Then they play with kernel sizes, because as I said, usually you use three by three; they figure out that seven by seven works the best. Here they compare the Swin Transformer block with the ResNet block and with the final ConvNeXt block. A couple more things they experimented with: you can see that in the ConvNeXt block they're using GELU, the Gaussian error linear unit, instead of ReLU; they have LayerNorm instead of BatchNorm; and they have fewer normalization layers in general and fewer activation functions. You can see they apply the activation function only here, let me change the color here; a rough sketch of the resulting block is shown below.
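Putting the pieces from this part together, here is a simplified PyTorch sketch of what such a block looks like: a 7x7 depthwise convolution, a single LayerNorm, an inverted bottleneck expanding to 4*C, a single GELU, and a residual connection. The actual ConvNeXt block additionally uses layer scale and stochastic depth, which are omitted here for brevity:

import torch.nn as nn

class ConvNeXtStyleBlock(nn.Module):
    def __init__(self, dim=96):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # large-kernel depthwise conv
        self.norm = nn.LayerNorm(dim)            # single LayerNorm instead of several BatchNorms
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # 1x1 conv as Linear: inverted bottleneck expansion
        self.act = nn.GELU()                     # single GELU instead of several ReLUs
        self.pwconv2 = nn.Linear(4 * dim, dim)   # project back down to dim

    def forward(self, x):                        # x: (B, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # (B, H, W, C) so norm/linear act on the channel dim
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        return residual + x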
Okay, that's pretty much the level of detail I can tolerate for one video, so let me just slowly wrap this up. They say that their ConvNeXt model has approximately the same number of FLOPs and parameters, and the same throughput and memory use, as the Swin Transformer, but does not require specialized modules such as shifted window attention or relative position biases, which is very cool because they get to have a simpler architecture. Finally, here are the results. They basically show that on image classification, for all the various sizes, so both the small and the big variants, and for different resolutions, they outperform Swin, as you can see here. Let's focus on a single pair, for example Swin-B and ConvNeXt-B at 384 resolution. They have a similar number of parameters and similar FLOPs, Swin even has a bit more FLOPs here, and you can see that the throughput is lower for Swin and the accuracy is better for ConvNeXt. So basically across all of these dimensions it outperforms the Swin model, which is very cool. They also showed that increasing the model size, as we already mentioned, leads to an increase in performance, which is a desirable thing: you want to be able to scale the architecture up and get additional performance. Here they additionally show the models pre-trained on ImageNet-22K, and they show that even in this big data regime, where vision transformers are supposed to be superior, ConvNeXts are not only comparable, they outperform. Let's again focus on one of these pairs, let me take this one: they have a similar number of parameters again and similar FLOPs; actually, Swin takes more computation, it has lower throughput and it has lower accuracy. So again, it outperforms Swin even in the big data regime, which is fairly important. And wrapping this up, they also showed results on object detection and semantic segmentation tasks, where you'll see fairly similar results to the image classification ones, so I don't want to dig into too much detail there. The last thing I want to mention here is this: under similar FLOPs, models with depthwise convolutions are known to be slower and consume more memory than ConvNets with only dense convolutions. It is natural to ask whether the design of ConvNeXt will render it practically inefficient. As demonstrated throughout the paper, the inference throughputs of ConvNeXts are comparable to or exceed that of Swin Transformers. And that's something I was worried about, but it turns out that the throughput is actually kept at a healthy level. That's it. I want to wrap this video up by just mentioning one thing, and it's maybe a semi-rant, if that's even a word. I just kind of skimmed through the references and I didn't see Schmidhuber mentioned anywhere, which is kind of sad. I recently watched a video that Schmidhuber published on his YouTube channel where he explained the evolution of these earlier models, and it actually turns out, and I mentioned this at the beginning of this video, that prior to AlexNet there was this model called DanNet, spelled like this, I think: DanNet.
And that model was the first convolutional neural network that showed superhuman performance on, I think, four or five vision benchmarks, although those benchmarks were arguably way smaller compared to ImageNet. It's kind of sad that DanNet got censored out of AI history, even though it was such an important model back in 2011. So yeah, I encourage you to check out Schmidhuber's video and let me know your thoughts on this topic as well. Cool, let's do a quick summary of what we've seen in this video. Basically, we've seen this convergence going on between vision transformers on the one hand, with the Swin Transformer being the representative used in this paper, and on the other hand we've seen ConvNets being rejuvenated with novel training strategies, such as the ones from the ResNet Strikes Back paper or from this ConvNeXt paper. We've also seen the results from the EfficientNet V2 paper, which even seem to outperform the ConvNeXt models, so it would be a fun comparison to put them side by side across multiple benchmarks and such. In a nutshell, it seems that convolutional priors are very good, hard-to-beat priors for natural images, and I'm very excited to see where these types of hybrid models will take us and whether we're gonna have some novel ideas that boost the performance of these visual perception models even further. Anyways, if you found this video useful, with all of these semi-rants, consider subscribing, sharing the video, and also do join the Discord community; I'll link it down in the video description as always. So until next time, bye-bye. Thank you so much for watching.
[{"start": 0.64, "end": 1.6, "text": " What's cracking guys?"}, {"start": 1.6, "end": 3.64, "text": " In this video I'm covering this new paper"}, {"start": 3.64, "end": 6.76, "text": " called A Comnet for the 2020s."}, {"start": 6.76, "end": 9.040000000000001, "text": " The paper itself was followed by a lot of hype"}, {"start": 9.040000000000001, "end": 11.56, "text": " on social media, especially on Twitter."}, {"start": 11.56, "end": 14.16, "text": " So yeah, let's see what it's all about."}, {"start": 14.16, "end": 16.76, "text": " It's a paper by Facebook AI Research."}, {"start": 16.76, "end": 21.2, "text": " I guess it's now MetaAI and UC Berkeley folks."}, {"start": 21.2, "end": 23.76, "text": " So let's dig straight into it."}, {"start": 23.76, "end": 27.14, "text": " The roaring 20s of visual recognition began"}, {"start": 27.14, "end": 30.16, "text": " with introduction of vision transformers or VATs,"}, {"start": 30.16, "end": 33.88, "text": " which quickly superseded Comnets as the state of the art"}, {"start": 33.88, "end": 35.44, "text": " image classification model."}, {"start": 35.44, "end": 39.24, "text": " So the important detail here is it was very good."}, {"start": 39.24, "end": 42.92, "text": " It was a soda, but only on image classification task."}, {"start": 42.92, "end": 45.519999999999996, "text": " A vanilla VAT on the other hand,"}, {"start": 45.519999999999996, "end": 46.96, "text": " faces difficulties when applied"}, {"start": 46.96, "end": 48.36, "text": " to general computer vision tasks,"}, {"start": 48.36, "end": 51.36, "text": " such as object detection and semantic segmentation."}, {"start": 51.36, "end": 53.74, "text": " So object detection and semantic segmentation."}, {"start": 53.74, "end": 57.56, "text": " It is the hierarchical transformers like swing transformers"}, {"start": 57.56, "end": 60.52, "text": " that reintroduced several common priors,"}, {"start": 60.52, "end": 63.0, "text": " making transformers practically viable"}, {"start": 63.0, "end": 64.64, "text": " as a generic vision backbone."}, {"start": 64.64, "end": 67.52000000000001, "text": " However, the effectiveness of such hybrid approaches"}, {"start": 67.52000000000001, "end": 71.64, "text": " is still largely credited to the intrinsic superiority"}, {"start": 71.64, "end": 75.5, "text": " of transformers rather than the inherent inductive biases"}, {"start": 75.5, "end": 76.86, "text": " of convolutions."}, {"start": 76.86, "end": 78.28, "text": " So let me try and put this"}, {"start": 78.28, "end": 81.24000000000001, "text": " into broader historical perspective."}, {"start": 81.24000000000001, "end": 83.52000000000001, "text": " Okay, so the story basically begins,"}, {"start": 83.52, "end": 86.56, "text": " I mean begins around 2011."}, {"start": 86.56, "end": 89.96, "text": " That's when the first like successful CNN"}, {"start": 89.96, "end": 92.38, "text": " that has superhuman performance appeared."}, {"start": 92.38, "end": 95.36, "text": " And the name of that CNN was not AlexNet,"}, {"start": 95.36, "end": 97.08, "text": " it was actually DanNet."}, {"start": 97.08, "end": 98.44, "text": " So let me kind of put it here."}, {"start": 98.44, "end": 102.72, "text": " So we have 2011 here and we have this branch of CNNs."}, {"start": 102.72, "end": 103.66, "text": " I'm gonna draw it like this."}, {"start": 103.66, "end": 105.16, "text": " So CNNs."}, {"start": 105.16, "end": 108.28, "text": " And then in 2012, we have the most famous model"}, {"start": 108.28, "end": 
110.16, "text": " that was AlexNet."}, {"start": 110.16, "end": 112.64, "text": " And since then we had a bunch of different models,"}, {"start": 112.64, "end": 114.96, "text": " VGGs, Resnets, which were,"}, {"start": 114.96, "end": 117.92, "text": " Resnets were a big thing back in 2015."}, {"start": 117.92, "end": 120.42, "text": " So maybe somewhere around this time."}, {"start": 120.42, "end": 124.72, "text": " And all in all, that trend continued until 2020s."}, {"start": 124.72, "end": 129.12, "text": " And then what happened is that basically in late 2020,"}, {"start": 129.12, "end": 132.58, "text": " we had Vision Transformer like published."}, {"start": 132.58, "end": 135.28, "text": " And I've covered the paper by the way on VATs,"}, {"start": 135.28, "end": 136.84, "text": " if you're not familiar, you can check it out."}, {"start": 136.84, "end": 138.84, "text": " I'm gonna link it somewhere here."}, {"start": 138.84, "end": 141.64, "text": " So let me draw that as another thread"}, {"start": 141.64, "end": 144.6, "text": " in these like visual perception models."}, {"start": 144.6, "end": 149.04, "text": " We have Transformers here and they all started with VAT."}, {"start": 149.04, "end": 153.55999999999997, "text": " And what basically what VAT did is it petrified the image"}, {"start": 153.55999999999997, "end": 157.92, "text": " and then passed those tokens into Pure Transformer."}, {"start": 157.92, "end": 160.44, "text": " And it just turned out to work very nicely"}, {"start": 160.44, "end": 162.07999999999998, "text": " for our image classification task."}, {"start": 163.85999999999999, "end": 166.76, "text": " More importantly, for our story here"}, {"start": 166.76, "end": 169.27999999999997, "text": " is the appearance of Swin Transformer."}, {"start": 169.27999999999997, "end": 171.22, "text": " So that was somewhere around."}, {"start": 171.22, "end": 174.72, "text": " So this is like late 2020."}, {"start": 174.72, "end": 176.12, "text": " This is early 2021."}, {"start": 176.12, "end": 180.18, "text": " So that's like March 2021 is when the Swin Transformer"}, {"start": 180.18, "end": 184.24, "text": " appears and it stands for shifted window transformer."}, {"start": 184.24, "end": 186.44, "text": " I think that would be the contraction."}, {"start": 186.44, "end": 189.32, "text": " And from that point onward,"}, {"start": 189.32, "end": 191.64, "text": " basically Transformers were successful"}, {"start": 191.64, "end": 193.52, "text": " on semantic segmentation as well."}, {"start": 193.52, "end": 196.52, "text": " And all of those more like types of visual tasks"}, {"start": 196.52, "end": 197.8, "text": " where you need dense predictions,"}, {"start": 197.8, "end": 199.16, "text": " such as semantic segmentation,"}, {"start": 199.16, "end": 201.2, "text": " instant segmentation, et cetera."}, {"start": 201.2, "end": 204.94, "text": " So now let me draw, that's the kind of timeline."}, {"start": 204.94, "end": 208.76, "text": " Now let me draw how these two branches of models"}, {"start": 208.76, "end": 210.56, "text": " started converging in a way."}, {"start": 210.56, "end": 214.29999999999998, "text": " So what actually happened is that Swin Transformer"}, {"start": 214.29999999999998, "end": 217.5, "text": " kind of went this direction towards the CNN line."}, {"start": 218.33999999999997, "end": 223.23999999999998, "text": " And now what this paper does is the same thing."}, {"start": 223.23999999999998, "end": 227.06, "text": " It takes the most 
recent and best practices"}, {"start": 227.06, "end": 229.48, "text": " when it comes to training CNNs"}, {"start": 229.48, "end": 232.28, "text": " and combines those with the design ideas"}, {"start": 232.28, "end": 234.23999999999998, "text": " that Transformers introduced."}, {"start": 234.23999999999998, "end": 235.34, "text": " So it does a similar thing."}, {"start": 235.34, "end": 238.78, "text": " So basically what this paper does is"}, {"start": 238.78, "end": 243.6, "text": " starts going towards the transformer branch of models."}, {"start": 243.6, "end": 245.34, "text": " So that's a mental model I have in my head"}, {"start": 245.34, "end": 246.72, "text": " at the moment basically."}, {"start": 246.72, "end": 251.64, "text": " And now there is a boundary between these two."}, {"start": 251.64, "end": 253.72, "text": " The reason being is that COMnext,"}, {"start": 253.72, "end": 255.6, "text": " which is the model from this paper,"}, {"start": 255.6, "end": 257.56, "text": " is still a pure CNN."}, {"start": 257.56, "end": 260.4, "text": " So that means there is a boundary here"}, {"start": 260.4, "end": 262.52, "text": " because these are still pure CNNs."}, {"start": 262.52, "end": 265.12, "text": " So the COMnext is still a pure CNN"}, {"start": 265.12, "end": 267.52, "text": " with some ideas from the transformer branch,"}, {"start": 267.52, "end": 269.56, "text": " whereas Swin Transformer arguably"}, {"start": 269.56, "end": 272.68, "text": " maybe even passes this virtual boundary"}, {"start": 272.68, "end": 274.84000000000003, "text": " because it has a lot of CNN priors"}, {"start": 274.84000000000003, "end": 277.64, "text": " as we are going to see very soon."}, {"start": 277.64, "end": 280.44, "text": " So that's the basic mental model."}, {"start": 280.44, "end": 282.92, "text": " And the whole idea is for me to show you"}, {"start": 282.92, "end": 284.48, "text": " the steps that happened here."}, {"start": 284.48, "end": 287.76, "text": " So how did they go from ResNet-50,"}, {"start": 287.76, "end": 290.32, "text": " which is the baseline model they're using,"}, {"start": 290.32, "end": 293.44, "text": " which steps did they take, one, two, three, et cetera,"}, {"start": 293.44, "end": 298.24, "text": " to get to outperforming Vision Transformers"}, {"start": 298.24, "end": 300.90000000000003, "text": " and specifically Swin Transformer,"}, {"start": 300.90000000000003, "end": 304.52000000000004, "text": " which was the baseline they were comparing with against."}, {"start": 304.52000000000004, "end": 307.52000000000004, "text": " So let's start with this diagram."}, {"start": 307.52000000000004, "end": 310.02000000000004, "text": " It's one of the most important diagrams in this paper."}, {"start": 310.02000000000004, "end": 312.36, "text": " And you can see a couple of things here."}, {"start": 312.36, "end": 314.6, "text": " So first, we have two groups here."}, {"start": 314.6, "end": 317.72, "text": " We have ImageNet-1K pre-trained models here."}, {"start": 317.72, "end": 320.28000000000003, "text": " Let me just change the color here to red."}, {"start": 320.28000000000003, "end": 323.0, "text": " And we have ImageNet-22K pre-trained models."}, {"start": 323.0, "end": 325.56, "text": " So that's the superset of ImageNet dataset,"}, {"start": 325.56, "end": 328.36, "text": " which has around 22,000 classes."}, {"start": 328.36, "end": 330.04, "text": " It's way bigger compared to this one."}, {"start": 330.04, "end": 332.08000000000004, "text": " So 
yeah, the whole point is to see"}, {"start": 332.08000000000004, "end": 335.8, "text": " whether the models scale with additional data,"}, {"start": 335.8, "end": 339.2, "text": " which is something that VATs are notoriously famous for"}, {"start": 339.2, "end": 343.76, "text": " because they can learn the priors that the CNNs have."}, {"start": 343.76, "end": 346.59999999999997, "text": " And you can check those out in my VAT paper,"}, {"start": 346.59999999999997, "end": 350.15999999999997, "text": " but for now, that's pretty much enough."}, {"start": 350.15999999999997, "end": 352.36, "text": " So what we see here on the Y-axis"}, {"start": 352.36, "end": 355.64, "text": " is the top one accuracy on ImageNet."}, {"start": 355.64, "end": 357.68, "text": " And you can see that ResNet here"}, {"start": 359.8, "end": 362.59999999999997, "text": " has lower accuracy compared to date."}, {"start": 362.59999999999997, "end": 364.08, "text": " I'm not sure how to pronounce this one,"}, {"start": 364.08, "end": 366.38, "text": " but it's pretty much a plain VAT model."}, {"start": 366.38, "end": 371.38, "text": " So that means little to no conf net biases,"}, {"start": 371.56, "end": 374.88, "text": " except for the patch, for the input patchify layer,"}, {"start": 374.88, "end": 377.15999999999997, "text": " which is arguably just a convolutional layer."}, {"start": 378.64, "end": 380.04, "text": " Now we have the Swin transformer here."}, {"start": 380.04, "end": 383.04, "text": " You can see that it outperforms the VAT."}, {"start": 383.04, "end": 385.5, "text": " And finally, we can see the comnext here."}, {"start": 385.5, "end": 387.96, "text": " Let me zoom in a little bit here."}, {"start": 387.96, "end": 390.65999999999997, "text": " You can see that it outperforms Swin transformers."}, {"start": 390.65999999999997, "end": 393.44, "text": " And not only that, so not only that we have higher"}, {"start": 393.44, "end": 396.6, "text": " performance, but you can also notice that"}, {"start": 396.6, "end": 399.68, "text": " you can also notice that by increasing the amount"}, {"start": 399.68, "end": 402.52, "text": " of computation, so by making models bigger,"}, {"start": 402.52, "end": 405.68, "text": " which you can see by the diameter of these circles"}, {"start": 405.68, "end": 408.08, "text": " here expressed in gigaflops,"}, {"start": 408.08, "end": 409.78, "text": " you can see that the performance goes up."}, {"start": 409.78, "end": 412.88, "text": " So here for these smaller models,"}, {"start": 412.88, "end": 414.96, "text": " the accuracy was somewhere here."}, {"start": 414.96, "end": 417.46, "text": " And then as they were increasing the model size,"}, {"start": 417.46, "end": 421.2, "text": " you can see that the center of these circles"}, {"start": 421.2, "end": 424.71999999999997, "text": " basically went up, which means they scale fairly well"}, {"start": 424.71999999999997, "end": 427.24, "text": " with model size."}, {"start": 427.24, "end": 430.18, "text": " But the second dimension where they scale well across"}, {"start": 430.18, "end": 431.84, "text": " is also the data."}, {"start": 431.84, "end": 435.12, "text": " And that's what we actually care arguably even more about."}, {"start": 435.12, "end": 439.32, "text": " So here you can see that even with more data,"}, {"start": 439.32, "end": 441.4, "text": " the comnext architecture basically benefits"}, {"start": 441.4, "end": 443.36, "text": " from additional data."}, {"start": 443.36, "end": 444.91999999999996, 
"text": " And that's an important finding."}, {"start": 444.91999999999996, "end": 449.0, "text": " Again, comparisons with VAT, with Swin transformer"}, {"start": 449.0, "end": 453.68, "text": " and comnext, you can see that both the accuracy is better"}, {"start": 453.68, "end": 458.4, "text": " as well as the, I mean, scaling exists,"}, {"start": 458.4, "end": 460.36, "text": " that's what we are looking for pretty much."}, {"start": 460.36, "end": 463.0, "text": " Now the thing is this diagram is a little bit misleading"}, {"start": 463.0, "end": 465.24, "text": " because as I said, there was a lot of hype"}, {"start": 465.24, "end": 468.28, "text": " following this paper and there were a lot of"}, {"start": 468.28, "end": 471.12, "text": " Twitter threads, let me just show you this one."}, {"start": 471.12, "end": 475.4, "text": " So Lucas Baer, who is the co-author of the VAT paper,"}, {"start": 475.4, "end": 477.72, "text": " showed that they were actually comparing"}, {"start": 477.72, "end": 481.64000000000004, "text": " with the old VAT that still didn't have all of the modern"}, {"start": 481.64000000000004, "end": 482.6, "text": " augmentations."}, {"start": 482.6, "end": 484.76000000000005, "text": " And if you add those additional augmentations,"}, {"start": 484.76000000000005, "end": 488.48, "text": " we see the top one accuracy translating upwards here,"}, {"start": 488.48, "end": 492.16, "text": " thus becoming almost on pair with comnext."}, {"start": 492.16, "end": 495.88000000000005, "text": " So this convergence further supports my claims here"}, {"start": 495.88000000000005, "end": 499.12, "text": " that we have like a huge, huge like convergence happening"}, {"start": 499.12, "end": 504.12, "text": " between these two separate branches of perception models."}, {"start": 505.44000000000005, "end": 507.36, "text": " A couple more comments here."}, {"start": 507.36, "end": 510.36, "text": " The creator of this famous team library"}, {"start": 510.36, "end": 512.6, "text": " and the co-author of the Resonance Strikes Back paper,"}, {"start": 512.6, "end": 514.84, "text": " which I've covered in one of my previous videos,"}, {"start": 514.84, "end": 518.24, "text": " which also showed that if you train Resonance"}, {"start": 518.24, "end": 520.32, "text": " with modern training techniques and recipes,"}, {"start": 520.32, "end": 522.6, "text": " you basically get way better performance"}, {"start": 522.6, "end": 526.08, "text": " and we ought to be comparing with those novel Resonance"}, {"start": 526.08, "end": 529.0, "text": " and not with the old results from back from 2015."}, {"start": 530.08, "end": 533.2, "text": " Yeah, he says here that while they did mention"}, {"start": 533.2, "end": 535.16, "text": " better Resonance training in the core of the paper,"}, {"start": 535.16, "end": 537.36, "text": " that headline grabbing plot used the originals."}, {"start": 537.36, "end": 540.3199999999999, "text": " I rewrote some of the layer norm, general stuff,"}, {"start": 540.3199999999999, "end": 544.16, "text": " blah, blah, blah, and got the extra 20% boost."}, {"start": 544.16, "end": 546.76, "text": " Basically that means that we need to shift upwards"}, {"start": 546.76, "end": 550.28, "text": " the Resonance as well and that the performance actually,"}, {"start": 550.28, "end": 553.28, "text": " like this gap here is closed down pretty much"}, {"start": 553.28, "end": 554.3199999999999, "text": " with Swin Transformers."}, {"start": 554.3199999999999, "end": 
556.16, "text": " It would be nice to see like a visualization,"}, {"start": 556.16, "end": 558.16, "text": " but like just keep that in your head."}, {"start": 558.16, "end": 561.68, "text": " And finally, the co-author of the both efficient net paper,"}, {"start": 561.68, "end": 563.64, "text": " so both the version one and version two,"}, {"start": 563.64, "end": 567.8, "text": " says here that the auto ML models are already ahead"}, {"start": 567.8, "end": 570.72, "text": " by a couple of months, but this paper chooses to compare"}, {"start": 570.72, "end": 575.4, "text": " 2019 efficient net, we won instead of 2021 efficient net,"}, {"start": 575.4, "end": 577.76, "text": " we two, which is a paper I also covered,"}, {"start": 577.76, "end": 580.64, "text": " you can check it out if you're curious more about it."}, {"start": 580.64, "end": 583.88, "text": " But the interesting thing here is,"}, {"start": 583.88, "end": 585.68, "text": " let's see some comparisons."}, {"start": 585.68, "end": 589.2, "text": " Basically we have the efficient net,"}, {"start": 589.2, "end": 591.64, "text": " so let's take a look at these pre-trained"}, {"start": 591.64, "end": 594.68, "text": " only on ImageNet that has 1000 classes."}, {"start": 594.68, "end": 598.68, "text": " You can see that efficient net, we two has higher accuracy,"}, {"start": 598.68, "end": 601.76, "text": " has much smaller amount of parameters,"}, {"start": 601.76, "end": 605.68, "text": " less computation necessary and three X bigger throughput,"}, {"start": 605.68, "end": 608.4, "text": " which means that efficient net we two"}, {"start": 608.4, "end": 611.92, "text": " is actually the common net for the 2020s."}, {"start": 611.92, "end": 616.0, "text": " Cool, that's a necessary context"}, {"start": 616.0, "end": 618.12, "text": " before I start digging into the paper,"}, {"start": 618.12, "end": 620.3199999999999, "text": " but there is one more thing I want to cover"}, {"start": 620.32, "end": 621.6, "text": " and that's the Swin transformer,"}, {"start": 621.6, "end": 626.1600000000001, "text": " because it's arguably the model from which"}, {"start": 626.1600000000001, "end": 628.2800000000001, "text": " they were taking a lot of inspiration from,"}, {"start": 628.2800000000001, "end": 631.0400000000001, "text": " as we'll soon see, and that's why I wanted to kind of"}, {"start": 631.0400000000001, "end": 632.6400000000001, "text": " be familiar with Swin transformer,"}, {"start": 632.6400000000001, "end": 634.72, "text": " I haven't covered it before on my YouTube channel,"}, {"start": 634.72, "end": 637.36, "text": " so I'm gonna give you a quick introduction here."}, {"start": 637.36, "end": 639.6, "text": " So what's the main idea here?"}, {"start": 640.8000000000001, "end": 642.6, "text": " I already mentioned that they are introducing"}, {"start": 642.6, "end": 646.2800000000001, "text": " certain CNN biases and now let me show you why that is."}, {"start": 646.2800000000001, "end": 648.96, "text": " So first things first, we know that transformers"}, {"start": 648.96, "end": 651.88, "text": " scale poorly with the number of input tokens."}, {"start": 651.88, "end": 656.0400000000001, "text": " So in the limit when a token is a single pixel,"}, {"start": 656.0400000000001, "end": 658.44, "text": " you have this many input tokens."}, {"start": 658.44, "end": 662.12, "text": " So height times width, this is the number of tokens"}, {"start": 662.12, "end": 663.8000000000001, "text": " and we can see more 
importantly"}, {"start": 663.8000000000001, "end": 665.6, "text": " that we have quadratic scaling"}, {"start": 666.5600000000001, "end": 668.12, "text": " with the number of input tokens,"}, {"start": 668.12, "end": 671.12, "text": " which does not scale when you have huge resolutions,"}, {"start": 671.12, "end": 673.84, "text": " it's too computationally intensive pretty much."}, {"start": 673.84, "end": 676.2800000000001, "text": " What this Swin transformer paper introduced"}, {"start": 676.28, "end": 681.28, "text": " is this windowed MSA, so the multi-head self-attention,"}, {"start": 682.16, "end": 684.76, "text": " plus the shifted window MSA,"}, {"start": 684.76, "end": 688.0, "text": " which is where they got the name from,"}, {"start": 688.0, "end": 690.64, "text": " so basically sliding windows,"}, {"start": 690.64, "end": 693.04, "text": " that's why they have the Swin part."}, {"start": 693.04, "end": 696.12, "text": " And you can see here that, let me zoom in a little bit,"}, {"start": 696.12, "end": 698.64, "text": " that here they have a linear dependency"}, {"start": 698.64, "end": 700.8399999999999, "text": " on the number of input tokens,"}, {"start": 700.8399999999999, "end": 703.8, "text": " but they have a quadratic dependency on this M,"}, {"start": 703.8, "end": 706.76, "text": " which is usually kept constant around seven x seven,"}, {"start": 706.76, "end": 710.3599999999999, "text": " so seven times seven, and that's just the number of tokens"}, {"start": 710.3599999999999, "end": 713.88, "text": " inside of the window, basically."}, {"start": 713.88, "end": 716.16, "text": " That's gonna be more clear in a couple of seconds."}, {"start": 716.16, "end": 719.56, "text": " So all in all, this windowed MSA"}, {"start": 719.56, "end": 723.3599999999999, "text": " solves the quadratic relationship here."}, {"start": 723.3599999999999, "end": 726.28, "text": " Okay, so let's see quickly how the thing works."}, {"start": 726.28, "end": 729.52, "text": " And fairly simple, on the right-hand side here,"}, {"start": 729.52, "end": 730.76, "text": " you can see the original VAT,"}, {"start": 730.76, "end": 732.4399999999999, "text": " you can see that it's fairly uniform"}, {"start": 732.44, "end": 734.24, "text": " in the sense that every single,"}, {"start": 734.24, "end": 737.32, "text": " the resolutions, the spatial resolution is kept uniform"}, {"start": 737.32, "end": 740.08, "text": " across every single stage of VAT."}, {"start": 740.08, "end": 743.24, "text": " So what happens is initially you patchify the image,"}, {"start": 743.24, "end": 745.5600000000001, "text": " and then you start processing it,"}, {"start": 745.5600000000001, "end": 748.72, "text": " processing the tokens using global self-attentions,"}, {"start": 748.72, "end": 752.12, "text": " which means every single token here, so this one here,"}, {"start": 752.12, "end": 754.44, "text": " will be able to attend all of the other tokens,"}, {"start": 754.44, "end": 757.9200000000001, "text": " so this one, and this one, and also this one."}, {"start": 757.9200000000001, "end": 760.2800000000001, "text": " So every token can attend every single token."}, {"start": 760.28, "end": 763.52, "text": " You basically have a fully connected graph here."}, {"start": 763.52, "end": 765.9599999999999, "text": " Contrast that to Swin Transformer,"}, {"start": 765.9599999999999, "end": 769.72, "text": " where they introduced local computations."}, {"start": 769.72, "end": 773.92, "text": " So that means here, you 
run your self-attention module,"}, {"start": 773.92, "end": 778.92, "text": " the MHA module, on these smaller parts of the input image."}, {"start": 779.72, "end": 782.3199999999999, "text": " So basically you can see here four by four,"}, {"start": 782.3199999999999, "end": 785.56, "text": " and on each one of these windows, they'll be running,"}, {"start": 785.56, "end": 787.6, "text": " so the window is the terminology here,"}, {"start": 787.6, "end": 789.4, "text": " they'll be running a self-attention module."}, {"start": 789.4, "end": 791.6, "text": " You can already see that this local computation"}, {"start": 791.6, "end": 794.92, "text": " is something that CNN prior is all about."}, {"start": 794.92, "end": 798.02, "text": " I mean, you have basically kernels which you slide,"}, {"start": 798.02, "end": 800.48, "text": " you share the weights, and you have local computation,"}, {"start": 800.48, "end": 803.4399999999999, "text": " which is implicitly, you're implying that"}, {"start": 803.4399999999999, "end": 805.8, "text": " local neighborhood of pixels is more important"}, {"start": 805.8, "end": 809.48, "text": " than attending pixels which are far away."}, {"start": 809.48, "end": 812.0799999999999, "text": " And that's something that for, at least for natural images,"}, {"start": 812.0799999999999, "end": 814.04, "text": " is usually the case, like there is a correlation,"}, {"start": 814.04, "end": 816.12, "text": " a strong correlation between neighboring pixels,"}, {"start": 816.12, "end": 818.34, "text": " and that's what CNNs are exploiting."}, {"start": 818.34, "end": 819.72, "text": " So that understanding and human knowledge"}, {"start": 819.72, "end": 823.7800000000001, "text": " is basically ingrained, baked into the architecture itself."}, {"start": 823.7800000000001, "end": 827.0, "text": " The second prior I can see here on top of my mind"}, {"start": 827.0, "end": 829.32, "text": " is the hierarchical approach,"}, {"start": 829.32, "end": 832.58, "text": " and that's why they call these hierarchical transformers."}, {"start": 832.58, "end": 835.6800000000001, "text": " You can see that basically here we have"}, {"start": 835.6800000000001, "end": 838.64, "text": " self-attention acting on these fine-grained features,"}, {"start": 838.64, "end": 842.08, "text": " and then what they do is they take a pool of these windows,"}, {"start": 842.08, "end": 843.5600000000001, "text": " so let me change the color here."}, {"start": 843.5600000000001, "end": 846.2800000000001, "text": " So let's say they take like four of these windows,"}, {"start": 846.28, "end": 849.3199999999999, "text": " and they merge all of those into this new window."}, {"start": 849.3199999999999, "end": 853.92, "text": " So now here, we are processing the data on a coarser,"}, {"start": 853.92, "end": 857.62, "text": " like this is a coarse-grained layer,"}, {"start": 857.62, "end": 859.8399999999999, "text": " and then finally they do the same thing here,"}, {"start": 859.8399999999999, "end": 861.3399999999999, "text": " they repeat the procedure,"}, {"start": 861.3399999999999, "end": 864.8199999999999, "text": " and they have even more coarse-grained features here."}, {"start": 864.8199999999999, "end": 866.8399999999999, "text": " So contrast that with the uniformity"}, {"start": 866.8399999999999, "end": 868.56, "text": " of the original transformers,"}, {"start": 868.56, "end": 872.28, "text": " and you'll see that a lot of this was basically,"}, {"start": 872.28, "end": 874.78, "text": 
" all of these ideas are stemming pretty much"}, {"start": 874.78, "end": 876.4599999999999, "text": " from the CNN branch of models."}, {"start": 876.4599999999999, "end": 877.9599999999999, "text": " Now you may see a problem here,"}, {"start": 879.0799999999999, "end": 881.4, "text": " and that's that we are not sharing information"}, {"start": 881.4, "end": 883.0799999999999, "text": " across various spatial locations,"}, {"start": 883.0799999999999, "end": 885.1999999999999, "text": " and that can be problematic."}, {"start": 885.1999999999999, "end": 888.28, "text": " So because you can see here, we are just processing,"}, {"start": 888.28, "end": 890.28, "text": " so these pixels here, these features,"}, {"start": 890.28, "end": 893.3199999999999, "text": " cannot learn and take information from the other,"}, {"start": 894.52, "end": 896.52, "text": " under other spatial locations here,"}, {"start": 896.52, "end": 897.52, "text": " so what they additionally do"}, {"start": 897.52, "end": 901.36, "text": " is they have this window-shifting technique."}, {"start": 901.36, "end": 903.92, "text": " So let's understand why is this important,"}, {"start": 903.92, "end": 906.9599999999999, "text": " and how the mechanism works actually."}, {"start": 906.9599999999999, "end": 909.62, "text": " So as I said, we need to have some mixing"}, {"start": 909.62, "end": 911.92, "text": " between various spatial locations,"}, {"start": 911.92, "end": 913.7199999999999, "text": " and how this technique does it,"}, {"start": 913.7199999999999, "end": 916.16, "text": " how it solves the problem is the following way."}, {"start": 916.16, "end": 920.3399999999999, "text": " So in this initial, once you apply the self-attention module,"}, {"start": 920.3399999999999, "end": 921.8399999999999, "text": " that means that all of these tokens"}, {"start": 921.8399999999999, "end": 924.3, "text": " will contain information from all of the other tokens"}, {"start": 924.3, "end": 928.56, "text": " in the window by the very way how self-attention works."}, {"start": 928.56, "end": 930.16, "text": " So that means that we'll have,"}, {"start": 930.16, "end": 932.52, "text": " so this token here will contain information"}, {"start": 932.52, "end": 935.0, "text": " from all of the other tokens, from this one, from this one,"}, {"start": 935.0, "end": 937.22, "text": " from this one, from all of the tokens."}, {"start": 937.22, "end": 940.56, "text": " Whoops, let me draw it like this, and from this one,"}, {"start": 940.56, "end": 941.56, "text": " and that's it."}, {"start": 941.56, "end": 944.3, "text": " Now in the next stage, once we are here,"}, {"start": 944.3, "end": 945.92, "text": " because we have a different layout,"}, {"start": 945.92, "end": 948.48, "text": " that means that the information that's contained here,"}, {"start": 948.48, "end": 950.56, "text": " so that means that this information here,"}, {"start": 950.56, "end": 953.26, "text": " which stems from this, that contains the information"}, {"start": 953.26, "end": 956.36, "text": " from all around, from all of these tokens in this quadrant,"}, {"start": 956.36, "end": 960.06, "text": " will now be kind of shared with the other tokens,"}, {"start": 960.06, "end": 961.3199999999999, "text": " and that means the following."}, {"start": 961.32, "end": 963.44, "text": " So let me change the color here."}, {"start": 963.44, "end": 967.1600000000001, "text": " That means that now, after we apply self-attention here,"}, {"start": 967.1600000000001, 
"end": 968.7600000000001, "text": " the information contained here,"}, {"start": 968.7600000000001, "end": 971.2800000000001, "text": " which is information that originally stems from here,"}, {"start": 971.2800000000001, "end": 974.62, "text": " will be shared to all of the other tokens."}, {"start": 974.62, "end": 977.5600000000001, "text": " So this one, and this one, and all of the other tokens."}, {"start": 977.5600000000001, "end": 980.4000000000001, "text": " Which means that now this token here"}, {"start": 980.4000000000001, "end": 984.44, "text": " contains information from these tokens here."}, {"start": 984.44, "end": 986.5, "text": " When I say token, I mean the feature vector."}, {"start": 986.5, "end": 990.0400000000001, "text": " And finally, once we get in the next stage,"}, {"start": 990.04, "end": 994.76, "text": " and we apply the self-attention using this layout again,"}, {"start": 994.76, "end": 998.26, "text": " you can see how this information here"}, {"start": 998.26, "end": 1002.36, "text": " will be propagated to all of the other tokens here."}, {"start": 1002.36, "end": 1005.74, "text": " So we'll be propagating it even to the top left."}, {"start": 1005.74, "end": 1008.64, "text": " And that's the way how the information traverses"}, {"start": 1008.64, "end": 1011.1999999999999, "text": " this spatial extent here."}, {"start": 1011.1999999999999, "end": 1012.88, "text": " That's the basic idea."}, {"start": 1012.88, "end": 1016.4599999999999, "text": " Now I did mention that Swin works with these,"}, {"start": 1016.4599999999999, "end": 1019.52, "text": " like first on a fine-grained scale,"}, {"start": 1019.52, "end": 1021.84, "text": " and that means that each of these tokens"}, {"start": 1021.84, "end": 1025.36, "text": " is basically four by four, contains four by four pixels."}, {"start": 1025.36, "end": 1027.72, "text": " So if you take a single token out here,"}, {"start": 1027.72, "end": 1029.96, "text": " it contains, let me change the color,"}, {"start": 1029.96, "end": 1034.92, "text": " it contains four times four pixels."}, {"start": 1034.92, "end": 1039.92, "text": " Whereas the BAT has a more aggressive down sampling strategy,"}, {"start": 1040.94, "end": 1042.6, "text": " which means that this token here"}, {"start": 1042.6, "end": 1047.6, "text": " actually contains information from the 16 times 16 patch"}, {"start": 1047.6, "end": 1050.1999999999998, "text": " of the original image."}, {"start": 1050.1999999999998, "end": 1052.36, "text": " So how they form the features,"}, {"start": 1052.36, "end": 1054.04, "text": " and this is a minor detail,"}, {"start": 1054.04, "end": 1055.8999999999999, "text": " I'm gonna mention it either way."}, {"start": 1055.8999999999999, "end": 1060.56, "text": " So each patch is treated as a token,"}, {"start": 1060.56, "end": 1063.5, "text": " and its feature is set as a concatenation"}, {"start": 1063.5, "end": 1066.34, "text": " of the raw pixel RGB values."}, {"start": 1066.34, "end": 1069.04, "text": " In our implementation, we use a patch size of four by four,"}, {"start": 1069.04, "end": 1070.48, "text": " as I just mentioned,"}, {"start": 1070.48, "end": 1072.9599999999998, "text": " and thus the feature dimension of each patch"}, {"start": 1072.9599999999998, "end": 1076.56, "text": " is four by four times three, and that's 48."}, {"start": 1076.56, "end": 1080.08, "text": " And we have three because it's basically RGB image."}, {"start": 1080.08, "end": 1081.8999999999999, "text": " So we just take the 
image,"}, {"start": 1081.8999999999999, "end": 1084.3999999999999, "text": " we take those four times four pixels,"}, {"start": 1084.3999999999999, "end": 1087.3799999999999, "text": " we take the RGB features and we just concatenate those,"}, {"start": 1087.3799999999999, "end": 1091.44, "text": " and that's what is fed into the screen transformer."}, {"start": 1091.44, "end": 1093.84, "text": " So here is the architecture quickly."}, {"start": 1093.84, "end": 1096.6399999999999, "text": " Again, you can see some resemblance with ComNets."}, {"start": 1096.6399999999999, "end": 1099.7, "text": " After the initial patch partition layer, as they call it,"}, {"start": 1099.7, "end": 1102.48, "text": " we have, which hopefully makes sense now,"}, {"start": 1102.48, "end": 1104.76, "text": " so the spatial resolution is decreased by four"}, {"start": 1104.76, "end": 1107.96, "text": " because we have four times four pixels in a patch,"}, {"start": 1107.96, "end": 1109.96, "text": " and we have 48 because I just explained"}, {"start": 1109.96, "end": 1111.32, "text": " how we're forming those features."}, {"start": 1111.32, "end": 1112.8, "text": " So that's why I explained those"}, {"start": 1112.8, "end": 1115.74, "text": " so that you're able to parse the numbers here."}, {"start": 1115.74, "end": 1118.24, "text": " Then what happens is they apply a linear embedding,"}, {"start": 1118.24, "end": 1123.24, "text": " which effectively maps this 48 to C,"}, {"start": 1124.18, "end": 1126.26, "text": " even though they didn't mark that here."}, {"start": 1126.26, "end": 1130.36, "text": " So what goes into this first transformer block is this,"}, {"start": 1130.36, "end": 1133.68, "text": " and the feature size is C, not 48."}, {"start": 1133.68, "end": 1136.9, "text": " Now, why do they have two X here?"}, {"start": 1136.9, "end": 1138.8200000000002, "text": " And the reason is they are combining"}, {"start": 1138.8200000000002, "end": 1140.6200000000001, "text": " the windowed self-attention"}, {"start": 1140.6200000000001, "end": 1143.88, "text": " with this shifted window self-attention."}, {"start": 1144.74, "end": 1148.66, "text": " And they apply those in a sequential manner,"}, {"start": 1148.66, "end": 1152.6200000000001, "text": " and then they do the patch merging, which I explained here."}, {"start": 1152.6200000000001, "end": 1154.54, "text": " They have this merging of patches"}, {"start": 1154.54, "end": 1158.1200000000001, "text": " so that they have these core screened representations,"}, {"start": 1158.1200000000001, "end": 1161.22, "text": " and then they just keep on doing the same thing,"}, {"start": 1161.22, "end": 1165.26, "text": " windowed attention followed by sliding windowed attention,"}, {"start": 1165.26, "end": 1166.5, "text": " et cetera, et cetera."}, {"start": 1166.5, "end": 1168.6200000000001, "text": " And what happens additionally is you can see here,"}, {"start": 1168.6200000000001, "end": 1172.06, "text": " C goes to two C, goes to four C, goes to eight C,"}, {"start": 1172.06, "end": 1176.42, "text": " which means we are increasing the number of channels"}, {"start": 1176.42, "end": 1178.42, "text": " as we are reducing the spatial dimension,"}, {"start": 1178.42, "end": 1180.46, "text": " which is also a pattern from CNNs."}, {"start": 1180.46, "end": 1182.66, "text": " These transformer blocks themselves"}, {"start": 1182.66, "end": 1185.3, "text": " like are consist out of these two modules,"}, {"start": 1185.3, "end": 1187.06, "text": " which are your, 
this is your regular,"}, {"start": 1187.06, "end": 1188.14, "text": " this is a transformer block."}, {"start": 1188.14, "end": 1189.42, "text": " The only difference is they're using,"}, {"start": 1189.42, "end": 1192.6200000000001, "text": " instead of the regular MSA, they're using windowed MSA"}, {"start": 1192.6200000000001, "end": 1197.0600000000002, "text": " and the sliding, this shift windowed MSA."}, {"start": 1197.0600000000002, "end": 1200.26, "text": " I may be butchering the naming, the terminology here."}, {"start": 1200.26, "end": 1202.7, "text": " I think it's, yeah, shifted windowing configuration."}, {"start": 1202.7, "end": 1204.02, "text": " Got it."}, {"start": 1204.02, "end": 1205.1000000000001, "text": " Cool."}, {"start": 1205.1000000000001, "end": 1208.14, "text": " Okay, now that I've properly, hopefully properly,"}, {"start": 1208.14, "end": 1210.5, "text": " motivated the reason for this paper"}, {"start": 1210.5, "end": 1213.0600000000002, "text": " and some of the background knowledge like Swin Transformer"}, {"start": 1213.0600000000002, "end": 1217.7, "text": " and some of the, yeah, corrections to this main diagram,"}, {"start": 1217.7, "end": 1219.82, "text": " let's get to the rest of the paper."}, {"start": 1220.6200000000001, "end": 1222.78, "text": " So our research is intended to bridge the gap"}, {"start": 1222.78, "end": 1226.82, "text": " between the pre-VIT and post-VIT eras for ComNets."}, {"start": 1226.82, "end": 1229.3400000000001, "text": " To do this, we start with a standard ResNet,"}, {"start": 1229.3400000000001, "end": 1233.5800000000002, "text": " so ResNet-50 for example, trained with an improved procedure."}, {"start": 1233.5800000000002, "end": 1236.1000000000001, "text": " We gradually modernize the architecture"}, {"start": 1236.1000000000001, "end": 1239.26, "text": " to the construction of a hierarchical vision transformer,"}, {"start": 1239.26, "end": 1241.9, "text": " Swin Tiny, T stands for tiny."}, {"start": 1241.9, "end": 1244.14, "text": " So basically again, what they're saying here is,"}, {"start": 1244.14, "end": 1246.06, "text": " hey, we're taking some inspiration,"}, {"start": 1246.06, "end": 1248.86, "text": " design inspiration from Swin Transformers."}, {"start": 1248.86, "end": 1251.4199999999998, "text": " Our exploration is directed by a quick question."}, {"start": 1251.4199999999998, "end": 1254.8999999999999, "text": " How do design decisions in Transformers"}, {"start": 1254.8999999999999, "end": 1256.98, "text": " impact ComNet's performance?"}, {"start": 1256.98, "end": 1260.74, "text": " Okay, so that's the motivating statement there."}, {"start": 1260.74, "end": 1263.1799999999998, "text": " Now, before I show this diagram,"}, {"start": 1263.1799999999998, "end": 1264.98, "text": " which is the second most important diagram,"}, {"start": 1264.98, "end": 1267.22, "text": " if not the most important diagram,"}, {"start": 1267.22, "end": 1269.1399999999999, "text": " let me just start with this."}, {"start": 1269.1399999999999, "end": 1272.4199999999998, "text": " So a recent paper, which is basically this one,"}, {"start": 1272.4199999999998, "end": 1273.6599999999999, "text": " the ResNet Strike Spec,"}, {"start": 1273.66, "end": 1276.1000000000001, "text": " which I've covered in one of the previous videos,"}, {"start": 1276.1000000000001, "end": 1278.46, "text": " demonstrates how a set of modern training techniques"}, {"start": 1278.46, "end": 1280.5400000000002, "text": " can significantly enhance 
the performance"}, {"start": 1280.5400000000002, "end": 1283.14, "text": " of a simple ResNet-50 model."}, {"start": 1283.14, "end": 1285.94, "text": " In our study, we use a training recipe"}, {"start": 1285.94, "end": 1290.5400000000002, "text": " that is close to DATES and Swin Transformers."}, {"start": 1290.5400000000002, "end": 1292.3000000000002, "text": " So not quite sure why they did this,"}, {"start": 1292.3000000000002, "end": 1295.38, "text": " because this should have the best training procedure"}, {"start": 1295.38, "end": 1296.5, "text": " for ResNet-50."}, {"start": 1296.5, "end": 1300.9, "text": " So I'm not sure why they're using training procedures"}, {"start": 1300.9, "end": 1302.02, "text": " from Transformers,"}, {"start": 1302.02, "end": 1304.58, "text": " probably because of the compatibility reasons,"}, {"start": 1304.58, "end": 1306.78, "text": " because they are later taking some design decisions"}, {"start": 1306.78, "end": 1308.86, "text": " from Transformers, and that's probably better."}, {"start": 1308.86, "end": 1312.42, "text": " The compatibility is, I guess, better,"}, {"start": 1312.42, "end": 1317.42, "text": " compared to just taking the training ideas from this paper."}, {"start": 1317.62, "end": 1319.22, "text": " So by itself, they say here,"}, {"start": 1319.22, "end": 1322.66, "text": " the enhanced training recipe increased the performance"}, {"start": 1322.66, "end": 1327.66, "text": " of the ResNet-50 model from 76.1 to 78.8,"}, {"start": 1328.26, "end": 1331.42, "text": " so plus 2.7% in top one accuracy,"}, {"start": 1331.42, "end": 1332.8600000000001, "text": " which is a huge boost."}, {"start": 1332.8600000000001, "end": 1334.2, "text": " So with that out of the way,"}, {"start": 1334.2, "end": 1337.7, "text": " let's see the modernization procedure they applied."}, {"start": 1337.7, "end": 1341.22, "text": " First, let me kind of dissect this diagram for you."}, {"start": 1341.22, "end": 1343.1200000000001, "text": " What we can see here is in the bottom here,"}, {"start": 1343.1200000000001, "end": 1344.9, "text": " we have the Swin-T Transformer,"}, {"start": 1344.9, "end": 1348.94, "text": " and in the upper part here, we have ResNet-50."}, {"start": 1348.94, "end": 1350.72, "text": " Just ignore the gray bars for now,"}, {"start": 1350.72, "end": 1353.98, "text": " because those correspond to these bigger alternatives,"}, {"start": 1353.98, "end": 1357.76, "text": " so to ResNet-200 and to Swin-B, which stands for bigger."}, {"start": 1358.74, "end": 1360.78, "text": " And let's focus on these blue bars,"}, {"start": 1360.78, "end": 1363.02, "text": " and let's first see this orange bar."}, {"start": 1363.02, "end": 1366.12, "text": " So this orange bar tells us that the top one accuracy,"}, {"start": 1366.12, "end": 1370.3799999999999, "text": " which is this horizontal axis here, is 81.3."}, {"start": 1370.3799999999999, "end": 1372.58, "text": " And this star here, not sure if you can see it,"}, {"start": 1372.58, "end": 1377.58, "text": " but it says that we have 4.5 gigaflops of computation"}, {"start": 1379.3, "end": 1380.7, "text": " for the Swin-T model."}, {"start": 1380.7, "end": 1384.1, "text": " Now, let's see what happened with ResNet."}, {"start": 1384.1, "end": 1385.42, "text": " What's the progression here?"}, {"start": 1385.42, "end": 1389.42, "text": " So we start with 78.8, which is the numbers they got"}, {"start": 1389.42, "end": 1391.98, "text": " with this improved training procedure."}, {"start": 1391.98, 
"end": 1393.94, "text": " So then they apply various techniques."}, {"start": 1393.94, "end": 1395.78, "text": " They organize those across five groups."}, {"start": 1395.78, "end": 1398.74, "text": " We have the macro design, we have the ResNext ideas,"}, {"start": 1398.74, "end": 1402.0600000000002, "text": " we have the inverted bottleneck, we have the large kernel,"}, {"start": 1402.0600000000002, "end": 1404.0600000000002, "text": " we have the micro design here."}, {"start": 1404.0600000000002, "end": 1406.3400000000001, "text": " And you can see that all of those start,"}, {"start": 1406.3400000000001, "end": 1409.46, "text": " and all of those contribute to the final performance,"}, {"start": 1409.46, "end": 1413.14, "text": " and we can see that the final model has outperformed,"}, {"start": 1413.14, "end": 1417.14, "text": " so all of these outperform basically this Swin-T Transformer."}, {"start": 1417.14, "end": 1420.42, "text": " And you can also see the stars here again denotes"}, {"start": 1420.42, "end": 1424.0600000000002, "text": " the gigaflops, the computation that's needed"}, {"start": 1424.0600000000002, "end": 1425.9, "text": " for a single forward pass."}, {"start": 1425.9, "end": 1428.18, "text": " You can see it's varying throughout these,"}, {"start": 1428.18, "end": 1430.0200000000002, "text": " when they're doing these ablations,"}, {"start": 1430.0200000000002, "end": 1434.0600000000002, "text": " but they roughly oscillate around 4.5,"}, {"start": 1434.0600000000002, "end": 1437.24, "text": " which means these are comparable in that sense."}, {"start": 1437.24, "end": 1438.8600000000001, "text": " Now I'm gonna focus in more detail"}, {"start": 1438.8600000000001, "end": 1441.0800000000002, "text": " on these first couple of techniques,"}, {"start": 1441.0800000000002, "end": 1443.8200000000002, "text": " and then I'm gonna slowly just scheme"}, {"start": 1443.8200000000002, "end": 1444.9, "text": " all of the other techniques,"}, {"start": 1444.9, "end": 1446.8200000000002, "text": " because there is a lot of details,"}, {"start": 1446.82, "end": 1448.96, "text": " and I'm gonna just kinda have to abstract"}, {"start": 1448.96, "end": 1450.72, "text": " some of those details away from you."}, {"start": 1450.72, "end": 1453.3, "text": " So let's start with this stage ratio here."}, {"start": 1453.3, "end": 1454.4399999999998, "text": " So what is stage ratio?"}, {"start": 1454.4399999999998, "end": 1455.6, "text": " What does that mean?"}, {"start": 1456.78, "end": 1458.78, "text": " They say here, Swin-T, on the other hand,"}, {"start": 1458.78, "end": 1459.96, "text": " follow the same principle,"}, {"start": 1459.96, "end": 1463.1, "text": " but with a slightly different stage compute ratio"}, {"start": 1463.1, "end": 1465.54, "text": " of one, one, three, one."}, {"start": 1465.54, "end": 1467.3799999999999, "text": " We're gonna see what this means in a second."}, {"start": 1467.3799999999999, "end": 1470.34, "text": " So for larger Swin-Transformers, the ratio is this one,"}, {"start": 1470.34, "end": 1473.28, "text": " and following the design, we adjust the number of blocks"}, {"start": 1473.28, "end": 1477.42, "text": " in each stage from three, four, six, three, in Resonant 50,"}, {"start": 1477.42, "end": 1479.06, "text": " to three, three, nine, three."}, {"start": 1479.06, "end": 1481.1399999999999, "text": " So you can see when you divide all of this by three,"}, {"start": 1481.1399999999999, "end": 1486.1399999999999, "text": " you get the same 
ratio as in the Swin-Transformer right here."}, {"start": 1486.1399999999999, "end": 1488.68, "text": " So the idea is basically they wanna have"}, {"start": 1488.68, "end": 1491.36, "text": " the amount of compute, the number of flops,"}, {"start": 1491.36, "end": 1496.04, "text": " across the model should have roughly the same ratio"}, {"start": 1496.04, "end": 1497.66, "text": " as the one in Swin-Transformer,"}, {"start": 1497.66, "end": 1499.2, "text": " because some of the other papers,"}, {"start": 1499.2, "end": 1500.5, "text": " they reference them here,"}, {"start": 1500.5, "end": 1503.34, "text": " show that that plays an important role"}, {"start": 1503.34, "end": 1506.56, "text": " in the design of novel architectures."}, {"start": 1506.56, "end": 1508.18, "text": " So let me show you some code here,"}, {"start": 1508.18, "end": 1510.88, "text": " and remember these numbers here, three, four, six, three."}, {"start": 1510.88, "end": 1513.66, "text": " So that's the number of blocks in each stage."}, {"start": 1513.66, "end": 1516.42, "text": " So basically, if you take a look at the,"}, {"start": 1516.42, "end": 1520.06, "text": " let me zoom in a little bit here, at the API here,"}, {"start": 1520.06, "end": 1521.56, "text": " when you're constructing Resonant 50,"}, {"start": 1521.56, "end": 1523.36, "text": " or Resonant 101, or whatnot,"}, {"start": 1523.36, "end": 1525.1, "text": " you always call this underlying,"}, {"start": 1525.1, "end": 1526.88, "text": " this is by torch code, by the way,"}, {"start": 1526.88, "end": 1530.02, "text": " you call this common API function,"}, {"start": 1530.02, "end": 1533.04, "text": " and then you pass different number of blocks per stage,"}, {"start": 1533.04, "end": 1535.46, "text": " and that's what determines how big the model is,"}, {"start": 1535.46, "end": 1538.6, "text": " how many layers it has, and how many parameters does it has."}, {"start": 1538.6, "end": 1542.54, "text": " So let me quickly show you this underscore Resonant function."}, {"start": 1542.54, "end": 1544.7, "text": " So it's here, and we're passing,"}, {"start": 1544.7, "end": 1548.96, "text": " so that sequence of numbers is named layers,"}, {"start": 1548.96, "end": 1552.96, "text": " and we pass layers to this Resonant class here."}, {"start": 1552.96, "end": 1555.98, "text": " So let me find the class, class Resonant."}, {"start": 1556.8799999999999, "end": 1557.8799999999999, "text": " Okay, here it is."}, {"start": 1557.8799999999999, "end": 1559.0, "text": " So it's passed as layers."}, {"start": 1559.0, "end": 1562.18, "text": " So now let me search for layers, layers,"}, {"start": 1562.18, "end": 1564.82, "text": " and basically you can see here that those numbers,"}, {"start": 1564.82, "end": 1567.28, "text": " so three, six, four, three, or something,"}, {"start": 1567.28, "end": 1572.28, "text": " are passed here to these make layer functions,"}, {"start": 1572.62, "end": 1577.34, "text": " and they effectively, what they do is they say,"}, {"start": 1577.34, "end": 1580.46, "text": " take this block and repeat it this many times."}, {"start": 1580.46, "end": 1582.76, "text": " So if we were to find the make layer function,"}, {"start": 1584.26, "end": 1586.1, "text": " I could probably just click and find,"}, {"start": 1586.1, "end": 1589.58, "text": " go by reference, but I'm being lazy here."}, {"start": 1589.58, "end": 1591.54, "text": " We can see here that here is the blocks,"}, {"start": 1591.54, "end": 1595.7199999999998, "text": " so 
let's find the blocks, whoops, the blocks variable."}, {"start": 1595.7199999999998, "end": 1598.82, "text": " Okay, blocks variable is called here,"}, {"start": 1598.82, "end": 1601.1, "text": " and you can see basically we have this layer append,"}, {"start": 1601.1, "end": 1602.7199999999998, "text": " and for the number of blocks,"}, {"start": 1602.7199999999998, "end": 1605.1399999999999, "text": " we're just going to be appending in this stage"}, {"start": 1606.02, "end": 1607.48, "text": " that amount of blocks."}, {"start": 1607.48, "end": 1611.1799999999998, "text": " So that's why we have, that's how they manage to regulate"}, {"start": 1611.1799999999998, "end": 1613.3799999999999, "text": " the amount of computation across each of the stage"}, {"start": 1613.38, "end": 1618.38, "text": " of this resonant and reflect mirror the same type of ratios"}, {"start": 1619.42, "end": 1622.3000000000002, "text": " that are present in the Swin transformer."}, {"start": 1622.3000000000002, "end": 1624.7, "text": " Cool, let me know whether you find this type of mixing"}, {"start": 1624.7, "end": 1629.16, "text": " of code and paper explanation useful or not."}, {"start": 1629.16, "end": 1630.0, "text": " Let's continue here."}, {"start": 1630.0, "end": 1633.74, "text": " So this design decision falls under this umbrella term"}, {"start": 1633.74, "end": 1635.5200000000002, "text": " of macro design."}, {"start": 1635.5200000000002, "end": 1637.0400000000002, "text": " You can understand why that is."}, {"start": 1637.0400000000002, "end": 1640.7600000000002, "text": " We are changing the macro structure of the architecture."}, {"start": 1640.7600000000002, "end": 1643.0800000000002, "text": " Then they have this changing stem to patchify,"}, {"start": 1643.08, "end": 1645.3, "text": " and they mentioned here that a common stem cell"}, {"start": 1645.3, "end": 1649.1, "text": " with will aggressively down sample the input images."}, {"start": 1649.1, "end": 1651.6, "text": " So that's something that's common in most of those older"}, {"start": 1651.6, "end": 1654.36, "text": " CNN architectures, and they say here that the stem cell"}, {"start": 1654.36, "end": 1657.26, "text": " in standard resonant contains a seven times seven"}, {"start": 1657.26, "end": 1659.8, "text": " convolution layer with try two,"}, {"start": 1659.8, "end": 1661.5, "text": " followed by a max pooling layer,"}, {"start": 1661.5, "end": 1665.54, "text": " which results in a forex down sampling of the input images,"}, {"start": 1665.54, "end": 1668.3799999999999, "text": " which isn't fairly aggressive down sampling."}, {"start": 1668.3799999999999, "end": 1671.46, "text": " They say here, we replaced the resonant style stem cell"}, {"start": 1671.46, "end": 1675.02, "text": " with a patchify layer implemented using a four times four"}, {"start": 1675.02, "end": 1677.18, "text": " strive for convolutional layer."}, {"start": 1677.18, "end": 1681.22, "text": " And that's how basically VAT like model works."}, {"start": 1681.22, "end": 1685.22, "text": " What you do is you can interpret this patchifying strategy."}, {"start": 1685.22, "end": 1686.8600000000001, "text": " Basically you have an image here."}, {"start": 1687.74, "end": 1691.5, "text": " And what they do is they group these pixels"}, {"start": 1691.5, "end": 1693.42, "text": " into these patches."}, {"start": 1693.42, "end": 1696.74, "text": " And then what they do is they do a linear projection"}, {"start": 1696.74, "end": 1700.26, "text": " of those 
pixels into some feature vector."}, {"start": 1700.26, "end": 1703.14, "text": " And you can model that by just thinking of this"}, {"start": 1703.14, "end": 1706.64, "text": " as if you had, let me change the color,"}, {"start": 1706.64, "end": 1708.5, "text": " as if you had like a kernel,"}, {"start": 1709.34, "end": 1713.62, "text": " that's let's say in this case, they said four by four."}, {"start": 1713.62, "end": 1715.14, "text": " So kernel that's four by four,"}, {"start": 1715.14, "end": 1719.46, "text": " and then you have a stride that has a size of four,"}, {"start": 1719.46, "end": 1721.42, "text": " which means the next step will be here,"}, {"start": 1721.42, "end": 1723.3, "text": " the next step will be here."}, {"start": 1723.3, "end": 1726.02, "text": " And basically, yeah, that's how you implement."}, {"start": 1726.02, "end": 1727.74, "text": " If you take a look at any code base,"}, {"start": 1727.74, "end": 1729.66, "text": " that's how you implement the patchify layer"}, {"start": 1729.66, "end": 1732.02, "text": " using a convolutional operation."}, {"start": 1732.02, "end": 1733.1000000000001, "text": " Cool."}, {"start": 1733.1000000000001, "end": 1734.3000000000002, "text": " So that's the second design decision."}, {"start": 1734.3000000000002, "end": 1737.1000000000001, "text": " Then they mentioned here Resnextify."}, {"start": 1738.26, "end": 1742.18, "text": " They take some ideas from the Resnext paper."}, {"start": 1742.18, "end": 1743.8600000000001, "text": " So they say here, more precisely,"}, {"start": 1743.8600000000001, "end": 1745.9, "text": " Resnext employs grouped convolution"}, {"start": 1745.9, "end": 1749.8200000000002, "text": " for the three times three conv layer in a bottleneck block."}, {"start": 1749.8200000000002, "end": 1751.7, "text": " As this significantly reduces the flops,"}, {"start": 1751.7, "end": 1753.66, "text": " the network width is expanded to compensate"}, {"start": 1753.66, "end": 1754.8200000000002, "text": " for the capacity loss."}, {"start": 1754.8200000000002, "end": 1757.16, "text": " In our case, we use depthwise convolution,"}, {"start": 1757.16, "end": 1759.1200000000001, "text": " a special case of grouped convolution"}, {"start": 1759.12, "end": 1762.02, "text": " where the number of groups equals the number of channels."}, {"start": 1762.02, "end": 1763.78, "text": " So a quick recap here,"}, {"start": 1763.78, "end": 1767.04, "text": " how the depthwise convolution works is the following."}, {"start": 1767.04, "end": 1768.76, "text": " So let's say you have an activation volume."}, {"start": 1768.76, "end": 1772.6999999999998, "text": " So that's the feature maps that comes out of your layer"}, {"start": 1772.6999999999998, "end": 1775.5, "text": " after you process and apply the activation functions."}, {"start": 1775.5, "end": 1776.6999999999998, "text": " You get the activation volume,"}, {"start": 1776.6999999999998, "end": 1778.5, "text": " which looks something like this."}, {"start": 1778.5, "end": 1782.4599999999998, "text": " Let me try and draw it here, something like this."}, {"start": 1782.46, "end": 1789.46, "text": " And so how your regular convolutions work is that,"}, {"start": 1790.26, "end": 1794.1000000000001, "text": " you make a kernel, you make like a filter here,"}, {"start": 1794.1000000000001, "end": 1797.14, "text": " which is maybe usually three times three"}, {"start": 1797.14, "end": 1800.26, "text": " in most of these older CNN architectures."}, {"start": 1800.26, "end": 
1802.78, "text": " And now what I do here is the following."}, {"start": 1802.78, "end": 1805.3, "text": " Let me just kind of draw it here roughly."}, {"start": 1805.3, "end": 1810.3, "text": " So what I do is the depth of this filter"}, {"start": 1810.3, "end": 1813.3, "text": " is the same as the depth of your activation volume."}, {"start": 1813.3, "end": 1815.3, "text": " So that's your regular convolution."}, {"start": 1815.3, "end": 1818.68, "text": " Now what depthwise convolution does is the following,"}, {"start": 1818.68, "end": 1821.3999999999999, "text": " it slices this activation volume,"}, {"start": 1821.3999999999999, "end": 1823.82, "text": " like you have a single feature map here,"}, {"start": 1823.82, "end": 1827.34, "text": " then you have a second feature map, et cetera, et cetera."}, {"start": 1827.34, "end": 1831.3799999999999, "text": " And what you do, you just take a filter"}, {"start": 1831.3799999999999, "end": 1834.62, "text": " that has this depth dimension is just one,"}, {"start": 1834.62, "end": 1838.9199999999998, "text": " and then you apply and apply that filter to this slice,"}, {"start": 1838.92, "end": 1841.02, "text": " and then you apply the second filter to the second slice,"}, {"start": 1841.02, "end": 1841.8600000000001, "text": " et cetera, et cetera."}, {"start": 1841.8600000000001, "end": 1843.94, "text": " So again, let me draw that here."}, {"start": 1843.94, "end": 1846.72, "text": " So the filter for the depthwise will be something like this."}, {"start": 1846.72, "end": 1850.42, "text": " So again, let's say it's three times three again,"}, {"start": 1850.42, "end": 1852.94, "text": " but the depth component is just one."}, {"start": 1852.94, "end": 1855.72, "text": " It's basically a single dimension basically."}, {"start": 1855.72, "end": 1858.5800000000002, "text": " Now there is a problem in using depthwise convolution,"}, {"start": 1858.5800000000002, "end": 1860.5800000000002, "text": " and that's that you break the correlation"}, {"start": 1860.5800000000002, "end": 1863.0600000000002, "text": " between flops and the throughput,"}, {"start": 1863.0600000000002, "end": 1866.8600000000001, "text": " which means that basically for the same amount of compute,"}, {"start": 1866.86, "end": 1871.86, "text": " because of some memory issues, the throughput,"}, {"start": 1872.2199999999998, "end": 1874.5, "text": " i.e. 
the number of images you can process in a second"}, {"start": 1874.5, "end": 1877.78, "text": " or whatever unit of time, drops down dramatically."}, {"start": 1877.78, "end": 1879.74, "text": " We're gonna address that a little bit later."}, {"start": 1879.74, "end": 1882.9199999999998, "text": " And they did show that the throughput is competitive"}, {"start": 1882.9199999999998, "end": 1885.2199999999998, "text": " and even better than Swin Transformers, so that's cool."}, {"start": 1885.2199999999998, "end": 1886.62, "text": " Now there are a lot more details here."}, {"start": 1886.62, "end": 1887.9799999999998, "text": " I'm gonna skip some of those."}, {"start": 1887.9799999999998, "end": 1890.1799999999998, "text": " They had the inverted block which was introduced."}, {"start": 1890.1799999999998, "end": 1891.6999999999998, "text": " I'm gonna actually cover this one."}, {"start": 1891.6999999999998, "end": 1894.54, "text": " So one important design in every transformer block"}, {"start": 1894.54, "end": 1897.1399999999999, "text": " is that it creates an inverted bottleneck."}, {"start": 1897.1399999999999, "end": 1899.6, "text": " i.e. the hidden dimension of the MLP block"}, {"start": 1899.6, "end": 1903.7, "text": " is four times wider than the input dimension."}, {"start": 1903.7, "end": 1906.06, "text": " And interestingly, this transformer design"}, {"start": 1906.06, "end": 1909.28, "text": " is connected to the inverted bottleneck design"}, {"start": 1909.28, "end": 1912.54, "text": " with an expansion ratio of four used in ComNets."}, {"start": 1912.54, "end": 1915.42, "text": " The idea was popularized by MobileNet V2."}, {"start": 1915.42, "end": 1918.7, "text": " So here we see again this cross-pollination from CNNs."}, {"start": 1918.7, "end": 1921.5, "text": " So taking ideas from CNNs and implementing them"}, {"start": 1921.5, "end": 1923.1399999999999, "text": " even in the original transformer."}, {"start": 1923.14, "end": 1925.42, "text": " So the idea was kinda informed"}, {"start": 1925.42, "end": 1928.98, "text": " by certain MobileNet architectures which came."}, {"start": 1928.98, "end": 1932.7, "text": " I think they were developed prior to 2017 obviously."}, {"start": 1932.7, "end": 1936.5400000000002, "text": " That's when the transformer paper originally was published."}, {"start": 1936.5400000000002, "end": 1939.0600000000002, "text": " So what I said here, if you did not understand,"}, {"start": 1939.0600000000002, "end": 1940.5800000000002, "text": " and it's just a basic understanding"}, {"start": 1940.5800000000002, "end": 1944.72, "text": " of how transformers work, at least the usual architectures."}, {"start": 1944.72, "end": 1948.3000000000002, "text": " So you have your MSA module here."}, {"start": 1948.3000000000002, "end": 1950.4, "text": " So this is your MSA module."}, {"start": 1950.4, "end": 1952.0800000000002, "text": " You have your input tokens."}, {"start": 1952.08, "end": 1954.8, "text": " You have your output tokens."}, {"start": 1954.8, "end": 1956.56, "text": " And then you apply an MLP."}, {"start": 1956.56, "end": 1958.6799999999998, "text": " So each of these, this is a feature vector"}, {"start": 1958.6799999999998, "end": 1961.6799999999998, "text": " that has certain dimension like D."}, {"start": 1961.6799999999998, "end": 1966.56, "text": " And what you apply MLP basically sharing the weights."}, {"start": 1966.56, "end": 1970.4399999999998, "text": " So you just kinda do a for loop across these feature vectors"}, 
{"start": 1970.4399999999998, "end": 1975.4399999999998, "text": " and you apply the MLP which will map from D to 4D."}, {"start": 1976.32, "end": 1979.24, "text": " So this feature vector is gonna be 4X longer."}, {"start": 1979.24, "end": 1982.24, "text": " So one, two, three, four compared to this one."}, {"start": 1982.24, "end": 1987.24, "text": " And then they again map it down into D dimensions."}, {"start": 1987.48, "end": 1989.84, "text": " So that's what's referred to as a bottleneck layer."}, {"start": 1989.84, "end": 1992.84, "text": " You can kinda imagine that you start from D"}, {"start": 1992.84, "end": 1997.44, "text": " and then you basically upscale it in a way to 4D"}, {"start": 1997.44, "end": 1999.58, "text": " and then you downscale it to D."}, {"start": 1999.58, "end": 2000.44, "text": " So something like this."}, {"start": 2000.44, "end": 2003.2, "text": " That's why it's called a bottleneck layer."}, {"start": 2003.2, "end": 2005.88, "text": " Sorry, I meant to say inverted bottleneck the whole time."}, {"start": 2005.88, "end": 2008.08, "text": " Not bottleneck layer because that one"}, {"start": 2008.08, "end": 2010.4399999999998, "text": " would look something like this."}, {"start": 2010.4399999999998, "end": 2012.78, "text": " This would be your bottleneck layer."}, {"start": 2012.78, "end": 2016.1599999999999, "text": " Okay, so this is a fun fact."}, {"start": 2016.1599999999999, "end": 2017.8, "text": " I wasn't aware of that."}, {"start": 2017.8, "end": 2019.24, "text": " Now here they just showed the progression"}, {"start": 2019.24, "end": 2021.96, "text": " of how they were additionally manipulating the modules."}, {"start": 2021.96, "end": 2023.56, "text": " And I think there is a small error here."}, {"start": 2023.56, "end": 2028.56, "text": " This should be I think 384 mapped down to 96"}, {"start": 2030.52, "end": 2031.6399999999999, "text": " and not the other way around."}, {"start": 2031.6399999999999, "end": 2034.24, "text": " But yeah, anyways, that's a small detail."}, {"start": 2034.24, "end": 2036.58, "text": " Then they try and play with kernel sizes"}, {"start": 2036.58, "end": 2039.04, "text": " because as I said, usually you use three by three."}, {"start": 2039.04, "end": 2041.56, "text": " They figure out that seven by seven works the best"}, {"start": 2041.56, "end": 2043.0, "text": " and that's it."}, {"start": 2043.0, "end": 2044.8799999999999, "text": " Here they compare the Swin transformer block"}, {"start": 2044.8799999999999, "end": 2047.58, "text": " with the Resonant block with the final Comnex block."}, {"start": 2047.58, "end": 2049.68, "text": " A couple more things they experimented with"}, {"start": 2049.68, "end": 2051.48, "text": " are the following."}, {"start": 2051.48, "end": 2055.56, "text": " You can see that they are using in the Comnex block,"}, {"start": 2055.56, "end": 2056.96, "text": " they're using Jell-U."}, {"start": 2056.96, "end": 2060.16, "text": " So that's Gaussian error linear unit or something"}, {"start": 2060.16, "end": 2061.64, "text": " instead of Relu."}, {"start": 2061.64, "end": 2064.7599999999998, "text": " They have layer norm instead of batch norm."}, {"start": 2064.76, "end": 2067.76, "text": " They have less normalization layers in general"}, {"start": 2067.76, "end": 2069.38, "text": " and less activation functions."}, {"start": 2069.38, "end": 2072.28, "text": " You can see they apply the activation function only here."}, {"start": 2072.28, "end": 2073.76, "text": " Let me change 
the color here."}, {"start": 2073.76, "end": 2075.5600000000004, "text": " So they apply it only here,"}, {"start": 2075.5600000000004, "end": 2077.8, "text": " whereas here they apply it in multiple locations"}, {"start": 2077.8, "end": 2079.6800000000003, "text": " and all of that and they have, as you can see here,"}, {"start": 2079.6800000000003, "end": 2082.28, "text": " a bunch of batch norm layers, batch norm, batch norm."}, {"start": 2082.28, "end": 2084.48, "text": " Here they only have a single layer norm."}, {"start": 2084.48, "end": 2086.1200000000003, "text": " Okay, that's pretty much the level of detail"}, {"start": 2086.1200000000003, "end": 2088.48, "text": " I can tolerate for one video."}, {"start": 2088.48, "end": 2090.92, "text": " So let me just slowly wrap this up."}, {"start": 2090.92, "end": 2092.96, "text": " They say that our Comnex model"}, {"start": 2092.96, "end": 2095.96, "text": " has approximately the same number of flops,"}, {"start": 2095.96, "end": 2098.5, "text": " parameters, same throughput and memory use"}, {"start": 2098.5, "end": 2099.7200000000003, "text": " as the Swin transformer,"}, {"start": 2099.7200000000003, "end": 2101.7200000000003, "text": " but does not require specialized modules"}, {"start": 2101.7200000000003, "end": 2104.12, "text": " such as shift the window, the tension,"}, {"start": 2104.12, "end": 2105.68, "text": " or relative position biases,"}, {"start": 2105.68, "end": 2108.4, "text": " which is very cool because they get to have"}, {"start": 2108.4, "end": 2110.26, "text": " a simpler architecture."}, {"start": 2110.26, "end": 2111.76, "text": " Finally, here are the results."}, {"start": 2112.76, "end": 2116.36, "text": " They basically show that on image classification"}, {"start": 2116.36, "end": 2118.36, "text": " for all of various sizes,"}, {"start": 2118.36, "end": 2121.36, "text": " so both the small, the big one variant,"}, {"start": 2121.36, "end": 2123.48, "text": " for different resolutions,"}, {"start": 2123.48, "end": 2125.2400000000002, "text": " they outperform, as you can see here,"}, {"start": 2125.2400000000002, "end": 2126.2000000000003, "text": " they outperform Swin."}, {"start": 2126.2000000000003, "end": 2127.92, "text": " So let's focus on a single pair here."}, {"start": 2127.92, "end": 2131.48, "text": " Let's take, for example, Swin B and Comnex B"}, {"start": 2131.48, "end": 2134.48, "text": " working on 384 resolution."}, {"start": 2134.48, "end": 2136.2000000000003, "text": " They have similar number of parameters."}, {"start": 2136.2000000000003, "end": 2138.6400000000003, "text": " They have similar flops."}, {"start": 2138.6400000000003, "end": 2140.6, "text": " Swin even has a bit more flops here"}, {"start": 2140.6, "end": 2143.2400000000002, "text": " and you can see that throughput is lower for Swin"}, {"start": 2143.2400000000002, "end": 2146.1200000000003, "text": " and you can see that the accuracy is better for Comnex."}, {"start": 2146.1200000000003, "end": 2148.44, "text": " So basically across all of these dimensions,"}, {"start": 2148.44, "end": 2151.56, "text": " it outperforms the Swin model, which is very cool."}, {"start": 2152.4, "end": 2156.52, "text": " They also showed that basically increasing the model,"}, {"start": 2156.52, "end": 2158.36, "text": " as we already mentioned,"}, {"start": 2158.36, "end": 2161.12, "text": " leads to increase in performance,"}, {"start": 2161.12, "end": 2163.5, "text": " which is something, a desirable thing."}, {"start": 2163.5, "end": 
2165.78, "text": " You wanna be able to scale the architecture,"}, {"start": 2165.78, "end": 2168.16, "text": " the size, and to get additional performance."}, {"start": 2169.98, "end": 2172.48, "text": " Here, they additionally show the models"}, {"start": 2172.48, "end": 2175.86, "text": " pre-trained on the ImageNet 22K."}, {"start": 2175.86, "end": 2179.2400000000002, "text": " And they showed that even this big data regime,"}, {"start": 2179.2400000000002, "end": 2184.2400000000002, "text": " where vision transformers are supposed to be superior,"}, {"start": 2185.08, "end": 2188.2000000000003, "text": " they are not only comparable, but they outperform."}, {"start": 2188.2000000000003, "end": 2190.26, "text": " We again, focus on one of these pairs."}, {"start": 2190.26, "end": 2191.58, "text": " Let me take this one."}, {"start": 2191.58, "end": 2194.1200000000003, "text": " They have similar number of parameters again,"}, {"start": 2194.1200000000003, "end": 2195.54, "text": " similar flops."}, {"start": 2195.54, "end": 2200.38, "text": " Again, actually, Swin takes more computation"}, {"start": 2200.38, "end": 2204.1200000000003, "text": " and it has lower throughput and it has lower accuracy."}, {"start": 2204.12, "end": 2205.96, "text": " So again, it outperforms a Swin,"}, {"start": 2207.12, "end": 2210.12, "text": " even in the big data regime, which is fairly important."}, {"start": 2210.12, "end": 2213.3199999999997, "text": " And wrapping this up, they also showed that"}, {"start": 2213.3199999999997, "end": 2218.16, "text": " on object detection tasks and semantic segmentation tasks,"}, {"start": 2218.16, "end": 2220.8399999999997, "text": " Swin transformer is again,"}, {"start": 2220.8399999999997, "end": 2222.3199999999997, "text": " you'll see fairly similar results"}, {"start": 2222.3199999999997, "end": 2224.3199999999997, "text": " as here on the image classification tests."}, {"start": 2224.3199999999997, "end": 2227.0, "text": " So I don't wanna even dig into too much details."}, {"start": 2227.0, "end": 2228.56, "text": " The last thing I wanna mention here is,"}, {"start": 2228.56, "end": 2229.88, "text": " so under similar flops,"}, {"start": 2229.88, "end": 2232.7599999999998, "text": " models with depth-wise convolutions are known to be slower"}, {"start": 2232.76, "end": 2234.44, "text": " and consume more memory than ComNets"}, {"start": 2234.44, "end": 2236.5200000000004, "text": " with only dense convolutions."}, {"start": 2236.5200000000004, "end": 2239.1200000000003, "text": " It is natural to ask whether the design of ComNets"}, {"start": 2239.1200000000003, "end": 2241.4, "text": " will render it practically inefficient."}, {"start": 2241.4, "end": 2242.92, "text": " As demonstrated throughout the paper,"}, {"start": 2242.92, "end": 2245.32, "text": " the inference throughputs of ComNets are comparable"}, {"start": 2245.32, "end": 2248.2400000000002, "text": " to or exceed that of Swin transformers."}, {"start": 2248.2400000000002, "end": 2249.98, "text": " And that's something I was worried about,"}, {"start": 2249.98, "end": 2253.1600000000003, "text": " but it turns out that the throughput is actually kept"}, {"start": 2253.1600000000003, "end": 2255.2400000000002, "text": " on a healthy level."}, {"start": 2256.1200000000003, "end": 2257.2400000000002, "text": " That's it."}, {"start": 2257.2400000000002, "end": 2260.92, "text": " I wanna wrap this video by just mentioning one thing."}, {"start": 2260.92, "end": 2263.98, "text": " And it's maybe a 
semi-rend, if that's even a word."}, {"start": 2265.88, "end": 2268.06, "text": " So I just kinda skimmed through the references"}, {"start": 2268.06, "end": 2271.92, "text": " and I didn't see Schmidt-Huber mentioned anywhere."}, {"start": 2271.92, "end": 2273.28, "text": " And it's kinda sad."}, {"start": 2273.28, "end": 2274.76, "text": " I recently watched this video"}, {"start": 2274.76, "end": 2277.44, "text": " that Schmidt-Huber published to his YouTube channel"}, {"start": 2277.44, "end": 2281.44, "text": " where he explained the evolution of these earlier models."}, {"start": 2281.44, "end": 2282.6800000000003, "text": " And it actually turned out,"}, {"start": 2282.6800000000003, "end": 2284.36, "text": " and I mentioned this in the beginning of this video,"}, {"start": 2284.36, "end": 2286.86, "text": " that prior to AlexNet,"}, {"start": 2286.86, "end": 2289.36, "text": " there was this model called DanNet."}, {"start": 2289.36, "end": 2293.08, "text": " So it's spelled like this, I think, DanNet."}, {"start": 2293.08, "end": 2296.76, "text": " And that model was the first convolutional neural network"}, {"start": 2296.76, "end": 2298.88, "text": " that showed superhuman performance"}, {"start": 2298.88, "end": 2302.2200000000003, "text": " on I think four or five vision benchmarks,"}, {"start": 2302.2200000000003, "end": 2304.96, "text": " although those benchmarks were arguably way smaller"}, {"start": 2304.96, "end": 2306.36, "text": " compared to ImageNet."}, {"start": 2306.36, "end": 2309.52, "text": " It's kinda sad that DanNet got censored out"}, {"start": 2309.52, "end": 2311.2400000000002, "text": " of the AI history,"}, {"start": 2311.2400000000002, "end": 2314.6800000000003, "text": " even though it was such an important model back in 2011."}, {"start": 2314.6800000000003, "end": 2317.1200000000003, "text": " So yeah, I encourage you to check out Schmidt-Huber's video"}, {"start": 2317.12, "end": 2319.68, "text": " and let me know your thoughts on this topic as well."}, {"start": 2319.68, "end": 2321.6, "text": " Cool, let's do a quick summary here"}, {"start": 2321.6, "end": 2323.52, "text": " what we've seen in this video."}, {"start": 2323.52, "end": 2326.04, "text": " So basically, we've seen this convergence going on"}, {"start": 2326.04, "end": 2329.0, "text": " between vision transformers on one hand side"}, {"start": 2329.0, "end": 2331.96, "text": " with Swin transformer being a representative"}, {"start": 2331.96, "end": 2333.56, "text": " that was used in this paper."}, {"start": 2333.56, "end": 2334.7599999999998, "text": " And on the other hand side,"}, {"start": 2334.7599999999998, "end": 2338.7599999999998, "text": " we've seen ComNets being rejuvenated"}, {"start": 2338.7599999999998, "end": 2340.6, "text": " with novel training strategies"}, {"start": 2340.6, "end": 2345.12, "text": " such as the ones from the ResNet Strikes Again paper"}, {"start": 2345.12, "end": 2346.88, "text": " or from ComNEXT paper."}, {"start": 2346.88, "end": 2350.0, "text": " We've also seen the results from the EfficientNet V2 paper,"}, {"start": 2350.0, "end": 2352.88, "text": " which even seem to outperform ComNEXT models."}, {"start": 2352.88, "end": 2354.48, "text": " So it would be a fun comparison"}, {"start": 2354.48, "end": 2357.12, "text": " to compare them across multiple benchmarks"}, {"start": 2357.12, "end": 2357.96, "text": " and stuff like that."}, {"start": 2357.96, "end": 2361.04, "text": " In a nutshell, it seems that convolutional priors"}, {"start": 
2361.04, "end": 2365.7200000000003, "text": " are very good and hard to beat priors for natural images."}, {"start": 2365.7200000000003, "end": 2369.0, "text": " And I'm very excited to see"}, {"start": 2369.0, "end": 2372.0, "text": " where these types of hybrid models will take us"}, {"start": 2372.0, "end": 2373.92, "text": " and whether we're gonna have some novel ideas"}, {"start": 2373.92, "end": 2376.52, "text": " that are going to boost the performance"}, {"start": 2376.52, "end": 2379.92, "text": " of these visual perception models even further."}, {"start": 2379.92, "end": 2382.0, "text": " Anyways, if you found this video useful"}, {"start": 2382.0, "end": 2384.6, "text": " with all of these semi-rants,"}, {"start": 2384.6, "end": 2386.84, "text": " consider subscribing, sharing the video out"}, {"start": 2386.84, "end": 2389.6, "text": " and also do join the Discord community."}, {"start": 2389.6, "end": 2392.4, "text": " I'll just link it down in the video description as always."}, {"start": 2392.4, "end": 2394.64, "text": " So until next time, bye-bye."}, {"start": 2394.64, "end": 2401.12, "text": " Thank you so much for watching,"}, {"start": 2401.12, "end": 2403.18, "text": " you"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=5eUSmJvK8WA
Machine Learning with Flax - From Zero to Hero
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE ❤️ Deepnote referral link: https://deepnote.com/referral?token=823d18856ad5 - get 20h of free "Pro machine" + support the channel at 0 cost to you! ❤️ In this video I cover Flax - a JAX-based machine learning library. It's a part of my machine learning with JAX series of videos! You'll learn everything you need to get you started building ML models in Flax. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Flax notebook (do follow along!): https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_4_Flax_Zero2Hero_Colab.ipynb ✅ Flax GitHub: https://github.com/google/flax ✅ Flax docs: https://flax.readthedocs.io/en/latest/ ✅ HuggingFace Flax community week: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro - Flax is performant and reproducible 00:02:00 Deepnote walk-through (sponsored) 00:04:57 Flax basics 00:14:05 Flax vs Haiku 00:17:25 Benchmarking Flax 00:18:28 Linear regression toy example 00:27:05 Introducing Optax (Adam state example) 00:33:15 Creating custom models 00:39:30 self.param example 00:45:55 self.variable example 00:57:15 Handling dropout, BatchNorm, etc. 01:02:25 CNN on MNIST example 01:11:00 TrainState source code 01:13:00 CNN dropout modification 01:15:05 Outro and summary ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #flax #jax #deeplearning
What's cracking you guys? In this video I'm resuming my Jax series of videos and in this one I'm gonna cover Flex. Basically a neural network library built on top of Jax and you probably are already aware of frameworks such as TensorFlow, Keras, PyTorch. So then like why is there a need for yet another deep learning framework? And the answer would be like twofold probably. One of the most important things I think is the performance component. We're gonna later see some numbers on certain benchmarks and Flex kinda was the best, the fastest. I.e. in general like Jax is faster than the other alternatives. And B. you get better reproducibility because of the way that Jax and thus Flex and some other frameworks such as Haiku which is also built on top of Jax. How they handle state. And basically because they're using the functional programming paradigm, they handle state externally. They deal with pure functions and so basically even the random number generators handle state externally and all of that leads to better reproducibility. So those are some whys behind learning Flex and yeah. So before you start watching this video you should have some knowledge of Jax already and if you don't know what Jax is, if you've never done any Jax, I strongly encourage you to check out my Machine Learning with Jax series of videos. You can see here the first one from zero to hero which is going to teach you the basics. Then we have from hero to hero pro plus which is going to go beyond and teach you like a nitty-gritty details of Jax in general. And finally I have a video where I'm covering where I'm coding a neural network from scratch in Pure Jax. So yeah, if you don't know anything about Jax, maybe go and check out at least the tutorial number one before you watch this one. Okay, now you maybe notice something different in this video. I'm not using colab anymore and what this is is deep note and I first want to give up like a huge thank you to the deep note team for deciding to sponsor this video. So I want to briefly mention that like I have so far decided to decline like multiple sponsorship opportunities and the reason is I really want to promote something that I truly believe in myself. So that means I should either use the product like on a daily or on a weekly basis myself or I think that the product has a lot of value and deep note definitely checks the marks. And so yeah, here I am. So what's the thing with deep note? If you take a look at it, it looks like a regular Jupyter notebook like your colab notebook. So what's the difference? So you can see here like you've got your markdown cells. You can click shift enter and render the markdown cell. You have the code cells. Well, the one of the most important things about deep note is the collaborative feature. So that basically means you can share this notebook with multiple collaborators with a team and you can real time collaborate on the notebook and you can basically see other people like typing real time commenting, etc. It's a very good collaborative tool. And unfortunately, I cannot show you right now those features because I'm so here. Secondly, it integrates which bunch of different data sources you can check out those yourself. But like drive, Amazon s3, Redshift, BigQuery, a bunch of other applications with GitHub as well. They also provide templates which can get you started like you can maybe pick this A B testing. Let me show you this one. So it's got a very cool widgets, cool dashboards, and you can even create deep note apps. 
So let me show you briefly how this thing looks like. You can see there is like a lot of these widgets. You can kind of create these interactive plots. And if you go and hit this publishing editor, you can even go and create a dashboard. And so that means no code, which means you can kind of can basically use this as a data science notebook and present the results to your like business partners or whoever is not tech savvy, but wants to see the results and get some insights. So yeah, that's very cool. So let me see what else I think I've covered. The main things you can check it out in your own pace. You've even got terminals. You can kind of edit the environment if for whatever reason you need to do that. You can you can track the common history of the whole team. In general, the whole history of what happened is fairly nicely tracked. So and finally, let me get back to my notebook. You can see I've installed some of the packages here. You don't have to do that. You can create a fully custom Docker container. They have a support for that. I even created some of the Docker images myself here, but I won't be using that. I'm going to be using this 3.7, which is the same one. I recommend you use if you want to start with this specific notebook. And finally, I've got a referral link for you guys down in the video description. If you want to support the channel and also if you want to get 20 additional hours of these pro machines on deep node, do click on the referral link and do check out deep node. So so yeah, let's get started. The point of this video is to learn to get some basic understanding of flex so that you can build train your own neural networks and then you can take it from there and start reading the documentation, which is probably the best source of information right now because the community is not as strong as like say pie torts. Even though it's growing and it's being adopted by hugging face, they even had a community week where they were coded model in flex. I'm going to link some of those in the in the video description so you can check those out. But yeah, let's get started here. Basically, I'm going to import here some some some like Jax and certain libraries, which is non-pie from Jax. So Jax, Numbay, and then we're going to import the flex, which is the library we're going to learn today. And it's a it's a library developed by Google Research, in particular, Google Brain, and it's a the name itself is a contraction between like you have flexibility, you have Jax, and then you kind of get flex very creative. I'm going to import some functions here. They have this leaning API, which is I think the second iteration of their API. And you can see that the convention is to import the API itself as an N, which you may be familiar with if you come from the pie torch background. It's a fairly common notation. Okay, train state is just a wrapper function that's going to help us make things a bit cleaner. We're going to see that in a moment. Yeah, I think I mentioned Haiku. It's it's also like a library built on top of Jax that comes from DeepMind. And the only reason I have it here is for comparison purposes. So we won't be learning Haiku in this video, although I'll cover it in the next one. So stay tuned for that. I'm importing Optex here, which is like another library built on top of Jax for Optimizers. And the rationale here is the following. You want to have in this Jax ecosystem, every one of these libraries takes care of a specific vertical. 
So you have Optax for optimizers, you've got JAX, you've got Flax and Haiku for neural network building and construction. You've got Chex for testing the networks. You've got libraries for reinforcement learning, libraries for graph neural networks, etc. So that's kind of the paradigm, the mindset of the JAX ecosystem, so to speak. Finally, I'm going to use the PyTorch data loading functionality here, because if you recall from my JAX tutorials, they did not want to reinvent the wheel here. They think that data loading functionality is already good, and so they're just utilizing either PyTorch or TensorFlow datasets, etc. I'm going to use PyTorch because that's what I'm the most familiar with. Finally, I'm importing some functions here and some common modules. I already ran these cells, yeah, just to save some time. But now starting from here, we're going to run every single cell and see what is going on. So let's start by building the most simple model, which is just a single feed-forward layer. You can also think about it as a linear regression because we don't have any nonlinear activation functions, so this is just a linear regression model. So let me run this. You just click shift enter, the same as in Colab. And the notation is dense instead of linear. And you can see that the Dense class inherits from this Module class, which is analogous to PyTorch. And the idea is, even though this is a functional programming environment at its core, they built an object-oriented paradigm wrapper around it. And that's what Flax is, and that's what Haiku is. So you as a programmer keep programming with your usual mental model, the object-oriented one. OK, let's continue here. So let's see how we can do inference using this model. And the answer would be two steps: we've got this init function and we've got the apply function. The point of init is to initialize the initial parameters because, as you recall, JAX, and thus Flax by the transitivity property, is handling state externally. So that's why we need init and apply. And let's see what we do here. Basically, hopefully you can see the text reasonably well. I'm going to zoom in a little bit just in case. The UI is a bit uglier, but hopefully you'll see the text, which is the point here. OK, we have some random seed here, your regular JAX thing: we are externally handling the state of the random number generator and we split it into two keys. Then we instantiate a random data point here using key one; it's just a 10-dimensional vector. And then what we do is we take the model, we call this init function and we pass the key. And the key is needed because initialization itself is a random process, so we want a key for reproducibility reasons. We pass the input X and basically we get back the parameters, and also the output in the case of using this init_with_output function here, not that it's that important; I'm going to show that in a second. But yeah, then we can visualize the shape of the parameters and see that it matches our expectation for a simple linear model. So let me run the cell and you can see here that the output is exactly what we expect. The bias term has dimensionality five because, as we called it here, we have five output neurons, and we have a 10 by 5 matrix for the weight, for the kernel.
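To make the init/apply pattern concrete, here is a minimal sketch along the lines of the walkthrough above; the 10-dimensional input and the 5 output features follow the description, while the seed and variable names are just illustrative:

import jax
import flax.linen as nn

model = nn.Dense(features=5)                     # "dense" instead of "linear"

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key1, (10,))               # a single 10-dimensional data point

# init needs a PRNG key because initialization itself is a random process;
# it returns a FrozenDict: {'params': {'kernel': (10, 5), 'bias': (5,)}}.
params = model.init(key2, x)
y = model.apply(params, x)                       # inference is always params + inputs, never model(x)

print(jax.tree_util.tree_map(lambda p: p.shape, params))
print(y.shape)                                   # (5,)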
And you can always a couple of things here. So first is the automatic shape inference because I haven't specified 10 anywhere here. I've just specified the output dimensionality. And so the shape is automatically inferred through this input variable X. So that's what happened there. The second thing you can notice here is this immutable structure because as you recall again, Jax loves dealing with immutable objects. So what flex has done here is just made it easier for Jax to not introduce any box here by returning immutable structures. So that's why we have the frozen dict here. Finally, I'm using in it with output here, but you usually use just in it. You don't care about the output initially. So I can just run in it here and then I'm going to delete the Y here. I'm going to comment this out and we'll get the same result. So yeah, usually you'll see the in it function again. To recap here, what this thing does here is it's iterating through the supremes of pie tree and a pie tree object, which is a term from from Jax again. And basically you're iterating. You're kind of traversing the tree and for every single leaf you're applying this function here, which means you'll be taking like, let's say you have bias and you have some like a Jax lump array in there. You're going to take it. You're going to take its shape and you're going to change. You're basically going to map the tree of parameters into tree of shapes. So that's what you what you get here. So yeah, just a small, small recap there. Okay, so that should be fairly familiar. So the new thing is the in it function and the second thing is this apply function. So let's see what happens here. So we have model we call apply and we pass the parameters and the input X. So this is the only thing that you have to kind of that you have to adjust your mental model to this new paradigm and it may be very weird initially, but like believe me, I started playing with Jax in November last year 2021 and I'm already feeling very, very comfortable with it with this much. I feel much better than when I first encountered this this kind of signature. So you call apply you get the output and we can run this and get the results as expected five outputs here. So again, you cannot call the model like this. So this is what you're used to in other frameworks, but here this is not going to work. It's going to throw an exception. We're going to catch it and print something can call compact methods on unbound modules. So the details themselves are not that important here. Okay, so that's the very, very basics of flex. Now I want to do a small coding exercise here and contrast flex with Haiku and in my personal opinion Haiku is a bit cleaner than flex. I like it better me personally, although it may happen the flex is more powerful because of the additional like notation differences you'll see soon see. So yeah, I'm not very strongly opinionated about this, but so far for me it seems Haiku is even nicer than flex even though flex is way more popular in the community. I mentioned hugging face community week. I mentioned they have even have like 5,000 plus models on hugging face hub, which is fairly, fairly nice amount of models for a relatively new framework compared to TensorFlow or like PyTorch. So let me show you how you would do the same thing in Haiku now. It's going to be very brief. So let's create a model. So Haiku uses this HK as a, as a, as a like a notation then linear and then five. Now if I were to run this, it's going to fail. 
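For reference, the working Haiku version that the next few steps arrive at looks roughly like this; a minimal sketch with my own variable names, not necessarily the notebook's exact cell:

import haiku as hk
import jax

# In Haiku the module must be wrapped in hk.transform to get the init/apply pair;
# calling hk.Linear(5) directly outside of a transform raises an error.
model = hk.transform(lambda x: hk.Linear(5)(x))

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key1, (10,))

params = model.init(key2, x)
y = model.apply(params, None, x)    # None, because this forward pass has no stochasticity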
So let me try and run this. And as you can see here, all of these modules must be initialized inside an hk.transform function. So the trick is, Haiku is very explicit about this: when you take a function like linear, in order to get the init and apply, so it's using the same notation as Flax, you have to pass it into this hk.transform function. So let me do that, transform, and what we do here, we create a lambda, an anonymous function, and we just simply call this linear model. And that's it. Now this will work. So if I were to run this, it will not fail. But now let me show you what else happens. So let me just copy paste these parts here because they remain the same, the seeds and the inputs will remain the same. Now let me initialize the parameters. So we call model init again, we pass the key, we pass the X and we get parameters. So so far so good, it looks very, very similar to Flax. And finally, let's get the output by calling model apply. We pass the parameters here, we pass None because we don't have any stochasticity in the forward pass. And for now it may look as if Haiku is even more complex and kind of uglier in notation compared to Flax, but that's not the case, you'll see that in the next video. So you'd do it like this. And by the way, I made a small mistake here, you don't want to reuse keys, so this should be key two. So I should be able to run this right now and everything should work. So this is Haiku, very similar, you just have this transform function, and yeah, so that's cool. I had the solutions here, so let me show this code, and hopefully this looks the same. So yeah, we have the model, we form the data here, we call init, we call apply and we print. And again, I forgot to mention this: it basically just shows that Haiku follows the same paradigm as Flax, it's inheriting from this Module class as well. So let me just hide this code again, hide the code, and that's it. Cool, that was section number one. Hopefully you're not too overwhelmed, because as I said, you will have to get accustomed to this functional programming paradigm, but yeah, it gets easier after a while. So as I said, Flax and Haiku offer a performance boost. Here are some numbers from the Hugging Face team. You can see that using a single GPU and using Flax you get certain improvements compared to PyTorch. I think all of these are NLP tasks, so you have CoLA, you have the natural language inference task, you have some semantic task, I guess. So in any case, you get fairly nice speed improvements here, and in total you can see that you've saved more than an hour using a single GPU, which is not a small thing, especially when you scale this up to large language models. Okay, let's get going and let's introduce a simple toy example where we're going to form a two-dimensional dataset and try to train a simple linear regression model using the nn.Dense model, which is an instantiation of the linear regression model. So I'm going to have 150 data points, the X dimension is two, and for the output we are regressing just a scalar, to make things plottable, if that's even a word, because yeah, if we had multiple output dimensions here, I wouldn't be able to plot it, I'd have to do some dimensionality reduction.
So I'm going to skip that. Again, we're going to form some keys here being explicit about the random state as usual. We are going to form the ground truth weights and biases here. So basically we'll be trying to learn this weight and this bias later during the training. And now what we do here is we form a structure. That's, that's something that flex is, is used to. So you have the freeze is supposed to create a frozen dict, a frozen dictionary out of the normal Python native dictionary here. As you can recall from, from this section, we had frozen dict here. So let me get back here. And the structure is such that we have this params collection as flex calls it. And then we have bias B and kernel W and that's it. Now we, we've used this ground truth model and generate the data. So again, we split the key, which we created here, we split it and we get the key that we're going to use to generate the axis, the domain of our data. So we have 150 samples of, so 150 to two doubles. So that means just 150 two dimensional data points. And then we're going to use it, use the ground truth model and regress the, the YS here. And then we're just going to do a superposition of noise here using this amplitude. That's pretty much it. Let me run this now. And you can see the shapes, 152 and 151. And now let's plot the, the, the data set we just created. And you can see it here. Now I was lazy and so I didn't try and create an interactive plot here. And so let's be smart about it. And let's try and visualize where this data lies in. And you probably already know the answer. If not, let's do a simple exercise and try to approach it. So first one is we know how we created, how we generated the data. So basically what we've done here is we had Y is equal to W X plus some bias. And because X is two dimensional, this basically turns into W one X one plus W two X two plus B. And you can see that this is basically your, your analytical formulation of a plane, of a 2D plane embedded into 3D space. So that would be one approach. The second approach is data driven as a data driven. I mean, what I'm going to do here is increase from 150 to 1500. Okay, let's run this cell now and see what we get. So we get 1500 data points and let's run this again. And hopefully now you can see that it lies, it lies on the, on the 2D plane here. And yeah, the reason we added noise, hopefully that's, that's clear enough is to kind of hide the actual, the ground to the, the actual model that, that lies in the background that generated this data. And now we're going to try next up to, to kind of like a regress and infer the model that generated the data. So first let's create this, this loss function. This make MSC is just going to return this MSC loss function, which is a jitted version of the MSC. So just kind of trying to optimize things a little bit here. So squared error itself takes a single data point X and Y. We do inference, we just do a prediction. We pass in the single data point, we get a prediction and return the mean squared error here. You may be confused by the inner function here and inner is used just because so as to be able to generalize two cases when like Y was a multi-dimensional vector and not a scalar like it is right now. So this is just a generalization to you of our mean squared error loss. Finally we take the square error and we call the V map. So that's the transform from Jack's that makes this thing run on a batch of data. 
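Putting the data generation and the loss together, here is a rough sketch of what is being described; the 150 two-dimensional points and the FrozenDict layout follow the walkthrough, while the seed and the noise amplitude are made-up values:

import jax
import jax.numpy as jnp
import flax.linen as nn
from flax.core import freeze

n_samples, x_dim, y_dim = 150, 2, 1
w_key, b_key, x_key, noise_key = jax.random.split(jax.random.PRNGKey(0), num=4)

# Ground-truth parameters, wrapped in the same structure Flax expects.
w_gt = jax.random.normal(w_key, (x_dim, y_dim))
b_gt = jax.random.normal(b_key, (y_dim,))
gt_params = freeze({'params': {'kernel': w_gt, 'bias': b_gt}})

model = nn.Dense(features=y_dim)
xs = jax.random.normal(x_key, (n_samples, x_dim))
ys = model.apply(gt_params, xs) + 0.1 * jax.random.normal(noise_key, (n_samples, y_dim))

def make_mse_loss(xs, ys):
    def mse_loss(params):
        def squared_error(x, y):
            pred = model.apply(params, x)
            return jnp.inner(y - pred, y - pred) / 2.0   # inner generalizes to vector-valued y
        # vmap runs the per-example loss over the whole batch, then we average over axis 0.
        return jnp.mean(jax.vmap(squared_error)(xs, ys), axis=0)
    return jax.jit(mse_loss)

mse_loss = make_mse_loss(xs, ys)
value_and_grad_fn = jax.value_and_grad(mse_loss)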
So we pass now like a batch of our data, like the, let me, whoops, let me actually reduce from thousand five hundred. No, I'm going to leave it at thousand five hundred. It's still like a small data set. So we're going to run this on the batch of data. So this will give us like a thousand five hundred by one vector. And then we're going to do mean over the X is equal zero. So that means we're going to kind of average out all of those, of those loss losses for all of these singular data points to get the final loss. And that's pretty much it. That's how we get our MSC loss here. And finally, I kind of wrap it into this value and grad transform function from Jack's to get the, yeah. That's how we calculate gradients and you should be familiar with this already. So now this will be our model. So we're kind of cheating here a little bit because this is the exact same model that generated the data. So we, we usually wouldn't have this advantage of knowing the model, but here for the sake of just demoing this, I'm going to suppose we, I'm going to assume we know the exact model. So we're going to do the usual stuff here, initialize the model. We get initial parameters and then we're going to do a simple training here, a training loop. We're going to have 20 epochs, just an arbitrary number. You can tweak it, but like it's enough for this toy problem. We called, we called the, we just passed the params and we get the gradients and we get the loss. So this may be like a dark magic a little bit, but the trick is this value and grad function already contains the MSC loss, which already contains our data because we had this wrapper function make MSC loss and the data excess and YS is already stored internally. So that's why we don't have to even pass anything here. So what this does is for the current set of parameters and for the car, for the dataset, which is static, it figures out the current gradients and the current loss for those parameters. And then we just do a simple stochastic grid in the send here. I'm being very explicit here. You usually will not be doing it like this. You're going to be using Optex, but here I'm being explicit and doing it in pure Jax way. By the way, I'm having some internet connection problems. Hopefully that will not interfere. I think not, but yeah. So you can see here, we do the tree multimap from Jax. We basically have two pie trees here, params and grads. We are going and traversing the leaves and we do the following simple manipulation from the current parameter. We subtract the learning rate times the gradient, which is a definition of your simplest stochastic gradient descent. And that's it. So this is actually not stochastic gradient descent because we're using the whole data. So this will be a batch gradient descent. So it's exact. We're not being stochastic here. And finally, we are logging every now and then. So every five epochs, I just print the epoch and then print the loss. And yeah, if I were to run this shift enter, we can get, we can see the initial parameters which were initialized using this key, which is different compared to the way we've initialized the model here. We use the specialized keys here and then we kind of wrap those into this freeze dictionary, et cetera. Okay. So those were the initial parameters. Then we had loss going down towards zero. And finally we get like, as you can see here, the final parameters, the learn parameters are here and you can see that they are pretty much the same. 
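And a sketch of that pure-JAX training loop, continuing from the snippet above; the learning rate is an illustrative value, and where the video-era code uses jax.tree_multimap, current JAX spells the same thing jax.tree_util.tree_map:

learning_rate = 0.3
params = model.init(jax.random.PRNGKey(42), xs)   # a fresh key, different from the ground-truth one

for epoch in range(20):
    # The loss already closes over (xs, ys), so only the parameters are needed here.
    loss, grads = value_and_grad_fn(params)
    # Manual batch gradient descent: walk both pytrees (params and grads) leaf by leaf.
    params = jax.tree_util.tree_map(lambda p, g: p - learning_rate * g, params, grads)
    if epoch % 5 == 0:
        print(f'epoch {epoch}, loss {loss:.6f}')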
We get two five four here, two five, very similar numbers, which is also indicated by the loss. So yeah, that's how the training loop in Flex looks like. Almost in Flex. As I said, I'm using more, this is more like a puregex type of syntax here. So let's now use OptX, which is a dedicated library for this purpose in particular, and let's initialize it. So from OptX we call the SGD, we pass in the learning rates and then we have to again initialize the OptX because state has to be externally manipulated. And here we get a state and we kind of print the state. So let's run this cell and let's see what happens. And as you can see, we've got a tuple of two empty states and that may not make sense if you're not familiar with how optimizers work, but if you are, this makes perfect sense because SGD doesn't have any state. Compare that to Atom, which has to kind of keep the various statistics of gradients, like the variance of gradients in memory in state and that's why they're like, we have to like kind of explicitly manipulate it. So if I were to change this to Atom, and yeah, I won't be changing the variable name, I'm just gonna change and use Atom. Let me try and run this. You can see that now we have like a meaningful state here and that's something you have to kind of pass externally here in Flex, in Jax in general. So yeah, you can see what happens here is we have two set of parameters here. First is the mu parameters and then we have the nu parameters. And these are just your Greek alphabet letters that are denoting the same quantities as in the original Atom paper. So if I were to Google here Atom paper, blah, blah, blah, and if I were to open this paper on archive, we're going to see the formulas are using the exact same variable name. So I think it's here, is this the Atom? Yeah. And this is the nu, I don't know whether I'm pronouncing it correctly, probably not, but basically those are the statistics, the gradient statistics that Atom is kind of keeping in the background in order to do the update. So here is the Atom update. It's way more complicated compared to, let's say, SGD because yeah, you can see here we're using the mu and the nu here to form the novel parameter, to form the update. Okay, so that's it. Let me return back to SGD so that we can have comparable results compared to the pure Jax approach we used here. And so the tree multimap. So let me go back here and we're going to have the same exact thing replicated. The only thing that's different here is we are using instead of explicitly handling the optimization part, we're going to use OptX. And you can see in this example, it actually takes two lines to implement the code that we implemented here. I think in, yeah, we had just a single line here. So you may ask yourself why would we use OptX then? And then, I mean, it's pretty obvious. I mean, when you have some more complex optimizers, you'd have in pure Jax, you'd have like 10, 15 lines or more. Here you always have two lines and that's very cool. So this is how it looks like. So you have the optimizer, the SGD optimizer here. We pass the gradients and we pass its state and outcomes, the novel state and outcome, the updates, which are the modified gradients. So in the case of Atom, depending on the internal state, we'd be modifying these updates and then we apply, we just call this apply updates on the parameters and we get the novel parameters and that's it. So this should not work. Let me try and run this and see what we get. 
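The same loop with Optax doing the update, again continuing from the sketches above; swapping optax.sgd for optax.adam is what produces the non-empty mu/nu state discussed next:

import optax

optimizer = optax.sgd(learning_rate=0.3)
opt_state = optimizer.init(params)     # optimizer state is handled explicitly, like everything in JAX

for epoch in range(20):
    loss, grads = value_and_grad_fn(params)
    updates, opt_state = optimizer.update(grads, opt_state)   # the two Optax lines
    params = optax.apply_updates(params, updates)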
So this should now work; let me run it and see what we get. And you can see that we get a loss; I think it's the same loss, it's 512 here, and if we look above it's also 512, with the last two digits being 85 in both cases. We get the same loss because Optax's sgd implements exactly the same update. Okay, so that's cool. And later (I have a note here) we'll see how we can take all of these various states that are floating around our training loop, wrap them in a single object, and use that single object to do the training. Just so you can see that Optax is much more than SGD and Adam, let me show you this piece of code. As I mentioned, you can do various schedules - cosine schedules, linear schedules, arbitrary schedules - you can chain multiple optimizers and multiple components, and you can do parameter freezing when you want to do some fine-tuning. So it's very powerful; it's as powerful as pure JAX, but way more concise. This next code here won't run; it's just an example copy-pasted from the ImageNet examples in the respective repos, from Haiku and from Flax. You can see this create_learning_rate function, which takes some parameters and then calls Optax's linear_schedule function: we give it the initial value, the end value, and the number of steps over which we linearly increase (or decrease, depending on the initial and end values) the learning rate. Then we have the cosine schedule, and finally we join the schedules using Optax's join_schedules: we combine the linear warmup with the cosine schedule, and we put in the boundaries to mark the moment when we switch modes. Then you can take this schedule function and pass it directly to Optax's SGD, with some additional Nesterov momentum, et cetera. Finally, you can see a snippet from Haiku that is arguably even more complex: we have a trace, a chaining operation, and then scale_by_schedule - a bunch of small components that you can compose into arbitrarily complex transformations.
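Here is a rough sketch of that schedule-chaining idea - a linear warmup joined with a cosine decay, fed into SGD with Nesterov momentum. The function name and all the concrete numbers are made up for illustration, not taken from the ImageNet examples.

```python
import optax

def create_learning_rate_fn(base_lr=0.1, warmup_steps=1000, total_steps=10000):
    # linearly ramp the learning rate up from 0 to base_lr
    warmup_fn = optax.linear_schedule(
        init_value=0.0, end_value=base_lr, transition_steps=warmup_steps)
    # then decay it with a cosine schedule
    cosine_fn = optax.cosine_decay_schedule(
        init_value=base_lr, decay_steps=total_steps - warmup_steps)
    # boundaries mark the step at which we switch from warmup to cosine decay
    return optax.join_schedules(
        schedules=[warmup_fn, cosine_fn], boundaries=[warmup_steps])

learning_rate_fn = create_learning_rate_fn()
optimizer = optax.sgd(learning_rate=learning_rate_fn, momentum=0.9, nesterov=True)
```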
So that's pretty much it; that was section number two. We got some understanding of how toy neural networks, i.e. linear regression in this case, are trained, and now we're going to see how to create custom models in Flax, which is something we care about. Let's start with a simple example of how to build an MLP, a multi-layer perceptron network. You can see we are inheriting from nn.Module: we form a class and inherit from the module, and whenever you want to build a custom model, you always inherit from nn.Module. Then you can see we have this field here, a data field; it's a feature from one of the newer versions of Python, which you may or may not be familiar with, but basically nn.Module is a Python dataclass. Let me check, I think I have a link here - yes - let me copy-paste it and show you what a dataclass is. A dataclass, if I zoom in, is basically this: you have the @dataclass decorator, and then you just declare the field names, and a couple of functions are automatically defined for you. Let me find the equivalent explicit representation: if you were to write out by hand what the decorator generates for those two fields, you'd have to write the representation function, the equality function, and the init, and there's a lot of redundancy - as you can see, rank appears three times. So all in all, this is a neat feature of Python rather than of Flax; as I said, nn.Module is just piggybacking on it. As a consequence of nn.Module being a Python dataclass, unfortunately, we have to deal with a new function name: we're not using __init__ anymore, we're using setup. Setup is the old init, because, as I mentioned, the dataclass implicitly defines the __init__ function, as we saw on the tab I just closed. What we do in setup is form a list of layers: we iterate through this sequence of integers, which is a simple list, and create the Dense, i.e. feed-forward, layers. Then in the __call__ function we take the input, rename it to activation, iterate over the layers, pass the activation through each layer, and, depending on whether or not we are in the last layer, apply the ReLU nonlinearity. Finally we return the activation - the last layer's output is the output activation. So that's your basic definition of an MLP, fairly simple. Now let's see how we can do some inference. Again we split a key: we have the x key and the init key; init is supposed to initialize the parameters. Let me walk you through this part slowly. We have the MLP, and we instantiate it with the number of neurons per layer set to 16, 8, 1, which means 16 neurons in the first layer, 8 in the second, and a single neuron in the final layer. We instantiate the dataset here: four data points, each with four dimensions, using the x key. Then we call model.init with the init key and pass x - we've seen this multiple times already - and we get the params. We call model.apply, do a forward pass, and get y. And that's it; the same as if we were using a single Dense layer rather than a full MLP. Again, I'm just calling a tree map and printing the shapes. Let's see how this looks when we run it: as expected, we have 16, 8, 1, and we have 4 here because, remember, we have automatic shape inference, and since our data points have four dimensions, that's where the 4 in this kernel - this weight - comes from. By the way, I hate the word kernel; it's so overloaded in machine learning that it's crazy, so maybe using "weight" as the terminology is a bit better, I don't know. Cool.
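For reference, a minimal sketch of the setup-style MLP just described; the layer sizes and data shapes mirror the walkthrough (16, 8, 1 neurons; four 4-dimensional data points), but the exact field and variable names are assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
from typing import Sequence

class MLP(nn.Module):
    num_neurons_per_layer: Sequence[int]   # dataclass field - no __init__ needed

    def setup(self):                        # setup plays the role of __init__
        self.layers = [nn.Dense(n) for n in self.num_neurons_per_layer]

    def __call__(self, x):
        activation = x
        for i, layer in enumerate(self.layers):
            activation = layer(activation)
            if i != len(self.layers) - 1:   # no nonlinearity after the last layer
                activation = nn.relu(activation)
        return activation

seed = jax.random.PRNGKey(0)
x_key, init_key = jax.random.split(seed)

model = MLP(num_neurons_per_layer=[16, 8, 1])
x = jax.random.uniform(x_key, (4, 4))       # 4 data points, 4 dims each
params = model.init(init_key, x)            # shapes inferred from x automatically
y = model.apply(params, x)                  # y has shape (4, 1)
print(jax.tree_map(jnp.shape, params))
```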
I made a note here: let's try to do the same thing with the nn.compact wrapper. The trick is that you don't need setup at all: because we have a fairly simple __call__ function, we can just wrap it in nn.compact, and then we iterate through the number of neurons per layer instead of through the layers, and instead of having a pre-built layer we instantiate each nn.Dense on the fly, initializing it with the number of neurons; everything else remains the same. I think this should run unless I've made some mistake - let me see whether it works. Everything stays the same, it's just a slightly more concise way of writing things down. And yes, I did manage to make an error: we should not be using self.layers, since layers doesn't exist anymore. Let me switch that, and this should now work. Yes, it does. Okay, cool. You've just seen the second way; this nn.compact style is a very Flax-like way of writing things down, but either way you get the same semantics. Cool, let's continue. The output, by the way, is as expected: four by one, because we have a single output neuron and four data points. It's always good to check the shapes and make sure they make sense. Now let's dig even deeper; there are two very crucial concepts you ought to know about. The first one is this thing called a param, which is Flax's terminology for the trainable parameters of a neural network. Let's implement a simple model that leverages params and see how it works. What we see here is roughly how we would implement the dense, i.e. feed-forward, layer ourselves. Again, it's a dataclass, so we can declare these fields: we have the number of neurons and the weight and bias initializer functions. The bias is going to be initialized to all zeros, and for the weights we use a normal initializer; the details of those functions are not vital for this explanation. So let's see what we do here. We call self.param and give it a name, in this case "weight", but it can be an arbitrary name, whatever you choose; that's the name that will appear in the frozen dict under the params collection, if you recall - if not, I'm going to run this and you'll see it in a second. Then we pass the initializer function of our choice. The random state is passed into this function implicitly through the init and apply functions, so you don't have to worry about it here; the only thing you have to provide is the shape for the initializer, which is the last dimension of our data by the number of neurons, i.e. the desired number of outputs. We do the same thing for the bias: we call self.param - again, this is Flax syntax - name it "bias", pass the bias initializer, and pass the number of neurons as the shape, because that's what the bias shape should look like. Then there is a simple implementation of the feed-forward computation: a dot product between the inputs and the kernel, the weights, plus the bias. That's it - this is how we'd implement a dense, linear, feed-forward layer in Flax. Again we have the common pattern: we form the random states, instantiate the model, create some random data on line 18, then initialize to get the parameters, call apply to get the output, and finally print the shapes.
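Here is a minimal sketch of that self.param pattern - a toy feed-forward layer, not the Flax source; the class name, the initializer choices, and the data shapes are assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
from typing import Callable

class MyDense(nn.Module):
    num_neurons: int
    weight_init: Callable = nn.initializers.lecun_normal()
    bias_init: Callable = nn.initializers.zeros

    @nn.compact
    def __call__(self, x):
        # the RNG key is passed to the initializers implicitly by init/apply;
        # we only have to provide the desired shape
        weight = self.param('weight', self.weight_init, (x.shape[-1], self.num_neurons))
        bias = self.param('bias', self.bias_init, (self.num_neurons,))
        return jnp.dot(x, weight) + bias

data_key, init_key = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.uniform(data_key, (4, 10))
model = MyDense(num_neurons=5)
params = model.init(init_key, x)
y = model.apply(params, x)
print(jax.tree_map(jnp.shape, params))   # weight: (10, 5), bias: (5,)
```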
Let me run this and see the output; it's something we're used to seeing: we have the params collection - and I'm deliberately saying collection, because that's a reserved word in Flax and I'll soon explain exactly what it means - and you can see the biases and weights here, and that's it. I've linked a couple of source files here, which you can check out at your own pace, but let me briefly address something that may confuse you about Flax's idiosyncrasies: why do we have brackets on one initializer and not on the other? What kind of inconsistency is that? My OCD was triggered by this, so I dug into the source code and found the answer. Let me zoom in here: we have a partial, which forms a higher-order function; it just plugs in these arguments so that you don't have to plug them in later when calling this variance_scaling function. The reason for the inconsistency is that, if you look at the variance_scaling implementation, it returns this inner init function - that's why we call it with brackets, so that we get back the init function, which we can then use as an initializer. Whereas zeros, if you find its definition, is already a function of the right form, so you don't need the brackets. A minor implementation detail, but I thought I'd explain the inconsistency in case you wondered. Second, I want to quickly show you the implementation of the aptly named linear layer - even though they call it Dense. Let me find the class: Dense, DenseGeneral, Dense, here. Obviously it's a bit more complicated than the toy implementation I just showed you, but the basics are the same. There is some additional type conversion: depending on whether you're training in mixed-precision mode, you may want to convert the data type of your activations, your input data, et cetera. You can see again the self.param call with the kernel initializer function and the shape, plus a data type, and then some conversion of data types. Then there's dot_general, which does essentially what the usual jnp dot product would; it's just a lower-level function - more verbose, as you can see, but also, I guess, less error-prone. The bias is formed here, and if it exists, it's simply added, so this is just the W times x plus b implementation. That's pretty much it; let me get back to the notebook and continue the journey. Again, I have a simple cell here showing that the signature of an initializer function is such that the first argument is actually the key, not the shape - that's what I meant when I said the RNG is passed in implicitly. By the way, you usually won't have to care about these things, because you'll mostly just use the nn layers that are already implemented for you; I'm just trying to dig a bit deeper into Flax and show you how things work in the background - hopefully you'll appreciate that.
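A tiny sketch of that bracket inconsistency, under the assumption that the standard initializers behave as described: variance_scaling(...) returns an init function (hence the extra call), while zeros already is an init function whose first argument is the RNG key, not the shape.

```python
import jax
import flax.linen as nn

key = jax.random.PRNGKey(0)

# variance_scaling(...) returns an init function, so we call it first ...
init_fn = nn.initializers.variance_scaling(scale=1.0, mode='fan_in', distribution='normal')
w = init_fn(key, (10, 5))              # ... and then call that function as (key, shape)

# zeros is already an init function with signature (key, shape, dtype=...)
b = nn.initializers.zeros(key, (5,))
print(w.shape, b.shape)
```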
Okay, let's continue. Those were the trainable parameters, but as you may know, when you're training neural networks you sometimes have quantities that are part of the model but not exactly trainable - you keep some statistics, and the most famous example is batch norm: it has trainable parameters as well as those non-trainable batch statistics, and now we're going to see how Flax handles them. The new concept I want you to keep in mind is the variable. A note on terminology: variable is the broader term, and it includes both trainable and non-trainable variables, so you'll see params referred to as variables as well - they're just the trainable ones. With that out of the way, let's look at a very simple, contrived example that contains both kinds. We have a class with the convoluted name BiasAdderWithRunningMean; don't try to deduce its semantics from the name, it's just a contrived example. In the __call__ function - we're again using the compact notation - we first check whether this variable has already been initialized, namely whether the EMA variable inside the batch_stats collection exists. Now let's unpack what "collection" and all of that means. batch_stats is actually not an arbitrary name; as you'll see when I show you the batch norm source code, it's hard-coded there. Whether that's good design or not, I don't know - I guess I don't have enough context, but in general I'd say hard-coding a string like that isn't the best design. It is what it is. So we have self.variable here - again, a Flax thing - and we specify a collection, in this case batch_stats. The reason there are different collections is so that you can treat them differently: the params collection usually contains the trainable variables, and batch_stats contains these non-trainable variables, where you take a batch of, say, images and compute some statistics - in the case of batch norm, the mean and the variance, which the layer then uses in its logic. Okay, so it's very similar to self.param: you pass the collection name, then the variable name, then the initializer function - here we're explicit and just use zeros - and then the shape, which is a one followed by the shape of a single data point, because dimension zero is the batch dimension. Then we have a bias, which is a trainable variable and will be added to the output; this one is implicitly placed in the params collection - I have it noted here: by default this variable goes to params, whereas the one above goes to batch_stats. Because of some implementation details we unfortunately have to accept a key argument, since that's the expected signature for self.param's initializer, even though we don't use it in this particular case. Finally, before I explain the next snippet: EMA stands for exponential moving average, just so you understand the semantics. What we do is: if the EMA variable has already been initialized, then during the forward pass we update it using this exponential-moving-average logic. We take the decay.
Decay is usually a largish number, something like 0.99, so you have a kind of inertia: you keep the old value with a high weight, and with weight one minus the decay you mix in the mean of the current batch of data. This has a damping effect. I'm trying to teach you Flax here, but I think explaining some of the machine learning concepts along the way may help a lot of you, so that's why I'm doing it. Finally, the return value is x, the input data, minus the exponential moving average, plus the bias. You may wonder why we have .value everywhere, and the reason is that self.variable returns a reference, not the value itself, so you have to use .value to get the actual value of the variable. This will make a bit more sense in a couple of seconds. Again, a very familiar pattern: we form the random states, instantiate the model, and create the data - 10 data points, each with four dimensions. We initialize the model, get the variables, and print them. Let me run this and show you what's going on. Here is the frozen dict we're used to seeing, and now, aside from params, we also have batch_stats, which contains the EMA variable. So this is the underlying structure of this contrived model; the values are all zero because that's the initializer we set - zeros. Then comes the new part of the notation I want you to remember: when we apply, we pass the variables, then the input, and then we tell Flax that batch_stats is mutable, which means its value is going to change during the forward pass. For regular parameters, a forward pass through a neural network doesn't change the weights, but these kinds of variables do change during the forward pass, and that's the reason we need this distinction and the different collections. So we call apply, and this time we get back a tuple, not just the output: we get the output and the updated non-trainable parameters, which I've printed here. You can see how all of the zeros turned into these values, because we took the data x, computed the mean across the batch - aggregating those 10 vectors into a single vector - and then applied the update rule to get a new value. All in all, what I want you to take away from this is that there are multiple collections because we have multiple semantics, depending on whether we're dealing with trainable parameters or these other kinds of variables; and remember the mutable keyword alongside self.variable - those are some of the important Flax idiosyncrasies.
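For reference, a sketch of that self.variable / mutable pattern; the example is contrived by design, so treat the names and the decay value as assumptions rather than the notebook's exact code.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class BiasAdderWithRunningMean(nn.Module):
    decay: float = 0.99

    @nn.compact
    def __call__(self, x):
        is_initialized = self.has_variable('batch_stats', 'ema')
        # non-trainable variable, stored in the 'batch_stats' collection;
        # shape is a leading 1 plus a single data point's shape
        ema = self.variable('batch_stats', 'ema', jnp.zeros, (1,) + x.shape[1:])
        # trainable parameter, implicitly stored in the 'params' collection;
        # the unused key argument is required by the expected initializer signature
        bias = self.param('bias', lambda key, shape: jnp.zeros(shape), x.shape[1:])
        if is_initialized:
            ema.value = self.decay * ema.value + \
                (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
        return x - ema.value + bias          # .value, because ema is a reference

data_key, init_key = jax.random.split(jax.random.PRNGKey(0))
model = BiasAdderWithRunningMean()
x = jax.random.uniform(data_key, (10, 4))   # 10 data points, 4 dims
variables = model.init(init_key, x)
print(variables)                            # both 'params' and 'batch_stats'

# batch_stats changes during the forward pass, so it must be marked mutable
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
print(updated_state)
```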
Okay, let's continue and write an update function, a training function, and see how we could train this model. Once you understand this, trust me, everything afterwards will be easier, because we are now digging really deep into the nitty-gritty details; actually training Flax models is much easier than this. We pass a bunch of arguments here because we're still not using the TrainState, which I'll introduce later, so bear with me. We pass the params into the loss function, and inside it we use the apply function. The apply function is simply your model's apply: we have the model, and I pass model.apply into the update step, so that's how we do the forward pass. Then we pass the variables - here we're being explicit: we have the params collection, where we pass the params, and then the non-trainable params, which I unpack with the double star to splat the keys and values - and that's how the variables dictionary is formed; we'll soon see why we have to split these two. Then we pass the input, and then the mutable argument: here we're being a bit more general and just say "give me the keys that correspond to the non-trainable collections", so batch_stats would be one of those keys, as in the earlier examples, just to give you an idea. Finally, the loss: because this example is super contrived, the loss itself doesn't mean much. Once we have the loss function, we follow a very standard pattern - you've seen it a couple of times in this notebook already and you'll see it a lot once you start coding in Flax: we pass the loss function to the value_and_grad transform, and we set has_aux to true, because aside from the loss, the loss function also returns the updated non-trainable parameters - remember, doing a forward pass mutates them, which is why we need mutable in the first place. Okay, cool. So once we pass the parameters, we get back the gradients of the loss with respect to those parameters, and, because we used value_and_grad, we also get the value back: the loss and the updated non-trainable parameters. Then we have the usual Optax pattern: we call update with the optimizer state, and then apply the updates to the parameters to get the new parameters. That's it. We return a bunch of things from this function because we're still not using the TrainState, which I'll introduce in one of the next cells, so stay tuned. Cool. Now let's see how everything comes together. We instantiate the model, create some dummy data, and initialize the model to get the variables. This time, because we have both trainable and non-trainable variables, we split them using the pop function - it works because this is just a dictionary - and then, as they usually do in the documentation, we delete the variables object to avoid wasting resources. We create the optimizer, initialize its state, and then run a simple for loop of training steps, calling the update step we just wrote; it correctly updates the parameters and treats the non-trainable variables in the right way. This is as nitty-gritty as this video is going to get, so if you've followed me so far, congrats - and if not, you'll still have no problem following the rest of the notebook.
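A sketch of that update step for a model with both trainable params and non-trainable state; the contrived loss, the argument names, and the commented driver are assumptions - this shows the pattern, not the notebook's exact code.

```python
import jax
import optax

optimizer = optax.sgd(learning_rate=0.01)   # placeholder optimizer

def update_step(apply_fn, x, opt_state, params, non_trainable_state):
    def loss_fn(params):
        # rebuild the full variables dict and mark non-trainable collections mutable
        y, updated_state = apply_fn(
            {'params': params, **non_trainable_state},
            x,
            mutable=list(non_trainable_state.keys()))   # e.g. ['batch_stats']
        loss = (y ** 2).mean()                           # contrived loss, as above
        return loss, updated_state

    # has_aux=True because loss_fn also returns the updated non-trainable state
    (loss, updated_state), grads = jax.value_and_grad(loss_fn, has_aux=True)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return opt_state, params, updated_state, loss

# assuming a model and input x exist, the driver would look roughly like:
# variables = model.init(jax.random.PRNGKey(0), x)
# non_trainable_state, params = variables.pop('params')
# opt_state = optimizer.init(params)
# for _ in range(10):
#     opt_state, params, non_trainable_state, loss = update_step(
#         model.apply, x, opt_state, params, non_trainable_state)
```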
Now we're going to go up a level of abstraction and see an example with basically the most complex syntax you can have in Flax: I'm going to mix several kinds of layers, which introduce both the non-trainable variables and some stochasticity in the forward pass via dropout. Let's look at that example. We have this DDM block - the name comes from dense, dropout, batch norm, which is a fairly common stack of layers you'll see in CNNs and elsewhere. Again, it's a dataclass, so we have the number of neurons specified here and a training flag, and depending on the value of this training flag we get different behavior from dropout and batch norm. What happens is: we instantiate the feed-forward layer using the num_neurons variable, and whatever comes out of it we pass into dropout. The dropout has a 50% probability of dropping activations, and again we have this deterministic argument: if we are training, then "not training" is false, which means we are not deterministic during training - we want the stochasticity - but when we are evaluating, the training flag is false, so deterministic is true and dropout is effectively turned off. Finally, we take the output of dropout and pass it into batch norm, where use_running_average again depends on the training flag: if we are training, we pass false, because we don't want to use the running average - we want the batch statistics; that's how batch norm works: you take a batch, compute the statistics, and use those instead of the statistics accumulated throughout training. This is more of a machine learning concept than a Flax concept, so I won't dwell on it. Then the familiar pattern: we form a bunch of random states, instantiate our block, and create some dummy data. We initialize the model, and here come some additional Flax idiosyncrasies: this time, when we initialize - and this may be a bit confusing - we have to pass a random state for both the params collection and for dropout. Semantically it doesn't make much sense, because dropout has no learnable parameters to initialize, but because of the way everything is implemented, you have to include this additional key, so keep that in mind. Then we pass x, everything else remains the same, and we get the variables back. Then we apply, and here comes the additional part, because dropout complicates both the initialization and the apply procedure: we have to use this rngs keyword, which sets the random state - it seeds the dropout - so that you can reproduce it later. It makes a lot of sense to have it in apply, although, to the best of my knowledge, it doesn't make much sense to have it in init. Out come the output y and the non-trainable variables - the non-trainable parameters; I'm mixing the terminology a bit, but hopefully you get the gist. And if we want to run this in evaluation mode, with the training flag set to false, the apply call simplifies a lot: we don't have to pass rngs, because dropout is effectively turned off, and we don't have to care about mutable, because batch norm's internal statistics won't be updated. So this is as complicated as it gets.
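A sketch of that dense-dropout-batch-norm block and its init/apply calls; the class name, sizes, and key names are assumptions for illustration.

```python
import jax
import flax.linen as nn

class DenseDropoutBatchNorm(nn.Module):
    num_neurons: int
    training: bool

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.num_neurons)(x)
        # stochastic only while training
        x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)
        # batch statistics while training, running averages at eval time
        x = nn.BatchNorm(use_running_average=not self.training)(x)
        return x

data_key, params_key, dropout_init_key, dropout_key = \
    jax.random.split(jax.random.PRNGKey(0), 4)
x = jax.random.uniform(data_key, (8, 4))

block = DenseDropoutBatchNorm(num_neurons=3, training=True)
# init needs a key for 'params' and, because of how things are implemented, for 'dropout'
variables = block.init({'params': params_key, 'dropout': dropout_init_key}, x)

# training mode: seed the dropout via rngs and mark batch_stats as mutable
y, updated_state = block.apply(
    variables, x, rngs={'dropout': dropout_key}, mutable=['batch_stats'])

# eval mode: no rngs and no mutable needed
eval_block = DenseDropoutBatchNorm(num_neurons=3, training=False)
y_eval = eval_block.apply(variables, x)
```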
Remember these few new keywords: we have mutable, we have the inclusion of rngs in apply when there is stochasticity in the forward pass, and during initialization, if we have dropout, we have to specify keys per collection. This is why I said Haiku seems cleaner, in my honest opinion, compared to Flax: it doesn't have this separation, so all of these signatures are a lot simpler. Whether that means Haiku loses some expressivity is, I guess, worth thinking about. Okay, that was it. We're now in the last section of this already overly long video - hopefully you're still following along. I'm going to walk you through a fully fledged convolutional neural network example on MNIST. We'll code it up in Flax, and now that you have all of the components in your head, this should make a lot of sense, and you'll see what Flax training loops and code actually look like in practice. Let's start by defining the model: a convolutional neural network using the nn.compact pattern. There's a lot of hard-coding here, but for demo purposes I guess it's fine. Basically, a stack of convolutional layers followed by ReLUs, with some average pooling here and there, then we flatten the shapes, add a couple of linear layers, and finally output the log-softmax of the logits. We have 10 classes because we're dealing with MNIST, as you recall. Let's run this cell to define the network.
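For reference, a sketch of the kind of CNN just described - conv plus ReLU blocks, average pooling, a flatten, a couple of dense layers, and a log-softmax over the 10 MNIST classes; the exact channel counts and layer sizes are assumptions.

```python
import flax.linen as nn

class CNN(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.Conv(features=32, kernel_size=(3, 3))(x)
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = nn.Conv(features=64, kernel_size=(3, 3))(x)
        x = nn.relu(x)
        x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
        x = x.reshape((x.shape[0], -1))           # flatten everything but the batch dim
        x = nn.Dense(features=256)(x)
        x = nn.relu(x)
        x = nn.Dense(features=10)(x)              # 10 MNIST classes
        return nn.log_softmax(x)
```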
Now I'm just going to reuse the PyTorch data loading functionality from tutorial number three of the JAX series, so I won't dive deep here. I have a collate function that turns PyTorch tensors into NumPy arrays, because that's what JAX, and thus Flax, deals with; I have some custom transforms to manipulate the MNIST images - put them into the zero-to-one range, convert to float, and so on - some nitty-gritty details. We form the datasets, create the data loaders with a batch size of 128 images, and load the training images and labels as well as the test images; I specifically load the whole test set, because MNIST is small enough that we can keep it in memory, no problem. Cool, let's run the cell to load the data and visualize a single image from the dataset: a regular MNIST image, the digit seven, with the ground-truth label printed on the screen as well. Now let's understand the couple of functions we'll need to get this CNN training and learning on this simple MNIST dataset. First things first, we have the train_step and eval_step functions. You're already familiar with most of these patterns, but let me quickly walk you through what's going on. We have the loss function: we do a forward pass through the CNN, passing the variables - again, I'm explicit with the collections: we have the params collection and we pass the params - and the images, which are our data, and we get back the logits. Then we convert the ground-truth labels using the one-hot function, so we get the one-hot labels, and then - you can maybe guess it - this thing here is a simple cross-entropy loss. We do an element-wise multiplication, and because both the logits and the one-hot labels have shape batch size by 10 (we have 10 classes in MNIST), the one-hot labels effectively select the true label's entry from the logits; and the logits are log-softmax values, if you remember how we defined the CNN - let me scroll up - yes, log-softmax is the output of the CNN. Okay, back to where we stopped. Then we sum across axis minus one, i.e. the last dimension, the one with the 10 labels; most of those entries are zero, because, as I said, the one-hot selects just a single one. So we end up with a bunch of log-probabilities, and then we take the mean across dimension zero (the default), which is the batch dimension, and with the minus sign in front that gives us the cross-entropy loss. We return the loss as well as the logits. Then we have a very familiar pattern: value_and_grad with has_aux set to true, because we're not only returning the loss, we're also returning the logits. And here's the new thing: the state, which I've been mentioning throughout the video - we'll soon see exactly how we construct it. Basically, we take the parameters out of it, since they form part of our training state - the parameters are the weights of our neural network - we pass them in, and we get back the gradients and the logits. Then we just call state.apply_gradients: now we don't have the two Optax lines, because they're hidden inside apply_gradients; we pass the gradients and get back the updated state. Next we compute the metrics: we pass the logits and the ground-truth labels and get back a simple dictionary containing the loss and the accuracy - we'll see it soon, it's not that important. The evaluation step is again very simple: we pass the test images, do a forward pass, get the logits back, and just compute the metrics, because we want to evaluate the model on the test images. Okay, let's continue. Now that we have the training step and the evaluation step defined, we can just call them in a loop for a whole epoch. We iterate through the PyTorch data loader, fetching images and labels, calling train_step over and over, and collecting the metrics across all of those batches. Then we call device_get which, if you recall from one of the previous videos, takes the data currently sitting on the accelerator - GPU, TPU, or whatever - and fetches it back to host memory; that's why we have the _np suffix here. Then I do some simple aggregation across the batches and return the final state and all of those metrics from the function. Evaluating the model is fairly similar: we call eval_step just once, without a loop, because, as you recall, I'm loading the whole test dataset, so a single line of code works. We do the same device_get to pull the results onto the host, and then we call .item(), because these are NumPy arrays with a single element, so .item() gives us the scalar. And that's it - we return the metrics.
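Here is a sketch of those train and eval steps; it assumes a flax.training.train_state.TrainState (introduced next) and a CNN that outputs log-softmax values, and the helper names are assumptions.

```python
import jax
import jax.numpy as jnp

def cross_entropy_loss(logits, labels):
    one_hot = jax.nn.one_hot(labels, num_classes=10)
    # logits are already log-probabilities, so this is -mean(log p[true class])
    return -jnp.mean(jnp.sum(one_hot * logits, axis=-1))

def compute_metrics(logits, labels):
    return {
        'loss': cross_entropy_loss(logits, labels),
        'accuracy': jnp.mean(jnp.argmax(logits, axis=-1) == labels),
    }

@jax.jit
def train_step(state, images, labels):
    def loss_fn(params):
        logits = state.apply_fn({'params': params}, images)
        return cross_entropy_loss(logits, labels), logits

    (loss, logits), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params)
    state = state.apply_gradients(grads=grads)     # hides the two Optax lines
    return state, compute_metrics(logits, labels)

@jax.jit
def eval_step(state, images, labels):
    logits = state.apply_fn({'params': state.params}, images)
    return compute_metrics(logits, labels)
```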
Okay, finally, these are the two last functions we need before we see the main training loop, and this one is fairly important: it's syntactic sugar, but it makes things much cleaner and thus less error-prone. What we do is instantiate the CNN, initialize the parameters by passing the key and some dummy data (all ones), and from the resulting frozen dict we extract just the params collection, because that's the format the train state expects. Then we create the SGD optimizer without initializing it - the initialization happens inside this TrainState class, so we're delegating a lot of the functionality there - and finally we also pass cnn.apply, so that the state can do the more advanced logic we saw in the loop, like calling apply_gradients; it can do that because it has all of those ingredients inside. Let me quickly walk you through that class so it's clear what's going on. If I open it up, here is the TrainState: we have apply_gradients, which we saw in the training loop, and we have the create function. You can see the optimizer is initialized internally - as much work as possible has been delegated to this class so that we don't have to do the initialization externally - and you can see what apply_gradients does: it's just those two Optax calls, update to get the updates and the new optimizer state, and then apply_updates. A simple function that makes our code a bit cleaner. Let me go back and run this if I haven't already, and we get to the final cell of the notebook, where we have the training loop. We create the training state, and you can see how neat the loop actually looks: for every epoch we call train_one_epoch, we get back the modified train state and the metrics, we print the metrics - the loss and accuracy, et cetera - and we do the same thing on the test dataset, passing the test images and test labels. I can try to run this, but I don't think I currently have access to a GPU, so it will be a bit slower; it's just going to print this data every epoch. And that's pretty much everything you need to know. In my opinion, this example, even though it's more complex and actually does something real, was simpler than the contrived examples where we had to deal with mutable and pass random states for dropout and so on.
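A sketch of that create_train_state helper and the outer loop; the dummy input assumes 28x28x1 MNIST images, the hyperparameters are made up, and it reuses the CNN class sketched earlier.

```python
import jax
import jax.numpy as jnp
import optax
from flax.training import train_state

def create_train_state(rng, learning_rate=0.1, momentum=0.9):
    cnn = CNN()                                            # the model defined earlier
    params = cnn.init(rng, jnp.ones((1, 28, 28, 1)))['params']
    tx = optax.sgd(learning_rate, momentum)                # not initialized here -
    return train_state.TrainState.create(                  # TrainState.create does that
        apply_fn=cnn.apply, params=params, tx=tx)

# the outer loop, assuming train_one_epoch / evaluate_model as described above:
# state = create_train_state(jax.random.PRNGKey(0))
# for epoch in range(num_epochs):
#     state, train_metrics = train_one_epoch(state, train_loader)
#     test_metrics = evaluate_model(state, test_images, test_labels)
```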
So, as an exercise: how would we go about adding dropout here, and also batch norm? That would start adding that complexity back at the syntactic level. Let's think roughly about what would happen if we added dropout. First, we'd have to modify the create_train_state function: instead of passing a single key for the params, we'd create a dictionary of keys - params initialized with one key and dropout with another. That's the initialization side. Then, in the actual training loop, we'd have to pass a key for every single train step - each step needs a unique key - because, as you recall, we'd have to add rngs to the apply call and pass the key there. So that means we need splitting happening on this line here: we pass a specific key into the train_one_epoch function and then keep splitting it, because we never want to reuse a key - if you reuse a key you get the same randomness, and you lose the stochasticity. That's very important to keep in mind. So it's a fairly simple modification; I just wanted to quickly walk you through it. For batch norm, similarly, you'd have to add the mutable keyword here and there, and it would complicate things a little, but definitely not by a lot. Let's see whether we have some outputs. Yes, we have outputs here, and you can see the accuracy is already around 98%, with no overfitting, which is fairly nice. That's pretty much it. I encourage you to, A, go check out the Flax documentation and, B, check out the examples they have on their GitHub. The ImageNet example is very cool: they have mixed precision training, training over multiple devices, all of that additional complexity, and I think they can train the model in about an hour - well, a couple of hours of wall time - although you need a bunch of TPUs or V100s to do that. Go check those out at your own pace, and as I mentioned at the beginning, Hugging Face has very nice examples as well, and they organized this community week, which you should also check out. I'm going to link all of these resources in the video description, so do check them out - the Hugging Face team is doing an amazing job, so a huge shout-out to those guys. Cool, that's pretty much it. Let's do a quick recap, a short summary of what we've learned about Flax in this video. I'm going to open the table of contents here in Deepnote to guide my thinking, and let me create a code cell using the Ctrl+J shortcut. So, on top of everything you already need to know about JAX, we've seen the init and apply functions. We've seen that if you want to create custom modules, you'll sometimes use the self.param and self.variable functions. We've seen nn.compact, which makes the construction of those custom modules a bit more compact, obviously. We saw some new keywords that we haven't seen in JAX, such as mutable, if you want non-trainable variables, and rngs, if you want stochasticity like dropout in the forward pass. And we've seen the TrainState object, which is pretty much a wrapper state that makes things a little bit cleaner.
So all in all, there is not that much you need to know on top of JAX; if you already have the basics set up, you'll be good to go, and this video will hopefully give you enough context to get started. Again, a huge shout-out to the Deepnote team for sponsoring this video, and if you liked it, share it with your friends - that's the best way to support the channel. Also check out the referral link I'll put in the video description; by using it you'll get 20 hours for free on those pro machines. And that's it - subscribe to the channel, and until next time, bye bye.
[{"start": 0.0, "end": 7.5, "text": " What's cracking you guys? In this video I'm resuming my Jax series of videos and in this one I'm gonna cover Flex."}, {"start": 7.5, "end": 18.8, "text": " Basically a neural network library built on top of Jax and you probably are already aware of frameworks such as TensorFlow, Keras, PyTorch."}, {"start": 18.8, "end": 23.3, "text": " So then like why is there a need for yet another deep learning framework?"}, {"start": 23.3, "end": 32.0, "text": " And the answer would be like twofold probably. One of the most important things I think is the performance component."}, {"start": 32.0, "end": 39.0, "text": " We're gonna later see some numbers on certain benchmarks and Flex kinda was the best, the fastest."}, {"start": 39.0, "end": 43.400000000000006, "text": " I.e. in general like Jax is faster than the other alternatives."}, {"start": 43.4, "end": 53.8, "text": " And B. you get better reproducibility because of the way that Jax and thus Flex and some other frameworks such as Haiku which is also built on top of Jax."}, {"start": 53.8, "end": 61.599999999999994, "text": " How they handle state. And basically because they're using the functional programming paradigm, they handle state externally."}, {"start": 61.599999999999994, "end": 71.4, "text": " They deal with pure functions and so basically even the random number generators handle state externally and all of that leads to better reproducibility."}, {"start": 71.4, "end": 76.2, "text": " So those are some whys behind learning Flex and yeah."}, {"start": 76.2, "end": 86.4, "text": " So before you start watching this video you should have some knowledge of Jax already and if you don't know what Jax is, if you've never done any Jax,"}, {"start": 86.4, "end": 91.7, "text": " I strongly encourage you to check out my Machine Learning with Jax series of videos."}, {"start": 91.7, "end": 96.9, "text": " You can see here the first one from zero to hero which is going to teach you the basics."}, {"start": 96.9, "end": 105.4, "text": " Then we have from hero to hero pro plus which is going to go beyond and teach you like a nitty-gritty details of Jax in general."}, {"start": 105.4, "end": 110.9, "text": " And finally I have a video where I'm covering where I'm coding a neural network from scratch in Pure Jax."}, {"start": 110.9, "end": 117.9, "text": " So yeah, if you don't know anything about Jax, maybe go and check out at least the tutorial number one before you watch this one."}, {"start": 117.9, "end": 122.7, "text": " Okay, now you maybe notice something different in this video."}, {"start": 122.7, "end": 134.7, "text": " I'm not using colab anymore and what this is is deep note and I first want to give up like a huge thank you to the deep note team for deciding to sponsor this video."}, {"start": 134.7, "end": 145.1, "text": " So I want to briefly mention that like I have so far decided to decline like multiple sponsorship opportunities and the reason is I really want to promote something that I truly believe in myself."}, {"start": 145.1, "end": 155.7, "text": " So that means I should either use the product like on a daily or on a weekly basis myself or I think that the product has a lot of value and deep note definitely checks the marks."}, {"start": 155.7, "end": 157.4, "text": " And so yeah, here I am."}, {"start": 157.4, "end": 159.4, "text": " So what's the thing with deep note?"}, {"start": 159.4, "end": 164.0, "text": " If you take a look at it, it looks like a regular Jupyter notebook like your 
colab notebook."}, {"start": 164.0, "end": 165.0, "text": " So what's the difference?"}, {"start": 165.0, "end": 167.79999999999998, "text": " So you can see here like you've got your markdown cells."}, {"start": 167.79999999999998, "end": 171.6, "text": " You can click shift enter and render the markdown cell."}, {"start": 171.6, "end": 173.29999999999998, "text": " You have the code cells."}, {"start": 173.3, "end": 178.10000000000002, "text": " Well, the one of the most important things about deep note is the collaborative feature."}, {"start": 178.10000000000002, "end": 191.60000000000002, "text": " So that basically means you can share this notebook with multiple collaborators with a team and you can real time collaborate on the notebook and you can basically see other people like typing real time commenting, etc."}, {"start": 191.60000000000002, "end": 193.20000000000002, "text": " It's a very good collaborative tool."}, {"start": 193.20000000000002, "end": 198.9, "text": " And unfortunately, I cannot show you right now those features because I'm so here."}, {"start": 198.9, "end": 204.6, "text": " Secondly, it integrates which bunch of different data sources you can check out those yourself."}, {"start": 204.6, "end": 212.3, "text": " But like drive, Amazon s3, Redshift, BigQuery, a bunch of other applications with GitHub as well."}, {"start": 212.3, "end": 217.9, "text": " They also provide templates which can get you started like you can maybe pick this A B testing."}, {"start": 217.9, "end": 219.3, "text": " Let me show you this one."}, {"start": 219.3, "end": 224.9, "text": " So it's got a very cool widgets, cool dashboards, and you can even create deep note apps."}, {"start": 224.9, "end": 228.20000000000002, "text": " So let me show you briefly how this thing looks like."}, {"start": 228.2, "end": 230.39999999999998, "text": " You can see there is like a lot of these widgets."}, {"start": 230.39999999999998, "end": 233.2, "text": " You can kind of create these interactive plots."}, {"start": 233.2, "end": 238.29999999999998, "text": " And if you go and hit this publishing editor, you can even go and create a dashboard."}, {"start": 238.29999999999998, "end": 253.2, "text": " And so that means no code, which means you can kind of can basically use this as a data science notebook and present the results to your like business partners or whoever is not tech savvy, but wants to see the results and get some insights."}, {"start": 253.2, "end": 254.79999999999998, "text": " So yeah, that's very cool."}, {"start": 254.79999999999998, "end": 258.0, "text": " So let me see what else I think I've covered."}, {"start": 258.0, "end": 260.9, "text": " The main things you can check it out in your own pace."}, {"start": 260.9, "end": 262.2, "text": " You've even got terminals."}, {"start": 262.2, "end": 266.7, "text": " You can kind of edit the environment if for whatever reason you need to do that."}, {"start": 266.7, "end": 269.6, "text": " You can you can track the common history of the whole team."}, {"start": 269.6, "end": 273.1, "text": " In general, the whole history of what happened is fairly nicely tracked."}, {"start": 273.1, "end": 277.0, "text": " So and finally, let me get back to my notebook."}, {"start": 277.0, "end": 279.9, "text": " You can see I've installed some of the packages here."}, {"start": 279.9, "end": 280.8, "text": " You don't have to do that."}, {"start": 280.8, "end": 283.6, "text": " You can create a fully custom Docker container."}, {"start": 283.6, "end": 
285.1, "text": " They have a support for that."}, {"start": 285.1, "end": 289.6, "text": " I even created some of the Docker images myself here, but I won't be using that."}, {"start": 289.6, "end": 292.70000000000005, "text": " I'm going to be using this 3.7, which is the same one."}, {"start": 292.70000000000005, "end": 297.5, "text": " I recommend you use if you want to start with this specific notebook."}, {"start": 297.5, "end": 302.1, "text": " And finally, I've got a referral link for you guys down in the video description."}, {"start": 302.1, "end": 312.6, "text": " If you want to support the channel and also if you want to get 20 additional hours of these pro machines on deep node, do click on the referral link and do check out deep node."}, {"start": 312.6, "end": 314.8, "text": " So so yeah, let's get started."}, {"start": 314.8, "end": 333.90000000000003, "text": " The point of this video is to learn to get some basic understanding of flex so that you can build train your own neural networks and then you can take it from there and start reading the documentation, which is probably the best source of information right now because the community is not as strong as like say pie torts."}, {"start": 333.90000000000003, "end": 341.2, "text": " Even though it's growing and it's being adopted by hugging face, they even had a community week where they were coded model in flex."}, {"start": 341.2, "end": 345.5, "text": " I'm going to link some of those in the in the video description so you can check those out."}, {"start": 345.5, "end": 348.2, "text": " But yeah, let's get started here."}, {"start": 348.2, "end": 356.8, "text": " Basically, I'm going to import here some some some like Jax and certain libraries, which is non-pie from Jax."}, {"start": 356.8, "end": 362.2, "text": " So Jax, Numbay, and then we're going to import the flex, which is the library we're going to learn today."}, {"start": 362.2, "end": 376.8, "text": " And it's a it's a library developed by Google Research, in particular, Google Brain, and it's a the name itself is a contraction between like you have flexibility, you have Jax, and then you kind of get flex very creative."}, {"start": 376.8, "end": 378.7, "text": " I'm going to import some functions here."}, {"start": 378.7, "end": 384.4, "text": " They have this leaning API, which is I think the second iteration of their API."}, {"start": 384.4, "end": 393.59999999999997, "text": " And you can see that the convention is to import the API itself as an N, which you may be familiar with if you come from the pie torch background."}, {"start": 393.59999999999997, "end": 396.59999999999997, "text": " It's a fairly common notation."}, {"start": 396.59999999999997, "end": 403.9, "text": " Okay, train state is just a wrapper function that's going to help us make things a bit cleaner."}, {"start": 403.9, "end": 405.4, "text": " We're going to see that in a moment."}, {"start": 405.4, "end": 406.79999999999995, "text": " Yeah, I think I mentioned Haiku."}, {"start": 406.79999999999995, "end": 412.0, "text": " It's it's also like a library built on top of Jax that comes from DeepMind."}, {"start": 412.0, "end": 415.8, "text": " And the only reason I have it here is for comparison purposes."}, {"start": 415.8, "end": 419.6, "text": " So we won't be learning Haiku in this video, although I'll cover it in the next one."}, {"start": 419.6, "end": 421.8, "text": " So stay tuned for that."}, {"start": 421.8, "end": 429.1, "text": " I'm importing Optex here, which is like another 
library built on top of Jax for Optimizers."}, {"start": 429.1, "end": 432.5, "text": " And the rationale here is the following."}, {"start": 432.5, "end": 439.2, "text": " You want to have in this Jax ecosystem, every one of these libraries takes care of a specific vertical."}, {"start": 439.2, "end": 447.4, "text": " So you have Optex for Optimizers, you've got Jax, you've got Flex and Haiku for like neural network building and construction."}, {"start": 447.4, "end": 449.9, "text": " You've got checks for testing the networks."}, {"start": 449.9, "end": 454.7, "text": " You've got libraries for reinforcement learning, libraries for graph neural networks, etc."}, {"start": 454.7, "end": 460.9, "text": " So that's like kind of the paradigm, the mindset of the Jax ecosystem, so to speak."}, {"start": 460.9, "end": 468.59999999999997, "text": " Finally, I'm going to use a pie torch data loading functionalities here, because if you recall from my Jax tutorials,"}, {"start": 468.6, "end": 471.1, "text": " they did not want to reinvent the wheel here."}, {"start": 471.1, "end": 475.0, "text": " They think that data loading functionality is already good."}, {"start": 475.0, "end": 480.0, "text": " And so they're just kind of utilizing either pie torch or tensorflow data sets, etc."}, {"start": 480.0, "end": 485.40000000000003, "text": " I'm going to use a pie torch because that's what I'm the most familiar with."}, {"start": 485.40000000000003, "end": 490.20000000000005, "text": " Finally, I'm importing some functions here and some common modules."}, {"start": 490.20000000000005, "end": 494.40000000000003, "text": " I already ran these cells because, yeah, just to save some time."}, {"start": 494.4, "end": 500.4, "text": " But now starting from here, we're going to run every single cell and see what is going on."}, {"start": 500.4, "end": 511.29999999999995, "text": " So let's start by building the most simple model, which is like just a simple, this is like a feed forward layer."}, {"start": 511.29999999999995, "end": 516.9, "text": " You can also think about it as a linear regression because we don't have any nonlinear activation functions."}, {"start": 516.9, "end": 519.9, "text": " And so this is just a linear regression model."}, {"start": 519.9, "end": 521.6999999999999, "text": " So let me run this."}, {"start": 521.7, "end": 525.1, "text": " So you just kind of click shift enter the same as in Colab."}, {"start": 525.1, "end": 529.1, "text": " And the notation is dense instead of linear."}, {"start": 529.1, "end": 538.1, "text": " And you can see that the dense class inherits from this module class, which is the same analogous to pie torch."}, {"start": 538.1, "end": 545.5, "text": " And the idea is even though this is a functional programming like environment, like in its core,"}, {"start": 545.5, "end": 551.8, "text": " they tried and built like an objective oriented paradigm wrapper around it. 
And that's what Flex is."}, {"start": 551.8, "end": 553.9, "text": " And that's what Haiku is."}, {"start": 553.9, "end": 563.7, "text": " So that's you as a programmer are kind of programming in your usual using your usual mental model like the object oriented programming."}, {"start": 563.7, "end": 565.0, "text": " OK, let's continue here."}, {"start": 565.0, "end": 568.5, "text": " So let's see how we can do inference using this model."}, {"start": 568.5, "end": 571.6, "text": " And the answer would be two steps."}, {"start": 571.6, "end": 574.9, "text": " We've got this in that function and we've got the apply function."}, {"start": 574.9, "end": 580.5, "text": " The point of init is to initialize the initial parameters because as you recall,"}, {"start": 580.5, "end": 587.3, "text": " Jax and thus Flex by transitivity property is handling like state externally."}, {"start": 587.3, "end": 589.5, "text": " So that's why we need init and apply."}, {"start": 589.5, "end": 591.1, "text": " And let's see what we do here."}, {"start": 591.1, "end": 594.8, "text": " Basically, hopefully you can see the text reasonably well."}, {"start": 594.8, "end": 598.1, "text": " I'm going to zoom in a little bit more for just in case."}, {"start": 598.1, "end": 601.3, "text": " But the UI is a bit uglier."}, {"start": 601.3, "end": 604.3, "text": " Hopefully you'll see the text, which is the point here."}, {"start": 604.3, "end": 606.5, "text": " OK, we have some random seed here."}, {"start": 606.5, "end": 608.3, "text": " Your regular Jax thing here."}, {"start": 608.3, "end": 614.5, "text": " We are externally handling the random state of the random number generator and we split it into two keys."}, {"start": 614.5, "end": 624.3, "text": " Then we just instantiate a random like data point here using the key one is just a 10 dimensional vector."}, {"start": 624.3, "end": 631.1999999999999, "text": " And then what we do is we take the model, we call this init function and we pass the key."}, {"start": 631.2, "end": 635.9000000000001, "text": " And the key is used obviously because initialization itself is random process."}, {"start": 635.9000000000001, "end": 639.5, "text": " So we want to have key for reproducibility reasons."}, {"start": 639.5, "end": 647.5, "text": " We pass the input X and basically we get back the parameters and the output in the case of using this function here."}, {"start": 647.5, "end": 650.3000000000001, "text": " In it with output, not that it's that important."}, {"start": 650.3000000000001, "end": 651.6, "text": " I'm going to later just using it."}, {"start": 651.6, "end": 653.5, "text": " I'm going to show in a second."}, {"start": 653.5, "end": 658.7, "text": " But yeah, then you can just we can visualize the shape of the parameters and see that it"}, {"start": 658.7, "end": 662.5, "text": " matches our expectation for a simple linear model."}, {"start": 662.5, "end": 668.5, "text": " So let me run the cell and you can see here that the output is exactly what we expect."}, {"start": 668.5, "end": 676.9000000000001, "text": " Bias term has is the dimensionality is five because the as we call here, we have five output neurons"}, {"start": 676.9000000000001, "end": 681.1, "text": " and we have 10 to five for the weight for the kernel."}, {"start": 681.1, "end": 683.7, "text": " And you can always a couple of things here."}, {"start": 683.7, "end": 690.9000000000001, "text": " So first is the automatic shape inference because I haven't specified 10 anywhere here."}, {"start": 
690.9000000000001, "end": 693.5, "text": " I've just specified the output dimensionality."}, {"start": 693.5, "end": 698.6, "text": " And so the shape is automatically inferred through this input variable X."}, {"start": 698.6, "end": 700.1, "text": " So that's what happened there."}, {"start": 700.1, "end": 706.1, "text": " The second thing you can notice here is this immutable structure because as you recall again,"}, {"start": 706.1, "end": 708.5, "text": " JAX loves dealing with immutable objects."}, {"start": 708.5, "end": 715.2, "text": " So what Flax has done here is just made it easier for JAX to not introduce any bugs here by"}, {"start": 715.2, "end": 717.6, "text": " returning immutable structures."}, {"start": 717.6, "end": 720.4, "text": " So that's why we have the frozen dict here."}, {"start": 720.4, "end": 724.3, "text": " Finally, I'm using init_with_output here, but you usually use just init."}, {"start": 724.3, "end": 725.9, "text": " You don't care about the output initially."}, {"start": 725.9, "end": 731.2, "text": " So I can just run init here and then I'm going to delete the Y here."}, {"start": 731.2, "end": 734.1, "text": " I'm going to comment this out and we'll get the same result."}, {"start": 734.1, "end": 738.4, "text": " So yeah, usually you'll see the init function again."}, {"start": 738.4, "end": 744.6, "text": " To recap here, what this thing does here is it's iterating through the leaves of a py"}, {"start": 744.6, "end": 749.6, "text": " tree, and a pytree object, which is a term from JAX again."}, {"start": 749.6, "end": 751.1, "text": " And basically you're iterating."}, {"start": 751.1, "end": 755.9, "text": " You're kind of traversing the tree and for every single leaf you're applying this function"}, {"start": 755.9, "end": 762.6999999999999, "text": " here, which means you'll be taking like, let's say you have bias and you have some like a"}, {"start": 762.6999999999999, "end": 764.4, "text": " JAX NumPy array in there."}, {"start": 764.4, "end": 765.3, "text": " You're going to take it."}, {"start": 765.3, "end": 767.3, "text": " You're going to take its shape and you're going to change it."}, {"start": 767.3, "end": 771.4, "text": " You're basically going to map the tree of parameters into a tree of shapes."}, {"start": 771.4, "end": 772.9, "text": " So that's what you get here."}, {"start": 772.9, "end": 775.5, "text": " So yeah, just a small, small recap there."}, {"start": 775.5, "end": 778.5, "text": " Okay, so that should be fairly familiar."}, {"start": 778.5, "end": 783.4, "text": " So the new thing is the init function and the second thing is this apply function."}, {"start": 783.4, "end": 784.9, "text": " So let's see what happens here."}, {"start": 784.9, "end": 789.6999999999999, "text": " So we have the model, we call apply and we pass the parameters and the input X."}, {"start": 789.6999999999999, "end": 795.5999999999999, "text": " So this is the only thing where you have to kind of adjust your mental"}, {"start": 795.6, "end": 801.6, "text": " model to this new paradigm and it may be very weird initially, but like believe me, I started"}, {"start": 801.6, "end": 806.8000000000001, "text": " playing with JAX in November last year, 2021, and I'm already feeling very, very comfortable"}, {"start": 806.8000000000001, "end": 808.6, "text": " with it by now."}, {"start": 808.6, "end": 814.5, "text": " I feel much better than when I first encountered this kind of signature."}, {"start": 814.5, "end": 
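To make the init/apply pattern just described concrete, here is a minimal sketch, assuming flax.linen is imported as nn; the printed shapes should match the (10, 5) kernel and (5,) bias discussed above.

```python
import jax
from flax import linen as nn

model = nn.Dense(features=5)        # only the output size is given; the input size is inferred

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(key1, (10,))  # a single 10-dimensional data point

# init handles the state externally: it returns an immutable FrozenDict of parameters.
params = model.init(key2, x)        # model.init_with_output(key2, x) would also return y
print(jax.tree_map(lambda leaf: leaf.shape, params))
# -> {'params': {'bias': (5,), 'kernel': (10, 5)}}

y = model.apply(params, x)          # forward pass; calling model(x) directly raises an exception
```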
820.2, "text": " So you call apply you get the output and we can run this and get the results as expected"}, {"start": 820.2, "end": 822.0, "text": " five outputs here."}, {"start": 822.0, "end": 826.0, "text": " So again, you cannot call the model like this."}, {"start": 826.0, "end": 830.3, "text": " So this is what you're used to in other frameworks, but here this is not going to work."}, {"start": 830.3, "end": 831.3, "text": " It's going to throw an exception."}, {"start": 831.3, "end": 836.3, "text": " We're going to catch it and print something can call compact methods on unbound modules."}, {"start": 836.3, "end": 839.8, "text": " So the details themselves are not that important here."}, {"start": 839.8, "end": 843.5, "text": " Okay, so that's the very, very basics of flex."}, {"start": 843.5, "end": 849.4, "text": " Now I want to do a small coding exercise here and contrast flex with Haiku and in my personal"}, {"start": 849.4, "end": 853.3, "text": " opinion Haiku is a bit cleaner than flex."}, {"start": 853.3, "end": 858.5, "text": " I like it better me personally, although it may happen the flex is more powerful because"}, {"start": 858.5, "end": 864.56, "text": " of the additional like notation differences you'll see soon see."}, {"start": 864.56, "end": 870.24, "text": " So yeah, I'm not very strongly opinionated about this, but so far for me it seems Haiku"}, {"start": 870.24, "end": 874.4399999999999, "text": " is even nicer than flex even though flex is way more popular in the community."}, {"start": 874.4399999999999, "end": 876.4, "text": " I mentioned hugging face community week."}, {"start": 876.4, "end": 881.48, "text": " I mentioned they have even have like 5,000 plus models on hugging face hub, which is"}, {"start": 881.48, "end": 886.68, "text": " fairly, fairly nice amount of models for a relatively new framework compared to TensorFlow"}, {"start": 886.68, "end": 889.04, "text": " or like PyTorch."}, {"start": 889.04, "end": 892.6, "text": " So let me show you how you would do the same thing in Haiku now."}, {"start": 892.6, "end": 894.16, "text": " It's going to be very brief."}, {"start": 894.16, "end": 896.1999999999999, "text": " So let's create a model."}, {"start": 896.1999999999999, "end": 904.86, "text": " So Haiku uses this HK as a, as a, as a like a notation then linear and then five."}, {"start": 904.86, "end": 908.16, "text": " Now if I were to run this, it's going to fail."}, {"start": 908.16, "end": 909.52, "text": " So let me try and run this."}, {"start": 909.52, "end": 916.1800000000001, "text": " And as you can see here, all of these modules must be initialized inside an HK Haiku transform"}, {"start": 916.1800000000001, "end": 917.3000000000001, "text": " function."}, {"start": 917.3000000000001, "end": 922.16, "text": " So what the trick is Haiku is very explicit about when you want to, you take a function"}, {"start": 922.16, "end": 926.52, "text": " like linear and you, in order to have the init and apply."}, {"start": 926.52, "end": 928.36, "text": " So it's using the same notation as flex."}, {"start": 928.36, "end": 930.6800000000001, "text": " You have to pass it into this HQ transform function."}, {"start": 930.6800000000001, "end": 932.44, "text": " So let me do that."}, {"start": 932.44, "end": 933.72, "text": " Transform."}, {"start": 933.72, "end": 939.8000000000001, "text": " And what we do here, we create a Lambda anonymous function here and we just simply call this,"}, {"start": 939.8000000000001, "end": 942.8000000000001, "text": 
" this, this, this linear model."}, {"start": 942.8000000000001, "end": 943.8000000000001, "text": " And that's it."}, {"start": 943.8000000000001, "end": 944.84, "text": " Now this will work."}, {"start": 944.84, "end": 947.44, "text": " So if I were to run this, it will not fail."}, {"start": 947.44, "end": 950.1, "text": " But now let me show you what else happens."}, {"start": 950.1, "end": 955.6800000000001, "text": " So let me just copy paste these parts here because they remain the same, the seeds and"}, {"start": 955.6800000000001, "end": 958.76, "text": " the inputs will remain the same."}, {"start": 958.76, "end": 961.6, "text": " Now let me initialize the parameters."}, {"start": 961.6, "end": 968.16, "text": " So we get model init again, we pass the key, we pass the X and we get parameters."}, {"start": 968.16, "end": 969.16, "text": " So so far so good."}, {"start": 969.16, "end": 972.6, "text": " It looks very, very similar to two flex."}, {"start": 972.6, "end": 978.16, "text": " And finally, let's get the output by calling model apply."}, {"start": 978.16, "end": 981.44, "text": " We pass the parameters here."}, {"start": 981.44, "end": 987.2, "text": " We pass none because we don't have any stochasticity in the forward pass."}, {"start": 987.2, "end": 992.6800000000001, "text": " And for now it may be, it may look as high because even more complex and a kind of uglier"}, {"start": 992.6800000000001, "end": 997.0400000000001, "text": " in the notation compared to two, two, two flex, but it's not the case."}, {"start": 997.0400000000001, "end": 999.0200000000001, "text": " You'll see that in the next video."}, {"start": 999.0200000000001, "end": 1000.6, "text": " So you'll do it like this."}, {"start": 1000.6, "end": 1002.4000000000001, "text": " And by the way, I made a small mistake here."}, {"start": 1002.4000000000001, "end": 1003.9200000000001, "text": " You don't want to reuse keys."}, {"start": 1003.9200000000001, "end": 1005.88, "text": " So this should be key two."}, {"start": 1005.88, "end": 1009.76, "text": " So I should be able to run this right now and everything should work."}, {"start": 1009.76, "end": 1012.44, "text": " So this is, this is high Q very similar."}, {"start": 1012.44, "end": 1017.0, "text": " You just have this transform function and, and yeah, so that's cool."}, {"start": 1017.0, "end": 1018.76, "text": " I had the solutions here."}, {"start": 1018.76, "end": 1022.6, "text": " So let me kind of show this code and hopefully this looks the same."}, {"start": 1022.6, "end": 1027.96, "text": " So yeah, we have the model, we form the data here, we call in it, we call apply and we"}, {"start": 1027.96, "end": 1028.96, "text": " print."}, {"start": 1028.96, "end": 1032.0, "text": " And again, we have this, I forgot to do this."}, {"start": 1032.0, "end": 1038.08, "text": " Basically it just shows that high Q follows the same paradigm as flex."}, {"start": 1038.08, "end": 1040.5, "text": " It's inheriting from this module class as well."}, {"start": 1040.5, "end": 1044.1, "text": " So let me just hide this code again."}, {"start": 1044.1, "end": 1046.56, "text": " So hide the code and that's it."}, {"start": 1046.56, "end": 1047.56, "text": " Cool."}, {"start": 1047.56, "end": 1050.36, "text": " That was section number one."}, {"start": 1050.36, "end": 1055.6799999999998, "text": " Hopefully you're not too overwhelmed because as I said, you will have to kind of get accustomed"}, {"start": 1055.6799999999998, "end": 1059.24, "text": " with the, with 
this functional programming paradigm."}, {"start": 1059.24, "end": 1062.3999999999999, "text": " But yeah, it gets easier after, after, after a quick."}, {"start": 1062.3999999999999, "end": 1069.12, "text": " So as I said, flex and high Q offer a performance boost."}, {"start": 1069.12, "end": 1071.96, "text": " So here's some numbers from the hugging face team."}, {"start": 1071.96, "end": 1078.18, "text": " You can see here using a single GPU and using a flex you get compared to PyTorch, you get"}, {"start": 1078.18, "end": 1079.52, "text": " certain improvements."}, {"start": 1079.52, "end": 1083.6000000000001, "text": " Like in all of these, I think these are, all of these are NLP tasks."}, {"start": 1083.6000000000001, "end": 1086.58, "text": " So you have Cola, you have the natural language inference task."}, {"start": 1086.58, "end": 1091.14, "text": " You have, yeah, semantic, I guess, yeah, whatever."}, {"start": 1091.14, "end": 1095.56, "text": " So in any case, you get a fairly nice improvements here speed wise."}, {"start": 1095.56, "end": 1100.96, "text": " And in total, you can see that here you've kind of saved more than an hour using a single"}, {"start": 1100.96, "end": 1106.48, "text": " GPU, which is not a small thing, especially when you scale these two big language, large"}, {"start": 1106.48, "end": 1107.48, "text": " language models."}, {"start": 1107.48, "end": 1108.48, "text": " Okay."}, {"start": 1108.48, "end": 1113.32, "text": " Let's get going and let's introduce a single, a simple toy example where we're going to"}, {"start": 1113.32, "end": 1119.64, "text": " just form like a two dimensional data set and try and train a simple linear regression"}, {"start": 1119.64, "end": 1126.6000000000001, "text": " model using the NN dense model, which is an implementation and instantiation of the linear"}, {"start": 1126.6000000000001, "end": 1127.88, "text": " regression model."}, {"start": 1127.88, "end": 1131.4, "text": " So I'm going to have 150 data points."}, {"start": 1131.4, "end": 1133.4, "text": " The X dimension is two."}, {"start": 1133.4, "end": 1140.3600000000001, "text": " The output is going to be, we are regressing like a scalar just to make things a plottable,"}, {"start": 1140.3600000000001, "end": 1145.0400000000002, "text": " if that's even a word, because yeah, if we had multiple dimensions here, I wouldn't be"}, {"start": 1145.0400000000002, "end": 1146.0400000000002, "text": " able to plot."}, {"start": 1146.0400000000002, "end": 1147.92, "text": " I'd have to do some dimensionality reduction."}, {"start": 1147.92, "end": 1149.5200000000002, "text": " So I'm going to skip that."}, {"start": 1149.5200000000002, "end": 1156.72, "text": " Again, we're going to form some keys here being explicit about the random state as usual."}, {"start": 1156.72, "end": 1161.76, "text": " We are going to form the ground truth weights and biases here."}, {"start": 1161.76, "end": 1167.4, "text": " So basically we'll be trying to learn this weight and this bias later during the training."}, {"start": 1167.4, "end": 1171.04, "text": " And now what we do here is we form a structure."}, {"start": 1171.04, "end": 1174.52, "text": " That's, that's something that flex is, is used to."}, {"start": 1174.52, "end": 1180.0, "text": " So you have the freeze is supposed to create a frozen dict, a frozen dictionary out of"}, {"start": 1180.0, "end": 1183.2, "text": " the normal Python native dictionary here."}, {"start": 1183.2, "end": 1188.52, "text": " As you can recall from, 
from this section, we had frozen dict here."}, {"start": 1188.52, "end": 1190.8400000000001, "text": " So let me get back here."}, {"start": 1190.8400000000001, "end": 1195.8600000000001, "text": " And the structure is such that we have this params collection as flex calls it."}, {"start": 1195.8600000000001, "end": 1200.0, "text": " And then we have bias B and kernel W and that's it."}, {"start": 1200.0, "end": 1204.24, "text": " Now we, we've used this ground truth model and generate the data."}, {"start": 1204.24, "end": 1210.44, "text": " So again, we split the key, which we created here, we split it and we get the key that"}, {"start": 1210.44, "end": 1214.16, "text": " we're going to use to generate the axis, the domain of our data."}, {"start": 1214.16, "end": 1219.0800000000002, "text": " So we have 150 samples of, so 150 to two doubles."}, {"start": 1219.0800000000002, "end": 1222.92, "text": " So that means just 150 two dimensional data points."}, {"start": 1222.92, "end": 1229.16, "text": " And then we're going to use it, use the ground truth model and regress the, the YS here."}, {"start": 1229.16, "end": 1233.68, "text": " And then we're just going to do a superposition of noise here using this amplitude."}, {"start": 1233.68, "end": 1234.68, "text": " That's pretty much it."}, {"start": 1234.68, "end": 1236.52, "text": " Let me run this now."}, {"start": 1236.52, "end": 1240.48, "text": " And you can see the shapes, 152 and 151."}, {"start": 1240.48, "end": 1244.16, "text": " And now let's plot the, the, the data set we just created."}, {"start": 1244.16, "end": 1245.24, "text": " And you can see it here."}, {"start": 1245.24, "end": 1250.2, "text": " Now I was lazy and so I didn't try and create an interactive plot here."}, {"start": 1250.2, "end": 1253.78, "text": " And so let's be smart about it."}, {"start": 1253.78, "end": 1256.24, "text": " And let's try and visualize where this data lies in."}, {"start": 1256.24, "end": 1258.08, "text": " And you probably already know the answer."}, {"start": 1258.08, "end": 1261.6399999999999, "text": " If not, let's do a simple exercise and try to approach it."}, {"start": 1261.6399999999999, "end": 1265.68, "text": " So first one is we know how we created, how we generated the data."}, {"start": 1265.68, "end": 1274.8, "text": " So basically what we've done here is we had Y is equal to W X plus some bias."}, {"start": 1274.8, "end": 1284.16, "text": " And because X is two dimensional, this basically turns into W one X one plus W two X two plus"}, {"start": 1284.16, "end": 1291.8400000000001, "text": " B. 
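A sketch of the data-generation step described above; the noise amplitude and the exact key handling are assumptions rather than values copied from the notebook.

```python
import jax
from flax import linen as nn
from flax.core import freeze

n_samples, x_dim, y_dim = 150, 2, 1
noise_amplitude = 0.1                                   # assumed value

key = jax.random.PRNGKey(0)
key, w_key, b_key, x_key, noise_key = jax.random.split(key, 5)

# Ground-truth parameters, wrapped into the frozen 'params' collection Flax expects.
W = jax.random.normal(w_key, (x_dim, y_dim))
b = jax.random.normal(b_key, (y_dim,))
true_params = freeze({'params': {'kernel': W, 'bias': b}})

# Generate the dataset with the ground-truth model and superimpose some noise on top.
model = nn.Dense(features=y_dim)
xs = jax.random.normal(x_key, (n_samples, x_dim))
ys = model.apply(true_params, xs) + noise_amplitude * jax.random.normal(noise_key, (n_samples, y_dim))
print(xs.shape, ys.shape)                               # (150, 2) (150, 1)
```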
And you can see that this is basically your, your analytical formulation of a plane,"}, {"start": 1291.84, "end": 1297.08, "text": " of a 2D plane embedded into 3D space."}, {"start": 1297.08, "end": 1299.12, "text": " So that would be one approach."}, {"start": 1299.12, "end": 1301.8799999999999, "text": " The second approach is data driven, and by data driven"}, {"start": 1301.8799999999999, "end": 1306.6799999999998, "text": " I mean, what I'm going to do here is increase from 150 to 1500."}, {"start": 1306.6799999999998, "end": 1309.9199999999998, "text": " Okay, let's run this cell now and see what we get."}, {"start": 1309.9199999999998, "end": 1314.76, "text": " So we get 1500 data points and let's run this again."}, {"start": 1314.76, "end": 1319.8, "text": " And hopefully now you can see that it lies, it lies on the, on the 2D plane here."}, {"start": 1319.8, "end": 1323.1599999999999, "text": " And yeah, the reason we added noise, hopefully that's, that's clear enough is to kind of"}, {"start": 1323.1599999999999, "end": 1328.12, "text": " hide the actual, the ground truth, the actual model that, that lies in the background that"}, {"start": 1328.12, "end": 1329.68, "text": " generated this data."}, {"start": 1329.68, "end": 1335.6399999999999, "text": " And now we're going to try next up to, to kind of regress and infer the model"}, {"start": 1335.6399999999999, "end": 1337.24, "text": " that generated the data."}, {"start": 1337.24, "end": 1340.68, "text": " So first let's create this, this loss function."}, {"start": 1340.68, "end": 1345.54, "text": " This make MSE is just going to return this MSE loss function, which is a jitted version"}, {"start": 1345.54, "end": 1346.6, "text": " of the MSE."}, {"start": 1346.6, "end": 1349.56, "text": " So just kind of trying to optimize things a little bit here."}, {"start": 1349.56, "end": 1354.1599999999999, "text": " So squared error itself takes a single data point X and Y."}, {"start": 1354.1599999999999, "end": 1356.9199999999998, "text": " We do inference, we just do a prediction."}, {"start": 1356.9199999999998, "end": 1361.6799999999998, "text": " We pass in the single data point, we get a prediction and return the mean squared error"}, {"start": 1361.6799999999998, "end": 1362.6799999999998, "text": " here."}, {"start": 1362.6799999999998, "end": 1369.48, "text": " You may be confused by the inner function here and inner is used just because so as"}, {"start": 1369.48, "end": 1377.36, "text": " to be able to generalize to cases when like Y was a multi-dimensional vector and not a"}, {"start": 1377.36, "end": 1379.04, "text": " scalar like it is right now."}, {"start": 1379.04, "end": 1384.1599999999999, "text": " So this is just a generalization of our mean squared error loss."}, {"start": 1384.1599999999999, "end": 1387.24, "text": " Finally we take the squared error and we call vmap."}, {"start": 1387.24, "end": 1392.1399999999999, "text": " So that's the transform from JAX that makes this thing run on a batch of data."}, {"start": 1392.1399999999999, "end": 1397.6399999999999, "text": " So we pass now like a batch of our data, like the, let me, whoops, let me actually reduce"}, {"start": 1397.6399999999999, "end": 1398.8799999999999, "text": " from thousand five hundred."}, {"start": 1398.8799999999999, "end": 1401.1399999999999, "text": " No, I'm going to leave it at thousand five hundred."}, {"start": 1401.1399999999999, "end": 1402.78, "text": " It's still like a small data set."}, {"start": 1402.78, "end": 1404.94, "text": " So we're going to run this on the batch of data."}, {"start": 1404.94, "end": 1410.14, "text": " So this will give us like a thousand five hundred by one vector."}, {"start": 1410.14, "end": 1414.04, "text": " And then we're going to do a mean over axis equals zero."}, {"start": 1414.04, "end": 1418.8, "text": " So that means we're going to kind of average out all of those, of those losses for"}, {"start": 1418.8, "end": 1422.6000000000001, "text": " all of these singular data points to get the final loss."}, {"start": 1422.6000000000001, "end": 1423.92, "text": " And that's pretty much it."}, {"start": 1423.92, "end": 1426.28, "text": " That's how we get our MSE loss here."}, {"start": 1426.28, "end": 1431.6000000000001, "text": " And finally, I kind of wrap it into this value and grad transform function from JAX to"}, {"start": 1431.6000000000001, "end": 1433.72, "text": " get the, yeah."}, {"start": 1433.72, "end": 1437.08, "text": " That's how we calculate gradients and you should be familiar with this already."}, {"start": 1437.08, "end": 1440.72, "text": " So now this will be our model."}, {"start": 1440.72, "end": 1445.48, "text": " So we're kind of cheating here a little bit because this is the exact same model that"}, {"start": 1445.48, "end": 1446.52, "text": " generated the data."}, {"start": 1446.52, "end": 1451.0, "text": " So we, we usually wouldn't have this advantage of knowing the model, but here for the sake"}, {"start": 1451.0, "end": 1456.68, "text": " of just demoing this, I'm going to suppose we, I'm going to assume we know the exact"}, {"start": 1456.68, "end": 1457.68, "text": " model."}, {"start": 1457.68, "end": 1460.96, "text": " So we're going to do the usual stuff here, initialize the model."}, {"start": 1460.96, "end": 1466.52, "text": " We get initial parameters and then we're going to do a simple training here, a training loop."}, {"start": 1466.52, "end": 1470.16, "text": " We're going to have 20 epochs, just an arbitrary number."}, {"start": 1470.16, "end": 1474.0, "text": " You can tweak it, but like it's enough for this toy problem."}, {"start": 1474.0, "end": 1479.48, "text": " We call the, we just pass the params and we get the gradients and we get"}, {"start": 1479.48, "end": 1480.48, "text": " the loss."}, {"start": 1480.48, "end": 1486.22, "text": " So this may be like dark magic a little bit, but the trick is this value and grad"}, {"start": 1486.22, "end": 1492.96, "text": " function already contains the MSE loss, which already contains our data because we had this"}, {"start": 1492.96, "end": 1500.38, "text": " wrapper function make MSE loss and the data, xs and ys, is already stored internally."}, {"start": 1500.38, "end": 1503.3, "text": " So that's why we don't have to even pass anything here."}, {"start": 1503.3, "end": 1508.76, "text": " So what this does is for the current set of parameters and for the dataset,"}, {"start": 1508.76, "end": 1515.1200000000001, "text": " which is static, it figures out the current gradients and the current loss for those parameters."}, {"start": 1515.12, "end": 1518.56, "text": " And then we just do a simple stochastic gradient descent here."}, {"start": 1518.56, "end": 1520.6, "text": " I'm being very explicit here."}, {"start": 1520.6, "end": 1522.56, "text": " You usually will not be doing it like this."}, {"start": 1522.56, "end": 1527.36, "text": " You're going to be using Optax, but here I'm being explicit and doing it in pure JAX 
way."}, {"start": 1527.36, "end": 1529.6799999999998, "text": " By the way, I'm having some internet connection problems."}, {"start": 1529.6799999999998, "end": 1531.08, "text": " Hopefully that will not interfere."}, {"start": 1531.08, "end": 1532.6399999999999, "text": " I think not, but yeah."}, {"start": 1532.6399999999999, "end": 1536.52, "text": " So you can see here, we do the tree multimap from Jax."}, {"start": 1536.52, "end": 1539.6799999999998, "text": " We basically have two pie trees here, params and grads."}, {"start": 1539.6799999999998, "end": 1544.0, "text": " We are going and traversing the leaves and we do the following simple manipulation from"}, {"start": 1544.0, "end": 1545.28, "text": " the current parameter."}, {"start": 1545.28, "end": 1551.44, "text": " We subtract the learning rate times the gradient, which is a definition of your simplest stochastic"}, {"start": 1551.44, "end": 1552.8, "text": " gradient descent."}, {"start": 1552.8, "end": 1554.12, "text": " And that's it."}, {"start": 1554.12, "end": 1558.36, "text": " So this is actually not stochastic gradient descent because we're using the whole data."}, {"start": 1558.36, "end": 1560.34, "text": " So this will be a batch gradient descent."}, {"start": 1560.34, "end": 1561.96, "text": " So it's exact."}, {"start": 1561.96, "end": 1565.64, "text": " We're not being stochastic here."}, {"start": 1565.64, "end": 1568.64, "text": " And finally, we are logging every now and then."}, {"start": 1568.64, "end": 1572.64, "text": " So every five epochs, I just print the epoch and then print the loss."}, {"start": 1572.64, "end": 1578.2, "text": " And yeah, if I were to run this shift enter, we can get, we can see the initial parameters"}, {"start": 1578.2, "end": 1583.1000000000001, "text": " which were initialized using this key, which is different compared to the way we've initialized"}, {"start": 1583.1000000000001, "end": 1584.1000000000001, "text": " the model here."}, {"start": 1584.1000000000001, "end": 1590.8400000000001, "text": " We use the specialized keys here and then we kind of wrap those into this freeze dictionary,"}, {"start": 1590.8400000000001, "end": 1591.8400000000001, "text": " et cetera."}, {"start": 1591.8400000000001, "end": 1592.8400000000001, "text": " Okay."}, {"start": 1592.8400000000001, "end": 1595.0400000000002, "text": " So those were the initial parameters."}, {"start": 1595.0400000000002, "end": 1598.3200000000002, "text": " Then we had loss going down towards zero."}, {"start": 1598.32, "end": 1604.04, "text": " And finally we get like, as you can see here, the final parameters, the learn parameters"}, {"start": 1604.04, "end": 1607.12, "text": " are here and you can see that they are pretty much the same."}, {"start": 1607.12, "end": 1613.36, "text": " We get two five four here, two five, very similar numbers, which is also indicated by"}, {"start": 1613.36, "end": 1614.96, "text": " the loss."}, {"start": 1614.96, "end": 1619.3999999999999, "text": " So yeah, that's how the training loop in Flex looks like."}, {"start": 1619.3999999999999, "end": 1620.3999999999999, "text": " Almost in Flex."}, {"start": 1620.3999999999999, "end": 1625.28, "text": " As I said, I'm using more, this is more like a puregex type of syntax here."}, {"start": 1625.28, "end": 1632.12, "text": " So let's now use OptX, which is a dedicated library for this purpose in particular, and"}, {"start": 1632.12, "end": 1633.28, "text": " let's initialize it."}, {"start": 1633.28, "end": 1640.48, "text": " So 
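And a sketch of the pure-JAX training loop with the hand-written update (the hyperparameter values are assumptions); jax.tree_map over the two pytrees does the same leaf-wise update as the tree_multimap call mentioned above.

```python
import jax

learning_rate, epochs, log_period = 0.3, 20, 5           # assumed hyperparameters

# Fresh initialization, with a different key than the one used for the ground-truth parameters.
params = model.init(jax.random.PRNGKey(42), xs[0])

for epoch in range(epochs):
    # The data is already closed over inside the loss, so only params are passed in.
    loss, grads = value_and_grad_fn(params)
    # Leaf-wise update over the params and grads pytrees; since the whole dataset is used
    # each step, this is really batch gradient descent rather than stochastic GD.
    params = jax.tree_map(lambda p, g: p - learning_rate * g, params, grads)
    if epoch % log_period == 0:
        print(f'epoch {epoch}, loss {loss:.4f}')

print(params)   # the learned kernel and bias should end up close to the ground-truth W and b
```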
from OptX we call the SGD, we pass in the learning rates and then we have to again initialize"}, {"start": 1640.48, "end": 1644.44, "text": " the OptX because state has to be externally manipulated."}, {"start": 1644.44, "end": 1647.42, "text": " And here we get a state and we kind of print the state."}, {"start": 1647.42, "end": 1652.0, "text": " So let's run this cell and let's see what happens."}, {"start": 1652.0, "end": 1657.8, "text": " And as you can see, we've got a tuple of two empty states and that may not make sense if"}, {"start": 1657.8, "end": 1662.24, "text": " you're not familiar with how optimizers work, but if you are, this makes perfect sense because"}, {"start": 1662.24, "end": 1664.72, "text": " SGD doesn't have any state."}, {"start": 1664.72, "end": 1670.72, "text": " Compare that to Atom, which has to kind of keep the various statistics of gradients,"}, {"start": 1670.72, "end": 1676.2, "text": " like the variance of gradients in memory in state and that's why they're like, we have"}, {"start": 1676.2, "end": 1678.84, "text": " to like kind of explicitly manipulate it."}, {"start": 1678.84, "end": 1685.8, "text": " So if I were to change this to Atom, and yeah, I won't be changing the variable name, I'm"}, {"start": 1685.8, "end": 1687.6799999999998, "text": " just gonna change and use Atom."}, {"start": 1687.6799999999998, "end": 1688.6799999999998, "text": " Let me try and run this."}, {"start": 1688.6799999999998, "end": 1692.76, "text": " You can see that now we have like a meaningful state here and that's something you have to"}, {"start": 1692.76, "end": 1697.8799999999999, "text": " kind of pass externally here in Flex, in Jax in general."}, {"start": 1697.8799999999999, "end": 1704.3999999999999, "text": " So yeah, you can see what happens here is we have two set of parameters here."}, {"start": 1704.3999999999999, "end": 1708.1999999999998, "text": " First is the mu parameters and then we have the nu parameters."}, {"start": 1708.2, "end": 1715.52, "text": " And these are just your Greek alphabet letters that are denoting the same quantities as in"}, {"start": 1715.52, "end": 1716.52, "text": " the original Atom paper."}, {"start": 1716.52, "end": 1723.32, "text": " So if I were to Google here Atom paper, blah, blah, blah, and if I were to open this paper"}, {"start": 1723.32, "end": 1729.3600000000001, "text": " on archive, we're going to see the formulas are using the exact same variable name."}, {"start": 1729.3600000000001, "end": 1732.1200000000001, "text": " So I think it's here, is this the Atom?"}, {"start": 1732.1200000000001, "end": 1733.1200000000001, "text": " Yeah."}, {"start": 1733.12, "end": 1740.3999999999999, "text": " And this is the nu, I don't know whether I'm pronouncing it correctly, probably not, but"}, {"start": 1740.3999999999999, "end": 1745.56, "text": " basically those are the statistics, the gradient statistics that Atom is kind of keeping in"}, {"start": 1745.56, "end": 1747.52, "text": " the background in order to do the update."}, {"start": 1747.52, "end": 1749.3, "text": " So here is the Atom update."}, {"start": 1749.3, "end": 1754.52, "text": " It's way more complicated compared to, let's say, SGD because yeah, you can see here we're"}, {"start": 1754.52, "end": 1760.84, "text": " using the mu and the nu here to form the novel parameter, to form the update."}, {"start": 1760.84, "end": 1762.84, "text": " Okay, so that's it."}, {"start": 1762.84, "end": 1769.6399999999999, "text": " Let me return back to SGD so that we can 
have comparable results compared to the pure Jax"}, {"start": 1769.6399999999999, "end": 1771.84, "text": " approach we used here."}, {"start": 1771.84, "end": 1773.36, "text": " And so the tree multimap."}, {"start": 1773.36, "end": 1778.36, "text": " So let me go back here and we're going to have the same exact thing replicated."}, {"start": 1778.36, "end": 1783.4399999999998, "text": " The only thing that's different here is we are using instead of explicitly handling the"}, {"start": 1783.4399999999998, "end": 1787.72, "text": " optimization part, we're going to use OptX."}, {"start": 1787.72, "end": 1792.92, "text": " And you can see in this example, it actually takes two lines to implement the code that"}, {"start": 1792.92, "end": 1793.92, "text": " we implemented here."}, {"start": 1793.92, "end": 1797.46, "text": " I think in, yeah, we had just a single line here."}, {"start": 1797.46, "end": 1801.28, "text": " So you may ask yourself why would we use OptX then?"}, {"start": 1801.28, "end": 1802.56, "text": " And then, I mean, it's pretty obvious."}, {"start": 1802.56, "end": 1807.76, "text": " I mean, when you have some more complex optimizers, you'd have in pure Jax, you'd have like 10,"}, {"start": 1807.76, "end": 1809.08, "text": " 15 lines or more."}, {"start": 1809.08, "end": 1811.28, "text": " Here you always have two lines and that's very cool."}, {"start": 1811.28, "end": 1812.5, "text": " So this is how it looks like."}, {"start": 1812.5, "end": 1814.96, "text": " So you have the optimizer, the SGD optimizer here."}, {"start": 1814.96, "end": 1820.72, "text": " We pass the gradients and we pass its state and outcomes, the novel state and outcome,"}, {"start": 1820.72, "end": 1823.28, "text": " the updates, which are the modified gradients."}, {"start": 1823.28, "end": 1828.2, "text": " So in the case of Atom, depending on the internal state, we'd be modifying these updates and"}, {"start": 1828.2, "end": 1832.76, "text": " then we apply, we just call this apply updates on the parameters and we get the novel parameters"}, {"start": 1832.76, "end": 1833.76, "text": " and that's it."}, {"start": 1833.76, "end": 1834.92, "text": " So this should not work."}, {"start": 1834.92, "end": 1838.88, "text": " Let me try and run this and see what we get."}, {"start": 1838.88, "end": 1841.92, "text": " And you can see here that we get like a loss."}, {"start": 1841.92, "end": 1842.92, "text": " I think it's the same loss."}, {"start": 1842.92, "end": 1848.64, "text": " It's 512 here and if we take a look here, it's also 512."}, {"start": 1848.64, "end": 1850.28, "text": " The last two digits are 85."}, {"start": 1850.28, "end": 1851.72, "text": " The last two digits are 85."}, {"start": 1851.72, "end": 1856.76, "text": " We get the same loss because I mean, this implements the SGD perfectly."}, {"start": 1856.76, "end": 1857.88, "text": " Okay."}, {"start": 1857.88, "end": 1858.88, "text": " So that's cool."}, {"start": 1858.88, "end": 1860.6000000000001, "text": " And later, I have a note here."}, {"start": 1860.6000000000001, "end": 1866.1000000000001, "text": " Later we'll see how we can kind of take all of these various states that are kind of floating"}, {"start": 1866.1000000000001, "end": 1871.2, "text": " around our program loops and wrap them in a single object and then use that single object"}, {"start": 1871.2, "end": 1874.76, "text": " to kind of, yeah, do the training."}, {"start": 1874.76, "end": 1879.52, "text": " So just so you can see that Optex is much more than SGD 
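The same loop written with Optax, as described above, reusing the names from the previous sketches: the optimizer carries its own externally handled state, and two lines replace the hand-written update no matter how complex the optimizer is (swapping optax.sgd for optax.adam makes the mu/nu statistics appear in that state).

```python
import optax

optimizer = optax.sgd(learning_rate)       # optax.adam(learning_rate) would carry mu/nu state instead
opt_state = optimizer.init(params)         # for plain SGD this state is empty

for epoch in range(epochs):
    loss, grads = value_and_grad_fn(params)
    updates, opt_state = optimizer.update(grads, opt_state)   # optimizer-specific gradient transform
    params = optax.apply_updates(params, updates)             # apply the (possibly modified) gradients
    if epoch % log_period == 0:
        print(f'epoch {epoch}, loss {loss:.4f}')
```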
and Atom."}, {"start": 1879.52, "end": 1882.76, "text": " Let me show you this piece of code here."}, {"start": 1882.76, "end": 1888.56, "text": " I mentioned here that you can do various schedules, cosine schedules, linear schedules, arbitrary"}, {"start": 1888.56, "end": 1889.56, "text": " schedules."}, {"start": 1889.56, "end": 1892.8400000000001, "text": " You can do chaining of multiple optimizers, multiple components."}, {"start": 1892.8400000000001, "end": 1896.44, "text": " You can do parameter freezing when you want to do some fine tuning."}, {"start": 1896.44, "end": 1900.52, "text": " So it's very powerful, much more powerful than using pure JECs."}, {"start": 1900.52, "end": 1904.2, "text": " I mean, it's as powerful, but like it's way more concise."}, {"start": 1904.2, "end": 1905.2, "text": " Okay."}, {"start": 1905.2, "end": 1908.24, "text": " So let's show this code here."}, {"start": 1908.24, "end": 1909.24, "text": " This will not run."}, {"start": 1909.24, "end": 1915.2, "text": " This is just some simple example of copy pasted from ImageNet examples from the respective"}, {"start": 1915.2, "end": 1918.2, "text": " repos from Haiku and from Flex."}, {"start": 1918.2, "end": 1922.16, "text": " Here you can see this function create learning rate, which basically takes some parameters"}, {"start": 1922.16, "end": 1925.12, "text": " and then creates this from Optex."}, {"start": 1925.12, "end": 1927.1399999999999, "text": " You call this linear schedule function."}, {"start": 1927.14, "end": 1931.4, "text": " We give it the initial value, the end value, and then the number of steps where we are"}, {"start": 1931.4, "end": 1935.1200000000001, "text": " linearly increasing the learning rate or decreasing."}, {"start": 1935.1200000000001, "end": 1939.8000000000002, "text": " Now here in this case, yeah, you can basically either increase or decrease depending on the"}, {"start": 1939.8000000000002, "end": 1941.6000000000001, "text": " initial and end values."}, {"start": 1941.6000000000001, "end": 1943.88, "text": " Then we have the cosine function here."}, {"start": 1943.88, "end": 1945.24, "text": " So the cosine schedule."}, {"start": 1945.24, "end": 1949.0, "text": " And finally, we kind of join schedules using Optex join schedules."}, {"start": 1949.0, "end": 1953.68, "text": " And we join the warmup, the linear warmup combined with the cosine schedule."}, {"start": 1953.68, "end": 1957.44, "text": " And I guess, yeah, and we put the boundaries here to kind of denote the moment when we"}, {"start": 1957.44, "end": 1959.44, "text": " are switching modes."}, {"start": 1959.44, "end": 1966.48, "text": " And then you can use this scheduling function and pass it directly to Optex SGD here with"}, {"start": 1966.48, "end": 1969.68, "text": " some additional like Nestoro momentum, et cetera."}, {"start": 1969.68, "end": 1974.4, "text": " Finally from, you can see here a snippet from Haiku, even more complex, arguably."}, {"start": 1974.4, "end": 1976.28, "text": " We have a trace here."}, {"start": 1976.28, "end": 1979.8400000000001, "text": " We have a chaining operation of this tracing operator."}, {"start": 1979.84, "end": 1984.32, "text": " And then we have the scale by schedule, like a bunch of small components which can make"}, {"start": 1984.32, "end": 1987.84, "text": " arbitrary complex transformations."}, {"start": 1987.84, "end": 1989.76, "text": " So that's pretty much it."}, {"start": 1989.76, "end": 1991.82, "text": " That was section number two."}, {"start": 
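A sketch in the spirit of those ImageNet examples (the constants are assumptions): a linear warmup joined with a cosine decay via optax.join_schedules, passed as the learning rate of optax.sgd, plus a Haiku-style chain of smaller gradient transformations.

```python
import optax

def create_learning_rate_fn(base_lr=0.1, warmup_epochs=5, num_epochs=100, steps_per_epoch=100):
    """Linear warmup followed by cosine decay."""
    warmup_fn = optax.linear_schedule(
        init_value=0.0, end_value=base_lr,
        transition_steps=warmup_epochs * steps_per_epoch)
    cosine_fn = optax.cosine_decay_schedule(
        init_value=base_lr,
        decay_steps=(num_epochs - warmup_epochs) * steps_per_epoch)
    # boundaries mark the step at which we switch from one schedule to the next.
    return optax.join_schedules(
        schedules=[warmup_fn, cosine_fn],
        boundaries=[warmup_epochs * steps_per_epoch])

schedule_fn = create_learning_rate_fn()
# A schedule can be passed anywhere a scalar learning rate is accepted.
tx = optax.sgd(learning_rate=schedule_fn, momentum=0.9, nesterov=True)

# Chaining small components, roughly like the Haiku snippet described above.
tx_chained = optax.chain(
    optax.trace(decay=0.9, nesterov=True),
    optax.scale_by_schedule(lambda step: -schedule_fn(step)))  # negative sign: gradient *descent*
```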
1991.82, "end": 1996.56, "text": " We got some understanding of how toy-like neural networks, i.e. linear regression in"}, {"start": 1996.56, "end": 1998.36, "text": " this case, are trained."}, {"start": 1998.36, "end": 2003.32, "text": " And now we're going to see how to create custom models in Flax, which is something we care"}, {"start": 2003.32, "end": 2004.3999999999999, "text": " about."}, {"start": 2004.3999999999999, "end": 2008.1999999999998, "text": " So let's start with a simple example of how to build an MLP."}, {"start": 2008.2, "end": 2010.88, "text": " So a multi-layer perceptron network."}, {"start": 2010.88, "end": 2015.46, "text": " And you can see here we are inheriting from nn.Module."}, {"start": 2015.46, "end": 2017.56, "text": " So we form a class, we inherit from the module."}, {"start": 2017.56, "end": 2022.48, "text": " So every, when you want to build a custom model, you always want to inherit from"}, {"start": 2022.48, "end": 2023.48, "text": " nn.Module."}, {"start": 2023.48, "end": 2026.22, "text": " Then you can see that we have this field here."}, {"start": 2026.22, "end": 2028.88, "text": " And this is called a dataclass field."}, {"start": 2028.88, "end": 2030.94, "text": " It's from one of the newer versions of Python."}, {"start": 2030.94, "end": 2032.5800000000002, "text": " You may be familiar with it."}, {"start": 2032.5800000000002, "end": 2034.44, "text": " You may be not."}, {"start": 2034.44, "end": 2037.96, "text": " But basically nn.Module is a Python data class."}, {"start": 2037.96, "end": 2040.08, "text": " So if I were to, let me check."}, {"start": 2040.08, "end": 2041.48, "text": " I think I have a link here."}, {"start": 2041.48, "end": 2042.94, "text": " Yeah, we have a link here."}, {"start": 2042.94, "end": 2047.68, "text": " Let me just copy paste this thing here and let me show you what a data class is."}, {"start": 2047.68, "end": 2049.6, "text": " So data class, let me kind of zoom here."}, {"start": 2049.6, "end": 2052.68, "text": " A data class is basically this."}, {"start": 2052.68, "end": 2058.48, "text": " So you have this wrapper called data class, and then you can just kind of put the variable"}, {"start": 2058.48, "end": 2059.48, "text": " names here."}, {"start": 2059.48, "end": 2064.66, "text": " And then a couple of functions are automatically instantiated for you, defined for you."}, {"start": 2064.66, "end": 2068.6, "text": " So you can see, let me just find an equivalent representation here."}, {"start": 2068.6, "end": 2069.6, "text": " Yeah."}, {"start": 2069.6, "end": 2077.04, "text": " So if you were to explicitly code up the data class yourself, this is what you'd have to"}, {"start": 2077.04, "end": 2078.92, "text": " do for those two fields."}, {"start": 2078.92, "end": 2083.24, "text": " You'd have to create this representation function, the equality function."}, {"start": 2083.24, "end": 2084.24, "text": " You'd have to write the init."}, {"start": 2084.24, "end": 2085.64, "text": " And there is a lot of redundancy here."}, {"start": 2085.64, "end": 2088.8399999999997, "text": " As you can see, we have rank being repeated three times here."}, {"start": 2088.84, "end": 2095.96, "text": " So all in all, this is just a neat feature of Python more than Flax that, as I said,"}, {"start": 2095.96, "end": 2099.08, "text": " nn.Module is kind of piggybacking on."}, {"start": 2099.08, "end": 2107.04, "text": " And yeah, so as a consequence of nn.Module being a Python data class, unfortunately,"}, {"start": 2107.04, "end": 
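A small illustration of the dataclass point (the class and field names here are hypothetical, not the ones shown on screen): the decorator generates __init__, __repr__ and __eq__ for you, which you would otherwise have to repeat by hand for every field.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    rank: int

# Roughly what the decorator writes for you:
class ItemExplicit:
    def __init__(self, name: str, rank: int):
        self.name = name
        self.rank = rank

    def __repr__(self):
        return f'ItemExplicit(name={self.name!r}, rank={self.rank!r})'

    def __eq__(self, other):
        return (self.name, self.rank) == (other.name, other.rank)
```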
2109.8, "text": " we'll have to deal with a new function name."}, {"start": 2109.8, "end": 2111.32, "text": " We're not using init anymore."}, {"start": 2111.32, "end": 2112.36, "text": " We're using setup."}, {"start": 2112.36, "end": 2114.44, "text": " So setup is old init."}, {"start": 2114.44, "end": 2121.28, "text": " And because as I mentioned here, basically, the data class is implicitly using the init"}, {"start": 2121.28, "end": 2122.28, "text": " function."}, {"start": 2122.28, "end": 2124.96, "text": " And we saw that on the tab I just closed."}, {"start": 2124.96, "end": 2128.6, "text": " What we do here is we just form a list of layers."}, {"start": 2128.6, "end": 2132.84, "text": " So you can see here, we iterate through this sequence of integers, which is a simple list"}, {"start": 2132.84, "end": 2134.4, "text": " of integers."}, {"start": 2134.4, "end": 2138.12, "text": " We create these dense, i.e. feed forward layers."}, {"start": 2138.12, "end": 2142.36, "text": " And then during the call function, what we do, we pass the input."}, {"start": 2142.36, "end": 2144.46, "text": " We just rename that as activation."}, {"start": 2144.46, "end": 2146.84, "text": " And we iterate over the layers."}, {"start": 2146.84, "end": 2150.2000000000003, "text": " We kind of call, we pass the activation through the layer."}, {"start": 2150.2000000000003, "end": 2155.2400000000002, "text": " And finally, depending on whether or not we are in the last layer, we apply the ReLU,"}, {"start": 2155.2400000000002, "end": 2157.48, "text": " the ReLU nonlinear activation here."}, {"start": 2157.48, "end": 2159.7200000000003, "text": " And then we finally return the activation."}, {"start": 2159.7200000000003, "end": 2162.84, "text": " So the output layer, that's the output activation."}, {"start": 2162.84, "end": 2165.2000000000003, "text": " So yeah, that's your basic definition of an MLP."}, {"start": 2165.2000000000003, "end": 2166.76, "text": " Fairly, fairly simple."}, {"start": 2166.76, "end": 2169.28, "text": " Now let's see how we can kind of do some inference."}, {"start": 2169.28, "end": 2171.36, "text": " So again, we split a key."}, {"start": 2171.36, "end": 2172.32, "text": " We have the H key."}, {"start": 2172.32, "end": 2174.0800000000004, "text": " We have the init key."}, {"start": 2174.0800000000004, "end": 2176.44, "text": " Init is supposed to initialize the parameters."}, {"start": 2176.44, "end": 2179.46, "text": " Let me kind of first slowly walk you through this part."}, {"start": 2179.46, "end": 2180.7200000000003, "text": " So we have MLP."}, {"start": 2180.7200000000003, "end": 2187.1200000000003, "text": " We instantiate it using this number of neurons per layer is going to be 16, 8, 1, which means"}, {"start": 2187.1200000000003, "end": 2189.88, "text": " in the first layer, we have 16 neurons."}, {"start": 2189.88, "end": 2190.92, "text": " Then in the second, we have 8."}, {"start": 2190.92, "end": 2194.92, "text": " And then we have final, a single neuron."}, {"start": 2194.92, "end": 2200.7200000000003, "text": " And we instantiate this data set here."}, {"start": 2200.72, "end": 2205.56, "text": " We have the four data points that each has four dimensions."}, {"start": 2205.56, "end": 2209.48, "text": " And we use the H, so we use the X key here."}, {"start": 2209.48, "end": 2213.72, "text": " Then we call the model init here with the init key and we pass X."}, {"start": 2213.72, "end": 2216.24, "text": " So we've seen this multiple times already."}, {"start": 
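A minimal sketch of the setup-based MLP described here, together with the usual init/apply pattern; the field name is an assumption.

```python
from typing import Sequence
import jax
from flax import linen as nn

class MLP(nn.Module):
    num_neurons_per_layer: Sequence[int]     # dataclass field; setup() plays the role of __init__

    def setup(self):
        self.layers = [nn.Dense(n) for n in self.num_neurons_per_layer]

    def __call__(self, x):
        activation = x
        for i, layer in enumerate(self.layers):
            activation = layer(activation)
            if i != len(self.layers) - 1:    # ReLU everywhere except after the output layer
                activation = nn.relu(activation)
        return activation

x_key, init_key = jax.random.split(jax.random.PRNGKey(0))
model = MLP(num_neurons_per_layer=[16, 8, 1])
x = jax.random.uniform(x_key, (4, 4))        # 4 data points, 4 features each
params = model.init(init_key, x)
y = model.apply(params, x)                   # y.shape == (4, 1)
print(jax.tree_map(lambda leaf: leaf.shape, params))
```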
2216.24, "end": 2217.2799999999997, "text": " We have params."}, {"start": 2217.2799999999997, "end": 2222.72, "text": " We call model apply and we do a forward pass and we get Y here."}, {"start": 2222.72, "end": 2224.72, "text": " And that's it."}, {"start": 2224.72, "end": 2230.7599999999998, "text": " The same as if we were to use a simple, like a dense layer and not a complex MLP."}, {"start": 2230.7599999999998, "end": 2234.2999999999997, "text": " Again, I'm just calling a tree map and we're printing shapes."}, {"start": 2234.2999999999997, "end": 2239.3399999999997, "text": " So let's see how this thing looks like when we run it."}, {"start": 2239.3399999999997, "end": 2248.2799999999997, "text": " And as expected, we have 16, 8, 1, and we have 4 here because remember we have automatic"}, {"start": 2248.2799999999997, "end": 2249.8399999999997, "text": " shape inference."}, {"start": 2249.8399999999997, "end": 2254.68, "text": " And because our data points have four dimensions, basically, that's what we have for here."}, {"start": 2254.68, "end": 2257.3199999999997, "text": " And this kernel here, and this weight here."}, {"start": 2257.3199999999997, "end": 2259.7599999999998, "text": " By the way, I hate the word kernel."}, {"start": 2259.7599999999998, "end": 2262.2999999999997, "text": " It's so overloaded in machine learning that it's crazy."}, {"start": 2262.2999999999997, "end": 2268.44, "text": " So yeah, maybe using a weight, like a terminology here is maybe a bit better."}, {"start": 2268.44, "end": 2270.24, "text": " I don't know."}, {"start": 2270.24, "end": 2271.3599999999997, "text": " Cool."}, {"start": 2271.3599999999997, "end": 2272.9199999999996, "text": " I made a note here."}, {"start": 2272.9199999999996, "end": 2278.6, "text": " Let's try and do the same thing just with this nn-compact wrapper."}, {"start": 2278.6, "end": 2282.72, "text": " And what the trick here is, is you don't have to have set up at all."}, {"start": 2282.72, "end": 2287.24, "text": " So what you can do is, because we have a fairly simple function here, we can just kind of"}, {"start": 2287.24, "end": 2291.6, "text": " wrap this call function into nn-compact."}, {"start": 2291.6, "end": 2297.54, "text": " And then what we can do here is we can iterate through the number of neurons per layer instead"}, {"start": 2297.54, "end": 2299.16, "text": " of through layers."}, {"start": 2299.16, "end": 2303.12, "text": " And that means we'll have number of neurons here, number of neurons."}, {"start": 2303.12, "end": 2307.4199999999996, "text": " And then what we do instead of having a layer, we'll just instantiate it on the fly here."}, {"start": 2307.42, "end": 2314.7000000000003, "text": " So nn-dense, we kind of initialize it using the number of neurons, and then we do the"}, {"start": 2314.7000000000003, "end": 2315.7000000000003, "text": " same."}, {"start": 2315.7000000000003, "end": 2317.2400000000002, "text": " Everything else remains the same."}, {"start": 2317.2400000000002, "end": 2320.48, "text": " And I think this should run unless I've made some mistake."}, {"start": 2320.48, "end": 2322.88, "text": " Let me see whether it's going to work."}, {"start": 2322.88, "end": 2324.32, "text": " So everything remains the same."}, {"start": 2324.32, "end": 2329.52, "text": " It's just a bit more concise way of writing down things."}, {"start": 2329.52, "end": 2333.64, "text": " So yeah, I managed to make an error here, and the error is that we should not be using"}, {"start": 2333.64, "end": 
2334.64, "text": " the layers."}, {"start": 2334.64, "end": 2336.32, "text": " Layers does not exist anymore."}, {"start": 2336.32, "end": 2339.6400000000003, "text": " So let me just kind of switch that here."}, {"start": 2339.6400000000003, "end": 2341.4, "text": " And this should now work."}, {"start": 2341.4, "end": 2342.88, "text": " Yeah, it is."}, {"start": 2342.88, "end": 2343.92, "text": " Okay, cool."}, {"start": 2343.92, "end": 2345.76, "text": " You just saw a second way."}, {"start": 2345.76, "end": 2350.84, "text": " So this is a very flex-like way of writing things down using nn-compact."}, {"start": 2350.84, "end": 2353.7200000000003, "text": " But either way, you get the same semantics."}, {"start": 2353.7200000000003, "end": 2354.96, "text": " Cool."}, {"start": 2354.96, "end": 2356.3, "text": " Let's continue here."}, {"start": 2356.3, "end": 2361.1200000000003, "text": " The output is, by the way, as expected, we have four times one because single output"}, {"start": 2361.1200000000003, "end": 2365.92, "text": " neuron for data points, remember, and then that's why we have four times one."}, {"start": 2365.92, "end": 2369.92, "text": " It's good to kind of check the shapes and make sure those make sense."}, {"start": 2369.92, "end": 2372.56, "text": " Now, let's dig even deeper."}, {"start": 2372.56, "end": 2379.7200000000003, "text": " And there are two very, very crucial concepts you ought to know about."}, {"start": 2379.7200000000003, "end": 2384.16, "text": " The first one is this thing called a param."}, {"start": 2384.16, "end": 2391.0, "text": " And basically, it's a terminology for trainable parameters in neural networks in Flex."}, {"start": 2391.0, "end": 2396.0, "text": " So let's implement the simple model that's going to leverage those params and see how"}, {"start": 2396.0, "end": 2397.34, "text": " this thing works."}, {"start": 2397.34, "end": 2403.24, "text": " So here, what we see is if we were to try to implement the dense layer, either feedforward"}, {"start": 2403.24, "end": 2407.68, "text": " layer, this is how we would approximately do it."}, {"start": 2407.68, "end": 2409.6, "text": " So here, again, it's a data class."}, {"start": 2409.6, "end": 2412.52, "text": " So we can instantiate these fields here."}, {"start": 2412.52, "end": 2414.02, "text": " We have number of neurons."}, {"start": 2414.02, "end": 2418.36, "text": " We have the weight and the bias initializer functions."}, {"start": 2418.36, "end": 2422.56, "text": " So yeah, bias is going to be initialized as all zeros."}, {"start": 2422.56, "end": 2426.6400000000003, "text": " And for weights, we're going to use like a normal initializer."}, {"start": 2426.6400000000003, "end": 2427.6400000000003, "text": " Not that important."}, {"start": 2427.6400000000003, "end": 2430.94, "text": " The details are not that important."}, {"start": 2430.94, "end": 2435.04, "text": " The actual implementation of this function is not vital for this explanation."}, {"start": 2435.04, "end": 2436.6, "text": " So let's see what we do here."}, {"start": 2436.6, "end": 2441.42, "text": " So we call self-param and we give it a name."}, {"start": 2441.42, "end": 2443.6, "text": " And here, it's going to be called weight."}, {"start": 2443.6, "end": 2445.0, "text": " And it can be arbitrary name."}, {"start": 2445.0, "end": 2446.4, "text": " You can choose whatever you want."}, {"start": 2446.4, "end": 2450.6800000000003, "text": " So that's the name that will appear in that frozen dict under the params 
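For reference, a sketch of the @nn.compact variant of the same MLP discussed above: no setup method, and the Dense submodules are created inline inside __call__.

```python
from typing import Sequence
from flax import linen as nn

class MLPCompact(nn.Module):
    num_neurons_per_layer: Sequence[int]

    @nn.compact                                        # submodules are declared inline
    def __call__(self, x):
        activation = x
        for i, num_neurons in enumerate(self.num_neurons_per_layer):
            activation = nn.Dense(num_neurons)(activation)
            if i != len(self.num_neurons_per_layer) - 1:
                activation = nn.relu(activation)
        return activation
```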
collection, if"}, {"start": 2450.6800000000003, "end": 2451.6800000000003, "text": " you recall."}, {"start": 2451.6800000000003, "end": 2452.6800000000003, "text": " If not, I'm going to run this."}, {"start": 2452.6800000000003, "end": 2453.82, "text": " You're going to see it in a second."}, {"start": 2453.82, "end": 2456.2400000000002, "text": " So now we call the initializer function of our choice."}, {"start": 2456.2400000000002, "end": 2461.64, "text": " So the random state is going to be implicitly passed into this function through the init"}, {"start": 2461.64, "end": 2462.64, "text": " and apply function."}, {"start": 2462.64, "end": 2465.36, "text": " So you don't have to worry about that here."}, {"start": 2465.36, "end": 2472.1, "text": " So the only thing you have to provide is the shape information for this initializer function."}, {"start": 2472.1, "end": 2477.2799999999997, "text": " And we'll just take the last dimension of our data and then the number of neurons here,"}, {"start": 2477.2799999999997, "end": 2480.48, "text": " which is the desired number of outputs."}, {"start": 2480.48, "end": 2481.72, "text": " We do the same thing for bias."}, {"start": 2481.72, "end": 2482.72, "text": " We call self.param."}, {"start": 2482.72, "end": 2487.36, "text": " So this is, again, this is flex syntax."}, {"start": 2487.36, "end": 2488.96, "text": " And we name it bias."}, {"start": 2488.96, "end": 2490.1, "text": " We have the bias in it."}, {"start": 2490.1, "end": 2495.68, "text": " We pass the number of neurons as the shape because, yeah, that's how bias shape is supposed"}, {"start": 2495.68, "end": 2496.68, "text": " to look like."}, {"start": 2496.68, "end": 2499.7599999999998, "text": " And then this is a simple implementation of feed-forward layer."}, {"start": 2499.76, "end": 2505.5600000000004, "text": " We do a dot product between the inputs and between the kernel, the weights, and then"}, {"start": 2505.5600000000004, "end": 2506.6400000000003, "text": " we just add the bias."}, {"start": 2506.6400000000003, "end": 2507.6400000000003, "text": " And that's it."}, {"start": 2507.6400000000003, "end": 2513.88, "text": " This is how we would implement like a dense, a linear, a feed-forward layer in flex."}, {"start": 2513.88, "end": 2517.5600000000004, "text": " Again, we have a common pattern here."}, {"start": 2517.5600000000004, "end": 2522.28, "text": " We just form the random seeds here, the random states here."}, {"start": 2522.28, "end": 2525.2400000000002, "text": " And then we instantiate the model here."}, {"start": 2525.24, "end": 2530.12, "text": " We just form some random data on this line 18."}, {"start": 2530.12, "end": 2531.12, "text": " Then we initialize."}, {"start": 2531.12, "end": 2532.16, "text": " We get the parameters."}, {"start": 2532.16, "end": 2535.16, "text": " We call apply and we get the output."}, {"start": 2535.16, "end": 2536.9399999999996, "text": " And finally, we can print the shapes."}, {"start": 2536.9399999999996, "end": 2539.8999999999996, "text": " And let me run this and see what we have as the output."}, {"start": 2539.8999999999996, "end": 2543.3799999999997, "text": " And you can see this is like something we're used to seeing."}, {"start": 2543.3799999999997, "end": 2545.2, "text": " We have params collection."}, {"start": 2545.2, "end": 2550.8399999999997, "text": " And I'm deliberately saying collection because that's a reserved word in flex."}, {"start": 2550.8399999999997, "end": 2554.3199999999997, "text": " And I'm 
going to soon explain what exactly it means."}, {"start": 2554.32, "end": 2556.76, "text": " And yeah, you can see biases and weights here."}, {"start": 2556.76, "end": 2557.76, "text": " And that's it."}, {"start": 2557.76, "end": 2561.7200000000003, "text": " I've linked a couple of source code files here."}, {"start": 2561.7200000000003, "end": 2563.6400000000003, "text": " You can check those at your own pace."}, {"start": 2563.6400000000003, "end": 2568.7200000000003, "text": " But let me briefly show you, and you may be confused by some of the idiosyncrasies of"}, {"start": 2568.7200000000003, "end": 2569.7200000000003, "text": " flex."}, {"start": 2569.7200000000003, "end": 2573.46, "text": " Like why do we have brackets here and why don't we have brackets here?"}, {"start": 2573.46, "end": 2575.28, "text": " What type of inconsistency is this?"}, {"start": 2575.28, "end": 2577.96, "text": " So my OCD was kind of triggered by this."}, {"start": 2577.96, "end": 2583.0800000000004, "text": " I had to dig into the source code and I found the solution here."}, {"start": 2583.0800000000004, "end": 2584.0800000000004, "text": " Basically what happens, let me zoom in here."}, {"start": 2584.08, "end": 2585.08, "text": " So we have a partial."}, {"start": 2585.08, "end": 2586.08, "text": " It forms a higher order function."}, {"start": 2586.08, "end": 2596.84, "text": " It just kind of plugs in these arguments that you don't have to kind of plug those in later"}, {"start": 2596.84, "end": 2599.3199999999997, "text": " on when you're calling this variance scaling function."}, {"start": 2599.3199999999997, "end": 2604.54, "text": " So the reason for the inconsistency is that if I were to find the variance scaling implementation,"}, {"start": 2604.54, "end": 2609.0, "text": " you can see the variance scaling basically returns back this function in it."}, {"start": 2609.0, "end": 2613.56, "text": " So that's why we have to call the brackets so that we get back the init function, which"}, {"start": 2613.56, "end": 2615.96, "text": " we can then call as an initializer."}, {"start": 2615.96, "end": 2620.2799999999997, "text": " Whereas zeros, if I were to find the zeros, it's already a function."}, {"start": 2620.2799999999997, "end": 2622.96, "text": " So that's why you don't have to call like the brackets."}, {"start": 2622.96, "end": 2625.12, "text": " So minor implementation detail."}, {"start": 2625.12, "end": 2630.32, "text": " I just thought explaining the why the inconsistency in case you wondered."}, {"start": 2630.32, "end": 2636.6, "text": " Second thing, I want to quickly show you the implementation of the aptly named linear layer,"}, {"start": 2636.6, "end": 2638.2799999999997, "text": " even though they call it dense."}, {"start": 2638.2799999999997, "end": 2639.7599999999998, "text": " So let's see that."}, {"start": 2639.7599999999998, "end": 2640.7599999999998, "text": " Okay."}, {"start": 2640.76, "end": 2645.6000000000004, "text": " So let me find the class dense, dense general, dense here."}, {"start": 2645.6000000000004, "end": 2646.6000000000004, "text": " Okay."}, {"start": 2646.6000000000004, "end": 2652.1200000000003, "text": " So obviously it's a bit more complicated than the toy implementation, which I just showed"}, {"start": 2652.1200000000003, "end": 2656.1600000000003, "text": " you, but like the basics are the same."}, {"start": 2656.1600000000003, "end": 2659.76, "text": " There is just an additional type conversion depending on whether you're training this"}, 
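Putting the self.param discussion together, here is a minimal sketch of such a hand-rolled dense layer (the names and the choice of initializers are assumptions); note that zeros is already an init function while lecun_normal() has to be called to produce one, which mirrors the brackets question above.

```python
from typing import Callable
import jax
import jax.numpy as jnp
from flax import linen as nn

class MyDense(nn.Module):
    num_neurons: int
    weight_init: Callable = nn.initializers.lecun_normal()   # returns an init fn, hence the brackets
    bias_init: Callable = nn.initializers.zeros               # already an init fn, no brackets

    @nn.compact
    def __call__(self, x):
        # self.param registers a trainable parameter under the 'params' collection;
        # the PRNG key is passed to the initializer implicitly through init/apply.
        w = self.param('weight', self.weight_init, (x.shape[-1], self.num_neurons))
        b = self.param('bias', self.bias_init, (self.num_neurons,))
        return jnp.dot(x, w) + b

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
model = MyDense(num_neurons=3)
x = jax.random.normal(key1, (5, 4))
params = model.init(key2, x)
y = model.apply(params, x)                                    # y.shape == (5, 3)
print(jax.tree_map(lambda leaf: leaf.shape, params))
```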
{"start": 2659.76, "end": 2662.44, "text": " thing in a mixed precision mode."}, {"start": 2662.44, "end": 2669.28, "text": " You may want to convert the data type of your activations, of your input data, et cetera."}, {"start": 2669.28, "end": 2674.6000000000004, "text": " So you can see here again, we have the self-param kernel initializer function."}, {"start": 2674.6000000000004, "end": 2675.6000000000004, "text": " We have the shape."}, {"start": 2675.6000000000004, "end": 2681.6200000000003, "text": " Additionally, again, I said we have a data type here and then again, we have some conversion"}, {"start": 2681.6200000000003, "end": 2682.6200000000003, "text": " of data type here."}, {"start": 2682.6200000000003, "end": 2688.1200000000003, "text": " So basically, and then we have dot general, which is basically as if we were to call JMP."}, {"start": 2688.1200000000003, "end": 2693.52, "text": " It's just a more lower level function that says you can see more verbose, but also much"}, {"start": 2693.52, "end": 2697.7200000000003, "text": " more, I guess, less error prone."}, {"start": 2697.72, "end": 2704.9199999999996, "text": " We formed the bias here and basically if bias exists, we just add it here and then that's,"}, {"start": 2704.9199999999996, "end": 2709.48, "text": " yeah, I mean, that's just a W times X plus B implementation of that."}, {"start": 2709.48, "end": 2711.2, "text": " So that's pretty much it."}, {"start": 2711.2, "end": 2715.9199999999996, "text": " Now let me get back to the notebook and let's continue this journey."}, {"start": 2715.9199999999996, "end": 2723.9199999999996, "text": " Yeah, again, I just have a simple cell here showing that the signature is such that this"}, {"start": 2723.92, "end": 2728.6, "text": " initializer function, the first argument is actually key, not the shape."}, {"start": 2728.6, "end": 2734.16, "text": " So that's why I said that you are implicitly passing in the RNG."}, {"start": 2734.16, "end": 2738.98, "text": " So by the way, you usually will not have to care about these things that much because"}, {"start": 2738.98, "end": 2743.92, "text": " you'll just usually just be using the NN layers, which are already implemented for you."}, {"start": 2743.92, "end": 2748.4, "text": " Here I'm just trying to dig a bit deeper into Flex and kind of show you how things work"}, {"start": 2748.4, "end": 2751.48, "text": " behind the, like in the background."}, {"start": 2751.48, "end": 2752.64, "text": " Hopefully you'll appreciate that."}, {"start": 2752.64, "end": 2755.72, "text": " Okay, let's continue here."}, {"start": 2755.72, "end": 2763.24, "text": " So those were the trainable parameters, but as you may know, when you're training neural"}, {"start": 2763.24, "end": 2768.2799999999997, "text": " networks, you sometimes have parameters which are a part of the model, but they're not exactly"}, {"start": 2768.2799999999997, "end": 2769.2799999999997, "text": " trainable."}, {"start": 2769.2799999999997, "end": 2772.3199999999997, "text": " So you're maybe keeping some statistics in the case of batch norm."}, {"start": 2772.3199999999997, "end": 2773.64, "text": " I guess that's the best example."}, {"start": 2773.64, "end": 2779.48, "text": " The most famous example would be the batch norm and batch norm has a trainable parameter"}, {"start": 2779.48, "end": 2784.84, "text": " as well as those non-trainable batch statistics."}, {"start": 2784.84, "end": 2789.44, "text": " And now we're going to see how Flex handles those."}, {"start": 
2789.44, "end": 2795.0, "text": " And the novel concept I want you to keep in mind here is variable."}, {"start": 2795.0, "end": 2799.64, "text": " And I just have a note in terminology, so it's a broader term and it includes both the"}, {"start": 2799.64, "end": 2803.76, "text": " trainable as well as the non-trainable variables."}, {"start": 2803.76, "end": 2807.56, "text": " So you'll see params being called variables as well."}, {"start": 2807.56, "end": 2809.04, "text": " They're just trainable variables."}, {"start": 2809.04, "end": 2816.48, "text": " Okay, that out of the way, let's see a very simple contrived example that contains both"}, {"start": 2816.48, "end": 2819.32, "text": " the variables and the parameters alike."}, {"start": 2819.32, "end": 2826.52, "text": " So we have a simple class here called very convoluted name bias adder with running mean."}, {"start": 2826.52, "end": 2832.88, "text": " Don't try and deduce the semantics of this function from this title, from this name,"}, {"start": 2832.88, "end": 2835.44, "text": " it's just a contrived example."}, {"start": 2835.44, "end": 2839.44, "text": " So what happens in the call function, we're using again this compact notation, we just"}, {"start": 2839.44, "end": 2847.04, "text": " check whether this variable is already initialized, namely the batch stats collection."}, {"start": 2847.04, "end": 2853.12, "text": " And then we take a look whether the EMA variable inside of that collection has been initialized."}, {"start": 2853.12, "end": 2858.5, "text": " Now let's see what collection and all of this, what I've just said means."}, {"start": 2858.5, "end": 2862.2400000000002, "text": " So batch stats is actually not an arbitrary name."}, {"start": 2862.24, "end": 2866.72, "text": " You'll see that in, I think I'm gonna show you the implementation of the batch norm,"}, {"start": 2866.72, "end": 2869.0, "text": " the source code itself."}, {"start": 2869.0, "end": 2873.52, "text": " Basically they've hard coded now whether that's a good design or not, I don't know."}, {"start": 2873.52, "end": 2877.6, "text": " I guess I don't have enough context, but like in old one, I'd say it's not the best design"}, {"start": 2877.6, "end": 2880.64, "text": " to hard code like a string like that."}, {"start": 2880.64, "end": 2883.24, "text": " But yeah, it is what it is."}, {"start": 2883.24, "end": 2887.08, "text": " So we can see here self.variable."}, {"start": 2887.08, "end": 2890.0, "text": " So again, that's a flex thing and we form a collection."}, {"start": 2890.0, "end": 2892.72, "text": " So this is the collection name, batch stats."}, {"start": 2892.72, "end": 2897.4, "text": " And the reason they have different collections is so that you can treat them differently."}, {"start": 2897.4, "end": 2898.48, "text": " So that's the point."}, {"start": 2898.48, "end": 2902.84, "text": " So we have the params collection which usually contains the trainable parameter, the trainable"}, {"start": 2902.84, "end": 2903.84, "text": " variables."}, {"start": 2903.84, "end": 2910.24, "text": " And then you have batch stats which contains these types of non-trainable variables where"}, {"start": 2910.24, "end": 2913.72, "text": " you take the batch of images say and you calculate some statistics."}, {"start": 2913.72, "end": 2918.08, "text": " So for example, in the case of batch norm, you calculate the mean statistics, you calculate"}, {"start": 2918.08, "end": 2923.36, "text": " the variance and then you use that to do the logic of the 
layer."}, {"start": 2923.36, "end": 2927.52, "text": " Okay, so it's very similar to self.param."}, {"start": 2927.52, "end": 2932.2, "text": " You just have the collection name here, then you have the name, then you have the initializer"}, {"start": 2932.2, "end": 2934.84, "text": " function here."}, {"start": 2934.84, "end": 2936.6, "text": " We are just being explicit here."}, {"start": 2936.6, "end": 2939.98, "text": " We just have the zeros function and then we pass in the shape."}, {"start": 2939.98, "end": 2946.64, "text": " We have one and then we have a column here because basically the zero dimension is batch"}, {"start": 2946.64, "end": 2947.64, "text": " dimension."}, {"start": 2947.64, "end": 2951.72, "text": " So we're gonna just have the shape of a single data point here."}, {"start": 2951.72, "end": 2953.18, "text": " Then we have a bias."}, {"start": 2953.18, "end": 2958.2799999999997, "text": " So bias is a trainable variable and this will be added."}, {"start": 2958.2799999999997, "end": 2962.9, "text": " So this thing here is kind of implicitly set to params here."}, {"start": 2962.9, "end": 2965.3199999999997, "text": " So I noted, I have it here noted."}, {"start": 2965.3199999999997, "end": 2971.7999999999997, "text": " So by default, we'll add this variable to params collection, whereas the above will"}, {"start": 2971.7999999999997, "end": 2974.7599999999998, "text": " be added to the batch stats collection."}, {"start": 2974.76, "end": 2979.1600000000003, "text": " Here because of some of the implementation details, unfortunately, we have to pass the"}, {"start": 2979.1600000000003, "end": 2983.92, "text": " key because that's the expected signature of the function for this self.param, even"}, {"start": 2983.92, "end": 2988.1200000000003, "text": " though we're not using it in this particular case."}, {"start": 2988.1200000000003, "end": 2989.1200000000003, "text": " So yeah."}, {"start": 2989.1200000000003, "end": 2993.88, "text": " And finally, before I explain this snippet here, EMA stands for Exponentially Moving"}, {"start": 2993.88, "end": 2997.0800000000004, "text": " Average, just for you to understand the semantics there."}, {"start": 2997.08, "end": 3004.7599999999998, "text": " So what we do here is if we have initialized the EMA variable already, then we can start"}, {"start": 3004.7599999999998, "end": 3010.68, "text": " in the forward pass, we're going to update it using this exponentially moving logic."}, {"start": 3010.68, "end": 3012.56, "text": " So what we do is we take the decay."}, {"start": 3012.56, "end": 3016.84, "text": " Decay is usually some larger number like 099 or something."}, {"start": 3016.84, "end": 3019.06, "text": " And then you keep, you kind of have inertia."}, {"start": 3019.06, "end": 3026.36, "text": " You keep the old value with a higher probability and then one minus that and then you do the"}, {"start": 3026.36, "end": 3029.4, "text": " mean of the current batch of data."}, {"start": 3029.4, "end": 3032.84, "text": " So this is kind of damping, has this damping effect."}, {"start": 3032.84, "end": 3037.6400000000003, "text": " I'm trying to teach you flex here, but like I think that explaining some of the machine"}, {"start": 3037.6400000000003, "end": 3039.96, "text": " learning concepts along the way may help a lot of you."}, {"start": 3039.96, "end": 3043.04, "text": " So that's why I'm doing this."}, {"start": 3043.04, "end": 3049.4, "text": " Finally we just returned the return is X, which is input data minus the 
exponentially"}, {"start": 3049.4, "end": 3052.6, "text": " moving average here plus bias."}, {"start": 3052.6, "end": 3057.92, "text": " And you may wonder why do we have dot value and the reason is that self dot variable returns"}, {"start": 3057.92, "end": 3060.42, "text": " a reference, not the value itself."}, {"start": 3060.42, "end": 3065.04, "text": " So that's why you have to kind of do the dot value to get the actual value of the variable."}, {"start": 3065.04, "end": 3068.72, "text": " Okay, this will make a bit more sense in a couple of seconds."}, {"start": 3068.72, "end": 3072.02, "text": " Again, very, very familiar pattern here."}, {"start": 3072.02, "end": 3074.2799999999997, "text": " We form the random states."}, {"start": 3074.2799999999997, "end": 3076.0, "text": " We instantiate the model here."}, {"start": 3076.0, "end": 3077.8399999999997, "text": " We form the data."}, {"start": 3077.8399999999997, "end": 3079.66, "text": " So we have 10 data points."}, {"start": 3079.66, "end": 3081.72, "text": " Each data point has four dimensions."}, {"start": 3081.72, "end": 3086.56, "text": " Again, we initialize the model, we get the variables and then we print them here."}, {"start": 3086.56, "end": 3090.9599999999996, "text": " So let me just kind of run this and show you already what's going on here."}, {"start": 3090.9599999999996, "end": 3094.8799999999997, "text": " So here is the frozen dict, which we've, we're used to seeing this."}, {"start": 3094.8799999999997, "end": 3101.7999999999997, "text": " And now aside from params, we also have the batch stats where we have this EMA variable."}, {"start": 3101.7999999999997, "end": 3106.9199999999996, "text": " And so yeah, this is the structure, the underlying structure of this, of this contrived model."}, {"start": 3106.92, "end": 3112.16, "text": " Only the values are zero because that's the initializer we've been, we've set here, the"}, {"start": 3112.16, "end": 3113.16, "text": " zeros."}, {"start": 3113.16, "end": 3115.92, "text": " And then what we do here is we apply."}, {"start": 3115.92, "end": 3118.12, "text": " So this is the novel part of notation."}, {"start": 3118.12, "end": 3125.6, "text": " I want you to remember we pass the variables and then we pass the input and then we tell"}, {"start": 3125.6, "end": 3131.56, "text": " flex, Hey, batch stats are actually mutable, which means that during the forward pass,"}, {"start": 3131.56, "end": 3133.32, "text": " the value is going to change."}, {"start": 3133.32, "end": 3137.7200000000003, "text": " Whereas for regular parameters, when you do like a feed forward to a neural network, you"}, {"start": 3137.7200000000003, "end": 3139.7200000000003, "text": " will not be changing the weights."}, {"start": 3139.7200000000003, "end": 3145.56, "text": " But here for these types of variables, they're going to change during the forward pass."}, {"start": 3145.56, "end": 3149.1200000000003, "text": " And that's the reason why we have to have this distinction and different collections."}, {"start": 3149.1200000000003, "end": 3150.1200000000003, "text": " Okay."}, {"start": 3150.1200000000003, "end": 3154.76, "text": " So we get the output and we get this time, we get a tuple, not just the output, we get"}, {"start": 3154.76, "end": 3160.6000000000004, "text": " the updated non-drainable parameters and I've printed them here and you can see how all"}, {"start": 3160.6, "end": 3168.36, "text": " of the zeros now converted into this, which is because we've done, we have data 
X, we've"}, {"start": 3168.36, "end": 3170.7999999999997, "text": " done the mean across the data."}, {"start": 3170.7999999999997, "end": 3174.24, "text": " So we've aggregated those 10 vectors into a single vector."}, {"start": 3174.24, "end": 3180.56, "text": " And then here using this update rule, we've kind of have a novel, novel value here."}, {"start": 3180.56, "end": 3186.0, "text": " So all in all, what I want you to take out of from here is there are multiple collections"}, {"start": 3186.0, "end": 3189.96, "text": " because we have multiple semantics, depending whether we have trainable parameters or these"}, {"start": 3189.96, "end": 3193.4, "text": " types of other variables."}, {"start": 3193.4, "end": 3198.6, "text": " And again, we have this mutable keyword, which you should remember alongside this self dot"}, {"start": 3198.6, "end": 3199.6, "text": " variable."}, {"start": 3199.6, "end": 3202.68, "text": " So those are some of the important flex idiosyncrasies."}, {"start": 3202.68, "end": 3204.28, "text": " Okay."}, {"start": 3204.28, "end": 3210.12, "text": " Let's continue here and let's form like an update function, a training function."}, {"start": 3210.12, "end": 3211.7200000000003, "text": " Let's see how we could train this function."}, {"start": 3211.7200000000003, "end": 3215.88, "text": " And once you understand this, trust me, it's going to be way easier because we are now"}, {"start": 3215.88, "end": 3219.56, "text": " digging really deep into all of the nitty gritty details."}, {"start": 3219.56, "end": 3227.68, "text": " It's going to be much more easy, much more easy, much easier to actually train flex models"}, {"start": 3227.68, "end": 3228.68, "text": " than this."}, {"start": 3228.68, "end": 3229.68, "text": " Okay."}, {"start": 3229.68, "end": 3233.6, "text": " We pass bunch of arguments here because we're still not using the train state, which I'm"}, {"start": 3233.6, "end": 3235.44, "text": " going to later introduce."}, {"start": 3235.44, "end": 3236.98, "text": " So bear with me here."}, {"start": 3236.98, "end": 3238.96, "text": " We pass the params into loss function."}, {"start": 3238.96, "end": 3241.64, "text": " And what we do here, we pass the apply function."}, {"start": 3241.64, "end": 3245.72, "text": " Apply function is basically your model apply."}, {"start": 3245.72, "end": 3252.0, "text": " So we have model here and then I'm going to pass model dot apply here into the update"}, {"start": 3252.0, "end": 3253.0, "text": " step."}, {"start": 3253.0, "end": 3254.0, "text": " So that's the apply function."}, {"start": 3254.0, "end": 3256.3999999999996, "text": " So it's basically how you do the feed forward."}, {"start": 3256.3999999999996, "end": 3259.8799999999997, "text": " Then we pass the variables and here we're being explicit."}, {"start": 3259.8799999999997, "end": 3266.24, "text": " So I have params collection and I pass the params and we have the non-trainable params,"}, {"start": 3266.24, "end": 3270.9599999999996, "text": " which I just do the double star here to unpack the keys and the values."}, {"start": 3270.9599999999996, "end": 3273.9599999999996, "text": " And this is how you basically form the variables."}, {"start": 3273.96, "end": 3277.12, "text": " We will soon see why we have to split these two."}, {"start": 3277.12, "end": 3280.12, "text": " And then we pass the input and then we have the mutable."}, {"start": 3280.12, "end": 3284.92, "text": " Again here we're just being a bit more general and we say, give me the 
keys that correspond"}, {"start": 3284.92, "end": 3287.0, "text": " to those non-trainable parameters."}, {"start": 3287.0, "end": 3291.84, "text": " So batch stats would be one of those keys in one of the last examples."}, {"start": 3291.84, "end": 3294.48, "text": " So just to give you some idea."}, {"start": 3294.48, "end": 3300.0, "text": " Finally here the loss, because this example is super contrived, this doesn't make much"}, {"start": 3300.0, "end": 3301.0, "text": " sense."}, {"start": 3301.0, "end": 3305.84, "text": " Basically once we have the loss function, we do this is a very standard pattern."}, {"start": 3305.84, "end": 3309.64, "text": " You've seen this already a couple of times, like only in this notebook and you'll be seeing"}, {"start": 3309.64, "end": 3313.48, "text": " it even more once you start coding in Flex."}, {"start": 3313.48, "end": 3319.68, "text": " Basically you pass the loss function to the value and grad transform function."}, {"start": 3319.68, "end": 3326.4, "text": " And we say has auxiliary, we set to true because aside from loss, loss function is returning"}, {"start": 3326.4, "end": 3329.44, "text": " these updated non-trainable parameters."}, {"start": 3329.44, "end": 3334.92, "text": " And remember, once we do a forward pass, we are kind of mutating those parameters."}, {"start": 3334.92, "end": 3337.44, "text": " That's why we have to have this mutable in the first place."}, {"start": 3337.44, "end": 3338.68, "text": " Okay, cool."}, {"start": 3338.68, "end": 3340.58, "text": " So that's what we get here."}, {"start": 3340.58, "end": 3345.6, "text": " Once we pass the parameters, we get back the gradients of those parameters with respect"}, {"start": 3345.6, "end": 3351.12, "text": " to loss function and we return the value because we have value and grad."}, {"start": 3351.12, "end": 3356.68, "text": " So we return back the loss and the updated non-trainable parameters."}, {"start": 3356.68, "end": 3359.3, "text": " Then we have this common pattern using OptX."}, {"start": 3359.3, "end": 3364.1200000000003, "text": " We update using the optimizer state and then we apply the updates to the parameters to"}, {"start": 3364.1200000000003, "end": 3365.76, "text": " get novel parameters."}, {"start": 3365.76, "end": 3366.76, "text": " And that's it."}, {"start": 3366.76, "end": 3371.2000000000003, "text": " Then we return a bunch of arguments here because we're still not using the train state and"}, {"start": 3371.2000000000003, "end": 3374.0600000000004, "text": " I'm going to introduce that I think in one of the next cells."}, {"start": 3374.0600000000004, "end": 3376.0800000000004, "text": " So yeah, stay tuned."}, {"start": 3376.0800000000004, "end": 3377.2400000000002, "text": " Cool."}, {"start": 3377.2400000000002, "end": 3382.76, "text": " So let's see how this whole like how everything comes into a single picture."}, {"start": 3382.76, "end": 3389.0600000000004, "text": " So we have Winstonchi the model here, we have dummy data and so we initialize the model,"}, {"start": 3389.06, "end": 3390.32, "text": " we get the variables."}, {"start": 3390.32, "end": 3395.0, "text": " Now here, because with this time we have both the trainable as well as the non-trainable"}, {"start": 3395.0, "end": 3400.72, "text": " variables, we split them using this pop function and then because we just have a dictionary,"}, {"start": 3400.72, "end": 3401.94, "text": " so this is going to work."}, {"start": 3401.94, "end": 3407.52, "text": " And then they usually do 
this in the documentation, they delete the variables to avoid wasting"}, {"start": 3407.52, "end": 3409.2, "text": " resources."}, {"start": 3409.2, "end": 3416.96, "text": " And then we kind of create the optimizer here, we initialize the state and then we have just"}, {"start": 3416.96, "end": 3425.08, "text": " a simple for loop running a couple of training steps and here is the update step that's being"}, {"start": 3425.08, "end": 3426.08, "text": " called."}, {"start": 3426.08, "end": 3432.84, "text": " So we are correctly updating the parameters as well as treating the non-trainable variables"}, {"start": 3432.84, "end": 3434.4, "text": " in a correct manner."}, {"start": 3434.4, "end": 3438.7200000000003, "text": " So this is as nitty gritty as you're going to get in this video."}, {"start": 3438.7200000000003, "end": 3442.12, "text": " So if you followed me so far, congrats."}, {"start": 3442.12, "end": 3446.04, "text": " If not, you're still won't have any problems following the rest of the notebook."}, {"start": 3446.04, "end": 3456.0, "text": " So now we're going to go up in the abstraction and see one example where you basically have"}, {"start": 3456.0, "end": 3459.7, "text": " the most complex syntax you can have in Flex."}, {"start": 3459.7, "end": 3466.2799999999997, "text": " So I'm kind of going to mix a lot of various layers which are going to introduce both those"}, {"start": 3466.2799999999997, "end": 3472.64, "text": " non-trainable variables as well as some stochasticity in the forward pass via a dropout."}, {"start": 3472.64, "end": 3475.12, "text": " So let's see that example here."}, {"start": 3475.12, "end": 3480.24, "text": " We have this DDM block and the name DDM because we have dense dropout batch norm which is"}, {"start": 3480.24, "end": 3485.48, "text": " a fairly regular block you'll see like in also in CNNs or whatnot."}, {"start": 3485.48, "end": 3488.4, "text": " It's a fairly regular stack of layers."}, {"start": 3488.4, "end": 3492.68, "text": " Again it's a data class so we have number of neurons specified here and the training"}, {"start": 3492.68, "end": 3500.52, "text": " flag and depending on the value of this training flag, we'll have different behavior from dropout"}, {"start": 3500.52, "end": 3502.44, "text": " and batch norm here."}, {"start": 3502.44, "end": 3509.04, "text": " So what happens is we instantiate the feed forward layer here using the num neurons here,"}, {"start": 3509.04, "end": 3515.28, "text": " the num neurons variable and then we pass whatever like comes as the output, we pass"}, {"start": 3515.28, "end": 3517.04, "text": " it into dropout."}, {"start": 3517.04, "end": 3524.28, "text": " The dropout has 50% probability rate of kind of dropping the activations and again we have"}, {"start": 3524.28, "end": 3525.28, "text": " this deterministic."}, {"start": 3525.28, "end": 3531.14, "text": " So if we are training, that means not true will be false which means we are not deterministic"}, {"start": 3531.14, "end": 3532.14, "text": " during the training."}, {"start": 3532.14, "end": 3536.52, "text": " We want to have the stochasticity but when we are evaluating then the training flag is"}, {"start": 3536.52, "end": 3538.44, "text": " set to false so this will be true."}, {"start": 3538.44, "end": 3543.68, "text": " We're going to be deterministic and basically dropout is going to be kind of turned off"}, {"start": 3543.68, "end": 3545.44, "text": " pretty much."}, {"start": 3545.44, "end": 3550.72, "text": " Finally we take 
the output from dropout, we pass it into batch norm, we have use running"}, {"start": 3550.72, "end": 3553.2799999999997, "text": " average again depends on the training flag."}, {"start": 3553.2799999999997, "end": 3558.72, "text": " If training flag is true, if we are training then we pass false because we don't want to"}, {"start": 3558.72, "end": 3560.48, "text": " use the running average."}, {"start": 3560.48, "end": 3563.0, "text": " We want to use the batch average, that's how batch norm works."}, {"start": 3563.0, "end": 3567.68, "text": " So you take a batch, you calculate the statistics and you use those parameters instead of using"}, {"start": 3567.68, "end": 3571.2400000000002, "text": " the parameters that were accumulated throughout the training."}, {"start": 3571.2400000000002, "end": 3573.0, "text": " So that's it."}, {"start": 3573.0, "end": 3576.96, "text": " This is more of a machine learning concept than a flex concept so I'm going to not dwell"}, {"start": 3576.96, "end": 3578.8, "text": " too much on it."}, {"start": 3578.8, "end": 3585.06, "text": " Again familiar pattern, we form a bunch of random states here, we instantiate our block,"}, {"start": 3585.06, "end": 3588.64, "text": " we get some random, some dummy data here."}, {"start": 3588.64, "end": 3593.72, "text": " We initialize the model and here are some additional idiosyncrasies of flex."}, {"start": 3593.72, "end": 3599.8799999999997, "text": " So this time when we want to initialize and this may be a bit confusing, we have to pass"}, {"start": 3599.8799999999997, "end": 3607.56, "text": " like a random state for both the params collection as well as for the dropout."}, {"start": 3607.56, "end": 3614.7799999999997, "text": " And it doesn't make much sense semantically because I mean dropout cannot be initialized,"}, {"start": 3614.7799999999997, "end": 3618.24, "text": " you don't have any learnable parameters there but just because of the way how everything"}, {"start": 3618.24, "end": 3621.52, "text": " is implemented you kind of have to have this additional step."}, {"start": 3621.52, "end": 3623.1, "text": " So keep that in mind."}, {"start": 3623.1, "end": 3625.7999999999997, "text": " Then we pass the X and everything else remains the same."}, {"start": 3625.7999999999997, "end": 3631.12, "text": " We get the variables back and then we apply and now here comes additional part because"}, {"start": 3631.12, "end": 3637.72, "text": " now we have dropout and dropout as you saw complicates both the initialization procedure"}, {"start": 3637.72, "end": 3639.8199999999997, "text": " as well as the apply procedure."}, {"start": 3639.8199999999997, "end": 3646.9599999999996, "text": " So here we have to have this RNG's keyword and basically what that means is you set the"}, {"start": 3646.96, "end": 3652.84, "text": " random state so you seed the dropout procedure so you can again reproduce it later on."}, {"start": 3652.84, "end": 3657.2, "text": " So that's why it makes a lot of sense to have it here although it does not make much sense"}, {"start": 3657.2, "end": 3660.16, "text": " to have dropout here to the best of my knowledge."}, {"start": 3660.16, "end": 3666.52, "text": " So yeah, outcome the output, the Y and the non-trainable variables, the non-trainable"}, {"start": 3666.52, "end": 3667.52, "text": " parameters."}, {"start": 3667.52, "end": 3674.68, "text": " I'm kind of mixing the terminology here between hopefully you're getting the gist of this."}, {"start": 3674.68, "end": 3680.6, "text": " 
And if we want to run this in evaluation mode where training flag is set to false, the apply"}, {"start": 3680.6, "end": 3683.3999999999996, "text": " function simplifies by a lot."}, {"start": 3683.3999999999996, "end": 3690.3799999999997, "text": " We now don't have to pass the RNG's because dropout is effectively turned off and we also"}, {"start": 3690.3799999999997, "end": 3697.12, "text": " don't have to care about mutable part because batch norm just won't be updated."}, {"start": 3697.12, "end": 3702.96, "text": " Those internal kind of statistics will not be updated so we don't have to include the"}, {"start": 3702.96, "end": 3704.12, "text": " mutable."}, {"start": 3704.12, "end": 3707.8599999999997, "text": " So this is as complicated as it gets."}, {"start": 3707.8599999999997, "end": 3710.12, "text": " Remember a couple of these novel keywords."}, {"start": 3710.12, "end": 3716.64, "text": " We have the mutable, we have the inclusion of RNG's for the apply when we have the stochasticity"}, {"start": 3716.64, "end": 3718.3199999999997, "text": " in the forward pass."}, {"start": 3718.3199999999997, "end": 3723.7999999999997, "text": " And we have to during initialization if we have dropout we have to kind of specify keys"}, {"start": 3723.7999999999997, "end": 3725.56, "text": " per collection."}, {"start": 3725.56, "end": 3733.16, "text": " And this is why I said Haiku seems to be cleaner in my honest opinion compared to Flex because"}, {"start": 3733.16, "end": 3735.2, "text": " they don't have the separation."}, {"start": 3735.2, "end": 3737.44, "text": " It's kind of cleaner."}, {"start": 3737.44, "end": 3740.2, "text": " Like all of these signatures simplified by a lot."}, {"start": 3740.2, "end": 3745.48, "text": " Now whether that means Haiku loses some of the expressivity that's I guess worth thinking"}, {"start": 3745.48, "end": 3747.2799999999997, "text": " about but yeah."}, {"start": 3747.2799999999997, "end": 3748.3999999999996, "text": " Okay that was it."}, {"start": 3748.3999999999996, "end": 3753.56, "text": " Now we are in the last section of this video which is already overly long."}, {"start": 3753.56, "end": 3755.44, "text": " Hopefully you're following along."}, {"start": 3755.44, "end": 3760.04, "text": " Basically I'm gonna walk you through this fully fledged convolutional neural network"}, {"start": 3760.04, "end": 3762.3599999999997, "text": " example on MNIST."}, {"start": 3762.36, "end": 3766.56, "text": " We're gonna code it up in Flex and this is basically now that you have all of the components"}, {"start": 3766.56, "end": 3771.6, "text": " in your mind this is gonna make hopefully a lot of sense and you're gonna see how Flex"}, {"start": 3771.6, "end": 3775.84, "text": " loops and training like code actually looks like."}, {"start": 3775.84, "end": 3777.92, "text": " Okay let's start by defining the model."}, {"start": 3777.92, "end": 3779.6, "text": " We have a convolutional neural network."}, {"start": 3779.6, "end": 3781.84, "text": " We're using this MN compact pattern."}, {"start": 3781.84, "end": 3785.96, "text": " A lot of hard coding here for the demo purpose I guess it's fine."}, {"start": 3785.96, "end": 3791.7200000000003, "text": " Basically a lot of convolutional layers followed by relu's and then here and there we have"}, {"start": 3791.72, "end": 3797.12, "text": " some pooling average pooling then we flatten out the shapes and we have a couple of linear"}, {"start": 3797.12, "end": 3802.2799999999997, "text": " layers here and 
finally we have as the output the log softmax of the outputs."}, {"start": 3802.2799999999997, "end": 3806.3199999999997, "text": " That's it and we have 10 classes because we are dealing with MNIST as you recall."}, {"start": 3806.3199999999997, "end": 3814.3599999999997, "text": " Okay let's run this thing to define the neural network there and now I'm just going to reuse"}, {"start": 3814.3599999999997, "end": 3820.2799999999997, "text": " the the PyTorch data loading functionality from from tutorial number three of Jack series."}, {"start": 3820.28, "end": 3822.6400000000003, "text": " So I won't be diving deeper here."}, {"start": 3822.6400000000003, "end": 3828.36, "text": " Basically I have some collate function that's supposed to create non-Py arrays out of out"}, {"start": 3828.36, "end": 3833.28, "text": " of PyTorch tensors because that's what a Jacks and thus Flex is dealing with."}, {"start": 3833.28, "end": 3839.5600000000004, "text": " I have some custom transformations to just kind of manipulate the MNIST image and make"}, {"start": 3839.5600000000004, "end": 3842.1200000000003, "text": " it a zero one range flow through the two etc."}, {"start": 3842.1200000000003, "end": 3844.0400000000004, "text": " Some need to degree deals."}, {"start": 3844.0400000000004, "end": 3845.8, "text": " Basically we form the data sets here."}, {"start": 3845.8, "end": 3850.28, "text": " We have the data loaders with certain batch size like 128 images per batch."}, {"start": 3850.28, "end": 3856.44, "text": " I load the training images and the labels as well as the test images and specifically"}, {"start": 3856.44, "end": 3862.0800000000004, "text": " I load the whole test data set because it's fairly small for MNIST so we can kind of keep"}, {"start": 3862.0800000000004, "end": 3864.1200000000003, "text": " this in memory no problem."}, {"start": 3864.1200000000003, "end": 3871.4, "text": " Cool let's run the cell to load the data and now let's visualize a single image from from"}, {"start": 3871.4, "end": 3872.88, "text": " this data set."}, {"start": 3872.88, "end": 3876.04, "text": " So it's just like a regular MNIST image."}, {"start": 3876.04, "end": 3877.6800000000003, "text": " You can see number seven here."}, {"start": 3877.6800000000003, "end": 3882.2000000000003, "text": " You can see the ground truth label printed here on the screen as well."}, {"start": 3882.2000000000003, "end": 3890.28, "text": " So let's now understand a couple of functions that we'll need to to get this CNN training"}, {"start": 3890.28, "end": 3893.12, "text": " and learning on this simple MNIST data set."}, {"start": 3893.12, "end": 3899.58, "text": " So first things first we have this train step and eval step functions."}, {"start": 3899.58, "end": 3904.92, "text": " So I guess you're already familiar with most of these patterns but let me quickly walk"}, {"start": 3904.92, "end": 3907.34, "text": " you through what's going on here."}, {"start": 3907.34, "end": 3911.12, "text": " So what's happening is we have the loss function here."}, {"start": 3911.12, "end": 3913.66, "text": " What we do we do a forward pass through the CNN."}, {"start": 3913.66, "end": 3915.2999999999997, "text": " We pass the variables."}, {"start": 3915.2999999999997, "end": 3917.68, "text": " So again I'm being explicit here with the collections."}, {"start": 3917.68, "end": 3919.56, "text": " We have the params collections."}, {"start": 3919.56, "end": 3921.2999999999997, "text": " We pass the params here."}, {"start": 
3921.2999999999997, "end": 3927.12, "text": " We pass the images which is our data and we get back the logits for those images."}, {"start": 3927.12, "end": 3932.16, "text": " Now we convert the ground truth labels using the basically this one hot function."}, {"start": 3932.16, "end": 3938.56, "text": " We get the one hot labels and then what this thing here is if you can guess it's a simple"}, {"start": 3938.56, "end": 3945.3199999999997, "text": " cross entropy loss because what we do here is we do element wise multiplication and because"}, {"start": 3945.3199999999997, "end": 3953.08, "text": " both the logits as well as the one hot labels have the shape of batch size times 10 because"}, {"start": 3953.08, "end": 3955.18, "text": " we have 10 classes in MNIST."}, {"start": 3955.18, "end": 3962.98, "text": " What this does is basically like the one hot labels will kind of select the true label"}, {"start": 3962.98, "end": 3968.2, "text": " from the logits and because logits is log softmax if you remember how we define the"}, {"start": 3968.2, "end": 3969.56, "text": " CNN here."}, {"start": 3969.56, "end": 3971.04, "text": " So let me just go up there."}, {"start": 3971.04, "end": 3975.6, "text": " So we have log softmax as the output of the CNN."}, {"start": 3975.6, "end": 3978.66, "text": " So what will happen is this will be just log softmax."}, {"start": 3978.66, "end": 3980.24, "text": " Let me just find where we stopped."}, {"start": 3980.24, "end": 3984.24, "text": " Okay here and then we're going to sum across the minus one."}, {"start": 3984.24, "end": 3987.72, "text": " So that's the last dimension i.e. the dimension where we have the 10 labels."}, {"start": 3987.72, "end": 3989.8399999999997, "text": " So we're going to sum those up."}, {"start": 3989.8399999999997, "end": 3993.72, "text": " Most of those are going to be zero because as I said this is going to select just a single"}, {"start": 3993.72, "end": 3994.72, "text": " one."}, {"start": 3994.72, "end": 4000.3999999999996, "text": " So we end up with a bunch of logs and then we just kind of do the mean across the dimension"}, {"start": 4000.3999999999996, "end": 4002.3599999999997, "text": " zero by default."}, {"start": 4002.3599999999997, "end": 4008.8599999999997, "text": " So that's across the batch dimension and we get like a mean of the log sum pretty much"}, {"start": 4008.8599999999997, "end": 4012.2, "text": " which is the cross entropy loss with the minus here."}, {"start": 4012.2, "end": 4016.4399999999996, "text": " And we pretty much return back the loss as well as the logits."}, {"start": 4016.4399999999996, "end": 4020.52, "text": " And then we have like a very very familiar pattern already."}, {"start": 4020.52, "end": 4025.68, "text": " We have the value and grad function we set as auxiliary to true because we're not only"}, {"start": 4025.68, "end": 4027.72, "text": " returning loss."}, {"start": 4027.72, "end": 4030.0, "text": " We're also returning the logits."}, {"start": 4030.0, "end": 4031.9199999999996, "text": " And then this is the new thing."}, {"start": 4031.9199999999996, "end": 4036.2, "text": " So we have this state now and that's the thing I've been mentioning throughout the video."}, {"start": 4036.2, "end": 4039.4199999999996, "text": " We're going to soon see what exactly and how we construct that."}, {"start": 4039.42, "end": 4046.08, "text": " But basically we take out the parameters which is just they form the part of our of our training"}, {"start": 4046.08, "end": 4047.6, 
"text": " state."}, {"start": 4047.6, "end": 4052.88, "text": " So parameters are basically the weights of our neural network and we pass those and basically"}, {"start": 4052.88, "end": 4056.96, "text": " we get the gradients back and we get the logits."}, {"start": 4056.96, "end": 4060.38, "text": " Then we just call state apply gradients."}, {"start": 4060.38, "end": 4067.2400000000002, "text": " So now we don't have the two optics line because those are hidden in the apply gradients function."}, {"start": 4067.24, "end": 4072.7999999999997, "text": " And we just pass the gradients and get the state back which is the updated state."}, {"start": 4072.7999999999997, "end": 4074.6, "text": " Next up we compute the metrics."}, {"start": 4074.6, "end": 4077.7999999999997, "text": " We just pass the logits and ground truth labels."}, {"start": 4077.7999999999997, "end": 4081.56, "text": " We get the metrics which is a simple dictionary containing loss and accuracy."}, {"start": 4081.56, "end": 4082.56, "text": " We'll soon see that."}, {"start": 4082.56, "end": 4084.7599999999998, "text": " Not that important though."}, {"start": 4084.7599999999998, "end": 4086.58, "text": " Evaluation step again very simple."}, {"start": 4086.58, "end": 4088.6, "text": " We pass the test images here."}, {"start": 4088.6, "end": 4090.56, "text": " We call we do a forward pass."}, {"start": 4090.56, "end": 4094.72, "text": " We get logits back and we just compute the metrics because we want to evaluate the model"}, {"start": 4094.72, "end": 4098.32, "text": " on the test images and that's it."}, {"start": 4098.32, "end": 4100.28, "text": " Okay let's continue here."}, {"start": 4100.28, "end": 4106.719999999999, "text": " Now that we have the training step and evaluation step defined we can basically just kind of"}, {"start": 4106.719999999999, "end": 4110.24, "text": " call those in a loop for the whole epoch."}, {"start": 4110.24, "end": 4113.92, "text": " So we iterate here through the PyTorch data loader."}, {"start": 4113.92, "end": 4116.04, "text": " We are fetching images and labels."}, {"start": 4116.04, "end": 4122.92, "text": " We start calling the train step over and over again and we're just collecting the metrics"}, {"start": 4122.92, "end": 4125.92, "text": " across all of those batches."}, {"start": 4125.92, "end": 4130.26, "text": " And then what we do is we call device get which if you recall from one of the previous"}, {"start": 4130.26, "end": 4135.6, "text": " videos we'll just take those like the data that's currently on the accelerator whatever"}, {"start": 4135.6, "end": 4138.36, "text": " that is GPU or TPU or whatnot."}, {"start": 4138.36, "end": 4141.2, "text": " It's going to fetch it back to the host memory."}, {"start": 4141.2, "end": 4143.76, "text": " So that's why we have the MP suffix here."}, {"start": 4143.76, "end": 4150.02, "text": " And then I just do some simple aggregation here across the batches and I return the final"}, {"start": 4150.02, "end": 4154.240000000001, "text": " state and all of those metrics back from the function."}, {"start": 4154.240000000001, "end": 4156.080000000001, "text": " Evaluate model fairly similar."}, {"start": 4156.080000000001, "end": 4158.68, "text": " We just call the eval step just once here."}, {"start": 4158.68, "end": 4164.46, "text": " We don't have to loop because if you recall I'm loading the whole test data set and so"}, {"start": 4164.46, "end": 4167.240000000001, "text": " a single line of code here will work."}, {"start": 
4167.240000000001, "end": 4173.72, "text": " We do the same thing device get to pull on to host and then we have to do this dot item"}, {"start": 4173.72, "end": 4179.02, "text": " because this is just going to contain a NumPy arrays with a single element."}, {"start": 4179.02, "end": 4183.84, "text": " So you have to kind of call the dot item to get the scalar from the NumPy array."}, {"start": 4183.84, "end": 4184.84, "text": " And that's it."}, {"start": 4184.84, "end": 4186.56, "text": " We return back the metrics."}, {"start": 4186.56, "end": 4187.56, "text": " Okay."}, {"start": 4187.56, "end": 4194.240000000001, "text": " Finally, these are the two last functions that we need before we see the main training loop."}, {"start": 4194.240000000001, "end": 4196.1, "text": " And this one is fairly important."}, {"start": 4196.1, "end": 4200.72, "text": " So this one makes things I mean it makes things it's a syntactic sugar but it makes things"}, {"start": 4200.72, "end": 4204.96, "text": " way cleaner and thus less error prone."}, {"start": 4204.96, "end": 4208.240000000001, "text": " So what we do here is we instantiate the CNN."}, {"start": 4208.24, "end": 4212.719999999999, "text": " We basically initialize the parameters."}, {"start": 4212.719999999999, "end": 4213.719999999999, "text": " We pass the key."}, {"start": 4213.719999999999, "end": 4215.0199999999995, "text": " We pass the dummy data."}, {"start": 4215.0199999999995, "end": 4220.48, "text": " So just we pass all ones and then we extract from that frozen dict we just extract the"}, {"start": 4220.48, "end": 4228.76, "text": " params collection because that's the format that this train state here will expect."}, {"start": 4228.76, "end": 4234.92, "text": " And then we just kind of call and form this SGT optimizer without having to optimize it."}, {"start": 4234.92, "end": 4240.26, "text": " Without having to initialize it because that's going to happen inside of this train state"}, {"start": 4240.26, "end": 4241.26, "text": " class."}, {"start": 4241.26, "end": 4242.56, "text": " So that's cool."}, {"start": 4242.56, "end": 4246.28, "text": " We're kind of delegating a lot of the functionality there."}, {"start": 4246.28, "end": 4252.8, "text": " And finally, we also pass the CNN apply so that we can then do some more advanced logic"}, {"start": 4252.8, "end": 4254.76, "text": " as we saw there in the loop."}, {"start": 4254.76, "end": 4257.28, "text": " So here we call the apply gradients."}, {"start": 4257.28, "end": 4260.68, "text": " We can do that because we have all of those ingredients inside."}, {"start": 4260.68, "end": 4262.12, "text": " Okay."}, {"start": 4262.12, "end": 4266.8, "text": " Let me kind of quickly walk you through that class so that it's kind of clear what's going"}, {"start": 4266.8, "end": 4267.8, "text": " on."}, {"start": 4267.8, "end": 4269.76, "text": " Let me open it up."}, {"start": 4269.76, "end": 4273.5599999999995, "text": " If I open it up and here is the train state function."}, {"start": 4273.5599999999995, "end": 4275.16, "text": " So we have the apply gradients."}, {"start": 4275.16, "end": 4279.74, "text": " We saw that one in the training loop and we have the create function."}, {"start": 4279.74, "end": 4285.44, "text": " So basically you can see here the optimizer is initialized internally."}, {"start": 4285.44, "end": 4291.08, "text": " So yeah, like as much work as possible was delegated to this function so that we don't"}, {"start": 4291.08, "end": 4294.2, "text": " have to 
externally do the initialization."}, {"start": 4294.2, "end": 4298.24, "text": " And here you can see what it does when we call the apply gradients."}, {"start": 4298.24, "end": 4301.0199999999995, "text": " It basically does those two optics functions."}, {"start": 4301.0199999999995, "end": 4302.58, "text": " So we have the update."}, {"start": 4302.58, "end": 4305.96, "text": " We get the updates in the new optimizer state and then we apply the updates."}, {"start": 4305.96, "end": 4311.4, "text": " It's just a simple, simple function that makes things a bit cleaner in our code."}, {"start": 4311.4, "end": 4312.4, "text": " So that's it."}, {"start": 4312.4, "end": 4313.84, "text": " Let me go back here."}, {"start": 4313.84, "end": 4316.4, "text": " So let me run this if I haven't already."}, {"start": 4316.4, "end": 4321.96, "text": " And we are getting to the final cell of this notebook where we have the training loop."}, {"start": 4321.96, "end": 4324.16, "text": " We create the training state."}, {"start": 4324.16, "end": 4329.48, "text": " We have it here and you can see how neat the training loop actually looks like."}, {"start": 4329.48, "end": 4331.679999999999, "text": " So we just call the train for one epoch."}, {"start": 4331.679999999999, "end": 4336.639999999999, "text": " So for every epoch, we just call the train one epoch and we get back from the trains."}, {"start": 4336.639999999999, "end": 4338.12, "text": " We convert the train state."}, {"start": 4338.12, "end": 4340.04, "text": " So there's going to be modified train state."}, {"start": 4340.04, "end": 4341.32, "text": " We're going to get the metrics."}, {"start": 4341.32, "end": 4344.2, "text": " We plot the metrics like the loss and accuracy, et cetera."}, {"start": 4344.2, "end": 4346.76, "text": " And we do the same thing on the test dataset."}, {"start": 4346.76, "end": 4350.92, "text": " So we pass the test images and test labels and that's pretty much it."}, {"start": 4350.92, "end": 4354.84, "text": " So I can try and run this, but currently I don't think I have an access to a GPU."}, {"start": 4354.84, "end": 4359.84, "text": " So it's going to be a bit slower, but yeah, it's just going to print every single epoch."}, {"start": 4359.84, "end": 4362.66, "text": " It's going to print some of this data."}, {"start": 4362.66, "end": 4366.36, "text": " And yeah, that's everything you have to know pretty much."}, {"start": 4366.36, "end": 4371.48, "text": " And in my opinion, this is probably this example, even though it's way more complex and actually"}, {"start": 4371.48, "end": 4377.679999999999, "text": " does a real thing, it was way simpler compared to those contrived examples where we had to"}, {"start": 4377.679999999999, "end": 4383.959999999999, "text": " do the mutable, where we had to pass the random states for the dropout and stuff like that."}, {"start": 4383.959999999999, "end": 4392.0, "text": " So as an exercise, how would we go about adding dropout here and also batch norm?"}, {"start": 4392.0, "end": 4398.599999999999, "text": " So that would like start adding that complexity on the syntactic level."}, {"start": 4398.6, "end": 4401.4800000000005, "text": " Let's roughly think what would happen if we were to add dropout."}, {"start": 4401.4800000000005, "end": 4409.4400000000005, "text": " So first we'd have to modify this create train state function and we'd have to add not just"}, {"start": 4409.4400000000005, "end": 4413.72, "text": " the key for the params, but also for the, we'd have to 
create a kind of dictionary here"}, {"start": 4413.72, "end": 4418.52, "text": " where we'd have like params and then we'd initialize that collection with a certain"}, {"start": 4418.52, "end": 4425.08, "text": " key and then we'd have the dropout and we'd have to initialize that with another key."}, {"start": 4425.08, "end": 4428.56, "text": " And that's on the initialization side."}, {"start": 4428.56, "end": 4435.080000000001, "text": " Then in the actual training loop, what we'd have to do is we'd have to pass the key for"}, {"start": 4435.080000000001, "end": 4438.4400000000005, "text": " every single train step, we'd have to have a unique key."}, {"start": 4438.4400000000005, "end": 4445.04, "text": " So because as you recall, we'd have to add RNGs here and then create and pass the key"}, {"start": 4445.04, "end": 4446.120000000001, "text": " here."}, {"start": 4446.120000000001, "end": 4451.400000000001, "text": " So that means that for every single train step, we'll have to have a key and let me"}, {"start": 4451.400000000001, "end": 4456.240000000001, "text": " see where we're calling the train step right here."}, {"start": 4456.24, "end": 4461.8, "text": " So that means we have to have splitting happening on this line here."}, {"start": 4461.8, "end": 4465.76, "text": " So we just have to pass a specific key to the train one epoch function and then kind"}, {"start": 4465.76, "end": 4470.76, "text": " of keep splitting because we don't want to reuse the key because if you reuse the key,"}, {"start": 4470.76, "end": 4474.96, "text": " you'll get the same output, the same randomness and thus you don't have this stochasticity."}, {"start": 4474.96, "end": 4476.96, "text": " So that's very important to keep in mind."}, {"start": 4476.96, "end": 4479.48, "text": " So yeah, it's a fairly simple modification."}, {"start": 4479.48, "end": 4483.719999999999, "text": " I just wanted to kind of quickly walk you through that and for batch norm similarly,"}, {"start": 4483.72, "end": 4489.360000000001, "text": " you'd have to have the mutable keyword here and there and it would complicate things a"}, {"start": 4489.360000000001, "end": 4492.04, "text": " little bit, but definitely not by a lot."}, {"start": 4492.04, "end": 4494.6, "text": " So yeah, let's see whether we have some outputs."}, {"start": 4494.6, "end": 4500.2, "text": " Yeah, we have some outputs here and you can see the accuracy is already and we don't have"}, {"start": 4500.2, "end": 4504.360000000001, "text": " any overfitting and so yeah, we have like a 98% accuracy."}, {"start": 4504.360000000001, "end": 4505.68, "text": " That's fairly nice."}, {"start": 4505.68, "end": 4506.68, "text": " That's pretty much it."}, {"start": 4506.68, "end": 4513.12, "text": " I encourage you to go and check out the Flex documentation A and B. 
Go and check out the"}, {"start": 4513.12, "end": 4515.86, "text": " examples they have on their GitHub."}, {"start": 4515.86, "end": 4519.5199999999995, "text": " So you can see like this ImageNet example is very cool."}, {"start": 4519.5199999999995, "end": 4521.8, "text": " They have mixed precision training."}, {"start": 4521.8, "end": 4528.92, "text": " They have like training over multiple devices, then like all of that additional complexity"}, {"start": 4528.92, "end": 4531.32, "text": " and I think they can train a model in like an hour or something."}, {"start": 4531.32, "end": 4535.5599999999995, "text": " Yeah, like wall time, a couple of hours to train these, although you have to have a bunch"}, {"start": 4535.5599999999995, "end": 4539.04, "text": " of TPUs or V100s to do that."}, {"start": 4539.04, "end": 4545.72, "text": " So yeah, go check those out at your own pace and I think I mentioned in the beginning,"}, {"start": 4545.72, "end": 4552.0, "text": " Hugging Face has very nice examples as well and they've organized this community week"}, {"start": 4552.0, "end": 4553.24, "text": " which you should check out as well."}, {"start": 4553.24, "end": 4560.08, "text": " So I'm gonna link all of these resources in the video description, so do check them out"}, {"start": 4560.08, "end": 4565.8, "text": " as well and yeah, I mean Hugging Face team is doing an amazing job, so just a huge shout"}, {"start": 4565.8, "end": 4566.8, "text": " out to those guys."}, {"start": 4566.8, "end": 4568.4, "text": " Cool, that's pretty much it."}, {"start": 4568.4, "end": 4569.4, "text": " Let's do a quick recap."}, {"start": 4569.4, "end": 4576.799999999999, "text": " Let's do a short summary of what we've learned, what we've seen in this video about Flex and"}, {"start": 4576.799999999999, "end": 4582.4, "text": " basically I'm gonna open up this table of content in deep note here to kind of guide"}, {"start": 4582.4, "end": 4589.7, "text": " my thinking and let me create a code cell here using Control J shortcut."}, {"start": 4589.7, "end": 4596.12, "text": " So basically we've seen the, so on top of everything you need to know about Jax already,"}, {"start": 4596.12, "end": 4599.88, "text": " we've seen the init and apply functions."}, {"start": 4599.88, "end": 4604.44, "text": " We've seen that if you wanna create our custom modules, sometimes you'll have to use the"}, {"start": 4604.44, "end": 4610.76, "text": " self-param and self-variable functions."}, {"start": 4610.76, "end": 4617.36, "text": " Then we've seen stuff such as nn-compact which makes the construction of those custom modules"}, {"start": 4617.36, "end": 4620.12, "text": " a bit more compact obviously."}, {"start": 4620.12, "end": 4626.72, "text": " Then we saw some new keywords that we haven't seen in Jax such as mutable if you wanna have"}, {"start": 4626.72, "end": 4633.92, "text": " the non-trainable variables or RNGs if you wanna have like stochasticity such as dropout"}, {"start": 4633.92, "end": 4638.12, "text": " like in the forward pass and what else?"}, {"start": 4638.12, "end": 4643.5199999999995, "text": " We've seen the train state object which is pretty much like a wrapper state that makes"}, {"start": 4643.5199999999995, "end": 4645.0599999999995, "text": " things a little bit cleaner."}, {"start": 4645.0599999999995, "end": 4649.24, "text": " So all in all, there is not that much you need to know on top of Jax, so if you already"}, {"start": 4649.24, "end": 4655.88, "text": " have the basics kinda set up, 
then you'll be good to go and this video will hopefully"}, {"start": 4655.88, "end": 4658.08, "text": " give you enough context to get you started."}, {"start": 4658.08, "end": 4663.2, "text": " Again, huge shout out to the Deep Note team for sponsoring this video and if you like"}, {"start": 4663.2, "end": 4665.5599999999995, "text": " this one, share it out with your friends."}, {"start": 4665.5599999999995, "end": 4667.5, "text": " That's the best way you can support the channel."}, {"start": 4667.5, "end": 4669.5199999999995, "text": " Also check out the reference."}, {"start": 4669.5199999999995, "end": 4672.24, "text": " I'm gonna link below in the video description."}, {"start": 4672.24, "end": 4676.84, "text": " By using the link again, you'll get 20 hours for free using the, you can use these pro"}, {"start": 4676.84, "end": 4678.88, "text": " machines and that's it."}, {"start": 4678.88, "end": 4681.8, "text": " Subscribe to this channel and until next time, bye bye."}]
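To make the `self.param` / `nn.compact` pattern described in the transcript above concrete, here is a minimal sketch of a custom dense layer written against Flax's public linen API. It is an illustration, not the notebook's exact code; the module name `SimpleDense` and the chosen initializers are assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class SimpleDense(nn.Module):
    features: int  # number of output neurons (a dataclass field)

    @nn.compact
    def __call__(self, x):
        # self.param registers a trainable parameter in the "params" collection.
        # The PRNG key is passed to the initializer implicitly; we only give the shape.
        kernel = self.param('kernel',
                            nn.initializers.lecun_normal(),   # factory -> call it to get the init fn
                            (x.shape[-1], self.features))
        bias = self.param('bias', nn.initializers.zeros, (self.features,))  # already an init fn
        return jnp.dot(x, kernel) + bias


init_key, data_key = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(data_key, (32, 4))    # 32 data points, 4 features each
model = SimpleDense(features=8)
params = model.init(init_key, x)            # FrozenDict holding the "params" collection
y = model.apply(params, x)                  # y.shape == (32, 8)
```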
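The `self.variable` / `batch_stats` / `mutable` pattern (the contrived bias adder with a running mean) can be sketched like this, again against the public Flax API rather than the exact notebook code; the decay value of 0.99 is an assumption.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class BiasAdderWithRunningMean(nn.Module):
    decay: float = 0.99

    @nn.compact
    def __call__(self, x):
        # True only once the variable exists, i.e. after init() has run.
        is_initialized = self.has_variable('batch_stats', 'ema')
        # Non-trainable variable -> lives in the "batch_stats" collection.
        ema = self.variable('batch_stats', 'ema', jnp.zeros, x.shape[1:])
        # Trainable parameter -> goes into the "params" collection by default.
        bias = self.param('bias', nn.initializers.zeros, x.shape[1:])
        if is_initialized:
            # Exponential moving average update happens during the forward pass.
            ema.value = self.decay * ema.value + (1.0 - self.decay) * jnp.mean(x, axis=0)
        # self.variable returns a reference, hence the .value access.
        return x - ema.value + bias


model = BiasAdderWithRunningMean()
x = jax.random.normal(jax.random.PRNGKey(1), (10, 4))
variables = model.init(jax.random.PRNGKey(2), x)
# batch_stats must be declared mutable so apply() can update it and return the new state.
y, updated_state = model.apply(variables, x, mutable=['batch_stats'])
```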
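The dense + dropout + batch norm block and the extra `rngs` / `mutable` plumbing it forces on `init` and `apply` look roughly like the sketch below; the layer size and the 50% dropout rate are placeholders.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class DenseDropoutBatchNorm(nn.Module):
    features: int
    training: bool

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.features)(x)
        # Stochastic only while training.
        x = nn.Dropout(rate=0.5, deterministic=not self.training)(x)
        # Batch statistics while training, running averages at eval time.
        x = nn.BatchNorm(use_running_average=not self.training)(x)
        return x


x = jnp.ones((8, 16))
block = DenseDropoutBatchNorm(features=32, training=True)
# Dropout needs its own PRNG stream both at init time and at apply time.
variables = block.init({'params': jax.random.PRNGKey(0),
                        'dropout': jax.random.PRNGKey(1)}, x)
y, new_state = block.apply(variables, x,
                           rngs={'dropout': jax.random.PRNGKey(2)},
                           mutable=['batch_stats'])

# In eval mode neither rngs nor mutable is needed:
eval_block = DenseDropoutBatchNorm(features=32, training=False)
y_eval = eval_block.apply(variables, x)
```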
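Finally, the TrainState-based training step for the MNIST CNN follows the pattern below: cross-entropy on top of a log-softmax output, `value_and_grad` with `has_aux=True`, and `apply_gradients` hiding the two Optax calls. This is a hedged reconstruction of the pattern, not the notebook's code; the SGD hyperparameters and the dummy input shape are assumptions, and `model` stands for a CNN module like the one described above.

```python
import jax
import jax.numpy as jnp
import optax
from flax.training import train_state


def cross_entropy_loss(log_probs, labels, num_classes=10):
    # log_probs are log-softmax outputs; the one-hot labels select the true class per example.
    one_hot = jax.nn.one_hot(labels, num_classes)
    return -jnp.mean(jnp.sum(one_hot * log_probs, axis=-1))


@jax.jit
def train_step(state, images, labels):
    def loss_fn(params):
        log_probs = state.apply_fn({'params': params}, images)
        return cross_entropy_loss(log_probs, labels), log_probs

    (loss, log_probs), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params)
    state = state.apply_gradients(grads=grads)   # wraps optax update + apply_updates
    accuracy = jnp.mean(jnp.argmax(log_probs, -1) == labels)
    return state, {'loss': loss, 'accuracy': accuracy}


def create_train_state(rng, model, learning_rate=0.1, momentum=0.9):
    # Dummy MNIST-shaped input just to build the parameter shapes.
    params = model.init(rng, jnp.ones((1, 28, 28, 1)))['params']
    tx = optax.sgd(learning_rate, momentum)
    return train_state.TrainState.create(apply_fn=model.apply, params=params, tx=tx)
```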
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=0QVci2tKVJ8
A Neural Network Solves and Generates Mathematics Problems by Program Synthesis | Paper Explained
✅❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover "A Neural Network Solves and Generates Mathematics Problems by Program Synthesis: Calculus, Differential Equations, Linear Algebra, and More" paper that showed that you can solve university-level mathematics problems using OpenAI's davinci Codex model coupled with some prompt hacking. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ This video's paper: https://arxiv.org/abs/2112.15594 ✅ OpenAI Codex paper: https://arxiv.org/abs/2107.03374 ✅ Universal Computation Engine paper: https://arxiv.org/abs/2103.05247 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 High level overview of the paper 03:05 OpenAI Codex background 12:35 Prompt modifications (high level) 15:05 Testing on unseen maths course, problem examples 17:10 Prompt modifications (in depth) 25:00 Problem generation: humans vs Codex 28:25 Quantifying the problem statement modification levels 31:50 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #mathematics #codex #gpt3
What's cracking guys? So as you can see, I finally got my camera back. I'm slowly setting up my YouTube studio here in London to its previous glory, and hopefully in the next couple of weeks it's gonna get even better. Anyhow, in this video I'm gonna cover this novel paper called "A Neural Network Solves and Generates Mathematics Problems by Program Synthesis: Calculus, Differential Equations, Linear Algebra, and More". It's a collaboration between a couple of universities, among them MIT, Columbia, Harvard and the University of Waterloo. In a nutshell, what they've done is this: "neural network" here is a fancy way of saying Codex from OpenAI, and for those of you who are not familiar with the model, don't worry, I'm gonna cover it in a second and give you some background, some high-level understanding of what's going on. Hint hint: it's just a GPT-3 model fine-tuned on a codebase, a Python codebase. What they've achieved is that they've taken a bunch of problems from universities such as MIT and managed to solve every single one, which is amazing, but the trick is you have to tweak the input prompts, you have to do some prompt-programming magic to get this thing to work. In any case, these are very interesting results, and I'm gonna cover them in this video. Let me show you what's going on and how the pipeline looks. Here is an example, problem A, and the question is: find the differential of w, where w = ln(x² + y² + z²), and you basically want to find the derivatives with respect to x, y and z. You can see what they do here: they take that prompt and rephrase it in a different way, "in differential equations ...", so they add some additional context, and then they feed that into Codex, which you sample from to generate this snippet of Python text. I deliberately say text, because this is nothing more than a large language model pre-trained on code, which is just a specific type of text. Then they run the code, and they show that all the solutions Codex generated were correct, with minor or bigger tweaks, as we're going to see a bit later. That's it in a nutshell. If you're not familiar with Codex, let me briefly show you what Codex is all about. "Evaluating Large Language Models Trained on Code" is the Codex paper, and you can see a group of people from OpenAI, I guess a large proportion of their team. You shouldn't be surprised when you see a bunch of authors on a paper; it basically means this was more of an engineering effort rather than an attempt to invent some new machine learning idea. One thing worth mentioning: you've probably heard of GitHub Copilot; GitHub Copilot is basically using an improved version of the Codex from this paper in the background, so that's a kind of fun fact. Now let me jump in and explain how this thing works. As I said, it's simply a pre-trained GPT-3 model fine-tuned on a code dataset.
That dataset was collected by scraping various Python files from GitHub and then doing some post-hoc filtering. Now, an important detail here is that you don't actually need a pre-trained GPT-3 model. They say: "surprisingly, we did not observe improvements when starting from a pre-trained language model, possibly because the fine-tuning dataset is so large." And by "so large" they mean 159 gigabytes of text, which is what they got after collecting from GitHub and filtering. "Nevertheless, models fine-tuned from GPT converge more quickly, so we apply this strategy for all subsequent experiments." So the pre-training part is not about the same thing as in the "Pretrained Transformers as Universal Computation Engines" paper, I think that's the name. The idea is not to leverage the knowledge, the problem-solving logic, that GPT-3 obtained by pre-training on a huge corpus of text; the idea is just to bootstrap training and make it a bit quicker, so it's more of an optimization detail than an actually important one. If you remember that universal computation engine paper, the idea was that when you pre-train a transformer using the usual unsupervised objective, the weights you obtain are very generic and can solve problems for whatever modality you want. What they showed is that you can take the input layer, the embeddings, and discard the old ones, as well as the classification heads in the case of classification problems, fine-tune just those parts, and then apply the same transformer to images or whatnot. In this paper, though, they're just using the pre-trained weights to bootstrap the training process. Anyhow, back to the explanation. What you do is you prompt Codex with a piece of text, and it generates a continuation, the same thing as with GPT-3, which in this case is code, and this code hopefully passes the unit tests for the particular function. Usually the context is such that you have the signature of the function, then the docstring, and then you prompt the model to generate the body; that's how people usually use GitHub Copilot as well, so you may be familiar with the format. So what did Codex bring to the table? Again, Codex is just a GPT-3 model trained on a particular type of text, which is code, so we are basically just narrowing the distribution of the data we want the model to learn. The second thing, not a novel idea but a very neat one, is to use unit tests to make sure the model's output works as expected. Because of unit tests, what they can afford to do is generate k samples, for example a hundred samples, and check whether any of them passes all of the unit tests for the particular function we're trying to generate. They introduce a metric called pass@k, where k is the number of samples they generate (usually one or a hundred), and they treat a problem as solved if any one of those samples passes all of the unit tests. You can imagine, if you had 50 problems in the dataset and for every one of them at least one sample managed to pass all of the unit tests, then the pass@k metric would be equal to 1; and the more problems for which none of the samples passes all the unit tests, the more pass@k decreases towards zero. Hopefully that's self-explanatory.
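To make the pass@k idea concrete, here is a minimal sketch of how you could compute it in the simple form described above: for each problem, sample k completions and count the problem as solved if any of them passes all of its unit tests. The generate_samples and passes_all_tests functions are hypothetical stand-ins for the model call and the test harness, not anything from the Codex codebase.

from typing import Callable, Dict, List

def pass_at_k(problems: List[Dict],
              generate_samples: Callable[[str, int], List[str]],
              passes_all_tests: Callable[[Dict, str], bool],
              k: int = 100) -> float:
    # Fraction of problems for which at least one of the k sampled
    # completions passes every unit test of that problem.
    solved = 0
    for problem in problems:
        samples = generate_samples(problem["prompt"], k)
        if any(passes_all_tests(problem, code) for code in samples):
            solved += 1
    return solved / len(problems)

The Codex paper itself reports pass@k with a more careful unbiased estimator over a larger pool of samples, but the counting above is the intuition being described here.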
But the thing is, you don't have the luxury of generating a hundred samples when somebody is trying to use Codex to, for example, generate novel code, because you usually don't have unit tests, unless you're doing test-driven development, which people usually aren't. So what they had to figure out is how to pick the best possible sample from Codex, and they devised this mean log-probability heuristic, which is fairly simple. They tried a couple of other things, like using the sum instead of the mean, and the mean just worked the best. The idea is the following: you have your Codex model, you prompt it, as we saw in the chart, and then you start generating. To generate the first token you sample from a distribution over tokens; if you're decoding greedily, you'd take exactly the most probable token. You then feed that token back in (you embed it and pass the vector in), get another output distribution, and repeat. What the mean log-prob heuristic does is, for all of the generated tokens in a sample, take the log of each token's probability and average them. Intuitively, the heuristic is trying to pick out the samples that are highly probable under the Codex model. That's the whole idea, and that's pretty much it.
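As a small illustration of that ranking heuristic, and assuming you already have per-token log-probabilities back from the model (the samples structure below is a made-up format, not an OpenAI API response), picking the "best" completion could look roughly like this:

from typing import List, Tuple

def mean_logprob(token_logprobs: List[float]) -> float:
    # Average log-probability of the tokens in one generated sample.
    return sum(token_logprobs) / len(token_logprobs)

def pick_best_sample(samples: List[Tuple[str, List[float]]]) -> str:
    # samples: pairs of (generated_code, per-token log-probabilities).
    # Keep the completion whose tokens are, on average, most probable.
    return max(samples, key=lambda s: mean_logprob(s[1]))[0]

The mean normalises for length, whereas a plain sum of log-probabilities would systematically penalise longer completions, which is presumably part of why the mean variant worked better.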
Additionally, there's a part where they fine-tune on data that's cleaner than what you get by just scraping GitHub; "supervised fine-tuning" is the relevant section of the paper. What they've done is take code from competitive programming websites, which is usually of higher quality than randomly scraped code, plus they found repos which have continuous integration in place, worked out the logic to extract standalone functions from them, and used those functions as training examples. By doing this they created the Codex-S family of models, and those perform even better on the pass@k metric compared to regular Codex. You can imagine that GitHub Copilot is using some combination of these techniques. Now let's see how this new paper leveraged Codex to solve university-level problems. They use problems, as I mentioned, from various MIT courses, from a Columbia course on computational linear algebra, and from topics of the MATH benchmark. What they do is take the question of a particular mathematical problem and do three things. They either tidy questions, simplifying a word problem down to its mathematical content; or they add some context, like the topic or a library name, for example hinting to Codex "use numpy" or "use matplotlib", and they sometimes define concepts, because students attending the course are expected to already have that context, so it's kind of unfair not to give the same context to Codex; and finally, sometimes they have to interactively refine the prompt until the problem is solved. It's one thing when you know the actual solution; when you don't know the solution, this obviously will not work, so the real application area is universities and university classes, and it's got huge potential there. Here are some examples of what Codex managed to do. Codex generated a snippet of Python code, the code generated these plots, and it actually solved the problem: the problem was, what volume do you get if you rotate this 2D curve around a third axis? Imagine adding an axis and rotating the 2D curve around it, so that you eventually get a 3D body; that's what Codex had to figure out in this particular question. And there are many other things it managed to solve; every single question they threw at Codex, it managed to solve after some prompt hacking, as I like to call it. Now, one thing you may think here is: what if Codex, during the pre-training procedure, which took into account a huge amount of code on the internet, saw code that solves some of the mathematical problems from these specific universities like MIT? That was my concern for sure, and they did a good job there. They say: "we validate that our results are not merely overfitting training data by solving a new computational linear algebra course which is not available online and is therefore unseen during Codex's training; we obtain correct answers for all of the randomly sampled questions", and that's a nice result. Okay, with that out of the way, let's continue. Here you can see some example questions from these courses; they have 13 courses in total, and for each of them a representative example. For instance, one of those, in the multivariable calculus course, is: describe the graph of the function f, where f is a multivariable function of x and y equal to 10 minus the expression you can see here. What Codex managed to do is, probably using matplotlib, plot the function in 3D. You can go through these problems at your own pace if you're curious, but they vary a lot: sometimes you have to plot a function, sometimes you just have to give a numerical answer. A simple example: what is the greatest common factor of 84, 112 and 210? The model outputs 14, which is correct. Okay, now let me dig deeper into what exactly they do, how they hack the prompts so that Codex generates correct outputs. First things first, here they added "in differential equations", so that's providing the topic context, and then they rearranged the question. It's a mix of multiple things: here in particular they not only provide the topic context, they also provide the library context; you can see SymPy is the library, and they rearranged the question. This is the code that Codex generates: it imports SymPy correctly, then creates the abstract symbols x, y and z, applies the logarithm as the function requires, and finally prints the derivatives of w with respect to x, y and z. This is one of the simpler examples; there is a huge appendix, and Codex sometimes generates fairly complex programs.
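For reference, a SymPy snippet in the spirit of what's being described for the w = ln(x² + y² + z²) question might look like the following; this is my own sketch of the idea, not the exact program Codex produced (that one is in the paper's appendix):

import sympy as sp

# Symbolic variables for w = ln(x**2 + y**2 + z**2)
x, y, z = sp.symbols('x y z')
w = sp.log(x**2 + y**2 + z**2)

# Partial derivatives of w with respect to each variable
for var in (x, y, z):
    print(f"dw/d{var} =", sp.diff(w, var))   # e.g. 2*x/(x**2 + y**2 + z**2)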
Here is another example where they add multiple library hints: they have some question, they rephrase it, and they mention SymPy and streamplot, and you can indeed see that streamplot is used in the generated code, as well as SymPy, so that's cool. Finally, here is context in the form of definitions. Let's see what they've done here: "calculate the probability of getting a two-pair poker hand". Even a human, if you're not familiar with poker rules, will have difficulty solving this, because you don't know what these things are in the first place; that's why they sometimes have to add additional context in the form of definitions. So: a hand is a set of five cards that are drawn randomly from a standard 52-card deck with 13 ranks of four cards each, etc. And finally they hint: "write a program that generates simulations", so they're additionally telling Codex to generate a program, and in the form of a simulation. All of this helps the model generate a correct solution, and you can see a fairly nice for loop: it runs a simulation with, what is this, 10 million iterations, I guess; in each iteration it shuffles the deck and takes the first five cards, which is a random hand — not the most optimal way to do this, but still semantically sound — then it checks whether the hand has two pairs, and if it does, it increments a counter; finally it returns the fraction, i.e. how likely it is to get a two pair if you just draw random hands, which is exactly what was asked: calculate the probability of getting a two-pair poker hand. The model did exactly that, so that's cool.
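A small sketch of that kind of Monte Carlo simulation, written from the description above rather than copied from the paper, could look like this (the rank/suit encoding and the trial count are my own choices):

import random
from collections import Counter

# 52-card deck: 13 ranks, 4 suits each
DECK = [(rank, suit) for rank in range(13) for suit in range(4)]

def is_two_pair(hand):
    # Exactly two different ranks appear twice, plus one odd card.
    counts = sorted(Counter(rank for rank, _ in hand).values())
    return counts == [1, 2, 2]

def two_pair_probability(n_trials: int = 1_000_000) -> float:
    hits = 0
    for _ in range(n_trials):
        random.shuffle(DECK)          # shuffle, then take the top five cards as a hand
        if is_two_pair(DECK[:5]):
            hits += 1
    return hits / n_trials

print(two_pair_probability())         # should land near the exact value of about 0.0475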
These examples now get a bit more problematic. You can see that this one is way more intricate: they've split the problem statement into multiple parts. They first say "write a program that draws the Lorenz strange attractor", Codex generates some code, and we get some results. Now, here I'm not quite sure whether they append this output to the previous prompt, i.e. whether they take the prompt, concatenate it with the generated snippet, and then add the next instruction on top. I guess that makes sense, although it can very quickly become unwieldy if the output is very large, because transformers, as you recall, have memory issues: you can't have infinitely long sequences. Then: "produce the x projection of the Lorenz trajectory", and out comes another snippet; then "plot the xy projection of the Lorenz trajectory", and another snippet. And finally, here's our last example; let me read it to you. "Outside of their humdrum duties as 6.042 TAs (teaching assistants), Cyan is trying to learn to levitate using only intense concentration and Yelani is launching a Nelson 2008 presidential campaign. Suppose that Cyan's probability of levitating is 1/6, Yelani's chance of becoming president is 1/4, and the success of one does not alter the other's chances. If at most one of them succeeds, what is the probability that Yelani becomes the president of the United States?" This is a fairly complex problem statement, and the thing that bothers me here is that, in doing this rephrasing procedure, they've essentially solved the problem and made it easier — you can judge for yourself whether that's true. Their rephrasing says: suppose that Cyan's probability of succeeding is 1/6 and Yelani's probability of succeeding is 1/4; use numpy to find the probability that at most one of Yelani and Cyan succeeds; use numpy to find the probability that Yelani succeeds but Cyan does not succeed; and finally divide the former by the latter probability. The wording sounds a bit off, but that's not the point. The point is: by doing this, you're not just rephrasing the problem, you're solving it. If you know how to rephrase it in this manner, you already know how to solve the problem, and you probably don't need Codex to do it for you. I'm not sure how many of the solutions required such intensive rephrasing, and as we'll see in a couple of seconds, they've tried to quantify the modifications they made to the original problem statements, but I'm afraid the embeddings they use can't capture this notion of how much you've actually helped solve the problem, so I'm suspicious of how well their similarity metric captures the difference between merely rephrasing a problem and effectively solving it while rephrasing it, if you know what I mean. Finally, we can see the result: it computes the probability for Cyan, the probability for Yelani, and so on, calculates the answer, and names the variables nicely and descriptively. It has comments and a mix of informal and formal language, and the variable names are very descriptive. The division printed at the end is a bit worrisome, though, because I can see three dots in the output; it would probably be a bit cleaner to assign it to a variable and then print it, otherwise it's kind of weird.
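Just to spell out the arithmetic the rephrased prompt is pointing at, here is my own sketch of the conditional probability, not the program from the paper; I'm reading the "divide the former by the latter" instruction as the usual conditional-probability division:

from fractions import Fraction

p_cyan = Fraction(1, 6)     # probability that Cyan levitates
p_yelani = Fraction(1, 4)   # probability that Yelani becomes president

# P(at most one of them succeeds) = 1 - P(both succeed)
p_at_most_one = 1 - p_cyan * p_yelani            # 23/24
# P(Yelani succeeds and Cyan does not)
p_yelani_only = p_yelani * (1 - p_cyan)          # 5/24

# P(Yelani becomes president | at most one succeeds)
print(p_yelani_only / p_at_most_one)             # 5/23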
Anyhow, you've now seen the tweaks they make to get Codex to work, and you can argue that some of them are a bit too helpful to Codex, like the example E problem we just saw. But let's see some other charts they have. They've also been prompting the model with multiple problem statements and then sampling a new problem statement that Codex invents, and they've done a survey comparing those Codex-generated problem statements with human-written ones. We can see the results: on the x-axis you have some MIT courses, on the y-axis the difficulty, and human-written and machine-generated problem statements are fairly close in difficulty according to the survey participants. Then there's a chart for whether the problem statement was appropriate for the course; Codex-generated statements are not rated quite as highly as the human ones, but they come very close. And finally there's a chart where, given a problem statement, the question was: is this generated by a machine or by a human? When all of the problem statements were written by humans, people still thought around 25% of them seemed machine-generated; when they were presented with purely machine-generated problem statements, people could tell something was not quite right, but they still couldn't judge reliably — roughly half the time they said machine, the other half human. Let's see the final charts; you can check the rest out at your own pace, I'm just going to take a single one. Here is one of the questions that Codex generated: find the area of the region bounded by the curve and the x-axis, where the function is y = x² sin(x) and x goes from 0 to π. The closest question in the dataset looks very similar but not quite the same: find the area of the region under the given curve from 1 to 2, with x defined more informally, in human language, and y defined similarly to the Codex example. So maybe this one is very close — the only thing Codex had to do is change the function and the domain on the x-axis — but some of the generated questions differ by quite a lot, at least at the syntax level. Looking at these generated questions, I can empathize with the survey participants, because it's really tricky to tell which of these was generated by a human and which by Codex, and that's fairly awesome. Finally, there's the similarity metric I mentioned. What they've done is, for every single course — again, they have 13 of them: the MIT courses, the Columbia course, and the problems from the MATH benchmark — they show a candle chart, and if that chart is close to one, it ideally means they didn't have to do any modification whatsoever. Stepping back for a second, what happens is they take the original problem statement and pass it through, I think, BERT, take the modified problem statement and do the same thing, and then compute the cosine similarity between the two feature vectors. If they made only minor modifications, the two vectors will be fairly similar and the cosine similarity will be close to one.
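In other words, the modification score is just a cosine similarity between two sentence embeddings. A tiny sketch, where embed is a hypothetical stand-in for whatever BERT-style sentence encoder is actually used:

import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def modification_similarity(original: str, modified: str, embed) -> float:
    # Close to 1.0 -> the problem statement barely changed;
    # lower values -> heavier rewriting before Codex could solve it.
    return cosine_similarity(embed(original), embed(modified))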
What this basically shows is that, for example, for the Columbia course they didn't have to tweak the problem statements much, which probably means those problem statements are a bit easier for Codex to parse and answer. For some of the others you can see they had to do more tweaking: the average cosine similarity sits further to the left compared to the other courses, and in one case there's huge variance and the average is down around 0.5. As I said, it's kind of hard, looking at these quantitative results, to figure things out. Where I'm trying to get at is that you can have the same cosine similarity for two modifications of the same original problem statement, where one is a simple syntactic modification of the input and the other is a rephrasing such that you have actually solved the problem, in a way. We saw that with example E: by telling Codex to divide the former by the latter probability, they made it a lot easier. So it's hard to understand from these visualizations how many times they had to do something like that; that's my point. Finally, they have a similar thing as a histogram, and I think this is for all the problem statements combined: you accumulate them and plot a histogram, and the mode of the distribution lies around 0.8. Again, it's hard to understand what this means in practice; it would obviously be optimal if this mode were skewed all the way to the right, at a cosine similarity of one, which would mean no modification happening whatsoever. And that's pretty much it. I don't want to be too nitpicky here; the results are fairly astounding even with this tweaking, and in two years' time, with better hardware, bigger models, and more novel research ideas, this can only get better. Cool, hopefully you liked this video; if you did, consider subscribing, click that notification bell, share it with your friends, and also join the Discord community. Until next time, bye!
[{"start": 0.0, "end": 5.0, "text": " What's cracking guys? So as you can see here I finally got my camera back. I'm"}, {"start": 5.0, "end": 10.24, "text": " slowly setting up my YouTube studio here in London to its previous glory and yeah"}, {"start": 10.24, "end": 15.68, "text": " hopefully in the next couple weeks it's gonna get even better. Anyhow, so in"}, {"start": 15.68, "end": 20.52, "text": " this video I'm gonna cover this novel paper called A Neural Network Solves"}, {"start": 20.52, "end": 25.68, "text": " and Generates Mathematics Problems by Program Synthesis. Calculus, Differential"}, {"start": 25.68, "end": 32.04, "text": " Equations, Linear Algebra and more by this like it's a collaboration between a"}, {"start": 32.04, "end": 37.68, "text": " couple of universities among them MIT, Columbia, Harvard and University of"}, {"start": 37.68, "end": 43.739999999999995, "text": " Waterloo. So in a nutshell what I've done is this neural network here is a fancy"}, {"start": 43.739999999999995, "end": 49.760000000000005, "text": " way of saying codex from OpenAI and for those of you who are not familiar with"}, {"start": 49.760000000000005, "end": 53.8, "text": " the with the model I'm gonna don't worry I'm gonna cover it in a second give you"}, {"start": 53.8, "end": 56.959999999999994, "text": " some background some high-level understanding of what's going on. Hint"}, {"start": 56.959999999999994, "end": 64.64, "text": " hint it's just a GPT-3 model fine-tuned on codebase like on Python codebase and"}, {"start": 64.64, "end": 68.39999999999999, "text": " so what they've done what they've achieved is they've taken a bunch of"}, {"start": 68.39999999999999, "end": 72.96, "text": " problems from as you can see like these faculties such as MIT and they managed"}, {"start": 72.96, "end": 78.44, "text": " to solve every single problem which is amazing but the trick is you have to"}, {"start": 78.44, "end": 83.24, "text": " kind of tweak the input prompts you have to do the prompt programming magic to"}, {"start": 83.24, "end": 88.67999999999999, "text": " get this thing to work but in any case like it's very very very interesting"}, {"start": 88.67999999999999, "end": 93.56, "text": " results and yeah I'm gonna cover those in this in this video. 
So let me just"}, {"start": 93.56, "end": 98.72, "text": " show you what is going on how the how the pipeline looks like basically"}, {"start": 98.72, "end": 104.11999999999999, "text": " basically you have something like this so here is an example of problem A and"}, {"start": 104.11999999999999, "end": 110.39999999999999, "text": " so the question is find the differential oops let me just one no doesn't want to"}, {"start": 110.4, "end": 116.32000000000001, "text": " render here okay so find the differential of W where W equals like"}, {"start": 116.32000000000001, "end": 122.88000000000001, "text": " a natural like logarithm of X square plus Y squared plus Z squared and"}, {"start": 122.88000000000001, "end": 130.44, "text": " basically want to find the derivatives with respect to both X Y and Z and you"}, {"start": 130.44, "end": 135.52, "text": " can see what it do here they take that that prompt and they basically rephrase"}, {"start": 135.52, "end": 139.06, "text": " it in a different way here so in differential equations they add some"}, {"start": 139.06, "end": 145.28, "text": " additional context and then they feed that into codecs which you sample from"}, {"start": 145.28, "end": 150.36, "text": " to generate this snippet of Python text and I deliberately say text because this"}, {"start": 150.36, "end": 155.2, "text": " is nothing than a large language model pre-chained on code which is just"}, {"start": 155.2, "end": 163.76, "text": " a like a specific type of text so then they run the code and basically they"}, {"start": 163.76, "end": 168.68, "text": " show that every single unit test passed for all of the they actually show that"}, {"start": 168.68, "end": 174.92000000000002, "text": " all the solutions that codecs generated were were correct with minor or bigger"}, {"start": 174.92000000000002, "end": 179.92000000000002, "text": " tweaks as we're going to see a bit later but that's it in a nutshell so if you're"}, {"start": 179.92000000000002, "end": 185.36, "text": " not familiar with codecs let me now briefly briefly show you what codecs is"}, {"start": 185.36, "end": 190.60000000000002, "text": " all about so evaluating large language models trained on code so this is the"}, {"start": 190.60000000000002, "end": 197.12, "text": " the codecs paper and you can see a group of people from OpenAI I guess a large"}, {"start": 197.12, "end": 203.08, "text": " proportion of their team and you shouldn't be surprised when you when you"}, {"start": 203.08, "end": 207.4, "text": " see when you see a bunch of authors on a paper it basically means that this was"}, {"start": 207.4, "end": 212.64000000000001, "text": " more of a like a engineering effort rather than trying to invent something"}, {"start": 212.64000000000001, "end": 222.36, "text": " some new machine learning like idea or whatnot so what's going on here one"}, {"start": 222.36, "end": 225.96, "text": " thing too worth mentioning is you you probably heard of github copilot"}, {"start": 225.96, "end": 232.4, "text": " basically it's it's powered whoops I crossed that oh my god so github copilot"}, {"start": 232.4, "end": 238.60000000000002, "text": " is basically just using codecs improved version of the codecs from this paper in"}, {"start": 238.60000000000002, "end": 243.84, "text": " the background so yeah maybe that's a that's like a kind of fun fact but like"}, {"start": 243.84, "end": 248.8, "text": " let me jump and explain how this thing works so as I said it's simply a GPT-3"}, {"start": 248.8, "end": 255.34, 
"text": " model fine-tuned so pre-trained GPT-3 model fine-tuned on this on this code"}, {"start": 255.34, "end": 261.82, "text": " base on this data set that was collected by scraping various Python files from"}, {"start": 261.82, "end": 267.22, "text": " github and then post hoc doing some filtering so now the important detail"}, {"start": 267.22, "end": 271.8, "text": " here is that you don't actually have to have a pre-trained GPT-3 model that's"}, {"start": 271.8, "end": 278.0, "text": " what I showed okay let's see that part here basically they say that surprisingly"}, {"start": 278.0, "end": 282.82, "text": " we did not observe improvements when starting from a pre-trained language"}, {"start": 282.82, "end": 288.76, "text": " model possibly because the fine-tuned fine-tuning data set is so large so by"}, {"start": 288.76, "end": 296.38, "text": " so large what they actually mean is 159 gigabytes of text which they got after"}, {"start": 296.38, "end": 301.24, "text": " collecting as I said from github and then doing some post hoc filtering to"}, {"start": 301.24, "end": 307.0, "text": " get this data set here so nevertheless models fine-tuned from GPT converge more"}, {"start": 307.0, "end": 312.44, "text": " quickly so we apply this strategy for all subsequent experiments so basically"}, {"start": 312.44, "end": 318.88, "text": " like the pre-training part is not about the same thing as if you've read the"}, {"start": 318.88, "end": 322.71999999999997, "text": " Universal Transformers or Universal Computational Engines I think that's the"}, {"start": 322.71999999999997, "end": 328.84, "text": " name of the of the of the paper so the idea is not to use the knowledge like"}, {"start": 328.84, "end": 333.15999999999997, "text": " the idea is not to leverage the knowledge the the problem-solving logic"}, {"start": 333.15999999999997, "end": 338.36, "text": " that GPT-3 obtained by by pre-training on this huge corpus of text the idea is"}, {"start": 338.36, "end": 341.32, "text": " just to bootstrap it to make it a bit quicker so this is more of an"}, {"start": 341.32, "end": 346.96, "text": " optimization detail I guess than the actual important detail"}, {"start": 346.96, "end": 354.24, "text": " so if you remember the Universal Computational Engine paper the idea was"}, {"start": 354.24, "end": 359.84, "text": " that transformers when you pre-train them using the usual unsupervised like"}, {"start": 359.84, "end": 365.12, "text": " pre-training objective so you've got your transformer here and the idea is"}, {"start": 365.12, "end": 370.12, "text": " that the weights that you obtained by doing that by pre-training it on a huge"}, {"start": 370.12, "end": 375.48, "text": " corpus of text is those weights are kind of very generic and they can solve"}, {"start": 375.48, "end": 380.16, "text": " problems for like whatever modality you want so what they've showed is that you"}, {"start": 380.16, "end": 387.96, "text": " can kind of take the input layer the embeddings and discard the old ones as"}, {"start": 387.96, "end": 391.48, "text": " well as the like whatever like classification heads in the case of"}, {"start": 391.48, "end": 396.52, "text": " classification problems just kind of fine-tune these parts and you can apply"}, {"start": 396.52, "end": 402.52, "text": " then the same transformer to like images or whatnot so anyhow in this"}, {"start": 402.52, "end": 406.79999999999995, "text": " case in this paper they are not actually using to fine-tune they are more they're"}, 
{"start": 406.79999999999995, "end": 412.79999999999995, "text": " using the the pre-trained weights to bootstrap the training process anyhow"}, {"start": 412.79999999999995, "end": 419.76, "text": " let's get back to the explanation here so what you do is you prompt the codex"}, {"start": 419.76, "end": 424.24, "text": " with this thing here so this is your prompt and then you basically"}, {"start": 424.24, "end": 428.28000000000003, "text": " generate this piece of text the same thing as with GPT-3 which in this case"}, {"start": 428.28000000000003, "end": 433.32, "text": " is code so and this code probably passes the unit test for this particular"}, {"start": 433.32, "end": 439.0, "text": " function usually the context is such that you have that you have the the"}, {"start": 439.0, "end": 443.76, "text": " signature here so you have the signature of the function right here then you have"}, {"start": 443.76, "end": 449.8, "text": " this part the the the the doc string and then you prompt and create the the body"}, {"start": 449.8, "end": 453.6, "text": " part of the function and that's how people usually use github copilot as"}, {"start": 453.6, "end": 458.96000000000004, "text": " well so you may be familiar with that format so let's see what what the codex"}, {"start": 458.96000000000004, "end": 464.1, "text": " brought onto the table so again codex is just a GPT-3 model pre-trained on a"}, {"start": 464.1, "end": 469.26000000000005, "text": " particular type of text which is code so we are basically just narrowing the"}, {"start": 469.26000000000005, "end": 475.12, "text": " distribution of the data that we want codex to learn how to model and the"}, {"start": 475.12, "end": 480.72, "text": " second innovation here is innovation I mean this is not a novel idea but it's a"}, {"start": 480.72, "end": 486.40000000000003, "text": " very neat idea to use unit tests to make sure that your your model is working as"}, {"start": 486.40000000000003, "end": 492.32000000000005, "text": " expected and because of unit tests what they can afford to do is generate like"}, {"start": 492.32000000000005, "end": 499.76000000000005, "text": " case samples for example hundred samples and see whether any of those passes all"}, {"start": 499.76000000000005, "end": 504.36, "text": " of the unit tests for that particular function we're currently trying to to to"}, {"start": 504.36, "end": 510.6, "text": " create to generate and they introduce this this metric called I think it's called"}, {"start": 510.6, "end": 519.16, "text": " pass at K where K is the number of samples they generate so it's usually I"}, {"start": 519.16, "end": 526.84, "text": " think they usually use either one or or hundred etc and basically they treat it"}, {"start": 526.84, "end": 530.64, "text": " as a pass if any one of those samples passes all of the unit tests so you can"}, {"start": 530.64, "end": 537.16, "text": " imagine if you had like 50 examples in the data set and so 50 and for all of"}, {"start": 537.16, "end": 542.28, "text": " those we the one some of these samples managed to pass all of the unit tests"}, {"start": 542.28, "end": 548.64, "text": " then this pass at K metric would be equal to 1 and subsequently if you if you"}, {"start": 548.64, "end": 553.88, "text": " if some of the functions if none of the samples managed to produce to pass all"}, {"start": 553.88, "end": 557.84, "text": " the unit tests and obviously pass at K is gonna decrease towards towards zero"}, {"start": 557.84, "end": 563.72, "text": " hopefully 
that's very very self-explanatory but the thing is you"}, {"start": 563.72, "end": 568.88, "text": " don't have the luxury of generating hundred samples when somebody is trying"}, {"start": 568.88, "end": 575.64, "text": " to use codecs to for example generate novel code because you don't have usually"}, {"start": 575.64, "end": 579.1600000000001, "text": " usually don't have like unit tests unless you're doing test-driven"}, {"start": 579.1600000000001, "end": 584.36, "text": " development which people are usually not doing and so what they had to figure"}, {"start": 584.36, "end": 590.28, "text": " out is how to generate the best possible sample from codecs and they just devised"}, {"start": 590.28, "end": 595.5600000000001, "text": " this mean logprop heuristic which is fairly simple and they tried a couple"}, {"start": 595.5600000000001, "end": 600.48, "text": " other things like they tried instead of using mean they tried using some and it"}, {"start": 600.48, "end": 605.32, "text": " just this thing just worked worked the best so the idea is the following let me"}, {"start": 605.32, "end": 612.4, "text": " try and kind of draw it down here so you have your your your codecs model here"}, {"start": 612.4, "end": 619.68, "text": " and you prompt it with some with some prompt as we saw on the on the chart"}, {"start": 619.68, "end": 625.56, "text": " there so we prompt it with something here and then we start generating okay"}, {"start": 625.56, "end": 632.36, "text": " so you generate the first token you and how you generate is you basically have"}, {"start": 632.36, "end": 635.88, "text": " to sample from a distribution over tokens so let's say we had some nice"}, {"start": 635.88, "end": 641.3199999999999, "text": " distribution like this so basically if you're using greedy you'd be you'd be"}, {"start": 641.32, "end": 645.4000000000001, "text": " sampling exactly this token so let me change the color so you'd sample this"}, {"start": 645.4000000000001, "end": 651.36, "text": " token here so whatever is this token here so you take that token oops and you"}, {"start": 651.36, "end": 657.2800000000001, "text": " then feed that token here you'd embed the token obviously so you pass the"}, {"start": 657.2800000000001, "end": 661.84, "text": " vector in and then you get another output here and then you just repeat"}, {"start": 661.84, "end": 666.96, "text": " that and so what what the what the mean logprop heuristic does is for all of"}, {"start": 666.96, "end": 671.84, "text": " degenerated tokens you just take their mean so you just so for all the tokens"}, {"start": 671.84, "end": 677.84, "text": " you just take this this probability and log log of the probability and you just"}, {"start": 677.84, "end": 681.38, "text": " do the mean so intuitively what the heuristic is trying to do is capture"}, {"start": 681.38, "end": 686.4000000000001, "text": " those samples that are kind of very probable under under this codecs model"}, {"start": 686.4000000000001, "end": 691.5600000000001, "text": " and that's that's the whole idea and that's pretty much it so additionally"}, {"start": 691.56, "end": 697.4799999999999, "text": " they had some part where they find you know even like on data sets which are"}, {"start": 697.4799999999999, "end": 701.4, "text": " more pure than just simply collecting data from from github so let me find"}, {"start": 701.4, "end": 705.7199999999999, "text": " that thing somewhere here so supervised fine-tuning is a section of the paper"}, {"start": 705.7199999999999, 
"end": 712.76, "text": " and what I've done here is they've they've taken only the code from"}, {"start": 712.76, "end": 717.8, "text": " competitive programming websites which is usually of higher quality than if you"}, {"start": 717.8, "end": 722.24, "text": " were to just randomly scrape the code from the internet from the from github"}, {"start": 722.24, "end": 727.56, "text": " in particular I think and so they collect the data by as I said competitive"}, {"start": 727.56, "end": 732.56, "text": " programming websites plus they found those repos which have continuous"}, {"start": 732.56, "end": 736.9, "text": " integration in place and they kind of found the logic to extract the functions"}, {"start": 736.9, "end": 741.4, "text": " and use those functions as training examples and by doing this they created"}, {"start": 741.4, "end": 746.8399999999999, "text": " codecs as family of models and those perform even better looking at the pass"}, {"start": 746.84, "end": 752.0, "text": " at K metric compared to the regular codecs so you can imagine that github"}, {"start": 752.0, "end": 757.5600000000001, "text": " copilot is using some combination of these techniques let's see how this"}, {"start": 757.5600000000001, "end": 766.36, "text": " novel paper leveraged codecs to solve these university level problems they use"}, {"start": 766.36, "end": 772.08, "text": " the problems as I mentioned from MIT various MIT courses from this Columbia"}, {"start": 772.08, "end": 777.0, "text": " course a computational linear algebra and there is this additional benchmark"}, {"start": 777.0, "end": 781.24, "text": " called math topics and they just do problems from there so what they do is"}, {"start": 781.24, "end": 785.84, "text": " they take a particular problem to be more precise they're using they take a"}, {"start": 785.84, "end": 791.76, "text": " question for that mathematical problem and they do three things so they either"}, {"start": 791.76, "end": 798.72, "text": " they either tidy questions so simplify a word problem to mathematical content or"}, {"start": 798.72, "end": 805.08, "text": " they add some context like topic library a name for example they hint to to codecs"}, {"start": 805.08, "end": 811.1600000000001, "text": " hey use numpy or hey use matplotlib and they sometimes define the concepts"}, {"start": 811.1600000000001, "end": 816.1600000000001, "text": " because it's expected that students who are attending the course already have"}, {"start": 816.1600000000001, "end": 820.6, "text": " that context and so it's kind of unfair not to give the same context to codecs so"}, {"start": 820.6, "end": 825.52, "text": " they provide a context and finally sometimes they have to interactively"}, {"start": 825.52, "end": 831.1999999999999, "text": " refine the prompt on until they they solve the problem so yeah you can see"}, {"start": 831.1999999999999, "end": 835.4399999999999, "text": " some problems here it's one thing when you know the actual solution that's"}, {"start": 835.4399999999999, "end": 839.1999999999999, "text": " that's cool but like when you don't know the solution this obviously will not"}, {"start": 839.1999999999999, "end": 845.16, "text": " work so it's very the application area is actually universities and university"}, {"start": 845.16, "end": 850.56, "text": " classes so it's got a huge potential there and here are some examples of what"}, {"start": 850.56, "end": 856.4399999999999, "text": " codecs managed to do so codecs generated a snippet of code of Python code 
in"}, {"start": 856.4399999999999, "end": 862.7199999999999, "text": " particular in this paper at least and the code generated these plots and solve"}, {"start": 862.7199999999999, "end": 869.04, "text": " this problem actually so so the problem was to like what's the volume you get if"}, {"start": 869.04, "end": 874.0799999999999, "text": " you rotate this 2d chart around the third axis so what what happens if you"}, {"start": 874.0799999999999, "end": 880.1999999999999, "text": " have for example imagine this is 3d imagine you add an axis here and you try"}, {"start": 880.2, "end": 885.44, "text": " and rotate this this 2d chart around this axis and you eventually get this"}, {"start": 885.44, "end": 889.76, "text": " 3d body here so that's what codecs had to kind of figure out and in this"}, {"start": 889.76, "end": 893.2, "text": " particular question and there are many other things that that it managed to"}, {"start": 893.2, "end": 897.9200000000001, "text": " solve obviously every single question they throw at codecs it managed it"}, {"start": 897.9200000000001, "end": 903.32, "text": " managed to solve it after some some prompt hacking as I like to call these"}, {"start": 903.32, "end": 909.6400000000001, "text": " things here so let's see what is going on here so one one thing you may think"}, {"start": 909.64, "end": 914.88, "text": " here is that what if codecs during the the pre training procedure which took"}, {"start": 914.88, "end": 920.56, "text": " into account the whole like a huge amount of code on the internet among"}, {"start": 920.56, "end": 925.0, "text": " those you can be certain that some of the code is used to solve some of the"}, {"start": 925.0, "end": 929.38, "text": " mathematical problems from these specific universities like MIT etc so"}, {"start": 929.38, "end": 934.68, "text": " that was my concern for sure and they did a good job there obviously by by"}, {"start": 934.68, "end": 939.2, "text": " yeah they sit here we validate that our results are not merely overfitting"}, {"start": 939.2, "end": 943.6400000000001, "text": " training data by solving a new computational linear algebra course"}, {"start": 943.6400000000001, "end": 948.76, "text": " which is not available online and is therefore unseen during codecs training"}, {"start": 948.76, "end": 955.88, "text": " we obtain correct answer for all of the randomly sampled questions and that's"}, {"start": 955.88, "end": 963.08, "text": " that's kind of a nice result okay so that out of the way let's continue here"}, {"start": 963.08, "end": 969.0, "text": " here you can see some some example questions from from these courses so"}, {"start": 969.0, "end": 975.0, "text": " they have 13 courses in in in total and for each one of those they have a"}, {"start": 975.0, "end": 980.32, "text": " representative example here and you can see for example one of those in in the"}, {"start": 980.32, "end": 985.08, "text": " multivariable calculus course is describe the graph of the function F and"}, {"start": 985.08, "end": 991.0, "text": " F is a multivariable function of X and Y equals to 10 minus as you can see here"}, {"start": 991.0, "end": 996.48, "text": " and what codecs managed to do is probably use modplotlib and basically"}, {"start": 996.48, "end": 1001.2, "text": " plotted the function in 3d as you can see here and yeah you can go at your"}, {"start": 1001.2, "end": 1005.52, "text": " own pace through these problems if you're if you're curious but like they"}, {"start": 1005.52, "end": 1012.4, "text": " 
vary a lot and in the sense that sometimes you have to plot the the"}, {"start": 1012.4, "end": 1016.72, "text": " function sometimes you just have to give a numerical answer like let me find some"}, {"start": 1016.72, "end": 1024.76, "text": " some some simple example what is the greatest common factor of 84 112 and 210"}, {"start": 1024.76, "end": 1033.56, "text": " and the model outputs 14 which is great okay so that's it now let me dig deeper"}, {"start": 1033.56, "end": 1040.2, "text": " into what exactly do they do how do they hack that the prompts so that the codecs"}, {"start": 1040.2, "end": 1047.3600000000001, "text": " generates correct outputs so first things first is here they added the"}, {"start": 1047.3600000000001, "end": 1053.8, "text": " in differential equations so that's providing the topic context and then"}, {"start": 1053.8, "end": 1057.48, "text": " they kind of rearrange this so you can see it's a it's a mix of multiple things"}, {"start": 1057.48, "end": 1062.32, "text": " so here in particular they not only provide the topic context they also"}, {"start": 1062.32, "end": 1068.96, "text": " provide the library context you can see here simpy is is a library and they kind"}, {"start": 1068.96, "end": 1072.8400000000001, "text": " of rearrange the question and this is the code that that that that codecs"}, {"start": 1072.8400000000001, "end": 1078.1200000000001, "text": " generates so it imports the simpy correctly it then generates the the these"}, {"start": 1078.1200000000001, "end": 1084.44, "text": " abstract symbols here x y and z and then just does the log as the function here"}, {"start": 1084.44, "end": 1092.72, "text": " is telling it and finally it just prints that if like derivatives of W of W with"}, {"start": 1092.72, "end": 1097.58, "text": " respect to X Y and Z and this is one of the simpler examples actually you can"}, {"start": 1097.58, "end": 1101.6799999999998, "text": " you can later there is a huge appendix and they codecs sometimes generates"}, {"start": 1101.6799999999998, "end": 1109.36, "text": " fairly fairly complex programs here is another another example here they they"}, {"start": 1109.36, "end": 1114.6399999999999, "text": " add multiple library hints they have some question here and they rephrase it"}, {"start": 1114.6399999999999, "end": 1119.78, "text": " and they put simpy they put stream stream plot and you can indeed see that"}, {"start": 1119.78, "end": 1129.36, "text": " stream plot is used here as well as the simpy so that's cool finally here is the"}, {"start": 1129.36, "end": 1133.84, "text": " context in the form of definitions okay so let's see what I've done here"}, {"start": 1133.84, "end": 1139.36, "text": " calculate the probability of getting a two-pair poker hand and use a human if"}, {"start": 1139.36, "end": 1143.2, "text": " you're not familiar with poker rules will have difficulty solving this"}, {"start": 1143.2, "end": 1146.8, "text": " because you don't know what these things are in the first place so that's why they"}, {"start": 1146.8, "end": 1151.8, "text": " sometimes have to add additional context in the form of definitions so a hand is"}, {"start": 1151.8, "end": 1157.36, "text": " a set of five cards that are drawn randomly from a standard 52 card deck"}, {"start": 1157.36, "end": 1163.6599999999999, "text": " with 13 ranks of four cards each etc etc and finally they hint them out here so"}, {"start": 1163.6599999999999, "end": 1169.24, "text": " write a program that generates simulations so it's 
additionally telling"}, {"start": 1169.24, "end": 1175.96, "text": " the codecs to generate the program and like a four in a form of a simulation so"}, {"start": 1175.96, "end": 1179.48, "text": " all of these help the model generate the correct solution and you can see here"}, {"start": 1179.48, "end": 1186.88, "text": " fairly fairly nice nice like a for loop it does a simulation where it does like"}, {"start": 1186.88, "end": 1192.92, "text": " what's this like 10 million iterations 10 million I guess iterations in this"}, {"start": 1192.92, "end": 1198.16, "text": " simulation at shuffles the deck takes five cards first five cards so it's like"}, {"start": 1198.16, "end": 1203.4, "text": " a not that optimal way to do this but like still it's it's it's semantically"}, {"start": 1203.4, "end": 1209.24, "text": " sound and it takes the first five which is a random hand and then it checks"}, {"start": 1209.24, "end": 1214.0, "text": " whether it has two pairs if the hand has two pairs then we just increment and"}, {"start": 1214.0, "end": 1219.8400000000001, "text": " finally it returns the diffraction so how likely is it to have like a two pair"}, {"start": 1219.8400000000001, "end": 1223.88, "text": " if you just draw random hands as it was asked here so calculate the probability"}, {"start": 1223.88, "end": 1228.2, "text": " of getting a two pair poker hand and the model did exactly that so that's cool"}, {"start": 1228.2, "end": 1233.0800000000002, "text": " so these examples now get a bit more problematic so you can see here that"}, {"start": 1233.08, "end": 1238.28, "text": " this is way more intricate so what it done is they've split the problem a"}, {"start": 1238.28, "end": 1242.6, "text": " statement into multiple parts so they first write like they first say here"}, {"start": 1242.6, "end": 1248.1599999999999, "text": " write a program that draws the Lorenz strange attractor and you can see that"}, {"start": 1248.1599999999999, "end": 1253.12, "text": " codecs generate some code and we have some some some some results here now"}, {"start": 1253.12, "end": 1257.62, "text": " here I'm not kind of sure whether they're just appending this to the"}, {"start": 1257.62, "end": 1264.04, "text": " previous prompt including this output so I'm not sure whether they just take this"}, {"start": 1264.04, "end": 1269.36, "text": " and then concatenate that with this snippet here and then they just add this"}, {"start": 1269.36, "end": 1276.6, "text": " in order to get this or do they just kind of yeah I guess that makes a lot of"}, {"start": 1276.6, "end": 1282.12, "text": " sense although that can very soon become unwieldy if the output is very very"}, {"start": 1282.12, "end": 1287.32, "text": " large because transformers as you recall have memory issues you can't"}, {"start": 1287.32, "end": 1293.28, "text": " have infinitely long sequences so yeah you can see they had to do produce the"}, {"start": 1293.28, "end": 1297.12, "text": " X projection of the Lawrence trajectory and then outcomes in our snippet and"}, {"start": 1297.12, "end": 1300.6799999999998, "text": " then plot the XY projection of the Lawrence trajectory and then comes in"}, {"start": 1300.6799999999998, "end": 1306.9199999999998, "text": " another snippet and finally here's our last example here and let me read it to"}, {"start": 1306.9199999999998, "end": 1313.06, "text": " you outside of their humdrum duties as 6042 TAs teaching assistants cyan is"}, {"start": 1313.06, "end": 1317.44, "text": " trying to learn to 
levitate using only intense concentration and Yelani is"}, {"start": 1317.44, "end": 1323.8, "text": " launching a Nelson 2008 presidential campaign suppose that science probability"}, {"start": 1323.8, "end": 1329.56, "text": " of levitating is 1 over 6 Yelani's chance of becoming president is 1 over 4"}, {"start": 1329.56, "end": 1336.0, "text": " and the success of one does not alter the others chances if at most one of"}, {"start": 1336.0, "end": 1340.24, "text": " them succeeds what is the probability that Yelani becomes the president of the"}, {"start": 1340.24, "end": 1346.48, "text": " United States this is a fairly complex problem statement and the things that"}, {"start": 1346.48, "end": 1351.8, "text": " kind of bothers me here is that they've actually doing this rephrasing procedure"}, {"start": 1351.8, "end": 1356.32, "text": " they've kind of solved and made this problem easier now you can see for"}, {"start": 1356.32, "end": 1360.56, "text": " yourself whether that's true or not but like they say here that suppose that"}, {"start": 1360.56, "end": 1365.28, "text": " science probability science succeeding is 1 over 6 I guess it should be off"}, {"start": 1365.28, "end": 1371.0, "text": " science succeeding here or something like that and Yelani's probability of succeeding is 1"}, {"start": 1371.0, "end": 1377.04, "text": " over 4 use numpy to find the probability at most one of Yelani and"}, {"start": 1377.04, "end": 1382.16, "text": " science succeed yeah there's it sounds weird but that's not the point so this"}, {"start": 1382.16, "end": 1385.84, "text": " is a point here so use numpy to find the probability Yelani succeeds but"}, {"start": 1385.84, "end": 1390.56, "text": " cyan does not succeed and finally divide the former by the latter probability I"}, {"start": 1390.56, "end": 1395.6599999999999, "text": " mean by doing this you're not just rephrasing the problem you're solving the"}, {"start": 1395.6599999999999, "end": 1401.6399999999999, "text": " problem if you know how to rephrase it this in this manner you already know how"}, {"start": 1401.6399999999999, "end": 1405.04, "text": " to solve the problem and you probably don't need codecs to do it for you so"}, {"start": 1405.04, "end": 1410.8, "text": " that's yeah I'm not sure how many of the solutions they have have such an"}, {"start": 1410.8, "end": 1415.0, "text": " intensive rephrasing and we'll see in a couple of seconds they've tried to kind"}, {"start": 1415.0, "end": 1420.92, "text": " of quantify the modifications they've done to the original problem statements"}, {"start": 1420.92, "end": 1425.96, "text": " but I'm afraid that the embeddings they've used can't capture this notion"}, {"start": 1425.96, "end": 1431.24, "text": " of how much you've actually helped solve the problem and it's gonna capture more"}, {"start": 1431.24, "end": 1440.96, "text": " of the yeah so I'm just gonna suspicious of how this similarity metric they've"}, {"start": 1440.96, "end": 1445.52, "text": " done is gonna capture this notion of actually solving the problem when"}, {"start": 1445.52, "end": 1450.44, "text": " rephrasing it if you know what I mean so finally we can see the the result here"}, {"start": 1450.44, "end": 1456.8400000000001, "text": " probability of Sam probability of Yelani blah blah just calculates and nicely"}, {"start": 1456.8400000000001, "end": 1461.64, "text": " names the variable so that's very cool things like it has comments it has a"}, {"start": 1461.64, "end": 1465.68, "text": " combination of 
informal and formal language here so you can see the the"}, {"start": 1465.68, "end": 1469.24, "text": " variable is very descriptive here and finally you can see that the vision"}, {"start": 1469.24, "end": 1474.96, "text": " which is not this thing is also a bit worrisome because I can see three dots"}, {"start": 1474.96, "end": 1481.44, "text": " here and yeah it'd be probably a bit cleaner if you assign this to a variable"}, {"start": 1481.44, "end": 1488.26, "text": " and then try to print it otherwise it's kind of weird anyhow you saw now the"}, {"start": 1488.26, "end": 1493.64, "text": " tweaks they've doing to make codecs work you can argue that some of them are"}, {"start": 1493.64, "end": 1501.4, "text": " are a bit like too helpful to codecs like the the number at the e problem we saw"}, {"start": 1501.4, "end": 1506.8000000000002, "text": " here but yeah let's see some other charts they have here but they've been"}, {"start": 1506.8000000000002, "end": 1512.6000000000001, "text": " additionally prompting the model with multiple problem statements and then"}, {"start": 1512.6000000000001, "end": 1519.72, "text": " sampling in other problem statement that codecs now invented and they've done a"}, {"start": 1519.72, "end": 1526.16, "text": " survey of those codecs generated like problem statements with the human"}, {"start": 1526.16, "end": 1531.66, "text": " generated problem statements and we can see the results here first off if we"}, {"start": 1531.66, "end": 1536.8, "text": " split on the x-axis you can see some MIT courses on the y-axis we can see"}, {"start": 1536.8, "end": 1541.8600000000001, "text": " difficulty and you can see that human written and machine generated problem"}, {"start": 1541.8600000000001, "end": 1548.08, "text": " statements are fairly close in difficulty according to this survey"}, {"start": 1548.08, "end": 1557.8, "text": " participants and then we have here like whether the the the problem statement"}, {"start": 1557.8, "end": 1562.76, "text": " was appropriate or not we can see that codecs generated problem statements are"}, {"start": 1562.76, "end": 1568.4399999999998, "text": " not as high quality as human ones but it still comes very very close and finally"}, {"start": 1568.4399999999998, "end": 1574.4399999999998, "text": " we have here a chart then that basically they've been given a problem statement"}, {"start": 1574.44, "end": 1579.44, "text": " and the question was hey is this generated by codecs by machine or by"}, {"start": 1579.44, "end": 1584.72, "text": " human and you can see that in the case when all of the problem statements were"}, {"start": 1584.72, "end": 1591.6000000000001, "text": " written by humans people sometimes thought that like like around 25% of"}, {"start": 1591.6000000000001, "end": 1595.88, "text": " those seemed like maybe machine generated those but when we're when they"}, {"start": 1595.88, "end": 1599.52, "text": " were presented with purely machine generated problem statements so here in"}, {"start": 1599.52, "end": 1603.44, "text": " this case people can kind of tell that something is not quite right but still"}, {"start": 1603.44, "end": 1608.72, "text": " they can judge and so half of the time they just say it's machines sometimes"}, {"start": 1608.72, "end": 1614.64, "text": " and then the other 50% of time they just say hey it's this is human generated"}, {"start": 1614.64, "end": 1623.64, "text": " okay let's see final charts here you can check out these ones yourself at your"}, {"start": 1623.64, "end": 
1627.96, "text": " own pace I'm gonna just take a single one here is the one of the questions"}, {"start": 1627.96, "end": 1634.44, "text": " that the codecs generated it says find the area of the region bounded by the"}, {"start": 1634.44, "end": 1642.2, "text": " curve and the x-axis where the function is defined as y is equal to x squared"}, {"start": 1642.2, "end": 1649.64, "text": " times sine of x and the x-axis goes from 0 to pi and you can see the closest"}, {"start": 1649.64, "end": 1655.08, "text": " question in the data set looks very very similar but not quite there so find the"}, {"start": 1655.08, "end": 1660.56, "text": " area of the region under the given curve from 1 to 2 so they've defined as you"}, {"start": 1660.56, "end": 1667.56, "text": " can see X here is defined more formally here they've kind of used like an like"}, {"start": 1667.56, "end": 1673.72, "text": " human language to informal language to define the same thing and finally Y is"}, {"start": 1673.72, "end": 1679.86, "text": " defined similarly to codecs example here so maybe this one is very very"}, {"start": 1679.86, "end": 1685.24, "text": " close the only thing that codecs had to do is to change the function and change"}, {"start": 1685.24, "end": 1689.9599999999998, "text": " the the domain on the x-axis but like you can see some of these differ by"}, {"start": 1689.9599999999998, "end": 1694.9599999999998, "text": " quite a lot at least on the syntax level but looking at these generated"}, {"start": 1694.9599999999998, "end": 1700.24, "text": " questions I can I can kind of empathize with the participants in this survey"}, {"start": 1700.24, "end": 1704.4599999999998, "text": " here because it's really kind of tricky to tell which one of these is generated"}, {"start": 1704.4599999999998, "end": 1708.6399999999999, "text": " by a human and which one is generated by a by by codecs and that's that's fairly"}, {"start": 1708.64, "end": 1715.64, "text": " awesome finally finally this is the the similarity rate like metric I mentioned"}, {"start": 1715.64, "end": 1721.3200000000002, "text": " what I've done here is for every single course and again they have like 13 of"}, {"start": 1721.3200000000002, "end": 1726.68, "text": " those courses so here are the the MIT courses here is the Columbia course here"}, {"start": 1726.68, "end": 1732.16, "text": " are the courses from from that I guess the problems from that math benchmark"}, {"start": 1732.16, "end": 1741.28, "text": " and you can see that if this candle chart is close to one ideally that means"}, {"start": 1741.28, "end": 1746.68, "text": " that they didn't have to do any modification whatsoever so again"}, {"start": 1746.68, "end": 1750.92, "text": " stepping back for a second here what happened here is that they take the"}, {"start": 1750.92, "end": 1756.2, "text": " original problem statement they pass it to the I think Bert they take the"}, {"start": 1756.2, "end": 1760.8400000000001, "text": " modified problem statement and they do the same thing and then they just find"}, {"start": 1760.84, "end": 1767.9199999999998, "text": " the cosine similarity between those two like feature vectors and now the if they"}, {"start": 1767.9199999999998, "end": 1771.8799999999999, "text": " had minor modifications and those two vectors will be fairly similar and the"}, {"start": 1771.8799999999999, "end": 1777.28, "text": " cosine similarity will be close to one so what this basically shows you is that"}, {"start": 1777.28, "end": 1783.12, "text": " for 
example for this Columbia course they didn't have to to tweak the problems"}, {"start": 1783.12, "end": 1788.0, "text": " statements by a lot so that may either mean that that probably means that like"}, {"start": 1788.0, "end": 1792.16, "text": " those those problem statements are a bit easier for codecs to to to kind of parse"}, {"start": 1792.16, "end": 1798.92, "text": " and and generate the answer to you yeah you can see that for some of these they"}, {"start": 1798.92, "end": 1804.8, "text": " had to kind of do more tweaking so you can see here the the average cosine"}, {"start": 1804.8, "end": 1810.56, "text": " similarity is a bit more to the left compared to these other ones same here"}, {"start": 1810.56, "end": 1816.24, "text": " there is a huge variance here and the the average is quite on the left it's"}, {"start": 1816.24, "end": 1821.32, "text": " almost around 0.5 as I said it's kind of hard to looking at these quantitative"}, {"start": 1821.32, "end": 1826.56, "text": " results figure out so where I'm trying to get it here is that you can have the"}, {"start": 1826.56, "end": 1831.08, "text": " same cosine similarity for two modifications of the same original"}, {"start": 1831.08, "end": 1837.08, "text": " problem statement where one is a simple syntactic modification of the input"}, {"start": 1837.08, "end": 1841.88, "text": " problem statement whereas the other one is such rephrasing that you have actually"}, {"start": 1841.88, "end": 1848.88, "text": " solved the problem in a way so we saw that with the this problem this this"}, {"start": 1848.88, "end": 1855.68, "text": " example e so I'm trying to allude to this one here by by telling telling the"}, {"start": 1855.68, "end": 1859.2, "text": " codecs to divide the former by the letter probability they've kind of made"}, {"start": 1859.2, "end": 1862.88, "text": " it a lot easier so yeah it's kind of hard to understand through these"}, {"start": 1862.88, "end": 1867.44, "text": " visualizations how many times they had to do something like this if you yeah"}, {"start": 1867.44, "end": 1874.0800000000002, "text": " that's my point okay finally they have a similar thing just a histogram and you"}, {"start": 1874.0800000000002, "end": 1878.42, "text": " can see that all in all I think this is for all when you combine all the problem"}, {"start": 1878.42, "end": 1882.96, "text": " statements and then just accumulate and plot a histogram you can see that most"}, {"start": 1882.96, "end": 1889.16, "text": " of the time they had like 0.8 that's where the mode of this distribution lies"}, {"start": 1889.16, "end": 1894.68, "text": " basically yeah again it's hard to understand what this means in practice"}, {"start": 1894.68, "end": 1902.4, "text": " obviously be optimally if this mode was basically skewed all the way to the"}, {"start": 1902.4, "end": 1907.64, "text": " right here where we have cosine similarity of one which means there was"}, {"start": 1907.64, "end": 1913.04, "text": " no modification happening whatsoever and that's pretty much it I don't want to be"}, {"start": 1913.04, "end": 1917.28, "text": " too nitpicky here I mean the results are fairly astounding even with these"}, {"start": 1917.28, "end": 1924.2, "text": " tweaking I mean in two years time with better hardware bigger models more"}, {"start": 1924.2, "end": 1930.64, "text": " a novel like research ideas this can just get better cool hopefully like this"}, {"start": 1930.64, "end": 1935.64, "text": " video if you did consider subscribing click that 
notification bell share it"}, {"start": 1935.64, "end": 1955.8400000000001, "text": " with your friends and also join the discord community until next time bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=lvv4N2nf-HU
OpenAI GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover a new paper from OpenAI - "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models" where they combine diffusion models with transformers to outperform their older DALL-E model. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GLIDE paper: https://arxiv.org/abs/2112.10741 ✅ GLIDE code: https://github.com/openai/glide-text2im Learning about diffusion models Papers: ✅ Seminal (2015): https://arxiv.org/pdf/1503.03585.pdf ✅ DDPM (2020): https://arxiv.org/pdf/2006.11239.pdf ✅ OpenAI (1): https://arxiv.org/pdf/2102.09672.pdf ✅ OpenAI (2): https://arxiv.org/pdf/2105.05233.pdf Blogs: ✅ Score-based models: https://yang-song.github.io/blog/2021/score/ ✅ Diffusion models: https://lilianweng.github.io/lil-log/2021/07/11/diffusion-models.html ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to GLIDE - results 04:00 Intro to diffusion models 07:10 Inpainting and other awesome results 11:05 Diffusion models in depth 20:45 VAE inspired loss 31:30 GLIDE pipeline (diffusion + transformers) 34:15 Guided diffusion 38:00 Classifier-free guidance 42:25 CLIP guidance 45:25 Comparison with other models 48:30 Safety considerations 49:25 Failure cases 51:40 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kulsoom Abdullah Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #glide #openai #diffusionmodels
What's up guys in this video I'm covering Glide towards photorealistic image generation and editing with text guided diffusion models by the awesome OpenAI team right here and this is yet another iteration of OpenAI's efforts to improve upon these basically text conditioned image generation image synthesis models and you're probably familiar with one of their previous models which is namely Dali which had amazing results but as we're going to soon see Glide has even better results and they kind of verified that using human judges so we're gonna see the results a bit later but first with those of you who are not even familiar with Dali let me let me show you what the hype is all about this is what like Glide produces so Glide produces these images conditioned on these prompts here so we have a prompt here a hedgehog using a calculator and we can see an image that like Glide generated which is fairly reasonable I mean you as a human probably wouldn't be able to do a much better job although there's some there is some blur here which may be to the fact that they're using diffusion models but we're gonna see and discuss those a bit later then we have a like another example a corgi wearing a red bowtie and you can see here we have a this attribute binding we basically have the model has to understand that red corresponds to bowtie and then we have purple the color purple corresponds to this party hat so the model first has to kind of hallucinate and create those and then the plate to place them on the right spot in the image and make it look like a realistic and I think they do an awesome job and by the way I don't know what like what's up with corgis I can see corgis everywhere I guess some of the open AI members really likes corgis or it's there kind of in-house pet or something in other cool picture here robots meditating in a vipa sauna retreat fairly sure you cannot find this image anywhere but it'd be nice to see the the nearest neighbor from the training data set from all of these generated images although yeah this looks very very awesome if you ask me I'm gonna just cherry pick a couple more examples here and all of these are also cherry picked already they're using something called guidance like classifier free if I'm not wrong let me just check it out here classifier free guidance yeah that's right so here we can see a high quality oil painting of a psychedelic hamster dragon and this is totally wicked like psychedelic okay we can see kind of the colors are there then we have hamster hamster is also there dragon we can see these these wings in the background which kind of reminds us of the dragon high quality well I'm not sure about that part but like all it definitely has this oil painting vibe to it so again amazing amazing rendering of this of this prompt here many other cool results we have in other corgi here and finally we have this thing here so a red cube on top of a blue cube and I'd love to see some more complex query some more complex prompt where the model we have to do more serious relational reasoning because as far as I remember in the Dali paper they tried and and and constructed those and the model the Dali failed like didn't manage to either count well or fail to do the attribute binding correctly like associate the correct color with the shapes etc so it'd be cool to kind of further probe this model but that's gonna be hard because they only released a smaller model which is trained on a filter data set so it's not not nearly as potent as the one they have in house 
behind I guess yeah not sure whether they have an API for the for this glide model in any case let's try and understand the magic behind behind this model so first things first let me just kind of mention here when evaluated by human judges our samples are preferred to those from Dali 87% of the time when evaluated for photorealism and 69% of the time when evaluated for caption similarity so that's cool so that means that this model definitely improved upon Dali and the main like component that that brought that improvement I'd say is these diffusion models and you're probably familiar with some other generative models such as GANs generative other serial networks or VAE's variational autoencoders or even flow based models but if you're models are probably new for for most of you because only in 2020 did people like manage to get them to work to be in the sense they are they are now kind of you can you can sample very fast like relatively speaking and you can train them and they just work there is a nice relationship between diffusion models and VAE's we're gonna see that the actual loss for the diffusion models is inspired by these VAE models and also there is another class of generative models called score based models and I'm gonna link a lot of resources down in the video description so do check it out so there is a nice connection with that family as well I don't have it here but yeah you can check it out in the video description and another thing I want to point out here is that let's let's see the like a difference at least on this block diagram the main difference between like all of these models and and diffusion models and you can see let me try and use my fancy new touchpad basically you can see that the latents here the latents here are completely like they are of the same dimensionality as the as the input image whereas usually when you have something like VAE so it's an autoencoder that means you're basically down you're down sampling you're you're reducing the dimensionality of your input data into some latent vector here usually denoted as Z and this one is way like lower dimensional compared to the input input data you're trying to model here on the other side you're you're basically you're basically keeping the dimensionality the same as the as your input and you can imagine that that can cause some problems training this model and sampling from this generative model etc but we're gonna see how people manage to to work their way around it kinda I say kinda because before we even dig into details let me just show you I mean there are obviously some limitations and they pointed those out somewhere here let me try and find limitations okay our unoptimized model takes 15 seconds to sample one image on a single a hundred GPU and we're going to understand why this is basically the reason is that we have to do like 20 forward passes in order to render an image whereas using something like GAN or VAE you do a single forward pass and you've got yourself an image so that's that's a limitation to keep in mind and yeah so let's get back up here okay before I even start digging deeper into how diffusion models work which is going to be very ungrateful job for me to do because it'll probably take a whole video to do it so I'm gonna do a high level overview first let's see just the what the model can do on a high level what the model is capable of so here we can see some in painting capabilities and thing to keep in mind here is that they had to do some fine-tuning so this is not zero 
shot as as your Dali model or some of the other like images rendered in this paper this like model was fine-tuned for this precise task of in painting and you can see the awesome results so they they have this mask and the model inputs to the kind of I think that they feed that the three channeled RGB image they also feed this this mask as a separate channel and I think they have a couple more channels in that in painting model but that's not that important right now what's important is you can see that results are fairly stunning again here a girl hugging a corgi on a pedestal you can see it rendered corgi pretty awesome like just look at this this like hand wrapping around the body of this corgi I mean the results are if you if you told me like a couple of years ago that AI models will be able to do this I'd say it's it's like almost impossible and now I think people are just kind of used to AI being able to do this and we're setting even higher and higher goals and so now this is not like nothing special but yeah we need to appreciate the progress that happened since 2012 and of course earlier than that but yeah with the Alex net the things really started skyrocketing same thing here you can see the red hair you can see how the the ways of flowers appears I especially find this one fascinating the hat lies perfectly like the perspective the the color maybe the color is a bit brighter you can see the hopefully can see it on your screen the color is a bit brighter compared to this shirt here so yeah but still fairly stunning results and yeah small knit here glad stands for a guided language to image diffusion for generation and editing you can understand why that is we can both generate we can add it as you can see here we have language that's conditioned we are basically conditioning our our diffusion model on some language prompt and we basically generate images using the diffusion models so that's kind of trying to deconstruct that that complex sentence here are some work like awesome capabilities of Clyde model you can see here that first they kind of generate using the cozy living room they zero shot generate this image of the cozy living room and then they put a mouse like a mask here and then they say a painting of a corgi on the wall above a couch and here comes corgi again and then they put a mask again and then they say around coffee table in front of a couch and here comes the table and it's round and that's I mean fairly awesome and then again they put this mask and they see a vase of flowers on the coffee table and again we see the results rendered and finally here they say they put a mask here and they say a couch in the corner of her room and now I think this is a failure case or I may be missing something out but basically what happened is that we just got ourself a window here but I don't see like a couch in the corner of her room or maybe or maybe it kind of cropped this part so now this is this is like officially in the corner of a room although we don't see the wall here so but but anyways that's maybe me being too nitpicky here but yeah the results are very cool here this just shows some combination using this as the edit and here they kind of like instead of putting this generic mask with a generic shape they kind of help the model a bit more by using both this is kind of semantic segmentation in a way and they put a approximate shape and so it's easier to generate the picture here you can see I can see the results yourself here okay so that's what glide can do now 
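As a quick aside on the inpainting setup just described (the fine-tuned model that sees the RGB image plus the mask as extra input channels), here is a rough sketch of how those inputs could be assembled; the exact channel layout is my assumption based on the description above, not taken from the released GLIDE code.

```python
import torch

def inpainting_model_input(x_t, reference_rgb, mask):
    """Concatenate the noisy image, the reference image with the masked region
    blanked out, and the mask itself into one multi-channel model input.
    (Channel layout is an illustrative assumption.)"""
    masked_reference = reference_rgb * mask                  # mask == 1 where pixels are kept
    return torch.cat([x_t, masked_reference, mask], dim=1)   # (B, 3 + 3 + 1, H, W)
```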
let me try and demystify how this thing actually works. It's going to be very hard to explain how these diffusion models work this quickly, but let me try; if you think the explanation was not good enough, let me know in the comments and I can maybe create another video just on diffusion models. Okay, so the process itself: these diffusion models were inspired by non-equilibrium statistical physics. I know it sounds very complicated, because it is, but back in 2015 the first paper appeared that basically originated the idea of diffusion models, and only in 2020 did we get these models to work nicely, to generate nice samples and to be faster. So here is the main idea. You have this diffusion process where you take your input image and you add a certain amount of noise from a specific family; usually people use Gaussians, because Gaussians belong to this thing called the exponential family, which means they have all of these nice properties: when you add Gaussians or take products of them you end up with a Gaussian again, the structure remains and it's easy to compute. That's why people use Gaussians all around, like for the priors in VAEs, etc. So we add some noise and we keep doing that until we get a picture which is basically pure Gaussian noise. Now, wouldn't it be nice if we could reverse the process: start from Gaussian noise, so just sample a pure-noise picture, reverse the process and generate our data? That would be cool. So again, we have this conditional distribution, x_t conditioned on x_{t-1} (the previous timestep), and we know it exactly because it's described by this formula here, q(x_t | x_{t-1}) = N(x_t; sqrt(1 - beta_t) x_{t-1}, beta_t I); I'm going to quickly explain how it works in a second. What's problematic is that it's not easy to figure out the reverse conditional distribution, the one where you condition x_{t-1} on x_t, i.e. where you're trying to denoise the image. So what diffusion models do is construct this p_theta, which in practice is a neural network, to model that unknown distribution, and we're going to see a couple of nice properties of that distribution which allow us to actually do this. Okay, before we get there, let me dissect this formula. The notation is kind of cringy; I like it better when you write something like x_t with a squiggly tilde saying it's sampled from the normal distribution, but just be aware of how this notation works: it means we're sampling x_t from the Gaussian distribution denoted by N, this part here is the mu, so the mean, and this thing here is the covariance matrix. So let's dissect it: we have x_{t-1} multiplied by sqrt(1 - beta_t), where beta_t is the noise level (I forgot the exact name, but whatever), basically a number between 0 and 1, which means the factor under the square root is going to be less than 1.
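To make the forward (noising) process concrete, here is a minimal sketch in PyTorch, assuming a simple linear beta schedule; the schedule values and the helper names are illustrative assumptions, not GLIDE's actual code.

```python
import torch

T = 1000                                    # length of the diffusion chain
betas = torch.linspace(1e-4, 0.02, T)       # noise levels beta_t, each in (0, 1)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # abar_t = product of (1 - beta_s) up to step t

def q_step(x_prev, t):
    """One noising step: x_t ~ N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)."""
    eps = torch.randn_like(x_prev)
    return torch.sqrt(1.0 - betas[t]) * x_prev + torch.sqrt(betas[t]) * eps

def q_sample(x0, t):
    """Jump straight to step t with the closed-form q(x_t | x_0), discussed a bit below:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = torch.randn_like(x0)
    return torch.sqrt(alpha_bars[t]) * x0 + torch.sqrt(1.0 - alpha_bars[t]) * eps

# As t grows, alpha_bars[t] goes to 0, so x_t approaches pure standard Gaussian noise.
```

The sqrt(1 - beta_t) factor in q_step is exactly the contraction being described here: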
we're gonna contract this X of t minus 1 okay so let me try and visualize this thing right here okay so we have and for the sake of clarity I'm gonna assume that X of t minus 1 which is our image is actually only three dimensional data point so that I can actually visualize what's happening otherwise if this is thousand by thousand that's million dimensions which means I cannot draw a million dimensions on my one node screen here so here is the coordinate system so let's say that X of t minus 1 is somewhere here and so this is the origin point here it's kind of let me kind of try and connect it here so what this thing here does is it's contracting because it's more than one which means then the mu the mean of this distribution let me take a different color here let me try and take the green one it's gonna kind of push this point from here all the way to somewhere here okay so this is gonna be whoops it's gonna be well I'm still trying to get used to this novel touchpad okay so we have the point now here and now this beta is gonna be we're gonna have a diagonal I guess isotropic covariance matrix which means we're gonna have I'm gonna denote that as some small circle around this point so we're gonna have some Gaussian distribution centered around this point here and we're gonna sample our X of t from that distribution here and so let's say we sample it let me try and change the color here let me take the blue color so let's say we sample data point here so that's gonna be our so we started with with here so we had a X t minus one now we are here so this is visually representing in this 3d space what's happening in this image space where we're getting more noisy and noisy images so maybe from this image we got this one although this will take more steps obviously but yeah you get a point now let's see what happens in the limit the thing with these betas is that in the original paper in I think in the paper from 2020 they were not learnable they just took and created a schedule that started with some larger betas which are still smaller than one and then slowly anneal them towards zero okay linearly I think so we had a linear schedule and that means when beta t gets down to zero what happens actually it's the other way around it starts with very small because we want to have like lesser noise so it starts with zero and then it's a meal towards one okay that will make more sense so what happens when it gets to one that means this is going to zero so this thing here let me change the color so I'm gonna take the red pen here and so basically this thing is gonna be a kneeled it's gonna go to zero and that means that this thing here is gonna be contracted all the way down here until we get to the origin point so that means that after a lot of steps our data points are gonna be basically drawn from from this from this like distribution which is basically your your standard Gaussian distribution the normal distribution because we'll have a mu of zero because this is zero so that means this thing is gonna be zero and this thing is gonna be one which means we have identity matrix and we are basically sampling from a noise pure noise distribution and that's what I explained here so that that's basically this image here so that's at the limit of this diffusion process now you might think that this would be very slow if we had to sample like if we had to sample and get X 100 we'd have to repeat this sampling procedure hundred times but that's not the case luckily for us because you can you can form you 
can because you can see here Q X of T conditions of X of zero say X of zero is the original image from our data set the actual images we have in our data set we can form like a distribution so that we can directly sample and get and kind of skip these intermediate steps and like get the X 100 or X 200 from a single from a single like a step and you can see it here again these alpha T with this hat thing is a product of alpha of these coefficients here and they are 1 minus beta and you can imagine because all of these are smaller than 1 when you multiple a bunch of numbers that are smaller than 1 this thing is going to converge in the limit it's going to go down to zero to zero okay it's going to down to zero so that means what we end up with is nothing here and we end up with one here which means this thing is going to be equal to this small epsilon here which is as you can see here the normal distribution so that's what happens again in the in the limit of this diffusion process now this is very important they state here and this is this is actually a conclusion from 1949 I think they say note that Q so this is conditional distribution we are we're trying to model approaches a diagonal Gaussian distribution as capital T approaches infinity and this is the number of steps in our diffusion process in correspondingly this this noise level in the limit goes down to zero so it's sufficient to train a neural network to predict a mean mu theta and a diagonal covariance matrix Sigma theta and so this will be the this is this thing here is the distribution we're trying to learn in order to model the actual the true distribution here so this one the unknown one okay and the reason why it's possible is because we know that under these conditions the actual true distribution is going to have that same shape and then we can just kind of learn and kind of fit this caution on top on top of that distribution now if you're familiar with VA is this may be this whole like a diagram here may be familiar and if I were to swap if I were to swap this and kind of change it to and denote it as Z which is usual notation VA is because X one through T are the latents remember I mentioned that the latents here in this model are of the same dimension so these are the latents these noisy images here are the latents of our of our image here and basically if I were to denote these as Z so this one here as Z you may notice this formula from the VA is construction and here we're just trying to like construct something that's called evidence lower bound and it's fairly self-explanatory although it may be super confusing when you hear it the first time so why is that why is it called elbow or evidence lower bound so first we have this thing here is called evidence so this is evidence or I think you kept include the log one so this basically you want to maximize this you want to have your data points from your data set have high probability under your fitted distributions under your learned distribution P of theta so that's what machine learning is models are oftentimes trying to do we're trying to maximize the probability of the data points under our our model okay now because log is monotonic it actually does not matter whether you're optimizing you're not going to change the loss function by adding a log you're gonna change some computational properties and the convergence speed etc so it's kind of usually needs to have log inside but so if we have this why don't we try and maximize it directly well the problem is it's 
intractable so you can try and break this thing down and you end up with something with a complex integral that looks something like this we have these P theta effects zero and we'll have the prior so the probability of the actual latent and we'll have to basically integrate all over all of those latents as you can see here we have to sample the latents to Z the Zets we have to have their probabilities we have to to construct this to basically compute this integral which is going to be it's simply intractable so we cannot directly compute this thing here so this is equal to the probability theta X zero you'd have to do this thing here in order to get this this probability here so that's that's out of the question we can do that so that's why people are constructing these lower bounds and basically this is trivial by definition because KL divergence is always greater or equal than zero it's equal only in the case when the two distributions perfectly match which we're trying to do by training these beta t to match the the P distribution to to match the real Q distribution so after doing some algebra here I'm not gonna dig into a lot of details we end up with this formula here and this is what we call the the elbow the like like variance lower bound like loss and you can see here that basically if we tried and minimized this thing we know that it's gonna be that the actual thing we care about is gonna be even lower so what's called lower bound if it's actually higher because usually you don't have the minus and then you're trying to maximize this thing and this thing is gonna be smaller or equal and that means as you're maximizing it you know that the actual thing you care about which is the the the evidence here so this thing here is gonna be at least as big as your loss so this is also called surrogate loss because you're trying to find different loss it's kind of going to bound your actual thing you care about okay so long story short what happens is they kind of further have to break down this this equation so that it's actually tractable and computable so they just end up in a sum of various scale divergences and at the end they end up with a simple MSC loss between the the the mu theta so that's the the thing we're trying to learn here the mu theta of our P theta distribution and they just need to do MSC loss but what happens is they actually instead of figuring out the mu they actually just figure out the noise level that happens in that happened in a specific step and let me try and break that down in a second oops by the way I realized I forgot to put the condition this on that so you have to condition so it's P of X zero given that and we see we look at the probability of that so what's the probability of that and then what's the probability of this X zero being so what's the probability of X zero given that that so we have to integrate all of that to get the actual probability of X zero so it's a small mistake hopefully you can see what's happening there anyways on the high level just it's hard it's intractable that's everything I wanted to keep in mind that's it you don't have to know it is the probability details here okay so let me try and break this down why why they're why they're what they're doing here so instead of figuring out mu so that means instead of let's let's say let me go back to this picture here so let's say this is X of this is X of t and this was X of t minus 1 this point here was X of t minus 1 this one is X of t so instead of trying to predict given X of t 
trying to predict this mu here, which is the point from which this x_t was generated in the first place, instead of doing that they figure out the noise, which is the same thing in the end, we're going to see that, but it's a bit different. So let's say we had an image here and it had some amount of noise, a couple of noise points (I'm reducing the amount of noise to make it easier to explain). From there we want to understand which portions of the noise were added going from step t-1 to step t, which of these noise points were added. So maybe we figure out, okay, these were the noise points that were added, and we end up with an image that still has these two, but we got rid of that other noise. You can imagine that by doing this we are reducing the noise, and at the end we end up with a final, noise-free image from our data distribution, and that's awesome. So that's what's happening: they're trying to figure out the noise that was added between two steps, that's everything I want you to understand here, and that's exactly what they do here. You can see we have this L_simple, the simple loss, because they made some simplifications, but it turned out to work quite nicely empirically. So let's see how L_simple works. We have an expectation where the timestep goes from 1 to capital T, the size of our diffusion chain, let's call it that; we sample x_0 from the true distribution, which is a fancy way of saying we take an image from our data set; and we sample epsilon from a normal, Gaussian distribution. So what happens in practice, how this model is trained, is the following: we sample x_0, meaning we take an image, then we sample t, maybe 100, in a single training pass. Because we have that cool closed-form property from before, we can jump over the intermediate steps; we don't have to do a hundred sampling steps, we can directly calculate x_t from that formula. So now we have x_t, which means we can feed x_t into our epsilon_theta model, and we also feed the timestep. How do you feed a scalar into a neural network, you may ask? Well, remember transformers: they use the same trick, these sinusoids; they index into a table of precomputed sinusoidal embeddings, take a single row and add it to the model, and that's how you convert a scalar into a vector (check out my video on the Transformer paper if that's not familiar). And then you're just trying to regress this epsilon, that's it; that's how the whole training procedure works.
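Here is a minimal sketch of what one training step with this simplified objective L_simple could look like; eps_model stands in for any network eps_theta(x_t, t) (a U-Net in the actual papers), and the names and shapes here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def training_step(eps_model, x0, alpha_bars, optimizer):
    """One step of L_simple = E_{t, x0, eps} || eps - eps_theta(x_t, t) ||^2."""
    b, T = x0.shape[0], alpha_bars.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)      # sample a timestep per image
    eps = torch.randn_like(x0)                           # sample the Gaussian noise
    abar = alpha_bars[t].view(b, 1, 1, 1)
    x_t = torch.sqrt(abar) * x0 + torch.sqrt(1.0 - abar) * eps   # jump straight to x_t
    eps_pred = eps_model(x_t, t)       # t is turned into a sinusoidal embedding inside the model
    loss = F.mse_loss(eps_pred, eps)   # regress the noise that was added
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```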
Once you have this epsilon_theta model trained, the way you actually get mu_theta is via a formula that, after a couple of simple algebraic manipulations, lets you just plug in x_t and the predicted epsilon_theta and get out mu_theta for x_{t-1}, i.e. the mean from which the less noisy version of the image came. Going back to our visualization, that means we found a mu one step up the diffusion chain where the image was less noisy. Okay, that was my best attempt at making these diffusion models a bit clearer; it took me a couple of days to wrap my head around this, I had to read four papers and a couple of blogs, and until you try to implement this thing yourself you're going to miss out on the details. But the high-level picture is: you have this process of adding noise, and you have the ability to reverse the process because of the nice property that the reverse conditional distribution, x_{t-1} conditioned on x_t, is also of Gaussian form, and that makes life a lot easier. Basically, you're trying to figure out which part of the current image is the actual noise, that's how you denoise with every single step, and finally you end up with an image from your model's distribution, which is hopefully as close as possible to the actual data distribution if you trained the model correctly. That was the hardest part to understand about this paper. Now that you know that, let's see a couple more details. There is some guidance they're using so that they can not just generate unconditional images, but take a piece of text, as I showed you earlier, condition the model on it, and then generate an image. So let me dig in deeper here. For the main experiments they train a 3.5 billion parameter text-conditional diffusion model at 64x64 resolution and another 1.5 billion parameter text-conditional upsampling diffusion model to increase the resolution to 256x256; for CLIP guidance they also train a noise-aware ViT-based CLIP model, blah blah blah, that's not that important. What is worth mentioning is that it's noise-aware, which means you cannot take an off-the-shelf model, you have to retrain it on noisy samples to get correct results. More important than that: how do you condition on text? The answer is simple, transformers. To condition on the text, they first encode it into a sequence of K tokens and feed these tokens into a transformer. Then they say the output of the transformer is used in two ways: first, the final token embedding is used in place of a class embedding in the ADM model, which is the diffusion model I was just explaining; second, the last layer of token embeddings, a sequence of K feature vectors, is separately projected to the dimensionality of each attention layer throughout the ADM model. This is very architecture specific, but basically what they do is take some text, pass it through a transformer (the almighty transformer, T for transformer), out come the feature vectors, and they feed those feature vectors in a smart way into the diffusion model, so you condition the diffusion model D on these feature vectors. And that's the final pipeline, that's how the text-conditioned version works.
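Before moving on to guidance, here is a small sketch tying the epsilon-to-mu formula from the start of this passage to an actual sampling step; the fixed variance choice sigma_t^2 = beta_t is one common option and an assumption on my part, not necessarily GLIDE's exact setting.

```python
import torch

@torch.no_grad()
def p_sample(eps_model, x_t, t, betas, alphas, alpha_bars):
    """One reverse step: build mu_theta(x_t, t) from the predicted noise, then sample x_{t-1}.
    mu_theta = (x_t - beta_t / sqrt(1 - abar_t) * eps_theta(x_t, t)) / sqrt(alpha_t)"""
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device)
    eps_pred = eps_model(x_t, t_batch)
    mu = (x_t - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps_pred) / torch.sqrt(alphas[t])
    if t == 0:
        return mu                                  # final step: return the mean, add no noise
    sigma = torch.sqrt(betas[t])                   # simple fixed-variance choice (assumption)
    return mu + sigma * torch.randn_like(x_t)

# Sampling starts from x_T ~ N(0, I) and applies p_sample for t = T-1, ..., 0,
# which is also why generating a single image takes many forward passes.
```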
The training itself remains the same: you still have this diffusion thing, you just additionally have the caption associated with each of these images at all times, the same caption throughout the whole chain, and you use that extra information to train the text-conditioned version of the model. Okay, so what turned out to be the case is that you can improve the results even further by having a dedicated classifier nudge your sampling procedure, so let me try to explain how that works. For a moment, forget that we are working with text and think of this like MNIST: say we have ten classes, and instead of passing text through a transformer we just take, I don't know, a one-hot encoding of our class and use that to condition the model. So what they do is take the data set and train a classifier, which is going to be some convolutional neural network or a ViT or whatnot, and you take the noisy versions of your samples and teach this model to predict the correct class, maybe c_1, like dog. Now, because this model can predict the correct class given an image, you can use it to guide the generation procedure and improve the samples; we've seen this with CLIP-guided VQ-VAE models and other generative models, you can see that all over Twitter nowadays. In a nutshell, how this is implemented: remember, we're trying to predict the mu, and we just add a scalar coefficient that amplifies the influence of the gradient, taken with respect to the input image, of log p_phi(y | x_t). That looks very complicated, but p_phi is basically our classifier. What happens on a semantic level is: because we have the gradient, we know how to tweak x_t, how to change its pixels, in order to maximize the probability of the class. That's something you can see in DeepDream (check it out if you're not familiar with it): you have an image, and you know how to tweak particular pixels so that a particular class, think of a softmax distribution over classes, gets boosted; that's what gradients are there for. Because you have that information, you can use it to tweak the mu. So if we go back to the drawing I showed you earlier, that means that instead of going from here to here, the classifier is smart enough to tell you: hey, if you want an image of a dog, it's better to move a little bit in this other direction instead, because according to the classifier that path leads to a better sample of the target class.
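A tiny sketch of that classifier-guided mean shift, following the guided-diffusion formulation described here; the helper names and the scale value are illustrative assumptions. CLIP guidance, covered a bit later, works the same way except the classifier's log-probability is swapped for the CLIP text-image dot product.

```python
import torch

def classifier_guided_mean(mu, sigma2, x_t, t, y, classifier, scale=1.0):
    """Shift the predicted mean along the gradient of log p_phi(y | x_t, t):
    mu_guided = mu + scale * sigma^2 * grad_x log p_phi(y | x_t, t).
    The classifier is trained on noisy images, so it also receives the timestep t."""
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
        selected = log_probs[torch.arange(x_in.shape[0]), y].sum()
        grad = torch.autograd.grad(selected, x_in)[0]
    return mu + scale * sigma2 * grad
```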
And yeah, that's what the formula is telling us; I didn't explain how the formula is actually derived, but that's going to be enough for now. So that's one way they guide the generation process toward a particular class. The other thing they do is classifier-free guidance. The idea here is to not need a dedicated classifier, because you can imagine it's very expensive to train a separate classifier, and this time on the noisy samples, so you cannot take an off-the-shelf model, you have to train it on noisy data, and that's expensive; you want to avoid that if possible, and that's where the "classifier-free" part comes from: we do not want to have a classifier if we don't have to. Let me walk you through this, because it's fairly important, since they got better results with this classifier-free approach than with CLIP as the guidance model. They say here: instead of training a separate classifier model, we choose to train an unconditional denoising diffusion model p_theta(z) (the notation is a bit different, but bear with me), parameterized through a score estimator epsilon_theta(z_lambda), which is the thing we just saw, the noise we're trying to learn, together with the conditional model p_theta(z | c), parameterized through epsilon_theta(z_lambda, c); so that's basically our model, we're just additionally conditioning it. We use a single neural network to parameterize both models, where for the unconditional model we simply input zeros for the class identifier c when predicting the score, i.e. we just set c = 0, and we jointly train the unconditional and conditional models simply by randomly setting c to the unconditional class identifier: they have the same network and, with some probability, they zero out the class, and that's how they train the unconditional and the conditional diffusion model at the same time. Finally, sampling is performed using the following linear combination of the conditional and unconditional score estimates, and I'm fairly sure the formula as printed in this paper is broken and that the one here is actually correct, so I'm going to jump to that paper: to get the final estimate, we take the prediction from the unconditional version plus s times the difference between the conditional and the unconditional predictions, i.e. epsilon_hat = epsilon_theta(z) + s * (epsilon_theta(z, c) - epsilon_theta(z)). What this means geometrically is the following: epsilon_theta conditioned on zero, meaning unconditioned, says, hey, if you want to generate a nicely denoised image, go in this direction. The difference term tells us, hey, the conditional model is smarter because it learned how to generate conditioned images, so instead of going there, go here; you can imagine the difference vector here is this green thing, we subtract the unconditional prediction from the conditional one and end up with this vector. So what the formula is telling you is: first go with the unconditional direction, and then amplify this difference vector by however much we want using the guidance scale s.
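As a sketch, the classifier-free guidance combination described above could look like this; the function and argument names are mine, and null_cond stands for the "empty" caption (or zeroed class) used for the unconditional branch.

```python
import torch

def cfg_epsilon(eps_model, x_t, t, cond, null_cond, guidance_scale=3.0):
    """eps_hat = eps(x_t, null) + s * (eps(x_t, c) - eps(x_t, null)).
    s = 1 recovers plain conditional sampling; s > 1 pushes further along the
    direction the conditional model prefers over the unconditional one."""
    eps_uncond = eps_model(x_t, t, null_cond)
    eps_cond = eps_model(x_t, t, cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```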
there may be like two or something that means hey we want to do it like this so go twice there and so basically we end up going going in this direction or something like that so that's the the high-level idea of how this should be supposed to work yeah there is obviously a lot of theory for why this is yeah I think I think this was mostly empirical finding somebody correct me if I'm wrong but yeah and now I was constantly talking about like simple class conditioning because remember we were working with text here the version they're actually using is like conditioning on captions and this it means just empty empty text sequence and that's that's yeah we sometimes replace text captions with an empty sequence during training so that's how we we get into the from the class conditions world into the text condition world okay also also here I explained to you how the guided diffusion would work in the case where we are conditioning on a class but remember again we're working with text so they are going to use clip and they say here they are protruding the reverse process mean along the gradient of the text image top product with respect to the image so that's here this may be a bit overwhelming perturbing the reverse process mean is what I explained up there so we are as you can see we're protruding the mean with the gradients of whatever is this and in the case of clip it's gonna be we're bringing me along the gradient of the text image dot product with respect to the image go ahead and check out my clip video if you haven't watched that one if you're not familiar with clip but here briefly what clip does is you're basically gonna have like an image here and that's so let me just change the color here so the image and the the texts the the the captions are gonna be embedded in the joint space and so what happens is you have an image here and you have some pieces of text here and you're gonna do dot product because they're in the same space that means they're like I don't know like three-dimensional vector or three-dimensional vector it's gonna usually be higher dimensionality but for the sake of argument and you're gonna do dot product between them now if you're trying to generate a specific caption like the hamster the dragon hamster from from the example I showed you in the beginning what you want to do is again so this is an image here you'll want to tweak the pixels such that this dot product between these two such that this dot product goes up okay and that's that's these you're basically that's those are the gradients here that's what this formula is capturing so instead of in the case of class conditioning you're trying to maximize just the probability of a class here you're trying to maximize the match between the image and a particular caption for which you're trying to generate this image so remember you're given the caption and you want to generate a specific image and so you're using this clip model to learn how to change the image so that you can better describe and generate the the corresponding image for this particular caption so that's it okay there was a lot of details yeah let me know whether this this format was too too too many details or not but yeah now we're gonna just see the results and that's pretty much it thing worth noticing is that they they made sure that the training compute is roughly equal to that used to train the Lee so that they can fairly compare both models and you can read this part yourself if you're if you're curious to know more I basically this day 
Now we're just going to look at the results, and that's pretty much it. One thing worth noting is that they made sure the training compute is roughly equal to that used to train DALL-E, so that they can fairly compare both models; you can read that part yourself if you're curious to know more. They also describe how the model is fine-tuned for the inpainting task, plus some additional details about the guidance procedure.

Here they compare GLIDE using classifier-free guidance, CLIP guidance, DALL-E, some GAN baselines, and the real images, and you can see the results. Let me just focus on this column here: you can see that classifier-free guidance arguably creates better images than CLIP guidance, and better than DALL-E. Although I only took a single column, I think we can agree that for "a group of elephants walking in muddy water" this one also looks better than that one, that's for sure, because that one is kind of blurry, and that has to do with the fact that DALL-E uses a discrete VAE in the background; that's how DALL-E works, and you can check my DALL-E video if you're not familiar with it. So those are some comparison results.

Here, what they've done is compare classifier-free guidance against CLIP guidance. I'm going to skip these curves, but basically it turns out that classifier-free guidance is better on these metrics as well, except for the one where the metric is the CLIP score itself. What happened there is that adversarial examples made it look like CLIP guidance is better, but it's actually not; classifier-free guidance is better. They say: "we hypothesize that CLIP guidance is finding adversarial examples for the evaluation CLIP model, rather than actually outperforming classifier-free guidance when it comes to matching the prompt." I guess that would be my first hypothesis as well for why this happens. You can see here that, for the same CLIP score, one of the curves has lower FID, and since lower FID is better, that curve, the orange one in this particular case, is the better one.

Okay, so they did some comparisons with the DALL-E model. They tried really hard to make GLIDE as bad as possible and DALL-E as good as possible, and they still always got better results. Everything above 50% here means that GLIDE is preferred over DALL-E, and that's the case across all of the columns and all of the rows. This row means they're using CLIP to rerank the output images of DALL-E: they take, I think, 512 or so samples from DALL-E and then pick the best one with the CLIP model, so CLIP is ranking the images for a particular caption, and that's how they go from no reranking to DALL-E reranked. And then here, what they do is blur the output of GLIDE by passing it through the discrete VAE from DALL-E, if I'm not wrong, and that's what makes it blurry; that's why we saw up there that the DALL-E image is way more blurred than the output of GLIDE. So they pass the GLIDE image through the same model that caused the blur, compare them again, and GLIDE is still preferred over DALL-E. That's a very encouraging result. A minimal sketch of that CLIP reranking step follows right below.
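For reference, here is roughly what that reranking looks like; this is a minimal sketch using the openai/CLIP package interface, not the exact evaluation code from either paper.

import clip
import torch

def rerank_with_clip(candidate_images, caption, device="cuda"):
    # candidate_images: a list of PIL images, e.g. ~512 samples drawn from the generator.
    # Returns the candidate whose CLIP embedding best matches the caption.
    model, preprocess = clip.load("ViT-B/32", device=device)
    image_batch = torch.stack([preprocess(im) for im in candidate_images]).to(device)
    text_tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image_batch)
        txt_emb = model.encode_text(text_tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(-1)   # cosine similarity per candidate
    return candidate_images[scores.argmax().item()]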
Okay, finally the safety-considerations section. As I said, they open-sourced a smaller version of the model that was trained on a filtered dataset, and they say: "we filtered out training images containing people to reduce the capabilities of the model in many people-centric problematic use cases." That's one of the mitigations they attempted. It's a 300-million-parameter model, so a way smaller model; remember that the big one is in the billions of parameters, ballpark. They also say that they probed GLIDE, the small filtered version, for some forms of bias and found that it retains, and may even amplify, biases in the dataset: for example, when asked to generate toys for girls, the model produces more pink toys and stuffed animals than it does for the prompt toys for boys. I mean, this one is kind of funny; of course that's going to happen, because the way our society works, girls do play with more pink toys when they're young, so the statement seems a bit weird to me: the model is just reproducing whatever is present in the dataset, and I think we can all agree on that. Let me know your thoughts on this particular statement; I'd love to hear them.

And finally, some failure cases. "An illustration of a cat that has eight legs": you can see it failed. Maybe the model thought that these are four legs here and then there are two tails, so this is a Chernobyl cat or something, which makes six, and then these ears have some features that maybe look like legs, which would make eight, but yeah, that line of reasoning doesn't really hold for this one. There is a bunch of failure cases here, and they're kind of interesting. "A mouse hunting a lion": I'd argue maybe this mouse actually is hunting this lion, that may be the case, or maybe the lion is dead and the mouse is eating it, and it just went into the mouth to double-check some of the, I don't know, vital signs, to see whether the lion is healthy or something, I don't know. "A car with triangular wheels": this is, I guess, the best attempt the model could make, and it looks like something that might actually appear in real-world data, but it's not perfect. I also found some other failure cases down in the appendix; let me try to find them. For the fine-tuned model they say "golden necklace" and do the inpainting: you can see they put a mask here, and the model doesn't even produce the necklace. So there are failure cases, obviously; it's not perfect. Also, I mentioned that DALL-E was failing at these relational-reasoning types of tasks, and here they only show a simple one, so I guess they really cherry-picked the best out of this model, and since we cannot test it ourselves, that kind of sucks.

Other than that, this is fairly awesome work, and I love the fact that they're pushing diffusion models. I'm not sure whether diffusion models are ever going to be as fast as GANs or VAEs when sampling, for the simple reason that the whole diffusion-model construction rests on the fact that this distribution approaches a Gaussian only when you have a lot of steps and when beta t goes to zero; if you manage to find a way around that, it's not going to be a diffusion model anymore, or it's going to be something different. (There is a tiny sketch of the sampling loop right after this transcript that shows why sampling needs so many forward passes.) In any case, hopefully you found this video useful. I still don't have a camera; hopefully in the next couple of weeks I'm going to get one. Let me know if you found this video interesting and useful, leave some comments and feedback, subscribe to this channel, share it with friends, and until next time, bye bye.
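A bare-bones, DDPM-style ancestral sampling loop, to make that speed point concrete: every one of the T denoising steps is a full forward pass through the network, which is why sampling is so much slower than a single GAN or VAE decoder pass. eps_model is a placeholder for the trained noise predictor; this is a generic sketch, not GLIDE's actual sampler.

import torch

@torch.no_grad()
def sample(eps_model, shape, betas, caption=None):
    # betas: 1-D tensor holding the noise schedule (small values, annealed upward).
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                      # start from pure Gaussian noise x_T
    for t in reversed(range(len(betas))):       # T sequential denoising steps
        eps = eps_model(x, t, caption)          # one full forward pass per step
        mu = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mu + torch.sqrt(betas[t]) * noise   # sample x_{t-1}
    return x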
[{"start": 0.0, "end": 4.54, "text": " What's up guys in this video I'm covering Glide towards photorealistic"}, {"start": 4.54, "end": 10.32, "text": " image generation and editing with text guided diffusion models by the awesome"}, {"start": 10.32, "end": 17.48, "text": " OpenAI team right here and this is yet another iteration of OpenAI's efforts to"}, {"start": 17.48, "end": 23.32, "text": " improve upon these basically text conditioned image generation image"}, {"start": 23.32, "end": 27.12, "text": " synthesis models and you're probably familiar with one of their previous"}, {"start": 27.12, "end": 33.08, "text": " models which is namely Dali which had amazing results but as we're going to"}, {"start": 33.08, "end": 38.2, "text": " soon see Glide has even better results and they kind of verified that using"}, {"start": 38.2, "end": 42.160000000000004, "text": " human judges so we're gonna see the results a bit later but first with those"}, {"start": 42.160000000000004, "end": 45.400000000000006, "text": " of you who are not even familiar with Dali let me let me show you what the"}, {"start": 45.400000000000006, "end": 50.52, "text": " hype is all about this is what like Glide produces so Glide produces these"}, {"start": 50.52, "end": 56.760000000000005, "text": " images conditioned on these prompts here so we have a prompt here a hedgehog"}, {"start": 56.76, "end": 62.58, "text": " using a calculator and we can see an image that like Glide generated which is"}, {"start": 62.58, "end": 66.67999999999999, "text": " fairly reasonable I mean you as a human probably wouldn't be able to do a much"}, {"start": 66.67999999999999, "end": 71.48, "text": " better job although there's some there is some blur here which may be to the"}, {"start": 71.48, "end": 74.64, "text": " fact that they're using diffusion models but we're gonna see and discuss those a"}, {"start": 74.64, "end": 79.88, "text": " bit later then we have a like another example a corgi wearing a red bowtie"}, {"start": 79.88, "end": 84.08, "text": " and you can see here we have a this attribute binding we basically have the"}, {"start": 84.08, "end": 88.28, "text": " model has to understand that red corresponds to bowtie and then we"}, {"start": 88.28, "end": 93.56, "text": " have purple the color purple corresponds to this party hat so the model first has"}, {"start": 93.56, "end": 98.4, "text": " to kind of hallucinate and create those and then the plate to place them on the"}, {"start": 98.4, "end": 103.03999999999999, "text": " right spot in the image and make it look like a realistic and I think they do an"}, {"start": 103.03999999999999, "end": 106.75999999999999, "text": " awesome job and by the way I don't know what like what's up with corgis I can"}, {"start": 106.75999999999999, "end": 112.08, "text": " see corgis everywhere I guess some of the open AI members really likes corgis"}, {"start": 112.08, "end": 117.36, "text": " or it's there kind of in-house pet or something in other cool picture here"}, {"start": 117.36, "end": 123.2, "text": " robots meditating in a vipa sauna retreat fairly sure you cannot find this"}, {"start": 123.2, "end": 128.36, "text": " image anywhere but it'd be nice to see the the nearest neighbor from the"}, {"start": 128.36, "end": 131.64, "text": " training data set from all of these generated images although yeah this"}, {"start": 131.64, "end": 136.96, "text": " looks very very awesome if you ask me I'm gonna just cherry pick a couple more"}, {"start": 136.96, "end": 141.16, "text": " 
examples here and all of these are also cherry picked already they're using"}, {"start": 141.16, "end": 145.64, "text": " something called guidance like classifier free if I'm not wrong let me"}, {"start": 145.64, "end": 149.48, "text": " just check it out here classifier free guidance yeah that's right so here we"}, {"start": 149.48, "end": 154.92, "text": " can see a high quality oil painting of a psychedelic hamster dragon and this is"}, {"start": 154.92, "end": 158.76, "text": " totally wicked like psychedelic okay we can see kind of the colors are there"}, {"start": 158.76, "end": 163.56, "text": " then we have hamster hamster is also there dragon we can see these these"}, {"start": 163.56, "end": 168.48, "text": " wings in the background which kind of reminds us of the dragon high quality"}, {"start": 168.48, "end": 171.95999999999998, "text": " well I'm not sure about that part but like all it definitely has this oil"}, {"start": 171.95999999999998, "end": 178.6, "text": " painting vibe to it so again amazing amazing rendering of this of this prompt"}, {"start": 178.6, "end": 187.6, "text": " here many other cool results we have in other corgi here and finally we have"}, {"start": 187.6, "end": 193.16, "text": " this thing here so a red cube on top of a blue cube and I'd love to see some"}, {"start": 193.16, "end": 197.39999999999998, "text": " more complex query some more complex prompt where the model we have to do"}, {"start": 197.4, "end": 202.08, "text": " more serious relational reasoning because as far as I remember in the"}, {"start": 202.08, "end": 206.56, "text": " Dali paper they tried and and and constructed those and the model the Dali"}, {"start": 206.56, "end": 211.8, "text": " failed like didn't manage to either count well or fail to do the attribute"}, {"start": 211.8, "end": 216.16, "text": " binding correctly like associate the correct color with the shapes etc so it'd"}, {"start": 216.16, "end": 221.36, "text": " be cool to kind of further probe this model but that's gonna be hard because"}, {"start": 221.36, "end": 226.12, "text": " they only released a smaller model which is trained on a filter data set so it's"}, {"start": 226.12, "end": 231.84, "text": " not not nearly as potent as the one they have in house behind I guess yeah"}, {"start": 231.84, "end": 238.52, "text": " not sure whether they have an API for the for this glide model in any case"}, {"start": 238.52, "end": 243.56, "text": " let's try and understand the magic behind behind this model so first things"}, {"start": 243.56, "end": 248.20000000000002, "text": " first let me just kind of mention here when evaluated by human judges our"}, {"start": 248.20000000000002, "end": 253.8, "text": " samples are preferred to those from Dali 87% of the time when evaluated for"}, {"start": 253.8, "end": 260.24, "text": " photorealism and 69% of the time when evaluated for caption similarity so"}, {"start": 260.24, "end": 264.04, "text": " that's cool so that means that this model definitely improved upon Dali and"}, {"start": 264.04, "end": 269.40000000000003, "text": " the main like component that that brought that improvement I'd say is"}, {"start": 269.40000000000003, "end": 272.72, "text": " these diffusion models and you're probably familiar with some other"}, {"start": 272.72, "end": 277.36, "text": " generative models such as GANs generative other serial networks or VAE's"}, {"start": 277.36, "end": 282.52, "text": " variational autoencoders or even flow based models but if you're models are"}, 
{"start": 282.52, "end": 288.68, "text": " probably new for for most of you because only in 2020 did people like manage to"}, {"start": 288.68, "end": 293.35999999999996, "text": " get them to work to be in the sense they are they are now kind of you can you can"}, {"start": 293.35999999999996, "end": 298.64, "text": " sample very fast like relatively speaking and you can train them and they"}, {"start": 298.64, "end": 302.76, "text": " just work there is a nice relationship between diffusion models and VAE's we're"}, {"start": 302.76, "end": 308.12, "text": " gonna see that the actual loss for the diffusion models is inspired by these VAE"}, {"start": 308.12, "end": 311.96, "text": " models and also there is another class of generative models called score based"}, {"start": 311.96, "end": 315.35999999999996, "text": " models and I'm gonna link a lot of resources down in the video description"}, {"start": 315.35999999999996, "end": 319.91999999999996, "text": " so do check it out so there is a nice connection with that family as well I"}, {"start": 319.91999999999996, "end": 322.96, "text": " don't have it here but yeah you can check it out in the video description"}, {"start": 322.96, "end": 329.96, "text": " and another thing I want to point out here is that let's let's see the like a"}, {"start": 329.96, "end": 334.0, "text": " difference at least on this block diagram the main difference between like"}, {"start": 334.0, "end": 337.88, "text": " all of these models and and diffusion models and you can see let me try and"}, {"start": 337.88, "end": 344.4, "text": " use my fancy new touchpad basically you can see that the latents here the"}, {"start": 344.4, "end": 349.84, "text": " latents here are completely like they are of the same dimensionality as the as"}, {"start": 349.84, "end": 354.8, "text": " the input image whereas usually when you have something like VAE so it's an"}, {"start": 354.8, "end": 359.68, "text": " autoencoder that means you're basically down you're down sampling you're you're"}, {"start": 359.68, "end": 364.76, "text": " reducing the dimensionality of your input data into some latent vector here"}, {"start": 364.76, "end": 369.48, "text": " usually denoted as Z and this one is way like lower dimensional compared to the"}, {"start": 369.48, "end": 374.59999999999997, "text": " input input data you're trying to model here on the other side you're you're"}, {"start": 374.59999999999997, "end": 381.36, "text": " basically you're basically keeping the dimensionality the same as the as your"}, {"start": 381.36, "end": 385.92, "text": " input and you can imagine that that can cause some problems training this model"}, {"start": 385.92, "end": 388.92, "text": " and sampling from this generative model etc but we're gonna see how people"}, {"start": 388.92, "end": 395.16, "text": " manage to to work their way around it kinda I say kinda because before we even"}, {"start": 395.16, "end": 398.16, "text": " dig into details let me just show you I mean there are obviously some"}, {"start": 398.16, "end": 402.36, "text": " limitations and they pointed those out somewhere here let me try and find"}, {"start": 402.36, "end": 408.24, "text": " limitations okay our unoptimized model takes 15 seconds to sample one image on"}, {"start": 408.24, "end": 414.04, "text": " a single a hundred GPU and we're going to understand why this is basically the"}, {"start": 414.04, "end": 419.16, "text": " reason is that we have to do like 20 forward passes in order to render an"}, {"start": 
419.16, "end": 424.24, "text": " image whereas using something like GAN or VAE you do a single forward pass and"}, {"start": 424.24, "end": 428.72, "text": " you've got yourself an image so that's that's a limitation to keep in mind and"}, {"start": 428.72, "end": 436.28000000000003, "text": " yeah so let's get back up here okay before I even start digging deeper into"}, {"start": 436.28000000000003, "end": 440.8, "text": " how diffusion models work which is going to be very ungrateful job for me to do"}, {"start": 440.8, "end": 444.24, "text": " because it'll probably take a whole video to do it so I'm gonna do a high"}, {"start": 444.24, "end": 449.44, "text": " level overview first let's see just the what the model can do on a high level"}, {"start": 449.44, "end": 453.88, "text": " what the model is capable of so here we can see some in painting capabilities"}, {"start": 453.88, "end": 458.32, "text": " and thing to keep in mind here is that they had to do some fine-tuning so this"}, {"start": 458.32, "end": 463.88, "text": " is not zero shot as as your Dali model or some of the other like images rendered"}, {"start": 463.88, "end": 468.6, "text": " in this paper this like model was fine-tuned for this precise task of in"}, {"start": 468.6, "end": 473.64000000000004, "text": " painting and you can see the awesome results so they they have this mask and"}, {"start": 473.64000000000004, "end": 478.48, "text": " the model inputs to the kind of I think that they feed that the three channeled"}, {"start": 478.48, "end": 482.64000000000004, "text": " RGB image they also feed this this mask as a separate channel and I think they"}, {"start": 482.64000000000004, "end": 485.20000000000005, "text": " have a couple more channels in that in painting model but that's not that"}, {"start": 485.20000000000005, "end": 489.40000000000003, "text": " important right now what's important is you can see that results are fairly"}, {"start": 489.40000000000003, "end": 494.28000000000003, "text": " stunning again here a girl hugging a corgi on a pedestal you can see it"}, {"start": 494.28, "end": 499.28, "text": " rendered corgi pretty awesome like just look at this this like hand wrapping"}, {"start": 499.28, "end": 504.55999999999995, "text": " around the body of this corgi I mean the results are if you if you told me like a"}, {"start": 504.55999999999995, "end": 509.35999999999996, "text": " couple of years ago that AI models will be able to do this I'd say it's it's"}, {"start": 509.35999999999996, "end": 513.52, "text": " like almost impossible and now I think people are just kind of used to AI being"}, {"start": 513.52, "end": 516.88, "text": " able to do this and we're setting even higher and higher goals and so now this"}, {"start": 516.88, "end": 521.4399999999999, "text": " is not like nothing special but yeah we need to appreciate the progress that"}, {"start": 521.44, "end": 525.6, "text": " happened since 2012 and of course earlier than that but yeah with the Alex"}, {"start": 525.6, "end": 530.24, "text": " net the things really started skyrocketing same thing here you can see"}, {"start": 530.24, "end": 535.0, "text": " the red hair you can see how the the ways of flowers appears I especially"}, {"start": 535.0, "end": 539.24, "text": " find this one fascinating the hat lies perfectly like the perspective the the"}, {"start": 539.24, "end": 543.0400000000001, "text": " color maybe the color is a bit brighter you can see the hopefully can see it on"}, {"start": 543.0400000000001, "end": 548.0, 
"text": " your screen the color is a bit brighter compared to this shirt here so yeah but"}, {"start": 548.0, "end": 553.72, "text": " still fairly stunning results and yeah small knit here glad stands for a guided"}, {"start": 553.72, "end": 557.92, "text": " language to image diffusion for generation and editing you can understand"}, {"start": 557.92, "end": 561.6, "text": " why that is we can both generate we can add it as you can see here we have"}, {"start": 561.6, "end": 564.76, "text": " language that's conditioned we are basically conditioning our our diffusion"}, {"start": 564.76, "end": 569.9, "text": " model on some language prompt and we basically generate images using the"}, {"start": 569.9, "end": 573.72, "text": " diffusion models so that's kind of trying to deconstruct that that complex"}, {"start": 573.72, "end": 579.0, "text": " sentence here are some work like awesome capabilities of Clyde model you can see"}, {"start": 579.0, "end": 585.6, "text": " here that first they kind of generate using the cozy living room they zero"}, {"start": 585.6, "end": 589.72, "text": " shot generate this image of the cozy living room and then they put a mouse"}, {"start": 589.72, "end": 594.48, "text": " like a mask here and then they say a painting of a corgi on the wall above a"}, {"start": 594.48, "end": 598.96, "text": " couch and here comes corgi again and then they put a mask again and then they"}, {"start": 598.96, "end": 602.84, "text": " say around coffee table in front of a couch and here comes the table and it's"}, {"start": 602.84, "end": 608.0, "text": " round and that's I mean fairly awesome and then again they put this mask and"}, {"start": 608.0, "end": 612.6, "text": " they see a vase of flowers on the coffee table and again we see the results"}, {"start": 612.6, "end": 616.88, "text": " rendered and finally here they say they put a mask here and they say a couch in"}, {"start": 616.88, "end": 620.96, "text": " the corner of her room and now I think this is a failure case or I may be"}, {"start": 620.96, "end": 624.88, "text": " missing something out but basically what happened is that we just got ourself a"}, {"start": 624.88, "end": 629.6800000000001, "text": " window here but I don't see like a couch in the corner of her room or maybe or"}, {"start": 629.68, "end": 634.7199999999999, "text": " maybe it kind of cropped this part so now this is this is like officially in"}, {"start": 634.7199999999999, "end": 638.56, "text": " the corner of a room although we don't see the wall here so but but anyways"}, {"start": 638.56, "end": 645.4, "text": " that's maybe me being too nitpicky here but yeah the results are very cool here"}, {"start": 645.4, "end": 649.3599999999999, "text": " this just shows some combination using this as the edit and here they kind of"}, {"start": 649.3599999999999, "end": 653.68, "text": " like instead of putting this generic mask with a generic shape they kind of"}, {"start": 653.68, "end": 658.8, "text": " help the model a bit more by using both this is kind of semantic segmentation in"}, {"start": 658.8, "end": 662.52, "text": " a way and they put a approximate shape and so it's easier to generate the"}, {"start": 662.52, "end": 666.4799999999999, "text": " picture here you can see I can see the results yourself here okay so that's"}, {"start": 666.4799999999999, "end": 673.16, "text": " what glide can do now let me try and demystify how this thing actually works"}, {"start": 673.16, "end": 679.04, "text": " and it's gonna be very very hard to 
try and explain how these diffusion models"}, {"start": 679.04, "end": 684.3199999999999, "text": " work very quickly but yeah let me try it if you think the explanation was not"}, {"start": 684.3199999999999, "end": 688.4399999999999, "text": " good enough I can maybe try just kind of upload my comment and I can I can create"}, {"start": 688.44, "end": 693.9200000000001, "text": " another video explaining just the diffusion models okay so the the process"}, {"start": 693.9200000000001, "end": 697.8000000000001, "text": " itself these diffusion models were inspired by some non-equilibrium"}, {"start": 697.8000000000001, "end": 702.44, "text": " statistical physics I know it sounds very very complicated because it is but"}, {"start": 702.44, "end": 708.84, "text": " like back in 2015 the first paper appeared and they managed to to basically"}, {"start": 708.84, "end": 713.44, "text": " originated the idea of the diffusion models but only in 2020 do we get the"}, {"start": 713.44, "end": 718.32, "text": " models to to work nicely to generate nice samples and to be like faster and"}, {"start": 718.32, "end": 724.72, "text": " yeah so here is the the main idea you have this diffusion process whereas you"}, {"start": 724.72, "end": 730.24, "text": " take your input image and what you do is you basically add add a certain amount"}, {"start": 730.24, "end": 734.2800000000001, "text": " of noise from a specific family like usually people use Gaussian because"}, {"start": 734.2800000000001, "end": 738.84, "text": " cautions are from this thing called exponential family which means they have"}, {"start": 738.84, "end": 742.2, "text": " all of these nice properties when you add up cautions when you do product with"}, {"start": 742.2, "end": 746.48, "text": " them you end up with a Gaussian like the structure of the Gaussian remains and"}, {"start": 746.48, "end": 750.08, "text": " it's easy to compute and that's why people are using Gossians all around"}, {"start": 750.08, "end": 755.08, "text": " like in VAE's as well for the for the priors etc etc so basically they have"}, {"start": 755.08, "end": 760.5600000000001, "text": " nice computational properties so we add up some noise and then we do we continue"}, {"start": 760.5600000000001, "end": 765.4000000000001, "text": " doing that until we get to like a picture which is basically pure Gaussian"}, {"start": 765.4000000000001, "end": 770.48, "text": " noise now wouldn't it be nice if we could reverse the process and starting"}, {"start": 770.48, "end": 774.96, "text": " from a Gaussian noise so just sampling this Gaussian noise picture reverse the"}, {"start": 774.96, "end": 778.8000000000001, "text": " process and generate our data so that would be cool so again we have this"}, {"start": 778.8000000000001, "end": 784.9200000000001, "text": " conditional distribution so X of t conditioned on the X of t minus 1 which"}, {"start": 784.9200000000001, "end": 790.44, "text": " is timestamp the previous timestamp and we basically know that because that's"}, {"start": 790.44, "end": 795.0600000000001, "text": " described by this formula here and I'm gonna quickly explain how this thing"}, {"start": 795.06, "end": 800.7199999999999, "text": " works in a second and then what's problematic is that we cannot it's not easy to"}, {"start": 800.7199999999999, "end": 805.5999999999999, "text": " figure out the reverse distribution here so so basically not a reverse"}, {"start": 805.5999999999999, "end": 808.92, "text": " distribution by this conditional 
distribution where you're trying to"}, {"start": 808.92, "end": 813.5999999999999, "text": " figure out the t minus 1 condition on the X of t so you're trying to denoise"}, {"start": 813.5999999999999, "end": 819.3599999999999, "text": " the image here so what diffusion models are trying to do is construct this p of"}, {"start": 819.36, "end": 825.4, "text": " theta which is going to be a neural network in practice to try and model"}, {"start": 825.4, "end": 828.92, "text": " this this unknown distribution here and we're gonna see a couple of nice"}, {"start": 828.92, "end": 833.8000000000001, "text": " properties of this distribution which will allow us to actually do that and"}, {"start": 833.8000000000001, "end": 839.8000000000001, "text": " okay before we get there let me try and dissect this this formula here so we can"}, {"start": 839.8000000000001, "end": 845.32, "text": " see that the notation is kind of cringy I love this notation better when you have"}, {"start": 845.32, "end": 851.5200000000001, "text": " like for example let me just take the touchpad here so I like it better when"}, {"start": 851.5200000000001, "end": 856.0400000000001, "text": " you have something like X of t and then you're sampling from the normal"}, {"start": 856.0400000000001, "end": 859.48, "text": " distribution or whatever distribution like so like having this squiggly thing"}, {"start": 859.48, "end": 863.5600000000001, "text": " but yeah just be aware how this notation works this just means we're sampling X"}, {"start": 863.5600000000001, "end": 870.2, "text": " of t from like the Gaussian distribution denoted by n and this is the mu so this"}, {"start": 870.2, "end": 874.8000000000001, "text": " thing here this part here is the mu so that's the mean and this thing here is"}, {"start": 874.8, "end": 880.16, "text": " just the covariance matrix okay so let's try and dissect this so we have X of t"}, {"start": 880.16, "end": 886.0, "text": " minus 1 multiplied by this thing here and beta t is something called I think"}, {"start": 886.0, "end": 889.0799999999999, "text": " it's called noise level something I forgot the exact name but whatever"}, {"start": 889.0799999999999, "end": 893.16, "text": " basically it's between 0 and 1 which means that this thing here under"}, {"start": 893.16, "end": 897.92, "text": " square root is gonna be less than 1 and that means that we're gonna"}, {"start": 897.92, "end": 902.4399999999999, "text": " contract this X of t minus 1 okay so let me try and visualize this thing right"}, {"start": 902.44, "end": 909.96, "text": " here okay so we have and for the sake of clarity I'm gonna assume that X of t"}, {"start": 909.96, "end": 914.4000000000001, "text": " minus 1 which is our image is actually only three dimensional data point so"}, {"start": 914.4000000000001, "end": 917.8800000000001, "text": " that I can actually visualize what's happening otherwise if this is thousand"}, {"start": 917.8800000000001, "end": 921.08, "text": " by thousand that's million dimensions which means I cannot draw a million"}, {"start": 921.08, "end": 926.7600000000001, "text": " dimensions on my one node screen here so here is the coordinate system so let's"}, {"start": 926.7600000000001, "end": 931.9200000000001, "text": " say that X of t minus 1 is somewhere here and so this is the origin point"}, {"start": 931.92, "end": 937.16, "text": " here it's kind of let me kind of try and connect it here so what this thing here"}, {"start": 937.16, "end": 942.76, "text": " does is it's contracting 
because it's more than one which means then the mu"}, {"start": 942.76, "end": 946.56, "text": " the mean of this distribution let me take a different color here let me try"}, {"start": 946.56, "end": 951.24, "text": " and take the green one it's gonna kind of push this point from here all the way"}, {"start": 951.24, "end": 955.36, "text": " to somewhere here okay so this is gonna be whoops it's gonna be well I'm still"}, {"start": 955.36, "end": 959.4, "text": " trying to get used to this novel touchpad okay so we have the point now"}, {"start": 959.4, "end": 964.8, "text": " here and now this beta is gonna be we're gonna have a diagonal I guess"}, {"start": 964.8, "end": 968.9399999999999, "text": " isotropic covariance matrix which means we're gonna have I'm gonna denote that"}, {"start": 968.9399999999999, "end": 972.4599999999999, "text": " as some small circle around this point so we're gonna have some Gaussian"}, {"start": 972.4599999999999, "end": 977.1999999999999, "text": " distribution centered around this point here and we're gonna sample our X of t"}, {"start": 977.1999999999999, "end": 982.9599999999999, "text": " from that distribution here and so let's say we sample it let me try and change"}, {"start": 982.9599999999999, "end": 986.92, "text": " the color here let me take the blue color so let's say we sample data point"}, {"start": 986.92, "end": 990.88, "text": " here so that's gonna be our so we started with with here so we had a X t"}, {"start": 990.88, "end": 995.5999999999999, "text": " minus one now we are here so this is visually representing in this 3d space"}, {"start": 995.5999999999999, "end": 999.56, "text": " what's happening in this image space where we're getting more noisy and noisy"}, {"start": 999.56, "end": 1003.92, "text": " images so maybe from this image we got this one although this will take more"}, {"start": 1003.92, "end": 1007.4399999999999, "text": " steps obviously but yeah you get a point now let's see what happens in the limit"}, {"start": 1007.4399999999999, "end": 1012.52, "text": " the thing with these betas is that in the original paper in I think in the"}, {"start": 1012.52, "end": 1018.6, "text": " paper from 2020 they were not learnable they just took and created a schedule"}, {"start": 1018.6, "end": 1022.04, "text": " that started with some larger betas which are still smaller than one and"}, {"start": 1022.04, "end": 1027.6, "text": " then slowly anneal them towards zero okay linearly I think so we had a linear"}, {"start": 1027.6, "end": 1033.96, "text": " schedule and that means when beta t gets down to zero what happens actually it's"}, {"start": 1033.96, "end": 1038.32, "text": " the other way around it starts with very small because we want to have like"}, {"start": 1038.32, "end": 1043.04, "text": " lesser noise so it starts with zero and then it's a meal towards one okay that"}, {"start": 1043.04, "end": 1046.3999999999999, "text": " will make more sense so what happens when it gets to one that means this is"}, {"start": 1046.3999999999999, "end": 1050.8, "text": " going to zero so this thing here let me change the color so I'm gonna take the"}, {"start": 1050.8, "end": 1055.2, "text": " red pen here and so basically this thing is gonna be a kneeled it's gonna go to"}, {"start": 1055.2, "end": 1061.36, "text": " zero and that means that this thing here is gonna be contracted all the way down"}, {"start": 1061.36, "end": 1067.96, "text": " here until we get to the origin point so that means that after a lot of steps"}, 
{"start": 1067.96, "end": 1074.72, "text": " our data points are gonna be basically drawn from from this from this like"}, {"start": 1074.72, "end": 1080.2, "text": " distribution which is basically your your standard Gaussian distribution the"}, {"start": 1080.2, "end": 1084.5, "text": " normal distribution because we'll have a mu of zero because this is zero so that"}, {"start": 1084.5, "end": 1089.6000000000001, "text": " means this thing is gonna be zero and this thing is gonna be one which means"}, {"start": 1089.6000000000001, "end": 1093.4, "text": " we have identity matrix and we are basically sampling from a noise pure"}, {"start": 1093.4, "end": 1096.48, "text": " noise distribution and that's what I explained here so that that's basically"}, {"start": 1096.48, "end": 1101.3600000000001, "text": " this image here so that's at the limit of this diffusion process now you might"}, {"start": 1101.3600000000001, "end": 1107.3600000000001, "text": " think that this would be very slow if we had to sample like if we had to sample"}, {"start": 1107.3600000000001, "end": 1113.28, "text": " and get X 100 we'd have to repeat this sampling procedure hundred times but"}, {"start": 1113.28, "end": 1118.48, "text": " that's not the case luckily for us because you can you can form you can"}, {"start": 1118.48, "end": 1122.8, "text": " because you can see here Q X of T conditions of X of zero say X of zero is"}, {"start": 1122.8, "end": 1126.16, "text": " the original image from our data set the actual images we have in our data set"}, {"start": 1126.16, "end": 1131.68, "text": " we can form like a distribution so that we can directly sample and get and kind"}, {"start": 1131.68, "end": 1137.2, "text": " of skip these intermediate steps and like get the X 100 or X 200 from a"}, {"start": 1137.2, "end": 1144.0400000000002, "text": " single from a single like a step and you can see it here again these alpha T with"}, {"start": 1144.0400000000002, "end": 1149.44, "text": " this hat thing is a product of alpha of these coefficients here and they are 1"}, {"start": 1149.44, "end": 1153.28, "text": " minus beta and you can imagine because all of these are smaller than 1 when you"}, {"start": 1153.28, "end": 1156.36, "text": " multiple a bunch of numbers that are smaller than 1 this thing is going to"}, {"start": 1156.36, "end": 1160.6399999999999, "text": " converge in the limit it's going to go down to zero to zero okay it's going to"}, {"start": 1160.6399999999999, "end": 1166.04, "text": " down to zero so that means what we end up with is nothing here and we end up"}, {"start": 1166.04, "end": 1170.8, "text": " with one here which means this thing is going to be equal to this small epsilon"}, {"start": 1170.8, "end": 1175.52, "text": " here which is as you can see here the normal distribution so that's what"}, {"start": 1175.52, "end": 1180.8, "text": " happens again in the in the limit of this diffusion process now this is very"}, {"start": 1180.8, "end": 1187.2, "text": " important they state here and this is this is actually a conclusion from 1949"}, {"start": 1187.2, "end": 1192.44, "text": " I think they say note that Q so this is conditional distribution we are we're"}, {"start": 1192.44, "end": 1198.12, "text": " trying to model approaches a diagonal Gaussian distribution as capital T"}, {"start": 1198.12, "end": 1201.68, "text": " approaches infinity and this is the number of steps in our diffusion"}, {"start": 1201.68, "end": 1206.44, "text": " process in correspondingly this this noise 
level in the limit goes down to"}, {"start": 1206.44, "end": 1211.2, "text": " zero so it's sufficient to train a neural network to predict a mean mu"}, {"start": 1211.2, "end": 1218.0800000000002, "text": " theta and a diagonal covariance matrix Sigma theta and so this will be the this"}, {"start": 1218.0800000000002, "end": 1223.24, "text": " is this thing here is the distribution we're trying to learn in order to model"}, {"start": 1223.24, "end": 1228.72, "text": " the actual the true distribution here so this one the unknown one okay and the"}, {"start": 1228.72, "end": 1234.48, "text": " reason why it's possible is because we know that under these conditions the"}, {"start": 1234.48, "end": 1239.16, "text": " actual true distribution is going to have that same shape and then we can"}, {"start": 1239.16, "end": 1243.24, "text": " just kind of learn and kind of fit this caution on top on top of that"}, {"start": 1243.24, "end": 1249.32, "text": " distribution now if you're familiar with VA is this may be this whole like a"}, {"start": 1249.32, "end": 1256.2, "text": " diagram here may be familiar and if I were to swap if I were to swap this and"}, {"start": 1256.2, "end": 1262.16, "text": " kind of change it to and denote it as Z which is usual notation VA is because X"}, {"start": 1262.16, "end": 1267.28, "text": " one through T are the latents remember I mentioned that the latents here in this"}, {"start": 1267.28, "end": 1271.8400000000001, "text": " model are of the same dimension so these are the latents these noisy images here"}, {"start": 1271.8400000000001, "end": 1278.0, "text": " are the latents of our of our image here and basically if I were to denote these"}, {"start": 1278.0, "end": 1282.8000000000002, "text": " as Z so this one here as Z you may notice this formula from the VA is"}, {"start": 1282.8000000000002, "end": 1286.76, "text": " construction and here we're just trying to like construct something that's"}, {"start": 1286.76, "end": 1291.3200000000002, "text": " called evidence lower bound and it's fairly self-explanatory although it may"}, {"start": 1291.32, "end": 1295.1599999999999, "text": " be super confusing when you hear it the first time so why is that why is it"}, {"start": 1295.1599999999999, "end": 1299.1599999999999, "text": " called elbow or evidence lower bound so first we have this thing here is called"}, {"start": 1299.1599999999999, "end": 1303.8799999999999, "text": " evidence so this is evidence or I think you kept include the log one so this"}, {"start": 1303.8799999999999, "end": 1307.24, "text": " basically you want to maximize this you want to have your data points from your"}, {"start": 1307.24, "end": 1313.52, "text": " data set have high probability under your fitted distributions under your"}, {"start": 1313.52, "end": 1318.4399999999998, "text": " learned distribution P of theta so that's what machine learning is models"}, {"start": 1318.44, "end": 1322.92, "text": " are oftentimes trying to do we're trying to maximize the probability of the data"}, {"start": 1322.92, "end": 1328.64, "text": " points under our our model okay now because log is monotonic it actually"}, {"start": 1328.64, "end": 1332.6000000000001, "text": " does not matter whether you're optimizing you're not going to change"}, {"start": 1332.6000000000001, "end": 1335.6000000000001, "text": " the loss function by adding a log you're gonna change some computational"}, {"start": 1335.6000000000001, "end": 1339.16, "text": " properties and the convergence speed etc so it's 
kind of usually needs to have"}, {"start": 1339.16, "end": 1344.2, "text": " log inside but so if we have this why don't we try and maximize it directly"}, {"start": 1344.2, "end": 1347.6000000000001, "text": " well the problem is it's intractable so you can try and break this thing down"}, {"start": 1347.6, "end": 1351.04, "text": " and you end up with something with a complex integral that looks something"}, {"start": 1351.04, "end": 1360.3999999999999, "text": " like this we have these P theta effects zero and we'll have the prior so the"}, {"start": 1360.3999999999999, "end": 1365.32, "text": " probability of the actual latent and we'll have to basically integrate all"}, {"start": 1365.32, "end": 1369.52, "text": " over all of those latents as you can see here we have to sample the latents to"}, {"start": 1369.52, "end": 1374.7199999999998, "text": " Z the Zets we have to have their probabilities we have to to construct"}, {"start": 1374.72, "end": 1379.04, "text": " this to basically compute this integral which is going to be it's simply"}, {"start": 1379.04, "end": 1382.76, "text": " intractable so we cannot directly compute this thing here so this is equal"}, {"start": 1382.76, "end": 1389.08, "text": " to the probability theta X zero you'd have to do this thing here in order to"}, {"start": 1389.08, "end": 1393.08, "text": " get this this probability here so that's that's out of the question we can do"}, {"start": 1393.08, "end": 1396.96, "text": " that so that's why people are constructing these lower bounds and"}, {"start": 1396.96, "end": 1403.0, "text": " basically this is trivial by definition because KL divergence is always greater"}, {"start": 1403.0, "end": 1407.84, "text": " or equal than zero it's equal only in the case when the two distributions"}, {"start": 1407.84, "end": 1413.04, "text": " perfectly match which we're trying to do by training these beta t to match the"}, {"start": 1413.04, "end": 1419.36, "text": " the P distribution to to match the real Q distribution so after doing some"}, {"start": 1419.36, "end": 1424.12, "text": " algebra here I'm not gonna dig into a lot of details we end up with this"}, {"start": 1424.12, "end": 1430.6, "text": " formula here and this is what we call the the elbow the like like variance"}, {"start": 1430.6, "end": 1437.56, "text": " lower bound like loss and you can see here that basically if we tried and"}, {"start": 1437.56, "end": 1444.28, "text": " minimized this thing we know that it's gonna be that the actual thing we care"}, {"start": 1444.28, "end": 1449.56, "text": " about is gonna be even lower so what's called lower bound if it's actually"}, {"start": 1449.56, "end": 1452.84, "text": " higher because usually you don't have the minus and then you're trying to"}, {"start": 1452.84, "end": 1457.84, "text": " maximize this thing and this thing is gonna be smaller or equal and that means"}, {"start": 1457.84, "end": 1461.6, "text": " as you're maximizing it you know that the actual thing you care about which is"}, {"start": 1461.6, "end": 1467.32, "text": " the the the evidence here so this thing here is gonna be at least as big as"}, {"start": 1467.32, "end": 1471.56, "text": " your loss so this is also called surrogate loss because you're trying to"}, {"start": 1471.56, "end": 1475.84, "text": " find different loss it's kind of going to bound your actual thing you care"}, {"start": 1475.84, "end": 1483.36, "text": " about okay so long story short what happens is they kind of further have to"}, {"start": 1483.36, "end": 
1487.1599999999999, "text": " break down this this equation so that it's actually tractable and computable"}, {"start": 1487.16, "end": 1492.16, "text": " so they just end up in a sum of various scale divergences and at the end they"}, {"start": 1492.16, "end": 1498.48, "text": " end up with a simple MSC loss between the the the mu theta so that's the the"}, {"start": 1498.48, "end": 1502.6000000000001, "text": " thing we're trying to learn here the mu theta of our P theta distribution and"}, {"start": 1502.6000000000001, "end": 1506.8000000000002, "text": " they just need to do MSC loss but what happens is they actually instead of"}, {"start": 1506.8000000000002, "end": 1512.0, "text": " figuring out the mu they actually just figure out the noise level that happens"}, {"start": 1512.0, "end": 1515.68, "text": " in that happened in a specific step and let me try and break that down in a"}, {"start": 1515.68, "end": 1520.44, "text": " second oops by the way I realized I forgot to put the condition this on that"}, {"start": 1520.44, "end": 1526.1200000000001, "text": " so you have to condition so it's P of X zero given that and we see we look at the"}, {"start": 1526.1200000000001, "end": 1529.24, "text": " probability of that so what's the probability of that and then what's the"}, {"start": 1529.24, "end": 1536.3200000000002, "text": " probability of this X zero being so what's the probability of X zero given"}, {"start": 1536.3200000000002, "end": 1539.52, "text": " that that so we have to integrate all of that to get the actual probability of X"}, {"start": 1539.52, "end": 1542.48, "text": " zero so it's a small mistake hopefully you can see what's happening there"}, {"start": 1542.48, "end": 1547.0, "text": " anyways on the high level just it's hard it's intractable that's everything I"}, {"start": 1547.0, "end": 1550.4, "text": " wanted to keep in mind that's it you don't have to know it is the probability"}, {"start": 1550.4, "end": 1556.16, "text": " details here okay so let me try and break this down why why they're why"}, {"start": 1556.16, "end": 1560.44, "text": " they're what they're doing here so instead of figuring out mu so that means"}, {"start": 1560.44, "end": 1564.76, "text": " instead of let's let's say let me go back to this picture here so let's say"}, {"start": 1564.76, "end": 1574.32, "text": " this is X of this is X of t and this was X of t minus 1 this point here was X of"}, {"start": 1574.32, "end": 1580.4, "text": " t minus 1 this one is X of t so instead of trying to predict given X of t trying"}, {"start": 1580.4, "end": 1584.84, "text": " to predict this this mu here so that that's the point from where we"}, {"start": 1584.84, "end": 1589.36, "text": " generated in the first place this point X of t instead of doing that they figure"}, {"start": 1589.36, "end": 1592.8, "text": " out the noise which is the same thing in the end we're gonna see that but it's up"}, {"start": 1592.8, "end": 1597.3999999999999, "text": " it's a bit different so let's say we had an image here so let's say we had an"}, {"start": 1597.3999999999999, "end": 1602.04, "text": " image here and it had some amount of noise like it had a couple of noise"}, {"start": 1602.04, "end": 1605.76, "text": " points here I'm gonna reduce the amount of noise obviously to make it easier to"}, {"start": 1605.76, "end": 1612.24, "text": " explain so we want to basically from there on we want to understand which"}, {"start": 1612.24, "end": 1619.8, "text": " portions of the noise were added going from step 
t minus 1 to step t we want"}, {"start": 1619.8, "end": 1625.3999999999999, "text": " to understand which of these noise points were added so maybe we figure out"}, {"start": 1625.3999999999999, "end": 1629.28, "text": " okay these were the noise points that was the noise that was added and"}, {"start": 1629.28, "end": 1634.76, "text": " basically we end up with something like this we end up with a image here that"}, {"start": 1634.76, "end": 1639.56, "text": " has these two but we we cannot get got rid of this of this noise here and you"}, {"start": 1639.56, "end": 1642.72, "text": " can imagine by doing this we are reducing the noise and at the end we're"}, {"start": 1642.72, "end": 1647.12, "text": " gonna end up with the final picture that's noise free image from our data"}, {"start": 1647.12, "end": 1650.0, "text": " distribution and that's awesome so that's what's happening they're trying"}, {"start": 1650.0, "end": 1653.76, "text": " to reduce and understand what was the noise that was added between the two"}, {"start": 1653.76, "end": 1658.1599999999999, "text": " steps that's that's everything I want you to understand here and that's what"}, {"start": 1658.1599999999999, "end": 1662.9199999999998, "text": " they do exactly here so you can see here we have this L simple so the simple loss"}, {"start": 1662.9199999999998, "end": 1667.04, "text": " because they had some simplifications here but this turned out to work quite"}, {"start": 1667.04, "end": 1672.9599999999998, "text": " nice empirically so let's see how this L simple works so we have expectation over"}, {"start": 1672.96, "end": 1678.04, "text": " the time step goes from 1 to capital T which is the size of our diffusion chain"}, {"start": 1678.04, "end": 1683.2, "text": " let's call it that way we are sampling X 0 from our from the true distribution so"}, {"start": 1683.2, "end": 1686.78, "text": " this is a fancy way of saying we take a data sample we take an image from our"}, {"start": 1686.78, "end": 1692.8, "text": " data set and we sample epsilon from the like a normal like Gaussian distribution"}, {"start": 1692.8, "end": 1697.64, "text": " right here okay so we're gonna what happens in practice how this model is"}, {"start": 1697.64, "end": 1701.3600000000001, "text": " going to be trained is the following let me let me change the color here we're"}, {"start": 1701.36, "end": 1706.36, "text": " gonna sample X 0 which means we take so we sample let me do it like this oops so"}, {"start": 1706.36, "end": 1712.0, "text": " we sample X 0 here so that means we take an image then we sample T so that"}, {"start": 1712.0, "end": 1718.32, "text": " mean maybe hundred in one single like training pass so we have we have hundred"}, {"start": 1718.32, "end": 1722.9599999999998, "text": " we have a specific image and because we have this specific we have this this"}, {"start": 1722.9599999999998, "end": 1727.52, "text": " cool thing that we got here that means we can kind of jump over we don't have"}, {"start": 1727.52, "end": 1732.76, "text": " to do hundred sampling steps we can just directly calculate the X of t because of"}, {"start": 1732.76, "end": 1738.08, "text": " this formula here so that means now we have X of t okay so now we have X of t"}, {"start": 1738.08, "end": 1743.96, "text": " which means we can feed it here we can feed X of t into our epsilon theta model"}, {"start": 1743.96, "end": 1749.24, "text": " we can also feed the timestamp and how do you feed a scalar you may ask into"}, {"start": 1749.24, "end": 
1753.52, "text": " neural network well please remember transformers you're basically what"}, {"start": 1753.52, "end": 1757.48, "text": " happens they're using the same thing as with transformers using these basically"}, {"start": 1757.48, "end": 1763.16, "text": " Fourier basically these sinusoids and the index into a table of precomputed"}, {"start": 1763.16, "end": 1767.56, "text": " sinusoids and they just take a single row of there and they kind of append it"}, {"start": 1767.56, "end": 1772.1200000000001, "text": " and add it to the model that's that's how you convert scalar into into into"}, {"start": 1772.1200000000001, "end": 1775.76, "text": " vector check out my video on the transformer paper if you didn't"}, {"start": 1775.76, "end": 1779.1200000000001, "text": " understand that and now you're just trying to regress this this epsilon"}, {"start": 1779.1200000000001, "end": 1785.1200000000001, "text": " that's it so that's how the whole procedure works and once you train the"}, {"start": 1785.12, "end": 1789.6, "text": " model once you have this epsilon theta model trained how you actually generate"}, {"start": 1789.6, "end": 1795.28, "text": " the mu theta is using this formula which kind of is simple after a couple of"}, {"start": 1795.28, "end": 1798.32, "text": " simple algebraic manipulations you end up with this formula and now you can"}, {"start": 1798.32, "end": 1803.0, "text": " just input X of t input the predicted epsilon theta and you end up with the"}, {"start": 1803.0, "end": 1808.9599999999998, "text": " mu theta which is the mu theta for the X of t minus 1 so that's the the the"}, {"start": 1808.96, "end": 1815.64, "text": " mean from where the less noisy version of the image came from so remember"}, {"start": 1815.64, "end": 1821.8400000000001, "text": " going to our visualization here that means we found a mu up the diffusion"}, {"start": 1821.8400000000001, "end": 1827.6000000000001, "text": " chain where the image was less noisy okay that was my best attempt at trying"}, {"start": 1827.6000000000001, "end": 1832.72, "text": " to make these diffusion models a bit more clear it took me a couple of days"}, {"start": 1832.72, "end": 1837.16, "text": " to wrap my head around this I had to read four papers I had to read a couple"}, {"start": 1837.16, "end": 1841.8400000000001, "text": " of blocks and I'm still until you start and until you try and implement this"}, {"start": 1841.8400000000001, "end": 1845.52, "text": " thing yourself you're gonna miss like out on the details but the high level"}, {"start": 1845.52, "end": 1851.68, "text": " pictures here you have this process of adding noise you have this ability to"}, {"start": 1851.68, "end": 1857.24, "text": " reverse the process because of this nice property that the the actual"}, {"start": 1857.24, "end": 1864.0600000000002, "text": " conditional distribution so the X t condition X t minus 1 sorry the the X t"}, {"start": 1864.06, "end": 1870.32, "text": " minus 1 condition X t is also of the Gaussian form and that makes life a lot"}, {"start": 1870.32, "end": 1875.44, "text": " easier and yeah basically you're trying to figure out what part of the current"}, {"start": 1875.44, "end": 1879.12, "text": " image is the actual noise and that's how you're denoising with every single step"}, {"start": 1879.12, "end": 1883.56, "text": " and finally you end up with the image from your probability distribution"}, {"start": 1883.56, "end": 1887.6, "text": " which is hopefully as close as possible to the actual data 
distribution if you"}, {"start": 1887.6, "end": 1892.0, "text": " train your model correctly and that was the the hardest part to understand about"}, {"start": 1892.0, "end": 1897.16, "text": " this paper now after you know that now let's let's see a couple more details"}, {"start": 1897.16, "end": 1902.52, "text": " there's some guidance they're using in order to to to be able to not just"}, {"start": 1902.52, "end": 1907.32, "text": " generate unconditional images but take the piece of text like I as I showed you"}, {"start": 1907.32, "end": 1910.66, "text": " here you take a piece of text you condition the model additionally and then"}, {"start": 1910.66, "end": 1916.52, "text": " you generate an image and not just generate an image so here let me let me"}, {"start": 1916.52, "end": 1922.32, "text": " kind of dig in deeper here so for our main experiments we train a 3.5 billion"}, {"start": 1922.32, "end": 1927.32, "text": " parameter text conditional diffusion model at 64 times 64 resolution and"}, {"start": 1927.32, "end": 1931.8799999999999, "text": " another 1.5 billion parameter text conditional up sampling diffusion model"}, {"start": 1931.8799999999999, "end": 1936.36, "text": " to increase the resolution to 256 256 for clip guidance we also train noise"}, {"start": 1936.36, "end": 1940.48, "text": " aware blah blah blah VAT blah blah blah that's not that important what's"}, {"start": 1940.48, "end": 1945.24, "text": " probably worth mentioning here is that it's noise aware which means you have"}, {"start": 1945.24, "end": 1950.64, "text": " you cannot take an off-the-shelf VAT you have to retrain it with noisy images"}, {"start": 1950.64, "end": 1955.6, "text": " noisy samples to get the correct results more important than that how do you"}, {"start": 1955.6, "end": 1959.08, "text": " condition on text and the answer is simple transformers so to condition on"}, {"start": 1959.08, "end": 1963.1200000000001, "text": " the text we first encode it into a sequence of K tokens and feed these"}, {"start": 1963.1200000000001, "end": 1967.28, "text": " tokens into a transformer so then they say the output of the transformer is"}, {"start": 1967.28, "end": 1971.48, "text": " used in two ways they first the final token embedding is used in place of a"}, {"start": 1971.48, "end": 1974.84, "text": " class embedding in the ADM model which is the diffusion model I was just"}, {"start": 1974.84, "end": 1979.4399999999998, "text": " explaining second the last layer of token embeddings a sequence of K feature"}, {"start": 1979.4399999999998, "end": 1983.08, "text": " vectors is separately projected to the dimensionality of each attention layer"}, {"start": 1983.08, "end": 1988.8, "text": " throughout the ADM model basically in this is very architecture specific what"}, {"start": 1988.8, "end": 1995.08, "text": " they do is they take the output of the transform model so again we have a text"}, {"start": 1995.08, "end": 2000.12, "text": " here we have some text here like I don't know whatever some text and then you"}, {"start": 2000.12, "end": 2005.2399999999998, "text": " pass that through a transformer the almighty transformer here so that's T"}, {"start": 2005.2399999999998, "end": 2010.0, "text": " for transformer outcome the feature vectors and you basically use these"}, {"start": 2010.0, "end": 2014.4799999999998, "text": " feature vectors and you input them in a smart way into the diffusion model so"}, {"start": 2014.4799999999998, "end": 2019.36, "text": " this is the diffusion model let me 
kind of draw a box here so something like"}, {"start": 2019.36, "end": 2025.9599999999998, "text": " this diffusion model you condition D on on these feature vectors and that's the"}, {"start": 2025.96, "end": 2030.16, "text": " final pipeline that's how the conditional thing works so the training itself"}, {"start": 2030.16, "end": 2035.08, "text": " remains the same you still have you still have this diffusion thing you just"}, {"start": 2035.08, "end": 2039.6000000000001, "text": " now you'll at all times you'll have the captions you'll have the associated"}, {"start": 2039.6000000000001, "end": 2042.92, "text": " caption with each of these images you have a caption here you have a caption"}, {"start": 2042.92, "end": 2047.3600000000001, "text": " here which will be the same throughout this whole chain and basically you're"}, {"start": 2047.3600000000001, "end": 2052.8, "text": " gonna use that information additionally and train the the the yeah the the text"}, {"start": 2052.8, "end": 2056.76, "text": " condition version of this of this model okay so what turned out to be the case"}, {"start": 2056.76, "end": 2061.0800000000004, "text": " is that you can you can even further improve the results by having a"}, {"start": 2061.0800000000004, "end": 2067.36, "text": " dedicated classifier basically manipulate your your your your training"}, {"start": 2067.36, "end": 2072.28, "text": " procedure here and let me try and explain how that works so for for a"}, {"start": 2072.28, "end": 2077.4, "text": " moment forget that we are working with text and think of this like amnest we"}, {"start": 2077.4, "end": 2082.8, "text": " are trying to just let's say we have ten classes and we still have passing the"}, {"start": 2082.8, "end": 2086.84, "text": " text through transformer we just take up I don't know like a one hot encoding of"}, {"start": 2086.84, "end": 2092.56, "text": " our class and we input that instead of to condition to condition the model so"}, {"start": 2092.56, "end": 2097.12, "text": " what I do is they take the that they said they basically take the data set"}, {"start": 2097.12, "end": 2101.2000000000003, "text": " and they train a classifier so they have a classifier here like I just you know"}, {"start": 2101.2000000000003, "end": 2103.76, "text": " this see that's gonna be probably some convolutional neural network or"}, {"start": 2103.76, "end": 2109.2400000000002, "text": " something or VAT I know and you take the noisy versions of your sample and you"}, {"start": 2109.2400000000002, "end": 2113.48, "text": " teach this model to predict the class you teach you to predict the correct"}, {"start": 2113.48, "end": 2118.36, "text": " class like maybe it's gonna be C1 like dog or whatnot and now because this"}, {"start": 2118.36, "end": 2124.0400000000004, "text": " model knows how to given an image it can predict the correct class now you can"}, {"start": 2124.0400000000004, "end": 2130.4, "text": " use that model to guide the the training procedure and improve the samples we've"}, {"start": 2130.4, "end": 2134.56, "text": " seen this with clip guided VQ VA models and some other generative models"}, {"start": 2134.56, "end": 2139.96, "text": " basically you can see that all throughout Twitter yeah nowadays but yeah in a"}, {"start": 2139.96, "end": 2144.2000000000003, "text": " nutshell what happens how this is implemented is we are going to remember"}, {"start": 2144.2000000000003, "end": 2147.48, "text": " we're trying to predict the mu we're just gonna have some some 
scalar"}, {"start": 2147.48, "end": 2152.52, "text": " coefficient here which is basically an amplify the influence of the gradients"}, {"start": 2152.52, "end": 2158.7200000000003, "text": " with respect to the input image of the log of p phi y condition on XT so this"}, {"start": 2158.72, "end": 2163.2, "text": " is looks very complicated but this is basically our classifier so this is"}, {"start": 2163.2, "end": 2166.9199999999996, "text": " basically our classifier and we're trying to so what happens here on a"}, {"start": 2166.9199999999996, "end": 2172.0, "text": " semantic level is that hey we know because we have the gradient we know how"}, {"start": 2172.0, "end": 2179.24, "text": " to tweak X of t how to change the pixels of X of t in order to maximize the the"}, {"start": 2179.24, "end": 2183.08, "text": " the probability of the class so that means so that's something you can see"}, {"start": 2183.08, "end": 2185.9599999999996, "text": " in deep dream you can check out deep dreams we're not familiar with it but"}, {"start": 2185.96, "end": 2189.44, "text": " basically let's say you haven't like let me just kind of draw an image here so"}, {"start": 2189.44, "end": 2194.36, "text": " you have an image here and you'll know how to tweak the particular pixels so"}, {"start": 2194.36, "end": 2199.04, "text": " that a particular class here let's let me denote it as a like a softmax"}, {"start": 2199.04, "end": 2204.12, "text": " distribution or something like discrete one or whatnot so you know this is the"}, {"start": 2204.12, "end": 2209.16, "text": " correct class for example just change the color so you know this is a correct"}, {"start": 2209.16, "end": 2216.3999999999996, "text": " class and what you do is you tweak this so that you kind of boost that particular"}, {"start": 2216.3999999999996, "end": 2221.16, "text": " class that's the whole point that's what gradients are there for and because you"}, {"start": 2221.16, "end": 2226.3199999999997, "text": " have that information you can use that to tweak the mu so if we were to get"}, {"start": 2226.3199999999997, "end": 2233.0, "text": " back to the drawing I showed you up here let me just try and find it here that"}, {"start": 2233.0, "end": 2239.16, "text": " means instead of this time instead of going right so instead of going from from"}, {"start": 2239.16, "end": 2244.48, "text": " from here to here the classifier is smart enough to tell you hey if you want"}, {"start": 2244.48, "end": 2249.16, "text": " to create a class of a dog instead of going here so instead of going from here"}, {"start": 2249.16, "end": 2254.96, "text": " to here it's better that you move a little bit towards here and so this is"}, {"start": 2254.96, "end": 2259.8, "text": " the better path according to the classifier so this thing here it's gonna"}, {"start": 2259.8, "end": 2263.8, "text": " be a better class and yeah I mean that's that's what happened that's what the"}, {"start": 2263.8, "end": 2267.48, "text": " formula is telling us I didn't explain you how the actual formula was derived"}, {"start": 2267.48, "end": 2273.0800000000004, "text": " but yeah that's gonna be enough for now so that's one way of how they are"}, {"start": 2273.0800000000004, "end": 2279.84, "text": " conditioning the guiding the generation process of the for the particular class"}, {"start": 2279.84, "end": 2285.4, "text": " and there is another thing they do is classifier free guidance the idea here"}, {"start": 2285.4, "end": 2289.2400000000002, "text": " is not to 
have a dedicated classifier because you can imagine it's very"}, {"start": 2289.24, "end": 2293.72, "text": " expensive to train a separate dedicated classifier on the noisy samples this"}, {"start": 2293.72, "end": 2297.3199999999997, "text": " time so you cannot take an off-the-shelf like a model you have to"}, {"start": 2297.3199999999997, "end": 2301.3199999999997, "text": " train it on the noisy data and that's expensive you wanna you want to avoid"}, {"start": 2301.3199999999997, "end": 2307.7999999999997, "text": " that if possible and that's where the classifier free basically part of this"}, {"start": 2307.7999999999997, "end": 2312.02, "text": " text comes from so classifier free we do not want to have a classifier if we"}, {"start": 2312.02, "end": 2316.12, "text": " don't have to so let me let me walk you through this because it's fairly"}, {"start": 2316.12, "end": 2319.92, "text": " important since they got better results using this classifier free approach"}, {"start": 2319.92, "end": 2326.16, "text": " compared to using a clip as the guidance model so they say here instead of"}, {"start": 2326.16, "end": 2330.4, "text": " training a separate classifier model we choose to train an unconditional"}, {"start": 2330.4, "end": 2335.12, "text": " denoising diffusion model P theta of Z the notation is a bit different but bear"}, {"start": 2335.12, "end": 2342.22, "text": " with me yeah parameterized to a score estimator a e theta of ZL this is the"}, {"start": 2342.22, "end": 2346.16, "text": " thing we just saw this is the noise we're trying to to learn together with"}, {"start": 2346.16, "end": 2351.8399999999997, "text": " the conditional model P theta is that condition and C parameterized to the"}, {"start": 2351.8399999999997, "end": 2357.7599999999998, "text": " yeah basically the e theta ZL C model so that's basically our model we're just"}, {"start": 2357.7599999999998, "end": 2361.8799999999997, "text": " additionally conditioning that that model we're learning so we use a single"}, {"start": 2361.8799999999997, "end": 2366.56, "text": " neural network to parameterize both models where for the unconditioned"}, {"start": 2366.56, "end": 2371.2, "text": " model we can simply input zeros for the class identifier C when predicting the"}, {"start": 2371.2, "end": 2376.2799999999997, "text": " score IE we just put the C equals to zero for that model and we jointly train"}, {"start": 2376.2799999999997, "end": 2380.2, "text": " the unconditional and conditional models simply by randomly setting C to the"}, {"start": 2380.2, "end": 2385.48, "text": " unconditional class identifier so we have a like they basically have a same"}, {"start": 2385.48, "end": 2389.3599999999997, "text": " network and sometimes with some probability there they are basically"}, {"start": 2389.3599999999997, "end": 2393.68, "text": " zeroing the class and that's how they're training the unconditional and the"}, {"start": 2393.68, "end": 2399.2799999999997, "text": " conditional like a diffusion model at the same time and finally we then"}, {"start": 2399.28, "end": 2402.48, "text": " perform sampling using the following linear combination of the conditional"}, {"start": 2402.48, "end": 2406.52, "text": " and unconditional score estimates and I'm fairly sure this formula is broken"}, {"start": 2406.52, "end": 2412.2400000000002, "text": " that this here is actually the correct formula so I'm gonna now jump to this"}, {"start": 2412.2400000000002, "end": 2419.52, "text": " paper and it says that in order to 
get the final estimate we are going to take"}, {"start": 2419.52, "end": 2426.5600000000004, "text": " the the prediction that we got by using the like unconditioned version plus S"}, {"start": 2426.56, "end": 2432.04, "text": " minus the this difference here so what this means geometrically is the"}, {"start": 2432.04, "end": 2438.7599999999998, "text": " following so we have this distinct e theta conditioned on the zero that means"}, {"start": 2438.7599999999998, "end": 2443.84, "text": " it's unconditioned so it says hey if you want to generate a cool denoised image"}, {"start": 2443.84, "end": 2449.08, "text": " go so we are here go in this direction okay so now this thing here the"}, {"start": 2449.08, "end": 2456.24, "text": " difference tells us hey this thing the the the condition model is smarter"}, {"start": 2456.24, "end": 2460.4399999999996, "text": " because it learned how to generate conditioned images so it's gonna say hey"}, {"start": 2460.4399999999996, "end": 2466.2799999999997, "text": " instead of there go here okay and so you can imagine that the difference vector"}, {"start": 2466.2799999999997, "end": 2473.12, "text": " here is gonna be the green thing so we are from this we are subtracting this so"}, {"start": 2473.12, "end": 2478.8399999999997, "text": " that means we're gonna end up with this vector here and so what this is telling"}, {"start": 2478.8399999999997, "end": 2484.16, "text": " you is hey instead of going with the blue or with the red one we're gonna go"}, {"start": 2484.16, "end": 2489.6, "text": " first with the red one so that's the unconditioned generator and then we're"}, {"start": 2489.6, "end": 2495.04, "text": " gonna do so we can now amplify this difference vector by how much we want so"}, {"start": 2495.04, "end": 2500.2, "text": " there may be like two or something that means hey we want to do it like this so"}, {"start": 2500.2, "end": 2504.7999999999997, "text": " go twice there and so basically we end up going going in this direction or"}, {"start": 2504.7999999999997, "end": 2509.56, "text": " something like that so that's the the high-level idea of how this should be"}, {"start": 2509.56, "end": 2513.3199999999997, "text": " supposed to work yeah there is obviously a lot of theory for why this is yeah I"}, {"start": 2513.32, "end": 2517.32, "text": " think I think this was mostly empirical finding somebody correct me if I'm wrong"}, {"start": 2517.32, "end": 2523.1200000000003, "text": " but yeah and now I was constantly talking about like simple class"}, {"start": 2523.1200000000003, "end": 2528.52, "text": " conditioning because remember we were working with text here the version"}, {"start": 2528.52, "end": 2532.92, "text": " they're actually using is like conditioning on captions and this it"}, {"start": 2532.92, "end": 2538.2400000000002, "text": " means just empty empty text sequence and that's that's yeah we sometimes replace"}, {"start": 2538.2400000000002, "end": 2542.48, "text": " text captions with an empty sequence during training so that's how we we get"}, {"start": 2542.48, "end": 2548.64, "text": " into the from the class conditions world into the text condition world okay also"}, {"start": 2548.64, "end": 2555.4, "text": " also here I explained to you how the guided diffusion would work in the case"}, {"start": 2555.4, "end": 2559.2, "text": " where we are conditioning on a class but remember again we're working with text"}, {"start": 2559.2, "end": 2564.04, "text": " so they are going to use clip and they say here they 
are protruding the reverse"}, {"start": 2564.04, "end": 2569.6, "text": " process mean along the gradient of the text image top product with respect to"}, {"start": 2569.6, "end": 2573.64, "text": " the image so that's here this may be a bit overwhelming perturbing the reverse"}, {"start": 2573.64, "end": 2577.52, "text": " process mean is what I explained up there so we are as you can see we're"}, {"start": 2577.52, "end": 2583.0, "text": " protruding the mean with the gradients of whatever is this and in the case of"}, {"start": 2583.0, "end": 2587.48, "text": " clip it's gonna be we're bringing me along the gradient of the text image dot"}, {"start": 2587.48, "end": 2592.52, "text": " product with respect to the image go ahead and check out my clip video if you"}, {"start": 2592.52, "end": 2596.92, "text": " haven't watched that one if you're not familiar with clip but here briefly what"}, {"start": 2596.92, "end": 2604.08, "text": " clip does is you're basically gonna have like an image here and that's so let me"}, {"start": 2604.08, "end": 2609.7200000000003, "text": " just change the color here so the image and the the texts the the the captions"}, {"start": 2609.7200000000003, "end": 2614.52, "text": " are gonna be embedded in the joint space and so what happens is you have an image"}, {"start": 2614.52, "end": 2619.08, "text": " here and you have some pieces of text here and you're gonna do dot product"}, {"start": 2619.08, "end": 2622.48, "text": " because they're in the same space that means they're like I don't know like"}, {"start": 2622.48, "end": 2624.56, "text": " three-dimensional vector or three-dimensional vector it's gonna"}, {"start": 2624.56, "end": 2628.56, "text": " usually be higher dimensionality but for the sake of argument and you're gonna do"}, {"start": 2628.56, "end": 2633.24, "text": " dot product between them now if you're trying to generate a specific caption"}, {"start": 2633.24, "end": 2638.68, "text": " like the hamster the dragon hamster from from the example I showed you in the"}, {"start": 2638.68, "end": 2643.84, "text": " beginning what you want to do is again so this is an image here you'll want to"}, {"start": 2643.84, "end": 2650.12, "text": " tweak the pixels such that this dot product between these two such that this"}, {"start": 2650.12, "end": 2655.56, "text": " dot product goes up okay and that's that's these you're basically that's"}, {"start": 2655.56, "end": 2659.6, "text": " those are the gradients here that's what this formula is capturing so instead of"}, {"start": 2659.6, "end": 2663.0, "text": " in the case of class conditioning you're trying to maximize just the probability"}, {"start": 2663.0, "end": 2666.88, "text": " of a class here you're trying to maximize the match between the image and"}, {"start": 2666.88, "end": 2670.48, "text": " a particular caption for which you're trying to generate this image so"}, {"start": 2670.48, "end": 2674.12, "text": " remember you're given the caption and you want to generate a specific image and"}, {"start": 2674.12, "end": 2679.68, "text": " so you're using this clip model to learn how to change the image so that you can"}, {"start": 2679.68, "end": 2684.12, "text": " better describe and generate the the corresponding image for this particular"}, {"start": 2684.12, "end": 2689.16, "text": " caption so that's it okay there was a lot of details yeah let me know whether"}, {"start": 2689.16, "end": 2693.3599999999997, "text": " this this format was too too too many details or not but yeah now 
we're gonna"}, {"start": 2693.3599999999997, "end": 2698.12, "text": " just see the results and that's pretty much it thing worth noticing is that"}, {"start": 2698.12, "end": 2703.16, "text": " they they made sure that the training compute is roughly equal to that used"}, {"start": 2703.16, "end": 2710.24, "text": " to train the Lee so that they can fairly compare both models and you can read"}, {"start": 2710.24, "end": 2715.04, "text": " this part yourself if you're if you're curious to know more I basically this"}, {"start": 2715.04, "end": 2719.7999999999997, "text": " day basically describe how the how the model is fine-tuned for the image"}, {"start": 2719.7999999999997, "end": 2724.3999999999996, "text": " painting task in painting task and how some additional details about the"}, {"start": 2724.3999999999996, "end": 2731.3199999999997, "text": " guidance procedure here they compare glide using class class classifier free"}, {"start": 2731.32, "end": 2736.44, "text": " guidance clip guidance dali some GAN versions and the real images you can see"}, {"start": 2736.44, "end": 2743.1800000000003, "text": " the results and let me just kind of maybe focus on this column here and you"}, {"start": 2743.1800000000003, "end": 2748.04, "text": " can see that using classifier free guidance is creates better images"}, {"start": 2748.04, "end": 2754.76, "text": " arguably compared to the clip guidance and better than then the Lee although I"}, {"start": 2754.76, "end": 2758.28, "text": " took a single column buddy I think we can agree that a group of elephants"}, {"start": 2758.28, "end": 2762.6000000000004, "text": " walking in a muddy water this also looks better than this one that's for sure"}, {"start": 2762.6000000000004, "end": 2767.6000000000004, "text": " because it's kind of like a blurry and that has to do with the fact that they"}, {"start": 2767.6000000000004, "end": 2771.84, "text": " are using in the background a discretized version of the VAE model"}, {"start": 2771.84, "end": 2775.0, "text": " that's how the Lee works you can again check my video if you're not familiar"}, {"start": 2775.0, "end": 2780.96, "text": " with the Lee so those are some some some comparison results here what they've"}, {"start": 2780.96, "end": 2784.2200000000003, "text": " done is they've compared I'm gonna skip these curves but basically they"}, {"start": 2784.22, "end": 2790.3599999999997, "text": " compared how classifier free guidance compares to clip guidance and it turns"}, {"start": 2790.3599999999997, "end": 2795.9199999999996, "text": " out that clip classifier free is also better looking at these metrics except"}, {"start": 2795.9199999999996, "end": 2801.08, "text": " for this one where the one of the metrics was actually clip score so what"}, {"start": 2801.08, "end": 2806.52, "text": " happened is that basically the the adversarial examples made it look like"}, {"start": 2806.52, "end": 2810.48, "text": " the the clip guidance is better but it's actually not the classifier free guidance"}, {"start": 2810.48, "end": 2813.7999999999997, "text": " is better so they say here we hypothesize that clip guidance is finding"}, {"start": 2813.8, "end": 2818.0, "text": " adversarial examples for the evaluation clip model rather than actually a"}, {"start": 2818.0, "end": 2821.76, "text": " performing classifier free guidance when it comes to matching the prompt I guess"}, {"start": 2821.76, "end": 2825.88, "text": " that would be my first hypothesis as well for why this thing happens you can"}, 
{"start": 2825.88, "end": 2832.8, "text": " see here like the clip guidance model has for the higher score it has lower FID"}, {"start": 2832.8, "end": 2838.36, "text": " which means basically that if lower FID is better so that means that this curve"}, {"start": 2838.36, "end": 2843.5, "text": " the orange curve in this particular case is is the better one okay so they did"}, {"start": 2843.5, "end": 2851.44, "text": " some comparisons with the Lee model they they tried really hard to make glide as"}, {"start": 2851.44, "end": 2856.0, "text": " bad as possible and the Lee is as good as possible and still they got always"}, {"start": 2856.0, "end": 2860.4, "text": " better results you can see here everything above 50% means that glide is"}, {"start": 2860.4, "end": 2863.76, "text": " better than the Lee and that's the case as you can see across all of the columns"}, {"start": 2863.76, "end": 2870.24, "text": " and all of the rows so this row means that they're using clip to rerank the"}, {"start": 2870.24, "end": 2875.3199999999997, "text": " output images of the Lee and they use I think like 512 or something temps from"}, {"start": 2875.3199999999997, "end": 2878.7599999999998, "text": " the Lee and then they take the best one using clip models so the clip model is"}, {"start": 2878.7599999999998, "end": 2882.4799999999996, "text": " ranking the images for a particular caption and that's how they improve from"}, {"start": 2882.4799999999996, "end": 2887.58, "text": " no ranking to the Lee we ranked and then here what they do is they just blur the"}, {"start": 2887.58, "end": 2892.4799999999996, "text": " output of glide by passing it through the VQ VAE model if I'm not wrong from"}, {"start": 2892.4799999999996, "end": 2897.24, "text": " the Dali and that's make that makes it blur and that's why we saw up there we"}, {"start": 2897.24, "end": 2901.4799999999996, "text": " saw up there that this Dali image here is way more blurred compared to the"}, {"start": 2901.4799999999996, "end": 2907.08, "text": " output of glide so then they pass the glide image through that same model that"}, {"start": 2907.08, "end": 2910.52, "text": " caused this and they still compare them and it's still better compared to the"}, {"start": 2910.52, "end": 2914.4799999999996, "text": " Lee so that's very very encouraging result okay finally some safety"}, {"start": 2914.4799999999996, "end": 2919.56, "text": " considerations a section they as I said they open source the smaller version of"}, {"start": 2919.56, "end": 2923.2, "text": " a model that was trained in this filter data set so they say here we filtered"}, {"start": 2923.2, "end": 2927.6, "text": " out training images containing people to reduce the capabilities of the modeling"}, {"start": 2927.6, "end": 2931.3199999999997, "text": " many people centric problematic use cases okay that's one of the attempts"}, {"start": 2931.3199999999997, "end": 2936.3999999999996, "text": " they try to do so it's a 300 million it's a way smaller model remember that"}, {"start": 2936.3999999999996, "end": 2943.48, "text": " the big one is like in the billions of parameters yeah ballpark and they say"}, {"start": 2943.48, "end": 2947.16, "text": " here that we also probed glide the filtered version so the small filtered"}, {"start": 2947.16, "end": 2951.24, "text": " version for some forms of bias and found that it retains and may even amplify"}, {"start": 2951.24, "end": 2955.52, "text": " biases in the data set for example when asked to generate toys for girls our"}, 
{"start": 2955.52, "end": 2959.8399999999997, "text": " model produces more pink toys and stuffed animals than it does for the"}, {"start": 2959.8399999999997, "end": 2966.2799999999997, "text": " prompt toys for boys I mean this thing is kind of funny like I mean of course"}, {"start": 2966.2799999999997, "end": 2972.3999999999996, "text": " that that's gonna happen because the way our society works is that girls are"}, {"start": 2972.3999999999996, "end": 2978.4799999999996, "text": " playing with more pink toys when they are young I mean this kind of seems"}, {"start": 2978.48, "end": 2982.2, "text": " weird to me the model is just reproducing whatever is present in the"}, {"start": 2982.2, "end": 2985.88, "text": " data set I mean we can all agree on this so yeah let me know your thoughts on"}, {"start": 2985.88, "end": 2989.2, "text": " this particular statement here I'd love to hear your thoughts on this and"}, {"start": 2989.2, "end": 2994.32, "text": " finally some some failure cases and illustration of a cat that has eight"}, {"start": 2994.32, "end": 3001.84, "text": " legs yeah you can see it failed I mean maybe the model here thought that these"}, {"start": 3001.84, "end": 3006.72, "text": " are four legs here and then there are two tails so this is a Chernobyl cat or"}, {"start": 3006.72, "end": 3011.3199999999997, "text": " something so that's six and then these ears have some features that look like"}, {"start": 3011.3199999999997, "end": 3015.9199999999996, "text": " like maybe like look like like so that's like eight here but yeah for this one"}, {"start": 3015.9199999999996, "end": 3021.48, "text": " that that line of reasoning does not work yeah bunch of failure cases here"}, {"start": 3021.48, "end": 3027.16, "text": " it's kind of interesting like a mouse hunting a lion I'd agree you maybe maybe"}, {"start": 3027.16, "end": 3032.52, "text": " here this mouse is actually hunting this lion that may be the case also here"}, {"start": 3032.52, "end": 3036.92, "text": " maybe the the line is dead and the mouse is actually eating the lion but it just"}, {"start": 3036.92, "end": 3040.7599999999998, "text": " yeah went into the mouth to just check double check some of the I don't know"}, {"start": 3040.7599999999998, "end": 3045.32, "text": " some of the markers so that the line is healthy or something I don't know a car"}, {"start": 3045.32, "end": 3049.28, "text": " with triangular wheels here this is I guess the best attempt that the model"}, {"start": 3049.28, "end": 3056.64, "text": " got and this looks like something that may actually yeah appear in a world in"}, {"start": 3056.64, "end": 3063.2, "text": " real world data sets but yeah it's not perfect and I found some other failure"}, {"start": 3063.2, "end": 3068.64, "text": " cases down in the appendix like let me just kind of try and find it here so the"}, {"start": 3068.64, "end": 3073.7999999999997, "text": " fine-tune model they say golden necklace they do this this impeding you can see"}, {"start": 3073.7999999999997, "end": 3077.16, "text": " here they put a mask here and you can see what the model the model doesn't"}, {"start": 3077.16, "end": 3080.8799999999997, "text": " even produce the necklace so there are some failure cases obviously it's not"}, {"start": 3080.88, "end": 3088.04, "text": " perfect also I mentioned the the fact that the Lee was failing with these like"}, {"start": 3088.04, "end": 3092.48, "text": " relational reasoning types of tasks and here they just put a simple one so I"}, {"start": 
3092.48, "end": 3096.56, "text": " guess they really cherry picked the best out of this model and we cannot test it"}, {"start": 3096.56, "end": 3101.1600000000003, "text": " ourselves so that's kind of that kind of sucks but other than that this is fairly"}, {"start": 3101.1600000000003, "end": 3106.6, "text": " fairly awesome work I love the fact that they are pushing diffusion models I'm"}, {"start": 3106.6, "end": 3111.62, "text": " not sure whether diffusion models are ever gonna be as fast as as GANs or V"}, {"start": 3111.62, "end": 3116.68, "text": " ease when sampling for this very simple reason of the fact that the whole"}, {"start": 3116.68, "end": 3122.7999999999997, "text": " diffusion model thing rests on this fact so this distribution approaches the"}, {"start": 3122.7999999999997, "end": 3128.6, "text": " Gaussian only when you have a bunch of steps and when you have beta t going to"}, {"start": 3128.6, "end": 3132.2799999999997, "text": " zero if you manage to find a solution around this it's not going to be"}, {"start": 3132.28, "end": 3136.84, "text": " diffusion model anymore or it's gonna be something different yeah it's not the"}, {"start": 3136.84, "end": 3142.1600000000003, "text": " diffusion anymore I guess in any case hopefully you found this video useful"}, {"start": 3142.1600000000003, "end": 3146.0800000000004, "text": " again I still don't have a camera hopefully in a couple of next weeks I'm"}, {"start": 3146.0800000000004, "end": 3150.6400000000003, "text": " gonna get it and let me know if you found this video yeah interesting useful"}, {"start": 3150.6400000000003, "end": 3154.5600000000004, "text": " and leave some comments feedback and yeah subscribe to this channel share it"}, {"start": 3154.56, "end": 3171.84, "text": " with friends and until next time bye bye"}]
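Before the next video entry, here is a minimal Python sketch of the two guidance rules discussed in the segments above: classifier guidance (shifting the predicted reverse-process mean along the classifier's gradient) and classifier-free guidance (the linear combination of conditional and unconditional noise estimates). The function names, the `classifier_grad` callback, the `eps_model` signature, and the guidance scales are hypothetical placeholders for illustration, not the actual GLIDE implementation.

```python
def classifier_guided_mean(mu, sigma, x_t, y, classifier_grad, s=1.0):
    # Classifier guidance: shift the predicted reverse-process mean mu along the gradient of
    # log p_phi(y | x_t) with respect to the noisy image x_t, scaled by the covariance sigma
    # and an extra guidance weight s that amplifies the classifier's influence.
    grad = classifier_grad(x_t, y)                 # same shape as x_t
    return mu + s * sigma * grad

def classifier_free_eps(eps_model, z_t, t, caption, s=3.0):
    # Classifier-free guidance: a single network evaluated twice, once with the caption and
    # once with the "empty" caption it was occasionally trained on.
    eps_uncond = eps_model(z_t, t, cond=None)      # unconditional noise estimate
    eps_cond   = eps_model(z_t, t, cond=caption)   # conditional noise estimate
    # Start from the unconditional estimate and move further along the direction in which the
    # conditional estimate differs from it, amplified by the guidance scale s.
    return eps_uncond + s * (eps_cond - eps_uncond)
```

With s = 1 the combination reduces to the plain conditional estimate; s > 1 pushes samples harder toward the caption, typically trading diversity for fidelity.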
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=ZHIRRsnINGA
Efficient Geometry-aware 3D Generative Adversarial Networks | GAN Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the "Efficient Geometry-aware 3D Generative Adversarial Networks" paper, that introduced a novel explicit-implicit 3D scene representation network and devised a novel framework (dual discrimination, gen/disc pose conditioning, etc.) for SOTA 3D GANs. I also explain how NeRF works in case you didn't have to chance to read it, yet. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ This video paper: https://arxiv.org/abs/2112.07945 ✅ Website: https://matthew-a-chan.github.io/EG3D/ Supporting papers: ✅ NeRF paper: https://arxiv.org/abs/2003.08934 ✅ Explicit voxel grids paper: https://arxiv.org/abs/1906.07751 ✅ StyleGAN v1: https://arxiv.org/abs/1812.04948 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:45 Tri-plane 3D scene representation intro 04:25 NeRF in depth 13:35 Explicit voxel grid methods 14:40 Tri-plane 3D scene representation explained 16:40 Tri-plane is as expressive 18:30 Efficient 3D GANs pipeline 22:05 Pose correlated facial features 23:40 Dual discrimination, super-resolution 32:40 Results 37:00 Ethical considerations, are we going to change anything? 39:25 Intrinsics, extrinsics robustness? 42:00 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kulsoom Abdullah Petar Veličković ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #3D #GANs #NeRF
What's cracking guys? In this video I'm going to cover a very interesting paper. As a quick note, I still don't have, as you can see, the video and the YouTube studio set up yet, so I'm going to have to only show you the paper for the time being; hopefully I'll solve that very quickly. Basically, I'm covering this novel, very interesting paper called "Efficient Geometry-aware 3D Generative Adversarial Networks", or GANs for short, from this beautiful group of people from Stanford and NVIDIA. As a quick nit, I think the title is a pleonasm, because 3D should already imply the geometry-aware part, but in any case I guess it makes for a more interesting title. Okay, before I start digging into the actual paper, let me show you their video: "We present a new 3D GAN that produces photorealistic images that are both multi-view consistent and geometry aware. Our approach enables unsupervised learning of high-fidelity 3D representations without any explicit 3D or multi-view supervision." So you can see the results are fairly impressive because they are multi-view consistent. You can see that even when you're changing the poses, the appearance and the identity remain the same, which is kind of tricky for GANs to achieve. And what's interesting here, let me stop it here, you saw that the texture is kind of sticking: when the cat is moving, when the pose is changing, you can see that the fur is kind of flickering. That stickiness problem, I think it's called the texture sticking problem, was solved in one of the newer StyleGAN papers if I'm not wrong. Okay, in any case, let's get back to the paper. We saw the visualizations there, and now let me try and explain why this paper is special and very interesting. The first thing they say here is that they train this whole GAN without any ground-truth 3D scans or multi-view supervision. That means they just have 2D images, and they still manage to achieve this multi-view consistency: as you can see here, the pose changes but the identity remains the same. They also manage to recover the underlying 3D geometry of the scene, which is very cool. Now, in order to understand how this paper works, we have to start with this novel explicit-implicit representation they devised, called the tri-plane representation.
I think it's called the tri-plane representation. Basically it's a combination, somewhere in the middle between NeRF, and I'm going to shortly explain what NeRF exactly is, and these explicit voxel approaches. When you combine the good sides of the voxel approach and the NeRF approach, you get this novel representation that they've devised. So the main idea here is the following. The thing with NeRF is that it's memory efficient, let's call it that way, so the memory is cheap, but the inference is very expensive: if you want to get the density and color at a certain point of the scene, you're going to have to do a forward pass, which is way costlier compared to, for example, this voxel approach. Here, as you can see, you have to explicitly keep the representation in the form of this cube, and that means it's memory expensive, but on the other hand the actual querying is very fast because you only have this shallow MLP part here. So combining those two, they get the representation here. But this probably won't make any sense if you're not familiar with NeRF or these other previous approaches, so I'm going to do a quick digression here and explain how the NeRF paper works in detail. NeRF is a very, very interesting paper, and basically there was a Cambrian explosion of papers that came after NeRF and tried to improve upon various elements of it, but here I'm going to explain the original architecture itself. The thing that's very funny, and there is kind of a paradigm shift here, is that overfitting is actually good in NeRF. NeRF heavily overfits to a 3D scene, you can see a bulldozer here as an example. It overfits to that scene such that you can query the scene from a specific point: you take a specific point, you basically pick a ray, a direction vector, and it will give you the pixel color from that position and that direction, which is kind of fascinating. So you can query this 3D scene after you've trained the NeRF, and I'm going to explain how that works in a second. You can query it and get images like this one here or this one here; you can get the same thing from some other spot, like you can maybe place a camera here, I'm going to just depict the camera as this triangle, and you get a novel image that you can get by querying the MLP, the multilayer perceptron, that was trained and overfitted to this 3D scene. So let me try and explain very briefly how NeRF works. You basically have a sparse sampling of the 3D scene. What I mean by that is that you have images such as this one, so this is image number one, taken from this particular location here. Let me try and change the color. So from this location there was a camera and somebody took the image, and then there was a camera here and we have the associated image. So we have associated images and poses, and it's very sparse: you maybe have 30 or 40 different images of this scene. Now, what NeRF does is the following, you train NeRF the following way: basically NeRF takes as an input this XYZ.
So that's a coordinate So that's basically a position and so I want to so for example, I want to place a camera here So that's the position of the camera and then you have the theta and the Phi which are the angles They basically determine the the like a D array the vector in the space starting from that particular point in space and so once you input that that tuple that n-tuple into MLP It basically outputs like the three-dimensional vector that represents the the color so the RGB vector and this sigma thing which is called density of volume density basically so what volume density represents is basically the Probability that the ray has stopped at that particular point in in in the scene so think of that like this if the ray was if you had a ray like starting from here and Radiating through the scene and passing through the bulldozer Once the ray touches the surface of this of this part of the bulldozer and enters the actual bulldozer You'd expect the Sigma to go up because basically You will not be able to visualize that point because it's internal to the bulldozer and so the volume density kind of spikes up and you can see those Rays, so this is a visualization of the thing I was just mentioning so you take a ray like ray one So that may be this this red line here so that red line here and you can kind of visualize the the volume density here and now how nerf works is You take a bunch of points along a certain rate like you can see these these black dots here you query the MLP so this thing that's like a part of nerf and so that basically MLP is the thing that learns inside of this nerf framework and You query it alongside all of these points and you get the RGB Sigma Vectors and then what you do is using Sigma you do a weighted average And you get the final color and that's how they they kind of visualize that here so you basically will have some aggregate of these values and you're gonna form a color and Then you're gonna do a simple MSC loss so mean square error Using the ground truth pixel value so the you have the ground truth because you have the image So for example here, it's gonna be let me change the color again. Let me see like let me pick like yeah blue whatever so you'll have a like a yellow pixel here and Whatever came from this summation here you want to make sure to push it towards that point here and That's the whole trick so how you train like nerf is you take a bunch of these rays so have remember we have a sparse Sampling of this 3d scene. We'll have a bunch of these images We'll take randomly will take some rays emitting from from the cameras from from those images And we're gonna do we're gonna like do this sampling alongside those rays We're gonna form these colors as you can see here And we're gonna do the MSC loss and we're going to do this only like we're gonna basically overfit like crazy to this 3d scene So once that's done you can basically query nerf from some unseen position like as I said you can maybe place a camera here Or but where whatnot and you can get a 2d representation of this 3d scene from that particular pose And that's that's amazing. 
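To make the training procedure described above a bit more concrete, here is a minimal PyTorch-style sketch of the NeRF MLP and one training step on a single ray. The tiny two-layer network, the 5D input, and the `composite` callback are simplifications for illustration; the real NeRF uses positional encodings, a much deeper MLP, and batches of rays.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    # Maps a 5D input (x, y, z, theta, phi) to an RGB color and a volume density sigma.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):                     # x: (M, 5) samples along a ray
        out = self.net(x)
        rgb = torch.sigmoid(out[:, :3])       # colors squashed to [0, 1]
        sigma = torch.relu(out[:, 3])         # non-negative volume density
        return rgb, sigma

def training_step(model, optimizer, ray_points, view_dirs, t_vals, gt_pixel, composite):
    # ray_points: (M, 3) points sampled along one ray, view_dirs: (M, 2) viewing angles,
    # t_vals: (M,) depths of the samples, gt_pixel: (3,) ground-truth color of that pixel.
    rgb, sigma = model(torch.cat([ray_points, view_dirs], dim=-1))
    pred_pixel = composite(rgb, sigma, t_vals)          # weighted sum along the ray (next sketch)
    loss = torch.mean((pred_pixel - gt_pixel) ** 2)     # plain MSE against the known pixel
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```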
I mean you have this MLP this multilayer perception that Has encoded the whole 3d scene so every single information about that scene inside of the weights of that MLP So that's kind of crazy and an interesting paradigm if you've never seen nerf before So but in a nutshell that's it you have an MLP you take a bunch of rays You you query the the MLP alongside certain points You do the MSC loss, and that's how you train the nerf And it took I think the in the original paper somewhere around like seven eight hours Wall wall clock time to train this thing to you to kind of memorize a single scene. That's cool here you can see the the thing I was mentioning about how we actually accumulate the colors alongside along this particular Ray and the whole trick is to form these coefficients ti's change the color here So ti's are just this sum And we have the minus here and the exponent So it's exponential function and so what you can think of that you can think of this the following way So as I said once you get inside of the bulldozer The thing is this ti is gonna be very very small and that means that the color Inside of the bulldozer is gonna be weighted with a very small number here So that's gonna be approaching zero which means that the final color this C het Is gonna be heavily influenced by the first points that are close to the surface of the bulldozer Whereas the other points are gonna have less influence which makes a lot of sense So again this deltas are just the this distances let me zoom in here So Delta J's are just the distance between these successive points here that we are sampling and The sampling here seems like to be uniform, but in reality They are using stochastic sampling so that we can query the nerf the MLP later on Continuously across this whole space and you can see basically that as Sigma goes up. That means we are entering bulldozer This thing is gonna be bigger and bigger and because we have a minus sign. We just have an exponential When you have the minus That means it basically means we're gonna this thing is gonna converge to zero and that's what I just explained So hopefully that that makes a bit more sense how not nerf works So that's one approach and that's called implicit scene representation And it's implicit because the whole knowledge of the 3d scene is encoded inside of the weights of this Multi-layer perceptron network and that's very cool a bad thing about this approach Is that even though the whole scene is encoded in a relatively small number of weights if you want to render a single pixel So imagine we are now here and we don't have the actual pixel value. We'd have to Query the MLP across all of these points So that's a single forward pass times and because we have M point sampled along the ray In order to get a single pixel in the image space and we have to repeat that for every single pixel in the image space You can you can imagine that's kind of time intensive. Okay So that's where these second family of approaches come into play What they do instead is they explicitly keep the information about the 3d space as you can see here visualized in form of this cube and That means this approach is expensive memory wise, but it's much cheaper to do an inference Here in this model because you don't have to do a forward pass. 
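The accumulation just described, the T_i coefficients built from the densities and the inter-sample distances delta, can be written out as a short sketch; variable names follow the explanation above rather than any particular codebase.

```python
import torch

def composite_ray(rgb, sigma, t_vals):
    # rgb:    (M, 3) colors predicted at the M samples along one ray
    # sigma:  (M,)   volume densities at those samples
    # t_vals: (M,)   depths of the samples along the ray
    deltas = t_vals[1:] - t_vals[:-1]                                 # distances between neighbours
    deltas = torch.cat([deltas, torch.full_like(deltas[:1], 1e10)])   # pad the final segment
    # T_i = exp(-sum_{j<i} sigma_j * delta_j): probability the ray has NOT stopped before sample i.
    accum = torch.cumsum(sigma * deltas, dim=0)
    trans = torch.exp(-torch.cat([torch.zeros_like(accum[:1]), accum[:-1]]))
    # alpha_i = 1 - exp(-sigma_i * delta_i): probability the ray terminates inside segment i.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    weights = trans * alpha                    # drops towards zero once the ray is inside an object
    return (weights[:, None] * rgb).sum(dim=0)  # final pixel color for this ray
```

This is also where the speed problem the video keeps coming back to shows up: every single pixel needs M forward passes through the MLP just to produce these rgb and sigma values.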
You already have it explicitly store You can just kind of average it out and get the color of the pixel So that's everything I want you to take out from from from this image and explanation by the way I've put here inference prediction because there is a lot of Just to point out that people are very sloppy with terminology in general So we should be using probably prediction would be the correct terminology here not inference inferences Like it is more came from the context of probabilistic models And yeah, but people use these two interchangeably even though in reality We should be using the word prediction instead of inference, but yeah Super small digression there. Okay So now that we know the trade-offs, let's see what the actual paper brought to the table So we can see here what they did instead of storing every single Like RGB and the volume density in the scene They just have these three planes and even though these look like slices of paper They actually have the depth dimension because they'll have features here like maybe 32 features Alongside this dimension and every plane obviously will have like the depth. So this one we have here and we'll have here Maybe through the 32 features, but that's still way less compared to this Model here that has this so it's I think they call it. Yeah explicit voxel grid approach and Again, so it's memory It's more efficient memory wise and it's also more efficient time wise because as you can see here to find the RGB So to get to get the RGB value the color and the volume density We just have to pass it through this very shallow MLP and compare that to nerf where you have this huge huge Like MLP, I mean huge. I mean relatively big compared to this approach here So now the question remains so that's all nice and sweet but the question is is this as expressive as as as these models here and The answer will turn out to be yes, but before that I forgot to mention how this is constructed So if you want to query at the scene and get the the color at specific point in the scene What do you do you have this point you basically project it onto these planes. That means you have a In our case, we'll have a 32 dimensional vector here So it's going to be 32 dimensional vector and then extract the vector from here and from this projection So this is maybe XY plane and you just basically add them up and then you pass them through the a couple of fully connected Layers, basically a multi-layer perceptron here. So that's how the thing works now We need to make sure that this is as expressive as the two other approaches and that by creating this more efficient model We haven't sacrificed the quality and it turned out turns out that we have not as you can see here at the tree plane approach So that's the one here has a higher quality. 
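A rough sketch of the tri-plane lookup described here: project the 3D point onto the three axis-aligned planes, bilinearly sample a 32-dimensional feature from each, sum them, and decode with a small MLP. The use of `grid_sample`, the coordinate normalization, and the decoder shape are my assumptions, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def sample_plane(plane, coords_2d):
    # plane:     (1, 32, H, W) learned feature plane
    # coords_2d: (N, 2) projections of the query points, assumed normalized to [-1, 1]
    grid = coords_2d.view(1, -1, 1, 2)                           # grid_sample wants (1, N, 1, 2)
    feats = F.grid_sample(plane, grid, mode='bilinear', align_corners=True)
    return feats.reshape(plane.shape[1], -1).t()                 # (N, 32)

def query_triplane(planes, points, decoder):
    # planes: dict with 'xy', 'xz', 'yz' feature planes; points: (N, 3) assumed in [-1, 1]^3
    f = (sample_plane(planes['xy'], points[:, [0, 1]]) +
         sample_plane(planes['xz'], points[:, [0, 2]]) +
         sample_plane(planes['yz'], points[:, [1, 2]]))          # summed 32-d features
    out = decoder(f)                                             # small MLP, e.g. 32 -> 64 -> 4
    return torch.sigmoid(out[:, :3]), F.relu(out[:, 3])          # color, density
```

Summing the three 32-d vectors keeps the decoder tiny, which is where the speed advantage over a full NeRF MLP comes from, while the planes themselves are far cheaper to store than a dense voxel grid.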
The letters are much more like Readable compared to the two other approaches, which is very very cool That means we we've kept the expressive power of the two approaches and we made the model more efficient So what has happened here is that they are basically in this particular example The the trainable weights of this model are these weights Contained in the planes as well as the weights of this fully connected like of this fc layers and the the the way they they kind of Trained the tree plane model is the same way I just explained for nerf so they use the same approach to train and overfit on a particular scene in this case They have this scene called family scene so they train this this this component here the same way as you train nerf and they have to tweak these weights in the In these three planes and it's gonna change in the actual final model that I'm gonna explain right now By the way short aggression As I said these two papers would be useful to have some prior knowledge on and hopefully I gave you Enough context for you to understand the rest of this paper, but the third paper We're gonna find very useful is style again, so I guess the version one and two will suffice These kind of had incremental Improvements upon upon the first version of style again, which had amazing results So let's get to the actual final pipeline I've explained we now understand this part here We understand this whole part of the pipeline So let me kind of Encircle it in square it here. Whatever the name is so you can see we have these three planes we get the color and the density and Because we have the camera parameters 25 scalars We basically can Render the image from whatever pose in the scene so that part we understand and by the way It's 25 because it's 16 for the extrinsics matrix plus 9 for the intrinsics matrix extrinsics means that it basically just maps from the global from the world coordinate system into the camera coordinate system and Intrinsics are there to map from the camera coordinate system into the image space So there's some basic computer vision theory for you You don't have to worry too much about it But basically these 25 numbers tell you how the camera is placed and tell you certain properties About that camera so how it's placed in the scene and certain properties. So the approach will basically be to You're in the scene you're basically in a 3d scene you place a camera out somewhere And what you do is you emit bunch of rays and you sample bunch of points? Alongside those rays, so let me just kind of change the color So we'll sample bunch of points and all those points So when you query those you'll get as the output will get you get the color and the density the volume density And then you combine them in the the smart way I just showed you to get this final image here, which is 128 times 128 times 32, which is different So it's not an RGB image and the reason they've done it like this is because it's more Expressive and we'll see the details a bit later, but that's so that that's this component of the system now Let's switch our attention to the first part here What happens is they basically reuse the generator from the style again paper I mentioned so It's your classical again architecture all the upbeat very very smart one where as you can see here We have a latent of 512 scalars. 
So you have you have certain vector like some some noise vector Or I think it's actually trained in the case of style again You input that into the mapping vector the mapping vector creates these intermediate representations which we can then use to condition the Generator here which as you can see is up sampling in every single layer. We're doing some like we have some Deconvolution so-called deconvolution layers and we end up with this image 256 by 256 that's a spatial resolution and then we have 96 features And what we do is we just take the 96 features and we split it into three planes that each have a 32 Features per plane and we kind of just visually represent those here and then we know this part here now Now the interesting party I want you to see here is that camera prams are used to condition the mapping network Because that helps the generator decouple certain post correlated features and In general increase the expressivity of the of the generator I'm gonna probably show you a bit later what I mean by the correlation between pose and and and features But basically what it means, let me see whether I can find the the maybe somewhere in the appendix Basically people tend to smile when they're facing the camera up front and they Tend not to smile when they're not facing the camera. Let me see whether I can find that curve. Oh, yeah Okay, I found it here. So basically you can see here the plot that tells you so y'all is just basically tells you Tells you how rotated the head is alongside this let's call it XY plane and you can see that when you're facing the camera Like up front the the percentage of smiles is way higher when compared to you when when the head is turned Away from the camera and the reason is fairly simple when people know that they are being photographed They smile and when they are unaware they just being themselves and they're not smiling because Let's face it. This world is not that happy right? It's very it's a very sad world Anyways Enough with very very bad jokes. Let me go back to where I stopped So we are here trying to understand this whole pipeline Hopefully now I managed to to kind of break down this this first component of the pipeline Which is based off of style again The second part is based off of I said some combination of nerf and those explicit grid methods And finally, we have the third component here and let me now try and address This part and then you'll understand the complete method So what I've done here is Once they do the neural rendering and they have this 128 128 32 volume If they extract the first three channels Those will actually be representing the the actual image So those will contain by the way of training those will contain the down sampled version of the final output image And so what they do now is they up sample the image. So they'll up sample the first three channels using basically Bilinger so bilinear let me try and write it down bilinear transformation, which is not a learnable transformation That's just your your classical Digital image processing up sampling method nothing fancy there Secondly, they'll take this whole volume the 32 all of the 32 channels and they're going to pass it into this super resolution module Which is going to up sample the image and uh outcomes the 512 by 100 by 512 RGB image so three channels and again, we have some conditioning here. 
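Two small mechanical details from this part of the pipeline, sketched under assumptions: the 25-value camera conditioning vector is just the flattened 4x4 extrinsics plus the flattened 3x3 intrinsics, and the backbone's 96-channel output is simply sliced into three 32-channel planes. The function names are made up for illustration; the shapes follow the numbers quoted above.

```python
import torch

def camera_conditioning(extrinsics_4x4, intrinsics_3x3):
    # 16 extrinsic values (world -> camera) + 9 intrinsic values (camera -> image) = 25 scalars
    return torch.cat([extrinsics_4x4.reshape(-1), intrinsics_3x3.reshape(-1)])    # (25,)

def split_into_triplanes(backbone_features):
    # backbone_features: (96, 256, 256) output of the StyleGAN-like generator backbone
    xy, xz, yz = torch.split(backbone_features, 32, dim=0)        # three (32, 256, 256) planes
    return {'xy': xy.unsqueeze(0), 'xz': xz.unsqueeze(0), 'yz': yz.unsqueeze(0)}  # add batch dim
```

The same 25-value vector is what conditions the generator's mapping network (the pose conditioning discussed above) and, later, the discriminator.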
They're using these intermediate Latents to condition the the super resolution module Not completely sure whether that thing Whether they did some up relations, but like it makes sense that the information used to generate the original image I mean, it's not the original image. It's just these features that are then neural render. So yeah, i'm not sure whether this was Ablated but like whether it will work as good without this part, but yeah, it works like this And what happens now is they just take and concatenate the these the up sampled image and this super Like this image that was passed through the super resolution module And then they pass it into this style again Two version of discriminator, which is your your your classic GAN loss you're trying to figure out whether the image is real or not real Because you have some data set you basically have some data set of real images that's going to help you figure out This fact whether it's real or not Additionally, we are again conditioning the the discriminator using the camera parameters so the extrinsics and the intrinsics Now this is called what they call this is the Double discriminator or something like that. We're going to see the terminology a bit later, but the point is They are going to by concatenating and having six channels here. What they'll have to do is the following So they'll take some some real image. So this is a real image and then they're going to down sample that image like this and now they're going to use and That basically means that this discriminator is going to be able to use this image That basically means that this discriminator is going to force this small this new this this neural rendered image To be the same to be realistic as these down sampled real images and it's going to force Like the big image to be realistic as the big image is here And all of these constraints help the model learn how to to be consistent In in various poses and one of the crucial parts is actually conditioning on these camera parameters I forgot to mention how they they actually have so some data sets do have this data So the extrinsics and intrinsics In other data sets, basically they make the made an assumption of how intrinsics are What intrinsics make sense? We're going to see that in the appendix a bit later And they use some off-the-shelf detectors to figure out the pose So that that means to figure out the extrinsic component and that's how basically they train this whole thing So yeah, so we they don't have the 3d scans, but they do have the camera parameters. Keep that in mind. Okay Okay now That should be pretty much it. You know, understand should understand how the whole pipeline work But let me go through some additional details So here at this table, they just show the thing I mentioned about the trade-off between speed and memory That this novel approach is both faster and more memory efficient One thing worth worth mentioning is that the three plane representation is capable of representing this complex scene I'll be without view dependent effects That's what I mean by that is that in nerf you can change those theta and phi angles and get different color depending on how You're you're viewing the scene and that kind of helps you model the glossy effects the non-laboration I think that's the fancy way from the computer graphics world um so basically Let me let me see whether yeah, there is a Visualization of nerf somewhere here. So here it is. 
So basically, let me see whether there is a visualization of NeRF somewhere here. So here it is. You can see that the color here depends on the position and on the direction as well — the direction meaning the theta and phi angles — so even if you're at a certain position in the scene, like here, depending on how you direct the ray from that point you're going to get different colors, because of the way NeRF is set up. Whereas, as you can see here, the volume density is not a function of the direction, which makes sense, because density just tells you whether the ray will stop at that particular point in the scene, and that should not depend on the direction. So if you're inside the bulldozer, you're inside, and the direction doesn't matter: the density should be high because you're inside an object — unless it's transparent, but modeling transparent objects is, I guess, a separate league.

Let's continue. So I explained that part, and I mentioned the consistency that the double discriminator brings to the table; now let's see what they say here. The first thing they say is that the real images fed into the discriminator are also processed by concatenating each of them with an appropriately blurred copy of itself. What I mean by that is that once you have this small image, once you've downsampled it, you'll have to upsample it again, and by doing that you're going to lose some details, so the output image is going to be somewhat blurred. This is done in order to be able to concatenate it and create this six-channel input volume — so this is the real image, it has three channels and a certain spatial dimension, and this is the version that was first downsampled and then upsampled — in order to mimic the process on the generated side. So they are essentially just blurring, and the way they are probably approaching this is the following: if you take a full-size image, downsample it, and then upsample it again, I'm fairly sure that if you go into the spectral domain you can devise a certain kernel — a kernel in the sense of convolutional neural networks — that modifies the spectrum of the image in such a way that you get the same blurring effect as if you had downsampled explicitly and upsampled again. But that's just an implementation detail that would probably make this process a bit more efficient and optimized.

Okay, so the terminology I was trying to get at was "dual discrimination" — that's what they call it — and it not only encourages the final output to match the distribution of real images, but also offers additional effects: it encourages the neural rendering to match the distribution of downsampled real images.
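And here is the matching real-image branch, as I understand it: blur by down- then up-sampling and concatenate with the original. Again just a hedged sketch with assumed shapes, pairing with the generated-sample sketch above; in practice this could be replaced by the fixed blur kernel speculated about a moment ago.

```python
import torch
import torch.nn.functional as F

def real_discriminator_input(real_rgb):
    # real_rgb: (B, 3, 512, 512) real training image
    small = F.interpolate(real_rgb, size=(128, 128), mode='bilinear', align_corners=False)
    blurred = F.interpolate(small, size=(512, 512), mode='bilinear', align_corners=False)
    # Same channel ordering as the fake branch: (full-res image, blurred low-res copy).
    return torch.cat([real_rgb, blurred], dim=1)          # (B, 6, 512, 512)
```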
That's because, as you can see here, this is the neural-rendered image, this thing here. As you recall, we basically took the first three channels and got this, and now, once we upsample it, we are making sure it looks as close as possible to the real image that was downsampled and upsampled. Okay, so that's why we have this consistency constraint. And it also encourages the super-resolved images to be consistent with the neural rendering; what they mean by that is that this image here is encouraged to look the same as this image here, which means the super-resolution module is going to do its job: it's just going to upsample the image without modifying the pose or the appearance or whatnot of the output image. Okay.

So that's it. Let's now continue and see some results. The results are fairly stunning: you can see the identity of this man looking frontally, and then when we change the pose you can still tell that this is the same person, and similarly here. This thing with the eyes is a bit creepy, though — it doesn't look like this person should be looking at the camera, it looks like the eyes should be going over there or something, but yeah, it looks kind of weird. We're going to see why that is a bit later: basically there are some problems where the underlying geometry of the eye socket is concave instead of convex, which is creepy.

They compare with the baseline models, and as you can see, every one of these has some problems: either the geometry is super bad or it's very over-smoothed. And here you can also see the image does not look realistic — the teeth here are smoothed out, and the geometry as well — whereas the proposed method looks way better compared to the baselines, just qualitatively analyzing these images. The quantitative analysis shows the same thing, comparing with these baselines across a bunch of different metrics. I'm not going to get into what these metrics are: FID is basically the Fréchet Inception Distance (I'll drop a quick sketch of how it's computed at the end of this part), plus a depth metric and so on — they're comparing the geometry and the quality of the generated images, and you can see that this approach is way better compared to the baselines. They also compare the speed, the number of frames per second, and you can see it's lagging a little bit behind GIRAFFE and some of those methods, but as you saw, the quality is way better, so that's a decent trade-off. Finally, they use this FACS smile STD — I guess it's a standard deviation — which I think just shows that the multi-view consistency is better when you use the dual discrimination and when you add the generator pose conditioning, the GPC, to the model. That's the camera parameters we were feeding into the generator to condition it — that was this part here — so conditioning the mapping-network part of the generator on the camera parameters helps a lot in encouraging this consistency. Okay.

Let's continue and see some more results. This one I'm going to skip, this is from StyleGAN — it's already very clear from the StyleGAN papers: you can mix in the style from the row, so this person here dictates the style, and these persons in the column dictate the general geometry and appearance. You can see they're kind of mixing up this kid here — you can see how the style of the kid changes depending on the leftmost column.
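As mentioned, here is a quick sketch of how FID is usually computed from Inception features of real and generated images; the feature arrays are assumed inputs, and this is the standard formula rather than anything specific to this paper. Lower is better — it measures the distance between Gaussian fits of the two feature distributions.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, fake_feats):
    # real_feats, fake_feats: (N, 2048) Inception activations for real / generated images
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):             # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum() + np.trace(cov_r + cov_f - 2.0 * covmean))
```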
Okay, I thought about explaining how this PTI (pivotal tuning inversion) works in more detail, but it would make this video too long, so I can maybe cover it separately in one of the next videos. But basically, what they manage to do is, given a desired target image, they find the latent that generates that image, and then they can modify and edit the image semantically inside of that latent space. That's very, very cool. (There's a rough sketch of the basic inversion idea at the end of this part.)

What you can notice here — I mentioned the concaveness of the eyes — is the reason why it always looks like the eyes are looking directly at us, directly into our eyes: you can see that the eye sockets are kind of concave here. I think they mention that somewhere down here, let me find it. So, furthermore, ambiguities that can be explained by geometry remain unresolved: for example, by creating concave eye sockets the generator creates the illusion of eyes that follow the camera — an incorrect interpretation, though the renderings are view-consistent and reflect the underlying geometry. Okay, I'm not going to dig deeper into that.

I'm just going to address this paragraph here, because I think it's important — not so much in relation to this concrete paper, but in general. I'm asking myself whether these paragraphs are useful, so I'm going to read it out loud and you can judge for yourself. The single-view 3D reconstruction or style-mixing applications could be misused for generating edited imagery of real people — okay, we are all familiar with deepfakes and with how GANs in general can be problematic in that sense. Such misuse of image synthesis techniques poses a societal threat, and we do not condone using our work with the intent of spreading misinformation or tarnishing reputation — I mean, that should also be obvious, authors do not want their work to be used maliciously, so I don't see any added information in these sentences. And finally, we also recognize a potential lack of diversity in our face results, stemming from implicit biases of the datasets we process — and this same sentence could be copy-pasted into so many different papers. Let's replace this 3D GAN paper with, say, GPT-3, or whatever your favorite language model is now in 2021, maybe some 2.0 version of something, I don't know. It can also be misused; secondly, the authors do not condone using their work maliciously, which is also true; and it's also true that it depends on the biases in the data — they basically used text data collected from the internet and Reddit, which has a lot of biases, for example Muslims being associated with crime and terrorism, stuff like that, so some bad societal biases. So the point is: should we just be copy-pasting this everywhere, given that it applies to every single paper? I don't mean anything bad by saying all of this, I just want to raise the question of whether paragraphs such as this one are of any practical use. I'm not sure, and I'm not trying to be smart here, I'm just thinking out loud.
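As promised above, here is a very rough sketch of the first stage of a PTI-style inversion: optimizing a latent so that a frozen generator reproduces a target image. The `generator` callable, the plain MSE loss, and the 512-dim latent are all assumptions; real PTI additionally fine-tunes the generator weights around this "pivot" latent, which I'm not sketching here.

```python
import torch
import torch.nn.functional as F

def invert(generator, target, steps=500, lr=0.01):
    # target: (1, 3, H, W) image we want to reproduce; generator maps a latent to an image.
    w = torch.zeros(1, 512, requires_grad=True)      # assumed 512-dim latent, as in StyleGAN
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator(w)                         # render an image from the current latent
        loss = F.mse_loss(recon, target)             # in practice a perceptual loss is usually added
        loss.backward()
        opt.step()
    return w.detach()                                # latent that approximately generates `target`
```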
Finally, let me just find one additional thing here in the appendix; it has to do with the intrinsics. Okay, here it is. For the FFHQ dataset, which is the dataset of those high-resolution human faces (not the cats — this one here is the cats dataset), they say they assume fixed camera intrinsics across the entire dataset, with a focal length of 4.26 times the image width, equivalent to a standard portrait lens. So that's an assumption, which means it might not be correct: I'm not sure how the dataset was collected, so it may happen that different images were taken with different cameras, in which case this hypothesis would not hold. And they are using certain off-the-shelf detectors to figure out the extrinsics. I may be making a mistake here, but I haven't seen any ablation of how robust the pipeline is to errors in these two things, i.e. to the case where these are not the correct intrinsics for this dataset. I just thought I'd mention that.

Also, one thing I'm not completely sure about is how they're generating these geometry visualizations. Let me just go back to the pipeline here, it's going to be easier to explain what I mean — okay, it's a long, long paper — okay. What is probably happening is that they are using the density information: they pass a ray through the scene, they have the density information, and they can threshold and aggregate that density and use it to generate the geometry. (I'll leave a rough sketch of this thresholding idea right after the outro.) But I'm fairly sure you'd get a lot of noise and glitching doing that naively, so for people who do not do this every day, including me, it would be nice to have some hint of how this is actually done, because understanding how the geometry is visualized seems fairly important. Anyways, if somebody knows exactly how this is done, feel free to comment down below.

Hopefully you liked this video. Let me know whether the lack of a camera is ruining the experience for you guys — in any case I'll make sure to bring it back very soon and set up my YouTube studio here in London, hopefully in a couple of weeks. Subscribe if you liked this video, share it with friends, and join the Discord community, because there are a lot of awesome, cool people there who are willing to help each other out, so consider joining the community. Until next time, bye bye!
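As promised, here is a rough sketch of the density-thresholding idea for extracting geometry. This is purely my guess at one plausible approach (sample the density field on a regular grid, then run marching cubes at some iso-level); the `query_density` function, the grid bounds, and the threshold value are all assumptions, not the authors' method.

```python
import numpy as np
from skimage import measure

def extract_mesh(query_density, resolution=128, threshold=10.0):
    # Sample the learned density field on a regular 3D grid in [-1, 1]^3.
    coords = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(coords, coords, coords, indexing='ij'), axis=-1)
    sigma = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Marching cubes extracts the iso-surface of the density at the chosen level.
    verts, faces, normals, _ = measure.marching_cubes(sigma, level=threshold)
    return verts, faces
```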
[{"start": 0.0, "end": 6.74, "text": " What's cracking guys? In this video I'm going to cover a very interesting paper and as a quick note"}, {"start": 6.74, "end": 11.96, "text": " I still don't have as you can see I don't have the video and the YouTube studio set up yet"}, {"start": 11.96, "end": 17.96, "text": " So I'm gonna have to only show you the paper for the time being hopefully I'm going to solve that very quickly"}, {"start": 18.48, "end": 25.0, "text": " But basically I'm covering this novel very interesting paper called efficient geometry aware"}, {"start": 25.0, "end": 29.7, "text": " 3d generative adversarial networks or GANs for short"}, {"start": 30.5, "end": 34.22, "text": " from this beautiful group of people from Stanford and"}, {"start": 34.82, "end": 41.879999999999995, "text": " Nvidia as a quick knit I think that like the very fact that this is 3d"}, {"start": 41.879999999999995, "end": 44.260000000000005, "text": " I mean, I think this is a pleonasm because"}, {"start": 45.1, "end": 49.5, "text": " 3d should imply the geometry aware part of this title"}, {"start": 49.5, "end": 57.980000000000004, "text": " But yeah in any case I guess it makes makes a more interesting title okay before I start digging into the actual paper"}, {"start": 58.019999999999996, "end": 63.9, "text": " Let me show you their video. We present a new 3d GAN that produces photorealistic images"}, {"start": 64.34, "end": 66.7, "text": " that are both multi-view consistent and"}, {"start": 69.78, "end": 71.78, "text": " Geometry aware"}, {"start": 73.3, "end": 79.02, "text": " Our approach enables unsupervised learning of high fidelity 3d representations without any"}, {"start": 79.02, "end": 81.02, "text": " explicit 3d or"}, {"start": 81.02, "end": 83.02, "text": " multi-view supervision"}, {"start": 86.74, "end": 91.97999999999999, "text": " So you can see results are fairly impressive because they are they are multi-view consistent"}, {"start": 91.97999999999999, "end": 95.89999999999999, "text": " You can see that even when you're changing the poses the appearance and the identity"}, {"start": 96.97999999999999, "end": 100.28, "text": " Remain the same which is kind of tricky for GANs to achieve"}, {"start": 100.74, "end": 106.02, "text": " And you can see that so here what's interesting is you can see let me stop it here"}, {"start": 106.02, "end": 109.38, "text": " you saw that the texture is kind of sticking and"}, {"start": 110.1, "end": 115.74, "text": " So when the cat is moving when the pose is changing you can see that the fur is kind of flickering or something"}, {"start": 115.74, "end": 121.02, "text": " So that stickiness problem. I think it's called like that like texture sticking problem was solved in one of the"}, {"start": 122.34, "end": 125.17999999999999, "text": " Newer style GAN papers if I'm not wrong"}, {"start": 125.82, "end": 131.3, "text": " Okay, in any case let's get back to the paper. 
We saw the the visualizations there and"}, {"start": 131.3, "end": 136.22, "text": " Now let me try and explain why this paper is special and very interesting so"}, {"start": 136.78, "end": 143.62, "text": " First thing they say here is that they train this whole thing this GAN without any ground truth 3d scans or"}, {"start": 144.10000000000002, "end": 152.02, "text": " Multi-view supervision so that means they just have 2d images basically and they manage to achieve this this"}, {"start": 152.78, "end": 157.18, "text": " Like basically multi-view consistency as you can see here the pose here changes"}, {"start": 157.18, "end": 162.82, "text": " But the identity remains the same and they also managed to recover the underlying 3d geometry"}, {"start": 163.38, "end": 169.32, "text": " Of the scene which is very cool now in order to understand how this"}, {"start": 169.86, "end": 173.74, "text": " Like paper works. We have to start with this novel"}, {"start": 174.66, "end": 181.76000000000002, "text": " Explicit implicit representation they devised called a tree plane. I think it's called tree plane representation or something"}, {"start": 182.18, "end": 183.58, "text": " but basically"}, {"start": 183.58, "end": 185.58, "text": " it's a combination it's a"}, {"start": 185.58, "end": 189.94000000000003, "text": " Somewhere in the middle between nerf, and I'm gonna briefly"}, {"start": 189.94000000000003, "end": 196.56, "text": " I'm gonna shortly explain what nerf exactly is and these like explicit voxel approaches"}, {"start": 197.46, "end": 203.86, "text": " And when you combine the good sides of this approach and the nerf approach you get this novel representation"}, {"start": 204.14000000000001, "end": 206.14000000000001, "text": " That they've devised"}, {"start": 206.22000000000003, "end": 208.82000000000002, "text": " so the main idea here is that"}, {"start": 209.54000000000002, "end": 213.32000000000002, "text": " The thing with nerf is that it's very it's memory efficient"}, {"start": 213.32, "end": 219.12, "text": " Let's call it that way so the memory is cheap, but like the the inference is very expensive"}, {"start": 219.12, "end": 222.51999999999998, "text": " so if you want to run this thing if you want to get the"}, {"start": 223.48, "end": 229.68, "text": " Density and color from a certain point of the scene you're gonna have to do a forward pass which is way way"}, {"start": 230.12, "end": 237.6, "text": " Costlier compared to for example this voxel approach so we're here as you can see you have to explicitly keep the representation"}, {"start": 237.6, "end": 243.68, "text": " Like in a form of this of this of this cube, and that means it's memory expensive"}, {"start": 243.76, "end": 250.88, "text": " But like on the other hand like the actual like querying is very fast because you only have this shallow"}, {"start": 251.04, "end": 255.64, "text": " MLP part here so combining those two they get the representation here"}, {"start": 255.84, "end": 260.64, "text": " but this probably won't make any sense if you're not familiar with nerf or"}, {"start": 261.36, "end": 266.21999999999997, "text": " These other previous approaches, so I'm gonna do like a gonna do a quick digression here"}, {"start": 266.22, "end": 271.1, "text": " And explain how the nerf paper works in detail"}, {"start": 271.86, "end": 273.08000000000004, "text": " so"}, {"start": 273.08000000000004, "end": 275.34000000000003, "text": " nerf is very very interesting paper and"}, {"start": 276.54, "end": 
284.06, "text": " Basically, there was a like a Cambrian explosion of papers that came after nerf that tried to to improve upon various various elements of nerf"}, {"start": 284.38000000000005, "end": 287.78000000000003, "text": " but here I'm going to explain the original architecture itself and"}, {"start": 288.46000000000004, "end": 292.62, "text": " The thing that's very funny and kind of there is a paradigm shift here"}, {"start": 293.62, "end": 295.62, "text": " and that is that"}, {"start": 295.62, "end": 298.18, "text": " Overfitting is actually good in nerf so nerf"}, {"start": 298.9, "end": 300.1, "text": " basically"}, {"start": 300.1, "end": 305.98, "text": " Overfits like heavily overfits to a 3d scene so you can see a bulldozer here as an example"}, {"start": 306.42, "end": 309.9, "text": " So it heavily overfits to that scene such that"}, {"start": 310.5, "end": 314.78000000000003, "text": " basically you can query this scene from a specific you take a specific point and you"}, {"start": 315.78000000000003, "end": 321.82, "text": " basically pick in like a like a like a ray like a direction vector and it will give you the"}, {"start": 321.82, "end": 327.74, "text": " pixel color from that position and that direction which is kind of"}, {"start": 327.98, "end": 332.02, "text": " Fascinating so you can do you can query this 3d scene after you've trained the nerf"}, {"start": 332.02, "end": 334.02, "text": " And I'm going to explain how that works in a second"}, {"start": 334.14, "end": 340.06, "text": " You can query it and get like these images you can see this one here or this one here"}, {"start": 340.06, "end": 346.0, "text": " You can get the same thing from from some in other spot like you can maybe place a camera here you place a camera here"}, {"start": 346.0, "end": 346.94, "text": " I'm gonna just"}, {"start": 346.94, "end": 352.66, "text": " Depict camera as this as this triangle and you get a novel novel image that you can just kind of"}, {"start": 353.5, "end": 360.38, "text": " Get by querying the MLP the multilayer perceptron that was trained and overfitted to this 3d scene"}, {"start": 360.78, "end": 367.62, "text": " So let me try and explain very briefly how how nerf works so you basically have a"}, {"start": 368.02, "end": 373.38, "text": " Sparse sampling of the 3d scene what I mean by that is that you have images such as this one"}, {"start": 373.38, "end": 378.9, "text": " So this is image number one taken from this particular location here. Let me try and change the color"}, {"start": 378.98, "end": 380.7, "text": " So we have from this location"}, {"start": 380.7, "end": 386.4, "text": " There was a camera here and somebody took the image and then there was a camera here and we have associated image"}, {"start": 386.4, "end": 392.26, "text": " So we have associated images and poses and it's very sparse like you maybe have 30 or 40"}, {"start": 392.82, "end": 394.94, "text": " different images of this scene and"}, {"start": 395.54, "end": 401.42, "text": " Now what nerf does is the is the following you train the nerf the following way"}, {"start": 401.42, "end": 406.94, "text": " Basically nerf takes as an input this XYZ. 
So that's a coordinate"}, {"start": 406.94, "end": 411.26, "text": " So that's basically a position and so I want to so for example, I want to place a camera here"}, {"start": 411.26, "end": 416.14000000000004, "text": " So that's the position of the camera and then you have the theta and the Phi which are the angles"}, {"start": 416.14000000000004, "end": 421.18, "text": " They basically determine the the like a D array the vector in the space"}, {"start": 421.94, "end": 430.54, "text": " starting from that particular point in space and so once you input that that tuple that n-tuple into"}, {"start": 430.54, "end": 432.42, "text": " MLP"}, {"start": 432.42, "end": 441.06, "text": " It basically outputs like the three-dimensional vector that represents the the color so the RGB vector and this sigma thing"}, {"start": 441.06, "end": 443.06, "text": " which is called"}, {"start": 443.06, "end": 444.38, "text": " density of"}, {"start": 444.38, "end": 449.3, "text": " volume density basically so what volume density represents is basically the"}, {"start": 449.78000000000003, "end": 457.26, "text": " Probability that the ray has stopped at that particular point in in in the scene"}, {"start": 457.26, "end": 462.86, "text": " so think of that like this if the ray was if you had a ray like starting from here and"}, {"start": 464.02, "end": 466.7, "text": " Radiating through the scene and passing through the bulldozer"}, {"start": 467.58, "end": 473.9, "text": " Once the ray touches the surface of this of this part of the bulldozer and enters the actual bulldozer"}, {"start": 474.09999999999997, "end": 478.02, "text": " You'd expect the Sigma to go up because basically"}, {"start": 478.98, "end": 484.65999999999997, "text": " You will not be able to visualize that point because it's internal to the bulldozer and so the volume density"}, {"start": 484.66, "end": 488.86, "text": " kind of spikes up and you can see those"}, {"start": 489.3, "end": 493.8, "text": " Rays, so this is a visualization of the thing I was just mentioning so you take a ray like ray one"}, {"start": 493.84000000000003, "end": 496.62, "text": " So that may be this this red line here"}, {"start": 497.18, "end": 501.34000000000003, "text": " so that red line here and you can kind of visualize the the"}, {"start": 502.1, "end": 506.06, "text": " volume density here and now how nerf works is"}, {"start": 506.62, "end": 512.0600000000001, "text": " You take a bunch of points along a certain rate like you can see these these black dots here"}, {"start": 512.06, "end": 516.2199999999999, "text": " you query the MLP so this thing that's"}, {"start": 516.9399999999999, "end": 518.9399999999999, "text": " like a part of nerf and"}, {"start": 518.9799999999999, "end": 524.0999999999999, "text": " so that basically MLP is the thing that learns inside of this nerf framework and"}, {"start": 524.42, "end": 529.9399999999999, "text": " You query it alongside all of these points and you get the RGB Sigma"}, {"start": 530.66, "end": 536.14, "text": " Vectors and then what you do is using Sigma you do a weighted average"}, {"start": 536.14, "end": 540.9399999999999, "text": " And you get the final color and that's how they they kind of visualize that here"}, {"start": 540.94, "end": 546.9000000000001, "text": " so you basically will have some aggregate of these values and you're gonna form a color and"}, {"start": 547.1, "end": 551.0600000000001, "text": " Then you're gonna do a simple MSC loss so mean square error"}, {"start": 552.22, "end": 
557.58, "text": " Using the ground truth pixel value so the you have the ground truth because you have the image"}, {"start": 557.58, "end": 563.98, "text": " So for example here, it's gonna be let me change the color again. Let me see like let me pick like yeah blue"}, {"start": 563.98, "end": 567.74, "text": " whatever so you'll have a like a yellow pixel here and"}, {"start": 567.74, "end": 575.7, "text": " Whatever came from this summation here you want to make sure to push it towards that point here and"}, {"start": 576.46, "end": 583.22, "text": " That's the whole trick so how you train like nerf is you take a bunch of these rays so have remember we have a sparse"}, {"start": 583.5, "end": 585.98, "text": " Sampling of this 3d scene. We'll have a bunch of these images"}, {"start": 586.5, "end": 591.82, "text": " We'll take randomly will take some rays emitting from from the cameras from from those images"}, {"start": 591.82, "end": 597.02, "text": " And we're gonna do we're gonna like do this sampling alongside those rays"}, {"start": 597.02, "end": 599.8199999999999, "text": " We're gonna form these colors as you can see here"}, {"start": 599.8199999999999, "end": 607.54, "text": " And we're gonna do the MSC loss and we're going to do this only like we're gonna basically overfit like crazy to this 3d scene"}, {"start": 608.1, "end": 614.38, "text": " So once that's done you can basically query nerf from some unseen position like as I said you can maybe place a camera here"}, {"start": 614.38, "end": 621.28, "text": " Or but where whatnot and you can get a 2d representation of this 3d scene from that particular pose"}, {"start": 621.28, "end": 626.22, "text": " And that's that's amazing. I mean you have this MLP this multilayer perception"}, {"start": 626.22, "end": 627.5400000000001, "text": " that"}, {"start": 627.5400000000001, "end": 635.24, "text": " Has encoded the whole 3d scene so every single information about that scene inside of the weights of that MLP"}, {"start": 635.24, "end": 639.14, "text": " So that's kind of crazy and an interesting paradigm if you've never seen nerf before"}, {"start": 639.86, "end": 643.72, "text": " So but in a nutshell that's it you have an MLP you take a bunch of rays"}, {"start": 644.3000000000001, "end": 649.22, "text": " You you query the the MLP alongside certain points"}, {"start": 649.6600000000001, "end": 652.78, "text": " You do the MSC loss, and that's how you train the nerf"}, {"start": 652.78, "end": 656.9399999999999, "text": " And it took I think the in the original paper somewhere around like seven eight hours"}, {"start": 657.4599999999999, "end": 663.18, "text": " Wall wall clock time to train this thing to you to kind of memorize a single scene. 
That's cool"}, {"start": 664.1, "end": 668.6999999999999, "text": " here you can see the the thing I was mentioning about how we actually accumulate the"}, {"start": 669.3, "end": 671.9, "text": " colors alongside along this particular"}, {"start": 672.54, "end": 678.9399999999999, "text": " Ray and the whole trick is to form these coefficients ti's change the color here"}, {"start": 678.9399999999999, "end": 681.62, "text": " So ti's are just this sum"}, {"start": 681.62, "end": 684.34, "text": " And we have the minus here and the exponent"}, {"start": 684.86, "end": 690.66, "text": " So it's exponential function and so what you can think of that you can think of this the following way"}, {"start": 690.66, "end": 693.54, "text": " So as I said once you get inside of the bulldozer"}, {"start": 694.42, "end": 699.86, "text": " The thing is this ti is gonna be very very small and that means that the color"}, {"start": 700.34, "end": 705.02, "text": " Inside of the bulldozer is gonna be weighted with a very small number here"}, {"start": 705.02, "end": 709.92, "text": " So that's gonna be approaching zero which means that the final color this C het"}, {"start": 709.92, "end": 717.36, "text": " Is gonna be heavily influenced by the first points that are close to the surface of the bulldozer"}, {"start": 717.7199999999999, "end": 721.52, "text": " Whereas the other points are gonna have less influence which makes a lot of sense"}, {"start": 721.68, "end": 726.3199999999999, "text": " So again this deltas are just the this distances let me zoom in here"}, {"start": 726.4, "end": 729.7199999999999, "text": " So Delta J's are just the distance between these"}, {"start": 730.3199999999999, "end": 732.52, "text": " successive points here that we are sampling and"}, {"start": 732.8, "end": 736.8399999999999, "text": " The sampling here seems like to be uniform, but in reality"}, {"start": 736.84, "end": 741.9200000000001, "text": " They are using stochastic sampling so that we can query the nerf the MLP later on"}, {"start": 742.72, "end": 750.24, "text": " Continuously across this whole space and you can see basically that as Sigma goes up. That means we are entering bulldozer"}, {"start": 751.08, "end": 756.6, "text": " This thing is gonna be bigger and bigger and because we have a minus sign. We just have an exponential"}, {"start": 757.4, "end": 759.4, "text": " When you have the minus"}, {"start": 759.5600000000001, "end": 765.36, "text": " That means it basically means we're gonna this thing is gonna converge to zero and that's what I just explained"}, {"start": 765.36, "end": 768.86, "text": " So hopefully that that makes a bit more sense how not nerf works"}, {"start": 769.72, "end": 775.22, "text": " So that's one approach and that's called implicit scene representation"}, {"start": 775.22, "end": 780.94, "text": " And it's implicit because the whole knowledge of the 3d scene is encoded inside of the weights of this"}, {"start": 781.2, "end": 785.88, "text": " Multi-layer perceptron network and that's very cool a bad thing about this approach"}, {"start": 785.88, "end": 793.58, "text": " Is that even though the whole scene is encoded in a relatively small number of weights if you want to render a single pixel"}, {"start": 793.58, "end": 797.74, "text": " So imagine we are now here and we don't have the actual pixel value. 
We'd have to"}, {"start": 799.12, "end": 801.12, "text": " Query the"}, {"start": 801.12, "end": 802.88, "text": " MLP across all of these points"}, {"start": 802.88, "end": 807.12, "text": " So that's a single forward pass times and because we have M point sampled along the ray"}, {"start": 807.4000000000001, "end": 814.0600000000001, "text": " In order to get a single pixel in the image space and we have to repeat that for every single pixel in the image space"}, {"start": 814.0600000000001, "end": 817.0400000000001, "text": " You can you can imagine that's kind of time intensive. Okay"}, {"start": 818.2, "end": 821.76, "text": " So that's where these second family of approaches come into play"}, {"start": 821.76, "end": 829.28, "text": " What they do instead is they explicitly keep the information about the 3d space as you can see here"}, {"start": 829.28, "end": 831.28, "text": " visualized in form of this cube and"}, {"start": 832.2, "end": 839.24, "text": " That means this approach is expensive memory wise, but it's much cheaper to do an inference"}, {"start": 840.36, "end": 845.6, "text": " Here in this model because you don't have to do a forward pass. You already have it explicitly store"}, {"start": 845.6, "end": 848.7, "text": " You can just kind of average it out and get the color of the pixel"}, {"start": 848.7, "end": 853.1400000000001, "text": " So that's everything I want you to take out from from from this image and explanation"}, {"start": 853.38, "end": 854.0200000000001, "text": " by the way"}, {"start": 854.0200000000001, "end": 856.62, "text": " I've put here inference prediction because there is a lot of"}, {"start": 857.22, "end": 861.6800000000001, "text": " Just to point out that people are very sloppy with terminology in general"}, {"start": 861.6800000000001, "end": 866.58, "text": " So we should be using probably prediction would be the correct terminology here not inference inferences"}, {"start": 866.58, "end": 870.7, "text": " Like it is more came from the context of probabilistic models"}, {"start": 870.7, "end": 875.82, "text": " And yeah, but people use these two interchangeably even though in reality"}, {"start": 875.82, "end": 878.7800000000001, "text": " We should be using the word prediction instead of inference, but yeah"}, {"start": 879.58, "end": 881.58, "text": " Super small digression there. Okay"}, {"start": 882.0200000000001, "end": 888.0400000000001, "text": " So now that we know the trade-offs, let's see what the actual paper brought to the table"}, {"start": 888.1, "end": 893.2600000000001, "text": " So we can see here what they did instead of storing every single"}, {"start": 893.5, "end": 897.0200000000001, "text": " Like RGB and the volume density in the scene"}, {"start": 897.3000000000001, "end": 902.1, "text": " They just have these three planes and even though these look like slices of paper"}, {"start": 902.1, "end": 906.86, "text": " They actually have the depth dimension because they'll have features here like maybe 32 features"}, {"start": 907.94, "end": 913.9200000000001, "text": " Alongside this dimension and every plane obviously will have like the depth. So this one we have here and we'll have here"}, {"start": 914.74, "end": 919.6800000000001, "text": " Maybe through the 32 features, but that's still way less compared to this"}, {"start": 920.3000000000001, "end": 925.4200000000001, "text": " Model here that has this so it's I think they call it. 
Yeah explicit voxel grid approach"}, {"start": 926.1, "end": 927.34, "text": " and"}, {"start": 927.34, "end": 929.34, "text": " Again, so it's memory"}, {"start": 929.34, "end": 937.0600000000001, "text": " It's more efficient memory wise and it's also more efficient time wise because as you can see here to find the RGB"}, {"start": 937.0600000000001, "end": 941.5400000000001, "text": " So to get to get the RGB value the color and the volume density"}, {"start": 941.7800000000001, "end": 948.82, "text": " We just have to pass it through this very shallow MLP and compare that to nerf where you have this huge huge"}, {"start": 949.0600000000001, "end": 954.34, "text": " Like MLP, I mean huge. I mean relatively big compared to this approach here"}, {"start": 954.34, "end": 959.7, "text": " So now the question remains so that's all nice and sweet but the question is is this as"}, {"start": 960.0600000000001, "end": 962.86, "text": " expressive as as as these models here and"}, {"start": 963.34, "end": 967.9, "text": " The answer will turn out to be yes, but before that I forgot to mention how this is constructed"}, {"start": 967.94, "end": 973.9, "text": " So if you want to query at the scene and get the the color at specific point in the scene"}, {"start": 974.0600000000001, "end": 980.02, "text": " What do you do you have this point you basically project it onto these planes. That means you have a"}, {"start": 980.02, "end": 983.6999999999999, "text": " In our case, we'll have a 32 dimensional vector here"}, {"start": 983.6999999999999, "end": 989.14, "text": " So it's going to be 32 dimensional vector and then extract the vector from here and from this projection"}, {"start": 989.14, "end": 995.74, "text": " So this is maybe XY plane and you just basically add them up and then you pass them through the a couple of fully connected"}, {"start": 995.74, "end": 1000.42, "text": " Layers, basically a multi-layer perceptron here. So that's how the thing works now"}, {"start": 1001.02, "end": 1009.1, "text": " We need to make sure that this is as expressive as the two other approaches and that by creating this more efficient model"}, {"start": 1009.1, "end": 1016.9, "text": " We haven't sacrificed the quality and it turned out turns out that we have not as you can see here at the tree plane approach"}, {"start": 1016.9, "end": 1021.5, "text": " So that's the one here has a higher quality. 
The letters are much more"}, {"start": 1021.98, "end": 1023.02, "text": " like"}, {"start": 1023.02, "end": 1027.9, "text": " Readable compared to the two other approaches, which is very very cool"}, {"start": 1027.9, "end": 1033.46, "text": " That means we we've kept the expressive power of the two approaches and we made the model more efficient"}, {"start": 1033.46, "end": 1037.94, "text": " So what has happened here is that they are basically in this particular example"}, {"start": 1037.94, "end": 1042.66, "text": " The the trainable weights of this model are these weights"}, {"start": 1043.1000000000001, "end": 1047.54, "text": " Contained in the planes as well as the weights of this fully connected"}, {"start": 1048.18, "end": 1053.94, "text": " like of this fc layers and the the the way they they kind of"}, {"start": 1054.38, "end": 1058.6000000000001, "text": " Trained the tree plane model is the same way"}, {"start": 1058.6000000000001, "end": 1065.66, "text": " I just explained for nerf so they use the same approach to train and overfit on a particular scene in this case"}, {"start": 1065.66, "end": 1068.0600000000002, "text": " They have this scene called family scene"}, {"start": 1068.1000000000001, "end": 1073.78, "text": " so they train this this this component here the same way as you train nerf and"}, {"start": 1074.3000000000002, "end": 1077.3000000000002, "text": " they have to tweak these weights in the"}, {"start": 1078.22, "end": 1084.7, "text": " In these three planes and it's gonna change in the actual final model that I'm gonna explain right now"}, {"start": 1086.3000000000002, "end": 1088.3000000000002, "text": " By the way short aggression"}, {"start": 1088.3000000000002, "end": 1093.3400000000001, "text": " As I said these two papers would be useful to have some prior knowledge on and hopefully I gave you"}, {"start": 1093.34, "end": 1098.3, "text": " Enough context for you to understand the rest of this paper, but the third paper"}, {"start": 1098.3, "end": 1104.62, "text": " We're gonna find very useful is style again, so I guess the version one and two will suffice"}, {"start": 1105.22, "end": 1107.22, "text": " These kind of had incremental"}, {"start": 1108.02, "end": 1112.6599999999999, "text": " Improvements upon upon the first version of style again, which had amazing results"}, {"start": 1113.1799999999998, "end": 1115.62, "text": " So let's get to the actual final pipeline"}, {"start": 1116.62, "end": 1120.26, "text": " I've explained we now understand this part here"}, {"start": 1120.26, "end": 1123.3799999999999, "text": " We understand this whole part of the pipeline"}, {"start": 1124.14, "end": 1126.02, "text": " So let me kind of"}, {"start": 1126.02, "end": 1130.5, "text": " Encircle it in square it here. 
Whatever the name is"}, {"start": 1131.86, "end": 1135.1, "text": " so you can see we have these three planes we"}, {"start": 1136.06, "end": 1138.1, "text": " get the color and the density and"}, {"start": 1139.18, "end": 1142.36, "text": " Because we have the camera parameters 25 scalars"}, {"start": 1143.74, "end": 1145.74, "text": " We basically can"}, {"start": 1145.74, "end": 1151.7, "text": " Render the image from whatever pose in the scene so that part we understand and by the way"}, {"start": 1151.7, "end": 1155.52, "text": " It's 25 because it's 16 for the extrinsics matrix"}, {"start": 1156.34, "end": 1159.1, "text": " plus 9 for the intrinsics matrix"}, {"start": 1160.06, "end": 1167.14, "text": " extrinsics means that it basically just maps from the global from the world coordinate system into the camera coordinate system and"}, {"start": 1167.74, "end": 1173.04, "text": " Intrinsics are there to map from the camera coordinate system into the image space"}, {"start": 1173.04, "end": 1175.72, "text": " So there's some basic computer vision theory for you"}, {"start": 1175.72, "end": 1177.92, "text": " You don't have to worry too much about it"}, {"start": 1177.92, "end": 1184.04, "text": " But basically these 25 numbers tell you how the camera is placed and tell you certain properties"}, {"start": 1184.32, "end": 1190.32, "text": " About that camera so how it's placed in the scene and certain properties. So the approach will basically be to"}, {"start": 1191.84, "end": 1196.8799999999999, "text": " You're in the scene you're basically in a 3d scene you place a camera out somewhere"}, {"start": 1196.88, "end": 1202.8400000000001, "text": " And what you do is you emit bunch of rays and you sample bunch of points?"}, {"start": 1203.64, "end": 1206.16, "text": " Alongside those rays, so let me just kind of change the color"}, {"start": 1206.2800000000002, "end": 1209.92, "text": " So we'll sample bunch of points and all those points"}, {"start": 1210.24, "end": 1215.8000000000002, "text": " So when you query those you'll get as the output will get you get the color and the density the volume density"}, {"start": 1215.8000000000002, "end": 1218.6200000000001, "text": " And then you combine them in the the smart way"}, {"start": 1218.6200000000001, "end": 1226.0800000000002, "text": " I just showed you to get this final image here, which is 128 times 128 times 32, which is different"}, {"start": 1226.08, "end": 1230.4399999999998, "text": " So it's not an RGB image and the reason they've done it like this is because it's more"}, {"start": 1230.6399999999999, "end": 1236.6, "text": " Expressive and we'll see the details a bit later, but that's so that that's this component of the system now"}, {"start": 1236.76, "end": 1239.8799999999999, "text": " Let's switch our attention to the first part here"}, {"start": 1240.4399999999998, "end": 1246.8, "text": " What happens is they basically reuse the generator from the style again paper I mentioned so"}, {"start": 1247.48, "end": 1253.3799999999999, "text": " It's your classical again architecture all the upbeat very very smart one where as you can see here"}, {"start": 1253.38, "end": 1259.3400000000001, "text": " We have a latent of 512 scalars. 
So you have you have certain vector like some some noise vector"}, {"start": 1259.3400000000001, "end": 1263.0200000000002, "text": " Or I think it's actually trained in the case of style again"}, {"start": 1263.8400000000001, "end": 1268.9, "text": " You input that into the mapping vector the mapping vector creates these intermediate"}, {"start": 1269.3200000000002, "end": 1272.5, "text": " representations which we can then use to condition the"}, {"start": 1273.0, "end": 1279.6200000000001, "text": " Generator here which as you can see is up sampling in every single layer. We're doing some like we have some"}, {"start": 1279.62, "end": 1285.54, "text": " Deconvolution so-called deconvolution layers and we end up with this image"}, {"start": 1286.34, "end": 1291.2199999999998, "text": " 256 by 256 that's a spatial resolution and then we have 96 features"}, {"start": 1291.2199999999998, "end": 1298.4599999999998, "text": " And what we do is we just take the 96 features and we split it into three planes that each have a 32"}, {"start": 1298.8999999999999, "end": 1306.06, "text": " Features per plane and we kind of just visually represent those here and then we know this part here now"}, {"start": 1306.06, "end": 1313.94, "text": " Now the interesting party I want you to see here is that camera prams are used to condition the mapping network"}, {"start": 1315.34, "end": 1320.94, "text": " Because that helps the generator decouple certain post correlated features and"}, {"start": 1321.5, "end": 1324.78, "text": " In general increase the expressivity of the of the generator"}, {"start": 1325.34, "end": 1331.2, "text": " I'm gonna probably show you a bit later what I mean by the correlation between pose and and and features"}, {"start": 1331.2, "end": 1336.48, "text": " But basically what it means, let me see whether I can find the the maybe somewhere in the appendix"}, {"start": 1338.24, "end": 1343.68, "text": " Basically people tend to smile when they're facing the camera up front and they"}, {"start": 1344.04, "end": 1350.0, "text": " Tend not to smile when they're not facing the camera. Let me see whether I can find that curve. Oh, yeah"}, {"start": 1350.0, "end": 1357.38, "text": " Okay, I found it here. So basically you can see here the plot that tells you so y'all is just basically tells you"}, {"start": 1357.38, "end": 1366.5, "text": " Tells you how rotated the head is alongside this let's call it XY plane and you can see that when you're facing the camera"}, {"start": 1367.7, "end": 1375.38, "text": " Like up front the the percentage of smiles is way higher when compared to you when when the head is turned"}, {"start": 1375.7, "end": 1381.3000000000002, "text": " Away from the camera and the reason is fairly simple when people know that they are being photographed"}, {"start": 1381.6200000000001, "end": 1385.94, "text": " They smile and when they are unaware they just being themselves and they're not smiling because"}, {"start": 1385.94, "end": 1389.8600000000001, "text": " Let's face it. This world is not that happy right? It's very it's a very sad world"}, {"start": 1390.74, "end": 1392.18, "text": " Anyways"}, {"start": 1392.18, "end": 1396.3400000000001, "text": " Enough with very very bad jokes. 
Let me go back to where I stopped"}, {"start": 1397.14, "end": 1400.42, "text": " So we are here trying to understand this whole pipeline"}, {"start": 1401.54, "end": 1407.54, "text": " Hopefully now I managed to to kind of break down this this first component of the pipeline"}, {"start": 1408.26, "end": 1410.98, "text": " Which is based off of style again"}, {"start": 1410.98, "end": 1417.06, "text": " The second part is based off of I said some combination of nerf and those explicit grid methods"}, {"start": 1417.7, "end": 1422.18, "text": " And finally, we have the third component here and let me now try and address"}, {"start": 1422.74, "end": 1425.22, "text": " This part and then you'll understand the complete method"}, {"start": 1426.42, "end": 1428.42, "text": " So what I've done here is"}, {"start": 1428.82, "end": 1433.78, "text": " Once they do the neural rendering and they have this 128 128 32"}, {"start": 1434.58, "end": 1435.46, "text": " volume"}, {"start": 1435.46, "end": 1437.78, "text": " If they extract the first three channels"}, {"start": 1437.78, "end": 1441.46, "text": " Those will actually be representing the the actual image"}, {"start": 1441.46, "end": 1448.34, "text": " So those will contain by the way of training those will contain the down sampled version of the final output image"}, {"start": 1448.98, "end": 1456.02, "text": " And so what they do now is they up sample the image. So they'll up sample the first three channels using basically"}, {"start": 1456.8999999999999, "end": 1463.1399999999999, "text": " Bilinger so bilinear let me try and write it down bilinear transformation, which is not a learnable transformation"}, {"start": 1463.1399999999999, "end": 1464.98, "text": " That's just your your classical"}, {"start": 1464.98, "end": 1468.5, "text": " Digital image processing up sampling method nothing fancy there"}, {"start": 1469.54, "end": 1476.9, "text": " Secondly, they'll take this whole volume the 32 all of the 32 channels and they're going to pass it into this super resolution module"}, {"start": 1477.8600000000001, "end": 1484.58, "text": " Which is going to up sample the image and uh outcomes the 512 by 100 by 512"}, {"start": 1485.3, "end": 1491.14, "text": " RGB image so three channels and again, we have some conditioning here. They're using these intermediate"}, {"start": 1491.14, "end": 1494.9, "text": " Latents to condition the the super resolution module"}, {"start": 1495.6200000000001, "end": 1498.74, "text": " Not completely sure whether that thing"}, {"start": 1499.46, "end": 1506.8200000000002, "text": " Whether they did some up relations, but like it makes sense that the information used to generate the original image"}, {"start": 1507.46, "end": 1513.7800000000002, "text": " I mean, it's not the original image. It's just these features that are then neural render. 
So yeah, i'm not sure whether this"}, {"start": 1514.18, "end": 1515.46, "text": " was"}, {"start": 1515.46, "end": 1522.02, "text": " Ablated but like whether it will work as good without this part, but yeah, it works like this"}, {"start": 1522.42, "end": 1529.22, "text": " And what happens now is they just take and concatenate the these the up sampled image and this super"}, {"start": 1529.8600000000001, "end": 1534.02, "text": " Like this image that was passed through the super resolution module"}, {"start": 1535.14, "end": 1537.7, "text": " And then they pass it into this style again"}, {"start": 1538.5, "end": 1542.02, "text": " Two version of discriminator, which is your your your classic"}, {"start": 1542.02, "end": 1546.18, "text": " GAN loss you're trying to figure out whether the image is real or not real"}, {"start": 1546.82, "end": 1552.5, "text": " Because you have some data set you basically have some data set of real images that's going to help you figure out"}, {"start": 1553.06, "end": 1555.06, "text": " This fact whether it's real or not"}, {"start": 1555.94, "end": 1564.5, "text": " Additionally, we are again conditioning the the discriminator using the camera parameters so the extrinsics and the intrinsics"}, {"start": 1565.78, "end": 1568.26, "text": " Now this is called what they call this is the"}, {"start": 1568.26, "end": 1573.3, "text": " Double discriminator or something like that. We're going to see the terminology a bit later, but the point is"}, {"start": 1574.5, "end": 1579.46, "text": " They are going to by concatenating and having six channels here. What they'll have to do is the following"}, {"start": 1579.46, "end": 1584.66, "text": " So they'll take some some real image. So this is a real image and then they're going to"}, {"start": 1585.3, "end": 1587.3, "text": " down sample that image"}, {"start": 1587.3, "end": 1588.58, "text": " like this"}, {"start": 1588.58, "end": 1590.58, "text": " and now they're going to use and"}, {"start": 1591.14, "end": 1595.54, "text": " That basically means that this discriminator is going to be able to use this image"}, {"start": 1595.54, "end": 1602.1, "text": " That basically means that this discriminator is going to force this small this new this this neural rendered image"}, {"start": 1602.42, "end": 1608.26, "text": " To be the same to be realistic as these down sampled real images and it's going to force"}, {"start": 1609.3799999999999, "end": 1612.42, "text": " Like the big image to be realistic as the big image is here"}, {"start": 1613.46, "end": 1618.8999999999999, "text": " And all of these constraints help the model learn how to to be consistent"}, {"start": 1618.9, "end": 1625.3000000000002, "text": " In in various poses and one of the crucial parts is actually conditioning on these camera parameters"}, {"start": 1625.5400000000002, "end": 1629.94, "text": " I forgot to mention how they they actually have so some data sets do have this data"}, {"start": 1630.42, "end": 1632.5800000000002, "text": " So the extrinsics and intrinsics"}, {"start": 1633.38, "end": 1638.02, "text": " In other data sets, basically they make the made an assumption of how intrinsics are"}, {"start": 1639.0600000000002, "end": 1642.3400000000001, "text": " What intrinsics make sense? 
We're going to see that in the appendix a bit later"}, {"start": 1642.98, "end": 1646.42, "text": " And they use some off-the-shelf detectors to figure out the pose"}, {"start": 1646.42, "end": 1652.5800000000002, "text": " So that that means to figure out the extrinsic component and that's how basically they train this whole thing"}, {"start": 1652.74, "end": 1658.66, "text": " So yeah, so we they don't have the 3d scans, but they do have the camera parameters. Keep that in mind. Okay"}, {"start": 1660.8200000000002, "end": 1662.02, "text": " Okay now"}, {"start": 1662.02, "end": 1666.02, "text": " That should be pretty much it. You know, understand should understand how the whole pipeline work"}, {"start": 1666.02, "end": 1668.26, "text": " But let me go through some additional details"}, {"start": 1668.66, "end": 1673.78, "text": " So here at this table, they just show the thing I mentioned about the trade-off between speed and memory"}, {"start": 1673.78, "end": 1678.5, "text": " That this novel approach is both faster and more memory efficient"}, {"start": 1679.78, "end": 1686.26, "text": " One thing worth worth mentioning is that the three plane representation is capable of representing this complex scene"}, {"start": 1686.58, "end": 1689.3, "text": " I'll be without view dependent effects"}, {"start": 1690.1, "end": 1698.1, "text": " That's what I mean by that is that in nerf you can change those theta and phi angles and get different color depending on how"}, {"start": 1698.1, "end": 1704.1, "text": " You're you're viewing the scene and that kind of helps you model the glossy effects the non-laboration"}, {"start": 1704.1799999999998, "end": 1706.82, "text": " I think that's the fancy way from the computer graphics world"}, {"start": 1707.54, "end": 1708.74, "text": " um"}, {"start": 1708.74, "end": 1710.4199999999998, "text": " so basically"}, {"start": 1710.4199999999998, "end": 1712.74, "text": " Let me let me see whether yeah, there is a"}, {"start": 1713.76, "end": 1717.62, "text": " Visualization of nerf somewhere here. So here it is. You can see that"}, {"start": 1718.1799999999998, "end": 1720.1799999999998, "text": " the color here"}, {"start": 1720.34, "end": 1724.82, "text": " Depends on the position and on the direction as well"}, {"start": 1724.82, "end": 1729.3799999999999, "text": " So the direction meaning the theta and phi angles so even if you're"}, {"start": 1729.9399999999998, "end": 1734.02, "text": " So even if you're in the scene in a certain position like here"}, {"start": 1734.56, "end": 1736.6599999999999, "text": " depending on how you kind of"}, {"start": 1738.1799999999998, "end": 1741.78, "text": " Direct the ray from that point you're going to have different colors"}, {"start": 1742.74, "end": 1745.3, "text": " Because of the way that nerf is set up here"}, {"start": 1745.62, "end": 1750.74, "text": " Whereas as you can see here the density the volume density is not a function of the direction"}, {"start": 1750.74, "end": 1758.1, "text": " Which makes sense because density just tells you whether the ray will stop at that particular point in the in the scene"}, {"start": 1758.5, "end": 1760.82, "text": " And that should not depend on the direction"}, {"start": 1761.7, "end": 1765.94, "text": " So if you're inside of bulldozer you're inside and it doesn't matter the direction"}, {"start": 1766.74, "end": 1771.94, "text": " This density should be high because you're inside of an object. 
That's not unless it's transparent"}, {"start": 1772.74, "end": 1773.86, "text": " Okay"}, {"start": 1773.86, "end": 1776.82, "text": " But yeah modeling transparent objects. That's a I guess a separate league"}, {"start": 1776.82, "end": 1780.1799999999998, "text": " Let's continue so I explained that part now"}, {"start": 1781.22, "end": 1788.1799999999998, "text": " I mentioned the the consistency that the double discriminator brings to the table now. Let's let's see what they set here"}, {"start": 1788.6599999999999, "end": 1795.7, "text": " First thing they say here is that the real images are fed into the discriminator are also processed by concatenating each of them"}, {"start": 1795.86, "end": 1799.06, "text": " With an appropriately blurred copy of itself"}, {"start": 1799.46, "end": 1804.1799999999998, "text": " So what I mean by that is that once you once you have this small image"}, {"start": 1804.18, "end": 1807.22, "text": " Once you down sampled it you'll have to up sample it again"}, {"start": 1808.02, "end": 1813.0600000000002, "text": " And by doing that you're gonna lose some details and the output image is gonna be somewhat blurred"}, {"start": 1813.54, "end": 1817.78, "text": " In order to be able to concatenate it and create this six channeled"}, {"start": 1818.5, "end": 1821.54, "text": " Like input volume so so this is the real image"}, {"start": 1821.54, "end": 1825.94, "text": " It has three channels and have certain spatial dimension and this this is the up sampled"}, {"start": 1826.42, "end": 1829.0600000000002, "text": " So we first down sample and then up sample the image"}, {"start": 1829.78, "end": 1832.3400000000001, "text": " In order to mimic the process here, okay"}, {"start": 1832.34, "end": 1836.58, "text": " So what I say here is that they are just kind of blurring and"}, {"start": 1837.3, "end": 1839.3, "text": " Basically, it should not be"}, {"start": 1839.86, "end": 1842.58, "text": " So the way they are probably approaching this is"}, {"start": 1843.22, "end": 1849.86, "text": " If you do like take an image of the full size and then you don't sample it and then you up sample it again"}, {"start": 1850.4199999999998, "end": 1854.74, "text": " I'm fairly sure that if you go into the spectral domain"}, {"start": 1855.3, "end": 1860.98, "text": " You can you can just devise a kernel a certain kernel like a kernel like this"}, {"start": 1860.98, "end": 1864.42, "text": " Like a kernel like in the sense in convolutional neural networks"}, {"start": 1864.58, "end": 1871.38, "text": " You can devise a kernel that's gonna modify the specter spectrum of this of this image in such a way that we get a blurred"}, {"start": 1871.54, "end": 1876.18, "text": " We get the same effect as if we were to down sample explicitly and up sample again"}, {"start": 1876.42, "end": 1883.46, "text": " But that's just a implementation detail that would probably make this process a bit more like efficient and optimized"}, {"start": 1885.14, "end": 1890.18, "text": " Okay, so the terminology I was I was trying to to get at was the dual dual discrimination"}, {"start": 1890.18, "end": 1896.02, "text": " That's how they call it and it not only encourages the final output to match the distribution of real images"}, {"start": 1896.42, "end": 1904.5, "text": " But also offers additional effects. 
So it encourages the neural rendering to match the distribution of thousand sampled real images"}, {"start": 1904.8200000000002, "end": 1910.5, "text": " So that's remember why so neural rendering to match the distribution of down sampled real images"}, {"start": 1910.74, "end": 1916.26, "text": " So that's because as you can see here, so this is the neural rendered image. So this thing here"}, {"start": 1916.26, "end": 1923.86, "text": " So as you recall we basically took the first three channels and we get this and now once we up sample this"}, {"start": 1924.58, "end": 1931.62, "text": " We are basically making sure that this thing looks as close as this real real image that was down sampled and up sampled"}, {"start": 1931.86, "end": 1933.86, "text": " Okay, so that's why we we have this"}, {"start": 1934.8799999999999, "end": 1936.8799999999999, "text": " consistency constraint"}, {"start": 1937.14, "end": 1938.74, "text": " um, and"}, {"start": 1938.74, "end": 1943.3, "text": " It encourages the super resolved images to be consistent with the neural rendering"}, {"start": 1943.3, "end": 1951.06, "text": " And what they mean by that is that basically this image here is going to be encouraged to be looked the same as this image here"}, {"start": 1951.06, "end": 1954.5, "text": " Which means that the super resolution is going to do its job"}, {"start": 1954.5, "end": 1961.86, "text": " Basically, it's going to just up sample the image without modifying the pose or the appearance or whatnot of this of this of this output image. Okay"}, {"start": 1962.58, "end": 1968.26, "text": " So that's it. Let's now continue and let's see some results. Results are fairly stunning"}, {"start": 1968.26, "end": 1974.04, "text": " You can see the identity of this of this uh, a man looking uh frontally"}, {"start": 1974.34, "end": 1981.78, "text": " And then when we change the pose the you can still say you can still tell that this is the same person and similarly here"}, {"start": 1982.26, "end": 1987.54, "text": " So this thing with the eyes is a bit creepy. It doesn't look like this person should be watching at the camera"}, {"start": 1988.1, "end": 1992.34, "text": " Looking at a camera. It looks like the eyes should be going there or something like that. But yeah"}, {"start": 1992.98, "end": 1995.94, "text": " Looks kind of weird. We're gonna see why that is a bit later"}, {"start": 1995.94, "end": 2003.3, "text": " Um, basically there are some problems where where the underlying geometry of the eye socket is concave instead of convex"}, {"start": 2003.3, "end": 2005.3, "text": " Which is creepy. Um"}, {"start": 2005.78, "end": 2012.3400000000001, "text": " They compare with the baseline models and as you can see every one of these has some problems either the geometry is super"}, {"start": 2012.98, "end": 2017.38, "text": " Bad or it's super super like very like smooth"}, {"start": 2018.1000000000001, "end": 2023.8600000000001, "text": " Um, and here it's also you can see it's the the image does not look realistic the the tooth"}, {"start": 2023.86, "end": 2030.8999999999999, "text": " The teeth here are are smoothened out and the geometry as well. 
And here is the the this method the proposed method looks way better"}, {"start": 2030.8999999999999, "end": 2035.06, "text": " Compared to the baselines just qualitatively analyzing these images"}, {"start": 2036.58, "end": 2037.3799999999999, "text": " Um"}, {"start": 2037.3799999999999, "end": 2042.1, "text": " Quantitative analysis show the same thing comparing with these baselines looking at a bunch of different metrics"}, {"start": 2042.1, "end": 2047.2199999999998, "text": " I'm not gonna get into what these metrics are fid basically a fresh raise inception score"}, {"start": 2047.22, "end": 2056.1, "text": " Um, uh, like depth and they're just comparing geometry and the quality of the uh, generating images and you can see that the uh, the the"}, {"start": 2057.06, "end": 2064.02, "text": " This approach is way better compared to the baselines. They also compare the speed the number of frames per second and you can see that"}, {"start": 2064.58, "end": 2069.86, "text": " It's lagging a little bit behind g-roff and some of those methods, but as you saw the quality is way better"}, {"start": 2069.86, "end": 2074.66, "text": " So that's kind of decent trade-off finally. They they use this fax smile. Um,"}, {"start": 2074.66, "end": 2079.46, "text": " Like sdd, I guess it's a standard deviation. I think this just kind of shows that"}, {"start": 2079.94, "end": 2081.2999999999997, "text": " the the"}, {"start": 2081.2999999999997, "end": 2086.02, "text": " Multi-view consistency is better when you use the dual discrimination and the"}, {"start": 2086.8199999999997, "end": 2089.7, "text": " Adding the generator pose conditioning the gpc"}, {"start": 2090.2599999999998, "end": 2092.2599999999998, "text": " to the model, so that's the"}, {"start": 2092.8199999999997, "end": 2093.8599999999997, "text": " basically"}, {"start": 2093.8599999999997, "end": 2100.2599999999998, "text": " The the camera parameters that we were feeding into generator to condition it. So that was this part here"}, {"start": 2100.26, "end": 2106.42, "text": " So using this conditioning on the parameters the generator the mapping network part of the generator"}, {"start": 2106.98, "end": 2108.98, "text": " That helps a lot"}, {"start": 2109.38, "end": 2111.38, "text": " incur this this"}, {"start": 2112.0200000000004, "end": 2114.0200000000004, "text": " This consistency, okay"}, {"start": 2114.34, "end": 2116.34, "text": " Let's continue and"}, {"start": 2116.5800000000004, "end": 2120.9, "text": " See some results. So this is i'm going to skip this this is from the style again. This is very"}, {"start": 2121.5400000000004, "end": 2124.82, "text": " Already very very clear from the style again papers"}, {"start": 2124.82, "end": 2131.86, "text": " You can mix up the style from the row. So this person here dictates the style and these persons in the column"}, {"start": 2132.1800000000003, "end": 2136.6600000000003, "text": " Dictate the general geometry and appearance you can see they're kind of mixing up this this kid here"}, {"start": 2136.6600000000003, "end": 2142.7400000000002, "text": " You can see how the style of the kid the style of the kid is changing depending on the leftmost column. 
Okay"}, {"start": 2144.5, "end": 2146.5, "text": " I thought explaining how this"}, {"start": 2146.82, "end": 2152.7400000000002, "text": " Pti works in more details, but it's gonna take it's gonna make it a little bit more complicated"}, {"start": 2152.74, "end": 2155.9399999999996, "text": " Details, but it's gonna take it's gonna make this video too long"}, {"start": 2156.3399999999997, "end": 2159.22, "text": " So I can maybe cover this separately in one of the next videos"}, {"start": 2159.9399999999996, "end": 2163.4599999999996, "text": " But basically what I managed to do is given the image"}, {"start": 2164.18, "end": 2167.9399999999996, "text": " The desired target image they managed to find the latent"}, {"start": 2168.4199999999996, "end": 2175.22, "text": " That generates that that image and then they can basically modify and edit the image"}, {"start": 2175.8399999999997, "end": 2177.22, "text": " semantically"}, {"start": 2177.22, "end": 2179.9399999999996, "text": " Inside of that latent space. So that's very very cool"}, {"start": 2179.94, "end": 2187.38, "text": " So what you can notice here I mentioned the concaveness of the eyes and that's the reason why it always looks like the eyes are looking"}, {"start": 2187.78, "end": 2190.42, "text": " directly into us into yeah"}, {"start": 2191.46, "end": 2196.66, "text": " Directly into our eyes, whatever is because you can see that the eye sockets are kind of concave here"}, {"start": 2197.62, "end": 2201.06, "text": " I think they they mentioned that somewhere down here. Let me let me find it"}, {"start": 2201.3, "end": 2205.94, "text": " So furthermore ambiguities that can be explained by geometry remain unresolved"}, {"start": 2205.94, "end": 2213.8, "text": " For example by creating concave eye sockets the generator creates the illusion of eyes that follow the camera an incorrect interpretation"}, {"start": 2214.18, "end": 2218.9, "text": " Though the renderings are view consistent and reflect the underlying geometry. Okay"}, {"start": 2219.78, "end": 2223.86, "text": " I'm not gonna dig deeper into that but um, i'm just gonna"}, {"start": 2225.38, "end": 2230.26, "text": " First address this paragraph here because I think it's also important not that like related to this concrete paper"}, {"start": 2230.26, "end": 2235.62, "text": " But I think it's a in general. I'm just yeah, I have this are these paragraphs useful"}, {"start": 2235.62, "end": 2239.22, "text": " Uh asking myself whether this is useful i'm gonna read it through"}, {"start": 2240.02, "end": 2242.18, "text": " Read it out loud so you can you can judge yourself"}, {"start": 2242.58, "end": 2248.5, "text": " So the single view 3d reconstruction or style mixing applications could be misused"}, {"start": 2248.9, "end": 2256.58, "text": " So it could be misused for generating edited imagery of real people. Okay, we are all familiar with deep fakes with what the GANs are in general"}, {"start": 2257.22, "end": 2258.5, "text": " kind of"}, {"start": 2258.5, "end": 2263.94, "text": " Like a problematic in that sense such misuse of image synthesis techniques poses a societal threat"}, {"start": 2263.94, "end": 2271.14, "text": " And we do not condone using our work with the intent of spreading misinformation or tarnishing reputation. 
I mean"}, {"start": 2271.94, "end": 2279.08, "text": " That's also kind of should be obvious authors do not want their paper their work to be used maliciously"}, {"start": 2280.1, "end": 2283.4, "text": " I don't see any added information in these sentences"}, {"start": 2283.94, "end": 2290.66, "text": " And finally, we also recognize a potential lack of diversity in our faces results stemming from implicit biases of the data sets"}, {"start": 2290.66, "end": 2298.66, "text": " We process and this like this same sentence could be copy pasted into so many different papers like"}, {"start": 2299.54, "end": 2305.48, "text": " Let's take let's replace this 3d GAN paper. Let's replace it with uh, like a gpt3"}, {"start": 2306.8199999999997, "end": 2312.02, "text": " Or whatever your favorite language model is now in 2021 maybe without"}, {"start": 2312.64, "end": 2314.58, "text": " 2.0 or something. I don't know"}, {"start": 2314.58, "end": 2321.7799999999997, "text": " And basically so it's also can be misused and the author and secondly the authors condone using their work maliciously"}, {"start": 2321.86, "end": 2327.2999999999997, "text": " so that's also true and it also is true that depending on the biases in the"}, {"start": 2328.1, "end": 2330.1, "text": " they basically use the the the"}, {"start": 2330.66, "end": 2335.86, "text": " Text data set that was collected from the internet and reddit which has a lot of biases for like"}, {"start": 2336.42, "end": 2338.42, "text": " various various things like uh"}, {"start": 2338.66, "end": 2343.22, "text": " For example muslims being associated with crim like with terrorism stuff like that"}, {"start": 2343.22, "end": 2347.4599999999996, "text": " So some bad societal biases. So the point is"}, {"start": 2349.06, "end": 2353.8599999999997, "text": " Whether we should just be copy pasting this everywhere because this can be applied for every single paper"}, {"start": 2353.98, "end": 2356.74, "text": " Yeah, I don't mean anything bad by by saying all of this"}, {"start": 2356.74, "end": 2362.02, "text": " I just want to kind of raise the concern of whether this is going to be of any practical use paragraphs such as this one"}, {"start": 2362.02, "end": 2366.66, "text": " I'm not sure. I'm not smart here. I'm just I'm just uh, yeah speaking out loud here"}, {"start": 2366.66, "end": 2372.92, "text": " Um, finally, let me just find one additional thing here in the appendix"}, {"start": 2373.7, "end": 2376.12, "text": " Um, it has to do with the intrinsics"}, {"start": 2377.54, "end": 2379.54, "text": " Okay, here it is"}, {"start": 2379.54, "end": 2381.54, "text": " They say that we assume"}, {"start": 2381.62, "end": 2387.62, "text": " Fixed so the for the ffhq data set which is the data set of those high resolution human faces"}, {"start": 2388.74, "end": 2391.7, "text": " Not the cats the cats are the this is the cat's data set"}, {"start": 2391.7, "end": 2399.14, "text": " So they say the via zoom fixed camera intrinsics across the entire data set with a focal length of four point twenty six"}, {"start": 2399.2999999999997, "end": 2405.54, "text": " Times the image width equivalent to standard portrait lens. So that means that that's kind of"}, {"start": 2406.5, "end": 2407.7799999999997, "text": " assumed"}, {"start": 2407.7799999999997, "end": 2411.7799999999997, "text": " So that means it might not be correct. 
I'm not sure how the data set was collected"}, {"start": 2412.02, "end": 2415.2999999999997, "text": " So it may happen that different images were collected with different cameras"}, {"start": 2415.2999999999997, "end": 2418.4199999999996, "text": " which means this would not hold as a as a hypothesis and"}, {"start": 2418.42, "end": 2422.42, "text": " And they are using certain off-the-shelf detectors to figure out the extrinsics"}, {"start": 2422.42, "end": 2428.1800000000003, "text": " So I haven't seen not sure I may be be making a mistake here, but I haven't seen any"}, {"start": 2429.06, "end": 2435.06, "text": " Ablation on how on how robust the pipeline is to the errors in the this and this"}, {"start": 2435.06, "end": 2440.1800000000003, "text": " If this turns out not to be the case that this is a correct intrinsics for this data set"}, {"start": 2440.9, "end": 2442.9, "text": " Yeah, I just thought kind of mentioning that"}, {"start": 2444.9, "end": 2447.54, "text": " Also, one thing I'm not completely sure is how their"}, {"start": 2447.54, "end": 2453.38, "text": " Generating these geometry visualizations. I can assume that let me just go back to the pipeline here"}, {"start": 2453.86, "end": 2456.9, "text": " It's going to be easier to explain what I mean"}, {"start": 2459.7, "end": 2466.98, "text": " Okay, it's a long long paper, okay, what is probably happening is that they are using the density information"}, {"start": 2468.5, "end": 2472.9, "text": " So because so they pass array through the scene they have the density"}, {"start": 2472.9, "end": 2478.02, "text": " Information and they can kind of do a threshold and aggregate that density information"}, {"start": 2478.34, "end": 2480.5, "text": " So maybe these charts will be even better"}, {"start": 2480.5, "end": 2484.5, "text": " So they have something like this and then they can get just kind of threshold"}, {"start": 2485.2200000000003, "end": 2488.9, "text": " the the density and then use that to generate the"}, {"start": 2490.26, "end": 2496.26, "text": " Geometry, but I'm not sure whether I'm fairly sure that we have a lot of noise and glitching"}, {"start": 2496.26, "end": 2498.26, "text": " So it'd be nice to kind of"}, {"start": 2498.26, "end": 2502.98, "text": " Yeah for people who do not do this every day including me"}, {"start": 2502.98, "end": 2507.46, "text": " It'd be nice to have some hint of how stuff is done because yeah"}, {"start": 2508.5800000000004, "end": 2512.42, "text": " Understanding how geometry is actually visualized seems to be fairly important"}, {"start": 2512.42, "end": 2519.7000000000003, "text": " Anyways, if somebody knows exactly how this is done, feel free to comment down below and hopefully you like this video"}, {"start": 2520.82, "end": 2523.86, "text": " Let me know whether the lack of camera is"}, {"start": 2523.86, "end": 2529.7000000000003, "text": " Ruining the experience for you guys. So i'll make sure to in any case bring it back"}, {"start": 2530.26, "end": 2533.2200000000003, "text": " Very soon and create my youtube studio here in london"}, {"start": 2533.94, "end": 2536.6600000000003, "text": " Hopefully in like a couple of weeks in any case"}, {"start": 2537.46, "end": 2544.34, "text": " Subscribe if you if you like this video share out the the video with friends and join the discord community"}, {"start": 2544.34, "end": 2550.26, "text": " Because there is a lot of awesome cool people there which are willing to help each other out. 
So"}, {"start": 2550.26, "end": 2555.32, "text": " So consider joining the the community there and until next time. Bye. Bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=qr3BjVNuCzw
Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video I cover the "Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation" paper where the authors devised a novel representation that made the robotic manipulation much more robust and capable of generalizing to novel shape instantiations and arbitrary unseen poses. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2112.05124 ✅ Website: https://yilundu.github.io/ndf/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:00 Neural Point Descriptor Field 12:30 Introducing SE(3) equivariance 22:35 Inducing energy landscape and optimization 25:10 Generalizing the neural descriptors to a pose 28:00 Visualizations, Results and Ablation 29:55 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Kulsoom Abdullah Petar Veličković Bartłomiej Danek ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #robots #equivariance #neuralrepresentations
What's cracking guys? Finally I'm making a new video. Basically I'm still setting up in London, so I don't have my camera setup ready. And yeah, so this video is going to be a bit different compared to all of the other research paper overviews I've done so far. In any case, I'm going to cover this paper called Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation, by these people from MIT, Google Research, and the University of Toronto. But before I even start digging into the paper and explaining the method, let's see the video demo they prepared. Okay, here what we can see is that they'll make a couple of demonstrations, and in those demonstrations the mugs, and in general whatever object they're using, are upright, so that's kind of simple. And here you can see that during test time the pose can be arbitrary, and the instance doesn't have to be the same as the ones used in the demonstration. So they're of the same class, like for example a glass or whatnot, but basically the pose and the actual instance can and will be different. And you can see the awesome generalization that the method is showing. So the thing to keep in mind is that only this grabbing of the object and placing the object, the glass in this case, on this thing is the interesting part of this method. The actual intermediate moving is just done via off-the-shelf methods. So yeah, and you can see here that even with different, more complex objects, where there's a lot of variance between the objects of the same class, it's still managing to work correctly. Okay, getting back to the paper, let's see how this thing actually works. As I said, the interesting part is the generalization to grabbing the object and placing the object. Everything in between, so this part here, is handled by certain heuristics. Let me just find that part. They mention it somewhere here: we rely on off-the-shelf inverse kinematics and motion planning algorithms to execute the final predicted pick and place task. So let's focus on the actual components that enable this awesome generalization. So let's start here. The key idea is to represent an object as a function f that maps a 3D coordinate x to a spatial descriptor z, which is a function of x, of that 3D coordinate. And they additionally say here that f may further be conditioned on an object point cloud P, which, as you can see, consists of 3D points, and there are n of those points in the point cloud. And the idea is to output category-level descriptors f of x conditioned on the point cloud P. So that's the reason it's called neural point descriptor fields. "Neural" is going to be clear in a couple of seconds: basically, they're using a neural network to learn these representations. "Point" is because we are currently just inputting the point here, I guess. Yeah. And "descriptor fields" is because with every 3D point we are associating a vector, hence it's a field. And finally, "descriptor" is because we'll be describing the object that we want to grab in the scene like this. So this is a very abstract definition. We still don't know how to actually implement this, but let's slowly move on and we'll understand this better in a second. So, Mescheder et al. represent a 3D shape as an MLP, a multilayer perceptron phi that maps a 3D coordinate x to its occupancy value. So the idea here is, these are called neural implicit representations.
And the idea is that the weights of the neural network actually learn like basically the structure and the position of the object. So you basically input a point x and this function phi will tell you whether the point x lies inside of the object, so that will be quantized, that will be quantified by 1, or it lies outside of the object when we'd have 0 as the output. And now you can imagine if you were to systematically kind of do a sweep over various x's over certain cube, something like this, you'd basically get, let me just draw it here. Yeah, by the way I'm drawing using touchpad, so it's really hard. So what you'll get here is because the phi will output 1 or 0, basically you're going to, so basically this way you can render, let's call it render an object. So that means that this like neural network phi learned how to represent an object implicitly. And we can make that explicit by, as I said, doing some sweep over the input points x. Finally, so this as you can see here, if we were to do it like this, it would be similar to NURF and we'd be overfitting to one specific shape. And we want to be able to generalize to various instances and various classes as well. So they say here we're interested in learning a low dimensional latent space of 3D shapes. And that's basically what they mean by that is that we want to encode, we want to learn how to take an object, the point cloud of an object, we want to pass it to some point cloud encoder and we're going to see they're using point net to do this and we get a latent representation here. So the point of this representation is to make, to just like abstract away the unnecessary details. Like we want to make sure that we understand that this is a mug or whatnot. Like even if the mug was way like higher, taller, like here and maybe the handle was like here instead of on top of the mug, we want to be able to kind of represent these two as the same latent feature vector here. So that would be cool. Okay, so let me go back here. So as I said, they're interested in learning those low dimensional latent space like basically features. And these latent codes are obtained as an output of point net. I already mentioned that based point cloud encoder epsilon that takes as input a point cloud P leading to a conditional occupancy function here. So as you can see here, now we have, so this formula here directly corresponds to this block here and we input like a point, we input like a feature vector representing like an abstract representation of our input shape. And you can see this is still an occupancy function. So we get zero if we are inside of this specific shape and we get one, so we get zero if we are outside and one if we are inside of that particular shape, given the point X. And finally they say here the full model can be trained end to end on a data set of partial point clouds in corresponding occupancy voxel grids of the object's full 3D geometry, thus learning to predict the occupancy of a complete 3D object from a partial point cloud. So a cool thing about this is that you can create a synthetic data set and basically because it's synthetic you can kind of corrupt the point clouds, make partial point clouds, but you still have the ground truth of voxel grids like non-corrupted. And that means that you'll be able to train this whole thing using partial point clouds, but the occupancy, so this output here, so zero or one, is still going to be correct. 
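Just to make that part concrete, here is a minimal PyTorch-style sketch of what such a conditional occupancy network could look like: a toy PointNet-style encoder pools the partial point cloud into a latent code, and an MLP phi takes the query point x plus that code and predicts an occupancy logit, trained with binary cross-entropy against the synthetic voxel-grid labels. The class names, layer sizes, and the sphere used as fake ground truth are all my own placeholders for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class TinyPointNetEncoder(nn.Module):
    """Pools a point cloud (N, 3) into one latent code (toy stand-in for PointNet)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.per_point = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, points):           # points: (N, 3)
        feats = self.per_point(points)   # each point is processed independently
        return feats.max(dim=0).values   # global max-pool -> (latent_dim,)

class OccupancyMLP(nn.Module):
    """phi(x, z): occupancy logit for a query point x given the shape code z."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, z):             # x: (M, 3), z: (latent_dim,)
        z = z.expand(x.shape[0], -1)     # condition every query on the same shape code
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)  # (M,) logits

# One training step, sketched: partial cloud in, 0/1 occupancy labels out.
enc, phi = TinyPointNetEncoder(), OccupancyMLP()
partial_cloud = torch.randn(500, 3)            # placeholder for a depth-camera point cloud
queries = torch.rand(1024, 3) * 2 - 1          # query points x
labels = (queries.norm(dim=-1) < 0.5).float()  # placeholder ground truth (a sphere, just for the example)
loss = nn.functional.binary_cross_entropy_with_logits(phi(queries, enc(partial_cloud)), labels)
loss.backward()
```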
So later on when we have real robot, all of the mugs as you can see above here, so basically you can imagine that the depth cameras which are using in the scene here, they've positioned them somewhere outside of this picture, will have only a partial, will be able to retrieve a partial point cloud. And I mean that's why it's a smart idea to train the NDF, this neural point descriptor in the same manner. Okay, so why is this important? Why are we training this neural network to predict the occupancy function? So it turns out there are some nice properties that once we train the model the way I just described, so we are inputting partial point clouds and we're trying to basically output zero or one depending on whether this point x is inside of the shape or not. And we do have the ground truth data because remember we're using synthetics, we can easily use synthetic data to train this model. So now let's see what the interesting part and properties of this neural point descriptor field network FR. So, this is here, to enable category level object manipulation, a spatial descriptor for a coordinate x given a point cloud P should encode information about the spatial relationship of x to the salient features of the object. Our key insight is that the category level 3D reconstruction objective trains this network PHY, so that's the occupancy network, to be a hierarchical course defined feature extractor that encodes exactly this information, so the spatial relationship of x to the salient features of the object. Basically PHY is a classifier whose decision boundaries is the surface of the object, so that's how it was trained. But like in a nutshell what they just said here is that they can use the representations that were learned during the training of this occupancy function PHY and use those as a way to capture these relationships of the point x to salient, geometrically salient parts of this shape. Ideally what would happen is that these descriptors would be the same. So if I had a point x here, let me change the color here to something else, like if I had a point here next to this handle of this small mug and if I had a point here next to this handle of this tall mug, ideally the descriptors would be the same and that's what's going to enable the ultimate generalization that the authors showed. So as I said their actual representation is going to be a concatenation of these representations here and they formulaically capture that exactly here. So we thus propose to parametrize our neural point descriptor field F as the function that maps every 3D coordinate x to the vector of concatenated activations of PHY. So basically, let me change the color, this is just a symbol for concatenation, we iterate through all of the layers from 1 through L, we grab the intermediate representations of the occupancy function of PHY and the notation is a bit sloppy here because we are not inputting point and the latent vector of the shape in every single of these layers, we are just inputting these in the first input layer. It's a bit sloppy but you get the point. Okay, so let me now explain, let me just show you one visualization, it's going to be a bit easier. So as I said, ideally what would happen is that even if we had different mugs as you can see here and we have these green points, this is the x, ideally we would have the same descriptor. So that's the whole point, that's the idea here. And now there is a problem with this current formulation and the thing we just have done. 
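Before getting into that problem, here's a rough sketch of the "concatenate the intermediate activations" idea I just described, building on the toy occupancy MLP above. DescriptorMLP, its layer count and widths are made-up stand-ins, not the official NDF code; the only point is that the descriptor is the stacked hidden activations, while the occupancy head is only needed during pre-training.

```python
import torch
import torch.nn as nn

class DescriptorMLP(nn.Module):
    """Occupancy-style MLP that also exposes per-layer activations for each query point."""
    def __init__(self, latent_dim=64, hidden=128, n_layers=3):
        super().__init__()
        dims = [3 + latent_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers)])
        self.head = nn.Linear(hidden, 1)   # occupancy logit, used only for the reconstruction objective

    def forward(self, x, z):               # x: (M, 3), z: (latent_dim,)
        h = torch.cat([x, z.expand(x.shape[0], -1)], dim=-1)
        activations = []
        for layer in self.layers:
            h = torch.relu(layer(h))
            activations.append(h)           # keep every intermediate representation
        occupancy_logit = self.head(h).squeeze(-1)
        descriptor = torch.cat(activations, dim=-1)   # (M, n_layers * hidden): the point descriptor
        return occupancy_logit, descriptor

phi = DescriptorMLP()
x, z = torch.rand(8, 3), torch.randn(64)
_, desc = phi(x, z)   # one descriptor vector per query point
```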
Basically, let me just kind of give you some understanding of this encoder here. It's a point net from a paper that I think is from 2015 or something. It's a very interesting architecture but the main idea of this architecture is the following. The main idea is the following, you want to have this permutation invariant property. So what the point net did is you input this array of points, so n times 3 because we have 3D points, we have n of those. What we do is we use these shared MLPs and some transforms and we basically independently process each of these points. So this point here is going to be independently processed from all of the other ones, at least in this layer. Then we have the featuring transforms but the most important part is basically here and because we do a global pooling exactly here, that means that this final representation will be the same no matter how we permute this input here. So that means that point net is permutation invariant. But now imagine what happens if I were to translate. So imagine we had a single point, imagine we had this shape like this mug here. And imagine what would happen if I were to translate that mug right here. What would happen is that all of the points in this point cloud would be basically modified such that each point would be, let's say, we had to add the translation vector t to those. And now once you pass those here, you can imagine that the representation will change. Okay, and the same thing would happen if I were to rotate the mug. We have a different representation. And that's all fine when we have a standalone point net. But what I'm getting at here is that we want to make sure that if I were to rotate this mug here, let's say we have a point x here. And if I were to rotate this mug, however we want, for the sake of simplicity, I'm just going to reduce this problem to a 2D problem. So let's say we basically have a 2D square and we have a point x here. Now if I were to rotate this square, so let's put some coordinate system here to make it a bit more clear. Basically, if, now let's draw the coordinate system again. And if we were to rotate the square for like 45 degrees or whatnot and rotate the point x the same way, so basically you'd want to ideally have the same descriptor for these two, right? Because their relative relationship has not changed. And with the current implementation, remember if we were to change the 3D points, so if we were to change x and if we were to change the rotation or if we were to translate this object, as I said, this latent vector here is going to change, x is going to change, so that means that the descriptor is definitely going to change. And in order to enable the generalization that they showed, it's very important to make this SE3 like an equivalent or just a fancy way of saying basically equivalent to translation and rotation. And they also, I think they're kind of being sloppy with the equivalent versus invariance. I've noticed that a couple of times, but yeah, hopefully you'll understand the gist of the paper. So they say here that in other words, we require f to be invariant to joint transformation of x and p. So again, f is, remember, the spatial descriptor, implying the descriptor field should be equivalent to SE3 transformation of p. 
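Here's a tiny numerical check of those two properties, with a throwaway max-pooled per-point MLP standing in for PointNet: permuting the input points leaves the code untouched, but translating the cloud changes it, which is exactly the problem the next part of the paper fixes.

```python
import torch
import torch.nn as nn

per_point = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
encode = lambda pts: per_point(pts).max(dim=0).values   # shared per-point MLP + global max-pool

cloud = torch.randn(100, 3)
perm = torch.randperm(100)
print(torch.allclose(encode(cloud), encode(cloud[perm])))   # True: point order doesn't matter

shift = torch.tensor([0.5, 0.0, 0.0])
print(torch.allclose(encode(cloud), encode(cloud + shift)))  # False: translating the cloud changes the code
```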
So here they just formulaically represent that if we were to transform both x and the point cloud P with some rotation and some translation, we want to make sure that the output of the function f (remember, that's the descriptor; this whole thing is called f, and only the small portion here is called phi, that's the occupancy network), but we care about this descriptor, the neural point descriptor field, okay? So as I said, we want to make sure that those outputs are the same. They basically add two, let's call them tricks, to introduce this invariance or equivariance. The first thing is fairly simple. If you want to be invariant to translation, you can just find the mean point of the point cloud, so that's the mu here, and you subtract mu from both the input point x and from the point cloud P, and by doing this you're guaranteed to always have this invariance. To understand why that is, and hopefully it's very, very simple, let me try to make it a bit clearer. If we have a coordinate system here, and I'm again going to be in a 2D space because it's harder to draw 3D objects using a touchpad: let's say we have a square here and a point here, and let's say we have a translated version of this same square, so I'm just going to use a different color here, let's say we have it here, okay? So if we were to find the mean point, which would be here for this object, and it would be basically here for this square, for the red square, and if we were to subtract those in the corresponding two cases we have, we'd end up with the same result both times. So we'd basically end up with this configuration if we were to subtract this vector, and we would end up with the same configuration if we were to subtract the other vector. And because of that, the inputs are the same, and since we do that even before we pass the data through the function f, we're trivially going to have this property satisfied. As for the rotation, it's not that simple. You can't just do the same approach and guarantee that you have invariance, because imagine if I were to rotate this thing here; now if we just try to subtract the vector, we'd end up with a square that looks something like this, which obviously will be a different input, and then f will just produce a different output, so we don't have the invariance property. So to achieve the rotation equivariance, they say here: we rely on recently proposed vector neurons, which propose a network architecture that equips an occupancy network, i.e. the composition of epsilon, which is the PointNet encoder, and phi, which is basically the small submodule in the occupancy network, with full SO(3) equivariance. So SO(3) just means rotation (invariance/equivariance), whereas SE(3) means both translation and rotation, just keep that in mind. And formulaically that means we want to achieve this: we want to achieve the same output even if we were to transform the input point x and the point cloud P using some arbitrary rotation matrix R. How they achieve that is basically described in this paper called Vector Neurons. I'm going to just give a super high-level explanation here. Basically, what they've done is, instead of using scalar neurons (so, your classical linear layer in an MLP), they instead have as output not a scalar but a vector, so that's a generalization of these scalar neurons.
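Before going further with vector neurons, the translation part of the trick is easy to sketch in code: subtract the point-cloud centroid from both the query point and the cloud before calling the descriptor function, and whatever f is, the centered inputs are identical for the original and the translated scene, so the output cannot change. The helper name and the stand-in f below are hypothetical, just to demonstrate the idea.

```python
import torch

def centered_descriptor(f, x, cloud):
    """Make any descriptor function f(x, cloud) invariant to joint translations of the scene."""
    mu = cloud.mean(dim=0)                 # centroid of the point cloud
    return f(x - mu, cloud - mu)

# Sanity check with an arbitrary, non-invariant stand-in for f:
f = lambda x, cloud: torch.cat([x, cloud.mean(dim=0), cloud.std(dim=0)])
x, cloud = torch.randn(3), torch.randn(200, 3)
t = torch.tensor([1.0, -2.0, 0.5])          # some translation applied to the whole scene
print(torch.allclose(centered_descriptor(f, x, cloud),
                     centered_descriptor(f, x + t, cloud + t), atol=1e-5))  # True
```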
And similarly, the implementation, this is, if you can guess, this is an implementation of a ReLU non-linearity in these vector neural networks, and basically you may be able to guess why this is, but in a scalar case you have something like this, so you have some input X scalar, you have output Y, and basically ReLU will kind of saturate this to zero, so this is zero, and then we're going to have for positive X, we're going to have a simple linear function, so Y equals X. So here we have a similar, this is just a generalization of ReLU to 3D space. Basically if we have vector Q, nothing will happen, so that's the same as if we were to have input, so let me just for clarity name this Q as well. So in this case Q is a scalar, so in the case where Q is in the positive part of the domain here, Q remains unchanged. Basically Q, because this is Y equals X, we're going to have the output will also be Q, and that's what we get here. But in the other case where the Q lies in the, let's call it negative half space, which would be equivalent to this part here, we basically project the vector and we just keep the component that's alongside this plane, and we drop the component that's alongside negative K direction. So yeah, it would take a whole video to explain why this works and why this introduces SO3 equivariance, but let's just take it as a theory and it just works. So now we have all of the components necessary to understand how that robot is gripping those mugs, even if the pose and even if the shape, the actual instantiation of that mug is changing. The idea lies in creating, inducing these energy landscapes, and we are using descriptors in the following manner. Okay, let me use their visualization because it's going to be easier. So imagine we have basically, this is P hat is, you can treat this as a demonstration, a mug that was used during demonstration, and P is the one used during the test time. So imagine we have this point X here in green and we have the shape here, and basically we can pass those and we get out, as the output we get the description, the descriptor of their mutual relationship. Let's put it that way. On the other hand, we have novel point cloud P here. It's, as you can see, a bit differently shaped. It's not the same as the one above. And now the idea is to find corresponding, what will be the corresponding point to this X for this current configuration here? And the answer you would give intuitively is it should be somewhere here, right? And the reason it's going to work is because, as you can remember, how we train the F function, or neural descriptor, is such that this is what the energy landscape is going to look like. So basically for points which are close to the handle here, because remember the shape is going to be abstracted away because of the point net and the way we trained the whole system, and that's going to cause the energy landscape to look like this, which means once we use simple optimization, so we'll be tweaking, so what will happen is we'll be using Adam, so they've been using, in practice they've been using Adam, so we'll be tweaking the parameters of X, so basically three scalars, we're just tweaking those such that point X will be moving towards this low energy part of the landscape, and once that happens, basically this, so yeah, this is this loss, we are minimizing that, and as we are minimizing that, we're basically finding this point that we just found intuitively a couple moments ago. 
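Here is roughly what that optimization could look like in code: treat the test-time query point as a learnable parameter and run Adam on the distance between its descriptor and the descriptor recorded in the demonstration, which slides the point toward the low-energy part of the landscape. The function name, the L1 energy, the starting guess, and the Adam settings are all my own illustrative choices; descriptor_field is a placeholder for the trained F(x | P).

```python
import torch

def find_corresponding_point(descriptor_field, test_cloud, z_demo, steps=300, lr=1e-2):
    """Gradient-descend a 3D point so its descriptor matches the demonstration descriptor."""
    # Start the guess at the object's centroid and make it a learnable leaf tensor.
    x = test_cloud.mean(dim=0, keepdim=True).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy = (descriptor_field(x, test_cloud) - z_demo).abs().sum()  # descriptor mismatch as energy
        energy.backward()
        opt.step()
    return x.detach()

# Example wiring with the toy modules sketched earlier (purely illustrative):
# phi, enc = DescriptorMLP(), TinyPointNetEncoder()
# field = lambda x, cloud: phi(x, enc(cloud))[1]
# x_star = find_corresponding_point(field, test_cloud, z_demo)
```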
And because of that, the robot will basically know how to grip this handle. There will be like a simple high level explanation of how the thing works. So now the problem is with this thing is that the grip of a robot actually is not a single point, because if you have a single point, you can have multiple frames, coordinate frames attached, you can have infinite coordinate frames attached to that point, so they just kind of generalize this to the case where we want to have the pose information as well, and how they represent the pose is very similar, you just basically concatenate, so you take all of the points here, so this is again concatenation symbol, you take all of the points from this query point cloud, so those are these points with various colors as you can see here, and you just basically concatenate their descriptions, and that's how you get the final description for the pose. Once you have that, by the way, in order to get those points, they mentioned here that you're using a robust heuristic to sample points uniformly at random from within the bounding box of the rigidbody S, whereas you can think of S as like a gripper in the case of picking up an object, and in the case when you want to place it, it's going to be some other object, and in the case when you want to place the actual shape onto some table or whatnot, onto a shelf, you won't have a gripper, you have some different salient object, and basically what they do is, as you can see here, finally they basically do K demonstrations, so 1 through K, and they collect these descriptors, so for where the gripper was during picking and where the placement surface was when the object was placed in the demonstration, and then they just average out all of these descriptors, and they end up with a tuple, basically this Z pick and Z rel with these bars on top of them, that's a kind of silly notation, but yeah. And as I said, once you have the descriptors, you can just minimize this energy landscape to get the position for the novel object in a novel instance of the same class with a different orientation or whatnot. And again, they mention here, we rely on off-the-shelf inverse kinematics and motion planning algorithms to execute the final predicted pick and place task, so that means that the whole trajectory between the two endpoints is not something they were trying to solve in this paper. Yeah, here's just some visualizations of how you start up with some random points, and then because the demonstration was such, and then that means we had certain descriptor as an output of this demonstration, now we just minimize the loss, so we move towards the lower parts of the energy landscape, and we end up with the same, as you can see, analogous position for this novel shape, so basically this test shape is potentially different than the demonstration one. Results are, surprise, surprise, way better compared to some of the baselines, especially when you're using upright pose, the baseline is kind of on-pair, let's call it on-pair, but its NDF is still better compared to DON, but like when you start using arbitrary poses, that's when this thing starts shining, and it's way better compared, as you can see the gap increases significantly here and here. 
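And a rough sketch of that pose-level construction: sample a fixed set of query points around the gripper, transform them by a candidate pose, concatenate their point descriptors into one big pose descriptor, and average it over the K demonstrations; at test time the same quantity is minimized over the candidate pose. The sampling heuristic, the names, and the commented-out averaging are simplifications I'm assuming for illustration, not the paper's exact procedure.

```python
import torch

def sample_query_points(bbox_min, bbox_max, n=100):
    """Uniformly sample query points inside the gripper's bounding box (simple heuristic)."""
    return bbox_min + torch.rand(n, 3) * (bbox_max - bbox_min)

def pose_descriptor(descriptor_field, query_points, cloud, R, t):
    """Descriptor of a pose = concatenated descriptors of the transformed query points."""
    transformed = query_points @ R.T + t              # apply the candidate pose to the query points
    return descriptor_field(transformed, cloud).reshape(-1)

# Averaging over K demonstrations, each with its own recorded gripper pose and point cloud:
# z_pick = torch.stack([pose_descriptor(field, q, demo_cloud[k], R_demo[k], t_demo[k])
#                       for k in range(K)]).mean(dim=0)
```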
Also, you can notice that there is some drop in the performance here, and I think that they mentioned that the main reason was that the point clouds kind of changed, because during the demonstrations the robots only saw the mug or whatever object they were using from the bird view perspective, and when they use arbitrary pose, obviously there are some novel occlusions and these occlusions that happen, and so that's why there is some degradation there. That's pretty much it. Here they just show some ablations, basically it's better to use, like should we use final layer or the first layer of the occupancy network or all layers, and we saw that they ended up using all of these layers as a representation, and that turned out to be the best option, and yeah, that's pretty much it. There is a couple of things they also mentioned here. Yeah, they also showed that as the number of demonstrations increases, you can see that the performance is increasing, so it's important that they use around 10 demonstrations so that this can work, so this is a few-shot learning, let's say, and as a point of further potential research, they say that further NDFs can only define transferable energy landscapes over poses and points. Future work may explore integrating such energy functions with trajectory optimization to enable NDFs to transfer to full trajectory. So that's the part I said that the actual movement between the picking up and placing is done using these off-the-shelf methods. So I love the idea. There are things that, in my opinion, should be improved upon, obviously, not using the heuristics, not using this off-the-shelf algorithms to plan the trajectory, but maybe inducing an energy loss scape across this whole thing. Also, I'm always trying to think when I see a method like this, whether our brain is doing something similar. It's not that we have to copycat our brain, but we need to find some inspiration there, and it's kind of the only current implementation of AGI that we have present in the world, so I guess it's a good starting point. But just kind of doing gradient descent on the energy landscape doesn't seem like the thing that our brain is doing. But when I saw the demos they showed, it's quite impressive, and basically, yeah, kudos to the authors. Some parts of the paper could have been written a little bit more clear. Yeah, at times it's a bit hard to understand, and the 10-pager format for the papers is not helping that much because people have to squeeze in stuff, and then you lose the structuring of the documents. It's hard to read, and yeah. That said, hope you like this video. As I said, it's a bit different because I still don't have my setup here in London, but I'll be continuing with the Jack series. Next up, you can expect my Flex video, Haiku video, and then I'll start doing some videos on ML Ops, and yeah, stay tuned for that. Until next time, bye-bye.
[{"start": 0.0, "end": 10.5, "text": " What's cracking guys? Finally I'm making a new video. Basically I'm still setting up in London, so I don't have my camera set up ready."}, {"start": 10.5, "end": 17.0, "text": " And yeah, so this video is going to be a bit different compared to all of the other research paper overviews I've done so far."}, {"start": 17.0, "end": 37.0, "text": " And in any case, I'm going to cover this paper called Neural Descriptor Fields, SE3 Equivariant Object Representations for Manipulation by these people from MIT, Google Research, and University of Toronto."}, {"start": 37.0, "end": 44.0, "text": " But before I even start digging into the paper and explaining the method, let's see the video demo they prepared."}, {"start": 44.0, "end": 59.0, "text": " Okay, here what we can see is that they'll make a couple of demonstrations, and based on those demonstrations, where the monks and in general whatever object they're using are upright, so that's kind of simple."}, {"start": 59.0, "end": 68.0, "text": " And here you can see during the test time, the pose can be arbitrary as well as the instance doesn't have to be the same as the ones used in the demonstration."}, {"start": 68.0, "end": 79.0, "text": " So they're of the same class, like for example, like a glass or whatnot, but basically the pose and the actual instance can and will be different."}, {"start": 79.0, "end": 84.0, "text": " And you can see the awesome generalization that the method is showing."}, {"start": 84.0, "end": 101.0, "text": " So things to keep in mind is that only this grabbing and placing the object, the glass in this case, on this thing is what the interesting part of this method is."}, {"start": 101.0, "end": 108.0, "text": " The actual intermediate moving is just done via off-the-shelf methods."}, {"start": 108.0, "end": 119.0, "text": " So yeah, and you can see here that even with different more complex objects, you can see a lot of variance between the objects of the same class, and it's still managing to work correctly."}, {"start": 119.0, "end": 124.0, "text": " Okay, getting back to the paper, let's see how this thing actually works."}, {"start": 124.0, "end": 132.0, "text": " As I said, the interesting part, generalization to like grabbing the object and placing the object."}, {"start": 132.0, "end": 140.0, "text": " Everything in between, so this part here is handled by certain heuristics. Let me just find that part. They mentioned it somewhere here."}, {"start": 140.0, "end": 150.0, "text": " We rely on off-the-shelf inverse kinematics and motion planning algorithms to execute the final predicted pick and place task."}, {"start": 150.0, "end": 155.0, "text": " So let's focus on the actual components that enable this awesome generalization."}, {"start": 155.0, "end": 169.0, "text": " So let's start here. So the key idea is to represent an object as a function f that maps a 3D coordinate x to a spatial descriptor z, which is a function of x of that 3D coordinate."}, {"start": 169.0, "end": 183.0, "text": " So we want to somehow, and they additionally say here, that f may further be conditioned on an object point cloud P, which has, as you can see, it's a 3D points, and there are n number of those points in a point cloud."}, {"start": 183.0, "end": 190.0, "text": " And the idea is to output category level descriptors f of x conditioned on the point cloud P."}, {"start": 190.0, "end": 198.0, "text": " So the reason it's called neural point descriptor fields. 
So neural is going to be clear in a couple of seconds."}, {"start": 198.0, "end": 207.0, "text": " Basically, they're using a neural network to learn these representations. Point is because we are currently just inputting the point here, I guess."}, {"start": 207.0, "end": 215.0, "text": " Yeah. And descriptor fields because we are associating with every 3D point, we are associating a vector, hence it's a field."}, {"start": 215.0, "end": 224.0, "text": " And finally, descriptor is because we'll be describing the object that we want to grab in the scene like this."}, {"start": 224.0, "end": 233.0, "text": " So we are, so this is very abstract definition. We still don't know how to actually implement this, but let's slowly move on and we'll understand this better in a second."}, {"start": 233.0, "end": 246.0, "text": " So blah blah mesh editor at all represent a 3D shape as an MLP. So it's a multilayer perception phi that maps a 3D coordinate x to its occupancy value."}, {"start": 246.0, "end": 251.0, "text": " So what the idea here is, so these are called neural implicit representations."}, {"start": 251.0, "end": 260.0, "text": " And the idea is that the weights of the neural network actually learn like basically the structure and the position of the object."}, {"start": 260.0, "end": 270.0, "text": " So you basically input a point x and this function phi will tell you whether the point x lies inside of the object,"}, {"start": 270.0, "end": 280.0, "text": " so that will be quantized, that will be quantified by 1, or it lies outside of the object when we'd have 0 as the output."}, {"start": 280.0, "end": 291.0, "text": " And now you can imagine if you were to systematically kind of do a sweep over various x's over certain cube, something like this,"}, {"start": 291.0, "end": 300.0, "text": " you'd basically get, let me just draw it here. Yeah, by the way I'm drawing using touchpad, so it's really hard."}, {"start": 300.0, "end": 315.0, "text": " So what you'll get here is because the phi will output 1 or 0, basically you're going to, so basically this way you can render, let's call it render an object."}, {"start": 315.0, "end": 330.0, "text": " So that means that this like neural network phi learned how to represent an object implicitly. 
And we can make that explicit by, as I said, doing some sweep over the input points x."}, {"start": 330.0, "end": 341.0, "text": " Finally, so this as you can see here, if we were to do it like this, it would be similar to NURF and we'd be overfitting to one specific shape."}, {"start": 341.0, "end": 348.0, "text": " And we want to be able to generalize to various instances and various classes as well."}, {"start": 348.0, "end": 353.0, "text": " So they say here we're interested in learning a low dimensional latent space of 3D shapes."}, {"start": 353.0, "end": 362.0, "text": " And that's basically what they mean by that is that we want to encode, we want to learn how to take an object, the point cloud of an object,"}, {"start": 362.0, "end": 371.0, "text": " we want to pass it to some point cloud encoder and we're going to see they're using point net to do this and we get a latent representation here."}, {"start": 371.0, "end": 377.0, "text": " So the point of this representation is to make, to just like abstract away the unnecessary details."}, {"start": 377.0, "end": 381.0, "text": " Like we want to make sure that we understand that this is a mug or whatnot."}, {"start": 381.0, "end": 399.0, "text": " Like even if the mug was way like higher, taller, like here and maybe the handle was like here instead of on top of the mug, we want to be able to kind of represent these two as the same latent feature vector here."}, {"start": 399.0, "end": 403.0, "text": " So that would be cool. Okay, so let me go back here."}, {"start": 403.0, "end": 411.0, "text": " So as I said, they're interested in learning those low dimensional latent space like basically features."}, {"start": 411.0, "end": 414.0, "text": " And these latent codes are obtained as an output of point net."}, {"start": 414.0, "end": 425.0, "text": " I already mentioned that based point cloud encoder epsilon that takes as input a point cloud P leading to a conditional occupancy function here."}, {"start": 425.0, "end": 443.0, "text": " So as you can see here, now we have, so this formula here directly corresponds to this block here and we input like a point, we input like a feature vector representing like an abstract representation of our input shape."}, {"start": 443.0, "end": 459.0, "text": " And you can see this is still an occupancy function. 
So we get zero if we are inside of this specific shape and we get one, so we get zero if we are outside and one if we are inside of that particular shape, given the point X."}, {"start": 459.0, "end": 474.0, "text": " And finally they say here the full model can be trained end to end on a data set of partial point clouds in corresponding occupancy voxel grids of the object's full 3D geometry, thus learning to predict the occupancy of a complete 3D object from a partial point cloud."}, {"start": 474.0, "end": 496.0, "text": " So a cool thing about this is that you can create a synthetic data set and basically because it's synthetic you can kind of corrupt the point clouds, make partial point clouds, but you still have the ground truth of voxel grids like non-corrupted."}, {"start": 496.0, "end": 506.0, "text": " And that means that you'll be able to train this whole thing using partial point clouds, but the occupancy, so this output here, so zero or one, is still going to be correct."}, {"start": 506.0, "end": 526.0, "text": " So later on when we have real robot, all of the mugs as you can see above here, so basically you can imagine that the depth cameras which are using in the scene here, they've positioned them somewhere outside of this picture, will have only a partial, will be able to retrieve a partial point cloud."}, {"start": 526.0, "end": 537.0, "text": " And I mean that's why it's a smart idea to train the NDF, this neural point descriptor in the same manner."}, {"start": 537.0, "end": 560.0, "text": " Okay, so why is this important? Why are we training this neural network to predict the occupancy function? So it turns out there are some nice properties that once we train the model the way I just described, so we are inputting partial point clouds and we're trying to basically output zero or one depending on whether this point x is inside of the shape or not."}, {"start": 560.0, "end": 567.0, "text": " And we do have the ground truth data because remember we're using synthetics, we can easily use synthetic data to train this model."}, {"start": 567.0, "end": 576.0, "text": " So now let's see what the interesting part and properties of this neural point descriptor field network FR."}, {"start": 576.0, "end": 592.0, "text": " So, this is here, to enable category level object manipulation, a spatial descriptor for a coordinate x given a point cloud P should encode information about the spatial relationship of x to the salient features of the object."}, {"start": 592.0, "end": 613.0, "text": " Our key insight is that the category level 3D reconstruction objective trains this network PHY, so that's the occupancy network, to be a hierarchical course defined feature extractor that encodes exactly this information, so the spatial relationship of x to the salient features of the object."}, {"start": 613.0, "end": 631.0, "text": " Basically PHY is a classifier whose decision boundaries is the surface of the object, so that's how it was trained. But like in a nutshell what they just said here is that they can use the representations that were learned during the training of this occupancy function PHY"}, {"start": 631.0, "end": 645.0, "text": " and use those as a way to capture these relationships of the point x to salient, geometrically salient parts of this shape."}, {"start": 645.0, "end": 658.0, "text": " Ideally what would happen is that these descriptors would be the same. 
So if I had a point x here, let me change the color here to something else, like if I had a point here next to this handle of this small mug"}, {"start": 658.0, "end": 672.0, "text": " and if I had a point here next to this handle of this tall mug, ideally the descriptors would be the same and that's what's going to enable the ultimate generalization that the authors showed."}, {"start": 672.0, "end": 681.0, "text": " So as I said their actual representation is going to be a concatenation of these representations here and they formulaically capture that exactly here."}, {"start": 681.0, "end": 693.0, "text": " So we thus propose to parametrize our neural point descriptor field F as the function that maps every 3D coordinate x to the vector of concatenated activations of PHY."}, {"start": 693.0, "end": 708.0, "text": " So basically, let me change the color, this is just a symbol for concatenation, we iterate through all of the layers from 1 through L, we grab the intermediate representations of the occupancy function of PHY"}, {"start": 708.0, "end": 721.0, "text": " and the notation is a bit sloppy here because we are not inputting point and the latent vector of the shape in every single of these layers, we are just inputting these in the first input layer."}, {"start": 721.0, "end": 725.0, "text": " It's a bit sloppy but you get the point."}, {"start": 725.0, "end": 734.0, "text": " Okay, so let me now explain, let me just show you one visualization, it's going to be a bit easier."}, {"start": 734.0, "end": 746.0, "text": " So as I said, ideally what would happen is that even if we had different mugs as you can see here and we have these green points, this is the x, ideally we would have the same descriptor."}, {"start": 746.0, "end": 749.0, "text": " So that's the whole point, that's the idea here."}, {"start": 749.0, "end": 756.0, "text": " And now there is a problem with this current formulation and the thing we just have done."}, {"start": 756.0, "end": 762.0, "text": " Basically, let me just kind of give you some understanding of this encoder here."}, {"start": 762.0, "end": 767.0, "text": " It's a point net from a paper that I think is from 2015 or something."}, {"start": 767.0, "end": 774.0, "text": " It's a very interesting architecture but the main idea of this architecture is the following."}, {"start": 774.0, "end": 782.0, "text": " The main idea is the following, you want to have this permutation invariant property."}, {"start": 782.0, "end": 790.0, "text": " So what the point net did is you input this array of points, so n times 3 because we have 3D points, we have n of those."}, {"start": 790.0, "end": 796.0, "text": " What we do is we use these shared MLPs and some transforms and we basically independently process each of these points."}, {"start": 796.0, "end": 803.0, "text": " So this point here is going to be independently processed from all of the other ones, at least in this layer."}, {"start": 803.0, "end": 810.0, "text": " Then we have the featuring transforms but the most important part is basically here and because we do a global pooling exactly here,"}, {"start": 810.0, "end": 817.0, "text": " that means that this final representation will be the same no matter how we permute this input here."}, {"start": 817.0, "end": 821.0, "text": " So that means that point net is permutation invariant."}, {"start": 821.0, "end": 825.0, "text": " But now imagine what happens if I were to translate."}, {"start": 825.0, "end": 830.0, "text": " So imagine we had a single point, 
imagine we had this shape like this mug here."}, {"start": 830.0, "end": 835.0, "text": " And imagine what would happen if I were to translate that mug right here."}, {"start": 835.0, "end": 842.0, "text": " What would happen is that all of the points in this point cloud would be basically modified such that each point would be,"}, {"start": 842.0, "end": 847.0, "text": " let's say, we had to add the translation vector t to those."}, {"start": 847.0, "end": 851.0, "text": " And now once you pass those here, you can imagine that the representation will change."}, {"start": 851.0, "end": 855.0, "text": " Okay, and the same thing would happen if I were to rotate the mug."}, {"start": 855.0, "end": 857.0, "text": " We have a different representation."}, {"start": 857.0, "end": 860.0, "text": " And that's all fine when we have a standalone point net."}, {"start": 860.0, "end": 867.0, "text": " But what I'm getting at here is that we want to make sure that if I were to rotate this mug here,"}, {"start": 867.0, "end": 872.0, "text": " let's say we have a point x here."}, {"start": 872.0, "end": 878.0, "text": " And if I were to rotate this mug, however we want, for the sake of simplicity,"}, {"start": 878.0, "end": 881.0, "text": " I'm just going to reduce this problem to a 2D problem."}, {"start": 881.0, "end": 890.0, "text": " So let's say we basically have a 2D square and we have a point x here."}, {"start": 890.0, "end": 899.0, "text": " Now if I were to rotate this square, so let's put some coordinate system here to make it a bit more clear."}, {"start": 899.0, "end": 903.0, "text": " Basically, if, now let's draw the coordinate system again."}, {"start": 903.0, "end": 912.0, "text": " And if we were to rotate the square for like 45 degrees or whatnot and rotate the point x the same way,"}, {"start": 912.0, "end": 918.0, "text": " so basically you'd want to ideally have the same descriptor for these two, right?"}, {"start": 918.0, "end": 922.0, "text": " Because their relative relationship has not changed."}, {"start": 922.0, "end": 928.0, "text": " And with the current implementation, remember if we were to change the 3D points,"}, {"start": 928.0, "end": 937.0, "text": " so if we were to change x and if we were to change the rotation or if we were to translate this object,"}, {"start": 937.0, "end": 941.0, "text": " as I said, this latent vector here is going to change, x is going to change,"}, {"start": 941.0, "end": 944.0, "text": " so that means that the descriptor is definitely going to change."}, {"start": 944.0, "end": 948.0, "text": " And in order to enable the generalization that they showed,"}, {"start": 948.0, "end": 960.0, "text": " it's very important to make this SE3 equivariant, which is just a fancy way of saying basically equivariant to translation and rotation."}, {"start": 960.0, "end": 966.0, "text": " And they also, I think they're kind of being sloppy with the equivariance versus invariance."}, {"start": 966.0, "end": 972.0, "text": " I've noticed that a couple of times, but yeah, hopefully you'll understand the gist of the paper."}, {"start": 972.0, "end": 980.0, "text": " So they say here that in other words, we require f to be invariant to joint transformation of x and p."}, {"start": 980.0, "end": 982.0, "text": " So again, f is, remember, the spatial descriptor,"}, {"start": 982.0, "end": 988.0, "text": " implying the descriptor field should be equivariant to SE3 transformations of p."}, {"start": 988.0, "end": 993.0, "text": " So here they just formulaically represent 
that if we were to transform both x and the point cloud p"}, {"start": 993.0, "end": 999.0, "text": " with some rotation and some translation, we want to make sure that the output of the function f,"}, {"start": 999.0, "end": 1003.0, "text": " remember, that's the descriptor, that's this whole thing is called f,"}, {"start": 1003.0, "end": 1007.0, "text": " and only the small portion here is called phi, that's the occupancy network."}, {"start": 1007.0, "end": 1011.0, "text": " But we care about this descriptor, the neural point descriptor field, okay?"}, {"start": 1011.0, "end": 1016.0, "text": " So as I said, we want to make sure that those outputs are the same."}, {"start": 1016.0, "end": 1024.0, "text": " They basically add two, let's call it tricks, to introduce this invariance or equivariance."}, {"start": 1024.0, "end": 1027.0, "text": " The first thing is fairly simple."}, {"start": 1027.0, "end": 1031.0, "text": " You basically first, if you want to be invariant to the translation,"}, {"start": 1031.0, "end": 1036.0, "text": " you can just find the mean point of the point cloud, so that's the mu here,"}, {"start": 1036.0, "end": 1043.0, "text": " and you subtract mu from both the input point x and from the point cloud p,"}, {"start": 1043.0, "end": 1049.0, "text": " and by doing this, you're guaranteed to always have this invariance."}, {"start": 1049.0, "end": 1055.0, "text": " To understand why that is, and hopefully it's very, very simple, but let me try and kind of make it a bit clearer."}, {"start": 1055.0, "end": 1059.0, "text": " If we have a coordinate system here, and I'm again going to be in a 2D space"}, {"start": 1059.0, "end": 1064.0, "text": " because it's harder to draw 3D objects using touchpad."}, {"start": 1064.0, "end": 1070.0, "text": " So if we had, let's say we have a square here and a point here,"}, {"start": 1070.0, "end": 1075.0, "text": " and let's say we have a translated version of this same square,"}, {"start": 1075.0, "end": 1081.0, "text": " so I'm just going to use a different color here, so let's say we have it here, okay?"}, {"start": 1081.0, "end": 1086.0, "text": " So if we were to find the mean point, which would be here for this object,"}, {"start": 1086.0, "end": 1090.0, "text": " and it would be basically here for this square, for the red square,"}, {"start": 1090.0, "end": 1095.0, "text": " and if we were to subtract those from the corresponding two cases we have,"}, {"start": 1095.0, "end": 1103.0, "text": " we'd end up with the same result both times, so we'd basically end up with this configuration,"}, {"start": 1103.0, "end": 1108.0, "text": " if we were to subtract this vector, and we would end up with the same configuration,"}, {"start": 1108.0, "end": 1111.0, "text": " if we were to basically subtract the other vector."}, {"start": 1111.0, "end": 1122.0, "text": " And because of that, the inputs are the same, and then obviously we do that even before we pass the data through the function f,"}, {"start": 1122.0, "end": 1126.0, "text": " and that's why we're going to trivially have this property satisfied."}, {"start": 1126.0, "end": 1130.0, "text": " As for the rotation, it's not that simple."}, {"start": 1130.0, "end": 1136.0, "text": " You can't do the same approach and kind of guarantee that you have invariance,"}, {"start": 1136.0, "end": 1142.0, "text": " because imagine if I were to kind of rotate this thing here,"}, {"start": 1142.0, "end": 1146.0, "text": " and now if we try to kind of just subtract the vector,"}, {"start": 
1146.0, "end": 1150.0, "text": " we'd end up with a square that looks something like this,"}, {"start": 1150.0, "end": 1157.0, "text": " which obviously will be different input, and then f will just produce different output,"}, {"start": 1157.0, "end": 1159.0, "text": " so we don't have the invariance property."}, {"start": 1159.0, "end": 1162.0, "text": " So to achieve the rotation, active variance,"}, {"start": 1162.0, "end": 1167.0, "text": " they say here we rely on recently proposed vector neurons,"}, {"start": 1167.0, "end": 1173.0, "text": " which propose a network architecture that equips an occupancy network, i.e. the composition of epsilon,"}, {"start": 1173.0, "end": 1179.0, "text": " which is the point net encoder, and phi, which is basically the small submodule in the occupancy network,"}, {"start": 1179.0, "end": 1182.0, "text": " with full SO3 active variance."}, {"start": 1182.0, "end": 1188.0, "text": " So SO just means rotation invariance, active variance,"}, {"start": 1188.0, "end": 1193.0, "text": " whereas the SE3 means both translation and rotation, just keep that in mind."}, {"start": 1193.0, "end": 1198.0, "text": " And formulaically that means we want to achieve this, we want to achieve the same output,"}, {"start": 1198.0, "end": 1206.0, "text": " even if we were to transform the input point X and the point cloud P using some arbitrary rotation matrix R."}, {"start": 1206.0, "end": 1211.0, "text": " So how they achieve that is basically described in this paper called Vector Neurons."}, {"start": 1211.0, "end": 1215.0, "text": " I'm going to just give a super high level explanation here."}, {"start": 1215.0, "end": 1219.0, "text": " Basically what they've done is instead of using scalar neurons,"}, {"start": 1219.0, "end": 1222.0, "text": " so this is your classical linear layer in an MLP,"}, {"start": 1222.0, "end": 1226.0, "text": " they instead have as an output not a scalar but a vector,"}, {"start": 1226.0, "end": 1231.0, "text": " so that's a generalization of these scalar neurons."}, {"start": 1231.0, "end": 1236.0, "text": " And similarly, the implementation, this is, if you can guess,"}, {"start": 1236.0, "end": 1243.0, "text": " this is an implementation of a ReLU non-linearity in these vector neural networks,"}, {"start": 1243.0, "end": 1246.0, "text": " and basically you may be able to guess why this is,"}, {"start": 1246.0, "end": 1250.0, "text": " but in a scalar case you have something like this,"}, {"start": 1250.0, "end": 1255.0, "text": " so you have some input X scalar, you have output Y,"}, {"start": 1255.0, "end": 1261.0, "text": " and basically ReLU will kind of saturate this to zero, so this is zero,"}, {"start": 1261.0, "end": 1269.0, "text": " and then we're going to have for positive X, we're going to have a simple linear function, so Y equals X."}, {"start": 1269.0, "end": 1275.0, "text": " So here we have a similar, this is just a generalization of ReLU to 3D space."}, {"start": 1275.0, "end": 1278.0, "text": " Basically if we have vector Q, nothing will happen,"}, {"start": 1278.0, "end": 1285.0, "text": " so that's the same as if we were to have input, so let me just for clarity name this Q as well."}, {"start": 1285.0, "end": 1292.0, "text": " So in this case Q is a scalar, so in the case where Q is in the positive part of the domain here,"}, {"start": 1292.0, "end": 1294.0, "text": " Q remains unchanged."}, {"start": 1294.0, "end": 1301.0, "text": " Basically Q, because this is Y equals X, we're going to have the output will also be 
Q,"}, {"start": 1301.0, "end": 1303.0, "text": " and that's what we get here."}, {"start": 1303.0, "end": 1309.0, "text": " But in the other case where the Q lies in the, let's call it negative half space,"}, {"start": 1309.0, "end": 1312.0, "text": " which would be equivalent to this part here,"}, {"start": 1312.0, "end": 1319.0, "text": " we basically project the vector and we just keep the component that's alongside this plane,"}, {"start": 1319.0, "end": 1323.0, "text": " and we drop the component that's alongside negative K direction."}, {"start": 1323.0, "end": 1333.0, "text": " So yeah, it would take a whole video to explain why this works and why this introduces SO3 equivariance,"}, {"start": 1333.0, "end": 1339.0, "text": " but let's just take it as a theory and it just works."}, {"start": 1339.0, "end": 1348.0, "text": " So now we have all of the components necessary to understand how that robot is gripping those mugs,"}, {"start": 1348.0, "end": 1355.0, "text": " even if the pose and even if the shape, the actual instantiation of that mug is changing."}, {"start": 1355.0, "end": 1362.0, "text": " The idea lies in creating, inducing these energy landscapes,"}, {"start": 1362.0, "end": 1366.0, "text": " and we are using descriptors in the following manner."}, {"start": 1366.0, "end": 1371.0, "text": " Okay, let me use their visualization because it's going to be easier."}, {"start": 1371.0, "end": 1379.0, "text": " So imagine we have basically, this is P hat is, you can treat this as a demonstration,"}, {"start": 1379.0, "end": 1384.0, "text": " a mug that was used during demonstration, and P is the one used during the test time."}, {"start": 1384.0, "end": 1391.0, "text": " So imagine we have this point X here in green and we have the shape here,"}, {"start": 1391.0, "end": 1397.0, "text": " and basically we can pass those and we get out, as the output we get the description,"}, {"start": 1397.0, "end": 1403.0, "text": " the descriptor of their mutual relationship. Let's put it that way."}, {"start": 1403.0, "end": 1408.0, "text": " On the other hand, we have novel point cloud P here."}, {"start": 1408.0, "end": 1413.0, "text": " It's, as you can see, a bit differently shaped. 
It's not the same as the one above."}, {"start": 1413.0, "end": 1420.0, "text": " And now the idea is to find corresponding,"}, {"start": 1420.0, "end": 1428.0, "text": " what will be the corresponding point to this X for this current configuration here?"}, {"start": 1428.0, "end": 1432.0, "text": " And the answer you would give intuitively is it should be somewhere here, right?"}, {"start": 1432.0, "end": 1436.0, "text": " And the reason it's going to work is because, as you can remember,"}, {"start": 1436.0, "end": 1441.0, "text": " how we train the F function, or neural descriptor,"}, {"start": 1441.0, "end": 1446.0, "text": " is such that this is what the energy landscape is going to look like."}, {"start": 1446.0, "end": 1450.0, "text": " So basically for points which are close to the handle here,"}, {"start": 1450.0, "end": 1455.0, "text": " because remember the shape is going to be abstracted away because of the point net"}, {"start": 1455.0, "end": 1460.0, "text": " and the way we trained the whole system,"}, {"start": 1460.0, "end": 1464.0, "text": " and that's going to cause the energy landscape to look like this,"}, {"start": 1464.0, "end": 1468.0, "text": " which means once we use simple optimization, so we'll be tweaking,"}, {"start": 1468.0, "end": 1473.0, "text": " so what will happen is we'll be using Adam, so they've been using, in practice they've been using Adam,"}, {"start": 1473.0, "end": 1478.0, "text": " so we'll be tweaking the parameters of X, so basically three scalars,"}, {"start": 1478.0, "end": 1485.0, "text": " we're just tweaking those such that point X will be moving towards this low energy part of the landscape,"}, {"start": 1485.0, "end": 1491.0, "text": " and once that happens, basically this, so yeah, this is this loss,"}, {"start": 1491.0, "end": 1494.0, "text": " we are minimizing that, and as we are minimizing that,"}, {"start": 1494.0, "end": 1500.0, "text": " we're basically finding this point that we just found intuitively a couple moments ago."}, {"start": 1500.0, "end": 1505.0, "text": " And because of that, the robot will basically know how to grip this handle."}, {"start": 1505.0, "end": 1509.0, "text": " There will be like a simple high level explanation of how the thing works."}, {"start": 1509.0, "end": 1517.0, "text": " So now the problem is with this thing is that the grip of a robot actually is not a single point,"}, {"start": 1517.0, "end": 1523.0, "text": " because if you have a single point, you can have multiple frames, coordinate frames attached,"}, {"start": 1523.0, "end": 1526.0, "text": " you can have infinite coordinate frames attached to that point,"}, {"start": 1526.0, "end": 1534.0, "text": " so they just kind of generalize this to the case where we want to have the pose information as well,"}, {"start": 1534.0, "end": 1542.0, "text": " and how they represent the pose is very similar, you just basically concatenate,"}, {"start": 1542.0, "end": 1546.0, "text": " so you take all of the points here, so this is again concatenation symbol,"}, {"start": 1546.0, "end": 1548.0, "text": " you take all of the points from this query point cloud,"}, {"start": 1548.0, "end": 1552.0, "text": " so those are these points with various colors as you can see here,"}, {"start": 1552.0, "end": 1556.0, "text": " and you just basically concatenate their descriptions,"}, {"start": 1556.0, "end": 1561.0, "text": " and that's how you get the final description for the pose."}, {"start": 1561.0, "end": 1566.0, "text": " Once you have that, by the way, in 
order to get those points,"}, {"start": 1566.0, "end": 1572.0, "text": " they mentioned here that you're using a robust heuristic to sample points uniformly at random"}, {"start": 1572.0, "end": 1576.0, "text": " from within the bounding box of the rigidbody S,"}, {"start": 1576.0, "end": 1581.0, "text": " whereas you can think of S as like a gripper in the case of picking up an object,"}, {"start": 1581.0, "end": 1586.0, "text": " and in the case when you want to place it, it's going to be some other object,"}, {"start": 1586.0, "end": 1596.0, "text": " and in the case when you want to place the actual shape onto some table or whatnot, onto a shelf,"}, {"start": 1596.0, "end": 1603.0, "text": " you won't have a gripper, you have some different salient object,"}, {"start": 1603.0, "end": 1611.0, "text": " and basically what they do is, as you can see here, finally they basically do K demonstrations,"}, {"start": 1611.0, "end": 1615.0, "text": " so 1 through K, and they collect these descriptors,"}, {"start": 1615.0, "end": 1620.0, "text": " so for where the gripper was during picking and where the placement surface was"}, {"start": 1620.0, "end": 1622.0, "text": " when the object was placed in the demonstration,"}, {"start": 1622.0, "end": 1627.0, "text": " and then they just average out all of these descriptors,"}, {"start": 1627.0, "end": 1636.0, "text": " and they end up with a tuple, basically this Z pick and Z rel with these bars on top of them,"}, {"start": 1636.0, "end": 1639.0, "text": " that's a kind of silly notation, but yeah."}, {"start": 1639.0, "end": 1645.0, "text": " And as I said, once you have the descriptors, you can just minimize this energy landscape"}, {"start": 1645.0, "end": 1655.0, "text": " to get the position for the novel object in a novel instance of the same class"}, {"start": 1655.0, "end": 1659.0, "text": " with a different orientation or whatnot."}, {"start": 1659.0, "end": 1665.0, "text": " And again, they mention here, we rely on off-the-shelf inverse kinematics and motion planning algorithms"}, {"start": 1665.0, "end": 1668.0, "text": " to execute the final predicted pick and place task,"}, {"start": 1668.0, "end": 1673.0, "text": " so that means that the whole trajectory between the two endpoints"}, {"start": 1673.0, "end": 1678.0, "text": " is not something they were trying to solve in this paper."}, {"start": 1678.0, "end": 1683.0, "text": " Yeah, here's just some visualizations of how you start up with some random points,"}, {"start": 1683.0, "end": 1687.0, "text": " and then because the demonstration was such,"}, {"start": 1687.0, "end": 1693.0, "text": " and then that means we had certain descriptor as an output of this demonstration,"}, {"start": 1693.0, "end": 1701.0, "text": " now we just minimize the loss, so we move towards the lower parts of the energy landscape,"}, {"start": 1701.0, "end": 1709.0, "text": " and we end up with the same, as you can see, analogous position for this novel shape,"}, {"start": 1709.0, "end": 1714.0, "text": " so basically this test shape is potentially different than the demonstration one."}, {"start": 1714.0, "end": 1719.0, "text": " Results are, surprise, surprise, way better compared to some of the baselines,"}, {"start": 1719.0, "end": 1725.0, "text": " especially when you're using upright pose, the baseline is kind of on-pair,"}, {"start": 1725.0, "end": 1729.0, "text": " let's call it on-pair, but its NDF is still better compared to DON,"}, {"start": 1729.0, "end": 1734.0, "text": " but like when you start 
using arbitrary poses, that's when this thing starts shining,"}, {"start": 1734.0, "end": 1739.0, "text": " and it's way better compared, as you can see the gap increases significantly here and here."}, {"start": 1739.0, "end": 1744.0, "text": " Also, you can notice that there is some drop in the performance here,"}, {"start": 1744.0, "end": 1750.0, "text": " and I think that they mentioned that the main reason was that the point clouds kind of changed,"}, {"start": 1750.0, "end": 1755.0, "text": " because during the demonstrations the robots only saw the mug or whatever object they were using"}, {"start": 1755.0, "end": 1762.0, "text": " from the bird view perspective, and when they use arbitrary pose,"}, {"start": 1762.0, "end": 1766.0, "text": " obviously there are some novel occlusions and these occlusions that happen,"}, {"start": 1766.0, "end": 1769.0, "text": " and so that's why there is some degradation there."}, {"start": 1769.0, "end": 1776.0, "text": " That's pretty much it. Here they just show some ablations, basically it's better to use,"}, {"start": 1776.0, "end": 1785.0, "text": " like should we use final layer or the first layer of the occupancy network or all layers,"}, {"start": 1785.0, "end": 1790.0, "text": " and we saw that they ended up using all of these layers as a representation,"}, {"start": 1790.0, "end": 1797.0, "text": " and that turned out to be the best option, and yeah, that's pretty much it."}, {"start": 1797.0, "end": 1801.0, "text": " There is a couple of things they also mentioned here."}, {"start": 1801.0, "end": 1806.0, "text": " Yeah, they also showed that as the number of demonstrations increases,"}, {"start": 1806.0, "end": 1811.0, "text": " you can see that the performance is increasing, so it's important that they use around 10 demonstrations"}, {"start": 1811.0, "end": 1816.0, "text": " so that this can work, so this is a few-shot learning, let's say,"}, {"start": 1816.0, "end": 1823.0, "text": " and as a point of further potential research, they say that further NDFs can only define"}, {"start": 1823.0, "end": 1826.0, "text": " transferable energy landscapes over poses and points."}, {"start": 1826.0, "end": 1831.0, "text": " Future work may explore integrating such energy functions with trajectory optimization"}, {"start": 1831.0, "end": 1833.0, "text": " to enable NDFs to transfer to full trajectory."}, {"start": 1833.0, "end": 1839.0, "text": " So that's the part I said that the actual movement between the picking up and placing"}, {"start": 1839.0, "end": 1843.0, "text": " is done using these off-the-shelf methods."}, {"start": 1843.0, "end": 1850.0, "text": " So I love the idea. 
There are things that, in my opinion, should be improved upon, obviously,"}, {"start": 1850.0, "end": 1857.0, "text": " not using the heuristics, not using these off-the-shelf algorithms to plan the trajectory,"}, {"start": 1857.0, "end": 1863.0, "text": " but maybe inducing an energy landscape across this whole thing."}, {"start": 1863.0, "end": 1866.0, "text": " Also, I'm always trying to think when I see a method like this,"}, {"start": 1866.0, "end": 1868.0, "text": " whether our brain is doing something similar."}, {"start": 1868.0, "end": 1873.0, "text": " It's not that we have to copycat our brain, but we need to find some inspiration there,"}, {"start": 1873.0, "end": 1879.0, "text": " and it's kind of the only current implementation of AGI that we have present in the world,"}, {"start": 1879.0, "end": 1881.0, "text": " so I guess it's a good starting point."}, {"start": 1881.0, "end": 1891.0, "text": " But just kind of doing gradient descent on the energy landscape doesn't seem like the thing"}, {"start": 1891.0, "end": 1892.0, "text": " that our brain is doing."}, {"start": 1892.0, "end": 1900.0, "text": " But when I saw the demos they showed, it's quite impressive, and basically, yeah,"}, {"start": 1900.0, "end": 1902.0, "text": " kudos to the authors."}, {"start": 1902.0, "end": 1906.0, "text": " Some parts of the paper could have been written a little bit more clearly."}, {"start": 1906.0, "end": 1913.0, "text": " Yeah, at times it's a bit hard to understand, and the 10-pager format for the papers is not helping"}, {"start": 1913.0, "end": 1919.0, "text": " that much because people have to squeeze in stuff, and then you lose the structuring of the documents."}, {"start": 1919.0, "end": 1922.0, "text": " It's hard to read, and yeah."}, {"start": 1922.0, "end": 1924.0, "text": " That said, hope you like this video."}, {"start": 1924.0, "end": 1930.0, "text": " As I said, it's a bit different because I still don't have my setup here in London,"}, {"start": 1930.0, "end": 1934.0, "text": " but I'll be continuing with the JAX series."}, {"start": 1934.0, "end": 1942.0, "text": " Next up, you can expect my Flax video, Haiku video, and then I'll start doing some videos on ML Ops,"}, {"start": 1942.0, "end": 1944.0, "text": " and yeah, stay tuned for that."}, {"start": 1944.0, "end": 1957.0, "text": " Until next time, bye-bye."}]
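The transcript above describes the core recipe of this paper: mean-center the point cloud, encode it, read off the concatenated intermediate activations of the occupancy MLP as a per-point descriptor, and then recover the corresponding point on a new object instance by running Adam on the descriptor-matching energy. The snippet below is a minimal illustrative sketch of that loop, not the authors' implementation: ToyOccupancyNet, the max-pool stand-in for the PointNet encoder, the random point clouds, and all hyperparameters are assumptions made for demonstration, and the SO(3)-equivariance machinery (vector neurons) is omitted entirely.

```python
# Toy sketch of the neural descriptor field idea (assumed names, not the paper's code).
import torch
import torch.nn as nn

class ToyOccupancyNet(nn.Module):
    def __init__(self, latent_dim=32, hidden=64, n_layers=3):
        super().__init__()
        # Stand-in for the PointNet encoder: a shared per-point MLP + max-pool,
        # which keeps the permutation invariance discussed in the transcript.
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        dims = [3 + latent_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList([nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])])
        self.occ_head = nn.Linear(hidden, 1)  # 0/1 occupancy logit, only used while (pre-)training

    def descriptor(self, x, cloud):
        # Translation-invariance trick: subtract the cloud mean from both x and the cloud.
        mu = cloud.mean(dim=0)
        z = self.point_mlp(cloud - mu).max(dim=0).values   # latent code of the (partial) point cloud
        h = torch.cat([x - mu, z])
        feats = []
        for layer in self.layers:
            h = torch.relu(layer(h))
            feats.append(h)
        return torch.cat(feats)  # f(x | P): concatenation of all intermediate activations

net = ToyOccupancyNet()                                # assume it was pre-trained on occupancy prediction
demo_cloud = torch.randn(500, 3)                       # partial point cloud seen during the demonstration
demo_x = torch.tensor([0.10, 0.00, 0.20])              # e.g. a point near the demo mug's handle
z_demo = net.descriptor(demo_x, demo_cloud).detach()   # target descriptor from the demonstration

test_cloud = torch.randn(500, 3) * 1.2 + 0.5           # a different instance / pose at test time
x = torch.zeros(3, requires_grad=True)                 # the three scalars that get optimized
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    energy = (net.descriptor(x, test_cloud) - z_demo).pow(2).sum()  # the energy landscape from the transcript
    energy.backward()
    opt.step()
# x has now slid down the energy landscape toward the point that plays the same role on the new object.
```

For the pose (rather than single-point) case, the paper concatenates the descriptors of a small query point cloud attached to the gripper and averages the targets over the K demonstrations, but the optimization loop itself stays the same.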
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=SgaN-4po_cA
How I Got a Job at DeepMind as a Research Engineer (without a Machine Learning Degree!)
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE In this video, I share my story on how my journey to DeepMind looked like. I cover some details of my background story, how my ML curriculum looked like, how to get a referral for a top-tier company such as DeepMind, and finally how my final preps looked like. ⌚️ Timetable: 00:00 Intro - I landed a job at DeepMind 01:11 My story 07:16 ML curriculum 14:57 Landing an interview at DeepMind 17:34 Final preps ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #deepmind #engineering #job
Hey, what's up you guys? In this video I thought sharing my full story on how I managed to land a job at DeepMind as a research engineer, as this was one of the most requested videos by you guys, so I thought making it happen. So I guess what makes my story kind of special is the fact that I do not have a machine learning degree, and when I say that I mean I don't have a PhD, nor master's, nor bachelor's degree in machine learning. Also, my first exposure to this whole world of programming was fairly late when I was 19 years old, and similarly my first exposure to machine learning field was back in the summer of 2018, which is fairly recent as well. Also, I come from a relatively poor country and the higher education system here in Serbia is not that great, especially compared to some of the best universities out there. So in this video I thought telling you something more about my background story, telling you how I managed to craft my own machine learning curriculum, I'm going to tell you as well how I managed to get referred to DeepMind, and finally I'm going to tell you a little bit more how my final preparations look like. So having said that, hopefully you find this video useful and enjoy. Back in the high school I wasn't even aware that working for a big tech company is a thing. Like I had all of this software on my computer, like Windows and Google search, but I wasn't quite aware that you can work for a company out there to build products like these. Like that was the level of my ignorance back then. So no, I'm not that guy who started programming when he was six, nor did I dream about building an artificial general intelligence when I was seven years old. No, I was probably the mutt back then. Then in 2013 I was supposed to pick the field of my study, and back then I didn't quite know what I want to do yet, and so I heard about this electrical engineering faculty where only smart people can get in, and it's the hardest faculty in my country, and so I was very attracted by the challenge, and I decided to enroll and study electrical engineering. During the first couple of years of me studying there, aside from studying really hard, my extracurricular activities did not involve building fancy applications and cool technical projects. No, I was actually learning human languages, and I was working out focusing, like training calisthenics and stuff like that, and back then all of that seemed very irrelevant for my like future career, but like only later did I realize that all of the meta skills I acquired doing that were invaluable later during my career. So I learned how to discipline myself, I learned how to like build a program and execute on it, I also learned how to learn on my own, and all of these are very, very important. 
Then at the end of my studies in 2017, I decided I want to make a hard pivot into software engineering, and the reason was my studies were mostly focusing on digital electronics and analog electronics, and I did have some programming courses, but those constituted a minor like percentage of the courses I had, and so I started taking more and more programming courses, and around that same time in 2017, I managed to land an internship in a small general startup as an Android developer by this student organization, and so I decided I want to prepare myself, and I started learning Android, I started building apps, uploading them to Google Play Store, and again, this whole period of me experimenting and learning and searching for the resources myself and building my own curriculum proved to be very important later for my later journey. Fast forward to summer of 2017, while I was doing my internship in Germany, I realized I was falling behind, and the reason was I finally decided I want to start applying for big tech companies, and I realized that the only thing that's actually important are algorithms and data structures, and so all of the knowledge I accumulated throughout the years, all of that engineering knowledge I accumulated was actually not that important. I remember I felt really bad because of that, because somebody who was like 19 years old and was doing like competitive programming for a year already had an advantage compared to me. That feeling sucked really bad, and so once I got back to Belgrade end of the summer 2017, I decided I want to start learning algorithms and data structures, and so I started building my own software engineering curriculum. I started taking algorithms and data structure courses, I started reading through the textbooks, I started doing competitive programming, going through the cracking the coding interview book and stuff like that. In parallel, as soon as I started my preparations, I started applying for big tech companies, and soon, very soon, in December 2017, I managed to land my first ever interview with Facebook, and I failed miserably, but I did not give up, and so I just started pushing and learning and applying, and soon enough, I got my second interview with Microsoft, I failed, I got my interviews later on in 2018 with Nvidia, and I failed again, but the main point was I did not give up, that's it. Around that same time in 2018, I was accepted into this machine learning summer camp organized by Microsoft employees, and so I realized this was a perfect opportunity for me to show that I'm passionate and knowledgeable, and maybe this was my entry card for Microsoft, and soon enough, after I attended this machine learning summer camp, while I was in Brazil doing yet another student internship, I finally got my offer from Microsoft, and I was so happy. The team I was supposed to work for was the HoloLens team, the very same team that organized the machine learning summer camp, and I'm not sure how many of you know about HoloLens device, but this is a science fiction-like device that you put in your head, and all of a sudden, you have holographic projections in the 3D space around you. So because I was working on this really cool project, the HoloLens project, and I was surrounded by computer vision, and where is computer vision? 
There is machine learning, I was quite motivated myself to start learning machine learning on my own, even though my official role at Microsoft back then was software engineer, and so I started doing Coursera courses, Andrew Ng classics, but in all honesty, during that period, ML was not the strongest focus point of mine, so I was trying to build strong software engineering foundations, I was also reading a lot of literature that doesn't have anything to do with tech, like investing and finance, etc. And so throughout 2019, I was learning ML, but on a low intensity mode, and nonetheless, the internal management in Microsoft noticed my efforts, and so I got shifted internally from a software engineer role to a machine learning engineer role, and I was sent to this ICCV conference, a famous computer vision conference, and all of this gave me additional boost and motivation to start learning and understanding how machine learning works in all of the nitty-gritty details. So all of this made me develop my own machine learning curriculum, and next up, I'm going to tell you more about it. I knew I wanted to research various subfields of machine learning, such as various computational algorithms, such as neural style transfer, as well as GANs, or generative adversarial networks, transformers, which are all around the place, and reinforcement learning. And so I decided I want to dedicate like two to three month periods of time to each one of those subfields of machine learning. How I call those in my head are macro cycles, and inside of each of those two to three month long macro cycles, interspersed are something that I call the micro cycles, and those micro cycles, again, can be of two types, input or output. Now those input cycles are all about me ingesting the information. So the initial micro cycles would be all about just understanding the high level structure of the subfield I'm trying to tackle. So I'd start with some high level resources, such as some blogs or some YouTube videos, stuff like that. And then in subsequent micros, I dig a lot deeper, and I start reading research papers, some chapters of some hard books, stuff like that. The output micro, on the other hand, is all about me outputting information. So me teaching you guys the stuff I've learned, and by doing that, I help you out, and also I learn stuff much better myself. So that's a win-win situation. Now some of the public artifacts that I create during those output micro cycles are things such as YouTube videos, things such as open sourcing, like a GitHub implementation of some paper or something, and usually those would come in the middle of the macro cycle. And finally, I decided I want to write a blog summarizing everything I've learned investigating and researching that subfield of machine learning, and I'd usually publish those on Medium at the end of the macro cycle. Now all of this might seem like a perfect plan, but the reality is over the first couple of iterations, so that means over the first couple of subfields of machine learning, I was making lots of mistakes, I was deviating from the plan, but like after a couple of iterations, after a couple of months, like I think five or six months, I got much better at this, and yeah. And now let me tell you a bit more about the topics I was covering over that year and a half of my machine learning curriculum. 
First up is neural style transfer, and the reason is I'm A, very passionate about computer vision and about images and just visual stuff in general, and B, I like art in general. So ever since I was a kid, I loved drawing, painting, I was even doing graffitis back in my high school days. So this field like such a natural, like a topic to start with. Needless to say, I learned a lot about stuff like PyTorch, so the deep learning framework, I learned a lot about convolutional neural networks, about optimization algorithms, stuff like that. This was also the first time I read so many research papers, and aside from that, I open sourced three projects, the first one implementing the original neural style transfer paper, the second one is the fast version of the NST algorithm, and the third one was NST applied to videos. During that period, I also created my first ever YouTube video and YouTube playlist on NST, obviously, but I unfortunately did not write a blog sharing the learnings throughout that period, which was actually somewhere around four and a half months long, which is, as you can see, I was deviating from my initial plan of doing it two to three month chunks of time. Things were not perfect, but I was getting better, and that's what matters ultimately. Next up, I decided I wanna explore Deep Dream, because ever since I've seen the first outputs of that algorithm, I was fascinated by it, and I knew I wanted to understand the nitty-gritty details of that algorithm. And so again, I started exploring, reading blogs, going through subreddits, exploring projects, and implementing my own project as well, and unfortunately, again, I just created a single video, and I did not write a blog as well, but I was getting better slowly. In contrast to neural cell transfer, I was exploring Deep Dream for only about a month and a half, again, some deviations from the plan, but nothing too alarming. Next up, I switched my attention to generative adversarial networks, because they were, A, very popular back then, so even more than they are now, and B, the GAN component was used all around the place just to make some baseline have a more realistic output. And so again, I was reading a bunch of research papers, I implemented three projects as well here, so the original GAN paper, the conditional version of the GAN paper, and finally the DC GAN version, i.e. the deep convolutional GAN algorithm, which is one of the most seminal GAN papers, because it caused a Cambrian explosion of various novel papers down the line. Aside from GANs, Transformer was the name that was thrown all around the place, and I knew I needed to understand and nail the concept if I was to understand all of the advanced papers that were coming out every single day. And this time, I think I executed everything pretty much perfectly. I implemented and open-sourced the original Transformer paper, so that was a machine translation system that translated between English and German and vice versa, and since I knew both languages, it was quite kind of easier for me to debug the code and understand whether the outputs make sense or not. Also, this was the first time I started covering research papers on this channel, and finally the macro was ended by me writing a blog summarizing everything I've learned throughout the last three months. 
After Transformers came the GraphML subfield, and this is very very important because I was thinking about covering this subfield for many many months, and also this is the place where the DeepMind channel opened up for me, but I'm going to tell you about it a bit later in this video. I'll keep this short. Again, I implemented the original Graph Attention Network paper, I created a very popular YouTube series of tutorials on graph neural networks, and all in all, I got an awesome response from the whole community of researchers and engineers alike, and as I said, all of this helped me later open up that DeepMind communication channel and land a first interview. And finally, probably somewhat expected, the last subfield I was covering was reinforcement learning, something DeepMind is very famous for. I open sourced the original DQN paper, which was a model that was the first model to solve most of the Atari games on a superhuman level. I covered various interesting papers, seminal papers such as the AlphaGo paper, AlphaGo Zero, Mu Zero, DQN, all of those goodies. And once I wrapped up RL, I felt I had enough breadth to start covering the most advanced papers as they are coming out, and that's what I've been doing ever since, covering novel research papers on my YouTube channel. Now, all of this was great. I gained a huge breadth and depth in various ML topics, but I still knew, like considering my background, that I'll need a referral if I was to land an interview at a company such as DeepMind. Now I'm going to tell you something more about how I managed to get a referral for DeepMind, and I'm also going to give you a couple of tips for how you can yourself land an interview at a top-tier company such as DeepMind. So somewhere around mid-2020, I approached this guy from DeepMind called Petar Veličković. So I just sent him a message on LinkedIn, and we started chatting, which went way easier than I would have thought, because as it turned out, he was already following my content to my surprise, and I obviously knew who he was because he's one of the leading experts in the GraphML field. And so that's how we kicked it off. We started discussing various papers. I told him that later that year, I'll start exploring the GraphML field, to which he was very receptive and told me to ping him if I need any help whatsoever. And so obviously I did. We were discussing various graph neural networks, and I've even implemented a paper where he was the first author of the famous Graph Attention Network paper. Later, in April 2021, we went on a call, and we were just chatting again, and then I told him I want to start applying for DeepMind. Two minutes later, without me even being aware of it, he already made a referral, and a couple minutes later, the interviewer came back and told me I have an interview scheduled for Monday, and that was Friday. So that's how my story rolled out. It doesn't have to be the same for each one of you, obviously. And so here I'm going to give you two tips on how you can get a referral for a top tier company such as DeepMind. So the first option is to form a genuine connection with a person from your target company. And I said genuine deliberately because the worst thing you can do is just spam a random person from that company asking for a favor, asking for a referral. Instead, ask what you can do for that person. How can you help out? 
And this is going to vary by a lot, so I can't give you a specific advice, but see what that person needs, and you can help out around some project or whatnot. The second option you've got is to find a project, an open source project from that target company of yours, and contribute. Invest a couple of months of your time, push a couple of pull requests, add some value to that project, add some value to that company, and somebody, I guarantee you, some of the contributors will notice you and will be glad to refer you back to the company. Back to my story. Once I managed to land an interview at DeepMind, I knew I had to prepare really hard because, as you may know, there is this discrepancy between the interviewing pipeline and between your daily work, and so I knew I had to refresh some of my algorithms and data structures knowledge, etc. And because of that, next up, I'm going to briefly tell you how my final preparations for DeepMind look like. So I personally did not have a lot of time because I already have landed an interview before I even started preparing for the interviewing process. So what I've done is I went through this algorithmic textbook with which I was already familiar with back from 2017 when I was applying for big tech companies. I also went through some chapters of cracking the coding interview book, and finally I went through this mathematics for machine learning book. Aside from the books, obviously, I was using various other resources like YouTube videos, etc. And the whole point for me was to cover four broad areas. So that would be machine learning, statistics, computer science, and mathematics. So aside from those technical topics, I knew I had to prepare for the behavioral questions as well, and so that's where the initial part of the cracking the coding interview book came in handy again. Finally, because this was DeepMind, I did some research on the artificial general intelligence from both the philosophical as well from the engineering side, and for every single interviewer, I've done my research and I've read many of the papers that they've published. So all of these preparations were interspersed between various interviews I had, which were usually like maybe 10 days apart from each other. Finally, I've passed all of the first five interviews, and on the last day, I had two more interviews scheduled, and again, I failed. And now it's really hard to express how devastated I felt, and if there is a single piece of advice that's that never put all of your eggs in a single basket, and that's what I did. I was laser focused on getting this job at DeepMind. In general, you should apply for multiple companies, because first of all, the hiring pipelines are almost always noisy, and so sometimes, even though you're a good candidate, you may end up getting a rejection. Luckily for me, because I was deemed as a technically strong candidate, I was rerouted to another group inside of DeepMind, and four interviews later, I got a yes, I got an offer from DeepMind, I became a research engineer, and that's pretty much my story. So I tried to be maximally transparent in this video, but I had to ditch some details, because otherwise, this video would be too long. But if you have any additional questions, feel free to write them down in the comment section, and I'll make sure to answer every single one of your comments. 
Hopefully you found this story useful, or at least inspiring, if nothing else, and I encourage you to subscribe to this channel, share this video out if you liked it, and also join the Discord community. Until next time, bye bye.
[{"start": 0.0, "end": 5.5200000000000005, "text": " Hey, what's up you guys? In this video I thought sharing my full story on how I managed to land"}, {"start": 5.5200000000000005, "end": 12.96, "text": " a job at DeepMind as a research engineer, as this was one of the most requested videos by you guys,"}, {"start": 12.96, "end": 17.68, "text": " so I thought making it happen. So I guess what makes my story kind of special is the fact that"}, {"start": 17.68, "end": 22.400000000000002, "text": " I do not have a machine learning degree, and when I say that I mean I don't have a PhD,"}, {"start": 22.400000000000002, "end": 27.92, "text": " nor master's, nor bachelor's degree in machine learning. Also, my first exposure to this whole"}, {"start": 27.92, "end": 34.72, "text": " world of programming was fairly late when I was 19 years old, and similarly my first exposure to"}, {"start": 35.28, "end": 41.040000000000006, "text": " machine learning field was back in the summer of 2018, which is fairly recent as well. Also,"}, {"start": 41.040000000000006, "end": 45.92, "text": " I come from a relatively poor country and the higher education system here in Serbia is not that"}, {"start": 45.92, "end": 50.72, "text": " great, especially compared to some of the best universities out there. So in this video I thought"}, {"start": 51.36, "end": 56.64, "text": " telling you something more about my background story, telling you how I managed to craft my own"}, {"start": 56.64, "end": 62.72, "text": " machine learning curriculum, I'm going to tell you as well how I managed to get referred to DeepMind,"}, {"start": 62.72, "end": 67.12, "text": " and finally I'm going to tell you a little bit more how my final preparations look like. So"}, {"start": 67.12, "end": 71.6, "text": " having said that, hopefully you find this video useful and enjoy."}, {"start": 75.52, "end": 79.92, "text": " Back in the high school I wasn't even aware that working for a big tech company is a thing."}, {"start": 79.92, "end": 85.04, "text": " Like I had all of this software on my computer, like Windows and Google search, but I wasn't"}, {"start": 85.04, "end": 90.88000000000001, "text": " quite aware that you can work for a company out there to build products like these. Like that was"}, {"start": 90.88000000000001, "end": 96.80000000000001, "text": " the level of my ignorance back then. So no, I'm not that guy who started programming when he was six,"}, {"start": 96.80000000000001, "end": 101.52000000000001, "text": " nor did I dream about building an artificial general intelligence when I was seven years old."}, {"start": 101.52000000000001, "end": 109.36000000000001, "text": " No, I was probably the mutt back then. 
Then in 2013 I was supposed to pick the field of my study,"}, {"start": 109.36000000000001, "end": 114.80000000000001, "text": " and back then I didn't quite know what I want to do yet, and so I heard about this electrical"}, {"start": 114.8, "end": 120.96, "text": " engineering faculty where only smart people can get in, and it's the hardest faculty in my country,"}, {"start": 120.96, "end": 126.8, "text": " and so I was very attracted by the challenge, and I decided to enroll and study electrical engineering."}, {"start": 126.8, "end": 131.12, "text": " During the first couple of years of me studying there, aside from studying really hard,"}, {"start": 131.92, "end": 138.4, "text": " my extracurricular activities did not involve building fancy applications and cool technical"}, {"start": 138.4, "end": 144.96, "text": " projects. No, I was actually learning human languages, and I was working out focusing, like"}, {"start": 144.96, "end": 150.72, "text": " training calisthenics and stuff like that, and back then all of that seemed very irrelevant for"}, {"start": 150.72, "end": 157.12, "text": " my like future career, but like only later did I realize that all of the meta skills I acquired"}, {"start": 157.12, "end": 163.6, "text": " doing that were invaluable later during my career. So I learned how to discipline myself, I learned"}, {"start": 163.6, "end": 170.32, "text": " how to like build a program and execute on it, I also learned how to learn on my own, and all of"}, {"start": 170.32, "end": 177.35999999999999, "text": " these are very, very important. Then at the end of my studies in 2017, I decided I want to make a"}, {"start": 177.35999999999999, "end": 182.79999999999998, "text": " hard pivot into software engineering, and the reason was my studies were mostly focusing on"}, {"start": 182.79999999999998, "end": 187.84, "text": " digital electronics and analog electronics, and I did have some programming courses, but those"}, {"start": 187.84, "end": 193.68, "text": " constituted a minor like percentage of the courses I had, and so I started taking more and more programming"}, {"start": 193.68, "end": 201.36, "text": " courses, and around that same time in 2017, I managed to land an internship in a small general"}, {"start": 201.36, "end": 206.8, "text": " startup as an Android developer by this student organization, and so I decided I want to prepare"}, {"start": 206.8, "end": 212.64000000000001, "text": " myself, and I started learning Android, I started building apps, uploading them to Google Play Store,"}, {"start": 212.64, "end": 218.16, "text": " and again, this whole period of me experimenting and learning and searching for the resources"}, {"start": 218.16, "end": 223.51999999999998, "text": " myself and building my own curriculum proved to be very important later for my later journey."}, {"start": 223.51999999999998, "end": 230.0, "text": " Fast forward to summer of 2017, while I was doing my internship in Germany, I realized I was falling"}, {"start": 230.0, "end": 235.51999999999998, "text": " behind, and the reason was I finally decided I want to start applying for big tech companies,"}, {"start": 235.51999999999998, "end": 240.0, "text": " and I realized that the only thing that's actually important are algorithms and data structures,"}, {"start": 240.0, "end": 244.88, "text": " and so all of the knowledge I accumulated throughout the years, all of that engineering knowledge I"}, {"start": 244.88, "end": 250.0, "text": " accumulated was actually not that important. 
I remember I felt really bad because of that,"}, {"start": 250.0, "end": 256.48, "text": " because somebody who was like 19 years old and was doing like competitive programming for a year"}, {"start": 256.48, "end": 261.76, "text": " already had an advantage compared to me. That feeling sucked really bad, and so once I got back"}, {"start": 261.76, "end": 268.48, "text": " to Belgrade end of the summer 2017, I decided I want to start learning algorithms and data structures,"}, {"start": 268.48, "end": 272.88, "text": " and so I started building my own software engineering curriculum. I started taking"}, {"start": 272.88, "end": 278.24, "text": " algorithms and data structure courses, I started reading through the textbooks, I started doing"}, {"start": 278.24, "end": 283.28000000000003, "text": " competitive programming, going through the cracking the coding interview book and stuff like that."}, {"start": 283.28000000000003, "end": 289.04, "text": " In parallel, as soon as I started my preparations, I started applying for big tech companies, and"}, {"start": 289.04, "end": 295.84000000000003, "text": " soon, very soon, in December 2017, I managed to land my first ever interview with Facebook,"}, {"start": 295.84, "end": 302.88, "text": " and I failed miserably, but I did not give up, and so I just started pushing and learning and applying,"}, {"start": 302.88, "end": 310.56, "text": " and soon enough, I got my second interview with Microsoft, I failed, I got my interviews later on"}, {"start": 310.56, "end": 315.59999999999997, "text": " in 2018 with Nvidia, and I failed again, but the main point was I did not give up, that's it."}, {"start": 316.23999999999995, "end": 321.59999999999997, "text": " Around that same time in 2018, I was accepted into this machine learning summer camp organized"}, {"start": 321.6, "end": 327.12, "text": " by Microsoft employees, and so I realized this was a perfect opportunity for me to show that I'm"}, {"start": 327.12, "end": 332.72, "text": " passionate and knowledgeable, and maybe this was my entry card for Microsoft, and soon enough,"}, {"start": 332.72, "end": 337.76000000000005, "text": " after I attended this machine learning summer camp, while I was in Brazil doing yet another"}, {"start": 337.76000000000005, "end": 343.44, "text": " student internship, I finally got my offer from Microsoft, and I was so happy. The team I was"}, {"start": 343.44, "end": 349.44, "text": " supposed to work for was the HoloLens team, the very same team that organized the machine learning"}, {"start": 349.44, "end": 355.12, "text": " summer camp, and I'm not sure how many of you know about HoloLens device, but this is a science"}, {"start": 355.12, "end": 360.32, "text": " fiction-like device that you put in your head, and all of a sudden, you have holographic projections"}, {"start": 360.32, "end": 365.2, "text": " in the 3D space around you. So because I was working on this really cool project, the HoloLens"}, {"start": 365.2, "end": 369.52, "text": " project, and I was surrounded by computer vision, and where is computer vision? 
There is machine"}, {"start": 369.52, "end": 375.44, "text": " learning, I was quite motivated myself to start learning machine learning on my own, even though"}, {"start": 375.44, "end": 381.12, "text": " my official role at Microsoft back then was software engineer, and so I started doing Coursera"}, {"start": 381.12, "end": 388.64, "text": " courses, Andrew Yang classics, but in all honesty, during that period, ML was not the strongest"}, {"start": 388.64, "end": 394.0, "text": " focus point myself, so I was trying to build strong software engineering foundations, I was also"}, {"start": 394.0, "end": 398.88, "text": " reading a lot of literature that doesn't have anything to do with tech, like investing and"}, {"start": 398.88, "end": 407.04, "text": " finance, etc. And so throughout 2019, I was learning ML, but on a low intensity mode, and nonetheless,"}, {"start": 407.04, "end": 413.36, "text": " the internal management in Microsoft noticed my efforts, and so I got shifted internally from a"}, {"start": 413.36, "end": 418.48, "text": " software engineer role to a machine learning engineer role, and I was sent to this ICCV"}, {"start": 418.48, "end": 423.6, "text": " conference, a famous computer vision conference, and all of this gave me additional boost and"}, {"start": 423.6, "end": 428.48, "text": " motivation to start learning and understanding how machine learning works in all of the nitty-gritty"}, {"start": 428.48, "end": 434.64000000000004, "text": " details. So all of this made me develop my own machine learning curriculum, and next up, I'm"}, {"start": 434.64000000000004, "end": 443.84000000000003, "text": " going to tell you more about it. I knew I wanted to research various subfields of machine learning,"}, {"start": 443.84000000000003, "end": 451.12, "text": " such as various computational R-Telegrams, such as neural style transfer, as well as GANs, or"}, {"start": 451.12, "end": 457.12, "text": " generative adversarial networks, transformers, which are all around the place, and reinforcement"}, {"start": 457.12, "end": 463.52, "text": " learning. And so I decided I want to dedicate like a two to three month periods of time to each one"}, {"start": 463.52, "end": 469.52, "text": " of those subfields of machine learning. How I call those in my head are macro cycles, and inside of"}, {"start": 469.52, "end": 475.12, "text": " each of those two to three month long macro cycles, interspersed are something that I call the micro"}, {"start": 475.12, "end": 481.6, "text": " cycles, and those micro cycles, again, can be of two types, input or output. Now those input cycles"}, {"start": 481.6, "end": 488.0, "text": " are all about me ingesting the information. So the initial micro cycles would be all about just"}, {"start": 488.0, "end": 493.92, "text": " understanding the high level structure of the subfield I'm trying to tackle. So I'd start with"}, {"start": 493.92, "end": 500.24, "text": " some high level resources, such as some blogs or some YouTube videos, stuff like that. And then in"}, {"start": 500.24, "end": 507.92, "text": " subsequent micros, I dig a lot deeper, and I start reading research papers, some chapters of some"}, {"start": 507.92, "end": 513.04, "text": " hard books, stuff like that. The output micro, on the other hand, is all about me outputting"}, {"start": 513.04, "end": 518.16, "text": " information. 
So me teaching you guys the stuff I've learned, and by doing that, I help you out,"}, {"start": 518.16, "end": 524.4, "text": " and also I learn stuff much better myself. So that's a win-win situation. Now some of the public"}, {"start": 524.4, "end": 529.6800000000001, "text": " artifacts that I create during those output micro cycles are things such as YouTube videos,"}, {"start": 529.6800000000001, "end": 534.24, "text": " things such as open sourcing, like a GitHub implementation of some paper or something,"}, {"start": 534.24, "end": 539.92, "text": " and usually those would come in the middle of the macro cycle. And finally, I decided I want to write"}, {"start": 539.92, "end": 545.92, "text": " a blog summarizing everything I've learned investigating researching that subfield of machine"}, {"start": 545.92, "end": 551.76, "text": " learning, and I'd usually publish those on the medium at the end of the macro cycle. Now all of"}, {"start": 551.76, "end": 557.6800000000001, "text": " this might seem like a perfect plan, but the reality is over the first couple of iterations,"}, {"start": 557.6800000000001, "end": 562.72, "text": " so that means over the first couple of subfields of machine learning, I was making lots of mistakes,"}, {"start": 562.72, "end": 568.5600000000001, "text": " I was not, I was deviating from the plan, but like after a couple of iterations, after a couple of"}, {"start": 568.5600000000001, "end": 574.8000000000001, "text": " months, like I think five or six months, I got much better at this, and yeah. And now let me tell you"}, {"start": 574.8000000000001, "end": 579.44, "text": " a bit more about the topics I was covering over that year and a half of my machine learning"}, {"start": 579.44, "end": 585.52, "text": " curriculum. First up is neural style transfer, and the reason is I'm A, very passionate about"}, {"start": 585.52, "end": 591.84, "text": " computer vision and about images and just visual stuff in general, and B, I like art in general."}, {"start": 591.84, "end": 596.32, "text": " So ever since I was a kid, I loved drawing, painting, I was even doing graffitis back in my"}, {"start": 596.32, "end": 601.76, "text": " high school days. So this field like such a natural, like a topic to start with. Needless to say,"}, {"start": 601.76, "end": 606.88, "text": " I learned a lot about stuff like PyTorch, so the deep learning framework, I learned a lot about"}, {"start": 606.88, "end": 613.0400000000001, "text": " convolutional neural networks, about optimization algorithms, stuff like that. This was also the"}, {"start": 613.0400000000001, "end": 618.88, "text": " first time I read so many research papers, and aside from that, I open sourced three projects,"}, {"start": 618.88, "end": 624.24, "text": " the first one implementing the original neural style transfer paper, the second one is the fast"}, {"start": 624.24, "end": 629.84, "text": " version of the NST algorithm, and the third one was NST applied to videos. During that period,"}, {"start": 629.84, "end": 636.24, "text": " I also created my first ever YouTube video and YouTube playlist on NST, obviously, but I"}, {"start": 636.24, "end": 642.08, "text": " unfortunately did not write a blog sharing the learnings throughout that period, which was"}, {"start": 642.08, "end": 648.0, "text": " actually somewhere around four and a half months long, which is, as you can see, I was deviating"}, {"start": 648.0, "end": 652.4, "text": " from my initial plan of doing it two to three month chunks of time. 
Things were not perfect,"}, {"start": 652.4, "end": 658.48, "text": " but I was getting better, and that's what matters ultimately. Next up, I decided I wanna explore"}, {"start": 658.48, "end": 664.24, "text": " Deep Dream, because ever since I've seen the first outputs of that algorithm, I was fascinated by it,"}, {"start": 664.24, "end": 669.2, "text": " and I knew I wanted to understand the nitty-gritty details of that algorithm. And so again,"}, {"start": 669.2, "end": 675.36, "text": " I started exploring, reading blogs, going through subreddits, exploring projects, and implementing"}, {"start": 675.36, "end": 681.92, "text": " my own project as well, and unfortunately, again, I just created a single video, and I did not write"}, {"start": 681.92, "end": 688.32, "text": " a blog as well, but I was getting better slowly. In contrast to neural cell transfer, I was"}, {"start": 688.32, "end": 693.44, "text": " exploring Deep Dream for only about a month and a half, again, some deviations from the plan,"}, {"start": 693.44, "end": 699.36, "text": " but nothing too alarming. Next up, I switched my attention to generative adversarial networks,"}, {"start": 699.36, "end": 706.24, "text": " because they were, A, very popular back then, so even more than they are now, and B, the GAN"}, {"start": 706.24, "end": 712.08, "text": " component was used all around the place just to make some baseline have a more realistic output."}, {"start": 712.08, "end": 717.28, "text": " And so again, I was reading a bunch of research papers, I implemented three projects as well here,"}, {"start": 717.28, "end": 723.2, "text": " so the original GAN paper, the conditional version of the GAN paper, and finally the DC GAN version,"}, {"start": 723.2, "end": 730.24, "text": " i.e. the deep convolutional GAN algorithm, which is one of the most seminal GAN papers, because it"}, {"start": 730.24, "end": 737.0400000000001, "text": " caused a Cambrian explosion of various novel papers down the line. Aside from GANs, Transformer was"}, {"start": 737.0400000000001, "end": 742.8000000000001, "text": " the name that was thrown all around the place, and I knew I needed to understand and nail the concept"}, {"start": 742.8000000000001, "end": 748.96, "text": " if I was to understand all of the advanced papers that were coming out every single day. And this"}, {"start": 748.96, "end": 754.88, "text": " time, I think I executed everything pretty much perfectly. I implemented and open-sourced the"}, {"start": 754.88, "end": 760.8000000000001, "text": " original Transformer paper, so that was a machine translation system that translated between English"}, {"start": 760.8000000000001, "end": 766.32, "text": " and German and vice versa, and since I knew both languages, it was quite kind of easier for me to"}, {"start": 766.32, "end": 772.24, "text": " debug the code and understand whether the outputs make sense or not. Also, this was the first time"}, {"start": 772.24, "end": 778.64, "text": " I started covering research papers on this channel, and finally the macro was ended by me writing a"}, {"start": 778.64, "end": 785.12, "text": " blog summarizing everything I've learned throughout the last three months. 
After Transformers came the"}, {"start": 785.12, "end": 791.04, "text": " GraphML subfield, and this is very very important because I was thinking about covering this subfield"}, {"start": 791.04, "end": 796.56, "text": " for many many months, and also this is the place where the DeepMind channel opened up for me, but"}, {"start": 796.56, "end": 802.08, "text": " I'm going to tell you about it a bit later in this video. I'll keep this short. Again, I implemented"}, {"start": 802.08, "end": 809.44, "text": " the original GraphAttention network paper, I created a very popular YouTube series of tutorials on"}, {"start": 809.44, "end": 815.44, "text": " graph neural networks, and all in all, I got an awesome response from the whole community of"}, {"start": 815.44, "end": 823.2800000000001, "text": " researchers and engineers alike, and as I said, all of this helped me later open up that DeepMind"}, {"start": 823.2800000000001, "end": 829.6, "text": " communication channel and land a first interview. And finally, probably somewhat expected, the last"}, {"start": 829.6, "end": 834.72, "text": " subfield I was covering was reinforcement learning, something DeepMind is very famous for."}, {"start": 834.72, "end": 841.0400000000001, "text": " I open sourced the original DQN paper, which was a model that was the first model to solve"}, {"start": 842.0, "end": 846.96, "text": " most of the Atari games on a superhuman level. I covered various interesting papers,"}, {"start": 847.6, "end": 856.72, "text": " seminal papers such as the AlphaGo paper, AlphaGo Zero, Mu Zero, DQN, all of those goodies. And once"}, {"start": 856.72, "end": 864.0, "text": " I wrapped up RL, I felt I have enough breath to start covering the most advanced papers as they"}, {"start": 864.0, "end": 869.2, "text": " are coming out, and that's what I've been doing ever since, covering novel research papers on my"}, {"start": 869.2, "end": 876.5600000000001, "text": " YouTube channel. Now, all of this was great. I gained a huge breath and depth in various ML topics,"}, {"start": 876.5600000000001, "end": 882.72, "text": " but I still knew, like considering my background, that I'll need a referral if I was to land an"}, {"start": 882.72, "end": 888.08, "text": " interview at a company such as DeepMind. Now I'm going to tell you something more about how I"}, {"start": 888.08, "end": 893.0400000000001, "text": " managed to get a referral for DeepMind, and I'm also going to give you a couple of tips"}, {"start": 893.0400000000001, "end": 897.52, "text": " for how you can yourself land an interview at a top-tier company such as DeepMind."}, {"start": 903.52, "end": 908.72, "text": " So somewhere around mid-2020, I approached this guy from DeepMind called Petar Valdichkovich."}, {"start": 908.72, "end": 914.1600000000001, "text": " So I just sent him a message on LinkedIn, and we started chatting, which went way easier than I"}, {"start": 914.1600000000001, "end": 919.6, "text": " would have thought, because as it turned out, he was already following my content to my surprise,"}, {"start": 919.6, "end": 924.64, "text": " and I obviously knew who he was because he's one of the leading experts in the GraphML field."}, {"start": 924.64, "end": 929.36, "text": " And so that's how we kicked it off. We started discussing various papers. 
I told him that later"}, {"start": 929.36, "end": 935.28, "text": " that year, I'll start exploring the GraphML field, to which he was very receptive and told me to"}, {"start": 935.28, "end": 940.24, "text": " ping him if I need any help whatsoever. And so obviously I did. We were discussing various"}, {"start": 940.24, "end": 944.56, "text": " Graph neural networks, and I've even implemented a paper where he was the first author of the"}, {"start": 944.56, "end": 951.12, "text": " famous Graph Attention Network paper. Later, in April 2021, we went on a call, and we were just"}, {"start": 951.12, "end": 956.0799999999999, "text": " chatting again, and then I told him I want to start applying for DeepMind. Two minutes later,"}, {"start": 956.0799999999999, "end": 961.6, "text": " without me even being aware of it, he already made a referral, and a couple minutes later,"}, {"start": 961.6, "end": 966.16, "text": " the interviewer came back and told me I have an interview scheduled for Monday, and that was"}, {"start": 966.16, "end": 971.2, "text": " Friday. So that's how my story rolled out. It doesn't have to be the same for each one of you,"}, {"start": 971.2, "end": 976.8000000000001, "text": " obviously. And so here I'm going to give you two tips on how you can get a referral for a top tier"}, {"start": 976.8000000000001, "end": 983.6800000000001, "text": " company such as DeepMind. So the first option is to form a genuine connection with a person from"}, {"start": 983.6800000000001, "end": 989.76, "text": " your target company. And I said genuine deliberately because the worst thing you can do is just spam a"}, {"start": 989.76, "end": 995.52, "text": " random person from that company asking for a favor, asking for a referral. Instead, ask what you can"}, {"start": 995.52, "end": 1000.64, "text": " do for that person. How can you help out? And this is going to vary by a lot, so I can't give you a"}, {"start": 1000.64, "end": 1006.24, "text": " specific advice, but see what that person needs, and you can help out around some project or whatnot."}, {"start": 1006.24, "end": 1012.16, "text": " The second option you've got is to find a project, an open source project from that target company"}, {"start": 1012.16, "end": 1018.88, "text": " of yours, and contribute. Invest a couple of months of your time, push a couple of pull requests,"}, {"start": 1018.88, "end": 1024.8, "text": " add some value to that project, add some value to that company, and somebody, I guarantee you, some of"}, {"start": 1024.8, "end": 1031.6, "text": " the contributors will notice you and will be glad to refer you back to the company. Back to my story."}, {"start": 1031.6, "end": 1037.68, "text": " Once I managed to land an interview at DeepMind, I knew I had to prepare really hard because, as"}, {"start": 1037.68, "end": 1042.96, "text": " you may know, there is this discrepancy between the interviewing pipeline and between your daily"}, {"start": 1042.96, "end": 1049.04, "text": " work, and so I knew I had to refresh some of my algorithms and data structures knowledge, etc. And"}, {"start": 1049.04, "end": 1054.4, "text": " because of that, next up, I'm going to briefly tell you how my final preparations for DeepMind look like."}, {"start": 1058.56, "end": 1064.32, "text": " So I personally did not have a lot of time because I already have landed an interview before I even"}, {"start": 1064.32, "end": 1070.4, "text": " started preparing for the interviewing process. 
So what I've done is I went through this algorithmic"}, {"start": 1070.4, "end": 1075.6000000000001, "text": " textbook with which I was already familiar with back from 2017 when I was applying for big tech"}, {"start": 1075.6000000000001, "end": 1080.72, "text": " companies. I also went through some chapters of cracking the coding interview book, and finally"}, {"start": 1080.72, "end": 1085.1200000000001, "text": " I went through this mathematics for machine learning book. Aside from the books, obviously,"}, {"start": 1085.1200000000001, "end": 1090.48, "text": " I was using various other resources like YouTube videos, etc. And the whole point for me was to"}, {"start": 1090.48, "end": 1096.3200000000002, "text": " cover four broad areas. So that would be machine learning, statistics, computer science, and"}, {"start": 1096.32, "end": 1101.4399999999998, "text": " mathematics. So aside from those technical topics, I knew I had to prepare for the behavioral questions"}, {"start": 1101.4399999999998, "end": 1106.1599999999999, "text": " as well, and so that's where the initial part of the cracking the coding interview book came in"}, {"start": 1106.1599999999999, "end": 1111.4399999999998, "text": " handy again. Finally, because this was DeepMind, I did some research on the artificial general"}, {"start": 1111.4399999999998, "end": 1116.8799999999999, "text": " intelligence from both the philosophical as well from the engineering side, and for every single"}, {"start": 1116.8799999999999, "end": 1122.8799999999999, "text": " interviewer, I've done my research and I've read many of the papers that they've published. So all"}, {"start": 1122.88, "end": 1128.4, "text": " of these preparations were interspersed between various interviews I had, which were usually like"}, {"start": 1128.4, "end": 1133.8400000000001, "text": " maybe 10 days apart from each other. Finally, I've passed all of the first five interviews,"}, {"start": 1133.8400000000001, "end": 1140.0, "text": " and on the last day, I had two more interviews scheduled, and again, I failed. And now it's"}, {"start": 1140.0, "end": 1146.5600000000002, "text": " really hard to express how devastated I felt, and if there is a single piece of advice that's that"}, {"start": 1146.5600000000002, "end": 1151.2, "text": " never put all of your eggs in a single basket, and that's what I did. I was laser focused on"}, {"start": 1151.2, "end": 1157.3600000000001, "text": " getting this job at DeepMind. In general, you should apply for multiple companies, because"}, {"start": 1157.92, "end": 1163.6000000000001, "text": " first of all, the hiring pipelines are almost always noisy, and so sometimes, even though you're"}, {"start": 1163.6000000000001, "end": 1169.6000000000001, "text": " a good candidate, you may end up getting a rejection. Luckily for me, because I was deemed as"}, {"start": 1169.6000000000001, "end": 1175.3600000000001, "text": " a technically strong candidate, I was rerouted to another group inside of DeepMind, and four"}, {"start": 1175.3600000000001, "end": 1181.04, "text": " interviews later, I got a yes, I got an offer from DeepMind, I became a research engineer,"}, {"start": 1181.04, "end": 1186.32, "text": " and that's pretty much my story. So I tried to be maximally transparent in this video, but I had to"}, {"start": 1186.32, "end": 1191.2, "text": " ditch some details, because otherwise, this video would be too long. 
But if you have any additional"}, {"start": 1191.2, "end": 1195.68, "text": " questions, feel free to write them down in the comment section, and I'll make sure to answer"}, {"start": 1195.68, "end": 1201.36, "text": " every single one of your comments. Hopefully you found this story useful, or at least inspiring,"}, {"start": 1201.36, "end": 1206.72, "text": " if nothing else, and I encourage you to subscribe to this channel, share this video out if you liked"}, {"start": 1206.72, "end": 1223.52, "text": " it, and also join the Discord community. Until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=fRJzU393YLY
Channel update: moving to London in 2 days, new MLOps series
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE A brief update on what is going on with the channel! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ New Flax notebook: https://github.com/gordicaleksa/get-started-with-JAX/blob/main/Tutorial_4_Flax_Zero2Hero_Colab.ipynb ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Moving to London (DeepMind job) 00:48 Flax video coming soon (notebook is on GitHub) 01:07 Next series after JAX: MLOPs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #channelupdate #london #mlops
What's up guys, I want to make a super short channel update video. In two days I'll be moving to London because of my DeepMind job, and that means that my YouTube upload schedule has been affected and will continue to be affected for at least one or two more weeks. The reason is, once I arrive in London, over the first six days I'll be living in an Airbnb apartment and then I'm going to move to a temporary accommodation. So it will take me some time to set up my YouTube studio before I can start filming videos as usual. I just wanted to let you know that that's happening. Aside from that, I already have a video filmed and ready to be published next week, so stay tuned for that one because I think a lot of you will find it very useful - it was one of the most requested videos on this channel. Also, just yesterday I checked in the novel tutorial number four notebook for the JAX series of tutorials, and it's covering Flax, i.e. the neural network framework built on top of the JAX ecosystem. So you can play with the code yourself or you can just wait for me to upload the video covering the whole thing. Finally, after I wrap up covering the JAX tutorials, I'm going to start covering research papers again on this channel, and I'll also start covering MLOps, because a) I want to learn a lot more about the MLOps space myself and b) it was pretty much one of the most requested topics in general. So yeah, anyways, stay tuned and until next time, bye bye.
[{"start": 0.0, "end": 4.0, "text": " What's up guys, I want to make a super short channel update video."}, {"start": 4.0, "end": 8.8, "text": " In two days I'll be moving to London because of my deep mind job"}, {"start": 8.8, "end": 12.3, "text": " and that means that my YouTube upload schedule has been affected"}, {"start": 12.3, "end": 15.9, "text": " and will continue to be affected for at least one or two more weeks."}, {"start": 15.9, "end": 23.0, "text": " So the reason is, once I arrive to London, over the first six days I'll be living in an Airbnb apartment"}, {"start": 23.0, "end": 25.7, "text": " and then I'm going to move to a temporary accommodation."}, {"start": 25.7, "end": 32.8, "text": " So that means I'll take some time to kind of set up my YouTube studio before I can start filming videos as usually."}, {"start": 32.8, "end": 35.2, "text": " So I just wanted to let you know that that's happening."}, {"start": 35.2, "end": 40.4, "text": " Aside from that, I already have a video filmed and ready to be published next week"}, {"start": 40.4, "end": 44.9, "text": " so stay tuned for that one because I think a lot of you will find it very useful"}, {"start": 44.9, "end": 48.8, "text": " because it was one of the most requested videos on this channel."}, {"start": 48.8, "end": 57.099999999999994, "text": " Also, just yesterday I've checked in the novel tutorial number four notebook for the Jax series of tutorials"}, {"start": 57.099999999999994, "end": 62.599999999999994, "text": " and it's covering Flex, i.e. the neural network framework built on top of the Jax ecosystem."}, {"start": 62.599999999999994, "end": 67.7, "text": " So you can play with the code yourself or you can just wait for me to upload the video covering the whole thing."}, {"start": 67.7, "end": 74.3, "text": " So finally, after I wrap up covering Jax tutorials, I'm going to start covering research papers again on this channel"}, {"start": 74.3, "end": 77.5, "text": " as well as I'll start covering MLops"}, {"start": 77.5, "end": 82.2, "text": " because a. I want to learn a lot more about MLopspace myself"}, {"start": 82.2, "end": 86.5, "text": " and b. because that was pretty much one of the most requested topics in general."}, {"start": 86.5, "end": 108.5, "text": " So yeah, anyways, stay tuned and until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=6_PqUPxRmjY
Coding a Neural Network from Scratch in Pure JAX | Machine Learning with JAX | Tutorial #3
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE Watch me code a Neural Network from Scratch! 🥳 In this 3rd video of the JAX tutorials series. In this video, I create an MLP (multi-layer perceptron) and train it as a classifier on MNIST (although it's trivial to use a more complex dataset) - all this in pure JAX (no Flax/Haiku/Optax). I then add cool visualizations such as: * Visualizing MLP's learned weights * Visualizing embeddings of a batch of images in t-SNE * Finally, we analyze the dead neurons Credit: Got the inspiration from the official advanced JAX tutorial here: https://jax.readthedocs.io/en/latest/notebooks/Neural_Network_and_Data_Loading.html ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Get started with JAX GitHub: https://github.com/gordicaleksa/get-started-with-JAX ✅ Dead neuron article: https://pytorch-lightning.readthedocs.io/en/latest/notebooks/course_UvA-DL/02-activation-functions.html ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 Intro, structuring the code 00:03:10 MLP initialization function 00:13:30 Prediction function 00:24:10 PyTorch MNIST dataset 00:31:40 PyTorch data loaders 00:39:55 Training loop 00:49:15 Adding the accuracy metric 01:01:45 Visualize the image and prediction 01:04:40 Small code refactoring 01:09:25 Visualizing MLP weights 01:11:30 Visualizing embeddings using t-SNE 01:17:55 Analyzing dead neurons 01:24:35 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #jax #neuralnetwork #coding
What's cracking guys? This is the third video in the Machine Learning with Jack's series and in this one We're going to build a neural network from scratch I'm going to be using a multilayer perceptron and I'm gonna train a classification model on top of the MNIST dataset By the way, I'm going to be using MNIST just for convenience Conceptually, even if we were to use ImageNet, the code would be pretty much the same Just for the sake of me coding this real-time and this video not taking three hours I'm going to be using this small dataset such as MNIST Finally, we're going to be using a PyTorch data loaders to load the data from MNIST and the reason is Jack's designers by design decided not to develop loaders because other libraries such as frameworks such as TensorFlow and PyTorch have already done a decent job at Adding this functionality so they did not want to reinvent the wheel So that's in a nutshell what this video will be about and then as a bonus points I'm gonna do some visualizations and finally, we're hopefully I'll be able to parallelize the training in the case that Colab allows me to use the TPU if not Previous videos already explained how to do that. So that's just a bonus point Anyways, let's let's start with that and structure this this whole video. So first thing we're going to do MLP training on on MNIST dataset, right? That's the first thing the second thing we're going to do is we're going to do some visualizations and Finally, the third thing is going to be basically adding the Parallelization so yeah, hopefully I spelled that correctly So now let me break these tasks into smaller sub tasks first thing where we need to create a init function in it MLP we need to and add the Predict function. So those are going to be two things we need to do here Whoops to do whoa. I don't know how to write today. Okay, that's the first to do the second one will be basically add data loading in pytorch and the third to do is going to be add the training loop loss function and whatnot maybe accuracy or something like that So that's the the breakdown of the first section of this video now here. I'm gonna just do something like maybe visualize visualize the MLP weights so we're just gonna See whether we can notice some patterns especially in the shallower layers whether MLP learn to extract certain features Which are salient, which are also like understandable and intuitive to us humans Then I'm going to do something like I'm gonna visualize embeddings using t-sne and finally, I'm gonna do something like visualize dead neurons if I have the time I Won't be breaking this parallelization part because that's that's pending and we're gonna see whether we have time A and B whether I'll have TPUs on disposal. So let's start by breaking down Solving one task at a time. So first let me start with initial init and MLP Initial init MLP function. So that function should given some configuration of the MLP network It's gonna return the weights and biases of the network. So that's a high-level description of what I want to achieve here Now let's go and implement the function. So we're gonna call it Something descriptive like init MLP, I guess and I'm gonna pass layer widths So that's gonna be the configuration and then we're going to start Implementing the function here. So params is just going to be some list basically, we're gonna iterate through this like layer widths and we're gonna take the In width and out width will be the names. 
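As a rough sketch of the pairing being set up here (plain NumPy, exactly as the video starts out; the names follow the transcript, the rest is my own phrasing rather than the literal notebook code):

```python
import numpy as np

def init_MLP(layer_widths):
    # layer_widths, e.g. [784, 512, 256, 10]; zipping the list against itself
    # shifted by one yields (in_width, out_width) pairs: (784, 512), (512, 256), (256, 10)
    params = []
    for in_width, out_width in zip(layer_widths[:-1], layer_widths[1:]):
        params.append([
            np.random.normal(size=(out_width, in_width)),  # weights, (out, in) by convention
            np.random.normal(size=(out_width,)),           # bias
        ])
    return params
```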
So that's the number of Neurons that comes into the layer and number of neurons that goes from the layer And basically I'm going to iterate through the zip version of this so layer widths and I'm gonna index it like this So basically we want to start here with the 0th one and go and just exclude the last element and here I'm gonna Oops, let me paste it. I'm gonna start from the first one and go all the way including the last element So hopefully you'll understand a second why I'm doing this And let me maybe first create the high-level let me call the function and see how that will work So I'm gonna have ML params here. I'm gonna call the init MLP I'm gonna call the init MLP. I'm gonna pass the list here. So First thing that we're gonna do is we're gonna add like the number of Elements of pixels in the MNIST data set then I'm gonna arbitrarily create some configuration So we're gonna have a hidden layer with 512 neurons then 256 and finally 10 So what this function is doing is as you can see here It started from it starts from the 0th one and this one starts from the first one So that means we're gonna be fetching tuples like this this one and then this one and then this one because that will enable me to Create the weight and biases we need for each of the layers. So let's do that So I'm gonna append to params the following thing. So basically We're gonna create a numpy Random we're just gonna randomly Sample data from the normal distribution That is gonna have a shape like this so out with and this is just convention We usually just first put the out Dimension here and then the input dimension. So there's gonna be for example 512 and then 784 here you could it vice versa, but it's gonna complicate things later on. So there's just a convention to keep in mind So next thing we'll need We'll need a bias term. So there's gonna be something like this random And and we're gonna need just the out with because that's the bias part, right? So this should now work Let me close this list and let me just return back the parameters here. Let's return the params and Let's see whether this is going to work. So In order to understand whether this is correct. We can maybe use the tree map functionality from Jax So I'm gonna just print Jax tree map I'm gonna create a lambda function here Which is going to just return the shape of the leaf and the leaves are going to be fetched from the MLP params Which means we're going to be taking weight and biases these matrices here and we're gonna be printing their shapes To understand whether this is implemented correctly. So obviously did not import Jax neither Neither did I import non-pice is gonna fail. So I'm gonna first import those libraries import numpy as MP import Jax and I'm gonna run this and I'm going to run this as well Let's see whether this is going to work. So it's invalid syntax. Let's see what I got wrong here NumPy random. Oh, yeah, I added a Closing bracket here so that failed. Let me rerun it Okay, so this looks correct. I think so we have 512 here and then we have 256 here and then we have 10 I'd say this is this is correct. So there's a couple of modifications We need to add here first thing is a however to just use the standard deviation one as the initialization This thing is gonna explode. We need to scale these parameters. I'm not adding any fancy initialization method here I'm just gonna scale these like this just gonna like basically Make the standard deviation smaller By a factor of hundred and that's it. 
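The shape check just described, assuming jax.tree_map (jax.tree_util.tree_map in newer JAX releases) and the layer configuration used in the video; the scaling by a factor of a hundred is the crude fix mentioned above rather than a proper initialization scheme:

```python
import jax
import numpy as np

MNIST_IMG_SIZE = (28, 28)

# in the video the 0.01 scale is applied inside init_MLP itself, e.g.
#   scale * np.random.normal(size=(out_width, in_width))
MLP_params = init_MLP([np.prod(MNIST_IMG_SIZE), 512, 256, 10])

# print the shape of every weight/bias leaf to sanity-check the structure
print(jax.tree_map(lambda leaf: leaf.shape, MLP_params))
# [[(512, 784), (512,)], [(256, 512), (256,)], [(10, 256), (10,)]]
```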
The second thing we need to add here is now that we have implemented numpy's random Number generator so we'll actually be using Jax random number generator So that's why I'm gonna just do that right now So it was easier for me conceptually to first start with numpy now. I'm gonna just quickly Change this so I guess checks random PRNG key or something is that the functionality I'm gonna add a seed here. Whoops I'm just gonna add some constants here. So C is gonna be zero I'm gonna be using seed here and I'm gonna pass RNG inside of the initmlp function we need to add it here as the argument and now the thing we need to do is create the keys out of the this generator So this is actually also a key. So I'm gonna rename it because it will be misleading I'm gonna pass the key here and from the key, I'm gonna create keys using the split method So I'm gonna create keys using Jax random split method and I'm gonna pass in the key and I'm gonna set the number of keys to Whatever the number of layers is so that's length of this thing minus one, right? Because the number of layers is by one smaller than the length of this configuration file because we're taking tuples Remember so this this and then finally 256 and 10 as the last layer So that's gonna be the number of keys and now we are going to additionally add to this zip function I'm gonna add the keys and I'm gonna then add the key part here. And finally, I'm going to split it additionally into the weight key so we need to split it into weight key and Bias key so something like this. So I'm gonna do again Jax random split and I'm gonna pass in the key and that's gonna split Let me see whether this is going to create a problem because this thing I'm gonna call this the parent key just for the sake of Avoiding some bugs here. So parent key here and creating keys and I'm iterating through the keys here I'm splitting that into the weight key and the bias key and finally we're going to now Input the weight key and the bias key and create the correct random function from Jax Not completely sure about this syntax here I'm gonna just try something that makes sense and then I'm gonna search if it doesn't work So Jax random something like that and then I'm gonna pass Like the key so I'm gonna pass the weight key And then I'm gonna paste Pass the shape here. So this should be the syntax and then here I'm gonna again Do Jax random and I'm gonna pass the bias key here and the shape is going to be something like this So this should now work. Let me kind of split this into two lines. It's more readable Let me let me kind of try and refactor this a little bit something like this and then I'm gonna indent this a little bit more Fuck my life. Okay, and I'm gonna do it like this Okay, this looks a bit better. So hopefully this works. Let's see whether it makes sense. We have the parent key We split it into multiple keys here. We use those keys So we'll have a key per layer every layer We have a specific key and then we do an additional split because every layer has weights and bias We pass the weights we paste the bias and that's it. This should now work as expected. Let me try and run it hopefully it's gonna work and It's not working because a module object is not callable. Let me see. What have I done here? Aha I called random is the module not the function. So let's see how the the actual syntax looks like Jax random random. I don't know. 
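Pulled out as a standalone illustration, the key-splitting pattern being described goes roughly like this (JAX's PRNG is functional: one parent key is split into per-layer keys, and each of those is split again into a weight key and a bias key):

```python
import jax

seed = 0
parent_key = jax.random.PRNGKey(seed)

# one key per layer; a [784, 512, 256, 10] config has len(...) - 1 = 3 layers
keys = jax.random.split(parent_key, num=3)

# per layer: one key for the weight matrix, one for the bias vector
weight_key, bias_key = jax.random.split(keys[0])
```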
Let's see Whether this is going to Yeah, this is the wrong file, let me try and see whether we have some interesting information here So we have random uniform here, but do we have random something? That's not uniform generator because uniform distribution is not as good for parameter initialization as Gaussian because Gaussian tends to because of the mean zero We're gonna have an expectation smaller weights than using uniform I guess and also it's gonna be symmetric around zero. That's important. So I really need to find the Actual Gaussian Random, let me just use Something like this Gaussian Jax random Gaussian Will this work? Let me open up a couple files here I Think we saw this one Okay, let me let me see whether I can just Grab a number from here Nope random seed random uniform They're just using uniform numbers all the time here. What what what's this? Oh Random normal. Okay. Okay. This is the syntax random normal. Whoops that that took some time. Okay Jax random and then dot normal and that's gonna work random dot normal and hopefully now this will work as expected and It seems it does everything looks correct here, I think we are we are good to go to the next section so We have the correct scale. We have the correct Structure three layers every layer has weights and every layer has bias We've used Jax's pseudo run random number generator and we had some primitive initialization scheme here using the scale parameter Now let's do the second part and that's the add the predict function. So now that we have this Let me just add another cell here Let me add a function called MLP predict we're gonna pass in the parameters of the MLP and we're going to What else we need some data? So let's pass in the input data here. This is going to be a single image And a flatten one because we're using an MLP. We're not using a convolutional neural network here. So How we're gonna split this is we're gonna take the hidden layers here So maybe let's call this Yeah, I'm just gonna do it like this so hidden layers I'm gonna take the The params all the params except for the last one That's gonna be the hidden layers the two one the first two layers and then I'm gonna iterate through those layers So I'm gonna just iterate like this We're iterating through the through the hidden layers and taking the weight and bias matrices Then we need to Set activation something like this activation is gonna be initially equal to whatever the input is And then we're just gonna reuse it here. So activation activity Reasion, I don't know how to spell today Basically, this is going to be What we need to do activation function and we need to create the the the mapping so mapping will be basically Jax I guess stop product No, I'm gonna add the jack jacks non-pay API here. So I'm gonna add Jacks non-pay as JMP because that's gonna be useful. I'm gonna rerun the cell and Finally here so JMP dot should work. I'm gonna pass in the weight and then the example and then I'm gonna add the bias term so this thing should be This is the functionality of the linear layer. Finally, we need the relu So I think and then contains the rally unit. Yep, it does so something like this should now function and this should not be Exit should be activation. Okay So this show work Let me see whether it makes sense So we're doing a dot product. We are adding the bias. We are doing the activation function and we get the activation out This looks reasonable. Let me now do the final step. 
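Before the prediction function's last layer gets wired up, this is roughly where the initializer has landed once jax.random.normal is in place (a sketch reconstructed from the walkthrough, not a verbatim copy of the notebook):

```python
import jax
import numpy as np

def init_MLP(layer_widths, parent_key, scale=0.01):
    params = []
    keys = jax.random.split(parent_key, num=len(layer_widths) - 1)
    for in_width, out_width, key in zip(layer_widths[:-1], layer_widths[1:], keys):
        weight_key, bias_key = jax.random.split(key)
        params.append([
            scale * jax.random.normal(weight_key, shape=(out_width, in_width)),
            scale * jax.random.normal(bias_key, shape=(out_width,)),
        ])
    return params

key = jax.random.PRNGKey(0)
MLP_params = init_MLP([np.prod((28, 28)), 512, 256, 10], key)
```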
The final step is we get the last and The last layers weight and bias so something like this params Of the last element of the last layer and we do apply this again. So I'm just gonna copy paste this We're gonna copy paste this I'm gonna use the last one The reason I'm doing it like this is because the last layer does not have the relu activation so these are going to be just logits and We could alternatively do it in a single for loop but then I'd have to have a switch statement like if Last layer then apply relu otherwise don't apply relu. So at the end is gonna be even Less readable. So I'm just gonna do it like this and finally we're going to return What we're going to return logits But not quite because Jax does not have cross entropy loss implemented. We'll have to be resourceful here and Create something that's going to be Maybe confusing at the first point. We're going to be implement importing from the Jax SciPy, I think special we're gonna import something called Like X log some or something X log some I don't know. Let me try and Search for this not completely sure So, yeah log some X log some X log some X so this thing is gonna allow us to Implement a numerically stable version of cross entropy So what this what this is is the following so imagine you we have ten outputs, right? We have ten outputs from the this amnest MLP model And let's take a single one. Maybe the single output will be called a one So this is the raw output from from from the logits and we can Equivalently represent this without changing anything like this log X of of one because These two are just inverse of each other. So this thing is equivalent to this thing here. So now What this will do is it's gonna add a log sum So it's gonna do something like log and it's gonna have a sum of Exponentials so something like this Oh one and Then it's gonna have obviously because it's a sum. It's gonna have as a second argument The second output and then the third output and you get the point all the way down So if you just look at this expression now, let me just kind of copy paste it here So that it's more Visible and now we're just gonna apply the simple rule. Basically if you try and subtract the logarithms That's equivalent as if you were to take the arguments and just divide them So we're just gonna take the this argument here. So The left argument and we're just gonna take the left argument And we're just gonna take the right argument and we're just gonna take the left argument And we're just gonna divide it by this sum here of all these basically exponents and you can see that this is Pretty much we just implemented softmax The only difference is that we have log softmax and that's it So this is what this thing implements and that's gonna be super convenient a bit later We're gonna see why in a second, but let me now test this function as well So as you can see here every time I write a certain module like this init MLP, I test it here so this is a Not not that rigorous testing but some testing nonetheless so let me try and Pass some data here. So I'm gonna just create a simple dummy Like image here. So dummy image flat. I'm gonna call it that way and this is going to be ran then Basically again is gonna be MP prod of the M NIST Image size this should work correctly Let me just print the shape to make sure this is this has the correct shape. 
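The prediction function as described: a ReLU-activated loop over the hidden layers, a final linear layer for the logits, and log-softmax obtained via logsumexp exactly as derived above (again a sketch matching the transcript rather than the literal code on screen):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def MLP_predict(params, x):
    hidden_layers = params[:-1]

    activation = x
    for w, b in hidden_layers:
        activation = jax.nn.relu(jnp.dot(w, activation) + b)

    # last layer: plain linear map, no ReLU
    w_last, b_last = params[-1]
    logits = jnp.dot(w_last, activation) + b_last

    # log-softmax: o_i - log(sum_j exp(o_j)), i.e. a numerically stable log of the softmax
    return logits - logsumexp(logits)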
It should be a flat image so 784 Should be the shape and we're going to pass that into the MLP predict and see what we get as the prediction So prediction is going to be MLP predict We pass the MLP parameters we generated up there. We pass it here. We pass in the Image here So something like this and this should now work Hopefully, so I'm gonna just print out the prediction Shape here. So let me try and run this thing So name log some XP is not defined because I have not rerun this thing Whoops, okay. Let's see what's wrong here. Let me reopen the tab here Jacks SciPy special Jacks SciPy special aha From Jacks SciPy special import. Yeah I'm really bad today. Okay. Let me try and rerun this again And see whether it works. It does not because unsupported operand types unsupported operand types, blah blah blah the device array and function Okay, something went wrong here. Let me see what so I'm passing the MP prod should give me so how you debug this is you just open up another cell here I'm just gonna do it like this I'm gonna just print out the MP prod of this thing and see whether this gives me the number it and it does so this thing returns back the number and For some reason this is not working Okay, I got it all wrong. This thing actually worked. The problem is here and even have the red squiggly line I'm stupid basically. I just I just subtracted a function from from from this I need to pass in the logits here and now this should work. Hopefully, okay, and it does work. So the shapes do work as expected so now because this is Jax without Reimplementing this function for the batch of data. We're just gonna wrap it up into V map and let's see whether that's gonna work So I'm gonna create a like a batched version so batched Whoops Let me zoom in again. So batched MLP predict is going to be equal V map and then I'm gonna wrap the MLP predict here and Then we're gonna specify the in axis Let me import V map so that I don't forget so from Jax import We're gonna need JIT. We're gonna need V map and we are we're gonna need P map and grad as well So let me rerun this and now let me continue here So in axis is gonna be none because we don't we just want to broadcast the parameters and it's gonna be zero for the Because we have the the batch dimension will be on the zero on the zero dimension So this should now work I think that's it and let me try and test this function again. So again Let's test it Small test here Basically, I'm gonna create this same thing Except that this time I'm gonna add a batch dimension of 16 here Let me check the shape here So this is gonna be images flat and I'm gonna pass that into the batched version and see what the predictions are So predictions predictions Batched version we pass in the again MLP parameters. We pass in the images flat And if I were to run this I'm gonna just print the predictions Shape and I'm gonna comment out this first test here So this should now work as expected hopefully and it does so we get the oh, I forgot to actually use this this version of the images so this should be 16 Yeah, so this seems to to work correctly and that means we have done the first part here And now let's jump to adding the data loading functionality. 
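For reference, the single-example test and the vmap wrapper that was just put together look roughly like this; in_axes=(None, 0) broadcasts the parameters and maps over the leading batch axis of the images:

```python
import numpy as np
from jax import vmap

# single flattened dummy image
dummy_img_flat = np.random.randn(784)                   # 28 * 28
print(MLP_predict(MLP_params, dummy_img_flat).shape)    # (10,)

# batched version: params broadcast (None), images mapped over axis 0
batched_MLP_predict = vmap(MLP_predict, in_axes=(None, 0))

dummy_imgs_flat = np.random.randn(16, 784)
print(batched_MLP_predict(MLP_params, dummy_imgs_flat).shape)  # (16, 10)
```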
So I'm just gonna annotate this like this So this was a small test and this thing These things here These things here were also tests and Finally, okay, let's add another cell here We need to add the data loading so First thing we need to add the MNIST dataset and then we'll need to add the data loader Which is going to load the images that we're gonna pass into this into this MLP So let's start with the syntax PyTorch dataset MNIST I'm gonna search for it and let's see how the syntax goes like So basically TorchVision datasets MNIST something Let's let's see some example whether they have MNIST here. They do not have MNIST Let me see whether I can type something in like TorchVision datasets MNIST example Okay, let me return back here I think this is going to work I'm just gonna copy Copy this thing. Let me put this tab here. I'm gonna need it probably a bit later So something like this should work. I'm gonna import here so import So from these datasets, I'm gonna import the MNIST dataset And now I'm just gonna go ahead and add the data loader and import the MNIST dataset And now I'm just gonna be using MNIST. So I'm gonna run this Hopefully this import will work Let me see Okay, it worked. So now I'm gonna use MNIST here Basically, let me delete this and I'm gonna create a train dataset By calling MNIST and passing in let's see what the arguments here are So we have root that's where the data will be downloaded. Then we have the split I'm just gonna leave this as the default one Train, download and transform So it sucks that it doesn't have an example here that can just copy paste So let me just do it like this. We need roots. We need basically split We need to train, download and transform. Those are the four we'll need So root is just gonna be basically The structure, how it looks like, is the following So if I was to add an additional cell here Import OS package and print the OS get current working directory It's gonna print content. So this thing here is called the content directory So I'm gonna, inside of this directory, I'm gonna into the I guess train MNIST I'm gonna download the train dataset And we're gonna need, what else? We need the train equals to true So that's gonna make sure that this is the training dataset And finally we need download and transform So download, I guess download is by default true here, right? 
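The dataset setup at this stage, as far as it can be reconstructed from the walkthrough (the 'train_mnist' root path is a placeholder standing in for the directory mentioned in the video):

```python
import numpy as np
from torchvision.datasets import MNIST

def custom_transform(x):
    # convert the PIL image handed out by torchvision into a float32 NumPy array
    return np.array(x, dtype=np.float32)

train_dataset = MNIST(root='train_mnist', train=True, download=True,
                      transform=custom_transform)

img, label = train_dataset[0]
print(type(img), img.shape, type(label))   # numpy.ndarray, (28, 28), int
```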
So download, if true, downloads the dataset from the internet and puts it into root directory If dataset is already downloaded, it is not downloaded again So yeah, we wanna download it and then transform is gonna be needed as well We're gonna see why in a second So for now let me just omit it I'm just gonna put transform to none And that's it Let me just print the type of this thing Train dataset just to have something as a placeholder And let me rerun this, let me close this, let me rerun this and see whether this works And it does It's downloading the data, I think it's finished We have train MNIST dataset right here in this Google Colabs storage space So let's now analyze what this dataset is So basically if I were to fetch like a something from the train dataset So I'm fairly sure it has the index functionality So I can just index and take the 0th one I'm gonna print the type of this something And that's going to be basically, as you can see here, pill image and we have a label So that's the structure of our dataset So because we're working with Jax, we're not going to be using PyTorch tensors So that means we need to convert this into a NumPy array and not pill image So how do we do that? And the answer is using this transform functions We're just gonna add a simple transform function So I'm just gonna call it custom transform And what the function is going to do is It's gonna take the image and it's going to convert the image into NumPy So something like this, so NumPy array of this x And I'm just gonna set the data type to MPFloat32 And I'm gonna return this thing I don't even need a temporary variable here So just return this And now if I were to run this, let me see whether it's gonna be converted into NumPy And it's not Because I forgot to add the custom transform here So let me run this now And yep, I'm getting unveiled the output here, which is a good sign actually So let me, instead of printing something, let me print the something 0 And then the type, this should be, I guess, NumPy And it is Okay, so we got this sorted Basically we now have a NumPy Float32 image And the second, let me just check what the second part of this something tuple is So let me just check whether that's integer or whatnot So it is, it's an integer So we have, because it's a label So I think that should be alright Now, we need to additionally create the test dataset So I'm just gonna copy paste this part here I'm gonna copy paste it here This is going to be a test Train is false because we want a test dataset Download equals true is okay And custom transform, that's all okay So let me just run this again and see whether it's alright So it's downloading the test data And as you can see here, we have the test MNIST in the storage space Okay, cool Second thing we need to do is add the data loader So we need to be able to load these images in I think I just understood as I made a mistake here We need to flatten out images If I were to, again, if I was to take the image label tuple Because I know what this is now I'm just gonna take the train dataset I'm gonna take the 0th one And I'm just gonna take the, actually just Let me just grab the image By adding 0 here This is an image, but the thing is The shape of this image is, I guess, 28 by 28, yep So we need to add the Just flatten the function, flatten the thing here And if I were to run this, we get the correct shape now So this is now correct Basically the last thing we need to add here is just data loader So if I was to search it PyTorch data loader 
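With the flattening fix folded into the transform and the test split added, this part presumably ends up as something like the following (whether the video uses np.ravel or .flatten() is not clear from the audio; either works):

```python
import numpy as np
from torchvision.datasets import MNIST

def custom_transform(x):
    # flatten 28x28 -> 784 right away, since the MLP expects flat vectors
    return np.ravel(np.array(x, dtype=np.float32))

train_dataset = MNIST(root='train_mnist', train=True, download=True,
                      transform=custom_transform)
test_dataset = MNIST(root='test_mnist', train=False, download=True,
                     transform=custom_transform)

print(train_dataset[0][0].shape)   # (784,)
```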
Let's see what I get here Data loader Okay, we have the signature here I'm just gonna copy paste it And let's see whether that's gonna work for us So, I'm gonna obviously need to import the data loader functionality So let's see where can I import it from Torch vision or what not, I don't know Let's see here Import PyTorch data loader Let's open up this data tutorial So basically data loader Oh my god, I hate this Okay, from Torch, utils data, import data loader Okay, let me go back here Let me import that part Let me rerun the cell And now we have the data loader Okay, so this will work I'm gonna use the train data set I'm gonna create the train loader And I'm gonna add some random parameters here So batch size will be set to, for example, 128 And so just passing the batch size Shuffle is gonna be equal to true Because this is a train data set We wanna shuffle We don't need the custom sampler We don't need a batch sampler Number of workers is not important We'll need a collate function Because now we have a Basically we're not using PyTorch sensors These are all just optimization parameters Such as pin memory So we don't need none of those So let me just do it like this So we additionally need to create a collate function Because let me see what happens if I were to just Use it without a custom collate function So batch of data is equal to next I'm just gonna convert the train loader into iterator And then just fetch the next batch using the next function Let me see what this thing is Batch of data And whether it's gonna work So it's a list Okay, let's see what this list consists of So if I were to take the zeroth element And run it again It's a tensor So as you can see PyTorch by default creates a tensor If we do not add the custom collate function So that's why I'm gonna add a custom collate function here So def custom collate function We pass in the batch of data Basically we want to have all of the images Batched into a single lump array And we want to have all of the labels in a separate data structure Let's see what we currently get So I'm gonna just grab this custom collate I'm gonna paste it here So this batch input is just going to be a list of tuples And each tuple contains the numpy image and contains the label So we need to convert that into a bit different shape So instead of having a list of tuples We need a list with two elements One is all of our images batched into a numpy array And the second one is all of our labels basically Inside of a single numpy array So just to make sure I'm correct here Let me print the type of batch that should be a list Then let me print the type of the batch of zero that should be a tuple Finally let me print the type of the batch of zero And then zero this should be numpy And if I were to copy paste this This should be I guess integer And yeah This is gonna fail obviously because we're not returning anything Hopefully the print functionality will work nonetheless Yeah we have a list, then we have a tuple, then we have numpy array, we have integers So I already knew the structure obviously If you did not know the structure you can just kind of play with it Just analyze the shapes, analyze the types, etc. 
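A minimal sketch of the loader setup being described, with batch size 128 and shuffling for the training split; `custom_collate` is the collate function whose internals are walked through in the next passage:

from torch.utils.data import DataLoader

# Without a collate_fn, PyTorch stacks the NumPy images into torch tensors,
# which is exactly what the custom collate function is meant to avoid.
# drop_last=True only becomes important later, for the loader-based accuracy.
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True,
                          collate_fn=custom_collate, drop_last=True)

batch = next(iter(train_loader))   # grab one batch to inspect its structure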
Okay so what we need to do here is we need to transpose this data So I'm gonna do it something like this Transposed data is gonna be I'm just going to unpack the batch of data here and then zip it So what this will do is we'll now have So this is a list of tuples After we do this we'll just have a bunch of tuples and zip We'll take the first element from every tuple that means all of our images And it's gonna put it into a single list And then it's gonna take all of our labels from all the tuples And it's gonna combine it again into a list So we can make sure that's correct So this again should be a list This thing here And basically the zeroth element should be a list as well And the first element should be a list as well So this should be All of these should be lists Let me check whether that's the case Zip is not subscriptable Let me see what I have done So basically let me check Oh yeah, I have to convert this into a list explicitly here So let me now run it And yeah, so we have list tuple tuple So yeah, instead of a list, they're actually using tuple in the background If I were to try and So the length of this thing should be 2 I guess Transpose data length should be 2 because now we have Yeah, it's 2 Because now we have all of the images and all of the labels in a single tuple So the next thing we need to do is basically Convert the tuples into a NumPy array So I'm gonna do something with this Labels is a NumPy array of this transposed data Of 1 because the second tuple is the labels And then we need images And here it's a bit more complex We need to do MPStack of transposed data of 0 And this will hopefully work as expected Now I'm gonna return back the images and the labels And this collate function should now properly work Let me run this and see whether we have the correct result It seems nothing is crashing So I managed to get the batch of data We're printing the type and it seems Okay, it's a NumPy array Let's see what the shape is So if I was to take the Let me do it like this So batch of 0 are images So that's like this, images And oops And finally, let's see what's the shape of the images Let me delete this thing And run this And yeah, we have a batch of 128 images Which are flattened out And if I was to take the labels Same as this here I'm just gonna add Instead of 0, I'm gonna put 1 for labels And let's see the shape of these labels This looks correct And finally, we need to check the data type So images Let's take the 0th image and print the data type And also let's take the 0th label and print the data type So it's always a smart idea to understand your shapes and your data types To avoid some bugs down the road So flow 32 looks good and in 64 I think that's gonna be totally fine Okay, we have our data loading functionality Let's continue on and see what else do we need to do And that's adding the looping function So the training loop functionality Let me delete this because this is done And let me now finally, let me also delete this And let me add a final cell adding a looping function A training loop function Okay, this should be fairly simple What we need to do is define the number of epochs So number of epochs is gonna be, I don't know, for now let's put something like 10 So we're gonna iterate through the epochs So in range num of epochs We're gonna do it 10 times We're going to do what? 
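Before moving on to the update step, here is a rough sketch of the collate function walked through above:

import numpy as np

def custom_collate(batch):
    # batch is a list of (image, label) tuples from the dataset; zip(*batch)
    # transposes it into (all images, all labels)
    transposed = list(zip(*batch))
    images = np.stack(transposed[0])    # (batch_size, 784), float32
    labels = np.array(transposed[1])    # (batch_size,), int64
    return images, labels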
So we have this pattern where we take the MLP parameters, the MLP params, and we call some update function to which we pass the MLP params and the data, so the images and the GT labels, and this should just update our parameters. That's the high-level functionality, so now we need to write this update function. Let's see how we can do that. So, def update: we pass in the params, we pass in the images, we pass in the labels. Okay, I obviously forgot to add the data loading functionality here, so we'll basically have: for images, labels in the train loader, and now we can pass the images and the GT labels. But there is a small catch here. The images are currently on the CPU, and that's going to be handled correctly by the Jax NumPy API, so we don't need to worry about it. The thing we do need to do here, though, is convert the labels into a one-hot representation; we'll soon see why. So let's first start developing this update function and then we'll understand what else we need to develop. Okay, we obviously need to call the batched MLP predict function, this one. We pass in the parameters, that's this argument here, and we pass in the images, and the expected output shape will be whatever the batch dimension is and then 10, because we have 10 outputs, remember, and these are log-softmax outputs. So let's call those predictions. Finally, this is the trick I told you about that will help us achieve numerical stability for the cross-entropy loss. We need to multiply the predictions with these labels, the GT (ground truth) labels. If we multiply them, and the labels are one-hot, that means we'll fetch the log-softmax component of the predictions wherever the true class is, and we'll want to push that towards one, i.e. we want a probability of one there, because the GT label basically says that's where the true class lies. So now we just need to add a minus jnp.mean here and this is going to work as expected, because this is going to fetch the correct component, this part here, and then we just take the mean. That gives us the cross-entropy implementation, pretty much. Later, if I have enough time, I'm going to do a more intuitive implementation, and we're going to see it doesn't work because of numerical problems. So the final component, as you can see here: we need to convert the labels into the one-hot representation, which means we need Jax's one_hot function for the GT labels. Let me see whether it's in...
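A rough sketch of the loss just described, assuming `batched_MLP_predict` from earlier in the session returns log-softmax outputs and that the labels arrive already one-hot encoded:

import jax.numpy as jnp

def loss_fn(params, images, gt_labels_one_hot):
    # (batch_size, 10) log-softmax outputs
    predictions = batched_MLP_predict(params, images)
    # With one-hot labels, the product picks out the log-probability of the
    # true class for every example; negating the mean gives cross-entropy.
    return -jnp.mean(predictions * gt_labels_one_hot)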
Yeah, it's there. So basically we just convert the labels into one-hot. I'm going to make sure this shape here is correct, so labels.shape, and let's see whether that's going to work or not. Okay, this is obviously a silly mistake: this should be in a loss function, not in the update function. So in this loss function we pass in the parameters, the images and the GT labels, and here we do this thing; all of this goes into the loss function. And now in the actual update function, what we'll do is take the gradients. So we'll call the grad function and wrap the loss function in it, and pass in whatever is required, so that's params, images, and GT labels, and this is going to give us the derivatives of every single parameter in this pytree. So we get the grads here, and finally we need to return: jax.tree_multimap is the function I need here. We just pass in a simple function, so we get the weights, we get the gradients, and what we do is, okay, I don't need this here, let me just do the stochastic gradient descent update: from the weight we subtract the learning rate times the gradient, where I'm just going to add some learning rate here by default, something like, I don't know, 0.01. And finally let me pass in the pytrees, so that's params and grads. So this should now work as expected, but we don't have any metrics here, or any logging of whatever kind, neither the loss nor the accuracy, so I'm just going to put a small break here and then a break here, just to make sure that this thing works. So, one_hot requires one more... okay, I forgot to add the number of classes of MNIST, so MNIST classes, I'm just going to pass in the length of that thing here and see whether this works now. So, learning rate is not defined. Why not? I just added it here, so let's see. Oh my god, I actually placed it here, I don't know why I'm making so many mistakes. Learning rate here, now let's try and run it. Yeah, I expect this is going to take a little while. Okay, so this thing actually worked. Now let me try and grab the loss, not just the grads. So, well, I think the function is called value and grad, so let me just check that: value_and_grad, jax, let's see whether that's the name of the function. Jax package... value_and_grad is the name of the function, okay. So we have value_and_grad here; it's going to return the loss and the grad, and here I'm going to return the loss and the parameters. So let me just do loss here, and let me print the value of the loss, and let's see what happens. Okay, I obviously forgot to import this thing here. Umm, I guess it's from jax, right?
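Putting the pieces together, a sketch of the update step being described; `loss_fn` is the loss sketched above, the learning rate of 0.01 follows the narration, and newer JAX versions use `jax.tree_util.tree_map` where older ones used `jax.tree_multimap`:

import jax

LEARNING_RATE = 0.01

def update(params, images, gt_labels_one_hot):
    # value_and_grad returns the loss together with gradients that share the
    # pytree structure of params
    loss, grads = jax.value_and_grad(loss_fn)(params, images, gt_labels_one_hot)
    # Plain SGD applied leaf by leaf over the (params, grads) pytrees
    new_params = jax.tree_util.tree_map(lambda w, g: w - LEARNING_RATE * g,
                                        params, grads)
    return loss, new_params

# In the loop, the integer labels first get one-hot encoded, e.g.:
# gt_labels = jax.nn.one_hot(labels, num_classes=10)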
Yep, it seems that's correct Let me run this And we get some reasonable value of the loss function here So that's cool Let's now try and remove the break And let's run this for a single epoch And see whether the loss goes down So let me run this and see what we get So it seems that the loss is going down And that's very, very reassuring The only problem is we're getting too many output here So let me just interrupt the execution here And let me clear this output Wow, there's so many Okay, let me just clear this I'm gonna add a conditional logging here So if, I'm just gonna place a numerate here So that we know the batch ID So something like this And if the counter So every, because every batch has 128 images So let's say we can log every 50 batches or something So if this equals zero Finally, let's move the loss inside of this if statement So let me indent it here I'm gonna also just print this thing once And now this looks like it should work Let me run it And see, so 128, 10, this looks to be a good format So I'm just gonna interrupt this execution I'm basically going to just leave the loss printing here And the final thing before I'll remove this break Is to add the accuracy metrics So let's add the accuracy logging as well And then we'll be certain that MLP is learning something So accuracy Again, I'm not sure whether Jax Let me see whether Jax has accuracy metric Yeah, I don't see anything So I'm just gonna implement it myself So basically we're gonna pass the parameters again We're gonna pass the loader So either test or train loader And that should be it I think So the thing we need to do is find the number of correctly classified images So I'm just gonna basically iterate through images and labels Here in the loader I'm gonna again run the So I'm gonna call the batch So something like this I mean this is going to be inefficient Because I'm gonna run it in general if we had some arbitrary data set But this implantation is gonna work for MNIST pretty well So I'm just gonna go through the whole training set and through the whole test set And find the accuracy and report that somewhere here So after we finish the epoch I'm gonna print something like this So epoch And then I'm gonna print the ID of the epoch And then I'm gonna print the train accuracy equals to this And test accuracy equals to this I'm just gonna call the basically accuracy I'm gonna pass the MLP params after the epoch has been finished I'm gonna pass the train loader So this thing And I just forgot to So I basically just realized that I forgot to add the test loader So I'm gonna quickly add that as well So something like this Let me paste it here So now I have to add the test loader Somewhere above Let me see where I've defined Yeah, here So I'm gonna add the test loader It's gonna be the same as the train data loader I'm just gonna swap this for test data set Batch size, I can leave it the same size Shuffle, false No need to shuffle that data set And custom collate function is also fine So I'm gonna rerun the cell And now we have the test loader So this should now all be correctly set up We just need to implement the actual accuracy metric here Let's see, we need to do the same thing as here We need some predictions And then I'm gonna find the basically Instead of finding the raw predictions here So the log softmax I'm gonna do immediately call the soft argmax function And the axis should be 1 So that's gonna find the highest probability class For the particular prediction And we're gonna get basically prediction 
classes here And we're gonna contrast that with labels pretty much So we need to do something like this We'll need some sum temporary variables So sum, I'm gonna call it accumulator Basically accumulator is gonna add here the MP sum Between, which is gonna do predict classes And we're gonna see where those predicted classes are the same as the ground truth classes So I can maybe do something like this just for consistency sake So this, and we're gonna sum them up Find where we had true, that means we had the same class So this is going to work correctly And finally in order to return the metric We need to just divide the accumulated value With the length of the actual data loader I guess Is that correct? No it's not because we have to also add We care about the number of images So I guess we need to multiply that with the batch size So let me see whether loader has batch size variable I'm not sure whether this is going to work Whether Colab autocomplete is just tricking me here But this should work And this may be misleading because this is not the accuracy This is just the accumulated value So I'm just gonna replace this with something like Xsum And yeah this is my find and replace method for Google Colab Okay, this should not work I think Aside from this thing maybe not being correct Let me try and run this Maybe instead of waiting for the whole epoch I'm just going to run it right at the beginning here once Let's just see whether this is going to work Okay this actually failed And the reason is already quite familiar to me from PyTorch So what I forgot to do is basically we have to set the drop last to true here Otherwise we'll have the... So potentially the last batch will not have 128 images And that's where the discrepancy comes to be So if I were to rerun the cell here and then start this thing again Now hopefully we'll get the correct results So basically what happened here is at one point the last batch was smaller than 128 And there was a mismatch doing a comparison here and that's why it failed Here you can see it's not working but the thing is it's very very slow So what I'm going to do instead is I'm going to interrupt this And I'm going to load the whole MNIST into memory because it's a fairly small dataset with only 50k, 60k images So I'm going to do the following I'm going to change my implementation of accuracy So instead of doing the loader thing and then iterating through the whole dataset I'm just going to pass the dataset itself And that means this is going to change a lot So I'm going to just do the following Pass the whole dataset all at once into the MLP predict So I'm going to do something like this params the whole dataset So actually the images So these will be dataset images I'm going to pass them here We're going to have the again the JMP argmax alongside xis equals one So this will give us the predicted classes And I'm also going to pass obviously the dataset labels So and I'm going to compare those two and return the mean So basically JMP mean of this comparison And this is neater implementation so it's more concise And it's also way more fast compared to the last one So let me see where this makes sense So I'm doing inference on the whole dataset And I'm then converting my raw output So the log softmax outputs into the actual class indices here And I'm comparing them to the ground truth and finding the mean Okay so this looks like it should work So now I have to do the following thing So I have to load the whole thing into the memory So I'm going to do the 
following I'm going to take this thing So train images There is this internal member called data I'm going to fetch it that's going to get me the images I'm going to convert this into like a Jack's NumPy array And I'm going to reshape this into Wait let me check what this will return So this will going to be print train images shape Let me see what this is going to return So yeah as you can see the problem here is it's avoiding So this transform is not being called when I just fetched the data like this So we still need to flatten out the images So I'm going to do that I'm going to just reshape this into basically Whatever the length of the train data set is And I'm going to put minus one here So that's going to hopefully flatten out the whole thing And now yeah now we have the correct basically shape Let me just check the type train images It's going to be on the device So that means whatever accelerator I'm using is going to be on that accelerator I'm currently set to using none So yeah because MLP is a fairly simple model and MNIST is a super small data set Let me now do the same thing for labels I need train labels I'm going to do the same thing here I think so at least So I'm going to this time I'm going to access target so that's labels I'm going to convert them into Jack's array I think I need the reshaping so let me check the train label shape And let me check the type of the train labels So we have device array 60K this looks correct I think this is going to work So we need to do the same thing for test data set Let me just copy paste this Replace this with test Test images test labels Test data set here Test data set here Finally test data set here targets this should be working now Okay cool Now I'm going to instead of passing the I'm going to remove this line here And here I'm going to pass the data set and the label So I'm going to pass the train data set The train images And I'm going to pass the train labels And here I'm going to pass basically the same thing just test images And test labels Again I should have waited until I checked this works let me just run it like this See whether this is going to work And it's way faster as you can see and the accuracy is for some reason very very very high already I guess that's because we were assuming using the same MLP parameters so I have to I know what I have done So I'm going to interrupt this The thing is I'm using I'm constantly updating the MLP params so the state is saved here I have to call the init function here So we need to paste the let me find it let me find the init function we need to paste this here So that we start fresh every time we start the training so this now looks like it should work let me run it We have the results very very fast the accuracy is now low that seems to be working Let me interrupt the execution again and I'm going to delete this I'm going to yeah this now should work We're logging the loss every 50 batches we're logging the accuracy every epoch and we have how many epochs we have 10 epochs So let's start with like five and I'm going to run this and yeah let's see what the results are So the loss is going down that's cool that seems to be promising Okay after a single epoch we have 91% accuracy and we do not see any kind any sign of overfitting because the accuracy on both training and test data sets are pretty much the same 91% And now we have 93 I'm going to just skip until the training is done Okay the training is finally done we have 96% accuracy with a couple more epochs we would get to I 
guess 98 at least or something It could be further improved but the thing is this thing is already working So let me just kind of clear the output and let me try and run a prediction on a single image and I should have done this previously So let me fetch a single image visualize it and see whether the ground truth label like aligns with the predicted label So I'm going to do the following thing I'm going to fetch the single image so images let me take the like test loader I'm going to wrap it into the iterator here and take a single batch I'm going to take a single batch and I'm going to take the only the images and then I'm going to take a single image And make let's make sure that this makes sense so image shape should be 28 by 28 Oh yeah I have to just reshape it into MNIST image size let me see where this is going to work it is Now let me import ad hoc here I'm going to import just map plot live pyplot as plt Plt image show the image and let me just plot that thing So this looks correct we have number seven We'll also need labels so I have I'll have to modify this I'm going to grab labels as well so labels And let me check the the the ground truth label here so ground truth label is going to be labels of zero And let me print the the actual label here print the GT label it should be seven and it is OK So now let's see what our network is giving us if I was to do the MLP MLP predict on top of a image But I'll have to flatten it out so I'm going to pass the MLP params and I'm going to pass in the what I'm going to pass MP Revel of the image will this work let me check so let me check whether this is going to work These are just predictions and yeah we get some results let's now just do the argmax instead And see what we have so yeah we had we have the predicted value of seven so let's just add some string here And this is GT like this I'm going to paste it here let me run this and see whether it makes sense So predicted seven and GT seven so I have I should have done this image analysis while I was developing the Loaders here so basically while I was developing these data sets and loaders but yeah anyways we have it now We see everything is working correctly so that was pretty much it when it comes to the model training So was the first section it took around an hour to do this so now I want to do the following I want to quickly refactor The whole code code base and then I'm going to do some visualizations I was basically cutting some corners Because I'm filming this and I'm really I'm coding real time so it's hard to kind of be very very pedantic So now I want to improve some of those some of the things so let me see what first so this thing this this function Looks fairly nice we have parameters here we have keys I think the structure of this function was fairly nice We have this small testing functionality here so I'm just going to call it test we have a key we created We just called the function to see whether the results make sense and I think this cell was was neat I was still I was still very very fresh I guess this was some initial like this cell was used for debugging so I'm just gonna basically delete it Let me see okay so I had some explanations here etc etc again the design of this function is is fairly neat I'd say You can obviously always add some like doc strings comments etc whatnot okay let me see here we had some tests here So this was this first part was testing basically whether the MLP predict is working as expected And so I just printed the shape of the image I 
printed the shape of the output and the second part was actually testing whether So this test test the single example function and this was testing the batch test the batched Batched function basically here okay I can uncomment this part as well So yeah I mean it would obviously take a lot of time to to create this to make it perfect I'm just I want to make sure that I didn't That I don't have any any red flags like a huge huge red flags so I defined these custom transforming custom custom colloid functions These look neat as well so some consistency with my naming of the variables would be appreciated I'm sometimes using labels the full word and sometimes I'm abbreviating into LBLS so that sucks but yeah this looks nice We have the data sets here I did some debugging here I can I can I'm just going to comment it out and I'm basically just going to delete this part Now we have the data loaders then I did some testing here so this can be considered like a testing section again testing section What I did here I made sure that we have images labels of correct types and shapes and finally This thing here is just the optimization part where I was loading the whole data set load the whole data set into memory Okay I'm going to refactor this notebook a little bit after after I finish this video and I'm going to upload it to my to my github Hopefully you saw this one already so github let me let me open it up so I have this get started with Jax like repo do check it out I'm going to be posting all of these tutorials there as well as some other useful resources and videos here so that's it Let's continue here okay this to do is done we have the loss function I'm going to just delete this here we have the update function We have the the the accuracy maybe the metric and the loss should be one close to each other here that's it and finally we we create the network here Create create an MLP I guess a MLP and then we have the training loop here finally I could have probably just replace this with a product Function so to make this a little bit more robust but this is not that robust in any case because I'm hard coding names here but yeah You get the point in general you could kind of you'd have a section where you're defining these variables depending so when when the data set changes All of these variables change and basically MLP is out of the box is going to have a compatible shape this this was the point of doing this here And yeah you don't want to hard code stuff around your code base so yeah all in all for for a code that was written very very very ad hoc I think this this looks pretty decent So we have the loading here we convert the representation into the one hot representation then we call the update with this some logging of the loss We do some logging of the accuracy and I think it's fairly fairly cool now let's do a couple of visualizations first let me visualize the MLP weights So I'm gonna copy paste this into a novel cell here and what I mean by that is the following so I'm going to take the MLP weights So I'm going to take the MLP parameters I'm gonna fetch the zero player I'm gonna fetch the weight matrix and I'm gonna just take the that weight matrix and see what the shape is So the shape of that matrix is let me see 512 784 so what I want to do here is for a single output neuron I'm gonna I'm gonna fetch this all the weights for a single neuron and see whether we can see some pattern some interesting pattern I'm gonna do something like this weights single equals weights I'm gonna 
fetch a single row I'm gonna fetch all the columns and I'm gonna reshape that I'm gonna reshape that into the amnest image size so that's 28 by 28 and this should now be of correct shape with W single shape equals 28 by 28 I'm gonna just plot this and let's see whether we have some meaningful results here so I'm gonna plot it and we do not have any meaningful results here so if I was to randomly pick some other like some other weights for some other output neuron like 4 maybe I don't know let's see whether we can find some meaningful representation here and I do not see anything meaningful here I'm gonna try one more nope I guess not that interpretable but we still have 96% accuracy so that's cool and it was worth a try to just do this initial visualization so this task is done whoops let me just delete this to do and let's jump to the second one so visualize embeddings using t-sneak let me do that one I'm just gonna add a new cell here we're gonna import basically let me see side pie t-sneak okay yeah from scikit-learn manifold t-sneak so this will probably come in handy this documentation here I'm just gonna put the tab right here what the fuck okay let me put it here import from scikit-learn sklearn we import manifold module and we import t-sneak so from this package from this module import the function t-sneak my god okay okay this should now work so t-sneak is going to do a dimensionality reduction and for that I'll have to fetch the activations from the MLP just before we apply the linear layer that means we'll have to modify this predict function so let me find the MLP predict function I'm gonna copy paste it to this cell here I'm gonna modify it so first things first let's call it something else like fetch activations I guess we're gonna pass the params nothing changes there the only difference is we do not have to apply this this last part so let me let me think this will this will be yeah we basically do not want this part we just want to return back the activations the last ones just before we apply the linear layer so activation something like this and this should now work so now let me just create a batch version of this thing so batched fetch activations is gonna be remap we're gonna paste this function in and finally in axis will be again none and zero and this should now work so I'm just gonna fetch a batch of images and I'm gonna get those activations project them into 2d space and plot those images just to see whether the MLP the network learned how to separate different classes so let's see how that will look like so batch of activations is going to be equal to basically this and we need to pass MLP params so that's our train network and we also need to pass a batch of data so that means let me fetch a batch of images so batch I'm gonna take take like this images labels is gonna be next error and I'm gonna I'm going to again take from the test loader so test loader and yeah this should be it now I'm gonna take the images and I'm just gonna test them here and this should now work let me see what's the shape of these batch activations if this thing works so shape let me run this and it's 128 256 and that sounds about right because remember we had let me find a configuration so it was yeah 512 256 256 is the dimensionality of the activations just before we project them into the logits okay so let's now do the following we need to pass this into the t-sne method in order to get 128 comma 2 and this time I'm just gonna use my github because I have this pre-existing code 
already and I don't want to search through the documentation, this video is already too long. So I have this annotated GAT repo, it got pretty popular, and anyway I used t-SNE there, so I'm just going to copy-paste a snippet. Let me find where I've been using it... so here, I'm just going to take this part and copy-paste it right into our notebook, so let me do it like this. Okay, so t-SNE is here: two components means we basically want to down-project into two dimensions, perplexity 30 is just a number that works, and we don't need this special method here. Now, finally, I need to pass the batch activations here. And yeah, this is how the usual workflow of somebody who does machine learning looks: you take somebody else's code, or your own code, even better, and you just tweak it. We also won't need this part, so the number of classes is going to be equal to 10 because this is MNIST, and finally, let's see, I'm just going to erase these comments for now. So the only thing we need to change here is to put labels instead of node_labels, because those are our ground truth labels. Basically what this does is: for class 0 we extract only those embeddings which we know correspond to images of class 0, images with the digit 0 in them, because this is MNIST. So let me change this to labels, and this should all work now; we don't need these special... I'm not sure whether this is going to work in Colab. The only thing we're left to do is to find this dictionary, so let me find it, and it's here, it's just a mapping from a number to a color, so let me just define it somewhere here, and that's it, this should hopefully work now. And it does, okay. This is just some silly error from the scatter plot, I've already seen this error, but it does not change the semantics, so everything works correctly here. We can see that the different classes are clustered together in particular parts of the 2D space. We see some outliers, like this yellow dot here, if you can see it on your screen, but in any case we can see that the MLP did learn something, so that's cool.
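For reference, a rough sketch of that t-SNE visualization; `batched_fetch_activations` is the vmapped activation-fetching function described above, and the color handling is simplified to matplotlib's default cycle:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

imgs, lbls = next(iter(test_loader))                             # one test batch
batch_activations = batched_fetch_activations(mlp_params, imgs)  # (128, 256)

# Down-project the 256-d activations into 2-D
embeddings_2d = TSNE(n_components=2, perplexity=30).fit_transform(
    np.asarray(batch_activations))

# One scatter call per class so each MNIST digit gets its own color
for cls in range(10):
    mask = (lbls == cls)
    plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], label=str(cls))
plt.legend()
plt.show()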
Now, the final thing I thought of doing is seeing how many dead neurons we have. So I'm going to delete this cell here and add a new cell just below this one. What we need to do here is find those neurons that do not activate on any of the input images I feed into the network; those are considered dead neurons, because their gradients are going to be zero, which means the weights that come before that neuron's output can't be updated, and thus, yeah, the neuron is pretty much dead. I'm going to link a cool article down in the description so you can check it out; it has a very thorough analysis of various activation functions and there is a dead-neuron analysis section as well, so you can check it out there. So what we need to do is again modify the MLP predict function, so I'll have to paste it again here. Let me take it: we'll need to fetch every single one of these activations, so let me create a collector variable here, and I'm just going to collect every layer's activations, so I'm going to append the activations, and that's going to be it. We again do not need this last part, because there are no ReLU activations there; we just care about the outputs of the ReLU units. Finally, I'm going to return the collector, and that's it, everything else remains the same. Yeah, combining this one with the previous fetch-activations function is probably possible, but I just decided to do it ad hoc like this for the moment. I'm just going to name this fetch_activations_2, because I'm being super creative right now. Okay, so what we need to do is again batch this one: we're going to create a batched version of the function, so a batched_fetch_activations_2 function with vmap here, and I'm going to pass in the function, with in_axes again None and 0, and that's it. Now, I guess I can just copy-paste all of this, let me do it here, and now I just need to swap this for the activations-2 version; everything else remains the same, and we should expect to have something like... let's see whether this is going to work at all. Yeah, it's not: we're actually just getting a list here, and that makes sense, because remember, that's how we are collecting the activations, so I actually need to do something like this in order to get the correct result. I guess 128 by 512, that makes sense, and let's see, we need to have 128 by 256 in this one, and yep, we do. Okay, so now that we have these activations, we just need to find those neurons where each of these 128 images produced a value of zero; that means the neuron was dead for all of these 128 images, and that's fairly easy, I guess. Let me form a list here, dead_neurons; this is going to be a flag data structure in the sense that if there is a one, that means the neuron is dead, otherwise it's not dead. So let's assume for now that all of the neurons are dead, and I'm going to pre-populate this with a shape, something like this, so for act in batch_activations... Remember, the shape of such an act will be something like 128 and then 512 or 256, so we need to fetch the shape and take everything except for the zeroth dimension, and now we have a corresponding flag for every single neuron that we care about, and those are the ones that basically have a ReLU unit after them. Now let's iterate through the activations, so I'm going to have a layer ID and the actual activations in enumerate, enumerate over the activations. So now I'm going to do the following: dead_neurons of layer ID, which fetches the corresponding structure, equals numpy, I guess it's logical_and or something, np.logical_and, and we're going to AND whatever is already there with, basically, activations equal to zero. So we're asking whether the neuron is dead, and if it is dead for every example along the batch dimension, then the corresponding neuron stays flagged as dead, otherwise it's considered an alive neuron, I guess, and I think all with axis equal to zero should do the job here. Let me see whether this is going to work. Finally, we're going to iterate through the layers of dead neurons and print how many of them are actually dead, something like this. Let's see whether this is going to work, and it does: we have zero dead neurons in the first layer and four dead neurons in the last layer. Now let's try and do something to make sure this works: let's initialize a new network that's not trained and see whether we get different results. So here I'm going to create new MLP parameters, which I'm going to call MLP params 2, and now let's use those instead to fetch the activations; everything else remains the same, as far as I can see here.
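A rough sketch of the dead-neuron counting just described; `batched_fetch_activations_2` is assumed to return a list with the post-ReLU activations of every hidden layer, each of shape (batch_size, layer_width):

import numpy as np

imgs, _ = next(iter(test_loader))
batch_activations = batched_fetch_activations_2(mlp_params, imgs)

# Start by flagging every neuron as dead ...
dead_neurons = [np.ones(act.shape[1:], dtype=bool) for act in batch_activations]

for layer_id, act in enumerate(batch_activations):
    # ... and keep the flag only for neurons whose output is exactly 0 for
    # every image in the batch (so no gradient flows back through them)
    dead_neurons[layer_id] = np.logical_and(dead_neurons[layer_id],
                                            np.asarray((act == 0).all(axis=0)))

for layer_id, layer_flags in enumerate(dead_neurons):
    print(f'layer {layer_id}: {int(np.sum(layer_flags))} dead neurons')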
And yeah, so let me just search for MLP params... yeah, this should be it. Let's see whether we get different results here now. Now, a useful thing would be to actually analyze this on the whole training dataset, because if there are a lot of dead neurons when you evaluate on the whole training set, that may be problematic, and especially problematic are the dead neurons in the shallower layers. Feel free to play with this code at your own pace. Let me try... I'm fairly sure that I won't have a TPU at my disposal currently, so if I were to try to use a TPU and click save here, let's see whether we can grab a TPU, let me click reconnect here, and... no backend with TPU is available. In any case, I'm fairly sure this is already enough, because in one of the previous videos I already explained how to parallelize the computation, so it's fairly trivial to make this code parallel, just a couple of lines of code. Hopefully you got something out of this video. This is the first time I'm doing a real-time coding session, so let me know if you have any feedback down below and whether you like this type of content, and I'm going to start making more of this if you like it. In any case, consider subscribing, share out this video, and join our Discord community, and until next time, bye bye.
[{"start": 0.0, "end": 5.22, "text": " What's cracking guys? This is the third video in the Machine Learning with Jack's series and in this one"}, {"start": 5.22, "end": 7.7, "text": " We're going to build a neural network from scratch"}, {"start": 8.14, "end": 14.68, "text": " I'm going to be using a multilayer perceptron and I'm gonna train a classification model on top of the MNIST dataset"}, {"start": 15.0, "end": 18.16, "text": " By the way, I'm going to be using MNIST just for convenience"}, {"start": 18.78, "end": 23.0, "text": " Conceptually, even if we were to use ImageNet, the code would be pretty much the same"}, {"start": 23.28, "end": 28.04, "text": " Just for the sake of me coding this real-time and this video not taking three hours"}, {"start": 28.04, "end": 30.84, "text": " I'm going to be using this small dataset such as MNIST"}, {"start": 31.34, "end": 39.0, "text": " Finally, we're going to be using a PyTorch data loaders to load the data from MNIST and the reason is Jack's designers"}, {"start": 39.48, "end": 46.56, "text": " by design decided not to develop loaders because other libraries such as frameworks such as TensorFlow"}, {"start": 47.04, "end": 50.28, "text": " and PyTorch have already done a decent job at"}, {"start": 51.0, "end": 53.980000000000004, "text": " Adding this functionality so they did not want to reinvent the wheel"}, {"start": 53.98, "end": 60.739999999999995, "text": " So that's in a nutshell what this video will be about and then as a bonus points"}, {"start": 60.739999999999995, "end": 63.18, "text": " I'm gonna do some visualizations and finally, we're hopefully"}, {"start": 63.699999999999996, "end": 69.96, "text": " I'll be able to parallelize the training in the case that Colab allows me to use the TPU if not"}, {"start": 70.42, "end": 73.62, "text": " Previous videos already explained how to do that. So that's just a bonus point"}, {"start": 74.02, "end": 82.38, "text": " Anyways, let's let's start with that and structure this this whole video. So first thing we're going to do MLP training on"}, {"start": 82.38, "end": 87.46, "text": " on MNIST dataset, right? That's the first thing the second thing we're going to do is"}, {"start": 88.22, "end": 90.5, "text": " we're going to do some visualizations and"}, {"start": 91.25999999999999, "end": 93.5, "text": " Finally, the third thing is going to be"}, {"start": 94.5, "end": 96.5, "text": " basically adding the"}, {"start": 97.25999999999999, "end": 100.3, "text": " Parallelization so yeah, hopefully I spelled that correctly"}, {"start": 100.3, "end": 106.97999999999999, "text": " So now let me break these tasks into smaller sub tasks first thing where we need to create a init function in it MLP"}, {"start": 107.53999999999999, "end": 108.97999999999999, "text": " we need to"}, {"start": 108.97999999999999, "end": 110.74, "text": " and add the"}, {"start": 110.74, "end": 114.86, "text": " Predict function. So those are going to be two things we need to do here"}, {"start": 115.46, "end": 120.66, "text": " Whoops to do whoa. I don't know how to write today. 
Okay, that's the first to do the second one will be"}, {"start": 121.22, "end": 122.61999999999999, "text": " basically add"}, {"start": 122.61999999999999, "end": 124.3, "text": " data loading"}, {"start": 124.3, "end": 130.14, "text": " in pytorch and the third to do is going to be add the training loop"}, {"start": 130.94, "end": 134.74, "text": " loss function and whatnot maybe accuracy or something like that"}, {"start": 134.74, "end": 140.9, "text": " So that's the the breakdown of the first section of this video now here. I'm gonna just do something like"}, {"start": 141.62, "end": 143.62, "text": " maybe visualize"}, {"start": 143.62, "end": 144.66, "text": " visualize"}, {"start": 144.66, "end": 145.66, "text": " the"}, {"start": 145.66, "end": 147.18, "text": " MLP weights"}, {"start": 147.18, "end": 149.18, "text": " so we're just gonna"}, {"start": 149.18, "end": 154.70000000000002, "text": " See whether we can notice some patterns especially in the shallower layers whether MLP learn to extract certain features"}, {"start": 154.70000000000002, "end": 158.94, "text": " Which are salient, which are also like understandable and intuitive to us humans"}, {"start": 158.94, "end": 164.26, "text": " Then I'm going to do something like I'm gonna visualize embeddings"}, {"start": 164.82, "end": 166.82, "text": " using t-sne and"}, {"start": 167.02, "end": 169.46, "text": " finally, I'm gonna do something like"}, {"start": 170.34, "end": 172.94, "text": " visualize dead neurons if I have the time I"}, {"start": 173.46, "end": 179.42, "text": " Won't be breaking this parallelization part because that's that's pending and we're gonna see whether we have time A and B whether I'll have"}, {"start": 179.42, "end": 181.94, "text": " TPUs on disposal. So let's start by breaking down"}, {"start": 182.46, "end": 187.82, "text": " Solving one task at a time. So first let me start with initial init and MLP"}, {"start": 187.82, "end": 193.26, "text": " Initial init MLP function. So that function should given some"}, {"start": 194.06, "end": 196.06, "text": " configuration of the MLP network"}, {"start": 196.42, "end": 202.42, "text": " It's gonna return the weights and biases of the network. So that's a high-level description of what I want to achieve here"}, {"start": 202.5, "end": 205.06, "text": " Now let's go and implement the function. So we're gonna call it"}, {"start": 205.57999999999998, "end": 210.74, "text": " Something descriptive like init MLP, I guess and I'm gonna pass layer widths"}, {"start": 210.98, "end": 214.38, "text": " So that's gonna be the configuration and then we're going to start"}, {"start": 214.38, "end": 217.22, "text": " Implementing the function here. So params is just going to be some list"}, {"start": 217.74, "end": 223.34, "text": " basically, we're gonna iterate through this like layer widths and we're gonna take the"}, {"start": 224.54, "end": 228.85999999999999, "text": " In width and out width will be the names. So that's the number of"}, {"start": 229.38, "end": 233.9, "text": " Neurons that comes into the layer and number of neurons that goes from the layer"}, {"start": 234.38, "end": 241.42, "text": " And basically I'm going to iterate through the zip version of this so layer widths and I'm gonna index it like this"}, {"start": 241.42, "end": 250.14, "text": " So basically we want to start here with the 0th one and go and just exclude the last element and here I'm gonna"}, {"start": 250.7, "end": 256.62, "text": " Oops, let me paste it. 
I'm gonna start from the first one and go all the way including the last element"}, {"start": 256.62, "end": 259.74, "text": " So hopefully you'll understand a second why I'm doing this"}, {"start": 260.21999999999997, "end": 265.5, "text": " And let me maybe first create the high-level let me call the function and see how that will work"}, {"start": 265.5, "end": 269.34, "text": " So I'm gonna have ML params here. I'm gonna call the init MLP"}, {"start": 269.34, "end": 273.82, "text": " I'm gonna call the init MLP. I'm gonna pass the list here. So"}, {"start": 274.61999999999995, "end": 278.61999999999995, "text": " First thing that we're gonna do is we're gonna add like the number of"}, {"start": 279.38, "end": 284.41999999999996, "text": " Elements of pixels in the MNIST data set then I'm gonna arbitrarily create some configuration"}, {"start": 284.41999999999996, "end": 288.88, "text": " So we're gonna have a hidden layer with 512 neurons then 256 and finally 10"}, {"start": 289.09999999999997, "end": 292.05999999999995, "text": " So what this function is doing is as you can see here"}, {"start": 292.05999999999995, "end": 295.34, "text": " It started from it starts from the 0th one and this one starts from the first one"}, {"start": 295.34, "end": 301.97999999999996, "text": " So that means we're gonna be fetching tuples like this this one and then this one and then this one because that will enable me to"}, {"start": 301.97999999999996, "end": 306.06, "text": " Create the weight and biases we need for each of the layers. So let's do that"}, {"start": 306.06, "end": 310.38, "text": " So I'm gonna append to params the following thing. So basically"}, {"start": 311.34, "end": 313.34, "text": " We're gonna create a numpy"}, {"start": 313.85999999999996, "end": 315.85999999999996, "text": " Random we're just gonna randomly"}, {"start": 316.46, "end": 319.5, "text": " Sample data from the normal distribution"}, {"start": 319.94, "end": 324.62, "text": " That is gonna have a shape like this so out with and this is just convention"}, {"start": 324.62, "end": 326.68, "text": " We usually just first put the out"}, {"start": 327.86, "end": 332.1, "text": " Dimension here and then the input dimension. So there's gonna be for example 512 and then"}, {"start": 332.66, "end": 338.98, "text": " 784 here you could it vice versa, but it's gonna complicate things later on. So there's just a convention to keep in mind"}, {"start": 339.66, "end": 341.66, "text": " So next thing we'll need"}, {"start": 342.14, "end": 345.84000000000003, "text": " We'll need a bias term. So there's gonna be something like this random"}, {"start": 346.86, "end": 353.4, "text": " And and we're gonna need just the out with because that's the bias part, right? So this should now work"}, {"start": 353.4, "end": 359.62, "text": " Let me close this list and let me just return back the parameters here. Let's return the params and"}, {"start": 360.38, "end": 363.09999999999997, "text": " Let's see whether this is going to work. So"}, {"start": 364.02, "end": 370.02, "text": " In order to understand whether this is correct. 
We can maybe use the tree map functionality from Jax"}, {"start": 370.29999999999995, "end": 373.78, "text": " So I'm gonna just print Jax tree map"}, {"start": 374.38, "end": 376.59999999999997, "text": " I'm gonna create a lambda function here"}, {"start": 376.6, "end": 383.52000000000004, "text": " Which is going to just return the shape of the leaf and the leaves are going to be fetched from the MLP params"}, {"start": 383.62, "end": 389.58000000000004, "text": " Which means we're going to be taking weight and biases these matrices here and we're gonna be printing their shapes"}, {"start": 389.74, "end": 395.5, "text": " To understand whether this is implemented correctly. So obviously did not import Jax neither"}, {"start": 395.58000000000004, "end": 401.66, "text": " Neither did I import non-pice is gonna fail. So I'm gonna first import those libraries import numpy as"}, {"start": 401.66, "end": 408.36, "text": " MP import Jax and I'm gonna run this and I'm going to run this as well"}, {"start": 408.36, "end": 413.74, "text": " Let's see whether this is going to work. So it's invalid syntax. Let's see what I got wrong here"}, {"start": 415.16, "end": 417.92, "text": " NumPy random. Oh, yeah, I added a"}, {"start": 418.6, "end": 422.06, "text": " Closing bracket here so that failed. Let me rerun it"}, {"start": 422.64000000000004, "end": 429.36, "text": " Okay, so this looks correct. I think so we have 512 here and then we have 256 here and then we have 10"}, {"start": 429.36, "end": 433.36, "text": " I'd say this is this is correct. So there's a couple of modifications"}, {"start": 433.36, "end": 440.44, "text": " We need to add here first thing is a however to just use the standard deviation one as the initialization"}, {"start": 441.0, "end": 447.66, "text": " This thing is gonna explode. We need to scale these parameters. I'm not adding any fancy initialization method here"}, {"start": 447.66, "end": 452.52000000000004, "text": " I'm just gonna scale these like this just gonna like basically"}, {"start": 453.04, "end": 455.04, "text": " Make the standard deviation smaller"}, {"start": 455.04, "end": 463.08000000000004, "text": " By a factor of hundred and that's it. The second thing we need to add here is now that we have implemented numpy's random"}, {"start": 464.24, "end": 468.44, "text": " Number generator so we'll actually be using Jax random number generator"}, {"start": 468.44, "end": 471.64000000000004, "text": " So that's why I'm gonna just do that right now"}, {"start": 471.84000000000003, "end": 475.76, "text": " So it was easier for me conceptually to first start with numpy now. I'm gonna just quickly"}, {"start": 475.76, "end": 484.2, "text": " Change this so I guess checks random PRNG key or something is that the functionality I'm gonna add a seed here. Whoops"}, {"start": 484.88, "end": 488.12, "text": " I'm just gonna add some constants here. So C is gonna be zero"}, {"start": 488.12, "end": 494.56, "text": " I'm gonna be using seed here and I'm gonna pass RNG inside of the initmlp function"}, {"start": 494.84, "end": 501.44, "text": " we need to add it here as the argument and now the thing we need to do is create the keys out of the"}, {"start": 501.76, "end": 502.88, "text": " this"}, {"start": 502.88, "end": 504.44, "text": " generator"}, {"start": 504.44, "end": 508.88, "text": " So this is actually also a key. 
So I'm gonna rename it because it will be misleading"}, {"start": 508.88, "end": 514.12, "text": " I'm gonna pass the key here and from the key, I'm gonna create keys using the split method"}, {"start": 514.12, "end": 517.48, "text": " So I'm gonna create keys using Jax random"}, {"start": 518.24, "end": 522.88, "text": " split method and I'm gonna pass in the key and I'm gonna set the number of keys to"}, {"start": 523.88, "end": 528.48, "text": " Whatever the number of layers is so that's length of this thing minus one, right?"}, {"start": 528.72, "end": 534.24, "text": " Because the number of layers is by one smaller than the length of this configuration file because we're taking tuples"}, {"start": 534.24, "end": 538.62, "text": " Remember so this this and then finally 256 and 10 as the last layer"}, {"start": 539.46, "end": 544.84, "text": " So that's gonna be the number of keys and now we are going to additionally add to this zip function"}, {"start": 544.84, "end": 546.64, "text": " I'm gonna add the keys and"}, {"start": 546.64, "end": 553.6, "text": " I'm gonna then add the key part here. And finally, I'm going to split it additionally into the weight key"}, {"start": 553.6, "end": 555.96, "text": " so we need to split it into weight key and"}, {"start": 557.36, "end": 562.32, "text": " Bias key so something like this. So I'm gonna do again Jax random"}, {"start": 562.32, "end": 564.24, "text": " split"}, {"start": 564.24, "end": 569.0400000000001, "text": " and I'm gonna pass in the key and that's gonna split"}, {"start": 569.88, "end": 576.84, "text": " Let me see whether this is going to create a problem because this thing I'm gonna call this the parent key just for the sake of"}, {"start": 578.2800000000001, "end": 583.96, "text": " Avoiding some bugs here. So parent key here and creating keys and I'm iterating through the keys here"}, {"start": 583.96, "end": 587.72, "text": " I'm splitting that into the weight key and the bias key and finally we're going to now"}, {"start": 587.72, "end": 592.12, "text": " Input the weight key and the bias key and create the correct random function from Jax"}, {"start": 593.32, "end": 595.6, "text": " Not completely sure about this syntax here"}, {"start": 595.6, "end": 600.0, "text": " I'm gonna just try something that makes sense and then I'm gonna search if it doesn't work"}, {"start": 600.0, "end": 603.32, "text": " So Jax random something like that and then I'm gonna pass"}, {"start": 604.28, "end": 606.72, "text": " Like the key so I'm gonna pass the weight key"}, {"start": 607.6800000000001, "end": 609.6800000000001, "text": " And then I'm gonna paste"}, {"start": 610.48, "end": 616.48, "text": " Pass the shape here. So this should be the syntax and then here I'm gonna again"}, {"start": 616.48, "end": 624.04, "text": " Do Jax random and I'm gonna pass the bias key here and the shape is going to be"}, {"start": 624.6, "end": 626.12, "text": " something like this"}, {"start": 626.12, "end": 630.96, "text": " So this should now work. Let me kind of split this into two lines. It's more readable"}, {"start": 631.72, "end": 638.4, "text": " Let me let me kind of try and refactor this a little bit something like this and then I'm gonna indent this a little bit more"}, {"start": 639.2, "end": 642.48, "text": " Fuck my life. Okay, and I'm gonna do it like this"}, {"start": 642.48, "end": 649.84, "text": " Okay, this looks a bit better. So hopefully this works. Let's see whether it makes sense. 
We have the parent key"}, {"start": 650.9200000000001, "end": 654.86, "text": " We split it into multiple keys here. We use those keys"}, {"start": 654.86, "end": 657.74, "text": " So we'll have a key per layer every layer"}, {"start": 657.74, "end": 662.24, "text": " We have a specific key and then we do an additional split because every layer has weights and bias"}, {"start": 662.48, "end": 669.34, "text": " We pass the weights we paste the bias and that's it. This should now work as expected. Let me try and run it"}, {"start": 669.34, "end": 672.34, "text": " hopefully it's gonna work and"}, {"start": 673.46, "end": 681.86, "text": " It's not working because a module object is not callable. Let me see. What have I done here?"}, {"start": 684.34, "end": 690.34, "text": " Aha I called random is the module not the function. So let's see how the the actual syntax looks like"}, {"start": 691.46, "end": 694.7800000000001, "text": " Jax random random. I don't know. Let's see"}, {"start": 694.78, "end": 698.18, "text": " Whether this is going to"}, {"start": 700.02, "end": 705.9, "text": " Yeah, this is the wrong file, let me try and see whether we have some interesting information here"}, {"start": 706.38, "end": 710.3199999999999, "text": " So we have random uniform here, but do we have random something?"}, {"start": 711.06, "end": 717.66, "text": " That's not uniform generator because uniform distribution is not as good for"}, {"start": 718.06, "end": 721.98, "text": " parameter initialization as Gaussian because Gaussian tends to because of the mean zero"}, {"start": 721.98, "end": 726.98, "text": " We're gonna have an expectation smaller weights than using uniform"}, {"start": 726.98, "end": 732.02, "text": " I guess and also it's gonna be symmetric around zero. That's important. So I really need to find the"}, {"start": 733.86, "end": 735.1800000000001, "text": " Actual"}, {"start": 735.1800000000001, "end": 736.3000000000001, "text": " Gaussian"}, {"start": 736.3000000000001, "end": 738.3000000000001, "text": " Random, let me just use"}, {"start": 738.54, "end": 740.54, "text": " Something like this Gaussian"}, {"start": 740.54, "end": 742.54, "text": " Jax random Gaussian"}, {"start": 743.58, "end": 745.58, "text": " Will this work?"}, {"start": 747.54, "end": 749.54, "text": " Let me open up a couple files here"}, {"start": 749.54, "end": 751.38, "text": " I"}, {"start": 751.38, "end": 753.38, "text": " Think we saw this one"}, {"start": 753.74, "end": 755.9399999999999, "text": " Okay, let me let me see whether I can just"}, {"start": 756.6999999999999, "end": 758.6999999999999, "text": " Grab a number from here"}, {"start": 760.02, "end": 762.4599999999999, "text": " Nope random seed random uniform"}, {"start": 763.3, "end": 767.26, "text": " They're just using uniform numbers all the time here. What what what's this? Oh"}, {"start": 768.5, "end": 774.4599999999999, "text": " Random normal. Okay. Okay. This is the syntax random normal. Whoops that that took some time. Okay"}, {"start": 774.46, "end": 782.46, "text": " Jax random and then dot normal and that's gonna work random dot normal and hopefully now this will work as"}, {"start": 783.14, "end": 785.14, "text": " expected and"}, {"start": 786.0600000000001, "end": 792.46, "text": " It seems it does everything looks correct here, I think we are we are good to go to the next section so"}, {"start": 793.34, "end": 795.58, "text": " We have the correct scale. 
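[Editor's note: a sketch of the init function the transcript arrives at, using `jax.random.split` for one key per layer and a further weight/bias split, and `jax.random.normal` as found above; the exact function and argument names are assumptions.]

```python
import jax

seed = 0

def init_mlp_params(layer_widths, parent_key, scale=0.01):
    # One PRNG key per layer, then a further split into a weight key and a
    # bias key, mirroring the key handling described in the transcript.
    params = []
    keys = jax.random.split(parent_key, num=len(layer_widths) - 1)
    for in_width, out_width, key in zip(layer_widths[:-1], layer_widths[1:], keys):
        weight_key, bias_key = jax.random.split(key)
        params.append({
            "weights": scale * jax.random.normal(weight_key, shape=(out_width, in_width)),
            "biases":  scale * jax.random.normal(bias_key, shape=(out_width,)),
        })
    return params

mlp_params = init_mlp_params([784, 512, 256, 10], jax.random.PRNGKey(seed))
print(jax.tree_map(lambda x: x.shape, mlp_params))
```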
We have the correct"}, {"start": 796.4200000000001, "end": 800.5, "text": " Structure three layers every layer has weights and every layer has bias"}, {"start": 800.5, "end": 808.42, "text": " We've used Jax's pseudo run random number generator and we had some primitive initialization scheme here using the scale parameter"}, {"start": 808.74, "end": 813.1, "text": " Now let's do the second part and that's the add the predict function. So now that we have this"}, {"start": 813.1, "end": 815.1, "text": " Let me just add another cell here"}, {"start": 816.1, "end": 819.46, "text": " Let me add a function called MLP predict"}, {"start": 820.18, "end": 823.14, "text": " we're gonna pass in the parameters of the MLP and"}, {"start": 824.06, "end": 825.86, "text": " we're going to"}, {"start": 825.86, "end": 832.22, "text": " What else we need some data? So let's pass in the input data here. This is going to be a single image"}, {"start": 832.5, "end": 837.74, "text": " And a flatten one because we're using an MLP. We're not using a convolutional neural network here. So"}, {"start": 838.78, "end": 843.2, "text": " How we're gonna split this is we're gonna take the hidden layers here"}, {"start": 844.22, "end": 846.22, "text": " So maybe let's call this"}, {"start": 848.62, "end": 851.66, "text": " Yeah, I'm just gonna do it like this so hidden layers I'm gonna take the"}, {"start": 851.66, "end": 856.38, "text": " The params all the params except for the last one"}, {"start": 856.38, "end": 863.1, "text": " That's gonna be the hidden layers the two one the first two layers and then I'm gonna iterate through those layers"}, {"start": 863.14, "end": 865.2199999999999, "text": " So I'm gonna just iterate like this"}, {"start": 867.1, "end": 871.7199999999999, "text": " We're iterating through the through the hidden layers and taking the weight and bias matrices"}, {"start": 872.98, "end": 874.98, "text": " Then we need to"}, {"start": 875.06, "end": 880.3399999999999, "text": " Set activation something like this activation is gonna be initially equal to whatever the input is"}, {"start": 880.34, "end": 883.9, "text": " And then we're just gonna reuse it here. So activation"}, {"start": 885.0600000000001, "end": 886.46, "text": " activity"}, {"start": 886.46, "end": 888.46, "text": " Reasion, I don't know how to spell today"}, {"start": 889.7, "end": 891.7, "text": " Basically, this is going to be"}, {"start": 893.12, "end": 900.3000000000001, "text": " What we need to do activation function and we need to create the the the mapping so mapping will be basically Jax"}, {"start": 900.3000000000001, "end": 902.38, "text": " I guess stop product"}, {"start": 903.82, "end": 908.5, "text": " No, I'm gonna add the jack jacks non-pay API here. So I'm gonna add"}, {"start": 908.5, "end": 915.22, "text": " Jacks non-pay as JMP because that's gonna be useful. I'm gonna rerun the cell and"}, {"start": 915.86, "end": 920.94, "text": " Finally here so JMP dot should work. I'm gonna pass in the"}, {"start": 921.46, "end": 928.74, "text": " weight and then the example and then I'm gonna add the bias term so this thing should be"}, {"start": 929.18, "end": 932.66, "text": " This is the functionality of the linear layer. Finally, we need the relu"}, {"start": 932.66, "end": 940.18, "text": " So I think and then contains the rally unit. Yep, it does so something like this should now function and this should not be"}, {"start": 940.4599999999999, "end": 943.02, "text": " Exit should be activation. 
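[Editor's note: a sketch of the predict function being built here, assuming `jax.nn.relu` for the activation and `jax.scipy.special.logsumexp` for the log-softmax output discussed next; this is a reconstruction, not the video's verbatim code.]

```python
import jax.numpy as jnp
from jax.nn import relu
from jax.scipy.special import logsumexp

def mlp_predict(params, x):
    # x is a single flattened image of shape (784,).
    hidden_layers = params[:-1]
    activation = x
    for layer in hidden_layers:
        activation = relu(jnp.dot(layer["weights"], activation) + layer["biases"])

    # The last layer has no ReLU: these are the raw logits.
    last = params[-1]
    logits = jnp.dot(last["weights"], activation) + last["biases"]

    # Subtracting logsumexp turns the logits into numerically stable
    # log-softmax values, as explained just below.
    return logits - logsumexp(logits)
```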
Okay"}, {"start": 943.6999999999999, "end": 945.6999999999999, "text": " So this show work"}, {"start": 945.9399999999999, "end": 947.9399999999999, "text": " Let me see whether it makes sense"}, {"start": 948.06, "end": 954.4599999999999, "text": " So we're doing a dot product. We are adding the bias. We are doing the activation function and we get the activation out"}, {"start": 954.46, "end": 961.9000000000001, "text": " This looks reasonable. Let me now do the final step. The final step is we get the last and"}, {"start": 962.5400000000001, "end": 965.9000000000001, "text": " The last layers weight and bias so something like this"}, {"start": 966.7800000000001, "end": 968.38, "text": " params"}, {"start": 968.38, "end": 975.58, "text": " Of the last element of the last layer and we do apply this again. So I'm just gonna copy paste this"}, {"start": 977.22, "end": 980.3000000000001, "text": " We're gonna copy paste this I'm gonna use the last one"}, {"start": 980.3, "end": 985.8599999999999, "text": " The reason I'm doing it like this is because the last layer does not have the relu activation"}, {"start": 986.14, "end": 988.54, "text": " so these are going to be just logits and"}, {"start": 989.14, "end": 992.5, "text": " We could alternatively do it in a single for loop"}, {"start": 993.26, "end": 995.9399999999999, "text": " but then I'd have to have a switch statement like if"}, {"start": 996.4599999999999, "end": 1001.9799999999999, "text": " Last layer then apply relu otherwise don't apply relu. So at the end is gonna be even"}, {"start": 1002.5799999999999, "end": 1007.42, "text": " Less readable. So I'm just gonna do it like this and finally we're going to return"}, {"start": 1007.42, "end": 1009.9, "text": " What we're going to return logits"}, {"start": 1010.42, "end": 1017.2199999999999, "text": " But not quite because Jax does not have cross entropy loss implemented. We'll have to be resourceful here and"}, {"start": 1017.78, "end": 1020.42, "text": " Create something that's going to be"}, {"start": 1021.18, "end": 1025.98, "text": " Maybe confusing at the first point. We're going to be implement importing from the"}, {"start": 1026.62, "end": 1028.06, "text": " Jax"}, {"start": 1028.06, "end": 1031.86, "text": " SciPy, I think special we're gonna import something called"}, {"start": 1031.86, "end": 1037.5, "text": " Like X log some or something X log some I don't know. Let me try and"}, {"start": 1038.06, "end": 1040.62, "text": " Search for this not completely sure"}, {"start": 1043.82, "end": 1049.02, "text": " So, yeah log some X log some X log some X"}, {"start": 1049.54, "end": 1053.06, "text": " so this thing is gonna allow us to"}, {"start": 1054.34, "end": 1057.06, "text": " Implement a numerically stable version of cross entropy"}, {"start": 1057.06, "end": 1062.1799999999998, "text": " So what this what this is is the following so imagine you we have ten outputs, right?"}, {"start": 1062.1799999999998, "end": 1065.1799999999998, "text": " We have ten outputs from the this amnest MLP"}, {"start": 1065.8999999999999, "end": 1066.8999999999999, "text": " model"}, {"start": 1066.8999999999999, "end": 1071.1799999999998, "text": " And let's take a single one. 
Maybe the single output will be called a one"}, {"start": 1071.1799999999998, "end": 1074.98, "text": " So this is the raw output from from from the logits and we can"}, {"start": 1075.3, "end": 1083.06, "text": " Equivalently represent this without changing anything like this log X of of one because"}, {"start": 1083.06, "end": 1088.6599999999999, "text": " These two are just inverse of each other. So this thing is equivalent to this thing here. So now"}, {"start": 1089.22, "end": 1092.3799999999999, "text": " What this will do is it's gonna add a log sum"}, {"start": 1092.3799999999999, "end": 1096.34, "text": " So it's gonna do something like log and it's gonna have a sum of"}, {"start": 1097.1, "end": 1100.26, "text": " Exponentials so something like this Oh one and"}, {"start": 1100.78, "end": 1104.82, "text": " Then it's gonna have obviously because it's a sum. It's gonna have as a second argument"}, {"start": 1105.34, "end": 1109.8999999999999, "text": " The second output and then the third output and you get the point all the way down"}, {"start": 1109.9, "end": 1115.38, "text": " So if you just look at this expression now, let me just kind of copy paste it here"}, {"start": 1116.7, "end": 1118.7, "text": " So that it's more"}, {"start": 1119.02, "end": 1123.5400000000002, "text": " Visible and now we're just gonna apply the simple rule. Basically if you try and subtract the logarithms"}, {"start": 1123.5400000000002, "end": 1127.5, "text": " That's equivalent as if you were to take the arguments and just divide them"}, {"start": 1127.5, "end": 1130.8600000000001, "text": " So we're just gonna take the this argument here. So"}, {"start": 1131.5, "end": 1134.5800000000002, "text": " The left argument and we're just gonna take the left argument"}, {"start": 1134.5800000000002, "end": 1138.5, "text": " And we're just gonna take the right argument and we're just gonna take the left argument"}, {"start": 1138.5, "end": 1142.62, "text": " And we're just gonna divide it by this sum here"}, {"start": 1143.46, "end": 1145.46, "text": " of all these"}, {"start": 1145.74, "end": 1149.02, "text": " basically exponents and you can see that this is"}, {"start": 1149.86, "end": 1151.98, "text": " Pretty much we just implemented softmax"}, {"start": 1152.58, "end": 1156.58, "text": " The only difference is that we have log softmax and that's it"}, {"start": 1156.58, "end": 1160.58, "text": " So this is what this thing implements and that's gonna be super convenient a bit later"}, {"start": 1160.58, "end": 1164.98, "text": " We're gonna see why in a second, but let me now test this function as well"}, {"start": 1164.98, "end": 1170.18, "text": " So as you can see here every time I write a certain module like this init MLP, I test it here"}, {"start": 1170.26, "end": 1172.26, "text": " so this is a"}, {"start": 1172.42, "end": 1175.94, "text": " Not not that rigorous testing but some testing nonetheless"}, {"start": 1176.34, "end": 1178.34, "text": " so let me try and"}, {"start": 1179.58, "end": 1183.8600000000001, "text": " Pass some data here. So I'm gonna just create a simple"}, {"start": 1185.26, "end": 1186.82, "text": " dummy"}, {"start": 1186.82, "end": 1193.26, "text": " Like image here. So dummy image flat. 
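[Editor's note: the identity sketched in the transcript, written out for the ten outputs o_1, ..., o_10; subtracting the logsumexp term from a logit gives exactly the log-softmax of that logit.]

```latex
\log \mathrm{softmax}(o_i)
  = \log \frac{e^{o_i}}{\sum_{j=1}^{10} e^{o_j}}
  = o_i - \log \sum_{j=1}^{10} e^{o_j}
  = o_i - \mathrm{logsumexp}(o_1, \dots, o_{10})
```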
I'm gonna call it that way and this is going to be ran then"}, {"start": 1193.26, "end": 1198.14, "text": " Basically again is gonna be MP prod of the M NIST"}, {"start": 1199.78, "end": 1202.62, "text": " Image size this should work correctly"}, {"start": 1203.34, "end": 1209.02, "text": " Let me just print the shape to make sure this is this has the correct shape. It should be a flat image"}, {"start": 1210.3, "end": 1211.3, "text": " so"}, {"start": 1211.3, "end": 1212.46, "text": " 784"}, {"start": 1212.46, "end": 1218.86, "text": " Should be the shape and we're going to pass that into the MLP predict and see what we get as the prediction"}, {"start": 1218.86, "end": 1220.86, "text": " So prediction is going to be MLP predict"}, {"start": 1220.86, "end": 1227.1799999999998, "text": " We pass the MLP parameters we generated up there. We pass it here. We pass in the"}, {"start": 1228.8999999999999, "end": 1230.58, "text": " Image here"}, {"start": 1230.58, "end": 1233.5, "text": " So something like this and this should now work"}, {"start": 1234.1399999999999, "end": 1237.26, "text": " Hopefully, so I'm gonna just print out the prediction"}, {"start": 1237.82, "end": 1240.4199999999998, "text": " Shape here. So let me try and run this thing"}, {"start": 1242.1799999999998, "end": 1246.4599999999998, "text": " So name log some XP is not defined because I have not rerun this thing"}, {"start": 1246.46, "end": 1251.46, "text": " Whoops, okay. Let's see what's wrong here. Let me reopen the tab here"}, {"start": 1252.18, "end": 1257.22, "text": " Jacks SciPy special Jacks SciPy special aha"}, {"start": 1258.1000000000001, "end": 1261.3, "text": " From Jacks SciPy special import. Yeah"}, {"start": 1262.3, "end": 1266.3400000000001, "text": " I'm really bad today. Okay. Let me try and rerun this again"}, {"start": 1267.14, "end": 1272.46, "text": " And see whether it works. It does not because unsupported operand types"}, {"start": 1272.46, "end": 1276.66, "text": " unsupported operand types, blah blah blah the"}, {"start": 1277.22, "end": 1279.22, "text": " device array and function"}, {"start": 1279.98, "end": 1284.54, "text": " Okay, something went wrong here. Let me see what so I'm passing the"}, {"start": 1285.94, "end": 1290.98, "text": " MP prod should give me so how you debug this is you just open up another cell here"}, {"start": 1291.3400000000001, "end": 1292.66, "text": " I'm just gonna do it like this"}, {"start": 1292.66, "end": 1299.26, "text": " I'm gonna just print out the MP prod of this thing and see whether this gives me the number it and it does"}, {"start": 1299.26, "end": 1302.58, "text": " so this thing returns back the number and"}, {"start": 1304.54, "end": 1306.82, "text": " For some reason this is not working"}, {"start": 1307.66, "end": 1312.86, "text": " Okay, I got it all wrong. This thing actually worked. The problem is here and even have the red squiggly line"}, {"start": 1312.86, "end": 1318.3799999999999, "text": " I'm stupid basically. I just I just subtracted a function from from from this"}, {"start": 1318.3799999999999, "end": 1325.3, "text": " I need to pass in the logits here and now this should work. Hopefully, okay, and it does work. So the shapes do"}, {"start": 1325.3, "end": 1328.58, "text": " work as expected so now because this is Jax without"}, {"start": 1329.1399999999999, "end": 1334.74, "text": " Reimplementing this function for the batch of data. 
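[Editor's note: a short sketch of the `vmap` wrapping and the batch-of-16 shape test described here, reusing the `mlp_predict` and `mlp_params` names from the earlier sketches.]

```python
import numpy as np
from jax import vmap

# in_axes=(None, 0): broadcast the parameter pytree unchanged and map over
# the leading (batch) dimension of the image array.
batched_mlp_predict = vmap(mlp_predict, in_axes=(None, 0))

# Quick shape test with a dummy batch of 16 flattened images.
imgs_flat = np.random.randn(16, 784)
predictions = batched_mlp_predict(mlp_params, imgs_flat)
print(predictions.shape)  # (16, 10)
```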
We're just gonna wrap it up into V map and let's see whether that's gonna work"}, {"start": 1334.74, "end": 1338.82, "text": " So I'm gonna create a like a batched version so batched"}, {"start": 1339.62, "end": 1340.6599999999999, "text": " Whoops"}, {"start": 1340.6599999999999, "end": 1345.7, "text": " Let me zoom in again. So batched MLP predict is going to be equal"}, {"start": 1346.18, "end": 1350.1, "text": " V map and then I'm gonna wrap the MLP predict here and"}, {"start": 1350.6599999999999, "end": 1352.6599999999999, "text": " Then we're gonna specify the in axis"}, {"start": 1352.66, "end": 1356.8200000000002, "text": " Let me import V map so that I don't forget so from Jax import"}, {"start": 1357.22, "end": 1363.22, "text": " We're gonna need JIT. We're gonna need V map and we are we're gonna need P map and grad as well"}, {"start": 1363.22, "end": 1366.02, "text": " So let me rerun this and now let me continue here"}, {"start": 1366.02, "end": 1374.1000000000001, "text": " So in axis is gonna be none because we don't we just want to broadcast the parameters and it's gonna be zero for the"}, {"start": 1374.8200000000002, "end": 1380.1000000000001, "text": " Because we have the the batch dimension will be on the zero on the zero dimension"}, {"start": 1380.1, "end": 1382.1, "text": " So this should now work"}, {"start": 1382.1799999999998, "end": 1386.74, "text": " I think that's it and let me try and test this function again. So again"}, {"start": 1387.3799999999999, "end": 1389.3799999999999, "text": " Let's test it"}, {"start": 1389.3799999999999, "end": 1391.3799999999999, "text": " Small test here"}, {"start": 1391.3799999999999, "end": 1394.5, "text": " Basically, I'm gonna create this same thing"}, {"start": 1395.9399999999998, "end": 1399.62, "text": " Except that this time I'm gonna add a batch dimension of 16 here"}, {"start": 1400.1799999999998, "end": 1401.54, "text": " Let me check the shape here"}, {"start": 1401.54, "end": 1407.78, "text": " So this is gonna be images flat and I'm gonna pass that into the batched version and see what the predictions are"}, {"start": 1407.78, "end": 1410.58, "text": " So predictions predictions"}, {"start": 1411.86, "end": 1417.3, "text": " Batched version we pass in the again MLP parameters. We pass in the"}, {"start": 1417.94, "end": 1419.94, "text": " images flat"}, {"start": 1419.94, "end": 1423.78, "text": " And if I were to run this I'm gonna just print the predictions"}, {"start": 1425.06, "end": 1429.22, "text": " Shape and I'm gonna comment out this first test here"}, {"start": 1430.02, "end": 1434.02, "text": " So this should now work as expected hopefully and it does"}, {"start": 1434.02, "end": 1441.7, "text": " so we get the oh, I forgot to actually use this this version of the images so this should be 16"}, {"start": 1441.7, "end": 1447.1399999999999, "text": " Yeah, so this seems to to work correctly and that means we have done the first part here"}, {"start": 1447.1399999999999, "end": 1453.54, "text": " And now let's jump to adding the data loading functionality. 
So I'm just gonna annotate this like this"}, {"start": 1453.54, "end": 1455.54, "text": " So this was a small test"}, {"start": 1455.54, "end": 1457.54, "text": " and this"}, {"start": 1457.86, "end": 1459.86, "text": " thing"}, {"start": 1460.18, "end": 1462.18, "text": " These things here"}, {"start": 1462.18, "end": 1464.18, "text": " These things here were also tests"}, {"start": 1465.22, "end": 1467.22, "text": " and"}, {"start": 1467.22, "end": 1469.22, "text": " Finally, okay, let's add another cell here"}, {"start": 1470.1000000000001, "end": 1472.1000000000001, "text": " We need to add the data loading so"}, {"start": 1473.22, "end": 1477.22, "text": " First thing we need to add the MNIST dataset and then we'll need to add the data loader"}, {"start": 1477.22, "end": 1481.54, "text": " Which is going to load the images that we're gonna pass into this into this"}, {"start": 1482.1000000000001, "end": 1483.14, "text": " MLP"}, {"start": 1483.14, "end": 1488.42, "text": " So let's start with the syntax PyTorch dataset MNIST"}, {"start": 1488.42, "end": 1490.42, "text": " I'm gonna search for it"}, {"start": 1490.42, "end": 1492.42, "text": " and let's see how"}, {"start": 1492.42, "end": 1494.42, "text": " the syntax goes like"}, {"start": 1496.42, "end": 1500.42, "text": " So basically TorchVision datasets MNIST something"}, {"start": 1500.42, "end": 1504.42, "text": " Let's let's see some example whether they have MNIST here. They do not have MNIST"}, {"start": 1506.42, "end": 1512.42, "text": " Let me see whether I can type something in like"}, {"start": 1512.42, "end": 1518.42, "text": " TorchVision datasets MNIST example"}, {"start": 1518.42, "end": 1522.42, "text": " Okay, let me return back here I think this is going to work"}, {"start": 1522.42, "end": 1524.42, "text": " I'm just gonna copy"}, {"start": 1524.42, "end": 1528.42, "text": " Copy this thing. Let me put this tab here. I'm gonna need it probably a bit later"}, {"start": 1528.42, "end": 1534.42, "text": " So something like this should work. I'm gonna import here so import"}, {"start": 1534.42, "end": 1538.42, "text": " So from these datasets, I'm gonna import"}, {"start": 1538.42, "end": 1540.42, "text": " the MNIST dataset"}, {"start": 1540.42, "end": 1544.42, "text": " And now I'm just gonna go ahead and add the data loader"}, {"start": 1544.42, "end": 1548.42, "text": " and import the MNIST dataset"}, {"start": 1548.42, "end": 1552.42, "text": " And now I'm just gonna be using MNIST. So I'm gonna run this"}, {"start": 1552.42, "end": 1554.42, "text": " Hopefully this import will work"}, {"start": 1554.42, "end": 1556.42, "text": " Let me see"}, {"start": 1556.42, "end": 1560.42, "text": " Okay, it worked. So now I'm gonna use MNIST here"}, {"start": 1560.42, "end": 1566.42, "text": " Basically, let me delete this and I'm gonna create a train dataset"}, {"start": 1566.42, "end": 1570.42, "text": " By calling MNIST and passing in let's see what the arguments here are"}, {"start": 1570.42, "end": 1576.42, "text": " So we have root that's where the data will be downloaded. Then we have the split"}, {"start": 1576.42, "end": 1580.42, "text": " I'm just gonna leave this as the default one"}, {"start": 1580.42, "end": 1584.42, "text": " Train, download and transform"}, {"start": 1584.42, "end": 1588.42, "text": " So it sucks that it doesn't have an example here that can just copy paste"}, {"start": 1588.42, "end": 1592.42, "text": " So let me just do it like this. We need roots. 
We need basically split"}, {"start": 1592.42, "end": 1596.42, "text": " We need to train, download and transform. Those are the four we'll need"}, {"start": 1596.42, "end": 1600.42, "text": " So root is just gonna be basically"}, {"start": 1600.42, "end": 1602.42, "text": " The structure, how it looks like, is the following"}, {"start": 1602.42, "end": 1606.42, "text": " So if I was to add an additional cell here"}, {"start": 1606.42, "end": 1612.42, "text": " Import OS package and print the OS get current working directory"}, {"start": 1612.42, "end": 1616.42, "text": " It's gonna print content. So this thing here is called the content directory"}, {"start": 1616.42, "end": 1620.42, "text": " So I'm gonna, inside of this directory, I'm gonna into the"}, {"start": 1620.42, "end": 1624.42, "text": " I guess train MNIST"}, {"start": 1624.42, "end": 1628.42, "text": " I'm gonna download the train dataset"}, {"start": 1628.42, "end": 1633.42, "text": " And we're gonna need, what else? We need the train equals to true"}, {"start": 1633.42, "end": 1636.42, "text": " So that's gonna make sure that this is the training dataset"}, {"start": 1636.42, "end": 1639.42, "text": " And finally we need download and transform"}, {"start": 1639.42, "end": 1646.42, "text": " So download, I guess download is by default true here, right?"}, {"start": 1646.42, "end": 1650.42, "text": " So download, if true, downloads the dataset from the internet and puts it into root directory"}, {"start": 1650.42, "end": 1653.42, "text": " If dataset is already downloaded, it is not downloaded again"}, {"start": 1653.42, "end": 1657.42, "text": " So yeah, we wanna download it and then transform is gonna be needed as well"}, {"start": 1657.42, "end": 1659.42, "text": " We're gonna see why in a second"}, {"start": 1659.42, "end": 1661.42, "text": " So for now let me just omit it"}, {"start": 1661.42, "end": 1665.42, "text": " I'm just gonna put transform to none"}, {"start": 1665.42, "end": 1666.42, "text": " And that's it"}, {"start": 1666.42, "end": 1669.42, "text": " Let me just print the type of this thing"}, {"start": 1669.42, "end": 1673.42, "text": " Train dataset just to have something as a placeholder"}, {"start": 1673.42, "end": 1679.42, "text": " And let me rerun this, let me close this, let me rerun this and see whether this works"}, {"start": 1679.42, "end": 1680.42, "text": " And it does"}, {"start": 1680.42, "end": 1684.42, "text": " It's downloading the data, I think it's finished"}, {"start": 1684.42, "end": 1691.42, "text": " We have train MNIST dataset right here in this Google Colabs storage space"}, {"start": 1691.42, "end": 1694.42, "text": " So let's now analyze what this dataset is"}, {"start": 1694.42, "end": 1701.42, "text": " So basically if I were to fetch like a something from the train dataset"}, {"start": 1701.42, "end": 1704.42, "text": " So I'm fairly sure it has the index functionality"}, {"start": 1704.42, "end": 1706.42, "text": " So I can just index and take the 0th one"}, {"start": 1706.42, "end": 1709.42, "text": " I'm gonna print the type of this something"}, {"start": 1709.42, "end": 1716.42, "text": " And that's going to be basically, as you can see here, pill image and we have a label"}, {"start": 1716.42, "end": 1718.42, "text": " So that's the structure of our dataset"}, {"start": 1718.42, "end": 1723.42, "text": " So because we're working with Jax, we're not going to be using PyTorch tensors"}, {"start": 1723.42, "end": 1728.42, "text": " So that means we need to convert this into a 
NumPy array and not pill image"}, {"start": 1728.42, "end": 1730.42, "text": " So how do we do that?"}, {"start": 1730.42, "end": 1732.42, "text": " And the answer is using this transform functions"}, {"start": 1732.42, "end": 1735.42, "text": " We're just gonna add a simple transform function"}, {"start": 1735.42, "end": 1739.42, "text": " So I'm just gonna call it custom transform"}, {"start": 1739.42, "end": 1743.42, "text": " And what the function is going to do is"}, {"start": 1743.42, "end": 1749.42, "text": " It's gonna take the image and it's going to convert the image into NumPy"}, {"start": 1749.42, "end": 1753.42, "text": " So something like this, so NumPy array of this x"}, {"start": 1753.42, "end": 1759.42, "text": " And I'm just gonna set the data type to MPFloat32"}, {"start": 1759.42, "end": 1761.42, "text": " And I'm gonna return this thing"}, {"start": 1761.42, "end": 1763.42, "text": " I don't even need a temporary variable here"}, {"start": 1763.42, "end": 1765.42, "text": " So just return this"}, {"start": 1765.42, "end": 1771.42, "text": " And now if I were to run this, let me see whether it's gonna be converted into NumPy"}, {"start": 1771.42, "end": 1773.42, "text": " And it's not"}, {"start": 1774.42, "end": 1778.42, "text": " Because I forgot to add the custom transform here"}, {"start": 1778.42, "end": 1780.42, "text": " So let me run this now"}, {"start": 1780.42, "end": 1785.42, "text": " And yep, I'm getting unveiled the output here, which is a good sign actually"}, {"start": 1785.42, "end": 1789.42, "text": " So let me, instead of printing something, let me print the something 0"}, {"start": 1789.42, "end": 1795.42, "text": " And then the type, this should be, I guess, NumPy"}, {"start": 1796.42, "end": 1797.42, "text": " And it is"}, {"start": 1797.42, "end": 1800.42, "text": " Okay, so we got this sorted"}, {"start": 1800.42, "end": 1804.42, "text": " Basically we now have a NumPy Float32 image"}, {"start": 1804.42, "end": 1810.42, "text": " And the second, let me just check what the second part of this something tuple is"}, {"start": 1810.42, "end": 1813.42, "text": " So let me just check whether that's integer or whatnot"}, {"start": 1813.42, "end": 1815.42, "text": " So it is, it's an integer"}, {"start": 1815.42, "end": 1817.42, "text": " So we have, because it's a label"}, {"start": 1817.42, "end": 1819.42, "text": " So I think that should be alright"}, {"start": 1819.42, "end": 1824.42, "text": " Now, we need to additionally create the test dataset"}, {"start": 1824.42, "end": 1827.42, "text": " So I'm just gonna copy paste this part here"}, {"start": 1828.42, "end": 1830.42, "text": " I'm gonna copy paste it here"}, {"start": 1830.42, "end": 1832.42, "text": " This is going to be a test"}, {"start": 1832.42, "end": 1836.42, "text": " Train is false because we want a test dataset"}, {"start": 1836.42, "end": 1838.42, "text": " Download equals true is okay"}, {"start": 1838.42, "end": 1840.42, "text": " And custom transform, that's all okay"}, {"start": 1840.42, "end": 1847.42, "text": " So let me just run this again and see whether it's alright"}, {"start": 1847.42, "end": 1849.42, "text": " So it's downloading the test data"}, {"start": 1849.42, "end": 1854.42, "text": " And as you can see here, we have the test MNIST in the storage space"}, {"start": 1854.42, "end": 1855.42, "text": " Okay, cool"}, {"start": 1855.42, "end": 1858.42, "text": " Second thing we need to do is add the data loader"}, {"start": 1858.42, "end": 1862.42, "text": " So we need 
to be able to load these images in"}, {"start": 1863.42, "end": 1866.42, "text": " I think I just understood as I made a mistake here"}, {"start": 1866.42, "end": 1868.42, "text": " We need to flatten out images"}, {"start": 1868.42, "end": 1873.42, "text": " If I were to, again, if I was to take the image label tuple"}, {"start": 1873.42, "end": 1875.42, "text": " Because I know what this is now"}, {"start": 1875.42, "end": 1877.42, "text": " I'm just gonna take the train dataset"}, {"start": 1877.42, "end": 1879.42, "text": " I'm gonna take the 0th one"}, {"start": 1879.42, "end": 1881.42, "text": " And I'm just gonna take the, actually just"}, {"start": 1881.42, "end": 1883.42, "text": " Let me just grab the image"}, {"start": 1883.42, "end": 1885.42, "text": " By adding 0 here"}, {"start": 1885.42, "end": 1887.42, "text": " This is an image, but the thing is"}, {"start": 1887.42, "end": 1890.42, "text": " The shape of this image is, I guess, 28 by 28, yep"}, {"start": 1890.42, "end": 1892.42, "text": " So we need to add the"}, {"start": 1893.42, "end": 1896.42, "text": " Just flatten the function, flatten the thing here"}, {"start": 1896.42, "end": 1900.42, "text": " And if I were to run this, we get the correct shape now"}, {"start": 1900.42, "end": 1902.42, "text": " So this is now correct"}, {"start": 1902.42, "end": 1905.42, "text": " Basically the last thing we need to add here is just data loader"}, {"start": 1905.42, "end": 1907.42, "text": " So if I was to search it"}, {"start": 1907.42, "end": 1909.42, "text": " PyTorch data loader"}, {"start": 1910.42, "end": 1912.42, "text": " Let's see what I get here"}, {"start": 1913.42, "end": 1914.42, "text": " Data loader"}, {"start": 1914.42, "end": 1917.42, "text": " Okay, we have the signature here"}, {"start": 1917.42, "end": 1919.42, "text": " I'm just gonna copy paste it"}, {"start": 1920.42, "end": 1923.42, "text": " And let's see whether that's gonna work for us"}, {"start": 1923.42, "end": 1928.42, "text": " So, I'm gonna obviously need to import the data loader functionality"}, {"start": 1928.42, "end": 1931.42, "text": " So let's see where can I import it from"}, {"start": 1934.42, "end": 1937.42, "text": " Torch vision or what not, I don't know"}, {"start": 1937.42, "end": 1939.42, "text": " Let's see here"}, {"start": 1942.42, "end": 1944.42, "text": " Import"}, {"start": 1944.42, "end": 1946.42, "text": " PyTorch data loader"}, {"start": 1948.42, "end": 1951.42, "text": " Let's open up this data tutorial"}, {"start": 1951.42, "end": 1955.42, "text": " So basically data loader"}, {"start": 1955.42, "end": 1957.42, "text": " Oh my god, I hate this"}, {"start": 1959.42, "end": 1963.42, "text": " Okay, from Torch, utils data, import data loader"}, {"start": 1963.42, "end": 1965.42, "text": " Okay, let me go back here"}, {"start": 1965.42, "end": 1968.42, "text": " Let me import that part"}, {"start": 1968.42, "end": 1970.42, "text": " Let me rerun the cell"}, {"start": 1970.42, "end": 1972.42, "text": " And now we have the data loader"}, {"start": 1972.42, "end": 1974.42, "text": " Okay, so this will work"}, {"start": 1974.42, "end": 1977.42, "text": " I'm gonna use the train data set"}, {"start": 1977.42, "end": 1979.42, "text": " I'm gonna create the train loader"}, {"start": 1979.42, "end": 1985.42, "text": " And I'm gonna add some random parameters here"}, {"start": 1985.42, "end": 1990.42, "text": " So batch size will be set to, for example, 128"}, {"start": 1990.42, "end": 1993.42, "text": " And so just passing the 
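[Editor's note: a sketch of the MNIST datasets and the transform built in this part, including the flattening fix added just above; the `root` directory names are assumptions based on the folders mentioned in the video.]

```python
import numpy as np
from torchvision.datasets import MNIST

def custom_transform(x):
    # PIL image -> flat float32 NumPy array of shape (784,),
    # so each sample already matches what the MLP expects.
    return np.ravel(np.array(x, dtype=np.float32))

train_dataset = MNIST(root="train_mnist", train=True, download=True,
                      transform=custom_transform)
test_dataset = MNIST(root="test_mnist", train=False, download=True,
                     transform=custom_transform)

img, label = train_dataset[0]
print(img.shape, img.dtype, label)  # (784,) float32 <integer label>
```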
batch size"}, {"start": 1993.42, "end": 1995.42, "text": " Shuffle is gonna be equal to true"}, {"start": 1995.42, "end": 1997.42, "text": " Because this is a train data set"}, {"start": 1997.42, "end": 1999.42, "text": " We wanna shuffle"}, {"start": 1999.42, "end": 2001.42, "text": " We don't need the custom sampler"}, {"start": 2001.42, "end": 2003.42, "text": " We don't need a batch sampler"}, {"start": 2003.42, "end": 2005.42, "text": " Number of workers is not important"}, {"start": 2005.42, "end": 2007.42, "text": " We'll need a collate function"}, {"start": 2007.42, "end": 2009.42, "text": " Because now we have a"}, {"start": 2009.42, "end": 2012.42, "text": " Basically we're not using PyTorch sensors"}, {"start": 2012.42, "end": 2015.42, "text": " These are all just optimization parameters"}, {"start": 2015.42, "end": 2016.42, "text": " Such as pin memory"}, {"start": 2016.42, "end": 2019.42, "text": " So we don't need none of those"}, {"start": 2019.42, "end": 2021.42, "text": " So let me just do it like this"}, {"start": 2021.42, "end": 2023.42, "text": " So we additionally need to create a collate function"}, {"start": 2023.42, "end": 2025.42, "text": " Because let me see what happens if I were to just"}, {"start": 2025.42, "end": 2027.42, "text": " Use it without a custom collate function"}, {"start": 2027.42, "end": 2032.42, "text": " So batch of data is equal to next"}, {"start": 2032.42, "end": 2035.42, "text": " I'm just gonna convert the train loader into iterator"}, {"start": 2035.42, "end": 2042.42, "text": " And then just fetch the next batch using the next function"}, {"start": 2042.42, "end": 2044.42, "text": " Let me see what this thing is"}, {"start": 2044.42, "end": 2046.42, "text": " Batch of data"}, {"start": 2046.42, "end": 2048.42, "text": " And whether it's gonna work"}, {"start": 2048.42, "end": 2049.42, "text": " So it's a list"}, {"start": 2049.42, "end": 2051.42, "text": " Okay, let's see what this list consists of"}, {"start": 2051.42, "end": 2054.42, "text": " So if I were to take the zeroth element"}, {"start": 2054.42, "end": 2056.42, "text": " And run it again"}, {"start": 2056.42, "end": 2057.42, "text": " It's a tensor"}, {"start": 2057.42, "end": 2058.42, "text": " So as you can see"}, {"start": 2058.42, "end": 2060.42, "text": " PyTorch by default creates a tensor"}, {"start": 2060.42, "end": 2062.42, "text": " If we do not add the custom collate function"}, {"start": 2062.42, "end": 2066.42, "text": " So that's why I'm gonna add a custom collate function here"}, {"start": 2066.42, "end": 2070.42, "text": " So def custom collate function"}, {"start": 2070.42, "end": 2074.42, "text": " We pass in the batch of data"}, {"start": 2074.42, "end": 2078.42, "text": " Basically we want to have all of the images"}, {"start": 2078.42, "end": 2080.42, "text": " Batched into a single lump array"}, {"start": 2080.42, "end": 2084.42, "text": " And we want to have all of the labels in a separate data structure"}, {"start": 2084.42, "end": 2088.42, "text": " Let's see what we currently get"}, {"start": 2088.42, "end": 2090.42, "text": " So I'm gonna just grab this custom collate"}, {"start": 2090.42, "end": 2092.42, "text": " I'm gonna paste it here"}, {"start": 2092.42, "end": 2096.42, "text": " So this batch input is just going to be a list of tuples"}, {"start": 2096.42, "end": 2101.42, "text": " And each tuple contains the numpy image and contains the label"}, {"start": 2101.42, "end": 2104.42, "text": " So we need to convert that into a bit different 
shape"}, {"start": 2104.42, "end": 2107.42, "text": " So instead of having a list of tuples"}, {"start": 2107.42, "end": 2109.42, "text": " We need a list with two elements"}, {"start": 2109.42, "end": 2112.42, "text": " One is all of our images batched into a numpy array"}, {"start": 2112.42, "end": 2117.42, "text": " And the second one is all of our labels basically"}, {"start": 2117.42, "end": 2120.42, "text": " Inside of a single numpy array"}, {"start": 2120.42, "end": 2123.42, "text": " So just to make sure I'm correct here"}, {"start": 2123.42, "end": 2126.42, "text": " Let me print the type of batch that should be a list"}, {"start": 2126.42, "end": 2131.42, "text": " Then let me print the type of the batch of zero that should be a tuple"}, {"start": 2131.42, "end": 2138.42, "text": " Finally let me print the type of the batch of zero"}, {"start": 2138.42, "end": 2140.42, "text": " And then zero this should be numpy"}, {"start": 2140.42, "end": 2142.42, "text": " And if I were to copy paste this"}, {"start": 2142.42, "end": 2144.42, "text": " This should be I guess integer"}, {"start": 2144.42, "end": 2147.42, "text": " And yeah"}, {"start": 2147.42, "end": 2150.42, "text": " This is gonna fail obviously because we're not returning anything"}, {"start": 2150.42, "end": 2153.42, "text": " Hopefully the print functionality will work nonetheless"}, {"start": 2153.42, "end": 2157.42, "text": " Yeah we have a list, then we have a tuple, then we have numpy array, we have integers"}, {"start": 2157.42, "end": 2159.42, "text": " So I already knew the structure obviously"}, {"start": 2159.42, "end": 2162.42, "text": " If you did not know the structure you can just kind of play with it"}, {"start": 2162.42, "end": 2165.42, "text": " Just analyze the shapes, analyze the types, etc."}, {"start": 2165.42, "end": 2169.42, "text": " Okay so what we need to do here is we need to transpose this data"}, {"start": 2169.42, "end": 2171.42, "text": " So I'm gonna do it something like this"}, {"start": 2171.42, "end": 2178.42, "text": " Transposed data is gonna be"}, {"start": 2178.42, "end": 2183.42, "text": " I'm just going to unpack the batch of data here and then zip it"}, {"start": 2183.42, "end": 2187.42, "text": " So what this will do is we'll now have"}, {"start": 2187.42, "end": 2188.42, "text": " So this is a list of tuples"}, {"start": 2188.42, "end": 2191.42, "text": " After we do this we'll just have a bunch of tuples and zip"}, {"start": 2191.42, "end": 2194.42, "text": " We'll take the first element from every tuple that means all of our images"}, {"start": 2194.42, "end": 2198.42, "text": " And it's gonna put it into a single list"}, {"start": 2198.42, "end": 2202.42, "text": " And then it's gonna take all of our labels from all the tuples"}, {"start": 2202.42, "end": 2204.42, "text": " And it's gonna combine it again into a list"}, {"start": 2204.42, "end": 2206.42, "text": " So we can make sure that's correct"}, {"start": 2206.42, "end": 2209.42, "text": " So this again should be a list"}, {"start": 2209.42, "end": 2211.42, "text": " This thing here"}, {"start": 2211.42, "end": 2216.42, "text": " And basically the zeroth element should be a list as well"}, {"start": 2216.42, "end": 2219.42, "text": " And the first element should be a list as well"}, {"start": 2219.42, "end": 2222.42, "text": " So this should be"}, {"start": 2222.42, "end": 2224.42, "text": " All of these should be lists"}, {"start": 2224.42, "end": 2227.42, "text": " Let me check whether that's the case"}, {"start": 
2227.42, "end": 2230.42, "text": " Zip is not subscriptable"}, {"start": 2230.42, "end": 2232.42, "text": " Let me see what I have done"}, {"start": 2232.42, "end": 2238.42, "text": " So basically let me check"}, {"start": 2238.42, "end": 2242.42, "text": " Oh yeah, I have to convert this into a list explicitly here"}, {"start": 2242.42, "end": 2244.42, "text": " So let me now run it"}, {"start": 2244.42, "end": 2248.42, "text": " And yeah, so we have list tuple tuple"}, {"start": 2248.42, "end": 2251.42, "text": " So yeah, instead of a list, they're actually using tuple in the background"}, {"start": 2251.42, "end": 2254.42, "text": " If I were to try and"}, {"start": 2254.42, "end": 2257.42, "text": " So the length of this thing should be 2 I guess"}, {"start": 2257.42, "end": 2260.42, "text": " Transpose data length should be 2 because now we have"}, {"start": 2260.42, "end": 2262.42, "text": " Yeah, it's 2"}, {"start": 2262.42, "end": 2265.42, "text": " Because now we have all of the images and all of the labels in a single tuple"}, {"start": 2265.42, "end": 2270.42, "text": " So the next thing we need to do is basically"}, {"start": 2270.42, "end": 2273.42, "text": " Convert the tuples into a NumPy array"}, {"start": 2273.42, "end": 2275.42, "text": " So I'm gonna do something with this"}, {"start": 2275.42, "end": 2281.42, "text": " Labels is a NumPy array of this transposed data"}, {"start": 2281.42, "end": 2285.42, "text": " Of 1 because the second tuple is the labels"}, {"start": 2285.42, "end": 2287.42, "text": " And then we need images"}, {"start": 2287.42, "end": 2289.42, "text": " And here it's a bit more complex"}, {"start": 2289.42, "end": 2295.42, "text": " We need to do MPStack of transposed data of 0"}, {"start": 2295.42, "end": 2298.42, "text": " And this will hopefully work as expected"}, {"start": 2298.42, "end": 2301.42, "text": " Now I'm gonna return back the images and the labels"}, {"start": 2301.42, "end": 2304.42, "text": " And this collate function should now properly work"}, {"start": 2304.42, "end": 2307.42, "text": " Let me run this and see whether we have the correct result"}, {"start": 2307.42, "end": 2309.42, "text": " It seems nothing is crashing"}, {"start": 2309.42, "end": 2313.42, "text": " So I managed to get the batch of data"}, {"start": 2313.42, "end": 2316.42, "text": " We're printing the type and it seems"}, {"start": 2316.42, "end": 2319.42, "text": " Okay, it's a NumPy array"}, {"start": 2319.42, "end": 2321.42, "text": " Let's see what the shape is"}, {"start": 2321.42, "end": 2323.42, "text": " So if I was to take the"}, {"start": 2323.42, "end": 2324.42, "text": " Let me do it like this"}, {"start": 2324.42, "end": 2327.42, "text": " So batch of 0 are images"}, {"start": 2327.42, "end": 2331.42, "text": " So that's like this, images"}, {"start": 2331.42, "end": 2333.42, "text": " And oops"}, {"start": 2333.42, "end": 2339.42, "text": " And finally, let's see what's the shape of the images"}, {"start": 2339.42, "end": 2341.42, "text": " Let me delete this thing"}, {"start": 2341.42, "end": 2342.42, "text": " And run this"}, {"start": 2342.42, "end": 2346.42, "text": " And yeah, we have a batch of 128 images"}, {"start": 2346.42, "end": 2348.42, "text": " Which are flattened out"}, {"start": 2348.42, "end": 2350.42, "text": " And if I was to take the labels"}, {"start": 2350.42, "end": 2352.42, "text": " Same as this here"}, {"start": 2352.42, "end": 2354.42, "text": " I'm just gonna add"}, {"start": 2354.42, "end": 2358.42, "text": " Instead 
of 0, I'm gonna put 1 for labels"}, {"start": 2358.42, "end": 2364.42, "text": " And let's see the shape of these labels"}, {"start": 2364.42, "end": 2365.42, "text": " This looks correct"}, {"start": 2365.42, "end": 2368.42, "text": " And finally, we need to check the data type"}, {"start": 2368.42, "end": 2370.42, "text": " So images"}, {"start": 2370.42, "end": 2373.42, "text": " Let's take the 0th image and print the data type"}, {"start": 2373.42, "end": 2378.42, "text": " And also let's take the 0th label and print the data type"}, {"start": 2378.42, "end": 2383.42, "text": " So it's always a smart idea to understand your shapes and your data types"}, {"start": 2383.42, "end": 2385.42, "text": " To avoid some bugs down the road"}, {"start": 2385.42, "end": 2391.42, "text": " So flow 32 looks good and in 64 I think that's gonna be totally fine"}, {"start": 2391.42, "end": 2394.42, "text": " Okay, we have our data loading functionality"}, {"start": 2394.42, "end": 2397.42, "text": " Let's continue on and see what else do we need to do"}, {"start": 2397.42, "end": 2399.42, "text": " And that's adding the looping function"}, {"start": 2399.42, "end": 2403.42, "text": " So the training loop functionality"}, {"start": 2403.42, "end": 2407.42, "text": " Let me delete this because this is done"}, {"start": 2407.42, "end": 2411.42, "text": " And let me now finally, let me also delete this"}, {"start": 2411.42, "end": 2415.42, "text": " And let me add a final cell adding a looping function"}, {"start": 2415.42, "end": 2417.42, "text": " A training loop function"}, {"start": 2417.42, "end": 2419.42, "text": " Okay, this should be fairly simple"}, {"start": 2419.42, "end": 2422.42, "text": " What we need to do is define the number of epochs"}, {"start": 2422.42, "end": 2426.42, "text": " So number of epochs is gonna be, I don't know, for now let's put something like 10"}, {"start": 2426.42, "end": 2429.42, "text": " So we're gonna iterate through the epochs"}, {"start": 2429.42, "end": 2432.42, "text": " So in range num of epochs"}, {"start": 2432.42, "end": 2434.42, "text": " We're gonna do it 10 times"}, {"start": 2434.42, "end": 2436.42, "text": " We're going to do what?"}, {"start": 2436.42, "end": 2440.42, "text": " So we have this pattern where we take the MLP parameters"}, {"start": 2440.42, "end": 2442.42, "text": " So we have the MLP params"}, {"start": 2442.42, "end": 2444.42, "text": " We're gonna do some update function"}, {"start": 2444.42, "end": 2447.42, "text": " To which we're gonna pass the MLP params"}, {"start": 2447.42, "end": 2449.42, "text": " We're gonna pass the data"}, {"start": 2449.42, "end": 2452.42, "text": " So images and GT labels"}, {"start": 2452.42, "end": 2454.42, "text": " And this should just update our parameters"}, {"start": 2454.42, "end": 2457.42, "text": " And that should be the high level functionality"}, {"start": 2457.42, "end": 2460.42, "text": " So now we need to write this update function"}, {"start": 2460.42, "end": 2461.42, "text": " Let's see how we can do that"}, {"start": 2461.42, "end": 2463.42, "text": " So def update"}, {"start": 2463.42, "end": 2465.42, "text": " We pass in the params"}, {"start": 2465.42, "end": 2466.42, "text": " We pass in the images"}, {"start": 2466.42, "end": 2468.42, "text": " We pass in the labels"}, {"start": 2468.42, "end": 2471.42, "text": " Okay, obviously I forgot to add data loading functionality here"}, {"start": 2471.42, "end": 2475.42, "text": " So we'll have basically for what?"}, {"start": 2475.42, 
"end": 2476.42, "text": " We'll have images"}, {"start": 2476.42, "end": 2483.42, "text": " We'll have labels in the train loader"}, {"start": 2483.42, "end": 2488.42, "text": " And now we can pass the images and the GT labels"}, {"start": 2488.42, "end": 2489.42, "text": " But there is a small catch here"}, {"start": 2489.42, "end": 2493.42, "text": " So images"}, {"start": 2493.42, "end": 2496.42, "text": " Images are currently on a CPU"}, {"start": 2496.42, "end": 2501.42, "text": " And that's gonna be handled correctly by the Jack Slump API"}, {"start": 2501.42, "end": 2503.42, "text": " So we don't need to worry about it"}, {"start": 2503.42, "end": 2505.42, "text": " The thing we need to do here though"}, {"start": 2505.42, "end": 2509.42, "text": " Is to convert the labels into one hot representation"}, {"start": 2509.42, "end": 2511.42, "text": " We'll soon see why"}, {"start": 2511.42, "end": 2517.42, "text": " So let's first start developing this update function"}, {"start": 2517.42, "end": 2520.42, "text": " And then we'll understand what else we need to develop"}, {"start": 2520.42, "end": 2527.42, "text": " Okay, we obviously need to call the batched MLP predict function"}, {"start": 2527.42, "end": 2529.42, "text": " Okay, this one"}, {"start": 2529.42, "end": 2532.42, "text": " We're gonna pass in the parameters"}, {"start": 2532.42, "end": 2534.42, "text": " That's this argument here"}, {"start": 2534.42, "end": 2536.42, "text": " We're gonna pass in the images"}, {"start": 2536.42, "end": 2541.42, "text": " And this will return the expected shape of this"}, {"start": 2541.42, "end": 2543.42, "text": " Will be whatever the batch dimension is"}, {"start": 2543.42, "end": 2546.42, "text": " And then 10 because we have 10 outputs, remember"}, {"start": 2546.42, "end": 2550.42, "text": " And these are log softmax outputs"}, {"start": 2550.42, "end": 2552.42, "text": " So let's call those predictions"}, {"start": 2552.42, "end": 2554.42, "text": " So those are predictions here"}, {"start": 2554.42, "end": 2560.42, "text": " Finally, this is the trick that I told you about"}, {"start": 2560.42, "end": 2564.42, "text": " That will help us achieve the numerical stability of cross entropy loss"}, {"start": 2564.42, "end": 2568.42, "text": " So we'll need to just do something like this"}, {"start": 2568.42, "end": 2574.42, "text": " We need to multiply predictions with these labels"}, {"start": 2574.42, "end": 2576.42, "text": " So these are just GT labels"}, {"start": 2576.42, "end": 2578.42, "text": " So ground truth labels"}, {"start": 2578.42, "end": 2581.42, "text": " If we multiply them and if they are one hot"}, {"start": 2581.42, "end": 2585.42, "text": " That means we'll fetch the log softmax component"}, {"start": 2585.42, "end": 2589.42, "text": " From the predictions wherever the true value is"}, {"start": 2589.42, "end": 2591.42, "text": " And we'll want to push that to one"}, {"start": 2591.42, "end": 2594.42, "text": " So we'll want to have a probability one there"}, {"start": 2594.42, "end": 2597.42, "text": " Because GT label basically says that"}, {"start": 2597.42, "end": 2600.42, "text": " That's where the true class should lie in"}, {"start": 2600.42, "end": 2605.42, "text": " So now we just need to basically add minus JMP mean here"}, {"start": 2605.42, "end": 2608.42, "text": " And this is going to work as expected"}, {"start": 2608.42, "end": 2612.42, "text": " Because this is going to fetch the correct component, this part here"}, {"start": 2612.42, "end": 
2614.42, "text": " And then we just do the mean"}, {"start": 2614.42, "end": 2617.42, "text": " This is gonna give us the cross entropy implementation pretty much"}, {"start": 2617.42, "end": 2621.42, "text": " Later, if I have enough time, I'm gonna do a more intuitive implementation"}, {"start": 2621.42, "end": 2625.42, "text": " We're gonna see it's not working because of numerical problems"}, {"start": 2625.42, "end": 2627.42, "text": " So the final component, as you can see here"}, {"start": 2627.42, "end": 2629.42, "text": " We need to convert this into one hot representation"}, {"start": 2629.42, "end": 2636.42, "text": " So that means that GT labels are actually JECs"}, {"start": 2636.42, "end": 2639.42, "text": " Let me see whether it's in... yeah, it's here"}, {"start": 2639.42, "end": 2642.42, "text": " So basically we just convert labels into one hot"}, {"start": 2642.42, "end": 2645.42, "text": " I'm gonna make sure this shape here is correct"}, {"start": 2645.42, "end": 2648.42, "text": " So labels shape"}, {"start": 2648.42, "end": 2652.42, "text": " And let's see whether that's gonna work or not"}, {"start": 2652.42, "end": 2654.42, "text": " Okay, this is obviously a dummy mistake"}, {"start": 2654.42, "end": 2657.42, "text": " This should be in a loss function, not in the update function"}, {"start": 2657.42, "end": 2660.42, "text": " So in this loss function, we pass in the parameters"}, {"start": 2660.42, "end": 2665.42, "text": " We pass in the, I guess, images and the GT labels"}, {"start": 2665.42, "end": 2668.42, "text": " And here we need to do this thing"}, {"start": 2668.42, "end": 2672.42, "text": " All of this goes into the loss function"}, {"start": 2672.42, "end": 2675.42, "text": " And now in the actual update function"}, {"start": 2675.42, "end": 2678.42, "text": " What we'll do is we'll take the gradients"}, {"start": 2678.42, "end": 2680.42, "text": " So we'll just find the grads"}, {"start": 2680.42, "end": 2682.42, "text": " So we'll call the grad function"}, {"start": 2682.42, "end": 2685.42, "text": " And we'll wrap in the loss function"}, {"start": 2685.42, "end": 2687.42, "text": " We're gonna pass in whatever is required"}, {"start": 2687.42, "end": 2694.42, "text": " So that's params, images, and GT labels"}, {"start": 2694.42, "end": 2698.42, "text": " And this is going to give us basically derivatives"}, {"start": 2698.42, "end": 2701.42, "text": " Of every single parameter in this pyTree here"}, {"start": 2701.42, "end": 2703.42, "text": " So we get the grads here"}, {"start": 2703.42, "end": 2708.42, "text": " And finally we need to return the JECsTreeMultiMap"}, {"start": 2708.42, "end": 2710.42, "text": " Is the function I need here"}, {"start": 2710.42, "end": 2712.42, "text": " We're gonna just do a simple function here"}, {"start": 2712.42, "end": 2715.42, "text": " So we get the weights, we get the gradients"}, {"start": 2715.42, "end": 2717.42, "text": " And what we do here is"}, {"start": 2717.42, "end": 2719.42, "text": " Okay, I don't need this here"}, {"start": 2719.42, "end": 2722.42, "text": " Let me just do the stochastic gradient descent update here"}, {"start": 2722.42, "end": 2728.42, "text": " So basically from the weight we subtract learning rate times the gradient"}, {"start": 2728.42, "end": 2732.42, "text": " Whereas I'm gonna just add some learning rate here by default"}, {"start": 2732.42, "end": 2735.42, "text": " Something like, I don't know, 0.01"}, {"start": 2735.42, "end": 2738.42, "text": " And finally let me pass 
in the pyTree"}, {"start": 2738.42, "end": 2742.42, "text": " So that's params and grads"}, {"start": 2742.42, "end": 2745.42, "text": " So this should now work as expected"}, {"start": 2745.42, "end": 2747.42, "text": " But we don't have any metrics here"}, {"start": 2747.42, "end": 2750.42, "text": " Or any logging of whatever kind"}, {"start": 2750.42, "end": 2755.42, "text": " So neither the loss nor the accuracy"}, {"start": 2755.42, "end": 2757.42, "text": " So I'm just gonna do a small break here"}, {"start": 2757.42, "end": 2761.42, "text": " And then a break here just to make sure that this thing works"}, {"start": 2761.42, "end": 2763.42, "text": " So one hot required one poly..."}, {"start": 2763.42, "end": 2767.42, "text": " Okay, so I forgot to add the number of classes of MNIST"}, {"start": 2767.42, "end": 2770.42, "text": " So MNIST classes"}, {"start": 2770.42, "end": 2773.42, "text": " I'm just gonna pass in the length of that thing here"}, {"start": 2773.42, "end": 2776.42, "text": " And see whether this works now"}, {"start": 2776.42, "end": 2780.42, "text": " So learning rate is not defined"}, {"start": 2780.42, "end": 2782.42, "text": " Why not? I just added it here"}, {"start": 2782.42, "end": 2784.42, "text": " So let's see"}, {"start": 2784.42, "end": 2786.42, "text": " Oh my god, I actually placed this here"}, {"start": 2786.42, "end": 2789.42, "text": " I don't know why I'm making so many mistakes"}, {"start": 2789.42, "end": 2792.42, "text": " Learning rate here, now let's try and run it"}, {"start": 2792.42, "end": 2795.42, "text": " Yeah, I'm gonna expect this is gonna last a little bit"}, {"start": 2795.42, "end": 2798.42, "text": " Okay, so this thing actually worked"}, {"start": 2798.42, "end": 2804.42, "text": " Now let me try and grab the loss"}, {"start": 2804.42, "end": 2805.42, "text": " Not just the grads here"}, {"start": 2805.42, "end": 2808.42, "text": " So well, I think the function is called val and grad"}, {"start": 2808.42, "end": 2810.42, "text": " So let me just check that"}, {"start": 2810.42, "end": 2812.42, "text": " Val and grad, jacks"}, {"start": 2812.42, "end": 2814.42, "text": " Let's see whether that's the name of the function"}, {"start": 2814.42, "end": 2817.42, "text": " Jacks package"}, {"start": 2817.42, "end": 2820.42, "text": " Value and grad is the name of the function, okay?"}, {"start": 2820.42, "end": 2822.42, "text": " So we have value and grad here"}, {"start": 2822.42, "end": 2823.42, "text": " Value and grad"}, {"start": 2823.42, "end": 2826.42, "text": " So it's gonna return the loss and the grad"}, {"start": 2826.42, "end": 2829.42, "text": " And here I'm gonna return the loss and the parameters"}, {"start": 2829.42, "end": 2832.42, "text": " And so let me just do loss here"}, {"start": 2832.42, "end": 2834.42, "text": " And let me print the value of the loss"}, {"start": 2834.42, "end": 2837.42, "text": " And let's see what happens here"}, {"start": 2837.42, "end": 2842.42, "text": " Okay, I forgot to obviously import this thing here"}, {"start": 2842.42, "end": 2845.42, "text": " Ummm"}, {"start": 2845.42, "end": 2848.42, "text": " I guess it's from jacks, right?"}, {"start": 2848.42, "end": 2850.42, "text": " Yep, it seems that's correct"}, {"start": 2850.42, "end": 2852.42, "text": " Let me run this"}, {"start": 2852.42, "end": 2855.42, "text": " And we get some reasonable value of the loss function here"}, {"start": 2855.42, "end": 2857.42, "text": " So that's cool"}, {"start": 2857.42, "end": 2862.42, "text": " 
Let's now try and remove the break"}, {"start": 2862.42, "end": 2865.42, "text": " And let's run this for a single epoch"}, {"start": 2865.42, "end": 2867.42, "text": " And see whether the loss goes down"}, {"start": 2867.42, "end": 2870.42, "text": " So let me run this and see what we get"}, {"start": 2870.42, "end": 2872.42, "text": " So it seems that the loss is going down"}, {"start": 2872.42, "end": 2875.42, "text": " And that's very, very reassuring"}, {"start": 2875.42, "end": 2877.42, "text": " The only problem is we're getting too many output here"}, {"start": 2877.42, "end": 2880.42, "text": " So let me just interrupt the execution here"}, {"start": 2880.42, "end": 2883.42, "text": " And let me clear this output"}, {"start": 2883.42, "end": 2884.42, "text": " Wow, there's so many"}, {"start": 2884.42, "end": 2886.42, "text": " Okay, let me just clear this"}, {"start": 2886.42, "end": 2890.42, "text": " I'm gonna add a conditional logging here"}, {"start": 2890.42, "end": 2895.42, "text": " So if, I'm just gonna place a numerate here"}, {"start": 2895.42, "end": 2898.42, "text": " So that we know the batch ID"}, {"start": 2898.42, "end": 2900.42, "text": " So something like this"}, {"start": 2900.42, "end": 2902.42, "text": " And if the counter"}, {"start": 2902.42, "end": 2905.42, "text": " So every, because every batch has 128 images"}, {"start": 2905.42, "end": 2911.42, "text": " So let's say we can log every 50 batches or something"}, {"start": 2911.42, "end": 2913.42, "text": " So if this equals zero"}, {"start": 2913.42, "end": 2917.42, "text": " Finally, let's move the loss inside of this if statement"}, {"start": 2917.42, "end": 2920.42, "text": " So let me indent it here"}, {"start": 2920.42, "end": 2924.42, "text": " I'm gonna also just print this thing once"}, {"start": 2924.42, "end": 2928.42, "text": " And now this looks like it should work"}, {"start": 2928.42, "end": 2930.42, "text": " Let me run it"}, {"start": 2930.42, "end": 2934.42, "text": " And see, so 128, 10, this looks to be a good format"}, {"start": 2934.42, "end": 2937.42, "text": " So I'm just gonna interrupt this execution"}, {"start": 2937.42, "end": 2943.42, "text": " I'm basically going to just leave the loss printing here"}, {"start": 2943.42, "end": 2948.42, "text": " And the final thing before I'll remove this break"}, {"start": 2948.42, "end": 2951.42, "text": " Is to add the accuracy metrics"}, {"start": 2951.42, "end": 2954.42, "text": " So let's add the accuracy logging as well"}, {"start": 2954.42, "end": 2958.42, "text": " And then we'll be certain that MLP is learning something"}, {"start": 2958.42, "end": 2961.42, "text": " So accuracy"}, {"start": 2961.42, "end": 2962.42, "text": " Again, I'm not sure whether Jax"}, {"start": 2962.42, "end": 2967.42, "text": " Let me see whether Jax has accuracy metric"}, {"start": 2967.42, "end": 2969.42, "text": " Yeah, I don't see anything"}, {"start": 2969.42, "end": 2970.42, "text": " So I'm just gonna implement it myself"}, {"start": 2970.42, "end": 2974.42, "text": " So basically we're gonna pass the parameters again"}, {"start": 2974.42, "end": 2976.42, "text": " We're gonna pass the loader"}, {"start": 2976.42, "end": 2978.42, "text": " So either test or train loader"}, {"start": 2978.42, "end": 2980.42, "text": " And that should be it I think"}, {"start": 2980.42, "end": 2987.42, "text": " So the thing we need to do is find the number of correctly classified images"}, {"start": 2987.42, "end": 2992.42, "text": " So I'm just gonna basically 
iterate through images and labels"}, {"start": 2992.42, "end": 2995.42, "text": " Here in the loader"}, {"start": 2995.42, "end": 2999.42, "text": " I'm gonna again run the"}, {"start": 2999.42, "end": 3001.42, "text": " So I'm gonna call the batch"}, {"start": 3001.42, "end": 3003.42, "text": " So something like this"}, {"start": 3003.42, "end": 3006.42, "text": " I mean this is going to be inefficient"}, {"start": 3006.42, "end": 3010.42, "text": " Because I'm gonna run it in general if we had some arbitrary data set"}, {"start": 3010.42, "end": 3013.42, "text": " But this implantation is gonna work for MNIST pretty well"}, {"start": 3013.42, "end": 3017.42, "text": " So I'm just gonna go through the whole training set and through the whole test set"}, {"start": 3017.42, "end": 3020.42, "text": " And find the accuracy and report that somewhere here"}, {"start": 3020.42, "end": 3024.42, "text": " So after we finish the epoch"}, {"start": 3024.42, "end": 3027.42, "text": " I'm gonna print something like this"}, {"start": 3027.42, "end": 3029.42, "text": " So epoch"}, {"start": 3029.42, "end": 3032.42, "text": " And then I'm gonna print the ID of the epoch"}, {"start": 3032.42, "end": 3038.42, "text": " And then I'm gonna print the train accuracy equals to this"}, {"start": 3038.42, "end": 3041.42, "text": " And test accuracy equals to this"}, {"start": 3041.42, "end": 3045.42, "text": " I'm just gonna call the basically accuracy"}, {"start": 3045.42, "end": 3051.42, "text": " I'm gonna pass the MLP params after the epoch has been finished"}, {"start": 3051.42, "end": 3053.42, "text": " I'm gonna pass the train loader"}, {"start": 3053.42, "end": 3055.42, "text": " So this thing"}, {"start": 3055.42, "end": 3057.42, "text": " And I just forgot to"}, {"start": 3057.42, "end": 3061.42, "text": " So I basically just realized that I forgot to add the test loader"}, {"start": 3061.42, "end": 3063.42, "text": " So I'm gonna quickly add that as well"}, {"start": 3063.42, "end": 3065.42, "text": " So something like this"}, {"start": 3065.42, "end": 3067.42, "text": " Let me paste it here"}, {"start": 3067.42, "end": 3069.42, "text": " So now I have to add the test loader"}, {"start": 3069.42, "end": 3071.42, "text": " Somewhere above"}, {"start": 3071.42, "end": 3073.42, "text": " Let me see where I've defined"}, {"start": 3073.42, "end": 3074.42, "text": " Yeah, here"}, {"start": 3074.42, "end": 3076.42, "text": " So I'm gonna add the test loader"}, {"start": 3076.42, "end": 3079.42, "text": " It's gonna be the same as the train data loader"}, {"start": 3079.42, "end": 3083.42, "text": " I'm just gonna swap this for test data set"}, {"start": 3083.42, "end": 3086.42, "text": " Batch size, I can leave it the same size"}, {"start": 3086.42, "end": 3088.42, "text": " Shuffle, false"}, {"start": 3088.42, "end": 3090.42, "text": " No need to shuffle that data set"}, {"start": 3090.42, "end": 3093.42, "text": " And custom collate function is also fine"}, {"start": 3093.42, "end": 3096.42, "text": " So I'm gonna rerun the cell"}, {"start": 3096.42, "end": 3098.42, "text": " And now we have the test loader"}, {"start": 3098.42, "end": 3101.42, "text": " So this should now all be correctly set up"}, {"start": 3101.42, "end": 3104.42, "text": " We just need to implement the actual accuracy metric here"}, {"start": 3104.42, "end": 3107.42, "text": " Let's see, we need to do the same thing as here"}, {"start": 3107.42, "end": 3109.42, "text": " We need some predictions"}, {"start": 3109.42, "end": 3114.42, 
"text": " And then I'm gonna find the basically"}, {"start": 3114.42, "end": 3116.42, "text": " Instead of finding the raw predictions here"}, {"start": 3116.42, "end": 3118.42, "text": " So the log softmax"}, {"start": 3118.42, "end": 3123.42, "text": " I'm gonna do immediately call the soft argmax function"}, {"start": 3123.42, "end": 3126.42, "text": " And the axis should be 1"}, {"start": 3126.42, "end": 3130.42, "text": " So that's gonna find the highest probability class"}, {"start": 3130.42, "end": 3132.42, "text": " For the particular prediction"}, {"start": 3132.42, "end": 3136.42, "text": " And we're gonna get basically prediction classes here"}, {"start": 3136.42, "end": 3139.42, "text": " And we're gonna contrast that with labels pretty much"}, {"start": 3139.42, "end": 3142.42, "text": " So we need to do something like this"}, {"start": 3142.42, "end": 3145.42, "text": " We'll need some sum temporary variables"}, {"start": 3145.42, "end": 3148.42, "text": " So sum, I'm gonna call it accumulator"}, {"start": 3148.42, "end": 3153.42, "text": " Basically accumulator is gonna add here the MP sum"}, {"start": 3153.42, "end": 3156.42, "text": " Between, which is gonna do predict classes"}, {"start": 3156.42, "end": 3162.42, "text": " And we're gonna see where those predicted classes are the same as the ground truth classes"}, {"start": 3162.42, "end": 3168.42, "text": " So I can maybe do something like this just for consistency sake"}, {"start": 3168.42, "end": 3171.42, "text": " So this, and we're gonna sum them up"}, {"start": 3171.42, "end": 3174.42, "text": " Find where we had true, that means we had the same class"}, {"start": 3174.42, "end": 3177.42, "text": " So this is going to work correctly"}, {"start": 3177.42, "end": 3179.42, "text": " And finally in order to return the metric"}, {"start": 3179.42, "end": 3184.42, "text": " We need to just divide the accumulated value"}, {"start": 3184.42, "end": 3188.42, "text": " With the length of the actual data loader I guess"}, {"start": 3188.42, "end": 3191.42, "text": " Is that correct?"}, {"start": 3191.42, "end": 3195.42, "text": " No it's not because we have to also add"}, {"start": 3195.42, "end": 3197.42, "text": " We care about the number of images"}, {"start": 3197.42, "end": 3202.42, "text": " So I guess we need to multiply that with the batch size"}, {"start": 3202.42, "end": 3206.42, "text": " So let me see whether loader has batch size variable"}, {"start": 3206.42, "end": 3208.42, "text": " I'm not sure whether this is going to work"}, {"start": 3208.42, "end": 3213.42, "text": " Whether Colab autocomplete is just tricking me here"}, {"start": 3213.42, "end": 3214.42, "text": " But this should work"}, {"start": 3214.42, "end": 3217.42, "text": " And this may be misleading because this is not the accuracy"}, {"start": 3217.42, "end": 3219.42, "text": " This is just the accumulated value"}, {"start": 3219.42, "end": 3222.42, "text": " So I'm just gonna replace this with something like Xsum"}, {"start": 3222.42, "end": 3226.42, "text": " And yeah this is my find and replace method for Google Colab"}, {"start": 3226.42, "end": 3229.42, "text": " Okay, this should not work I think"}, {"start": 3229.42, "end": 3233.42, "text": " Aside from this thing maybe not being correct"}, {"start": 3233.42, "end": 3236.42, "text": " Let me try and run this"}, {"start": 3236.42, "end": 3240.42, "text": " Maybe instead of waiting for the whole epoch"}, {"start": 3240.42, "end": 3244.42, "text": " I'm just going to run it right at the 
beginning here once"}, {"start": 3244.42, "end": 3248.42, "text": " Let's just see whether this is going to work"}, {"start": 3248.42, "end": 3250.42, "text": " Okay this actually failed"}, {"start": 3250.42, "end": 3254.42, "text": " And the reason is already quite familiar to me from PyTorch"}, {"start": 3254.42, "end": 3263.42, "text": " So what I forgot to do is basically we have to set the drop last to true here"}, {"start": 3263.42, "end": 3266.42, "text": " Otherwise we'll have the..."}, {"start": 3266.42, "end": 3269.42, "text": " So potentially the last batch will not have 128 images"}, {"start": 3269.42, "end": 3272.42, "text": " And that's where the discrepancy comes to be"}, {"start": 3272.42, "end": 3278.42, "text": " So if I were to rerun the cell here and then start this thing again"}, {"start": 3278.42, "end": 3282.42, "text": " Now hopefully we'll get the correct results"}, {"start": 3282.42, "end": 3287.42, "text": " So basically what happened here is at one point the last batch was smaller than 128"}, {"start": 3287.42, "end": 3291.42, "text": " And there was a mismatch doing a comparison here and that's why it failed"}, {"start": 3291.42, "end": 3295.42, "text": " Here you can see it's not working but the thing is it's very very slow"}, {"start": 3295.42, "end": 3299.42, "text": " So what I'm going to do instead is I'm going to interrupt this"}, {"start": 3299.42, "end": 3306.42, "text": " And I'm going to load the whole MNIST into memory because it's a fairly small dataset with only 50k, 60k images"}, {"start": 3306.42, "end": 3308.42, "text": " So I'm going to do the following"}, {"start": 3308.42, "end": 3311.42, "text": " I'm going to change my implementation of accuracy"}, {"start": 3311.42, "end": 3316.42, "text": " So instead of doing the loader thing and then iterating through the whole dataset"}, {"start": 3316.42, "end": 3319.42, "text": " I'm just going to pass the dataset itself"}, {"start": 3319.42, "end": 3322.42, "text": " And that means this is going to change a lot"}, {"start": 3322.42, "end": 3325.42, "text": " So I'm going to just do the following"}, {"start": 3325.42, "end": 3330.42, "text": " Pass the whole dataset all at once into the MLP predict"}, {"start": 3330.42, "end": 3335.42, "text": " So I'm going to do something like this params the whole dataset"}, {"start": 3335.42, "end": 3337.42, "text": " So actually the images"}, {"start": 3337.42, "end": 3340.42, "text": " So these will be dataset images"}, {"start": 3340.42, "end": 3342.42, "text": " I'm going to pass them here"}, {"start": 3342.42, "end": 3350.42, "text": " We're going to have the again the JMP argmax alongside xis equals one"}, {"start": 3350.42, "end": 3355.42, "text": " So this will give us the predicted classes"}, {"start": 3355.42, "end": 3360.42, "text": " And I'm also going to pass obviously the dataset labels"}, {"start": 3360.42, "end": 3365.42, "text": " So and I'm going to compare those two and return the mean"}, {"start": 3365.42, "end": 3369.42, "text": " So basically JMP mean of this comparison"}, {"start": 3369.42, "end": 3373.42, "text": " And this is neater implementation so it's more concise"}, {"start": 3373.42, "end": 3377.42, "text": " And it's also way more fast compared to the last one"}, {"start": 3377.42, "end": 3379.42, "text": " So let me see where this makes sense"}, {"start": 3379.42, "end": 3382.42, "text": " So I'm doing inference on the whole dataset"}, {"start": 3382.42, "end": 3385.42, "text": " And I'm then converting my raw output"}, {"start": 
3385.42, "end": 3392.42, "text": " So the log softmax outputs into the actual class indices here"}, {"start": 3392.42, "end": 3396.42, "text": " And I'm comparing them to the ground truth and finding the mean"}, {"start": 3396.42, "end": 3399.42, "text": " Okay so this looks like it should work"}, {"start": 3399.42, "end": 3401.42, "text": " So now I have to do the following thing"}, {"start": 3401.42, "end": 3404.42, "text": " So I have to load the whole thing into the memory"}, {"start": 3404.42, "end": 3405.42, "text": " So I'm going to do the following"}, {"start": 3405.42, "end": 3408.42, "text": " I'm going to take this thing"}, {"start": 3408.42, "end": 3410.42, "text": " So train images"}, {"start": 3410.42, "end": 3414.42, "text": " There is this internal member called data"}, {"start": 3414.42, "end": 3417.42, "text": " I'm going to fetch it that's going to get me the images"}, {"start": 3417.42, "end": 3423.42, "text": " I'm going to convert this into like a Jack's NumPy array"}, {"start": 3423.42, "end": 3428.42, "text": " And I'm going to reshape this into"}, {"start": 3428.42, "end": 3430.42, "text": " Wait let me check what this will return"}, {"start": 3430.42, "end": 3436.42, "text": " So this will going to be print train images shape"}, {"start": 3436.42, "end": 3438.42, "text": " Let me see what this is going to return"}, {"start": 3438.42, "end": 3441.42, "text": " So yeah as you can see the problem here is it's avoiding"}, {"start": 3441.42, "end": 3446.42, "text": " So this transform is not being called when I just fetched the data like this"}, {"start": 3446.42, "end": 3450.42, "text": " So we still need to flatten out the images"}, {"start": 3450.42, "end": 3455.42, "text": " So I'm going to do that I'm going to just reshape this into basically"}, {"start": 3455.42, "end": 3459.42, "text": " Whatever the length of the train data set is"}, {"start": 3459.42, "end": 3461.42, "text": " And I'm going to put minus one here"}, {"start": 3461.42, "end": 3464.42, "text": " So that's going to hopefully flatten out the whole thing"}, {"start": 3464.42, "end": 3469.42, "text": " And now yeah now we have the correct basically shape"}, {"start": 3469.42, "end": 3473.42, "text": " Let me just check the type train images"}, {"start": 3473.42, "end": 3475.42, "text": " It's going to be on the device"}, {"start": 3475.42, "end": 3479.42, "text": " So that means whatever accelerator I'm using is going to be on that accelerator"}, {"start": 3479.42, "end": 3482.42, "text": " I'm currently set to using none"}, {"start": 3482.42, "end": 3488.42, "text": " So yeah because MLP is a fairly simple model and MNIST is a super small data set"}, {"start": 3488.42, "end": 3491.42, "text": " Let me now do the same thing for labels I need train labels"}, {"start": 3491.42, "end": 3495.42, "text": " I'm going to do the same thing here"}, {"start": 3495.42, "end": 3497.42, "text": " I think so at least"}, {"start": 3497.42, "end": 3501.42, "text": " So I'm going to this time I'm going to access target so that's labels"}, {"start": 3501.42, "end": 3505.42, "text": " I'm going to convert them into Jack's array"}, {"start": 3505.42, "end": 3510.42, "text": " I think I need the reshaping so let me check the train label shape"}, {"start": 3510.42, "end": 3515.42, "text": " And let me check the type of the train labels"}, {"start": 3515.42, "end": 3519.42, "text": " So we have device array 60K this looks correct"}, {"start": 3519.42, "end": 3521.42, "text": " I think this is going to work"}, {"start": 
3521.42, "end": 3524.42, "text": " So we need to do the same thing for test data set"}, {"start": 3524.42, "end": 3526.42, "text": " Let me just copy paste this"}, {"start": 3526.42, "end": 3529.42, "text": " Replace this with test"}, {"start": 3529.42, "end": 3533.42, "text": " Test images test labels"}, {"start": 3533.42, "end": 3536.42, "text": " Test data set here"}, {"start": 3536.42, "end": 3539.42, "text": " Test data set here"}, {"start": 3539.42, "end": 3545.42, "text": " Finally test data set here targets this should be working now"}, {"start": 3545.42, "end": 3547.42, "text": " Okay cool"}, {"start": 3547.42, "end": 3553.42, "text": " Now I'm going to instead of passing the I'm going to remove this line here"}, {"start": 3553.42, "end": 3556.42, "text": " And here I'm going to pass the data set and the label"}, {"start": 3556.42, "end": 3560.42, "text": " So I'm going to pass the train data set"}, {"start": 3560.42, "end": 3562.42, "text": " The train images"}, {"start": 3562.42, "end": 3567.42, "text": " And I'm going to pass the train labels"}, {"start": 3567.42, "end": 3574.42, "text": " And here I'm going to pass basically the same thing just test images"}, {"start": 3574.42, "end": 3578.42, "text": " And test labels"}, {"start": 3578.42, "end": 3584.42, "text": " Again I should have waited until I checked this works let me just run it like this"}, {"start": 3584.42, "end": 3588.42, "text": " See whether this is going to work"}, {"start": 3588.42, "end": 3596.42, "text": " And it's way faster as you can see and the accuracy is for some reason very very very high already"}, {"start": 3596.42, "end": 3602.42, "text": " I guess that's because we were assuming using the same MLP parameters so I have to I know what I have done"}, {"start": 3602.42, "end": 3604.42, "text": " So I'm going to interrupt this"}, {"start": 3604.42, "end": 3612.42, "text": " The thing is I'm using I'm constantly updating the MLP params so the state is saved here I have to call the init function here"}, {"start": 3612.42, "end": 3622.42, "text": " So we need to paste the let me find it let me find the init function we need to paste this here"}, {"start": 3622.42, "end": 3632.42, "text": " So that we start fresh every time we start the training so this now looks like it should work let me run it"}, {"start": 3632.42, "end": 3637.42, "text": " We have the results very very fast the accuracy is now low that seems to be working"}, {"start": 3637.42, "end": 3646.42, "text": " Let me interrupt the execution again and I'm going to delete this I'm going to yeah this now should work"}, {"start": 3646.42, "end": 3656.42, "text": " We're logging the loss every 50 batches we're logging the accuracy every epoch and we have how many epochs we have 10 epochs"}, {"start": 3656.42, "end": 3663.42, "text": " So let's start with like five and I'm going to run this and yeah let's see what the results are"}, {"start": 3663.42, "end": 3669.42, "text": " So the loss is going down that's cool that seems to be promising"}, {"start": 3669.42, "end": 3688.42, "text": " Okay after a single epoch we have 91% accuracy and we do not see any kind any sign of overfitting because the accuracy on both training and test data sets are pretty much the same 91%"}, {"start": 3688.42, "end": 3694.42, "text": " And now we have 93 I'm going to just skip until the training is done"}, {"start": 3694.42, "end": 3702.42, "text": " Okay the training is finally done we have 96% accuracy with a couple more epochs we would get to I guess 98 at least or 
something"}, {"start": 3702.42, "end": 3706.42, "text": " It could be further improved but the thing is this thing is already working"}, {"start": 3706.42, "end": 3713.42, "text": " So let me just kind of clear the output and let me try and run a prediction on a single image and I should have done this previously"}, {"start": 3713.42, "end": 3721.42, "text": " So let me fetch a single image visualize it and see whether the ground truth label like aligns with the predicted label"}, {"start": 3721.42, "end": 3731.42, "text": " So I'm going to do the following thing I'm going to fetch the single image so images let me take the like test loader"}, {"start": 3731.42, "end": 3737.42, "text": " I'm going to wrap it into the iterator here and take a single batch"}, {"start": 3737.42, "end": 3744.42, "text": " I'm going to take a single batch and I'm going to take the only the images and then I'm going to take a single image"}, {"start": 3744.42, "end": 3751.42, "text": " And make let's make sure that this makes sense so image shape should be 28 by 28"}, {"start": 3751.42, "end": 3760.42, "text": " Oh yeah I have to just reshape it into MNIST image size let me see where this is going to work it is"}, {"start": 3760.42, "end": 3769.42, "text": " Now let me import ad hoc here I'm going to import just map plot live pyplot as plt"}, {"start": 3769.42, "end": 3776.42, "text": " Plt image show the image and let me just plot that thing"}, {"start": 3776.42, "end": 3781.42, "text": " So this looks correct we have number seven"}, {"start": 3781.42, "end": 3788.42, "text": " We'll also need labels so I have I'll have to modify this I'm going to grab labels as well so labels"}, {"start": 3788.42, "end": 3797.42, "text": " And let me check the the the ground truth label here so ground truth label is going to be labels of zero"}, {"start": 3797.42, "end": 3806.42, "text": " And let me print the the actual label here print the GT label it should be seven and it is OK"}, {"start": 3806.42, "end": 3817.42, "text": " So now let's see what our network is giving us if I was to do the MLP MLP predict on top of a image"}, {"start": 3817.42, "end": 3827.42, "text": " But I'll have to flatten it out so I'm going to pass the MLP params and I'm going to pass in the what I'm going to pass MP"}, {"start": 3827.42, "end": 3835.42, "text": " Revel of the image will this work let me check so let me check whether this is going to work"}, {"start": 3835.42, "end": 3845.42, "text": " These are just predictions and yeah we get some results let's now just do the argmax instead"}, {"start": 3845.42, "end": 3855.42, "text": " And see what we have so yeah we had we have the predicted value of seven so let's just add some string here"}, {"start": 3855.42, "end": 3864.42, "text": " And this is GT like this I'm going to paste it here let me run this and see whether it makes sense"}, {"start": 3864.42, "end": 3873.42, "text": " So predicted seven and GT seven so I have I should have done this image analysis while I was developing the"}, {"start": 3873.42, "end": 3880.42, "text": " Loaders here so basically while I was developing these data sets and loaders but yeah anyways we have it now"}, {"start": 3880.42, "end": 3885.42, "text": " We see everything is working correctly so that was pretty much it when it comes to the model training"}, {"start": 3885.42, "end": 3891.42, "text": " So was the first section it took around an hour to do this so now I want to do the following I want to quickly refactor"}, {"start": 3891.42, "end": 3896.42, 
"text": " The whole code code base and then I'm going to do some visualizations I was basically cutting some corners"}, {"start": 3896.42, "end": 3903.42, "text": " Because I'm filming this and I'm really I'm coding real time so it's hard to kind of be very very pedantic"}, {"start": 3903.42, "end": 3911.42, "text": " So now I want to improve some of those some of the things so let me see what first so this thing this this function"}, {"start": 3911.42, "end": 3918.42, "text": " Looks fairly nice we have parameters here we have keys I think the structure of this function was fairly nice"}, {"start": 3918.42, "end": 3925.42, "text": " We have this small testing functionality here so I'm just going to call it test we have a key we created"}, {"start": 3925.42, "end": 3933.42, "text": " We just called the function to see whether the results make sense and I think this cell was was neat I was still I was still very very fresh"}, {"start": 3933.42, "end": 3941.42, "text": " I guess this was some initial like this cell was used for debugging so I'm just gonna basically delete it"}, {"start": 3941.42, "end": 3949.42, "text": " Let me see okay so I had some explanations here etc etc again the design of this function is is fairly neat I'd say"}, {"start": 3949.42, "end": 3957.42, "text": " You can obviously always add some like doc strings comments etc whatnot okay let me see here we had some tests here"}, {"start": 3957.42, "end": 3965.42, "text": " So this was this first part was testing basically whether the MLP predict is working as expected"}, {"start": 3965.42, "end": 3972.42, "text": " And so I just printed the shape of the image I printed the shape of the output and the second part was actually testing whether"}, {"start": 3972.42, "end": 3981.42, "text": " So this test test the single example function and this was testing the batch test the batched"}, {"start": 3981.42, "end": 3988.42, "text": " Batched function basically here okay I can uncomment this part as well"}, {"start": 3988.42, "end": 3996.42, "text": " So yeah I mean it would obviously take a lot of time to to create this to make it perfect I'm just I want to make sure that I didn't"}, {"start": 3996.42, "end": 4005.42, "text": " That I don't have any any red flags like a huge huge red flags so I defined these custom transforming custom custom colloid functions"}, {"start": 4005.42, "end": 4013.42, "text": " These look neat as well so some consistency with my naming of the variables would be appreciated"}, {"start": 4013.42, "end": 4022.42, "text": " I'm sometimes using labels the full word and sometimes I'm abbreviating into LBLS so that sucks but yeah this looks nice"}, {"start": 4022.42, "end": 4033.42, "text": " We have the data sets here I did some debugging here I can I can I'm just going to comment it out and I'm basically just going to delete this part"}, {"start": 4033.42, "end": 4042.42, "text": " Now we have the data loaders then I did some testing here so this can be considered like a testing section again testing section"}, {"start": 4042.42, "end": 4050.42, "text": " What I did here I made sure that we have images labels of correct types and shapes and finally"}, {"start": 4050.42, "end": 4061.42, "text": " This thing here is just the optimization part where I was loading the whole data set load the whole data set into memory"}, {"start": 4061.42, "end": 4069.42, "text": " Okay I'm going to refactor this notebook a little bit after after I finish this video and I'm going to upload it to my to my github"}, {"start": 
4069.42, "end": 4079.42, "text": " Hopefully you saw this one already so github let me let me open it up so I have this get started with Jax like repo do check it out"}, {"start": 4079.42, "end": 4086.42, "text": " I'm going to be posting all of these tutorials there as well as some other useful resources and videos here so that's it"}, {"start": 4086.42, "end": 4096.42, "text": " Let's continue here okay this to do is done we have the loss function I'm going to just delete this here we have the update function"}, {"start": 4096.42, "end": 4107.42, "text": " We have the the the accuracy maybe the metric and the loss should be one close to each other here that's it and finally we we create the network here"}, {"start": 4107.42, "end": 4120.42, "text": " Create create an MLP I guess a MLP and then we have the training loop here finally I could have probably just replace this with a product"}, {"start": 4120.42, "end": 4128.42, "text": " Function so to make this a little bit more robust but this is not that robust in any case because I'm hard coding names here but yeah"}, {"start": 4128.42, "end": 4135.42, "text": " You get the point in general you could kind of you'd have a section where you're defining these variables depending so when when the data set changes"}, {"start": 4135.42, "end": 4145.42, "text": " All of these variables change and basically MLP is out of the box is going to have a compatible shape this this was the point of doing this here"}, {"start": 4145.42, "end": 4154.42, "text": " And yeah you don't want to hard code stuff around your code base so yeah all in all for for a code that was written very very very ad hoc I think this this looks pretty decent"}, {"start": 4154.42, "end": 4163.42, "text": " So we have the loading here we convert the representation into the one hot representation then we call the update with this some logging of the loss"}, {"start": 4163.42, "end": 4173.42, "text": " We do some logging of the accuracy and I think it's fairly fairly cool now let's do a couple of visualizations first let me visualize the MLP weights"}, {"start": 4173.42, "end": 4183.42, "text": " So I'm gonna copy paste this into a novel cell here and what I mean by that is the following so I'm going to take the MLP weights"}, {"start": 4183.42, "end": 4196.42, "text": " So I'm going to take the MLP parameters I'm gonna fetch the zero player I'm gonna fetch the weight matrix and I'm gonna just take the that weight matrix and see what the shape is"}, {"start": 4196.42, "end": 4210.42, "text": " So the shape of that matrix is let me see 512 784 so what I want to do here is for a single output neuron I'm gonna I'm gonna fetch this all the weights for a single neuron and see whether we can see some pattern some interesting pattern"}, {"start": 4210.42, "end": 4234.42, "text": " I'm gonna do something like this weights single equals weights I'm gonna fetch a single row I'm gonna fetch all the columns and I'm gonna reshape that I'm gonna reshape that into the amnest image size so that's 28 by 28 and this should now be of correct shape with W single"}, {"start": 4234.42, "end": 4262.42, "text": " shape equals 28 by 28 I'm gonna just plot this and let's see whether we have some meaningful results here so I'm gonna plot it and we do not have any meaningful results here so if I was to randomly pick some other like some other weights for some other output neuron like 4 maybe I don't know"}, {"start": 4262.42, "end": 4282.42, "text": " let's see whether we can find some meaningful 
representation here and I do not see anything meaningful here I'm gonna try one more nope I guess not that interpretable but we still have 96% accuracy so that's cool and it was worth a try to just do this initial visualization"}, {"start": 4282.42, "end": 4311.42, "text": " so this task is done whoops let me just delete this to do and let's jump to the second one so visualize embeddings using t-sneak let me do that one I'm just gonna add a new cell here we're gonna import basically let me see side pie t-sneak okay yeah from scikit-learn manifold t-sneak so this will probably"}, {"start": 4311.42, "end": 4334.42, "text": " come in handy this documentation here I'm just gonna put the tab right here what the fuck okay let me put it here import from scikit-learn sklearn we import manifold module and we import t-sneak so from this package from this module import the function"}, {"start": 4334.42, "end": 4355.42, "text": " t-sneak my god okay okay this should now work so t-sneak is going to do a dimensionality reduction and for that I'll have to fetch the activations from the MLP just before we apply the linear layer that means we'll have to modify this predict"}, {"start": 4355.42, "end": 4374.42, "text": " function so let me find the MLP predict function I'm gonna copy paste it to this cell here I'm gonna modify it so first things first let's call it something else like fetch activations I guess we're gonna pass the params nothing changes there the only"}, {"start": 4374.42, "end": 4393.42, "text": " difference is we do not have to apply this this last part so let me let me think this will this will be yeah we basically do not want this part we just want to return back the activations the last ones just before we apply the linear layer so activation"}, {"start": 4393.42, "end": 4419.42, "text": " something like this and this should now work so now let me just create a batch version of this thing so batched fetch activations is gonna be remap we're gonna paste this function in and finally in axis will be again none and zero and this should now work so I'm just gonna fetch a batch of images"}, {"start": 4419.42, "end": 4440.42, "text": " and I'm gonna get those activations project them into 2d space and plot those images just to see whether the MLP the network learned how to separate different classes so let's see how that will look like so batch of activations is going to be equal to basically"}, {"start": 4440.42, "end": 4455.42, "text": " this and we need to pass MLP params so that's our train network and we also need to pass a batch of data so that means let me fetch a batch of images so batch"}, {"start": 4455.42, "end": 4475.42, "text": " I'm gonna take take like this images labels is gonna be next error and I'm gonna I'm going to again take from the test loader so test loader and yeah this should be it now I'm gonna take the images and I'm just gonna"}, {"start": 4475.42, "end": 4498.42, "text": " test them here and this should now work let me see what's the shape of these batch activations if this thing works so shape let me run this and it's 128 256 and that sounds about right because remember we had"}, {"start": 4498.42, "end": 4522.42, "text": " let me find a configuration so it was yeah 512 256 256 is the dimensionality of the activations just before we project them into the logits okay so let's now do the following we need to pass this into the t-sne method in order to get 128 comma 2"}, {"start": 4522.42, "end": 4540.42, "text": " and this time I'm just gonna use my github because 
I have this pre-existing code already and I don't want to search through the documentation this video is already too long so I have this annotated get like method and it got pretty popular and anyways I had"}, {"start": 4540.42, "end": 4559.42, "text": " I used this new here so I'm just gonna copy paste a snippet let me find where I've been using it so here I'm just gonna take this part and I'm just gonna copy paste it into right into our notebook so let me do it like this okay so t-sne is here we'll take two"}, {"start": 4559.42, "end": 4576.42, "text": " components means basically we want to down project into two dimensions perplexity 30 is just a number that works we don't need this special method here and now finally I need to pass the batch activations here and yeah so this is how the usual normal"}, {"start": 4576.42, "end": 4592.42, "text": " workflow of a guy who does machine learning looks like you take somebody else's code or your own code even better and you just kind of tweak it so I'll also we won't need this part as well so number of classes is going to be equal to 10 because this is"}, {"start": 4592.42, "end": 4612.42, "text": " amnest and finally let's see I'm just gonna erase these comments for now so the only thing we need to change here is to put labels instead of node labels here because those are our ground truth labels so basically what this does is for class 0 we're going to extract"}, {"start": 4612.42, "end": 4634.42, "text": " only those embeddings which we know correspond to images which are of class 0 I have the number 0 in it because this is amnest so let me change this to labels this should all work now we don't need these special I'm not sure whether this is going to work in colab so the only thing we are left to do is to find this dictionary"}, {"start": 4634.42, "end": 4660.42, "text": " so let me find the dictionary and it's here so it's just a mapping from from a number to a color and let me just define it somewhere here and that's it this should work now hopefully and it does okay so we can see that this is just some stupid error from scatterplot I've already seen this error but like it does not change the semantics so everything works correctly here"}, {"start": 4660.42, "end": 4686.42, "text": " so we can see that different classes are clustered together in a particular parts of the of the studio space we see some outliers like there is this yellow dot here if you can see it on your screen but yeah in any case we can see that the MLP did learn something so that's cool now the final thing I thought doing is basically seeing how many dead neurons we have so I'm going to delete this cell here"}, {"start": 4686.42, "end": 4706.42, "text": " I'm gonna add a new cell just below this one so what we need to do here is to find those neurons that do not activate on all of the input images I'm going to feed into the network and those are considered dead neurons because their gradients are going to be zero which means the weights that come before that neuron output"}, {"start": 4706.42, "end": 4723.42, "text": " can be changed and thus yeah the neuron is pretty much dead I'm going to link a cool article down in the description so you can check it out it has a like a very thorough analysis of various activation functions and there is a dead neuron analysis section as well so you can check it out there"}, {"start": 4723.42, "end": 4746.42, "text": " so what we need to do is again we need to modify the MLP predict function so we'll have to paste this again here so 
let me take it and we'll need to fetch every single one of these activations so let me create a collector variable here and I'm just going to collect every single activations here"}, {"start": 4746.42, "end": 4763.42, "text": " so I'm going to append activations and that's going to be it we again do not need this part because we don't have rally activations here we just care about the outputs of the rally unit so finally I'm going to return back the collector here and that's it everything else remains the same"}, {"start": 4763.42, "end": 4787.42, "text": " yeah combining this one with this one is probably possible but yeah I just decided to do it ad hoc like this at the moment I'm just going to name this fetch activations two because I'm being super creative right now okay so what we need to do is basically again batch this one we're going to create a batched version of the function"}, {"start": 4787.42, "end": 4814.42, "text": " so a batched fetch activation is two function with we map we map here and I'm going to pass the function in axis again none and zero and that's it now yeah I guess I can just copy paste this all of this and let me do it here and now I'm just going to need to swap this for the activation two version"}, {"start": 4814.42, "end": 4830.42, "text": " everything else remains the same and we should expect to have something like let's see where this is going to work at all yeah it's not we're actually just getting a list here and that makes sense because remember that's how we are collecting the variables here the activations here"}, {"start": 4830.42, "end": 4854.42, "text": " so I actually need to do something like this in order to get the correct result I guess 128 512 that makes sense and let's see we need to have 128 256 in this one and yep we do okay so now that we have these activations we just need to find those neurons where each of these 128 images produced are like a value zero"}, {"start": 4854.42, "end": 4882.42, "text": " that means this neuron was dead for all these 128 images and that's fairly easy I guess let me form a list here that neurons so this is going to be a flag data structure in the sense that if there is a one that means that the neuron is dead otherwise it's not dead so let's assume for now that we have that all of the neurons are dead so I'm going to just pre populate this with a shape something like this"}, {"start": 4882.42, "end": 4911.42, "text": " so for for act in bad activations I'm going to so this is going to be hundred so remember the shape of this act will be something like 128 and then 512 or 256 so we need to fetch the shape and we take everything except for the zero basically dimension and now we have for every"}, {"start": 4911.42, "end": 4939.42, "text": " single neuron we have a corresponding flag here so this is just a set of flags for every single neuron that we care about and those are the ones that basically have a rally unit after them now let's iterate through this through the activations so I'm going to have a layer ID here and I'm going to have the actual activations in enumerate"}, {"start": 4939.42, "end": 4943.42, "text": " and enumerate"}, {"start": 4943.42, "end": 4969.42, "text": " which activations so now I'm going to do the following dead neurons from layer ID is going to fetch the corresponding structure equals non-pi I guess it's logical and or something the illogical and we're going to end whatever is here already with basically"}, {"start": 4969.42, "end": 4981.42, "text": " activations equals zero so 
we're asking whether the the the neuron is dead and if all of the neurons are dead alongside the"}, {"start": 4981.42, "end": 5008.42, "text": " batch dimension then that neuron corresponding your own remains that otherwise it's considered alive your own I guess and I think all X is equal zero should do the job here let me see what this is going to work so finally we're going to print the so we're going to iterate through the layers so we read through the layers of dead neurons and we print the"}, {"start": 5008.42, "end": 5022.42, "text": " how many of them are actually dead something like this let's let's see what's where this is going to work and it does so we have zero dead neurons in the first layer we have four dead neurons in the in the last layer"}, {"start": 5022.42, "end": 5038.42, "text": " now let's try and do something to make sure this works so let's initialize novel network that's not trained and let's see whether we have different results so now"}, {"start": 5038.42, "end": 5064.42, "text": " here I'm going to create a novel LP which I'm going to call MLP parameters parameters to and now let's use that one instead to fetch the activations everything else remains the same as far as I can see here and yeah so let me just search MLP params yeah this should be it let's see whether we have different results here"}, {"start": 5064.42, "end": 5080.42, "text": " now now a useful thing would be to actually analyze this on the whole training data set because if there is a debt if there is a lot of dead neurons in the when you evaluate on the whole training data set that may be problematic and especially problematic neurons are the ones in the shallower layers"}, {"start": 5080.42, "end": 5096.42, "text": " to play with this code at your own pace let me try I'm fairly sure that I won't have a TPU at disposal currently so if I were to try to use a TPU and click save here let's see whether we can grab a TPU"}, {"start": 5096.42, "end": 5119.42, "text": " let me click reconnect here and no back end with TPU is available so in any case I'm fairly sure that this was already super enough because one of the previous videos I already explained how to paralyze the computation so it's fairly trivial to make this code parallel just a couple of lines of code and hopefully you got something out of this video"}, {"start": 5119.42, "end": 5138.42, "text": " this is the first time I'm doing a real time coding session so let me know if you have any feedback down below and whether you like this type of a content so I'm gonna start making more of this if you like it in any case consider subscribing share out this video and join our discord community and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=CQQaifxuFcs
Machine Learning with JAX - From Hero to HeroPro+ | Tutorial #2
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE This is the second video in the JAX series of tutorials. JAX is a powerful and increasingly more popular ML library built by the Google Research team. The 2 most popular deep learning frameworks built on top of JAX are Haiku (DeepMInd) and Flax (Google Research). In this video, we continue on and learn additional components needed to train complex ML models (such as NNs) on multiple machines! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My GitHub repo: https://github.com/gordicaleksa/get-started-with-JAX ✅ JAX GitHub: https://github.com/google/jax ✅ JAX docs: https://jax.readthedocs.io/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 My get started with JAX repo 00:01:25 Stateful to stateless conversion 00:11:00 PyTrees in depth 00:17:45 Training an MLP in pure JAX 00:27:30 Custom PyTrees 00:32:55 Parallelism in JAX (TPUs example) 00:40:05 Communication between devices 00:46:05 value_and_grad and has_aux 00:48:45 Training an ML model on multiple machines 00:58:50 stop grad, per example grads 01:06:45 Implementing MAML in 3 lines 01:08:35 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #jax #machinelearning #framework
What's cracking guys? This is the second video in the Machine Learning with JAX series of tutorials, and in this one we're going to learn how to build fully fledged neural networks. So in the last video we went from zero to hero, and in this one we're going to go from hero to HeroPro+. Yeah, anyways, before we even start let me show you this thing. So I open sourced a repo called get-started-with-JAX that, aside from these tutorials, contains other useful content that I personally used and found useful, so things like various YouTube videos, blogs, etc. And the second thing I want to tell you is that you can basically open up the notebooks I'm using in these videos in Google Colab and easily play with them. You can just open one up and the Python environment is already set up, so JAX and everything else is pre-installed; you can just run it and everything will work out of the box. The notebook for this second video is already checked in at the time I'm recording this, so you'll be able to open it up and maybe tweak the code as you are watching the video. I think the best approach to get the most out of these videos is to tweak the code yourself, play a bit, and see whether you can learn something interesting. Awesome, with that out of the way let's continue with the video. So let's first import the packages. We saw this in the last one: we were importing JAX, the NumPy API of JAX, and the transform functions we saw in the last video, so grad, jit, vmap. In this one we're going to learn about pmap, which helps us parallelize the computation across multiple devices, which is very useful as you can imagine, and yeah, other necessary libraries here. So the goal, as I said, will be to learn how to build more complex machine learning models and parallelize the computation across multiple devices, and basically after this video you should be able to get started and build your own models. In order to get to the point where we can build neural networks, there are a couple of things we have to solve first. The first one is the problem of state. When I say that I mean the following: JAX basically uses the functional programming paradigm, which means we need pure functions, which means JAX does not love state in the sense that you cannot use your usual object-oriented programming type of stateful classes or functions. So we're gonna see a way around it right now. First, a short reminder from the first video. We saw that impure functions are problematic. So we saw this global variable G, and we can treat it as state for the purpose of this section. Basically, what this impure uses-globals function does is access G, which is external state, which is forbidden because it makes the function impure. And what happens now is, when we call jit on this function, it's going to cache the current value of G because that's a side effect. 
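As a rough sketch of the pitfall being described, assuming a toy function of roughly this shape (the notebook's actual names may differ):

import jax

g = 0.  # global "state" read by the function below -- this is the side effect

def impure_uses_globals(x):
    return x + g  # depends on something that is not an argument

fast_fn = jax.jit(impure_uses_globals)
print(fast_fn(5.))  # 5.0 -- the value g == 0 gets baked in during tracing
g = 10.             # updating the global is not seen by the cached trace
print(fast_fn(5.))  # still 5.0, even though 15.0 might be expected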
So it's gonna just cache the value 0. That means the first time we call it, it's gonna run correctly, but the second time we call it, we're going to call the cached version of the function, which means 5 will be added to 0 and not 10. In the meanwhile we've updated G to 10, so the state is updated to 10, but that will not be reflected because we have an impure function. So this is the trick you have to be aware of, and as you can see here, we get the correct result for the first call but the incorrect result for the second call: that's 5, and we should have gotten 5 plus 10, that's 15, right? Okay, that's the first step. Secondly, we saw this pattern already: the pseudo random number generator in JAX is not stateful, in contrast to NumPy's, and we saw this pattern where we create the initial state by just passing in the seed, and then in order to get the next state we just call this split function and we externally pass the state in. That's a big difference compared to NumPy's pseudo random number generator. I basically want you to remember this pattern: we are inputting the state into the function, so split is not saving state internally, it's just taking it as an input, manipulating it somehow in a pure fashion, and returning back some novel state. Just recall that we were using the key and subkey terminology previously; I just want to stress this pattern, so that's why I'm using this new terminology, but yeah, you get the point hopefully. Okay, so now let's see a way around this, and if I haven't made this already clear enough, let me just kind of motivate why we're doing this. Basically, if you're trying to create a neural network, for example, there is a lot of stateful code, especially in other frameworks such as PyTorch or TensorFlow. Our model parameters are stateful, your optimizer is stateful (Adam, for example, is internally storing certain statistics, so that means it's stateful), and we also have layers such as batch norm, which are stateful. In summary, any useful program will have to have some kind of state, and we just need to find an alternative way to handle it. So in this first example we have a stateful class called Counter. We have this state n, which is basically the internal count, and we have the count function, which just increments this state by one and returns the current value of that state. Finally, once we call reset, we just set the state to zero, and that I guess all makes sense. We just instantiate an object of this class and then we call this count function three times, and as expected, hopefully we'll get one, two, three, and here they are. Okay, so that all works like a charm and this is something you're used to. Now the problem arises when you try to jit the function. So let's reset the counter, which means the state is now internally set to zero, and now let's try and jit the counter's count function. 
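For reference, the stateful counter described above looks roughly like this sketch (I am assuming it matches the class from the official JAX tutorial that the notebook follows):

class Counter:
    """Stateful counter: keeps its count in self.n, which is fine in plain Python."""
    def __init__(self):
        self.n = 0

    def count(self):
        self.n += 1          # mutates external state -- a side effect from JAX's point of view
        return self.n

    def reset(self):
        self.n = 0

counter = Counter()
for _ in range(3):
    print(counter.count())   # 1, 2, 3 -- works as expected without jit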
So let's try and jit this function here. Now, let's see what happens with it once we jit it and we try to again run the same type of for loop here. The results are kind of surprising, or maybe not if you're already familiar with jit: basically we get all ones. The reason is, as I said, this count function is not pure, because we are basically writing to an external state, as you can see here. So self.n is state; it's not passed in as an argument, we are just modifying it externally, so that's bad. And what will happen is, once we call reset here, n will be set to zero, and the first time we call a jitted function we're gonna do the tracing, if you recall from the last video. That means that basically on the first call the function will return one, and because we have a side effect it's just gonna learn how to cache the number one, and that's it. Now, let me just try and clarify that: if we were to use make_jaxpr to make this more clear, let me run this and see what the underlying jaxpr (JAX expression) is, and we can see it learned to just return one. So basically the whole body of the cached function is something like: return 1. That's it, and that's why we're getting ones even though we are expecting to get increments. Okay, just to make this a bit more clear: if we were to call the counter's count function here, we'd expect the internal state to be modified to one, and then once we call the jitted version it's gonna be returning twos now, hopefully. So if we correctly understand this, if I run this we get twos, and the reason is again that it's tracing this function; the first time it's tracing it the value was already set to one, we additionally add one so we have two, we return two, and basically because we don't have any inputs it's just gonna learn how to cache and return the number two. And again, if I were to do the same thing here, so count, and call make_jaxpr, we're hopefully gonna get just two instead of one here. Let's see, and that's correct, okay, we got two here. In any case, we have to find a way to avoid creating these impure functions. So let's see what the solution is. We're going to define this CounterState: instead of defining a complex class, we're just gonna have a simple primitive data type here, so an integer. This version two of the counter is gonna work the following way: we're going to pass in the counter state and we're going to return back the output as well as the state. Now this may be a bit confusing, n plus one, where we're returning a tuple with n plus one two times, I mean, doesn't make any sense, right? So the thing is, in general this thing here will be the state and this thing here will be some function of the state. Here, in the example of this simple counter, the output as well as the state are the same thing, so yeah, we're just returning n plus one twice, but it doesn't matter. The whole point is we are passing in the state, we are processing it, and then returning it back. In the case of the reset function, we're just returning back the state zero, and that's it. So here we instantiate the counter, we call reset, and as you can see, compared to the version one of the counter, we are not internally modifying the state. 
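And a sketch of the stateless version two just described, where the state is passed in and returned instead of being stored on the object (class and variable names are assumptions matching the narration):

import jax

class CounterV2:
    def count(self, n):
        # In general this would return (output, new_state); for a counter they coincide.
        return n + 1, n + 1

    def reset(self):
        return 0  # the fresh state

counter = CounterV2()
state = counter.reset()
fast_count = jax.jit(counter.count)
for _ in range(3):
    value, state = fast_count(state)
    print(value)  # 1, 2, 3 -- jit is fine now because count is a pure function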
We are just returning back the state here externally and here we can see the pattern So we've called the counter count we pass in the state and we get the state and the value we print the value and we keep doing this And so yeah, I guess this should already be familiar Basically Jack's like run random number generators use this very same pattern So this should work like as expected now and as we can see we get one two three But but now let's see whether after we did the function this thing will work or not Let's call reset again. We get the initial state Let's get the function and let's try and run this And as expected we're getting correct results Even with this jaded version of the function because we are externally manipulating the state and not creating impure functions And yep, this is working as expected and in summary, let's see the pattern we we use to convert a stateful function into a stateless one So basically we had the state which was just number n in the case of a simple counter We have this stateful method. So the the especially like in particular we had this method called count We are passing some arguments in general case and we get back the output Now we converted this and instead of saving the state inside of the class We're passing the state inside here as an input and we return back the top also the output and the state And this pattern can be applied for to any a stateful class and to get a stateless one and that's cool Okay, that's that was step one towards building like a fully fledged neural networks Now the second step is we still need to find a way how to deal with gradients And you may wonder why we have to like learn anything new to deal with gradients Well, we already know about the grad function. Well, the thing is with our current knowledge We would go something like this. So imagine we have some loss function F and we have like the parameters of our neural network. So x y z and w So we basically only have four parameters for now and we do something like x squared plus y squared plus z squared Blah blah blah. So basically we get this is a L2 you can treat this as L2 loss So minimizing this loss would cause the weights to go closer to get closer to zero And now the problem here is if we had something like GPT-3 model Which has 175 billion parameters and some newer language models are even bigger Basically, you'd have to write 175 billion like terms here minus four that we already have here And that sucks, right? So the problem does not stop there Basically, if we now try to find the gradients because we want to do an update of our neural network parameters We'd have to get dfdx. So we have to find derivatives with respect to every single one of those arguments And grad as you recall usually by default returns the like a derivative with respect to only the first argument So that will be dfdx. So we have to explicitly pass this list of integers And in the case of 175 billion would have quite a lot of these, right? So this will not scale and yeah, let me just run this. This is going to work as expected But as I said, it's bothersome and it does not scale So we had twos because if we take argument dfdx is going to be 2x, 2y, 2z in the case of dfdz, etc. So basically because we input all once as the input we get twos for every single gradient So that makes sense. 
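To make the scaling problem concrete, here is roughly what the "one scalar argument per parameter" approach just described looks like; a toy sketch that clearly cannot stretch to billions of parameters:

```python
import jax

def loss(x, y, z, w):
    # toy L2-style loss: pushes every parameter towards zero
    return x**2 + y**2 + z**2 + w**2

# ask grad for the derivative with respect to every positional argument explicitly
grads = jax.grad(loss, argnums=(0, 1, 2, 3))(1.0, 1.0, 1.0, 1.0)
print(grads)   # (2.0, 2.0, 2.0, 2.0), since d/dx of x^2 is 2x, evaluated at 1
```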
Now again, the problem does not stop here If you want to do an update would have to have 175 billion lines doing these types of SGD updates So basically learning rate gradients and we update our weights here So there is obviously a better way. This is just a motivation for why we need the following solution And the solution is called a Pytree So in general we are used to wrapping our model parameters in some more complex data structures dictionaries or whatever And Pytree is the Jack's solution to how to basically find gradients of arbitrarily nested data structures And let's see a simple example here. So this is just a contrived example obviously You can treat it as some type of model parameters, very esoteric but yes, nonetheless Just stick with me here. We're going to first learn what Pytree is and then we're going to see a real example We're going to train like a multi-layer perceptron really soon. Let's see how Pytree is defined in the official docs So if I open up the docs here, if I go to tutorialjacks101 and find the working with Pytrees here Okay, we have the definition here. So a Pytree is a container of leaf elements and or more Pytrees So it has this recursive definition. Containers include lists, tuples and dictionaries So let's see how that maps to our example here. So basically we have a container here So this whole thing is a Pytree, but this Pytree contains another Pytree here And finally this Pytree has a leaf 1 and the leaf string this character a And finally this object is also considered a leaf. The second, this is also a Pytree, this whole tuple here And tuple I guess. This is a leaf, one will be a leaf, whereas this thing here will again be a Pytree And two will be a leaf and three will be a leaf. There is a handy way to analyze this and it's called this From Jax, just import this tree leaves function. We're going to iterate through these Pytrees and we're going to get the leaves And we're going to finally print them in a nice format here. So let's see what the results here are Basically this first Pytree has three leaves as we can see here. So one, character a and a simple empty object Let me see what else is interesting. So as I said this is 1, 2, 3, all of those will be leaves In the case of a dictionary, keys are not considered leaves. Keys are just like metadata of the Pytree node So we're just going to have 1, 2, 3, 4 and 5 as leaves as you can see here And interesting to notice here is finally this array is going to be considered a single leaf So we won't have 1, 2, 3 as separate leaves. This whole thing will be treated as a leaf So that's something to keep in mind. Okay, so how do we manipulate these trees? There's a couple of neat and handy functions. So again we have an example here of a simple Pytree And what we do is we call this tree map function and we just pass in the Pytree And we define a function that is going to manipulate and work on top of these It's going to process the leaves of the Pytree and basically what we expect is that every leaf will be multiplied by 2 In the case of this simple lambda function. So let's run this and as expected we get everything is multiplied by 2 So that was in the case of these single argument functions. In the case when you have multiple Like when your function has multiple arguments, I'm just going to create a copy here So a reference basically and I'm going to do a simple addition and this time we're going to call treeMultiMap So we have 2 Pytrees as input. 
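Before the two-PyTree case, which continues right after this sketch, here is a compact recap of the leaf-counting and tree_map behaviour just described (the example trees follow the ones in the docs and notebook; dict keys count as metadata and a whole array is a single leaf):

```python
import jax
import jax.numpy as jnp

example_trees = [
    [1, "a", object()],              # 3 leaves
    (1, (2, 3), ()),                 # 3 leaves; the empty tuple contributes none
    {"k1": 2, "k2": (3, 4)},         # dict keys are metadata, so 3 leaves
    jnp.array([1, 2, 3]),            # a whole array counts as a single leaf
]
for tree in example_trees:
    leaves = jax.tree_util.tree_leaves(tree)
    print(f"{repr(tree):<45} has {len(leaves)} leaves: {leaves}")

# tree_map applies a function to every leaf and rebuilds the same structure
print(jax.tree_util.tree_map(lambda x: 2 * x, [[1, 2, 3], [4, 5], [6]]))
# [[2, 4, 6], [8, 10], [12]]
```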
We manipulate them using treeMultiMap and as the output we get the Pytree again So if I run this we would expect same results and yep we have the same results here because Just adding two same structures is the same as doing multiply by 2, right? Okay, so finally worth noticing here is that the structures have to be the same So here the condition was trivially I guess true because I just made this a reference But in the case when you have two separate data structures you want to make sure they have the same structure Because if I artificially here introduce a change so I just do a deep copy of the original object And I just append this list that contains number 23 and if we try to run this We can see that list arityMismatch is reported as an error So yeah, keep that in mind and that was the last missing piece before we can train like a multilayer perceptron Hopefully you already know what MLP is if you don't let me just quickly show you some diagrams And whoops, okay, we are in a really bad part of the internet right now. Let me go to images here If this thing will load my internet is really really slow today Okay, so I'm gonna just type in neural network here and hopefully everything will work correctly And here we are. Here is a simple MLP basically you have your input data coming in from the left You're processing the inputs using these neurons. These are called hidden layers This is the output layer and outcomes to results So this thing may be like a binary classifier like a hot dog or not hot dog I don't know in any case that's MLP at least on a high level and Now, okay, we are we're ready to start training a toy multilayer perceptron model So we know how to handle state we know how to handle parameters I need to find gradients of our parameters which may be stored in certain types of nested data structures So that's everything we need to to train our first model in Jax, I guess So let's start here. We have this init MLP parameters function What it's going to do is return a pie tree object and here you can see we specify the type Basically the configuration of our MLP. So that's gonna it's gonna have three layers So the first one will map input. That's a scalar into 128 dimensional vector Then we're gonna have a mapping from 128 to 128 and finally we're gonna have a mapping from 128 to output scalar again So that's three layer deep MLP. So this function here just takes that list as an input as you can see here using this smart indexing We can just fetch tuples such as this So we first have the tuple 128 and we're gonna use that to form weights and biases of our neural of our like a linear layer And we're gonna wrap it up into the dictionary and just append to this list So this is gonna be a list of dictionaries which contain two keys weights and biases which then contain like a random meet randomly initialized matrices And you don't have to worry about this. This is just some specific type. I think gaming or something initialization doesn't matter And think to notice we're using numpy random here instead of using Jax PRNG doesn't matter for this simple example But just keep that in mind Anyways, so we get back the the the pie tree and because we have a pie tree we can nicely use the tree map to print certain like information about the pie tree And we can see we've done everything correctly. We have biases and weights. 
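A sketch of the parameter-initialisation function being described; it follows the notebook's structure of a list of dicts with weights and biases, while the exact initialisation scheme is an assumption here (a Kaiming-style scaling, which is what the video seems to be alluding to):

```python
import numpy as np
import jax

def init_mlp_params(layer_widths):
    """Return a PyTree (a list of dicts) of randomly initialised weights and biases."""
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        params.append(dict(
            weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2.0 / n_in),  # Kaiming-style scale
            biases=np.ones(shape=(n_out,)),
        ))
    return params

params = init_mlp_params([1, 128, 128, 1])   # scalar in -> 128 -> 128 -> scalar out

# because params is a PyTree, checking every shape is a one-liner
print(jax.tree_util.tree_map(lambda x: x.shape, params))
```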
Everything is of correct shape. In general, this is a good practice to have: analyze the shapes of your data, model parameters and whatnot throughout the whole pipeline, basically. Okay, let's continue here. We're gonna have some update function here in order to train it. This is the main part, and before we even start digging into the training code I'm just gonna comment out this forward call, because we haven't defined it yet, and I'm just gonna show you the input data. So this is what we're trying to learn to regress. The MLP will have as an input a simple scalar and we're trying to regress the corresponding y, which is simply x squared in this example here. So now let's see how this thing works. Let me uncomment this. Basically we'll have a couple of epochs, so 5000 epochs in this example. We're gonna call the update function, and you see the pattern again: we are passing in the state, which is the model parameters in this case, and we return back the parameters again, and we just iterate that and that's it. That's going to update the parameters, and finally we can regress using the forward method. Let me show how those are defined. So update basically takes as input the parameters, x and y, so that's the input and output data. We calculate the gradients of the loss function with respect to basically this input data and the current model. Just stick with me, I'm gonna analyze this in a bit more detail in a couple of minutes. For now, let's just see what the loss function is. It's a simple mean squared error loss. Basically we do a prediction here, we find an error term, that's basically the predicted y minus the ground truth y, we square it to get positive error terms, and then we do the mean across all of the data points, and that's your MSE loss. Forward is fairly simple. This is basically your MLP feed-forward pass. So remember, params is a PyTree; when we iterate, layer is just a dictionary that contains weights and biases. So we take out the weights, we do a dot product with the input data, we just add the biases here, and then apply the activation function, ReLU in this particular case. So there is a dichotomy we made here between the hidden layers and the output layer. The reason is that the last layer will not apply an activation function, we're just going to return back the raw results. That's the reason for this dichotomy. I guess we could rewrite this a bit more nicely, but yeah, it works. Okay, that out of the way, let's now focus on this part. We do the grad of our loss function, and as you can recall, it's gonna do a derivative with respect to the first parameter, and since our first parameter is params, which is a PyTree, we'll basically get as the output the same structure as params, just that instead of the values of the weights we'll have derivatives of those weights with respect to the loss function. So that's the magic behind grad.
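Putting those pieces together, here is a sketch of the forward pass, the MSE loss and the SGD update just walked through, reusing params from the init sketch above. The learning rate value is an assumption, and note that the tree_multimap used in the video has since been folded into jax.tree_util.tree_map, which accepts several PyTrees of identical structure:

```python
import numpy as np
import jax
import jax.numpy as jnp

LEARNING_RATE = 1e-4

def forward(params, x):
    *hidden, last = params
    for layer in hidden:
        x = jax.nn.relu(x @ layer["weights"] + layer["biases"])
    return x @ last["weights"] + last["biases"]          # no activation on the output layer

def loss_fn(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)       # mean squared error

@jax.jit
def update(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)              # same PyTree structure as params
    # one SGD step, applied leaf by leaf across the two matching PyTrees
    return jax.tree_util.tree_map(lambda p, g: p - LEARNING_RATE * g, params, grads)

# toy regression target from the video: y = x^2 on 128 random scalars
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(5000):
    params = update(params, xs, ys)
```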
Basically all of these built-in functions transform functions such as grad know how to handle pie trees And this is finally a concise way to solve this problem of figuring out the gradients Now that we have gradients we can use our loved tree multimap function and we just pass in parameters and gradients We have this so on every single leaf of these pie trees we're going to apply the following rule and that's from the weight Just subtract the learning rate times the gradient which is simple stochastic gradient descent update rule and that's it As the result again tree multimap as you recall it returns back the pie tree like structure again Which means we have the updated weights getting back as the output of update and we can see that here So params in params come out as well here. Now, let's try to run this and then we're going to play a bit with this code So let me run this cell. Let's see where I ran this one. Yeah, I did. So let me run this one and let's see what the results are Okay, this is the output we can see a pretty nice fit in this particular case We can now play with this function. We can try and see some other polynomials maybe X X cubed We can see it also manages to fit this curve as well. We can play and add some other types of polynomials something like this We just I just added plus X here and it's still be like you can see at the borders here because we have less data points The fittest has a bigger errors compared to where we have a dense sampling which is expected I guess right? Finally, let's try one more interesting example that sign us excess. Let's see what we get here Whoops, when you use jet numpy sign here, so this should work now And you can see the fit is way worse here. We can obviously improve that by increasing the capacity of the model or increasing the number of epochs But yeah, let's try some other frequency like 3 X and yeah, we have a much better fit here In any case you get the point. We trained our first neural network in Jax and it seems to be working So let me return back the original statement here. So that's a simple like a polynomial and now let's analyze a couple of things here So first, let's see the structure of grads. I'm gonna reduce this to one just a single epoch I'm gonna comment out the JIT part and I'm gonna print grads here Okay, so I'm gonna just print the gradients here so that we are confident that we get the same structure as params So I'm gonna run this cell. I'm gonna run this again. I guess the output is gonna be a bit unwieldy Okay, let me see what we got here Bunch of numbers, but you can basically see that we have a list of dictionaries where the keys are biases and weights And basically these here these numbers are derivatives and not the parameter values. That's the important part So that's this part. Let me just uncomment this, return back the JIT And I mean that's pretty much it. That's everything that goes into training an MLP As a quick mention here, you can notice that I called JIT on top of this high level update function Because it gives the XLA compiler much more freedom to optimize various functionalities inside of this update function You can obviously play and toggle off and on the JIT wrapper here and see how faster it is to train it with JIT compared to without JIT, I guess Okay, let's continue here. This was an example of a simple neural network But if we wanted to create our own neural network library, we'd need to create layers such as NNLinear That's just PyTorch syntax or conv layers, etc. 
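For instance, a first attempt at such a layer might be a plain container class like the one below; this is purely illustrative, not JAX's or PyTorch's API, and it is exactly the kind of object that JAX would treat as one opaque leaf by default:

```python
import numpy as np

class Linear:
    """Hypothetical nn.Linear-style container holding its own weights and biases."""
    def __init__(self, n_in, n_out):
        self.weights = np.random.normal(size=(n_in, n_out))
        self.biases = np.zeros(n_out)

    def __call__(self, x):
        return x @ self.weights + self.biases
```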
And in order to get those to work, we have to create our custom PyTrees And let's see why that is. So first imagine this is this my container object here is just a simple NNLinear layer So let's imagine that's a linear layer and now imagine we have a list of two linear layers here So just create an example PyTree here and now let's fetch the leaves and see what we get as the output And we would expect, I guess, like eight leaves here, right? But the thing is it's going to return only two leaves So we have here that we this whole PyTree only contains two leaves and those are this my container objects That's obviously going to be undesirable because TreeMap and those other functions, they go through the leaves and do certain updates And if we had a linear layer like this, we wouldn't be able to do a gradient, like stochastic gradient descent update on top of these structures So we have to find a way around this. For example, if we were to run this simple manipulation, so just x plus one We'd expect to just increment these numerical values, but this is not going to work And I guess it makes sense because if you know how TreeMap works, it's going to iterate over the leaves And the leaves are my container objects and we're trying to add one to my container, which doesn't make any sense So let's run this and we can see that we get this error, we're trying to unsupport operand blah blah plus for my container and integer So that makes sense, right? So what's the solution to this? It's simple, you just have to define two functions One is flatten, one is unflatten and then you have to register PyTree nodes So we registered the my container with these two associated functions, flatten and unflatten So what flatten does basically just take the ABC, so those were the values we had here, right? We'll just wrap that information into an iterable here and we are not going to want to have a name as the content We want to have it as a metadata and that's why we include it in this auxiliary data structure And we just return this tuple of like content and auxiliary data Finally, the unflatten just receives those same inputs in reverse order though because of I think some legacy problems they had And you basically just construct the my container and you return it back, that's it So after I run this, now the my container class should work as expected So if we were to run the tree leaves function again, let's run it and we have the leaves 1, 2, 3, 4, 5, 6 as expected Now this thing will work, so if we were to add plus 1, we'll just get I guess 2, 3, 4 should be the output here And yes, that is the case Basically tree leaves, i.e. 
the tree map iterated over the leaves which are just the numerical values here, these ones And it just added plus 1, returned a PyTree again and we just print the leaves here, that's it There is one more gotcha I want to explain, tell you about and that's that we can oftentimes mistake nodes for leaves And when I say that, here's what I mean, so let's create a zero PyTree So basically we just have two nodes here, two leaves that contain zeros with these shapes And let's try to create the same type of PyTree that will have ones instead of zeros So what we do here is we create this intermediate PyTree, we're going to iterate through the zeros tree, through all of the leaves We're gonna take the shape and that's gonna be our new PyTree And then we can iterate through the shapes and apply the ones function using tree map to get like the tree of ones And let's see whether this works or not So if I print it like this, here is what we get We get correctly 2, 3, so we have two rows here and three columns, everything is fine We have three rows here and four columns, everything is fine here It seems that we have a correct result here as well, so we have 2, 3 shape and we have 3, 4 So that's the intermediate array, so this shapes tree here And finally we have some weird result here, so what happened? So the thing that happened is, if you can recall, basically this here, this whole thing is a PyTree But this tuple is a PyTree as well, it's not a leaf, that means that leaf is 2 and 3 and 3 and 4 Which means we're going to apply ones on top of all of these leaves and that's why we get two ones here and then three ones here and then three ones here and four ones here So simple work around here is to create, so we wanna make sure that this is a leaf and not a PyTree We can just basically wrap this x.shape with jmp array and this should now work Let me see, and yep, it works, so basically the reason is this is now a leaf, 2, 3 are a leaf And we pass that to ones and that's why we get correct results here, awesome Now we can design custom neural network layers and we can train end to end our neural networks Now what if we wanna train really, really big neural networks? 
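Before that question about really big networks gets picked up right below, here is a consolidated sketch of the two PyTree tricks just covered: registering a custom container so its contents become leaves, and wrapping a shape in jnp.array so tree_map treats the whole shape as one leaf (MyContainer and the helper names follow the example above):

```python
import jax
import jax.numpy as jnp
from jax.tree_util import register_pytree_node

class MyContainer:
    def __init__(self, name, a, b, c):
        self.name, self.a, self.b, self.c = name, a, b, c

def flatten_my_container(c):
    children = (c.a, c.b, c.c)     # the actual content, these become leaves
    aux_data = c.name              # static metadata, not a leaf
    return children, aux_data

def unflatten_my_container(aux_data, children):
    return MyContainer(aux_data, *children)

register_pytree_node(MyContainer, flatten_my_container, unflatten_my_container)

tree = [MyContainer("a", 1, 2, 3), MyContainer("b", 4, 5, 6)]
print(jax.tree_util.tree_leaves(tree))                  # [1, 2, 3, 4, 5, 6]
_ = jax.tree_util.tree_map(lambda x: x + 1, tree)       # now works: the leaves become 2..7

# the nodes-versus-leaves gotcha: a shape tuple is itself a PyTree, not a leaf
zeros_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
shapes = jax.tree_util.tree_map(lambda x: jnp.array(x.shape), zeros_tree)  # wrap, so one leaf per shape
ones_tree = jax.tree_util.tree_map(jnp.ones, shapes)
print([o.shape for o in ones_tree])                     # [(2, 3), (3, 4)]
```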
Well, in that case we have to parallelize our computation and that's where this next section comes into play So I'm gonna introduce this pmap transform function and as we'll soon see it's very, very similar to vmap But before I start showing you how to use pmap we have to do a couple of things So the first thing is to set up, we have to use TPUs here for this section So I'm gonna set up hardware accelerator to a TPU, I'm gonna click save and hopefully after clicking reconnect it's gonna allocate a couple of TPUs for me Awesome, this worked, so now I just have to rerun the import statements because interrupting the runtime erases all of the whole state of the colab So we have to rerun this cell and then let's go back, I can open it up like this, just jump to parallelism injects in the table of contents And we are here, okay, so we're first going to just call this setup function and that's gonna set up TPUs in the background and hopefully we'll get 8 cores at disposal Okay, running this cell may take a while but finally we got 8 cores here as you can see, so 0 through 7 Now I'm gonna use this convolve function as a running example and we'll be parallelizing this particular piece of computation across multiple cores Although we'll later see training annual network, actually a simple ML model across parallel cores So let's start with convolve, so I'm gonna define a simple signal here, so we have a signal X and we have a kernel W And the convolve function works as expected, basically we are sliding W across X, we are doing dot products and we are storing the computations in this list And then we're just converting the list into like a Jax array, that's it, just a simple 1D convolution So if we run here, let's see the results and we get the correct results, so basically imagine, let me print X and let me print W as well here So that may make a bit more sense, so what we do is in the first computation we'll have 2 times 0, that's 0, 1 times 3, that's 3 And 2 times 4, that's 8, 8 plus 3 is 11 and that's why we get 11 as the first output, that's all nice and clear So now let's do the same thing, but this time let's parallelize the computation, so first this is just going to grab the number of devices on disposal We'll have 8 cores and then we're just gonna create a batch to simulate heavy loads, so some bigger computation compared to just having a single signal X and kernel W So let's run this and basically we have 8 devices and that's why we end up having a batch of 8 times 5, where 5 is basically the signal length and we're just replicating So here we have unique data for all of these 8 elements in the batch, on the other hand we're just duplicating the kernels, we're not creating novel kernels for each of the examples Ok, that's it, let's now first parallelize the computation across the memory of the host device, that means we'll be using a single core But VMAP is gonna parallelize the computation in the background across multiple threads, I'm not sure how it's exactly implemented Basically we wrap the convol function into VMAP, by default it's gonna assume that the in-axis argument is set to 0,0, so this is the equivalent to explicitly writing in-axis equal to this tuple of 0,0 And after we run this we'll get the result, and as you can see this part is the same as the example above, and then we have increasing numbers because the signal X is being, let me print X here Let me just print this batch, you can see we're going from 0 through 39 and that's why you can see increasing numbers here as 
well The cool thing about VMAP now is that we just need to swap VMAP for PMAP, nothing else changes and all of a sudden we're sharding our batch of data So the signal X across multiple cores and we're running the computation in parallel independently on each one of these cores So let me run this and we get the same results as you can see here above for VMAP, there is one difference you can notice here This thing is called device array, whereas here we have sharded device array, so this sharded alludes to the fact that the actual data and computation is distributed across multiple cores So that's super cool, so syntax same as VMAP and that's nice and sweet Let's continue, and I want to stress that all of this happens literally independently, so basically we have 8 signals, so recall that X was 8 times 5 What will happen is that array of 5 elements is going to be on each of the cores alongside the kernels, so we basically have a single computation So we have a single kernel and a single example on each of the cores and we are doing computations without communicating between the cores So that's super nice if the nature of our computation is such that we do not need a communication between the cores And here I'm just illustrating that we can do one pass of this computation, so we do a convolution one times And then we use the outputs as the new kernels and we again convolve them with the original signal X And basically the thing is during both of these two calls we did not have a single communication between the cores themselves So let me run this, and we get some even bigger numbers and that's not that important anymore Finally, just notice that instead of repeating, replicating W's, which is kind of wasteful, we can just use the in-axis argument here And as you recall, VMAP uses the same syntax, basically in-axis tells PMAP that the first argument, so that means W, basically should be broadcasted Because we set none here, and here zero just means that the first zero dimension of XS corresponds to the batch dimension So this way, PMAP will in the background replicate W and do the job for us, so that's way better And let's run this and we should see the same results as before, so 11, 20, 29, 326, that was the same number as here And elsewhere, so that's the same output again So again, this is super nice if your computation is such that you do not need communication between the cores This is usually not the case, and an example we'll soon see is the one of training ML model in a distributed fashion Where as we take our batch of data and we basically split the batch of data into mini-batches, we send each mini-batch to each core And then we do computations there, we calculate the gradients and then we communicate between the cores in order to do a mean of the gradients to update our ML model We're gonna see that example in a second, but just be aware that certain computations do require communication, certain do not, and Jack's got us covered for both of those use cases So now let's see a simple example, like just a modification on the above function that we had So this is going to be normalized convolution, and the only difference is that we'll have a communication between cores So let's see how this works This part of the function remains totally the same as before, so we have output here, we are doing like sliding across the signal, and we accumulate the results into output Now the only difference is that we are calling this, so remember, we're calling this Jack's LexPSum function, and just 
recall that Lex is this middle layer API of Jack's So what we're doing here is we're going to, along the batch dimension, we're going to sum up all of the outputs, and we're gonna divide output on this particular core with the sum of outputs across all of the cores And that's gonna normalize our output So a thing to notice is this axis name, it can be a bit confusing, so what you do is you again wrap your function of interest into PMAP You specify in axis, which means, hey, I want to broadcast W, I want to use the zero dimension of XS as the batch dimension And we just give the name to those mapped axis by putting an arbitrary name here, so this can be whatever Like we just need to have consistency between whatever we use here, we have to use here inside of the definition as well So I chose batch dimension because it's, I guess, indicative of the thing we're doing, right? Let's first run this and then we can analyze it additionally The first thing you can notice is that this output is the same as VMAP So I'm just going to, I just did that for you to understand that VMAP and PMAP are very, conceptually very similar The only difference is that PMAP is actually using physical cores, whereas VMAP is doing the same thing just on the single device So let me just comment out VMAP so that we can have a bit less output in the console here So basically what the normalization has done is that all of these columns are now normalized Meaning if we sum them up, if we take a single column like this one, so this one, like this element and this element and this element And we sum them up, we're going to get one, so that's what I've done here I took the result, I took all of the rows and zeroth column and we can see here that result sums up to one If I put one here, we're going to get the same result, so it's again one And if I put two here, we're going to get one as the output and that's it That's what this computation did Let me try and see whether we can print the output here I'm not sure whether this is going to work, let me just try and do this Nope, it's going to trace it because PMAP, that's an additional thing to keep in mind PMAP is automatically calling JIT in the background, so yeah To make this a bit more clear, I'm going to take the last example that didn't have any communication So let me take something like this I'm just going to copy paste it here So we're just going to copy paste that thing here Okay, now we have a result here that is not... I'm going to print it here So PMAP result and this result is... 
there is no normalization here So let me just print it out here So here are the results So what we have done is this PSUM basically alongside batch dimension, which is this dimension here So the vertical dimension is going to sum up all of these values And use them to divide the current output So for each core, for example, for core 0, this will be the output And we're going to divide this array here with the sum across this dimension, the vertical dimension So let me just kind of show you that that is the case So if I was to take the element number like this element here, 11 So that's going to be temp element, something like this, I don't know PMAP result Basically I'm going to take 0th So that's this first row, I'm going to take the 0th element, so that's 11 Let me confirm that's the case, so I'm going to print it And then I'm going to create a temporary sum here So basically just a simple sum of PMAP result across basically across the vertical dimension So that's the batch dimension, 0th column And I'm going to print the sum as well And then I'm going to just do division So I'm going to print temp element divide by temp sum And we should get the same result as here, so 0.0081 whatever, okay? So in order to make this a bit less crowded, let me see where I can do something No, let me just run this This should work as expected now So I took 11, I created a sum here And after dividing you can see we got the same result So hopefully this now is perfectly clear what happened And the whole magic was because of this middle level API's PSUM function And just giving a name to these mapped axes here Let's continue, let me just erase this output Okay, now before we get to the final training pipeline There's a couple more functions you should be familiar with And then you can train whatever in PureJax Pretty much whatever ML model you wish to train The first function is this value and grad function And it's going to return the value of the loss as well as the gradients of the function of interest So I have a simple loss function here It's basically just, we just create these error terms We square them and we sum them up We have a simple input here So let me just print it here So basically it's going to be 0 through 3 And finally the value will hopefully return the loss value as well as the gradients It's fairly trivial to show that the numbers make sense Let me just print y here as well So it's going to be the same as x just plus 0.1 So basically the value of the loss does make sense because we basically subtract The difference will be 0.1 for each element here Then we square it which is going to give us 0.01 And we have 4 times that because we have a sum So that's why we have 0.04 here Same goes for gradients, all of the gradients are minus 0.2 Because the derivative with respect to x will be 2 times x minus y The difference is 0.1 I mean the minus 0.1 times 2 That's why we get these numbers here So, and in any case if you want to log both the value of loss and you want to get your gradients You have this efficient way to do that is to just call this inbuilt function called value and grad The second function you should be familiar with is Not a function actually, just a functionality Is if you want to return sometimes from your loss function You want to return not just the loss itself But maybe also some intermediate values like maybe you want to return these raw error terms You can just instead of doing this So if I were to call gradient function on this loss function here It has two outputs which 
means a vector output We're going to get an error here because gradient only works on scalar output functions So you can see here gradient only defined for scalar output functions So there is this neat auxiliary flag If we set it to true and we run this It's going to return the gradient as well as the It's just going to pass this intermediate value as the output directly Without doing any type of manipulation on top of it Awesome, we are now ready to train a machine learning model completely in parallel Okay, let me see how I can go about explaining this part Because there is a lot of functionality here But I think it's going to be digestible Let me first start by analyzing our data So we have a simple model like the ground truth model is this one Basically a single input linear function here With some addition of noise, it's a noisy, noisy We'll have some noise observations here So the true value of W, so the coefficient that goes with X And of this bias term are 2 and minus 1 We generate the domain of our data Basically just sampling from the normal distribution We generate 128 elements We generate some noise, again, sampling from the normal distribution And finally we generate our noisy observations by applying the above function here So XS times the true value of the weight parameter Plus the true value of the bias parameter plus the noise And let me visualize that for you So we understand what we are trying to regress So we are going to try to fit a line to this data And we are going to do that the following way So first we'll need to initialize our model So let's jump to the init model function here It's going to initially create a random value for the weight and for the bias So this is our approximation of the true weight and the true bias values Initially the best we can do is just to create a random value And return them packed into this params class Which is just inherited from this name tuple So it has weight key and bias key and the values are our Jax arrays As you can see here I'm using Jax's built-in PRNG functionality I'm using the random number generator I'm splitting the keys into weight and bias And I'm consuming those here to generate two random numbers So that's what this init model function does Once we have that I'm going to fetch the number of devices That's going to be 8 if you recall from the previous exercises we had 8 cores And we are going to replicate the parameters because we want to shard this We want to replicate this particular model across all the cores and do the computation in parallel Since params, so the thing we just got from our init model function is a pie tree We can iterate over the leaves, so over the weight and over the bias And we can just multiply them by 8 which is going to replicate the values So we'll have, yeah, I'll just going to print it here so you're going to see it in a second Next thing we're going to do is just reshape the data for the PMAP function Because we're going to trade this in parallel again So we have the XS and YS, so that's the data we just saw here And we're going to basically just reshape them so that the leading dimension has the number 8 So let me run this and see what we have So, okay, I forgot to run this cell I'm going to run this cell and then I'm going to rerun this cell again And let's see what we got here So you can see a couple of things here First thing, weight and bias have the same number replicated 8 times as expected And finally the data is now reshaped into this 8 times 16 times 1 shape which is 
suitable for PMAP The Bayesian training pipeline will look very similar to the MLP we trained like 20 minutes ago Basically we're going to have a number of epochs, we're going to iterate for a certain number of epochs We're going to have an update function and again the familiar pattern We pass in the replicated parameters, we pass in the data And we get back the parameters, updated ones, as well as the loss value After that I'm going to just log some values here and we're going to see that in a second And finally we're going to take our final replicated parameters And we're just going to fetch parameters from a single core Because we'll basically have 8 duplicates across the cores So we just need to extract a single one and we get parameters here So TREEMAP will create a PyTree object and then DeviceCat just takes the data from the TPU and downloads it to the host memory That's it, now that's the high level picture of what is going to happen here Now let me start digesting these couple of functions here So what we need to understand is that the model is very simple We just have weight times XS plus the bias because that corresponds perfectly Because we know what generated our data so we can create the same model here Usually that's not the case, you usually don't know how your model structure looks like That's why you have a huge neural network and you let the data morph the structure of that neural network Loss function is very simple, we just basically do mean square error again So predictions, we subtract the ground truth, etc The update function is by far the hardest to understand so we're going to focus a lot on this function So let's see what update function does So we get our parameters, so that's the current version of the parameters, we get the data And what we do is first we decorate the update with this PMAP and we name the mapped axis to batch So this is just a convenient way to, if we do it like this then we don't have to wrap it inside of the PMAP here So let me show you, so here as you can see we don't have PMAP explicitly here We just decorated the update directly with PMAP and that's why we can avoid putting PMAP here So what will happen is that each TPU core is going to get a portion of the parameters So because those are replicated we're just going to have the same parameters spread out across every single core We're going to get a single mini batch out of the whole batch of data on each of the cores And then we're going to pass those into the loss function and find the value of the loss as well as the gradients of the loss function So that means because again grad does the derivative with respect to the first parameter Which means we're going to have derivatives with respect to W, the weight and the bias Next thing we're going to do is we're going to combine the gradients across all of the TPU cores And this is arguably one of the most important, the most important operation in this whole training pipeline Basically you take the gradients which are now different across each of the cores And they're different because each core got a different portion of the data set Whoops, I'm not a robot, I assure you So basically you're going to do a mean averaging across all of the gradients And finally each single core will have the same gradients because we do this p mean operation As a consequence because we also have the same parameters, that means that each core will have the parameters in sync So all of the parameters are initially the same, the gradients are the same which 
means that the updated parameters across each of the cores are also the same. So that's cool. We did the same thing for the loss, we just averaged the loss across the 8 cores. And finally we applied the stochastic gradient descent-like update rule. Again we have a PyTree here and a PyTree here, we just do the usual: from the current value of the parameter we subtract the learning rate times the gradient, and that's how we get our new parameters. Finally we're going to return those parameters as well as the loss value, and that's it. I made a short note here which may be important if you want to train with some more complex optimizers and not just with SGD. So if we had an optimizer, this is how the pattern would look: we pass in the grads and we pass in the optimizer state, and the optimizer, remember our pattern, is going to return the new optimizer state and the new, basically updated, version of the gradients, and then we can just apply the updates instead of the grads here, and that's it. So it's very simple, just another additional line following this stateless pattern that we saw in the beginning of this video. Nice, let me now go and run the training. So this is our training, let me run it and let's see what we get. Okay, it's working, we're printing some values, we're going to debug and understand, analyze what's going on in the console here, but before that let's just plot the results. Okay, so let me run this and we see that we have a line that's fit to this particular data set we artificially built. Now let's analyze the output in the console, what happened here. The first thing I want you to notice here is that the replicated parameters, because as I said those are going to stay on the cores, should be of a sharded device type, so let's see whether that's the case. We can see here, after printing after the 0th epoch, we're going to have this class of, blah blah blah, ShardedDeviceArray, which means that indeed our replicated parameters are on the cores themselves. The loss is a sharded device array as well. But the data, as you can see here, I'm just printing the type of the data, we can see it's a NumPy array, which means it's locally stored on the CPU, because we do not want to keep the data on the cores, that wouldn't make any sense, especially for bigger data sets. Finally, what I'm printing here is the loss shape. You can see it's an 8-dimensional vector because again it's sharded across the different cores. And we can see that the loss goes down as we are advancing here, it gets into saturation after already a couple hundred steps here, but yeah. And that's pretty much it, we trained our first machine learning model in parallel on 8 TPU cores, and as you saw it was very simple to do that. Now you're almost ready to be able to train whatever type of machine learning model. There's a couple more questions you have to get an answer to. So how would you go about doing transfer learning in JAX? Relatedly, how do we freeze certain layers and fine-tune other layers?
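Before turning to those questions, here is a consolidated sketch of the data-parallel training step that was just walked through. This is a toy reconstruction rather than the notebook's exact code: names like Params and update, the learning rate, and the epoch count are assumptions; jax.lax.pmean is the cross-device collective being described, and jax.device_get plays the role of pulling the replicated parameters back to host memory at the end.

```python
import functools
from typing import NamedTuple
import numpy as np
import jax
import jax.numpy as jnp

class Params(NamedTuple):
    weight: jnp.ndarray
    bias: jnp.ndarray

def model(params, xs):
    return params.weight * xs + params.bias

def loss_fn(params, xs, ys):
    return jnp.mean((model(params, xs) - ys) ** 2)

LEARNING_RATE = 0.005

@functools.partial(jax.pmap, axis_name='batch')
def update(params, xs, ys):
    loss, grads = jax.value_and_grad(loss_fn)(params, xs, ys)
    # average the gradients (and the loss, for logging) across all devices
    grads = jax.lax.pmean(grads, axis_name='batch')
    loss = jax.lax.pmean(loss, axis_name='batch')
    new_params = jax.tree_util.tree_map(lambda p, g: p - LEARNING_RATE * g, params, grads)
    return new_params, loss

# toy data: y = 2x - 1 plus noise (the ground-truth model from the video)
xs = np.random.normal(size=(128, 1))
ys = 2 * xs - 1 + 0.1 * np.random.normal(size=(128, 1))

n = jax.local_device_count()                                   # 8 on the TPU runtime, 1 on CPU
params = Params(jnp.zeros(()), jnp.zeros(()))
replicated = jax.tree_util.tree_map(lambda x: jnp.array([x] * n), params)  # copy params to every device
xs_sharded = xs.reshape(n, -1, 1)    # leading axis: one mini-batch per device (assumes 128 divides by n)
ys_sharded = ys.reshape(n, -1, 1)

for epoch in range(1000):
    replicated, loss = update(replicated, xs_sharded, ys_sharded)

final_params = jax.device_get(jax.tree_util.tree_map(lambda x: x[0], replicated))
print(final_params, loss[0])
```

The pattern is the same stateless one from earlier in the video: parameters go in, updated parameters come out, and pmap plus pmean keeps every device's copy of the parameters in sync.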
So, coming back to those questions: we need to figure out how to do that, how to do stop gradients, and finally how do we get per-sample gradients instead of the usual per-batch gradients. I'm going to tell you all about that in this last section of this tutorial. In order to demonstrate how stop gradients work in JAX I'm going to use a couple of examples. So first things first, JAX uses the jax.lax.stop_gradient primitive for this purpose, for stopping gradients. Let me quickly go through this TD(0) update rule here. If you're not familiar with RL this may be super confusing, but you basically do not need to know what this is, you just need to know some of the details, which I'm going to explain to you right now. So the whole point is we are trying to learn certain value functions. The value function in our example is going to be a simple linear value function, so it's parameterized by theta and we are basically doing a dot product with the state vector in order to get the value. So in order to find the gradients for theta we have to evaluate this expression: basically we have the reward at the present time step plus the value at the current state minus the value at the previous state, and we multiply that by the gradient of the value function at the previous state. That's a mouthful; the whole point is that this update is not the gradient of any loss function. However, if we reformulate the above expression into this thing, and if we have a way to stop the gradients of this target here, of this value at the present state, we'll be able, by taking the gradient of this pseudo loss function, to get exactly this expression. Again, as you can see here, if we ignore the dependency of the target on the parameter theta of our value function, we're going to get exactly this. So let's see why that is: if we do a gradient with respect to theta, 2 goes in front here, we'll have this whole expression, and then when we do a derivative with respect to theta, and this does not depend on theta, that means we're just going to end up with the derivative of the value function at the previous state. So that's exactly this expression here. Just some basic rules of differentiation at the end, that's all. So let's finally see how we can implement that. We have the value function, as I already explained, a simple dot product. We have some initial values for theta. We have, so this tm1 just means time step minus 1, i.e. the previous time step, we have the reward, we have the current state here, and finally we have the TD loss. We calculate the value function at the previous state, we calculate the target, and this is the whole point here: we just apply the stop gradient to the target and this is going to be our loss. And now when we do a gradient of that loss with respect to theta, because theta is the first argument here, and remember again, grad by default will do a derivative with respect to the first parameter, we get the correct value of delta theta here, i.e.
the gradients of our model I'm going to run this, although the number will not mean a lot And I don't think it's that important to go and analyze this further You get the point here The second example is this straight through estimator And I've seen this thing used, for example, in the VQVAE paper And I also covered that paper in one of my previous videos, you can check it out I've basically pasted the link here But the whole point is, if you have a certain non-differentiable function in the middle of your neural network, for example For example, this is a simple non-differentiable function You take the x, which is a float, and if you apply a rounding, that's simply not differentiable So now we can reformulate this non-differentiable function into this straight through f So basically, what we have done here is, if you take a look at this So x plus f of x minus x Basically that means in the forward pass we're going to have f of x Which means these two are equivalent in the forward pass On the other hand, if we do a derivative, if we find a gradient of this function and a gradient of this function They are going to be different So let me just take some simple example here So x is 5.6, just an arbitrary number We're going to print the value in the forward pass So we're going to see that those are going to be the same So 6 and 6, because when we round 5.6 we get 6 So that's clear Whereas, because this f is non-differentiable If we try to do a gradient at this point x, or whatever x is We get 0 all the time On the other hand, this straight through gradient is going to give us 1 all the time You can see why that is, because we have a stop gradient here So a derivative of this thing with respect to x is just going to be 1 Because, yeah, I mean, if a function is equal to x, then basically a gradient is 1 So why may this be useful? 
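Here is a compact sketch of the two stop_gradient examples above, the TD(0) pseudo-loss and the straight-through estimator; the concrete numbers and names are only illustrative, and the discussion of why the straight-through trick is useful continues right after this.

```python
import jax
import jax.numpy as jnp

# TD(0): treat the bootstrapped target as a constant via stop_gradient
def td_loss(theta, s_tm1, r_t, s_t):
    v_tm1 = jnp.dot(theta, s_tm1)                         # linear value function
    target = r_t + jnp.dot(theta, s_t)
    return (jax.lax.stop_gradient(target) - v_tm1) ** 2   # pseudo-loss whose gradient is the TD(0) update

theta = jnp.array([0.1, -0.1, 0.0])
s_tm1, r_t, s_t = jnp.array([1.0, 2.0, -1.0]), jnp.array(1.0), jnp.array([2.0, 1.0, 0.0])
print(jax.grad(td_loss)(theta, s_tm1, r_t, s_t))          # the delta-theta update, per parameter

# straight-through estimator: round() in the forward pass, identity in the backward pass
def f(x):
    return jnp.round(x)                                   # non-differentiable (gradient is 0 almost everywhere)

def straight_through_f(x):
    return x + jax.lax.stop_gradient(f(x) - x)

x = 5.6
print(f(x), straight_through_f(x))                        # both 6.0 in the forward pass
print(jax.grad(f)(x), jax.grad(straight_through_f)(x))    # 0.0 versus 1.0
```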
If you have automatic differentiation in reverse mode, you're basically, if you have 1 here, if a certain part of your whole function has a gradient of 1, then by multiplying the gradients that came from the deeper levels by 1 you're just going to pass them through, and that's why it's called straight-through: you're just going to copy them to the earlier part of the network, or whatever ML model you're using, and that's why this thing is kind of cool. But the whole point is, again, just be aware that we are using stop gradient to achieve this functionality. Let's see a couple more advanced things. So, calculating per-sample gradients, and let me just read this through: in many frameworks, PyTorch, TensorFlow, Theano, it is often not trivial to compute per-example gradients, because the library directly accumulates the gradient over the batch. Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients, are typically very inefficient. So you could go around this problem by taking a single sample, for example a single image, passing it through, calculating the gradients for that single image, and then doing that for the whole batch in a kind of for loop and then aggregating those gradients, but that's super expensive as you can imagine. JAX basically has direct support for this type of thing, and it's very efficient. So we create a contrived example here by just stacking the state at the t-minus-1 time step, and we batch the rewards, we batch the current states. Basically, using vmap is going to allow us to get these per-sample gradients, because remember, the TD loss does not accept a batch of data, it accepts a single example, so a single state, a single reward, and a single state here, and we have the argument theta here. So if we wrap it inside a vmap using in_axes with None for theta, which means the parameters of our value function are going to be broadcasted, and then we say 0, 0, 0, which means the zeroth dimension of these batched inputs is the batch dimension, we're going to trivially get the gradients for every single example, and they are the same because we batched the same states here, so that's kind of trivial. Wrapping up this tutorial, which is already quite lengthy: in summary, JAX's auto-differentiation engine is super powerful, you can do various things, you can even define custom derivative rules if you have some numerical problems. That's already super advanced, you can check out the docs here; I particularly linked the Autodiff Cookbook article, which you should read if you want to know all of the nitty-gritty details of the autodiff engine. I'm going to end by showing a simple example of MAML, the model-agnostic meta-learning algorithm. You can see it's fairly complicated, you can see the math here, and the whole point is, this is how simply you can implement the MAML algorithm in JAX. It looks very much like the mathematics, and that's why I think JAX is super, super powerful and useful for researchers. It's not that beginner-friendly, as I already mentioned in my previous video, but even for beginners, if you get across a couple of hurdles, I think it's going to pay off. Let me briefly go over MAML. So the whole point here is, imagine you have a couple of tasks, and while training you do not want to minimize the loss on any particular task, you want to minimize the potential losses on all of those tasks. For example, maybe going in this direction will minimize the loss for task 1, going in this one will minimize the loss for task 2, and similarly, going in this one will minimize the loss for task 3. That would lead us to have either theta 1, that's this, these thetas are going to minimize the loss on task 1, similarly this theta here, theta 2, is going to minimize the loss on task 2, and similarly for theta 3. But we actually move here, which is somewhere in between all of those thetas, and that's how MAML works. And now, briefly going over this meta loss function: we calculate the gradients here, and then we use the gradients to update, using SGD, we update the parameters here, and we pass that into the loss function, and then we take the gradients of that loss function. It's kind of convoluted, I'm not going to try, this is not the video about MAML, but I want you to just kind of appreciate how expressive JAX actually is. Having said that, hopefully you liked this video. If you did, share it out with your friends, consider subscribing to this channel, and also join the Discord community, that's where the party is. See you next time, bye bye!
[{"start": 0.0, "end": 6.98, "text": " What's cracking guys? This is the second video in the machine learning with Jack's series of tutorials and in this one"}, {"start": 6.98, "end": 10.620000000000001, "text": " We're going to learn how to build a fully fledged neural networks"}, {"start": 10.620000000000001, "end": 17.240000000000002, "text": " So in the last video we went from zero to hero and in this one we're going to go from hero to hero Pro Plus"}, {"start": 18.2, "end": 22.36, "text": " Yeah, anyways before we even start let me show you this thing"}, {"start": 22.36, "end": 29.76, "text": " So I open sourced a repo that's gonna called get started with Jack's that basically contains aside from these tutorials"}, {"start": 29.76, "end": 34.480000000000004, "text": " I'll be making other useful like content that I personally used and found useful"}, {"start": 34.480000000000004, "end": 42.86, "text": " So things like various YouTube videos blogs, etc. So and the second thing I want to I want to tell you is that you can"}, {"start": 43.800000000000004, "end": 45.86, "text": " Basically open up these notebooks"}, {"start": 45.86, "end": 50.7, "text": " I'm using in these videos in Google collab and you can easily play with them"}, {"start": 50.7, "end": 54.1, "text": " So you can just open it up and the Python environment is already set up"}, {"start": 54.1, "end": 58.28, "text": " So Jackson everything else is pre-installed so you can just kind of run this and everything will work"}, {"start": 58.28, "end": 61.300000000000004, "text": " Just run anyway, and this is going to to work out of the box"}, {"start": 61.4, "end": 67.56, "text": " So basically this like the notebook for this second video is already checked in as the time"}, {"start": 67.56, "end": 71.72, "text": " I'm recording this so you'll be able to open it up and maybe"}, {"start": 72.12, "end": 74.32, "text": " Tweak the code as you are watching the video"}, {"start": 74.32, "end": 77.48, "text": " So I think the best approach to get the most out of these videos is to"}, {"start": 77.72, "end": 84.24000000000001, "text": " Tweak the code yourself play a bit and and see where you can learn something something interesting awesome with that out of the way"}, {"start": 84.24000000000001, "end": 86.24000000000001, "text": " Let's continue with the video"}, {"start": 86.24, "end": 90.32, "text": " So let's first import the packages. We saw this in the last one"}, {"start": 90.32, "end": 93.64, "text": " So we were just importing Jack's the NumPy API of Jack's"}, {"start": 94.36, "end": 99.64, "text": " Transform functions that we saw in the last video. 
So grad jet V map in this one"}, {"start": 99.64, "end": 104.91999999999999, "text": " We're going to learn about P map which helps us parallelize the computation across multiple devices"}, {"start": 104.91999999999999, "end": 110.28, "text": " Which is very useful as you can imagine and yeah other necessary libraries here"}, {"start": 110.28, "end": 116.16, "text": " So the goal as I said will be to learn how to build complex more complex machine learning models"}, {"start": 116.16, "end": 121.16, "text": " Parallelize the computation across multiple devices and yeah, basically after this video"}, {"start": 121.16, "end": 124.1, "text": " You should be able to to get started and build your own models"}, {"start": 125.08, "end": 128.51999999999998, "text": " In order to get to the point where we can build neural networks"}, {"start": 128.51999999999998, "end": 132.85999999999999, "text": " There is a couple of things we have to you to first solve the first one is the problem of state"}, {"start": 132.85999999999999, "end": 135.8, "text": " So when I say that I mean the following so Jack's is basically"}, {"start": 136.72, "end": 140.28, "text": " uses the functional program programming paradigm, which means"}, {"start": 140.28, "end": 147.28, "text": " We need pure functions, which means Jack's does not love state in the sense that you cannot use your your usual"}, {"start": 147.8, "end": 154.6, "text": " Object-oriented programming type of stateful classes or functions. So we're gonna see a way around it right now"}, {"start": 155.72, "end": 161.68, "text": " First short reminder from the first video. We saw that impure functions are problematic"}, {"start": 161.76, "end": 168.8, "text": " So we saw this global variable G and we can treat it as a state for the purpose of this of this of this section"}, {"start": 168.8, "end": 175.36, "text": " Basically what this impure uses global function does it accesses G which is external state"}, {"start": 175.36, "end": 182.64000000000001, "text": " Which is forbidden because this makes the function impure and what happens now is when the JIT when we call JIT on this function"}, {"start": 182.92000000000002, "end": 188.04000000000002, "text": " It's going to cache the current value of G because that's a side effect. So it's gonna just cache the value 0"}, {"start": 188.24, "end": 191.52, "text": " So that means the first time we call it is gonna is gonna run correctly"}, {"start": 191.52, "end": 195.8, "text": " But the second time we call it it's gonna call we're going to call the cached version of the function"}, {"start": 195.8, "end": 199.98000000000002, "text": " Which means 5 will be added to 0 and not 10"}, {"start": 199.98000000000002, "end": 203.48000000000002, "text": " So in the meantime in the meanwhile, we've updated G to 10"}, {"start": 203.48000000000002, "end": 209.38000000000002, "text": " So the state is updated to 10, but that will not be reflected because of the because we have impure function"}, {"start": 209.38000000000002, "end": 213.42000000000002, "text": " So this is the the trick you have to be aware of and as you can see here"}, {"start": 213.42000000000002, "end": 217.84, "text": " We get the correct result for the first call, but we get the incorrect result for the second call"}, {"start": 217.84, "end": 224.08, "text": " So that's 5 and we should have gotten I guess 5 plus 10. That's 15, right? Okay, that's the first step"}, {"start": 224.08, "end": 230.70000000000002, "text": " Secondly, we saw this pattern already. 
So basically we saw that the pseudo random number generator in"}, {"start": 231.04000000000002, "end": 238.28, "text": " Jax is not stateful in contrast to two numPy's and we saw this pattern where we create the state the initial state"}, {"start": 238.32000000000002, "end": 242.88000000000002, "text": " By just passing in the seed and then in order to get the the next state"}, {"start": 242.88000000000002, "end": 248.76000000000002, "text": " We just call this split function and we pass we externally pass the state. So that's that's a big difference compared to numPy"}, {"start": 248.76, "end": 253.44, "text": " By pseudo random number generator. So I basically want you to remember this pattern"}, {"start": 253.44, "end": 260.14, "text": " So we are inputting the state inside of the function. So split is not saving state internally is just taking it as an input"}, {"start": 260.14, "end": 264.84, "text": " It's manipulating it somehow in a pure fashion and returning back some novel states"}, {"start": 265.32, "end": 268.26, "text": " Whereas just recall that we were using key and sub key"}, {"start": 269.0, "end": 271.86, "text": " Terminology previously. I'm just want to stress out that we are"}, {"start": 272.52, "end": 277.84, "text": " Basically, I want to stress out this pattern. So that's why I'm using this new terminology. But yeah, you get the point hopefully"}, {"start": 277.84, "end": 283.44, "text": " Okay. So now let's see a way around this and if I haven't made this already clear enough"}, {"start": 284.03999999999996, "end": 289.67999999999995, "text": " Let me just kind of motivate where we're doing this. Basically if you're trying to create a neural network, for example"}, {"start": 290.4, "end": 294.34, "text": " Like there is a lot of stateful code when you're trying to build neural networks"}, {"start": 294.34, "end": 297.12, "text": " Especially in other frameworks such as PyTorch or TensorFlow"}, {"start": 297.28, "end": 304.62, "text": " Basically, our model parameters are stateful your optimizer is stateful atom is for example storing internally a certain"}, {"start": 304.62, "end": 310.58, "text": " Statistics so that means it's stateful and also we have a layer such as batch norm, which are stateful"}, {"start": 310.58, "end": 315.9, "text": " Yeah in summary and any useful program will will have to have some some kind of state and we just need to find"}, {"start": 316.3, "end": 322.34000000000003, "text": " Alternative way how we can handle it. So in this first example, we have a stateful class. 
It's called counter"}, {"start": 322.58, "end": 329.02, "text": " We have this state called like n which is basically the internal count and we have the function count"}, {"start": 329.02, "end": 335.64, "text": " We just increments the this this state by one and returns the current value of that state"}, {"start": 335.94, "end": 341.9, "text": " Finally, once we call reset, we just set the state to zero and that I guess all makes sense here"}, {"start": 341.9, "end": 348.09999999999997, "text": " We just instantiate the an object of this class and then we we call we call three times"}, {"start": 348.09999999999997, "end": 351.02, "text": " We call this this count function and as expected"}, {"start": 352.65999999999997, "end": 355.02, "text": " Hopefully we'll get one two three and here they are"}, {"start": 355.02, "end": 360.74, "text": " Okay, so that all works like a charm and this is something you're used to now the problem arises here"}, {"start": 361.02, "end": 368.76, "text": " When you try to get the function, so let's reset the counter and let's so that means the state is now internally set to zero"}, {"start": 368.76, "end": 373.15999999999997, "text": " And now let's try and get the counter count function. So let's try and get this function here"}, {"start": 373.15999999999997, "end": 375.09999999999997, "text": " Now, let's see what happens with it"}, {"start": 375.09999999999997, "end": 380.5, "text": " Once we did it and we try to again run the same type of like a for loop here"}, {"start": 380.5, "end": 384.22, "text": " The results are kind of surprising maybe or maybe not even familiar with jet"}, {"start": 384.22, "end": 390.66, "text": " Basically we get all ones and the reason is as I said as this count function is not pure because we are accessing"}, {"start": 390.66, "end": 396.42, "text": " We are basically writing to an external state as you can see here. So self dot n is a state"}, {"start": 396.42, "end": 400.5, "text": " Basically, it's not passed in as an argument. We are we are just modifying it externally"}, {"start": 400.5, "end": 405.5, "text": " So that's bad. And what will happen is so once we call reset here"}, {"start": 405.5, "end": 411.5, "text": " And we'll be set to zero and the first time so once we get the first time we call a jitted function"}, {"start": 411.5, "end": 416.5, "text": " We're gonna do the tracing if you recall from the last video and that means that basically on the first call"}, {"start": 417.5, "end": 424.5, "text": " The function will return one and because we have a side effect is just gonna learn how to cache number one and that's it"}, {"start": 424.5, "end": 431.5, "text": " Now, let me just kind of try and clarify that if we were to use make Jack spur to make this more clear"}, {"start": 431.5, "end": 439.5, "text": " Let me run this and see what the like underlying Jack's expression Jack's expression is and we can see it learn to just return"}, {"start": 439.5, "end": 446.5, "text": " Like one so basically the function the cache function has like the whole body of the function is something like this return one"}, {"start": 446.5, "end": 452.5, "text": " That's it. 
And that's why we're getting ones even though we are we are expecting to get increments"}, {"start": 452.5, "end": 457.5, "text": " Okay, just to make this a bit more clear if we were to call counter count function here"}, {"start": 457.5, "end": 464.5, "text": " We'd expect the internal state to be modified to like one and then once we call jet"}, {"start": 464.5, "end": 466.5, "text": " It's gonna be returning twos now, hopefully"}, {"start": 466.5, "end": 472.5, "text": " So if we correctly understand this if I run this we get twos and the reason is again"}, {"start": 472.5, "end": 478.5, "text": " It's basically tracing this function the first time it's racing yet. The value was already set to one"}, {"start": 478.5, "end": 483.5, "text": " We add additionally one so we have two and we turn two and basically because we don't have any inputs"}, {"start": 483.5, "end": 489.5, "text": " It's just gonna learn how to cache and return those number two and again if if I were to do the same thing here"}, {"start": 489.5, "end": 497.5, "text": " So count and call the make Jack Jack spur. We're hopefully gonna get like just two"}, {"start": 497.5, "end": 503.5, "text": " Instead of one here. Let's see and that's correct. Okay, we got two here"}, {"start": 503.5, "end": 508.5, "text": " In any case, we have to find a way to avoid like creating these in pure function"}, {"start": 508.5, "end": 513.5, "text": " So let's see what the solution is we're going to define this counter state"}, {"start": 513.5, "end": 519.5, "text": " Like instead of defining a complex class, we're just gonna have like a simple primitive data type here. So integer"}, {"start": 519.5, "end": 522.5, "text": " So this version two of the counter is gonna work the following way"}, {"start": 522.5, "end": 529.5, "text": " So we're going to pass in the counter state and we're going to return back the output as well as the state"}, {"start": 529.5, "end": 534.5, "text": " Now this may be a bit confusing m plus one where we're turning a couple of m plus one two times"}, {"start": 534.5, "end": 542.5, "text": " I mean doesn't make any sense right? So the thing is in general this thing here will be state and this thing here will be some function of the state"}, {"start": 542.5, "end": 547.5, "text": " So here in the example of this simple counter the output as well as the state are the same thing"}, {"start": 547.5, "end": 551.5, "text": " So yeah, we're just returning m plus one, but yeah doesn't doesn't matter"}, {"start": 551.5, "end": 556.5, "text": " The whole point is we are passing in the state and we are processing it and then returning it back"}, {"start": 556.5, "end": 561.5, "text": " In the case of reset function, we're just returning back the state zero and that's it"}, {"start": 561.5, "end": 568.5, "text": " So here we instantiate the counter here. We call the reset and as you can see here compared to the version one of the counter"}, {"start": 568.5, "end": 571.5, "text": " We are not internally modifying the state. 
We are just"}, {"start": 571.5, "end": 576.5, "text": " returning back the state here externally and here we can see the pattern"}, {"start": 576.5, "end": 583.5, "text": " So we've called the counter count we pass in the state and we get the state and the value we print the value and we keep doing this"}, {"start": 583.5, "end": 586.5, "text": " And so yeah, I guess this should already be familiar"}, {"start": 586.5, "end": 592.5, "text": " Basically Jack's like run random number generators use this very same pattern"}, {"start": 592.5, "end": 597.5, "text": " So this should work like as expected now and as we can see we get one two three"}, {"start": 597.5, "end": 601.5, "text": " But but now let's see whether after we did the function this thing will work or not"}, {"start": 601.5, "end": 605.5, "text": " Let's call reset again. We get the initial state"}, {"start": 605.5, "end": 609.5, "text": " Let's get the function and let's try and run this"}, {"start": 609.5, "end": 612.5, "text": " And as expected we're getting correct results"}, {"start": 612.5, "end": 620.5, "text": " Even with this jaded version of the function because we are externally manipulating the state and not creating impure functions"}, {"start": 620.5, "end": 628.5, "text": " And yep, this is working as expected and in summary, let's see the pattern we we use to convert a stateful function into a stateless one"}, {"start": 628.5, "end": 633.5, "text": " So basically we had the state which was just number n in the case of a simple counter"}, {"start": 633.5, "end": 638.5, "text": " We have this stateful method. So the the especially like in particular we had this method called count"}, {"start": 638.5, "end": 643.5, "text": " We are passing some arguments in general case and we get back the output"}, {"start": 643.5, "end": 648.5, "text": " Now we converted this and instead of saving the state inside of the class"}, {"start": 648.5, "end": 655.5, "text": " We're passing the state inside here as an input and we return back the top also the output and the state"}, {"start": 655.5, "end": 662.5, "text": " And this pattern can be applied for to any a stateful class and to get a stateless one and that's cool"}, {"start": 662.5, "end": 668.5, "text": " Okay, that's that was step one towards building like a fully fledged neural networks"}, {"start": 668.5, "end": 672.5, "text": " Now the second step is we still need to find a way how to deal with gradients"}, {"start": 672.5, "end": 677.5, "text": " And you may wonder why we have to like learn anything new to deal with gradients"}, {"start": 677.5, "end": 682.5, "text": " Well, we already know about the grad function. Well, the thing is with our current knowledge"}, {"start": 682.5, "end": 686.5, "text": " We would go something like this. So imagine we have some loss function"}, {"start": 686.5, "end": 692.5, "text": " F and we have like the parameters of our neural network. So x y z and w"}, {"start": 692.5, "end": 698.5, "text": " So we basically only have four parameters for now and we do something like x squared plus y squared plus z squared"}, {"start": 698.5, "end": 703.5, "text": " Blah blah blah. 
So basically we get this is a L2 you can treat this as L2 loss"}, {"start": 703.5, "end": 708.5, "text": " So minimizing this loss would cause the weights to go closer to get closer to zero"}, {"start": 708.5, "end": 712.5, "text": " And now the problem here is if we had something like GPT-3 model"}, {"start": 712.5, "end": 718.5, "text": " Which has 175 billion parameters and some newer language models are even bigger"}, {"start": 718.5, "end": 725.5, "text": " Basically, you'd have to write 175 billion like terms here minus four that we already have here"}, {"start": 725.5, "end": 730.5, "text": " And that sucks, right? So the problem does not stop there"}, {"start": 730.5, "end": 736.5, "text": " Basically, if we now try to find the gradients because we want to do an update of our neural network parameters"}, {"start": 736.5, "end": 743.5, "text": " We'd have to get dfdx. So we have to find derivatives with respect to every single one of those arguments"}, {"start": 743.5, "end": 752.5, "text": " And grad as you recall usually by default returns the like a derivative with respect to only the first argument"}, {"start": 752.5, "end": 757.5, "text": " So that will be dfdx. So we have to explicitly pass this list of integers"}, {"start": 757.5, "end": 763.5, "text": " And in the case of 175 billion would have quite a lot of these, right?"}, {"start": 763.5, "end": 770.5, "text": " So this will not scale and yeah, let me just run this. This is going to work as expected"}, {"start": 770.5, "end": 773.5, "text": " But as I said, it's bothersome and it does not scale"}, {"start": 773.5, "end": 782.5, "text": " So we had twos because if we take argument dfdx is going to be 2x, 2y, 2z in the case of dfdz, etc."}, {"start": 782.5, "end": 787.5, "text": " So basically because we input all once as the input we get twos for every single gradient"}, {"start": 787.5, "end": 790.5, "text": " So that makes sense. Now again, the problem does not stop here"}, {"start": 790.5, "end": 798.5, "text": " If you want to do an update would have to have 175 billion lines doing these types of SGD updates"}, {"start": 798.5, "end": 802.5, "text": " So basically learning rate gradients and we update our weights here"}, {"start": 802.5, "end": 809.5, "text": " So there is obviously a better way. This is just a motivation for why we need the following solution"}, {"start": 809.5, "end": 811.5, "text": " And the solution is called a Pytree"}, {"start": 811.5, "end": 818.5, "text": " So in general we are used to wrapping our model parameters in some more complex data structures dictionaries or whatever"}, {"start": 818.5, "end": 828.5, "text": " And Pytree is the Jack's solution to how to basically find gradients of arbitrarily nested data structures"}, {"start": 828.5, "end": 834.5, "text": " And let's see a simple example here. So this is just a contrived example obviously"}, {"start": 834.5, "end": 840.5, "text": " You can treat it as some type of model parameters, very esoteric but yes, nonetheless"}, {"start": 840.5, "end": 847.5, "text": " Just stick with me here. We're going to first learn what Pytree is and then we're going to see a real example"}, {"start": 847.5, "end": 853.5, "text": " We're going to train like a multi-layer perceptron really soon. 
Let's see how Pytree is defined in the official docs"}, {"start": 853.5, "end": 860.5, "text": " So if I open up the docs here, if I go to tutorialjacks101 and find the working with Pytrees here"}, {"start": 860.5, "end": 867.5, "text": " Okay, we have the definition here. So a Pytree is a container of leaf elements and or more Pytrees"}, {"start": 867.5, "end": 872.5, "text": " So it has this recursive definition. Containers include lists, tuples and dictionaries"}, {"start": 872.5, "end": 878.5, "text": " So let's see how that maps to our example here. So basically we have a container here"}, {"start": 878.5, "end": 884.5, "text": " So this whole thing is a Pytree, but this Pytree contains another Pytree here"}, {"start": 884.5, "end": 891.5, "text": " And finally this Pytree has a leaf 1 and the leaf string this character a"}, {"start": 891.5, "end": 897.5, "text": " And finally this object is also considered a leaf. The second, this is also a Pytree, this whole tuple here"}, {"start": 897.5, "end": 904.5, "text": " And tuple I guess. This is a leaf, one will be a leaf, whereas this thing here will again be a Pytree"}, {"start": 904.5, "end": 910.5, "text": " And two will be a leaf and three will be a leaf. There is a handy way to analyze this and it's called this"}, {"start": 910.5, "end": 916.5, "text": " From Jax, just import this tree leaves function. We're going to iterate through these Pytrees and we're going to get the leaves"}, {"start": 916.5, "end": 921.5, "text": " And we're going to finally print them in a nice format here. So let's see what the results here are"}, {"start": 921.5, "end": 930.5, "text": " Basically this first Pytree has three leaves as we can see here. So one, character a and a simple empty object"}, {"start": 930.5, "end": 935.5, "text": " Let me see what else is interesting. So as I said this is 1, 2, 3, all of those will be leaves"}, {"start": 935.5, "end": 942.5, "text": " In the case of a dictionary, keys are not considered leaves. Keys are just like metadata of the Pytree node"}, {"start": 942.5, "end": 947.5, "text": " So we're just going to have 1, 2, 3, 4 and 5 as leaves as you can see here"}, {"start": 947.5, "end": 953.5, "text": " And interesting to notice here is finally this array is going to be considered a single leaf"}, {"start": 953.5, "end": 958.5, "text": " So we won't have 1, 2, 3 as separate leaves. This whole thing will be treated as a leaf"}, {"start": 958.5, "end": 962.5, "text": " So that's something to keep in mind. Okay, so how do we manipulate these trees?"}, {"start": 962.5, "end": 968.5, "text": " There's a couple of neat and handy functions. So again we have an example here of a simple Pytree"}, {"start": 968.5, "end": 973.5, "text": " And what we do is we call this tree map function and we just pass in the Pytree"}, {"start": 973.5, "end": 978.5, "text": " And we define a function that is going to manipulate and work on top of these"}, {"start": 978.5, "end": 985.5, "text": " It's going to process the leaves of the Pytree and basically what we expect is that every leaf will be multiplied by 2"}, {"start": 985.5, "end": 992.5, "text": " In the case of this simple lambda function. So let's run this and as expected we get everything is multiplied by 2"}, {"start": 992.5, "end": 1000.5, "text": " So that was in the case of these single argument functions. 
In the case when you have multiple"}, {"start": 1000.5, "end": 1005.5, "text": " Like when your function has multiple arguments, I'm just going to create a copy here"}, {"start": 1005.5, "end": 1012.5, "text": " So a reference basically and I'm going to do a simple addition and this time we're going to call treeMultiMap"}, {"start": 1012.5, "end": 1018.5, "text": " So we have 2 Pytrees as input. We manipulate them using treeMultiMap and as the output we get the Pytree again"}, {"start": 1018.5, "end": 1023.5, "text": " So if I run this we would expect same results and yep we have the same results here because"}, {"start": 1023.5, "end": 1028.5, "text": " Just adding two same structures is the same as doing multiply by 2, right?"}, {"start": 1028.5, "end": 1035.5, "text": " Okay, so finally worth noticing here is that the structures have to be the same"}, {"start": 1035.5, "end": 1042.5, "text": " So here the condition was trivially I guess true because I just made this a reference"}, {"start": 1042.5, "end": 1047.5, "text": " But in the case when you have two separate data structures you want to make sure they have the same structure"}, {"start": 1047.5, "end": 1053.5, "text": " Because if I artificially here introduce a change so I just do a deep copy of the original object"}, {"start": 1053.5, "end": 1058.5, "text": " And I just append this list that contains number 23 and if we try to run this"}, {"start": 1058.5, "end": 1063.5, "text": " We can see that list arityMismatch is reported as an error"}, {"start": 1063.5, "end": 1069.5, "text": " So yeah, keep that in mind and that was the last missing piece before we can train like a multilayer perceptron"}, {"start": 1069.5, "end": 1074.5, "text": " Hopefully you already know what MLP is if you don't let me just quickly show you some diagrams"}, {"start": 1074.5, "end": 1080.5, "text": " And whoops, okay, we are in a really bad part of the internet right now. Let me go to images here"}, {"start": 1080.5, "end": 1085.5, "text": " If this thing will load my internet is really really slow today"}, {"start": 1085.5, "end": 1090.5, "text": " Okay, so I'm gonna just type in neural network here and hopefully everything will work correctly"}, {"start": 1090.5, "end": 1097.5, "text": " And here we are. Here is a simple MLP basically you have your input data coming in from the left"}, {"start": 1097.5, "end": 1103.5, "text": " You're processing the inputs using these neurons. These are called hidden layers"}, {"start": 1103.5, "end": 1106.5, "text": " This is the output layer and outcomes to results"}, {"start": 1106.5, "end": 1111.5, "text": " So this thing may be like a binary classifier like a hot dog or not hot dog"}, {"start": 1111.5, "end": 1115.5, "text": " I don't know in any case that's MLP at least on a high level and"}, {"start": 1115.5, "end": 1121.5, "text": " Now, okay, we are we're ready to start training a toy multilayer perceptron model"}, {"start": 1121.5, "end": 1126.5, "text": " So we know how to handle state we know how to handle parameters"}, {"start": 1126.5, "end": 1133.5, "text": " I need to find gradients of our parameters which may be stored in certain types of nested data structures"}, {"start": 1133.5, "end": 1139.5, "text": " So that's everything we need to to train our first model in Jax, I guess"}, {"start": 1139.5, "end": 1144.5, "text": " So let's start here. 
We have this init MLP parameters function"}, {"start": 1144.5, "end": 1151.5, "text": " What it's going to do is return a pie tree object and here you can see we specify the type"}, {"start": 1151.5, "end": 1156.5, "text": " Basically the configuration of our MLP. So that's gonna it's gonna have three layers"}, {"start": 1156.5, "end": 1162.5, "text": " So the first one will map input. That's a scalar into 128 dimensional vector"}, {"start": 1162.5, "end": 1170.5, "text": " Then we're gonna have a mapping from 128 to 128 and finally we're gonna have a mapping from 128 to output scalar again"}, {"start": 1170.5, "end": 1178.5, "text": " So that's three layer deep MLP. So this function here just takes that list as an input as you can see here using this smart indexing"}, {"start": 1178.5, "end": 1181.5, "text": " We can just fetch tuples such as this"}, {"start": 1181.5, "end": 1190.5, "text": " So we first have the tuple 128 and we're gonna use that to form weights and biases of our neural of our like a linear layer"}, {"start": 1190.5, "end": 1195.5, "text": " And we're gonna wrap it up into the dictionary and just append to this list"}, {"start": 1195.5, "end": 1204.5, "text": " So this is gonna be a list of dictionaries which contain two keys weights and biases which then contain like a random meet randomly initialized matrices"}, {"start": 1204.5, "end": 1212.5, "text": " And you don't have to worry about this. This is just some specific type. I think gaming or something initialization doesn't matter"}, {"start": 1212.5, "end": 1218.5, "text": " And think to notice we're using numpy random here instead of using Jax PRNG doesn't matter for this simple example"}, {"start": 1218.5, "end": 1220.5, "text": " But just keep that in mind"}, {"start": 1220.5, "end": 1230.5, "text": " Anyways, so we get back the the the pie tree and because we have a pie tree we can nicely use the tree map to print certain like information about the pie tree"}, {"start": 1230.5, "end": 1236.5, "text": " And we can see we've done everything correctly. We have biases and weights. Everything is of correct shape"}, {"start": 1236.5, "end": 1247.5, "text": " In general, this is a good good practice to have to analyze the shapes of your if your data and model parameters and whatnot throughout the whole pipeline basically"}, {"start": 1247.5, "end": 1253.5, "text": " Okay, let's continue here. We're gonna have some update function here in order to train it"}, {"start": 1253.5, "end": 1261.5, "text": " This is the main part and I'm just going to before we even start digging into the training code. I'm just gonna comment on the training code"}, {"start": 1261.5, "end": 1267.5, "text": " I'm just gonna comment on this forward because we haven't defined it yet and I'm just gonna show you the input data"}, {"start": 1267.5, "end": 1270.5, "text": " So this is what we're trying to learn to regress"}, {"start": 1270.5, "end": 1281.5, "text": " So the MLP will have as an input a simple scalar and we're trying to regress the corresponding y which is simply x squared in this in this example here"}, {"start": 1281.5, "end": 1287.5, "text": " So now let's see how how this thing works. Let me uncomment this basically"}, {"start": 1287.5, "end": 1295.5, "text": " We'll have a couple of epochs. So 5000 epochs in this example. 
We're gonna call the update function and you see the the pattern again"}, {"start": 1295.5, "end": 1303.5, "text": " We are passing in the state which is the model parameters in this case and we return back the parameters again and we just iterate that and that's it"}, {"start": 1303.5, "end": 1308.5, "text": " That's going to update the parameters and finally we can regress using the forward method"}, {"start": 1308.5, "end": 1316.5, "text": " Let me show how those are defined. So update basically takes as an input parameters x and y. So that's the input and output data"}, {"start": 1316.5, "end": 1325.5, "text": " We calculate the gradients of the loss function with respect to basically this input data and the current model"}, {"start": 1325.5, "end": 1331.5, "text": " Just stick with me. I'm gonna analyze this in a bit more detail in a couple minutes. For now, let's just see what the loss function is"}, {"start": 1331.5, "end": 1342.5, "text": " It's a simple mean squared error loss. Basically we do here prediction. We find an error term that's basically predicted y minus the ground truth y"}, {"start": 1342.5, "end": 1351.5, "text": " We square it to get basically positive values error terms and then we do the mean across all of the data points and that's your MSC loss"}, {"start": 1351.5, "end": 1360.5, "text": " Forward is fairly simple. You have a this is basically your MLP feed forward pass. So remember params is a pie tree"}, {"start": 1360.5, "end": 1366.5, "text": " When we iterate layer is just a dictionary that contains weights and biases. So we take out the weights"}, {"start": 1366.5, "end": 1373.5, "text": " We do dot product with the input data and we just add the biases here and then apply the activation function rally in this particular case"}, {"start": 1373.5, "end": 1379.5, "text": " So there is a dichotomy we made here between hidden layers. Whoops hidden layers in the output layer"}, {"start": 1379.5, "end": 1386.5, "text": " The reason is the last layer will not apply an activation function. We're just going to return back the raw results"}, {"start": 1386.5, "end": 1391.5, "text": " That's the reason for this dichotomy. I guess we can rewrite this a bit nicer, but yeah, it works"}, {"start": 1391.5, "end": 1398.5, "text": " Okay that out of the way. Let's now focus on this part. We do the grad of our loss function and as you can recall"}, {"start": 1398.5, "end": 1407.5, "text": " Basically, it's gonna do a derivative with respect to the first parameter and since our first parameter is params and which is a pie tree"}, {"start": 1407.5, "end": 1412.5, "text": " Will basically get as the output the same structure as params just"}, {"start": 1412.5, "end": 1420.5, "text": " But instead of values of instead of the weights will have derivatives of those weights with respect to the loss function"}, {"start": 1420.5, "end": 1428.5, "text": " So that's the magic behind grad. 
Basically all of these built-in functions transform functions such as grad know how to handle pie trees"}, {"start": 1428.5, "end": 1437.5, "text": " And this is finally a concise way to solve this problem of figuring out the gradients"}, {"start": 1437.5, "end": 1445.5, "text": " Now that we have gradients we can use our loved tree multimap function and we just pass in parameters and gradients"}, {"start": 1445.5, "end": 1453.5, "text": " We have this so on every single leaf of these pie trees we're going to apply the following rule and that's from the weight"}, {"start": 1453.5, "end": 1460.5, "text": " Just subtract the learning rate times the gradient which is simple stochastic gradient descent update rule and that's it"}, {"start": 1460.5, "end": 1466.5, "text": " As the result again tree multimap as you recall it returns back the pie tree like structure again"}, {"start": 1466.5, "end": 1472.5, "text": " Which means we have the updated weights getting back as the output of update and we can see that here"}, {"start": 1472.5, "end": 1478.5, "text": " So params in params come out as well here. Now, let's try to run this and then we're going to play a bit with this code"}, {"start": 1478.5, "end": 1486.5, "text": " So let me run this cell. Let's see where I ran this one. Yeah, I did. So let me run this one and let's see what the results are"}, {"start": 1486.5, "end": 1493.5, "text": " Okay, this is the output we can see a pretty nice fit in this particular case"}, {"start": 1493.5, "end": 1502.5, "text": " We can now play with this function. We can try and see some other polynomials maybe X X cubed"}, {"start": 1502.5, "end": 1510.5, "text": " We can see it also manages to fit this curve as well. We can play and add some other types of polynomials something like this"}, {"start": 1510.5, "end": 1517.5, "text": " We just I just added plus X here and it's still be like you can see at the borders here because we have less data points"}, {"start": 1517.5, "end": 1523.5, "text": " The fittest has a bigger errors compared to where we have a dense sampling which is expected I guess right?"}, {"start": 1523.5, "end": 1530.5, "text": " Finally, let's try one more interesting example that sign us excess. Let's see what we get here"}, {"start": 1530.5, "end": 1537.5, "text": " Whoops, when you use jet numpy sign here, so this should work now"}, {"start": 1537.5, "end": 1549.5, "text": " And you can see the fit is way worse here. We can obviously improve that by increasing the capacity of the model or increasing the number of epochs"}, {"start": 1549.5, "end": 1558.5, "text": " But yeah, let's try some other frequency like 3 X and yeah, we have a much better fit here"}, {"start": 1558.5, "end": 1565.5, "text": " In any case you get the point. We trained our first neural network in Jax and it seems to be working"}, {"start": 1565.5, "end": 1573.5, "text": " So let me return back the original statement here. So that's a simple like a polynomial and now let's analyze a couple of things here"}, {"start": 1573.5, "end": 1578.5, "text": " So first, let's see the structure of grads. I'm gonna reduce this to one just a single epoch"}, {"start": 1578.5, "end": 1584.5, "text": " I'm gonna comment out the JIT part and I'm gonna print grads here"}, {"start": 1584.5, "end": 1591.5, "text": " Okay, so I'm gonna just print the gradients here so that we are confident that we get the same structure as params"}, {"start": 1591.5, "end": 1597.5, "text": " So I'm gonna run this cell. I'm gonna run this again. 
I guess the output is gonna be a bit unwieldy"}, {"start": 1597.5, "end": 1601.5, "text": " Okay, let me see what we got here"}, {"start": 1601.5, "end": 1610.5, "text": " Bunch of numbers, but you can basically see that we have a list of dictionaries where the keys are biases and weights"}, {"start": 1610.5, "end": 1617.5, "text": " And basically these here these numbers are derivatives and not the parameter values. That's the important part"}, {"start": 1617.5, "end": 1621.5, "text": " So that's this part. Let me just uncomment this, return back the JIT"}, {"start": 1621.5, "end": 1625.5, "text": " And I mean that's pretty much it. That's everything that goes into training an MLP"}, {"start": 1625.5, "end": 1632.5, "text": " As a quick mention here, you can notice that I called JIT on top of this high level update function"}, {"start": 1632.5, "end": 1640.5, "text": " Because it gives the XLA compiler much more freedom to optimize various functionalities inside of this update function"}, {"start": 1640.5, "end": 1651.5, "text": " You can obviously play and toggle off and on the JIT wrapper here and see how faster it is to train it with JIT compared to without JIT, I guess"}, {"start": 1651.5, "end": 1656.5, "text": " Okay, let's continue here. This was an example of a simple neural network"}, {"start": 1656.5, "end": 1662.5, "text": " But if we wanted to create our own neural network library, we'd need to create layers such as NNLinear"}, {"start": 1662.5, "end": 1665.5, "text": " That's just PyTorch syntax or conv layers, etc."}, {"start": 1665.5, "end": 1671.5, "text": " And in order to get those to work, we have to create our custom PyTrees"}, {"start": 1671.5, "end": 1679.5, "text": " And let's see why that is. So first imagine this is this my container object here is just a simple NNLinear layer"}, {"start": 1679.5, "end": 1684.5, "text": " So let's imagine that's a linear layer and now imagine we have a list of two linear layers here"}, {"start": 1684.5, "end": 1691.5, "text": " So just create an example PyTree here and now let's fetch the leaves and see what we get as the output"}, {"start": 1691.5, "end": 1698.5, "text": " And we would expect, I guess, like eight leaves here, right? But the thing is it's going to return only two leaves"}, {"start": 1698.5, "end": 1705.5, "text": " So we have here that we this whole PyTree only contains two leaves and those are this my container objects"}, {"start": 1705.5, "end": 1713.5, "text": " That's obviously going to be undesirable because TreeMap and those other functions, they go through the leaves and do certain updates"}, {"start": 1713.5, "end": 1722.5, "text": " And if we had a linear layer like this, we wouldn't be able to do a gradient, like stochastic gradient descent update on top of these structures"}, {"start": 1722.5, "end": 1730.5, "text": " So we have to find a way around this. 
For example, if we were to run this simple manipulation, so just x plus one"}, {"start": 1730.5, "end": 1735.5, "text": " We'd expect to just increment these numerical values, but this is not going to work"}, {"start": 1735.5, "end": 1740.5, "text": " And I guess it makes sense because if you know how TreeMap works, it's going to iterate over the leaves"}, {"start": 1740.5, "end": 1746.5, "text": " And the leaves are my container objects and we're trying to add one to my container, which doesn't make any sense"}, {"start": 1746.5, "end": 1755.5, "text": " So let's run this and we can see that we get this error, we're trying to unsupport operand blah blah plus for my container and integer"}, {"start": 1755.5, "end": 1761.5, "text": " So that makes sense, right? So what's the solution to this? It's simple, you just have to define two functions"}, {"start": 1761.5, "end": 1766.5, "text": " One is flatten, one is unflatten and then you have to register PyTree nodes"}, {"start": 1766.5, "end": 1772.5, "text": " So we registered the my container with these two associated functions, flatten and unflatten"}, {"start": 1772.5, "end": 1777.5, "text": " So what flatten does basically just take the ABC, so those were the values we had here, right?"}, {"start": 1777.5, "end": 1784.5, "text": " We'll just wrap that information into an iterable here and we are not going to want to have a name as the content"}, {"start": 1784.5, "end": 1789.5, "text": " We want to have it as a metadata and that's why we include it in this auxiliary data structure"}, {"start": 1789.5, "end": 1793.5, "text": " And we just return this tuple of like content and auxiliary data"}, {"start": 1793.5, "end": 1801.5, "text": " Finally, the unflatten just receives those same inputs in reverse order though because of I think some legacy problems they had"}, {"start": 1801.5, "end": 1806.5, "text": " And you basically just construct the my container and you return it back, that's it"}, {"start": 1806.5, "end": 1810.5, "text": " So after I run this, now the my container class should work as expected"}, {"start": 1810.5, "end": 1818.5, "text": " So if we were to run the tree leaves function again, let's run it and we have the leaves 1, 2, 3, 4, 5, 6 as expected"}, {"start": 1818.5, "end": 1827.5, "text": " Now this thing will work, so if we were to add plus 1, we'll just get I guess 2, 3, 4 should be the output here"}, {"start": 1827.5, "end": 1829.5, "text": " And yes, that is the case"}, {"start": 1829.5, "end": 1838.5, "text": " Basically tree leaves, i.e. 
the tree map iterated over the leaves which are just the numerical values here, these ones"}, {"start": 1838.5, "end": 1844.5, "text": " And it just added plus 1, returned a PyTree again and we just print the leaves here, that's it"}, {"start": 1844.5, "end": 1854.5, "text": " There is one more gotcha I want to explain, tell you about and that's that we can oftentimes mistake nodes for leaves"}, {"start": 1854.5, "end": 1859.5, "text": " And when I say that, here's what I mean, so let's create a zero PyTree"}, {"start": 1859.5, "end": 1865.5, "text": " So basically we just have two nodes here, two leaves that contain zeros with these shapes"}, {"start": 1865.5, "end": 1873.5, "text": " And let's try to create the same type of PyTree that will have ones instead of zeros"}, {"start": 1873.5, "end": 1881.5, "text": " So what we do here is we create this intermediate PyTree, we're going to iterate through the zeros tree, through all of the leaves"}, {"start": 1881.5, "end": 1885.5, "text": " We're gonna take the shape and that's gonna be our new PyTree"}, {"start": 1885.5, "end": 1893.5, "text": " And then we can iterate through the shapes and apply the ones function using tree map to get like the tree of ones"}, {"start": 1893.5, "end": 1895.5, "text": " And let's see whether this works or not"}, {"start": 1895.5, "end": 1899.5, "text": " So if I print it like this, here is what we get"}, {"start": 1899.5, "end": 1904.5, "text": " We get correctly 2, 3, so we have two rows here and three columns, everything is fine"}, {"start": 1904.5, "end": 1907.5, "text": " We have three rows here and four columns, everything is fine here"}, {"start": 1907.5, "end": 1912.5, "text": " It seems that we have a correct result here as well, so we have 2, 3 shape and we have 3, 4"}, {"start": 1912.5, "end": 1915.5, "text": " So that's the intermediate array, so this shapes tree here"}, {"start": 1915.5, "end": 1919.5, "text": " And finally we have some weird result here, so what happened?"}, {"start": 1919.5, "end": 1926.5, "text": " So the thing that happened is, if you can recall, basically this here, this whole thing is a PyTree"}, {"start": 1926.5, "end": 1932.5, "text": " But this tuple is a PyTree as well, it's not a leaf, that means that leaf is 2 and 3 and 3 and 4"}, {"start": 1932.5, "end": 1942.5, "text": " Which means we're going to apply ones on top of all of these leaves and that's why we get two ones here and then three ones here and then three ones here and four ones here"}, {"start": 1942.5, "end": 1948.5, "text": " So simple work around here is to create, so we wanna make sure that this is a leaf and not a PyTree"}, {"start": 1948.5, "end": 1955.5, "text": " We can just basically wrap this x.shape with jmp array and this should now work"}, {"start": 1955.5, "end": 1961.5, "text": " Let me see, and yep, it works, so basically the reason is this is now a leaf, 2, 3 are a leaf"}, {"start": 1961.5, "end": 1965.5, "text": " And we pass that to ones and that's why we get correct results here, awesome"}, {"start": 1965.5, "end": 1972.5, "text": " Now we can design custom neural network layers and we can train end to end our neural networks"}, {"start": 1972.5, "end": 1976.5, "text": " Now what if we wanna train really, really big neural networks?"}, {"start": 1976.5, "end": 1983.5, "text": " Well, in that case we have to parallelize our computation and that's where this next section comes into play"}, {"start": 1983.5, "end": 1990.5, "text": " So I'm gonna introduce this pmap transform function and as 
we'll soon see it's very, very similar to vmap"}, {"start": 1990.5, "end": 1994.5, "text": " But before I start showing you how to use pmap we have to do a couple of things"}, {"start": 1994.5, "end": 2000.5, "text": " So the first thing is to set up, we have to use TPUs here for this section"}, {"start": 2000.5, "end": 2012.5, "text": " So I'm gonna set up hardware accelerator to a TPU, I'm gonna click save and hopefully after clicking reconnect it's gonna allocate a couple of TPUs for me"}, {"start": 2012.5, "end": 2024.5, "text": " Awesome, this worked, so now I just have to rerun the import statements because interrupting the runtime erases all of the whole state of the colab"}, {"start": 2024.5, "end": 2033.5, "text": " So we have to rerun this cell and then let's go back, I can open it up like this, just jump to parallelism injects in the table of contents"}, {"start": 2033.5, "end": 2047.5, "text": " And we are here, okay, so we're first going to just call this setup function and that's gonna set up TPUs in the background and hopefully we'll get 8 cores at disposal"}, {"start": 2047.5, "end": 2054.5, "text": " Okay, running this cell may take a while but finally we got 8 cores here as you can see, so 0 through 7"}, {"start": 2054.5, "end": 2065.5, "text": " Now I'm gonna use this convolve function as a running example and we'll be parallelizing this particular piece of computation across multiple cores"}, {"start": 2065.5, "end": 2071.5, "text": " Although we'll later see training annual network, actually a simple ML model across parallel cores"}, {"start": 2071.5, "end": 2078.5, "text": " So let's start with convolve, so I'm gonna define a simple signal here, so we have a signal X and we have a kernel W"}, {"start": 2078.5, "end": 2090.5, "text": " And the convolve function works as expected, basically we are sliding W across X, we are doing dot products and we are storing the computations in this list"}, {"start": 2090.5, "end": 2096.5, "text": " And then we're just converting the list into like a Jax array, that's it, just a simple 1D convolution"}, {"start": 2096.5, "end": 2108.5, "text": " So if we run here, let's see the results and we get the correct results, so basically imagine, let me print X and let me print W as well here"}, {"start": 2108.5, "end": 2119.5, "text": " So that may make a bit more sense, so what we do is in the first computation we'll have 2 times 0, that's 0, 1 times 3, that's 3"}, {"start": 2119.5, "end": 2127.5, "text": " And 2 times 4, that's 8, 8 plus 3 is 11 and that's why we get 11 as the first output, that's all nice and clear"}, {"start": 2127.5, "end": 2136.5, "text": " So now let's do the same thing, but this time let's parallelize the computation, so first this is just going to grab the number of devices on disposal"}, {"start": 2136.5, "end": 2148.5, "text": " We'll have 8 cores and then we're just gonna create a batch to simulate heavy loads, so some bigger computation compared to just having a single signal X and kernel W"}, {"start": 2148.5, "end": 2162.5, "text": " So let's run this and basically we have 8 devices and that's why we end up having a batch of 8 times 5, where 5 is basically the signal length and we're just replicating"}, {"start": 2162.5, "end": 2175.5, "text": " So here we have unique data for all of these 8 elements in the batch, on the other hand we're just duplicating the kernels, we're not creating novel kernels for each of the examples"}, {"start": 2175.5, "end": 2184.5, "text": " Ok, that's it, let's now first 
parallelize the computation across the memory of the host device, that means we'll be using a single core"}, {"start": 2184.5, "end": 2194.5, "text": " But VMAP is gonna parallelize the computation in the background across multiple threads, I'm not sure how it's exactly implemented"}, {"start": 2194.5, "end": 2209.5, "text": " Basically we wrap the convol function into VMAP, by default it's gonna assume that the in-axis argument is set to 0,0, so this is the equivalent to explicitly writing in-axis equal to this tuple of 0,0"}, {"start": 2209.5, "end": 2225.5, "text": " And after we run this we'll get the result, and as you can see this part is the same as the example above, and then we have increasing numbers because the signal X is being, let me print X here"}, {"start": 2225.5, "end": 2234.5, "text": " Let me just print this batch, you can see we're going from 0 through 39 and that's why you can see increasing numbers here as well"}, {"start": 2234.5, "end": 2245.5, "text": " The cool thing about VMAP now is that we just need to swap VMAP for PMAP, nothing else changes and all of a sudden we're sharding our batch of data"}, {"start": 2245.5, "end": 2252.5, "text": " So the signal X across multiple cores and we're running the computation in parallel independently on each one of these cores"}, {"start": 2252.5, "end": 2260.5, "text": " So let me run this and we get the same results as you can see here above for VMAP, there is one difference you can notice here"}, {"start": 2260.5, "end": 2273.5, "text": " This thing is called device array, whereas here we have sharded device array, so this sharded alludes to the fact that the actual data and computation is distributed across multiple cores"}, {"start": 2273.5, "end": 2279.5, "text": " So that's super cool, so syntax same as VMAP and that's nice and sweet"}, {"start": 2279.5, "end": 2294.5, "text": " Let's continue, and I want to stress that all of this happens literally independently, so basically we have 8 signals, so recall that X was 8 times 5"}, {"start": 2294.5, "end": 2303.5, "text": " What will happen is that array of 5 elements is going to be on each of the cores alongside the kernels, so we basically have a single computation"}, {"start": 2303.5, "end": 2311.5, "text": " So we have a single kernel and a single example on each of the cores and we are doing computations without communicating between the cores"}, {"start": 2311.5, "end": 2317.5, "text": " So that's super nice if the nature of our computation is such that we do not need a communication between the cores"}, {"start": 2317.5, "end": 2326.5, "text": " And here I'm just illustrating that we can do one pass of this computation, so we do a convolution one times"}, {"start": 2326.5, "end": 2335.5, "text": " And then we use the outputs as the new kernels and we again convolve them with the original signal X"}, {"start": 2335.5, "end": 2343.5, "text": " And basically the thing is during both of these two calls we did not have a single communication between the cores themselves"}, {"start": 2343.5, "end": 2349.5, "text": " So let me run this, and we get some even bigger numbers and that's not that important anymore"}, {"start": 2349.5, "end": 2360.5, "text": " Finally, just notice that instead of repeating, replicating W's, which is kind of wasteful, we can just use the in-axis argument here"}, {"start": 2360.5, "end": 2371.5, "text": " And as you recall, VMAP uses the same syntax, basically in-axis tells PMAP that the first argument, so that means W, basically should be 
broadcasted"}, {"start": 2371.5, "end": 2382.5, "text": " Because we set none here, and here zero just means that the first zero dimension of XS corresponds to the batch dimension"}, {"start": 2382.5, "end": 2390.5, "text": " So this way, PMAP will in the background replicate W and do the job for us, so that's way better"}, {"start": 2390.5, "end": 2400.5, "text": " And let's run this and we should see the same results as before, so 11, 20, 29, 326, that was the same number as here"}, {"start": 2400.5, "end": 2404.5, "text": " And elsewhere, so that's the same output again"}, {"start": 2404.5, "end": 2411.5, "text": " So again, this is super nice if your computation is such that you do not need communication between the cores"}, {"start": 2411.5, "end": 2418.5, "text": " This is usually not the case, and an example we'll soon see is the one of training ML model in a distributed fashion"}, {"start": 2418.5, "end": 2425.5, "text": " Where as we take our batch of data and we basically split the batch of data into mini-batches, we send each mini-batch to each core"}, {"start": 2425.5, "end": 2434.5, "text": " And then we do computations there, we calculate the gradients and then we communicate between the cores in order to do a mean of the gradients to update our ML model"}, {"start": 2434.5, "end": 2443.5, "text": " We're gonna see that example in a second, but just be aware that certain computations do require communication, certain do not, and Jack's got us covered for both of those use cases"}, {"start": 2443.5, "end": 2450.5, "text": " So now let's see a simple example, like just a modification on the above function that we had"}, {"start": 2450.5, "end": 2457.5, "text": " So this is going to be normalized convolution, and the only difference is that we'll have a communication between cores"}, {"start": 2457.5, "end": 2458.5, "text": " So let's see how this works"}, {"start": 2458.5, "end": 2469.5, "text": " This part of the function remains totally the same as before, so we have output here, we are doing like sliding across the signal, and we accumulate the results into output"}, {"start": 2469.5, "end": 2479.5, "text": " Now the only difference is that we are calling this, so remember, we're calling this Jack's LexPSum function, and just recall that Lex is this middle layer API of Jack's"}, {"start": 2479.5, "end": 2494.5, "text": " So what we're doing here is we're going to, along the batch dimension, we're going to sum up all of the outputs, and we're gonna divide output on this particular core with the sum of outputs across all of the cores"}, {"start": 2494.5, "end": 2496.5, "text": " And that's gonna normalize our output"}, {"start": 2496.5, "end": 2506.5, "text": " So a thing to notice is this axis name, it can be a bit confusing, so what you do is you again wrap your function of interest into PMAP"}, {"start": 2506.5, "end": 2515.5, "text": " You specify in axis, which means, hey, I want to broadcast W, I want to use the zero dimension of XS as the batch dimension"}, {"start": 2515.5, "end": 2522.5, "text": " And we just give the name to those mapped axis by putting an arbitrary name here, so this can be whatever"}, {"start": 2522.5, "end": 2528.5, "text": " Like we just need to have consistency between whatever we use here, we have to use here inside of the definition as well"}, {"start": 2528.5, "end": 2534.5, "text": " So I chose batch dimension because it's, I guess, indicative of the thing we're doing, right?"}, {"start": 2534.5, "end": 2538.5, "text": " Let's first run this 
and then we can analyze it additionally"}, {"start": 2538.5, "end": 2543.5, "text": " The first thing you can notice is that this output is the same as VMAP"}, {"start": 2543.5, "end": 2549.5, "text": " So I'm just going to, I just did that for you to understand that VMAP and PMAP are very, conceptually very similar"}, {"start": 2549.5, "end": 2557.5, "text": " The only difference is that PMAP is actually using physical cores, whereas VMAP is doing the same thing just on the single device"}, {"start": 2557.5, "end": 2565.5, "text": " So let me just comment out VMAP so that we can have a bit less output in the console here"}, {"start": 2565.5, "end": 2573.5, "text": " So basically what the normalization has done is that all of these columns are now normalized"}, {"start": 2573.5, "end": 2580.5, "text": " Meaning if we sum them up, if we take a single column like this one, so this one, like this element and this element and this element"}, {"start": 2580.5, "end": 2583.5, "text": " And we sum them up, we're going to get one, so that's what I've done here"}, {"start": 2583.5, "end": 2589.5, "text": " I took the result, I took all of the rows and zeroth column and we can see here that result sums up to one"}, {"start": 2589.5, "end": 2594.5, "text": " If I put one here, we're going to get the same result, so it's again one"}, {"start": 2594.5, "end": 2598.5, "text": " And if I put two here, we're going to get one as the output and that's it"}, {"start": 2598.5, "end": 2601.5, "text": " That's what this computation did"}, {"start": 2601.5, "end": 2604.5, "text": " Let me try and see whether we can print the output here"}, {"start": 2604.5, "end": 2608.5, "text": " I'm not sure whether this is going to work, let me just try and do this"}, {"start": 2608.5, "end": 2613.5, "text": " Nope, it's going to trace it because PMAP, that's an additional thing to keep in mind"}, {"start": 2613.5, "end": 2618.5, "text": " PMAP is automatically calling JIT in the background, so yeah"}, {"start": 2618.5, "end": 2623.5, "text": " To make this a bit more clear, I'm going to take the last example that didn't have any communication"}, {"start": 2623.5, "end": 2628.5, "text": " So let me take something like this"}, {"start": 2628.5, "end": 2631.5, "text": " I'm just going to copy paste it here"}, {"start": 2631.5, "end": 2634.5, "text": " So we're just going to copy paste that thing here"}, {"start": 2634.5, "end": 2639.5, "text": " Okay, now we have a result here that is not... I'm going to print it here"}, {"start": 2639.5, "end": 2645.5, "text": " So PMAP result and this result is... 
there is no normalization here"}, {"start": 2645.5, "end": 2649.5, "text": " So let me just print it out here"}, {"start": 2649.5, "end": 2650.5, "text": " So here are the results"}, {"start": 2650.5, "end": 2657.5, "text": " So what we have done is this PSUM basically alongside batch dimension, which is this dimension here"}, {"start": 2657.5, "end": 2661.5, "text": " So the vertical dimension is going to sum up all of these values"}, {"start": 2661.5, "end": 2665.5, "text": " And use them to divide the current output"}, {"start": 2665.5, "end": 2669.5, "text": " So for each core, for example, for core 0, this will be the output"}, {"start": 2669.5, "end": 2675.5, "text": " And we're going to divide this array here with the sum across this dimension, the vertical dimension"}, {"start": 2675.5, "end": 2678.5, "text": " So let me just kind of show you that that is the case"}, {"start": 2678.5, "end": 2687.5, "text": " So if I was to take the element number like this element here, 11"}, {"start": 2687.5, "end": 2692.5, "text": " So that's going to be temp element, something like this, I don't know"}, {"start": 2692.5, "end": 2695.5, "text": " PMAP result"}, {"start": 2695.5, "end": 2698.5, "text": " Basically I'm going to take 0th"}, {"start": 2698.5, "end": 2703.5, "text": " So that's this first row, I'm going to take the 0th element, so that's 11"}, {"start": 2703.5, "end": 2706.5, "text": " Let me confirm that's the case, so I'm going to print it"}, {"start": 2706.5, "end": 2710.5, "text": " And then I'm going to create a temporary sum here"}, {"start": 2710.5, "end": 2719.5, "text": " So basically just a simple sum of PMAP result across basically across the vertical dimension"}, {"start": 2719.5, "end": 2722.5, "text": " So that's the batch dimension, 0th column"}, {"start": 2722.5, "end": 2725.5, "text": " And I'm going to print the sum as well"}, {"start": 2725.5, "end": 2727.5, "text": " And then I'm going to just do division"}, {"start": 2727.5, "end": 2732.5, "text": " So I'm going to print temp element divide by temp sum"}, {"start": 2732.5, "end": 2737.5, "text": " And we should get the same result as here, so 0.0081 whatever, okay?"}, {"start": 2737.5, "end": 2742.5, "text": " So in order to make this a bit less crowded, let me see where I can do something"}, {"start": 2742.5, "end": 2744.5, "text": " No, let me just run this"}, {"start": 2744.5, "end": 2745.5, "text": " This should work as expected now"}, {"start": 2745.5, "end": 2749.5, "text": " So I took 11, I created a sum here"}, {"start": 2749.5, "end": 2751.5, "text": " And after dividing you can see we got the same result"}, {"start": 2751.5, "end": 2755.5, "text": " So hopefully this now is perfectly clear what happened"}, {"start": 2755.5, "end": 2761.5, "text": " And the whole magic was because of this middle level API's PSUM function"}, {"start": 2761.5, "end": 2765.5, "text": " And just giving a name to these mapped axes here"}, {"start": 2765.5, "end": 2768.5, "text": " Let's continue, let me just erase this output"}, {"start": 2768.5, "end": 2771.5, "text": " Okay, now before we get to the final training pipeline"}, {"start": 2771.5, "end": 2774.5, "text": " There's a couple more functions you should be familiar with"}, {"start": 2774.5, "end": 2776.5, "text": " And then you can train whatever in PureJax"}, {"start": 2776.5, "end": 2779.5, "text": " Pretty much whatever ML model you wish to train"}, {"start": 2779.5, "end": 2782.5, "text": " The first function is this value and grad function"}, {"start": 2782.5, 
"end": 2789.5, "text": " And it's going to return the value of the loss as well as the gradients of the function of interest"}, {"start": 2789.5, "end": 2791.5, "text": " So I have a simple loss function here"}, {"start": 2791.5, "end": 2794.5, "text": " It's basically just, we just create these error terms"}, {"start": 2794.5, "end": 2796.5, "text": " We square them and we sum them up"}, {"start": 2796.5, "end": 2798.5, "text": " We have a simple input here"}, {"start": 2798.5, "end": 2800.5, "text": " So let me just print it here"}, {"start": 2800.5, "end": 2803.5, "text": " So basically it's going to be 0 through 3"}, {"start": 2803.5, "end": 2809.5, "text": " And finally the value will hopefully return the loss value as well as the gradients"}, {"start": 2809.5, "end": 2812.5, "text": " It's fairly trivial to show that the numbers make sense"}, {"start": 2812.5, "end": 2815.5, "text": " Let me just print y here as well"}, {"start": 2815.5, "end": 2818.5, "text": " So it's going to be the same as x just plus 0.1"}, {"start": 2818.5, "end": 2826.5, "text": " So basically the value of the loss does make sense because we basically subtract"}, {"start": 2826.5, "end": 2830.5, "text": " The difference will be 0.1 for each element here"}, {"start": 2830.5, "end": 2833.5, "text": " Then we square it which is going to give us 0.01"}, {"start": 2833.5, "end": 2836.5, "text": " And we have 4 times that because we have a sum"}, {"start": 2836.5, "end": 2839.5, "text": " So that's why we have 0.04 here"}, {"start": 2839.5, "end": 2843.5, "text": " Same goes for gradients, all of the gradients are minus 0.2"}, {"start": 2843.5, "end": 2850.5, "text": " Because the derivative with respect to x will be 2 times x minus y"}, {"start": 2850.5, "end": 2852.5, "text": " The difference is 0.1"}, {"start": 2852.5, "end": 2855.5, "text": " I mean the minus 0.1 times 2"}, {"start": 2855.5, "end": 2857.5, "text": " That's why we get these numbers here"}, {"start": 2857.5, "end": 2862.5, "text": " So, and in any case if you want to log both the value of loss and you want to get your gradients"}, {"start": 2862.5, "end": 2868.5, "text": " You have this efficient way to do that is to just call this inbuilt function called value and grad"}, {"start": 2868.5, "end": 2870.5, "text": " The second function you should be familiar with is"}, {"start": 2870.5, "end": 2873.5, "text": " Not a function actually, just a functionality"}, {"start": 2873.5, "end": 2876.5, "text": " Is if you want to return sometimes from your loss function"}, {"start": 2876.5, "end": 2878.5, "text": " You want to return not just the loss itself"}, {"start": 2878.5, "end": 2884.5, "text": " But maybe also some intermediate values like maybe you want to return these raw error terms"}, {"start": 2884.5, "end": 2886.5, "text": " You can just instead of doing this"}, {"start": 2886.5, "end": 2890.5, "text": " So if I were to call gradient function on this loss function here"}, {"start": 2890.5, "end": 2893.5, "text": " It has two outputs which means a vector output"}, {"start": 2893.5, "end": 2898.5, "text": " We're going to get an error here because gradient only works on scalar output functions"}, {"start": 2898.5, "end": 2902.5, "text": " So you can see here gradient only defined for scalar output functions"}, {"start": 2902.5, "end": 2908.5, "text": " So there is this neat auxiliary flag"}, {"start": 2908.5, "end": 2911.5, "text": " If we set it to true and we run this"}, {"start": 2911.5, "end": 2915.5, "text": " It's going to return the 
gradient as well as the"}, {"start": 2915.5, "end": 2920.5, "text": " It's just going to pass this intermediate value as the output directly"}, {"start": 2920.5, "end": 2924.5, "text": " Without doing any type of manipulation on top of it"}, {"start": 2924.5, "end": 2929.5, "text": " Awesome, we are now ready to train a machine learning model completely in parallel"}, {"start": 2929.5, "end": 2932.5, "text": " Okay, let me see how I can go about explaining this part"}, {"start": 2932.5, "end": 2934.5, "text": " Because there is a lot of functionality here"}, {"start": 2934.5, "end": 2936.5, "text": " But I think it's going to be digestible"}, {"start": 2936.5, "end": 2939.5, "text": " Let me first start by analyzing our data"}, {"start": 2939.5, "end": 2943.5, "text": " So we have a simple model like the ground truth model is this one"}, {"start": 2943.5, "end": 2947.5, "text": " Basically a single input linear function here"}, {"start": 2947.5, "end": 2950.5, "text": " With some addition of noise, it's a noisy, noisy"}, {"start": 2950.5, "end": 2952.5, "text": " We'll have some noise observations here"}, {"start": 2952.5, "end": 2957.5, "text": " So the true value of W, so the coefficient that goes with X"}, {"start": 2957.5, "end": 2960.5, "text": " And of this bias term are 2 and minus 1"}, {"start": 2960.5, "end": 2963.5, "text": " We generate the domain of our data"}, {"start": 2963.5, "end": 2966.5, "text": " Basically just sampling from the normal distribution"}, {"start": 2966.5, "end": 2968.5, "text": " We generate 128 elements"}, {"start": 2968.5, "end": 2974.5, "text": " We generate some noise, again, sampling from the normal distribution"}, {"start": 2974.5, "end": 2979.5, "text": " And finally we generate our noisy observations by applying the above function here"}, {"start": 2979.5, "end": 2984.5, "text": " So XS times the true value of the weight parameter"}, {"start": 2984.5, "end": 2987.5, "text": " Plus the true value of the bias parameter plus the noise"}, {"start": 2987.5, "end": 2990.5, "text": " And let me visualize that for you"}, {"start": 2990.5, "end": 2993.5, "text": " So we understand what we are trying to regress"}, {"start": 2993.5, "end": 2996.5, "text": " So we are going to try to fit a line to this data"}, {"start": 2996.5, "end": 3000.5, "text": " And we are going to do that the following way"}, {"start": 3000.5, "end": 3003.5, "text": " So first we'll need to initialize our model"}, {"start": 3003.5, "end": 3006.5, "text": " So let's jump to the init model function here"}, {"start": 3006.5, "end": 3010.5, "text": " It's going to initially create a random value for the weight and for the bias"}, {"start": 3010.5, "end": 3014.5, "text": " So this is our approximation of the true weight and the true bias values"}, {"start": 3014.5, "end": 3018.5, "text": " Initially the best we can do is just to create a random value"}, {"start": 3018.5, "end": 3021.5, "text": " And return them packed into this params class"}, {"start": 3021.5, "end": 3024.5, "text": " Which is just inherited from this name tuple"}, {"start": 3024.5, "end": 3029.5, "text": " So it has weight key and bias key and the values are our Jax arrays"}, {"start": 3029.5, "end": 3035.5, "text": " As you can see here I'm using Jax's built-in PRNG functionality"}, {"start": 3035.5, "end": 3038.5, "text": " I'm using the random number generator"}, {"start": 3038.5, "end": 3041.5, "text": " I'm splitting the keys into weight and bias"}, {"start": 3041.5, "end": 3044.5, "text": " And I'm consuming those 
here to generate two random numbers"}, {"start": 3044.5, "end": 3047.5, "text": " So that's what this init model function does"}, {"start": 3047.5, "end": 3050.5, "text": " Once we have that I'm going to fetch the number of devices"}, {"start": 3050.5, "end": 3055.5, "text": " That's going to be 8 if you recall from the previous exercises we had 8 cores"}, {"start": 3055.5, "end": 3060.5, "text": " And we are going to replicate the parameters because we want to shard this"}, {"start": 3060.5, "end": 3067.5, "text": " We want to replicate this particular model across all the cores and do the computation in parallel"}, {"start": 3067.5, "end": 3072.5, "text": " Since params, so the thing we just got from our init model function is a pie tree"}, {"start": 3072.5, "end": 3076.5, "text": " We can iterate over the leaves, so over the weight and over the bias"}, {"start": 3076.5, "end": 3081.5, "text": " And we can just multiply them by 8 which is going to replicate the values"}, {"start": 3081.5, "end": 3086.5, "text": " So we'll have, yeah, I'll just going to print it here so you're going to see it in a second"}, {"start": 3086.5, "end": 3091.5, "text": " Next thing we're going to do is just reshape the data for the PMAP function"}, {"start": 3091.5, "end": 3093.5, "text": " Because we're going to trade this in parallel again"}, {"start": 3093.5, "end": 3097.5, "text": " So we have the XS and YS, so that's the data we just saw here"}, {"start": 3097.5, "end": 3104.5, "text": " And we're going to basically just reshape them so that the leading dimension has the number 8"}, {"start": 3104.5, "end": 3107.5, "text": " So let me run this and see what we have"}, {"start": 3107.5, "end": 3111.5, "text": " So, okay, I forgot to run this cell"}, {"start": 3111.5, "end": 3115.5, "text": " I'm going to run this cell and then I'm going to rerun this cell again"}, {"start": 3115.5, "end": 3118.5, "text": " And let's see what we got here"}, {"start": 3118.5, "end": 3120.5, "text": " So you can see a couple of things here"}, {"start": 3120.5, "end": 3125.5, "text": " First thing, weight and bias have the same number replicated 8 times as expected"}, {"start": 3125.5, "end": 3135.5, "text": " And finally the data is now reshaped into this 8 times 16 times 1 shape which is suitable for PMAP"}, {"start": 3135.5, "end": 3142.5, "text": " The Bayesian training pipeline will look very similar to the MLP we trained like 20 minutes ago"}, {"start": 3142.5, "end": 3147.5, "text": " Basically we're going to have a number of epochs, we're going to iterate for a certain number of epochs"}, {"start": 3147.5, "end": 3151.5, "text": " We're going to have an update function and again the familiar pattern"}, {"start": 3151.5, "end": 3154.5, "text": " We pass in the replicated parameters, we pass in the data"}, {"start": 3154.5, "end": 3159.5, "text": " And we get back the parameters, updated ones, as well as the loss value"}, {"start": 3159.5, "end": 3163.5, "text": " After that I'm going to just log some values here and we're going to see that in a second"}, {"start": 3163.5, "end": 3167.5, "text": " And finally we're going to take our final replicated parameters"}, {"start": 3167.5, "end": 3170.5, "text": " And we're just going to fetch parameters from a single core"}, {"start": 3170.5, "end": 3173.5, "text": " Because we'll basically have 8 duplicates across the cores"}, {"start": 3173.5, "end": 3177.5, "text": " So we just need to extract a single one and we get parameters here"}, {"start": 3177.5, "end": 3187.5, "text": 
" So TREEMAP will create a PyTree object and then DeviceCat just takes the data from the TPU and downloads it to the host memory"}, {"start": 3187.5, "end": 3191.5, "text": " That's it, now that's the high level picture of what is going to happen here"}, {"start": 3191.5, "end": 3197.5, "text": " Now let me start digesting these couple of functions here"}, {"start": 3197.5, "end": 3200.5, "text": " So what we need to understand is that the model is very simple"}, {"start": 3200.5, "end": 3206.5, "text": " We just have weight times XS plus the bias because that corresponds perfectly"}, {"start": 3206.5, "end": 3212.5, "text": " Because we know what generated our data so we can create the same model here"}, {"start": 3212.5, "end": 3217.5, "text": " Usually that's not the case, you usually don't know how your model structure looks like"}, {"start": 3217.5, "end": 3224.5, "text": " That's why you have a huge neural network and you let the data morph the structure of that neural network"}, {"start": 3224.5, "end": 3229.5, "text": " Loss function is very simple, we just basically do mean square error again"}, {"start": 3229.5, "end": 3234.5, "text": " So predictions, we subtract the ground truth, etc"}, {"start": 3234.5, "end": 3239.5, "text": " The update function is by far the hardest to understand so we're going to focus a lot on this function"}, {"start": 3239.5, "end": 3241.5, "text": " So let's see what update function does"}, {"start": 3241.5, "end": 3247.5, "text": " So we get our parameters, so that's the current version of the parameters, we get the data"}, {"start": 3247.5, "end": 3255.5, "text": " And what we do is first we decorate the update with this PMAP and we name the mapped axis to batch"}, {"start": 3255.5, "end": 3262.5, "text": " So this is just a convenient way to, if we do it like this then we don't have to wrap it inside of the PMAP here"}, {"start": 3262.5, "end": 3266.5, "text": " So let me show you, so here as you can see we don't have PMAP explicitly here"}, {"start": 3266.5, "end": 3272.5, "text": " We just decorated the update directly with PMAP and that's why we can avoid putting PMAP here"}, {"start": 3272.5, "end": 3278.5, "text": " So what will happen is that each TPU core is going to get a portion of the parameters"}, {"start": 3278.5, "end": 3283.5, "text": " So because those are replicated we're just going to have the same parameters spread out across every single core"}, {"start": 3283.5, "end": 3289.5, "text": " We're going to get a single mini batch out of the whole batch of data on each of the cores"}, {"start": 3289.5, "end": 3299.5, "text": " And then we're going to pass those into the loss function and find the value of the loss as well as the gradients of the loss function"}, {"start": 3299.5, "end": 3306.5, "text": " So that means because again grad does the derivative with respect to the first parameter"}, {"start": 3306.5, "end": 3311.5, "text": " Which means we're going to have derivatives with respect to W, the weight and the bias"}, {"start": 3311.5, "end": 3316.5, "text": " Next thing we're going to do is we're going to combine the gradients across all of the TPU cores"}, {"start": 3316.5, "end": 3322.5, "text": " And this is arguably one of the most important, the most important operation in this whole training pipeline"}, {"start": 3322.5, "end": 3329.5, "text": " Basically you take the gradients which are now different across each of the cores"}, {"start": 3329.5, "end": 3334.5, "text": " And they're different because each core got a 
different portion of the data set"}, {"start": 3334.5, "end": 3337.5, "text": " Whoops, I'm not a robot, I assure you"}, {"start": 3337.5, "end": 3342.5, "text": " So basically you're going to do a mean averaging across all of the gradients"}, {"start": 3342.5, "end": 3348.5, "text": " And finally each single core will have the same gradients because we do this p mean operation"}, {"start": 3348.5, "end": 3356.5, "text": " As a consequence because we also have the same parameters, that means that each core will have the parameters in sync"}, {"start": 3356.5, "end": 3363.5, "text": " So all of the parameters are initially the same, the gradients are the same which means that the updated parameters across each of the cores are also the same"}, {"start": 3363.5, "end": 3364.5, "text": " So that's cool"}, {"start": 3364.5, "end": 3368.5, "text": " We did the same thing for loss, we just averaged loss across the 8 cores"}, {"start": 3368.5, "end": 3374.5, "text": " And finally we applied the stochastic gradient descent like update rule"}, {"start": 3374.5, "end": 3383.5, "text": " Again we have pi tree here, pi tree here, we just do the usual from the current value of the parameter"}, {"start": 3383.5, "end": 3389.5, "text": " We subtract the learning rate times the gradient and that's how we get our new parameters"}, {"start": 3389.5, "end": 3393.5, "text": " Finally we're going to return those parameters as well as the loss value and that's it"}, {"start": 3393.5, "end": 3400.5, "text": " I made a short note here which may be important if you want to train with some more complex optimizers and not just with SGD"}, {"start": 3400.5, "end": 3407.5, "text": " So if we had an optimizer this is how the pattern would look like, we pass in the grads and we pass in the optimizer state"}, {"start": 3407.5, "end": 3415.5, "text": " And the optimizer because of remember our pattern is going to return the new optimizer state and the new basically updated version of gradients"}, {"start": 3415.5, "end": 3419.5, "text": " And then we can just apply the updates instead of grads here and that's it"}, {"start": 3419.5, "end": 3427.5, "text": " So it's very simple just another additional line following this stateless pattern that we saw in the beginning of this video"}, {"start": 3427.5, "end": 3431.5, "text": " Nice, let me now go and run the training"}, {"start": 3431.5, "end": 3435.5, "text": " So this is our training, let me run it and let's see what we get"}, {"start": 3435.5, "end": 3444.5, "text": " Okay it's working, we're printing some values, we're going to debug and understand, analyze what's going on in the console here"}, {"start": 3444.5, "end": 3447.5, "text": " But before that let's just plot the results"}, {"start": 3447.5, "end": 3459.5, "text": " Okay so let me run this and we see that we have a line that's fit to this particular data set we artificially built"}, {"start": 3459.5, "end": 3462.5, "text": " Now let's analyze the output in the console, what happened here"}, {"start": 3462.5, "end": 3470.5, "text": " First thing I want you to notice here is that replicated parameters because as I said those are going to stay on the cores"}, {"start": 3470.5, "end": 3475.5, "text": " They should be sharded, sharded device type so let's see whether that's the case"}, {"start": 3475.5, "end": 3482.5, "text": " So we can see here after printing after the 0th epoch we're going to have this is a class of blah blah blah sharded device array"}, {"start": 3482.5, "end": 3489.5, "text": " Which means 
that indeed our replicated parameters are on the cores themselves"}, {"start": 3489.5, "end": 3494.5, "text": " Loss is as well a sharded device array"}, {"start": 3494.5, "end": 3498.5, "text": " But the data is as you can see here I'm just printing the type of the data"}, {"start": 3498.5, "end": 3502.5, "text": " We can see it's a NumPy array which means it's locally stored on the CPU"}, {"start": 3502.5, "end": 3510.5, "text": " Because we do not want to keep the data on the core that wouldn't make any sense especially for bigger data sets"}, {"start": 3510.5, "end": 3513.5, "text": " Finally what I'm printing here is the loss shape"}, {"start": 3513.5, "end": 3518.5, "text": " You can see it's an 8 dimensional vector because again it's sharded across different cores"}, {"start": 3518.5, "end": 3522.5, "text": " And we can see that the loss goes down as we are advancing here"}, {"start": 3522.5, "end": 3528.5, "text": " It gets into saturation after already a couple hundred steps here but yeah"}, {"start": 3528.5, "end": 3533.5, "text": " And that's pretty much it, we trained our first machine learning model in parallel on 8 TPU cores"}, {"start": 3533.5, "end": 3536.5, "text": " And as you saw it was very simple to do that"}, {"start": 3536.5, "end": 3540.5, "text": " Now you're almost ready to be able to train whatever type of machine learning model"}, {"start": 3540.5, "end": 3543.5, "text": " There's a couple more questions you have to get an answer to"}, {"start": 3543.5, "end": 3550.5, "text": " So how would you go about doing transfer learning in JAX? Relatedly, how do we freeze certain layers and fine-tune other layers?"}, {"start": 3550.5, "end": 3554.5, "text": " So we need to figure out how to do that, how to do stop gradients"}, {"start": 3554.5, "end": 3559.5, "text": " And finally how do we get per-sample gradients instead of the usual per-batch gradients"}, {"start": 3559.5, "end": 3563.5, "text": " I'm going to tell you all about that in this last section of this tutorial"}, {"start": 3563.5, "end": 3568.5, "text": " In order to demonstrate how stop gradients work in JAX I'm going to use a couple of examples"}, {"start": 3568.5, "end": 3575.5, "text": " So first things first, JAX uses the jax.lax.stop_gradient primitive for this purpose, for stopping gradients"}, {"start": 3575.5, "end": 3580.5, "text": " Let me quickly go through this TD(0) update rule here"}, {"start": 3580.5, "end": 3586.5, "text": " If you're not familiar with RL this may be super confusing but you basically do not need to know what this is"}, {"start": 3586.5, "end": 3590.5, "text": " You just need to know some of the details which I'm going to explain to you right now"}, {"start": 3590.5, "end": 3593.5, "text": " So the whole point is we are trying to learn certain value functions"}, {"start": 3593.5, "end": 3597.5, "text": " So that value function in our example is going to be a simple linear value function"}, {"start": 3597.5, "end": 3606.5, "text": " So it's parameterized by theta and we are basically doing a dot product with the state vector in order to get the value function"}, {"start": 3606.5, "end": 3612.5, "text": " So in order to find the gradients of theta we have to do this expression"}, {"start": 3612.5, "end": 3621.5, "text": " So basically we have a reward here at the present timestamp plus the value at the current state minus the value at the previous state"}, {"start": 3621.5, "end": 3626.5, "text": " And we multiply that by the gradient of the value function in the previous 
state"}, {"start": 3626.5, "end": 3632.5, "text": " So that's a mouthful, the whole point is the update is not a gradient of any loss function"}, {"start": 3632.5, "end": 3637.5, "text": " However if we reformulate this above expression into this thing"}, {"start": 3637.5, "end": 3644.5, "text": " And if we have a way to stop the gradients of this target here of this value at the present state"}, {"start": 3644.5, "end": 3649.5, "text": " We'll be able by doing a gradient of this pseudo loss function to get exactly this expression"}, {"start": 3649.5, "end": 3656.5, "text": " Again as you can see here if we ignore the dependency of the target on the parameter theta for our value function"}, {"start": 3656.5, "end": 3658.5, "text": " We're going to get exactly this"}, {"start": 3658.5, "end": 3663.5, "text": " So let's see why that is, if we do a gradient with respect to theta, 2 goes in front here"}, {"start": 3663.5, "end": 3668.5, "text": " We'll have this whole expression and then if we do a derivative with respect to theta"}, {"start": 3668.5, "end": 3676.5, "text": " And this does not depend on theta, that means we're just going to end up with the derivative of this previous value function at the previous state"}, {"start": 3676.5, "end": 3678.5, "text": " So that's exactly this expression here"}, {"start": 3678.5, "end": 3681.5, "text": " So just some basic rules of differentiation at the end, that's all"}, {"start": 3681.5, "end": 3684.5, "text": " So let's finally see how we can implement that"}, {"start": 3684.5, "end": 3687.5, "text": " We have the value function as I already explained, simple dot product"}, {"start": 3687.5, "end": 3690.5, "text": " We have some initial values for the theta"}, {"start": 3690.5, "end": 3700.5, "text": " We have some like, so this t1 just means timestamp minus 1, i.e. the previous timestamp"}, {"start": 3700.5, "end": 3703.5, "text": " We have the reward, we have the current state here"}, {"start": 3703.5, "end": 3705.5, "text": " And finally we have the td loss"}, {"start": 3705.5, "end": 3708.5, "text": " We calculate the value function at the previous state"}, {"start": 3708.5, "end": 3711.5, "text": " We calculate the target and this is the whole point here"}, {"start": 3711.5, "end": 3718.5, "text": " We just apply the stop gradient to target and this is going to be our loss"}, {"start": 3718.5, "end": 3722.5, "text": " And now when we do a gradient of that loss with respect to theta"}, {"start": 3722.5, "end": 3724.5, "text": " Because theta is the first argument here"}, {"start": 3724.5, "end": 3729.5, "text": " And remember again, gradient by default will do a derivative with respect to the first parameter"}, {"start": 3729.5, "end": 3736.5, "text": " We get the correct value of delta theta here, i.e. 
the gradients of our model"}, {"start": 3736.5, "end": 3739.5, "text": " I'm going to run this, although the number will not mean a lot"}, {"start": 3739.5, "end": 3742.5, "text": " And I don't think it's that important to go and analyze this further"}, {"start": 3742.5, "end": 3743.5, "text": " You get the point here"}, {"start": 3743.5, "end": 3746.5, "text": " The second example is this straight through estimator"}, {"start": 3746.5, "end": 3751.5, "text": " And I've seen this thing used, for example, in the VQVAE paper"}, {"start": 3751.5, "end": 3755.5, "text": " And I also covered that paper in one of my previous videos, you can check it out"}, {"start": 3755.5, "end": 3758.5, "text": " I've basically pasted the link here"}, {"start": 3758.5, "end": 3764.5, "text": " But the whole point is, if you have a certain non-differentiable function in the middle of your neural network, for example"}, {"start": 3764.5, "end": 3767.5, "text": " For example, this is a simple non-differentiable function"}, {"start": 3767.5, "end": 3772.5, "text": " You take the x, which is a float, and if you apply a rounding, that's simply not differentiable"}, {"start": 3772.5, "end": 3777.5, "text": " So now we can reformulate this non-differentiable function into this straight through f"}, {"start": 3777.5, "end": 3780.5, "text": " So basically, what we have done here is, if you take a look at this"}, {"start": 3780.5, "end": 3783.5, "text": " So x plus f of x minus x"}, {"start": 3783.5, "end": 3787.5, "text": " Basically that means in the forward pass we're going to have f of x"}, {"start": 3787.5, "end": 3790.5, "text": " Which means these two are equivalent in the forward pass"}, {"start": 3790.5, "end": 3796.5, "text": " On the other hand, if we do a derivative, if we find a gradient of this function and a gradient of this function"}, {"start": 3796.5, "end": 3798.5, "text": " They are going to be different"}, {"start": 3798.5, "end": 3800.5, "text": " So let me just take some simple example here"}, {"start": 3800.5, "end": 3802.5, "text": " So x is 5.6, just an arbitrary number"}, {"start": 3802.5, "end": 3805.5, "text": " We're going to print the value in the forward pass"}, {"start": 3805.5, "end": 3807.5, "text": " So we're going to see that those are going to be the same"}, {"start": 3807.5, "end": 3811.5, "text": " So 6 and 6, because when we round 5.6 we get 6"}, {"start": 3811.5, "end": 3813.5, "text": " So that's clear"}, {"start": 3813.5, "end": 3816.5, "text": " Whereas, because this f is non-differentiable"}, {"start": 3816.5, "end": 3820.5, "text": " If we try to do a gradient at this point x, or whatever x is"}, {"start": 3820.5, "end": 3822.5, "text": " We get 0 all the time"}, {"start": 3822.5, "end": 3826.5, "text": " On the other hand, this straight through gradient is going to give us 1 all the time"}, {"start": 3826.5, "end": 3829.5, "text": " You can see why that is, because we have a stop gradient here"}, {"start": 3829.5, "end": 3833.5, "text": " So a derivative of this thing with respect to x is just going to be 1"}, {"start": 3833.5, "end": 3839.5, "text": " Because, yeah, I mean, if a function is equal to x, then basically a gradient is 1"}, {"start": 3839.5, "end": 3841.5, "text": " So why may this be useful?"}, {"start": 3841.5, "end": 3844.5, "text": " If you have an automatic differentiation in the reverse mode"}, {"start": 3844.5, "end": 3846.5, "text": " You're basically, if you have 1 here"}, {"start": 3846.5, "end": 3851.5, "text": " If a certain part of your whole 
function has a gradient 1"}, {"start": 3851.5, "end": 3855.5, "text": " By multiplying the gradients that came from the deeper levels by 1"}, {"start": 3855.5, "end": 3858.5, "text": " You're just going to pass them through, that's why it's called straight through"}, {"start": 3858.5, "end": 3864.5, "text": " You're just going to copy them to the second part of the network or whatever ML model you're using"}, {"start": 3864.5, "end": 3866.5, "text": " And that's why this thing is kind of cool"}, {"start": 3866.5, "end": 3871.5, "text": " But the whole point is, again, just be aware that we are using this stop gradient to achieve this functionality"}, {"start": 3871.5, "end": 3873.5, "text": " Let's see a couple more advanced things"}, {"start": 3873.5, "end": 3875.5, "text": " So, calculating per sample gradients"}, {"start": 3875.5, "end": 3877.5, "text": " And let me just read this through"}, {"start": 3877.5, "end": 3883.5, "text": " In many frameworks, PyTorch, TensorFlow, Theano, it is often not trivial to compute per example gradients"}, {"start": 3883.5, "end": 3887.5, "text": " Because the library directly accumulates the gradient over the batch"}, {"start": 3887.5, "end": 3894.5, "text": " Naive workarounds, such as computing a separate loss per example and then aggregating the resulting gradients"}, {"start": 3894.5, "end": 3896.5, "text": " Are typically very inefficient"}, {"start": 3896.5, "end": 3901.5, "text": " So you can go around this problem by taking a single sample, so for example, maybe a single image"}, {"start": 3901.5, "end": 3904.5, "text": " Passing it through, calculating the gradients for that single image"}, {"start": 3904.5, "end": 3909.5, "text": " And then doing that for the whole batch in a kind of a for loop and then aggregate the results"}, {"start": 3909.5, "end": 3912.5, "text": " Aggregate those gradients, but it's super expensive as you can imagine"}, {"start": 3912.5, "end": 3917.5, "text": " Jax basically has direct support for this type of a thing and it's very efficient"}, {"start": 3917.5, "end": 3927.5, "text": " So if we create a contrived example here by just stacking the state in the t-minus-1 timestamp"}, {"start": 3927.5, "end": 3930.5, "text": " And we batch the rewards, we batch the current state"}, {"start": 3930.5, "end": 3935.5, "text": " So basically using VMAP is going to allow us to get these per sample gradients"}, {"start": 3935.5, "end": 3943.5, "text": " Because remember, TD loss does not accept a batch of data"}, {"start": 3943.5, "end": 3948.5, "text": " It accepts a single example, so a single state, a single reward, and a single state here"}, {"start": 3948.5, "end": 3950.5, "text": " And we have arguments theta here"}, {"start": 3950.5, "end": 3958.5, "text": " So if we map it, if we wrap it inside of a VMAP using in-axis none, which means theta"}, {"start": 3958.5, "end": 3963.5, "text": " The arguments of our value, so the parameters of our value function are going to be broadcasted"}, {"start": 3963.5, "end": 3970.5, "text": " And then we say 000, which means the first zero dimension of these batches here is the batch dimension"}, {"start": 3970.5, "end": 3974.5, "text": " If we do that, we're going to trivially get the gradients for every single example"}, {"start": 3974.5, "end": 3978.5, "text": " And they are the same because we batch the same states here, so that's kind of trivial"}, {"start": 3978.5, "end": 3983.5, "text": " Wrapping up this tutorial, which is already quite lengthy"}, {"start": 3983.5, "end": 
3989.5, "text": " In a summary, Jack's auto-differentiation engine is super powerful, you can do various stuff"}, {"start": 3989.5, "end": 3993.5, "text": " You can even define custom derivative rules if you have some numerical problems"}, {"start": 3993.5, "end": 3997.5, "text": " That's now already super advanced, you can check out the docs here"}, {"start": 3997.5, "end": 4006.5, "text": " I particularly linked this Autodeaf cookbook article, which you should read if you want to know all of the nitty-gritty details of the Autodeaf engine"}, {"start": 4006.5, "end": 4010.5, "text": " I'm going to end up by showing a simple example of this MAMBL"}, {"start": 4010.5, "end": 4013.5, "text": " This is this model agnostic metal learning algorithm"}, {"start": 4013.5, "end": 4016.5, "text": " And you can see it's fairly complicated, you can see the math here"}, {"start": 4016.5, "end": 4022.5, "text": " And the whole point is, this is how simply you can implement this MAMBL algorithm in Jack's"}, {"start": 4022.5, "end": 4031.5, "text": " It looks very much like mathematics, and that's why I think Jack's is super, super powerful and useful for researchers"}, {"start": 4031.5, "end": 4034.5, "text": " Not that beginner-friendly as I already mentioned in my previous video"}, {"start": 4034.5, "end": 4038.5, "text": " But even for beginners, if you go across a couple of hurdles, I think it's going to pay off"}, {"start": 4038.5, "end": 4041.5, "text": " Let me briefly go over MAMBL"}, {"start": 4041.5, "end": 4045.5, "text": " So the whole point here is, imagine you have a couple of tasks"}, {"start": 4045.5, "end": 4050.5, "text": " And while training, you do not want to minimize the loss on any particular task"}, {"start": 4050.5, "end": 4054.5, "text": " You want to minimize the loss, the potential losses on all of those tasks"}, {"start": 4054.5, "end": 4059.5, "text": " For example, maybe going in this direction will minimize the loss for the task 1"}, {"start": 4059.5, "end": 4062.5, "text": " Going in this one will minimize the loss for the task 2"}, {"start": 4062.5, "end": 4065.5, "text": " And similarly, going in this one will minimize the loss for task 3"}, {"start": 4065.5, "end": 4070.5, "text": " That would lead us to have either theta 1, that's this"}, {"start": 4070.5, "end": 4074.5, "text": " These thetas are going to minimize the loss on the task 1"}, {"start": 4074.5, "end": 4079.5, "text": " Similarly, this theta here, theta 2, is going to minimize the loss on this task 2"}, {"start": 4079.5, "end": 4081.5, "text": " And similarly for theta 3"}, {"start": 4081.5, "end": 4085.5, "text": " But we actually move here, which is somewhere in between all of those thetas"}, {"start": 4085.5, "end": 4087.5, "text": " And that's how MAMBL works"}, {"start": 4087.5, "end": 4091.5, "text": " And now, briefly going over this metal loss function"}, {"start": 4091.5, "end": 4097.5, "text": " We calculate the gradients here, and then we use the gradients to update"}, {"start": 4097.5, "end": 4100.5, "text": " Using SGD, we update the parameters here"}, {"start": 4100.5, "end": 4103.5, "text": " And we pass that into the loss function"}, {"start": 4103.5, "end": 4106.5, "text": " And then we take the gradients of that loss function"}, {"start": 4106.5, "end": 4109.5, "text": " It's kind of convoluted, I'm not going to try"}, {"start": 4109.5, "end": 4115.5, "text": " This is not the video about MAMBL, but I want you to just kind of appreciate how expressive Jax actually is"}, {"start": 4115.5, 
"end": 4118.5, "text": " Having said that, hopefully you liked this video"}, {"start": 4118.5, "end": 4120.5, "text": " If you did, share it out with your friends"}, {"start": 4120.5, "end": 4122.5, "text": " Consider subscribing to this channel"}, {"start": 4122.5, "end": 4126.5, "text": " And also join the Discord community, that's where the party is"}, {"start": 4126.5, "end": 4150.5, "text": " See you next time, bye bye!"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=SstuvS-tVc0
Machine Learning with JAX - From Zero to Hero | Tutorial #1
❤️ Become The AI Epiphany Patreon ❤️ https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 Join our Discord community 👨‍👩‍👧‍👦 https://discord.gg/peBrCpheKE With this video I'm kicking off a series of tutorials on JAX! JAX is a powerful and increasingly more popular ML library built by the Google Research team. The 2 most popular deep learning frameworks built on top of JAX are Haiku (DeepMInd) and Flax (Google Research). In this video I cover the basics as well as the nitty-gritty details of jit, grad, vmap, and various other idiosyncrasies of JAX. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ JAX GitHub: https://github.com/google/jax ✅ JAX docs: https://jax.readthedocs.io/ ✅ My notebook: https://github.com/gordicaleksa/get-started-with-JAX ✅ Useful video on autodiff: https://www.youtube.com/watch?v=wG_nF1awSSY ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00:00 What is JAX? JAX ecosystem 00:03:35 JAX basics 00:10:05 JAX is accelerator agnostic 00:15:00 jit explained 00:17:45 grad explained 00:27:25 The power of JAX autodiff (Hessians and beyond) 00:31:00 vmap explained 00:36:50 JAX API (NumPy, lax, XLA) 00:39:40 The nitty-gritty details of jit 00:46:55 Static arguments 00:50:05 Gotcha 1: Pure functions 00:56:00 Gotcha 2: In-Place Updates 00:57:35 Gotcha 3: Out-of-Bounds Indexing 00:59:55 Gotcha 4: Non-Array Inputs 01:01:50 Gotcha 5: Random Numbers 01:09:40 Gotcha 6: Control Flow 01:13:45 Gotcha 7: NaNs and float32 02:15:25 Quick summary 02:16:00 Conclusion: who should be using JAX? 02:17:10 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany - https://www.patreon.com/theaiepiphany One-time donation - https://www.paypal.com/paypalme/theaiepiphany Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💼 LinkedIn - https://www.linkedin.com/in/aleksagordic/ 🐦 Twitter - https://twitter.com/gordic_aleksa 👨‍👩‍👧‍👦 Discord - https://discord.gg/peBrCpheKE 📺 YouTube - https://www.youtube.com/c/TheAIEpiphany/ 📚 Medium - https://gordicaleksa.medium.com/ 💻 GitHub - https://github.com/gordicaleksa 📢 AI Newsletter - https://aiepiphany.substack.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #jax #machinelearning #framework
What's cracking guys with this video? I'm kicking off a series of video tutorials about Jax. So what is Jax exactly? Let's see the official box here Basically Jax is a autograd and XLA which stands for accelerated linear algebra compiler which is basically a compiler developed by Google that co-evolved with the development of TPU units We're gonna see a lot of XLA later. So I thought just kind of explaining the terminology So they're basically brought together for high-performance numerical computing and machine learning research so in a nutshell we can treat Jax as a like a machine learning library and you need to be cautious not to compare Jax directly to pytorch or to tensorflow because Jax is kind of like a mid-level like library and people have been building on top of it So we have as for the high-level deep learning libraries we have the two most popular ones are Haiku coming from from DeepMind and We have a flex coming from the Google research team So basically aside from that DeepMind team has been developing for every single like domain particular domain such as graphML or RL they've been developing like a purposeful a Dedicated library built on top of Jax. So for example for graphML They've built up a thing called GRF or Jax for graphs. That would be like a acronym pretty much having mentioned flex I think it's worth noting that on hugging face hub You have more than 5,000 models at the at the time of this recording That you can use download and play with them And I think most of them were written in flex and that's the reason why I'll be covering both Haiku and flex in some of the future videos that out of the way Let me tell you what I'm going to cover in this series of videos So in the first two videos I'm going to explain the nitty-gritty details of Jax and then we're going to do We're going to code neural networks from scratch in both pure Jax as well as flex as well as Haiku also those of you who know me, you know that I'm a big fan of a top-down approach when teaching and Basically here I'm going to go a bit differently because the topic is kind of complex and the first two videos are going to go So the first one is going to go from mid-level of abstraction down to lower lower levels of abstraction And we're going to understand the nitty-gritty details of Jax in this video The second one will be mid-level to higher levels of abstraction And as I said the last three videos there will be hands-on and those are going to be on a high level of abstraction We won't be digging deeper into the primitives. We're just going to build from the components We learned in the first two videos. 
We're going to build neural networks from from scratch So aside from that as you can see here, I have a code skeleton already here and That there is a reason behind that first if I was writing if I was coding everything from scratch That would take this video would be two hours long Secondly, this is way more natural because you usually never ever build build stuff from your head from scratch You take you either take some snippets from the documentation from stack overflow or from somewhat somebody else's repo So it's very I want to stress the understanding here It's important that we understand the code so that we can modify it And I think it's less important for you to understand how to code this up from your head because nobody does that I mean sometimes I even like forget how to write down if name equals main in the main like file of a Python program So you just search for stuff if you don't understand how to write something down. Okay, let's dig down into the code first off I'm in Google Colab here I I enabled the GPU accelerator in the background and The this Jupiter file which I'm gonna share with you after this video I'm gonna open source into my github is structured into two like sections The first one is warming up with Jax basically we're on the mid level of abstraction pyramid. Let's go that way and then we're going deeper Understanding the nitty-gritty details behind many of Jax primitive Transform functions such as JIT grad etc. We're gonna see those in a second. Okay, let's let's let's start. So first things first I want you to understand that Jax is Jax's API is very similar to numPy's and Second thing I want you to take off from this section is that Jax code is accelerator agnostic and we're going to see what that means Exactly. Okay, let's let's dig into the actual code so as I said for the most part the syntax is the same as numPy's and You can see here. We are importing from Jax There is this module called numPy and the convention is just to import it as J and P They also have a scipy API so Jax scipy, but we're not going to use that one in this view So these functions that are called the transform functions are a vital component of Jax So those are grad JIT V map and P map and you'll be seeing these a lot throughout this this notebook and in general using Jax so aside from that Jax's structure so the API structures like an onion as Usually software like libraries are so we have the high level API, which is a numPy Then we have this thing called called Lex and finally we have I guess XLA, which is I think in simple spots, right? Okay, so because I'm a Etymology nerd. I just thought writing this down. Basically Lex is just an anagram for XLA Which is a compiler and I'm not completely sure how people came up with the name Jax So if anybody knows feel free to type it down in the comment section Let's see a couple of facts about Jax So the first thing I already mentioned the syntax is remarkably similar to numPy's if I run this we can see we have We have defined thousand points on the x-axis equally spaced from 0 to 10 And I just plot the function to sine cosine and we just plotted we get the chart here You can see that Jax syntax is completely similar like it's actually same if you just switch MP for JMP You get linspace sine cosine plot You can like as you can see here the arrays can be directly Inserted into the matplotlibs plot function. So if I run this one we get the same results. 
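A sketch of the kind of cell being described here, showing how closely jax.numpy mirrors NumPy and that JAX arrays can be handed straight to matplotlib (the exact plotted function is an assumption based on the narration, roughly 2·sin(x)·cos(x)):

```python
import numpy as np
import jax.numpy as jnp
import matplotlib.pyplot as plt

# NumPy version
x_np = np.linspace(0, 10, 1000)
y_np = 2 * np.sin(x_np) * np.cos(x_np)

# JAX version: same API surface, arrays plot directly with matplotlib
x_jnp = jnp.linspace(0, 10, 1000)
y_jnp = 2 * jnp.sin(x_jnp) * jnp.cos(x_jnp)

plt.plot(x_np, y_np, label="NumPy")
plt.plot(x_jnp, y_jnp, "--", label="JAX")
plt.legend()
plt.show()
```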
Okay, so that's it So that's the first fact second fact So that's something you need to get comfortable with and that's the functional programming Paradigm, especially if you're only familiar with the object-oriented programming paradigm This is going to be a bit rough But like I I promise you there are some awesome benefits that functional programming brings with it. So Jax arrays are immutable. So that's one of the three like that's one of the common properties you see in functional programming in functional programs, so what that means is the following If we take a like a NumPy array here, so numbers from 0 to 9 we print them out and I then modify the The array at index 0 and I add some random number like 23 and if I run this Everything is fine We have the original array and we have the modified array and as you can see X is modified in place, which means The array is by definition mutable on the other hand if we take this The same program and run it here. So I create this time Jax array instead of NumPy array and I try to modify the Array in place. This is what's going to happen. So we are going to get an exception here as you can see Basically, Jax is complaining that we it says here object does not support item assignment. So you cannot Basically basically modify the array in place. So this is a solution. It's kind of rough But yeah, you have to do this at syntax with index and then set the value and this is going to work Finally, you can see that we are not like modifying X in place. We basically allocate additional object in the memory Called Y and now we have both the original array as well as the Novel array so you may think here. Well, this is kind of suboptimal, right? Well, the trick is and we're going to see that a bit later Like Jax uses something called JIT which uses something called XLA which Basically takes care of this stuff behind the curtains. So you don't have to worry about this So if it notices that you're not using the original array It's going to just do this in place and avoid allocating another memory object. 
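The immutability behaviour just described, in code (a small sketch; index 0 and the value 23 are the ones mentioned in the narration):

```python
import numpy as np
import jax.numpy as jnp

# NumPy arrays are mutable: in-place assignment works
x_np = np.arange(10)
x_np[0] = 23

# JAX arrays are immutable: item assignment raises a TypeError, so we use the
# functional .at[...].set(...) API, which returns a new array instead
x_jnp = jnp.arange(10)
y_jnp = x_jnp.at[0].set(23)
print(x_jnp)  # [ 0  1  2 ...  9]  -- original unchanged
print(y_jnp)  # [23  1  2 ...  9]  -- updated copy (XLA can elide it under jit when safe)
```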
So yeah, that's taken care of So that's the the second thing I wanted you to understand So the functional programming paradigm is that you can use the same array in place So the functional programming paradigm is something that Jax relies upon Finally, we have the fact that Jax handles random numbers completely differently compared to NumPy And there is a very very good reason for this We're going to see that in a moment But the key point here is you have to create this basically key which is a fancy name for like a state So whereas NumPy's pseudo random number generators are stateful, Jax is not stateful because it's As I said, that's a consequence of functional programming paradigm again So key is just a synonym for a state in Jax And what you have to do when you're generating random numbers You have to pass the state explicitly and not implicitly such like in NumPy So here we created 10 like random numbers We sampled them from a normal distribution And we can see here that we get 10 random numbers The thing I want you to notice here is that the type of this x array is of device array So that basically that's a trick And that's why Jax code is accelerator agnostic Basically, Jax automatically puts this array onto the accelerator So in this case, I'm using here my accelerator is set up to GPU If I were to set it to TPU, the code would be set directly to the TPU unit And that's very cool We're going to see a consequence of that a bit later Basically, you don't have to do the laborious to device from PyTorch syntax, etc So this story naturally leads me to fact number four And that's that Jax's accelerator agnostic same code runs everywhere You do not have to write particular syntax for a particular accelerators That's very cool Especially in the future where we see more and more companies being like building custom AI Accelerators and chips, not just TPUs and GPUs But like companies such as Graphcore, Cerebras, etc Let's see it in practice So here we define a Jax array Again, we see the syntax where we have the key We have to pass explicitly the key or the state of the pseudo random number generator And as you can notice, there is some difference in the API design here We have D type here We're using for NumPy, we're using S type There are some minor differences But like in general, syntaxes are very, very similar between Jax's NumPy and NumPy API I mentioned that we don't have to use two device anymore So that's in PyTorch So this array here is directly pushed to the GPU unit Whereas this one is on the CPU If we do some time profiling here, we can see that if let me run this thing Let me run this thing, basically this first line, line number 10 We do like a basically matrix multiply directly on the GPU And that's very fast We can see that we're going to see results in a second The second line here, because we have a NumPy dot And because we have NumPy arrays which are on CPU It's going to be way slower And NumPy only works, the reason why Jax exists is because NumPy only works with CPUs And we do need to leverage existing accelerators The third line here does something like a modified version of the first two lines Basically, we're using Jax's dot product And we are using NumPy arrays which will cause this line to push the devices to the GPU And then do the dot product And because of that overhead, we're going to have a slower result Hopefully for this third line And let's see the numbers here So we have 27 milliseconds for the first line We have 437 milliseconds for the second line 
Because as I said, NumPy is very slow, it's running on CPU Finally, we have 97 milliseconds for this line here Because we have the overhead of sending the data from the CPU, from the host to the GPU unit And the final number 26 is the same as this one And that's this one here So basically what we do, we can explicitly, using this device put function We can push the DRA to the GPU And then now we have equivalent lines between this one and line number 10 And that's it, that's why we have the same results here A couple of notes here First, I'm using GPU as a synonym for AI accelerator In reality, as I said, Jax's accelerator agnostic Which means basically, depending on what I set here in the runtime setting here We are going to run this either on GPU or TPU or whatever they have supported A second note is this block until ready You can notice that every time I'm using Jax's dot function, I'm using this block until ready The reason is Jax is using something called asynchronous dispatch system in the background Which means if I were to run this thing here So let me just copy paste this So if I were to copy paste this here If I were to run this and maybe like let me assign that to a variable And now I do some coding here, blah, blah, blah The thing is this line is going to immediately return And this code here is going to start executing Because this line actually just delegates the task to the accelerator And we're not blocked, we're not waiting for this line to execute Which means the variable, this temporary variable is still not filled in Let's put it that way That's a fairly neat feature that makes things even faster But you need to be cognizant of it Especially when you're doing this time profiling You want to measure the actual computation and not the time that's necessary to dispatch Just delegate the task to the accelerator Just be aware of that Nice, those were some basics Now we're going to cover the transform functions But before that, let me do a quick overview So Jax's AI accelerator agnostic Jax handles random numbers a bit differently We're going to see why that is important a bit later Basically reproducibility in the environments where you have multiple accelerators Where you're doing parallel programming is like possible with Jax Whereas NumPy has some problems there because it was designed mainly for CPU programs Then as I said, functional programming paradigm is something you need to get comfortable with And finally, basically the syntax is very similar to NumPy That's a high level overview of what Jax is And now we're going to get deeper into these transform functions So as I said, JIT compiles your functions using this XLA And basically doing that, it catches some functions and makes it very, very fast to execute our programs I'm going to just run this simple visualization function here And I'm going to show you a simple example where how we can JIT a function and make it faster So here I just defined this function It's a cellU function, it's just an activation function The details here are not that important That's why I have this visualize function here I'm just going to run it We're going to see how it looks like You can see here up until zero and that's defined here When X is greater than zero, we just have X Which means we have a linear function here And when the number X is smaller than zero, we just have X And smaller than zero, we have some combination of exponents here with some coefficients, etc But that's not important The important part here is we can transform 
cellU So we pass the function inside of this JIT transform function And we get a function back And this function is compiled Actually, it's not exactly compiled We first need to trace it by calling it once But for the sake of argument, we just have a super fast function right now here And let's benchmark it So let me allocate a vector of million data points We have million random numbers here And we pass them through the cellU function And we basically do a time profiling of both the normal version as well as the JIT version Let me run it again and let's see the numbers So the thing here is this function is not overly complex So the results we get with JIT So we will not get a huge performance boost here But imagine once you start training neural networks Where JIT starts shining because then the optimizer, the compiler has a bigger freedom To do various, various optimization tricks such as fusing the operators Such as avoiding allocating certain temporary structures, etc And things get really, really fast like orders of magnitude faster Here we can see that basically we have 1.96 milliseconds using the normal version And then we have only 121 microseconds using the JIT version So even in this simple example, we get performance benefits So for now, I'm going to leave it here I'm going to stay like on a high level here We don't have to understand currently how JIT works But like you just need to know when you see JIT, JIT makes functions run fast Let me try to be this guy Awesome Okay, let's go ahead here Second transform function, very important, is grad Basically it does the magic of automatic differentiation for you So the same thing as dot backward in PyTorch if you're coming from the PyTorch world And like a short note here, differentiation can be manual So basically you have a function, you manually calculate on the paper How the derivatives look like And then you can code that knowledge into your function And that's a manual differentiation Symbolic one is very similar It's just an automatic way to automate the manual differentiation Whereby the program is using those rules such as product rule, etc To build up derivatives of a function And numeric functions are things like finite derivatives Where you use basically numeric methods to compute the derivatives And finally, the automatic one is the one we all love Which is used in every single deep learning framework we know of Okay, let's see how grad works So let's define a simple function here So it's just a sum of, as you can see here, logistic functions So this here inside of this is a logistic function or sigmoid And I just input, I just create like array of three values here, 0, 1, 2 And I rename the function to loss because this could be used as a loss function Just giving some semantics And finally, we can do grad by just wrapping loss into grad Again, we are passing function into a function Which is something you see often in the functional program in paradigm That gives us the grad loss So the gradient of the loss function which we can evaluate So this is contrast this to PyTorch or TensorFlow Where you just do backward Here you actually get the function back And you can evaluate it at particular points And by default, I said here the grad will take the derivative of the first parameter But here we only have one parameter so that doesn't matter And let me kind of run this and see what we got So we have some numbers which mean nothing to us because the function is fairly complicated If I were to change this to this sum of 
If I were to change this to the sum-of-squares function, so if I do something like this, we can get interpretable results. Let's do a manual derivative of this function: it's x1 squared plus x2 squared plus x3 squared - remember, we are passing three numbers here - so if we take the derivative of that, it's going to be 2 times x1, 2 times x2, 2 times x3. And if I run this, we get 0, 2, 4, which makes sense because the input is 0, 1, 2. Let me print this out: so print x, we have 0, 1, 2, and we get 0, 2, 4 because of this. Since all of these values are bundled inside of x, grad basically takes the derivative of the function with respect to x1, which is 2*x1, and then the same for the other variables. Originally, remember, we have something like this: this thing plus x2 squared plus x3 squared, that's the original function, and this here is the manual derivative. Okay, that's a simple example. Let's now go and make sure that this thing actually does what we expect it to do. We can do that using a numerical differentiation method, the finite differences method. If I run this, we get some results; there are some small errors here, which are a natural consequence of the fact that we're storing numbers using a finite number of bits. How this works is very simple. Remember, x is this array here, 0, 1, 2, so len(x) returns 3, and the i here is just the identity matrix, so we are iterating over its rows and taking one-hot vectors. And this here is the formula of the actual derivative: you nudge the input vector along a certain dimension a little bit, in the positive direction and in the negative one, and then you divide by 2 epsilon, because that's the intensity of the nudge. If you don't understand this, the best way to play with Colab is to just insert a new code line here, and we can see what this does exactly. So let me paste this here - we have, as you can see, the identity matrix - and this for loop is going to take one one-hot vector at a time and pass it into this function here. So you can see that in the first iteration we're going to evaluate f at the following: at 0 plus this small epsilon, with 1 and 2 unchanged, and we evaluate the function there. From that we subtract f evaluated at x minus epsilon - sorry, so that's 0 minus epsilon, and the rest stays constant for the first iteration - and then we divide by 2 epsilon. You can see that this is the definition of the derivative itself, and we do that along every dimension, which means in the second loop we'll have the epsilon here instead of here, and so on and so forth - so minus epsilon here, and yeah, you get the point. So now we are certain that grad works as expected. Now let's see some fun examples. We're going to define a simple second-order polynomial function here, x squared plus x plus 4, and I'm going to run this to visualize it - it's always nice to visualize stuff; at least for me it gives that gut feeling and makes it easier to understand what's going on. We're going to do higher-order derivatives here, just to show you how powerful the grad function actually is. So you can do grad of f, then grad of the grad of f, and you can do that n times, whatever n is. Or you can just do it
like grad of grad of grad of f This will also work This is just shorter so that's why I did it like this If we were to manually do a derivative of this polynomial We'll get the first derivative will give us 2x plus 1 The second derivative so derivative of this one will just leave us with 2 And finally derivative of a constant is 0 That's why we have the third derivative is equal 0 So if we print all the values here we expect to see because x is 1 We expect to see 3, 2 and 0, right? So here it is 3, 2 and 0 The first value is just the value of the function at the input x which is 1 Which is as you can see here approximately 6 Exactly 6, okay? So far so good The cool thing about this is we are very close to the math So you can basically as you see formula in some paper You can implement it much easier in JAX compared to PyTorch or TensorFlow It's very powerful We're gonna see how powerful this whole autodiff package of JAX is a bit later And now I'm gonna do a simple modification here Let's assume we have two inputs So let's assume we have y here So we have plus y squared And I'm just going to modify this lambda function I'm gonna skip the evaluation because the evaluation will fail now And now I'm going to do the following If we just do the grads of the function By default do the derivative with respect to the first parameter That means we're going to get this thing here exactly So we should expect exactly the same numbers Except for the fact that f will be different when we evaluate it at x So this obviously will not work until I enter some numbers So this is going to be like let me say it's gonna be 1 And then I'm gonna modify this here So 1 x y x y and finally x y here That should work Let's see if it works or not Yeah, we get the numbers as expected So we get the same results here And we get 7 because once you evaluate it at y equals 1 This will be 6 plus 1 that's 7 So how can we do derivative with respect to y? 
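Before answering that, here is a minimal sketch pulling together the last few cells - the sum-of-squares gradient, the finite-difference sanity check, and the higher-order derivatives of x squared plus x plus 4:

```python
import jax.numpy as jnp
from jax import grad

# sum of squares: grad should give 2*x
sum_squares = lambda x: jnp.sum(x ** 2)
x = jnp.array([0., 1., 2.])
print(grad(sum_squares)(x))                        # [0. 2. 4.]

# finite-difference check: nudge each dimension by +/- eps and apply the central difference
def finite_diff(f, x, eps=1e-3):
    return jnp.array([(f(x + eps * v) - f(x - eps * v)) / (2 * eps) for v in jnp.eye(len(x))])
print(finite_diff(sum_squares, x))                 # approximately [0. 2. 4.]

# higher-order derivatives of f(x) = x^2 + x + 4
f = lambda x: x**2 + x + 4.
print(f(1.), grad(f)(1.), grad(grad(f))(1.), grad(grad(grad(f)))(1.))  # 6.0 3.0 2.0 0.0
```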
To differentiate with respect to y, the only thing we need to do is pass argnums=1 instead - I think that's the syntax, let me try it out. Let me just ignore this for the time being, I'm just gonna delete this, and let me print this out. So we get the number 2 here, which is the correct result: if we differentiate this polynomial with respect to y we expect to get 2 times y, and since y equals 1 we get 2. The thing that confuses me here is that I'm not sure why we are getting a DeviceArray here, whereas previously we had pure floats on the CPU. I'm not sure about that, let me check if I can do it like this. Yep, for some reason once I add a comma here it returns a DeviceArray. Whatever. Obviously this is now df/dy, but you get the point. Now let me show you how powerful the autodiff engine is. Aside from grad, which can compute derivatives of scalar-output functions, these jacfwd and jacrev functions can find Jacobians, and Jacobians can evaluate derivatives even for vector-valued functions - that's pretty much the only difference. So here let me take another function. Basically we have a simple paraboloid; if we google it we can see what it looks like - it looks like a cup, a 2D surface in a 3D space. If we manually calculate the derivatives, df/dx will be 2x and df/dy will be 2y, so we expect the Jacobian to look something like this. If we continue on and figure out the second-order derivatives, we get the Hessian: the second-order derivative with respect to x is 2, as you can see here, for y it's also going to be 2, and if we do dx and then dy, because this term does not depend on y anymore, we're going to get zeros. And finally, this is how you form the Hessian matrix - you don't need to worry about what the Hessian is, it's basically just a matrix of, as you can see here, the various derivatives of a multivariate function. You can simply define it using the jacrev and jacfwd functions, as we additionally did here, and you can see how nicely this composes and gives us the results. The reason they're using both forward and reverse mode is just an optimization detail: the reverse one works better with wide Jacobian matrices, whereas jacfwd works better with tall ones. I'm going to link a video down in the description which nicely explains why that is. Anyways, we get the numbers, so that's 2, 2 for the Jacobian as we expected - if we plug in the input 1, 1 we can see that it evaluates to 2, 2 - and finally for the Hessian we get 2, 0, 0, 2. So anyways, I just wanted to show you that this is possible, and if you want to dig into more details their documentation is very nice. And yeah, let's continue with the final example for grad. I just took this edge-case function, the absolute value of x, and let's see how JAX handles it. This is how the function looks, I just visualized it here, and I printed the values at minus 1 and 1, which would be 1 and 1, and you can see the results here. Finally, I computed the gradient of this function, and you can see it's not differentiable at 0 - the gradient is undefined there. But if we evaluate the gradient at minus 1, as you can see here, the derivative will be minus 1 for all the numbers on this side and 1 for all the numbers on this side. The interesting point is actually 0, because as you can see this one returns the obvious result minus 1, so this one here is kind of interesting. If we were to delete this part and just kind of add
a small number to 0 We'll get 1 as the output If we were to do the same thing just on the negative side we're going to get minus 1 So basically what they've done is they defined for 0 is going to be the value is going to be defined is going to be 1 That's just a convention and that's how Jax deals with it We didn't get any exception so yeah I guess I guess that's good at least in some settings Finally let's jump to Vmap Basically when you see Vmap you need to think about it the main value prop of Vmap is the following Writer functions as if you were dealing with a single data point And this is going to become more clear as we go through these examples Let's say we have a matrix W which is basically weights of you can treat it as weights of a linear neural network layer We input some state we set the shape to 150, 100 And we now create this batch X you can treat this as maybe 10 images a batch of 10 flattened images So it's 100 that means we had like a 10 times 10 pixel image it's 100 when you once you flattened out So now if we apply the matrix we do the dot product between W and X This is basically a simulating what the linear layer is doing in the background when it's processing the input data Now the trick here is this only transforms the single a single image it cannot handle the batch Otherwise it will crash because we're trying to multiply 150, 100 which is W And we're trying to multiply that with 10, 100 that's going to fail So this thing does not work for the batch so okay let me run it So how would we go about making this work for a batch of images? So that's because that's what we care about and because that's way more efficient as you may know Especially on GPUs and accelerators Basically the most naive approach would be to iterate through the images and then call this function that knows how to handle single data points And then we just stack the results and basically we can see that this will work And the thing is it's very very slow because you want to avoid doing for loops you want to vectorize your functions And that's why this runs 5.54 milliseconds Now let's see a bit more a better approach of doing this So this would be a better approach but as you can see we had to completely rewrite the function we had above So we had to swap as you can see here we had to swap W with X Now we have batch of X here and we have W here and we additionally had to do a transpose So now this will the shapes will match and everything will work fine So we have 100 and we try to multiply that with transpose version of W which is 100 150 And that gives us as you can see 10 150 which is what we expected right? So everything is fine now the problem with this and it's additional jitted so it's going to be really fast Now the problem with this is as you can see we have to write depending on whether we handle singular cases or a batch of data We have to write completely different functions and that's not desirable Although you can see this is way faster. So it's only 103 microseconds. Whereas this is 5.54 milliseconds Now this is the result that Jacks offers us using something called V map You just take the function that handles a single data point You wrap it into this you transform it using this V map And you can now pass the batch of your data without any other modifications So that's that's very cool. If I run this Let's see how how fast this thing is. 
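While that runs, here is a minimal sketch of the three variants being compared - the single-example function, the naively batched loop, and the vmap-transformed version - plus, for completeness, the in_axes form used a bit further down where W is passed in explicitly. Shapes follow the example above; treat it as illustrative rather than the exact notebook cell:

```python
import jax.numpy as jnp
from jax import random, jit, vmap

key = random.PRNGKey(0)
W = random.normal(key, (150, 100))          # "weights" of a linear layer
batched_x = random.normal(key, (10, 100))   # a batch of 10 flattened inputs

def apply_matrix(x):
    return jnp.dot(W, x)                    # handles a single example of shape (100,)

@jit
def naively_batched(batch):                 # slow: the Python loop gets unrolled
    return jnp.stack([apply_matrix(x) for x in batch])

@jit
def vmap_batched(batch):                    # same single-example code, batched automatically
    return vmap(apply_matrix)(batch)

# the in_axes form discussed just below: W has no batch dimension, x is batched along axis 0
batched_apply = jit(vmap(lambda W, x: jnp.dot(W, x), in_axes=(None, 0)))

print(naively_batched(batched_x).shape,     # (10, 150)
      vmap_batched(batched_x).shape,        # (10, 150)
      batched_apply(W, batched_x).shape)    # (10, 150)
```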
So it's around 100 microseconds I think the last time I ran this this was actually faster than this one But I guess there are some some variants inside of there Okay, so I went ahead and reran this thing again And now we can see that we have 180 microseconds for this function Whereas we have 143 microseconds for this one. So obviously there is some variance in here But the point is here you have a much simpler way to write these batched Functions and it's as as efficient as by like doing this laborious work So basically what what V map does in the background is it takes the the for loops and packs them into this Alex API mid-level layer of the of the API and yeah, so now let me let me go ahead and Modify this example a little bit So we are going to instead of passing just X Let's try and pass both the W as well as the X because that's more I guess in in line with the functional Programming paradigm. So let me rerun this cell and now let's modify this one to to accept like both the W as well as the as the batch of data so We'll need to add this thing here again. So if I were to run this right now, it will crash and we'll see why so it basically says It basically says that W We're trying to to tell to V map that W has a batch dimension where whereas it does not have it's just a it's just a matrix That's supposed to represent the linear layer So what we need to do is add the in axis argument here So in axis like this and then we say none because W does not have a batch dimension And we need to specify the batch dimension of the second input and that's zero. So now this should work Let me let me try and rerun it again So fingers crossed And yeah, it did work. So basically that's it Now you saw in a bit more detail how V map works and this is the first part of this video So we basically saw the basics of Jax We saw how to use the the main transform functions such as JIT such as grad and finally V map and now let's dig Even deeper and understand the intricacies of how JIT works because that will help you enormously Debugging these Jax programs. So let's let's continue here. So We basically have as I said Jax has this onion like API layer API structure and that's that's I guess pretty much Always the case but still Anyways, we have numpy as the as the highest level then we have lax as the mid level and finally the XLA So lax API is stricter and more powerful and it's a simple Python wrapper around XLA So let's see what it means when I say stricter So if we were to add in the numpy level of the API This thing can be tolerated So one plus one like one as the integer plus one as the integer Like one as the integer plus one as the float will work. But once we get to the mid level Here we need to be explicit about the types. Otherwise we'll have we'll get an error So if I run this we'll see that this is printed So one plus one is okay, but here add requires arguments to have the same D types got int 32 and float 32 The reason they've done this done it this way is to be more error robust Finally, we have the fact that the lax layer is obviously more powerful Although as a trade-off, it's it's less user friendly, which is kind of obvious as an example here. 
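Before getting to the convolution example, here is the strictness point as a minimal sketch - the NumPy-level API promotes mixed types for you, while the lax level insists that you be explicit (the error type is what the docs show; catching broadly here just to keep the sketch short):

```python
import jax.numpy as jnp
from jax import lax

print(jnp.add(1, 1.0))                 # fine: the int is promoted to float behind the scenes
print(lax.add(jnp.float32(1), 1.0))    # fine: both operands are float32
try:
    lax.add(1, 1.0)                    # mixed int32 / float32 -> error at the lax level
except Exception as e:
    print("caught:", e)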
We have X and Y We have basically we want to do a convolution a 1d convolution between the signals X and Y and in numpy API this would be done like this We just call the convol function and we get the result On the other hand, once you get into the lax land, you have to use this conv general dilated Which is way more powerful as you can see has much more options You can specify the window strides the padding but you have to be again much more strict here in order to get this to work You have to be explicit about the the types etc And if I run this we can see the results here as well as here will be the same Like this this assert should make sure that that we get the same results And as you can see it works perfectly now Final remark here is you can see here that this result returns batched results So that's why we have to index 0 0 to get the actual result As I said, lax API is just a thin wrapper around thin I mean, it's a wrapper around XLA so you can actually find the the XLA function that's going to be eventually called here And that's this conv the general padding It's in C++ you can see the arguments etc So if you ever need to dig a bit deeper and optimize something really really really hard Then you TensorFlow documentation here got you covered So let's get back here and continue That was the short mention of the API Now let's finally understand how it works So again, I won't get into more details here The whole point of this is to show that JIT functions are faster compared to non-JIT versions of the function Here we just normalize the matrix X like column wise And so both the mean we subtract the mean as well as we divide with the standard deviation of the columns and we get normalized columns And finally, the difference here is again not that big because this is a super simple function The more complex the function is the bigger the differences here will be between the JIT version and between the normal version So here we have the JIT version and again we're using a block until ready because of the asynchronous dispatch and we see the results Okay, so let's see In order to understand JIT it may be useful to understand when it fails Which functions cannot, so which class of functions we cannot JIT will tell you, will help you understand how JIT actually works behind the curtains So if I were to run this thing it's gonna crash So we are creating a simple vector of 10 random elements and we pass that vector into this get negatives and we call it So now because we don't have any JIT we're gonna get a result back and everything works as expected But if I were to call the JIT version of the function by wrapping this into this JIT transform If we run this we're going to get like error and it says here array boolean indices must be concrete, got shaped array, blah blah blah So basically what happens here is that depending on the content of X, so depending on the values of X We are going to get an output that's going to vary in its shape and that's something that's not tolerable inside the JIT world Let's slowly understand why this is the case, so let's start with this function We have a simple function f which just does like a dot product between X and Y and returns result and in the meanwhile it also prints some intermediate variables here So I prepare the variable X and Y, just random vectors and matrix here and we call the function here And then we again call, we have a second call here and let's see what happens after the second call So I'm gonna run this, there's a couple things 
happening here, first thing first is the first time you run a JIT function So that's line 14, what JIT does in the background is something called tracing So it basically takes, instead of inputting the actual values of X and Y, it creates this abstract like tracer value Let's call it that way, so it's basically like a placeholder variable that has a specified shape and specified data type So you can see here basically once we print X, it outputs here instead of the value of X, it outputs traced shaped array It's basically float 32 because we had random numbers sampled from Gaussian, we have the shape here 3, 4 and we have 4 for Y So basically you can see these are actually passed into the function the first time JIT is called And using that JIT can understand how the shapes are morphing going through the function and finally what the output shape of the function is And you can see here the output is also float 32 and the shape is 3 We finally get the results here, the second time you call the function something funny happens and that has to do with the functional programming paradigm Basically because print functions are a side effect because we are returning the result from this function by other means than through the actual output here So we are printing and that's a side effect and because of that JIT is just going to ignore all of those side effects And the second time you call it because it's going to call the compile function, we won't see any printing And the whole point is this caching mechanism I just explained, so that's why JIT functions are so fast So I mentioned here anytime we get the same shapes and types we just call the compile function So that means if I were to now call whatever, so basically for the whole class of inputs that have this shape and this data type And no matter the value we are always going to call the compile function But if I were to call this function again but this time with a different shape, so let's do something like this So x3 maybe 3, 5 and this will be 5 and let me just map this like this and I'm going to call the function again and print the results So x3, y3, so what will happen here is that this time we'll again trigger the compilation because the shape changed here And so JIT is smart enough to retrace it and as you can see here we have the tracing happen here again Hopefully you get a better picture of how JIT now works but we are going to continue and understand this in even more depth So now we have the same function as above, we just omit the print functions which are the side effects And this is how you should write your functions when you want to use them with JAKS transform functions such as JIT So no side effects If I were to print this and we are going to have something called JAKSper here, so that's JAKS expression And that's basically the, let's call it like a flow model that JIT creates in the background when it does its tracing procedure So if I were to run this, let's see what we have here So you can see what happens, so it creates this abstract grammar and you basically have C, so add 1 to A So that's going to be, and A is basically the first argument, so that's X, so it's basically creating a placeholder for this thing here Then it calls D B plus 1, so that's Y plus 1, this is going to be B And then it calls the general dot product, so that's this thing, with C and D, as you can see here, C and D And yeah, and it returns back the E, which is this result here So you can go into docs and understand in more detail every single 
part of this syntax of this grammar But like, now you understand a bit better how JIT, when it does the tracing, creates this type of a flow in the background And this can be compiled using XLA, okay? Let's see another example of a failure This time, what we do is we pass this argument neck and we condition upon it And depending on the value, whether it's true or false, we're going to return minus X or plus X And remember, the first time we call a JIT function, it's going to input abstract shape and like a data type It won't have information about the value, and that's why this thing is going to fail So it says here, blah blah blah, abstract tracer, value encountered, where concrete value is expected The problem arose in the bool function, so that's basically what happens And in order to avoid this, what we can do is use these static arguments By making an argument static, so we say here, hey, first argument, so that's neck, is going to be static What we do by doing this is once the JIT does the tracing procedure, it will not use this abstract tracer object It's going to use the actual value So we are kind of lowering the level of abstraction here while doing the tracing Which is going to kind of constrain the class of inputs where this compile function can be called from the cache But in return, yeah, in return, we get it to work, I guess Okay, so let's call this thing, so this should now work And the thing I want you to understand here again is that the first time we call it with true here, we do the tracing And then the second time we call it with true, because nothing changed, basically again we don't have the tracing procedure Once we switch this to false, we again trigger the tracing, so now we can call this function for any integer 32 And we'll have two cached functions, so that's like, which run really fast Let's continue analyzing the failures, the third failure is here If I were to run this, let's see what we get, blah blah blah shapes must be 1D sequences of concrete values of integer types Okay, so what happened here is, let me try and print some values again Print x, let's print x shape, let's print this whole product thing And I think that's going to work, so let's try and print that So as you can see here, what happened is that x is of trace type, and then x shape is actually like a concrete value And then this thing here is again a trace, so this trace object, and we're trying to pass the trace object into reshape, which expects a concrete value That's why this thing is crashing again Let's see what the solution is, we can basically use NumPy prod instead of Jack's product So this may be a little bit confusing, basically as you saw, you have two tools to make these functions work legit One is to make certain arguments static, and sometimes you'll have to use NumPy functions instead of Jack's functions So don't ask me why, I'm only a couple of steps ahead of you, I recently started learning Jack's myself But let's try and run this, and this time it works, because we have the mprod will return a concrete value and not the traced object And that's it, if you wish, as I said, I'm going to share the colab, I'm going to push it to my github, so go ahead and play with these failure cases yourself To better understand how JIT works, but basically just keep in mind that JIT passes in these abstract objects that have the shape information and the data type information and no value And that will help you save yourself from a lot of headache, I guess Ah, okay, this is something I need 
to cover because there are some gotchas, some idiosyncrasies of Jack's which you need to understand to be able, so that we'll have easier time in the later videos building up neural networks, etc. Once you understand a couple of these gotchas, things are going to be way easier So, let's start with gotcha number one, pure functions, so Jack's is designed to work only on pure functions, we saw some glimpse of this while we were using print functions a couple of minutes ago And here is an informal definition of a pure function, so first of all, all the input data is passed through the function parameters and all the results are output through the function results, okay, so that's the first thing Secondly, a pure function will always return the same result if invoked with the same inputs, so it's, in a way, it's like a huge memory, like a cache table, and depending on the inputs, you just retrieve the outputs So that's an informal definition of a pure function and I think it's going to be good enough to understand the following cells Okay, so let's start with the cell number one, example number one, so here we are violating number one because all results are output through the function results, nope, that's not the case, we are returning some results over the print function instead of through X, through this return statement here So let me call this function, okay, on the first call, again, we have the tracing, so we call the executing function print statement, but the second call, due to the fact that we are passing the same shape and same data type, that's just a float 32 here We just use the cache function and we return five, finally, the third call, because we changed the shape and the type here, so we basically have a ray now, it's going to basically call and trigger the tracer again, and I think that's pretty clear at this point of time Considering the other examples I already covered, example number two, so you do not want to interfere with the global variables and I think this is probably a bad design decision anyways, even if we ignore the functional program paradigm, this is probably not a good idea to do because, yeah, unexpected things can happen with your program So what's happening here is that, as I said, we are violating both one and two because, let's see why, so all the input is passed through the function parameters, nope, this time we are passing the input through the global variable, and secondly, a pure function will always return the same results if invoked with the same inputs Which is also not the case because, depending on the value of g, we are going to return different values here, even if we keep x the same, so what will happen here, once we call the jitted version of the function the first time, it's going to cache the value of g, which is zero Then we are going to update it, and then we are going to call it with five, and it's going to call the cached version of the function, which, if you recall, has g equals to zero somewhere in the Jackspur, and that's why we are going to get the wrong results So let me execute this cell, so on the first call we get four, and that's fine, because we passed four, we had g equals zero, that's fine, now g is equal to ten, and we have five here, so we expect fifteen, but we get five because the g version of zero is cached in the function And finally, if I call with a different, we change, instead of float, we pass the, like, a Jacks array, this time we trigger the execution and we get the correct result again So in any case, this 
is a wrong idea, a bad idea to add these types of impurities Example number three, so Haiku Flux are basically built upon this idea, the whole idea is that it's fine to have a stateful functionality inside of your pure function So what I mean by that is the following, so we have x here, and what we do here, we create this state dictionary, and we basically just add, depending on whether the integer i here is odd or even, we add x to even or odd keys And so we are preserving state, obviously, by doing this, so this is stateful execution here, and then we just return this sum here, so as you can see, we're not violating anything here, so this is pure, because we're only using x, we're only assigning x to this state And we're only outputting whatever came out from this function, so we're not accessing some global variable or whatnot So because of that, all of this will be fine, and if I execute this, we're going to get a correct result, and that's 50, okay Finally, a fourth example, iterators, since they are stateful, are a no-no, so if you try and use, we built array here, and if we just do like this, if we don't have any iterators, it's going to work But on the other hand, if we use the iterator, this is going to fail, even though we semantically would expect the same results, and you can see that's the case, we got 45 here, we got 0 here, even though we should have gotten 45 as well Here you can see this lex primitive, so that's one of those mid-level API functions, it's called foriloop, it's basically a smart version of the for loop, which can be later compiled using the XLA But the thing I want you to understand here is just that you cannot use iterators because they are stateful, and we are thus violating the purity constraint of Jax Okay, that was the gotcha number one, make sure to write pure functions if you want to use them with Jax transforms in general Gotcha number two, in-place updates, we saw this already, so you cannot modify the arrays in place, you have to use that at set syntax, so let's see a simple example here Once we create the Jax array, we have to use this at set syntax in order to get the output array Let me run this, and you can see here the results are as expected, so this is your NumPy syntax, so row number one where we have zero indexing, and then all columns, we set them to one, and that's what we see here So I think I mentioned this, so if this seems wasteful, basically don't worry because XLA is smart enough to figure out that if you're not using this input array, then it will not allocate a special memory object for this output array It's just going to reuse the input array and modify it in place, even though it does not appear to do so on this higher level perspective Don't worry about the expressiveness, we can still do everything we can with NumPy, so here if I create a simple matrix of ones, we can do whatever we want, we can add, we don't just have to assign values to certain locations, we can also do arbitrary operations such as addition And you can see here we can add to every second row, so every second row, and starting from the third column, we add seven to one, so that's why we have these eights here, as you can see So that's cool, that was the second gotcha, let's continue on, the third one is out of bounds indexing So this is a direct consequence of the fact that Jax wants to make this code accelerator agnostic, because it's very hard to communicate certain information from the accelerators, they had to create certain types of non-error 
behaviors, which may come as a surprise if you're not already acquainted with this. So when it comes to NumPy, if we allocate an array with ten elements, from zero to nine, and we try to index into the eleventh position, this will throw an exception, because there is no eleventh position, there are only positions zero through nine. So let me try and run this, and we're going to get the exception caught: blah blah blah, out of bounds for axis 0 with size 10. Okay, let's see what JAX's behavior here is. If we try to assign at position eleven - so that's the same example as here - if we try to set the value twenty-three there, it's just going to ignore the operation, which may be surprising because it will not throw any error. So let's see what the result actually is: we have no change whatsoever here. Finally, if we try to retrieve the element at the eleventh position, which does not exist, it's going to clamp the index to the last one, which is nine, and so we are going to retrieve nine. Which is super confusing as well, and this may be the cause of many, many really bad bugs, so be aware of this behavior. As I said, the reason this exists is that JAX tries to abstract the accelerator away from you, and because it's hard to propagate certain information back from the accelerators, you get this as a consequence. I guess if you're familiar with NaN behavior, this is kind of similar, because NaNs are also, in a sense, invalid values, and yet we get non-error behavior there too: we just get NaNs back instead of the system throwing an exception. So, as mentioned here, similarly to how invalid floating point arithmetic results in NaNs and not an exception, keep this in mind. Gotcha number four, non-array inputs. Again, this is by design, it's not a bug. In NumPy, if we try to do a sum and we pass a Python list, one, two, three here, we are going to get the expected result, so that's going to be six, I guess. In JAX, on the other hand, if we try to do this, we're going to get an exception: sum requires ndarray or scalar arguments, got list at position 0. So why is that?
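Before answering that, here is the out-of-bounds behaviour from above as a minimal sketch - updates past the end are silently dropped, and reads are clamped to the last valid index:

```python
import jax.numpy as jnp

x = jnp.arange(10)
print(x.at[11].set(23))   # out-of-bounds update is dropped: the array comes back unchanged
print(x[11])              # out-of-bounds read is clamped to the last index: returns 9
```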
Let's imagine we try and make this more permissive, so if we pass a Python list, we're going to convert it here to a Jax array, and now Jax sum will work on the Jax array, so this thing will work So now let's create a list, X is a simple list, now let's try to make Jaxper to understand what will happen if we called Jit on this function, so let's try and run this, and you'll see that we'll basically have this whole thing unrolled, which is super inefficient So the thing that happens is that we'll be passing element by element from this list, and Jax will not be able to optimize this and create some smart, to use some smart primitives for looping, and instead we're going to get this inefficient optimization So yeah, keep this in mind, now this is a really good place where upon which I can base my conclusion that Jax is really nice for researchers, but if you're a beginner, I don't think that optimization details should be more important than the ease of use, such as the fact that you will not have an exception if you pass in a Python list instead of a NumPy or Jax array So definitely things like this make me, if I were to recommend to a beginner, I would definitely still recommend PyTorch to be honest over Jax, but like if you're a researcher and you want to have a lot of flexibility and you want to have like a super optimized programs, I think Jax is a really good bet for those guys Anyways, let's continue, gotcha number five, random numbers, I mentioned this already, now let's see it in a bit more detail, so basically what we can see here is that NumPy has a stateful pseudo random number generator That means that if I execute these two functions in a row, we'll have, so this one will advance the state of the PRNG and this one after execution will also advance it and it's hidden from us, in order to understand this a bit better, let's kind of dissect this generator So we set the seed to some value, seed is I think set to zero here, and basically we can fetch, there is this function getState, we can fetch the actual state of the NumPy PRNG, and I'm gonna print certain metadata from that state Then I'm going to execute, like a sample, a single number, and then we're gonna fetch the state again, and we're gonna sample the number again here, and we're gonna fetch the state again and print it, so let's see what happens here So first thing you can notice here is that the numbers here are different because the state is internally advanced So what you can notice here is that first of all NumPy is using this thing called Mersenne Twister, hopefully I'm pronouncing it correctly, PRNG, and it's known to have a number of problems, there is a link in their documentation for why that is, it's not that important for us What is probably useful to know is that it basically has the state consists of 624 unsigned integers, and every time you sample from that generator, we are basically consuming the entropy of the generator Bottom line, what I want you to take out from this is that NumPy's PRNG has some problems, and it's stateful, which is problematic because remember we need to have pure functions, we are in the functional programming world On the other hand, this is how Jax operates, so basically you have the PRNG from Jax, you seed it with a certain value, and it gives back the key, and key is just a synonym for state So basically we are now manipulating the state of the PRNG externally as opposed to internally So key is simply like a tuple of two unsigned integers, 32-bit integers, so that's the state 
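A minimal sketch of that contrast - NumPy's generator advances a hidden state on every draw, while in JAX the state is just a key value that you hold yourself. The indexing into NumPy's state tuple below is an internal detail (the position within the Mersenne Twister state), so treat it as illustrative:

```python
import numpy as np
from jax import random

np.random.seed(0)
pos_before = np.random.get_state()[2]   # position within the Mersenne Twister state
_ = np.random.uniform()
pos_after = np.random.get_state()[2]
print(pos_before, pos_after)            # the position moved: sampling mutated the hidden state

key = random.PRNGKey(0)                 # JAX: the whole "state" is this explicit key
print(key)                              # e.g. [0 0], a pair of uint32s
```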
Now what's the trick here? So now if we were to sample using that same state, we are going to get obviously the same results, so that's different compared to the NumPy behavior And the reason is again because we are not modifying the state, we are not advancing the state, and that's why we always get the same results So again, important to notice here, state is preserved, state has not changed, and secondly, the results are the same So what do we do? Obviously this is not a random number, this is a constant function, so how do we get a randomness in Jax? So here is a trick, every time you want to create a new random value, you basically just call this split function, and it's going to return key and subkey Subkey is used to generate a novel number, and key can subsequently be used again in the split function to get novel key and novel subkeys So every time you want to generate a new number, you have to call this splitting So this may seem like very rough and problematic, but believe me, it helps solve various issues that are caused by randomness in libraries such as NumPy, etc When you try to use them outside of the context for which they were designed for, and that's like basically I guess CPU, single threaded programs, etc Okay, so let's run this cell now and see the results. So we have the old key, old key got converted into the new key, so that's the new state And we have this subkey which we use to generate the random number So a couple of notes here, first things first is that basically you can split into more subkeys than just the two of these And secondly, there is no semantic difference between the key and the subkey, these are basically recommendations for how to organize these states And basically you use key to generate novels to split and generate novel key and novel subkey, and you use the subkey to generate the current random number that you currently need to consume And that's pretty much it. So after having explained all these complex details about how to handle PRNGs compared to NumPy, is there any good reason for it? And the answer is yes. So why this design? And the answer is can the code with the current design, with NumPy's design be reproducible, parallelizable, and vectorizable? So in the case of NumPy, number one is obeyed, in the case where you have a single threaded program on the CPU, basically there is no problem So let's see this concrete example. 
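Before that example, here is the key-splitting pattern just described as a minimal sketch - the same key always produces the same sample, and split is how you derive fresh randomness:

```python
from jax import random

key = random.PRNGKey(42)
print(random.normal(key), random.normal(key))   # identical: the state never advances on its own

key, subkey = random.split(key)                 # keep `key` for later splits, consume `subkey` now
print(random.normal(subkey))                    # a new random number

key, *subkeys = random.split(key, num=4)        # you can also split into more than two keys at once
```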
Let's assume we have a function called bar that generates a random number and a function called baz that also generates a random number Finally we have a function foe, and I don't know if I'm pronouncing these right, so this function returns bar plus two times baz And I don't know if you can see the problem with this code, once we start and try to call this foe function on line 14 Now the problem is NumPy assumes a single threaded environment, and basically Python guarantees that this will be executed, to the best of my knowledge, from left to right Which means every time we run this, we are going to get the same result back, which means number one is obeyed, so that means we have reproducible programs So now what happens if we get this function, or if we, and the JIT decides to basically parallelize this program and calls bar on one accelerator, on one core, and baz on another one So what can happen is that the order of execution can change, and if that happens we may get different results So if the first function to be called returns 0.3 and the second one returns 0.4, depending on the order we'll either have 0.3 plus 2 times 0.4, or we'll have basically 0.4 plus 2 times 0.3 And that basically leads to different outcomes, which means this will not be a reproducible result in the case of parallel, like computing on parallel, like cores, machines, whatever Similarly for the vectorization part, basically NumPy guarantees that if you generate numbers, if you do this, if you iterate in a for loop and you generate sequentially random numbers That will give you the same results as if you were to just set size to 3, on the other hand, Jax does it differently, if you generate individually That will give you different results compared to if you generate from all the numbers at once using a specific key So, and here again you can see an example where we generate three subkeys using the split function and not only two keys, as I previously mentioned that So let's run this function and you can see that basically NumPy has the same results, whereas Jax does not have the same results So if you're using the common SIMD pattern, so that's the single instruction multiple data, which is common in machine learning Whereas you want to apply the same functionality across different batches, you sometimes want to have the same randomness being applied across all of the batches And NumPy does not allow to do that, that's how I understood this part about the violating the vectorization problem, let me know if I got this wrong Okay, that's pretty much it, that was the random numbers, we have just two more gotchas to go and that's it So gotcha number four is the control flow, it's fairly, we've seen something similar to this, so basically control flow plus grad, nothing, there is no problem Basically this transform function, so the grad transform function can deal with these types of conditioning on the value of a function I went ahead and ran this and you can see the results, basically, yeah, just analyze this function, you'll see that at three there is this jump where we have a piecewise defined function here The whole point is if we were to take a grad of this function and evaluate it at two and at four, so that's just before this discontinuity and just after it, we'll get valid gradients Let me run this again, just to update the state of the cell, and if we were to jit this function and basically it will fail because we are conditioning on x So we have to, the solution is to make it a static argument and then 
it's going to work - so if I run this, it works. And we already saw how to handle this case of conditioning on a value, so let's see some more interesting cases. Here we are conditioning again on the value, but this time the value decides the length, the number of iterations of the loop in this function. And the way to get around this is again to make it a static argument, which will make jit trace this function using, for x, the abstract tracer - so just the shape and data type - whereas for n it's going to use a concrete value. So let me run this cell and see what we get. Okay, as you can see, we have some huge, huge jaxpr here, and the reason is that, as you can see here, 15 was passed for n, and unrolling is the best jit can do with these native Python for loops. Again, importantly, you should not change static values too often, otherwise you'll be triggering recompilation all of the time, and then the overhead may be detrimental to the speed of your application. Let's see how this can be avoided and made a bit more optimal. A better way to do this is to use the lower-level API again, so the lax API, where there is, as we saw, this fori_loop function, and we can rewrite the above problem with it. I went ahead and just rewrote it here, so this is the same function as the one above, just using the lax API, and if I now call make_jaxpr and run this, we'll see that this jaxpr is way more succinct, more concise compared to the one above, and thus more efficient, I guess - I haven't profiled this example, but yeah, I can assume it's more efficient. You can go ahead and analyze these two cells at your own pace, but the whole point was just to be aware that using these lower-level API functions you can sometimes get more out of it. Finally, the only reason I have this one is to show that you can sometimes condition on the dimensionality of your data, and that's allowed because, imagine, this means that for the whole class of inputs where x is two-dimensional, a single function can be cached, fetched and reused. So let's run this one and let's see the results. I passed an input array which is not two-dimensional, and because of that we took this branch, which means whatever we get as the input we just return as the output, and that's why we have this super simple jaxpr for this particular function being traced with this input. Hopefully all of this helped you crystallize and understand JIT, because I think that's arguably the hardest part to understand about JAX, how JIT works in the background. That's it. The final, final, final gotcha is how to handle NaNs in JAX. The usual non-error behavior is to simply return a NaN in the case of operations such as this one, division by zero. So if I were to comment out this thing here and run this cell, we won't have any error, we'll just have a NaN. If we want to actually debug and understand where the NaNs came from, if you want the program to throw exceptions, you can do, for example, this - you can find more in the docs - but basically there is a way to do it, and now it will be throwing exceptions, I guess. Let me try again. Yeah, now it's throwing an exception. So yeah, let's keep that in mind.
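Two minimal sketches for the points above - a lax.fori_loop version of a value-dependent loop (so jit does not have to unroll it), and the flag that turns silent NaNs into exceptions. Both are illustrative rather than the exact notebook cells:

```python
import jax
import jax.numpy as jnp
from jax import jit, lax

@jit
def sum_first_n(x, n):
    # n can be a traced value here; fori_loop becomes an XLA loop instead of being unrolled
    return lax.fori_loop(0, n, lambda i, acc: acc + x[i], jnp.zeros((), x.dtype))

x = jnp.arange(10.)
print(sum_first_n(x, 5))   # 0 + 1 + 2 + 3 + 4 = 10.0

# Make NaNs raise instead of silently propagating (useful while debugging):
jax.config.update("jax_debug_nans", True)
```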
Final cell: JAX enforces single precision. Why? Because nowadays it's actually fairly common to train your models, especially the big models like big transformers, in FP16 or mixed precision or even FP8, which means 8 bits only. And because NumPy is aggressive in promoting certain variables to double, which means 64 bits, JAX decided by design to enforce 32 bits. That may again lead to some peculiar behavior, such as this one: you ask for a vector of a thousand random numbers that are 64 bits long, and if we print the dtype of the actual array we get float32, which is not intuitive, right? So be aware of this; there is a way around it, you can set certain flags if you don't want this as the default behavior. As a quick summary, we saw the basics of JAX, such as the fact that it's based on the functional programming paradigm; we saw the various transform functions such as jit, grad and vmap; then we went deeper into JAX and saw the layered API; we saw many details of how JIT works; and we saw many gotchas that JAX has which may catch you by surprise, so you should be aware and cognizant of these. Finally, some conclusions of mine on who should be using JAX and who should be using other frameworks. As I said, I think JAX is very good if you're a researcher, you want to be very flexible and you want a powerful tool. Also, once you train big models, such as in whatever industry AI lab you're working at, you're going to be training big models, so having very optimized code is super important. But if you're a beginner, on the other hand, I think it may be a bit too harsh for you with the current state of JAX, and I don't think that's going to change, because all of these optimization details - such as the fact that you cannot pass a Python list, which will just throw an exception because it could hurt performance - because of all of that, and because of the functional paradigm, I think it's easier to just start with PyTorch. Especially when we get to neural networks, it's more natural to approach them from the object-oriented perspective, at least in my opinion - maybe I lack experience with JAX, but that seems to be the case. Anyways, in the next video we're going to cover some concepts such as PyTrees and handling state, which are the basic components we'll need to later build neural networks from scratch in pure JAX and also in Haiku or Flax. Hopefully you found this video useful - it took a lot of time to prepare the notebook and everything I wanted to show you here - so if you did like it, consider sharing it out, and also consider subscribing to this channel if you haven't already. Also do join the Discord community, there are a lot of smart people there and the community has more than a thousand members at this point in time, so somebody will help you out if you have some doubts or questions. In any case, until next time, bye bye!
[{"start": 0.0, "end": 7.96, "text": " What's cracking guys with this video? I'm kicking off a series of video tutorials about Jax. So what is Jax exactly?"}, {"start": 7.96, "end": 9.96, "text": " Let's see the official box here"}, {"start": 10.8, "end": 14.32, "text": " Basically Jax is a autograd and XLA which stands for"}, {"start": 15.120000000000001, "end": 17.12, "text": " accelerated linear algebra"}, {"start": 17.16, "end": 22.02, "text": " compiler which is basically a compiler developed by Google that co-evolved with the"}, {"start": 22.88, "end": 24.84, "text": " development of TPU units"}, {"start": 24.84, "end": 30.04, "text": " We're gonna see a lot of XLA later. So I thought just kind of explaining the terminology"}, {"start": 30.04, "end": 35.88, "text": " So they're basically brought together for high-performance numerical computing and machine learning research"}, {"start": 36.08, "end": 40.92, "text": " so in a nutshell we can treat Jax as a like a machine learning library and"}, {"start": 41.6, "end": 46.8, "text": " you need to be cautious not to compare Jax directly to pytorch or to tensorflow because"}, {"start": 47.16, "end": 52.879999999999995, "text": " Jax is kind of like a mid-level like library and people have been building on top of it"}, {"start": 52.88, "end": 56.400000000000006, "text": " So we have as for the high-level deep learning libraries"}, {"start": 56.400000000000006, "end": 60.800000000000004, "text": " we have the two most popular ones are Haiku coming from from DeepMind and"}, {"start": 61.040000000000006, "end": 64.16, "text": " We have a flex coming from the Google research team"}, {"start": 64.72, "end": 72.28, "text": " So basically aside from that DeepMind team has been developing for every single like domain particular domain such as graphML"}, {"start": 72.28, "end": 75.84, "text": " or RL they've been developing like a purposeful a"}, {"start": 76.80000000000001, "end": 80.08, "text": " Dedicated library built on top of Jax. So for example for graphML"}, {"start": 80.08, "end": 88.12, "text": " They've built up a thing called GRF or Jax for graphs. 
That would be like a acronym pretty much having mentioned flex"}, {"start": 88.12, "end": 91.16, "text": " I think it's worth noting that on hugging face hub"}, {"start": 91.16, "end": 96.0, "text": " You have more than 5,000 models at the at the time of this recording"}, {"start": 96.6, "end": 98.92, "text": " That you can use download and play with them"}, {"start": 98.92, "end": 103.75999999999999, "text": " And I think most of them were written in flex and that's the reason why I'll be covering both"}, {"start": 103.92, "end": 108.38, "text": " Haiku and flex in some of the future videos that out of the way"}, {"start": 108.38, "end": 111.88, "text": " Let me tell you what I'm going to cover in this series of videos"}, {"start": 111.88, "end": 118.22, "text": " So in the first two videos I'm going to explain the nitty-gritty details of Jax and then we're going to do"}, {"start": 118.66, "end": 124.74, "text": " We're going to code neural networks from scratch in both pure Jax as well as flex as well as Haiku"}, {"start": 124.89999999999999, "end": 129.34, "text": " also those of you who know me, you know that I'm a big fan of a top-down approach when teaching and"}, {"start": 129.9, "end": 130.74, "text": " Basically here"}, {"start": 130.74, "end": 136.46, "text": " I'm going to go a bit differently because the topic is kind of complex and the first two videos are going to go"}, {"start": 136.46, "end": 142.74, "text": " So the first one is going to go from mid-level of abstraction down to lower lower levels of abstraction"}, {"start": 143.18, "end": 146.82, "text": " And we're going to understand the nitty-gritty details of Jax in this video"}, {"start": 146.94, "end": 150.34, "text": " The second one will be mid-level to higher levels of abstraction"}, {"start": 150.34, "end": 156.66, "text": " And as I said the last three videos there will be hands-on and those are going to be on a high level of abstraction"}, {"start": 156.66, "end": 161.66, "text": " We won't be digging deeper into the primitives. We're just going to build from the components"}, {"start": 161.66, "end": 165.14000000000001, "text": " We learned in the first two videos. 
We're going to build neural networks from from scratch"}, {"start": 165.14, "end": 169.77999999999997, "text": " So aside from that as you can see here, I have a code skeleton already"}, {"start": 170.5, "end": 171.94, "text": " here and"}, {"start": 171.94, "end": 178.01999999999998, "text": " That there is a reason behind that first if I was writing if I was coding everything from scratch"}, {"start": 178.26, "end": 180.85999999999999, "text": " That would take this video would be two hours long"}, {"start": 181.42, "end": 188.2, "text": " Secondly, this is way more natural because you usually never ever build build stuff from your head from scratch"}, {"start": 188.2, "end": 194.51999999999998, "text": " You take you either take some snippets from the documentation from stack overflow or from somewhat somebody else's repo"}, {"start": 194.52, "end": 197.06, "text": " So it's very I want to stress the understanding here"}, {"start": 197.06, "end": 200.10000000000002, "text": " It's important that we understand the code so that we can modify it"}, {"start": 200.10000000000002, "end": 204.94, "text": " And I think it's less important for you to understand how to code this up from your head because nobody does that"}, {"start": 204.94, "end": 211.56, "text": " I mean sometimes I even like forget how to write down if name equals main in the main like file of a Python program"}, {"start": 211.56, "end": 216.66000000000003, "text": " So you just search for stuff if you don't understand how to write something down. Okay, let's dig down into the code"}, {"start": 217.10000000000002, "end": 219.66000000000003, "text": " first off I'm in Google Colab here I"}, {"start": 219.66, "end": 223.82, "text": " I enabled the GPU accelerator in the background and"}, {"start": 224.46, "end": 228.7, "text": " The this Jupiter file which I'm gonna share with you after this video"}, {"start": 228.7, "end": 233.78, "text": " I'm gonna open source into my github is structured into two like sections"}, {"start": 233.78, "end": 240.26, "text": " The first one is warming up with Jax basically we're on the mid level of abstraction pyramid. Let's go that way and then we're going deeper"}, {"start": 241.14, "end": 245.24, "text": " Understanding the nitty-gritty details behind many of Jax primitive"}, {"start": 245.24, "end": 252.20000000000002, "text": " Transform functions such as JIT grad etc. We're gonna see those in a second. Okay, let's let's let's start. So first things first"}, {"start": 252.56, "end": 253.96, "text": " I"}, {"start": 253.96, "end": 256.72, "text": " want you to understand that Jax is"}, {"start": 257.68, "end": 260.32, "text": " Jax's API is very similar to numPy's and"}, {"start": 260.96000000000004, "end": 267.14, "text": " Second thing I want you to take off from this section is that Jax code is accelerator agnostic and we're going to see what that means"}, {"start": 267.14, "end": 269.84000000000003, "text": " Exactly. Okay, let's let's dig into the actual code"}, {"start": 269.84000000000003, "end": 274.6, "text": " so as I said for the most part the syntax is the same as numPy's and"}, {"start": 274.6, "end": 277.20000000000005, "text": " You can see here. 
We are importing from Jax"}, {"start": 277.20000000000005, "end": 281.8, "text": " There is this module called numPy and the convention is just to import it as J and P"}, {"start": 282.1, "end": 287.56, "text": " They also have a scipy API so Jax scipy, but we're not going to use that one in this view"}, {"start": 288.32000000000005, "end": 293.46000000000004, "text": " So these functions that are called the transform functions are a vital component of Jax"}, {"start": 293.48, "end": 299.04, "text": " So those are grad JIT V map and P map and you'll be seeing these a lot"}, {"start": 299.48, "end": 303.06, "text": " throughout this this notebook and in general using Jax so"}, {"start": 303.06, "end": 308.06, "text": " aside from that Jax's structure so the API structures like an onion as"}, {"start": 308.06, "end": 313.3, "text": " Usually software like libraries are so we have the high level API, which is a numPy"}, {"start": 313.3, "end": 319.46, "text": " Then we have this thing called called Lex and finally we have I guess XLA, which is I think in simple spots, right?"}, {"start": 319.46, "end": 321.46, "text": " Okay, so because I'm a"}, {"start": 322.46, "end": 327.9, "text": " Etymology nerd. I just thought writing this down. Basically Lex is just an anagram for XLA"}, {"start": 327.9, "end": 333.38, "text": " Which is a compiler and I'm not completely sure how people came up with the name Jax"}, {"start": 333.38, "end": 336.29999999999995, "text": " So if anybody knows feel free to type it down in the comment section"}, {"start": 336.29999999999995, "end": 337.85999999999996, "text": " Let's see a couple of facts about Jax"}, {"start": 337.85999999999996, "end": 343.34, "text": " So the first thing I already mentioned the syntax is remarkably similar to numPy's if I run this we can see we have"}, {"start": 343.34, "end": 348.85999999999996, "text": " We have defined thousand points on the x-axis equally spaced from 0 to 10"}, {"start": 348.85999999999996, "end": 354.29999999999995, "text": " And I just plot the function to sine cosine and we just plotted we get the chart here"}, {"start": 354.3, "end": 361.06, "text": " You can see that Jax syntax is completely similar like it's actually same if you just switch MP for JMP"}, {"start": 361.06, "end": 364.06, "text": " You get linspace sine cosine plot"}, {"start": 364.06, "end": 367.58000000000004, "text": " You can like as you can see here the arrays can be directly"}, {"start": 368.06, "end": 375.58000000000004, "text": " Inserted into the matplotlibs plot function. So if I run this one we get the same results. Okay, so that's it"}, {"start": 375.58000000000004, "end": 377.58000000000004, "text": " So that's the first fact second fact"}, {"start": 378.22, "end": 382.66, "text": " So that's something you need to get comfortable with and that's the functional programming"}, {"start": 382.66, "end": 387.06, "text": " Paradigm, especially if you're only familiar with the object-oriented programming paradigm"}, {"start": 387.06, "end": 389.06, "text": " This is going to be a bit rough"}, {"start": 389.06, "end": 394.86, "text": " But like I I promise you there are some awesome benefits that functional programming brings with it. So"}, {"start": 395.70000000000005, "end": 401.94000000000005, "text": " Jax arrays are immutable. 
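As a rough sketch of that NumPy-to-Jax similarity (the exact curve plotted in the notebook isn't spelled out here, so the 2*sin(x)*cos(x) below is only a stand-in):

import numpy as np
import jax.numpy as jnp

x_np = np.linspace(0, 10, 1000)
x_jnp = jnp.linspace(0, 10, 1000)

# the same expression works in both APIs - only the namespace changes
y_np = 2 * np.sin(x_np) * np.cos(x_np)
y_jnp = 2 * jnp.sin(x_jnp) * jnp.cos(x_jnp)

# matplotlib accepts Jax arrays directly, e.g. plt.plot(x_jnp, y_jnp)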
So that's one of the three like that's one of the common properties you see in functional programming"}, {"start": 402.46000000000004, "end": 406.34000000000003, "text": " in functional programs, so what that means is the following"}, {"start": 406.34, "end": 413.61999999999995, "text": " If we take a like a NumPy array here, so numbers from 0 to 9 we print them out and I then modify the"}, {"start": 414.09999999999997, "end": 419.5, "text": " The array at index 0 and I add some random number like 23 and if I run this"}, {"start": 419.97999999999996, "end": 420.58, "text": " Everything is fine"}, {"start": 420.58, "end": 427.14, "text": " We have the original array and we have the modified array and as you can see X is modified in place, which means"}, {"start": 427.78, "end": 432.09999999999997, "text": " The array is by definition mutable on the other hand if we take this"}, {"start": 432.1, "end": 439.06, "text": " The same program and run it here. So I create this time Jax array instead of NumPy array and I try to modify the"}, {"start": 439.54, "end": 444.06, "text": " Array in place. This is what's going to happen. So we are going to get an exception here as you can see"}, {"start": 444.90000000000003, "end": 451.78000000000003, "text": " Basically, Jax is complaining that we it says here object does not support item assignment. So you cannot"}, {"start": 452.58000000000004, "end": 457.78000000000003, "text": " Basically basically modify the array in place. So this is a solution. It's kind of rough"}, {"start": 457.78, "end": 464.73999999999995, "text": " But yeah, you have to do this at syntax with index and then set the value and this is going to work"}, {"start": 464.73999999999995, "end": 471.61999999999995, "text": " Finally, you can see that we are not like modifying X in place. We basically allocate additional object in the memory"}, {"start": 472.26, "end": 476.17999999999995, "text": " Called Y and now we have both the original array as well as the"}, {"start": 476.73999999999995, "end": 481.61999999999995, "text": " Novel array so you may think here. Well, this is kind of suboptimal, right?"}, {"start": 481.61999999999995, "end": 484.41999999999996, "text": " Well, the trick is and we're going to see that a bit later"}, {"start": 484.42, "end": 488.98, "text": " Like Jax uses something called JIT which uses something called XLA which"}, {"start": 489.54, "end": 494.1, "text": " Basically takes care of this stuff behind the curtains. So you don't have to worry about this"}, {"start": 494.1, "end": 497.06, "text": " So if it notices that you're not using the original array"}, {"start": 497.06, "end": 503.86, "text": " It's going to just do this in place and avoid allocating another memory object. 
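A minimal sketch of the mutability difference being described here (the index 0 and the value 23 follow the narration; everything else is plain NumPy/Jax):

import numpy as np
import jax.numpy as jnp

x_np = np.arange(10)
x_np[0] = 23                # NumPy arrays are mutable: modified in place

x_jnp = jnp.arange(10)
# x_jnp[0] = 23             # raises TypeError: Jax arrays do not support item assignment
y = x_jnp.at[0].set(23)     # functional update: returns a new array, x_jnp stays unchanged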
So yeah, that's taken care of"}, {"start": 504.66, "end": 507.78000000000003, "text": " So that's the the second thing I wanted you to understand"}, {"start": 507.78000000000003, "end": 512.34, "text": " So the functional programming paradigm is that you can use the same array in place"}, {"start": 512.34, "end": 517.38, "text": " So the functional programming paradigm is something that Jax relies upon"}, {"start": 518.02, "end": 525.0600000000001, "text": " Finally, we have the fact that Jax handles random numbers completely differently compared to NumPy"}, {"start": 525.0600000000001, "end": 527.14, "text": " And there is a very very good reason for this"}, {"start": 527.14, "end": 528.82, "text": " We're going to see that in a moment"}, {"start": 528.82, "end": 536.5, "text": " But the key point here is you have to create this basically key which is a fancy name for like a state"}, {"start": 536.5, "end": 544.82, "text": " So whereas NumPy's pseudo random number generators are stateful, Jax is not stateful because it's"}, {"start": 545.38, "end": 549.38, "text": " As I said, that's a consequence of functional programming paradigm again"}, {"start": 549.38, "end": 553.14, "text": " So key is just a synonym for a state in Jax"}, {"start": 553.14, "end": 556.42, "text": " And what you have to do when you're generating random numbers"}, {"start": 556.42, "end": 561.78, "text": " You have to pass the state explicitly and not implicitly such like in NumPy"}, {"start": 561.78, "end": 564.9, "text": " So here we created 10 like random numbers"}, {"start": 564.9, "end": 566.8199999999999, "text": " We sampled them from a normal distribution"}, {"start": 567.62, "end": 571.62, "text": " And we can see here that we get 10 random numbers"}, {"start": 571.62, "end": 577.54, "text": " The thing I want you to notice here is that the type of this x array is of device array"}, {"start": 577.54, "end": 579.54, "text": " So that basically that's a trick"}, {"start": 579.54, "end": 583.22, "text": " And that's why Jax code is accelerator agnostic"}, {"start": 583.9399999999999, "end": 588.5799999999999, "text": " Basically, Jax automatically puts this array onto the accelerator"}, {"start": 588.5799999999999, "end": 593.78, "text": " So in this case, I'm using here my accelerator is set up to GPU"}, {"start": 593.78, "end": 598.98, "text": " If I were to set it to TPU, the code would be set directly to the TPU unit"}, {"start": 598.98, "end": 600.02, "text": " And that's very cool"}, {"start": 600.5799999999999, "end": 602.5799999999999, "text": " We're going to see a consequence of that a bit later"}, {"start": 602.5799999999999, "end": 607.3, "text": " Basically, you don't have to do the laborious to device from PyTorch syntax, etc"}, {"start": 607.3, "end": 609.6999999999999, "text": " So this story naturally leads me to fact number four"}, {"start": 609.6999999999999, "end": 614.02, "text": " And that's that Jax's accelerator agnostic same code runs everywhere"}, {"start": 614.02, "end": 618.98, "text": " You do not have to write particular syntax for a particular accelerators"}, {"start": 618.98, "end": 619.54, "text": " That's very cool"}, {"start": 619.54, "end": 625.4599999999999, "text": " Especially in the future where we see more and more companies being like building custom AI"}, {"start": 625.4599999999999, "end": 627.9399999999999, "text": " Accelerators and chips, not just TPUs and GPUs"}, {"start": 627.9399999999999, "end": 631.38, "text": " But like companies such as Graphcore, Cerebras, 
etc"}, {"start": 631.38, "end": 632.18, "text": " Let's see it in practice"}, {"start": 632.18, "end": 634.5, "text": " So here we define a Jax array"}, {"start": 635.14, "end": 637.4599999999999, "text": " Again, we see the syntax where we have the key"}, {"start": 637.4599999999999, "end": 642.18, "text": " We have to pass explicitly the key or the state of the pseudo random number generator"}, {"start": 642.18, "end": 646.3399999999999, "text": " And as you can notice, there is some difference in the API design here"}, {"start": 646.3399999999999, "end": 647.86, "text": " We have D type here"}, {"start": 647.86, "end": 649.86, "text": " We're using for NumPy, we're using S type"}, {"start": 649.86, "end": 651.0600000000001, "text": " There are some minor differences"}, {"start": 651.0600000000001, "end": 657.78, "text": " But like in general, syntaxes are very, very similar between Jax's NumPy and NumPy API"}, {"start": 658.98, "end": 662.1800000000001, "text": " I mentioned that we don't have to use two device anymore"}, {"start": 662.1800000000001, "end": 663.46, "text": " So that's in PyTorch"}, {"start": 663.46, "end": 667.86, "text": " So this array here is directly pushed to the GPU unit"}, {"start": 667.86, "end": 669.38, "text": " Whereas this one is on the CPU"}, {"start": 670.1, "end": 674.66, "text": " If we do some time profiling here, we can see that if let me run this thing"}, {"start": 674.66, "end": 679.86, "text": " Let me run this thing, basically this first line, line number 10"}, {"start": 680.74, "end": 685.06, "text": " We do like a basically matrix multiply directly on the GPU"}, {"start": 685.62, "end": 686.9, "text": " And that's very fast"}, {"start": 686.9, "end": 689.62, "text": " We can see that we're going to see results in a second"}, {"start": 689.62, "end": 693.06, "text": " The second line here, because we have a NumPy dot"}, {"start": 693.06, "end": 695.38, "text": " And because we have NumPy arrays which are on CPU"}, {"start": 696.02, "end": 697.3, "text": " It's going to be way slower"}, {"start": 697.3, "end": 702.26, "text": " And NumPy only works, the reason why Jax exists is because NumPy only works with CPUs"}, {"start": 702.26, "end": 705.06, "text": " And we do need to leverage existing accelerators"}, {"start": 705.7, "end": 711.86, "text": " The third line here does something like a modified version of the first two lines"}, {"start": 711.86, "end": 714.34, "text": " Basically, we're using Jax's dot product"}, {"start": 714.34, "end": 720.98, "text": " And we are using NumPy arrays which will cause this line to push the devices to the GPU"}, {"start": 720.98, "end": 723.22, "text": " And then do the dot product"}, {"start": 723.22, "end": 726.1, "text": " And because of that overhead, we're going to have a slower result"}, {"start": 726.1, "end": 727.62, "text": " Hopefully for this third line"}, {"start": 728.5, "end": 730.34, "text": " And let's see the numbers here"}, {"start": 730.34, "end": 732.74, "text": " So we have 27 milliseconds for the first line"}, {"start": 732.74, "end": 735.5400000000001, "text": " We have 437 milliseconds for the second line"}, {"start": 735.5400000000001, "end": 739.22, "text": " Because as I said, NumPy is very slow, it's running on CPU"}, {"start": 739.22, "end": 743.22, "text": " Finally, we have 97 milliseconds for this line here"}, {"start": 743.22, "end": 748.34, "text": " Because we have the overhead of sending the data from the CPU, from the host to the GPU unit"}, {"start": 749.38, "end": 752.26, 
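A sketch of the kind of timing comparison being walked through (Colab-style %timeit; the matrix size is made up since it isn't stated in the recording):

import numpy as np
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)                    # the explicit PRNG state ("key")
x_jnp = jax.random.normal(key, (3000, 3000), dtype=jnp.float32)  # DeviceArray, lives on the accelerator
x_np = np.random.normal(size=(3000, 3000)).astype(np.float32)    # plain NumPy array, lives on the host

# block_until_ready() waits for the asynchronous dispatch, so we time the compute, not just the dispatch
%timeit jnp.dot(x_jnp, x_jnp.T).block_until_ready()   # runs on the GPU/TPU
%timeit np.dot(x_np, x_np.T)                          # NumPy, CPU only
%timeit jnp.dot(x_np, x_np.T).block_until_ready()     # pays a host-to-device transfer on every call

x_dev = jax.device_put(x_np)                          # push the NumPy array to the accelerator once
%timeit jnp.dot(x_dev, x_dev.T).block_until_ready()   # now comparable to the first line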
"text": " And the final number 26 is the same as this one"}, {"start": 752.26, "end": 753.46, "text": " And that's this one here"}, {"start": 753.46, "end": 758.1, "text": " So basically what we do, we can explicitly, using this device put function"}, {"start": 758.1, "end": 761.86, "text": " We can push the DRA to the GPU"}, {"start": 761.86, "end": 766.74, "text": " And then now we have equivalent lines between this one and line number 10"}, {"start": 767.46, "end": 769.46, "text": " And that's it, that's why we have the same results here"}, {"start": 770.34, "end": 771.78, "text": " A couple of notes here"}, {"start": 771.78, "end": 775.38, "text": " First, I'm using GPU as a synonym for AI accelerator"}, {"start": 775.38, "end": 779.62, "text": " In reality, as I said, Jax's accelerator agnostic"}, {"start": 779.62, "end": 785.38, "text": " Which means basically, depending on what I set here in the runtime setting here"}, {"start": 785.38, "end": 789.9399999999999, "text": " We are going to run this either on GPU or TPU or whatever they have supported"}, {"start": 789.9399999999999, "end": 792.42, "text": " A second note is this block until ready"}, {"start": 792.42, "end": 797.9399999999999, "text": " You can notice that every time I'm using Jax's dot function, I'm using this block until ready"}, {"start": 797.9399999999999, "end": 803.3, "text": " The reason is Jax is using something called asynchronous dispatch system in the background"}, {"start": 803.3, "end": 806.18, "text": " Which means if I were to run this thing here"}, {"start": 806.18, "end": 807.9399999999999, "text": " So let me just copy paste this"}, {"start": 807.9399999999999, "end": 809.78, "text": " So if I were to copy paste this here"}, {"start": 809.78, "end": 815.4599999999999, "text": " If I were to run this and maybe like let me assign that to a variable"}, {"start": 815.4599999999999, "end": 818.42, "text": " And now I do some coding here, blah, blah, blah"}, {"start": 818.42, "end": 821.4599999999999, "text": " The thing is this line is going to immediately return"}, {"start": 822.18, "end": 825.06, "text": " And this code here is going to start executing"}, {"start": 825.06, "end": 828.98, "text": " Because this line actually just delegates the task to the accelerator"}, {"start": 828.98, "end": 832.42, "text": " And we're not blocked, we're not waiting for this line to execute"}, {"start": 832.42, "end": 839.14, "text": " Which means the variable, this temporary variable is still not filled in"}, {"start": 839.14, "end": 840.58, "text": " Let's put it that way"}, {"start": 840.58, "end": 843.9399999999999, "text": " That's a fairly neat feature that makes things even faster"}, {"start": 843.9399999999999, "end": 845.62, "text": " But you need to be cognizant of it"}, {"start": 845.62, "end": 847.86, "text": " Especially when you're doing this time profiling"}, {"start": 847.86, "end": 854.58, "text": " You want to measure the actual computation and not the time that's necessary to dispatch"}, {"start": 854.58, "end": 856.8199999999999, "text": " Just delegate the task to the accelerator"}, {"start": 856.8199999999999, "end": 857.78, "text": " Just be aware of that"}, {"start": 857.78, "end": 859.22, "text": " Nice, those were some basics"}, {"start": 859.22, "end": 861.9399999999999, "text": " Now we're going to cover the transform functions"}, {"start": 861.9399999999999, "end": 864.58, "text": " But before that, let me do a quick overview"}, {"start": 864.58, "end": 867.3, "text": " So Jax's AI 
accelerator agnostic"}, {"start": 867.3, "end": 869.54, "text": " Jax handles random numbers a bit differently"}, {"start": 869.54, "end": 872.42, "text": " We're going to see why that is important a bit later"}, {"start": 872.42, "end": 877.2199999999999, "text": " Basically reproducibility in the environments where you have multiple accelerators"}, {"start": 877.2199999999999, "end": 882.02, "text": " Where you're doing parallel programming is like possible with Jax"}, {"start": 882.02, "end": 886.9799999999999, "text": " Whereas NumPy has some problems there because it was designed mainly for CPU programs"}, {"start": 887.78, "end": 892.5799999999999, "text": " Then as I said, functional programming paradigm is something you need to get comfortable with"}, {"start": 892.5799999999999, "end": 896.66, "text": " And finally, basically the syntax is very similar to NumPy"}, {"start": 896.66, "end": 899.78, "text": " That's a high level overview of what Jax is"}, {"start": 899.78, "end": 904.42, "text": " And now we're going to get deeper into these transform functions"}, {"start": 904.42, "end": 907.6999999999999, "text": " So as I said, JIT compiles your functions using this XLA"}, {"start": 907.6999999999999, "end": 914.5799999999999, "text": " And basically doing that, it catches some functions and makes it very, very fast to execute our programs"}, {"start": 914.5799999999999, "end": 918.9, "text": " I'm going to just run this simple visualization function here"}, {"start": 918.9, "end": 923.86, "text": " And I'm going to show you a simple example where how we can JIT a function and make it faster"}, {"start": 923.86, "end": 926.74, "text": " So here I just defined this function"}, {"start": 926.74, "end": 929.7, "text": " It's a cellU function, it's just an activation function"}, {"start": 929.7, "end": 931.38, "text": " The details here are not that important"}, {"start": 931.38, "end": 934.58, "text": " That's why I have this visualize function here"}, {"start": 934.58, "end": 935.7, "text": " I'm just going to run it"}, {"start": 935.7, "end": 937.62, "text": " We're going to see how it looks like"}, {"start": 937.62, "end": 942.42, "text": " You can see here up until zero and that's defined here"}, {"start": 942.42, "end": 945.3000000000001, "text": " When X is greater than zero, we just have X"}, {"start": 945.3000000000001, "end": 947.86, "text": " Which means we have a linear function here"}, {"start": 947.86, "end": 953.3000000000001, "text": " And when the number X is smaller than zero, we just have X"}, {"start": 953.3, "end": 958.3399999999999, "text": " And smaller than zero, we have some combination of exponents here with some coefficients, etc"}, {"start": 958.3399999999999, "end": 959.38, "text": " But that's not important"}, {"start": 959.38, "end": 962.26, "text": " The important part here is we can transform cellU"}, {"start": 962.26, "end": 966.66, "text": " So we pass the function inside of this JIT transform function"}, {"start": 966.66, "end": 968.18, "text": " And we get a function back"}, {"start": 968.18, "end": 969.6999999999999, "text": " And this function is compiled"}, {"start": 969.6999999999999, "end": 971.4599999999999, "text": " Actually, it's not exactly compiled"}, {"start": 971.4599999999999, "end": 973.62, "text": " We first need to trace it by calling it once"}, {"start": 973.62, "end": 978.5, "text": " But for the sake of argument, we just have a super fast function right now here"}, {"start": 978.5, "end": 979.8599999999999, "text": " And let's 
benchmark it"}, {"start": 979.8599999999999, "end": 983.2199999999999, "text": " So let me allocate a vector of million data points"}, {"start": 983.22, "end": 986.4200000000001, "text": " We have million random numbers here"}, {"start": 986.4200000000001, "end": 989.14, "text": " And we pass them through the cellU function"}, {"start": 989.14, "end": 995.7, "text": " And we basically do a time profiling of both the normal version as well as the JIT version"}, {"start": 995.7, "end": 998.5, "text": " Let me run it again and let's see the numbers"}, {"start": 998.5, "end": 1002.9, "text": " So the thing here is this function is not overly complex"}, {"start": 1002.9, "end": 1004.9, "text": " So the results we get with JIT"}, {"start": 1004.9, "end": 1008.98, "text": " So we will not get a huge performance boost here"}, {"start": 1008.98, "end": 1011.5400000000001, "text": " But imagine once you start training neural networks"}, {"start": 1011.54, "end": 1018.26, "text": " Where JIT starts shining because then the optimizer, the compiler has a bigger freedom"}, {"start": 1018.26, "end": 1022.74, "text": " To do various, various optimization tricks such as fusing the operators"}, {"start": 1022.74, "end": 1027.46, "text": " Such as avoiding allocating certain temporary structures, etc"}, {"start": 1027.46, "end": 1031.22, "text": " And things get really, really fast like orders of magnitude faster"}, {"start": 1032.1, "end": 1038.1, "text": " Here we can see that basically we have 1.96 milliseconds using the normal version"}, {"start": 1038.1, "end": 1041.78, "text": " And then we have only 121 microseconds using the JIT version"}, {"start": 1041.78, "end": 1045.3, "text": " So even in this simple example, we get performance benefits"}, {"start": 1045.3, "end": 1048.26, "text": " So for now, I'm going to leave it here"}, {"start": 1048.26, "end": 1050.74, "text": " I'm going to stay like on a high level here"}, {"start": 1051.6999999999998, "end": 1053.86, "text": " We don't have to understand currently how JIT works"}, {"start": 1053.86, "end": 1057.86, "text": " But like you just need to know when you see JIT, JIT makes functions run fast"}, {"start": 1057.86, "end": 1059.1399999999999, "text": " Let me try to be this guy"}, {"start": 1062.58, "end": 1063.3799999999999, "text": " Awesome"}, {"start": 1063.3799999999999, "end": 1066.1799999999998, "text": " Okay, let's go ahead here"}, {"start": 1066.18, "end": 1068.74, "text": " Second transform function, very important, is grad"}, {"start": 1069.78, "end": 1074.3400000000001, "text": " Basically it does the magic of automatic differentiation for you"}, {"start": 1074.3400000000001, "end": 1078.9, "text": " So the same thing as dot backward in PyTorch if you're coming from the PyTorch world"}, {"start": 1079.78, "end": 1086.3400000000001, "text": " And like a short note here, differentiation can be manual"}, {"start": 1086.3400000000001, "end": 1090.74, "text": " So basically you have a function, you manually calculate on the paper"}, {"start": 1090.74, "end": 1094.98, "text": " How the derivatives look like"}, {"start": 1094.98, "end": 1098.26, "text": " And then you can code that knowledge into your function"}, {"start": 1098.26, "end": 1100.1, "text": " And that's a manual differentiation"}, {"start": 1100.1, "end": 1101.6200000000001, "text": " Symbolic one is very similar"}, {"start": 1101.6200000000001, "end": 1105.46, "text": " It's just an automatic way to automate the manual differentiation"}, {"start": 1105.46, "end": 1109.3, 
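A rough reconstruction of the JIT comparison from a moment ago, assuming the activation in the notebook is SELU with its usual constants (the shapes and constants here are assumptions, not a copy of the notebook):

import jax
import jax.numpy as jnp

def selu(x, alpha=1.67, lmbda=1.05):
    # linear for x > 0, a scaled exponential below 0
    return lmbda * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)

selu_jit = jax.jit(selu)                  # traced on the first call, then the compiled version is cached

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1_000_000,))  # a million data points, as in the narration
selu_jit(x).block_until_ready()           # warm-up call: triggers tracing + XLA compilation

%timeit selu(x).block_until_ready()
%timeit selu_jit(x).block_until_ready()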
"text": " Whereby the program is using those rules such as product rule, etc"}, {"start": 1109.3, "end": 1111.94, "text": " To build up derivatives of a function"}, {"start": 1112.66, "end": 1115.46, "text": " And numeric functions are things like finite derivatives"}, {"start": 1115.46, "end": 1118.9, "text": " Where you use basically numeric methods to compute the derivatives"}, {"start": 1118.9, "end": 1121.46, "text": " And finally, the automatic one is the one we all love"}, {"start": 1121.46, "end": 1125.06, "text": " Which is used in every single deep learning framework we know of"}, {"start": 1125.94, "end": 1129.06, "text": " Okay, let's see how grad works"}, {"start": 1129.06, "end": 1131.46, "text": " So let's define a simple function here"}, {"start": 1131.46, "end": 1134.58, "text": " So it's just a sum of, as you can see here, logistic functions"}, {"start": 1134.58, "end": 1137.6200000000001, "text": " So this here inside of this is a logistic function or sigmoid"}, {"start": 1137.6200000000001, "end": 1143.38, "text": " And I just input, I just create like array of three values here, 0, 1, 2"}, {"start": 1143.38, "end": 1149.8600000000001, "text": " And I rename the function to loss because this could be used as a loss function"}, {"start": 1149.86, "end": 1151.6999999999998, "text": " Just giving some semantics"}, {"start": 1151.6999999999998, "end": 1156.02, "text": " And finally, we can do grad by just wrapping loss into grad"}, {"start": 1156.02, "end": 1158.4199999999998, "text": " Again, we are passing function into a function"}, {"start": 1158.4199999999998, "end": 1162.5, "text": " Which is something you see often in the functional program in paradigm"}, {"start": 1163.2199999999998, "end": 1165.06, "text": " That gives us the grad loss"}, {"start": 1165.06, "end": 1167.86, "text": " So the gradient of the loss function which we can evaluate"}, {"start": 1167.86, "end": 1171.62, "text": " So this is contrast this to PyTorch or TensorFlow"}, {"start": 1171.62, "end": 1173.1399999999999, "text": " Where you just do backward"}, {"start": 1173.1399999999999, "end": 1175.06, "text": " Here you actually get the function back"}, {"start": 1175.06, "end": 1177.62, "text": " And you can evaluate it at particular points"}, {"start": 1177.62, "end": 1183.4599999999998, "text": " And by default, I said here the grad will take the derivative of the first parameter"}, {"start": 1183.4599999999998, "end": 1186.1799999999998, "text": " But here we only have one parameter so that doesn't matter"}, {"start": 1186.1799999999998, "end": 1190.4199999999998, "text": " And let me kind of run this and see what we got"}, {"start": 1191.1399999999999, "end": 1195.2199999999998, "text": " So we have some numbers which mean nothing to us because the function is fairly complicated"}, {"start": 1195.2199999999998, "end": 1199.9399999999998, "text": " If I were to change this to this sum of squares function"}, {"start": 1199.9399999999998, "end": 1205.3, "text": " So if I do something like this, we basically can get interpretable results"}, {"start": 1205.3, "end": 1207.78, "text": " So let's do a manual derivative of this function"}, {"start": 1207.78, "end": 1213.86, "text": " We're going to get so that's x1 squared plus x2 squared plus x3 squared"}, {"start": 1213.86, "end": 1215.86, "text": " Remember we are passing three numbers here"}, {"start": 1215.86, "end": 1224.6599999999999, "text": " So if we do derivative of that, that's gonna be x1 times 2 plus x2 times 2 plus x3 times 2"}, 
{"start": 1224.6599999999999, "end": 1230.26, "text": " And if I were to run this, we're going to get 024 which makes sense because we have 012"}, {"start": 1230.26, "end": 1231.3799999999999, "text": " Let me print this out"}, {"start": 1231.38, "end": 1238.66, "text": " So print x, so we have 012 and we get 024 because of this"}, {"start": 1238.66, "end": 1241.46, "text": " Grad basically does because all of these are bundled inside of x"}, {"start": 1242.18, "end": 1246.18, "text": " Grad basically does derivative of the function with respect to x1"}, {"start": 1246.8200000000002, "end": 1250.66, "text": " Which is 2x1 and then the same for the other variables"}, {"start": 1250.66, "end": 1253.5400000000002, "text": " Originally, remember we have something like this"}, {"start": 1253.5400000000002, "end": 1257.8600000000001, "text": " We have this thing plus x2 squared plus x3 squared"}, {"start": 1257.8600000000001, "end": 1259.3000000000002, "text": " That's the original function"}, {"start": 1259.3, "end": 1261.86, "text": " This here is the derivative, manual derivative"}, {"start": 1262.4199999999998, "end": 1264.8999999999999, "text": " Okay, that's a simple example"}, {"start": 1264.8999999999999, "end": 1271.3, "text": " Let's now go and make sure that this thing actually does what we expect it to do"}, {"start": 1272.02, "end": 1275.62, "text": " We can do that using this numeric differentiation method"}, {"start": 1275.62, "end": 1277.22, "text": " The finite differences method"}, {"start": 1277.22, "end": 1280.58, "text": " If I were to run this, we're going to get some results"}, {"start": 1280.58, "end": 1284.34, "text": " There are some small errors here which are a natural consequence of the fact that we're"}, {"start": 1284.34, "end": 1287.3, "text": " storing numbers using a finite number of bits"}, {"start": 1287.3, "end": 1289.3, "text": " So how this works is very simple"}, {"start": 1289.3, "end": 1294.6599999999999, "text": " We take, so remember x is this array here, 012"}, {"start": 1294.6599999999999, "end": 1297.3799999999999, "text": " Length of x thus will return us 3"}, {"start": 1297.3799999999999, "end": 1300.4199999999998, "text": " We get the i is just the identity matrix"}, {"start": 1300.4199999999998, "end": 1304.34, "text": " And so we are basically iterating and taking one hot vectors here"}, {"start": 1304.34, "end": 1309.78, "text": " And basically this is here is the, this is the formula of the actual derivative"}, {"start": 1309.78, "end": 1315.06, "text": " So you nudge the input vector along a certain dimension a little bit"}, {"start": 1315.06, "end": 1317.46, "text": " In the positive direction, in the negative one"}, {"start": 1317.46, "end": 1322.02, "text": " And then you divide that by the 2 epsilon because yeah, that's the intensity of the nudge"}, {"start": 1322.02, "end": 1329.46, "text": " And if you don't understand this, the best way to play with Colab is to just insert a new code line here"}, {"start": 1329.46, "end": 1331.86, "text": " We can see what this exactly does"}, {"start": 1331.86, "end": 1333.46, "text": " So let me just paste this here"}, {"start": 1333.46, "end": 1336.34, "text": " So we have this thing here"}, {"start": 1336.34, "end": 1338.5, "text": " So we have, as you can see here, identity matrix"}, {"start": 1339.3799999999999, "end": 1343.86, "text": " And so this for loop is going to take one vector as a value"}, {"start": 1343.86, "end": 1348.02, "text": " One vector at a time and pass it into this function 
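A minimal sketch of this sum-of-squares example (the function body is inferred from the manual derivative that was just written out):

import jax
import jax.numpy as jnp

def sum_of_squares(x):
    return jnp.sum(x ** 2)             # x1^2 + x2^2 + x3^2

x = jnp.array([0., 1., 2.])
grad_fn = jax.grad(sum_of_squares)     # by default differentiates w.r.t. the first argument
print(grad_fn(x))                      # [0. 2. 4.], i.e. 2 * x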
here"}, {"start": 1348.02, "end": 1355.3799999999999, "text": " And so you can see that basically we're going to evaluate f at the following"}, {"start": 1356.58, "end": 1359.9399999999998, "text": " So at least in the first iteration we're going to evaluate it at 0"}, {"start": 1359.9399999999998, "end": 1362.1799999999998, "text": " And then we're going to add this small epsilon here"}, {"start": 1362.1799999999998, "end": 1365.62, "text": " So epsilon and then we're going to have 1, 2"}, {"start": 1365.62, "end": 1367.4599999999998, "text": " And we're going to evaluate the function here"}, {"start": 1368.74, "end": 1373.4599999999998, "text": " We are going to subtract from that x minus epsilon"}, {"start": 1373.46, "end": 1375.7, "text": " So sorry, so that's 0 minus epsilon"}, {"start": 1376.3400000000001, "end": 1379.38, "text": " This thing will be constant for the first iteration"}, {"start": 1379.38, "end": 1381.06, "text": " And then we divide this by 2 epsilon"}, {"start": 1381.06, "end": 1384.26, "text": " And you can see that this is the definition of derivative itself"}, {"start": 1384.26, "end": 1387.3, "text": " And we do that for alongside every dimension"}, {"start": 1387.3, "end": 1391.38, "text": " That means in the second loop we'll have epsilon here instead of here"}, {"start": 1392.5, "end": 1393.8600000000001, "text": " And so on and so forth"}, {"start": 1393.8600000000001, "end": 1397.38, "text": " So minus epsilon here and yeah, you get the point"}, {"start": 1397.94, "end": 1402.82, "text": " So now we are certain that grad-brooks as expected"}, {"start": 1402.82, "end": 1405.22, "text": " Now let's see some fun examples"}, {"start": 1406.02, "end": 1410.4199999999998, "text": " We're going to define a simple second order polynomial function here"}, {"start": 1411.1399999999999, "end": 1412.58, "text": " x squared plus x plus 4"}, {"start": 1413.3, "end": 1415.22, "text": " I'm going to run this to visualize it"}, {"start": 1415.9399999999998, "end": 1417.22, "text": " It's always nice to visualize stuff"}, {"start": 1417.22, "end": 1420.6599999999999, "text": " It's at least to me it makes stuff a bit more like"}, {"start": 1422.8999999999999, "end": 1425.9399999999998, "text": " It gives them this gut feeling and it's easier to understand what's going on"}, {"start": 1427.06, "end": 1430.5, "text": " We're going to do higher order derivatives here"}, {"start": 1430.5, "end": 1434.02, "text": " Just to show you how a powerful grad function actually is"}, {"start": 1434.02, "end": 1438.66, "text": " So you can do grad of f and then you can do grad of the grad of f"}, {"start": 1438.66, "end": 1443.14, "text": " And then you can do grad of and you can do that n times whatever the number"}, {"start": 1443.14, "end": 1446.5, "text": " Like whatever the whatever n is"}, {"start": 1446.5, "end": 1447.54, "text": " Or you can just do it like this"}, {"start": 1447.54, "end": 1450.58, "text": " You can do it like grad of grad of grad of f"}, {"start": 1450.58, "end": 1451.78, "text": " This will also work"}, {"start": 1451.78, "end": 1454.1, "text": " This is just shorter so that's why I did it like this"}, {"start": 1454.1, "end": 1457.46, "text": " If we were to manually do a derivative of this polynomial"}, {"start": 1457.46, "end": 1460.18, "text": " We'll get the first derivative will give us 2x plus 1"}, {"start": 1460.18, "end": 1463.94, "text": " The second derivative so derivative of this one will just leave us with 2"}, {"start": 1463.94, "end": 1466.26, 
"text": " And finally derivative of a constant is 0"}, {"start": 1466.26, "end": 1469.46, "text": " That's why we have the third derivative is equal 0"}, {"start": 1469.46, "end": 1474.1000000000001, "text": " So if we print all the values here we expect to see because x is 1"}, {"start": 1474.1000000000001, "end": 1478.26, "text": " We expect to see 3, 2 and 0, right?"}, {"start": 1478.26, "end": 1480.18, "text": " So here it is 3, 2 and 0"}, {"start": 1480.18, "end": 1485.14, "text": " The first value is just the value of the function at the input x which is 1"}, {"start": 1485.14, "end": 1488.98, "text": " Which is as you can see here approximately 6"}, {"start": 1488.98, "end": 1490.66, "text": " Exactly 6, okay?"}, {"start": 1491.94, "end": 1492.74, "text": " So far so good"}, {"start": 1492.74, "end": 1495.46, "text": " The cool thing about this is we are very close to the math"}, {"start": 1495.46, "end": 1498.66, "text": " So you can basically as you see formula in some paper"}, {"start": 1498.66, "end": 1502.42, "text": " You can implement it much easier in JAX compared to PyTorch or TensorFlow"}, {"start": 1503.06, "end": 1504.02, "text": " It's very powerful"}, {"start": 1504.02, "end": 1509.38, "text": " We're gonna see how powerful this whole autodiff package of JAX is a bit later"}, {"start": 1509.94, "end": 1513.38, "text": " And now I'm gonna do a simple modification here"}, {"start": 1513.38, "end": 1515.46, "text": " Let's assume we have two inputs"}, {"start": 1515.46, "end": 1518.34, "text": " So let's assume we have y here"}, {"start": 1518.34, "end": 1520.98, "text": " So we have plus y squared"}, {"start": 1520.98, "end": 1523.86, "text": " And I'm just going to modify this lambda function"}, {"start": 1524.4199999999998, "end": 1529.06, "text": " I'm gonna skip the evaluation because the evaluation will fail now"}, {"start": 1529.06, "end": 1532.82, "text": " And now I'm going to do the following"}, {"start": 1532.82, "end": 1536.4199999999998, "text": " If we just do the grads of the function"}, {"start": 1537.86, "end": 1541.78, "text": " By default do the derivative with respect to the first parameter"}, {"start": 1541.78, "end": 1544.8999999999999, "text": " That means we're going to get this thing here exactly"}, {"start": 1544.8999999999999, "end": 1547.78, "text": " So we should expect exactly the same numbers"}, {"start": 1547.78, "end": 1552.8999999999999, "text": " Except for the fact that f will be different when we evaluate it at x"}, {"start": 1552.8999999999999, "end": 1557.62, "text": " So this obviously will not work until I enter some numbers"}, {"start": 1557.62, "end": 1561.3799999999999, "text": " So this is going to be like let me say it's gonna be 1"}, {"start": 1561.3799999999999, "end": 1563.62, "text": " And then I'm gonna modify this here"}, {"start": 1563.62, "end": 1571.54, "text": " So 1 x y x y and finally x y here"}, {"start": 1572.26, "end": 1573.22, "text": " That should work"}, {"start": 1573.22, "end": 1574.5, "text": " Let's see if it works or not"}, {"start": 1575.54, "end": 1577.3, "text": " Yeah, we get the numbers as expected"}, {"start": 1577.3, "end": 1579.3, "text": " So we get the same results here"}, {"start": 1579.3, "end": 1582.74, "text": " And we get 7 because once you evaluate it at y equals 1"}, {"start": 1582.74, "end": 1584.82, "text": " This will be 6 plus 1 that's 7"}, {"start": 1586.18, "end": 1589.46, "text": " So how can we do derivative with respect to y?"}, {"start": 1589.46, "end": 1594.98, "text": " The 
only thing we need to do here is argnums equals 1 instead"}, {"start": 1594.98, "end": 1596.6599999999999, "text": " I think this is the syntax"}, {"start": 1596.6599999999999, "end": 1597.46, "text": " Let me try it out"}, {"start": 1598.34, "end": 1601.22, "text": " Let me just ignore this for the time being"}, {"start": 1601.22, "end": 1602.82, "text": " I'm just gonna delete this"}, {"start": 1603.3799999999999, "end": 1605.3799999999999, "text": " And let me print this out"}, {"start": 1605.3799999999999, "end": 1606.8999999999999, "text": " So we get the number 2 here"}, {"start": 1606.9, "end": 1608.26, "text": " Which is the correct result"}, {"start": 1608.26, "end": 1612.18, "text": " If we do a differentiation of this polynomial with respect to y"}, {"start": 1612.18, "end": 1614.9, "text": " We expect to get 2 times y"}, {"start": 1614.9, "end": 1618.02, "text": " Since y is equal to 1 we get 2 here"}, {"start": 1618.02, "end": 1622.5, "text": " So the thing that confuses me here is I'm not sure why we are getting a device array here"}, {"start": 1622.5, "end": 1625.7800000000002, "text": " Whereas previously we had pure floats on the CPU"}, {"start": 1626.5800000000002, "end": 1628.1000000000001, "text": " I'm not sure about that"}, {"start": 1628.1000000000001, "end": 1630.5, "text": " Let me check if I can do it like this"}, {"start": 1630.5, "end": 1636.9, "text": " Yep, for some reason once I add a comma here"}, {"start": 1636.9, "end": 1638.82, "text": " It's returning me a device array"}, {"start": 1638.82, "end": 1639.7, "text": " Whatever"}, {"start": 1639.7, "end": 1642.1, "text": " Obviously this is now df dy"}, {"start": 1642.1, "end": 1643.94, "text": " But yeah, you get the point"}, {"start": 1644.98, "end": 1648.18, "text": " Now let me show you how powerful the autodiff engine is"}, {"start": 1648.18, "end": 1654.1, "text": " So aside from grad, which can do derivatives for scalar-output functions"}, {"start": 1654.1, "end": 1660.02, "text": " there are the jacfwd and jacrev functions, which can find Jacobians"}, {"start": 1660.02, "end": 1664.82, "text": " So Jacobians basically can evaluate derivatives even for vector-valued functions"}, {"start": 1664.82, "end": 1666.02, "text": " That's the only difference"}, {"start": 1668.02, "end": 1669.6999999999998, "text": " That's pretty much the only difference"}, {"start": 1669.6999999999998, "end": 1673.2199999999998, "text": " So here let me take another function"}, {"start": 1673.2199999999998, "end": 1676.02, "text": " Basically we have a simple paraboloid"}, {"start": 1676.02, "end": 1679.54, "text": " If we google it here we can see what it looks like"}, {"start": 1679.54, "end": 1684.26, "text": " It looks like a cup, a 2D surface in a 3D space"}, {"start": 1684.98, "end": 1688.8999999999999, "text": " And basically if we manually calculate the derivatives"}, {"start": 1688.8999999999999, "end": 1692.74, "text": " So df dx will be 2x, df dy will be 2y"}, {"start": 1692.74, "end": 1695.54, "text": " So we expect the Jacobian to look something like this"}, {"start": 1695.54, "end": 1699.54, "text": " If we continue on and figure out the second order derivatives"}, {"start": 1699.54, "end": 1700.82, "text": " We can get the Hessian"}, {"start": 1700.82, "end": 1704.42, "text": " So the second order derivative with respect to x is 2"}, {"start": 1704.42, "end": 1707.86, "text": " As you can see here for y it's also going to be 2"}, {"start": 1707.86, "end": 1712.5, "text": " And if we do dx 
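A small sketch of the higher-order derivatives and the argnums switch that were just shown (the polynomial matches the narration; the variable names are mine):

import jax

f = lambda x: x**2 + x + 4.0

dfdx = jax.grad(f)
d2fdx2 = jax.grad(jax.grad(f))
d3fdx3 = jax.grad(jax.grad(jax.grad(f)))
print(f(1.0), dfdx(1.0), d2fdx2(1.0), d3fdx3(1.0))   # 6.0, 3.0, 2.0, 0.0

# two-argument variant: argnums picks the parameter to differentiate with respect to
g = lambda x, y: x**2 + x + 4.0 + y**2
dgdy = jax.grad(g, argnums=1)
print(dgdy(1.0, 1.0))                                # 2.0, i.e. 2 * y evaluated at y = 1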
and then dy because this does not depend on y anymore"}, {"start": 1712.5, "end": 1713.86, "text": " We're going to get zeros"}, {"start": 1713.86, "end": 1716.58, "text": " And finally how you form the Hessian matrix"}, {"start": 1716.58, "end": 1718.8999999999999, "text": " And you don't need to worry about what Hessian is"}, {"start": 1718.8999999999999, "end": 1721.1399999999999, "text": " It's basically just a collection"}, {"start": 1721.1399999999999, "end": 1728.5, "text": " It's a matrix of as you can see here various derivatives of the multivariate function"}, {"start": 1728.5, "end": 1732.1799999999998, "text": " You can simply define it using the jack rev and jack forward function"}, {"start": 1732.1799999999998, "end": 1733.2199999999998, "text": " We additionally did it"}, {"start": 1733.2199999999998, "end": 1736.74, "text": " You can see how this nicely composes and we can get the results"}, {"start": 1736.74, "end": 1742.98, "text": " So the reason they're using both forward and rev is because it's just optimization like detail"}, {"start": 1742.98, "end": 1747.22, "text": " Because one of these the rev one works with wide matrices"}, {"start": 1747.22, "end": 1750.98, "text": " Whereas the jack forward works with tall matrices"}, {"start": 1750.98, "end": 1755.78, "text": " I'm going to link a video down in the description which nicely explains why this is"}, {"start": 1755.78, "end": 1761.54, "text": " Anyways we get the numbers so that's 2, 2 for the Jacobian as we expected here"}, {"start": 1761.54, "end": 1765.06, "text": " So if we plug in the numbers so for the input 1, 1"}, {"start": 1765.06, "end": 1767.54, "text": " We can see that this evaluates to 2, 2"}, {"start": 1767.54, "end": 1770.8999999999999, "text": " And finally for the Hessian we get 2, 0, 0, 2"}, {"start": 1770.8999999999999, "end": 1774.74, "text": " So anyways I just wanted to show you that this is possible"}, {"start": 1774.74, "end": 1778.82, "text": " And if you want to dig into more details there documentation is very nice"}, {"start": 1778.82, "end": 1782.1, "text": " And yeah let's continue with the final example for grad"}, {"start": 1782.1, "end": 1785.46, "text": " I just took this edge case function so the absolute value of x"}, {"start": 1785.46, "end": 1787.94, "text": " And let's see how how how Jax handles it"}, {"start": 1787.94, "end": 1789.94, "text": " Basically this is how the function looks like"}, {"start": 1789.94, "end": 1791.46, "text": " I just visualized it here"}, {"start": 1791.46, "end": 1796.9, "text": " And I printed the values at minus 1 and 1 which would be 1 and 1"}, {"start": 1796.9, "end": 1798.9, "text": " And you can see the the results here"}, {"start": 1798.9, "end": 1802.9, "text": " And finally I printed the I found the gradient of this function"}, {"start": 1802.9, "end": 1805.7, "text": " And you can see it's not differentiable at this point at 0"}, {"start": 1805.7, "end": 1807.7, "text": " It's the gradient is undefined"}, {"start": 1807.7, "end": 1814.5, "text": " But like if we were to evaluate the function at minus 1 we get"}, {"start": 1814.5, "end": 1818.26, "text": " So as you can see here the derivative will be minus 1 for all the numbers here"}, {"start": 1818.26, "end": 1820.26, "text": " 1 for all numbers here"}, {"start": 1820.26, "end": 1822.26, "text": " The interesting point is actually 0"}, {"start": 1822.26, "end": 1826.26, "text": " Because as you can see this one returns the obvious result minus 1"}, {"start": 1826.26, "end": 1828.26, 
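A sketch of the paraboloid Jacobian/Hessian computation, composing jacfwd and jacrev the way the Jax docs suggest:

import jax
import jax.numpy as jnp

def paraboloid(v):
    return v[0]**2 + v[1]**2

point = jnp.array([1., 1.])
print(jax.jacfwd(paraboloid)(point))                # Jacobian: [2. 2.], i.e. (2x, 2y)
print(jax.jacfwd(jax.jacrev(paraboloid))(point))    # Hessian: [[2. 0.], [0. 2.]]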
"text": " So this one here is kind of interesting"}, {"start": 1828.26, "end": 1836.26, "text": " If we were to delete this part and just kind of add a small number to 0"}, {"start": 1836.26, "end": 1840.26, "text": " We'll get 1 as the output"}, {"start": 1840.26, "end": 1846.26, "text": " If we were to do the same thing just on the negative side we're going to get minus 1"}, {"start": 1846.26, "end": 1853.78, "text": " So basically what they've done is they defined for 0 is going to be the value is going to be defined is going to be 1"}, {"start": 1853.78, "end": 1857.22, "text": " That's just a convention and that's how Jax deals with it"}, {"start": 1857.22, "end": 1862.26, "text": " We didn't get any exception so yeah I guess I guess that's good at least in some settings"}, {"start": 1862.26, "end": 1864.26, "text": " Finally let's jump to Vmap"}, {"start": 1864.26, "end": 1870.26, "text": " Basically when you see Vmap you need to think about it the main value prop of Vmap is the following"}, {"start": 1870.26, "end": 1873.86, "text": " Writer functions as if you were dealing with a single data point"}, {"start": 1873.86, "end": 1878.26, "text": " And this is going to become more clear as we go through these examples"}, {"start": 1878.26, "end": 1886.26, "text": " Let's say we have a matrix W which is basically weights of you can treat it as weights of a linear neural network layer"}, {"start": 1886.26, "end": 1890.26, "text": " We input some state we set the shape to 150, 100"}, {"start": 1890.26, "end": 1897.26, "text": " And we now create this batch X you can treat this as maybe 10 images a batch of 10 flattened images"}, {"start": 1897.26, "end": 1903.26, "text": " So it's 100 that means we had like a 10 times 10 pixel image it's 100 when you once you flattened out"}, {"start": 1903.26, "end": 1908.86, "text": " So now if we apply the matrix we do the dot product between W and X"}, {"start": 1908.86, "end": 1915.86, "text": " This is basically a simulating what the linear layer is doing in the background when it's processing the input data"}, {"start": 1915.86, "end": 1923.86, "text": " Now the trick here is this only transforms the single a single image it cannot handle the batch"}, {"start": 1923.86, "end": 1930.86, "text": " Otherwise it will crash because we're trying to multiply 150, 100 which is W"}, {"start": 1930.86, "end": 1937.4599999999998, "text": " And we're trying to multiply that with 10, 100 that's going to fail"}, {"start": 1937.4599999999998, "end": 1941.4599999999998, "text": " So this thing does not work for the batch so okay let me run it"}, {"start": 1941.4599999999998, "end": 1945.86, "text": " So how would we go about making this work for a batch of images?"}, {"start": 1945.86, "end": 1951.4599999999998, "text": " So that's because that's what we care about and because that's way more efficient as you may know"}, {"start": 1951.4599999999998, "end": 1954.86, "text": " Especially on GPUs and accelerators"}, {"start": 1954.86, "end": 1965.4599999999998, "text": " Basically the most naive approach would be to iterate through the images and then call this function that knows how to handle single data points"}, {"start": 1965.4599999999998, "end": 1971.06, "text": " And then we just stack the results and basically we can see that this will work"}, {"start": 1971.06, "end": 1979.6599999999999, "text": " And the thing is it's very very slow because you want to avoid doing for loops you want to vectorize your functions"}, {"start": 1979.6599999999999, "end": 
1984.4599999999998, "text": " And that's why this runs 5.54 milliseconds"}, {"start": 1984.46, "end": 1989.06, "text": " Now let's see a bit more a better approach of doing this"}, {"start": 1989.06, "end": 1995.8600000000001, "text": " So this would be a better approach but as you can see we had to completely rewrite the function we had above"}, {"start": 1995.8600000000001, "end": 2000.46, "text": " So we had to swap as you can see here we had to swap W with X"}, {"start": 2000.46, "end": 2005.46, "text": " Now we have batch of X here and we have W here and we additionally had to do a transpose"}, {"start": 2005.46, "end": 2008.8600000000001, "text": " So now this will the shapes will match and everything will work fine"}, {"start": 2008.86, "end": 2016.6599999999999, "text": " So we have 100 and we try to multiply that with transpose version of W which is 100 150"}, {"start": 2016.6599999999999, "end": 2023.26, "text": " And that gives us as you can see 10 150 which is what we expected right?"}, {"start": 2023.26, "end": 2028.4599999999998, "text": " So everything is fine now the problem with this and it's additional jitted so it's going to be really fast"}, {"start": 2028.4599999999998, "end": 2035.86, "text": " Now the problem with this is as you can see we have to write depending on whether we handle singular cases or a batch of data"}, {"start": 2035.86, "end": 2040.4599999999998, "text": " We have to write completely different functions and that's not desirable"}, {"start": 2040.4599999999998, "end": 2048.06, "text": " Although you can see this is way faster. So it's only 103 microseconds. Whereas this is 5.54 milliseconds"}, {"start": 2048.06, "end": 2053.06, "text": " Now this is the result that Jacks offers us using something called V map"}, {"start": 2053.06, "end": 2057.2599999999998, "text": " You just take the function that handles a single data point"}, {"start": 2057.2599999999998, "end": 2061.2599999999998, "text": " You wrap it into this you transform it using this V map"}, {"start": 2061.26, "end": 2066.26, "text": " And you can now pass the batch of your data without any other modifications"}, {"start": 2066.26, "end": 2070.46, "text": " So that's that's very cool. If I run this"}, {"start": 2070.46, "end": 2074.46, "text": " Let's see how how fast this thing is. So it's around 100 microseconds"}, {"start": 2074.46, "end": 2077.6600000000003, "text": " I think the last time I ran this this was actually faster than this one"}, {"start": 2077.6600000000003, "end": 2080.6600000000003, "text": " But I guess there are some some variants inside of there"}, {"start": 2080.6600000000003, "end": 2083.6600000000003, "text": " Okay, so I went ahead and reran this thing again"}, {"start": 2083.6600000000003, "end": 2087.86, "text": " And now we can see that we have 180 microseconds for this function"}, {"start": 2087.86, "end": 2093.6600000000003, "text": " Whereas we have 143 microseconds for this one. 
So obviously there is some variance in here"}, {"start": 2093.6600000000003, "end": 2098.06, "text": " But the point is here you have a much simpler way to write these batched"}, {"start": 2098.86, "end": 2104.26, "text": " Functions and it's as as efficient as by like doing this laborious work"}, {"start": 2104.26, "end": 2111.86, "text": " So basically what what V map does in the background is it takes the the for loops and packs them into this Alex"}, {"start": 2112.26, "end": 2115.46, "text": " API mid-level layer of the of the API"}, {"start": 2115.46, "end": 2118.76, "text": " and yeah, so now let me let me go ahead and"}, {"start": 2119.76, "end": 2121.76, "text": " Modify this example a little bit"}, {"start": 2121.76, "end": 2124.0, "text": " So we are going to instead of passing just X"}, {"start": 2124.0, "end": 2131.36, "text": " Let's try and pass both the W as well as the X because that's more I guess in in line with the functional"}, {"start": 2131.96, "end": 2137.66, "text": " Programming paradigm. So let me rerun this cell and now let's modify this one to to accept"}, {"start": 2138.26, "end": 2143.56, "text": " like both the W as well as the as the batch of data so"}, {"start": 2143.56, "end": 2150.7599999999998, "text": " We'll need to add this thing here again. So if I were to run this right now, it will crash and we'll see why"}, {"start": 2151.36, "end": 2153.36, "text": " so it basically says"}, {"start": 2153.7599999999998, "end": 2155.7599999999998, "text": " It basically says that W"}, {"start": 2155.7599999999998, "end": 2162.7599999999998, "text": " We're trying to to tell to V map that W has a batch dimension where whereas it does not have it's just a it's just a matrix"}, {"start": 2162.7599999999998, "end": 2165.7599999999998, "text": " That's supposed to represent the linear layer"}, {"start": 2165.7599999999998, "end": 2169.16, "text": " So what we need to do is add the in axis argument here"}, {"start": 2169.16, "end": 2175.56, "text": " So in axis like this and then we say none because W does not have a batch dimension"}, {"start": 2175.56, "end": 2181.3599999999997, "text": " And we need to specify the batch dimension of the second input and that's zero. So now this should work"}, {"start": 2181.3599999999997, "end": 2183.3599999999997, "text": " Let me let me try and rerun it again"}, {"start": 2185.3599999999997, "end": 2187.3599999999997, "text": " So fingers crossed"}, {"start": 2187.3599999999997, "end": 2189.7599999999998, "text": " And yeah, it did work. So basically that's it"}, {"start": 2189.7599999999998, "end": 2194.96, "text": " Now you saw in a bit more detail how V map works and this is the first part of this video"}, {"start": 2194.96, "end": 2197.16, "text": " So we basically saw the basics of Jax"}, {"start": 2197.16, "end": 2204.16, "text": " We saw how to use the the main transform functions such as JIT such as grad and finally V map and now let's dig"}, {"start": 2204.16, "end": 2210.16, "text": " Even deeper and understand the intricacies of how JIT works because that will help you enormously"}, {"start": 2211.16, "end": 2214.16, "text": " Debugging these Jax programs. So let's let's continue here. 
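Putting the whole vmap example together, including the in_axes=(None, 0) variant that was just shown (the 150x100 weight matrix and the batch of 10 flattened inputs follow the narration; the jit wrapper is optional):

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
W = jax.random.normal(k1, (150, 100))        # weights of a "linear layer"
batch_x = jax.random.normal(k2, (10, 100))   # batch of 10 flattened 10x10 images

def apply_matrix(W, x):
    # written as if x were a single example of shape (100,)
    return jnp.dot(W, x)

# in_axes: W has no batch dimension (None), batch_x is batched along axis 0
batched_apply = jax.jit(jax.vmap(apply_matrix, in_axes=(None, 0)))
print(batched_apply(W, batch_x).shape)       # (10, 150)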
So"}, {"start": 2215.16, "end": 2222.16, "text": " We basically have as I said Jax has this onion like API layer API structure and that's that's I guess pretty much"}, {"start": 2222.16, "end": 2224.16, "text": " Always the case but still"}, {"start": 2224.16, "end": 2229.16, "text": " Anyways, we have numpy as the as the highest level then we have lax as the mid level and finally the XLA"}, {"start": 2230.16, "end": 2235.16, "text": " So lax API is stricter and more powerful and it's a simple Python wrapper around XLA"}, {"start": 2235.16, "end": 2237.16, "text": " So let's see what it means when I say stricter"}, {"start": 2237.16, "end": 2242.16, "text": " So if we were to add in the numpy level of the API"}, {"start": 2242.16, "end": 2244.16, "text": " This thing can be tolerated"}, {"start": 2244.16, "end": 2249.16, "text": " So one plus one like one as the integer plus one as the integer"}, {"start": 2249.16, "end": 2256.16, "text": " Like one as the integer plus one as the float will work. But once we get to the mid level"}, {"start": 2257.16, "end": 2262.16, "text": " Here we need to be explicit about the types. Otherwise we'll have we'll get an error"}, {"start": 2262.16, "end": 2266.16, "text": " So if I run this we'll see that this is printed"}, {"start": 2266.16, "end": 2272.16, "text": " So one plus one is okay, but here add requires arguments to have the same D types got int 32 and float 32"}, {"start": 2272.16, "end": 2277.16, "text": " The reason they've done this done it this way is to be more error robust"}, {"start": 2277.16, "end": 2283.16, "text": " Finally, we have the fact that the lax layer is obviously more powerful"}, {"start": 2283.16, "end": 2289.16, "text": " Although as a trade-off, it's it's less user friendly, which is kind of obvious as an example here. 
We have X and Y"}, {"start": 2289.16, "end": 2297.16, "text": " We have basically we want to do a convolution a 1d convolution between the signals X and Y and in numpy"}, {"start": 2297.16, "end": 2300.16, "text": " API this would be done like this"}, {"start": 2300.16, "end": 2303.16, "text": " We just call the convol function and we get the result"}, {"start": 2303.16, "end": 2308.16, "text": " On the other hand, once you get into the lax land, you have to use this conv general dilated"}, {"start": 2308.16, "end": 2312.16, "text": " Which is way more powerful as you can see has much more options"}, {"start": 2312.16, "end": 2317.16, "text": " You can specify the window strides the padding but you have to be again much more strict here in order to get this to work"}, {"start": 2317.16, "end": 2320.16, "text": " You have to be explicit about the the types etc"}, {"start": 2320.16, "end": 2325.16, "text": " And if I run this we can see the results here as well as here will be the same"}, {"start": 2325.16, "end": 2330.16, "text": " Like this this assert should make sure that that we get the same results"}, {"start": 2330.16, "end": 2333.16, "text": " And as you can see it works perfectly now"}, {"start": 2333.16, "end": 2340.16, "text": " Final remark here is you can see here that this result returns batched results"}, {"start": 2340.16, "end": 2344.16, "text": " So that's why we have to index 0 0 to get the actual result"}, {"start": 2344.16, "end": 2348.16, "text": " As I said, lax API is just a thin wrapper around thin"}, {"start": 2348.16, "end": 2355.16, "text": " I mean, it's a wrapper around XLA so you can actually find the the XLA function that's going to be eventually called here"}, {"start": 2355.16, "end": 2358.16, "text": " And that's this conv the general padding"}, {"start": 2358.16, "end": 2361.16, "text": " It's in C++ you can see the arguments etc"}, {"start": 2361.16, "end": 2368.16, "text": " So if you ever need to dig a bit deeper and optimize something really really really hard"}, {"start": 2368.16, "end": 2372.16, "text": " Then you TensorFlow documentation here got you covered"}, {"start": 2372.16, "end": 2375.16, "text": " So let's get back here and continue"}, {"start": 2375.16, "end": 2378.16, "text": " That was the short mention of the API"}, {"start": 2378.16, "end": 2380.16, "text": " Now let's finally understand how it works"}, {"start": 2380.16, "end": 2384.16, "text": " So again, I won't get into more details here"}, {"start": 2384.16, "end": 2392.16, "text": " The whole point of this is to show that JIT functions are faster compared to non-JIT versions of the function"}, {"start": 2392.16, "end": 2396.16, "text": " Here we just normalize the matrix X like column wise"}, {"start": 2396.16, "end": 2405.16, "text": " And so both the mean we subtract the mean as well as we divide with the standard deviation of the columns and we get normalized columns"}, {"start": 2405.16, "end": 2410.16, "text": " And finally, the difference here is again not that big because this is a super simple function"}, {"start": 2410.16, "end": 2417.16, "text": " The more complex the function is the bigger the differences here will be between the JIT version and between the normal version"}, {"start": 2417.16, "end": 2425.16, "text": " So here we have the JIT version and again we're using a block until ready because of the asynchronous dispatch and we see the results"}, {"start": 2425.16, "end": 2428.16, "text": " Okay, so let's see"}, {"start": 2428.16, "end": 2432.16, "text": " In 
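A sketch of the numpy-level versus lax-level comparison; this mirrors the convolution example from the official Jax docs, which is what the notebook appears to follow, so treat the exact padding arithmetic as illustrative:

import jax.numpy as jnp
from jax import lax

x = jnp.array([1., 2., 1.])
y = jnp.ones(10)

print(jnp.add(1, 1.0))        # the numpy-level API promotes mixed types for you
print(jnp.convolve(x, y))     # high level, few knobs

print(lax.add(1.0, 1.0))      # lax.add(1, 1.0) would raise: dtypes must match exactly
result = lax.conv_general_dilated(
    x.reshape(1, 1, 3),       # batch and feature dimensions have to be explicit
    y.reshape(1, 1, 10),
    window_strides=(1,),
    padding=[(len(y) - 1, len(y) - 1)])   # roughly the "full" padding np.convolve uses
print(result[0, 0])           # same numbers as jnp.convolve, just with the extra dims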
order to understand JIT it may be useful to understand when it fails"}, {"start": 2432.16, "end": 2441.16, "text": " Which functions cannot, so which class of functions we cannot JIT will tell you, will help you understand how JIT actually works behind the curtains"}, {"start": 2441.16, "end": 2444.16, "text": " So if I were to run this thing it's gonna crash"}, {"start": 2444.16, "end": 2455.16, "text": " So we are creating a simple vector of 10 random elements and we pass that vector into this get negatives and we call it"}, {"start": 2455.16, "end": 2461.16, "text": " So now because we don't have any JIT we're gonna get a result back and everything works as expected"}, {"start": 2461.16, "end": 2468.16, "text": " But if I were to call the JIT version of the function by wrapping this into this JIT transform"}, {"start": 2468.16, "end": 2477.16, "text": " If we run this we're going to get like error and it says here array boolean indices must be concrete, got shaped array, blah blah blah"}, {"start": 2477.16, "end": 2484.16, "text": " So basically what happens here is that depending on the content of X, so depending on the values of X"}, {"start": 2484.16, "end": 2491.16, "text": " We are going to get an output that's going to vary in its shape and that's something that's not tolerable inside the JIT world"}, {"start": 2491.16, "end": 2496.16, "text": " Let's slowly understand why this is the case, so let's start with this function"}, {"start": 2496.16, "end": 2506.16, "text": " We have a simple function f which just does like a dot product between X and Y and returns result and in the meanwhile it also prints some intermediate variables here"}, {"start": 2506.16, "end": 2514.16, "text": " So I prepare the variable X and Y, just random vectors and matrix here and we call the function here"}, {"start": 2514.16, "end": 2520.16, "text": " And then we again call, we have a second call here and let's see what happens after the second call"}, {"start": 2520.16, "end": 2527.16, "text": " So I'm gonna run this, there's a couple things happening here, first thing first is the first time you run a JIT function"}, {"start": 2527.16, "end": 2532.16, "text": " So that's line 14, what JIT does in the background is something called tracing"}, {"start": 2532.16, "end": 2542.16, "text": " So it basically takes, instead of inputting the actual values of X and Y, it creates this abstract like tracer value"}, {"start": 2542.16, "end": 2549.16, "text": " Let's call it that way, so it's basically like a placeholder variable that has a specified shape and specified data type"}, {"start": 2549.16, "end": 2556.16, "text": " So you can see here basically once we print X, it outputs here instead of the value of X, it outputs traced shaped array"}, {"start": 2556.16, "end": 2566.16, "text": " It's basically float 32 because we had random numbers sampled from Gaussian, we have the shape here 3, 4 and we have 4 for Y"}, {"start": 2566.16, "end": 2572.16, "text": " So basically you can see these are actually passed into the function the first time JIT is called"}, {"start": 2572.16, "end": 2582.16, "text": " And using that JIT can understand how the shapes are morphing going through the function and finally what the output shape of the function is"}, {"start": 2582.16, "end": 2588.16, "text": " And you can see here the output is also float 32 and the shape is 3"}, {"start": 2588.16, "end": 2598.16, "text": " We finally get the results here, the second time you call the function something funny happens and that has to 
do with the functional programming paradigm"}, {"start": 2598.16, "end": 2608.16, "text": " Basically because print functions are a side effect because we are returning the result from this function by other means than through the actual output here"}, {"start": 2608.16, "end": 2615.16, "text": " So we are printing and that's a side effect and because of that JIT is just going to ignore all of those side effects"}, {"start": 2615.16, "end": 2622.16, "text": " And the second time you call it because it's going to call the compile function, we won't see any printing"}, {"start": 2622.16, "end": 2629.16, "text": " And the whole point is this caching mechanism I just explained, so that's why JIT functions are so fast"}, {"start": 2629.16, "end": 2634.16, "text": " So I mentioned here anytime we get the same shapes and types we just call the compile function"}, {"start": 2634.16, "end": 2644.16, "text": " So that means if I were to now call whatever, so basically for the whole class of inputs that have this shape and this data type"}, {"start": 2644.16, "end": 2647.16, "text": " And no matter the value we are always going to call the compile function"}, {"start": 2647.16, "end": 2653.16, "text": " But if I were to call this function again but this time with a different shape, so let's do something like this"}, {"start": 2653.16, "end": 2663.16, "text": " So x3 maybe 3, 5 and this will be 5 and let me just map this like this and I'm going to call the function again and print the results"}, {"start": 2663.16, "end": 2671.16, "text": " So x3, y3, so what will happen here is that this time we'll again trigger the compilation because the shape changed here"}, {"start": 2671.16, "end": 2678.16, "text": " And so JIT is smart enough to retrace it and as you can see here we have the tracing happen here again"}, {"start": 2678.16, "end": 2685.16, "text": " Hopefully you get a better picture of how JIT now works but we are going to continue and understand this in even more depth"}, {"start": 2685.16, "end": 2691.16, "text": " So now we have the same function as above, we just omit the print functions which are the side effects"}, {"start": 2691.16, "end": 2698.16, "text": " And this is how you should write your functions when you want to use them with JAKS transform functions such as JIT"}, {"start": 2698.16, "end": 2700.16, "text": " So no side effects"}, {"start": 2700.16, "end": 2706.16, "text": " If I were to print this and we are going to have something called JAKSper here, so that's JAKS expression"}, {"start": 2706.16, "end": 2714.16, "text": " And that's basically the, let's call it like a flow model that JIT creates in the background when it does its tracing procedure"}, {"start": 2714.16, "end": 2718.16, "text": " So if I were to run this, let's see what we have here"}, {"start": 2718.16, "end": 2727.16, "text": " So you can see what happens, so it creates this abstract grammar and you basically have C, so add 1 to A"}, {"start": 2727.16, "end": 2734.16, "text": " So that's going to be, and A is basically the first argument, so that's X, so it's basically creating a placeholder for this thing here"}, {"start": 2734.16, "end": 2739.16, "text": " Then it calls D B plus 1, so that's Y plus 1, this is going to be B"}, {"start": 2739.16, "end": 2746.16, "text": " And then it calls the general dot product, so that's this thing, with C and D, as you can see here, C and D"}, {"start": 2746.16, "end": 2750.16, "text": " And yeah, and it returns back the E, which is this result here"}, {"start": 2750.16, 
"end": 2756.16, "text": " So you can go into docs and understand in more detail every single part of this syntax of this grammar"}, {"start": 2756.16, "end": 2764.16, "text": " But like, now you understand a bit better how JIT, when it does the tracing, creates this type of a flow in the background"}, {"start": 2764.16, "end": 2768.16, "text": " And this can be compiled using XLA, okay?"}, {"start": 2768.16, "end": 2771.16, "text": " Let's see another example of a failure"}, {"start": 2771.16, "end": 2777.16, "text": " This time, what we do is we pass this argument neck and we condition upon it"}, {"start": 2777.16, "end": 2784.16, "text": " And depending on the value, whether it's true or false, we're going to return minus X or plus X"}, {"start": 2784.16, "end": 2796.16, "text": " And remember, the first time we call a JIT function, it's going to input abstract shape and like a data type"}, {"start": 2796.16, "end": 2801.16, "text": " It won't have information about the value, and that's why this thing is going to fail"}, {"start": 2801.16, "end": 2808.16, "text": " So it says here, blah blah blah, abstract tracer, value encountered, where concrete value is expected"}, {"start": 2808.16, "end": 2812.16, "text": " The problem arose in the bool function, so that's basically what happens"}, {"start": 2812.16, "end": 2817.16, "text": " And in order to avoid this, what we can do is use these static arguments"}, {"start": 2817.16, "end": 2823.16, "text": " By making an argument static, so we say here, hey, first argument, so that's neck, is going to be static"}, {"start": 2823.16, "end": 2831.16, "text": " What we do by doing this is once the JIT does the tracing procedure, it will not use this abstract tracer object"}, {"start": 2831.16, "end": 2833.16, "text": " It's going to use the actual value"}, {"start": 2833.16, "end": 2837.16, "text": " So we are kind of lowering the level of abstraction here while doing the tracing"}, {"start": 2837.16, "end": 2846.16, "text": " Which is going to kind of constrain the class of inputs where this compile function can be called from the cache"}, {"start": 2846.16, "end": 2851.16, "text": " But in return, yeah, in return, we get it to work, I guess"}, {"start": 2851.16, "end": 2855.16, "text": " Okay, so let's call this thing, so this should now work"}, {"start": 2855.16, "end": 2862.16, "text": " And the thing I want you to understand here again is that the first time we call it with true here, we do the tracing"}, {"start": 2862.16, "end": 2869.16, "text": " And then the second time we call it with true, because nothing changed, basically again we don't have the tracing procedure"}, {"start": 2869.16, "end": 2876.16, "text": " Once we switch this to false, we again trigger the tracing, so now we can call this function for any integer 32"}, {"start": 2876.16, "end": 2882.16, "text": " And we'll have two cached functions, so that's like, which run really fast"}, {"start": 2882.16, "end": 2885.16, "text": " Let's continue analyzing the failures, the third failure is here"}, {"start": 2885.16, "end": 2894.16, "text": " If I were to run this, let's see what we get, blah blah blah shapes must be 1D sequences of concrete values of integer types"}, {"start": 2894.16, "end": 2898.16, "text": " Okay, so what happened here is, let me try and print some values again"}, {"start": 2898.16, "end": 2909.16, "text": " Print x, let's print x shape, let's print this whole product thing"}, {"start": 2909.16, "end": 2913.16, "text": " And I think that's going to work, so 
let's try and print that"}, {"start": 2913.16, "end": 2922.16, "text": " So as you can see here, what happened is that x is of trace type, and then x shape is actually like a concrete value"}, {"start": 2922.16, "end": 2931.16, "text": " And then this thing here is again a trace, so this trace object, and we're trying to pass the trace object into reshape, which expects a concrete value"}, {"start": 2931.16, "end": 2933.16, "text": " That's why this thing is crashing again"}, {"start": 2933.16, "end": 2942.16, "text": " Let's see what the solution is, we can basically use NumPy prod instead of Jack's product"}, {"start": 2942.16, "end": 2949.16, "text": " So this may be a little bit confusing, basically as you saw, you have two tools to make these functions work legit"}, {"start": 2949.16, "end": 2957.16, "text": " One is to make certain arguments static, and sometimes you'll have to use NumPy functions instead of Jack's functions"}, {"start": 2957.16, "end": 2963.16, "text": " So don't ask me why, I'm only a couple of steps ahead of you, I recently started learning Jack's myself"}, {"start": 2963.16, "end": 2975.16, "text": " But let's try and run this, and this time it works, because we have the mprod will return a concrete value and not the traced object"}, {"start": 2975.16, "end": 2985.16, "text": " And that's it, if you wish, as I said, I'm going to share the colab, I'm going to push it to my github, so go ahead and play with these failure cases yourself"}, {"start": 2985.16, "end": 2997.16, "text": " To better understand how JIT works, but basically just keep in mind that JIT passes in these abstract objects that have the shape information and the data type information and no value"}, {"start": 2997.16, "end": 3001.16, "text": " And that will help you save yourself from a lot of headache, I guess"}, {"start": 3001.16, "end": 3020.16, "text": " Ah, okay, this is something I need to cover because there are some gotchas, some idiosyncrasies of Jack's which you need to understand to be able, so that we'll have easier time in the later videos building up neural networks, etc."}, {"start": 3020.16, "end": 3024.16, "text": " Once you understand a couple of these gotchas, things are going to be way easier"}, {"start": 3024.16, "end": 3037.16, "text": " So, let's start with gotcha number one, pure functions, so Jack's is designed to work only on pure functions, we saw some glimpse of this while we were using print functions a couple of minutes ago"}, {"start": 3037.16, "end": 3049.16, "text": " And here is an informal definition of a pure function, so first of all, all the input data is passed through the function parameters and all the results are output through the function results, okay, so that's the first thing"}, {"start": 3049.16, "end": 3064.16, "text": " Secondly, a pure function will always return the same result if invoked with the same inputs, so it's, in a way, it's like a huge memory, like a cache table, and depending on the inputs, you just retrieve the outputs"}, {"start": 3064.16, "end": 3071.16, "text": " So that's an informal definition of a pure function and I think it's going to be good enough to understand the following cells"}, {"start": 3071.16, "end": 3087.16, "text": " Okay, so let's start with the cell number one, example number one, so here we are violating number one because all results are output through the function results, nope, that's not the case, we are returning some results over the print function instead of through X, through this return statement 
here"}, {"start": 3087.16, "end": 3105.16, "text": " So let me call this function, okay, on the first call, again, we have the tracing, so we call the executing function print statement, but the second call, due to the fact that we are passing the same shape and same data type, that's just a float 32 here"}, {"start": 3105.16, "end": 3122.16, "text": " We just use the cache function and we return five, finally, the third call, because we changed the shape and the type here, so we basically have a ray now, it's going to basically call and trigger the tracer again, and I think that's pretty clear at this point of time"}, {"start": 3122.16, "end": 3143.16, "text": " Considering the other examples I already covered, example number two, so you do not want to interfere with the global variables and I think this is probably a bad design decision anyways, even if we ignore the functional program paradigm, this is probably not a good idea to do because, yeah, unexpected things can happen with your program"}, {"start": 3143.16, "end": 3163.16, "text": " So what's happening here is that, as I said, we are violating both one and two because, let's see why, so all the input is passed through the function parameters, nope, this time we are passing the input through the global variable, and secondly, a pure function will always return the same results if invoked with the same inputs"}, {"start": 3163.16, "end": 3180.16, "text": " Which is also not the case because, depending on the value of g, we are going to return different values here, even if we keep x the same, so what will happen here, once we call the jitted version of the function the first time, it's going to cache the value of g, which is zero"}, {"start": 3180.16, "end": 3197.16, "text": " Then we are going to update it, and then we are going to call it with five, and it's going to call the cached version of the function, which, if you recall, has g equals to zero somewhere in the Jackspur, and that's why we are going to get the wrong results"}, {"start": 3197.16, "end": 3214.16, "text": " So let me execute this cell, so on the first call we get four, and that's fine, because we passed four, we had g equals zero, that's fine, now g is equal to ten, and we have five here, so we expect fifteen, but we get five because the g version of zero is cached in the function"}, {"start": 3214.16, "end": 3227.16, "text": " And finally, if I call with a different, we change, instead of float, we pass the, like, a Jacks array, this time we trigger the execution and we get the correct result again"}, {"start": 3227.16, "end": 3234.16, "text": " So in any case, this is a wrong idea, a bad idea to add these types of impurities"}, {"start": 3234.16, "end": 3245.16, "text": " Example number three, so Haiku Flux are basically built upon this idea, the whole idea is that it's fine to have a stateful functionality inside of your pure function"}, {"start": 3245.16, "end": 3265.16, "text": " So what I mean by that is the following, so we have x here, and what we do here, we create this state dictionary, and we basically just add, depending on whether the integer i here is odd or even, we add x to even or odd keys"}, {"start": 3265.16, "end": 3281.16, "text": " And so we are preserving state, obviously, by doing this, so this is stateful execution here, and then we just return this sum here, so as you can see, we're not violating anything here, so this is pure, because we're only using x, we're only assigning x to this state"}, {"start": 3281.16, "end": 3287.16, 
"text": " And we're only outputting whatever came out from this function, so we're not accessing some global variable or whatnot"}, {"start": 3287.16, "end": 3295.16, "text": " So because of that, all of this will be fine, and if I execute this, we're going to get a correct result, and that's 50, okay"}, {"start": 3295.16, "end": 3308.16, "text": " Finally, a fourth example, iterators, since they are stateful, are a no-no, so if you try and use, we built array here, and if we just do like this, if we don't have any iterators, it's going to work"}, {"start": 3308.16, "end": 3322.16, "text": " But on the other hand, if we use the iterator, this is going to fail, even though we semantically would expect the same results, and you can see that's the case, we got 45 here, we got 0 here, even though we should have gotten 45 as well"}, {"start": 3322.16, "end": 3336.16, "text": " Here you can see this lex primitive, so that's one of those mid-level API functions, it's called foriloop, it's basically a smart version of the for loop, which can be later compiled using the XLA"}, {"start": 3336.16, "end": 3347.16, "text": " But the thing I want you to understand here is just that you cannot use iterators because they are stateful, and we are thus violating the purity constraint of Jax"}, {"start": 3347.16, "end": 3356.16, "text": " Okay, that was the gotcha number one, make sure to write pure functions if you want to use them with Jax transforms in general"}, {"start": 3356.16, "end": 3370.16, "text": " Gotcha number two, in-place updates, we saw this already, so you cannot modify the arrays in place, you have to use that at set syntax, so let's see a simple example here"}, {"start": 3370.16, "end": 3378.16, "text": " Once we create the Jax array, we have to use this at set syntax in order to get the output array"}, {"start": 3378.16, "end": 3392.16, "text": " Let me run this, and you can see here the results are as expected, so this is your NumPy syntax, so row number one where we have zero indexing, and then all columns, we set them to one, and that's what we see here"}, {"start": 3392.16, "end": 3411.16, "text": " So I think I mentioned this, so if this seems wasteful, basically don't worry because XLA is smart enough to figure out that if you're not using this input array, then it will not allocate a special memory object for this output array"}, {"start": 3411.16, "end": 3419.16, "text": " It's just going to reuse the input array and modify it in place, even though it does not appear to do so on this higher level perspective"}, {"start": 3419.16, "end": 3438.16, "text": " Don't worry about the expressiveness, we can still do everything we can with NumPy, so here if I create a simple matrix of ones, we can do whatever we want, we can add, we don't just have to assign values to certain locations, we can also do arbitrary operations such as addition"}, {"start": 3438.16, "end": 3451.16, "text": " And you can see here we can add to every second row, so every second row, and starting from the third column, we add seven to one, so that's why we have these eights here, as you can see"}, {"start": 3451.16, "end": 3458.16, "text": " So that's cool, that was the second gotcha, let's continue on, the third one is out of bounds indexing"}, {"start": 3458.16, "end": 3476.16, "text": " So this is a direct consequence of the fact that Jax wants to make this code accelerator agnostic, because it's very hard to communicate certain information from the accelerators, they had to create certain types of non-error 
behaviors, which may come as a surprise if you're not already acquainted with this"}, {"start": 3476.16, "end": 3492.16, "text": " So when it comes to NumPy, if we allocate an array with ten elements, so from zero to nine, and we try to index into the eleventh position, this will throw an exception because there is no eleventh position, there is position zero through nine"}, {"start": 3492.16, "end": 3502.16, "text": " So let me try and run this, and we're going to get exception caught, blah blah blah, basically this is out of bounds for axis 0 with size ten"}, {"start": 3502.16, "end": 3521.16, "text": " Okay, let's see what JAX's behavior here is, if we were to try to assign at position eleven, so that's the same example as here, if we try to add twenty-three, it's just going to ignore this operation, so that may be surprising, because it will not throw any error"}, {"start": 3521.16, "end": 3544.16, "text": " So let's see what the result will actually be, so we have no change whatsoever here, finally, if we try to retrieve the element at the eleventh position, which does not exist, it's going to clamp the index to the last index, which is nine, and so we are going to retrieve nine"}, {"start": 3544.16, "end": 3566.16, "text": " Which is super confusing as well, and this may be a cause of many, many really bad bugs, so be aware of this behavior, as I said, the reason this exists is because basically JAX tries to abstract the accelerator information from you, but as a consequence you get this"}, {"start": 3566.16, "end": 3582.16, "text": " And I guess if you're familiar with NaN behavior, this is kind of similar, because NaNs are also in a sense not acceptable, but there we also have non-error behavior, where we just get NaNs back instead of the system throwing some exception"}, {"start": 3582.16, "end": 3592.16, "text": " So I mentioned it here, similarly to how invalid floating point arithmetic results in NaNs and not an exception, so keep this in mind"}, {"start": 3592.16, "end": 3608.16, "text": " Gotcha number four, non-array inputs, again this is added by design, it's not a bug, in NumPy if we try to do a sum and we pass a Python list, one, two, three here, we are going to get the expected result, so that's going to be six I guess"}, {"start": 3608.16, "end": 3617.16, "text": " In JAX on the other hand, if we try to do this, we're going to get an exception, so sum requires ndarray or scalar arguments, got list at position zero"}, {"start": 3617.16, "end": 3631.16, "text": " So why is that? 
Let's imagine we try and make this more permissive, so if we pass a Python list, we're going to convert it here to a Jax array, and now Jax sum will work on the Jax array, so this thing will work"}, {"start": 3631.16, "end": 3650.16, "text": " So now let's create a list, X is a simple list, now let's try to make Jaxper to understand what will happen if we called Jit on this function, so let's try and run this, and you'll see that we'll basically have this whole thing unrolled, which is super inefficient"}, {"start": 3650.16, "end": 3665.16, "text": " So the thing that happens is that we'll be passing element by element from this list, and Jax will not be able to optimize this and create some smart, to use some smart primitives for looping, and instead we're going to get this inefficient optimization"}, {"start": 3665.16, "end": 3690.16, "text": " So yeah, keep this in mind, now this is a really good place where upon which I can base my conclusion that Jax is really nice for researchers, but if you're a beginner, I don't think that optimization details should be more important than the ease of use, such as the fact that you will not have an exception if you pass in a Python list instead of a NumPy or Jax array"}, {"start": 3690.16, "end": 3711.16, "text": " So definitely things like this make me, if I were to recommend to a beginner, I would definitely still recommend PyTorch to be honest over Jax, but like if you're a researcher and you want to have a lot of flexibility and you want to have like a super optimized programs, I think Jax is a really good bet for those guys"}, {"start": 3711.16, "end": 3725.16, "text": " Anyways, let's continue, gotcha number five, random numbers, I mentioned this already, now let's see it in a bit more detail, so basically what we can see here is that NumPy has a stateful pseudo random number generator"}, {"start": 3725.16, "end": 3744.16, "text": " That means that if I execute these two functions in a row, we'll have, so this one will advance the state of the PRNG and this one after execution will also advance it and it's hidden from us, in order to understand this a bit better, let's kind of dissect this generator"}, {"start": 3744.16, "end": 3758.16, "text": " So we set the seed to some value, seed is I think set to zero here, and basically we can fetch, there is this function getState, we can fetch the actual state of the NumPy PRNG, and I'm gonna print certain metadata from that state"}, {"start": 3758.16, "end": 3771.16, "text": " Then I'm going to execute, like a sample, a single number, and then we're gonna fetch the state again, and we're gonna sample the number again here, and we're gonna fetch the state again and print it, so let's see what happens here"}, {"start": 3771.16, "end": 3777.16, "text": " So first thing you can notice here is that the numbers here are different because the state is internally advanced"}, {"start": 3777.16, "end": 3791.16, "text": " So what you can notice here is that first of all NumPy is using this thing called Mersenne Twister, hopefully I'm pronouncing it correctly, PRNG, and it's known to have a number of problems, there is a link in their documentation for why that is, it's not that important for us"}, {"start": 3791.16, "end": 3809.16, "text": " What is probably useful to know is that it basically has the state consists of 624 unsigned integers, and every time you sample from that generator, we are basically consuming the entropy of the generator"}, {"start": 3809.16, "end": 3823.16, "text": " Bottom line, what I 
want you to take out from this is that NumPy's PRNG has some problems, and it's stateful, which is problematic because remember we need to have pure functions, we are in the functional programming world"}, {"start": 3823.16, "end": 3834.16, "text": " On the other hand, this is how Jax operates, so basically you have the PRNG from Jax, you seed it with a certain value, and it gives back the key, and key is just a synonym for state"}, {"start": 3834.16, "end": 3840.16, "text": " So basically we are now manipulating the state of the PRNG externally as opposed to internally"}, {"start": 3840.16, "end": 3848.16, "text": " So key is simply like a tuple of two unsigned integers, 32-bit integers, so that's the state"}, {"start": 3848.16, "end": 3859.16, "text": " Now what's the trick here? So now if we were to sample using that same state, we are going to get obviously the same results, so that's different compared to the NumPy behavior"}, {"start": 3859.16, "end": 3866.16, "text": " And the reason is again because we are not modifying the state, we are not advancing the state, and that's why we always get the same results"}, {"start": 3866.16, "end": 3873.16, "text": " So again, important to notice here, state is preserved, state has not changed, and secondly, the results are the same"}, {"start": 3873.16, "end": 3881.16, "text": " So what do we do? Obviously this is not a random number, this is a constant function, so how do we get a randomness in Jax?"}, {"start": 3881.16, "end": 3890.16, "text": " So here is a trick, every time you want to create a new random value, you basically just call this split function, and it's going to return key and subkey"}, {"start": 3890.16, "end": 3898.16, "text": " Subkey is used to generate a novel number, and key can subsequently be used again in the split function to get novel key and novel subkeys"}, {"start": 3898.16, "end": 3901.16, "text": " So every time you want to generate a new number, you have to call this splitting"}, {"start": 3901.16, "end": 3914.16, "text": " So this may seem like very rough and problematic, but believe me, it helps solve various issues that are caused by randomness in libraries such as NumPy, etc"}, {"start": 3914.16, "end": 3923.16, "text": " When you try to use them outside of the context for which they were designed for, and that's like basically I guess CPU, single threaded programs, etc"}, {"start": 3923.16, "end": 3931.16, "text": " Okay, so let's run this cell now and see the results. So we have the old key, old key got converted into the new key, so that's the new state"}, {"start": 3931.16, "end": 3937.16, "text": " And we have this subkey which we use to generate the random number"}, {"start": 3937.16, "end": 3944.16, "text": " So a couple of notes here, first things first is that basically you can split into more subkeys than just the two of these"}, {"start": 3944.16, "end": 3954.16, "text": " And secondly, there is no semantic difference between the key and the subkey, these are basically recommendations for how to organize these states"}, {"start": 3954.16, "end": 3965.16, "text": " And basically you use key to generate novels to split and generate novel key and novel subkey, and you use the subkey to generate the current random number that you currently need to consume"}, {"start": 3965.16, "end": 3977.16, "text": " And that's pretty much it. 
So after having explained all these complex details about how to handle PRNGs compared to NumPy, is there any good reason for it?"}, {"start": 3977.16, "end": 3988.16, "text": " And the answer is yes. So why this design? And the answer is can the code with the current design, with NumPy's design be reproducible, parallelizable, and vectorizable?"}, {"start": 3988.16, "end": 3998.16, "text": " So in the case of NumPy, number one is obeyed, in the case where you have a single threaded program on the CPU, basically there is no problem"}, {"start": 3998.16, "end": 4009.16, "text": " So let's see this concrete example. Let's assume we have a function called bar that generates a random number and a function called baz that also generates a random number"}, {"start": 4009.16, "end": 4016.16, "text": " Finally we have a function foe, and I don't know if I'm pronouncing these right, so this function returns bar plus two times baz"}, {"start": 4016.16, "end": 4026.16, "text": " And I don't know if you can see the problem with this code, once we start and try to call this foe function on line 14"}, {"start": 4026.16, "end": 4039.16, "text": " Now the problem is NumPy assumes a single threaded environment, and basically Python guarantees that this will be executed, to the best of my knowledge, from left to right"}, {"start": 4039.16, "end": 4049.16, "text": " Which means every time we run this, we are going to get the same result back, which means number one is obeyed, so that means we have reproducible programs"}, {"start": 4049.16, "end": 4065.16, "text": " So now what happens if we get this function, or if we, and the JIT decides to basically parallelize this program and calls bar on one accelerator, on one core, and baz on another one"}, {"start": 4065.16, "end": 4072.16, "text": " So what can happen is that the order of execution can change, and if that happens we may get different results"}, {"start": 4072.16, "end": 4088.16, "text": " So if the first function to be called returns 0.3 and the second one returns 0.4, depending on the order we'll either have 0.3 plus 2 times 0.4, or we'll have basically 0.4 plus 2 times 0.3"}, {"start": 4088.16, "end": 4100.16, "text": " And that basically leads to different outcomes, which means this will not be a reproducible result in the case of parallel, like computing on parallel, like cores, machines, whatever"}, {"start": 4100.16, "end": 4113.16, "text": " Similarly for the vectorization part, basically NumPy guarantees that if you generate numbers, if you do this, if you iterate in a for loop and you generate sequentially random numbers"}, {"start": 4113.16, "end": 4122.16, "text": " That will give you the same results as if you were to just set size to 3, on the other hand, Jax does it differently, if you generate individually"}, {"start": 4122.16, "end": 4129.16, "text": " That will give you different results compared to if you generate from all the numbers at once using a specific key"}, {"start": 4129.16, "end": 4138.16, "text": " So, and here again you can see an example where we generate three subkeys using the split function and not only two keys, as I previously mentioned that"}, {"start": 4138.16, "end": 4149.16, "text": " So let's run this function and you can see that basically NumPy has the same results, whereas Jax does not have the same results"}, {"start": 4149.16, "end": 4155.16, "text": " So if you're using the common SIMD pattern, so that's the single instruction multiple data, which is common in machine learning"}, {"start": 4155.16, 
"end": 4165.16, "text": " Whereas you want to apply the same functionality across different batches, you sometimes want to have the same randomness being applied across all of the batches"}, {"start": 4165.16, "end": 4175.16, "text": " And NumPy does not allow to do that, that's how I understood this part about the violating the vectorization problem, let me know if I got this wrong"}, {"start": 4175.16, "end": 4183.16, "text": " Okay, that's pretty much it, that was the random numbers, we have just two more gotchas to go and that's it"}, {"start": 4183.16, "end": 4193.16, "text": " So gotcha number four is the control flow, it's fairly, we've seen something similar to this, so basically control flow plus grad, nothing, there is no problem"}, {"start": 4193.16, "end": 4200.16, "text": " Basically this transform function, so the grad transform function can deal with these types of conditioning on the value of a function"}, {"start": 4200.16, "end": 4213.16, "text": " I went ahead and ran this and you can see the results, basically, yeah, just analyze this function, you'll see that at three there is this jump where we have a piecewise defined function here"}, {"start": 4213.16, "end": 4225.16, "text": " The whole point is if we were to take a grad of this function and evaluate it at two and at four, so that's just before this discontinuity and just after it, we'll get valid gradients"}, {"start": 4225.16, "end": 4236.16, "text": " Let me run this again, just to update the state of the cell, and if we were to jit this function and basically it will fail because we are conditioning on x"}, {"start": 4236.16, "end": 4244.16, "text": " So we have to, the solution is to make it a static argument and then it's gonna work, so if I run this, it's going to work"}, {"start": 4244.16, "end": 4250.16, "text": " And we already saw how to handle this case of conditioning on a value, so let's see some more interesting cases"}, {"start": 4250.16, "end": 4259.16, "text": " So here we are conditioning again on the value, but this time the value decides the length, the number of loops we'll have in this function"}, {"start": 4259.16, "end": 4275.16, "text": " And the way to go around this is again make this a static argument which will make jit trace this function using the forex, using the abstract tracer, so the shape and data type"}, {"start": 4275.16, "end": 4280.16, "text": " Whereas for n is gonna use a concrete value, so let me run this cell and see what we get"}, {"start": 4280.16, "end": 4292.16, "text": " Okay, so as you can see we have some huge, huge checks per here and the reason being is because we have, as you can see here, 15 was passed for n"}, {"start": 4292.16, "end": 4300.16, "text": " And this is the best way that jit can deal with these types of primitive, of native Python for loops"}, {"start": 4300.16, "end": 4310.16, "text": " Again, importantly, you should not change static values too often, otherwise we'll be triggering recompilation all of the time"}, {"start": 4310.16, "end": 4315.16, "text": " And then the overhead will maybe be detrimental to the speed of your application"}, {"start": 4315.16, "end": 4319.16, "text": " Let's see how this can be avoided and make a bit more optimal"}, {"start": 4319.16, "end": 4328.16, "text": " So a better way to do this is to use the low level API again, so the lex API, and there is this, as we saw, for iLoop function"}, {"start": 4328.16, "end": 4335.16, "text": " And we can rewrite the above problem and I went ahead and just rewrote it here and 
you can see that"}, {"start": 4335.16, "end": 4343.16, "text": " So this is the same function as the above one just using the lex API and if we now do the make, if I call the make checks per"}, {"start": 4343.16, "end": 4352.16, "text": " And let me kind of run this, we'll see that this code is way more succinct, it's more concise compared to the above one and thus more efficient, I guess"}, {"start": 4352.16, "end": 4357.16, "text": " I haven't profiled this example but yeah, I can assume it's more efficient"}, {"start": 4357.16, "end": 4367.16, "text": " You can go ahead and analyze these two cells at your own pace but the whole point was to be aware, just be aware that using these lower level API functions"}, {"start": 4367.16, "end": 4371.16, "text": " You can sometimes get more out of it"}, {"start": 4371.16, "end": 4379.16, "text": " Finally, the only reason I have this one is to understand that you can condition sometimes on the dimensionality of your data"}, {"start": 4379.16, "end": 4387.16, "text": " And that's allowed because imagine, this means that for the whole class of x where you have, where x is two dimensional"}, {"start": 4387.16, "end": 4393.16, "text": " This can, like a function can be cached and will work and we can fetch it and use it"}, {"start": 4393.16, "end": 4396.16, "text": " So let's run this one and let's see the results"}, {"start": 4396.16, "end": 4403.16, "text": " So I passed like an input array which is not two dimensional and because of that we took this branch"}, {"start": 4403.16, "end": 4407.16, "text": " Which means whatever we get as an input we just return as the output"}, {"start": 4407.16, "end": 4414.16, "text": " And that's why we have this super simple like a Jacksper for this particular function being traced with this input"}, {"start": 4414.16, "end": 4422.16, "text": " Hopefully all of these helped you crystallize and understand JIT because I guess, I think that's arguably the hardest part to understand about JAKS"}, {"start": 4422.16, "end": 4424.16, "text": " How JIT works in the background"}, {"start": 4424.16, "end": 4432.16, "text": " That's it, final, final, final gotcha is like how to handle NANDs in JAKS"}, {"start": 4432.16, "end": 4439.16, "text": " The usual non-error behavior is to simply return a NAND in the case of like operations such as this one, division by zero"}, {"start": 4439.16, "end": 4443.16, "text": " So if I were to comment out this thing here and let me run this cell"}, {"start": 4443.16, "end": 4447.16, "text": " So we won't have any error, we'll just have a NAND"}, {"start": 4447.16, "end": 4452.16, "text": " So if we want to actually debug and understand where the NANDs came from"}, {"start": 4452.16, "end": 4458.16, "text": " You wanna like the program to throw some exceptions, you can do, for example, this, you can find more in the docs"}, {"start": 4458.16, "end": 4463.16, "text": " But basically there is a way to do it and now it will be throwing exceptions, I guess"}, {"start": 4463.16, "end": 4465.16, "text": " Let me try again"}, {"start": 4465.16, "end": 4468.16, "text": " Yeah, now it's throwing an exception"}, {"start": 4468.16, "end": 4470.16, "text": " So yeah, let's keep that in mind"}, {"start": 4470.16, "end": 4474.16, "text": " Final cell, JAKS enforces single precision"}, {"start": 4474.16, "end": 4484.16, "text": " Why? 
Because actually nowadays it's fairly common to train your models, especially the big models like big transformers in FP16 or mixed precision or even FP8"}, {"start": 4484.16, "end": 4486.16, "text": " So it means 8 bits only"}, {"start": 4486.16, "end": 4493.16, "text": " And because NAMPA is aggressive in promoting like certain variables into double, so that means 64 bits"}, {"start": 4493.16, "end": 4497.16, "text": " JAKS made it by design that they are enforcing 32 bits"}, {"start": 4497.16, "end": 4502.16, "text": " So that may lead to some, again, some peculiar behavior such as this one"}, {"start": 4502.16, "end": 4511.16, "text": " You say I wanna like a vector with thousand random numbers which are 64 like bit long"}, {"start": 4511.16, "end": 4517.16, "text": " And if we were to print the data top of the actual array, we'll have flow 32, which is not intuitive, right?"}, {"start": 4517.16, "end": 4524.16, "text": " So be aware of this and there is a way around it, you can set certain flags if you don't want to have this as a default behavior"}, {"start": 4524.16, "end": 4532.16, "text": " As a quick summary, we saw the basics of JAKS such as the fact that it's using, it's based on functional programming paradigm"}, {"start": 4532.16, "end": 4538.16, "text": " We saw like various transform functions such as JIT, GRAD, DMAP"}, {"start": 4538.16, "end": 4546.16, "text": " Then we saw, we went deeper into JAKS, we saw the layered, like, layer API"}, {"start": 4546.16, "end": 4554.16, "text": " We saw many details of how JIT works and we saw many gotchas that JAKS has which may catch you by surprise"}, {"start": 4554.16, "end": 4557.16, "text": " So you should be aware and cognizant of these"}, {"start": 4557.16, "end": 4564.16, "text": " Finally, some conclusion of mine to who should be using JAKS and who should be using other frameworks"}, {"start": 4564.16, "end": 4571.16, "text": " As I said, I think JAKS is very good if you're a researcher, you wanna be very flexible and you wanna have a powerful tool"}, {"start": 4571.16, "end": 4578.16, "text": " And you also wanna, once you train big models such as in whatever industry I lab you're working, you're gonna be training big models"}, {"start": 4578.16, "end": 4582.16, "text": " So having like very optimized code is super important"}, {"start": 4582.16, "end": 4590.16, "text": " But if you're a beginner on the other hand, I think it may be a bit too harsh for you currently with the current state of JAKS"}, {"start": 4590.16, "end": 4592.16, "text": " And I don't think that's going to change"}, {"start": 4592.16, "end": 4597.16, "text": " So basically all of these optimization details such as the fact you cannot use Python list"}, {"start": 4597.16, "end": 4601.16, "text": " That will just throw exception because it can hurt the performance"}, {"start": 4601.16, "end": 4606.16, "text": " Because of all of that and because of the functional paradigm"}, {"start": 4606.16, "end": 4610.16, "text": " I think it's easier to just start with PyTorch"}, {"start": 4610.16, "end": 4612.16, "text": " Especially when we get to neural networks"}, {"start": 4612.16, "end": 4620.16, "text": " It's a more natural approach to approach neural networks with object oriented, from the object oriented perspective"}, {"start": 4620.16, "end": 4626.16, "text": " At least in my opinion maybe I lack experience with JAKS so yeah, but that seems to be the case"}, {"start": 4626.16, "end": 4632.16, "text": " Anyways, in the next video we're going to cover some 
concepts such as PyTrees, handling states"}, {"start": 4632.16, "end": 4640.16, "text": " Which are all basic components that we'll need to later build neural networks from scratch in PureJAKS and also in Haiku or Flex"}, {"start": 4640.16, "end": 4646.16, "text": " Hopefully you found this video useful, it took a lot of time to prepare the notebook, to prepare everything I wanted to show you here"}, {"start": 4646.16, "end": 4651.16, "text": " So if you did like it, consider sharing it out, also consider subscribing to this channel if you haven't already"}, {"start": 4651.16, "end": 4659.16, "text": " Also do join the Discord community, there is a lot of smart people there and we have a community for, like there is more than 1,000-100 people at this point of time"}, {"start": 4659.16, "end": 4663.16, "text": " So somebody will help you out if you have some doubt or questions"}, {"start": 4663.16, "end": 4676.16, "text": " In any case, until next time, bye bye!"}]
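The notebook cells referenced in the segments above are not included in this transcript, so the following is a minimal sketch of the numpy-vs-lax strictness point and the 1D convolution comparison, assuming a recent JAX install; the variable names and the symmetric smoothing kernel are illustrative, not taken from the original colab.

```python
import jax.numpy as jnp
from jax import lax

# The high-level jnp API quietly promotes mixed dtypes...
print(jnp.add(1, 1.0))            # 2.0 (the int is promoted to float32)

# ...while the mid-level lax API insists the dtypes match exactly.
try:
    lax.add(1, 1.0)
except Exception as err:          # "add requires arguments to have the same dtypes ..."
    print("lax.add failed:", err)

# 1D convolution: jnp.convolve vs. the far more general lax primitive.
x = jnp.linspace(0.0, 10.0, 500)
kernel = jnp.array([0.25, 0.5, 0.25])       # symmetric, so kernel flipping doesn't matter
out_np = jnp.convolve(x, kernel, mode="same")
out_lax = lax.conv_general_dilated(
    x.reshape(1, 1, -1),                    # (batch, channels, width)
    kernel.reshape(1, 1, -1),               # (out_channels, in_channels, width)
    window_strides=(1,),
    padding="SAME")
print(jnp.allclose(out_np, out_lax[0, 0]))  # should print True; the lax result is batched
```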
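Similarly, a sketch of the jit tracing behaviour discussed above: the print only fires while jit (re)traces, re-tracing is triggered by new shapes or dtypes, and value-dependent branching needs `static_argnums`. Function and argument names here are invented for illustration.

```python
from functools import partial
import jax
import jax.numpy as jnp

@jax.jit
def f(x, y):
    print("tracing with", x, y)      # a side effect: visible only during tracing
    return jnp.dot(x + 1, y + 1)

x, y = jnp.ones((3, 4)), jnp.ones((4,))
f(x, y)                              # first call: traces, prints abstract tracers
f(x * 2, y * 2)                      # same shapes/dtypes: cached, prints nothing
f(jnp.ones((3, 5)), jnp.ones((5,)))  # new shapes: re-traced, prints again

# The "flow model" jit records during tracing is a jaxpr:
print(jax.make_jaxpr(lambda a, b: jnp.dot(a + 1, b + 1))(x, y))

# Branching on a value only works if that argument is marked static.
@partial(jax.jit, static_argnums=(1,))
def negate_maybe(x, neg):
    return -x if neg else x

print(negate_maybe(jnp.arange(3.0), True))   # compiled once per value of neg
print(negate_maybe(jnp.arange(3.0), False))  # switching the flag re-traces
```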
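The in-place-update and out-of-bounds gotchas above, again as a hypothetical reconstruction of the cells rather than the original code:

```python
import jax.numpy as jnp

m = jnp.zeros((3, 3))
m = m.at[1, :].set(1.0)        # functional stand-in for m[1, :] = 1.0
print(m)

n = jnp.ones((3, 4))
n = n.at[::2, 2:].add(7.0)     # every second row, from the third column on -> eights
print(n)

v = jnp.arange(10.0)
print(v[11])                   # out-of-bounds read: clamped to the last element, no error
print(v.at[11].set(23.0))      # out-of-bounds write: silently dropped, array unchanged
```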
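For the explicit pseudo-random-number handling described above, a small sketch; the seed value is arbitrary.

```python
import jax

key = jax.random.PRNGKey(0)
print(jax.random.normal(key), jax.random.normal(key))  # same key -> identical samples

key, subkey = jax.random.split(key)        # new carry key + a key for one draw
print(jax.random.normal(subkey))

key, *subkeys = jax.random.split(key, num=4)   # you can split into more than two keys
print([float(jax.random.normal(k)) for k in subkeys])

# Unlike NumPy, three one-off draws (three subkeys) are a different stream
# than a single shape-(3,) draw from one key:
print(jax.random.normal(subkeys[0], shape=(3,)))
```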
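The loop-unrolling point (a native Python for loop gets unrolled in the jaxpr, while lax.fori_loop stays a single loop primitive) in sketch form; g_unrolled and g_fori are invented names, not the notebook's.

```python
import jax
import jax.numpy as jnp
from jax import lax

def g_unrolled(x, n):
    y = jnp.zeros(())
    for _ in range(n):                  # Python loop: unrolled n times at trace time
        y = y + x
    return y

def g_fori(x, n):
    return lax.fori_loop(0, n, lambda i, y: y + x, jnp.zeros(()))  # one loop primitive

x = jnp.array(2.0)
print(jax.make_jaxpr(g_unrolled, static_argnums=(1,))(x, 15))  # long, fully unrolled
print(jax.make_jaxpr(g_fori)(x, 15))                           # short and loop-shaped
print(jax.jit(g_unrolled, static_argnums=(1,))(x, 15),
      jax.jit(g_fori)(x, 15))                                  # same value from both
```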
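And the last two gotchas, NaN debugging and the float32 default, as config toggles. These are global flags that are normally set once at program start; flipping them mid-session (as below) works for freshly created arrays but is only meant as a demonstration.

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_debug_nans", True)      # opt in: raise instead of silently returning NaN
try:
    print(jnp.divide(0.0, 0.0))
except FloatingPointError as err:
    print("caught:", err)

jax.config.update("jax_enable_x64", True)      # opt in to 64-bit dtypes (off by default)
print(jnp.arange(3, dtype=jnp.float64).dtype)  # float64 once the flag is on
```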
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=iJ0IVZgGjTM
T0: Multitask Prompted Training Enables Zero-Shot Task Generalization | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ In this video I cover the "Multitask Prompted Training Enables Zero-Shot Task Generalization" paper that introduced the T0 transformer. T0 basically = Google's T5 + LM pretraining + additional training on prompted datasets. The paper came out of the BigScience workshop. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2110.08207 ✅ BigScience: https://bigscience.huggingface.co/ ✅ Models on HF hub: https://huggingface.co/bigscience ✅ Prompt tool: https://github.com/bigscience-workshop/promptsource ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 BigScience workshop, HuggingFace hub 03:00 A high-level overview 04:40 Are we really doing implicit training? 06:30 T0 training and prompt templates 10:55 Choosing the val dataset 13:20 How is T0 trained? 14:55 Results (vs GPT3) 18:25 Ablations (varying prompts and data) 21:30 Discrepancy in GPT3 API vs reported results 22:30 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #t0 #bigscience #huggingface
What's cracking guys? In this video I'm covering multitask prompted training enables zero-shot task generalization by this huge group of people here and all of them came from this thing called Big Science Workshop. Let me show you here. So a one-year long research workshop on large multilingual models and datasets. So they took inspiration from scientific creation schemes such as CERN and the Large Hadron Collider and it's basically a collaboration between various companies and the whole effort is put straight up by Hugging Face, GenC and this Idris company and they even have a supercomputer on their disposal. So like 28 petaflops supercomputer located near Paris, France. So yeah if you're interested, if you fancy training huge language models do check out this Big Science Workshop and yeah I mean outside of Hugging Face and these companies there are many other like supporting institutions like Mila, Stanford, even Microsoft here. So it's a huge thing. Other than that the model, the T0 model I'm about to cover is already published on Hugging Face Hub. So you can find all of the versions of this T0, T0+, T0++ models are already here and additionally the model card is already available as well as this small widget and if you're not familiar with Hugging Face what basically happens here is that once somebody trains a model they can publish it to this hub and they are responsible to create this model card because that will like enable people to use that model so that's in their best interest as well as in the best interest of the audience, of the people who are going to use this, of customers. And finally they have this small like applet in this widget style applet where you can see, you can run the model in the background so we can just click compute here and the T0 is going to do classification on this sentence. So we have last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app. Select the category for the above sentence from mobile, website, billing and account access. So you can see that the prompt here is in a natural language form, you don't have just a sentence and then the output you have the prompt and then you have the categories here and the model finally figures out the correct category. So if I were to change website to iOS because that's a word that appears here I guess it's gonna now output iOS. If I press compute here it's gonna output iOS. Bottom line you can play with T0 already or you can like here in this small widget or even better you can you can download it locally and then start the whole thing. Other than that aside from models they also like open sourced the this tooling they were using to collect the prompts. I'm gonna see what those are in a second but like yeah all in all huge kudos to the whole organization for making all this public and usable for others. Okay getting back to the paper let's dig into it. So large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. So that's the GPT-3 paper. It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask training? So basically the hypothesis is the following. 
When you're optimizing the language modeling objective so you're trying to predict the next word in the sentence the model basically figures out that it's way easier to learn how to solve certain NLP tasks such as summarization such as question answering which are relevant to us humans in order to to have an easier time to predict the next word. So all of those are implicitly learned as a consequence of trying to to minimize the cross entropy loss on the language modeling objective. The concern that the author has expressed here is that maybe just maybe those models are actually training explicitly and we just don't know about it so we call it implicit multitask training. We want to see that in a couple of minutes. Anyways they say here the model attains strong zero-shot performance on several standard data sets often outperforming models up to 16x its size. So they were comparing as we'll soon see this model t0 with GPT-3 family of models and oftentimes this model all performs GPT-3 which is super amazing. Okay let's continue here. I mentioned the problem with implicit multitask learning and they raise the concern here. So yet it is an open question how implicit this multitask learning really is. For example there are many websites that simply contain lists of trivia questions and answers and this data is precisely supervised training data for the task of closed book question answering. So let me open up that that website. Okay so here is like example of a website where we have those answer and question types of of of text. So you can see here what geometric shape is generally used for stop signs answer octagon. What is synophobia answer fear of dogs. So you can imagine that while the model was trying to to predict the next word it was actually learning how to answer this this question because once you prompt it with this thing and then you you force it to answer like this you're basically teaching it arguably how to do the question answering in this closed book setting. And you can imagine similar document somewhere in the huge data set that GPT-3 and T0 T5 models were using for training and that the implicit multitask may actually be explicit just that we are not aware of it. So therefore they insist on training this model explicitly and see whether that increases the zero shot generalization whether we can like get off with smaller models and whether we can have more robustness to those natural language prompts. And so they say here our goal is to induce a model to better generalize to unseen tasks without requiring massive scale as well as being more robust to the wording choices of the prompts. And that's very important because we see more and more lately like the the prompt engineering type of job where the whole point of some someone's job is to find the correct prompt in order to squeeze out the additional performance from from the language model. Anyways okay let's see how T0 is actually trained. So there are multiple tasks they they used we're going to see those in a bit detail a bit later but here here a couple of examples. So we have summarization task and you can see like the template here is the picture appeared on the wall of a pound land store on Weimark Avenue blah blah blah. So that's the question or the text and then you have how would you rephrase that in a few words. 
So that's the natural language prompt and the thing is this prompt here so they made so this whole group that wrote this paper they made an open call and they asked for contributors who would create novel creative prompts for these types of examples. So summarization problem summarization task does not have only this single type of prompt they have multiple prompts I think that's 8.1 on average or something. Anyways the model is expected to regress this type of a sentence graffiti artist banksy blah blah blah blah. Okay so other than that we have paraphrasing identification tasks so you can see here how is your traffic controlled that's one question then we have how do you become an air traffic controller that's the second question and then we have the prompt pick one these questions are duplicates or not duplicates and the model is supposed to say not duplicates in this specific case. So you can see that again this is a particular template for this particular task and there can be many more prompts as we'll soon see and yeah basically once you train the t0 model on various tasks you're gonna evaluate it on a held out set of tasks like natural language inference. Okay so let's go further here let me show you something so to make this a bit more clear because this prompting part is arguably the most important like novelty let's let's call it that way in this paper so here is a task of paraphrasing we can see again how is air traffic controlled that's question one we have we have question two and we have a label and here the label in the original data set is like numeric one so we have zero and zero corresponds to i think like not duplicates or one or whatever and in any case here is one particular prompt that people contributed so you can see here uh question one question two that's the template and then pick one these questions are duplicates or not duplicates and then you have choices uh label so that's gonna map zero to not duplicates or duplicates so basically you can see that this thing here is a template exact template for the actual instantiation we saw up there so that's this thing here so we have question question pick one blah blah blah so that's one particular template but other than that as i already mentioned we have other types of prompts like i received the questions question one and question two are they duplicates and again so you can see that semantics hasn't changed here but the like it's syntax wise it's completely different and that makes it makes it tougher for the for the model to to learn to solve that task uh similarly for every other like task you can see here the summary different prompts let me show you in even more detail how these guys made this uh like happen they basically created this uh like uh let's call it template language which helps automate the mapping from the original data set examples to the input output pair of examples using the natural language prompting so here we can see the example of how this template language works so we have a variable text and it's going to be eventually replaced by some concrete like text so we have here mark told pete many lies about himself which pete etc so that's one example but you can imagine any other example in the data set is going to pass through this scheme and out comes the actual input so here we'll have this replaced and then in the previous sentence does the pronoun and then they have span to text dot lower which is a python syntax and we have span to text is he so they ask does the pronoun he refer to 
Anyways, let me go back to the paper and show you a couple more things. I mentioned tasks, and this grouping is a heuristic, as they note. The authors are well aware that it's not ideal, because sometimes you learn certain skills on the training tasks, and then, even though you're testing the model on a held-out dataset, you've already learned the required skill somewhere in training, so it's hard to decouple skills. Still, they opted to separate things by task, so we have closed-book QA, sentiment classification, summarization, etc., and every single task has multiple datasets; you can see we have five datasets in total for the summarization task, for example. Then they have four held-out tasks, as well as this BIG-bench set of datasets. They mention: "Noting that the grouping by task is an imperfect heuristic, we err on the side of organizing our task taxonomy based on the task format, as opposed to the required skill, largely based on conventions in the literature." So, as you can see, it's still pretty much an open problem how to nicely separate the skills and datasets from the held-out ones so that you can evaluate the model in a fair fashion.
And since they are comparing T0 with GPT-3, they say: "Additionally, we do not train our main model on any datasets that GPT-3 used for evaluation, so that our main results will be a fair zero-shot comparison. We verify that data for those tasks is not leaked through the pre-training corpus." This is a very tough problem. If I recall correctly, the original GPT-3 paper actually had problems with leaking, and they subsequently did a post-hoc analysis and determined that the leaked data did not make the GPT-3 model much better on those tasks. But still, once you're dealing with such huge, internet-scale datasets, like the ones used for GPT-3 or for the T5 model that T0 is based on, you have these leaking problems, and, as I said, you cannot fully decouple whether you already learned a certain skill on the training set when you try to evaluate the model on a supposedly separate set of skills. That's still an open problem in the whole NLP field. Anyways, I thought that was worth mentioning.
Let me continue. Okay, we saw this one, and I mentioned that T0 is based on T5. The thing is, T5 was originally trained on a masked language modeling objective, and they say here that this is quite different from the conditional text generation format used in their prompted datasets. Because of that, they took the T5 model that came from Google and additionally trained it on a standard language modeling task, and they refer to that model as T5+LM. That model is used as a baseline throughout the paper, and it's also the starting point for what is called T0.
Okay, so let's see how T0 is finally trained. They mention that, at a high level, they assemble the multitask training mixture simply by combining all of the examples from all training datasets and shuffling the result, which is equivalent to sampling from each dataset in proportion to its number of examples. So basically, you take all of those question answering and summarization datasets, you take their examples, you map them using that template language, you shuffle all of those examples, and you train the model by just batching them. There are certain details here that are probably worth mentioning: some datasets are even a couple of orders of magnitude bigger than others, so if you were to sample using this naive scheme, the model would be overwhelmed by those few super huge datasets. Because of that, they have a way to threshold the number of examples coming from the bigger datasets. That's the training mixture in a nutshell.
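A minimal sketch of that mixture logic follows; the dataset names, sizes, and the cap value here are made up for illustration and are not the paper's actual numbers.

import random

datasets = {
    "summarization_xsum_like": [f"xsum/{i}" for i in range(40_000)],
    "closed_book_qa_like": [f"qa/{i}" for i in range(5_000)],
    "paraphrase_qqp_like": [f"qqp/{i}" for i in range(60_000)],
}
CAP = 10_000  # threshold the contribution of very large datasets

mixture = []
for name, examples in datasets.items():
    if len(examples) > CAP:
        examples = random.sample(examples, CAP)  # cap oversized datasets
    mixture.extend(examples)

random.shuffle(mixture)  # plain shuffling == sampling proportional to (capped) dataset size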
Okay, so let's finally see the results. We can see a comparison between the GPT family of models and T0. We have GPT-3 models with different sizes, 6.7 billion, 13 billion, and 175 billion parameters, that last one being the actual, biggest GPT-3 model; then we have the T5+LM baseline and the T0 model. We can see that most of the time, on most of these tasks, T0 is way better than GPT-3, even the biggest one, except on this one dataset and except on the HellaSwag dataset. Those are amazing results, considering that GPT-3 is way bigger than T0, an order of magnitude bigger. They argued that on that one task T0 performs worse because GPT-3 used certain specific techniques developed for that particular task, and as for HellaSwag, I haven't seen any argument for why there is this huge gap in performance; I guess that's an interesting open question for further research.
They also note that Brown et al. (the GPT-3 paper authors) report performance on a single prompt, whereas they report the median and interquartile range of performance across all prompts. So they basically say that the GPT-3 paper cherry-picked the best prompt and reported those results, and you can see that on the chart we have error bars for T0 and for the T5+LM baseline, but no error bars for GPT-3. So that is pretty much the best-case scenario, the best prompt was cherry-picked for the GPT-3 model, which is something worth keeping in mind, whereas here the model was evaluated on multiple prompts and all of the results are plotted using these error bars.
Okay, let's continue. We have other results on the BIG-bench benchmark, and the trend is very similar; only on StrategyQA is T0 worse than these baselines. Note that the baselines here are not GPT-3, they are just some models that people from Google created as baselines for the BIG-bench benchmark. What's interesting for me here is the non-monotonic behavior of these models. A quick note: we have T0, which we know what it is, we have T0+, and we have T0++. T0+ is just trained on additional data compared to T0, and T0++ on even more data. Let me briefly show you what exactly that is: T0+ is the same model but trained on a mixture that adds GPT-3's evaluation datasets, and for T0++ they additionally add the SuperGLUE datasets. So that's the difference, more data for these models. What's interesting, going back to the results, is the non-monotonic behavior: on most tasks adding more data is a good thing, but then we have certain tasks, like this logical deduction one, where the inverse happens, so the model with the least data performs best, and as we add data we get worse and worse performance. Investigating what happened there, whether we had some overfitting or whatnot, would be worth a try, I guess.
Okay, finally, let's see certain ablations they've done. The first one is training with various numbers of prompts per dataset. They take the T0 model, and p = 0 means there are zero prompts, which basically means using the T5+LM baseline; p = 1 means that for every dataset they use a single prompt randomly chosen from the set of prompts; and p = all is all of the prompts for each dataset, which is on average, I think, around 8.1 or something. We can see the results on various tasks in this candlestick-style diagram: the blue one is p = 0, the red one is p = 1, and the green one is p = all. You can see that the median is pretty much constantly improving. The thing that worries me is that the spread is way bigger when we set p = 1, but as we increase p to all, we get a smaller spread and a higher median, so that's cool. Now, the question I have is whether this is a consequence of actually adding the prompts, or just of the fact that much more computation went into training the p = all model compared to the other two. So it would be nice to overlay a comparison of models that were trained with fewer prompts but had more compute time, if that makes sense.
Aside from this, they did another ablation where they hold the number of prompts constant, so they use all prompts, but they vary the number of datasets used for training; those are the T0, T0+, and T0++ models. What's interesting here is that it's not quite clear-cut which model is better, because first of all it seems that T0+ is overall worse than T0: it has a bigger spread, and most of the time even the median is lower than the T0 baseline. Then we have a trend where, once we add the SuperGLUE datasets, we get a bump in performance, but again, the spread is usually way bigger compared to T0; compare this to this, and this to this. This is definitely worth further investigation. This model, on this particular NLI task, is obviously not robust, meaning that depending on how you pick your prompt you're going to get a huge difference in performance, and that's a bad thing. So the model becoming less robust as we increase the amount of training data is a fun conclusion to take away here.
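Since both of these ablation figures summarize the per-prompt spread, here is a small sketch of how that median and interquartile-range reporting works; the accuracy numbers below are made up, not results from the paper.

import numpy as np

# One model evaluated on the same task with several different prompts.
accuracies_per_prompt = np.array([0.52, 0.61, 0.58, 0.55, 0.63, 0.60, 0.57, 0.59])

median = np.median(accuracies_per_prompt)
q1, q3 = np.percentile(accuracies_per_prompt, [25, 75])
iqr = q3 - q1  # a large IQR means the model is sensitive to prompt wording
print(f"median = {median:.3f}, IQR = {iqr:.3f}")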
Okay, let me wrap up this paper by mentioning this part. They note that one of their templates is identical to the prompt that Brown et al. (again, the GPT-3 paper authors) reported: "This prompt scores 58.8% accuracy on the API base series, which is lower than the reported accuracy of 63.5% from the paper itself." So we can see a big gap between what the paper reported and what the actual API offers, and that's a point of concern for me. They also mention that all other nine prompts, however, yield roughly random-guessing performance, with median accuracy roughly equal to 50% and an interquartile range of 1.28. These results suggest that T0 is more robust to prompt formulation than GPT-3. So yeah, I think this is nice supporting evidence that training models like this makes them more robust, which offloads some of the work from those prompt programmer people; maybe, not so far in the future, prompt programming will cease to exist, and that's so sad.
Anyways, I'm going to wrap it up by mentioning this: "We release all models trained in this paper, in addition to the collection of prompts we created and our prompt annotation tool." Huge kudos to the authors for doing this. This whole BigScience workshop sounds very exciting, they have a supercomputer at their disposal, they are doing very nice, interesting experiments, and they are open-sourcing all of that along the way, so again, do check it out.
Okay guys, that's it for this video. If you found it useful, consider sharing it with your friends, as well as subscribing to this YouTube channel, and finally, do join the Discord community, we have an ever-increasing number of smart people there, so it's definitely worth checking out. Until next time, bye bye!
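For reference, since the models and prompts mentioned above are released publicly, here is a hedged sketch of how one could load a released checkpoint with the Hugging Face transformers library; I'm assuming the checkpoint name "bigscience/T0pp", and note that it's an 11-billion-parameter model, so it needs a lot of memory.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "bigscience/T0pp"  # assumed Hub name of the released T0++ model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompt = ("Is this review positive or negative? "
          "Review: this is the best cast-iron skillet you will ever buy.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))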
[{"start": 0.0, "end": 5.2, "text": " What's cracking guys? In this video I'm covering multitask prompted training enables zero-shot"}, {"start": 5.2, "end": 12.0, "text": " task generalization by this huge group of people here and all of them came from this thing called"}, {"start": 12.0, "end": 17.12, "text": " Big Science Workshop. Let me show you here. So a one-year long research workshop on large"}, {"start": 17.12, "end": 23.52, "text": " multilingual models and datasets. So they took inspiration from scientific creation schemes such"}, {"start": 23.52, "end": 29.04, "text": " as CERN and the Large Hadron Collider and it's basically a collaboration between various companies"}, {"start": 29.04, "end": 34.96, "text": " and the whole effort is put straight up by Hugging Face, GenC and this Idris company and they even"}, {"start": 34.96, "end": 42.08, "text": " have a supercomputer on their disposal. So like 28 petaflops supercomputer located near Paris,"}, {"start": 42.08, "end": 49.36, "text": " France. So yeah if you're interested, if you fancy training huge language models do check out this"}, {"start": 49.36, "end": 55.519999999999996, "text": " Big Science Workshop and yeah I mean outside of Hugging Face and these companies there are many"}, {"start": 55.52, "end": 63.36, "text": " other like supporting institutions like Mila, Stanford, even Microsoft here. So it's a huge"}, {"start": 63.36, "end": 70.0, "text": " thing. Other than that the model, the T0 model I'm about to cover is already published on Hugging"}, {"start": 70.0, "end": 76.24000000000001, "text": " Face Hub. So you can find all of the versions of this T0, T0+, T0++ models are already here"}, {"start": 76.24000000000001, "end": 82.24000000000001, "text": " and additionally the model card is already available as well as this small widget and if"}, {"start": 82.24, "end": 86.47999999999999, "text": " you're not familiar with Hugging Face what basically happens here is that once somebody"}, {"start": 86.47999999999999, "end": 91.44, "text": " trains a model they can publish it to this hub and they are responsible to create this model card"}, {"start": 91.44, "end": 96.96, "text": " because that will like enable people to use that model so that's in their best interest as well as"}, {"start": 96.96, "end": 101.44, "text": " in the best interest of the audience, of the people who are going to use this, of customers."}, {"start": 101.44, "end": 108.56, "text": " And finally they have this small like applet in this widget style applet where you can see,"}, {"start": 108.56, "end": 113.84, "text": " you can run the model in the background so we can just click compute here and the T0 is going to"}, {"start": 113.84, "end": 119.36, "text": " do classification on this sentence. So we have last week I upgraded my iOS version and ever since then"}, {"start": 119.36, "end": 124.48, "text": " my phone has been overheating whenever I use your app. Select the category for the above sentence"}, {"start": 124.48, "end": 130.72, "text": " from mobile, website, billing and account access. So you can see that the prompt here is in a natural"}, {"start": 130.72, "end": 135.12, "text": " language form, you don't have just a sentence and then the output you have the prompt and then you"}, {"start": 135.12, "end": 141.04, "text": " have the categories here and the model finally figures out the correct category. 
So if I were"}, {"start": 141.04, "end": 148.24, "text": " to change website to iOS because that's a word that appears here I guess it's gonna now output"}, {"start": 148.24, "end": 154.0, "text": " iOS. If I press compute here it's gonna output iOS. Bottom line you can play with T0 already or you"}, {"start": 154.0, "end": 158.56, "text": " can like here in this small widget or even better you can you can download it locally and then"}, {"start": 158.56, "end": 165.84, "text": " start the whole thing. Other than that aside from models they also like open sourced the"}, {"start": 166.64000000000001, "end": 170.96, "text": " this tooling they were using to collect the prompts. I'm gonna see what those are in a second"}, {"start": 170.96, "end": 178.64000000000001, "text": " but like yeah all in all huge kudos to the whole organization for making all this public and"}, {"start": 178.64000000000001, "end": 185.52, "text": " usable for others. Okay getting back to the paper let's dig into it. So large language models have"}, {"start": 185.52, "end": 191.68, "text": " recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. So that's"}, {"start": 191.68, "end": 198.64000000000001, "text": " the GPT-3 paper. It has been hypothesized that this is a consequence of implicit multitask learning"}, {"start": 198.64000000000001, "end": 204.24, "text": " in language model training. Can zero-shot generalization instead be directly induced by"}, {"start": 204.24, "end": 210.08, "text": " explicit multitask training? So basically the hypothesis is the following. When you're optimizing"}, {"start": 210.08, "end": 215.12, "text": " the language modeling objective so you're trying to predict the next word in the sentence the model"}, {"start": 215.12, "end": 221.6, "text": " basically figures out that it's way easier to learn how to solve certain NLP tasks such as"}, {"start": 221.6, "end": 226.72, "text": " summarization such as question answering which are relevant to us humans in order to to"}, {"start": 228.8, "end": 234.88, "text": " have an easier time to predict the next word. So all of those are implicitly learned as a consequence"}, {"start": 234.88, "end": 240.32, "text": " of trying to to minimize the cross entropy loss on the language modeling objective. The concern"}, {"start": 240.32, "end": 246.23999999999998, "text": " that the author has expressed here is that maybe just maybe those models are actually training"}, {"start": 246.23999999999998, "end": 251.44, "text": " explicitly and we just don't know about it so we call it implicit multitask training. We want to"}, {"start": 251.44, "end": 257.12, "text": " see that in a couple of minutes. Anyways they say here the model attains strong zero-shot performance"}, {"start": 257.12, "end": 263.68, "text": " on several standard data sets often outperforming models up to 16x its size. So they were comparing"}, {"start": 263.68, "end": 270.96, "text": " as we'll soon see this model t0 with GPT-3 family of models and oftentimes this model all performs"}, {"start": 270.96, "end": 278.64, "text": " GPT-3 which is super amazing. Okay let's continue here. I mentioned the problem with"}, {"start": 278.64, "end": 284.72, "text": " implicit multitask learning and they raise the concern here. So yet it is an open question how"}, {"start": 284.72, "end": 290.64, "text": " implicit this multitask learning really is. 
For example there are many websites that simply contain"}, {"start": 290.64, "end": 295.76, "text": " lists of trivia questions and answers and this data is precisely supervised training data for"}, {"start": 295.76, "end": 302.15999999999997, "text": " the task of closed book question answering. So let me open up that that website. Okay so here is"}, {"start": 302.15999999999997, "end": 308.56, "text": " like example of a website where we have those answer and question types of of of text. So you"}, {"start": 308.56, "end": 314.4, "text": " can see here what geometric shape is generally used for stop signs answer octagon. What is"}, {"start": 314.4, "end": 320.64, "text": " synophobia answer fear of dogs. So you can imagine that while the model was trying to to predict the"}, {"start": 320.64, "end": 325.03999999999996, "text": " next word it was actually learning how to answer this this question because once you prompt it with"}, {"start": 325.03999999999996, "end": 331.03999999999996, "text": " this thing and then you you force it to answer like this you're basically teaching it arguably"}, {"start": 331.03999999999996, "end": 337.91999999999996, "text": " how to do the question answering in this closed book setting. And you can imagine similar document"}, {"start": 337.92, "end": 345.92, "text": " somewhere in the huge data set that GPT-3 and T0 T5 models were using for training and that the"}, {"start": 345.92, "end": 351.12, "text": " implicit multitask may actually be explicit just that we are not aware of it. So therefore they"}, {"start": 351.12, "end": 356.40000000000003, "text": " insist on training this model explicitly and see whether that increases the zero shot generalization"}, {"start": 356.40000000000003, "end": 361.84000000000003, "text": " whether we can like get off with smaller models and whether we can have more robustness to those"}, {"start": 361.84000000000003, "end": 367.36, "text": " natural language prompts. And so they say here our goal is to induce a model to better generalize to"}, {"start": 367.36, "end": 372.64, "text": " unseen tasks without requiring massive scale as well as being more robust to the wording choices"}, {"start": 372.64, "end": 377.52000000000004, "text": " of the prompts. And that's very important because we see more and more lately like the the prompt"}, {"start": 377.52000000000004, "end": 383.36, "text": " engineering type of job where the whole point of some someone's job is to find the correct prompt"}, {"start": 383.36, "end": 389.12, "text": " in order to squeeze out the additional performance from from the language model. Anyways okay let's"}, {"start": 389.12, "end": 394.56, "text": " see how T0 is actually trained. So there are multiple tasks they they used we're going to see"}, {"start": 394.56, "end": 400.08, "text": " those in a bit detail a bit later but here here a couple of examples. So we have summarization task"}, {"start": 400.08, "end": 406.16, "text": " and you can see like the template here is the picture appeared on the wall of a pound land store"}, {"start": 406.16, "end": 412.08, "text": " on Weimark Avenue blah blah blah. So that's the question or the text and then you have how would"}, {"start": 412.08, "end": 416.4, "text": " you rephrase that in a few words. 
So that's the natural language prompt and the thing is this"}, {"start": 416.4, "end": 423.12, "text": " prompt here so they made so this whole group that wrote this paper they made an open call and they"}, {"start": 423.12, "end": 429.84000000000003, "text": " asked for contributors who would create novel creative prompts for these types of examples. So"}, {"start": 429.84000000000003, "end": 434.64, "text": " summarization problem summarization task does not have only this single type of prompt they have"}, {"start": 434.64, "end": 440.96, "text": " multiple prompts I think that's 8.1 on average or something. Anyways the model is expected to regress"}, {"start": 440.96, "end": 446.16, "text": " this type of a sentence graffiti artist banksy blah blah blah blah. Okay so other than that we"}, {"start": 446.16, "end": 451.6, "text": " have paraphrasing identification tasks so you can see here how is your traffic controlled that's one"}, {"start": 451.6, "end": 456.88, "text": " question then we have how do you become an air traffic controller that's the second question"}, {"start": 456.88, "end": 461.04, "text": " and then we have the prompt pick one these questions are duplicates or not duplicates and"}, {"start": 461.04, "end": 466.88, "text": " the model is supposed to say not duplicates in this specific case. So you can see that again"}, {"start": 466.88, "end": 473.04, "text": " this is a particular template for this particular task and there can be many more prompts as we'll"}, {"start": 473.04, "end": 479.76000000000005, "text": " soon see and yeah basically once you train the t0 model on various tasks you're gonna evaluate it"}, {"start": 479.76, "end": 487.12, "text": " on a held out set of tasks like natural language inference. Okay so let's go further here let me"}, {"start": 487.12, "end": 493.03999999999996, "text": " show you something so to make this a bit more clear because this prompting part is arguably the"}, {"start": 493.03999999999996, "end": 500.56, "text": " most important like novelty let's let's call it that way in this paper so here is a task of"}, {"start": 500.56, "end": 505.68, "text": " paraphrasing we can see again how is air traffic controlled that's question one we have we have"}, {"start": 505.68, "end": 511.04, "text": " question two and we have a label and here the label in the original data set is like numeric"}, {"start": 511.04, "end": 516.08, "text": " one so we have zero and zero corresponds to i think like not duplicates or one or whatever"}, {"start": 516.08, "end": 520.64, "text": " and in any case here is one particular prompt that people contributed so you can see here"}, {"start": 520.64, "end": 525.36, "text": " uh question one question two that's the template and then pick one these questions are duplicates"}, {"start": 525.36, "end": 531.2, "text": " or not duplicates and then you have choices uh label so that's gonna map zero to not duplicates"}, {"start": 531.2, "end": 536.88, "text": " or duplicates so basically you can see that this thing here is a template exact template for the"}, {"start": 536.88, "end": 542.6400000000001, "text": " actual instantiation we saw up there so that's this thing here so we have question question pick"}, {"start": 542.6400000000001, "end": 547.36, "text": " one blah blah blah so that's one particular template but other than that as i already"}, {"start": 547.36, "end": 551.9200000000001, "text": " mentioned we have other types of prompts like i received the questions question one and question"}, {"start": 
551.9200000000001, "end": 557.2, "text": " two are they duplicates and again so you can see that semantics hasn't changed here but the like"}, {"start": 557.2, "end": 562.6400000000001, "text": " it's syntax wise it's completely different and that makes it makes it tougher for the for the"}, {"start": 562.6400000000001, "end": 569.36, "text": " model to to learn to solve that task uh similarly for every other like task you can see here the"}, {"start": 569.36, "end": 576.5600000000001, "text": " summary different prompts let me show you in even more detail how these guys made this uh like happen"}, {"start": 576.5600000000001, "end": 582.88, "text": " they basically created this uh like uh let's call it template language which helps automate the"}, {"start": 582.88, "end": 589.52, "text": " mapping from the original data set examples to the input output pair of examples using the natural"}, {"start": 589.52, "end": 593.92, "text": " language prompting so here we can see the example of how this template language works so we have a"}, {"start": 593.92, "end": 598.88, "text": " variable text and it's going to be eventually replaced by some concrete like text so we have"}, {"start": 598.88, "end": 604.08, "text": " here mark told pete many lies about himself which pete etc so that's one example but you can imagine"}, {"start": 604.08, "end": 608.96, "text": " any other example in the data set is going to pass through this scheme and out comes the actual"}, {"start": 608.96, "end": 614.08, "text": " input so here we'll have this replaced and then in the previous sentence does the pronoun and then"}, {"start": 614.08, "end": 621.44, "text": " they have span to text dot lower which is a python syntax and we have span to text is he so they ask"}, {"start": 621.44, "end": 628.0, "text": " does the pronoun he refer to spend one text that's going to be mark so does it refer to mark yes or"}, {"start": 628.0, "end": 635.44, "text": " no second example is here we have text and then here by he they mean uh mark yes or no so that's"}, {"start": 635.44, "end": 639.9200000000001, "text": " a second example and then you can have a bunch of these so text in other words blah blah true or"}, {"start": 639.9200000000001, "end": 646.6400000000001, "text": " false so basically all of these uh templates are gonna map the input examples into desired output"}, {"start": 646.6400000000001, "end": 652.1600000000001, "text": " format and that's how uh they created this novel uh prompted data sets let's call them that way"}, {"start": 652.72, "end": 658.8000000000001, "text": " anyways let me go back to the paper uh and let me show you a couple more things so i mentioned"}, {"start": 658.8, "end": 666.24, "text": " tasks and um it's a heuristic they they they they mentioned um the authors are well aware that this"}, {"start": 666.24, "end": 671.3599999999999, "text": " is not ideal because sometimes you can learn certain skills here and then even though you're"}, {"start": 671.3599999999999, "end": 677.28, "text": " testing the model on a held out data set you already learned the skill somewhere here so it's"}, {"start": 677.28, "end": 684.64, "text": " hard to decouple uh skills and they they opted to to do this way of separating it by tasks so we have"}, {"start": 684.64, "end": 691.12, "text": " um like close book q and a sentiment classification summarization etc etc and every single task has"}, {"start": 691.12, "end": 696.48, "text": " multiple data sets you can see here even like we have five data sets in total 
for the summarization"}, {"start": 696.48, "end": 703.36, "text": " task and then they have four uh like held out tasks as well as this big bench uh like set of"}, {"start": 703.36, "end": 709.4399999999999, "text": " data sets and they mentioned here that noting the grouping by task is an imperfect heuristic we are"}, {"start": 709.44, "end": 714.6400000000001, "text": " on the side of organizing our task taxonomy based on the task format as opposed to required skill"}, {"start": 714.6400000000001, "end": 718.8800000000001, "text": " largely based on conventions in the literature so as you can see this is pretty much still an open"}, {"start": 718.8800000000001, "end": 724.6400000000001, "text": " problem how do you separate nicely uh the the skills and the data sets from the held out ones"}, {"start": 724.6400000000001, "end": 730.48, "text": " so that you can um evaluate the model in a fair fashion and since they are comparing uh t0 with"}, {"start": 730.48, "end": 736.8000000000001, "text": " gpt3 they say that additionally we do not train our main model on any data sets that gpt3 use for"}, {"start": 736.8, "end": 742.9599999999999, "text": " evaluation so that our main results will be a fair zero shot comparison we verify that data for those"}, {"start": 742.9599999999999, "end": 750.4, "text": " tasks is not leaked through the pre-training corpus and i mean um this is a very tough problem"}, {"start": 750.4, "end": 756.8, "text": " uh even i if i if i recall correctly the original gpt3 paper actually had problems with leaking"}, {"start": 756.8, "end": 762.4, "text": " and subsequently they've done a post-hoc uh analysis and determined that the fact that the"}, {"start": 762.4, "end": 768.3199999999999, "text": " data leaked uh did not cause the gpt3 model to be much better on those on those tasks but yeah"}, {"start": 768.3199999999999, "end": 773.92, "text": " still uh like once you're dealing with such a huge internet scale types of data sets like like the"}, {"start": 773.92, "end": 781.68, "text": " ones used for gpt3 or t0 or t5 model upon which t0 is based on uh you have these problems of"}, {"start": 781.68, "end": 788.24, "text": " like leaking and as i said you you cannot decouple whether you learn a certain skill on the training"}, {"start": 788.24, "end": 794.5600000000001, "text": " data set and then try and evaluate the model in a totally different separate set of skills so that's"}, {"start": 794.5600000000001, "end": 798.8, "text": " still an open problem in the whole nlp field anyways i thought that was worth mentioning so"}, {"start": 798.8, "end": 806.5600000000001, "text": " let me continue here okay we saw this one and i mentioned that t0 is based on t5 and actually"}, {"start": 806.5600000000001, "end": 813.52, "text": " so the the the the fact is so the thing is t5 was originally trained on this mass language modeling"}, {"start": 813.52, "end": 818.64, "text": " objective and because uh they say here that it's quite different from the conditional text generation"}, {"start": 818.64, "end": 824.16, "text": " format used in our prompted data sets so because of that they actually took the t5 model that came"}, {"start": 824.16, "end": 830.16, "text": " from google and they additionally trained it as a like on a language modeling task and they refer"}, {"start": 830.16, "end": 835.52, "text": " to that model as the t5 plus lm and that's going to be used as a baseline throughout this paper"}, {"start": 836.3199999999999, "end": 842.88, "text": " in what is called 
t0 okay so let's see how the t0 is finally trained they mentioned here at a high"}, {"start": 842.88, "end": 848.96, "text": " level we assemble our multitask training mixture simply by combining all of the examples from all"}, {"start": 848.96, "end": 853.84, "text": " training data sets and shuffling the result this is equivalent to sampling from each data set in"}, {"start": 853.84, "end": 859.52, "text": " proportion to the number of examples in the data set so yeah basically you you take all of those"}, {"start": 859.52, "end": 865.04, "text": " question answering summarization tasks you you take their examples you map them using that template"}, {"start": 865.04, "end": 870.08, "text": " language and then you just kind of shuffle all of those of those examples and train this model by"}, {"start": 870.08, "end": 875.6800000000001, "text": " just patching those there are certain details here probably worth mentioning so some data sets are"}, {"start": 875.6800000000001, "end": 882.08, "text": " like even to order of magnitudes bigger than other ones so if you just if you were to just sample"}, {"start": 882.08, "end": 886.88, "text": " using this naive scheme the model would be overwhelmed by those a couple of those super"}, {"start": 886.88, "end": 891.84, "text": " huge data sets and so they have certain way to threshold the amount of examples coming from"}, {"start": 891.84, "end": 898.0, "text": " those bigger data sets in a nutshell okay so let's see the results finally we can see a"}, {"start": 898.0, "end": 904.4, "text": " comparison here between gpt family of models so we have gpt3 models here with different sizes we"}, {"start": 904.4, "end": 911.2, "text": " have 6.7 billion 13 billion and 175 so that's the actual gpt3 model the biggest one this is the t5"}, {"start": 911.2, "end": 916.88, "text": " plus lm baseline and this is t0 model and we can see that most of the time on all of these tasks"}, {"start": 917.52, "end": 925.52, "text": " the t0 is way better compared to gpt3 so even the biggest one except on this data set and except on"}, {"start": 925.52, "end": 931.04, "text": " helloswag data set which are amazing results considering that gpt3 is way bigger compared to"}, {"start": 931.04, "end": 939.4399999999999, "text": " t0 like it's an order of magnitude bigger than t0 they argumented that on this task they perform"}, {"start": 939.4399999999999, "end": 946.4, "text": " worse because gpt3 use certain specific techniques that were like developed for that specific task"}, {"start": 947.04, "end": 952.96, "text": " and as for this one i haven't seen any any argument for why there is this huge gap in performance i"}, {"start": 952.96, "end": 959.12, "text": " guess that's an interesting open question for further research okay so they mentioned that"}, {"start": 960.1600000000001, "end": 967.36, "text": " note that brown at also the gpt3 paper authors reports performance on a single prompt whereas"}, {"start": 967.36, "end": 974.0, "text": " we report the median and interquartile range of performance across all prompts so they basically"}, {"start": 974.0, "end": 982.1600000000001, "text": " say that the the gpt3 paper they they cherry picked the best prompt and reported those results"}, {"start": 982.16, "end": 989.68, "text": " and we can see that on the chart here we have error bars for t0 and t5 plus lm baseline but we"}, {"start": 989.68, "end": 994.8, "text": " don't have any error bars here so this is the best case scenario pretty much the best prompt was"}, 
{"start": 994.8, "end": 999.28, "text": " cherry picked for this for this gpt3 model so that's something worth keeping in mind"}, {"start": 999.28, "end": 1006.3199999999999, "text": " whereas here the model was evaluated on multiple prompts and all of the results are are plotted"}, {"start": 1006.32, "end": 1015.0400000000001, "text": " here like using these error bars okay let's continue we have other results on this big bench"}, {"start": 1015.0400000000001, "end": 1022.48, "text": " benchmark and again the trend is very similar only on this strategy qa is the t0 worse compared to"}, {"start": 1022.48, "end": 1027.44, "text": " these baselines whereas the baselines now are not gpt3 so keep that in mind they are just some models"}, {"start": 1027.44, "end": 1033.52, "text": " that the like people from google created those baselines for this big band a big bench benchmark"}, {"start": 1033.52, "end": 1040.8, "text": " uh okay so what's interesting here for me is the non-monotonic nature of of these models so"}, {"start": 1041.76, "end": 1048.72, "text": " a quick mention here we have t0 we know what it is we have t0 plus and we have t0 plus plus"}, {"start": 1048.72, "end": 1054.96, "text": " so t0 plus is just trained on additional data compared to t0 and t0 plus plus on even more data"}, {"start": 1054.96, "end": 1060.24, "text": " so let me just briefly go and show you what exactly that is so t0 plus is the same model"}, {"start": 1060.24, "end": 1066.24, "text": " but trained on a mixture that adds gpt3's evaluation data sets for t0 plus plus we add"}, {"start": 1066.24, "end": 1072.8, "text": " gpt3's and superglue data sets okay so that's the difference we have more data for these models so"}, {"start": 1072.8, "end": 1078.64, "text": " what's interesting going back here is that this non-monotonic behavior so on most tasks we have"}, {"start": 1078.64, "end": 1085.36, "text": " that basically adding more data is is a good thing but then we have certain tasks like this logical"}, {"start": 1085.36, "end": 1091.6799999999998, "text": " deduction where we have the inverse here happening so the the the model with the least data performs"}, {"start": 1091.6799999999998, "end": 1097.4399999999998, "text": " the best and then as we are adding data we have worse and worse performance and i guess investigating"}, {"start": 1097.4399999999998, "end": 1102.4799999999998, "text": " what happened here what kind of whether we had some overfitting or whatnot uh would be worth a"}, {"start": 1102.4799999999998, "end": 1108.3999999999999, "text": " try i guess okay finally let's see uh certain ablations they've done um the first one is uh"}, {"start": 1108.3999999999999, "end": 1114.0, "text": " training with various amounts of prompts per data set if that makes sense so they they take"}, {"start": 1114.0, "end": 1119.36, "text": " uh the t0 model and they uh they basically p equals zero means there are zero prompts that"}, {"start": 1119.36, "end": 1125.76, "text": " means they're basically using t5 plus lm baseline p equals one means for every data set they use a"}, {"start": 1125.76, "end": 1131.36, "text": " single prompt that was randomly chosen from from from the set of prompts and p equals all is"}, {"start": 1131.36, "end": 1136.08, "text": " basically all of the prompts for each data set which is on average i think around 8.1 or something"}, {"start": 1136.8, "end": 1141.76, "text": " and we can see the results here on various different tasks and uh we have this this this"}, {"start": 
1141.76, "end": 1148.4, "text": " this candlestick diagram and you can see that the mean usually increases as we are adding so the"}, {"start": 1148.4, "end": 1154.8, "text": " blue one is the p equals zero the red one is the p equals one and the green one is the p equals all"}, {"start": 1155.28, "end": 1160.8, "text": " uh case so we can see that the median is constantly improving pretty much so the thing that worries me"}, {"start": 1160.8, "end": 1167.76, "text": " here is that the the spread is way bigger when we when when we set p equals to one but as we uh"}, {"start": 1167.76, "end": 1174.16, "text": " progress p to to to be equal to all we basically have uh like small spread and we have uh like"}, {"start": 1174.16, "end": 1181.52, "text": " higher median so that's cool so now question i have here is whether this is a like consequence of uh"}, {"start": 1181.52, "end": 1187.12, "text": " actually adding the prompts or is it just the fact that we have much more computation that went into"}, {"start": 1187.12, "end": 1193.12, "text": " training this this p equals all model compared to these two models so it'd be nice to to to uh"}, {"start": 1193.12, "end": 1199.6, "text": " um overlay a comparison here of models that were trained with less prompts but but had more compute"}, {"start": 1199.6, "end": 1207.12, "text": " time uh compared to these models here if that makes sense okay so they did aside from this uh"}, {"start": 1207.12, "end": 1214.32, "text": " in other uh in other like uh ablation and here uh they hold the number of prompts constant so they"}, {"start": 1214.32, "end": 1219.28, "text": " use all prompts uh but they vary so they have they vary the number of data sets that were used to"}, {"start": 1219.28, "end": 1224.24, "text": " train the t0 models so those were the t0 t0 plus and t0 plus plus models so what is interesting"}, {"start": 1224.24, "end": 1231.28, "text": " here is that uh it's not quite a clear cut uh which model is better here because first of all"}, {"start": 1231.28, "end": 1238.3999999999999, "text": " it seems like that t0 plus is overall worse compared to t like t0 like it has bigger spread"}, {"start": 1238.3999999999999, "end": 1246.24, "text": " and it most of the times even the median is lower compared to the uh t0 baseline so i mean yeah and"}, {"start": 1246.24, "end": 1253.68, "text": " then we have a trend where once we add this super glue data set we have a bump in performance here"}, {"start": 1253.68, "end": 1260.8, "text": " but again the spread is usually bigger way bigger compared to t0 here as well compare this to this"}, {"start": 1260.8, "end": 1266.64, "text": " and compare this to this this is definitely worth a further investigation i mean it is it is"}, {"start": 1266.64, "end": 1272.88, "text": " obviously not so this model here on this particular nly task is definitely not robust that means"}, {"start": 1272.88, "end": 1279.5200000000002, "text": " depending on how you pick your prompt you're gonna have a huge like difference in your performance"}, {"start": 1279.5200000000002, "end": 1284.8000000000002, "text": " and that's that's a bad thing that means this model is not robust uh when we increase the amount"}, {"start": 1284.8000000000002, "end": 1291.8400000000001, "text": " of data that will be like oh fun conclusion i have here okay uh let me wrap up this paper uh"}, {"start": 1291.8400000000001, "end": 1298.48, "text": " mentioning this part here so note that one of our templates is identical to brown at all again 
gpt3"}, {"start": 1298.48, "end": 1306.4, "text": " paper authors reporter prompt this prompt scores 58.8 accuracy on the api base series which is"}, {"start": 1306.4, "end": 1313.44, "text": " lower than the reported accuracy of 63.5 from the paper itself so we can see a huge gap in performance"}, {"start": 1313.44, "end": 1319.92, "text": " between what the paper uh like reported and between and between what the actual api is offering"}, {"start": 1320.48, "end": 1326.56, "text": " so that's a like a point of concern for me there and they mentioned here that all other nine prompts"}, {"start": 1326.56, "end": 1333.2, "text": " however yield roughly random guessing performance with median accuracy roughly equal to 50 and"}, {"start": 1333.2, "end": 1340.24, "text": " interquartile range 1.28 these results suggest that t0 is more robust to prompt formulation than gpt3"}, {"start": 1340.24, "end": 1344.3999999999999, "text": " so yeah i think this is a nice supporting evidence that training the models like this"}, {"start": 1344.3999999999999, "end": 1348.8, "text": " uh will make the models more robust so this offloads some of the work from those pro"}, {"start": 1348.8, "end": 1353.6799999999998, "text": " prompt programmer people maybe not so far in the future prompt programming will cease to exist and"}, {"start": 1353.68, "end": 1360.88, "text": " that's that's so sad um anyways um i'm gonna wrap it up with mentioning this um we release all models"}, {"start": 1360.88, "end": 1364.96, "text": " training this paper in addition to the collection of prompts we created and our prompt annotation"}, {"start": 1364.96, "end": 1371.92, "text": " tool so huge kudos to the authors uh for for doing this um this whole big science workshop"}, {"start": 1371.92, "end": 1378.0, "text": " sounds very exciting uh they have a supercomputer on disposal uh they are doing uh very nice"}, {"start": 1378.0, "end": 1383.04, "text": " interesting experiments and they are open sourcing all of that along the way so again do check it out"}, {"start": 1383.04, "end": 1388.1599999999999, "text": " and okay guys that's it for this video if you found it useful consider sharing it out with your"}, {"start": 1388.1599999999999, "end": 1393.52, "text": " friends as well as subscribing to this youtube channel and finally do join the discord community"}, {"start": 1393.52, "end": 1399.12, "text": " we have an ever-increasing number of smart people there so definitely worth checking out uh until"}, {"start": 1399.12, "end": 1414.08, "text": " next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=Gl0s0GDqN3c
ResNet Strikes Back! | Patches Are All You Need? | Papers Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ In this video I cover 2 recent papers: * ResNet strikes back: An improved training procedure in timm * Patches Are All You Need? The ResNet paper proposes a novel training procedure and boosts ResNet-50 top-1 accuracy to over 80%! The "Patches Are All You Need" paper shows that the template (using patch embeddings + isotropic model) may be playing a more important role than the transformer block itself. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ ResNet paper: https://arxiv.org/abs/2110.00476 ✅ All You Need paper: https://openreview.net/forum?id=TVHS5Y4dNvM ✅ timm lib: https://github.com/rwightman/pytorch-image-models/ ✅ My transformer implementation: https://github.com/gordicaleksa/pytorch-original-transformer ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:50 Paper 1: high-level overview 02:50 What caused the accuracy boost? 05:45 Data augmentations explained 07:05 Regularization explained 08:55 Optimizers explained 09:50 Experiments 12:50 Inherent result variance 13:55 Training procedure generalization to other datasets 15:10 Making wrong conclusions is easy 16:30 Paper 2: high-level overview 17:35 Architecture explained 22:25 A note on paper length 23:05 My thoughts 24:05 Single tweet implementation ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #resnet #patches #allyouneed
What's cracking guys? In this video I'm covering two papers. The first one is ResNet Strikes Back and the second one is Patches Are All You Need. Basically, the reason I'm covering both is that the second paper is super simple and a very neat idea, it's a four-pager, a very short paper, and the second reason is that both of these papers tell us we don't quite know what we are doing: this one tells us that we are oftentimes doing unfair comparisons with baselines, and this one tells us that we often don't know how to disentangle what is actually bringing the performance, whether it's the architecture itself or the template, something else basically. I'll dig into that paper a bit later; let me first start with ResNet Strikes Back. Both of these are fairly short, I think 30 minutes will be enough.
Anyways, it's a paper by Ross Wightman of the timm library, the famous PyTorch image models library, and Hugo Touvron and Hervé Jégou; I'm butchering their names, sorry for that. So let's dig into the paper itself. "The influential residual networks designed by He et al. remain the gold-standard architecture in numerous scientific publications." So the ResNet-50 architecture, published back in 2015, is still used as a baseline for various tasks. And they say: "In this paper, we re-evaluate the performance of the vanilla ResNet-50 when trained with a procedure that integrates such advances", i.e. the advances that appeared from 2015 onwards. And finally: "For instance, with our more demanding training setting, a vanilla ResNet-50 achieves 80.4% top-1 accuracy at resolution 224x224 on ImageNet validation, without extra data or distillation." The original number was, I think, somewhere around 76, so that's around four points higher, which is a decent advancement considering that they are basically freezing the original architecture and just adding newer things: optimizers, augmentations, etc. We're going to see that in a second.
They have this symbolic, high-level formula here, which tells us that the accuracy of a certain model is a function of the architecture design, of the training procedure, and of noise and overfitting. Now, it's kind of hard to decouple the last two, because you do often overfit the training procedure of your model to a specific dataset, so I don't know how exactly they decouple those, but in any case, what they're focusing on in this paper is the training procedure itself.
So let's see what they've actually done in this paper and how they improved ResNet-50. They basically have three procedures, you can see them here: A1, A2, and A3. The first one is the most compute-heavy one and the best one, at least for their setup; you can see it has 600 epochs, so it takes a lot of time to train, it's memory intensive, etc., and it achieves the 80.4 number we saw above. Basically, what they've done is, across multiple dimensions, such as the loss, the data augmentations, the regularization, and the optimization, they introduced the best techniques and approaches we know now in 2021, and they got the results they reported. So let's see what this is all about. Okay, I went ahead and just pasted in the mixup and CutMix procedures so that it's easier to understand what I'm talking about here.
So basically, as they say here, they're using mixup and CutMix, and the thing they modified compared to the original papers is the following. They say: "In our training, we assume instead that these concepts are all present and treat the classification as a multi-label classification problem (one versus all)." The thing is, as you can see here, what mixup does is mix up various images and categories, both in the image space and in the output label space. The two techniques do this in slightly different ways. Mixup applies blending, you can see the cat here and the dog face here as well; that's the image space, and then, in the label space, depending on the blending coefficient, the output distribution looks something like this: maybe the cat gets probability 0.4 and the dog 0.6, and that's the target distribution you're trying to match using cross-entropy. It's the same thing with CutMix, except that there you're actually pasting a crop of your second image, the cat, into the original image, and depending on how much of the original image the crop occupies, you again form those probabilities and thus the target distribution. What this paper proposes instead is, rather than reweighting, to simply put a one for the cat and a one for the dog and use binary cross-entropy instead, so for every single class you compute the log term and accumulate that to form the loss. It's a fairly simple tweak, and it turns out it helps a lot. So that's the first thing they change here, the loss component.
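Here is a rough PyTorch sketch of that loss change, contrasting the usual soft cross-entropy targets with the binarized multi-label targets plus BCE; this is my own illustration of the idea, not the timm implementation.

import torch
import torch.nn.functional as F

num_classes = 1000
logits = torch.randn(4, num_classes)        # model outputs for a mixed batch
y_a = torch.tensor([3, 17, 256, 940])       # labels of the original images
y_b = torch.tensor([512, 8, 64, 77])        # labels of the mixed-in images
lam = 0.4                                   # mixup/CutMix blending coefficient

# Standard treatment: soft targets weighted by lam, matched with cross-entropy.
soft = torch.zeros(4, num_classes)
soft.scatter_(1, y_a.unsqueeze(1), lam)
soft.scatter_(1, y_b.unsqueeze(1), 1 - lam)
ce_loss = (-soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# "ResNet strikes back" treatment: both concepts are simply marked as present,
# and the problem becomes multi-label, optimized with binary cross-entropy.
multi_hot = torch.zeros(4, num_classes)
multi_hot.scatter_(1, y_a.unsqueeze(1), 1.0)
multi_hot.scatter_(1, y_b.unsqueeze(1), 1.0)
bce_loss = F.binary_cross_entropy_with_logits(logits, multi_hot)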
Then they have the data augmentation component. Let me read this through: "We adopt the following combination of data augmentations: on top of a standard random resized crop (RRC)", which is your standard augmentation where you take a random crop from the image and resize it to 224x224, "horizontal flipping", which is obviously just flipping the image, "we apply the timm variant", that's the library I mentioned, "of RandAugment", and we're going to see what that is in a second, "mixup and CutMix". What RandAugment does is it basically has a series of augmentations, such as shear, as you can see here, or photometric transformations such as this auto-contrast, and it has an automatic way to figure out which augmentations to apply. I don't think it's that important to understand in detail, you can check out the paper yourself, but all in all, they're just using techniques that people have devised over the last couple of years, they experimented with a lot of them, and they found the best configuration for ResNet-50. They mention mixup and CutMix; let me just show them here, so CutMix is this one and mixup is this one, and those two count as augmentations as well.
Regularization is the third thing they change, and they say: "In addition to adapting the weight decay, we use label smoothing, repeated augmentation, and stochastic depth." Let me quickly go through all of these. Label smoothing was used in the original transformer paper, and you can check out my implementation of the original transformer where I've actually implemented this label smoothing thing, but the idea is super simple: instead of having a hard target where a certain class has probability one, you put one minus epsilon there, and then, because you want the probabilities to sum up to one, you distribute the epsilon across all the other classes, so each of those positions gets epsilon divided by n minus one, where n is the number of classes. That's label smoothing. Repeated augmentation is a fairly simple technique as well: within a batch, you take a single image, apply a couple of different augmentations to it, and include all of those replicas of the original image inside the same batch, and that turns out to improve results. As you can see, all of these are somewhat arbitrary heuristics, and things just work. Finally, stochastic depth is similar to dropout in a way: you have a ResNet module, a block with a skip connection and an addition at the end, and what stochastic depth does is, with a certain probability, short-circuit the block so that only the skip connection remains. So it's just an additional heuristic, a variation of dropout for these models.
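As a side illustration of that stochastic depth idea, here is a toy PyTorch residual block that randomly skips its transformation during training; real implementations (for example in timm) typically decide per sample and rescale the branch, so treat this only as the gist.

import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    def __init__(self, dim, drop_prob=0.1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(),
        )
        self.drop_prob = drop_prob

    def forward(self, x):
        # With probability drop_prob, short-circuit the whole residual branch.
        if self.training and torch.rand(1).item() < self.drop_prob:
            return x
        return x + self.block(x)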
Okay, so those are the three main components, and finally, instead of using Adam or SGD, they're actually using the LAMB optimizer. So: we therefore focus on LAMB with a cosine schedule as the default optimizer for training our ResNet-50. And the reason is they are using way larger batch sizes compared to the original ResNet-50 training, and this paper and some other transformer papers showed that LAMB, hopefully I'm pronouncing the name correctly, is really good when you have a lot of images in a batch. And that's pretty much it. They take ResNet-50, they apply lots of these different hacks they figured out; enormous kudos to the authors, because this was, I guess, an enormous amount of work put into this paper, to do the extensive experimentation and to sweep through the combinatorial space and figure out the best possible combinations. Okay, let's see the experiments. So first things first, here you can see the thing I just mentioned. The column here contains all of the variables, the hyperparameters they were tweaking in order to find these optimal training procedures: A1, A2 and A3. As you can see, they're comparing them with the PyTorch default training procedure for ResNet-50 and some other procedures, for example this one from the vision transformer paper, etc. The main thing you can notice here is they're using LAMB instead of the Adam and SGD optimizers. They have different learning rates, they have different schedules, etc. So in some procedures, such as A1, they're using label smoothing, whereas in others they are not; they're also using stochastic depth, etc. I don't think it's that important to understand all the details; the authors did all the heavy lifting for us. The bottom line is, once they found this correct configuration, they got the 80.4 result, and you can see that this result is way better compared to the results that all of these other papers reported. So this vision transformer paper reported 78.4, which as you can see is almost 2 percent lower. The comparison was not fair, because they were probably using some advanced training recipe for the vision transformer, whereas they didn't try as hard to improve ResNet-50, which is, I mean, understandable. You cannot invest that much time into reconstructing baselines and trying to make them as good as possible, because it's hard, it takes a lot of compute, it takes a lot of time, so people just take the numbers reported in previous papers. Okay, so let's see some tables here. They took the same training procedures, labeled A1, A2 and A3, and applied them to various other models, in order to see whether this specific training procedure generalizes to other architectures as well as it does to ResNet-50. The finding is that mostly they do, if certain constraints are obeyed: the model has to be similar in size and depth to ResNet-50, and the more similar it is, the better the results will be. So basically that means they did overfit. And if we take a look at some results here, for example EfficientNet-B4, you can see that A2 is actually better for that model compared to A1, whereas for ResNet-50 obviously the best results come from A1. Also for some deeper models, such as this SENet-154, Squeeze-and-Excitation I guess, you can see results are a tad better with A2 compared to A1, which is because it's way deeper compared to ResNet-50, and I guess you'd have to tweak the procedure to get better results. Okay, next up, I'll actually just skip this table, you can check it out yourself, and let me focus on this result here. They showed that once they do evaluation on ImageNet V2 versus the original ImageNet, you can kind of see, so all of these crosses here are a single model that was trained on ImageNet and then evaluated on these two datasets, that there seems to be no correlation, at least just eyeballing this thing, because for example the best top-1 accuracy on the V2 dataset comes from a model that at the same time has, I guess, around the mean top-1 accuracy on the original ImageNet. So now imagine we had a different training procedure and not just a different random seed; you can see there is a lot of variance that can happen, and so there is a lot of potential confusion, and it's hard to disentangle whether something you added is actually beneficial or not. Yeah, anyways, let's continue here. There are a couple more things I want to show you. First thing is this one: they took the same model and fine-tuned it on various other datasets, and you can see that, like, this is the A1 procedure, and you can see that the results on ImageNet are quite a bit better here and the gap is pretty big, but once you compare on other datasets, these results are fairly similar. So even here and here we still have some gap; I guess this one has a similar distribution to ImageNet, that would be my assumption, but yeah. Anyways, this tells us that even this fancy training procedure, which took a lot of compute to figure out, will not be the best one on other datasets, and the differences are even smaller when you take a look at A2. Here sometimes they even have worse performance, so you can see here versus this one here. Yeah, I guess all of this is to show that, A, it's hard to find the right training procedure, and B, even if you find it, it's not gonna generalize to other setups.
So yeah, this seems too hacky to me, following Richard Sutton's philosophy, for this type of approach to be the way to go in the future. So one more thing, one more table that's gonna strengthen that argument even more, is this one here. They took ResNet-50 and this DeiT-S model, which is I think similar in size to ResNet-50, and what they've done is they took the A2 procedure and trained both of them, and they got the results here. You can see that ResNet-50 seems to be better, so the conclusion is that it is better than the ViT. And then they took the T2 procedure, which was tailored to this ViT, and you can see that now, with this training procedure, it seems that the ViT is better compared to ResNet-50. The point here is, when you have a paper and people figure out a certain procedure, that procedure is probably gonna benefit the model that's being published, and it's gonna be unfair towards all of the previous baselines. And then, in the second case, once they are evaluating on ImageNet V2 instead of, here it was ImageNet, you can see that here, for both procedures, it seems that the ViT is better compared to ResNet-50, although here the difference is very small, so yeah. In any case, this interplay between where you're evaluating your models and the training procedure you're using makes it easy to make a wrong conclusion. And yeah, that's pretty much it. That's the ResNet Strikes Back paper. Now let me switch to this new paper, just published a couple of days ago; the authors are still anonymous, it's in double-blind review. And the idea here was the following. They made this ConvMixer model, and they showed that it's very simple, it's just using convolutions, as we're going to see in a second, and they again reported better results compared to these other baselines, such as ViT, ResNet and ResMLP. And now the thing is, probably the procedures that were devised in the paper I just covered would make this ResNet way better, maybe even comparable, because we know it went over 80. So yeah, basically it's very hard to compare models, especially when you just have a single dimension, like parameters here, because there are many other things we may care more about, like throughput, etc. Okay, anyways, let's see what this paper is about. If you already know something about vision transformers or MLP-Mixer, this thing will look very familiar. If you haven't watched those videos, if you haven't explored those papers, you can check out my videos, I've covered both of them, and I'll link them somewhere here. Anyways, you can see the common pattern: we have images split into various patches, then each patch is basically projected into a vector.
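As a side note, that shared "patchify" front end can be written as a single strided convolution; here's a minimal sketch (the dimensions are illustrative, not taken from any of the papers).

```python
import torch
import torch.nn as nn

patch_size, dim = 16, 768  # illustrative values
# A convolution with kernel size == stride == patch_size splits the image into
# non-overlapping patches and linearly projects each one to a `dim`-dimensional vector.
patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

x = torch.randn(1, 3, 224, 224)
feat = patch_embed(x)                      # (1, dim, 14, 14): one vector per patch
tokens = feat.flatten(2).transpose(1, 2)   # (1, 196, dim): the same thing as a token sequence
```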
Then, in the case of ViT, what they've done is they just applied a transformer, a couple of transformer blocks; in the case of MLP-Mixer, they've done mixing over the spatial extent and over the channel extent by applying basically MLPs, or you can also consider it a type of convolution. In any case, what this paper shows is that you do not need to use self-attention, like in the case of ViTs; you can just use simple convolutional layers. So the thing they do here is they apply a depth-wise convolution, then they have an activation function, they have batch norm, and then they do a point-wise convolution, activation function, batch norm, and they just repeat this a couple of times, and that's it. It's super simple, we're gonna see it; they can basically fit the whole implementation of the model in a single tweet, they have that in the appendix, so that's a fun fact. And what depth-wise and point-wise do is the same thing as Mixer: they're just doing spatial-wise and channel-wise mixing. A brief recap of how those two work, so how depth-wise and point-wise convolutions work: if we have an activation volume here, so let me just draw it out, something like this, this will be the number of channels, this is the width, and this thing here is the height. What depth-wise convolution does is, you take a single channel, like maybe this one, and you apply a kernel across that channel, across the spatial extent of the activation volume, and you just get a new feature map. Point-wise is basically the opposite: you take a one-times-one kernel, which will have as many channels as the activation volume has, and this is how you form novel features, by just striding over the spatial extent; you're going to mix up, as you can see here, channels, whereas here you're mixing up the spatial information. So that's an idea we've already seen in MLP-Mixer, and yeah. In any case, what they argue, because we saw that their results are better compared to MLP-Mixer and ViTs in some cases, is that this actual template of having this patch embedding, and then having this, how they call it, isotropic model, meaning you're reducing neither the number of channels nor the spatial extent, is what matters; whereas in CNNs, for example, you usually reduce the spatial extent and add channels as you go deeper into the network. So these isotropic models preserve both the number of channels as well as the spatial extent, and they argue that maybe, just maybe, that template of using patch embeddings plus that isotropic modeling of features is the thing that actually gives us the performance, not the self-attention, not the other fancy things we've imported from the NLP world. In any case, the model is super simple, it's a very nice baseline; you can see here, this is the whole implementation: they have this initial layer here, which is basically just the patch embedding, then they have N of these blocks, and finally they add adaptive average pooling, they flatten out the features, and they add a linear classifier on top of it. So it's a fairly simple model and it achieves comparable, if not better, results compared to ViT, compared to ResNet and ResMLP.
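For concreteness, here is a sketch in the spirit of the block structure described above: depthwise conv for spatial mixing, pointwise 1x1 conv for channel mixing, GELU + BatchNorm after each, a residual around the depthwise part, a patch embedding in front, and average pooling plus a linear classifier at the end. The hyperparameter values are illustrative defaults, not the paper's exact configurations.

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=1000):
    """Isotropic model: channel count and spatial resolution stay fixed after the patch embedding."""
    block = lambda: nn.Sequential(
        Residual(nn.Sequential(                                    # spatial mixing
            # padding="same" needs a reasonably recent PyTorch (>= 1.9)
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))),
        nn.Conv2d(dim, dim, kernel_size=1),                        # channel mixing
        nn.GELU(), nn.BatchNorm2d(dim))
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),  # patch embedding
        nn.GELU(), nn.BatchNorm2d(dim),
        *[block() for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))
```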
Now, the red alert here is that the throughput is super, super small; they mention that it should be possible to improve this in future research. This ties back to the thing I mentioned about this plot: when you just plot a single dimension, like the number of parameters, you may paint a wrong picture. But in any case, this paper is not trying to set SOTA; they're just trying to say that we are maybe focusing on the wrong thing, I guess. So we went all the way from ResNet and convolutional neural networks, to transformers and vision transformers, to finally get back to convolutional neural networks again, in their pure form, just keeping this isotropic template. Also, I like the end part of this paper, a note on paper length: if you're expecting more text in this paper, wondering if it's a workshop paper we hastily submitted to ICLR, no, this paper presents a simple idea, one where we genuinely believe that a short paper presentation is more effective. Do we really need exactly eight, now nine or ten, pages to describe every machine learning architecture and algorithm in existence? We proposed an incredibly simple architecture and made a very simple point that we think is worth more discussion: patches work well in convolutional architectures. We think that four pages is more than enough space for this. The details of the experiments and architectures are in the appendix for those who want to read through it all. A lot of respect to the authors, to be honest. Aside from this, I really think we need to question this whole template of how we present research. I think that PDFs, for one, are obsolete. I think that authors need to write accompanying blogs a lot more. We need something interactive, like notebooks, such as Jupyter, or Deepnote, which is also one cool notebook environment. So I think, as a community, we need to rethink how we present work, because the whole idea of this paper is for other researchers to understand what these guys have done and be able to reproduce or build upon their ideas. And so, who cares about the 10-pager format? Paper reading in and of itself is hard, and I guess any help we can get from the technology advancements we have, such as interactivity, visualizations, animations, will help us as a community. Finally, let me show you the one-tweet implementation. Here it is: an implementation of our model in exactly 280 characters. Twitter is slowly becoming a more important medium, if not already, compared to conferences. You can check out other results in the paper; they have some nice visualizations of the kernels. I'll leave you with that, and if you liked this video, consider subscribing, share out the video, hit that bell icon to get notified every time I upload a new video, and also join the Discord community. We've got a lot of smart people there, and until next time, bye-bye!
[{"start": 0.0, "end": 3.68, "text": " What's cracking guys? In this video I'm covering two papers. The first one is"}, {"start": 3.68, "end": 8.24, "text": " ResNet Strikes Back and the second one is Patches Are All You Need."}, {"start": 8.24, "end": 12.24, "text": " And basically the reason I'm doing that is because this paper is super"}, {"start": 12.24, "end": 15.280000000000001, "text": " simple and a very neat idea. It's a four pager,"}, {"start": 15.280000000000001, "end": 19.68, "text": " it's a very short paper and the second reason is because both of these papers"}, {"start": 19.68, "end": 23.6, "text": " tell us that we don't know what we are doing,"}, {"start": 23.6, "end": 27.68, "text": " basically. So this one tells us that we are oftentimes doing unfair"}, {"start": 27.68, "end": 32.08, "text": " comparisons with some baselines and this one tells us that oftentimes"}, {"start": 32.08, "end": 35.36, "text": " we don't know how to disentangle what is"}, {"start": 35.36, "end": 39.28, "text": " bringing the actual performance, whether it is the actual architecture or"}, {"start": 39.28, "end": 43.519999999999996, "text": " the template itself, like something else basically. So I'll dig into the paper a"}, {"start": 43.519999999999996, "end": 48.08, "text": " bit later. Let me first start with ResNet Strikes Back and both of"}, {"start": 48.08, "end": 51.04, "text": " these are fairly short. I think 30 minutes will be"}, {"start": 51.04, "end": 55.04, "text": " enough. So anyways, it's a paper by Ross Whitman"}, {"start": 55.04, "end": 59.839999999999996, "text": " from the team library, famous image models, PyTorch library,"}, {"start": 59.839999999999996, "end": 64.8, "text": " and Hugo Touvron and Herv\u00e9 G\u00e9gaud. I'm butchering their names, sorry for that."}, {"start": 64.8, "end": 68.16, "text": " Anyways, so let's dig into the paper itself."}, {"start": 68.16, "end": 71.75999999999999, "text": " The influential residual networks designed by Hiahtol"}, {"start": 71.75999999999999, "end": 75.36, "text": " remain the gold standard architecture in numerous scientific publications."}, {"start": 75.36, "end": 78.56, "text": " So the ResNet-50 architecture published back in 2015"}, {"start": 78.56, "end": 81.84, "text": " is still used as a baseline for various tasks."}, {"start": 81.84, "end": 85.44, "text": " And they say that in this paper we re-evaluate the performance of the"}, {"start": 85.44, "end": 88.72, "text": " vanilla ResNet-50 when trained with a procedure that"}, {"start": 88.72, "end": 93.92, "text": " integrates such advances, i.e. the advances that appeared from"}, {"start": 93.92, "end": 98.96000000000001, "text": " 2015 onwards. 
And finally, for instance, with our more demanding training"}, {"start": 98.96000000000001, "end": 103.52000000000001, "text": " setting, a vanilla ResNet-50 achieves 80.4%"}, {"start": 103.52000000000001, "end": 107.76, "text": " top-one accuracy at resolution 224 times 224 on image"}, {"start": 107.76, "end": 111.44, "text": " net validation without extra data or distillation."}, {"start": 111.44, "end": 114.48, "text": " Whereas the, I think, the original number was somewhere around"}, {"start": 114.48, "end": 117.75999999999999, "text": " 76 or something, so around five points, which is a"}, {"start": 117.75999999999999, "end": 121.44, "text": " decent advancement considering that they are basically freezing the"}, {"start": 121.44, "end": 126.64, "text": " original architecture and just adding novel, like, things, so like"}, {"start": 126.64, "end": 131.04, "text": " optimizers, augmentations, etc. We're going to see that in a second."}, {"start": 131.04, "end": 134.88, "text": " So they have this, like, symbolic high-level representation"}, {"start": 134.88, "end": 139.04, "text": " formula here, which tells us basically that the accuracy of a certain model"}, {"start": 139.04, "end": 143.35999999999999, "text": " is a function of the architecture design, of the training"}, {"start": 143.35999999999999, "end": 148.72, "text": " procedure, and of noise and overfitting. And now, basically, it's kind of hard to"}, {"start": 148.72, "end": 153.2, "text": " decouple, I'd say, these two, because you do overfit the training procedure"}, {"start": 153.2, "end": 155.92, "text": " oftentimes of your model to a specific dataset."}, {"start": 155.92, "end": 160.07999999999998, "text": " So I don't know how they decouple these two, but in any case,"}, {"start": 160.07999999999998, "end": 164.48, "text": " what they're focusing on in this paper is the training procedure itself."}, {"start": 164.48, "end": 168.07999999999998, "text": " So let's see what they actually have done in this paper, how"}, {"start": 168.08, "end": 174.24, "text": " have they approved the ResNet-50. So they basically have three procedures."}, {"start": 174.24, "end": 180.24, "text": " You can see them here, so A1, A2, and A3. The first one is the most compute-heavy"}, {"start": 180.24, "end": 184.32000000000002, "text": " one, the best one, at least for their setup, and you"}, {"start": 184.32000000000002, "end": 188.16000000000003, "text": " can see because it has 600 epochs,"}, {"start": 188.16000000000003, "end": 192.88000000000002, "text": " lots of time to train this one, like, memory intensive, etc., etc., and they"}, {"start": 192.88000000000002, "end": 197.44, "text": " achieved the 80.4 number we saw above. So that's achieved with this"}, {"start": 197.44, "end": 202.88, "text": " A1 procedure. Basically, what I've done is across multiple"}, {"start": 202.88, "end": 209.35999999999999, "text": " dimensions, such as the loss, such as data augmentations, regularization,"}, {"start": 209.35999999999999, "end": 213.04, "text": " and optimization, they kind of introduced the best"}, {"start": 213.04, "end": 219.76, "text": " techniques and approaches we know now in 2021, and they got the"}, {"start": 219.76, "end": 224.4, "text": " results that they reported. 
So let's see what the thing is about."}, {"start": 224.4, "end": 228.48000000000002, "text": " Okay, so I went ahead and just pasted in the mixup and the cut-mix"}, {"start": 228.48000000000002, "end": 231.92000000000002, "text": " procedures so that it's easier to understand what I'm talking about here."}, {"start": 231.92000000000002, "end": 237.04000000000002, "text": " So basically, they've used, as they say here, they're using mixup and cut-mix,"}, {"start": 237.04000000000002, "end": 240.56, "text": " and the thing they modified compared to the original papers is the following."}, {"start": 240.56, "end": 244.08, "text": " So they say here, in our training, we assume instead that these concepts are"}, {"start": 244.08, "end": 246.48000000000002, "text": " all present and treat the classification as a"}, {"start": 246.48000000000002, "end": 250.72, "text": " multi-label classification problem, one versus all. So the thing is, as you can"}, {"start": 250.72, "end": 254.72, "text": " see here, what mixup does is both in the image"}, {"start": 254.72, "end": 258.96, "text": " space as well as in the output label space, they mix up"}, {"start": 258.96, "end": 263.52, "text": " various images, various categories. So here you can see"}, {"start": 263.52, "end": 267.28, "text": " that the two techniques do this in a bit different way."}, {"start": 267.28, "end": 270.48, "text": " So this one applies some blending, and you can see"}, {"start": 270.48, "end": 274.56, "text": " like the cat here and the dog face here as well. So that's the image space, and"}, {"start": 274.56, "end": 277.84, "text": " then what happens in the label space is depending on the blending coefficient,"}, {"start": 277.84, "end": 280.64, "text": " they basically have the output distribution look something like this."}, {"start": 280.64, "end": 285.44, "text": " Like maybe the cat will be 0.4 probability and the"}, {"start": 285.44, "end": 289.28, "text": " dog will be like 0.6. So this will be your target distribution you're trying"}, {"start": 289.28, "end": 293.91999999999996, "text": " to match using cross entropy. Same thing with cut-mix, except that here,"}, {"start": 293.91999999999996, "end": 298.71999999999997, "text": " as you can see, you're actually like pasting a crop"}, {"start": 298.71999999999997, "end": 304.0, "text": " of your second image of the cat here into your original image, and"}, {"start": 304.0, "end": 307.52, "text": " basically depending on the ratio of how much of the"}, {"start": 307.52, "end": 311.28, "text": " original image are you occupying with this crop, you'll be again"}, {"start": 311.28, "end": 315.12, "text": " forming these probabilities and thus forming the"}, {"start": 315.12, "end": 319.68, "text": " target distribution. So what this paper instead"}, {"start": 319.68, "end": 325.52, "text": " like proposes is to simply, instead of reweighting this, just put like one for"}, {"start": 325.52, "end": 330.15999999999997, "text": " the cat and one for the dog, and use binary cross entropy instead."}, {"start": 330.15999999999997, "end": 333.44, "text": " So basically for every single class you'll be doing"}, {"start": 333.44, "end": 337.28, "text": " the log thing and like accumulating that to form the loss."}, {"start": 337.28, "end": 342.32, "text": " So it's a fairly simple like a tweak and it turns out it helps a lot."}, {"start": 342.32, "end": 345.91999999999996, "text": " So that's the first thing they do here. That's the loss component. 
Then they have"}, {"start": 345.91999999999996, "end": 348.71999999999997, "text": " the data augmentation component. Let me read this through."}, {"start": 348.71999999999997, "end": 352.15999999999997, "text": " So we adopt the following combination of data augmentations. On top of"}, {"start": 352.15999999999997, "end": 356.0, "text": " a standard random resized crop, so RRC, so that's your"}, {"start": 356.0, "end": 359.2, "text": " like standard stuff, standard augmentation, you basically take a"}, {"start": 359.2, "end": 363.52, "text": " like a random crop from the image and you resize it into a temp like a 224"}, {"start": 363.52, "end": 368.32, "text": " times 224. Horizontal flipping, which is obviously just flipping the image."}, {"start": 368.32, "end": 373.35999999999996, "text": " They apply the the theme variance, so that's the library I mentioned,"}, {"start": 373.35999999999996, "end": 377.03999999999996, "text": " of rand augment and we're going to see what it is in a second."}, {"start": 377.03999999999996, "end": 383.2, "text": " Mix up and cut mix. So what rand augment does is basically has a like a"}, {"start": 383.2, "end": 387.44, "text": " series of augmentations such as shear, as you can see here, such as changing the"}, {"start": 387.44, "end": 390.47999999999996, "text": " some photometric transformations such as this auto"}, {"start": 390.48, "end": 394.64000000000004, "text": " contrast and they basically have an automatic way to figure out these"}, {"start": 394.64000000000004, "end": 397.76, "text": " augmentations. So I don't think it's that important to"}, {"start": 397.76, "end": 399.76, "text": " understand. You can check out the paper yourself,"}, {"start": 399.76, "end": 403.36, "text": " but like all in all they're just using these techniques that people have"}, {"start": 403.36, "end": 406.96000000000004, "text": " devised over the last couple of years and they just kind of"}, {"start": 406.96000000000004, "end": 411.76, "text": " experimented with a lot of them and found the best configuration for Reson50."}, {"start": 411.76, "end": 415.28000000000003, "text": " They mentioned mix up and cut mix. Let me just see here."}, {"start": 415.28000000000003, "end": 419.6, "text": " So cut mix is this one and mix up is this one. So those two are augmentations as"}, {"start": 419.6, "end": 422.8, "text": " well. So yeah, anyways,"}, {"start": 422.8, "end": 427.28000000000003, "text": " regularization is the third thing they change here and they say in"}, {"start": 427.28000000000003, "end": 431.20000000000005, "text": " addition to adapting the weight decay we use label smooting,"}, {"start": 431.20000000000005, "end": 435.84000000000003, "text": " repeated augmentation and stochastic depth. So let me quickly go through"}, {"start": 435.84000000000003, "end": 439.28000000000003, "text": " all of these. So label smooting, what they do is"}, {"start": 439.28000000000003, "end": 442.72, "text": " they basically, this was used in the original transformer paper"}, {"start": 442.72, "end": 445.76000000000005, "text": " and you can check my implementation of the original transformer where I've"}, {"start": 445.76000000000005, "end": 447.84000000000003, "text": " actually implemented this label smooting thing,"}, {"start": 447.84, "end": 451.76, "text": " but the idea is super simple. 
So instead of having like a hard target where you"}, {"start": 451.76, "end": 454.0, "text": " have certain class having like the probability"}, {"start": 454.0, "end": 457.35999999999996, "text": " one, you just put something like one minus"}, {"start": 457.35999999999996, "end": 461.91999999999996, "text": " epsilon and then you take the epsilon, because you want to have the"}, {"start": 461.91999999999996, "end": 465.44, "text": " probability sum up to one, right? And so you just basically distribute that,"}, {"start": 465.44, "end": 468.0, "text": " whoops, the epsilon across all of the other"}, {"start": 468.0, "end": 473.91999999999996, "text": " classes. So that's going to be epsilon divided by n minus one, where n is"}, {"start": 473.91999999999996, "end": 477.2, "text": " the number of classes, so that's going to be a value at any one of these"}, {"start": 477.2, "end": 480.24, "text": " positions. So that's the label smooting. Repeated"}, {"start": 480.24, "end": 483.03999999999996, "text": " augmentation is a fairly simple technique as well."}, {"start": 483.03999999999996, "end": 487.59999999999997, "text": " So in a batch you take a single image and you apply a couple of augmentations to"}, {"start": 487.59999999999997, "end": 491.68, "text": " that image and you include all of those replicas of that original image inside"}, {"start": 491.68, "end": 494.32, "text": " of the same batch and that turns out to kind of improve the"}, {"start": 494.32, "end": 496.8, "text": " results. So as you can see all of these are"}, {"start": 496.8, "end": 500.96, "text": " random heuristics and things just work and yeah."}, {"start": 500.96, "end": 504.56, "text": " And finally stochastic depth is similar to dropout in a way,"}, {"start": 504.56, "end": 508.8, "text": " so what you do instead is you have a like a resonant module,"}, {"start": 508.8, "end": 512.08, "text": " so you have something like this, you have a block here and this is your"}, {"start": 512.08, "end": 516.16, "text": " regular resonant thing, so you have like a addition here, you're basically"}, {"start": 516.16, "end": 520.48, "text": " having a skip connection and so what this stochastic depth does is with"}, {"start": 520.48, "end": 525.36, "text": " certain probability they're going to short like short circuit these blocks"}, {"start": 525.36, "end": 528.48, "text": " here and that's how the the thing works. So"}, {"start": 528.48, "end": 532.0, "text": " just an additional heuristic like a variation of a dropout"}, {"start": 532.0, "end": 537.04, "text": " for these models. Okay so that's those are the three main components"}, {"start": 537.04, "end": 540.4, "text": " and finally instead of using Atom or SGD they're"}, {"start": 540.4, "end": 544.0, "text": " actually using LAM optimizer. 
So we therefore focus on LAM with"}, {"start": 544.0, "end": 547.92, "text": " cosine schedule as the default optimizer for training our resonant 50"}, {"start": 547.92, "end": 551.44, "text": " and the reason is they are using a way larger batch sizes"}, {"start": 551.44, "end": 555.44, "text": " compared to the original resonant 50 training and then"}, {"start": 555.44, "end": 559.04, "text": " like this paper showed and some other transformer papers that LAM is really"}, {"start": 559.04, "end": 560.48, "text": " good and hopefully i'm pronouncing the name"}, {"start": 560.48, "end": 563.52, "text": " correctly but LAM is really good when you have like a"}, {"start": 563.52, "end": 567.44, "text": " lot of images in a batch and that's pretty much it. Like they"}, {"start": 567.44, "end": 570.5600000000001, "text": " take resonant 50, they applied lots of these different"}, {"start": 570.5600000000001, "end": 574.5600000000001, "text": " hacks, they figured out they did a enormous like"}, {"start": 574.5600000000001, "end": 577.36, "text": " kudos to the authors because this was i guess enormous"}, {"start": 577.36, "end": 582.16, "text": " amount of work put into this paper to do the extensive experimentations and all"}, {"start": 582.16, "end": 586.0, "text": " to kind of like sweep through the combinatorial space and figure out the"}, {"start": 586.0, "end": 588.8000000000001, "text": " best possible combinations. Okay let's see"}, {"start": 588.8, "end": 592.64, "text": " the experiments. So first things first here you can see the"}, {"start": 592.64, "end": 596.9599999999999, "text": " thing i just mentioned. So the column here contains all of the"}, {"start": 596.9599999999999, "end": 601.12, "text": " variables, the hyperparameters they were tweaking in order to find these"}, {"start": 601.12, "end": 606.9599999999999, "text": " optimal optimal trainings procedures. So A1, A2 and A3. As you can see here"}, {"start": 606.9599999999999, "end": 610.24, "text": " they're comparing them with PyTorch default training procedure"}, {"start": 610.24, "end": 614.24, "text": " for resonant 50 and some other some other procedures here from like"}, {"start": 614.24, "end": 617.1999999999999, "text": " this is the vision transformer etc. The main thing you can notice here is"}, {"start": 617.2, "end": 621.84, "text": " they're using LAM instead of Atom and SGD optimizers. They have"}, {"start": 621.84, "end": 625.84, "text": " different learning rates, they have different schedules, etc. etc."}, {"start": 625.84, "end": 630.88, "text": " So in some procedures such as in A1 they're using label smooting,"}, {"start": 630.88, "end": 634.0, "text": " whereas in others they are not using label smooting, they're also using"}, {"start": 634.0, "end": 638.1600000000001, "text": " stochastic depth etc. So i don't think it's that important"}, {"start": 638.1600000000001, "end": 642.24, "text": " to understand all the details. The authors did the all the hard"}, {"start": 642.24, "end": 646.08, "text": " lifting for us. Bottom line is once they found this"}, {"start": 646.08, "end": 652.32, "text": " this like correct configuration they got the 80.4 result and you can see that"}, {"start": 652.32, "end": 656.08, "text": " this result is way better compared to the reported results that all of these"}, {"start": 656.08, "end": 659.12, "text": " other papers reported. 
So this vision"}, {"start": 659.12, "end": 663.2800000000001, "text": " transformer paper reported 78.4 which as you can see is"}, {"start": 663.2800000000001, "end": 667.44, "text": " almost 2 percent. The comparison was not fair"}, {"start": 667.44, "end": 670.96, "text": " because they were probably using some advanced like training receipt for"}, {"start": 670.96, "end": 673.76, "text": " for vision transformer, whereas they were not using"}, {"start": 673.76, "end": 677.68, "text": " they didn't try as hard to improve Resin50 which is"}, {"start": 677.68, "end": 682.16, "text": " I mean understandable. You cannot invest that much time into"}, {"start": 682.16, "end": 686.56, "text": " reconstructing baselines and trying to make them as best as possible"}, {"start": 686.56, "end": 689.6, "text": " because it's hard. It takes a lot of computing, it takes a lot of time,"}, {"start": 689.6, "end": 693.12, "text": " so people just take like numbers reported in previous papers."}, {"start": 693.12, "end": 699.6, "text": " Okay so let's see some tables here. They took the same like training"}, {"start": 699.6, "end": 703.84, "text": " procedures like labeled A1, A2 and A3 and they"}, {"start": 703.84, "end": 708.32, "text": " applied them to various other models in order to see whether this"}, {"start": 708.32, "end": 711.44, "text": " specific training procedure generalizes to other"}, {"start": 711.44, "end": 716.16, "text": " architectures as well as to the Resin50. So the"}, {"start": 716.16, "end": 721.44, "text": " final results are that mostly they do if certain constraints"}, {"start": 721.44, "end": 727.84, "text": " are obeyed. So the model has to be similar in size and depth to Resin50"}, {"start": 727.84, "end": 730.88, "text": " and the more similar it is the better the results will be. So basically that"}, {"start": 730.88, "end": 733.84, "text": " means they did overfit. And if we take a look at some"}, {"start": 733.84, "end": 737.76, "text": " results here, if we take a look at EfficientNet B4 for"}, {"start": 737.76, "end": 742.32, "text": " example, you can see that A2 is actually better for that model"}, {"start": 742.32, "end": 747.0400000000001, "text": " compared to A1, whereas for Resin50 obviously the best results"}, {"start": 747.0400000000001, "end": 750.48, "text": " come from A1. Also some deeper models such as this"}, {"start": 750.48, "end": 757.2, "text": " SE Squeeze Excite I guess, Net154, you can see results are a tad better"}, {"start": 757.2, "end": 761.76, "text": " here with A2 compared to A1 which is because"}, {"start": 761.76, "end": 766.48, "text": " it's way deeper compared to Resin50 and I guess"}, {"start": 766.48, "end": 770.4000000000001, "text": " you'd have to kind of tweak the procedure to"}, {"start": 770.4000000000001, "end": 776.0, "text": " get better results. Okay second next up is, I'll actually just skip this"}, {"start": 776.0, "end": 780.48, "text": " table, you can check it out yourself and let me focus on this result here. 
So"}, {"start": 780.48, "end": 786.0, "text": " they showed that once they do evaluation on ImageNet V2"}, {"start": 786.0, "end": 789.36, "text": " versus the original ImageNet, you can kind of"}, {"start": 789.36, "end": 793.52, "text": " see that, so all of these crosses here are like a single model that was"}, {"start": 793.52, "end": 797.44, "text": " trained on ImageNet and then evaluated on these two datasets and you"}, {"start": 797.44, "end": 800.0, "text": " can see that this seems like there is no correlation,"}, {"start": 800.0, "end": 803.92, "text": " at least like just eyeballing this thing, because for example"}, {"start": 803.92, "end": 807.44, "text": " the best top one accuracy on this V2 dataset"}, {"start": 807.44, "end": 813.28, "text": " comes for at the same time the model has, I guess, around mean like top one"}, {"start": 813.28, "end": 817.04, "text": " accuracy on the original ImageNet. So now imagine we had different"}, {"start": 817.04, "end": 819.36, "text": " training procedure and not just different random seed and you can see"}, {"start": 819.36, "end": 821.52, "text": " there is a lot of variance that can happen"}, {"start": 821.52, "end": 825.12, "text": " and so there is a lot of potential confusion that can"}, {"start": 825.12, "end": 828.4, "text": " happen and it's hard to disentangle whether something you added"}, {"start": 828.4, "end": 834.24, "text": " is actually beneficial or not. Yeah anyways, let's continue here."}, {"start": 834.24, "end": 838.8, "text": " There is a couple more things I want to show you. First thing is this one, so"}, {"start": 838.8, "end": 842.0799999999999, "text": " they took the same model and they fine-tuned"}, {"start": 842.08, "end": 846.1600000000001, "text": " it on various other datasets and you can see that,"}, {"start": 846.1600000000001, "end": 850.1600000000001, "text": " like this is a one procedure and you can see that the results on ImageNet"}, {"start": 850.1600000000001, "end": 853.44, "text": " are fairly better here and the gap is pretty big, but once you"}, {"start": 853.44, "end": 857.2, "text": " go and compare it with on other datasets you can see that these results are"}, {"start": 857.2, "end": 861.2800000000001, "text": " fairly similar. So even here and here we still"}, {"start": 861.2800000000001, "end": 865.5200000000001, "text": " have some like a gap, I guess this one has a similar"}, {"start": 865.5200000000001, "end": 868.4000000000001, "text": " distribution to ImageNet, that would be my"}, {"start": 868.4, "end": 872.9599999999999, "text": " assumption, but yeah. Anyways, this tells us that even this"}, {"start": 872.9599999999999, "end": 876.8, "text": " fancy like training procedure, they took some,"}, {"start": 876.8, "end": 880.0799999999999, "text": " like a lot of compute to figure out, is not,"}, {"start": 880.0799999999999, "end": 884.64, "text": " will not be like the best one on other datasets"}, {"start": 884.64, "end": 889.04, "text": " and the differences are even smaller when you take a look at A2."}, {"start": 889.04, "end": 892.24, "text": " So here sometimes they even have worse performance."}, {"start": 892.24, "end": 896.0, "text": " So you can see here versus this one here. 
Yeah, I guess all of this is to show that"}, {"start": 896.0, "end": 898.32, "text": " it's, A, it's hard to find a right training"}, {"start": 898.32, "end": 902.32, "text": " procedure and B, even if you find it, it's not gonna generalize to other setups."}, {"start": 902.32, "end": 906.72, "text": " So yeah, this seems too hacky for me, like following Richard Sutton's"}, {"start": 906.72, "end": 911.12, "text": " like philosophy for this approach, this type of approach to be the way to go"}, {"start": 911.12, "end": 915.6, "text": " in the future. So one more thing, one more table that's gonna"}, {"start": 915.6, "end": 918.64, "text": " strengthen that argument even more is this one here."}, {"start": 918.64, "end": 923.44, "text": " So they took Resonant 50 and this DATE-S model, which is I think similar in size"}, {"start": 923.44, "end": 926.4000000000001, "text": " to Resonant 50, and what they've done is they took the"}, {"start": 926.4000000000001, "end": 931.0400000000001, "text": " A2 procedure and trained both of them and they got results here."}, {"start": 931.0400000000001, "end": 934.72, "text": " And you can see that Resonant 50 seems to be better,"}, {"start": 934.72, "end": 938.08, "text": " the conclusion is this one is better than VIT."}, {"start": 938.08, "end": 942.0, "text": " And then they took T2 procedure, which was tailored to this VIT,"}, {"start": 942.0, "end": 944.96, "text": " and you can see that now with this training procedure it seems that this"}, {"start": 944.96, "end": 948.1600000000001, "text": " one is better compared to Resonant 50. The point here is"}, {"start": 948.1600000000001, "end": 950.48, "text": " when you have a paper and people figure out a certain"}, {"start": 950.48, "end": 955.36, "text": " procedure, that procedure is gonna probably benefit the model that's"}, {"start": 955.36, "end": 957.36, "text": " being published and it's gonna be unfair"}, {"start": 957.36, "end": 959.04, "text": " compared to all of the previous baselines."}, {"start": 959.04, "end": 962.72, "text": " And then they have, in this second case, once they are"}, {"start": 962.72, "end": 967.12, "text": " evaluating on ImageNet V2 instead of the, here it was ImageNet,"}, {"start": 967.12, "end": 971.12, "text": " you can see that here, like for both procedures,"}, {"start": 971.12, "end": 975.6, "text": " it seems that VIT is better compared to Resonant 50, although this here is not,"}, {"start": 975.6, "end": 979.6, "text": " like the difference is very small, so yeah. In any case, this interplay between"}, {"start": 979.6, "end": 984.4, "text": " where you're evaluating your models and the training procedure you're using,"}, {"start": 984.4, "end": 988.0, "text": " all this makes it easy to to make a wrong conclusion."}, {"start": 988.0, "end": 992.64, "text": " And yeah, that's pretty much it. That's the Resonant Strikes Again paper."}, {"start": 992.64, "end": 996.5600000000001, "text": " And now let me switch back to this new paper,"}, {"start": 996.5600000000001, "end": 1000.24, "text": " like just published a couple of days ago, it's still, like the authors are"}, {"start": 1000.24, "end": 1005.0400000000001, "text": " still anonymous, it's a double blind review. And the idea here was"}, {"start": 1005.04, "end": 1010.4, "text": " the following. 
So they made this CovMix mixer model"}, {"start": 1010.4, "end": 1014.48, "text": " and they showed that it's very simple, it's just using convolutions, as we're"}, {"start": 1014.48, "end": 1018.4, "text": " going to see in a second, and they again have, they reported"}, {"start": 1018.4, "end": 1022.24, "text": " better results compared to these other baselines such as the VIT,"}, {"start": 1022.24, "end": 1027.36, "text": " ResNet, and ResMLP. And now the thing is,"}, {"start": 1027.36, "end": 1032.8, "text": " probably with the procedures like that were advised in the paper I just"}, {"start": 1032.8, "end": 1036.56, "text": " covered, would make this ResNet way better,"}, {"start": 1036.56, "end": 1040.3999999999999, "text": " maybe even comparable, because we know it went over 80."}, {"start": 1040.3999999999999, "end": 1043.52, "text": " So yeah, so basically it's very hard to compare"}, {"start": 1043.52, "end": 1047.6, "text": " like models, especially when you just have a single dimension like parameters"}, {"start": 1047.6, "end": 1049.36, "text": " here, because there are many other things we"}, {"start": 1049.36, "end": 1053.84, "text": " may care more about, like throughput, etc. Okay, anyways, let's"}, {"start": 1053.84, "end": 1055.9199999999998, "text": " see what this paper is about. So if you already"}, {"start": 1055.9199999999998, "end": 1059.36, "text": " know something about vision transformers or MLP mixer,"}, {"start": 1059.36, "end": 1063.1999999999998, "text": " this thing will look very familiar. If you haven't watched those videos,"}, {"start": 1063.1999999999998, "end": 1066.08, "text": " if you haven't explored those papers, you can check out my videos, I've covered"}, {"start": 1066.08, "end": 1068.8799999999999, "text": " both of them, and I'll link them somewhere here."}, {"start": 1068.8799999999999, "end": 1071.6799999999998, "text": " Anyways, you can see the common pattern, we have"}, {"start": 1071.6799999999998, "end": 1075.6799999999998, "text": " like images split into various patches, then the patches"}, {"start": 1075.6799999999998, "end": 1081.12, "text": " basically projected into like a vector, and then in the case of VIT,"}, {"start": 1081.12, "end": 1084.08, "text": " what they've done is they just applied a transformer,"}, {"start": 1084.08, "end": 1087.4399999999998, "text": " like a couple of transformer blocks, in the case of MLP mixer, they've done"}, {"start": 1087.44, "end": 1091.52, "text": " mixing over a spatial extent and over the channel extent,"}, {"start": 1091.52, "end": 1096.3200000000002, "text": " by applying basically MLP, or you can also consider it"}, {"start": 1096.3200000000002, "end": 1102.0800000000002, "text": " as a type of a convolution. In any case, what this paper shows is that you"}, {"start": 1102.0800000000002, "end": 1105.44, "text": " do not need to use like self-attention, like in the case of"}, {"start": 1105.44, "end": 1109.28, "text": " VITs, you can just use simple convolutional layers. So the"}, {"start": 1109.28, "end": 1112.88, "text": " thing they do here is they apply depth-wise convolution, then they have"}, {"start": 1112.88, "end": 1116.3200000000002, "text": " like a activation function, they have batch norm,"}, {"start": 1116.32, "end": 1120.1599999999999, "text": " and then they do point-wise convolution, activation function, batch norm, and they"}, {"start": 1120.1599999999999, "end": 1123.12, "text": " just repeat this a couple of times, and that's it. 
It's super simple,"}, {"start": 1123.12, "end": 1126.8799999999999, "text": " we're gonna see it, they can basically fit this whole implementation of the"}, {"start": 1126.8799999999999, "end": 1131.2, "text": " model in a single tweet, they have that in the appendix, so that's a fun fact."}, {"start": 1131.2, "end": 1136.1599999999999, "text": " And so what depth-wise and point-wise do is the same thing as mixer, they're"}, {"start": 1136.1599999999999, "end": 1138.56, "text": " just doing spatial-wise and channel-wise mixing."}, {"start": 1138.56, "end": 1142.72, "text": " A brief recap how those two work, so how depth-wise and point-wise work,"}, {"start": 1142.72, "end": 1146.24, "text": " is if we have like a activation volume here,"}, {"start": 1146.24, "end": 1149.84, "text": " so basically, let me just draw it out here,"}, {"start": 1149.84, "end": 1154.56, "text": " so something like this, so this will be the number of channels,"}, {"start": 1154.56, "end": 1158.64, "text": " and this will be like, I don't know, this will, this is like width,"}, {"start": 1158.64, "end": 1164.72, "text": " and this thing here is height. So what depth-wise convolution does is you"}, {"start": 1164.72, "end": 1167.2, "text": " take a single channel, like maybe this one,"}, {"start": 1167.2, "end": 1170.88, "text": " and you just apply a kernel across that, that channel,"}, {"start": 1170.88, "end": 1174.48, "text": " across the spatial extent of the activation volume,"}, {"start": 1174.48, "end": 1177.7600000000002, "text": " and that's the, and you just get like a new feature map."}, {"start": 1177.7600000000002, "end": 1182.48, "text": " Point-wise, what it does is you basically have the opposite, so you take one,"}, {"start": 1182.48, "end": 1188.72, "text": " times one, kernel, and that will have basically as many channels as"}, {"start": 1188.72, "end": 1194.0800000000002, "text": " the activation volume has, and basically then, this is how you're"}, {"start": 1194.0800000000002, "end": 1197.5200000000002, "text": " going to form novel features by just doing a"}, {"start": 1197.52, "end": 1201.12, "text": " stride over the spatial extent, and you're going to mix up, as you can see"}, {"start": 1201.12, "end": 1204.24, "text": " here, channels, whereas here you're mixing up the spatial"}, {"start": 1204.24, "end": 1208.08, "text": " information. So that's a idea that we've seen already in MLP"}, {"start": 1208.08, "end": 1212.8, "text": " Mixer, and yeah, in any case, what they argue, because"}, {"start": 1212.8, "end": 1217.36, "text": " we'll soon see, we saw that their results are better compared to MLP Mixers and"}, {"start": 1217.36, "end": 1220.6399999999999, "text": " VATs in some cases, what they show is that this actual"}, {"start": 1220.6399999999999, "end": 1224.8, "text": " template of having this patch embedding, and then having this, how they call it,"}, {"start": 1224.8, "end": 1229.04, "text": " isotropic model, basically the thing is you're not reducing"}, {"start": 1229.04, "end": 1233.9199999999998, "text": " neither the number of channels nor the spatial extent, whereas in the"}, {"start": 1233.9199999999998, "end": 1237.68, "text": " CNNs, for example, you're basically usually, you're"}, {"start": 1237.68, "end": 1240.8, "text": " reducing the spatial extent and you're adding the number of channels as you're"}, {"start": 1240.8, "end": 1245.36, "text": " going deeper into the network. 
So these isotropic models basically"}, {"start": 1245.36, "end": 1248.56, "text": " preserve both the number of channels as well as the spatial extent,"}, {"start": 1248.56, "end": 1253.68, "text": " and they argue that maybe, just maybe, that template of using patch embeddings"}, {"start": 1253.68, "end": 1259.3600000000001, "text": " plus that isotropic, like, modeling of features is the thing that"}, {"start": 1259.3600000000001, "end": 1263.04, "text": " actually gives us performance, not the self-attention, not the"}, {"start": 1263.04, "end": 1267.28, "text": " other fancy things we've imported from NLP world. In any case, the model is super"}, {"start": 1267.28, "end": 1269.3600000000001, "text": " simple, it's a very nice baseline, you can see"}, {"start": 1269.3600000000001, "end": 1274.24, "text": " here, this is the whole implementation, as you can see, so they have this"}, {"start": 1274.24, "end": 1278.64, "text": " initial layer here, is basically just the"}, {"start": 1278.64, "end": 1282.0800000000002, "text": " patch embedding, then they have a couple of these"}, {"start": 1282.08, "end": 1287.28, "text": " depth, so N of these layers, and finally they add the adaptive"}, {"start": 1287.28, "end": 1291.6799999999998, "text": " average pooling and they flatten out the features and they add a"}, {"start": 1291.6799999999998, "end": 1295.84, "text": " linear classifier on top of it. So it's a fairly simple model and it achieves"}, {"start": 1295.84, "end": 1299.9199999999998, "text": " comparable, if not better, results compared to VAT, compared to ResNet"}, {"start": 1299.9199999999998, "end": 1304.6399999999999, "text": " and ResMLP. So now the thing here, the red alert is the throughput is super, super"}, {"start": 1304.6399999999999, "end": 1307.52, "text": " small, they mentioned that it's possible to,"}, {"start": 1307.52, "end": 1311.84, "text": " in future research, improve this. I mean, this ties back to the thing I mentioned,"}, {"start": 1311.84, "end": 1316.48, "text": " in this plot here, when you just plot a single dimension,"}, {"start": 1316.48, "end": 1320.08, "text": " like number of parameters here, you may paint up like a wrong picture."}, {"start": 1320.08, "end": 1323.6, "text": " But in any case, this paper is not trying to set SOTA,"}, {"start": 1323.6, "end": 1327.6799999999998, "text": " they're just trying to say that we are maybe focusing on the wrong thing, I"}, {"start": 1327.6799999999998, "end": 1330.56, "text": " guess. So we went all the way from ResNet and"}, {"start": 1330.56, "end": 1333.04, "text": " convolutional neural networks to transformers"}, {"start": 1333.04, "end": 1336.48, "text": " and vision transformers to finally get back to convolutional"}, {"start": 1336.48, "end": 1340.8799999999999, "text": " neural networks, again, in their pure form, just changing this, just keeping this"}, {"start": 1340.88, "end": 1346.0, "text": " isotropic template. Also, I like the end part of this"}, {"start": 1346.0, "end": 1349.68, "text": " paper, a note on paper length, expecting more text in this paper,"}, {"start": 1349.68, "end": 1354.48, "text": " wondering if it's a workshop paper, we hastily submitted to iClear. No, this"}, {"start": 1354.48, "end": 1358.0800000000002, "text": " paper presents a simple idea, one where we genuinely believe that a short paper"}, {"start": 1358.0800000000002, "end": 1362.64, "text": " presentation is more effective. 
Do we really need exactly eight, now nine"}, {"start": 1362.64, "end": 1366.88, "text": " or ten pages to describe every machine learning architecture and algorithm in"}, {"start": 1366.88, "end": 1369.44, "text": " existence? We proposed an incredibly simple"}, {"start": 1369.44, "end": 1373.3600000000001, "text": " architecture and made a very simple point that we think is worth more"}, {"start": 1373.3600000000001, "end": 1376.24, "text": " discussion. Patches work well in"}, {"start": 1376.24, "end": 1379.68, "text": " convolutional architectures. We think that four pages is more than enough"}, {"start": 1379.68, "end": 1382.72, "text": " space for this. The details of the experiments and"}, {"start": 1382.72, "end": 1386.48, "text": " architectures are in the appendix for those who want to read through it all."}, {"start": 1386.48, "end": 1389.76, "text": " A lot of respect to the authors, to be honest."}, {"start": 1389.76, "end": 1392.88, "text": " Aside from this, I really think we need to question this"}, {"start": 1392.88, "end": 1396.88, "text": " whole template of how we are presenting the research."}, {"start": 1396.88, "end": 1401.2, "text": " I think that PDFs, for one, are obsolete. I think that authors need to write"}, {"start": 1401.2, "end": 1405.1200000000001, "text": " like accompanying blogs a lot more. We need something like interactive"}, {"start": 1405.1200000000001, "end": 1410.72, "text": " and types of notebooks, such as Jupyter or Deep Note is also one cool notebook"}, {"start": 1410.72, "end": 1414.48, "text": " environment. So, I think we need as a community, we"}, {"start": 1414.48, "end": 1418.24, "text": " need to rethink how we are presenting, because the whole idea of this paper"}, {"start": 1418.24, "end": 1421.7600000000002, "text": " is for other researchers to understand what these guys have done"}, {"start": 1421.7600000000002, "end": 1425.2800000000002, "text": " and be able to reconstruct or build upon their ideas."}, {"start": 1425.28, "end": 1429.68, "text": " And so, who cares about like 10-pager format? Paper reading in and off itself"}, {"start": 1429.68, "end": 1433.76, "text": " is hard and I guess any help we can get from"}, {"start": 1433.76, "end": 1436.32, "text": " the technology advancements we have, such as"}, {"start": 1436.32, "end": 1440.6399999999999, "text": " interactivity, visualizations, animations, will help us"}, {"start": 1440.6399999999999, "end": 1443.36, "text": " as a community. Finally, let me show you the"}, {"start": 1443.36, "end": 1448.72, "text": " one tweet implementation. So, here it is."}, {"start": 1448.72, "end": 1453.52, "text": " An implementation of our model in exactly 280 characters"}, {"start": 1453.52, "end": 1457.52, "text": " is right here. Twitter is slowly becoming a more important medium,"}, {"start": 1457.52, "end": 1461.92, "text": " if not already, compared to conferences."}, {"start": 1461.92, "end": 1465.6, "text": " You can check out other results in the paper. They have some nice"}, {"start": 1465.6, "end": 1470.0, "text": " visualizations of the kernels."}, {"start": 1470.0, "end": 1473.84, "text": " I'll leave you with that and if you like this video, consider subscribing,"}, {"start": 1473.84, "end": 1478.16, "text": " share out the video, hit that bell icon to get notified every time I upload a new"}, {"start": 1478.16, "end": 1480.72, "text": " video, also join the Discord community. 
We've"}, {"start": 1480.72, "end": 1486.32, "text": " got a lot of smart people there and until next time, bye-bye!"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=k0Q-U9U0EQs
Fake It Till You Make It (Microsoft) | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ In this video I cover Microsoft's "Fake it till you make it: face analysis in the wild using synthetic data alone" paper. They introduce synthetic (face) data that is realistic enough to enable trivial generalization to real data! Disclaimer: I was a part of this team while back at Microsoft, and even had some contributions to this project, but in this video, I only talk about the stuff that can be deduced from the paper itself. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2109.15102 ✅ Demo video: https://www.youtube.com/watch?v=wlOMpQe8luQ&ab_channel=MicrosoftResearch ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 The domain gap problem 04:20 The paper deals only with human faces 06:05 Data-centric approach is short-sighted? 08:50 Building 3D human faces (high-level explanation) 09:30 Pros and cons of the approach 11:45 Building 3D human faces (in-depth) 16:15 Results on landmark localization and sem seg 20:45 Better generalization? 21:25 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #facesynthetics #syntheticdata #microsoft
What's cracking guys, in this video I'm covering this new paper called "Fake it till you make it: face analysis in the wild using synthetic data alone" by these awesome people from Microsoft, and I actually had the privilege of working with them while I was back at Microsoft. So as a quick disclaimer, I did contribute a little bit to this framework, the synthetics framework, and so I know a lot of details of how this project was structured; in this video I'm only going to tell you the things and conclusions you can deduce from the paper itself. Having said that, the authors are Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt, Sebastian Dziadzio, Matthew Johnson, Virginia Estellers, Thomas Cashman and Jamie Shotton, so let's see what the hype is all about. They say here that they demonstrate it's possible to perform face-related computer vision in the wild using synthetic data alone. Obviously you want to use synthetics if that's possible, because you have perfect ground truth. But what was the problem so far? Well, there is something called the domain gap, and I think it's fairly obvious: when you train your models on synthetic data and then try to do inference with that same model on real data, you're going to have some problems. The reason is you have different distributions. In the case where your synthetics is not perfect, as it usually is not, you basically have a gap, because the distribution of the data points from the synthetics is different from the distribution of the data points in the real data. So different people, different groups, have tried to solve this in different ways, and one good example would be OpenAI. A couple of months ago I covered this OpenAI paper where they used a robotic hand to solve the Rubik's Cube, and the thing is, the policy, the RL agent, was actually trained purely in simulation. The approach they took was not to try to create a perfect simulation, because it's even harder to create a perfect simulator for this robotic hand than to create a simulator for faces — here you only need to mimic the appearance of the faces, whereas in that case you have to model the physics, the friction, the contact forces, which is super hard to do. So what they did is just randomize the simulation so that the real world appears like just another instantiation of the simulation. They modified even stuff like gravity, from the usual 9.81, at least on Earth, to even 8.5 or whatnot, but it was all done gradually, in order to have a curriculum-like, gradual training. So that was one approach. Other approaches were stuff like: you can fine-tune on real images after training on your synthetics, so you have imperfect synthetics, you fine-tune on real data from the target distribution, and then you can expect that the accuracy, or whatever metric you care about, is going to improve. And finally there are approaches such as refining with GANs, whereby you take your synthetic data and pass it through a GAN which makes those images appear more realistic. Those are all the different ways to try to reduce this domain gap. Microsoft, on the other hand, took another route, and that's to reduce the domain gap almost down to zero. And we have to have some bad jokes on this channel.
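A minimal, purely illustrative sketch of the domain-randomization idea mentioned above — randomizing simulator parameters such as gravity every episode, widening the ranges gradually as a curriculum, so the real world looks like just another sample of the simulator. The parameter ranges and the simulate_episode call are hypothetical placeholders, not OpenAI's actual setup.

    import random

    def sample_sim_params(difficulty: float) -> dict:
        """Sample randomized physics parameters; ranges widen as training progresses."""
        return {
            "gravity":  random.uniform(9.81 - 1.5 * difficulty, 9.81 + 1.5 * difficulty),
            "friction": random.uniform(1.0 - 0.5 * difficulty, 1.0 + 0.5 * difficulty),
        }

    for episode in range(100_000):
        difficulty = min(1.0, episode / 50_000)   # gradually widen the randomization
        params = sample_sim_params(difficulty)
        # simulate_episode(policy, params)        # placeholder for the actual simulator rollout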
So yeah, anyways. Basically, here are some of the images that the framework produces. You can see they are fairly realistic, especially this person here. You can see that you can have geometry, so 3D information, because you have synthetic data, and that means you also have perfect labeling, which is awesome — that's the superpower of this whole framework. You can do pixel-wise semantic segmentation, you have the landmarks however dense you want them to be, and a lot of other ground-truth data which we're going to see a bit later. Before I dig deeper into the paper, let me just show you this quick clip from Microsoft where you'll see the animations from this synthetics framework: "Here are some example synthetic faces that we render as training data for machine learning. With synthetic data you can guarantee perfect labels without annotation noise, generate rich labels that are otherwise impossible to label by hand, and have full control over variation and diversity in a data set." Awesome, so let's continue and dig deeper into this paper. They say here that synthetic data can both match real data in accuracy as well as open up new approaches where manual labeling would be impossible. Okay, so a quick remark here. You need to keep in mind that rendering only face images, so the heads of humans, is a fairly constrained problem. I mean, there is a certain geometry that all humans share: we usually have two eyes, we usually have one nose, so the problem is very constrained, and I really believe that we can actually reduce the domain gap almost down to zero, if not to zero, in the future for this particular problem that Microsoft deeply cares about. The team works in mixed reality and virtual reality, so they care about stuff like Microsoft HoloLens, and there are other competitors out there such as Magic Leap and Facebook Reality Labs, and all of them care about this very constrained problem of understanding human faces and bodies. The point I'm trying to drive at here is that if you took some other area, such as self-driving, I'm fairly sure this is not the way ahead, because it boils down to the question of whether it's easier to build up a new world or just understand the current world. In order to build a perfect simulator for self-driving you literally have to cover so many edge cases, etc., so you'd basically have to build something like a real-world simulator, and I'm fairly sure that's harder than just trying to build computer vision systems that will solve self-driving. Continuing on here, they say that when faced with a machine learning problem, the hardest challenge often isn't choosing the right machine learning model, it's finding the right data, and this reminds me strongly of Andrew Ng's data-centric approach. The thing I noticed here is that all of these people and groups that are focused more on short-term products tend to say something like this, and I don't mean that in a bad way. I just think that this obviously implies the supervised learning paradigm, and I'm fairly sure that's not the way that kids and humans learn, and I'm fairly sure that the approaches that, for example, Facebook and DeepMind are pushing, like the self-supervised paradigm, are the way to go long term.
But having said that, again, I think this is a fairly good approach for the time being, because we're going to build better products, which will enable us to accelerate accomplishing those long-term goals of cracking intelligence. They mention some benefits of synthetic data here. With synthetic data you can guarantee perfect labels without annotation noise and generate rich labels which are otherwise impossible to label by hand — and they have an example later in the paper. Let me show you: basically, you can generate as many landmarks as you want, you can make the mesh as dense as you want, and this is obviously not possible to achieve by manual labeling; it would be super expensive and super error-prone. So that's some of the benefits that synthetic data brings. Let me go back here. Aside from that, you have full control over variation and diversity in a data set, which is also very important when we are deploying these products globally — you want to make sure it works on different demographics, and they even have a histogram down here. Let me quickly show you this: they show the histograms here over age, over — I guess it's sex, not gender — and ethnicity, and as you can see it's a work in progress, obviously, but the good thing is you can basically improve this, whereas it's way harder to do when you're trying to collect real data. And I guess a small remark here: it would be nice to have ground-truth histograms here, i.e. the global statistics, so that we can compare how far away this current histogram is from the actual desired target histogram. So this is another superpower of having synthetics: you can be very flexible and make sure that the training data is representative and mirrors the true statistics you want to target. Okay, so let me show you how the data is actually synthesized, on a high level. What they do is this: you have a template face, you add the identity, then you add a certain expression, like smiling or not, then you add the texture, the hair, the clothing, sometimes headwear, eyewear, whatnot, devices — again, that's something that Microsoft cares about, think HoloLens — and finally you add the environment, you can see the lighting, you can see the background, etc. So that's, on a high level, how these 3D avatars are produced, and then you just render whatever you want by positioning a virtual camera in the scene. The thing I want to stress here is this one: it requires considerable expertise and investment to develop a synthetics framework with minimal domain gap.
This is another reason I think this is not the way to go ahead in general, but it's a perfect strategy for Microsoft in particular — it's super expensive to build and maintain, etc. On the pro side: say you have spent time labeling face images with landmarks, but you suddenly require additional landmarks in each image. Relabeling and verifying will take a long time — if all of a sudden you realize you want an additional landmark on the lip, you literally have to take all of your images and send them back to the labelers so that they can relabel them, and then you have to verify that you haven't introduced any mistakes by doing this. So you can see this is very time-consuming, very expensive and error-prone, I guess. Secondly, synthetics lets you render faces from a simulated device to develop algorithms and even guide hardware design itself. Imagine what the alternative would be here: you literally have to design a prototype, which is a super expensive and slow process, then you'd have to collect data, and only then can you train the model. On the flip side, with synthetics you can literally just position a virtual camera in the scene, start rendering data and training models, and see whether it makes sense to position the sensor right there or not. So this significantly reduces the time it takes you to iterate, and that's a huge asset. You may ask yourself: what is the big deal with this framework? We are already used to seeing amazingly realistic avatars in movies, in games, etc. And they give an answer to that here: the visual effects industry has developed many techniques for convincing audiences that 3D faces are real, and they build upon these in their approach. However, a key difference is scale. This framework proposes a procedural way to generate these avatars much more quickly and very realistically — that's the value proposition of this whole thing. Now let me break down for you how the models are created. We saw it on a high level here; now let's see how they actually construct these identities, expressions, etc. The main point is that they basically collected a lot of these scans, and to acquire these you either have to buy them or have the space for collecting the scans yourself. The thing is, those setups are very pricey — I don't know if you ever saw one, I'm going to pop it somewhere on the screen here — you basically have to have hundreds if not thousands of cameras placed in that room in order to capture the accurate geometry and everything you care about, and later you need various processing algorithms to form the scans. Additionally, obviously, you have to have real humans in the loop, so it's a slow procedure, a very expensive procedure. And once you have the raw scans, they additionally had to do a cleaning step, which is a very manual and very slow procedure, to go from the raw scan to the clean scan. Once you have the scans you can construct something called the template face — we're going to see that in a bit — then you can learn a generative face model, and then you can sample from that face model to
Now, let's see a bit more about the generative face model itself, so you basically are trying to Create this generating function So the formula does look a bit intimidating and there is some domain expertise involved here But on a high level in order to form form a novel geometry with a certain expression What you do is you take this template face which is formed as and they mentioned here as the average face with neutral expressions So you take all of the scans you find their vertices you just do the average and you find the template head Then what they do is they learn this this basis and they call it the linear identity basis They learn that thing and finally you have this expression basis, which is not learned Okay once you have the template head once you have the this Linear identity basis and the expression basis you just use the betas in size to determine the actual geometry of the face And you can see here. So the generating function comprises out of Like takes as an input betas size and tatas so data just determines the position of the face model namely one of those is like jaw so you can basically position draw something like this and finally once they fit the so as I said this s is learned and Betas are also learned so that they fit the scans that they have in the data set Once they have a like a set of betas they can fit a multivariate like a normal distribution on top of it and that's how they actually form the the Genetic model and then you can just sample betas from it and you can get novel identities continuing on I want to stress again that this is a huge engineering project more so than than research and They say here that in addition to facial expression We'll layer random eye gaze directions on top of sampled expressions and use procedural logic to pose the eyelets accordingly So there is lots of heuristics all around the place here and not only for for the like positioning the the eyes But also here for texture you have to to you basically use the scans To get the texture information and they'll see here that they are modeling here at the strand level And so money here at the strand level allows us to capture realistic multi-path elimination effects, but on the con side Each asset was authored by a groom artist who specializes in creating digital hair So as you can see here, this is very slow and very very expensive Same goes for every other part of these of this synthetic framework So you have you have to obviously somebody has to construct these clothing somebody has to construct the hats the eyewear Etc but at the end because this is synthetics you can have various Various types of labels so not only vertices Which they later will see the results they they are using they are doing a landmark regression They'll you can also have UVs you can have masks you can have depth Normals albedo every single type of information you can care about in computer vision and graphics And again, this just showcased the variety of faces you can you can generate using using this the syntax framework And I think it's very very impressive Finally, let's jump to the actual results This is where they stress that this data-centric approach makes a lot of sense because for the semantic segmentation problem What I've done is used simplest off-the-shelf models such as unit and such as resnet to to to build Like to get results which are comparable with state-of-the-art methods on these benchmarks So for the semantics implementation what I do is they use unit a model and they just apply the 
The second thing that has to be done here is something called label adaptation — let me first motivate what it is and then quickly explain how they do it. The thing is, if you look closely at these images — this is the ground truth from the real data — you can see that it's a fairly arbitrary decision by the labelers of the real data to end the nose here. So the boundary of the nose is fairly arbitrarily chosen, and we shouldn't penalize the synthetics for having decided that the nose ends here instead of here; that's why the adaptation is done. What they do is freeze the weights of the first U-Net, and then, using the real-data labels, they learn a mapping with another U-Net — so this is basically a second U-Net that maps from the output of the synthetics-trained model to the real-data label conventions. Let's jump to the final results, let's see the tables. This is for the semantic segmentation problem: they compare their method against various SOTA baselines, and the results are fairly comparable — a 92 F1 score against these baselines. Some of these baselines are a bit better; you can see that this paper in particular is 1.2 points better, but you need to realize that that model looks something like this: you basically have pyramid pooling here, edge-aware graph reasoning here and here, a fairly complex decoder, reprojection, projection — it's a fairly complex architecture. Zooming into this particular module you can see graph convolutions, pooling, etc., so very complex architectures, whereas here they just use a simple U-Net, an off-the-shelf model, and achieve comparable results. So that's cool. The second thing is, what confused me initially was that we get better results with synthetics compared to real data, and I guess the main reason is this particular data set. They used the Helen data set, which I think has around 2k training images, whereas they used, I think, 100k synthetic images to train the U-Net. So that's why they get better results here — although it would be really nice if they explicitly mentioned somewhere the amount of images used for the synthetics baseline compared to the real-data result here. On the other hand, on this different data set called LaPa, the situation is a bit different, because the real training data set actually has way more data points, so the real-data results here are better compared to the synthetics, and that makes sense. They also show results on landmark localization against some other SOTA results, and again they have comparable results. Some of the baselines do have better results, like this one here, or this one here, etc., but again, keep in mind they are using off-the-shelf models here — it would be very interesting to see how this performs with much better models. They did some ablations. The augmentations matter a lot — I forgot to mention that once they render the data, they apply various augmentations like grayscaling, blurring, warping, etc.
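A rough sketch of the kind of grayscale / blur / warp augmentations just mentioned, using standard torchvision transforms (recent versions accept tensor images). The paper's exact augmentation set and parameters aren't spelled out here, so these values are purely illustrative.

    import torch
    from torchvision import transforms

    # illustrative stand-ins for the "grayscale / blur / warp" style augmentations
    augment = transforms.Compose([
        transforms.RandomGrayscale(p=0.1),
        transforms.RandomApply([transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.3),
        transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),  # mild warp
    ])

    image = torch.rand(3, 256, 256)   # a rendered synthetic image, values in [0, 1]
    augmented = augment(image)

Note that for segmentation or landmark targets, the geometric part of such a pipeline would have to be applied jointly to the image and its labels; the snippet above only illustrates the image side.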
And they just show here that not using augmentations hurts performance by a lot You can see the jump here and finally Not doing the label adaptation hurts the performance a lot So you can see here it jumps to 5.61 where we want to be lower So it jumps from 3 to 5.6, which is a big gap So that means this part is very very important for landmark localization again The reason is the same there is also ambiguity same as with semantic segmentation and the nose example Here in the landmark localization problem They have one more interesting visualization here and they say here that predictions by network strain would reel So that's the top one against the synthetic data on the bottom one And they say that know how synthetic data network generalizes better across expression illumination pose and occlusion You can obviously see the results here are way better compared to the results up here It's maybe a knit but like claiming that you generalize better Whereas you have much more data using synthetics compared to real data is not a fair claim Although I guess it proves the point that it's better to have synthetic data because you can generate as much data as you want Yeah, and finally let's end up here I mentioned that the superpower is you can generate as much as dense landmarks as you want You can basically put virtual sensors wherever you want and render So for example eye tracking is something very important for the HoloLens device And so you need to be able to track where the pupil is where the eye is And you can you can quickly iterate with different positions of the cameras later when the prototype actually is developed You can be confident that the conclusions you draw from the synthetics experiments can now be Generalized to the real data and that's the superpower of this whole framework having said that they they clearly acknowledge certain limitations We do not include expression dependent wrinkling effects of realism suffers during certain expressions Since we sample parts of our model independently we sometimes get unusual but not impossible combinations such as feminine faces That have a beard and stuff like that. So yeah, I mean again, this is awesome result I think this for this particular problem that Microsoft and some of these companies like magic leap etc care about This is a perfect. I think this is a right direction to head into to build better products. But like if we're trying to just Talk here about improving research improving AI as a field. I think this is not definitely not the way to do to go ahead Finally, they mentioned that they are going to publish for non-commercial use case 100,000 images and I saw some people were confused on Twitter including Andrei Karpati And I thought maybe trying and giving an answer to that like question Namely to why they are only publishing 100k images and the answer lies pretty much here. I mean They took a look at the de-deblation for the landmark localization at task and they noticed that after a hundred key images Basically, they got diminishing returns and you can see the the mean error starts increasing again So that means a hundred case totally arbitrary value and the whole point of this framework is to have this Flexibility that I just explained here. So even when they render The all of these avatars from a certain angle. 
the whole point is that you want to be flexible enough to put a new virtual camera in the scene and then render the data there. There is not much point, in my opinion, in publishing these images. What would be really nice is if Microsoft were to make an API out of this, earn some money by doing that, and help other people not have to go through the same procedure, which, as we saw, was very expensive and very slow to create. But I can also totally understand the reason for not doing that, because it was super expensive and it's giving Microsoft an edge. Anyways, if you liked this paper, share it out with a friend, subscribe to this channel, join the Discord community — we have very smart people chatting there and you can ask them questions. So yeah, until next time, bye-bye!
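A rough LaTeX sketch of the generating function described in the transcript (the "template head + identity basis + expression basis" part). The symbols follow the transcript's description rather than the paper's exact notation, and the pose function is left abstract:

    \[
      \mathcal{M}(\beta, \psi, \theta)
      \;=\;
      \mathrm{pose}\!\Big(
        \underbrace{\bar{T}}_{\text{template head (mean of scans)}}
        \;+\;
        \underbrace{S\,\beta}_{\text{learned linear identity basis}}
        \;+\;
        \underbrace{E\,\psi}_{\text{fixed expression basis}}
        ;\; \theta
      \Big),
      \qquad
      \beta \sim \mathcal{N}(\mu_\beta, \Sigma_\beta)
    \]

Here sampling the identity coefficients beta from the multivariate normal fitted to the scan fits yields novel identities, psi controls the expression, and theta poses parts of the model such as the jaw.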
[{"start": 0.0, "end": 6.04, "text": " What's cracking guys in this video I'm covering this new paper called fake it till you make it face analysis in the wild using"}, {"start": 6.16, "end": 13.9, "text": " synthetic data alone by these awesome people from Microsoft and I actually had the privilege of working with them while I was back at"}, {"start": 13.9, "end": 15.66, "text": " Microsoft so as a quick disclaimer"}, {"start": 15.66, "end": 19.82, "text": " I actually did contribute a little bit to this framework the synthetics framework"}, {"start": 19.82, "end": 24.54, "text": " And so I know a lot of details of how this project was structured and everything in this video"}, {"start": 24.54, "end": 29.900000000000002, "text": " I'm gonna only tell you obviously only the things and conclusions you can you can deduce from the paper itself"}, {"start": 29.9, "end": 35.58, "text": " So having said that these guys are a real wood Tata spell through shade this Charlie Hewitt Sebastian Gaggio"}, {"start": 36.04, "end": 41.879999999999995, "text": " Matthew Johnson Virginia as tellers Thomas Cashman and Jamie Sutton, so let's see what the hype is all about"}, {"start": 42.58, "end": 50.0, "text": " They say here that we demonstrated it's possible to perform face related computer vision in the wild using synthetic data alone"}, {"start": 50.239999999999995, "end": 55.84, "text": " So I mean obviously you want to use synthetic see if that's possible because you have perfect ground truth"}, {"start": 55.84, "end": 58.08, "text": " But what was the problem so far well?"}, {"start": 58.08, "end": 64.46, "text": " there is something called domain gap and I think it's fairly obvious when you try and train your models on synthetic data and"}, {"start": 64.8, "end": 70.28, "text": " Then try to do to do inference with that same model on real data. 
You're gonna have some problems"}, {"start": 70.6, "end": 76.25999999999999, "text": " The reason is you have different distributions in the case where your synthetics is not perfect as it usually is not"}, {"start": 77.0, "end": 79.16, "text": " you basically have a gap because the"}, {"start": 79.68, "end": 85.42, "text": " distribution between the those data points from the synthetics is different from the distribution of the data points in the real data and"}, {"start": 85.42, "end": 93.38, "text": " So different people different groups try to solve this in different ways and some good examples would be like opening eye"}, {"start": 93.62, "end": 95.5, "text": " so I"}, {"start": 95.5, "end": 102.98, "text": " Recently a couple months ago covered this opening ice paper where they used the robotic hand to solve the Rubik's Cube and the thing"}, {"start": 102.98, "end": 107.32000000000001, "text": " Is the policy the RL agent was actually trained purely in simulation?"}, {"start": 107.34, "end": 112.98, "text": " The approach they took was to not to try to create a perfect simulation because it's even harder"}, {"start": 112.98, "end": 117.5, "text": " To create a perfect simulator for this robotic hand then to create a simulator"}, {"start": 118.02000000000001, "end": 122.86, "text": " For the for the faces because here you only need to mimic the appearance of these faces"}, {"start": 122.94, "end": 129.18, "text": " Whereas in the in this case you have to model the physics you have to model the friction you have to model the contact"}, {"start": 129.18, "end": 132.38, "text": " Forces which is super hard to do and so what they did is just"}, {"start": 133.3, "end": 139.38, "text": " Randomize the simulation so that the real world appears like just another instantiation of a simulation"}, {"start": 139.38, "end": 142.7, "text": " So they modify even stuff like gravity from the usual"}, {"start": 143.22, "end": 148.62, "text": " 9.81 at least on earth to like even 8.5 or whatnot, but it was all done gradually"}, {"start": 149.18, "end": 152.9, "text": " In order to have this curriculum like like a gradual training"}, {"start": 152.9, "end": 158.18, "text": " So that was one approach second approach other approaches were stuff like you can fine-tune"}, {"start": 158.57999999999998, "end": 165.14, "text": " On real images after training on your synthetics, so you have imperfect synthetics you fine-tune in real data from the target distribution"}, {"start": 165.14, "end": 170.26, "text": " And then you can expect that the the accuracy or whatever metric you care about is going to improve"}, {"start": 170.73999999999998, "end": 177.7, "text": " And finally there are approaches such as uprealing with GANs whereby you take your real data, and you pass them through certain like"}, {"start": 178.26, "end": 182.1, "text": " GANs which will make those images appear more realistic"}, {"start": 182.1, "end": 186.1, "text": " Those are all the different ways to try to reduce this domain gap so"}, {"start": 186.42, "end": 191.38, "text": " Microsoft on the other hand took another route and that's to reduce the domain gap almost down to zero"}, {"start": 191.38, "end": 195.78, "text": " And we have to have some bad jokes on this on this channel. 
So yeah"}, {"start": 196.42, "end": 197.7, "text": " anyways"}, {"start": 197.7, "end": 204.18, "text": " Basically, here are some of the images that the framework produces you can see they are fairly realistic"}, {"start": 205.06, "end": 210.18, "text": " Especially this this this this person here. You can see that you can have geometry"}, {"start": 210.18, "end": 216.5, "text": " So 3d information because you have a synthetic data and that means you also have perfect labeling, which is awesome"}, {"start": 216.5, "end": 222.74, "text": " Which is the superpower of this whole framework. You can do pixel wise like a semantic segmentation"}, {"start": 222.98, "end": 227.46, "text": " You have the landmarks however dense you want them to be and a lot of other"}, {"start": 228.02, "end": 231.62, "text": " Ground-truth data which we're going to see a bit later and before I dig deeper into the paper"}, {"start": 231.7, "end": 237.94, "text": " Let me just show you this quick clip from Microsoft where you'll see the animations of the of this of this syntax framework"}, {"start": 237.94, "end": 245.94, "text": " Here are some examples synthetic faces that we render as training data for machine learning"}, {"start": 246.34, "end": 250.74, "text": " With synthetic data you can guarantee perfect labels without annotation noise"}, {"start": 251.14, "end": 259.06, "text": " Generate rich labels that are otherwise impossible to label by hand and have full control over variation and diversity in a data set"}, {"start": 261.38, "end": 264.9, "text": " Awesome, so let's continue and dig deeper into this paper"}, {"start": 264.9, "end": 270.58, "text": " So they say here that they show that the synthetic data can match can both match real data inaccuracy as well as open up"}, {"start": 270.58, "end": 276.82, "text": " New approaches where manual labeling would be impossible. Okay, so a quick remark here. 
You need to keep in mind that"}, {"start": 277.76, "end": 278.82, "text": " rendering"}, {"start": 278.82, "end": 284.58, "text": " Only face images so the heads of of humans is a fairly constrained problem"}, {"start": 284.58, "end": 285.7, "text": " I mean you have"}, {"start": 285.7, "end": 290.17999999999995, "text": " Uh, like a certain geometry that all humans have we have usually have two eyes"}, {"start": 290.18, "end": 296.34000000000003, "text": " We usually have one nose and I mean the problem is very very constrained and I really believe that we can"}, {"start": 296.82, "end": 301.54, "text": " Actually reduce to the main gap almost down to zero if not to zero in the future"}, {"start": 302.1, "end": 309.06, "text": " For this particular problem that Microsoft deeply cares about so because this the team works in mixed reality and virtual reality"}, {"start": 309.22, "end": 313.7, "text": " so that means they they care about stuff like Microsoft HoloLens and"}, {"start": 314.26, "end": 317.38, "text": " There are other competitors out there such as Magic Leap and Facebook"}, {"start": 317.38, "end": 323.62, "text": " Reality Labs and all of them care about this very constrained problem of like understanding"}, {"start": 323.86, "end": 326.18, "text": " human faces and bodies and so"}, {"start": 327.62, "end": 332.9, "text": " That the point I'm trying to drive here is that if you had some other area such as like self-driving"}, {"start": 333.06, "end": 340.1, "text": " I'm fairly sure this is not the way I had because I mean it boils down to the question whether it's easier to"}, {"start": 340.9, "end": 343.42, "text": " build up a new world or just understand the"}, {"start": 343.42, "end": 349.14000000000004, "text": " The the the current world because in order to build a perfect simulator for self-driving you literally have to"}, {"start": 349.54, "end": 355.86, "text": " Cover so many edge cases, etc. So you literally have to build something like I guess real-world simulator and"}, {"start": 356.70000000000005, "end": 364.66, "text": " Yeah, I'm fairly sure that's harder than just trying to build a computer vision systems that will solve us self-driving continuing on here"}, {"start": 365.54, "end": 371.98, "text": " When they say that when faced with a machine learning problem the hardest challenge often isn't choosing the right machine learning model"}, {"start": 371.98, "end": 377.18, "text": " It's finding the right data and this reminds me strongly of Andrew Eng's data centric approach"}, {"start": 377.58000000000004, "end": 384.40000000000003, "text": " And the thing I noticed here is that all of these people and groups that are focused more on short-term products"}, {"start": 385.90000000000003, "end": 392.06, "text": " Tend to say something like this and I don't mean that in a in a in a bad way. I just think that"}, {"start": 392.94, "end": 397.26, "text": " like this implies obviously implies supervised learning paradigm and"}, {"start": 397.26, "end": 404.7, "text": " I'm fairly sure that's not the way that kids and humans learn and I'm fairly sure that approaches that for example Facebook and deep minor pushing"}, {"start": 404.7, "end": 410.98, "text": " Like self supervised paradigm is the way to go long term. 
But having said that again, I"}, {"start": 411.74, "end": 414.44, "text": " Think this is a fairly fairly good approach"}, {"start": 414.86, "end": 419.65999999999997, "text": " At the time being because we're gonna build better product which will enable us accelerate"}, {"start": 420.09999999999997, "end": 422.82, "text": " Accomplishing those long-term goals of cracking intelligence"}, {"start": 422.82, "end": 426.21999999999997, "text": " Obviously they mentioned some some benefits of synthetic data here"}, {"start": 426.21999999999997, "end": 430.7, "text": " So with synthetic data you can guarantee perfect labels without annotation noise"}, {"start": 431.26, "end": 433.78, "text": " Generate rich labels, which are otherwise"}, {"start": 434.38, "end": 439.21999999999997, "text": " Impossible to label by hand and they have an example later here in the paper. Let me show you"}, {"start": 440.1, "end": 442.5, "text": " basically, I mean you can you can"}, {"start": 443.21999999999997, "end": 444.42, "text": " generate"}, {"start": 444.42, "end": 451.1, "text": " As much landmarks as you want like you can you can make the the mesh as dense as you want and this is obviously not possible"}, {"start": 451.1, "end": 456.42, "text": " To achieve by by manual labeling it would be super expensive and super super error prone"}, {"start": 456.74, "end": 462.82000000000005, "text": " So that's some of the benefits that real data that synthetic data is bringing. Let me go back here"}, {"start": 463.46000000000004, "end": 468.14000000000004, "text": " Aside from that you have a full control over variation and diversity in a data set"}, {"start": 468.14000000000004, "end": 472.06, "text": " Which is also very very important when we are deploying these products globally"}, {"start": 472.06, "end": 477.22, "text": " You want to make sure it works on different demographics and they even have a histogram down here"}, {"start": 477.22, "end": 481.5, "text": " Let me quickly show you this so they show the histograms here over the age"}, {"start": 481.98, "end": 487.82000000000005, "text": " Over the I guess it's sex not gender and and ethnicity and as you can see here"}, {"start": 487.82000000000005, "end": 492.5, "text": " It's it's a work in progress obviously, but the good thing is you can basically improve this"}, {"start": 492.5, "end": 496.86, "text": " Whereas it's way harder to do this when you're trying to collect real data"}, {"start": 497.46000000000004, "end": 498.66, "text": " and"}, {"start": 498.66, "end": 503.78000000000003, "text": " I guess a small remark here. It'd be nice to have like a ground truth histograms here"}, {"start": 503.78, "end": 509.5, "text": " I the global statistics so that we can compare how far away is this current?"}, {"start": 510.02, "end": 515.54, "text": " Histogram from the actual desired target histogram. So this is another superpower of having synthetics"}, {"start": 515.54, "end": 520.06, "text": " you can be very flexible and make sure that the training data is"}, {"start": 521.06, "end": 522.06, "text": " representative"}, {"start": 522.06, "end": 529.14, "text": " And mirrors the true statistics you wanna you want to target. 
Okay, so let me show you how the data is actually"}, {"start": 529.14, "end": 533.34, "text": " Like synthesized on a high level what they do you have this template face"}, {"start": 533.86, "end": 538.54, "text": " You add the identity then you add certain expression like smiling or not"}, {"start": 538.54, "end": 541.18, "text": " Then you add the texture you add the hair clothing"}, {"start": 541.58, "end": 545.98, "text": " Sometimes they add like headwear eyewear what not like devices again"}, {"start": 545.98, "end": 551.9399999999999, "text": " That's something that Microsoft cares about like think Collins and finally you add like the environment"}, {"start": 551.9399999999999, "end": 553.98, "text": " You can see the lightning you can see the background etc"}, {"start": 553.98, "end": 555.98, "text": " So that's on a high level how these"}, {"start": 555.98, "end": 563.98, "text": " Like models are 3d avatars are produced and then you just render whatever you want by positioning a virtual camera in this scene"}, {"start": 563.98, "end": 569.98, "text": " So I think I want to stress here is this one. So it requires considerable expertise and"}, {"start": 570.58, "end": 575.98, "text": " Investment to develop a synthetics framework with minimal domain gap. So this is another reason"}, {"start": 575.98, "end": 581.98, "text": " I think this is not the way to go ahead, but it's a perfect perfect strategy for Microsoft in particular"}, {"start": 581.98, "end": 586.98, "text": " This is super expensive to build and maintain and etc"}, {"start": 586.98, "end": 591.98, "text": " So on a pro side say you have spent time labeling face images with landmarks"}, {"start": 591.98, "end": 595.98, "text": " However, you suddenly require additional landmarks in each image"}, {"start": 595.98, "end": 601.98, "text": " Relabeling and verifying will take a long long time if all of a sudden you realize you want to have additional landmark on the lip"}, {"start": 601.98, "end": 605.98, "text": " You have to literally take all of your images and send them back to labelers"}, {"start": 605.98, "end": 609.98, "text": " So that they can relabel the images and then you can actually"}, {"start": 609.98, "end": 616.98, "text": " Verify that you haven't introduced any mistakes by doing this so you can see this is very very time-consuming very very expensive"}, {"start": 616.98, "end": 618.98, "text": " and"}, {"start": 618.98, "end": 620.98, "text": " Erroneous, I guess"}, {"start": 620.98, "end": 627.98, "text": " Secondly, synthetics lets you render faces from a simulated device to develop algorithms and even guide hardware design itself"}, {"start": 627.98, "end": 630.98, "text": " Imagine what would be the the the alternative here"}, {"start": 630.98, "end": 637.98, "text": " So you literally have to design a prototype which is super expensive and slow process and then you'd have to create a prototype"}, {"start": 637.98, "end": 644.98, "text": " Process and then you'd have to collect data and only then can you train the model on the flip side with synthetics?"}, {"start": 644.98, "end": 649.26, "text": " You can literally just position in your in your in this in a scene"}, {"start": 649.26, "end": 655.7, "text": " You can just position a virtual camera and you can start rendering data training models and you can see whether it makes sense"}, {"start": 655.7, "end": 658.1800000000001, "text": " To position the sensor right there or not"}, {"start": 658.1800000000001, "end": 664.9, "text": " So this 
significantly reduces the iteration the time it takes you to iterate and so that's a huge huge asset"}, {"start": 664.9, "end": 670.98, "text": " So you may ask yourself. What is the thing with this framework? I mean we've already we"}, {"start": 671.5799999999999, "end": 673.5799999999999, "text": " We are already used to seeing"}, {"start": 674.1, "end": 678.4599999999999, "text": " amazingly realistic avatars in movies in in games etc"}, {"start": 678.4599999999999, "end": 685.38, "text": " And they give an answer to that here. So the visual effects industry has developed many techniques for convincing audiences"}, {"start": 686.02, "end": 690.02, "text": " That 3d faces are real and we build upon these in our approach"}, {"start": 690.02, "end": 696.92, "text": " However, a key difference is scale. So this framework proposes a procedural way to generate these like"}, {"start": 697.66, "end": 703.06, "text": " Avatars way more quickly and very realistically. So that's the value proposition of this whole thing"}, {"start": 703.06, "end": 707.8, "text": " And now let me kind of break it down for you how the how the models are created"}, {"start": 707.8, "end": 709.8, "text": " So we saw it on the on the high level here"}, {"start": 710.38, "end": 718.1999999999999, "text": " Now, let's see how they actually construct these identities expressions, etc. So the the main point here is the they basically"}, {"start": 718.2, "end": 720.2, "text": " collected a"}, {"start": 720.44, "end": 722.44, "text": " lot of these scans and"}, {"start": 722.8000000000001, "end": 728.24, "text": " To acquire these you either have to buy them or have the the space for collecting the scans yourself"}, {"start": 728.24, "end": 732.44, "text": " So the thing is those setups are very very pricey because I don't know if you ever saw one"}, {"start": 732.44, "end": 734.6800000000001, "text": " I'm gonna pop it somewhere on the screen here"}, {"start": 734.6800000000001, "end": 737.9200000000001, "text": " You basically have to have hundreds or if not thousands of cameras"}, {"start": 738.4000000000001, "end": 743.9200000000001, "text": " placed in that room in order to collect like the accurate geometry and everything you care about and"}, {"start": 744.5600000000001, "end": 746.5600000000001, "text": " later like"}, {"start": 746.56, "end": 752.0799999999999, "text": " Various processing algorithms to form these scans. Additionally, obviously you have to have real humans here in the loop"}, {"start": 752.0799999999999, "end": 759.16, "text": " So it's a it's a slow procedure. It's very expensive procedure. But once you have scans they additionally had to do the cleaning step"}, {"start": 759.16, "end": 762.3199999999999, "text": " So that's a very manual and very very very slow"}, {"start": 762.52, "end": 767.7199999999999, "text": " Like procedure to create from this raw scan to get this clean scan once you have the scans"}, {"start": 767.7199999999999, "end": 769.8, "text": " You can construct something called template face"}, {"start": 769.8, "end": 775.2399999999999, "text": " We're gonna see that in a bit then you can learn a generative face model and then you can sample from that face model to"}, {"start": 775.24, "end": 780.08, "text": " Obtain various geometry. 
Now, let's see a bit more about the generative face model"}, {"start": 780.76, "end": 784.12, "text": " itself, so you basically are trying to"}, {"start": 785.8, "end": 787.64, "text": " Create this generating function"}, {"start": 787.64, "end": 791.84, "text": " So the formula does look a bit intimidating and there is some domain expertise involved here"}, {"start": 791.84, "end": 797.32, "text": " But on a high level in order to form form a novel geometry with a certain expression"}, {"start": 797.32, "end": 804.76, "text": " What you do is you take this template face which is formed as and they mentioned here as the average face with neutral expressions"}, {"start": 804.76, "end": 810.4, "text": " So you take all of the scans you find their vertices you just do the average and you find the template head"}, {"start": 810.76, "end": 817.2, "text": " Then what they do is they learn this this basis and they call it the linear identity basis"}, {"start": 817.68, "end": 821.68, "text": " They learn that thing and finally you have this expression basis, which is not learned"}, {"start": 821.92, "end": 822.4399999999999, "text": " Okay"}, {"start": 822.4399999999999, "end": 825.04, "text": " once you have the template head once you have the this"}, {"start": 825.4, "end": 833.28, "text": " Linear identity basis and the expression basis you just use the betas in size to determine the actual geometry of the face"}, {"start": 833.28, "end": 837.04, "text": " And you can see here. So the generating function comprises out of"}, {"start": 837.56, "end": 843.68, "text": " Like takes as an input betas size and tatas so data just determines the position of the face model"}, {"start": 843.8399999999999, "end": 848.48, "text": " namely one of those is like jaw so you can basically position draw something like this and"}, {"start": 849.64, "end": 853.8399999999999, "text": " finally once they fit the so as I said this s is learned and"}, {"start": 854.24, "end": 859.24, "text": " Betas are also learned so that they fit the scans that they have in the data set"}, {"start": 859.24, "end": 865.6800000000001, "text": " Once they have a like a set of betas they can fit a multivariate like a normal distribution on top of it"}, {"start": 865.6800000000001, "end": 868.36, "text": " and that's how they actually form the the"}, {"start": 869.48, "end": 874.0, "text": " Genetic model and then you can just sample betas from it and you can get novel identities"}, {"start": 874.6, "end": 882.76, "text": " continuing on I want to stress again that this is a huge engineering project more so than than research and"}, {"start": 883.48, "end": 885.48, "text": " They say here that in addition to facial expression"}, {"start": 885.48, "end": 892.9200000000001, "text": " We'll layer random eye gaze directions on top of sampled expressions and use procedural logic to pose the eyelets accordingly"}, {"start": 892.9200000000001, "end": 900.5600000000001, "text": " So there is lots of heuristics all around the place here and not only for for the like positioning the the eyes"}, {"start": 900.5600000000001, "end": 905.72, "text": " But also here for texture you have to to you basically use the scans"}, {"start": 906.2, "end": 911.36, "text": " To get the texture information and they'll see here that they are modeling here at the strand level"}, {"start": 911.36, "end": 917.64, "text": " And so money here at the strand level allows us to capture realistic multi-path elimination effects, but on the con side"}, {"start": 918.12, 
"end": 923.04, "text": " Each asset was authored by a groom artist who specializes in creating digital hair"}, {"start": 923.08, "end": 927.0, "text": " So as you can see here, this is very slow and very very expensive"}, {"start": 927.08, "end": 930.92, "text": " Same goes for every other part of these of this synthetic framework"}, {"start": 930.92, "end": 938.84, "text": " So you have you have to obviously somebody has to construct these clothing somebody has to construct the hats the eyewear"}, {"start": 938.84, "end": 942.76, "text": " Etc but at the end because this is synthetics you can have various"}, {"start": 943.5600000000001, "end": 947.2800000000001, "text": " Various types of labels so not only vertices"}, {"start": 947.8000000000001, "end": 953.0, "text": " Which they later will see the results they they are using they are doing a landmark regression"}, {"start": 953.0, "end": 955.6800000000001, "text": " They'll you can also have UVs you can have masks you can have depth"}, {"start": 956.12, "end": 961.48, "text": " Normals albedo every single type of information you can care about in computer vision and graphics"}, {"start": 961.48, "end": 970.04, "text": " And again, this just showcased the variety of faces you can you can generate using using this the syntax framework"}, {"start": 970.04, "end": 972.04, "text": " And I think it's very very impressive"}, {"start": 972.52, "end": 974.52, "text": " Finally, let's jump to the actual results"}, {"start": 975.32, "end": 981.8000000000001, "text": " This is where they stress that this data-centric approach makes a lot of sense because for the semantic segmentation problem"}, {"start": 981.8000000000001, "end": 989.04, "text": " What I've done is used simplest off-the-shelf models such as unit and such as resnet to to to build"}, {"start": 989.04, "end": 993.92, "text": " Like to get results which are comparable with state-of-the-art methods on these benchmarks"}, {"start": 994.0799999999999, "end": 1001.1999999999999, "text": " So for the semantics implementation what I do is they use unit a model and they just apply the binary cross entropy loss"}, {"start": 1001.4399999999999, "end": 1005.9599999999999, "text": " On the ground truth labels, which again are perfect because we have synthetic data"}, {"start": 1005.9599999999999, "end": 1012.88, "text": " And the second thing that has to be done here is something called label adaptation and let me first motivate what it is"}, {"start": 1012.88, "end": 1014.78, "text": " And then I'm gonna quickly explain how they do it"}, {"start": 1014.78, "end": 1022.16, "text": " So the thing is if you concentrate really hard on on on these images you can see that so this is the ground truth from the real data"}, {"start": 1022.56, "end": 1025.2, "text": " And you can see, uh, that the nose here"}, {"start": 1025.6, "end": 1031.84, "text": " It's fairly arbitrary decision for the labelers in this real data to to kind of end the nose here"}, {"start": 1032.0, "end": 1035.68, "text": " So basically this boundary of the nose is fairly arbitrarily chosen"}, {"start": 1036.0, "end": 1041.52, "text": " And so we shouldn't penalize synthetics for having chosen that the nose ends here instead of here"}, {"start": 1041.52, "end": 1044.48, "text": " and so that's why the adaptation adaptation is done and"}, {"start": 1045.36, "end": 1053.12, "text": " What what they do is they just freeze the weights of the first unit and they just what they do is they take the the real data"}, {"start": 1053.48, 
"end": 1056.8, "text": " Labels and they just kind of map using another unit"}, {"start": 1056.8, "end": 1063.44, "text": " So this is basically another unit and you just map from the output from this synthetics model"}, {"start": 1063.8, "end": 1066.8, "text": " You map to the real data. Let's jump to the final results"}, {"start": 1066.8, "end": 1072.28, "text": " Let's see the tables. So this is basically for the semantic segmentation problem"}, {"start": 1072.28, "end": 1080.36, "text": " You can see they compare their methods against various soda baselines and you can see results are fairly"}, {"start": 1080.6399999999999, "end": 1084.58, "text": " Comparable here. So 92 F1 score"}, {"start": 1085.1, "end": 1090.84, "text": " Against these baselines some of these baselines are a bit better. You can see that this paper in particular is"}, {"start": 1090.84, "end": 1096.6399999999999, "text": " 1.2 points better, but you need to realize that that model looks something like this"}, {"start": 1096.6399999999999, "end": 1104.52, "text": " So you basically have pyramid pooling here edge perceiving edge aware graph reasoning edge aware graph reasoning here"}, {"start": 1104.84, "end": 1110.54, "text": " Decoder fairly complex reprojection projection. So it's a fairly complex like architecture"}, {"start": 1111.0, "end": 1118.1599999999999, "text": " Like zooming in into this particular module you can see you have graph convolutions transpose bowling etc, etc"}, {"start": 1118.16, "end": 1125.4, "text": " So very very complex architectures. Whereas here they just use simple unit which is off-the-shelf model and they achieve comparable results"}, {"start": 1125.4, "end": 1126.72, "text": " So that's cool"}, {"start": 1126.72, "end": 1128.24, "text": " second thing is"}, {"start": 1128.24, "end": 1133.6000000000001, "text": " What confused me initially was the that we have better results with synthetics compared to real data"}, {"start": 1133.6000000000001, "end": 1139.0400000000002, "text": " And I guess the main reason here is this particular data set. So they used Helen data set"}, {"start": 1139.0400000000002, "end": 1143.16, "text": " I think it has around 2k training images. Whereas they have I think they used"}, {"start": 1143.16, "end": 1147.6000000000001, "text": " 100k images to train the unit. So that's why they get better results here"}, {"start": 1147.6000000000001, "end": 1154.4, "text": " Although it'd be really nice if they explicitly mentioned that somewhere that the amount of images used for this synthetics"}, {"start": 1155.0400000000002, "end": 1162.3600000000001, "text": " Like baseline synthetics results compared to the real result in here on the other hand on this different data set called LEPA"}, {"start": 1163.44, "end": 1168.5600000000002, "text": " The situation is a bit different because the real training data set actually has way more data points"}, {"start": 1168.56, "end": 1175.12, "text": " So the results here are better compared to synthetics data set and that makes sense. They also show results on the landmark localization"}, {"start": 1175.44, "end": 1177.44, "text": " And against some other"}, {"start": 1177.8799999999999, "end": 1181.7, "text": " Soda results and they show again that they have comparable results"}, {"start": 1181.72, "end": 1188.8, "text": " So the synthetics data again here some of them do have better results like like this one here or I don't know like this one"}, {"start": 1188.8, "end": 1193.44, "text": " Here etc. 
But like again, keep in mind they are using off-the-shelf models here"}, {"start": 1193.44, "end": 1197.74, "text": " It'd be very very interesting to see how this performs with much better models"}, {"start": 1197.74, "end": 1199.74, "text": " They did some ablations"}, {"start": 1200.82, "end": 1205.34, "text": " The augmentations matter a lot. So I forgot to mention that once you render our data"}, {"start": 1205.94, "end": 1209.86, "text": " What they do is they apply various augmentations like grayscaling blurring"}, {"start": 1210.78, "end": 1216.9, "text": " Warping etc. And they just show here that not using augmentations hurts performance by a lot"}, {"start": 1216.9, "end": 1220.18, "text": " You can see the jump here and finally"}, {"start": 1220.98, "end": 1224.22, "text": " Not doing the label adaptation hurts the performance a lot"}, {"start": 1224.22, "end": 1227.9, "text": " So you can see here it jumps to 5.61 where we want to be lower"}, {"start": 1227.9, "end": 1231.9, "text": " So it jumps from 3 to 5.6, which is a big gap"}, {"start": 1231.9, "end": 1236.46, "text": " So that means this part is very very important for landmark localization again"}, {"start": 1236.46, "end": 1241.84, "text": " The reason is the same there is also ambiguity same as with semantic segmentation and the nose example"}, {"start": 1242.34, "end": 1244.66, "text": " Here in the landmark localization problem"}, {"start": 1244.8600000000001, "end": 1250.46, "text": " They have one more interesting visualization here and they say here that predictions by network strain would reel"}, {"start": 1250.46, "end": 1255.3, "text": " So that's the top one against the synthetic data on the bottom one"}, {"start": 1255.3, "end": 1261.9, "text": " And they say that know how synthetic data network generalizes better across expression illumination pose and occlusion"}, {"start": 1261.9, "end": 1265.74, "text": " You can obviously see the results here are way better compared to the results up here"}, {"start": 1265.74, "end": 1268.38, "text": " It's maybe a knit but like claiming that you generalize better"}, {"start": 1268.78, "end": 1273.82, "text": " Whereas you have much more data using synthetics compared to real data is not a fair claim"}, {"start": 1273.82, "end": 1279.66, "text": " Although I guess it proves the point that it's better to have synthetic data because you can generate as much data as you want"}, {"start": 1279.66, "end": 1284.14, "text": " Yeah, and finally let's end up here"}, {"start": 1284.14, "end": 1291.5800000000002, "text": " I mentioned that the superpower is you can generate as much as dense landmarks as you want"}, {"start": 1291.5800000000002, "end": 1296.6200000000001, "text": " You can basically put virtual sensors wherever you want and render"}, {"start": 1296.6200000000001, "end": 1300.3000000000002, "text": " So for example eye tracking is something very important for the HoloLens device"}, {"start": 1300.3000000000002, "end": 1305.1000000000001, "text": " And so you need to be able to track where the pupil is where the eye is"}, {"start": 1305.1, "end": 1312.06, "text": " And you can you can quickly iterate with different positions of the cameras later when the prototype actually is developed"}, {"start": 1312.06, "end": 1317.6599999999999, "text": " You can be confident that the conclusions you draw from the synthetics experiments can now be"}, {"start": 1318.1399999999999, "end": 1325.62, "text": " Generalized to the real data and that's the superpower of this whole 
framework having said that they they clearly acknowledge certain limitations"}, {"start": 1325.62, "end": 1330.8999999999999, "text": " We do not include expression dependent wrinkling effects of realism suffers during certain expressions"}, {"start": 1330.9, "end": 1338.22, "text": " Since we sample parts of our model independently we sometimes get unusual but not impossible combinations such as feminine faces"}, {"start": 1338.22, "end": 1345.38, "text": " That have a beard and stuff like that. So yeah, I mean again, this is awesome result"}, {"start": 1345.38, "end": 1351.5, "text": " I think this for this particular problem that Microsoft and some of these companies like magic leap etc care about"}, {"start": 1351.98, "end": 1358.5400000000002, "text": " This is a perfect. I think this is a right direction to head into to build better products. But like if we're trying to just"}, {"start": 1358.54, "end": 1365.34, "text": " Talk here about improving research improving AI as a field. I think this is not definitely not the way to do to go ahead"}, {"start": 1365.34, "end": 1370.62, "text": " Finally, they mentioned that they are going to publish for non-commercial use case"}, {"start": 1370.62, "end": 1375.8999999999999, "text": " 100,000 images and I saw some people were confused on Twitter including Andrei Karpati"}, {"start": 1375.8999999999999, "end": 1380.22, "text": " And I thought maybe trying and giving an answer to that like question"}, {"start": 1380.22, "end": 1386.46, "text": " Namely to why they are only publishing 100k images and the answer lies pretty much here. I mean"}, {"start": 1386.46, "end": 1394.3400000000001, "text": " They took a look at the de-deblation for the landmark localization at task and they noticed that after a hundred key images"}, {"start": 1395.1000000000001, "end": 1400.42, "text": " Basically, they got diminishing returns and you can see the the mean error starts increasing again"}, {"start": 1400.42, "end": 1406.98, "text": " So that means a hundred case totally arbitrary value and the whole point of this framework is to have this"}, {"start": 1407.46, "end": 1411.92, "text": " Flexibility that I just explained here. So even when they render"}, {"start": 1411.92, "end": 1418.48, "text": " The all of these avatars from a certain angle. I mean the whole point is you want to be flexible enough to"}, {"start": 1419.2, "end": 1422.2, "text": " Put a new virtual camera and then we render the data there"}, {"start": 1422.2, "end": 1426.8600000000001, "text": " There is not much point in actually in my opinion in publishing these images"}, {"start": 1426.8600000000001, "end": 1430.88, "text": " So what would be really nice is if Microsoft were to make an API?"}, {"start": 1431.48, "end": 1436.04, "text": " Out of this and earn some money by doing that and help other people"}, {"start": 1436.04, "end": 1443.04, "text": " Like not have to go through the same procedure. That's as we saw was very expensive very slow to create this whole framework"}, {"start": 1443.04, "end": 1450.12, "text": " But I can also totally understand the reason for not doing that because it was super expensive and it's giving Microsoft an edge"}, {"start": 1450.24, "end": 1452.86, "text": " Anyways, if you like this paper straight out with a friend"}, {"start": 1453.8, "end": 1459.1599999999999, "text": " Subscribe to this channel join the discord communities. 
We have very smart people chatting there and you can ask them questions"}, {"start": 1459.16, "end": 1465.96, "text": " So yeah until next time. Bye. Bye"}]
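The ablation discussed in the transcript above stresses how much the appearance augmentations (grayscaling, blurring, warping) matter when training only on rendered synthetic faces. As a rough illustration, here is a minimal torchvision-style sketch of that kind of pipeline; the probabilities and magnitudes are placeholder assumptions rather than the paper's settings, and in a real segmentation or landmark pipeline the geometric warp would also have to be applied to the labels.

```python
# Illustrative sketch only: the exact augmentation set and parameters used in the
# paper are not given in the transcript; the values below are placeholders.
from torchvision import transforms

synthetic_face_augs = transforms.Compose([
    transforms.RandomGrayscale(p=0.2),                          # grayscaling
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),   # blurring
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),  # mild warping
    transforms.ToTensor(),
])

# Usage: augmented = synthetic_face_augs(rendered_pil_image)
```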
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=QaIDMuGbqt0
10k subscribers | joining Google DeepMind, updates, AMA
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video we're celebrating 10k subscribers! I give you some updates like: 1) I'm joining DeepMind as a Research Engineer!!! 2) Where this channel is heading to? hint: more applied stuff! 3) AMA (I answer some of your questions!) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Get started with ML blog: https://gordicaleksa.medium.com/get-started-with-ai-and-machine-learning-in-3-months-5236d5e0f230 (also check out other "Getting started with X" blogs) ✅ Get started with ML video: https://www.youtube.com/watch?v=7q_OJvQQ7vY (also check out videos where I explain good projects for beginners/intermediate practitioners) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Updates 01:45 My GitHub 02:10 Turn that notifications on 02:30 Where this channel is heading to (DeepMind) 03:30 Follow me on Twitter and LinkedIn 03:45 Patreon and Discord community 04:30 AMA (answering your questions) 04:33 Q1 05:58 Q2 07:10 Q3 07:49 Q4 08:49 Q5 09:32 Q6 10:48 Q7 11:45 Q8 12:07 Q9 13:05 Q10 14:36 Q11 15:22 Q12 15:58 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #10ksubs #deepmind #ama
What's cracking guys? In this video we're celebrating because the channel just hit 10k subscribers and in this video I just thought of giving you a short update telling you where this channel is at, where this channel is heading to as well as I'm gonna do like a short AMA where I'm gonna answer some of the questions you guys posted over the last couple of days. So let's start with the beginning. My first video was uploaded in March 15th last year so that's 2020 and obviously back then I did not have a clue that we're gonna have a pandemic and so it perfectly coincided with the pandemic at least in Serbia. To be honest here I'm not sure I'd be having this YouTube channel if the pandemic did not happen because it kind of helped me focus and have less distractions because we had a bunch of lockdowns here in Serbia so that's I guess a positive side I see in this pandemic. So it took me a year and a half to get to the 10k mark and to be honest if you asked me two years ago whether I would have a YouTube channel I'd probably say no and the main takeaway there is I guess many of you I'm fairly sure that many of you can actually start your own YouTube channel I don't think there is something special about it and I strongly believe that in 20 years time many people especially tech savvy people are gonna have some type of a media channel because it's a super valuable asset that helps you communicate clearly your ideas, it helps recruiters notice you and that can definitely help you land a dream job. So looking at these channel numbers like the views, number of subscribers and everything I think I'm doing fairly well considering all of the things I was doing aside from this channel so I definitely do not consider myself to be a YouTuber so throughout this whole period I was working pretty much full time at Microsoft I was initially a software engineer and then as an ML engineer I was also making a bunch of open source projects and posting them to GitHub and hopefully you know of some of those you may find some cool projects there so I'm gonna just link it somewhere here. I had eight open source projects, I wrote a bunch of medium blogs I read a bunch of papers, a bunch of books so the point I'm trying to drive here is that many of the things I've been doing were done privately, it's obviously very time consuming to upload everything to YouTube. As a side note here I noticed that many of you have not turned on the all notification bell so I encourage you to do that because a lot of cool content is coming on and also some of you do have the all notification bell turned on yet your YouTube notifications are turned off so you're not receiving the notifications so go ahead and fix that. I also want to tell you something about where this channel is heading to. So first things first, some of you may know, some of you may not know, I just landed a job at DeepMind and I'm super happy about it, that was my pretty much dream company and so the next video you can expect like me sharing my journey of how I landed a job there so that's gonna be one or two videos. Aside from that I'm gonna be much more bullish on creating engineering videos so code walkthroughs videos something like my G9AI video or something like the Dyno video I did and so the reason is, the reason I was pushing research so much is because I lack formal education in machine learning so I had to compensate all of that myself and so like covering research papers was just my way to kind of learn that and also make it public and help others. 
In a nutshell, going forward I'm gonna focus pretty much equally on research papers as well as on the engineering videos so that's some update on my side. Again, I said I do not consider myself a YouTuber, this is just yet another one of my channels where I'm trying to communicate my thoughts, my ideas and help you guys out. So having said that, I share lots of cool content on, at least I think so, on Twitter and LinkedIn so many of the paper summaries I've done were in a written format and only a subset of those were actually uploaded as a video. So do follow me on Twitter and do follow me on LinkedIn. Finally, before I do an AMA, I wanna make a huge shout out to my Patreons, they've been supporting me over the last period when I was jobless and I also wanna make a huge shout out to the AIPfany Discord server, those guys are super active, they're helping each other out. So if you have any questions, I think the best thing you could do now is join the Discord server and ask a question there because there will be somebody that will answer your question, obviously even better than I can. So the server has some really awesome channels, like there is one channel where people are sharing their ML interviewing experiences, trying to land a job at whatever ML company, we have info channels where you can ask whatever job related question and finally I've got a channel where I'm bringing valuable ML companies and thus sharing with you guys the fresh job openings. Having said that, let's start the AMA. So the first question is this, congrats and we hope you do a video about how to get internship in machine learning and in other video project ideas for portfolio please. Okay, so the best thing I think you can do here is be very deliberate in your job search. So find a couple of companies you're super interested in and just check out the projects they are working on, some open source projects they have and consider contributing. So that way they'll get some exposure to your abilities and you'll have an opportunity to show what you know and basically that's I think one of the best ways to land a job at a certain company. Aside from that, consider creating a project and open sourcing it in a certain topic that's relevant to that particular company you want to apply for. Also consider writing a blog where you'll be covering and showing your knowledge about that certain topic that they care about again or just make a video if you're comfortable with the video format. Finally just connect with the people that work at those companies on Twitter, on LinkedIn and share with them your content, don't be spammy, just share some valuable content with them and also if you think you're still lacking some knowledge be very deliberate again and ask them which skill sets do they look for and go ahead and learn exactly those things. So don't be that guy that just goes ahead and learns like language X, framework Y and technology Z just because they think it's going to be useful. So be very focused. Okay, second question. Can you tell how to approach math in deep learning and some resources, especially for CS grads? I'll be sharing more advice on this question in the deep mind video but for now let me give you some brief tips. First a small background story. So I basically had throughout my life a love and hate relationship with mathematics and that was mainly because of schooling. So once I started self-studying I became a lot better at mathematics and I started loving the topics. 
Okay, having said that I obviously have like my journey when it comes to mathematics started in elementary school, then I had high school and finally I was studying electrical engineering so I had a bunch of courses there. So if you have a similar background there then my best tip is to do the following thing. Go through mathematics for machine learning book. Accompany that book with the three blue one browns, awesome playlists on linear algebra, calculus, etc. And finally learn stuff on the fly. So as you approach, as you encounter certain problems, some mathematical formula you do not understand, go ahead and read a couple of blogs and a couple of Wikipedia pages and kind of that way you're going to learn like a lot, believe me. And that's probably a mindset I picked up from my software engineering role. Okay, third question, moral question. How can it be prevented that general advances in AI are being used for morally questionable purposes? Is stopping research altogether like Joseph Redman really the only consequence? So in my honest opinion there is no way to stop this. As with any technology AI can be misused. So instead of burying your head in the sand let me propose a couple of alternatives. Work on educating the public and work at a company that is using AI for good. Because just think about it, if you just kind of exclude yourself from the whole story that's like you don't exist. So if you're a good positive person and you're engaged you can only make things better. Fourth question is on impact. Hopefully with a bit of contextual background from DeepMind, which area do you think AI will impact the most in the next five years? So first things first, I still haven't started working for DeepMind. I'll be starting in December so that means in a couple of months. But I obviously still have an opinion of which areas are going to be promising. So I cannot pick a single one. I can pick up like a number of areas I'm personally passionate about. So that's self-supervised learning, that's causality, that's reinforcement learning, that's geometric deep learning and program synthesis and induction. There is obviously a lot of overlap between those fields but yeah I'm pretty passionate about all of those. The reason I haven't mentioned computer vision is because it deals more with perception. Whereas the thing we are lacking currently is the cognition part so that's the reasoning part. And all of these fields seem to hold certain promising ideas that can lead to us building an agent that can actually reason and be intelligent I guess. As for the technical question, where do you personally struggle the most with in regards to AI? I personally, when I focus on whatever topic, I do not have problems nailing it to be honest. The current problem I'm seeing is because I have a lot of context switching because I'm doing various sub-fields. And so I sometimes feel like in a month I forgot, like even a paper I overviewed on my YouTube channel, I feel like I forgot many things. But then like in 5 minutes, 10 minutes I can retrieve the context really quickly. And I just feel a huge gap between the stuff I know before investing those 5-10 minutes in that topic. Okay, sixth question. Where do you typically find the papers? I usually look at papers with code and send it to Preserver. Are there more good sources that you recommend? And same question here. Any tips on how to pick the relevant research papers when there are so many of them? So basically my main answer would be Twitter. 
Go ahead and curate your network. Follow people that are passionate about the same topics you are passionate about and only follow those people. So that way you'll get a full immersion. Also by creating this network of people around you, some of them can sometimes DM you and send you the hottest new paper. And that's basically like a human recommender system, which is in my opinion still better than machine learning recommender systems. So I personally do not even scroll, usually don't scroll through the Twitter feed. I just open up certain profiles which I follow and I just check out what those guys and girls are posting. Obviously from time to time I just go and scroll through the feed just to add some randomness to the process and maybe pick up on some cool new people I've missed out. So for me personally I'm very curious and I don't have a single field I'm interested in, so I follow people from a bunch of different subfields and that's I guess also good for my YouTube channel because there is a lot of diversity. Seventh question is how to read papers fast. And a simple answer would be there is no wisdom there. You just have to do it. You have to read a bunch of papers and a couple of tips would maybe be read the abstract first, check out the figures, the charts, whatever the visualizations you have in the paper and read the conclusion. After you have that you'll hopefully have some rough idea of what's going on in the paper, of the main ideas, and then you can do a single or a double pass to the paper or even more depending on what you're doing. So for example when I was implementing some papers like Graph attention networks I think I've read it at least three or four times. And then occasionally I just check out some snippets of the paper and that's normal. And another tip would be be comfortable with not understanding stuff. So sometimes you'll have to leave that paper behind even though you did not nail certain concepts and only two or three papers down the line will actually figure it out. So just be patient and be comfortable with not knowing everything. Eighth question, hi Alexa, your journey to DeepMind is very inspiring to me. What's the biggest piece of advice you have for someone trying to self-study reinforcement learning? And a very short answer here, check out my blog on how to get started with reinforcement learning. So I wrote those blogs such that Alexa from the past would be grateful if I have stumbled upon such a resource. Ninth question, as a newcomer to AI ML field, could you please suggest the best place to start as I want to excel in computer vision and NLP fields and want to become a researcher. Your valuable guidance in this matter will be highly appreciated. So basically I created a series, a couple of blogs and a couple of videos where I basically tell you how to get started with whatever topic or with machine learning in general. So I strongly suggest to check out my medium like blogs as well as two or three videos I made a while ago. I'm going to link them down in the description and that's pretty much it. There is not much wisdom there, just don't fall into this decision making paralysis trap and start working and be consistent with your work and you'll get there. I'm trying to create the best, the optimal learning path because that's impossible. Just pick one of these, I think these resources I've shared are good enough and if you just commit to them you'll nail it. 
Tenth question, I'm planning to take a gap year between my bachelor's degree and master's degree to fill at least some gaps. I don't feel like college has prepared me for the industry so just want to upskill in general. I'd like to know how you approach making your own curriculum, any tips in general would be great too, thanks. So basically again I think my blogs on learning how to learn and my blogs on how to get started with topic X are an excellent starting point to better understand my thought processes. So on a personal note, this whole journey of mine of creating these curriculums and self-studying started at least 15 years ago. So basically back in high school I was learning human languages and I had to learn those by myself and so I slowly started developing my own strategies of how to do it and I shared many of those thoughts in those blogs I mentioned. So basically human languages workout, IE like physical training and working out helped me a lot, like just gained that different type of willpower and create a program, create a goal, create a program and just stick to it and kind of execute all of that. So all of those skills slowly, so learning how to learn and discipline and willpower developed over a course of 15 years I guess. So now even though I started in 2018 like machine learning which is pretty recent, I already had a decent background both from the technical standpoint and I also made a really strong background in learning how to learn. Eleventh question, you have so many good articles, how did you acquire so much knowledge in the space in what seems like a short time? Again here at this point in time I think I have a really strong learning how to learn abilities. So as I said through learning languages, through working out I developed all of the necessary ingredients like setting the goal, making a program, executing, so having that pure willpower as well as like a bunch of different learning like tricks I guess. But also things like learning how to learn course helped a lot. I definitely think that learning languages had a really really strong role in developing those necessary cognitive abilities that will later help me learn whatever topic I want, whether it be machine learning, software engineering, finance, what not. And finally twelfth and final question, hi maybe you could make a video about your journey until today, what's your academic background, how you got into ML etc. So I'm gonna keep it super brief here, I basically did like a bachelor in electrical engineering. I dropped out of masters because as I said I'm a strong like a proponent of self education. So I strongly believe that I can create a better program for myself than the best faculties out there like MIT or Stanford. So in 2018 I attended this machine learning summer camp organized by Microsoft and I think that was the starting point for me pretty much. That's it guys, let's go to 100,000 subscribers. If you find these videos useful consider subscribing, share the videos out with your friends and join the Discord community. Until next time, bye bye.
[{"start": 0.0, "end": 5.8, "text": " What's cracking guys? In this video we're celebrating because the channel just hit 10k subscribers"}, {"start": 5.8, "end": 11.6, "text": " and in this video I just thought of giving you a short update telling you where this channel is at, where this channel is heading to"}, {"start": 11.6, "end": 18.6, "text": " as well as I'm gonna do like a short AMA where I'm gonna answer some of the questions you guys posted over the last couple of days."}, {"start": 18.6, "end": 25.3, "text": " So let's start with the beginning. My first video was uploaded in March 15th last year so that's 2020"}, {"start": 25.3, "end": 29.6, "text": " and obviously back then I did not have a clue that we're gonna have a pandemic"}, {"start": 29.6, "end": 33.0, "text": " and so it perfectly coincided with the pandemic at least in Serbia."}, {"start": 33.0, "end": 38.800000000000004, "text": " To be honest here I'm not sure I'd be having this YouTube channel if the pandemic did not happen"}, {"start": 38.800000000000004, "end": 44.1, "text": " because it kind of helped me focus and have less distractions because we had a bunch of lockdowns here in Serbia"}, {"start": 44.1, "end": 49.0, "text": " so that's I guess a positive side I see in this pandemic."}, {"start": 49.0, "end": 55.8, "text": " So it took me a year and a half to get to the 10k mark and to be honest if you asked me two years ago"}, {"start": 55.8, "end": 62.4, "text": " whether I would have a YouTube channel I'd probably say no and the main takeaway there is I guess many of you"}, {"start": 62.4, "end": 67.0, "text": " I'm fairly sure that many of you can actually start your own YouTube channel I don't think there is something special about it"}, {"start": 67.0, "end": 74.4, "text": " and I strongly believe that in 20 years time many people especially tech savvy people are gonna have some type of a media channel"}, {"start": 74.4, "end": 80.8, "text": " because it's a super valuable asset that helps you communicate clearly your ideas, it helps recruiters notice you"}, {"start": 80.8, "end": 83.4, "text": " and that can definitely help you land a dream job."}, {"start": 83.4, "end": 88.2, "text": " So looking at these channel numbers like the views, number of subscribers and everything"}, {"start": 88.2, "end": 94.2, "text": " I think I'm doing fairly well considering all of the things I was doing aside from this channel"}, {"start": 94.2, "end": 102.2, "text": " so I definitely do not consider myself to be a YouTuber so throughout this whole period I was working pretty much full time at Microsoft"}, {"start": 102.2, "end": 108.60000000000001, "text": " I was initially a software engineer and then as an ML engineer I was also making a bunch of open source projects"}, {"start": 108.6, "end": 113.8, "text": " and posting them to GitHub and hopefully you know of some of those you may find some cool projects there"}, {"start": 113.8, "end": 118.8, "text": " so I'm gonna just link it somewhere here. 
I had eight open source projects, I wrote a bunch of medium blogs"}, {"start": 118.8, "end": 124.39999999999999, "text": " I read a bunch of papers, a bunch of books so the point I'm trying to drive here is that many of the things I've been doing"}, {"start": 124.39999999999999, "end": 129.4, "text": " were done privately, it's obviously very time consuming to upload everything to YouTube."}, {"start": 129.4, "end": 134.6, "text": " As a side note here I noticed that many of you have not turned on the all notification bell"}, {"start": 134.6, "end": 141.4, "text": " so I encourage you to do that because a lot of cool content is coming on and also some of you do have the all notification bell turned on"}, {"start": 141.4, "end": 146.4, "text": " yet your YouTube notifications are turned off so you're not receiving the notifications so go ahead and fix that."}, {"start": 146.4, "end": 149.0, "text": " I also want to tell you something about where this channel is heading to."}, {"start": 149.0, "end": 155.0, "text": " So first things first, some of you may know, some of you may not know, I just landed a job at DeepMind"}, {"start": 155.0, "end": 161.0, "text": " and I'm super happy about it, that was my pretty much dream company and so the next video you can expect"}, {"start": 161.0, "end": 166.0, "text": " like me sharing my journey of how I landed a job there so that's gonna be one or two videos."}, {"start": 166.0, "end": 172.0, "text": " Aside from that I'm gonna be much more bullish on creating engineering videos so code walkthroughs videos"}, {"start": 172.0, "end": 179.6, "text": " something like my G9AI video or something like the Dyno video I did and so the reason is, the reason I was pushing research so much"}, {"start": 179.6, "end": 184.0, "text": " is because I lack formal education in machine learning so I had to compensate all of that myself"}, {"start": 184.0, "end": 192.4, "text": " and so like covering research papers was just my way to kind of learn that and also make it public and help others."}, {"start": 192.4, "end": 198.0, "text": " In a nutshell, going forward I'm gonna focus pretty much equally on research papers as well as on the engineering videos"}, {"start": 198.0, "end": 200.0, "text": " so that's some update on my side."}, {"start": 200.0, "end": 208.4, "text": " Again, I said I do not consider myself a YouTuber, this is just yet another one of my channels where I'm trying to communicate my thoughts, my ideas and help you guys out."}, {"start": 208.4, "end": 213.4, "text": " So having said that, I share lots of cool content on, at least I think so, on Twitter and LinkedIn"}, {"start": 213.4, "end": 220.4, "text": " so many of the paper summaries I've done were in a written format and only a subset of those were actually uploaded as a video."}, {"start": 220.4, "end": 223.4, "text": " So do follow me on Twitter and do follow me on LinkedIn."}, {"start": 223.4, "end": 232.4, "text": " Finally, before I do an AMA, I wanna make a huge shout out to my Patreons, they've been supporting me over the last period when I was jobless"}, {"start": 232.4, "end": 239.4, "text": " and I also wanna make a huge shout out to the AIPfany Discord server, those guys are super active, they're helping each other out."}, {"start": 239.4, "end": 248.4, "text": " So if you have any questions, I think the best thing you could do now is join the Discord server and ask a question there because there will be somebody that will answer your question, obviously even better than I can."}, 
{"start": 248.4, "end": 257.4, "text": " So the server has some really awesome channels, like there is one channel where people are sharing their ML interviewing experiences, trying to land a job at whatever ML company,"}, {"start": 257.4, "end": 269.4, "text": " we have info channels where you can ask whatever job related question and finally I've got a channel where I'm bringing valuable ML companies and thus sharing with you guys the fresh job openings."}, {"start": 269.4, "end": 272.4, "text": " Having said that, let's start the AMA."}, {"start": 272.4, "end": 282.4, "text": " So the first question is this, congrats and we hope you do a video about how to get internship in machine learning and in other video project ideas for portfolio please."}, {"start": 282.4, "end": 287.4, "text": " Okay, so the best thing I think you can do here is be very deliberate in your job search."}, {"start": 287.4, "end": 296.4, "text": " So find a couple of companies you're super interested in and just check out the projects they are working on, some open source projects they have and consider contributing."}, {"start": 296.4, "end": 307.4, "text": " So that way they'll get some exposure to your abilities and you'll have an opportunity to show what you know and basically that's I think one of the best ways to land a job at a certain company."}, {"start": 307.4, "end": 315.4, "text": " Aside from that, consider creating a project and open sourcing it in a certain topic that's relevant to that particular company you want to apply for."}, {"start": 315.4, "end": 326.4, "text": " Also consider writing a blog where you'll be covering and showing your knowledge about that certain topic that they care about again or just make a video if you're comfortable with the video format."}, {"start": 326.4, "end": 335.4, "text": " Finally just connect with the people that work at those companies on Twitter, on LinkedIn and share with them your content, don't be spammy, just share some valuable content with them"}, {"start": 335.4, "end": 345.4, "text": " and also if you think you're still lacking some knowledge be very deliberate again and ask them which skill sets do they look for and go ahead and learn exactly those things."}, {"start": 345.4, "end": 355.4, "text": " So don't be that guy that just goes ahead and learns like language X, framework Y and technology Z just because they think it's going to be useful."}, {"start": 355.4, "end": 357.4, "text": " So be very focused."}, {"start": 357.4, "end": 364.4, "text": " Okay, second question. Can you tell how to approach math in deep learning and some resources, especially for CS grads?"}, {"start": 364.4, "end": 370.4, "text": " I'll be sharing more advice on this question in the deep mind video but for now let me give you some brief tips."}, {"start": 370.4, "end": 377.4, "text": " First a small background story. So I basically had throughout my life a love and hate relationship with mathematics and that was mainly because of schooling."}, {"start": 377.4, "end": 383.4, "text": " So once I started self-studying I became a lot better at mathematics and I started loving the topics."}, {"start": 383.4, "end": 393.4, "text": " Okay, having said that I obviously have like my journey when it comes to mathematics started in elementary school, then I had high school and finally I was studying electrical engineering"}, {"start": 393.4, "end": 400.4, "text": " so I had a bunch of courses there. 
So if you have a similar background there then my best tip is to do the following thing."}, {"start": 400.4, "end": 410.4, "text": " Go through mathematics for machine learning book. Accompany that book with the three blue one browns, awesome playlists on linear algebra, calculus, etc."}, {"start": 410.4, "end": 418.4, "text": " And finally learn stuff on the fly. So as you approach, as you encounter certain problems, some mathematical formula you do not understand,"}, {"start": 418.4, "end": 427.4, "text": " go ahead and read a couple of blogs and a couple of Wikipedia pages and kind of that way you're going to learn like a lot, believe me."}, {"start": 427.4, "end": 430.4, "text": " And that's probably a mindset I picked up from my software engineering role."}, {"start": 430.4, "end": 437.4, "text": " Okay, third question, moral question. How can it be prevented that general advances in AI are being used for morally questionable purposes?"}, {"start": 437.4, "end": 442.4, "text": " Is stopping research altogether like Joseph Redman really the only consequence?"}, {"start": 442.4, "end": 448.4, "text": " So in my honest opinion there is no way to stop this. As with any technology AI can be misused."}, {"start": 448.4, "end": 452.4, "text": " So instead of burying your head in the sand let me propose a couple of alternatives."}, {"start": 452.4, "end": 458.4, "text": " Work on educating the public and work at a company that is using AI for good."}, {"start": 458.4, "end": 463.4, "text": " Because just think about it, if you just kind of exclude yourself from the whole story that's like you don't exist."}, {"start": 463.4, "end": 469.4, "text": " So if you're a good positive person and you're engaged you can only make things better."}, {"start": 469.4, "end": 473.4, "text": " Fourth question is on impact. Hopefully with a bit of contextual background from DeepMind,"}, {"start": 473.4, "end": 477.4, "text": " which area do you think AI will impact the most in the next five years?"}, {"start": 477.4, "end": 483.4, "text": " So first things first, I still haven't started working for DeepMind. I'll be starting in December so that means in a couple of months."}, {"start": 483.4, "end": 488.4, "text": " But I obviously still have an opinion of which areas are going to be promising."}, {"start": 488.4, "end": 493.4, "text": " So I cannot pick a single one. 
I can pick up like a number of areas I'm personally passionate about."}, {"start": 493.4, "end": 503.4, "text": " So that's self-supervised learning, that's causality, that's reinforcement learning, that's geometric deep learning and program synthesis and induction."}, {"start": 503.4, "end": 508.4, "text": " There is obviously a lot of overlap between those fields but yeah I'm pretty passionate about all of those."}, {"start": 508.4, "end": 512.4, "text": " The reason I haven't mentioned computer vision is because it deals more with perception."}, {"start": 512.4, "end": 516.4, "text": " Whereas the thing we are lacking currently is the cognition part so that's the reasoning part."}, {"start": 516.4, "end": 529.4, "text": " And all of these fields seem to hold certain promising ideas that can lead to us building an agent that can actually reason and be intelligent I guess."}, {"start": 529.4, "end": 534.4, "text": " As for the technical question, where do you personally struggle the most with in regards to AI?"}, {"start": 534.4, "end": 540.4, "text": " I personally, when I focus on whatever topic, I do not have problems nailing it to be honest."}, {"start": 540.4, "end": 549.4, "text": " The current problem I'm seeing is because I have a lot of context switching because I'm doing various sub-fields."}, {"start": 549.4, "end": 557.4, "text": " And so I sometimes feel like in a month I forgot, like even a paper I overviewed on my YouTube channel, I feel like I forgot many things."}, {"start": 557.4, "end": 562.4, "text": " But then like in 5 minutes, 10 minutes I can retrieve the context really quickly."}, {"start": 562.4, "end": 572.4, "text": " And I just feel a huge gap between the stuff I know before investing those 5-10 minutes in that topic."}, {"start": 572.4, "end": 578.4, "text": " Okay, sixth question. Where do you typically find the papers? I usually look at papers with code and send it to Preserver."}, {"start": 578.4, "end": 582.4, "text": " Are there more good sources that you recommend? And same question here."}, {"start": 582.4, "end": 587.4, "text": " Any tips on how to pick the relevant research papers when there are so many of them?"}, {"start": 587.4, "end": 592.4, "text": " So basically my main answer would be Twitter. Go ahead and curate your network."}, {"start": 592.4, "end": 599.4, "text": " Follow people that are passionate about the same topics you are passionate about and only follow those people."}, {"start": 599.4, "end": 605.4, "text": " So that way you'll get a full immersion. 
Also by creating this network of people around you,"}, {"start": 605.4, "end": 610.4, "text": " some of them can sometimes DM you and send you the hottest new paper."}, {"start": 610.4, "end": 617.4, "text": " And that's basically like a human recommender system, which is in my opinion still better than machine learning recommender systems."}, {"start": 617.4, "end": 621.4, "text": " So I personally do not even scroll, usually don't scroll through the Twitter feed."}, {"start": 621.4, "end": 627.4, "text": " I just open up certain profiles which I follow and I just check out what those guys and girls are posting."}, {"start": 627.4, "end": 632.4, "text": " Obviously from time to time I just go and scroll through the feed just to add some randomness to the process"}, {"start": 632.4, "end": 637.4, "text": " and maybe pick up on some cool new people I've missed out."}, {"start": 637.4, "end": 642.4, "text": " So for me personally I'm very curious and I don't have a single field I'm interested in,"}, {"start": 642.4, "end": 648.4, "text": " so I follow people from a bunch of different subfields and that's I guess also good for my YouTube channel because there is a lot of diversity."}, {"start": 648.4, "end": 655.4, "text": " Seventh question is how to read papers fast. And a simple answer would be there is no wisdom there."}, {"start": 655.4, "end": 662.4, "text": " You just have to do it. You have to read a bunch of papers and a couple of tips would maybe be read the abstract first,"}, {"start": 662.4, "end": 668.4, "text": " check out the figures, the charts, whatever the visualizations you have in the paper and read the conclusion."}, {"start": 668.4, "end": 674.4, "text": " After you have that you'll hopefully have some rough idea of what's going on in the paper, of the main ideas,"}, {"start": 674.4, "end": 679.4, "text": " and then you can do a single or a double pass to the paper or even more depending on what you're doing."}, {"start": 679.4, "end": 684.4, "text": " So for example when I was implementing some papers like Graph attention networks I think I've read it at least three or four times."}, {"start": 684.4, "end": 689.4, "text": " And then occasionally I just check out some snippets of the paper and that's normal."}, {"start": 689.4, "end": 693.4, "text": " And another tip would be be comfortable with not understanding stuff."}, {"start": 693.4, "end": 698.4, "text": " So sometimes you'll have to leave that paper behind even though you did not nail certain concepts"}, {"start": 698.4, "end": 702.4, "text": " and only two or three papers down the line will actually figure it out."}, {"start": 702.4, "end": 705.4, "text": " So just be patient and be comfortable with not knowing everything."}, {"start": 705.4, "end": 709.4, "text": " Eighth question, hi Alexa, your journey to DeepMind is very inspiring to me."}, {"start": 709.4, "end": 715.4, "text": " What's the biggest piece of advice you have for someone trying to self-study reinforcement learning?"}, {"start": 715.4, "end": 720.4, "text": " And a very short answer here, check out my blog on how to get started with reinforcement learning."}, {"start": 720.4, "end": 727.4, "text": " So I wrote those blogs such that Alexa from the past would be grateful if I have stumbled upon such a resource."}, {"start": 727.4, "end": 732.4, "text": " Ninth question, as a newcomer to AI ML field, could you please suggest the best place to start"}, {"start": 732.4, "end": 737.4, "text": " as I want to excel in computer vision and NLP fields and 
want to become a researcher."}, {"start": 737.4, "end": 740.4, "text": " Your valuable guidance in this matter will be highly appreciated."}, {"start": 740.4, "end": 746.4, "text": " So basically I created a series, a couple of blogs and a couple of videos"}, {"start": 746.4, "end": 751.4, "text": " where I basically tell you how to get started with whatever topic or with machine learning in general."}, {"start": 751.4, "end": 759.4, "text": " So I strongly suggest to check out my medium like blogs as well as two or three videos I made a while ago."}, {"start": 759.4, "end": 763.4, "text": " I'm going to link them down in the description and that's pretty much it."}, {"start": 763.4, "end": 768.4, "text": " There is not much wisdom there, just don't fall into this decision making paralysis trap"}, {"start": 768.4, "end": 773.4, "text": " and start working and be consistent with your work and you'll get there."}, {"start": 773.4, "end": 778.4, "text": " I'm trying to create the best, the optimal learning path because that's impossible."}, {"start": 778.4, "end": 783.4, "text": " Just pick one of these, I think these resources I've shared are good enough"}, {"start": 783.4, "end": 785.4, "text": " and if you just commit to them you'll nail it."}, {"start": 785.4, "end": 792.4, "text": " Tenth question, I'm planning to take a gap year between my bachelor's degree and master's degree to fill at least some gaps."}, {"start": 792.4, "end": 796.4, "text": " I don't feel like college has prepared me for the industry so just want to upskill in general."}, {"start": 796.4, "end": 802.4, "text": " I'd like to know how you approach making your own curriculum, any tips in general would be great too, thanks."}, {"start": 802.4, "end": 809.4, "text": " So basically again I think my blogs on learning how to learn and my blogs on how to get started with topic X"}, {"start": 809.4, "end": 813.4, "text": " are an excellent starting point to better understand my thought processes."}, {"start": 813.4, "end": 822.4, "text": " So on a personal note, this whole journey of mine of creating these curriculums and self-studying started at least 15 years ago."}, {"start": 822.4, "end": 829.4, "text": " So basically back in high school I was learning human languages and I had to learn those by myself"}, {"start": 829.4, "end": 836.4, "text": " and so I slowly started developing my own strategies of how to do it and I shared many of those thoughts in those blogs I mentioned."}, {"start": 836.4, "end": 843.4, "text": " So basically human languages workout, IE like physical training and working out helped me a lot,"}, {"start": 843.4, "end": 854.4, "text": " like just gained that different type of willpower and create a program, create a goal, create a program and just stick to it and kind of execute all of that."}, {"start": 854.4, "end": 863.4, "text": " So all of those skills slowly, so learning how to learn and discipline and willpower developed over a course of 15 years I guess."}, {"start": 863.4, "end": 869.4, "text": " So now even though I started in 2018 like machine learning which is pretty recent,"}, {"start": 869.4, "end": 876.4, "text": " I already had a decent background both from the technical standpoint and I also made a really strong background in learning how to learn."}, {"start": 876.4, "end": 884.4, "text": " Eleventh question, you have so many good articles, how did you acquire so much knowledge in the space in what seems like a short time?"}, {"start": 884.4, "end": 889.4, "text": " Again here at 
this point in time I think I have a really strong learning how to learn abilities."}, {"start": 889.4, "end": 899.4, "text": " So as I said through learning languages, through working out I developed all of the necessary ingredients like setting the goal, making a program,"}, {"start": 899.4, "end": 906.4, "text": " executing, so having that pure willpower as well as like a bunch of different learning like tricks I guess."}, {"start": 906.4, "end": 909.4, "text": " But also things like learning how to learn course helped a lot."}, {"start": 909.4, "end": 915.4, "text": " I definitely think that learning languages had a really really strong role in developing those necessary cognitive abilities"}, {"start": 915.4, "end": 922.4, "text": " that will later help me learn whatever topic I want, whether it be machine learning, software engineering, finance, what not."}, {"start": 922.4, "end": 928.4, "text": " And finally twelfth and final question, hi maybe you could make a video about your journey until today,"}, {"start": 928.4, "end": 931.4, "text": " what's your academic background, how you got into ML etc."}, {"start": 931.4, "end": 937.4, "text": " So I'm gonna keep it super brief here, I basically did like a bachelor in electrical engineering."}, {"start": 937.4, "end": 942.4, "text": " I dropped out of masters because as I said I'm a strong like a proponent of self education."}, {"start": 942.4, "end": 949.4, "text": " So I strongly believe that I can create a better program for myself than the best faculties out there like MIT or Stanford."}, {"start": 949.4, "end": 957.4, "text": " So in 2018 I attended this machine learning summer camp organized by Microsoft and I think that was the starting point for me pretty much."}, {"start": 957.4, "end": 961.4, "text": " That's it guys, let's go to 100,000 subscribers."}, {"start": 961.4, "end": 969.4, "text": " If you find these videos useful consider subscribing, share the videos out with your friends and join the Discord community."}, {"start": 969.4, "end": 972.4, "text": " Until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=0OevSonydgg
The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for RL | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ In this video I cover Google Brain's "The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning" paper. They introduce the "Attention Neuron" model that copes with arbitrary permutations of input observations. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2109.02869 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 A high-level overview, main ideas 08:00 Permutation invariance vs equivariance 13:00 PI implemented via the Set Transformer idea 19:20 Interactive demos (blog) 22:00 Results 26:15 Pong occlusion experiment 28:00 Representations visualized via t-SNE 29:30 Attention Neuron is robust, attention visualized 31:45 Outro, recap Credits: SmarterEveryDay video: https://www.youtube.com/watch?v=MFzDaBzBlL0 Cellular Automata video: https://www.youtube.com/watch?v=C2vgICfQawE ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #attentionneuron #permutationinvariance #googlebrain
What's cracking guys? In this video I'm covering this new paper called The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning by Yujin Tang and David Ha from Google Brain. So the idea is to create an agent, or a framework, that will be able to solve a problem like this one. So you have an RL environment here like this car racing game and the goal is to be able to play the game even though the observation was severely shuffled. So we had some certain type of permutation being applied to the input observation. So as you can see here this is our input, this is the original car racing game, and then you divide this observation into patches, like a grid of patches, and then you apply an arbitrary permutation and you get something like this, and you still want to preserve the performance that the agent had previously. So obviously if we took something like the DQN agent, that's one of the most famous and one of the first successful RL agents that were able to play these types of games, if you know how it works you know that it's going to fail miserably here, because if you learn how to play on this observation, it's using a CNN in the background remember, so it's going to completely crash here because the original spatial structure that this observation had is no longer present, so it's going to fail. If you haven't watched my DQN video do check it out, I'm going to link it somewhere here. So how they managed to achieve this is there are two components to this. So the first component is they borrow ideas from the self-organization literature, and the second idea is to add the permutation invariance, like to bake it in into the model, and that way they achieve this behavior I just explained. So let's see what they say here. Numerous studies have demonstrated that humans can adapt to changes in sensory inputs even when they are fed into the wrong channels, but difficult adaptations such as learning to see by interpreting visual information emitted from a grid of electrodes placed on one's tongue, or learning to ride a backward bicycle, require months of training to achieve mastery. So I mean just the pure fact that humans are able to do this is fairly mind-blowing. So this first thing, learning to see through the tongue, is you basically have like a camera, I guess, and that camera approximately contains the same information as what your eye sensor would receive, and so you just take the electrodes, so the information coming from that RGB camera, you place it on someone's tongue, and the neural plasticity of the brain does the rest, after obviously multiple months of training, but you can learn it: your brain can learn and your neurons can adapt to form a circuit that can understand the image even though it's coming from a different channel. So that's the tongue example, and that's something called sensory substitution.
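To make the shuffled-observation setup described above concrete, here is a minimal sketch of splitting an observation into a grid of patches and reassembling them under a fixed random permutation, the kind of scrambling under which a CNN-based agent like DQN breaks down. The patch size and the 96x96x3 CarRacing-style frame are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def shuffle_patches(obs: np.ndarray, patch: int, rng: np.random.Generator) -> np.ndarray:
    """Split an (H, W, C) observation into patch x patch tiles and reassemble
    them in a random (but fixed) order, destroying the spatial layout a CNN relies on."""
    h, w, c = obs.shape
    assert h % patch == 0 and w % patch == 0, "patch size must divide the observation"
    gh, gw = h // patch, w // patch
    tiles = obs.reshape(gh, patch, gw, patch, c).transpose(0, 2, 1, 3, 4)  # (gh, gw, p, p, C)
    tiles = tiles.reshape(gh * gw, patch, patch, c)
    tiles = tiles[rng.permutation(gh * gw)]          # the fixed shuffling of patches
    out = tiles.reshape(gh, gw, patch, patch, c).transpose(0, 2, 1, 3, 4)
    return out.reshape(h, w, c)

rng = np.random.default_rng(0)
obs = np.zeros((96, 96, 3), dtype=np.uint8)          # e.g. a CarRacing-style frame
shuffled = shuffle_patches(obs, patch=16, rng=rng)
```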
Like check out this video from SmarterEveryDay where you have this backward bike, and the idea is that the input observation is kind of permuted: that means that you try to turn the bar left but the wheel turns right, so you have this discrepancy, this shuffling that happened, and all of a sudden nobody can actually ride that bike even though you know how to ride the normal bike. And so I guess there is an analogy between that and this thing here, it's just that this is a bit more complex and you can imagine we wouldn't be able to play this without a lot of training. So check out this short clip. Where I work the welders are geniuses and they like to play jokes on the engineers. He had a challenge for me. He had built a special bicycle and he wanted me to try to ride it. He had only changed one thing: when you turn the handlebar to the left the wheel goes to the right, when you turn it to the right the wheel goes to the left. I thought this would be easy so I hopped on the bike ready to demonstrate how quickly I could conquer this. And here he is ladies and gentlemen, Mr. Destin Sandlin, first attempt riding the bicycle. All right so the faster I go the better. Yeah yeah I couldn't do it. You can see that I'm laughing but I'm actually really frustrated. In this moment I had a really deep revelation. My thinking was in a rut. This bike revealed a very deep truth to me. I had the knowledge of how to operate the bike but I did not have the understanding. Okay so now the question is can we do better and create AI systems that can rapidly adapt to sensory substitutions without the need to be retrained? Can we avoid the need to retrain the network in order to complete this task? So even humans need months to adapt to this novel task. So this paper is in a way trying to go even above the human intelligence, at least in this very particular domain. One more interesting idea I want to read out here, and then I'm gonna dig into the paper, is this. In complex systems we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information without knowing the full picture. So there are implementations of these systems, as they mention here, such as cellular automata, where basically all of the cells, all the components of the system, only have access to local information, and then they are able to somehow communicate and form this global representation, this global complex behavior. If you don't know what cellular automata are, check out this short video. So basically every single cell consults the neighbors and uses their information to update its own information, and every single cell does that for itself, and then you have this global behavior emerging, which is pretty amazing. So a similar idea was applied in this paper: basically every single one of these patches will be fed into a unique neural network, all of them will be identical, the weights will be shared, but still each single neural network will have access to only local information, so to a single patch, and we somehow need to integrate that information into a global representation which can then be fed into some policy network of an RL agent, and then the agent can learn how to play this thing.
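A minimal sketch of the shared-weights idea just described: one small module is applied independently to every patch, so each "sensory neuron" only ever sees its local input, and the per-patch outputs are what later get integrated into a single global message. The module sizes and names below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SensoryNeuron(nn.Module):
    """One shared module applied to every patch independently (weights reused for all patches)."""
    def __init__(self, patch_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(patch_dim, feat_dim), nn.Tanh())

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, patch_dim), each row is one local observation
        return self.net(patches)              # (num_patches, feat_dim)

neuron = SensoryNeuron(patch_dim=16 * 16 * 3, feat_dim=32)
patches = torch.rand(36, 16 * 16 * 3)          # 36 flattened patches
local_features = neuron(patches)               # same weights used for every patch
```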
So the second important component you want to bake in into this model is that no matter the permutation of the patches, in the case of these visual games, you want to have the same global representation, so that means we need to have some kind of permutation invariance baked in. The way this paper achieves it is using the idea from the set transformer paper. Now let me break it down and explain all of that. Okay so here it is. So the set transformer cleverly replaced Q, and again I'm gonna assume you know what QKV is, it's terminology from the original transformer paper, if you're not familiar with it just check out my transformer video. So they replaced the Q with a set of learnable seed vectors so it's no longer a function of input X, thus enabling the output to become PI or permutation invariant. So before I dig into this chart here and explain how exactly they accomplish the permutation invariance, let me quickly explain the difference between permutation equivariance and permutation invariance. So they mentioned here, using these formulas, that basically this property needs to be obeyed if you want to have permutation invariance, this is by definition what permutation invariance is all about. So you have some input X, you do some permutation over that input X, so you can treat it as a list of tokens, and then once you index it with this S, S is just a permutation of this list here, and you basically index X, which is maybe a NumPy notation, I don't know, I've never seen this notation in math textbooks, but basically what you do here is you're permuting the input X according to this list S, and once you pass that through function F you want to make sure that it's the same as passing the original input. So no matter the permutation you apply to the input you get the same output, that's the permutation invariance, and on the other hand, as you can see here, the interesting thing here is that the input space dimensionality does not have to be the same as the output space dimensionality. On the other hand, if we have permutation equivariance those two have to match, so we've got to have n both in the input space as well as in the output space. So here is the formula for permutation equivariance, as you can see here: now once we permute the input, because we have the equivariance part, the output is going to permute in the same manner. So the original transformer paper had exactly this property, and all of the transformers except for the set transformer have this property. So basically let me try and explain it really quickly. So let's say we have three input tokens, so a red token, we have a green token and we have a blue token. So we pass them into the transformer layer, so we have like a black box here, this is going to do the transformer magic, and out comes some novel representations. So I'm just gonna denote them as this double bar red, and then let me take the green one, double bar green, so that's some novel representation, and finally the blue one. So the reason why transformers are permutation equivariant is because, if we imagine I'm just gonna permute this input now like this, so we're gonna have red here, we're gonna have green here, and we're gonna have blue remain in the same position, so what is now going to happen is that we're just going to permute the output the same way as we permuted the input. So that's what this formula here is telling us.
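Since the on-screen formulas being referenced are not visible in the transcript, here is a reconstruction of the two properties in standard notation (this is written from the verbal description, not quoted from the paper): s is a permutation of the input indices and x[s] is the reordered input, in NumPy-style indexing.

```latex
% Reconstruction of the two properties discussed above, for a permutation s of the indices.
\begin{align*}
  \text{permutation invariance:}   &\quad f(x[s]) = f(x)     && \forall\, s
    \quad \text{(output count may differ from input count)} \\
  \text{permutation equivariance:} &\quad h(x[s]) = h(x)[s]  && \forall\, s
    \quad \text{(output tokens must match input tokens in number)}
\end{align*}
```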
Basically, that's what transformers are going to do by default. So we're going to have the following: green will be here, red will be here, and blue will be here. Okay, so let me make the permutation explicit. The original ordering of the inputs is one, two, three; what we did is we put two first, because two corresponds to green, then we had one, then we had three. So that is the permutation we applied to the input. Now, what this formula tells us is that passing this permuted input through H (and the transformer here plays the role of H), so passing this novel permutation, this thing here, should be the same as passing the original input, this one here, and then applying the same permutation to its output. So we take the second output and put it in the first place, we take the first output and put it in the second place, we take the third output and leave it where it is, and as you can see we get exactly this output here. Hopefully this was clear, maybe a bit confusing, but that is permutation equivariance. Now, what we're trying to accomplish instead is that, rather than having the outputs permuted the same way as the inputs, we want them to remain fixed. That means, for example, we want this type of arrangement no matter how we permute the input. In this case we'd have three-factorial possible permutations of the input, and we'd always get this same output; that would be permutation invariance. One more important difference between permutation equivariance and invariance is that in the case of equivariance, the number of input tokens (I'm going to use the terminology from transformers) has to be the same as the number of output tokens, whereas with permutation invariance you can have an arbitrary number of input tokens, like n, and still keep the output at, I don't know, five, whatever. That's another important detail, and we'll soon see why it matters: basically, we can omit some of the input patches and the agent will still be able to play the game. Nice, now let me try to explain how the set transformer idea comes in here. These are the input tokens; in the case of the image, each of these is just a single patch, so this o_t means the observation at time t, and we are taking patch number one, and we are feeding these patches into the sensory neurons. As I said, these are going to be shared, that's the idea borrowed from the self-organization literature: the network weights are going to be shared across every single one of the sensory neurons, and as you can see, every sensory neuron will have as input just the local information, so in the case of images just the couple of pixels in that single patch.
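Here is a minimal sketch of my own (not the authors' code, and with made-up sizes) of how an observation could be cut into patches and processed by one shared "sensory neuron" network. In the actual paper each sensory neuron is an LSTM and also sees the previous action; I'm simplifying it to a single shared linear layer just to show the weight sharing.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_into_patches(obs, patch=6):
    """Cut an HxW observation into non-overlapping flattened patches."""
    h, w = obs.shape
    patches = [obs[i:i + patch, j:j + patch].ravel()
               for i in range(0, h, patch)
               for j in range(0, w, patch)]
    return np.stack(patches)                 # (num_patches, patch*patch)

# One shared "sensory neuron": the SAME weights are applied to every patch.
W = rng.normal(size=(36, 16)) * 0.1          # 6*6 pixels -> 16-d feature (sizes are arbitrary)

def sensory_neuron(patch_batch):
    return np.tanh(patch_batch @ W)          # (num_patches, 16)

obs = rng.random((96, 96))                   # a fake 96x96 grayscale frame
features = sensory_neuron(split_into_patches(obs))
print(features.shape)                        # (256, 16): one local feature per patch
```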
So what will happen is: all of them are going to generate the keys and the value vectors, that's your usual transformer thing, and then, instead of generating query vectors as well, we decouple that part from the input, as you can see. That means this Q matrix is just going to be a fixed, learnable embedding table, you can think of it that way. Imagine we have just three vectors here; that means, if you know how transformers work, that we're always going to have three vectors at the output, so this message will have a constant format of three vectors, no matter the number of inputs, no matter the number of patches we feed in, because that's dictated by the Q matrix. Let me now try and explain why this thing is permutation invariant. If we take this particular query vector, this one here, and do a dot product with all of the keys, we'll end up with coefficients alpha 1, alpha 2, all the way through alpha n, and we're going to have value vectors v1, v2, all the way through vn. As you recall, we just multiply the attention coefficients with the value vectors, add them up, and form the message. Because of that addition, that's the part that gives us the permutation invariance. Now imagine I randomly permute the input observation: what will happen is that the key vectors will be permuted accordingly, as well as the value vectors. Then, after we do the dot product between the query and the keys in their new order, we'll have something like alpha 2, alpha 1, all the way through alpha n, and because the value vectors are permuted in the same manner, we'll have v2, v1, all the way through vn. And again, because we're just multiplying these pairwise and adding them up, we're going to end up with the same result, and that's why we have permutation invariance. So by design we bake this permutation invariance property into the neural network, and we do not have to learn how to handle different permutations of the inputs. On a high level, what happens with all of these agents (and they tested them on four different games) is: we input the observation, we input the previous action, we pass them through this attention neuron, which is this whole pipeline we just saw, we form the message, we have some policy function, and we output the next action that we will use to interact with the environment. Having explained how this thing works, I'm just going to make a small remark: these here are the sensory neurons, and the whole thing is called the attention neuron. Going back to the title, the title was "The Sensory Neuron as a Transformer", and I think that part should be "attention neuron", I guess, because that's where the transformer-like attention happens, and not on the sensory neuron level. Anyway, a super small nit, and I may be off here; let's get back to the paper.
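Before moving on, here is a numerical sanity check of the invariance argument above. This is my own sketch with made-up sizes and a plain softmax attention; the paper's exact parameterization may differ, but the mechanism is the same: queries are fixed learnable seeds, and the order-independent weighted sum over patches makes the message permutation invariant.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_k, d_v, num_queries = 16, 8, 8, 3              # made-up sizes
Wk = rng.normal(size=(d_in, d_k)) * 0.1                 # shared key projection
Wv = rng.normal(size=(d_in, d_v)) * 0.1                 # shared value projection
Q = rng.normal(size=(num_queries, d_k))                 # fixed learnable "seed" queries

def attention_message(features):
    """Pool a variable number of per-patch features into a fixed-size message."""
    K = features @ Wk                                    # one key per patch
    V = features @ Wv                                    # one value per patch
    scores = Q @ K.T / np.sqrt(d_k)
    alphas = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over patches
    return alphas @ V                                    # (num_queries, d_v)

features = rng.normal(size=(256, d_in))                  # e.g. outputs of the shared sensory neuron
msg = attention_message(features)
msg_shuffled = attention_message(features[rng.permutation(256)])
print(np.allclose(msg, msg_shuffled))                    # True: same message after shuffling patches
```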
Okay, a quick note here: everything I just explained visually is captured here algebraically. Basically, every one of these sensory neurons is, from the observation, going to create its own key and its own value, and as you can see the inputs are a bit different: the key function accepts the observation, so the i-th component, as well as the previous action, whereas the value function only accepts the observation. I guess the reason for that is that you do not want to bias the current action with the previous one, because remember how this works: the message is just going to contain a weighted sum of value vectors, so these value vectors will not have the previous action incorporated into them, which means we won't be biasing the current action at this time step. I may be completely wrong here, but I think that's the reason they omitted the last action from the value function. Finally, this is just the transformer formalism; the important thing to notice is that the Q does not have any input, and that's the way we achieve the permutation invariance. The second thing is that they decoupled the Q matrix from the W_Q matrix, which is not usually done, because once you apply W_Q you actually get the queries (and similarly for the keys and values); here they decouple this just to make the formalism a bit clearer, I guess. But again, if you understood what I explained here, you don't need to worry about the formulas. Okay, a couple more things. They have four experiments: they have these non-vision continuous control tasks, like the cartpole swing-up, and they also have vision-based tasks, like the Pong game and the car racing game. I'm going to switch to the blog and show you the animations, and then I'm going to return to the paper and show you the results they got. Okay, here we are. We have this cartpole problem; it's a very famous toy problem in the RL field. You're trying to make sure that this cart is at the x equals zero position along the x-axis, and that the theta angle equals zero, which means that the pole is vertical. So, as you can see here, these are the observations: we'll have five sensory neurons, one will be processing, at this point, this theta dot, which is the angular velocity, then we have cosine and sine of theta, we have the x velocity, and we have the x position. Because of the way this agent is implemented, and that's using an LSTM for the sensory neurons, once I click "shuffle observations" it will take some time for the hidden state of the LSTM to recover and understand which input it is parsing, so we'll see a small transition once I hit this, and then the agent will still manage to recover the performance and balance the pole. So let me click shuffle observations. Okay, nothing happened, there was just some small glitch there, let me hit it again. This time it failed, but it's going to manage to balance it out again. So that's one example of the non-visual tasks they were using in their experiments. This is the second experiment they've done, and that's using the Pong game, and as you can see here, aside from shuffling, they also mask certain patches out; you can imagine this would be super hard for a human to play, yet the agent actually manages to play it really well.
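As a side note on what "shuffle observations" means in the cartpole demo above, here is a small sketch of my own (not the authors' code): one fixed random permutation is drawn and applied to the observation vector before the agent ever sees it, and the agent is never told the mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cartpole-swing-up style observation: [x, x_dot, cos(theta), sin(theta), theta_dot]
obs = np.array([0.1, -0.5, 0.99, 0.05, 1.3])

perm = rng.permutation(len(obs))   # one fixed permutation, kept until the next "shuffle" click
shuffled_obs = obs[perm]           # this is what the agent actually receives from now on

print(perm)                        # the agent is never told this mapping
print(shuffled_obs)
```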
Masking out the patches is possible because of the m versus n property: the input and output spaces do not have to be the same size, which means we can change the number of components of the observation at the input and still get the same message format at the output, and the policy will be able to handle it. That's why we can handle these masked patches. Aside from that, we have the car racing game, and they showed that their agent can actually handle different backgrounds, even though the agent never saw those backgrounds during training. The reason it can achieve that is because the representation this model constructs basically abstracts out the background as unimportant information for playing this game, but we'll get back to those details a bit later; this is just for you to have a clear picture of the tasks they were using in their experiments. Now, quickly getting back to the paper, let's digest the results they got. This is the cartpole as a static image, and again, kudos to the authors for making these interactive demos in their blog; I think that's the way forward, because PDFs are such an obsolete format in my honest opinion. Okay, so let's see the results they achieve. They have two baselines: the first is a fully connected neural network with five observations, the second is a fully connected neural network with ten observations. These five are, recall, the x position, the velocity, the theta, the angular velocity, etc. The baselines just take the five observations, feed them into a fully connected layer, you get an intermediate hidden representation, you pass it through another fully connected layer, and out comes the output. And "ours" is just: instead of using a feed-forward network here, we're going to use this attention neuron we just developed, so there is going to be an attention neuron component here. So these are the models being compared, and as you can see, on the five observations it's lagging a little bit behind the baseline, and obviously we have no entry here, because we cannot plug five observations into a neural network which is expecting ten observations. That's another con of these baselines: they are not flexible, they are not adaptable to a varying number of inputs. The reason there is a small lag here is because they're using an LSTM, so it takes some time to build up an understanding of which input that particular sensory neuron is parsing. It makes me wonder whether we could just use a fully connected network instead of the LSTMs and achieve better results here; I'm not sure I saw an ablation like that in the paper, and I guess it should work, since this thing is working already, I don't see why it wouldn't. But anyway, the point here is that once we shuffle the observations, the baseline completely fails, and the attention neuron model doesn't have any problems dealing with shuffled inputs. Then, expanding to ten observations, which proves the point that this model can handle a varying number of inputs, you can see that the result still holds; here we don't have an entry because this is, as I said, the five-observation network, and we can only test the ten-observation fully connected network, and again ours is lagging a little bit behind it.
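To make the baselines' main limitation obvious, here is a minimal sketch of my own (arbitrary sizes, not the paper's actual architecture) of that kind of fully connected policy: its first layer is hard-wired to a fixed number and a fixed ordering of inputs, so it can neither take ten observations nor survive a shuffle.

```python
import numpy as np

rng = np.random.default_rng(2)

# A two-layer fully connected baseline for the 5-observation cartpole.
W1, b1 = rng.normal(size=(5, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.1, np.zeros(1)

def fc_policy(obs):
    h = np.tanh(obs @ W1 + b1)
    return np.tanh(h @ W2 + b2)                 # a single continuous action

obs5 = rng.normal(size=5)
print(fc_policy(obs5))                          # works
print(fc_policy(obs5[rng.permutation(5)]))      # generally a different answer: not permutation invariant
# fc_policy(rng.normal(size=10))                # would crash: shapes no longer match W1
```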
Finally, the interesting column for me is this one, because instead of duplicating the five input observations, this time they add random Gaussian noise, and the query vectors somehow learned to ignore that information and only focus on the actual signals. Basically, if I scroll back to this chart, that means these query vectors are learned such that, even though half of these observations are pure noise, the model learns how to ignore them; I guess the corresponding attention coefficients will be super small, so the contribution of the value vectors emitted from those noise observations will be small, and the message will be constructed mostly from the useful signal parts of the observation and not the noise. That's a fairly intriguing property. On this diagram they just visually show that the message, the global representation, is the same. In the case of cartpole, this m_t vector, let me scroll back here, this m_t is just a 16-dimensional vector, and they plot it here: you pick one time step and this is the m_t at that time step. As you can see, these two plots are the same, which is nice, because here we don't have the observation shuffling and here we do, and the message remains the same; this is just a visual representation of the permutation invariance property. Okay, I already showed you the demos, and they did a bit more, they were very thorough with these experiments. What they did is this training occlusion ratio: they varied it from 0 to 0.9, where 0 means you don't have any occlusion, so all of the patches actually contain the signal, none of them are blacked out, and 0.9 means 90% of the patches are blacked out. So, training the agent across that range and then testing it on various ratios of occlusion, you can see an interesting pattern emerging. We definitely see the best results for 50%. As a quick reminder, for the game of Pong, 21 is the top score you can achieve: you have two paddles, you're playing the game, once the ball gets past your opponent that's plus one for you, and you play until 21. Because these are averages, 21 means you're pretty much constantly winning the game. As you can see, until we get to approximately 50% occlusion at test time we are performing really well, and then it drops; it seems the agents struggle if there is too much occlusion during training. But this one, they mention this example here, is interesting: when you train with 80% occlusion and then show all of the patches at test time, you can see that it's still on par with the opponent, meaning that on average it's even a bit better than the opponent when you have all of the patches, which means it can leverage the additional information and generalize to it, even though it did not have it during training.
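Regarding the occlusion experiment described above, here is a small sketch of my own of what blacking out a given ratio of the patches could look like (whether the paper zeroes the patches out or drops them entirely is something I'm not certain of; the attention-neuron side can cope with either):

```python
import numpy as np

rng = np.random.default_rng(3)

def occlude_patches(patches, occlusion_ratio):
    """Black out a random subset of patches (set them to zero)."""
    num_patches = patches.shape[0]
    num_masked = int(round(occlusion_ratio * num_patches))
    masked_idx = rng.choice(num_patches, size=num_masked, replace=False)
    out = patches.copy()
    out[masked_idx] = 0.0
    return out

patches = rng.random((256, 36))             # e.g. 256 flattened 6x6 patches
occluded = occlude_patches(patches, 0.5)    # 50% of patches blacked out
print((occluded.sum(axis=1) == 0).mean())   # 0.5 of the patches are now all-zero
```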
Okay, they also analyze the representations they get from this attention neuron. What they do is take the m_t vectors, apply t-SNE, and project them to a two-dimensional space. If we take outputs which are close to each other, i.e. m_t's that are projected nearby in the 2D space, we can see that the corresponding inputs that led to those m_t's are fairly similar. Here you can see that this, I guess it's called the paddle, is all the way down in all of these inputs. Basically this whole row is a single input because, and here it would be helpful if you knew how DQN works, especially when dealing with these Atari games, what you do is stack four consecutive frames to avoid perceptual aliasing, whereby, from a single image, you cannot tell whether the ball is going in this direction or that direction, so you do not know the state; once you concatenate a couple of frames, you have less ambiguity, let's put it that way, in your input. So you can see that the inputs are visually and semantically similar, and they are grouped into the same part of this 2D space. You can imagine that once you have this type of representation, the agent can learn how to make good decisions and play the game of Pong. Okay, I mentioned this, I showed you this in the blog: the cool thing is that this model, the attention neuron, manages to cope with different types of backgrounds. They tried four different backgrounds here, plus the shuffling, and they showed that the agent can still play the game, because it's ignoring that irrelevant information, which is the background in this case. And they plotted the attention on this diagram. How they construct these representations is: remember, these are patches, this is going to be your input, you pass it to the sensory neuron and it emits the key, and then you take your query and do a dot product with all the keys, and you can see that these patches here get the most attention, which is basically the curve, the turn. I mean, as humans we can notice that these are very important, because once you take the features that correspond to these patches and combine them into the message, you can use that novel global representation to make a good decision, and that's to turn left in this case. Similarly here, even though they swapped the background, so the original background is this green, grass-like thing, once you swap it with this random background the model still learns to pay attention to the relevant parts of the image, and so it knows when to turn. I think that's pretty much it. There is one more interesting thing here: they not only shuffle the input observation, they do that dynamically during the game, so every, for example, t equals 25 steps they reshuffle the input observation, and they vary that from 25, 50, all the way to 500 and no reshuffle, and we can see the results here are pretty nice. Looking at car racing, the performance drops somewhat, same as for cartpole, but for different reasons: there we have the LSTM, so it takes some time to update the hidden state, whereas here they use a different strategy, a difference between neighboring frames, so once you do a shuffle, that difference is going to introduce an error into the model and it's going to struggle until it stabilizes again. So there are a bunch of details, they did very nice and thorough experiments across different games, and you can check out the blog yourself and the interactive demos.
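Circling back to the t-SNE analysis mentioned above, here is a minimal sketch of my own of how you could project collected message vectors to 2D for exactly this kind of visualization (the data here is random, just to show the shapes; the paper's messages would come from actual rollouts):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)

# Pretend these are the 16-d messages m_t collected over many time steps of play.
messages = rng.normal(size=(1000, 16))

# Project them to 2D; nearby points should correspond to semantically similar inputs.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(messages)
print(embedding.shape)   # (1000, 2) -- one 2D point per collected message
```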
But basically, the main ideas are to have this type of weight sharing between the sensory neurons, and to add the set transformer idea so that we have permutation invariance, so that we can shuffle the observation and still preserve the same m_t, upon which we are building the policy network. I haven't been focusing on the RL component, obviously, because that's just not the core insight behind this paper; they were also using evolution strategies algorithms to train the agents, but the main point was to test whether the agents can be made more robust to permutations, to random shuffling of the input observations. So hopefully you found this video useful. If you did, share it with a friend, subscribe to this channel, and join the Discord community; there are a lot of cool conversations happening there, people are helping each other out, so do join it, and until next time, bye bye.
[{"start": 0.0, "end": 3.2800000000000002, "text": " What's cracking guys? In this video I'm covering this new paper called"}, {"start": 3.2800000000000002, "end": 8.32, "text": " the sensory neuron as a transformer permutation invariant neural networks for reinforcement"}, {"start": 8.32, "end": 15.36, "text": " learning by Jujing Tang and David Ha from Google Brain. So the idea is to create an agent"}, {"start": 15.36, "end": 20.64, "text": " that will be able or a framework that will be able to solve a problem like this one. So you have a"}, {"start": 20.64, "end": 28.64, "text": " RL like environment here like this car racing game and the goal is to be able to to play the game"}, {"start": 28.64, "end": 34.4, "text": " even though the observation was severely shuffled. So we had some certain type of permutation being"}, {"start": 34.4, "end": 40.32, "text": " applied to the input observation. So as you can see here this is our input, this is your the original"}, {"start": 40.32, "end": 46.96, "text": " car racing game and then you divide this like observation into patches like a grid of patches"}, {"start": 46.96, "end": 51.92, "text": " and then you do arbitrary permutation and you get something like this and you still want to preserve"}, {"start": 51.92, "end": 57.120000000000005, "text": " the the performance that the agent had previously. So obviously if we took something like the DQN"}, {"start": 57.12, "end": 63.28, "text": " agent that's one of the most famous and one of the first successful like RL agents that were"}, {"start": 63.28, "end": 68.32, "text": " able to play these types of games, if you know how it works you know that it's going to fail"}, {"start": 68.32, "end": 74.8, "text": " miserably here because if you learn how to play on this observation it's using CNN in the background"}, {"start": 74.8, "end": 81.6, "text": " remember and so it's going to completely crash here because the like initial structure the original"}, {"start": 81.6, "end": 87.52, "text": " structure the spatial structure that this observation had is not is no longer present here so it's going"}, {"start": 87.52, "end": 91.83999999999999, "text": " to fail. If you haven't watched my DQN video do check it out I'm going to link it somewhere here."}, {"start": 92.72, "end": 98.47999999999999, "text": " So how they managed to achieve this is there are two components to this. So the first component is"}, {"start": 98.47999999999999, "end": 105.36, "text": " this they borrow ideas from this self-organization literature and the second idea is to add the"}, {"start": 105.36, "end": 112.16, "text": " permutation invariance like to bake it in into the model and that way they achieve that this"}, {"start": 112.16, "end": 118.96, "text": " this behavior I just explained. So let's see what I see here. Numerous studies have demonstrated"}, {"start": 118.96, "end": 124.72, "text": " that humans can adapt to changes in sensory inputs even when they are fed into the wrong channels"}, {"start": 124.72, "end": 130.56, "text": " but difficult adaptations such as learning to see by interpreting visual information emitted from a"}, {"start": 130.56, "end": 136.64000000000001, "text": " grid of electrodes placed on one's tongue or learning to ride a backward bicycle require months"}, {"start": 136.64000000000001, "end": 143.6, "text": " of training to achieve mastery. So I mean just the pure fact that humans are able to do this is fairly"}, {"start": 143.6, "end": 149.04, "text": " mind-blowing. 
So this first thing learning to see through the tongue is you basically have like a"}, {"start": 149.04, "end": 154.64000000000001, "text": " camera I guess and that camera approximately contains the same information as what your eye"}, {"start": 154.64000000000001, "end": 158.64000000000001, "text": " sensor would receive and so you just take the electrode so the information coming from that"}, {"start": 158.64, "end": 165.2, "text": " RGB camera you place it on someone's tongue and the like neural plasticity of the brain does the"}, {"start": 165.2, "end": 171.76, "text": " rest after obviously long like multiple months of training but you can learn how your brain can"}, {"start": 171.76, "end": 177.92, "text": " learn and your neurons can adapt like to form a circuit that can understand the image even though"}, {"start": 177.92, "end": 181.92, "text": " it's coming from a different channel so that's like tongue so that's something called sensory"}, {"start": 181.92, "end": 186.88, "text": " substitution. Like check out this video from getting smarter every day where you have this"}, {"start": 186.88, "end": 191.35999999999999, "text": " backward bike and the goal that the idea is that the input observation is kind of permuted that"}, {"start": 191.35999999999999, "end": 196.48, "text": " means that you try to turn the bar left but the wheel turns right so you have this discrepancy"}, {"start": 196.48, "end": 201.44, "text": " this this shuffling that happened and all of a sudden nobody can actually ride that bike even"}, {"start": 201.44, "end": 205.76, "text": " though you know how to to ride the normal bike and so I guess there is an analogy between that and"}, {"start": 205.76, "end": 211.04, "text": " this thing here it's just that this is a bit more complex and you can imagine we couldn't be able"}, {"start": 211.04, "end": 215.84, "text": " to play this without a severe like a lot of training. So check out this short clip. Where I"}, {"start": 215.84, "end": 220.32, "text": " work the welders are geniuses and they like to play jokes on the engineers. He had a challenge"}, {"start": 220.32, "end": 224.88, "text": " for me. He had built a special bicycle and he wanted me to try to ride it. He had only changed"}, {"start": 224.88, "end": 229.84, "text": " one thing when he turned the handlebar to the left the wheel goes to the right when you turn it to the"}, {"start": 229.84, "end": 234.08, "text": " right the wheel goes to the left. I thought this would be easy so I hopped on the bike ready to"}, {"start": 234.08, "end": 238.56, "text": " demonstrate how quickly I could conquer this. And here he is ladies and gentlemen Mr. Justin Sandlin"}, {"start": 239.2, "end": 241.2, "text": " first attempt riding the bicycle."}, {"start": 241.2, "end": 246.72, "text": " All right so the faster I go the better."}, {"start": 248.72, "end": 254.32, "text": " Yeah yeah I couldn't do it. You can see that I'm laughing but I'm actually really frustrated."}, {"start": 254.32, "end": 260.8, "text": " In this moment I had a really deep revelation. My thinking was in a rut. This bike revealed a"}, {"start": 260.8, "end": 266.32, "text": " very deep truth to me. I had the knowledge of how to operate the bike but I did not have the"}, {"start": 266.32, "end": 272.15999999999997, "text": " understanding. Okay so now the question is can we do better and create AI systems that can rapidly"}, {"start": 272.15999999999997, "end": 278.96, "text": " adapt to sensory substitutions without the need to be retrained. 
Can we avoid the need to"}, {"start": 278.96, "end": 284.24, "text": " retrain the network in order to complete this task. So even humans need months to adapt to this"}, {"start": 284.24, "end": 290.32, "text": " novel task. So this paper is in a way trying to go even above the human intelligence at least in"}, {"start": 290.32, "end": 296.64, "text": " this very particular domain. One more interesting idea I want to like read out here and then"}, {"start": 296.64, "end": 302.56, "text": " I'm gonna dig into the paper is this. In complex systems we often observe complex global behavior"}, {"start": 302.56, "end": 307.12, "text": " emerge from a collection of agents interacting with each other in their environment with each"}, {"start": 307.12, "end": 312.96, "text": " individual agent acting only on locally available information without knowing the full picture."}, {"start": 312.96, "end": 318.56, "text": " So there are systems such as they mentioned here an implementation of these systems such as cellular"}, {"start": 318.56, "end": 324.16, "text": " automata where basically all of the cells all the components of the system only have access to a"}, {"start": 324.16, "end": 329.76, "text": " local information and then they are able to somehow communicate and form this global representation"}, {"start": 329.76, "end": 349.59999999999997, "text": " this global complex behavior. If you don't know what cellular automata is check out this short video."}, {"start": 359.76, "end": 384.24, "text": " So basically every single cell consults the neighbors and uses their information to update"}, {"start": 384.24, "end": 390.48, "text": " its own information and every single cell does that for itself and then you have this global"}, {"start": 390.48, "end": 396.96000000000004, "text": " like behavior emerging which is pretty amazing. So a similar idea was applied in this paper basically"}, {"start": 396.96000000000004, "end": 404.16, "text": " every single one of these patches will be fed into a unique neural network all of them will be"}, {"start": 404.16, "end": 410.08, "text": " identical the ways will be shared but like still we have like a single neural network will have"}, {"start": 410.08, "end": 415.52, "text": " access to only like a local information so for to a single patch and we somehow need to integrate"}, {"start": 415.52, "end": 420.32, "text": " that information into a global representation which can then be fed into some policy network"}, {"start": 420.32, "end": 425.12, "text": " of an RL agent and then the agent can learn how to play this thing. So the second important component"}, {"start": 425.12, "end": 429.36, "text": " you want to bake in into this model is that no matter the permutation of the patches in the case"}, {"start": 429.36, "end": 434.96, "text": " of these visual games you want to have the same global representation so that means we need to"}, {"start": 434.96, "end": 440.23999999999995, "text": " have some kind of permutation invariance baked in. The way this paper achieves it is using some the"}, {"start": 440.23999999999995, "end": 445.12, "text": " idea from the set transformer paper. Now let me let me break it down and explain all of that."}, {"start": 445.68, "end": 452.32, "text": " Okay so here it is. 
So the set transformer cleverly replaced Q and again I'm gonna assume you know what"}, {"start": 452.88, "end": 458.32, "text": " QKV is it's a like a terminology from the original transformer paper if you're not familiar with it"}, {"start": 458.32, "end": 463.84, "text": " just check out my transformer video. So they replaced the Q with a set of learnable seed"}, {"start": 463.84, "end": 469.91999999999996, "text": " vectors so it's no longer a function of input X thus enabling the output to become PI or permutation"}, {"start": 469.91999999999996, "end": 477.52, "text": " invariant. So before I dig into this chart here and explain how exactly did they accomplish"}, {"start": 478.08, "end": 481.2, "text": " like the permutation invariance let me quickly explain the difference between"}, {"start": 482.08, "end": 488.23999999999995, "text": " permutation equivariance and permutation invariance. So they mentioned here using these"}, {"start": 488.24, "end": 494.32, "text": " like formulas basically this property needs to be abated if you want to have this is by definition"}, {"start": 494.32, "end": 499.84000000000003, "text": " what permutation invariance is all about. So you have some input X you do some permutation over"}, {"start": 499.84000000000003, "end": 505.28000000000003, "text": " that input X so that's you can treat it as a list of tokens and then once you index it with this S"}, {"start": 505.28000000000003, "end": 511.92, "text": " S is just a permutation of this list here and you basically index X which means which is a maybe a"}, {"start": 511.92, "end": 517.52, "text": " non-Pi notation I don't know if this is like I've never seen this notation in math notebooks but"}, {"start": 517.52, "end": 525.92, "text": " basically what you do here is you're permuting the input X according to this like list S and once you"}, {"start": 525.92, "end": 531.68, "text": " pass that through function F you want to make sure that it's the same as the original like as passing"}, {"start": 531.68, "end": 536.96, "text": " the original input. So no matter the permutation you apply to the input you get the same output"}, {"start": 536.96, "end": 542.48, "text": " that's the permutation invariance and on the other hand we have and as you can see here the interesting"}, {"start": 542.48, "end": 547.6800000000001, "text": " thing here is that the input space dimensionality does not have to be the same as the output space"}, {"start": 547.6800000000001, "end": 553.28, "text": " dimensionality. On the other hand if we have permutation actually variance the those two have"}, {"start": 553.28, "end": 558.5600000000001, "text": " to match so we we gotta have and both in the input space as well as in the output space. So here is"}, {"start": 558.5600000000001, "end": 564.16, "text": " the formula for for permutation actually variance as you can see here so now once we permute the"}, {"start": 564.16, "end": 569.9200000000001, "text": " input what happens is because we have the equivalent part basically the output is going to permute in"}, {"start": 569.92, "end": 575.36, "text": " the same manner. So the original transformer paper had exactly this property and all of the"}, {"start": 575.36, "end": 580.24, "text": " transformers except for the set transformer have this property. So basically let me try and explain"}, {"start": 580.24, "end": 588.56, "text": " it really quickly. 
So let's say we have three input tokens so a red token we have a green token and"}, {"start": 588.56, "end": 595.12, "text": " we have a blue token. So we pass them into the transformer layer so we have like a black box here"}, {"start": 595.12, "end": 601.84, "text": " this is going to do the transformer magic and out comes some novel representations. So I'll just"}, {"start": 601.84, "end": 608.88, "text": " gonna denote them as this double bar red and then so let me take the green one double bar green so"}, {"start": 608.88, "end": 615.76, "text": " that's some novel representation and finally the blue one. So the the reason why transformers are"}, {"start": 615.76, "end": 620.88, "text": " permutation equivalent is because if we imagine I'm just gonna permute this input now like this."}, {"start": 620.88, "end": 627.6, "text": " So we're gonna have red here we're gonna have green here and we're gonna have blue remain on"}, {"start": 627.6, "end": 633.12, "text": " the same position. So what is now going to happen is that we're just going to permute the output the"}, {"start": 633.12, "end": 638.4, "text": " same way as we permuted the input. So that's what this formula here is telling us. Basically that's"}, {"start": 638.4, "end": 641.76, "text": " what transformers are going to do by default. So we're gonna have the following thing we're gonna"}, {"start": 641.76, "end": 650.64, "text": " have green will be here red will be here and blue will be here. Okay so let me make the permutation"}, {"start": 650.64, "end": 656.0, "text": " explicit. So we have we have this is the original like ordering of the inputs one two three what we"}, {"start": 656.0, "end": 661.92, "text": " did here is we put two here because two corresponds to green one then we had one then we had three. So"}, {"start": 661.92, "end": 668.16, "text": " this is the permutation we applied to the input. So now what this formula tells us is that passing"}, {"start": 668.16, "end": 675.52, "text": " this permuted input through H and this is H in our case so I just you know the T, T's are H. So passing"}, {"start": 675.52, "end": 682.96, "text": " this in the input so this novel permutation so that means this thing here should be the same as"}, {"start": 682.96, "end": 688.8, "text": " passing as passing the original input so that means this one here passing it as the so the output of"}, {"start": 688.8, "end": 694.24, "text": " that thing is going to be this thing and then we apply the the permutation so that means we"}, {"start": 694.24, "end": 698.3199999999999, "text": " permute this thing like this. So we're going to take the the second output which is this and put"}, {"start": 698.3199999999999, "end": 702.88, "text": " it in the first place we're gonna take the first output and put it in the second place"}, {"start": 702.88, "end": 708.72, "text": " we're gonna take the third output and just leave it where it is and as you can see we get exactly"}, {"start": 708.72, "end": 714.72, "text": " this output here. So hopefully this was clear maybe a bit confusing but yeah this is the this is the"}, {"start": 714.72, "end": 720.64, "text": " permutation equivariance. So now what we're trying to accomplish is instead of having this the outputs"}, {"start": 721.36, "end": 726.48, "text": " being permuted the same way as the inputs we want to have them remain fixed. So that means for example"}, {"start": 726.48, "end": 733.28, "text": " we want to have this type of arrangement no matter how we permute the input. 
In this case we'll have"}, {"start": 733.28, "end": 738.8000000000001, "text": " like three factorial type like possibilities to permute the input and we'll always have this"}, {"start": 738.8000000000001, "end": 743.9200000000001, "text": " output so that will be the permutation invariance. Now one more important detail between permutation"}, {"start": 743.9200000000001, "end": 750.48, "text": " equivariance and invariance is that in the case of equivariance the like the number of input tokens"}, {"start": 750.48, "end": 754.8000000000001, "text": " and I'm going to use the terminology from transformers has to be the same as the number"}, {"start": 754.8, "end": 759.8399999999999, "text": " of output tokens. Whereas once you have the permutation invariance you can have for example"}, {"start": 759.8399999999999, "end": 764.8, "text": " arbitrary number of tokens here like n and you can still keep this maybe I don't know like five"}, {"start": 764.8, "end": 770.24, "text": " whatever. So that's another important detail and we'll soon see why that is important basically"}, {"start": 771.4399999999999, "end": 778.24, "text": " we can some of the patches of the input patches we can kind of omit them and the agent will still"}, {"start": 778.24, "end": 785.04, "text": " be able to play the game. Nice now let me try to explain how the set transformer idea comes along"}, {"start": 785.04, "end": 793.28, "text": " here. So this is the input token these are the input tokens so we have in the case of the image"}, {"start": 793.28, "end": 800.72, "text": " all of these will be just like a single patch so this ot means observation at time t and we are"}, {"start": 800.72, "end": 806.88, "text": " taking patch number one and we are feeding them into these sensory neurons. So as I said these are"}, {"start": 806.88, "end": 811.2, "text": " going to be shared so that's the idea borrowed from the self-organization literature you basically"}, {"start": 811.2, "end": 817.28, "text": " have like this network weights are going to be shared across every single one of sensory neurons"}, {"start": 817.28, "end": 822.72, "text": " and basically as you can see every sensory neuron will have as an input just the local information"}, {"start": 822.72, "end": 828.8, "text": " so just in the case of images just a couple of pixels in that single patch. 
So what will happen"}, {"start": 828.8, "end": 835.92, "text": " is all of them are going to generate the keys and the value vectors so that's your usual transformer"}, {"start": 835.92, "end": 842.3199999999999, "text": " thing and then instead of generating query vectors as well we decouple that part we decouple this part"}, {"start": 842.3199999999999, "end": 847.52, "text": " as you can see from the input so that means that this Q matrix is just going to be like a fixed"}, {"start": 847.52, "end": 852.56, "text": " embedding like a like an embedding table you can think of it this way like imagine we have just"}, {"start": 852.56, "end": 858.4, "text": " three vectors here okay and that means that if you know how transformers work that means that"}, {"start": 858.4, "end": 864.4, "text": " we're always going to have three vectors at the output here so this message will have constant"}, {"start": 864.4, "end": 870.16, "text": " format of three vectors like this no matter the number of inputs so no matter the number of patches"}, {"start": 870.16, "end": 875.1999999999999, "text": " we input we're always going to have three vectors as the output because that's dictated by the Q"}, {"start": 875.1999999999999, "end": 880.16, "text": " matrix. Let me now try and explain why this thing is permutation invariant so if we take this"}, {"start": 880.16, "end": 885.6, "text": " particular query vector this one here and we do a dot product with all of the keys here so we'll"}, {"start": 885.6, "end": 893.36, "text": " end up with with coefficients alpha 1 alpha 2 all the way through alpha n okay and we're going to"}, {"start": 893.36, "end": 905.36, "text": " have value vectors basically v1 v2 all the way through vn and as you remember as recall we just"}, {"start": 905.36, "end": 911.84, "text": " do multiplication between the attention coefficients and the value vectors we add them up and we form"}, {"start": 911.84, "end": 918.0, "text": " the message here so because of the addition that's that's the part that enables us the permutation"}, {"start": 918.0, "end": 924.08, "text": " invariance so now imagine I just permute I randomly permute the input observation so what will happen"}, {"start": 924.08, "end": 931.12, "text": " is that the key vectors will be randomly permuted as well as the value vector so maybe so then after"}, {"start": 931.12, "end": 937.28, "text": " we apply this query after we do the dot product between query and those new keys and new value"}, {"start": 937.28, "end": 942.16, "text": " vectors I mean the new order of the keys and values we'll just have this this the following"}, {"start": 942.16, "end": 948.64, "text": " thing so basically let's imagine we have something like this alpha 2 alpha 1 all the way through alpha"}, {"start": 948.64, "end": 953.92, "text": " n and because the value vectors are going to be permuted the same in the same manner we'll just"}, {"start": 953.92, "end": 961.1999999999999, "text": " have v2 v1 all the way through vn and now again because we're just doing multiplication between"}, {"start": 961.1999999999999, "end": 966.4, "text": " these and just adding them up we're going to end up with the same result and that's why we have"}, {"start": 966.4, "end": 973.4399999999999, "text": " permutation invariance so basically by doing this we bake in by design we bake in this permutation"}, {"start": 973.4399999999999, "end": 978.56, "text": " invariance property into our neural network so we don't have you do not have to learn how to handle"}, 
{"start": 978.56, "end": 984.56, "text": " different permutations of the inputs okay and basically on a high level what will happen with"}, {"start": 984.56, "end": 989.68, "text": " all of these agents and they tested these agents on four different games is we're going to input"}, {"start": 989.68, "end": 993.84, "text": " the observation we're going to input the previous action we're going to pass them through this"}, {"start": 993.84, "end": 998.4, "text": " attention neuron so this is this whole pipeline we just saw here we're going to form the message"}, {"start": 998.4, "end": 1003.0400000000001, "text": " we're going to have some policy function here and we're going to output actions the next action"}, {"start": 1003.0400000000001, "end": 1010.0, "text": " that will that we will use to interact with the environment okay having explained this like how"}, {"start": 1010.0, "end": 1015.52, "text": " this thing works I'm just going to make a small remark here so these here are sensory neurons and"}, {"start": 1015.52, "end": 1020.72, "text": " the whole thing is called attention neuron so I'm just going to go back to the title the title name"}, {"start": 1020.72, "end": 1025.84, "text": " was the sensory neuron as a transformer and I think this part should be attention neuron I guess"}, {"start": 1025.84, "end": 1033.6000000000001, "text": " because that's where the transformer like attention happens and not on the sensory neuron level"}, {"start": 1033.6000000000001, "end": 1038.32, "text": " but yeah and anyways a super small net I may be off here let's get back to the paper okay"}, {"start": 1038.88, "end": 1046.24, "text": " so quick notice here so everything I just explained visually is just captured here"}, {"start": 1046.24, "end": 1053.28, "text": " algebraically basically what will happen is every one of these sensory neurons is gonna from the"}, {"start": 1053.28, "end": 1058.72, "text": " observation is going to create its own key is going to create its own value and as you can see"}, {"start": 1058.72, "end": 1065.76, "text": " here the inputs are a bit different the key function will be accepting the observation so this"}, {"start": 1065.76, "end": 1072.32, "text": " like the ith component as well as the previous action whereas the value like functions will only"}, {"start": 1072.32, "end": 1078.8799999999999, "text": " be accepting the observations and now I guess the reason for that is you do not want to bias"}, {"start": 1078.8799999999999, "end": 1085.12, "text": " the like current action with the previous one because remember how this thing works is you're"}, {"start": 1085.12, "end": 1090.6399999999999, "text": " gonna just have in the this message which is just going to contain like a sum a weighted sum of"}, {"start": 1090.6399999999999, "end": 1097.2, "text": " value vectors and so these value vectors will not have like the previous section like incorporated"}, {"start": 1097.2, "end": 1105.04, "text": " into them and so that means we won't won't be biasing the the current action at this time step"}, {"start": 1105.04, "end": 1109.1200000000001, "text": " so I may be completely wrong here but I think that's that's the reason they omitted the"}, {"start": 1109.68, "end": 1115.52, "text": " last action for the value function here finally this is just the transformer formalism the"}, {"start": 1115.52, "end": 1120.96, "text": " important thing to notice here is that the q does not have any input and that's the way we"}, {"start": 1120.96, "end": 1127.04, "text": " 
achieve the permutation invariance and the second thing is they decoupled like the q matrices with"}, {"start": 1127.04, "end": 1132.24, "text": " this wq's which is not usually done because once you apply the wq's you actually get the queries"}, {"start": 1132.24, "end": 1137.84, "text": " and the keys and the values so here they just decouple this to to make this formalism a bit more"}, {"start": 1137.84, "end": 1142.24, "text": " clear I guess but again if you understood the what I explained here you don't need to worry"}, {"start": 1142.24, "end": 1149.76, "text": " about the formulas okay so a couple more things they have four experiments they have these non"}, {"start": 1149.76, "end": 1156.0, "text": " vision continuous control tasks like the this this like card pull swing up and they also have"}, {"start": 1156.0, "end": 1160.8, "text": " vision based tasks like the pong game and the car race game and I'm gonna switch to blog and show"}, {"start": 1160.8, "end": 1166.16, "text": " you the animations and then I'm gonna return back to the paper and show you the the results they got"}, {"start": 1166.16, "end": 1172.48, "text": " okay here we are basically we have a like this card pull problem it's a very famous toy problem"}, {"start": 1172.48, "end": 1179.36, "text": " in the RL field you're trying to just basically make sure that this card is at the x equals zero"}, {"start": 1179.36, "end": 1184.48, "text": " position along this x-axis and you're trying to make sure that the theta angle equals zero that"}, {"start": 1184.48, "end": 1190.0, "text": " means that this bar the pole is vertical and so what will now happen as you can see here these"}, {"start": 1190.0, "end": 1195.1200000000001, "text": " are the observations so we have five we'll have five sensory neurons well one will be processing"}, {"start": 1195.1200000000001, "end": 1201.52, "text": " at this point like this theta dot which is angular velocity then we have cosine and sine of theta"}, {"start": 1201.52, "end": 1207.52, "text": " we have the x velocity and we have the x position because of the way this agent is implemented and"}, {"start": 1207.52, "end": 1213.28, "text": " that's using LSTM for the sensory neurons that means once I click shuffle observation it will"}, {"start": 1213.28, "end": 1218.8799999999999, "text": " take some time for the hidden state of the LSTM to recover and understand which input is parsing"}, {"start": 1218.8799999999999, "end": 1223.92, "text": " and that's what we'll have a small transition once I hit this and then we'll the agent will"}, {"start": 1223.92, "end": 1228.16, "text": " still manage to to achieve the performance and to balance the pole so let me let me click shuffle"}, {"start": 1228.16, "end": 1234.96, "text": " observations okay so nothing happened there was just some small glitch there let me touch it again"}, {"start": 1234.96, "end": 1241.76, "text": " so this time it failed but it's gonna manage to balance it out again so that's one example"}, {"start": 1241.76, "end": 1246.72, "text": " the non-visual task they were they were using their experiments there is this is the second"}, {"start": 1247.36, "end": 1252.64, "text": " experiment they've done and that's using the pong game and as you can see here aside from shuffling"}, {"start": 1252.64, "end": 1258.48, "text": " they also mask certain patches like certain patches out and you can imagine this would be"}, {"start": 1258.48, "end": 1264.8, "text": " super hard for a human to play and the agent actually manages 
to to to play it really well"}, {"start": 1264.8, "end": 1270.96, "text": " masking out the the patches is possible because of the m versus n so the input and the output space"}, {"start": 1270.96, "end": 1275.44, "text": " do not have to be the same that means we can change the number of like components of the"}, {"start": 1275.44, "end": 1280.88, "text": " observation at the input and we're still going to have the same message at the output and the policy"}, {"start": 1280.88, "end": 1286.0, "text": " will be able to handle it so that's why we can handle these like patches so yeah aside from that"}, {"start": 1286.0, "end": 1294.08, "text": " we have like the car race game and they showed that they can that their agent can actually handle"}, {"start": 1294.08, "end": 1299.28, "text": " different backgrounds even though the agent never saw these different backgrounds during the training"}, {"start": 1299.28, "end": 1305.44, "text": " and the reason it can achieve that is because the representation that this like model like"}, {"start": 1305.44, "end": 1311.92, "text": " constructs is basically abstracts out the background as a not important information to play this game"}, {"start": 1311.92, "end": 1317.04, "text": " but we'll get back to to those details a bit later this is just for you to understand and have a clear"}, {"start": 1317.04, "end": 1322.0, "text": " picture of the tasks they were using in this in in their experiments now quickly getting back to"}, {"start": 1322.0, "end": 1328.56, "text": " the paper let's digest the results they got so this is the carpool like a static image and again"}, {"start": 1328.56, "end": 1334.08, "text": " kudos to the authors for making this interactive demos in their blog i think that's the the way"}, {"start": 1334.08, "end": 1340.8, "text": " forward because pdfs are such an obsolete format in my honest opinion but yeah um okay so let's see"}, {"start": 1340.8, "end": 1345.76, "text": " the results they achieve they have two baselines first is fully connected neural network with five"}, {"start": 1345.76, "end": 1350.0, "text": " observations the second one is fully connected neural network with 10 observations so these"}, {"start": 1350.0, "end": 1356.24, "text": " fives are recalled that we had like the x the uh the velocity we had the theta the angular velocity"}, {"start": 1356.24, "end": 1361.36, "text": " etc so those are the five observations these baselines will be just you just input the five"}, {"start": 1361.36, "end": 1367.36, "text": " observations into a fully connected layer you get a intermediate hidden representation you you pass"}, {"start": 1367.36, "end": 1373.68, "text": " it to another fully connected layer and out comes the output so these are these baselines and the"}, {"start": 1373.68, "end": 1379.44, "text": " this ours is just instead of using here a feed forward network we're going to use this attention"}, {"start": 1379.44, "end": 1384.48, "text": " neuron we just developed so this is going to be attention neuron component here attention"}, {"start": 1384.48, "end": 1389.6, "text": " neuron okay so these are the models we are comparing with and as you can see here on the"}, {"start": 1389.6, "end": 1395.84, "text": " five observations it's a bit it's lagging a little bit behind the baseline and obviously we have"}, {"start": 1395.84, "end": 1401.84, "text": " uh like no answer here because we cannot plug in five observations into this neural network"}, {"start": 1401.84, "end": 1407.1200000000001, "text": " which 
is expecting 10 observations so that's the uh the another con of these baselines they are not"}, {"start": 1407.1200000000001, "end": 1412.4, "text": " flexible they are not adaptable to the variation of the number of inputs so the reason there is a"}, {"start": 1412.4, "end": 1417.3600000000001, "text": " small lag here is because they're using lstm so it takes some time to to build up to understand"}, {"start": 1417.3600000000001, "end": 1424.0, "text": " the input that the that particular sensory neuron is is parsing and uh now makes me wonder whether"}, {"start": 1424.0, "end": 1430.0800000000002, "text": " we can just use uh instead of lstm's just use the fully connected network and achieve better results"}, {"start": 1430.0800000000002, "end": 1436.0, "text": " here i'm not sure i saw in the paper like some ablation like that one and i guess it should work"}, {"start": 1436.0, "end": 1442.3200000000002, "text": " since this thing is working already i don't see why this wouldn't work but yeah anyways uh the the"}, {"start": 1442.32, "end": 1446.72, "text": " the point here is that once we shuffle the observations uh this thing completely fails"}, {"start": 1447.28, "end": 1454.3999999999999, "text": " and the attention neuron a model uh doesn't have any problems dealing with shuffled uh like inputs"}, {"start": 1454.3999999999999, "end": 1458.96, "text": " then uh expanding to 10 observations which is proving the point that this model can handle"}, {"start": 1458.96, "end": 1463.76, "text": " varying number of inputs you can see that result is still like holds here here we don't have an"}, {"start": 1463.76, "end": 1468.8, "text": " answer because this is as i said five observations and we we can only test the 10 observation uh"}, {"start": 1468.8, "end": 1473.44, "text": " fully connected neural network and again it's lagging a little bit behind it uh finally the"}, {"start": 1473.44, "end": 1479.76, "text": " interesting column for me is this one because um instead of adding uh and instead of duplicating"}, {"start": 1479.76, "end": 1486.96, "text": " the the five uh input observations this time they actually uh add random gaussian noise and the the"}, {"start": 1486.96, "end": 1493.84, "text": " query vector somehow learned to ignore that information and only focus on the actual uh"}, {"start": 1493.84, "end": 1500.8799999999999, "text": " signals so basically if i go if i scroll back to this chart uh that means that these query vectors"}, {"start": 1500.8799999999999, "end": 1506.08, "text": " are learned such that even though we have even though that like a half of these observations"}, {"start": 1506.08, "end": 1512.9599999999998, "text": " are like pure noise it will learn how to ignore these and so i guess the the the the corresponding"}, {"start": 1512.9599999999998, "end": 1518.08, "text": " attention coefficients will be super small and so the value vectors will be smaller that were"}, {"start": 1518.08, "end": 1524.0, "text": " emitted from these observations and so the message will be constructed only from the mostly from from"}, {"start": 1524.0, "end": 1529.52, "text": " the useful signal uh parts of the observation and not the noise and that's fairly fairly intriguing"}, {"start": 1529.52, "end": 1534.96, "text": " property that that happened there on this diagram they just visually present how the message so the"}, {"start": 1535.76, "end": 1541.6799999999998, "text": " the uh global representation is the same so basically in the case of a card pool"}, {"start": 
1541.68, "end": 1548.0800000000002, "text": " uh this empty vector so let me let me scroll back here so this empty is just a 16 dimensional uh like"}, {"start": 1548.0800000000002, "end": 1555.92, "text": " a vector and they just plot it here so this here you pick one time step and this will be the empty"}, {"start": 1555.92, "end": 1561.92, "text": " at that time step and as you can see here these two plots are the same which is nice because uh"}, {"start": 1561.92, "end": 1566.3200000000002, "text": " here we don't have the observation shuffling and here we have the observation shuffling and the"}, {"start": 1566.3200000000002, "end": 1571.04, "text": " message remains the same which means this is just a visual representation of the of the"}, {"start": 1571.04, "end": 1578.48, "text": " of the permutation invariance property uh okay uh so i already showed you uh the demos and"}, {"start": 1579.28, "end": 1584.8, "text": " there is just they just did a bit more um they were very thorough with these experiments so what"}, {"start": 1584.8, "end": 1589.92, "text": " they did is they did this uh training occlusion ratio they they they changed it from zero to"}, {"start": 1589.92, "end": 1595.44, "text": " 0.9 which means zero means you you don't have any occlusion so all of the the patches are actually"}, {"start": 1595.44, "end": 1602.56, "text": " uh contained contain the signal they are not like blacked out and 0.9 means 90% of these patches will"}, {"start": 1602.56, "end": 1609.44, "text": " be blacked out and so training the agent across that range and then testing it again on various"}, {"start": 1610.4, "end": 1615.28, "text": " ratios of occlusion you can see an interesting pattern emerging here so we definitely see the"}, {"start": 1615.28, "end": 1623.2, "text": " best results for 50% and so as a quick reminder for the game of punk uh 21 is the top score you"}, {"start": 1623.2, "end": 1628.32, "text": " can achieve so you're that you have two paddles here you're playing the game and you can uh once"}, {"start": 1628.32, "end": 1634.48, "text": " the the ball gets past your opponent that's plus one for you and you can get until 21 so because"}, {"start": 1634.48, "end": 1640.72, "text": " these are averages um 21 means you're pretty much constantly winning the game and so as you can see"}, {"start": 1640.72, "end": 1647.3600000000001, "text": " here until we get to approximately 50% of occlusion in the test we are performing really well uh and"}, {"start": 1647.3600000000001, "end": 1651.6000000000001, "text": " then it drops it seems that the agents are struggling if we have too much occlusion during"}, {"start": 1651.6, "end": 1656.7199999999998, "text": " training but this this one this one is they mentioned this example here and it's interesting"}, {"start": 1656.7199999999998, "end": 1663.12, "text": " because when you when you basically train with 80% occlusion and then you show all of the patches"}, {"start": 1663.12, "end": 1668.1599999999999, "text": " in the test time you can see that it's still on pair with the opponent meaning that on average"}, {"start": 1668.1599999999999, "end": 1672.8799999999999, "text": " it's even a bit better than the opponent when you have all of the patches which means it can"}, {"start": 1674.3999999999999, "end": 1680.9599999999998, "text": " leverage the additional information and generalize to it even though it did not have it during the"}, {"start": 1680.96, "end": 1686.96, "text": " training time okay they also analyze the 
representations they get from this attention"}, {"start": 1686.96, "end": 1692.88, "text": " neuron so we just what they do is they take the empty vector they just apply t-sne and they just"}, {"start": 1692.88, "end": 1699.3600000000001, "text": " project the empty to two-dimensional space so if we take uh like outputs which are close to each"}, {"start": 1699.3600000000001, "end": 1705.92, "text": " other so empty is projected nearby in the 2d space we can see that the corresponding inputs that led"}, {"start": 1705.92, "end": 1711.6000000000001, "text": " to that empty being projected here are fairly similar so here you can see that this uh i guess"}, {"start": 1711.6000000000001, "end": 1717.3600000000001, "text": " this is called pad or paddle uh is all the way down and in all of these inputs so basically this"}, {"start": 1717.3600000000001, "end": 1722.88, "text": " is a single input this whole row is a single input because again it would be helpful if you knew how"}, {"start": 1722.88, "end": 1727.6000000000001, "text": " dqm works especially when you're dealing with these Atari games what you do you stack four"}, {"start": 1727.6000000000001, "end": 1735.04, "text": " consecutive frames to avoid having this uh perceptual aliasing whereby you do not know if you see a"}, {"start": 1735.04, "end": 1739.36, "text": " single image whether the ball is going in this direction or that direction or that direction so"}, {"start": 1739.36, "end": 1745.76, "text": " you do not know the state once you concatenate couple frames you have uh like a less uh like a"}, {"start": 1745.76, "end": 1751.92, "text": " vagueness let's go that way in your input so you can see that the inputs are are visually and"}, {"start": 1751.92, "end": 1757.28, "text": " semantically similar and they are grouped here into the same uh like part of this 2d space here"}, {"start": 1757.28, "end": 1762.08, "text": " you can imagine once you have this type of representation the agent can learn how to to"}, {"start": 1762.08, "end": 1767.4399999999998, "text": " make good decisions and play play play the game of punk okay um i mentioned this showed you i showed"}, {"start": 1767.4399999999998, "end": 1774.24, "text": " you this in the blog uh basically uh the cool thing is that the uh like our this model the"}, {"start": 1774.24, "end": 1779.1999999999998, "text": " attention neuron manages to to cope with different types of backgrounds so they they tried four"}, {"start": 1779.1999999999998, "end": 1785.28, "text": " different backgrounds here uh plus the shuffling and they showed that the like agents still can't"}, {"start": 1785.28, "end": 1790.0, "text": " play the game because it's ignoring uh that irrelevant information which is the background"}, {"start": 1790.0, "end": 1796.16, "text": " in this case and they plotted on this diagram here the attention so how they construct these"}, {"start": 1796.16, "end": 1800.88, "text": " representations is remember these are patches this is going to be your input it you're going"}, {"start": 1800.88, "end": 1807.2, "text": " to pass it to the sensory neuron and it's going to emit the key and so once you you take your query"}, {"start": 1807.2, "end": 1813.36, "text": " you just do dot product with all the keys and you can see that these here uh get the most attention"}, {"start": 1813.36, "end": 1818.56, "text": " which is basically the like the the curve the turning so i mean you as humans we can we can"}, {"start": 1818.56, "end": 1823.04, "text": " notice that these are very 
important because once you take the features that correspond to these"}, {"start": 1823.04, "end": 1829.9199999999998, "text": " patches and you combine them into the message and then you can basically use that feature that novel"}, {"start": 1829.9199999999998, "end": 1835.28, "text": " global representation to make a good decision and that's to turn left in this case similarly here"}, {"start": 1835.28, "end": 1840.56, "text": " even though they swapped the background so the original background is this screen like a grass"}, {"start": 1840.56, "end": 1846.56, "text": " type of thing once you swap it with this random background the the model still learns how to"}, {"start": 1846.56, "end": 1851.2, "text": " uh pay attention to the relevant parts of the image and so it knows when to turn i think that's"}, {"start": 1851.2, "end": 1858.72, "text": " pretty much it uh there is one more interesting thing here and that's uh they not only shuffle the"}, {"start": 1858.72, "end": 1864.56, "text": " input observation they do that dynamically during the the game so every for example t equals 25"}, {"start": 1864.56, "end": 1870.96, "text": " steps they shuffle the input observation and they do that from 25 50 all the way to 500 and no"}, {"start": 1870.96, "end": 1875.44, "text": " reshuffle and we can see the results here are pretty pretty nice so looking at the car racing"}, {"start": 1875.44, "end": 1881.44, "text": " uh the the performance drops somewhat same as for a carpool but for different reasons so here we have"}, {"start": 1881.44, "end": 1886.0800000000002, "text": " the lstm so it takes some time to update the hidden state here they have some different strategy"}, {"start": 1886.0800000000002, "end": 1892.4, "text": " they're using some difference between the the the neighboring frames and so once you do the uh like"}, {"start": 1892.4, "end": 1897.2, "text": " a shuffle that means that that difference is going to have a it's going to introduce an error into"}, {"start": 1897.2, "end": 1902.64, "text": " the model and that's it's going to struggle uh like uh until it stabilizes again so there are a"}, {"start": 1902.64, "end": 1908.3200000000002, "text": " bunch of details they did a very nice and thorough experimentations across different games you can"}, {"start": 1908.3200000000002, "end": 1913.8400000000001, "text": " check out the blog yourself and the interactive demos but basically the main ideas are uh to have"}, {"start": 1913.8400000000001, "end": 1920.64, "text": " this type of sharing uh between sensory neurons and to add the the set transformer idea so that"}, {"start": 1920.64, "end": 1925.2800000000002, "text": " we have the permutation invariance and so that we can shuffle the observation and still preserve"}, {"start": 1925.2800000000002, "end": 1930.48, "text": " the same empty upon which we are building the policy network so i haven't been focusing obviously"}, {"start": 1930.48, "end": 1936.72, "text": " on the ariel component because that's just um not the the core inside behind this paper they were"}, {"start": 1936.72, "end": 1942.4, "text": " also using evolutionary evolutionary strategy algorithms to basically train the agents but the"}, {"start": 1942.4, "end": 1949.2, "text": " main idea was to test whether the agents can be more robust to permutation to random shuffling of"}, {"start": 1949.2, "end": 1953.52, "text": " the input uh observations so hopefully you found this video useful if you did share it out with a"}, {"start": 1953.52, "end": 1958.08, 
"text": " friend subscribe this channel join the discord community there is a lot of cool conversations"}, {"start": 1958.08, "end": 1974.0, "text": " happening there people are helping each other out so do join it and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=WJWBq4NZfvY
DeepMind Perceiver and Perceiver IO | Paper Explained
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover: * Perceiver (Perceiver: General Perception with Iterative Attention) * Perceiver IO (Perceiver IO: A General Architecture for Structured Inputs & Outputs) The goal was to create a modality-agnostic, general perception architecture that could work on images, videos, audio, text, etc. alike. The main idea is to use the cross-attention module as a bottleneck layer that will map the input modality data into the latent space - this way we avoid the quadratic curse of transformers. After that powerful latent transformers are used to refine the representation - rinse and repeat. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Perceiver: https://arxiv.org/abs/2103.03206 ✅ Perceiver IO: https://arxiv.org/abs/2107.14795 ✅ Code: https://github.com/deepmind/deepmind-research/tree/master/perceiver ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:00 Perceiver architecture explained 05:40 Comparison with Facebook DETR model 07:05 Comparison to RNNs 08:35 Algorithmic complexity of Perceiver 10:35 Positional encodings and permutation equivariance 12:00 Results - ImageNet 14:35 Pixel permutation robustness 17:40 Attention visualized 20:20 Results - AudioSet 23:30 Results - Point Cloud 25:00 Perceiver IO 26:15 Decoder explained in depth (main contribution) 28:45 GLUE results (BERT baseline) 29:50 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #perceiver #perceiverio #deepmind
What's cracking guys! In this video I'm covering Perceiver: General Perception with Iterative Attention, and at the end of the video I'm going to contrast it with the newly published Perceiver IO paper. It's work by a DeepMind team: Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals and João Carreira. So the idea is to create a general perception architecture and make it modality-agnostic - I'm going to break that down in a little bit, but let's first see the motivation behind the paper. The perception models used in deep learning are designed for individual modalities, often relying on domain-specific assumptions such as local grid structures, and I guess they're referring to CNNs here. Basically, depending on which modality you're using in your deep learning pipeline - an image, an audio signal, or a point cloud - you're going to use a heavily specialized architecture to process and understand it. The whole idea of this paper is to ditch those modality-dependent priors. For example, when you're using a CNN you're exploiting the fact that the intensities of neighboring pixels are correlated, so you can use a kernel that's, say, three by three, shift it around and share the weights, whereby you're also exploiting the fact that the statistics don't change across the image. They say it nicely here: "In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets." So on the one hand they're reusing Transformers, which have proven to be very good at modeling arbitrary relationships between inputs, and on the other hand they avoided the quadratic complexity problem that Transformers have - we're going to see how they did it - and that's what enables them to apply this model to hundreds of thousands of inputs. So let's dig into the architecture straight away. Here it is. As you can see, we have this latent array, which is N times D dimensional, and we have this byte array, which is basically our input modality - so this thing here is going to be your image, your audio, your video, or whatnot. Let me explain that using the image below, where you can see various modalities: an image, a video (they sampled a couple of frames from it), the audio that is, I guess, aligned with that video, and a point cloud. Let me take one example and form the byte array, say for this image. It's an ImageNet image, so it's 224 by 224, which is roughly 50k pixels. They flatten it out, so there will be M rows - that's the M component of the byte array, M times C - and C is going to be 3 because this is an RGB image, so we end up with roughly 50k times 3. As you can see, this is a pretty big input that Transformers cannot handle by themselves. Similarly for the video, they concatenate the input features from the frames as well as from the audio and form a single M times C array again. So that's this part.
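Just to make those shapes concrete, here is a minimal NumPy sketch - purely illustrative, not the authors' code - of flattening an RGB image into an M times C byte array (a 64 by 64 image is used to keep it small; a 224 by 224 ImageNet image would give roughly 50k rows):

import numpy as np

img = np.random.rand(64, 64, 3)     # stand-in for a 64x64 RGB image
byte_array = img.reshape(-1, 3)     # every pixel becomes one input element: (M, C) = (4096, 3)
print(byte_array.shape)             # a 224x224 image would give (50176, 3) instead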
Now let's see how the model works. You have this latent array and you use it to form the query vectors - this is your regular Q, K, V attention pattern. You form queries from the latent array, and keys and values from your input, so you're basically questioning the input for valuable information, and then you combine the value vectors using the scores formed from the Q and K matrices to get a novel representation here. So this is the first component of the pipeline: the cross-attention module. The second important part is that once you have this vector that contains certain information from your input modality, you apply the original Transformer block - you stack a couple of those blocks and form this latent Transformer - and that's the second component of the pipeline. What they do then is simply repeat this: as you can see here, we again form the query vectors, we again query the keys, and we form another representation. The way to think about it is that the latent Transformers are refining the representation so that we can ask better questions of the input modality and form more informative representations, and at the end we can use that for whatever task we have at hand - classification or whatnot. So that's the main idea; it's fairly simple. One detail worth mentioning is that they share weights between these latent Transformers - the dashed line means this latent Transformer has the same weights as that one - and they do the same thing for the cross-attention. There is a small but important caveat, though: the first cross-attention module gets its own weights, and all of the other cross-attention modules share weights, because otherwise they had instability problems. If you've watched some of my previous videos - I covered Facebook's DETR model in one of them - I think it's a nice model to contrast the Perceiver with, because we can see many of the same basic ideas and building blocks in DETR. As you can see here, the object queries play the role of the latent array, and the Transformer decoder is the same thing as this latent Transformer; if we treat this representation here as the byte array, then we have a similar structure all in all. Obviously they've replaced this whole part - instead of having modality-specific CNN components and doing all that intricate stuff, they just flatten out the inputs, do the cross-attention and then apply the Transformers. The only other difference is, I guess, interleaving the latent Transformers to refine the representations, and that in the Perceiver every single layer keeps consulting the input modality. So yeah, I thought it would be neat to mention this comparison.
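To make those two components concrete - the cross-attention bottleneck and the latent Transformer block - here is a minimal single-head NumPy sketch. It's an illustrative toy, not DeepMind's implementation: there are no layer norms, MLPs or multiple heads, and the projection matrices are random stand-ins for learned weights:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # q: (N, d), k: (M, d), v: (M, d_v) -> output: (N, d_v)
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (N, M): this is the M*N cost
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
M, C = 64 * 64, 3      # a flattened 64x64 RGB "byte array" (a 224x224 image would give ~50k rows)
N, D = 512, 256        # latent array: the fixed-size bottleneck

byte_array = rng.normal(size=(M, C))
latents    = rng.normal(size=(N, D))

# random stand-ins for learned projection matrices
Wq, Wk, Wv = rng.normal(size=(D, D)), rng.normal(size=(C, D)), rng.normal(size=(C, D))

# 1) cross-attention: latents query the input -> cost O(M*N), never O(M^2)
latents = attention(latents @ Wq, byte_array @ Wk, byte_array @ Wv)

# 2) latent Transformer block: plain self-attention over the N latents -> O(N^2)
Wq2, Wk2, Wv2 = (rng.normal(size=(D, D)) for _ in range(3))
latents = attention(latents @ Wq2, latents @ Wk2, latents @ Wv2)

print(latents.shape)   # (512, 256): independent of the input size M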
Okay, let's continue. Let me share a couple more interesting ideas they mention: "Because we optionally share weights between each instance of the Transformer tower" - that's the latent Transformer - "and between all instances of the cross-attention module but the first, our model can be interpreted as a recurrent neural network, but unrolled in depth using the same input, rather than in time." So, a quick recap: with an RNN we have a network, we feed it the input at time step one - whatever the modality may be, for example a text token - then we unroll it, meaning the weights are shared, we feed the next input, a different token, and we repeat this on and on. The difference here is that instead of using different inputs, the Perceiver constantly uses the same input - the byte array - and it's unrolled not in time but in depth, through these powerful latent Transformers. So I guess there is a nice mapping between the two ideas, and the asymmetry of not sharing the weights of the first cross-attention module with the other ones is, I think, worth further investigation. The second thing I want to mention, and this is fairly important, is what enables us to use the Perceiver directly on the input modality without the tricks that, for example, the Vision Transformer had to use: ViT first does one layer of convolution - or, as they originally phrased it, takes the patches and linearly projects them, which you can implement with a single convolutional layer - and only then, once the spatial dimensions are reduced and you have far fewer tokens, passes them through the Transformer. To avoid that, we need to reduce the complexity, and here is the main reason the Perceiver works: "This results in an architecture with complexity O(MN + LN^2)", and this is the key. M is the size of the input modality and N is the size of the latent array - they usually use something like 512 - and if you remember, we had roughly 50k for the RGB ImageNet image, so there is a huge difference there, and we save a lot of computation simply because we don't have a quadratic dependency on M. L is just the number of layers in the latent Transformer, and the N squared term appears because we are still using a Transformer - but this time it is fed the latent variables instead of the input modality. This is how they save the computation, and that's the whole trick: "By decoupling the input size and the depth, we can add additional Transformer layers at a cost that's independent of the input size." That's cool - we don't have M squared, we have N squared. So the main thing that reduces the complexity is this projection of the input modality into the latent space via the Q, K, V cross-attention, using the latent array as a bottleneck.
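As a rough back-of-the-envelope check of that O(MN + LN^2) claim (the depth L = 6 here is just an assumed value for illustration, not taken from the paper):

M, N, L = 50_176, 512, 6            # input size, latent size, latent-Transformer depth (L is illustrative)
vanilla   = L * M**2                 # self-attention applied directly to the input
perceiver = M * N + L * N**2         # cross-attention bottleneck + latent self-attention
print(f"{vanilla:,} vs {perceiver:,}")   # ~15.1 billion vs ~27.3 million attention-score entries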
One final remark: let me mention the positional encodings. The reason we have to use positional encodings is the same as in the original Transformer, namely that Transformers are actually permutation equivariant, not invariant. That means that if you take the input tokens and rearrange them, the output is going to be rearranged in the same fashion - that's why it's equivariant rather than invariant. If it were invariant, then however we permuted the input sequence, the outputs would remain the same; since it's equivariant, the output follows the permutation pattern of the input. Hopefully that's clear. So they're using Fourier feature positional encodings, similar in spirit to the sinusoidal encodings from the original Transformer paper, and they mention a small detail: "In language modeling, Transformer inputs are typically produced by adding a position encoding to the input encoding" - so by adding them - "and we found it beneficial to instead concatenate the position and input features before passing them into the Perceiver." This is just a minor detail, but I wanted to point out that they do have to add these, in a way, modality-specific positional encodings - though that's a minor prior compared to having to actually tweak the architecture to deal with the modality.
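Here is a small sketch of what that "concatenate instead of add" detail might look like for Fourier position features - simplified to 1D positions and an arbitrary frequency schedule, whereas the paper uses 2D Fourier features for images:

import numpy as np

def fourier_features(pos, num_bands=16, max_freq=32.0):
    # pos: (M,) positions normalized to [-1, 1]; returns (M, 2*num_bands + 1)
    freqs = np.linspace(1.0, max_freq / 2.0, num_bands)      # illustrative frequency schedule
    angles = np.pi * pos[:, None] * freqs[None, :]           # (M, num_bands)
    return np.concatenate([np.sin(angles), np.cos(angles), pos[:, None]], axis=-1)

M, C = 4096, 3                           # e.g. a flattened 64x64 RGB image
rgb = np.random.rand(M, C)
pos = np.linspace(-1.0, 1.0, M)          # 1D positions for simplicity
byte_array = np.concatenate([rgb, fourier_features(pos)], axis=-1)   # concatenated, not added
print(byte_array.shape)                  # (4096, 36): 3 RGB channels + 33 position features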
Okay, those are pretty much all the details you need to know about the Perceiver - now let's see some results. They did experiments on various modalities: images, audio, video and point clouds. Let's start with images. As you can see here, they compare the Perceiver with the simplest baselines, such as ResNet-50 and ViT, the Vision Transformer. In the second block they additionally add FF, which means Fourier features: they concatenate the Fourier features to all of the pixels and then pass that through the ResNet and the ViT. Finally they have the original Transformer, and the only thing they had to do there was resize the input image from 224 to 64 by 64 because, as we know, quadratic complexity. Before I even comment on the results and the drop we see here, you need to be cognizant of the fact that these are the simplest baselines. If we took the SOTA models on ImageNet with no pre-training - those would be NFNets and EfficientNet V2 - the top-1 accuracy is 86.5, which is way higher than what the Perceiver achieves here, and those models are also way more efficient, both parameter-wise and computation-wise. The reason I'm mentioning this is that the authors are not trying to compare the Perceiver with highly specialized, modality-specific architectures; they're just trying to show that it is as good as, or even a bit better than, the regular baselines. So now let's see the results: the Perceiver is obviously better than both the ResNet and the ViT, and we can see a significant drop here for ResNet with Fourier features - from 77 to 73 - and a smaller drop for the ViT. A quick remark on that ResNet-50 drop: the authors mention that, to account for the Perceiver's use of Fourier features at the input, they trained versions of the benchmark models with this input as well and found that they produce comparable, if slightly worse, performance to models trained solely on RGB input. I mean, "slightly worse" is a very arguable statement - dropping from 77 to 73, four points of top-1, seems like a significant drop, and I would love to know why. I would give this another look, whether there is some bug here; I don't know, it looks suspicious to me. In any case, let me show you some other interesting results, these ones here. I mentioned that Transformers, and the Perceiver as well, are permutation equivariant, and they did a small experiment to further consolidate the idea of using a modality-agnostic architecture like the Perceiver. They report top-1 accuracy on raw images and on permuted images, and you can see a significant drop for ResNet, a somewhat smaller one for ViT, and essentially unchanged results for the Transformer and the Perceiver. Let me quickly explain what this permutation means. Take an ImageNet image - imagine a simplified three-by-three image with pixels labeled one through nine. They predefine a deterministic permutation mapping, so for example pixel five goes here, pixel one goes there, pixel three goes over there, and so on. The transformation they apply to each input image is to shift every pixel intensity to its new location according to this permutation table, the same table for every image. They also include the Fourier features, to make it a bit more fair. Obviously CNNs rely on exploiting local correlations, and that assumption is now violated - from a natural image you end up with something that looks completely artificial - so you see a significant drop for the CNN architecture. ViT performs somewhat better, although there is a drop there as well; the reason is that the first layer of ViT is basically a convolution-like transform, so I guess that's why its performance suffers. And because both the original Transformer and the Perceiver are permutation equivariant - regardless of whether the positional encodings are learned or just Fourier ones - they preserve their performance. So that was a quick side experiment, and it further emphasizes the need for a generic architecture such as the Perceiver.
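For completeness, the fixed-permutation transform, as I understand it, is as simple as this toy sketch (one permutation table sampled once and reused for every image; the code is illustrative, not the evaluation code from the paper):

import numpy as np

rng = np.random.default_rng(0)
H, W, C = 224, 224, 3
perm = rng.permutation(H * W)            # one fixed permutation, reused for every image

def permute_pixels(img):
    # img: (H, W, C) -> same pixel intensities, rearranged by the fixed permutation table
    return img.reshape(H * W, C)[perm].reshape(H, W, C)

img = rng.random((H, W, C))
shuffled = permute_pixels(img)
# the intensities are identical, only their locations changed
assert np.allclose(np.sort(img, axis=None), np.sort(shuffled, axis=None))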
Okay, continuing on, they have some nice attention maps visualized here. What you see is this: you take the first cross-attention block of the Perceiver model and, from the latent array - again, there are N latent vectors - you pick one latent vector and form a query vector from it. You then use that query vector to query all of the keys of this image (the modality here has M pixels), and that's how they form these visualizations. These are attention scores prior to the softmax: every single pixel has its own key vector, we do a dot product between the single query and the key of every pixel, and that gives the score map. You can see a qualitative change happening between this first map and these other ones. The reason is that, as I mentioned, the first cross-attention module has unique weights, while all of the other modules share weights - and they actually use eight cross-attention modules for the ImageNet model, meaning the model consults the input image eight times, with the latent Transformer refinements in between. Because these two share the same weights, whereas this one has its own separate set of weights, you see an obvious qualitative difference. Here you can even notice a dog shape, even though these are only scores - I'm fairly sure this is not overlaid on top of the image, it just pops up when you compute the scores. And here you can see these complex 2D sinusoid-like patterns, which resemble the Fourier features in a way; they speculate that's because they're using Fourier features as the positional embeddings, since once they use learned encodings they don't see these patterns anymore, so I guess it heavily correlates with the use of Fourier features. Down here is the same thing broken down for multiple query vectors - remember, there are 512 of them for the ImageNet task, so they could show 512 of these visualizations; they show around a hundred here and a couple more there, and you can see they correlate heavily with each other.
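Conceptually, one of those attention maps is just the dot product of a single latent query against the per-pixel keys, reshaped back onto the image grid - a toy sketch with random stand-in vectors:

import numpy as np

rng = np.random.default_rng(0)
H, W, D = 224, 224, 256
pixel_keys = rng.normal(size=(H * W, D))   # one key vector per input pixel (illustrative)
query      = rng.normal(size=(D,))         # a single query formed from one latent vector

scores = pixel_keys @ query / np.sqrt(D)   # pre-softmax attention scores, one per pixel
attention_map = scores.reshape(H, W)       # reshape back to the image grid to visualize it
print(attention_map.shape)                 # (224, 224)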
Continuing on to other modalities, they ran the same kind of experiments on the AudioSet dataset, which contains videos together with audio. They have some baselines and they compare the Perceiver against them using only audio, using only video, and using both audio and video - for the combined case they basically just concatenate the audio and video data into that M times C byte array. The exact numbers are not that important in my opinion, but a couple of interesting facts. First, the Perceiver is on par with or better than most of these baselines, including ResNet-50, but this CNN14 model, which is super specialized, has a somewhat higher score - and once its domain-specific information is dropped, the Perceiver comes out ahead. I mean, you can always make your model look better than the baselines if you do that kind of thing, but as I mentioned, the point is to compare against reasonable baselines and show that the same model, with only small tweaks such as specific augmentations and positional encodings, can be on par across different modalities - that's the whole point of the paper. On video alone, it performs better than these two baselines, and for the combination we can see that using both modalities obviously boosts the performance - it's way better than using only video or only audio - and again, the baseline with the specific late-stage fusion has somewhat better performance, but that's not that important to me. What is interesting is that, when using only the audio modality, the difference between raw audio and the mel spectrogram is very small; usually you get a huge boost from using spectrograms, so I guess that's something worth further investigation - something I found interesting. A small rant here: for video, they mention that a full 32-frame clip at this resolution has more than 2 million pixels, which is obviously not feasible even with a model like the Perceiver, so they "experimented using tiny space-time patches with dimensions 2 x 8 x 8, resulting in a total of 12,000 inputs to the Perceiver". This resembles what ViT is doing a lot - and they were comparing themselves against ViT - so here they're doing basically the same hack: using domain-specific knowledge to reduce the input modality into a smaller one, and only then feeding it to the Perceiver. That doesn't look consistent with the rest of the ideas in this paper, so leave your comments on whether you think this violates the whole idea of the Perceiver model. Finally, I'll wrap up the experiments with the results on point cloud classification: the goal is, given a 3D point cloud, to classify the object into a category. Again you can see ViT, ResNets and the Transformer - the Perceiver outperforms all of them - but a highly specialized architecture such as PointNet++ is way better, by at least six points. I'm going to wrap up this paper with this sentence from the authors: "While we reduced the amount of modality-specific prior knowledge in the model, we still employ modality-specific augmentation and position encoding" - and we also just saw the trick they had to use to make the video modality feasible, so that's something to keep in mind; we're still definitely struggling with videos. Before I contrast this paper with Perceiver IO, I want to mention one more thing I saw in the ablations. As they put it, they report the "performance of models built from a stack of cross-attention layers with no latent Transformers" - and that setup is much more similar to what Facebook's DETR model is doing - and you can see that the accuracy is way lower than when using latent Transformers to refine the representation, so that's an argument for the latent Transformers. Okay, now let me quickly contrast this paper with Perceiver IO: "Perceiver IO: A General Architecture for Structured Inputs & Outputs", again from the DeepMind team. They say: "While the Perceiver supports many kinds of inputs, it can only produce very simple outputs, such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties, by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics."
In the original paper, all of the tasks we saw - ImageNet, AudioSet and the point clouds - were classification tasks, so the output structure was very simple. When you have more complex problems, such as building an agent for StarCraft II, you have much more structured outputs. The output may be something like this: one array that tells you which unit you should select, and then, once you select it, a matrix that tells you the probability of moving to each location - maybe a four-by-four grid of probabilities for where the selected character should move. So you have a complex, structured output, and the thing is, the original Perceiver, even though it was very flexible with the input, wasn't as flexible with the output. That's what this paper addresses, and they state it themselves: "Our main technical contribution is Perceiver IO's decoding procedure." So let's see what they mean by that. Here is the architecture; let me contrast it with the original Perceiver. This part is the same: we have the latent array, N times D, we have the input byte array, M times C, we do the cross-attention, then the latent Transformer, and we rinse and repeat - same as the original Perceiver. The novel thing is this output query array. Depending on your desired outputs, you have an output array with, say, O vectors, each with E features; you form O query vectors and again apply cross-attention against the latents, and because you have O query vectors, you end up with O output vectors. Zooming into the decoder head, since this is the main contribution of the paper, let me break it down a bit more. First, these query vectors are task-dependent: depending on the structured output you want, you either bake some domain-specific knowledge into them or you make them learnable. Now let's see how the shapes work out. The query vectors have some dimensionality - call it X - and the only important thing is that it matches the key vectors, because we do dot products between them to form the attention scores. The important part is that the value vectors have dimensionality E - that's our target feature size, as you can see here - and we have O output rows, which is how the result ends up being O times E, our desired shape. Everything else is standard attention logic: we form the weights, use them to weight the value vectors, sum them up to form one output vector, and repeat the procedure for every output query. That's basically the main contribution of the paper; everything else is about trying this new Perceiver IO model on various tasks.
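Here is a minimal NumPy sketch of that decoding step - output queries cross-attending into the latents. Again, this is an illustrative toy with random stand-in weights, not the actual Perceiver IO code; note how the value projection alone sets the output feature size E:

import numpy as np

def attention(q, k, v):
    # q: (O, d), k: (N, d), v: (N, E) -> output: (O, E)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
N, D = 512, 256        # latent array produced by the encoder / latent Transformer
O, E = 100, 4          # desired output: O vectors of E features each (e.g. per-location logits)

latents = rng.normal(size=(N, D))
output_queries = rng.normal(size=(O, D))   # task-dependent: learned, or built from positions/labels

Wk = rng.normal(size=(D, D))
Wv = rng.normal(size=(D, E))               # value projection determines the output feature size E

outputs = attention(output_queries, latents @ Wk, latents @ Wv)
print(outputs.shape)                       # (100, 4): one row per query, regardless of N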
I'm going to cover a single one of those tasks: the GLUE language benchmark, where they compare against BERT. The results depend on whether they use tokenization - SentencePiece is a famous tokenization algorithm - or simply UTF-8 bytes, which means no tokenization at all: you just take the raw text and encode the characters as UTF-8 bytes. You can see they report results on par with BERT, but here's a catch: even though the FLOPs are matched here, the number of parameters is way bigger for Perceiver IO, so if that's something you care about, this is obviously not a completely fair comparison. And I guess this is where Perceiver IO shines: with UTF-8 bytes the input size is way larger, and because of that you have to reduce the depth of the BERT model, which means fewer parameters and a less expressive model, and you get a much bigger gap there. I find that very cool, because tokenization is literally a field in and of itself - being able to avoid that whole step by just using UTF-8 encoding, and still get results that are even better than using SentencePiece, is pretty damn amazing. Having said that, that's it for this video. If you found it useful, share it with a friend and subscribe to this channel, hit the bell icon, join the Discord community, and until next time - bye bye!
[{"start": 0.0, "end": 4.08, "text": " What's cracking guys in this video I'm covering Perceiver, general perception with iterative"}, {"start": 4.08, "end": 9.44, "text": " attention and at the end of the video I'm gonna contrast it with the new newly published Perceiver"}, {"start": 9.44, "end": 15.6, "text": " IO paper. It's a work by DeepMind team Andrew Jegel, Felikski Manow, Andrew Bruch, Andrew Tissaman,"}, {"start": 15.6, "end": 22.8, "text": " Oriol Vinales and Joao Caheira. So the idea is to generate this general perception architecture"}, {"start": 22.8, "end": 27.44, "text": " and make it modality agnostic and I'm gonna break that down in a little bit but let's see the"}, {"start": 27.44, "end": 33.84, "text": " motivation behind the paper. So the perception models used in deep learning are designed for"}, {"start": 33.84, "end": 38.64, "text": " individual modalities often relying on domain specific assumptions such as the local grid"}, {"start": 38.64, "end": 44.0, "text": " structures and I guess they're referring to CNN here but basically depending on which modality"}, {"start": 44.0, "end": 48.56, "text": " you're using in your deep learning pipeline if you have like an image or if you have an audio or if"}, {"start": 48.56, "end": 53.6, "text": " you have like a point cloud you're going to use heavily specialized architectures to kind of"}, {"start": 53.6, "end": 60.480000000000004, "text": " process and understand all of those modalities. The whole idea of this paper is to kind of"}, {"start": 60.480000000000004, "end": 65.68, "text": " ditch all of those specific priors which are modality dependent like for example when you're"}, {"start": 65.68, "end": 71.12, "text": " using a CNN you're basically exploiting the fact that neighboring pixels their intensities are"}, {"start": 71.12, "end": 75.36, "text": " correlated and so you can use a kernel that's like three by three and you can kind of shift it and"}, {"start": 75.36, "end": 81.04, "text": " share the weights whereby you're also exploiting the fact that the statistics will not change"}, {"start": 81.04, "end": 87.28, "text": " across the image and so they say it here nicely in this paper we introduce the perceiver a model"}, {"start": 87.28, "end": 91.76, "text": " that builds upon transformers and hence makes few architectural assumptions about the relationship"}, {"start": 91.76, "end": 97.2, "text": " between its inputs but it also scales to hundreds of thousands of inputs like comnets so on the one"}, {"start": 97.2, "end": 103.68, "text": " hand they're reusing transformers which are like proven to be very good at modeling like arbitrary"}, {"start": 103.68, "end": 109.84, "text": " relationships between the inputs and on the other hand they kind of avoided this quadratic"}, {"start": 109.84, "end": 115.12, "text": " complexity problem that transformers have and we're going to see how they did it and that's"}, {"start": 115.12, "end": 120.16, "text": " what enables them to apply this model to hundreds of thousands of inputs so let's dig into the"}, {"start": 120.16, "end": 126.96000000000001, "text": " architecture straight away here it is it basically as you can see here we have this latent array which"}, {"start": 126.96000000000001, "end": 132.96, "text": " is n times d dimensions we have this byte array which is basically our input modality so this"}, {"start": 132.96, "end": 138.0, "text": " thing here is going to be either your image or your audio or your video or whatnot so whatever you"}, {"start": 
138.0, "end": 143.6, "text": " have so let me kind of explain that using this image below here you can see various modalities"}, {"start": 143.6, "end": 149.52, "text": " we have like image here we have like a video we kind of they kind of sampled a couple frames"}, {"start": 149.52, "end": 155.04, "text": " from the video they have the audio here that's like i guess aligned with this video and we have"}, {"start": 155.04, "end": 160.24, "text": " a point cloud here so let me take one example and form the byte array like for this image so this"}, {"start": 160.24, "end": 166.88, "text": " image was going to be like i guess it's a image net image so it's 224 times 224 and that's like"}, {"start": 166.88, "end": 174.96, "text": " 50k pixels okay so basically how they're gonna flattened out and form a 50k so there will be m"}, {"start": 174.96, "end": 181.04, "text": " i guess so that's the m component of the byte array as you can see here so we had m times c"}, {"start": 181.6, "end": 186.96, "text": " and then the c is going to be because this is an rgb image it's going to be m times three and that's"}, {"start": 186.96, "end": 192.24, "text": " going to be i mean 50k times three and as you can see this is a like a pretty big input that"}, {"start": 192.24, "end": 197.52, "text": " transformers cannot handle by themselves and similarly for the video they're gonna basically"}, {"start": 197.52, "end": 203.84, "text": " concatenate the features the input features from the images as well as from the audio and they're"}, {"start": 203.84, "end": 209.28, "text": " going to form a single m time c array again so that's this part now let's see how the model"}, {"start": 209.28, "end": 213.68, "text": " works so basically you have this latent array and you use it to form these query vectors so this is"}, {"start": 213.68, "end": 221.60000000000002, "text": " your regular q k v attention pattern and what you do you form queries from this latent array"}, {"start": 221.6, "end": 227.04, "text": " and you form keys and values from your input and so you're basically in a way questioning the input"}, {"start": 227.04, "end": 233.04, "text": " for valuable information and then you're combining value vectors using those scores you formed using"}, {"start": 233.04, "end": 238.79999999999998, "text": " the q and k matrices to get like a novel representation here so this is the first"}, {"start": 238.79999999999998, "end": 244.32, "text": " component of the pipeline you have basically this cross attention module this is the first part then"}, {"start": 244.32, "end": 250.07999999999998, "text": " the second important part is once you have this like a vector that contains certain information"}, {"start": 250.08, "end": 255.44000000000003, "text": " from your input modality you're then going to apply your like basically the original transformer"}, {"start": 256.16, "end": 259.92, "text": " like block here you're gonna just kind of concatenate a couple of those blocks and form"}, {"start": 259.92, "end": 266.0, "text": " this latent transformer and that's the basically the second component of the pipeline now what I"}, {"start": 266.0, "end": 270.8, "text": " do is just kind of repeat this thing so as you can see here and now we again we form the query"}, {"start": 270.8, "end": 278.72, "text": " vectors and we again query the keys here and we we form another representation here so how you can"}, {"start": 278.72, "end": 286.32000000000005, "text": " think about this is basically the latent transformers are like 
kind of refining the representation so"}, {"start": 286.32000000000005, "end": 291.92, "text": " that we can ask better questions to the input I mean ask better questions to the input modality"}, {"start": 291.92, "end": 297.44000000000005, "text": " and form more informative representations and then at the end we can kind of use that for whatever"}, {"start": 297.44000000000005, "end": 303.52000000000004, "text": " specific task we are having at hand so basically classification or not so that's the the main idea"}, {"start": 303.52, "end": 309.28, "text": " I mean it's fairly simple and one detail that's worth mentioning here is that they are sharing"}, {"start": 309.28, "end": 314.47999999999996, "text": " weights between these latent transformers so as you can see here these dotted this this dashed"}, {"start": 314.47999999999996, "end": 319.44, "text": " line means that this latent transformer has the same weights as this one and they're doing the"}, {"start": 319.44, "end": 325.12, "text": " same thing for for the cross attention now there is a small and important detail I guess they are"}, {"start": 325.12, "end": 329.84, "text": " actually for this cross attention the first one is going to have its own weights and then all of"}, {"start": 329.84, "end": 334.32, "text": " the other cross attention modules are going to share the weights because otherwise they had some"}, {"start": 334.32, "end": 341.44, "text": " instability problems okay so if you watch some of my previous videos like I covered Facebook"}, {"start": 341.44, "end": 347.35999999999996, "text": " Detre model in one of my previous like videos and I think it's a nice comparison to contrast"}, {"start": 347.35999999999996, "end": 353.44, "text": " that this this perceiver with that model so we could see many basic ideas and and and like building"}, {"start": 353.44, "end": 360.32, "text": " blocks in this Detre model so as you can see here so this plays a role of basically this part of the"}, {"start": 360.32, "end": 366.56, "text": " latent array so you have this object queries and then you have a basically transformer decoder so"}, {"start": 366.56, "end": 373.2, "text": " that's the same thing as this latent transformer here and as you can see here this is if we treat"}, {"start": 373.2, "end": 380.56, "text": " this as the byte array this part this representation here basically then we have a similar structure"}, {"start": 380.56, "end": 386.64, "text": " all in all so obviously they've replaced this whole thing here so instead of having these specific"}, {"start": 386.64, "end": 392.4, "text": " cnm parts and which are obviously modality specific and then doing this intricate stuff"}, {"start": 392.4, "end": 396.8, "text": " they just kind of flatten out the inputs they do the cross attention and then they apply the"}, {"start": 396.8, "end": 401.68, "text": " transformers but yeah I thought it would be neat to kind of contrast that to to the perceiver"}, {"start": 402.24, "end": 406.64, "text": " module so again if you kind of ditch this part and you just use your input directly here"}, {"start": 406.64, "end": 413.2, "text": " and then the only other difference would be I guess interleaving these latent transformers"}, {"start": 413.2, "end": 417.84, "text": " to kind of refine the representations and here they are constantly in every single layer they're"}, {"start": 417.84, "end": 425.12, "text": " kind of consulting the input like modality so yeah I thought it would be neat to kind of mention this"}, {"start": 
425.12, "end": 430.64, "text": " okay let's continue let me share a couple more interesting ideas here they mention it here"}, {"start": 431.28, "end": 435.91999999999996, "text": " because we optionally share weights between each instance of the transformer tower so that's the"}, {"start": 435.92, "end": 440.64000000000004, "text": " so that's the latent transformer and between all the all instances of the cross attention module"}, {"start": 440.64000000000004, "end": 446.0, "text": " but the first our model can be interpreted as a recurrent neural network but unrolled in depth"}, {"start": 446.0, "end": 452.40000000000003, "text": " using the same input so the same input rather than in time so unrolled in depth and using the same"}, {"start": 452.40000000000003, "end": 458.32, "text": " inputs so those are the differences so basically a quick like a recap we have a when we have rnn we"}, {"start": 458.32, "end": 465.28000000000003, "text": " have a basically nn here so we have a like a our model here we input at time step one like whatever"}, {"start": 465.28, "end": 471.59999999999997, "text": " the the the modality may be here so for example like a textual token or something what we do then"}, {"start": 471.59999999999997, "end": 478.64, "text": " is we unroll this so we apply basically the weights here are shared and then we apply another input so"}, {"start": 478.64, "end": 485.59999999999997, "text": " like a different token and then we repeat this like on and on and so the difference here is instead"}, {"start": 485.59999999999997, "end": 490.4, "text": " of using different inputs they're using constantly the same input modality so that's the byte array"}, {"start": 490.4, "end": 496.08, "text": " here so let me go back here and so secondly they are obviously not unrolling in time they are"}, {"start": 496.08, "end": 502.15999999999997, "text": " unrolling in depth so we have these powerful latent transformers where we are unrolling them"}, {"start": 502.15999999999997, "end": 506.4, "text": " in depth so i guess there is a nice mapping between these two ideas and i guess this thing"}, {"start": 506.4, "end": 511.84, "text": " but the first one so this kind of a symmetry between not sharing the weights in the weights"}, {"start": 511.84, "end": 517.84, "text": " in the first cross attention module with the other ones is kind of worth further investigation"}, {"start": 517.84, "end": 522.0, "text": " um continuing on here second thing i want to mention here and this is fairly important this"}, {"start": 522.0, "end": 526.96, "text": " is what enables us to use a perceiver directly on the input modality without doing all of those"}, {"start": 526.96, "end": 533.2, "text": " tricks like for example the vision transformer had to first do like one layer of convolution"}, {"start": 533.76, "end": 539.2, "text": " or how they kind of originally called that is to take the patches and linearly project them but"}, {"start": 539.2, "end": 544.0, "text": " you can implement that using a single convolutional layer and only then when you reduce the spatial"}, {"start": 544.0, "end": 549.84, "text": " dimensions so you basically have a much less number of tokens you can then pass it them through the"}, {"start": 549.84, "end": 554.08, "text": " you can then pass them through the transformer so in order to avoid that we need to reduce the"}, {"start": 554.08, "end": 559.04, "text": " complexity and here is the the main reason why perceiver works this results in architecture"}, {"start": 559.04, 
"end": 566.96, "text": " with complexity m times n plus l times n squared and this is the key so m is the size of the input"}, {"start": 566.96, "end": 572.4, "text": " modality and n is the size of the latent array and they usually use something like 512 here"}, {"start": 572.4, "end": 579.84, "text": " 512 here and if you remember we had 50k for the rgb image from image net so there is a huge"}, {"start": 579.84, "end": 585.28, "text": " difference there and we can save a lot of computation just here because we don't have a quadratic like"}, {"start": 585.92, "end": 591.52, "text": " dependency on m and this thing here is l is just a number of layers in the latent transformer"}, {"start": 591.52, "end": 597.92, "text": " and n is the again like latent dimension and so it's n squared because we are using the transformer"}, {"start": 597.92, "end": 603.5999999999999, "text": " but this time we're using the transformer and we're feeding as the input those latent variables"}, {"start": 603.5999999999999, "end": 608.8, "text": " instead of using the input modality so this is how they manage to save the computation and this"}, {"start": 608.8, "end": 614.0799999999999, "text": " is the whole trick and so by decoupling the input size and the depth we can add additional"}, {"start": 614.0799999999999, "end": 618.9599999999999, "text": " transformer layers at a cost that's independent of the input size and that's cool so we don't"}, {"start": 618.9599999999999, "end": 624.3199999999999, "text": " have m squared we have n squared so basically the the main thing to do to reduce the complexity is"}, {"start": 624.32, "end": 631.9200000000001, "text": " to have this projection of the input modality into this latent space by doing this query kqv"}, {"start": 631.9200000000001, "end": 639.0400000000001, "text": " basically attention and reducing using this as a bottleneck one final remark is let me mention the"}, {"start": 639.0400000000001, "end": 644.08, "text": " positional encodings so the reason we have to use position encodings is the same as the reason why"}, {"start": 644.08, "end": 648.8800000000001, "text": " we had to use the positional encodings in the original transformer in the first place and that's"}, {"start": 648.88, "end": 654.64, "text": " because transformers are permutation actually equivalent not invariant that means that if you"}, {"start": 654.64, "end": 660.96, "text": " have an input here and if you rearrange it so this is some tokens and if you rearrange them"}, {"start": 661.6, "end": 667.04, "text": " your output is going to be rearranged in the same fashion and that's why it's equivalent not"}, {"start": 667.04, "end": 672.4, "text": " invariant if it was invariant then that means whatever however we permute the input sequence"}, {"start": 673.04, "end": 678.24, "text": " the outputs would remain the same and if it's equivalent then it's kind of following the pattern"}, {"start": 678.24, "end": 685.52, "text": " of the input permutation at the output hopefully that's clear so basically they're using same"}, {"start": 685.52, "end": 691.12, "text": " Fourier features as in the original transformer paper and they mentioned some small detail here"}, {"start": 691.12, "end": 695.52, "text": " so in language modeling transformer inputs are typically produced by adding a position encoding"}, {"start": 695.52, "end": 700.96, "text": " to the input encoding so by adding them and we found it beneficial to instead concatenate"}, {"start": 700.96, "end": 705.12, "text": " the 
position and input features before passing them into the perceiver so this is just a minor"}, {"start": 705.12, "end": 709.44, "text": " detail obviously but like i just wanted to to mention that they have to add this in a way"}, {"start": 709.44, "end": 716.64, "text": " modality specific positional encodings but that's a minor like prior compared to having to actually"}, {"start": 716.64, "end": 720.8, "text": " tweak the architecture to deal with the modality okay those are pretty much all of the details you"}, {"start": 720.8, "end": 725.52, "text": " need to know about the perceiver now let's see some results they did experiments on various modalities"}, {"start": 725.52, "end": 731.44, "text": " one of them were images they also did on audio on video and on point clouds let's start with images"}, {"start": 731.44, "end": 738.4000000000001, "text": " so they have results here let me go back here and as you can see here they compare the perceiver"}, {"start": 738.4000000000001, "end": 744.8000000000001, "text": " with like the simplest baseline such as resonant 50 vat so the vision transformer here they"}, {"start": 744.8000000000001, "end": 750.0, "text": " additionally add the ff means Fourier features they basically concatenate the Fourier features"}, {"start": 750.0, "end": 756.96, "text": " along all of the pixels and then they pass that through the resonant and vat and so that's the"}, {"start": 756.96, "end": 761.6, "text": " second block and finally they have the basically the original transformer and the only thing they"}, {"start": 761.6, "end": 769.6800000000001, "text": " had to do is tweak so resize the input image from 224 to 64 times 64 because as we know"}, {"start": 769.6800000000001, "end": 773.84, "text": " quadratic complexity before i even comment the results and the drop we have here"}, {"start": 774.5600000000001, "end": 780.08, "text": " you need to be cognizant of the next fact and that's that these are the simplest baselines if we"}, {"start": 780.08, "end": 786.08, "text": " took a soda so the state-of-the-art model on image net without no pre so with no pre-training and"}, {"start": 786.08, "end": 792.08, "text": " those would be nf nets and efficient nets we too we can see that so basically the the top one"}, {"start": 792.08, "end": 798.8000000000001, "text": " accuracy is 86.5 which is way higher compared to what perceiver achieved here and then efficient"}, {"start": 798.8000000000001, "end": 804.96, "text": " nets we too additionally like with no pre-training they achieve roughly this this accuracy but they"}, {"start": 804.96, "end": 809.84, "text": " are also way more efficient obviously both parameter wise and like computation wise etc so the reason"}, {"start": 809.84, "end": 815.2800000000001, "text": " i'm mentioning this is because they are not trying to compare a perceiver with the highly specialized"}, {"start": 815.28, "end": 820.9599999999999, "text": " modality specific architectures they're just trying to show that they are as good or even a bit better"}, {"start": 820.9599999999999, "end": 827.36, "text": " than the regular baselines and so now let's see the results we can see that it's obviously better"}, {"start": 827.36, "end": 834.3199999999999, "text": " than both the resonant and the vat and we can see a significant drop here so from 77 to 73 and a"}, {"start": 834.3199999999999, "end": 839.76, "text": " smaller drop from for the vat a quick remark on this drop for resonant 50 when we had the Fourier"}, {"start": 839.76, "end": 
845.84, "text": " the Fourier features basically the author is mentioned here to account for the perceivers use"}, {"start": 845.84, "end": 850.3199999999999, "text": " of features at input we train versions of the benchmark models with this input as well and"}, {"start": 850.3199999999999, "end": 854.96, "text": " found that they produce comparable if slightly worse performance to models trained solely on rgb"}, {"start": 854.96, "end": 861.52, "text": " input and i mean this slightly worse is a very arguable statement because i mean dropping from"}, {"start": 861.52, "end": 868.96, "text": " 77 to 73 four percent in top one seems like a significant drop and i would love to know why"}, {"start": 868.96, "end": 873.6, "text": " and i think i would give this another look whether there is some bug here or i don't know"}, {"start": 873.6, "end": 878.72, "text": " looks suspicious to me in any case anyways let me show you some other interesting results so"}, {"start": 879.44, "end": 885.2, "text": " these ones here so i mentioned the permutation active variants of of the transformers and as"}, {"start": 885.2, "end": 891.84, "text": " well as perceiver iode obviously and so they did a small experiment here to kind of further"}, {"start": 891.84, "end": 898.64, "text": " consolidate this idea of using modality agnostic like architecture such as perceiver i.o. and the"}, {"start": 898.64, "end": 904.96, "text": " whole point is so what it did is they took a raw image and we have scores here to top one accuracy"}, {"start": 904.96, "end": 909.84, "text": " and then they do did this permutation and you see you can see that there is a significant drop"}, {"start": 909.84, "end": 916.64, "text": " happening here as well as for vat although some somewhat smaller obviously and here we have like"}, {"start": 916.64, "end": 922.16, "text": " same results and let me just quickly explain what this permutation means so what i do is they take"}, {"start": 922.16, "end": 927.12, "text": " an image net image let me draw a simplified image there will be three by three pixels so something"}, {"start": 927.12, "end": 937.04, "text": " like this one two three one two three okay and i'm going to label these pixels as so one two three"}, {"start": 937.04, "end": 947.1999999999999, "text": " two three four five six etc all the way until now nine okay so what it did is they basically just"}, {"start": 948.24, "end": 956.0799999999999, "text": " predefined like a like a like a deterministic permutation mapping so for example one possible"}, {"start": 956.7199999999999, "end": 965.04, "text": " solution would be something like this like i would put like five here and then one and then like"}, {"start": 965.04, "end": 970.88, "text": " three or whatnot and so what it means basically is that that the transformation they they do upon"}, {"start": 970.88, "end": 977.52, "text": " the input images is they'll basically shift this pixel intensity whatever it is to this location"}, {"start": 977.52, "end": 983.92, "text": " here they'll shift this pixel intensity to this location here and they do this for every single"}, {"start": 983.92, "end": 989.68, "text": " pixel according to this permutation table and so after and they also include as you can see here"}, {"start": 989.68, "end": 996.7199999999999, "text": " the Fourier features because to make it a bit more fair but like obviously because cnns are used to"}, {"start": 997.52, "end": 1003.3599999999999, "text": " exploiting the local correlations and now here it's kind 
of that that kind of assumption is"}, {"start": 1003.3599999999999, "end": 1008.88, "text": " violated because from a natural image here you end up with something that's obviously artificial"}, {"start": 1008.88, "end": 1014.8, "text": " and so you can see a significant drop happening for the cnn architecture vat performs somewhat"}, {"start": 1014.8, "end": 1021.28, "text": " better although there is a drop there as well so the reason is because the the first layer as i"}, {"start": 1021.28, "end": 1027.2, "text": " mentioned of the vat is basically a cnn transform and then we have transformer blocks so that's why"}, {"start": 1027.2, "end": 1033.28, "text": " we have because of that i guess because of the cnn part i guess that's why we we suffer the"}, {"start": 1033.28, "end": 1038.1599999999999, "text": " performance suffers here and because these are permutation active variants so both the original"}, {"start": 1038.1599999999999, "end": 1044.32, "text": " transformer as well as the perceiver like ignoring the whether the the positional encodings are"}, {"start": 1044.32, "end": 1050.8, "text": " learned or just Fourier ones they still kind of preserve the performance okay so that was a quick"}, {"start": 1050.8, "end": 1057.28, "text": " remark and experiment they did and this kind of further emphasizes the the need to have these"}, {"start": 1058.0, "end": 1063.28, "text": " generic architecture such as perceiver okay continuing on let me show you these they had"}, {"start": 1063.28, "end": 1070.8, "text": " some nice attention maps visualized here so what you see here is basically this image here you take"}, {"start": 1070.8, "end": 1077.28, "text": " the first cross attention block from the perceiver model and you basically take the so again you have"}, {"start": 1077.28, "end": 1085.84, "text": " a latent vector here so this will be n dimensional and the modality is this one here so we have m"}, {"start": 1085.84, "end": 1091.04, "text": " pixels here so what they do is they take a query so a single query so maybe they form from this"}, {"start": 1091.04, "end": 1095.76, "text": " position they form so from this feature vector latent feature vector they form a query vector"}, {"start": 1095.76, "end": 1101.68, "text": " and they use that very vector to query all of the keys of this image and that's how they form all"}, {"start": 1101.68, "end": 1109.52, "text": " of these visualizations okay so basically these are just attention scores prior to soft max so"}, {"start": 1109.52, "end": 1115.12, "text": " again every single pixel here will have its own key vector and we're going to do like dot product"}, {"start": 1115.12, "end": 1121.2, "text": " between this query and all of the keys for every pixel and that's how we form these scores and so"}, {"start": 1121.2, "end": 1126.4, "text": " you can see there is a qualitative change happening between this one and these ones so that's the"}, {"start": 1126.4, "end": 1132.16, "text": " reason is because as i mentioned they have unique weights for this first cross attention module and"}, {"start": 1132.16, "end": 1137.8400000000001, "text": " they share the weights for all of the other modules and they're using actually eight eight"}, {"start": 1137.8400000000001, "end": 1144.56, "text": " cross attention modules for the image net model that means they'll be consulting the input modality"}, {"start": 1144.56, "end": 1149.8400000000001, "text": " i.e. 
the the image eight times and then they'll have obviously the refinements using the latent"}, {"start": 1149.84, "end": 1154.8799999999999, "text": " transformers but you can see a qualitative change between these ones and so because these weights"}, {"start": 1154.8799999999999, "end": 1157.9199999999998, "text": " these two share the same weights whereas this has separate set of weights"}, {"start": 1159.84, "end": 1165.28, "text": " and you can see obviously qualitative change so here you can even like notice a dog image even"}, {"start": 1165.28, "end": 1170.48, "text": " though these are only scores i'm fairly sure this is not overlaid on top of the image i'm fairly sure"}, {"start": 1170.48, "end": 1175.04, "text": " this just pops up when you do the when you calculate these scores and here you can see these"}, {"start": 1175.04, "end": 1180.8799999999999, "text": " types of 2d sinusoids i guess some complex 2d signals which kind of resemble the Fourier"}, {"start": 1181.6, "end": 1187.44, "text": " features in a way and so they speculate that that's because they're using Fourier features"}, {"start": 1187.44, "end": 1192.32, "text": " as the positional embeddings because once they use they learn once they mentioned that they"}, {"start": 1192.32, "end": 1196.8, "text": " don't see these patterns anymore so i guess it's it heavily correlates with the use of"}, {"start": 1196.8, "end": 1204.0, "text": " Fourier features and here is just kind of like the same thing broken down for multiple"}, {"start": 1204.0, "end": 1210.72, "text": " query vectors so we just saw a single query vector remember there are 512 of these for the image net"}, {"start": 1211.36, "end": 1216.64, "text": " task and so they can have 512 of these visualizations they just have 100 here and"}, {"start": 1216.64, "end": 1221.92, "text": " a couple of other ones here i guess you can just see they correlate heavily these ones with these"}, {"start": 1221.92, "end": 1228.32, "text": " ones obviously continuing on to other modalities they did a same set of experiments on this audio"}, {"start": 1228.32, "end": 1236.1599999999999, "text": " set data set so basically audio set contains a bunch of data and it's videos combined with audio"}, {"start": 1236.1599999999999, "end": 1244.08, "text": " obviously and they have some baselines here and they compare a perceiver with those baselines"}, {"start": 1244.8, "end": 1250.72, "text": " using only audio as the modality using only video and using both the audio and video they basically"}, {"start": 1250.72, "end": 1257.4399999999998, "text": " just concatenate the data from audio and video into that m times c byte array so the details are not"}, {"start": 1257.44, "end": 1262.24, "text": " that important in my opinion so but yeah a couple of interesting facts here so first perceiver is kind"}, {"start": 1262.24, "end": 1268.88, "text": " of um um pair or better compared to all of these other baselines including resin at 50 again uh but"}, {"start": 1268.88, "end": 1274.96, "text": " like this c9 14 which is super specialized has a bit higher score and once they kind of drop domain"}, {"start": 1274.96, "end": 1280.0800000000002, "text": " specific information then they are now kind of better i mean you can always make your model"}, {"start": 1280.0800000000002, "end": 1285.1200000000001, "text": " perform better than the baselines if you do this kind of stuff but yeah as i mentioned previously"}, {"start": 1285.12, "end": 1290.6399999999999, "text": " they're trying to 
compare with baselines and show that this same model with some small tweaks like"}, {"start": 1290.6399999999999, "end": 1296.8799999999999, "text": " specific augmentations and specific positional encodings they can kind of have on-pair performance"}, {"start": 1296.8799999999999, "end": 1300.1599999999999, "text": " with the baselines across different modalities and that's the whole point of the paper"}, {"start": 1301.04, "end": 1307.6, "text": " on video we can see it performs better than these two baselines here and finally for the combination"}, {"start": 1307.6, "end": 1312.4799999999998, "text": " we can see that obviously using both modalities helps boost the performance so it's way better"}, {"start": 1312.48, "end": 1320.08, "text": " compared to using only video or comparing only audio or using only audio and again this baseline"}, {"start": 1320.08, "end": 1326.0, "text": " that has this specific uh like a late stage fusion has a bit better performance but again not that"}, {"start": 1326.0, "end": 1331.52, "text": " important for me what's interesting is that the the the difference between raw audio and male"}, {"start": 1331.52, "end": 1338.08, "text": " spectrogram when we're using only the audio modality is like very similar similar and usually"}, {"start": 1338.08, "end": 1342.8, "text": " you get like a huge boost by using these spectrograms so i guess this is something that's"}, {"start": 1342.8, "end": 1348.0, "text": " worth further investigation and something i found interesting yeah a small rant here"}, {"start": 1348.8799999999999, "end": 1356.6399999999999, "text": " they mentioned this so for video um basically a full 32 frame clip at this resolution has more"}, {"start": 1356.6399999999999, "end": 1362.3999999999999, "text": " than 2 million pixels and that's obviously not something that's feasible even with a model such"}, {"start": 1362.4, "end": 1368.5600000000002, "text": " as perceiver so they mentioned here they experimented using tiny space time patches with dimensions"}, {"start": 1368.5600000000002, "end": 1375.76, "text": " two times eight times eight resulting in a total of 12 000 inputs to the perceiver so this"}, {"start": 1375.76, "end": 1382.88, "text": " resembles the v what vat is doing a lot and so they were kind of comparing themselves with vat"}, {"start": 1383.44, "end": 1389.0400000000002, "text": " and here they're doing basically the same thing a hack on how to reduce the input modality into"}, {"start": 1389.04, "end": 1393.76, "text": " uh using domain specific knowledge into smaller modality and then input it back into perceiver so"}, {"start": 1393.76, "end": 1401.92, "text": " i i assume like this does not look consistent with the rest of the ideas in this paper and yeah"}, {"start": 1402.48, "end": 1407.04, "text": " uh just leave your comments what what are your thoughts on this and whether whether this is"}, {"start": 1408.0, "end": 1413.84, "text": " violating the whole idea of of the perceiver model and finally i'm just going to end it up here"}, {"start": 1413.84, "end": 1420.48, "text": " showing the results on the uh basically uh point cloud classification the goal here is given 3d"}, {"start": 1420.48, "end": 1426.0, "text": " point cloud to classify an object into certain category and again you can see vats resonance"}, {"start": 1426.0, "end": 1430.8, "text": " and transformer perceiver outperforms all of them but again when you have highly specialized"}, {"start": 1430.8, "end": 1436.32, "text": " architecture such as 
this point net plus bus it's way better compared to perceiver by at least six"}, {"start": 1436.32, "end": 1440.3999999999999, "text": " points here i'm going to wrap up this paper with this sentence so they mentioned it here"}, {"start": 1440.4, "end": 1444.4, "text": " while we reduced the amount of modality specific prior knowledge in the model we still employ"}, {"start": 1444.4, "end": 1450.4, "text": " modality specific augmentation and position encoding and we also just saw the trick they"}, {"start": 1450.4, "end": 1457.3600000000001, "text": " are doing to make video modality feasible and so this is something to to to have in mind it's we're"}, {"start": 1457.3600000000001, "end": 1463.52, "text": " still we're still definitely struggling uh with videos finally before i contrast this paper with"}, {"start": 1463.52, "end": 1468.64, "text": " perceiver io with the novel paper i want to mention one more thing uh that i saw in the"}, {"start": 1468.64, "end": 1473.68, "text": " ablations here so let me just find it so what they did here is they basically omitted as they say"}, {"start": 1473.68, "end": 1478.88, "text": " here performance of models built from a stack of cross attention layers with no latent transformers"}, {"start": 1478.88, "end": 1484.48, "text": " and now this is way more equivalent to what the debtor facebook debtor model is doing and you"}, {"start": 1484.48, "end": 1489.76, "text": " can see that the accuracy is way lower compared to using those latent transformers to refine the"}, {"start": 1489.76, "end": 1496.8000000000002, "text": " representation so that's this is a kind of argument behind the usage of latent transformers okay now"}, {"start": 1496.8, "end": 1501.6, "text": " let me quickly contrast this paper with perceiver io okay perceiver io a general architecture for"}, {"start": 1501.6, "end": 1507.44, "text": " structured inputs and outputs again the deep mind team and they say here while the perceiver supports"}, {"start": 1507.44, "end": 1513.36, "text": " many kinds of inputs it can only produce very simple outputs such as class scores perceiver io"}, {"start": 1513.36, "end": 1518.24, "text": " overcomes this limitation without sacrificing the originals appealing properties by learning"}, {"start": 1518.24, "end": 1523.52, "text": " to flexibly query the model's latent space to produce outputs of arbitrary size and semantics"}, {"start": 1523.52, "end": 1529.52, "text": " so in the original paper all of the tasks we saw so image net audio set and those point clouds all"}, {"start": 1529.52, "end": 1534.4, "text": " of those were classification tasks and so we had a simple uh like the output structure was very"}, {"start": 1534.4, "end": 1540.32, "text": " simple when you have more complex uh problems such as trying to make an agent for starcraft 2"}, {"start": 1540.32, "end": 1544.6399999999999, "text": " then you have much more structured outputs so the output may be something like this like you have"}, {"start": 1544.6399999999999, "end": 1549.52, "text": " one array that tells you which agent you should select so which of the characters you should"}, {"start": 1549.52, "end": 1554.56, "text": " select and then once you select the agent you have like a matrix that tells you what's the"}, {"start": 1554.56, "end": 1560.0, "text": " probability of moving to all of these locations so maybe you have something like four by four matrix"}, {"start": 1560.0, "end": 1565.84, "text": " and basically you you have a probability of where the current 
character the selected character should"}, {"start": 1565.84, "end": 1571.52, "text": " move and so you have like complex structured output and the the thing is the original perceiver"}, {"start": 1571.52, "end": 1576.0, "text": " even though it was very flexible with the input it wasn't as flexible with the output and so basically"}, {"start": 1576.0, "end": 1580.48, "text": " what this paper does and they mentioned it here and they said here our main technical contribution"}, {"start": 1580.48, "end": 1585.92, "text": " is perceiver ios decoding procedure so let's see what they mean by that here is the architecture"}, {"start": 1585.92, "end": 1590.08, "text": " let me just kind of contrast this one with the original perceiver paper so this part is the same"}, {"start": 1590.08, "end": 1595.76, "text": " so we have latent array n times d we have the input byte array m times c we just do this cross"}, {"start": 1595.76, "end": 1600.56, "text": " attention module we do the latent transformer part so this is the latent transformer part this is the"}, {"start": 1600.56, "end": 1606.96, "text": " cross attention part and we just repeat that part rinse and repeat and that's the same thing as with"}, {"start": 1606.96, "end": 1612.32, "text": " the original perceiver so the novel thing here is this output query array so what you do is depending"}, {"start": 1612.32, "end": 1618.96, "text": " on your desired outputs you have like an output array that maybe has like o vectors and every vector"}, {"start": 1618.96, "end": 1627.04, "text": " has e features so what you would do is form o query vectors and basically again do across like"}, {"start": 1627.04, "end": 1633.12, "text": " attention logic and so because you have o vectors here you're going to end up with o vectors here"}, {"start": 1633.12, "end": 1636.56, "text": " zooming in into the decoder head because this is the main contribution of the paper"}, {"start": 1637.36, "end": 1643.44, "text": " let me kind of like break it down a little bit more first things first this these query vectors"}, {"start": 1643.44, "end": 1648.08, "text": " are going to be task dependent so depending on the structured output you want to achieve"}, {"start": 1648.6399999999999, "end": 1654.1599999999999, "text": " you'll be either ingraining some integrating some domain specific knowledge here or you're going to"}, {"start": 1654.16, "end": 1660.5600000000002, "text": " make these learnable let's see how the shapes work out here so we know we have basically all of these"}, {"start": 1660.5600000000002, "end": 1665.0400000000002, "text": " and their dimension the dimensionality is not that important it's going to be like x and the"}, {"start": 1665.0400000000002, "end": 1670.0, "text": " only important thing is that they match with like key vectors because we're going to do dot products"}, {"start": 1670.0, "end": 1675.2, "text": " here in order to form the attention scores now the important part is that the value vectors so these"}, {"start": 1675.2, "end": 1681.28, "text": " here have the dimensionality e if that's our target as you can see here so he's our target so we want"}, {"start": 1681.28, "end": 1686.96, "text": " to have the dimensionality of these value vectors should be e and we have we need to have all of"}, {"start": 1686.96, "end": 1694.0, "text": " these and that's how we make sure that this thing here is o times e and that's our desired shape so"}, {"start": 1694.0, "end": 1700.3999999999999, "text": " basically everything else is simple 
attention logic we form the weights here we use these to"}, {"start": 1700.3999999999999, "end": 1706.3999999999999, "text": " kind of weight these value vectors we add them up and we form via these two weights and via"}, {"start": 1706.4, "end": 1712.5600000000002, "text": " uh basically these two value vectors we form this vector here and we just repeat a procedure for"}, {"start": 1712.5600000000002, "end": 1717.68, "text": " all of the output vectors so that's basically the main contribution of the paper everything else is"}, {"start": 1717.68, "end": 1725.52, "text": " about trying this new perceiver io module model on various different tasks and i'm going to discover"}, {"start": 1725.52, "end": 1732.96, "text": " a single one and that's the language uh like uh like the glue benchmark and they compare with"}, {"start": 1732.96, "end": 1739.04, "text": " bird piece and you can see here depending whether they use tokenization so sentence piece"}, {"start": 1739.04, "end": 1746.24, "text": " is a like a famous tokenization algorithm or simply using utf-8 bytes that means no tokenization at"}, {"start": 1746.24, "end": 1753.2, "text": " all you just take the raw text and you encode the characters as utf-8 bytes and so you can see that"}, {"start": 1753.2, "end": 1758.8, "text": " they can report uh results of their own pair with bird but here's a catch i mean even though"}, {"start": 1758.8, "end": 1765.04, "text": " they're kind of making the the flop similar here uh the number of parameters is obviously way way"}, {"start": 1765.04, "end": 1770.0, "text": " bigger for perceiver io so if that's something you care about this is obviously not going to be"}, {"start": 1771.12, "end": 1776.0, "text": " like a fair comparison right and i guess this is where perceiver io shines the reason is we have"}, {"start": 1776.0, "end": 1781.6, "text": " when we have utf-8 the input size is going to be way larger and then because of that you have to"}, {"start": 1781.6, "end": 1786.8799999999999, "text": " kind of reduce the depth of the bird model and that means you have less parameters less expressive"}, {"start": 1786.88, "end": 1792.5600000000002, "text": " model and you get way bigger gap here and i find that very cool because just using"}, {"start": 1794.16, "end": 1800.0, "text": " utf-8 encoding instead of having to devise these intricate tokenization methods and tokenization"}, {"start": 1800.0, "end": 1805.8400000000001, "text": " is literally a field in and of itself so have like being able to avoid that whole step and still"}, {"start": 1805.8400000000001, "end": 1811.5200000000002, "text": " manage to have these results like even better than using sentence piece is pretty pretty damn"}, {"start": 1811.5200000000002, "end": 1815.8400000000001, "text": " amazing having said that that's it for this video if you found it useful share it out with a friend"}, {"start": 1815.84, "end": 1822.24, "text": " and subscribe to this channel hit the bell icon join the discord community and until next time bye"}, {"start": 1822.24, "end": 1846.72, "text": " bye"}]
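The transcript segments above describe the Perceiver's key trick: a small learned latent array cross-attends to a large flattened input "byte array", so the cost becomes O(M·N + L·N²) instead of O(M²). Below is a minimal sketch of that cross-attention bottleneck; it is my own illustration with toy shapes and random weights, not DeepMind's implementation.

```python
# Minimal sketch of a Perceiver-style latent cross-attention bottleneck.
# A small latent array (N x D) queries a large flattened input "byte array" (M x C), so the
# attention score matrix is N x M, i.e. linear in the input size M; the deep latent
# transformer that follows costs O(L * N^2), independent of M. Shapes and weights are toy values.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, byte_array, Wq, Wk, Wv):
    """latents: (N, D) learned latent array; byte_array: (M, C) flattened input modality."""
    q = latents @ Wq                              # (N, d)
    k = byte_array @ Wk                           # (M, d)
    v = byte_array @ Wv                           # (M, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (N, M) -- no M x M term ever appears
    return softmax(scores, axis=-1) @ v           # (N, d): the input distilled into N latents

rng = np.random.default_rng(0)
M, C, N, D, d = 5_000, 3, 512, 64, 64             # a real ImageNet image would have M ~ 50k pixels
byte_array = rng.normal(size=(M, C))
latents = rng.normal(size=(N, D))
Wq, Wk, Wv = rng.normal(size=(D, d)), rng.normal(size=(C, d)), rng.normal(size=(C, d))
print(cross_attention(latents, byte_array, Wq, Wk, Wv).shape)   # (512, 64)
```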
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=a6WCZn7kOhk
ETA Prediction with Graph Neural Networks in Google Maps | Paper Explained
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover the "ETA Prediction with Graph Neural Networks in Google Maps" paper. An awesome new real-world application of GNNs! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2108.11482 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro - GNNs in production 02:40 How graphs are formed 05:30 Graph features 07:10 GNN explained (DeepMind GN) 12:40 Different horizons 14:30 Loss functions 18:45 Reducing the variance 21:50 ETA baselines explained 25:30 How does the inference work 28:20 Offline results 30:45 Ablations and experiments 37:30 Outro, engineering ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #googlemaps #gnns #graphml
What's cracking guys, in this video I'm covering a super exciting paper called "ETA Prediction with Graph Neural Networks in Google Maps". It's yet another application of graph neural networks in a real-world setting: previously on this channel I covered PinSage, an application of GraphML to recommender systems, and here GNNs are used to predict the ETA, the estimated time of arrival, which you probably already know about if you have ever used Google Maps. The model runs in production in Google Maps and achieves impressive results: "our GNN proved powerful when deployed, significantly reducing negative ETA outcomes in several regions compared to the previous production baseline, up to 40+ percent in cities like Sydney." Zooming into the map, the improvements are clearly geography-dependent: Sydney gets around 43 percent, Taichung City goes up to 51 percent, while many European cities see smaller gains. I suspect this has a lot to do with the topology of the place and the data available, since the model is trained in a supervised setting (they also experimented with unsupervised methods, and we'll see those details later), but analyzing why the improvements vary like this is outside the scope of the paper. Let me quickly motivate the problem. You have a starting point A and an end point B, and one module of Google Maps proposes several feasible routes from A to B. The system described here then estimates the ETA for each of those routes, i.e. how long it expects the trip from A to B to take, and with that information Google Maps can sort the candidate routes by ETA and suggest the fastest ones; they may be physically longer, but you get there sooner. Hopefully the problem is clear enough, so let's see how it is posed as a GraphML problem. The part I find genuinely interesting is how the graphs are constructed, because it's not what I would have expected. Naively, if you had a street with an intersection at each end, you would model the intersections as nodes and the whole street as an edge between two nodes.
Once you think about it a bit more, though, that naive modeling breaks down: if one street is two kilometers long you don't want it to be a single edge while another street is only twenty meters, so it makes sense to segment the roads themselves into nodes. That's precisely what they do: a road is split into "segments", and a sequence of segments forms a "super segment", which is the unit the graph neural network consumes. Segments become nodes, and edges connect segments that are physically connected, which gives you the graph shown in the third chart. If you're not familiar with GraphML, I have a whole playlist you can check out; I'll link it somewhere here. As with every machine learning algorithm, you need to associate features with the nodes, and that's exactly what they do. Once you have the graph and the features, the GNN, given a super segment as input, predicts among other things the node-level quantity "how long does it take from entering this segment until exiting it", as well as the graph-level quantity "how long does it take to traverse the whole super segment". Let's slowly dig into the specifics of the pipeline. First, the labels: "the actual traversal times along segments and super segments in seconds are used as node-level and graph-level labels for prediction." So the labels are simply the number of seconds it takes to traverse a segment or a super segment, and all of that data has been collected by Google Maps over many years, so it's given. Next, the features: "on the segment (node) level we provide the average real-time and historical segment travel speeds and times", as well as the segment length and segment priority, i.e. the road classification, for example whether it's a highway. I won't dig into every detail; at a high level, a fair amount of domain-specific knowledge is baked into these feature vectors. The other important thing is that "we additionally provide learnable segment- and super-segment-level embedding vectors".
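To make the construction concrete, here is a small sketch of a super segment as a graph: segments are nodes carrying a feature vector, and edges connect consecutive segments. The class and field names are mine and purely illustrative; the paper reports using real-time and historical speeds and times, segment length and road priority as features.

```python
# Illustrative "super segment" graph: segments are nodes, edges connect adjacent segments.
from dataclasses import dataclass, field

@dataclass
class Segment:
    seg_id: int
    length_m: float            # segment length in meters
    priority: int              # road class (e.g. 0 = residential, 2 = highway)
    realtime_speed: float      # average speed over a recent window (m/s)
    historical_speed: float    # long-run average speed for this time of day (m/s)

@dataclass
class SuperSegment:
    segments: list                               # list[Segment], ordered along the route
    edges: list = field(default_factory=list)    # (src_idx, dst_idx) pairs

    @classmethod
    def from_route(cls, segments):
        # consecutive segments are connected; real road graphs also branch at intersections
        edges = [(i, i + 1) for i in range(len(segments) - 1)]
        return cls(segments=segments, edges=edges)

route = SuperSegment.from_route([
    Segment(0, 120.0, 0, 8.0, 9.5),
    Segment(1, 80.0, 0, 6.5, 7.0),
    Segment(2, 300.0, 2, 15.0, 16.0),
])
print(route.edges)  # [(0, 1), (1, 2)]
```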
So these node vectors are a concatenation of the handcrafted features we just saw (road length, speeds, times, etc.) and a learnable part. That's how I understand it. OK, we have labels, we have data, and we know how the graphs are formed; now we need a GNN. The model they use is the Graph Network (GN) block, a very generic framework from another DeepMind paper. The equations define update functions for the edges, the nodes and a graph-level ("global") embedding, plus aggregation functions for edges and nodes; let me break them down on a simple graph. For the edge update, the function phi is in practice a simple multi-layer perceptron, nothing fancy: you take the edge's features, concatenate them with the features of its source node, its target node and the global feature vector that is associated with the graph itself, pass all of that through the MLP, and you get the new edge representation e'. The node update is analogous: you concatenate the node's own features with the global vector and with an aggregated edge vector, where the aggregation (the rho symbol) is just a summation over the updated features of all incoming and outgoing edges of that node, and an MLP produces the new node representation. Finally, the global vector is updated from its own features together with aggregated node and edge representations (summed, or perhaps averaged), again via an MLP. Then you rinse and repeat.
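Here is a minimal NumPy sketch of one GN-style update pass under the simplifications described above: each MLP is reduced to a single linear layer plus ReLU, and sum aggregation is used. This is my illustration of the update rules, not DeepMind's graph_nets code.

```python
# Toy Graph Network block: edge, node and global updates with sum aggregation.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def linear(in_dim, out_dim):
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: relu(x @ W)

D = 8                                   # feature size for nodes, edges and the global vector
nodes = rng.normal(size=(3, D))         # 3 segment nodes
edges = [(0, 1), (1, 2)]                # (src, dst) indices
edge_feats = rng.normal(size=(2, D))
global_feat = rng.normal(size=(D,))

phi_e = linear(4 * D, D)                # edge update: [edge, src, dst, global] -> new edge
phi_v = linear(3 * D, D)                # node update: [node, aggregated edges, global] -> new node
phi_u = linear(3 * D, D)                # global update: [sum nodes, sum edges, global] -> new global

# 1) update every edge from its own features, its endpoints and the global vector
new_edges = np.stack([
    phi_e(np.concatenate([edge_feats[k], nodes[s], nodes[d], global_feat]))
    for k, (s, d) in enumerate(edges)
])

# 2) update every node from its features, the sum of its incident (updated) edges and the global vector
new_nodes = []
for i in range(len(nodes)):
    incident = [new_edges[k] for k, (s, d) in enumerate(edges) if i in (s, d)]
    agg = np.sum(incident, axis=0) if incident else np.zeros(D)
    new_nodes.append(phi_v(np.concatenate([nodes[i], agg, global_feat])))
new_nodes = np.stack(new_nodes)

# 3) update the global vector from aggregated nodes and edges
new_global = phi_u(np.concatenate([new_nodes.sum(0), new_edges.sum(0), global_feat]))
print(new_edges.shape, new_nodes.shape, new_global.shape)  # (2, 8) (3, 8) (8,)
```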
Usually GNNs are very shallow, so you won't stack 150 layers the way a ResNet does; you apply the block a couple of times, and in this paper it's applied exactly two times. As a quick recap of the architecture: you take the graph, pass it through an encoder that maps the raw features into a latent space, apply the processor (the GNN logic we just saw) a couple of times to refine those latent representations, and finally a decoder outputs the predictions you care about. They say it themselves: "we compose three GN blocks into an encode-process-decode architecture and learn a separate model for each horizon H." I'll return to the horizon detail in a second because it's important. "The encoder is first applied to the super segment S with the previously described raw features" (the speeds, the times, the additional learnable embeddings) and "it produces latent representations of nodes, edges and the super segment itself. These latents are then passed to the processor, which is applied two times with shared parameters, to update these representations. Finally, these representations are transformed into appropriate predictions by applying the decoder." So the decoder outputs both the node-wise and the graph-level predictions of how long it takes to traverse a segment or a super segment; we'll see a few more details soon. Now, the horizon detail (a small sketch of this wiring follows below). You obviously can't train a single system that works everywhere, so they train models per region, say Serbia, my home country, or even a smaller part of it. Within a region they additionally train one GNN per prediction horizon: 0, 10, 20, 30 and 60 minutes, so five GNNs for a specific region. Why? The horizon-0 model predicts the time it takes to traverse a super segment right now, whereas the horizon-10 model, given the same super segment, predicts what the traversal time of that super segment (or segment) will be 10 minutes from now.
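A compact sketch of how the encode-process-decode composition and the per-horizon models might be organized; the function names and the trivial stand-in for a GN block are mine, not the paper's.

```python
# Illustrative wiring of encode-process-decode with one GNN per prediction horizon.
HORIZONS_MIN = [0, 10, 20, 30, 60]

def gn_block():
    # placeholder: a real block would update node/edge/global features as sketched earlier
    return lambda latents: dict(latents)

def make_model():
    encoder, processor, decoder = gn_block(), gn_block(), gn_block()
    def forward(super_segment):
        latents = encoder(super_segment)
        for _ in range(2):                       # the processor is applied twice, with shared weights
            latents = processor(latents)
        return decoder(latents)                  # would hold node-level and super-segment-level ETAs
    return forward

# one model per (region, horizon) pair; serving picks the right horizon models later
models = {("Serbia", h): make_model() for h in HORIZONS_MIN}
out = models[("Serbia", 0)]({"nodes": [0, 1, 2]})
print(sorted(h for (_, h) in models), out)       # [0, 10, 20, 30, 60] {'nodes': [0, 1, 2]}
```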
By analogy, the remaining models tell you how long it will take to traverse a segment or a super segment 20 minutes, 30 minutes and one hour from now. They simply found experimentally that this leads to much better performance, so that's what they stuck with. OK, so that was the Serbia example. Now that we know how the graph and the GNN are structured, let's look at the loss function, at least the one used in production; they also experimented with unsupervised auxiliary losses, and we'll see those details in a minute. There are three components: a super-segment-level term, a segment-level term, and a cumulative segment-level term (I'll explain what cumulative means in a second). The super-segment loss iterates over super segments, has a normalization factor that I'll explain shortly, and applies a Huber loss. For those not familiar with it, the Huber loss looks like a parabola within the range [-delta, delta] and continues as a linear function outside of it, so the gradients stay constant instead of growing with the error, which prevents outliers from destabilizing training. If you watched my DQN video or checked out my DQN project on GitHub, that's one of the few other places I've seen the Huber loss used, but it's a neat technique to have in your toolkit whenever you want to stabilize the update procedure. Here the "400" means delta is 400. Inside the loss, one term is the prediction for super segment i at time t made by the model with horizon H (for the horizon-0 model you can just ignore the horizon), and the other is the ground-truth label from the data set Google Maps collected, and the Huber loss is applied to their difference. As for the normalization, I owe you an explanation: the "flow" of super segment i is, roughly, how long it takes to traverse that super segment when there is almost no traffic on the road. The intuition is that a very long super segment will naturally produce larger errors, since there is more uncertainty over a longer route, so each term is normalized by this free-flow traversal time. The same is done on the segment level (a small sketch of this normalized Huber term follows below).
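Here is a small sketch of the flow-normalized Huber objective as I understand it from the description above; the exact weighting and combination of the three terms in the paper may differ, so treat this as illustrative only.

```python
# Illustrative flow-normalized Huber loss for super-segment ETA predictions.
import numpy as np

def huber(residual, delta=400.0):
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def supersegment_loss(pred_secs, label_secs, freeflow_secs, delta=400.0):
    """pred/label/freeflow: arrays of per-super-segment times in seconds."""
    # normalize by the free-flow ("no traffic") traversal time so long routes don't dominate
    return float(np.mean(huber(pred_secs - label_secs, delta) / freeflow_secs))

pred  = np.array([600.0, 1500.0])     # model predictions for two super segments
label = np.array([630.0, 1900.0])     # observed traversal times
flow  = np.array([500.0, 1200.0])     # free-flow traversal times
print(supersegment_loss(pred, label, flow))
```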
For the segment-level term you additionally iterate over the segments of each super segment (the index m_i denotes the number of segments in super segment i) and apply the same normalized Huber procedure. Finally, the cumulative term: instead of predicting the traversal time of each segment in isolation, it predicts the cumulative time from the start of the super segment up to and including the current segment, so for the next segment you include the previous ones as well, and so on. That's pretty much it for the method, apart from two more details that matter a lot in a production setting: GNNs and losses are all nice, but in the end you have to make the thing work. What they noticed is that models dumped at different epochs produced predictions that varied a lot, so they used two techniques to dampen those oscillations and reduce the variance: MetaGradient and an exponential moving average (EMA) of the weights. You've seen EMA in my previous videos; the self-supervised learning literature uses it a lot to form teacher networks. MetaGradient, very briefly and without digging into details, means that instead of hand-picking a learning rate and a schedule, you treat the learning rate as a hyperparameter and optimize it jointly with the model parameters using the loss; the generic formulation uses the Greek symbol nu, which in this particular paper is simply the learning rate. As for EMA, when you update the GNN you don't take the new weights directly; you form a weighted combination of the running average of past weights and the freshly updated weights, with a decay of 0.99 (a hyperparameter they found works well), which pulls the new weights toward a stable estimate accumulated over many past snapshots. That's pretty much the whole method; a two-line sketch of the EMA update follows below.
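The EMA update described above is simple enough to show directly; this is a sketch under the stated decay of 0.99, with parameter names of my choosing.

```python
# Exponential moving average of model weights, used to stabilize the served model.
import numpy as np

def ema_update(avg_weights, new_weights, decay=0.99):
    # avg <- decay * avg + (1 - decay) * new, applied parameter-wise
    return {name: decay * avg_weights[name] + (1.0 - decay) * new_weights[name]
            for name in avg_weights}

avg = {"w": np.zeros(3)}
for step in range(1, 4):
    fresh = {"w": np.full(3, float(step))}   # pretend these are freshly optimized weights
    avg = ema_update(avg, fresh)
print(avg["w"])  # ~[0.0596 0.0596 0.0596]: the average tracks the fresh weights slowly
```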
Now let me slowly start explaining the experimental setup, the baselines and the results. One caveat they mention when choosing evaluation regions: "we note that November 2020 data may be susceptible to COVID-19 traffic patterns, and hence we attempted to select regions that minimize this deviation for evaluation, in order to capture performance under regular traffic conditions." Traffic patterns obviously changed because of COVID, so they try to find regions where the evaluation should remain relevant even post-COVID. My question is: how do they actually find the regions that minimize this deviation? It's not explained in the paper, and I'd love to know the answer, so if somebody knows, feel free to comment down in the comment section. Having said that, let me show you the baselines they compare against (a small sketch of the first two follows after this paragraph). The first is super simple: real-time travel times. As they put it, it is a "summation of segment-level travel times computed using segment speeds averaged over a two-minute period" prior to the time of prediction. So for each segment you compute the average traversal time from the vehicles that passed through it in the last two minutes, call those t1, t2, t3, and the predictor for the whole super segment is simply their sum. It's nonparametric: no neural network, no fancy model. The second baseline, historical travel times, works the same way except that instead of the last two minutes you use historical data for the same time of day: if you want the traversal time at 4 p.m., you look at a window around 4 p.m. on previous days and average across the last 17 weeks, then again sum the per-segment estimates. Finally, DeepSets is the third baseline; this one is parametric, i.e. model-based, but it ignores the graph structure. Given a super segment, it passes each segment's features through an MLP, aggregates all of the resulting vectors, for example by summation, and then passes that aggregate representation onward.
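Here is the promised sketch of the two nonparametric baselines as I read them; the window lengths and data layout are illustrative.

```python
# Illustrative real-time and historical ETA baselines: sum of per-segment averages.
import numpy as np

def realtime_eta(recent_traversals):
    """recent_traversals: one entry per segment, each an array of traversal times (seconds)
    observed during the last ~2 minutes."""
    return sum(float(np.mean(times)) for times in recent_traversals)

def historical_eta(weekly_traversals):
    """weekly_traversals: one entry per segment, each an array of traversal times observed
    around the same time of day over the last ~17 weeks."""
    return sum(float(np.mean(times)) for times in weekly_traversals)

# toy super segment with three segments
recent = [np.array([11.0, 13.0]), np.array([30.0, 34.0]), np.array([8.0])]
print(realtime_eta(recent))  # 12 + 32 + 8 = 52.0 seconds
```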
Back to the third baseline: deep sets. This is a model-based, parametric approach, and the thing is, it ignores the graph structure. So if you have a super segment, what this method does is the following, and let me go back up there, it's easier to explain. It passes each segment through an MLP, then aggregates all of those, maybe using summation, so we have a final representation which is formed, as I said, by aggregating the feature vectors of all of the segments. Then you pass it through another MLP to get the final representation, and then you can do the predictions. So as you can see, you're treating this as a bag of words in a way: you take the feature vectors of the segments, you ditch the graph information about how they're connected, you aggregate them and then make the prediction. So that's deep sets. Having explained deep sets, I realized I have not explained how the actual inference looks once this GNN model is trained, so let me do that now. Basically, this is how serving looks: once this model is running in production in your Google Maps application, this is how you actually get the results. Imagine you have a route from A to B, and A and B are connected by three super segments, so A is connected by super segments one, two and three to the actual destination B. OK, so what do you do here? And this is going to make the reasoning behind the horizon models more clear, hopefully. We are at time t, and you take super segment one and use the model that has horizon equal to zero. You do the prediction and that gives you t1, the time the model expects you to need to traverse super segment one. Let's imagine t1 is 12 minutes. Once you have 12 minutes, then out of the pool of horizon models (as I said, we have the 10-minute model, the 20-minute model, the 30-minute model, and so on), because we are at 12 minutes, you take the model with horizon 10, let's denote it H10, and the model with horizon 20, H20. Now you pass super segment two through both of them. So H10 tells you how long it takes to traverse super segment two 10 minutes from now, and H20 tells you the same 20 minutes from now; let's call those t2@10 and t2@20. But since we're actually expecting to enter it 12 minutes from now, you do a basic interpolation between those two numbers to form the actual t2, the time it takes to traverse the second super segment; it's going to be somewhere between the two predictions. And once you have t2, you add up t1 and t2 and do the same procedure for super segment three, and rinse and repeat until you get to the end point B. So hopefully it's now clear how this model is used at inference time; here's a tiny sketch of that interpolation step.
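To make that serving logic concrete, here is a minimal sketch of combining the horizon models along a route. The bracketing of horizons and the plain linear interpolation are my assumptions about the details; the video only says the two neighbouring horizon predictions are interpolated.

def eta_with_horizons(route, horizon_models, horizons=(0, 10, 20, 30, 60)):
    # route: list of super segments from A to B.
    # horizon_models[h](seg): predicted traversal time (minutes) of seg, h minutes from now;
    # one GNN is trained per horizon, as described above.
    elapsed = 0.0
    for seg in route:
        # Find the two horizons that bracket the time at which we will enter this super segment...
        lo = max(h for h in horizons if h <= elapsed)
        hi = min((h for h in horizons if h >= elapsed), default=lo)
        t_lo, t_hi = horizon_models[lo](seg), horizon_models[hi](seg)
        # ...and linearly interpolate between their two predictions (assumption).
        w = 0.0 if hi == lo else (elapsed - lo) / (hi - lo)
        elapsed += (1.0 - w) * t_lo + w * t_hi
    return elapsed

For the first super segment elapsed is zero, so this reduces to the horizon-0 model; past the largest horizon it just falls back to the 60-minute model.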
Now, let me get back to where we stopped. So I explained deep sets, that's the third baseline; now let's see the actual results. By the way, they're using this RMSE metric to compare the baselines, and you can see here it's very simple: you take the super-segment-level predictions, average the squared errors over the super segments, and take the square root. Yeah, basically root mean squared error, pretty self-explanatory. And so here are the results. You can see the real-time baseline, the historical baseline, the deep sets baseline and the GNN. They also show that the GNN result is statistically significant, and its error is lower compared to all of the other baselines, which is cool. The thing that initially bugged me here is that if you take a look at the numbers, the differences look fairly small. Deep sets seems to perform really well, so why the whole hassle, why are we using the graph data? I'm going to get to that in a second. But let's see: if we compare these numbers, like take this one, you can see it's way worse compared to these two models, so the real-time and historical baselines are out of the question, that's clear. But between deep sets and the GNN, what should we use? The GNN is a bit better, but is it worth it? The important note they make here is this: these improvements are computed over individual super segments, and they will accumulate over entire routes, hence the improvements may quickly become more important on a whole-route level. And I think they have the data somewhere here... OK, here it is, the dataset statistics. You can see that the average segment length is around 100 meters and the average number of segments per super segment is around 20, so an average super segment is roughly two kilometers long. Once you accumulate that over, say, a 10-kilometer route with multiple super segments, the errors add up and the GNN starts bringing more and more of an advantage compared to the other baselines. And obviously Google would not put this into production if it weren't meaningful enough. So just don't get confused by the small differences; hopefully it's clear enough now. Let's see the ablations. One thing I have not mentioned is that they experimented with various things. I did mention the unsupervised losses, so they tried stuff like Deep Graph Infomax. They also experimented with combinations of aggregators, so not using just the sum aggregator but other types of aggregation as well; those ideas come from the Principal Neighbourhood Aggregation paper, you can check it out. Just keep in mind that they experimented with lots of things and eventually decided not to use many of those ideas in production, because the trade-off was not acceptable: even though they brought some performance benefits, the cost of serving those more complex models is way higher. So, OK, let's skim over these ablations. I mentioned those embedding vectors: aside from the domain knowledge that was ingrained through the speeds and times in the feature vectors, we had the learnable portion. And you can see that using both embeddings, the segment-level embeddings as well as the super-segment embeddings, gives better results. Don't get confused by these column acronyms; they stand for New York City, LA, Tokyo and Singapore, I guess. They had to pick a subset of cities, they can't evaluate this on a global scale. And you can see the results are always better when you're using the embeddings; there's a small sketch of this raw-plus-learnable feature idea right below.
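Since those embeddings come up a few times, here is a minimal sketch of the general idea of concatenating hand-crafted segment features with a learnable, index-based embedding. The dimensions, names and module structure are made up for illustration; only the concatenation idea comes from the paper.

import torch
import torch.nn as nn

class SegmentFeatures(nn.Module):
    def __init__(self, num_segments, emb_dim=16):
        super().__init__()
        # One learnable vector per segment, trained jointly with the rest of the GNN.
        self.embedding = nn.Embedding(num_segments, emb_dim)

    def forward(self, segment_ids, raw_features):
        # raw_features: real-time / historical speeds and times, segment length, road priority, ...
        return torch.cat([raw_features, self.embedding(segment_ids)], dim=-1)

# Hypothetical usage:
#   feats = SegmentFeatures(num_segments=100_000)
#   x = feats(torch.tensor([3, 17, 42]), torch.randn(3, 32))   # -> shape [3, 48]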
That part is kind of clear. The second thing here is that they show how the meta-gradient and EMA reduce the variance during training. On the x-axis we have training iterations, on the y-axis the RMSE metric, and you can see that when you're just using Adam with no EMA and no meta-gradients, the red curve goes kind of crazy. Once you start adding EMA and meta-gradients, the blue curves show that using both of them is the best thing to do; they show it not only for Tokyo but also for LA here. And I just made a note here: it would be interesting to see how the learning rate evolves over the duration of training, that is, how the meta-gradient is tuning the learning rate and whether there are interesting patterns there. Those types of visualizations in an appendix would be super appreciated. Secondly, it would be interesting to know which other schedules they tried with Adam before deciding to go with meta-gradients. Let's wrap up the ablation studies. They experimented with these extended super segments and extended super segments plus extra features, whereby they were using additional segments, as you can see, but still predicting the traversal times only for the segments in the original super segment. They show that all of that helps, but it's very expensive, both memory-wise and, I guess, inference-wise, so they decided not to use it in the production setting. Here you can see the unsupervised losses I mentioned: this one is the Deep Graph Infomax paper and this one is, I guess, the graph auto-encoder paper. You can see that those two again improve results across various cities and various horizons, so both for New York City, LA, et cetera, for different horizons. But the trick here is, and this is an excerpt from the paper: the optimal variant and hyperparameters were different for each region and prediction horizon, thus it is likely expensive to tune such losses for all existing regions and each horizon. As I said, they use different GNNs depending on the region, so if you additionally have to tune these hyperparameters per region and horizon, it's going to be hard and expensive, and they decided not to use them in the actual production setting. Finally, let's see these results on the aggregation functions. The sum is the one that's used in production, but they also tried the square root, the mean, the max and all of them combined. As you can see, the combination usually gives some additional performance, but again, once you're in a production setting this is engineering, you have to trade off the costs, so they stick with the sum function (there's a small sketch of that multi-aggregator idea just after this paragraph). They also did an online evaluation of the GNN compared to the baselines and showed it's again better, same as in the offline setting we just saw. A couple of things bug me there: they mention that it's computed over all tracks within a week, specifically January 15th through January 22nd, and what's not clear to me is what "over all tracks" means. Do they actually average the results across every single region they are covering, or what? So this part kind of bugs me; yeah, not sure about it.
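Here is a minimal sketch of that multi-aggregator idea for a single node; this is just the general PNA-style notion of concatenating several aggregations of the incoming edge features, not the paper's actual implementation, and the "square root" variant in particular is my guess at what that aggregator means.

import torch

def combined_aggregation(edge_feats):
    # edge_feats: [num_incoming_edges, d] latent features of the edges entering one node.
    # Production uses only the sum; the ablation also tries mean, max, a square-root
    # variant and all of them combined.
    n = edge_feats.shape[0]
    s = edge_feats.sum(dim=0)
    aggs = [
        s,                               # sum (the production choice)
        edge_feats.mean(dim=0),          # mean
        edge_feats.max(dim=0).values,    # max
        s / (n ** 0.5),                  # sum scaled by 1/sqrt(n) -- my reading of "square root"
    ]
    # The combined variant just concatenates everything before the node-update MLP.
    return torch.cat(aggs, dim=-1)

# Hypothetical usage: combined_aggregation(torch.randn(5, 8)).shape -> torch.Size([32])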
Coming back to the evaluation: again, the results are nice, but we don't really get a gut feeling for what these small improvements mean. So, first, it would be nice to have something like: hey, when you're using this model, it's going to save you that many seconds or minutes compared to this baseline. That would be much more intuitive; we'd have a much better intuitive understanding of what's going on. Secondly, and I noted it here, some kind of geographical heat maps, where we could see what the performance differences are depending on the region, would be a super nice visualization, I guess. Finally, they did one last evaluation on a particular route in LA, I think, and they show certain results, and I have the same remark here: it's not intuitive what these curves mean. I mean, you can see that the GNN, the graph neural network model, is better compared to the real-time baseline, but I don't have a gut feeling for what that means, and so yeah, that kind of sucks. Also, they don't have any units here; I guess it's seconds, but it's not clear enough. They also show that for the horizon of one hour from now, the GNN that was trained with that horizon is way better compared to the GNN that was trained with horizon equal to zero. So that's the argument for why we need all of these multiple-horizon GNN models and why we combine them to actually serve the end results. I'm going to end it here, and I'll just briefly mention that this is a thing that's running in production, which means that research was only a small portion of this whole project and there were many engineering challenges they had to solve. One of them is that they obviously had to cache results: as they say, the path in an ETA request usually includes multiple super segments, and making predictions for these super segments on the fly is not practical nor scalable. So they cache the results and then periodically refresh the whole cache. That's one of the engineering details. The second thing they mention is that for less frequently visited road segments they use simpler per-segment models, so they drop down to simpler baselines in certain regions. So it's a complex system, there are many components: the meta-gradients, the EMA, the graph neural networks, all of these engineering tricks to make this work. I think it's a very impactful project; I use Google Maps all the time. So I really love this paper, and I think it's a clearly written paper, aside from a couple of these remarks and maybe the lack of an appendix section. Hopefully you liked this video. If you did, share it out with a friend, subscribe to this channel, hit the bell icon, and also join the Discord community. There is a lot of engagement happening there, cool people interested in ML helping each other out. So yeah, do check it out, and until next time, bye-bye.
[{"start": 0.0, "end": 7.2, "text": " What's cracking guys in this video I'm covering this super exciting paper called ETA prediction with graph neural networks in Google Maps."}, {"start": 7.6000000000000005, "end": 13.6, "text": " So it's yet another application of graph neural networks in a like a real world setting."}, {"start": 14.200000000000001, "end": 16.8, "text": " So previously on this channel I've covered PINSAGE."}, {"start": 16.96, "end": 21.76, "text": " So that's another application of GraphML to the recommender system application."}, {"start": 21.96, "end": 24.2, "text": " And here we are using it to predict this."}, {"start": 24.240000000000002, "end": 26.76, "text": " So ETA stands for Estimated Time of Arrival."}, {"start": 26.76, "end": 29.96, "text": " I mean you probably already knew that if you have ever used Google Maps."}, {"start": 30.32, "end": 37.32, "text": " And basically as you can see it's running at production settings at Google Maps and they achieved some awesome results."}, {"start": 37.32, "end": 50.92, "text": " So basically our GNN proved powerful when deployed significantly reducing negative ETA outcomes in several regions compared to the previous production baseline up to 40 plus percent in cities like Sydney."}, {"start": 51.160000000000004, "end": 54.56, "text": " And I'm going to zoom in a little bit here into this map here."}, {"start": 54.56, "end": 60.440000000000005, "text": " And you can see the results are pretty awesome like in Sydney they achieve like 43 percent."}, {"start": 60.64, "end": 63.040000000000006, "text": " And as you can see it's very dependent on the geography."}, {"start": 63.040000000000006, "end": 72.24000000000001, "text": " So in some cities like this Tai Chung city it's up to 51 percent in Europe it seems to be a little bit lower compared to these like increases here."}, {"start": 72.56, "end": 80.24000000000001, "text": " And I guess it has a lot to do with the topology of the place with the data available because obviously this is going to be trained in a supervised setting."}, {"start": 80.24, "end": 85.64, "text": " Even though the experimented with some unsupervised methods I'm going to see those details a bit later."}, {"start": 85.91999999999999, "end": 94.03999999999999, "text": " But yeah it'd be funny to kind of analyze and understand why the why these variations in the in the improvements."}, {"start": 94.03999999999999, "end": 98.24, "text": " But that's like something that's outside of the scope of this paper obviously."}, {"start": 98.56, "end": 104.39999999999999, "text": " OK so let me just motivate the whole paper and kind of explain the problem if it's not already clear enough."}, {"start": 104.4, "end": 113.52000000000001, "text": " So basically we have you have a point A so that's your starting point and you have like a point B so that's your end point."}, {"start": 113.52000000000001, "end": 123.12, "text": " And basically what like Google Maps does is one of like one of the modules of that application is going to source like feasible routes from A to B."}, {"start": 123.12, "end": 128.52, "text": " So something like this A to B and then like a third path maybe like this."}, {"start": 128.52, "end": 134.60000000000002, "text": " And then this system is going to like estimate ETA for all of these routes."}, {"start": 134.60000000000002, "end": 136.76000000000002, "text": " So ETA or estimated time of arrival."}, {"start": 137.08, "end": 143.16000000000003, "text": " And that basically means how like 
long does the system expect your travel to last from A to B."}, {"start": 143.48000000000002, "end": 154.08, "text": " And basically once you have that information Google Maps can kind of sort like sort basically all of these routes according to ETA and suggest you the shortest routes like timewise."}, {"start": 154.08, "end": 161.12, "text": " Shortest maybe they're longer like physically but like there you'll get quicker there pretty much."}, {"start": 161.4, "end": 164.24, "text": " OK so that's hopefully the problem is now clear enough."}, {"start": 164.48000000000002, "end": 170.44, "text": " Now let's see how this is like posed as a graph ML problem."}, {"start": 170.92000000000002, "end": 174.12, "text": " So I think it's pretty clear but let me let me kind of explain that."}, {"start": 174.72000000000003, "end": 178.12, "text": " And the part that's super interesting is how they constructed graphs."}, {"start": 178.12, "end": 181.60000000000002, "text": " So it's not that intuitive and this is not something I'd expect."}, {"start": 181.6, "end": 194.16, "text": " But yeah so I guess if you had like like a street here and you had some intersection here and then you had another street here and you had another street here I guess the way you model this thing."}, {"start": 194.16, "end": 196.64, "text": " So if you had let's say we have another intersection here."}, {"start": 197.0, "end": 200.88, "text": " So how you would model it is basically how I just like draw it down."}, {"start": 201.04, "end": 202.76, "text": " So you have this these will be nodes."}, {"start": 202.76, "end": 208.84, "text": " So the intersections would be nodes and then you'd probably have like the whole street would be an edge between two nodes."}, {"start": 208.84, "end": 217.2, "text": " And I guess once you think about it a bit more deeply it makes sense that I mean if this is like two kilometers you don't want to have that as a single edge."}, {"start": 217.48000000000002, "end": 221.32, "text": " Whereas like some other streets may be like 20 meters or something."}, {"start": 221.32, "end": 224.96, "text": " So it makes sense to kind of segment the roads themselves into nodes."}, {"start": 225.2, "end": 227.36, "text": " So that's exactly and precisely what I did here."}, {"start": 227.4, "end": 232.72, "text": " So as you can see here they segment the roads into these segments and so this is called a segment."}, {"start": 233.28, "end": 236.32, "text": " Let me just get the like terminology out there."}, {"start": 236.32, "end": 252.07999999999998, "text": " So this is a segment and this whole thing here is called a super segment and you can then like going from here to here you can see that these things so the segments will be nodes and the edges will exist between the like obviously segments that are connected."}, {"start": 252.07999999999998, "end": 253.04, "text": " So something like this."}, {"start": 253.12, "end": 255.56, "text": " So this would be like like a graph."}, {"start": 255.72, "end": 259.03999999999996, "text": " And finally you can see that on the third like chart here."}, {"start": 259.36, "end": 265.92, "text": " And so that's the final super segment and that's the unit that the graph like the graph neural network will be consuming."}, {"start": 265.92, "end": 271.36, "text": " So obviously if you're not familiar with graph ML I have a whole playlist you can check it out."}, {"start": 271.52000000000004, "end": 272.8, "text": " I'll link it somewhere here."}, {"start": 273.12, 
"end": 276.0, "text": " But basically what is needed here is you."}, {"start": 276.24, "end": 279.48, "text": " I mean with every machine learning algorithm but like here as well."}, {"start": 279.48, "end": 284.96000000000004, "text": " So you want to associate certain features with all of these nodes and that's precisely what I've done."}, {"start": 285.12, "end": 294.6, "text": " And so once you have a graph once you have the features then you just need a graph and then and you can basically predict that like time it takes to traverse a certain segment."}, {"start": 294.6, "end": 305.24, "text": " So what the graph neural network is going to do is given as an input this this super segment is going to predict among other things is going to predict how long does it expect once you enter."}, {"start": 305.24, "end": 309.0, "text": " So once you enter here how long does it take to exit here."}, {"start": 309.24, "end": 311.48, "text": " So that will be like a node level prediction."}, {"start": 311.64000000000004, "end": 315.24, "text": " But the system is also going to predict how long does it take if you enter here."}, {"start": 315.48, "end": 317.64000000000004, "text": " How long does it take you to exit here."}, {"start": 317.88, "end": 320.6, "text": " So all of those predictions are going to be made by the G and M."}, {"start": 320.6, "end": 327.96000000000004, "text": " So let's slowly start digging into details so you can understand the like the like specifics of this whole pipeline."}, {"start": 328.20000000000005, "end": 331.0, "text": " First things first we've got like features."}, {"start": 331.0, "end": 334.36, "text": " I mentioned those and here are some of the features they were using."}, {"start": 334.36, "end": 337.56, "text": " So the actual no by the way this is these are labels."}, {"start": 337.56, "end": 344.6, "text": " So the actual traversal times along segments and super segments in seconds are used as node level and graph level labels for prediction."}, {"start": 344.6, "end": 345.56, "text": " So hopefully now it's clear."}, {"start": 345.56, "end": 354.68, "text": " So you have some graph and the labels are basically the amount of seconds that it takes to traverse the segment or traverse the super segment."}, {"start": 354.68, "end": 360.04, "text": " And all of that data is obviously collected by Google Maps throughout the last many years."}, {"start": 360.04, "end": 362.04, "text": " And so they got the data available."}, {"start": 362.04, "end": 364.04, "text": " So that's something that's given."}, {"start": 364.04, "end": 366.04, "text": " And now the features."}, {"start": 366.04, "end": 373.32, "text": " So on the segment node level we provide the average real time and historical segment traversal travel speeds and times."}, {"start": 373.32, "end": 381.64, "text": " So both the speeds and times are like ingrained in the feature vectors as well as segment length and segment priority."}, {"start": 381.64, "end": 387.71999999999997, "text": " Like is it a road classific is it a highway is it an intersection I guess like all of that is provided as features."}, {"start": 387.71999999999997, "end": 401.0, "text": " And so I won't be digging into details here obviously like on a high level you can understand that here we've got some domain specific knowledge like that's basically injected into these feature vectors."}, {"start": 401.0, "end": 406.92, "text": " And the other important thing is that we additionally provide learnable segment and super segment 
level embedding vectors."}, {"start": 406.92, "end": 415.24, "text": " So that means that these vectors here are going to be a concatenation between all of the features we just saw like the length of the road etc."}, {"start": 415.24, "end": 415.96, "text": " Etc."}, {"start": 415.96, "end": 418.04, "text": " And the second part is going to be learnable."}, {"start": 418.04, "end": 421.48, "text": " So we will have a learnable portion of those vectors."}, {"start": 421.48, "end": 422.52, "text": " OK."}, {"start": 422.52, "end": 424.68, "text": " So that's that's how I understand all of this."}, {"start": 424.68, "end": 424.92, "text": " OK."}, {"start": 424.92, "end": 429.4, "text": " What next once we have labels we have data we know how graphs are formed."}, {"start": 429.4, "end": 430.6, "text": " We need a gene and obviously."}, {"start": 430.6, "end": 434.68, "text": " And the model they are using here is graph network."}, {"start": 434.68, "end": 437.40000000000003, "text": " So that's a paper that came out of deep mind as well."}, {"start": 437.40000000000003, "end": 439.8, "text": " It's a super generic framework."}, {"start": 439.8, "end": 440.92, "text": " You can see the equations here."}, {"start": 440.92, "end": 451.24, "text": " So these are the basically update equations for the edges for the nodes and for this like basically graph level embedding."}, {"start": 451.24, "end": 458.04, "text": " And these are some like basically aggregation functions again for edges and for for nodes."}, {"start": 458.04, "end": 463.08000000000004, "text": " And let me try and break them down so that you have a feeling what's going on in here."}, {"start": 463.08000000000004, "end": 465.48, "text": " So let me draw a simple graph here."}, {"start": 465.48, "end": 467.88, "text": " So I have a note here."}, {"start": 467.88, "end": 471.24, "text": " Let me draw a couple more nodes or something like this."}, {"start": 471.24, "end": 472.76000000000005, "text": " And we've got some edges."}, {"start": 472.76000000000005, "end": 477.72, "text": " So there's going to be an edge here and then here and then here maybe and then here."}, {"start": 477.72, "end": 481.64000000000004, "text": " So this is the update rule for for the edge."}, {"start": 481.64000000000004, "end": 485.48, "text": " And basically this fight what I've used in practice is a simple MLP."}, {"start": 485.48, "end": 487.0, "text": " So it's a multilayer perceptron."}, {"start": 487.0, "end": 488.28, "text": " Nothing fancy there."}, {"start": 488.28, "end": 491.08, "text": " So what happens is you take the edge feature."}, {"start": 491.08, "end": 494.6, "text": " So this is this is an edge and you've got some features there."}, {"start": 495.4, "end": 502.52, "text": " And so basically you take those features you put them in here and you probably concatenate those features with the next thing."}, {"start": 502.52, "end": 508.04, "text": " So you concatenate those features with the source node with this one and with the target node."}, {"start": 508.04, "end": 510.68, "text": " So basically the two nodes that the edge is connecting with."}, {"start": 510.68, "end": 514.68, "text": " So those are this is a source and this is the target nodes features."}, {"start": 514.68, "end": 517.2399999999999, "text": " And you just add this global."}, {"start": 517.2399999999999, "end": 521.16, "text": " This is a global feature vector that's basically associated with the graph itself."}, {"start": 521.16, "end": 529.0799999999999, "text": " 
So there is some like a separate feature vector out here that's being updated together with this graph like features."}, {"start": 529.0799999999999, "end": 533.4, "text": " And so once you concatenate all of the data all of that data and you kind of pass it through the MLP."}, {"start": 533.4, "end": 540.1999999999999, "text": " That's the update and you get a novel edge representation which is like denoted here as E prime."}, {"start": 540.1999999999999, "end": 543.4799999999999, "text": " And very similarly you do it for nodes."}, {"start": 543.48, "end": 545.64, "text": " So you have a so you basically have a node here."}, {"start": 545.64, "end": 546.6800000000001, "text": " Let's focus on this one."}, {"start": 547.4, "end": 553.0, "text": " So what do you do as you can see here you concatenate its features again with this global thing."}, {"start": 553.88, "end": 561.24, "text": " Global feature vector and you additionally add this aggregated like vector that's formed precisely in this manner."}, {"start": 561.24, "end": 562.28, "text": " So this way."}, {"start": 562.28, "end": 566.9200000000001, "text": " OK what you do is and these are simple summations."}, {"start": 566.9200000000001, "end": 571.4, "text": " So just when you see this thing in all one you can think of it as a summation."}, {"start": 571.4, "end": 575.72, "text": " So what you do here is you basically take all of the incoming and outgoing edges."}, {"start": 576.52, "end": 581.16, "text": " And you just kind of sum up their features of those edges and that's this vector."}, {"start": 581.16, "end": 587.16, "text": " And then you concatenate it here and you use an MLP to produce a novel representation for this node."}, {"start": 587.16, "end": 587.64, "text": " OK."}, {"start": 587.64, "end": 588.92, "text": " Hopefully that's clear enough."}, {"start": 588.92, "end": 591.24, "text": " So in this particular case that's going to be this edge."}, {"start": 591.24, "end": 593.9599999999999, "text": " It's going to be this edge and it's going to be this edge."}, {"start": 593.9599999999999, "end": 596.84, "text": " And finally you're updating this global feature vector."}, {"start": 596.84, "end": 599.24, "text": " Again you use its own features."}, {"start": 599.24, "end": 604.52, "text": " You use like basically the aggregated versions of edges and nodes."}, {"start": 604.52, "end": 609.32, "text": " Now you basically take all of the edges and all of the node features and you just basically sum them up."}, {"start": 609.32, "end": 613.5600000000001, "text": " I guess you may need to do some kind of like me like averaging there."}, {"start": 613.5600000000001, "end": 616.84, "text": " But basically you kind of combine them and you concatenate them."}, {"start": 616.84, "end": 620.84, "text": " Pass them through the MLP and you get a novel representation here."}, {"start": 620.84, "end": 621.4, "text": " And that's it."}, {"start": 621.4, "end": 622.76, "text": " You just rinse and repeat."}, {"start": 622.76, "end": 628.12, "text": " Usually GNNs are very shallow so you won't be doing this like in CNN like 150 times like in Resonance."}, {"start": 628.12, "end": 632.36, "text": " You'll be doing this a couple of times two three times and we'll see that in this paper in particular."}, {"start": 632.36, "end": 633.72, "text": " They are only doing it two times."}, {"start": 634.6, "end": 635.16, "text": " OK."}, {"start": 635.16, "end": 638.12, "text": " So that's a high level overview of this GN block."}, {"start": 
638.12, "end": 639.48, "text": " And now let's see what else."}, {"start": 640.84, "end": 646.92, "text": " So as a quick recap of GN architecture it's fairly simple."}, {"start": 646.92, "end": 648.68, "text": " Again you have a graph here."}, {"start": 648.68, "end": 653.16, "text": " I'm going to denote it as this blob and then you have an encoder block here."}, {"start": 653.16, "end": 654.12, "text": " So something like this."}, {"start": 654.12, "end": 655.24, "text": " So this is encoder."}, {"start": 655.24, "end": 667.0, "text": " Then what to do is they once you encode these raw features into a latent space you have this processor like part of the pipeline which is basically your GNN."}, {"start": 667.0, "end": 674.92, "text": " And you apply like this logic you just saw here a couple of times and then you take those latent representations which are now refined."}, {"start": 674.92, "end": 680.76, "text": " You pass them through the decoder and the decoder will finally output all of the predictions you're interested in."}, {"start": 680.76, "end": 685.72, "text": " So that's it on the high level how this GN like pipeline looks like."}, {"start": 685.72, "end": 694.12, "text": " They mentioned it also here so we compose three GN blocks into an encode process decode architecture and learn a separate model for each horizon H."}, {"start": 695.08, "end": 697.48, "text": " These blocks are defined as blah blah blah."}, {"start": 697.48, "end": 702.12, "text": " I'm going to return to this thing in a second because it's very important this horizon detail."}, {"start": 702.12, "end": 706.4399999999999, "text": " But I'm going to first kind of like explain this thing in a bit more detail."}, {"start": 706.44, "end": 711.32, "text": " So the encoder is first applied to the super segment S with the previously described raw features."}, {"start": 711.32, "end": 717.48, "text": " So those were the as we saw the speeds the time the additional like learnable embeddings."}, {"start": 717.48, "end": 718.84, "text": " Those are the raw features."}, {"start": 718.84, "end": 723.08, "text": " It produces latent representations of nodes edges and the super segment itself."}, {"start": 723.08, "end": 724.5200000000001, "text": " So that's the you components."}, {"start": 724.5200000000001, "end": 727.1600000000001, "text": " So that's the this is the super segment segment representation."}, {"start": 728.7600000000001, "end": 734.0400000000001, "text": " These latents are then passed to the processor which is applied two times with shared parameters."}, {"start": 734.04, "end": 739.48, "text": " So there is additional sharing of the parameters here to update these representations."}, {"start": 739.48, "end": 743.9599999999999, "text": " Finally these representations are transformed into appropriate predictions by applying the decoder."}, {"start": 743.9599999999999, "end": 754.68, "text": " So again this is going to output the basically both node wise as well as graph level predictions of how long does it take to traverse a segment or a super segment."}, {"start": 754.68, "end": 755.7199999999999, "text": " Those are going to be outputs."}, {"start": 755.7199999999999, "end": 757.88, "text": " We'll see soon see a bit more details there."}, {"start": 757.88, "end": 770.84, "text": " But yeah now getting back to the horizon detail it's very important because what they do is they basically for each region of the world because you obviously can't train a single system that will work everywhere."}, 
{"start": 770.84, "end": 775.0, "text": " What they do is like focus on some specific region like take Serbia."}, {"start": 775.0, "end": 776.28, "text": " That's my home country."}, {"start": 776.28, "end": 780.92, "text": " I'm going to take Serbia as an example and I'm going to draw something that looks like Serbia but not exactly."}, {"start": 780.92, "end": 787.56, "text": " And so this is Serbia and basically you train a single GNN for this region."}, {"start": 787.56, "end": 797.88, "text": " So for Serbia or even some like like a small smaller like part of Serbia and you additionally train a GNN model for different horizons."}, {"start": 797.88, "end": 808.4399999999999, "text": " So that means what they were using was zero minutes 10 minutes 20 minutes 30 minutes and finally 60 minutes."}, {"start": 808.4399999999999, "end": 809.7199999999999, "text": " So an hour."}, {"start": 809.7199999999999, "end": 815.0799999999999, "text": " So they're going to have one two three four five GNN for a specific region."}, {"start": 815.0799999999999, "end": 816.3599999999999, "text": " So why is that."}, {"start": 816.36, "end": 820.28, "text": " Well the reason is this is like zero GNN."}, {"start": 820.28, "end": 829.96, "text": " Let me call it that way is basically going to is basically going to predict the time it takes to traverse this particular segment right now."}, {"start": 829.96, "end": 841.5600000000001, "text": " Whereas when you input like a graph a super segment into this one is going to predict what the traversal time of that super segment is going to be or segment in 10 minutes."}, {"start": 841.56, "end": 851.4, "text": " And by analogy this one will like tell you like how long does it take to traverse a segment or a super segment in 20 minutes 30 minutes and one hour."}, {"start": 851.4, "end": 856.1999999999999, "text": " And they just figure out experimentally that that's that leads to much better performances."}, {"start": 856.1999999999999, "end": 857.64, "text": " And that's what I stick with."}, {"start": 857.64, "end": 863.7199999999999, "text": " So yeah this was Serby."}, {"start": 863.7199999999999, "end": 864.8399999999999, "text": " OK something like that."}, {"start": 864.8399999999999, "end": 866.52, "text": " Nice."}, {"start": 866.52, "end": 867.88, "text": " Let's see what else."}, {"start": 867.88, "end": 871.3199999999999, "text": " So once we have we know how the graph is structured."}, {"start": 871.32, "end": 874.2, "text": " We know how the graph neural network is structured."}, {"start": 874.2, "end": 881.08, "text": " Now let's see how the actual loss functions look like and a loss function at least the one they're using in production."}, {"start": 881.08, "end": 886.2800000000001, "text": " They also experiment and we'll see those details in a minute with some other losses here which are unsupervised."}, {"start": 886.2800000000001, "end": 889.48, "text": " So basically unsupervised and soup."}, {"start": 889.48, "end": 892.5200000000001, "text": " And so this is using production."}, {"start": 892.5200000000001, "end": 894.2800000000001, "text": " Basically there are three components."}, {"start": 894.2800000000001, "end": 898.12, "text": " One component is this SS which is basically super segment."}, {"start": 898.12, "end": 900.9200000000001, "text": " One component is this which is segment level prediction."}, {"start": 900.92, "end": 904.8399999999999, "text": " And one component is this segment but cumulative prediction."}, {"start": 
904.8399999999999, "end": 906.28, "text": " We'll see what that means in a second."}, {"start": 906.28, "end": 908.52, "text": " So here is how the super segment loss is formed."}, {"start": 908.52, "end": 911.56, "text": " You iterate over different super segments."}, {"start": 911.56, "end": 917.64, "text": " You have this normalization part which I'm going to explain in a second and you have your Huber loss over here."}, {"start": 917.64, "end": 927.8, "text": " So Huber loss for those of you who are not familiar with that is very simple and it's supposed to kind of prevent certain outliers from like destabilizing the training."}, {"start": 927.8, "end": 940.04, "text": " So it looks like basically like a parabola in a certain range which is denoted by delta and minus delta here."}, {"start": 940.04, "end": 944.04, "text": " And then you basically have just a linear extrapolation here."}, {"start": 944.04, "end": 945.4799999999999, "text": " So here you have a linear function."}, {"start": 945.4799999999999, "end": 947.0799999999999, "text": " So it's not a parabola here."}, {"start": 947.0799999999999, "end": 949.0, "text": " It's not going to be like this."}, {"start": 949.0, "end": 951.88, "text": " It's going to be a simple linear function there."}, {"start": 951.88, "end": 954.1999999999999, "text": " And so the gradients remain constant."}, {"start": 954.1999999999999, "end": 955.56, "text": " They are not increasing."}, {"start": 955.56, "end": 959.88, "text": " And so this kind of stabilizes the training when you're updating the gradients."}, {"start": 959.88, "end": 970.76, "text": " If you watch my DQN video or checked out my project on GitHub about DQN, that's I think the only place I've seen Huber loss being used so far."}, {"start": 970.76, "end": 977.9599999999999, "text": " But I guess it's a neat technique to have in your toolkit whenever you want to kind of stabilize the update procedure."}, {"start": 977.9599999999999, "end": 982.92, "text": " So this 400 basically means this delta is 400."}, {"start": 982.92, "end": 996.36, "text": " And so the main part here is this means this is a prediction like a super segment level prediction at time t with horizon with the model that has a horizon H."}, {"start": 996.36, "end": 1001.7199999999999, "text": " So if we had the model, if we're training the model with the horizon zero, you can just ignore this."}, {"start": 1001.7199999999999, "end": 1004.28, "text": " So we are predicting we have the ground truth here."}, {"start": 1004.28, "end": 1005.4799999999999, "text": " So this is the GT label."}, {"start": 1005.4799999999999, "end": 1006.8399999999999, "text": " So this part is the GT."}, {"start": 1006.8399999999999, "end": 1010.28, "text": " So this is from the data set that the Google Maps team collected."}, {"start": 1010.28, "end": 1014.04, "text": " And this will be your actual prediction on a super segment level."}, {"start": 1014.04, "end": 1017.88, "text": " And so basically you just do this Huber loss over it."}, {"start": 1017.88, "end": 1019.24, "text": " And that's that's it."}, {"start": 1019.24, "end": 1024.44, "text": " So for the normalizing normalization component, I owe you an explanation there."}, {"start": 1024.44, "end": 1028.52, "text": " So this flow basically tells you the following thing."}, {"start": 1028.52, "end": 1037.8799999999999, "text": " So this flow of I is how long does it take to traverse this super segment I when there is almost no traffic on the road?"}, {"start": 1037.88, "end": 
1045.8000000000002, "text": " And the kind of reasoning behind it is so imagine if you have a super long like a super segment."}, {"start": 1045.8000000000002, "end": 1055.72, "text": " And that means that these errors are probably going to be larger because there is so much more like uncertainty when you have a larger like a route."}, {"start": 1055.72, "end": 1067.0, "text": " Right. And so what you do is kind of normalize with this like zero time or basically the time it takes to traverse it when there are no cars in there, no traffic."}, {"start": 1067.0, "end": 1068.68, "text": " And so that's kind of the intuition."}, {"start": 1068.68, "end": 1073.88, "text": " So the bigger the road, you want to kind of normalize it by its this flow time."}, {"start": 1073.88, "end": 1075.16, "text": " Similarly for segments."}, {"start": 1075.16, "end": 1082.04, "text": " So you additionally just have here, as you can see, you have this iteration over the segments of the super segment."}, {"start": 1082.04, "end": 1087.4, "text": " So M this index I means the amount of segments in that super segment."}, {"start": 1087.4, "end": 1089.96, "text": " And you do the same procedure basically."}, {"start": 1089.96, "end": 1092.6, "text": " And finally, you have these cumulative segments."}, {"start": 1092.6, "end": 1096.52, "text": " So what they do is let me let me kind of jump back here."}, {"start": 1096.52, "end": 1104.2, "text": " So it will tell you if you're here right now, it's going to tell you how long did it take to get up to there."}, {"start": 1104.2, "end": 1107.08, "text": " So it won't tell you the traversal time from here to here."}, {"start": 1107.08, "end": 1111.72, "text": " It's going to tell you the cumulative time from here all the way here to here."}, {"start": 1111.72, "end": 1116.84, "text": " And then for for this one, obviously, you're just going to include this one as well and et cetera."}, {"start": 1116.84, "end": 1118.44, "text": " So that's the whole point there."}, {"start": 1118.44, "end": 1122.12, "text": " OK, I think we're pretty much done with the methods."}, {"start": 1122.12, "end": 1123.8, "text": " There is two more details."}, {"start": 1123.8, "end": 1126.12, "text": " Fairly important for the production setting."}, {"start": 1126.12, "end": 1127.7199999999998, "text": " Genehands and all that's all nice."}, {"start": 1127.7199999999998, "end": 1130.52, "text": " But like at the end, you got to make the thing work."}, {"start": 1130.52, "end": 1135.56, "text": " And what I noticed is that as they were dumping models across every single epoch,"}, {"start": 1135.56, "end": 1139.3999999999999, "text": " the predictions that came from those models were like varying a lot."}, {"start": 1139.3999999999999, "end": 1142.28, "text": " So there was like a large amount of variance there."}, {"start": 1142.28, "end": 1148.76, "text": " And so they they used basically two methods to dampen out those oscillations to kind of reduce the variance."}, {"start": 1148.76, "end": 1151.2399999999998, "text": " One is using this meta gradient method."}, {"start": 1151.2399999999998, "end": 1155.08, "text": " And the second one is using this exponential moving average method."}, {"start": 1155.08, "end": 1164.36, "text": " And this you've seen this if you watch my previous videos, like self supervised learning field uses EMA a lot to form the teacher networks, et cetera."}, {"start": 1164.36, "end": 1167.08, "text": " But basically, let me let me first quickly explain meta 
gradient."}, {"start": 1167.08, "end": 1169.08, "text": " I won't be digging into details here."}, {"start": 1169.08, "end": 1182.84, "text": " But like the whole idea is instead of picking a learning rate and some schedule, you basically treat learning rate as a hyperparameter itself and you jointly optimize the learning rate alongside the model parameters."}, {"start": 1182.84, "end": 1185.56, "text": " So this is what these fancy formulas are telling us."}, {"start": 1185.56, "end": 1192.9199999999998, "text": " We're basically also updating this this this is learning rate parameter using the loss function."}, {"start": 1192.9199999999998, "end": 1195.24, "text": " So this is just a generic set of meta gradient."}, {"start": 1195.24, "end": 1199.0, "text": " That's why they're using this new this Greek symbol."}, {"start": 1199.0, "end": 1204.1999999999998, "text": " But in this particular application of this paper, new is basically learning rate."}, {"start": 1204.1999999999998, "end": 1207.9599999999998, "text": " And so you are tuning those as well during the training."}, {"start": 1207.9599999999998, "end": 1211.6399999999999, "text": " And the second thing is they're using EMA, as I said."}, {"start": 1211.64, "end": 1213.16, "text": " So basically, what do you do?"}, {"start": 1213.16, "end": 1219.4, "text": " You take when you're updated, when you update the GNN, you don't just use those weights as as the new weights."}, {"start": 1219.4, "end": 1226.5200000000002, "text": " What you do is instead you take this previous weights, which is the EMA accumulative estimation of weights."}, {"start": 1226.5200000000002, "end": 1230.68, "text": " You kind of weighted with this parameter and they've used zero point nine nine here."}, {"start": 1230.68, "end": 1232.1200000000001, "text": " That's again a hyperparameter."}, {"start": 1232.1200000000001, "end": 1234.68, "text": " They just figured out that that number works well."}, {"start": 1234.68, "end": 1237.0, "text": " And these are the novel weights."}, {"start": 1237.0, "end": 1249.48, "text": " And as you can see, so this is kind of pulling these novel weights towards this stable estimate of the weights that was created through snapshots, multiple snapshots in the history, I guess."}, {"start": 1249.48, "end": 1250.44, "text": " That's pretty much it."}, {"start": 1250.44, "end": 1252.28, "text": " That's the whole method."}, {"start": 1252.28, "end": 1256.92, "text": " And now I'm going to slowly start explaining the experimental setup."}, {"start": 1256.92, "end": 1258.36, "text": " I'm going to show you the results."}, {"start": 1258.36, "end": 1264.84, "text": " And yeah, for showing you the baselines they're going to compare with, let me just quickly just mention this part."}, {"start": 1264.84, "end": 1270.9199999999998, "text": " So they mentioned that we know that November twenty twenty data may be susceptible to covid-19 traffic patterns."}, {"start": 1270.9199999999998, "end": 1279.6399999999999, "text": " And hence we attempted to select regions that minimize this deviation for evaluation in order to capture performance under regular traffic conditions."}, {"start": 1279.6399999999999, "end": 1284.28, "text": " So the thing is, obviously, because because of the covid, the traffic patterns have changed."}, {"start": 1284.28, "end": 1292.6, "text": " And so they are kind of trying to find the regions where even post covid, hopefully they'll have relevant results."}, {"start": 1292.6, "end": 1298.12, "text": " And so my 
question is, how do they actually find the regions that minimize this deviation?"}, {"start": 1298.12, "end": 1303.08, "text": " There are no mention they haven't mentioned this in the paper, but I'd love to know the answer."}, {"start": 1303.08, "end": 1308.52, "text": " So if somebody somehow knows that, feel free to comment down in the comment section."}, {"start": 1308.52, "end": 1312.6, "text": " OK, having said that, let me show you the baselines they are they're using."}, {"start": 1312.6, "end": 1314.9199999999998, "text": " So first baseline is super simple."}, {"start": 1314.9199999999998, "end": 1317.9599999999998, "text": " It's basically real time travel times."}, {"start": 1317.9599999999998, "end": 1319.32, "text": " So what you do is the following."}, {"start": 1319.32, "end": 1320.6, "text": " So you have a segment."}, {"start": 1320.6, "end": 1327.6399999999999, "text": " So you have one segment and I'm just going to draw it here as a square, but it basically represents this thing here."}, {"start": 1327.6399999999999, "end": 1329.6399999999999, "text": " So one of these segments."}, {"start": 1329.6399999999999, "end": 1332.4399999999998, "text": " And so what happens is the following thing."}, {"start": 1332.4399999999998, "end": 1344.1999999999998, "text": " So they have over the last like two minutes or five minutes, they have the data of what were the traversal times of the cars and vehicles in general passing through that segment."}, {"start": 1344.1999999999998, "end": 1350.52, "text": " I mean, they say here nicely summation of segment level travel times computed using segment speeds averaged over a two minute period."}, {"start": 1350.52, "end": 1353.56, "text": " So you have a two minute window prior to the time of prediction."}, {"start": 1353.56, "end": 1359.8799999999999, "text": " So basically take the data, the recent data, the last two minutes, and you use that as a as a weight as a predictor."}, {"start": 1359.8799999999999, "end": 1366.36, "text": " So what happens basically is you find that average for this segment, then you find the average for the next segment."}, {"start": 1366.36, "end": 1368.36, "text": " You find the average for the next segment."}, {"start": 1368.36, "end": 1375.16, "text": " And if this was the whole super like a super segment, the predictor would just sum up these average numbers."}, {"start": 1375.16, "end": 1384.2, "text": " Let's call this T1, T2, T3. 
You just basically take and sum them up to get the final value."}, {"start": 1384.2, "end": 1396.1200000000001, "text": " So T will just be basically a sum of all of these TIs where individual TIs for this particular segment, you use the last two minute data and compute it."}, {"start": 1396.1200000000001, "end": 1397.64, "text": " So it's a nonparametric model."}, {"start": 1397.64, "end": 1398.68, "text": " You don't have a neural network."}, {"start": 1398.68, "end": 1400.0400000000002, "text": " You don't have any fancy model models."}, {"start": 1400.0400000000002, "end": 1403.0, "text": " You just have nonparametric method there."}, {"start": 1403.0, "end": 1405.88, "text": " Fairly simple with historical travel times baseline."}, {"start": 1405.88, "end": 1415.08, "text": " What you do here instead of using the last two minutes, like basically you can use both the future and past information, but in the like last days."}, {"start": 1415.08, "end": 1423.4, "text": " So, for example, if this if we're trying to predict like what is going to what is the travel time of a super segment at 4 p.m."}, {"start": 1423.4, "end": 1427.16, "text": " What they're going to do is take a look at the 4 p.m."}, {"start": 1427.16, "end": 1429.48, "text": " So let me just kind of draw this as a timeline."}, {"start": 1429.48, "end": 1431.08, "text": " So 4 p.m. is here."}, {"start": 1431.08, "end": 1438.4399999999998, "text": " So they're going to take the data over the last, I don't know, like 15 minutes, maybe here and 15 minutes here."}, {"start": 1438.4399999999998, "end": 1447.6399999999999, "text": " And they're going to take a look at like what those times were like last week and the week before and etc."}, {"start": 1447.6399999999999, "end": 1451.56, "text": " And they are going to combine and average out that data over the last some weeks."}, {"start": 1451.56, "end": 1454.9199999999998, "text": " So they say here average across last 17 weeks."}, {"start": 1454.9199999999998, "end": 1458.12, "text": " So that's how you form the predictions here."}, {"start": 1458.12, "end": 1465.1599999999999, "text": " So that's so you have you use the historical data to form these TIs and then you again just sum them up to get the final T."}, {"start": 1465.1599999999999, "end": 1469.0, "text": " Finally, deep sets is like the last method."}, {"start": 1469.0, "end": 1470.84, "text": " This is a model based approach."}, {"start": 1470.84, "end": 1473.08, "text": " So this is not so this is parametric method."}, {"start": 1473.08, "end": 1476.28, "text": " And the thing is, it's ignoring the graph structure."}, {"start": 1476.28, "end": 1481.08, "text": " So if you have a super segment, what this method is going to do is the following."}, {"start": 1481.08, "end": 1482.52, "text": " Let me kind of go up there."}, {"start": 1482.52, "end": 1484.28, "text": " It's going to be easier to explain."}, {"start": 1484.28, "end": 1491.6399999999999, "text": " So this method is going to pass this through an MLP, pass this through an MLP, pass all of the segments through an MLP."}, {"start": 1491.6399999999999, "end": 1495.3999999999999, "text": " Then it's going to like kind of aggregate all of those, maybe using summation."}, {"start": 1495.3999999999999, "end": 1498.44, "text": " And then it's going to pass that final representation."}, {"start": 1498.44, "end": 1506.84, "text": " So we have a final representation here, which is formed, as I said, by aggregating all of these feature vectors of all of these 
segments."}, {"start": 1506.84, "end": 1511.6399999999999, "text": " And then you just pass it to another MLP here to get the final representation."}, {"start": 1511.6399999999999, "end": 1513.08, "text": " And then you can do the predictions."}, {"start": 1513.08, "end": 1516.28, "text": " So as you can see here, you're treating this as a bag of words in a way."}, {"start": 1516.28, "end": 1520.04, "text": " You're just taking all of those features, feature vectors of the segments."}, {"start": 1520.04, "end": 1524.84, "text": " You're kind of ditching the graph data, the graph information of how they're connected."}, {"start": 1524.84, "end": 1527.56, "text": " And you just kind of aggregate them and then inform the prediction."}, {"start": 1527.56, "end": 1528.9199999999998, "text": " So that's the deep set."}, {"start": 1528.9199999999998, "end": 1538.12, "text": " Having explained deep set, I realized I have not explained how the actual inference looks like once we have this this GNN model trained."}, {"start": 1538.12, "end": 1541.0, "text": " So let me explain that in a second."}, {"start": 1541.0, "end": 1543.56, "text": " Basically, this is how the serving looks like."}, {"start": 1543.56, "end": 1550.36, "text": " Once this app, once this model is running in production in your Google Map application, this is how you're actually getting the results."}, {"start": 1550.36, "end": 1553.4, "text": " So imagine you have a route from A to B."}, {"start": 1553.4, "end": 1558.84, "text": " So you have route from A to B and A and B are connected with three super segments."}, {"start": 1558.84, "end": 1560.68, "text": " So we have like something like this."}, {"start": 1560.68, "end": 1570.04, "text": " A is connected by a segment one, two, and three to the segment, to the actual destination B."}, {"start": 1570.04, "end": 1572.68, "text": " OK, so what do you do here?"}, {"start": 1572.68, "end": 1580.36, "text": " And this is going to make the reason behind using the horizon models like more clear, hopefully."}, {"start": 1580.36, "end": 1582.2, "text": " So what happens is you take."}, {"start": 1582.2, "end": 1591.1599999999999, "text": " So we are at time t and you basically take take this segment one and use the model that has the horizon equal to zero."}, {"start": 1591.1599999999999, "end": 1594.52, "text": " And you do prediction and that is going to give you t one."}, {"start": 1594.52, "end": 1597.48, "text": " So that's basically the estimated, the predicted time."}, {"start": 1597.48, "end": 1602.52, "text": " What the model is expecting, how long do you need to to kind of traverse this segment one?"}, {"start": 1602.52, "end": 1605.88, "text": " So once you have t one, let's imagine t one is 12 minutes."}, {"start": 1605.88, "end": 1609.48, "text": " So this is 12 minutes, 12 minutes."}, {"start": 1609.48, "end": 1615.56, "text": " OK, so once you have 12 minutes, then what you do is out of the pool of the horizon models."}, {"start": 1615.56, "end": 1619.72, "text": " So that means, as I said, we have the 10 minute model, the 20 minute model, the 30 minute model."}, {"start": 1619.72, "end": 1624.76, "text": " So we are because we are 12 minutes, we're going to take the models that have horizon 10."}, {"start": 1624.76, "end": 1630.2, "text": " So I'm going to denote them as horizon 10 and you're going to take the model that has horizon 20."}, {"start": 1630.2, "end": 1633.24, "text": " So that's horizon 20 like this."}, {"start": 1633.24, "end": 1637.8799999999999, "text": " 
And now you're going to take those models and pass the segment two through them."}, {"start": 1637.8799999999999, "end": 1643.56, "text": " So H10 is going to tell you how long does it take to traverse segment two in 10 minutes from now."}, {"start": 1643.56, "end": 1647.0, "text": " But since we have 12, we are actually expecting 12 minutes."}, {"start": 1647.0, "end": 1649.56, "text": " That means we're going to do basic interpolation here."}, {"start": 1649.56, "end": 1656.6, "text": " So this one is going to tell us the result. Let me call it t two at 20."}, {"start": 1656.6, "end": 1659.1599999999999, "text": " And this will be t two."}, {"start": 1659.1599999999999, "end": 1665.6399999999999, "text": " So that's the second time it takes to traverse the second segment according to this model 10."}, {"start": 1665.6399999999999, "end": 1674.52, "text": " And you're going to combine this two informations, simple interpolation to form the actual time it takes to traverse the data."}, {"start": 1674.52, "end": 1677.1599999999999, "text": " So it's going to be somewhere between these two numbers."}, {"start": 1677.1599999999999, "end": 1677.48, "text": " Right."}, {"start": 1677.48, "end": 1682.28, "text": " And so once you have the t two, you can use the t two and t one."}, {"start": 1682.28, "end": 1690.6, "text": " You can add them up and then do the same procedure for segment super segment three and rinse and repeat until you get to the end point B."}, {"start": 1690.6, "end": 1694.84, "text": " So, yeah, hopefully that's now it's clear how you're using this model in inference."}, {"start": 1694.84, "end": 1697.72, "text": " Now, let me get back to the where we stopped."}, {"start": 1697.72, "end": 1700.68, "text": " So I explained the deep sets. That's the third baseline."}, {"start": 1700.68, "end": 1704.04, "text": " Now, let's see the actual results here."}, {"start": 1704.04, "end": 1712.12, "text": " By the way, they're using this RMS metric to to kind of compare the baselines."}, {"start": 1712.12, "end": 1713.8799999999999, "text": " And you can see here it's very simple."}, {"start": 1713.8799999999999, "end": 1716.68, "text": " You basically take the super segment level predictions."}, {"start": 1716.68, "end": 1722.04, "text": " You do the summation over the super segments and you do a square and you average here."}, {"start": 1722.04, "end": 1724.2, "text": " Yeah, basically root mean squared error."}, {"start": 1724.2, "end": 1726.84, "text": " It's pretty pretty self-explanatory."}, {"start": 1726.84, "end": 1728.76, "text": " And so here are the results."}, {"start": 1728.76, "end": 1733.56, "text": " You can see the real time baseline, the historical, the deep sets baseline and the GM baseline."}, {"start": 1733.56, "end": 1739.24, "text": " And you can see that it's they also show it's a significant statistically significant result."}, {"start": 1739.24, "end": 1744.6, "text": " And it's lower compared to all of the other baselines, which is cool."}, {"start": 1744.6, "end": 1750.12, "text": " So the thing that initially kind of bugged me here is that if you take a look at the numbers,"}, {"start": 1750.12, "end": 1755.6399999999999, "text": " the differences are fairly trivial, like small compared to the other baselines."}, {"start": 1755.6399999999999, "end": 1758.28, "text": " So deep set seems to be performing really well."}, {"start": 1758.28, "end": 1761.8799999999999, "text": " So why the whole hustle? 
Like why why are we using the graph data?"}, {"start": 1761.88, "end": 1763.64, "text": " And I'm going to get to that in a second."}, {"start": 1763.64, "end": 1768.92, "text": " But like, let's see here, if we compare these numbers, like take this number,"}, {"start": 1768.92, "end": 1773.0, "text": " you can see it's way worse compared to these two models."}, {"start": 1773.0, "end": 1774.68, "text": " So these two are out of the question."}, {"start": 1774.68, "end": 1776.7600000000002, "text": " That's obviously like clear."}, {"start": 1776.7600000000002, "end": 1780.44, "text": " But like between deep sets and GM, what should we use?"}, {"start": 1780.44, "end": 1783.5600000000002, "text": " And I mean, it's a bit better, but is it like worth it?"}, {"start": 1783.5600000000002, "end": 1788.8400000000001, "text": " And the thing is one important note here dimension is so one note is that these"}, {"start": 1788.84, "end": 1791.9599999999998, "text": " improvements are computed over individual super segments,"}, {"start": 1791.9599999999998, "end": 1794.52, "text": " and they will accumulate over entire routes."}, {"start": 1794.52, "end": 1798.76, "text": " Hence, our improvements may quickly become more important on a whole route level."}, {"start": 1798.76, "end": 1801.8799999999999, "text": " So a super segment may be and I think they have a data somewhere here."}, {"start": 1801.8799999999999, "end": 1802.52, "text": " Let me find it."}, {"start": 1805.08, "end": 1805.8, "text": " Where it is."}, {"start": 1805.8, "end": 1808.04, "text": " Okay, here it is data set statistics."}, {"start": 1808.04, "end": 1813.32, "text": " So you can see that the super segment length is so average segment length is around like"}, {"start": 1813.32, "end": 1819.32, "text": " 100 meters and super segments are around so average number of segments per super segment"}, {"start": 1819.32, "end": 1820.12, "text": " is around 20."}, {"start": 1820.12, "end": 1823.96, "text": " So that means probably average super segment has around two kilometers."}, {"start": 1823.96, "end": 1827.96, "text": " So once you accumulate that over like 10 kilometers, when you have multiple super"}, {"start": 1827.96, "end": 1834.52, "text": " segments, then obviously the errors accumulate and GM start bringing more and more advantage"}, {"start": 1834.52, "end": 1836.9199999999998, "text": " compared to these other baselines."}, {"start": 1836.9199999999998, "end": 1841.08, "text": " And I mean, obviously, Google would not put this into production if it wasn't meaningful"}, {"start": 1841.08, "end": 1841.32, "text": " enough."}, {"start": 1841.32, "end": 1843.8799999999999, "text": " So yeah, but just don't get confused by the small differences."}, {"start": 1843.8799999999999, "end": 1845.08, "text": " Now, hopefully, it's clear enough."}, {"start": 1845.08, "end": 1846.6799999999998, "text": " And let's see the ablations."}, {"start": 1846.6799999999998, "end": 1851.56, "text": " So one thing I have not mentioned is they've experimented with various things."}, {"start": 1851.56, "end": 1853.8, "text": " I did mention the unsupervised loss."}, {"start": 1853.8, "end": 1858.6799999999998, "text": " So they tried using stuff like deep graph info max."}, {"start": 1858.6799999999998, "end": 1863.48, "text": " They've also experimented with these combinations of aggregators and like not using just the"}, {"start": 1864.2, "end": 1867.8799999999999, "text": " some aggregator but using some other types of aggregations."}, 
{"start": 1867.88, "end": 1871.8000000000002, "text": " And all of the ideas come from this principal neighborhood aggregation paper."}, {"start": 1872.44, "end": 1873.24, "text": " You can check it out."}, {"start": 1873.24, "end": 1879.16, "text": " Yeah, just keep in mind that they've experimented with lots of things and they eventually"}, {"start": 1879.16, "end": 1883.96, "text": " decided not to use many of those ideas in the production because the trade off was not"}, {"start": 1883.96, "end": 1884.5200000000002, "text": " acceptable."}, {"start": 1884.5200000000002, "end": 1889.72, "text": " So even though they brought some performance benefits, the cost of kind of serving those"}, {"start": 1889.72, "end": 1892.8400000000001, "text": " more like complex models is way higher."}, {"start": 1892.84, "end": 1898.84, "text": " So, okay, let's kind of skim over these ablations."}, {"start": 1898.84, "end": 1900.9199999999998, "text": " I mentioned those embedding vectors."}, {"start": 1900.9199999999998, "end": 1907.24, "text": " So aside from the domain knowledge that was ingrained by those speeds and times in the"}, {"start": 1907.24, "end": 1910.12, "text": " feature vectors, we had the learnable portion."}, {"start": 1910.12, "end": 1916.12, "text": " And you can see that actually using like both embeddings, both the segment level embeddings"}, {"start": 1916.12, "end": 1919.08, "text": " as well as the super segment embeddings brings better results."}, {"start": 1919.08, "end": 1923.0, "text": " Don't get confused by this column like these acronyms."}, {"start": 1923.0, "end": 1927.08, "text": " This stands for New York City, I guess, LA, Tokyo and Singapore."}, {"start": 1927.08, "end": 1930.36, "text": " They had to kind of obviously pick a subset of the cities."}, {"start": 1930.36, "end": 1932.76, "text": " They can't evaluate this on the whole on a global scale."}, {"start": 1933.48, "end": 1936.36, "text": " And you can see results are always better when you're using the embeddings."}, {"start": 1936.36, "end": 1937.3999999999999, "text": " That's that's kind of clear."}, {"start": 1938.12, "end": 1945.3999999999999, "text": " Second thing here is they showed how the meta gradient and EMA are reducing the variants"}, {"start": 1945.3999999999999, "end": 1946.6799999999998, "text": " in the during the training."}, {"start": 1946.68, "end": 1952.44, "text": " So we can see on the x axis here, basically training iterations on the y axis, we see"}, {"start": 1952.44, "end": 1954.6000000000001, "text": " the RMS metric."}, {"start": 1954.6000000000001, "end": 1960.3600000000001, "text": " And we can see that when we're just using Adam with no EMA and no meta gradients, you"}, {"start": 1960.3600000000001, "end": 1964.2, "text": " can see that just the red curve is kind of getting crazy here."}, {"start": 1964.2, "end": 1969.8, "text": " And once you start adding EMA and meta gradients, they show that the blue curves are using both"}, {"start": 1969.8, "end": 1972.6000000000001, "text": " of these is the best thing to do."}, {"start": 1972.6, "end": 1976.84, "text": " And they show that not only for they'll just show for it for Tokyo and they show it also"}, {"start": 1976.84, "end": 1978.12, "text": " for for LA here."}, {"start": 1978.12, "end": 1979.56, "text": " And I just made a note here."}, {"start": 1979.56, "end": 1984.04, "text": " So basically, it'd be interesting to see how learning rate evolves over the duration of"}, {"start": 1984.04, "end": 1984.6, "text": " the 
training."}, {"start": 1985.24, "end": 1990.1999999999998, "text": " So that means I'd love to see the like how the meta gradient is tuning the learning rate."}, {"start": 1990.1999999999998, "end": 1992.28, "text": " Can we see some interesting patterns there?"}, {"start": 1992.28, "end": 1996.6, "text": " Like those types of visualizations in the appendix would be super appreciated."}, {"start": 1996.6, "end": 2001.3999999999999, "text": " And secondly, be interesting to understand which other schedules did they try with Adam"}, {"start": 2001.4, "end": 2003.96, "text": " before deciding to go with meta gradients."}, {"start": 2003.96, "end": 2006.2800000000002, "text": " Let's wrap up the ablation studies."}, {"start": 2006.2800000000002, "end": 2013.16, "text": " They experimented with these extended super segments and extended super segments plus"}, {"start": 2013.16, "end": 2021.24, "text": " extra features, whereby they were using additional segments, he can see extended super segments,"}, {"start": 2021.24, "end": 2025.72, "text": " but they were still predicting only the like traversal times for the segments in our original"}, {"start": 2025.72, "end": 2026.76, "text": " super segment."}, {"start": 2026.76, "end": 2032.84, "text": " And they basically showed that all of that kind of helps, but like it's very, very expensive,"}, {"start": 2032.84, "end": 2035.24, "text": " both memory wise and I guess inference wise."}, {"start": 2035.24, "end": 2037.64, "text": " And so they decided not to use this in the production setting."}, {"start": 2038.84, "end": 2041.64, "text": " Here you can see the unsupervised loss I mentioned."}, {"start": 2041.64, "end": 2043.72, "text": " So this is the deep graph informax paper."}, {"start": 2044.52, "end": 2048.12, "text": " This is the I guess, or encoder."}, {"start": 2048.12, "end": 2050.6, "text": " So graph or encoder, I guess, paper."}, {"start": 2050.6, "end": 2056.04, "text": " And so you can see that those two are, again, improving results across various cities and"}, {"start": 2056.04, "end": 2057.0, "text": " various horizons."}, {"start": 2057.72, "end": 2061.64, "text": " So both for New York City, LA, et cetera, for different horizons."}, {"start": 2061.64, "end": 2066.7599999999998, "text": " But the trick here is, and this is an excerpt from this paper."}, {"start": 2066.7599999999998, "end": 2072.2, "text": " So the optimal variant and hyperparameters were different for each region and prediction"}, {"start": 2072.2, "end": 2072.92, "text": " horizon."}, {"start": 2072.92, "end": 2078.2799999999997, "text": " Thus, it is likely expensive to tune such losses for all existing regions and each horizon."}, {"start": 2078.2799999999997, "end": 2082.44, "text": " So as I said, you're going to be using various different GNNs depending on region."}, {"start": 2082.44, "end": 2087.96, "text": " So if you have additionally have to tune these kind of hyperparameters, it's going to be"}, {"start": 2087.96, "end": 2089.8, "text": " hard and expensive."}, {"start": 2089.8, "end": 2092.68, "text": " And so they decided not to use them in the actual production setting."}, {"start": 2093.8, "end": 2097.0, "text": " Finally, let's see this, these results."}, {"start": 2097.0, "end": 2105.16, "text": " So using various different aggregation functions like the sum is the actual, the one that's"}, {"start": 2105.16, "end": 2111.56, "text": " using production, but they also try the square root, the mean, the max and all of them combined."}, {"start": 
2111.56, "end": 2116.2799999999997, "text": " And as you can see, usually this all of them combined, like gives you additional performance."}, {"start": 2116.2799999999997, "end": 2120.2799999999997, "text": " But again, you have to once you're in a production setting, this is engineering, you have to"}, {"start": 2120.2799999999997, "end": 2121.16, "text": " trade off the costs."}, {"start": 2121.16, "end": 2123.16, "text": " And so they stick with the sum function."}, {"start": 2123.16, "end": 2127.4, "text": " They also did this online evaluation of GN compared to the baselines and they showed"}, {"start": 2127.4, "end": 2130.44, "text": " it's again better, same as in the offline setting we just saw."}, {"start": 2131.0, "end": 2136.12, "text": " So a couple of things that bug me there is, so they mentioned that computed over all tracks"}, {"start": 2136.12, "end": 2140.36, "text": " within a week, specifically January 15th through January 22nd."}, {"start": 2140.36, "end": 2143.6400000000003, "text": " What is not clear to me is what it means over all tracks."}, {"start": 2143.6400000000003, "end": 2150.52, "text": " So do they actually average the results across every single region they are covering or what?"}, {"start": 2150.52, "end": 2152.84, "text": " So this part kind of bugs me."}, {"start": 2152.84, "end": 2154.1200000000003, "text": " So yeah, not sure about it."}, {"start": 2155.2400000000002, "end": 2161.1600000000003, "text": " And again, I said the results are nice, but like basically we don't have a gut feeling"}, {"start": 2161.1600000000003, "end": 2163.6400000000003, "text": " for what these small improvements mean."}, {"start": 2163.6400000000003, "end": 2169.48, "text": " And so A, it'd be nice to have like something like, hey, when you're using this model, it's"}, {"start": 2169.48, "end": 2173.2400000000002, "text": " going to save you up that many seconds or minutes compared to this baseline."}, {"start": 2173.2400000000002, "end": 2174.6, "text": " So that would be much more intuitive."}, {"start": 2175.08, "end": 2178.04, "text": " We'd have a much better intuitive understanding of what's going on here."}, {"start": 2178.04, "end": 2184.44, "text": " Secondly, and I noted it here, basically some type of geographical heat maps where we see,"}, {"start": 2184.44, "end": 2189.64, "text": " depending on the region, like what the performance differences are, that'd be super nice"}, {"start": 2189.64, "end": 2191.0, "text": " visualization, I guess."}, {"start": 2191.0, "end": 2197.64, "text": " Finally, they did the last evaluation on this particular route in LA, I think, and they"}, {"start": 2197.64, "end": 2200.6, "text": " show certain results and I have a same remark here."}, {"start": 2200.6, "end": 2204.2, "text": " It's not intuitive what these curves mean."}, {"start": 2204.2, "end": 2209.7999999999997, "text": " Like, I mean, you can see that the GN, so this graph neural network model is way better."}, {"start": 2211.16, "end": 2215.3199999999997, "text": " It's better compared to real time baseline, but I don't have a gut feeling for what this"}, {"start": 2215.3199999999997, "end": 2215.8799999999997, "text": " means."}, {"start": 2215.8799999999997, "end": 2217.4, "text": " And so yeah, that kind of sucks."}, {"start": 2217.4, "end": 2220.68, "text": " And also they don't have any units here."}, {"start": 2221.48, "end": 2225.16, "text": " And so I guess it's seconds, but it's kind of not clear enough."}, {"start": 2225.16, "end": 2231.8799999999997, 
"text": " And so they here also show for the, when we have results for the horizon, like one hour"}, {"start": 2231.8799999999997, "end": 2236.7599999999998, "text": " from now, you can see that this GN that was trained with that horizon is way better compared"}, {"start": 2236.7599999999998, "end": 2239.64, "text": " to the GN that was trained with the horizon equals to zero."}, {"start": 2239.64, "end": 2246.12, "text": " So that's kind of argument behind why do we need all of these multiple horizon GNM models"}, {"start": 2246.12, "end": 2250.44, "text": " and we're combining them to actually serve the results, the end results."}, {"start": 2250.44, "end": 2255.16, "text": " I'm going to end it up here and I'm going to just briefly mention that this is a thing"}, {"start": 2255.16, "end": 2259.0, "text": " that's running in production, which means that research was only a small portion of"}, {"start": 2259.0, "end": 2262.52, "text": " this whole project and there were many engineering challenges they had to solve."}, {"start": 2262.52, "end": 2265.96, "text": " So one of them was, so they had to obviously cache results."}, {"start": 2265.96, "end": 2271.32, "text": " So as they say here, the path in an ETA request usually includes multiple super segments and"}, {"start": 2271.32, "end": 2276.36, "text": " making predictions for these super segments on the fly is not practical nor scalable."}, {"start": 2276.36, "end": 2282.6800000000003, "text": " So they had to kind of cache the results and then periodically refresh all of the cache."}, {"start": 2283.7200000000003, "end": 2285.4, "text": " So that's one of the engineering details."}, {"start": 2285.4, "end": 2289.48, "text": " The second thing they mentioned here, for less frequently visited road segments,"}, {"start": 2289.48, "end": 2291.6400000000003, "text": " we use simpler per segment models."}, {"start": 2291.6400000000003, "end": 2294.6800000000003, "text": " So they drop to simpler baselines in certain regions."}, {"start": 2294.6800000000003, "end": 2295.96, "text": " So it's a complex system."}, {"start": 2295.96, "end": 2297.08, "text": " There are many components."}, {"start": 2297.08, "end": 2301.8, "text": " So the meta-gradient, the EMA, the graph neural networks, all of this engineering"}, {"start": 2301.8, "end": 2306.6000000000004, "text": " tricks to make this work and I think it's a very impactful project."}, {"start": 2306.6000000000004, "end": 2308.28, "text": " I mean, I use Google Maps all the time."}, {"start": 2308.28, "end": 2309.88, "text": " So I really love this paper."}, {"start": 2309.88, "end": 2315.8, "text": " I think it's a clearly written paper aside from a couple of these remarks and maybe the"}, {"start": 2315.8, "end": 2317.6400000000003, "text": " lack of appendix section."}, {"start": 2317.6400000000003, "end": 2318.76, "text": " Hopefully like this video."}, {"start": 2318.76, "end": 2323.0, "text": " If you did share it out with a friend, subscribe to this channel, hit the bell icon."}, {"start": 2323.0, "end": 2324.76, "text": " Also join the Discord community."}, {"start": 2324.76, "end": 2326.6000000000004, "text": " There is a lot of engagement happening there."}, {"start": 2327.2400000000002, "end": 2330.6800000000003, "text": " Cool people interested in ML helping each other out."}, {"start": 2330.68, "end": 2345.24, "text": " So yeah, do check it out and until next time, bye-bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=zQqbXFY0Nco
Neural Search with Jina AI | Open-source ML Tool Explained
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I kick off a series of videos where I'll be walking you through how to use certain ML tools! ML is definitely more than research. It's about engineering and it's about using the best tools out there to get the job done. First up: Jina AI - a neural search engine! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Jina AI GitHub: https://github.com/jina-ai/jina ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 A high-level overview of Jina AI 03:30 Setting up the MNIST fashion example 04:50 Arguments and data loading 10:15 Core concepts - Flow and Executors 11:40 Visualizing the flow 12:40 Flow is lazy 13:45 The core algorithm explained 15:20 Indexing 18:15 Encoding the images via SVD 22:45 Evaluation - finding closest embeddings 27:50 Writing to an HTML 29:25 HTML results visualized 30:10 Chatbot example overview 31:00 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Bartłomiej Danek Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #jinaai #neuralsearch #mltools
What's cracking guys in this video I'm continuing with the coding video series and I'm not going to cover a research paper like code in this one I'm gonna cover open source ML tools and libraries and hopefully this will help you improve your own projects by including these awesome open source libraries and tools so in this one I'm gonna cover Jina AI and as you can see here they are a cloud native neural search framework for any kind of data so this enables you to import a lightweight Google into your project whereas the neural part means you'll be using deep neural networks to perform the search and the any kind of data means basically you can take whatever modality you wish so like image or text or audio or video or mesh or whatnot and you can use those to query and find like data from any other modalities you can use images to query for certain text snippets you can use text to query for images or videos or audio for mesh or whatever like whatever combination in that combinatorial space you can imagine Jina AI can kind of help you do that as you can see the readme is pretty nicely structured they have lots of resources here and tutorials they have like lots of contributors people are using this already it's an established tool as you can see here used by 130 like organizations and people already and so the way I suggest you go ahead and explore this tool is by like understanding these quick demos and in this video I'm gonna walk you through step-by-step through the fashion image search aside from that you should probably go and understand the basic concepts of Jina so that's document executor and flow and they have really nice pages where they explain how you can use the document how you can use all of these like native concepts of Jina so document is an abstraction they use to describe whatever data type you have so basically document can be an image document can be a text document can be video or whatever data type you have don't get confused by the terminology so if you're like me a document is gonna associate you to text data textual data only and even the emoji here is associating us to text but it's not it's an abstraction object which wraps up any kind of data then they have executors which process the data and flows finally organize those executors into these kinds of pipelines as you can see here on the image a short note on the installation procedure because they are a cloud native neural search engine that basically means because cloud is mostly Linux that means they support Jina for Linux pretty much and because I'm on Windows this thing does not work on Windows and I had to install the Windows Subsystem for Linux I had some problems getting it to work but after you have upgraded to WSL 2 and fixed some other small problems like I had problems plotting because Matplotlib would not display the actual like chart so I had to do some tweaks so if you want to know all of those details if you are on Windows like me and you want to give this thing a go do join the Discord community because I'm gonna paste like the step-by-step how I kind of solved all of the problems I encountered on Windows and how I got it to work so having said that let's start with the fashion image search so they have a nice page here explaining how you can actually get it like running and the end like result will be this you can see here basically we have a query image on the left and we can fetch like they are fetching 50 closest images from the data set where closeness is defined
as you can see here like visual similarity in the case of images so yeah let's get this project started okay so the first thing you obviously need to do here is do git clone clone the repository on your own machine install the condo environment and just do pip install you Gina and that's gonna install Gina AI package to your environment and then you can get started so as you can see here I'm Windows so I'm in this like WSL terminal I'm using Ubuntu as my distribution of choice I'm just going to start Visual Studio code you can use whatever editor you prefer just like kind of navigate to the root directory of your project and again from your environment where you're using conda or something else just start your ID and then we can get started the first thing we will want to do is to clone the demo example to the root directory so I'm just gonna type Gina hello fork fashion into Gina fashion demo directory so basically this means hey I want to use Gina and I want to use these quick examples I want to fork it to my local machine I want to use fashion example in particular and I'm gonna fork it into Gina fashion demo so if I run that the as you can see here I just got this new folder downloaded and I'm gonna open up the the app.py so this is the main like main file and the thing we're going to do so we basically don't need any additional arguments so we can just start this thing and that's it so I'm gonna run f5 and start this project okay there are some basic parameters we are setting up here let me quickly just walk you through them so basically you can see here what is important is this part here so what they include they have this working directory the work directory is just going to create a random directory inside of your root directory and that's pretty much the only thing we care about in this one and then what they do is they as you can see here they have the data URL the the labels like for the training set so these two are for the training set so we are downloading fashion MNIST here and then they have the query data so basically the let's call these these are test sets again fashion MNIST and we have the data and we have the the labels here so number queries is just the number of queries that we're gonna display on the on the web page we'll see soon see and finally top k means how many how many similar items are we trying to find for each of these queries so those are all of the parameters you should care about and now let's return to the app.py so let's kind of step into this function and see what's going on so first things first they are going to create this random directory so that's it and then they're gonna form this this basically dictionary where they have as you can see here so the URL for the data and the local file name name where this data is gonna be saved so we're gonna save the the the training data directly inside of this random directory and we're gonna name this index labels file so nothing special here we're gonna do that for both the training set and the test set and that's it okay let's continue so next step is downloading the data so let's step into this download data function so basically nothing smart here this is everything is handled by this URL lib package so what happens is we're gonna skip this because it's none so we're gonna install this opener whatever so this is just like boilerplate pretty much you don't need to understand all of these details with progress bar just a context manager to show the progress while the thing is being downloaded and now 
we're gonna iterate through the dictionary and download the data and the labels for the training set and for the test set that's it so if it if it doesn't exist and it doesn't exist because we just created this random folder we're gonna retrieve it here we're gonna download it so until this point we just downloaded the data into those files and now we're gonna actually load the data from the files let's see load labels let's step into that one and it's very simple what they do is just they just open up that like local file and let me show you where it is so basically this is the random directory I was mentioning so it's directly inside of Gina root you open it up and you can see index label so that's the first file so now here we're gonna open it up as you can see here and we're gonna load it from using a numpy and just reading the file here so nothing you need to care about so this is some specific knowledge because they know the structure of this file they know that they have to offset reading eight bytes from the very start and that's it then they do the reshape part and this is always useful to have shapes in your mind constantly when you're debugging machine learning code so this should be a 60 K comma one should be the shape of this of this object because those are labels and we have 60 K examples inside of the index data set so I'm gonna step over here and we can check the data to see whether we are correct here so V I'm gonna find the data array and you can see here shape is 60 K comma one so that's it now we're gonna do the same thing for for the next label so this is gonna download the labels for the query data set same procedure so I won't be walking you through the same function again and finally we have we're downloading the actual images through that's done you can see in the directory we have this index originals that means these are actually the images okay and now we're gonna step into this load MNIST I'm gonna step into it and you can see again we just open up the file we read it we like have this non pipe from buffer and we reshape it into 60 K 28 by 28 because we have MNIST images here so that's it it's fairly simple I'm gonna just step over this and we're gonna see that after I do this so we data and we have 60 K 28 28 those are obviously MNIST images and finally last step we're gonna do the same thing so we're doing the same thing just for the query data set this time and I'm gonna step over this time so we're gonna load the data and that's it we downloaded the data we loaded the labels we loaded the like images and that's it so that data is currently stored into this dictionary so we have targets you can pick whatever type of data you want so for example we can we can pick index labels so that means the index data set the labels and you'll find the data array here so it's 60 K 1 you can do the same thing for example index which is images and you can see the data 60 K 28 28 so everything is stored into this neatly stored into this dictionary object and that's it now this is some nothing that important just some optimization stuff and finally this is the flow object I mentioned in the initial part of the video which is one of the like building blocks of these GNA like programs and what flow does is just organizes these executors into this nice pipeline which you can then run and execute a process the data in whichever whatever manner you wish so as you can see here we're not form flow and we're gonna add these so these are called executors so we have executor that's 
called my encoder we're gonna see how that thing what the thing does in a couple of minutes we have we'll have two of those in parallel then we'll have the indexer and the indexer is gonna index this thing into this rug directory which is the random UUID directory as you can see here the string on the screen and finally we have this my evaluator executor which is going to do some the actual searching and matching of the image embedding vector of the query embedding vector to the all of the other index embedding vectors so that's that's it and now because this is lazy execution this object still the constructors of these executors are still not called that's gonna happen after we open it up with a context manager here so only here will they actually execute and form those those like executors so that's the lazy execution design pattern so the thing I'm gonna do now is actually visualize this flow so I'm gonna stop this program I'm gonna add the plotting function here so we can plot you can actually plot these like flows and we're gonna just just save it into like I don't know like time flow SVG and that's it I'm gonna save this I'm gonna rerun it and I'm gonna show you what happens so we are here if I do a step over let's just step over it's gonna form this SVG and I'm gonna open it up and show you how this thing looks like here it is it's you can see the temporary flow SVG file in the root directory and for some reason it's gonna open it up in Microsoft Edge even though my default browser is Google Chrome so let's open it up here and you can see the actual like diagram so we have the input here we split the input into two my encoder executors then we have the my indexer and then we have the my evaluator if we were to put some names as the arguments to those here basically name they have a name parameter blah blah blah we have nicer diagrams but you get the point okay having done that let me now show you what I meant by delays execution so we're here now I still haven't executed this line so if I go to let me find the files let me find the the actual my executors folder so I'm gonna put some breakpoint inside of the constructors here so we need functions my indexer I'm gonna put a breaking point here my encoder and finally I'm gonna put a breaking point into my evaluator and so now I'm gonna put a breakpoint here and I'm gonna step over this line here and we're gonna see it's gonna hit the constructors of those executors so f10 and you can see here my encoder is hit here so I'm gonna do f5 again we we had encoder again because we have is remember two encoder executors in parallel so I'm gonna step over f10 so f5 indexer and f5 evaluator and that's it f5 will bring us to the next line f index okay so you saw that by doing this we only then executed the constructors of all of the executors you've inputted here and now now we're gonna hit the index function so first let me break this down on a high level for you and then we're gonna step over the code so we have as you can see here the main function there is not much lines of code here left it's fairly simple we have we're calling flow index which is the same thing as if if we were to call this post function and instead of using slash eval here we'd be using slash index so these are the same things so the indexing function is gonna as you can imagine kind of index the data and gonna create the embedding vectors and then the eval function is gonna do the actual matching so once we call the eval function we're gonna actually find the 50 closest 
matches to that query image okay so that's like the rough pipeline and then finally after we have the data we're gonna write it into HTML and we're gonna be able to open up this HTML page in the browser and see the final results let's go a bit deeper and then I'm gonna step into these this code so next thing you see we have these generators we have index generator here we have query generator here and we have some additional parameters not that important show progress is just gonna show us in the bar as the testing is being indexed and evaluated we're gonna see the progress on the in the console print results so once this thing is done the print result is a callback function and it's just going to dump all of the results into some data structure which is then going to be used by this write HTML function so that's the whole point there top key is just the argument so it's currently 50 which means we're gonna search for 50 closest matches inside of the index data set using the query vector and that's pretty much it on the high level before I show you how these post functions work let's step into the generator itself and see what's going on there so the generator is gonna iterate over all of the 60k docs as you can see here and it's gonna basically a load you can see here the structure of the dictionary we fetch the index data and then we just fetch the first image and for some reason they are inverting the image with 255 here I'm not sure why this is if somebody knows feel free to post to comment down below but they yeah for some reason they are they're inverting it and now what the next thing they're gonna do is because this is just a grayscale image they're gonna do stack times three here which is gonna form RGB image so they're basically gonna copy that channel three times and get an RGB image so now this is the document so this is the third core component of genome framework so I mentioned flow I mentioned executors so document is just like a wrapper object around all of different data types so here we're gonna form an image and so I'm gonna step over here and we're gonna set a tag ID to this this is gonna be zero because we are iterating remember over all of the 60k docs so that's it now we're gonna yield D which basically this is the generator syntax of Python I'm gonna not I'm not gonna dig into that a lot deeper but yeah so if I step over again it's gonna hit the same line again and you may wonder why is that so the thing is the way this is set up in the background is this generator is gonna yield hundred images and only then will the post function we just saw so the from the flow object is gonna execute well this thing is generating images let's return back to app I so it's gonna generate hundred images and the reason is as I already explained there is a hidden default argument that's hundred and after it's generated a hundred images is gonna call the index is gonna trigger and what happens once index is called is all of the executors which have these request methods defined all of these functions which are decorated with this at requests on index are gonna get cold and not only those functions but also the functions that don't have anything any on parameter here so we're gonna see what I mean by that so requests so I'm gonna put a breakpoint here requests we have this one is triggered on search and eval so we don't have to worry about it yet so this one is as you can see that doesn't have the on part so that means every single flow post function is gonna trigger this one so I have 
a breakpoint here and let's go further on this converter function is not called so I'm gonna skip it and this last one is eval so it's not gonna be called as well so the functions that are going to get called right now are basically the encode function and the index function so let's kind of step over here so I'm gonna hit F5 that's gonna generate a hundred images and the index function will be called so F5 and as you can see here the encode function is the first one to be called let's understand what's going on here so docs is just an array of a hundred images remember those are the images generated by the index generator so we're gonna fetch the content which is just going to fetch all of the images we're gonna stack them and we're going to end up with 100 by 28 by 28 by 3 the shape will be something like that I suspect so let's see the content part so the shape is 100 by 28 by 28 by 3 because remember we have RGB MNIST images we have a hundred of them because of the generator and that's it now we're gonna reshape them so they're gonna extract only a single channel so I don't know I'm not sure why we had to form RGB images when we were extracting a single channel here but yeah whatever so basically we're reshaping these to 784 because 28 squared equals that number so we're gonna just flatten it out and you can see here 100 by 784 and now this is the funny part so the first thing they're gonna do is because these MNIST images are in the 0 to 255 range so they're gonna normalize them to the 0 to 1 range and then they're gonna do this multiplication with this other matrix and that's going to form the embedding so let me kind of explain what this actually does so first things first let's see the shape so we have this self dot other matrix let me find that one so it's as you can see here 784 by 64 so once we multiply so we have remember we have 100 by 784 here so that's this matrix and then we multiply that thing so let me put brackets here so we multiply it with 784 by 64 that means we're gonna convert this shape into 100 by 64 so we basically embedded all of those images into this 64 dimensional embedding vector so the logic here is not actually a neural network the thing they do is in the like constructor function they just form this random matrix that has exactly this shape then they apply the singular value decomposition I'm not gonna go into details here it's a well-known technique and the thing you should probably know is that any matrix no matter the shape so it doesn't have to be a square matrix can be decomposed using SVD so once we have SVD the nice property of these two matrices is that they are orthogonal that means once you multiply a vector by this matrix the vector is just gonna change the rotation but the length will stay the same so it's preserving the length of the vector so it's just a rotation matrix and what they do is they multiply these two matrices to form this other matrix so it's just a fancy way of creating an orthogonal matrix i.e. a matrix that does rotations of this particular shape and that's gonna embed our vectors into this 64 dimensional vector that's it okay let's continue here so let's press F10 here so now the next thing that's gonna happen is we're gonna iterate we're gonna fill in these documents with embeddings you can see here embeddings and the content is just a flattened
out amnesty image this function is gonna convert the actual blob the image into URIs which is unique resource identifier which is basically like a long string that's gonna be an equivalent representation of this image and then they're gonna pop the blob that's it we're gonna repeat this for all of the images and that's the encoding part of the index of this of this index function so remember we have hundred images here now we just what it we just did is we embedded all of these images and the next thing we're gonna do so I'm gonna just press F5 here we should hit one of the other functions so the the indexing function I guess so F5 and as you can see here the indexing function the only thing it does is it has this docs like private variable which is a document array mem a mem map and just another object from Gina which helps you kind of optimize memory wise and time wise the performance so I won't get into the details because it's just an optimization trick okay so we're gonna just dump all of those docs which now remember contain the embeddings as well as the original images convert it into your eyes I went ahead and executed the whole index function because it takes roughly 58 seconds on my machine so yeah so what happened is again we chunked the data the 60k images into hundred images per batch and we did the embeddings we did the extension of that specialized memory object so we just kind of added all of those documents with the embedding vectors to that specialized memory object the second thing that's gonna happen is we're gonna hold the post eval function and this again has this query generator so I'm gonna step into it and put a breakpoint here so put a breakpoint there return here so we're gonna step over and see what the query generator is doing and then we're gonna hit set the breakpoints to all of the executors which are kind of sign up to listen to this eval call and I've said the breakpoints to all of those eval functions already so let's just step over here so query generator you can see the number of queries 128 we have the target so that's a dictionary that contains all the data and with ground truth equals to true let's see what it means so query generator okay what this line will do is the following you will find because we have remember we have MNIST data set and we have like I think there are 10 classes inside of it so what is going to do is return this object which will contain the following thing a dictionary that will have a label and then all of the images from the index data set that have so all of the IDs that have label 0 and then the same thing for label 1 so IDs here etc so let me step over this code so f10 and see what GTS looks like so again and if I expand 0 somewhere in there you can find the information of the IDs of images or the document IDs so now a small drawback with Gina in my opinion is that it's kind of hard to find in these as you can see in variables while you're debugging the code to find the actual information you need because there is a lot of metadata and things you don't care about because of all of these complex document objects but it's also a good thing I guess at the end because it's wrapping up a lot of functionality for you so now we're gonna iterate through the like all of the docs we have 128 query vectors so we're gonna do so 128 times we're gonna form a random query vector so random ID from this because we have 10k query images remember we're gonna form a random ID here and we're gonna just fetch that like image that's again black 
and white first and then they form RGB image and then they just kind of wrap it up into the document and because the ground truth is set to true we're gonna return not only the query image but also the actual ground truth label this is again going to be called a hundred times and only then will we have the post eval function being called so I'm gonna step over until we hit the first breakpoint and here is the funny part so basically what happens here is the docs contains a hundred query images and it's gonna do the match function with this self dot underscore docs which if you remember contains the index images with all of the embedding vectors so what we're going to do here is for every single query image here we're gonna find the closest image in the index images using the cosine metric just doing some normalization from zero to one and finally we're gonna fetch top k which means the 50 closest images using the cosine like metric so in the background we're gonna have like a for loop simple like comparisons and at the end this docs container is gonna contain all of the matches inside of it I'm gonna again step over and get to the next breakpoint finally we are here in the last executor so here we're gonna calculate the precision and recall metrics so we've done the matching we now have for each query vector we have associated 50 like images from the index data set or from the training set and let's see some information here so I can use VS Code's nice functionality here and plot the number of documents we have so that's 28 and it's 28 because we have 128 remember and we're chunking into pieces of a hundred and so 128 modulo 100 is gonna give us 28 query vectors in this iteration so ground truths again contains basically the IDs of those images in the index data set that have the same label as a particular query vector so let me kind of break that down so let me step over here so first things first so this is a query vector and these are the matches from the like match function so basically these are the 50 images and we're gonna grab their IDs so we're gonna grab the IDs of those images from the index data set so let's see that so if I print actual here actual whoops actual okay so we can see some numbers so those are the IDs of the images from the index data set now these are floats obviously there are some small inefficiencies and problems with this like example but it's working functionally it's working correctly and finally desired contains basically the set of all of the IDs that have the same label as this query vector so that means because we have 60k images and we have 10 labels I can expect to have 6,000 like IDs here so if I kind of print length of desired like this whoops oh yeah I have to step over it so if I now try and print this it's 6,000 so yeah and now we can kind of do the intersection between the actual IDs we got and the desired so the ones that have the same label as the query vector and we can find the precision and recall metrics I'm gonna skip over that it's simple logic and that's it now I'm going to basically remove this breakpoint and jump to the final step of this whole program and that's writing all of this data into an HTML document and kind of rendering it inside of a browser okay here we are as I said we had this post eval function being called and at the end we had this callback function so let me kind of quickly walk you through what happened there so after we finished the calculation of
like precision and recall metrics for every single query vector now this this function kind of populates this simple this is a simple like Python list it populates it with HTML syntax so we have as you see here here a table row table cell and we paste the like original query image and then we have we trade over all of the matches over the 50 matches we had all of those matches in the same row and then we close the cell we close the row and we repeat that for every single query vector that's 128 query vectors and that's how this thing is formed and after that let me kind of like jump into write HTML function I'm gonna put a breakpoint here and gonna press f5 so here we are it's gonna open up this demo HTML which is a template like HTML file and it's gonna replace some of the placeholder variables like as you can see here result we're gonna put this result HTML so that's the same just as Python string we just created that contains the content of the HTML file so yeah we're gonna replace all of these templates placeholders and finally we're gonna get let me jump to this web browser open it's gonna fail because I'm in WSL but that's just on my system it's gonna work for you much better and I'll just manually open it up so that's it when I press f5 this is the end of the program and I'm gonna open up the HTML okay here is the HTML let me open it up and here are the results and as you can see on the left hand side we've got query vectors on the right hand side we have the matches so the 50 matches for every single very vector and there is 128 of them and as you can see our results look pretty decent like if we focus on one particular image like I don't know like this shoe here we can see that most of these results are shoes they're not even though there are many other images in this data set like like bags or or whatever trousers I don't know there must be some like failure cases but I didn't manage to see anyone I don't see any failure case here in particular but yeah hopefully got some idea what Gina can do and let me just kind of mention that this second example with the QA the question-answer like chatbot is maybe a bit better because remember that Gina is a cloud native neural search so cloud and this example here what it does is it's very similar but like it opens up a chatbot so and it creates a Python server and so what happens is there is a communication happening between a Gina Python server and this JavaScript code in the browser and so you basically input like a query like a text query inside of the browser it gets embedded the Gina does its magic of finding the similar the most similar match so the most similar response the textual response and it communicates it back via the like this HTTPS I guess like a link to the JavaScript code and it just renders the the response it's a very similar example but it basically paints a clear picture the Gina is used for these cloud types of applications and because here we have explicitly like a Python server so hopefully found this video useful if you did share it out with a friend subscribe to the channel hit that bell icon also if you have any feedback on this video or what you would like me which ML tool you'd like me to cover next let me know down in the comments and also join the discord community until next time bye bye
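For the Jina walkthrough above, the whole fashion demo boils down to one Flow with three executors. The sketch below is not the exact `jina hello fork fashion` source: the executor and helper names follow the walkthrough, while the module paths, the `replicas` and `workspace` arguments, the endpoint names and the way `top_k` is passed through `parameters` are assumptions about the demo's plumbing.

```python
# A rough sketch of the fashion demo pipeline, not the exact `jina hello fork fashion` source.
from jina import Flow
from my_executors import MyEncoder, MyIndexer, MyEvaluator   # demo executors (module name assumed)
from helper import index_generator, query_generator, print_result  # demo helpers (module name assumed)

workdir = './workspace'  # stand-in for the random UUID directory the demo creates
targets = ...            # dict holding the downloaded Fashion-MNIST arrays (download/loading omitted)

f = (
    Flow()
    .add(uses=MyEncoder, replicas=2)          # two encoders running in parallel
    .add(uses=MyIndexer, workspace=workdir)   # stores docs + embeddings in the workdir
    .add(uses=MyEvaluator)                    # matches queries and computes precision/recall
)

f.plot('tmp_flow.svg')  # render the pipeline diagram, as done in the video

with f:  # executors are only constructed here (lazy execution)
    # index the 60k training images; the generator yields Documents in batches of 100
    f.post('/index', index_generator(targets), show_progress=True)
    # embed 128 query images, match them against the index and evaluate them
    f.post('/eval', query_generator(targets, with_groundtruth=True),
           parameters={'top_k': 50}, show_progress=True, on_done=print_result)
```

Note the lazy execution the walkthrough points out: nothing is constructed until the `with f:` block opens the flow, and each `f.post(...)` call only triggers executor methods bound to that endpoint (or bound to every endpoint).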
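The encoder and evaluator logic themselves need no Jina at all. Here is a minimal numpy sketch of what the walkthrough describes, assuming 28x28 Fashion-MNIST images, 64-dimensional embeddings and top_k = 50; variable names such as `oth_mat` are illustrative, not the demo's actual identifiers.

```python
import numpy as np

# "Encoder": a fixed random 784x64 matrix made orthogonal via SVD, so multiplying by it
# only rotates (length-preserving) while projecting each flattened image down to 64-D.
rng = np.random.default_rng(0)
u, _, vt = np.linalg.svd(rng.random((784, 64)), full_matrices=False)
oth_mat = u @ vt                                    # 784 x 64, orthonormal columns

def encode(images):                                 # images: (N, 28, 28) grayscale, values 0..255
    x = images.reshape(-1, 784) / 255.0             # flatten and normalize to [0, 1]
    return x @ oth_mat                              # (N, 64) embeddings

def top_k_cosine(query_emb, index_emb, k=50):
    # cosine similarity of one query embedding against all index embeddings
    q = query_emb / np.linalg.norm(query_emb)
    idx = index_emb / np.linalg.norm(index_emb, axis=1, keepdims=True)
    return np.argsort(-(idx @ q))[:k]               # ids of the k closest index images

def precision_recall(actual_ids, desired_ids, k=50):
    # intersect retrieved ids with all ids sharing the query's label (~6000 per class here)
    hits = len(set(actual_ids) & set(desired_ids))
    return hits / k, hits / len(desired_ids)
```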
[{"start": 0.0, "end": 4.68, "text": " What's cracking guys in this video I'm continuing with the coding video series"}, {"start": 4.68, "end": 9.5, "text": " and I'm not going to cover a research paper like code in this one I'm gonna"}, {"start": 9.5, "end": 14.76, "text": " cover open source ML tools and libraries and hopefully this will help you improve"}, {"start": 14.76, "end": 18.84, "text": " your own projects by including these awesome open source libraries and tools"}, {"start": 18.84, "end": 24.88, "text": " so in this one I'm gonna cover Gini AI and as you can see here they are a cloud"}, {"start": 24.88, "end": 28.8, "text": " native neural search framework for any kind of data so this enables it to"}, {"start": 28.8, "end": 33.0, "text": " import lightweight Google into your project whereas the neural part means"}, {"start": 33.0, "end": 37.760000000000005, "text": " you'll be using deep neural networks to perform the search and the any kind of"}, {"start": 37.760000000000005, "end": 41.24, "text": " data means basically you can take whatever modality you wish so like image"}, {"start": 41.24, "end": 46.28, "text": " or text or audio or video or mesh or whatnot and you can use those to query"}, {"start": 46.28, "end": 50.760000000000005, "text": " and find like data from any other modalities you can use images to query"}, {"start": 50.760000000000005, "end": 55.28, "text": " for certain text snippets you can use text to query for images or videos or"}, {"start": 55.28, "end": 59.160000000000004, "text": " audio for mesh or whatever like whatever combination in that combinatorial space"}, {"start": 59.160000000000004, "end": 64.96000000000001, "text": " you can imagine Gini AI can kind of help you do that as you can see the readme is"}, {"start": 64.96000000000001, "end": 70.0, "text": " pretty nicely structured they have lots of resources here to tutorials they have"}, {"start": 70.0, "end": 73.8, "text": " like lots of contributors people are using this already it's an established"}, {"start": 73.8, "end": 77.84, "text": " tool as you can see here used by 130 like organizations and people already and"}, {"start": 77.84, "end": 84.16, "text": " so the way I suggest you go ahead and explore this tool is by like"}, {"start": 84.16, "end": 88.32, "text": " understanding these quick demos and in this video I'm gonna walk you through"}, {"start": 88.32, "end": 91.52, "text": " step-by-step through the fashion image search aside from that you should"}, {"start": 91.52, "end": 95.02, "text": " probably go and understand the basic concepts of Gina so that's document"}, {"start": 95.02, "end": 99.16, "text": " executor and flow and they have really nice pages where they explain how you"}, {"start": 99.16, "end": 104.92, "text": " can use the document how you can use all of these like a native concepts to Gina"}, {"start": 104.92, "end": 110.36, "text": " so document is an abstraction they use to describe whatever data type you have"}, {"start": 110.36, "end": 114.0, "text": " so basically document can be an image document can be a text document can be"}, {"start": 114.0, "end": 117.68, "text": " video or whatever data type you have don't get confused by the terminology so"}, {"start": 117.68, "end": 121.92, "text": " document if you're like me a document is gonna associate you to text data"}, {"start": 121.92, "end": 127.56, "text": " textual data only and even the emoji here is associating us to to text but"}, {"start": 127.56, "end": 131.48, "text": " it's not it's a it's an 
abstraction object which wraps up any kind of data"}, {"start": 131.48, "end": 135.72, "text": " then they have executors which process the data and flows finally organize"}, {"start": 135.72, "end": 140.66, "text": " those executors into these kinds of pipelines as you can see here on the"}, {"start": 140.66, "end": 144.56, "text": " image a short note on the installation procedure because they are cloud native"}, {"start": 144.56, "end": 148.52, "text": " neural search engine that basically means because cloud is mostly Linux that"}, {"start": 148.52, "end": 153.2, "text": " means they support Gina for Linux pretty much and because I'm on Windows this"}, {"start": 153.2, "end": 157.2, "text": " thing does not work on Windows and I had to install a window subsystem for Linux"}, {"start": 157.2, "end": 162.24, "text": " I had some problems getting into work but after you have upgraded to WSL 2 and"}, {"start": 162.24, "end": 166.2, "text": " fix some other small problems like I had problems plotting because Matplotlib"}, {"start": 166.2, "end": 171.07999999999998, "text": " would not display the actual like chart so I had to do some tweaks so if you"}, {"start": 171.07999999999998, "end": 173.88, "text": " know you want to know all of those details if you are on Windows like me"}, {"start": 173.88, "end": 178.11999999999998, "text": " and you want to try this thing ago do join the discord community because I'm"}, {"start": 178.11999999999998, "end": 183.32, "text": " gonna gonna paste like the step-by-step how I kind of solved all of the problems"}, {"start": 183.32, "end": 187.72, "text": " I encountered on Windows and how I got it to work so having said that let's"}, {"start": 187.72, "end": 191.72, "text": " start with the fashion image search so they have a nice page here explaining"}, {"start": 191.72, "end": 198.04, "text": " how you can actually get it like running and the the end like result will be this"}, {"start": 198.04, "end": 202.07999999999998, "text": " you can see here basically we have a query image on the left and we can fetch"}, {"start": 202.07999999999998, "end": 206.88, "text": " like they are fetching 50 closest images from the data set where closeness is"}, {"start": 206.88, "end": 211.72, "text": " defined as you can see here like visual similarity in the case of images so yeah"}, {"start": 211.72, "end": 215.68, "text": " let's get this project started okay so the first thing you obviously need to do"}, {"start": 215.68, "end": 220.07999999999998, "text": " here is do git clone clone the repository on your own machine install"}, {"start": 220.08, "end": 226.44000000000003, "text": " the condo environment and just do pip install you Gina and that's gonna"}, {"start": 226.44000000000003, "end": 232.12, "text": " install Gina AI package to your environment and then you can get started"}, {"start": 232.12, "end": 237.66000000000003, "text": " so as you can see here I'm Windows so I'm in this like WSL terminal I'm using"}, {"start": 237.66000000000003, "end": 241.68, "text": " Ubuntu as my distribution of choice I'm just going to start Visual Studio code"}, {"start": 241.68, "end": 245.60000000000002, "text": " you can use whatever editor you prefer just like kind of navigate to the root"}, {"start": 245.6, "end": 251.32, "text": " directory of your project and again from your environment where you're using"}, {"start": 251.32, "end": 259.0, "text": " conda or something else just start your ID and then we can get started the first"}, {"start": 259.0, "end": 263.4, 
"text": " thing we will want to do is to clone the demo example to the root directory so"}, {"start": 263.4, "end": 271.56, "text": " I'm just gonna type Gina hello fork fashion into Gina fashion demo"}, {"start": 271.56, "end": 276.68, "text": " directory so basically this means hey I want to use Gina and I want to use these"}, {"start": 276.68, "end": 281.4, "text": " quick examples I want to fork it to my local machine I want to use fashion"}, {"start": 281.4, "end": 285.88, "text": " example in particular and I'm gonna fork it into Gina fashion demo so if I run"}, {"start": 285.88, "end": 291.8, "text": " that the as you can see here I just got this new folder downloaded and I'm gonna"}, {"start": 291.8, "end": 298.08, "text": " open up the the app.py so this is the main like main file and the thing we're"}, {"start": 298.08, "end": 301.32, "text": " going to do so we basically don't need any additional arguments so we can just"}, {"start": 301.32, "end": 308.32, "text": " start this thing and that's it so I'm gonna run f5 and start this project okay"}, {"start": 308.32, "end": 311.56, "text": " there are some basic parameters we are setting up here let me quickly just walk"}, {"start": 311.56, "end": 316.0, "text": " you through them so basically you can see here what is important is this part"}, {"start": 316.0, "end": 320.0, "text": " here so what they include they have this working directory the work directory is"}, {"start": 320.0, "end": 323.88, "text": " just going to create a random directory inside of your root directory and that's"}, {"start": 323.88, "end": 328.64, "text": " pretty much the only thing we care about in this one and then what they do is they"}, {"start": 328.64, "end": 335.71999999999997, "text": " as you can see here they have the data URL the the labels like for the training"}, {"start": 335.71999999999997, "end": 338.91999999999996, "text": " set so these two are for the training set so we are downloading fashion MNIST"}, {"start": 338.91999999999996, "end": 344.32, "text": " here and then they have the query data so basically the let's call these these"}, {"start": 344.32, "end": 349.8, "text": " are test sets again fashion MNIST and we have the data and we have the the labels"}, {"start": 349.8, "end": 354.08, "text": " here so number queries is just the number of queries that we're gonna"}, {"start": 354.08, "end": 359.56, "text": " display on the on the web page we'll see soon see and finally top k means how"}, {"start": 359.56, "end": 363.8, "text": " many how many similar items are we trying to find for each of these queries"}, {"start": 363.8, "end": 368.0, "text": " so those are all of the parameters you should care about and now let's return"}, {"start": 368.0, "end": 373.03999999999996, "text": " to the app.py so let's kind of step into this function and see what's going on so"}, {"start": 373.03999999999996, "end": 378.28, "text": " first things first they are going to create this random directory so that's"}, {"start": 378.28, "end": 384.0, "text": " it and then they're gonna form this this basically dictionary where they have as"}, {"start": 384.0, "end": 388.48, "text": " you can see here so the URL for the data and the local file name name where this"}, {"start": 388.48, "end": 393.6, "text": " data is gonna be saved so we're gonna save the the the training data directly"}, {"start": 393.6, "end": 398.48, "text": " inside of this random directory and we're gonna name this index labels file"}, {"start": 398.48, "end": 403.2, "text": " so 
nothing special here we're gonna do that for both the training set and the"}, {"start": 403.2, "end": 408.44, "text": " test set and that's it okay let's continue so next step is downloading the"}, {"start": 408.44, "end": 414.4, "text": " data so let's step into this download data function so basically nothing smart"}, {"start": 414.4, "end": 420.2, "text": " here this is everything is handled by this URL lib package so what happens is"}, {"start": 420.2, "end": 424.28, "text": " we're gonna skip this because it's none so we're gonna install this opener"}, {"start": 424.28, "end": 426.92, "text": " whatever so this is just like boilerplate pretty much you don't need"}, {"start": 426.92, "end": 430.84, "text": " to understand all of these details with progress bar just a context manager to"}, {"start": 430.84, "end": 434.36, "text": " show the progress while the thing is being downloaded and now we're gonna"}, {"start": 434.36, "end": 438.15999999999997, "text": " iterate through the dictionary and download the data and the labels for the"}, {"start": 438.16, "end": 442.6, "text": " training set and for the test set that's it so if it if it doesn't exist and it"}, {"start": 442.6, "end": 445.92, "text": " doesn't exist because we just created this random folder we're gonna retrieve"}, {"start": 445.92, "end": 450.88000000000005, "text": " it here we're gonna download it so until this point we just downloaded the data"}, {"start": 450.88000000000005, "end": 454.88, "text": " into those files and now we're gonna actually load the data from the files"}, {"start": 454.88, "end": 459.92, "text": " let's see load labels let's step into that one and it's very simple what they"}, {"start": 459.92, "end": 464.08000000000004, "text": " do is just they just open up that like local file and let me show you where it"}, {"start": 464.08000000000004, "end": 466.68, "text": " is so basically this is the random directory I was mentioning so it's"}, {"start": 466.68, "end": 470.52, "text": " directly inside of Gina root you open it up and you can see index label so that's"}, {"start": 470.52, "end": 474.16, "text": " the first file so now here we're gonna open it up as you can see here and we're"}, {"start": 474.16, "end": 481.16, "text": " gonna load it from using a numpy and just reading the file here so nothing"}, {"start": 481.16, "end": 484.88, "text": " you need to care about so this is some specific knowledge because they know the"}, {"start": 484.88, "end": 488.68, "text": " structure of this file they know that they have to offset reading eight bytes"}, {"start": 488.68, "end": 492.72, "text": " from the very start and that's it then they do the reshape part and this is"}, {"start": 492.72, "end": 495.96000000000004, "text": " always useful to have shapes in your mind constantly when you're debugging"}, {"start": 495.96, "end": 501.59999999999997, "text": " machine learning code so this should be a 60 K comma one should be the shape of"}, {"start": 501.59999999999997, "end": 507.03999999999996, "text": " this of this object because those are labels and we have 60 K examples inside"}, {"start": 507.03999999999996, "end": 511.02, "text": " of the index data set so I'm gonna step over here and we can check the data to"}, {"start": 511.02, "end": 515.68, "text": " see whether we are correct here so V I'm gonna find the data array and you can"}, {"start": 515.68, "end": 523.4, "text": " see here shape is 60 K comma one so that's it now we're gonna do the same"}, {"start": 523.4, "end": 529.04, "text": " 
thing for for the next label so this is gonna download the labels for the query"}, {"start": 529.04, "end": 533.4399999999999, "text": " data set same procedure so I won't be walking you through the same function"}, {"start": 533.4399999999999, "end": 537.92, "text": " again and finally we have we're downloading the actual images through"}, {"start": 537.92, "end": 541.6, "text": " that's done you can see in the directory we have this index originals that means"}, {"start": 541.6, "end": 546.88, "text": " these are actually the images okay and now we're gonna step into this load"}, {"start": 546.88, "end": 551.0799999999999, "text": " MNIST I'm gonna step into it and you can see again we just open up the file we"}, {"start": 551.08, "end": 557.24, "text": " read it we like have this non pipe from buffer and we reshape it into 60 K 28 by"}, {"start": 557.24, "end": 561.96, "text": " 28 because we have MNIST images here so that's it it's fairly simple I'm gonna"}, {"start": 561.96, "end": 570.44, "text": " just step over this and we're gonna see that after I do this so we data and we"}, {"start": 570.44, "end": 575.8000000000001, "text": " have 60 K 28 28 those are obviously MNIST images and finally last step we're"}, {"start": 575.8000000000001, "end": 578.8000000000001, "text": " gonna do the same thing so we're doing the same thing just for the query data"}, {"start": 578.8, "end": 583.1999999999999, "text": " set this time and I'm gonna step over this time so we're gonna load the data"}, {"start": 583.1999999999999, "end": 588.68, "text": " and that's it we downloaded the data we loaded the labels we loaded the like"}, {"start": 588.68, "end": 592.4399999999999, "text": " images and that's it so that data is currently stored into this dictionary so"}, {"start": 592.4399999999999, "end": 596.52, "text": " we have targets you can pick whatever type of data you want so for example we"}, {"start": 596.52, "end": 600.4799999999999, "text": " can we can pick index labels so that means the index data set the labels and"}, {"start": 600.4799999999999, "end": 605.88, "text": " you'll find the data array here so it's 60 K 1 you can do the same thing for"}, {"start": 605.88, "end": 611.68, "text": " example index which is images and you can see the data 60 K 28 28 so"}, {"start": 611.68, "end": 614.96, "text": " everything is stored into this neatly stored into this dictionary object and"}, {"start": 614.96, "end": 619.32, "text": " that's it now this is some nothing that important just some optimization stuff"}, {"start": 619.32, "end": 623.0, "text": " and finally this is the flow object I mentioned in the initial part of the"}, {"start": 623.0, "end": 628.64, "text": " video which is one of the like building blocks of these GNA like programs and"}, {"start": 628.64, "end": 634.16, "text": " what flow does is just organizes these executors into this nice pipeline which"}, {"start": 634.16, "end": 638.24, "text": " you can then run and execute a process the data in whichever whatever manner"}, {"start": 638.24, "end": 644.92, "text": " you wish so as you can see here we're not form flow and we're gonna add these"}, {"start": 644.92, "end": 649.0, "text": " so these are called executors so we have executor that's called my encoder we're"}, {"start": 649.0, "end": 653.0799999999999, "text": " gonna see how that thing what the thing does in a couple of minutes we have we'll"}, {"start": 653.0799999999999, "end": 658.36, "text": " have two of those in parallel then we'll have the indexer and the 
indexer is"}, {"start": 658.36, "end": 662.8399999999999, "text": " gonna index this thing into this rug directory which is the random UUID"}, {"start": 662.84, "end": 666.0400000000001, "text": " directory as you can see here the string on the screen and finally we have this"}, {"start": 666.0400000000001, "end": 672.0400000000001, "text": " my evaluator executor which is going to do some the actual searching and matching"}, {"start": 672.0400000000001, "end": 677.72, "text": " of the image embedding vector of the query embedding vector to the all of"}, {"start": 677.72, "end": 682.24, "text": " the other index embedding vectors so that's that's it and now because this is"}, {"start": 682.24, "end": 686.96, "text": " lazy execution this object still the constructors of these executors are"}, {"start": 686.96, "end": 689.9200000000001, "text": " still not called that's gonna happen after we open it up with a context"}, {"start": 689.92, "end": 694.76, "text": " manager here so only here will they actually execute and form those those"}, {"start": 694.76, "end": 700.0, "text": " like executors so that's the lazy execution design pattern so the thing"}, {"start": 700.0, "end": 704.24, "text": " I'm gonna do now is actually visualize this flow so I'm gonna stop this program"}, {"start": 704.24, "end": 708.3199999999999, "text": " I'm gonna add the plotting function here so we can plot you can actually plot"}, {"start": 708.3199999999999, "end": 712.7199999999999, "text": " these like flows and we're gonna just just save it into like I don't know like"}, {"start": 712.7199999999999, "end": 717.8, "text": " time flow SVG and that's it I'm gonna save this I'm gonna rerun it and I'm"}, {"start": 717.8, "end": 722.04, "text": " gonna show you what happens so we are here if I do a step over let's just step"}, {"start": 722.04, "end": 726.04, "text": " over it's gonna form this SVG and I'm gonna open it up and show you how this"}, {"start": 726.04, "end": 730.92, "text": " thing looks like here it is it's you can see the temporary flow SVG file in the"}, {"start": 730.92, "end": 734.52, "text": " root directory and for some reason it's gonna open it up in Microsoft Edge even"}, {"start": 734.52, "end": 737.88, "text": " though my default browser is Google Chrome so let's open it up here and you"}, {"start": 737.88, "end": 745.2199999999999, "text": " can see the actual like diagram so we have the input here we split the input"}, {"start": 745.22, "end": 749.4, "text": " into two my encoder executors then we have the my indexer and then we have the"}, {"start": 749.4, "end": 754.88, "text": " my evaluator if we were to put some names as the arguments to those here"}, {"start": 754.88, "end": 760.08, "text": " basically name they have a name parameter blah blah blah we have nicer"}, {"start": 760.08, "end": 763.44, "text": " diagrams but you get the point okay having done that let me now show you"}, {"start": 763.44, "end": 766.88, "text": " what I meant by delays execution so we're here now I still haven't executed"}, {"start": 766.88, "end": 772.36, "text": " this line so if I go to let me find the files let me find the the actual my"}, {"start": 772.36, "end": 776.08, "text": " executors folder so I'm gonna put some breakpoint inside of the constructors"}, {"start": 776.08, "end": 780.96, "text": " here so we need functions my indexer I'm gonna put a breaking point here my"}, {"start": 780.96, "end": 785.76, "text": " encoder and finally I'm gonna put a breaking point into my evaluator and so"}, 
{"start": 785.76, "end": 789.76, "text": " now I'm gonna put a breakpoint here and I'm gonna step over this line here and"}, {"start": 789.76, "end": 793.72, "text": " we're gonna see it's gonna hit the constructors of those executors so f10"}, {"start": 793.72, "end": 801.3000000000001, "text": " and you can see here my encoder is hit here so I'm gonna do f5 again we we had"}, {"start": 801.3, "end": 805.68, "text": " encoder again because we have is remember two encoder executors in"}, {"start": 805.68, "end": 812.52, "text": " parallel so I'm gonna step over f10 so f5 indexer and f5 evaluator and that's"}, {"start": 812.52, "end": 817.68, "text": " it f5 will bring us to the next line f index okay so you saw that by doing this"}, {"start": 817.68, "end": 822.68, "text": " we only then executed the constructors of all of the executors you've inputted"}, {"start": 822.68, "end": 828.0799999999999, "text": " here and now now we're gonna hit the index function so first let me break"}, {"start": 828.08, "end": 831.44, "text": " this down on a high level for you and then we're gonna step over the code so"}, {"start": 831.44, "end": 835.1600000000001, "text": " we have as you can see here the main function there is not much lines of"}, {"start": 835.1600000000001, "end": 839.2, "text": " code here left it's fairly simple we have we're calling flow index which is"}, {"start": 839.2, "end": 844.36, "text": " the same thing as if if we were to call this post function and instead of using"}, {"start": 844.36, "end": 849.1600000000001, "text": " slash eval here we'd be using slash index so these are the same things so"}, {"start": 849.1600000000001, "end": 854.76, "text": " the indexing function is gonna as you can imagine kind of index the data and"}, {"start": 854.76, "end": 858.76, "text": " gonna create the embedding vectors and then the eval function is gonna do the"}, {"start": 858.76, "end": 862.3199999999999, "text": " actual matching so once we call the eval function we're gonna actually find the"}, {"start": 862.3199999999999, "end": 867.4399999999999, "text": " 50 closest matches to that query image okay so that's like the rough pipeline"}, {"start": 867.4399999999999, "end": 871.48, "text": " and then finally after we have the data we're gonna write it into HTML and we're"}, {"start": 871.48, "end": 876.0, "text": " gonna be able to open up this HTML page in the browser and see the final results"}, {"start": 876.0, "end": 880.6, "text": " let's go a bit deeper and then I'm gonna step into these this code so next thing"}, {"start": 880.6, "end": 883.84, "text": " you see we have these generators we have index generator here we have query"}, {"start": 883.84, "end": 887.44, "text": " generator here and we have some additional parameters not that important"}, {"start": 887.44, "end": 891.5600000000001, "text": " show progress is just gonna show us in the bar as the testing is being indexed"}, {"start": 891.5600000000001, "end": 896.0, "text": " and evaluated we're gonna see the progress on the in the console print"}, {"start": 896.0, "end": 900.0, "text": " results so once this thing is done the print result is a callback function and"}, {"start": 900.0, "end": 904.44, "text": " it's just going to dump all of the results into some data structure which"}, {"start": 904.44, "end": 908.36, "text": " is then going to be used by this write HTML function so that's the whole point"}, {"start": 908.36, "end": 912.64, "text": " there top key is just the argument so it's currently 50 which means 
we're"}, {"start": 912.64, "end": 917.4399999999999, "text": " gonna search for 50 closest matches inside of the index data set using the"}, {"start": 917.4399999999999, "end": 922.1999999999999, "text": " query vector and that's pretty much it on the high level before I show you how"}, {"start": 922.1999999999999, "end": 927.08, "text": " these post functions work let's step into the generator itself and see what's"}, {"start": 927.08, "end": 932.1999999999999, "text": " going on there so the generator is gonna iterate over all of the 60k docs as"}, {"start": 932.1999999999999, "end": 936.1999999999999, "text": " you can see here and it's gonna basically a load you can see here the"}, {"start": 936.1999999999999, "end": 940.68, "text": " structure of the dictionary we fetch the index data and then we just fetch the"}, {"start": 940.68, "end": 945.3599999999999, "text": " first image and for some reason they are inverting the image with 255 here I'm"}, {"start": 945.3599999999999, "end": 949.12, "text": " not sure why this is if somebody knows feel free to post to comment down below"}, {"start": 949.12, "end": 953.68, "text": " but they yeah for some reason they are they're inverting it and now what the"}, {"start": 953.68, "end": 956.76, "text": " next thing they're gonna do is because this is just a grayscale image they're"}, {"start": 956.76, "end": 961.2399999999999, "text": " gonna do stack times three here which is gonna form RGB image so they're"}, {"start": 961.2399999999999, "end": 967.4399999999999, "text": " basically gonna copy that channel three times and get an RGB image so now this"}, {"start": 967.44, "end": 972.6, "text": " is the document so this is the third core component of genome framework so I"}, {"start": 972.6, "end": 977.96, "text": " mentioned flow I mentioned executors so document is just like a wrapper object"}, {"start": 977.96, "end": 983.2800000000001, "text": " around all of different data types so here we're gonna form an image and so"}, {"start": 983.2800000000001, "end": 988.5600000000001, "text": " I'm gonna step over here and we're gonna set a tag ID to this this is gonna be"}, {"start": 988.5600000000001, "end": 992.32, "text": " zero because we are iterating remember over all of the 60k docs so that's it"}, {"start": 992.32, "end": 996.36, "text": " now we're gonna yield D which basically this is the generator syntax of Python"}, {"start": 996.36, "end": 1000.64, "text": " I'm gonna not I'm not gonna dig into that a lot deeper but yeah so if I step"}, {"start": 1000.64, "end": 1004.78, "text": " over again it's gonna hit the same line again and you may wonder why is that so"}, {"start": 1004.78, "end": 1007.84, "text": " the thing is the way this is set up in the background is this generator is"}, {"start": 1007.84, "end": 1013.22, "text": " gonna yield hundred images and only then will the post function we just saw so"}, {"start": 1013.22, "end": 1017.0, "text": " the from the flow object is gonna execute well this thing is generating"}, {"start": 1017.0, "end": 1021.32, "text": " images let's return back to app I so it's gonna generate hundred images and"}, {"start": 1021.32, "end": 1025.08, "text": " the reason is as I already explained there is a hidden default argument"}, {"start": 1025.08, "end": 1029.0, "text": " that's hundred and after it's generated a hundred images is gonna call the index"}, {"start": 1029.0, "end": 1033.1599999999999, "text": " is gonna trigger and what happens once index is called is all of the executors"}, {"start": 
1033.1599999999999, "end": 1036.9199999999998, "text": " which have these request methods defined all of these functions which are"}, {"start": 1036.9199999999998, "end": 1042.8999999999999, "text": " decorated with this at requests on index are gonna get cold and not only those"}, {"start": 1042.8999999999999, "end": 1046.8, "text": " functions but also the functions that don't have anything any on parameter"}, {"start": 1046.8, "end": 1049.6399999999999, "text": " here so we're gonna see what I mean by that so requests so I'm gonna put a"}, {"start": 1049.6399999999999, "end": 1053.8799999999999, "text": " breakpoint here requests we have this one is triggered on search and eval so"}, {"start": 1053.88, "end": 1057.88, "text": " we don't have to worry about it yet so this one is as you can see that doesn't"}, {"start": 1057.88, "end": 1062.3600000000001, "text": " have the on part so that means every single flow post function is gonna"}, {"start": 1062.3600000000001, "end": 1066.1200000000001, "text": " trigger this one so I have a breakpoint here and let's go further on this"}, {"start": 1066.1200000000001, "end": 1070.24, "text": " converter function is not cold so I'm gonna skip it and this last one is eval"}, {"start": 1070.24, "end": 1074.24, "text": " so it's not gonna be cold as well so the functions that they're going to get"}, {"start": 1074.24, "end": 1080.7600000000002, "text": " cold right now are the basically encode function and the index function so let's"}, {"start": 1080.76, "end": 1084.96, "text": " let's kind of step over here so I'm gonna hit that five that's gonna"}, {"start": 1084.96, "end": 1090.12, "text": " generate hundred images and the index function will be cold so f5 and as you"}, {"start": 1090.12, "end": 1093.92, "text": " can see here the encode function is the first one to be cold let's understand"}, {"start": 1093.92, "end": 1098.56, "text": " what's going on here so Doc's is just a array of hundred images remember that"}, {"start": 1098.56, "end": 1102.4, "text": " that's the images generated by the index generator so we're gonna fetch the"}, {"start": 1102.4, "end": 1106.48, "text": " content which is just is going to fetch all of the images we're gonna stack them"}, {"start": 1106.48, "end": 1111.2, "text": " and we're going to end up with hundred twenty four twenty four three the shape"}, {"start": 1111.2, "end": 1115.76, "text": " will be something like that I suspect so let's see the content part so the shape"}, {"start": 1115.76, "end": 1119.1200000000001, "text": " is hundred twenty eight twenty three because remember we have RGB amnest"}, {"start": 1119.1200000000001, "end": 1122.92, "text": " images we have hundreds of them because of the generator and that's it now we're"}, {"start": 1122.92, "end": 1126.24, "text": " gonna reshape them so gonna they're gonna extract only single channels so I"}, {"start": 1126.24, "end": 1131.56, "text": " don't know I'm not sure why we had to form RGB images when we were extracting"}, {"start": 1131.56, "end": 1136.04, "text": " single channel here but yeah whatever so basically we're reshaping these seven"}, {"start": 1136.04, "end": 1140.1599999999999, "text": " eight four because 24 squared equals that number so we're gonna just flatten"}, {"start": 1140.1599999999999, "end": 1144.1599999999999, "text": " it out and you can see here hundred seven hundred eighty four and now this"}, {"start": 1144.1599999999999, "end": 1148.68, "text": " is a funny part so the first thing they're gonna do is because these 
amnest"}, {"start": 1148.68, "end": 1154.48, "text": " images are in the 0 to 55 range so they're gonna normalize them to 0 1 range"}, {"start": 1154.48, "end": 1158.32, "text": " and then they're gonna do this multiplication with this other matrix"}, {"start": 1158.32, "end": 1161.96, "text": " and that's going to form the embedding so let me kind of explain what this"}, {"start": 1161.96, "end": 1165.36, "text": " actually does so first things first let's see the shape the shape will be of"}, {"start": 1165.36, "end": 1169.8799999999999, "text": " the so we have self other matrix let me find that one so it's as you can see"}, {"start": 1169.8799999999999, "end": 1174.12, "text": " here seven hundred eighty four sixty four so once we multiply once we"}, {"start": 1174.12, "end": 1178.28, "text": " multiply so we have remember we have hundred we have seven eighty four here"}, {"start": 1178.28, "end": 1182.24, "text": " so that's a that's this matrix and then we multiply that thing so let me put"}, {"start": 1182.24, "end": 1188.32, "text": " brackets here so we multiply it with seven eight four sixty four that means"}, {"start": 1188.32, "end": 1192.8799999999999, "text": " we're gonna convert this shape into hundred sixty four so we basically"}, {"start": 1192.88, "end": 1199.16, "text": " embedded all of those images into this 64 dimensional embedding vector so the"}, {"start": 1199.16, "end": 1202.88, "text": " logic here is not actually a neural network the thing they do is in the"}, {"start": 1202.88, "end": 1208.0, "text": " like constructor function they just form this random matrix that has exactly"}, {"start": 1208.0, "end": 1213.0800000000002, "text": " this shape then they apply the singular value decomposition I'm not gonna go to"}, {"start": 1213.0800000000002, "end": 1216.24, "text": " details here it's an animal technique and the thing you should probably know"}, {"start": 1216.24, "end": 1220.3600000000001, "text": " is that any matrix no matter the shape so it doesn't have to be a square matrix"}, {"start": 1220.36, "end": 1225.3999999999999, "text": " can be decomposition using SVD so once we have SVD that the nice property of"}, {"start": 1225.3999999999999, "end": 1229.6799999999998, "text": " these two matrices is that they are orthogonal that means once you multiply"}, {"start": 1229.6799999999998, "end": 1234.28, "text": " a vector by this matrix the vector is just gonna change the rotation but the"}, {"start": 1234.28, "end": 1237.52, "text": " length will stay the same so it's preserving the length of the vector so"}, {"start": 1237.52, "end": 1242.1999999999998, "text": " just rotation matrix and what I do is they multiply these two matrices to form"}, {"start": 1242.1999999999998, "end": 1246.4799999999998, "text": " this this other matrix so just a fancy way of creating an orthogonal matrix I"}, {"start": 1246.48, "end": 1251.6, "text": " the matrix that does rotations of this particular shape and that's gonna embed"}, {"start": 1251.6, "end": 1255.8, "text": " our vectors into this 64 dimensional vector that's it okay let's continue"}, {"start": 1255.8, "end": 1261.48, "text": " here so let's press F10 here so now the next thing that's gonna happen is we're"}, {"start": 1261.48, "end": 1265.24, "text": " gonna iterate we're gonna fill in these documents with embeddings and see here"}, {"start": 1265.24, "end": 1269.06, "text": " embeddings and the content content is just a flattened out amnesty image this"}, {"start": 1269.06, "end": 1274.56, "text": " 
function is gonna convert the actual blob the image into URIs which is unique"}, {"start": 1274.56, "end": 1279.28, "text": " resource identifier which is basically like a long string that's gonna be an"}, {"start": 1279.28, "end": 1283.8, "text": " equivalent representation of this image and then they're gonna pop the blob"}, {"start": 1283.8, "end": 1287.72, "text": " that's it we're gonna repeat this for all of the images and that's the"}, {"start": 1287.72, "end": 1293.04, "text": " encoding part of the index of this of this index function so remember we have"}, {"start": 1293.04, "end": 1298.32, "text": " hundred images here now we just what it we just did is we embedded all of these"}, {"start": 1298.32, "end": 1303.1599999999999, "text": " images and the next thing we're gonna do so I'm gonna just press F5 here we"}, {"start": 1303.16, "end": 1309.28, "text": " should hit one of the other functions so the the indexing function I guess so F5"}, {"start": 1309.28, "end": 1313.1200000000001, "text": " and as you can see here the indexing function the only thing it does is it"}, {"start": 1313.1200000000001, "end": 1318.92, "text": " has this docs like private variable which is a document array mem a mem map"}, {"start": 1318.92, "end": 1323.92, "text": " and just another object from Gina which helps you kind of optimize memory wise"}, {"start": 1323.92, "end": 1327.0400000000002, "text": " and time wise the performance so I won't get into the details because it's just"}, {"start": 1327.0400000000002, "end": 1331.68, "text": " an optimization trick okay so we're gonna just dump all of those docs which"}, {"start": 1331.68, "end": 1336.1200000000001, "text": " now remember contain the embeddings as well as the original images convert it"}, {"start": 1336.1200000000001, "end": 1339.5600000000002, "text": " into your eyes I went ahead and executed the whole index function because it"}, {"start": 1339.5600000000002, "end": 1344.04, "text": " takes roughly 58 seconds on my machine so yeah so what happened is again we"}, {"start": 1344.04, "end": 1350.0600000000002, "text": " chunked the data the 60k images into hundred images per batch and we did the"}, {"start": 1350.0600000000002, "end": 1355.0800000000002, "text": " embeddings we did the extension of that specialized memory object so we just"}, {"start": 1355.0800000000002, "end": 1358.3200000000002, "text": " kind of added all of those documents with the embedding vectors to that"}, {"start": 1358.32, "end": 1362.4399999999998, "text": " specialized memory object the second thing that's gonna happen is we're gonna"}, {"start": 1362.4399999999998, "end": 1367.28, "text": " hold the post eval function and this again has this query generator so I'm"}, {"start": 1367.28, "end": 1372.2, "text": " gonna step into it and put a breakpoint here so put a breakpoint there return"}, {"start": 1372.2, "end": 1375.12, "text": " here so we're gonna step over and see what the query generator is doing and"}, {"start": 1375.12, "end": 1379.32, "text": " then we're gonna hit set the breakpoints to all of the executors which are kind"}, {"start": 1379.32, "end": 1384.28, "text": " of sign up to listen to this eval call and I've said the breakpoints to all of"}, {"start": 1384.28, "end": 1388.12, "text": " those eval functions already so let's just step over here so query generator"}, {"start": 1388.12, "end": 1392.12, "text": " you can see the number of queries 128 we have the target so that's a dictionary"}, {"start": 1392.12, "end": 1396.1999999999998, 
"text": " that contains all the data and with ground truth equals to true let's see"}, {"start": 1396.1999999999998, "end": 1400.9199999999998, "text": " what it means so query generator okay what this line will do is the following"}, {"start": 1400.9199999999998, "end": 1405.28, "text": " you will find because we have remember we have MNIST data set and we have like"}, {"start": 1405.28, "end": 1409.3999999999999, "text": " I think there are 10 classes inside of it so what is going to do is return this"}, {"start": 1409.3999999999999, "end": 1412.76, "text": " object which will contain the following thing a dictionary that will have a"}, {"start": 1412.76, "end": 1417.56, "text": " label and then all of the images from the index data set that have so all of"}, {"start": 1417.56, "end": 1423.36, "text": " the IDs that have label 0 and then the same thing for label 1 so IDs here etc"}, {"start": 1423.36, "end": 1428.9199999999998, "text": " so let me step over this code so f10 and see what GTS looks like so again and if"}, {"start": 1428.9199999999998, "end": 1432.9199999999998, "text": " I expand 0 somewhere in there you can find the information of the IDs of"}, {"start": 1432.9199999999998, "end": 1436.6399999999999, "text": " images or the document IDs so now a small drawback with Gina in my opinion"}, {"start": 1436.6399999999999, "end": 1440.48, "text": " is that it's kind of hard to find in these as you can see in variables while"}, {"start": 1440.48, "end": 1443.6, "text": " you're debugging the code to find the actual information you need because"}, {"start": 1443.6, "end": 1447.32, "text": " there is a lot of metadata and things you don't care about because of all of"}, {"start": 1447.32, "end": 1451.1599999999999, "text": " these complex document objects but it's also a good thing I guess at the end"}, {"start": 1451.1599999999999, "end": 1455.3999999999999, "text": " because it's wrapping up a lot of functionality for you so now we're gonna"}, {"start": 1455.3999999999999, "end": 1462.56, "text": " iterate through the like all of the docs we have 128 query vectors so we're gonna"}, {"start": 1462.56, "end": 1467.04, "text": " do so 128 times we're gonna form a random query vector so random ID from"}, {"start": 1467.04, "end": 1471.6399999999999, "text": " this because we have 10k query images remember we're gonna form a random ID"}, {"start": 1471.6399999999999, "end": 1476.6, "text": " here and we're gonna just fetch that like image that's again black and white"}, {"start": 1476.6, "end": 1479.9599999999998, "text": " first and then they form RGB image and then they just kind of wrap it up into"}, {"start": 1479.9599999999998, "end": 1483.8799999999999, "text": " the document and because the ground truth is set to true we're gonna return"}, {"start": 1483.8799999999999, "end": 1489.7199999999998, "text": " not only the query image but also the actual ground truth label this is again"}, {"start": 1489.7199999999998, "end": 1493.6399999999999, "text": " going to be called hundred times and only then will we have the post eval"}, {"start": 1493.6399999999999, "end": 1496.6399999999999, "text": " function being called as soon as gonna step over until we hit the first"}, {"start": 1496.6399999999999, "end": 1500.32, "text": " breakpoint and here is the funny part so basically what happens here is we the"}, {"start": 1500.32, "end": 1504.8799999999999, "text": " docs contains hundred query images and it's gonna do match function with this"}, {"start": 1504.88, "end": 
1510.3600000000001, "text": " self dot underscore docs which if you remember contains the index images with"}, {"start": 1510.3600000000001, "end": 1514.92, "text": " all of the embedding vectors so what we're going to do here is for every"}, {"start": 1514.92, "end": 1520.2, "text": " single query image here we're gonna find the closest image in the index images"}, {"start": 1520.2, "end": 1525.3200000000002, "text": " using cosine metric just doing some normalization to zero from zero to one"}, {"start": 1525.3200000000002, "end": 1528.96, "text": " and finally we're gonna we're gonna fetch top K which means 50 closest"}, {"start": 1528.96, "end": 1534.0800000000002, "text": " images using the cosine like metric so in the background we're gonna have like"}, {"start": 1534.08, "end": 1538.84, "text": " a for loop simple like comparisons and at the end this talks container is gonna"}, {"start": 1538.84, "end": 1543.08, "text": " contain all of the matches inside of it gonna again step over and get to the"}, {"start": 1543.08, "end": 1547.9199999999998, "text": " next breakpoint finally we are here in the last executor so this here we're"}, {"start": 1547.9199999999998, "end": 1551.0, "text": " gonna calculate the precision and recall metrics so we've done the matching we"}, {"start": 1551.0, "end": 1556.84, "text": " now have for each very vector we have associated 50 like images from the index"}, {"start": 1556.84, "end": 1560.56, "text": " data set or from the training set and let's see some some information here so"}, {"start": 1560.56, "end": 1564.48, "text": " I can use VS codes nice functionality here and plot the number of documents we"}, {"start": 1564.48, "end": 1569.04, "text": " have so that's 28 and it's 28 because we have 128 remember and we're chunking"}, {"start": 1569.04, "end": 1574.96, "text": " into pieces of hundred and so 128 module 100 is gonna give us 28 query vectors in"}, {"start": 1574.96, "end": 1581.08, "text": " this iteration so ground shoots again contains the like basically the IDs of"}, {"start": 1581.08, "end": 1584.96, "text": " those images in the index data set that have the same label as a particular"}, {"start": 1584.96, "end": 1589.8799999999999, "text": " query vector so let me kind of break that down so let me step over here so"}, {"start": 1589.88, "end": 1593.5600000000002, "text": " first things first so this is a query vector and these are the matches from"}, {"start": 1593.5600000000002, "end": 1597.64, "text": " the like less function so basically these are the 50 images and we're gonna"}, {"start": 1597.64, "end": 1602.0, "text": " grab their IDs so we're gonna grab the IDs of those images from the index data"}, {"start": 1602.0, "end": 1609.3600000000001, "text": " set so let's see that so if I print actual here actual whoops actual okay so"}, {"start": 1609.3600000000001, "end": 1613.3600000000001, "text": " we can he see some numbers so those are the IDs of the images from the index"}, {"start": 1613.3600000000001, "end": 1617.0400000000002, "text": " data set now these are floats obviously there are some small inefficiencies and"}, {"start": 1617.04, "end": 1620.6, "text": " problems with this like example but it's working functionally it's working"}, {"start": 1620.6, "end": 1626.6399999999999, "text": " correctly so and finally desired contains the basically they said all of"}, {"start": 1626.6399999999999, "end": 1631.36, "text": " the IDs like that have the same label as this query vector so that means because"}, {"start": 1631.36, 
"end": 1638.04, "text": " we have 60k images and we have 10 labels I can expect to have 6,000 like IDs here"}, {"start": 1638.04, "end": 1644.68, "text": " so if I like kind of print length of desired like this whoops oh yeah I have"}, {"start": 1644.68, "end": 1651.8, "text": " to step over it so if I now try and print this it's 6,000 so yeah and now we"}, {"start": 1651.8, "end": 1656.4, "text": " can kind of do the intersection between the actual IDs we got and the desired so"}, {"start": 1656.4, "end": 1660.1200000000001, "text": " the ones that have the same label as the query vector and we can find the"}, {"start": 1660.1200000000001, "end": 1664.2, "text": " precision and recall metrics I'm gonna skip over that it's a simple simple"}, {"start": 1664.2, "end": 1667.72, "text": " logic and that's it now I'm going to basically remove this break a break point"}, {"start": 1667.72, "end": 1673.4, "text": " and jump to the final step of this whole program and that's writing all of this"}, {"start": 1673.4, "end": 1678.16, "text": " data into an HTML document and kind of rendering it inside of a browser okay"}, {"start": 1678.16, "end": 1684.5600000000002, "text": " here we are as I said we had this post eval a function being called and at the"}, {"start": 1684.5600000000002, "end": 1687.68, "text": " end we have had this callback function so let me kind of quickly walk you"}, {"start": 1687.68, "end": 1692.48, "text": " through what happened there so after we finished the calculation of like"}, {"start": 1692.48, "end": 1696.0400000000002, "text": " precision and recall metrics for every single query vector now this this"}, {"start": 1696.0400000000002, "end": 1700.64, "text": " function kind of populates this simple this is a simple like Python list it"}, {"start": 1700.64, "end": 1705.8000000000002, "text": " populates it with HTML syntax so we have as you see here here a table row table"}, {"start": 1705.8000000000002, "end": 1710.44, "text": " cell and we paste the like original query image and then we have we trade"}, {"start": 1710.44, "end": 1713.88, "text": " over all of the matches over the 50 matches we had all of those matches in"}, {"start": 1713.88, "end": 1718.0400000000002, "text": " the same row and then we close the cell we close the row and we repeat that for"}, {"start": 1718.0400000000002, "end": 1721.8400000000001, "text": " every single query vector that's 128 query vectors and that's how this thing"}, {"start": 1721.8400000000001, "end": 1727.2, "text": " is formed and after that let me kind of like jump into write HTML function I'm"}, {"start": 1727.2, "end": 1732.52, "text": " gonna put a breakpoint here and gonna press f5 so here we are it's gonna open"}, {"start": 1732.52, "end": 1737.48, "text": " up this demo HTML which is a template like HTML file and it's gonna replace"}, {"start": 1737.48, "end": 1742.52, "text": " some of the placeholder variables like as you can see here result we're gonna"}, {"start": 1742.52, "end": 1747.4, "text": " put this result HTML so that's the same just as Python string we just created"}, {"start": 1747.4, "end": 1751.32, "text": " that contains the content of the HTML file so yeah we're gonna replace all of"}, {"start": 1751.32, "end": 1757.04, "text": " these templates placeholders and finally we're gonna get let me jump to this web"}, {"start": 1757.04, "end": 1760.6, "text": " browser open it's gonna fail because I'm in WSL but that's just on my system it's"}, {"start": 1760.6, "end": 1764.8, "text": " gonna work for you 
much better and I'll just manually open it up so that's it"}, {"start": 1764.8, "end": 1769.24, "text": " when I press f5 this is the end of the program and I'm gonna open up the HTML"}, {"start": 1769.24, "end": 1773.36, "text": " okay here is the HTML let me open it up and here are the results and as you can"}, {"start": 1773.36, "end": 1777.44, "text": " see on the left hand side we've got query vectors on the right hand side we"}, {"start": 1777.44, "end": 1782.0, "text": " have the matches so the 50 matches for every single very vector and there is"}, {"start": 1782.0, "end": 1787.24, "text": " 128 of them and as you can see our results look pretty decent like if we"}, {"start": 1787.24, "end": 1791.56, "text": " focus on one particular image like I don't know like this shoe here we can"}, {"start": 1791.56, "end": 1794.8, "text": " see that most of these results are shoes they're not even though there are many"}, {"start": 1794.8, "end": 1798.88, "text": " other images in this data set like like bags or or whatever trousers I don't"}, {"start": 1798.88, "end": 1804.16, "text": " know there must be some like failure cases but I didn't manage to see anyone"}, {"start": 1804.16, "end": 1808.68, "text": " I don't see any failure case here in particular but yeah hopefully got some"}, {"start": 1808.68, "end": 1812.52, "text": " idea what Gina can do and let me just kind of mention that this second example"}, {"start": 1812.52, "end": 1817.64, "text": " with the QA the question-answer like chatbot is maybe a bit better because"}, {"start": 1817.64, "end": 1823.16, "text": " remember that Gina is a cloud native neural search so cloud and this example"}, {"start": 1823.16, "end": 1828.48, "text": " here what it does is it's very similar but like it opens up a chatbot so and"}, {"start": 1828.48, "end": 1831.96, "text": " it creates a Python server and so what happens is there is a communication"}, {"start": 1831.96, "end": 1835.68, "text": " happening between a Gina Python server and this JavaScript code in the browser"}, {"start": 1835.68, "end": 1840.0, "text": " and so you basically input like a query like a text query inside of the browser"}, {"start": 1840.0, "end": 1845.0, "text": " it gets embedded the Gina does its magic of finding the similar the most similar"}, {"start": 1845.0, "end": 1850.24, "text": " match so the most similar response the textual response and it communicates it"}, {"start": 1850.24, "end": 1856.3200000000002, "text": " back via the like this HTTPS I guess like a link to the JavaScript code and it"}, {"start": 1856.3200000000002, "end": 1860.44, "text": " just renders the the response it's a very similar example but it basically"}, {"start": 1860.44, "end": 1864.48, "text": " paints a clear picture the Gina is used for these cloud types of applications"}, {"start": 1864.48, "end": 1869.1200000000001, "text": " and because here we have explicitly like a Python server so hopefully found this"}, {"start": 1869.1200000000001, "end": 1872.48, "text": " video useful if you did share it out with a friend subscribe to the channel"}, {"start": 1872.48, "end": 1876.8, "text": " hit that bell icon also if you have any feedback on this video or what you would"}, {"start": 1876.8, "end": 1880.3600000000001, "text": " like me which ML tool you'd like me to cover next let me know down in the"}, {"start": 1880.36, "end": 1896.36, "text": " comments and also join the discord community until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=gKT-MVoPjzM
ALiBi | Train Short, Test Long: Attention With Linear Biases Enables Input Length Extrapolation
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover ALiBi model from the "Train Short, Test Long: Attention With Linear Biases Enables Input Length Extrapolation" paper. Instead of using positional embeddings (like e.g. the original transformer) they added non-learnable biases directly into the query-key matrix and achieved extraordinary extrapolation results (i.e. great perf on longer sequences). ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://ofir.io/train_short_test_long.pdf ✅ Code: https://github.com/ofirpress/attention_with_linear_biases ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:00 Main results 03:30 Time and memory tradeoffs 05:00 ALiBi method explained 10:00 Results, low data regime 12:35 Generalization to different datasets 13:15 Results, big data regime 16:00 Why does it work and future work 21:10 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #alibi #transformers #extrapolation
What's up guys, in this video I'm covering yet another transformer paper, Train Short, Test Long: Attention With Linear Biases Enables Input Length Extrapolation, by Ofir Press, Noah Smith and Mike Lewis from the Paul Allen School of Computer Science, FAIR and the Allen Institute for AI. So I mentioned yet another transformer paper, but to be honest, this paper probably has the biggest ratio of performance boost they got over the lines of PyTorch code, or whatever your favorite framework is. So basically they only need a couple of lines of code to get the results I'm about to show you, and they get pretty decent results as we'll soon see. So the question they open up this paper with is how to achieve extrapolation at inference time to longer sequences than seen during training. So basically we obviously want to train transformers on way shorter sequences because that way we save up memory, we save up time, computational budget, money, whatnot, and it'd be nice if we could deploy those on unseen, longer input sequences. So that's the holy grail I guess. And I think this is a nice step in that direction. So they say that they introduce a simple and efficient method, attention with linear biases, or ALiBi for short, that allows for extrapolation. ALiBi does not add positional embeddings to the word embeddings, instead it biases the query-key attention scores with a term that is proportional to their distance. And I'm going to break down this sentence here in a couple of minutes, but the reason they mention positional embeddings is because in the original transformer paper, Vaswani and his collaborators were using sinusoidal patterns and they were adding them to the input embeddings, and only at the input layer. Back then they had a hypothesis that using sinusoids instead of learned embeddings is going to help the extrapolation, but they did not do any serious extrapolation study. And this paper is, I think, the first one to have done that kind of extrapolation study. And let's see the results right now. So the charts are pretty impressive. So here on the left side, you can see models trained on 512 tokens. And on the right side, we see models trained on 1024 tokens. So they compared ALiBi with three different baselines: the original transformer, the rotary transformer and the T5 bias baseline. And as you can see, as we increase the number of inference input tokens further away from what the model saw during the training period, we can basically see that the sinusoidal model's perplexity just explodes. So again, we have perplexity on the y axis, lower is better. And we see that rotary and T5 bias models perform somewhat better, a lot better, I mean, I'm talking qualitatively here, but still. And finally, ALiBi, this looks amazing. And to be honest, I'm still puzzled how this is even possible. On the right hand side, we can see qualitatively the same trends, so I'm going to skip that chart. Okay. So one thing I mentioned initially is that ALiBi can be implemented by changing only a few lines of existing transformer code. And that makes this thing even more impressive, because simplicity is king in the end. If you have some model that achieves some really nice results, but you have to hack it a lot, that's not going to scale. And we had those transformer papers coming out where only the research groups that actually published the paper were the ones who implemented it. And that's kind of hard to reproduce. Okay.
Having said that, let me show you some other interesting charts. So when we again compare ALiBi with the three baselines we just saw across these three dimensions, so the training speed, the inference speed and the training memory, we're going to see awesome results, because the sinusoidal one, so the original transformer, and ALiBi are on par. So you can see here, this is the training speed, obviously in words per second, the higher this metric is, the better. And you can see that the sinusoidal one, so the original transformer, and ALiBi are on par across different input lengths. Then we can see inference speed, again, fairly similar across all of the input lengths. Here it's even a bit higher, so that's kind of strange, not sure why that is. If anybody has an idea, please write it down in the comments. And finally, the training memory, again here lower is better, so the number of gigabytes, you can see that the sinusoidal one and ALiBi are on par, and for the rotary and T5 bias models, even though they achieve some kind of extrapolation as we saw on the chart above, so we saw some better performance compared to the sinusoidal one, we see that there is a huge trade-off. And the nice thing about this ALiBi paper is that pretty much with zero overhead, so there is some small overhead they mention, like 100 megabytes of memory, etc., but let's say zero overhead, they get all of this additional extrapolation capability. And that's pretty amazing, if you ask me. Okay, let's continue and let me try and actually digest how this method works. And it's fairly simple. So by now, hopefully you understand how transformers work, that's a basic building block in deep learning nowadays. Do check out my video on transformers. There are also some nice blogs out there, like Jay Alammar's blog, which is awesome. So do check those out, because I'm going to give you only a high level overview here. So, a short recap of transformers. Let me kind of denote the input vectors like this. So these are input tokens embedded into vectors, and all of these lines kind of represent a separate token embedding vector. And so what transformers do is, for each of those vectors, you form a query vector, you form a key vector, and finally you form a value vector. And you repeat that same procedure for all of these, right? For all of the input embedding vectors. And so what you want to do is you want to take the query, so query one, and you want to do a dot product between query one and key one, and between query one and all of the other tokens. And actually, because here we are dealing with language modeling, we have something called causal masking. That means you don't want to have access to the words that come in the future. So what you do in the language modeling task is, let's say this is query three. So query three will only be able to attend to key three, to key two, and to key one. And it won't be able to attend to these ones, the tokens that come afterwards in the sequence. And that's why we see the diagonal pattern right here. So this is just, as you can see, query one dot product with key one, query two with key one, query two with key two, et cetera. So that's just the thing I just explained, and that's why we see the diagonal pattern. And now these are attention scores, and after that, we do some dropout, we do some softmax, and that's your classical attention building block in a Transformer.
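As a quick illustration of that recap, here is a small NumPy sketch (my own hypothetical code, not from any of the papers) of the causal query-key attention scores and the lower-triangular mask that produce the diagonal pattern just described.

```python
# Hypothetical sketch of causal self-attention weights for one head, showing the
# lower-triangular (diagonal) pattern: query i may only attend to keys j <= i.
import numpy as np

def causal_attention_weights(q, k):
    """q, k: (seq_len, d) queries/keys for one head -> (seq_len, seq_len) weights."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                       # dot product of every query with every key
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    scores = np.where(mask, scores, -np.inf)            # block attention to future tokens
    scores -= scores.max(axis=-1, keepdims=True)        # numerically stable softmax over keys
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# usage sketch: 5 tokens with 64-dimensional queries/keys
rng = np.random.default_rng(0)
q, k = rng.normal(size=(5, 64)), rng.normal(size=(5, 64))
print(causal_attention_weights(q, k).round(2))          # row i is non-zero only up to column i
```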
And the novelty here is that instead of adding those sinusoidal embeddings, they just add this simple bias matrix, with the pattern you can see here, which has a recency bias: the further away a key is from the current query (here we have query five), the larger the penalty added to its score. So after you apply the softmax, the attention coefficient for that value vector drops, which means you take the value vector from position one with a smaller coefficient compared to the value vectors that are close to this query five. Hopefully that was clear; I'm going to dig into a bit more detail later, but the thing you're probably noticing here is this m parameter, and the funny thing is that it's actually not learnable. They just experimentally found some good values, and we're going to see those in a minute. Every single attention head gets its own unique slope m, and that's it; that's the whole thing they did, and they achieved those results. Now let's dig into a bit more detail. They mention that when using ALiBi, they do not add position embeddings at any point in the network; the only modification is that after the query-key dot product, they add a static, non-learned bias. As I already explained, this is just the algebraic expression of the thing I explained above: m is a parameter that depends on the attention head you're in, and the other terms depend on how far away each key vector is from the current query vector. Then we apply the softmax. They say again that m is head-specific and fixed before training. They experimented with different choices and figured out that a geometric progression works best, which is somewhat similar to what the sinusoidal embeddings did, except that there the geometric progression was used for the frequencies of the sinusoids. They did some ablations and mention them here: their main insight from this exploration is that the slope sets that work best are the ones in which the slopes are in the 0-to-1 range, with the density of the slopes increasing as we get closer to 0. And as we can see, these sequences do obey those properties: they're between 0 and 1, and they get denser as we approach 0, because after 1 over 2 to the 8 we have 1 over 2 to the 9, and so on, so the sequence converges to 0 as the exponent grows. That's me trying to be mathematically rigorous, but you get the idea. Okay, I think you got an idea of how this thing works.
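And here is a rough sketch of that "few lines of code" change: computing the head-specific slopes as a geometric progression and adding the distance-proportional bias to the attention scores before the causal mask and softmax. Treat the exact shapes, and the assumption that the number of heads is a power of two, as my simplification rather than the authors' released code:

```python
import torch

def alibi_bias(num_heads, seq_len, device=None):
    # Head-specific slopes: a geometric progression in (0, 1], densest near 0.
    # For num_heads = 8 this gives 1/2, 1/4, ..., 1/256 (assumes num_heads is a power of two).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)], device=device)
    positions = torch.arange(seq_len, device=device)
    distance = positions[None, :] - positions[:, None]          # entry (i, j) = j - i
    # For the allowed (past) positions j <= i the bias is -m * (i - j):
    # more negative the farther back in the sequence the key sits.
    return slopes[:, None, None] * distance[None, :, :]         # (num_heads, seq_len, seq_len)

# Usage on top of the causal scores sketched earlier (broadcast over the batch dimension):
#   scores = q @ k.transpose(-2, -1) / head_dim ** 0.5 + alibi_bias(num_heads, seq_len, q.device)
#   scores = scores.masked_fill(~mask, float("-inf")); attn = torch.softmax(scores, dim=-1)
```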
Now let's finally see the results. They first tested on the WikiText-103 dataset. We can see that the ALiBi model trained on 512 tokens has lower perplexity than the sinusoidal model trained on the same input length. But the interesting thing is that as you increase the validation sequence length, it still has better perplexity than the models that were actually trained on that very same input sequence length. What I mean by that is: you take a model that was trained on 512, you run inference on 3072, and it's still better (let me zoom in) than the model that was trained on that very same length. That's pretty impressive. Now keep in mind, and we'll see this a bit later, that this dataset and this transformer model are fairly small compared to the results we'll see next. And as we'll soon see, the advantages that ALiBi brings are much more pronounced in the low-data regime, which makes sense, because inductive biases are always most helpful when you have little data. We've seen that multiple times; vision transformers versus CNNs are one nice example, where vision transformers only start outperforming CNNs in the big-data regime, because there they can learn even better biases by themselves, let's put it that way. To fully appreciate these results, I highlighted this paragraph: the L=512 model is 1.84 times faster to train, and yet it still beats the 3072-input-token model when extrapolating to that length. In addition, training the 3072 sinusoidal model requires a GPU with more than 16 gigabytes of memory to fit the large attention matrices, but their model beats its performance even though it can be trained on a GPU with much less memory, thanks to the much smaller attention matrices. Pretty impressive results for this low-data regime. They have one more chart that shows pretty much the same information: again, the sinusoidal model trained with 3072 tokens has higher perplexity than the ALiBi model trained with only 512 tokens, which, as you can see here, is also way faster to train (the x-axis is words per second during training) and still reaches lower perplexity, and the evaluation is on the longer sequence length, which if anything favors the model trained on it. So, just another chart showing that ALiBi is better in this low-data regime; not that interesting, let's move on. This next one is interesting because here they completely change the data statistics: they use the Toronto BookCorpus, which is a completely different dataset, and they show that the same trends still apply. Training the ALiBi model and then extrapolating is still better than actually training the sinusoidal model on the 3072 input sequence length, which means the ALiBi approach generalizes to different datasets. The next thing they show in their experiments is that the method also generalizes to the big-data regime, although, as we'll see, the gap between the two models shrinks. On the left-hand side we see models evaluated on 1024 tokens, where ALiBi was trained with 512 and the sinusoidal model with 1024, and the perplexities are pretty much on par; ALiBi is a bit worse here, but keep in mind that its memory consumption is lower and it's faster to train, because it uses a shorter input sequence length. When we go to 2048, you can see that ALiBi is pretty much consistently better by some small margin, and again faster to train with a smaller memory footprint, so that's cool, but the results are not as drastic as in the low-data regime. As a quick note, the models they're using here have 1.3 billion parameters, just so you have a number in your head.
Final results here: they show that there is a minimum in the perplexity curve, and it lies somewhere around two times the input sequence length used for training. So if the model was trained on 512, the minimum is around 1024, which is interesting, and as you can see, the perplexity then rises, but it stays fairly stable up to around 10k tokens. The sinusoidal model, on the other hand, explodes as soon as you increase the length from 512 towards 1000, so they don't even have a curve for it here. Same thing here for the 1024 model, a similar trend, with the minimum somewhere around 2048. Okay, finally, before digging into some ablations, let's look at this one. The table in the appendix has additional results comparing their models to the sinusoidal baseline when both are trained on the same L, showing that ALiBi performs similarly to the sinusoidal baseline when not extrapolating. This is in contrast to the results presented on the smaller datasets, where ALiBi consistently outperformed other position methods even when not extrapolating. This hints that ALiBi's inductive bias provides additional benefits for lower-resource language modeling. That's something I already mentioned, and I think it's fairly important to understand that this type of bias works best in a low-data regime; they show that throughout the paper. Okay, finally, they have some analysis of why these models extrapolate that well. One thing I forgot to mention is how they actually evaluate perplexity. They use this non-overlapping inference method, where you feed a chunk of words as the context to the transformer; say the text is "the big gray cat sat on the mat". When you're predicting "big", the only context you have is the word "the"; when you're predicting "gray", you have "the big"; and when you're predicting "cat", you have "the big gray". Then you move on to the next subsequence, "sat on the mat", and again "on" only has "sat" as context, "the" has those two words, and "mat" has the full three-word context. Compare that to overlapping inference, where again the first words have little context, and "cat" has three words of context, but then you slide the window: the input becomes "big gray cat sat" (let me change the color here), so the word "sat" actually has all three previous words as context, whereas in the non-overlapping case it had none. This method is obviously better because it provides more context; the trade-off is that it's way slower and computationally much more expensive, so usually people just do the non-overlapping inference. Second thing, there is something called the early token curse, and you can kind of deduce what it is: when you do non-overlapping inference, the early tokens of each chunk have less context, by the very nature of a sequence, than the later tokens. That's the early token curse, and one way to deal with it is the sliding-window approach, where only a couple of the initial tokens have that lack-of-context problem.
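As a small illustration of the two evaluation regimes just described, here is how the chunking differs; the function names and the stride handling are my own simplification, not the paper's evaluation script:

```python
def nonoverlapping_chunks(token_ids, window):
    # Disjoint blocks: cheap, but the first tokens of every block get almost no context
    # (the early token curse).
    return [token_ids[i:i + window] for i in range(0, len(token_ids), window)]

def sliding_window_chunks(token_ids, window, stride=1):
    # Slide the context forward by `stride` tokens at a time: almost every prediction gets
    # a full window of context, at a much higher compute cost.
    last_start = max(len(token_ids) - window, 0)
    return [token_ids[i:i + window] for i in range(0, last_start + 1, stride)]
```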
Okay, so what they did here is try the sliding-window inference instead of the non-overlapping method, and they show some results, and to be honest, I'm a bit puzzled by them. First let me explain what I think they've done. If we have a sequence that's, say, 1024 tokens, and the model was trained on 512 (let's take this model here), then in the previous charts the model was fed disjoint chunks: it would be fed these 512 tokens, do the inference, and then be fed the next 512 tokens. What happens now is that when you try to feed the whole sequence at once, the perplexity starts increasing compared to the results we saw previously, and that's especially clear in this chart: it keeps going up and up. That's a sign that the model is not learning how to use the additional context that comes from 1024 tokens, which means it's actually not extrapolating in this other sense of the word extrapolation, I guess. They have a couple of hypotheses for why these models perform better on longer input sequences when doing the non-overlapping inference, and they settle on the second one as the more plausible: the model performs better because longer input sequences provide more context, and so the early token curse is reduced. When we evaluate a validation set with some input subsequence length L_valid, we're actually making L_valid predictions, where the first prediction has just one token of context, the second has two, and so on, so the average prediction gets to use roughly (L_valid + 1) / 2 tokens of previous context. When we extrapolate and increase L_valid, the number of context tokens available to the average prediction also increases, so the performance improvement might not come from being able to do better on the longer sequences, but simply because the average number of context tokens per prediction has become larger. I'm not sure how this makes sense, though. If you have a really long validation sequence and you do non-overlapping inference (and I guess they are trying to explain exactly why those results are better on longer sequences), you still chunk it up into, let's say, 512-token pieces, and that means every time you start a new chunk, its initial tokens have no previous context and cannot reference the information from the earlier chunk. So I don't see how this is actually alleviating the early token curse. I may have misunderstood something really badly here, but anyway, I encourage you to read this section and comment down below what you think about it. As a final thought, I'm going to leave you with this sentence: this analysis reveals that when L_valid is bigger than the training input sequence length, ALiBi might not be using context longer than the one it was trained on, and this points to a research direction that could be pursued in future work. That's it for this video. If you found it useful, share it with a friend, subscribe, hit that bell icon, and also join the Discord community if you want to connect with other ML people; there are going to be a lot of goodies happening in that community. So yeah, until next time, bye bye.
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=hNf6RNHKnE4
Facebook AI's DINO | PyTorch Code Explained
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany With this video, I kick off a brand new series of coding videos in PyTorch where I'll be explaining some of the most impactful AI research through its code analysis! In this video I cover DINO from the "Emerging Properties in Self-Supervised Vision Transformers" paper. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2104.14294 ✅ Code: https://github.com/facebookresearch/dino ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 DINO paper - short overview 02:38 Code analysis starts - training arguments (argparse) 07:30 Training main function 08:25 DINO augmentations 11:30 Main function resumed 16:05 DINO head (MLP) 18:05 Main function resumed 20:40 DINO loss overview 21:45 Main function resumed 24:05 Augmentations visualized (matplotlib) 27:00 Main function resumed 28:00 DINO Core part!!! (in-depth shape analysis) 33:45 DINO loss in depth 38:00 Main function resumed 39:50 Visualizing attention script explained 48:15 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dino #pytorch #coding
What's cracking guys, in this video we're gonna do something very different. I'm gonna walk you through the code of the DINO model from Facebook AI Research, which I've previously covered in one of my videos. We're gonna step through the code and understand how exactly the training works, line by line, in PyTorch. If you find that useful, let me know, and also leave feedback on what you think about this video so that I can improve it for the next time; this is super experimental. Before I dig into the code, I'll do a super short overview of the paper, just a one-minute overview; for those of you who haven't watched the video, go ahead and watch it, I'm gonna link it somewhere here. So let me do a quick recap of what happens so that you have some context before we dig into the code. In that video, I explained how exactly this attention mechanism works, but on a very high level, using these diagrams; from this image we get this attention map, and we're gonna see how exactly that works. By the way, if you want me to share these squiggles over the paper with you, let me know down in the comments; I'm gonna leave a comment, and if I get enough thumbs up, I'm gonna think about how I can share these with you. Okay, next up we saw this high-level diagram of how DINO works. Basically, we have the student branch and the teacher branch; the teacher branch is updated using an exponential moving average of the student weights, and we also have a stop-gradient here, which means we won't be updating the teacher through backprop. Finally, after applying the softmax, we get our output distributions here, and the whole idea is to make these two distributions the same, and we do that via a cross-entropy, as you can see here: p2 times log of p1. The idea is to extract the essence of the image: we input differently augmented views of the same image to the student and the teacher, and we want them to output the same distributions, because it's the same image; that's how we extract the essence of the image. So that was the high-level explanation; we saw the pseudocode, and now we're gonna see the actual code. I mentioned augmentations: we're using these global crops and these local crops, and we're again trying to make the model output the same distributions for both of them. Finally, I also explained how the k-nearest-neighbor evaluation works, and probably in the next video I'm gonna show you how exactly that looks in code, because I think the things I just explained will take at least an hour to cover. So let's get back to the code and see how this looks.
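Just to put that recap into code form, here is a minimal sketch of the student/teacher objective and the EMA update described above; the temperatures and variable names are my assumptions, and the repo's actual loss class additionally centers the teacher outputs and pairs up multiple crops, which I omit here:

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, student_temp=0.1, teacher_temp=0.04):
    # Teacher is a target only: stop gradient, sharper softmax.
    teacher_probs = F.softmax(teacher_logits.detach() / teacher_temp, dim=-1)
    student_log_probs = F.log_softmax(student_logits / student_temp, dim=-1)
    # Cross-entropy  -sum p_teacher * log p_student, averaged over the batch.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Teacher weights are an exponential moving average of the student's weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param.detach(), alpha=1.0 - momentum)
```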
Okay, you've watched the video, you have some context from this short recap, now let's start digging into the actual code. Let's put a breakpoint here, as I'm going to step through this code and show you exactly how it works. Let's start with this get_args_parser function. I'm gonna quickly skim over these arguments so that you have some context and an idea of which variables are gonna be circulating around the code. First things first, the architecture: we're gonna be using the Vision Transformer small; they have many different options, but we're gonna focus on that one. Batch size: I had to increase it to 32 because it was 16 by default and it was for some reason crashing my machine; I still haven't debugged the amount of memory it's eating up on my GPU. The output dimension is 65,536, and that's the size of the output distributions we just saw in the paper overview, the ones we compare and make the same for different crops. Then we have norm_last_layer, which I'm gonna skip; momentum_teacher is 0.996, that's the coefficient used in the exponential moving average when forming the teacher weights from the student weights. Then whether to use batch norm in the head, which is not that important; warm-up teacher temperature, teacher temperature, and the number of warm-up epochs, which together just define the schedule of how the teacher's temperature changes throughout the training. Mixed-precision training is on, but I obviously won't be focusing on the optimization stuff in this video. Weight decay: again they have a schedule, so there is an initial and an end value. Gradient clipping: that's used to clip the norm of the gradients before doing a weight update; we're gonna see that in a couple of minutes. Number of epochs, one hundred by default; whether or not to freeze the last layer, which is just a trick that makes the training a bit easier; learning rate, where they basically have a linear warm-up first and then, I think, a cosine schedule until the end of the training, and warmup_epochs just tells you over how many epochs that linear warm-up happens; minimum learning rate, which is pretty much the learning rate at the end of training. The optimizer is AdamW. They have distributed training because this DINO model was originally trained on 8 or 16 GPUs, but I have only a single GPU here. drop_path_rate is just an argument that modifies the Vision Transformer architecture. Then we have the global crop scale, which goes from 0.4 to 1.0; in contrast, we have eight local crops, and the local crop scale goes from 0.05 to 0.4, and that 0.4 is the same number as here. So that means that for the small crops we take 5% to 40% of the image and then resize it back, while for the big crops we take from 40% to 100% of the image and then resize it to 224 by 224; the small crops are resized to 96 by 96 pixels.
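For reference, here is an abridged sketch of an argument parser with just the flags discussed so far; the real script defines many more options, and the defaults below only echo the values mentioned in this walkthrough:

```python
import argparse

def get_args_parser():
    # Abridged sketch of the flags discussed above; the real script defines many more.
    parser = argparse.ArgumentParser("DINO", add_help=False)
    parser.add_argument("--arch", default="vit_small", type=str)
    parser.add_argument("--out_dim", default=65536, type=int)             # size of the output distribution
    parser.add_argument("--momentum_teacher", default=0.996, type=float)  # EMA coefficient
    parser.add_argument("--epochs", default=100, type=int)
    parser.add_argument("--optimizer", default="adamw", type=str)
    parser.add_argument("--global_crops_scale", default=(0.4, 1.0), type=float, nargs="+")
    parser.add_argument("--local_crops_scale", default=(0.05, 0.4), type=float, nargs="+")
    parser.add_argument("--local_crops_number", default=8, type=int)
    return parser
```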
Again, these are details, but this video is gonna be about a lot of details. We're gonna use ImageNet; what I actually did is download a small subset of ImageNet, because the full thing is huge, more than 150 gigabytes I think. It's from fast.ai and it's called Imagenette, the one with two T's. Aside from having to download that small subset, I also had to do a couple of tricks to make this work on Windows, because I'm running on a Windows machine, so if you want to know about those nitty-gritty details and the small bugs and problems I had to solve, do join the Discord server; I actually have a thread where, as I was analyzing this code over the past two days, I was writing down the steps I took and the problems I encountered. Okay, so finally, as I said, ImageNet; then where I'm going to save the checkpoints and logs; and the seed. The seed basically serves to make this code reproducible, because otherwise there is a lot of randomness and every time you run this code you'd get different outputs and different results, and the images would be loaded in a different order; fixing it gives me some consistency between runs and makes it easier to debug the code. Number of workers is not that important, and the rest is just some distributed settings, which are not that vital. Okay. We parse the arguments, we create the output directory where we're going to dump the log files and the checkpoints, and finally we go to the main function, which is train_dino. Again, init_distributed_mode is just a function that sets up certain settings when you're doing distributed training on 8 or 16 GPUs; I'm using only one GPU, so I had to do some modifications because of Windows, but do check out the Discord server for that. We fix the random seeds, then there's some printing; let me bring up the console here. They're basically just printing the git SHA, the hash of the commit of the code I'm currently running, and then they print the arguments. cudnn.benchmark set to true is just an optimization that makes the code run faster under certain assumptions; again, not that important. Okay, this one is fairly important: DataAugmentationDINO. I'm gonna quickly walk you through this code and then step over it in the execution loop. As you can see, they have a flip-and-color-jitter transform, which is just a Compose, meaning we compose a couple of different transforms, and these do the flipping and the photometric augmentations. We have a RandomHorizontalFlip, so with a 50% chance we flip the image or not; then with a certain probability, 80%, we randomly apply this ColorJitter, which changes the brightness, the contrast, the saturation, and the hue, so basically photometric augmentations; and finally RandomGrayscale, so with a 20% chance we apply the grayscale transformation. Then we have normalization. This is your standard stuff.
We are converting the images to PyTorch tensors and normalizing them, and don't get confused by these magic numbers; it would be really nice if the authors had extracted them into a constant, because they're used all around the codebase, but they're just the ImageNet statistics, the mean and the standard deviation of the ImageNet images. Now for the fun part: this RandomResizedCrop is the thing that does the magic of DINO. The global crop scale is again the thing I just mentioned, so we take 40% to 100% of the image, crop it, and then resize it back to 224 by 224 using bicubic interpolation. Then they apply some photometric augmentations, the flipping, the Gaussian blur, and the normalization. We have a second global crop, because if you remember the details from the paper, we have two global crops and eight local crops (although obviously you can experiment with that). So again a RandomResizedCrop, everything is the same, except they additionally apply solarization here. If you're not familiar with solarization, I'm just gonna put an image on the screen; the image is going to explain it much faster than I would. Finally, we have the local crops, where we apply the local crop scale, meaning we now use 5% to 40% of the image and then resize that back into 96 by 96 images, so the images that came from the smaller crop are of smaller resolution. Again flipping and color jittering, Gaussian blur, normalization, standard stuff, so that's it. Basically, what happens is that this DataAugmentationDINO object is passed to the dataset in PyTorch, and then every time you fetch an image from the dataset it produces one global crop, a second global crop, and then iterates in a for loop eight times to produce the small local crops, and returns a list of 10 crops in total. This next bit is just some code I wrote; I'm gonna experiment with that a bit later, but let me get back to the actual line where we stopped. Okay, so that was the DataAugmentationDINO; I think that was a pretty important detail. By the way, I'll be adding timestamps to all of these logical sections, so feel free to skip ahead if you already understand some part. Continuing on, we create the dataset, which, as you can see, points to the ImageNet train subdirectory, and we pass the transform I already mentioned, which means every time you fetch an image from this dataset it applies those transformations and we get the 10 crops back. DistributedSampler: just ignore the distributed part, this is basically going to shuffle the images once we feed the dataset into the data loader. So as we can see, we have the dataset, we have the sampler, we have, I think, 64 images per GPU, and since I have a single GPU that's 64 images in total; num_workers is just optimization stuff, and pin_memory as well, it basically helps you transfer images from the CPU to the GPU faster because there's a dedicated, page-locked region of memory. I'm currently going through every single detail because I'm experimenting; later on I'm gonna abstract away many details, so let me know whether you want everything explained or just the main idea; this is an experimental video, so please give me that feedback. And yeah, drop_last: because we have a dataset of around 9,000-something images, and every batch has 64 images as we saw, the last batch probably won't have 64 images unless the number of images in our training set is divisible by 64, and drop_last just means we drop that last, smaller batch, because the different shape can cause some issues; that's the reason you sometimes see drop_last set to true. Okay, data loaded: there are, as you can see, 9,469 images. This next bit is just some bookkeeping they did because they used to name these models "deit", and now they just replace that with "vit"; our architecture string is already vit_small, so nothing happens there. Because we're using a Vision Transformer, we enter this if statement, and this statement here may confuse you: vits.__dict__ is just a dictionary of everything defined in this vision_transformer.py file, so indexing it with the architecture name and calling the result instantiates the model; we instantiate it with a patch size of 32, so each patch will be 32 by 32 pixels, and we use this drop_path_rate of 0.1. The quickest way to understand what this thing does is to put a print statement in there, print the dictionary, set a breakpoint, stop the code, and re-execute; so I skip these lines, get here, open the console, and after I print it you see a bunch of stuff: it's actually printing this whole vision_transformer.py module, and some of its functions are accessible through that dictionary, which is just Python stuff. So somewhere in there we have this vit_small function, and we pass it the patch size and, additionally, that drop_path coefficient. That's what happens there: we're instantiating the ViT-Small model. Then we do the same thing for the teacher network; so that was the student network, now we have the teacher network, and since we're using Vision Transformer small, the embedding dimension is going to be 384, which is just the hyperparameter they decided to use. In these other branches they handle XCiT and some other architectures we don't care about.
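Here is a condensed sketch tying together the augmentation class, the data loader, and the model instantiation we just walked through. The Gaussian blur and solarization below use stock torchvision transforms rather than the repo's custom helpers, the data path is hypothetical, and the single-GPU loader drops the DistributedSampler, so read this as an approximation of the real file:

```python
import torch
from torchvision import datasets, transforms

import vision_transformer as vits  # the repo's vision_transformer.py, assumed importable

class DataAugmentationDINO:
    def __init__(self, global_crops_scale=(0.4, 1.0), local_crops_scale=(0.05, 0.4), local_crops_number=8):
        flip_and_color_jitter = transforms.Compose([
            transforms.RandomHorizontalFlip(p=0.5),
            transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
            transforms.RandomGrayscale(p=0.2),
        ])
        normalize = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),  # ImageNet stats
        ])
        bicubic = transforms.InterpolationMode.BICUBIC
        self.global_transfo1 = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=global_crops_scale, interpolation=bicubic),
            flip_and_color_jitter, transforms.GaussianBlur(9), normalize,
        ])
        self.global_transfo2 = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=global_crops_scale, interpolation=bicubic),
            flip_and_color_jitter, transforms.RandomSolarize(threshold=128, p=0.2), normalize,
        ])
        self.local_crops_number = local_crops_number
        self.local_transfo = transforms.Compose([
            transforms.RandomResizedCrop(96, scale=local_crops_scale, interpolation=bicubic),
            flip_and_color_jitter, transforms.GaussianBlur(9), normalize,
        ])

    def __call__(self, image):
        crops = [self.global_transfo1(image), self.global_transfo2(image)]
        crops += [self.local_transfo(image) for _ in range(self.local_crops_number)]
        return crops  # 2 global + 8 local crops per image

dataset = datasets.ImageFolder("data/imagenette/train", transform=DataAugmentationDINO())  # hypothetical path
data_loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True,
                                          num_workers=8, pin_memory=True, drop_last=True)

# vits.__dict__[name] looks a constructor up by its string name, e.g. "vit_small".
student_backbone = vits.__dict__["vit_small"](patch_size=32, drop_path_rate=0.1)  # 32 as used in the video
teacher_backbone = vits.__dict__["vit_small"](patch_size=32)
embed_dim = student_backbone.embed_dim  # 384 for ViT-Small
```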
I'm currently going through every single detail because I'm experimenting; later on I'll abstract many of them away, so let me know whether you actually want everything explained or whether you'd rather I just cover the main idea. This is an experimental video, so please give me that feedback. Next, drop_last: our dataset has around 9,000 images and every batch holds 64 of them, as we saw, so unless the number of training images happens to be divisible by 64, the last batch will contain fewer than 64 images, and drop_last simply drops that last, smaller batch, because the different shape can cause issues. That's why you sometimes see drop_last set to true. Okay, the data is loaded: as you can see, there are 9,469 images. Then there's a bit of housekeeping: they used to name these models "deit", and here they just replace that with "vit", but our architecture string is already vit_small, so nothing changes. Now, vits is just shorthand for the vision_transformer.py module, which holds all of the vision transformer model definitions, and since our architecture is a vision transformer we enter this if statement. The statement that may confuse you, indexing vits.__dict__ with the architecture name, simply looks the constructor up in the dictionary of everything defined in that file and calls it to instantiate the model - here with a patch size of 32, so each patch is 32 by 32 pixels, and with a drop_path_rate of 0.1. The quickest way to understand what this does is to put a print statement there, print the dictionary, set a breakpoint, and re-run. So I skip ahead, get here, open the console, and after I print it you can see it's printing everything from the vision_transformer.py file; some of the entries are functions, so this is just plain Python. Somewhere in that dictionary there is the vit_small function - let me jump to the file - and we pass it the patch size and, additionally, that drop_path coefficient. So that's what happens there: we instantiate the vit_small model for the student. (A tiny sketch of this dictionary-lookup pattern is shown below.) Then the same thing is done for the teacher network, and since we're using ViT-Small the embedding dimension is 384; that's just the hyperparameter they decided on. The other branches handle XCiT and some other architectures we don't care about here.
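Since that lookup line trips people up, here is a tiny, self-contained illustration of the pattern, with made-up placeholder builder functions standing in for the ones defined in vision_transformer.py:

```python
# Minimal illustration of selecting a model builder by name, like vits.__dict__[args.arch](...).
import torch.nn as nn

def vit_tiny(patch_size=16, **kwargs):
    return nn.Linear(8, 8)    # placeholder module, not a real ViT

def vit_small(patch_size=16, drop_path_rate=0.0, **kwargs):
    return nn.Linear(16, 16)  # placeholder module, not a real ViT

builders = {"vit_tiny": vit_tiny, "vit_small": vit_small}  # plays the role of vits.__dict__

arch = "vit_small"
if arch in builders:
    student = builders[arch](patch_size=32, drop_path_rate=0.1)
    teacher = builders[arch](patch_size=32)
print(type(student).__name__)
```

A module object works the same way: its __dict__ maps every top-level name in the file to the corresponding object, so indexing it with the architecture string picks the right constructor.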
Now, this is another important part. Basically, what they do is wrap the student network - the vision transformer - and put this DINO head on top of it. The DINO head is just a simple MLP with a few extra tweaks, and I'll jump into those details right now. The main point is that they build this MLP; since the number of layers is three by default, the MLP gets three linear layers, which means we execute the else branch and form an MLP that consists of, as you can see, linear layers with potentially some batch norms and potentially some GELU activations in between - but most importantly the linear layers. So it's basically an MLP. The weights are then initialized with self.apply, a built-in method of the nn.Module class. After that they form the last layer by creating yet another linear layer whose output dimension is roughly 65,000 (65,536 by default); that's the actual output of the whole pipeline, and those are the outputs we later compare with the cross-entropy. The interesting part about that last layer is the weight norm, which is just a hook: once you set its weight_g to all ones, the linear layer is effectively normalized, i.e. the L2 norm of its weight vectors is forced to one. And if they decide to normalize the last layer - if norm_last_layer is set to true - then, as you can see here, the gradients for weight_g are set to false, which means the following: during training those values will not be updated, so the L2 norm is kept at one, i.e. the last layer stays normalized, hence the name norm_last_layer. I think it's pretty self-explanatory. So, again: a simple MLP, and on top of it a last layer with those roughly 65,000 output neurons, plus that optional normalization. A rough sketch of such a head is given below. Getting back here: that's the DINO head, and we stick it on top of the student. There's one additional detail we need to go through, this MultiCropWrapper (again, I'll have timestamps, so jump ahead if you're not interested in this section). I'm just going to put a breakpoint in it, and once we reach that part of the code we'll understand exactly what it does; on a high level, for now, it just deals with those crops, passes them through the pipeline, and out comes the final output of the student and the final output of the teacher. So that's how the student is formed: a MultiCropWrapper around the student backbone, which is a vision transformer, and the DINO head, which is an MLP. The same goes for the teacher - teacher plus DINO head - with some minor differences, like whether or not the last layer is normalized. And note that there is no batch normalization anywhere in this whole pipeline, which is something the authors strive for. Okay, then we just push the student and the teacher to the GPU; that's what the .cuda() calls mean.
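Here is a rough sketch of such a projection head, assuming a three-layer MLP with GELU activations, an L2-normalized bottleneck and a weight-normalized final linear layer; the hidden and bottleneck sizes are illustrative, not taken from the repo:

```python
# Sketch of a DINO-style projection head with a weight-normalized last layer.
import torch
import torch.nn as nn

class DINOHeadSketch(nn.Module):
    def __init__(self, in_dim=384, out_dim=65536, hidden_dim=2048,
                 bottleneck_dim=256, norm_last_layer=True):
        super().__init__()
        # three linear layers with GELU in between (no batch norm in this sketch)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, bottleneck_dim),
        )
        # weight-normalized last layer producing the ~65k-dim output scores
        self.last_layer = nn.utils.weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False))
        self.last_layer.weight_g.data.fill_(1)              # force unit-norm weight vectors
        if norm_last_layer:
            self.last_layer.weight_g.requires_grad = False  # keep that norm fixed at 1

    def forward(self, x):
        x = self.mlp(x)
        x = nn.functional.normalize(x, dim=-1, p=2)  # L2-normalize the bottleneck features
        return self.last_layer(x)

head = DINOHeadSketch()
print(head(torch.randn(4, 384)).shape)  # torch.Size([4, 65536])
```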
And finally, because, as I said, we don't have any batch norms, we enter the else branch, which means the teacher network will not be wrapped in DistributedDataParallel - that's just a wrapper that makes it easy to train these models in a distributed fashion. Continuing on, we do wrap the student network in DistributedDataParallel, but there's only a single GPU here, so the index zero just means the default GPU on your system; I have an RTX 2080, so that's it. Then we load the state dictionary directly from the student module into the teacher, and as they say nicely in the comment, teacher and student start with the same weights; initially they're identical, and later the teacher weights are updated with the exponential moving average. Finally, as I said, there's a stop-gradient - if you remember the diagram, there's a stop-gradient on the teacher branch - and this is how you implement it in PyTorch: you simply iterate through all of the teacher's parameters and set requires_grad to false, which means they won't be trainable. I'll skip the rest with F9, and there's the output: the student and teacher are built, and they're both ViT-Small networks. (A small sketch of this "copy the student, then freeze it" setup, together with a generic cosine schedule, is given below.) This next part, the DINO loss, is fairly important and I'll step through it properly a bit later, but let me give you a glimpse: I'll put a breakpoint in its forward function. The init just stores the parameters we passed in and registers this center vector with register_buffer. What register_buffer does is make sure the vector is part of the state dictionary even though it's not trainable: the center will not be updated with gradients, it's updated with an exponential moving average, so it isn't a model parameter in PyTorch terminology, but we still want it saved in the model's state dict. They also build a teacher temperature schedule; with the current parameters it simply returns 0.04 throughout all of the epochs, but it's set up so you could have a linear warm-up - not that important. Let's get back from the DINO loss: I'll step over it for now and step through its forward function once it's actually used. This next bit - whether or not to apply weight decay to certain parameters of the student - isn't important; then we create the AdamW optimizer, and because we're using mixed-precision training we create this fp16 scaler, which means the models are trained in 16-bit precision. Here they create the schedulers: a cosine scheduler for the learning rate, a cosine schedule for the weight decay, and finally the momentum scheduler - that's the parameter used to update the teacher weights from the student weights - which is, again, just a cosine scheduler. Okay, so that's it; I think we're good to go and we're getting closer to the actual training loop. This already took a lot of time.
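A minimal sketch of that setup - copy the student's weights into the teacher, freeze the teacher, and build a warm-up-plus-cosine schedule. The modules are toy stand-ins and the scheduler is a generic helper in the spirit of the ones created here, not the repo's exact function; the 147 steps per epoch come from the 9,469 images divided into batches of 64 with the last partial batch dropped:

```python
# Sketch: teacher initialized from the student and frozen, plus a cosine schedule.
import numpy as np
import torch.nn as nn

def cosine_schedule(base, final, epochs, steps_per_epoch, warmup_epochs=0, start_value=0.0):
    warmup = np.linspace(start_value, base, warmup_epochs * steps_per_epoch)
    iters = np.arange(epochs * steps_per_epoch - len(warmup))
    cosine = final + 0.5 * (base - final) * (1 + np.cos(np.pi * iters / len(iters)))
    return np.concatenate((warmup, cosine))  # one value per training iteration

student = nn.Sequential(nn.Linear(384, 256), nn.GELU(), nn.Linear(256, 65536))
teacher = nn.Sequential(nn.Linear(384, 256), nn.GELU(), nn.Linear(256, 65536))

teacher.load_state_dict(student.state_dict())  # teacher and student start with the same weights
for p in teacher.parameters():
    p.requires_grad = False                    # stop-gradient: the optimizer never sees these

momentum_schedule = cosine_schedule(base=0.996, final=1.0, epochs=100, steps_per_epoch=147)
print(momentum_schedule[0], momentum_schedule[-1])  # 0.996 at the start, ~1.0 at the end
```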
So, this restart_from_checkpoint call - I think we can skip it, because we don't have any checkpoints yet. I'll just put a breakpoint there, enter it, step over a few times, and it returns from the function since there's nothing to restore; it's a dummy for now. start_epoch is zero, we log the starting time - just some bookkeeping - and finally, here is the main loop. We start from epoch zero and run for 100 epochs, and you can ignore this line, it's just something the distributed sampler needs. This is the actual meat of the whole script: train_one_epoch trains DINO for one epoch, and afterwards there's just some logging, nothing fancy. So I'll step into it with F7 and we'll see what the function looks like. The metric logger isn't important. Okay, this is where the fun starts. We fetch from the data loader, and images is a list of 10 elements, where each element is a batch of 64 images. Why 10? Because, as you remember, we have two global crops and eight local crops. So if we take the zeroth element of this list, that's a batch of 64 global crops, and if we take, say, index 2, that's the first batch of local crops, and so on. As you can see, I pre-wrote some code here to make this quicker to show. I'll stop the execution and rerun, because I need one more tweak to make it work: I add an additional "crop" that just preserves the image as it is, without any cropping, and prepend it as the first element, so that we have a reference image and 11 crops in total. Let me run this and get back to where we were. So, here's what my code does: since I prepended the original image, we now have 11 elements instead of 10, and index zero holds the original images. I fetch one image out of those 64, convert it to NumPy and do a moveaxis, because the tensor is channels-first and matplotlib's plot functions want channels-last. The second line extracts a global crop: indices one and two now reference the global crops, and again I take a single image. Finally, indices three through ten all contain local crops, so I take one arbitrary local crop, and then let's plot those results. (A small sketch of this visualization code is given below.)
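For reference, here is a small sketch of that visualization idea; the random images list is a stand-in for the batches coming out of the data loader (with the un-cropped original prepended at index 0), the variable names are mine, and the de-normalization constants are the ImageNet statistics mentioned earlier:

```python
# Sketch: plot the original image, one global crop and one local crop from the batch list.
import torch
import matplotlib.pyplot as plt

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

# stand-in batches: original + 2 global (224) + 8 local (96); the real ones come from the loader
images = [torch.rand(64, 3, s, s) for s in (256, 224, 224, 96, 96, 96, 96, 96, 96, 96, 96)]

def to_numpy_image(t):
    # undo the normalization and go from CHW tensor to HWC numpy array for matplotlib
    t = t * IMAGENET_STD + IMAGENET_MEAN
    return t.clamp(0, 1).permute(1, 2, 0).numpy()

original = to_numpy_image(images[0][0])     # index 0 = prepended original, first image in batch
global_crop = to_numpy_image(images[1][0])  # indices 1-2 = global crops (224x224)
local_crop = to_numpy_image(images[3][0])   # indices 3-10 = local crops (96x96)

for title, img in [("original", original), ("global crop", global_crop), ("local crop", local_crop)]:
    plt.figure(); plt.title(title); plt.imshow(img); plt.axis("off")
plt.show()
```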
Here you can see the original image - just some dude playing golf. Now the global crop: there is a crop, but there's also severe solarization and other photometric effects, so you can't really see it that clearly. The local crop, on the other hand, is totally clear: we've taken a single small crop and resized it into this 96 by 96 image, with some augmentations applied on top. So that's it - now you have a feel for what the images actually look like, and I encourage you, when you're debugging and trying to understand code, to make these visualizations as much as possible, because they help you understand what's going on. That plus print statements is pretty much how I debug code, although there are sometimes more elegant ways to do it. Okay, let's continue. This line forms a global step from the epoch we're in, the number of batches in the data loader, and the current iteration inside this loop. Then, depending on the current schedules, the weight decay and learning rate are updated across the parameter groups; that's not that important, so I'll skip it - we're just adapting the weight decay and learning rate to the current step. Next we put the images on the GPU - let me just step over that. And this part here is a context manager for the mixed-precision training; since the fp16 scaler is not None we would enter it, but now it breaks because of the crop I prepended, so let me quickly comment my code out, restart the training, and get back to you. Okay, here we are. This, as I said, is a very important step: we take only the global crops and pass them through the teacher to get the teacher output. Analyzing the shapes of these outputs is super important - I can't stress that enough - so let's see. We're inside the MultiCropWrapper now, the part we wanted to understand a little better. x is a list of two elements, each containing a batch of crops - these are the global crops - and because it's already a list, that first branch is skipped. Then there's a fancy way of separating the global crops from the local crops; it doesn't do much for the teacher network, because the teacher only gets global crops, so idx_crops just contains the number two - this will make more sense for the student network. Now we iterate: we take crops zero to two, i.e. we concatenate all of the global crops - 64 images in one element and 64 in the other - into a single tensor of 128 images, and pass that through the backbone, which is the teacher's vision transformer. What shape should come out? The ViT here outputs only the CLS token, which has 384 dimensions, so hopefully we'll get 128 by 384. (A sketch of this wrapper, with the shapes worked out, is given below.)
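Here is a sketch of what such a multi-crop wrapper boils down to, with a toy pooling backbone standing in for the vision transformer so the shapes can be checked end to end:

```python
# Sketch of a multi-crop wrapper: group crops by resolution, run each group through the
# backbone in one batch, concatenate, and push everything through the head at the end.
import torch
import torch.nn as nn

class MultiCropWrapperSketch(nn.Module):
    def __init__(self, backbone, head):
        super().__init__()
        self.backbone, self.head = backbone, head

    def forward(self, crops):
        if not isinstance(crops, list):
            crops = [crops]
        # cumulative counts of consecutive equal resolutions, e.g. tensor([2, 10])
        sizes = torch.tensor([c.shape[-1] for c in crops])
        idx_crops = torch.cumsum(torch.unique_consecutive(sizes, return_counts=True)[1], 0)
        outs, start = [], 0
        for end in idx_crops:
            # one backbone forward per resolution group (globals together, locals together)
            outs.append(self.backbone(torch.cat(crops[start:end])))
            start = end
        return self.head(torch.cat(outs))

# Toy usage: a backbone stand-in that maps any image to a 384-d vector, like the CLS token.
backbone = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 384))
head = nn.Linear(384, 65536)
wrapper = MultiCropWrapperSketch(backbone, head)

globals_ = [torch.rand(64, 3, 224, 224) for _ in range(2)]
locals_ = [torch.rand(64, 3, 96, 96) for _ in range(8)]
print(wrapper(globals_).shape)            # teacher path: torch.Size([128, 65536])
print(wrapper(globals_ + locals_).shape)  # student path: torch.Size([640, 65536])
```

The printed shapes match the walkthrough: two global crops of 64 images give 128 rows for the teacher, and two global plus eight local crops give 640 rows for the student.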
So it has 384 Uh, like uh neurons, right? So we're gonna have hopefully, um What so 128 times 384, okay, let's let's see whether the prediction was right So it's kind of good to have these predictions and then uh tweak your brain depending on whether you made made a mistake or not I guess that's a similar way how these neural networks work. So the out Shape here as you can see the shape is 128 128, um, comma 384 so that's it i'm gonna skip this part and now we're gonna gonna this is going to accumulate the The output and this will again make more sense for the student network. Now is just gonna Uh, we're gonna just copy paste pretty much this output from the vision transformer here and that's it We're gonna exit the loop and now this is the actual dino head So this is the mlp and now we're after passing it here. We're gonna end up with those 65 000 something So we're gonna have 128 times 65,000 something. Let's let's see whether that that's true. Okay Uh, whoops, let me get out of here Teacher output. Let's see whether the teacher output is as I predicted so teacher output. Let me kind of So 128 65 000, that's it. So the predictions were right. So now This is the interesting part. We have a student network and we are passing both the global crops as well as the local crops So let's enter the loop again again. It's a list. So i'm gonna skip this and now it's a bit more interesting. So it's gonna Make uh return the ids where the resolution changes So we have 20 remember the global crops are 224 by 224 and the local crops are 96 by 96 So what this line of code actually does is it finds a separation and returns the indices? Of the place where the resolution changes so it's going to return as you will soon see 2 and 10 I think or something like that. Yeah 2 and 10 so that's because after 2 we drop the small resolution And then we keep the small resolution until the end and that's 10 crops So now we are iterating again as you can see here. We're first going to pass 0 through 2 So that's global crops And we're going to pass that to the the network here. So that's going to be again 128 So that was that is going to be as you can see here 128 384 Uh, and we're just going to concatenate that with this output, which is just a placeholder variable You can see here is torch empty and we're just concatenating the results to it and that's it Now we're gonna put this the start index is now going to be 2 and the end index is going to be 10 So now we're passing 2 Through 10 which means we are fetching the local crops. We're concatenating them So we're gonna form what like let me do it like calculator Here we have 8 crops all of these have what? 64 64 images so that's 512. So let's see whether that's true So we have 512 and we're going to expect the output 512 384 so let's see whether that's true. So we have out Output variable 512 384. That's it. So everything works as we expect So let's continue here again. We're just concatenating these so now we're gonna concatenate these with the outputs from the from the global crops and that means we're gonna have what like 512 for the global crops plus 100 128 so 512 was for the for the local crops 128 is for the Global crops so that's 640. So if we do that, we're gonna see that the output is Let me see here So the output is 640 384. Okay, that's it. Let's continue. We're gonna exit the loop. We're gonna pass all of these through the MLP and that means we're gonna end up with Instead of 384 we're gonna have those 65 000 something Activations, so that's it. 
That's the whole trick. Let me exit this wrapper - it takes a few tries to step out. Okay, so this is the last important detail: let's analyze the DINO loss, and that's pretty much it; after that you should be able to understand the training. So what happens here: we take the student outputs - again 640 by roughly 65,000 - and scale them by the student temperature; remember from the diagram in the paper, that's exactly what they do. Then the result is chunked into 10 pieces, because we have 10 different crops; it's just a way of regrouping those 640 rows into semantically meaningful groups, one per crop, global and local. So student_out, if we expand the variable, goes from 640 rows to a list of 10 elements, each containing the outputs for 64 images, as it should be - and that's exactly what we see. We do a similar thing for the teacher: there's the temperature, the centering of the output, then dividing by the temperature and taking the softmax across the feature dimension - that vector of roughly 65,000 activations. Then we detach, because again we're not training the teacher, and we chunk into two, because only the two global crops were passed through the teacher. So teacher_out has 128 rows and gets split into two elements of 64 images each; stepping over confirms it. Now we have those different views and we compute the actual loss. We take a single teacher view - a batch of outputs for one global crop - and this check makes sure we never compare a view with itself: in the first iteration both indices are zero, i.e. both point at the same global crop, so we skip it; in the second iteration the student index is one, so we take the batch of 64 outputs from the second view and contrast it against q, the first teacher view, with a simple cross-entropy: q times the log of the student view. The softmax shows up only on the student side here because we already applied softmax to the teacher output, so for the student we apply softmax and then log, and together that's just cross-entropy. Focusing on the shapes again: if we expand the q variable it's 64 by 65,536, and the same holds for the student view, so we have the outputs for 64 images on each side and we're making sure the two distributions match for every one of them. Then we sum across the feature dimension and take the mean over the batch dimension, which leaves us with a scalar. (A compact sketch of the whole loss - including the loop over all view pairs and the center update described next - is given below.)
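Here is a compact sketch of such a loss. The student temperature and center momentum are illustrative defaults (the 0.04 teacher temperature matches the value mentioned above), and the real implementation additionally synchronizes the center across GPUs in distributed training:

```python
# Sketch of a DINO-style loss: sharpen/center the teacher, chunk per crop, and do
# cross-entropy between every teacher view and every *other* student view.
import torch
import torch.nn.functional as F

class DINOLossSketch(torch.nn.Module):
    def __init__(self, out_dim=65536, n_crops=10, student_temp=0.1,
                 teacher_temp=0.04, center_momentum=0.9):
        super().__init__()
        self.student_temp, self.teacher_temp = student_temp, teacher_temp
        self.center_momentum, self.n_crops = center_momentum, n_crops
        # the center lives in the state dict but is never trained by gradients
        self.register_buffer("center", torch.zeros(1, out_dim))

    def forward(self, student_output, teacher_output):
        student_out = (student_output / self.student_temp).chunk(self.n_crops)
        teacher_out = F.softmax((teacher_output - self.center) / self.teacher_temp, dim=-1)
        teacher_out = teacher_out.detach().chunk(2)           # only the 2 global crops

        total_loss, n_terms = 0.0, 0
        for iq, q in enumerate(teacher_out):                  # teacher views
            for v in range(len(student_out)):                 # student views
                if v == iq:
                    continue                                  # never compare a view with itself
                loss = torch.sum(-q * F.log_softmax(student_out[v], dim=-1), dim=-1)
                total_loss += loss.mean()
                n_terms += 1
        self.update_center(teacher_output)
        return total_loss / n_terms

    @torch.no_grad()
    def update_center(self, teacher_output):
        # EMA of the mean teacher output, used for centering at the next step
        batch_center = teacher_output.mean(dim=0, keepdim=True)
        self.center = self.center * self.center_momentum + batch_center * (1 - self.center_momentum)

dino_loss = DINOLossSketch(out_dim=65536)
loss = dino_loss(torch.randn(640, 65536), torch.randn(128, 65536))
print(loss.item())
```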
That's pretty much it: we iterate through every single combination of views - over the teacher's global crops here and over both the global and local student crops there - take each pair of views, compute the cross-entropy, and that's how the whole thing is trained. The final detail is that we update the center vector, which, as you remember, is used for centering the teacher output. self.center gets updated from the teacher outputs in this function here; I'll skip it, but you can analyze it yourself - it's fairly simple and just follows the formula from the paper, an exponential-moving-average update of the center vector. Okay, let's exit this. Now we just have the bureaucracy: we clear the gradients of the model, call backward, which computes the gradients through the whole network for all of the trainable parameters, and then do some gradient clipping. This line here additionally freezes the last layer: remember the parameter called freeze_last_layer - since it's 1 and the current epoch is 0, we cancel the gradients of the model's last layer for now, which is a small optimization trick that makes training more stable. Then we call the optimizer step, which actually updates the weights, plus some details around the mixed-precision training, nothing spectacular. Finally, we read the momentum value for the current global step from its schedule, and this is the part where the teacher network is updated from the student weights: an exponential moving average where the teacher weights are multiplied by m, the momentum parameter, and we add (1 - m) times the corresponding student weights - you can see the formula on the screen. A minimal sketch of that update follows below. And that was it - those were the core parts of the training; everything else is just logging and dumping the log files and checkpoints. That was the very meat of DINO and of how the training looks. Again, any feedback is appreciated: the more feedback I receive, the better I'll know how to make these videos next time.
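A minimal sketch of that momentum update, with toy linear layers standing in for the student and teacher; in the real loop, m comes from the momentum schedule for the current iteration:

```python
# Sketch of the EMA (momentum) update of the teacher from the student after each step.
import torch
import torch.nn as nn

student = nn.Linear(384, 65536)
teacher = nn.Linear(384, 65536)
teacher.load_state_dict(student.state_dict())

m = 0.996  # momentum value for the current step, normally read from the cosine schedule
with torch.no_grad():
    for param_s, param_t in zip(student.parameters(), teacher.parameters()):
        # theta_teacher <- m * theta_teacher + (1 - m) * theta_student
        param_t.mul_(m).add_((1 - m) * param_s.detach())
```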
Okay, I want to end this video by showing you this visualize_attention function. Let's quickly run through how it works - I think it's quite a bit simpler than the training - so let's dig in. I'll put a breakpoint here, start debugging, and see what happens. Again we're using vit_small with patch size 8 and no pretrained-weights path; that's not that important. There's an image path - I already pointed it at this bird image - the image size is 480, the output gets dumped into the current directory, and the threshold isn't important for now; mine was set to 0.1, so I deleted it, passed only the image path, applied the settings and restarted. The device is CUDA if you have a GPU, which I do. So we create the model - this is the same line as in the DINO training code - and instantiate the ViT-Small because of this string, with patch size 8 and zero classes, which means it won't have an output head: the final layer just produces the tokens, the CLS token and all of the other tokens, with no linear classifier head on top. These next lines freeze the model parameters, i.e. set them to non-trainable, so the model is used as-is, with no training involved. model.eval() puts it into evaluation mode, which changes the behaviour of layers like dropout or batch norm, and that's why it's important to call eval when you use a model for inference. We push the model to the GPU to make things a bit faster. And because I didn't provide pretrained weights, they get fetched from a URL: the code builds the URL for the repository, grabs the state dict through torch hub, and loads the state dictionary into the model. The download already happened on my machine earlier, so it's served from local storage instead of going out to the cloud again. My image name was incorrect, so I had to fix that in the settings and rerun the code, and here we are again. Because we have an image path, we open and load the image and apply some transformations: we resize it to 480 by 480, convert it to a tensor, and normalize it - and again, these are the magic numbers we already saw, the mean and standard deviation of the ImageNet dataset. After the transform we have a normalized, resized tensor. Then we make sure the spatial size is divisible by the patch size, which is currently 8, cropping the image if necessary, and unsqueeze(0) adds a batch dimension at the front - just some PyTorch bookkeeping. If we look at the shape, it goes from 3 by 480 by 480 to 1 by 3 by 480 by 480 after that line. Then the resolution is divided by the patch size, which tells you how many tokens you have along each spatial dimension: 60 across the width and 60 across the height, because it's a 480 by 480 image. Finally, this call gets the self-attention from the vision transformer: we send the image to the GPU (the model is also on the GPU, so you don't want a device mismatch), feed it into the model, and instead of the tokens it returns the attention coefficients of the last layer. A small sketch of this preprocessing is shown below.
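A small sketch of that preprocessing, assuming PIL and torchvision are available; the image path is a placeholder:

```python
# Sketch: load an image, resize, normalize, crop to a multiple of the patch size,
# and add a batch dimension before feeding it to the model.
from PIL import Image
import torchvision.transforms as T

patch_size, image_size = 8, (480, 480)
transform = T.Compose([
    T.Resize(image_size),
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),  # ImageNet statistics
])

img = transform(Image.open("bird.jpg").convert("RGB"))  # placeholder path
# crop so that width and height are exact multiples of the patch size
w = img.shape[1] - img.shape[1] % patch_size
h = img.shape[2] - img.shape[2] % patch_size
img = img[:, :w, :h].unsqueeze(0)                        # (1, 3, 480, 480)

w_tokens, h_tokens = img.shape[-2] // patch_size, img.shape[-1] // patch_size
print(img.shape, w_tokens, h_tokens)                     # 60 x 60 patch tokens
```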
So, remember we have 60 by 60 tokens, which means 3,600 tokens in total, plus one for the CLS token, so 3,601. The attention matrix we should therefore expect is 3,601 by 3,601, and since there are six attention heads we should get something like 6 by 3,601 by 3,601. That's the shape I'm expecting, and if I step over and look at the attentions variable, the shape is exactly that, with an extra leading 1 for the batch dimension - just a PyTorch idiosyncrasy. Quick recap of how those numbers come about: we have 3,601 tokens; you take a single token, form its query, and do a scaled dot product with all of the keys, which is why you get 3,601 attention coefficients for that single token, and since there are 3,601 tokens you end up with this 3,601 by 3,601 shape per head. Hopefully that's clear. Next, the code extracts the number of heads, which is six, and then indexes the attentions like this: batch element zero, all of the heads, the zeroth token - which is the CLS token - and columns 1 through the end, which gives the attention coefficients the CLS token has over all of the other, let's call them spatial, tokens. So the shape we expect is 6 by 3,600 - let me write it down so it's easier to see - and indeed, that's exactly what we get. I'm going to ignore the threshold part, and then the attentions are reshaped: instead of staying flattened they're returned to the 60 by 60 token grid. Finally there's a nearest-neighbour interpolation: whatever a token's attention coefficient is, it gets replicated over an 8 by 8 block of pixels, because that was the patch size, so that the attention map lines up with the original image and we can actually visualize it - that's why we need the interpolation. After that it's just creating some directories, saving the original image, and then, in this section here, taking the attention map of every single head and saving it as an image. I step over that with F9 and we end up with six images, which I'll show you in a second; I'll ignore the thresholding part again, and that's pretty much it. Here are the attention maps we get: the bird image, with every single head showing a different attention pattern. These are the images you saw in the actual paper - now you know how they're calculated and visualized. Awesome. A minimal sketch of this CLS-attention extraction is given below.
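A minimal sketch of that extraction and upsampling, with a random tensor standing in for the attention returned by the model's last layer:

```python
# Sketch: pull out the CLS token's attention over the patch tokens and upsample it
# back to pixel space so it can be overlaid on the 480x480 image.
import torch
import torch.nn.functional as F

n_heads, w_tokens, h_tokens, patch_size = 6, 60, 60, 8
n_tokens = w_tokens * h_tokens + 1                       # 3600 patch tokens + 1 CLS token
attentions = torch.rand(1, n_heads, n_tokens, n_tokens)  # stand-in for get_last_selfattention(img)

# CLS row: how much the CLS token attends to every spatial token, per head -> (6, 3600)
cls_attn = attentions[0, :, 0, 1:]
cls_attn = cls_attn.reshape(n_heads, w_tokens, h_tokens)  # back to the 60x60 token grid

# replicate each coefficient over an 8x8 pixel block so it lines up with the input image
cls_attn = F.interpolate(cls_attn.unsqueeze(0), scale_factor=patch_size, mode="nearest")[0]
print(cls_attn.shape)  # torch.Size([6, 480, 480]) -> one attention map per head
```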
This was a long video hopefully liked it if you did share it out with a friend subscribe hit the bell icon and Join the district community until next time bye bye
[{"start": 0.0, "end": 3.6, "text": " What's cracking guys in this video we're gonna do something very different"}, {"start": 3.6, "end": 9.120000000000001, "text": " So I'm gonna walk you through the code of the dyno model which I've previously covered in one of my videos"}, {"start": 9.46, "end": 17.46, "text": " From Facebook AI research so we're gonna step through the code and understand how exactly the training works line by line in PyTorch"}, {"start": 17.7, "end": 20.6, "text": " So if you find that useful let me know and also"}, {"start": 21.1, "end": 27.04, "text": " Leave a feedback of what you think about this video so that I can improve it over the for the next for the next time"}, {"start": 27.04, "end": 30.24, "text": " So this is super experimental before I dig into the code"}, {"start": 30.4, "end": 37.12, "text": " I'll do like a super short overview of the paper just one minute overview for those of you who haven't watched the video go"}, {"start": 37.12, "end": 38.92, "text": " Ahead and watch it. I'm gonna link it somewhere here"}, {"start": 38.92, "end": 43.620000000000005, "text": " And so let me do a quick recap of what happens so that you have some context before we dig into the code"}, {"start": 43.980000000000004, "end": 51.36, "text": " Okay, so basically in that video what it what I've explained was I explained how exactly this attention mechanism works"}, {"start": 51.379999999999995, "end": 54.32, "text": " But I explained it on a very high level using these diagrams"}, {"start": 54.32, "end": 59.1, "text": " And so from from this image we get this this this attention map"}, {"start": 59.1, "end": 64.22, "text": " And we're gonna see how exactly that works by the way if you if you want me to share these squiggles of the paper"}, {"start": 64.32, "end": 68.36, "text": " With you let me know down in the comments. 
I'm gonna leave a comment and if I get enough thumbs up"}, {"start": 68.36, "end": 70.96000000000001, "text": " I'm gonna leave I'm gonna think about how I can share these with you"}, {"start": 71.44, "end": 78.58, "text": " Okay, so next up we saw this high level diagram of high down a dyno works so basically we have the student branch"}, {"start": 78.58, "end": 81.8, "text": " We have the teacher branch the teacher branch is updated using this"}, {"start": 81.8, "end": 87.94, "text": " Exponentially moving average from the student like weights, and we also have a stop gradient here"}, {"start": 87.94, "end": 89.94, "text": " Which means we won't be updating the teacher"}, {"start": 90.52, "end": 94.47999999999999, "text": " Finally after applying the softmax we get our district output distributions here"}, {"start": 94.47999999999999, "end": 98.28, "text": " And the whole idea is to make these two distributions be the same"}, {"start": 98.75999999999999, "end": 104.12, "text": " And we do that by across entropy as you can see here, so the p2 times log of p1"}, {"start": 104.92, "end": 110.12, "text": " So the idea is to extract the essence of the image we input images which had different"}, {"start": 110.12, "end": 112.80000000000001, "text": " augmentations as the input to student and teacher and"}, {"start": 113.12, "end": 117.32000000000001, "text": " Hopefully we want to make sure that the distributions are the same because that's the same image"}, {"start": 117.32000000000001, "end": 121.64, "text": " And that's how we extract the kind of the essence of the image, so there will be a high level explanation"}, {"start": 122.12, "end": 127.24000000000001, "text": " So we saw the pseudocode now. We're gonna see the actual code. We saw that we are using"}, {"start": 127.64, "end": 134.68, "text": " So I mentioned augmentations. We're using these global crops, and we're using these local crops, and we are again trying to"}, {"start": 136.16, "end": 138.84, "text": " Make the model output the same distributions for both of these"}, {"start": 138.84, "end": 145.96, "text": " Finally let me see what else I had so explain how K nearest neighbor algorithm works and"}, {"start": 146.52, "end": 147.64000000000001, "text": " Probably in the next video"}, {"start": 147.64000000000001, "end": 154.0, "text": " I'm gonna show you how exactly this looks in code because I think the things I just explained will take like at least an"}, {"start": 154.0, "end": 158.84, "text": " Hour to kind of cover, so let's get back to the code and see how this looks like okay"}, {"start": 158.84, "end": 164.46, "text": " So you've watched the video you have some context from this short recap now. Let's start digging into actual code"}, {"start": 164.46, "end": 170.44, "text": " Okay, so let's put some breakpoint here as I'm going to step through this code and show you exactly how it works"}, {"start": 170.70000000000002, "end": 173.42000000000002, "text": " So let's start with this get args parser function"}, {"start": 173.42000000000002, "end": 180.4, "text": " Okay, so I'm gonna just quickly scheme over these arguments so that you have some context and idea of which variables are gonna be"}, {"start": 180.44, "end": 186.08, "text": " Circulating around the code so first things first. They have the architecture. 
We're gonna be using the vision transformer small"}, {"start": 186.36, "end": 190.12, "text": " They have many different options, but we're gonna focus on that one batch size"}, {"start": 190.12, "end": 195.52, "text": " I had to increase it to 32 because it was 16 by default and it's for some reason crashing my machine"}, {"start": 195.52, "end": 199.68, "text": " I still haven't been debugging the amount of memory. It's eating up on my GPU"}, {"start": 199.68, "end": 203.6, "text": " So yeah, so output dimension is just 65"}, {"start": 204.36, "end": 207.16, "text": " 65,000 and that's the actual output distributions"}, {"start": 207.16, "end": 212.36, "text": " We just saw in the paper overview which we are going to compare and make sure they are the same for different crops, okay?"}, {"start": 213.20000000000002, "end": 218.92000000000002, "text": " Then we have just normalized last layer. I'm gonna skip this momentum teacher so 0.996"}, {"start": 218.92, "end": 226.51999999999998, "text": " That's the at the actual coefficient that's being used in the exponentially moving average when we are forming the teacher weights from the student weights"}, {"start": 227.32, "end": 232.72, "text": " Again better realization. This is not that important warm-up temperature teacher temperature"}, {"start": 233.79999999999998, "end": 239.6, "text": " Number of epochs all of these are not that important. There's just a schedule of how the temperature of the teacher is gonna change"}, {"start": 239.83999999999997, "end": 241.83999999999997, "text": " throughout the training"}, {"start": 242.11999999999998, "end": 247.79999999999998, "text": " Using mix precision training is gonna be on so I obviously won't be focusing on the optimization"}, {"start": 247.8, "end": 249.8, "text": " stuff in this video"}, {"start": 249.92000000000002, "end": 250.84, "text": " so"}, {"start": 250.84, "end": 252.32000000000002, "text": " weight decay"}, {"start": 252.32000000000002, "end": 256.92, "text": " Okay, that's again. 
We have some schedule so they have the end and initial value"}, {"start": 257.48, "end": 263.06, "text": " Clipping gradients so that's also basically when you're updating the weights"}, {"start": 263.24, "end": 268.88, "text": " They are going to use this to kind of clip the norm of the of all of the weights before doing an update"}, {"start": 268.96000000000004, "end": 270.96000000000004, "text": " We're gonna see that in a couple of minutes"}, {"start": 271.08000000000004, "end": 275.6, "text": " So number of epochs hundred by default where or not to freeze the last layer"}, {"start": 275.6, "end": 278.96000000000004, "text": " This is just some trick that makes the training a bit easier"}, {"start": 279.72, "end": 281.12, "text": " learning rate"}, {"start": 281.12, "end": 286.04, "text": " So they basically have a linear warm-up first and then they have some kind of I think cosine schedule"}, {"start": 286.44, "end": 293.72, "text": " Until the end of the training so warm-up epochs just tells you across how many epochs you're you're you're taking to do that linear"}, {"start": 293.72, "end": 295.20000000000005, "text": " warm-up thing"}, {"start": 295.20000000000005, "end": 301.18, "text": " minimum learning rate that's the pretty much learning rate at the end of the of the training"}, {"start": 301.64000000000004, "end": 303.64000000000004, "text": " Optimizer they're using Adam"}, {"start": 303.64, "end": 310.47999999999996, "text": " Wide Adam so that's because they have distributed training this dino model is trained on 8 or 16 GPUs originally"}, {"start": 310.47999999999996, "end": 312.32, "text": " But I have only a single GPU here"}, {"start": 312.32, "end": 319.62, "text": " So that's that's it drop path rate is just an argument that is going to modify the the vision transformer architecture"}, {"start": 320.2, "end": 324.44, "text": " Then we have the global crop scales. It's gonna be from 0 4 to 1"}, {"start": 325.08, "end": 330.8, "text": " In contrast that to so I'm gonna quickly get back to that one, but we have eight local crops and we have"}, {"start": 330.8, "end": 337.0, "text": " And we have so local crop scale is gonna be from 0 0 5 to 0.4"}, {"start": 337.0, "end": 339.36, "text": " And that's the same number as here. 
So that means"}, {"start": 340.0, "end": 341.92, "text": " so that means that"}, {"start": 341.92, "end": 344.24, "text": " This this for the small crops"}, {"start": 344.24, "end": 349.74, "text": " We're gonna take 5% to 40% of the image size and then we're gonna resize that back"}, {"start": 349.76, "end": 356.0, "text": " But like for the big crops we're going to take from 40 to 100% of the image and then we're gonna do the resizing to"}, {"start": 356.0, "end": 361.2, "text": " 24 as for the small crops they're going to be resized to 96 96 pixels"}, {"start": 361.76, "end": 365.86, "text": " Again some details, but like yeah, this video is gonna be about a lot of details"}, {"start": 366.44, "end": 373.3, "text": " We're gonna use image net the thing I did is I actually downloaded a small subset of image net because this thing is huge"}, {"start": 373.3, "end": 375.3, "text": " more than like 150 gigabytes, I think"}, {"start": 376.04, "end": 377.96, "text": " and"}, {"start": 377.96, "end": 380.76, "text": " So from fast AI basically it's called image net"}, {"start": 380.76, "end": 387.36, "text": " Well, I don't know how to pronounce it with two T's and aside from me having to download image net that small subset of image net"}, {"start": 387.36, "end": 391.64, "text": " I also had to do a couple of tricks to make this work on Windows because I'm running on Windows machine"}, {"start": 391.64, "end": 396.02, "text": " So if you want to know about those nitty-gritty details and small bugs and problems"}, {"start": 396.02, "end": 402.96, "text": " I had to solve do join the discord server actually had a thread where as I was in analyzing this code over the past two days"}, {"start": 402.96, "end": 408.12, "text": " I was kind of writing down the steps I took and the problems I encountered"}, {"start": 408.12, "end": 414.04, "text": " Okay, so finally as I said image net where I'm going to save the checkpoints and logs"}, {"start": 414.84000000000003, "end": 416.84000000000003, "text": " seed seed basically serves to"}, {"start": 417.16, "end": 424.36, "text": " Make this code reproducible because otherwise there is a lot of randomness and every time you run this code is gonna have different output and different results"}, {"start": 424.36, "end": 426.36, "text": " And the images are gonna be loaded in different order"}, {"start": 426.36, "end": 431.88, "text": " It makes it easy for me to have some consistency between different runs and it's easier to debug the code"}, {"start": 431.88, "end": 436.68, "text": " So a number of workers not that important all of this is just some distributed settings and"}, {"start": 436.68, "end": 438.68, "text": " That's not that vital. 
Okay"}, {"start": 438.68, "end": 445.72, "text": " We parse the arguments we create the output directory where we are going to dump the logs the log files and the checkpoints"}, {"start": 445.72, "end": 448.76, "text": " And finally, let's go to the main function and that's trained, you know"}, {"start": 449.8, "end": 455.96000000000004, "text": " So again, this is the in it distributed mode is just a function that's setting up certain settings"}, {"start": 455.96000000000004, "end": 460.76, "text": " When you're doing distributed training on 8 or 16 GPUs again, I'm using"}, {"start": 461.4, "end": 464.84000000000003, "text": " Only one GPU I had to do some modification because of Windows"}, {"start": 464.84, "end": 468.59999999999997, "text": " But yeah, do check out this Discord server for that"}, {"start": 469.32, "end": 473.08, "text": " So we are fixing the random seeds. They're just printing I'm gonna"}, {"start": 473.64, "end": 475.64, "text": " Rise up the console here"}, {"start": 475.64, "end": 477.64, "text": " So they're basically just printing the the"}, {"start": 478.67999999999995, "end": 483.96, "text": " Shock code the the hash of this commit with of the code. I'm currently running"}, {"start": 484.67999999999995, "end": 486.67999999999995, "text": " Then they're just printing some arguments"}, {"start": 487.23999999999995, "end": 491.23999999999995, "text": " Cudian and benchmark set to true is just a certain optimization again"}, {"start": 491.24, "end": 497.32, "text": " That makes the code run faster under certain assumptions again. Not that important. Okay"}, {"start": 498.04, "end": 504.84000000000003, "text": " This is fairly important data augmentation dino. I'm gonna quickly walk you through this code and then I'm gonna step over it"}, {"start": 505.56, "end": 507.56, "text": " in the execution loop"}, {"start": 507.88, "end": 512.28, "text": " But basically as you can see here, they have flip and color jitter"}, {"start": 513.0, "end": 518.04, "text": " Transform so that's just a compose. That means we are composing a couple of different transforms and"}, {"start": 518.04, "end": 521.24, "text": " These are organized so that they do the flipping and the color"}, {"start": 521.88, "end": 527.64, "text": " Like photometric augmentations. So we have as you can see here random horizontal flip with 50 percent"}, {"start": 528.1999999999999, "end": 530.1999999999999, "text": " With 50 chance, we're gonna flip it"}, {"start": 530.68, "end": 537.88, "text": " Either flip it or not flip it then we have the we're gonna randomly apply with certain probability like 80 percent"}, {"start": 538.52, "end": 543.56, "text": " This color jitter which is going to change the brightness the contrast the saturation the hue"}, {"start": 543.56, "end": 549.7199999999999, "text": " So basically our photometric augmentations and finally random grayscale is with 20 chance. We're gonna"}, {"start": 550.52, "end": 551.8, "text": " kind of"}, {"start": 551.8, "end": 553.8, "text": " Apply the the grayscale transformation"}, {"start": 554.76, "end": 560.92, "text": " The then we have normalization. This is your standard stuff. 
We are converting the the images to pi torch tensor"}, {"start": 561.4799999999999, "end": 565.7199999999999, "text": " We are normalizing them and don't get confused by these magic numbers"}, {"start": 566.04, "end": 570.04, "text": " Even though it'd be really nice if the author has actually extracted this into a constant"}, {"start": 570.04, "end": 574.4399999999999, "text": " Because they've been using this all around the place. So this is just your image image net"}, {"start": 575.0, "end": 578.52, "text": " Statistics so the mean and the standard deviation of the image net images"}, {"start": 579.0, "end": 579.4, "text": " Okay"}, {"start": 579.4, "end": 583.56, "text": " So for the fun part this random resize crop is the thing that does the magic of dino"}, {"start": 583.9599999999999, "end": 586.68, "text": " So global crop scale is again the thing I just mentioned"}, {"start": 586.68, "end": 589.56, "text": " So from we're gonna take 40 to 100 percent of the image"}, {"start": 589.56, "end": 594.12, "text": " We're gonna cut it there and then we're gonna resize it back to 224 by 224"}, {"start": 594.76, "end": 596.76, "text": " Using bicubic interpolation"}, {"start": 596.76, "end": 600.4399999999999, "text": " 224 by 224 using bicubic interpolation method"}, {"start": 600.92, "end": 607.0, "text": " Then they're gonna apply some photometric augmentations and the flipping the gaussian blur and we're gonna normalize the images"}, {"start": 607.88, "end": 609.88, "text": " We have a second global"}, {"start": 610.2, "end": 616.12, "text": " Crop because if you remember the details from the paper, we have two global crops and we have eight local crops"}, {"start": 616.12, "end": 619.08, "text": " Although obviously that's you can you can experiment with that"}, {"start": 619.8, "end": 624.52, "text": " So again, a random resize crop everything is the same. They just apply solarization here"}, {"start": 624.52, "end": 629.16, "text": " Uh, if you're not familiar with solarization, i'm just gonna put the image on the screen so you can see a visual example"}, {"start": 629.24, "end": 631.56, "text": " I guess the image is going to explain that much faster than I will"}, {"start": 632.28, "end": 637.72, "text": " Finally, we have the the local crops. So that means we are applying the local crop scale"}, {"start": 637.72, "end": 644.84, "text": " So that means we are now using that 1.5 percent to 40 percent of the image and then we resize that back into 96 by 96 images"}, {"start": 644.84, "end": 651.64, "text": " So these crops are gonna these images that came from the smaller crop of the image are gonna be of smaller resolution"}, {"start": 651.64, "end": 656.84, "text": " And 96 by 96 again flipping and color rendering a gaussian blur normalization"}, {"start": 657.24, "end": 659.24, "text": " Standard stuff so that's it"}, {"start": 659.64, "end": 664.04, "text": " So basically what's gonna happen is this data augmentation dino"}, {"start": 664.4399999999999, "end": 668.4399999999999, "text": " uh is gonna be passed to the data set in pytorch and"}, {"start": 668.68, "end": 672.92, "text": " Then every time you try to fetch an image from the data set it's gonna apply"}, {"start": 673.08, "end": 677.0, "text": " It's gonna append one crop here one global crop the second global crop"}, {"start": 677.0, "end": 683.08, "text": " And then we're gonna have iterating a for loop. 
So eight times and do these small local crops"}, {"start": 683.56, "end": 686.84, "text": " And then we're gonna return like a list of 10 crops in total"}, {"start": 687.4, "end": 690.28, "text": " This is just on code. I wrote i'm gonna experiment with that a bit later"}, {"start": 690.36, "end": 696.06, "text": " But let me get back to the actual line where we stopped. Okay, so that was the that was the uh data augmentation"}, {"start": 696.76, "end": 702.12, "text": " Of dino. I think that was pretty important detail by the way, i'll be adding time stamps to all of these logical sections"}, {"start": 702.12, "end": 708.76, "text": " So feel free to skip if you understand some part you can just skip to the part you you want to understand better"}, {"start": 709.5600000000001, "end": 716.04, "text": " Okay, continuing on we are creating this data set. Uh, so it's gonna link to the as you can see here image net train"}, {"start": 716.58, "end": 719.96, "text": " Uh subdirectory and we're gonna pass the transform I already mentioned"}, {"start": 720.28, "end": 726.84, "text": " Which means every time you fetch an image from this data set is gonna apply those 10 transformations and we're gonna get those 10 crops back"}, {"start": 727.32, "end": 730.52, "text": " Uh distributed sampler. So just ignore the distributed part"}, {"start": 730.52, "end": 734.76, "text": " This is just going to shuffle the images once we input that into the data loader"}, {"start": 734.76, "end": 737.88, "text": " So as we can see we have a data set here. We have the sampler"}, {"start": 738.12, "end": 743.16, "text": " We have I think 64 images per gpu and since I have a single gpu that's going to be"}, {"start": 743.88, "end": 749.64, "text": " 64 images in total a number of workers just optimization stuff peel memory as well. That just kind of helps you"}, {"start": 751.3199999999999, "end": 757.96, "text": " Basically, uh transferred images from the cpu to the gpu faster because there's this uh dedicated slot of the memory"}, {"start": 757.96, "end": 765.08, "text": " So i'm currently going through every single detail, uh, because i'm experimenting but like, uh, i'm later on i'm gonna abstract many details"}, {"start": 765.1600000000001, "end": 770.36, "text": " So let me know whether you actually want everything explained or you just want me to cover the main idea"}, {"start": 770.6800000000001, "end": 775.0, "text": " Uh, this is experimental video. So, uh, please give me that feedback. And yeah, uh"}, {"start": 775.5400000000001, "end": 781.88, "text": " droplast means because we have a data set of I I think uh, like I have around 9 000 something images"}, {"start": 782.12, "end": 784.9200000000001, "text": " Because every batch has 64 images as we saw here"}, {"start": 784.92, "end": 792.28, "text": " Uh, that means that probably the last batch won't have 64 because unless the the number of images in our training data set is divisible"}, {"start": 792.28, "end": 798.1999999999999, "text": " But 64 we're gonna have the last batch being less than 64 and this droplast just means we're gonna drop"}, {"start": 798.4399999999999, "end": 802.5999999999999, "text": " The last batch because that can cause some issues probably because of the shape being different"}, {"start": 803.3199999999999, "end": 809.24, "text": " So that's the reason you sometimes see this droplast set to true. 
Okay data loaded there are uh, as you can see here"}, {"start": 810.5, "end": 813.64, "text": " 9469 images, okay"}, {"start": 813.64, "end": 815.24, "text": " um"}, {"start": 815.24, "end": 820.6, "text": " This is just some bureaucracy stuff. Uh, they did because they used to to name these models"}, {"start": 821.16, "end": 828.04, "text": " Date now, uh, they're just kind of replacing them with vat, but the architecture is already vat small. So nothing's gonna happen there"}, {"start": 828.52, "end": 829.8, "text": " um, so"}, {"start": 829.8, "end": 831.48, "text": " basically"}, {"start": 831.48, "end": 837.8, "text": " This says that so vat is just uh, like a shorthand notation for the all of the vision transformer models"}, {"start": 838.2, "end": 840.68, "text": " That exist in this vision transformer pi file"}, {"start": 840.68, "end": 846.8399999999999, "text": " So we're gonna because we are vision transformer we're gonna enter this if statement and we're going to as you can see here"}, {"start": 846.8399999999999, "end": 852.4399999999999, "text": " We're gonna form so this statement here may confuse you. So this is just a list of all of the different"}, {"start": 853.16, "end": 860.3599999999999, "text": " like uh functions from this vision transformer file and this is just going to instantiate the model and we're going to instantiate it with the"}, {"start": 860.76, "end": 866.04, "text": " 32 by 32 so the patch will be 32 by 32 pixels and we're gonna use this"}, {"start": 866.04, "end": 871.24, "text": " Uh drop path rate of 0.1. So the quickest way you can understand how this thing"}, {"start": 871.64, "end": 877.42, "text": " What this thing does is just put a print statement in there do this, uh, like just print the dictionary"}, {"start": 878.1999999999999, "end": 880.5999999999999, "text": " Uh set a breakpoint there stop the code"}, {"start": 881.54, "end": 884.4399999999999, "text": " Re-execute again and we're gonna see what happens there. So i'm gonna"}, {"start": 885.0799999999999, "end": 889.9599999999999, "text": " Just skip these skip these i'm gonna get here. I'm gonna open the console"}, {"start": 890.4399999999999, "end": 892.52, "text": " And after I print this you're gonna see"}, {"start": 892.52, "end": 896.04, "text": " Uh, whoops bunch of stuff and as you can see here"}, {"start": 896.12, "end": 903.56, "text": " It's actually printing this whole vision transformer pi file and the thing is uh, it's some of the some of the functions are organized"}, {"start": 903.64, "end": 908.36, "text": " Into dictionaries. So this is just your python stuff. So that means we're gonna have somewhere in here"}, {"start": 908.6, "end": 913.64, "text": " We're gonna have uh, like these functions. Let me let me jump to that file. So we're gonna have somewhere in there"}, {"start": 913.64, "end": 915.64, "text": " We're gonna have this v80 small"}, {"start": 915.96, "end": 921.16, "text": " Function and then we're gonna pass the patch size and uh additional we are passing that drop path"}, {"start": 921.16, "end": 926.76, "text": " Uh coefficient so that's the the thing that happens there. So basically we're instantiating the v80 small model here"}, {"start": 927.3199999999999, "end": 931.16, "text": " Uh, then we're doing the same thing for the teacher network. 
So that was a student network"}, {"start": 931.16, "end": 933.9599999999999, "text": " We have the teacher network and since we're using uh"}, {"start": 934.8399999999999, "end": 941.8, "text": " Vision transformer small the embedding dimension is going to be 384. Uh, that's just the hyper parameter they they kind of"}, {"start": 943.24, "end": 945.48, "text": " Decided to use okay"}, {"start": 945.48, "end": 949.9599999999999, "text": " So they have in these other branches they're using excite and some other stuff we don't care about"}, {"start": 949.96, "end": 951.4000000000001, "text": " so"}, {"start": 951.4000000000001, "end": 957.72, "text": " Now this is another important part. Basically, uh, what they do is they they wrap the student network"}, {"start": 957.72, "end": 963.88, "text": " So this is the vision transformer and they put this dino head on top of it. So dino head is just a simple mlp"}, {"start": 964.44, "end": 969.08, "text": " Uh with some additional tweaks and i'm gonna jump into those details right now"}, {"start": 969.4000000000001, "end": 973.5600000000001, "text": " So the the main point is they are forming uh, this this mlp"}, {"start": 974.2800000000001, "end": 979.0, "text": " And since number of layers is three by default, that means the mlp is gonna have three linear layers"}, {"start": 979.0, "end": 981.0, "text": " that means we're gonna uh"}, {"start": 981.0, "end": 986.44, "text": " Execute the else branch and we're gonna form the mlp that just consists out of as you can see here"}, {"start": 986.6, "end": 992.12, "text": " We have some potentially some batch norms potentially some uh, like gelu, uh, like uh"}, {"start": 992.92, "end": 997.64, "text": " Activation functions, uh, but most of all most importantly we have this linear function"}, {"start": 998.12, "end": 1000.12, "text": " Uh, so basically it's an mlp"}, {"start": 1000.68, "end": 1005.08, "text": " And then we just initialize the way it's using this self apply that's an inbuilt"}, {"start": 1005.08, "end": 1008.44, "text": " Function that's a part of this nn module"}, {"start": 1009.4000000000001, "end": 1014.6, "text": " class and then we're gonna form the last layer, uh by creating yet another"}, {"start": 1015.0, "end": 1020.76, "text": " Linear layer and the linear layer is gonna have so the output dimension here is like 65 000 something"}, {"start": 1021.0, "end": 1027.8, "text": " So that's the actual output of the whole uh pipeline and then we're gonna compare those outputs by cross entropy later on"}, {"start": 1028.04, "end": 1032.92, "text": " So that's the last layer. Uh, the interesting part here is this weight norm is just a hook function"}, {"start": 1032.92, "end": 1038.04, "text": " Uh what it does is once you put this weight g to all of ones"}, {"start": 1038.3600000000001, "end": 1041.72, "text": " That means they are going to normalize the actual linear layer"}, {"start": 1041.88, "end": 1047.48, "text": " So it's just going to make sure that the l2 norm of the weights of that linear layer are equal to one"}, {"start": 1047.72, "end": 1052.52, "text": " That's basically it and if they decide to do this normalization of the last layer if this is set to true"}, {"start": 1052.92, "end": 1059.4, "text": " Then as you can see here that those gradients are going to be set to false which means the following during the training"}, {"start": 1059.4, "end": 1065.8000000000002, "text": " Those uh, those weights will not will not be updated. 
Which means that the l2 norm is going to be kept at one"}, {"start": 1066.3600000000001, "end": 1071.0, "text": " Uh, which means we're normalizing the last layer hence the norm last layer. I think it's pretty self-explanatory"}, {"start": 1071.5600000000002, "end": 1077.5600000000002, "text": " So again simple mlp on top of it. We have the last layer which has the 65 000, uh something"}, {"start": 1078.0400000000002, "end": 1084.3600000000001, "text": " Uh number of neurons as the output and that's it. Optionally we have these uh, these uh, like normalization stuff"}, {"start": 1084.76, "end": 1087.3200000000002, "text": " Okay, getting back here. So that's the dino head"}, {"start": 1087.32, "end": 1092.12, "text": " We stick it on top of the student and one additional detail which we need to go through"}, {"start": 1092.12, "end": 1098.04, "text": " This is this multi-crop wrapper again. I have timestamps just jump if you don't if you're not interested into this section"}, {"start": 1098.6, "end": 1100.6, "text": " Uh, so the multi-crop wrapper"}, {"start": 1101.3999999999999, "end": 1103.0, "text": " What it does is the following"}, {"start": 1103.0, "end": 1107.72, "text": " Um, you know, i'm actually just gonna put a break point here and then after we we get to this part"}, {"start": 1107.72, "end": 1111.8, "text": " We'll understand what multi-crop wrapper does but basically on a high level for now"}, {"start": 1111.8, "end": 1116.36, "text": " it's basically just gonna deal with those crops passed them through the the pipeline and then"}, {"start": 1116.36, "end": 1121.08, "text": " Outcomes the result from the student the final the final output and the final output from the teacher"}, {"start": 1121.3999999999999, "end": 1126.52, "text": " We'll see that in a in a once we we hit that part of the code. Okay, uh getting back here"}, {"start": 1126.6, "end": 1129.0, "text": " So that was how we formed the student part"}, {"start": 1129.3999999999999, "end": 1134.6799999999998, "text": " So again, it's a multi-crop wrapper around student which is vision transformer and dino head which is mlp"}, {"start": 1135.08, "end": 1140.28, "text": " Same thing goes for for the teacher. We have teacher we have dino head and that's it"}, {"start": 1140.6, "end": 1144.84, "text": " Some minor differences like whether we are not we are normalizing the last layer or not"}, {"start": 1144.84, "end": 1147.6399999999999, "text": " But that's it and we are not using the bet normalization"}, {"start": 1147.6399999999999, "end": 1154.04, "text": " So this is uh, there is no better normalization in this whole pipeline, which is something we we strive for"}, {"start": 1155.08, "end": 1156.12, "text": " okay"}, {"start": 1156.12, "end": 1162.84, "text": " Uh, we're just pushing the the student and teacher to the gpu. 
That's this dot kuda that that's what it means and finally"}, {"start": 1163.3999999999999, "end": 1166.6, "text": " Because as I said, we don't have any any batch norms"}, {"start": 1167.24, "end": 1173.8, "text": " That means we're gonna enter the else branch and that means that the teacher network will not be using this distributed data parallel"}, {"start": 1173.8, "end": 1178.52, "text": " Which is just a wrapper that makes it easy to train these models in a distributed fashion"}, {"start": 1179.0, "end": 1184.68, "text": " Uh, okay continuing on we wrap the student network, uh into this distributed data parallel"}, {"start": 1184.68, "end": 1187.08, "text": " But but they only have a single gpu here"}, {"start": 1187.08, "end": 1193.32, "text": " So the index zero means the default gpu you have on your system. I have a rtx 2080. So that's it"}, {"start": 1194.12, "end": 1199.1599999999999, "text": " Then we are loading the as you can see here. We are loading the state dictionary directly from the student"}, {"start": 1199.16, "end": 1204.1200000000001, "text": " Uh module so and they say here nicely teacher and student start with the same weights"}, {"start": 1204.52, "end": 1210.68, "text": " So initially we're gonna start with the same weights later on we're gonna use this exponentially moving average to update the teacher weights"}, {"start": 1211.16, "end": 1213.96, "text": " Uh, finally as I said, there is a stop gradient"}, {"start": 1214.1200000000001, "end": 1219.3200000000002, "text": " Uh, if you saw the the diagram there is a stop gradient and this is how you implement the stop gradient in pytorch"}, {"start": 1219.48, "end": 1221.48, "text": " you basically just iterate through all of the"}, {"start": 1222.28, "end": 1226.8400000000001, "text": " Like parameters and you said there are gradients to false. So that means you won't be trainable"}, {"start": 1226.84, "end": 1231.1599999999999, "text": " I'm gonna skip all of this just f9 and that's it"}, {"start": 1231.32, "end": 1237.6399999999999, "text": " So we we have an output here, uh, the student and teacher are built. They're both vision transformer small networks"}, {"start": 1237.72, "end": 1243.0, "text": " Okay, this part is fairly important and i'm gonna probably uh step through it a bit later"}, {"start": 1243.32, "end": 1248.12, "text": " But let me just give you a glimpse. So i'm gonna put uh, like a like a breakpoint here in the forward function"}, {"start": 1248.6799999999998, "end": 1254.36, "text": " And as you can see here, it's just accumulating the parameters that we passed in the init function"}, {"start": 1254.36, "end": 1263.1599999999999, "text": " Uh, it's just registering this center vector. So if register buffer what it does it makes sure that the this this this vector here"}, {"start": 1263.7199999999998, "end": 1270.9199999999998, "text": " Uh is a part of the state dictionary even though it's not trainable. So this center vector will not be trainable"}, {"start": 1271.1599999999999, "end": 1277.56, "text": " We're not going to update it using gradients. We're going to update it using the exponentially moving average again"}, {"start": 1277.7199999999998, "end": 1280.4399999999998, "text": " So that's the reason they're using register buffer. 
So it's not trainable"}, {"start": 1280.44, "end": 1284.52, "text": " So it's not a model parameter in the pytorch, uh, like uh terminology"}, {"start": 1285.0, "end": 1288.76, "text": " And but we still need it in the actual state dictionary of the model"}, {"start": 1289.24, "end": 1295.72, "text": " Um, okay. Finally, they have some teacher temperature schedule. So with the current parameters, this is actually going to return 0.04"}, {"start": 1296.68, "end": 1302.6000000000001, "text": " Throughout all of this all of the epochs, but what it's it's kind of envisioned to have a linear ramp up here"}, {"start": 1302.8400000000001, "end": 1305.3200000000002, "text": " Although that's not that important. Let's get back to the dyno loss"}, {"start": 1305.32, "end": 1311.6399999999999, "text": " Okay, i'm gonna step over this i'm gonna later once we are actually using the dyno loss i'm gonna step through the forward function"}, {"start": 1312.52, "end": 1314.04, "text": " um"}, {"start": 1314.04, "end": 1316.04, "text": " This just not important"}, {"start": 1317.1599999999999, "end": 1323.24, "text": " Whether we are regularizing or not regularizing certain parameters of the student network. We're gonna create the atom"}, {"start": 1323.8799999999999, "end": 1325.8799999999999, "text": " wide optimizer"}, {"start": 1325.8799999999999, "end": 1329.24, "text": " And finally because we're using the mixed precision training"}, {"start": 1329.24, "end": 1336.6, "text": " We're just going to wrap this create this fp16. So that means we're using 16 bits training these models"}, {"start": 1337.16, "end": 1342.04, "text": " Okay here they're just creating uh schedulers for the learning rate. So it's going to be a cosine scheduler"}, {"start": 1342.84, "end": 1348.76, "text": " Um, then we have a weight decay schedule against cosine scheduler and finally momentum scheduler"}, {"start": 1349.56, "end": 1354.84, "text": " Basically, uh, that's the thing that is used to update the teacher"}, {"start": 1354.84, "end": 1358.76, "text": " Uh weights using the student weights. So that's that parameter"}, {"start": 1359.1599999999999, "end": 1364.36, "text": " Uh, and again, it's just a cosine scheduler. Okay, so that's it. I think we're we're good to go"}, {"start": 1364.6799999999998, "end": 1366.6799999999998, "text": " We're getting closer to the actual training loop"}, {"start": 1367.1599999999999, "end": 1373.0, "text": " Uh, this already took a lot of time. So uh, we're just going to this this restart from checkpoint"}, {"start": 1373.0, "end": 1379.32, "text": " I think we're gonna skip it because we don't have any checkpoints for now. So just put a breakpoint there. We're gonna enter it"}, {"start": 1380.4399999999998, "end": 1382.76, "text": " Okay, let me step over step over over"}, {"start": 1382.76, "end": 1389.32, "text": " Okay, let me step over step over. Okay, and we're gonna just return from that function since we have don't have any checkpoints"}, {"start": 1389.56, "end": 1392.28, "text": " Okay, so that's just uh dummy for now. Um"}, {"start": 1392.92, "end": 1398.28, "text": " Start epoch is gonna start with zero. We're just uh logging the starting time just some bureaucracy stuff"}, {"start": 1398.44, "end": 1401.8799999999999, "text": " And finally here is the main loop. Okay, we are here"}, {"start": 1402.36, "end": 1406.76, "text": " so we're starting from the epoch zero we're gonna run for 100 epochs and"}, {"start": 1406.76, "end": 1412.68, "text": " Uh ignore this line. 
This is just something that has to do with this distributed sampler. Otherwise, you wouldn't have this line"}, {"start": 1413.24, "end": 1417.08, "text": " Um, so this is the actual meat of the whole whole script"}, {"start": 1417.32, "end": 1423.64, "text": " So it's training one epoch is gonna train the the dino for one epoch and then later on we're just gonna have some some"}, {"start": 1424.12, "end": 1429.72, "text": " Logging nothing nothing very fancy. So i'm gonna step into this f7"}, {"start": 1429.72, "end": 1436.1200000000001, "text": " And let's see how this function looks like. Okay, so metric logger not an important again"}, {"start": 1437.0, "end": 1442.52, "text": " Okay, this is where the fun starts. So we have the data loader and we're going to fetch"}, {"start": 1443.16, "end": 1449.72, "text": " Like this images so this images is going to be a list of 10 elements and each element is going to be a batch of images"}, {"start": 1450.04, "end": 1452.44, "text": " So each element is going to be 64 images"}, {"start": 1452.68, "end": 1457.4, "text": " So that means so because 10 elements because as you remember we have two global crops"}, {"start": 1457.4, "end": 1462.0400000000002, "text": " We have eight local crops. So that means if we take like the zeroth element of this images list"}, {"start": 1462.3600000000001, "end": 1468.2800000000002, "text": " That's those are the global crops and we have 64 images and if we take uh, like the second index"}, {"start": 1468.44, "end": 1470.44, "text": " That means that's the first local crop"}, {"start": 1471.0, "end": 1474.92, "text": " Like set of images a batch of local crops. And so as you can see here"}, {"start": 1474.92, "end": 1477.88, "text": " I just pre-wrote some code here to make this a bit faster"}, {"start": 1478.2800000000002, "end": 1481.8000000000002, "text": " I'm gonna stop the code execution and i'm gonna rerun it again"}, {"start": 1482.0400000000002, "end": 1484.44, "text": " I'll I'll need to make another tweak to make this work"}, {"start": 1484.44, "end": 1487.8, "text": " So that's that means i'm gonna have to add additional crop"}, {"start": 1488.3600000000001, "end": 1493.8, "text": " So what this does is we are just going to preserve the image as it is without doing any any crops"}, {"start": 1494.1200000000001, "end": 1499.3200000000002, "text": " So that's just a reference image and that was that's going to be prepended as the first crop"}, {"start": 1499.3200000000002, "end": 1504.92, "text": " So now we have 11 crops and hopefully now this is going to work. Let me let me let me run this"}, {"start": 1505.96, "end": 1509.16, "text": " Uh, let me go back to the code where we were. 
Okay, here we are"}, {"start": 1509.16, "end": 1514.1200000000001, "text": " So what i'm doing here is as you remember I prepended this original image"}, {"start": 1514.76, "end": 1520.76, "text": " So now we have 11 elements instead of 10 and this zero will just be indexing the original images"}, {"start": 1521.0, "end": 1524.3600000000001, "text": " And i'm just gonna fetch like one image out of those 64 images"}, {"start": 1524.76, "end": 1531.48, "text": " I'm gonna convert those to non-pi do some move axes because the channel is actually channel first and now we want to have channel last"}, {"start": 1531.8000000000002, "end": 1533.96, "text": " For for the plot function of matplotlib"}, {"start": 1534.52, "end": 1538.2, "text": " So second line what it does is it will extract the global crop"}, {"start": 1538.2, "end": 1544.76, "text": " So remember now the the the index one and zero the index one and two are actually referencing"}, {"start": 1545.64, "end": 1549.0800000000002, "text": " The global crops and again, i'm gonna take just one single image"}, {"start": 1549.64, "end": 1555.56, "text": " Finally, we have the local crop. So index three through ten are all going to contain local crops"}, {"start": 1555.56, "end": 1560.76, "text": " So i'm just gonna take one arbitrary local crop and finally, let's let's plot those results. So"}, {"start": 1560.76, "end": 1568.6, "text": " Um, here you can see the the original image, uh, just some dude playing golf"}, {"start": 1569.08, "end": 1575.48, "text": " So i'm gonna now exit that remember that image and now we're gonna see a global crop as you can see it's uh,"}, {"start": 1575.8, "end": 1580.2, "text": " There is a crop but there is also severe solarization and other photometric effects"}, {"start": 1580.28, "end": 1583.4, "text": " So you can't really see it perfectly, but let's see the the local crop"}, {"start": 1583.8799999999999, "end": 1587.24, "text": " So the local crop is as you can see now, it's totally clear"}, {"start": 1587.24, "end": 1592.68, "text": " We're just taking a single small crop and we're resizing it into this image of 96 by 96"}, {"start": 1592.92, "end": 1595.64, "text": " And we also apply some augmentations here. So that's it"}, {"start": 1595.72, "end": 1601.88, "text": " That's the now you have an understanding of how images look like and I encourage you when you're debugging and understanding the code"}, {"start": 1602.36, "end": 1608.2, "text": " Do make these visualizations as much as possible because that helps you understand what's going on in your code"}, {"start": 1608.84, "end": 1614.92, "text": " That plus print statement, I guess that's pretty much how I debug the code. Although there are sometimes more elegant ways to do it. Okay"}, {"start": 1614.92, "end": 1616.76, "text": " so"}, {"start": 1616.76, "end": 1619.3200000000002, "text": " Um, let's let's continue here. So okay"}, {"start": 1619.4, "end": 1622.92, "text": " This line is just forming a global step depending on the epoch we are in"}, {"start": 1623.4, "end": 1627.88, "text": " Depending on the number of images in the data set depending on the current iteration inside of this"}, {"start": 1628.76, "end": 1632.1200000000001, "text": " Inside of this loop. Okay, so that's just global step and"}, {"start": 1633.0, "end": 1639.5600000000002, "text": " Then what they're going to do here is depending on the schedule current schedule. 
We're gonna update the way decay and learning rate"}, {"start": 1639.96, "end": 1643.5600000000002, "text": " Uh, depending on these param groups, it's not that important. I'm just gonna skip that part"}, {"start": 1643.56, "end": 1647.72, "text": " So we're just adapting the way decay and learning rates depending on the current step"}, {"start": 1647.72, "end": 1652.84, "text": " Okay, so skipping that we're putting the images on the gpu. So that's this step"}, {"start": 1653.8, "end": 1656.04, "text": " Let me just whoops. Let me just skip over that"}, {"start": 1657.3999999999999, "end": 1665.08, "text": " Uh, and finally this part, uh is just a context context manager because we have this mixed precision training"}, {"start": 1665.08, "end": 1670.52, "text": " And it's not none. So we're gonna enter this but now it's not going to work because I prepended all of this"}, {"start": 1670.52, "end": 1672.52, "text": " I'm just gonna quickly comment this out"}, {"start": 1672.52, "end": 1678.28, "text": " Uh and comment this out and i'm gonna restart the training and get back to you. Okay. Here we are"}, {"start": 1678.36, "end": 1680.84, "text": " Uh, so this is as I said very important step"}, {"start": 1681.0, "end": 1687.56, "text": " We're taking the global crops and we are passing only the global crops through the teacher to get the teacher output and let's see"}, {"start": 1687.8799999999999, "end": 1691.96, "text": " So analyzing the shape of the output is super important. I can't stress that enough"}, {"start": 1692.36, "end": 1697.4, "text": " So let's now see whoops. I have uh, okay, so we have a multi crop wrapper"}, {"start": 1697.56, "end": 1700.76, "text": " Okay, this is the part we we want to understand a little bit better"}, {"start": 1700.76, "end": 1703.0, "text": " um"}, {"start": 1703.0, "end": 1704.28, "text": " so x"}, {"start": 1704.28, "end": 1710.04, "text": " Is just a list of two elements again, uh, each contains a batch of crops"}, {"start": 1710.52, "end": 1713.96, "text": " These are global crops. So because it's a list we're gonna skip this part"}, {"start": 1714.04, "end": 1720.04, "text": " So this is just a fancy way of separating the global from the local crops and it doesn't make much sense for the teacher network"}, {"start": 1720.52, "end": 1725.0, "text": " Because it's only using global crops. So we're just gonna have number two here. So as you can see here"}, {"start": 1725.0, "end": 1730.36, "text": " Uh index crops just has number two. Uh, this is going to make more sense for the for the student network"}, {"start": 1730.68, "end": 1732.68, "text": " so now what we do is"}, {"start": 1733.48, "end": 1738.76, "text": " We iterate here and we're gonna take as you can see from zero to two"}, {"start": 1739.08, "end": 1742.68, "text": " So that means we're gonna concatenate all of those, uh global crops"}, {"start": 1742.68, "end": 1747.8, "text": " We have 64 images in one in one in one element. We have 64 images in the second one"}, {"start": 1747.96, "end": 1750.92, "text": " We're just going to concatenate them and now we have 128"}, {"start": 1750.92, "end": 1758.28, "text": " images in a single tensor and then we're gonna pass that through the backbone which is basically the vision transformer"}, {"start": 1758.8400000000001, "end": 1765.0800000000002, "text": " Uh of the teach network. 
So that's going to form what so the the let's try and figure out the shape so"}, {"start": 1765.94, "end": 1768.28, "text": " 128 and because we are outputting"}, {"start": 1769.16, "end": 1774.68, "text": " Only the cls token. So that will be the output from the vision transformer. So it has 384"}, {"start": 1774.68, "end": 1780.8400000000001, "text": " Uh, like uh neurons, right? So we're gonna have hopefully, um"}, {"start": 1781.48, "end": 1786.92, "text": " What so 128 times 384, okay, let's let's see whether the prediction was right"}, {"start": 1787.0800000000002, "end": 1793.0, "text": " So it's kind of good to have these predictions and then uh tweak your brain depending on whether you made made a mistake or not"}, {"start": 1793.24, "end": 1797.0, "text": " I guess that's a similar way how these neural networks work. So the out"}, {"start": 1797.5600000000002, "end": 1800.2, "text": " Shape here as you can see the shape is 128"}, {"start": 1800.2, "end": 1807.24, "text": " 128, um, comma 384 so that's it i'm gonna skip this part and now we're gonna gonna this is going to accumulate the"}, {"start": 1807.88, "end": 1812.1200000000001, "text": " The output and this will again make more sense for the student network. Now is just gonna"}, {"start": 1812.68, "end": 1817.72, "text": " Uh, we're gonna just copy paste pretty much this output from the vision transformer here and that's it"}, {"start": 1817.8, "end": 1820.8400000000001, "text": " We're gonna exit the loop and now this is the actual dino head"}, {"start": 1820.8400000000001, "end": 1826.76, "text": " So this is the mlp and now we're after passing it here. We're gonna end up with those 65 000 something"}, {"start": 1826.76, "end": 1832.92, "text": " So we're gonna have 128 times 65,000 something. Let's let's see whether that that's true. Okay"}, {"start": 1833.32, "end": 1836.04, "text": " Uh, whoops, let me get out of here"}, {"start": 1836.6, "end": 1842.44, "text": " Teacher output. Let's see whether the teacher output is as I predicted so teacher output. Let me kind of"}, {"start": 1844.92, "end": 1846.76, "text": " So 128"}, {"start": 1846.76, "end": 1849.8, "text": " 65 000, that's it. So the predictions were right. So now"}, {"start": 1850.36, "end": 1855.48, "text": " This is the interesting part. We have a student network and we are passing both the global crops as well as the local crops"}, {"start": 1855.48, "end": 1861.4, "text": " So let's enter the loop again again. It's a list. So i'm gonna skip this and now it's a bit more interesting. So it's gonna"}, {"start": 1862.3600000000001, "end": 1865.88, "text": " Make uh return the ids where the resolution changes"}, {"start": 1865.88, "end": 1872.2, "text": " So we have 20 remember the global crops are 224 by 224 and the local crops are 96 by 96"}, {"start": 1872.44, "end": 1876.84, "text": " So what this line of code actually does is it finds a separation and returns the indices?"}, {"start": 1877.56, "end": 1882.44, "text": " Of the place where the resolution changes so it's going to return as you will soon see 2 and 10"}, {"start": 1882.44, "end": 1887.8, "text": " I think or something like that. Yeah 2 and 10 so that's because after 2 we drop the small resolution"}, {"start": 1887.8, "end": 1891.56, "text": " And then we keep the small resolution until the end and that's 10 crops"}, {"start": 1891.96, "end": 1897.3200000000002, "text": " So now we are iterating again as you can see here. 
We're first going to pass 0 through 2"}, {"start": 1897.72, "end": 1899.72, "text": " So that's global crops"}, {"start": 1900.04, "end": 1905.5800000000002, "text": " And we're going to pass that to the the network here. So that's going to be again 128"}, {"start": 1906.6000000000001, "end": 1911.1000000000001, "text": " So that was that is going to be as you can see here 128 384"}, {"start": 1911.1, "end": 1917.5, "text": " Uh, and we're just going to concatenate that with this output, which is just a placeholder variable"}, {"start": 1917.58, "end": 1922.62, "text": " You can see here is torch empty and we're just concatenating the results to it and that's it"}, {"start": 1922.86, "end": 1927.8999999999999, "text": " Now we're gonna put this the start index is now going to be 2 and the end index is going to be 10"}, {"start": 1928.3, "end": 1930.1399999999999, "text": " So now we're passing 2"}, {"start": 1930.1399999999999, "end": 1934.86, "text": " Through 10 which means we are fetching the local crops. We're concatenating them"}, {"start": 1935.1, "end": 1938.3, "text": " So we're gonna form what like let me do it like calculator"}, {"start": 1938.3, "end": 1941.6599999999999, "text": " Here we have 8 crops all of these have what?"}, {"start": 1942.44, "end": 1943.74, "text": " 64"}, {"start": 1943.74, "end": 1948.06, "text": " 64 images so that's 512. So let's see whether that's true"}, {"start": 1948.06, "end": 1951.82, "text": " So we have 512 and we're going to expect the output 512"}, {"start": 1952.86, "end": 1956.78, "text": " 384 so let's see whether that's true. So we have out"}, {"start": 1957.8999999999999, "end": 1963.58, "text": " Output variable 512 384. That's it. So everything works as we expect"}, {"start": 1963.58, "end": 1971.26, "text": " So let's continue here again. We're just concatenating these so now we're gonna concatenate these with the outputs from the from the"}, {"start": 1971.58, "end": 1974.54, "text": " global crops and that means we're gonna have what like"}, {"start": 1975.6399999999999, "end": 1979.82, "text": " 512 for the global crops plus 100"}, {"start": 1980.76, "end": 1985.1799999999998, "text": " 128 so 512 was for the for the local crops 128 is for the"}, {"start": 1985.82, "end": 1991.26, "text": " Global crops so that's 640. So if we do that, we're gonna see that the output is"}, {"start": 1991.26, "end": 1993.26, "text": " Let me see here"}, {"start": 1993.66, "end": 1995.66, "text": " So the output is 640"}, {"start": 1996.36, "end": 2002.62, "text": " 384. Okay, that's it. Let's continue. We're gonna exit the loop. We're gonna pass all of these"}, {"start": 2003.5, "end": 2004.62, "text": " through the"}, {"start": 2004.62, "end": 2007.26, "text": " MLP and that means we're gonna end up with"}, {"start": 2008.94, "end": 2012.14, "text": " Instead of 384 we're gonna have those 65 000 something"}, {"start": 2013.0, "end": 2017.58, "text": " Activations, so that's it. That's the whole trick. Let me again exit this thing"}, {"start": 2017.58, "end": 2021.02, "text": " Okay, exit. Come on. Come on. Come on"}, {"start": 2023.1799999999998, "end": 2026.3799999999999, "text": " This sucks ass, okay, so this is the last important detail"}, {"start": 2026.46, "end": 2031.1, "text": " Let's just analyze the dyno loss and that's pretty much it. 
You know, you should be able to understand the training"}, {"start": 2032.06, "end": 2037.34, "text": " So here what happened what happens we have we take the student outputs and remember that's again"}, {"start": 2038.6999999999998, "end": 2043.1799999999998, "text": " 600 something times 65 000 something so 640 times"}, {"start": 2043.18, "end": 2047.44, "text": " 65 000 and we're just going to scale it with this temperature coefficient"}, {"start": 2047.44, "end": 2051.52, "text": " So remember that's remember from the diagram from the paper. That's what they are doing"}, {"start": 2052.2400000000002, "end": 2056.56, "text": " Now they're gonna chunk this into 10 pieces because remember"}, {"start": 2058.96, "end": 2063.12, "text": " We have 10 different crops so this is just a way to take these 640"}, {"start": 2063.84, "end": 2066.2400000000002, "text": " outputs and kind of cluster them into"}, {"start": 2067.2000000000003, "end": 2070.64, "text": " Semantically meaningful categories so that means global crops and local crops"}, {"start": 2070.64, "end": 2075.2, "text": " So we're gonna see that the student out let's find that variable. So that student out here"}, {"start": 2075.8399999999997, "end": 2082.48, "text": " Let me expand it. We have 640 and now we're gonna have a list of 10 elements and every element will have"}, {"start": 2082.7, "end": 2090.3199999999997, "text": " 64 images as it should be so as you can see here 10 elements in the list and every single one contains 64 images"}, {"start": 2090.4, "end": 2094.7999999999997, "text": " That's it. We do a similar thing for the teacher network. We have some temperature stuff"}, {"start": 2094.8799999999997, "end": 2099.12, "text": " We have the centering of the output and then we divide by the temperature"}, {"start": 2099.12, "end": 2101.7599999999998, "text": " across the feature dimension so that means across the"}, {"start": 2102.4, "end": 2108.16, "text": " That vector that has 65 000 activations where you're going to do the soft max across it and"}, {"start": 2109.0, "end": 2113.8399999999997, "text": " then we do the detaching because again, we're not training teacher network and"}, {"start": 2114.2799999999997, "end": 2120.2799999999997, "text": " We chunk it into two because remember we have two global crops being passed through the teacher network"}, {"start": 2120.2799999999997, "end": 2124.3199999999997, "text": " So teacher out let me just expand that variable. So teacher out has"}, {"start": 2124.32, "end": 2129.54, "text": " 128 it's gonna be split into two elements each will have 64 images"}, {"start": 2129.54, "end": 2135.78, "text": " So let's step over it and as you can see here two elements 64 images each. So that's it"}, {"start": 2135.78, "end": 2139.82, "text": " So now we have those different views and now we're gonna do the actual"}, {"start": 2140.5800000000004, "end": 2142.5800000000004, "text": " like loss, so"}, {"start": 2142.7400000000002, "end": 2149.2400000000002, "text": " What is happening here is we are taking a single so there's gonna be a single like a"}, {"start": 2149.24, "end": 2156.8799999999997, "text": " Global crop and it's gonna contain a batch of images for that global crop and then so this just makes sure we are not operating"}, {"start": 2156.8799999999997, "end": 2161.3599999999997, "text": " On the same view. 
So that's what we're gonna skip in the first iteration because both are zero"}, {"start": 2161.4799999999996, "end": 2166.4799999999996, "text": " So that means both are indexing the same global crop and now in the second iteration, this is gonna be one"}, {"start": 2166.4799999999996, "end": 2171.2, "text": " We're gonna skip that and what happens now is we're gonna take a batch of images"}, {"start": 2171.2, "end": 2177.58, "text": " So 64 images from the from the from the second global crop and we're gonna contrast it with Q"}, {"start": 2177.58, "end": 2183.16, "text": " Which is the the first global crop and that means and we're gonna do simple cross entropy here"}, {"start": 2183.16, "end": 2186.46, "text": " So that's Q and at times the log of the student view"}, {"start": 2186.92, "end": 2191.52, "text": " Softmax is here because we already we already done softmax for the teacher network"}, {"start": 2191.52, "end": 2197.04, "text": " So we have to do softmax and then we have to apply log and then we have just cross entropy and that's it"}, {"start": 2197.04, "end": 2199.58, "text": " So focusing on dimensions here on the shape again"}, {"start": 2199.58, "end": 2204.92, "text": " So if we take expand the Q variable, you can see it's 64 times 65,000"}, {"start": 2204.92, "end": 2207.48, "text": " The same thing is gonna be for this student view"}, {"start": 2207.48, "end": 2212.98, "text": " So we have 64 images here and here and we have their outputs and we're just gonna make sure that the"}, {"start": 2213.12, "end": 2217.8, "text": " Distributions are the same for all of these 64 images. And then what we're gonna do is"}, {"start": 2218.6800000000003, "end": 2220.44, "text": " Basically sum it up"}, {"start": 2220.44, "end": 2224.66, "text": " So we're gonna sum up across the that dimension that not the dimension number one"}, {"start": 2224.66, "end": 2231.4, "text": " And then this mean will do the averaging across the second dimension and that's gonna end up with a scalar"}, {"start": 2231.4, "end": 2236.94, "text": " So after we do this and this we end up with a scalar and that's it. That's pretty much it now"}, {"start": 2236.94, "end": 2243.42, "text": " We iterate through every single combination of views. So we have we trade through different global crops here"}, {"start": 2243.52, "end": 2247.6800000000003, "text": " We trade through both the global and local crops here. We take those views"}, {"start": 2247.6800000000003, "end": 2250.8, "text": " We do the cross entropy and that's how we train this whole thing"}, {"start": 2250.88, "end": 2257.54, "text": " Okay, and the final last detail is we are updating this this center vector which is used as you remember here"}, {"start": 2257.54, "end": 2262.94, "text": " So the self center is gonna be updated using the teacher outputs in this function here"}, {"start": 2262.94, "end": 2266.12, "text": " But you're gonna skip it. You can analyze it yourself. It's fairly simple"}, {"start": 2266.8, "end": 2271.22, "text": " it's basically following the formula from the paper and we're doing the"}, {"start": 2272.44, "end": 2275.7599999999998, "text": " Exponentially moving average update of the of the center vector. That's it"}, {"start": 2276.44, "end": 2281.62, "text": " Okay, let's exit this thing. Let's exit this thing and that's it"}, {"start": 2281.62, "end": 2287.66, "text": " We now have just the bureaucracy stuff. 
So we clear the gradients of the of the model"}, {"start": 2288.2, "end": 2290.2799999999997, "text": " We are going to now do"}, {"start": 2291.16, "end": 2297.8399999999997, "text": " Backward which is gonna calculate the gradients throughout the whole network of all of the parameters that are obviously a trainable"}, {"start": 2298.56, "end": 2300.56, "text": " We're gonna do some"}, {"start": 2301.2799999999997, "end": 2305.2, "text": " Clipping of gradients basically here. This line is just going to freeze"}, {"start": 2305.72, "end": 2309.72, "text": " And not train the last layer if you remember there was this parameter called"}, {"start": 2309.72, "end": 2311.72, "text": " What's the name of it freeze?"}, {"start": 2312.16, "end": 2317.04, "text": " So since this freeze last layer is only one and current epoch is zero"}, {"start": 2317.04, "end": 2320.3599999999997, "text": " That means we're actually going to freeze the last layer of the model"}, {"start": 2320.3599999999997, "end": 2325.72, "text": " But that's again a small optimization trick to make this thing more stable and trainable"}, {"start": 2326.4399999999996, "end": 2331.8799999999997, "text": " Finally we do the step. That means we're gonna actually update the weights and that's it"}, {"start": 2331.8799999999997, "end": 2335.48, "text": " Just some details around mixed precision training nothing that spectacular"}, {"start": 2335.48, "end": 2338.84, "text": " Finally we take the the the"}, {"start": 2339.56, "end": 2341.96, "text": " moment value depending on the on the"}, {"start": 2341.96, "end": 2348.64, "text": " Epoch we are in on the global step we're in and this is the part where we are updating the teacher networks"}, {"start": 2349.12, "end": 2350.72, "text": " using the"}, {"start": 2350.72, "end": 2353.44, "text": " Student weights, so we are basically doing"}, {"start": 2353.96, "end": 2360.2400000000002, "text": " Exponential moving average here. So as you can see the teacher weights are gonna be multiplied by M"}, {"start": 2360.24, "end": 2367.24, "text": " Which is the this momentum parameter and we're gonna add up 1 minus M times the weights from the student"}, {"start": 2367.8399999999997, "end": 2372.4399999999996, "text": " And you can see it on the formula on the screen and that's that's pretty much it. Okay, that was it"}, {"start": 2372.4399999999996, "end": 2376.68, "text": " That was pretty much the the core parts of the training everything else is just logging"}, {"start": 2377.0, "end": 2383.16, "text": " Dumping those log files and checkpoints. This was the the very meat of the dyno training and how the training looks like again"}, {"start": 2383.16, "end": 2386.7999999999997, "text": " Any feedback is appreciated the more feedback I received that the better"}, {"start": 2386.8, "end": 2393.1600000000003, "text": " I'll know for the next time how to make these videos. Okay. I want to end this video by showing you this visualize attention function"}, {"start": 2393.1600000000003, "end": 2401.4, "text": " Let's quickly run through how this thing works and I think it's fairly fairly much simpler compared to the training"}, {"start": 2401.4, "end": 2408.0, "text": " So let's kind of dig into it. I'm gonna put a breakpoint here. 
I'm gonna start debugging and"}, {"start": 2408.48, "end": 2410.48, "text": " Let's see what happens"}, {"start": 2411.48, "end": 2413.48, "text": " Okay, again, we're using"}, {"start": 2413.48, "end": 2415.92, "text": " We at small pet size 8"}, {"start": 2416.76, "end": 2422.64, "text": " No pre train weights. This is not that important. We have some image path. I already"}, {"start": 2423.56, "end": 2428.44, "text": " Inputted the path to this image. So we're gonna be using this image here this this bird image and"}, {"start": 2429.12, "end": 2435.48, "text": " So that's the image path image size is gonna be 480 not that important again, which is gonna dump in the current directory"}, {"start": 2436.32, "end": 2439.6, "text": " Threshold is also not that important for now. Let me see what I"}, {"start": 2439.6, "end": 2444.52, "text": " Am using currently so my current threshold is 0.1. I'm actually gonna delete this"}, {"start": 2445.04, "end": 2448.3199999999997, "text": " so I'm gonna just input the image path here as you can see and"}, {"start": 2448.96, "end": 2450.96, "text": " Now let's apply this and"}, {"start": 2451.72, "end": 2453.72, "text": " Restart the code again"}, {"start": 2454.24, "end": 2456.24, "text": " jumping to here"}, {"start": 2456.24, "end": 2458.24, "text": " Okay, I'm gonna jump here"}, {"start": 2458.24, "end": 2460.24, "text": " Let's see here. So"}, {"start": 2460.24, "end": 2465.44, "text": " Device if I have if you have GPU, it's gonna be using GPU. So it's CUDA here for me. I have a GPU"}, {"start": 2465.44, "end": 2471.2400000000002, "text": " So we're gonna form we're gonna create a model. So this is the same line as in dyno training code"}, {"start": 2471.2400000000002, "end": 2474.76, "text": " So we're gonna basically instantiate the vision transformer small"}, {"start": 2475.4, "end": 2482.56, "text": " Because of this string with pet size 8 and 0 classes means it won't have the output"}, {"start": 2483.08, "end": 2485.92, "text": " head is just going to have the the final layer is"}, {"start": 2486.4, "end": 2491.7200000000003, "text": " Gonna be just bunch of tokens the CLS tokens and all of the other tokens, but no no actual linear"}, {"start": 2491.72, "end": 2496.52, "text": " Classifier had okay. Let's do this. So these lines were just gonna"}, {"start": 2497.24, "end": 2499.24, "text": " freeze the the model"}, {"start": 2499.48, "end": 2503.2, "text": " Parameters that means we are not we're gonna set them to not trainable"}, {"start": 2503.2, "end": 2508.04, "text": " So that means this model is just going to be used as is no training involved"}, {"start": 2508.68, "end": 2514.2, "text": " Eval evaluation mode that basically sets up depending what you have dropout or batch norm"}, {"start": 2514.2, "end": 2517.7999999999997, "text": " This thing can change the the those layers"}, {"start": 2517.8, "end": 2523.76, "text": " And that's why it's important when you're using a model in an inference mode to call this eval function"}, {"start": 2524.8, "end": 2528.44, "text": " We are gonna push the device to the GPU to make things a bit faster"}, {"start": 2528.44, "end": 2531.8, "text": " And finally because I don't have any pre-trained weights, which is gonna"}, {"start": 2532.44, "end": 2536.48, "text": " Fetch from a certain URL as you can see here. We form an URL we"}, {"start": 2537.2400000000002, "end": 2543.1600000000003, "text": " Fetch those from the torch hub. 
We're gonna just concatenate the URL to this repository"}, {"start": 2543.16, "end": 2547.3599999999997, "text": " And we're gonna fetch the state dict and we're gonna load the state dictionary"}, {"start": 2547.3599999999997, "end": 2549.96, "text": " So the download actually already happened on my local machine"}, {"start": 2549.96, "end": 2554.0, "text": " So it's just fetching it from the local storage instead of having to go to the cloud"}, {"start": 2554.0, "end": 2557.2799999999997, "text": " Okay, my image the image name was like incorrect"}, {"start": 2557.2799999999997, "end": 2561.48, "text": " So I had to tweak that in the settings and now I rerun the code and here we are again"}, {"start": 2561.48, "end": 2565.56, "text": " So because we have an image path, we're gonna open it load the image"}, {"start": 2566.56, "end": 2570.16, "text": " We're gonna load the image. We're gonna do something that's gonna load the image"}, {"start": 2570.16, "end": 2577.12, "text": " We're gonna load the image. We're gonna do some transformations. So we're gonna resize the image into 480 by 480"}, {"start": 2577.2799999999997, "end": 2580.8799999999997, "text": " We're gonna convert it into tensor. And again, here is here these"}, {"start": 2581.56, "end": 2588.2799999999997, "text": " Numbers we already saw the much the magic numbers. They just basically represent the mean and standard deviation of the image net data set"}, {"start": 2589.8799999999997, "end": 2592.8399999999997, "text": " So, okay, so after that we're gonna transform the image"}, {"start": 2592.8399999999997, "end": 2597.12, "text": " So now we have a tensor that's normalized and resized as you can see here"}, {"start": 2597.12, "end": 2604.16, "text": " Finally, we make sure that it's divisible by the pet size. So the pet size is currently what eight so we're gonna"}, {"start": 2604.7999999999997, "end": 2608.1, "text": " Make sure that the width and height are are actually"}, {"start": 2609.2799999999997, "end": 2616.64, "text": " Divisible by eight and now we're gonna crop the image here if necessary and on squeeze zero means we're just gonna add"}, {"start": 2617.04, "end": 2622.2799999999997, "text": " Another dimension in the front. So that's just some pie torch like stuff"}, {"start": 2622.28, "end": 2627.34, "text": " And if we take a look at the image dimension here, so let's take a look at the shape"}, {"start": 2627.4, "end": 2634.6800000000003, "text": " So the shape is just three four hundred eighty four hundred eighty and after we execute this line is gonna be one three four hundred"}, {"start": 2634.6800000000003, "end": 2640.76, "text": " Eighty four hundred eighty. So that's the unsqueezed part then we're gonna divide the resolution by the pet size"}, {"start": 2640.8, "end": 2648.0, "text": " So this will tell you how many tokens you have across the width dimension and we have sixty and here again"}, {"start": 2648.0, "end": 2652.14, "text": " We have sixty because remember it's four hundred eighty times four hundred eighty image"}, {"start": 2653.28, "end": 2654.44, "text": " finally"}, {"start": 2654.44, "end": 2661.52, "text": " What this does it's gonna affect from the vision transformer. It's gonna get the the self-attention of the image"}, {"start": 2661.52, "end": 2663.52, "text": " So we're gonna take the image we're gonna"}, {"start": 2664.08, "end": 2669.04, "text": " Like send it to the GPU because the model is also in GPU. 
So you don't want to have a discrepancy there"}, {"start": 2669.2, "end": 2674.48, "text": " We're gonna feed the image into the model and it's going to take from the last layer"}, {"start": 2674.48, "end": 2681.48, "text": " It's gonna return not the tokens is gonna return the attention coefficients. So that means the following so remember we have"}, {"start": 2682.0, "end": 2684.0, "text": " 60 times 60"}, {"start": 2684.16, "end": 2688.96, "text": " Tokens as we just saw up there. So we have 60 here and 60 here"}, {"start": 2688.96, "end": 2695.56, "text": " So that means we have three hundred three thousand six hundred tokens in in total plus one for the CLS token"}, {"start": 2695.56, "end": 2702.2, "text": " So that's three hundred three thousand six hundred one. So the attention dimension we should expect should be"}, {"start": 2702.2, "end": 2707.96, "text": " basically three thousand six hundred one times three hundred six thousand three"}, {"start": 2709.12, "end": 2712.8999999999996, "text": " Three thousand six hundred and one and we also have six"}, {"start": 2713.7599999999998, "end": 2718.16, "text": " Attention head so we're gonna have something like six times this number here"}, {"start": 2718.2, "end": 2721.7599999999998, "text": " So that's the the the numbers I'm the shape I'm expecting"}, {"start": 2722.68, "end": 2724.9199999999996, "text": " so if I step over and"}, {"start": 2724.92, "end": 2732.6800000000003, "text": " After we now let's find the attentions variable. So the shape is as you can see here six"}, {"start": 2733.2400000000002, "end": 2740.66, "text": " Three thousand six four six hundred blah blah the number we expected basically plus the this one as the first dimension again"}, {"start": 2740.66, "end": 2743.84, "text": " Those are just some idiosyncrasies of pytorch quick recap"}, {"start": 2743.84, "end": 2747.46, "text": " How those were formed is basically we have so we have as I said"}, {"start": 2747.46, "end": 2755.2200000000003, "text": " Three thousand six hundred one tokens you take a single token and you basically form its query and then you do a dot product"}, {"start": 2755.2200000000003, "end": 2758.4, "text": " a scale dot product with all of the other keys and that's why you have"}, {"start": 2759.7, "end": 2763.7, "text": " 3601 attention coefficients for that single token, but since we have"}, {"start": 2764.56, "end": 2768.86, "text": " 3601 tokens we're gonna have this this this shape here. So hopefully that's clear"}, {"start": 2768.86, "end": 2772.58, "text": " So as you can see here, we extract the number of heads the number of heads is six"}, {"start": 2772.58, "end": 2779.1, "text": " And now what we do here is we take we just extract this is just because of one"}, {"start": 2779.1, "end": 2785.62, "text": " We take this zero here and then we take all of the heads, but we take only the zeros token"}, {"start": 2787.2999999999997, "end": 2791.42, "text": " Which means CLS token and we take one through the end"}, {"start": 2791.46, "end": 2798.72, "text": " Which means we find the attention coefficients that the CLS has across all of the other let's call them spatial tokens"}, {"start": 2798.72, "end": 2801.7, "text": " So that's the whole idea there. So that's why we have these numbers here"}, {"start": 2801.7, "end": 2806.9399999999996, "text": " So, let's see what what is the shape we're expecting. 
We're expecting something like let me write down here"}, {"start": 2806.9399999999996, "end": 2809.18, "text": " Maybe it's easier to see so expecting like six"}, {"start": 2809.8599999999997, "end": 2815.7, "text": " And then we expect because we're taking just the CLS token. So expect six three hundred"}, {"start": 2816.2999999999997, "end": 2820.16, "text": " Three thousand six hundred. So that's the shape where we're actually expecting after after this"}, {"start": 2820.72, "end": 2823.4399999999996, "text": " This line of code. So let's see where that's true"}, {"start": 2823.7, "end": 2828.16, "text": " So attentions as you can see here, that's that's the actual number six"}, {"start": 2828.16, "end": 2835.2, "text": " Three thousand six hundred. Okay. I'm just gonna ignore this threshold part and we're just going to reshape"}, {"start": 2835.7599999999998, "end": 2837.7599999999998, "text": " the those attentions into"}, {"start": 2838.48, "end": 2843.68, "text": " Into different shape. So instead of having it flattened out, we're gonna return it back to 60 60"}, {"start": 2844.08, "end": 2846.44, "text": " And finally, this is just nearest neighbor"}, {"start": 2847.24, "end": 2850.8399999999997, "text": " Interpolation which means you're gonna take that token"}, {"start": 2851.48, "end": 2856.3999999999996, "text": " Whatever the coefficient is we're just going to replicate the coefficient into the 8 times 8"}, {"start": 2856.4, "end": 2861.5, "text": " pixels because that's was the original image shape so that we can visualize the actual attention"}, {"start": 2861.6, "end": 2863.8, "text": " So that's why we need the interpolation part"}, {"start": 2864.48, "end": 2867.1, "text": " finally just some making some directories"}, {"start": 2867.76, "end": 2874.2000000000003, "text": " Saving the original image and finally here in this in this section here. We're going to as you can see here"}, {"start": 2874.32, "end": 2876.32, "text": " We're going to take"}, {"start": 2877.6, "end": 2881.46, "text": " The attentions for every single head and save them in an image"}, {"start": 2881.46, "end": 2886.4, "text": " So let me just kind of do f9 there and we're gonna end up having"}, {"start": 2887.94, "end": 2893.82, "text": " Six images which I'm going to show you in a second and finally I'm gonna ignore this threshold in part again"}, {"start": 2893.82, "end": 2896.66, "text": " And that's that's pretty much it and here are the attention"}, {"start": 2897.26, "end": 2900.3, "text": " Maps we got so here you can see the bird image"}, {"start": 2900.66, "end": 2907.9, "text": " Every single head has different attention pattern and that's it and these are the images you you saw in the actual paper"}, {"start": 2907.9, "end": 2912.6, "text": " Now you know how they are calculated and visualize awesome. This was a long video"}, {"start": 2912.78, "end": 2917.7400000000002, "text": " hopefully liked it if you did share it out with a friend subscribe hit the bell icon and"}, {"start": 2917.74, "end": 2937.4199999999996, "text": " Join the district community until next time bye bye"}]
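To make the multi-crop augmentation described in the segments above easier to follow, here is a minimal sketch in PyTorch/torchvision. It is not the repository's DataAugmentationDINO class: the exact jitter strengths, blur kernel and solarization probability are assumptions, while the scale ranges (40-100% for global crops, roughly 5-40% for local crops), the 224/96 output sizes and the ImageNet mean/std "magic numbers" follow the walkthrough.

```python
# Minimal sketch of the multi-crop augmentation, assuming a recent torchvision.
from torchvision import transforms

IMAGENET_MEAN, IMAGENET_STD = (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)

def _photometric():
    # flips + color jitter shared by all crops
    return transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
    ])

_to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),  # the ImageNet statistics
])

class MultiCropAugmentation:
    """Returns 2 global crops (224x224) + 8 local crops (96x96) per input PIL image."""

    def __init__(self, global_scale=(0.4, 1.0), local_scale=(0.05, 0.4), n_local=8):
        bicubic = transforms.InterpolationMode.BICUBIC
        self.global_1 = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=global_scale, interpolation=bicubic),
            _photometric(),
            transforms.GaussianBlur(kernel_size=23),
            _to_tensor,
        ])
        self.global_2 = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=global_scale, interpolation=bicubic),
            _photometric(),
            transforms.RandomSolarize(threshold=128, p=0.2),  # only on the 2nd global crop
            _to_tensor,
        ])
        self.local = transforms.Compose([
            transforms.RandomResizedCrop(96, scale=local_scale, interpolation=bicubic),
            _photometric(),
            transforms.GaussianBlur(kernel_size=23),
            _to_tensor,
        ])
        self.n_local = n_local

    def __call__(self, image):
        crops = [self.global_1(image), self.global_2(image)]
        crops += [self.local(image) for _ in range(self.n_local)]
        return crops  # a list of 10 tensors, as in the walkthrough
```

Passed as the `transform` of an ImageFolder-style dataset, each fetched sample then yields the list of 10 crops that the data loader batches.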
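The loss and the teacher update walked through above can be compressed into a few functions. This is a simplified sketch, not the repository code: the student temperature, the center momentum and the EMA momentum are placeholder values (only the 0.04 teacher temperature is taken from the walkthrough), and `student`/`teacher` are assumed to be two networks with identical parameter lists.

```python
# Sketch of the DINO loss (cross-entropy between teacher and student views) and the
# exponential-moving-average updates of the center vector and the teacher weights.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_center(center, teacher_out, momentum=0.9):
    # EMA update of the center (a registered buffer, not a trainable parameter)
    batch_center = teacher_out.mean(dim=0, keepdim=True)
    return center * momentum + batch_center * (1 - momentum)

def dino_loss(student_out, teacher_out, center,
              n_crops=10, n_global=2, tau_s=0.1, tau_t=0.04):
    # student_out: (n_crops * B, K), teacher_out: (n_global * B, K), K ~ 65k in the video
    student_views = (student_out / tau_s).chunk(n_crops)
    teacher_views = F.softmax((teacher_out - center) / tau_t, dim=-1).detach().chunk(n_global)

    total, n_terms = 0.0, 0
    for t_idx, q in enumerate(teacher_views):
        for s_idx, s in enumerate(student_views):
            if s_idx == t_idx:
                continue  # never compare a view with itself
            # cross-entropy: q * log softmax(student view), summed over the K dim,
            # averaged over the batch
            total += torch.sum(-q * F.log_softmax(s, dim=-1), dim=-1).mean()
            n_terms += 1
    return total / n_terms

@torch.no_grad()
def ema_update_teacher(student, teacher, m=0.996):
    # teacher <- m * teacher + (1 - m) * student; no gradients ever flow into the teacher
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.data.mul_(m).add_((1 - m) * ps.detach().data)
```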
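Finally, the attention-visualization part boils down to grabbing the last layer's attention, keeping only the row for the CLS token, and upscaling each token's coefficient back to pixel space. A rough sketch, assuming a DINO-style `get_last_selfattention` accessor on the model (that method name and the patch size of 8 are taken on trust from the walkthrough):

```python
# Turn the last-layer self-attention of a ViT into one attention map per head.
import torch
import torch.nn.functional as F

@torch.no_grad()
def cls_attention_maps(model, img, patch_size=8):
    # img: (1, 3, H, W) with H and W divisible by patch_size
    attn = model.get_last_selfattention(img)        # (1, heads, n_tokens, n_tokens)
    n_heads = attn.shape[1]
    h_tok, w_tok = img.shape[-2] // patch_size, img.shape[-1] // patch_size

    # attention the CLS token (index 0) pays to all spatial tokens (indices 1..)
    attn = attn[0, :, 0, 1:].reshape(n_heads, h_tok, w_tok)

    # nearest-neighbor upscaling: replicate each coefficient over its 8x8 patch
    attn = F.interpolate(attn.unsqueeze(0), scale_factor=patch_size, mode="nearest")[0]
    return attn                                      # (heads, H, W)
```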
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=Ich5TIvdYRE
Fastformer: Additive Attention Can Be All You Need | Paper Explained
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I covered "Fastformer: Additive Attention Can Be All You Need" paper introducing a novel, linear complexity, transformer model! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2108.09084 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 01:00 Previous work and problems 03:10 Fastformer method explained 07:10 Param sharing and complexity 09:10 Results, Fastformer is effective 11:20 Results, Fastformer is efficient 12:45 Ablations 14:10 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #fastformer #transformers #linearcomplexity
What's cracking guys? In this video I'm covering Fastformer: Additive Attention Can Be All You Need. So yet another "all you need" paper and yet another attempt to create a transformer that's more efficient, that's hopefully linear complexity with respect to the input sequence token length. So the reason I'm covering this paper is because I actually like it. I think the idea is fairly simple. So what really resonated with me was the simplicity. So I guess that's the Rich Sutton school of thought. And aside from that, they have awesome results on various datasets and they actually showed both theoretically and experimentally that they have better efficiency compared to previous work. So previous work, even though they sometimes have allegedly linear complexity, they have a huge constant that's hidden behind the big O notation. So in practice, they are not that useful, but I'll dig into those details a bit later. So before I do that, let me just quickly contrast this work with previous attempts to make attention, the attention module of the transformer, more efficient. So one group of papers was focusing on how to make the dense attention pattern of the original Vaswani transformer more sparse, but still keeping the performance that the original transformer has. And so you can see in the Longformer paper, they proposed a bunch of various different attention patterns. You can see they are fairly complex and intricate. And in my honest opinion, when I see something that's so complex and hacky, I just don't think it's going to be there in a couple of years. It's just a current way to boost the results. And I still think it's useful for the community, but I don't think it's going to work in the long run. Same thing with Reformer here. You can see they used locality-sensitive hashing. You can see just the number of steps you have to do here: LSH bucketing, sort by the LSH bucket, chunk the sorted sequence to parallelize, blah, blah, blah, attend within the same bucket. So it's fairly complex. You can see just looking at the graph, it's very complex. It doesn't seem like the way to go. That's one of my main problems I had with these papers. Second thing I don't like about these transformer papers which are trying to make it more efficient is that they are all using different datasets. It's kind of hard to compare apples to apples here. So I'd encourage the community to start using a dedicated set of datasets and try and evaluate all of them on the same suite of tasks. That's just my two cents. OK, so all of these papers are basically trying to modify this multi-head attention module. So this one here. So same for Reformer, same for Longformer, same for many of them. That was the basic pain point. That's where the quadratic complexity lies. And again, I won't be digging into how the Transformer works. That's by now a building block of deep learning. So go check out my video on the Transformer if you want to learn how it actually works. So yeah, that's the motivation. We want to reduce the quadratic complexity. And now let's see how Fastformer actually managed to achieve that. I think the idea is fairly neat and fairly simple. So let's dig into it. OK, so first things first, we have the input sequence, which is embedded into these vectors. So we have n vectors, which represent the n tokens of the input sequence. We have the usual query transform, key transform and value transform. And now the interesting thing happens here. 
So they first take these query vectors and they do additive attention. I'm going to explain in a second what that means. They do additive attention, they form these alpha coefficients, they multiply the vectors with the alpha coefficients and they sum them up to get this global query vector, which serves as an initial step in creating this global context of the whole sequence. Let's see quickly how the additive attention works. So we have a vector. Let's take q1, whatever. The additive attention is super simple. You just take the input vector, so q1, and what they're going to do is just feed it through a fully connected layer. So every single dimension in this vector, every single feature, is going to have an associated link here with a weight, and that's how you form the alpha coefficient. So for alpha i in the general case, we basically have d operations here, where d is the number of dimensions of the vector. So you can see it's fairly simple and computationally super cheap to form these alpha coefficients, which, as I said, you then just use to scale the vectors and you form another representation, which is here, and then you just add them up. You add up all the vectors. You form the q vector. That's it. That's how you form the global q vector. Then what you do is you take the q vector and you modify the key vectors by doing an element-wise product. So this symbol here is just the element-wise product. They did some ablation studies, we'll see that in a couple of minutes, but basically concatenating this global q vector, or adding it, did not work as well as doing this element-wise product. Basically, once you do that, you form these p vectors and then you again do the additive attention. So the same thing, the same procedure as here. And then you add them up and you form the k vector. So that's the global key vector. Finally, the final step of this module is: you basically have the value vectors, and what you're going to do is again do the element-wise product. You're going to form the u vectors. And this time, instead of doing additive attention, they did something similar to the original transformer. They did a simple linear transformation here. So this block is just a set of linear layers or MLPs. I'm not sure whether they're using a single layer or an MLP, but that doesn't matter. So basically you have a simple transformation, which is learnable. You form these r vectors. And then finally, you just add them up with the query vectors, which we had here, to form the final output. And that's it. So it's much simpler, in my opinion, compared to Longformer, compared to Linformer — which does a low-rank approximation of the attention matrix — compared to every other paper pretty much. Yeah. So that's a big plus in my book, at least. OK, so everything I just explained is written down in these formulas. The only thing I forgot to mention is they have, the same as in the original transformer, this division by the square root of the dimensionality, which serves to keep things stable. And so this is just a scaled dot product, and that's how you form the alpha coefficients. And then, as I said, you just do a weighted sum of these query vectors to form the global query vector. And that's it. You do the same thing for the global key vector, as I already mentioned here. So that's this one. And finally, I explained the third step. 
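To make the mechanism just described concrete, here is a minimal single-head PyTorch sketch of Fastformer-style additive attention. It is only my reading of the description above, not the authors' reference code: the module and variable names (FastformerAttention, to_q, w_q, and so on) are made up, and the real model is multi-head and uses the parameter sharing discussed next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastformerAttention(nn.Module):
    """Minimal single-head sketch of Fastformer's additive attention."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        # learnable vectors for the additive attention: one scalar score per token
        self.w_q = nn.Linear(dim, 1, bias=False)
        self.w_k = nn.Linear(dim, 1, bias=False)
        self.r = nn.Linear(dim, dim)  # final learnable transformation on the u vectors

    def forward(self, x):  # x: (batch, n_tokens, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # additive attention over the query vectors -> one global query vector
        alpha = F.softmax(self.w_q(q).squeeze(-1) / self.dim ** 0.5, dim=-1)  # (b, n)
        global_q = torch.einsum('bn,bnd->bd', alpha, q)                       # (b, d)

        # element-wise product of the global query with every key vector -> p vectors
        p = k * global_q.unsqueeze(1)                                         # (b, n, d)

        # second additive attention -> one global key vector
        beta = F.softmax(self.w_k(p).squeeze(-1) / self.dim ** 0.5, dim=-1)   # (b, n)
        global_k = torch.einsum('bn,bnd->bd', beta, p)                        # (b, d)

        # element-wise product with the values, linear transform, residual with the queries
        u = v * global_k.unsqueeze(1)                                         # (b, n, d)
        return self.r(u) + q                                                  # (b, n, d)
```

Note that every step above touches each token only once, which is where the n times d complexity mentioned below comes from.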
And that's this linear transformation part I explained. OK, so a couple of details which are interesting. They did some ablations again. They mention here: we shared the value and query transformation parameters to reduce the memory cost. In addition, we shared the parameters across different Fastformer layers to further reduce the parameter size and mitigate the risk of overfitting. So this is nothing new. We had a bunch of papers prior to this one which were experimenting with this weight sharing. And so nothing new there, but still, this is additionally going to save up some parameters. And they also didn't have any slowdowns doing this. So that's cool. So if you followed along with the explanation of how this thing works, it's easy to notice that the whole complexity will be n times d. So that means it's linear with respect to the number of tokens, and d is just the dimensionality of the vectors which are being passed through the transformer. OK, so funny thing, I just kind of had to notice this. This must be the most unlucky place to put the footnote. But, yeah, I guess they did it on purpose. Hopefully they did it on purpose. OK, so having said that, let me just show you the complexity compared to the other previous papers. So we had the original transformer, which was obviously quadratic in the input sequence length. And that was the problem that made those transformers not good enough to do inference on long sequences. Then we had a bunch of various papers. So I mentioned Longformer, I mentioned — I don't see Reformer here, but yeah — Linformer and other papers. So as I said, pretty much all of these papers had some hidden caveat inside. So, for example, Longformer had this k. And if I remember correctly, you had to push k pretty high in order to get a decent performance boost. So maybe something like square root of n would be optimal. And so you kind of have linear complexity, but you're not quite there. Same with other papers. You can see some have a squared dependency on the dimensionality. Some have huge constants in front of them, which you don't see because the big O notation just eats them up. So I think Fastformer is the first paper to actually have decent linear complexity. Let's finally see the results. So they have two sections. The first one is they show that Fastformer is effective, and then they show that it's actually efficient as well. So they had, I think, five datasets here — Amazon, IMDb and MIND among them. So sentiment and topic classification tasks. And you can see that Fastformer pretty much outperforms all of the previous baselines on all of these tasks, except for this one, where I guess the difference is fairly negligible and the standard deviation here is even bigger. So, yeah. But, as I mentioned, the thing I don't like is that everybody's using their own datasets; it'd be nice to have an agreement on which datasets we should use when we are evaluating these efficient transformers. Just putting it out there, maybe somebody starts implementing that idea. OK, so that's the being effective part. And here are some other datasets they used. So they have, I think, the recommendation task here and they show again that Fastformer is better across these various different metrics. And I won't get into what these metrics are, but basically it's better. And they have this combination with this PLM baseline, plus the ensemble — this asterisk just signifies an ensemble. 
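As a rough illustration of the two sharing tricks mentioned above — tying the query and value projections, and reusing one block across layers — here is a hedged sketch built on the FastformerAttention module from before; the class name and depth handling are my own assumptions, not the paper's exact setup.

```python
import torch.nn as nn

class SharedFastformerEncoder(nn.Module):
    """Toy encoder illustrating layer-wise weight sharing: the same attention block
    is applied at every layer, so extra depth adds compute but no new parameters.
    Assumes the FastformerAttention sketch from above is in scope."""
    def __init__(self, dim, depth):
        super().__init__()
        self.depth = depth
        self.attn = FastformerAttention(dim)   # one set of weights, reused `depth` times
        # query/value sharing from the paper could go one step further, e.g.:
        # self.attn.to_v = self.attn.to_q      # tie the value and query projections

    def forward(self, x):                      # x: (batch, n_tokens, dim)
        for _ in range(self.depth):
            x = self.attn(x)                   # same module each pass, still O(n * d) per pass
        return x
```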
And they get even better results by doing that. But, yeah, I guess this row is what is actually important for us. Additional results on text summarization: again, Fastformer is fairly better compared to all of the previous baselines across the ROUGE-1, ROUGE-2, ROUGE-L metrics. So different metrics, again not that important for this paper. Before I show you how efficient this Fastformer is, let me just kind of dig into this part. So: in our experiments, we use GloVe embeddings to initialize the token embedding matrix. I'm not sure why they're using GloVe here. So this is like an alternative to word2vec embeddings. Not completely sure why they did not learn the whole thing from scratch. If anybody knows what's going on here, let me know down in the comments, because this part confused me, to be honest. OK, let's see how efficient this thing is. So looking at the inference and training graphs here, we can see that as the sequence length increases, somewhere around 512 the transformer starts exploding. So the GPU memory is not large enough. I think they were using NVIDIA V100 GPUs. And so basically that means you cannot actually even evaluate the transformer on these larger sequences, on these longer sequences. Then you can see that this dark blue line is the Fastformer and it's way better compared to the other baselines. So this Poolingformer, if I'm not wrong — let me just see the theoretical complexity they reported. So Poolingformer is n times d times w, where w is the window size. And you can see that even though this looks linear, in practice it's not optimal because they have a huge constant. And I guess that w also contributes to that constant exploding. So yeah, in practice, it doesn't seem to be as good as all of the other baselines. Forgot to mention: we have here on the y-axis inference time per layer and here we have training time per layer. I guess these two charts pretty much correlate. That basically means it's better, it's more efficient, for both training and inference compared to the other baselines. And I guess those are nice results. Finally, here are some ablations. I mentioned the element-wise product they were using to construct those global vectors. You can see that adding up those vectors instead of doing the element-wise product, or concatenating them to the key vectors, is worse, looking at the accuracy on different datasets. So the trend is kind of similar and sticks throughout all of these different datasets. Finally, I mentioned weight sharing — the ablation studies are there. So without sharing, you can see decent results, but I guess the number of parameters explodes. With query-value sharing, some improvements. And then when you do the head-wise sharing — I think that's important — we have a significant drop here. And the same trend goes for every single dataset. And the whole point is, if you read the original transformer paper, and if you ever saw those attention visualizations, every single head is learning a different sub-function. So sharing the weights there kind of hurts the performance, and that would be some hand-wavy explanation of why that happens. So every single head needs to learn a dedicated function. For example, if you have a sentence, one of those heads will maybe focus on attending the nouns. One will be attending verbs. I don't know, whatever. So different semantics all in all. So that would be a quick explanation of this Fastformer paper. 
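The interaction ablation described above (element-wise product vs. addition vs. concatenation of the global vector with the per-token vectors) can be summarized in a small helper. This is just an illustrative sketch with made-up names; the paper's reported finding is that the element-wise variant works best.

```python
import torch

def combine(global_q, k, mode="elementwise", proj=None):
    """global_q: (batch, dim), k: (batch, n_tokens, dim). `proj` is a Linear(2*dim, dim)
    module needed only for the 'concat' variant."""
    g = global_q.unsqueeze(1)                        # broadcast over the token axis
    if mode == "elementwise":                        # the variant Fastformer keeps
        return k * g
    if mode == "add":                                # ablation: plain addition
        return k + g
    if mode == "concat":                             # ablation: concat, then project back to dim
        return proj(torch.cat([k, g.expand_as(k)], dim=-1))
    raise ValueError(f"unknown mode: {mode}")
```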
So again, as I said, what I like about this paper is that it's much simpler, implementation-wise and just in understanding how it works, compared to previous papers I've covered and read, as well as that they obviously have both theoretical and experimental evidence that they are more efficient, and they're also effective across a bunch of different datasets, as we saw. So yeah, that's pretty much it. And finally, if you missed it, I just created a brand new Discord community, so do join it. I'll put the link in the description. Basically, you'll be able to engage with others there, ask questions related to ML. I'm actually working on bringing startups to offer ML jobs, so if that's something that's interesting to you, do join the community and also subscribe to my monthly AI newsletter, which is going to be super valuable, hopefully. So having said that, subscribe to this channel, hit that bell notification, and until next time, bye-bye.
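If it helps, a quick shape sanity check for the sketches above, with arbitrary batch size, sequence length, and width (again assuming the hypothetical FastformerAttention and SharedFastformerEncoder classes defined earlier):

```python
x = torch.randn(2, 512, 256)                 # (batch, tokens, dim) -- arbitrary sizes
layer = FastformerAttention(dim=256)
enc = SharedFastformerEncoder(dim=256, depth=4)
print(layer(x).shape, enc(x).shape)          # both should stay (2, 512, 256)
```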
[{"start": 0.0, "end": 5.9, "text": " What's cracking guys? In this video I'm covering fastformer, additive attention can be all you need."}, {"start": 5.9, "end": 14.0, "text": " So yet another all you need paper and yet another attempt to create a transformer that's more efficient,"}, {"start": 14.0, "end": 20.0, "text": " that's hopefully linear, like complexity with respect to the input sequence token length."}, {"start": 21.0, "end": 25.0, "text": " So the reason I'm covering this paper is because I actually like it."}, {"start": 25.0, "end": 31.0, "text": " I think like the idea is fairly simple. So what really resonated with me was the simplicity."}, {"start": 31.0, "end": 38.0, "text": " So I guess that's Rich Sutton School of Thought. And aside from that, they have awesome results on various datasets"}, {"start": 38.0, "end": 45.0, "text": " and they actually showed both theoretically and experimentally that they have a better like efficiency compared to previous work."}, {"start": 45.0, "end": 54.0, "text": " So previous work, even though they sometimes have allegedly linear complexity, they have a huge constant that's hidden behind the big O notation."}, {"start": 54.0, "end": 58.0, "text": " So in practice, they are not that useful, but I'll dig into those details a bit later."}, {"start": 58.0, "end": 65.0, "text": " So before I do that, let me just quickly contrast this work with previous attempts to make attention,"}, {"start": 65.0, "end": 68.0, "text": " the attention module of the transformer more efficient."}, {"start": 68.0, "end": 78.0, "text": " So one group of papers were focusing on how to make the dense attention pattern of the original Vosfani transformer more sparse,"}, {"start": 78.0, "end": 83.0, "text": " but still like keeping the performance that the original transformer has."}, {"start": 83.0, "end": 90.0, "text": " And so you can see in the long former paper, they proposed a bunch of various different like attention patterns."}, {"start": 90.0, "end": 93.0, "text": " You can see they are fairly complex and intricate."}, {"start": 93.0, "end": 102.0, "text": " And in my honest opinion, when I see something that's so complex and hacky, I just don't think it's going to be there in a couple of years."}, {"start": 102.0, "end": 105.0, "text": " It's just a current way to boost the results."}, {"start": 105.0, "end": 112.0, "text": " And I still think it's useful for the community, but I don't think it's going to work in the long run."}, {"start": 112.0, "end": 115.0, "text": " Same thing with like Reformer here."}, {"start": 115.0, "end": 117.0, "text": " You can see they used locality-sensitive hashing."}, {"start": 117.0, "end": 120.0, "text": " You can see just the number of steps you have to do here."}, {"start": 120.0, "end": 127.0, "text": " So LSH bucketing, sort by the LSH bucket, chunk sort the sequence to parallelize, blah, blah, blah, 10 within the same bucket."}, {"start": 127.0, "end": 128.0, "text": " So it's fairly complex."}, {"start": 128.0, "end": 130.0, "text": " You can see just looking at the graph, it's very complex."}, {"start": 130.0, "end": 132.0, "text": " It doesn't seem like the way to go."}, {"start": 132.0, "end": 135.0, "text": " That's one of my main problems I had with these papers."}, {"start": 135.0, "end": 144.0, "text": " Second thing is the thing I don't like about these Transformer papers, which are trying to make it more efficient, is that they are all using different data sets."}, {"start": 144.0, "end": 147.0, "text": " 
It's kind of hard to compare apples to apples here."}, {"start": 147.0, "end": 156.0, "text": " So I'd encourage the community to start using like a dedicated set of data sets and try and evaluate all of them on the same suit of tasks."}, {"start": 156.0, "end": 158.0, "text": " That's just my two cents."}, {"start": 158.0, "end": 164.0, "text": " OK, so all of these papers are basically trying to modify this multi-head retention module."}, {"start": 164.0, "end": 166.0, "text": " So this one here."}, {"start": 166.0, "end": 170.0, "text": " So same for reformers, same for longformers, same for many of them."}, {"start": 170.0, "end": 171.0, "text": " That was the basic pain point."}, {"start": 171.0, "end": 174.0, "text": " That's where the quadratic complexity lies in."}, {"start": 174.0, "end": 177.0, "text": " And again, I won't be digging into how Transformer works."}, {"start": 177.0, "end": 180.0, "text": " That's like by now a building block of deep learning."}, {"start": 180.0, "end": 184.0, "text": " So go check out my video on Transformer if you want to learn how it actually works."}, {"start": 184.0, "end": 186.0, "text": " So yeah, that's the motivation."}, {"start": 186.0, "end": 188.0, "text": " We want to reduce the quadratic complexity."}, {"start": 188.0, "end": 191.0, "text": " And now let's see how FastFormer actually managed to achieve that."}, {"start": 191.0, "end": 194.0, "text": " I think the idea is fairly neat and fairly simple."}, {"start": 194.0, "end": 196.0, "text": " So let's dig into it."}, {"start": 196.0, "end": 201.0, "text": " OK, so first things first, we have input sequence, which are embedded into these vectors."}, {"start": 201.0, "end": 206.0, "text": " We have so we have n vectors, which represent the end tokens of the input sequence."}, {"start": 206.0, "end": 211.0, "text": " We have the usual query, transform and key transform and value transform."}, {"start": 211.0, "end": 213.0, "text": " And now the interesting thing happens here."}, {"start": 213.0, "end": 219.0, "text": " So they first take these query vectors and they do additive attention."}, {"start": 219.0, "end": 221.0, "text": " I'm going to explain in a second what that means."}, {"start": 221.0, "end": 223.0, "text": " They do additive attention."}, {"start": 223.0, "end": 225.0, "text": " They form these alpha coefficients."}, {"start": 225.0, "end": 233.0, "text": " They multiply the vectors with the alpha coefficients and they sum them up to achieve this global query vector,"}, {"start": 233.0, "end": 238.0, "text": " which serves as an initial step in creating this global context of this whole sequence."}, {"start": 238.0, "end": 241.0, "text": " Let's see quickly how the additive attention works."}, {"start": 241.0, "end": 243.0, "text": " So we have a vector."}, {"start": 243.0, "end": 244.0, "text": " Let's take like q1, whatever."}, {"start": 244.0, "end": 246.0, "text": " The additive attention is super simple."}, {"start": 246.0, "end": 248.0, "text": " You just take the input vector."}, {"start": 248.0, "end": 254.0, "text": " So q1 and what they're going to do is they're going to just feed it through a fully connected network."}, {"start": 254.0, "end": 262.0, "text": " So every single dimension in this vector, every single feature is going to have associated like a link here with a weight."}, {"start": 262.0, "end": 265.0, "text": " And you basically form that's how you form the alpha coefficient."}, {"start": 265.0, "end": 272.0, "text": " So alpha I in the general 
case, we basically have the operations in here where these the number of dimensions of the vector."}, {"start": 272.0, "end": 277.0, "text": " So you can see it's fairly simple and computationally super cheap to form these alpha coefficients,"}, {"start": 277.0, "end": 284.0, "text": " which, as I said, then you just use them to do a scalar product of the vector and you form another representation,"}, {"start": 284.0, "end": 286.0, "text": " which is here, and then you just add them up."}, {"start": 286.0, "end": 288.0, "text": " You add up all the vectors."}, {"start": 288.0, "end": 289.0, "text": " You form the Q vector."}, {"start": 289.0, "end": 290.0, "text": " That's it."}, {"start": 290.0, "end": 291.0, "text": " That's how you form the Q vector."}, {"start": 291.0, "end": 296.0, "text": " Then you what you do is you take the Q vector and you modify the key vectors by doing element wise."}, {"start": 296.0, "end": 300.0, "text": " So this this symbol here is just element wise product."}, {"start": 300.0, "end": 301.0, "text": " They did some ablation studies."}, {"start": 301.0, "end": 303.0, "text": " We'll see that in a couple of minutes."}, {"start": 303.0, "end": 314.0, "text": " But basically the concatenation of this global Q vector or the addition did not work as good as as doing this like element wise product."}, {"start": 314.0, "end": 319.0, "text": " Basically, once you do that, you form these P vectors and then you again do the additive attention."}, {"start": 319.0, "end": 322.0, "text": " So the same thing, the same procedure is here."}, {"start": 322.0, "end": 326.0, "text": " And then you add them up and you form the Q the K vector."}, {"start": 326.0, "end": 328.0, "text": " So that's the key global vector."}, {"start": 328.0, "end": 334.0, "text": " Finally, the final step of this of this module is you basically have value vectors."}, {"start": 334.0, "end": 338.0, "text": " What you're going to do is again do the element wise product."}, {"start": 338.0, "end": 340.0, "text": " You're going to form the U vectors."}, {"start": 340.0, "end": 346.0, "text": " And this time, instead of doing additive attention, they did something similar to the original transformer."}, {"start": 346.0, "end": 348.0, "text": " They did simple linear transformation here."}, {"start": 348.0, "end": 353.0, "text": " So this block is just like a set of linear layers or MLPs."}, {"start": 353.0, "end": 357.0, "text": " I think I'm not sure whether they're using single layer or MLP, but that doesn't matter."}, {"start": 357.0, "end": 359.0, "text": " So basically you have a simple transformation, which is learnable."}, {"start": 359.0, "end": 361.0, "text": " You form these R vectors."}, {"start": 361.0, "end": 367.0, "text": " And then finally, you just add them up with the query vectors, which we had here to form the final output."}, {"start": 367.0, "end": 368.0, "text": " And that's it."}, {"start": 368.0, "end": 381.0, "text": " So it's much more simple compared to, in my opinion, compared to long former compared to lean former compared, which is the lower end approximation of the matrix compared to every other paper pretty much."}, {"start": 381.0, "end": 382.0, "text": " Yeah."}, {"start": 382.0, "end": 385.0, "text": " So that's a big plus in my book, at least."}, {"start": 385.0, "end": 390.0, "text": " OK, so everything I just explained is written down in these formulas."}, {"start": 390.0, "end": 394.0, "text": " The only thing I forgot to mention is they have the same as in the 
original transformer."}, {"start": 394.0, "end": 400.0, "text": " They have this division by the square root of the dimensionality, which serves to keep the things stable."}, {"start": 400.0, "end": 403.0, "text": " And so this is just a scale dot product."}, {"start": 403.0, "end": 405.0, "text": " And that's how I formed the alpha coefficients."}, {"start": 405.0, "end": 412.0, "text": " And then, as I said, you just do weighted like some of these query vectors to form the global query vector."}, {"start": 412.0, "end": 417.0, "text": " And that's it. You do the same thing for for the key for the global key vector, as I already mentioned it here."}, {"start": 417.0, "end": 418.0, "text": " So that's this one."}, {"start": 418.0, "end": 421.0, "text": " And finally, I explained the third step."}, {"start": 421.0, "end": 424.0, "text": " And that's this linear transformation part I explained."}, {"start": 424.0, "end": 428.0, "text": " OK, so a couple of details which are interesting."}, {"start": 428.0, "end": 430.0, "text": " They did some oblations again."}, {"start": 430.0, "end": 434.0, "text": " They mentioned here we shared the value and query transformation parameters to reduce the memory cost."}, {"start": 434.0, "end": 441.0, "text": " In addition, we shared the parameters across different fast formal layers to further reduce the parameter size and mitigate the risk of overfitting."}, {"start": 441.0, "end": 442.0, "text": " So this is nothing new."}, {"start": 442.0, "end": 447.0, "text": " We had a bunch of papers prior to this one, which were experimenting with this weight sharing."}, {"start": 447.0, "end": 454.0, "text": " And so nothing new there, but still that's this is additionally going to save up some parameters."}, {"start": 454.0, "end": 457.0, "text": " And they also didn't have any slowdowns doing this."}, {"start": 457.0, "end": 458.0, "text": " So that's that's cool."}, {"start": 458.0, "end": 465.0, "text": " So if you followed along the explanation of how this thing works, it's easy to notice that the whole complexity will be n times d."}, {"start": 465.0, "end": 468.0, "text": " So that means it's linear with respect to the number of tokens."}, {"start": 468.0, "end": 473.0, "text": " And these just the dimensionality of the vectors which are being passed through the transformer."}, {"start": 473.0, "end": 477.0, "text": " OK, so funny thing, I just kind of had to notice this."}, {"start": 477.0, "end": 481.0, "text": " This must be the most unlucky place to put the footnote."}, {"start": 481.0, "end": 483.0, "text": " But, yeah, I guess they did it on purpose."}, {"start": 483.0, "end": 485.0, "text": " Hopefully they did it on purpose."}, {"start": 485.0, "end": 491.0, "text": " OK, so having said that, let me just show you the complexity compared to the other previous papers."}, {"start": 491.0, "end": 497.0, "text": " So we had transformer, the original transformer, which was obviously squared compared to the input sequence length."}, {"start": 497.0, "end": 506.0, "text": " And that was the problem that that made the yields transformers not good enough to to do inference on long sequences."}, {"start": 506.0, "end": 508.0, "text": " Then we had a bunch of various papers."}, {"start": 508.0, "end": 514.0, "text": " So I mentioned Longformer, I mentioned I don't see reformer here, but yeah, Linformer and other papers."}, {"start": 514.0, "end": 519.0, "text": " So as I said, there is always all of these papers pretty much had some hidden caveat inside."}, 
{"start": 519.0, "end": 521.0, "text": " So, for example, Longformer had this K."}, {"start": 521.0, "end": 528.0, "text": " And if I remember correctly, you had to push K pretty high in order to get decent performance boost."}, {"start": 528.0, "end": 531.0, "text": " So maybe something like square root of n would be optimal."}, {"start": 531.0, "end": 536.0, "text": " And so you kind of have linear complexity, but you're not quite there."}, {"start": 536.0, "end": 538.0, "text": " Same with our papers."}, {"start": 538.0, "end": 542.0, "text": " You can see some have square dependency on the dimensionality."}, {"start": 542.0, "end": 548.0, "text": " Some have huge constants in front of them, which you don't see because the big O notation just eats it up."}, {"start": 548.0, "end": 553.0, "text": " So I think this former is the first paper to actually have decent linear complexity."}, {"start": 553.0, "end": 554.0, "text": " Let's finally see the results."}, {"start": 554.0, "end": 555.0, "text": " So they have two sections."}, {"start": 555.0, "end": 561.0, "text": " First one is they show that the first former is effective and then they show that it's actually efficient as well."}, {"start": 561.0, "end": 566.0, "text": " So they had, I think, five data sets here to show on Amazon."}, {"start": 566.0, "end": 568.0, "text": " Yeah, I am DB in mind."}, {"start": 568.0, "end": 572.0, "text": " So sentiment and topic classification tasks."}, {"start": 572.0, "end": 579.0, "text": " And you can see that this former pretty much outperforms all of the previous baselines on all of these tasks,"}, {"start": 579.0, "end": 584.0, "text": " except for this one where I guess the difference is fairly negligible and the standard deviation here is even bigger."}, {"start": 584.0, "end": 586.0, "text": " So, yeah."}, {"start": 586.0, "end": 591.0, "text": " But like, as I mentioned, the thing I don't like about everybody's using their own data sets,"}, {"start": 591.0, "end": 598.0, "text": " it'd be nice to have like an agreement of which data sets we should use when we are evaluating these efficient transformers."}, {"start": 598.0, "end": 603.0, "text": " Just putting it out there, maybe somebody starts implementing that idea."}, {"start": 603.0, "end": 606.0, "text": " OK, so that's the being effective."}, {"start": 606.0, "end": 610.0, "text": " And here are some other data sets they used."}, {"start": 610.0, "end": 618.0, "text": " So they have the, I think, recommendation task here and they show again then first former is better compared like across these various different metrics."}, {"start": 618.0, "end": 622.0, "text": " And I don't want anyone get into what these metrics are, but basically it's better."}, {"start": 622.0, "end": 630.0, "text": " And they have this combination with this PLM and our baseline plus ensemble disaster just signifies ensemble."}, {"start": 630.0, "end": 633.0, "text": " And they have even better results that you've even better results by doing that."}, {"start": 633.0, "end": 637.0, "text": " But, yeah, I guess this this row is what is actually important for us."}, {"start": 637.0, "end": 639.0, "text": " Additional results on text summarization."}, {"start": 639.0, "end": 647.0, "text": " Again, fast former is fairly better compared to all of the previous baselines across rows one, rows two, rows L metrics."}, {"start": 647.0, "end": 651.0, "text": " So different metrics. 
Again, not that important for this paper."}, {"start": 651.0, "end": 656.0, "text": " Before I show you how efficient this fast former is, let me just kind of dig into this part."}, {"start": 656.0, "end": 662.0, "text": " So in our experiments, we use glove embeddings to initialize token embedding matrix."}, {"start": 662.0, "end": 664.0, "text": " I'm not sure why they're using glove here."}, {"start": 664.0, "end": 668.0, "text": " So this is like an alternative to virtual back embeddings."}, {"start": 668.0, "end": 672.0, "text": " Not completely sure why they did not learn the whole thing from scratch."}, {"start": 672.0, "end": 680.0, "text": " If anybody knows what's going on here, let me know down in the comments because I'm this part confused me, to be honest."}, {"start": 680.0, "end": 682.0, "text": " OK, let's see how efficient this thing is."}, {"start": 682.0, "end": 690.0, "text": " So looking at inference, looking at training graphs here, so we can see that as the sequence length increases,"}, {"start": 690.0, "end": 694.0, "text": " somewhere around 512, the transformer starts exploding."}, {"start": 694.0, "end": 697.0, "text": " So the GPU memory is not large enough."}, {"start": 697.0, "end": 701.0, "text": " I think they were using V100 GPUs and VDI."}, {"start": 701.0, "end": 711.0, "text": " And so basically that means you cannot actually even evaluate the transformer on these larger sequences, on these longer sequences."}, {"start": 711.0, "end": 720.0, "text": " Then you can see that this dark blue line is the fast former and it's way better compared to the other baselines."}, {"start": 720.0, "end": 726.0, "text": " So this pooling former, if I'm not wrong, let me just see the theoretical complexity they reported."}, {"start": 726.0, "end": 731.0, "text": " So pooling former n times d times w, where w is the window size."}, {"start": 731.0, "end": 738.0, "text": " And you can see that even though this looks linear in practice, it's not like optimal because they have huge constant."}, {"start": 738.0, "end": 743.0, "text": " And I guess that w also contributes to that constant exploding."}, {"start": 743.0, "end": 748.0, "text": " So yeah, in practice, it doesn't seem to be as good as all of the other baselines."}, {"start": 748.0, "end": 754.0, "text": " Forgot to mention we have here on the y-axis inference time per layer and here we have training time per layer."}, {"start": 754.0, "end": 757.0, "text": " I guess it correlates pretty much these two charts."}, {"start": 757.0, "end": 763.0, "text": " That basically means it's better, it's more efficient for both training and inference compared to the other baselines."}, {"start": 763.0, "end": 765.0, "text": " And I guess that's nice results."}, {"start": 765.0, "end": 767.0, "text": " Finally, here are some ablations."}, {"start": 767.0, "end": 771.0, "text": " I mentioned the Element-Vise product they were using to construct those global vectors."}, {"start": 771.0, "end": 780.0, "text": " You can see that compared to adding up those vectors instead of doing Element-Vise or concatenating them to the key vectors"}, {"start": 780.0, "end": 785.0, "text": " is worse looking at the accuracy on different datasets."}, {"start": 785.0, "end": 791.0, "text": " So the trend is kind of similar and sticks throughout all of these different datasets."}, {"start": 791.0, "end": 796.0, "text": " Finally, I mentioned weight sharing, the de-doblation studies there."}, {"start": 796.0, "end": 802.0, "text": " So without sharing, 
you can see decent results, but I guess the number of parameters explodes."}, {"start": 802.0, "end": 806.0, "text": " With query value sharing, some improvements."}, {"start": 806.0, "end": 810.0, "text": " And then when you do the head-wise sharing, I think that's important."}, {"start": 810.0, "end": 812.0, "text": " We have a significant drop here."}, {"start": 812.0, "end": 815.0, "text": " And the same trend goes for every single dataset."}, {"start": 815.0, "end": 823.0, "text": " And the whole point is if you read the original transformer paper, and if you ever saw those attention visualizations,"}, {"start": 823.0, "end": 826.0, "text": " every single head is learning different sub-function."}, {"start": 826.0, "end": 832.0, "text": " So sharing the weights there kind of hurts the performance, and that would be some hand-wavy explanation of why that happens."}, {"start": 832.0, "end": 836.0, "text": " So every single one needs to learn a dedicated function."}, {"start": 836.0, "end": 844.0, "text": " For example, if you have a sentence, one of those heads will maybe focus on attending the nouns."}, {"start": 844.0, "end": 847.0, "text": " One will be attending verbs. I don't know, whatever."}, {"start": 847.0, "end": 849.0, "text": " So different semantics all in all."}, {"start": 849.0, "end": 854.0, "text": " So that will be a quick explanation of this transformer paper."}, {"start": 854.0, "end": 861.0, "text": " So again, as I said, what I like about this paper is it's much more simpler implementation-wise"}, {"start": 861.0, "end": 867.0, "text": " than just understanding how it works compared to previous papers I've covered and read,"}, {"start": 867.0, "end": 874.0, "text": " as well as they obviously have both theoretical and experimental guarantees that they are more efficient,"}, {"start": 874.0, "end": 878.0, "text": " and they're also effective across a bunch of different datasets, as we saw."}, {"start": 878.0, "end": 880.0, "text": " So yeah, that's pretty much it."}, {"start": 880.0, "end": 886.0, "text": " And finally, if you missed it, I just created a brand new Discord community, so do join it."}, {"start": 886.0, "end": 888.0, "text": " I'll link the link in the description."}, {"start": 888.0, "end": 893.0, "text": " Basically, you'll be able to engage with others there, ask questions related to ML."}, {"start": 893.0, "end": 898.0, "text": " I'm actually working on bringing startups to offer ML jobs, so if that's something that's interesting to you,"}, {"start": 898.0, "end": 905.0, "text": " do join the community and also subscribe to my monthly AI newsletter, which is going to be super valuable, hopefully."}, {"start": 905.0, "end": 922.0, "text": " So having said that, subscribe to this channel, hit that bell notification, and until next time, bye-bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=rk9bhIRInC0
Do Vision Transformers See Like Convolutional Neural Networks? | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover the "Do Vision Transformers See Like Convolutional Neural Networks?" paper. They dissect ViTs and ResNets and show the differences in the features learned as well as what contributes to those differences (like the amount of data used, skip connections, etc.). ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2108.08810 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:45 Contrasting features in ViTs vs CNNs 06:45 Global vs Local receptive fields 13:55 Data matters, mr. obvious 17:40 Contrasting receptive fields 20:30 Data flow through CLS vs spatial tokens 23:30 Skip connections matter a lot in ViTs 24:20 Spatial information is preserved in ViTs 30:10 Features evolution with the amount of data 32:20 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #vision #transformers #cnns
What's cracking guys! In this video I'm covering Do Vision Transformers See Like Convolutional Neural Networks? by the Google Brain team. And before I dig into that, I want to remind you that I just created a brand new Discord community. Feel free to join it. So there are many things that this Discord community will bring, so whether you're looking for some machine learning related job, or you just want to indulge in some meme content, or you can also engage with the members, maybe even ask your questions and somebody will be able to kind of help you out. So do join it, and also subscribe to my monthly AI newsletter. It's coming — the first issue will be coming at the end of this month — and I encourage you to subscribe because I strongly believe that it's going to bring you a lot of value. Okay, so getting back to the paper, Do Vision Transformers See Like CNNs: what the authors of this paper basically try to answer is whether there is a fundamental difference between the representations learned by ViTs versus CNNs, how the skip connections are affecting those representations, how the data size is affecting those representations, etc. So there are multiple tweaks and experiments they've done here. There is not a single idea — it's more of trying to distill the knowledge they accumulated during the experiments about what's the difference between these two model families. And so let me just start digging into the diagrams. They have a bunch of diagrams like this one. So in this first diagram, what they've done is they try to visualize how different representations across different layers of ViTs and CNNs relate to each other. So they've used something called the centered kernel alignment (CKA) metric, which is this fancy formula here, but basically it's fairly simple. So what they have is they have X, which is m by p1 — so X is basically m examples, and p1 is just the number of neurons in that particular layer, so for those m examples we collect the p1 activations — and Y is some potentially different layer, it can also be the same one, with m examples again and p2 neurons this time. And so what they do is they form these Gram matrices, so K = X X^T and L = Y Y^T. So if you watched my neural style transfer playlist, you'll have heard about Gram matrices — they just kind of tell you the covariance information of your data. And going further, they apply this HSIC metric, which is formed like this: they first do the centering by this H matrix, so they get K' = H K H and L' = H L H, and then they do a simple dot product, HSIC(K, L) = vec(K') · vec(L') / (m - 1)^2, where m is the number of examples; CKA is then just that normalized, CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) · HSIC(L, L)). So for the sake of just having a clear mental picture of what this means, because there are many of these in-between steps, you can basically treat this as a simple dot product — although it's not working on vectors, it's working on these Gram matrices, which were centered, etc. But yeah, I think the dot product analogy will work fine enough. So basically, given a vector, you're trying to see: the closer another vector in the vector space is to that vector, the bigger the dot product will be. So in the worst case, if you have an orthogonal vector, the dot product will be zero. Obviously, if it's pointed in the opposite direction, we'll have a negative dot product. So having said that, let's see what's going on here. 
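Since the CKA formula is only paraphrased above, here is a minimal NumPy sketch of the linear, biased variant as I understand it (the paper actually uses a minibatch estimator, so treat this as illustrative); X and Y hold the activations of two layers for the same m examples.

```python
import numpy as np

def linear_cka(X, Y):
    """X: (m, p1), Y: (m, p2) activations for the same m examples.
    Returns CKA(X, Y) in [0, 1]: high values mean the two layers' representations
    are similar up to rotation and isotropic scaling."""
    m = X.shape[0]
    K = X @ X.T                          # (m, m) Gram matrix for layer 1
    L = Y @ Y.T                          # (m, m) Gram matrix for layer 2
    H = np.eye(m) - np.ones((m, m)) / m  # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H        # centered Gram matrices

    def hsic(A, B):                      # HSIC ~ a "dot product" between Gram matrices
        return (A * B).sum() / (m - 1) ** 2

    return hsic(Kc, Lc) / np.sqrt(hsic(Kc, Kc) * hsic(Lc, Lc))
```

Computing this for every pair of layers is what produces the layer-by-layer similarity heat maps discussed next.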
So they analyze VAT with L 16 and H H 14 models, which are basically the only differences if you remember and do watch my VAT video That's just currently that's a building block in in air research by now You should have a clear understanding of what VAT is and what transformer is before watching this video, but like a just a simple Explanation here is if you have an image What VATs do is they kind of form bunch of patches across that image and all of the patches will be so we have single patch here And so what what this number means 16 or 14 is just the dimension of the patch So 16 by 16, so that means the hardest number is the less patches will have in an image And that means that the transformer model the VAT will be smaller because the sequences will be shorter Obviously because we have bigger patches, right? So I think the information is pretty much redundant So I'm gonna go just focus on on single VAT model The whole idea here is as you can see so we have on x-axis. We have a layer index on y-axis We also have a layer index and obviously we have this line across this diagonal because Basically, if you do a dot product between a vector and itself, it's gonna give you a high output and That's why we have this this strong line here now because the matrix is symmetric by the nature of things We're just gonna focus on a single on this lower part of the of the of the of the diagram And you can see it's fairly uniform especially compared to to to ResNet And what that means is that our representations tend to stay similar as they kind of propagate through the VAT On the other hand, what happens with ResNet is there is some qualitative difference that happens As the property as the as the information as the as the features are propagating through the ResNet So what you can see here, let me help you parse this this diagram is so layers 80 to 120 Which are deeper layers of ResNet If you take a look at the similarity between the features in those layers and the features in the like Shallower in the first couple of layers of ResNet You can see that like focusing on this part of the diagram It's we have a low low activation here low similarity, which means the features have changed and On the other hand, if we if we take a look at these parts this part of the diagram here where they are comparing How the higher level features are being similar to each other We can see like a high output here as well as here and if we contrast that to to VAT We can see here it's much more uniform So that's the first conclusion they they had here is that for some reason there is this uniformity happening in VATs And it's not present in ResNet And so let's let's continue and see what other diagrams they have So the second thing that they've done here is now we have on the x-axis we have ResNet 50 On the y-axis we have VAT And what this diagram shows us is that ResNet basically takes 60 layers To calculate the features which are calculated in the first 30 something layers of VAT You can see that by looking at this bright blob that basically VAT gets to compute the features in the first 30 something layers Which take ResNet around 60 something layers And then we have the second blob it's a bit more vague here but like still So all of the other layers of ResNet are computing features which basically That's how I'm parsing this diagram at least What this means is that the VAT is managing to compute those similar features to those to the ones that ResNet is computing Until layer 75 or something So that means that VAT 
basically has this whole advantage here of computing refining the features even further on compared to ResNet And we've seen in the past research papers that VATs have comparable or even outperform ResNet So this kind of visualizes why this is And it all makes sense if you understand how VATs work We have a global attention span compared to CNNs which have by design have local attention only So having said that the third diagram is fairly informative And I guess it's super similar to the diagrams we saw in the original VAT paper So again I encourage you to go and check out that video if you haven't already But what it tells us is the following So we have this encoder block 012223 And what it basically means is again let me just kind of do a block diagram of VAT So this will be a VAT we have an image on the input something like this And so VAT is basically a stack of transformer blocks right And so let me denote this one as 0, this one is 1, and then we have blah blah blah And there is block 22, whatever So I guess 22 not 22 And so what they do is as you can see obviously there is a bunch of layers here in between They focus on these shallow layers as well as on deeper layers And there is a funny trend you can notice here And that's that basically if we focus on the deeper like blocks so these ones You can see that the mean distance and I'm going to explain what it is in a minute Is much higher and it's high across all of the tension heads So remember each block, each transformer layer has multiple tension heads And they just kind of plot all of the heads and all of the mean distances across all of the heads And let me before I start explaining the other curves let me kind of try and explain what this means It's fairly simple so imagine you have a simple image And we have 4x4 patches something like this 4x4 so 1, 2, 3, 4 Ok so what they do is the following So imagine you have this patch a token corresponding to this patch used as a query Again assuming you know how VATs work this query token is going to attend Basically we are going to do a dot product, a scaled dot product with the keys that correspond to other tokens And those are going to give us certain coefficients And so basically that means that this query will maybe be attending It will be attending this key here with maybe value 0.8 And it's going to be attending this key with maybe some lower weights maybe 0.01 And so basically how the mean distance is calculated is You have basically 0.8 times whatever the distance is And so we have let's say we have 3 pixels here and 3 pixels here So basically we have square root of 3 raised to the power of 2 plus 3 raised to the power of 2 So that's square root of 18 if I'm not wrong And so the basic idea is you are waiting the distance of all of the tokens from the query token And that means if we are attending if we are strongly attending further away tokens So these tokens that means that the mean distance will increase And analogously and similarly if we are attending the closer tokens and not the tokens further away Which we can see by those attention coefficients That means that the mean distance will drop down So that's a rough intuition of how this thing works Hopefully you got it And so now having that in mind let's kind of try and interpret this diagram again If we focus on the shallower block so block 0 or block 1 We can see that after we sort the attention heads So maybe this block has whatever like 16 heads I guess And all of the heads will have different mean distance which is 
natural And so you can see that certain heads here have much like a smaller mean distance So what that basically means is that if we have an image here and we have a certain We focus on a specific token here that means that it's going to attend the neighboring tokens IE pixels that are grouped into those tokens And I guess we saw the same or similar diagram in the original VAT paper What it tells us is that initially the shallower, the first blocks of the VAT Are going to learn both the global attention as you can see here So certain heads are still learning these global attention spans Whereas some of the other heads have something like CNN So that means some heads will be attending like this one And other heads may be attending to further away tokens like these ones here And that's the whole point, we have a mix of all of those different like I guess effective receptive fields And I guess similar trend for the VAT H14 but that's the whole idea Basically the additional flexibility that VATs, transformers in general have Is that they can choose and learn however they want their receptive fields Whereas CNNs are kind of bound to only have local receptive fields So that was already too long, let me continue, hopefully you got that one So this fourth diagram what they show is that basically if you train these models, VATs With not enough data, so all of those were pre-trained on JFT which is a huge data set I think it has 300 million data points If you train the VAT in contrast on ImageNet you can see what happens here So now if we focus on block 0 and block 1 we can see that even all of the heads have kind of global attention span And not local, and they also say here that these models also perform much worse When only trained on ImageNet suggesting that incorporating local features Which is hardcoded into CNNs may be important for strong performance And yeah I guess this is kind of a nice demonstration of that fact Okay, continuing on here, let's see the fifth diagram So what they show here is the following Now we have on X axis we have the mean distance so the same metric I just explained a couple of minutes ago Here we have the similarity metric and so what they do is they try to compare the features in shallower layers And compare them to ResNet-50 and see what's going on there So what they show is that for all of those basically features that have all of those heads That have lower mean distance so that would be in this part of the diagram Those features tend to have higher similarity with ResNet features And as the distance as the mean distance increases going right on the X axis We can see that the similarity decreases which means that VIT is qualitatively learning different representations compared to ResNet Okay similar thing goes for VIT H14 and L16 compared to ResNet-152 So I won't be focusing on the other diagrams but you get the point Basically the very fact that transformers have this extra flexibility is kind of Like being used by transformers to learn some new features which are useful But the thing I find interesting here is that we didn't see that much of an improvement that VIT has brought compared to CNNs So it kind of surprises me that where is all of this additional expressivity being used up when doing for example classification tasks So that's an open question at least in my mind So even though it has so much more flexibility and it's obviously learning some different as we saw here as well So we saw that in the first 75 layers or something we already learned similar 
features as ResNet So the question is what happens with all of that additional expressivity and why don't we see higher accuracy scores compared to CNNs Why are they still on pair And one thing I forgot to mention here for the data part So I'm not sure why they didn't mention anywhere in this paper the sharpness aware minimization objectives So there's a recent paper you can check out my video I'll link it somewhere here Where they showed that you don't need to pre-train VATs on JFT-300M which is super expensive You can just use the SAM objective and they have not been mentioning that paper anywhere in this paper So that kind of surprised me if somebody knows the answer feel free to write it down in the comments But yeah I guess it would be interesting to see the same kind of experiments now this time with VAT plus SAM And training only on ImageNet instead of training on JFT I guess that would be something I'd like to get an answer to Ok again as I said a bunch of diagrams not necessarily connected so stick with me Hopefully we'll have a bigger picture at the end of this video The second thing they do here is they try to visualize the receptive field of CNNs versus VATs And you're obviously hopefully familiar with how CNNs work and you can see that we have linear increase of basically of the receptive field in CNNs And the reason is let me just kind of quickly backtrack here So let me draw a simple 4x4 pixel image here So we have 4x4 pixels so something like this And so how CNNs work is basically you have a kernel let's assume we have a 2x2 window And we're gonna like do multiplication here then we're gonna slide the window to the right if we assume that the stride is equal to 1 We'll have something like this and all of these 4 pixels we're gonna be mapped into a single feature The second position will be also mapped to like a second feature here The third position will be mapped to yet another feature etc So what happens is that after we do this for the whole image What happens is that the next window that works in this layer so this was layer 0 let's say this is layer 1 So now when we have a sliding window here because remember this blue dot came from the 4 pixels and this blue dot came from 4 pixels here So that means that this window here will actually be attending to 3x3 to this span of the image And you can imagine extrapolating that into deeper layers This happens in every single layer and that's why we have like a linear increase of the receptive field in CNNs On the other hand VATs do not work like CNNs obviously and they can immediately attend whatever pixel in the image in whatever layer of the VAT So and that's kind of reflected here you can see there is a sudden jump here where all of a sudden every token can attend every token And it's kind of I don't like these diagrams to be honest it's kind of hard to understand them in O of 1 what this exactly means But like they basically calculated some absolute values of gradients of the central token of the feature map with respect to the input tokens And that's how they kind of and then they do the averaging but basically you can see these like bright shadows here Basically mean that they are attending every single input token Okay so the receptive field is basically the whole image whereas here it takes some takes a while and takes a deeper layer only there will have like a neuron attending the whole input image Okay let's continue and see some other interesting results so I like this result a lot and let me try and explain what 
Okay, let's continue and see some other interesting results. I like this result a lot, so let me try to explain what it actually depicts. To read the colors on this diagram, let me briefly recap how a transformer layer looks. Let's call this block F, it's just a transformation block, stick with me here, and we have a skip connection around it; those two get added up and then we move on to the next layer. F is either the MLP or the self-attention sub-module, so each transformer block consists of the self-attention sub-module followed by the MLP sub-module. Z is the input features, and after we transform them we have F(Z). What they plot here is simply a ratio of norms: we take the L2 norm of Z and divide it by the norm of F(Z).

Having that in mind, let's interpret the diagram. Yellow means a high value, and if we focus on the zeroth token, which is the CLS token, the classification token in ViT, we can see there is a spike in the lower layers. Given what this plot shows, that means the norm of Z is much higher than the norm of F(Z), i.e. much more of the signal is being passed through the skip connection; the skip connection carries a lot of the information along the CLS token route. Then, from maybe the fifth layer onwards, the brightness goes down, which means this flips and the information is now being passed along the main branch, that's what they call it in the paper, the long branch, as opposed to the skip connection, the short branch. And we can see that the reverse happens for all of the other tokens in the ViT: initially more information is carried along the long branch, and later on Z increases, so the skip connections carry more information. To be honest, I don't know what to make of this, except that there's a really interesting symmetry going on, and this transition, this change of phase. There is a complementarity between the CLS token and all of the other, let's call them spatial, tokens. I'm not sure why this is, but it's an interesting dynamic in any case.
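To make that concrete, here is a minimal sketch of the ratio being plotted, with a randomly initialized MLP sub-block standing in for a real, trained ViT block; in the paper this is of course measured inside the trained model and averaged over data.

```python
import torch
import torch.nn as nn

def branch_ratio(z, f):
    # ||z|| / ||f(z)|| per token: how much signal the skip connection carries
    # relative to the residual branch f (the attention or MLP sub-block).
    fz = f(z)                                  # (batch, tokens, dim)
    return z.norm(dim=-1) / fz.norm(dim=-1)    # (batch, tokens)

dim = 64
mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
z = torch.randn(2, 197, dim)                   # 1 CLS token + 196 patch tokens
ratios = branch_ratio(z, mlp)
print(ratios[:, 0].mean().item(), ratios[:, 1:].mean().item())   # CLS route vs. spatial tokens
```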
Okay, so the next thing they show is that, since we saw the features stay fairly uniform through the whole ViT, the skip connections are fairly important for keeping that uniformity of representations. They show that after removing a certain skip connection, the performance drops by about 4%, and we get these diagrams, which tell us that the deeper layers and the shallower layers now look very different from each other compared to, for example, here. So there is some qualitative change that happens if we remove a skip connection, and performance obviously suffers a lot; skip connections are definitely a vital part of ViTs, so it seems.

Let's continue. The next thing they show is that the tokens other than the CLS token preserve spatial information in ViTs. The reason they do this is that, if ViTs prove to be better than CNNs, we'll eventually want to use them not just for classification but also for tasks that require more global spatial awareness, like object detection or semantic segmentation. So that's why they run this experiment, and they show that the similarity between tokens in deeper layers and tokens of the input image is localized. Let me break that down: this is your input image, call it X, you break it into patches and pass it through the ViT. Then we focus on a certain layer, maybe layer 13 or whatever, take the token corresponding to a specific patch, maybe this patch here, and compute its similarity with the input image. We can see that the response is highest exactly where this token sits, which means the spatial information is preserved even in the deeper layers, that's the whole point.

The only odd part is when they take a token that's on the edge of the image: for some reason it attends to all of the edges of the input image, and to be honest I haven't seen an answer in this paper for why that is. My first guess, even though I know it's incorrect, would be something like padding artifacts: in a CNN, when you slide the windows, you have to pad the image, and you can see artifacts on the edges. But here we have a ViT, which works completely differently, so it's kind of funny to see this emerge. Keeping in mind that ViT flattens the input image tokens in raster order, if we flatten them out we'll have a bunch of low-response pixels, then a single pixel with a high response, then a bunch of low ones, then high again, so there's this periodic pattern going on, which is kind of funny to be honest. If you have any hypothesis why this may be, feel free to write it down in the comments, I guess it's an open question.

The whole point is that if we compare this to CNNs, CNNs are way less localized than ViTs, and they quickly show that the reason is the use of global pooling in the CNN. After a couple of CNN layers, at the end, you just do global pooling across the features, and that destroys your spatial information; that's why CNNs, at least those with global average pooling, are poor at spatial localization. They also show that if you do the same thing to a ViT, i.e. instead of training it with a CLS token you add global average pooling and stick a classification head on top, that ViT will also have poor spatial awareness.
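To make the localization experiment above a bit more concrete, here is a rough sketch; note that the paper measures similarity between layers with CKA, and here I'm using plain cosine similarity on random stand-in features, so this only illustrates the shape of the computation.

```python
import torch
import torch.nn.functional as F

def localization_map(input_tokens, deep_tokens, patch_idx, grid_h, grid_w):
    # Similarity between one deep-layer token and every input-layer token,
    # reshaped back into the patch grid. Cosine similarity is a crude stand-in
    # for the CKA-based maps used in the paper.
    q = deep_tokens[patch_idx]                                    # (dim,)
    sims = F.cosine_similarity(q[None], input_tokens, dim=-1)     # (N,)
    return sims.reshape(grid_h, grid_w)

grid_h = grid_w = 14
inp = torch.randn(grid_h * grid_w, 768)                 # stand-in for input-layer tokens
deep = inp + 0.1 * torch.randn_like(inp)                # pretend the deep layer barely changed them
print(localization_map(inp, deep, patch_idx=5, grid_h=grid_h, grid_w=grid_w).argmax())  # -> 5
```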
The next diagrams are fairly interesting. Again, I'm going to sketch a ViT for the sake of argument: we have the output token corresponding to the CLS token, and we have all of the other output tokens, as many as there are patches in the input image. All of these are vectors, and what they do is put a linear layer, a linear probe, on top of those vectors and train a classifier. What they show is that if you have a ViT trained with the CLS token, the test accuracy of probes on the other tokens will be super low. The reason is that those tokens are, in a sense, fairly dumb: they haven't been trained to understand the global context of the image, so if you stick a linear probe on top of them and train a classifier that way, it's going to be a poor classifier, and we can see that in the test accuracy. The probe on the CLS token itself works really nicely, but if you average across all of the tokens together with the CLS token, the test accuracy drops, even though the CLS token has high accuracy by itself. Then they show that if they instead train the ViT with GAP, global average pooling, rather than the CLS logic, so they average the tokens and stick the classification head on top of that, all of the tokens learn some global context of the input image, and the test accuracy of the probes increases heavily because every token now carries global information. When you don't do that, you obviously keep more spatial information, as we saw in some of the previous diagrams.

Finally, they did some experiments on how the amount of data influences, not the performance, but the similarity of the features, as you start diminishing the amount of data the model is fed during training. On one hand we have a ViT trained on JFT-300M, and then, as you can see here, they slowly decrease the amount of data: 30% of JFT, 10%, and 3%. The x-axis is the block index, basically the block ID of the ViT. If we focus on the first, shallower blocks, their similarity to the model trained on the full dataset stays fairly high, so those features remain fairly similar, whereas the deeper layers suffer a lot more. The situation gets even worse as we keep decreasing the data: at 3% these shallow features are still somewhat similar to the fully trained model, whereas the deep features are completely different, so it deteriorates really quickly as we decrease the data. Again, I'd be super happy to see the same results with the SAM objective and how SAM influences this behavior. On the other hand, they have a much smaller model here: remember, 32 means a 32x32 patch, which means fewer tokens and thus a smaller ViT, and we can see its similarity is much more robust to the decrease in data compared to the bigger transformer. I guess it has to do with overfitting; for a bigger model you simply need more data to train those weights. Finally, not much new information here: the more data you have, the better the test accuracy will be in this linear probe setting.
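Here is a minimal sketch of that kind of linear probe, comparing the CLS token against mean-pooled spatial tokens; the features are random stand-ins and there is no train/test split, so it only illustrates the procedure, not the paper's numbers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe_accuracy(feats, labels, num_classes, epochs=200, lr=0.1):
    # Fit a linear classifier on frozen features and report (training) accuracy.
    probe = nn.Linear(feats.shape[-1], num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(probe(feats), labels).backward()
        opt.step()
    return (probe(feats).argmax(-1) == labels).float().mean().item()

# stand-in "ViT outputs": 512 images, 1 CLS token + 196 patch tokens, dim 128
tokens = torch.randn(512, 197, 128)
labels = torch.randint(0, 10, (512,))
cls_feats = tokens[:, 0]            # probe on the CLS token alone
gap_feats = tokens[:, 1:].mean(1)   # probe on mean-pooled spatial tokens
print(linear_probe_accuracy(cls_feats, labels, 10), linear_probe_accuracy(gap_feats, labels, 10))
```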
Yeah, that was pretty much it. Obviously this paper is a bit different, there isn't a single core idea they're trying to validate; what they did instead was dissect the ViT component by component, and I really like the thoroughness of the paper and all of the experiments they did. I guess we knew much of this from previous papers, including the original ViT paper. So, to recap the whole thing: the shallower layers do make use of the fact that they can attend to the whole image, so we have a mix of local and global attention in the shallow layers, and we saw that in the original paper as well. The interesting part, I guess, is that the features are fairly uniform across the whole model, and ViT learns ResNet-like features much earlier than ResNet does, which means there is additional expressivity that seems to be left untapped. Again, as in the original ViT paper, we have both global and local attention in the earlier layers, and that deteriorates when we have less data, at least if you're not using the SAM objective and are just training the ViT the classical way, with cross-entropy in the case of classification: then the CNN-like behavior doesn't emerge in the lower layers, we only get global attention there. Then there were the receptive field experiments; the phase-transition behavior of the skip connections is interesting, as is the fact that we need skip connections at all, and the fact that tokens learn and preserve spatial information throughout the whole network, except when global pooling is used; when you train the ViT with a CLS token, the tokens do learn to preserve the spatial information. Yeah, I guess that's pretty much it for this video. If you found it useful, consider subscribing and share it with a friend. I also encourage you to join the Discord community, I strongly believe it's going to become a really vibrant community over the next couple of months, and subscribe to the AI monthly newsletter. Until next time, bye bye!
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=oPfu_Ec5u60
DeepMind DetCon: Efficient Visual Pretraining with Contrastive Detection | Paper Explained
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE ❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video, I cover DetCon: Efficient Visual Pretraining with Contrastive Detection - a novel self-supervised method that achieves SOTA results on various transfer learning tasks. The main idea is to add the semantic segmentation information into the contrastive objective (they used various heuristics to obtain semantic segmentation information in an unsupervised fashion). ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2103.10957 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 News: Discord and AI newsletter! 01:24 Self-supervised learning, BYOL, and SimCLR 03:50 DetCon method overview 11:35 Semantic segmentation heuristics 12:45 Overview of BYOL and SimCLR 20:15 Results 24:10 Impact of segmentation heuristics 27:20 Outro 28:10 Turn on the notification bell, much love! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ TODO: Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR ML PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #detcon #selfsupervised #contrastive
What's up guys, I just came back from vacation and in this video I'm covering Efficient Visual Pretraining with Contrastive Detection, or DetCon for short, by the DeepMind team. But before I even dig into the paper I want to announce a couple of news items I have for you. The first news is I just created a brand new Discord server, so that means we'll be able to drive the engagement in this community so much more, and you'll be able to get answers to your questions without relying on me answering every single question on YouTube, plus all the goodies Discord brings with it. Also, you'll be able to suggest repos I should do code walkthroughs of, because I want to be more bullish going forward about doing those hands-on coding videos, and hopefully you'll find that useful. Anyway, sign up for the Discord, I'll link it down in the description, so check it out. The second news is I'm creating a new monthly AI newsletter where I'm going to cover the hottest, the latest and greatest things that happened in the field of AI on a monthly basis, and I'm going to keep it high level, so that means it will be suitable for engineers, researchers, and entrepreneurs as well. Compared to these videos where I zoom in on a particular paper, there I'm going to give you a brief understanding of what happened over the last month, so if you want to quickly catch up with the things that happened, that's what this newsletter is going to be all about. Again, just sign up for the newsletter, I'm going to link it down in the description. Having said that, let me jump back into the paper and see what's happening here. So DetCon is another self-supervised learning paper, and it basically piggybacks off of the BYOL and SimCLR research with the additional idea of including segmentation mask information in order to create those representations. Okay, so let's start from the main motivation: self-supervised pretraining has been shown to yield powerful representations for transfer learning, but these performance gains come at a large computational cost, with state-of-the-art methods requiring an order of magnitude more computation than supervised pretraining. So the trade-off here is fairly clear. On one hand you don't need expensive labeling and annotation teams, so you can save on cost there, but on the other hand you have to invest more in compute, so your infrastructure costs will rise with some of the older methods. That's the trade-off you kind of have to accept, and you're still not quite going to reach the representations that you get from applying the supervised paradigm, but SSL is definitely getting closer and this paper is a step in the right direction, I'd say. Okay, so as you can see here, on the x-axis we have the number of epochs on ImageNet, so we're pretraining these different methods on ImageNet and then fine-tuning them on COCO and reporting the detection accuracy. You can see that with SimCLR, which is one of the older self-supervised learning methods, the performance is nowhere near as good as with the supervised method, and then we can see that this novel method, DetCon, actually surpasses even the supervised method with about 5x less computation: you need around 200 epochs instead of a thousand epochs to achieve higher accuracy on the COCO detection task.
So I already mentioned, or I think I mentioned, that this paper basically piggybacks off of SimCLR and BYOL, and I'm going to dig a bit deeper into those two, but first of all let me show you how this whole method works. So here it is, and if you've read any of those older papers like MoCo, SimCLR, BYOL, whatever, you'll be fairly familiar with this kind of pipeline. What we do is we have an image, and then we apply two different augmentations, so we'll have augmentation T and augmentation T prime, and as you can see here, here we had some crop, so the elephant is now cropped and zoomed in, and here we had some other types of augmentations. What we do next is apply an encoder, which is your regular CNN — they've used a ResNet throughout this paper — to get these convolutional features. This here is just the spatial extent of the features, and there is a third dimension not shown here, which can be maybe 1k dimensions. Okay, so now the important thing to notice here is that they are using these segmentation masks, and that's the difference compared to BYOL, that's the difference compared to the other SSL methods that we've seen so far. We're going to see how they actually calculate the masks, but for now let's treat that as a black box. Now, the whole idea of the paper is the following: you have an elephant here, as you can see, and these red dots represent the feature vectors that correspond to that spatial extent of the image. Here we have more red dots because the elephant occupies nearly the whole image. Now, in order to form representations, what they do is take those masks, and with that segmentation mask information they can pool exactly those feature vectors which correspond to the red mask, i.e. the elephant mask. Effectively, you take these maybe 1k-dimensional feature vectors and you average over all of those five vectors in this case, or whatever the number of red vectors is, and that's how you get the final representation here. You do the same procedure for all of the other masks. Because this image had different augmentations compared to this one, we still have this yellow object — whatever this is, a human maybe — so we have that vector here, whereas we don't have it here. And now the whole idea is the following: we want to make sure that the red vectors end up with representations such that they are closer together in the vector space, whereas we want to push away the red from the blue vectors here, and again we want to pull together the blue vectors, and because we only have yellow here but not here, for that yellow vector we just push it away from the blue vector and the red vector. So that's the main idea: using these segmentation masks they have a richer learning signal and they are able to do this fine-grained push and pull on top of the standard contrastive learning scheme — I'll add a rough code sketch of this mask pooling right below.
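To make the mask pooling part concrete, here is a minimal PyTorch-style sketch of the idea just described. It is not the DeepMind implementation; the tensor shapes, the feature-grid size, and the helper name `mask_pooled_features` are my own assumptions for illustration.

```python
# A minimal sketch of DetCon-style mask pooling (not the official implementation).
# Assumptions: a ResNet-like backbone producing a 7x7 spatial grid with 2048 channels;
# masks are given as a per-pixel integer segmentation downsampled to the same grid.
import torch
import torch.nn.functional as F


def mask_pooled_features(feats: torch.Tensor, seg: torch.Tensor, num_masks: int) -> torch.Tensor:
    """Average-pool feature vectors under each binary mask.

    feats: (B, C, H, W) convolutional feature map from the encoder.
    seg:   (B, H, W) integer mask ids in [0, num_masks), aligned with the feature grid.
    returns: (B, num_masks, C), one pooled vector h_m per mask.
    """
    B, C, H, W = feats.shape
    # Turn the integer segmentation into binary masks m: (B, num_masks, H*W).
    masks = F.one_hot(seg.view(B, -1), num_masks).permute(0, 2, 1).float()
    feats = feats.view(B, C, -1).permute(0, 2, 1)            # (B, H*W, C)
    pooled = masks @ feats                                    # sum of features under each mask
    counts = masks.sum(dim=-1, keepdim=True).clamp(min=1.0)   # number of active cells per mask
    return pooled / counts                                    # average pooling -> h_m


# Tiny usage example with random tensors standing in for real data.
feats = torch.randn(2, 2048, 7, 7)        # encoder output for a batch of 2 views
seg = torch.randint(0, 16, (2, 7, 7))     # 16 hypothetical mask ids per image
h_m = mask_pooled_features(feats, seg, num_masks=16)
print(h_m.shape)                           # torch.Size([2, 16, 2048])
```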
Okay, so let's dig into some formulas here to make this a bit clearer. Here is just a formulaic representation of the thing I just explained: basically, M is your segmentation mask and it's going to be a binary mask. So for the red mask it's going to have ones in these positions, and all of the other entries will be zeros. Now, as you can see here, h is just a feature vector that came from the CNN, so these here are the h vectors, and for every particular slot this binary mask will decide whether we want to use the feature vector behind it or not. That's simply what is described here: when m equals 1, the mask is active, which means we're going to use that feature vector, and we sum them up and then divide by the number of active units. Let me clarify that: here we have five ones, so we're going to take these five vectors, add them together and divide by 5, which is basically what I already explained — average pooling across those vectors. That's how we form the representations, so that's this part. Now, the second part: as I said, they are piggybacking off of BYOL and SimCLR, so having formed those h_m representations, we now need to form the v_m representations before we actually apply this contrastive detection objective, which I'll explain in a second. Again, these here are only the h_m's, and we have to form the v_m's before we apply the objective. I'm going to explain that briefly: as I said, these formulas here correspond to SimCLR and these ones here correspond to BYOL, and for now let's just treat that as a black box — we somehow calculate these v_m's. What we do next is, as I said, this push and pull, this pulling together and pushing away of these feature vectors. Once we have those v_m's we can do the thing I just explained pictorially: pushing away the vectors that correspond to different objects and pulling together the vectors that correspond to the same object in the image. That is just described formulaically here: v_m and v_m prime correspond to the same mask — m is, for example, the red mask, and prime just means a different augmentation. So this means basically the following. Let's say, just for the sake of argument, that this is now v_m — ignore that it's actually h_m, let's say we preprocessed it using the BYOL or SimCLR head and now we have v_m here. What we want is for this one to be as close as possible to this one; those are two vectors, and we want the dot product between them to be super high, which means they are similar. On the other hand, we want this red vector to be pushed away from this blue vector, and the same for the other vectors. That's the whole idea, and it's captured in this formula: we want the vectors corresponding to the same mask, for example the red elephant mask, to be similar, and we want these negative ones to be pushed away. That means this term gets very negative, and an exponential raised to a big negative number goes to zero, so this will roughly go to zero, and that's how this loss gets minimized, because minus log looks something like this — this is your minus log function — which means you want to push this value here towards one, and that's how the loss is minimized. That's the numerical intuition, but what happens in the process is, as I said, that the vectors corresponding to the same mask get clustered together, whereas all of the other vectors get pushed away from each other. That's the whole idea, nothing fancy there — a rough code sketch of this per-mask contrastive objective follows below.
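Here is a minimal sketch of that per-mask InfoNCE-style objective, assuming we already have the pooled-and-projected vectors v_m and v'_m from the two views. The temperature value and the choice to use only the partner view's other masks as negatives are my assumptions for illustration, not details taken from the paper.

```python
# A rough sketch of a DetCon-style per-mask contrastive loss (assumptions noted above).
import torch
import torch.nn.functional as F


def detcon_style_loss(v: torch.Tensor, v_prime: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """v, v_prime: (num_masks, D) projected mask representations from the two views,
    where row m in both tensors corresponds to the same mask id m."""
    v = F.normalize(v, dim=-1)
    v_prime = F.normalize(v_prime, dim=-1)

    # Similarity of every mask in view 1 against every mask in view 2.
    logits = (v @ v_prime.t()) / temperature       # (num_masks, num_masks)

    # The positive for mask m is the same mask m in the other view (the diagonal);
    # every other mask acts as a negative that gets pushed away.
    targets = torch.arange(v.shape[0])
    return F.cross_entropy(logits, targets)        # -log softmax over the positives


# Usage with dummy data: 16 masks, 128-dimensional projections per view.
v = torch.randn(16, 128)
v_prime = torch.randn(16, 128)
print(detcon_style_loss(v, v_prime))
```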
Now, briefly, before I start giving you an overview of BYOL and SimCLR, let me show you the segmentation masks they've used. Obviously it wouldn't make sense to call it a self-supervised learning method if you presuppose that you have semantic segmentation masks, because those require labeling from humans. So what they've done is try a couple of heuristics. Some of these heuristics for calculating the segmentation masks are super simple and primitive, like this spatial heuristic where you basically cluster and assume that neighboring pixels belong to the same object. They also tried this FH algorithm heuristic and this MCG one — those are not that important, they're just more advanced heuristics compared to the spatial one, obviously — and finally they used human-annotated segmentation masks. They later show some intuitive results that having better segmentation masks, like those, actually improves the method by quite a lot. Okay, so those are the segmentation masks; now let me slowly dig into the BYOL and SimCLR methods. Again, the ideas of many of these self-supervised learning papers are fairly similar. What they do here, again, is you have an image and you apply two distinct augmentations, and that's how we get view t and view t prime of the same image. So that would be something like this, right? You had the elephant picture here and you get two different images — that's the idea. Okay, so once you have those, what BYOL does is it does not use negative samples. That was the novel thing that BYOL did, and it managed to actually create high-quality representations without them collapsing — I'm going to slowly digest what that means, but stick with me for now. As I said, you have a view here, you apply an encoder, which is again usually your CNN, you get some representation, then they have this g_theta, which is basically a shallow MLP network, a projection head, and you get this projected representation, and then they additionally apply this extra MLP predictor, which was actually one of the important ingredients to avoid the collapse. So this asymmetry between the upper branch and the lower branch was an important piece of BYOL. On the other hand, the lower branch is just constructed from the upper branch by using something called an exponential moving average, which means you take snapshots of the upper network over time and average them out exponentially, and that's how you form this f_xi, or however that Greek letter is pronounced — the target network. A tiny sketch of that EMA update is below.
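As a quick aside, a minimal sketch of that exponential-moving-average target update, the way BYOL-style methods typically implement it; the stand-in network and the decay value of 0.99 are assumptions for illustration, not numbers from the paper.

```python
# Sketch of a BYOL-style EMA (exponential moving average) target-network update.
import copy
import torch
import torch.nn as nn

online = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 128))  # stand-in encoder+projector
target = copy.deepcopy(online)              # f_xi starts as a copy of the online network
for p in target.parameters():
    p.requires_grad = False                 # the target branch receives no gradients


@torch.no_grad()
def ema_update(online_net: nn.Module, target_net: nn.Module, decay: float = 0.99) -> None:
    # Each target weight drifts slowly towards the corresponding online weight.
    for p_o, p_t in zip(online_net.parameters(), target_net.parameters()):
        p_t.mul_(decay).add_(p_o, alpha=1.0 - decay)


# Called once per training step, after the optimizer has updated the online network.
ema_update(online, target)
```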
Then you propagate through that target branch and you get this final representation, and the whole goal will be the following: after you do some L2 normalization — so whatever this vector is, let's call it v, they first compute its L2 norm and normalize it by dividing by that norm — they want to make sure that these two representations are the same, basically by using this MSE loss, the mean squared error. Intuitively, what that does is force the network to learn how to abstract the important parts of the image: no matter the augmentations, you want to extract something that's inherently present in those images, and by learning that you learn valuable representations which are agnostic to different augmentations — you can still capture and understand what's in the image. That's the whole point of BYOL, and they showed empirically that by doing this they avoid the collapse. So why would a collapse happen? Well, it's simple: the easiest, most trivial way to minimize this loss function is to always output a constant representation. No matter the views, whatever the images, whatever the augmentations, after you pass them through f_theta you'd get some constant representation — let's call it v_c, a constant vector — and you'd get v_c here as well, which means at the output you get constant representations, and the loss is basically approaching zero without you learning any interesting representations. BYOL avoids this collapse, and back when the paper came out they didn't have a super strong theoretical understanding of why that is, but some recent paper actually explained this a bit better — I forgot the name of the paper. Now, SimCLR is similar but different; I'd say one of the main differences is that it uses negative sampling. What SimCLR does is the following: again we have t and t prime, so we apply different augmentations to the same image, and this time they apply the same encoder in both branches and then the same projection head in both branches, and the whole idea is to make sure that the augmentations belonging to the same image are similar, so their representations are close together in the vector space, whereas representations from different images are further away from each other. Let me try to visualize that a little bit. This is your image, and you have a batch of images. You apply the first augmentation and you get image t1, you apply the second augmentation to the same image and you get t1 prime; now you have a second image, you again apply augmentations and you get image t2 and t2 prime, and you do this n times because you have n images in your batch. Now the whole point will be the following: you take the representation you get from this t1 image after passing it through f and g, and you want to make sure that that representation is close to this one, because they belong to the same image, and then you want to make sure that it's far away from these representations — from this one and from this one as well. You want to push it away from those representations, and the same for every single image in the batch: you push them away, and the only things getting pulled together are those two views of the same image. So that's the whole idea of SimCLR — a small sketch of this batch-wise objective is below.
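A compact sketch of that SimCLR-style batch objective (often called NT-Xent). Again, this is an illustration under my own assumptions (batch layout, temperature of 0.1), not code taken from either paper.

```python
# Sketch of a SimCLR-style NT-Xent loss over a batch of paired views.
import torch
import torch.nn.functional as F


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) projections of the two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2N, D)
    sim = z @ z.t() / temperature                          # all pairwise similarities
    sim.fill_diagonal_(float("-inf"))                      # never contrast a view with itself

    n = z1.shape[0]
    # The positive for row i is its partner view: i+n for the first half, i-n for the second.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)


z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2))
```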
That's just shown formulaically here: you basically have, as I said, the similarity — usually a dot product between those representations — and there's some temperature coefficient, but the whole point at the end is to do what I showed you; this is just a symbolic representation of the thing I just explained. Okay, having understood that, let me now go back to the method, and here we are. So again, here is the SimCLR variant: you can now see that h_m is the representation that comes from f, so from the encoder, and then they just apply the projection head, and that's it — that's everything you need to form the representations for the SimCLR variant, and then you apply this novel objective function. To get the representations for the BYOL variant they additionally add this prediction head which I mentioned, which gives you the asymmetry in the pipeline, and obviously they also use the target network, the one formed from the exponential moving average. So now you can hopefully ground these formulas in the BYOL and SimCLR papers, and again, once you have the representations, those v_m's and v_m primes, you can apply the logic I explained a couple of minutes ago, where you're forcing the vectors that correspond to the same masks to be closer together, whereas you're pushing away the vectors belonging to different masks — and the masks hopefully correspond to different objects, which depends on the quality of your segmentation masks. They did some ablations on how these different mask algorithms affect the performance of the method, and we'll see that in a couple of minutes. Okay, let's finally see the results of this DetCon method. As you can see here, pretraining on ImageNet and then fine-tuning on different tasks like COCO instance segmentation, PASCAL semantic segmentation, Cityscapes semantic segmentation, and NYU depth estimation, in all of these four tasks, which are fairly different, the DetCon method performs way better compared to the supervised and SimCLR baselines. Let's focus for now on one chart and then I'll tell you some interesting things I see here. First things first: you can see here that SimCLR, given enough computation, surpasses the performance of the supervised paradigm, and that's something we saw at the beginning of this paper — given enough compute you're eventually going to get there, but you need a bunch of compute, right? Whereas here, focusing on DetCon, after only 200 epochs you're already better compared to these two paradigms. Okay, the same thing applies pretty much to all of the other tasks; I won't dig into every single chart here. But the interesting thing to notice is that the supervised paradigm completely fails for these two tasks, depth estimation and semantic segmentation, which means that supervised pretraining on ImageNet is not informative enough — the features are not informative enough for these particular tasks — whereas SimCLR and DetCon seem to generalize better to different tasks, which is another bonus point for applying SSL, because you get more generalization out of the method. Okay, I'll skip this table —
it just has the numeric representation of the thing we just saw here. Basically, the main point they're trying to drive home is that with 10x less computation they can achieve performance that's much better than all the other baselines. So here you can see that with only approximately 100 epochs it's already better than SimCLR, which in turn is way better than supervised learning on this Cityscapes semantic segmentation task. That's the whole point. Okay, going forward, let's see what else is there. Here they compare DetCon against various different baselines, not only BYOL and SimCLR but also MoCo, supervised pretraining, SwAV, and other methods, and they show that it consistently outperforms all of those baselines. One thing they worried about is that maybe scaling the number of parameters would give qualitatively different behavior with DetCon compared to the previous baselines, and this basically confirms that the method does scale: the picture does not change, and DetCon, the blue one, still outperforms all of the other baselines like BYOL, SimCLR, and the supervised baseline on these COCO detection and COCO instance segmentation tasks. Moving further, they even show that against this recently published method here, which used a billion images for pretraining and has way more parameters, DetCon still comes out ahead, which is a nice result — they do mention that it's not super comparable because the data SEER uses is way more noisy, so it's not exactly a fair comparison, but it's still an impressive result. Okay, let's see what else is interesting here. They show how using different segmentation masks, so the different heuristics which I briefly described before, improves and changes the performance. As you can see here, using those spatial heuristics — that's this one, let me just go briefly up there, so this is the spatial heuristic, and the upper one here is one-by-one, here we have a two-by-two grid, and here we have four-by-four; that will hopefully help you understand this chart a bit better and what these numbers mean — so here is one-by-one, here two-by-two, ten-by-ten, and five-by-five seems to be working the best on this particular task (a tiny sketch of this grid heuristic follows below). And I guess this is not that surprising: the better the initial segmentation — so FH obviously does better, as you can see on the x-axis — the higher the overlap between the masks the method finds and the GT, which is obviously at one because by default the GT overlaps completely with itself, that's why we have one here, while these other methods have smaller overlap, meaning they are of lower quality. So that basically means the higher the quality of your segmentation masks, the higher the detection accuracy, which is, I guess, intuitive, but it's nice to have a chart proving exactly that. Okay, they did a couple more ablations, pretraining this time on COCO instead of ImageNet, and they still outperform the supervised and SimCLR baselines — that's the point there.
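For completeness, a minimal sketch of what such a spatial grid "segmentation" heuristic can look like: the image plane is simply split into an n-by-n grid and every cell gets its own mask id. This is my own illustration of the idea, not the paper's code, and no particular grid size is assumed to match theirs.

```python
# Sketch of the simplest spatial heuristic: an n x n grid of "segmentation" masks.
import torch


def grid_masks(height: int, width: int, n: int) -> torch.Tensor:
    """Return an (height, width) integer map with n*n mask ids, one per grid cell."""
    rows = torch.arange(height) * n // height      # which horizontal band each pixel falls into
    cols = torch.arange(width) * n // width        # which vertical band each pixel falls into
    return rows[:, None] * n + cols[None, :]       # combine into a single mask id per pixel


seg = grid_masks(224, 224, n=2)    # a 2x2 grid -> mask ids 0..3
print(seg.unique())                # tensor([0, 1, 2, 3])
# This integer map can be downsampled to the feature grid and fed to the
# mask pooling sketch shown earlier.
```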
Finally, they show that using the GT masks helps, which kind of makes sense: increasing the pretraining image resolution, while always testing at the same resolution, consistently improves the performance when you're using GT masks, whereas when you're using FH the performance does not improve with resolution, which I guess also makes sense. They mention that somewhere here in the discussion portion of the paper. Okay, this part: how can this be? One interpretation is that other images give us clean negative examples, because images in COCO depict different scenes; however, it appears that negatives from the same image provide a stronger learning signal, blah blah blah, if we are not pushing features from the same object apart. Positives from the same image are also at least as good as those across segmentations if, again, they are clean, i.e. we are not pulling together features from different objects. So basically, if you don't have perfect segmentation masks, you'll have overlap between classes, which means sometimes you're going to be pushing away two feature vectors even though they were formed from the same object, and that's not something you want to do — you want to push away different objects and pull together the same object. That's a hand-wavy explanation of why you need better segmentation masks. Okay, wrapping up this paper: the main idea, again, is the usage of these segmentation masks in order to form a richer training signal. Whereas SimCLR in this example won't have any semantic segmentation masks, which means SimCLR will just be pulling together these two because they belong to the same image, here we have a much richer signal because of the presence of the semantic segmentation. So that's pretty much it for this video. If you have any suggestion for which repo I should do a code walkthrough of, let me know down in the comments — I was thinking maybe DINO or some of those newer papers that came out — so if you have any suggestion, please let me know down in the comments; I definitely read those comments and you can influence the next video. Other than that, if you found this video useful, consider subscribing and hit that notification button so that you can get notified the moment I upload a new video. I've noticed that only 10% of you have the notification bell on and only 5 or 6% of you are actually receiving those notifications, because your YouTube settings are set in such a manner that you are not receiving them instantaneously. So yeah, definitely toggle on that notification button, and until next time, bye bye.
[{"start": 0.0, "end": 6.6000000000000005, "text": " What's up guys, I just came back from vacation and in this video I'm covering efficient visual pre-training with contrastive detection or"}, {"start": 7.38, "end": 12.16, "text": " DATCON for short by the DeepMind team, but before I even dig into the paper"}, {"start": 12.16, "end": 14.74, "text": " I want to announce a couple of news I have for you"}, {"start": 14.74, "end": 18.34, "text": " So the first news is I just created a brand new Discord server"}, {"start": 18.54, "end": 24.38, "text": " so that means we'll be able to drive the engagement in this community so much more and you'll be able to"}, {"start": 24.38, "end": 31.88, "text": " Get answers to your questions without relying on me answering every single question on YouTube and everything and all the goodies the discord brings with it"}, {"start": 32.14, "end": 34.42, "text": " Also, you'll be able to suggest a repost"}, {"start": 34.42, "end": 41.5, "text": " I should do the code walkthroughs off because I want to be more bullish going forward doing those like hands-on"}, {"start": 42.08, "end": 43.78, "text": " coding videos and"}, {"start": 43.78, "end": 45.5, "text": " Hopefully you'll find that useful"}, {"start": 45.5, "end": 49.54, "text": " Anyways sign up for the discord. I'll just link it down in the description. So check it out"}, {"start": 49.54, "end": 53.3, "text": " So the second news is I'm creating a new monthly AI newsletter"}, {"start": 53.3, "end": 58.98, "text": " Where I'm gonna cover like the hottest the latest and greatest things that happened in the field of AI"}, {"start": 59.339999999999996, "end": 64.3, "text": " On a monthly basis and I'm gonna keep it high level. So that means it will be suitable for engineers"}, {"start": 64.78, "end": 71.24, "text": " Researchers and entrepreneurs as well. So compared to these videos where I'm in a zoom-in state in a particular paper there"}, {"start": 71.24, "end": 76.97999999999999, "text": " I'm gonna give you a brief understanding of what happened over the last month if you want to just quickly catch up with the things that"}, {"start": 76.97999999999999, "end": 77.97999999999999, "text": " Happened"}, {"start": 77.97999999999999, "end": 82.14, "text": " That's what this newsletter is gonna be all about and again just sign up for the newsletter"}, {"start": 82.14, "end": 87.46000000000001, "text": " I'm gonna link it down in the description. So having said that let me jump back into the paper and"}, {"start": 88.34, "end": 90.5, "text": " See what's happening here. 
So"}, {"start": 91.22, "end": 95.82, "text": " that con is another self supervised learning paper and"}, {"start": 96.5, "end": 98.5, "text": " it's just a"}, {"start": 98.58, "end": 106.54, "text": " Basically piggybacks off of dual and seem clear research with initial additional idea of including the segmentation mask"}, {"start": 106.82, "end": 108.94, "text": " information in order to create those representations"}, {"start": 108.94, "end": 113.14, "text": " representations, okay, so so let's start from the main motivation"}, {"start": 113.94, "end": 121.1, "text": " Self-supervised pre training has been shown to yield powerful representation for transfer learning these performance gains come at a large computational cost"}, {"start": 121.62, "end": 128.06, "text": " However with state-of-the-art methods requiring an order of magnitude more computation than supervised pre training"}, {"start": 128.06, "end": 132.06, "text": " So the trade-off here is fairly clear. So on one hand you have"}, {"start": 132.66, "end": 136.94, "text": " Like you don't need like expensive labeling annotation teams"}, {"start": 136.94, "end": 140.78, "text": " So you can kind of save up on the cost like there"}, {"start": 140.78, "end": 144.82, "text": " But on the other hand you have to invest in more like computational"}, {"start": 144.82, "end": 149.98, "text": " So your infrastructure your compute has will rise with some of the older methods"}, {"start": 149.98, "end": 153.14, "text": " so that's the trade-off you you kind of have to accept and"}, {"start": 153.9, "end": 161.18, "text": " You're still not going to approach the the representations that you get from from having super from from applying a supervised"}, {"start": 161.18, "end": 168.94, "text": " Paradigm, but like SSL is definitely getting closer and this paper is right like just in the right direction. I'd say"}, {"start": 169.62, "end": 174.14000000000001, "text": " Okay, so as you can see here on the x-axis"}, {"start": 174.14000000000001, "end": 176.94, "text": " We have like a number of epochs on image net"}, {"start": 176.94, "end": 183.74, "text": " So we're training these different methods on image and and they are then fine tuning them on cocoa and kind of reporting the"}, {"start": 184.06, "end": 189.14000000000001, "text": " Detection accuracy and you can see that using sim clear which is one of the older"}, {"start": 189.14, "end": 191.38, "text": " self supervised learning methods"}, {"start": 191.38, "end": 199.77999999999997, "text": " We can see that the performance is nowhere near as good as with a supervised method and then we can see that this novel method"}, {"start": 199.77999999999997, "end": 201.77999999999997, "text": " So that con actually"}, {"start": 202.1, "end": 203.22, "text": " surpasses"}, {"start": 203.22, "end": 207.7, "text": " Even the supervised method in 5x less computations"}, {"start": 207.7, "end": 212.42, "text": " You need like around 200 epochs instead of thousand epochs to achieve higher"}, {"start": 212.42, "end": 219.33999999999997, "text": " Accuracy on the cocoa detection desk. 
So I already mentioned that or I think I mentioned that"}, {"start": 219.89999999999998, "end": 226.66, "text": " Basically this paper piggybacks off of sim clear and Buell and I'm going to dig a bit deeper into those two"}, {"start": 226.73999999999998, "end": 230.82, "text": " But like first of all, let me kind of show you how this whole method works"}, {"start": 231.01999999999998, "end": 237.06, "text": " So here it is and if you read any of those older papers like moco sim clear Buell"}, {"start": 237.06, "end": 241.48, "text": " Whatever, you'll be fairly familiar with this similar pipeline"}, {"start": 241.48, "end": 243.79999999999998, "text": " So what we do is we have an image"}, {"start": 244.35999999999999, "end": 247.26, "text": " And then we do like two different augmentations"}, {"start": 247.26, "end": 249.16, "text": " So we'll have something like T"}, {"start": 249.16, "end": 252.89999999999998, "text": " Augmentation T and augmentation T prime and as you can see here"}, {"start": 252.89999999999998, "end": 259.84, "text": " So here we had some crop the elephant is now cropped and zoomed in and here we had some other types of augmentations"}, {"start": 260.03999999999996, "end": 262.03999999999996, "text": " So what we do next is"}, {"start": 262.03999999999996, "end": 265.15999999999997, "text": " We just apply encoder which is your regular CNN"}, {"start": 265.15999999999997, "end": 269.28, "text": " They've used ResNet across this paper to gain to get these"}, {"start": 269.28, "end": 276.44, "text": " Convolutional features so this here is just a spatial extent of the features and there is a third dimension here not shown"}, {"start": 276.55999999999995, "end": 279.23999999999995, "text": " Which can be maybe like 1k dimensions there"}, {"start": 279.23999999999995, "end": 285.4, "text": " okay, so now the important thing to notice here is that they are using these segmentation masks and"}, {"start": 286.08, "end": 293.11999999999995, "text": " That's that's so that's the difference compared to Buell. 
That's a difference compared to other SSL methods that we've seen so far and"}, {"start": 293.96, "end": 296.52, "text": " We're gonna see how they actually calculate the masks"}, {"start": 296.52, "end": 299.47999999999996, "text": " but for now just let's let's treat that as a black box and"}, {"start": 300.12, "end": 305.32, "text": " Now that the whole idea of the paper is the following so you have you have an elephant here as you can see it here"}, {"start": 305.79999999999995, "end": 313.4, "text": " And you can see these red dots represent the feature vectors that correspond to that spatial extent of the image here"}, {"start": 313.4, "end": 315.56, "text": " We have more red dots because the elephant"}, {"start": 316.47999999999996, "end": 321.64, "text": " occupies nearly the whole image and so now in order to form"}, {"start": 321.64, "end": 329.32, "text": " representations what they do is they take those masks so they take the information from the mask so with that like"}, {"start": 329.68, "end": 337.56, "text": " Segmentation mask information they can pull exactly those feature vectors which correspond to that red mask or to the elephant mask"}, {"start": 337.56, "end": 343.44, "text": " And so that means effectively you'll use you'll take these like as I said, these are maybe 1k"}, {"start": 344.08, "end": 348.2, "text": " Dimensional feature vectors you'll just take them and you'll do average over all of those"}, {"start": 348.2, "end": 355.44, "text": " 5 vectors in this case here or whatever the number of array vectors here is and that's how you get the final"}, {"start": 356.03999999999996, "end": 360.52, "text": " Representation here. Okay, you do that the same procedure for all of the other masks"}, {"start": 360.52, "end": 364.44, "text": " So because this image had different augmentations compared to this one here"}, {"start": 364.44, "end": 370.56, "text": " We still have this yellow object whatever this is it like a human so that means we have that vector here"}, {"start": 370.56, "end": 374.59999999999997, "text": " Whereas we don't have it here and now the whole idea is the following"}, {"start": 374.6, "end": 382.68, "text": " We just want to make sure that the red vectors get have a representation such that they are closer together in the vector space"}, {"start": 383.16, "end": 388.36, "text": " Whereas we want to make sure to push away the red from the blue vectors here"}, {"start": 388.36, "end": 395.3, "text": " And again, we want to pull the the the blue vectors and we want to because we only have yellow here"}, {"start": 395.3, "end": 400.46000000000004, "text": " But we don't have yellow here. We want to just push away the blue vector and the red vector"}, {"start": 400.46000000000004, "end": 403.16, "text": " So that's the main idea using these segmentation masks"}, {"start": 403.16, "end": 408.56, "text": " They have a richer learning signal and they are able to do this fine-grained push and pull"}, {"start": 409.48, "end": 414.94, "text": " Standard contrastive learning scheme. Okay, so let's dig into some formulas here"}, {"start": 415.68, "end": 421.64000000000004, "text": " To make this a bit clearer. 
So here is just a formulaic representation of the thing"}, {"start": 421.64000000000004, "end": 427.42, "text": " I just explained so basically M is your segmentation mask and it's gonna be a big binary mask"}, {"start": 427.42, "end": 431.20000000000005, "text": " So basically what M will do is the following so M will be something like this"}, {"start": 431.2, "end": 436.82, "text": " So for a red mask, it's gonna have ones here. So these will be ones ones ones ones"}, {"start": 436.82, "end": 439.47999999999996, "text": " Okay, and all of the other ones will be zeros"}, {"start": 441.2, "end": 443.2, "text": " So these will be zeros, okay and"}, {"start": 443.52, "end": 449.08, "text": " Now as you can see here, so H is just a feature vector that came from the CNN"}, {"start": 449.08, "end": 456.24, "text": " So that means so these here are H vectors. Okay, so and as you can see for every"}, {"start": 457.08, "end": 459.08, "text": " Particular slot so for this one"}, {"start": 459.08, "end": 465.52, "text": " The mask this binary mask will decide whether we want to use that feature vector behind it"}, {"start": 465.68, "end": 469.24, "text": " So whether we want to use the feature vector or not, okay, and"}, {"start": 469.88, "end": 471.59999999999997, "text": " That's simply described here as this"}, {"start": 471.59999999999997, "end": 475.68, "text": " So when when M equals 1 that means that the mask is active"}, {"start": 475.68, "end": 481.32, "text": " that means we're gonna use that feature vector and kind of sum them up and then divide by the number of"}, {"start": 482.12, "end": 487.62, "text": " Like active units. So let me kind of clarify that so here we have a 5 times 1"}, {"start": 487.62, "end": 492.96, "text": " So there's gonna be 5 so that means we're gonna take these five vectors add them together and divide by 5"}, {"start": 492.96, "end": 500.58, "text": " Which is basically what I already explained and that's average pooling across those across those vectors. So that's how we form the representations"}, {"start": 500.58, "end": 506.38, "text": " So that's this part now. The second part is as I said, they are piggybacking piggybacking off of"}, {"start": 507.52, "end": 514.0600000000001, "text": " Be all and simpler and so in order to like having formed those HM representations"}, {"start": 514.06, "end": 520.8199999999999, "text": " We now need to form these VM representations before we actually apply this contrastive detection objective"}, {"start": 520.8199999999999, "end": 523.0, "text": " Which I'll explain in a second. Okay, so"}, {"start": 524.0799999999999, "end": 531.64, "text": " Again, these here are only HM s and now we have to form VMS before we actually applied the objective"}, {"start": 531.64, "end": 534.78, "text": " And now I'm gonna briefly explain so as I said"}, {"start": 535.4599999999999, "end": 541.38, "text": " This here these formulas correspond to seem clear and these ones here correspond to be all and"}, {"start": 541.38, "end": 543.18, "text": " then"}, {"start": 543.18, "end": 549.14, "text": " For now, let's just treat it as a black box. 
We somehow like a calculate these VMS and"}, {"start": 549.82, "end": 554.18, "text": " What we do next up is as I said, we do this push and pull"}, {"start": 554.58, "end": 558.72, "text": " This this pooling and pushing away of these feature vectors"}, {"start": 558.72, "end": 564.1, "text": " So once we have those VMS we can then do the thing I just explained pictorially here"}, {"start": 564.1, "end": 571.4200000000001, "text": " So this this pushing away of different of vectors that correspond to different objects and pooling together"}, {"start": 571.84, "end": 578.38, "text": " The vectors that correspond to the same object in the image. So this is just a formulaically described here"}, {"start": 578.74, "end": 581.98, "text": " basically as you can see so VM and"}, {"start": 582.5, "end": 584.5, "text": " this VM prime"}, {"start": 584.94, "end": 589.9, "text": " just correspond to the same mask so M is for example red mask and"}, {"start": 589.9, "end": 594.3, "text": " Prime just means different augmentation. So this means basically the following"}, {"start": 594.3, "end": 599.26, "text": " So we will let's say this is just for the sake of argument. This is now VM just ignore"}, {"start": 599.26, "end": 605.66, "text": " It's it's actually HM, but let's say we pre process it using this fuel or sim clear and now we have VM here"}, {"start": 605.74, "end": 610.86, "text": " So what we want to do is so this will be the VM and we want to make sure that this one is"}, {"start": 611.02, "end": 617.5, "text": " As close as possible to this one. So those are two vectors. We want to make sure that the dot product between those two is"}, {"start": 617.5, "end": 622.86, "text": " Super high that means they are similar on the other hand. We want to make sure that this"}, {"start": 623.62, "end": 625.54, "text": " so this red vector is"}, {"start": 625.54, "end": 629.54, "text": " Pushed away from this blue vector and same for the other vectors"}, {"start": 629.54, "end": 634.42, "text": " So that's the whole that's the whole idea and that's kind of captured in this formula here"}, {"start": 634.42, "end": 636.88, "text": " so we want to make sure that the"}, {"start": 639.22, "end": 640.86, "text": " Vectors corresponding to the same mask"}, {"start": 640.86, "end": 646.54, "text": " So for example the the red mask the elephant mask are similar where we want to make sure that these negative ones are"}, {"start": 646.54, "end": 651.36, "text": " Pushed away. So that means this will get negative that means"}, {"start": 652.3, "end": 657.4599999999999, "text": " Exponential function raised to like a big negative number goes to zero. This will roughly go to zero"}, {"start": 657.4599999999999, "end": 663.98, "text": " Let's say and that means that that's how this loss will get minimized because minus log looks something like this"}, {"start": 664.66, "end": 666.66, "text": " So this is your minus log function"}, {"start": 666.8199999999999, "end": 673.14, "text": " which means you want to push this value here to one and that's how the loss is minimized and"}, {"start": 673.14, "end": 676.66, "text": " By doing that but that so that's a numerical intuition"}, {"start": 676.66, "end": 683.3, "text": " But what happens in that process is that as I said the red vectors or the the vectors corresponding to the same mask"}, {"start": 683.34, "end": 686.74, "text": " Get clustered together. 
Whereas all of the other vectors get kind of"}, {"start": 687.38, "end": 691.9, "text": " Like pushed away from each other and that's the whole idea"}, {"start": 693.1, "end": 695.1, "text": " Nothing, nothing fancy there"}, {"start": 695.54, "end": 700.7, "text": " Now briefly before I start like giving you a some overview of Buell and SimClear"}, {"start": 700.7, "end": 704.5400000000001, "text": " Let me show you the segmentation masks and what I've used. So"}, {"start": 705.26, "end": 710.7800000000001, "text": " Obviously, you don't know it wouldn't make sense to have a self-surprise learning method if you"}, {"start": 711.1800000000001, "end": 716.58, "text": " Presuppose that you have semantic segmentation masks because those require obviously labeling from humans"}, {"start": 716.74, "end": 719.86, "text": " So what I've done is they tried a couple of heuristics"}, {"start": 720.0200000000001, "end": 724.9000000000001, "text": " so some of these heuristics for calculating the segmentation masks are super simple and"}, {"start": 724.9, "end": 732.62, "text": " Like primitive like this one's the spatial heuristic where you basically cluster and assume that the neighboring pixels"}, {"start": 732.8199999999999, "end": 737.74, "text": " Belong to a specific object and so they tried this one. They also tried this"}, {"start": 738.42, "end": 743.62, "text": " FH algorithm heuristic this MCG and those are not that important"}, {"start": 744.62, "end": 752.4, "text": " Basically, they're just some more advanced heuristics compared to the spatial heuristic obviously and finally we they have they used human annotated segmentation masks"}, {"start": 752.4, "end": 757.1999999999999, "text": " And they later showed some intuitive results at having better segmentation masks like this one"}, {"start": 758.12, "end": 763.9599999999999, "text": " Actually improved the methods by by quite a lot. Okay, so those are the segmentation masks now"}, {"start": 763.9599999999999, "end": 767.42, "text": " Let me slowly dig into these Buell and SimClear methods"}, {"start": 768.24, "end": 772.54, "text": " Again, the ideas of many of these self supervised learning"}, {"start": 773.1999999999999, "end": 775.1999999999999, "text": " Papers are fairly similar"}, {"start": 775.2, "end": 782.24, "text": " So what I do here again is you have an image and you do you apply two different distinct"}, {"start": 782.6800000000001, "end": 787.6, "text": " Augmentations and that's how we get view t and view t prime of the same image"}, {"start": 788.08, "end": 790.08, "text": " So that would be something like this, right?"}, {"start": 790.08, "end": 795.08, "text": " So like you had an elephant this picture here and you get two different images. 
So that's that's the idea"}, {"start": 795.24, "end": 800.72, "text": " Okay, so once you have those what Buell does is it does not use negative samples"}, {"start": 800.72, "end": 807.1600000000001, "text": " So that's a that's the that was the novel thing that Buell did and it managed to actually create"}, {"start": 807.4, "end": 813.74, "text": " Like high quality representations without them collapsing and I'm gonna slowly digest what that means but stick with me for now"}, {"start": 814.12, "end": 815.8000000000001, "text": " so"}, {"start": 815.8000000000001, "end": 823.12, "text": " As I said, you have a view here you apply an encoder which is again usually your CNN you get some representation"}, {"start": 823.12, "end": 828.9, "text": " Then they have this G theta which is basically like a shallow MLP network"}, {"start": 828.9, "end": 835.68, "text": " Which is a like a projection head you get this projection representation and then they apply additionally this this additional MLP"}, {"start": 836.0799999999999, "end": 841.04, "text": " Which actually was one of the important ingredients to avoid the collapse"}, {"start": 841.04, "end": 848.72, "text": " So this a symmetry between the upper branch and the lower branch was important fact in in the important piece of information in Buell"}, {"start": 848.88, "end": 850.88, "text": " Okay, so"}, {"start": 850.9599999999999, "end": 857.0, "text": " On the other hand the lower branch is just constructed basically from the upper branch by using something called"}, {"start": 857.0, "end": 865.0, "text": " Exponentially moving average so that means you just take snapshots of the upper network in time and you use the exponential"}, {"start": 865.2, "end": 869.04, "text": " Logic to kind of average those out and that's how you form the these"}, {"start": 869.84, "end": 873.64, "text": " f-x i or whatever the discrete letter is pronounced like and"}, {"start": 874.6, "end": 879.96, "text": " Then you you kind of propagate and you get this final representation and the whole goal will be to"}, {"start": 880.52, "end": 885.04, "text": " After you do some L2 normalization. So whatever this vector is, let's call it V"}, {"start": 885.04, "end": 888.3199999999999, "text": " What they do is they first normalize it using L2"}, {"start": 888.3199999999999, "end": 895.48, "text": " So they just find the L2 of that vector denormalize it by dividing it by the L2 norm and then they want to make sure that"}, {"start": 895.48, "end": 897.48, "text": " these two representations"}, {"start": 898.1999999999999, "end": 900.9599999999999, "text": " Basically by doing this MSC loss mean square error"}, {"start": 901.8, "end": 910.0, "text": " They want to ensure that these vectors are the same and so intuitively what that computational thing does is it forces the network?"}, {"start": 910.0, "end": 913.88, "text": " To learn how to abstract the important parts of the image"}, {"start": 913.88, "end": 922.16, "text": " So no matter the augmentations you want to extract something that's inherently present in those images and by learning that you learn valuable"}, {"start": 922.76, "end": 930.2, "text": " Representations which are agnostic to different augmentations. You can still capture and understand understand what's in the image"}, {"start": 930.24, "end": 932.32, "text": " so that's the whole point of y'all and"}, {"start": 932.84, "end": 939.04, "text": " They showed empirically that by doing this they avoid the collapse. 
So why would the collapse happen?"}, {"start": 939.04, "end": 946.0799999999999, "text": " Well, it's simple like the easiest most trivial way to kind of make sure you are minimizing this this loss function is"}, {"start": 946.36, "end": 952.52, "text": " To always output a constant representation. So no matter the views and give me an image whatever the images"}, {"start": 952.64, "end": 958.4, "text": " Whatever the augmentations are after you pass them through F theta. You'll get some constant representation"}, {"start": 958.4, "end": 965.3199999999999, "text": " Let's call it like V C like constant vector and you get V C here as well. And that means basically"}, {"start": 965.32, "end": 972.1600000000001, "text": " Not here, but like here at the output you get constant representations and that means that this is basically"}, {"start": 973.1600000000001, "end": 980.84, "text": " Approaching zero without you learning any interesting representations. So you'll avoid the disc collapse and"}, {"start": 981.2800000000001, "end": 985.96, "text": " back then when the paper came out like there they didn't have like"}, {"start": 986.96, "end": 992.6, "text": " Super strong theoretical understanding of why that is but some recent paper actually I kind of explained this a bit better"}, {"start": 992.6, "end": 995.08, "text": " I forgot the name of the paper. But yeah"}, {"start": 996.0, "end": 998.0, "text": " now sim clear is"}, {"start": 998.32, "end": 1004.6, "text": " Similar but different it does so I'd say the main one of the main differences is that it uses negative sampling"}, {"start": 1004.6, "end": 1006.9200000000001, "text": " So what's in clear does is the following again?"}, {"start": 1006.9200000000001, "end": 1013.16, "text": " We have t and t prime so we apply different augmentations to the same image and this time they apply the same encoder"}, {"start": 1013.4, "end": 1018.6600000000001, "text": " In both branches and then they apply the same projection head in both branches"}, {"start": 1018.66, "end": 1024.12, "text": " and now the whole idea here is to make sure that the"}, {"start": 1024.6, "end": 1030.68, "text": " Augmentations belonging to the same image are similar. 
So those are printed representations are close together in the vector space"}, {"start": 1031.08, "end": 1035.84, "text": " Whereas like whereas representations from different images are further away from each other"}, {"start": 1035.92, "end": 1039.76, "text": " So let me try and kind of visualize that a little bit"}, {"start": 1039.76, "end": 1042.8799999999999, "text": " So this is your image and you have a batch of images to apply"}, {"start": 1042.88, "end": 1049.96, "text": " You apply the first augmentation and you get image t1 and you apply the second augmentation to the same image you get t1"}, {"start": 1050.2800000000002, "end": 1054.44, "text": " Prime now you have a second image and you again apply"}, {"start": 1055.1200000000001, "end": 1056.64, "text": " augmentation and you get"}, {"start": 1056.64, "end": 1060.72, "text": " Image t2 and you get t2 prime and you do this"}, {"start": 1061.3200000000002, "end": 1067.24, "text": " Obviously n times because you have n images in your batch and now the whole point will be the following"}, {"start": 1067.24, "end": 1073.16, "text": " You take the representation you get from this t1 image after passing it through f and g"}, {"start": 1073.16, "end": 1080.16, "text": " And you want to make sure that that representation is close to this one because they belong to the same image"}, {"start": 1080.16, "end": 1087.1200000000001, "text": " And then you want to make sure that it's far away from these representations from this one from this one as well"}, {"start": 1087.1200000000001, "end": 1088.76, "text": " You want to pull it?"}, {"start": 1088.76, "end": 1095.28, "text": " You want to push it away from those representations and the same for every single image in the batch you're gonna"}, {"start": 1095.28, "end": 1102.0, "text": " Push them away and the only thing that's getting pulled together are those two images here"}, {"start": 1102.08, "end": 1106.56, "text": " So that's the whole idea of sim clear. So that's just kind of"}, {"start": 1107.48, "end": 1109.3999999999999, "text": " formulaically"}, {"start": 1109.3999999999999, "end": 1115.92, "text": " Showed here you basically have as I said, so similarity usually dot product between those representations"}, {"start": 1115.96, "end": 1117.52, "text": " There's some temperature coefficient"}, {"start": 1117.52, "end": 1123.96, "text": " But a whole point at the end is to kind of do what what I showed you here is just symbolic representation of the thing"}, {"start": 1123.96, "end": 1128.96, "text": " I just explained you here. Okay having understand that let me now go back to the"}, {"start": 1130.28, "end": 1136.3, "text": " Method and here we are. So again, here is the here is the sim clear method"}, {"start": 1136.4, "end": 1143.92, "text": " so you can now see that HM is the representation that comes from the F so from the encoder and then they just applied the"}, {"start": 1144.28, "end": 1145.76, "text": " projection head and"}, {"start": 1145.76, "end": 1153.0, "text": " That's it. 
That's that's everything you need to form representation for sim clear and then you apply this this this novel this novel"}, {"start": 1153.0, "end": 1155.0, "text": " object objective function"}, {"start": 1155.12, "end": 1161.6, "text": " to get representations for Buell they additionally just add this this head which I"}, {"start": 1161.92, "end": 1168.44, "text": " Mentioned which gives you this as symmetry in the in the pipeline and obviously they also use this a network"}, {"start": 1168.44, "end": 1173.48, "text": " So that's the network form from the exponential moving average. Okay, so now you can kind of"}, {"start": 1174.2, "end": 1176.66, "text": " hopefully ground these these"}, {"start": 1176.66, "end": 1183.5400000000002, "text": " Formulas to the dual and sim clear papers and again once you have the representations those VMs and VM primes"}, {"start": 1183.5800000000002, "end": 1188.92, "text": " You can do the logic I explained a couple minutes ago where you're forcing the"}, {"start": 1189.74, "end": 1191.74, "text": " vectors for"}, {"start": 1192.02, "end": 1196.46, "text": " Where you're forcing the vectors that correspond to the same masks to be closer together"}, {"start": 1196.94, "end": 1200.5, "text": " Whereas you're pushing away the vectors belonging to different masks"}, {"start": 1200.5, "end": 1208.46, "text": " And whereas masks correspond hopefully to different objects and that depends on the quality of your segmentation masks and they did some ablations on"}, {"start": 1208.74, "end": 1211.86, "text": " How those different so how these different algorithms?"}, {"start": 1212.42, "end": 1216.4, "text": " Affect the performance of the method and we'll see that in a couple minutes. Okay"}, {"start": 1216.94, "end": 1221.94, "text": " Let's see the results finally of this stat-con method and as you can see here"}, {"start": 1222.74, "end": 1229.74, "text": " So having pre training on the image net and then fine-tuning different tasks like cocoa instance segmentation"}, {"start": 1229.74, "end": 1231.74, "text": " Pascal semantic segmentation"}, {"start": 1232.02, "end": 1233.46, "text": " cityscapes"}, {"start": 1233.46, "end": 1240.34, "text": " Semantic segmentation and NYU depth estimation you can see in all of these four tasks, which are fairly different"}, {"start": 1241.14, "end": 1246.78, "text": " Like the that come method performs way better compared to the supervised and sim clear baselines"}, {"start": 1247.18, "end": 1254.82, "text": " So I think let's let's focus for now on one chart and then I'll tell you some interesting things"}, {"start": 1254.82, "end": 1257.26, "text": " I see here. 
So first things first is"}, {"start": 1257.26, "end": 1260.02, "text": " You can see here that sim clear"}, {"start": 1260.74, "end": 1262.74, "text": " after certain amount of"}, {"start": 1262.82, "end": 1264.82, "text": " given enough computation"}, {"start": 1264.94, "end": 1266.46, "text": " like"}, {"start": 1266.46, "end": 1272.62, "text": " Surpasses the performance of supervised like paradigm and that's something we saw in the beginning of this paper"}, {"start": 1272.62, "end": 1278.86, "text": " So given enough compute that's a trade-off you're gonna eventually get there, but you need bunch of compute, right?"}, {"start": 1279.26, "end": 1282.3, "text": " Whereas here you can see focusing on that con that"}, {"start": 1282.86, "end": 1284.34, "text": " like with"}, {"start": 1284.34, "end": 1290.54, "text": " After only 200 epochs, you're already better compared to these two paradigms and you have better performance"}, {"start": 1290.54, "end": 1295.6599999999999, "text": " Okay, so the the the same thing applies pretty much for all of the other tasks"}, {"start": 1295.6599999999999, "end": 1300.86, "text": " I won't be digging into every single chart here. But the interesting thing here to notice is that supervised"}, {"start": 1301.78, "end": 1303.26, "text": " like paradigm"}, {"start": 1303.26, "end": 1307.86, "text": " Completely fails for these two tasks like depth estimation and semantic segmentation"}, {"start": 1307.86, "end": 1313.34, "text": " Which means that the pre training supervised pre training on image net is not informative enough"}, {"start": 1313.34, "end": 1318.06, "text": " so the features are not informative enough for these particular tasks where it's in clear and"}, {"start": 1318.58, "end": 1321.78, "text": " that can seem to generalize better to different tasks, which is"}, {"start": 1322.4599999999998, "end": 1329.9199999999998, "text": " Another plus and bonus point for applying SSL because you get more generalization out of the method. Okay"}, {"start": 1330.6999999999998, "end": 1336.12, "text": " I'll skip this table. It just has like a numeric representation of the thing"}, {"start": 1336.12, "end": 1336.9399999999998, "text": " We just saw here"}, {"start": 1336.9399999999998, "end": 1342.1, "text": " So basically the the main point they're trying to drive here is that with 10x less computation"}, {"start": 1342.1, "end": 1347.02, "text": " They can achieve performance that's much better than all the other baselines"}, {"start": 1347.02, "end": 1350.4599999999998, "text": " So here you can see with only 100 approximately 100 epochs"}, {"start": 1350.5, "end": 1356.82, "text": " It's already better than seem clear which is way better that supervised learning on this city scape semantic segmentation desk"}, {"start": 1356.82, "end": 1360.26, "text": " So that's the whole point. Okay going forward. Let's see. 
What else is"}, {"start": 1361.02, "end": 1362.9399999999998, "text": " there"}, {"start": 1362.9399999999998, "end": 1364.3, "text": " Okay"}, {"start": 1364.3, "end": 1366.3, "text": " here they compare like"}, {"start": 1366.3, "end": 1373.6399999999999, "text": " That con again against various different like baselines not only Buell and seem clear both so moco"}, {"start": 1374.26, "end": 1381.06, "text": " Supervised suave and other methods and they just showed that it consistently outperforms all of those baselines"}, {"start": 1381.94, "end": 1389.12, "text": " So one thing they've worried about is that maybe scaling the number of parameters will kind of give us something qualitatively different"}, {"start": 1389.5, "end": 1392.86, "text": " behavior with that con compared to the previous baselines and"}, {"start": 1392.86, "end": 1398.4599999999998, "text": " this basically just confirms that this method does scale with with scale and"}, {"start": 1399.2199999999998, "end": 1402.58, "text": " Like the performance does not change and it's still all performed"}, {"start": 1402.58, "end": 1408.3799999999999, "text": " So that's con the blue one still outperforms all of the other baselines like we all seem clear and supervised"}, {"start": 1408.9799999999998, "end": 1413.3, "text": " Baseline on these cocoa detection and cocoa instance segmentation tasks. Okay"}, {"start": 1414.82, "end": 1419.4599999999998, "text": " Moving further they even showed that this recently published methods here"}, {"start": 1419.46, "end": 1425.42, "text": " Which has billion which used billion images for for pre training and has way more parameters"}, {"start": 1425.82, "end": 1430.26, "text": " It's still like the the dead con outperforms those methods"}, {"start": 1430.26, "end": 1435.66, "text": " Which is a nice result and they kind of mentioned that it's not super comparable because"}, {"start": 1436.38, "end": 1438.38, "text": " Sierra uses"}, {"start": 1438.38, "end": 1440.5, "text": " The data that's here uses is way more"}, {"start": 1441.22, "end": 1444.8600000000001, "text": " Noisy, it's kind of not a fair comparison, but still it's an impressive result"}, {"start": 1445.66, "end": 1448.54, "text": " Okay, let's see. What else interesting is here"}, {"start": 1448.54, "end": 1452.26, "text": " They showed how using different"}, {"start": 1453.1399999999999, "end": 1457.04, "text": " Segmentation mass so different heuristics which I briefly described before"}, {"start": 1458.1399999999999, "end": 1464.62, "text": " Improves and changes the performance. So as you can see here using those spatial like heuristics, so that's this one"}, {"start": 1464.62, "end": 1466.62, "text": " Let me just go briefly up there"}, {"start": 1466.8999999999999, "end": 1472.94, "text": " So this is a spatial heuristic and here we have so this upper here is one by one here. We have two by two"}, {"start": 1472.94, "end": 1480.78, "text": " Grid and here we have four by four. Okay, so that will hopefully help you understand this chart a bit better"}, {"start": 1481.5800000000002, "end": 1484.22, "text": " What these numbers mean so here is one by one?"}, {"start": 1484.94, "end": 1490.48, "text": " Here is using two by two ten by ten and five by five seems to be working the best on this particular"}, {"start": 1491.5, "end": 1495.9, "text": " Like desk and I guess this is not that surprising. 
So the better the"}, {"start": 1495.9, "end": 1502.46, "text": " Semantic segmentation you do initially so FH obviously has better as you can see on the x-axis"}, {"start": 1502.46, "end": 1504.46, "text": " this means that the overlap between the"}, {"start": 1504.9, "end": 1509.7, "text": " The masks that this method found is way way higher"}, {"start": 1510.5, "end": 1514.1000000000001, "text": " To the GT which is obviously one because by default"}, {"start": 1514.66, "end": 1520.74, "text": " GT overlaps completely with itself. That's why we have one here and these methods have even smaller overlap"}, {"start": 1520.74, "end": 1522.3000000000002, "text": " Which means they are of lower quality"}, {"start": 1522.3, "end": 1529.78, "text": " So that basically means the higher the quality of your segmentation masks the higher the detection accuracy and that's kind of I guess intuitive"}, {"start": 1529.82, "end": 1533.78, "text": " But it's nice. You have a chart like proving exactly that. Okay"}, {"start": 1535.3799999999999, "end": 1537.86, "text": " They did a couple more oblations of"}, {"start": 1538.82, "end": 1543.3, "text": " pre-training this time on cocoa instead of image net and they still have they still"}, {"start": 1544.02, "end": 1551.06, "text": " Outperform the supervised and the same clear like baselines. That's that that's the point there. Finally, they showed that"}, {"start": 1551.06, "end": 1552.34, "text": " using"}, {"start": 1552.34, "end": 1554.78, "text": " GT labels which kind of makes sense"}, {"start": 1555.4199999999998, "end": 1558.7, "text": " Helps helps them even in increasing the the pre-training"}, {"start": 1559.3799999999999, "end": 1562.54, "text": " image resolution and always testing in on this resolution"}, {"start": 1563.5, "end": 1566.7, "text": " Consistently improves the performance when you're using GT masks"}, {"start": 1566.7, "end": 1572.5, "text": " Whereas when you're using FH the performance does not improve with resolution which I guess makes sense"}, {"start": 1573.22, "end": 1578.86, "text": " They mentioned that somewhere here in the in the discussion portion of the paper. 
So, okay this part"}, {"start": 1578.86, "end": 1583.8999999999999, "text": " How can this be so one interpretation is that other images give us clean negative examples?"}, {"start": 1583.8999999999999, "end": 1586.4599999999998, "text": " Because images in cocoa depict different scenes"}, {"start": 1586.4599999999998, "end": 1591.82, "text": " However, it appears that negatives from the same image provide a stronger learning signal blah blah blah"}, {"start": 1591.82, "end": 1595.6999999999998, "text": " I were not pushing features from the same object apart"}, {"start": 1595.6999999999998, "end": 1601.5, "text": " Positives from the same image are also at least as good as those across segmentations if again, they are clean"}, {"start": 1601.5, "end": 1605.1399999999999, "text": " We are not pulling together features from different objects"}, {"start": 1605.14, "end": 1610.74, "text": " So basically if you have if you don't have perfect cementation mass that means you'll have overlap between classes"}, {"start": 1610.74, "end": 1613.8600000000001, "text": " Which means sometimes you're gonna be pushing away"}, {"start": 1614.5800000000002, "end": 1620.3400000000001, "text": " Two feature vectors even though there is an overlap where those two vectors are formed from the same object"}, {"start": 1620.3400000000001, "end": 1624.5800000000002, "text": " So that's something not something you want to do. You want to pull push away different objects"}, {"start": 1624.5800000000002, "end": 1631.0600000000002, "text": " You want to pull together the same object and that's all like a hand wavy explanation of why you need better"}, {"start": 1631.6200000000001, "end": 1633.46, "text": " better cement cementing"}, {"start": 1633.46, "end": 1636.14, "text": " better cementing segmentation masks"}, {"start": 1636.74, "end": 1637.8600000000001, "text": " Okay"}, {"start": 1637.8600000000001, "end": 1639.8600000000001, "text": " wrapping up this paper"}, {"start": 1640.54, "end": 1647.82, "text": " The main idea again is usage of these segmentation masks in order to form a richer training signal"}, {"start": 1647.82, "end": 1653.14, "text": " So whereas Sinclair will in this example just we won't have any cementation masks"}, {"start": 1653.3400000000001, "end": 1661.02, "text": " We won't have any semantic segmentation mask, which means Sinclair will just be pulling together these two because they belong to the same image"}, {"start": 1661.02, "end": 1665.7, "text": " Whereas here we have a much richer signal because of the presence of semantic segmentation"}, {"start": 1665.86, "end": 1669.42, "text": " So that's pretty much it for this video if you have any suggestion"}, {"start": 1669.94, "end": 1675.02, "text": " For which repo I should do a code walkthrough off. 
Let me know down in the comments"}, {"start": 1675.02, "end": 1679.62, "text": " So I was thinking maybe dyno or some of those newer papers that came out"}, {"start": 1679.78, "end": 1681.58, "text": " So if you have any suggestion"}, {"start": 1681.58, "end": 1687.62, "text": " Please let me know down in the comments and I definitely read those comments and you can influence the next video other than that"}, {"start": 1687.62, "end": 1691.6999999999998, "text": " If you found this video useful consider subscribing and hit that notification"}, {"start": 1692.4199999999998, "end": 1696.62, "text": " Button so that you can get notified the moment I upload a new video"}, {"start": 1696.7399999999998, "end": 1704.02, "text": " I've noticed that only 10% of you have the notification bell on and only 5 or 6% of you"}, {"start": 1704.7399999999998, "end": 1712.3799999999999, "text": " Are actually receiving those notifications because your YouTube settings are set in such manner that you are not receiving them instantaneously"}, {"start": 1712.38, "end": 1717.8200000000002, "text": " So yeah, definitely toggle on that notification button and until next time. Bye. Bye. I"}]
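As a concrete reference for the SimCLR-style objective described in the segments above (the two augmented views of the same image are pulled together while views of the other images in the batch are pushed away, using a temperature-scaled similarity), here is a minimal PyTorch sketch. It is an illustration under assumed shapes and a placeholder temperature, not the implementation from any of the papers discussed:

```python
# Minimal SimCLR-style NT-Xent loss sketch (PyTorch). Illustrative only:
# function name, temperature value and tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # temperature-scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # a view is never its own positive
    # the positive for row i is the other view of the same image (i + N or i - N)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    # cross-entropy pulls each view toward its positive and pushes it away from all others
    return F.cross_entropy(sim, targets)
```

Under these assumptions, z1 and z2 would be the projection-head outputs g(f(t(x))) and g(f(t'(x))) for one batch; the DetCon-style training discussed above applies the same pull/push logic to mask-pooled features rather than whole-image features.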
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=BFivrO_PXt4
DINO: Emerging Properties in Self-Supervised Vision Transformers | Paper Explained!
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover DINO (self DIstillation with NO labels) introduced in the "Emerging Properties in Self-Supervised Vision Transformers" paper by Facebook AI. The idea is to see whether using supervised learning was preventing transformers from showing the same kind of results in CV as they demonstrated in the NLP world (where we use self-supervised learning objectives such as (masked) language modeling). It turns out some nice properties emerge such as: * DINO-ViT learns to predict segmentation masks * features are especially of high quality for the k-NN classification ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2104.14294 ✅ Code: https://github.com/facebookresearch/dino ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 DINO main ideas, attention maps explained 05:05 DINO explained in depth 10:30 Pseudocode walk-through 13:55 Multi-crop and local-to-global correspondence 15:15 More details on the teacher network 19:00 Results 25:00 Ablations 27:40 Collapse analysis 30:40 Features visualized and outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dino #facebook #selfsupervised
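The k-NN feature evaluation highlighted above can be sketched as follows: embed a labeled training set with the frozen backbone, then classify each validation image by a vote over its k nearest stored embeddings. This is a rough PyTorch sketch with assumed loader and variable names and a plain majority vote with k = 20 as described in the video; the paper's own protocol may differ in details such as vote weighting.

```python
# Illustrative frozen-feature k-NN evaluation sketch (PyTorch).
# Assumes a frozen `backbone` returning (B, D) features and standard data loaders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_classify(backbone, train_loader, val_loader, k=20):
    backbone.eval()
    bank, bank_labels = [], []
    for imgs, labels in train_loader:                 # build the feature/label bank
        bank.append(F.normalize(backbone(imgs), dim=1))
        bank_labels.append(labels)
    bank, bank_labels = torch.cat(bank), torch.cat(bank_labels)

    correct = total = 0
    for imgs, labels in val_loader:
        feats = F.normalize(backbone(imgs), dim=1)
        sims = feats @ bank.t()                       # cosine similarity to every stored feature
        _, idx = sims.topk(k, dim=1)                  # indices of the k nearest neighbours
        votes = bank_labels[idx]                      # (B, k) labels of those neighbours
        preds = torch.mode(votes, dim=1).values       # simple majority vote
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```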
What's cracking guys? In this video I'm covering the "Emerging Properties in Self-Supervised Vision Transformers" paper, introducing the DINO model from Facebook AI Research, INRIA and Sorbonne University. The main motivation behind the paper was investigating why transformers haven't brought the same kind of competitive advantage to computer vision as they did in the NLP world, and the main hypothesis is that using supervised learning, as we have so far with vision transformers, has been kind of damping down their performance. So they propose to use self-supervised learning, which provides us with a richer training signal, and they mention it here nicely: the resulting ViTs are competitive with convnets but they have not yet delivered clear benefits over them; they're computationally more demanding, require more training data and their features do not exhibit unique properties. On a small tangent, the part about requiring more training data is not correct anymore: I recorded a video where I explained a novel technique that uses a sharpness-aware minimization objective to smoothen out the loss landscape of ViTs, and thus much less data is required now, so that part is no longer true. But anyway, returning to the paper, the reason they want to use self-supervised learning is first because labeling is expensive, and secondly because it's a richer training signal: suppose you have a sentence; predicting just a single label for that sentence is a poor training signal compared to trying to predict every single token, which is the language modeling objective. So that's the idea: let's try to use self-supervised learning and see whether we get some new properties popping up for vision transformers. They say here that in this paper we question if self-supervised learning provides new properties to ViTs that stand out compared to CNNs, and as we'll soon see that is the case, i.e. ViTs do gain properties that convnets do not, even when the convnets are trained with the same DINO procedure. The first of those properties are these segmentation masks. As you see, these are the attention maps that we get from the last layer of the ViT after training with DINO, and you can see that the objects which are salient for us humans are segmented. So we see a bird here, we see a boat here, we see a bicycle, a toothbrush, etc., and that's not something we explicitly told the ViT to predict, so it's an emergent property. By the way, if you don't understand how these are calculated, let me briefly walk you through that part. They say here: we look at the self-attention of the CLS token on the heads of the last layer. Again, I recorded a video on the vision transformer, so if you want to know more details you can check it out. I'm going to briefly explain it here.
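The walkthrough below describes forming these maps by taking the CLS token's query in the last layer, dotting it against the keys of all image patches in each head, and wrapping the flattened scores back into the patch grid. As a minimal sketch, assuming direct access to the last block's per-head query and key tensors (names, shapes and the sqrt(d) scaling are assumptions, not details quoted from the paper):

```python
# Sketch: per-head attention maps of the CLS token over the image patches.
# Assumes q, k are the last block's per-head queries/keys with shape
# (H heads, 1 + P tokens, d), where token 0 is CLS and grid * grid == P.
import torch

def cls_attention_maps(q: torch.Tensor, k: torch.Tensor, grid: int) -> torch.Tensor:
    h, _, d = q.shape
    cls_q = q[:, 0]                                    # (H, d) query of the CLS token
    patch_k = k[:, 1:]                                 # (H, P, d) keys of the image patches
    scores = torch.einsum("hd,hpd->hp", cls_q, patch_k) / d ** 0.5
    attn = scores.softmax(dim=-1)                      # one attention distribution per head
    return attn.reshape(h, grid, grid)                 # wrap the flat scores back into the patch grid
```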
So basically it's a transformer so you have a transformer like blob here and what you do in order to kind of parse images is so first you have your CLS token which is a learnable token and then what you do is you kind of segment your image so you kind of add these patches you kind of create all of these patches and then you for example take it to take these patches in a raster order you flatten them out and you kind of put them here okay and so now what happens is how they actually calculate this attention is the how they calculate how they made this attention map is the following so you kind of process all of these tokens using transformer and then out comes a representation of this CLS token and what you do is and also obviously for every single token here you'll have a corresponding output representation and they just take a this this token here they create a query vector so this will be query and they take these tokens and they just kind of map them to key vectors so we'll have key one here we'll have key two here etc so same here and just basically doing dot product between this query so that corresponds to the CLS token and these keys you'll get scores and then you'll just take these scores so you'll have a you'll get a score for this one you'll get a score for this one and you just take that flattened out structure and you wrap it back into the matrix structure and that's how you get these like these these images so basically these are just scores of these dot products between queries and keys pretty much so that's as simple as that now that was the that was a one of the emergent properties the second one that is super important is that the features that come out of the vision transformer trained by Dino have they perform really well using this k nearest neighbor classification and we'll see that a bit later but those are the two properties that emerge and now let me slowly start explaining the dino architecture itself on a high level what you do you have an input image x and you kind of do different augmentations so you do geometric augmentations such as crop you do photometric augmentations such as color jitter and you apply one set of those augmentations and you get x1 and you apply a distinct set of augmentations and you get x2 okay so now you pass those augmentations through these networks which are so this one will be just vision transformer so that's the whole point of this paper right we have vision transformer we want to kind of test it how it works with self-supervised learning and you form you form the teacher weights by kind of using this ema procedure so exponentially moving average so you'll be taking uh past snapshots of of student uh weights and you'll be using them to form the teacher weights and so now just keeping it on high level what happens here we will we'll output some distribution here which will be a categorical distribution i'm just going to draw it as continuous one because it's easier but it's softmax it's categorical and the whole point will be to kind of match these two distributions so we'll have one coming from the teacher as well and the whole goal will be to kind of make sure these two are the same and why is that so the intuition behind this procedure is so no matter how you kind of deform how you augment your image uh you want to make sure that the student and teacher learn how to extract such representations so that they are invariant to those augmentations so again even though these two uh images uh are completely different because they are 
different crops different augmentations we'll end up with the same representations because we'll have same distributions and so that means we're extracting relevant information from the image that's the whole point and those features turn out to be super valuable on downstream tasks now going in a bit more detail first how we enforce the uh the these two distributions to be the same is via cross entropy loss so that's this thing here secondly uh important details here is we're using a temperature coefficient here in softmax if you're not familiar with the temperature does is basically let's say you have a distribution something like this and basically so this will be your mode of the distribution if you apply now the temperature and if we let the temperature go down to zero we'll basically end up with something like this so we'll have a probability of one for this mode and everything else will drop down to zero that's also called sharpening if you see that expression now you know what it is basically as you drop down the temperature you'll be sharpening distribution so that only the mode peaks and everything else drops down so that's the idea there uh and on the other hand we have this this centering which is vital for this method otherwise we'd got we would get a like a collapse of the representations and i'll explain those in a bit but basically um we need uh we need these the centering to to avoid collapse and um so briefly like what collapses is um the the easiest thing the easiest thing this this this network could do the system could do in order to make sure that they're the output distributions are the same is to just kind of output the uniform distribution for every single input so they'll just kind of learn to output something like this so this is your distribution and they'll just learn how to output the uniform one and this one will also learn how to output the uniform one and then trivially the loss goes to zero but that's obviously not what we want because that's that's a collapse a second type of collapse would be uh if you instead have just uh like a peak on a specific uh position on specific dimension so here for example you having a peak and on the same position from the teacher network as well so these are two types of collapses that happen and in order to avoid them what they do is they apply this centering and what centering is is you um just basically um so uh let me start like this we have image here and it's usually a batch of images so we'll have a like a b number of these images and so the vector here the tensor here will be b comma k okay where k is just the dimensionality of the representation and we have b of those so what to do is you'll you'll be calculating this c vector so the centering vector by just kind of taking a mean from all of these representations so just take a mean over over over the b dimension over the zero dimension and so you'll end up with a k one vector and you'll additionally be using exponential moving average to calculate this c vector so that means that also the previous batches will be influencing the value of this c vector and then what you do is you just subtract c from all of these uh b representations and then you pass them through softmax that also has temperature coefficient uh second important detail here is this stop gradient operator so that basically means we'll be propagating uh the loss uh like uh the errors through the student network only because that makes sense teacher is only is formed by using this exponentially moving 
average uh that's that's it hopefully uh that made that made sense um now i'm gonna walk you through the pseudocode that will maybe help you further understand this uh and yeah so here it is we have the teacher network we have the student network we iterate through the data set so x is just a batch of images you apply some augmentations here you apply a different set of augmentations here you end up with x1 and x2 okay so those are two batches with different augmentations applied to them uh then you do a feed forward through the student network and you do a feed forward through the teacher network and you end up with these logits here okay and finally uh what you do here you can see it's kind of symmetric so we're using t1 which is a teacher logit from the this x1 batch of augmented images and we use this uh s2 which is us which which came from this second group of augmented images uh being feed through the the student network so it's basically just symmetry stuff um so let's see let's see how how h is implemented basically here we can see t gets detached so the logits of the teacher gets detached and that's just an implementation of the stop gradient in pytorch uh then we apply softmax and as you can see here so tps is just a temperature parameter for the student um that that's that's what makes it sharper and uh then we just apply softmax and that's the the output distribution we had here so that that's this one and on the other hand we had here we we have the c vector so we'll be subtracting it as i said so that's the centering part and then we have then we apply the sharpening part so that's the this part here and we get t which is the output distribution so that the categorical distribution that came out of from teacher and finally we just apply as you can see here cross entropy and we do a mean because it's a batch of different uh images okay um that's pretty much it hopefully this helped a bit backward we'll calculate the actual gradients for all of the weights in the network update just updates the student network as you see so we only update student network and then here are the two exponentially moving average updates one updates the teacher network as you can see here l times the teacher network's parameters and y minus l times the student network parameters uh on the other hand we have here the the centering vector so t1 and t2 remember those are just logits so these are just logits so we have the these are bk tensors and they'll just concatenate those so that means they'll form basically if you have uh let's let's make it like this so we have b here and k is here okay so that's t1 and then we have t2 which they'll just concatenate here and then they do like a simple mean across this dimension okay and that's how they get the the the the the c vector plus they have as you see the exponentially moving part so they'll be using c's from other batches from the previous batches that's how they form the c vector and that's it in a nutshell that's those are all of the nitty gritty details um but again the main picture is very very simple the idea is to form such representations so that we have the same or similar distributions at the output and uh like it doesn't matter what kind of augmentations we applied those are still the same okay um a couple more interesting points to make here is uh they mentioned here that all crops are passed through the student while only the global views are passed through the teacher therefore encouraging local to global correspondences so that means the 
following so let's say we have an image here and we have let me draw something inside like maybe a chat programmer okay like a guy with glasses he's buffed okay like this and what you'll do is some of the crops will be uh maybe uh consisting out of more than 50 of the image something like this some of those crops may be smaller maybe like taking only one part one smaller part of the image and you'll be passing the big ones through the that's why they say here you'll be passing the big ones through the teaching network and you'll be passing both types of crops through the student network and again the idea is if you can kind of infer from this small crop if you can infer uh like relevant information that contains the same information as this bigger crop uh that means you you got yourself a robust representation okay and that's the whole idea that this this uh uh like multi-crop strategy enforces uh robust representations which turn out to to to do quite well on those k and evaluations as we'll soon see now let's see some more details about the teacher network here uh they mentioned that unlike knowledge distillation we do not have a teacher uh given a priority and hence we build it from past iterations of the student network so as we saw we were using that uh exponential moving average to form the teacher network so usually in computer vision you have a big uh like cnn maybe something like this so that's a cnn and you train it on a data set maybe something like image net and the whole idea is now having trained that one you freeze the weights and then you have a smaller cnn something like this and so what to do is the following so you take an image from that data set you feed it both here into the big one as well as to your smaller yet to be trained cnn and now instead of using the one hot uh encoding of the label as the target you'll actually be using whatever the the output distribution from this one is so whatever the output distribution here is we'll want to mimic in the smaller one so here we'll want to mimic that one uh by just applying uh k l divergence or or whatnot uh so that's the usual knowledge distillation and here we are not using exactly that because as you as you saw the teacher network is not frozen we're just updating it uh exponentially as the time goes by okay that's that's the first detail uh secondly they they mentioned here that indeed we observed that this teacher performs a form of model in sampling similar to polyac rupert averaging with an exponential decay uh that leads us to this thing and that's we observed that this teacher has better performance than the student throughout the training and hence guides the training of the student by providing target features of higher quality uh so yeah i mean just doing and sampling is a is a is a known method where whereby you can improve the the performance of your model by just taking a bunch of models and then averaging their outputs but here we are doing the averaging in the in the parameter space and that also uh helps obviously uh increase the performance we'll see some curves later on where they quantitatively show that this does happen indeed so that the teacher is always better than the student and so students can can actually learn something from the teacher because if yeah yeah i guess that makes sense um okay so a couple more details worth mentioning is that this whole like system this dino system will be uh free of bet normalization which is a trend we've seen recently with nf nets if you if you if you saw that 
paper uh it came from deep mind the idea was to kind of ditch the bet normalization because it has some nasty properties um and yeah but it ended up um complicating the whole procedure so i'm not sure it uh it got so popular but yeah still um just if you're able to kind of ditch bet normalization that that's cool and here they managed it because uh they say here that therefore when applying dino to vat we do not use any bet normalization in the projection hats uh vat does not have bn by default and they just made sure not to add bn layers uh on top of those linear layers in the projection head okay i already mentioned that these centering and sharpening operations are vital uh and they say here so as shown experimentally centering prevents one dimension to dominate but encourages collapse to the uniform distribution so that's this first type of uh like collapse i mentioned the uniform one so this one and then they mention um while the sharpening has the opposite effect okay so it kind of makes those uh one dimension explode so applying both operations balances their effects which is sufficient to avoid collapse in presence of a momentum teacher okay that's that's it those are those are all of the details of the architecture and how the system works now let's see for the results um first what they do is they take uh resnet as the as the as the baseline and they train it using dino so this is this method here and they show that uh compared to all of the previous uh like self-supervised methods such as i don't know like bootstrap your own latence bjoll or or moco or or simclear etc so they show that they are they perform better using both types of evaluations so the first one is linear one and the second one is k and n evaluation and just uh briefly if you don't understand how how the k and n works it's fairly simple so what you do is you train your your vat so you train your vat here and then you take uh you basically take uh your training set so this is your training set you freeze these weights so these are frozen so let me just you know it like f you froze these and now you just input an image outcomes are representation and because it's from the training set you'll have a label associated with it and you'll you'll just repeat this for a subset of your training set and you'll store these representation label tuples okay and next thing how you actually evaluate now this system is the following so now you have now you have uh like a validation data set and you pass in an image from the validation set outcomes representation but this time you don't have a label so what you do instead is you search through these representations which you've collected from your training set and you just find the k nearest ones and they were using like 20 uh experimentally they found that number works well so you'll just find 20 closest representations and you'll see what their labels are and by majority voting for example you'll just figure out what the represent what the label should be for this one okay that's the that's the whole idea so you just find the closest ones where closeness is defined by i guess they used uh l2 metric and so yeah that that's it that's how you find a label for for this one you just find the closest ones okay and now the interesting part actually here is that uh once you have vit you can see that the knn performance is like the gap between these two is severely like uh decreased whereas here we had a bigger gap that's the the important thing to notice is that using transformers and 
ssl has something that's that's um qualitatively different comparing to using cnn's that's the first thing worth mentioning and the yeah in general they compare with other baselines so here so here they compensate for the architecture here they use whatever architecture just the best methods out there and they still show that uh using linear evaluation as well as the knn they they achieve the best results compared to previous like bjoll simclear etc previous ssl baselines uh those are the initial results now let's see some other stuff here so these two results just support their claim that the features are really of high quality and since they perform really well for k nearest neighbor classification they also should perform well for image retrieval and copy detection and they just kind of uh yeah show that that's the case uh interesting thing here is that uh cool thing about ssl methods is you can use them on whatever data set you have you don't have to have labels and so they just pre-trained dino on this google landmarks uh data set which turns out to to boost performance severely compared to just pre-training dino on on image net and yeah they get results which are comparable to even a supervised baseline and that's cool okay let's continue here and that was the that was the first part so those were the the the that was the emergent property of nice k features that are good for knn focusing on these results here we can see that um different colors just you know different hats of the transformer heads attention heads from the vat uh final layer and we can see that the red head focuses on horses on this zebra's head we can see that the yellow head is focusing on on on the neck as well as on the ears and the blue hat is focusing on on this bonds here okay and um what's interesting is they show here that um even some some occluded objects such as these bushes here are detected and segmented uh i mean it's hard even for me to kind of detect these and the network learns how to recognize the bush and segment that information okay that's super cool okay continuing further they showed that if you take supervised baseline and plot the segmentation maps from from that one and from dino on on different data sets on pascal so they pre-trained this on image net so both dino and the supervised method and then they test it on pascal data set and you can see there's bunch of random like sparse dots here so the segmentation masks are not of the same quality as as as in dino and they also show quantitatively that you can see that this is i think uh how they called it uh jacquard similarity or iou just a fancy name for intersection of a union they show it's much higher compared to supervised baseline so again this testifies that um uh the the dino the vat with dino learns how to extract these segmentation masks uh high quality segmentation masks okay uh in this table they show a general applicability of these features to various downstream benchmarks such as cipher 10 cipher 100 etc so just comparing dino with the supervised baseline we can see it pretty much achieves outperforms supervised baseline on all of these downstream tasks which further testifies that the features learned using dino and vat are of super high quality uh some ablations here uh maybe the most important row i want you to focus on is when we uh expel when we don't use the momentum encoder so that's the teacher network being formed from exponential like by the exponential moving average strategy we can see that it completely fails to 
train so the performance just drops down to almost zero and it does not work so we need we so this method so dino needs momentum encoder and yeah um other ablations like uh multi crop not using multi crop reduces result reduces the performance not using cross entropy reduces performance etc in this plot they show that not that using smaller patches is more important than using a bigger model so if you focus on on this thing here so if you focus on on the eight by eight uh patches and vat small uh as you can see it has better performance than using um vat big so the bigger model but with 16 by 16 patches so the trade-off here obviously exists and uh it's the following basically when you're using eight by eight you need you need a bigger memory footprint and so uh you also kind of uh reduce the throughput of number of images you can feed through the model in a second um but yeah you get some additional performance but all in all you can see these two curves overlap so basically it boils down to how much performance you want and what's the throughput your desired throughput and that's how you can pick the best trade-off between the size of the model and the patch size okay nice um here is the quantitative results i mentioned that the student is always so that the teacher is always better than student you can see as the epochs progress during the training the validation accuracy of the teacher is always higher compared to the student and they just kind of try different strategies of how they update the teacher and it turns out that using momentum is the best strategy using previous epoch so that's something similar to dqn if you watch my video on dqn or you know what dqn is it also uses it also freezes the target network and updates it every couple of epochs or whatever every every end steps in general and so that method also works but it yields somewhat worse performance and using updating the teacher too often so that will be previous iteration or student copy you can see that the method just fails all completely so yeah there are ways to kind of build up this teacher network but the best thing seems to be so far this momentum update these results are pretty interesting let me just kind of explain them again there are two forms of collapse regardless of the input the model output is uniform along all dimensions or dominated by one dimension if we take the cross entropy we saw in the loss so we take the cross entropy loss and kind of split it down into the entropy component and the KL divergence component and plot it here they show the following so if you're using sharpening only and you just ignore you you kind of ditch the centering part in the teacher network this is what happens basically the entropy drops down to zero which means what which means that the network learns to output like something like this you basically have a peak at one dimension and the probabilities one and everything else is zero and so the entropy of this thing is zero so that means that using only sharpening leads to this mode of collapse and using only centering also collapses but as you can see the collapse has a non-zero value so the the the the entropy has non-zero value which means we have something like this which means we have a uniform distribution and this has a non-zero entropy and so that's what we end up with here and it did mention somewhere here let me just check the entropy converges to different values so zero with no centering and minus log one of over k with no sharpening so that's that's the value 
that we'll have here. So this value will be minus log of one over K, the reason being that the support has K values, i.e. the distribution's support has K dimensions, so each probability is one over K so that the distribution sums to one, and if you calculate the entropy you end up with exactly that value. Using both, as you can see, the entropy converges to some value that is neither zero nor minus log of one over K, and that's pretty much it. Taking a look at the KL divergence, the KL becomes zero for these two collapse cases, because both the student and the teacher learn to output either the uniform distribution or the single-peak one, and when we're using both centering and sharpening the KL divergence does not collapse to zero. Those are just two different perspectives on the same thing, namely that the system collapses to one of these modes depending on whether we omit sharpening or centering. Okay, that's pretty much it, I think I explained everything I wanted. Finally, they mention here: in the future we plan to explore if pre-training a large ViT model with DINO on random uncurated images could push the limits of visual features, and I can imagine something along GPT-3 dimensions being trained, this time with a vision transformer, so that will be exciting. The final thing I want to show you are the semantics of the features of a ViT trained with DINO. As you can see here, just extracting those representations and then using t-SNE to plot them in 2D, you can see that the ViT trained with DINO learns how to cluster similar objects: here we can see Model T (that's a car), then car wheel, ambulance, minibus, so we have a vehicles cluster here, and then we have garter snake, king snake, ringneck snake, so we have some snakes here, and we have orangutan and some other monkeys here. All in all, these properties emerge by training the ViT with DINO: we get this nice clustering even though we train the ViT only with the SSL objective, we also got those segmentation masks, and we also got features of high quality as shown by the k-NN performance. So hopefully you liked this video; if you did, consider subscribing and sharing, and until next time, bye bye.
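Pulling together the pieces of the walkthrough above (stop-gradient on the teacher branch, centering and sharpening of the teacher outputs, the symmetric cross-entropy, and exponential-moving-average updates of both the teacher weights and the center), one DINO-style training step could be sketched as below. This is an illustrative sketch: the networks, the optimizer step (omitted), and all hyperparameter values are placeholders rather than the paper's exact settings.

```python
# Illustrative single DINO-style update (PyTorch). Placeholder hyperparameters;
# the optimizer step on the student is omitted for brevity.
import torch
import torch.nn.functional as F

def dino_step(student, teacher, center, x1, x2,
              tps=0.1, tpt=0.04, ema=0.996, center_m=0.9):
    s1, s2 = student(x1), student(x2)                 # student sees both augmented views
    with torch.no_grad():                             # stop-gradient: no grads through the teacher
        t1, t2 = teacher(x1), teacher(x2)

    def H(t, s):
        t = F.softmax((t - center) / tpt, dim=1)      # center, then sharpen the teacher output
        s = F.log_softmax(s / tps, dim=1)
        return -(t * s).sum(dim=1).mean()             # cross-entropy between the two distributions

    loss = 0.5 * (H(t1, s2) + H(t2, s1))              # symmetric over the two views
    loss.backward()                                   # gradients reach the student only

    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(ema).add_((1.0 - ema) * ps)       # EMA update of the teacher weights
        batch_center = torch.cat([t1, t2]).mean(dim=0)
        center.mul_(center_m).add_((1.0 - center_m) * batch_center)  # EMA of the center
    return loss.detach(), center
```

Here `student` and `teacher` are assumed to be a ViT backbone plus projection head producing K-dimensional logits, and `center` a (K,) buffer initialized to zeros; only the two global crops are shown, whereas the multi-crop setup additionally feeds the small local crops through the student only.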
[{"start": 0.0, "end": 4.24, "text": " What's cracking guys? In this video I'm covering emerging properties in self-supervised vision"}, {"start": 4.24, "end": 12.16, "text": " transformers paper introducing Dino model from Facebook AI research, INRIA and Sorbonne University."}, {"start": 12.16, "end": 19.28, "text": " The main motivation behind the paper was investigating why haven't transformers brought"}, {"start": 19.28, "end": 25.76, "text": " such an like a competitive advantage to computer vision as they did in NLP world and the main"}, {"start": 25.76, "end": 31.28, "text": " hypothesis is that using supervised learning as we've have so far with vision transformers have"}, {"start": 31.28, "end": 38.72, "text": " been kind of damping down their performance and so they propose to use self-supervised learning"}, {"start": 39.44, "end": 45.52, "text": " which provides us with a richer training signal and so they mentioned it here nicely. The resulting"}, {"start": 45.52, "end": 51.760000000000005, "text": " VAT are competitive with comnets but they have not yet delivered clear benefits over them."}, {"start": 51.76, "end": 57.44, "text": " They're computationally more demanding, require more training data and their features do not"}, {"start": 57.44, "end": 62.48, "text": " exhibit unique properties. So on a small tangent this part of requiring more training data is not"}, {"start": 62.48, "end": 68.08, "text": " correct anymore so I recorded a video where I explained this novel technique where they use"}, {"start": 68.08, "end": 74.88, "text": " sharpness aware minimization objective to kind of smoothen out the lost landscape of VATs and thus"}, {"start": 75.6, "end": 81.44, "text": " it requires much less data now so yeah that part is not true anymore. But anyways, returning back"}, {"start": 81.44, "end": 85.52, "text": " to the paper, so the reason they want to use self-supervised learning is first because"}, {"start": 85.52, "end": 90.64, "text": " labeling is expensive and secondly because it's a richer training signal because suppose you have"}, {"start": 90.64, "end": 95.92, "text": " a sentence and if you want to predict just a single label for that sentence that's like a"}, {"start": 95.92, "end": 100.72, "text": " poor training signal compared to just trying to predict every single token which is a language"}, {"start": 100.72, "end": 107.03999999999999, "text": " modeling objective and so yeah that's the idea. Let's kind of try and use that"}, {"start": 107.04, "end": 111.68, "text": " self-supervised learning and see whether we get some like new properties popping up for"}, {"start": 111.68, "end": 116.96000000000001, "text": " transformers like for vision transformers and they say here that in this paper we question if"}, {"start": 116.96000000000001, "end": 121.84, "text": " self-supervised learning provides new properties to VATs that stand out compared to CNNs and as"}, {"start": 121.84, "end": 128.88, "text": " we'll soon see that is the case i.e. they do have some properties which even when they use the same"}, {"start": 128.88, "end": 135.44, "text": " training procedure so dyno procedure for comnets they do not achieve the same they do not gain the"}, {"start": 135.44, "end": 142.64, "text": " same properties as VAT. So first of those properties are these segmentation masks. 
As you see these are"}, {"start": 142.64, "end": 148.24, "text": " the attention maps that we get from the last layer of VAT after we are training with dyno and you can"}, {"start": 148.24, "end": 153.28, "text": " see that the objects are segmented which are salient for us humans. So we see a bird here,"}, {"start": 153.28, "end": 159.76, "text": " we see a boat here, we see a bicycle, toothbrush etc and that's not something we explicitly"}, {"start": 159.76, "end": 168.39999999999998, "text": " told a VAT to predict and so it's just an emergent property. And if you're, by the way, if you're not"}, {"start": 168.39999999999998, "end": 173.04, "text": " under if you don't understand how these are calculated let me just kind of briefly walk you"}, {"start": 173.04, "end": 179.28, "text": " through that part. They see here we look at the self-attention of the CLS token on the heads of"}, {"start": 179.28, "end": 184.64, "text": " the last layer. So again I recorded a video on vision transformer so if you want to know more"}, {"start": 184.64, "end": 189.12, "text": " details you can check it out. I'm going to briefly kind of explain here. So basically it's a transformer"}, {"start": 189.12, "end": 196.72, "text": " so you have a transformer like blob here and what you do in order to kind of parse images is so first"}, {"start": 196.72, "end": 203.68, "text": " you have your CLS token which is a learnable token and then what you do is you kind of segment your"}, {"start": 203.68, "end": 209.68, "text": " image so you kind of add these patches you kind of create all of these patches and then you for"}, {"start": 209.68, "end": 214.4, "text": " example take it to take these patches in a raster order you flatten them out and you kind of put"}, {"start": 214.4, "end": 221.20000000000002, "text": " them here okay and so now what happens is how they actually calculate this attention is the how they"}, {"start": 221.20000000000002, "end": 226.24, "text": " calculate how they made this attention map is the following so you kind of process all of these"}, {"start": 226.24, "end": 232.24, "text": " tokens using transformer and then out comes a representation of this CLS token and what you do"}, {"start": 232.24, "end": 237.92000000000002, "text": " is and also obviously for every single token here you'll have a corresponding output representation"}, {"start": 237.92, "end": 244.72, "text": " and they just take a this this token here they create a query vector so this will be query and"}, {"start": 244.72, "end": 250.48, "text": " they take these tokens and they just kind of map them to key vectors so we'll have key one here"}, {"start": 250.48, "end": 258.56, "text": " we'll have key two here etc so same here and just basically doing dot product between this query"}, {"start": 258.56, "end": 265.03999999999996, "text": " so that corresponds to the CLS token and these keys you'll get scores and then you'll just take"}, {"start": 265.04, "end": 269.76000000000005, "text": " these scores so you'll have a you'll get a score for this one you'll get a score for this one and"}, {"start": 269.76000000000005, "end": 274.56, "text": " you just take that flattened out structure and you wrap it back into the matrix structure and"}, {"start": 274.56, "end": 280.32000000000005, "text": " that's how you get these like these these images so basically these are just scores of these dot"}, {"start": 280.32000000000005, "end": 286.64000000000004, "text": " products between queries and keys pretty much so that's as simple 
as that now that was the that"}, {"start": 286.64000000000004, "end": 291.68, "text": " was a one of the emergent properties the second one that is super important is that the features"}, {"start": 291.68, "end": 297.6, "text": " that come out of the vision transformer trained by Dino have they perform really well using this"}, {"start": 297.6, "end": 302.64, "text": " k nearest neighbor classification and we'll see that a bit later but those are the two properties"}, {"start": 302.64, "end": 311.28000000000003, "text": " that emerge and now let me slowly start explaining the dino architecture itself on a high level what"}, {"start": 311.28000000000003, "end": 317.68, "text": " you do you have an input image x and you kind of do different augmentations so you do geometric"}, {"start": 317.68, "end": 324.24, "text": " augmentations such as crop you do photometric augmentations such as color jitter and you apply"}, {"start": 324.24, "end": 329.36, "text": " one set of those augmentations and you get x1 and you apply a distinct set of augmentations and you"}, {"start": 329.36, "end": 335.2, "text": " get x2 okay so now you pass those augmentations through these networks which are so this one will"}, {"start": 335.2, "end": 339.84000000000003, "text": " be just vision transformer so that's the whole point of this paper right we have vision transformer"}, {"start": 339.84000000000003, "end": 345.76, "text": " we want to kind of test it how it works with self-supervised learning and you form you form"}, {"start": 345.76, "end": 351.12, "text": " the teacher weights by kind of using this ema procedure so exponentially moving average so"}, {"start": 351.12, "end": 357.59999999999997, "text": " you'll be taking uh past snapshots of of student uh weights and you'll be using them to form the"}, {"start": 357.59999999999997, "end": 363.92, "text": " teacher weights and so now just keeping it on high level what happens here we will we'll output some"}, {"start": 363.92, "end": 367.76, "text": " distribution here which will be a categorical distribution i'm just going to draw it as"}, {"start": 367.76, "end": 374.08, "text": " continuous one because it's easier but it's softmax it's categorical and the whole point will be to"}, {"start": 374.08, "end": 379.35999999999996, "text": " kind of match these two distributions so we'll have one coming from the teacher as well and the"}, {"start": 379.35999999999996, "end": 384.8, "text": " whole goal will be to kind of make sure these two are the same and why is that so the intuition"}, {"start": 384.8, "end": 390.96, "text": " behind this procedure is so no matter how you kind of deform how you augment your image uh you want"}, {"start": 390.96, "end": 398.56, "text": " to make sure that the student and teacher learn how to extract such representations so that they"}, {"start": 398.56, "end": 404.88, "text": " are invariant to those augmentations so again even though these two uh images uh are completely"}, {"start": 404.88, "end": 409.04, "text": " different because they are different crops different augmentations we'll end up with the same"}, {"start": 409.04, "end": 414.56, "text": " representations because we'll have same distributions and so that means we're extracting"}, {"start": 414.56, "end": 418.64, "text": " relevant information from the image that's the whole point and those features turn out to be"}, {"start": 418.64, "end": 424.88, "text": " super valuable on downstream tasks now going in a bit more detail first how we enforce the uh the"}, 
{"start": 424.88, "end": 428.71999999999997, "text": " these two distributions to be the same is via cross entropy loss so that's this thing here"}, {"start": 429.6, "end": 435.6, "text": " secondly uh important details here is we're using a temperature coefficient here in softmax if you're"}, {"start": 435.6, "end": 440.24, "text": " not familiar with the temperature does is basically let's say you have a distribution something like"}, {"start": 440.24, "end": 446.71999999999997, "text": " this and basically so this will be your mode of the distribution if you apply now the temperature"}, {"start": 447.68, "end": 452.08, "text": " and if we let the temperature go down to zero we'll basically end up with something like this so"}, {"start": 452.08, "end": 458.47999999999996, "text": " we'll have a probability of one for this mode and everything else will drop down to zero that's also"}, {"start": 458.47999999999996, "end": 462.96, "text": " called sharpening if you see that expression now you know what it is basically as you drop down the"}, {"start": 462.96, "end": 467.59999999999997, "text": " temperature you'll be sharpening distribution so that only the mode peaks and everything else drops"}, {"start": 467.59999999999997, "end": 474.64, "text": " down so that's the idea there uh and on the other hand we have this this centering which is vital"}, {"start": 474.64, "end": 480.88, "text": " for this method otherwise we'd got we would get a like a collapse of the representations and i'll"}, {"start": 480.88, "end": 488.88, "text": " explain those in a bit but basically um we need uh we need these the centering to to avoid collapse"}, {"start": 488.88, "end": 496.24, "text": " and um so briefly like what collapses is um the the easiest thing the easiest thing this this"}, {"start": 496.24, "end": 500.96, "text": " this network could do the system could do in order to make sure that they're the output distributions"}, {"start": 500.96, "end": 505.52, "text": " are the same is to just kind of output the uniform distribution for every single input so they'll"}, {"start": 505.52, "end": 510.0, "text": " just kind of learn to output something like this so this is your distribution and they'll just learn"}, {"start": 510.0, "end": 515.84, "text": " how to output the uniform one and this one will also learn how to output the uniform one and then"}, {"start": 515.84, "end": 521.28, "text": " trivially the loss goes to zero but that's obviously not what we want because that's that's a collapse"}, {"start": 521.28, "end": 528.24, "text": " a second type of collapse would be uh if you instead have just uh like a peak on a specific"}, {"start": 528.24, "end": 535.52, "text": " uh position on specific dimension so here for example you having a peak and on the same position"}, {"start": 535.52, "end": 541.28, "text": " from the teacher network as well so these are two types of collapses that happen and in order to"}, {"start": 541.28, "end": 547.52, "text": " avoid them what they do is they apply this centering and what centering is is you um just"}, {"start": 547.52, "end": 554.4, "text": " basically um so uh let me start like this we have image here and it's usually a batch of images so"}, {"start": 554.4, "end": 559.76, "text": " we'll have a like a b number of these images and so the vector here the tensor here will be b"}, {"start": 559.76, "end": 567.12, "text": " comma k okay where k is just the dimensionality of the representation and we have b of those so what"}, {"start": 567.12, "end": 572.8, 
"text": " to do is you'll you'll be calculating this c vector so the centering vector by just kind of taking a"}, {"start": 572.8, "end": 578.72, "text": " mean from all of these representations so just take a mean over over over the b dimension over"}, {"start": 578.72, "end": 586.64, "text": " the zero dimension and so you'll end up with a k one vector and you'll additionally be using"}, {"start": 586.64, "end": 592.24, "text": " exponential moving average to calculate this c vector so that means that also the previous batches"}, {"start": 592.24, "end": 597.68, "text": " will be influencing the value of this c vector and then what you do is you just subtract c from all"}, {"start": 597.68, "end": 603.12, "text": " of these uh b representations and then you pass them through softmax that also has temperature"}, {"start": 603.12, "end": 610.16, "text": " coefficient uh second important detail here is this stop gradient operator so that basically"}, {"start": 610.16, "end": 617.12, "text": " means we'll be propagating uh the loss uh like uh the errors through the student network only"}, {"start": 617.12, "end": 621.68, "text": " because that makes sense teacher is only is formed by using this exponentially moving average"}, {"start": 622.7199999999999, "end": 628.8, "text": " uh that's that's it hopefully uh that made that made sense um now i'm gonna walk you through the"}, {"start": 628.8, "end": 633.6, "text": " pseudocode that will maybe help you further understand this uh and yeah so here it is"}, {"start": 634.48, "end": 638.48, "text": " we have the teacher network we have the student network we iterate through the data"}, {"start": 638.48, "end": 642.96, "text": " set so x is just a batch of images you apply some augmentations here you apply a different set of"}, {"start": 642.96, "end": 648.16, "text": " augmentations here you end up with x1 and x2 okay so those are two batches with different"}, {"start": 648.16, "end": 653.12, "text": " augmentations applied to them uh then you do a feed forward through the student network and you"}, {"start": 653.12, "end": 656.8000000000001, "text": " do a feed forward through the teacher network and you end up with these logits here okay"}, {"start": 657.52, "end": 663.28, "text": " and finally uh what you do here you can see it's kind of symmetric so we're using t1 which is a"}, {"start": 663.28, "end": 671.8399999999999, "text": " teacher logit from the this x1 batch of augmented images and we use this uh s2 which is us which"}, {"start": 671.8399999999999, "end": 677.52, "text": " which came from this second group of augmented images uh being feed through the the student"}, {"start": 677.52, "end": 683.92, "text": " network so it's basically just symmetry stuff um so let's see let's see how how h is implemented"}, {"start": 683.92, "end": 689.68, "text": " basically here we can see t gets detached so the logits of the teacher gets detached and that's"}, {"start": 689.68, "end": 695.04, "text": " just an implementation of the stop gradient in pytorch uh then we apply softmax and as you can"}, {"start": 695.04, "end": 702.16, "text": " see here so tps is just a temperature parameter for the student um that that's that's what makes"}, {"start": 702.16, "end": 708.16, "text": " it sharper and uh then we just apply softmax and that's the the output distribution we had here so"}, {"start": 708.16, "end": 714.8, "text": " that that's this one and on the other hand we had here we we have the c vector so we'll be"}, {"start": 714.8, "end": 
719.4399999999999, "text": " subtracting it as i said so that's the centering part and then we have then we apply the sharpening"}, {"start": 719.4399999999999, "end": 724.16, "text": " part so that's the this part here and we get t which is the output distribution so that the"}, {"start": 724.16, "end": 728.88, "text": " categorical distribution that came out of from teacher and finally we just apply as you can see"}, {"start": 728.88, "end": 735.76, "text": " here cross entropy and we do a mean because it's a batch of different uh images okay um that's"}, {"start": 735.76, "end": 740.7199999999999, "text": " pretty much it hopefully this helped a bit backward we'll calculate the actual gradients for all of"}, {"start": 740.72, "end": 746.0, "text": " the weights in the network update just updates the student network as you see so we only update"}, {"start": 746.0, "end": 750.88, "text": " student network and then here are the two exponentially moving average updates one updates"}, {"start": 750.88, "end": 757.12, "text": " the teacher network as you can see here l times the teacher network's parameters and y minus l"}, {"start": 757.12, "end": 763.0400000000001, "text": " times the student network parameters uh on the other hand we have here the the centering vector"}, {"start": 763.0400000000001, "end": 770.1600000000001, "text": " so t1 and t2 remember those are just logits so these are just logits so we have the"}, {"start": 770.16, "end": 777.6, "text": " these are bk tensors and they'll just concatenate those so that means they'll form basically if you"}, {"start": 777.6, "end": 786.0799999999999, "text": " have uh let's let's make it like this so we have b here and k is here okay so that's t1 and then"}, {"start": 786.0799999999999, "end": 795.68, "text": " we have t2 which they'll just concatenate here and then they do like a simple mean across this"}, {"start": 795.68, "end": 802.9599999999999, "text": " dimension okay and that's how they get the the the the the c vector plus they have as you see the"}, {"start": 802.9599999999999, "end": 808.88, "text": " exponentially moving part so they'll be using c's from other batches from the previous batches"}, {"start": 808.88, "end": 813.8399999999999, "text": " that's how they form the c vector and that's it in a nutshell that's those are all of the nitty"}, {"start": 813.8399999999999, "end": 820.88, "text": " gritty details um but again the main picture is very very simple the idea is to form such"}, {"start": 820.88, "end": 828.08, "text": " representations so that we have the same or similar distributions at the output and uh like it doesn't"}, {"start": 828.08, "end": 835.6, "text": " matter what kind of augmentations we applied those are still the same okay um a couple more"}, {"start": 835.6, "end": 840.64, "text": " interesting points to make here is uh they mentioned here that all crops are passed through the"}, {"start": 840.64, "end": 845.44, "text": " student while only the global views are passed through the teacher therefore encouraging local"}, {"start": 845.44, "end": 851.12, "text": " to global correspondences so that means the following so let's say we have an image here and"}, {"start": 851.12, "end": 857.0400000000001, "text": " we have let me draw something inside like maybe a chat programmer okay like a guy with glasses"}, {"start": 857.6800000000001, "end": 866.48, "text": " he's buffed okay like this and what you'll do is some of the crops will be uh maybe uh consisting"}, {"start": 866.48, "end": 
872.8000000000001, "text": " out of more than 50 of the image something like this some of those crops may be smaller maybe like"}, {"start": 872.8, "end": 878.16, "text": " taking only one part one smaller part of the image and you'll be passing the big ones through the"}, {"start": 878.16, "end": 881.68, "text": " that's why they say here you'll be passing the big ones through the teaching network and you'll be"}, {"start": 881.68, "end": 886.9599999999999, "text": " passing both types of crops through the student network and again the idea is if you can kind of"}, {"start": 886.9599999999999, "end": 894.7199999999999, "text": " infer from this small crop if you can infer uh like relevant information that contains the same"}, {"start": 894.7199999999999, "end": 901.68, "text": " information as this bigger crop uh that means you you got yourself a robust representation okay"}, {"start": 901.68, "end": 908.3199999999999, "text": " and that's the whole idea that this this uh uh like multi-crop strategy enforces uh robust"}, {"start": 908.3199999999999, "end": 913.52, "text": " representations which turn out to to to do quite well on those k and evaluations as we'll soon see"}, {"start": 913.52, "end": 918.0, "text": " now let's see some more details about the teacher network here uh they mentioned that unlike"}, {"start": 918.0, "end": 924.16, "text": " knowledge distillation we do not have a teacher uh given a priority and hence we build it from"}, {"start": 924.16, "end": 928.88, "text": " past iterations of the student network so as we saw we were using that uh exponential moving average"}, {"start": 928.88, "end": 935.6, "text": " to form the teacher network so usually in computer vision you have a big uh like cnn maybe something"}, {"start": 935.6, "end": 940.8, "text": " like this so that's a cnn and you train it on a data set maybe something like image net"}, {"start": 941.76, "end": 948.4, "text": " and the whole idea is now having trained that one you freeze the weights and then you have a smaller"}, {"start": 948.4, "end": 955.36, "text": " cnn something like this and so what to do is the following so you take an image from that data set"}, {"start": 955.36, "end": 962.8000000000001, "text": " you feed it both here into the big one as well as to your smaller yet to be trained cnn and now"}, {"start": 962.8000000000001, "end": 970.8000000000001, "text": " instead of using the one hot uh encoding of the label as the target you'll actually be using"}, {"start": 970.8000000000001, "end": 974.88, "text": " whatever the the output distribution from this one is so whatever the output distribution here is"}, {"start": 975.6800000000001, "end": 981.76, "text": " we'll want to mimic in the smaller one so here we'll want to mimic that one uh by just applying"}, {"start": 981.76, "end": 988.96, "text": " uh k l divergence or or whatnot uh so that's the usual knowledge distillation and here we are not"}, {"start": 988.96, "end": 994.3199999999999, "text": " using exactly that because as you as you saw the teacher network is not frozen we're just updating"}, {"start": 994.3199999999999, "end": 1000.4, "text": " it uh exponentially as the time goes by okay that's that's the first detail uh secondly they they"}, {"start": 1000.4, "end": 1004.88, "text": " mentioned here that indeed we observed that this teacher performs a form of model in sampling"}, {"start": 1005.4399999999999, "end": 1010.56, "text": " similar to polyac rupert averaging with an exponential decay uh that leads us to this"}, 
{"start": 1010.56, "end": 1014.7199999999999, "text": " thing and that's we observed that this teacher has better performance than the student throughout"}, {"start": 1014.7199999999999, "end": 1019.04, "text": " the training and hence guides the training of the student by providing target features of higher"}, {"start": 1019.04, "end": 1026.0, "text": " quality uh so yeah i mean just doing and sampling is a is a is a known method where whereby you can"}, {"start": 1026.0, "end": 1029.9199999999998, "text": " improve the the performance of your model by just taking a bunch of models and then averaging"}, {"start": 1030.48, "end": 1036.32, "text": " their outputs but here we are doing the averaging in the in the parameter space and that also uh"}, {"start": 1036.32, "end": 1040.96, "text": " helps obviously uh increase the performance we'll see some curves later on where they"}, {"start": 1040.96, "end": 1046.0, "text": " quantitatively show that this does happen indeed so that the teacher is always better than the"}, {"start": 1046.0, "end": 1051.9199999999998, "text": " student and so students can can actually learn something from the teacher because if yeah yeah"}, {"start": 1051.9199999999998, "end": 1058.24, "text": " i guess that makes sense um okay so a couple more details worth mentioning is that this whole"}, {"start": 1059.2, "end": 1065.04, "text": " like system this dino system will be uh free of bet normalization which is a trend we've seen"}, {"start": 1065.04, "end": 1070.56, "text": " recently with nf nets if you if you if you saw that paper uh it came from deep mind the idea was"}, {"start": 1070.56, "end": 1077.52, "text": " to kind of ditch the bet normalization because it has some nasty properties um and yeah but it ended"}, {"start": 1077.52, "end": 1084.72, "text": " up um complicating the whole procedure so i'm not sure it uh it got so popular but yeah still um"}, {"start": 1085.68, "end": 1089.92, "text": " just if you're able to kind of ditch bet normalization that that's cool and here they"}, {"start": 1089.92, "end": 1095.6000000000001, "text": " managed it because uh they say here that therefore when applying dino to vat we do not use any bet"}, {"start": 1095.6000000000001, "end": 1101.8400000000001, "text": " normalization in the projection hats uh vat does not have bn by default and they just made sure"}, {"start": 1101.8400000000001, "end": 1109.44, "text": " not to add bn layers uh on top of those linear layers in the projection head okay i already"}, {"start": 1109.44, "end": 1116.16, "text": " mentioned that these centering and sharpening operations are vital uh and they say here so as"}, {"start": 1116.16, "end": 1120.8000000000002, "text": " shown experimentally centering prevents one dimension to dominate but encourages collapse"}, {"start": 1120.8000000000002, "end": 1126.24, "text": " to the uniform distribution so that's this first type of uh like collapse i mentioned the uniform"}, {"start": 1126.24, "end": 1134.0800000000002, "text": " one so this one and then they mention um while the sharpening has the opposite effect okay so it"}, {"start": 1134.0800000000002, "end": 1139.76, "text": " kind of makes those uh one dimension explode so applying both operations balances their effects"}, {"start": 1139.76, "end": 1145.52, "text": " which is sufficient to avoid collapse in presence of a momentum teacher okay that's that's it those"}, {"start": 1145.52, "end": 1150.24, "text": " are those are all of the details of the architecture and how the system 
works now let's see for the"}, {"start": 1150.24, "end": 1156.8799999999999, "text": " results um first what they do is they take uh resnet as the as the as the baseline and they"}, {"start": 1156.8799999999999, "end": 1162.8799999999999, "text": " train it using dino so this is this method here and they show that uh compared to all of the"}, {"start": 1162.8799999999999, "end": 1168.96, "text": " previous uh like self-supervised methods such as i don't know like bootstrap your own latence bjoll"}, {"start": 1168.96, "end": 1176.48, "text": " or or moco or or simclear etc so they show that they are they perform better using both types of"}, {"start": 1176.48, "end": 1182.72, "text": " evaluations so the first one is linear one and the second one is k and n evaluation and just uh"}, {"start": 1182.72, "end": 1188.32, "text": " briefly if you don't understand how how the k and n works it's fairly simple so what you do is you"}, {"start": 1188.32, "end": 1195.3600000000001, "text": " train your your vat so you train your vat here and then you take uh you basically take uh your"}, {"start": 1195.36, "end": 1202.0, "text": " training set so this is your training set you freeze these weights so these are frozen so let"}, {"start": 1202.0, "end": 1208.6399999999999, "text": " me just you know it like f you froze these and now you just input an image outcomes are representation"}, {"start": 1210.0, "end": 1213.52, "text": " and because it's from the training set you'll have a label associated with it"}, {"start": 1214.1599999999999, "end": 1219.1999999999998, "text": " and you'll you'll just repeat this for a subset of your training set and you'll store these"}, {"start": 1219.2, "end": 1226.48, "text": " representation label tuples okay and next thing how you actually evaluate now this system is the"}, {"start": 1226.48, "end": 1233.68, "text": " following so now you have now you have uh like a validation data set and you pass in an image from"}, {"start": 1233.68, "end": 1239.44, "text": " the validation set outcomes representation but this time you don't have a label so what you do"}, {"start": 1239.44, "end": 1245.2, "text": " instead is you search through these representations which you've collected from your training set and"}, {"start": 1245.2, "end": 1250.0800000000002, "text": " you just find the k nearest ones and they were using like 20 uh experimentally they found that"}, {"start": 1250.0800000000002, "end": 1256.24, "text": " number works well so you'll just find 20 closest representations and you'll see what their labels"}, {"start": 1256.24, "end": 1261.3600000000001, "text": " are and by majority voting for example you'll just figure out what the represent what the label"}, {"start": 1261.3600000000001, "end": 1265.92, "text": " should be for this one okay that's the that's the whole idea so you just find the closest ones where"}, {"start": 1265.92, "end": 1271.52, "text": " closeness is defined by i guess they used uh l2 metric and so yeah that that's it that's how you"}, {"start": 1271.52, "end": 1276.48, "text": " find a label for for this one you just find the closest ones okay and now the interesting part"}, {"start": 1276.48, "end": 1284.48, "text": " actually here is that uh once you have vit you can see that the knn performance is like the gap"}, {"start": 1284.48, "end": 1291.44, "text": " between these two is severely like uh decreased whereas here we had a bigger gap that's the the"}, {"start": 1291.44, "end": 1297.36, "text": " important thing to notice is that using 
transformers and ssl has something that's that's um qualitatively"}, {"start": 1297.36, "end": 1302.32, "text": " different comparing to using cnn's that's the first thing worth mentioning and the yeah in general"}, {"start": 1302.32, "end": 1307.4399999999998, "text": " they compare with other baselines so here so here they compensate for the architecture here they use"}, {"start": 1307.4399999999998, "end": 1312.0, "text": " whatever architecture just the best methods out there and they still show that uh using linear"}, {"start": 1312.0, "end": 1317.6799999999998, "text": " evaluation as well as the knn they they achieve the best results compared to previous like bjoll"}, {"start": 1317.6799999999998, "end": 1323.6, "text": " simclear etc previous ssl baselines uh those are the initial results now let's see some other stuff"}, {"start": 1323.6, "end": 1331.4399999999998, "text": " here so these two results just support their claim that the features are really of high quality and"}, {"start": 1332.1599999999999, "end": 1337.04, "text": " since they perform really well for k nearest neighbor classification they also should perform"}, {"start": 1337.04, "end": 1342.3999999999999, "text": " well for image retrieval and copy detection and they just kind of uh yeah show that that's the"}, {"start": 1342.3999999999999, "end": 1348.8, "text": " case uh interesting thing here is that uh cool thing about ssl methods is you can use them on"}, {"start": 1348.8, "end": 1353.9199999999998, "text": " whatever data set you have you don't have to have labels and so they just pre-trained dino on this"}, {"start": 1353.9199999999998, "end": 1360.56, "text": " google landmarks uh data set which turns out to to boost performance severely compared to just"}, {"start": 1360.56, "end": 1366.32, "text": " pre-training dino on on image net and yeah they get results which are comparable to even a supervised"}, {"start": 1366.32, "end": 1372.96, "text": " baseline and that's cool okay let's continue here and that was the that was the first part so those"}, {"start": 1372.96, "end": 1380.24, "text": " were the the the that was the emergent property of nice k features that are good for knn focusing"}, {"start": 1380.24, "end": 1385.92, "text": " on these results here we can see that um different colors just you know different hats of the"}, {"start": 1385.92, "end": 1391.52, "text": " transformer heads attention heads from the vat uh final layer and we can see that the red head"}, {"start": 1391.52, "end": 1397.52, "text": " focuses on horses on this zebra's head we can see that the yellow head is focusing on on on the neck"}, {"start": 1397.52, "end": 1403.68, "text": " as well as on the ears and the blue hat is focusing on on this bonds here okay and um what's"}, {"start": 1403.68, "end": 1409.92, "text": " interesting is they show here that um even some some occluded objects such as these bushes here"}, {"start": 1409.92, "end": 1415.92, "text": " are detected and segmented uh i mean it's hard even for me to kind of detect these and the network"}, {"start": 1415.92, "end": 1422.24, "text": " learns how to recognize the bush and segment that information okay that's super cool okay continuing"}, {"start": 1422.24, "end": 1431.04, "text": " further they showed that if you take supervised baseline and plot the segmentation maps from from"}, {"start": 1431.04, "end": 1438.24, "text": " that one and from dino on on different data sets on pascal so they pre-trained this on image net so"}, {"start": 1438.24, "end": 
1443.6, "text": " both dino and the supervised method and then they test it on pascal data set and you can see there's"}, {"start": 1443.6, "end": 1449.2, "text": " bunch of random like sparse dots here so the segmentation masks are not of the same quality"}, {"start": 1449.2, "end": 1457.04, "text": " as as as in dino and they also show quantitatively that you can see that this is i think uh how they"}, {"start": 1457.04, "end": 1462.88, "text": " called it uh jacquard similarity or iou just a fancy name for intersection of a union they show"}, {"start": 1462.88, "end": 1469.3600000000001, "text": " it's much higher compared to supervised baseline so again this testifies that um uh the the dino"}, {"start": 1469.3600000000001, "end": 1474.8, "text": " the vat with dino learns how to extract these segmentation masks uh high quality segmentation"}, {"start": 1474.8, "end": 1480.3999999999999, "text": " masks okay uh in this table they show a general applicability of these features to various"}, {"start": 1480.3999999999999, "end": 1487.28, "text": " downstream benchmarks such as cipher 10 cipher 100 etc so just comparing dino with the supervised"}, {"start": 1487.28, "end": 1492.56, "text": " baseline we can see it pretty much achieves outperforms supervised baseline on all of these"}, {"start": 1492.56, "end": 1498.24, "text": " downstream tasks which further testifies that the features learned using dino and vat are of super"}, {"start": 1498.24, "end": 1505.52, "text": " high quality uh some ablations here uh maybe the most important row i want you to focus on is when"}, {"start": 1505.52, "end": 1511.6, "text": " we uh expel when we don't use the momentum encoder so that's the teacher network being formed from"}, {"start": 1511.6, "end": 1516.48, "text": " exponential like by the exponential moving average strategy we can see that it completely"}, {"start": 1516.48, "end": 1522.16, "text": " fails to train so the performance just drops down to almost zero and it does not work so we need we"}, {"start": 1522.16, "end": 1530.16, "text": " so this method so dino needs momentum encoder and yeah um other ablations like uh multi crop"}, {"start": 1530.16, "end": 1535.1200000000001, "text": " not using multi crop reduces result reduces the performance not using cross entropy reduces"}, {"start": 1535.1200000000001, "end": 1541.1200000000001, "text": " performance etc in this plot they show that not that using smaller patches is more important than"}, {"start": 1541.1200000000001, "end": 1546.5600000000002, "text": " using a bigger model so if you focus on on this thing here so if you focus on on the eight by eight"}, {"start": 1546.56, "end": 1554.8799999999999, "text": " uh patches and vat small uh as you can see it has better performance than using um vat big so the"}, {"start": 1554.8799999999999, "end": 1561.6, "text": " bigger model but with 16 by 16 patches so the trade-off here obviously exists and uh it's the"}, {"start": 1561.6, "end": 1566.6399999999999, "text": " following basically when you're using eight by eight you need you need a bigger memory footprint"}, {"start": 1566.6399999999999, "end": 1572.3999999999999, "text": " and so uh you also kind of uh reduce the throughput of number of images you can feed through the model"}, {"start": 1572.4, "end": 1577.76, "text": " in a second um but yeah you get some additional performance but all in all you can see these two"}, {"start": 1577.76, "end": 1583.2, "text": " curves overlap so basically it boils down to how much performance 
you want and what's the throughput"}, {"start": 1583.2, "end": 1587.68, "text": " your desired throughput and that's how you can pick the best trade-off between the size of the"}, {"start": 1587.68, "end": 1595.8400000000001, "text": " model and the patch size okay nice um here is the quantitative results i mentioned that the student"}, {"start": 1595.8400000000001, "end": 1601.1200000000001, "text": " is always so that the teacher is always better than student you can see as the epochs progress"}, {"start": 1601.12, "end": 1606.1599999999999, "text": " during the training the validation accuracy of the teacher is always higher compared to the student"}, {"start": 1607.28, "end": 1613.04, "text": " and they just kind of try different strategies of how they update the teacher and it turns out that"}, {"start": 1613.04, "end": 1619.1999999999998, "text": " using momentum is the best strategy using previous epoch so that's something similar to dqn if you"}, {"start": 1619.1999999999998, "end": 1626.8, "text": " watch my video on dqn or you know what dqn is it also uses it also freezes the target network and"}, {"start": 1626.8, "end": 1635.84, "text": " updates it every couple of epochs or whatever every every end steps in general and so that method"}, {"start": 1635.84, "end": 1643.76, "text": " also works but it yields somewhat worse performance and using updating the teacher too often so that"}, {"start": 1643.76, "end": 1650.6399999999999, "text": " will be previous iteration or student copy you can see that the method just fails all completely so"}, {"start": 1650.64, "end": 1656.96, "text": " yeah there are ways to kind of build up this teacher network but the best thing seems to be so far"}, {"start": 1656.96, "end": 1662.16, "text": " this momentum update these results are pretty interesting let me just kind of explain them"}, {"start": 1662.16, "end": 1666.5600000000002, "text": " again there are two forms of collapse regardless of the input the model output is uniform"}, {"start": 1666.5600000000002, "end": 1674.0800000000002, "text": " along all dimensions or dominated by one dimension if we take the cross entropy we saw in the loss"}, {"start": 1674.72, "end": 1679.5200000000002, "text": " so we take the cross entropy loss and kind of split it down into the entropy component and the"}, {"start": 1679.52, "end": 1685.68, "text": " KL divergence component and plot it here they show the following so if you're using sharpening"}, {"start": 1685.68, "end": 1690.8, "text": " only and you just ignore you you kind of ditch the centering part in the teacher network this"}, {"start": 1690.8, "end": 1695.2, "text": " is what happens basically the entropy drops down to zero which means what which means that the"}, {"start": 1695.2, "end": 1702.72, "text": " network learns to output like something like this you basically have a peak at one dimension and the"}, {"start": 1702.72, "end": 1707.84, "text": " probabilities one and everything else is zero and so the entropy of this thing is zero so that means"}, {"start": 1707.84, "end": 1714.56, "text": " that using only sharpening leads to this mode of collapse and using only centering also collapses"}, {"start": 1714.56, "end": 1720.32, "text": " but as you can see the collapse has a non-zero value so the the the the entropy has non-zero"}, {"start": 1720.32, "end": 1724.48, "text": " value which means we have something like this which means we have a uniform distribution"}, {"start": 1725.6799999999998, "end": 1733.1999999999998, "text": " and 
this has a non-zero entropy and so that's what we end up with here and it did mention"}, {"start": 1733.2, "end": 1738.48, "text": " somewhere here let me just check the entropy converges to different values so zero with no"}, {"start": 1738.48, "end": 1745.3600000000001, "text": " centering and minus log one of over k with no sharpening so that's that's the value that we'll"}, {"start": 1745.3600000000001, "end": 1754.32, "text": " have here so here this value will be minus log of one over k the reason being is the support has"}, {"start": 1754.32, "end": 1761.8400000000001, "text": " k values so this is distribution support is has k dimensions so that means this will be one over k"}, {"start": 1761.84, "end": 1768.8799999999999, "text": " so that the integral amounts to one and this means if you just calculate the entropy you'll end up"}, {"start": 1768.8799999999999, "end": 1776.56, "text": " with with this this value here so yeah using both as you can see here the entropy converges to some"}, {"start": 1776.56, "end": 1782.8799999999999, "text": " value but it's neither zero neither this value of minus log one over k and that's it that's"}, {"start": 1782.88, "end": 1792.48, "text": " pretty much it taking a look at the KL divergence the KL becomes zero for these two cases because"}, {"start": 1792.48, "end": 1799.2, "text": " both the student and the teacher learn to output either uniform or this kind of distribution and"}, {"start": 1799.2, "end": 1804.0800000000002, "text": " when we're using both then KL divergence does not collapse to zero those are just two different"}, {"start": 1804.0800000000002, "end": 1809.7600000000002, "text": " perspectives on the same thing and that's the fact that they collapse to either of these modes"}, {"start": 1809.76, "end": 1816.16, "text": " depending on what what do we omit either sharpening or centering okay that's that's"}, {"start": 1816.16, "end": 1821.52, "text": " pretty much it i think i explained everything i wanted finally they mentioned it here in the"}, {"start": 1821.52, "end": 1826.96, "text": " future we plan to explore if pre-training a large vat model with dino on random uncurated images"}, {"start": 1826.96, "end": 1833.84, "text": " could push the limits of visual features and i can just imagine something along the gpt3 dimensions"}, {"start": 1833.84, "end": 1840.32, "text": " being trained and this time with vision transformer so that will be exciting i guess final thing i"}, {"start": 1840.32, "end": 1847.52, "text": " want to show you are the features that like the some semantics of the features of vat trained with"}, {"start": 1847.52, "end": 1854.08, "text": " dino as you can see here just extracting those representations and then using t-sneed to kind"}, {"start": 1854.08, "end": 1861.9199999999998, "text": " of plot them in 2d you can see that dino vat with dino learns how to cluster similar objects here"}, {"start": 1861.92, "end": 1868.24, "text": " we can see like model t that's a car then car wheel ambulance minibus so we have some kind of"}, {"start": 1868.24, "end": 1873.76, "text": " vehicles cluster here and then we have some like gutter snake king snake ring neck snakes we have"}, {"start": 1873.76, "end": 1880.64, "text": " some snakes here we have like orangutan some monkeys here so all in all all of these emerge"}, {"start": 1880.64, "end": 1887.6000000000001, "text": " by training dino by training vat using dino so we get this nice property even though we train vat"}, {"start": 1887.6, "end": 1894.24, 
"text": " only using ssl objective and we also got those segmentation masks and we also got the features"}, {"start": 1894.24, "end": 1900.24, "text": " which are of high quality as shown by knn performance etc so hopefully you like this"}, {"start": 1900.24, "end": 1917.76, "text": " video if you did consider subscribing and sharing and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=BNx-wno-0-g
DETR: End-to-End Object Detection with Transformers | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover DETR, an end-to-end object detection pipeline with transformers. The main 2 ideas are: * Using transformers instead of specialized vision architectures * Using Hungarian matching and loss to train the system e2e ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2005.12872 ✅ Code: https://github.com/facebookresearch/detr ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro: DETR main ideas 00:45 Non-max suppression 03:20 High-level pipeline overview 07:50 Architecture in more detail 12:10 Matching loss 18:35 Hungarian loss 21:00 Results 24:05 Visualization 27:35 Ablations 30:00 Outro: Segmentation results ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #detr #objectdetection #transformers
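Since the description highlights Hungarian matching as one of the two key ideas, here is a rough per-image sketch of what that matching step computes before the transcript dives into it, assuming SciPy's Hungarian solver and PyTorch tensors. The cost weights are illustrative, and the generalized-IoU term used in the actual DETR matching cost is omitted for brevity.

import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes):
    # pred_logits: [N, C+1] class logits, last class = "no object"
    # pred_boxes : [N, 4]   predicted (cx, cy, h, w), normalized
    # gt_labels  : [M]      ground-truth class indices
    # gt_boxes   : [M, 4]   ground-truth (cx, cy, h, w), normalized
    prob = pred_logits.softmax(dim=-1)                   # [N, C+1]
    cost_class = -prob[:, gt_labels]                     # [N, M]: -p(correct class) for every pair
    cost_bbox = torch.cdist(pred_boxes, gt_boxes, p=1)   # [N, M]: L1 distance between box vectors
    cost = cost_class + 5.0 * cost_bbox                  # illustrative weighting of the two terms
    pred_idx, gt_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return pred_idx, gt_idx                              # prediction pred_idx[k] <-> ground truth gt_idx[k]

Once the matching is fixed, the loss described in the video (negative log-probability of the matched class, plus the box loss for non-empty matches) is computed pair by pair, and every unmatched prediction is supervised toward the "no object" class.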
What's cracking guys? In this video I'm covering End-to-End Object Detection with Transformers, or DETR for short, by Nicolas Carion, Francisco Massa and others from Facebook AI Research. So obviously it's about object detection and they are using transformers. So they mention here that the main ingredients of the new framework called DETR are a set-based global loss that forces unique predictions via bipartite matching (it's a mouthful, we'll break that down in a minute) and a transformer encoder-decoder architecture. So the basic premise here is they are training this detection pipeline using end-to-end learning, and as the differentiable blob they're using transformers. So those are the two main ideas, and comparing that to previous object detection pipelines, if you're familiar with those, basically most of them consist of multiple stages. So as you can see here on the left side, stage one just outputs a bunch of these different region proposals. So all of these boxes are kind of claiming that they see a car inside of that box. And the idea is to just kind of filter them out and leave just the best one. And in this case this non-max suppression algorithm works perfectly. So basically after stage one proposes these regions we have this heuristic called the non-max suppression algorithm and it just basically filters out the best box. And the way it works is super simple. Basically you find the box with the highest confidence that this is a car, and then what you do is you just set some threshold and you find the IoU, or the intersection over union, with the other boxes, and you just kind of filter out those boxes whose IoU crosses that threshold. Okay, so I just went ahead and pasted this IoU definition so that I can better explain how this NMS algorithm, so non-max suppression, works. So as I said we have two stages. Usually the first stage proposes a bunch of these different regions and all of those boxes kind of claim that they see a car inside of it. But we want to just leave a single one that has the highest confidence and the best bounding box. And the way we do that is the following. So out of all of these boxes we'll just find the one with the highest confidence score, and then we're just gonna set a threshold, for example, I don't know, like 0.7, and we're going to remove all of those boxes whose IoU is bigger than 0.7. That means, in human language, that those boxes that overlap too much with the box with the highest confidence will be removed. And IoU is just defined like this: you find the intersection between the two boxes, you find the union, and you just divide the two. As you can see, as the overlap becomes bigger this IoU will tend to one, and as the overlap becomes smaller this IoU will drop to zero. Okay, that's the basic idea. So the whole idea of the DETR pipeline is to remove this heuristic and kind of train this whole detection pipeline end to end, without these multiple stages, without these heuristics such as the NMS. And this is not the only method out there, but that's the one that I took as an example, and yeah, that's it. So with the motivation out of the way let's see how DETR works. So jumping to this high-level explanation of the pipeline, the way the thing flows is: we have an image at the input, we pass it through a CNN which will downsample the spatial extent and increase the number of channels, so we'll end up with multiple feature maps here.
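As an aside to the IoU and non-max suppression discussion above: here is a minimal PyTorch sketch of that heuristic, assuming boxes in (x1, y1, x2, y2) corner format. In practice you would rather call torchvision.ops.nms; this is only meant to make the "keep the best box, drop everything that overlaps it too much" idea concrete.

import torch

def box_iou(a, b):
    # IoU between every box in a ([N, 4]) and every box in b ([M, 4]), corner format
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])      # [N, M, 2] top-left of the intersection
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])      # [N, M, 2] bottom-right of the intersection
    wh = (rb - lt).clamp(min=0)                         # zero width/height if the boxes do not overlap
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def nms(boxes, scores, iou_threshold=0.7):
    # greedy non-max suppression: keep the highest-scoring box,
    # drop every remaining box whose IoU with it crosses the threshold
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        ious = box_iou(boxes[i].unsqueeze(0), boxes[order[1:]]).squeeze(0)
        order = order[1:][ious <= iou_threshold]
    return keep

This greedy filtering is exactly the post-processing step that DETR's set-based loss is designed to make unnecessary.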
So something like hw times c where c is the number of channels and initially we maybe have something like h0 we have w0 and we have three in the case of RGB images. So after we have these feature maps what we do is we're going to flatten them out and pass them as tokens into the transformer and later I'll just we'll just unpack how this works in the next diagram but here just stick with me. So basically we just kind of process all of these tokens similar to division transformer you can you can treat it that way and what what what happens is at the output of the decoder we have a bunch of these different region proposals. So we'll have a fixed number of these and they were using 100 so we'll have 100 proposals and each of these boxes has two pieces of information so we can treat this box it will just be a two tuple so we'll have two tuples so we'll have one tuple will tell us about the actual box so what's the the center coordinate and what's the height and width of the box so that will be four numbers so we'll have like cx that's the center coordinate on the x-axis then we have the center coordinate on the y-axis we have the height of the box and we have the width of the box. So in the case of this red one so obviously this would be the center then we had the height and we've had the width so that's the output. Now the second piece of information that's vital is which kind of object is that and that's where the second tuple comes in and this tuple will just contain c number of classes plus one where this is just the cardinal number of the of the classes set so that means for example if we have I don't know 10 classes like dog cat whatever we'll have 10 here and plus one is because of the this no object class so sometimes a box like this green one just is just a spurious box and it doesn't contain any objects so that's why we include plus one so that will be the output and all of these hundred outputs will contain exactly these two tuples so that's the output information now the interesting part is how do we actually match these so that we can compare them and how do we back prop these through the encoder and the cnn and train the data pipeline so the idea is fairly simple the first thing we do is this bipartite matching and i'll explain that in a bit more detail but the rough idea is to find the permutation so basically if we if we focus on the ground truth the ground truth here will be something like this so we'll have let me just draw it like this so we'll have a bunch of outputs and the first one will be this this let's say this red box is the first output in the ground truth and then the second output will be this maybe this yellow one i'll just denote like this so we'll have so this will be the two outputs and then we have a bunch of these no object like proposals and so as you can see the the goal will be to kind of learn how to find a permutation how to associate this one to this one and how to associate this box with this box so we'll basically want to find a way how to sort this so that we have this red box standing in the first place and then the yellow box being in the second place and all of the other no object proposals should be here and once we have that then we can just kind of compare them because they're matched right because if we didn't do that it just wouldn't make any sense so again on the high level we first do the sorting using this hungarian matching algorithm and the second thing is we just compare them so obviously you can think of it we'll basically want to 
compare the boxes so the the red box here and the red box here will somehow include the iou in the loss will somehow include all of those four coordinates in the loss as well as the class information so for example this here so the the red class for this proposal will be equal to one so the probability is 100 here maybe our prediction is not that good so it's predicting maybe 0.7 for this red class and we want to push that 0.7 towards one okay that was it in a nutshell now let me start digging into more details so that this becomes way more clear than it currently is okay let's find and i'll explain all of these equations in a second i just want to focus first on on the on the expanded diagram of that of that same pipeline so again we have cnn we have these we have these feature maps and we'll have positionally coding so that's standard stuff for transformers except that for for a small detail they are actually adding this positionally codings in every single layer of the encoder and of the decoder but that's a minor detail again if you're not familiar how transformers work please check out my video on transformer i'll link it somewhere here but like in a nutshell i'll try to explain it quickly here basically every of these tokens will be a map to a query key and value and what you'll do so you'll have queries so let me denote queries with with this like like an arrow so that's a vector and query will be in a red so we'll have like queries and then we'll have let's focus on on a specific query so on this one and basically this query will be uh we'll do adopt products so that's a that's a way to find the similarity between that query and all of the keys so we'll have keys for every single uh like token and as you can see here this red uh this red uh arrow and this blue arrow are super similar they're pointing in the same direction so that means the the score will be really high between these two so so basically the score between these two between this blue key and this red query will be really high so that means that the value vector and i'll denote value vectors with with green uh that means that this value vector will have a high coefficient when we sum up all of the all of the value vectors so the basic idea is every token attends every other token so we have this global attention and we'll just basically figure out these scores by doing a query and a key dot product and we'll use the scores to kind of do a weighted average of these value vectors and that's how we form a novel representation so that's a super high level overview that's how it works you can think of it as information routing and you do a bunch of those you just propagate that through the encoder you get these representations and now the second part is interesting so they have these object queries and they're basically learned so these are just randomized vectors initially and you can think of it as this one is maybe so this vector here this red one is looking for a particular thing in an image so it's going to attend all of this information and it's going to focus on a particular things so they have a nice visualization a bit later in the paper let me just find it for a second as you can as you can see here basically one of these so they have as i said they have 100 output proposals here they just depicted 20 of those and they just so each of these dots is just the center of the of the box that that proposal it's outputting across the validation set so that's maybe a mouthful but in a nutshell what that means is this one 
particular proposal so maybe the first one from the output of the decoder so the leftmost proposal in in the of the decoder is basically attending on all of the small so the green dots just mean a small bounding box so it's attending all of the small bounding boxes in the in the bottom left part of the image and as you can see those different output tokens will be attending different parts of the image so this one as you can see is attending to the center most part of the image and also it's attending to those small boxes etc so basically all of these let me go back here so all of these will be focusing on particular like semantics of the image let's call it that way the only difference compared to the original transformer is that here we'll actually be outputting this in parallel we won't be doing this all regressively so we're just gonna output all of the outputs at once in parallel and the final step is once we have these output tokens we just pass them through these fully like feed forward networks and we output the class and box information so that's the as i already mentioned this is for tuple this is the just the c c plus one number of classes and that's that's it in a nutshell so that's how the pipeline is structured now i'm gonna go and explain the most important part and that's how we do how do we match the the proposals and how do we actually do how the loss looks like so let's focus on that now okay so okay this is the this is the main part basically uh the whole idea is to find a permutation let me go here so again i just copy pasted the the diagram from the above and so the whole idea will be the following so you have your ground truth so in in this case in the case of this image we have one black one red box as you can see here we have one yellow box and all of the other boxes will just be no object uh like uh proposals okay and so the idea is to find a permutation of this output here so that the loss is minimized and uh that means we want to push this this this red one should be in the first place this yellow one should be in the second place and all of the other ones should be here so that's the permutation that will minimize the loss and as you can see here so let's focus on this so y i is just a tuple of the class information and the box information and this is just the sigma of i is just the uh current uh permutation okay so we want to minimize this match loss between those two so n just stands for the number of proposals as i said they are using a hundred of these and that's hard coded so basically none of the images in the data set will ever have more than 100 objects so they just took a like a upper threshold and set it to 100 so we sum up all of the match losses are over all of the proposals and we find the permutation that minimizes this matching loss and that will be the permutation that we end up using okay so that may be abstract still so i'll try and break it down here is the the main the main this is how the matching loss looks like broken down into two components we have this left term with the like class probability and we have the right term with this box loss and let me try and break this down so first things first is if we focus on on on this on this proposal okay so this is one of the outputs as i said it's going to have the the four tuple that kind of tells you where the box is and it's going to have this distribution over classes and because it's red it's probably going to have like this red component being the strongest one maybe 0.6 and in the 
current permutation just treat this as the current permutation of the algorithm so we'll be comparing red with the yellow one and because the the yellow class has a probability of 0.2 that means here we'll have minus 0.2 and that's not good enough because we want to minimize this as you as you remember so this matching loss needs to go down so we we need to minimize it as much as possible and the minimal value would be if we had one for the for the yellow class so instead of having two here we have one so again let me just dig into this into this loss and kind of break it down one just means an index function so that means that when the when the ground truth that's the ci class is different from the no object class so that means when we have some class so here we have this let's call it red class we here we have yellow class so in this case for the for the i equals one we see here the indices so indices go from one to six so c of i c of one is red so that means that because it's different from the no object we'll have one here so this index function will be one okay so c of two is yellow so that means it's still different from no object so that means we're gonna have one here so it's minus one times this and now for the fun part the probability of the of the yellow class for the second index here as we can see is 0.2 and that means we'll end up with your minus 0.2 here that's minus 0.2 and this is going to give us some value whatever that may be let's just denote it as x as you can see the goal will be to find the permutation that minimizes all of these so if you imagine the permutation where this red is here and the yellow is the second one in that case we'd have a much much lower matching loss so for that permutation because this is the red class we can see it has 0.6 we'll have minus 0.6 here and that means we we are going to minimize this loss even more so in a nutshell that that's the way how this Hungarian algorithm works like it just iterates through different permutations it just calculates this these expressions and finds the permutation that has the minimal matching loss and it turns out that that corresponds to having red as the first proposal then the yellow one and then all of the four ones it doesn't matter the order because basically as you can see here these indexing this index function will just be zero because those proposals have ground truth to no object so all of these have ground truth of no object class okay hopefully that was clear now let's break down the loss the box loss and here it is here is how it's defined is defined as a weighted average of this iou loss and of l1 loss as you can see bi as i mentioned that's the four tuple so that's the center coordinates plus the height and width of the box and what we do is we just take the so this is the the ground truth proposal this is the current permutation proposal and we just find the iou as i said iou is distinct the second term just has this l1 between these bounding box vectors so there's one thing that's kind of confusing me here because this goes with a plus sign let me just see here so this goes with a plus sign the the box loss that means we want to minimize it and that means that obviously when the when the bounding box vectors co-align when they're the same this will go down to zero that's cool but the more they intersect so that means it's a better aligned this thing is going up so i suspect that we need a minus sign here or something to to kind of this for this to make sense or basically maybe the a 
specific choice of these hyperparameters can help it work out but like no i'm i'm kind of confused by by this part i think there should be a minus here or i'm misinterpreting this a bit okay basically that's it that's the l box and as i said after we find the correct matching after we find the correct permutation we do the following thing and this is the loss component this is how the detrim model is trained we basically go over all of the end proposals so that's as i said hundred in their case and what they do is they just find the whatever the ground truth class is we find what's the current probability of that class and we obviously want to push that towards one and that's when log becomes zero so this is minimized and the second part is whenever we have a class that's not a no object so basically a class that has something meaningful inside will want to include that box loss and that's the same box loss as the one as i just explained for the matching procedure so that's it that's that's that's how everything works again i'm a bit confused by the l box loss and let me just kind of step out and try and explain this once more on the high level so again we are looking for a permutation so for this sigma that minimizes a sum over all of these hundred proposals of this l match so this matching loss and that will lead us to find the correct like let's go to sort the output so that we have perfect matching with the ground truth once we have the alignment between the output and the ground truth we just apply this this simple loss here so we both want to penalize for the class is not being as confident enough and we want to also penalize if the l box if the boxes are two different so we want to make sure they that both the class is correct as well as the bounding box that's all the information you need to know about how the model works now let's focus on the results there is one more detail though here and that's we add prediction feed for networks and hungarian loss after each decoder layer so that's just a minor detail but it will be kind of important later on so that means that we are not only doing these ffns at the output at the final layer of the decoder we're also doing it here and here and here and although it's conceptually not that like necessary to do this they they figure out that improves the performance and so they included it and they'll have later some ablations on this so that's why i wanted to mention this so again we'll have the same procedure i just mentioned so the matching plus the the loss the hungarian loss across all of the layers of the output of the decoder okay that's it now let's see for the experiments they compare with this faster rcnn object detection baseline and they show some really nice results so again these architectures are super handcrafted there was many years of iterations to improve these and they managed to using this end-to-end learning pipeline using transformers they managed to outperform the faster rcnn baseline as you can see here focusing on all of the so first thing i really like here is they they kind of mentioned the flops the fps and the number of parameters because only when you kind of even those out then you can then you can kind of claim that you have better performance or whatever or own pair and in this case as you can see on this apl so that's just the the precision for the larger objects we can see that we are the detr outperforms faster rcnn by quite a lot as well as on the medium sized objects but here on the smaller sized objects 
Now for the experiments: they compare with the Faster R-CNN object detection baseline and show some really nice results. Again, those detection architectures are heavily handcrafted, with many years of iteration behind them, and yet with this end-to-end learning pipeline built on transformers they manage to outperform the Faster R-CNN baseline. The first thing I really like here is that they report the FLOPS, the FPS and the number of parameters, because only when you even those out can you claim that you perform better, or are on par. In this case, looking at AP_L, the average precision for large objects, DETR outperforms Faster R-CNN by quite a lot, as it does on medium-sized objects, but on the smaller objects it kind of underperforms. All in all, though, looking at the overall AP metric, the DETR model is better than Faster R-CNN. They mention it somewhere here in the paper, let me find it: so, that demonstrates significantly better performance on large objects, a result likely enabled by the non-local computations of the transformer; it obtains, however, lower performance on small objects. The main idea is that certain objects span a huge portion of the image, and a transformer, with its global attention at every layer, has an advantage over CNNs, which need a couple of layers before the receptive field is large enough to perceive such a big object in the image. That's why they hypothesize the transformer performs better on larger objects while having problems with the smaller ones, and I'd guess the reason for the latter is the spatial reduction in the CNN backbone: if we were not to reduce the spatial extent, we'd have to increase the computation and the memory footprint. This can probably be improved further with some of the newer, efficient transformer architectures; I guess that's left for future work. Okay, back to the results. We saw the comparison with Faster R-CNN, now let's see some ablations. This one is pretty straightforward: they show that the encoder part of the transformer is necessary. The more layers you add, the better it performs, but the biggest gap is between not using an encoder at all and using one; you can see that the AP increases significantly, along with, of course, the number of parameters, the amount of computation and the impact on FPS. After that big initial improvement it starts saturating as you add more and more layers; they ended up using six layers in their encoder, I think, and that's pretty much it. Okay, let's continue. Here is a nice visualization of the output of the encoder: they take a query vector from the output of the encoder and attend with it over all the key vectors of the image, and as you can see, this query attends to this cow here, so it effectively does some sort of instance segmentation. Similarly, focusing on this other query vector, it attends to the small cow, again something like instance segmentation, and the same holds for the other query tokens.
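Since those maps come from nothing more than the encoder's self-attention weights, here is a tiny sketch of how such a visualization could be reproduced, assuming you have already extracted the (HW, HW) self-attention matrix of the last encoder layer (for example via a forward hook) together with the feature-map size; the function and argument names are made up for illustration.

```python
# Sketch: visualize which spatial positions one encoder query attends to.
# Assumed input: `attn_weights`, a NumPy array (or CPU tensor) of shape
# (H*W, H*W) holding the self-attention of the last encoder layer.
import matplotlib.pyplot as plt

def show_encoder_attention(attn_weights, h, w, query_idx):
    heatmap = attn_weights[query_idx].reshape(h, w)  # this query's attention
    plt.imshow(heatmap)                              # over every position
    plt.title(f"encoder self-attention, query index {query_idx}")
    plt.axis("off")
    plt.show()
```

Picking a query index that falls on one of the cows should, if the model behaves as in the paper's figure, light up roughly the mask of that cow.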
Moving on to other results, this one is super interesting: they construct a synthetic image with six by four, so 24, giraffes, and they mention that no image in the training set contains more than 13 giraffes. Despite that, the DETR model, as you can see, pretty successfully detects all 24 giraffes with high confidence scores; you can see 100%, 100%, and most of the others are above 90%, so that's a nice bit of out-of-distribution generalization, which is cool. For this second chart, recall that the Hungarian loss is applied at every layer of the decoder, and what they show is what happens when you take the output from one of the intermediate layers instead of the final one. You can see the progression: taking the output from the first layer is much worse than taking it from the sixth, final layer. Additionally (I'm focusing on the AP metric here and ignoring AP50), they show that the non-max suppression heuristic, which I mentioned at the beginning of the video, helps if you take the outputs from the first couple of layers, but later on it does not help, and that's by design: the Hungarian loss and the matching are meant to make the proposals unique, so we don't have to filter out near-duplicate proposals, because that's handled by the loss implicitly. Okay, that's that part. Here they have an interesting visualization of, this time, the decoder attention: we take one of the proposals, one of the tokens at the output of the decoder, and look at how it attends to the outputs of the encoder network. Focusing on one of those output tokens, whichever index the orange token has, you can see it attends to the extremities of the elephant. This kind of explains why the encoder is necessary: the encoder does an initial, instance-level segmentation, and the decoder then learns to attend to the contours of the object, which is what it needs in order to find the convex hull, or in this case the bounding box, of that object. They work in symbiosis and help each other. Again, for the blue token you can see it also attends to the extremities, and that's how it figures things out; these parts are really important for the detector to decide that this is an elephant. Similarly, in this image with zebras, you can see the green token attending to this zebra, the blue one to that zebra, and the orange one to the third zebra. Okay, that's it. They did a couple more ablations. I mentioned these spatial positional encodings; the scheme is pretty intricate, so let me go to the appendix and show you this part. As you can see, the spatial encoding is added to the queries and keys in every single encoder layer, and it's also added to the keys on the decoder side, and the object queries, which are also positional encodings but learned rather than fixed, are again added here and there. It's pretty convoluted, but you could also do it the same way as the original transformer, which only adds them at the input; they showed, in the ablation table I was just showing you, that the per-layer variant performs better. So yeah, basically they show that adding these positional encodings at the attention module of every layer increases the AP metric compared to all of the other combinations. I don't think we can extract anything super intuitive here; the experiments simply show that it's better to include them in all of the layers.
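To make the "encodings at every layer" idea concrete, here is a minimal sketch of a single encoder layer in which the fixed spatial encoding is re-added to the queries and keys (but not the values) inside the self-attention, which is, to the best of my understanding, the pattern the DETR reference code follows; the FFN, dropout and the decoder-side wiring are omitted, and all names and sizes are illustrative.

```python
# Minimal sketch: spatial positional encoding injected into every encoder
# self-attention layer (queries and keys only). Shapes: (HW, batch, d_model).
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, src, pos):
        q = k = src + pos                   # re-added here, in every layer,
        attn_out, _ = self.attn(q, k, src)  # not just once at the input
        return self.norm(src + attn_out)    # residual + norm (FFN omitted)
```

Stacking six such layers, and doing the analogous thing on the decoder side with the learned object queries playing the role of positional encodings, gives the per-layer injection that the ablation above compares against the input-only variant.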
They also run an ablation of the L_box component I showed you, trying only the L1 term or only the IoU term, and it turns out that if you drop the IoU term you get a significant decrease in the AP metric, so you have to include the IoU; the L1 term is extra, and using both of them together boosts all of the metrics. Okay, that's it. I already mentioned this last figure: it shows how the different outputs, which they call prediction slots and I just call the tokens of the decoder, each attend to different regions of the image and to different types of boxes. You can notice a trend: the red dots are present in pretty much every one of these 20 prediction slots (out of the 100 they have), and the reason is that, as they say, red corresponds to large horizontal boxes, and the MS COCO dataset apparently contains a lot of objects that span the whole image horizontally, so all of these prediction slots, or tokens, learn to attend to those. That's pretty much it. Let me also briefly mention that they showed that simply by extending the pipeline with an additional segmentation head they can also do really well on the panoptic segmentation task. The idea is that they first do the regular object detection training of the DETR pipeline, so that part stays the same, then they append the head, freeze DETR, and train only the head itself, and they show some nice results, like the segmentation of this giraffe image and so on. They also compare it against other baselines, like Panoptic FPN, in this table and report nice results there as well. So that's pretty much it, it's exciting. The two main components, again, are using transformers and learning end-to-end with this matching plus Hungarian loss logic. That's it. If you like this video, share it out, subscribe to this channel, and until next time, bye bye.
[{"start": 0.0, "end": 4.08, "text": " What's cracking guys? In this video I'm covering end-to-end object detection with"}, {"start": 4.08, "end": 11.6, "text": " transformers or DETER for short by Nicolas Carrion, Francisco Massa and others from Facebook AI"}, {"start": 11.6, "end": 18.0, "text": " Research. So obviously it's about object detection and they are using transformers. So they mentioned"}, {"start": 18.0, "end": 23.92, "text": " here that the main ingredients of the new framework called DETER are set-based global loss that forces"}, {"start": 23.92, "end": 29.2, "text": " unique predictions via bipartite matching. It's a mouthful, we'll break that down in a minute."}, {"start": 29.2, "end": 35.28, "text": " And a transformer encoder decoder architecture. So the basic premise here is they are training"}, {"start": 35.28, "end": 41.68, "text": " this detection pipeline using end-to-end learning and they're using as the"}, {"start": 41.68, "end": 47.28, "text": " differentiable blob they're using transformers. So those are the two main ideas and comparing that"}, {"start": 47.28, "end": 54.4, "text": " to previous object detection pipelines, like if you're familiar with that, basically most of them"}, {"start": 54.4, "end": 59.839999999999996, "text": " consist out of multiple stages. So as you can see here on the left side, basically we have, so the"}, {"start": 59.839999999999996, "end": 65.52, "text": " stage one just outputs bunch of these different region proposals. So all of these boxes are kind of"}, {"start": 67.75999999999999, "end": 73.68, "text": " claiming that they see a car inside of that box. And the idea is to just kind of filter out and"}, {"start": 73.68, "end": 79.28, "text": " leave just the best one. And in this case this non-max suppression algorithm works perfectly."}, {"start": 79.28, "end": 84.8, "text": " So basically after stage one proposes these regions we have this heuristic called non-max"}, {"start": 84.8, "end": 90.56, "text": " suppression algorithm and it just basically filters out the best box. And the way it works"}, {"start": 90.56, "end": 97.04, "text": " is super simple. Basically you find the box with the highest confidence that this is a car and then"}, {"start": 97.04, "end": 102.64, "text": " what you do, you just set some threshold and you find the IOU or the intersection of reunion"}, {"start": 102.64, "end": 109.52, "text": " with other boxes and you just kind of filter out those boxes that have that whose IOU crosses that"}, {"start": 109.52, "end": 114.24, "text": " threshold. Okay so I just went ahead and pasted this IOU definition so that I can better explain"}, {"start": 114.24, "end": 120.64, "text": " how this NMS algorithm, so non-max suppression, works. So as I said we have two stages. Usually"}, {"start": 120.64, "end": 126.32, "text": " the first stage proposes a bunch of these different regions and all of those boxes kind of claim that"}, {"start": 126.32, "end": 131.52, "text": " they see a car inside of it. But we want to just leave a single one that has the highest"}, {"start": 131.52, "end": 137.04000000000002, "text": " confidence and the best bounding box. And the way we do that is the following. 
So out of all of these"}, {"start": 137.04000000000002, "end": 141.84, "text": " boxes we'll just find the one with the highest confidence score and then we're just gonna set"}, {"start": 141.84, "end": 148.32000000000002, "text": " the threshold for example I don't know like 0.7 and we're going to remove all of those boxes"}, {"start": 148.32000000000002, "end": 155.44, "text": " whose IOU is bigger than 0.7. That means in human language those boxes that overlap too much with"}, {"start": 155.44, "end": 160.48000000000002, "text": " the box with the highest confidence those will be removed. And IOU is just just defined like this. So"}, {"start": 160.48, "end": 165.04, "text": " you find the intersection between the two boxes you find the union and you just divide the two."}, {"start": 165.04, "end": 171.6, "text": " As you can see as the the overlap becomes bigger this IOU will will tend to one and as the overlap"}, {"start": 171.6, "end": 177.67999999999998, "text": " becomes smaller you'll basically this IOU will drop to zero. Okay that's the basic idea. So we"}, {"start": 177.67999999999998, "end": 183.12, "text": " want to the idea the whole idea of the DETER pipeline is to remove this this heuristic and"}, {"start": 183.12, "end": 189.28, "text": " kind of train this whole system detection pipeline end to end without these multiple stages without"}, {"start": 189.28, "end": 194.88, "text": " these heuristics such as the NMS. And this is not the only method out there but that's the one that"}, {"start": 194.88, "end": 201.2, "text": " I took as an as an example and yeah that's it. So with the motivation out of the way let's see how"}, {"start": 201.2, "end": 208.32, "text": " DETER works. So jumping to this high level explanation of the pipeline we have how the thing"}, {"start": 208.88, "end": 214.64, "text": " like flows is we have an image at the input we pass it through a CNN which will down sample the"}, {"start": 214.64, "end": 219.51999999999998, "text": " spatial extent and increase the number of channels so we'll end up with multiple feature maps here."}, {"start": 219.51999999999998, "end": 228.07999999999998, "text": " So something like hw times c where c is the number of channels and initially we maybe have something"}, {"start": 228.07999999999998, "end": 237.92, "text": " like h0 we have w0 and we have three in the case of RGB images. So after we have these feature maps"}, {"start": 237.92, "end": 243.44, "text": " what we do is we're going to flatten them out and pass them as tokens into the transformer and later"}, {"start": 243.44, "end": 249.92, "text": " I'll just we'll just unpack how this works in the next diagram but here just stick with me. So"}, {"start": 249.92, "end": 254.72, "text": " basically we just kind of process all of these tokens similar to division transformer you can"}, {"start": 254.72, "end": 260.64, "text": " you can treat it that way and what what what happens is at the output of the decoder we have"}, {"start": 260.64, "end": 265.76, "text": " a bunch of these different region proposals. 
So we'll have a fixed number of these and they were"}, {"start": 265.76, "end": 272.48, "text": " using 100 so we'll have 100 proposals and each of these boxes has two pieces of information so we can"}, {"start": 272.48, "end": 277.92, "text": " treat this box it will just be a two tuple so we'll have two tuples so we'll have one tuple will"}, {"start": 277.92, "end": 283.20000000000005, "text": " tell us about the actual box so what's the the center coordinate and what's the height and width"}, {"start": 283.20000000000005, "end": 288.88, "text": " of the box so that will be four numbers so we'll have like cx that's the center coordinate on the"}, {"start": 288.88, "end": 294.64000000000004, "text": " x-axis then we have the center coordinate on the y-axis we have the height of the box and we have"}, {"start": 294.64000000000004, "end": 301.20000000000005, "text": " the width of the box. So in the case of this red one so obviously this would be the center then we"}, {"start": 301.2, "end": 305.92, "text": " had the height and we've had the width so that's the output. Now the second piece of information"}, {"start": 305.92, "end": 312.71999999999997, "text": " that's vital is which kind of object is that and that's where the second tuple comes in and this"}, {"start": 312.71999999999997, "end": 321.03999999999996, "text": " tuple will just contain c number of classes plus one where this is just the cardinal number of the"}, {"start": 321.03999999999996, "end": 325.91999999999996, "text": " of the classes set so that means for example if we have I don't know 10 classes like dog cat whatever"}, {"start": 325.92, "end": 332.64000000000004, "text": " we'll have 10 here and plus one is because of the this no object class so sometimes a box like this"}, {"start": 332.64000000000004, "end": 338.48, "text": " green one just is just a spurious box and it doesn't contain any objects so that's why we include"}, {"start": 338.48, "end": 343.52000000000004, "text": " plus one so that will be the output and all of these hundred outputs will contain exactly"}, {"start": 343.52000000000004, "end": 348.08000000000004, "text": " these two tuples so that's the output information now the interesting part is how do we actually"}, {"start": 348.08000000000004, "end": 352.64, "text": " match these so that we can compare them and how do we back prop these through the encoder and the"}, {"start": 352.64, "end": 359.52, "text": " cnn and train the data pipeline so the idea is fairly simple the first thing we do is this"}, {"start": 359.52, "end": 364.96, "text": " bipartite matching and i'll explain that in a bit more detail but the rough idea is to find the"}, {"start": 364.96, "end": 369.36, "text": " permutation so basically if we if we focus on the ground truth the ground truth here will be"}, {"start": 369.36, "end": 375.12, "text": " something like this so we'll have let me just draw it like this so we'll have a bunch of outputs and"}, {"start": 375.12, "end": 380.56, "text": " the first one will be this this let's say this red box is the first output in the ground truth and"}, {"start": 380.56, "end": 386.64, "text": " then the second output will be this maybe this yellow one i'll just denote like this so we'll"}, {"start": 386.64, "end": 393.84000000000003, "text": " have so this will be the two outputs and then we have a bunch of these no object like proposals"}, {"start": 393.84000000000003, "end": 400.08, "text": " and so as you can see the the goal will be to kind of learn how to find a permutation how 
to"}, {"start": 400.08, "end": 407.52, "text": " associate this one to this one and how to associate this box with this box so we'll basically want to"}, {"start": 407.52, "end": 413.03999999999996, "text": " find a way how to sort this so that we have this red box standing in the first place and then the"}, {"start": 413.03999999999996, "end": 418.88, "text": " yellow box being in the second place and all of the other no object proposals should be here and"}, {"start": 418.88, "end": 423.68, "text": " once we have that then we can just kind of compare them because they're matched right because if we"}, {"start": 423.68, "end": 428.79999999999995, "text": " didn't do that it just wouldn't make any sense so again on the high level we first do the sorting"}, {"start": 428.79999999999995, "end": 435.35999999999996, "text": " using this hungarian matching algorithm and the second thing is we just compare them so obviously"}, {"start": 435.36, "end": 441.04, "text": " you can think of it we'll basically want to compare the boxes so the the red box here and the red box"}, {"start": 441.04, "end": 447.92, "text": " here will somehow include the iou in the loss will somehow include all of those four coordinates in"}, {"start": 447.92, "end": 454.96000000000004, "text": " the loss as well as the class information so for example this here so the the red class for this"}, {"start": 454.96000000000004, "end": 460.72, "text": " proposal will be equal to one so the probability is 100 here maybe our prediction is not that good"}, {"start": 460.72, "end": 467.44000000000005, "text": " so it's predicting maybe 0.7 for this red class and we want to push that 0.7 towards one okay that"}, {"start": 467.44000000000005, "end": 472.88000000000005, "text": " was it in a nutshell now let me start digging into more details so that this becomes way more clear"}, {"start": 472.88000000000005, "end": 478.24, "text": " than it currently is okay let's find and i'll explain all of these equations in a second i"}, {"start": 478.24, "end": 485.36, "text": " just want to focus first on on the on the expanded diagram of that of that same pipeline so again we"}, {"start": 485.36, "end": 489.92, "text": " have cnn we have these we have these feature maps and we'll have positionally coding so that's"}, {"start": 489.92, "end": 494.48, "text": " standard stuff for transformers except that for for a small detail they are actually adding this"}, {"start": 494.48, "end": 500.16, "text": " positionally codings in every single layer of the encoder and of the decoder but that's a minor"}, {"start": 500.16, "end": 506.32, "text": " detail again if you're not familiar how transformers work please check out my video on transformer"}, {"start": 506.32, "end": 511.92, "text": " i'll link it somewhere here but like in a nutshell i'll try to explain it quickly here basically"}, {"start": 511.92, "end": 517.84, "text": " every of these tokens will be a map to a query key and value and what you'll do so you'll have"}, {"start": 517.84, "end": 524.0, "text": " queries so let me denote queries with with this like like an arrow so that's a vector and query"}, {"start": 524.0, "end": 529.6, "text": " will be in a red so we'll have like queries and then we'll have let's focus on on a specific query"}, {"start": 529.6, "end": 536.32, "text": " so on this one and basically this query will be uh we'll do adopt products so that's a that's a way"}, {"start": 536.32, "end": 541.6800000000001, "text": " to find the similarity between that query and all of 
the keys so we'll have keys for every single"}, {"start": 541.68, "end": 549.28, "text": " uh like token and as you can see here this red uh this red uh arrow and this blue arrow are super"}, {"start": 549.28, "end": 554.2399999999999, "text": " similar they're pointing in the same direction so that means the the score will be really high"}, {"start": 554.2399999999999, "end": 561.76, "text": " between these two so so basically the score between these two between this blue key and this red query"}, {"start": 561.76, "end": 567.1999999999999, "text": " will be really high so that means that the value vector and i'll denote value vectors with with"}, {"start": 567.2, "end": 573.9200000000001, "text": " green uh that means that this value vector will have a high coefficient when we sum up all of the"}, {"start": 573.9200000000001, "end": 579.6, "text": " all of the value vectors so the basic idea is every token attends every other token so we have this"}, {"start": 579.6, "end": 587.2800000000001, "text": " global attention and we'll just basically figure out these scores by doing a query and a key dot"}, {"start": 587.2800000000001, "end": 592.96, "text": " product and we'll use the scores to kind of do a weighted average of these value vectors and that's"}, {"start": 592.96, "end": 597.6, "text": " how we form a novel representation so that's a super high level overview that's how it works"}, {"start": 597.6, "end": 603.0400000000001, "text": " you can think of it as information routing and you do a bunch of those you just propagate that"}, {"start": 603.0400000000001, "end": 607.2, "text": " through the encoder you get these representations and now the second part is interesting so they"}, {"start": 607.2, "end": 612.5600000000001, "text": " have these object queries and they're basically learned so these are just randomized vectors"}, {"start": 612.5600000000001, "end": 618.32, "text": " initially and you can think of it as this one is maybe so this vector here this red one is looking"}, {"start": 618.32, "end": 623.12, "text": " for a particular thing in an image so it's going to attend all of this information and it's going"}, {"start": 623.12, "end": 628.1600000000001, "text": " to focus on a particular things so they have a nice visualization a bit later in the paper"}, {"start": 628.88, "end": 635.36, "text": " let me just find it for a second as you can as you can see here basically one of these so they have"}, {"start": 635.36, "end": 641.6, "text": " as i said they have 100 output proposals here they just depicted 20 of those and they just so each of"}, {"start": 641.6, "end": 648.5600000000001, "text": " these dots is just the center of the of the box that that proposal it's outputting across the validation"}, {"start": 648.5600000000001, "end": 653.44, "text": " set so that's maybe a mouthful but in a nutshell what that means is this one particular proposal"}, {"start": 653.44, "end": 658.16, "text": " so maybe the first one from the output of the decoder so the leftmost proposal in in the of the"}, {"start": 658.16, "end": 663.9200000000001, "text": " decoder is basically attending on all of the small so the green dots just mean a small bounding box"}, {"start": 663.9200000000001, "end": 670.88, "text": " so it's attending all of the small bounding boxes in the in the bottom left part of the image and as"}, {"start": 670.88, "end": 675.4399999999999, "text": " you can see those different output tokens will be attending different parts of the image so this one"}, {"start": 
675.4399999999999, "end": 680.48, "text": " as you can see is attending to the center most part of the image and also it's attending to those"}, {"start": 680.48, "end": 689.36, "text": " small boxes etc so basically all of these let me go back here so all of these will be focusing on"}, {"start": 689.36, "end": 694.0, "text": " particular like semantics of the image let's call it that way the only difference compared to the"}, {"start": 694.0, "end": 699.28, "text": " original transformer is that here we'll actually be outputting this in parallel we won't be doing"}, {"start": 699.28, "end": 705.12, "text": " this all regressively so we're just gonna output all of the outputs at once in parallel and the"}, {"start": 705.12, "end": 710.72, "text": " final step is once we have these output tokens we just pass them through these fully like feed"}, {"start": 710.72, "end": 715.52, "text": " forward networks and we output the class and box information so that's the as i already mentioned"}, {"start": 715.52, "end": 724.16, "text": " this is for tuple this is the just the c c plus one number of classes and that's that's it in a"}, {"start": 724.16, "end": 729.68, "text": " nutshell so that's how the pipeline is structured now i'm gonna go and explain the most important"}, {"start": 729.68, "end": 735.76, "text": " part and that's how we do how do we match the the proposals and how do we actually do how the loss"}, {"start": 735.76, "end": 744.0, "text": " looks like so let's focus on that now okay so okay this is the this is the main part basically uh the"}, {"start": 744.0, "end": 750.72, "text": " whole idea is to find a permutation let me go here so again i just copy pasted the the diagram from"}, {"start": 750.72, "end": 755.76, "text": " the above and so the whole idea will be the following so you have your ground truth so in"}, {"start": 755.76, "end": 760.5600000000001, "text": " in this case in the case of this image we have one black one red box as you can see here we have one"}, {"start": 760.5600000000001, "end": 768.08, "text": " yellow box and all of the other boxes will just be no object uh like uh proposals okay and so the idea"}, {"start": 768.08, "end": 774.24, "text": " is to find a permutation of this output here so that the loss is minimized and uh that means we"}, {"start": 774.24, "end": 779.36, "text": " want to push this this this red one should be in the first place this yellow one should be in the"}, {"start": 779.36, "end": 783.2, "text": " second place and all of the other ones should be here so that's the permutation that will minimize"}, {"start": 783.2, "end": 790.24, "text": " the loss and as you can see here so let's focus on this so y i is just a tuple of the class"}, {"start": 790.24, "end": 796.8000000000001, "text": " information and the box information and this is just the sigma of i is just the uh current uh"}, {"start": 797.44, "end": 804.0, "text": " permutation okay so we want to minimize this match loss between those two so n just stands for the"}, {"start": 804.0, "end": 809.84, "text": " number of proposals as i said they are using a hundred of these and that's hard coded so basically"}, {"start": 809.84, "end": 814.96, "text": " none of the images in the data set will ever have more than 100 objects so they just took a like a"}, {"start": 814.96, "end": 821.36, "text": " upper threshold and set it to 100 so we sum up all of the match losses are over all of the proposals"}, {"start": 821.36, "end": 826.4, "text": " and we find the permutation that 
minimizes this matching loss and that will be the permutation"}, {"start": 826.4, "end": 833.52, "text": " that we end up using okay so that may be abstract still so i'll try and break it down here is the"}, {"start": 833.52, "end": 839.04, "text": " the main the main this is how the matching loss looks like broken down into two components we"}, {"start": 839.04, "end": 846.24, "text": " have this left term with the like class probability and we have the right term with this box loss and"}, {"start": 846.24, "end": 852.56, "text": " let me try and break this down so first things first is if we focus on on on this on this"}, {"start": 852.56, "end": 857.92, "text": " proposal okay so this is one of the outputs as i said it's going to have the the four tuple that"}, {"start": 857.92, "end": 863.04, "text": " kind of tells you where the box is and it's going to have this distribution over classes and because"}, {"start": 863.04, "end": 868.4, "text": " it's red it's probably going to have like this red component being the strongest one maybe 0.6"}, {"start": 868.4, "end": 873.76, "text": " and in the current permutation just treat this as the current permutation of the algorithm so we'll"}, {"start": 873.76, "end": 880.64, "text": " be comparing red with the yellow one and because the the yellow class has a probability of 0.2"}, {"start": 880.64, "end": 886.9599999999999, "text": " that means here we'll have minus 0.2 and that's not good enough because we want to minimize this"}, {"start": 886.9599999999999, "end": 892.7199999999999, "text": " as you as you remember so this matching loss needs to go down so we we need to minimize it as much"}, {"start": 892.72, "end": 899.12, "text": " as possible and the minimal value would be if we had one for the for the yellow class so instead"}, {"start": 899.12, "end": 904.88, "text": " of having two here we have one so again let me just dig into this into this loss and kind of"}, {"start": 904.88, "end": 911.44, "text": " break it down one just means an index function so that means that when the when the ground truth"}, {"start": 911.44, "end": 917.6, "text": " that's the ci class is different from the no object class so that means when we have some class so here"}, {"start": 917.6, "end": 923.52, "text": " we have this let's call it red class we here we have yellow class so in this case for the for the"}, {"start": 923.52, "end": 930.72, "text": " i equals one we see here the indices so indices go from one to six so c of i c of one is red so"}, {"start": 931.6800000000001, "end": 936.08, "text": " that means that because it's different from the no object we'll have one here so this"}, {"start": 936.08, "end": 942.48, "text": " index function will be one okay so c of two is yellow so that means it's still different from no"}, {"start": 942.48, "end": 948.08, "text": " object so that means we're gonna have one here so it's minus one times this and now for the fun part"}, {"start": 948.08, "end": 956.24, "text": " the probability of the of the yellow class for the second index here as we can see is 0.2 and that"}, {"start": 956.24, "end": 962.88, "text": " means we'll end up with your minus 0.2 here that's minus 0.2 and this is going to give us some value"}, {"start": 962.88, "end": 969.04, "text": " whatever that may be let's just denote it as x as you can see the goal will be to find the permutation"}, {"start": 969.04, "end": 975.36, "text": " that minimizes all of these so if you imagine the permutation where this red is here and the yellow"}, 
{"start": 975.36, "end": 984.0, "text": " is the second one in that case we'd have a much much lower matching loss so for that permutation"}, {"start": 984.0, "end": 992.4, "text": " because this is the red class we can see it has 0.6 we'll have minus 0.6 here and that means we"}, {"start": 992.4, "end": 997.12, "text": " we are going to minimize this loss even more so in a nutshell that that's the way how this Hungarian"}, {"start": 997.12, "end": 1002.32, "text": " algorithm works like it just iterates through different permutations it just calculates this"}, {"start": 1002.32, "end": 1008.08, "text": " these expressions and finds the permutation that has the minimal matching loss and it turns out"}, {"start": 1008.08, "end": 1013.68, "text": " that that corresponds to having red as the first proposal then the yellow one and then all of the"}, {"start": 1013.68, "end": 1019.52, "text": " four ones it doesn't matter the order because basically as you can see here these indexing"}, {"start": 1020.48, "end": 1026.24, "text": " this index function will just be zero because those proposals have ground truth to no object"}, {"start": 1026.24, "end": 1033.76, "text": " so all of these have ground truth of no object class okay hopefully that was clear now let's"}, {"start": 1033.76, "end": 1039.84, "text": " break down the loss the box loss and here it is here is how it's defined is defined as a weighted"}, {"start": 1039.84, "end": 1046.56, "text": " average of this iou loss and of l1 loss as you can see bi as i mentioned that's the four tuple"}, {"start": 1046.56, "end": 1052.8, "text": " so that's the center coordinates plus the height and width of the box and what we do is we just"}, {"start": 1052.8, "end": 1059.36, "text": " take the so this is the the ground truth proposal this is the current permutation proposal and we"}, {"start": 1059.36, "end": 1065.68, "text": " just find the iou as i said iou is distinct the second term just has this l1 between these"}, {"start": 1065.68, "end": 1070.8799999999999, "text": " bounding box vectors so there's one thing that's kind of confusing me here because this goes with"}, {"start": 1070.8799999999999, "end": 1076.1599999999999, "text": " a plus sign let me just see here so this goes with a plus sign the the box loss that means we want"}, {"start": 1076.16, "end": 1083.52, "text": " to minimize it and that means that obviously when the when the bounding box vectors co-align when"}, {"start": 1083.52, "end": 1090.24, "text": " they're the same this will go down to zero that's cool but the more they intersect so that means it's"}, {"start": 1090.24, "end": 1096.88, "text": " a better aligned this thing is going up so i suspect that we need a minus sign here or something to"}, {"start": 1096.88, "end": 1103.44, "text": " to kind of this for this to make sense or basically maybe the a specific choice of these"}, {"start": 1103.44, "end": 1109.8400000000001, "text": " hyperparameters can help it work out but like no i'm i'm kind of confused by by this part i think"}, {"start": 1109.8400000000001, "end": 1115.76, "text": " there should be a minus here or i'm misinterpreting this a bit okay basically that's it that's the l"}, {"start": 1115.76, "end": 1122.24, "text": " box and as i said after we find the correct matching after we find the correct permutation"}, {"start": 1122.24, "end": 1126.96, "text": " we do the following thing and this is the loss component this is how the detrim model is trained"}, {"start": 1126.96, "end": 1132.88, "text": " we 
basically go over all of the end proposals so that's as i said hundred in their case and"}, {"start": 1132.88, "end": 1138.8000000000002, "text": " what they do is they just find the whatever the ground truth class is we find what's the current"}, {"start": 1138.8000000000002, "end": 1144.0800000000002, "text": " probability of that class and we obviously want to push that towards one and that's when log becomes"}, {"start": 1144.0800000000002, "end": 1152.48, "text": " zero so this is minimized and the second part is whenever we have a class that's not a no object"}, {"start": 1153.44, "end": 1158.16, "text": " so basically a class that has something meaningful inside will want to include that box loss and"}, {"start": 1158.16, "end": 1163.2, "text": " that's the same box loss as the one as i just explained for the matching procedure so that's it"}, {"start": 1164.0800000000002, "end": 1171.0400000000002, "text": " that's that's that's how everything works again i'm a bit confused by the l box loss and let me just"}, {"start": 1171.0400000000002, "end": 1177.76, "text": " kind of step out and try and explain this once more on the high level so again we are looking"}, {"start": 1177.76, "end": 1183.8400000000001, "text": " for a permutation so for this sigma that minimizes a sum over all of these hundred proposals of this"}, {"start": 1183.84, "end": 1190.48, "text": " l match so this matching loss and that will lead us to find the correct like let's go to sort the"}, {"start": 1190.48, "end": 1196.0, "text": " output so that we have perfect matching with the ground truth once we have the alignment between"}, {"start": 1196.0, "end": 1201.6799999999998, "text": " the output and the ground truth we just apply this this simple loss here so we both want to penalize"}, {"start": 1201.6799999999998, "end": 1209.12, "text": " for the class is not being as confident enough and we want to also penalize if the l box if the boxes"}, {"start": 1209.12, "end": 1214.08, "text": " are two different so we want to make sure they that both the class is correct as well as the bounding box"}, {"start": 1214.08, "end": 1218.08, "text": " that's all the information you need to know about how the model works now let's focus on the results"}, {"start": 1219.36, "end": 1225.28, "text": " there is one more detail though here and that's we add prediction feed for networks and hungarian"}, {"start": 1225.28, "end": 1230.08, "text": " loss after each decoder layer so that's just a minor detail but it will be kind of important later"}, {"start": 1230.08, "end": 1236.3999999999999, "text": " on so that means that we are not only doing these ffns at the output at the final layer of the"}, {"start": 1236.4, "end": 1242.96, "text": " decoder we're also doing it here and here and here and although it's conceptually not that like"}, {"start": 1242.96, "end": 1248.3200000000002, "text": " necessary to do this they they figure out that improves the performance and so they included it"}, {"start": 1248.3200000000002, "end": 1252.5600000000002, "text": " and they'll have later some ablations on this so that's why i wanted to mention this so again"}, {"start": 1252.5600000000002, "end": 1258.5600000000002, "text": " we'll have the same procedure i just mentioned so the matching plus the the loss the hungarian loss"}, {"start": 1258.5600000000002, "end": 1263.8400000000001, "text": " across all of the layers of the output of the decoder okay that's it now let's see for the"}, {"start": 1263.84, "end": 1270.56, "text": " 
experiments they compare with this faster rcnn object detection baseline and they show some really"}, {"start": 1270.56, "end": 1277.6, "text": " nice results so again these architectures are super handcrafted there was many years of iterations to"}, {"start": 1277.6, "end": 1283.76, "text": " improve these and they managed to using this end-to-end learning pipeline using transformers they"}, {"start": 1283.76, "end": 1290.9599999999998, "text": " managed to outperform the faster rcnn baseline as you can see here focusing on all of the so first"}, {"start": 1290.96, "end": 1295.8400000000001, "text": " thing i really like here is they they kind of mentioned the flops the fps and the number of"}, {"start": 1295.8400000000001, "end": 1300.8, "text": " parameters because only when you kind of even those out then you can then you can kind of"}, {"start": 1301.52, "end": 1307.28, "text": " claim that you have better performance or whatever or own pair and in this case as you can see on"}, {"start": 1307.28, "end": 1313.28, "text": " this apl so that's just the the precision for the larger objects we can see that we are the"}, {"start": 1313.28, "end": 1320.48, "text": " detr outperforms faster rcnn by quite a lot as well as on the medium sized objects but here on"}, {"start": 1320.48, "end": 1326.24, "text": " the smaller sized objects it's kind of under performs but all in all looking at the this ap"}, {"start": 1326.96, "end": 1332.96, "text": " metric we are like the detr model is better than than the faster rcnn and they did mention it"}, {"start": 1332.96, "end": 1338.8, "text": " somewhere here in the paper let me find it um okay so here so that demonstrates significantly"}, {"start": 1338.8, "end": 1343.6, "text": " better performance on large objects or is not likely enabled by the non-local computations of"}, {"start": 1343.6, "end": 1349.92, "text": " the transformer it obtains however lower performances on small objects so the the main idea here is"}, {"start": 1349.92, "end": 1355.1200000000001, "text": " because certain objects span all like a huge portion of the image and the transformer having"}, {"start": 1355.1200000000001, "end": 1361.04, "text": " its global attention on all of the layers has the advantage compared to cnns which need to go into"}, {"start": 1361.04, "end": 1365.44, "text": " couple you need to have a couple of layers before you have the receptive field that can kind of"}, {"start": 1365.44, "end": 1370.72, "text": " perceive that huge object in the scene in the image and so that's the reason why they they"}, {"start": 1370.72, "end": 1376.72, "text": " hypothesize the transformer is performing better for the larger objects whereas it has some problems"}, {"start": 1376.72, "end": 1382.72, "text": " with the smaller objects and i can suppose that the reason being is the the the spacial reduction"}, {"start": 1382.72, "end": 1388.96, "text": " we have in the cnn so if we were to not reduce the spatial extent but then we'd kind of have to"}, {"start": 1388.96, "end": 1394.48, "text": " increase the computation and the memory footprint probably like this this can be further improved"}, {"start": 1394.48, "end": 1401.2, "text": " with some of the novel like efficient transformer architectures yeah i guess that's left for for the"}, {"start": 1401.2, "end": 1407.76, "text": " future work okay let me get back to the results we saw we saw the the comparison with a faster rcnn"}, {"start": 1407.76, "end": 1412.72, "text": " now let's see some ablations so they showed 
that and this is pretty straightforward i mean"}, {"start": 1413.52, "end": 1419.76, "text": " they showed that that the encoder a part of the transformer is necessary so the more layers you"}, {"start": 1419.76, "end": 1426.16, "text": " add the better it performs but the the biggest gap goes from not using just like encoder to using it"}, {"start": 1426.16, "end": 1431.92, "text": " you can see that the ap like increases significantly as well as obviously the the"}, {"start": 1431.92, "end": 1436.5600000000002, "text": " number of parameters as well as the computation and fps but we have a huge improvement here and"}, {"start": 1436.5600000000002, "end": 1441.76, "text": " then we kind of start saturating adding more and more layers they ended up using six i think in"}, {"start": 1441.76, "end": 1447.44, "text": " their encoder and that's pretty much it okay so let's continue here here is a nice visualization"}, {"start": 1447.44, "end": 1453.6000000000001, "text": " of the output of the encoder so what they do is they take a query token from the output of the"}, {"start": 1453.6, "end": 1459.28, "text": " encoder and they then attend using that query vector they just attend all of the keys all the"}, {"start": 1459.28, "end": 1466.32, "text": " key vectors of this image and by doing that they they as you can see here this query attends to"}, {"start": 1466.32, "end": 1471.9199999999998, "text": " this cow here so it does some sort of an instant segmentation as you can see here so focusing on"}, {"start": 1471.9199999999998, "end": 1477.04, "text": " this query vector here so this one basically it attends as you can see the small cow so that's"}, {"start": 1477.04, "end": 1483.84, "text": " again instant segmentation similarly for all of the other query tokens continuing and seeing other"}, {"start": 1483.84, "end": 1488.72, "text": " results this one is super interesting basically what i do is they form this synthetic image"}, {"start": 1488.72, "end": 1494.24, "text": " where they have six times four so 24 giraffes and they mentioned that in the training set they don't"}, {"start": 1494.24, "end": 1500.72, "text": " have more than 13 giraffes in a single image and despite that this data model as you can see pretty"}, {"start": 1500.72, "end": 1505.44, "text": " successfully manages to detect all of the 24 giraffes with a high confidence scores you can"}, {"start": 1505.44, "end": 1512.16, "text": " see 100 100 most of them are above 90 90 as i can see here so that's a nice out of distribution"}, {"start": 1512.72, "end": 1519.28, "text": " like generalization and that's cool for this second chart here what they do is i mentioned that on the"}, {"start": 1519.28, "end": 1525.28, "text": " every layer of the decoder they have this hungarian laws being applied and what they show is that when"}, {"start": 1525.28, "end": 1531.3600000000001, "text": " they are using one of the intermediate layers instead of the final layer as the as the output"}, {"start": 1531.36, "end": 1537.84, "text": " they get you can see that this progression so obviously taking the first layer taking the output"}, {"start": 1537.84, "end": 1542.8799999999999, "text": " from the first layer is much worse than if we take it from the sixth layer so that's the final layer"}, {"start": 1542.8799999999999, "end": 1548.6399999999999, "text": " and additionally what they show here so i'm just focusing on this ap metric ignoring this ap 50"}, {"start": 1548.6399999999999, "end": 1553.9199999999998, "text": " metric what they 
additionally show is that this non max suppression heuristic which i mentioned in"}, {"start": 1553.9199999999998, "end": 1559.6, "text": " the beginning of the video basically helps if we are taking the outputs from the initial couple of"}, {"start": 1559.6, "end": 1564.24, "text": " layers but then later on it does not help and that's and that's by design because the hungarian"}, {"start": 1564.24, "end": 1569.9199999999998, "text": " laws and the matching was meant to make this uh like the the proposals unique and so we don't have"}, {"start": 1569.9199999999998, "end": 1577.04, "text": " to kind of filter out similar proposals because that's handled by the loss implicitly okay uh"}, {"start": 1577.52, "end": 1582.9599999999998, "text": " that's that part here they have a interesting visualization of the this time decoder attention"}, {"start": 1582.9599999999998, "end": 1589.1999999999998, "text": " mechanisms so we take one of the proposals one of the tokens from the output of the decoder"}, {"start": 1589.2, "end": 1595.44, "text": " we use that and we attend to the outputs of the encoder network and that's what we got and basically"}, {"start": 1595.44, "end": 1603.2, "text": " as you can see focusing on one of those output tokens so whatever like the the whatever the"}, {"start": 1603.2, "end": 1609.28, "text": " the index of the orange token is you can see that it attends to this to the extremities of the of"}, {"start": 1609.28, "end": 1613.92, "text": " the elephant this kind of explains why why the encoder is necessary so encoder kind of does this"}, {"start": 1613.92, "end": 1620.0800000000002, "text": " uh this initial segmentation of the instance and then what the decoder does it learns how to kind"}, {"start": 1620.0800000000002, "end": 1625.1200000000001, "text": " of do contours contours of the object which is necessary for it to find the convex hull or the"}, {"start": 1625.1200000000001, "end": 1629.92, "text": " the bounding box in this case uh of the of that object they just kind of work in symbiosis and"}, {"start": 1629.92, "end": 1635.3600000000001, "text": " help each other uh and again for the blue token you can see that it again attends the extremities"}, {"start": 1635.3600000000001, "end": 1641.04, "text": " and that's how it figures out so so these components here are really important for for"}, {"start": 1641.04, "end": 1647.2, "text": " the detector to figure out that this is an elephant and similarly in this image for zebras you can see"}, {"start": 1647.76, "end": 1653.36, "text": " a green token attending here to this zebra the blue one to this zebra and the orange one to the"}, {"start": 1653.36, "end": 1660.32, "text": " zebra okay that's it um they did a couple more ablations i mentioned that the spatial these"}, {"start": 1660.32, "end": 1665.76, "text": " these uh positional encodings they have um it's pretty intricate i can kind of go to appendix and"}, {"start": 1665.76, "end": 1671.44, "text": " show you this part so as you can see here the spatial encoding is added to the keys to values"}, {"start": 1671.44, "end": 1677.76, "text": " in every single layer and it's also added to the keys of the of the decoder as well and you can see"}, {"start": 1677.76, "end": 1682.64, "text": " these object queries which are also positional encodings but they are learned not fixed in this"}, {"start": 1682.64, "end": 1688.0, "text": " in this particular case it's again added here and there so it's it's pretty convoluted but like uh"}, {"start": 1688.0, "end": 
1693.28, "text": " you can also do it the same way as the as the original transformer which was only at the input"}, {"start": 1693.28, "end": 1698.56, "text": " but they showed that this increased the performance in the ablation table i was just showing you"}, {"start": 1698.56, "end": 1704.16, "text": " so let me get back here so yeah basically here they showed that adding these uh positional"}, {"start": 1704.16, "end": 1712.0, "text": " encodings on every layer at the attention module just increases the the ap metric uh compared to"}, {"start": 1712.0, "end": 1718.32, "text": " all of the other combinations so i don't think we we can extract something like something super"}, {"start": 1718.32, "end": 1722.8, "text": " intuitive here it's just that this experimentation showed that it's just better to kind of include"}, {"start": 1722.8, "end": 1729.6, "text": " them in all of the layers okay they also try and uh like the ablation of the L box component"}, {"start": 1729.6, "end": 1736.6399999999999, "text": " which i showed you and they try using only L1 or try using only IOU and it turns out that um if you"}, {"start": 1736.6399999999999, "end": 1742.08, "text": " if you ignore the IOU you have a significant decrease in this AP metric so you have to include"}, {"start": 1742.08, "end": 1749.12, "text": " the IOU L1 is just extra and using both of those obviously just increases across just boosts up all"}, {"start": 1749.12, "end": 1754.9599999999998, "text": " of the metrics uh okay that's it yeah i already mentioned this one as as i said so this just shows"}, {"start": 1754.9599999999998, "end": 1759.76, "text": " how different uh output those they call it they call those prediction slots i just call them"}, {"start": 1759.76, "end": 1765.1999999999998, "text": " tokens of the decoder basically you can see that each of them attends different different regions"}, {"start": 1765.1999999999998, "end": 1770.8799999999999, "text": " of the image and different types of objects of of boxes so you can you can notice a trend that this"}, {"start": 1770.8799999999999, "end": 1777.36, "text": " red dots are uh equally present across all of these 20 prediction slots out of the 100 they have"}, {"start": 1777.36, "end": 1782.6399999999999, "text": " and the reason being those are i think huge they say some of here are huge horizontal boxes or"}, {"start": 1782.6399999999999, "end": 1788.6399999999999, "text": " something so red to large horizontal boxes and it seems that the MSCoco data set just has a bunch of"}, {"start": 1788.6399999999999, "end": 1793.6799999999998, "text": " those objects which span the whole image horizontally so that's why all of the all of these"}, {"start": 1793.6799999999998, "end": 1799.84, "text": " uh prediction slots or tokens i'll learn how to kind of focus and attend to those okay that's"}, {"start": 1799.84, "end": 1805.52, "text": " that's pretty much it um let me see i'm going to briefly uh just tell you about this uh they also"}, {"start": 1805.52, "end": 1813.6, "text": " showed that just by simply modifying the data pipeline uh by adding this this this additional"}, {"start": 1813.6, "end": 1819.36, "text": " like segmentation head they can also do really well on this panoptic segmentation task and the"}, {"start": 1819.36, "end": 1824.56, "text": " idea is they just do regular object detection training of the data pipeline so that remains"}, {"start": 1824.56, "end": 1830.32, "text": " the same and then they just append the head and they trend that they 
freeze the data and they just"}, {"start": 1830.32, "end": 1834.16, "text": " train the the head itself and they showed some nice results you can see here"}, {"start": 1834.16, "end": 1840.4, "text": " like segmentation of this giraffe image and boss etc yeah they just showed that this works really"}, {"start": 1840.4, "end": 1846.5600000000002, "text": " well here in this table comparing it to other baselines like this panoptic fpn etc and they just"}, {"start": 1846.5600000000002, "end": 1852.96, "text": " report nice results there as well so that's pretty much it it's exciting uh so the main two components"}, {"start": 1852.96, "end": 1861.68, "text": " again are using transformers and learning end-to-end using this this match plus hungarian loss logic and"}, {"start": 1861.68, "end": 1869.6000000000001, "text": " that's it if you if you like this video share it out subscribe to this channel and until next time"}, {"start": 1869.6, "end": 1892.7199999999998, "text": " bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=qpxTgclVlcc
Channel Update: vacation, leaving Microsoft, approaching 10k subs and more!
Short channel update. Love you guys! Let's make AI great again! ⌚️ Timetable: 00:00 Taking a break! 00:27 Suggest the topic for the special video! 00:53 Leaving Microsoft, applying for DeepMind, personal failures 02:41 Consider becoming a Patreon! 02:57 Connect with me: Twitter, LinkedIn, Medium, GitHub ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #channelupdate #theaiepiphany
What's cracking guys, I'll keep this one short, it's just a channel update. I want to tell you that I'll be taking a vacation, so I'm just gonna take some time to relax. I've been working a lot over the last year and a half, and I guess all of us have been kind of burned out because of the pandemic and everything. On a side note, don't worry, the content will still be coming because I have a couple more videos in the pipeline, so those are coming and content will still be uploaded regularly, no worries there. Also, this channel is growing slowly, we're almost reaching 10,000 subscribers, so I'm planning on making a special video for 10k subs. If you have any suggestions, feel free to comment down below and suggest whether you want me to make a Q&A or something else you're interested in, like how I'm learning or how I read papers, whatever you have in mind, just write your comments down below. A couple more updates. A lot of things have been happening in my life over the last couple of months. First things first, I've quit Microsoft recently, maybe a week ago or something, so yeah, that's kind of new. I thought I'd give myself a chance to have more free time; it was really hard balancing Microsoft, all of these projects, YouTube, etc. So I thought about taking a break, having some time to travel, having some time to rest more and to make more content for you guys. I learned a lot working at Microsoft, initially as a software engineer and later as a machine learning engineer, but the time has come to change some things in my life. I still think it's a great place to work at. One more thing: I've also been applying to DeepMind over the last three months, and it was super tense. I passed every single technical and behavioral interview, and then a couple of weeks ago I did the last one and I got the email: yeah, we won't be proceeding with you for the time being, and they kind of rerouted me to a second team in DeepMind, so that's a novelty as well. It always feels bad when you get that rejection, but at least I'm happy knowing I can pass all of those interviews at DeepMind, and it's one of the companies I admire a lot. As a fun fact, the first time I applied to Microsoft I also failed, and that's totally normal. I just want to share this failure publicly because almost no one shares their failures, we all share just the success stories, and I want you to know that we all fail sometimes. The first time I failed at Microsoft it was pretty dumb actually, and the thing that happened with DeepMind is fairly similar. I just want to tell you: don't get demotivated if you're not managing to accomplish whatever you're planning, to land a dream job or get your PhD or whatever. Just keep on pushing and you'll eventually get wherever you want. Just don't give up. I don't know how many of you know this, but I have a Patreon profile, so feel free to support me there. It's totally optional, I highly appreciate your support, and I'm just gonna list some of the names of the current patrons, kudos to them, thanks a lot, I really appreciate it. Finally, even though I have this YouTube channel, obviously, I love to keep in touch with you guys across my different social media platforms, so do connect with me on Twitter, do connect with me on LinkedIn.
I'm super active there. I also write occasional blogs on Medium, so you can follow me there as well. And finally, I have a GitHub profile where I'm really prolific with open-source projects. If you're interested in those, if you want to learn some of the deep learning stuff with a hands-on approach, do feel free to check out those projects, clone them, play with them, and let me know what you think. Thanks guys and see you in a couple of weeks. Cheers
[{"start": 0.0, "end": 5.8, "text": " What's cracking guys I'll keep this one short is just a channel update. I want to tell you that I'll be taking a vacation"}, {"start": 5.8, "end": 8.48, "text": " So I'm just gonna take some time to kind of relax"}, {"start": 8.48, "end": 11.02, "text": " I've been working a lot over the last year and a half"}, {"start": 11.02, "end": 15.4, "text": " I guess all of us have been kind of burned out because of the pandemic and everything"}, {"start": 15.48, "end": 22.240000000000002, "text": " So on the side note, don't worry like the content will still be coming up because I have a couple more videos in the pipeline"}, {"start": 22.240000000000002, "end": 27.2, "text": " So those are coming so the condo will still be uploaded regularly. So no worries there"}, {"start": 27.2, "end": 33.44, "text": " Also, this channel is growing slowly. We're almost reaching 10,000 subscribers"}, {"start": 33.56, "end": 37.3, "text": " So I'm planning on making a special video for 10k subs"}, {"start": 37.3, "end": 43.34, "text": " And so if you have any suggestions feel free to just comment down below and suggest whether you want me to make a Q&A"}, {"start": 43.34, "end": 49.28, "text": " I like something you're interested in how I'm learning or how I cover how I read papers or I don't know whatever you have"}, {"start": 49.28, "end": 51.28, "text": " In mind just feel free to kind of"}, {"start": 51.6, "end": 53.6, "text": " Write down your your comments down below"}, {"start": 53.6, "end": 56.24, "text": " a couple more updates"}, {"start": 56.24, "end": 60.88, "text": " So a lot of a lot of things have been like happening in my life over the last couple of months"}, {"start": 60.88, "end": 66.92, "text": " So first things first is I've quit with Microsoft recently. 
So maybe a week ago or something and"}, {"start": 67.48, "end": 69.08, "text": " So yeah, that's that's kind of new"}, {"start": 69.08, "end": 74.56, "text": " So I thought give myself a chance to to to have a more free time just kind of it was really hard balancing"}, {"start": 74.68, "end": 77.32, "text": " Microsoft all of these projects YouTube etc"}, {"start": 77.32, "end": 81.68, "text": " So I thought just kind of taking a break having some time to travel having some time to"}, {"start": 81.68, "end": 84.80000000000001, "text": " To rest more to to make more content for you guys"}, {"start": 84.80000000000001, "end": 89.80000000000001, "text": " So I learned a lot working as a soft initially as a software engineer and later as a machine learning engineer"}, {"start": 89.96000000000001, "end": 93.28, "text": " Microsoft by the time has come to just kind of change stuff in my life"}, {"start": 93.28, "end": 97.4, "text": " So I still think it's a great place to to work at and so yeah one more thing"}, {"start": 97.4, "end": 101.68, "text": " I've also been applying for DeepMind over the last three months and it was super tense"}, {"start": 101.68, "end": 107.12, "text": " I've passed every single interview every single technical and behavioral interview and then a couple weeks ago"}, {"start": 107.12, "end": 110.04, "text": " I did the last one and I got the email"}, {"start": 110.04, "end": 115.52000000000001, "text": " Yeah, we won't be proceeding with you for the time and they kind of rerouted me to a second team in DeepMind"}, {"start": 115.52000000000001, "end": 117.12, "text": " So that's a novelty as well"}, {"start": 117.12, "end": 119.08000000000001, "text": " It always feels bad when you get that rejection"}, {"start": 119.08000000000001, "end": 123.52000000000001, "text": " But at least I'm happy that I know I can kind of pass all of those interviews at DeepMind and that's a it's one of"}, {"start": 123.52000000000001, "end": 130.16, "text": " The companies I admire a lot as a fun fact. I've also the first time applied for Microsoft. I also failed and that's totally normal"}, {"start": 130.16, "end": 136.32, "text": " I just want to share this failure publicly because basically almost no one shares the failures"}, {"start": 136.32, "end": 142.88, "text": " We all share just success stories and I want you to know that we all fail sometimes and like the first time I failed"}, {"start": 142.88, "end": 147.12, "text": " Microsoft it was pretty dumb. Actually the thing that happened with DeepMind is very fairly similar"}, {"start": 147.12, "end": 152.79999999999998, "text": " I just want to tell you kind of don't get demotivated if you're not managing to kind of accomplish whatever you're planning"}, {"start": 152.92, "end": 156.24, "text": " To land a dream job or or get your PhD or whatever"}, {"start": 156.35999999999999, "end": 161.6, "text": " Just keep on pushing there and you'll eventually get wherever you you want. Just don't give up. I"}, {"start": 162.2, "end": 164.95999999999998, "text": " Don't know how many of you know this but I have a patreon profile"}, {"start": 164.96, "end": 167.8, "text": " So feel free to support me there. It's totally optional"}, {"start": 167.8, "end": 175.36, "text": " I highly appreciate your support and I'm just gonna list down some of the names of the current patrons and kudos to them"}, {"start": 175.36, "end": 177.60000000000002, "text": " Thanks a lot. 
Like I really appreciate it"}, {"start": 178.48000000000002, "end": 185.12, "text": " Finally, even though I have this YouTube channel, obviously, I love to keep you guys across my different social media platforms"}, {"start": 185.12, "end": 190.20000000000002, "text": " So do connect with me on Twitter. Do connect with me on LinkedIn. I'm super active there"}, {"start": 190.2, "end": 195.35999999999999, "text": " I also write occasional blogs on Medium so you can follow me there as well. And finally"}, {"start": 195.35999999999999, "end": 199.39999999999998, "text": " I have a github profile where I'm really prolific with open source projects"}, {"start": 199.39999999999998, "end": 206.07999999999998, "text": " If you're interested in those if you want to learn some of the deep learning stuff like have some hands-on approach"}, {"start": 206.35999999999999, "end": 212.35999999999999, "text": " Do feel free to just kind of check out those projects kind of clone them feel free to play with them and let me know what you think"}, {"start": 212.36, "end": 219.56, "text": " Thanks guys and see you in a couple weeks. Cheers"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=jMqLTPcA9CQ
DALL-E: Zero-Shot Text-to-Image Generation | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover DALL-E or "Zero-Shot Text-to-Image Generation" paper by OpenAI team. They train a VQ-VAE to learn compressed image representations and then they train an autoregressive transformer on top of that discrete latent space and BPEd text. The model learns to combine distinct concepts in a plausible way, image to image capabilities emerge, etc. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2102.12092 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 What is DALL-E? 03:25 VQ-VAE blur problems 05:15 transformers, transformers, transformers! 07:10 Stage 1 and Stage 2 explained 07:30 Stage 1 VQ-VAE recap 10:00 Stage 2 autoregressive transformer 10:45 Some notes on ELBO 13:05 VQ-VAE modifications 17:20 Stage 2 in-depth 23:00 Results 24:25 Engineering, engineering, engineering 25:40 Automatic filtering via CLIP 27:40 More results 32:00 Additional image to image translation examples ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dalle #openai #generativemodeling
What's up? In this video I'm covering zero shot text to image generation from the OpenAI team or Dali for short. And this work came out a couple of months ago, but I didn't see anybody covering the paper, so I thought walking you through and explaining in detail how this thing works. And let me just briefly explain what Dali is all about. As you can see here, Dali can synthesize these awesome images using just textual prompts. So on the left side here we have... so this is the prompt you're using and inputting into Dali. And so a taper made of accordion, a taper with the texture of an accordion. And you can see that Dali is able to remarkably mix up these concepts into something that looks like a plausible drawing that a human would. So a human could combine these concepts as well. So here we can see some kind of like a merge between the accordion and taper. Here we can see that the accordion has... so this handle is kind of in the taper shaped head. And yeah, it's pretty awesome. Some other examples involve this one. So an illustration of a baby hedgehog in a Christmas sweater walking a dog. And just like focusing maybe on this one, it looks super amazing. Like you can imagine this being used to generate ad hoc logos or gifts, whatever. So huge potential use cases for this model. Obviously it does make some mistakes at moments. So here we can see just a small hedgehog instead of a dog. So here the face is kind of blurry. So it's not perfect, but it's a huge leap forward. And I mean, I'm loving it. Here we have a neon sign that reads backprop, a neon sign that reads backprop, backprop, neon sign. So obviously you can see there is a lot of like prompt engineering. This is something that somebody called it on Twitter, like programming 3.0. So basically you have programming 1.0, that's your classical software engineering. Then you had programming 2.0, that's machine learning, where your data is your program. And here you're finally starting to just kind of hack the prompts in order to get the desired result from your model. And it looks like an emerging paradigm. And we'll be seeing more and more of this with GPT models as well, et cetera. So you can see results are pretty cool. You can see backprop here rendered. And I mean, pretty awesome. And finally, there are some like this emerging property of image translation as well. So basically they see here, so the exact same cat on the top as a sketch on the bottom. And in this example here, you can see it's, I mean, it got perfectly right. So it kind of just extracted the edges from this picture of a cat here and looks pretty awesome. On the other examples, it's not that perfect. But yeah, I mean, the very fact that it can actually do this without being explicitly trained to do these kind of stuff. So this is just an emerging property, same as we had in GPT models where we had we basically trained the model to predict the next token. We got like something like machine translation as a side effect, et cetera. So yeah, same thing happens here with the lead. Basically, and having mentioned GPT, you can assume that transformers and this is open AI. So you can just assume there is a transformer behind this. And we'll get to that in a moment. Let me just kind of touch on the one of the components of this of this of this model. So basically we have so Dali consists out of two parts. 
So one is VQVA model, which, by the way, I've covered in one of my previous videos, as well as VQGAM paper, which is super similar to Dali paper, except for the GAM component. So I do recommend you check those out. I'll just link them somewhere here. But having said that, I'll briefly explain what VQVA is in this video. So here I just want to point out one thing. So VQVA is basically an auto encoder like so it's a VA with like a quantized discrete latent space. And it has this like encoder like structure. Right. So we have a bottleneck there and then we reconstruct back the image. So this is your input image goes here. They reconstructed the image. Let's call it R comes out here. So here you can see that this is the input image and after it goes through the whole VQVA pipeline outcomes, this kind of blurry image. So the global composition is still here. Everything is still here. But you can see that it's blurry. And that's something you don't see with GANs. GANs have crispy, clear output. And here as well. So we have this text here and it's kind of blurred out here in the in the reconstruction. And similarly for these squiggly lines, some of them even get lost. So you can see here we focus on this one. You can see that like it kind of totally disappeared. OK. So it's not perfect. But like the reason they do this is so they can model because it's really hard to model the image directly in the in the in the pixel space. That's the whole idea. And I'll get to that in a moment. OK. So I mentioned transformers. Let's see. So this is open and I don't forget. So when compute model size and data are scaled carefully or regressive transformers. So this is the original transformer paper, which I've covered in one of my videos. You can check it out as well. You have achieved impressive results in several domains such as text. So this is your these are the GPT family of models. So GPT one, two and three images. So this is your image GPT from OpenAI and audio. So this is Jukebox paper from OpenAI as well. So you can see a trend here. What they do is they take a different modality. So, for example, text. They take a transformer. They scale the data. They scale the transformer and they get awesome results. And then we have images. Similar thing happened with images. So basically they also used a transformer. They modeled images directly in the pixel space. And the consequence was that they could only generate up to, I think, two like 64 times 64 images. Don't get don't I may be wrong here, but I think it's something like that. So in any case, it's a super small resolution compared to two. Nowadays we have generative models such as style game V2, where we have images that are like a million pixels, et cetera. So and finally we have audio. Again, they used a similar approach to this paper. So again, I think they use VQVAE and a transformer. They scaled it up and they generate they managed to generate some nice like music, et cetera. OK, so question is they ask is could data set size and model size be the limiting factor of current approaches? So here we're modeling both the text and the image images jointly. And so they say they train 12 billion parameter or aggressive transformer on 250 million image text pairs. So that's it. So data plus big transformers and you get awesome results. That's that's the main like kind of leitmotif here. OK, so briefly explaining the details of this like pipeline. They have two stages in the first stage. They train the VQVAE model in the second stage. 
They train this autoregressive transformer directly in the discrete latent space of the VQ-VAE. So let's just briefly touch on it. As I said, I have a whole video on this, you can check it out, but let me briefly explain it. So how you train this thing is you have an image, you pass it through this encoder, some kind of a CNN, and out come these vectors. So for example this will be like one vector here, and these will be the other dimensions along this side. And what you do is you kind of snap that, so these are just a collection of feature maps, right? So you just take this vector and you find the closest one in this codebook of vectors. So this is just your embedding table. You find the closest one according to L2 distance and you snap it to that one. So you can see, for example, if the closest codebook vector for this one was the one here, we snap it to that one, so we just keep its index here. And then later you can index into the codebook using these indices and recover the actual vector. So you'll put E1 here, so that's this vector, and this vector will replace this one. And then you pass these quantized vectors into the decoder, which is also just some kind of transposed convolutions, and you get the reconstructed image back. So the loss is fairly simple. They just have a reconstruction loss, they used MSE, so that's mean squared error, plus they have a loss that encourages the codebook vectors to be close to these encoder vectors and the encoder vectors to be close to the codebook vectors. The one difficulty is that it's impossible to actually backprop through this thing because of the quantization step. And so what they do is, whatever the gradients for these quantized vectors are, they just copy-paste them, as you can see by this red line, over to the encoder. And then, because that part is differentiable, you can just backprop through the CNN and figure out all of the other gradients. But that's a difficulty: this quantization step is one of the main things they had to solve. And yeah, that's in a nutshell how it works. In my VQ-VAE video I even went through some code, so if you're still confused about how this exactly works, you can check it out. But that's the rough idea of how VQ-VAE works. So once you train this model, you have trained the embedding vectors, the encoder and the decoder. And now the second step is the transformer part. So, quoting the paper: we concatenate up to 256 BPE-encoded (byte pair encoded) text tokens with the 32 by 32, that is 1024, image tokens, and train an autoregressive transformer to model the joint distribution over the text and image tokens. So this 32 by 32, let me just show you where that comes up. Basically this means that this grid in here will have 32 vectors along this side and 32 along this side, so we'll have 1024 tokens in total. And these are image tokens, so they capture the information about the image, and then using the decoder you can just decode them back into the pixel space. OK, there's also this part where they mention the ELBO; I think explaining it in full would be more confusing than helpful.
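Before the ELBO detour, here is a minimal PyTorch sketch of the quantization bottleneck just described: snap each encoder vector to its nearest codebook entry, add the two pull-together losses, and copy the gradients straight through. This is not OpenAI's implementation; the codebook size, vector dimension, and commitment weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=8192, code_dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)  # the embedding table
        self.beta = beta  # commitment weight (illustrative value)

    def forward(self, z_e):
        # z_e: encoder output, shape (B, H, W, D), e.g. H = W = 32
        flat = z_e.reshape(-1, z_e.shape[-1])                       # (B*H*W, D)
        # squared L2 distance from every encoder vector to every codebook vector
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                                        # nearest code index
        z_q = self.codebook(idx).reshape(z_e.shape)                  # quantized vectors
        # codebook + commitment losses pull codes and encoder outputs together
        vq_loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator: forward uses z_q, backward copies grads onto z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx.reshape(z_e.shape[:-1]), vq_loss
```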
But if you ignore the y for a second, so if we ignore the labels, sorry, the captions, this is basically your VAE objective, so just the ELBO. Basically, this is the ln of p(x given z), that's your reconstruction term. And then it depends on how you decide to model your data. So if you take a Gaussian here, what happens is that the Gaussian has a structure something like this: we have a constant, we have e raised to the power of, basically, minus (x minus mu) squared over something with sigma, it doesn't matter. But when you take the ln of this, the constant we don't care about, and ln and e are just inverses of each other, so you end up with (x minus mu) squared, which is basically your MSE loss. So that's how you end up with the MSE loss, even though you have this abstract expression here. I know this part confused me at first, so hopefully this gives you an epiphany moment. And similarly here, this is just the KL divergence between the approximate posterior and your prior. So that's basically your VAE objective: you're trying to maximize this whole thing because it's a lower bound, which in turn means you're pushing up the likelihood of your data. So you want to maximize the reconstruction probability and you want to minimize the KL divergence between those two. That's it in a nutshell, I don't want to get too deep into the math. As I said, the VQ-VAE video has a better explanation of this component, as well as links to some nice blogs which you can check out at your own pace to understand the mathematics behind this. The confusing part is that you actually won't see the KL term in the VQ-VAE model, the reason being that the KL divergence is constant in this model. So it's a nice way to think about it, but from a practical standpoint it's just confusing. OK, so a couple of details where DALL-E differs from the VQ-VAE and VQGAN papers. I mentioned the straight-through estimator, so that's this part, let me go back here. This is called the straight-through estimator: this red line is you copy-pasting the gradients from the decoder side over to the encoder so that you can backprop through the encoder and train your model. So that's the straight-through gradient estimator. And here they say they instead use the Gumbel-softmax relaxation, replacing the expectation over et cetera, et cetera. All in all, that's one modification they made, and additionally the likelihood for p sub theta is evaluated using the logit-Laplace distribution instead of a Gaussian. So I mentioned previously that a Gaussian leads to MSE as the objective in the loss function; because they're using the logit-Laplace here, they'll have a different objective. Let me briefly go to the appendix and show you what I mean by this, because it's kind of different compared to other papers, and I think it's a pretty neat idea. So: the L1 and L2 reconstruction objectives are commonly used when training VAEs. L2 is what we saw, MSE is basically the squared L2 distance. And these objectives correspond to using Laplace and Gaussian distributions for this likelihood term, which is what I mentioned. I explained the thing with the Gaussian, and the same logic goes for the Laplace.
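Since that Gaussian-to-MSE (and Laplace-to-L1) step is easy to lose in spoken form, here it is written out; this is just the standard identity with constants dropped, nothing specific to the paper. Maximizing the first line is minimizing the squared (MSE) error, and the second line gives an L1 objective:

```latex
\ln \mathcal{N}\!\big(x;\,\mu(z),\sigma^2\big) \;=\; -\frac{\big(x-\mu(z)\big)^2}{2\sigma^2} + \mathrm{const},
\qquad
\ln \mathrm{Laplace}\!\big(x;\,\mu(z),b\big) \;=\; -\frac{\lvert x-\mu(z)\rvert}{b} + \mathrm{const}.
```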
So you'll just end up with the same thing, only without the square: instead of the squared term you get an absolute value. And there is a strange mismatch in this modeling choice. Pixel values lie within a bounded interval, so your pixels will be in the 0 to 1 range, right, that's your normal RGB image. And they say here: but both of these distributions are supported by the entire real line, hence some amount of likelihood will be placed outside the admissible range of pixel values. So I just sketched the distributions here. Your output pixel values will pretty much lie between 0 and 1, but no matter how you parameterize this Laplace, say you put mu equal to 0.5, somewhere here, you're still going to assign some non-zero probability to values which are outside of the 0 to 1 range, and that's the only range we care about. So these assumptions, that the output pixel random variables are modeled as Gaussians or Laplacians, simply do not make that much sense if you think about it, and that's why they replace this. They present a variant of the Laplace distribution which is also supported on a bounded interval, so this PDF is defined on (0, 1). What they did is apply a sigmoid: they consider the PDF of the random variable obtained by applying the sigmoid function to a Laplace-distributed random variable. And so instead of using MSE, what you now do in order to maximize the likelihood of your data, so to decrease the reconstruction loss, is take the ln of this PDF, and whatever pops out, that's your objective, in contrast with the MSE loss. And just as a reminder, this is the sigmoid: the domain goes from minus infinity to plus infinity and the codomain, the image, is from zero to one. That's the reason why the random variable is now constrained to lie between zero and one, in contrast to the plain Laplace, which, as I mentioned, places mass outside of the bounded region we're interested in. So that was just a small glimpse into the details they changed compared to the original VQ-VAE paper. Let me now get back to where we stopped. I'll skip the explanation of the Gumbel-softmax relaxation, I think it's just a detail and there's a whole paper behind that idea, but in a nutshell, they replaced the straight-through gradient estimator with this Gumbel-softmax relaxation method. And now let's jump into stage two. So I mentioned that after we train the VQ-VAE we have a discrete space, and now we want to train a transformer on top of those image tokens. They say: given a text-image pair, we BPE-encode the lowercased caption using at most 256 tokens with a vocabulary size of 16,384, and encode the image using 32 by 32 = 1024 tokens with a vocabulary size of 8192. Finally, the text and image tokens are concatenated and modeled autoregressively as a single stream of data. Let me try and break this down for you in case it's not already clear. So imagine you have a caption, I'll draw something like this, so we have a caption, that's some text, and you have an associated image with that caption.
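To wrap up the likelihood detour before the caption-and-image walkthrough continues: a hedged sketch of that bounded-support reconstruction objective, a Laplace variable pushed through a sigmoid, written as a per-pixel negative log-likelihood. The epsilon that keeps pixels strictly inside (0, 1) and the idea of predicting a log-scale are simplifying assumptions for illustration.

```python
import torch

def logit_laplace_nll(x, mu, log_b, eps=0.1):
    # x:     target pixels in [0, 1]
    # mu:    decoder output (location, in logit space)
    # log_b: decoder output (log of the scale), so the scale b stays positive
    # eps:   squeezes pixels into (eps, 1 - eps) so logit(x) is finite (assumed value)
    x = (1 - 2 * eps) * x + eps            # map [0, 1] into (eps, 1 - eps)
    b = log_b.exp()
    logit_x = torch.log(x) - torch.log1p(-x)
    # -log f(x) = log(2b) + log x + log(1 - x) + |logit(x) - mu| / b
    nll = torch.log(2 * b) + torch.log(x) + torch.log1p(-x) + (logit_x - mu).abs() / b
    return nll.mean()
```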
So something like this will have a squiggly man here, random stuff, whatever. So that's your image. And so what happens now is the following. So they first lowercase this. So let me just denote that as L, then they do BPE. So that means they'll basically tokenize the words into into some smaller units. And what they have is they have this walk up size of 16K. So that just basically means you have a huge table here and that table contains contains 16K vectors which are learned. OK, so now what this thing how this thing will work like is the following. So let's say you have a sentence like I don't know like monkey. So monkey, monkey, king. And then we'll have whatever like just three dots. OK, so this is your input sentence. And so BPE will just kind of break down your words maybe into some sub words like I know you'll have maybe mon you'll have key. You'll have I know maybe key I and K.I. And so what happens is you'll just take this sub word. You'll find the appropriate vector and you'll replace your sub word with this vector and et cetera, et cetera. And so what they say is they have max of 256 tokens. So that means if we have more than that, we'll just kind of truncate. If we have less, they have some special like tokens which they'll train for that particular purpose. Where when we have like those when we have like a less number than 256. But it's a minor detail. And so that's what happens with the text. So we have we end up with 256 tokens here. OK, now the following thing happens with the image. So this one just gets fed into the encoder of the VQVAE, which was previously trained in stage one and outcome 32 by 32 tokens. So basically this is 32 by 32 and you're kind of going to unroll these. And so this will be your your target. So basically you'll just put start of sentence token here and then you'll basically all start of image some special token. And then you'll just unroll this thing here. OK, so just unroll it here. And that's your input into your transformer. And what the output will be is the following. So this very same thing just shifted by one to the left. OK, so you'll have something like this and then you'll have end of sentence token or end of image in this case. OK, so that's it. And so now how the whole thing works is you have a huge transformer. I think they used I think like GPT-2 or something with some sparse attention maps. And so you'll have a transformer here and then you have your usual thing. You just pass this in, you transform it through the multiple layers of transformer and you'll want to predict these. So you'll want to predict this thing from the start of sentence token. So in this case, these are image tokens. But just imagine for a second this was like a textual token. In that case, if this was text like Monkey King, we'd have M here and we'd have M here. And then the second letter would be O. So we'd have O here. So as you can see, you want to predict from the like this special token, you want to predict M. From M, you want to predict O because O is the next one. So you're just trying to predict the next token, simple stuff pretty much. And you can do this in parallel. And that's how they train this whole thing. So this thing here will be 256 and this thing here will be 1024. And you just train this transformer to kind of learn how to do this next token prediction. And then how you actually use this later in inference is you just prepend this caption and you prepend this special token like this one here. 
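Here is a compact sketch of that single-stream construction: padded BPE text ids followed by the unrolled 32 by 32 image codebook indices, with next-token targets obtained by shifting. The special-token ids and the padding scheme are hypothetical simplifications (the paper uses a richer set of padding tokens), and the image ids are offset so the two vocabularies do not collide.

```python
import torch

TEXT_LEN, IMAGE_LEN = 256, 32 * 32       # 256 text tokens + 1024 image tokens
PAD_ID, BOS_ID = 0, 1                    # hypothetical special-token ids
TEXT_VOCAB = 16384                       # image ids live past the text vocabulary

def build_example(text_token_ids, image_code_indices):
    # text_token_ids: 1D LongTensor of BPE ids; image_code_indices: (32, 32) codebook ids
    text = text_token_ids[:TEXT_LEN]
    if text.numel() < TEXT_LEN:
        pad = torch.full((TEXT_LEN - text.numel(),), PAD_ID, dtype=torch.long)
        text = torch.cat([text, pad])
    image = image_code_indices.reshape(-1) + TEXT_VOCAB        # unroll 32x32 and offset
    stream = torch.cat([torch.tensor([BOS_ID]), text, image])  # one stream of data
    return stream[:-1], stream[1:]                             # inputs, next-token targets
```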
And then you just start sampling from the model until you get 32 by 32 tokens. You just pass that through the decoder and out comes a novel image. So that caption can be obviously completely novel. So the model has never seen the caption during training like the examples I showed you with taper and hedgehogs. So yeah, that's how it works in a nutshell. Now, let me just show you some results they have here. So they compare obviously the Dali paper with different GAN baselines. So like DFGAN, DMGAN, TENSIONGAN, etc. And here we have, so this is the ground truth caption, a very cute cat lying by a big bike. And here we have the actual validation images. So the images that go with these captions. So that's the ground truth. And if you take a look at the results, you can see two things. So first thing is Dali has better outputs. Like it's just if we focus on this thing here, a living room with a TV on top of a stand with a guitars sitting next to you. And basically you can see this looks really cool, much better than GANs. The main difference is like VQVAE is, as we saw in the beginning of the video, has blurry output. That's why we have blurry kind of blurry results here, even though their composition is much better compared to GANs. Where GANs are kind of the chat programmers of the generative modeling world because they're super confident. They just output some values super crisp, even though it's complete BS. And so on the other hand, we have VQVAEs, which have better compositionality, but they have blurry results. That's something you can take off from this image. OK, let's now continue. One thing I want to mention is, and they actually explicitly mentioned it here, is getting to model the model to train in a 16 bit precision past one billion parameters. And if you remember, they have 12 billion parameters, so that they have 12 billion parameter transformer without diverging was the most challenging part of this project. This just makes you appreciate engineering so much because if you think about it, the ideas in this paper are nothing brand novel. Like VQGAN has super similar ideas. Jukebox, all of these papers are kind of variations of each other. There is nothing super novel here from the research standpoint, in my honest opinion. But like from the engineering standpoint, there is a lot of things you have to solve in order to get this to work. So one of the things was that they had to train this in 16 bit precision and they had problems with overflow. So they had to devise multiple techniques like this one here, and they also had to do some engineering hacks in order to kind of avoid the memory bottlenecks they had on their GPUs. So they couldn't fit the whole model. So they had to do this charting between multiple nodes, etc. So just keep that in mind. I wouldn't say there is anything super novel research wise in this paper. But just when you scale it up, when you collect the data, when you invest just a bunch of money, a bunch of engineering, like just a bunch of dev like ours, you get some awesome results. And I think that's a valuable lesson. We don't always have to just invent novel ideas. Sometimes we just have to scale up and kind of push to the limits the current work we have. What I have here is, if you're familiar with CLIP, if you're not, I also create a video on CLIP. You can check it out. But basically what I do here is they have this automatic way of filtering the output from the Dali. So, for example, you generate you generate 512 images from the Dali. 
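The sampling loop mentioned at the start of this passage, prepend the caption tokens and then sample the 1024 image tokens one at a time before handing them to the VQ-VAE decoder, might look roughly like this. The transformer, decoder, and caption tensor are assumed to exist and return/accept the shapes noted in the comments; temperature sampling is just one of several reasonable choices.

```python
import torch

@torch.no_grad()
def generate_image(transformer, vqvae_decoder, caption_tokens, image_len=1024, temperature=1.0):
    # caption_tokens: (1, 256) BPE ids for the prompt, already padded
    seq = caption_tokens
    for _ in range(image_len):                       # sample 32*32 image tokens
        logits = transformer(seq)[:, -1, :]          # assumed (1, L, vocab) output
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, nxt], dim=1)
    image_tokens = seq[:, -image_len:].reshape(1, 32, 32)
    return vqvae_decoder(image_tokens)               # back to pixel space
```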
So we have a bunch of images. This is one. This is second one. This is third one, etc. So you generate a bunch of them. And then you have this model called CLIP, which will let me just draw it like like a circle. So CLIP will just basically you input an image and you input a caption. So, for example, this one, a group of urinals is near the trees and CLIP will tell you. So this is CLIP. CLIP will tell you how likely is this image to be under that caption. Let's call it that way. So that means you can do explicit sorting of these images according to the scores that CLIP model gave them. And you just take you cherry pick the ones with high scores. And you can see the results here are pretty astounding. So if you just have if you just change a single image, you can see it's pretty random. You don't see even the urinals. Then when you have when you generate eight images and you pick the best one, the best one, according to CLIP, you get better results. And finally, if you have 512 images, this one looks much better. So this one looks way better compared to the ones below. OK, so, yeah, this just works. It's just a heuristic to kind of get some better results from these generating models. Same goes from some different captions. And that's the that's the rough idea. Again, check out the CLIP video if you're interested in learning about details, how that exactly works. OK, some additional results. They compare this to this DF can baseline. Yep, they just get better realism and accuracy results compared to the baseline. Nothing fancy there. Here they show some results, some zero shot samples on this CUB data set. This CUB data set basically consists out of birds. And the thing is that the original that huge data set I mentioned that the Dali model uses has like a bunch of images. But the distribution of those images and captions is completely different compared to this data set. And they show some not that great performance on this on this on this CUB data set. You can see some birds here. It's not perfect. And especially when you focus on these quantitative results, you can see that Dali is much worse compared to other baselines on that data set, because, as I said, it's out of distribution in a way so it doesn't perform that well. What they do here on these diagrams is they plot the inception score and the FID. So that's for shades, inception, distance. Those are just some metrics which people use in order to kind of try and quantify how good are the generated images, because the problem itself is ill-posed, same as Newell style transfer, same as deep dream, those kind of images. You kind of you don't have a clear cut way to determine which image is better. So these are some heuristics like IS and FID. And what they do is on the next axis, they have a blur kernel radius. So what they do is they kind of whatever is generated out of out of Dali and other baselines like GANs, they just kind of blurred them with increasingly bigger filters. So you have initially you'll have some so you have an image here. And hopefully you're familiar with this. This is just some basic image processing technique. So you can create a kernel like a Gaussian and you just kind of like slide over the image and apply the like the you just do the kernel operation, the same as in CNNs. You just do like element wise multiplication. You add them up. You kind of map that to an output pixel. And if you do that with a Gaussian kernel, it'll just kind of blur the image. 
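The re-ranking heuristic described just above, score every generated image against the caption with CLIP and keep only the best ones, can be sketched with the open-source CLIP package roughly as follows; the model variant and the number of kept images are arbitrary choices.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def rerank(pil_images, caption, top_k=8):
    # pil_images: list of candidate PIL images generated for this caption
    images = torch.stack([preprocess(im) for im in pil_images]).to(device)
    text = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.t()).squeeze(1)   # cosine similarity per image
    best = scores.argsort(descending=True)[:top_k]
    return [pil_images[i] for i in best.tolist()]
```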
And what they then do is they kind of start increasing these kernels. And so they have a bigger one. And so the bigger the kernel, the more blurrier the image will be. And they show that when they kind of blur the image, they get better results. So this is the Dali and you can see that higher is better. And so the bigger the blur kernel is, the better results we get from the Dali compared to other baselines. Here, FID should be smaller so you can see again that even here after just increasing the blur radius to one, we have better results compared to baselines. Why do this? The reason is, as I mentioned at the beginning of the video, VQVAs have problems because the outputs are not crispy. And so they suspect that these metrics may be penalizing them because of that. So in order to get on an even ground, they take the GAN output, they blur it so as to compare to kind of make it fair to Dali. And then they kind of calculate the FIDs and IS. I mean, this is kind of a way to, I guess, measure the, like, how high quality the composition of the images and put less focus on the actual details. I guess that's the whole point of this. On FID, they show that the inception score is much lower, which is a bad thing. You want to have high inception score. And they show that the FID is higher. So that means it's having problems with the CUP data set. As I already mentioned, CUP is kind of different. The distribution of data is different there. Okay. Here, pretty obvious fact. That's that when we increase the sample size for re-ranking, so that's the clip part, the more so if you have 512 generated images, that's going to be way better than you generating just a single image. As you can see here, the inception score goes up. The FID goes down. And so that means there is a sweet spot, maybe around 128 generated images where you can then automatically select the best one. And this just improves the results on both metrics, FID and IS. Okay. Those were the results. I guess that's pretty much it. Let me just go back to the appendix and show you one more thing. One more thing. So, yeah, you can see the patterns in the transformer. They have just different patterns there, not just simple causal masks. And this was introduced in their sparse transformer papers. You can check that out if you wish so. And I just want to show you some additional results on the image to image translation somewhere here. Okay. So the exact same cat on the top color, on the top colored red on the bottom. So you can see here again perfectly image to image translation, even though this task, so you now know how the Dali model is trained. It's just trained to first train VQVAE. Then you train the autoregressive transformer to just predict the next image token. And you can see that out comes this ability to do this kind of stuff as well. So here they mentioned two panel image of the exact same cat on the top, a photo of the cat on the bottom, the cat with sunglasses. And you can see again like just placing some sunglasses onto the cat. Really awesome results. I love this. Here with postcard, the exact same cat on the top as a postage stamp on the bottom. Pretty awesome results. And with that, I leave you here. Okay. That was it. Hopefully you like this video. Let me know what you think. Subscribe, share out this content if you like it. That's the best way you can support me. And until next time, bye bye.
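The blur-radius sweep used in those IS/FID comparisons, blur every sample with an increasingly large Gaussian kernel before computing the metrics, is simple to reproduce; one possible sketch with PIL, where the radius values are just examples:

```python
from PIL import Image, ImageFilter

def blur_sweep(image_paths, radii=(0, 1, 2, 4, 8)):
    # returns {radius: list of blurred PIL images}; feed each set to your IS/FID code
    out = {}
    for r in radii:
        blurred = []
        for p in image_paths:
            im = Image.open(p).convert("RGB")
            blurred.append(im if r == 0 else im.filter(ImageFilter.GaussianBlur(radius=r)))
        out[r] = blurred
    return out
```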
[{"start": 0.0, "end": 9.0, "text": " What's up? In this video I'm covering zero shot text to image generation from the OpenAI team or Dali for short."}, {"start": 9.0, "end": 15.0, "text": " And this work came out a couple of months ago, but I didn't see anybody covering the paper,"}, {"start": 15.0, "end": 20.0, "text": " so I thought walking you through and explaining in detail how this thing works."}, {"start": 20.0, "end": 24.0, "text": " And let me just briefly explain what Dali is all about."}, {"start": 24.0, "end": 31.0, "text": " As you can see here, Dali can synthesize these awesome images using just textual prompts."}, {"start": 31.0, "end": 38.0, "text": " So on the left side here we have... so this is the prompt you're using and inputting into Dali."}, {"start": 38.0, "end": 43.0, "text": " And so a taper made of accordion, a taper with the texture of an accordion."}, {"start": 43.0, "end": 53.0, "text": " And you can see that Dali is able to remarkably mix up these concepts into something that looks like a plausible drawing that a human would."}, {"start": 53.0, "end": 55.0, "text": " So a human could combine these concepts as well."}, {"start": 55.0, "end": 60.0, "text": " So here we can see some kind of like a merge between the accordion and taper."}, {"start": 60.0, "end": 66.0, "text": " Here we can see that the accordion has... so this handle is kind of in the taper shaped head."}, {"start": 66.0, "end": 69.0, "text": " And yeah, it's pretty awesome."}, {"start": 69.0, "end": 71.0, "text": " Some other examples involve this one."}, {"start": 71.0, "end": 77.0, "text": " So an illustration of a baby hedgehog in a Christmas sweater walking a dog."}, {"start": 77.0, "end": 81.0, "text": " And just like focusing maybe on this one, it looks super amazing."}, {"start": 81.0, "end": 89.0, "text": " Like you can imagine this being used to generate ad hoc logos or gifts, whatever."}, {"start": 89.0, "end": 93.0, "text": " So huge potential use cases for this model."}, {"start": 93.0, "end": 95.0, "text": " Obviously it does make some mistakes at moments."}, {"start": 95.0, "end": 99.0, "text": " So here we can see just a small hedgehog instead of a dog."}, {"start": 99.0, "end": 102.0, "text": " So here the face is kind of blurry."}, {"start": 102.0, "end": 106.0, "text": " So it's not perfect, but it's a huge leap forward."}, {"start": 106.0, "end": 108.0, "text": " And I mean, I'm loving it."}, {"start": 108.0, "end": 115.0, "text": " Here we have a neon sign that reads backprop, a neon sign that reads backprop, backprop, neon sign."}, {"start": 115.0, "end": 119.0, "text": " So obviously you can see there is a lot of like prompt engineering."}, {"start": 119.0, "end": 124.0, "text": " This is something that somebody called it on Twitter, like programming 3.0."}, {"start": 124.0, "end": 127.0, "text": " So basically you have programming 1.0, that's your classical software engineering."}, {"start": 127.0, "end": 132.0, "text": " Then you had programming 2.0, that's machine learning, where your data is your program."}, {"start": 132.0, "end": 141.0, "text": " And here you're finally starting to just kind of hack the prompts in order to get the desired result from your model."}, {"start": 141.0, "end": 143.0, "text": " And it looks like an emerging paradigm."}, {"start": 143.0, "end": 147.0, "text": " And we'll be seeing more and more of this with GPT models as well, et cetera."}, {"start": 147.0, "end": 150.0, "text": " So you can see results are pretty cool."}, {"start": 150.0, 
"end": 152.0, "text": " You can see backprop here rendered."}, {"start": 152.0, "end": 154.0, "text": " And I mean, pretty awesome."}, {"start": 154.0, "end": 161.0, "text": " And finally, there are some like this emerging property of image translation as well."}, {"start": 161.0, "end": 169.0, "text": " So basically they see here, so the exact same cat on the top as a sketch on the bottom."}, {"start": 169.0, "end": 175.0, "text": " And in this example here, you can see it's, I mean, it got perfectly right."}, {"start": 175.0, "end": 181.0, "text": " So it kind of just extracted the edges from this picture of a cat here and looks pretty awesome."}, {"start": 181.0, "end": 183.0, "text": " On the other examples, it's not that perfect."}, {"start": 183.0, "end": 189.0, "text": " But yeah, I mean, the very fact that it can actually do this without being explicitly trained to do these kind of stuff."}, {"start": 189.0, "end": 196.0, "text": " So this is just an emerging property, same as we had in GPT models where we had we basically trained the model to predict the next token."}, {"start": 196.0, "end": 201.0, "text": " We got like something like machine translation as a side effect, et cetera."}, {"start": 201.0, "end": 204.0, "text": " So yeah, same thing happens here with the lead."}, {"start": 204.0, "end": 209.0, "text": " Basically, and having mentioned GPT, you can assume that transformers and this is open AI."}, {"start": 209.0, "end": 212.0, "text": " So you can just assume there is a transformer behind this."}, {"start": 212.0, "end": 214.0, "text": " And we'll get to that in a moment."}, {"start": 214.0, "end": 220.0, "text": " Let me just kind of touch on the one of the components of this of this of this model."}, {"start": 220.0, "end": 223.0, "text": " So basically we have so Dali consists out of two parts."}, {"start": 223.0, "end": 235.0, "text": " So one is VQVA model, which, by the way, I've covered in one of my previous videos, as well as VQGAM paper, which is super similar to Dali paper, except for the GAM component."}, {"start": 235.0, "end": 237.0, "text": " So I do recommend you check those out."}, {"start": 237.0, "end": 239.0, "text": " I'll just link them somewhere here."}, {"start": 239.0, "end": 243.0, "text": " But having said that, I'll briefly explain what VQVA is in this video."}, {"start": 243.0, "end": 246.0, "text": " So here I just want to point out one thing."}, {"start": 246.0, "end": 254.0, "text": " So VQVA is basically an auto encoder like so it's a VA with like a quantized discrete latent space."}, {"start": 254.0, "end": 257.0, "text": " And it has this like encoder like structure."}, {"start": 257.0, "end": 261.0, "text": " Right. 
So we have a bottleneck there and then we reconstruct back the image."}, {"start": 261.0, "end": 263.0, "text": " So this is your input image goes here."}, {"start": 263.0, "end": 264.0, "text": " They reconstructed the image."}, {"start": 264.0, "end": 267.0, "text": " Let's call it R comes out here."}, {"start": 267.0, "end": 275.0, "text": " So here you can see that this is the input image and after it goes through the whole VQVA pipeline outcomes, this kind of blurry image."}, {"start": 275.0, "end": 277.0, "text": " So the global composition is still here."}, {"start": 277.0, "end": 279.0, "text": " Everything is still here."}, {"start": 279.0, "end": 281.0, "text": " But you can see that it's blurry."}, {"start": 281.0, "end": 283.0, "text": " And that's something you don't see with GANs."}, {"start": 283.0, "end": 286.0, "text": " GANs have crispy, clear output."}, {"start": 286.0, "end": 287.0, "text": " And here as well."}, {"start": 287.0, "end": 291.0, "text": " So we have this text here and it's kind of blurred out here in the in the reconstruction."}, {"start": 291.0, "end": 296.0, "text": " And similarly for these squiggly lines, some of them even get lost."}, {"start": 296.0, "end": 298.0, "text": " So you can see here we focus on this one."}, {"start": 298.0, "end": 301.0, "text": " You can see that like it kind of totally disappeared."}, {"start": 301.0, "end": 303.0, "text": " OK. So it's not perfect."}, {"start": 303.0, "end": 311.0, "text": " But like the reason they do this is so they can model because it's really hard to model the image directly in the in the in the pixel space."}, {"start": 311.0, "end": 312.0, "text": " That's the whole idea."}, {"start": 312.0, "end": 314.0, "text": " And I'll get to that in a moment."}, {"start": 314.0, "end": 316.0, "text": " OK. 
So I mentioned transformers."}, {"start": 316.0, "end": 317.0, "text": " Let's see."}, {"start": 317.0, "end": 318.0, "text": " So this is open and I don't forget."}, {"start": 318.0, "end": 324.0, "text": " So when compute model size and data are scaled carefully or regressive transformers."}, {"start": 324.0, "end": 327.0, "text": " So this is the original transformer paper, which I've covered in one of my videos."}, {"start": 327.0, "end": 329.0, "text": " You can check it out as well."}, {"start": 329.0, "end": 332.0, "text": " You have achieved impressive results in several domains such as text."}, {"start": 332.0, "end": 335.0, "text": " So this is your these are the GPT family of models."}, {"start": 335.0, "end": 338.0, "text": " So GPT one, two and three images."}, {"start": 338.0, "end": 341.0, "text": " So this is your image GPT from OpenAI and audio."}, {"start": 341.0, "end": 345.0, "text": " So this is Jukebox paper from OpenAI as well."}, {"start": 345.0, "end": 346.0, "text": " So you can see a trend here."}, {"start": 346.0, "end": 349.0, "text": " What they do is they take a different modality."}, {"start": 349.0, "end": 351.0, "text": " So, for example, text."}, {"start": 351.0, "end": 352.0, "text": " They take a transformer."}, {"start": 352.0, "end": 353.0, "text": " They scale the data."}, {"start": 353.0, "end": 356.0, "text": " They scale the transformer and they get awesome results."}, {"start": 356.0, "end": 357.0, "text": " And then we have images."}, {"start": 357.0, "end": 359.0, "text": " Similar thing happened with images."}, {"start": 359.0, "end": 361.0, "text": " So basically they also used a transformer."}, {"start": 361.0, "end": 364.0, "text": " They modeled images directly in the pixel space."}, {"start": 364.0, "end": 370.0, "text": " And the consequence was that they could only generate up to, I think, two like 64 times 64 images."}, {"start": 370.0, "end": 374.0, "text": " Don't get don't I may be wrong here, but I think it's something like that."}, {"start": 374.0, "end": 378.0, "text": " So in any case, it's a super small resolution compared to two."}, {"start": 378.0, "end": 382.0, "text": " Nowadays we have generative models such as style game V2, where we have images that are like"}, {"start": 382.0, "end": 385.0, "text": " a million pixels, et cetera."}, {"start": 385.0, "end": 388.0, "text": " So and finally we have audio."}, {"start": 388.0, "end": 391.0, "text": " Again, they used a similar approach to this paper."}, {"start": 391.0, "end": 394.0, "text": " So again, I think they use VQVAE and a transformer."}, {"start": 394.0, "end": 400.0, "text": " They scaled it up and they generate they managed to generate some nice like music, et cetera."}, {"start": 400.0, "end": 408.0, "text": " OK, so question is they ask is could data set size and model size be the limiting factor of current approaches?"}, {"start": 408.0, "end": 412.0, "text": " So here we're modeling both the text and the image images jointly."}, {"start": 412.0, "end": 419.0, "text": " And so they say they train 12 billion parameter or aggressive transformer on 250 million image text pairs."}, {"start": 419.0, "end": 423.0, "text": " So that's it. 
So data plus big transformers and you get awesome results."}, {"start": 423.0, "end": 428.0, "text": " That's that's the main like kind of leitmotif here."}, {"start": 428.0, "end": 435.0, "text": " OK, so briefly explaining the details of this like pipeline."}, {"start": 435.0, "end": 442.0, "text": " They have two stages in the first stage. They train the VQVAE model in the second stage."}, {"start": 442.0, "end": 448.0, "text": " They train this all regressive transformer directly in the latent discrete latent space of the VQVAE."}, {"start": 448.0, "end": 450.0, "text": " So let's kind of just briefly touch on it."}, {"start": 450.0, "end": 453.0, "text": " As I said, I have a whole video on this. You can check it out."}, {"start": 453.0, "end": 455.0, "text": " But let me just kind of briefly explain it."}, {"start": 455.0, "end": 459.0, "text": " So how you train this thing is you have an image, you pass it through this."}, {"start": 459.0, "end": 463.0, "text": " This will be some kind of a CNN, some kind of encoder outcome."}, {"start": 463.0, "end": 469.0, "text": " These these vectors you'll have, for example, this will be like one vector here."}, {"start": 469.0, "end": 474.0, "text": " And just this will be the other dimensions along this side."}, {"start": 474.0, "end": 480.0, "text": " And what you do is you kind of snap that that so these will be just like a collection of feature maps."}, {"start": 480.0, "end": 486.0, "text": " Right. So you just take this vector and you'll find the closest one in this in this codebook of vectors."}, {"start": 486.0, "end": 488.0, "text": " So this is just your embedding table."}, {"start": 488.0, "end": 493.0, "text": " You find the closest one according to L2 distance and you snap it to that one."}, {"start": 493.0, "end": 500.0, "text": " So you can see, for example, if this one was maybe the closest codebook vector was one here, you can see we snap it to one."}, {"start": 500.0, "end": 502.0, "text": " So we have just a simple one here."}, {"start": 502.0, "end": 510.0, "text": " And then later you can just kind of index using these indices into the codebook vector and you can actually find the actual vector."}, {"start": 510.0, "end": 512.0, "text": " So you'll put E1 here. 
So that's this vector."}, {"start": 512.0, "end": 514.0, "text": " So this vector will replace this one."}, {"start": 514.0, "end": 523.0, "text": " And then you pass these these vectors, these quantized vectors into the coder, which is also just some kind of transposed convolutions."}, {"start": 523.0, "end": 527.0, "text": " And you get the CNL, you get the reconstructed image back."}, {"start": 527.0, "end": 530.0, "text": " So the loss is fairly simple."}, {"start": 530.0, "end": 532.0, "text": " They just have reconstruction loss."}, {"start": 532.0, "end": 533.0, "text": " So they used MSC."}, {"start": 533.0, "end": 534.0, "text": " So that's mean squared error."}, {"start": 534.0, "end": 543.0, "text": " Plus, they have some loss that encourages encourages the codebook vectors to be close to these vectors and these vectors to be close to codebook vectors."}, {"start": 543.0, "end": 548.0, "text": " So the one difficulty is it's hard to it's not hard."}, {"start": 548.0, "end": 554.0, "text": " It's impossible to actually backprop through this thing because of this quantization step."}, {"start": 554.0, "end": 561.0, "text": " And so what they do is they whatever the gradients for these vectors here are, they just copy paste those."}, {"start": 561.0, "end": 565.0, "text": " As you can see this by this red line, you just copy paste those into the encoder."}, {"start": 565.0, "end": 571.0, "text": " And then because this part is differentiable, you can just backprop through the CNN and figure out all of the other gradients."}, {"start": 571.0, "end": 573.0, "text": " But that's a difficulty."}, {"start": 573.0, "end": 577.0, "text": " So this part, because of the quantization, that's one of the main things they had to solve."}, {"start": 577.0, "end": 580.0, "text": " And yeah, that's that's in a nutshell how it works."}, {"start": 580.0, "end": 583.0, "text": " So in my VQVAE video, I even went through some code."}, {"start": 583.0, "end": 586.0, "text": " So if you're still confused how this exactly work, you can check it out."}, {"start": 586.0, "end": 591.0, "text": " But that's the whole like a like a rough idea of how VQVAE works."}, {"start": 591.0, "end": 599.0, "text": " So once you train this model, so what you have is you have you train these embedding vectors, you train the encoder, you train the decoder."}, {"start": 599.0, "end": 603.0, "text": " And so now the second step is this transformer thing."}, {"start": 603.0, "end": 606.0, "text": " So we concatenate up to 256 pp encoded."}, {"start": 606.0, "end": 611.0, "text": " So that's by pairing coding text tokens with the 32 by 32."}, {"start": 611.0, "end": 619.0, "text": " That's 1024 image tokens and train an autoregressive transformer to model the joint distribution over the text and image tokens."}, {"start": 619.0, "end": 623.0, "text": " So this 32 by 32, let me just show you show you where that comes up."}, {"start": 623.0, "end": 630.0, "text": " So basically this means that this in here will have 32 vectors here and 32 here."}, {"start": 630.0, "end": 635.0, "text": " So we'll have 1024 tokens like in total."}, {"start": 635.0, "end": 637.0, "text": " And these are image tokens."}, {"start": 637.0, "end": 646.0, "text": " So those kind of capture the information about the image and then using decoder, you can just decode them back into the pixel space."}, {"start": 646.0, "end": 649.0, "text": " OK, there's this part about where dimension elbow."}, {"start": 649.0, "end": 654.0, "text": " I think it's it will 
be more confusing than than than good to explain this."}, {"start": 654.0, "end": 664.0, "text": " But if you ignore why for a second, so if we ignore the labels, the captions, sorry, if we ignore the captions, this is basically your your VAE objective."}, {"start": 664.0, "end": 665.0, "text": " So just elbow."}, {"start": 665.0, "end": 667.0, "text": " Basically, this is the LN."}, {"start": 667.0, "end": 673.0, "text": " So of p of x given that that's your reconstruction loss."}, {"start": 673.0, "end": 677.0, "text": " And then depending how you decide to model your your data."}, {"start": 677.0, "end": 684.0, "text": " So if you take Gaussian here, so basically what will happen is because Gaussian has some some some structure or something like this."}, {"start": 684.0, "end": 688.0, "text": " So we have C, we have e raised to the power of blah, blah, blah."}, {"start": 688.0, "end": 695.0, "text": " We have basically x minus mu squared over something."}, {"start": 695.0, "end": 697.0, "text": " Sigma's doesn't matter."}, {"start": 697.0, "end": 699.0, "text": " But basically when you do LN of this, this is a constant."}, {"start": 699.0, "end": 700.0, "text": " So we don't care."}, {"start": 700.0, "end": 701.0, "text": " This is E."}, {"start": 701.0, "end": 705.0, "text": " These two are so LN and E are just inverse of each other."}, {"start": 705.0, "end": 710.0, "text": " So you end up with x minus mu squared, which is your basically MSC loss."}, {"start": 710.0, "end": 715.0, "text": " So that's how you end up with MSC loss, even though you have this abstraction here."}, {"start": 715.0, "end": 717.0, "text": " And I know this part confused me."}, {"start": 717.0, "end": 721.0, "text": " So hopefully this will like give you some epiphany moment."}, {"start": 721.0, "end": 729.0, "text": " And similarly here, this is just a KL divergence between the approximate posterior and your prior."}, {"start": 729.0, "end": 733.0, "text": " So that's basically your your your like VAE objective."}, {"start": 733.0, "end": 742.0, "text": " And you're trying to to maximize this part because in turn, because it's a lower bound, that means you're you're going to maximize the like the likelihood of your data."}, {"start": 742.0, "end": 750.0, "text": " So you just kind of want to maximize the reconstruction probability and you want to minimize the KL divergence between those two."}, {"start": 750.0, "end": 752.0, "text": " That's just in a nutshell."}, {"start": 752.0, "end": 753.0, "text": " I don't want to get into maths."}, {"start": 753.0, "end": 762.0, "text": " As I said, the video has a better explanation of this of this component, as well as some links to some cool blocks, which you can check out at your own pace and understand the mathematics behind this."}, {"start": 762.0, "end": 769.0, "text": " But the confusing part is you actually won't see the KL part, especially in VQVAE model."}, {"start": 769.0, "end": 773.0, "text": " The reason being is that the KL divergence is constant in this model."}, {"start": 773.0, "end": 775.0, "text": " So it's kind of confusing to you."}, {"start": 775.0, "end": 777.0, "text": " It's a nice thing to think about it like this."}, {"start": 777.0, "end": 782.0, "text": " But like for practice from a practical standpoint, it's just confusing."}, {"start": 782.0, "end": 792.0, "text": " OK, so a couple of details where VQGAN and like VQVAE papers differ from DALI."}, {"start": 792.0, "end": 795.0, "text": " So I mentioned the straight through 
estimator."}, {"start": 795.0, "end": 797.0, "text": " So that's this part. Let me go back here."}, {"start": 797.0, "end": 799.0, "text": " So this is called the straight through estimator."}, {"start": 799.0, "end": 808.0, "text": " This red line, this is basically your you copy pasting the gradients from the decoder over to the encoder so that you can back prop through the encoder and train your model."}, {"start": 808.0, "end": 810.0, "text": " OK, so that's the straight through gradient."}, {"start": 810.0, "end": 812.0, "text": " Like a gradient estimator."}, {"start": 812.0, "end": 820.0, "text": " And here, so they say we instead use the GUMBO softmax relaxation, replacing the expectation over blah, blah, blah, blah, blah."}, {"start": 820.0, "end": 830.0, "text": " All in all, this is a modification they made as well as the likelihood for the P sub theta is evaluated using the log Laplace distribution."}, {"start": 830.0, "end": 833.0, "text": " So I mentioned previously I mentioned this Gaussian here."}, {"start": 833.0, "end": 839.0, "text": " So Gaussian leads to you having MSE as an objective in the loss function."}, {"start": 839.0, "end": 843.0, "text": " This thing, because they're using log Laplace, they'll have some different objective."}, {"start": 843.0, "end": 848.0, "text": " And let me briefly go to the appendix and show you what I mean by this."}, {"start": 848.0, "end": 850.0, "text": " This is kind of different compared to other papers."}, {"start": 850.0, "end": 852.0, "text": " And I think it's pretty neat idea."}, {"start": 852.0, "end": 857.0, "text": " So the L1 and L2 reconstruction objectives are commonly used when training VA's."}, {"start": 857.0, "end": 861.0, "text": " So L2 was what we saw. So MSE is basically L2 squared."}, {"start": 861.0, "end": 869.0, "text": " And so these objectives correspond to using Laplace and Gaussian distributions for the this term."}, {"start": 869.0, "end": 872.0, "text": " And that's what I mentioned. So I explained the thing with Gaussian."}, {"start": 872.0, "end": 876.0, "text": " Similarly, it goes with Laplace. You'll have the same logic behind it."}, {"start": 876.0, "end": 880.0, "text": " So you'll just end up with instead of square, you just won't have the square component."}, {"start": 880.0, "end": 885.0, "text": " And there is a strange mismatch in this in this modeling choice."}, {"start": 885.0, "end": 891.0, "text": " Pixel values lie within a bounding interval. So your pixels will be in the like 0, 1 range, right?"}, {"start": 891.0, "end": 894.0, "text": " That's your normal RGB image."}, {"start": 894.0, "end": 898.0, "text": " And they say here, but both of these distributions are supported by the entire real line."}, {"start": 898.0, "end": 903.0, "text": " Hence, some amount of likelihood will be placed outside the admissible range of pixel values."}, {"start": 903.0, "end": 908.0, "text": " And so I just kind of draw like based at the distributions here."}, {"start": 908.0, "end": 914.0, "text": " So your output pixel values will be pretty much here from 0 to 1, no matter how you parameterize this Laplace."}, {"start": 914.0, "end": 925.0, "text": " So if you put mew to be equal to 0.5, so somewhere here, you're still going to have some non zero probability being given to values which are outside of the 0, 1 range."}, {"start": 925.0, "end": 929.0, "text": " So this is the range we care about, right? 
The 0, 1 range."}, {"start": 929.0, "end": 940.0, "text": " And these distributions, these assumptions that we have that the output picks like random variables are model as Gaussians or Laplace simply do not make that much sense if you think about it."}, {"start": 940.0, "end": 945.0, "text": " And that's why they replace this. So they say here."}, {"start": 945.0, "end": 952.0, "text": " So we represent the variant of Laplace distribution, which is also supported by a bounded interval, blah, blah, blah."}, {"start": 952.0, "end": 957.0, "text": " So this PDF is defined on 0, 1 and is given by this this PDF."}, {"start": 957.0, "end": 961.0, "text": " So what it did is they just applied they say somewhere here a sigmoid."}, {"start": 961.0, "end": 963.0, "text": " Let me just a sec. So yeah."}, {"start": 963.0, "end": 971.0, "text": " So we consider the PDF for random variable obtained by applying the sigmoid function to Laplace distributed random variable."}, {"start": 971.0, "end": 981.0, "text": " And so instead of using emacy, what you'll now use to in order to do to maximize the likelihood the likelihood of your data."}, {"start": 981.0, "end": 986.0, "text": " So to increase to decrease the reconstruction loss, you just you'll just do log of this."}, {"start": 986.0, "end": 989.0, "text": " So just Ellen of this term and whatever pops out."}, {"start": 989.0, "end": 991.0, "text": " That's what your objective will be like."}, {"start": 991.0, "end": 994.0, "text": " And so that's in contrast with emacy loss."}, {"start": 994.0, "end": 996.0, "text": " OK. And just this is the sigmoid."}, {"start": 996.0, "end": 999.0, "text": " You have domain goes from minus infinity to plus infinity."}, {"start": 999.0, "end": 1003.0, "text": " And we have the codomain, the image being from zero to one."}, {"start": 1003.0, "end": 1008.0, "text": " And that's the reason why the random variable will now be constrained to be between zero and one."}, {"start": 1008.0, "end": 1015.0, "text": " Like contrast that to Laplace, as I mentioned, it will have values which are outside of the bounded region we are interested in."}, {"start": 1015.0, "end": 1026.0, "text": " So that was just a small, small glimpse into these details, which they they kind of change compared to the original paper, VQVA paper."}, {"start": 1026.0, "end": 1030.0, "text": " Let me now get back to where we ended it."}, {"start": 1030.0, "end": 1035.0, "text": " I'll skip the explanation of the gumbell softmax relaxation."}, {"start": 1035.0, "end": 1039.0, "text": " I think it's just a detail and there is a whole paper behind this idea."}, {"start": 1039.0, "end": 1040.0, "text": " So I have to skip it."}, {"start": 1040.0, "end": 1049.0, "text": " But like in a nutshell, they replaced the straight through gradient estimator with this novel gumbell softmax relaxation method."}, {"start": 1049.0, "end": 1052.0, "text": " And now let's jump into stage two."}, {"start": 1052.0, "end": 1057.0, "text": " OK. So I mentioned that after we train the VQVA, we have a discrete space."}, {"start": 1057.0, "end": 1064.0, "text": " And now we want to learn how to train a transformer on top of that, on top of those image tokens."}, {"start": 1064.0, "end": 1082.0, "text": " And they say here, given a text image pair, we BPE encode the lower case caption using at most 256 tokens with walk up size of 16000 and encode the image using 32 by 32 tokens with walk up size 8K."}, {"start": 1082.0, "end": 1089.0, "text": " OK. 
Finally, the text and image tokens are concatenated and modeled or aggressively as a single stream of data."}, {"start": 1089.0, "end": 1094.0, "text": " Let me try and break this down for you if it's already not clear."}, {"start": 1094.0, "end": 1099.0, "text": " But like what I do is so imagine you have you have a caption."}, {"start": 1099.0, "end": 1101.0, "text": " So you have some like I'll draw something like this."}, {"start": 1101.0, "end": 1107.0, "text": " So we have a caption that's your some that's some text and you'll have associated image with that caption."}, {"start": 1107.0, "end": 1115.0, "text": " So something like this will have a squiggly man here, random stuff, whatever."}, {"start": 1115.0, "end": 1118.0, "text": " So that's your image. And so what happens now is the following."}, {"start": 1118.0, "end": 1120.0, "text": " So they first lowercase this."}, {"start": 1120.0, "end": 1124.0, "text": " So let me just denote that as L, then they do BPE."}, {"start": 1124.0, "end": 1131.0, "text": " So that means they'll basically tokenize the words into into some smaller units."}, {"start": 1131.0, "end": 1134.0, "text": " And what they have is they have this walk up size of 16K."}, {"start": 1134.0, "end": 1145.0, "text": " So that just basically means you have a huge table here and that table contains contains 16K vectors which are learned."}, {"start": 1145.0, "end": 1149.0, "text": " OK, so now what this thing how this thing will work like is the following."}, {"start": 1149.0, "end": 1152.0, "text": " So let's say you have a sentence like I don't know like monkey."}, {"start": 1152.0, "end": 1158.0, "text": " So monkey, monkey, king."}, {"start": 1158.0, "end": 1162.0, "text": " And then we'll have whatever like just three dots."}, {"start": 1162.0, "end": 1164.0, "text": " OK, so this is your input sentence."}, {"start": 1164.0, "end": 1171.0, "text": " And so BPE will just kind of break down your words maybe into some sub words like I know you'll have maybe mon you'll have key."}, {"start": 1171.0, "end": 1177.0, "text": " You'll have I know maybe key I and K.I. And so what happens is you'll just take this sub word."}, {"start": 1177.0, "end": 1184.0, "text": " You'll find the appropriate vector and you'll replace your sub word with this vector and et cetera, et cetera."}, {"start": 1184.0, "end": 1188.0, "text": " And so what they say is they have max of 256 tokens."}, {"start": 1188.0, "end": 1192.0, "text": " So that means if we have more than that, we'll just kind of truncate."}, {"start": 1192.0, "end": 1198.0, "text": " If we have less, they have some special like tokens which they'll train for that particular purpose."}, {"start": 1198.0, "end": 1203.0, "text": " Where when we have like those when we have like a less number than 256."}, {"start": 1203.0, "end": 1207.0, "text": " But it's a minor detail. 
And so that's what happens with the text."}, {"start": 1207.0, "end": 1211.0, "text": " So we have we end up with 256 tokens here."}, {"start": 1211.0, "end": 1214.0, "text": " OK, now the following thing happens with the image."}, {"start": 1214.0, "end": 1225.0, "text": " So this one just gets fed into the encoder of the VQVAE, which was previously trained in stage one and outcome 32 by 32 tokens."}, {"start": 1225.0, "end": 1232.0, "text": " So basically this is 32 by 32 and you're kind of going to unroll these."}, {"start": 1232.0, "end": 1235.0, "text": " And so this will be your your target."}, {"start": 1235.0, "end": 1244.0, "text": " So basically you'll just put start of sentence token here and then you'll basically all start of image some special token."}, {"start": 1244.0, "end": 1246.0, "text": " And then you'll just unroll this thing here."}, {"start": 1246.0, "end": 1249.0, "text": " OK, so just unroll it here."}, {"start": 1249.0, "end": 1251.0, "text": " And that's your input into your transformer."}, {"start": 1251.0, "end": 1254.0, "text": " And what the output will be is the following."}, {"start": 1254.0, "end": 1258.0, "text": " So this very same thing just shifted by one to the left."}, {"start": 1258.0, "end": 1267.0, "text": " OK, so you'll have something like this and then you'll have end of sentence token or end of image in this case."}, {"start": 1267.0, "end": 1269.0, "text": " OK, so that's it."}, {"start": 1269.0, "end": 1274.0, "text": " And so now how the whole thing works is you have a huge transformer."}, {"start": 1274.0, "end": 1280.0, "text": " I think they used I think like GPT-2 or something with some sparse attention maps."}, {"start": 1280.0, "end": 1286.0, "text": " And so you'll have a transformer here and then you have your usual thing."}, {"start": 1286.0, "end": 1294.0, "text": " You just pass this in, you transform it through the multiple layers of transformer and you'll want to predict these."}, {"start": 1294.0, "end": 1299.0, "text": " So you'll want to predict this thing from the start of sentence token."}, {"start": 1299.0, "end": 1302.0, "text": " So in this case, these are image tokens."}, {"start": 1302.0, "end": 1305.0, "text": " But just imagine for a second this was like a textual token."}, {"start": 1305.0, "end": 1311.0, "text": " In that case, if this was text like Monkey King, we'd have M here and we'd have M here."}, {"start": 1311.0, "end": 1314.0, "text": " And then the second letter would be O."}, {"start": 1314.0, "end": 1316.0, "text": " So we'd have O here."}, {"start": 1316.0, "end": 1321.0, "text": " So as you can see, you want to predict from the like this special token, you want to predict M."}, {"start": 1321.0, "end": 1324.0, "text": " From M, you want to predict O because O is the next one."}, {"start": 1324.0, "end": 1328.0, "text": " So you're just trying to predict the next token, simple stuff pretty much."}, {"start": 1328.0, "end": 1330.0, "text": " And you can do this in parallel."}, {"start": 1330.0, "end": 1332.0, "text": " And that's how they train this whole thing."}, {"start": 1332.0, "end": 1341.0, "text": " So this thing here will be 256 and this thing here will be 1024."}, {"start": 1341.0, "end": 1347.0, "text": " And you just train this transformer to kind of learn how to do this next token prediction."}, {"start": 1347.0, "end": 1356.0, "text": " And then how you actually use this later in inference is you just prepend this caption and you prepend this special token like this one here."}, {"start": 
1356.0, "end": 1361.0, "text": " And then you just start sampling from the model until you get 32 by 32 tokens."}, {"start": 1361.0, "end": 1365.0, "text": " You just pass that through the decoder and out comes a novel image."}, {"start": 1365.0, "end": 1368.0, "text": " So that caption can be obviously completely novel."}, {"start": 1368.0, "end": 1376.0, "text": " So the model has never seen the caption during training like the examples I showed you with taper and hedgehogs."}, {"start": 1376.0, "end": 1379.0, "text": " So yeah, that's how it works in a nutshell."}, {"start": 1379.0, "end": 1384.0, "text": " Now, let me just show you some results they have here."}, {"start": 1384.0, "end": 1392.0, "text": " So they compare obviously the Dali paper with different GAN baselines."}, {"start": 1392.0, "end": 1396.0, "text": " So like DFGAN, DMGAN, TENSIONGAN, etc."}, {"start": 1396.0, "end": 1402.0, "text": " And here we have, so this is the ground truth caption, a very cute cat lying by a big bike."}, {"start": 1402.0, "end": 1404.0, "text": " And here we have the actual validation images."}, {"start": 1404.0, "end": 1406.0, "text": " So the images that go with these captions."}, {"start": 1406.0, "end": 1407.0, "text": " So that's the ground truth."}, {"start": 1407.0, "end": 1411.0, "text": " And if you take a look at the results, you can see two things."}, {"start": 1411.0, "end": 1414.0, "text": " So first thing is Dali has better outputs."}, {"start": 1414.0, "end": 1422.0, "text": " Like it's just if we focus on this thing here, a living room with a TV on top of a stand with a guitars sitting next to you."}, {"start": 1422.0, "end": 1427.0, "text": " And basically you can see this looks really cool, much better than GANs."}, {"start": 1427.0, "end": 1434.0, "text": " The main difference is like VQVAE is, as we saw in the beginning of the video, has blurry output."}, {"start": 1434.0, "end": 1440.0, "text": " That's why we have blurry kind of blurry results here, even though their composition is much better compared to GANs."}, {"start": 1440.0, "end": 1446.0, "text": " Where GANs are kind of the chat programmers of the generative modeling world because they're super confident."}, {"start": 1446.0, "end": 1450.0, "text": " They just output some values super crisp, even though it's complete BS."}, {"start": 1450.0, "end": 1456.0, "text": " And so on the other hand, we have VQVAEs, which have better compositionality, but they have blurry results."}, {"start": 1456.0, "end": 1460.0, "text": " That's something you can take off from this image."}, {"start": 1460.0, "end": 1463.0, "text": " OK, let's now continue."}, {"start": 1463.0, "end": 1474.0, "text": " One thing I want to mention is, and they actually explicitly mentioned it here, is getting to model the model to train in a 16 bit precision past one billion parameters."}, {"start": 1474.0, "end": 1485.0, "text": " And if you remember, they have 12 billion parameters, so that they have 12 billion parameter transformer without diverging was the most challenging part of this project."}, {"start": 1485.0, "end": 1492.0, "text": " This just makes you appreciate engineering so much because if you think about it, the ideas in this paper are nothing brand novel."}, {"start": 1492.0, "end": 1495.0, "text": " Like VQGAN has super similar ideas."}, {"start": 1495.0, "end": 1499.0, "text": " Jukebox, all of these papers are kind of variations of each other."}, {"start": 1499.0, "end": 1504.0, "text": " There is nothing super novel here from the 
research standpoint, in my honest opinion."}, {"start": 1504.0, "end": 1509.0, "text": " But like from the engineering standpoint, there is a lot of things you have to solve in order to get this to work."}, {"start": 1509.0, "end": 1515.0, "text": " So one of the things was that they had to train this in 16 bit precision and they had problems with overflow."}, {"start": 1515.0, "end": 1526.0, "text": " So they had to devise multiple techniques like this one here, and they also had to do some engineering hacks in order to kind of avoid the memory bottlenecks they had on their GPUs."}, {"start": 1526.0, "end": 1528.0, "text": " So they couldn't fit the whole model."}, {"start": 1528.0, "end": 1532.0, "text": " So they had to do this charting between multiple nodes, etc."}, {"start": 1532.0, "end": 1534.0, "text": " So just keep that in mind."}, {"start": 1534.0, "end": 1538.0, "text": " I wouldn't say there is anything super novel research wise in this paper."}, {"start": 1538.0, "end": 1548.0, "text": " But just when you scale it up, when you collect the data, when you invest just a bunch of money, a bunch of engineering, like just a bunch of dev like ours, you get some awesome results."}, {"start": 1548.0, "end": 1550.0, "text": " And I think that's a valuable lesson."}, {"start": 1550.0, "end": 1552.0, "text": " We don't always have to just invent novel ideas."}, {"start": 1552.0, "end": 1558.0, "text": " Sometimes we just have to scale up and kind of push to the limits the current work we have."}, {"start": 1558.0, "end": 1564.0, "text": " What I have here is, if you're familiar with CLIP, if you're not, I also create a video on CLIP."}, {"start": 1564.0, "end": 1565.0, "text": " You can check it out."}, {"start": 1565.0, "end": 1571.0, "text": " But basically what I do here is they have this automatic way of filtering the output from the Dali."}, {"start": 1571.0, "end": 1576.0, "text": " So, for example, you generate you generate 512 images from the Dali."}, {"start": 1576.0, "end": 1578.0, "text": " So we have a bunch of images."}, {"start": 1578.0, "end": 1580.0, "text": " This is one. 
This is second one."}, {"start": 1580.0, "end": 1582.0, "text": " This is third one, etc."}, {"start": 1582.0, "end": 1583.0, "text": " So you generate a bunch of them."}, {"start": 1583.0, "end": 1588.0, "text": " And then you have this model called CLIP, which will let me just draw it like like a circle."}, {"start": 1588.0, "end": 1593.0, "text": " So CLIP will just basically you input an image and you input a caption."}, {"start": 1593.0, "end": 1598.0, "text": " So, for example, this one, a group of urinals is near the trees and CLIP will tell you."}, {"start": 1598.0, "end": 1599.0, "text": " So this is CLIP."}, {"start": 1599.0, "end": 1604.0, "text": " CLIP will tell you how likely is this image to be under that caption."}, {"start": 1604.0, "end": 1605.0, "text": " Let's call it that way."}, {"start": 1605.0, "end": 1613.0, "text": " So that means you can do explicit sorting of these images according to the scores that CLIP model gave them."}, {"start": 1613.0, "end": 1616.0, "text": " And you just take you cherry pick the ones with high scores."}, {"start": 1616.0, "end": 1619.0, "text": " And you can see the results here are pretty astounding."}, {"start": 1619.0, "end": 1624.0, "text": " So if you just have if you just change a single image, you can see it's pretty random."}, {"start": 1624.0, "end": 1625.0, "text": " You don't see even the urinals."}, {"start": 1625.0, "end": 1631.0, "text": " Then when you have when you generate eight images and you pick the best one, the best one, according to CLIP, you get better results."}, {"start": 1631.0, "end": 1635.0, "text": " And finally, if you have 512 images, this one looks much better."}, {"start": 1635.0, "end": 1638.0, "text": " So this one looks way better compared to the ones below."}, {"start": 1638.0, "end": 1641.0, "text": " OK, so, yeah, this just works."}, {"start": 1641.0, "end": 1648.0, "text": " It's just a heuristic to kind of get some better results from these generating models."}, {"start": 1648.0, "end": 1650.0, "text": " Same goes from some different captions."}, {"start": 1650.0, "end": 1652.0, "text": " And that's the that's the rough idea."}, {"start": 1652.0, "end": 1657.0, "text": " Again, check out the CLIP video if you're interested in learning about details, how that exactly works."}, {"start": 1657.0, "end": 1660.0, "text": " OK, some additional results."}, {"start": 1660.0, "end": 1664.0, "text": " They compare this to this DF can baseline."}, {"start": 1664.0, "end": 1670.0, "text": " Yep, they just get better realism and accuracy results compared to the baseline."}, {"start": 1670.0, "end": 1672.0, "text": " Nothing fancy there."}, {"start": 1672.0, "end": 1677.0, "text": " Here they show some results, some zero shot samples on this CUB data set."}, {"start": 1677.0, "end": 1680.0, "text": " This CUB data set basically consists out of birds."}, {"start": 1680.0, "end": 1687.0, "text": " And the thing is that the original that huge data set I mentioned that the Dali model uses has like a bunch of images."}, {"start": 1687.0, "end": 1692.0, "text": " But the distribution of those images and captions is completely different compared to this data set."}, {"start": 1692.0, "end": 1697.0, "text": " And they show some not that great performance on this on this on this CUB data set."}, {"start": 1697.0, "end": 1699.0, "text": " You can see some birds here."}, {"start": 1699.0, "end": 1700.0, "text": " It's not perfect."}, {"start": 1700.0, "end": 1713.0, "text": " And especially when you focus on these 
quantitative results, you can see that Dali is much worse compared to other baselines on that data set, because, as I said, it's out of distribution in a way so it doesn't perform that well."}, {"start": 1713.0, "end": 1719.0, "text": " What they do here on these diagrams is they plot the inception score and the FID."}, {"start": 1719.0, "end": 1721.0, "text": " So that's for shades, inception, distance."}, {"start": 1721.0, "end": 1733.0, "text": " Those are just some metrics which people use in order to kind of try and quantify how good are the generated images, because the problem itself is ill-posed, same as Newell style transfer, same as deep dream, those kind of images."}, {"start": 1733.0, "end": 1738.0, "text": " You kind of you don't have a clear cut way to determine which image is better."}, {"start": 1738.0, "end": 1741.0, "text": " So these are some heuristics like IS and FID."}, {"start": 1741.0, "end": 1745.0, "text": " And what they do is on the next axis, they have a blur kernel radius."}, {"start": 1745.0, "end": 1755.0, "text": " So what they do is they kind of whatever is generated out of out of Dali and other baselines like GANs, they just kind of blurred them with increasingly bigger filters."}, {"start": 1755.0, "end": 1759.0, "text": " So you have initially you'll have some so you have an image here."}, {"start": 1759.0, "end": 1760.0, "text": " And hopefully you're familiar with this."}, {"start": 1760.0, "end": 1763.0, "text": " This is just some basic image processing technique."}, {"start": 1763.0, "end": 1774.0, "text": " So you can create a kernel like a Gaussian and you just kind of like slide over the image and apply the like the you just do the kernel operation, the same as in CNNs."}, {"start": 1774.0, "end": 1777.0, "text": " You just do like element wise multiplication."}, {"start": 1777.0, "end": 1779.0, "text": " You add them up. 
You kind of map that to an output pixel."}, {"start": 1779.0, "end": 1783.0, "text": " And if you do that with a Gaussian kernel, it'll just kind of blur the image."}, {"start": 1783.0, "end": 1787.0, "text": " And what they then do is they kind of start increasing these kernels."}, {"start": 1787.0, "end": 1790.0, "text": " And so they have a bigger one."}, {"start": 1790.0, "end": 1794.0, "text": " And so the bigger the kernel, the more blurrier the image will be."}, {"start": 1794.0, "end": 1800.0, "text": " And they show that when they kind of blur the image, they get better results."}, {"start": 1800.0, "end": 1804.0, "text": " So this is the Dali and you can see that higher is better."}, {"start": 1804.0, "end": 1810.0, "text": " And so the bigger the blur kernel is, the better results we get from the Dali compared to other baselines."}, {"start": 1810.0, "end": 1821.0, "text": " Here, FID should be smaller so you can see again that even here after just increasing the blur radius to one, we have better results compared to baselines."}, {"start": 1821.0, "end": 1822.0, "text": " Why do this?"}, {"start": 1822.0, "end": 1830.0, "text": " The reason is, as I mentioned at the beginning of the video, VQVAs have problems because the outputs are not crispy."}, {"start": 1830.0, "end": 1834.0, "text": " And so they suspect that these metrics may be penalizing them because of that."}, {"start": 1834.0, "end": 1844.0, "text": " So in order to get on an even ground, they take the GAN output, they blur it so as to compare to kind of make it fair to Dali."}, {"start": 1844.0, "end": 1847.0, "text": " And then they kind of calculate the FIDs and IS."}, {"start": 1847.0, "end": 1857.0, "text": " I mean, this is kind of a way to, I guess, measure the, like, how high quality the composition of the images and put less focus on the actual details."}, {"start": 1857.0, "end": 1859.0, "text": " I guess that's the whole point of this."}, {"start": 1859.0, "end": 1864.0, "text": " On FID, they show that the inception score is much lower, which is a bad thing."}, {"start": 1864.0, "end": 1866.0, "text": " You want to have high inception score."}, {"start": 1866.0, "end": 1868.0, "text": " And they show that the FID is higher."}, {"start": 1868.0, "end": 1871.0, "text": " So that means it's having problems with the CUP data set."}, {"start": 1871.0, "end": 1873.0, "text": " As I already mentioned, CUP is kind of different."}, {"start": 1873.0, "end": 1875.0, "text": " The distribution of data is different there."}, {"start": 1875.0, "end": 1877.0, "text": " Okay."}, {"start": 1877.0, "end": 1879.0, "text": " Here, pretty obvious fact."}, {"start": 1879.0, "end": 1891.0, "text": " That's that when we increase the sample size for re-ranking, so that's the clip part, the more so if you have 512 generated images, that's going to be way better than you generating just a single image."}, {"start": 1891.0, "end": 1893.0, "text": " As you can see here, the inception score goes up."}, {"start": 1893.0, "end": 1894.0, "text": " The FID goes down."}, {"start": 1894.0, "end": 1902.0, "text": " And so that means there is a sweet spot, maybe around 128 generated images where you can then automatically select the best one."}, {"start": 1902.0, "end": 1907.0, "text": " And this just improves the results on both metrics, FID and IS."}, {"start": 1907.0, "end": 1908.0, "text": " Okay."}, {"start": 1908.0, "end": 1909.0, "text": " Those were the results."}, {"start": 1909.0, "end": 1910.0, "text": " I guess that's pretty much 
it."}, {"start": 1910.0, "end": 1915.0, "text": " Let me just go back to the appendix and show you one more thing."}, {"start": 1915.0, "end": 1916.0, "text": " One more thing."}, {"start": 1916.0, "end": 1918.0, "text": " So, yeah, you can see the patterns in the transformer."}, {"start": 1918.0, "end": 1924.0, "text": " They have just different patterns there, not just simple causal masks."}, {"start": 1924.0, "end": 1926.0, "text": " And this was introduced in their sparse transformer papers."}, {"start": 1926.0, "end": 1929.0, "text": " You can check that out if you wish so."}, {"start": 1929.0, "end": 1934.0, "text": " And I just want to show you some additional results on the image to image translation somewhere here."}, {"start": 1934.0, "end": 1935.0, "text": " Okay."}, {"start": 1935.0, "end": 1940.0, "text": " So the exact same cat on the top color, on the top colored red on the bottom."}, {"start": 1940.0, "end": 1948.0, "text": " So you can see here again perfectly image to image translation, even though this task, so you now know how the Dali model is trained."}, {"start": 1948.0, "end": 1951.0, "text": " It's just trained to first train VQVAE."}, {"start": 1951.0, "end": 1955.0, "text": " Then you train the autoregressive transformer to just predict the next image token."}, {"start": 1955.0, "end": 1960.0, "text": " And you can see that out comes this ability to do this kind of stuff as well."}, {"start": 1960.0, "end": 1968.0, "text": " So here they mentioned two panel image of the exact same cat on the top, a photo of the cat on the bottom, the cat with sunglasses."}, {"start": 1968.0, "end": 1972.0, "text": " And you can see again like just placing some sunglasses onto the cat."}, {"start": 1972.0, "end": 1974.0, "text": " Really awesome results."}, {"start": 1974.0, "end": 1975.0, "text": " I love this."}, {"start": 1975.0, "end": 1981.0, "text": " Here with postcard, the exact same cat on the top as a postage stamp on the bottom."}, {"start": 1981.0, "end": 1982.0, "text": " Pretty awesome results."}, {"start": 1982.0, "end": 1985.0, "text": " And with that, I leave you here."}, {"start": 1985.0, "end": 1986.0, "text": " Okay. That was it."}, {"start": 1986.0, "end": 1987.0, "text": " Hopefully you like this video."}, {"start": 1987.0, "end": 1989.0, "text": " Let me know what you think."}, {"start": 1989.0, "end": 1991.0, "text": " Subscribe, share out this content if you like it."}, {"start": 1991.0, "end": 1993.0, "text": " That's the best way you can support me."}, {"start": 1993.0, "end": 2012.0, "text": " And until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=B-1K4glzJ2Q
RMA: Rapid Motor Adaptation for Legged Robots | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover the "Rapid Motor Adaptation for Legged Robots" paper where they introduce this novel technique which allows legged robots to adapt, in real-time, to novel never before seen environments! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://ashish-kmr.github.io/rma-legged-robots/rma-locomotion-final.pdf ✅ Blog: https://ai.facebook.com/blog/ai-now-enables-robots-to-adapt-rapidly-to-changing-real-world-conditions ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Introducing RMA 02:15 Paper overview 04:15 Architecture explained 12:00 Reward shaping 15:30 Curriculum training 17:40 No need to predict env vector 18:50 Results and experiments ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #reinforcementlearning #robots #adaptation
AI now enables robots to adapt rapidly to changing real-world conditions. So this research just came out yesterday. It's from Facebook, CMU, and Berkeley AI research, and introduced this thing called rapid motor adaptation, which is an awesome new technique. Let's call it that way. It leverages reinforcement learning plus supervised learning in order to do these rapid adaptations to various novel and unknown environments. As you can see here, so let me just zoom in. This robot is basically trained so the reward function is just move straight ahead and there's like, it's like heavily shaped. We'll see that in a moment. But the whole point I want you to notice here is that this matrix is oiled up, so, so, and the robot never has never seen something like a similar environment like and it hasn't been fine-tuned. Neither has been the simulation calibrated to kind of depict the state of this particular environment. So again, it's trained completely in the simulation. It's using RL, it's using supervised learning, and we'll see the details of the paper in a second. But let me just show you a couple more cool examples. So here again, you can see it's just moving straight ahead, so it's kind of going towards the rock. But that's not the point. You can see it's adaptively, it can kind of go down these rocks and it doesn't kind of fall over. So it's pretty, it looks pretty, much more natural compared to some of the like previous methods we've been seeing. And finally, this one is really cool. So the robot goes over this super hard environment and the leg kind of stucks here. And then after a couple of moments, instead of falling over, it unstucks and it continues walking. So I find this super exciting. Like I can imagine like a huge number of cool applications where these robots could be doing the jobs that humans can't or should not, like some radiation, high radiation areas or whatnot. Again, this can and will probably be used in military purposes, but that's, I guess, a topic for another video. So let me now, having said that, let's jump to the paper and let's see how this thing works. Okay. So again, this is a work by Ashish Kumar, Zipeng Fu, Deepak Pathak and Jitendra Malik. So that's Berkeley folks, CMU and Facebook. And what I want you to notice here is they've been testing this on a wide variety of environments. And interesting thing is written here. So the robot achieved this high success rate despite never having seen unstable or sinking ground, obstructive vegetation or stairs during training. All deployment results are with the same policy. So the same policy without any simulation calibration or real world fine tuning. So usually if you watch my video on OpenAI's robotic hand solving Robux cube, which I super recommend you go ahead and watch. So basically they use this thing called randomize domain adaptation, where the whole point is to, so you have a special environment in which you want to deploy a robot. And in their case, they had a cage, they had this robotic hand. And so there was a really constrained setup. And what they did is they kind of measured all of the relevant information, like where the cameras are positioned, and they constructed the simulation in such a way so that it's calibrated. So the main parameters such as camera position, et cetera, are kind of like baked in into the simulation. Here on the other hand, they don't do that. They have, we'll see like the simulation is just a sampling bunch of randomized terrains, which are hard. 
And they do some curriculum learning, but the important thing is that they don't do the simulation calibration and they don't have this real-world fine-tuning. I think even OpenAI's paper did this fine-tuning, whereby you capture some real data and then further fine-tune your policy on it so that you can better bridge the gap between the simulation and the real world. But here they don't do that, and that's super exciting. Okay, let's see how this model works on a high level. So here is the whole pipeline. The first part is training, obviously, and the second part is deployment, where the policy trained in (a) is used in the real world. The training part itself consists of two parts. The first part is the policy training phase, where they take these parameters of the environment, such as terrain height. For example, if you focus on this robot here, assuming all four legs are in contact with the ground, you can calculate, because you're in a simulation and have all of the ground-truth data, the highest peak of those four contact points, and that's one of the bits of information they use in this ET vector. They also have the motor strength, and the mass and center-of-mass information. Basically, they put objects of a certain mass on top of the robot's back because they'll be testing it in the same setup later on. And obviously this information won't be available during the deployment phase, because it's something you only have in a simulator. So again, how the whole thing works is the following. In this first stage, they train this RL-based policy pi, and they just use the proximal policy optimization algorithm from OpenAI, so that's PPO. And again, the goal is just to maximize the reward, where the reward is heavily shaped. We'll see that in a couple of minutes, but for now, let's just focus on how this works. So they encode this information, this environmental factor vector, using this env factor encoder, which is a simple MLP. I think it's a three-layer MLP, and it encodes it into this latent extrinsics vector, as they call it. It contains the compressed information about the environment, about all of this information here. Aside from that, we are obviously feeding in the current state, which consists of the joint positions, joint velocities, et cetera. They also feed in the last action. What the actions are is basically the target joint positions of all of the 12 actuated joints on this robot. And finally, those actions are converted to torques using something called PD controllers. And then you have your usual RL setup. Basically you have your environment here. Let me abstract this: you have an environment, call it E, and you have an agent. The agent sends the action to the environment, and we can see that here, so that's the action at timestamp T. And what comes back is pretty much the next state, which they denote as XT, as well as the reward. So you'll have some intermediate rewards. That's your basic generic RL setup. And again, they just use PPO. They train this base policy.
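(For concreteness, here is a minimal sketch of the phase-1 pieces just described: the environment-factor encoder and the base policy. This is illustrative PyTorch, not the authors' code; the layer sizes and input dimensions are assumptions, and the PPO loop, PD controller, and simulator are placeholders.)

```python
import torch
import torch.nn as nn

class EnvFactorEncoder(nn.Module):
    """Compresses the privileged simulator vector e_t into the extrinsics z_t."""
    def __init__(self, e_dim=17, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(e_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))
    def forward(self, e_t):
        return self.net(e_t)

class BasePolicy(nn.Module):
    """pi(x_t, a_{t-1}, z_t) -> target positions for the 12 actuated joints."""
    def __init__(self, x_dim=30, a_dim=12, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + a_dim + z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, a_dim))
    def forward(self, x_t, a_prev, z_t):
        return self.net(torch.cat([x_t, a_prev, z_t], dim=-1))

# One environment step, schematically (pd_controller and sim are placeholders):
#   z_t = encoder(e_t)                     # privileged info, simulation only
#   a_t = policy(x_t, a_prev, z_t)         # desired joint positions
#   tau = pd_controller(a_t, joint_state)  # converted to torques
#   x_next, reward = sim.step(tau)         # trained end to end with PPO
```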
They train this encoder in the first phase. And then the important part is this adaptation module; that's the true innovation of this paper. They train it to infer this latent information just by looking at the past observations. That's the whole idea. So this time, instead of having the privilege of accessing this information from the simulator, they just pack the last 50 states and actions, feed them into this adaptation module, which is again a simple MLP followed by some 1D convolutions, and it outputs this approximation, ZT hat, which they then train by minimizing a simple MSE loss, so your L2-squared loss. Everything else remains the same: we're still feeding in the state, we're still feeding in the last action, and that's it. One important thing to notice is that during this phase two only the adaptation module is trained, so the base policy is obviously frozen here. The second important thing is that they use an on-policy algorithm here, so they feed in this ZT hat, the inferred one, even though initially, because the module is randomly initialized, these estimates will be kind of rubbish. So you'll basically end up in bad states, and that will help this network become robust enough to cope with different deployment scenarios. Because if we were feeding in the perfect information coming from the environment variables, then the adaptation module would only be exposed to good trajectories, and that's not what we want: we want it to be able to cope even with the bad trajectories, when something goes wrong. So that's it in a nutshell. Again, in the first part they train the base policy, your RL agent, and they train this encoder of the environment factors. In the second stage they train this adaptation module, which is really important, and which learns how to predict the latent, condensed information about the environment from the past observations only. An interesting thing to notice is that they have no vision component in the system, so all of the clips you saw earlier were based solely on these RL and supervised learning policies. Okay, so deployment. What they do is they feed the past observations into the adaptation module, which runs at 10 hertz, because you have the 50 past observations here, so it needs to run a bit slower. It infers this ZT hat, which is information about the environment, every now and then. So these two components are pretty much asynchronous. That means the base policy runs at 100 hertz, feeding in the state and the last action, and it just fetches whatever the last update to this latent vector is. That assumption is valid because the environment won't be changing that fast; empirically they figured out that they don't need to increase the frequency of this, and it just works. Okay, having explained that, let's now jump into some details. First, they state here what they see as the key contributions of the paper: our novel aspects are the use of a varied terrain generator and natural reward functions. So by the way, let me just digress here.
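(Before the digression, a rough sketch of what such an adaptation module could look like, under assumed layer sizes. The 50-step state-action history, the 1D convolutions, and the MSE regression onto the privileged extrinsics come from the description above; everything else is a guess, not the authors' code.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptationModule(nn.Module):
    def __init__(self, x_dim=30, a_dim=12, z_dim=8):
        super().__init__()
        self.embed = nn.Linear(x_dim + a_dim, 32)        # per-timestep embedding
        self.conv = nn.Sequential(                       # temporal 1D convolutions
            nn.Conv1d(32, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, stride=1), nn.ReLU(),
        )
        self.head = nn.LazyLinear(z_dim)                 # flatten -> z_hat

    def forward(self, states, actions):
        # states: (B, 50, x_dim), actions: (B, 50, a_dim) -- the last 50 steps
        h = torch.relu(self.embed(torch.cat([states, actions], dim=-1)))
        h = self.conv(h.transpose(1, 2)).flatten(1)
        return self.head(h)

# Phase-2 objective: roll out with z_hat plugged into the frozen base policy
# (on-policy, so the module also sees its own mistakes) and minimize
#   loss = F.mse_loss(z_hat, z_true)   # z_true comes from the frozen encoder mu(e_t)
```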
So as you can see here, this is the terrain: it's totally random, it looks really hard, the legs can get jammed, etc. The whole point of this, similarly to OpenAI's robotic hand, is to train this thing in a bunch of different hard environments (I forgot to mention that), where they sample different parameters such as mass; in the OpenAI paper they even sample gravity. The idea is to expose the policy to such a wide range of randomized worlds that when you deploy it in the real world, it just looks like another simulation, and that's the whole point. Simulation hypothesis confirmed. So let's continue here. The aspects are, as I said, a varied terrain generator and natural reward functions motivated by bioenergetics, which allow them to learn walking policies without using any reference demonstrations. But the truly novel contribution of this paper is the adaptation module, trained in simulation, which makes the rapid motor adaptation possible. Nice. What next? So here is the reward function they're using. I mentioned it's heavily shaped, as you can see here. The goal is the following; they say it somewhere here: the reward function encourages the agent to move forward with a maximum speed, blah blah blah, and penalizes it for jerky and inefficient motions. So basically, the goal is to maximize the reward, and the whole reward will just be a weighted sum of these 10 components. If we plot the first component, on the x-axis you have vx, and let me just briefly digress here. This is a robot, just to explain the coordinate system; a pretty crude drawing, okay, but it works. This would be the x-axis, the y-axis would be, for example, pointing towards us, and the z-axis points upwards. Having said that, this velocity vx is just the component along the x-axis here, and the reward basically increases until 0.35, this value here, and then it saturates, which means the robot is only encouraged to get to 0.35 and not above. Then they penalize, because of the minus sign, the sideways movement, so that's the velocity component along the y-axis coming out of the screen towards you, as well as this angular velocity, and a bunch of similar stuff: we want to minimize torque, we want to minimize the joint motion, we want to minimize the slippage of the feet. So what this g is, they say it here: the binary foot contact indicator vector is denoted as g. That means you have four legs, and because of this diag operation, you basically get a binary matrix; you may have, say, zero, one, zero, one, which means two of the legs are currently in contact with the floor, and we'll only care about the velocities of those feet. And they say somewhere here that vf is the velocity of the feet, okay? So we want to minimize that velocity, which makes sense, because if that velocity is high, it means the legs are slipping around, and that's not something we want. We want a stable robot without jerking, without these sideways movements, and here you can see it's also penalized for going up and down too much.
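(As a rough illustration only: a shaped reward of the kind just described might look like the sketch below. The weight dictionary `w` and the exact set of terms are placeholders, not the paper's actual components or coefficients.)

```python
import numpy as np

def shaped_reward(v, omega, torques, joint_vel, foot_vel, foot_contact, ground_force, w):
    """v, omega: base linear/angular velocity (3,); torques, joint_vel: (12,);
    foot_vel: (4, 3); foot_contact: binary (4,); ground_force: (4,)."""
    r = 0.0
    r += w["fwd"] * min(v[0], 0.35)                        # forward speed, capped at 0.35
    r -= w["lat"] * v[1] ** 2 + w["ang"] * omega[2] ** 2   # sideways drift and yaw rate
    r -= w["vert"] * v[2] ** 2                             # bouncing up and down
    r -= w["torque"] * np.sum(torques ** 2)                # large torques
    r -= w["joint"] * np.sum(joint_vel ** 2)               # jerky joint motion
    r -= w["slip"] * np.sum(foot_contact * np.sum(foot_vel ** 2, axis=-1))  # diag(g) foot slip
    r -= w["impact"] * np.sum(ground_force ** 2)           # ground impact
    return r
```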
So anyways, I can just imagine what it looked like to shape this thing. Not the best design, I guess, but I love the results they got. In general, it's not recommended to shape the reward like this, but their contribution lies somewhere else, in the adaptation procedure; this part is not as important as the fact that they show their robots can adapt to unknown environments even when trained in simulation only, and that's a huge, huge thing. And they say it here again, which tells us about the fragility of this shaping: we found these reward functions to be critical for learning realistic gaits in simulation. So I guess they had to iterate a lot to get this to work. And again, problems in paradise, they say here: if we naively train our agent with the above reward function, it learns to stay in place because of the penalty terms on the movement of the joints. So that translates into this. This q with the small dot here, if you can see it, denotes the velocities of the joints, so this term of the reward function will encourage the robot not to move at all. And as you can see here, the reward is just a weighted sum of these components, and the coefficients, as you can see here, were just figured out empirically. If you start with these coefficients right away, the thing won't work; the robot won't move. So they do some sort of annealing: they start with super small coefficients and then, during training, slowly raise them. So it's a training curriculum, obviously. And they say: to prevent this collapse, we start the training with very small penalty coefficients, and then gradually increase the strength of these coefficients using a fixed curriculum. We also linearly increase the difficulty of other perturbations such as mass, friction, and motor strength as the training progresses. We don't have any curriculum on the terrains and start the training by randomly sampling the terrain profiles from the same fixed difficulty. Bottom line: certain parts of the environment are under a curriculum, while certain parts, like the terrain itself, are always sampled from the very same difficulty. That was the whole point. Here in this table, they show the ranges they used for certain parameters of the environment, like friction, center of mass, etc. So again, center of mass is this thing here: they have this robot, they put an object with some mass on top of its back, and they test how it behaves under these external forces. That's the whole point. And as I mentioned, this is the same thing as with OpenAI's robotic hand. They create an environment by sampling uniformly from these ranges, and these ranges will progressively become harder and harder as the training progresses, because they are included in the training curriculum. One thing worth mentioning is what they say here: note that instead of predicting ET, the environment vector, which is the case in typical system identification, we directly estimate the extrinsics ZT that only encode how the behavior should change to correct for the given environment vector ET. So the whole point here is something similar to MuZero. If you want to check it out, I'll link the MuZero video somewhere here, I made it a while ago, and the whole idea is the following.
You only need to encode the information that lets you behave better, that gives you a better policy. You don't need to be able to reconstruct from that latent, in this case the environment vector, or in the case of MuZero the observation space; you just need to have the information upon which you can act and have an optimal policy, and that's it. So you save some bandwidth by modeling only the information you actually need to behave properly in the environment, and you just discard everything else. Okay, here we have some experiments they've done against certain baselines. Again, they have a bunch of different tasks: they have this uneven foam, they have this upward incline, so they're testing it in a bunch of different setups. They have step up eight centimeters, step up six centimeters, step down 15 centimeters, et cetera. And they compare it with the other baselines, such as this very same model just without the adaptation block, as well as this A1 controller. A1 is just the name of the robot, and this baseline is the default controller software that comes as firmware with the robot when you buy it. And they show that on all of these environments in these tables, just looking at the success metric, TTF, so that's time to fall, and distance, RMA pretty much outperforms the other baselines in all of these tasks. In this payload analysis as well, we have these diagrams where on the x-axis we are increasing the weight of the payload on the robot's back, and we can see how different baselines behave: the success drops really slowly with this RMA approach, whereas with the baselines it drops pretty significantly. Worth mentioning is that the robot itself weighs 12 kilos, which means that at this point on the x-axis the robot is carrying roughly its own body weight on its back. So that's pretty cool. Then they have another analysis on this particular environment with the mattress covered with oil, and what you can notice is that from this point on, when it starts stepping on the slippery floor, the environment parameters obviously change, like the friction parameter, and we can see that being inferred by the adaptation module exactly here. So they log this latent vector, I think it has like eight dimensions, and they just took z1 and z5, two particular features of that latent vector, and they plot them, and we can see that it reacts really quickly to the change of the environment, and that in turn helps the robot. You can also see here the gait patterns. RL means rear left leg, RR rear right leg, and similarly for the front left and front right legs, and when you have a bar here, that means contact is made between that particular leg and the floor. You can see that things start getting messy at one point in time, and then the pattern returns to normal, even though the robot is still on the slippery surface, thanks to the adaptation module. Also, as you can see in the torque plot here, the knee torque gets higher after this part, and that's just a way to visualize what happens internally in the RL agent and the adaptation module, and yeah, pretty awesome behavior. Some final results here are in simulation, where they show a couple more baselines. 
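Before looking at that table, here is a rough PyTorch sketch of what the phase-two adaptation module could look like: the thing that regresses the extrinsics ẑ from the last 50 state-action pairs with an MSE loss while the base policy and the environment-factor encoder stay frozen. All layer sizes and dimensions here are my guesses, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptationModule(nn.Module):
    """Sketch: predict the extrinsics z_t from a history of (state, action) pairs."""
    def __init__(self, sa_dim=42, z_dim=8):
        super().__init__()
        self.embed = nn.Linear(sa_dim, 32)          # per-timestep projection
        self.conv = nn.Sequential(                  # temporal 1D convolutions
            nn.Conv1d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.head = nn.LazyLinear(z_dim)            # flattens whatever length is left

    def forward(self, sa_hist):                     # (batch, 50, sa_dim)
        x = self.embed(sa_hist).transpose(1, 2)     # (batch, 32, 50)
        return self.head(self.conv(x).flatten(1))   # (batch, z_dim)

# phase-2 distillation step (base policy and env-factor encoder frozen):
# z_hat = adaptation_module(last_50_states_and_actions)
# loss = F.mse_loss(z_hat, z_true.detach())
```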
So, the baselines: we have RMA, we have RMA without the adaptation module, and we have the expert, which, instead of using the previous observations to figure out the latent environment vector, just uses the ground-truth vectors. We can look at all these metrics: reward, TTF, so time to fall, distance, samples, torque, smoothness, ground impact. If you remember, things like torque, smoothness, and ground impact were enforced to be minimized by the reward components up there. So, for example, let's take ground impact or smoothness. If I go back to the reward function, here we have smoothness, as you can see, and ground impact, where F denotes the ground reaction forces at the feet. This part just tells you how elegantly, let's call it that way, the robot is touching the floor, and it should be minimized because it's a component with a minus prefix here. Returning to the table, we can see that the ground impact gets smaller and smaller, and the difference between RMA and the expert is really small on all of these metrics. Just comparing the success rate, the TTF, the reward, the distance, and everything else, it seems that RMA is able to infer the latent environment vector almost as well as using the ground-truth vector, and that's awesome, because it means that the supervised-learning-style knowledge distillation worked. So that's pretty much it. That was a pretty neat idea, awesome results, and this can have amazing implications for RL being used in real-world environments, combined with some other learning paradigms such as supervised learning. I guess the follow-up work to this will be integrating a vision component and then just deploying it, and next thing you know, Boston Dynamics robots will be running around the city. That's pretty much it; hopefully you liked this video. If you did, share it out, comment down below if you have any suggestions for the next papers or whatever you have to say, and until next time, bye bye!
[{"start": 0.0, "end": 5.12, "text": " AI now enables robots to adapt rapidly to changing real-world conditions."}, {"start": 5.12, "end": 12.88, "text": " So this research just came out yesterday. It's from Facebook, CMU, and Berkeley AI research,"}, {"start": 12.88, "end": 19.12, "text": " and introduced this thing called rapid motor adaptation, which is an awesome new technique."}, {"start": 19.12, "end": 24.8, "text": " Let's call it that way. It leverages reinforcement learning plus supervised learning in order to"}, {"start": 24.8, "end": 32.4, "text": " do these rapid adaptations to various novel and unknown environments. As you can see here,"}, {"start": 32.4, "end": 37.68, "text": " so let me just zoom in. This robot is basically trained so the reward function is just move"}, {"start": 37.68, "end": 43.120000000000005, "text": " straight ahead and there's like, it's like heavily shaped. We'll see that in a moment."}, {"start": 43.120000000000005, "end": 48.64, "text": " But the whole point I want you to notice here is that this matrix is oiled up, so,"}, {"start": 48.64, "end": 55.92, "text": " so, and the robot never has never seen something like a similar environment like and it hasn't"}, {"start": 55.92, "end": 64.4, "text": " been fine-tuned. Neither has been the simulation calibrated to kind of depict the state of this"}, {"start": 64.4, "end": 71.2, "text": " particular environment. So again, it's trained completely in the simulation. It's using RL,"}, {"start": 71.2, "end": 75.12, "text": " it's using supervised learning, and we'll see the details of the paper in a second. But let me just"}, {"start": 75.12, "end": 80.32000000000001, "text": " show you a couple more cool examples. So here again, you can see it's just moving straight ahead,"}, {"start": 80.32000000000001, "end": 85.12, "text": " so it's kind of going towards the rock. But that's not the point. You can see it's adaptively,"}, {"start": 85.12, "end": 91.76, "text": " it can kind of go down these rocks and it doesn't kind of fall over. So it's pretty, it looks pretty,"}, {"start": 91.76, "end": 96.56, "text": " much more natural compared to some of the like previous methods we've been seeing."}, {"start": 97.92, "end": 103.84, "text": " And finally, this one is really cool. So the robot goes over this super hard environment and"}, {"start": 103.84, "end": 108.32000000000001, "text": " the leg kind of stucks here. And then after a couple of moments, instead of falling over,"}, {"start": 108.32000000000001, "end": 115.76, "text": " it unstucks and it continues walking. So I find this super exciting. Like I can imagine like a"}, {"start": 115.76, "end": 120.24000000000001, "text": " huge number of cool applications where these robots could be doing the jobs that humans"}, {"start": 121.04, "end": 128.0, "text": " can't or should not, like some radiation, high radiation areas or whatnot. Again, this can and"}, {"start": 128.0, "end": 132.64000000000001, "text": " will probably be used in military purposes, but that's, I guess, a topic for another video."}, {"start": 132.64, "end": 137.44, "text": " So let me now, having said that, let's jump to the paper and let's see how this thing works."}, {"start": 137.44, "end": 143.67999999999998, "text": " Okay. So again, this is a work by Ashish Kumar, Zipeng Fu, Deepak Pathak and Jitendra Malik."}, {"start": 143.67999999999998, "end": 149.92, "text": " So that's Berkeley folks, CMU and Facebook. 
And what I want you to notice here is they've"}, {"start": 149.92, "end": 156.16, "text": " been testing this on a wide variety of environments. And interesting thing is written here. So the robot"}, {"start": 156.16, "end": 163.35999999999999, "text": " achieved this high success rate despite never having seen unstable or sinking ground, obstructive"}, {"start": 163.35999999999999, "end": 168.0, "text": " vegetation or stairs during training. All deployment results are with the same policy."}, {"start": 168.56, "end": 175.84, "text": " So the same policy without any simulation calibration or real world fine tuning. So usually"}, {"start": 175.84, "end": 182.07999999999998, "text": " if you watch my video on OpenAI's robotic hand solving Robux cube, which I super recommend you"}, {"start": 182.08, "end": 188.64000000000001, "text": " go ahead and watch. So basically they use this thing called randomize domain adaptation,"}, {"start": 188.64000000000001, "end": 194.88000000000002, "text": " where the whole point is to, so you have a special environment in which you want to deploy a robot."}, {"start": 194.88000000000002, "end": 200.0, "text": " And in their case, they had a cage, they had this robotic hand. And so there was a really constrained"}, {"start": 200.0, "end": 204.4, "text": " setup. And what they did is they kind of measured all of the relevant information, like where the"}, {"start": 204.4, "end": 209.60000000000002, "text": " cameras are positioned, and they constructed the simulation in such a way so that it's calibrated."}, {"start": 209.6, "end": 215.44, "text": " So the main parameters such as camera position, et cetera, are kind of like baked in into the"}, {"start": 215.44, "end": 220.07999999999998, "text": " simulation. Here on the other hand, they don't do that. They have, we'll see like the simulation is"}, {"start": 220.07999999999998, "end": 225.6, "text": " just a sampling bunch of randomized terrains, which are hard. And they do some, again, they"}, {"start": 225.6, "end": 231.2, "text": " do some curriculum learning. But like the important thing is they don't do the simulation calibration"}, {"start": 231.2, "end": 236.24, "text": " and they don't have this real world fine tuning. I think that even that OpenAI's paper did this"}, {"start": 236.24, "end": 241.76000000000002, "text": " fine tuning whereby you basically capture some real data and then you further fine tune your"}, {"start": 241.76000000000002, "end": 250.16, "text": " policy on that real data so that you can better bridge the gap between the simulation and the"}, {"start": 250.16, "end": 255.20000000000002, "text": " real world. But here they don't do that and that's super exciting. Okay, let's see how this model"}, {"start": 255.20000000000002, "end": 260.96000000000004, "text": " works on the high level. So here is the whole pipeline. Basically we have, so we have first,"}, {"start": 260.96000000000004, "end": 265.68, "text": " the first part is training, obviously, the second part is deployment, where we are using the policy"}, {"start": 265.68, "end": 273.84000000000003, "text": " trained in A in the real world. So the training part itself contains out of, consists out of two"}, {"start": 273.84000000000003, "end": 280.32, "text": " parts. So the first part is this policy training phase, whereby we are taking these parameters of"}, {"start": 280.32, "end": 286.56, "text": " the environment such as like terrain height. 
So for example, if you focus on this robot here,"}, {"start": 286.56, "end": 292.96000000000004, "text": " it will basically have, assuming all the four legs are in contact with the ground, you can figure out,"}, {"start": 292.96, "end": 297.91999999999996, "text": " you can calculate because you're in a simulation, you have all of the ground truth data, they just"}, {"start": 297.91999999999996, "end": 304.32, "text": " calculate the highest peak of those four points and that's one of the like information bits they"}, {"start": 304.32, "end": 310.32, "text": " are using in this ET vector. They also have this motor strength, they have this mass and center of"}, {"start": 310.32, "end": 315.35999999999996, "text": " mass information. So that's basically, they're putting certain mass objects on top of this"}, {"start": 315.35999999999996, "end": 322.71999999999997, "text": " robot's back because they'll be testing it in the same setup later on. And so obviously this"}, {"start": 322.72, "end": 327.36, "text": " thing won't be available during the deployment phase because that's just something that you have"}, {"start": 327.36, "end": 332.40000000000003, "text": " in a simulator. And so again, how the whole thing's work is the following. So they have this,"}, {"start": 334.0, "end": 340.88000000000005, "text": " this first stage, they train this RL policy based policy PI, and they just use like proximal policy"}, {"start": 340.88000000000005, "end": 347.44000000000005, "text": " optimization algorithm from OpenAI. So that's PPO. And again, the goal is just to maximize the"}, {"start": 347.44, "end": 353.44, "text": " reward where the reward is super shaped. We'll see that in a couple of minutes, but like for now,"}, {"start": 353.44, "end": 359.52, "text": " let's just focus and see how this works. So they encode this information. So this environmental"}, {"start": 359.52, "end": 365.92, "text": " factor vector using this like n factor encoder, which is a simple MLP. I think they have like"}, {"start": 365.92, "end": 372.08, "text": " three layer MLP and they encode it into this latent extrinsic, like how they call it,"}, {"start": 372.08, "end": 378.0, "text": " extrinsic latent vector. And it kind of contains the compressed information about the environment,"}, {"start": 378.0, "end": 384.24, "text": " about all of this information here. And aside from that, obviously we are like, we are fitting in"}, {"start": 384.24, "end": 389.59999999999997, "text": " the current state, which consists out of the joint position, joint velocities, et cetera."}, {"start": 389.59999999999997, "end": 395.12, "text": " They also have this action. So what the actions are is basically you just set the joint position."}, {"start": 395.12, "end": 400.88, "text": " These are joint position of all of the 12 actuated joints on this robot. And that will be the action."}, {"start": 400.88, "end": 405.92, "text": " And finally, those actions will be converted to Torx using something called PD controllers."}, {"start": 405.92, "end": 409.92, "text": " And then you have your usual like RL setup. Basically you have your environment here. So"}, {"start": 409.92, "end": 417.68, "text": " let me kind of abstract this. You have environment. Let's call it E. You have an agent. And what"}, {"start": 417.68, "end": 423.36, "text": " happens is agent sends the action to the environment. And we can see that here. So that's the action here"}, {"start": 423.36, "end": 430.08, "text": " at point at timestamp T. 
And what comes back is the pretty much the next state. So that's,"}, {"start": 430.08, "end": 436.32, "text": " they depicted that they denoted that as XT as well as reward. So you'll have some intermediate"}, {"start": 436.32, "end": 442.56, "text": " intermediate rewards. So that's your basic generic RL setup. And again, they just use PPO."}, {"start": 442.56, "end": 447.76, "text": " They train this base policy. They train this encoder in the first phase. And then the important"}, {"start": 447.76, "end": 453.03999999999996, "text": " part is this adaptation module. So that's the true like innovation of this paper. They train this"}, {"start": 453.04, "end": 461.6, "text": " thing to be able to infer this latent information. Just looking at the past observations. That's the"}, {"start": 461.6, "end": 466.96000000000004, "text": " whole idea. So this time, instead of having a privilege of accessing this information from the"}, {"start": 466.96000000000004, "end": 473.12, "text": " simulator, they just pack the last 50 states and actions. They feed that into this adaptation"}, {"start": 473.12, "end": 478.72, "text": " module, which is again, simple MLP. And then they have some one D convolutions. And basically they"}, {"start": 478.72, "end": 486.40000000000003, "text": " output this approximation. So ZT hat, which they then minimize using simple MSC loss. So that's"}, {"start": 486.40000000000003, "end": 492.8, "text": " your like L2 square loss. And everything else remains the same. We're still feeding in the"}, {"start": 492.8, "end": 498.88000000000005, "text": " state. We're still feeding in the last action. And that's it. So one important thing to notice"}, {"start": 498.88000000000005, "end": 504.8, "text": " here is that during this phase two, when they're training the only the adaptation module, so"}, {"start": 504.8, "end": 510.16, "text": " obviously the base policies is frozen here. And second important thing is they are using on policy"}, {"start": 510.16, "end": 516.8, "text": " algorithm here. So they're basically feeding in this ZT hat. So the inferred one, even though"}, {"start": 516.8, "end": 521.2, "text": " initially, obviously, because this is randomly initialized, these will be kind of rubbish in"}, {"start": 521.2, "end": 528.48, "text": " the beginning. So what you'll basically be ending up in like bad states. And that will help this"}, {"start": 528.48, "end": 534.48, "text": " network become robust enough so that you can couple like cope with different deployment scenarios."}, {"start": 534.48, "end": 539.76, "text": " Because if we were feeding the perfect information coming from this, from these environment"}, {"start": 539.76, "end": 546.08, "text": " variables, then it will the basically the adaptation module would only be exposed to good"}, {"start": 546.08, "end": 551.6800000000001, "text": " trajectories. And that's not what we want, because we want it to be able to cope even with these bad"}, {"start": 551.6800000000001, "end": 557.6, "text": " trajectories when when something goes wrong. So that's the that's that's it in a nutshell. So in"}, {"start": 557.6, "end": 563.6800000000001, "text": " the first again, in the first part, they train the base policy. So that's your RL agent, they train"}, {"start": 563.68, "end": 568.3199999999999, "text": " this encoder of the environment factors. 
And they train this in the second stage, they train this"}, {"start": 568.3199999999999, "end": 574.7199999999999, "text": " adaptation module, which is really important, which learns how to predict the latent, so condensed"}, {"start": 574.7199999999999, "end": 580.8, "text": " information about the environment from the past observations only. And interesting thing to notice"}, {"start": 580.8, "end": 585.68, "text": " here is that so they have no vision component in the system. So all of the videos you clips, the"}, {"start": 585.68, "end": 594.4, "text": " clips you saw earlier were based solely on this RL and supervised learning policies. Okay. So"}, {"start": 594.4, "end": 600.0799999999999, "text": " deployment, what I do is they obviously feed the the past observations into the adaptation module,"}, {"start": 600.0799999999999, "end": 607.28, "text": " it runs on 10 10 hertz, because you have like 5050 past observations here. So it's kind of needs to"}, {"start": 607.28, "end": 614.0799999999999, "text": " run a bit slower. And it infers this this ZT hat, which is information about the environment,"}, {"start": 614.08, "end": 618.1600000000001, "text": " every now and then. So these two components are pretty much asynchronous. So that means that the"}, {"start": 618.1600000000001, "end": 624.08, "text": " base policy will be running on 100 hertz. So so it would be feeding the state and the last action,"}, {"start": 624.08, "end": 630.24, "text": " and it will just be fetching whatever the last update to this, to this vector of this latent"}, {"start": 630.24, "end": 637.2800000000001, "text": " vector is. So that assumption kind of is valid because environment won't be changing. Like"}, {"start": 637.2800000000001, "end": 641.6, "text": " empirically, they kind of figure out that it won't be changing, they don't need to improve the"}, {"start": 641.6, "end": 646.08, "text": " the frequency of this, and it just works. Okay, having explained that, let's now jump into some"}, {"start": 646.08, "end": 652.0, "text": " details. First thing, so this is, they notice here that the key, the key contribution of the"}, {"start": 652.0, "end": 657.44, "text": " paper is basically, so our novel aspects are the use of a varied terrain generator, and natural"}, {"start": 657.44, "end": 662.0, "text": " reward functions. So by the way, let me just digress here. So as you can see here, this is"}, {"start": 662.0, "end": 668.88, "text": " the terrain, it's totally random, it looks really hard, you can, the legs can get jammed, etc. So"}, {"start": 668.88, "end": 675.92, "text": " the whole point of this is, similarly to the OpenAI's robotic hand, is to train this thing in a bunch"}, {"start": 675.92, "end": 680.88, "text": " of different hard environments, I forgot to mention that, where they sample different parameters such"}, {"start": 680.88, "end": 687.52, "text": " as mass, and like in the OpenAI paper, they sample gravity even. And so the idea is to expose the"}, {"start": 687.52, "end": 692.72, "text": " policy to such a wide range of randomized worlds, so that when you deploy it in a real world, it"}, {"start": 692.72, "end": 698.08, "text": " just looks like another simulation, and that's the whole point. Simulation hypothesis confirmed."}, {"start": 698.08, "end": 707.0400000000001, "text": " So let's continue here. 
So the aspects are, as I said, varied terrain generator, and natural reward"}, {"start": 707.0400000000001, "end": 713.44, "text": " functions, motivated by bioenergetics, which allows us to learn walking policies without using any"}, {"start": 713.44, "end": 717.44, "text": " reference demonstrations. But the truly novel contribution of this paper is the adaptation"}, {"start": 717.44, "end": 723.2800000000001, "text": " module, trained in simulation, which makes the rapid motor adaptation possible. Nice."}, {"start": 723.28, "end": 728.88, "text": " Um, what next? So here is the reward function they're using. I mentioned it's heavily shaped,"}, {"start": 728.88, "end": 736.0, "text": " as you can see here. So the goal is the following. The goal is to, they say it's somewhere here, so"}, {"start": 736.0, "end": 739.6, "text": " the reward function encourages the agent to move forward with a maximum speed, blah blah blah,"}, {"start": 739.6, "end": 746.4, "text": " and penalizes it for jerky and inefficient motions. So basically you can see here, the goal is to"}, {"start": 746.4, "end": 752.24, "text": " maximize the reward, and if we plot this first component, so the whole reward will just be a sum"}, {"start": 752.24, "end": 758.88, "text": " of these 10 components, and if we just plot this thing here, so basically on the x-axis you have"}, {"start": 759.44, "end": 765.12, "text": " vx, and let me just kind of briefly digress here. So this is a robot, just to explain the coordinate"}, {"start": 765.12, "end": 771.28, "text": " system. This is a robot, pretty retarded, okay, but yeah, it works. And this would be the x-axis,"}, {"start": 771.92, "end": 778.4, "text": " y-axis would be, for example, pointing towards us, and these z-axis points upwards. And having said"}, {"start": 778.4, "end": 786.48, "text": " that, so this velocity x is just a component along this x-axis here, and so it basically increases"}, {"start": 786.48, "end": 793.04, "text": " until 0.35, so this value here, and then it just starts getting into the saturation, which means"}, {"start": 793.04, "end": 799.92, "text": " the robot is encouraged only to get to 0.35 and not above. So they, again, penalize, because this"}, {"start": 799.92, "end": 805.36, "text": " is minus, we want to minimize this part, so we want to minimize the sideways movement, so this is,"}, {"start": 805.36, "end": 810.64, "text": " again, y-axis coming out of the screen towards you, so we want to minimize that velocity component"}, {"start": 810.64, "end": 816.16, "text": " there, as well as this angular velocity, and like a bunch of similar stuff, like torque, we want to"}, {"start": 816.16, "end": 824.32, "text": " minimize torque, we want to minimize the joint motion, we want to minimize the slippage of the"}, {"start": 824.32, "end": 830.16, "text": " foot. So basically what this g of t is, they say it here, so blah blah blah, the binary foot contact"}, {"start": 830.16, "end": 836.8, "text": " indicator vector as g, so that's g. So that means you have four legs, and because you do this dyac"}, {"start": 837.36, "end": 842.24, "text": " operation, what you do is you basically get a matrix where you have a binary matrix, so you have"}, {"start": 842.24, "end": 849.92, "text": " maybe zero, one, zero, zero, one. That means two of the legs are currently in the contact with the"}, {"start": 849.92, "end": 857.04, "text": " floor, and only those, we'll only care about those velocities of the foot. 
So I think, so they say"}, {"start": 857.04, "end": 862.56, "text": " somewhere here, so vf is the velocity of the feet, okay? So that means we want to minimize that"}, {"start": 862.56, "end": 866.48, "text": " velocity, which makes sense, because that means if the velocity is hard, that means that the legs are"}, {"start": 866.48, "end": 871.04, "text": " kind of slipping around, and that's not something we want. We want a stable robot without jerking,"}, {"start": 871.04, "end": 879.12, "text": " without like these movements, like either sideways movements, or here you can see it's also"}, {"start": 879.12, "end": 884.88, "text": " penalized not to go up and down too much. So anyways, I can just imagine how it looked like"}, {"start": 884.88, "end": 892.96, "text": " to shape this thing. Not the best design, I guess, but like I love the results they got, and yeah."}, {"start": 893.92, "end": 900.24, "text": " In general, it's not recommended to shape the reward like this, but I guess their contribution"}, {"start": 900.24, "end": 906.08, "text": " lied somewhere else, and they, basically in the adaptation procedure, I guess this part is not"}, {"start": 906.08, "end": 911.36, "text": " as important as the fact that they show that their robots can adapt to unknown environments,"}, {"start": 911.36, "end": 917.52, "text": " even being trained in simulation only, that's a huge, huge thing. And they say it here again,"}, {"start": 918.4, "end": 925.12, "text": " this tells us about the fragility of this shaping. We found these reward functions to be critical"}, {"start": 925.12, "end": 930.08, "text": " for learning realistic gates in simulation, so I guess they had to iterate a lot to get this to"}, {"start": 930.08, "end": 936.32, "text": " work. And again, problems in the paradise, they say here, if we naively train our agent with the"}, {"start": 936.32, "end": 941.5200000000001, "text": " above reward function, it learns to stay in a place because of the penalty terms on the movement of"}, {"start": 941.5200000000001, "end": 947.84, "text": " the joints. So that translates into this. So this Q with the small dot here, if you can see, just"}, {"start": 947.84, "end": 954.72, "text": " denotes the velocities of the joints. So this term of the reward function will encourage the robot not"}, {"start": 954.72, "end": 962.1600000000001, "text": " to move at all, and yeah, as you can see here, if we just start with these, because this will be just"}, {"start": 962.16, "end": 966.56, "text": " a weighted sum of these rewards, right? And you'll have some parameters, as you can see here, they"}, {"start": 966.56, "end": 971.8399999999999, "text": " just figured out these parameters empirically. So if you just start with these parameters,"}, {"start": 971.8399999999999, "end": 976.9599999999999, "text": " the thing won't work, the robot won't move. So they do some sort of annealing, they start with"}, {"start": 976.9599999999999, "end": 982.0799999999999, "text": " super small coefficients, and then during the training, they slowly raise this up. So it's a"}, {"start": 982.0799999999999, "end": 988.0, "text": " training curriculum, obviously. And yeah, and they say to prevent the collapse, we start the training"}, {"start": 988.0, "end": 992.4, "text": " with very small penalty coefficients, so very small penalty coefficients, and then gradually increase"}, {"start": 992.4, "end": 996.72, "text": " the strength of these coefficients using a fixed curriculum. 
We also linearly increase the difficulty"}, {"start": 996.72, "end": 1001.44, "text": " of other perturbations such as mass, friction, and motor strength as the training progresses."}, {"start": 1001.44, "end": 1005.28, "text": " We don't have any curriculum on the trains and start the training with randomly sampling the"}, {"start": 1005.28, "end": 1011.2, "text": " train profiles from the same fixed difficulty. Bottom line, certain parts of the environment are"}, {"start": 1011.2, "end": 1016.72, "text": " under curriculum, certain parts like the terrain itself is always sampled from the same very same"}, {"start": 1016.72, "end": 1022.48, "text": " difficulty. That was the whole point. Here in this table, they just show the range they've used for"}, {"start": 1022.48, "end": 1029.04, "text": " certain parameters in the environment like friction, center of mass, etc. So yeah, so again,"}, {"start": 1029.04, "end": 1035.92, "text": " center of mass is this thing here, they have this robot, they put like an object with mass on top of"}, {"start": 1037.3600000000001, "end": 1044.08, "text": " its back, and they just test how it behaves under these external forces. That's the whole point."}, {"start": 1044.08, "end": 1049.4399999999998, "text": " And as I mentioned, this is the same thing as with OpenAI's robotic hand. They sample an environment"}, {"start": 1049.4399999999998, "end": 1054.3999999999999, "text": " where they'll be sampling uniformly from these ranges, and these ranges as well will be"}, {"start": 1054.3999999999999, "end": 1058.08, "text": " progressively becoming harder and harder as the training progresses, so that's because they are"}, {"start": 1058.08, "end": 1065.1999999999998, "text": " included in the training curriculum. One thing worth mentioning is that they mention it here."}, {"start": 1065.1999999999998, "end": 1071.28, "text": " Note that instead of predicting ET, so that's the environmental vector, which is the case in typical"}, {"start": 1071.28, "end": 1079.2, "text": " system identification, we directly estimate extrinsics ZT that only encodes how the behavior"}, {"start": 1079.2, "end": 1085.36, "text": " should change to correct for the given environment vector ET. So the whole point here is something"}, {"start": 1085.36, "end": 1091.44, "text": " similar to Mu0. If you watched that video, if you want to check it out, I'll link Mu0 video somewhere"}, {"start": 1091.44, "end": 1096.8, "text": " here, I made it a while ago, and the whole idea is the following. You only need to encode that"}, {"start": 1096.8, "end": 1103.04, "text": " information which makes you behave, have a better policy. You don't need to be able to reconstruct"}, {"start": 1103.04, "end": 1108.6399999999999, "text": " from that latent, so in this case, this environment vector, or in the case of Mu0, the observation"}, {"start": 1108.6399999999999, "end": 1114.8, "text": " space, you just need to have such information upon which you can act and have an optimal policy,"}, {"start": 1115.44, "end": 1122.96, "text": " and that's it. So you just save some bandwidth on modeling only the information you actually need"}, {"start": 1122.96, "end": 1129.28, "text": " to behave properly in the environment, and you just discard everything else. Okay, here we have"}, {"start": 1129.92, "end": 1133.8400000000001, "text": " some experiments they've done with certain baselines. So again, they have a bunch of"}, {"start": 1133.8400000000001, "end": 1139.44, "text": " different tasks. 
They have this uneven foam, they have this upward incline, so they're testing it"}, {"start": 1139.44, "end": 1146.0, "text": " in a bunch of different mattresses. They're like step up eight centimeters, step up six centimeters,"}, {"start": 1146.0, "end": 1152.32, "text": " step down 15 centimeters, et cetera. And they just compare it with the other baselines such as the"}, {"start": 1152.32, "end": 1158.4, "text": " this very same model, just without that adaptation block, as well as this A1. So A1 is just the name"}, {"start": 1158.4, "end": 1166.48, "text": " of the robot, so A1 is some default controller software that comes as a firmware of this robot"}, {"start": 1166.48, "end": 1171.68, "text": " when you buy it. And they showed that on all of these environments in these tables, just looking"}, {"start": 1171.68, "end": 1177.92, "text": " at the success metric, TTF, so it's time to fall and distance, RMA pretty much outperforms the other"}, {"start": 1177.92, "end": 1183.1200000000001, "text": " baselines in all of these tasks, as well as in this payload analysis, we have these diagrams"}, {"start": 1183.1200000000001, "end": 1189.44, "text": " where we are on the x-axis, we are increasing the weight of this thing of the payload on the robot's"}, {"start": 1189.44, "end": 1194.0, "text": " back, and we can see how different baselines behave, and we can see that the success drops"}, {"start": 1194.0, "end": 1200.4, "text": " really slowly with this RMA approach, whereas with the baselines, it drops pretty significantly."}, {"start": 1200.4, "end": 1206.48, "text": " And worth mentioning is that the robot itself has 12 kilos, which means that at this point on x-axis,"}, {"start": 1206.48, "end": 1212.3200000000002, "text": " where basically the robot is carrying the same amount of weight as its own body weight on its"}, {"start": 1212.3200000000002, "end": 1220.8000000000002, "text": " own back. So that's pretty cool. So here they have another analysis on this particular environment"}, {"start": 1220.8000000000002, "end": 1227.92, "text": " with the mattress covered with oil, and what you can notice here is that from this point on,"}, {"start": 1227.92, "end": 1235.8400000000001, "text": " so when it starts stepping on this slippery floor, the environment parameters obviously change,"}, {"start": 1235.8400000000001, "end": 1241.04, "text": " like the friction parameter will change, and we can see that being"}, {"start": 1241.8400000000001, "end": 1248.4, "text": " inferred by the adaptation module exactly here. So we just log, so this latent vector, I think"}, {"start": 1248.4, "end": 1254.88, "text": " it has like eight dimensions, so they just took Z1 and Z5, so those are two particular features"}, {"start": 1254.88, "end": 1261.2800000000002, "text": " of that latent vector, and they just do, they plot it, and we can see that it basically reacts"}, {"start": 1261.2800000000002, "end": 1266.8000000000002, "text": " really quickly to the change of the environment, and that in turns helps the robot. So you can see"}, {"start": 1266.8000000000002, "end": 1275.92, "text": " here, this is the gate patterns. 
Basically RL means rear left leg, rear right leg,"}, {"start": 1276.64, "end": 1282.8000000000002, "text": " like forward left leg, and forward right leg, and so you can just see these means that"}, {"start": 1282.8, "end": 1288.48, "text": " so when you have a bar here, that means that the contact is made between this particular leg and"}, {"start": 1288.48, "end": 1294.24, "text": " the floor, and you can see that things start getting messy here at one point in time, and then"}, {"start": 1294.24, "end": 1299.6, "text": " the pattern returns to normal even though the robot is still on the environment because of the"}, {"start": 1299.6, "end": 1306.1599999999999, "text": " adaptation module, and also here what I mentioned is that the peaks here, and this is the torque"}, {"start": 1306.16, "end": 1313.8400000000001, "text": " plot as you can see here, the knee torque gets higher after this part here, and that's just"}, {"start": 1313.8400000000001, "end": 1321.76, "text": " a way to visualize what happens internally in that RL agent and adaptation modules, and yeah,"}, {"start": 1321.76, "end": 1330.0800000000002, "text": " pretty awesome behavior. Some final results here are in simulation. They just show a couple more"}, {"start": 1330.08, "end": 1335.9199999999998, "text": " baselines. We have RMA, we have RMA without the adaptation module, we have expert which is using,"}, {"start": 1335.9199999999998, "end": 1340.96, "text": " instead of using the previous observations to figure out the latent environment vector,"}, {"start": 1340.96, "end": 1346.24, "text": " it just uses the ground truth vectors, and we can see looking at all these metrics like reward,"}, {"start": 1346.24, "end": 1353.1999999999998, "text": " TTF, so time to fall, distance, samples, torque, smoothness, if you remember stuff like torque"}, {"start": 1353.2, "end": 1360.4, "text": " and stuff like smoothness, and ground impact were like enforced to be minimized by the reward"}, {"start": 1360.4, "end": 1366.16, "text": " components up there, so let me just, for example, let's take a ground impact or smoothness. 
If I go"}, {"start": 1366.16, "end": 1372.72, "text": " there to the reward function, so here we have, so smoothness as you can see here, or ground impact,"}, {"start": 1372.72, "end": 1380.16, "text": " that means that the force, so F is the, so ground reaction force is at the feet as F,"}, {"start": 1380.16, "end": 1385.1200000000001, "text": " so this part just tells you how elegantly, let's call it that way, the robot is kind of touching"}, {"start": 1385.1200000000001, "end": 1391.1200000000001, "text": " the floor, and it should be minimized looking at this because, yeah, because it's a part of the"}, {"start": 1391.1200000000001, "end": 1397.44, "text": " component with a minus prefix here, and just again returning to the table, we can see that the"}, {"start": 1398.48, "end": 1404.0, "text": " ground impact gets smaller and smaller here, and we can see that the difference between RMA and"}, {"start": 1404.0, "end": 1409.76, "text": " expert is really small at all of these metrics, so we can see that the ground force is at the"}, {"start": 1409.76, "end": 1416.08, "text": " feet, so just comparing the success rate, comparing the TTF, comparing reward, comparing distance,"}, {"start": 1416.08, "end": 1424.32, "text": " and everything else, it seems that RMA is able to infer the latent environment vector almost as"}, {"start": 1424.32, "end": 1429.76, "text": " well as the ground truth vector, so that's awesome, that means that the supervised learning"}, {"start": 1430.72, "end": 1435.52, "text": " like knowledge distillation worked, so that's pretty much it, that was a pretty neat idea,"}, {"start": 1435.52, "end": 1443.84, "text": " awesome results, this has, this can have amazing implications on RL being used in real world"}, {"start": 1443.84, "end": 1448.8, "text": " environments combined with some other types of learning, paradigms such as supervised learning,"}, {"start": 1449.68, "end": 1455.52, "text": " and like I guess the like follow-up work to this thing will be integrating vision component and"}, {"start": 1455.52, "end": 1459.52, "text": " then just deploying it, and next thing you know, Boston Dynamics robots will be running around the"}, {"start": 1459.52, "end": 1464.48, "text": " city, that's pretty much it, hopefully you like this video, if you did, share it out, comment down"}, {"start": 1464.48, "end": 1470.16, "text": " below if you have any suggestions for the next papers or whatever you have to say, and until next"}, {"start": 1470.16, "end": 1498.3200000000002, "text": " time, bye bye!"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=3SLQVh9ABDM
AudioCLIP: Extending CLIP to Image, Text and Audio | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover AudioCLIP - OpenAI's CLIP (contrastive language-image pre-training) model extended to support the audio modality and novel query directions. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2106.13043 ✅ Jina AI: https://github.com/jina-ai/jina ✅ My GitHub projects: https://github.com/gordicaleksa ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Overview of the CLIP model 07:15 ESResNeXt model and spectrograms 10:30 High-level system overview 11:25 AudioCLIP datasets walk-through 14:45 Audio data augmentations 16:30 Various training stages of AudioCLIP 19:10 Results 21:45 Querying results 23:00 Neural Search and Jina AI ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #audioclip #clip #neuralsearch
What's cracking guys? In this video I'm covering AudioCLIP: Extending CLIP to Image, Text and Audio, by Andrey Guzhov, Federico Raue, Jörn Hees and Andreas Dengel from DFKI and the Technical University of Kaiserslautern. The main idea of this paper is that they extend CLIP by adding a new modality, audio. And before I start digging into the paper itself, I thought I'd explain how OpenAI's CLIP model works; for those of you who want to know all of the details, you can check out my video on CLIP, I made it a couple of months ago. Having said that, the main motivation behind CLIP was to try and see how vision models perform given a huge amount of data, so scale. As we already know, OpenAI did the same thing in NLP with the GPT family of models, especially GPT-3, which was huge, a 175-billion-parameter model. The thing is, in computer vision, until recently we were using stuff like ImageNet or ImageNet-21k, which are super small compared to NLP standards. Okay, Google had their own proprietary big dataset, JFT-300M I think, but that's obviously not public. So OpenAI had to create their own dataset, and they ended up with 400 million image-caption pairs, which they used to train this CLIP model we're about to explore. One more thing: they wanted to see how natural language works as a source of supervision compared to the gold labels you have in your classical ImageNet dataset, for example, or whatever arbitrary vision dataset like CIFAR-10. Okay, having said that, let me briefly recap how CLIP works. You train CLIP the following way: you have a batch of images, you have a batch of corresponding captions, and you encode those into these representations here. Then you basically compute the cosine similarity between all of the pairs, and you want to maximize it for the corresponding image-caption pairs. So I1 corresponds to T1, that's why we want to maximize this particular term, and we want to minimize all of the other ones. The same goes for the other elements on the main diagonal: you want to maximize those and minimize all of the other ones. Going into a bit more detail here, what they used for the image encoder was the Vision Transformer: you feed in an image, and from a certain layer you just take a vector and linearly project it, and that's how they obtain these encodings. On the other hand, the text encoder was a GPT-2-style transformer, and they form the representations T1 through Tn the following way: you input a start-of-sequence token, then you tokenize the caption via byte pair encoding, for example, you embed those tokens here, and finally you append the end-of-sentence token. The representation they use is, after propagating this through the transformer, the one sitting on top of the EOS token, and they just linearly project it to get T1 through Tn, where n is the batch size. Then they L2-normalize these vectors so that they can just do a dot product, and by doing the dot product, you basically get the cosine similarity. 
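That symmetric contrastive objective is compact enough to sketch in a few lines. A minimal version, assuming both encoders output a (batch, dim) tensor; note that the real CLIP learns its temperature, so the fixed 0.07 here is just a placeholder.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over the cosine-similarity matrix (a sketch)."""
    img = F.normalize(img_emb, dim=-1)   # L2-normalize so dot product = cosine similarity
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                           # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)   # matching pairs on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```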
So just to recap here: A dot B is actually the L2 norm of A times the L2 norm of B times the cosine of the angle between those two vectors, okay? And since, as I said, these are normalized, those norms are going to be one, so the dot product is the same thing as the cosine similarity. So you just take these dot products between the embeddings, and you end up with this matrix. And finally, you apply a symmetric cross-entropy to this matrix, and that's the loss function of CLIP. It will push the diagonal terms towards one, and all of the other ones, the elements off the main diagonal, are going to be pushed down towards minus one. And that's, in a nutshell, how they train this multimodal embedding space: they backprop through the weights of the image encoder and the text encoder, and that's your CLIP. A nice thing they later showed is that they can use this in a zero-shot setting on any target dataset. Take, for example, CIFAR-10. You have plane, car, dog, bird, whatever, and you want to convert those gold labels into the format CLIP is used to. What they do is simply something like this: "a photo of a", and you just insert your label, so "a photo of a plane", for instance. You encode all of your labels into T1 through Tn, where n is the number of labels in your dataset, so 10 for CIFAR-10. And then, to classify an image, you just take your image, you encode it, you again compute the cosine similarities, apply a softmax, and take the mode of the distribution, and that's your output. So here we have "a photo of a dog". It's as simple as that; you can zero-shot create this classification head and then classify your images. And when I say classification head, let me break that down. If we take this I1 embedding, and I draw it as a vector that has, say, four dimensions, to simplify a bit, then these text embeddings will just form the weights of the connections of that classifier head. So you'll have something like this, and this will be your first output; then this embedding here, when you multiply it with the I1 vector, will give you the next output. So as you can see, you're basically forming a classification head, and the reason we can do this is, as we saw, that the dot product, which is exactly what these connections compute, gives us the cosine similarity. All of these blue, green, and other weights are pre-computed, and they form a specific classification head for the specific dataset at hand. Hopefully that was clear. That's CLIP; now let's jump into the paper. They say here: in this work we present an extension of the CLIP model that handles audio in addition to text and images. Our proposed model incorporates the ESResNeXt audio model into the CLIP framework using the AudioSet dataset. Such a combination enables the proposed model to perform bimodal and unimodal classification and querying, while keeping CLIP's ability to generalize to unseen datasets in a zero-shot inference fashion. 
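And since that zero-shot classification head keeps coming up, here is a small sketch of how it can be built from prompts alone. The `text_encoder` callable is a stand-in for whatever maps a prompt string to an embedding, so treat it as an assumption.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image_emb, class_names, text_encoder):
    """Prompt-based zero-shot head: text embeddings act as the classifier weights."""
    prompts = [f"a photo of a {name}" for name in class_names]
    txt = torch.stack([text_encoder(p) for p in prompts])   # (num_classes, dim)
    txt = F.normalize(txt, dim=-1)
    img = F.normalize(image_emb, dim=-1)                     # (batch, dim)
    probs = (img @ txt.t()).softmax(dim=-1)                  # cosine sims -> softmax
    return probs.argmax(dim=-1)                              # predicted class index
```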
The main unknown here, I guess, will be ESResNeXt: ES stands for environmental sound, and the NeXt part is just a modification on top of ResNet, the famous model with skip connections. How these sound classification models work is pretty similar across the board. Audio is simply a 1D signal, so you have your 1D samples of the audio waveform, and it looks something like this. And you have pretty much two approaches to audio classification: you either apply something like 1D convolutions and process the raw signal until you get your classification output, or, more commonly, you convert it into something called a spectrogram and then apply a 2D convolutional neural network to get your classification results. A spectrogram has the frequency component on the y-axis and the time component on the x-axis, and what it tells you is the following: if I take a particular time point and a particular frequency, maybe 1 kHz, it's going to tell you the magnitude of the sinusoid at that particular point. And why a sinusoid? Because, if you know what the Fourier transform is, you can basically represent any natural signal as a sum of sinusoids with different frequencies, amplitudes, and phases. So by applying the Fourier transform, you can convert an arbitrary natural signal into a spectrogram. That decomposition was proven experimentally to be super useful, which is why we commonly use these spectrograms, or mel spectrograms, which additionally warp the frequency axis onto the perceptual mel scale (usually with a log applied), as the input. And so, yes, ESResNeXt is just a fancy, improved ResNet-style architecture that works on these 2D spectrograms. With a small caveat: they're not using the plain short-time Fourier transform, they're using something called the complex frequency B-spline transform, super fancy, we don't even need to know the details; it just has some additional hyperparameters, and it turns out that the STFT, the short-time Fourier transform, is a special case of it, so they can initialize it so that it initially behaves as the STFT, but then those parameters are learned, and that was shown to improve the results of ESResNeXt. That's all the prerequisite knowledge you need; again, don't worry if you don't understand every single detail here, you just have a 2D spectrogram and this ResNet-style model, and that's how ESResNeXt works. Okay, now let's jump to the whole pipeline; this is how the final model looks. As you can see, this part here is your standard CLIP, and they additionally add ESResNeXt: you feed in audio, you get the representations, and, as you can see, it's pretty much the same idea as CLIP, just this time, because you have the additional modality, you create these matrices of cosine similarities between audio and image, that's this one, between audio and text, that's this one, and again between text and image, that's your classical CLIP, that's this one. And you'll again be maximizing the main-diagonal cosine similarities and suppressing the off-diagonal elements, and that's how you train this novel AudioCLIP model. 
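A minimal sketch of that tri-modal objective, reusing the same symmetric contrastive term once per modality pair; the equal weighting of the three terms and the fixed temperature are my simplifications, not necessarily what the paper does.

```python
import torch
import torch.nn.functional as F

def audioclip_loss(img_emb, txt_emb, aud_emb, temperature=0.07):
    """One CLIP-style symmetric contrastive term per pair: image-text, audio-text, audio-image."""
    def pair_loss(a, b):
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
    return (pair_loss(img_emb, txt_emb) +
            pair_loss(aud_emb, txt_emb) +
            pair_loss(aud_emb, img_emb))
```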
Okay, now let's see what else. Their procedure for training this thing is fairly complex, there are many different initializations and training stages, so let's briefly go through the datasets they are using. They mention here that in this work the CLIP dataset was used indirectly, as a weight initializer of the text and image heads, so basically they implicitly used that huge OpenAI dataset by reusing the CLIP weights, that's what they want to say here. For the purposes of this work, the ImageNet dataset served as a weight initializer of the ESResNeXt model; in the previous paper that introduced ESResNeXt, they showed that using an ImageNet initialization helps boost the sound classification performance after they additionally train it on this thing called AudioSet, which we'll soon see. So aside from those two datasets, which are used implicitly, they explicitly use this dataset called AudioSet. That's a dataset with 1.8 million samples, so a fairly big dataset for the audio research community, it has 527 classes, and each sample is a snippet up to 10 seconds long from a YouTube video, defined by the corresponding video ID and timings. Usually you ditch the vision component and just focus on the audio clips and their labels, but they say here: for this work we acquired video frames in addition to audio tracks, thus the AudioSet dataset became the glue between the vanilla CLIP framework and our tri-modal extension on top of it. During training, 10 equally distant frames were extracted from each video recording, and one of them was picked randomly and passed through the AudioCLIP model. So let's see how that works. Drawing a video here, something like this, I'm looking at it sideways, and these are the frames, and each frame has some spatial extent, which is not that important here. Associated with this video we'll have an audio track, some natural signal like this, and we'll also have a label, I don't know, like cat or whatnot. And the way they create a dataset that will be used to jointly train all three modalities is the following: as mentioned, they randomly pick one of these frames, maybe this one, so that's your 2D image; the audio track gets converted into a 2D spectrogram, which is again a 2D image; and you have your label here. Those three will be fed into the system and used to jointly train it, and we'll see there are a few more details than that, but that's it for a start. They additionally have these two datasets called UrbanSound8K and ESC-50, which is the Environmental Sound Classification dataset, and they use those in two ways. They mention here: on this dataset we perform zero-shot inference using the AudioCLIP model trained on AudioSet, so they use them to evaluate the models, but they also use those datasets to fine-tune the audio head, as we'll soon see. Aside from the datasets, they also use a bunch of data augmentations, which we'll get to right after this quick sketch of how one training triplet could be put together. 
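Here is a tiny sketch of that sampling, just to make it concrete; the helper and its argument names are hypothetical, and how exactly the frames are spaced is my assumption.

```python
import random

def make_training_triplet(video_frames, audio_waveform, label, num_frames=10):
    """Turn one AudioSet clip into a (frame, audio, label) triplet:
    take roughly 10 equally spaced frames and pick one of them at random."""
    step = max(len(video_frames) // num_frames, 1)
    candidates = video_frames[::step][:num_frames]
    frame = random.choice(candidates)
    return frame, audio_waveform, label
```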
In comparison to the composite CLIP dataset, the audio datasets provide two orders of magnitude fewer training samples, which makes overfitting an issue. So they have this thing called time scaling, where, so this is your audio signal, and you have some signal inside of it, and what they do is basically they either compress it, so you'll have a signal here, or they stretch it, so you'll have something like this, and they feed that into the ES ResNeXt model. They also use this thing called time inversion, which is equivalent to flipping your image in computer vision tasks, so that means, simply, if we have this signal here, and this is some symmetry axis, you just basically rotate it by 180 degrees to get the inverted signal, and then you feed that into ES ResNeXt. They also have random crops and padding, because they actually need these signals to be of an appropriate length so that they can be fed into the ES ResNeXt model. So sometimes, if this is the target length, for example, that means they'll have to pad this part here, they didn't say how they pad it, I guess it's zeros or something, or, if we have a longer signal, then you'll just crop it in order to fit it into this target length. And yeah, finally they have this random noise, they basically just apply additive white Gaussian noise, and that's similar to when you do photometric augmentations when you're training your computer vision models, so that's an equivalent type of augmentation. How they train this thing is the following. While the CLIP model was already pre-trained on text-image pairs, we decided to perform an extended AudioSet pre-training of the audio head first, as it improved performance of the base ES ResNeXt model, and then to continue training in a tri-modal setting, combining it with two other heads. Here, the whole AudioCLIP model was trained jointly on the AudioSet dataset, using audio snippets, the corresponding video frames, and the assigned textual labels. The whole training pipeline roughly consists of four parts. The first part is, you have CLIP, and it's already pre-trained by the OpenAI team, so that's just taken as is. The second part is the ES ResNeXt, which is first initialized via the ImageNet weights, and then they additionally pre-train it using this AudioSet dataset. The reason being, ImageNet weights are obviously more suited towards understanding images, whereas they want to apply this model to audio, so that's why they have this second part here. Finally, and they mention it here, parameters of the two other sub-networks, namely the text and image heads, were frozen during the cooperative pre-training of the audio encoding head, thus these heads served as teachers in a multi-modal knowledge distillation setup. So that means the third component is, again, training on this AudioSet dataset, but this time they do it jointly. So here they had a standalone training of ES ResNeXt, here they do it jointly. So when I say jointly, let me just go back to that drawing. Okay, so what they do is they freeze these weights here, they freeze these weights here, and now they just backprop through this model and jointly train this whole system, but they only update the weights of the audio head. So that's the third part of how they train ES ResNeXt. So as I said, multiple details there, but yeah.
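Here is a rough sketch of what those four audio augmentations could look like on a raw waveform in PyTorch; the scaling range, noise level, target length and the zero-padding choice are placeholder assumptions, since the exact values are not given here.

```python
import torch
import torch.nn.functional as F

def augment_waveform(x, target_len=44100 * 5, scale_range=(0.8, 1.25), noise_std=0.01):
    # 1) time scaling: resample the waveform to a random fraction of its length
    scale = torch.empty(1).uniform_(*scale_range).item()
    x = F.interpolate(x[None, None, :], scale_factor=scale, mode="linear",
                      align_corners=False)[0, 0]
    # 2) time inversion: flip the signal along the time axis with 50% probability
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[0])
    # 3) random crop or pad to the fixed length expected by the audio head
    if x.numel() > target_len:
        start = torch.randint(0, x.numel() - target_len + 1, (1,)).item()
        x = x[start:start + target_len]
    else:
        x = F.pad(x, (0, target_len - x.numel()))  # padding value assumed to be zero
    # 4) additive white Gaussian noise
    return x + noise_std * torch.randn_like(x)

wave = torch.randn(44100 * 4)        # 4 seconds of fake audio at 44.1 kHz
print(augment_waveform(wave).shape)  # torch.Size([220500])
```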
And the last, fourth part is that they actually train the whole system jointly and update all three heads, as they mention here: here all three modality-dedicated heads were tuned together, making the resulting model take into account the distribution of images and textual descriptions, in addition to the distribution of audio samples. So the reason they do this is because CLIP was obviously trained on that OpenAI dataset, and here they want to evaluate this AudioCLIP model on AudioSet, and so that's the reason why they additionally fine-tune all three heads on AudioSet, because that boosts the performance on the downstream tasks. And it makes sense, I guess. Okay, finally, let's see the results. First things first, they show that for the original pre-training of this ES ResNeXt model on AudioSet, when they train for more epochs, the performance gets better on AudioSet, and it marginally gets better on the two other datasets. So that's not that informative. Let's see some other tables they have here. So this is an important one. They evaluate AudioCLIP in two settings: first audio head only, and then the full model. Audio head only is when you only train the audio head while the other two, the text and the image heads, are frozen, and the full model is where you jointly train all three heads. And we can see that on ImageNet the performance actually goes down, which makes sense if you think about it, because the ES ResNeXt was initialized with ImageNet weights, and the more you fine-tune it jointly with the other heads, the more the model forgets those initial ImageNet weights, and the worse its performance gets on the ImageNet dataset. And again, hopefully you understand the setup here, so let me just go back. What happens here is you have this setup: you take your ImageNet dataset, you'll have a thousand labels, you'll encode them into these thousand embeddings, and then, to find the accuracy, you just take ImageNet images, you feed them in, and you apply this process to find the highest cosine similarity. That's how you do the classification on ImageNet. Let's get back to the table. They also show results on AudioSet. Here we can classify images, but we can also just embed the audio using ES ResNeXt instead of the image, and again, they show that the performance obviously gets better, which makes sense since they are training jointly on this very same AudioSet dataset. They additionally report state-of-the-art results on the UrbanSound8K and ESC50 datasets in the full-model setup, which is cool. Not much new information here. These are the state-of-the-art results, and they also report the zero-shot setting. I kind of hate these tables, I'd love it if they had some lines here, but yeah. So this row here shows you the zero-shot performance of AudioCLIP, and here in this row they actually pre-train on these datasets, and that's where they get the state-of-the-art results, okay? The last thing they report results on is querying, and basically, because we now have three modalities, that means you can query in multiple ways. You can query with text for images, you can query with text for audio, you can query with an image for audio, and so on. So all of the different directions are possible, and they test all of those, so again, let's see the results.
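Since this zero-shot evaluation setup comes up repeatedly, here is a small sketch of the cosine-similarity classification step; the embeddings below are random stand-ins for what the frozen text and image (or audio) heads would produce, and the prompt template is an assumption.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_emb, class_text_embs, class_names):
    # image_emb: (dim,) embedding from the image (or audio) head;
    # class_text_embs: (num_classes, dim) embeddings of prompts like "a photo of a {label}".
    # Both would come from the encoders, which are not shown here.
    sims = F.normalize(class_text_embs, dim=-1) @ F.normalize(image_emb, dim=-1)
    probs = sims.softmax(dim=-1)                 # distribution over the class prompts
    return class_names[probs.argmax().item()], probs

names = ["plane", "car", "dog", "bird"]
text_embs = torch.randn(len(names), 512)   # stand-in for the text head outputs
img_emb = torch.randn(512)                 # stand-in for the image or audio head output
print(zero_shot_classify(img_emb, text_embs, names)[0])
```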
So if you query with text for images on ImageNet, the performance actually drops when we go to the jointly trained model, which again makes sense, that's consistent with the results from the previous table. But again, on AudioSet they actually do improve on all of these different combinations. So querying with text where the result is an image, querying with text where the result is audio; they also query with audio where the result is an image, and vice versa. And again, they have some small regression here on ESC50, so basically the only dataset where they have improvements, but it's a big one, is this AudioSet, and all of the other results either regress or stay pretty much the same as prior to this joint training. So these querying capabilities are super important, because that's how we search for information. If you think about Google, you basically have a textual query and retrieve a bunch of different results that can be of various modalities. It can be an image, it can be a video, it can be text, whatever. So Google used to be a rule-based system, and in the meantime, a couple of years ago when the BERT transformer was published, Google started using BERT in production. So when you type something into Google, you have neural networks being used to retrieve the results for your queries. And that's something called neural search. And while I'm at it, let me just mention Jina AI, which is an open-source framework, which is super cool, because compared to Google, which is a general-purpose search engine that you obviously cannot integrate into your own projects, you can use Jina AI to integrate this search functionality into your own project, where they're using neural search. And as soon as models such as AudioCLIP are published, Jina AI developers integrate that novel research into their framework, and they make it easy to extend the framework with these novel models. So do check them out. I'm a huge fan of open source. If you've been following my channel for quite some time, you know that I'm bullish on open source. You can check out some of my previous projects on GitHub if you're into that, I have a bunch of different deep learning projects implemented there from scratch. So do check those projects out, do check out Jina AI, and until next time, subscribe, share this video if you liked it, and bye bye.
[{"start": 0.0, "end": 5.92, "text": " What's cracking guys? In this video I'm covering audio clip, extending clip to image, text and audio"}, {"start": 5.92, "end": 14.56, "text": " by Andrei Guzov, Federico Rau, Yun Hees and Andreas Dengel from DFKI and Technical University of"}, {"start": 14.56, "end": 21.12, "text": " Kaiserlautern. So what they do, the main idea they do in this paper is they extend clip"}, {"start": 21.84, "end": 29.36, "text": " by adding a novel modality and that's audio. And so before even I start digging into this paper,"}, {"start": 29.36, "end": 34.96, "text": " I thought explaining how OpenAI's clip model works and for those of you who want to know all"}, {"start": 34.96, "end": 39.6, "text": " of the need to read the details of the paper, you can check out my video on clip. I've made it a"}, {"start": 39.6, "end": 47.92, "text": " couple of months ago and yeah. So having said that, the main motivation behind clip was to"}, {"start": 49.120000000000005, "end": 57.84, "text": " try and see how vision models perform given a huge amount of data, so scale. And as we already know,"}, {"start": 57.84, "end": 64.24000000000001, "text": " OpenAI did the same thing in NLP with GPT family of models, especially the GPT-3 model, which was"}, {"start": 64.24000000000001, "end": 72.48, "text": " huge, like 175 billion parameter model. And the thing is in vision, computer vision, until recently"}, {"start": 72.48, "end": 78.96000000000001, "text": " we were using stuff like ImageNet or ImageNet 21k, which were super small compared to the NLP"}, {"start": 78.96000000000001, "end": 86.56, "text": " standards. So obviously, okay, Google had their own proprietary big data set, JFT-300M, I think,"}, {"start": 86.56, "end": 93.44, "text": " and but that's not obviously public. And so OpenAI had to create their own data set and they"}, {"start": 93.44, "end": 99.84, "text": " finally ended up with 400 million image caption pairs, which they've used to train this clip model"}, {"start": 99.84, "end": 106.96000000000001, "text": " we're just gonna, we are about to explore. So again, one more thing is that they wanted to see"}, {"start": 106.96000000000001, "end": 112.72, "text": " how the natural language works as the source of supervision compared to those gold labels you have"}, {"start": 112.72, "end": 119.28, "text": " on your classical ImageNet data set, for example, or whatever arbitrary vision data set like"}, {"start": 119.28, "end": 125.92, "text": " Cypher 10 or whatnot. Okay, having said that, let me kind of briefly recap how clip works. So"}, {"start": 125.92, "end": 130.32, "text": " how you train clip is the following, you have a batch of images, you have a batch of corresponding"}, {"start": 131.2, "end": 138.24, "text": " captions, and you encode those into these representations here. And what you do then"}, {"start": 138.24, "end": 145.84, "text": " is you basically apply cosine similarity between all of the pairs. And you want to make sure you"}, {"start": 145.84, "end": 152.64000000000001, "text": " maximize the corresponding image and captions. So I1 corresponds to T1. That's why we want to maximize"}, {"start": 152.64000000000001, "end": 157.92000000000002, "text": " this particular term, but we want to minimize all of the other ones. And the same goes for the other"}, {"start": 157.92000000000002, "end": 164.88, "text": " elements on this main diagonal. 
And so again, you want to maximize those and you want to minimize"}, {"start": 164.88, "end": 171.35999999999999, "text": " all of the other ones. So going in a bit more detail here, ImageNet, so what they've used for"}, {"start": 171.35999999999999, "end": 178.48, "text": " the image encoder was the vision transformer. And so basically, you feed in an image and from a certain"}, {"start": 178.48, "end": 183.28, "text": " layer, you're just going to take a vector and you're going to linearly project that one. And"}, {"start": 183.28, "end": 192.16, "text": " that's how they obtain these encodings. On the other hand, text encoder was GPT-2. And how they"}, {"start": 192.16, "end": 199.68, "text": " formed this representation, so these T1 through Tn is the following. You input something called"}, {"start": 199.68, "end": 205.84, "text": " start of a sequence token, then you're going to encode this via byte pair encoding, for example."}, {"start": 205.84, "end": 213.68, "text": " So you're going to embed those tokens here. And finally, you'll append the end of sentence token."}, {"start": 213.68, "end": 219.2, "text": " And so the representation they finally use is after propagating this through the transformer,"}, {"start": 219.2, "end": 224.88, "text": " they'll just take the representation that's on top of EOS token. And they're just going to linearly"}, {"start": 224.88, "end": 231.83999999999997, "text": " project that in order to get these T1 through Tn, where n is the batch size. Then what they do is"}, {"start": 231.83999999999997, "end": 238.48, "text": " they L2 normalize these vectors so that they can just do dot product. And by doing dot product,"}, {"start": 238.48, "end": 245.67999999999998, "text": " you basically just get cosine similarity. So just to recap here, so A times B, so A dot product B"}, {"start": 245.68, "end": 254.32, "text": " is actually, so L2 norm of A times L2 norm of B times cosine of theta of the angle between those"}, {"start": 254.32, "end": 260.88, "text": " two vectors, okay? And since, as I said, these are normalized, that means these are is going to be one."}, {"start": 260.88, "end": 266.16, "text": " So a dot product is the same thing as cosine similarity. So you're just gonna do element-wise"}, {"start": 266.16, "end": 270.32, "text": " multiplication between these embeddings, and you're going to end up with these. And finally,"}, {"start": 270.32, "end": 276.4, "text": " you're just gonna apply cross entropy, like symmetric cross entropy to this matrix, and that's"}, {"start": 277.04, "end": 282.8, "text": " going to be the last function of CLIP. So that will just push these terms to one, and it's gonna,"}, {"start": 282.8, "end": 289.6, "text": " so these ones will be pushed to one, and all of the other ones are going to be pushed down to minus"}, {"start": 289.6, "end": 294.96, "text": " one, okay? So all of the elements aside from the main diagonal are going to be pushed towards minus"}, {"start": 294.96, "end": 300.71999999999997, "text": " one. And that's in a nutshell how they train this multimodal embedding space. They back prop through"}, {"start": 300.71999999999997, "end": 307.12, "text": " the weights of the image encoder and the text encoder, and that's your CLIP. So a nice thing they"}, {"start": 307.12, "end": 312.71999999999997, "text": " later showed is that they can use this in a zero-shot setting on any target data set. 
So take,"}, {"start": 312.71999999999997, "end": 317.76, "text": " take for example, Cypher 10, I don't know, I think this is Cypher 10. So you have plane, car, dog,"}, {"start": 317.76, "end": 325.44, "text": " bird, whatever, and you want to convert that gold label into this format that CLIP is used to. And"}, {"start": 325.44, "end": 330.48, "text": " what they do is just they do simply something like this, a photo alpha, and like just insert"}, {"start": 330.48, "end": 336.48, "text": " like your label for sort of a photo alpha plane, I don't know. And you encode all of your labels"}, {"start": 336.48, "end": 343.12, "text": " into T1 through Tn, where n is the number of labels in your data set, so maybe like 10 for Cypher 10."}, {"start": 343.12, "end": 349.44, "text": " And then how you classify an image, you just take your image, you encode it, and you do again simply"}, {"start": 350.24, "end": 355.84000000000003, "text": " like cosine similarity, you apply softmax, and you find the mode of the distribution, and that's your"}, {"start": 355.84000000000003, "end": 362.8, "text": " output. So here we have a photo of a dog. And so it's as simple as that you can kind of zero-shot"}, {"start": 362.8, "end": 367.76, "text": " create this classification head, and you can then classify your images. And so when I said"}, {"start": 367.76, "end": 375.84, "text": " classification head, let me just kind of break that down. If we take this I1 embedding, and I kind of"}, {"start": 376.96, "end": 384.15999999999997, "text": " draw it as a vector that has like maybe four dimensions, let's simplify it a bit. And so"}, {"start": 384.15999999999997, "end": 391.2, "text": " basically what these will do is they will just form the weights of the connections of that"}, {"start": 391.2, "end": 397.2, "text": " classifier head. So you'll have something like this, and this will be your first output, then you'll"}, {"start": 397.2, "end": 404.24, "text": " have, so this one, this embedding here, when you do multiplication with the I1 vector, you'll get"}, {"start": 404.24, "end": 408.64, "text": " something like this. So as you can see, you're just basically forming a classification head,"}, {"start": 408.64, "end": 415.28, "text": " and the reason we can do this is, as we saw here, is if we do dot product, so that's just element-wise"}, {"start": 415.28, "end": 419.84, "text": " multiplication, and that's pretty much the thing we're doing here with these connections, we do"}, {"start": 419.84, "end": 427.84, "text": " cosine similarity. And so as you can see, all of these blue, green, and other weights are formed,"}, {"start": 427.84, "end": 433.52, "text": " are pre-computed, and they form a specific classification head for this specific dataset"}, {"start": 433.52, "end": 438.88, "text": " at hand. So hopefully that was clear. Now that's CLIP, now let's jump into the paper. So they say"}, {"start": 438.88, "end": 444.23999999999995, "text": " here, in this work we present an extension of the CLIP model that handles audio in addition to text"}, {"start": 444.24, "end": 450.88, "text": " and images. Our proposed model incorporates the ES Resnext audio model into the CLIP framework"}, {"start": 450.88, "end": 456.24, "text": " using the audio set dataset. 
Such a combination enables the proposed model to perform bimodal"}, {"start": 456.24, "end": 461.04, "text": " and unimodal classification and querying, while keeping CLIP's ability to generalize to unseen"}, {"start": 461.04, "end": 468.0, "text": " datasets in a zero-shot inference fashion. The main unknown here, I guess, will be ES Resnext,"}, {"start": 468.0, "end": 474.56, "text": " which stands for, so ES stands for environmental sound. The X part is just a modification on top"}, {"start": 474.56, "end": 479.6, "text": " of the Resnet, the famous Resnet model with skip connections. So how those sound classification"}, {"start": 479.6, "end": 485.2, "text": " models work is pretty similar. So audio is simply a 1D signal, so you'll have something like this."}, {"start": 485.2, "end": 492.56, "text": " You have your samples, 1D samples of your audio waveform, and that will look something like this."}, {"start": 492.56, "end": 499.04, "text": " And you have two, pretty much two approaches to audio classification. You either apply something"}, {"start": 499.04, "end": 505.92, "text": " like 1D convolution, and then you process the signal until you get your classification output,"}, {"start": 505.92, "end": 512.24, "text": " or you convert this, so more commonly you just convert this into something called Spectrogram,"}, {"start": 512.24, "end": 517.84, "text": " where then you apply a 2D convolutional neural network to get your classification results."}, {"start": 517.84, "end": 523.84, "text": " So Spectrogram just has, on the Y axis you have the frequency component, on the X axis you have"}, {"start": 523.84, "end": 529.36, "text": " the time component, and what the Spectrogram tells you is the following. So if I take a particular"}, {"start": 529.36, "end": 536.32, "text": " time point and a particular frequency, maybe like 1 kHz, it's going to tell you the amplitude"}, {"start": 536.88, "end": 545.2, "text": " of the sinusoid at that particular point. And why a sinusoid? Because if you know what Fourier"}, {"start": 545.2, "end": 551.9200000000001, "text": " transform is, so you can basically represent any natural signal as a sum of sinusoids with"}, {"start": 551.9200000000001, "end": 557.0400000000001, "text": " different frequencies, with different amplitudes and phases. So applying that Fourier transform,"}, {"start": 557.0400000000001, "end": 563.5200000000001, "text": " you can convert this arbitrary natural signal into Spectrogram. And that decomposition was just"}, {"start": 563.5200000000001, "end": 569.6, "text": " proven experimentally to be super useful, so that's why we are commonly using these Spectrograms,"}, {"start": 569.6, "end": 576.8000000000001, "text": " or male Spectrograms, which are just log-normalized Spectrograms as an input. 
And so, yes, ResNet is"}, {"start": 576.8000000000001, "end": 584.5600000000001, "text": " just a fancy, improved ResNet style architecture that works across these 2D Spectrograms."}, {"start": 585.2, "end": 589.76, "text": " With a small caveat, they're not using this thing called short time Fourier transform,"}, {"start": 589.76, "end": 595.84, "text": " they're using something called complex frequency B-spline, super fancy, we don't even need to know"}, {"start": 595.84, "end": 600.72, "text": " the details, they just have some additional hyperparameters and what it turns out that this"}, {"start": 600.72, "end": 606.72, "text": " STFT, so this short time Fourier transform is just a special case of that thing, and so they can"}, {"start": 606.72, "end": 612.4, "text": " initialize it so that initially it behaves as STFT, but then eventually those parameters are"}, {"start": 612.4, "end": 617.9200000000001, "text": " learned and that showed to improve the results of this ES ResNext. So that's all the prerequisite"}, {"start": 617.9200000000001, "end": 622.0, "text": " knowledge you need, again, don't worry if you don't understand every single detail here,"}, {"start": 622.0, "end": 626.96, "text": " you just have some 2D Spectrogram and this ResNet model and that's how you, that's how this ES"}, {"start": 626.96, "end": 634.24, "text": " ResNext works like. Okay, now let's jump to the whole pipeline, this is how the final model looks"}, {"start": 634.24, "end": 638.88, "text": " like, and as you can see this is your standard clip, so this part here is your standard clip,"}, {"start": 639.44, "end": 644.48, "text": " and they additionally add, so this is the ES ResNext, you feed in audio, you find the"}, {"start": 644.48, "end": 651.04, "text": " representations, and as you can see here, this is like pretty much the same idea from clip, you just"}, {"start": 651.04, "end": 655.68, "text": " this time because you have the additional modality, you'll create these matrices of"}, {"start": 655.68, "end": 662.88, "text": " cosine similarities between audio and image, so this one between audio and text, that's this one,"}, {"start": 662.88, "end": 668.3199999999999, "text": " and again you have text and image, that's your classical clip, and that's this one, and you'll"}, {"start": 668.3199999999999, "end": 674.4, "text": " basically be again maximizing the main diagonal cosine similarities, you'll be suppressing the"}, {"start": 674.4, "end": 679.5999999999999, "text": " after the main diagonal elements, and that's how you're going to train this novel audio clip model."}, {"start": 679.6, "end": 688.5600000000001, "text": " Okay, now let's see what else, so their procedure of training this thing is fairly complex, there are"}, {"start": 688.5600000000001, "end": 694.5600000000001, "text": " many different initializations, different training procedures, so let's briefly go through the data"}, {"start": 694.5600000000001, "end": 700.08, "text": " sets they are using, and they mentioned here, so in this work the clip data set was used"}, {"start": 700.08, "end": 706.5600000000001, "text": " indirectly as a weight initializer of the text and image heads, the clip model, so basically they"}, {"start": 706.56, "end": 712.16, "text": " implicitly used that huge open AIS data set by reusing their weights, that's what they want to"}, {"start": 712.16, "end": 718.0799999999999, "text": " say here, for the purposes of this work the ImageNet data set served as a weight initializer of the"}, {"start": 
718.0799999999999, "end": 725.04, "text": " ES ResNext model, so in that previous paper that introduced this model ES ResNext, they showed that"}, {"start": 725.04, "end": 732.4799999999999, "text": " using ImageNet initialization helps boost the classification performance on sounds after they"}, {"start": 732.48, "end": 737.28, "text": " additionally train it on this thing called audio set, which we'll soon see. So aside from those two"}, {"start": 737.28, "end": 743.12, "text": " data sets which are used implicitly, they used this thing called audio set explicitly, so that's"}, {"start": 743.12, "end": 750.4, "text": " a data set that has 1.8 million, so it's a fairly big data set for the audio research community,"}, {"start": 750.4, "end": 758.0, "text": " and so it has 527 classes, whatnot, and each sample is a snippet up to 10 seconds long from"}, {"start": 758.0, "end": 762.48, "text": " a YouTube video defined by your corresponding ID and timings, but basically you usually ditch these"}, {"start": 763.04, "end": 767.68, "text": " components, so the vision component, you just focus on the clips and on the labels of those clips,"}, {"start": 768.32, "end": 775.84, "text": " but they say here, for this work we acquired video frames in addition to audio tracks, thus the audio"}, {"start": 775.84, "end": 782.48, "text": " set data set became the glue between the vanilla clip framework and our tri-modal extension on top"}, {"start": 782.48, "end": 787.36, "text": " of it. During the training part, 10 equally distant frames were extracted from our video recording,"}, {"start": 787.36, "end": 793.2, "text": " and one of them was picked randomly and passed through the audio clip model, so let's see how"}, {"start": 793.2, "end": 799.84, "text": " that works. So drawing a video here, something like this, I'm looking from it sideways, and these are"}, {"start": 799.84, "end": 806.8000000000001, "text": " frames, okay, and you'll have like spatial extent of the frame, whatnot, and that's not that important."}, {"start": 806.8000000000001, "end": 813.9200000000001, "text": " Okay, so again, associated with this video, we'll have an audio, so we'll have like an audio here,"}, {"start": 813.92, "end": 820.4, "text": " so that will be some like natural signal, something like this, and again, we'll have a label, so I"}, {"start": 820.4, "end": 827.04, "text": " don't know, like cat or whatnot, and so how they perform, how they create a data set that will be"}, {"start": 827.04, "end": 831.8399999999999, "text": " used to jointly train all of the three modalities is the following. As they mentioned, they pick"}, {"start": 831.8399999999999, "end": 838.8, "text": " randomly one of these like frames, maybe this one, and so you have a 2D image there, so they are going"}, {"start": 838.8, "end": 844.7199999999999, "text": " to convert this into a 2D spectrogram, okay, so that's again a 2D image, and you have your label"}, {"start": 844.7199999999999, "end": 854.16, "text": " here, so those three will be fed into this system and used to jointly train it, and we'll see there"}, {"start": 854.16, "end": 859.04, "text": " are a bit more details than that, but like that's for start. 
They additionally have these two"}, {"start": 859.04, "end": 866.7199999999999, "text": " datasets called Urban Sound 8K and ESC50, which is Environment Sound Classification 50 dataset,"}, {"start": 866.72, "end": 871.6, "text": " and they use those in two ways, so they mentioned here on this dataset, we perform zero-shot"}, {"start": 871.6, "end": 878.48, "text": " inference using the audio clip model trained on audio set, so they use it to evaluate the models,"}, {"start": 878.48, "end": 885.6800000000001, "text": " but they also use those datasets to fine-tune the audio head, as we'll soon see. Aside from the"}, {"start": 885.6800000000001, "end": 890.88, "text": " datasets, they also use these data augmentations for the reason that's stated here. In comparison"}, {"start": 890.88, "end": 897.68, "text": " to the composite clip dataset, the audio datasets provide two orders of magnitude, less training"}, {"start": 897.68, "end": 903.36, "text": " samples, which makes overfitting an issue, so they have this thing called time scaling, where, so this"}, {"start": 903.36, "end": 910.24, "text": " is your like audio signal, and you have some like signal inside of it, and what they do is basically"}, {"start": 910.24, "end": 915.04, "text": " they either compress it, so you'll have a signal here, or they decompress it, so you'll have something"}, {"start": 915.04, "end": 921.68, "text": " like this, and they feed that into the ES Resnext model. They also use this thing called time"}, {"start": 921.68, "end": 928.16, "text": " inversion that's equivalent to flipping your image inside computer vision tasks, so that means, like,"}, {"start": 928.16, "end": 936.24, "text": " simply you're going to, if we have this signal here, and this is some like symmetry axis,"}, {"start": 936.24, "end": 943.04, "text": " you just basically rotate by 180 to get the signal, and then you feed that into ES Resnext."}, {"start": 943.04, "end": 950.24, "text": " They also have random crops and padding, because they actually need to have these signals of"}, {"start": 950.24, "end": 955.76, "text": " appropriate length, so that they can be fed into the ES Resnext model, so sometimes they'll have to,"}, {"start": 955.76, "end": 963.12, "text": " if this is the, for example, if this is the target like length, that means they'll have to kind of"}, {"start": 963.12, "end": 969.04, "text": " pad this part here, they didn't say how they pad it, I guess it's zeros, or something, or if we have"}, {"start": 969.04, "end": 975.4399999999999, "text": " a longer signal, then you'll just crop it in order to fit into this length of the signal."}, {"start": 975.4399999999999, "end": 981.8399999999999, "text": " So, and yeah, finally they have this random noise, they basically just have additive white gaussian"}, {"start": 981.8399999999999, "end": 987.5999999999999, "text": " noise applied, and that's similar to when you do photometric augmentations when you're training"}, {"start": 987.5999999999999, "end": 994.64, "text": " your computer vision models, so that's an equivalent type of augmentation. How they train this thing is"}, {"start": 994.64, "end": 1000.3199999999999, "text": " the following. 
While the clip model was already pre-trained on text image pairs, we decided to"}, {"start": 1000.3199999999999, "end": 1005.76, "text": " perform an extended audio set pre-training of the audio head first, as it improved performance of"}, {"start": 1005.76, "end": 1011.4399999999999, "text": " the base ES Resnext model, and then to continue training in a tri-model setting, combining it"}, {"start": 1011.4399999999999, "end": 1016.24, "text": " with two other heads. Here, the whole audio clip model was trained jointly on the audio set data"}, {"start": 1016.24, "end": 1021.04, "text": " set, using audio snippets, the corresponding video frames, and the assigned textual labels."}, {"start": 1021.04, "end": 1026.8799999999999, "text": " The whole training pipeline roughly consists out of three parts. So the first part is, you have clip,"}, {"start": 1026.8799999999999, "end": 1033.04, "text": " and it's already pre-trained by the OpenAI team, and so that's just taken as is. The second part is"}, {"start": 1033.04, "end": 1039.68, "text": " your ES Resnext, which is first initialized via these ImageNet weights, and then they additionally"}, {"start": 1039.68, "end": 1045.36, "text": " pre-train it using this audio set data set. The reason being is ImageNet weights are obviously"}, {"start": 1045.36, "end": 1050.8799999999999, "text": " more suited towards, like, understanding images, where they want to apply this model for audio,"}, {"start": 1050.88, "end": 1055.2800000000002, "text": " so that's why they have this second part here. Finally, and they mentioned it here,"}, {"start": 1056.0800000000002, "end": 1061.8400000000001, "text": " parameters of the two other sub-networks, namely text and image head, were frozen during the"}, {"start": 1061.8400000000001, "end": 1066.88, "text": " cooperative pre-training of the audio encoding head, thus these heads served as teachers in a"}, {"start": 1066.88, "end": 1072.4, "text": " multi-modal knowledge distillation setup. So that means the third component is, again, they're"}, {"start": 1072.4, "end": 1079.2, "text": " training on this audio set data set, but they this time do it jointly. So here they had a standalone"}, {"start": 1079.2, "end": 1084.0800000000002, "text": " training of ES Resnext, here they do it jointly. So when I say jointly, let me just go back here"}, {"start": 1084.0800000000002, "end": 1090.96, "text": " to the, to that drawing. Okay, so what I do is they freeze these, so they freeze these weights here,"}, {"start": 1090.96, "end": 1097.3600000000001, "text": " they freeze these weights here, and now they just back prop through this model, and they jointly"}, {"start": 1097.3600000000001, "end": 1102.4, "text": " train this whole system, but they just update the weights of the audio head. So that's the third"}, {"start": 1102.4, "end": 1109.1200000000001, "text": " part of how they train ES Resnext. So as I said, multiple, multiple details there, but yeah. And"}, {"start": 1109.1200000000001, "end": 1113.92, "text": " the last part is they actually train the whole system jointly, and they update all of the three"}, {"start": 1113.92, "end": 1119.3600000000001, "text": " models, as they mentioned here. Here all three modality dedicated heads were tuned together,"}, {"start": 1119.3600000000001, "end": 1124.48, "text": " making the resulting model take into account the distribution of images and textual descriptions,"}, {"start": 1124.48, "end": 1130.16, "text": " in addition to distribution of audio samples. 
So the reason they do this is because Clip was"}, {"start": 1130.16, "end": 1137.2, "text": " obviously trained on this OpenAI dataset, and here they want to evaluate this audio clip model on"}, {"start": 1137.2, "end": 1143.0400000000002, "text": " audio set, and so that's the reason why they additionally fine-tune all of the three heads"}, {"start": 1143.0400000000002, "end": 1149.1200000000001, "text": " onto audio set, because that kind of boosts the performance on the downstream tasks. And"}, {"start": 1150.5600000000002, "end": 1156.88, "text": " it makes sense, I guess. Okay, finally, let's see the results. First things first, they kind of show"}, {"start": 1156.88, "end": 1164.72, "text": " that the original pre-training on audio dataset of this ES Resnext model, so when they have like"}, {"start": 1164.72, "end": 1170.88, "text": " more epochs, that the performance gets better on the audio set, and it marginally gets better on"}, {"start": 1170.88, "end": 1177.44, "text": " these two other data sets. So that's kind of not that informative. Let's see some other tables they"}, {"start": 1177.44, "end": 1183.1200000000001, "text": " have here. So this is an important one. So they evaluate audio clip in two settings. First is audio"}, {"start": 1183.12, "end": 1188.56, "text": " head only, and then the full model. So audio head only is that when you only train the audio head"}, {"start": 1188.56, "end": 1193.76, "text": " while the other two, the text and the image heads, were frozen, and the full model is where you"}, {"start": 1193.76, "end": 1198.9599999999998, "text": " jointly train all of the three heads. And we can see on ImageNet, the performance actually goes down,"}, {"start": 1198.9599999999998, "end": 1205.52, "text": " which makes sense if you think about it, because the ES Resnext was initialized with ImageNet"}, {"start": 1205.52, "end": 1212.2399999999998, "text": " weights, and the more you fine-tune it jointly with the other heads, the more the model forgets"}, {"start": 1212.24, "end": 1219.44, "text": " those initial ImageNet weights, and the worse its performance gets on the ImageNet dataset. And again,"}, {"start": 1219.44, "end": 1225.1200000000001, "text": " hopefully you understand the setup here, so let me just kind of go back here. So what happens here"}, {"start": 1225.1200000000001, "end": 1230.32, "text": " is you have this setup, you take your ImageNet datasets, you'll have thousand labels, you'll"}, {"start": 1230.32, "end": 1236.0, "text": " encode them into these thousand embeddings, and then how you find the accuracy, you just take"}, {"start": 1236.0, "end": 1241.6, "text": " ImageNet images, and you feed them, and you apply this process to find the highest cosine similarity."}, {"start": 1241.6, "end": 1247.9199999999998, "text": " That's how you're going to do the classification on ImageNet. Let's get back to the table."}, {"start": 1248.48, "end": 1254.8799999999999, "text": " They also show results on audio set. Here we can have, we can classify images, we can also just embed"}, {"start": 1254.8799999999999, "end": 1262.48, "text": " audio using ES Resnext instead of image, and again, they show here that obviously the performance"}, {"start": 1262.48, "end": 1267.04, "text": " gets better, which makes sense since they are training jointly on this very same audio set"}, {"start": 1267.04, "end": 1274.48, "text": " dataset. 
They additionally report state-of-the-art results on this UrbanSound 8K and ESC50 datasets"}, {"start": 1274.48, "end": 1282.24, "text": " in the full model setup, which is cool. Not much new information here. They again have, like,"}, {"start": 1282.24, "end": 1287.68, "text": " these are the state-of-the-art results, and they report this zero shot. I kind of hate these tables,"}, {"start": 1288.32, "end": 1294.24, "text": " I love they had some lines here, but yeah. So this row here shows you the zero shot performance of"}, {"start": 1294.24, "end": 1300.32, "text": " the audio clip, and here in this row they actually pre-train on these datasets, and that's where they"}, {"start": 1300.32, "end": 1306.32, "text": " get the solar results, okay? The last thing where they report their results are on querying, and"}, {"start": 1306.32, "end": 1310.64, "text": " basically because we now have three modalities, that means you can query in multiple ways. You can"}, {"start": 1310.64, "end": 1316.8, "text": " query with an image, you can query audio, or you can query with text, you can query images, or"}, {"start": 1316.8, "end": 1322.48, "text": " via images, you can query audio. So all of the different directions are possible, and they test"}, {"start": 1322.48, "end": 1330.4, "text": " all of those, and again, let's see the results. So if you query with text for images on ImageNet,"}, {"start": 1330.4, "end": 1336.0, "text": " the performance actually drops if we go to the jointly trained model, which again makes sense,"}, {"start": 1336.72, "end": 1342.16, "text": " that's consistent with the previous results from the previous table, but again, on the audio set,"}, {"start": 1342.16, "end": 1349.52, "text": " they actually do improve on all of these different, like, combinations. So querying with text,"}, {"start": 1349.52, "end": 1354.48, "text": " where the result is an image, querying with text where the result is audio, they also query with"}, {"start": 1354.48, "end": 1360.72, "text": " audio where the result is image, and vice versa. And again, they have some small regression here"}, {"start": 1360.72, "end": 1367.68, "text": " on the ESC50, so basically the only dataset where they have improvements, but it's a big one, is"}, {"start": 1367.68, "end": 1375.2, "text": " this audio set, and all of the other results either regress or stay pretty much the same as prior to"}, {"start": 1375.2, "end": 1380.0800000000002, "text": " this joint training. So these querying capabilities are super important, because that's how we search"}, {"start": 1380.0800000000002, "end": 1385.28, "text": " for information. So if you think about Google, you basically have a textual query and retrieve"}, {"start": 1385.28, "end": 1389.52, "text": " bunch of different information that can be of various modalities. So it can be an image,"}, {"start": 1389.52, "end": 1394.32, "text": " it can be a video, it can be text, whatever. So Google used to be a rule-based system,"}, {"start": 1394.32, "end": 1398.96, "text": " and in the meanwhile, like a couple of years ago when BERT Transformer was published,"}, {"start": 1398.96, "end": 1404.72, "text": " Google started using BERT in production. So when you type something in Google, you have neural"}, {"start": 1404.72, "end": 1410.96, "text": " networks being used to retrieve certain results of your queries. And that's something called neural"}, {"start": 1410.96, "end": 1416.96, "text": " search. 
And when I'm added, let me just mention GINA-AI, which is an open source framework,"}, {"start": 1416.96, "end": 1421.6000000000001, "text": " which is super cool, because compared to Google, which is a general purpose framework, which you"}, {"start": 1421.6, "end": 1429.36, "text": " obviously cannot integrate into your own projects, you can use GINA-AI to kind of integrate this"}, {"start": 1429.36, "end": 1435.6, "text": " search functionality into your project, where they're using neural search. And as soon as models"}, {"start": 1435.6, "end": 1441.6, "text": " such as AudioClip are kind of published, GINA-AI developers integrate all of those novel research"}, {"start": 1441.6, "end": 1448.24, "text": " into their framework, and they make it easy to extend their framework with these novel models."}, {"start": 1448.24, "end": 1453.76, "text": " So do check them out. I'm a huge fan of open source. If you've been following my channel for"}, {"start": 1453.76, "end": 1457.92, "text": " quite some time, you know that I'm bullish on open source. You can check out some of my previous"}, {"start": 1457.92, "end": 1463.36, "text": " projects on GitHub. If you're into that, I have a bunch of different deep learning projects"}, {"start": 1463.36, "end": 1469.04, "text": " implemented there from scratch. So do check those projects out. Do check GINA-AI. And until next"}, {"start": 1469.04, "end": 1483.76, "text": " time, subscribe, share out this video if you liked it, and bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=YH319yyeoVw
Focal Transformer: Focal Self-attention for Local-Global Interactions in Vision Transformers
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover a new paper coming from Microsoft: "Focal Self-attention for Local-Global Interactions in Vision Transformers" where they introduce a new transformer layer called focal attention. The main idea is to reduce the complexity but preserve the long-range dependencies. They achieve this by attending to the nearby tokens in a fine-grained manner and to the tokens that are further away they attend their coarsened representations. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2107.00641 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Main idea of the paper: focal self-attention 04:55 Overview of Focal Transformer architecture 08:15 Focal Self-Attention layer 12:30 Computational complexity, overlapping regions 15:30 SOTA results but with a disclaimer 17:30 Ablations 19:50 Outro, Focal Transformer is slower than Swin ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #focaltransformer #microsoft #transformer
What's up? In this video I'm covering Focal Self-attention for Local-Global Interactions in Vision Transformers by the Microsoft Research and Microsoft Cloud and AI teams. So they basically introduce this novel layer called focal self-attention and this novel model called the Focal Transformer. Let's see what it's all about. Basically, if you focus on this diagram here, let me motivate the introduction of this focal self-attention. First things first, on the left-hand side we have a classical vision transformer, so that's the DeiT model from Facebook AI. And if you haven't watched my video on the vision transformer, I do suggest you go ahead and watch it first, but I'll just briefly explain how it works here. So the idea is, because you have a transformer, and transformers were invented in the context of NLP where you feed in a sequence of tokens as the input, we somehow need to convert this image into that sequence of tokens. And how we do that is basically we split the image into patches. So something like this: we'll have a patch here, we have a patch here, etc. So the whole image will basically be built from these patches. And here you can see a query patch, so this is one particular patch. And hopefully you're familiar with the self-attention terminology, but this is a query patch. And what we can see on these attention maps here is that the vision transformer has both local attention as well as global attention. Let me explain what I mean by that. If you focus on this attention map here, what it basically says is the following: as you can see this blob here, that means that this query patch is basically attending to these nearby patches a lot. And that's just one particular attention head here from the first layer of this DeiT transformer. And if we focus on the second head or third head, we can see that this query patch in those other heads also attends to certain other areas. Like if you focus on this one, you can see that that part is these patches here, and it's also focusing pretty much on the whole cat image. So it's going to attend to the whole cat image. And because those attention patterns were learned, compared to CNNs where we have the bias of locality built in, it's obvious that the vision transformer found it useful to also attend to those parts of the image that are further away from the query patch, which tells us something about the usefulness of global attention, right? And so the bad thing about this whole thing with the vision transformer, and transformers in general, is that they have quadratic complexity. So that means you basically have N squared complexity, where N is the number of tokens, and here that's the number of patches in the image, and that's expensive. So basically what this novel focal self-attention does is the following. The idea is: let's attend to nearby patches with a fine-grained structure. So as you can see here, we'll attend to these closer patches at a fine resolution, but once we start getting further and further away from the query patch, let's start attending just to these coarse representations. So what they'll end up doing is they'll basically take these four patches here and summarize those into this one big patch. And the same thing here, so here we'll have, I don't know, maybe four by four patches summarized into a single patch.
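For reference, a minimal sketch of the ViT-style patchification step being described: the image becomes a sequence of flattened patch tokens, and full self-attention over those tokens is what costs N squared. The 16-pixel patch size and 224x224 resolution are just illustrative choices.

```python
import torch

def patchify(img, patch=16):
    # Split an image tensor (C, H, W) into a sequence of flattened patches,
    # the way a ViT-style model turns an image into tokens.
    c, h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    x = img.unfold(1, patch, patch).unfold(2, patch, patch)   # (C, H/p, W/p, p, p)
    x = x.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)
    return x                                                  # (num_tokens, token_dim)

tokens = patchify(torch.randn(3, 224, 224))
print(tokens.shape)  # torch.Size([196, 768]) -> 14x14 tokens, full attention is 196^2 pairs
```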
And so that means that we keep both the local as well as the global connections, and we also reduce the computation, obviously, because we will have a smaller number of tokens to attend to. So that's the main idea of this paper, basically. And as you can see here, we're going from a fine to a coarse structure as we go from the closer patches towards the patches ever further away from the query patch. Okay, that's pretty much it. So now let's start digging into the nitty-gritty details of how all of this works, and let's see some results. Okay, so just reiterating here: in this paper, we present focal self-attention, a new mechanism that incorporates both fine-grained local and coarse-grained global interactions. In this new mechanism, each token attends its closest surrounding tokens at fine granularity and the tokens far away at coarse granularity, and thus can capture both short- and long-range visual dependencies efficiently and effectively. They additionally report that they achieve state-of-the-art results on a couple of different vision benchmarks, such as image classification, object detection and semantic segmentation, and they report some numbers here as well as here. So this is the mAP metric for object detection, sorry. And yeah, I'll get back to these results a bit later and tell you more about what I have to say about them. Okay, let's now start investigating the high-level architecture. So if we focus here, the main differences you'll notice between this architecture and the vision transformer are these parts here, these denominators. So as I told you, in order to parse this image with a transformer, we'll need to form these patches, right? So we have this patch embedding layer. The difference between the vision transformer and this Focal Transformer is the following. Basically, what they do in order to create these patches in this example is they take four by four neighboring pixels, and that's one patch, so four by four pixels will be a single patch, and that's why you see four here. Because height and width just represent how many pixels you have in the height and width dimensions, obviously, divide by four and you get the number of tokens. And D is the dimension of that token after we embed it using this linear, I think they're actually using a convolutional, projection layer. So now, the big difference between the Focal Transformer and the standard vision transformer: there are two differences. First, they're using this focal self-attention instead of your standard multi-head self-attention. And the second thing is, as you can see, they're reducing the number of these patches, the number of tokens. So after you apply this first layer N1 times, what they then do is reduce the number of patches here. So basically, instead of having this one patch, what you'll end up having is this one, this will be like a single token. Okay, and then going to the next layer here, we'll have even bigger tokens, okay, so one token will basically span this amount of the image. And, as you can notice here, D goes to 2D, goes to 4D, goes to 8D, and that's a very similar logic to what is applied in all convolutional neural networks.
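A minimal sketch of that token-reduction step between stages, assuming a Swin-style 2x2 patch merging (concatenate four neighbouring tokens, project to twice the channels); the actual Focal Transformer layer may differ in details such as normalization.

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Merge each 2x2 group of tokens: the spatial grid halves while the
    channel dimension doubles (D -> 2D -> 4D -> 8D across stages)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(4 * dim, 2 * dim)

    def forward(self, x):               # x: (B, H, W, D) token grid
        b, h, w, d = x.shape
        x = x.reshape(b, h // 2, 2, w // 2, 2, d)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, h // 2, w // 2, 4 * d)
        return self.proj(x)             # (B, H/2, W/2, 2D)

out = PatchMerging(96)(torch.randn(2, 56, 56, 96))
print(out.shape)  # torch.Size([2, 28, 28, 192])
```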
So the idea is, as you're reducing the spatial extent of your representation, you're increasing the number of channels. And basically what that means is we are seeing a huge merge between these transformer and CNN architectures, and it's not even clear anymore what a transformer means and what a CNN means. We also saw those MLP mixers, so, yeah, there is a lot of debate about what counts as a convolutional neural network now and what counts as a transformer. So that's the high-level overview of how this works. Everything else remains the same as in the vision transformer, so we just have this novel layer, and we have this novel logic where we are reducing the number of tokens as we go deeper and deeper. As for this small diagram here, let me just tell you briefly about it. Basically, they show that for a given number of tokens, the receptive field of this novel layer is larger compared to regular self-attention. And the reason, as we saw, is that as we are getting further away from the query token, we're attending to coarser and coarser tokens, so we can cover more of the image and, for the same number of attended tokens, have a huge receptive field. Okay, now let me focus on the actual meat of this paper, and that's this focal self-attention layer. So let me digest how this thing goes together and how they implement this focal self-attention. The first thing you'll notice is, imagine this is your input image and these are just patches, we have 20 times 20 patches, so that's 20 times 20 tokens. The first thing they do is, this blue blob you can see here are our query tokens, and what they do is group them into this window, and every query inside that window will attend to the same group of keys and values. And let me explain what I mean by that. Imagine if we didn't group them in this window fashion, what we would have to do is the following. This token's neighborhood would actually be something like, maybe, I don't know, like this, and this token's neighborhood here would be maybe something like this. And that means we'd have a computational overhead. And so, basically because of some computational conveniences, they group these into a window so that they can share the neighborhood. And now I'll show you what I mean by sharing the neighborhood. So first things first, we have these three levels: level one, level two, level three, and those correspond to that fine-grained to coarse-grained structure. So let me just go back here. Level one was these fine-grained patches, then we have these more coarse-grained ones, and finally these are the coarsest patches. Okay, and let me go back here. So the first thing you'll notice is we're going to pool these tokens into coarser representations, as I said. That means that here in level two, we're going to take these two by two patches and convert them into a single one. So that's the reason we have, so one, two, three, four, five, six, that's why we have six by six here after doing this coarsening. And finally here, because we are on level three, we're going to convert these four by four patches into a single representation here, so that's why we have five by five.
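To illustrate the pooling into levels, here is a very rough sketch that builds the key/value tokens for one query window: fine-grained tokens from a local region plus average-pooled tokens of the surrounding map at coarser levels. The window placement, the pooling factors and the use of the whole 20x20 map for the coarse levels are simplifying assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def focal_key_value_tokens(feat, window=4, pool_factors=(2, 4)):
    # feat: (B, D, H, W) token feature map
    b, d, h, w = feat.shape
    levels = [feat[:, :, :window, :window]]          # level 1: a local fine-grained window
    for f in pool_factors:                           # levels 2, 3, ...: pooled map
        levels.append(F.avg_pool2d(feat, kernel_size=f))
    # flatten each level to (B, num_tokens, D) and concatenate along the token axis
    tokens = [lvl.flatten(2).transpose(1, 2) for lvl in levels]
    return torch.cat(tokens, dim=1)

kv = focal_key_value_tokens(torch.randn(1, 96, 20, 20))
print(kv.shape)  # 4*4 fine + 10*10 + 5*5 pooled tokens -> torch.Size([1, 141, 96])
```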
So now you can see that on this level one, what we're doing is we are attending this neighborhood here - OK, something like this, let me just draw it. And then on the second level, as you can see here, we're attending this neighborhood, but we have coarsened representations. And finally, the last level - let me just draw this - the last level will attend the whole feature map, but the representations will be much coarser. So that's the idea. And out of those, what we do is we flatten those and we concatenate those, and then we apply value projection matrices and key projection matrices to get the values and keys. And so now let me show you what I mean by these windowed queries. As you can see here, this same query attends to all of these keys and values, as well as this one. So all of the queries will be attending to the same keys and values, and that's the whole point here - that's how they save some computation. Otherwise, as I already explained, if you had them separated - if you had only this one, then let me just delete this for a second. OK, whoops. Let me just delete all of these and let's focus on a single token. So if we had only this token, that means that its neighborhood would actually be something like this, and then we'd have the coarser one, et cetera. Whereas for this one here, we'd have a totally different neighborhood. And the fact that these have different neighborhoods means that you'd have to extract all of these for each specific token, and that makes it more computationally intensive. And that's why they bucket them into this window. OK, that's pretty much it. Then once you have the queries, keys and values, you do your regular self-attention stuff: out of queries and keys you form the raw scores, then you apply softmax, you get the attention coefficients, and you use those to aggregate the value vectors to form novel representations of these tokens here. So, again, these tokens here will have novel representations formed after we apply this layer once. OK, that's pretty much it. Hopefully that was understandable. Let me now jump further and focus on this part here. So they say: to perform focal self-attention, we need to first extract the surrounding tokens for each query token in the feature map. So that's the thing I just mentioned - that's the motivation behind grouping these query tokens into these windows. And I think the Swin transformer, or some of the other previous work, showed that this kind of helps, and this paper is just copying that part. Then: note that the strict version of focal self-attention following figure one requires excluding the overlapping regions across different levels. In our model, we intentionally keep them in order to capture the pyramid information for the overlapping regions. So what they say there is the following. Let me get back to the image here. Looking at this image, it seems like they don't have any overlaps. That would mean they only apply the fine-grained structure here, and then for the coarser-grained structure it seems like they're only attending this region here - let me just shade it, so this part here. But in reality, they are also using these overlapping regions for the coarser scales. And, yeah, they just mention that they do that.
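Going back to the attention computation itself, it's just standard scaled dot-product attention over those shared keys and values. A minimal sketch, with random projection matrices standing in for the learned linear layers:

```python
import torch

def windowed_attention(query_window, kv_tokens, d_model=96):
    """All queries inside one window attend to the same shared key/value tokens.

    query_window: (B, Nq, C)  - tokens inside one query window
    kv_tokens:    (B, Nkv, C) - flattened and concatenated multi-level tokens
    Illustrative only; in a real model Wq, Wk, Wv would be learned nn.Linear layers.
    """
    Wq = torch.randn(d_model, d_model) / d_model ** 0.5
    Wk = torch.randn(d_model, d_model) / d_model ** 0.5
    Wv = torch.randn(d_model, d_model) / d_model ** 0.5

    q = query_window @ Wq                                  # (B, Nq, C)
    k = kv_tokens @ Wk                                     # (B, Nkv, C)
    v = kv_tokens @ Wv                                     # (B, Nkv, C)

    scores = q @ k.transpose(-2, -1) / d_model ** 0.5      # raw attention scores
    attn = scores.softmax(dim=-1)                          # attention coefficients
    return attn @ v                                        # new token representations

out = windowed_attention(torch.randn(1, 16, 96), torch.randn(1, 125, 96))
print(out.shape)  # torch.Size([1, 16, 96])
```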
And I didn't see any ablations on this, but it seems that this overlapping information on multiple scales helps. And I guess it would also be more computationally intensive if they wanted to exclude these regions, so they just left it like this. OK, that's that part. Additionally, they have this learnable relative position bias that was introduced in a previous paper called Swin transformer, and this bias term just helps boost the performance a bit further. OK, let me now focus on this one - this is the whole point: they have less computation. So M times N is your feature map dimension, which means this is basically the number of tokens, and as we can see, the complexity depends on it, with a pretty much constant factor here. L is the number of levels - that was three in the examples we just saw - and they also have these s_r terms, one per level. So let me show you what these are. As you can see here, that s_r is eight for the first level, and we can see why it's eight: that's basically the size of the pooled region at that level. We have six by six here, that's why we have six here, and we have five by five here, that's why we have five here. So those are pretty much constants, and the whole point is that this part is constant once you set up the parameters of your architecture, and then you only depend linearly on the number of tokens, though this constant may be big. So, yeah, that's also worth mentioning, and we'll soon see the number of flops they have and the time it takes to do an inference through this architecture. OK, that's pretty much it. Let me now focus on the actual results; let's see what they got. So they have this Focal-Tiny, Focal-Small and Focal-Base - they tested three of these models, and this one is obviously the biggest one while the others are smaller. And what they report are state-of-the-art results, but as we'll see, it's not that clear cut. So if we focus on this number here, it's 83.8 and it's a bit higher than Swin-Base. But the thing is, they also have more parameters - not by a significant amount, but more - and they also have more flops. And as they show in the appendix, it takes more time for a similar amount of flops: it takes more time to compute the forward pass through Focal-Base compared to Swin-Base. So that's something to keep in mind. If they were to make those the same, I'm not sure they'd have even this big of an increase in performance. So I'd say this is more on par than state of the art, if you ask me. But, yeah, if you don't care about these other factors, then you do have a bit better performance. And they show the same kind of results across different tasks. So this is on COCO - this is basically object detection - and they show some improvements. Again, I'm not sure about the underlying flops, the number of parameters and the inference speed, but if you don't care about those, they do achieve state-of-the-art results there as well. Similarly here, a bunch of different benchmarks - I won't be focusing on each of those in particular. And here they show results on semantic segmentation; again, they're a bit better compared to the baselines.
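By the way, to put that earlier complexity claim into numbers, here is a rough back-of-the-envelope estimate of my own (not the paper's exact flop formula): each query attends roughly the sum of the squared level sizes, which is a constant once the architecture is fixed, so the cost grows linearly with the number of tokens rather than quadratically.

```python
def attention_cost(num_tokens, d, level_sizes=None):
    """Crude flop estimate for one attention layer (illustrative sketch only).

    Full self-attention: every token attends every other token -> quadratic in tokens.
    Focal-style attention: every token attends about sum_l (s_r^l)^2 pooled tokens,
    which is constant for a fixed architecture -> linear in the number of tokens.
    """
    if level_sizes is None:                       # full global attention
        attended = num_tokens
    else:                                         # e.g. (8, 6, 5) from the figure
        attended = sum(s * s for s in level_sizes)
    return num_tokens * attended * d

M, N, d = 56, 56, 96                              # 224x224 image with 4x4 patches
print(attention_cost(M * N, d))                   # full attention: ~9.4e8
print(attention_cost(M * N, d, (8, 6, 5)))        # focal-style:    ~3.8e7
```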
And yeah, it's kind of hard to compare these, because they don't give the numbers for the parameters and flops there. But yeah, I don't want to focus too much on the results. Let me show you some ablations here. In the final model they actually used two levels, not three levels as we saw in those previous diagrams. And what they tested is increasing this window size from seven to 14, and that bumped up the performance, as you can see here, but the flops also go up, so it's obviously a trade-off again. And what they show here is the importance of using both those local and global interactions. So first things first, what they show is this window attention, which basically means the following. So window means - if I go back to the diagram up there, give me a second - OK, this part. That means that these queries are only attending to the keys and values that are inside of this region. OK, let me just change the color. So they're only attending this part; they're not attending the other keys and values. And that basically means we don't have a nice communication across different parts of the image, and then obviously the performance drops by a lot. Let me get back here. OK. And once they add the local part and then the global part, you can see it increases, and then local plus global - so using both the fine-grained structure, that's the local part, and the global one, where you're using those windows of size seven or 14, whatever. When you combine both of those, you get the computational efficiency, plus you get to ingest information from the whole image. That's something we know is useful, because that's why transformers are performing better than CNNs. OK, what they additionally looked at here: the Swin transformer had something called window shifting, and it actually hurts the performance of the Swin transformer a lot if they remove that shifting. But for this focal transformer, it's actually not necessary, so they just remove it. Here are some depth ablations - nothing super interesting. They again show they are better than Swin: even when they reduce the depth of the focal model, as you can see here, it is still better than the Swin that has a deeper stage at this particular point of the architecture. OK. And here again, they are on par. So, yeah, those are just some additional claims that they are still state of the art on these different tasks. OK, that's pretty much it. Let me just do two more things here. So what they say here is: our observation is that adding long-range tokens can bring more relative improvement for image classification than object detection, and vice versa for local tokens. We suspect that dense predictions like object detection rely more on fine-grained local context, while image classification favors more the global information. So this is an interesting statement and it kind of makes sense, because if you want to classify the object in the image, you want to understand the whole context, whereas when you're doing object detection, it's kind of enough to understand the fine-grained details.
But this is also a double-edged sword, because it could also mean that the neural network learns how to exploit those spurious signals that we are well aware of. Sometimes neural networks exploit spurious signals like texture in order to infer what the object in the image is, and we don't want that. We want our neural networks to understand the actual content of the image itself and not those spurious signals. The second thing I want to mention here is they say: though extensive experimental results show that our focal self-attention can significantly boost the performance on both image classification and dense prediction tasks, it does introduce extra computational and memory cost, since each query token needs to attend to the coarsened global tokens in addition to the local tokens. So that's something I already stressed a couple of times. I don't think this model significantly improves upon the prior state of the art, and again, they did acknowledge it here: you have to trade off computation - so flops, and also memory footprint and speed - if you want to get this small and, if you ask me, not significant boost in performance. And going back to the appendix, they mention somewhere here: accordingly, our focal transformer has slower running speed, though it has similar flops as Swin transformers. So that's very important. This is mainly due to two reasons. One, we introduce the global coarse-grained attention, which introduces extra computations. And two, though we conduct our focal attention on the windows, we still observe that extracting the surrounding tokens around local windows and the global tokens across the feature map is time-consuming. So just keep this in mind: if you want to actually run this in production one day, those extra dimensions - computation, speed and memory - do matter a lot. So that's pretty much it. Hopefully you liked this paper. If you did, share it out, subscribe. And until next time, bye bye.
[{"start": 0.0, "end": 10.0, "text": " What's up? In this video I'm covering focal self-attention for local global interactions in vision transformers by the Microsoft Research and Microsoft Cloud and AI teams."}, {"start": 10.0, "end": 17.0, "text": " So they basically introduced this novel layer called focal self-attention and introduced this novel model called focal transformer."}, {"start": 17.0, "end": 19.0, "text": " And let's see what it's all about."}, {"start": 19.0, "end": 25.0, "text": " Basically if you focus on this diagram here, let me kind of motivate the introduction of this focal self-attention."}, {"start": 25.0, "end": 31.0, "text": " First things first, on the left hand side we have a classical vision transformer, so that's date model from Facebook AI."}, {"start": 31.0, "end": 39.0, "text": " And if you haven't watched my video on vision transformer, I do suggest you go ahead and watch it first, but I'll just briefly explain how it works here."}, {"start": 39.0, "end": 47.0, "text": " So the idea is because you have a transformer and transformers were invented in the context of NLP where you introduce like a sequence of tokens as the input."}, {"start": 47.0, "end": 51.0, "text": " So we somehow need to convert this image into that sequence of tokens."}, {"start": 51.0, "end": 55.0, "text": " And so how we do that is basically we split the image into patches."}, {"start": 55.0, "end": 60.0, "text": " So something like this. So we'll have a patch here, we have a patch here, etc."}, {"start": 60.0, "end": 64.0, "text": " So the whole image will be basically built from these patches."}, {"start": 64.0, "end": 68.0, "text": " And here you can see a query patch. So this is one particular patch."}, {"start": 68.0, "end": 74.0, "text": " And obviously hopefully you're familiar with the self-attention like terminology, but this is a query patch."}, {"start": 74.0, "end": 85.0, "text": " And what we can see on these attention maps here is basically that the vision transformer has both local attention as well as global attention."}, {"start": 85.0, "end": 88.0, "text": " So let me like kind of explain what I mean by that."}, {"start": 88.0, "end": 93.0, "text": " So if you focus on this attention map here, so what it basically says is the following."}, {"start": 93.0, "end": 103.0, "text": " That means that as you can see this like blob here, that means that this query patch is basically attending these nearby patches a lot."}, {"start": 103.0, "end": 109.0, "text": " And that's just one particular attention here from the first layer of this of this state transformer."}, {"start": 109.0, "end": 117.0, "text": " And if we focus on the second head or third head, we can see that it also has so this query patch in those other heads also tends certain other areas."}, {"start": 117.0, "end": 122.0, "text": " Like if you focus on this one, you can see that that part is these patches here."}, {"start": 122.0, "end": 126.0, "text": " And it's also focusing pretty much on the whole cat image. So we'll be doing something like this."}, {"start": 126.0, "end": 128.0, "text": " It's going to attend the whole cat image."}, {"start": 128.0, "end": 138.0, "text": " And that means that those are some. 
So because those were learned compared to CNN's where we have like the bias of locality here,"}, {"start": 138.0, "end": 147.0, "text": " it's obvious that vision transformer found it useful to actually also attend those like parts of the image that are further away from the query patch,"}, {"start": 147.0, "end": 150.0, "text": " which tells us something about its usefulness. Right."}, {"start": 150.0, "end": 158.0, "text": " And so the bad thing about this whole thing with with vision transformer and transformers in general is that they are they have quadratic complexity."}, {"start": 158.0, "end": 163.0, "text": " So that means that basically you have N square complexity where N is the number of tokens."}, {"start": 163.0, "end": 167.0, "text": " And here that's the number of patches in the image. And that's expensive."}, {"start": 167.0, "end": 172.0, "text": " So basically what this novel focal self-attention does is the following."}, {"start": 172.0, "end": 178.0, "text": " So the idea is let's attend to nearby patches with a fine grain structure."}, {"start": 178.0, "end": 182.0, "text": " So as you can see here, we'll attend these closer patches with a fine resolution."}, {"start": 182.0, "end": 190.0, "text": " But once we start getting like further and further away from the query patch, let's kind of start just attending to these course representations."}, {"start": 190.0, "end": 198.0, "text": " So what they'll end up doing is they'll they'll basically take these four patches here and they'll summarize those into this one big patch."}, {"start": 198.0, "end": 205.0, "text": " And the same thing here. So here we'll have like I don't know, like maybe four by four patches will be summarized into a single patch."}, {"start": 205.0, "end": 211.0, "text": " And so that means that we both keep the local as well as the global like connections."}, {"start": 211.0, "end": 218.0, "text": " And we also reduce the computation, obviously, because we will have less like less amount of tokens to attend to."}, {"start": 218.0, "end": 222.0, "text": " So that's the main idea of this of this of this paper, basically."}, {"start": 222.0, "end": 234.0, "text": " And as you can see here, so we're going from fine to coarse structure as we are going from closer closer patches towards more like the patches ever further away from from the query patch."}, {"start": 234.0, "end": 236.0, "text": " OK, that's that's pretty much it."}, {"start": 236.0, "end": 240.0, "text": " So now let's start digging into nitty gritty details and how this everything works like."}, {"start": 240.0, "end": 247.0, "text": " And let's see some results. 
OK, so just reiterating here in this paper, we represent focal self-attention,"}, {"start": 247.0, "end": 254.0, "text": " a new mechanism that incorporates both fine grained local and coarse grained global interactions in this new mechanism."}, {"start": 254.0, "end": 261.0, "text": " Each token attends its closest surrounding tokens at fine granularity and the tokens far away at coarse granularity."}, {"start": 261.0, "end": 269.0, "text": " And thus can capture both short and long range visual dependencies efficiently and effectively."}, {"start": 269.0, "end": 274.0, "text": " They additionally report that they achieve state of the art results on a couple of different vision benchmarks,"}, {"start": 274.0, "end": 278.0, "text": " such as image classification, object detection and semantic segmentation."}, {"start": 278.0, "end": 281.0, "text": " And they report some numbers here as well as here."}, {"start": 281.0, "end": 285.0, "text": " So this is map metric for the object classification, object detection."}, {"start": 285.0, "end": 292.0, "text": " Sorry. And yeah, I'll get back to these results a bit later and tell you more about what I have to say about those results."}, {"start": 292.0, "end": 298.0, "text": " OK, let's now start investigating the high level architecture."}, {"start": 298.0, "end": 307.0, "text": " So if we if we focus here, the main difference you'll notice between this architecture and the vision transformer are these parts here."}, {"start": 307.0, "end": 313.0, "text": " So these denominators. So as I told you, we first in order to to kind of transform."}, {"start": 313.0, "end": 318.0, "text": " In order to parse this image by a transformer, we'll need to add these patches. Right."}, {"start": 318.0, "end": 325.0, "text": " So we have this patch embedding layer. So the difference between vision transformer and this focal transformer is the following."}, {"start": 325.0, "end": 333.0, "text": " So basically what they'll do in order to create these patches is in this example, what they did is they took four neighboring pixels and that's one patch."}, {"start": 333.0, "end": 337.0, "text": " So we'll have four by four will be a single patch. And that's why you see four here."}, {"start": 337.0, "end": 342.0, "text": " So because height and width just represent how many pixels you have, like in height and width dimensions, obviously."}, {"start": 342.0, "end": 351.0, "text": " Divide by four and you get the number of tokens. And D is the dimension of that token after we embed it using this like linear work."}, {"start": 351.0, "end": 354.0, "text": " I think they're using convolutional like projection layer."}, {"start": 354.0, "end": 360.0, "text": " So now the big difference between focal transformer and the like standard vision transformer."}, {"start": 360.0, "end": 366.0, "text": " So there are two differences. 
First is they're using this focal self-attention instead of your standard multi-head self-attention."}, {"start": 366.0, "end": 374.0, "text": " And the second thing is, as you can see, they're reducing the number of these of these like number of patches, number of tokens."}, {"start": 374.0, "end": 383.0, "text": " So as we go into this, after you apply this first layer and one times, then what they do is they reduce the number of patches here."}, {"start": 383.0, "end": 389.0, "text": " So basically, instead of having this one patch, what you'll have end up having is you'll have this one."}, {"start": 389.0, "end": 398.0, "text": " This will be like a single token. OK. And then going to the next layer here will have even bigger like even bigger tokens."}, {"start": 398.0, "end": 403.0, "text": " OK. So one token will basically span this amount of an image. OK."}, {"start": 403.0, "end": 409.0, "text": " And they are also, as you can notice here, so D goes to 2D, goes to 4D, goes to 80."}, {"start": 409.0, "end": 415.0, "text": " And that's a very similar logic that is applied in all convolutional neural networks."}, {"start": 415.0, "end": 423.0, "text": " So the idea is as you're reducing the spatial extent of your of your like representation, you're you're increasing the number of channels."}, {"start": 423.0, "end": 431.0, "text": " And basically what that means is we are we are seeing like a huge merge between these transformer and CNN architectures."}, {"start": 431.0, "end": 436.0, "text": " And it's not even clear what's a transformer, what what transform means and what a CNN means."}, {"start": 436.0, "end": 440.0, "text": " We also saw those MLP mixers. So, yeah, there is a lot of debate."}, {"start": 440.0, "end": 444.0, "text": " What's convolutional neural network now and what's a transformer actually?"}, {"start": 444.0, "end": 447.0, "text": " So that's that that's the high level overview of how this works."}, {"start": 447.0, "end": 449.0, "text": " Everything else remains the same as the vision transformer."}, {"start": 449.0, "end": 458.0, "text": " So we just have this novel layer and we have this novel logic where we are reducing the number of tokens as we go deeper and deeper."}, {"start": 458.0, "end": 462.0, "text": " What I show here, this small diagram, let me just tell you briefly about it."}, {"start": 462.0, "end": 471.0, "text": " Basically, as we are so for a given number of tokens, they show that they show that the receptive field of this novel layer is higher."}, {"start": 471.0, "end": 475.0, "text": " So it's higher compared to the regular self-attention."}, {"start": 475.0, "end": 481.0, "text": " And the reason is, as we saw, is that we are attending as we are getting further away from the query token."}, {"start": 481.0, "end": 489.0, "text": " We're basically attending Corsair and Corsair tokens so we can attend more tokens and have like for the same amount of tokens, we can attend like a huge."}, {"start": 489.0, "end": 492.0, "text": " We can have a huge receptive field."}, {"start": 492.0, "end": 497.0, "text": " OK, now let me focus on on the actual meat of this paper."}, {"start": 497.0, "end": 499.0, "text": " And that's this focal self-attention layer."}, {"start": 499.0, "end": 505.0, "text": " So let me kind of digest how this thing goes together and how they implement this focal self-attention."}, {"start": 505.0, "end": 511.0, "text": " So the first thing you'll notice is so imagine this is your input image and these are just patches."}, 
{"start": 511.0, "end": 513.0, "text": " We have 20 times 20 patches."}, {"start": 513.0, "end": 515.0, "text": " So that's 20 times 20 tokens."}, {"start": 515.0, "end": 520.0, "text": " So the first thing they do is this blue blob you can see here are our query tokens."}, {"start": 520.0, "end": 523.0, "text": " And what they do is they group them into this window."}, {"start": 523.0, "end": 530.0, "text": " And every query inside that window will attend to the same group of value and key and keys."}, {"start": 530.0, "end": 533.0, "text": " And let me just kind of explain what I mean by that."}, {"start": 533.0, "end": 539.0, "text": " So imagine if we had if we didn't group them in this window fashion, what we would have to do is the following."}, {"start": 539.0, "end": 544.0, "text": " So this token, this neighborhood would actually be something like maybe, I don't know, like this."}, {"start": 544.0, "end": 549.0, "text": " And this tokens neighborhood here would be maybe something like this."}, {"start": 549.0, "end": 553.0, "text": " And that means we'd have like a computational overhead."}, {"start": 553.0, "end": 560.0, "text": " And so basically because of some computational conveniences, they group these into into a window so that it can share the neighborhood."}, {"start": 560.0, "end": 562.0, "text": " And now I'll show you what I mean by sharing the neighborhood."}, {"start": 562.0, "end": 564.0, "text": " So first things first, we have these three levels."}, {"start": 564.0, "end": 566.0, "text": " So level one, level two, level three."}, {"start": 566.0, "end": 570.0, "text": " And those correspond to those fine grained over coarse grained structure."}, {"start": 570.0, "end": 572.0, "text": " So let me just go back here."}, {"start": 572.0, "end": 577.0, "text": " So level one was these these fine grained like patches."}, {"start": 577.0, "end": 579.0, "text": " And then we have these more coarse grained."}, {"start": 579.0, "end": 583.0, "text": " And finally, these are the the coarse grained patches."}, {"start": 583.0, "end": 586.0, "text": " OK, and let me go back here."}, {"start": 586.0, "end": 592.0, "text": " So the first thing you'll notice is we're going to pull these tokens into coarser representations, as I said."}, {"start": 592.0, "end": 601.0, "text": " So that means that here in level two, we're going to take this two by two, these two by two batches, and we're going to convert them into a single one."}, {"start": 601.0, "end": 602.0, "text": " So that's the reason we have."}, {"start": 602.0, "end": 605.0, "text": " So one, two, three, four, five, six."}, {"start": 605.0, "end": 609.0, "text": " That's why we have six by six here after after doing this this this coarsening."}, {"start": 609.0, "end": 617.0, "text": " And finally here, because we are on a level three, we're going to we're going to convert these four by four patches into a single into a single representation here."}, {"start": 617.0, "end": 619.0, "text": " So that's why we have five by five."}, {"start": 619.0, "end": 623.0, "text": " So now you can see that on this level one, what we're doing is we are attending this neighborhood here."}, {"start": 623.0, "end": 627.0, "text": " OK, so something like this."}, {"start": 627.0, "end": 629.0, "text": " Let me just draw it."}, {"start": 629.0, "end": 634.0, "text": " And then on the second level, as you can see here, we're attending this neighborhood."}, {"start": 634.0, "end": 637.0, "text": " But we have like coarsen representations."}, 
{"start": 637.0, "end": 638.0, "text": " OK."}, {"start": 638.0, "end": 642.0, "text": " And finally, the last layer, let me just draw this."}, {"start": 642.0, "end": 645.0, "text": " OK, the last layer will attend the whole feature map."}, {"start": 645.0, "end": 647.0, "text": " And but but the representations will be much coarser."}, {"start": 647.0, "end": 648.0, "text": " So that's the idea."}, {"start": 648.0, "end": 653.0, "text": " And out of those, what we do is we kind of flatten those and we concatenate those."}, {"start": 653.0, "end": 656.0, "text": " And then we apply these like value."}, {"start": 656.0, "end": 661.0, "text": " We apply a value projection matrices and and key projection matrices to get values and keys."}, {"start": 661.0, "end": 666.0, "text": " And so now let me show you what I mean by this windowed queries."}, {"start": 666.0, "end": 673.0, "text": " OK, as you can see here, this same query attends to all of these keys and values as well as this one."}, {"start": 673.0, "end": 677.0, "text": " So all of the queries will be attending to the same keys and values."}, {"start": 677.0, "end": 678.0, "text": " And that's the whole point here."}, {"start": 678.0, "end": 682.0, "text": " So that means that's how they kind of save some computation."}, {"start": 682.0, "end": 690.0, "text": " So otherwise, as I already explained, if you had if you have them separated, if you had only this one, then let me just delete this for a second."}, {"start": 690.0, "end": 695.0, "text": " OK, whoops. Let me just delete all of these and let's focus on a single token."}, {"start": 695.0, "end": 701.0, "text": " OK, so if we had only this token, that means that its neighborhood would actually be something like this."}, {"start": 701.0, "end": 704.0, "text": " And then we'd had the coarser one, et cetera."}, {"start": 704.0, "end": 709.0, "text": " Whereas for this one here, we had a totally different neighborhood."}, {"start": 709.0, "end": 716.0, "text": " And the fact that these have different neighborhoods means that you'd have to extract all of these for all specific tokens."}, {"start": 716.0, "end": 719.0, "text": " And that's make it more that makes it more computationally intensive to do this."}, {"start": 719.0, "end": 722.0, "text": " And that's why they kind of bucket them into this this window."}, {"start": 722.0, "end": 724.0, "text": " OK, that's pretty much it."}, {"start": 724.0, "end": 729.0, "text": " Then once you have the queries, keys and values, you do your regular self-attention stuff."}, {"start": 729.0, "end": 733.0, "text": " You basically out of queries and keys, you form those raw scores."}, {"start": 733.0, "end": 742.0, "text": " Then you apply softmax, you get the attention coefficients and you use those to aggregate value vectors to form novel representations of these tokens here."}, {"start": 742.0, "end": 749.0, "text": " So, again, these tokens here will have novel representations formed after we applied this layer once."}, {"start": 749.0, "end": 751.0, "text": " OK, that's pretty much it."}, {"start": 751.0, "end": 753.0, "text": " Hopefully that was understandable."}, {"start": 753.0, "end": 757.0, "text": " Let me now jump further and kind of focus on this part here."}, {"start": 757.0, "end": 764.0, "text": " So they say to perform focal self-attention, we need to first extract the surrounding tokens for each query token in the feature map."}, {"start": 764.0, "end": 765.0, "text": " So that's the thing I just mentioned."}, {"start": 
765.0, "end": 770.0, "text": " That's the motivation behind a kind of grouping these great tokens into into these windows."}, {"start": 770.0, "end": 776.0, "text": " And I think the Swin transformer or some of the other like previous work showed that that kind of helps."}, {"start": 776.0, "end": 779.0, "text": " And like this paper is just copying that part."}, {"start": 779.0, "end": 787.0, "text": " So note that the strict version of focal self-attention following figure one requires to exclude the overlapping regions across different levels."}, {"start": 787.0, "end": 793.0, "text": " In our model, we intentionally keep them in order to capture the pyramid information for the overlap regions."}, {"start": 793.0, "end": 795.0, "text": " So what I say there is the following thing."}, {"start": 795.0, "end": 798.0, "text": " So let me get back to the image here."}, {"start": 798.0, "end": 803.0, "text": " As you can see here, looking at this image, it seems like they are they don't have any overlaps."}, {"start": 803.0, "end": 808.0, "text": " So that basically means that they only apply the fine grain structure here."}, {"start": 808.0, "end": 815.0, "text": " And then for this course or grain structure, it seems like they're only attending these this region here."}, {"start": 815.0, "end": 818.0, "text": " So let me just kind of. Shaded."}, {"start": 818.0, "end": 826.0, "text": " So this part here, OK, but in reality, they are also they are also using these regions as well for the course or scale."}, {"start": 826.0, "end": 830.0, "text": " And, yeah, they just mentioned that they do that."}, {"start": 830.0, "end": 832.0, "text": " And I didn't see any ablations on this."}, {"start": 832.0, "end": 837.0, "text": " But like it seems that this overlapping information on multiple scales help."}, {"start": 837.0, "end": 843.0, "text": " And they kind of I guess it's also more computationally intensive if they wanted to exclude these regions."}, {"start": 843.0, "end": 846.0, "text": " So they just left it like this. OK, that's that part."}, {"start": 846.0, "end": 854.0, "text": " Additionally, they have this this learnable relative position bias that was introduced in this previous previous paper called Swin transformer."}, {"start": 854.0, "end": 858.0, "text": " And, yeah, this bias term kind of just helps boost the performance additionally."}, {"start": 858.0, "end": 860.0, "text": " OK, let me just focus on this one now."}, {"start": 860.0, "end": 864.0, "text": " This is the whole point. 
They have less computation."}, {"start": 864.0, "end": 869.0, "text": " So M times N is the so M times N are your like feature feature map dimension."}, {"start": 869.0, "end": 876.0, "text": " So that means this is basically the number of tokens and and as we can see, so the complexity depends on them."}, {"start": 876.0, "end": 878.0, "text": " And we have some like pretty much constant here."}, {"start": 878.0, "end": 880.0, "text": " So L is the number of levels."}, {"start": 880.0, "end": 882.0, "text": " And that was three in the examples we just saw."}, {"start": 882.0, "end": 887.0, "text": " And they also have these S sub R L components."}, {"start": 887.0, "end": 889.0, "text": " So let me just kind of show you what these are."}, {"start": 889.0, "end": 894.0, "text": " And as you can see here, so that S sub R."}, {"start": 894.0, "end": 896.0, "text": " So it's eight for this first level."}, {"start": 896.0, "end": 897.0, "text": " And we can see why it's eight."}, {"start": 897.0, "end": 901.0, "text": " So basically that that's basically the dimension of this pooled feature map."}, {"start": 901.0, "end": 903.0, "text": " OK, we have six by six here."}, {"start": 903.0, "end": 905.0, "text": " That's why we have six here and we have five here."}, {"start": 905.0, "end": 906.0, "text": " That's why we have five of five here."}, {"start": 906.0, "end": 908.0, "text": " So those are pretty much constants."}, {"start": 908.0, "end": 914.0, "text": " And the whole point is that this part is constant once you set up the parameters of your architecture."}, {"start": 914.0, "end": 919.0, "text": " And then you only depend linearly on the number of tokens, though this constant may be big."}, {"start": 919.0, "end": 922.0, "text": " So, yeah, that's also worth mentioning."}, {"start": 922.0, "end": 930.0, "text": " And we'll see soon like the number of flops they have and the time it takes to do an inference through this to this architecture."}, {"start": 930.0, "end": 932.0, "text": " OK, that's pretty much it."}, {"start": 932.0, "end": 934.0, "text": " Let me now focus on on on the actual results."}, {"start": 934.0, "end": 935.0, "text": " Let's see what they got."}, {"start": 935.0, "end": 940.0, "text": " So they have again they have this focal tiny focal small and focal base."}, {"start": 940.0, "end": 942.0, "text": " They tested three of these models with."}, {"start": 942.0, "end": 945.0, "text": " So this one is obviously the biggest one and they have smaller ones."}, {"start": 945.0, "end": 950.0, "text": " And what they report are some like state of the art results."}, {"start": 950.0, "end": 952.0, "text": " But as we'll see, it's not that clear cut."}, {"start": 952.0, "end": 958.0, "text": " So if we focus on this number here, so it's eighty three point eight and it's a bit bigger than than SwinBase."}, {"start": 958.0, "end": 961.0, "text": " But the thing is they also have more parameters."}, {"start": 961.0, "end": 963.0, "text": " It's not a significant amount, but it's more."}, {"start": 963.0, "end": 965.0, "text": " And they also have more flops."}, {"start": 965.0, "end": 971.0, "text": " And as they show in the appendix as well, it takes more time for the same amount of flops."}, {"start": 971.0, "end": 977.0, "text": " It takes more time to compute the forward prop through the focal base compared to the SwinBase."}, {"start": 977.0, "end": 979.0, "text": " So that's something to keep in mind."}, {"start": 979.0, "end": 987.0, "text": " If they were to kind of 
make those the same, I'm not sure they'd have even this big of like increasing performance."}, {"start": 987.0, "end": 991.0, "text": " So I'd say this is more on pair than state of the art, if you ask me."}, {"start": 991.0, "end": 998.0, "text": " But like, yeah, yeah, if you don't care about these other parameters, then you have a bit better performance."}, {"start": 998.0, "end": 1001.0, "text": " And they have the same results."}, {"start": 1001.0, "end": 1003.0, "text": " They show the same results across different tasks."}, {"start": 1003.0, "end": 1005.0, "text": " So this is the on the COCO."}, {"start": 1005.0, "end": 1007.0, "text": " So this is basically object detection."}, {"start": 1007.0, "end": 1009.0, "text": " They show some improvements."}, {"start": 1009.0, "end": 1015.0, "text": " Again, I'm not sure the underlying flops and and and like number of parameters as well as the inference speed."}, {"start": 1015.0, "end": 1020.0, "text": " If you don't care about those, again, they do achieve state of the art results there as well."}, {"start": 1020.0, "end": 1025.0, "text": " Similarly here, so a bunch of different bunch of different benchmarks."}, {"start": 1025.0, "end": 1028.0, "text": " I won't be focusing on each of those in particular."}, {"start": 1028.0, "end": 1033.0, "text": " So here they show results on semantic segmentation."}, {"start": 1033.0, "end": 1037.0, "text": " Again, they're a bit better compared to the baselines."}, {"start": 1037.0, "end": 1044.0, "text": " And yeah, it's kind of hard to compare these because they don't have numbers for the number of parameters and flops there."}, {"start": 1044.0, "end": 1047.0, "text": " But yeah, yeah, I don't want to focus too much on the results."}, {"start": 1047.0, "end": 1049.0, "text": " Let me show you some ablations here."}, {"start": 1049.0, "end": 1051.0, "text": " In the final model, they were actually testing."}, {"start": 1051.0, "end": 1055.0, "text": " They used two levels, not three levels, as we saw in those previous diagrams."}, {"start": 1055.0, "end": 1060.0, "text": " And what they tested is increasing this window size from seven to 14."}, {"start": 1060.0, "end": 1067.0, "text": " And that kind of bumped up the the performance, as you can see here, but also the flops go up."}, {"start": 1067.0, "end": 1069.0, "text": " So it's obviously a trade off again."}, {"start": 1069.0, "end": 1076.0, "text": " And what is show here is the importance of using both those local and global like interactions."}, {"start": 1076.0, "end": 1081.0, "text": " So first things first, what they show is this window attention, which basically means the following thing."}, {"start": 1081.0, "end": 1086.0, "text": " So window means that you're if I go back to the diagram up there."}, {"start": 1086.0, "end": 1089.0, "text": " So give me a second. So, OK, this part."}, {"start": 1089.0, "end": 1097.0, "text": " So that means that these queries are actually these queries are only attending to the keys and values that are inside of this region."}, {"start": 1097.0, "end": 1100.0, "text": " OK, let me just change the color. 
So they're only attending this part."}, {"start": 1100.0, "end": 1103.0, "text": " They're not attending the other keys and values."}, {"start": 1103.0, "end": 1108.0, "text": " And that basically means we don't have a nice communication across different parts of the image."}, {"start": 1108.0, "end": 1111.0, "text": " And then obviously the performance drops by a lot."}, {"start": 1111.0, "end": 1113.0, "text": " Let me get back here. OK."}, {"start": 1113.0, "end": 1120.0, "text": " And once they add this like local part and then global part, you can see it increases and local plus global."}, {"start": 1120.0, "end": 1122.0, "text": " So both using the fine grained structures."}, {"start": 1122.0, "end": 1124.0, "text": " So that's the local part and the global one."}, {"start": 1124.0, "end": 1129.0, "text": " So that's where you're using those windows of size seven or 14, whatever."}, {"start": 1129.0, "end": 1133.0, "text": " So when you combine both of those, you get the like computational efficiency."}, {"start": 1133.0, "end": 1137.0, "text": " Plus you get to kind of ingest information from the whole image."}, {"start": 1137.0, "end": 1142.0, "text": " That's something that we know is useful because that's why transformers are performing better than CNN's."}, {"start": 1142.0, "end": 1148.0, "text": " OK, what they additionally did here is this switch transformer had something like something called like Windows shift."}, {"start": 1148.0, "end": 1154.0, "text": " And it actually hurts the performance of Swin transformer a lot if they remove that shifting."}, {"start": 1154.0, "end": 1158.0, "text": " But for the for this focal transformer, it's actually not necessary."}, {"start": 1158.0, "end": 1160.0, "text": " So they're just going to remove that."}, {"start": 1160.0, "end": 1162.0, "text": " Here are some some depth ablations."}, {"start": 1162.0, "end": 1164.0, "text": " Nothing super, super interesting."}, {"start": 1164.0, "end": 1166.0, "text": " They again show they are better than Swin."}, {"start": 1166.0, "end": 1173.0, "text": " Even even even when they reduced the number of as you can see here, even when they reduce depth of the focal one,"}, {"start": 1173.0, "end": 1181.0, "text": " they are still even better than the swing that has like like deeper layer in this particular stage of the architecture."}, {"start": 1181.0, "end": 1184.0, "text": " OK. 
And here again, they are their own pair."}, {"start": 1184.0, "end": 1191.0, "text": " So, yeah, those are just some additional claims that they are still the art on these different tasks."}, {"start": 1191.0, "end": 1194.0, "text": " OK, that's pretty much it."}, {"start": 1194.0, "end": 1196.0, "text": " Let me just do two more things here."}, {"start": 1196.0, "end": 1207.0, "text": " OK, so what I say here is in our observation is that adding long range tokens can bring more relative improvement for image classification than object detection and vice versa for local tokens."}, {"start": 1207.0, "end": 1216.0, "text": " We suspect that dense predictions like object detection more rely on fine grained local context while image classification favors more the global information."}, {"start": 1216.0, "end": 1222.0, "text": " So this is an interesting statement and it kind of makes sense because if you want to understand what's in the image, if you want to classify the object in the image,"}, {"start": 1222.0, "end": 1226.0, "text": " you want to kind of understand what's like the whole the whole context."}, {"start": 1226.0, "end": 1232.0, "text": " Whereas when you're doing object detection, it's kind of enough to kind of understand the fine grained details."}, {"start": 1232.0, "end": 1243.0, "text": " But like there is also this is a double edged sword because this could also mean that the model learns the neural networks learn how to exploit those spurious signals that we we are well aware of that."}, {"start": 1243.0, "end": 1251.0, "text": " Sometimes neural networks kind of exploit those spurious signals like texture in order to to infer what the object is inside of the image."}, {"start": 1251.0, "end": 1258.0, "text": " And we don't want that. We want our neural networks to understand the actual content of the image itself and not those spurious signals."}, {"start": 1258.0, "end": 1271.0, "text": " Second thing I want to I want to mention here is they say here, though extensive experimental results show that our focal self attention can significantly boost the performance on both image classification and dense prediction tasks."}, {"start": 1271.0, "end": 1280.0, "text": " It does introduce extra computational and memory cost since each query token needs to attend to the coarsened global tokens in addition to the local tokens."}, {"start": 1280.0, "end": 1283.0, "text": " So that's what something I already stressed a couple of times."}, {"start": 1283.0, "end": 1288.0, "text": " I don't think this model significantly boosts upon the prior state of the art."}, {"start": 1288.0, "end": 1295.0, "text": " And again, they did acknowledge it here. You have to trade off computation and so flops and also memory footprint and speed."}, {"start": 1295.0, "end": 1304.0, "text": " If you want to get this small and not significant boost in performance, if you ask me and going back to the appendix here, they mentioned somewhere here."}, {"start": 1304.0, "end": 1310.0, "text": " So accordingly, our focal transformer has slower running speed, though it has similar flops as wind transformers."}, {"start": 1310.0, "end": 1313.0, "text": " So that's very important. This is mainly due to two reasons."}, {"start": 1313.0, "end": 1318.0, "text": " We introduced the global coarse grain attention and introduces the extra computations."}, {"start": 1318.0, "end": 1330.0, "text": " That's one. 
And two, though we conduct our focal attention on the window windows, we still observe that extracting the surrounding tokens around local windows and the global tokens across the feature map are time consuming."}, {"start": 1330.0, "end": 1341.0, "text": " So just keep this in mind. If you want to actually run this in production one day, those extra parameter, those extra dimensions like computation and speed and memory do matter a lot."}, {"start": 1341.0, "end": 1360.0, "text": " So that's pretty much it. Hopefully you like this paper. If you did, share it out, subscribe. And until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=FYA_jwPpXi0
Multimodal Few-Shot Learning with Frozen Language Models | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover "Multimodal Few-Shot Learning with Frozen Language Models" from DeepMind. They introduce Frozen - which is able to handle both visual and textual inputs and shows good generalization capabilities to novel visual question answering datasets combined with fast binding mechanisms even though it was only trained on image captioning. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2106.13884 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 02:20 GPT-3 and emerging few-shot properties 04:20 Training procedure for Frozen 07:45 Inference 10:15 Strong generalization? 11:55 Prompting mechanisms and the hardest task 13:25 Quantitative results 19:50 Outro ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #frozen #deepmind #multimodal
What's up? In this video I'm covering this novel paper called Multimodal Few-Shot Learning with Frozen Language Models by Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, Ali Eslami, Oriol Vinyals and Felix Hill of DeepMind. So what this paper shows, in a nutshell, is that they have this model called Frozen, where basically they take a huge pre-trained language model such as GPT-2, they freeze it - hence Frozen - and they train this vision encoder so that the model can parse images as well. It will just convert images into tokens which are compatible with the language model, and then they show that this model can do few-shot learning across different tasks which involve both visual cues as well as linguistic cues. To make it a bit less abstract, let me show you this example of what I mean. So this is an example of an input that goes into this Frozen model. We have an image, it will be tokenized, then we have the text "this person is like" and then this smiley, because this girl is happy; then another image with "this person is like" sad; and finally we prompt the model with a new image and with "this person is like", and as we can see the model generates this terrified smiley, a dot, and the end-of-sentence token. OK, which is what we expected, which is really cool. And one thing to keep in mind during this whole video is that these examples are curated - they mention that multiple times - so again, the whole point is that the model has this few-shot learning capability, rather than being a highly performant model on all of these tasks. OK, nonetheless, let me show you this one. So: this was invented by Zacharias Janssen, this was invented by Thomas Edison, blah blah blah. The point here is that just by looking at this image you can't deduce what the answer is, whereas here you could have. And so, as you can see, the output here is "the Wright brothers" and the end-of-sentence token. So the whole point here, as we'll soon see, is that the language model can pull the factual knowledge in, and we can answer questions like this because of the language model, not because of the vision portion of the model. OK, let's dig deeper into the paper now. That was a high-level glimpse of this paper: basically, the model can handle both modalities, and the more examples you give it, the better it becomes. That's really cool. OK, let's now dive deeper. When trained at sufficient scale, autoregressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. And this should be familiar to you - if you haven't watched my videos on GPT-3 or the original transformer paper, do check them out, I'll link them somewhere here. But basically what those models showed is the following, especially the GPT family of models from OpenAI. They showed that even though the model was trained as a language model - so basically you have the unsupervised task of next-token prediction and that's everything - you can actually do machine translation, for example. And here is an example of how that looks. We have a one-shot setting here where we prompt the model with "translate English to French" and then we give it one example - sea otter, loutre de mer, I don't speak French - and then you prompt it with "cheese" and this arrow symbol, and the model will actually learn how to translate it, so it performs machine translation even though it's never seen such a task during training.
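Since few-shot prompting is really just concatenating solved examples in front of the query, here is a tiny illustration of how such a prompt could be assembled. The exact formatting (the arrow separator and so on) is an assumption for illustration, not necessarily what OpenAI used:

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt as plain text: a task description, a few
    solved examples, then the unsolved query the model should complete."""
    lines = [task_description]
    lines += [f"{src} => {tgt}" for src, tgt in examples]
    lines.append(f"{query} =>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("I don't speak French.", "Je ne parle pas francais.")],
    "cheese",
)
print(prompt)
```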
And here we additionally have a few-shot setup, where we have multiple examples and then we prompt the model to translate. And they show that with these multiple examples the performance just gets better and better - obviously it saturates after a certain point, but the trend is clear. OK, so that's the first thing. Now, here: we present a simple yet effective approach for transferring this few-shot learning ability to a multimodal setting, and in particular they focus on vision, as we'll soon see. OK, and finally, here's the motivation behind all of this. They say: despite these impressive capabilities, such large-scale language models are blind to modalities other than text, preventing us from communicating visual tasks, questions or concepts to them. So that's the reason they integrated this vision component. OK, let me now explain to you how this whole system looks and how it works. The system as a whole is pretty simple. We have a language model, as you can see here, and it's frozen - so the parameters are all frozen - and we have this vision encoder. Just to be a bit more specific, they were using GPT-2 for the language model and NF-ResNet for the vision encoder, but that's not that crucial - you could pick some other language model, you could pick a different vision encoder, maybe something like the vision transformer. What matters is that these components exist and are wired the way they are. OK, so first they need to adapt the image into the input that the transformer model is expecting, and those are these tokens. So what they do is they have this vision encoder, and after some pooling layer it will output a vector. Then they take a linear layer and project this into a new space - let me draw it like this - and the dimension will be n times d, where they found the best value for n to be two, but that's not important; as you can see here they have only two tokens, but it can be an arbitrary number in general. What is important is d: d is the same dimension as that of the tokens that go into the language model. Obviously, that's the prerequisite for being able to feed these image tokens into the language model. So that's the image part. Now, how do they train this? It's fairly simple. I think it would be wiser to write it down like this: we have the start-of-sentence token here, and all of these other words will be shifted over here - so "a" will go here, "small" will be here, etc. So we'll have "small" here, and what you're trying to do is predict the target sequence. Let me just focus on predicting the word "small". The word "small" will have as its context these image tokens that came in here, as well as the start-of-sentence token, as well as "a", because we have causal masking. Obviously, if you saw "small" in the input, then you'd be cheating and it would be easy to predict "small" at the output - you could just copy the values, and the transformer would learn to copy-paste the values, and that's not what we want; we want it to predict tokens. So that's how this looks: basically here you'd output the distribution in the usual way, and you're just trying to maximize the likelihood.
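Before going on, here is a minimal sketch of that whole setup, assuming a generic causal language model that accepts input embeddings and an image backbone that ends in global pooling; all names and dimensions are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class Frozen(nn.Module):
    """Sketch of the Frozen idea: a trainable vision encoder produces n prefix
    tokens in the language model's embedding space; the LM itself stays frozen."""
    def __init__(self, language_model, vision_encoder, vision_dim, lm_dim, n_prefix=2):
        super().__init__()
        self.lm = language_model
        for p in self.lm.parameters():
            p.requires_grad = False            # "frozen": only the vision side learns
        self.vision = vision_encoder            # e.g. an NF-ResNet up to global pooling
        self.to_prefix = nn.Linear(vision_dim, n_prefix * lm_dim)
        self.n_prefix, self.lm_dim = n_prefix, lm_dim

    def forward(self, image, text_embeddings):
        feats = self.vision(image)                                       # (B, vision_dim)
        prefix = self.to_prefix(feats).view(-1, self.n_prefix, self.lm_dim)
        # prepend the image tokens and run the frozen causal LM as usual
        return self.lm(torch.cat([prefix, text_embeddings], dim=1))

# toy usage with dummy stand-ins, just to show the shapes
model = Frozen(nn.Identity(),
               nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512)),
               vision_dim=512, lm_dim=768)
out = model(torch.randn(4, 3, 32, 32), torch.randn(4, 10, 768))
print(out.shape)  # torch.Size([4, 12, 768]): 2 image tokens followed by 10 text tokens
```

During training, only the vision encoder and this linear projection receive gradient updates; the loss is the ordinary next-token cross-entropy on the caption tokens, which is exactly what is described next.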
So here maybe we have some distribution, and we find the token that corresponds to the word "small" - maybe this one - and we just want to push that probability towards one and push all of the other probabilities down to zero. We do that with simple cross-entropy, so the loss will be minus log of p: when p goes to one, the loss goes to zero. So, in a nutshell, that's how the system works and that's how it's trained. The gradients are backpropagated through these weights, which are frozen, and the vision encoder's weights are then tweaked, so basically what happens is that the vision encoder learns representations that are useful for doing this captioning task. OK, and that's the whole system - it's fairly simple. Let me now show you how they use this thing. Here, in the first example, we have a vision encoder - so we have an image, we encode it into these two tokens - and then we prompt it with text: "Question: What color is the car? Answer:", and the model generates "blue" and then the end-of-sentence token. By the way, just a short remark here: even though you see here one word per token, what they actually do in practice is use this tokenizer called SentencePiece, so this word "small" may be separated into subwords - maybe this "sma" part will have one token associated with it and then "ll" will have a second one. That's just an example, but in general you'll have more tokens than you have words in your sentence. OK, just a minor detail. And so this is one of the tasks they're going to evaluate this model on. The second example here will require the model to have some knowledge base. Additionally, I forgot to mention that during this captioning training, all of the named entities are masked - so if you have a name like Aleksa or something, you'll mask it and just put "person" instead of the name there. That means this vision encoder cannot learn those named entities, obviously, because it never saw named entities. So in the examples you see here, when this model generated "Steve Jobs", that certainly did not come from the vision encoder - and that's an important thing I want you to notice here. The knowledge that Steve Jobs was the guy who invented the iPhone actually came from the language model itself, and that's important. And the third task they evaluate this model on is this fast-binding task, where you present the model with an image, and you have this made-up word called "dax", and you just want the model to associate the visual category of an apple with the novel word "dax". You do the same thing here with an orange and "blicket", and finally you prompt it with an image and you say "Question: What is this? Answer:", and the model generates "This is a dax." So we saw that dax is apple, so it properly generated the sentence "this is a dax". OK, so having said that, let me now show you the quantitative results they got, because that's interesting. Before that, just a short remark. They say here: in contrast, our work enables strong generalization to new multimodal tasks, blah blah blah. What I want to say is that I don't like this part, because we don't even have a strong definition of what strong generalization is, and the closest thing I could think of was Francois Chollet's paper On the Measure of Intelligence, where he described this terminology, where he basically
pretty much all of the current models we have in deep learning and machine learning are only capable of local generalization, whereas broad generalization and extreme generalization are something that, for now, basically only humans can do. I pretty much agree with him on this one, because the language model they use in Frozen has seen an enormous amount of data, so it has a huge amount of experience, which we need to account for when we talk about generalization capabilities. Arguably, because of that immense experience, you cannot really claim that it is generalizing "strongly". It does generalize, it's just a question of the level. Anyway, rant over.

These are just some details I already explained: they have a huge seven-billion-parameter pre-trained language model, they freeze it, and that's how they train the model. I also already explained how they map images into tokens the language model can parse. I'll skip the smaller details; there is also a positional-encoding detail that helped them. Let me focus on this instead. The important thing to notice is that they have a bunch of different ways to prompt this model so that it generates better answers, and here's one example. They have pretty intricate terminology: "2-way, 0 repeats, 2 inner-shots". Let's see what that means. First there is the task induction, which they quantitatively show helps a lot: "Answer with dex or blicket." tunes the model into answering with those two words, in a sense. Then it's "2-way" because there are two made-up words, blicket and dex, and it's "2 inner-shots" because there are two independent examples, as you can see. Finally they prompt the model, and "blicket" would be the correct answer, because, as you can see, a blicket is a lion.

The last task they evaluate is one where, again, you have to associate a visual category with a novel word, but now you not only have to recognize (like here, where you recognize a lion and output "blicket"), you also have to reason, because the question says "What is the dex made of?" So you first need to understand that a dex is a table, and then you need to know what it is made of, and the answer is wood. That's the final task they evaluate the Frozen model on. Now let's see the quantitative results.

Here are the results for visual question answering, and there are these baselines which I'll explain shortly, especially the blind baseline. The strength of the pre-trained language model is a double-edged sword: it powers the generalization abilities of Frozen, but it also enables the model to perform surprisingly well without considering the visual input at all; the model can learn to ignore the visual input and still answer the question. To guard against this possibility, they also train blind baselines, in which the image presented to the visual encoder is blacked out but the same weights are still trained, which amounts to prefix tuning. As a short reminder of how that works: for these baselines, instead of the real image they just black it out (or blue it out, in my drawing), so the model has to learn some representation which is constant, since it only ever sees black images, and that constant representation needs to help with the captioning task as much as possible. That's why they call it prefix tuning: the system finds some fixed prefix representation that helps the overall system do a better job at captioning.
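As a tiny illustration of that blind baseline, here's a sketch under the same assumptions as the earlier snippet (the `frozen_caption_loss` and `mapper` names are mine, not the paper's): the only change to training is that every image is replaced with a blacked-out one, so the learned "visual" prefix becomes input-independent.

```python
import torch

def blind_batch(images: torch.Tensor) -> torch.Tensor:
    """Replace every image with an all-zeros (blacked-out) one. The vision encoder
    then produces the same prefix for every example, i.e. a learned constant prefix."""
    return torch.zeros_like(images)

# The training step itself is unchanged, only the input differs, e.g.:
# loss = frozen_caption_loss(mapper(blind_batch(images)), caption_ids, lm, token_embedding)
```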
Okay, so that's one of the baselines, the blind baseline. Having explained it, let's now focus on the quantitative results. Here we are: "Frozen" is the version we were talking about, and this here is the version trained from scratch, which means you ditch the pre-trained transformer weights and train the whole thing from scratch. The setups are zero-shot, one-shot, four-shot, and so on, and we can see that as we add more examples the accuracy improves, which is the desirable trend. We see that the from-scratch model totally fails, and that the fine-tuned model is worse; "fine-tuned" means they do not freeze the language model but do initialize it with the pre-trained weights. The blind baseline I've just explained. Oscar is a dedicated baseline, and you can see it's much better than Frozen, but the whole point here is that we get this improving-with-examples ability that was also characteristic of GPT-3 and language models in general, and that's cool. You can also see that when you additionally fine-tune on this very dataset, the visual question answering dataset, the performance obviously ramps up, which is expected.

The second experiment is the test on the OK-VQA dataset. This dataset contains examples where you need some additional knowledge base in order to answer the question, and again they show that the model improves with additional examples. As another baseline they use a version with a 400-million-parameter language model, whereas the main one has seven billion, and, obviously, the bigger the model, the better the performance, which isn't that surprising in 2021. Again, the dedicated baseline is much better; they just want to stress that the few-shot improvement transfers to this multimodal setup, and that's cool. So those were the first experiments.

Let me reiterate this part, because it's really important: the Conceptual Captions dataset is hypernymed, meaning that, for example, proper names are replaced with a general word like "person". This enables them to rigorously study the transfer of factual knowledge, because all knowledge of named entities comes from language-model pre-training. Consequently, when we show the model an image of an airplane and ask who invented this, the visual encoder has determined that the image contains an airplane, and the language model has used this to retrieve the factual knowledge that airplanes were invented by the Wright brothers.

Finally, jumping to fast concept binding, these are the last tasks they tested the Frozen model on. Let me connect this table with the actual tasks. This is the task at hand, the visual binding task: it's two-way binding because we again have two novel made-up words, dex and blicket. They also show a five-way binding task, where you now have five made-up words, and we'll soon see that the model fails on that one but succeeds on two-way binding. On the two-way binding you can again see that they have a bunch of different ways to prompt the model in order to elicit better outputs (a rough sketch of how such a prompt is assembled follows below).
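To make the "2-way, 2-inner-shots" format a bit more concrete, here is a purely illustrative sketch; the function name, the exact wording of the task induction, and the idea of returning a list of interleaved segments are my own assumptions, not the paper's code.

```python
def build_fewshot_prompt(support, query_image,
                         induction: str = "Answer with dex or blicket."):
    """Assemble a multimodal few-shot prompt: task induction, then each support
    (image, caption) pair, then the query image and the question. Each image would
    be turned into its visual tokens and each string into text tokens before being
    concatenated and fed to the frozen language model."""
    segments = [induction]
    for image, caption in support:            # 2 inner-shots -> two (image, caption) pairs
        segments += [image, caption]
    segments += [query_image, "Question: What is this? Answer:"]
    return segments

# 2-way, 2-inner-shot example (images are placeholders here):
prompt = build_fewshot_prompt(
    support=[("<img: apple>", "This is a dex."), ("<img: orange>", "This is a blicket.")],
    query_image="<img: another apple>")
```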
And again, here you can see it's improving with more examples. The second thing they tried is using real names instead of those made-up words; they do this so they can quantify how much harder it is for the model to learn the binding and how hard the task itself is. You can see (and this kind of bugs me, I'm not sure whether it's a typo) that it has better performance, and then it saturates after three examples. Again, the dedicated baseline is better, as I mentioned. Five-way binding fails: here it's literally at random chance, 20%, and it improves a bit here and then goes back down, so it's inconclusive. They mention it here, just a second: "In Table 4 we show that the observed effects on open-ended miniImageNet do not transfer to the five-way setting, where Frozen is not significantly above chance. This shows that learning to bind five new names to five visual categories in a single forward pass is beyond the current capabilities of Frozen." So it fails there, and they leave it as future research.

The final task is the one I showed you where, aside from fast binding, you also need to reason. Not many new conclusions can be drawn here; again, it improves with more examples. The interesting part, maybe, is the blind baseline: remember, the image is blacked out, so they just repeat the text a couple of times, and those linguistic cues alone boost the performance of the blind model, which means that the improvements above are a combination of both linguistic cues and visual cues.

I think that's pretty much it. I like this paper a lot; I like this inclusion of visual information into the whole pipeline. It slowly starts resembling the way we humans operate: we have vision, we somehow represent that information, and then we have this computation engine inside our heads. By the way, one of the previous papers, "Pretrained Transformers as Universal Computation Engines" (or something along those lines), showed that if you take a huge pre-trained language model and only tweak some layer-norm parameters and some embedding weights, you can fine-tune it very quickly onto novel tasks, and this work is a follow-up in a way. Having said that, I hope you liked this video; if you did, consider subscribing and sharing, and until next time, bye bye.
[{"start": 0.0, "end": 2.88, "text": " What's up? In this video I'm covering this novel paper called"}, {"start": 2.88, "end": 6.32, "text": " Multimodal Few Shot Learning with Frozen Language Models"}, {"start": 6.32, "end": 13.92, "text": " by Maria Zimpucalli, Jacob Manik, Serkan Kabi, Ali Aslami, Oriol Vinyals and Felix Hill of DeepMind."}, {"start": 13.92, "end": 18.32, "text": " So what this paper shows in a nutshell is that they have this model called Frozen"}, {"start": 18.32, "end": 22.88, "text": " where basically they take a huge pre-trained language model such as GPT-2,"}, {"start": 22.88, "end": 27.36, "text": " they freeze it, hence frozen, and they basically train this vision encoder"}, {"start": 27.36, "end": 31.36, "text": " so that they can parse images as well. So it will just convert images into"}, {"start": 31.36, "end": 36.4, "text": " like tokens which are compatible with the language model and then they show that they can have"}, {"start": 36.96, "end": 42.72, "text": " this model do few-shot learning across different tasks which involve both the"}, {"start": 43.6, "end": 47.120000000000005, "text": " visual cues as well as the linguistic cues. To make it a bit less abstract,"}, {"start": 47.120000000000005, "end": 51.760000000000005, "text": " let me show you this example to show you what I mean. So this is an input,"}, {"start": 51.760000000000005, "end": 55.6, "text": " an example of an input that goes into this frozen model. So we have an image,"}, {"start": 55.6, "end": 60.72, "text": " it will be tokenized, then we have this person is like, and then we have this smiley because"}, {"start": 60.72, "end": 67.12, "text": " this girl is happy, this person is like sad, and finally we prompt the model with this image"}, {"start": 67.12, "end": 72.16, "text": " and we prompt it with this person is like, and as we can see the model generates this"}, {"start": 72.16, "end": 77.84, "text": " like terrified smiley, a dot, and end of a sentence token. Okay, which is what we've expected,"}, {"start": 77.84, "end": 84.24000000000001, "text": " which is really cool. And okay, thing to keep in mind during this whole video is that these"}, {"start": 84.24, "end": 88.96, "text": " examples are curated, they mentioned that multiple times, so again the whole point is that they can"}, {"start": 89.52, "end": 94.88, "text": " have this few-shot learning capability rather than they have like a highly performant model"}, {"start": 94.88, "end": 100.39999999999999, "text": " on all of these tasks. Okay, nonetheless let me show you this one. So this was invented by Zacharias"}, {"start": 100.39999999999999, "end": 106.08, "text": " Janssen, Thomas Edison, blah blah blah. So the point here is that just looking at this image you"}, {"start": 106.08, "end": 112.8, "text": " can't deduce what's the answer, whereas here you could have. And so as you can see the output here"}, {"start": 112.8, "end": 119.2, "text": " is the right brothers dot end of sentence token. So the whole point here as we'll soon see is that"}, {"start": 119.2, "end": 124.56, "text": " the language model can now pull the factual knowledge in and we can answer questions like"}, {"start": 124.56, "end": 129.44, "text": " this because of the language model, not because of the vision portion of the model. Okay, let's"}, {"start": 129.44, "end": 135.12, "text": " dig deeper into the paper now. So that was a high level like a glimpse of this paper. 
Basically"}, {"start": 135.12, "end": 141.28, "text": " they can handle both modalities and they can like learn the more examples you give the model the"}, {"start": 141.28, "end": 147.12, "text": " better it becomes. That's really cool. Okay, let's now dive deeper. When trained at a sufficient scale"}, {"start": 147.12, "end": 152.0, "text": " autoregressive language models exhibit the notable ability to learn a new language task"}, {"start": 152.0, "end": 158.8, "text": " after being prompted with just a few examples. And this should be like familiar to you. If you"}, {"start": 158.8, "end": 164.56, "text": " haven't watched my video on GPT-3 or the original transformer paper, do check them out. I'll link"}, {"start": 164.56, "end": 169.04, "text": " them somewhere here. But basically what those models showed is the following, especially the"}, {"start": 169.04, "end": 176.16, "text": " GPT family of models from OpenAI. They showed that even though the model was trained as a language"}, {"start": 176.16, "end": 181.12, "text": " model, so basically you have an unsupervised task of next token prediction and that's everything."}, {"start": 181.12, "end": 185.04, "text": " And they show that you can actually do machine translation for example. And here is an example"}, {"start": 185.04, "end": 190.07999999999998, "text": " of how it looks like. So we have a one-shot setting here where we prompt the model with translate"}, {"start": 190.07999999999998, "end": 195.92, "text": " English to French and then we give it one example, C author to Lutre Des Morts, I don't speak French."}, {"start": 195.92, "end": 200.55999999999997, "text": " And then you prompt with cheese and you have this symbol and the model will actually learn how to"}, {"start": 200.55999999999997, "end": 206.48, "text": " translate this. So to perform machine translation even though it's never seen such a task during"}, {"start": 206.48, "end": 214.39999999999998, "text": " this training. And here we just have additionally a few-shot example set up where we have multiple"}, {"start": 215.27999999999997, "end": 220.56, "text": " examples and then we prompt the model to translate. And they show that with these multiple examples"}, {"start": 220.56, "end": 224.79999999999998, "text": " the performance just gets better and better. Obviously it saturates after a certain point,"}, {"start": 224.8, "end": 230.88000000000002, "text": " but like the trend is clear. Okay, so that's the first thing. Now here we present a simple"}, {"start": 230.88000000000002, "end": 236.16000000000003, "text": " yet effective approach for transferring this few-shot learning ability to a multi-modal setting."}, {"start": 236.16000000000003, "end": 243.36, "text": " And in particular they focus on vision as we'll soon see. Okay, finally here's a motivation behind"}, {"start": 243.36, "end": 248.4, "text": " all this. So despite these impressive capabilities, such large-scale language models are blind to"}, {"start": 248.4, "end": 253.60000000000002, "text": " modalities other than text, preventing us from communicating visual tasks, questions or concepts"}, {"start": 253.6, "end": 258.64, "text": " to them. So that's the reason they kind of integrated this vision component. Okay, let me"}, {"start": 258.64, "end": 263.2, "text": " now explain to you how this whole system looks like and how it works. So the system as a whole"}, {"start": 263.2, "end": 268.0, "text": " is pretty simple. So we have a language model as you can see here and it's frozen. 
So the parameters"}, {"start": 268.0, "end": 273.92, "text": " are all frozen and we have this vision encoder. So just to be a bit more specific they were using"}, {"start": 273.92, "end": 279.28, "text": " GPT-2 for the language model. They were using NF ResNet for the vision encoder, but that's not"}, {"start": 279.28, "end": 283.59999999999997, "text": " that crucial. You can pick some other language model, you can pick some different vision encoder,"}, {"start": 283.59999999999997, "end": 288.47999999999996, "text": " maybe something like vision image transformer, but that's not that important. Like the fact that"}, {"start": 288.47999999999996, "end": 296.0, "text": " these components exist and they are wired the way they are. Okay, so what I do is, so they first need"}, {"start": 296.0, "end": 301.11999999999995, "text": " to kind of adapt the image into the input that the transformer model is expecting and that's these"}, {"start": 301.11999999999995, "end": 306.23999999999995, "text": " tokens. So what I do is they have this vision encoder and it will output after some pulling"}, {"start": 306.24, "end": 311.28000000000003, "text": " layer a vector. And what they do is they take a linear layer, they just project this into"}, {"start": 312.64, "end": 321.04, "text": " this novel space. Let me draw it like this and the dimension will be n times d, where basically"}, {"start": 321.68, "end": 326.16, "text": " they found this n to be like the best value was two, but that's not important. As you can see here"}, {"start": 326.16, "end": 333.44, "text": " they have only two tokens, but it can be an arbitrary number in general. So d is, that's"}, {"start": 333.44, "end": 339.2, "text": " important. D is a dimension that's the same as these tokens that go into language model. So obviously"}, {"start": 339.2, "end": 344.08, "text": " that's the prerequisite so that we can feed these tokens into the language model that come from the"}, {"start": 344.08, "end": 353.12, "text": " image. And so that's the image part. Now how do they train this is fairly simple. So basically"}, {"start": 353.12, "end": 358.64, "text": " I think it would be wiser to write this down as, so we have start of sentence token here"}, {"start": 358.64, "end": 363.2, "text": " and all of these other words will be kind of translated here. So a will get here,"}, {"start": 363.76, "end": 371.2, "text": " this small will be here, etc. So we'll have small here and so now what you're trying to do is to"}, {"start": 371.2, "end": 378.15999999999997, "text": " predict the target sequence. So as you can see here, so let me just focus on predicting the word"}, {"start": 378.15999999999997, "end": 386.32, "text": " small. So the word small will have as a context these image tokens that came here as well as"}, {"start": 386.32, "end": 392.24, "text": " this start of sentence token as well as a because we have causal masking obviously you don't want to"}, {"start": 392.24, "end": 398.08, "text": " have if you saw small in the input then you're kind of cheating and it's easy to predict the"}, {"start": 398.08, "end": 402.56, "text": " output small. You can just kind of copy the values and the transformer will learn how just to copy"}, {"start": 402.56, "end": 407.28, "text": " copy paste the values and that's not what we want. We want to predict tokens. So that's how the"}, {"start": 407.28, "end": 412.32, "text": " test would look like. 
Basically here you'd have you'd output the distribution the usual way so"}, {"start": 412.32, "end": 418.15999999999997, "text": " you're just trying to maximize the likelihood. So here maybe we have some distribution and we find"}, {"start": 418.15999999999997, "end": 423.36, "text": " the token that corresponds to the word small maybe this one and we'll just want to maximize this to"}, {"start": 423.36, "end": 429.6, "text": " one and push all of the other probabilities down to zero and we do that by just simple like cross"}, {"start": 429.6, "end": 435.68, "text": " entropy so it will be a minus log of p so when p goes to one the loss goes to zero and so in a"}, {"start": 435.68, "end": 440.56, "text": " nutshell that's how the system works that's how it's trained the gradients are back propped"}, {"start": 440.56, "end": 447.52, "text": " through these weights which are frozen and these weights are then tweaked so that basically"}, {"start": 447.52, "end": 453.68, "text": " what happens is that this vision encoder learns such representations so that they're useful"}, {"start": 453.68, "end": 459.12, "text": " in order to do this captioning task okay and that's the whole system it's fairly"}, {"start": 459.12, "end": 465.6, "text": " trivial. Let me now show you how they use this thing and here we have in the first example"}, {"start": 465.6, "end": 471.44, "text": " so we have a vision encoder so we have an image we encode it into these two tokens and then we"}, {"start": 471.44, "end": 477.44, "text": " prompt it with like this text so question what color is the car and the model generates blue"}, {"start": 477.92, "end": 484.72, "text": " and then end of sentence token so by the way just a short remark here they actually even though you"}, {"start": 484.72, "end": 489.44, "text": " can see here a word and a single token what actually in practice what I do is they use this"}, {"start": 489.44, "end": 496.32, "text": " tokenizer called sentence piece so this boat or this word small maybe like separated into"}, {"start": 496.32, "end": 502.96, "text": " sub words maybe like maybe this sma part will be like one we'll have one token associated with it"}, {"start": 502.96, "end": 508.15999999999997, "text": " and then ll will have second so that's just an example but in general you'll have more tokens"}, {"start": 508.15999999999997, "end": 514.24, "text": " than you have words in your sentence okay just a minor detail and so that's this is one of the"}, {"start": 514.24, "end": 521.52, "text": " tasks that they're gonna evaluate this model on the second example here will require the model"}, {"start": 521.52, "end": 526.96, "text": " to have some knowledge base and additionally I forgot to mention that the captioning so during"}, {"start": 526.96, "end": 534.08, "text": " this captioning training all of the like named entities are masked so if you have a name like"}, {"start": 534.08, "end": 540.16, "text": " alexa or something you'll mask it with a person so you'll have you'll just like put a person"}, {"start": 540.16, "end": 545.8399999999999, "text": " instead of a name there so that means that this vision encoder cannot learn those named entities"}, {"start": 545.8399999999999, "end": 552.0799999999999, "text": " obviously because it never saw like named entities and so the examples you see here like"}, {"start": 552.0799999999999, "end": 558.0799999999999, "text": " like this model generated steve jobs that did not certainly didn't come from the vision encoder and"}, {"start": 
558.0799999999999, "end": 562.88, "text": " that's an important thing I want you to notice here so the knowledge of that steve jobs was the"}, {"start": 562.88, "end": 568.8, "text": " guy who invented iphone actually came from the language model itself and that's the important"}, {"start": 568.8, "end": 574.16, "text": " from the language model itself so that's important and the third task they evaluate this model on is"}, {"start": 574.16, "end": 581.76, "text": " this fast binding task where you present the model with the image then you say you kind of have this"}, {"start": 581.76, "end": 588.24, "text": " made up word called dex and you just want the model to associate this visual category of an apple with"}, {"start": 588.24, "end": 594.0, "text": " the novel word dex and you do the same thing here you have an orange blanket and finally you prompt"}, {"start": 594.0, "end": 601.12, "text": " it with an image and you say question what is this answer and the model generates this is a dex so"}, {"start": 601.12, "end": 608.96, "text": " we saw dex is apple so it properly generated this is a dex sentence okay so having said that let me"}, {"start": 608.96, "end": 614.8, "text": " now kind of see let me show you the quantitative results they got because that's that's that's"}, {"start": 614.8, "end": 620.96, "text": " interesting before that just a short remark here so they say here in contrast our work enables"}, {"start": 620.96, "end": 627.6800000000001, "text": " strong generalization to new multi-modal tasks blah blah blah what i want to say here is i don't"}, {"start": 627.6800000000001, "end": 634.64, "text": " like this part because basically we don't even have a strong definition of what strong generalization"}, {"start": 634.64, "end": 641.2, "text": " is and the closest thing i could think of was fransois chollet's paper on the measure of"}, {"start": 641.2, "end": 647.6, "text": " intelligence where he described this terminology where he basically says that all of the current"}, {"start": 647.6, "end": 653.52, "text": " models pretty much that we have in deep learning field and machine learning are only like able to"}, {"start": 653.52, "end": 658.8000000000001, "text": " do this local generalization whereas this broad generalization and extreme generalization is"}, {"start": 658.8000000000001, "end": 665.76, "text": " something that only humans can currently basically do and so yeah i pretty much agree with him on"}, {"start": 665.76, "end": 672.08, "text": " this on this one because this language model that they use in frozen has seen like a lot of data so"}, {"start": 672.08, "end": 677.84, "text": " that means it has a lot of experience which we need to count in when we calculate these generalization"}, {"start": 677.84, "end": 684.32, "text": " capabilities so arguably because of all of that like immense amount of experience you cannot"}, {"start": 684.32, "end": 690.24, "text": " actually claim that it's generalizing like strongly it does generalize but like just the level yeah"}, {"start": 690.24, "end": 694.8000000000001, "text": " it's kind of yeah rent over pretty much okay these are just some details i already explained"}, {"start": 695.6800000000001, "end": 700.96, "text": " they have a huge seven billion pre-trained language model they freeze it and that's how they train the"}, {"start": 700.96, "end": 707.76, "text": " model i also explained this one how how do we map the images into into tokens which language model"}, {"start": 707.76, "end": 713.12, "text": " 
can then parse okay i'll just skip all of those small detail there is in positionally coding that"}, {"start": 713.12, "end": 720.4000000000001, "text": " helped them and let me focus on this so important thing i want you to notice is that they have a"}, {"start": 720.4000000000001, "end": 726.32, "text": " bunch of different ways to prompt this model so that it can generate the answers much better and"}, {"start": 726.32, "end": 732.24, "text": " here's one example so they have a pretty intricate uh like terminology here so two ways zero repeats"}, {"start": 732.24, "end": 737.0400000000001, "text": " two inner shots so let's see what i mean so first they have this task induction which they"}, {"start": 737.0400000000001, "end": 744.48, "text": " quantitatively show it helps a lot so answer with dex or blicket kind of prompts kind of tunes the"}, {"start": 744.48, "end": 749.84, "text": " model into answering with these two words in a sense and then they have so two shots basically"}, {"start": 749.84, "end": 755.36, "text": " this is the first part and uh the reason it's two ways because we have two made up words like"}, {"start": 755.36, "end": 760.08, "text": " blanket and ducks here and because the reason it's two inner shots is because they have two"}, {"start": 760.08, "end": 765.04, "text": " independent examples here as you can see and finally they prompt the model and blicket would"}, {"start": 765.04, "end": 770.4, "text": " be the correct answer because as you can see blicket is lion okay and the the last task they'll"}, {"start": 770.4, "end": 776.72, "text": " they'll they'll evaluate is this one where again you have to associate a visual category with a"}, {"start": 776.72, "end": 782.72, "text": " novel word and finally you actually not only have to output like here it's basically recognition"}, {"start": 782.72, "end": 788.32, "text": " you recognize it's a lion and you output blicket but here you have to reason because it says what"}, {"start": 788.32, "end": 795.0400000000001, "text": " is the the question says what is the dex made of so you first need to understand that dex is a table"}, {"start": 795.0400000000001, "end": 801.28, "text": " and then you need to understand what it is made of and the answer is wood so this is the final"}, {"start": 801.28, "end": 805.9200000000001, "text": " task they evaluate this frozen model on and now let's see the quantitative results okay here are"}, {"start": 805.9200000000001, "end": 811.6, "text": " the results for the visual question answering and there are these baselines which i'll shortly explain"}, {"start": 811.6, "end": 817.9200000000001, "text": " now especially this blind baseline so the strength of the pre-trained language model is a double edge"}, {"start": 817.9200000000001, "end": 823.12, "text": " sword it powers the generalization abilities of frozen but also enables the model to perform"}, {"start": 823.12, "end": 828.8000000000001, "text": " surprisingly well without considering the visual input at all so you can learn to ignore the visual"}, {"start": 828.8000000000001, "end": 833.84, "text": " input and still answer the question so to guard against this possibility we also train blind"}, {"start": 833.84, "end": 839.0400000000001, "text": " baselines in which the image presented to the visual encoder is blacked out but the common"}, {"start": 839.04, "end": 845.76, "text": " weights are still trained this amounts to prefix tuning so just as a short reminder how it works"}, {"start": 845.76, "end": 852.0, 
"text": " basically here instead of an image for these baselines what they'll do instead is they'll"}, {"start": 852.0, "end": 859.12, "text": " just black out or blue out in my case these images and so what happens is basically the model will"}, {"start": 859.12, "end": 864.56, "text": " have to learn some representation which is constant which won't which won't change because it will only"}, {"start": 864.56, "end": 871.4399999999999, "text": " be presented with black images and so that constant kind of needs to help in this captioning task"}, {"start": 871.4399999999999, "end": 877.1999999999999, "text": " and so that's why they call it prefix tuning because it finds some representation that helps"}, {"start": 877.1999999999999, "end": 882.16, "text": " the overall system do better job at captioning okay so that's one of the baselines the blind"}, {"start": 882.16, "end": 887.52, "text": " baseline and having explained that one let's now focus on the quantitative results okay here we are"}, {"start": 887.52, "end": 894.56, "text": " um so frozen here is the the the version we were talking about here is the version trained from"}, {"start": 894.56, "end": 899.52, "text": " scratch so that means you just stitch all of the weights in the pre-trained transformer and try"}, {"start": 899.52, "end": 906.0799999999999, "text": " and train it from scratch um here the the setups they have so this is zero shot one shot four shot"}, {"start": 906.0799999999999, "end": 913.28, "text": " etc and we can see that as we add more examples we have this trend that the the accuracy is"}, {"start": 913.28, "end": 919.04, "text": " improving which is desirable okay we see that the from scratch model totally fails we see that the"}, {"start": 919.04, "end": 924.72, "text": " fine-tune model is worse so fine-tuning so basically here they do not they do not freeze"}, {"start": 924.72, "end": 929.1999999999999, "text": " the language model but they initialize the model with a pre-trained weight the blind baseline"}, {"start": 929.1999999999999, "end": 935.68, "text": " i've just explained it as well so um again oscar is some dedicated baseline you can see it's much"}, {"start": 935.68, "end": 942.48, "text": " better than frozen but the whole point here is that we have this improve improving ability uh that"}, {"start": 942.48, "end": 949.44, "text": " also was pertinent to gpt3 models and language models in general so that's cool um here you can"}, {"start": 949.44, "end": 955.04, "text": " just see when you additionally fine-tune on this very data set visual visual question answering"}, {"start": 955.04, "end": 961.04, "text": " data set the performance obviously ramps up and that's that's expected i guess now the second"}, {"start": 961.04, "end": 967.12, "text": " thing here is the test on this okay vqa data set so this data set contains those examples where"}, {"start": 967.12, "end": 973.92, "text": " you need some additional knowledge base in order to answer the questions and again they so they"}, {"start": 973.92, "end": 981.28, "text": " show that um the the model improves with additional examples as a new baseline they use this 400"}, {"start": 981.28, "end": 987.2, "text": " million parameter model and by the way this one has seven billion and again obviously the the"}, {"start": 987.2, "end": 991.2, "text": " bigger the model the the better the performance is something that's not that surprising i guess"}, {"start": 991.2, "end": 999.0400000000001, "text": " uh like in 2021 and um yeah again the this 
baseline is much better they just want to stress out that"}, {"start": 999.0400000000001, "end": 1004.72, "text": " uh we have this improvement uh like being transferred to this multi-multi-multi-modal"}, {"start": 1004.72, "end": 1012.48, "text": " uh setup and that's cool okay so that was the the first experiments they did um here let me just"}, {"start": 1012.48, "end": 1017.12, "text": " uh kind of reiterate this because it's really important uh so this conceptual captions data"}, {"start": 1017.12, "end": 1022.24, "text": " set is hypernamed meaning that for example proper names are replaced with a general person"}, {"start": 1022.24, "end": 1027.76, "text": " a word-like person okay so this enables us to rigorously study the transfer of factual knowledge"}, {"start": 1027.76, "end": 1033.28, "text": " because all knowledge of named entities comes from language model pre-training consequently when we"}, {"start": 1033.28, "end": 1039.04, "text": " show the model an image of an airplane and ask who invented this the visual encoder has determined"}, {"start": 1039.04, "end": 1043.52, "text": " that the image contains an airplane and the language model has used this to retrieve the"}, {"start": 1043.52, "end": 1048.08, "text": " factual knowledge that airplanes were invented by the Wright brothers and finally jumping to"}, {"start": 1048.08, "end": 1055.44, "text": " fast concept binding these are the last tasks they tested this frozen model on um let me just"}, {"start": 1055.44, "end": 1061.52, "text": " kind of connect this table with the actual tasks so this is the task at hand we have this uh visual"}, {"start": 1061.52, "end": 1067.36, "text": " binding uh task so this is a two-way binding because we have again two novel made up words"}, {"start": 1067.36, "end": 1074.24, "text": " text and blick it they'll show like five way binding task where obviously you have now five"}, {"start": 1074.24, "end": 1080.1599999999999, "text": " made upwards and we'll soon see that it fails on that one but it succeeds on the two-way binding"}, {"start": 1080.1599999999999, "end": 1086.0, "text": " task so on the two-way binding you can see again they have a bunch of different ways to to prompt"}, {"start": 1086.0, "end": 1093.12, "text": " the model so that they can elicit better outputs and again here you can see it's improving with"}, {"start": 1093.12, "end": 1098.6399999999999, "text": " more examples second thing they tried is they used real names instead of those made upwards"}, {"start": 1098.6399999999999, "end": 1104.4799999999998, "text": " they do this so that they can quantify how harder it is for the model to learn the binding uh and"}, {"start": 1104.4799999999998, "end": 1109.4399999999998, "text": " how hard the task itself is and you can see so this kind of bugs me i'm not sure whether this"}, {"start": 1109.4399999999998, "end": 1115.36, "text": " is a typo but like you can it has better performance and then you kind of saturate saturates after three"}, {"start": 1115.36, "end": 1126.08, "text": " examples um again this annual baseline is better as i mentioned uh five way um binding fails so here"}, {"start": 1126.08, "end": 1134.1599999999999, "text": " it's literally like uh same as random chance 20 and it kind of improves here then goes back so"}, {"start": 1134.1599999999999, "end": 1138.6399999999999, "text": " yeah it's inconclusive here they mentioned here somewhere here just a sec so in table four we show"}, {"start": 1138.6399999999999, "end": 1144.08, "text": " that the 
observed effects on open-ended mini image net do not transfer to the five-way setting"}, {"start": 1144.08, "end": 1149.6799999999998, "text": " where frozen is not significantly above chance this shows that learning to bind five new names"}, {"start": 1149.6799999999998, "end": 1154.8799999999999, "text": " to five visual categories in a single forward pass is beyond the current capabilities of frozen okay"}, {"start": 1155.6, "end": 1161.12, "text": " so it kind of fails there and they leave it up as a future research the final task is the one i"}, {"start": 1161.12, "end": 1166.72, "text": " showed you where you need to aside from fast binding you need to reason not much new conclusions can"}, {"start": 1166.72, "end": 1172.6399999999999, "text": " be made here so again it's improving with more examples the interesting part maybe is so if we"}, {"start": 1172.64, "end": 1179.2800000000002, "text": " focus on this blind baseline we can see that even repeating uh so again remember the image is blacked"}, {"start": 1179.2800000000002, "end": 1185.8400000000001, "text": " out so we just just repeat the text a couple of times and those linguistic cues help boost the"}, {"start": 1185.8400000000001, "end": 1192.8000000000002, "text": " performance of this blind model which means that these improvements above basically are a combination"}, {"start": 1192.8000000000002, "end": 1200.4, "text": " of both linguistic cues as well as the visual cues okay um i think that's pretty much it i like this"}, {"start": 1200.4, "end": 1208.88, "text": " paper a lot i like this uh inclusion of visual uh information into this whole pipeline it slowly"}, {"start": 1208.88, "end": 1214.5600000000002, "text": " starts resembling the way we humans operate so we have these like we have vision obviously and we"}, {"start": 1214.5600000000002, "end": 1221.3600000000001, "text": " kind of uh somehow represent that information and then we have this like computation engine inside"}, {"start": 1221.3600000000001, "end": 1226.16, "text": " our head by the way one of the previous papers called uh like pre-trained transformers or"}, {"start": 1226.16, "end": 1232.4, "text": " universal computation engine or something they showed that like if you take a huge pre-trained"}, {"start": 1232.4, "end": 1237.2, "text": " language model and you just tweak some layer norm parameters and some embedding weights you can"}, {"start": 1237.2, "end": 1243.1200000000001, "text": " basically fine tune it very fast onto novel tasks and that's cool i guess this work is a follow-up"}, {"start": 1243.1200000000001, "end": 1248.64, "text": " in a way and yeah having said that i hope you like this video if you did consider subscribing"}, {"start": 1248.64, "end": 1260.64, "text": " sharing and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=j2PXES-liuc
VQ-GAN: Taming Transformers for High-Resolution Image Synthesis | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover VQ-GAN or Taming Transformers for High-Resolution Image Synthesis. It uses modified VQ-VAEs and a powerful transformer (GPT-2) to synthesize high-res images. An important modification of VQ-VAE they brought are: 1) changing MSE for perceptual loss 2) adding adversarial loss which makes the images way more crispy compared to the original VQ-VAE which had blurry outputs. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2012.09841 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 01:50 A high-level VQ-GAN overview 04:00 Perceptual loss 05:10 Patch-based adversarial loss 06:45 Sequence prediction via GPT 09:50 Generating high-res images 12:45 Loss explained in depth 16:15 Training the transformer 17:50 Conditioning transformer 20:45 Comparisons and results 22:00 Sampling strategies 23:00 Comparisons and results continued 25:00 Rejection sampling with ResNet or CLIP 26:45 Receptive field effects 28:30 Comparisons with DALL-E ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #vqvae #imagesynthesis #gpt
What's up! In this video I'm covering VQ-GAN, or the "Taming Transformers for High-Resolution Image Synthesis" paper by Patrick Esser, Robin Rombach, and Björn Ommer from Heidelberg University. As you can see, it's all about image synthesis, and what's exciting about this paper is that they are the first to use transformers and achieve high-resolution images like the one you can see here on the screen. This paper directly builds off of VQ-VAE, a model previously developed by DeepMind which I covered in my last video, so do check that out if you don't know anything about it. Aside from VQ-GAN, DALL-E from OpenAI also uses a VQ-VAE-style model as its backbone, so that's currently an important building block for these generative models.

Having said that, let's see what this paper is all about. I mentioned transformers, and they say here: "we demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images." The CNN part is just the encoder of the VQ-VAE, as we'll soon see. They also say: "we hypothesize that low-level image structure is well described by local connectivity, i.e. a convolutional architecture, whereas this structural assumption ceases to be effective on higher semantic levels." So the idea is to extract features which are representative of what the image consists of, and then use a transformer to do sequence token prediction over them, as we'll soon see, in order to generate images which are globally coherent; that's where the attention pattern of the transformer, which basically treats the underlying data as a fully connected graph, becomes useful.

Okay, let's see the high-level overview of the architecture. First things first: as you can probably recognize, this bottom block here is a VQ-VAE, and the training of VQ-GAN consists of two stages, very similar to VQ-VAE. You first train the VQ-VAE module, that's this one, and then you train a prior; in the VQ-VAE paper they used PixelCNN, here they're using a transformer, more precisely OpenAI's GPT-2 architecture. If you watched my previous video you'll see that this structure overall is very similar to the original VQ-VAE paper. What they've done additionally is swap PixelCNN for a state-of-the-art sequence model, the GPT-2 transformer, and they also improved the reconstruction loss: they're not using MSE, they're using a perceptual loss plus an adversarial loss instead. So there is a lot of overlap between the two projects, but this one gets much better, high-resolution imagery.

So, let's see how it works. Again, you have the VQ-VAE component here: you take your image, you encode it, you get some latent representations, and then you snap these latent vectors onto the codebook vectors, so that you have a discrete representation which you then feed into the decoder in order to reconstruct the image. That's the high-level picture of how that works.
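As a quick refresher on what "snapping onto the codebook" means, here is a minimal sketch of standard VQ-VAE-style quantization. This is my own simplified code, not the authors'; the shapes and the straight-through gradient trick follow the usual recipe.

```python
import torch

def quantize(z_e: torch.Tensor, codebook: torch.Tensor):
    """Sketch of the codebook 'snapping' step (standard VQ-VAE quantization).

    z_e:      (B, H, W, D) continuous encoder outputs
    codebook: (K, D) learnable codebook entries
    Returns the quantized latents and the integer indices the transformer will model.
    """
    flat = z_e.reshape(-1, z_e.size(-1))                        # (B*H*W, D)
    dists = torch.cdist(flat, codebook)                         # L2 distance to every code
    indices = dists.argmin(dim=1)                               # nearest codebook entry
    z_q = codebook[indices].view_as(z_e)                        # snapped latents fed to the decoder
    z_q = z_e + (z_q - z_e).detach()                            # straight-through estimator
    return z_q, indices.view(z_e.shape[:-1])                    # indices: (B, H, W)
```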
And just as a reminder, the system is trained in two stages. Stage one is where you train this VQ-VAE model, i.e. you learn how to produce these high-quality latent vectors, and once you have that you freeze everything: you freeze the decoder, the encoder, and the codebook, and then you train the transformer over this discrete latent space. That's the high-level idea of how the system is trained.

Now let's see the differences between VQ-VAE and this paper. The main difference in stage one is that they are not using MSE; they use a perceptual loss plus an adversarial loss instead. The perceptual loss, to the best of my knowledge, comes from the neural style transfer literature. What you do is take the original image, call it x, and feed it into a pre-trained CNN such as a VGG network, and you also feed in the reconstructed image, call it x-hat. You then take a certain layer of that pre-trained CNN and extract its feature maps, with their channel, height, and width dimensions, for both images, and you apply an MSE loss, but this time not in image space, in this feature space. That's the only difference, and the reason it works is that the classifier has already learned how to extract valuable features; it's just unreasonably effective.

Aside from that, they also have the adversarial component. As you may know, the original VQ-VAE had a problem with blur, and we already know that GANs improve on that: they produce much crisper images. The original VQ-VAE authors actually suggested using GANs as future work, so here it is. They're using a PatchGAN. What's the difference compared to your regular GAN? Usually you have just a single output, something like a sigmoid activation, and you train the discriminator to discriminate between real data and fake data. Here it's the same idea, except the discriminator operates on overlapping patches: for an image it outputs a grid of values, one per patch, and if the image is real we'd expect it to output ones everywhere (or however you decide to encode "real"). With values between 0 and 1 you just use the usual cross-entropy and train this patch-based discriminator the usual way. I'll get into a bit more detail about the GAN loss later, but this is roughly how it works. So the perceptual loss plus this patch-based GAN loss together give a much crisper reconstructed image out of this VQ-VAE pipeline.
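Here is a small sketch of those two ingredients, under my own assumptions: `feature_net` stands for a frozen slice of a pretrained CNN (e.g. a few VGG layers), and the tiny discriminator is only meant to illustrate the per-patch output, not to match the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def perceptual_loss(x, x_hat, feature_net: nn.Module) -> torch.Tensor:
    """MSE in the feature space of a frozen, pretrained CNN -- a simplified
    stand-in for the perceptual loss discussed above."""
    with torch.no_grad():
        target_feats = feature_net(x)
    return F.mse_loss(feature_net(x_hat), target_feats)

class PatchDiscriminator(nn.Module):
    """Tiny PatchGAN-style discriminator sketch: instead of one real/fake score
    per image, it outputs a grid of scores, one per overlapping patch."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),   # (B, 1, h, w) patch logits
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)

# Discriminator loss on one batch (binary cross-entropy per patch):
# d_real = disc(x);     loss_real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
# d_fake = disc(x_hat); loss_fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
```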
Okay, the second step: once you've trained that and you have these code vectors, you train the transformer on a sequence-prediction task. Here's how the procedure looks. When training the transformer, you take an image and feed it into the encoder, which is frozen, and you get a sequence of discrete symbols, as you can see here; these symbols are just indices into the codebook table. You now train your transformer as a sequence predictor. This sequence of indices is your target sequence, and the input is formed by taking a special start-of-sequence token and then shifting the whole target array one position to the right. You feed that input into your transformer, run it through the transformer layers the usual way, and then you use cross-entropy to backprop through the transformer and train it. For example, at this position the input token is 22, and we know from the target that the next symbol is 57, so during training we look up element 57 in the output distribution, take its probability, call it p_57, and the loss is simply minus log of that probability. For the loss to go to zero you want to push p_57 towards one, because we know that's the true next symbol. So it's the usual sequence-prediction setup, nothing special there. If you understand transformers, GANs, and VQ-VAEs, this is super simple to understand: you just connect those components and you get the system, voilà.

So that was the high-level overview. There is one more thing I need to mention: compared to OpenAI, these folks are from Heidelberg, so I guess they didn't have as much compute, and they're constrained to a 16-by-16 discrete latent space, i.e. their transformer only works with 256 tokens. That also means you can't have a huge input image, because, as I've also noticed, if they reduce the spatial dimensions by more than 16x, the reconstruction quality starts degrading severely. Take those two constraints together and you get that they train the system on 256-by-256 images.
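Before moving on to how they get high-resolution images out of this, here is a minimal sketch of that stage-two objective as I understand it; the names, the start-token handling, and the flattening are my own simplifications.

```python
import torch
import torch.nn.functional as F

def prior_loss(indices: torch.Tensor, transformer, sos_id: int) -> torch.Tensor:
    """Stage-two training sketch: next-index prediction over the flattened 16x16
    grid of codebook indices (teacher forcing, cross-entropy)."""
    b = indices.size(0)
    seq = indices.reshape(b, -1)                                  # (B, 256) raster-flattened codes
    sos = torch.full((b, 1), sos_id, dtype=seq.dtype, device=seq.device)
    inp = torch.cat([sos, seq[:, :-1]], dim=1)                    # shift right by one position
    logits = transformer(inp)                                     # (B, 256, K) from a causal transformer
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), seq.reshape(-1))
```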
So how do they get the high-resolution images? It's pretty simple: they just slide a window across this discrete latent space. Let me show you. Because we can only fit 256 tokens into the transformer, they do this sliding-window generation across the latent grid, and that's how they produce more than 256 tokens; they then feed all of those into the decoder, which is a fully convolutional network, so it can accept a variable number of tokens. At least, that's how I understood it works; I may be wrong, so comment down below if you think something else is going on. They also mention an implicit assumption needed for this to work: "our VQ-GAN ensures that the available context is still sufficient to faithfully model images, as long as either the statistics of the dataset are approximately spatially invariant, or spatial conditioning information is available." In practice, they mention that even when that assumption is somewhat violated, the COCO-GAN paper showed you can take these smaller patches and combine them into a bigger image without any seams, which sounds really good.

So that's what the whole system looks like. One thing I haven't mentioned yet: once the system is trained, how do they generate novel images? You just prompt your transformer with the special start symbol, it generates some token, say 3; you feed 3 back into the input, because it's autoregressive, and out comes another symbol, maybe 21; you put 21 back in and repeat, until you have 16 by 16 tokens (or more), and then you feed all of that through the decoder, which was frozen after stage one of the training, and out comes the image.
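Here's a rough sketch of that sampling loop. Note that the real implementation slides a two-dimensional window over the latent grid; this one-dimensional version is my own simplification and only illustrates the idea of limiting the context to the most recent codes.

```python
import torch

@torch.no_grad()
def sample_codes(transformer, steps: int, sos_id: int, window: int = 256):
    """Autoregressively sample code indices, keeping only the last `window` tokens
    as context so more codes can be generated than fit in the transformer."""
    seq = torch.tensor([[sos_id]])
    for _ in range(steps):
        context = seq[:, -window:]                      # only the most recent 256 codes fit
        logits = transformer(context)[:, -1, :]         # distribution over the next code index
        next_code = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        seq = torch.cat([seq, next_code], dim=1)
    return seq[:, 1:]                                   # drop the start token; feed these to the decoder
```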
It's worth contrasting this work with ImageGPT. They mention that previous work which applied transformers to image generation demonstrated promising results for images up to a size of 64 by 64 pixels, but, due to the quadratically increasing cost in sequence length, it cannot simply be scaled to higher resolutions. As you may know, transformers have that quadratic complexity because every token attends to every other token, so you get n-squared complexity; and because ImageGPT worked directly in pixel space rather than in a discrete latent space like here, it couldn't generate images of this high a resolution. That approach was bound to hit a wall, but you can see that the inception of this very same idea was already there, and in the VQ-VAE paper as well.

Okay, let me explain the loss components in a bit more detail. I assume you're already familiar with VQ-VAE. Again, in the original paper we have the reconstruction loss, which is just the squared L2, i.e. your MSE loss, plus these two additional terms. One term takes the codebook vectors and pushes them towards the encoder outputs: E(x) here denotes the encoded vectors, and sg means stop-gradient, so the encoder outputs are frozen and the code vectors are pulled towards them. The other term does the opposite: the codebook vectors are frozen and the encoder outputs are pushed towards them. That's VQ-VAE in a nutshell.

Now, the interesting thing this paper does is change the reconstruction term. They're not using MSE any more; they say: "we replace the L2 loss used in the original paper for the reconstruction loss by a perceptual loss, and introduce an adversarial training procedure with a patch-based discriminator." Here is your usual adversarial loss. As a small recap: the discriminator wants to maximize this term, and the generator wants to minimize it. For the discriminator to maximize it, it needs to push D(x) towards one for real images, because log of one is zero and that maximizes the first term, and it needs to push D(x-hat) towards zero for fake, i.e. generated, images, which maximizes the second term; that means it can discriminate between fake and real images. On the other hand, the generator wants to trick the discriminator into thinking the images it generates are real, so it pushes D(x-hat) towards one; when that happens, the log term tends towards minus infinity, which minimizes the equation. So it's the usual minimax game of regular GAN training.

The final loss of this modified VQ-VAE is the following: E is the encoder, G the generator (the decoder), and Z the codebook table, and we minimize, by tweaking those parameters, the VQ loss plus the GAN component, while the discriminator tries to maximize that GAN component. Additionally, they have this lambda weight on the GAN term, which seems a bit arbitrary to be honest, I didn't see any ablations on it, but they found an adaptive way to set it. So that's the first stage of the training.
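To tie those pieces together, here is a sketch of how the stage-one, generator-side objective might be assembled. It is a simplification under my own assumptions: `perceptual` and `disc` stand for the helpers sketched earlier, `lam` is passed in as a fixed number whereas the paper computes it adaptively from gradient norms, and the commitment weight `beta` is the usual VQ-VAE default rather than anything the paper specifies.

```python
import torch
import torch.nn.functional as F

def stage_one_generator_loss(x, x_hat, z_e, z_q, disc, perceptual, lam=0.8, beta=0.25):
    """Perceptual reconstruction + the two stop-gradient VQ terms + a lambda-weighted
    adversarial term that tries to fool the patch discriminator."""
    rec = perceptual(x, x_hat)                           # MSE in pretrained-CNN feature space
    codebook = F.mse_loss(z_q, z_e.detach())             # || sg[E(x)] - z_q ||^2
    commit   = F.mse_loss(z_e, z_q.detach())             # || E(x) - sg[z_q] ||^2
    d_fake = disc(x_hat)                                 # per-patch logits for the reconstruction
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return rec + codebook + beta * commit + lam * adv
```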
They say here: thus, after choosing some ordering of the indices in S, where S is just your token sequence, all of these symbols are S, and the ordering they mention is the following. Let me clarify this part a bit. You have this table, let me just redraw it here for clarity, and you need to somehow impose an order on it, because it's a matrix and there is no inherent linear order, you have to linearize it, right? Usually people just use raster order, and that's how you're going to predict the tokens, because in order to have a next token defined at all, you need to make this assumption of an ordering. They also tried other orderings, like outgoing and in-going spirals, and the raster order turned out to work the best, so they stuck with it, as did the other papers prior to this one. Okay, so after choosing some ordering of the indices in S, image generation can be formulated as autoregressive next-index prediction: given previous indices, the transformer learns to predict the distribution of possible next indices, i.e. this term here, and to compute the likelihood of the full representation, the joint is just modeled as a product of these conditionals, okay? And finally, in order to train the transformer, you just apply the cross-entropy I already mentioned. Okay, now the interesting part is conditioning. They condition the transformer on very different modalities, so you can condition it on a bunch of things: a class label, an image with semantic segmentation applied to it, etc. Let me show you some examples. Here, as you can see, you have this image which is semantically segmented, and given that as a condition, you're trying to generate some novel images, as you can see on the right. Okay, so let me show you how they make that work. They say here: if the conditioning information C has spatial extent, such as semantic masks, we first learn another VQGAN to obtain again an index-based representation R with the newly obtained codebook Z. Due to the autoregressive structure of the transformer, we can then simply prepend R to S. If you don't understand that, let me clarify it, I think it's fairly simple. Here's how the procedure looks. Basically, you'll have an additional VQGAN: this is encoder one, you have a discrete latent space, and this is decoder one. You train this thing on the semantic masks, and once it's trained, you freeze it and do the following. You have your target VQGAN, the one that encodes the regular images, and during its training you take the input image, encode it, and get the target sequence, so this is your target sequence. Your input is formed as follows: you take a semantic mask, get its tokens, flatten them out, and prepend them; then you place your special start token, take the target sequence, shift it by one position to the right, and that's your input to the transformer, while this is your target, and you again train it on a simple sequence prediction problem. That's how they train this model, and later during generation you just prepend the semantic mask tokens and the prompting start token, sample from your transformer to get the output sequence, and feed that sequence to the decoder, okay? So yeah, that was pretty much a thorough explanation of how this system works. Hopefully that was useful.
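As a small illustration of that sequence setup, here is a sketch of how the transformer's input and target could be assembled when conditioning tokens are prepended; the start-token id and masking the loss over the condition positions are assumptions made for illustration, not a quote of the authors' code.

```python
import torch
import torch.nn.functional as F

def build_conditioned_batch(cond_tokens, img_tokens, sos_id=0):
    """cond_tokens: (B, Nc) indices from the conditioning VQGAN,
       img_tokens:  (B, Ni) indices from the image VQGAN."""
    sos = torch.full((img_tokens.size(0), 1), sos_id, dtype=torch.long)
    # Input: [condition tokens, <sos>, image tokens shifted right by one].
    inp = torch.cat([cond_tokens, sos, img_tokens[:, :-1]], dim=1)
    # Target: predict the image tokens; ignore positions over the condition.
    ignore = torch.full_like(cond_tokens, -100)
    tgt = torch.cat([ignore, img_tokens], dim=1)
    return inp, tgt

def transformer_loss(transformer, inp, tgt):
    logits = transformer(inp)                          # (B, T, vocab_size)
    return F.cross_entropy(logits.transpose(1, 2), tgt, ignore_index=-100)
```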
I went into a bit more detail than usual, so let me know whether you find this kind of in-depth explanation useful, or whether I should just concentrate on a high-level explanation and stick to the key points of the paper, okay? So that was it. Now let's see some experiments and comparisons. In this table they compare against, and thereby justify the use of, a transformer instead of some other autoregressive model: as I mentioned, VQ-VAE originally used PixelCNN, and PixelSNAIL is an improvement from that very same family. Because PixelSNAIL trains roughly 2x faster, they have two comparisons: one where they train GPT-2 for the same amount of time as PixelSNAIL, and you can see there's an improvement because these numbers are lower, and lower is better for NLL; and one with the same number of steps, where it's even better. So that kind of justifies the usage of transformers, although I think at this point in time it's pretty obvious that transformers are better than those types of models. Okay, here are some results. Basically, they can do multiple things, and that's the premise of this paper. You'll later see that they are on par with other methods; they don't try to claim they're better, they're just on par, but they're really versatile and can be applied to many different modalities. One thing you can do is take this image here, encode it, and use it to prompt the transformer in order to generate different images by sampling. And when I say sampling: when you take this as the context and you're trying to predict the next token, that token will have some distribution output from the transformer, right? There are multiple ways you can sample from that distribution. One is a greedy approach, where you just take the token with the highest probability. Another, and that's what these folks did, is top-k sampling: for example, if k equals 3, you take the three tokens with the highest probability, so these three here, re-weight them so that they sum up to 1, and sample according to those weights. That's how you avoid sampling the super low-probability tokens, which could happen with regular sampling, i.e. not greedy, not top-k, but sampling that takes into consideration all of the tokens with their corresponding probabilities. So that's one task; aside from semantic masks, you can also use depth maps. As you can see, they get these weird pigeons, they scared the shit out of me, but yeah, I don't know. Additionally, you can see they can condition on these pose images, and you get these as output; I think this girl broke her leg, but okay, she'll survive. And finally, you have class conditioning: they conditioned on the class label of a frog and generated these novel, scary frogs. Additionally, they have something called rejection sampling, which we'll get to in a moment; using that, they can get much better images than these here, and I think these were not obtained using rejection sampling. Okay, let's see other results. So again, these are some awesome images they can obtain using semantic segmentation masks.
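Here is a minimal sketch of the top-k sampling just described (keep the k most probable tokens, renormalize, then sample); the function name and default k are purely illustrative.

```python
import torch

def sample_top_k(logits, k=3):
    """logits: (vocab_size,) unnormalized scores for the next token."""
    topk_vals, topk_idx = torch.topk(logits, k)
    probs = torch.softmax(topk_vals, dim=-1)          # renormalize over the top-k only
    choice = torch.multinomial(probs, num_samples=1)
    return topk_idx[choice]                           # index into the full vocabulary
```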
Here on the right you can see depth images, semantic masks, edges, super-resolution, a bunch of different tasks they can apply this to. As I mentioned, they say they're on par with other methods. For semantic image synthesis they compare with SPADE, and SPADE is a bit better than their approach, but yeah. Okay, finally, comparing to other methods on the face image synthesis task and looking at the FIDs, that's the Fréchet Inception Distance metric, we can see that this paper is a bit worse than PGGAN and a lot worse than StyleGAN2. They are way worse compared to StyleGAN, but as I said, they're not claiming to be better; in that sense, they're just more general, and they're the first to have used transformers to generate these higher-res images. Okay, similarly here, if we focus on FID: this acceptance rate is the rejection sampling I was mentioning. If you keep it at one, they arrive at FIDs of maybe 15, which is worse than many of these methods here, but if you start applying rejection sampling, they drop the FID down to five, which is much better or on par with these methods. But those other methods could presumably also use rejection sampling, so I guess it's not an entirely fair comparison. By the way, they say somewhere here that they use a ResNet-101 to do this rejection sampling, but if you used something better, like the CLIP model from OpenAI, you could achieve even better results, and you'll see those all around, like on Twitter, people using VQGAN plus CLIP to generate nice images. The whole trick with this rejection sampling is the following: you sample a couple of images from the VQGAN, you feed them into, for example, CLIP, and you prompt CLIP with a specific class, say class one, whatever that may be, like a frog. CLIP will tell you how probable each picture is to be of class one, so maybe this one gets 0.7, this one 0.85, this one 0.6, and then you just sort the images according to these scores and pick the top three or top five, whatever. It turns out this is an automatic way of cherry picking, so it additionally improves upon the results this paper reported.
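As a rough sketch of that CLIP-based reranking, here is one way it could look with OpenAI's open-source `clip` package; the prompt template, the number of kept samples, and loading the model inside the function are assumptions, and this is the popular community trick rather than the paper's ResNet-101-based rejection sampling.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

@torch.no_grad()
def rerank_with_clip(pil_images, prompt="a photo of a frog", keep=3, device="cpu"):
    model, preprocess = clip.load("ViT-B/32", device=device)
    images = torch.stack([preprocess(img) for img in pil_images]).to(device)
    text = clip.tokenize([prompt]).to(device)

    img_feat = model.encode_image(images)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)

    scores = (img_feat @ txt_feat.T).squeeze(-1)      # cosine similarity per image
    best = scores.argsort(descending=True)[:keep]     # keep the highest-scoring samples
    return [pil_images[i] for i in best]
```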
Okay, what they show here is how the downsampling in the encoder affects the results. They keep the latent space at 16 by 16, and the number you see is the factor f by which the spatial extent of the image has been reduced: f=16 means the original image was 256 by 256, while f=1 basically means the input image was 16 by 16, so during training they actually have to take a 16 by 16 crop and directly train the transformer on that. As you can see, the receptive field is much, much smaller when you have f=1, and because of that you get these weird Picasso-looking images; I really think somebody could earn some money by producing these and selling them as artwork, and if somebody does, consider becoming a patron of The AI Epiphany, that'd be nice. Okay, when they use f=2, so that's a bit bigger receptive field, you can see it gets better, although this one really freaks me out, and then it gets better and better, and finally, with f=16, assuming these are 256 by 256 images, the receptive field was spanning the whole image, and that's why we get much better results here. Additionally, these are much faster to generate, so there's a huge speed-up compared to regressing in pixel space, because there you'd have to regress all of the pixels, whereas here you just have to regress the 16 by 16 grid in the latent space and then decode it. Okay, I think that's pretty much it; let me just show you a couple more interesting things. Here they show they are better than DALL-E and VQ-VAE-2 when we look at the FIDs of the reconstructed images. They take the validation images, that's this FID/val, pass them through the encoder and then the decoder, and then calculate the FID, and this just shows that the reconstructions coming from this VQGAN paper are better than DALL-E's and VQ-VAE's, as we'd expect, because as I mentioned those are blurry: they're not using an adversarial component, nor did they use a perceptual loss. Okay, one more interesting thing I want to show you in the appendix. Basically, here is that visualized, what I just explained: you have DALL-E here, you have VQGAN here, and you can see DALL-E's images are much more blurry compared to VQGAN, which has the adversarial component, so that's really cool; GANs are famous for making images much crisper. They have many more diagrams and pictures included in the appendix, it's huge, maybe 50 pages, so do check out some of the images there. But that was, in a nutshell, everything I wanted to cover today. If you found this video useful, share it out, subscribe, and until next time, bye-bye!
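As a rough sketch of the reconstruction-FID evaluation mentioned above (pass validation images through the frozen encoder and decoder, then compare the two image distributions), assuming the torchmetrics FID implementation and hypothetical `encoder`/`decoder` modules:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

@torch.no_grad()
def reconstruction_fid(encoder, decoder, val_loader, device="cuda"):
    fid = FrechetInceptionDistance(normalize=True).to(device)  # expects float images in [0, 1]
    for images, _ in val_loader:                               # assumed (image, label) batches
        images = images.to(device)
        recon = decoder(encoder(images))                       # pass through the frozen model
        fid.update(images, real=True)
        fid.update(recon.clamp(0, 1), real=False)
    return fid.compute().item()
```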
[{"start": 0.0, "end": 7.58, "text": " What's up in this video I'm covering VQGAN or taming transformers for high resolution image synthesis paper by Patrick Esser,"}, {"start": 7.86, "end": 11.78, "text": " Robin Rhombach and Bjorn Omer from Heidelberg University."}, {"start": 11.9, "end": 18.34, "text": " So as you can see it's all about image synthesis and what's exciting about this paper is they are the first to use"}, {"start": 18.5, "end": 23.04, "text": " transformers and achieve high resolution images as the one you can see here on the screen."}, {"start": 23.04, "end": 28.2, "text": " And the thing about this paper is it directly builds off of VQVAE and"}, {"start": 28.2, "end": 33.56, "text": " that's a model that was previously developed by DeepMind and I've covered in my last video"}, {"start": 33.56, "end": 38.66, "text": " so do check it out if you don't know anything about it. But like aside from this paper, so aside from VQGAN,"}, {"start": 38.879999999999995, "end": 42.76, "text": " Dali from OpenAI also uses VQVAE as the backbone."}, {"start": 42.879999999999995, "end": 48.739999999999995, "text": " So that's an important building block currently for these generative models and yeah, do check that video out."}, {"start": 49.16, "end": 51.8, "text": " Okay, having said that let's see what this paper is all about."}, {"start": 51.8, "end": 54.760000000000005, "text": " So I mentioned transformers and they say here"}, {"start": 54.76, "end": 60.76, "text": " we demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers"}, {"start": 61.3, "end": 64.67999999999999, "text": " enables them to model and thereby synthesize higher resolution images."}, {"start": 64.67999999999999, "end": 70.03999999999999, "text": " So this CNN part is just the encoder part of the VQVAE as we'll soon see. Okay."}, {"start": 70.7, "end": 77.12, "text": " They say here that we hypothesize that low-level image structure is well described by local connectivity, i.e."}, {"start": 77.74, "end": 84.47999999999999, "text": " convolutional architecture, whereas this structural assumption ceases to be effective on higher semantic levels."}, {"start": 84.48, "end": 91.30000000000001, "text": " So the idea is to extract certain features which are kind of representative of what image constitutes of and then kind of use"}, {"start": 91.58, "end": 99.62, "text": " transformer to do this sequence token prediction as we'll soon see in order to generate and in order to generate like images"}, {"start": 99.62, "end": 105.9, "text": " which are globally coherent and that's where the attention pattern of the transformers which basically"}, {"start": 106.46000000000001, "end": 111.36, "text": " treats the underlying data as the fully connected graph comes like useful."}, {"start": 111.36, "end": 116.82, "text": " Okay, so having said that let's see the high-level overview of the architecture."}, {"start": 117.48, "end": 123.6, "text": " Okay, so first things first as you can probably recognize this bottom block here is VQVAE and"}, {"start": 124.24, "end": 127.08, "text": " the training itself of this VQGAN paper"}, {"start": 127.75999999999999, "end": 133.84, "text": " consists out of two stages. So very similar to VQVAE. 
Basically you first train the VQVAE"}, {"start": 134.32, "end": 137.76, "text": " module, so that's this one, and then you train a prior"}, {"start": 137.76, "end": 144.35999999999999, "text": " so in the VQVAE paper they use pixelCNN, here they're using transformers or more precisely"}, {"start": 144.35999999999999, "end": 148.32, "text": " they're using GPT-2 model from OpenAI. Okay, if you watch my previous video"}, {"start": 148.56, "end": 153.44, "text": " you'll see that this structure overall is very similar to the original VQVAE paper."}, {"start": 153.44, "end": 158.92, "text": " So what I've done additionally here is they swapped pixelCNN and they're not now using state-of-the-art sequence modeling"}, {"start": 159.48, "end": 165.12, "text": " model and that's transformer GPT-2 in this case and they also improved upon the reconstruction loss. They are not using MSC,"}, {"start": 165.12, "end": 167.84, "text": " they're using perceptual loss plus adversarial loss instead."}, {"start": 167.92000000000002, "end": 175.56, "text": " So there is a lot of overlap between the two projects, but like this project got a much better much better like imagery and high-res imagery."}, {"start": 175.56, "end": 179.88, "text": " So yeah, let's see how it works. So again, you have the VQVAE component here."}, {"start": 179.88, "end": 184.52, "text": " What you do is you have your image, you encode it, you get some latent"}, {"start": 184.72, "end": 192.04000000000002, "text": " representations and then you snap these latent vectors onto these codebook vectors so that you have this discrete representation here"}, {"start": 192.04, "end": 195.45999999999998, "text": " which you then feed back into the decoder in order to reconstruct the image."}, {"start": 195.48, "end": 201.07999999999998, "text": " So that's the high level picture of how that works. And again, like just as a reminder"}, {"start": 201.64, "end": 207.07999999999998, "text": " this system is trained in two stages. The stage one is you learn to train this VQVAE model,"}, {"start": 207.07999999999998, "end": 211.51999999999998, "text": " you learn how to kind of create these high quality"}, {"start": 212.64, "end": 217.0, "text": " latent vectors and then once you have that you freeze everything, you freeze the"}, {"start": 217.0, "end": 223.4, "text": " decoder, you freeze the encoder, you freeze the codebook and you train the transformer across this discrete latent space."}, {"start": 223.48, "end": 226.24, "text": " Okay, that's the high level idea of how the system is trained."}, {"start": 226.32, "end": 230.52, "text": " So now let's see what are the differences between VQVAE and this paper."}, {"start": 230.52, "end": 236.16, "text": " So the main difference here in the stage one is they are not using MSC."}, {"start": 237.0, "end": 240.88, "text": " So they are instead using perceptual loss plus adversarial loss."}, {"start": 240.88, "end": 244.6, "text": " So perceptual loss just it comes from the, to the best of my knowledge,"}, {"start": 244.6, "end": 250.38, "text": " it comes from the neural stall transfer literature. And so what you do is you basically take an image."}, {"start": 250.38, "end": 256.4, "text": " So you'll take this original image, you'll feed it into a pre-trained CNN like maybe"}, {"start": 257.32, "end": 262.18, "text": " VGG net or something. So this is your CNN. 
You feed in the original image."}, {"start": 262.18, "end": 268.48, "text": " So let's denote that as X and you feed in the reconstructed image and let's denote that as X hat."}, {"start": 268.48, "end": 274.04, "text": " And what you then do, you just take a certain layer from this VGG, from this pre-trained CNN"}, {"start": 274.04, "end": 279.74, "text": " and you extract those feature maps. So that will be something like this. Okay, so that's gonna be,"}, {"start": 280.22, "end": 286.20000000000005, "text": " these are the channels height and width. Okay, and you do the same thing for the original image."}, {"start": 286.20000000000005, "end": 292.36, "text": " So you also extract those features there and what you do then instead is you just apply MSC loss,"}, {"start": 292.36, "end": 297.76, "text": " but this time not in the image space, but across this feature space. That's the only difference."}, {"start": 297.76, "end": 303.96000000000004, "text": " And the reason that functions is because the classifier already learned how to extract some valuable features"}, {"start": 303.96, "end": 311.15999999999997, "text": " and it's just unreasonably effective. Okay, and aside from that, they also have this adversarial component."}, {"start": 311.15999999999997, "end": 319.88, "text": " So as you may know, the original VQVAE had a problem with blur and we already know that GANs kind of improved that."}, {"start": 319.88, "end": 325.79999999999995, "text": " They make much more crispier image. So it was like actually the original authors of VQVAE paper also"}, {"start": 325.79999999999995, "end": 331.88, "text": " like suggested using GANs in the future work. So here it is. So they're using PatchGAN."}, {"start": 331.88, "end": 337.8, "text": " So what it does, so what's the difference between that and your regular GAN? You usually have just a single output"}, {"start": 337.8, "end": 343.8, "text": " like full, like you have like something like a sigmoid activation function and then you're trying to train the discriminator"}, {"start": 343.8, "end": 349.08, "text": " to discriminate between real data and fake data. So here it's the same thing instead, but like the only difference is"}, {"start": 349.08, "end": 355.4, "text": " you're taking patches, like overlapping patches, like maybe this thing and then you'll have this and you'll have another patch"}, {"start": 355.4, "end": 361.64, "text": " and you have like four patches together and you can see it will, the discriminator will output some values here."}, {"start": 361.64, "end": 369.47999999999996, "text": " So if this image was a real image, we'd expect it to output, for example, all of these should be R or 1, whatever that,"}, {"start": 369.47999999999996, "end": 376.44, "text": " like whatever the value, however you decide to encode the real values. And so basically train, let's say these are like"}, {"start": 376.44, "end": 383.56, "text": " 0 to 1 values, you'll just do your minus loss. So basically your cross entropy and you're going to train this patch-based discriminator"}, {"start": 383.56, "end": 390.2, "text": " the usual way. 
Okay, so I'll get into a bit more details about the GAN loss, but this is roughly how it works like."}, {"start": 390.2, "end": 398.59999999999997, "text": " So they have patch-based GAN, all of that taken together, so the perceptual loss plus this GAN loss, so these two together"}, {"start": 399.47999999999996, "end": 407.0, "text": " give much more crispier image as a reconstruction here from this VQVA pipeline. Okay, the second step, once you train that,"}, {"start": 407.0, "end": 413.56, "text": " once you have these code vectors, what you do is you just train the transformer on a sequence prediction task."}, {"start": 413.56, "end": 419.56, "text": " So how the procedure looks like, you take an image, when you're training your transformer, you feed it into the encoder,"}, {"start": 419.56, "end": 427.64, "text": " which is frozen, you get some sequence, as you can see here, of discrete symbols, and these symbols are just indices into this"}, {"start": 427.64, "end": 435.72, "text": " codebook vector table, and you now try and train your transformer as a basically, as a sequence predictor."}, {"start": 435.72, "end": 445.0, "text": " So how that looks like is, this will be your target sequence, so this 14233, etc. And what you do is you'll have some"}, {"start": 445.0, "end": 450.68, "text": " special token, like a start of the sequence token, and then you'll just shift this whole array by one to the right,"}, {"start": 450.68, "end": 457.72, "text": " so you'll have one, 42, you have three, etc. And now you just feed this input into your transformer,"}, {"start": 458.84, "end": 465.72, "text": " you do a couple of layers of transformer the usual way, and then you basically do cross-entropy"}, {"start": 466.52, "end": 472.84, "text": " again to train and backprop through your transformer to train it. So here is an example we have here at 22,"}, {"start": 472.84, "end": 480.03999999999996, "text": " and so we know because of this here that the next symbol is 57, so how it will be trained is we'll find"}, {"start": 480.03999999999996, "end": 488.91999999999996, "text": " like element 57, so it may be here, we'll find its probability that, let me know that as like pi sub 57,"}, {"start": 489.64, "end": 496.52, "text": " and we'll just do minus log of that probability, okay? And so in order for this to, for the loss to get"}, {"start": 496.52, "end": 503.88, "text": " down to zero, you want to push p 57 to one, because we know that's the true symbol as you can see here."}, {"start": 503.88, "end": 509.56, "text": " So just usual sequence prediction stuff, nothing special there. So if you understand transformers,"}, {"start": 509.56, "end": 514.52, "text": " if you understand GANs, and if you understand VQVAEs, this is super trivial to understand, just"}, {"start": 514.52, "end": 521.0, "text": " connecting those components, you get the system, and voila. Okay, so that was the high level overview."}, {"start": 521.0, "end": 527.0, "text": " Now let's, let me just show you one more thing. So there is one thing I need to know, to mention here,"}, {"start": 527.0, "end": 532.52, "text": " and that's that compared to OpenAI, these guys are from Heidelberg, so I guess they didn't have"}, {"start": 532.52, "end": 538.92, "text": " as much compute as OpenAI, obviously, and so they're constrained to have only 16 by 16 discrete"}, {"start": 538.92, "end": 544.68, "text": " latent space here, so their transformer can only work with these many tokens, okay? 
So that's 256,"}, {"start": 544.68, "end": 551.0, "text": " and so that means you can't have a super big image in the input, because what I've also noticed is,"}, {"start": 551.0, "end": 556.04, "text": " if they go, if they kind of decimate the spatial extent, like the spatial dimension of this image,"}, {"start": 556.04, "end": 563.3199999999999, "text": " by more than 16x, so what happens is the reconstruction starts degrading severely."}, {"start": 563.3199999999999, "end": 571.64, "text": " So they are kind of limited to train the system when a 256 times 256 images, okay? Because of these"}, {"start": 571.64, "end": 576.52, "text": " two things I just mentioned. So this constrains you to, because reconstruction loss will suffer"}, {"start": 576.52, "end": 582.52, "text": " if you start, like, reducing the spatial dimension by a lot, and also, transformer cannot have more"}, {"start": 582.52, "end": 589.0, "text": " than 16 by 16, so take those two together, and you get, like, that their images are 256 by 256."}, {"start": 589.0, "end": 595.0, "text": " So now, how do they get the high res images is pretty simple, so they just do sliding window"}, {"start": 595.0, "end": 600.28, "text": " across this discrete space, so let me just show you that. They have it somewhere here, okay, so"}, {"start": 600.28, "end": 607.48, "text": " what they do is because, obviously, we have only, we can only fit 256 tokens into the transformer,"}, {"start": 607.48, "end": 613.56, "text": " they just do this sliding window across, and that's how they generate more than 256 tokens,"}, {"start": 613.56, "end": 618.04, "text": " and then they just plug that in into the decoder, which is a fully convolutional neural network,"}, {"start": 618.04, "end": 622.8399999999999, "text": " so we can just accept variable number tokens. So that's how I understood this works, if,"}, {"start": 622.8399999999999, "end": 627.64, "text": " I may be wrong, like, comment down if you think there's something else they're doing, but yeah,"}, {"start": 627.64, "end": 632.92, "text": " they also mentioned here that, so there is an assumption, implicit assumption they have in"}, {"start": 632.92, "end": 637.16, "text": " order for this to work, and they say here, our wiki again ensures that the available context is"}, {"start": 637.16, "end": 642.28, "text": " still sufficient to faithfully model images, as long as either the statistics of the data set are"}, {"start": 642.28, "end": 648.12, "text": " approximately spatially invariant, or spatial conditioning information is available. So, in"}, {"start": 648.12, "end": 653.48, "text": " practice, they mentioned that even if that's kind of violated, there is this CocoGAM paper where"}, {"start": 653.48, "end": 659.48, "text": " they show they can take these smaller patches and kind of combine them into a bigger image"}, {"start": 659.48, "end": 665.72, "text": " without any seams, and that sounds really good. Okay, so that's the whole, how the whole system"}, {"start": 665.72, "end": 671.72, "text": " looks like. 
Again, once, what I haven't mentioned is once you train the system, how they generate"}, {"start": 671.72, "end": 678.6800000000001, "text": " novel images is, so you just prompt your transformer with a special symbol, it will generate some token,"}, {"start": 678.68, "end": 683.64, "text": " like three, I don't know, then you feed three back into the input because it's all regressive, outcome"}, {"start": 683.64, "end": 689.9599999999999, "text": " some novel symbol, like maybe 21, and again, you put 21 here and you repeat, okay, until you get"}, {"start": 689.9599999999999, "end": 695.88, "text": " 16 by 16 tokens here or more, and then you just feed that through this decoder, which was frozen"}, {"start": 695.88, "end": 701.56, "text": " from the stage one of the training, and out comes the image. Okay, so that's it. Now let's start"}, {"start": 701.56, "end": 707.9599999999999, "text": " digging into the details. First thing, I want to contrast this work with ImageGPT, and they"}, {"start": 707.96, "end": 712.0400000000001, "text": " mentioned here, so previous work, so this is ImageGPT, which applied transformers to image"}, {"start": 712.0400000000001, "end": 718.6800000000001, "text": " generation, demonstrated promising results for images up to a size of 64 by 64 pixels, but due"}, {"start": 718.6800000000001, "end": 724.44, "text": " to the quadratically increasing cost in sequence length, cannot simply be scaled to higher resolutions."}, {"start": 724.44, "end": 729.88, "text": " So as you may know, like transformers have that, like, quadratic complexity because every token is"}, {"start": 729.88, "end": 735.1600000000001, "text": " attending to every other token, so you have n square complexity, and so this ImageGPT paper,"}, {"start": 735.16, "end": 740.4399999999999, "text": " instead of working in the discrete latent space, like here, they were working directly in the"}, {"start": 740.4399999999999, "end": 744.12, "text": " image space, and because they are working in the image space, they cannot generate, like, images of"}, {"start": 744.12, "end": 749.0799999999999, "text": " that high of a resolution, like when we have a decoder and working when we are regressing,"}, {"start": 749.0799999999999, "end": 753.88, "text": " when we are predicting next tokens in the discrete latent space. So that's what that"}, {"start": 753.88, "end": 759.72, "text": " approach was bound to fail, but like, you can see the inception of this very same idea was already"}, {"start": 759.72, "end": 769.0, "text": " there and in the VQVA paper, so yeah. Okay, so let me kind of explain the loss components in a bit"}, {"start": 769.0, "end": 775.64, "text": " more detail. So I assume you already are familiar with VQVAE, so again, we have reconstruction loss"}, {"start": 775.64, "end": 783.0, "text": " in the original paper, so this is just the L2 squared, so your MSU loss, so what this term does,"}, {"start": 783.0, "end": 788.44, "text": " and what this term does is the following. 
You take those codebook vectors and you push them"}, {"start": 788.44, "end": 795.32, "text": " towards the encoded vectors, so this E of X is just this thing here, so these are the E of X vectors,"}, {"start": 795.32, "end": 800.7600000000001, "text": " okay, and SG just means stop gradient, so you freeze those and you push the code vectors towards"}, {"start": 800.7600000000001, "end": 807.1600000000001, "text": " those and you also then freeze the encoded vectors and you push the, no sorry, you actually, you"}, {"start": 807.1600000000001, "end": 813.08, "text": " freeze the codebook vectors and you push the encoded vectors towards those, and by doing that, you get,"}, {"start": 813.08, "end": 819.32, "text": " that's the VQVAE in a nutshell, and now the interesting thing this paper does is they change"}, {"start": 819.32, "end": 825.5600000000001, "text": " this, they're not using MSC anymore, they sit here, so we replace the L2 loss used in the original"}, {"start": 825.5600000000001, "end": 831.32, "text": " paper for the reconstruction loss by a perceptual loss and introduce an adversarial training"}, {"start": 831.32, "end": 837.8000000000001, "text": " procedure with a patch-based generator, discriminator. So here is your usual adversarial loss, so we have"}, {"start": 837.8, "end": 844.92, "text": " this equation here and what happens is, so just a small recap, basically discriminator wants to"}, {"start": 844.92, "end": 851.16, "text": " maximize this function, this term here, and generator wants to minimize it. In order for"}, {"start": 851.16, "end": 857.24, "text": " the discriminator to maximize this, what it needs to do is, so this is discriminator and you want to"}, {"start": 857.24, "end": 865.4799999999999, "text": " push it to one, because for real images, right, for X, because log of one will be zero and that's"}, {"start": 865.48, "end": 872.12, "text": " maximizing this term here, and here, because these are fake images, X hat is fake, i.e. generated"}, {"start": 872.12, "end": 877.72, "text": " images, so you want to make sure that the discriminator outputs zero here, so that means it can discriminate"}, {"start": 877.72, "end": 883.16, "text": " between the fake and the real images, and when this goes to zero, log of one is zero, so that means"}, {"start": 883.16, "end": 889.88, "text": " we've just maximized this whole equation, okay? On the other hand, the generator wants to trick the"}, {"start": 889.88, "end": 895.88, "text": " discriminator into thinking that the images it generates are real, so it wants to instead, like,"}, {"start": 895.88, "end": 901.16, "text": " minimize this thing and it minimizes it by just pushing this to one. 
When you have this going to"}, {"start": 901.16, "end": 908.36, "text": " one, log of zero tends to minus infinity, and so basically that's minimizing this equation, so that's"}, {"start": 908.36, "end": 915.24, "text": " that's just a minimax game, like, setup of your regular GAN training, and the final loss of the"}, {"start": 915.24, "end": 921.5600000000001, "text": " VQVAE of this modified VQVAE model is you just want to, as you can see, we have E, that's encoder,"}, {"start": 921.5600000000001, "end": 928.12, "text": " G generator, and Z, the codebook table, we want to minimize, so we minimize according to, like,"}, {"start": 928.12, "end": 935.48, "text": " tweaking those parameters, we minimize this VQ loss, and then we also actually have"}, {"start": 936.6, "end": 942.52, "text": " the GAN component, which is maximized by discriminator and minimized by these three"}, {"start": 942.52, "end": 948.6, "text": " components here, okay? So the generator actually tries to minimize this GAN component and that's"}, {"start": 948.6, "end": 953.96, "text": " that's it, that's your, that's the whole loss function. Additionally, they have this lambda"}, {"start": 953.96, "end": 960.1999999999999, "text": " weight, which seems a bit arbitrary, to be honest, I didn't see any ablations on this, but, like, they"}, {"start": 960.1999999999999, "end": 966.68, "text": " found an adaptive way to kind of tweak this lambda, which is a coefficient that stands with this GAN"}, {"start": 966.68, "end": 973.56, "text": " component of the loss function, okay? So that's the first stage of the training, and the second"}, {"start": 973.56, "end": 980.04, "text": " part, as I already mentioned, is training the transformer. So they say here, so thus, after"}, {"start": 980.04, "end": 986.68, "text": " choosing some ordering of the indices in S, where S is just, that's your token sequence, so that's,"}, {"start": 986.68, "end": 993.56, "text": " this is S, all of these symbols are S, and the ordering they mentioned here is the following"}, {"start": 993.56, "end": 999.8, "text": " thing. So let me just clarify this part a bit. So you can, you usually take the rest of the order,"}, {"start": 999.8, "end": 1004.68, "text": " so what you'll usually do, so you have this table, let me just redraw it here for clarity,"}, {"start": 1005.7199999999999, "end": 1010.92, "text": " what you'll do is you need to somehow impose an order, because this is a matrix, and there is no"}, {"start": 1011.56, "end": 1016.1999999999999, "text": " inherent, like, linear, like, you have to linearize this, right? And usually people just do, like,"}, {"start": 1016.1999999999999, "end": 1022.04, "text": " rest order, so that's how you're going to predict the tokens. So, because in order to have next"}, {"start": 1022.04, "end": 1027.56, "text": " token defined, you need to kind of make this assumption of a rest order, but I also tried"}, {"start": 1027.56, "end": 1035.56, "text": " doing spirals, like outgoing spirals and in-going spirals, and the rest order turned out to work"}, {"start": 1035.56, "end": 1042.6, "text": " the best, so they kind of stuck with it and used it as the other papers prior to this one. Okay, so,"}, {"start": 1043.56, "end": 1048.2, "text": " so after choosing some ordering of the indices in S, image generation can be formulated as"}, {"start": 1048.2, "end": 1053.32, "text": " autoregressive next index prediction. 
Given previous indices, the transformer learns to predict"}, {"start": 1053.32, "end": 1059.32, "text": " the distribution of possible next indices, i.e. this term here, to compute the likelihood of the"}, {"start": 1059.32, "end": 1066.1200000000001, "text": " full representation, so the joint is just modeled as a product of these conditionals, okay? And"}, {"start": 1066.1200000000001, "end": 1070.1200000000001, "text": " finally, in order to train the transformer, you just apply the cross entropy already mentioned."}, {"start": 1070.1200000000001, "end": 1076.3600000000001, "text": " Okay, now, interesting part is this one. If you want to condition this, and they condition the"}, {"start": 1076.36, "end": 1081.32, "text": " transformer on multi- like on very different modalities, so you can condition this transformer"}, {"start": 1081.32, "end": 1085.24, "text": " on a bunch of things. You can condition it on a class, you can condition it on a like image that"}, {"start": 1085.24, "end": 1090.6799999999998, "text": " has semantic segmentation applied to it, etc. So, they show here, let me show you just some examples."}, {"start": 1090.6799999999998, "end": 1096.4399999999998, "text": " So, here, as you can see, you have this image which is semantically segmented, and you're trying to,"}, {"start": 1096.4399999999998, "end": 1100.12, "text": " given that as a condition, you're trying to generate some novel images here, as you can see"}, {"start": 1100.12, "end": 1107.56, "text": " on the right. Okay, so let me show you how they kind of make that work. So, they say here, if the"}, {"start": 1107.56, "end": 1113.8, "text": " conditioning information C has spatial extent, so as that semantic masks, we first learn in other"}, {"start": 1113.8, "end": 1123.32, "text": " VQ again to obtain again an index-based representation R with the newly obtained codebook Z. Due to the"}, {"start": 1123.32, "end": 1129.0, "text": " autoregressive structure of the transformer, we can then simply prepend R to S. So, if you"}, {"start": 1129.0, "end": 1133.4, "text": " don't understand that, let me kind of clarify it. I think it's fairly simple. Now, let's see how"}, {"start": 1133.4, "end": 1138.36, "text": " this procedure will look like. Basically, you'll have an additional VQ again, okay? So, this is your"}, {"start": 1138.36, "end": 1144.12, "text": " encoder one, you have a discrete latent space, you have a decoder. So, you train this thing on those"}, {"start": 1144.12, "end": 1149.96, "text": " semantic masks, so this is decoder one, okay? So, you train this thing on your like semantic masks,"}, {"start": 1149.96, "end": 1154.04, "text": " and once it's trained, you just freeze it up, and then you do the following thing. So, you have your"}, {"start": 1154.04, "end": 1161.56, "text": " target VQ again, so this one will just be encoding the regular images, and what you'll do is the"}, {"start": 1161.56, "end": 1166.6, "text": " following during the training of this one. 
So, you take the input image you encoded, you get the"}, {"start": 1166.6, "end": 1172.28, "text": " target sequence, so this is your target sequence, and how your input will be formed is the following."}, {"start": 1172.28, "end": 1178.92, "text": " So, you take a semantic mask, you'll just get some tokens here, you flatten it out, and you prepend"}, {"start": 1178.92, "end": 1185.5600000000002, "text": " it here, and now what you do is you just take your special token, you place it here,"}, {"start": 1186.44, "end": 1192.6000000000001, "text": " you take your sequence here, you just shift it by one position to the right, and this is your"}, {"start": 1192.6000000000001, "end": 1198.52, "text": " input to the transformer, this is your target, and you again train it on a simple sequence prediction"}, {"start": 1198.52, "end": 1204.2, "text": " problem. So, that's how they'll train this model, and then later during generation, obviously,"}, {"start": 1204.2, "end": 1210.8400000000001, "text": " you're just going to prepend the semantic mask and the prompting mask token, and you're then just"}, {"start": 1210.8400000000001, "end": 1215.72, "text": " going to sample from your transformer to get the output image, which you'll just feed through the"}, {"start": 1215.72, "end": 1219.96, "text": " transformer. So, the output sequence, which you'll then just feed to the decoder, okay?"}, {"start": 1221.0, "end": 1227.64, "text": " So, yeah, that was pretty much a thorough explanation of how this system works. Hopefully,"}, {"start": 1227.64, "end": 1232.6000000000001, "text": " that was useful. I went into a bit more details than usually, so let me know whether you find"}, {"start": 1232.6, "end": 1237.3999999999999, "text": " this kind of in-depth explanation useful, or should I just concentrate on like high-level explanation"}, {"start": 1237.3999999999999, "end": 1244.28, "text": " and just stick to the key points of the paper, okay? So, that was it. Now, let's see some"}, {"start": 1244.28, "end": 1250.36, "text": " experiments and some comparisons. Okay, in this table, they just compare and kind of justify"}, {"start": 1250.36, "end": 1255.8, "text": " the use of transformer instead of some other or regressive model, like as I mentioned,"}, {"start": 1255.8, "end": 1261.08, "text": " the QVAE originally used PixelCNN. PixelSnail is just an improvement from that very same family,"}, {"start": 1261.08, "end": 1267.24, "text": " and you can see here that because this one trains 2x faster, they have two comparisons. So, they have"}, {"start": 1267.24, "end": 1273.32, "text": " a comparison where they train GPT-2 for the same amount of time as P-Snail, and basically, you can"}, {"start": 1273.32, "end": 1277.6399999999999, "text": " see we have an improvement here because these numbers are lower. Lower is better for NLL,"}, {"start": 1277.6399999999999, "end": 1283.0, "text": " and again, when you have the same number of steps, it's even better. So, that kind of justifies the"}, {"start": 1283.0, "end": 1287.8799999999999, "text": " usage of transformers, although I think at this point of time, it's pretty obvious that transformers"}, {"start": 1287.88, "end": 1296.1200000000001, "text": " are better than those types of models. Okay, here are some results. Basically, they can do"}, {"start": 1296.1200000000001, "end": 1301.96, "text": " multiple things, and that's the premise of this paper. 
You'll later see that they are on-pair"}, {"start": 1301.96, "end": 1306.6000000000001, "text": " with other methods, but they don't try and say that they are better. They are just on-pair,"}, {"start": 1306.6000000000001, "end": 1311.0800000000002, "text": " but they are really diverse. They can apply this to many different modalities. So, one thing you"}, {"start": 1311.0800000000002, "end": 1317.3200000000002, "text": " can do is you can take this image here, you can encode it and use it to prompt the transformer"}, {"start": 1317.32, "end": 1324.84, "text": " in order to generate by sampling different images. And when I say sampling, again, when you take this"}, {"start": 1324.84, "end": 1330.28, "text": " as the context and you're trying to predict the next token, okay, so this token will have some"}, {"start": 1330.28, "end": 1337.6399999999999, "text": " distribution output from the transformer, right? And so, there are multiple ways you can do sampling"}, {"start": 1337.6399999999999, "end": 1342.28, "text": " from distribution. One is to do a greedy approach, so you just take the token that has the highest"}, {"start": 1342.28, "end": 1349.8, "text": " probability. One is to do, and that's what these folks did, is top-k sampling. That means, for"}, {"start": 1349.8, "end": 1354.6, "text": " example, if k equals 3, you'll take the three tokens with highest probability, so that would be"}, {"start": 1356.2, "end": 1362.36, "text": " these three here. And then you just re-rate them so that they sum up to 1, and you'll just sample"}, {"start": 1362.36, "end": 1368.92, "text": " according to their weights. And that's how you avoid sampling these super small probability"}, {"start": 1368.92, "end": 1374.2, "text": " tokens, which would be the case if you were using your regular sampling. So, not a greedy, not top-k,"}, {"start": 1374.2, "end": 1378.76, "text": " but just a regular sampling where you're taking into consideration all of the tokens with their"}, {"start": 1378.76, "end": 1385.72, "text": " corresponding probabilities. So, that's one task you can, aside from semantic masks, you can use"}, {"start": 1385.72, "end": 1391.0800000000002, "text": " depth masks. As you can see, they get these weird pigeons. They scared the shit out of me, but like,"}, {"start": 1391.0800000000002, "end": 1397.72, "text": " yeah, I don't know. And additionally, you can see they can kind of condition on these"}, {"start": 1397.72, "end": 1406.52, "text": " pose kind of images, and you get these as output. Like, I think this girl broke her"}, {"start": 1406.52, "end": 1411.8, "text": " leg, but yeah, okay, she'll survive. And finally, you have class conditioning. So, basically,"}, {"start": 1411.8, "end": 1418.92, "text": " they conditioned on the class label of a frog, and they generate these novel, scary frogs. Okay,"}, {"start": 1418.92, "end": 1424.28, "text": " and additionally, they have something called rejection sampling. We'll get to that in a moment,"}, {"start": 1424.28, "end": 1429.6399999999999, "text": " but like using that, they can achieve much better. They can get much better images than these here."}, {"start": 1429.6399999999999, "end": 1437.72, "text": " I think these were not obtained by using this rejection sampling. Okay, let's see other results."}, {"start": 1437.72, "end": 1443.8, "text": " So, again, these are some awesome images they can obtain using semantic segmentation masks. 
Here on"}, {"start": 1443.8, "end": 1449.72, "text": " the right, you can see again, like depth images, semantic masks, edges, super resolution, a bunch"}, {"start": 1449.72, "end": 1454.92, "text": " of different tasks they can apply this on. As I mentioned, they say they're on par with other"}, {"start": 1454.92, "end": 1459.96, "text": " methods. So, for this semantic image synthesis, they compare with the spade. It's even a bit"}, {"start": 1459.96, "end": 1466.92, "text": " better. So, spade is a bit better than their approach, but yeah. Okay, finally, comparing"}, {"start": 1466.92, "end": 1473.72, "text": " to other methods on the face synthesis, face image synthesis task, looking at the FIDs, we can see"}, {"start": 1473.72, "end": 1481.56, "text": " that this paper is a bit worse than this PGGAN, and it's a lot worse than StyleGAN2, looking at"}, {"start": 1481.56, "end": 1487.56, "text": " the FID metrics, so fresh-ase inception distance metric. They are way worse compared to StyleGAN,"}, {"start": 1487.56, "end": 1492.84, "text": " but as I said, they're not claiming they're better. In that sense, they're just more general,"}, {"start": 1492.84, "end": 1496.92, "text": " and yeah, and they're the first to have used transformers to generate these higher-res images."}, {"start": 1496.92, "end": 1505.24, "text": " Okay, similarly here, if we focus on FID, we can see that without... So, keeping this acceptance"}, {"start": 1505.24, "end": 1510.3600000000001, "text": " rate, so that's the rejection sampling I was mentioning about. So, if you keep it at one,"}, {"start": 1510.3600000000001, "end": 1516.6000000000001, "text": " you can see that they arrive at FIDs of maybe 15, and that's worse than many of these methods here."}, {"start": 1516.6000000000001, "end": 1521.64, "text": " Okay, but if you start applying this rejection sampling, they drop the FID down to five, which"}, {"start": 1521.64, "end": 1526.04, "text": " is much better or on par with these methods, but they can also use rejection sampling, so I guess"}, {"start": 1526.04, "end": 1531.32, "text": " this is not that fair of a comparison. By the way, they use ResNet, they say it's somewhere here,"}, {"start": 1531.32, "end": 1537.6399999999999, "text": " they use ResNet 101 to do this rejection sampling, so here, but instead, if you'd use"}, {"start": 1538.28, "end": 1543.1599999999999, "text": " something better, like a clip model from OpenAI, they can achieve even better results, and you'll"}, {"start": 1543.1599999999999, "end": 1550.2, "text": " see those all around, like on Twitter, people using VQGAN plus clip to generate nice images,"}, {"start": 1550.2, "end": 1554.6, "text": " and so the whole trick with those rejection sampling is the following. So, you output"}, {"start": 1554.6, "end": 1561.0, "text": " a couple of images from your... You sample a couple of images from your... 
Like, from VQGAN,"}, {"start": 1561.0, "end": 1567.24, "text": " and what you then do is you feed these into, for example, clip, and you prompt clip with a specific"}, {"start": 1567.24, "end": 1572.1999999999998, "text": " class, so for example, you have class one, whatever, that may be like a frog or whatever,"}, {"start": 1572.1999999999998, "end": 1579.08, "text": " and what clip will do, it will say how probable this picture is to be of a class one, so maybe"}, {"start": 1579.08, "end": 1586.04, "text": " this one will be, I don't know, 0.7, this one will be 0.85 or something, this one will be, I don't"}, {"start": 1586.04, "end": 1593.24, "text": " know, 0.6, and so you can just sort them, sort the images according to these scores, and just pick the"}, {"start": 1593.24, "end": 1600.4399999999998, "text": " top three or top five, whatever, and what it turns out, it turns out this kind of is an automatic way"}, {"start": 1600.4399999999998, "end": 1606.04, "text": " of cherry picking, so it additionally improves upon the results that this paper achieved, reported."}, {"start": 1606.04, "end": 1613.3999999999999, "text": " Okay, what I show here is how that down sampling in the encoder affects the results, so what I do"}, {"start": 1613.3999999999999, "end": 1620.76, "text": " is they keep the latent space at 16 by 16, and then what I do is you can see here the number of"}, {"start": 1620.76, "end": 1626.12, "text": " times that the spatial extent of the image has been reduced, so 16 means that the original image"}, {"start": 1626.12, "end": 1634.76, "text": " was 256 by 256, one means basically input image was 16 by 16, and that means that during the training,"}, {"start": 1634.76, "end": 1642.28, "text": " they actually have to take a crop of 16 by 16 here and directly train the transformer on that,"}, {"start": 1642.28, "end": 1647.64, "text": " and as you can see, the receptive field is much, much, much smaller when you have f1, and because"}, {"start": 1647.64, "end": 1654.12, "text": " of that, you get these weird Picasso-looking images, and I really think, like, somebody can earn some"}, {"start": 1654.12, "end": 1658.92, "text": " money by just producing these and selling them as artwork. If somebody earns some money, consider"}, {"start": 1658.92, "end": 1666.92, "text": " becoming a patron of the epiphany, yeah, that'd be nice. Okay, when they use f2, so that's a bit higher"}, {"start": 1667.88, "end": 1673.8000000000002, "text": " receptive field, you can see it kind of gets better, but this really freaks me out, and then"}, {"start": 1673.8000000000002, "end": 1678.8400000000001, "text": " additionally, it gets better and better, and finally, when you have 16, assuming these are 256"}, {"start": 1678.8400000000001, "end": 1683.64, "text": " images, meaning basically you had a receptive field was spanning the whole image, and that's"}, {"start": 1683.64, "end": 1689.0800000000002, "text": " why we get much better results here. Additionally, these are much faster to generate, so we have a"}, {"start": 1689.0800000000002, "end": 1694.76, "text": " huge improvement in speed up compared to when you're trying to regress this in the pixel space,"}, {"start": 1694.76, "end": 1701.24, "text": " because here you'll have to regress all of the pixels, whereas here you'll just have to regress"}, {"start": 1701.8000000000002, "end": 1707.5600000000002, "text": " the 16 by 16 in the latent space and then decode it, so that's why we get a huge speed up. 
Okay,"}, {"start": 1707.56, "end": 1713.3999999999999, "text": " I think that's pretty much it. Let me just show you a couple more things that's kind of interesting."}, {"start": 1713.3999999999999, "end": 1720.44, "text": " So here they show that they are better than Delle and VQVAE2 when we look at the FIDs of the"}, {"start": 1720.44, "end": 1724.76, "text": " reconstructed images. So you just, they just compare, they take the validation images, for"}, {"start": 1724.76, "end": 1730.9199999999998, "text": " example, so that's this FID slash VAL, so they pass it through the encoder and then decoder,"}, {"start": 1730.9199999999998, "end": 1735.8799999999999, "text": " and then they just calculate the FID, and so this just shows that the reconstruction is coming from"}, {"start": 1735.88, "end": 1741.16, "text": " this VQGAN paper are better than Delle and VQVAE, as we can expect, because as I mentioned, they are"}, {"start": 1741.16, "end": 1745.88, "text": " blurry because they're not using adversarial component, neither did they use perceptual loss."}, {"start": 1745.88, "end": 1753.88, "text": " Okay, one more interesting thing I want to show you in the appendix. So basically here is that"}, {"start": 1753.88, "end": 1759.96, "text": " visualizable, I just explained. You have Delle here, you have VQGAN here, you can see these images"}, {"start": 1759.96, "end": 1765.24, "text": " are much more blurry compared to VQGAN, which has adversarial component, so that's really"}, {"start": 1765.24, "end": 1773.8, "text": " cool. GANs are like famous for making images much more crispy. They have many more diagrams and"}, {"start": 1773.8, "end": 1779.0, "text": " pixTiv included in the appendix, it's huge, it's got maybe 50 pages, you can check it out, some of"}, {"start": 1779.0, "end": 1783.4, "text": " the images there, but that was in a nutshell everything I wanted to cover today. If you found"}, {"start": 1783.4, "end": 1800.76, "text": " this video useful, share it out, subscribe, and until next time, bye-bye!"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=VZFVUrYcig0
VQ-VAEs: Neural Discrete Representation Learning | Paper + PyTorch Code Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany In this video I cover VQ-VAEs papers: 1) Neural Discrete Representation Learning 2) Generating Diverse High-Fidelity Images with VQ-VAE-2 (the only difference is the existence of a hierarchical structure of latents and priors) Many novel interesting AI papers such as DALL-E and Jukebox from OpenAI as well as VQ-GAN build off of VQ-VAEs, so it's fairly important to have a good grasp of how they work. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ VQ-VAE1 paper: https://arxiv.org/abs/1711.00937 ✅ VQ-VAE2 paper: https://arxiv.org/abs/1906.00446 ✅ PyTorch code: https://colab.research.google.com/github/zalandoresearch/pytorch-vq-vae/blob/master/vq-vae.ipynb ✅ ELBO explained: https://mbernste.github.io/posts/elbo/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 01:10 A tangent on autoencoders and VAEs 07:50 Motivation behind discrete representations 08:25 High-level explanation of VQ-VAE framework 11:20 Diving deeper 13:05 VQ-VAE loss 16:20 PyTorch implementation 23:30 KL term missing 25:50 Prior autoregressive models 28:50 Results 32:20 VQ-VAE two ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #vqvae #discretelatents #generativemodeling
What's up? In this video I'm covering this older paper called Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu from DeepMind. They introduced the model called VQ-VAE, and the reason I'm covering it is that it's a seminal paper: a lot of the newer research in the field, such as DALL-E, Jukebox and VQGAN, uses VQ-VAE as a building block, so I think it's really important to understand this paper. Aside from diving deep into the paper as I always do, I'll also walk you through a code snippet in PyTorch so you can really relate the code to the semantics of the paper. And if you find it useful, just share the video out - that's the best thing you can do to help me. Awesome, let's dig into it. So, quoting the paper: our model, the Vector Quantized Variational Autoencoder, differs from VAEs in two key ways: the encoder network outputs discrete rather than continuous codes, and the prior is learned rather than static. Let me make a small digression into what autoencoders are and what variational autoencoders are before we dig into VQ-VAEs, okay? Autoencoders are pretty simple: you have an encoder network, call it E, something called a latent code (latent is just a fancy word for hidden), and a decoder. Take autoencoding images as an example: you take an image as the input, say an ImageNet image of shape 224 x 224 x 3, project it down into a lower-dimensional space, maybe 30 dimensions, and then reproject it back into the original image space, making sure the reconstruction error is as low as possible. So you basically make sure the output image is as close as possible to the input, even though you probably lost some information by encoding it into the lower-dimensional space. You usually do that with an MSE loss, since these are continuous-valued images, say with pixels in [0, 1]. Okay, so that's the autoencoder. What variational autoencoders do additionally is make this deterministic bottleneck layer stochastic: the encoder in a VAE parameterizes a distribution instead of outputting a single latent vector. Usually people use isotropic Gaussians. It doesn't have to be that way, but Gaussians have really nice computational properties, so people usually use them - just don't think VAEs have to use Gaussians, they're a much more abstract framework than that. So the encoder will output, for example, a 30-dimensional vector of means and a 30-dimensional vector of standard deviations. You usually output the log of the standard deviation, because you want the standard deviation to be positive, and since it's the argument of a log the domain is greater than zero, so everything stays computationally nice. Once you have the mus and sigmas, instead of passing a deterministic code through the decoder, you sample a vector from this distribution and feed that into the decoder (a minimal sketch of this encode-sample-decode pass follows below).
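To make that concrete, here is a minimal sketch of the encode-sample-decode pass just described, written in PyTorch. The class name, layer sizes and the choice of a flattened 224 x 224 x 3 input with a 30-dimensional latent are my own assumptions for illustration, not anything prescribed by the paper.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal illustration of the VAE encode -> sample -> decode pass."""
    def __init__(self, input_dim=224 * 224 * 3, latent_dim=30, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)       # vector of means
        self.to_logvar = nn.Linear(hidden, latent_dim)   # log-variance keeps sigma positive
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x.flatten(start_dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # sample from N(mu, sigma^2) via reparameterization
        return self.decoder(z), mu, logvar
```

The `mu + std * noise` line is the usual reparameterization trick, which is what lets gradients flow through the sampling step.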
So when you train a VAE, the loss has two terms. The first is the reconstruction term - let me call it R. The second is a KL divergence between the prior and whatever distribution the encoder has output. Let me try to break that down a bit. You can treat the KL term as a regularization term, but there is a deeper probabilistic explanation for why it's there and why we use the KL rather than something else. The reason is that we're trying to maximize something called the ELBO, the evidence lower bound, and it turns out that maximizing the ELBO is the same as minimizing the reconstruction loss plus minimizing the KL divergence between the approximate posterior - that's q(z|x), the distribution over the latent code z you infer given your input x, say your image - and the prior, which as I said is usually a Gaussian. So you can treat this KL either as regularization - that would be the neural-network perspective - or as a consequence of variational inference, where what you're really doing is making your approximate posterior as close as possible to the true posterior p(z|x); after some math, that ends up as maximizing the ELBO, which boils down to minimizing the reconstruction loss and this KL divergence between the prior (usually a Gaussian, because it's computationally convenient) and the approximate posterior. All in all, the reason you do this is that you want to use these models as generative models: you want to be able to plug in some latent vector and have a novel image come out. The reason you can't do that with ordinary autoencoders is that their latent space has no constraints. If you projected those 30-dimensional codes down to a 2D space, you'd get something like this: your cat image projected here, your dog image there, some other dog here, then some car, whatever. And if you took a random latent vector in between and fed it through the decoder, you'd probably get a totally random image. What you want instead is to impose structure on the latent space: you want it to be continuous, so that if you interpolate between two points you get meaningful images all the way from point A to point B. VAEs do exactly this, and it's because of the KL term: it regularizes the latent space and makes it much more continuous and meaningful than a standard autoencoder's. So that was your 101 on autoencoders and VAEs. Now let's see how these VQ-VAEs work. Getting back to the paper: they mention that VAEs have a continuous latent space - those latent vectors are continuous - whereas, as we'll soon see, the VQ-VAE has discrete latent variables. Also, the prior in a VAE is static: as I said, it's a standard Gaussian (the full two-term VAE objective is sketched below before we move on).
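For completeness, here is what that two-term VAE objective looks like as code for the `TinyVAE` sketch above; this is a minimal sketch, assuming an MSE reconstruction (which matches the Gaussian-likelihood assumption discussed later in the video) and the textbook closed-form KL to a standard Gaussian.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar):
    """ELBO-style objective: reconstruction term + KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_hat, x.flatten(start_dim=1), reduction="sum")
    # Closed-form KL between the diagonal Gaussian N(mu, sigma^2) and the standard Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```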
In VQ-VAEs, on the other hand, as we'll soon see, we train an autoregressive model in order to learn the prior. Those are the main differences, and we'll get to them in a minute. OK, so the paper says: we concentrate on discrete representations, which are potentially a more natural fit for many of the modalities we are interested in. Language is inherently discrete; similarly, speech is typically represented as a sequence of symbols, and images can often be described concisely by language. Furthermore, discrete representations are a natural fit for complex reasoning, planning and predictive learning. So that's some motivation for why we want to discretize the latent space. They also avoid something called posterior collapse by doing this, which is an additional benefit of the VQ-VAE. OK, let me jump into the diagram and explain the method, and then we'll slowly break the components down into smaller pieces. On a high level we again have an autoencoder-like structure, as you can see: we have the image, and we encode it using some CNN into this grid of feature vectors. Then comes the part that's peculiar to VQ-VAEs: we take each of these vectors - take this one, for example - and we snap it onto one of the vectors from this thing called a codebook. The codebook contains vectors that are themselves learnable and trainable, and we want to find the closest one, where closeness is measured by the L2 norm. Once you've done that you can map the vector to an index - maybe the closest one was codebook vector one - so you put a 1 at that position in the table of symbols, as you can see in the figure. Then you index into the codebook using those indices, extract the corresponding discrete latent vectors, and only then feed those into the decoder, which outputs what is hopefully a perfect reconstruction of your input image. That's the high-level overview. As you can see, one difficulty with this is: how do you actually backprop? The forward pass is pretty straightforward, but the backward pass is tricky because the snapping onto the nearest codebook vector is not differentiable. So what they do instead is something called a straight-through gradient. Basically, the quantized volume has the same shape as the encoder output, so whatever the gradients are for the decoder's input, they just copy-paste them onto the encoder's output, and then you can backprop through the encoder as usual. The only tricky part is copying the gradients over from the decoder side to the encoder side - don't worry, we'll see the details in the code later. The intuition behind this is: if an encoder output lands really close to a codebook vector, then the gradient computed for that codebook vector should also be informative for the encoder output - by nudging the encoded vector in that direction, the next time we feed this image forward it may snap to a codebook vector that lowers the reconstruction loss. That's a rough intuition for why this works. The assumption is that the encoded vectors stay close to the codebook vectors, and if that holds, these gradients are also good gradients for the encoder outputs.
And then the reconstruction loss goes down. OK, that's the high-level picture. Now let me slowly break down the equations - they're pretty easy, just a formalized version of what I explained verbally. This is the encoded vector, these are the codebook vectors, and you're finding the codebook vector that is closest to the encoded vector, where closeness is defined by the L2 norm - that's what the double bars with the subscript 2 mean. The index k of that closest vector defines the approximate posterior, and as you can see it's a one-hot, deterministic distribution given the input. The next equation says the same thing I just explained: once you've found that k, you assign the corresponding codebook vector z_q(x) as a proxy for the encoder output z_e(x) - that's the discretization step. So we find the closest codebook entry, say e_1, and use it in place of the encoder output. Then they note that there is no real gradient defined for equation 2; however, they approximate the gradient similarly to the straight-through estimator and just copy the gradients from the decoder input z_q(x) to the encoder output z_e(x) - that's the red line they draw in the figure. And since the encoder's output and the decoder's input share the same dimensional space, those gradients contain useful information for how the encoder has to change its output to lower the reconstruction loss - which is what I already explained. OK, finally, this is what the loss looks like. Contrast it with a VAE, where we had a reconstruction term and a KL term. Here we also have the reconstruction term - that's the log p(x | z_q(x)), the likelihood - plus two more terms. First let me explain why we get an MSE out of these log-likelihood expressions, since this can confuse people who are new to the channel. The reason is that we assume the likelihood is Gaussian, which is reasonable if you're reconstructing images with continuous pixel values. Under that assumption, if you replace p with a Gaussian density - some constant times e raised to a negative squared error - then because of the log, the constant drops out and the log and the exponential cancel (they're inverses of each other), and you're left with just the squared term. That's why the squared loss is the right reconstruction loss under the assumption that the likelihood of your data is Gaussian. So, hopefully that was helpful. The other two terms are interesting because, compared to VAEs, there is no KL here; we have these two instead. The sg just means stop-gradient, which in PyTorch amounts to detaching that part from the computational graph so it's frozen. In the first of the two terms you freeze the encoder outputs and push the codebook vectors toward their corresponding encoded vectors; in the second, the commitment term, you freeze the codebook vectors and push the encoded vectors toward them. So that's how the loss looks - it's fairly simple, and we'll see it in code in a couple of minutes. Written out, the pieces we just walked through look roughly like this.
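Since the video only describes these equations verbally, here they are written out; this is my reconstruction in the paper's notation, with $z_e(x)$ the encoder output, $e_j$ the codebook vectors and $\operatorname{sg}$ the stop-gradient:

$$q(z = k \mid x) = \begin{cases} 1 & \text{for } k = \operatorname{argmin}_j \lVert z_e(x) - e_j \rVert_2 \\ 0 & \text{otherwise,} \end{cases}$$

$$z_q(x) = e_k, \quad k = \operatorname{argmin}_j \lVert z_e(x) - e_j \rVert_2,$$

$$L = \log p(x \mid z_q(x)) + \lVert \operatorname{sg}[z_e(x)] - e \rVert_2^2 + \beta \, \lVert z_e(x) - \operatorname{sg}[e] \rVert_2^2 .$$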
So what they say here is: the decoder optimizes the first loss term only, the encoder optimizes the first and the last loss terms, and the embeddings are optimized by the middle loss term. Let's see why that is. First things first, the embeddings are learned through that second term, as I said. The encoder is learned through the commitment term - we're pushing the encoder output toward the frozen codebook vectors - and also through the reconstruction loss, because, as you remember, we have the straight-through gradient, so the reconstruction term trains the encoder as well. The decoder, on the other hand, is trained only by the reconstruction loss; the other two terms don't contribute to its updates. OK, that was a pretty detailed explanation of how it works. Now I thought I'd walk you through how this looks in PyTorch, and hopefully it will be easier to understand once you connect the figures in the paper with concrete code. I usually only cover papers in my videos, so I'm doing this as an experiment - let me know what you think down in the comments. OK, so there is this VectorQuantizer class - I'll link the code in the description, obviously - and let's see how it works. It takes an embedding dimension and a number of embeddings: the number of embeddings is the number of vectors in your codebook, and the embedding dimension is the dimensionality of each of those vectors. Once we have that, we just create an embedding object in PyTorch and initialize it uniformly - that part is easy. And we have this commitment cost, which is basically the beta term, just the weight on the commitment term of the loss. Now let's see how the forward function looks. First, some context, taking the example from the snippet: for a BCHW tensor of shape 16 x 64 x 32 x 32, we first convert it to BHWC and then flatten it. What happens here is that you feed 16 images into your encoder and out comes this tensor. So we still have 16 images, but, as you probably know, with CNNs you usually start with 3 channels and, as you go through the layers, you get more and more channels while the spatial dimensions shrink. That's what we see here: we went from 3 to 64 channels, and the height and width went down to 32 by 32. So they first permute it into the BHWC layout and then flatten it, and all of the resulting vectors of size 64 will be quantized independently - each one gets snapped to a codebook vector. In other words, the channel dimension is the space in which we quantize; all other dimensions are flattened and treated as separate examples to quantize - 16,384 of them in this case, since 16 x 32 x 32 = 16,384. So that's the context.
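The constructor the video describes looks roughly like this. It follows the public pytorch-vq-vae notebook linked in the description as I remember it, so treat the exact initialization details as that implementation's choices (and my recollection of them) rather than something mandated by the paper:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_embeddings, embedding_dim, commitment_cost):
        super().__init__()
        self._embedding_dim = embedding_dim      # D: size of each codebook vector
        self._num_embeddings = num_embeddings    # K: number of codebook vectors
        # The codebook itself: a K x D table of learnable vectors, initialized uniformly.
        self._embedding = nn.Embedding(num_embeddings, embedding_dim)
        self._embedding.weight.data.uniform_(-1 / num_embeddings, 1 / num_embeddings)
        self._commitment_cost = commitment_cost  # beta, the weight on the commitment term
```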
Now let's see the code. The first thing is the permutation into that BHWC format, followed by contiguous(), which just makes sure the tensor's memory locations are actually laid out contiguously - some low-level memory-management housekeeping. Then comes the flatten, which converts the tensor into that roughly 16,000-by-64 representation. Next is the distances calculation - the part we use to find the closest codebook vectors. You could go through every single symbol in that expression, but I'll keep it high level. flat_input has shape N by 64 - N is about 16,000 in this example, but let me just use the symbol N - and we have the embedding table, which is, as you remember, K by 64. What we end up with is a table of shape N by K: for every single encoded vector, its distance to each of the codebook vectors - that's why we have K there. Once we have this distances matrix, we do a simple argmin over dimension 1. Let me draw that pictorially: distances is a matrix; you focus on a single row and find the lowest entry - that's the argmin over dimension 1. Say the lowest is in column three, meaning codebook vector three is the closest; then that row gets mapped to the index 3 - that's your discrete symbol. The same happens for every row, so one vector might map to 1, another to 3, and so on. Once they have this vector of indices, they additionally cast it into a slightly different representation: instead of keeping the scalar 3, they expand it into an N-by-K matrix of one-hot vectors - positions zero, one, two, three - with a single 1 at index three and zeros everywhere else. It's just a different representation of the very same information, and that's the encodings data structure. Why do we do that? Because we can then multiply the encodings matrix with the embedding table. The embedding table - the codebook - is K by 64, so multiplying the one-hot row with it extracts exactly codebook vector number three, and that's the quantization process we've been talking about. Multiplying the whole N-by-K encodings matrix with the K-by-64 codebook gives an N-by-64 table containing the quantized vectors. So that's how the quantization is done: a simple matrix multiplication. A condensed sketch of this whole step is below.
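Here is roughly what that permute-flatten-distances-argmin-one-hot-matmul pipeline looks like, continuing the VectorQuantizer class sketched above. It mirrors the notebook the video references as far as I recall it, so take the exact lines as approximate; the expanded-square distance trick is the all-pairs version of the L2 distance described in the prose:

```python
    def forward(self, inputs):                             # inputs: (B, C, H, W) encoder output
        inputs = inputs.permute(0, 2, 3, 1).contiguous()   # BCHW -> BHWC
        input_shape = inputs.shape
        flat_input = inputs.view(-1, self._embedding_dim)  # (N, D) with N = B*H*W

        # Squared L2 distance from every encoded vector to every codebook vector -> (N, K),
        # using ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b for all pairs at once.
        distances = (torch.sum(flat_input ** 2, dim=1, keepdim=True)
                     + torch.sum(self._embedding.weight ** 2, dim=1)
                     - 2 * torch.matmul(flat_input, self._embedding.weight.t()))

        encoding_indices = torch.argmin(distances, dim=1).unsqueeze(1)   # (N, 1) symbols
        encodings = torch.zeros(encoding_indices.shape[0], self._num_embeddings,
                                device=inputs.device)
        encodings.scatter_(1, encoding_indices, 1)                       # (N, K) one-hot

        # One-hot rows times the (K, D) codebook pick out the nearest vectors -> (N, D).
        quantized = torch.matmul(encodings, self._embedding.weight).view(input_shape)
        # ...the loss terms and the straight-through trick continue in the next snippet.
```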
And finally, the interesting part: this e_latent_loss is that commitment term - as you can see, we put a stop-gradient onto the codebook vectors, which is implemented as detach in PyTorch. We do the same thing for the second term, the one where the stop-gradient goes onto the encoded vectors. The encoded vectors are called inputs in this code, and we just call detach on them - and that's it, each of these is a simple MSE loss, and the commitment one is weighted by beta. So that's your loss. OK, and now for the fun part: how do we actually implement the straight-through gradient? This one line implements the estimator. If you look at the expression, inputs and minus inputs obviously cancel out, so in the forward pass you just end up with quantized. The reason they write it this way is that the (quantized - inputs) part is detached, so it's excluded from the computational graph during backprop, and what remains for the backward pass is just inputs. That means whatever gradients arrive at the quantized output get copy-pasted straight onto inputs - exactly the thing I explained pictorially earlier: whatever the gradients here are, you copy them over to here. And once the gradients are on the encoder's output, everything inside the encoder is differentiable, so you can compute the gradients for every single weight and update the network as usual. OK. So you may wonder why we are not using the KL term. Let me try to give you some intuition. They say: since we assume a uniform prior for z - compare that to VAEs, which use a standard Gaussian as the prior - the KL term that usually appears in the ELBO (the evidence lower bound) is constant with respect to the encoder parameters and can thus be ignored for training. Let me digest that a little. The prior is uniform over the K codebook indices, so each index has probability 1/K. Our approximate posterior q(z|x) is, as I mentioned, pretty much deterministic: it puts all of its mass on the single closest codebook index, so it's a one-hot distribution. If you compute the KL divergence between these two, you get 1 times the log of (1 divided by 1/K), which is log K. So the KL divergence is a constant equal to log K, and you can't minimize it precisely because it's constant. They say as much in the paper: our proposal distribution is deterministic, and by defining a simple uniform prior over z we obtain a KL divergence that is constant and equal to log K. That's why it can be dropped - it's constant in this setup. I just wanted to spell that out a bit more. OK, so for the prior part, they say: whilst training the VQ-VAE, the prior is kept constant and uniform, as we just saw; after training, we fit an autoregressive distribution over z, p(z), so that we can generate x via ancestral sampling. That may sound a bit scary, so let me demystify it - but first, here is the tail end of that forward function with the loss terms and the straight-through line put together.
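This continues the forward method sketched above and again follows the linked notebook as I remember it (it assumes `import torch.nn.functional as F`); the names e_latent_loss and q_latent_loss are that implementation's, not the paper's:

```python
        # Commitment term: stop-gradient on the codebook side, pulls the encoder
        # outputs toward their chosen codebook vectors (the beta-weighted term).
        e_latent_loss = F.mse_loss(quantized.detach(), inputs)
        # Codebook term: stop-gradient on the encoder side, pulls the codebook
        # vectors toward the encoder outputs.
        q_latent_loss = F.mse_loss(quantized, inputs.detach())
        loss = q_latent_loss + self._commitment_cost * e_latent_loss

        # Straight-through estimator: the forward value is `quantized`, but the detached
        # difference drops out of the graph, so gradients flow back to `inputs` unchanged.
        quantized = inputs + (quantized - inputs).detach()

        return loss, quantized.permute(0, 3, 1, 2).contiguous()  # back to BCHW
```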
Back to the prior. If you're familiar with language modeling - if you've watched my GPT-3 video, which I'll link somewhere here, or any transformer video - this will be really easy. What you do here is, in a way, just try to predict the next token. In language modeling the token represents a subword, a word or a character; here you're predicting the next codebook symbol, but the semantics of the process is the same. Imagine you flatten that grid of indices into a sequence, and you have basically the same setup as language modeling. Take a simple example - not a whole matrix of symbols, just three of them, say 3, 2, 5. To train this autoregressive model you'd input a special start-of-sequence symbol, then input 3, then input 2; given the special token the model should predict 3, given 3 it should predict 2, and given 2 it should predict 5 - always the next token in the sequence. Those go in as inputs, these are the targets, you use a cross-entropy loss and you backprop through the weights of the model - that's how you train this token-predictor model. And it's autoregressive, which means that once the model outputs a token, you feed it back into the input and keep generating like that. So there are really two stages here. First, you train the VQ-VAE end to end to learn how to reconstruct images; at that point you have a fully trained codebook, encoder and decoder. You freeze all of those and then train this predictor model on the discrete latents. When you want to generate novel images, you prompt the autoregressive model with the special token, it outputs some symbol - let's say 3 - you feed the 3 back into the input, it generates something else, maybe 4, and you keep going until you have enough elements to fill the latent grid, which is 32 by 32 in this example. Once you have that, you feed it through the frozen decoder and out comes a novel image. That's how the whole system works together - hopefully you understand it a bit better now, and the sketch below shows the training-and-sampling loop in code.
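To make the two-stage recipe concrete, here is a deliberately minimal sketch of fitting a prior over the discrete codes and then sampling from it. The paper itself uses a PixelCNN over the 32 x 32 index grid; the tiny LSTM below is just a stand-in to show the next-symbol-prediction mechanics, and every name and hyperparameter in it is my own assumption rather than anything from the paper or the notebook.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 512            # codebook size; index K is reserved as the start-of-sequence token
SEQ_LEN = 32 * 32  # flattened 32x32 latent grid

class TinyPrior(nn.Module):
    """Stand-in autoregressive prior over flattened codebook indices."""
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(K + 1, hidden)   # +1 for the start token
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, K)

    def forward(self, idx):                        # idx: (B, T) of symbols
        h, _ = self.rnn(self.embed(idx))
        return self.head(h)                        # (B, T, K) next-symbol logits

def prior_loss(model, codes):                      # codes: (B, 32, 32) from the frozen VQ-VAE
    seq = codes.flatten(1)                         # (B, 1024)
    start = torch.full((seq.size(0), 1), K, dtype=torch.long, device=seq.device)
    inputs = torch.cat([start, seq[:, :-1]], dim=1)  # shift right, prepend start token
    logits = model(inputs)
    return F.cross_entropy(logits.reshape(-1, K), seq.reshape(-1))

@torch.no_grad()
def sample_codes(model, device="cpu"):
    seq = torch.full((1, 1), K, dtype=torch.long, device=device)  # start token
    for _ in range(SEQ_LEN):   # ancestral sampling, recomputing the prefix each step for simplicity
        probs = F.softmax(model(seq)[:, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)
        seq = torch.cat([seq, nxt], dim=1)
    return seq[:, 1:].view(1, 32, 32)              # drop start token; feed to the frozen decoder
```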
OK, so having hopefully demystified that, let's see some results. The first thing they mention is that they can compress the data by a lot, which should be fairly obvious by now. They say: in this experiment we show that we can model x, which is 128 by 128 by 3 in their example, by compressing it to a 32 by 32 discrete space with K = 512 - K being the size of the codebook - with a purely deconvolutional p(x|z), which is the decoder. That's a large reduction in bits: the 9 shows up because, with a naive fixed-length encoding of the symbols, you need 9 bits to represent 512 symbols, so you go from 128 x 128 x 3 x 8 bits of pixels to 32 x 32 x 9 bits of codes, which works out to roughly a 42.7x reduction. And here you can see some reconstructions. What I want you to notice - because it will matter for the later DALL-E paper, which I'll cover, and for VQGAN as well - is that there is some blurriness in these images. Take this dog: the reconstruction is pretty good, but there is some blurriness happening here, it's especially noticeable in this spot, and for this other image you can see the reconstruction is much blurrier still. You can take a look at the paper yourself, but that blurriness is the thing to notice. So those were the reconstructions. And here, after running the sampling process in the latent space with that autoregressive model, you can sample novel unconditional images, and you can see the results are really diverse. That's one of the advantages these likelihood-based models have over GANs: because of the way they're trained, they cover all of the modes of the data, whereas GANs can sometimes omit some modes or even fall into something called mode collapse, where they output a single image or very similar-looking imagery. They also do this not only for images: they can compress and reconstruct audio, video, et cetera, and I found the audio part really interesting. This is the original audio and this is the reconstruction: the waveforms are not the same, but what's interesting is that they carry the same content. A couple of nice remarks they make: as can be heard from the provided samples and as shown in Figure 7 - that's the reconstruction figure; you can find the samples on the website and listen at your own pace - the reconstruction has the same text content, but the waveform is quite different and the prosody, the way the sentence is accented in the voice, is altered. This means the VQ-VAE has, without any form of linguistic supervision, learned a high-level abstract space that is invariant to low-level features and only encodes the content of the speech - which is awesome, since we don't have any labels here. And: while samples drawn from even the best speech models, like the original WaveNet, sound like babbling, samples from the VQ-VAE contain clear words and part-sentences. What I recommend is that you go and listen to the samples from this model and from WaveNet - parallel WaveNet, the follow-up to WaveNet, was actually used in production at Google for the Google Assistant; it may still be, I'm not sure. OK, I think that's pretty much it for this paper. Let me just briefly explain how this VQ-VAE-2 version differs from version 1. The first thing you can notice is that it has much better reconstructions - you can generate much better imagery using this model. The main difference: everything else remains the same, the loss remains the same; the only difference is a hierarchical structure of the latents as well as of the priors.
Concretely, they have top-level codebook vectors and mid-level codebook vectors at different resolutions: the top level has to focus on the higher-level structure of the image, while the other level can focus on details, texture and so on. Once you combine them in the decoder, you can reconstruct much higher-quality imagery. The same idea applies to the priors: you train an autoregressive model over the top-level latent space and then use it to condition the next-level prior, another autoregressive model, which generates the lower-level latents, and then you decode those into an image. That is pretty much the only difference between this model and the previous one - the hierarchical structure - and, as you can see, they get much better imagery. Comparing this even to something like StyleGAN v2, the images look really, really good, although StyleGAN can generate much higher-resolution imagery, I think. Either way, this is really impressive for a VAE - usually GANs produce much better-looking images than VAEs do. Here are some more images; I think these are class-conditional on ImageNet, though it doesn't matter that much. Yeah, these are some high-resolution images, and you can see they're pretty good - still not perfect, there are some distortions happening here and there, but maybe I'm wrong. Hopefully you found this video insightful. Let me know what you think down in the comments - whether you like this format and whether the code part was useful. If it was, I'm going to try to include more of it in my next videos. So until next time, subscribe and share this video, and bye-bye.
[{"start": 0.0, "end": 8.52, "text": " What's up? In this video I'm covering this older paper called Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinales and Corey..."}, {"start": 10.32, "end": 10.96, "text": " Yes."}, {"start": 11.36, "end": 12.88, "text": " From DeepMind."}, {"start": 13.040000000000001, "end": 21.080000000000002, "text": " Anyways, they introduced this model called VQVAE and the reason I'm covering it is because it's a seminal paper"}, {"start": 21.080000000000002, "end": 28.28, "text": " and many of the novel research coming from the AI field, like such as Dali and Jukebox and VQGAN,"}, {"start": 28.28, "end": 34.68, "text": " all of them are using VQVAE as a building block and so I think it's really important to understand this paper."}, {"start": 34.68, "end": 42.32, "text": " So aside from diving deep into the paper as I always do, I'll also show you and walk you through a code snippet in PyTorch"}, {"start": 42.32, "end": 46.96, "text": " so you can really relate the code with the semantics of this paper."}, {"start": 46.96, "end": 51.52, "text": " And if you found it useful, just share this video out and that's the best thing you can do to help me."}, {"start": 51.52, "end": 53.32, "text": " Awesome, let's dig into it."}, {"start": 53.32, "end": 59.8, "text": " So our model, the Vector Quantized Variational Autoencoder, differs from VAEs in two key ways."}, {"start": 59.8, "end": 66.76, "text": " So the encoder network outputs discrete rather than continuous codes and the prior is learned rather than static."}, {"start": 66.76, "end": 75.24000000000001, "text": " So let me just make a small digression into what autoencoders are and what variational autoencoders are before we can dig into VQVAEs, okay?"}, {"start": 75.24000000000001, "end": 81.24000000000001, "text": " So autoencoders are pretty simple. So you basically have something called an encoder network."}, {"start": 81.24, "end": 89.24, "text": " So I'll just depict it like this. So it's E. You have something called like a latent code and latent is just a fancy way of saying hidden."}, {"start": 89.24, "end": 93.24, "text": " So this is a latent vector here and then you have a decoder, okay?"}, {"start": 93.24, "end": 104.24, "text": " And let me maybe take an example of autoencoding images. So you take an image as the input, you project it down into this lower dimensional space."}, {"start": 104.24, "end": 112.24, "text": " So maybe this is your ImageNet image, 224 times 224 times 3."}, {"start": 112.24, "end": 116.24, "text": " And you may be projected down to, I don't know, maybe 30 dimensions here."}, {"start": 116.24, "end": 120.24, "text": " And then you have to reproject it back into the original image space."}, {"start": 120.24, "end": 125.24, "text": " So this and make sure that the reconstruction error is as low as possible."}, {"start": 125.24, "end": 133.24, "text": " So you just basically make sure that this image is the same as this one, even though you kind of probably lost some information."}, {"start": 133.24, "end": 137.24, "text": " Encoding this into this lower dimensional space."}, {"start": 137.24, "end": 146.24, "text": " And you usually do that like using MSC loss. These are your continuous value images like from 0 to 1, you'll be using MSC."}, {"start": 146.24, "end": 157.24, "text": " Okay, so that's autoencoder. 
And basically what variational autoencoders do additionally is they instead of having this deterministic bottleneck layer, they make it stochastic."}, {"start": 157.24, "end": 164.24, "text": " So what encoder in VAE does is it will parameterize a distribution instead of a static latent vector."}, {"start": 164.24, "end": 169.24, "text": " So usually people use isotropic Gaussians. That doesn't have to be that way."}, {"start": 169.24, "end": 173.24, "text": " But like that's because Gaussians have really nice computational properties."}, {"start": 173.24, "end": 178.24, "text": " And so people usually use those. But don't think that VAE have to use Gaussians."}, {"start": 178.24, "end": 181.24, "text": " They are much more abstract framework than just using Gaussians."}, {"start": 181.24, "end": 192.24, "text": " So what you'll have usually is you'll have encoder and you'll be outputting, for example, 30 dimensional vector of means."}, {"start": 192.24, "end": 196.24, "text": " And you'll be outputting 30 dimensional vector of standard deviations."}, {"start": 196.24, "end": 203.24, "text": " And you usually do log of standard deviation because you want to make sure that standard deviation is positive."}, {"start": 203.24, "end": 209.24, "text": " And because it's like an argument of a log function, that means domain is greater than zero."}, {"start": 209.24, "end": 213.24, "text": " So yeah, everything is computationally nice here."}, {"start": 213.24, "end": 218.24, "text": " So once you have U's and sigma's, what you do instead of just passing it through the decoder,"}, {"start": 218.24, "end": 225.24, "text": " you sample a vector from this distribution and then you feed it back into the decoder."}, {"start": 225.24, "end": 230.24, "text": " And so now the difference, so when you train these VAEs, you have two terms basically."}, {"start": 230.24, "end": 235.24, "text": " And basically you also have the reconstruction term. Let me call it R."}, {"start": 235.24, "end": 243.24, "text": " So that's that part. And you also have something called like you're doing a KL divergence between the prior"}, {"start": 243.24, "end": 249.24, "text": " and between whatever distribution you've outputted here. And let me kind of try and break that down a bit."}, {"start": 249.24, "end": 252.24, "text": " So you can treat the KL term as a regularization term."}, {"start": 252.24, "end": 259.24, "text": " But there is a deeper probabilistic explanation for why that is and why are we using KL instead of something else."}, {"start": 259.24, "end": 266.24, "text": " So basically the reason is we are trying to maximize this thing called ELBO or evidence lower bound."}, {"start": 266.24, "end": 272.24, "text": " And it turns out that like maximizing ELBO is the same thing as basically trying to minimize reconstruction loss,"}, {"start": 272.24, "end": 279.24, "text": " plus trying to minimize the KL divergence between. So this thing here is basically your approximate posterior."}, {"start": 279.24, "end": 285.24, "text": " So this is Q of Z. So you're trying to infer the Z, the latent code, given your input, right?"}, {"start": 285.24, "end": 294.24, "text": " So given the X. So that may be your image. And what that second term in VAE is, you're trying to just minimize the KL between this."}, {"start": 294.24, "end": 301.24, "text": " So this is your approximate posterior and the prior. 
And you usually put prior to be like, as I said, Gaussian."}, {"start": 301.24, "end": 309.24, "text": " And so you can treat this either as a regularization. So that would be like your neural network kind of a perspective."}, {"start": 309.24, "end": 319.24, "text": " Or you can treat this as a consequence of variational inference, where basically the reason you're doing this is you're trying to make your posterior"}, {"start": 319.24, "end": 324.24, "text": " as close as possible to the true posterior, which would be P of Z given X."}, {"start": 324.24, "end": 332.24, "text": " And after like doing some math, you end up maximizing ELBO, which boils down at the end to just maximizing, to minimizing reconstruction loss"}, {"start": 332.24, "end": 342.24, "text": " and minimizing this KL divergence between your prior, which is usually Gaussian, because as I said, it's computationally convenient, and this approximate posterior."}, {"start": 342.24, "end": 348.24, "text": " So all in all, the reason you're doing this is because you're trying to use these as generative models."}, {"start": 348.24, "end": 357.24, "text": " So that means you want to be in a situation where you can just plug in some latent vector here and out comes some novel image."}, {"start": 357.24, "end": 363.24, "text": " And the reason you can do this with these ordinary autoencoders is because their latent space doesn't have any constraints."}, {"start": 363.24, "end": 370.24, "text": " So you basically, if we were to project this 30 dimensional vector into a 2D space, we'd end up with something like this."}, {"start": 370.24, "end": 378.24, "text": " So maybe your cat image would be like projected here, your dog image would be projected here, some other dog would be projected here,"}, {"start": 378.24, "end": 385.24, "text": " and then you'd have like some like car, whatever. And the thing is, if you tried and queried, if you took a random latent vector like here"}, {"start": 385.24, "end": 394.24, "text": " and tried and feed it through the decoder, you'd probably get a totally random function, like a totally random image."}, {"start": 394.24, "end": 398.24, "text": " And what you want to do instead, you want to impose certain structure into this latent space."}, {"start": 398.24, "end": 404.24, "text": " So you want to make sure that it's continuous. So you want to make sure that if you're interpolating between these two points,"}, {"start": 404.24, "end": 411.24, "text": " you're getting the meaningful images all the way down from this point A all the way down to this point B."}, {"start": 411.24, "end": 420.24, "text": " And VAEs do exactly this because of this KL term here, they are regularizing the latent space"}, {"start": 420.24, "end": 425.24, "text": " and making it much more continuous and meaningful compared to your standard autoencoders."}, {"start": 425.24, "end": 436.24, "text": " So that was your 101 for autoencoders and VAEs. Now let's see basically how these VQ VAEs work."}, {"start": 436.24, "end": 444.24, "text": " Getting back here, so they mentioned VAEs have continuous space. So as you can see here, these latent vectors will be continuous,"}, {"start": 444.24, "end": 450.24, "text": " whereas we'll soon see that the VQ VAE has a discrete latent variables."}, {"start": 450.24, "end": 457.24, "text": " On the other hand, so the prior is static here. 
So as I said, so your prior here is a standard like Gaussian."}, {"start": 457.24, "end": 466.24, "text": " And here like in VQVs, as we'll soon see, we'll be training some autoregressive model to in order to learn the prior."}, {"start": 466.24, "end": 469.24, "text": " And those are some of the differences and we'll see those in a minute."}, {"start": 469.24, "end": 476.24, "text": " OK, so we concentrate on discrete representations, which are potentially a more natural fit for many of the modalities we are interested in."}, {"start": 476.24, "end": 484.24, "text": " Language is inherently discrete. Similarly, speech is typically represented as a sequence of symbols and images can often be described concisely by language."}, {"start": 484.24, "end": 489.24, "text": " Furthermore, discrete representations are a natural fit for complex reasoning, planning and predictive learning."}, {"start": 489.24, "end": 494.24, "text": " So that's some motivation for why we want to discretize the latent space."}, {"start": 494.24, "end": 498.24, "text": " They also avoid something called posterior collapse by doing this."}, {"start": 498.24, "end": 502.24, "text": " And that's additional benefit of using quantized VAEs."}, {"start": 502.24, "end": 512.24, "text": " OK, let me let me jump into this diagram and explain the method and then we'll slowly start kind of this like breaking down the components into smaller pieces."}, {"start": 512.24, "end": 517.24, "text": " So on a high level, again, we have this autoencoder like high level structure, as you can see."}, {"start": 517.24, "end": 523.24, "text": " So we have the image, we encode it using some CNN into into this representation here."}, {"start": 523.24, "end": 527.24, "text": " And then what we do is so this is the part that's peculiar to VQVs."}, {"start": 527.24, "end": 529.24, "text": " So what we do is we take these vectors."}, {"start": 529.24, "end": 538.24, "text": " So we take this one, for example, and what we do is we snap it onto one of these vectors from this thing called a codebook."}, {"start": 538.24, "end": 544.24, "text": " So this thing here is called a codebook and it contains vectors which are also learnable and trainable."}, {"start": 544.24, "end": 550.24, "text": " And so we want to find the closest one where closeness is defined by the L2 norm."}, {"start": 550.24, "end": 553.24, "text": " And once you do that, you can basically map this one."}, {"start": 553.24, "end": 557.24, "text": " So maybe the closest one was maybe this one, vector one."}, {"start": 557.24, "end": 561.24, "text": " And so you'd put one here, as you can see in this symbolic table."}, {"start": 561.24, "end": 571.24, "text": " And then what you do is you basically index using these indices into the codebook and you extract these like discrete latent vectors."}, {"start": 571.24, "end": 580.24, "text": " And then only then do you feed those into the decoder and out comes the image, which is hopefully perfectly reconstructed version of your input image."}, {"start": 580.24, "end": 582.24, "text": " So that's the high level overview."}, {"start": 582.24, "end": 588.24, "text": " And as you can see, one difficulty with this is how do you actually back prop?"}, {"start": 588.24, "end": 599.24, "text": " So the forward prop is pretty straightforward, but the back prop is kind of difficult because this process of snapping onto the nearest like codebook vector is not differentiable."}, {"start": 599.24, "end": 602.24, "text": " So what they instead do is 
something called straight through gradient."}, {"start": 602.24, "end": 607.24, "text": " Like basically, as you can see, this shape here has the same shape as this thing here."}, {"start": 607.24, "end": 616.24, "text": " So what I do is whatever the gradients for this volume here are, they just copy paste them into the encoder and then you can back prop through the encoder as usual."}, {"start": 616.24, "end": 623.24, "text": " So the only part that's tricky is to just basically copy paste gradients here over to here from the decoder to the encoder."}, {"start": 623.24, "end": 627.24, "text": " And we'll see like don't worry. We'll see the details in the code later as well."}, {"start": 627.24, "end": 638.24, "text": " So the intuition behind this is that if if if you're encoding vectors, so if this vector here is encoded really closely to this codebook vector."}, {"start": 638.24, "end": 647.24, "text": " So as you can see, so if this was a gradient for the codebook vector, we can expect that by moving the encoded vector a bit to the right."}, {"start": 647.24, "end": 655.24, "text": " So the next time we feed forward that image will maybe snapped to this vector and will lower the reconstruction loss by doing that."}, {"start": 655.24, "end": 657.24, "text": " So that's some rough intuition why this works."}, {"start": 657.24, "end": 664.24, "text": " So hopefully the assumption is that is that these encoded vectors are close to the codebook vectors."}, {"start": 664.24, "end": 671.24, "text": " And if that's the case, then these gradients kind of also are good gradients for those encoding vectors."}, {"start": 671.24, "end": 675.24, "text": " And then you can just lower the the the basically the reconstruction loss."}, {"start": 675.24, "end": 683.24, "text": " OK, so that's the high level picture. Now, let me kind of slowly start kind of breaking down these equations."}, {"start": 683.24, "end": 689.24, "text": " And it's pretty easy. So this is just formalized version of what I've just explained verbally."}, {"start": 689.24, "end": 695.24, "text": " So this is your encoded vector. This is your codebook vector. And you're trying to find that codebook vector,"}, {"start": 695.24, "end": 703.24, "text": " which is the closest to this encoded vector where closeness is defined using L2 norm, as you can see by these brackets and number two here."}, {"start": 703.24, "end": 710.24, "text": " So once we find that index, that's our like that's the so we have this is our approximate posterior."}, {"start": 710.24, "end": 717.24, "text": " And as you can see, it's basically one hot is deterministic distribution of our input variable."}, {"start": 717.24, "end": 722.24, "text": " OK, so this just says the same thing I just explained."}, {"start": 722.24, "end": 731.24, "text": " So once you find that K, you basically assign to so you assign to this Zeddy, you actually assign it as a proxy."}, {"start": 731.24, "end": 739.24, "text": " You put this Zedq, which is just the closest vector to that Zeddy. And that's what we do in this discretization step."}, {"start": 739.24, "end": 746.24, "text": " So we find the closest one and we basically use so we'll be using this E1 vector instead of this vector here."}, {"start": 746.24, "end": 751.24, "text": " That's the idea of this equation here. 
So note that there is no real gradient defined for equation two."}, {"start": 751.24, "end": 760.24, "text": " However, we approximate the gradient similar to the straight through estimator and just copy gradients from decoder input here to encoder output Zeddy."}, {"start": 760.24, "end": 765.24, "text": " And that's the this this this line, this red line, they draw here. OK."}, {"start": 765.24, "end": 772.24, "text": " And since the output representation of the encoder and the input, so this is the output representation of the encoder."}, {"start": 772.24, "end": 778.24, "text": " So and the input of the decoder. So this is this one share the same dimensional space."}, {"start": 778.24, "end": 783.24, "text": " The gradients contain useful information for how the encoder has to change its output to lower the reconstruction loss."}, {"start": 783.24, "end": 786.24, "text": " So those are some of the things I already explained. OK."}, {"start": 786.24, "end": 794.24, "text": " So finally, this is how the loss looks like. So we contrast this to VAE where we have KL and we have reconstruction loss."}, {"start": 794.24, "end": 799.24, "text": " So here we have also we have the reconstruction loss. So that's this log P of X given Zed."}, {"start": 799.24, "end": 802.24, "text": " So this is your likelihood. And you have these two terms."}, {"start": 802.24, "end": 811.24, "text": " So first, let me kind of explain this one and explain why do we get MSC when we have these like weird log likelihood equations here."}, {"start": 811.24, "end": 817.24, "text": " So I guess this can confuse some like like people who are who are new to this channel."}, {"start": 817.24, "end": 822.24, "text": " So this is a likelihood. And the reason we get MSC out of this is the following."}, {"start": 822.24, "end": 832.24, "text": " So we assume that the likelihood is Gaussian and under that assumption, which is reasonable if you have if you're trying to reconstruct images where the pixels are continuous."}, {"start": 832.24, "end": 840.24, "text": " Under that assumption, if you just replace this P with your like Gaussian, like you have some constant, right?"}, {"start": 840.24, "end": 844.24, "text": " You have some e raised to the power of blah, blah, blah. You have some square here."}, {"start": 844.24, "end": 852.24, "text": " And as you can see, because of the log, we'll just be ignoring constant E and log will be canceled out because you're inverse of each other."}, {"start": 852.24, "end": 856.24, "text": " And you'll just be and you'll just end up with this square term."}, {"start": 856.24, "end": 866.24, "text": " And that's why you have square loss as the as the optimal loss term under the assumption that your data, the likelihood of your data is Gaussian."}, {"start": 866.24, "end": 869.24, "text": " So, yeah, hopefully that was helpful."}, {"start": 869.24, "end": 874.24, "text": " And the second two terms are interesting because so compared to V is we don't have KL here."}, {"start": 874.24, "end": 879.24, "text": " We have these two. So as you just mean, stop gradient."}, {"start": 879.24, "end": 889.24, "text": " So that means in Pytorch, you just be detaching that part from the computational graph to just freeze those and you try and push these vectors."}, {"start": 889.24, "end": 895.24, "text": " So these are the codebook vectors. You push them to their corresponding encode encoded vectors."}, {"start": 895.24, "end": 902.24, "text": " And the same thing here. 
You basically freeze the codebook vectors and you're pushing the encoded vectors towards those codebook vectors."}, {"start": 902.24, "end": 907.24, "text": " So that's that's how the loss looks like. It's fairly simple. We'll see that in the code in a couple of minutes."}, {"start": 907.24, "end": 918.24, "text": " And so what I say here is so decoder optimizes the first loss term only the encoder optimizes the first and the last loss term loss terms."}, {"start": 918.24, "end": 921.24, "text": " And the embeddings are optimized by the middle last term. So let's see why that is."}, {"start": 921.24, "end": 927.24, "text": " So first things first, you can see we learned the embeddings by this second last term."}, {"start": 927.24, "end": 931.24, "text": " So that's what I said. We'll learn encoder through both this term."}, {"start": 931.24, "end": 937.24, "text": " As you can see, we are pushing the output from the encoder here towards these frozen codebook vectors."}, {"start": 937.24, "end": 941.24, "text": " And we're also using reconstruction loss because, as you remember, we have that straight through gradient."}, {"start": 941.24, "end": 947.24, "text": " So we are also training encoder using this reconstruction term, whereas the decoder is only trained."}, {"start": 947.24, "end": 956.24, "text": " So those weights are only trained using this reconstruction loss. And these two terms do not contribute to its updates."}, {"start": 956.24, "end": 961.24, "text": " OK, that was pretty much detailed explanation of how how it works."}, {"start": 961.24, "end": 966.24, "text": " Now, I thought kind of walking you through an example of how this looks in PyTorch."}, {"start": 966.24, "end": 975.24, "text": " And hopefully it will be easier to understand. And once you connect these images and like paper with concrete code."}, {"start": 975.24, "end": 980.24, "text": " So I usually only cover code, only cover papers in my video. So now I thought doing this as an experiment."}, {"start": 980.24, "end": 988.24, "text": " And like, let me know what you think down in the comments. OK, so so there is this and I'll link the code in the description, obviously."}, {"start": 988.24, "end": 993.24, "text": " So there is this vector quantizer class. And let's see how it works."}, {"start": 993.24, "end": 999.24, "text": " So we have embedding dimension and we have number of embeddings. So number of embeddings is this thing here."}, {"start": 999.24, "end": 1004.24, "text": " So it's basically this is your number of embeddings, the number of vectors in your code table and the embedding dimension."}, {"start": 1004.24, "end": 1015.24, "text": " And the embedding dimension is this dimension here. OK, so once we have that, we just create this embedding object in PyTorch and we just initialize it uniformly."}, {"start": 1015.24, "end": 1022.24, "text": " Blah, blah, that's that's easy. And we have this commitment cost part, which is basically this beta term."}, {"start": 1022.24, "end": 1027.24, "text": " So that's just a weight with this commitment term of the loss. OK."}, {"start": 1027.24, "end": 1033.24, "text": " And now let's see how the forward function looks like. So first, let me just give you some context."}, {"start": 1033.24, "end": 1045.24, "text": " So let's let's take an as an example from this snippet. So as an example for a BCHW tensor of shape 16, 64, 32, 32, we first converted to blah, blah, blah."}, {"start": 1045.24, "end": 1053.24, "text": " So what happens here is you take 16 images. 
So you feed 16 images into your decoder. Out comes this tensor here."}, {"start": 1053.24, "end": 1067.24, "text": " OK, so we still have 16 images. But what changed is that the number of channel as you probably know, like CNN's usually you start with three channels and then you're as you're going through the CNN layers, you get more and more channels."}, {"start": 1067.24, "end": 1075.24, "text": " But the spatial dimension kind of diminishes. OK, so that's what we see here. So we went from three to sixty four and the height and width went to thirty two by thirty two."}, {"start": 1075.24, "end": 1083.24, "text": " OK, so so what I first do is they converted into this BCHWC vector instead."}, {"start": 1083.24, "end": 1092.24, "text": " And then they flatten it into this representation here. And all of these 16000 vectors of size 64 will be quantized independently."}, {"start": 1092.24, "end": 1098.24, "text": " So they'll be snapped to those codebook vectors. OK. And in other words, the channels are used as the space in which to quantize."}, {"start": 1098.24, "end": 1104.24, "text": " All other dimensions will be flattened and be seen as different examples to quantize. 16,384 in this case."}, {"start": 1104.24, "end": 1109.24, "text": " So that's in context. And now let's see the code. So this is the first. So this is the permutation that happens."}, {"start": 1109.24, "end": 1117.24, "text": " So we want to get it into this format contiguous. Just make sure that in the memory, all of these memory locations are contiguous."}, {"start": 1117.24, "end": 1123.24, "text": " Just some low level low level memory management stuff. So here is the flatten part."}, {"start": 1123.24, "end": 1130.24, "text": " So we converted into the representation of 16000 by 64. And so here is this distances calculation."}, {"start": 1130.24, "end": 1135.24, "text": " So this is a part that we use to find what what are the closest codebook vectors."}, {"start": 1135.24, "end": 1141.24, "text": " And you can go through the like every single symbol here, but I'm just going to like do it high level."}, {"start": 1141.24, "end": 1150.24, "text": " Basically. So flat input is it has shape. Let's say n times 64. OK, so n is 16000 in this example."}, {"start": 1150.24, "end": 1158.24, "text": " But let me just use the symbol and instead. So and we have embedding table, which is as you remember, K and 64."}, {"start": 1158.24, "end": 1169.24, "text": " So it's a couple K, 64. OK, so what we'll end up having here is basically we'll get a table that's n, K."}, {"start": 1169.24, "end": 1176.24, "text": " And that means for every single encoded vector, we'll find its distance to all of the codebook vectors."}, {"start": 1176.24, "end": 1184.24, "text": " And that's why we have K here. And once we have this, so once we have this distances array, what we do is we do simple argument over dimension one."}, {"start": 1184.24, "end": 1192.24, "text": " So that means let me just kind of pictorially draw this. So we have, as I said, distances like is a matrix like this."}, {"start": 1192.24, "end": 1197.24, "text": " And what we do is we focus on a single row and we find the lowest distance here."}, {"start": 1197.24, "end": 1202.24, "text": " So that's the argument over dimension one. 
And we find, let's say that's maybe number three."}, {"start": 1202.24, "end": 1206.24, "text": " So maybe this vector here is the closest one and that's maybe number three."}, {"start": 1206.24, "end": 1212.24, "text": " So we'll map this into a novel vector, which will have number three here."}, {"start": 1212.24, "end": 1216.24, "text": " So that's your symbol. So that's this part here. Let me just go back here."}, {"start": 1216.24, "end": 1221.24, "text": " So those are whoops, my God. OK, so that will be your."}, {"start": 1221.24, "end": 1229.24, "text": " So this vector here mapped into one. OK, here we just had another example with number three, but that's it."}, {"start": 1229.24, "end": 1238.24, "text": " So once we do that, what they do additionally is once you create this vector of indices, you just cast it into a bit different representation."}, {"start": 1238.24, "end": 1247.24, "text": " So instead of having three, you'll do it like this. You'll you'll basically expand it into a matrix and times K again."}, {"start": 1247.24, "end": 1254.24, "text": " And this time you're going to create one hot vector. So you're going to put zero, one, two, three."}, {"start": 1254.24, "end": 1258.24, "text": " So you want to put one here and all of the zeros will be here."}, {"start": 1258.24, "end": 1266.24, "text": " So this is just a different representation of this very same information. OK, so that's the encodings data structure."}, {"start": 1266.24, "end": 1272.24, "text": " And why do we do that is because we can then multiply this encodings data structure with the embedding table."}, {"start": 1272.24, "end": 1278.24, "text": " So, again, embedding table is the code codebook table. So you'll have just a second."}, {"start": 1278.24, "end": 1283.24, "text": " So we have K here. So this will be K dimensional and this will be 64."}, {"start": 1283.24, "end": 1287.24, "text": " So these are this is your codebook vector code table."}, {"start": 1287.24, "end": 1298.24, "text": " So if you can see here, just by multiplying this encodings data structure with the codebook vector, you'll end up extracting vector number three from from here."}, {"start": 1298.24, "end": 1305.24, "text": " And that's basically the quantization process we're talking about. So you'll end up having basically quantized vectors."}, {"start": 1305.24, "end": 1313.24, "text": " You'll have you'll end up having if you multiply these, you can see you end up with n times 64 dimensional table."}, {"start": 1313.24, "end": 1318.24, "text": " And those will contain the quantized vectors. So that's how the quantization is done."}, {"start": 1318.24, "end": 1328.24, "text": " Simple matrix multiplication. And finally, the interesting stuff here is so this E latent loss is this term here."}, {"start": 1328.24, "end": 1333.24, "text": " So that's this term here. As you can see, we put a stop gradient onto the codebook vectors."}, {"start": 1333.24, "end": 1339.24, "text": " And as you can see here, so these are the codebook vectors. We put a stop gradient, which is implemented as detached in PyTorch."}, {"start": 1339.24, "end": 1347.24, "text": " We do the same thing for this second term. So this is this part where we put stop gradient on the encoded vectors."}, {"start": 1347.24, "end": 1351.24, "text": " So encoded vectors are called inputs in our code. And we just do detach. And that's it."}, {"start": 1351.24, "end": 1355.24, "text": " Just simple MSC laws there. And again, this is beta. 
So that's your loss."}, {"start": 1355.24, "end": 1363.24, "text": " OK. And now for the funny part, and I'll just ignore this, is how do we actually implement these straight through gradient?"}, {"start": 1363.24, "end": 1367.24, "text": " So this is the way we implement that estimator."}, {"start": 1367.24, "end": 1375.24, "text": " So if you take a look at this equation, obviously inputs and inputs will kind of cancel out and you end up with quantized."}, {"start": 1375.24, "end": 1383.24, "text": " So the reason they added this part here is because when we do backprop, this part is basically excluded from the computational graph."}, {"start": 1383.24, "end": 1392.24, "text": " And so we end up with inputs. And so that means that basically whatever the gradients you get for your outputs, for your quantized variable here,"}, {"start": 1392.24, "end": 1397.24, "text": " those will just be copy pasted into the inputs. And that's the part I just explained pictorially here."}, {"start": 1397.24, "end": 1402.24, "text": " Basically, you have whatever the gradients here are. You're just going to copy paste them into here."}, {"start": 1402.24, "end": 1406.24, "text": " And once you have that, then you can just do because everything is differentiable in the encoder."}, {"start": 1406.24, "end": 1412.24, "text": " You can figure out the gradients for every single weight and then you can just update your network. OK."}, {"start": 1412.24, "end": 1418.24, "text": " So you may wonder why we are not using the KL term. Let me try and give you some intuition there."}, {"start": 1418.24, "end": 1430.24, "text": " So they say here, since we assume a uniform prior for Z, so compare that to VAEs, which had like isotropic Gaussians as the basically standard Gaussians as a prior."}, {"start": 1430.24, "end": 1435.24, "text": " Here we have we assume uniform prior for Z. The KL term that usually appears in the elbow."}, {"start": 1435.24, "end": 1442.24, "text": " So this is your evidence lower bound is constant with respect to the encoder parameters and thus can be ignored for training."}, {"start": 1442.24, "end": 1448.24, "text": " So let me try and digest this for you a little bit. So that's so pure."}, {"start": 1448.24, "end": 1455.24, "text": " If that if we take a look at that, we have a uniform distribution here. OK."}, {"start": 1455.24, "end": 1465.24, "text": " So so this will have K. Points in the domain here and what we'll end up with."}, {"start": 1465.24, "end": 1476.24, "text": " So how our posterior looks like, so how our approximate posterior Z given X looks like is it's as I mentioned, it's a deterministic function pretty much."}, {"start": 1476.24, "end": 1481.24, "text": " You will have you'll end up having a single vector that's the closest to your encoded vector."}, {"start": 1481.24, "end": 1488.24, "text": " So you'll end up with something like this. And if you take a look at what you get by doing a KL divergence between these two."}, {"start": 1488.24, "end": 1498.24, "text": " So if you do KL between these two, you'll end up with basically just one times log."}, {"start": 1498.24, "end": 1506.24, "text": " Then we have in the numerator, we'll have one and in the denominator we'll have we'll have because this is uniform, we have K points here."}, {"start": 1506.24, "end": 1515.24, "text": " This is going to be one over K probability. 
So the probability that corresponds to this one, let's maybe that's this one, will be one over K."}, {"start": 1515.24, "end": 1522.24, "text": " So you end up with KL divergence that's constant and that's equal to log of K."}, {"start": 1522.24, "end": 1530.24, "text": " OK. And so you basically can't minimize it because it's constant. And they mentioned it somewhere here."}, {"start": 1530.24, "end": 1539.24, "text": " So our proposal distribution blah, blah is deterministic. And by defining a simple uniform prior of Z, we obtain a KL divergence constant and equal to log of K."}, {"start": 1539.24, "end": 1547.24, "text": " So that's why they can't minimize it because it's constant in this setup. And yeah, I just wanted to explain that a little bit more."}, {"start": 1547.24, "end": 1556.24, "text": " OK. So for the prior part, they say here, whilst training the VQVAE, the prior is kept constant and uniform, as we just saw."}, {"start": 1556.24, "end": 1563.24, "text": " After training, we fit an autoregressive distribution over Z, P of Z, so that we can generate X via ancestral sampling."}, {"start": 1563.24, "end": 1572.24, "text": " So all of this sounds maybe a bit scary. Let me try and demystify this part. So if you're familiar with language modeling, if you watch my GPT-3 video,"}, {"start": 1572.24, "end": 1579.24, "text": " I'll link it somewhere here or like transformers, whatever, if you're familiar with language modeling in general, this will be really easy."}, {"start": 1579.24, "end": 1592.24, "text": " So what you basically do here is in a way you just try to predict the next token. So in the case of language modeling, you're trying to predict a token that represents some subword or something or like a word or a character."}, {"start": 1592.24, "end": 1597.24, "text": " Here, you're just trying to predict the next symbol. But like the semantics of the process is the same."}, {"start": 1597.24, "end": 1605.24, "text": " So imagine you flatten this into a sequence and you have the same, like basically same setup as for language modeling."}, {"start": 1605.24, "end": 1612.24, "text": " So let's say we have a simple example, not like a not like a matrix of symbols, but maybe just like a three symbols."}, {"start": 1612.24, "end": 1622.24, "text": " Like let's say we have three, two, five. OK. And how you train this all regressive model is the following. So you'd input something called some special symbol called start of a sequence."}, {"start": 1622.24, "end": 1628.24, "text": " Then you'd input three here and then you'd input two here. And as you can see, you're trying to predict."}, {"start": 1628.24, "end": 1635.24, "text": " So given the special token, you're trying to predict three. Then given three, you're trying to predict two because that's the next token, as you can see here."}, {"start": 1635.24, "end": 1642.24, "text": " And now given two, you're trying to predict five because that's the next token in the sequence, as you can see."}, {"start": 1642.24, "end": 1653.24, "text": " So this goes as an input to your model. These are the targets. 
And basically you're just you'll you'll basically use some like cross entropy and you'll backprop through the weights of the model."}, {"start": 1653.24, "end": 1657.24, "text": " And that's how you're going to train this this this token predictor model."}, {"start": 1657.24, "end": 1666.24, "text": " And it will going to be all regressive, which means once you output a token here, you're going to feed it back to the input and then you're going to keep on generating like that."}, {"start": 1666.24, "end": 1673.24, "text": " So once you have it trained on this discrete space that was so basically there are two things you do here."}, {"start": 1673.24, "end": 1680.24, "text": " First things first, you basically train this VQVAE end to end. So to learn how to reconstruct images."}, {"start": 1680.24, "end": 1688.24, "text": " And then you have you basically have a codebook that's that's fully trained and you have encoder and decoder, which are fully trained."}, {"start": 1688.24, "end": 1693.24, "text": " You just freeze all of those and you just train this predictor model."}, {"start": 1693.24, "end": 1703.24, "text": " And then when you want to generate new novel images, what you do is you just prompt the the all regressive model with the special token and it will just output some number."}, {"start": 1703.24, "end": 1710.24, "text": " Let's maybe say three and you feedback the three inside of the input of your all regressive model and it will generate something else like maybe four."}, {"start": 1710.24, "end": 1717.24, "text": " And you do that all the way until you have enough elements in your in your like latent space."}, {"start": 1717.24, "end": 1720.24, "text": " So that will be thirty two by thirty two in this example here."}, {"start": 1720.24, "end": 1725.24, "text": " And once you have that, you just feed that through this frozen decoder and out comes some novel image."}, {"start": 1725.24, "end": 1728.24, "text": " So that's the explanation of how this whole system works together."}, {"start": 1728.24, "end": 1731.24, "text": " Hopefully you you now understood it a bit better."}, {"start": 1731.24, "end": 1736.24, "text": " OK, so having demystified that, hopefully let's see some results."}, {"start": 1736.24, "end": 1741.24, "text": " And first thing they mentioned here is that they can compress their data by a lot."}, {"start": 1741.24, "end": 1754.24, "text": " And that's hopefully obvious. But like so they say here in this experiment, we show that we can model X, which is hundred twenty eight by hundred twenty eight in their example by compressing them to thirty two by thirty two discrete space with K equals five hundred twelve."}, {"start": 1754.24, "end": 1761.24, "text": " So case the size of that codebook codebook table, why are purely the convolutional PX given set."}, {"start": 1761.24, "end": 1773.24, "text": " So decoder. 
So a reduction of and you can see how much here and basically nine is here because if you have a naive encoding method of your symbols, you'll just have nine bits."}, {"start": 1773.24, "end": 1781.24, "text": " You need nine bits in order to represent five hundred twelve symbols if you assume that all of the code like sizes are fixed."}, {"start": 1781.24, "end": 1785.24, "text": " OK, and basically you can see here some reconstructions."}, {"start": 1785.24, "end": 1790.24, "text": " And what I want you to notice here is because this will be important for the later Dali paper."}, {"start": 1790.24, "end": 1794.24, "text": " I'll create like a paper over. I'll do and VQ again as well."}, {"start": 1794.24, "end": 1797.24, "text": " You can see that there is some blurriness to these images."}, {"start": 1797.24, "end": 1804.24, "text": " If you take this dog here and you'll see like the reconstruction is pretty good, but there is some blurriness happening here."}, {"start": 1804.24, "end": 1806.24, "text": " It's especially noticeable here."}, {"start": 1806.24, "end": 1811.24, "text": " And for this real image here, you can see it's much blurrier."}, {"start": 1811.24, "end": 1812.24, "text": " You can take a look at the paper yourself."}, {"start": 1812.24, "end": 1815.24, "text": " But that's that's something I want you to notice here."}, {"start": 1815.24, "end": 1817.24, "text": " So those were the reconstructions."}, {"start": 1817.24, "end": 1829.24, "text": " And here after we do that sampling process in the inside of the latent space using that order aggressive model, you can sample novel unconditional images and you can see some results here."}, {"start": 1829.24, "end": 1831.24, "text": " And you can see it's really diverse."}, {"start": 1831.24, "end": 1835.24, "text": " And that's one of the advantages these likelihood models have compared to GANs."}, {"start": 1835.24, "end": 1850.24, "text": " They cover all of the modes because of the way they are trained, whereas GANs sometimes can omit some modes or they can even like go into something called mode collapse where they basically output just a single image or similar looking imagery."}, {"start": 1850.24, "end": 1859.24, "text": " OK, they also do this not only for images, they can also compress and reconstruct audio, video, et cetera."}, {"start": 1859.24, "end": 1861.24, "text": " So I found the audio part really interesting."}, {"start": 1861.24, "end": 1865.24, "text": " So this is your original audio after you reconstructed."}, {"start": 1865.24, "end": 1867.24, "text": " You can see the waveforms are not quite the same."}, {"start": 1867.24, "end": 1870.24, "text": " But what's interesting is that they have the same content."}, {"start": 1870.24, "end": 1877.24, "text": " Couple of nice remarks they made here is it can be heard from the provided samples and as shown in Figure 7 reconstruction."}, {"start": 1877.24, "end": 1879.24, "text": " So Figure 7 is this one here."}, {"start": 1879.24, "end": 1881.24, "text": " You can find the samples on the website."}, {"start": 1881.24, "end": 1882.24, "text": " You can hear them at your own pace."}, {"start": 1882.24, "end": 1888.24, "text": " But like the reconstruction has the same content, same text contents, but the waveform is quite different and prosody."}, {"start": 1888.24, "end": 1892.24, "text": " So the way you accent your sentence in the voice is altered."}, {"start": 1892.24, "end": 1908.24, "text": " So this means that the VQVAE has without any form of 
linguistic supervision, so without any supervision, learned a high level abstract space that is invariant to low level features and only encodes the content of the speech, which is awesome since we don't have any labels here."}, {"start": 1908.24, "end": 1917.24, "text": " So while samples drawn from even the best speech models like the original WaveNet sound like babbling, samples from VQVAE contain clear words and part sentences."}, {"start": 1917.24, "end": 1931.24, "text": " So what I recommend you do is you go ahead and listen to the samples from this model and from WaveNet, which was very popular, like parallel WaveNet, like that was a full up work to WaveNet, was actually used in production at Google."}, {"start": 1931.24, "end": 1932.24, "text": " It's maybe still used."}, {"start": 1932.24, "end": 1935.24, "text": " I'm not sure, but I know it was used in production for Google Assistant."}, {"start": 1935.24, "end": 1939.24, "text": " OK, so I think that's pretty much it for this paper."}, {"start": 1939.24, "end": 1948.24, "text": " Let me just briefly go and explain you how this VQVAE 2 version differs from this version 1."}, {"start": 1948.24, "end": 1952.24, "text": " So first thing you can notice is it's got much better reconstructions."}, {"start": 1952.24, "end": 1955.24, "text": " You can generate much better imagery using this model."}, {"start": 1955.24, "end": 1958.24, "text": " The main difference is so everything remains the same."}, {"start": 1958.24, "end": 1960.24, "text": " The loss remains the same."}, {"start": 1960.24, "end": 1966.24, "text": " The only difference is they have this hierarchical structure of the of the latents as well as the priors."}, {"start": 1966.24, "end": 1976.24, "text": " So what I do instead is they have this top level codebook vectors and they have this mid level codebook vectors and they are of different resolutions."}, {"start": 1976.24, "end": 1981.24, "text": " So that means this one will have to focus on some like higher level structure of the image."}, {"start": 1981.24, "end": 1984.24, "text": " This one can maybe even focus on some details, texture, et cetera."}, {"start": 1984.24, "end": 1989.24, "text": " And once you combine them into decoder, you can reconstruct much higher quality imagery."}, {"start": 1989.24, "end": 1990.24, "text": " Same thing here."}, {"start": 1990.24, "end": 2004.24, "text": " You train this autoregressive model in this latent space and then you use that to condition the next level prior autoregressive model to basically regress this new latents and then you can decode them into imagery."}, {"start": 2004.24, "end": 2007.24, "text": " So that's the only difference pretty much between this model and the last one."}, {"start": 2007.24, "end": 2009.24, "text": " So they're using this hierarchical structures."}, {"start": 2009.24, "end": 2012.24, "text": " And as you can see, they get much better imagery."}, {"start": 2012.24, "end": 2016.24, "text": " Like this is even compared like comparing even this to, I don't know, style again."}, {"start": 2016.24, "end": 2023.24, "text": " V2, the images look really, really cool, although style again can can generate much higher resolution imagery."}, {"start": 2023.24, "end": 2024.24, "text": " I think so."}, {"start": 2024.24, "end": 2028.24, "text": " But yeah, either way, this is really cool for a VAE."}, {"start": 2028.24, "end": 2033.24, "text": " Usually GANs have much better reconstruct image images than VAEs."}, {"start": 2033.24, "end": 2036.24, 
"text": " OK, here are some more images."}, {"start": 2036.24, "end": 2039.24, "text": " Again, I think this is class conditional on ImageNet."}, {"start": 2039.24, "end": 2041.24, "text": " I think doesn't matter that much."}, {"start": 2041.24, "end": 2042.24, "text": " Let me just see."}, {"start": 2042.24, "end": 2044.24, "text": " Yeah, these are some high resolution images."}, {"start": 2044.24, "end": 2046.24, "text": " You can see they're pretty good."}, {"start": 2046.24, "end": 2048.24, "text": " You can see it's still not perfect."}, {"start": 2048.24, "end": 2050.24, "text": " There is some distortions here happening."}, {"start": 2050.24, "end": 2051.24, "text": " I don't know. Maybe I'm wrong."}, {"start": 2051.24, "end": 2053.24, "text": " Hopefully you found this video insightful."}, {"start": 2053.24, "end": 2058.24, "text": " Let me know what you think down in the comments, whether you like this format, whether the code part was useful."}, {"start": 2058.24, "end": 2063.24, "text": " If it was, I'm going to try and include more of these into my next videos."}, {"start": 2063.24, "end": 2077.24, "text": " So until next time, subscribe and share this video and bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=VNg0NyCGl_4
GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I cover the "GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation" paper. They managed to increase the diversity and stability of anime avatar generation and as a side benefit - they can also apply it to videos without any additional temporal constraints! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2106.06561 ✅ GitHub: https://github.com/mchong6/GANsNRoses ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 High-level overview of the paper 04:00 Diving deeper 05:00 GNR architecture and loss overview 09:50 Loss components 13:10 Diversity discriminator in more detail 16:25 Baseline comparisons 17:50 Ablations 18:40 Quantitative results (FID, DFID, LPIPS) 19:50 Controllability ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Eli Mahler Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #gansandroses #anime #generativemodeling
What's up? In this video I'm covering GANs N' Roses: stable, controllable, diverse image-to-image translation, and it works for videos too. I love the title; I mean, I think it's totally appropriate given that they are generating anime images, so yeah, just roll with it. It's by Min Jin Chong and David Forsyth from the University of Illinois. Let me just dive straight into what the paper is all about, and it's about creating these cute images, basically anime images. So they feed in the content image here and they pick a different style for every single column, using this thing called a style code vector, and we'll soon see what it is. As you can see, they just pick a style code and they keep it the same across the whole column. One thing you can notice here, so first things first, is that this hoodie here is preserved, because the content code is the same across this whole row, whereas the style code is changing. Same thing here: you can see this formal collar, and it's being kept across the whole row because, as I said, the content code is fixed whereas the style code is changing. On the other hand, if we just focus on a single column, there the content code is changing, because we have different input images here, right? But the style code is kept the same, and you can see that it's kind of interesting, because the style is kind of kept, although it's obviously a very vague concept. And yeah, let me just show you one more thing and then I'll show you how this thing works. So this is what they get as an emergent phenomenon from their whole pipeline: even though they don't train the system with any temporal loss or constraint, despite that, they can get really consistent frames. You can see here different input images, so they'll yield different content codes, but the style code is kept the same, and you can see that basically it looks really consistent just going through the frames. You can see that the head pose here is the same as here, but the style code, so basically the style, is the same, so you can see this pointy beard being the same there and here, and that's really cool: without any temporal constraints, they achieve this. If we compare their result with this baseline, Council-GAN, you can see that, first things first, this looks terrible, like this is kind of frightening me, and the second thing is that you can see that the style is changing. Basically you can see here the chin is kind of rounded, here we have a pointy chin, so it's not consistent across the frames. That means that the style is changing as well, and it should have been kept fixed, but this method is failing to do that. On the other hand, what they do here is they take this first frame and they take this last frame, and this time, so again, the content code is kind of determined by the input video and they are just randomly sampling different style codes, and so you can see that this image here has a totally different appearance compared to this one.
So they are totally two different animes So the style varies a lot here Even though they sample random random styles, they still get decently similar images So if you take a look at these two you can see that These two have almost the same face The only thing that's different is kind of the hair color and the background color But they are most of the same the eyes are the same so it's failing to create diverse set of anime avatars, so Those are the main points of this paper So they show that they can have really diverse set of avatars. They show that they can have this consistency and That's that's pretty much it. Let me now show you how how it works. I think the results are pretty impressive So let's yeah, let's start from from the beginning of the paper So so we show how to learn a map that takes a content code derived from from a face image and a randomly chosen Style code to an anime image we derive an adversarial loss from our simple and effective definitions of style and content And this adversarial loss guarantees the map is diverse a very wide range of anime can be produced from a single content code so they're stressing the diversity and basically, yeah So the most important step is to be exact about what is what is intended by content and what is intended by style So we adopt a specific definition content is what changes when face images are subject to a family of data augmentation Transformations and style is what does not change It's as easy as that and as you can see it's coupled to the definition of that set of augmentations and they say somewhere here so basically So know that this definition Is conditioned on the augmentations so different sets of data augmentations will result in different definitions of style So having said that let me just walk you through the pipeline. So let me explain how this works. So First things they do is they form a batch of input images Okay, and how they do that is they take a single image So maybe this one and they apply a bunch of different like geometrical type of augmentations like scaling rotation, etc. And So that's an important thing. I want you to understand here. So they don't take multiple different images. They take a single image They apply a bunch of different augmentations and then they feed that into this encoder So now what the trick here is is as you can see we have encoder We have decoder which kind of generates the anime images then we have additionally the second branch that again encodes this and decodes this and as you can probably assume already this is Like this will lead to cyclic consistency loss, but let me first focus on so there are three components to the loss function The first things first is that this style so these are the style codes coming from from these images And what they want to do is keep these the same So because they are they have the same image and they're just applying augmentations They want to make sure that the only thing that varies is the content code Whereas the style is the same because that's their definition of what content is and what style is so style is whatever remains the same And so this thing will encode that whatever remains the same concept and the content code will encode Whatever changes concept. Okay So how they will enforce that these are the same is they'll just find basically Variance of that because again imagine this S will be some I don't know like the dimensional vector and Because they have a batch of B images. 
They'll have B times the matrix and they'll just make sure that Basically the variance across different vectors Goes down to goes down to zero. So just formally written. That's variance of this batch of style codes should be one term that we are going to minimize and Obviously when variance is equal to zero all the vectors are the same The second part is so after they generate these images they'll feed them back here And as you can see so they want to preserve the content code So this C prime they want to make make sure that those are the same as these C's here So what they'll do is they'll feed the same style codes here and they'll take the content here and They'll try to reconstruct the very same images as in the input batch So they'll just do L to distance between the batch of real images and the batch That had some augmentations applied in between this batch of generated images and by doing that we we kind of So that the network will learn how to preserve the content code inside of the generated images The third component of the loss will be basically adversarial loss So that's your that's this part and this first branch is something you're familiar with Hopefully so that's just your your GAN so adversarial loss objective So we'll be training the discriminator to discriminate between a batch of real enemy images and a batch of generated images and on the other hand will be training generator to learn how to trick the discriminator into thinking that whatever it generated it comes from the real data set so that's our standard GAN objective But the interesting thing here is this mini batch as you can see mini batch standard deviation Let me try and zoom in here. So this is just standard deviation. So what that does is the following So if you just apply if you just have this branch here What will happen probably is that your generator can learn to basically collapse into a single? Image that looks realistic and discriminator will be totally fine with that I will just say that okay, this looks real to me, but we want to be diverse. We want to have diverse set of Avatars and so they what I do is they they take this these images and from from a certain layer here I think they are using penultimate layer We'll see the details a bit later, but they take the feature maps here and they just form these like basically standard deviation statistics and they'll use those to feed them into the second branch of discriminator and what that does is the following so that means that discriminator I the generator will need to learn how to mimic the basically those standard deviation across the whole batch and that will but by kind of trying to mimic the The same statistic from from the real images. You will learn how to create diverse set of images so that's the whole point of this thing now, let's dig into a bit more details and We'll see how all of those fit together Okay, so just ignore these I just verbal explained those so this was the first component So as you can see the loss has three components. That's the adversarial components. This is your classical again loss You have the style consistency loss and you have the cycle consistency loss. So the first thing is this as I said, so They say here in a batch as above we expect all Si sold the codes to be the same and this is how we enforce that concept and secondly, they have the cycle consistency loss and They say here that we find it helpful to shuffle the style codes in a batch. So there's just a minor detail Let me just show you the equations. 
So This part here is the encoder. So from y to x so that's that's this one So they are just kind of mixing different Notations here. They say here B to a they say y to x in the equation down there But that's the they want to see this thing so that's the equation represents you're feeding the generated image here and you out out comes the Content code and the style code and so what I say here is let me get back to the equation So this is the generated anime image and you feed back to get the content and the style code and then you feed it back So that very same content code, but you take the style code from from the original images So as you can see here, so those are So we take these style codes from the original Augmented images and we take whatever was generated here and we feed that into this decoder okay, so that's what these equations say and So a minor detail here is that as you can see so C comes from the Ith image but the style vector comes from the Jth image and What they did here by doing this is basically they are additionally encoding So this cycle consistency loss can also have the style consistency loss Mingled in and I don't know whether so this is obviously not that pretty Conceptually, but I guess that was some heuristic and they figured out they helps them further enforce that style consistency Component. Okay, and finally just L2. That's how we do cycle consistency loss So whatever we reconstructed we want to make sure that it's in the image space It's got a really small distance compared to the original augmented images There is one more detail here Additionally the they use this thing called LP IPS. So that's basically our perceptual loss from neural style transfer literature. So What happens is that let me go back to the diagram here. So what happens is that they Additionally, so they feed this image. So this is the input image and this is the reconstructed image they'll feed both of those through this generator and Decoder and basically they'll take certain feature maps from a certain layer and they'll want to make sure that also in the feature space They're really close looking at maybe again L2 metric So that's that's that additional part that they have in the cycle consistency component. Okay, and And finally, they have the diversity discriminator and adversarial loss component so That's the part I mentioned with the GAN loss and with this standard deviation stuff So let's see what they say our diversity discriminator exploits the mini badge standard deviation trick used in the progressive growing of GANs paper To exploit this difference in the penultimate layer of the discriminator We compute the standard deviation of each feature and pass that into another FC layers that's fully connected layer our diversity discriminator outputs the real fake logits and the standard deviation logits This means the discriminator can identify reliable differences in within batch variation so that's basically what I already explained and I'll just try to To tell you one thing that confuses me a little bit here. That's the following so imagine so this is your decoder and discriminator and basically What will happen once you feed this image? So let's say one image maybe this one once you feed it to and you extract the feature maps from this penultimate layer You'll basically have some activation volume. So those activations from this penultimate layer will look something like this So there'll be I will have some like let me draw it like this. 
So this will be some height This will be some width and I'll just do add a subscript here like L because we are looking at the layer L And we'll have some number of channels. So so basically these will be this is a collection of feature maps for this particular image, okay, so this will be C sub L and That's just a single image. So that's maybe this first image now We we do the same thing for for the second image. So we'll again have a similar type of activation volume there Etc and so the part that confuses me is that they haven't really explained how they are calculating these standard Deviation statistics. So what I assume is they take a certain feature like maybe this feature so that's basically, let's call it a I J and They're gonna compute the standard deviation across the channels and across the mini batch as well So now I don't know whether they are actually aggregating those standard deviations across the channels or whether they are just lumping that whole So where the final shape will be something like B times Height L times Width L or are they going to aggregate across this dimension and just feed that so it's kind of not clear The technicalities are not not kind of clear and I guess you have to experiment and try a couple things to see what works I think what I did in the progressive growing of GANs paper is that they Average across every single dimension and that then they just replicated into a certain constant feature map and then just append that to the Activation volumes, but yeah that part is not super clear. But the concept itself is hopefully Understandable and that's that they want to enforce That the generated images have the same similar or realistically looking statistics Which will make sure that we have diverse set of images. Okay so having said that let me continue here and Let's see the experiments and the results they got compared to the baselines Comparing this Gans and roses paper to these old baselines like grid++ console GAN and star GANv2 What we can notice is that if we take two different style codes here, we get completely different images So you can see that these two are okay Maybe the pose is similar, but the eyes are totally different. The chin is this is pointy chin This is kind of round chin and we fit and here the difference is even bigger So the head is obviously bigger the hair is different that the eyes are bigger etc So if we take a look at some other baselines here, you can see that it's pretty much the same picture Just the only thing that changes is kind of colors Same thing here and also like just take a look at this side Like this is so creepy and like there are some artifacts, but they look really similar So they and there are some artifacts like this one So they don't manage to create diverse types of avatars whereas this Gans and roses paper does they also compare with this newer? Any GAN paper and the same thing applies here as you can see applying three different style codes They they gain they get like totally different Like avatars so you can see there's there is a many differences between these three But taking a look at the end again output you can see that these three only like the only significant difference is maybe the eye The color basically and here you can also see the structure the face the eyes and everything changes So again a plus plus for the diversity component of the Gans and roses paper. They did some ablations Also like here you can see that this is the thing I was mentioning. 
So basically, when you take this diversity discriminator branch and you just remove it, you can see what happens: you get basically mode collapse here. That means that all of these images are realistic, and so the GAN component will say everything is fine, but as we can see, they are pretty much the same, yeah, they're almost the same here. When we include the branch, we actually get really diverse outputs. They also tried this mode seeking loss from some other paper, and they just got a bunch of artifacts, so they just stuck with this diversity discriminator component. Okay, they also did quantitative comparisons with the baselines, using these DFID and LPIPS metrics, which focus on the diversity of the images, and they also use the FID, so that's the Fréchet Inception Distance, which just focuses on whether the generated image actually looks realistic and similar to the actual real avatars. And again, they just get basically much better performance compared to the baselines. So DFID should be as low as possible, and you can see it's much lower than the baselines, just focusing on this selfie2anime dataset, and FID is also smaller and LPIPS is higher. All of these three metrics basically rely on a pre-trained classifier, like an Inception v3 model or something, and how they work is similar to the perceptual loss: they just do some comparisons in the feature space. And yeah, I won't be digging into how they exactly work, but the main point here is that the quantitative results are comparable and consistent with the qualitative results we saw. They also have this set of results: there is this other paper that created this technique called SeFa, and they say that SeFa finds latent directions that correspond to large semantic changes in a style-based GAN by finding eigenvectors of the style modulation weight parameters, and we show that our style space is highly expressive and editable using these eigenvectors in Figure 7. So basically, they somehow calculate these eigenvectors, and every single image here was generated using a different style code, and once you add these eigenvectors you can kind of make them have certain properties. So if you add this eigenvector five, you basically get small eyes and sharp features; if you add eigenvector eight, you get abstract lines; and finally, big eyes and dark hair if you add eigenvector ten. So they show that this manipulation method kind of works for their style codes as well. Okay, that's pretty much it. I mean, this paper is pretty simple to understand, but I like the results a lot, and so I thought of covering it. So yeah, hopefully you found this video useful; if you did, consider sharing and subscribing. And until next time, bye bye.
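To make the loss discussion in the transcript above concrete, here is a minimal PyTorch sketch of the two pieces that are specific to this paper: the style consistency loss (variance of the style codes over a batch of augmentations of one image) and the mini-batch standard-deviation statistic fed to the diversity discriminator; the cycle consistency part is just L2 plus LPIPS as described. Function names, tensor shapes, and in particular the way the standard-deviation statistics are aggregated are assumptions, since, as noted above, the paper does not spell that aggregation out.

```python
import torch

def style_consistency_loss(style_codes):
    # style_codes: (B, D) style codes of B augmentations of the SAME image.
    # Style is defined as "what does not change under augmentation", so we
    # push the per-dimension variance across the batch towards zero.
    return style_codes.var(dim=0, unbiased=False).mean()

def minibatch_std_stat(features):
    # features: (B, C, H, W) penultimate-layer activations of the discriminator.
    # Mini-batch standard-deviation trick (borrowed from progressive GANs):
    # summarize how much the batch varies, so a small FC head can tell
    # real within-batch variation from a mode-collapsed fake batch.
    # Averaging the per-feature std over channels and positions is one
    # plausible aggregation choice, not necessarily the paper's exact one.
    return features.std(dim=0, unbiased=False).mean()

# Usage sketch with placeholder modules (illustrative names only):
# s = style_encoder(augmented_batch)             # (B, D)
# loss_style = style_consistency_loss(s)
# feats = discriminator_penultimate(fake_anime)  # (B, C, H, W)
# std_stat = minibatch_std_stat(feats)           # fed to a small FC head
```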
[{"start": 0.0, "end": 6.68, "text": " What's up? In this video I'm covering Gans and Roses stable, controllable, diverse image-to-image translation"}, {"start": 7.140000000000001, "end": 14.48, "text": " Works for videos too. I love the title. I mean, I think it's totally appropriate given that they are generating anime images"}, {"start": 14.48, "end": 19.92, "text": " So yeah, just roll with it by Minjin Chung and David Forsyth from the University of Illinois"}, {"start": 19.92, "end": 26.22, "text": " And let me just dive straight into what the paper is all about and it's about creating these cute images"}, {"start": 26.22, "end": 33.62, "text": " Basically anime images. So basically they feed in the content image here and they pick different styles for every single column"}, {"start": 33.96, "end": 37.34, "text": " Using this thing called style code vector and we'll soon see what it is"}, {"start": 37.34, "end": 42.94, "text": " and as you can see so they just pick out style code and they keep it the same across the whole column and"}, {"start": 43.9, "end": 50.739999999999995, "text": " One thing you can notice here is so first things first is that you can see that this hoodie here is preserved"}, {"start": 51.46, "end": 54.46, "text": " Because the content code is the same across this whole row"}, {"start": 54.46, "end": 58.7, "text": " Whereas the style code is changing and same thing here"}, {"start": 58.7, "end": 66.02, "text": " You can see this this formal color and it's being kept across the whole row because as I said the content code is fixed"}, {"start": 66.02, "end": 71.34, "text": " Whereas the style code is changing on the other hand if we just focus on a single column"}, {"start": 71.9, "end": 76.5, "text": " There we have that the content code is changing because we are having different input images here, right?"}, {"start": 76.58, "end": 80.46000000000001, "text": " but like the style code is kept the same and you can see that"}, {"start": 81.26, "end": 83.26, "text": " it's kind of interesting because"}, {"start": 83.26, "end": 89.86, "text": " You can see that the style is kind of kept although it's obviously very vague"}, {"start": 90.30000000000001, "end": 97.2, "text": " Wake concept and yeah, let me just show you one more thing and then I'll show you how this thing works. So"}, {"start": 98.10000000000001, "end": 101.5, "text": " This is what they get as a as a as a emerging phenomena"}, {"start": 101.5, "end": 106.78, "text": " basically from from their whole pipeline is that even though they don't train the the system on"}, {"start": 107.58000000000001, "end": 110.30000000000001, "text": " They don't have any temporal kind of a loss or constraint"}, {"start": 110.82000000000001, "end": 112.5, "text": " so and"}, {"start": 112.5, "end": 114.86, "text": " Despite that they can they can they can gain"}, {"start": 115.58, "end": 120.18, "text": " Really consistent frames so you can see here different input images"}, {"start": 120.18, "end": 123.1, "text": " So they'll they'll yield different content codes"}, {"start": 123.14, "end": 127.82, "text": " But the style code is kept the same and so you can see that basically it looks really"}, {"start": 128.26, "end": 133.14, "text": " Consistent like just going throughout the frames. 
You can see that the head pose here is the same as here"}, {"start": 133.14, "end": 137.7, "text": " But the style code so basically the style is the same so you can see this pointy beard"}, {"start": 137.7, "end": 143.22, "text": " Being the same there and here and that's really cool without any temporal constraints"}, {"start": 143.22, "end": 146.66, "text": " They achieve this if we compare their result with this"}, {"start": 147.33999999999997, "end": 148.78, "text": " baseline console again"}, {"start": 148.78, "end": 150.5, "text": " You can see that first things first"}, {"start": 150.5, "end": 156.66, "text": " so this looks terrible like the this is kind of frightening me and the second thing is that you can see that the"}, {"start": 157.06, "end": 161.54, "text": " Style is changing and and basically you can see here. The chain is kind of rounded here"}, {"start": 161.54, "end": 165.0, "text": " We have a pointy chin so it's not consistent across the frames"}, {"start": 165.0, "end": 170.02, "text": " So that basically means that the style is changing as well and it should have been kept fixed"}, {"start": 170.02, "end": 173.54, "text": " But it's this method is failing to do that on the other hand"}, {"start": 173.54, "end": 179.38, "text": " What they do here is they take this first frame and they take this last frame and this time"}, {"start": 180.54, "end": 187.98, "text": " So again, the content code is kind of determined by the input video and they are just kind of sampling randomly sampling different"}, {"start": 188.06, "end": 190.06, "text": " style codes and so you can see that"}, {"start": 190.06, "end": 195.84, "text": " This image here has a totally different appearance compared to this one. So they are totally two different animes"}, {"start": 196.14000000000001, "end": 198.66, "text": " So the style varies a lot here"}, {"start": 199.18, "end": 204.46, "text": " Even though they sample random random styles, they still get decently similar images"}, {"start": 204.46, "end": 207.12, "text": " So if you take a look at these two you can see that"}, {"start": 207.54, "end": 209.5, "text": " These two have almost the same face"}, {"start": 209.5, "end": 213.54, "text": " The only thing that's different is kind of the hair color and the background color"}, {"start": 213.54, "end": 218.34, "text": " But they are most of the same the eyes are the same so it's failing to create diverse set of"}, {"start": 218.34, "end": 221.02, "text": " anime avatars, so"}, {"start": 221.78, "end": 223.82, "text": " Those are the main points of this paper"}, {"start": 223.82, "end": 230.78, "text": " So they show that they can have really diverse set of avatars. They show that they can have this consistency and"}, {"start": 232.02, "end": 236.98000000000002, "text": " That's that's pretty much it. Let me now show you how how it works. 
I think the results are pretty impressive"}, {"start": 237.38, "end": 239.78, "text": " So let's yeah, let's start from from the beginning of the paper"}, {"start": 239.78, "end": 245.7, "text": " So so we show how to learn a map that takes a content code derived from from a face image and a randomly chosen"}, {"start": 245.7, "end": 252.66, "text": " Style code to an anime image we derive an adversarial loss from our simple and effective definitions of style and content"}, {"start": 252.66, "end": 260.14, "text": " And this adversarial loss guarantees the map is diverse a very wide range of anime can be produced from a single content code"}, {"start": 260.14, "end": 262.14, "text": " so they're stressing the diversity and"}, {"start": 262.98, "end": 264.82, "text": " basically, yeah"}, {"start": 264.82, "end": 270.76, "text": " So the most important step is to be exact about what is what is intended by content and what is intended by style"}, {"start": 270.76, "end": 276.94, "text": " So we adopt a specific definition content is what changes when face images are subject to a family of data augmentation"}, {"start": 277.62, "end": 280.26, "text": " Transformations and style is what does not change"}, {"start": 280.34, "end": 286.5, "text": " It's as easy as that and as you can see it's coupled to the definition of that set of augmentations and they say somewhere here"}, {"start": 286.5, "end": 288.26, "text": " so basically"}, {"start": 288.26, "end": 290.26, "text": " So know that this definition"}, {"start": 290.65999999999997, "end": 297.38, "text": " Is conditioned on the augmentations so different sets of data augmentations will result in different definitions of style"}, {"start": 297.38, "end": 303.58, "text": " So having said that let me just walk you through the pipeline. So let me explain how this works. So"}, {"start": 304.58, "end": 307.65999999999997, "text": " First things they do is they form a batch of input images"}, {"start": 307.65999999999997, "end": 310.42, "text": " Okay, and how they do that is they take a single image"}, {"start": 310.42, "end": 314.62, "text": " So maybe this one and they apply a bunch of different like"}, {"start": 315.26, "end": 319.38, "text": " geometrical type of augmentations like scaling rotation, etc. And"}, {"start": 320.1, "end": 323.86, "text": " So that's an important thing. I want you to understand here. So they don't take"}, {"start": 323.86, "end": 327.18, "text": " multiple different images. 
They take a single image"}, {"start": 327.18, "end": 331.18, "text": " They apply a bunch of different augmentations and then they feed that into this encoder"}, {"start": 331.3, "end": 336.46000000000004, "text": " So now what the trick here is is as you can see we have encoder"}, {"start": 336.46000000000004, "end": 342.66, "text": " We have decoder which kind of generates the anime images then we have additionally the second branch"}, {"start": 343.06, "end": 349.14, "text": " that again encodes this and decodes this and as you can probably assume already this is"}, {"start": 349.14, "end": 356.06, "text": " Like this will lead to cyclic consistency loss, but let me first focus on so there are three components to the loss function"}, {"start": 356.06, "end": 362.18, "text": " The first things first is that this style so these are the style codes coming from from these images"}, {"start": 362.18, "end": 366.34, "text": " And what they want to do is keep these the same"}, {"start": 366.38, "end": 370.47999999999996, "text": " So because they are they have the same image and they're just applying augmentations"}, {"start": 370.5, "end": 374.21999999999997, "text": " They want to make sure that the only thing that varies is the content code"}, {"start": 374.22, "end": 381.34000000000003, "text": " Whereas the style is the same because that's their definition of what content is and what style is so style is whatever remains the same"}, {"start": 381.34000000000003, "end": 388.66, "text": " And so this thing will encode that whatever remains the same concept and the content code will encode"}, {"start": 388.66, "end": 390.66, "text": " Whatever changes concept. Okay"}, {"start": 390.90000000000003, "end": 395.64000000000004, "text": " So how they will enforce that these are the same is they'll just find basically"}, {"start": 397.06, "end": 403.42, "text": " Variance of that because again imagine this S will be some I don't know like"}, {"start": 403.42, "end": 405.42, "text": " the dimensional vector and"}, {"start": 405.82, "end": 412.44, "text": " Because they have a batch of B images. They'll have B times the matrix and they'll just make sure that"}, {"start": 413.66, "end": 417.06, "text": " Basically the variance across different vectors"}, {"start": 417.62, "end": 423.54, "text": " Goes down to goes down to zero. So just formally written. 
That's variance of this"}, {"start": 424.1, "end": 431.26, "text": " batch of style codes should be one term that we are going to minimize and"}, {"start": 431.26, "end": 435.38, "text": " Obviously when variance is equal to zero all the vectors are the same"}, {"start": 435.5, "end": 440.3, "text": " The second part is so after they generate these images they'll feed them back here"}, {"start": 440.3, "end": 443.18, "text": " And as you can see so they want to preserve the content code"}, {"start": 443.18, "end": 448.18, "text": " So this C prime they want to make make sure that those are the same as these C's here"}, {"start": 448.18, "end": 455.09999999999997, "text": " So what they'll do is they'll feed the same style codes here and they'll take the content here and"}, {"start": 455.21999999999997, "end": 458.88, "text": " They'll try to reconstruct the very same images as in the input batch"}, {"start": 458.88, "end": 464.58, "text": " So they'll just do L to distance between the batch of real images and the batch"}, {"start": 464.82, "end": 471.24, "text": " That had some augmentations applied in between this batch of generated images and by doing that we we kind of"}, {"start": 471.64, "end": 477.28, "text": " So that the network will learn how to preserve the content code inside of the generated images"}, {"start": 478.12, "end": 482.04, "text": " The third component of the loss will be basically adversarial loss"}, {"start": 482.15999999999997, "end": 487.28, "text": " So that's your that's this part and this first branch is something you're familiar with"}, {"start": 487.28, "end": 492.71999999999997, "text": " Hopefully so that's just your your GAN so adversarial loss objective"}, {"start": 492.71999999999997, "end": 499.02, "text": " So we'll be training the discriminator to discriminate between a batch of real enemy images and a batch of"}, {"start": 499.44, "end": 503.32, "text": " generated images and on the other hand will be training generator to learn how to"}, {"start": 503.88, "end": 505.52, "text": " trick the"}, {"start": 505.52, "end": 511.84, "text": " discriminator into thinking that whatever it generated it comes from the real data set so that's our standard GAN objective"}, {"start": 511.84, "end": 516.28, "text": " But the interesting thing here is this mini batch as you can see mini batch standard deviation"}, {"start": 516.28, "end": 520.72, "text": " Let me try and zoom in here. So this is just standard deviation. So what that does is the following"}, {"start": 520.72, "end": 523.0799999999999, "text": " So if you just apply if you just have this branch here"}, {"start": 523.12, "end": 530.9599999999999, "text": " What will happen probably is that your generator can learn to basically collapse into a single?"}, {"start": 531.3199999999999, "end": 535.24, "text": " Image that looks realistic and discriminator will be totally fine with that"}, {"start": 535.24, "end": 541.3199999999999, "text": " I will just say that okay, this looks real to me, but we want to be diverse. 
We want to have diverse set of"}, {"start": 541.32, "end": 549.48, "text": " Avatars and so they what I do is they they take this these images and from from a certain layer here"}, {"start": 549.48, "end": 552.08, "text": " I think they are using penultimate layer"}, {"start": 552.08, "end": 557.1600000000001, "text": " We'll see the details a bit later, but they take the feature maps here and they just form these"}, {"start": 558.0, "end": 565.96, "text": " like basically standard deviation statistics and they'll use those to feed them into the second branch of discriminator and what that does is"}, {"start": 565.96, "end": 567.96, "text": " the following so that means that"}, {"start": 567.96, "end": 572.1600000000001, "text": " discriminator I the generator will need to learn how to mimic the"}, {"start": 573.12, "end": 576.6800000000001, "text": " basically those standard deviation across the whole batch and that will"}, {"start": 577.36, "end": 580.2, "text": " but by kind of trying to mimic the"}, {"start": 580.9200000000001, "end": 585.6, "text": " The same statistic from from the real images. You will learn how to create diverse set of images"}, {"start": 585.6, "end": 590.12, "text": " so that's the whole point of this thing now, let's dig into a bit more details and"}, {"start": 591.4000000000001, "end": 593.9200000000001, "text": " We'll see how all of those fit together"}, {"start": 593.92, "end": 599.92, "text": " Okay, so just ignore these I just verbal explained those so this was the first component"}, {"start": 599.92, "end": 605.4399999999999, "text": " So as you can see the loss has three components. That's the adversarial components. This is your classical again loss"}, {"start": 605.4399999999999, "end": 612.68, "text": " You have the style consistency loss and you have the cycle consistency loss. So the first thing is this as I said, so"}, {"start": 613.4799999999999, "end": 616.8, "text": " They say here in a batch as above we expect all"}, {"start": 617.24, "end": 621.24, "text": " Si sold the codes to be the same and this is how we enforce"}, {"start": 621.24, "end": 623.5600000000001, "text": " that concept and"}, {"start": 624.84, "end": 627.64, "text": " secondly, they have the cycle consistency loss and"}, {"start": 628.5600000000001, "end": 634.12, "text": " They say here that we find it helpful to shuffle the style codes in a batch. So there's just a minor detail"}, {"start": 635.08, "end": 637.08, "text": " Let me just show you the equations. So"}, {"start": 637.5600000000001, "end": 643.08, "text": " This part here is the encoder. So from y to x so that's that's this one"}, {"start": 643.84, "end": 645.84, "text": " So they are just kind of mixing different"}, {"start": 646.4, "end": 651.0, "text": " Notations here. 
They say here B to a they say y to x in the equation down there"}, {"start": 651.0, "end": 653.32, "text": " But that's the they want to see this thing"}, {"start": 653.32, "end": 660.22, "text": " so that's the equation represents you're feeding the generated image here and you out out comes the"}, {"start": 660.6, "end": 666.26, "text": " Content code and the style code and so what I say here is let me get back to the equation"}, {"start": 666.88, "end": 673.36, "text": " So this is the generated anime image and you feed back to get the content and the style code and then you feed it back"}, {"start": 674.16, "end": 679.36, "text": " So that very same content code, but you take the style code from from the original images"}, {"start": 679.36, "end": 682.2, "text": " So as you can see here, so those are"}, {"start": 683.32, "end": 686.64, "text": " So we take these style codes from the original"}, {"start": 687.24, "end": 692.52, "text": " Augmented images and we take whatever was generated here and we feed that into this decoder"}, {"start": 692.52, "end": 694.84, "text": " okay, so that's what these equations say and"}, {"start": 695.76, "end": 700.9, "text": " So a minor detail here is that as you can see so C comes from the Ith image"}, {"start": 701.12, "end": 704.16, "text": " but the style vector comes from the Jth image and"}, {"start": 704.16, "end": 708.92, "text": " What they did here by doing this is basically they are additionally encoding"}, {"start": 708.92, "end": 713.4599999999999, "text": " So this cycle consistency loss can also have the style consistency loss"}, {"start": 714.0, "end": 717.86, "text": " Mingled in and I don't know whether so this is obviously not that pretty"}, {"start": 718.28, "end": 722.8, "text": " Conceptually, but I guess that was some heuristic and they figured out they helps them further"}, {"start": 723.3199999999999, "end": 725.3199999999999, "text": " enforce that style consistency"}, {"start": 726.12, "end": 731.54, "text": " Component. Okay, and finally just L2. That's how we do cycle consistency loss"}, {"start": 731.54, "end": 735.48, "text": " So whatever we reconstructed we want to make sure that it's in the image space"}, {"start": 735.48, "end": 739.04, "text": " It's got a really small distance compared to the original"}, {"start": 739.88, "end": 741.56, "text": " augmented images"}, {"start": 741.56, "end": 743.56, "text": " There is one more detail here"}, {"start": 744.28, "end": 751.04, "text": " Additionally the they use this thing called LP IPS. So that's basically our perceptual loss from neural style transfer literature. So"}, {"start": 751.8, "end": 757.04, "text": " What happens is that let me go back to the diagram here. So what happens is that they"}, {"start": 757.04, "end": 763.04, "text": " Additionally, so they feed this image. So this is the input image and this is the reconstructed image"}, {"start": 763.1999999999999, "end": 766.52, "text": " they'll feed both of those through this generator and"}, {"start": 767.9599999999999, "end": 775.04, "text": " Decoder and basically they'll take certain feature maps from a certain layer and they'll want to make sure that also in the feature space"}, {"start": 775.16, "end": 778.3199999999999, "text": " They're really close looking at maybe again L2 metric"}, {"start": 778.3199999999999, "end": 784.24, "text": " So that's that's that additional part that they have in the cycle consistency component. 
Okay, and"}, {"start": 784.24, "end": 789.04, "text": " And finally, they have the diversity discriminator and adversarial loss"}, {"start": 789.52, "end": 791.0, "text": " component so"}, {"start": 791.0, "end": 795.48, "text": " That's the part I mentioned with the GAN loss and with this standard deviation stuff"}, {"start": 795.48, "end": 802.92, "text": " So let's see what they say our diversity discriminator exploits the mini badge standard deviation trick used in the progressive growing of GANs paper"}, {"start": 803.2, "end": 806.92, "text": " To exploit this difference in the penultimate layer of the discriminator"}, {"start": 806.92, "end": 811.08, "text": " We compute the standard deviation of each feature and pass that into another"}, {"start": 811.08, "end": 818.1600000000001, "text": " FC layers that's fully connected layer our diversity discriminator outputs the real fake logits and the standard deviation logits"}, {"start": 818.1600000000001, "end": 823.48, "text": " This means the discriminator can identify reliable differences in within batch variation"}, {"start": 823.6800000000001, "end": 827.76, "text": " so that's basically what I already explained and I'll just try to"}, {"start": 828.8000000000001, "end": 834.72, "text": " To tell you one thing that confuses me a little bit here. That's the following so imagine so this is your"}, {"start": 835.24, "end": 837.24, "text": " decoder and discriminator and"}, {"start": 838.1600000000001, "end": 839.36, "text": " basically"}, {"start": 839.36, "end": 841.72, "text": " What will happen once you feed this image?"}, {"start": 841.72, "end": 847.32, "text": " So let's say one image maybe this one once you feed it to and you extract the feature maps from this penultimate layer"}, {"start": 847.32, "end": 852.6, "text": " You'll basically have some activation volume. So those activations from this penultimate layer will look something like this"}, {"start": 852.6, "end": 858.8000000000001, "text": " So there'll be I will have some like let me draw it like this. So this will be some height"}, {"start": 859.76, "end": 866.44, "text": " This will be some width and I'll just do add a subscript here like L because we are looking at the layer L"}, {"start": 866.44, "end": 872.2600000000001, "text": " And we'll have some number of channels. So so basically these will be this is a collection of feature maps"}, {"start": 872.9200000000001, "end": 876.24, "text": " for this particular image, okay, so this will be C sub L and"}, {"start": 877.4000000000001, "end": 880.6800000000001, "text": " That's just a single image. So that's maybe this first image now"}, {"start": 880.6800000000001, "end": 887.5200000000001, "text": " We we do the same thing for for the second image. So we'll again have a similar type of activation volume"}, {"start": 888.24, "end": 889.4000000000001, "text": " there"}, {"start": 889.4000000000001, "end": 895.8000000000001, "text": " Etc and so the part that confuses me is that they haven't really explained how they are calculating these standard"}, {"start": 895.8, "end": 900.3199999999999, "text": " Deviation statistics. 
So what I assume is they take a certain feature like maybe this feature"}, {"start": 900.88, "end": 902.4399999999999, "text": " so that's"}, {"start": 902.4399999999999, "end": 904.68, "text": " basically, let's call it a I"}, {"start": 905.8, "end": 907.3199999999999, "text": " J and"}, {"start": 907.3199999999999, "end": 913.64, "text": " They're gonna compute the standard deviation across the channels and across the mini batch as well"}, {"start": 913.7199999999999, "end": 921.4799999999999, "text": " So now I don't know whether they are actually aggregating those standard deviations across the channels or whether they are just lumping that whole"}, {"start": 921.48, "end": 925.72, "text": " So where the final shape will be something like B"}, {"start": 926.48, "end": 928.48, "text": " times"}, {"start": 929.2, "end": 930.72, "text": " Height L"}, {"start": 930.72, "end": 931.8000000000001, "text": " times"}, {"start": 931.8000000000001, "end": 938.7, "text": " Width L or are they going to aggregate across this dimension and just feed that so it's kind of not clear"}, {"start": 938.7, "end": 945.52, "text": " The technicalities are not not kind of clear and I guess you have to experiment and try a couple things to see what works"}, {"start": 945.88, "end": 949.96, "text": " I think what I did in the progressive growing of GANs paper is that they"}, {"start": 949.96, "end": 955.9200000000001, "text": " Average across every single dimension and that then they just replicated into a certain constant feature map"}, {"start": 955.9200000000001, "end": 958.0, "text": " and then just append that to the"}, {"start": 958.9200000000001, "end": 964.0, "text": " Activation volumes, but yeah that part is not super clear. But the concept itself is hopefully"}, {"start": 965.0, "end": 967.0, "text": " Understandable and that's that they want to enforce"}, {"start": 967.2800000000001, "end": 973.0400000000001, "text": " That the generated images have the same similar or realistically looking statistics"}, {"start": 973.36, "end": 976.32, "text": " Which will make sure that we have diverse set of images. Okay"}, {"start": 976.32, "end": 980.5200000000001, "text": " so having said that let me continue here and"}, {"start": 981.6800000000001, "end": 984.96, "text": " Let's see the experiments and the results they got compared to the baselines"}, {"start": 986.32, "end": 988.32, "text": " Comparing this"}, {"start": 988.32, "end": 993.2, "text": " Gans and roses paper to these old baselines like grid++ console GAN and star GANv2"}, {"start": 993.2, "end": 998.5400000000001, "text": " What we can notice is that if we take two different style codes here, we get completely different images"}, {"start": 998.5400000000001, "end": 1001.5200000000001, "text": " So you can see that these two are okay"}, {"start": 1001.52, "end": 1006.38, "text": " Maybe the pose is similar, but the eyes are totally different. 
The chin is this is pointy chin"}, {"start": 1006.38, "end": 1010.64, "text": " This is kind of round chin and we fit and here the difference is even bigger"}, {"start": 1010.64, "end": 1014.84, "text": " So the head is obviously bigger the hair is different that the eyes are bigger etc"}, {"start": 1014.84, "end": 1019.12, "text": " So if we take a look at some other baselines here, you can see that it's pretty much the same picture"}, {"start": 1019.12, "end": 1021.36, "text": " Just the only thing that changes is kind of colors"}, {"start": 1021.86, "end": 1024.92, "text": " Same thing here and also like just take a look at this side"}, {"start": 1024.92, "end": 1029.96, "text": " Like this is so creepy and like there are some artifacts, but they look really similar"}, {"start": 1029.96, "end": 1032.64, "text": " So they and there are some artifacts like this one"}, {"start": 1032.64, "end": 1040.1200000000001, "text": " So they don't manage to create diverse types of avatars whereas this Gans and roses paper does they also compare with this newer?"}, {"start": 1040.48, "end": 1046.3600000000001, "text": " Any GAN paper and the same thing applies here as you can see applying three different style codes"}, {"start": 1046.48, "end": 1049.48, "text": " They they gain they get like totally different"}, {"start": 1049.8400000000001, "end": 1054.88, "text": " Like avatars so you can see there's there is a many differences between these three"}, {"start": 1054.88, "end": 1061.96, "text": " But taking a look at the end again output you can see that these three only like the only significant difference is maybe the eye"}, {"start": 1061.96, "end": 1066.72, "text": " The color basically and here you can also see the structure the face the eyes and everything changes"}, {"start": 1066.72, "end": 1073.0400000000002, "text": " So again a plus plus for the diversity component of the Gans and roses paper. They did some ablations"}, {"start": 1073.5200000000002, "end": 1080.88, "text": " Also like here you can see that this is the thing I was mentioning. So basically when you take this diversity"}, {"start": 1082.0800000000002, "end": 1083.3600000000001, "text": " discriminator"}, {"start": 1083.36, "end": 1090.7199999999998, "text": " Branch and you just just remove it you can see that that what happens is you get a basically mode collapse here and"}, {"start": 1091.84, "end": 1097.04, "text": " That means that all of these images are realistic and so the GAN component will say everything is fine"}, {"start": 1097.12, "end": 1101.1599999999999, "text": " But like as we can see they are pretty much the same and and they're yeah"}, {"start": 1101.1599999999999, "end": 1106.24, "text": " They're almost the same here when we include the branch. We actually get really diverse outputs"}, {"start": 1106.6399999999999, "end": 1110.9599999999998, "text": " and then they from some other paper they just tried this mode seeking loss and"}, {"start": 1110.96, "end": 1118.96, "text": " They begin just punch of artifacts. So they just stuck with this diversity discriminator component. 
Okay, they also did"}, {"start": 1120.16, "end": 1124.48, "text": " quantitative comparisons between with the baselines and just using these"}, {"start": 1125.28, "end": 1127.28, "text": " DFID and LPIPS"}, {"start": 1128.72, "end": 1134.28, "text": " Metrics which focus on the diversity of the images and they also use the FID so that's for shades inception distance"}, {"start": 1134.28, "end": 1137.1200000000001, "text": " We just focus is whether the generated image is actually"}, {"start": 1137.12, "end": 1142.2399999999998, "text": " Realistically looking and similar to the actual real avatars and"}, {"start": 1143.04, "end": 1146.2399999999998, "text": " Again, they just get basically much better"}, {"start": 1147.1999999999998, "end": 1151.6, "text": " Much better performance compared to the baseline. So DFID should be as low as possible"}, {"start": 1151.6, "end": 1157.9199999999998, "text": " You can see it's much lower than the baselines just focusing on this selfie to anime data set and FID is also smaller and"}, {"start": 1158.4799999999998, "end": 1164.3999999999999, "text": " LPIPS is higher. So all of these three metrics basically rely on a pre-trained classifier"}, {"start": 1164.4, "end": 1167.44, "text": " Like inception we three model or something and"}, {"start": 1168.0800000000002, "end": 1172.8000000000002, "text": " How they work is similar to the perceptual loss. So they just take they just do some"}, {"start": 1173.76, "end": 1178.4, "text": " Comparisons in the feature space and yeah, so I won't be digging into how they exactly work"}, {"start": 1178.4, "end": 1184.88, "text": " But main point here is that the quantitative results are are comparable and consistent with the qualitative results. We saw"}, {"start": 1186.0, "end": 1187.52, "text": " They also have"}, {"start": 1187.52, "end": 1189.76, "text": " There is this other paper that"}, {"start": 1190.3200000000002, "end": 1192.72, "text": " Created this technique called so this set of results"}, {"start": 1192.72, "end": 1200.72, "text": " Created this technique called so this sephapaper and they say that sephafinds laden direction that corresponds to large semantic changes in a style"}, {"start": 1200.72, "end": 1204.8, "text": " Based GAN by finding eigenvectors of the style modulation weight parameters"}, {"start": 1204.96, "end": 1210.72, "text": " We show that our style space is highly expressive and editable using these eigenvectors in figure seven"}, {"start": 1210.96, "end": 1218.56, "text": " So basically, they somehow calculate these eigenvectors and so every single image here was generated using a different style code"}, {"start": 1218.8, "end": 1221.44, "text": " And once you add these eigenvectors you can kind of"}, {"start": 1221.44, "end": 1226.0, "text": " Make them have certain properties. 
So if you add this eigenvector five"}, {"start": 1226.0800000000002, "end": 1231.28, "text": " You basically get small eyes and sharp features if you add eigenvector eight to get abstract lines"}, {"start": 1231.6000000000001, "end": 1235.04, "text": " And finally big eyes and dark hair if you add this eigenvector at 10"}, {"start": 1235.3600000000001, "end": 1241.68, "text": " So just uh, they show that this manipulation method kind of works for for their style codes as well"}, {"start": 1242.0800000000002, "end": 1243.68, "text": " Okay, that's that's pretty much it"}, {"start": 1243.68, "end": 1248.56, "text": " I mean this paper is pretty pretty simple to understand and but the results are I like results"}, {"start": 1248.56, "end": 1256.58, "text": " Which a lot and so I thought covering it. So yeah, hopefully you found this video useful if you did consider sharing and subscribing"}, {"start": 1256.58, "end": 1278.58, "text": " And until next time. Bye. Bye"}]
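A minimal PyTorch-style sketch of the three loss ingredients walked through in the segments above: the style-consistency term (variance of the style codes of one augmented batch pushed towards zero), the pixel-space part of the cycle-consistency term, and the minibatch standard-deviation feature. Tensor shapes and, in particular, the way the standard deviation is aggregated are assumptions; as noted above, the paper leaves that last detail ambiguous, so this is only one plausible reading, not the authors' exact implementation.

```python
import torch

def style_consistency_loss(style_codes):
    # style_codes: (B, D) style codes of one batch of augmentations of a single image;
    # drive the per-dimension variance across the batch towards zero
    return style_codes.var(dim=0, unbiased=False).mean()

def cycle_consistency_pixel_loss(x_aug, x_rec):
    # x_aug, x_rec: (B, C, H, W); plain L2 between the augmented inputs and their
    # reconstructions (the paper adds an LPIPS perceptual term on top of this)
    return ((x_aug - x_rec) ** 2).mean()

def minibatch_std_feature(feats):
    # feats: (B, C, H, W) activations from the discriminator's penultimate layer.
    # One plausible reading (roughly the ProgressiveGAN variant): std over the batch
    # at every position, averaged to a scalar, broadcast back as one extra channel.
    std = feats.std(dim=0)                                   # (C, H, W)
    scalar = std.mean().view(1, 1, 1, 1)
    return scalar.expand(feats.size(0), 1, feats.size(2), feats.size(3))
```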
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=xQ5ltOOxoFg
Graphormer - Do Transformers Really Perform Bad for Graph Representation? | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Paper: Do Transformers Really Perform Bad for Graph Representation? In this video, I cover Graphormer a new transformer model that achieved SOTA results on the OGB large-scale challenge benchmark. It achieved that by introducing 3 novel types of structural biases/encodings into the classic transformer encoder. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2106.05234 ✅ What can GNNs learn blog: https://andreasloukas.blog/2019/12/27/what-gnn-can-and-cannot-learn/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Key points of the paper 02:15 Graph-level predictions and why we need structural information 05:35 GNNs basics 09:40 Centrality encoding explained in depth 14:15 Spatial encoding explained in depth 18:00 Edge encoding explained in depth 22:30 Results on OGB LSC 23:30 Graphormer handles over-smoothing 25:10 Other results and ablation study 28:20 Graphormer is more expressive than WL 30:30 Mean aggregation as a special case of Graphormer 35:00 Sum aggregation as a special case of Graphormer ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphormer #graphs #transformers
What's up? In this video I'm covering this new paper called Do Transformers Really Perform Bad for Graph Representation?, which introduces the Graphormer model by Chengxuan Ying, Tianle Cai, Shengjie Luo and others; they did this work during an internship at Microsoft Research Asia. What is cool about this paper is that this is the first time a Transformer has outperformed classical graph neural networks on this important benchmark called the OGB Large-Scale Challenge. The whole idea of the paper is to encode structural inductive biases into the transformer model, and they show that by adding three of those encodings they can actually outperform the classical GNNs, which is super exciting. Having said that, up until now we knew that Transformers are really good for sequence modeling, so they dominated the NLP world. Then recently we had the Vision Transformer entering the computer vision world and outperforming a bunch of different computer vision models. The only downside there was that it required a huge amount of pre-training: it used roughly 300 million images (the JFT-300M dataset) to pre-train it. A couple of days ago I also covered a paper where the Vision Transformer used the sharpness-aware minimization objective so it no longer needs that huge pre-training and still outperforms ResNet baselines, which is really cool. But up until Graphormer we didn't have the same thing happening in the graph ML field, where GNNs were dominating pretty much everything. So let me just briefly show you how it looks pictorially. They add three encodings. They have this thing called a centrality encoding, for which they use something called degree centrality. They also add an edge encoding and a spatial encoding, injected directly as a bias before the softmax in the self-attention layer. Those encode the edge information as well as the structural information into the Graphormer model, and we'll soon see the details. But yeah, that was the high-level picture, so now let's start digging into the paper. So: the Transformer architecture has become a dominant choice in many domains such as natural language processing and computer vision, yet it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Just as a small digression, what graph-level means is the following. If you have a simple graph, you can do multiple types of tasks on it: node-level predictions, edge-level predictions, or graph-level predictions. A graph-level prediction would be, say, when this graph represents a certain molecule: you take into account all of the node and edge features and try to regress, maybe, the solubility of that molecule. On the other hand, for edge prediction you could have multiple candidate edges, the nodes may be drugs, and you're trying to predict the side effect of using the two drugs together. For example, you may predict that one edge has a probability of 0.9, meaning that side effect is very likely to occur, whereas another edge may be implausible because its probability is only 0.001. That would be an edge-level prediction task.
But here we're focusing on graph-level prediction tasks and most of the benchmarks actually contain molecules, so that makes sense. Next up, our key insight to utilizing transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. So what it means is the following. So if you know how transformers work and you hopefully do, and I'll link the paper somewhere here so you can check it out and understand transformers, hopefully you do. But like this is a 101. Basically, if we have some sequence of tokens like this, and what transformers do, let me draw another one, is every single token will be attending to each other tokens including itself. It will be attending to this one, it will be attending to this one, it will be attending to this one. And the same goes for all of the other tokens. And as you can see, what it implicitly assumes is that the underlying data forms a fully connected graph. So if I can just redraw this in a bit different fashion, like this. And what happens is that transformer assumes that all of these tokens, nodes, whatever they may be, it can be image patches in the case of a vision transformer, they can be like subware tokens in the case of NLP. And so it assumes this structure. But sometimes when you have a molecule, for example, you know that the molecule is connected like this. You know that this is the connection. And you want to somehow encode this information into the transformer. And that's what the graph former did. And we'll soon see how those exactly look like, but that's the whole point. You want to ingrain the structural information into the transformer to make it more performant. Okay, and finally, many popular genome variants could be covered as a special cases of graph former and I'll later show you this in the appendix and I'll explain how this comes to be. Okay, having said that, let me just give you like a brief introduction to GNNs. If you're not familiar with them, I have a whole graph ML playlist. You can check it out. I'll link it somewhere here. But like, just let me shortly explain you how this works. So what you usually have is in this one family of graph neural networks called MPNNs. So you have this aggregation step and you have this combined step. And so let me draw a small graph here, something like this. And what happens is the following. So let me connect these like this. So these are the direct neighbors and we have some indirect neighbors. So what will happen is that for this, so every single node, so that's the first point, every single node and every single edge will have a certain feature vector associated with it. So why is that? Well, basically assume you have, assume this is like, these are atoms. So this graph represents a molecule. So this edge, this feature vector may contain the information such as is this a single bond, is this a double bond, is this an aromatic bond, etc. So that can be, as you can see, useful information encoded directly into the edge feature vector. So what you do is the following. So you have this, as I said, aggregate step. So what that does is the following. So it will take the neighbors, so either the one hop or the multi hop neighbors, but let's assume we have one hop neighborhood here. So that means that this i node will take into account this vector, this vector, this vector, as well as the edges. And it will somehow aggregate that information. So that may be like a mean. 
You basically add them up and you just divide them by the number of elements or like sum or whatever. There can be different aggregation methods. But once you have that, so again, you have the original feature vector here and you have the one you just formed by aggregation. And now the following step is this combined method. As you can see, so this is the aggregated one, so this is this one, and then you have the original one, that's this one. And you somehow combine them. So they can be maybe via MLP, you just pass them through a multilayer perceptron and you form a novel representation here, which will look something like this. Okay, so that will be how graph neural networks roughly work. And you then stack a bunch of these layers together. Finally, you have some, depending on the task, you just have some classification head, whatever, you back prop, you train these weights and that's it. So that's it. And now there is one more thing. If you want to do graph level prediction, what you usually do is you have this readout function. And what it does, as you can see, it just takes the nodes from the set of nodes of that graph and you just kind of combine them somehow. So again, you have layer one and you do, on the layer one, you do a processing that looks something like this. So you do those kind of aggregations and you combine them and you repeat that multiple times. So you have layer two all the way up until layer L, whatever that number may be. And then you take the node feature vectors from that layer and you, for example, sum them up and that will be a readout function. So you have a single vector which you can then use to maybe regress something or whatever. And so that's how the pipeline roughly looks like for graph neural networks. So now having said that, let me just briefly tell you about Transformer. And as I said, so I won't be delving into details here, but you have a couple of procedures here. So you basically first project your feature vectors into these query vectors, into key vectors, into value vectors. Then you find the similarity between the query and the keys and that will provide you with this attention scores, which then you apply just a softmax and you then use those attention coefficients to aggregate the value vectors. So that's just a high-level overview. As I said, you can watch my video to understand this. That's not the point of this paper. And now let's start digging into the encodings that's the meat of this paper. Okay, first up, let's see the centrality encoding. So node centralities is a really abstract concept. There can be many things and they concretely use something called degree centrality. We'll see that in a moment. But the rough notion is that those centrality measures tell you how important a certain node or an edge is in that particular graph. So they say it here in equation four, the equation four is just a transformer equation for like accumulating value vectors. The attention distribution is calculated based on the semantic correlation between nodes. So that's the query keys dot product. However node centrality, which measures how important a node is in the graph, is usually a strong signal for graph understanding. For example, celebrities who have a huge number of followers are important vectors in predicting the trend of a social network. As you can imagine, if Elon Musk tweets something, you can assume that a nice portion of the Twitter graph will be copy pasting his, retweeting his content. 
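Stepping back for a second, here is a minimal sketch of the aggregate / combine / readout GNN pipeline described a bit earlier: one message-passing layer with mean aggregation over one-hop neighbours, an MLP combine step, and a sum readout for graph-level prediction. The dense adjacency-matrix formulation and the class and function names are illustrative assumptions; real GNN libraries use sparse message passing.

```python
import torch
import torch.nn as nn

class ToyMPNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # COMBINE step: original feature concatenated with the aggregated neighbourhood
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):
        # h: (N, D) node features, adj: (N, N) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        aggregated = adj @ h / deg                  # AGGREGATE: mean over one-hop neighbours
        return self.combine(torch.cat([h, aggregated], dim=-1))

def sum_readout(h):
    # graph-level representation: sum the final-layer node vectors,
    # then feed the result to e.g. a regression head
    return h.sum(dim=0)
```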
So you can kind of predict just because of a single node, you can predict the behavior of many other nodes. So that's maybe rough intuition for why this works. And as I mentioned, they're using degree centrality and that's a simple concept. So again, assume you have a node and assume you have some edges. So you have some ingoing edges, you have some outgoing edges here. So what node centrality is, is the following. So basically the out degree will be three here. As you can see, we have three edges. So that will mean we have out degree three and we'll have two ingoing edges. So the degree centrality of the, so of the in degree is two. So that's basically, so that's the number, that's the measure we want here. So for an influencer node, you'll have like thousands and thousands of these ingoing edges, which kind of signal that it's an important node. And how they include that information into this graph former is the following way. So as you can see, this is the original feature vector in your graph. And let me try and draw this. So again, you have a small graph here and it's connected with some other nodes. And so what happens is the following. You have the original, so this X side, that's the original feature vector. Okay. And you add up to that feature vector, these additional embeddings. So you basically form, what they do is they form a table which has two columns. So one column will correspond to the in degree vectors and the other will correspond to the out degree vector. And now if you focus on this specific node here, and yeah, assuming we have, assuming this one is maybe in degree, maybe these two, maybe these two is out, outgoing edge, you can see that we can basically form the degree numbers and we have two for the out degree and we have one for the in degree. And so using that information, we can basically index into this embedding table, which is initially just randomly initialized. So what we'll do is we'll have, as you can see here, so this will be zero slot, first slot, second slot. We'll just take this vector here and we'll add it directly to this one. So we'll just add it up to this one. And because we have one in degree, we'll go to the in degree column and we'll just index into this one. So this is one and we'll take this vector and we'll again add it up to this representation. Okay. So as you can see, that's basically what they do. And yeah, they say it here. Where Z minus and Z plus are d-dimensional real valued vectors, so our learnable embedding vector specified by the in degree and out degree respectively. So what this roughly does, now you can imagine if you had certain node, which is an influencer node, which had a bunch of these in going the edges, maybe thousand and you had another node that also had a thousand of these in degree edges. Okay. That means that they'll be indexing the same, the very same vector here will be used in both of those and that kind of can signal to this graph former that these are somehow special. That's the rough intuition behind all of this and because behind this first embedding embedding procedure. So the second thing which is really important is this spatial encoding and they say here, concretely for any graph G we consider a function Phi between VI and VJ. So we have node space and we had no space just a Cartesian product and we map that into real number. So which measures the spatial relation between VI and VJ in graph G. So this is very abstract. Let's make it a bit more concrete. 
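Before making the spatial encoding concrete, here is a small sketch of the degree-centrality encoding just described: two learnable embedding tables indexed by in-degree and out-degree, added onto the raw node features. The `max_degree` cap and the clamping are assumptions made for illustration, not something taken from the paper.

```python
import torch
import torch.nn as nn

class CentralityEncoding(nn.Module):
    def __init__(self, dim, max_degree=512):
        super().__init__()
        self.in_deg_emb = nn.Embedding(max_degree + 1, dim)
        self.out_deg_emb = nn.Embedding(max_degree + 1, dim)
        self.max_degree = max_degree

    def forward(self, x, in_degree, out_degree):
        # x: (N, D) node features; in_degree, out_degree: (N,) long tensors
        in_degree = in_degree.clamp(max=self.max_degree)
        out_degree = out_degree.clamp(max=self.max_degree)
        return x + self.in_deg_emb(in_degree) + self.out_deg_emb(out_degree)
```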
So in this paper we choose Phi to be the distance of the shortest path. So the shortest path distance between VI and VJ if the two nodes are connected, if not, we set the output of Phi to be a special value, for example, minus one. Okay. Let's see what this practically means. So again, we have this equation. As you can see, this is basically transformer equation. We have just the dot product between. So this is the query vector. This is the key vector. We do a dot product. This is transposed and we just kind of scale it where D is the number of dimensions in those vectors. We just take a square root. Okay. So this is the transformer part and that's not that interesting. So what's interesting here is they add a bias. So this is a learnable scalar as they say here to encode this information of the structure in the graph and they do that by you can see here this Phi is used to index into this B array. So B will be again as we had the we had this embeddings here. Now we'll have similarly array that will call B and which will be able to index using this Phi where Phi is the shortest path between the any two points. So that means the following again, let me draw a simple diagram. So we have maybe something like let me draw like a path diagram. So we have a simple path diagram a path graph and as you can see the this node here will have so this this node here will be have a distance of one this note here will have a distance of two. So that's the shortest path and finally this one will have distance of three. So that means when we do when we form a query vector here and keys here what we'll do is the following. So we'll be adding so for this particular node here will be adding. Let me just change the color. So this is 0 1 2 3. So for this one will be using exactly this one will be using this scalar then for the second one will be using this scalar. So now you can imagine that this can help us learn some interesting stuff. So they say here for example, if B is learned to be a decreasing function with respect to Phi so as we so that what they say is as we go down this B array if this becomes like smaller and smaller so they say that for each note the model will likely pay more attention to the notes near it and pay less attention to the notes far away from it and in the extreme case imagine you have you have maybe 0 here for the first note and you have minus infinity for all of the other ones. So this is minus infinity what will happen is because you're adding those to the attention score after applying softmax all of those nodes which have minus infinity the attention coefficients will drop to 0. Whereas for this one it will be nonzero will be probably one. So in this particular graph, so we'll be focusing. So what we'll end up doing is focusing on only this note here and ignoring the other the rest of these. So if we had more neighbors more direct neighbors would be basically focusing only on the one hop neighborhood just because of this learnable scalar here. So that's the idea there. This is how they encode this structural information of the graph why these shortest path distance scalars. Okay, that's the second important encoding and the third one and the final one is this edge encoding and they said here there are mainly two edge encoding methods used in previous works in the first method the edge features are added to the associated nodes features in the second method for each node its associated edge feature will be used together with the node features in the aggregation. 
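Before continuing with the edge encoding, here is a sketch of the spatial encoding described above: all-pairs shortest-path distances computed with BFS, and a learnable scalar b[phi(i, j)] added to every attention logit before the softmax. The distance cap and the extra "unreachable" bucket are illustrative assumptions.

```python
from collections import deque
import torch
import torch.nn as nn

def shortest_path_distances(adj_list, n, max_dist=20):
    # adj_list: {node: [neighbour, ...]}; pairs farther than max_dist (or unreachable)
    # all fall into one extra bucket with index max_dist + 1
    spd = torch.full((n, n), max_dist + 1, dtype=torch.long)
    for s in range(n):
        spd[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj_list[u]:
                if spd[s, v] > spd[s, u] + 1:
                    spd[s, v] = min(spd[s, u] + 1, max_dist)
                    q.append(v)
    return spd

class SpatialBias(nn.Module):
    def __init__(self, max_dist=20):
        super().__init__()
        self.b = nn.Parameter(torch.zeros(max_dist + 2))    # one learnable scalar per distance bucket

    def forward(self, attn_logits, spd):
        # attn_logits: (N, N) query-key scores; spd: (N, N) long tensor of distances
        return attn_logits + self.b[spd]                    # added before the softmax
```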
However, such ways of using edge feature only propagate the edge information to its associated nodes which may not be an effective way to leverage edge information in the representation of the whole graph. So in a nutshell what this means is the following. So if you have a if you have a graph again, let me draw a simple graph here. What they do is if you have a graph like this and when you do those aggregation and combine steps in the first layer of the GNN what will end up with what we'll end up doing is will be end up will end up including this information. So these two edges and information from these two nodes into this novel representation that will form here and we are ignoring these edges here. We're ignoring them in the first best least and what they are suggesting here is that it may be wise to kind of ingrain that information straight away and like during this aggregation process just kind of use that edge information from all of the other nodes and that's how they do that is the following. So again, you can see simple stuff. We have the transformer part. We have the spatially embedding part, which I just explained. And we have this novel edge encoding part, which is this fancy formula, which I'm now going to try to explain. So again, we'll have a embedding table. So we'll have a table which will correspond to these learnable edge features and like the length of this table will just be the following thing. The length will be the maximum shortest path in the graph. So you find all of the so between every single node in your graph, you find the shortest paths and you find the maximal such path and that will be the length of this table. Let's call it. I don't know maybe E and okay. So once we have that we do the following thing. So let's assume we want to find. So let's assume we are finding a tension score between I and J. So if this is I and if this node here is J, what we'll do is the following. So we'll have we'll have some feature vectors which are associated with these edges, right? So we'll have this and this and this and we'll do the following will take the first edge. So we'll have edges here. So these are again random initialize initially. So we have 123, etc. So we'll take this vector here will do a product between that vector and this vector here. So we'll do the product between those two because this is the first edge on the shortest path. So the shortest path is this one. So we have this is the shortest path between I and J. So we'll first do a dot product between those two. Then we'll take the following vector will do a dot product between that vector and this vector and we'll take the third vector because this is the third vector on this path and we'll do dot product and what we'll just do is we'll just sum them up as you can see here and divide them by n which is the number of edges in the path, which is 3 here. So these are the edge features. These are the learnable edge features and finally just sum them up and divide by n. That's it. And intuitively what this does is we ingrain the edge information across so between I and J along the shortest path and they show that in the ablation studies later that this helps boost the performance of the graph form model. Okay. So those are the main ideas of this paper. These three encodings and now I'll just again. So everything else remains the same as in the transformer model. They just ingrain these encodings. Everything else remains the same. You have the multi-head attention sub module. You have the FFN. 
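Before moving on to the results, here is a sketch of the edge encoding described above: for a node pair (i, j), walk the edges of one shortest path between them, dot each edge's feature vector with a learnable weight vector for that position on the path, and average the results into a scalar that gets added to the (i, j) attention logit. The shapes, the path-length cap, and how the path itself is extracted are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EdgeEncoding(nn.Module):
    def __init__(self, edge_dim, max_path_len=20):
        super().__init__()
        # one learnable weight vector per position (hop) along the shortest path
        self.w = nn.Parameter(torch.randn(max_path_len, edge_dim) * 0.02)

    def forward(self, path_edge_feats):
        # path_edge_feats: (L, edge_dim) features of the L edges on the shortest path i -> j
        L = path_edge_feats.size(0)
        if L == 0:
            return path_edge_feats.new_zeros(())            # e.g. i == j, no edges on the path
        dots = (path_edge_feats * self.w[:L]).sum(dim=-1)   # one dot product per hop
        return dots.mean()                                  # scalar c_ij added to the attention logit
```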
So: the feed-forward network module, the layer normalization, standard stuff, so I won't go into those details. I'll also skip, for now, an interesting part which shows that classical GNNs can be considered a special case of this Graphormer module; we'll come back to it. But before that, let me show you the experimental results. Looking at the baselines here: again, this is the OGB Large-Scale Challenge dataset, it has 3.8 million graphs, so it's huge compared to previously popular graph ML benchmarks such as Cora, Citeseer or PubMed, and it's a much better benchmark than those. They compare with GCN (that's the graph convolutional network), GIN, the virtual-node variants of those, etc. There's also a modification of a transformer, another attempt to apply transformers to graph-level prediction tasks, but not as successful as Graphormer, as we'll see. Looking at the test mean absolute error, Graphormer achieves the lowest error compared to all of these other classical GNNs. Those are some nice results, and there's one more thing I want to mention. As stated in section 3.3, they further find that the proposed Graphormer does not encounter the problem of over-smoothing, i.e. the train and validation errors keep going down as the depth and width of the models grow. Now this may sound strange if you're not familiar with graph neural networks, but compared to CNNs (take ResNets, for example, where more depth basically gives you more expressive, more performant models), in the GNN literature more depth doesn't necessarily help. You usually see shallow GNNs, like two or three layers deep, and those work just fine. There are multiple reasons for that. One of them is the bottleneck effect: if you look at the one-hop neighborhood you may have a hundred neighbors, at the two-hop neighborhood maybe 10,000, and at the three-hop neighborhood, that's the third layer of the GNN, you may be attending to millions of nodes. If you try to average all of those feature vectors, you end up collapsing onto basically a single feature vector, and that's what's called over-smoothing: all of your node representations start collapsing into the very same vector, and you lose the discriminative power to solve whatever the task at hand is, say predicting the solubility of a molecule. So that's the over-smoothing problem, and it seems that Graphormer has dealt with it; I'm really excited to see what future research shows about why that is. (A small numerical sketch of the over-smoothing effect follows below.) Finally, they also show results on other OGB datasets, again mostly molecular datasets, and you can see it just outperforms all of the other baselines. The FLAG suffix is just a simple adversarial augmentation, so that you know what that suffix means; and on ZINC, again, it gets the lowest mean absolute error compared to the previous baselines. So that's pretty much it. They also did some ablations showing that all of these encodings boost the performance.
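Here is the tiny numerical illustration of the over-smoothing effect mentioned above: repeatedly averaging over neighbours (on a simple ring graph with self-loops, with no learned combine step) drives all node features towards the same vector, so the spread across nodes shrinks layer by layer. The graph and the "spread" measure are just illustrative choices.

```python
import torch

n, d = 8, 4
adj = torch.zeros(n, n)
for i in range(n):                         # ring graph
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
adj.fill_diagonal_(1.0)                    # self-loops
deg = adj.sum(dim=1, keepdim=True)

torch.manual_seed(0)
h = torch.randn(n, d)
for layer in range(10):
    h = adj @ h / deg                      # plain mean aggregation, stacked "layers"
    spread = h.std(dim=0).mean().item()    # how different the node features still are
    print(f"layer {layer + 1}: feature spread = {spread:.4f}")
```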
Back to the ablations: this is the edge encoding, and you can see the error drops when they add it; the same goes for the node centrality and for the spatial encoding. They always get a boost in performance when they add each of those particular encodings. Finally, as we know, vanilla transformers have a scalability problem: there is a quadratic dependency between the tokens because of the attention mechanism, so the quadratic complexity of the self-attention module restricts Graphormer's application on large graphs, which is pretty much obvious. That means that even though it achieved really nice results on these benchmarks, which are molecular datasets, molecular graphs, I imagine it still can't be used on truly huge graphs such as, say, a social-media graph. So I expect to see people applying the ideas from the efficient-transformer literature, like Linformer, Performer, etc., to make this thing more efficient, and then we'll see whether it can achieve nice results on larger graphs as well. Having said that, those were the main ideas of this paper, and now I want to focus on this Fact 1, dig a bit deeper into it and explain why it holds. Okay, let's focus on this part: by choosing proper weights and distance function phi, the Graphormer layer can represent the AGGREGATE and COMBINE steps of popular GNN models such as GIN, GCN and GraphSAGE. Moreover, they show further that by using the spatial encoding, Graphormer can go beyond classic message-passing GNNs, whose expressive power is no more than the Weisfeiler-Lehman (1-WL) test. By the way, there is a bit of confusion around this Weisfeiler-Lehman test: all of these expressivity statements are really only valid for so-called anonymous graph neural networks, where by anonymous I mean that the node feature vectors are basically nonexistent or not discriminative. When you use standard graph neural networks you always have those node features, so these analyses can be kind of tricky. I'll link a blog down in the description, you can see it on the screen, which explains this problem really well and lays out what GNNs can and cannot learn; do check it out, it's really cool. Having said that, let's focus on this part: how can we treat these classical GNNs as a special case? But let's start first with this Weisfeiler-Lehman thing. The Weisfeiler-Lehman test, the color-refinement test, basically outputs a color histogram, and if the color histograms of two graphs are the same it claims that the graphs are isomorphic. We can clearly see that these two graphs here are not isomorphic, but 1-WL tells us that they are, because both have four red colors and two blue colors, and because of that 1-WL would falsely claim they are isomorphic. On the other hand, if you focus on the shortest-path distance: these two graphs cannot be distinguished by the 1-WL test, but they can be by the SPD sets, i.e. the sets of shortest-path distances from each node to the others are different. The two types of nodes in the left graph have the SPD sets you can see here; let me break down what that means. Obviously the shortest-path distance between this red node and itself is zero.
It has distance one until it reaches these nodes, two for these two, and finally three, so that's 0, 1, 1, 2, 2, 3, and for all of the red nodes we have the same SPD set. Finally, for the blue node, again we trivially have a zero shortest path to itself, we have three ones, those are these nodes here, and finally two twos, which are these two. It goes similarly for the graph on the right, and we can see that the SPD sets are different; thus the SPD lets us determine that these graphs are not isomorphic, i.e. they are non-isomorphic. So that's a visual explanation and a particular example where they show that Graphormer is more powerful than those simple 1-WL-bounded architectures. (A small code sketch of this SPD-set idea is included right after this chunk.) Now let's focus on the really important part: how can we show that some GNNs are a special case of this Graphormer module? Let's start with the mean AGGREGATE function and explain how that works. They begin by showing that the self-attention module with spatial encoding can represent mean aggregation. This is achieved, in equation 6 (this is equation 6: this part is your transformer score and this part is the bias using the shortest-path function), by setting the scalar b to 0 when phi equals 1, so for direct neighbors, and to minus infinity otherwise, where phi is the shortest-path distance, and by setting the query projection matrix and the key projection matrix to the zero matrix and the value projection matrix to the identity matrix; then the softmax gives the average of the representations of the neighbors. Let me break down how this works, it's quite simple. We have a node here, let's say it has three direct neighbors, and let's say we also have some indirect neighbors, like these two. We are focusing on this node and trying to figure out its next-layer representation. These nodes here have distance one, these have distance two, and finally three. So if we set these b's to 0, this one to 0, this one to 0 and this one to 0, and we set these to minus infinity, then, looking at the formula, after we apply the softmax these will go to 0 and these will have some nonzero value. The second part is that the query and key matrices should be zero matrices and the value matrix should be the identity; that's what gives us the mean aggregation over the neighbors. So for now we just have some arbitrary nonzero values here and zeros here for the attention coefficients. Now let's focus on this particular node: once we map its feature vector using the key, query and value matrices, we get the following. The key and query will just be zero vectors, because remember we have zero matrices there, and this one here, because we have an identity matrix, will be the same vector, just a copy-pasted version of the input; so this is v, the value vector.
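The sketch below illustrates the SPD-set argument from above. The paper uses its own pair of example graphs; as an illustrative stand-in, a 6-cycle and two disjoint triangles are used here: every node has degree 2 in both graphs, so 1-WL colour refinement produces identical histograms for them, while the per-node shortest-path-distance multisets immediately differ.

```python
from collections import deque

def spd_multiset(adj_list, n, source):
    # BFS from `source`; unreachable nodes get an infinite distance
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj_list[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sorted(dist.get(v, float("inf")) for v in range(n))

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}          # one 6-cycle
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],                   # two disjoint triangles
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(spd_multiset(cycle6, 6, 0))          # [0, 1, 1, 2, 2, 3]
print(spd_multiset(two_triangles, 6, 0))   # [0, 1, 1, inf, inf, inf]
```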
Okay, back to the mean-aggregation argument: now we do the same thing for all of the nodes in the graph. If we take this query vector, which is all zeros, and take dot products with any of the keys, and all the keys are also zero vectors, this term will always go to zero. So that term in the equation vanishes, and, as I already explained, the bias terms will be zero for the direct neighbors and minus infinity elsewhere. That means we end up with the same coefficient for each neighbor, and that's going to be one over three, while the others will be zero. So we end up attending only to these nodes here, and we take the mean of those three feature vectors. If you're confused by the one third, it's pretty simple, it's just the softmax. Focus on this node, for example: it has e raised to the power of zero, because that's the attention score we got, and down in the denominator we have a sum that equals three, because we have three times e raised to the power of zero and two times e raised to the power of minus infinity, which is obviously zero. That's why we get one over three. Those are the attention coefficients this node will use to accumulate the value vectors of its direct neighbors, and, as you can see, we literally obtain mean aggregation. That was the first part. Now, the sum aggregation part is pretty easy once we understand the mean aggregation part, but let me zoom in a bit and explain it. The sum aggregation can be realized by first performing mean aggregation and then multiplying by the node degree. We'll see what that means. Specifically, they say the node degrees can be extracted from the centrality encoding, that's the degree encoding, by an additional head, and concatenated to the representations after mean aggregation. Then the feed-forward network module in Graphormer can represent the function of multiplying the dimensions of the averaged representations by the degree, by the universal approximation theorem for feed-forward networks. Let me break that down; it's pretty simple, and I'll just draw it over this chart. Assume the first head gives us the mean representation for this vector. Now take another head; remember, this is a transformer, so we have multiple attention heads. That head will learn how to extract the degree of this node, which, as you can see here, considering some directed graph, is three. So it will learn to extract something like a vector which is all zeros with only a three at the bottom. And how can that happen? Remember, this vector here was formed by taking the original feature vector and adding to it those embeddings for the degree centrality, so we literally added vectors which carry the degree information, and this head can learn how to extract that number back out of the vector. Once we have that, we concatenate: if you remember how a transformer works, every attention head outputs a representation, those get concatenated, and then we have some MLPs.
So again, we'll have the following: we'll have the mean representation and we'll have the degree of the node, and then by the universal approximation theorem, these MLPs, given enough width and depth, can basically approximate any function. So what we get here, after mapping them, is that it can learn how to take this number, let's say three, and multiply it with every single feature dimension here, so with this one, this one, this one and this one, and if we do that we'll just end up with a sum aggregation instead of a mean aggregation. Hopefully this was clear enough, this is how I understood it basically, and yeah, if I'm wrong, feel free to comment down below. But I think that's in a nutshell how it works. A similar thing holds for the max aggregate, for combine, and for the mean readout; you can work those out yourself, but this is the whole logic, the whole point of how it works. And yeah, we saw that we can emulate these as special cases of Graphormer, and that's super cool. Having said that, if you liked this video consider sharing and subscribing, and until next time, bye bye.
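To round off the sum-aggregation argument from just above with something runnable, here is a one-function sketch (again my own toy, not the paper's code) of the mean-times-degree step that the feed-forward network is argued to be able to approximate:

import numpy as np

def sum_from_mean(mean_repr: np.ndarray, degree: int) -> np.ndarray:
    # With the degree concatenated next to the mean representation, the FFN
    # only needs to scale every feature dimension by that degree.
    return degree * mean_repr

neighbors = np.random.randn(3, 4)              # three direct neighbors, feature dim 4
mean_repr = neighbors.mean(axis=0)             # what the first head computes
out = sum_from_mean(mean_repr, degree=3)       # what the FFN can approximate
print(np.allclose(out, neighbors.sum(axis=0))) # True: mean * degree == sum

Since the mean of the neighbors multiplied by the number of neighbors is exactly their sum, this simple scaling is all the FFN has to represent once the degree is available alongside the mean representation.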
[{"start": 0.0, "end": 3.2, "text": " What's up? In this video I'm covering this new paper called"}, {"start": 3.2, "end": 6.6000000000000005, "text": " Do Transformers Really Perform Bad for Graph Representation?"}, {"start": 6.6000000000000005, "end": 13.8, "text": " Introducing the Graformer model by Zhengxuan Jing, Jiang Lekai, Zhengjie Luo"}, {"start": 13.8, "end": 19.8, "text": " and others from... they did this as an internship at Microsoft Research Asia basically."}, {"start": 19.8, "end": 24.0, "text": " So what is cool about this paper is that this is the first time that"}, {"start": 24.0, "end": 31.0, "text": " Transformer has outperformed classical graph neural networks on this important"}, {"start": 31.0, "end": 36.4, "text": " benchmark called OGB Large Scale Challenge. And the whole point of the"}, {"start": 36.4, "end": 42.2, "text": " paper, the whole idea is to encode structural inductive biases into the"}, {"start": 42.2, "end": 46.6, "text": " transformer model and they show that by adding three of those encodings they can"}, {"start": 46.6, "end": 52.0, "text": " actually outperform obviously those classical GNNs, which is super exciting."}, {"start": 52.0, "end": 57.6, "text": " Having said that, so basically we up until now we knew that Transformers are"}, {"start": 57.6, "end": 62.6, "text": " really good for sequence modeling so they dominated the NLP world. Then recently"}, {"start": 62.6, "end": 67.4, "text": " we had the Vision Transformer entering the computer vision world but and"}, {"start": 67.4, "end": 71.6, "text": " outperforming like a bunch of different computer vision models. But like the only"}, {"start": 71.6, "end": 76.1, "text": " downside there was that it required a huge amount of pre-training so like it"}, {"start": 76.1, "end": 82.1, "text": " used 300M, a million data points to pre-train it. And a couple of days ago I"}, {"start": 82.1, "end": 86.3, "text": " have also covered this paper where the VIT, the Vision Transformer, used this"}, {"start": 86.3, "end": 92.0, "text": " sharpness aware minimization objective to actually not need any pre-training"}, {"start": 92.0, "end": 95.8, "text": " anymore and it still outperforms a resonant baselines, which is really cool."}, {"start": 95.8, "end": 99.39999999999999, "text": " But up until now, up until Graphformer, we didn't have the same thing"}, {"start": 99.39999999999999, "end": 104.0, "text": " happening in the GraphML field where GNNs were dominating pretty much. And so"}, {"start": 104.0, "end": 108.3, "text": " let me just briefly show you how it looks pictorially. So they add three"}, {"start": 108.3, "end": 111.8, "text": " encodings. They have this thing called a centrality encoding. They used something"}, {"start": 111.8, "end": 117.8, "text": " called degree centrality. They also add these edge encoding and spatial encoding"}, {"start": 117.8, "end": 122.4, "text": " information directly as a bias before the softmax in the self-attention layer."}, {"start": 122.4, "end": 127.3, "text": " And those just kind of encode the edge information as well as the structural"}, {"start": 127.3, "end": 133.0, "text": " information into the Graphformer model. And we'll soon see the details."}, {"start": 133.0, "end": 137.5, "text": " But yeah, that was a high-level picture. 
So now let's start digging into the paper."}, {"start": 137.5, "end": 142.6, "text": " So the Transformer architecture has become a dominant choice in many domains"}, {"start": 142.6, "end": 146.4, "text": " such as natural language processing and computer vision. Yet it has not achieved"}, {"start": 146.4, "end": 150.7, "text": " competitive performance on popular leaderboards of graph-level prediction"}, {"start": 150.7, "end": 156.0, "text": " compared to mainstream GNN variants. So just as a small digression here, what"}, {"start": 156.0, "end": 160.2, "text": " graph-level means is the following. So if you have a graph, a simple graph here,"}, {"start": 160.2, "end": 168.5, "text": " you can do multiple types of tasks on this graph. You can do node-wise, so"}, {"start": 168.5, "end": 172.1, "text": " node-level predictions. You can do edge-level predictions. Or you can do"}, {"start": 172.1, "end": 175.1, "text": " graph-level predictions. So a graph-level prediction would be if you have a"}, {"start": 175.1, "end": 178.89999999999998, "text": " molecule here. If this graph represents a certain molecule. So for the graph-level,"}, {"start": 178.89999999999998, "end": 182.79999999999998, "text": " you take into account all of the node and edge features and you try to"}, {"start": 182.79999999999998, "end": 187.0, "text": " regress maybe the solubility of this molecule. On the other hand, if you had"}, {"start": 187.0, "end": 191.0, "text": " edge prediction, for example, you could have multiple edges here and these nodes"}, {"start": 191.0, "end": 195.1, "text": " can be maybe drugs and you're trying to predict the side effect between you by"}, {"start": 195.1, "end": 198.8, "text": " using the two drugs together. And for example, you may regress that this edge"}, {"start": 198.8, "end": 202.6, "text": " has a probability of 0.9. That means that this side effect has a huge"}, {"start": 202.6, "end": 206.8, "text": " probability of occurring. On the other hand, this edge here may be"}, {"start": 206.8, "end": 212.2, "text": " un-plausible because the probability is only 0.001, for example. And that"}, {"start": 212.2, "end": 216.1, "text": " would be edge-level prediction task. But here we're focusing on graph-level"}, {"start": 216.1, "end": 219.9, "text": " prediction tasks and most of the benchmarks actually contain molecules, so"}, {"start": 219.9, "end": 223.7, "text": " that makes sense. Next up, our key insight to utilizing"}, {"start": 223.7, "end": 227.5, "text": " transformer in the graph is the necessity of effectively encoding the"}, {"start": 227.5, "end": 232.5, "text": " structural information of a graph into the model. So what it means is the"}, {"start": 232.5, "end": 235.9, "text": " following. So if you know how transformers work and you hopefully do, and"}, {"start": 235.9, "end": 239.0, "text": " I'll link the paper somewhere here so you can check it out and understand"}, {"start": 239.0, "end": 243.29999999999998, "text": " transformers, hopefully you do. But like this is a 101."}, {"start": 243.3, "end": 249.10000000000002, "text": " Basically, if we have some sequence of tokens like this, and what"}, {"start": 249.10000000000002, "end": 254.0, "text": " transformers do, let me draw another one, is every single token will be attending"}, {"start": 254.0, "end": 257.8, "text": " to each other tokens including itself. It will be attending to this one, it will"}, {"start": 257.8, "end": 261.6, "text": " be attending to this one, it will be attending to this one. 
And the same goes"}, {"start": 261.6, "end": 266.0, "text": " for all of the other tokens. And as you can see, what it implicitly assumes is"}, {"start": 266.0, "end": 271.1, "text": " that the underlying data forms a fully connected graph. So if I"}, {"start": 271.1, "end": 277.70000000000005, "text": " can just redraw this in a bit different fashion, like this. And what"}, {"start": 277.70000000000005, "end": 283.6, "text": " happens is that transformer assumes that all of these tokens, nodes, whatever"}, {"start": 283.6, "end": 287.0, "text": " they may be, it can be image patches in the case of a vision transformer, they"}, {"start": 287.0, "end": 292.40000000000003, "text": " can be like subware tokens in the case of NLP. And so it assumes this"}, {"start": 292.40000000000003, "end": 296.8, "text": " structure. But sometimes when you have a molecule, for example, you know that the"}, {"start": 296.8, "end": 302.0, "text": " molecule is connected like this. You know that this is the connection. And you"}, {"start": 302.0, "end": 306.90000000000003, "text": " want to somehow encode this information into the transformer. And that's what"}, {"start": 306.90000000000003, "end": 312.1, "text": " the graph former did. And we'll soon see how those exactly look like, but that's"}, {"start": 312.1, "end": 315.3, "text": " the whole point. You want to ingrain the structural information into the"}, {"start": 315.3, "end": 320.8, "text": " transformer to make it more performant."}, {"start": 320.8, "end": 325.6, "text": " Okay, and finally, many popular genome variants could be covered as a special"}, {"start": 325.6, "end": 328.8, "text": " cases of graph former and I'll later show you this in the appendix and I'll"}, {"start": 328.8, "end": 334.1, "text": " explain how this comes to be. Okay, having said that, let me just give you"}, {"start": 334.1, "end": 339.20000000000005, "text": " like a brief introduction to GNNs. If you're not familiar with them, I have a"}, {"start": 339.20000000000005, "end": 342.90000000000003, "text": " whole graph ML playlist. You can check it out. I'll link it somewhere here."}, {"start": 342.90000000000003, "end": 349.1, "text": " But like, just let me shortly explain you how this works. So what you usually"}, {"start": 349.1, "end": 355.0, "text": " have is in this one family of graph neural networks called MPNNs. So"}, {"start": 355.0, "end": 360.5, "text": " you have this aggregation step and you have this combined step. And so let me"}, {"start": 360.5, "end": 367.5, "text": " draw a small graph here, something like this. And what happens is the following."}, {"start": 367.5, "end": 371.1, "text": " So let me connect these like this. So these are the direct neighbors and we"}, {"start": 371.1, "end": 376.4, "text": " have some indirect neighbors. So what will happen is that for this, so every"}, {"start": 376.4, "end": 380.5, "text": " single node, so that's the first point, every single node and every single edge"}, {"start": 380.5, "end": 384.6, "text": " will have a certain feature vector associated with it. So why is that? Well,"}, {"start": 384.6, "end": 389.1, "text": " basically assume you have, assume this is like, these are atoms. So this graph"}, {"start": 389.1, "end": 393.6, "text": " represents a molecule. 
So this edge, this feature vector may contain the"}, {"start": 393.6, "end": 397.5, "text": " information such as is this a single bond, is this a double bond, is this an"}, {"start": 397.5, "end": 401.70000000000005, "text": " aromatic bond, etc. So that can be, as you can see, useful information encoded"}, {"start": 401.70000000000005, "end": 406.1, "text": " directly into the edge feature vector. So what you do is the following. So you"}, {"start": 406.1, "end": 410.20000000000005, "text": " have this, as I said, aggregate step. So what that does is the following. So it"}, {"start": 410.2, "end": 414.59999999999997, "text": " will take the neighbors, so either the one hop or the multi hop neighbors, but"}, {"start": 414.59999999999997, "end": 417.5, "text": " let's assume we have one hop neighborhood here. So that means that this"}, {"start": 417.5, "end": 422.4, "text": " i node will take into account this vector, this vector, this vector, as well"}, {"start": 422.4, "end": 427.59999999999997, "text": " as the edges. And it will somehow aggregate that information. So that may"}, {"start": 427.59999999999997, "end": 431.4, "text": " be like a mean. You basically add them up and you just divide them by the number"}, {"start": 431.4, "end": 436.09999999999997, "text": " of elements or like sum or whatever. There can be different aggregation methods."}, {"start": 436.1, "end": 442.0, "text": " But once you have that, so again, you have the original feature vector here and"}, {"start": 442.0, "end": 447.0, "text": " you have the one you just formed by aggregation. And now the following step"}, {"start": 447.0, "end": 451.0, "text": " is this combined method. As you can see, so this is the aggregated one, so this is"}, {"start": 451.0, "end": 456.5, "text": " this one, and then you have the original one, that's this one. And you somehow"}, {"start": 456.5, "end": 461.0, "text": " combine them. So they can be maybe via MLP, you just pass them through a"}, {"start": 461.0, "end": 465.6, "text": " multilayer perceptron and you form a novel representation here, which will"}, {"start": 465.6, "end": 471.90000000000003, "text": " look something like this. Okay, so that will be how graph neural networks"}, {"start": 471.90000000000003, "end": 475.6, "text": " roughly work. And you then stack a bunch of these layers together. Finally, you"}, {"start": 475.6, "end": 479.0, "text": " have some, depending on the task, you just have some classification head,"}, {"start": 479.0, "end": 484.20000000000005, "text": " whatever, you back prop, you train these weights and that's it. So that's"}, {"start": 484.20000000000005, "end": 488.6, "text": " it. And now there is one more thing. If you want to do graph level prediction,"}, {"start": 488.6, "end": 492.0, "text": " what you usually do is you have this readout function. And what it does, as"}, {"start": 492.0, "end": 497.0, "text": " you can see, it just takes the nodes from the set of nodes of that graph and"}, {"start": 497.0, "end": 503.8, "text": " you just kind of combine them somehow. So again, you have layer one and you do, on"}, {"start": 503.8, "end": 507.1, "text": " the layer one, you do a processing that looks something like this. So you do"}, {"start": 507.1, "end": 512.0, "text": " those kind of aggregations and you combine them and you repeat that"}, {"start": 512.0, "end": 517.1, "text": " multiple times. So you have layer two all the way up until layer L, whatever that"}, {"start": 517.1, "end": 522.1, "text": " number may be. 
And then you take the node feature vectors from that layer and"}, {"start": 522.1, "end": 525.6, "text": " you, for example, sum them up and that will be a readout function. So you have"}, {"start": 525.6, "end": 531.6, "text": " a single vector which you can then use to maybe regress something or whatever."}, {"start": 531.6, "end": 535.8000000000001, "text": " And so that's how the pipeline roughly looks like for graph neural"}, {"start": 535.8000000000001, "end": 539.7, "text": " networks. So now having said that, let me just briefly tell you about"}, {"start": 539.7, "end": 543.9, "text": " Transformer. And as I said, so I won't be delving into details here, but you"}, {"start": 543.9, "end": 548.1999999999999, "text": " have a couple of procedures here. So you basically first project your feature"}, {"start": 548.1999999999999, "end": 552.6, "text": " vectors into these query vectors, into key vectors, into value vectors. Then you"}, {"start": 552.6, "end": 557.6, "text": " find the similarity between the query and the keys and that will provide you"}, {"start": 557.6, "end": 563.6, "text": " with this attention scores, which then you apply just a softmax and you then"}, {"start": 563.6, "end": 569.4, "text": " use those attention coefficients to aggregate the value vectors. So that's"}, {"start": 569.4, "end": 573.3, "text": " just a high-level overview. As I said, you can watch my video to understand this."}, {"start": 573.3, "end": 578.4, "text": " That's not the point of this paper. And now let's start digging into the"}, {"start": 578.4, "end": 582.3, "text": " encodings that's the meat of this paper. Okay, first up, let's see the centrality"}, {"start": 582.3, "end": 588.9, "text": " encoding. So node centralities is a really abstract concept."}, {"start": 588.9, "end": 593.0999999999999, "text": " There can be many things and they concretely use something called degree"}, {"start": 593.0999999999999, "end": 596.6999999999999, "text": " centrality. We'll see that in a moment. But the rough notion is that those"}, {"start": 596.7, "end": 603.3000000000001, "text": " centrality measures tell you how important a certain node or an edge is in"}, {"start": 603.3000000000001, "end": 607.3000000000001, "text": " that particular graph. So they say it here in equation four, the equation four"}, {"start": 607.3000000000001, "end": 612.4000000000001, "text": " is just a transformer equation for like accumulating value vectors. The"}, {"start": 612.4000000000001, "end": 615.6, "text": " attention distribution is calculated based on the semantic correlation"}, {"start": 615.6, "end": 620.6, "text": " between nodes. So that's the query keys dot product. However node centrality,"}, {"start": 620.6, "end": 625.1, "text": " which measures how important a node is in the graph, is usually a strong signal"}, {"start": 625.1, "end": 628.7, "text": " for graph understanding. For example, celebrities who have a huge number of"}, {"start": 628.7, "end": 632.3000000000001, "text": " followers are important vectors in predicting the trend of a social"}, {"start": 632.3000000000001, "end": 637.0, "text": " network. As you can imagine, if Elon Musk tweets something, you can assume"}, {"start": 637.0, "end": 641.9, "text": " that a nice portion of the Twitter graph will be copy pasting his, retweeting"}, {"start": 641.9, "end": 645.3000000000001, "text": " his content. 
So you can kind of predict just because of a single node, you can"}, {"start": 645.3000000000001, "end": 649.5, "text": " predict the behavior of many other nodes. So that's maybe rough intuition"}, {"start": 649.5, "end": 653.5, "text": " for why this works. And as I mentioned, they're using degree centrality and"}, {"start": 653.5, "end": 657.7, "text": " that's a simple concept. So again, assume you have a node and assume you"}, {"start": 657.7, "end": 661.7, "text": " have some edges. So you have some ingoing edges, you have some outgoing"}, {"start": 661.7, "end": 667.8, "text": " edges here. So what node centrality is, is the following. So basically the"}, {"start": 667.8, "end": 671.7, "text": " out degree will be three here. As you can see, we have three edges. So that"}, {"start": 671.7, "end": 677.4, "text": " will mean we have out degree three and we'll have two ingoing edges. So the"}, {"start": 677.4, "end": 681.8, "text": " degree centrality of the, so of the in degree is two. So that's basically,"}, {"start": 681.8, "end": 686.0, "text": " so that's the number, that's the measure we want here. So for an influencer"}, {"start": 686.0, "end": 689.6999999999999, "text": " node, you'll have like thousands and thousands of these ingoing edges,"}, {"start": 689.9, "end": 693.9, "text": " which kind of signal that it's an important node. And how they include"}, {"start": 693.9, "end": 697.9, "text": " that information into this graph former is the following way. So as you can"}, {"start": 697.9, "end": 701.9, "text": " see, this is the original feature vector in your graph. And let me try and"}, {"start": 701.9, "end": 707.5999999999999, "text": " draw this. So again, you have a small graph here and it's connected with"}, {"start": 707.6, "end": 715.0, "text": " some other nodes. And so what happens is the following. You have the"}, {"start": 715.0, "end": 721.2, "text": " original, so this X side, that's the original feature vector. Okay. And you"}, {"start": 721.2, "end": 726.7, "text": " add up to that feature vector, these additional embeddings. So you basically"}, {"start": 726.7, "end": 732.1, "text": " form, what they do is they form a table which has two columns. So one"}, {"start": 732.1, "end": 736.4, "text": " column will correspond to the in degree vectors and the other will"}, {"start": 736.4, "end": 740.8, "text": " correspond to the out degree vector. And now if you focus on this specific"}, {"start": 740.8, "end": 748.1999999999999, "text": " node here, and yeah, assuming we have, assuming this one is maybe in"}, {"start": 748.1999999999999, "end": 753.5, "text": " degree, maybe these two, maybe these two is out, outgoing edge, you can"}, {"start": 753.5, "end": 759.5, "text": " see that we can basically form the degree numbers and we have two"}, {"start": 759.6, "end": 764.6999999999999, "text": " for the out degree and we have one for the in degree. And so using that"}, {"start": 764.7, "end": 767.9000000000001, "text": " information, we can basically index into this embedding table, which is"}, {"start": 767.9000000000001, "end": 772.5, "text": " initially just randomly initialized. So what we'll do is we'll have, as"}, {"start": 772.5, "end": 778.3000000000001, "text": " you can see here, so this will be zero slot, first slot, second slot."}, {"start": 778.6, "end": 783.0, "text": " We'll just take this vector here and we'll add it directly to this one."}, {"start": 783.2, "end": 787.8000000000001, "text": " So we'll just add it up to this one. 
And because we have one in degree,"}, {"start": 788.0, "end": 792.1, "text": " we'll go to the in degree column and we'll just index into this one."}, {"start": 792.1, "end": 796.8000000000001, "text": " So this is one and we'll take this vector and we'll again add it up to"}, {"start": 796.8000000000001, "end": 798.0, "text": " this representation."}, {"start": 798.0, "end": 804.0, "text": " Okay. So as you can see, that's basically what they do. And yeah, they"}, {"start": 804.0, "end": 808.8000000000001, "text": " say it here. Where Z minus and Z plus are d-dimensional real valued"}, {"start": 808.8000000000001, "end": 812.3000000000001, "text": " vectors, so our learnable embedding vector specified by the in degree"}, {"start": 812.3000000000001, "end": 813.8000000000001, "text": " and out degree respectively."}, {"start": 814.1, "end": 817.4, "text": " So what this roughly does, now you can imagine if you had certain"}, {"start": 817.4, "end": 821.5, "text": " node, which is an influencer node, which had a bunch of these in going"}, {"start": 821.5, "end": 825.6, "text": " the edges, maybe thousand and you had another node that also had a"}, {"start": 825.6, "end": 827.9, "text": " thousand of these in degree edges."}, {"start": 828.4, "end": 828.9, "text": " Okay."}, {"start": 830.0, "end": 834.1, "text": " That means that they'll be indexing the same, the very same"}, {"start": 835.4, "end": 841.5, "text": " vector here will be used in both of those and that kind of can signal"}, {"start": 841.5, "end": 845.4, "text": " to this graph former that these are somehow special."}, {"start": 846.2, "end": 849.8, "text": " That's the rough intuition behind all of this and because behind this"}, {"start": 849.8, "end": 852.5999999999999, "text": " first embedding embedding procedure."}, {"start": 852.9, "end": 855.9, "text": " So the second thing which is really important is this spatial encoding"}, {"start": 856.1999999999999, "end": 859.8, "text": " and they say here, concretely for any graph G we consider a function"}, {"start": 860.0, "end": 862.3, "text": " Phi between VI and VJ."}, {"start": 862.4, "end": 866.6999999999999, "text": " So we have node space and we had no space just a Cartesian product"}, {"start": 866.6999999999999, "end": 868.1999999999999, "text": " and we map that into real number."}, {"start": 868.3, "end": 872.1999999999999, "text": " So which measures the spatial relation between VI and VJ in graph G."}, {"start": 872.1999999999999, "end": 873.5999999999999, "text": " So this is very abstract."}, {"start": 873.5999999999999, "end": 874.9, "text": " Let's make it a bit more concrete."}, {"start": 874.9, "end": 879.9, "text": " So in this paper we choose Phi to be the distance of the shortest path."}, {"start": 879.9, "end": 884.1999999999999, "text": " So the shortest path distance between VI and VJ if the two nodes are"}, {"start": 884.1999999999999, "end": 887.3, "text": " connected, if not, we set the output of Phi to be a special value,"}, {"start": 887.3, "end": 888.4, "text": " for example, minus one."}, {"start": 888.6999999999999, "end": 889.1999999999999, "text": " Okay."}, {"start": 889.3, "end": 890.6999999999999, "text": " Let's see what this practically means."}, {"start": 890.6999999999999, "end": 893.0, "text": " So again, we have this equation."}, {"start": 893.9, "end": 896.0, "text": " As you can see, this is basically transformer equation."}, {"start": 897.0, "end": 898.6999999999999, "text": " We have just the dot product between."}, {"start": 
898.9, "end": 900.1999999999999, "text": " So this is the query vector."}, {"start": 900.1999999999999, "end": 901.1999999999999, "text": " This is the key vector."}, {"start": 901.1999999999999, "end": 902.3, "text": " We do a dot product."}, {"start": 902.3, "end": 906.4, "text": " This is transposed and we just kind of scale it where D is the number"}, {"start": 906.4, "end": 908.0, "text": " of dimensions in those vectors."}, {"start": 908.6999999999999, "end": 909.6999999999999, "text": " We just take a square root."}, {"start": 909.6999999999999, "end": 910.1999999999999, "text": " Okay."}, {"start": 910.3, "end": 912.5, "text": " So this is the transformer part and that's not that interesting."}, {"start": 912.6999999999999, "end": 915.0, "text": " So what's interesting here is they add a bias."}, {"start": 915.0999999999999, "end": 920.3, "text": " So this is a learnable scalar as they say here to encode this information"}, {"start": 920.5, "end": 925.0, "text": " of the structure in the graph and they do that by you can see here"}, {"start": 925.0, "end": 929.3, "text": " this Phi is used to index into this B array."}, {"start": 929.3, "end": 934.1999999999999, "text": " So B will be again as we had the we had this embeddings here."}, {"start": 934.1999999999999, "end": 940.4, "text": " Now we'll have similarly array that will call B and which will be able"}, {"start": 940.4, "end": 944.1999999999999, "text": " to index using this Phi where Phi is the shortest path between the"}, {"start": 944.1999999999999, "end": 945.0, "text": " any two points."}, {"start": 945.1999999999999, "end": 948.0, "text": " So that means the following again, let me draw a simple diagram."}, {"start": 948.0, "end": 952.3, "text": " So we have maybe something like let me draw like a path diagram."}, {"start": 952.3, "end": 957.8, "text": " So we have a simple path diagram a path graph and as you can see the"}, {"start": 957.8, "end": 962.8, "text": " this node here will have so this this node here will be have a distance"}, {"start": 962.8, "end": 965.5, "text": " of one this note here will have a distance of two."}, {"start": 965.5999999999999, "end": 968.1999999999999, "text": " So that's the shortest path and finally this one will have distance"}, {"start": 968.1999999999999, "end": 968.6999999999999, "text": " of three."}, {"start": 969.0, "end": 973.8, "text": " So that means when we do when we form a query vector here and keys"}, {"start": 973.8, "end": 976.5, "text": " here what we'll do is the following."}, {"start": 976.5, "end": 981.0999999999999, "text": " So we'll be adding so for this particular node here will be adding."}, {"start": 981.5, "end": 982.8, "text": " Let me just change the color."}, {"start": 982.8, "end": 989.0999999999999, "text": " So this is 0 1 2 3."}, {"start": 989.3, "end": 993.1999999999999, "text": " So for this one will be using exactly this one will be using this"}, {"start": 993.1999999999999, "end": 998.9, "text": " scalar then for the second one will be using this scalar."}, {"start": 999.0999999999999, "end": 1004.8, "text": " So now you can imagine that this can help us learn some interesting"}, {"start": 1004.8, "end": 1005.0999999999999, "text": " stuff."}, {"start": 1005.0999999999999, "end": 1009.8, "text": " So they say here for example, if B is learned to be a decreasing"}, {"start": 1009.8, "end": 1014.4, "text": " function with respect to Phi so as we so that what they say is as"}, {"start": 1014.4, "end": 1019.5999999999999, "text": " we go down this B 
array if this becomes like smaller and smaller"}, {"start": 1020.0, "end": 1024.3999999999999, "text": " so they say that for each note the model will likely pay more attention"}, {"start": 1024.3999999999999, "end": 1027.8999999999999, "text": " to the notes near it and pay less attention to the notes far away"}, {"start": 1027.8999999999999, "end": 1033.0, "text": " from it and in the extreme case imagine you have you have maybe 0"}, {"start": 1033.0, "end": 1036.8999999999999, "text": " here for the first note and you have minus infinity for all of the"}, {"start": 1036.8999999999999, "end": 1037.5, "text": " other ones."}, {"start": 1037.5, "end": 1041.5, "text": " So this is minus infinity what will happen is because you're adding"}, {"start": 1041.5, "end": 1046.0, "text": " those to the attention score after applying softmax all of those"}, {"start": 1046.0, "end": 1049.5, "text": " nodes which have minus infinity the attention coefficients will"}, {"start": 1049.5, "end": 1050.3, "text": " drop to 0."}, {"start": 1050.3, "end": 1053.6, "text": " Whereas for this one it will be nonzero will be probably one."}, {"start": 1053.6, "end": 1056.6, "text": " So in this particular graph, so we'll be focusing."}, {"start": 1056.6, "end": 1062.0, "text": " So what we'll end up doing is focusing on only this note here and"}, {"start": 1062.0, "end": 1064.0, "text": " ignoring the other the rest of these."}, {"start": 1064.0, "end": 1068.3, "text": " So if we had more neighbors more direct neighbors would be basically"}, {"start": 1068.3, "end": 1072.6, "text": " focusing only on the one hop neighborhood just because of this"}, {"start": 1072.6, "end": 1074.4, "text": " learnable scalar here."}, {"start": 1074.4, "end": 1076.0, "text": " So that's the idea there."}, {"start": 1076.0, "end": 1081.0, "text": " This is how they encode this structural information of the graph"}, {"start": 1081.0, "end": 1084.2, "text": " why these shortest path distance scalars."}, {"start": 1084.2, "end": 1088.7, "text": " Okay, that's the second important encoding and the third one and"}, {"start": 1088.7, "end": 1092.7, "text": " the final one is this edge encoding and they said here there are"}, {"start": 1092.7, "end": 1096.1000000000001, "text": " mainly two edge encoding methods used in previous works in the"}, {"start": 1096.1000000000001, "end": 1099.6000000000001, "text": " first method the edge features are added to the associated nodes"}, {"start": 1099.6000000000001, "end": 1102.9, "text": " features in the second method for each node its associated edge"}, {"start": 1102.9, "end": 1106.0, "text": " feature will be used together with the node features in the"}, {"start": 1106.0, "end": 1106.7, "text": " aggregation."}, {"start": 1107.1000000000001, "end": 1110.6000000000001, "text": " However, such ways of using edge feature only propagate the edge"}, {"start": 1110.6000000000001, "end": 1114.2, "text": " information to its associated nodes which may not be an effective"}, {"start": 1114.2, "end": 1117.6000000000001, "text": " way to leverage edge information in the representation of the whole"}, {"start": 1117.6000000000001, "end": 1118.1000000000001, "text": " graph."}, {"start": 1118.4, "end": 1121.7, "text": " So in a nutshell what this means is the following."}, {"start": 1121.7, "end": 1125.6000000000001, "text": " So if you have a if you have a graph again, let me draw a simple"}, {"start": 1125.6000000000001, "end": 1126.3, "text": " graph here."}, {"start": 1127.9, "end": 1132.8, "text": " 
What they do is if you have a graph like this and when you do"}, {"start": 1132.8, "end": 1135.5, "text": " those aggregation and combine steps in the first layer of the"}, {"start": 1135.5, "end": 1139.3, "text": " GNN what will end up with what we'll end up doing is will be end"}, {"start": 1139.3, "end": 1142.2, "text": " up will end up including this information."}, {"start": 1142.2, "end": 1146.6000000000001, "text": " So these two edges and information from these two nodes into"}, {"start": 1146.6000000000001, "end": 1150.8, "text": " this novel representation that will form here and we are ignoring"}, {"start": 1150.8, "end": 1152.0, "text": " these edges here."}, {"start": 1152.0, "end": 1155.2, "text": " We're ignoring them in the first best least and what they are"}, {"start": 1155.2, "end": 1159.6, "text": " suggesting here is that it may be wise to kind of ingrain that"}, {"start": 1159.6, "end": 1164.3999999999999, "text": " information straight away and like during this aggregation process"}, {"start": 1164.3999999999999, "end": 1167.8999999999999, "text": " just kind of use that edge information from all of the other"}, {"start": 1167.8999999999999, "end": 1171.8, "text": " nodes and that's how they do that is the following."}, {"start": 1171.8, "end": 1174.5, "text": " So again, you can see simple stuff."}, {"start": 1174.5, "end": 1176.3, "text": " We have the transformer part."}, {"start": 1176.3, "end": 1179.7, "text": " We have the spatially embedding part, which I just explained."}, {"start": 1179.7, "end": 1185.7, "text": " And we have this novel edge encoding part, which is this fancy"}, {"start": 1185.7, "end": 1187.9, "text": " formula, which I'm now going to try to explain."}, {"start": 1187.9, "end": 1190.1000000000001, "text": " So again, we'll have a embedding table."}, {"start": 1190.1000000000001, "end": 1194.4, "text": " So we'll have a table which will correspond to these learnable"}, {"start": 1194.4, "end": 1197.9, "text": " edge features and like the length of this table will just be"}, {"start": 1197.9, "end": 1198.6000000000001, "text": " the following thing."}, {"start": 1198.6000000000001, "end": 1201.8, "text": " The length will be the maximum shortest path in the graph."}, {"start": 1201.8, "end": 1205.2, "text": " So you find all of the so between every single node in your"}, {"start": 1205.2, "end": 1208.3, "text": " graph, you find the shortest paths and you find the maximal"}, {"start": 1208.3, "end": 1211.3, "text": " such path and that will be the length of this table."}, {"start": 1211.3, "end": 1211.8, "text": " Let's call it."}, {"start": 1211.8, "end": 1215.1, "text": " I don't know maybe E and okay."}, {"start": 1215.1, "end": 1217.1, "text": " So once we have that we do the following thing."}, {"start": 1217.1, "end": 1219.8999999999999, "text": " So let's assume we want to find."}, {"start": 1219.8999999999999, "end": 1223.6, "text": " So let's assume we are finding a tension score between I and J."}, {"start": 1223.6, "end": 1228.8, "text": " So if this is I and if this node here is J, what we'll do is"}, {"start": 1228.8, "end": 1229.3, "text": " the following."}, {"start": 1229.3, "end": 1234.1, "text": " So we'll have we'll have some feature vectors which are associated"}, {"start": 1234.1, "end": 1235.1, "text": " with these edges, right?"}, {"start": 1235.1, "end": 1239.1, "text": " So we'll have this and this and this and we'll do the following"}, {"start": 1239.1, "end": 1241.6, "text": " will take the first edge."}, {"start": 
1241.6, "end": 1243.1999999999998, "text": " So we'll have edges here."}, {"start": 1243.1999999999998, "end": 1246.3, "text": " So these are again random initialize initially."}, {"start": 1246.3, "end": 1251.3, "text": " So we have 123, etc."}, {"start": 1251.3, "end": 1256.0, "text": " So we'll take this vector here will do a product between that"}, {"start": 1256.0, "end": 1257.6, "text": " vector and this vector here."}, {"start": 1257.6, "end": 1261.5, "text": " So we'll do the product between those two because this is the"}, {"start": 1261.5, "end": 1263.3, "text": " first edge on the shortest path."}, {"start": 1263.3, "end": 1265.6, "text": " So the shortest path is this one."}, {"start": 1265.6, "end": 1268.8, "text": " So we have this is the shortest path between I and J."}, {"start": 1268.8, "end": 1271.3999999999999, "text": " So we'll first do a dot product between those two."}, {"start": 1271.3999999999999, "end": 1274.3, "text": " Then we'll take the following vector will do a dot product"}, {"start": 1274.3, "end": 1277.3, "text": " between that vector and this vector and we'll take the third"}, {"start": 1277.3, "end": 1281.3, "text": " vector because this is the third vector on this path and"}, {"start": 1281.3, "end": 1284.8, "text": " we'll do dot product and what we'll just do is we'll just sum"}, {"start": 1284.8, "end": 1288.1, "text": " them up as you can see here and divide them by n which is the"}, {"start": 1288.1, "end": 1291.3999999999999, "text": " number of edges in the path, which is 3 here."}, {"start": 1291.4, "end": 1294.9, "text": " So these are the edge features."}, {"start": 1294.9, "end": 1300.1000000000001, "text": " These are the learnable edge features and finally just sum"}, {"start": 1300.1000000000001, "end": 1301.8000000000002, "text": " them up and divide by n."}, {"start": 1301.8000000000002, "end": 1302.7, "text": " That's it."}, {"start": 1302.7, "end": 1306.0, "text": " And intuitively what this does is we ingrain the edge"}, {"start": 1306.0, "end": 1311.1000000000001, "text": " information across so between I and J along the shortest path"}, {"start": 1311.1000000000001, "end": 1313.8000000000002, "text": " and they show that in the ablation studies later that this"}, {"start": 1313.8000000000002, "end": 1316.5, "text": " helps boost the performance of the graph form model."}, {"start": 1316.5, "end": 1317.7, "text": " Okay."}, {"start": 1317.7, "end": 1320.6000000000001, "text": " So those are the main ideas of this paper."}, {"start": 1320.6, "end": 1324.1999999999998, "text": " These three encodings and now I'll just again."}, {"start": 1324.1999999999998, "end": 1328.3, "text": " So everything else remains the same as in the transformer model."}, {"start": 1328.3, "end": 1330.0, "text": " They just ingrain these encodings."}, {"start": 1330.0, "end": 1331.3, "text": " Everything else remains the same."}, {"start": 1331.3, "end": 1333.6, "text": " You have the multi-head attention sub module."}, {"start": 1333.6, "end": 1334.5, "text": " You have the FFN."}, {"start": 1334.5, "end": 1338.5, "text": " So the fully the fully the feed-forward network module."}, {"start": 1338.5, "end": 1340.8, "text": " You have the layer normalization standard stuff."}, {"start": 1340.8, "end": 1343.6, "text": " So I won't go into those details and I'll skip this an"}, {"start": 1343.6, "end": 1348.1, "text": " interesting part for now which shows that classical GNNs can"}, {"start": 1348.1, "end": 1351.0, "text": " be considered as a special case of 
this graph form module."}, {"start": 1351.0, "end": 1355.3, "text": " But before that let me show you the experiment results."}, {"start": 1355.3, "end": 1357.6999999999998, "text": " So looking at the baselines here."}, {"start": 1357.6999999999998, "end": 1361.6999999999998, "text": " So again, this is this large-scale challenge data set."}, {"start": 1361.6999999999998, "end": 1364.3999999999999, "text": " It has 3.8 million graphs."}, {"start": 1364.3999999999999, "end": 1368.6999999999998, "text": " So it's huge compared that to previous GNN popular graph ML"}, {"start": 1368.6999999999998, "end": 1372.6999999999998, "text": " benchmarks such as Quora sites here or PubMed and you'll see"}, {"start": 1372.6999999999998, "end": 1376.1999999999998, "text": " that this is much bigger and so much a much better benchmark"}, {"start": 1376.1999999999998, "end": 1376.8999999999999, "text": " compared to those."}, {"start": 1376.9, "end": 1379.2, "text": " And we compare with GCN."}, {"start": 1379.2, "end": 1382.3000000000002, "text": " So that's graph convolutional network with GINs with GCNs"}, {"start": 1382.3000000000002, "end": 1384.0, "text": " with the virtual node etc."}, {"start": 1384.0, "end": 1387.3000000000002, "text": " And we have this is some modification of a transformer also"}, {"start": 1387.3000000000002, "end": 1391.4, "text": " an attempt to apply transformers to graph level prediction"}, {"start": 1391.4, "end": 1394.5, "text": " tasks, but not as successful as graph former as we'll see."}, {"start": 1394.5, "end": 1398.2, "text": " So as you can see looking at the test, this is mean absolute"}, {"start": 1398.2, "end": 1399.8000000000002, "text": " error we achieve."}, {"start": 1399.8000000000002, "end": 1404.9, "text": " So graph former achieves the lowest error on compared to"}, {"start": 1404.9, "end": 1407.3000000000002, "text": " all of these other classical GNNs."}, {"start": 1407.3000000000002, "end": 1411.1000000000001, "text": " So those are some nice results and there's one thing I want"}, {"start": 1411.1000000000001, "end": 1412.0, "text": " to mention here."}, {"start": 1412.0, "end": 1413.6000000000001, "text": " Let me just see where it is."}, {"start": 1413.6000000000001, "end": 1418.8000000000002, "text": " It basically yeah, as stated in section 3.3 we further find"}, {"start": 1418.8000000000002, "end": 1421.6000000000001, "text": " the proposed reformer does not encounter the problem of"}, {"start": 1421.6000000000001, "end": 1422.6000000000001, "text": " over smoothing."}, {"start": 1422.6000000000001, "end": 1426.7, "text": " I the train invalidate error keep going down along with the"}, {"start": 1426.7, "end": 1429.4, "text": " growth of depth and width of models."}, {"start": 1429.4, "end": 1432.8000000000002, "text": " So now that this may be strange if you're not familiar with"}, {"start": 1432.8, "end": 1436.2, "text": " graph neural networks, but basically compared to CNNs where"}, {"start": 1436.2, "end": 1439.8999999999999, "text": " depth more depth basically take resonance for example, so the"}, {"start": 1439.8999999999999, "end": 1442.3999999999999, "text": " more depth you have the more expressive and powerful and"}, {"start": 1442.3999999999999, "end": 1445.2, "text": " performant the models are here in the GNN literature."}, {"start": 1445.2, "end": 1447.7, "text": " Basically that doesn't necessarily help."}, {"start": 1447.7, "end": 1450.8, "text": " So you usually see those shallow GNNs like two or three"}, {"start": 1450.8, 
"end": 1453.1, "text": " layer deep and those work just fine."}, {"start": 1453.1, "end": 1456.0, "text": " So the reason for that are there are multiple reasons."}, {"start": 1456.0, "end": 1457.5, "text": " So one of them is maybe bottleneck."}, {"start": 1457.5, "end": 1460.3999999999999, "text": " So that means if you have if you take a look at the one hop"}, {"start": 1460.3999999999999, "end": 1462.3, "text": " neighborhood, you'll have maybe hundred neighbors."}, {"start": 1462.3, "end": 1464.7, "text": " If you take a look at the two hop neighborhood, you may end"}, {"start": 1464.7, "end": 1466.0, "text": " up with 10,000 neighbors."}, {"start": 1466.0, "end": 1468.5, "text": " If you take a look at the three hop neighborhoods, that's the"}, {"start": 1468.5, "end": 1469.6, "text": " third layer of the GNN."}, {"start": 1469.6, "end": 1474.0, "text": " You'll maybe be attending to millions of notes and just"}, {"start": 1474.0, "end": 1477.2, "text": " but if you try and average all of those feature vectors,"}, {"start": 1477.2, "end": 1480.6, "text": " you'll end up basically collapsing into a single feature"}, {"start": 1480.6, "end": 1483.3, "text": " vector and that's something that's called over smoothing."}, {"start": 1483.3, "end": 1485.8, "text": " So basically all of your feature vectors start collapsing"}, {"start": 1485.8, "end": 1488.3999999999999, "text": " into very same representation and thus you don't have the"}, {"start": 1488.4, "end": 1492.6000000000001, "text": " discriminative power to kind of figure out whatever the task"}, {"start": 1492.6000000000001, "end": 1493.1000000000001, "text": " at hand is."}, {"start": 1493.1000000000001, "end": 1496.1000000000001, "text": " So maybe predict the solubility of the molecule whatever."}, {"start": 1496.1000000000001, "end": 1498.7, "text": " So that's the over smoothing problem and it seems that"}, {"start": 1498.7, "end": 1503.9, "text": " Graphformer kind of dealt with it and I'd be really really"}, {"start": 1503.9, "end": 1504.4, "text": " excited."}, {"start": 1504.4, "end": 1507.6000000000001, "text": " I'm really excited to see like what the future research shows"}, {"start": 1507.6000000000001, "end": 1509.3000000000002, "text": " and explains why this is."}, {"start": 1509.3000000000002, "end": 1514.7, "text": " Finally, they also show some results on other OGP data sets."}, {"start": 1514.7, "end": 1518.3000000000002, "text": " Again, these are mostly molecular data sets and you can"}, {"start": 1518.3, "end": 1522.8, "text": " see again, it just outperforms all of the other baselines."}, {"start": 1522.8, "end": 1526.2, "text": " This flag is just a simple adversarial augmentation."}, {"start": 1526.2, "end": 1530.8, "text": " So just that you know what this suffix means and yeah on"}, {"start": 1530.8, "end": 1537.2, "text": " zinc again lowest mean absolute error compared to the previous"}, {"start": 1537.2, "end": 1537.7, "text": " baselines."}, {"start": 1537.7, "end": 1539.8, "text": " So that's that's pretty much it."}, {"start": 1539.8, "end": 1543.5, "text": " They also did some ablations showing that all of these"}, {"start": 1543.5, "end": 1545.3, "text": " encodings boost up the performance."}, {"start": 1545.3, "end": 1549.5, "text": " So this is the edge encoding as you can see the error drops"}, {"start": 1549.5, "end": 1552.5, "text": " when they added same goes for the nose centrality same goes"}, {"start": 1552.5, "end": 1553.5, "text": " for the spatial encoding."}, {"start": 1553.5, 
"end": 1557.5, "text": " They always have these boosts in the performance when they"}, {"start": 1557.5, "end": 1559.3999999999999, "text": " add those particular encodings."}, {"start": 1559.3999999999999, "end": 1565.6, "text": " Finally, as we know, basically transformers have problem with"}, {"start": 1565.6, "end": 1569.3, "text": " the scalability of these the vanilla transformer."}, {"start": 1569.3, "end": 1572.5, "text": " We have this quadratic dependency between the tokens"}, {"start": 1572.5, "end": 1575.2, "text": " because of this attention attending mechanism and"}, {"start": 1575.2, "end": 1577.5, "text": " so the quadratic complexity of the self-attention modular"}, {"start": 1577.5, "end": 1580.5, "text": " stricts reformers application on large graphs, which is"}, {"start": 1580.5, "end": 1581.4, "text": " pretty much obvious."}, {"start": 1581.4, "end": 1585.5, "text": " So that means that even though it achieved really nice results"}, {"start": 1585.5, "end": 1588.9, "text": " on these benchmarks, which are molecular data sets molecular"}, {"start": 1588.9, "end": 1593.0, "text": " graphs, I can imagine that this still can't be used to any"}, {"start": 1593.0, "end": 1597.7, "text": " like like huge graphs such as maybe social media graph."}, {"start": 1597.7, "end": 1602.8, "text": " And so yeah, I can I just expect to see people kind of"}, {"start": 1602.8, "end": 1605.5, "text": " using the ideas that came from the transformer literature"}, {"start": 1605.5, "end": 1607.5, "text": " like lean former performer, etc."}, {"start": 1607.5, "end": 1610.5, "text": " to kind of make this thing more efficient and then we'll see"}, {"start": 1610.5, "end": 1613.8, "text": " whether it can achieve some nice results even on larger graphs."}, {"start": 1613.8, "end": 1618.3, "text": " Having said that, that's the those are the main ideas of"}, {"start": 1618.3, "end": 1618.8, "text": " this paper."}, {"start": 1618.8, "end": 1623.3, "text": " And now I want to focus on this on this fact one and kind of"}, {"start": 1623.3, "end": 1626.1, "text": " delve a bit deeper into it and explain why this is."}, {"start": 1626.1, "end": 1630.5, "text": " Okay, let's focus on this part by choosing proper weights"}, {"start": 1630.5, "end": 1632.9, "text": " and distance function five the graph former layer can"}, {"start": 1632.9, "end": 1635.9, "text": " represent aggregate and combine steps of popular gene and"}, {"start": 1635.9, "end": 1639.8, "text": " modules models such as gene GCN and graph sage."}, {"start": 1640.8, "end": 1644.0, "text": " Moreover, we show further that by using our special encoding"}, {"start": 1644.0, "end": 1646.8, "text": " graph former can go beyond classic message passing gene"}, {"start": 1646.8, "end": 1650.6, "text": " ends whose expressive power is no more than the vice"}, {"start": 1650.6, "end": 1652.4, "text": " file a lemon one test."}, {"start": 1652.7, "end": 1657.5, "text": " So by the way, there is a bit of confusion about this vice"}, {"start": 1657.5, "end": 1661.5, "text": " file a lemon test and the thing is all of these expressivity"}, {"start": 1662.8, "end": 1667.2, "text": " like statements basically only are valid for and so-called"}, {"start": 1667.2, "end": 1670.5, "text": " anonymous graph neural networks whereby anonymous."}, {"start": 1670.5, "end": 1675.1, "text": " I mean that the node feature vectors are actually basically"}, {"start": 1675.1, "end": 1677.6, "text": " nonexistent or they are not discriminative."}, {"start": 
1677.8, "end": 1682.5, "text": " So that means where well by using standard graph neural"}, {"start": 1682.5, "end": 1683.9, "text": " networks, you always have those no features."}, {"start": 1683.9, "end": 1687.3, "text": " So these analysis are not that they can be kind of tricky."}, {"start": 1687.3, "end": 1689.3999999999999, "text": " I'll link this blog down in the description."}, {"start": 1689.3999999999999, "end": 1692.0, "text": " You can see it on the screen which explains this problematic"}, {"start": 1692.0, "end": 1695.5, "text": " really good and kind of explains what a gene ends can"}, {"start": 1695.5, "end": 1698.7, "text": " learn and what they can learn and so do check it out."}, {"start": 1698.7, "end": 1699.7, "text": " It's really really cool."}, {"start": 1699.8999999999999, "end": 1701.6, "text": " Having said that let's focus on this part."}, {"start": 1701.6, "end": 1704.8999999999999, "text": " How can we how can we treat these classical gene ends as"}, {"start": 1704.8999999999999, "end": 1705.7, "text": " a special case?"}, {"start": 1706.1, "end": 1710.3999999999999, "text": " Okay, but let's start first with this vice file a lemon"}, {"start": 1710.3999999999999, "end": 1710.7, "text": " thing."}, {"start": 1711.1, "end": 1715.7, "text": " So so vice file a lemon or the color test basically what"}, {"start": 1715.7, "end": 1719.5, "text": " it does it it outputs color histogram and if the color"}, {"start": 1719.5, "end": 1723.1000000000001, "text": " case runs are the same it claims that those two graphs"}, {"start": 1723.1000000000001, "end": 1726.5, "text": " are isomorphic and we can clearly see that these two graphs"}, {"start": 1726.5, "end": 1730.4, "text": " here are not isomorphic but vice file a lemon tells us that"}, {"start": 1730.4, "end": 1735.1000000000001, "text": " they are isomorphic because we have four red colors and we"}, {"start": 1735.1000000000001, "end": 1741.0, "text": " have two blue colors and because of that vice file a lemon"}, {"start": 1741.0, "end": 1744.8, "text": " would falsely claim that these are isomorphic on the other"}, {"start": 1744.8, "end": 1747.8, "text": " hand if you focus on the shortest path distance, so let's"}, {"start": 1747.8, "end": 1748.8, "text": " let's see what I see here."}, {"start": 1749.0, "end": 1752.8, "text": " So these two graphs cannot be distinguished by vice file"}, {"start": 1752.8, "end": 1755.2, "text": " a lemon one test by the SPD sets."}, {"start": 1755.5, "end": 1758.8, "text": " I the SPD from each node to others are different the two"}, {"start": 1758.8, "end": 1762.7, "text": " types of nodes in the left graph have SPD sets as you can"}, {"start": 1762.7, "end": 1765.5, "text": " see here and let me explain let me kind of break it down"}, {"start": 1765.5, "end": 1766.6, "text": " for you what this means."}, {"start": 1766.7, "end": 1769.8999999999999, "text": " So obviously the shortest path distance between this node"}, {"start": 1770.0, "end": 1772.3999999999999, "text": " this this red node and itself is zero."}, {"start": 1772.4, "end": 1776.8000000000002, "text": " It has one distance until it reaches this distance these"}, {"start": 1776.8000000000002, "end": 1781.1000000000001, "text": " nodes and two for these two and finally three and as you"}, {"start": 1781.1000000000001, "end": 1786.4, "text": " can see here that's 0 1 1 2 2 3 and for all of the red"}, {"start": 1786.4, "end": 1786.8000000000002, "text": " nodes."}, {"start": 1786.8000000000002, "end": 1788.2, 
"text": " We have the same SPD set."}, {"start": 1788.2, "end": 1791.6000000000001, "text": " Okay, finally for this blue node again."}, {"start": 1791.6000000000001, "end": 1795.3000000000002, "text": " We have trivially zero shortest path to itself."}, {"start": 1795.5, "end": 1799.0, "text": " We have 1 1 1 here."}, {"start": 1799.0, "end": 1799.8000000000002, "text": " So that's 3 1."}, {"start": 1799.8, "end": 1804.2, "text": " So those are these ones here and finally we have to to"}, {"start": 1804.8, "end": 1808.6, "text": " which are these two and similarly goes for this graph"}, {"start": 1808.6, "end": 1813.2, "text": " here and we can see that the SPD sets are different thus"}, {"start": 1813.7, "end": 1817.3999999999999, "text": " this SPD will help us determine that these graphs are"}, {"start": 1817.3999999999999, "end": 1818.3, "text": " not isomorphic."}, {"start": 1818.3, "end": 1819.8, "text": " I they are non isomorphic."}, {"start": 1820.2, "end": 1824.3999999999999, "text": " So that's a like a visual explanation and a particular"}, {"start": 1824.3999999999999, "end": 1827.3, "text": " example where they showed the graph former is more powerful"}, {"start": 1827.3, "end": 1831.7, "text": " than those simple like vice versa lemon one test kind of"}, {"start": 1831.7, "end": 1832.3, "text": " architectures."}, {"start": 1832.3, "end": 1832.6, "text": " Okay."}, {"start": 1833.5, "end": 1836.1, "text": " Now let's focus on the really important part and that's"}, {"start": 1836.1, "end": 1841.0, "text": " these how we how can we basically show that GNN's are"}, {"start": 1841.0, "end": 1843.6, "text": " some GNN's are special case of graph of this graph or"}, {"start": 1843.6, "end": 1844.0, "text": " module."}, {"start": 1844.3999999999999, "end": 1847.6, "text": " So let's focus on mean aggregate function and explaining"}, {"start": 1847.6, "end": 1848.3999999999999, "text": " how that works."}, {"start": 1848.6, "end": 1851.0, "text": " So we begin by showing that self-attention module with"}, {"start": 1851.0, "end": 1853.7, "text": " spatial encoding can represent mean aggregation."}, {"start": 1854.0, "end": 1856.6, "text": " This is achieved by in equation 6 settings."}, {"start": 1856.6, "end": 1857.8, "text": " So this is equation 6."}, {"start": 1857.8, "end": 1859.3, "text": " This is your transformer."}, {"start": 1859.3999999999999, "end": 1863.3, "text": " This is your basically bias using this shortest path"}, {"start": 1863.8, "end": 1864.3, "text": " function."}, {"start": 1864.5, "end": 1871.6, "text": " Okay, so by setting the scalar when Phi equals 1 so direct"}, {"start": 1871.6, "end": 1876.5, "text": " neighbors to 0 and to minus infinity otherwise where Phi"}, {"start": 1876.5, "end": 1879.3, "text": " is the shortest path distance and setting these two matrices"}, {"start": 1879.3, "end": 1881.8999999999999, "text": " so the query projection matrix and a key projection matrix"}, {"start": 1882.0, "end": 1886.1, "text": " to 0 matrix and the value projection matrix to identity"}, {"start": 1886.1, "end": 1889.1999999999998, "text": " matrix then the softmax gives the average represent of"}, {"start": 1889.1999999999998, "end": 1891.8999999999999, "text": " representations of the neighbors and let's let me let me"}, {"start": 1891.8999999999999, "end": 1894.0, "text": " try and break it down how this exactly works."}, {"start": 1894.3, "end": 1895.1, "text": " It's quite a simple."}, {"start": 1895.3999999999999, "end": 1897.1, "text": " So we have 
a note here."}, {"start": 1897.3, "end": 1898.8, "text": " We have some direct neighbors."}, {"start": 1899.8999999999999, "end": 1901.6999999999998, "text": " Let's say we have three direct neighbors."}, {"start": 1901.6999999999998, "end": 1904.8, "text": " These are connected and finally, let's say we have some"}, {"start": 1905.3, "end": 1907.8, "text": " undirect indirect like neighbors."}, {"start": 1907.8, "end": 1908.6, "text": " Okay, like these two."}, {"start": 1909.0, "end": 1911.3, "text": " So what happens is the following."}, {"start": 1911.3999999999999, "end": 1915.3999999999999, "text": " So as we know these will have and we are focusing on this"}, {"start": 1915.4, "end": 1920.2, "text": " node and trying to figure out the next layer presentation."}, {"start": 1920.6000000000001, "end": 1924.3000000000002, "text": " So we can see that these nodes here have distance one and"}, {"start": 1924.3000000000002, "end": 1927.4, "text": " these nodes here have distance 2 and finally 3."}, {"start": 1927.4, "end": 1927.8000000000002, "text": " Okay."}, {"start": 1928.3000000000002, "end": 1934.1000000000001, "text": " So what I say is the following if we set these bees to 0."}, {"start": 1934.1000000000001, "end": 1938.7, "text": " So if we set this one to 0 this one to 0 and this one to 0"}, {"start": 1939.3000000000002, "end": 1944.5, "text": " and if we set these to minus infinity then as you can see"}, {"start": 1944.5, "end": 1948.3, "text": " looking at this formula here will have that after we apply"}, {"start": 1948.3, "end": 1952.0, "text": " softmax these will get to 0 and these will get will have"}, {"start": 1952.0, "end": 1956.5, "text": " some nonzero value and now the second portion is this these"}, {"start": 1956.5, "end": 1958.9, "text": " matrices should be zero matrices and the value should be"}, {"start": 1958.9, "end": 1962.7, "text": " identity that will help us achieve the the mean aggregation"}, {"start": 1962.7, "end": 1963.3, "text": " of neighbors."}, {"start": 1963.6, "end": 1964.2, "text": " So let's see."}, {"start": 1964.2, "end": 1966.9, "text": " So for now we just have some arbitrary nonzero values here"}, {"start": 1966.9, "end": 1969.3, "text": " and we have zeros here for the attention coefficients."}, {"start": 1969.7, "end": 1971.7, "text": " So now let me let me do it like this."}, {"start": 1971.7, "end": 1975.8, "text": " So let's focus on this particular node and so we have"}, {"start": 1975.8, "end": 1979.3, "text": " some feature vector once we map them this feature vector"}, {"start": 1979.3, "end": 1982.8, "text": " using the key query and value matrices will get the following"}, {"start": 1982.8, "end": 1983.1000000000001, "text": " thing."}, {"start": 1983.1000000000001, "end": 1988.9, "text": " So query and key so key and query will just be zero vectors"}, {"start": 1988.9, "end": 1992.3, "text": " because remember we have zero matrices there and this one"}, {"start": 1992.3, "end": 1996.5, "text": " here because we have an identity matrix will be the same"}, {"start": 1996.5, "end": 1999.2, "text": " just a copy pasted version of this vector here."}, {"start": 1999.2, "end": 2000.0, "text": " So this is V."}, {"start": 2000.1000000000001, "end": 2001.1000000000001, "text": " This is we as well."}, {"start": 2001.1, "end": 2002.6, "text": " So this is the value vector."}, {"start": 2002.6, "end": 2007.3999999999999, "text": " Okay, and now we do the same thing for all of the nodes"}, {"start": 2007.3999999999999, "end": 2011.3, 
"text": " in the graph and as you can see if we take this query vector"}, {"start": 2011.3, "end": 2015.1999999999998, "text": " which is all zeros and we do adopt products between any"}, {"start": 2015.1999999999998, "end": 2018.1, "text": " of the keys because all the keys are also all zero vectors"}, {"start": 2018.1, "end": 2020.8999999999999, "text": " will always have this thing going to zero."}, {"start": 2020.8999999999999, "end": 2024.6, "text": " So this this term in the equation will go to zero and"}, {"start": 2024.6, "end": 2028.3, "text": " finally as I already explained will have this bias terms"}, {"start": 2028.3, "end": 2031.6, "text": " will be zero for the local neighbors direct neighbors and"}, {"start": 2031.6, "end": 2032.8, "text": " minus infinity there."}, {"start": 2033.0, "end": 2036.5, "text": " So that means we'll basically end up having the same"}, {"start": 2036.5, "end": 2039.3999999999999, "text": " coefficients here and that's going to be one over three."}, {"start": 2040.6, "end": 2042.3999999999999, "text": " And these will be zero."}, {"start": 2042.6, "end": 2047.6, "text": " So we'll only be will end up attending only these nodes"}, {"start": 2047.6, "end": 2053.2, "text": " here and we'll take a mean average of those three feature"}, {"start": 2053.2, "end": 2053.5, "text": " vectors."}, {"start": 2053.5, "end": 2056.6, "text": " Okay, so if you're confused by one three, it's pretty simple."}, {"start": 2056.6, "end": 2058.2999999999997, "text": " So it's just soft max thing."}, {"start": 2058.2999999999997, "end": 2059.9, "text": " So you'll have E."}, {"start": 2060.0, "end": 2064.9, "text": " So let's focus on for example on this note will have e raised"}, {"start": 2064.9, "end": 2067.7999999999997, "text": " to power of zero because that's the attention."}, {"start": 2067.7999999999997, "end": 2071.6, "text": " That's the attention score we got and down in the denominator."}, {"start": 2071.6, "end": 2074.9, "text": " We'll just have a sum and that sum will be equal to three"}, {"start": 2075.1, "end": 2077.2, "text": " because we'll have three times."}, {"start": 2078.2999999999997, "end": 2081.4, "text": " You raised to the power of zero and we'll have two times."}, {"start": 2081.4, "end": 2085.6, "text": " You raised to the power of minus infinity, which is obviously"}, {"start": 2085.6, "end": 2088.2000000000003, "text": " zero and that's why we achieve."}, {"start": 2088.2000000000003, "end": 2090.5, "text": " That's why we discuss to one over three."}, {"start": 2091.5, "end": 2094.6, "text": " And that means that yeah, basically those are the attention"}, {"start": 2094.6, "end": 2098.2000000000003, "text": " coefficients that this node will use to accumulate the value"}, {"start": 2098.2000000000003, "end": 2101.1, "text": " vectors of the direct neighbors and as you can see, we literally"}, {"start": 2101.1, "end": 2102.7000000000003, "text": " achieved this mean aggregation."}, {"start": 2102.7000000000003, "end": 2105.4, "text": " So that was the first the first part."}, {"start": 2105.4, "end": 2109.2000000000003, "text": " Now the sum aggregate part is pretty easy."}, {"start": 2109.2, "end": 2112.2, "text": " Once we understand this first mean aggregate part, but let me"}, {"start": 2112.2, "end": 2114.5, "text": " let me let me zoom in a bit here and explain it."}, {"start": 2114.5, "end": 2117.3999999999996, "text": " The summary Asian can be realized by first performing mean"}, {"start": 2117.3999999999996, "end": 2120.2, 
"text": " aggregation and then multiply the node degrees."}, {"start": 2120.2, "end": 2121.2, "text": " We'll see what that means."}, {"start": 2121.5, "end": 2124.2, "text": " So specifically the node degrees can be extracted from"}, {"start": 2124.2, "end": 2125.3999999999996, "text": " centrality encoding."}, {"start": 2125.3999999999996, "end": 2128.6, "text": " So that's the degree encoding by an additional head and be"}, {"start": 2128.6, "end": 2132.2, "text": " concatenated to the representations after mean aggregation."}, {"start": 2132.2999999999997, "end": 2135.6, "text": " Then the fully the feed-forward network module in graph"}, {"start": 2135.6, "end": 2138.2, "text": " former can represent the function of multiplying the"}, {"start": 2138.2, "end": 2141.1, "text": " degree of the dimensions of average representations by the"}, {"start": 2141.1, "end": 2145.7, "text": " universal approximation theorem of fully of fully feed-forward"}, {"start": 2145.7, "end": 2146.2, "text": " networks."}, {"start": 2146.3999999999996, "end": 2146.7999999999997, "text": " Okay."}, {"start": 2148.1, "end": 2150.7999999999997, "text": " So let me kind of break that down."}, {"start": 2150.8999999999996, "end": 2155.2999999999997, "text": " It's pretty simple and I'll just draw it over this chart."}, {"start": 2155.2999999999997, "end": 2161.0, "text": " So let's assume the first head gives us the mean representation"}, {"start": 2161.0, "end": 2161.7999999999997, "text": " for this vector."}, {"start": 2162.5, "end": 2165.8999999999996, "text": " Okay, so we have that part and now what they say is the"}, {"start": 2165.9, "end": 2169.2000000000003, "text": " following let's take another head."}, {"start": 2169.3, "end": 2171.1, "text": " So we have multiple attention heads."}, {"start": 2171.1, "end": 2172.3, "text": " Remember this is transformer."}, {"start": 2172.7000000000003, "end": 2176.7000000000003, "text": " So what that have will do it will learn how to extract the"}, {"start": 2176.7000000000003, "end": 2179.9, "text": " degree of this note, which is as you can see here just"}, {"start": 2180.3, "end": 2181.7000000000003, "text": " consider with some direct graphs."}, {"start": 2181.7000000000003, "end": 2182.4, "text": " It's three."}, {"start": 2182.9, "end": 2186.6, "text": " So we'll learn how to extract maybe something like a vector"}, {"start": 2186.6, "end": 2190.0, "text": " which will have all zeros and only three at the bottom here."}, {"start": 2190.1, "end": 2190.4, "text": " Okay."}, {"start": 2191.2000000000003, "end": 2193.6, "text": " And so how that will happen is the following."}, {"start": 2193.6, "end": 2199.7999999999997, "text": " So remember this so this this vector here is formed by taking"}, {"start": 2199.7999999999997, "end": 2205.5, "text": " the original vector and adding to it those embeddings for the"}, {"start": 2205.5, "end": 2206.5, "text": " degree centrality."}, {"start": 2206.5, "end": 2210.4, "text": " So basically will be adding those note those vectors which"}, {"start": 2210.4, "end": 2214.6, "text": " contain the degree information and this head can learn how"}, {"start": 2214.6, "end": 2218.5, "text": " to extract from this vector how to extract this number here."}, {"start": 2219.4, "end": 2221.1, "text": " So that have will learn how to do that."}, {"start": 2221.4, "end": 2223.2, "text": " And once we have that we do the following."}, {"start": 2223.2, "end": 2225.6, "text": " We basically concatenate because if you remember how"}, 
{"start": 2225.6, "end": 2228.7, "text": " transformer works every single head attention head will"}, {"start": 2228.7, "end": 2230.7, "text": " basically output a representation."}, {"start": 2230.7999999999997, "end": 2232.2, "text": " Those will get concatenated."}, {"start": 2232.2999999999997, "end": 2233.6, "text": " Then we'll have some MLPs."}, {"start": 2233.7999999999997, "end": 2237.5, "text": " So again, we'll have the following will have the mean"}, {"start": 2237.5, "end": 2244.8999999999996, "text": " representation and we'll have the degree of the node and"}, {"start": 2244.8999999999996, "end": 2248.3999999999996, "text": " then by this universal by this theorem of universal"}, {"start": 2248.3999999999996, "end": 2251.2999999999997, "text": " approximation theorem where these MLPs can basically like"}, {"start": 2251.3, "end": 2253.7000000000003, "text": " giving up enough width and depth."}, {"start": 2254.0, "end": 2255.9, "text": " They can basically approximate any function."}, {"start": 2256.1000000000004, "end": 2260.2000000000003, "text": " So what we get here is after mapping them."}, {"start": 2261.2000000000003, "end": 2264.5, "text": " It can learn how to take this number."}, {"start": 2264.6000000000004, "end": 2269.6000000000004, "text": " Let's say three and multiply it with every single feature"}, {"start": 2269.6000000000004, "end": 2270.4, "text": " dimension here."}, {"start": 2270.6000000000004, "end": 2274.3, "text": " So with this thing with this thing with this thing with"}, {"start": 2274.3, "end": 2278.2000000000003, "text": " this thing and if we do that will just end up with a sum"}, {"start": 2278.3, "end": 2280.7000000000003, "text": " aggregation instead of a mean of irrigation."}, {"start": 2280.7, "end": 2282.2999999999997, "text": " And hopefully this was clear enough."}, {"start": 2283.1, "end": 2285.6, "text": " This is how I understood it basically."}, {"start": 2285.7, "end": 2288.2, "text": " And yeah, if I'm wrong, feel free to comment down."}, {"start": 2288.2999999999997, "end": 2290.5, "text": " But yeah, I think that's in a nutshell how it works."}, {"start": 2290.7, "end": 2294.2, "text": " So similar thing for max aggregate similar thing for"}, {"start": 2294.2, "end": 2295.7999999999997, "text": " combine and mean readout."}, {"start": 2295.7999999999997, "end": 2296.8999999999996, "text": " You can you can do it yourself."}, {"start": 2296.8999999999996, "end": 2300.7999999999997, "text": " But this is the whole logic the whole point how this works."}, {"start": 2300.8999999999996, "end": 2305.1, "text": " And yeah, we saw we can emulate these as a special case"}, {"start": 2305.1, "end": 2307.2999999999997, "text": " of graph warmer and that's super cool."}, {"start": 2307.6, "end": 2310.2999999999997, "text": " Having said that if you like this video consider sharing"}, {"start": 2310.3, "end": 2312.2000000000003, "text": " subscribe and until next time."}, {"start": 2312.2, "end": 2340.2, "text": " Bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=OC0oe1EzQxo
Text Style Brush - Transfer of text aesthetics from a single example | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I explain the "Text Style Brush - Transfer of text aesthetics from a single example" paper from Facebook AI research. They achieve a high-quality, single-shot, text style transfer across multiple domains (handwritten, scene text) through a combination of self-supervised and supervised learning. This work promises many exciting future applications, like augmented/mixed reality, but also raises societal concerns related to deep fakes. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://ai.facebook.com/research/publications/textstylebrush-transfer-of-text-aesthetics-from-a-single-example/ ✅ Blog: https://ai.facebook.com/blog/ai-can-now-emulate-text-style-in-images-in-one-shot-using-just-a-single-word ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 What is Text Style Brush about? 02:35 Key points of the paper 04:05 A high-level overview of the TSB pipeline 13:00 Diving deeper - Encoders 14:30 Diving deeper - Generator and map network 15:30 Adaptive Instance Normalization 17:40 Text perceptual loss 21:15 Text content loss 23:10 Reconstruction loss 27:20 New Imgur 5k dataset 28:05 Ablations 30:15 Text reco accuracy, quantitative quality, user study 32:40 Failure cases 33:40 Future applications and deep fakes 35:10 Conclusions ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #textstyletransfer #deepfakes #facebookai
What's up? In this video I'm covering this new paper called Text Style Brush: Transfer of Text Aesthetics from a Single Example by Praveen Krishnan, Rama Kovvuri, Guan Pang, Boris Vassilev and Tal Hassner from Facebook AI Research. Basically, they got some really nice results rendering text in a specified style, in a one-shot setting. Let me show you what I mean by that. You can see here in the source image you have some text, they find a bounding box around it and extract it, and just using this single word image they can learn the style and render novel content in that very same style, one-shot. So with just a single image like this one you can do something like this: they just swapped a soya bottle for a tea vessel. And not only that, you can see here another example on the book, and they can also work cross-domain. So they're not constrained to the scene text editing domain, they can also do handwriting. Again, here you can see a single word image and they can render novel sentences using that very same style. The thing is, they only generate these word images, and the way they actually blend them back in is something called Poisson blending. That's the only part that feels a bit like hand crafting compared to the deep learning approach of actually generating these images. So let me show you some more examples of what they achieve and then we'll start explaining the paper. Here you can see some examples: on the left you have the word image and on the right you can see the novel content in the same style. And it's pretty nice. There are some problems, obviously. For example, the background is not the same and the words are kind of blurry, and I'll later give my hypothesis why that is; I think it's because of the reconstruction loss they're using with L1. But it's still pretty decent. Again, here you can see the foreground texture is kind of lost, the letters are kind of blurred. But all in all it's really nice, looks awesome. Also, sometimes I notice that the perspective is not the same, so they don't manage to capture the shear, the perspective of the source style. But it's pretty awesome. Like, look at this one, disarming statement. I wouldn't be able to discriminate between the two, they look the same to me. So, nice results. Now let me start digging into how it works. They say here: we present a novel approach for disentangling the content of a text image from all aspects of its appearance. The appearance representation we derive can then be applied to new content for one-shot transfer of the source style to new content. We learn this disentanglement in a self-supervised manner. This is not completely true, because they actually have a couple of pre-trained networks, one is an OCR engine, one is a font classifier, and they also assume they have the bounding boxes as well as the ground truth text inside of those bounding boxes. So it's really a combination of supervised learning and self-supervised learning, but a nice portion of the pipeline is actually self-supervised. So that's cool. Our method processes entire word boxes without requiring segmentation of text from background, per-character processing, or making assumptions on string lengths. So that's cool.
So as a kind of motivation for one-shot learning, they say here: moreover, it's unreasonable to expect text images used as style samples to provide example appearances for the entire alphabet, including digits. Transferring styles between strings with different characters therefore implies hallucinating plausible appearances of unseen characters in the input style. And that should, yeah, associate you with generative adversarial networks, and that's one of the components they're obviously using here in this work, but there are many components to it. Let me now try and explain. I'll first give you a high-level overview and then we'll start digging into every single component. Most of this video will focus on explaining how this thing works, because the problem itself is ill-posed, similarly to neural style transfer, since as a final conclusion humans just decide whether something looks nice or not. There are some objective metrics like the Frechet inception distance or the inception score used in GANs, but those are not that robust. And yeah, what they did additionally was a user study, so they showed that this work looks better to the group of people they took for this experiment. So let me now start dissecting this. First things first, what they do is they take the image that has the style they want, and they crop it around the word image. So they keep some context around the actual text that has the style they want. That's one input. The second thing is they have a text which they want to render in the style from this image. So you can see here, earth: they just take the word earth, which is the same as here, because as I told you they have the ground truth, but they also add some novel text for which they don't have the actual image. And what they then do is they don't input the text itself, they actually render the text on a white background using some standard font. We'll see those details a bit later. So once they render the text, they can encode the text using this content encoder and they can encode the style using this style encoder. Both of these are just simple ResNets, basically ResNet-34, just modified a little bit. And what they do here is, because they have this bounding box ground truth, they take the specific features corresponding to the bounding box and they do average pooling over those features, and they create this vector here, which is the style vector. It was, I think, 512-dimensional, it doesn't matter, but yeah, let's make it a bit concrete. The thing they do with content is they just create a tensor here. As I said, they modify the ResNet-34, so they just take certain feature maps from a certain layer. And the next thing they do is, as you can see here, they feed those into this generator, which is pretty much a modified StyleGAN generator. Now, this style mapping network, what it does is the following. It maps this style representation, this latent vector here, into multiple coefficients. And what these do, if you're familiar with either the original adaptive instance normalization paper or with StyleGAN, is they modify the feature maps; they modify certain statistics like the mean and standard deviation, so that they can build in the style of their choosing, the style they want. Okay?
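To make that style-encoder part a bit more concrete, here is a minimal PyTorch-flavoured sketch of what I just described: a ResNet-34 backbone over the cropped context image, ROIAlign pooling of the features under the word's bounding box, and a projection to a 512-dimensional style vector. The cut-off layer, feature stride, box format and dimensions are my assumptions for illustration, not numbers taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import roi_align

class StyleEncoderSketch(nn.Module):
    def __init__(self, style_dim=512):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        # keep everything up to and including layer3: 256-channel maps, stride 16 (assumption)
        self.features = nn.Sequential(*list(backbone.children())[:-3])
        self.proj = nn.Linear(256, style_dim)

    def forward(self, image, word_box):
        # image: (B, 3, H, W) crop with context; word_box: (B, 4) as (x1, y1, x2, y2) in pixels
        fmap = self.features(image)                                 # (B, 256, H/16, W/16)
        boxes = [b.unsqueeze(0) for b in word_box.float()]          # one box per image
        pooled = roi_align(fmap, boxes, output_size=(1, 1),
                           spatial_scale=1.0 / 16)                  # average-pool the word region
        return self.proj(pooled.flatten(1))                         # (B, 512) style vector e_s

enc = StyleEncoderSketch()
e_s = enc(torch.randn(2, 3, 256, 256), torch.tensor([[40., 80., 200., 140.]] * 2))
print(e_s.shape)  # torch.Size([2, 512])
```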
So I'll later explain how that adaptive instance normalization works, but for now, you just have some coefficients here, which we got by mapping from this style vector, and they use those to modify the feature maps. Zooming in a little bit here, you can see the architecture is quite detailed. There are many components here, but the main idea is to use these modulated convolutions, so to use these coefficients to modify the feature maps, and then they just have a progressive growing of these feature maps until they generate as output this word image as well as the soft mask. So they're generating those two, both the soft masks as well as the actual word images. Okay. Once they generate those, they have five losses in total. As you can see, this is basically hand crafting 2.0; they're using just DL components to create these systems, but the thing I don't like is that there are so many moving parts here, and I hope that a couple of papers down the line this will be something end-to-end that does the same thing. Okay, rant over. Let me briefly explain these loss functions. So these two here, the typeface classifier and the recognizer, are pre-trained and frozen during the training. What they do is they take a VGG-19 network, and they have some synthetic dataset with certain word images for which they actually have the ground truth labels for the font type. So they just train a simple classifier that can predict the font of the input image. A thing worth mentioning here is that the fonts used in that dataset won't be the same as the fonts that this model will eventually be seeing, because they're using real data and here they just have synthetic fonts. So yeah, you can't cover the whole variance that's present in real datasets. Having said that, what they do here is a perceptual loss. If you're familiar with neural style transfer, what they do is they basically make sure, as you can see, they input the ground truth image and they input this generated image, and they want to make sure that the feature maps at a specific layer are the same, looking at the L1 distance, for example. They also form Gram matrices, which kind of capture the style of the image, and they make sure that those look the same for both this generated image and the ground truth image. They also have some embedding loss, we'll get to that a bit later, but that's pretty much the neural style transfer part. Then they have the recognizer, which is a simple OCR engine. They took some pre-trained, off-the-shelf OCR engine, and because we have the ground truth text, we know that this is earth and warming, they just input all of these four, so both the masks as well as the generated images, and they do the OCR on them. And finally they do simple cross entropy between the ground truth text and whatever this OCR engine saw in these four images. So let me explain that in a bit more detail. Let's say for the first character it outputs a certain distribution, and you're gonna do argmax and that will be our first character.
And so, because they have the ground truth words, for example, let's say we're looking at this image, they know that the first letter is E, so the target will be a one-hot encoding of E, and they'll just do cross entropy between the output distribution and the target distribution to train the generator. So again, this recognizer is frozen, they are not modifying its weights, they are modifying the generator and the rest of the pipeline, okay? Same thing for this network here. And finally they have, and this is a pretty much default thing, the discriminator, the GAN part, where they take this real image and the generated image here to make sure that they look plausible, realistic. Okay, so that was the pipeline: a bunch of components, seven neural networks here as you can see, one, two, three, four, five, six, seven, and five losses. And yeah, I actually forgot to mention these two, so let me just briefly go over those. The reconstruction loss: what they do is they make sure that this generated image here is close to this image here. And that's the part which I suspect is causing the blurriness we saw in the results. The second part is the cyclic consistency loss from the CycleGAN paper, and this is a bit more complicated to explain, but what they do is the following. They take this image, okay, they generate the output image, so this is earth. What they do now is they take this generated image and they blend it back into the original image, because you know they have the bounding box, so they blend it in here, and then they pass that image again through the whole pipeline and generate another image; let's call that one O prime, whatever. And now they want to make sure that that image is the same as this one, so they just do L1 between those two. So again, it's complicated, a bunch of details, but that's the whole pipeline, and now let's start digging into the details a bit more. Okay, so let's start with the encoders. Given a character string C for content, they render it using a standard font, Verily Serif Mono, on a plain white background, producing an image of height 64 pixels and arbitrary width, where the hat notation denotes this standard font. Okay, so that's the part I explained to you, this part here. You have the ground truth text and you just render the images, and the reason they do that is just so that they have uniform encoders here, so that they don't have to encode text in some fancy way, they just render it. The style encoder is given a localized scene image along with a word bounding box denoting the word image from which the style is learned. So that's the part I explained to you here. Basically, this is the word image, and they're actually including additional context so that it can better model the background, etc. And the penultimate layer of the style encoder uses a region-of-interest align (ROIAlign) operator, as described in Mask R-CNN, to pool features coming from the desired word region, thereby reducing the style tensor to a fixed 512-dimensional representation. So that's the part I explained to you here: because they have the bounding box, this yellow thing, they can use that information to pool over these feature maps and get this final style vector. So that's the part about the encoders. Again, they are ResNet-34s, nothing fancy there. Now let's go to the generator and the style mapping network and let me explain those.
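Before moving on to the generator, a quick aside on that content rendering step, since it's easy to reproduce: render the target string with a fixed font on a plain white background at a fixed height of 64 pixels. Here is a small Pillow sketch; the font file name and the exact padding and width handling are my guesses, the paper only pins down the font family and the image height.

```python
from PIL import Image, ImageDraw, ImageFont

def render_content(text, font_path="VerilySerifMono.otf", height=64, pad=8):
    """Render `text` in the fixed 'content' font, black on white, 64 px high."""
    font = ImageFont.truetype(font_path, size=height - 2 * pad)
    left, top, right, bottom = font.getbbox(text)       # measure the rendered string
    img = Image.new("L", (right - left + 2 * pad, height), color=255)
    draw = ImageDraw.Draw(img)
    # place the glyphs so they sit roughly centred vertically inside the strip
    draw.text((pad - left, (height - (bottom - top)) // 2 - top), text, fill=0, font=font)
    return img

# e.g. render_content("warming").save("content_warming.png")
```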
So the content representation is directly fed as input to the first layer of the generator, instead of providing the learned tensor which was used in the original StyleGAN2 model. So they just explain some modifications they did, but it's basically still StyleGAN2 with some small modifications. And: we introduce a style mapping network M which converts the style vector e_s, so that's this thing here, to layer-specific style representations w_{s,i}, where i indexes the layers of the generator, which are then fed as the adaptive instance normalization coefficients to each layer of the generator. So those w_{s,i} are just these things here. Okay. So now a quick detour into what adaptive instance normalization is. It's fairly simple; the original idea came from the neural style transfer literature. Imagine you have something like a VGG network pre-trained on ImageNet, so it's a classifier and the weights are frozen. What they do in that original paper is the following. They take the style image and pass it through this thing, and they basically cache the feature maps, or more precisely the means and standard deviations of each of those feature maps. Once you have that, they input the content image here. And if we take a specific content-image feature map from a specific layer, like this one, they're going to, as you can see here, find the mean statistic of the feature map, find the standard deviation, and that's just the instance normalization part. So they have a feature map here and, going over the height and width dimensions, they find these mus and sigmas, they normalize the content feature map, and then they apply the corresponding standard deviation and mean from the style image, so from this one, those are the cached values. By doing that, by just modifying those simple first- and second-order statistics, they show that they can adaptively change the style of the image. And yeah, that was the basic idea, that's what StyleGAN was using, and that's what this paper is also using to kind of impose the source style. Okay. Finally, I mentioned masks, and they say here: the final mask provides a soft semantic segmentation of the generated image and is later used to construct the loss functions. Yeah, we'll see how they are exactly used. I already explained that the recognizer is using the masks as well as these generated images to do cross entropy over them and train the generator, but they're also used for the reconstruction and cyclic losses, and we'll see that in a minute. Okay, let me explain how this text perceptual loss works in a bit more detail. So: specifically, we assume a pre-trained typeface classification proxy. Importantly, we make no assumption that the fonts it was trained to classify would appear in the photos we use to train and evaluate our Text Style Brush. Okay. Now let me try and unpack these equations for you. It looks like this: again, let's say we have a VGG, and in each layer, so these are just the activation volumes, the spatial extent kind of goes down whereas the number of channels goes up. Let me depict it like this: this is maybe the input image and then we're processing it.
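A quick aside before I continue with that drawing: since adaptive instance normalization, described a couple of paragraphs back, is the key mechanism here, a minimal PyTorch sketch of it may help. Note this is plain AdaIN; StyleGAN2 technically folds the same idea into modulated convolutions, and the dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

def adain(content_feat, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: normalize each content feature map per channel,
    then re-scale and shift it with statistics predicted from the style code."""
    # content_feat: (B, C, H, W); style_mean / style_std: (B, C) from the mapping network
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (content_feat - c_mean) / c_std                  # instance normalization
    return normalized * style_std[:, :, None, None] + style_mean[:, :, None, None]

x = torch.randn(2, 256, 16, 16)           # content feature maps at some generator layer
w = torch.randn(2, 512)                   # layer-specific style code w_{s,i} (hypothetical dim)
to_stats = nn.Linear(512, 2 * 256)        # predicts a per-channel scale and shift
scale, shift = to_stats(w).chunk(2, dim=1)
y = adain(x, shift, 1 + scale)            # '1 +' so an all-zero prediction leaves stats untouched
print(y.shape)  # torch.Size([2, 256, 16, 16])
```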
Back to that picture: we get the feature maps, so this would be one feature map, this the second feature map, this the third feature map, et cetera. This axis is the number of channels, and we don't see the spatial extent of the image, I'm looking at it sideways. What they first do is this, the perceptual loss part. Because this thing is pre-trained, it has learned to extract certain features, and they do the following: they make sure that the generated image, so the output image, and the input image are really close in feature space. They take certain feature maps, maybe a certain subset here, and make sure that those are really similar looking at the L1 norm. Okay. So you take the generated image and the ground truth image, and you want to make sure that these features here are really close, where closeness is defined by L1. That's the content loss from the neural style transfer literature. The second thing they do is they form Gram matrices, also from those features. You take those feature maps and do dot products between them pairwise, and you end up with a thing called a Gram matrix; if the number of channels is C and you take the dot product between every pair of feature maps, it will be a C times C matrix, and it kind of captures the style of your image. And again, they want the generated image and the input image to have the same Gram matrix, so they make those as close as possible, again using L1. And this embedding term is pretty much the same as this thing here, the only difference is that this psi here comes from the penultimate layer, so a much deeper layer, so it's focusing on somewhat different features compared to these phis here. And that's it in a nutshell. Again, I'm going to link a neural style transfer video somewhere here, you can check those out if you're not familiar with this, but these are pretty simple things once you wrap your head around them. Okay. They say here: note that the perceptual losses are only computed for the output image corresponding to the original content c1. So they can't do it for the c2 word, for this one, the warming one, because they don't have the actual ground truth image. They just do it for earth, for that one. Okay, that was that part of the loss, now let's focus on the second part. We use the output string estimated by the recognizer module to compute a loss L_R, which reflects how well the generator captured the desired content strings c1 and c2, simultaneously on both target images, so on both the generated images and their masks. Here is how this looks. Again, let's take an input image, this is something we generated, and we pass it through this recognizer network, and it starts generating character by character. Assume the ground truth word is earth, as in the example above. And let's say the first distribution it generates looks something like this, maybe like this, and it turns out its argmax corresponds to the letter B, whereas this one here maybe corresponds to the letter E. So obviously we made a mistake here. And what they do is: they have the ground truth one-hot encoding, and whatever this probability is, let's call it p_i, they're just going to take minus log of p_i.
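I'll finish that cross-entropy story in a second; first, since the feature-matching and Gram-matrix terms above are the standard neural style transfer machinery, here is a compact sketch of how such a perceptual loss is typically computed. Which layers the paper actually taps, and how the individual terms are weighted, are not reproduced here.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # feat: (B, C, H, W) -> (B, C, C) matrix of pairwise channel dot products
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_losses(feats_generated, feats_target):
    """L1 between feature maps (content term) and between Gram matrices (style term)."""
    content = sum(F.l1_loss(fg, ft) for fg, ft in zip(feats_generated, feats_target))
    style = sum(F.l1_loss(gram(fg), gram(ft)) for fg, ft in zip(feats_generated, feats_target))
    return content, style

# feats_* would come from the frozen, pre-trained VGG-19 font classifier at a few chosen layers
fg = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
ft = [torch.randn(1, 64, 32, 32), torch.randn(1, 128, 16, 16)]
print(perceptual_losses(fg, ft))
```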
So, to finish the content-loss story: as you can see, as p_i goes to one, which is the ground truth, the loss goes to zero. This loss looks like the classic thing: this is one here, this is zero here, this is the loss, and this is the output probability. And they just repeat that process: they feed this back in and start generating the next character, and the next character, and the next character, and basically they'll just be doing cross entropy, summing those up, and using that to backprop and train the generator. So that's how the text content loss works. Now let's focus on the reconstruction losses. Here L_rec represents the difference between the output image generated in style s with content c1, okay, and the cropped input style example I_{s,c1}. Basically, I think I explained this, but let me go back here: they do L1 between this thing here, this earth, and this one. And because they're doing L1, I'm pretty sure this is what's causing the blurriness problem they have. Yep. Okay. Finally: we compute a fake style vector e_s' and generate this image. Let me try and draw this. This is the input image, and this is the bounding box, so everything around here is the context, okay? This is our initial input image, and in this example that will be earth, exactly this one. Now what they do is they feed this through the whole encoder-generator pipeline and generate the initial image; let's call it O. Then they take this O and basically stitch it back in, in place of the original word image I here. So they'll have something like this: the same image, except that instead of I they've blended in O. And again, they pass this thing through the whole encoder-generator pipeline and generate another image called O'. And finally, they do L1 between these two, so this one and this one, just simple L1. And that's the cycle consistency loss from the CycleGAN paper, although there they use two different generator networks to do the same thing. The semantics behind this approach is the following: whatever we generate here, this O, we want to make sure that after passing it another time through the whole encoder-generator pipeline, we can get back to the original word image I. That's the reasoning behind it. And yeah, you should go ahead and check out the CycleGAN paper to understand this a bit better. Again, quoting the paper: we can use an L1 loss criterion for both reconstruction losses and split these functions on the foreground and background regions separately using the generated masks. So this is where the masks come into play. This allows the network to learn fine-grained variations of the style present in the foreground region. Okay. And because they don't have any ground truth for the masks, by doing this loss and the previous loss with the recognizer module, they actually learn how to produce nice and accurate soft masks. That's pretty much it. Now, what they do is they just take a weighted average of all of these losses, and I can only imagine what the hyperparameter search for this looks like. Additionally, there is the discriminator-based adversarial loss, and they also use some regularizations like the R1 regularization and path length regularization. I won't cover those in detail here, they're just regularization methods; R1 just constrains the discriminator gradients, et cetera.
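Putting the reconstruction and cycle terms just described into pseudocode-ish form, this is roughly how I read them; `pipeline` and `blend` are placeholders for the encoder-generator stack and the paste-back step, not real functions from the paper's code.

```python
import torch.nn.functional as F

def reconstruction_and_cycle(scene, box, word_gt, content_c1, pipeline, blend):
    """word_gt: cropped ground-truth word image I_{s,c1};
    pipeline(scene, box, content) -> generated word image;
    blend(scene, box, word_img) -> scene with the word region replaced (placeholders)."""
    o = pipeline(scene, box, content_c1)             # first pass: regenerate the same word
    l_rec = F.l1_loss(o, word_gt)                    # reconstruction loss

    scene_fake = blend(scene, box, o)                # paste the generated word back into the scene
    o_prime = pipeline(scene_fake, box, content_c1)  # second pass through the same pipeline
    l_cyc = F.l1_loss(o_prime, word_gt)              # cycle consistency: should recover I again

    return l_rec, l_cyc
```

In the actual paper both terms are additionally split into foreground and background parts using the generated soft masks, and everything is combined as a weighted sum with the perceptual, content and adversarial losses.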
So, yep, that was pretty much it for the model itself. It's a fair bit of work to make this work, I guess, and there are many, many different papers you need to read to understand everything here: as I said, StyleGAN, the neural style transfer literature, GANs in general, OCR. A bunch of components stitched together to give this pipeline the functionality it has. Okay. Now, other than the model, they also introduced this Imgur5K handwriting dataset. The whole point is that it's got more variance, much more diversity in the handwriting; the style variance is much bigger than in all of the previous datasets. And they show certain statistics here, like the number of words and the number of writers; you can see there are many more writers here, the style column just says high, whatever that means exactly, so basically there are more styles, and there is background, it's more in-the-wild compared to the other datasets. Okay. Now, finally, let's focus on the results. We saw the visual results, and they also did some ablations here where they tried to remove certain components and make sure that everything they are using is actually improving the performance. They're using metrics like MSE, so mean squared error, the structural similarity index measure (SSIM), the peak signal-to-noise ratio (PSNR), and finally the Frechet inception distance. The first three of these can only be used if they have the actual ground truth target image, which they have only for certain synthetic tests. So basically FID is the one we should care about, because that one kind of tries to capture what we humans consider a nice image. It's not perfect, same thing goes for the inception score, but I think if used properly it correlates nicely with human judgment. And they show here that just using the discriminator, so the GAN loss, they get poor performance; adding the recognizer module, so the OCR part, improves everything; adding the reconstruction loss, and finally the cycle loss, additionally gives them a boost in all the metrics. The interesting part here is that once they add this final perceptual loss, they improve the FID score, but they kind of lose on the objective metrics like MSE, SSIM, and PSNR, which are known, by the way, not to correspond really well with our judgment. The reason is, for MSE, imagine you generate an image and you just shift it by, I don't know, maybe five pixels downwards. You'll have an image which any human would say is perfect, but the MSE would be huge. It just can't differentiate between salient and non-salient regions, it just uses this dumb pixel-wise difference heuristic to give its judgment, and that doesn't correspond with human judgment that well. So what they also do here is try to describe how good their generated images are by passing them through this OCR engine. That OCR engine is far from perfect, as you can see here, its accuracy drops significantly on this text visual question answering dataset, but it's good enough, and they feed it the generated images from these two baselines as well as from the Text Style Brush paper, from this paper, and they notice that they get much higher accuracy.
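A quick side note on that MSE point before moving on, since it's easy to verify for yourself: shift an image by a few pixels and the per-pixel error explodes, even though a human would call the two images essentially identical. A tiny NumPy illustration with a random stand-in image:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))                   # stand-in for a generated word image

shifted = np.roll(img, 5, axis=0)              # same content, shifted down by 5 pixels
noisy = img + rng.normal(0, 0.05, img.shape)   # mildly noisy but spatially aligned

mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(img, shifted))   # large, even though the content is unchanged
print(mse(img, noisy))     # much smaller, although the image is visibly degraded
```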
Anyway, I'm a bit suspicious about those OCR accuracy results, because they were basically using that very same module to train the generator, so the model can, in a way, learn how to trick the recognizer module, I think, so I'm not sure whether this is a fair comparison; but still, nonetheless, they did compare them. Okay, and here are some additional images where they've done the Poisson blending I mentioned, and again, I think the results are pretty impressive. If you take a look at this one, later, Waste, this pretty much looks like an original image, really nice results there, as well as here, awesome results, I don't see any problems with these ones. They later show some failure cases, I'll show you those in a bit. They also compare with a certain baseline, this time looking at the FID and some other metric, I think it's called geometric something, I forgot, but basically a similar thing to FID, and they show that they get much lower numbers compared to the baseline, and again, lower is better for the FID metric. Aside from these quantitative results, they did a user study where they showed that this paper is better compared to the baselines. As you can see here, Davis and collaborators, the previous work, basically constrains itself to a specific domain like handwriting or scene text, whereas this method works on all of these, and again, looking at this user study, it outperforms the baselines, both SRNet and this one, as well as in the side-by-side comparison. Okay, so they show different failure cases here, and they're of very different natures. Here you can see that it fails to capture this italic style of handwriting, as well as the thickness of the strokes, and there are other failure cases, like here, where it fails to capture these foreground effects. I guess one of the main reasons is that they are using those pre-trained recognizer and font classifier modules, combined with the L1 reconstruction loss; I think all of those jointly lead to this thing failing on more complex styles like this calligraphy here, you can notice it here, or in this example, where they lose this perspective distortion, let's call it that, or maybe, I don't know, yeah, whatever. And yeah, I think that's pretty much it. They finally mention that this can be used in many cool applications, like mixed reality and such: our method aims at use cases involving creative self-expression and augmented reality. And that's something I'm really passionate about, since I've been working on Microsoft HoloLens in the past. They mention here photorealistic translation: once you have some augmented reality glasses, you can imagine you're looking at some foreign language and all of a sudden it's automatically translated and perfectly blended in as if it were real, and you can kind of bridge this communication gap using that. They also say here: our method can be used for data generation and augmentation for training future OCR systems. That one is pretty obvious; when you can do this really well, you can use the generated data as basically new data points in a dataset to train some OCR models, or just to augment an existing dataset.
However, they say here: we are aware that, like other technologies, ours can be misused, possibly the same way deepfake faces can be used for misinformation. So, deepfakes, that's a real concern for society. I guess by open sourcing the dataset and the research they're doing, they're kind of helping address it, but at the end of the day it's just a game of cat and mouse. Researchers are developing better tools, but by open sourcing this you're also providing better tools for those who want to exploit it, so I don't think there is a great strategy for coping with these deepfakes in general. As a conclusion, they get some really cool results. They do have some failure cases, they still have problems with blur, but all in all it looks like great work. Maybe one of the things to focus on would be to try and reduce the number of components in the system; I guess it's pretty cumbersome, because all of these are coupled, so if you want to change anything you'd probably have to re-tune a bunch of different values in order to retrain this thing. So I guess that would be a future research direction. Having said that, hopefully you liked this video. If you did, subscribe, share it out and until next time, bye bye.
[{"start": 0.0, "end": 5.0, "text": " What's up? In this video I'm covering this new paper called Textile Brush Transfer of"}, {"start": 5.0, "end": 11.120000000000001, "text": " Text Aesthetics from a Single Example by Pravin Krishnan, Ramakavuri, Guampeng, Boris"}, {"start": 11.120000000000001, "end": 17.94, "text": " Vasilev and Tal Hessner from Facebook AI Research. Basically they got some really nice results"}, {"start": 17.94, "end": 23.96, "text": " of rendering text in a specified style, in a one-shot setting. And let me show you what"}, {"start": 23.96, "end": 29.8, "text": " I mean by that. So basically you can see here in the source image you have some text and"}, {"start": 29.8, "end": 35.480000000000004, "text": " they kind of find a bounty box around it and they extract it and you can see just using"}, {"start": 35.480000000000004, "end": 44.22, "text": " this single word image they can learn the style and render novel content in that very"}, {"start": 44.22, "end": 49.82, "text": " same style one-shot. So basically just a single image like this one and you can do something"}, {"start": 49.82, "end": 56.480000000000004, "text": " like this. So they just kind of swapped a soya bottle for a tea vessel. And not only that,"}, {"start": 56.48, "end": 61.36, "text": " so you can see here another example on the book and not only that but they can also work"}, {"start": 61.36, "end": 66.16, "text": " cross-domain. So they're not constrained to a scene text editing domain. They can also"}, {"start": 66.16, "end": 72.03999999999999, "text": " do handwriting. So again here you can see a single word image and they can render novel"}, {"start": 72.03999999999999, "end": 78.75999999999999, "text": " sentences using that very same style. So the thing is they only generate these word images"}, {"start": 78.75999999999999, "end": 83.03999999999999, "text": " and how they actually blend these together is using something called Poisson blending."}, {"start": 83.04, "end": 86.64, "text": " That's the only part that feels a bit like hand crafting compared to this deep learning"}, {"start": 86.64, "end": 91.48, "text": " approach of actually generating these images. So let me show you some more examples what"}, {"start": 91.48, "end": 98.88000000000001, "text": " they achieve and then we'll start explaining the paper. So here you can see some examples"}, {"start": 98.88000000000001, "end": 105.88000000000001, "text": " on the left you have word image and on the right you can see the novel content in the"}, {"start": 105.88000000000001, "end": 111.94000000000001, "text": " same style. And it's pretty nice. There are some problems obviously here for example background"}, {"start": 111.94, "end": 118.92, "text": " is not the same. Like words are kind of blurry and I'll later give my hypothesis why that"}, {"start": 118.92, "end": 123.16, "text": " is and I think it's because of the reconstruction loss they're using with L1. But it's still"}, {"start": 123.16, "end": 128.16, "text": " pretty decent. So again here you can see the foreground texture is kind of lost here. The"}, {"start": 128.16, "end": 133.68, "text": " letters are kind of blurred. But all in all like it's really nice, looks awesome. Also"}, {"start": 133.68, "end": 138.84, "text": " sometimes I notice that the perspective is not the same so they don't manage to capture"}, {"start": 138.84, "end": 146.56, "text": " the sheer, the perspective of the source style. But like it's pretty awesome. 
Like look at"}, {"start": 146.56, "end": 152.56, "text": " this one. Disarming statement. I wouldn't be able to kind of discriminate between the"}, {"start": 152.56, "end": 158.18, "text": " two. They look the same to me. So nice results. Now let me start digging into how it works."}, {"start": 158.18, "end": 163.52, "text": " So they say here we present a novel approach for disentangling the content of a text image"}, {"start": 163.52, "end": 168.8, "text": " from all aspects of its appearance. The appearance representation we derive can then be applied"}, {"start": 168.8, "end": 174.44, "text": " to new content for one shot transfer of the source style to new content. We'll learn this"}, {"start": 174.44, "end": 179.08, "text": " disentanglement in a self-supervised manner. So this is not completely true because they"}, {"start": 179.08, "end": 183.88000000000002, "text": " actually have a couple of pre-training networks like one for, like one is an OCR engine, one"}, {"start": 183.88000000000002, "end": 190.20000000000002, "text": " is a font classifier and they also assume to have those bounding boxes as well as the"}, {"start": 190.20000000000002, "end": 195.32000000000002, "text": " ground truth text inside of those bounding boxes. So there, it's a combination of supervised"}, {"start": 195.32, "end": 200.28, "text": " learning and self-supervised learning, but like a nice portion of the pipeline is actually"}, {"start": 200.28, "end": 205.23999999999998, "text": " self-supervised. So that's cool. So our method processes entire word boxes without requiring"}, {"start": 205.23999999999998, "end": 210.12, "text": " a segmentation of text from background per character processing or making assumptions"}, {"start": 210.12, "end": 217.56, "text": " on string lengths. So that's cool. So as a kind of motivation for one shot learning,"}, {"start": 217.56, "end": 222.5, "text": " they say it here, moreover, it's unreasonable to expect text images used as style samples"}, {"start": 222.5, "end": 227.32, "text": " to provide example appearances for the entire alphabet, including digits. Transferring styles"}, {"start": 227.32, "end": 232.56, "text": " between strings with different characters therefore implies hallucinating plausible appearances"}, {"start": 232.56, "end": 238.96, "text": " of unseen characters in the input style. And that kind of, yeah, should associate you with"}, {"start": 238.96, "end": 243.28, "text": " generative adversarial networks and that's one of the components they're obviously using"}, {"start": 243.28, "end": 248.44, "text": " here in this work, but there are many components to it. And let me now kind of try and explain."}, {"start": 248.44, "end": 253.48, "text": " So I'll first try to give you like a high level overview and then we'll start digging"}, {"start": 253.48, "end": 257.71999999999997, "text": " into every single component. And like most of this video, we'll focus on explaining how"}, {"start": 257.71999999999997, "end": 263.16, "text": " this thing works because the problem itself is ill-posed as every, like similarly to neural"}, {"start": 263.16, "end": 269.68, "text": " style transfer because as a final conclusion, like humans just make a decision whether something"}, {"start": 269.68, "end": 274.04, "text": " looks nice or not. Or although there are some objective metrics like Fraschay's inception"}, {"start": 274.04, "end": 280.88, "text": " distance or inception score used in GANs, but like those are not that robust. 
And yeah,"}, {"start": 280.88, "end": 287.84000000000003, "text": " what they did also was a user study. So that's additionally, they've showed that this work"}, {"start": 287.84000000000003, "end": 294.6, "text": " looks better for a certain group of people they took for this experiment. So let me now"}, {"start": 294.6, "end": 301.40000000000003, "text": " start kind of dissecting this. So first things first, what they do is they take the image"}, {"start": 301.4, "end": 307.56, "text": " that has the style, like the style they want, and they crop it around the image word. So"}, {"start": 307.56, "end": 313.59999999999997, "text": " they put some context around the actual text that has the style they want. And that's one"}, {"start": 313.59999999999997, "end": 318.28, "text": " input. The second thing they do is they have a text which they want to render in the style"}, {"start": 318.28, "end": 324.03999999999996, "text": " from this image. And so you can see here, so earth is the, so they just take the word"}, {"start": 324.03999999999996, "end": 328.06, "text": " earth, which is the same as here, because as I told you, they have the ground truth,"}, {"start": 328.06, "end": 333.96, "text": " but they add some novel text for which they don't have the actual image. And what they"}, {"start": 333.96, "end": 340.32, "text": " then do is they don't input the text itself. They actually render the text on a white background"}, {"start": 340.32, "end": 344.92, "text": " and using some standard font. And we'll see those details a bit later. So once they render"}, {"start": 344.92, "end": 351.12, "text": " the text, now they can encode the text using this content encoder and they can encode the"}, {"start": 351.12, "end": 357.96, "text": " style using this style encoder. Both of these are just simple ResNet, basically ResNet,"}, {"start": 357.96, "end": 363.96, "text": " 34. And it's just modified a little bit. And so what I do here is because they have, as"}, {"start": 363.96, "end": 370.84, "text": " I told you, this bounding box ground truth, what I do is they take specific features corresponding"}, {"start": 370.84, "end": 377.21999999999997, "text": " to the bounding box and they do an average pooling along those pixels, along those features,"}, {"start": 377.21999999999997, "end": 381.35999999999996, "text": " and they create this vector here, which is a style vector, which was like, I think, 512,"}, {"start": 381.35999999999996, "end": 387.12, "text": " it doesn't matter, but yeah, let's make it a bit concrete. The thing they do with content"}, {"start": 387.12, "end": 392.16, "text": " is they just create a tensor here. Like as I said, they modify the ResNet 34, so they"}, {"start": 392.16, "end": 397.6, "text": " take just a certain feature maps from a certain layer. And the next thing they do is, you"}, {"start": 397.6, "end": 403.32, "text": " can see here, they feed those into this generator, which is style again, modified generator pretty"}, {"start": 403.32, "end": 410.92, "text": " much. So this style mapping network, what it does is the following. So it maps this"}, {"start": 410.92, "end": 417.76, "text": " style representation, this latent vector here, into multiple coefficients. 
And what these"}, {"start": 417.76, "end": 422.72, "text": " do if you're familiar with either the original adaptive instance normalization paper or with"}, {"start": 422.72, "end": 428.84000000000003, "text": " style again, is they kind of modify the feature maps, like they modify certain statistics"}, {"start": 428.84000000000003, "end": 436.02000000000004, "text": " like the mean and standard deviation so that they can kind of build in the style of their"}, {"start": 436.02, "end": 441.88, "text": " choosing, like the style they want. Okay? So I'll later explain how that adaptive instance"}, {"start": 441.88, "end": 445.71999999999997, "text": " normalization works, but for now you just have some coefficients here, which we got"}, {"start": 445.71999999999997, "end": 451.47999999999996, "text": " by mapping it from this style vector, and they modified, they use those to modify the"}, {"start": 451.47999999999996, "end": 457.0, "text": " feature maps. Kind of going, zooming in a little bit here, you can see the architecture"}, {"start": 457.0, "end": 464.24, "text": " is quite detailed. There are many components here, but the main idea is to use this modulated"}, {"start": 464.24, "end": 470.24, "text": " convolutions, so to use these coefficients to kind of modify the feature maps, and then"}, {"start": 470.24, "end": 476.92, "text": " they just have a progressive growing of these feature maps until they generate as the output"}, {"start": 476.92, "end": 482.04, "text": " this word image as well as the soft mask. So they're generating those two, so both the"}, {"start": 482.04, "end": 490.72, "text": " soft masks as well as the actual image words. So, okay. So once they generate those, they"}, {"start": 490.72, "end": 496.92, "text": " have five losses in total. So as you can see, this is basically hand crafting 2.0. They're"}, {"start": 496.92, "end": 502.32000000000005, "text": " using just DL components to create these systems, but the thing I don't like is that there are"}, {"start": 502.32000000000005, "end": 508.68, "text": " so many moving parts here, and I hope that like a couple of papers down the line, this"}, {"start": 508.68, "end": 513.9200000000001, "text": " will be something like end to end and we'll do the same thing. Okay, rent over. Let me"}, {"start": 513.9200000000001, "end": 520.1600000000001, "text": " explain to you briefly these losses functions. So they have, so these two here, so the typeface"}, {"start": 520.16, "end": 524.88, "text": " classifier and the recognizer are pre-trained and frozen during the training. So what I"}, {"start": 524.88, "end": 533.6, "text": " do is they take a, so this is a VGG 19 network, so this one is VGG 19. What I do is they take"}, {"start": 533.6, "end": 541.24, "text": " a VGG 19 network and they have some synthetic data set with certain image words, and they"}, {"start": 541.24, "end": 545.76, "text": " have actually the ground truth labels for the font, for the font type. So they just"}, {"start": 545.76, "end": 551.8, "text": " train a simple classifier that can predict what's the font of the input image. And a"}, {"start": 551.8, "end": 557.52, "text": " thing worth mentioning here is that the fonts used in that data set won't be the same as"}, {"start": 557.52, "end": 563.36, "text": " the fonts that this model will eventually be using because like they're using real data"}, {"start": 563.36, "end": 569.16, "text": " and here they just have synthetic fonts. 
So yeah, you can't cover the whole variance like"}, {"start": 569.16, "end": 575.4, "text": " that's present in real data sets. So having said that, what I do here is something, so"}, {"start": 575.4, "end": 579.24, "text": " this is a perceptual loss. If you're familiar with neural cell transfer, what they do here"}, {"start": 579.24, "end": 584.72, "text": " is they basically make sure, so as you can see, they input the ground truth, so this"}, {"start": 584.72, "end": 590.64, "text": " is this image, red image, and they input this generated image. And what I want to make sure"}, {"start": 590.64, "end": 596.16, "text": " is that the feature maps, it's a specific layer, are the same looking at one distance,"}, {"start": 596.16, "end": 601.36, "text": " for example. And also they form gram matrices which kind of capture the style of the image"}, {"start": 601.36, "end": 606.28, "text": " and they also make sure that those look the same for both this generated image as well"}, {"start": 606.28, "end": 611.4, "text": " as for this ground truth image. They also have some embedding loss, we'll get to that"}, {"start": 611.4, "end": 615.96, "text": " a bit later, but that's pretty much a neural style transfer part. Then they have recognizer"}, {"start": 615.96, "end": 621.5600000000001, "text": " which is a simple OCR engine. Again, they trained it, they took some pre-trained OCR"}, {"start": 621.5600000000001, "end": 627.2, "text": " engine, off the shelf thing, and what they do is because we have the ground truth text,"}, {"start": 627.2, "end": 632.2, "text": " we know that this is earth and warming, they just input all of these four, so both the"}, {"start": 632.2, "end": 638.1600000000001, "text": " masks as well as the generated images, and they kind of do the OCR. And finally they"}, {"start": 638.1600000000001, "end": 644.6400000000001, "text": " do simple cross entropy between the ground truth text and between whatever this OCR engine"}, {"start": 644.6400000000001, "end": 651.08, "text": " saw in these four images. So yeah, basically, let me just kind of explain that a bit more"}, {"start": 651.08, "end": 656.0400000000001, "text": " detailed. So let's say the first character, so it's out with certain distribution and"}, {"start": 656.04, "end": 660.8, "text": " you're gonna do argmax and that will be our first character. And so what they do is because"}, {"start": 660.8, "end": 665.0, "text": " they have the ground truth words, for example, this is the, let's say we're looking at this"}, {"start": 665.0, "end": 669.52, "text": " image, so they know that the first letter is E, so that will be one hot encoding of"}, {"start": 669.52, "end": 674.0, "text": " E, and they'll just do cross entropy between the output distribution and between the target"}, {"start": 674.0, "end": 679.88, "text": " distribution to train the generator. So again, this is frozen, so they are not modifying"}, {"start": 679.88, "end": 684.12, "text": " these weights, they are modifying the generator and the rest of the pipeline, okay? Same thing"}, {"start": 684.12, "end": 690.36, "text": " for this network here. And finally they have the, like this is pretty much default thing,"}, {"start": 690.36, "end": 697.12, "text": " discriminator, like the GAM part where they are taking this real image and the generated"}, {"start": 697.12, "end": 705.64, "text": " image here to make sure that they look plausible, realistic. 
Okay, so that was the pipeline,"}, {"start": 705.64, "end": 711.88, "text": " a bunch of components, seven neural networks here, as you can see, one, two, three, four,"}, {"start": 711.88, "end": 717.6, "text": " five, six, seven, five losses, and yeah, I actually forgot to mention these two. So let"}, {"start": 717.6, "end": 722.8, "text": " me just briefly go over those. So reconstruction loss, what I do is they make sure that this"}, {"start": 722.8, "end": 729.6, "text": " generated image here is close to this image here. So yeah, and that's the part which I"}, {"start": 729.6, "end": 736.4, "text": " suspect is causing the blurriness we saw in the results. The second part is the cyclic"}, {"start": 736.4, "end": 741.72, "text": " consistency loss from the CycleGAM paper, and this is a bit more complicated to explain,"}, {"start": 741.72, "end": 747.0, "text": " but what they do is the following. So they take this image, okay, they generate the output"}, {"start": 747.0, "end": 752.0, "text": " image, so this is Earth. So what they do now is they take this generated image, they blend"}, {"start": 752.0, "end": 757.0, "text": " it back into this original image because you know they have the bounding box, so they blend"}, {"start": 757.0, "end": 761.28, "text": " it in here, and then they pass that image again through the whole pipeline and generate"}, {"start": 761.28, "end": 769.12, "text": " another image, let's call that one OSC one prime, whatever. And now they want to make"}, {"start": 769.12, "end": 775.08, "text": " sure that that image is the same as this one, so they just do L1 between those two. So again,"}, {"start": 775.08, "end": 779.32, "text": " it's complicated, a bunch of details, but yeah, that's what I do, that's the whole pipeline,"}, {"start": 779.32, "end": 786.2, "text": " and now let's start digging into details a bit more. Okay, so let's start with encoders."}, {"start": 786.2, "end": 790.5600000000001, "text": " So given a character string C for content, we therefore render it using a standard font"}, {"start": 790.5600000000001, "end": 798.6, "text": " for the text Verily Serif Mono on a plain white background producing the image 64 height"}, {"start": 798.6, "end": 804.6800000000001, "text": " and width is arbitrary, whereas cap denotes this standard font. Okay, so that's the part"}, {"start": 804.6800000000001, "end": 808.66, "text": " I explained to you, so this is this part. So you have the ground truth text, you just"}, {"start": 808.66, "end": 815.0, "text": " render the images, and the reason they do that is just so that they have uniform encoders"}, {"start": 815.0, "end": 821.24, "text": " here so that they don't have to encode text in some fancy ways, they just render it. So"}, {"start": 821.24, "end": 826.6800000000001, "text": " the style encoder is given a localized scene image along with a word bounding box denoting"}, {"start": 826.68, "end": 830.88, "text": " the word image from which the style is learned. So that's the part I explained to you here."}, {"start": 830.88, "end": 836.64, "text": " So basically, this is the word image and they're actually including additional context so that"}, {"start": 836.64, "end": 843.28, "text": " it can better model the background, etc. 
Okay, and the penultimate layer of style encoder"}, {"start": 843.28, "end": 848.2399999999999, "text": " uses a region of interest, a line operator as described in mask rcnn to pull features"}, {"start": 848.2399999999999, "end": 854.0999999999999, "text": " coming from the desired word region, thereby reducing the style tensor to a fixed 512 dimensional"}, {"start": 854.1, "end": 857.0400000000001, "text": " representation. So that's the part I explained to you here because they have the bounding"}, {"start": 857.0400000000001, "end": 862.6800000000001, "text": " box, this yellow thing, they can use that information to kind of pull over this feature"}, {"start": 862.6800000000001, "end": 867.28, "text": " maps to get this final style vector. So that's the part about encoders. So again, they are"}, {"start": 867.28, "end": 872.36, "text": " resonant three, four, nothing fancy there. Now let's go to generator and style mapping"}, {"start": 872.36, "end": 877.96, "text": " network and let me explain you those. So the content representation is directly fed as"}, {"start": 877.96, "end": 882.4, "text": " input to the first layer of the generator instead of providing the learn tensor which"}, {"start": 882.4, "end": 887.48, "text": " was used in the original style again to model. So they just kind of explain some modifications"}, {"start": 887.48, "end": 892.88, "text": " they did, but it's basically still style style again, be too with some small modifications."}, {"start": 892.88, "end": 899.48, "text": " And we introduce a style mapping network M which converts the style vector. Yes. So that's"}, {"start": 899.48, "end": 909.12, "text": " yes, is this thing here? Yes. Two layers specific style representations WSI where I indexes"}, {"start": 909.12, "end": 914.5600000000001, "text": " the layers of the generator, which are then fed as the adaptive instance normalization"}, {"start": 914.5600000000001, "end": 922.44, "text": " coefficient to each layer of the generator. So those, those WSI are just these things"}, {"start": 922.44, "end": 930.52, "text": " here. Okay. So now a quick detour into what adaptive instance normalization is. It's fairly"}, {"start": 930.52, "end": 936.64, "text": " simple. The original idea came from the neural style transfer literature. And what I do is"}, {"start": 936.64, "end": 941.36, "text": " the following. So imagine you have, imagine you have something like a VGG network and"}, {"start": 941.36, "end": 945.6, "text": " it's pre-trained on ImageNet, like it's a classifier and the weights are frozen. And"}, {"start": 945.6, "end": 949.3199999999999, "text": " what they do now is the following in that original paper. So they just take the style"}, {"start": 949.3199999999999, "end": 955.08, "text": " image, they pass it through this thing. They basically cache the feature maps or more precisely"}, {"start": 955.08, "end": 961.96, "text": " they cache the means and standard deviations of each of those feature maps. So once you"}, {"start": 961.96, "end": 968.84, "text": " have that, they input the content image here. And so what I do is if we take a specific"}, {"start": 968.84, "end": 973.36, "text": " content image feature map from a specific layer like this one, they're going to, as"}, {"start": 973.36, "end": 980.9200000000001, "text": " you can see here, find the mean statistic of the feature map, find the standard deviation"}, {"start": 980.9200000000001, "end": 986.08, "text": " and that's the just the instance normalization part. 
So they just use, so they have a feature"}, {"start": 986.08, "end": 992.4000000000001, "text": " map here and just going over the height and width features, they're going to find these"}, {"start": 992.4000000000001, "end": 997.88, "text": " mus and sigmas. They're going to normalize the content feature map and they're going"}, {"start": 997.88, "end": 1005.2800000000001, "text": " to add the corresponding standard deviation and mean from the style image. So from this"}, {"start": 1005.2800000000001, "end": 1010.44, "text": " one. So those are the cached values. And by doing that, by just modifying those simple"}, {"start": 1010.44, "end": 1016.72, "text": " first order and first order statistics, they can basically, they show that they can adaptively"}, {"start": 1016.72, "end": 1022.4000000000001, "text": " change the style of the image. And yeah, so that was the basic idea and that's what style"}, {"start": 1022.4000000000001, "end": 1029.24, "text": " again was using and that's what this paper is also using to kind of generate the source"}, {"start": 1029.24, "end": 1030.24, "text": " style."}, {"start": 1030.24, "end": 1035.64, "text": " Okay. Finally, I mentioned masks and they say here, the final mask provides a soft semantic"}, {"start": 1035.64, "end": 1040.72, "text": " segmentation of the generated image and is later used to construct the loss functions."}, {"start": 1040.72, "end": 1048.44, "text": " Yeah, we'll see how they are exactly used. I already explained that this basically this"}, {"start": 1048.44, "end": 1055.0400000000002, "text": " recognizer is using masks as well as these generated images to do cross entropy over"}, {"start": 1055.0400000000002, "end": 1059.16, "text": " them and to train the generator, but they're also used for these reconstruction and cyclic"}, {"start": 1059.16, "end": 1063.92, "text": " loss and we'll see that in a minute. Okay. Let me, let me explain to you how this text"}, {"start": 1063.92, "end": 1068.3600000000001, "text": " perceptual loss works in a bit more detail. So specifically we assume a pre-trained type"}, {"start": 1068.3600000000001, "end": 1073.0, "text": " phase classification of proxy. Importantly, we make no assumption that the fonts see was"}, {"start": 1073.0, "end": 1077.8400000000001, "text": " trained on, trained to classify would appear in the photos we use to train and evaluate"}, {"start": 1077.8400000000001, "end": 1087.52, "text": " our textile brush. Okay. Now let me try and debug these equations for you. So it looks"}, {"start": 1087.52, "end": 1096.08, "text": " like this. Again, we have, let's say we have a VGG and in each layer we're gonna, so these"}, {"start": 1096.08, "end": 1101.8799999999999, "text": " are just the activation volumes and the spatial extent kind of goes down, whereas the number"}, {"start": 1101.8799999999999, "end": 1109.08, "text": " of channels goes up. So let me kind of depict it like this. So, so this is maybe the input"}, {"start": 1109.08, "end": 1114.84, "text": " image and then we're processing it. We get the feature maps and like, so this is the,"}, {"start": 1114.84, "end": 1117.6799999999998, "text": " this would be one feature map. This will be the second feature map. This would be the"}, {"start": 1117.6799999999998, "end": 1122.9599999999998, "text": " third feature map, et cetera. So this is the number of channels and we don't see the spatial"}, {"start": 1122.9599999999998, "end": 1130.8, "text": " extent of the image. I'm looking from it sideways. 
So what I do is they first do this. This is"}, {"start": 1130.8, "end": 1137.04, "text": " the perceptual loss part. So because this thing is pre-trained, it learns to extract certain"}, {"start": 1137.04, "end": 1144.6399999999999, "text": " features and they do the following. So they make sure that the generated image, so this"}, {"start": 1144.64, "end": 1150.1200000000001, "text": " is one, the output image and the input image are really close in the feature space. So"}, {"start": 1150.1200000000001, "end": 1155.0, "text": " they take certain feature maps, like maybe, I don't know, maybe take a certain subset"}, {"start": 1155.0, "end": 1162.8000000000002, "text": " here and we make sure that the, that those are really similar looking at L1 norm. Okay."}, {"start": 1162.8000000000002, "end": 1168.48, "text": " So you take the generated image and you take the ground truth image and you want to make"}, {"start": 1168.48, "end": 1175.6, "text": " sure that these here, the features are really close, where closeness is defined by L1. That's"}, {"start": 1175.6, "end": 1182.16, "text": " the, that's the content loss in neural style transfer literature. The second thing they"}, {"start": 1182.16, "end": 1187.64, "text": " do is they form grand matrices also from those features. So you take those feature maps there"}, {"start": 1187.64, "end": 1193.76, "text": " and you do dot product between them pairwise and you end up with a certain thing called"}, {"start": 1193.76, "end": 1198.32, "text": " a feature map, which will have, so if this is C and they just do dot product between"}, {"start": 1198.32, "end": 1203.6, "text": " every single feature map, they'll end up with C times C matrix, which kind of captures the"}, {"start": 1203.6, "end": 1208.4, "text": " style of your image. And again, they want to make sure that the generated image, so"}, {"start": 1208.4, "end": 1214.72, "text": " this one, the generated one, as well as the input image have the same gray matrix. So"}, {"start": 1214.72, "end": 1219.68, "text": " they want to make them as close as possible again using L1. And this embedding thing is"}, {"start": 1219.68, "end": 1226.54, "text": " pretty much the same as this thing here. The only difference is that this psi here comes"}, {"start": 1226.54, "end": 1231.6000000000001, "text": " from the pen ultimate layer. So it's much deeper layer. So it's focusing on certain"}, {"start": 1231.6000000000001, "end": 1239.16, "text": " other features compared to these phi's here. And that's it in a nutshell. Again, I'm going"}, {"start": 1239.16, "end": 1242.64, "text": " to link a neural style transfer video somewhere here. You can check those out if you're not"}, {"start": 1242.64, "end": 1247.0800000000002, "text": " familiar with this, but this is pretty simple things. Once you wrap your head around it,"}, {"start": 1247.08, "end": 1253.28, "text": " it's pretty, pretty easy. Okay. They say here, note that the perceptual losses are only computed"}, {"start": 1253.28, "end": 1261.0, "text": " for the output image corresponding to original content C1. So they can't do it for the C2"}, {"start": 1261.0, "end": 1267.04, "text": " word, so they can't do it for this one, for this warming one, because they don't have"}, {"start": 1267.04, "end": 1273.04, "text": " the actual ground truth image. So they just do it for the earth, for that one. Okay. That"}, {"start": 1273.04, "end": 1278.72, "text": " was that part of the loss. Now let's focus on the second part. 
We use the output string"}, {"start": 1278.72, "end": 1285.12, "text": " estimated by the recognizer module to compute a loss LR, which reflects how well the generator"}, {"start": 1285.12, "end": 1291.58, "text": " captured the desired content string C1, C2, simultaneously on both, as I said, so on both"}, {"start": 1291.58, "end": 1296.12, "text": " target images, on both the generated images and their masks. So how this will look like"}, {"start": 1296.12, "end": 1302.6, "text": " is the following. So again, let's take an input image, and this is something we generated."}, {"start": 1302.6, "end": 1309.84, "text": " And we pass it through this recognizer network. And it will start generating character by"}, {"start": 1309.84, "end": 1316.8, "text": " character. And assuming the ground truth word is earth, as an example above, so earth. And"}, {"start": 1316.8, "end": 1320.84, "text": " assuming this one, the first distribution it generates looks something like this, like"}, {"start": 1320.84, "end": 1326.7199999999998, "text": " maybe like this. And it turns out that this argmax corresponds to letter B, whereas this"}, {"start": 1326.7199999999998, "end": 1332.04, "text": " one here maybe corresponds to letter E. And so obviously we made a mistake here. And what"}, {"start": 1332.04, "end": 1338.32, "text": " they do is just, they have the ground truth, the one hot encoding, they'll just do basically"}, {"start": 1338.32, "end": 1345.3999999999999, "text": " whatever this probability is, let's call it Pi. They're just going to do minus log of"}, {"start": 1345.3999999999999, "end": 1353.54, "text": " Pi. And as you can see, as Pi goes to one, which should be, which is the ground truth,"}, {"start": 1353.54, "end": 1359.18, "text": " the loss goes to zero. So this loss looks like this classic stuff. So this is one here,"}, {"start": 1359.18, "end": 1366.0, "text": " this is zero here, this is the loss, and this is the output probability. And they just repeat"}, {"start": 1366.0, "end": 1370.4, "text": " that process. So now they are going to feed this back into, and they're going to start"}, {"start": 1370.4, "end": 1377.76, "text": " generating next character, and next character, and next character. And basically they'll"}, {"start": 1377.76, "end": 1383.3600000000001, "text": " just be doing cross entropy and summing those and use that to back prop and train the generator."}, {"start": 1383.3600000000001, "end": 1388.4, "text": " So that's how the text content loss works like. And now let's focus on the reconstruction"}, {"start": 1388.4, "end": 1394.96, "text": " losses. So here LREC represents the differences between the output generated image in style"}, {"start": 1394.96, "end": 1403.24, "text": " S and content C1, okay? And the cropped input style example ISC1. So basically, I think"}, {"start": 1403.24, "end": 1408.72, "text": " explained this, but let me go back here. So they're going to do L1 between this thing"}, {"start": 1408.72, "end": 1413.6000000000001, "text": " here, so this earth and this one. And because they're doing L1, I'm pretty sure this is"}, {"start": 1413.6, "end": 1421.76, "text": " causing them this blurriness problem they have. Yep. Okay. Finally, we compute a fake"}, {"start": 1421.76, "end": 1430.6799999999998, "text": " style vector ES' and generate this image. So let me try and draw this. Basically, so"}, {"start": 1430.6799999999998, "end": 1438.0, "text": " this is the input image, and this is the bounding box. 
So everything around here is the context,"}, {"start": 1438.0, "end": 1445.0, "text": " okay? So this is our initial input image. And in this example here, that will be earth,"}, {"start": 1445.0, "end": 1452.68, "text": " exactly this one. And now what I do is they feed this through the whole encoder generator"}, {"start": 1452.68, "end": 1459.64, "text": " pipeline and they generate the initial image. Let's call it O. And so what they now do is"}, {"start": 1459.64, "end": 1465.28, "text": " they take this O and they basically stitch it back in instead of EI here. So they'll"}, {"start": 1465.28, "end": 1471.44, "text": " have something like this. They'll have the same image, except that instead of I, they're"}, {"start": 1471.44, "end": 1478.48, "text": " going to blend in O. And again, they're going to pass this thing through the whole encoder"}, {"start": 1478.48, "end": 1486.8, "text": " generator image and they're going to generate another image called O'. And finally, they're"}, {"start": 1486.8, "end": 1495.76, "text": " going to do L1 between these two. So this one and this one. Just simple L1. And that's"}, {"start": 1495.76, "end": 1503.9199999999998, "text": " the cycle consistency loss from the cycle GAN paper, although there they use two different"}, {"start": 1503.9199999999998, "end": 1508.84, "text": " generator networks to do the same thing. So the semantics behind this approach is the"}, {"start": 1508.84, "end": 1513.8799999999999, "text": " following. So whatever we generate here, so this O, we want to make sure that after passing"}, {"start": 1513.88, "end": 1518.96, "text": " it another time through the whole encoder generator pipeline, we want to make sure that"}, {"start": 1518.96, "end": 1528.0, "text": " we can go back to the original word image I. That's the reasoning behind it. And yeah,"}, {"start": 1528.0, "end": 1532.0800000000002, "text": " you should go ahead and check out the cycle GAN paper to understand this a bit better."}, {"start": 1532.0800000000002, "end": 1537.14, "text": " Again, we can use an L1 loss criterion for both reconstruction losses and split these"}, {"start": 1537.14, "end": 1541.92, "text": " functions on the foreground and background region separately using the generated masks."}, {"start": 1541.92, "end": 1546.04, "text": " So this is where the masks come into play. This allows the network to learn fine-grained"}, {"start": 1546.04, "end": 1552.1200000000001, "text": " variations of the style present in the foreground region. Okay. And because they don't have"}, {"start": 1552.1200000000001, "end": 1557.2, "text": " any ground truth for masks, by doing this loss and by doing the previous loss with the"}, {"start": 1557.2, "end": 1564.6000000000001, "text": " recognizer module, they actually learn how to produce nice and accurate soft masks. That's"}, {"start": 1564.6000000000001, "end": 1569.8000000000002, "text": " pretty much it. Now, what they do is they just do a weighted average of all of these."}, {"start": 1569.8, "end": 1577.48, "text": " And I can just imagine how the hyperparameter search for this looks like. Yeah. Additionally,"}, {"start": 1577.48, "end": 1585.52, "text": " the deep part is the discriminator-based adversarial loss. They additionally use some regularizations"}, {"start": 1585.52, "end": 1591.12, "text": " like the R1 regularization and path length regularization. I won't cover those into detail"}, {"start": 1591.12, "end": 1596.48, "text": " here, but just some regularization methods. 
R1 just kind of constrains the discriminator"}, {"start": 1596.48, "end": 1605.1200000000001, "text": " gradients, et cetera. So, yep, that was pretty much it for the model itself. So it's a fair"}, {"start": 1605.1200000000001, "end": 1614.8, "text": " bit of work here to make this work, I guess. And there are many, many different papers"}, {"start": 1614.8, "end": 1621.68, "text": " which are, which you need to read to understand everything here. So as I said, style GAN,"}, {"start": 1621.68, "end": 1629.92, "text": " neural style transfer, literature, then GANs in general, like OCR. So a bunch of components"}, {"start": 1629.92, "end": 1639.88, "text": " stitched together to give this pipeline the functionality it has. Okay. Now, other than"}, {"start": 1639.88, "end": 1644.92, "text": " the model, they also introduced this Imgur 5K handwriting set. And the whole point is"}, {"start": 1644.92, "end": 1653.4, "text": " it's got more variance, it's got much more diversity in the handwriting. The style variance"}, {"start": 1653.4, "end": 1657.96, "text": " is much bigger than looking at all of the other previous datasets. And they kind of"}, {"start": 1657.96, "end": 1662.92, "text": " show certain metrics here, like words, number of writers. You can see that there are much"}, {"start": 1662.92, "end": 1668.24, "text": " more writers here. The styles are whatever high means. Basically, there are more styles"}, {"start": 1668.24, "end": 1674.92, "text": " and there is background, there is more, it's more in the wild compared to the other datasets."}, {"start": 1674.92, "end": 1682.92, "text": " Okay. Now, finally, let's focus on the results. We saw the visual results and they also did"}, {"start": 1682.92, "end": 1689.04, "text": " some ablations here where they tried to kind of remove certain components and just make"}, {"start": 1689.04, "end": 1694.72, "text": " sure that everything they are using is actually improving the performance. They're using metrics"}, {"start": 1694.72, "end": 1702.72, "text": " like MSC, so mean, square error, this structural similarity index measure, and this is just"}, {"start": 1702.72, "end": 1708.3600000000001, "text": " the signal to noise ratio metric. And finally, Fraschay's inception distance. So the three"}, {"start": 1708.3600000000001, "end": 1716.56, "text": " of these can only be used if they have the actual ground truth target image, which they"}, {"start": 1716.56, "end": 1724.0, "text": " have only for certain synthetic tests. So basically, FID is the one we should care about"}, {"start": 1724.0, "end": 1729.24, "text": " because that one kind of tries to capture what we humans think of a nice image. It's"}, {"start": 1729.24, "end": 1737.72, "text": " not perfect. Same thing goes for the inception score, but I think if used properly, it correlates"}, {"start": 1737.72, "end": 1745.56, "text": " nicely with human judgment. And they show here again, just using the discriminator or"}, {"start": 1745.56, "end": 1751.68, "text": " the GAN loss, they get poor performance, adding the recognizer module, so the OCR part improves"}, {"start": 1751.68, "end": 1757.52, "text": " everything, getting the reconstruction loss, and finally the cycle loss additionally gives"}, {"start": 1757.52, "end": 1763.68, "text": " them boost in all the metrics. 
So the interesting part here is that actually, once they add"}, {"start": 1763.68, "end": 1772.5600000000002, "text": " this final perceptual loss here, they boost the FID score, but they kind of lose on these"}, {"start": 1772.5600000000002, "end": 1779.6000000000001, "text": " objective metrics like MSC, SSAM, and PSNR, which are shown, by the way, not to correspond"}, {"start": 1779.6, "end": 1784.9599999999998, "text": " really good with our judgment. The reason being is, imagine you have, for MSC, if you"}, {"start": 1784.9599999999998, "end": 1789.36, "text": " have an image, and so if you generate an image and you just shift it by, I don't know, maybe"}, {"start": 1789.36, "end": 1796.1599999999999, "text": " five pixels downwards, so you just subtract five, you'll have an image which any human"}, {"start": 1796.1599999999999, "end": 1802.9599999999998, "text": " would say is perfect, but this MSC loss would be huge. So it just can't differentiate between"}, {"start": 1802.96, "end": 1810.64, "text": " salient regions and it just uses this dumb difference, like heuristic, to kind of give"}, {"start": 1810.64, "end": 1815.16, "text": " its judgment, but that doesn't correspond with human judgment that well. So what they"}, {"start": 1815.16, "end": 1820.4, "text": " do here is they try and see how good, this is a way to describe how good their generated"}, {"start": 1820.4, "end": 1826.92, "text": " images are by just passing them through this OCR engine. And that OCR engine is far from"}, {"start": 1826.92, "end": 1833.0, "text": " perfect, as you can see here, basically its accuracy drops significantly on this text"}, {"start": 1833.0, "end": 1839.52, "text": " visual question-answer dataset, but it's good enough and they try and feed the generated"}, {"start": 1839.52, "end": 1846.88, "text": " images from these two baselines as well from the text style brush paper, from this paper,"}, {"start": 1846.88, "end": 1851.44, "text": " and they notice that they have much higher accuracy. Now I'm a bit suspicious about these"}, {"start": 1851.44, "end": 1858.24, "text": " results because they were basically using that very same module to train the generator,"}, {"start": 1858.24, "end": 1865.52, "text": " so they kind of basically, the model can learn how to trick the recognizer, in a way can"}, {"start": 1865.52, "end": 1869.04, "text": " learn how to trick the recognizer module, I think here, so I'm not sure whether this"}, {"start": 1869.04, "end": 1876.8, "text": " is a fair comparison, but still, nonetheless, they did compare them. Okay, and here are"}, {"start": 1876.8, "end": 1883.0, "text": " some additional images where they've done the Poisson blending I mentioned, so again,"}, {"start": 1883.0, "end": 1888.52, "text": " I think the results are pretty impressive. If you take a look at this one later, Waste,"}, {"start": 1888.52, "end": 1894.08, "text": " this pretty much looks like an original image, it's pretty nice results there, as well as"}, {"start": 1894.08, "end": 1899.76, "text": " here, like awesome results, I don't see any problems with these ones, they later show"}, {"start": 1899.76, "end": 1906.28, "text": " some failure cases, I'll show you those in a bit. 
Again, they compare with a certain"}, {"start": 1906.28, "end": 1911.3999999999999, "text": " baseline, this time looking at the FID, and this is some other metric, I think it's called"}, {"start": 1911.3999999999999, "end": 1917.12, "text": " geometric something, I forgot, but basically, similar thing to FID, and they show that they"}, {"start": 1917.12, "end": 1925.56, "text": " get much lower results compared to the baseline, and again, lower is better for the FID metric."}, {"start": 1925.56, "end": 1932.32, "text": " Okay, aside from these quantitative results, they did some user study where they just showed"}, {"start": 1932.32, "end": 1938.28, "text": " that this paper is better compared to the baselines, and as you can see here, so Davis"}, {"start": 1938.28, "end": 1945.8799999999999, "text": " and collaborators, the previous art, the previous work basically constrains itself to specific"}, {"start": 1945.8799999999999, "end": 1951.3999999999999, "text": " domain like handwriting or scene text, whereas this method just works on all of these, and"}, {"start": 1951.3999999999999, "end": 1957.04, "text": " again, it outperforms, looking at these user study, it outperforms the baseline, so both"}, {"start": 1957.04, "end": 1963.3999999999999, "text": " the SRNet and this one, and yeah, as well as side by side. Okay, so they show different"}, {"start": 1963.3999999999999, "end": 1970.6399999999999, "text": " failure cases here, and they're of very different natures, so here you can see that it fails"}, {"start": 1970.6399999999999, "end": 1976.68, "text": " to capture this italic style of handwriting, and as well as the thickness of the strokes,"}, {"start": 1976.68, "end": 1983.08, "text": " and yeah, and there are different failure cases, like here, it fails to capture these"}, {"start": 1983.08, "end": 1988.9199999999998, "text": " foreground effects, and I guess one of the main reasons is they are using those pre-trained"}, {"start": 1988.9199999999998, "end": 1996.9199999999998, "text": " recognizer, and modules, as well as the font classifier, combined that with the L1 reconstruction"}, {"start": 1996.9199999999998, "end": 2004.72, "text": " loss, I think all of those kind of jointly lead to this thing failing on these more complex"}, {"start": 2004.72, "end": 2011.8799999999999, "text": " styles like this calligraphy here, you can notice it here, or in this example, they lose"}, {"start": 2011.88, "end": 2017.6000000000001, "text": " this perspective distortion, let's call it, or maybe, I don't know, like this just, yeah,"}, {"start": 2017.6000000000001, "end": 2023.72, "text": " whatever, and yeah, I think that's pretty much it. 
They finally mentioned that this"}, {"start": 2023.72, "end": 2028.5600000000002, "text": " can be used in many cool applications like MR, like mixed reality and stuff, so our method"}, {"start": 2028.5600000000002, "end": 2034.48, "text": " aims at use cases involving creative self-expression and augmented reality, and that's something"}, {"start": 2034.48, "end": 2039.16, "text": " I'm really passionate about since I've been working on Microsoft HoloLens in the past,"}, {"start": 2039.16, "end": 2043.28, "text": " and they mentioned here, so photorealistic translation, so once you have some augmented"}, {"start": 2043.28, "end": 2047.3600000000001, "text": " reality glasses, you can imagine how you're looking at some language, like some foreign"}, {"start": 2047.3600000000001, "end": 2052.58, "text": " language, and all of a sudden, it's automatically translated and perfectly blended in as if"}, {"start": 2052.58, "end": 2059.44, "text": " it was real, and you can just kind of bridge this communication gap using that. They say"}, {"start": 2059.44, "end": 2064.6800000000003, "text": " here, our method can be used for data generation and augmentation for training future OCR systems,"}, {"start": 2064.68, "end": 2069.72, "text": " so that one is pretty obvious, so you can, obviously, when you can do this really good,"}, {"start": 2069.72, "end": 2076.12, "text": " you can use the data as a, basically, a new data point in a data set to train some OCR"}, {"start": 2076.12, "end": 2081.96, "text": " models by doing this, or just to augment an existing data set. However, they say here,"}, {"start": 2081.96, "end": 2086.68, "text": " we are aware that, like other technologies, ours can be misused, possibly the same as"}, {"start": 2086.68, "end": 2092.64, "text": " deepfake faces can be used for misinformation. So, deepfakes, that's a real concern for the"}, {"start": 2092.64, "end": 2097.2, "text": " society. I guess by just kind of open sourcing the data set and the research they're doing,"}, {"start": 2097.2, "end": 2102.92, "text": " they're kind of helping it, but at the end, it's just a game of cat and mouse. So, yeah,"}, {"start": 2102.92, "end": 2107.3599999999997, "text": " researchers are developing better tools, but by open sourcing this, you're also providing"}, {"start": 2107.3599999999997, "end": 2111.16, "text": " better tools for those who want to exploit it, so I don't think there is a great strategy"}, {"start": 2111.16, "end": 2118.08, "text": " to kind of cope with these deepfakes in general. Yeah, as a conclusion, they get some really"}, {"start": 2118.08, "end": 2122.7999999999997, "text": " cool results. They do have some failure cases. They do have problems, they still have problems"}, {"start": 2122.7999999999997, "end": 2129.52, "text": " with blur, but like all in all, it looks like great work, and yeah, maybe one of the things"}, {"start": 2129.52, "end": 2135.36, "text": " that focus on would be to try and reduce the number of components in the system. I guess"}, {"start": 2135.36, "end": 2140.44, "text": " it's pretty cumbersome to, if you want to change anything, you'd have to, you'd probably"}, {"start": 2140.44, "end": 2143.66, "text": " have, because all of these are coupled, you'd have to change a bunch of different values"}, {"start": 2143.66, "end": 2148.7599999999998, "text": " in order to retrain this thing. 
So I guess that would be a future research direction,"}, {"start": 2148.7599999999998, "end": 2154.04, "text": " and having said that, hopefully you liked this video. If you did, subscribe, share it"}, {"start": 2154.04, "end": 2174.44, "text": " out and until next time, bye bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=Z3XtWuuTHz4
Chip Placement with Deep Reinforcement Learning | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I do a deep dive of the "Chip Placement with Deep Reinforcement Learning" paper. They devised a novel RL approach for an efficient and highly performant chip placement routine. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2004.10746 ✅ Tesla's custom chip: https://www.youtube.com/watch?v=Ucp0TTmvqOE&ab_channel=Tesla ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Paper keypoints 01:21 Why do we need faster chip design? 05:30 High-level pipeline explained 09:20 RL agent explained 14:15 How are computers made 21:00 Assumptions 22:35 Wirelength and macro heuristic 24:55 Macro placement masking 27:50 Edge Graph Neural Network details 29:50 Results 32:25 Visualizations 34:25 Further research ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #hardware #reinforcementlearning #ai
What's up? In this video I'm covering the paper "Chip Placement with Deep Reinforcement Learning". Yesterday I saw a tweet from Anna Goldie announcing that the Nature version of this work had just been published, and I thought I'd cover it because it's a really interesting piece of work. It combines electronics, which is something I have some background in, with machine learning, more concretely reinforcement learning and graph ML, so it's an exciting combination and super impactful work. What they achieved is they managed to train an RL agent, combined with a graph neural network that learns good representations, to do chip placement much faster than was previously possible. Human experts usually need multiple weeks to do this, and now it takes less than six hours, and it achieves even superhuman performance when you look at metrics like area. We want to minimize the area because then we can cram more transistors, and therefore more compute, into the chip, and they also want to minimize other things like power consumption, all under certain constraints like density and routing congestion — you don't want too many wires overlapping over a certain region of the chip surface. That's the high-level overview; now let's dig into the paper and see what they did. "In this work we present a learning-based approach to chip placement, one of the most complex and time-consuming stages of the chip design process. Unlike prior methods, our approach has the ability to learn from past experience and improve over time." That's the main point: prior to this work there were many other approaches, mainly optimization-based ones like simulated annealing and various heuristics, and a bunch of theory was developed, but none of those were deep learning algorithms. They say the objective is to minimize PPA: power consumption, performance and area. By performance they basically mean timing — you want to make sure that the logic gates and flip-flops in your circuit receive their signals on time, so there are timing constraints you have to obey, and we'll see more details a bit later. And area again: the more transistors you can cram into the chip, the more compute you get, and that's better. "We show that in under six hours our method can generate placements that are superhuman or comparable on modern accelerator netlists", where a netlist is just a fancy way of saying a graph that tells you how the components on the chip are connected together, "whereas existing baselines require human experts in the loop and take several weeks." That's the main point they're driving at: they automate this, they make it superhuman, and they make it faster, which is huge. So why do we care about this? They say it here: the world is moving towards specialized hardware to meet AI's exponentially growing demand for compute. However, today's chips take years to design, leaving us with the speculative task of optimizing them for the machine learning models of two to five years from now.
And the logic is pretty simple: if you can cut down the time you need to loop over different designs, you can experiment much faster. It's the same thing as in deep learning, where you want your training runs to be shorter. The same thing happened with ImageNet training, for example: we used to need multiple weeks, and then a couple of years ago there was a paper where they trained an ImageNet classifier in something like half an hour, and it's probably even better today. When you shorten those loops you can iterate much faster, and that's really precious. One nice example of these custom chips is Tesla — that's the first thing that popped into my mind. They have a custom-designed computer vision chip in their vehicles, and it's not just computer vision: they also use different sensors, not just RGB cameras, they use radars etc. The only thing they're not using is lidar, because Elon Musk, as you probably know, rants about it all the time and says we only need RGB cameras. But that's a topic for another video. Okay, so the objective is to place a netlist graph of macros (for example static RAM — you probably know of DRAM, dynamic random-access memory; SRAM is static, so it doesn't need refreshing, it's faster but also more expensive and takes more area per bit, so that's a trade-off) and standard cells (logic gates such as NANDs, NORs and XORs — you've probably heard of XOR if you've done anything related to cryptography) onto a chip canvas, such that power, performance and area are optimized while adhering to constraints on placement density and routing congestion. For the time being you can keep an intuitive understanding of these, which I guess you already have: basically you don't want too many wires going over a certain grid cell of your canvas. That's the high-level overview. Now let's dig into the architecture, and let me show you the high-level pipeline. They pose this as a reinforcement learning problem. The state is modeled like this: you have the chip canvas, which is initially empty, and as I already mentioned you can treat your chip as a hypergraph. Let's draw macros as squares — a macro can be an SRAM block, a memory controller, whatever — and standard cells as circles, all connected somehow. It's a hypergraph, which means a single edge can connect more than two nodes; that's the definition of a hypergraph. Now imagine you have hundreds of macros and millions of standard cells, and you want to place them onto this fixed-size chip canvas. What you do is, using this graph neural network (we'll see the details a bit later), you embed the connectivity information, and that becomes your state. Then the agent takes that state and outputs an action, and an action is just: take a macro.
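To make that sequential decision process concrete, here is a minimal Python sketch of the loop just described — macros placed one per step onto a gridded canvas, with the reward only arriving after everything is placed. This is my own illustration, not the authors' code; the grid size, the random stand-in policy and all names are made up.

```python
# Minimal sketch of the sequential macro-placement MDP: macros are placed one
# at a time onto a gridded canvas, standard cells would later be placed by a
# force-directed heuristic, and the reward only arrives at the end.

import random

GRID = 32                      # canvas discretized into GRID x GRID cells

def place_macros(macros, policy):
    placement = {}             # macro id -> (row, col)
    occupied = set()
    for m in macros:           # one macro per RL step
        state = (tuple(sorted(placement.items())), m)
        row, col = policy(state, occupied)       # action = a grid cell
        placement[m] = (row, col)
        occupied.add((row, col))
    return placement

def random_policy(state, occupied):
    # stand-in for the learned policy network
    free = [(r, c) for r in range(GRID) for c in range(GRID) if (r, c) not in occupied]
    return random.choice(free)

macros = [f"macro_{i}" for i in range(10)]
final_placement = place_macros(macros, random_policy)
# After all macros are down, a force-directed method would place the
# standard-cell clusters, and only then is the reward computed.
print(final_placement)
```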
We'll be focusing just on placing the macros; as we'll soon see, the standard cells are handled by a force-directed method, a previously devised heuristic, so they're piggybacking on existing methods for that part. So they place a macro, you get a new state, and then you output a new action that places another macro onto the chip canvas, and you do so until you've placed all of the macros. From that point on, the force-directed method places the standard cells: you take the clusters of standard cells and place them onto the chip canvas. And finally you get the reward. As you can see immediately, the problem is pretty hard because you get zero reward all the way through and only at the end do you receive the actual reward, which means the credit assignment problem is going to be tough. The horizon is not that big — they won't have more than, say, a thousand macros — but it's still a tough problem. Let me decrypt this reward for you: it's the half-perimeter wirelength (HPWL). We'll see exactly what that is, but roughly it accounts for the total length of wire on your chip, which correlates positively with power consumption and area, and we'll see those a bit later. OK, that's the high-level overview. As they mention, the true reward is the output of a commercial electronic design automation (EDA) tool, including wirelength, routing congestion, density, power, timing and area. However, RL policies require hundreds of thousands of examples to learn effectively, as you already know, so it is critical that the reward function be fast to evaluate, ideally running in a few milliseconds, and to be effective these approximate reward functions must also be positively correlated with the true reward. The whole point is that they can evaluate the true reward with the EDA tools, but that's super slow, so they decided to use a proxy that (a) is fast to compute and (b) positively correlates with the things they care about — the PPA stuff we already mentioned. They treat congestion as a soft constraint (lower congestion improves the reward function) and density as a hard constraint, masking out actions — grid cells to place nodes onto — whose density exceeds the target density. To understand this part we should look at how the policy network is structured; let me scroll down here, skip some stuff, and I'll return later. So this is how the model looks. Basically we have two portions here. The first part is the representation learning part: something called an edge graph neural network, a graph neural network devised specifically for this work. Then we have your basic value network and policy network, so this is an actor-critic RL agent; they're using PPO, the proximal policy optimization algorithm published by OpenAI a couple of years ago, so that part is pretty standard. Now, this part is pretty interesting and novel: let's see how they manage to parse the netlist, the hypergraph of your components, into a representation that contains information useful for figuring out where to place the next component.
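Here is a small sketch of the two mechanisms mentioned above — the half-perimeter-wirelength proxy and the density-based action mask. It is my own illustration under the assumption of a simple gridded canvas; names, the grid size and the reward formula in the final comment are illustrative, not the paper's exact implementation.

```python
# HPWL proxy reward and density-based action masking, illustrated.

GRID = 32

def hpwl(nets, positions):
    """Half-perimeter wirelength: for every net, take the bounding box of the
    pins it connects and sum width + height over all nets."""
    total = 0.0
    for net in nets:                                  # net = list of node ids
        xs = [positions[n][0] for n in net]
        ys = [positions[n][1] for n in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def action_mask(density, target_density):
    """Hard constraint: grid cells whose density already exceeds the target
    are masked out, so the policy can never place a macro there."""
    return [[density[r][c] <= target_density for c in range(GRID)]
            for r in range(GRID)]

# The episode-level reward would then look roughly like
#   reward = -hpwl(nets, final_positions) - lambda_congestion * congestion
# i.e. zero at every intermediate step and this value only at the very end.
```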
The way they train this part of the pipeline is with supervised learning, and the goal is to learn to predict the reward. They take this part in isolation and train it as follows. Again, you have this hypergraph: macros, standard cells, somehow connected — the exact topology is not that important. If you're familiar with graph ML (if you're not, I made a whole playlist, I'll link it somewhere here, you should check it out), this is pretty simple, and I'll explain it on a high level. Every single node in the graph has a feature vector associated with it, and in this paper the edges also have feature vectors associated with them. The graph neural network takes in the connectivity information and all of those feature vectors and processes them; we'll see the exact equations a bit later, but basically you combine them, transform them with fully connected layers, and aggregate over the neighbours. For example, one node will take the feature vectors of its neighbours and combine them with its own, so it uses the relational inductive bias of connectivity, accumulating those feature vectors to create a new representation, maybe of a smaller dimension — if the input vector was v, call the output v'. As the output of the GNN you get edge embeddings, a matrix of dimension E × d_E, where E is the number of edges in the graph and d_E is the edge feature dimension. Once you have that matrix, they do a reduce-mean operation — an element-wise mean over all of those edge feature vectors — and get a single feature vector, the graph embedding, which roughly captures the information about the whole graph. The second thing they do is take the current macro ID, the macro you currently want to place onto the chip canvas: there's a table of dimension N × d_V, where N is the number of nodes and d_V is the node feature dimension, and they just take the single d_V-dimensional vector for that macro. Finally, they have some netlist metadata, like the total wirelength on the chip, which they also embed; they concatenate all of these, feed the result through a fully connected layer, and that's it. As I already told you, they train this thing to predict the reward of a placement — I'll get into the details a bit later, but that's it in a nutshell.
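To give a feel for the kind of edge-centric message passing being described, here is a rough sketch of one layer. This is my paraphrase, not the paper's exact equations: the weight matrices, the tanh nonlinearity and the mean aggregation are assumptions chosen only to illustrate updating edge features from their endpoint nodes and reducing them to a graph embedding.

```python
import numpy as np

def edge_gnn_layer(node_feats, edge_feats, edges, W_e, W_v):
    """edges: list of (u, v) node-index pairs; W_e, W_v: learned weight matrices
    with compatible shapes (W_e: 2*d_v + d_e -> d, W_v: d_v -> d)."""
    # update each edge from its two endpoint nodes plus its own features
    new_edge = np.stack([
        np.tanh(np.concatenate([node_feats[u], node_feats[v], e]) @ W_e)
        for (u, v), e in zip(edges, edge_feats)
    ])
    # update each node as the mean of its own transformed features
    # and its incident (updated) edges
    new_node = node_feats @ W_v
    counts = np.ones(len(node_feats))        # count the node's own term
    for (u, v), e in zip(edges, new_edge):
        new_node[u] += e; new_node[v] += e
        counts[u] += 1; counts[v] += 1
    new_node = new_node / counts[:, None]
    graph_embedding = new_edge.mean(axis=0)  # the "reduce mean" step
    return new_node, new_edge, graph_embedding

# example shapes: 4 nodes with 8-dim features, 3 edges with 4-dim features
rng = np.random.default_rng(0)
nodes = rng.normal(size=(4, 8)); edge_f = rng.normal(size=(3, 4))
edge_list = [(0, 1), (1, 2), (2, 3)]
W_e = rng.normal(size=(8 + 8 + 4, 16)); W_v = rng.normal(size=(8, 16))
n2, e2, g = edge_gnn_layer(nodes, edge_f, edge_list, W_e, W_v)
```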
So once you train that, you use the representations it has learned to kick-start the policy and value networks, which are trained, as I said, using PPO. So that's it, and now I'm going to slowly start digging into the details, and later we'll see the experiments and the results they got. OK, before I do that, I think it will be quite useful for you to get a little electronics 101, and hopefully you'll have an epiphany moment. Just like software, hardware is built from a bunch of abstraction layers. In software, at the lowest layer you have firmware, then something called the hardware abstraction layer, then drivers, operating systems, compilers and all of the systems that translate higher-level languages into something your computer can parse and understand, and finally the application layer. So obviously a bunch of abstractions, and the same thing goes for hardware. At a really high level you have a CPU, maybe a direct memory access (DMA) controller, which is just a device to which the CPU can delegate certain memory-intensive operations, and a memory block; let's say both the CPU and the DMA controller are connected to the memory. If you zoom into any of these components, say the DMA controller, you end up with two types of circuits: combinational logic, and circuits called sequential logic which hold the memory in the system, so they are stateful. So what's the main difference? Take a look at this one, it's called an inverter: if you feed it zero it outputs one, and if you feed it one it outputs the logic value zero. The thing with combinational logic gates is that as soon as you change the input, say from zero to one, the output goes from one to zero really quickly, maybe in 20 picoseconds or something. Now, the trick with flip-flops, these memory circuits, is different. This is a D flip-flop, one of the simplest flip-flops out there. Basically, you can change the input here as much as you want, from zero to one and back, and nothing changes at the output; it keeps holding its value until the clock arrives, the uptick or the downtick of the clock. Hopefully you know what a clock signal is: it's what drives your computer, a digital signal that goes from zero to one with a certain frequency, and on each of those upticks or downticks something happens. On each tick the sequential logic gates, these flip-flops, change their state. So if you hold a one at the input and the uptick comes, at that point the one propagates to the output net; and if you then change the input to zero and the downtick comes, for example, the output goes to zero. So these elements keep state, they have the memory; that's the main trick. Having said that, let me go even deeper. Let's take one specific gate, NAND. What NAND does is the following. Yeah, you've seen these if you've ever done any programming, so you know about the NAND operator; this is how you would write it down in Python.
And basically NAND is the opposite of an AND gate; it's just NOT AND. As you can see from the truth table, an AND gate gives you one only when both inputs are one, and zero for all other input combinations; NAND is just the opposite of that, an inverted AND gate. Now, that is the abstracted view of the gate. If you dig a bit deeper and take a particular implementation, the CMOS implementation of the NAND gate, which stands for complementary metal-oxide-semiconductor, the name doesn't matter much, there are different ways you can implement these abstract logic gates. It's the same as with an image classifier: you can implement it using a neural network, an SVM, even k-nearest neighbors, whatever your favorite classifier is. The same thing goes here: there are many different ways to implement the very same thing, the NAND gate. Here we have one specific logic family; you could also use TTL, there are many different logic families you can implement it in. Now let me show you why this works. Let's take one and one as the input. When I say one, you have to implement that one somehow on your system. Say we have a supply voltage of, I don't know, 0.9 volts; then on this system a logic one is defined as something close to 0.9 volts and a logic zero as something close to zero volts. That's how you implement zero and one. So if I feed a one here, that's 0.9 volts, and a one here as well; and this small circle means that input is inverted, so we are effectively applying a zero to this gate and a zero to this gate. What happens is that because you brought a high voltage here, you turn on this transistor, so the current can flow here and here, and you block these two because you brought zero voltage to them. So, as you can see, you're making a direct connection from the output to this node here; let's say this is ground, so zero volts, which means your output is zero volts, i.e. logic zero. So for one and one you get zero. The same reasoning goes for zero, zero: if you bring zero and zero, you turn off these two and turn on these two, so you connect the supply voltage directly to the output, which is 0.9 volts, which translates into logic one. And that's how it works; that's how your computers basically work. Going even deeper, people have to lay these out physically; you can see here how you place the metal wires, and this is how you implement a NAND gate in silicon. Finally you have the last step, production: using photolithography, with light waves, you engrave these structures into silicon, and this is how multiple NAND gates look when actually manufactured. I had some prior experience myself; I studied electronics and computer science, and this was one of the radio-frequency circuits I designed. It takes your signal, maybe your audio, and modulates it onto a higher frequency so you can then send it out like a radio signal. So that was a quick tour of how computers work.
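As an aside, here is a tiny behavioural sketch in Python of the two circuit families discussed above: a NAND gate (combinational, the output follows the inputs immediately) versus a D flip-flop (sequential, the output only changes on a clock edge). This is purely illustrative and says nothing about the transistor-level CMOS implementation; the class and method names are mine.

```python
# Toy behavioural models of combinational vs. sequential logic.

def nand(a: int, b: int) -> int:
    """Combinational: the output tracks the inputs with no stored state."""
    return 0 if (a and b) else 1

class DFlipFlop:
    """Sequential: the output only updates on the rising edge of the clock."""
    def __init__(self):
        self.q = 0            # stored state
        self._prev_clk = 0

    def tick(self, d: int, clk: int) -> int:
        if clk == 1 and self._prev_clk == 0:   # rising edge detected
            self.q = d                         # capture the input
        self._prev_clk = clk
        return self.q                          # otherwise the old value is held

# NAND truth table: (0,0)->1, (0,1)->1, (1,0)->1, (1,1)->0
assert [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
```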
You have these logic gates and these sequential gates, and everything is made out of them. So now you know how those macros are built, and if you found this useful please comment down below so that I know for future videos whether this kind of thing is interesting. Okay, let's see some of the simplifications they make to keep this computationally tractable, because the problem is huge, as you can imagine, with millions of standard cells and hundreds of macros. First things first, they group the millions of standard cells into a few thousand clusters using the hMETIS tool; that way they make it easier for the force-directed method to do the standard cell placement. Second, they discretize the grid, the chip canvas, into a few thousand grid cells. Keeping the canvas continuous, or using a really fine resolution, would be super hard and probably not meaningful, so what they do is divide it into, I think it was, 128 by 128 grid cells, so they have something like this, and macros can only be placed on a specific grid cell. That's the second simplification. The third one is that when calculating wire length they make the simplifying assumption that the wire starts from the standard cell, blah blah blah; that's not important. And finally, to calculate the routing congestion cost they only consider the average congestion of the top 10% most congested grid cells. So again, a heuristic which makes the congestion a bit more feasible to calculate, and congestion is used later in the reward function. The half-perimeter wire length (HPWL) is what they use as the proxy, as I mentioned, for training the RL agent, and it correlates positively, as they show, with area and the other metrics of interest. Let me briefly explain how it works; the formula is pretty simple, as you can see. Imagine you have a wire like this. For this particular wire the value will equal this thing here, the height, plus this, the width. That's the half-perimeter, right; the full perimeter would be two times that. You can see it in the formula: max over the x coordinates minus min over the x coordinates, plus the same thing for the y coordinates. And if you had something a bit more complex, a wire that goes like this, maybe like this, and finally ends up like this, they'd again do the following: find the max x value for that wire, subtract the min x, and do the same thing for y. So essentially you find the bounding box of the net and take half of its perimeter, and then you sum those values over all the wires; I'll show a small code sketch of that computation in a moment. Okay, that's that part. One more thing I forgot to mention: this work was actually used to design the newest TPUs at Google, version five if I'm not wrong, so you can immediately see this is already in production. There are lots of details, but hopefully you already have the high-level picture. Let's see some additional details. To select the order in which the macros are placed, they sort macros by descending size and break ties using a topological sort.
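Here is that half-perimeter wire length sketch, matching the bounding-box description above. The `nets` data structure (a list of nets, each a list of (x, y) pin positions) is an assumption made for illustration.

```python
# Illustrative HPWL computation: half the perimeter of each net's bounding box,
# summed over all nets.

def hpwl(nets):
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        # width + height of the bounding box enclosing the net's pins
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example: one L-shaped net spanning a 3x2 box contributes 3 + 2 = 5.
print(hpwl([[(0, 0), (3, 0), (3, 2)]]))  # -> 5.0
```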
So that's an additional heuristic they use in order to make the action space smaller and the problem computationally tractable. They just sort the macros, those blocks in your chip, by size, and then the policy takes the next one in that order and only needs to decide where to place it; it doesn't need to learn which one to pick at each step. Okay. I forgot to mention one detail here; hopefully you already understood it. The policy output is 128 by 128, those grid cells I mentioned previously, so it outputs a probability distribution across all of the grid cells, and wherever the probability is highest, you just take the argmax, and that's where you place the macro. It would be really simple, except that additionally you need to mask certain areas. For example, if on your chip canvas you've already placed a macro here, say this part, and this part, then they have this density mask which is used to zero out certain, let's call them pixels. So if the policy network now outputs, I don't know, maybe 0.02 here, and that part is already occupied, the mask will just zero it out, and you redistribute the probability across the other valid cells; I'll sketch this in code in a moment. So that's that part, and that's what they mention here: to meet this constraint, during each RL step they calculate the current density mask, a binary M times N matrix that represents the grid cells onto which they can place the center of the current node without violating the density threshold criterion. Before choosing an action from the policy network output, they first take the dot product of the mask and the policy network output and then take the argmax over feasible locations. That's the process I just explained. They also enable blockage-aware placement (such as clock straps) by setting the density function of the blocked areas to one. A blockage just means you explicitly say: I don't want anything placed around this macro, for example, or around this wire; you encode that into the mask, and then macros can't be placed on top of that location. Okay, that's just a minor detail. Finally, the cost function combines the wire length, I already explained the half-perimeter wire length calculation, with the congestion and the density constraint; the density part is included via a Lagrangian-style relaxation, so you end up with a weighted sum of wire length, congestion and density. As for the training objective, they sum over all the netlists in the dataset and maximize the expected cumulative reward. That was the reward part. Let's now jump into explaining the graph neural network. Their intuition was that a policy network architecture capable of transferring placement optimization across chips should be able to encode the state associated with a new, unseen chip into a meaningful signal at inference time. They therefore propose training a neural network architecture capable of predicting reward on new netlists, with the ultimate goal of using this architecture as the encoder layer of the policy network.
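Here is that sketch of the masked action selection. The shapes and the way the mask is built here are placeholders; in the actual system the grid is 128 by 128 and the mask comes from the density check described above.

```python
import numpy as np

# Illustrative masked action selection: zero out infeasible grid cells, renormalise,
# then take the argmax over the remaining feasible locations.

def select_grid_cell(policy_probs, density_mask):
    # policy_probs: (M, N) distribution over grid cells from the policy network
    # density_mask: (M, N) binary matrix, 1 = feasible cell, 0 = would violate density
    masked = policy_probs * density_mask
    masked = masked / masked.sum()                # redistribute over valid cells
    flat_idx = np.argmax(masked)                  # greedy choice over feasible locations
    return np.unravel_index(flat_idx, masked.shape)

probs = np.random.rand(128, 128)
probs /= probs.sum()
mask = np.ones((128, 128)); mask[:64, :] = 0      # pretend the top half is occupied
print(select_grid_cell(probs, mask))              # always lands in the bottom half
```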
So that was the encoder part I explained earlier, and here they say they created a dataset of 10,000 chip placements, where the input is the state associated with a given placement and the label is the reward for that placement. So basically they obtained the ground-truth rewards using, for example, those EDA tools, and then they had a simple supervised regression task for this graph neural network, the edge-based graph neural network. The computation itself is fairly simple. Let's say you have two components: a macro here connected to a macro here, each with a feature vector, and the edge between them has one as well. What they do is process the node vectors with a simple fully connected layer, concatenate them, and feed the result through another fully connected layer. I'm not quite sure what this part means, because it doesn't seem to make sense to concatenate the learnable weights with the output here, but basically you somehow combine them. Then, to calculate a node's feature, if a node has multiple edges going into it, you take all of those edge features, this edge and this edge, aggregate them, and take the mean across those feature vectors. So the computation is pretty simple; again, check out my graph ML videos and this will be pretty self-explanatory. OK. This is just your standard PPO objective function, and I won't go into details there. Let's see the final results. On the x axis we have different TPU blocks, blocks one through four, plus this open-source RISC-V CPU chip, and on the y axis the placement cost; you want the cost to be as low as possible. Zero-shot basically means they don't fine-tune: they take the whole system you saw, the policy network, value network and graph neural network, train it on, say, 20 different chips, and then apply it to a new chip, like TPU block one, without fine-tuning on that block. As you can see, zero-shot is already pretty decent; if you fine-tune for two hours you get even better performance, and after 12 hours better still. But the point is, if they train it from scratch, without the prior knowledge from those other 20 chips, then even after 24 hours it still does poorly; it's still much, much worse than the model fine-tuned for 12 hours. So this means it generalizes to new, never-before-seen chips; that's the benefit of a learning-based system. OK. Here they again show training from scratch versus fine-tuning: as you can see, the pre-trained model reaches a really low loss after only a couple of hours, whereas training from scratch needs much more time. This is pretty obvious; bottom line, you want to pre-train these RL agents and then just fine-tune them onto the new chip you're trying to place. That's the whole point there. Here they compare datasets. I mentioned the largest one, the one with 20 blocks, but they also train on a medium one, which is a subset of it, and on a small one, which is a subset of the medium one.
So again you can see that more data gives you lower placement cost. And looking at this curve here, what you see is the RL cost on an unseen chip: with the big dataset you start overfitting much more slowly and you get better performance than with the medium or small datasets. So it's both harder to overfit and you get lower cost; again, more data, nothing new there. These are the placements they get. Here you can see that the purplish blocks are macros, and this blob in the middle is the cluster of those standard cells. Here is the zero-shot placement, and here is the result after fine-tuning; you can see there is much less congestion. These red and orange colors indicate a large overlap of wires in those areas; in the fine-tuned result the placement is much better and you can also see a much more regular structure compared to the zero-shot one. But the zero-shot result is still fairly good for something operating on a chip it has never seen before. Here are some results where they blurred the images because of confidentiality, but you can still see the main thing: when humans design chips, there is a lot of visible structure, you can see regular shapes, whereas the RL agent just outputs this blob, this donut-shaped thing, and yet it outperforms the human design on many different metrics, and that's cool. They mention the wire length and the congestion, and everything is lower with the RL-agent approach. Okay, the last table, and I think we're done here, is this one. They compare against RePlAce, a method from 2019 that is one of the state-of-the-art placement methods, again looking at different metrics: timing, area, power, wire length, congestion. Their approach outperforms RePlAce, which not only has higher cost, it also often produces invalid placements, because either the congestion was too high or some placement constraint was violated, so it's harder to get it to work at all. As a summary, they mention that, for example, the process of standard cell partitioning, row and column selection, as well as selecting the order in which the macros are placed, can all be further optimized; in addition, they would also benefit from a more optimized approach to standard cell placement. What I see here is potential future work. The first thing is the heuristic I just mentioned, placing the biggest macro first, then the next biggest, and so on; they argue that this ordering could also be learned, and that they could squeeze out more performance by doing so. The second thing is that instead of using the force-directed method, standard cell placement could also be folded into the RL framework. And yeah, that's pretty much it. Great results, super impactful work: AI is building chips which make AI more powerful, so we have a pretty nice loop here, and it will be exciting to see what else is to come. So if you found this video useful, subscribe, share it out, and until next time, bye bye.
[{"start": 0.0, "end": 4.0, "text": " What's up? In this video I'm covering this paper called chip placement with deep reinforcement learning."}, {"start": 4.88, "end": 11.120000000000001, "text": " Basically, yesterday I saw this tweet from Anna Goldie where they've just published the nature version of this work"}, {"start": 11.36, "end": 14.72, "text": " and I thought covering it because it's a really interesting piece of work. It combines both"}, {"start": 15.200000000000001, "end": 19.12, "text": " like the electronics, which is something I have some background in as well as machine learning"}, {"start": 19.68, "end": 25.92, "text": " more concretely like reinforcement learning and graph ML. So it's exciting combination and super super impactful work."}, {"start": 25.92, "end": 30.720000000000002, "text": " So I thought covering it and basically what they achieved is they managed to train an RL agent"}, {"start": 31.28, "end": 34.88, "text": " combined with some graph ML for learning like nice representations"}, {"start": 35.28, "end": 40.0, "text": " in order to do displacement much faster than what previously was necessary."}, {"start": 40.0, "end": 45.84, "text": " So like they needed like humans usually need like multiple weeks to do this and now it's like less than six hours"}, {"start": 45.92, "end": 48.8, "text": " and it's achieving even superhuman performance looking just at the"}, {"start": 49.28, "end": 52.0, "text": " metrics like the area. So basically, yeah,"}, {"start": 52.0, "end": 59.92, "text": " we want to minimize the area because we can cram more transistors inside of it and more compute us and also they want to minimize"}, {"start": 60.8, "end": 63.84, "text": " other things like power consumption as well and"}, {"start": 64.16, "end": 70.16, "text": " under certain constraints like density congestion like you don't want to have too much too many wires overlapping on a single"}, {"start": 70.32, "end": 76.32, "text": " like on a certain piece of like on a certain surface of the chip. So that's a high level overview."}, {"start": 76.32, "end": 80.08, "text": " Now let me let's dig into the paper and yeah, see what they did."}, {"start": 80.08, "end": 83.28, "text": " So in this work we present a learning based"}, {"start": 84.64, "end": 89.52, "text": " a learning based approach to chip placement one of the most complex and time consuming"}, {"start": 90.16, "end": 97.68, "text": " stages of the chip design process. Unlike prior methods our approach has the ability to learn from past experience and improve over time."}, {"start": 98.24, "end": 101.03999999999999, "text": " So yeah, that's the main thing basically"}, {"start": 101.68, "end": 103.03999999999999, "text": " obviously"}, {"start": 103.03999999999999, "end": 105.28, "text": " like prior to this work there were many other approaches"}, {"start": 105.28, "end": 112.48, "text": " mainly some optimization based approaches like simulated annealing and some more like heuristics and yeah"}, {"start": 112.56, "end": 116.72, "text": " a bunch of different theory was developed, but none of those were actually deep learning algorithms."}, {"start": 117.52, "end": 123.04, "text": " And they say here our objective is to minimize the PPA. 
So as I mentioned so the power consumption the performance"}, {"start": 123.36, "end": 129.36, "text": " whereas they mean basically timing and we'll see what that exactly is like you basically want to make sure that certain"}, {"start": 129.76, "end": 131.36, "text": " like"}, {"start": 131.36, "end": 134.08, "text": " logic gates in your like flip-flops in your in your"}, {"start": 134.08, "end": 135.68, "text": " like circuit"}, {"start": 135.68, "end": 139.68, "text": " get certain signals on time. So there are these like timing"}, {"start": 140.48000000000002, "end": 143.68, "text": " constraints you have to obey and we'll see more details a bit later."}, {"start": 143.68, "end": 149.60000000000002, "text": " And obviously the area again like more transistors you can cram into the chip the more compute you get and that's better."}, {"start": 149.92000000000002, "end": 155.12, "text": " So and we showed that in under six hours our method can generate placements that are super human"}, {"start": 155.68, "end": 162.08, "text": " or comparable on modern accelerator netlists where netlist is just a fancy way of saying like a graph that basically"}, {"start": 162.08, "end": 165.84, "text": " tells you how each of the components on the chip are are connected together."}, {"start": 166.64000000000001, "end": 174.16000000000003, "text": " So whereas existing baselines require human experts in the loop and take several weeks. So that's the the main like the the main like"}, {"start": 175.44, "end": 181.60000000000002, "text": " point they're driving here. They automate this they make it even superhuman and they make it faster, which is huge."}, {"start": 182.72000000000003, "end": 190.72000000000003, "text": " And so why do we care about this? Like basically well they say it here the world is moving towards specialized hardware to meet AI's expectations"}, {"start": 190.72, "end": 193.6, "text": " and to meet AI's exponentially growing demand for compute."}, {"start": 193.84, "end": 200.56, "text": " However today today's chip take years to design leaving us with a speculative task of optimizing them for the machine learning models"}, {"start": 200.96, "end": 205.44, "text": " of two to five years from now. And the logic is pretty simple if you can cut down the time"}, {"start": 205.84, "end": 210.16, "text": " you need to to loop over different designs you can experiment much faster."}, {"start": 210.24, "end": 214.32, "text": " And it's the same thing as in deep learning where you want to make sure that your training"}, {"start": 214.88, "end": 219.76, "text": " length is like yeah shorter obviously like the same thing that happened with image net training for example"}, {"start": 219.76, "end": 225.51999999999998, "text": " there we needed like multiple weeks and then finally there was a paper like a couple of years ago where they've"}, {"start": 225.92, "end": 230.48, "text": " trained like image net like a vision model classifier in like half an hour or something."}, {"start": 231.12, "end": 237.92, "text": " It's probably even better today. So basically when you short those you can iterate much faster obviously and that's that's really really precious."}, {"start": 239.76, "end": 246.39999999999998, "text": " So one one nice example of these custom chips is maybe Tesla. That's the first thing that popped onto my mind. 
They have a"}, {"start": 246.4, "end": 252.56, "text": " customly designed computer vision chip set on their vehicles and it's not just computer vision."}, {"start": 252.56, "end": 258.56, "text": " They're also using like different sensors not just the RGB cameras. They're also using radars etc."}, {"start": 258.56, "end": 262.56, "text": " So the only thing they're not using and that's slider because Elon Musk as you probably know"}, {"start": 262.56, "end": 266.56, "text": " rants about those all the time and says that we only need to use RGB cameras."}, {"start": 267.36, "end": 274.56, "text": " But that's a topic for another for another video. Okay so the objective is to place a netlist"}, {"start": 274.56, "end": 282.56, "text": " graph of macros for example static RAM which is basically you probably know of DRAM. That's Dynamic Random Access Memory."}, {"start": 282.56, "end": 290.56, "text": " These are S-RAMs are usually non-volatile so they are even more expensive and they are they are more performant than DRAMs."}, {"start": 290.56, "end": 298.56, "text": " But they do take more area so that's a trade-off. Standard cells logic gates such as NANDs, NORs and XORs."}, {"start": 298.56, "end": 312.56, "text": " You probably heard of XOR if you did did anything related to cryptography. Onto a chip canvas such that power performance and area are optimized while adhering to constraints on placement density and routing congestion."}, {"start": 312.56, "end": 326.56, "text": " Yeah for the time being like you can just have an intuitive understanding of these and I guess you already have like basically you don't want to have multiple wires too many wires going over a certain cross section of the certain grid cell on your like canvas."}, {"start": 326.56, "end": 334.56, "text": " That's the high level overview. Now let's dig into the architecture. Let me show you the high level like pipeline how this looks like."}, {"start": 334.56, "end": 341.56, "text": " So basically what they do is the following. So they treat this they pose this as a reinforcement learning problem."}, {"start": 341.56, "end": 347.56, "text": " And as you can see here the state is modeled like you have this chip canvas so currently you don't have anything on it."}, {"start": 347.56, "end": 358.56, "text": " And basically as I already mentioned you can treat your chip as a hypergraph and that means the following. So let's assume I'll just draw macros as these squares."}, {"start": 358.56, "end": 364.56, "text": " So we'll have a couple of macros and macros can be as you saw like SRAM or it can be memory control or whatever."}, {"start": 364.56, "end": 369.56, "text": " So these are somehow connected like this and then you have these standard cells. I'll just denote them as circles."}, {"start": 369.56, "end": 378.56, "text": " And basically what was it's a hypergraph. So that means basically the single edge can connect to multiple nodes. That's the definition of the hypergraph."}, {"start": 378.56, "end": 390.56, "text": " And now so imagine you have a bunch of these you have hundreds of macros you have millions of standard cells and you want to somehow place them onto this fixed size chip canvas."}, {"start": 390.56, "end": 401.56, "text": " OK. And so what you do is you somehow and we'll see the details a bit later using this graph neural network you kind of embed this information about the connectivity."}, {"start": 401.56, "end": 411.56, "text": " And that will be your state. And then you're doing the following. 
So the agent takes that state and outputs an action and an action is just take a macro."}, {"start": 411.56, "end": 420.56, "text": " So we'll be just focusing on placing these macros as we'll soon see these will be handled by some forced direct method which is some heuristic that was previously devised."}, {"start": 420.56, "end": 435.56, "text": " So they're just kind of piggybacking on certain other methods to do that part. So they place the macro here and after that you get a new state and then you just output the new action where you place another macro onto the chip canvas."}, {"start": 435.56, "end": 444.56, "text": " And you do so until you've placed all of the macros as you can see here. And then from that point on you use this force directed method to place the standard cells."}, {"start": 444.56, "end": 452.56, "text": " So you will just take those cluster clusters of standard cells and will place them somehow onto your chip canvas. And finally you get the reward."}, {"start": 452.56, "end": 461.56, "text": " And as you can see immediately the problem is pretty hard because it's you get zero rewards all the way down and only at the end do you receive the actual reward."}, {"start": 461.56, "end": 468.56, "text": " So it means the credit assignment problem is going to be pretty tough. I mean the horizon is not that big. They won't have more than let's say thousand macros."}, {"start": 468.56, "end": 476.56, "text": " But like still it's a tough problem. And this reward as you can see let me just decrypt this for you. So this is a half perimeter wire length."}, {"start": 476.56, "end": 484.56, "text": " We'll see what that exactly is. But it's a roughly it kind of takes into account the length of wires you have in your chip."}, {"start": 484.56, "end": 491.56, "text": " And that kind of correlates positively with power consumption and with area and we'll see those a bit later."}, {"start": 491.56, "end": 496.56, "text": " OK. So that's a that's a high level overview. And I mentioned it here."}, {"start": 496.56, "end": 506.56, "text": " So our our true reward is the output of a commercial electronic design automation tool including wire length routing congestion density power timing in an area."}, {"start": 506.56, "end": 519.56, "text": " However RL policies require hundreds of thousands of examples to learn effectively as you already know. And so it is critical that the reward function be fast evaluated ideally running in a few milliseconds."}, {"start": 519.56, "end": 525.56, "text": " So in order to be effective these approximate reward functions must also be positively correlated with the true reward."}, {"start": 525.56, "end": 532.56, "text": " So the point the whole point is they can evaluate the real reward the true reward. But that's super slow."}, {"start": 532.56, "end": 542.56, "text": " So you have to use these EDA tools. And so they decided to just take something as a proxy which is fast to compute a and B positively correlates with the things they care about."}, {"start": 542.56, "end": 560.56, "text": " And we mentioned those already. So that's the PPA stuff. OK. So while we treat congestion as a soft constraint so lower congestion improves the reward function we treat density as a hard constraint masking out actions grid cells to place nodes onto whose density exceeds the target density."}, {"start": 560.56, "end": 569.56, "text": " So in order to understand this part we should go and see how the policy network looks like. Let me scroll down here. 
I'll skip some stuff and then I later return."}, {"start": 569.56, "end": 578.56, "text": " OK. So this is how the model looks like. And to end basically we have two portions here. The first part is the representation learning part."}, {"start": 578.56, "end": 584.56, "text": " So we have something called edge graph neural network some graph neural network data device specifically for this work."}, {"start": 584.56, "end": 592.56, "text": " And then we have your basic value net and basically policy network. So this is your actor critic RL agent."}, {"start": 592.56, "end": 600.56, "text": " They're using PPO that's proximal policy optimization algorithm published by opening a couple of years ago. So that part is pretty standard."}, {"start": 600.56, "end": 608.56, "text": " Now let's see. This part is pretty interesting and novel. So let's see how they manage to parse the netlist."}, {"start": 608.56, "end": 620.56, "text": " So the hyper graph of your components into something into some representation that contains valuable information that can be used to figure out where to place the next components."}, {"start": 620.56, "end": 628.56, "text": " So the way they train this whole pipeline is using supervised learning and the whole goal is to learn how to predict the reward."}, {"start": 628.56, "end": 634.56, "text": " So that's how they train this part. So they take this part in isolation and they trained the following way."}, {"start": 634.56, "end": 644.56, "text": " So again you have this hyper graph. So you have macros. You have some like standard cells. They are somehow connected. It's not that important."}, {"start": 644.56, "end": 653.56, "text": " OK like this and what to do as you can see here. So if you're familiar with graph ML if you're not I made a whole playlist and I'll link it somewhere here."}, {"start": 653.56, "end": 658.56, "text": " You should check it out. But like I'll try to explain it on a high level and basically it's pretty simple."}, {"start": 658.56, "end": 665.56, "text": " So every single node in your graph has a feature vector associated with it. So whoops let me delete that."}, {"start": 665.56, "end": 672.56, "text": " OK so you have a feature vector like this. You have a feature vector here. Every single node will have a feature vector associated with it."}, {"start": 672.56, "end": 680.56, "text": " The second thing is in this paper actually even the edges have feature vectors associated with them. So you have something like this."}, {"start": 680.56, "end": 690.56, "text": " So every edge and every node has a feature vector. So what I do is as you can see here the graph neural network takes in the connectivity information and all of those feature vectors."}, {"start": 690.56, "end": 697.56, "text": " And it kind of it somehow processes these. So what graph we'll see the exact equations here a bit later."}, {"start": 697.56, "end": 705.56, "text": " But like basically you just somehow combine those. You manipulate them using fully connected neural networks. You aggregate the neighbor the neighbors."}, {"start": 705.56, "end": 711.56, "text": " So for example this one will somehow take all of its neighbors. 
So this is the neighbor and this is the neighbor."}, {"start": 711.56, "end": 718.56, "text": " So we will take those feature vectors like this one and this one and it will somehow combine them with this one."}, {"start": 718.56, "end": 726.56, "text": " So it's a it's it's using this relational inductive bias of connectivity and it's accumulating those feature vectors to create a novel representation."}, {"start": 726.56, "end": 734.56, "text": " So this is going to be something maybe of a smaller dimension. Let's call it V prime. If this was vector V this is V prime."}, {"start": 734.56, "end": 742.56, "text": " And they are going to output that as the output of of of GNN. So as you can see here edge edge embeddings."}, {"start": 742.56, "end": 750.56, "text": " So that's a set that's that's a matrix that's of dimension E times basically D sub E."}, {"start": 750.56, "end": 756.56, "text": " So E is the number of edges in your graph and D sub E is just the like feature vector dimension."}, {"start": 756.56, "end": 766.56, "text": " So once you have the matrix what I do as you can see here they do this reduce mean operation. So they just take a mean over like element wise mean over all of these feature vectors."}, {"start": 766.56, "end": 776.56, "text": " And they get the single feature vector. That's the graph embedding. OK. That somehow roughly captures the dynamics the information about this graph."}, {"start": 776.56, "end": 782.56, "text": " And the second thing they do is they take the current macro ID. So there's the current macro you want to place into the chip canvas."}, {"start": 782.56, "end": 791.56, "text": " And they so again here you have a table that's N times D sub V. So N is the number of nodes."}, {"start": 791.56, "end": 796.56, "text": " This V is the dimension of your node feature vector. And they just take a single one."}, {"start": 796.56, "end": 806.56, "text": " So they just take a DV vector here. OK. And extract it. And finally they have some metadata information like the like the length of the wire on your chip etc."}, {"start": 806.56, "end": 813.56, "text": " So they have some metadata information and they concatenate all those they feed it through the fully connected layer. And that's it."}, {"start": 813.56, "end": 822.56, "text": " So now as I already told you they're going to train this thing to to kind of understand to predict the reward of the of the placement."}, {"start": 822.56, "end": 827.56, "text": " And I'll get into details a bit later. But that's that's that's it in a nutshell."}, {"start": 827.56, "end": 840.56, "text": " So once you train that you use your presentations it learned to basically as a Kickstarter for these policy and value networks and they are trained as I said using PPO."}, {"start": 840.56, "end": 847.56, "text": " So that's it. And now I'm going to slowly start digging into the details and later we'll see the experiments and the results they got."}, {"start": 847.56, "end": 860.56, "text": " OK. Before I do that I think it will be quite useful for you to maybe understand a little bit about some electronics one on one and you'll hopefully have an epiphany moment."}, {"start": 860.56, "end": 866.56, "text": " OK. So as software the same thing goes with hardware. 
Basically you have a bunch of abstraction layers."}, {"start": 866.56, "end": 873.56, "text": " So in software we have like on the lower lowest layer you have firmware you have something like what called hardware abstraction layer."}, {"start": 873.56, "end": 884.56, "text": " You have drivers and you have operating systems and you have compilers and you have all of those systems that translate the higher level languages into like something that your computer can parse and understand."}, {"start": 884.56, "end": 890.56, "text": " And finally you have the application layer. So obviously a bunch of abstractions and the same thing goes for hardware."}, {"start": 890.56, "end": 908.56, "text": " So you have like you have on a really high level you have like you have a CPU and you have maybe like the direct memory controller. So that's just a device that can do whom CPU can just delegate basically certain memory intensive operations."}, {"start": 908.56, "end": 917.56, "text": " And then you have memory like you have a memory block here and let's say both are connected to memory both the CPU and the M are connected to memory."}, {"start": 917.56, "end": 924.56, "text": " So if you zoom in into any of these components like take maybe direct memory controller you'll end up with two types of circuits."}, {"start": 924.56, "end": 932.56, "text": " So basically you have combination logic and you have these circuits which are called sequential logic which contain the memory in the system."}, {"start": 932.56, "end": 937.56, "text": " So they are stateful. So if you take a look at this one. So what's the main difference. The main difference is the following."}, {"start": 937.56, "end": 945.56, "text": " If you have this record this is called inverter. Basically if you feed it zero it will output one and if you feed it one it will output the logic value of zero."}, {"start": 945.56, "end": 953.56, "text": " And the thing is if the thing is with these combination logic gates is that as soon as you change the input."}, {"start": 953.56, "end": 961.56, "text": " So if you go from zero to one the output will go from one to zero really quick like maybe in 20 picoseconds or something."}, {"start": 961.56, "end": 967.56, "text": " Now the trick with these flip flops with these memory circuits is so this is a deep flip flop."}, {"start": 967.56, "end": 975.56, "text": " It's a simple one of the simplest flip flops out there. So basically what happens here you can change this as much as you want like from zero to one."}, {"start": 975.56, "end": 984.56, "text": " Nothing will change here is going to keep one here until you have the clock. So the uptick of the clock or the down tick of the clock."}, {"start": 984.56, "end": 988.56, "text": " And basically so hopefully you know what like a clock signal is. 
It's something that drives your computer."}, {"start": 988.56, "end": 998.56, "text": " It's a basically digital signal that goes from zero to one with a certain frequency and on each of these upticks or downticks something is going to happen."}, {"start": 998.56, "end": 1004.56, "text": " So on each tick the sequential logic gates so these these gates like flip flops will change their state."}, {"start": 1004.56, "end": 1011.56, "text": " So if you keep one here and then the uptick comes at that point the one will propagate here on the net."}, {"start": 1011.56, "end": 1018.56, "text": " And if you change it now to zero and the down tick comes for example then this will go to zero."}, {"start": 1018.56, "end": 1022.56, "text": " So these are keeping that they have the memory. That's that's that's that's the main trick."}, {"start": 1022.56, "end": 1029.56, "text": " So having said that let me go even even deeper. So let's take one specific gate like NAND."}, {"start": 1029.56, "end": 1036.56, "text": " And what NAND does is the following. Yeah you've seen these if you've ever done any programming so you know about the NAND operator."}, {"start": 1036.56, "end": 1041.56, "text": " This is how you write it down in Python. And basically this is the opposite of an AND gate."}, {"start": 1041.56, "end": 1051.56, "text": " So you just are not AND. And basically as you can see here looking at the table you know that AND gates give you one only when you have one and one as the input."}, {"start": 1051.56, "end": 1058.56, "text": " And all of the other values will be zero. And NAND is just the opposite of that. As you can see it's inverted AND gate."}, {"start": 1058.56, "end": 1074.56, "text": " And now this is the like abstracted view of this gate. And if you dig a bit deeper and take a certain implementation which is called CMOS implementation of the NAND gate which is basically complementary metal oxide semiconductor doesn't matter."}, {"start": 1074.56, "end": 1079.56, "text": " But basically there are different ways you can implement these abstract logic gates."}, {"start": 1079.56, "end": 1091.56, "text": " The same thing as you can do with if you have like an image classifier you can implement it using a neural network using SVM using I don't know like even like k-nearest neighbors whatever your favorite classifier is."}, {"start": 1091.56, "end": 1096.56, "text": " The same thing goes here. You have a bunch of different ways how you can implement the very same thing and that is the NAND gate."}, {"start": 1096.56, "end": 1102.56, "text": " So here we have a specific logic you can use TTL. There are many different logics you can implement."}, {"start": 1102.56, "end": 1112.56, "text": " Now let me show you why this works. So basically let's take as an input one and one. And when I say one you again have to implement one somehow on your system."}, {"start": 1112.56, "end": 1120.56, "text": " So let's say for example that we have a voltage supply that's maybe I don't know like 0.9 volts."}, {"start": 1120.56, "end": 1130.56, "text": " And so on this system one will be defined as something close to 0.9 volts and zero will be defined as something close to zero volts."}, {"start": 1130.56, "end": 1138.56, "text": " So that's how you implement zero and that's how you implement one. 
And so if I feed once here for example and so that's 0.9 volts."}, {"start": 1138.56, "end": 1144.56, "text": " And if you feed one here as well and as you can see this small circle again that means it's inverted."}, {"start": 1144.56, "end": 1154.56, "text": " So we'll be adding zero onto this gate and zero onto this gate. And what happens is because you brought a huge voltage here you're going to toggle on this transistor."}, {"start": 1154.56, "end": 1160.56, "text": " And so the current will start to flow here and here and you're going to block these two because you brought zero voltage here."}, {"start": 1160.56, "end": 1166.56, "text": " And so as you can see you're basically making a direct connection to the output to this thing here."}, {"start": 1166.56, "end": 1172.56, "text": " And let's say this is ground so it's zero volts and basically that means your output is zero volts which is zero."}, {"start": 1172.56, "end": 1176.56, "text": " And let's see so one one you get zero. The same thing goes for zero zero."}, {"start": 1176.56, "end": 1185.56, "text": " So if you have if you have if you bring zero zero you're going to basically turn off these two and you're going to turn on these two."}, {"start": 1185.56, "end": 1193.56, "text": " And so you're basically bringing the voltage supply directly to the output which is 0.9 volts which basically translates into logic one."}, {"start": 1193.56, "end": 1198.56, "text": " And yeah so that's that's how it works. That's how your computer computers basically work."}, {"start": 1198.56, "end": 1206.56, "text": " And going even deeper like people have to design these and you can see here so this is how you basically place these metal wires."}, {"start": 1206.56, "end": 1211.56, "text": " And this is how you implement a NAND gate. And finally you have the last step and that's the production."}, {"start": 1211.56, "end": 1219.56, "text": " You actually do some like photo lithography using like light waves you engrave these into a silicon."}, {"start": 1219.56, "end": 1225.56, "text": " And this is how a NAND gate multiple NAND gates look like actually manufactured."}, {"start": 1225.56, "end": 1231.56, "text": " And yeah I had some prior experience myself. 
I was studying electronics and computer science."}, {"start": 1231.56, "end": 1239.56, "text": " And so this was one of the radio frequency circuits I designed which just takes your signal like maybe your audio and basically modulates it."}, {"start": 1239.56, "end": 1243.56, "text": " So it's on a higher frequency and then you can send it over the wire like a radio signal."}, {"start": 1243.56, "end": 1249.56, "text": " And yeah so that was the basically one of one of how computers work."}, {"start": 1249.56, "end": 1253.56, "text": " You have these logic gates you have these sequential gates and everything is made out of them."}, {"start": 1253.56, "end": 1263.56, "text": " So now you know how those macros are all built and hopefully like if you found this useful please comment down so that I know for future videos whether this is interesting or not."}, {"start": 1263.56, "end": 1272.56, "text": " Okay let's see some of the simplifications they are making to make this thing work and make it like computationally tractable because the problem is huge as you can imagine."}, {"start": 1272.56, "end": 1277.56, "text": " Because of the like millions of standard cells and hundreds of macros."}, {"start": 1277.56, "end": 1284.56, "text": " So first things first they group millions of standard cells into a few thousand clusters using this tool HMEDIS."}, {"start": 1284.56, "end": 1292.56, "text": " So that way they basically make it easier for those force direct methods to do the placement like themselves."}, {"start": 1292.56, "end": 1298.56, "text": " And second thing they do they discretize the grid so the chip canvas to a few thousand grid cells."}, {"start": 1298.56, "end": 1313.56, "text": " Okay so that means they won't have obviously so you have your chip and it will be super super hard and probably wouldn't be any like meaningful to just make it continuous or to make it really like small resolution."}, {"start": 1313.56, "end": 1326.56, "text": " So what I do is they kind of divide these into I think it was 128 by 128 cells so they have something like this and they can only place macros on a specific grid cell."}, {"start": 1326.56, "end": 1336.56, "text": " So that's the second simplification they make. The third one is when calculating wire length we make the simplifying assumption that the wire starts from the standard cell blah blah blah that's not important."}, {"start": 1336.56, "end": 1343.56, "text": " And finally to calculate the routing congestion cost we only consider the average congestion of the top 10% most congested grid cells."}, {"start": 1343.56, "end": 1351.56, "text": " So again certain heuristic which makes it a bit more feasible to calculate this congestion which is used later on in the reward function."}, {"start": 1351.56, "end": 1365.56, "text": " So this part the half perimeter wire length is something they use as a proxy as I mentioned for training the RL agent and it positively correlates as they show with area and other metrics of interest."}, {"start": 1365.56, "end": 1369.56, "text": " So let me just explain you briefly how it works and the formula is pretty simple as you can see."}, {"start": 1369.56, "end": 1385.56, "text": " So imagine you have a wire like this and this is your wire and basically what I do is they for this particular wire the this number will be equal to this thing here plus so that's called the height and this will be with."}, {"start": 1385.56, "end": 1397.56, "text": " So that's the half perimeter right. 
So the perimeter would be two times that and you can see here so max over the basically x axis and minus mean over the x axis same thing for the y axis."}, {"start": 1397.56, "end": 1408.56, "text": " And if you had something a bit more complex somewhere that goes like this maybe like this and finally ends up like this did again do the following."}, {"start": 1408.56, "end": 1414.56, "text": " So they would find the max x value."}, {"start": 1414.56, "end": 1422.56, "text": " So for this particular wire so that will be maybe this and they subtract this thing and they would do the same thing for why."}, {"start": 1422.56, "end": 1433.56, "text": " So it's a kind of you find a bounding box and you find the half perimeter of the bounding box that's it and then they just kind of combine all of those across different wires."}, {"start": 1433.56, "end": 1436.56, "text": " Okay. That's that that's that part. Okay."}, {"start": 1436.56, "end": 1449.56, "text": " So certain heuristic additional heuristic and you can immediately see because and I forgot to mention that this work was actually used to design the newest TPUs in Google."}, {"start": 1449.56, "end": 1455.56, "text": " So that's the version five if I'm not wrong. And so you can immediately see this is this is in production already."}, {"start": 1455.56, "end": 1460.56, "text": " So there are lots of details but like the high level picture you already got it pretty much hopefully."}, {"start": 1460.56, "end": 1470.56, "text": " And yeah let's see some additional details. So to select the order in which the macros are placed we sort macros by descending size and break ties using topological sort."}, {"start": 1470.56, "end": 1478.56, "text": " So that's additional heuristic they use in order to make the action space smaller and to make this problem computationally tractable."}, {"start": 1478.56, "end": 1491.56, "text": " So they just sort the macros so those blocks in your chip by size and then the policy will just take the next one and then learn how it just needs to decide where to place it."}, {"start": 1491.56, "end": 1494.56, "text": " It doesn't need to learn which one to pick in each step."}, {"start": 1494.56, "end": 1498.56, "text": " Okay. So I forgot to mention one detail here."}, {"start": 1498.56, "end": 1504.56, "text": " Hopefully you already understood it but like if you have this policy and the output is hundred twenty eight by hundred twenty eight."}, {"start": 1504.56, "end": 1513.56, "text": " So those are the grid cells I mentioned previously. And so it just outputs a distribution across all of the grid cells and where the distribution is where the probability is highest."}, {"start": 1513.56, "end": 1517.56, "text": " So we just take ARGMAX that's where you're going to place the macro."}, {"start": 1517.56, "end": 1523.56, "text": " So it would be really simple if but like additionally you need to mask certain areas."}, {"start": 1523.56, "end": 1530.56, "text": " And so for example if you have your your your chip canvas and you've already placed the chip the macro here."}, {"start": 1530.56, "end": 1546.56, "text": " So let's say that's this part and let's say that's this part. 
Then what will happen is they'll have this density this mask which will be used to kind of messed out certain like let's call them pixels basically to zero."}, {"start": 1546.56, "end": 1562.56, "text": " So for example if if now the if the network if the policy network outputs I don't know maybe zero point zero two here and we already have that part occupied this mask will just zero it out to zero."}, {"start": 1562.56, "end": 1568.56, "text": " And you'll just distribute the probability across the other valid cells."}, {"start": 1568.56, "end": 1571.56, "text": " So that's that part. And yeah that's what I mentioned here."}, {"start": 1571.56, "end": 1586.56, "text": " So basically to meet this constraint during each RL step we calculate the current density mask a binary M times N matrix that represents grid cells onto which we can place the center of the current node without violating the density threshold criteria."}, {"start": 1586.56, "end": 1596.56, "text": " Before choosing an action from the policy network output we first take the dot product of the mask and the policy network output and then take the ARGMAX over feasible locations."}, {"start": 1596.56, "end": 1604.56, "text": " So that's the process I just explained. So we also enable blockage aware placement such as clock straps by setting the density function of the blocked areas to one."}, {"start": 1604.56, "end": 1619.56, "text": " So block it just means that basically you explicitly say hey I don't want to have anything placed around this macro for example or around this wire and you explicitly encode that into that mask and then you can't place macros on top of that location."}, {"start": 1619.56, "end": 1628.56, "text": " Okay. That's just minor detail. Finally the loss function here is the wire length. I already explained the half perimeter wire length calculation."}, {"start": 1628.56, "end": 1639.56, "text": " They have the congestion and the density constraint and they'll just include this density part while Lagrangian so you'll just have a weighted sum of congestion density and wire length."}, {"start": 1639.56, "end": 1650.56, "text": " So as for the loss they are just summing over all of the net lists in the data set and they just want to maximize the expected cumulative reward. That was the reward part."}, {"start": 1650.56, "end": 1666.56, "text": " Let's now jump into explaining the graph neural network. 
So our intuition was that a policy network architecture capable of transferring placement optimization across chips should be able to encode the state associated with a new unseen chip into a meaningful signal at inference time."}, {"start": 1666.56, "end": 1676.56, "text": " We therefore propose training a neural network architecture capable of predicting reward on new net lists with the ultimate goal of using this architecture as the encoder layer of our policy network."}, {"start": 1676.56, "end": 1689.56, "text": " So that was the part I explained here the encoder part and now they see here we therefore created a data set of 10,000 chip placements where the input is a state associated with a given placement and the label is a reward for that placement."}, {"start": 1689.56, "end": 1706.56, "text": " So basically they used for example like those EDA tools and they found the ground truth reward and so they basically had a simple supervised regression task of on this with this graph neural network called the edge graph neural network."}, {"start": 1706.56, "end": 1722.56, "text": " For the computation itself it's fairly simple. So let's say you have two components here. You have a macro here and it's connected to this macro here. You have a feature vector here and you have a feature vector here as well as here."}, {"start": 1722.56, "end": 1740.56, "text": " OK. So what I do is the kind of so the process using a neural network feed like a simple fully connected layer. They process these vectors and then they just concatenate those and they place it again."}, {"start": 1740.56, "end": 1753.56, "text": " They just feed it through in other in other fully connected layer. I'm not quite sure what this part means here because like he doesn't make any sense to concatenate the learnable weights with your with the output here."}, {"start": 1753.56, "end": 1767.56, "text": " But yeah basically yeah you somehow combine them and then what you do to calculate the node feature is for example if you had multiple edges going into this node you just take all of those edge features."}, {"start": 1767.56, "end": 1776.56, "text": " So you take this edge you take this edge and you just aggregate those and do a mean across those feature vectors. So the computation is pretty simple."}, {"start": 1776.56, "end": 1789.56, "text": " Again check out my graph ML videos. This will be pretty self explanatory. OK. This is just your PPO objective function and I won't go to go into details there."}, {"start": 1789.56, "end": 1801.56, "text": " Let's see the final results. So on the X axis we have different TPU blocks. So they have block one through four and this open source risk five CPU chip."}, {"start": 1801.56, "end": 1813.56, "text": " And you can see here the placement cost. You want to have as low of a cost as possible. So using zero shot basically means that they don't fine tune that."}, {"start": 1813.56, "end": 1825.56, "text": " So basically what I do is they take the whole system you saw with the policy network value network and graph graph neural network and they train it on maybe for example 20 20 different chips."}, {"start": 1825.56, "end": 1836.56, "text": " And now they take that network and just apply it into a new chip like this TPU block one. 
And if they don't fine tune it on the block itself it's just zero shot."}, {"start": 1836.56, "end": 1846.56, "text": " So as you can see zero shot is pretty decent and then if you fine tune it for two hours you get even better performance in 12 hours you get even better performance."}, {"start": 1846.56, "end": 1857.56, "text": " But the point is if they train it from scratch so without that previous information from those other 20 chips. So it takes after 24 hours it still sucks pretty much."}, {"start": 1857.56, "end": 1867.56, "text": " It's still much much worse than compared to this this one that's fine tuned for 12 hours. So this means it generalizes to new never before seen chips."}, {"start": 1867.56, "end": 1874.56, "text": " So that's the benefit of a learning system. OK. Here what they show is again training from scratch versus fine tuning."}, {"start": 1874.56, "end": 1885.56, "text": " As you can see basically only after a couple of hours the pre-trained one gets a really low loss whereas you need much more time to train."}, {"start": 1885.56, "end": 1894.56, "text": " I mean this is pretty pretty obvious bottom line you want to pre train these are all agents and then later just fine tune them onto the new chip you're trying to place."}, {"start": 1894.56, "end": 1906.56, "text": " OK. That's the whole point there. Here they compare. So I mentioned data sets. So I mentioned the largest one the one that has 20 blocks but they also train them on the medium one which is a subset of this one."}, {"start": 1906.56, "end": 1918.56, "text": " And they also train it on the small one which is a subset of the medium one. So again you can see that going to more data gain gives you like lower placement costs."}, {"start": 1918.56, "end": 1926.56, "text": " And looking at this curve here they are basically what you see here is the RL cost on an unseen chip."}, {"start": 1926.56, "end": 1937.56, "text": " And you can see that with the big data set you get to overfitting much much slower and you get better performance than than basically using medium or smaller data sets."}, {"start": 1937.56, "end": 1948.56, "text": " So it's both harder to overfit and you get you get lower costs. So again more data nothing new there but yeah these are the placements they get."}, {"start": 1948.56, "end": 1961.56, "text": " So here you can see these purplish blocks are macros and here in the middle this like a blob of nothingness is a bunch of basically those standard cells."}, {"start": 1961.56, "end": 1967.56, "text": " And here here is the zero shot placement and here is after you fine tune them you can see there is much less congestion congestion."}, {"start": 1967.56, "end": 1983.56, "text": " You can see so basically these red and orange colors you know that there is a huge overlap across those layers and here it's much better place and you can also see much more regular structure here compared to the zero shot one."}, {"start": 1983.56, "end": 1991.56, "text": " But this is still fairly fairly good for something that's that's working on a chip never before seen."}, {"start": 1991.56, "end": 2005.56, "text": " Here are some results who here they blurred this because of the confidentiality. 
But again you can you can see the main thing is that humans when they when humans design chips you can see there is a lot of structure you can you can see some regular shapes here."}, {"start": 2005.56, "end": 2015.56, "text": " Whereas this RL agent just outputs this blob and this donut shaped thing here it outperforms the human design on many different metrics and that's cool."}, {"start": 2015.56, "end": 2024.56, "text": " And they mentioned here like the wire length and the congestion and you can see that it's both everything is lower using this RL agent approach."}, {"start": 2024.56, "end": 2025.56, "text": " Okay."}, {"start": 2025.56, "end": 2041.56, "text": " Last table and I think we're done here is this one. So they compare it with this replace method from 2019 is one of the state of the art placement methods and again looking at different metrics like timing area power wire length congestion."}, {"start": 2041.56, "end": 2047.56, "text": " It outperforms replace which not only has lower performance it also like higher cost."}, {"start": 2047.56, "end": 2060.56, "text": " It also many times just creates invalid placements because there was too many like either too many like the congestion was too big or there was some invalid placement was made."}, {"start": 2060.56, "end": 2066.56, "text": " And so it's it's harder to to to to get it to work as a summary they mentioned here."}, {"start": 2066.56, "end": 2076.56, "text": " So for example the process of standard cell partitioning row and column selection as well as selecting the order in which the macros are placed all can be further optimized."}, {"start": 2076.56, "end": 2080.56, "text": " In addition we would also benefit from more optimized approach to standard cell placement."}, {"start": 2080.56, "end": 2082.56, "text": " So what I see here is potential future work."}, {"start": 2082.56, "end": 2090.56, "text": " The first thing first is that the heuristic I just mentioned was that they are using the biggest macro first and then the smaller one blah blah blah."}, {"start": 2090.56, "end": 2098.56, "text": " So they argue that basically that process can also be learned and they can squeeze out more performance by doing that."}, {"start": 2098.56, "end": 2105.56, "text": " The second thing is instead of using those force directed method that can also be kind of thrown into the RL framework approach."}, {"start": 2105.56, "end": 2108.56, "text": " And yeah that's pretty much it."}, {"start": 2108.56, "end": 2112.56, "text": " So great results super impactful work."}, {"start": 2112.56, "end": 2122.56, "text": " So A.I. is building chips which make A.I. more powerful so we have a pretty pretty nice loop here and it will be exciting to see what else is there to come."}, {"start": 2122.56, "end": 2142.56, "text": " So if you found this video useful subscribe share it out and until next time bye bye."}]
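Before moving on to the next video, here is a minimal sketch (my own NumPy illustration, not the authors' code) of the two mechanisms described above: the half-perimeter wire length, i.e. (max minus min) of the pin coordinates along each axis, and the density mask that zeroes out occupied grid cells before taking the argmax over the policy output. The grid size, array names, and shapes are assumptions for illustration only.

```python
import numpy as np

def hpwl(pin_xy: np.ndarray) -> float:
    """Half the perimeter of the bounding box of one net's pins (shape [num_pins, 2])."""
    return float((pin_xy[:, 0].max() - pin_xy[:, 0].min()) +
                 (pin_xy[:, 1].max() - pin_xy[:, 1].min()))

def select_grid_cell(policy_probs: np.ndarray, density_mask: np.ndarray):
    """policy_probs: [128, 128] distribution over grid cells.
    density_mask: binary [128, 128], 1 = feasible cell, 0 = occupied or blocked.
    Zero out infeasible cells, renormalize, then argmax over the feasible locations."""
    masked = policy_probs * density_mask      # elementwise product with the mask
    masked = masked / masked.sum()            # redistribute probability over valid cells
    return np.unravel_index(np.argmax(masked), masked.shape)
```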
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=6ekOVosCQN8
Non-Parametric Transformers | Paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I do a deep dive of the "Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning" paper. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Non-Parametric Transformers paper: https://arxiv.org/abs/2106.02584 ✅ Jay Alammar's BERT blog: http://jalammar.github.io/illustrated-bert/ ✅ My LinkedIn post (Judea Pearl): https://www.linkedin.com/posts/aleksagordic_pearl-causality-intelligence-activity-6807985607432785920-Zxn2 (also check out my other posts I made related to this) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Key ideas of the paper 01:40 Abstract 02:55 Note on k-NN (non-parametric machine learning) 04:30 Data and NPT setup explained 06:15 NPT loss is inspired by BERT 08:20 A high-level architecture overview 11:30 NPT jointly learns imputation and prediction 12:50 Architecture deep dive (input embeddings, etc) 20:45 More details on the stochastic masking loss 23:30 Connections to Graph Neural Networks and CNNs 29:45 NPT achieves great results on tabular data benchmarks 34:25 NPT learns the underlying relational, causal mechanisms 39:40 NPT does rely on other datapoints 42:10 NPT attends to similar vectors 45:15 Conclusions ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #nonparametric #transformers #relationalreasoning
What's up? In this video I'm covering this new exciting paper called Self-Attention Between Data Points Going Beyond Individual Input-Output Pairs in Deep Learning, which introduces this model called non-parametric transformers. And now, non-parametric doesn't mean it doesn't have parameters, it's quite the opposite. So it just means that it doesn't scale well with data. Basically, the more data you give it, the more parameters it's going to need. And in general, these non-parametric models either have this time-wise or space-wise slash memory-wise dependency on the number of data points. So having said that, the main idea this paper introduces is that it's a kind of attempt to generalize even transformers. So whereas transformers basically assume there is a fully connected graph between the input tokens, and so they kind of tend to everything, here they additionally actually attend over different data points. So that means it's assuming an implicit fully connected graph over the data points. And I'll get into those details in a moment. And basically, even during the inference, they need the training data points, and they'll use those to better infer the class or the value if you're doing regression of the test data point. And we'll see all of those details. So that's like the main innovation that this paper brings. They get a really nice results on some tabular data sets, which is something that previous methods like XGBoost were dominating. And yeah, that's pretty exciting. So having said all that, let me jump into the details of the paper. They say here, we challenge a common assumption, underlying most supervised deep learning, that our model makes a prediction depending only on its parameters and the features of a single input. To this end, we introduce a general purpose deep learning architecture that takes as input the entire data set, instead of processing one data point at a time. Our approach uses self attention to reason about relationships between data points explicitly, which can be seen as realizing non-parametric models using parametric attention mechanism. And again, you can see here, so entire data set, they are learning from the entire data set, and they're doing inference, as we'll see soon see on the pretty much, because it doesn't scale well with data, they'll have some simplifying assumptions and approximations. So they'll be using some chunk of the training data to do inference on. As I said, the model is called non-parametric transformers. And so a quick note here, like just on like conventional non-parametric models. So they say here, conventional non-parametric models cannot learn in the sense familiar to deep learning practitioners. Interactions from the data limiting the flexibility these models have in adapting to the data at hand. And the good example would be a KNN, K nearest neighbors algorithm. So just a quick note how it works, and you'll then understand what non-parametric models mean. So let's say we have some classes here, like maybe we have two classes, one is like triangles, and then maybe we have something like circles, something like this. So and assume, so this is your data set, and now assume I give you a query point and ask you what's the class of this data point. And now if we assume that K equals three, basically that means we'll find the three nearest neighbors, hence the K nearest neighbors algorithm. 
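Since k-NN is the canonical non-parametric model in this explanation, here is a minimal sketch of it (plain NumPy, illustrative only; variable names are mine). Note that every prediction scans the whole training set, which is exactly the "doesn't scale with data" property described above; the majority-vote step the code uses is what the explanation continues with right after this.

```python
import numpy as np
from collections import Counter

def knn_predict(x_train: np.ndarray, y_train: np.ndarray, query: np.ndarray, k: int = 3) -> int:
    """Classify `query` by majority vote over its k nearest training points (L2 distance)."""
    dists = np.linalg.norm(x_train - query, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]
```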
So that's going to be this, and then we'll output by majority voting, let's take that simple variant of the algorithm without including the actual distances, and either way we're going to have triangle here. So as you can see, the more data you have here, so if we had a bunch of different circles and triangles, and if K goes to infinity, and also assuming data goes to infinity, that means that this algorithm doesn't scale well, because you'll have to search for like infinite number of nearest neighbors. And so yeah, contrast that to maybe a neural network where no matter the like the number of data points, the size of your data set, you'll have a specified pre-specified architecture whose parameters won't change, and the inference time won't change, because you always have to pass a single data point or a batch whatever through the neural network. So that's the difference between parametric and non-parametric, and as we'll soon see, now we'll see how the model actually works. So I first want to explain you the architecture itself and the data layout. So you can see here, their data set is tabular, so they basically assume they concentrate on like on this single target regression and classification. That means the following. That means they are placing the attributes along, so these are data points, as you can see, along this axis. We have here attributes, but in this last column, they're going to assume they're having like targets, either classification targets, like what's the the class for these. So for this data point here, we have attributes, and this is going to be some some like class, okay? Or if we're doing regression, there's going to be some continuous value. So that's just how they structure it, and as you can see here, if we had a parametric model, maybe a transformer, this is how the inference would look like, except for the transformer, you'd have edges going here as well, and here as well, because we are attending, every token is attending to every token, i.e. every attribute is attending to every attribute. So finally, we output the the target value, whatever that is, either class or a continuous value. And here in MPT, as you can see, they're also leveraging other data points, so maybe some training data points, and basically using all that information, they can reason and figure out this value, and they show that this inductive bias is really useful, so having the ability to attend to other data points as well. So that's an interesting shifting perspective, because usually we're used to this setup, but yeah, that's cool. Okay, so let me start explaining how the thing works, and first things first, like the objective is the following. So MPTs, so non-parametric transformers, training objective is to reconstruct a corruptive version of the input data set. Similar to BERT, we apply stochastic masking to both features and targets, and minimize a loss on MPTs prediction at entries masked out in the input. So basically, they'll have a binary mask telling them which parts are masked here, and those will be predicted. And a quick side note, if you're not familiar with BERT, it's very simple, it's a famous transformer, and the way it works, the way they train BERT is the following. So you have input tokens, so this is some sentence, like help print Mayuko, and by the way, this is from JLMR's blog, I'll link it down in the description, and what they do when they train BERT is they mask out certain tokens. 
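Before continuing with the BERT example, here is a small sketch of the corrupted-input setup that the NPT objective just described relies on (my own NumPy illustration; the masking probability, the zero fill value, and the function name are assumptions, not the paper's exact recipe): the target column of the rows we want to predict is always hidden, and a fraction of the feature entries is masked at random, BERT-style.

```python
import numpy as np

def make_masked_input(table: np.ndarray, predict_rows: np.ndarray, p_feature_mask: float = 0.15):
    """table: [n, d] numeric table with targets in the last column.
    Returns the corrupted table and the binary mask M (1 = entry is masked / to be reconstructed)."""
    n, d = table.shape
    rng = np.random.default_rng(0)
    mask = np.zeros((n, d), dtype=bool)
    mask[:, :-1] = rng.random((n, d - 1)) < p_feature_mask   # stochastic feature masking
    mask[predict_rows, -1] = True                            # always hide the targets we predict
    corrupted = table.copy()
    corrupted[mask] = 0.0                                     # zero out masked entries
    return corrupted, mask
```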
So this Mayuko will be masked by like a special symbol called mask, whatever, with some probability, or they'll just place some random value there, okay? And they do that for a number of tokens, and now the goal is to output Mayuko here, so output the correct class up here, so if they masked some token here, they'll have to output it here. So the final loss will just be, if you're doing classification, basically a cross entropy loss here, and they're gonna backprop that and train the model. So it's a self-supervised way of training, and I'm just ignoring, for the time being, for those of you who know how BERT works, I'm just ignoring this part of next sentence prediction, and focusing on these, because that's the part that's really similar in NPTs, they're doing the same thing as BERT, pretty much. Okay, so they also assume, as I said, that the targets are the final attribute of this table. So as I said, they're gonna mask out these values, and they're gonna try to predict those values, but they're also going to mask out some of the attributes as well, so you'll see, that's a combination. Let's first focus on the high-level picture. So this is, as I said, the input, so this is your tabular data set, you have n data points, you have d attributes, and some of the values are masked, so either attributes or the target values. So what they do is, they first embed this data, so this is n by d, and now we have dimension e here, and I'll get into a bit more details of how exactly the input and output embeddings work, but having done that, consider it a black box for now, what they do is the following. So they first flatten out this thing, and they apply attention between data points, so basically the same thing as in the original transformer, so we have self-attention from the original transformer applied here, across data points first. So they basically attend, as you can see these lines here, every data point attends to every other data point, so we do a Vaswani-type self-attention, then they reshape it, and then they do the same thing just across the, this time across attributes, and they repeat this block a number of times, basically as a for loop, and then they didn't draw it here, but basically they have, again, an output linear projection layer, and that's the whole thing, the architecture is pretty simple. Now if you're familiar with the recently published MLP-Mixer, and I'll link it somewhere here if you haven't watched my video, I've explained it there, but like, you can see that if we treat a patch like a data point, so MLP-Mixer inputs an image, you split it into patches, and if we treat those patches as data points, there is a similarity in the mixing part at least. So they also have the first part where they are mixing over different patches, so different data points, but the same channel, and then they have the part where they are mixing over the same patch, but different channels, so that will correspond to different attributes here. So this corresponds to this part, and this part where they are attending over different data points corresponds to this part, where they are mixing over different patches. So I just thought that's a neat analogy, and other than that they are not using attention.
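Going back to the NPT block itself for a second, here is a rough sketch of the alternating attention pattern just described (my own simplification in PyTorch; the layer sizes, the missing residual connections and MLPs, and the head counts are assumptions, not the paper's exact architecture). Attention between data points flattens each row into one token of size d*e; attention between attributes treats each row as a length-d sequence of e-dimensional tokens.

```python
import torch
import torch.nn as nn

class NPTBlockSketch(nn.Module):
    """One simplified block: attention between data points, then between attributes."""
    def __init__(self, d_attrs: int, e_dim: int, n_heads: int = 4):
        super().__init__()
        # n_heads must divide both d_attrs * e_dim and e_dim
        self.between_datapoints = nn.MultiheadAttention(d_attrs * e_dim, n_heads, batch_first=True)
        self.between_attributes = nn.MultiheadAttention(e_dim, n_heads, batch_first=True)

    def forward(self, h: torch.Tensor) -> torch.Tensor:      # h: [n, d, e]
        n, d, e = h.shape
        rows = h.reshape(1, n, d * e)                         # each data point becomes one token
        rows, _ = self.between_datapoints(rows, rows, rows)   # every data point attends to every other
        h = rows.reshape(n, d, e)                             # back to [n, d, e]
        h, _ = self.between_attributes(h, h, h)               # within each row, attributes attend to each other
        return h

# Example usage with made-up sizes: 8 data points, 5 attributes, embedding dim 16.
# h = torch.randn(8, 5, 16); out = NPTBlockSketch(5, 16)(h)
```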
So the whole point of MLP mixture paper was to ditch the attention and just focus on the MLPs so I have just have less even less inductive biases in this architecture. Okay there was a small digression hopefully you found that connection useful. Now let me go and try and focus on explaining the the actual architecture. So quick mention again we focus on single target classification and regression corresponding to a masking matrix M with ones at all entries of the table column X that this is just a means that the last column is masked but online multi-target settings imputation self-supervision using input feature is also possible. So that's what I told you when I said like basically they can also mask some other parts of the of the input data and that's really cool. So the cool part here is that and I noted it here basically it joins the learns imputation plus prediction. So in conventional approaches what you usually have to do when you use those boosting methods you have to have a two-stage approach. The first stage just imputes the values and now again imputation just means that you're missing some values. If this is your tabular data and let's assume you're missing some value and I'll just denote that as a question mark these algorithms usually have to maybe apply some heuristic like maybe they'll have to attend over the same row and maybe find I don't know like mean or median and impute it right here or they can even train a dedicated machine learning model which will learn how to impute those values. Once the imputation is done that stage is done now they only can do prediction. Here this model MPT does this jointly and that's really cool because it can learn much better how to do imputation in this end-to-end deep learning scenario because basically it can also attend other data points and make much better much more educated decisions here. So that's something I thought mentioning and yeah I'll get back to this equivariance thing and I'll make some nice connections to graph neural networks but like for the time being let me just again I showed you the the high-level picture of how the architecture works now let me get into a bit more details. So again I won't be explaining the actual self-attention mechanism I have a video I made a video on that you can check it out it's a basically Vasvani self-attention so you've been seeing if you're following your like recent research this is being used everywhere so I'll skip that part and I'll just focus on explaining the input output embeddings. So let me just scroll down in the appendix okay it's gonna be so okay by the way it's the paper is really neat they have they added really cool things like checklists and computational requirements and also some implicated like societal implications so I really love how the paper is written and yeah kudos to the authors. 
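For contrast with the two-stage pipeline mentioned above, here is what the classical first stage often looks like in its simplest form, a column-wise mean imputation sketch (illustrative NumPy, not from the paper); a separate predictive model would then be trained on the completed table, whereas NPT learns imputation and prediction jointly, end to end.

```python
import numpy as np

def mean_impute(table: np.ndarray) -> np.ndarray:
    """Classical stage-one baseline: replace each missing entry (NaN) with its column mean."""
    filled = table.copy()
    col_means = np.nanmean(filled, axis=0)       # per-attribute means, ignoring missing values
    idx = np.where(np.isnan(filled))
    filled[idx] = np.take(col_means, idx[1])     # fill each hole with its column's mean
    return filled
```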
Okay let's see how the how the thing works in the initial stage of the pipeline of the architecture so basically we have we have a data set so this is the data set tabular so we have and data points we have the attributes one of them is also the the target column so implicitly that's the last column and how it works is the following so they say it here we'll learn separate embedding weights for each attribute and that makes sense because each attribute contains its own semantics and we'll now see like even like in more details why that is so let's say we have this column so let's assume this this column contains categorical variables so we'll have a certain number of classes there and the first step they do for a categorical variable is they just basically encoded into a one-hot encoding so this is going to be size of C where C is the number of classes and basically so so let's say if we if you have a number I don't know two and assume we have I don't know maybe four classes this is going to be embedded into a one-hot vector like this so basically we have four slots here and one goes here zeros elsewhere okay so they're gonna embed every single attribute in that column across different data points into that one-hot embedding okay so the next step they do is they append the the mask here so they'll append the mask down all the way down right and they do this for every single column so now let's assume this one is maybe continuous variable the first thing they do here is they standardize the column so they just make it have mean of zero and standard standard deviation of one or variance equal to one and basically now after they do that they'll just have a single dimension here so that's one not C and they'll append the mask again okay so that's gonna be two and as you can see and also some different column may have different number of classes here so it may be C C prime whatever and that means for that reason we'll basically have to have different independent linear projection layers not just because of the semantics but also because of the pure shape mismatch so you can think of it like both ways so once we have that we just have like linear projection layer so which will just project all of these so whatever the number of dimensions here is into e dimensions so this is going to be kind of projected into like you can imagine having a fully connected layer here blah blah I don't know how to draw today but basically you can imagine it like this is you'll have e dimensions and we'll end up with a n times d times e volume the same thing you saw on the high level diagram so that's the embedding part almost let me just show you the formula so this in code this is the part that embeds the categorical variables into one hot embeddings then they append the mask so again the mask will just have dimensions n times d and if you have one somewhere and I forgot to mention that part if you have one somewhere that means that part is going to be masked out so let's assume this part here has one so that means they do the following aside from appending one here at the last like slot here they're going to mask out all of this with high probability so all of these will get set to zero and so that means that that one hot vector will just be zeroed out so this hot this one hot vector will just be zeroed out so we'll have just all zeros all the way here so as in birth this is either going to be zeroed out or they're gonna just set our random value in there and that's how the thing works so again getting back to to 
the formula you can see after we do the one hot encoding for categorical variables or just normalization for continuous variables we append the mask we project this into the into this e dimensions and then we just concatenate across all the different columns IE attributes and then they additionally have these learned embeddings for index and type I'm not sure whether they did ablation on whether they actually need the both of these whether it can be jointly learned like the type and index but yeah if anybody knows or the authors are watching the video they can maybe explain it all in the comments having done that for a single attribute column so this is for a single attribute column so this was this part maybe we're focusing only on this specific column on this volume now they just concatenate all of those as you can see across the axis D so this is the D axis and that's it that's it you get the NDE volume and similarly for the output embedding I'll just briefly explain it so here they're not predicting the mass variable so they'll just end up with so EJ was the notation they used to denote this part like without the the masking dimension that's added so they're just going to again linearly project all of these this E into into E let's call it subscript subscript D where this is basically depends on the on the column so for this column this ED is going to be equal to C for this continuous variable column is going to be equal to 1 etc so yeah that's basically needed so that we can then do the like appropriate apply the appropriate loss across that column so it's very easy they say here so it's one for continuous attributes it's the number of categories for a categorical attribute and to obtain the final prediction matrix we take the argmax over the categorical predictions so that's how they do the prediction so basically whatever they spit out in the last layer maybe like they'll have some probabilities obviously if we extract this part here and then just make it a bit nicer to see so if I extract it here it's going to be some distribution and like dimensions here are going to be C as I said so it's gonna be just some distribution of across the the classes and they're just gonna do like like normal like argmax and find the the the highest probability class and that's their prediction okay that was lots of details on the on the architecture let me let me get back to the paper and tell you some interesting stuff so first things first I don't want to finish with that with the loss so as I said before so they're applying stochastic feature masking they're applying stochastic target masking similarly to BERT and during the training we compute the negative log likelihood loss at training targets as well as the auxiliary loss from masked out features so that's the part I mentioned so assuming we have a classification problem what I do is the following so they'll output again this would be the output of their model of the NPT and let's focus on a single attribute on a single slot here basically they said this is going to be C and they're simply going to take the the ground truth position so maybe this is the ground truth class they're gonna take the probability that they had there so let me just kind of draw it three-dimensional here so they're just going to extract the probability there and they're just gonna do minus log p of I so in order to minimize the loss you want to have your obviously so that's like cross entropy simple stuff you want to have your probability going to one this is the 
output p i and this is the loss, okay, on the y-axis. So they just sum that up across all of the masked out features, so assume we had a one here in the mask, and if we had a one here we're gonna additionally extract the class for that one and do minus log p i, and we're gonna just do a sum over those and just do a backprop through the model, hopefully that was clear. So long story short, they're just going to sum up across all of the masked out features or targets here, and they're going to do a sum and just backprop on that loss. Okay, so interestingly, stochastic target masking means that many training targets are unmasked to the model at training time. So this is interesting, that means that this column which contains the targets will not be completely masked, some parts will have zeros here, and that means that the model can actually see the correct target value for that data point. This allows NPTs to learn to predict at each epoch the masked targets of certain training data points using the targets of other training data points, in addition to all training data features. Okay, so that's an interesting part, and we'll see how that helps them do relational reasoning later on. Here's just a short note, obviously this approach will have some problems with scaling, because they are attending over a bunch of training data points during inference as well, and yeah, that's something that future research will need to tackle. Okay, architecture explained, now let me actually show you some exciting results and first give you some connections to graph neural networks that I promised. So first things first, they say here NPTs are equivariant to a permutation of the data points, in other words if the set of input data points is shuffled, so if we have the tabular data, you have n rows and d columns, so we kind of shuffle these rows, NPTs produce the same predictions but shuffled in an analogous manner. So that's the equivariance part, it should not depend on their ordering, that's the important part. So now let me just kind of give you some connections with CNNs and graph neural networks, and hopefully you'll find that useful. Okay, where are we, let me just find the, yep, here it is. So they mention here that NPTs can be seen as a conceptual generalization of graph convolutional networks, and again, I've covered that in my previous video, you can check it out, I'll link it somewhere here, and the other graph neural networks like GAT, again, I've covered that one as well, so that's the graph attention network, in which a set of dependencies, edges between data points, is not known a priori and instead is learned from data using self-attention. So this is the interesting part, and now let me show you what they mean by that. So NPTs learn an implicit graph structure, whereas, if you're familiar with graph neural networks, let me draw a graph here, basically you have some nodes in your graph, and for example for GAT you'll have explicit connections here, okay, you'll have something like this, and then what the graph attention network will do, it will just kind of assign certain attention coefficients to each of these edges, and it's going to aggregate all of these node features, so every node has a feature vector associated with it, it's going to kind of aggregate all of these by some invariant mapping, and they're going to get the final embedding for this vector. So prior to this, this vector had like an embedding vector V, and now after this accumulation across the neighborhood, so we're somehow modifying these feature vectors, we are somehow combining them, and we get V prime, that's the updated representation. So now the difference between GAT, and in general the graph neural networks, is that they have an explicit graph structure. So compared to that, NPTs, the non-parametric transformers, now do the following. It is something similar to what the transformer does with sentences, basically it assumes a fully connected graph, so it assumes that all of these are connected with each other, okay, I think I've connected everything, basically, and now it just learns which of these edges have what kind of strength, and so it's implicitly inferring the structure of the graph without being told which edges exist and which edges do not exist, so it's more of a soft assignment here compared to a graph neural network. So that's the first connection. The second thing I want to mention is that the graph attention network is invariant to the data point ordering, so it's not equivariant, it's invariant. That means no matter how you have these node feature vectors in your table, so basically let's say this is one, this is two, this is three, you'll represent your graph as a certain table where again here we have data points and here you have the number of attributes, so now no matter if you shuffle these one, two, three in whatever permutation, you'll still have the same V prime feature vector here, because what the graph attention network does is it just sums up these values, and basically the sum enables us to have this invariance property, right, because a sum doesn't care about the permutation of your input. So on the other hand, let me take a convolutional neural network just to explain the equivariance part, I think you'll find it useful. Basically let's assume we have an image here, and in this image we have an image of a square, okay, and now if we pass that through a CNN, and let's assume we're in the first layer of the convolutional neural network, and that the first kernel, the first filter, learns to just detect the vertical edges in your image, so the output will look something like this, we'll have like two vertical edges here, okay. So equivariance means the following: if we now take this square and we translate it, maybe here, now what will happen in the output is that this response will just shift by the same amount, so as you can see it's equivariant, it's varying with the input, it's not invariant to the position of the square, it's actually varying, if that's even a verb, with the input, okay. So the invariant part in convnets usually comes at the classification head, because there you really don't care, you just care that you have a square in the image, if you're trying to classify it you don't care where it is, so it's actually a desirable property to have invariance at the later stages of the CNN, but the first stages have equivariance. So that was just a short summary of what's equivariance, what's invariance, and so basically now that we know that, we can say two things. First things first, NPTs are equivariant. Second thing is that they are implicitly learning the graph structure over the data set, and that's really cool, but unfortunately it's kind of computationally expensive, and yeah, they say here we'll leave further exploration of the close connection between NPTs and graph 
neural networks to future work I just briefly mentioned a couple of obvious properties I guess some of the more knowledgeable people from the graph ML community could write down comments of what interesting other properties this thing has with graph neural networks okay that was it now let me jump to experiments they have some really nice experiments okay the first experiment they do is they they basically test this on a tabular data set a collection of tabular data sets ten of them to be precise and they say it here amputees perform competitively on established benchmarks tabular data is ubiquitous in real-world machine learning but notoriously challenging for general purpose deep neural networks which consistently underperform boosting methods and are rarely used in practice so again short note boosting methods they are super simple you basically just take take a weak learner which means it's a some some really weak model that can be a bit better than random guessing and they combine these in a clever way for example majority vote or something to get something called strong learner which actually has some nice accuracy let's say if we're doing classification so now there was a bunch of like different work that came after the original boosting method so there is first thing is this add a boost so there's just adaptive boosting where once you create the first week a classifier let's say you basically use the data points is performing poor on to feed those into the next week learner which will learn to perform much better on the points where this first learner was having difficulties with so that's the add a boost and we had gradient boosting and then there is a bunch of variations on gradient boosting so here you can see XGBoost which is just a performant like implementation of that gradient boosting method we have cat boost also gradient boosting implementation light gradient boosting method again gradient boosting and yeah so you get the point these boosting methods are pretty popular on tabular data you'll see them all around kegel and now let's see the results they got and they're pretty spectacular if you take a look at this binary classification tasks so I think there was four of these you can see MPT is the first in this table it's better than then cat boost and light GBM actually boosts altogether it's also the first on this table where they're doing multi-class classification and MPT is again the best method it's got a bit less variance compared to XGBoost they're otherwise tied here and finally we have regression tasks where MPT was a bit worse so you can see it's tied with XGBoost it has a little bit bigger variance and cat boost is kind of really really good for this for this task but nonetheless they mentioned it here so in addition to its strong rank-wise performance MPT achieves best performance on four out of ten benchmark datasets more than any other method so they definitely proven that this neural network can be used for tabular data and give some nice results by the way if you're asking yourself maybe it's not fair like they're they're probably used much more computation to train the MPT that's actually not true because for just figuring out the best hyperparameters for those boosting methods they actually wrote something some somewhere in the paper that the computation necessary was pretty much on pair so as so this is supporting our hypothesis that attention between data points is a useful architectural inductive bias for prediction and aside from these tabular 
datasets they also tested MPT on the image kind of benchmarks like I think MNIST and Cypher-10 and they said here similar to previous work on transformers for computer vision we would expect pre-training a millions of images to significantly boost MPT's performance and if you watch my previous video you now know that vision transformers and MLP mixers don't need pre-training at all if you use this thing called sharpness aware minimization objective and again I'll link the video somewhere here you can check it out basically if we have if you somehow find a way to to smoothen out the lost landscape you'll notice that you don't need extensive pre-training anymore and you can achieve results that outperform resonant baselines and that's it in a nutshell so I guess the authors of the paper either either didn't have the time to rewrite this or they still haven't read that paper which recently came out a couple of days ago okay that's the first part now let's see some some some cool things this diagram here let me just introduce it basically they make this semi synthetic the problem and they say it here for each batch we input the original data with masked target values as well as a copy of the original data where all target values have been revealed I know masking is applied at this time we input novel semi synthetic test data to ensure that MPT has learned the correct relational mechanism as opposed to just memorizing the target values okay so they they have a nice nice chart here where you can see the what's what's the input so this is the input we have some data points and we have the target values there are hidden denoted by these question marks and we have the actual correct values here so if MPT smart even though we're gonna just back prop on these masked values is going to learn how to attend how to find this similar vector so this is basically the same vector and then just copy this value here instead of learning to somehow map and use the attributes is going to learn the relational mechanism and why do we want that well because it's going to generalize much better if it learns how to just attend to the same vector and just copy the value that's the whole point of relational reasoning so as you can see here they draw some attention maps so this green data point so that's this one the first one it attends to this one so that's this data point here so that means it learned completely and you can see attention here is one all around you have zeros if you take a look at this heat map so that means the whole attention of this data point of the first data point is on this one and that's exactly what we wanted and you can see the pattern going for all of these data points here and here this part you can ignore they didn't enforce this they did write something about this in the appendix why these data points are attending to themselves so that's something may be interesting taking a look into but yeah basically you can see the results are really cool Pearson correlation is 99.9 that means basically a fancy way of saying that it basically learns how to copy over these values or that's a hypothesis for now but like it's it's it's basically outputting perfect values as if it was copying them and we'll soon see that it's actually copying them so for this second row let me just go back down here okay so additionally we perform an interventional experiment to investigate the extent to which amputees have actually learned the causal mechanism underlying the lookup task and if you see these two keywords 
and if you're familiar with Judea Pearl's work you'll notice that this is basically rank two experiments so we have this thing called letter of causation the first letter is just observing data the second letter would be having acting in the environment or doing these these interventional kinds of experiments where you're setting something to certain value and then observing and finally we have a counterfactual level where we are imagining like different worlds and that sounds super abstract for this context but yeah just just I wanted to just associate this sentence here with Judea Pearl's work so if you want to know more about it you can check some of my previous LinkedIn posts or you can read the book of why from from Judea Pearl as a nice introduction to his work so the model is not confronted with target values associated with features that are highly unlikely under the training data and let's see why that is so basically what I do as you can see so it's the same experiment as the above but this time they take this target value and instead of basically keeping it the way it was in the original data set they tweak it they take that maybe there was 1.3 and they're going to kind of mess it up and put 7.5 here and that combination 7.5 with similar attributes have never been seen in the training data set so now if the model learned how to just kind of use the attributes and figure out the target value is gonna fail here and on the other hand if you'd learn how to do what I previously mentioned and that's just to attend to the vector and copy that value it's gonna have nice performance and as we can see here the Pearson correlation is again great that means it learned how to no matter these interventions it learned how to predict the exact value from from these slots and that means it learned the correct relational mechanism that's the point of these experiments and so we now confidently conclude that the MPTs robustly learn the causal data generating mechanism underlying the semi-synthetic data set this requires MPTs to learn a non-trivial sequence of computational steps they must learn to match rows based on similarity of relevant features to look up the target value of the duplicated data point and to copy that value into the target of the mask data point so that's what I already explained and yeah hopefully now it makes more sense okay next experiment they did was to corrupt the data points and see whether it's failing because it learned how to actually rely on those other data points so what I do they said here so this completely corrupts information from all data points except except the one for which we evaluate hence a model that relies meaningfully on attention between data points will show deteriorating performance so let me just explain you what they actually did and they did the following thing so we have a data set here and let's say we have a bunch of data points and let's say we want to predict this value here okay so this is the data point for which we want to figure out predict the class what I do is they just shuffle all of these attributes of all of the others so all of these other data points are going to get shuffled and that means this feature vector can't be matched now with other feature vectors in them like using the we can't use the mechanism we just described it's using and that's just figuring out matching the vector finding maybe I don't know maybe it learned to do cosine similarity now I can't do that because we shuffle the attributes and what I show is that 
performance deteriorates which is a further proof that it learned the correct relational mechanism it appears that when MPTs do not find it advantageous to rely on attention between data points during training they can learn to completely ignore the other inputs essentially collapsing into a standard parametric model this supports our earlier claims that amputees can learn end-to-end from data the extent to which they rely on other data points for prediction so that's a fancy way of saying that basically sometimes if we have a data set again so sometimes it's actually just relying on this particular row on a single data point to predict this value here because for that task you figure out it doesn't actually need to use other data points and for those data sets the the performance they show does not deteriorate which is again a further proof about proof informal proof that basically it learned the correct mechanism here they just have a table a bunch of different data sets you can see regressions here so yeah on certain data sets as I just mentioned it's not deteriorating that much because it doesn't rely on other data points that much okay having said that let me see if we have something else interesting and yeah this is the last part that's super interesting and they now try to figure out how the model finds those similar vectors so and what I do is we sort the input data with respect to their feature space it's actually input space as I saw in the app appendix but maybe the authors can correct me here distance such that the similar data points are now close to each other and so let me explain what that means basically that means you have the original tabular data set you have your data points what I do is they find maybe using L2 metric define the closest vectors here and they're just going to sort them so now we have a vector here and a vector here and these two have really small L2 distance and now we have you know vector here and these two have really small L2 distance which that means that basically these two also have maybe a bit bigger L2 distance okay so you get the point basically we're slowly adding stacking vectors which are further away looking at the L2 distance and when we do that and we apply NPT and when we look at the certain like attention patterns from a certain like head we get this pattern and this means that is basically attending to the vectors which are basically close in the input space which means it's attending to similar vectors according to this L2 distance and if you don't know how to parse this diagram let me help you basically if we have if this is row I this row is going to attend and assume we have a sort of table so this table is sorted according to L2 so let's say this is I this is vector I and it's somewhere in the middle of the table and that vector is going to attend to its proximity so it's going to attend to these vectors really strongly and then it's going to drop off and then it's going to have really weak attendance to maybe this vector here so it's going to tend to these vectors really strongly and the strength will kind of fade away going downwards or upwards here and that's exactly what you see in this attention map basically if this is vector I and this is just n number of data points this is just n number of data points this means that this vector as you can see attends to itself really strongly and to its neighborhood but then it deteriorates because you can see that the dark blue means the attention coefficients are really really really 
weak there, okay, that's the whole point. And yeah, further they had some additional experiment where they're doing a similar thing, but the conclusion was the same, basically NPT learns how to attend to those vectors which are really close in the input space looking at the L2 distance. Okay, that's pretty much it. They mention some problems like these scaling limitations which I have already flagged, and that's pretty much it. So we saw that NPTs learn how to do relational reasoning, and because they learn the correct mechanism on that simple semi-synthetic task, they can generalize much better than if they had actually learned how to deduce the value just by looking at the attributes; instead they learn how to just copy-paste the values from the corresponding rows. And yeah, hopefully this was useful, if you found this video useful share it out and subscribe, and until next time bye bye
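To make that last experiment easier to picture, here is a tiny sketch (illustrative NumPy; the greedy ordering and the use of negative L2 distance as a stand-in for attention strength are my own simplifications, not the paper's exact procedure) of sorting the rows so that L2-similar data points end up next to each other and then building a row-to-row similarity map; the banded, near-diagonal pattern in such a map is what the attention heat map described above looks like.

```python
import numpy as np

def sort_rows_by_l2(x: np.ndarray) -> np.ndarray:
    """Greedy ordering: start from row 0 and repeatedly append the nearest unused row,
    so that neighbouring rows in the output are close in L2 distance."""
    order, remaining = [0], set(range(1, len(x)))
    while remaining:
        last = x[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(x[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return x[np.array(order)]

x_sorted = sort_rows_by_l2(np.random.rand(50, 8))
# Higher (less negative) values cluster near the diagonal, fading with distance.
similarity = -np.linalg.norm(x_sorted[:, None, :] - x_sorted[None, :, :], axis=-1)  # [50, 50]
```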
[{"start": 0.0, "end": 3.7600000000000002, "text": " What's up? In this video I'm covering this new exciting paper called"}, {"start": 3.7600000000000002, "end": 8.72, "text": " Self-Attention Between Data Points Going Beyond Individual Input-Output Pairs in Deep Learning,"}, {"start": 9.44, "end": 14.96, "text": " which introduces this model called non-parametric transformers. And now, non-parametric doesn't"}, {"start": 14.96, "end": 19.92, "text": " mean it doesn't have parameters, it's quite the opposite. So it just means that it doesn't scale"}, {"start": 19.92, "end": 25.28, "text": " well with data. Basically, the more data you give it, the more parameters it's going to need."}, {"start": 25.28, "end": 31.12, "text": " And in general, these non-parametric models either have this time-wise or space-wise slash"}, {"start": 31.12, "end": 38.400000000000006, "text": " memory-wise dependency on the number of data points. So having said that, the main idea"}, {"start": 38.400000000000006, "end": 44.08, "text": " this paper introduces is that it's a kind of attempt to generalize even transformers."}, {"start": 44.08, "end": 51.6, "text": " So whereas transformers basically assume there is a fully connected graph between the input tokens,"}, {"start": 51.6, "end": 57.04, "text": " and so they kind of tend to everything, here they additionally actually attend over different data"}, {"start": 57.04, "end": 63.28, "text": " points. So that means it's assuming an implicit fully connected graph over the data points."}, {"start": 63.28, "end": 70.48, "text": " And I'll get into those details in a moment. And basically, even during the inference,"}, {"start": 70.48, "end": 77.52000000000001, "text": " they need the training data points, and they'll use those to better infer the class or the"}, {"start": 77.52, "end": 82.47999999999999, "text": " value if you're doing regression of the test data point. And we'll see all of those details."}, {"start": 82.47999999999999, "end": 88.39999999999999, "text": " So that's like the main innovation that this paper brings. They get a really nice results"}, {"start": 88.39999999999999, "end": 94.24, "text": " on some tabular data sets, which is something that previous methods like XGBoost were dominating."}, {"start": 94.24, "end": 99.28, "text": " And yeah, that's pretty exciting. So having said all that, let me jump into the details of the"}, {"start": 99.28, "end": 105.84, "text": " paper. They say here, we challenge a common assumption, underlying most supervised deep"}, {"start": 105.84, "end": 110.0, "text": " learning, that our model makes a prediction depending only on its parameters and the features"}, {"start": 110.0, "end": 114.72, "text": " of a single input. To this end, we introduce a general purpose deep learning architecture"}, {"start": 114.72, "end": 119.84, "text": " that takes as input the entire data set, instead of processing one data point at a time. 
Our"}, {"start": 119.84, "end": 126.32000000000001, "text": " approach uses self attention to reason about relationships between data points explicitly,"}, {"start": 126.32000000000001, "end": 131.12, "text": " which can be seen as realizing non-parametric models using parametric attention mechanism."}, {"start": 131.12, "end": 136.96, "text": " And again, you can see here, so entire data set, they are learning from the entire data set,"}, {"start": 136.96, "end": 144.56, "text": " and they're doing inference, as we'll see soon see on the pretty much, because it doesn't scale well"}, {"start": 144.56, "end": 149.28, "text": " with data, they'll have some simplifying assumptions and approximations. So they'll be using some chunk"}, {"start": 149.28, "end": 155.76, "text": " of the training data to do inference on. As I said, the model is called non-parametric transformers."}, {"start": 155.76, "end": 165.12, "text": " And so a quick note here, like just on like conventional non-parametric models. So they say"}, {"start": 165.12, "end": 169.28, "text": " here, conventional non-parametric models cannot learn in the sense familiar to deep learning"}, {"start": 169.28, "end": 174.64, "text": " practitioners. Interactions from the data limiting the flexibility these models have in adapting to"}, {"start": 174.64, "end": 181.28, "text": " the data at hand. And the good example would be a KNN, K nearest neighbors algorithm. So just a quick"}, {"start": 181.28, "end": 186.8, "text": " note how it works, and you'll then understand what non-parametric models mean. So let's say we have"}, {"start": 186.8, "end": 192.16, "text": " some classes here, like maybe we have two classes, one is like triangles, and then maybe we have"}, {"start": 192.16, "end": 198.72, "text": " something like circles, something like this. So and assume, so this is your data set, and now assume"}, {"start": 198.72, "end": 203.36, "text": " I give you a query point and ask you what's the class of this data point. And now if we assume"}, {"start": 203.36, "end": 210.96, "text": " that K equals three, basically that means we'll find the three nearest neighbors, hence the K"}, {"start": 210.96, "end": 216.48000000000002, "text": " nearest neighbors algorithm. So that's going to be this, and then we'll output by majority voting,"}, {"start": 216.48000000000002, "end": 222.72000000000003, "text": " let's take that simple variant of the algorithm without including the actual distances, and"}, {"start": 223.28000000000003, "end": 228.48000000000002, "text": " either way we're going to have triangle here. So as you can see, the more data you have here, so if"}, {"start": 228.48, "end": 236.72, "text": " we had a bunch of different circles and triangles, and if K goes to infinity, and also assuming data"}, {"start": 236.72, "end": 241.28, "text": " goes to infinity, that means that this algorithm doesn't scale well, because you'll have to search"}, {"start": 241.28, "end": 248.16, "text": " for like infinite number of nearest neighbors. 
And so yeah, contrast that to maybe a neural network"}, {"start": 248.16, "end": 253.92, "text": " where no matter the like the number of data points, the size of your data set, you'll have"}, {"start": 253.92, "end": 259.36, "text": " a specified pre-specified architecture whose parameters won't change, and the inference time"}, {"start": 259.36, "end": 264.24, "text": " won't change, because you always have to pass a single data point or a batch whatever through the"}, {"start": 264.24, "end": 269.68, "text": " neural network. So that's the difference between parametric and non-parametric, and as we'll soon"}, {"start": 269.68, "end": 274.64, "text": " see, now we'll see how the model actually works. So I first want to explain you the architecture"}, {"start": 274.64, "end": 282.64, "text": " itself and the data layout. So you can see here, their data set is tabular, so they basically assume"}, {"start": 282.64, "end": 288.56, "text": " they concentrate on like on this single target regression and classification. That means the"}, {"start": 288.56, "end": 292.24, "text": " following. That means they are placing the attributes along, so these are data points,"}, {"start": 292.24, "end": 297.52, "text": " as you can see, along this axis. We have here attributes, but in this last column, they're"}, {"start": 297.52, "end": 301.44, "text": " going to assume they're having like targets, either classification targets, like what's the"}, {"start": 302.08, "end": 308.24, "text": " the class for these. So for this data point here, we have attributes, and this is going to be some"}, {"start": 308.24, "end": 313.04, "text": " some like class, okay? Or if we're doing regression, there's going to be some continuous value."}, {"start": 313.04, "end": 320.96000000000004, "text": " So that's just how they structure it, and as you can see here, if we had a parametric model,"}, {"start": 320.96000000000004, "end": 325.6, "text": " maybe a transformer, this is how the inference would look like, except for the transformer,"}, {"start": 325.6, "end": 330.64, "text": " you'd have edges going here as well, and here as well, because we are attending, every token is"}, {"start": 330.64, "end": 336.48, "text": " attending to every token, i.e. every attribute is attending to every attribute. So finally, we output"}, {"start": 336.48, "end": 342.64000000000004, "text": " the the target value, whatever that is, either class or a continuous value. And here in MPT,"}, {"start": 342.64000000000004, "end": 348.40000000000003, "text": " as you can see, they're also leveraging other data points, so maybe some training data points,"}, {"start": 349.28000000000003, "end": 355.44, "text": " and basically using all that information, they can reason and figure out this value, and they show"}, {"start": 355.44, "end": 362.0, "text": " that this inductive bias is really useful, so having the ability to attend to other data points"}, {"start": 362.0, "end": 366.72, "text": " as well. So that's an interesting shifting perspective, because usually we're used to"}, {"start": 367.44, "end": 374.64, "text": " this setup, but yeah, that's cool. Okay, so let me start explaining how the thing works, and"}, {"start": 375.28, "end": 380.96, "text": " first things first, like the objective is the following. 
So MPTs, so non-parametric transformers,"}, {"start": 380.96, "end": 384.8, "text": " training objective is to reconstruct a corruptive version of the input data set."}, {"start": 384.8, "end": 390.48, "text": " Similar to BERT, we apply stochastic masking to both features and targets, and minimize a loss"}, {"start": 390.48, "end": 397.68, "text": " on MPTs prediction at entries masked out in the input. So basically, they'll have a binary mask"}, {"start": 398.64000000000004, "end": 404.96000000000004, "text": " telling them which parts are masked here, and those will be predicted. And a quick side note,"}, {"start": 405.6, "end": 412.16, "text": " if you're not familiar with BERT, it's very simple, it's a famous transformer, and the way it works,"}, {"start": 412.16, "end": 416.72, "text": " the way they train BERT is the following. So you have input tokens, so this is some sentence,"}, {"start": 416.72, "end": 422.64000000000004, "text": " like help print Mayuko, and by the way, this is from JLMR's blog, I'll link it down in the description,"}, {"start": 422.64000000000004, "end": 427.84000000000003, "text": " and what they do when they train BERT is they mask out certain tokens. So this Mayuko"}, {"start": 427.84000000000003, "end": 434.72, "text": " will be masked by like a special symbol called mask, whatever, with some probability, or they'll"}, {"start": 434.72, "end": 440.96000000000004, "text": " just place some random value there, okay? And they do that for a number of tokens, and now the goal is"}, {"start": 440.96, "end": 448.4, "text": " to output Mayuko here, so output the correct class up here, so if they masked some token here, they'll"}, {"start": 448.4, "end": 453.52, "text": " have to output it here. So the final loss will just be, if you're doing classification,"}, {"start": 455.84, "end": 461.84, "text": " like they'll just do like basically cross entropy loss here, and they're gonna back prop that and"}, {"start": 461.84, "end": 468.08, "text": " train the model. So it's a self-supervised way of training, and I'm just ignoring, for the time being,"}, {"start": 468.08, "end": 472.24, "text": " for those of you who know how BERT works, I'm just ignoring this part of next sentence prediction,"}, {"start": 473.03999999999996, "end": 478.47999999999996, "text": " and focusing on these, because that's the part that's really similar in MPTs, they're doing the"}, {"start": 478.47999999999996, "end": 486.96, "text": " same thing as BERT, pretty much. Okay, so they also assume, as I said, that the targets are the final"}, {"start": 486.96, "end": 492.96, "text": " attribute of this table. So as I said, they're gonna mask out these values, and they're gonna try"}, {"start": 492.96, "end": 498.0, "text": " predict those values, but they're also going to mask out some of the attributes as well, so you'll"}, {"start": 498.0, "end": 505.6, "text": " see, that's a combination. Let's first focus on the high-level picture. 
So this is, as I said,"}, {"start": 505.6, "end": 511.2, "text": " this is the input, so this is your tabular data set, you have n data points, you have d attributes,"}, {"start": 511.2, "end": 514.88, "text": " and some of the values are masked, so either attributes or the target values."}, {"start": 515.76, "end": 523.6, "text": " So what I do is, they first embed this data into this n, d, and now we have dimension e here,"}, {"start": 523.6, "end": 527.9200000000001, "text": " and I'll get into a bit more details of how exactly the input and output embeddings work,"}, {"start": 527.9200000000001, "end": 533.2, "text": " but having done that, somehow, consider it a black box for now, what I do is the following."}, {"start": 533.2, "end": 539.0400000000001, "text": " So they apply, so they first flatten out this thing, and they apply a data point, so basically"}, {"start": 539.36, "end": 543.76, "text": " the same thing as what was finally did in the original transformer, so we have self-attention"}, {"start": 543.76, "end": 549.28, "text": " from the original transformer applied here, across data points first. So they basically attend,"}, {"start": 549.28, "end": 554.64, "text": " as you can see these lines here, every data point attends to every other data point, so we do"}, {"start": 554.64, "end": 560.3199999999999, "text": " a VASVANI type self-attention, then they reshape it, and then they do the same thing just across"}, {"start": 560.3199999999999, "end": 568.64, "text": " the, this time across attributes, and they repeat this n times, basically as a for loop, and then"}, {"start": 568.64, "end": 575.68, "text": " they didn't draw it here, but basically they have, again, they have output linear projection layer,"}, {"start": 575.68, "end": 580.4799999999999, "text": " and that's the whole thing, the architecture is pretty simple. Now if you're familiar with"}, {"start": 580.4799999999999, "end": 585.52, "text": " recently published MLP mixer, and I'll link it somewhere here if you haven't watched my video,"}, {"start": 585.52, "end": 593.5999999999999, "text": " I've explained it there, but like, you can see that if we treat patch like a data point, so MLP"}, {"start": 593.5999999999999, "end": 598.8, "text": " mixer like inputs an image, you split it into patches, and if we treat those patches as data"}, {"start": 598.8, "end": 604.3199999999999, "text": " points, there is a similarity in the mixing part at least, so they also have, so basically"}, {"start": 604.32, "end": 609.6, "text": " they also have the first part where they are attending over different channels, over different"}, {"start": 609.6, "end": 614.32, "text": " patches, so different data points, but the same channel, and here they have the part"}, {"start": 614.32, "end": 618.1600000000001, "text": " where they are attending basically the samehow where they are attending basically the same"}, {"start": 618.1600000000001, "end": 621.9200000000001, "text": " patch, but different channels, so that will be, that will correspond to different attributes"}, {"start": 621.9200000000001, "end": 626.32, "text": " here. So this corresponds to this part, and this part where they are attending over different"}, {"start": 626.32, "end": 631.12, "text": " data points corresponds to this part, where they are attending over different patches."}, {"start": 631.12, "end": 635.14, "text": " they're attending over different patches. 
So I just thought that's a neat"}, {"start": 635.14, "end": 639.46, "text": " analogy and other than that they are not using attendance. So the whole point of"}, {"start": 639.46, "end": 644.0600000000001, "text": " MLP mixture paper was to ditch the attention and just focus on the MLPs so"}, {"start": 644.0600000000001, "end": 647.92, "text": " I have just have less even less inductive biases in this architecture."}, {"start": 647.92, "end": 651.22, "text": " Okay there was a small digression hopefully you found that connection"}, {"start": 651.22, "end": 657.66, "text": " useful. Now let me go and try and focus on explaining the the actual architecture."}, {"start": 657.66, "end": 662.48, "text": " So quick mention again we focus on single target classification and"}, {"start": 662.48, "end": 669.48, "text": " regression corresponding to a masking matrix M with ones at all entries of the"}, {"start": 669.48, "end": 676.04, "text": " table column X that this is just a means that the last column is masked but"}, {"start": 676.04, "end": 680.48, "text": " online multi-target settings imputation self-supervision using input feature is"}, {"start": 680.48, "end": 684.56, "text": " also possible. So that's what I told you when I said like basically they can also"}, {"start": 684.56, "end": 689.7199999999999, "text": " mask some other parts of the of the input data and that's really cool. So the cool"}, {"start": 689.7199999999999, "end": 695.16, "text": " part here is that and I noted it here basically it joins the learns imputation"}, {"start": 695.16, "end": 699.3199999999999, "text": " plus prediction. So in conventional approaches what you usually have to do"}, {"start": 699.3199999999999, "end": 703.16, "text": " when you use those boosting methods you have to have a two-stage approach. The"}, {"start": 703.16, "end": 707.8, "text": " first stage just imputes the values and now again imputation just means that"}, {"start": 707.8, "end": 712.04, "text": " you're missing some values. If this is your tabular data and let's assume"}, {"start": 712.04, "end": 716.3199999999999, "text": " you're missing some value and I'll just denote that as a question mark these"}, {"start": 716.3199999999999, "end": 721.4599999999999, "text": " algorithms usually have to maybe apply some heuristic like maybe they'll have to"}, {"start": 721.4599999999999, "end": 725.8399999999999, "text": " attend over the same row and maybe find I don't know like mean or median and"}, {"start": 725.8399999999999, "end": 730.68, "text": " impute it right here or they can even train a dedicated machine learning"}, {"start": 730.68, "end": 735.3199999999999, "text": " model which will learn how to impute those values. Once the imputation is done"}, {"start": 735.3199999999999, "end": 740.4399999999999, "text": " that stage is done now they only can do prediction. Here this model MPT does this"}, {"start": 740.44, "end": 744.5200000000001, "text": " jointly and that's really cool because it can learn much better how to do"}, {"start": 744.5200000000001, "end": 751.0, "text": " imputation in this end-to-end deep learning scenario because basically it"}, {"start": 751.0, "end": 755.2800000000001, "text": " can also attend other data points and make much better much more educated"}, {"start": 755.2800000000001, "end": 761.0400000000001, "text": " decisions here. 
So that's something I thought mentioning and yeah I'll get"}, {"start": 761.0400000000001, "end": 764.8800000000001, "text": " back to this equivariance thing and I'll make some nice connections to graph"}, {"start": 764.8800000000001, "end": 769.4000000000001, "text": " neural networks but like for the time being let me just again I showed you the"}, {"start": 769.4, "end": 772.24, "text": " the high-level picture of how the architecture works now let me get into a"}, {"start": 772.24, "end": 777.36, "text": " bit more details. So again I won't be explaining the actual self-attention"}, {"start": 777.36, "end": 780.92, "text": " mechanism I have a video I made a video on that you can check it out it's a"}, {"start": 780.92, "end": 784.52, "text": " basically Vasvani self-attention so you've been seeing if you're following"}, {"start": 784.52, "end": 789.1999999999999, "text": " your like recent research this is being used everywhere so I'll skip that part"}, {"start": 789.1999999999999, "end": 794.56, "text": " and I'll just focus on explaining the input output embeddings. So let me just"}, {"start": 794.56, "end": 800.7199999999999, "text": " scroll down in the appendix okay it's gonna be so okay by the way it's the"}, {"start": 800.7199999999999, "end": 805.68, "text": " paper is really neat they have they added really cool things like checklists"}, {"start": 805.68, "end": 811.56, "text": " and computational requirements and also some implicated like societal"}, {"start": 811.56, "end": 815.5999999999999, "text": " implications so I really love how the paper is written and yeah kudos to the"}, {"start": 815.5999999999999, "end": 821.4, "text": " authors. Okay let's see how the how the thing works in the initial stage of the"}, {"start": 821.4, "end": 826.24, "text": " pipeline of the architecture so basically we have we have a data set so"}, {"start": 826.24, "end": 832.12, "text": " this is the data set tabular so we have and data points we have the attributes"}, {"start": 832.12, "end": 837.92, "text": " one of them is also the the target column so implicitly that's the last"}, {"start": 837.92, "end": 842.8, "text": " column and how it works is the following so they say it here we'll learn separate"}, {"start": 842.8, "end": 847.24, "text": " embedding weights for each attribute and that makes sense because each attribute"}, {"start": 847.24, "end": 853.88, "text": " contains its own semantics and we'll now see like even like in more details why"}, {"start": 853.88, "end": 859.04, "text": " that is so let's say we have this column so let's assume this this column"}, {"start": 859.04, "end": 863.4, "text": " contains categorical variables so we'll have a certain number of classes there"}, {"start": 863.4, "end": 867.6, "text": " and the first step they do for a categorical variable is they just"}, {"start": 867.6, "end": 874.04, "text": " basically encoded into a one-hot encoding so this is going to be size of C"}, {"start": 874.04, "end": 880.0799999999999, "text": " where C is the number of classes and basically so so let's say if we if you"}, {"start": 880.0799999999999, "end": 884.24, "text": " have a number I don't know two and assume we have I don't know maybe four"}, {"start": 884.24, "end": 889.7199999999999, "text": " classes this is going to be embedded into a one-hot vector like this so"}, {"start": 889.7199999999999, "end": 896.68, "text": " basically we have four slots here and one goes here zeros elsewhere okay so"}, {"start": 896.68, "end": 902.5999999999999, "text": " 
they're gonna embed every single attribute in that column across"}, {"start": 902.6, "end": 907.5600000000001, "text": " different data points into that one-hot embedding okay so the next step they do"}, {"start": 907.5600000000001, "end": 914.64, "text": " is they append the the mask here so they'll append the mask down all the way"}, {"start": 914.64, "end": 919.72, "text": " down right and they do this for every single column so now let's assume this"}, {"start": 919.72, "end": 925.0400000000001, "text": " one is maybe continuous variable the first thing they do here is they"}, {"start": 925.0400000000001, "end": 929.8000000000001, "text": " standardize the column so they just make it have mean of zero and standard"}, {"start": 929.8, "end": 936.0799999999999, "text": " standard deviation of one or variance equal to one and basically now after"}, {"start": 936.0799999999999, "end": 940.64, "text": " they do that they'll just have a single dimension here so that's one not C and"}, {"start": 940.64, "end": 946.0799999999999, "text": " they'll append the mask again okay so that's gonna be two and as you can see"}, {"start": 946.0799999999999, "end": 950.3599999999999, "text": " and also some different column may have different number of classes here so it"}, {"start": 950.3599999999999, "end": 957.1999999999999, "text": " may be C C prime whatever and that means for that reason we'll basically have to"}, {"start": 957.2, "end": 961.96, "text": " have different independent linear projection layers not just because of"}, {"start": 961.96, "end": 967.24, "text": " the semantics but also because of the pure shape mismatch so you can think of"}, {"start": 967.24, "end": 972.08, "text": " it like both ways so once we have that we just have like linear projection"}, {"start": 972.08, "end": 975.9200000000001, "text": " layer so which will just project all of these so whatever the number of"}, {"start": 975.9200000000001, "end": 980.8000000000001, "text": " dimensions here is into e dimensions so this is going to be kind of projected"}, {"start": 980.8000000000001, "end": 986.2, "text": " into like you can imagine having a fully connected layer here blah blah I don't"}, {"start": 986.2, "end": 990.2, "text": " know how to draw today but basically you can imagine it like this is you'll have"}, {"start": 990.2, "end": 999.0, "text": " e dimensions and we'll end up with a n times d times e volume the same thing"}, {"start": 999.0, "end": 1004.6800000000001, "text": " you saw on the high level diagram so that's the embedding part almost let me"}, {"start": 1004.6800000000001, "end": 1009.6, "text": " just show you the formula so this in code this is the part that embeds the"}, {"start": 1009.6, "end": 1014.08, "text": " categorical variables into one hot embeddings then they append the mask so"}, {"start": 1014.08, "end": 1021.4000000000001, "text": " again the mask will just have dimensions n times d and if you have one somewhere"}, {"start": 1021.4000000000001, "end": 1025.48, "text": " and I forgot to mention that part if you have one somewhere that means that part"}, {"start": 1025.48, "end": 1031.72, "text": " is going to be masked out so let's assume this part here has one so that"}, {"start": 1031.72, "end": 1039.44, "text": " means they do the following aside from appending one here at the last like"}, {"start": 1039.44, "end": 1044.8400000000001, "text": " slot here they're going to mask out all of this with high probability so all of"}, {"start": 1044.8400000000001, "end": 1051.52, "text": " 
these will get set to zero and so that means that that one hot vector will just"}, {"start": 1051.52, "end": 1057.04, "text": " be zeroed out so this hot this one hot vector will just be zeroed out so we'll"}, {"start": 1057.04, "end": 1061.4, "text": " have just all zeros all the way here so as in birth this is either going to be"}, {"start": 1061.4, "end": 1066.56, "text": " zeroed out or they're gonna just set our random value in there and that's how the"}, {"start": 1066.56, "end": 1070.8799999999999, "text": " thing works so again getting back to to the formula you can see after we do the"}, {"start": 1070.8799999999999, "end": 1074.36, "text": " one hot encoding for categorical variables or just normalization for"}, {"start": 1074.36, "end": 1079.6799999999998, "text": " continuous variables we append the mask we project this into the into this e"}, {"start": 1079.6799999999998, "end": 1085.6799999999998, "text": " dimensions and then we just concatenate across all the different columns IE"}, {"start": 1085.6799999999998, "end": 1090.32, "text": " attributes and then they additionally have these learned embeddings for"}, {"start": 1090.32, "end": 1094.3999999999999, "text": " index and type I'm not sure whether they did ablation on whether they actually"}, {"start": 1094.4, "end": 1098.5600000000002, "text": " need the both of these whether it can be jointly learned like the type and index"}, {"start": 1098.5600000000002, "end": 1102.72, "text": " but yeah if anybody knows or the authors are watching the video they can maybe"}, {"start": 1102.72, "end": 1107.66, "text": " explain it all in the comments having done that for a single attribute column"}, {"start": 1107.66, "end": 1110.8000000000002, "text": " so this is for a single attribute column so this was this part maybe we're"}, {"start": 1110.8000000000002, "end": 1115.3200000000002, "text": " focusing only on this specific column on this volume now they just concatenate"}, {"start": 1115.3200000000002, "end": 1121.4, "text": " all of those as you can see across the axis D so this is the D axis and that's"}, {"start": 1121.4, "end": 1131.0800000000002, "text": " it that's it you get the NDE volume and similarly for the output embedding I'll"}, {"start": 1131.0800000000002, "end": 1135.52, "text": " just briefly explain it so here they're not predicting the mass variable so"}, {"start": 1135.52, "end": 1142.4, "text": " they'll just end up with so EJ was the notation they used to denote this part"}, {"start": 1142.4, "end": 1150.1200000000001, "text": " like without the the masking dimension that's added so they're just going to"}, {"start": 1150.12, "end": 1159.3999999999999, "text": " again linearly project all of these this E into into E let's call it subscript"}, {"start": 1159.3999999999999, "end": 1164.56, "text": " subscript D where this is basically depends on the on the column so for this"}, {"start": 1164.56, "end": 1170.52, "text": " column this ED is going to be equal to C for this continuous variable column is"}, {"start": 1170.52, "end": 1177.1999999999998, "text": " going to be equal to 1 etc so yeah that's basically needed so that we can"}, {"start": 1177.2, "end": 1183.24, "text": " then do the like appropriate apply the appropriate loss across that column so"}, {"start": 1183.24, "end": 1189.28, "text": " it's very easy they say here so it's one for continuous attributes it's the"}, {"start": 1189.28, "end": 1193.04, "text": " number of categories for a categorical attribute and to obtain the final"}, {"start": 
1193.04, "end": 1198.2, "text": " prediction matrix we take the argmax over the categorical predictions so"}, {"start": 1198.2, "end": 1202.24, "text": " that's how they do the prediction so basically whatever they spit out in the"}, {"start": 1202.24, "end": 1207.16, "text": " last layer maybe like they'll have some probabilities obviously if we extract"}, {"start": 1207.16, "end": 1212.1200000000001, "text": " this part here and then just make it a bit nicer to see so if I extract it here"}, {"start": 1212.1200000000001, "end": 1218.76, "text": " it's going to be some distribution and like dimensions here are going to be C"}, {"start": 1218.76, "end": 1225.08, "text": " as I said so it's gonna be just some distribution of across the the classes"}, {"start": 1225.08, "end": 1230.68, "text": " and they're just gonna do like like normal like argmax and find the the the"}, {"start": 1230.68, "end": 1234.44, "text": " highest probability class and that's their prediction okay that was lots of"}, {"start": 1234.44, "end": 1239.4, "text": " details on the on the architecture let me let me get back to the paper and tell"}, {"start": 1239.4, "end": 1245.64, "text": " you some interesting stuff so first things first I don't want to finish with"}, {"start": 1245.64, "end": 1250.6000000000001, "text": " that with the loss so as I said before so they're applying stochastic feature"}, {"start": 1250.6000000000001, "end": 1254.2, "text": " masking they're applying stochastic target masking similarly to BERT and"}, {"start": 1254.2, "end": 1258.8, "text": " during the training we compute the negative log likelihood loss at training"}, {"start": 1258.8, "end": 1263.8799999999999, "text": " targets as well as the auxiliary loss from masked out features so that's the"}, {"start": 1263.8799999999999, "end": 1268.6, "text": " part I mentioned so assuming we have a classification problem what I do is the"}, {"start": 1268.6, "end": 1274.68, "text": " following so they'll output again this would be the output of their model of"}, {"start": 1274.68, "end": 1280.96, "text": " the NPT and let's focus on a single attribute on a single slot here"}, {"start": 1280.96, "end": 1285.56, "text": " basically they said this is going to be C and they're simply going to take the"}, {"start": 1285.56, "end": 1290.0, "text": " the ground truth position so maybe this is the ground truth class they're gonna"}, {"start": 1290.0, "end": 1294.04, "text": " take the probability that they had there so let me just kind of draw it"}, {"start": 1294.04, "end": 1298.12, "text": " three-dimensional here so they're just going to extract the probability there"}, {"start": 1298.12, "end": 1306.9199999999998, "text": " and they're just gonna do minus log p of I so in order to minimize the loss you"}, {"start": 1306.9199999999998, "end": 1311.44, "text": " want to have your obviously so that's like cross entropy simple stuff you"}, {"start": 1311.44, "end": 1317.3400000000001, "text": " want to have your probability going to one this is the output PI and this is"}, {"start": 1317.3400000000001, "end": 1322.76, "text": " the loss okay the y-axis so they just summed that out across all of the"}, {"start": 1322.76, "end": 1328.3200000000002, "text": " masked out features so assume we had one here for for mask assume we had maybe if"}, {"start": 1328.3200000000002, "end": 1334.18, "text": " we had one here we're gonna additionally extract the class for that one and do"}, {"start": 1334.18, "end": 1339.52, "text": " minus log PI and we're gonna 
just do a sum over those and just do a back prop"}, {"start": 1339.52, "end": 1343.44, "text": " through the model hopefully that was clear so long sure store they're just"}, {"start": 1343.44, "end": 1349.68, "text": " going to sum up across all of the masked by their features or targets here and"}, {"start": 1349.68, "end": 1354.6399999999999, "text": " they're going to do a sum and just do a big prop on their loss okay so"}, {"start": 1354.6399999999999, "end": 1358.68, "text": " interestingly stochastic target masking means that many training targets are"}, {"start": 1358.68, "end": 1362.56, "text": " unmasked to the model at training time so this is interesting that means that"}, {"start": 1362.56, "end": 1367.36, "text": " this column which contains the targets will not be like completely massive some"}, {"start": 1367.36, "end": 1371.1999999999998, "text": " parts will have zeros here and that means that the model can actually see"}, {"start": 1371.1999999999998, "end": 1376.36, "text": " that the correct like target value for that for that data point this allows"}, {"start": 1376.36, "end": 1381.3999999999999, "text": " MPTs to learn to predict at each epoch the masked targets of certain training"}, {"start": 1381.3999999999999, "end": 1386.52, "text": " data points using the targets of other training data points in addition to all"}, {"start": 1386.52, "end": 1390.76, "text": " training data features okay so that's an interesting part and we'll see how that"}, {"start": 1390.76, "end": 1397.24, "text": " helps them do relational reasoning later on here's just a short note I'll"}, {"start": 1397.24, "end": 1402.1200000000001, "text": " obviously this approach will have some problems with scaling because they are"}, {"start": 1402.1200000000001, "end": 1406.4, "text": " attending like bunch of training data points during inference as well and yeah"}, {"start": 1406.4, "end": 1410.1200000000001, "text": " that's something that future research will need to tackle with okay"}, {"start": 1410.1200000000001, "end": 1415.56, "text": " architecture explained now let me actually show you some exciting results"}, {"start": 1415.56, "end": 1420.24, "text": " and first give you some connections to graph neural networks that I promised"}, {"start": 1420.24, "end": 1423.48, "text": " here so first things first they sit here"}, {"start": 1423.48, "end": 1428.52, "text": " MPTs are equivalent to a permutation of the data points in other words if the"}, {"start": 1428.52, "end": 1433.08, "text": " set of input data points are shuffled so if we have the tabular data you have"}, {"start": 1433.08, "end": 1440.92, "text": " and and and rows d columns so we kind of shuffle these rows MPTs produce the same"}, {"start": 1440.92, "end": 1446.04, "text": " predictions but shuffled in an analogous manner so that's the equivalent part so"}, {"start": 1446.04, "end": 1450.56, "text": " it should not depend on their ordering that's the important part so now let me"}, {"start": 1450.56, "end": 1454.96, "text": " just kind of give you some connections with CNN's and graph neural networks and"}, {"start": 1454.96, "end": 1460.04, "text": " hopefully you'll find that useful okay where are we let me just find the yep"}, {"start": 1460.04, "end": 1466.1599999999999, "text": " here it is so they mentioned here that MPTs can be seen as a conceptual"}, {"start": 1466.1599999999999, "end": 1469.96, "text": " generalization of graph convolutional networks and again I've covered that in"}, {"start": 1469.96, "end": 
1472.96, "text": " my previous video you can check it out I'll link it somewhere here and the"}, {"start": 1472.96, "end": 1476.56, "text": " other graph neural networks like get again I've checked I've covered that one"}, {"start": 1476.56, "end": 1481.0, "text": " as well so that's graph attention network engine in which a set of"}, {"start": 1481.0, "end": 1486.1599999999999, "text": " dependencies edges between data points is not known a priori and instead is"}, {"start": 1486.1599999999999, "end": 1491.44, "text": " learned from data using self-attention so this is the interesting part and now"}, {"start": 1491.44, "end": 1495.56, "text": " let me let me let me show you what they mean by that so amputees learn an"}, {"start": 1495.56, "end": 1499.6, "text": " implicit graph structure so whereas if you have if you're familiar with graph"}, {"start": 1499.6, "end": 1503.52, "text": " neural networks let me let me draw a graph here basically you have some notes"}, {"start": 1503.52, "end": 1510.8, "text": " in your graph and for example for forget you'll have explicit connections here"}, {"start": 1510.8, "end": 1515.6399999999999, "text": " okay you'll have something like this and then what graph attention network will"}, {"start": 1515.6399999999999, "end": 1520.6399999999999, "text": " do it will just kind of assign certain attention coefficients to each of these"}, {"start": 1520.6399999999999, "end": 1526.48, "text": " edges and it's going to aggregate all of these node features so every node has a"}, {"start": 1526.48, "end": 1532.16, "text": " feature vector associated with it it's going to kind of aggregate all of these"}, {"start": 1532.16, "end": 1540.0, "text": " by some like invariant like mapping and they're going to get the final embedding"}, {"start": 1540.0, "end": 1544.24, "text": " for this vector so this was maybe so prior to this we had this vector had"}, {"start": 1544.24, "end": 1549.0, "text": " like embedding vector V and now after this accumulation across the"}, {"start": 1549.0, "end": 1554.42, "text": " neighborhood so we're somehow modifying these feature vectors we are somehow"}, {"start": 1554.42, "end": 1559.4, "text": " combining them and we get V prime that's the updated representation so now the"}, {"start": 1559.4, "end": 1564.16, "text": " difference here is between get and in general the graph neural networks is"}, {"start": 1564.16, "end": 1568.76, "text": " they have explicit graph structure so compared to that amputees the"}, {"start": 1568.76, "end": 1572.3200000000002, "text": " nonparametric transformers now do the following it is something similar that"}, {"start": 1572.3200000000002, "end": 1577.2, "text": " transformer there with like like with with sentences basically it assumes a"}, {"start": 1577.2, "end": 1581.6000000000001, "text": " fully connected graph so it assumes that all of these are connected with each"}, {"start": 1581.6000000000001, "end": 1585.6000000000001, "text": " other okay I think I've connected everything basically and now it just"}, {"start": 1585.6, "end": 1590.6399999999999, "text": " learns which of these edges have what kind of strength and so it's implicitly"}, {"start": 1590.6399999999999, "end": 1594.36, "text": " inferring the structure of the graph without being said like which edges"}, {"start": 1594.36, "end": 1598.36, "text": " exist on which edges does not exist so it's a more of a soft assignment here"}, {"start": 1598.36, "end": 1602.04, "text": " compared to graph neural network so that's the first 
connection second"}, {"start": 1602.04, "end": 1605.6, "text": " things second thing I want to mention is that like graph attention network is"}, {"start": 1605.6, "end": 1609.8799999999999, "text": " invariant to the data point ordering so it's not active variant it's invariant"}, {"start": 1609.8799999999999, "end": 1615.1599999999999, "text": " that means no matter how you have these node feature vectors in your table so"}, {"start": 1615.16, "end": 1619.48, "text": " basically let's say this is one this is two this is three you'll represent your"}, {"start": 1619.48, "end": 1623.8400000000001, "text": " graph as certain table where again here we have data points and here you have"}, {"start": 1623.8400000000001, "end": 1628.8400000000001, "text": " number of attributes so now no matter if you if you shuffle these one two three"}, {"start": 1628.8400000000001, "end": 1633.64, "text": " like in whatever permutation you'll still have the same V prime feature"}, {"start": 1633.64, "end": 1640.24, "text": " vector here because what graph attention network does is it just sums sums up"}, {"start": 1640.24, "end": 1647.76, "text": " these these these values and basically some enables us to have these this"}, {"start": 1647.76, "end": 1652.64, "text": " invariance property right because some doesn't care about the permutation of"}, {"start": 1652.64, "end": 1657.08, "text": " your input so on the other hand let me take a convolutional neural network"}, {"start": 1657.08, "end": 1660.4, "text": " just to explain the active variance part I think you'll find it useful"}, {"start": 1660.4, "end": 1665.28, "text": " basically let's assume we have an image here and in this image we have certain"}, {"start": 1665.28, "end": 1670.36, "text": " like an image of a square okay and now if we pass that to a CNN and let's"}, {"start": 1670.36, "end": 1674.24, "text": " assume we're in the first layer of the convolutional neural network and that the"}, {"start": 1674.24, "end": 1680.52, "text": " the first kernel the first filter learns to just figure out the vertical edges in"}, {"start": 1680.52, "end": 1684.6399999999999, "text": " in your in your image so what the image will look like is something like this"}, {"start": 1684.6399999999999, "end": 1689.36, "text": " we'll have something like this and we'll have like two vertical edges here okay so"}, {"start": 1689.36, "end": 1693.56, "text": " the active variance means the following if we now take this square and we"}, {"start": 1693.56, "end": 1701.56, "text": " translate it here maybe here now what will happen in the output is this"}, {"start": 1701.56, "end": 1708.0, "text": " response will just shift by the same amount here so as you can see it's"}, {"start": 1708.0, "end": 1712.44, "text": " equivalent it's varying with input it's not invariant to the position of the"}, {"start": 1712.44, "end": 1718.6799999999998, "text": " square it's actually varying if that's even a verb with the input okay so the"}, {"start": 1718.68, "end": 1724.04, "text": " invariant part income nets usually comes at the like at the classification head"}, {"start": 1724.04, "end": 1728.52, "text": " part because there you really don't care you just care that you have a square in"}, {"start": 1728.52, "end": 1731.8, "text": " the image if you're trying to classify it you don't care where it is so it's"}, {"start": 1731.8, "end": 1736.3200000000002, "text": " actually desirable property to have invariance at the latest stages of the"}, {"start": 1736.3200000000002, "end": 
1740.3600000000001, "text": " CNNs but like first stages have active variance so that was just a short"}, {"start": 1740.3600000000001, "end": 1745.72, "text": " summary of what's a covariance what's invariance and so basically now that we"}, {"start": 1745.72, "end": 1750.3600000000001, "text": " know that we can say two things first things first is that amputees are"}, {"start": 1750.3600000000001, "end": 1753.92, "text": " active variant second thing is that they are implicitly learning the graph"}, {"start": 1753.92, "end": 1759.0, "text": " structure over the data set and that's really cool but unfortunately it's a"}, {"start": 1759.0, "end": 1763.1200000000001, "text": " kind of computationally expensive and yeah they say here we'll leave further"}, {"start": 1763.1200000000001, "end": 1766.32, "text": " exploration of the close connection between amputees and graph neural"}, {"start": 1766.32, "end": 1769.64, "text": " networks to future work I just briefly mentioned a couple of obvious"}, {"start": 1769.64, "end": 1773.92, "text": " properties I guess some of the more knowledgeable people from the graph ML"}, {"start": 1773.92, "end": 1777.92, "text": " community could write down comments of what interesting other properties this"}, {"start": 1777.92, "end": 1783.88, "text": " thing has with graph neural networks okay that was it now let me jump to"}, {"start": 1783.88, "end": 1788.16, "text": " experiments they have some really nice experiments okay the first experiment"}, {"start": 1788.16, "end": 1793.72, "text": " they do is they they basically test this on a tabular data set a collection of"}, {"start": 1793.72, "end": 1798.44, "text": " tabular data sets ten of them to be precise and they say it here amputees"}, {"start": 1798.44, "end": 1803.52, "text": " perform competitively on established benchmarks tabular data is"}, {"start": 1803.52, "end": 1808.04, "text": " ubiquitous in real-world machine learning but notoriously challenging for"}, {"start": 1808.04, "end": 1813.08, "text": " general purpose deep neural networks which consistently underperform boosting"}, {"start": 1813.08, "end": 1819.0, "text": " methods and are rarely used in practice so again short note boosting methods"}, {"start": 1819.0, "end": 1823.28, "text": " they are super simple you basically just take take a weak learner which means"}, {"start": 1823.28, "end": 1827.28, "text": " it's a some some really weak model that can be a bit better than random guessing"}, {"start": 1827.28, "end": 1831.96, "text": " and they combine these in a clever way for example majority vote or something"}, {"start": 1831.96, "end": 1836.44, "text": " to get something called strong learner which actually has some nice accuracy"}, {"start": 1836.44, "end": 1841.64, "text": " let's say if we're doing classification so now there was a bunch of like"}, {"start": 1841.64, "end": 1846.02, "text": " different work that came after the original boosting method so there is"}, {"start": 1846.02, "end": 1850.0, "text": " first thing is this add a boost so there's just adaptive boosting where once"}, {"start": 1850.0, "end": 1856.04, "text": " you create the first week a classifier let's say you basically use the data"}, {"start": 1856.04, "end": 1861.16, "text": " points is performing poor on to feed those into the next week learner which"}, {"start": 1861.16, "end": 1866.6200000000001, "text": " will learn to perform much better on the points where this first learner was"}, {"start": 1866.6200000000001, "end": 1870.72, "text": " 
having difficulties with so that's the add a boost and we had gradient boosting"}, {"start": 1870.72, "end": 1874.18, "text": " and then there is a bunch of variations on gradient boosting so here you can see"}, {"start": 1874.18, "end": 1878.88, "text": " XGBoost which is just a performant like implementation of that gradient"}, {"start": 1878.88, "end": 1883.88, "text": " boosting method we have cat boost also gradient boosting implementation light"}, {"start": 1883.88, "end": 1889.92, "text": " gradient boosting method again gradient boosting and yeah so you get the point"}, {"start": 1889.92, "end": 1893.0800000000002, "text": " these boosting methods are pretty popular on tabular data you'll see them"}, {"start": 1893.0800000000002, "end": 1898.52, "text": " all around kegel and now let's see the results they got and they're pretty"}, {"start": 1898.52, "end": 1904.1200000000001, "text": " spectacular if you take a look at this binary classification tasks so I think"}, {"start": 1904.1200000000001, "end": 1907.6000000000001, "text": " there was four of these you can see MPT is the first in this table it's better"}, {"start": 1907.6000000000001, "end": 1913.16, "text": " than then cat boost and light GBM actually boosts altogether it's also the"}, {"start": 1913.16, "end": 1917.72, "text": " first on this table where they're doing multi-class classification and MPT is"}, {"start": 1917.72, "end": 1922.56, "text": " again the best method it's got a bit less variance compared to XGBoost they're"}, {"start": 1922.56, "end": 1928.48, "text": " otherwise tied here and finally we have regression tasks where MPT was a bit"}, {"start": 1928.48, "end": 1932.8, "text": " worse so you can see it's tied with XGBoost it has a little bit bigger"}, {"start": 1932.8, "end": 1939.52, "text": " variance and cat boost is kind of really really good for this for this task but"}, {"start": 1939.52, "end": 1946.0, "text": " nonetheless they mentioned it here so in addition to its strong rank-wise"}, {"start": 1946.0, "end": 1952.08, "text": " performance MPT achieves best performance on four out of ten benchmark"}, {"start": 1952.08, "end": 1956.76, "text": " datasets more than any other method so they definitely proven that this neural"}, {"start": 1956.76, "end": 1961.96, "text": " network can be used for tabular data and give some nice results by the way if"}, {"start": 1961.96, "end": 1966.44, "text": " you're asking yourself maybe it's not fair like they're they're probably used"}, {"start": 1966.44, "end": 1971.0, "text": " much more computation to train the MPT that's actually not true because for"}, {"start": 1971.0, "end": 1974.88, "text": " just figuring out the best hyperparameters for those boosting methods"}, {"start": 1974.88, "end": 1979.16, "text": " they actually wrote something some somewhere in the paper that the"}, {"start": 1979.16, "end": 1985.96, "text": " computation necessary was pretty much on pair so as so this is supporting our"}, {"start": 1985.96, "end": 1990.1200000000001, "text": " hypothesis that attention between data points is a useful architectural"}, {"start": 1990.1200000000001, "end": 1997.5200000000002, "text": " inductive bias for prediction and aside from these tabular datasets they also"}, {"start": 1997.5200000000002, "end": 2004.2800000000002, "text": " tested MPT on the image kind of benchmarks like I think MNIST and Cypher-10"}, {"start": 2004.28, "end": 2008.32, "text": " and they said here similar to previous work on transformers for computer vision"}, 
{"start": 2008.32, "end": 2013.44, "text": " we would expect pre-training a millions of images to significantly boost MPT's"}, {"start": 2013.44, "end": 2017.8799999999999, "text": " performance and if you watch my previous video you now know that vision"}, {"start": 2017.8799999999999, "end": 2022.72, "text": " transformers and MLP mixers don't need pre-training at all if you use this"}, {"start": 2022.72, "end": 2028.24, "text": " thing called sharpness aware minimization objective and again I'll"}, {"start": 2028.24, "end": 2032.04, "text": " link the video somewhere here you can check it out basically if we have if you"}, {"start": 2032.04, "end": 2038.0, "text": " somehow find a way to to smoothen out the lost landscape you'll notice that"}, {"start": 2038.0, "end": 2041.92, "text": " you don't need extensive pre-training anymore and you can achieve results that"}, {"start": 2041.92, "end": 2046.76, "text": " outperform resonant baselines and that's it in a nutshell so I guess the authors"}, {"start": 2046.76, "end": 2050.88, "text": " of the paper either either didn't have the time to rewrite this or they still"}, {"start": 2050.88, "end": 2056.2, "text": " haven't read that paper which recently came out a couple of days ago okay"}, {"start": 2056.2, "end": 2062.64, "text": " that's the first part now let's see some some some cool things this diagram here"}, {"start": 2062.64, "end": 2071.68, "text": " let me just introduce it basically they make this semi synthetic the problem and"}, {"start": 2071.68, "end": 2076.12, "text": " they say it here for each batch we input the original data with masked target"}, {"start": 2076.12, "end": 2080.9199999999996, "text": " values as well as a copy of the original data where all target values have been"}, {"start": 2080.92, "end": 2086.96, "text": " revealed I know masking is applied at this time we input novel semi synthetic"}, {"start": 2086.96, "end": 2092.16, "text": " test data to ensure that MPT has learned the correct relational mechanism as"}, {"start": 2092.16, "end": 2098.6800000000003, "text": " opposed to just memorizing the target values okay so they they have a nice"}, {"start": 2098.6800000000003, "end": 2103.7200000000003, "text": " nice chart here where you can see the what's what's the input so this is the"}, {"start": 2103.7200000000003, "end": 2107.92, "text": " input we have some data points and we have the target values there are hidden"}, {"start": 2107.92, "end": 2114.04, "text": " denoted by these question marks and we have the actual correct values here so"}, {"start": 2114.04, "end": 2119.32, "text": " if MPT smart even though we're gonna just back prop on these masked values is"}, {"start": 2119.32, "end": 2124.6800000000003, "text": " going to learn how to attend how to find this similar vector so this is basically"}, {"start": 2124.6800000000003, "end": 2129.28, "text": " the same vector and then just copy this value here instead of learning to"}, {"start": 2129.28, "end": 2134.28, "text": " somehow map and use the attributes is going to learn the relational mechanism"}, {"start": 2134.28, "end": 2138.8, "text": " and why do we want that well because it's going to generalize much better if"}, {"start": 2138.8, "end": 2143.0800000000004, "text": " it learns how to just attend to the same vector and just copy the value that's"}, {"start": 2143.0800000000004, "end": 2148.6400000000003, "text": " the whole point of relational reasoning so as you can see here they draw some"}, {"start": 2148.6400000000003, "end": 
2152.8, "text": " attention maps so this green data point so that's this one the first one it"}, {"start": 2152.8, "end": 2157.52, "text": " attends to this one so that's this data point here so that means it learned"}, {"start": 2157.52, "end": 2163.0800000000004, "text": " completely and you can see attention here is one all around you have zeros if"}, {"start": 2163.08, "end": 2168.12, "text": " you take a look at this heat map so that means the whole attention of this data"}, {"start": 2168.12, "end": 2172.6, "text": " point of the first data point is on this one and that's exactly what we wanted"}, {"start": 2172.6, "end": 2177.24, "text": " and you can see the pattern going for all of these data points here and here"}, {"start": 2177.24, "end": 2181.68, "text": " this part you can ignore they didn't enforce this they did write something"}, {"start": 2181.68, "end": 2186.04, "text": " about this in the appendix why these data points are attending to themselves"}, {"start": 2186.04, "end": 2190.36, "text": " so that's something may be interesting taking a look into but yeah basically"}, {"start": 2190.36, "end": 2195.96, "text": " you can see the results are really cool Pearson correlation is 99.9 that means"}, {"start": 2195.96, "end": 2200.2400000000002, "text": " basically a fancy way of saying that it basically learns how to copy over these"}, {"start": 2200.2400000000002, "end": 2204.96, "text": " values or that's a hypothesis for now but like it's it's it's basically"}, {"start": 2204.96, "end": 2209.0, "text": " outputting perfect values as if it was copying them and we'll soon see that"}, {"start": 2209.0, "end": 2214.4, "text": " it's actually copying them so for this second row let me just go back down here"}, {"start": 2214.4, "end": 2218.92, "text": " okay so additionally we perform an interventional experiment to"}, {"start": 2218.92, "end": 2222.64, "text": " investigate the extent to which amputees have actually learned the causal"}, {"start": 2222.64, "end": 2227.44, "text": " mechanism underlying the lookup task and if you see these two keywords and if"}, {"start": 2227.44, "end": 2231.56, "text": " you're familiar with Judea Pearl's work you'll notice that this is basically"}, {"start": 2231.56, "end": 2237.48, "text": " rank two experiments so we have this thing called letter of causation the"}, {"start": 2237.48, "end": 2243.0, "text": " first letter is just observing data the second letter would be having acting in"}, {"start": 2243.0, "end": 2247.2000000000003, "text": " the environment or doing these these interventional kinds of experiments"}, {"start": 2247.2, "end": 2250.8799999999997, "text": " where you're setting something to certain value and then observing and"}, {"start": 2250.8799999999997, "end": 2256.52, "text": " finally we have a counterfactual level where we are imagining like different"}, {"start": 2256.52, "end": 2260.8399999999997, "text": " worlds and that sounds super abstract for this context but yeah just just I"}, {"start": 2260.8399999999997, "end": 2265.8799999999997, "text": " wanted to just associate this sentence here with Judea Pearl's work so if you"}, {"start": 2265.8799999999997, "end": 2270.08, "text": " want to know more about it you can check some of my previous LinkedIn posts or"}, {"start": 2270.08, "end": 2275.6, "text": " you can read the book of why from from Judea Pearl as a nice introduction to"}, {"start": 2275.6, "end": 2280.36, "text": " his work so the model is not confronted with target values associated with"}, 
{"start": 2280.36, "end": 2285.2799999999997, "text": " features that are highly unlikely under the training data and let's see why that"}, {"start": 2285.2799999999997, "end": 2291.04, "text": " is so basically what I do as you can see so it's the same experiment as the above"}, {"start": 2291.04, "end": 2297.8399999999997, "text": " but this time they take this target value and instead of basically keeping"}, {"start": 2297.8399999999997, "end": 2303.16, "text": " it the way it was in the original data set they tweak it they take that maybe"}, {"start": 2303.16, "end": 2310.24, "text": " there was 1.3 and they're going to kind of mess it up and put 7.5 here and that"}, {"start": 2310.24, "end": 2315.2799999999997, "text": " combination 7.5 with similar attributes have never been seen in the training"}, {"start": 2315.2799999999997, "end": 2320.7599999999998, "text": " data set so now if the model learned how to just kind of use the attributes and"}, {"start": 2320.7599999999998, "end": 2325.0, "text": " figure out the target value is gonna fail here and on the other hand if you'd"}, {"start": 2325.0, "end": 2328.8399999999997, "text": " learn how to do what I previously mentioned and that's just to attend to"}, {"start": 2328.84, "end": 2333.1600000000003, "text": " the vector and copy that value it's gonna have nice performance and as we"}, {"start": 2333.1600000000003, "end": 2338.6000000000004, "text": " can see here the Pearson correlation is again great that means it learned how to"}, {"start": 2338.6000000000004, "end": 2344.96, "text": " no matter these interventions it learned how to predict the exact value from from"}, {"start": 2344.96, "end": 2349.56, "text": " these slots and that means it learned the correct relational mechanism that's"}, {"start": 2349.56, "end": 2354.48, "text": " the point of these experiments and so we now confidently conclude that the MPTs"}, {"start": 2354.48, "end": 2358.76, "text": " robustly learn the causal data generating mechanism underlying the"}, {"start": 2358.76, "end": 2363.76, "text": " semi-synthetic data set this requires MPTs to learn a non-trivial sequence of"}, {"start": 2363.76, "end": 2368.28, "text": " computational steps they must learn to match rows based on similarity of"}, {"start": 2368.28, "end": 2373.76, "text": " relevant features to look up the target value of the duplicated data point and"}, {"start": 2373.76, "end": 2378.08, "text": " to copy that value into the target of the mask data point so that's what I"}, {"start": 2378.08, "end": 2383.2400000000002, "text": " already explained and yeah hopefully now it makes more sense okay next"}, {"start": 2383.24, "end": 2388.4399999999996, "text": " experiment they did was to corrupt the data points and see whether it's failing"}, {"start": 2388.4399999999996, "end": 2393.0, "text": " because it learned how to actually rely on those other data points so what I do"}, {"start": 2393.0, "end": 2396.7999999999997, "text": " they said here so this completely corrupts information from all data"}, {"start": 2396.7999999999997, "end": 2401.16, "text": " points except except the one for which we evaluate hence a model that relies"}, {"start": 2401.16, "end": 2404.4799999999996, "text": " meaningfully on attention between data points will show deteriorating"}, {"start": 2404.4799999999996, "end": 2411.56, "text": " performance so let me just explain you what they actually did and they did the"}, {"start": 2411.56, "end": 2418.16, "text": " following thing so we have a data set here and 
let's say we have a bunch of"}, {"start": 2418.16, "end": 2422.6, "text": " data points and let's say we want to predict this value here okay so this is"}, {"start": 2422.6, "end": 2427.4, "text": " the data point for which we want to figure out predict the class what I do"}, {"start": 2427.4, "end": 2432.04, "text": " is they just shuffle all of these attributes of all of the others so all"}, {"start": 2432.04, "end": 2437.08, "text": " of these other data points are going to get shuffled and that means this feature"}, {"start": 2437.08, "end": 2443.2, "text": " vector can't be matched now with other feature vectors in them like using the"}, {"start": 2443.2, "end": 2446.6, "text": " we can't use the mechanism we just described it's using and that's just"}, {"start": 2446.6, "end": 2450.52, "text": " figuring out matching the vector finding maybe I don't know maybe it learned to do"}, {"start": 2450.52, "end": 2454.7599999999998, "text": " cosine similarity now I can't do that because we shuffle the attributes and"}, {"start": 2454.7599999999998, "end": 2459.48, "text": " what I show is that performance deteriorates which is a further proof"}, {"start": 2459.48, "end": 2464.52, "text": " that it learned the correct relational mechanism it appears that when MPTs do"}, {"start": 2464.52, "end": 2468.2, "text": " not find it advantageous to rely on attention between data points during"}, {"start": 2468.2, "end": 2472.12, "text": " training they can learn to completely ignore the other inputs essentially"}, {"start": 2472.12, "end": 2476.2, "text": " collapsing into a standard parametric model this supports our earlier claims"}, {"start": 2476.2, "end": 2481.0, "text": " that amputees can learn end-to-end from data the extent to which they rely on"}, {"start": 2481.0, "end": 2484.4, "text": " other data points for prediction so that's a fancy way of saying that"}, {"start": 2484.4, "end": 2489.6, "text": " basically sometimes if we have a data set again so sometimes it's actually"}, {"start": 2489.6, "end": 2495.44, "text": " just relying on this particular row on a single data point to predict this value"}, {"start": 2495.44, "end": 2498.8399999999997, "text": " here because for that task you figure out it doesn't actually need to use"}, {"start": 2498.8399999999997, "end": 2504.7999999999997, "text": " other data points and for those data sets the the performance they show does"}, {"start": 2504.7999999999997, "end": 2509.64, "text": " not deteriorate which is again a further proof about proof informal proof that"}, {"start": 2509.64, "end": 2513.92, "text": " basically it learned the correct mechanism here they just have a table a"}, {"start": 2513.92, "end": 2519.48, "text": " bunch of different data sets you can see regressions here so yeah on certain"}, {"start": 2519.48, "end": 2524.08, "text": " data sets as I just mentioned it's not deteriorating that much because it"}, {"start": 2524.08, "end": 2529.72, "text": " doesn't rely on other data points that much okay having said that let me see if"}, {"start": 2529.72, "end": 2534.64, "text": " we have something else interesting and yeah this is the last part that's super"}, {"start": 2534.64, "end": 2539.72, "text": " interesting and they now try to figure out how the model finds those similar"}, {"start": 2539.72, "end": 2544.76, "text": " vectors so and what I do is we sort the input data with respect to their feature"}, {"start": 2544.76, "end": 2548.58, "text": " space it's actually input space as I saw in the app appendix but maybe 
the"}, {"start": 2548.58, "end": 2553.56, "text": " authors can correct me here distance such that the similar data points are"}, {"start": 2553.56, "end": 2561.12, "text": " now close to each other and so let me explain what that means basically that"}, {"start": 2561.12, "end": 2566.7599999999998, "text": " means you have the original tabular data set you have your data points what I do"}, {"start": 2566.7599999999998, "end": 2572.3199999999997, "text": " is they find maybe using L2 metric define the closest vectors here and"}, {"start": 2572.3199999999997, "end": 2577.48, "text": " they're just going to sort them so now we have a vector here and a vector here"}, {"start": 2577.48, "end": 2582.32, "text": " and these two have really small L2 distance and now we have you know vector"}, {"start": 2582.32, "end": 2586.92, "text": " here and these two have really small L2 distance which that means that basically"}, {"start": 2586.92, "end": 2592.08, "text": " these two also have maybe a bit bigger L2 distance okay so you get the point"}, {"start": 2592.08, "end": 2598.96, "text": " basically we're slowly adding stacking vectors which are further away looking"}, {"start": 2598.96, "end": 2604.08, "text": " at the L2 distance and when we do that and we apply NPT and when we look at"}, {"start": 2604.08, "end": 2610.2, "text": " the certain like attention patterns from a certain like head we get this pattern"}, {"start": 2610.2, "end": 2615.16, "text": " and this means that is basically attending to the vectors which are"}, {"start": 2615.16, "end": 2620.6, "text": " basically close in the input space which means it's attending to similar vectors"}, {"start": 2620.6, "end": 2624.96, "text": " according to this L2 distance and if you don't know how to parse this diagram"}, {"start": 2624.96, "end": 2632.7599999999998, "text": " let me help you basically if we have if this is row I this row is going to"}, {"start": 2632.76, "end": 2637.6800000000003, "text": " attend and assume we have a sort of table so this table is sorted according"}, {"start": 2637.6800000000003, "end": 2642.96, "text": " to L2 so let's say this is I this is vector I and it's somewhere in the"}, {"start": 2642.96, "end": 2647.36, "text": " middle of the table and that vector is going to attend to its proximity so it's"}, {"start": 2647.36, "end": 2651.0, "text": " going to attend to these vectors really strongly and then it's going to drop off"}, {"start": 2651.0, "end": 2654.88, "text": " and then it's going to have really weak attendance to maybe this vector here so"}, {"start": 2654.88, "end": 2660.1600000000003, "text": " it's going to tend to these vectors really strongly and the strength will"}, {"start": 2660.16, "end": 2666.16, "text": " kind of fade away going downwards or upwards here and that's exactly what you"}, {"start": 2666.16, "end": 2671.0, "text": " see in this attention map basically if this is vector I and this is just n"}, {"start": 2671.0, "end": 2675.12, "text": " number of data points this is just n number of data points this means that"}, {"start": 2675.12, "end": 2679.92, "text": " this vector as you can see attends to itself really strongly and to its"}, {"start": 2679.92, "end": 2684.56, "text": " neighborhood but then it deteriorates because you can see that the dark blue"}, {"start": 2684.56, "end": 2689.44, "text": " means the attention coefficients are really really really weak there okay"}, {"start": 2689.44, "end": 2694.16, "text": " that's the whole point and yeah further they had some 
additional experiment"}, {"start": 2694.16, "end": 2697.96, "text": " where they're doing similar thing but the conclusion was the same basically"}, {"start": 2697.96, "end": 2702.56, "text": " MPT learns how to attend to those vectors which have which are really"}, {"start": 2702.56, "end": 2709.32, "text": " close in the input space looking at the L2 distance okay that's pretty much it"}, {"start": 2709.32, "end": 2713.7200000000003, "text": " they dimension some problems like these scaling limitations which I have already"}, {"start": 2713.72, "end": 2721.52, "text": " flagged and that's pretty much it so we saw that MPT's learn how to do"}, {"start": 2721.52, "end": 2726.24, "text": " relational reasoning and because they learn the correct mechanism on that"}, {"start": 2726.24, "end": 2730.3599999999997, "text": " simple semi-synthetic task it can generalize much better than if they"}, {"start": 2730.3599999999997, "end": 2735.12, "text": " actually learn how to deduce the value just looking at the attributes and"}, {"start": 2735.12, "end": 2738.8399999999997, "text": " instead they learn how to just copy paste the values from the corresponding"}, {"start": 2738.8399999999997, "end": 2742.72, "text": " call of Rose and yeah hopefully this was useful if you found this video useful"}, {"start": 2742.72, "end": 2748.08, "text": " share it out and subscribe and until next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=oDtcobGQ7xU
When Vision Transformers Outperform ResNets without Pretraining | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentation paper explained. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/2106.01548 ✅ LinkedIn post: https://www.linkedin.com/posts/aleksagordic_vision-transformers-mlp-activity-6807372257187442688-7jzF ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Key points of the paper 01:37 Key conclusions 03:00 Inductive biases and biases in a CNN 07:00 SAM explained 11:30 Possibility of heavy pruning, overfitting, sparsity, etc. 14:20 Neural tangent kernel and steepness of curvature 17:30 Results, empirical correlation between SAM and biases 19:00 Deeper look into the Hessians 20:50 Attention visualized, low data regime plots ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #visiontransformer #nopretraining #resnets
What's up? In this video I'm covering this new paper called When Vision Transformers Outperform ResNets Without Pre-Training or Strong Data Augmentations by the Google research team Chen, Hsieh and Gong. The main idea of this paper, the main innovation, is actually something that was developed I think half a year ago called the Sharpness-Aware Minimization (SAM) objective, and what it does is smooth out the loss landscape. Let me show you the picture here. Basically they are testing vision transformers and the recently published MLP-Mixer, and you can see that using SAM we get a much smoother point of convergence in the loss landscape, and what that brings us is higher generalization capability: it was previously shown that converging to flat areas of the loss landscape correlates with better generalization properties. It also shows that we don't need a bunch of pre-training data as before, so do check out my videos on the Vision Transformer and MLP-Mixer if you haven't heard of them so far. Basically they used to pre-train ViTs on this huge dataset called JFT-300M, which is a proprietary dataset containing 300 million images from Google, and now they showed that without that huge pre-training procedure and without extreme data augmentation techniques, which were previously advised, they achieve on-par performance or even outperform ResNet baselines, which is really cool. So that's the main bullet point; now let me slowly walk you through the paper. They say here: vision transformers and MLPs signal further efforts on replacing hand-wired features or inductive biases with general-purpose neural architectures. We've seen that trend over the last year, basically transformers coming from NLP into the computer vision field and having fewer biases than convnets, as well as the recently published MLP-Mixer, which basically just uses multilayer perceptrons to do token-wise and channel-wise mixing.
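Since the explanation leans on what "token-wise and channel-wise mixing" means, here is a minimal NumPy sketch of a single Mixer-style block. This is my own simplified illustration (LayerNorms omitted, toy shapes, arbitrary weights), not the authors' implementation.

import numpy as np

def gelu(x):
    # tanh approximation of GELU, purely for illustration
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def mlp(x, w1, b1, w2, b2):
    return gelu(x @ w1 + b1) @ w2 + b2

def mixer_block(tokens, token_mlp, channel_mlp):
    # tokens: (num_patches, channels)
    # token mixing: the MLP acts along the patch dimension, shared across channels
    y = tokens + mlp(tokens.T, *token_mlp).T
    # channel mixing: the MLP acts along the channel dimension, shared across patches
    return y + mlp(y, *channel_mlp)

# toy shapes: 16 patches, 8 channels, hidden size 32 (all arbitrary)
rng = np.random.default_rng(0)
P, C, H = 16, 8, 32
token_mlp = (rng.normal(size=(P, H)), np.zeros(H), rng.normal(size=(H, P)), np.zeros(P))
channel_mlp = (rng.normal(size=(C, H)), np.zeros(H), rng.normal(size=(H, C)), np.zeros(C))
out = mixer_block(rng.normal(size=(P, C)), token_mlp, channel_mlp)
print(out.shape)  # (16, 8)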
This paper investigates ViTs and MLP-Mixers from the lens of loss geometry, as I already explained, intending to improve the models' data efficiency at training and generalization at inference by promoting smoothness with a recently proposed sharpness-aware optimizer, i.e. SAM. They substantially improve the accuracy and robustness of ViTs and MLP-Mixers on various tasks, and they show that the improved smoothness is attributable to sparser active neurons in the first few layers; the resulting ViTs outperform ResNets of similar size and throughput when trained from scratch on ImageNet without large pre-training or strong data augmentations. The next interesting thing is this: despite the appealing potential of moving toward general-purpose neural architectures, the lack of convolution-like inductive bias also challenges the training of vision transformers and MLPs when trained on ImageNet with conventional Inception-style data processing, which is basically just random crops and random flips, super simple; I think that was used back in 2014 when the Inception architecture first came out, hence "Inception-style". That only yields modest accuracies, a few percentage points below ResNets of comparable size. So what they say here is that if you don't use SAM and you just train your ViTs and MLP-Mixers with the same procedure as ResNets, you get even lower performance than those ResNet baselines. The reason is convnets, and let me just recap shortly here: convnets have really useful priors built into them, and there is this whole trend currently in the deep learning field where we are pretty much going forward with this blank-slate paradigm. If you take a look at humans, we have a bunch of evolutionarily built-in priors, and my deep belief is that once we find a really good set of priors we'll need to start using them, because it's super expensive to be rediscovering them all the time; but I still think this research direction is very useful, we are just trying to see what we can learn with as few priors as possible. So yeah, I still think it's a cool idea, but eventually I do believe priors are going to be really important. Let me show you: basically, for a CNN, if you have an image, what CNNs exploit is the fact that the local pixel neighborhood is highly correlated in natural images. If you have a pixel here, it's highly likely that the pixels in its neighborhood are going to have a bit lower or a bit higher intensity, but there won't be dramatic changes; that's what I mean when I say the pixel intensities are correlated in this neighborhood. So, hopefully you know how CNNs work, you have a kernel, and there are at least three biases I can think of. The first one is locality: basically you're assuming that this filter should attend only to the local neighborhood of the pixel. The second one is weight sharing, which basically means the filter you learn here will be useful not only here but also all around the image, and this leads to something called translation equivariance, which is a super useful property that CNNs have. And the third bias I can think of is hierarchy: basically you know that in CNNs, when you go into deeper layers, the
spatial extent diminishes and the volumes get more and more channels. What that practically means is that if you have an image, and I'm looking at it sideways here, a neuron in a shallow layer will only attend to, depending on the kernel size, maybe three pixels vertically, whereas a deeper neuron will pretty much attend over the whole image. That means we progressively keep expanding the neighborhood of the pixels, which is basically something called a relational inductive bias. So that's CNNs; ViTs and MLPs just have far fewer priors, and that's why, as you can see here, they are harder to train: prior to this SAM trick they were underperforming compared to the ResNet baselines. Okay, so let's continue and let me explain what SAM is. First-order optimizers like SGD (stochastic gradient descent) and Adam only seek the model parameters that minimize the training error; they dismiss higher-order information such as flatness, which correlates with generalization. First-order because we are basically just estimating the gradient, which is the first-order derivative of the loss with respect to the model parameters. There are also second-order methods, L-BFGS being one of the famous ones, where we also take into account the second-order partial derivatives, so Hessians etc., but what they advise here is something a bit different, and we'll see that in a moment. SAM strives to find a solution whose entire neighborhood has low loss, rather than focusing on any single point. Let me go to the drawing again and try to explain what this means. In order to visualize the loss function, because these models have millions of parameters, they've projected the weights into a 2D space so they can plot the loss. The whole point of SAM is this: take a point in the parameter space and take a disk around that point (in 2D it's a disk, in 3D it's a ball, and in an n-dimensional space it's an n-dimensional ball). What they want is that inside of this disk or ball, back in the original parameter space, the max loss is low. If I take a point on the disk and project it upwards, we'll see that the loss there can be really high even though the loss at the center is really low, i.e. the max loss inside the volume of interest (the disk in this particular case) is high, and they want to make sure that this max loss is actually minimized. And as you can see, this is what we get: going from this thing we get something much smoother, and if I take a disk of the same size, the max loss is now really low, lower than in the previous case. That's the whole point, that's the geometric explanation; now let me explain the formula, which tells the same thing. Briefly, from the original paper: motivated by the connection between sharpness of the loss landscape and generalization (that's the key point here, prior work showed that if we converge to a flat part of the loss landscape we're going to have much better generalization capabilities), we propose a different approach: rather than seeking parameter values W that simply
have low training loss value, we seek out parameter values whose entire neighborhoods have uniformly low training loss value; equivalently, neighborhoods having both low loss and low curvature. From the original paper again: here is where they converge with ResNets under the original formulation of the optimizer, e.g. using Adam, and here is where they end up using SAM, and as you can see it's much flatter, much smoother, and that's the whole point. Okay, and here's the formula; it's really easy. Basically, for these epsilons (this is called a perturbation vector) we want the L2 norm to stay inside this threshold rho, so that's just a mathematical formulation of what they already explained with disks and balls. When you add this vector anywhere around the point W, we want to make sure that the max loss is minimized with respect to W. So basically you have your W; let's assume we are in a 2D space, even though in reality it's a high-dimensional space, so this is W2, this is W1, and let's say we've converged to this point. What it says is that epsilon is just some vector, and we trace out the set of all vectors whose L2 norm is at most rho, so in the 2D case that's just a disk, and we want the max loss inside that region to be minimal; that's what we are minimizing. So hopefully, yeah, that was quite in-depth, hopefully you understood it. Let me get back to the beginning, there are a couple of things I want to mention. A side observation is that, unlike ResNets and MLP-Mixers, ViTs have extremely sparse active neurons, less than 5% for most layers, revealing the redundancy of input image patches and the capacity for network pruning. This looks like interesting follow-up work from this paper: basically it seems that we can reduce the memory footprint of ViTs heavily and still hopefully keep the performance achieved here, without using the extensive pre-training etc. And I just highlight the word "perspicuous" here because I really am a strong believer that we should make papers as clear as possible and not use fancy terms and fancy equations if there is no need to; I literally had to Google this word, and it says "clearly expressed and easily understood"; well, this sentence is not perspicuous, that's all I know. Cool, rant over. Okay, let me walk you through a couple of bullet points; again, they are focusing only on vision transformers and MLP-Mixers, so do watch those videos if you haven't already. Here are a couple of points they make. It's been extensively studied that convergence to a flat region whose curvature is small benefits generalization of neural networks; I've repeated this multiple times, I think it's a really important key point to keep in mind. Although Mixer has fewer parameters than ViTs, it has smaller training error but much worse test accuracy, so that basically means Mixers tend to overfit much more, and that makes sense because they have fewer priors built into them. Here are some nice curves that show that: taking a look at Mixer-B/16, you can see the training error is really low but the test accuracy is not that high; on the other hand, if we take something like
a ViT-B/16, the training error curves are higher, as you can see here (this dim orange curve is higher), but the test accuracy is also higher, so they overfit much less than MLP-Mixers. Here they just compare how it looks without SAM and with SAM, and again the training error curve with SAM is higher but the test accuracy is also higher; that's cool, it means we overfit much less to the training data and generalize much better to the test distribution. And finally, here is the sparsity observation: they noticed that in the lower layers, using SAM, the number of activated neurons (that's the y-axis) gets much lower for Mixers than without SAM. A couple more bullet points here. Xiao et al. showed that the trainability of a neural network can be characterized by the condition number of the associated neural tangent kernel (NTK); I won't get into the details of the kernel, but basically it's a simple proxy for trainability. The condition number is pretty stable for ResNets, echoing previous results that ResNets enjoy superior trainability regardless of depth. If you remember, back in 2015 when the ResNet paper came out from Microsoft Research, they showed for the first time that we can train models from 18 all the way to 152 layers and it just works, because of the skip connections, or residual connections, however you want to call them. However, they observed that the condition number diverges when it comes to ViT and MLP-Mixer, confirming that the training of ViTs requires extra care. They quantify this in this table and you can see it here: here is the NTK condition number, and here we have ResNets, ViTs and Mixers. You can see that the condition number for ResNets is pretty much the same across depths; I'm not sure about those numbers, whether there is maybe some bug here, but I'd assume that a ResNet-152 should have a slightly higher condition number, although I'm not sure about the exact details. Basically what I want to show you here is that Mixer has a much higher condition number, which means it's much harder to train. The second thing they plot here, and I'll go into a bit more detail later, is Hessians, which are again a proxy for the curvature of your loss landscape at the point of convergence. For the quantity they calculate here, lower is better, lower means flatter, and you can see that ViTs have a much higher value than ResNets, and Mixers have an even higher Hessian value. This is just the max eigenvalue of the Hessian, I don't want to confuse you here, but it's simply a proxy for the curvature, and you can see that after applying SAM it drops down all the way to 20-something, so it's even lower than for ResNets, which is really cool. Aside from that, they show that the performance is really great: after applying SAM, accuracy increases on ImageNet as well as on ImageNet-C, which tests the robustness of the model. How so? Well, if you take a look at the ImageNet-C dataset, you can see that it just has a bunch of different corruptions, like Gaussian noise and other kinds of noise such as impulse noise, blurring (they have motion blurring), some special effects, and some photometric corruptions like brightness, contrast, etc., so basically you want to make sure that you're generalizing to these small shifts in the distribution
of your data set, and yeah, they showed that ViTs actually perform even better than comparably sized ResNets. Okay, let's see what else is interesting in this paper; I think I've covered pretty much everything. A bunch of results here, summarized in a couple of sentences: on the ImageNet validation set SAM boosts the top-1 accuracy of ViTs from something to something, basically an increase of around 5%, and for Mixer as well. And empirically, this is interesting, the degree of improvement negatively correlates with the level of inductive bias built into the architecture. What you're seeing here is the following: let's plot a 2D chart, on the x-axis we have the bias ingrained into the architecture itself and on the y-axis we have the improvement, and what they say is that we have a negative correlation, something like this point cloud here, and basically CNNs are here, MLP-Mixers are probably here, and ViTs are somewhere here, I'm just drawing this qualitatively. What they say is that the more priors we have in the architecture, the less improvement we get from using SAM for that particular architecture. That's something they found empirically, so there is no theoretical explanation for why that is, but it is what it is. Again, it's more robust as well, not just more accurate but more robust, looking at ImageNet-C; nothing else interesting there, I already mentioned that. And this is an interesting table: they basically decompose the Hessians already mentioned, which are a proxy for the curvature, and you can see, looking at the layers (let's focus on ViT, or even better on Mixer), that in the lower layers, like in the embedding layer itself, we have a huge Hessian eigenvalue, and going to deeper layers, block one, block six, block twelve, you can see that the Hessian values go down. So basically these lower layers contribute the most to the steep curvature of the loss landscape, and that's what SAM fixes here: as you can see, using SAM it drops significantly, and that correlates with the fact that we now have much sparser activations in those lower layers, as we saw on the plot up there. That's this plot here: you can see that we have much sparser activations (the x-axis is the depth of the network), so we now see much sparser activations in the shallower layers of the network, which correlates with this finding. One thing they notice as well is that the L2 norm of the weight vector increases, which may indicate that the weight decay regularization they've used is not helping, so they need to investigate that further. The reason I highlighted "recursively" here is that you can see H of k depends on H of k+1, and that's why we have higher Hessian eigenvalues in the lower layers: the term just multiplies with all of the deeper layers and kind of accumulates and blows up in the shallower layers. Additionally, what they found is that the attention maps produced by ViTs and MLP-Mixers after the SAM procedure have much better discriminative features; you can see that the attention maps focus on something that's salient in these images much better than before using SAM. So yeah, just a fun fact, and I mean visualizations are super important, so kudos to them for doing this. A couple of fun results here, and it's
pretty obvious: what they show here is that, even when you're using SAM, when you start reducing the number of data points in your data set (what they did here is randomly sample half of the pictures from ImageNet-1k, and here one fourth of the images are randomly sampled), we can see, focusing on the orange curve, that ViTs and Mixers degrade much more severely as we go into the low-data regime, whereas, thanks to the biases already mentioned, CNNs, ResNets here in particular, manage to keep up their performance even in the low-data regime. So again, priors are super important; nonetheless, I do think this kind of research, where we are just following the blank-slate paradigm, is going to be very interesting and informative over the long run. They also tried some contrastive learning and it improves a bit upon SAM as well; they tried adversarial training too, I won't be focusing on that, using this PGD-10 attack, basically PGD, and I think the 10 refers to the number of attack iterations, and they showed that they get a nice performance boost there as well. Yeah, hopefully that's it, hopefully that was informative and useful; if you found it useful consider subscribing and sharing this video, and see you next time, bye bye.
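To make the min-max objective described in the transcript concrete, here is a minimal NumPy sketch of one SAM-style update on a toy two-parameter loss. This is a simplified illustration under my own assumptions (toy loss, plain gradient descent as the base optimizer, the standard first-order approximation of the inner maximization), not the paper's actual implementation.

import numpy as np

def loss(w):
    # toy loss over two parameters, purely for illustration
    return 0.5 * (3.0 * w[0] ** 2 + 0.5 * w[1] ** 2) + 0.3 * np.sin(5.0 * w[0])

def grad(w):
    # analytic gradient of the toy loss above
    return np.array([3.0 * w[0] + 1.5 * np.cos(5.0 * w[0]), 0.5 * w[1]])

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    # ascend to the approximate worst point inside the rho-ball around w
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # descend using the gradient evaluated at the perturbed point
    g_sharp = grad(w + eps)
    return w - lr * g_sharp

w = np.array([1.0, -1.0])
for _ in range(100):
    w = sam_step(w)
print(w, loss(w))

The two gradient evaluations per step are exactly the "max loss inside the rho-ball, then minimize it" idea: the first gradient points toward the sharpest nearby point, the second gradient is what actually updates the original weights.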
[{"start": 0.0, "end": 3.56, "text": " What's up? In this video I'm covering this new paper called When Vision"}, {"start": 3.56, "end": 7.32, "text": " Transformers Outperform Resonance Without Pre-Training or Strong Data"}, {"start": 7.32, "end": 14.84, "text": " Augmentations by the Google research team Chen, Xie and Gong. And the main idea of"}, {"start": 14.84, "end": 18.36, "text": " this paper, the main innovation was actually something that was developed I"}, {"start": 18.36, "end": 23.68, "text": " think half a year ago called Sharpness Aware Minimization Objective and what it"}, {"start": 23.68, "end": 27.68, "text": " does it smoothens out the lost landscape. Let me show you the picture here. So"}, {"start": 27.68, "end": 31.92, "text": " basically we have their testing vision transformers, their testing recently"}, {"start": 31.92, "end": 39.04, "text": " published MLP mixer and you can see that using the same here we get a much"}, {"start": 39.04, "end": 45.2, "text": " smoother like a point of convergence in that lot landscape and what that brings"}, {"start": 45.2, "end": 50.32, "text": " us is higher generalization capabilities that was previously shown that basically"}, {"start": 50.32, "end": 55.480000000000004, "text": " converging to flat areas of the lost landscape guarantees better"}, {"start": 55.48, "end": 60.519999999999996, "text": " generalization properties and also it shows that we don't need a bunch of"}, {"start": 60.519999999999996, "end": 65.16, "text": " pre-training data as before so basically remember and do check out my videos on"}, {"start": 65.16, "end": 69.88, "text": " vision transformer and mixer MLPs if you haven't so if you haven't heard of it so"}, {"start": 69.88, "end": 76.19999999999999, "text": " far. Basically they used to train to pre-train VATs on this huge data set"}, {"start": 76.19999999999999, "end": 81.52, "text": " called JFT 300m which is a proprietary data set containing 300 million images"}, {"start": 81.52, "end": 87.52, "text": " from Google and now they showed that without that huge pre-training procedure"}, {"start": 87.52, "end": 91.28, "text": " and without extreme data augmentation techniques which were previously advised"}, {"start": 91.28, "end": 96.6, "text": " they achieve on pair or even outperform resonant baselines which is really cool"}, {"start": 96.6, "end": 100.96, "text": " so that's the main bullet point now let me go slowly take you through the walk"}, {"start": 100.96, "end": 106.12, "text": " you through the paper so they say here vision transformers and MLPs signal"}, {"start": 106.12, "end": 111.0, "text": " further efforts on replacing hand-wired features or inductive biases with"}, {"start": 111.0, "end": 115.0, "text": " general-purpose neural architecture so we've seen that trend over the last year"}, {"start": 115.0, "end": 120.12, "text": " basically transformers going from an NLP coming to the computer vision field and"}, {"start": 120.12, "end": 126.08, "text": " basically being having less biases than covnets as well as recently published"}, {"start": 126.08, "end": 132.68, "text": " mixer MLPs which basically just use multilayer perceptions to do token-wise"}, {"start": 132.68, "end": 137.72, "text": " and channel-wise mixing. 
This paper investigates VATs and NLP mixers from"}, {"start": 137.72, "end": 141.96, "text": " the lens of loss geometry as I already explained that and intending to improve"}, {"start": 141.96, "end": 147.72, "text": " the models data efficiency as training and generalization at inference okay by"}, {"start": 147.72, "end": 152.24, "text": " promoting smoothness with a recently proposed sharpness aware optimizer so the"}, {"start": 152.24, "end": 158.96, "text": " same thing we substantially improve the accuracy and robustness of VATs and"}, {"start": 158.96, "end": 163.44, "text": " MLP mixers on various tasks okay and we show that the improved smoothness"}, {"start": 163.44, "end": 168.72, "text": " attributes to sparser active neurons in the first few layers and the result in"}, {"start": 168.72, "end": 172.2, "text": " VATs outperform resonance of similar size and throughput when trained from"}, {"start": 172.2, "end": 177.32, "text": " scratch on image add without large pre training or strong data augmentations"}, {"start": 177.32, "end": 183.0, "text": " okay next thing that's interesting is this so despite the appealing potential"}, {"start": 183.0, "end": 186.4, "text": " of moving toward general-purpose neural architectures the lack of convolution"}, {"start": 186.4, "end": 190.2, "text": " like inductive bias also challenges the training of vision transformers and"}, {"start": 190.2, "end": 195.28, "text": " MLPs when trained on image net with a conventional inception style data"}, {"start": 195.28, "end": 200.44, "text": " processing which is basically just random crops and random flips super"}, {"start": 200.44, "end": 203.64, "text": " simple I think that was used back in 2014 with the inception when the"}, {"start": 203.64, "end": 208.28, "text": " inception architecture first came out hence inception style just want to"}, {"start": 208.28, "end": 213.23999999999998, "text": " yield modest modest accuracies of a few percentage points below resonance of"}, {"start": 213.23999999999998, "end": 219.12, "text": " comparable size so what they say here is that if you don't use Sam and you just"}, {"start": 219.12, "end": 227.04, "text": " train your VATs and MLP mixers with the same procedure as resonance we have like"}, {"start": 227.04, "end": 232.04, "text": " even lower performance stand than those resonant baselines so the reason is"}, {"start": 232.04, "end": 238.0, "text": " comments and let me just recap shortly here comments have really useful priors"}, {"start": 238.0, "end": 241.84, "text": " built into them and there is this whole thing going on currently in the deep"}, {"start": 241.84, "end": 246.88, "text": " learning field where we are pretty much going forward with this blank slate"}, {"start": 246.88, "end": 251.96, "text": " paradigm and if you take a look at humans we have a bunch of evolutionary"}, {"start": 251.96, "end": 258.44, "text": " built-in priors and so like my deep belief is that like we will need to to"}, {"start": 258.44, "end": 262.84, "text": " finally once we find really good set of priors we'll need to start using them"}, {"start": 262.84, "end": 266.96, "text": " because it's super expensive to you to be rediscovering them all the time but"}, {"start": 266.96, "end": 272.0, "text": " like I still think this research direction is is very useful and we are"}, {"start": 272.0, "end": 277.04, "text": " just kind of trying to see like what can we learn with as least priors as"}, {"start": 277.04, "end": 281.28, "text": " possible so yeah I still 
think it's a cool idea but eventually I do believe"}, {"start": 281.28, "end": 285.48, "text": " priors are going to be really important and let me show you so basically for a"}, {"start": 285.48, "end": 291.64, "text": " CNN if you have an image here and what CNN do and exploit is the the fact that"}, {"start": 291.64, "end": 296.2, "text": " the the local pixel neighborhood is highly correlated in natural images so"}, {"start": 296.2, "end": 300.6, "text": " if you have a pixel here it's highly likely that all of these pixels in the"}, {"start": 300.6, "end": 306.36, "text": " neighborhood are going to be a bit less like they'll have a bit less intensity"}, {"start": 306.36, "end": 310.44, "text": " or a bit higher intensity but they won't be we won't have dramatic changes so"}, {"start": 310.44, "end": 314.68, "text": " that that's what I mean when I say core like the neighborhood the pixel"}, {"start": 314.68, "end": 319.64000000000004, "text": " intensity is correlate in this neighborhood so what CNN's have is so"}, {"start": 319.64000000000004, "end": 323.76000000000005, "text": " you hopefully know what how CNN's work you have a kernel and they have at"}, {"start": 323.76000000000005, "end": 328.64000000000004, "text": " least three biases I can think of are the first one is locality so basically"}, {"start": 328.64, "end": 334.0, "text": " you're assuming that this filter here should be attending only to the local"}, {"start": 334.0, "end": 337.84, "text": " neighborhood of the pixel the second one is weight sharing which basically"}, {"start": 337.84, "end": 342.32, "text": " means yeah you the things you learn here the filter you learn here will be useful"}, {"start": 342.32, "end": 348.03999999999996, "text": " not only here but also here and here and all around the image and this leads to"}, {"start": 348.03999999999996, "end": 351.8, "text": " something called translational active variance which is super useful property"}, {"start": 351.8, "end": 356.96, "text": " that CNN's have and the third bias I can think of is the hierarchy so basically"}, {"start": 356.96, "end": 363.4, "text": " you know that CNN's if when you go in like into deeper layers the spatial"}, {"start": 363.4, "end": 368.23999999999995, "text": " extent diminishes and the volumes get more and more channels and what that"}, {"start": 368.23999999999995, "end": 371.76, "text": " practically means is that if you have an image here and I'm just looking from it"}, {"start": 371.76, "end": 377.76, "text": " sideways here if you have a neuron here and a neuron here so this one is only"}, {"start": 377.76, "end": 383.24, "text": " going to attend depending on the kernel size maybe three pixels vertically here"}, {"start": 383.24, "end": 389.68, "text": " whereas the deeper one will pretty much attend over the whole image and so that"}, {"start": 389.68, "end": 394.8, "text": " means that we progressively keep expanding the neighborhood of the pixels"}, {"start": 394.8, "end": 400.32, "text": " which is basically something called a relational inductive bias so yeah that's"}, {"start": 400.32, "end": 406.52, "text": " that CNN's and MLPs just have much less priors and that's why as you can see"}, {"start": 406.52, "end": 413.04, "text": " here they are harder to train prior to this same thing they were like under"}, {"start": 413.04, "end": 418.68, "text": " performing or as in the baselines okay so let's continue here and let me explain"}, {"start": 418.68, "end": 422.88, "text": " what Sam is so the first order 
optimizers like SGD stochastic gradient"}, {"start": 422.88, "end": 428.04, "text": " and Adam only seek the model parameters that minimize the training error they"}, {"start": 428.04, "end": 432.0, "text": " dismiss the higher order information such as flatness that correlates with"}, {"start": 432.0, "end": 437.28000000000003, "text": " the generalization so first order because we are basically just estimating"}, {"start": 437.28000000000003, "end": 441.32000000000005, "text": " the gradient which is the like basically the first order derivative of the loss"}, {"start": 441.32, "end": 445.59999999999997, "text": " with respect to the loss and of the of the model parameters there are also"}, {"start": 445.59999999999997, "end": 450.2, "text": " second-order methods like LBFGS is one of the famous ones where we also take"}, {"start": 450.2, "end": 456.0, "text": " into account the second order partial derivatives so Hessians etc but here"}, {"start": 456.0, "end": 459.48, "text": " what they actually advise is something a bit different and we'll see that in a"}, {"start": 459.48, "end": 464.08, "text": " moment so Sam strives to find a solution whose entire neighborhood has low losses"}, {"start": 464.08, "end": 470.52, "text": " rather than focus on any singleton point so let me go to the drawing here again"}, {"start": 470.52, "end": 476.44, "text": " and try to explain what this means so what I've did here in order to"}, {"start": 476.44, "end": 479.91999999999996, "text": " visualize the loss function because the models have millions of parameters"}, {"start": 479.91999999999996, "end": 485.15999999999997, "text": " they've kind of projected the the whole like the the weights into 2d space and"}, {"start": 485.15999999999997, "end": 490.44, "text": " now they can plot the loss and so what what what the whole point of the same"}, {"start": 490.44, "end": 495.59999999999997, "text": " thing is is if you take a point here in the parameter space and you take a disk"}, {"start": 495.59999999999997, "end": 499.79999999999995, "text": " around that point so in the 2d space it's going to be a disk in a 3d space"}, {"start": 499.8, "end": 503.28000000000003, "text": " it's going to be a sphere in an n dimensional space is going to be an n"}, {"start": 503.28000000000003, "end": 508.64, "text": " dimensional hyperbola and what they want to achieve is the following so maybe"}, {"start": 508.64, "end": 512.2, "text": " this is a better example here what they want to achieve is the following so they"}, {"start": 512.2, "end": 516.12, "text": " want to make sure that inside of this disk or hyperbola when we're in the"}, {"start": 516.12, "end": 521.12, "text": " original parameter space we want to make sure that the max loss so if I if I take"}, {"start": 521.12, "end": 525.2, "text": " a point on the disk here and they project it upwards here we'll see that"}, {"start": 525.2, "end": 529.12, "text": " the loss here is really really high even though the loss here is really low the"}, {"start": 529.12, "end": 533.92, "text": " max loss inside the volume of interest disk in this particular case is high"}, {"start": 533.92, "end": 539.76, "text": " and they want to make sure that that max loss is actually minimized and as you"}, {"start": 539.76, "end": 544.08, "text": " can see this is what we get so going from from this thing we get something"}, {"start": 544.08, "end": 548.64, "text": " much smoother and you can see here that if I take a disk of the same size the"}, {"start": 548.64, "end": 553.6, 
"text": " max loss is still really low lower than this one here so that's the whole point"}, {"start": 553.6, "end": 557.8, "text": " that's the geometric explanation now let me just kind of explain you the formula"}, {"start": 557.8, "end": 564.16, "text": " which tells the same thing and here it is but briefly before that so from the"}, {"start": 564.16, "end": 567.4, "text": " original paper so motivated by the connection between sharpness of the lost"}, {"start": 567.4, "end": 571.74, "text": " landscape and generalization so that's the the key point here somebody the"}, {"start": 571.74, "end": 576.5999999999999, "text": " prior work showed that if we converge to a flat part of the lost landscape we're"}, {"start": 576.5999999999999, "end": 580.64, "text": " gonna have much better generalization capabilities so we propose a different"}, {"start": 580.64, "end": 585.12, "text": " approach rather than seeking our parameter values W that simply have low"}, {"start": 585.12, "end": 588.92, "text": " training loss value we seek out parameter values whose entire"}, {"start": 588.92, "end": 593.84, "text": " neighborhoods have uniformly low training loss value equivalently"}, {"start": 593.84, "end": 598.52, "text": " neighborhoods having both low loss and low curvature and or from the original"}, {"start": 598.52, "end": 603.36, "text": " paper again here so here is the where they converge with resonance in the"}, {"start": 603.36, "end": 607.88, "text": " original formulation of the optimizer like using Adam and here is where they"}, {"start": 607.88, "end": 612.36, "text": " get using Sam and as you can see again it's much flatter much smoother and"}, {"start": 612.36, "end": 616.16, "text": " that's the whole point okay and here's the formula so it's really easy"}, {"start": 616.16, "end": 622.6800000000001, "text": " basically what I see here is for these epsilons so this is called a perturbation"}, {"start": 622.6800000000001, "end": 628.48, "text": " vector let's go that way so we want to make sure that the L2 norm is inside"}, {"start": 628.48, "end": 633.16, "text": " this threshold so that just just a mathematical formulation of what they"}, {"start": 633.16, "end": 636.32, "text": " already explained with discs and hyperboles so you want to make sure that"}, {"start": 636.32, "end": 643.48, "text": " W when you when you add this vector over the whole value around this W point we"}, {"start": 643.48, "end": 649.2, "text": " want to make sure that the max loss so max loss is minimized with respect to W"}, {"start": 649.2, "end": 653.08, "text": " so that's the whole point so you're basically you have your W let's let's"}, {"start": 653.08, "end": 658.12, "text": " assume we are in a 2d space here so let's assume we are in a 2d space even"}, {"start": 658.12, "end": 661.5200000000001, "text": " though we're going to be like a multi-dimensional space so this is W2"}, {"start": 661.52, "end": 667.04, "text": " this is W1 let's say we've converged to this point here so what it says here is"}, {"start": 667.04, "end": 672.64, "text": " that so epsilon is just some vector and we are going to trace out so so this is"}, {"start": 672.64, "end": 678.56, "text": " a set of all vectors which are whose L2 norm is inside of row so that's gonna be"}, {"start": 678.56, "end": 683.6, "text": " into the case just a disc so we want to make sure that inside that region so the"}, {"start": 683.6, "end": 688.36, "text": " the max loss the max loss inside that region needs to be minimal and that's"}, 
{"start": 688.36, "end": 692.84, "text": " what we are minimizing so hopefully yeah that was quite in-depth hopefully you"}, {"start": 692.84, "end": 698.24, "text": " understood it let me get back to the beginning here a couple of things I want"}, {"start": 698.24, "end": 702.0, "text": " to mention so a side observation is that unlike resonance and MLP mixers"}, {"start": 702.0, "end": 707.16, "text": " VATs have extremely sparse active neurons less than 5% for most layers"}, {"start": 707.16, "end": 711.64, "text": " revealing the redundancy of input image patches and the capacity for network"}, {"start": 711.64, "end": 715.2, "text": " pruning and this looks like an interesting follow-up work from this"}, {"start": 715.2, "end": 719.6400000000001, "text": " paper basically it seems that we can reduce the memory footprint of VATs"}, {"start": 719.6400000000001, "end": 725.9200000000001, "text": " heavily and still hopefully contain like keep the performance we achieved here"}, {"start": 725.9200000000001, "end": 731.88, "text": " without using the extensive pre-training etc okay and I just kind of highlight the"}, {"start": 731.88, "end": 736.36, "text": " perspicuous here because I really am a strong believer that we should be using"}, {"start": 736.36, "end": 741.72, "text": " like we should make paper papers as clear as possible and not use fancy"}, {"start": 741.72, "end": 745.4, "text": " terms fancy equations if there is no need to do that I literally had to you"}, {"start": 745.4, "end": 749.28, "text": " to to Google this word and it says clearly expressed and easily understood"}, {"start": 749.28, "end": 755.0, "text": " well this sentence is not perspicuous that's what I know cool I'll rant over"}, {"start": 755.0, "end": 759.72, "text": " okay let me walk you through a couple of bullet points here and again they are"}, {"start": 759.72, "end": 763.1600000000001, "text": " focusing only on vision transformers and MLP mixers so do watch those videos if"}, {"start": 763.1600000000001, "end": 768.88, "text": " you haven't already here are a couple of points they make it's been extensively"}, {"start": 768.88, "end": 772.08, "text": " studied that the convergence to a flat region whose curvature is small"}, {"start": 772.08, "end": 775.96, "text": " benefits generalization of neural networks and I repeated it as multiple"}, {"start": 775.96, "end": 779.52, "text": " times I think this is a really important key point to keep in mind"}, {"start": 779.52, "end": 786.0, "text": " although mixer has fewer parameters than VATs it has smaller training error but"}, {"start": 786.0, "end": 790.24, "text": " much worse test accuracy so that basically means mixers tend to overfit"}, {"start": 790.24, "end": 794.28, "text": " much more and that makes sense because they have less priors in built into them"}, {"start": 794.28, "end": 799.52, "text": " so here are some nice curves that explain that so taking a look at maybe"}, {"start": 799.52, "end": 805.9599999999999, "text": " mixer B 16 here you can see the training curve is really low but the test"}, {"start": 805.9599999999999, "end": 811.68, "text": " accuracy is not that high on the other hand if we take something like this VAT"}, {"start": 811.68, "end": 817.64, "text": " B 16 the training curves are higher as you can see here so that this this dim"}, {"start": 817.64, "end": 824.16, "text": " orange curve here is higher but also the the test accuracy is higher so as you"}, {"start": 824.16, "end": 829.7199999999999, "text": " can see 
they overfit much much less than than MLP mixers here they just compare"}, {"start": 829.7199999999999, "end": 834.52, "text": " how it looks like without Sam and with Sam and again the training curve with"}, {"start": 834.52, "end": 839.88, "text": " Sam is higher and the test accuracy is also higher so that's cool that means we"}, {"start": 839.88, "end": 843.7199999999999, "text": " overfit much less to the training data and we generalize much better to the"}, {"start": 843.7199999999999, "end": 848.36, "text": " test distribution okay and finally here just the the sparsity constraint they"}, {"start": 848.36, "end": 855.96, "text": " they they noticed that basically in the lower layers using Sam the the number of"}, {"start": 855.96, "end": 861.24, "text": " activated neurons that's the y-axis gets much much lower for for for mixers than"}, {"start": 861.24, "end": 866.5600000000001, "text": " them without Sam here a couple of more bullet points here so Xiao et al."}, {"start": 866.5600000000001, "end": 871.5600000000001, "text": " showed that the trainability of a neural network can be characterized by the"}, {"start": 871.5600000000001, "end": 877.08, "text": " condition number of the associated neural tangent kernel and I won't get"}, {"start": 877.08, "end": 880.2, "text": " into the details of the of the of the kernel but basically it's a it's a"}, {"start": 880.2, "end": 885.08, "text": " simple proxy for trainability so case pretty stable for Resnets echoing"}, {"start": 885.08, "end": 889.0200000000001, "text": " previous results the Resnets enjoy superior trainability regardless of the"}, {"start": 889.0200000000001, "end": 894.12, "text": " depth however so if you remember like back in 2015 when the paper came out"}, {"start": 894.12, "end": 898.6, "text": " Resnets from Microsoft research they showed for the first time that we can"}, {"start": 898.6, "end": 904.2800000000001, "text": " train models from 18 all the way to 151 layers and it just works because of the"}, {"start": 904.28, "end": 908.72, "text": " because of the skip connections or residual connections however you want to"}, {"start": 908.72, "end": 912.88, "text": " call them however we observed that the condition number diverges when it comes"}, {"start": 912.88, "end": 919.24, "text": " to VAT and MLP mixer confirming that the training of VATs desires extra care so"}, {"start": 919.24, "end": 925.16, "text": " they kind of quantify this not kind of they quantify this in this table and you"}, {"start": 925.16, "end": 931.68, "text": " can see it here so here is the NTK here we have Resnets and VATs and mixers so"}, {"start": 931.68, "end": 935.64, "text": " you can see that the NTK for Resnets is pretty much the same so I'm not sure"}, {"start": 935.64, "end": 939.8399999999999, "text": " where those numbers whether there is some maybe bug here but like I'd assume"}, {"start": 939.8399999999999, "end": 944.9599999999999, "text": " that a Resonant 152 should have a bit higher NTK although I'm not sure about"}, {"start": 944.9599999999999, "end": 949.4799999999999, "text": " exact details but okay basically what I want to show you here is that mixer has"}, {"start": 949.4799999999999, "end": 954.66, "text": " much higher NTK which means it's much harder to train the second thing they"}, {"start": 954.66, "end": 959.0, "text": " plot here and I'll go into a bit more detail a bit later but basically Hessians"}, {"start": 959.0, "end": 966.64, "text": " are again a proxy for the curvature of your loss landscape at 
the point of"}, {"start": 966.64, "end": 971.0, "text": " convergence so what they calculate here as you can see so you can see that the"}, {"start": 971.0, "end": 975.52, "text": " lower is better the lower means it's more flat and you can see that VATs have"}, {"start": 975.52, "end": 980.54, "text": " much higher than than Resonants and mixers have even higher Hessian so this"}, {"start": 980.54, "end": 985.72, "text": " is the just the this is just the max eigenvalue of the eigenvector associated"}, {"start": 985.72, "end": 989.4, "text": " with this Hessian so I don't want to confuse you here but it's just a proxy"}, {"start": 989.4, "end": 995.72, "text": " for the curvature and you can see that after applying SAM it just goes like it"}, {"start": 995.72, "end": 999.52, "text": " falls drops down all the way to 20 something so that's even lower than for"}, {"start": 999.52, "end": 1004.12, "text": " Resonant so that's really cool aside from that they they show that the"}, {"start": 1004.12, "end": 1009.0, "text": " performance is really really great so after applying SAM on ImageNet the"}, {"start": 1009.0, "end": 1014.2, "text": " accuracy just kind of increases and as well as on ImageNet C which tests the"}, {"start": 1014.2, "end": 1019.8000000000001, "text": " robustness of the model how is that well because if you take a look at the"}, {"start": 1019.8000000000001, "end": 1024.04, "text": " ImageNet C data set you can see that it just has a bunch of different"}, {"start": 1024.04, "end": 1027.68, "text": " augmentations like Gaussian noise different kinds of noises here in impulse"}, {"start": 1027.68, "end": 1034.88, "text": " noise blurring like they have motion blurring some special effects and some"}, {"start": 1034.88, "end": 1039.8400000000001, "text": " photometric augmentations here like brightness contrast etc so basically you"}, {"start": 1039.8400000000001, "end": 1043.04, "text": " just want to make sure that you're generalizing to these small shifts in"}, {"start": 1043.04, "end": 1048.92, "text": " the distribution of your data set and yeah they showed that actually VATs"}, {"start": 1048.92, "end": 1053.48, "text": " perform even better than comparably sized Resonants okay let's see what else"}, {"start": 1053.48, "end": 1057.68, "text": " is interesting in this paper and I think I've covered everything pretty much a"}, {"start": 1057.68, "end": 1062.56, "text": " bunch of results here and summarized in a couple of sentences here so on the"}, {"start": 1062.56, "end": 1066.6399999999999, "text": " ImageNet validation set SAM boosts the top one accuracy of VATs from"}, {"start": 1066.6399999999999, "end": 1071.12, "text": " something to something so basically increase of 5% here and for mixture as"}, {"start": 1071.12, "end": 1074.4799999999998, "text": " well and empirically so this is interesting empirically the degree of"}, {"start": 1074.4799999999998, "end": 1079.1999999999998, "text": " improvement negatively correlates with the level of inductive biases built into"}, {"start": 1079.1999999999998, "end": 1083.4399999999998, "text": " the architecture so what you're seeing here is the following so let's plot a"}, {"start": 1083.4399999999998, "end": 1089.6, "text": " 2d chart here and on the x-axis let's say we have a bias and on the so the"}, {"start": 1089.6, "end": 1093.4399999999998, "text": " bias ingrained into the architecture itself and on the y-axis we have some"}, {"start": 1093.4399999999998, "end": 1097.76, "text": " like improvement okay and what 
I say here is we have a negative correlation"}, {"start": 1097.76, "end": 1103.8, "text": " like something like this some point cloud here and basically CNNs are here"}, {"start": 1103.8, "end": 1110.28, "text": " so here is a CNN MLP mixtures are probably here and VATs are somewhere"}, {"start": 1110.28, "end": 1116.68, "text": " here I don't know I'm just qualitatively drawing this and what they say is that"}, {"start": 1116.68, "end": 1121.08, "text": " the more priors we have in the architecture the less improvement we get"}, {"start": 1121.08, "end": 1125.16, "text": " from using SAM for that particular architecture so that's something they"}, {"start": 1125.16, "end": 1130.4, "text": " they empirically found so there is no like theoretical explanation for why that"}, {"start": 1130.4, "end": 1137.48, "text": " is but it is okay again it's more robust as well not just more accurate but more"}, {"start": 1137.48, "end": 1141.52, "text": " robust like looking at the image net see that is said nothing interesting there"}, {"start": 1141.52, "end": 1148.16, "text": " I already mentioned that and this is an interesting table basically they just"}, {"start": 1148.16, "end": 1152.16, "text": " kind of decompose the Hessians already mentioned which are a proxy for the"}, {"start": 1152.16, "end": 1158.28, "text": " curvature and you can see looking at the layers like let's focus on VAT or even"}, {"start": 1158.28, "end": 1163.28, "text": " better like on mixer looking at the lower layer like in the embedding layer"}, {"start": 1163.28, "end": 1170.0400000000002, "text": " itself we have a huge Hessian so the huge eigenvalue of the Hessian and going"}, {"start": 1170.0400000000002, "end": 1176.1200000000001, "text": " to deep players block one block six block twelve you can see that the Hessians go"}, {"start": 1176.1200000000001, "end": 1181.88, "text": " down so basically these lower layers contribute to the huge like to the steep"}, {"start": 1181.88, "end": 1185.88, "text": " curvature of the lost landscape and that's what I've fixed here as you can"}, {"start": 1185.88, "end": 1191.96, "text": " see using SAM it drops significantly and basically that correlates with the the"}, {"start": 1191.96, "end": 1195.3200000000002, "text": " fact that we have much sparser activations now in those lower layers as"}, {"start": 1195.3200000000002, "end": 1200.5200000000002, "text": " we saw on the plot up there so that's this plot here basically you can see"}, {"start": 1200.5200000000002, "end": 1205.5200000000002, "text": " that we have much sparser activations so this is the x-axis is the depth of the"}, {"start": 1205.5200000000002, "end": 1209.4, "text": " network so we now see that we have much sparse activations in the shallower"}, {"start": 1209.4, "end": 1214.72, "text": " layers of the network that correlates with this finding here one thing they"}, {"start": 1214.72, "end": 1220.3200000000002, "text": " notice as well is that the L2 norm of the weight vector increases which may"}, {"start": 1220.3200000000002, "end": 1226.0400000000002, "text": " indicate that they've used the weight decay regularization and it seems it's"}, {"start": 1226.0400000000002, "end": 1230.94, "text": " not helping so they need to further investigate that the reason I highlight"}, {"start": 1230.94, "end": 1236.1200000000001, "text": " the recursively here is that basically you can see H of K depends on the H of"}, {"start": 1236.12, "end": 1242.6, "text": " K plus 1 so then that's the reason why we have higher 
eigenvalues so this"}, {"start": 1242.6, "end": 1247.6, "text": " Hessians in the lower layer because it just multiplies with all of the"}, {"start": 1247.6, "end": 1252.0, "text": " previous deeper layers and it just kind of accumulates and blows up in the"}, {"start": 1252.0, "end": 1259.28, "text": " shallower layers okay additionally what they found is that the attention maps"}, {"start": 1259.28, "end": 1265.1999999999998, "text": " found by VATs and MLP mixers after the SAM procedure has much better"}, {"start": 1265.2, "end": 1270.3600000000001, "text": " discriminative features you can see that the attention maps do focus on"}, {"start": 1270.3600000000001, "end": 1276.1200000000001, "text": " something that's salient in these images much better than before using SAM so"}, {"start": 1276.1200000000001, "end": 1281.32, "text": " yeah just a fun fact and I mean visualizations are super important so"}, {"start": 1281.32, "end": 1290.28, "text": " kudos to them for doing this a couple of fun results here basically and it's"}, {"start": 1290.28, "end": 1295.72, "text": " pretty obvious what they show here is that even with using when you're using"}, {"start": 1295.72, "end": 1300.84, "text": " SAM when you go to when you start reducing the number of data points in"}, {"start": 1300.84, "end": 1304.8, "text": " your data set so what they did here is they randomly sampled a half of the"}, {"start": 1304.8, "end": 1309.56, "text": " pictures from the ImageNet 1k and here 1 fourth of the images are randomly"}, {"start": 1309.56, "end": 1314.26, "text": " sampled and we can see that VAT focusing on the orange curve you can see that the"}, {"start": 1314.26, "end": 1318.96, "text": " VATs and mixers degrade much more severely when we go into the low data"}, {"start": 1318.96, "end": 1325.56, "text": " regime here whereas thanks to the biases already mentioned CNN's ResNets here in"}, {"start": 1325.56, "end": 1330.72, "text": " particular managed to keep up that performance even in the low data regime"}, {"start": 1330.72, "end": 1336.1200000000001, "text": " so again priors are super important and nonetheless I do think that this kind of"}, {"start": 1336.1200000000001, "end": 1340.44, "text": " research where we are just doing this blank slate paradigm is going to be very"}, {"start": 1340.44, "end": 1347.0, "text": " interesting and informative over the long long run so yeah they"}, {"start": 1347.0, "end": 1352.52, "text": " also tried some contrastive learning and it kind of improves a bit upon the SAM"}, {"start": 1352.52, "end": 1357.56, "text": " as well they tried adversarial training as well I won't be focusing on that and"}, {"start": 1357.56, "end": 1362.84, "text": " they like using this PGD 10 attack basically PGD and I think they just"}, {"start": 1362.84, "end": 1369.92, "text": " average over 10 attacks and they showed that they get a nice like nice"}, {"start": 1369.92, "end": 1375.2, "text": " performance boost there as well yeah hopefully that's it hopefully that was"}, {"start": 1375.2, "end": 1380.6000000000001, "text": " informative and useful if you found useful consider subscribing sharing"}, {"start": 1380.6, "end": 1406.0, "text": " this video and see you next time bye bye"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=847zrERIr-k
DeepMind's Android RL Environment - AndroidEnv
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I give you some background details behind the newly introduced AndroidEnv and I show you what you need to modify in order to use it for an arbitrary Android app. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ blog: https://deepmind.com/research/publications/androidenv ✅ GitHub: https://github.com/deepmind/android_env ✅ paper/technical report: https://arxiv.org/abs/2105.13231 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Introducing AndroidEnv - why is it important? 03:00 Tech report intro 05:00 Real-time simulation - novel problems for RL agents 07:10 Raw action space, gestures, etc. 11:30 Generic RL agents - results on a selected set of tasks 13:45 Creating a task for your custom Android app 16:31 A set of existing task examples ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #androidenv #deepmind #reinforcementlearning
Introducing AndroidEnv, an open-ended platform for training agents on Android apps and games. With a universal touchscreen interface, access to the entire operating system, and a number of ready-to-use tasks, AndroidEnv is a promising domain for RL research. So I just stumbled across this tweet a couple of days ago, and I thought I'd cover it because it looks like a really impactful future environment. I'm going to walk you through their blog post; they also open-sourced a GitHub repo with instructions on how to get started training your RL agents on different Android applications, and they have a technical report as well, which I'll walk you through for the important details. By the end of this video you should understand why this is important and how you can get started playing with your own Android RL agents. First of all, probably the most obvious point is this: they say that the increasing complexity of environments has driven the development of novel algorithms and agents such as DQN, the Deep Q-Network from 2013, and the Arcade Learning Environment (ALE) was the thing that basically made that happen, aside from convolutional neural networks. Focusing just on environments: then we had AlphaGo with the Go simulator; PPO, OpenAI's proximal policy optimization algorithm, which was developed using MuJoCo; and AlphaStar, which used the StarCraft II environment. So in order to advance the state of the art even further, researchers seek new and more stimulating environments to tackle. The next step, and this is really exciting, is that you can take basically any Android application and train an RL agent on it, with a couple of caveats. We'll see this in the tech report and on GitHub later, but you'll need an open-source app, obviously, and you'll need to modify its source code a bit so that it outputs rewards, which you then parse via custom-made tasks, and finally you have the complete thing: you define when the episode ends and what the rewards in your environment are, and then you can train agents. Other than that, it's really cool, so let's continue. You can see some examples here: a contacts app, opening the Play Store, and you can even use YouTube, browse the web, play games, and do basically whatever Android offers. The whole ecosystem of apps is now open for developing custom reinforcement learning agents, and you can see some games here like 2048, etc. So let me jump straight into the tech report, and then we'll see how you can actually create those custom tasks to train your own RL agents. Here is the paper, and I just want to walk you through the main ideas. First things first, and this is the obvious one: screen pixels constitute the observations, and the action space is defined by touchscreen gestures, although that's not completely true, and we'll see what the raw action space looks like in a minute. The interaction is real-time, and this is really important: the actions are executed asynchronously while the environment runs at its own time scale.
This means the environment is not a lockstep environment like Atari, where you send an action and the environment waits until you figure out what your next action will be; only once you send the action does it execute a step forward, and then it waits again for your next action. That's the lockstep paradigm. Here, on the other hand, it's a real-time environment, which means the Android emulator doesn't care whether you skip certain actions because you were too slow to compute them or whatever the problem was. That's something that makes this even more challenging than those other simulators, and we'll dig into a bit more detail on it in a couple of minutes, but I just want to state this one: the sheer number of apps built for a multitude of important aspects of human life, ranging from education and business to communication and entertainment, provides virtually unlimited challenges for RL research. This is going to be big, in my opinion. We now have a huge pool of applications that are actually meaningful, compared to Atari games or those made-up environments. It will be so much easier, after training these agents, to deploy them into some product, so there is a much smaller gap between this environment and the thing you actually want, compared to Atari or some other environments. Okay, let's continue. I already mentioned that AndroidEnv is unable to run in lockstep, so let me jump over all of these and tackle that problem first. Here are some additional details. They say that another important factor to consider in real-time environments is that agents require some deliberation time to generate the next action given an observation. That means if your agent is a convolutional neural network, once it receives the observation it will take some time to process it in order to output the action. That doesn't matter in lockstep environments, but here the timings actually do matter. In a traditional lockstep environment, the environment generates an observation and pauses to wait until the agent responds with an action before stepping the simulation forward. The diagram here basically compresses the deliberation time, because it doesn't matter: once you compute the action, you issue it, the environment steps forward, and you just repeat, and you can see there are no problems. On the other hand, in a real-time environment like AndroidEnv, a large deliberation time can be harmful to performance. They have a nice chart for this as well: the agent sends the action, and in order to cope with the environment being real-time, you can insert artificial waits into your routine. So what happens is: once you issue an action, the simulator starts rendering the observation (this may vary depending on the app, et cetera), then once you get the observation it takes some time for the agent to compute the next action, and then it issues that action. Because of the variance of the deliberation time and of the rendering, you'll have to think about this a bit more than in your lockstep environments. That's something to keep in mind; it's going to make things a bit more challenging, but I guess it's not that big of a deal.
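To make the lockstep versus real-time distinction concrete, here is a minimal sketch of an interaction loop, assuming a dm_env-style reset()/step(action) interface of the kind AndroidEnv exposes; the agent object, its select_action method, and the fixed-rate wait are placeholders chosen purely for illustration.

```python
import time

def run_episode(env, agent, target_step_s=0.1):
    """Illustrative real-time interaction loop: the environment keeps running
    while the agent deliberates, so we optionally insert an artificial wait to
    keep a roughly constant action rate instead of assuming the simulator
    pauses for us (as a lockstep environment would)."""
    timestep = env.reset()
    total_reward = 0.0
    while not timestep.last():
        t0 = time.time()
        action = agent.select_action(timestep.observation)   # deliberation time
        # In a lockstep env the world waits here; in a real-time env the app has
        # kept rendering, so we regularize our own timing instead.
        elapsed = time.time() - t0
        if elapsed < target_step_s:
            time.sleep(target_step_s - elapsed)
        timestep = env.step(action)
        total_reward += timestep.reward or 0.0
    return total_reward
```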
Okay, let me get back to the beginning of the tech report and start with the action interface. The action interface is super simple: the native action space of the environment consists of a tuple containing a position, basically x and y coordinates in continuous space, and AndroidEnv will, depending on the actual device you're emulating and the resolution of your emulator, discretize those continuous coordinates into pixels, so you don't have to care about the actual resolution. The second part of the action is a discrete value: touch, lift, or repeat. So you can touch the screen, lift the pointer, or repeat the previous action. That's the raw action space. The thing is, in order to meaningfully interact with Android applications, as you may know, you need gestures, which are much more complex. They say that the complexity of the interface arises from the fact that individual raw actions on their own do not necessarily trigger a meaningful change in the environment, and they have nice visualizations here: you can see swiping, a simple touch and lift, and a somewhat more nonlinear swipe movement. This makes it much harder for the agent to learn, because you have to be precise both in the spatial sense and in the time sense: you can't do part of a swipe, stop, and then continue, because Android itself probably won't interpret that as a correct swipe gesture. So the agent will have to be precise both time-wise and space-wise, let's call it that way. And I think they mention it here: this need to compose actions, paired with the difficulty of solving the underlying task itself, leads to a difficult exploration problem. For example, in order to learn to play chess, an agent must not only find a winning strategy, it also has to learn to move pieces through drag-and-drop gestures. So that's the part I mentioned, and something to keep in mind. Aside from the information in the observation, which is basically an RGB frame, you also have additional task extras. An extra in AndroidEnv is any information that the environment sends to aid the understanding of the task; the information sent through this channel is typically very useful for learning, yet difficult to extract from raw pixels. The example I gave was text being displayed on the screen: instead of having to parse it and additionally learn OCR, you have the option of circumventing that and putting the string directly into this extras variable, making it a bit easier to learn the task at hand. I mentioned that in order to actually deploy your RL agent to these applications, you have to modify the apps so as to create the notion of a reward, an episode signal, et cetera. They say that while Android is an operating system with no inherent rewards or episodes, AndroidEnv provides a simple mechanism for defining tasks on it, and we'll see how that functions.
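Before moving on to task definitions, here is a rough illustration of the raw action space just described: a continuous screen position plus a discrete action type on every step. The dict keys below (action_type, touch_position) follow the AndroidEnv repo's convention as I recall it, so treat the exact names and enum values as an assumption rather than gospel.

```python
import numpy as np

# Discrete action types described in the report (numeric values are illustrative).
TOUCH, LIFT, REPEAT = 0, 1, 2

def make_raw_action(x, y, action_type=TOUCH):
    """Build one raw action: a continuous (x, y) position in [0, 1]^2, which the
    environment discretizes to the emulator's resolution, plus a discrete type."""
    return {
        "action_type": np.array(action_type, dtype=np.int32),
        "touch_position": np.array([x, y], dtype=np.float32),
    }

# A swipe is just a sequence of raw actions that must be precise in both space
# and time: touch down, drag through intermediate positions, then lift.
swipe = (
    [make_raw_action(0.2, 0.5, TOUCH)]
    + [make_raw_action(0.2 + 0.1 * i, 0.5, TOUCH) for i in range(1, 6)]
    + [make_raw_action(0.7, 0.5, LIFT)]
)
```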
I'll just walk you through the GitHub, as that's going to be the best explanation, I guess. They already made a small selection of tasks, and you can see some apps like Catch, Rocket Sleigh, Press Button, games like 2048 (a popular one), and Blockinger, whose layout is pretty complex. They say that, for example, most agents perform well on tasks such as Catch, which has a simple action interface and dense rewards, whereas the combination of a highly structured interface (you can see the complex layout: all these buttons plus the actual game screen), time sensitivity, and sparse rewards renders Blockinger particularly difficult to solve. They tested a bunch of common RL agents, and the interesting thing is that DQN, one of the oldest deep RL agents out there, has maybe one of the best performances across these tasks; as you can see from the cyan curves, DQN outperforms many of the other agents like R2D2 or IMPALA. You can also see the progression going from the top left to the bottom right: the tasks get harder and harder, and the performance really drops. Okay, by the way, for those of you who haven't developed Android apps, let me give you some intuition here. When you open up Android Studio, in order to debug your app you initialize this emulator, which can be an arbitrary device that you specify, and it will have a certain height and width of the screen; you can also pick a specific version of Android. All of that is specified through Android Studio, so these are things you won't have to worry about, because the emulators themselves take care of this part and it's abstracted away from you. Additionally, they mention a bunch of useful environments that were previously open-sourced; for some of them, like MuJoCo, you have to pay for a license. They end by saying that since Android has billions of users, and AndroidEnv provides tasks that run on the standard Android operating system simulator, agents trained on the platform could potentially tackle a wide range of use cases, leading to direct real-world impact. This is something I'm really passionate about, so seeing these agents, and RL in general, having some real-world impact will be super interesting, and I hope to see research and apps happening because of this new environment. Having said that, let me go back to the GitHub and show you how to actually create a task. If you open up the tasks guide MD file, you'll see two sections where they describe what you need to do in order to enable your Android application to be used as a training ground for your RL agent. The main things are these setup steps, which, once training starts, do a couple of things; for example, you can automate the installation of the application from the APK file, which is just the Android binary, plus a bunch of other stuff. I'll just focus on the parts important to RL, and that's defining rewards and the end-of-episode signal. You basically have to set up this log parsing config in order to define certain regex expressions.
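To make the regex-based reward definition concrete, here is a tiny Python sketch of the kind of parsing it boils down to, using the signed-float pattern discussed below; the "score:" log format is made up for illustration and is not taken from the actual task definitions.

```python
import re

# Signed float: optional +/- sign, digits, a dot, and more digits.
REWARD_RE = re.compile(r"^score:\s*([-+]?\d+\.\d+)$")

def parse_reward(log_line):
    """Return the reward encoded in a log line such as 'score: 12.0', or None."""
    match = REWARD_RE.match(log_line.strip())
    return float(match.group(1)) if match else None

assert parse_reward("score: 12.0") == 12.0
assert parse_reward("unrelated log message") is None
```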
The reason for this is that apps communicate the reward information, as I think I mentioned, through the log console. So here's an example of a textproto file which defines the task, and if I expand it you can see it's pretty self-explanatory: perform these steps upon launching the environment, install the app, check that it's installed correctly, et cetera. That's not the most interesting part, so let me skip down to this part: you can see that you define the regex expression for your reward, for your score, for the episode end signal, and so on. The way this all works is the following. You might have noticed that the tasks often rely on log messages exposed by the Android system, which AndroidEnv can intercept and translate into items such as rewards, episode end signals, or task extras. They say that, of course, applications might not send suitable messages by default, so in order to have access to such messages, they often add them to the app source code to match their expectation. For example, in the case of the 2048 app, they find in the game source code the exact lines where the score is computed and add a line to log this value in the format that is expected by the textproto, which is this file here. As you can see, it's a regular expression: a plus or minus sign, then a couple of digits followed by a dot and a couple more digits, so basically a float, a scalar value. So you add a line to log this value in the format expected by the textproto, or conversely make sure the regex matches whatever the app already logs. Basically, you have to edit the source code of your application by adding a logging line, and then through this textproto the environment can send that information to the agent; that's how you set everything up and train. Finally, they have a couple of examples on this example tasks page, and I'll link all of these in the description. Here is a really simple environment they created; take this one as an example: the RL agent gets a reward of plus one if it clicks the B button, and if it clicks the A button the episode ends, because that's how they defined it. What happens then is the app shuts down, and the environment automatically relaunches the app, and that's all done through the task proto file we just saw. So that's how it works. Aside from this one, they have a bunch of other examples, like a calculator, some games such as the 2048 one I mentioned, for which they had to modify the source code, and chess. So yeah, a bunch of different stuff. Hopefully you found this one useful; if you did, leave a like, subscribe, and see you in the next video.
[{"start": 0.0, "end": 5.5200000000000005, "text": " Introducing Android M, an open-ended platform for training agents on Android apps and games."}, {"start": 5.5200000000000005, "end": 9.44, "text": " With a universal touchscreen interface, access to the entire operating system,"}, {"start": 9.44, "end": 14.96, "text": " and a number of ready-to-use tasks, Android M is a promising domain for RL research."}, {"start": 14.96, "end": 20.240000000000002, "text": " So I just stumbled across this tweet a couple of days ago, and I thought covering this because"}, {"start": 20.240000000000002, "end": 27.84, "text": " it seems like a really, really impactful future environment. So I'm going to walk you through"}, {"start": 27.84, "end": 33.68, "text": " their blog. So they have a blog here, and they also open-sourced a GitHub repo"}, {"start": 34.4, "end": 39.68, "text": " with instructions how to get this started and start training your RL agents on Android,"}, {"start": 40.32, "end": 44.879999999999995, "text": " different Android applications. So they also have a technical report. I'll walk you through"}, {"start": 44.879999999999995, "end": 50.0, "text": " that one as well, just some important details. And by the end of this video, hopefully you'll"}, {"start": 50.0, "end": 57.120000000000005, "text": " understand why this is important and how you can get started playing with your own Android RL agents."}, {"start": 57.12, "end": 62.879999999999995, "text": " So first of all, probably the most obvious thing is this. So they say here,"}, {"start": 62.879999999999995, "end": 67.2, "text": " the increasing complexity of environments has driven the development of novel algorithms and"}, {"start": 67.2, "end": 76.56, "text": " agents such as TQN. So that's the Deep Q network from 2013, and Atari learning environment,"}, {"start": 76.56, "end": 82.0, "text": " ALE, was the thing that basically made it happen aside from convolutional neural networks."}, {"start": 82.0, "end": 87.76, "text": " But yeah, just focusing on environments. Then we had AlphaGo, again, Go simulator, PPO,"}, {"start": 87.76, "end": 95.28, "text": " like OpenAI's algorithm, proximal policy optimization. They used Mojoco to develop that"}, {"start": 95.28, "end": 100.48, "text": " one, and again, AlphaStar used StarCraft II environment. So in order to advance the state"}, {"start": 100.48, "end": 105.28, "text": " of the art, even further, researchers seek new and more stimulating environments to tackle."}, {"start": 105.28, "end": 110.56, "text": " So the next step, and this is really exciting, basically how you can take whatever Android"}, {"start": 110.56, "end": 116.72, "text": " application and train an RL agent with a couple of caveats. So basically, and we'll see that in"}, {"start": 116.72, "end": 121.92, "text": " the tech report and on the GitHub later, but you'll need to have an open source app, obviously,"}, {"start": 121.92, "end": 127.28, "text": " and you'll need to modify the source code a bit so that it outputs the rewards, which then you'll"}, {"start": 127.28, "end": 134.24, "text": " parse via custom-made tasks, and finally you'll have the complete thing. So you know, you'll define"}, {"start": 134.24, "end": 139.44, "text": " when the episode ends, what are the rewards in your environment, and then you can train the agents."}, {"start": 139.44, "end": 145.44, "text": " But other than that, it's really cool, and yeah, let's continue here. 
Yeah, you can see some"}, {"start": 145.44, "end": 151.44, "text": " examples of a contact app, and they're using here, like whatever, like opening Play Store,"}, {"start": 152.0, "end": 159.12, "text": " and you can even use YouTube, you can browse the web, you can do basically whatever, you can play"}, {"start": 159.12, "end": 166.64, "text": " games, you can do whatever Android offers. The whole ecosystem of these apps is now basically"}, {"start": 166.64, "end": 173.11999999999998, "text": " open for developing custom reinforcement learning agents, and you can see here some games like 2048,"}, {"start": 173.11999999999998, "end": 178.0, "text": " etc. So let me just jump straight into the tech report, and then we'll see how you can actually"}, {"start": 178.0, "end": 183.76, "text": " create those custom tasks so you can train your own RL agents. So here is the paper, and I just"}, {"start": 183.76, "end": 188.88, "text": " want to walk you through the main ideas here. So first things first, so this is the obvious one,"}, {"start": 188.88, "end": 193.67999999999998, "text": " but screen pixels constitute the observations, the action space is defined by touchscreen gestures,"}, {"start": 193.68, "end": 198.48000000000002, "text": " and this is not completely true, we'll see what the raw action space looks like in a minute."}, {"start": 199.52, "end": 203.84, "text": " The interaction is real-time, and this is really important, and the actions are executed"}, {"start": 203.84, "end": 209.52, "text": " asynchronously while the environment runs at its own time scale. So this means that basically"}, {"start": 210.4, "end": 215.44, "text": " the environment is not a lockstep environment like Atari, where you basically send an action,"}, {"start": 215.44, "end": 221.52, "text": " so the environment waits until you figure out what your next action will be, and then once you send"}, {"start": 221.52, "end": 226.88000000000002, "text": " the action to the environment, it executes a step forward, and then waits again for your next action."}, {"start": 226.88000000000002, "end": 231.68, "text": " So that's the lockstep paradigm. Here on the other hand, it's a real-time environment, that means"}, {"start": 231.68, "end": 237.52, "text": " Android emulator doesn't care whether you skip certain actions if you were too slow to compute"}, {"start": 237.52, "end": 242.88, "text": " that action or whatever was the problem. So that's something that makes this even more challenging"}, {"start": 242.88, "end": 249.12, "text": " than those other simulators. And we'll dig in a bit more details in a couple of minutes there as"}, {"start": 249.12, "end": 254.24, "text": " well, but I just want to state this one. So the sheer number of apps built for a multitude of"}, {"start": 254.24, "end": 258.4, "text": " important aspects of human life ranging from education and business to communication and"}, {"start": 258.4, "end": 264.88, "text": " entertainment provides virtually unlimited challenges for RL research. So this is going"}, {"start": 264.88, "end": 274.0, "text": " to be big in my opinion. We now have a huge taskpad of applications which are actually"}, {"start": 274.0, "end": 279.52, "text": " meaningful, like comparing that to Atari games or those made-up environments. 
It will be so much"}, {"start": 279.52, "end": 284.8, "text": " easier to actually, after training these agents, to deploy them into some product, so there is a"}, {"start": 284.8, "end": 292.24, "text": " much lesser gap between this environment and the thing you actually want and between compared to"}, {"start": 292.24, "end": 298.08, "text": " Atari or some other environments. Okay, so let's continue here. So I already mentioned, so Android"}, {"start": 298.08, "end": 305.35999999999996, "text": " M is unable to run in lockstep. And let me jump over all of these and first tackle that problem."}, {"start": 305.35999999999996, "end": 311.12, "text": " Okay, so here are some additional details. So they say here, another important factor to consider in"}, {"start": 311.12, "end": 316.0, "text": " real-time environments is that agents require some deliberation time to generate the next action"}, {"start": 316.0, "end": 320.0, "text": " given an observation. So that means if you have an agent that's a convolutional neural network,"}, {"start": 320.0, "end": 324.79999999999995, "text": " once it receives the observation, it will take some time to process that observation in order"}, {"start": 324.8, "end": 330.48, "text": " to output the action. And that doesn't matter in those lockstep environments, but here the timings"}, {"start": 330.48, "end": 335.36, "text": " actually do matter. So they say it here, in a traditional lockstep environment, the environment"}, {"start": 335.36, "end": 341.28000000000003, "text": " generates an observation and pauses to wait until the agent responds with an action before stepping"}, {"start": 341.28000000000003, "end": 350.0, "text": " the simulation forward. So, okay, and the diagram is here and basically, so once you get the"}, {"start": 350.0, "end": 355.2, "text": " observation, they kind of compress the deliberation time because it doesn't matter. So basically,"}, {"start": 355.2, "end": 360.64, "text": " once you compute it, you issue an action and then the environment steps forward and then just repeat."}, {"start": 360.64, "end": 366.56, "text": " And you can see there are no problems here. But on the other hand, if you have a real-time"}, {"start": 366.56, "end": 374.24, "text": " environment like Atari, like Android M, the large deliberation time can be harmful to performance."}, {"start": 374.24, "end": 381.6, "text": " And they have a nice chart here as well. So here the action sends the, so the agent sends the action"}, {"start": 381.6, "end": 387.12, "text": " and you can basically, in order to cope with this problem of this being a real-time environment,"}, {"start": 387.12, "end": 394.8, "text": " you can insert these artificial weights inside your routine. And so what happens is once you"}, {"start": 395.6, "end": 401.36, "text": " issue an action, the simulator starts rendering the observation. So this may vary depending on"}, {"start": 401.36, "end": 406.96000000000004, "text": " the app, et cetera. And then once you get the observation, it takes some time for the agent to"}, {"start": 407.52000000000004, "end": 412.08000000000004, "text": " compute the next action and then it issues the action. And as you can see, because of the"}, {"start": 412.08000000000004, "end": 419.6, "text": " variance of this thing and of the rendering, you'll have to kind of think about this a bit more than"}, {"start": 419.6, "end": 423.44, "text": " in your lockstep environments. So that's something to keep in mind. 
It's going to make"}, {"start": 424.72, "end": 428.16, "text": " this a bit more challenging, but yeah, I guess it's not that big of a deal."}, {"start": 428.16, "end": 434.96000000000004, "text": " Okay. Let me get back to the beginning of the stack report and let's start with the action"}, {"start": 434.96000000000004, "end": 440.88000000000005, "text": " interface. So basically the action interface is super simple. So the native action space of the"}, {"start": 440.88000000000005, "end": 446.0, "text": " environment consists of a tuple consisting of a position. Basically you have X and Y coordinates"}, {"start": 446.0, "end": 451.92, "text": " which are in the continuous space. And then the Android M will, depending on the actual device"}, {"start": 451.92, "end": 456.96000000000004, "text": " you're using, the resolution of your emulator, it will just discretize these continuous actions."}, {"start": 456.96, "end": 462.0, "text": " Continuous actions into those pixels. So that is, it's basically, you don't have to care about the"}, {"start": 462.0, "end": 468.23999999999995, "text": " actual resolution. The second part of your action will be this discrete value. So it can be touch,"}, {"start": 468.23999999999995, "end": 472.32, "text": " lift, or just repeat. Basically, yeah, you can either touch the screen, you can either lift"}, {"start": 473.59999999999997, "end": 480.47999999999996, "text": " the pointer and you can repeat the previous action. So that's the raw action space. And the thing is"}, {"start": 481.59999999999997, "end": 486.64, "text": " in order to actually, in order to meaningful interact with Android applications, as you may"}, {"start": 486.64, "end": 491.91999999999996, "text": " know, you need these gestures. So which are much more complex. And they say it here, the complexity"}, {"start": 491.91999999999996, "end": 496.71999999999997, "text": " of the interface arises from the fact that individual raw actions on their own do not"}, {"start": 496.71999999999997, "end": 502.4, "text": " necessarily trigger a meaningful change in the environment. And they have nice visualizations"}, {"start": 502.4, "end": 509.36, "text": " here. Basically you can see the swiping and you can see simple touch and lift. You can see a bit"}, {"start": 509.36, "end": 518.08, "text": " more like nonlinear swipe movement. And basically, this makes it so much harder for the agent to learn"}, {"start": 518.08, "end": 524.4, "text": " because you have to have, you have to be both precise in the spatial sense and also in the time"}, {"start": 524.4, "end": 532.08, "text": " sense. So you can't do it like this and then stop here and then continue doing the swipe because the"}, {"start": 532.08, "end": 537.2, "text": " system, the Android itself, probably won't interpret that as a correct gesture as a swipe action. So"}, {"start": 537.2, "end": 542.8000000000001, "text": " you'll have, so the agent will have to both be precise both time-wise and space-wise. Let's call"}, {"start": 542.8000000000001, "end": 548.5600000000001, "text": " it that way. And I think they mention it here. So this need to compose the actions pair with the"}, {"start": 548.5600000000001, "end": 554.0, "text": " difficulty of solving the underlying task itself leads to difficult exploration problem. 
So for"}, {"start": 554.0, "end": 559.12, "text": " example, in order to learn to play chess, an agent must not only find a winning strategy, it also has"}, {"start": 559.12, "end": 564.5600000000001, "text": " to learn to move pieces through drag and drop gestures. So yeah, that's the part I mentioned."}, {"start": 564.56, "end": 570.9599999999999, "text": " And that's something to keep in mind. Aside from these information from the observation,"}, {"start": 570.9599999999999, "end": 577.52, "text": " which is basically an RGB frame, you'll have additional text task extras. And so an extra"}, {"start": 577.52, "end": 581.1199999999999, "text": " in Android Envy is any information that the environment sends to aid the understanding of"}, {"start": 581.1199999999999, "end": 585.04, "text": " the task. The information sent through this channel is typically very useful for learning,"}, {"start": 585.04, "end": 590.4, "text": " yet difficult to extract from raw pixels. And I gave an example of, I think, like text being"}, {"start": 590.4, "end": 595.28, "text": " displayed on the screen. And instead of you having to parse and learn the OCR additionally,"}, {"start": 596.0799999999999, "end": 602.9599999999999, "text": " you'll have the option of just circumventing that and printing that, like just putting that"}, {"start": 602.9599999999999, "end": 607.92, "text": " into this extras variable, which will contain the string directly, making it a bit easier to learn"}, {"start": 607.92, "end": 617.92, "text": " the task at hand. So I mentioned that in order to actually deploy your RL agent to these applications,"}, {"start": 617.92, "end": 626.0799999999999, "text": " you'll have to modify the apps in a way in order to create the notion of reward and of episode"}, {"start": 626.0799999999999, "end": 632.7199999999999, "text": " signal, et cetera. So they say here, while Android is an operating system with no inherent rewards"}, {"start": 632.7199999999999, "end": 638.3199999999999, "text": " or episodes, Android Env provides a simple mechanism for defining tasks on it. And we'll"}, {"start": 638.3199999999999, "end": 645.1999999999999, "text": " see how that functions. I'll just walk you through the GitHub and that's going to be the best"}, {"start": 645.2, "end": 650.6400000000001, "text": " explanation, I guess. They made a small selection of tasks already, and you can see some apps like"}, {"start": 652.1600000000001, "end": 659.84, "text": " Catch here and RocketSlave, PressButton, some apps like 2048, like this is a popular game,"}, {"start": 659.84, "end": 665.2800000000001, "text": " and Blockinger. And you can see the layout here is pretty complex. And they say it here,"}, {"start": 665.9200000000001, "end": 670.5600000000001, "text": " for example, most agents perform well on tasks such as Catch. So this is a simple one."}, {"start": 670.5600000000001, "end": 674.5600000000001, "text": " They have a simple action interface and dense rewards, whereas the combination of a highly"}, {"start": 674.56, "end": 679.3599999999999, "text": " structured interface, you can see the complex layout here, like all these buttons and here,"}, {"start": 679.3599999999999, "end": 686.4799999999999, "text": " the actual screen of the game. Time sensitivity and sparse rewards renders Blockinger particularly"}, {"start": 686.4799999999999, "end": 692.64, "text": " difficult to solve. And they tested a bunch of common RL agents. 
And the interesting thing was"}, {"start": 692.64, "end": 697.5999999999999, "text": " that actually DQN, one of the oldest agents out there, like deep learning agents at least,"}, {"start": 698.2399999999999, "end": 703.8399999999999, "text": " deep RL agents, has maybe one of the best performances on all of these tasks. And as you"}, {"start": 703.84, "end": 712.1600000000001, "text": " can see, these cyan curves, DQN outperforms many of these other agents like RTD2 or Impala, etc."}, {"start": 712.1600000000001, "end": 717.0400000000001, "text": " And you can also see the progression, like going from top left here to bottom right, you can see"}, {"start": 717.0400000000001, "end": 724.1600000000001, "text": " that the tasks get harder and harder, and basically the performance really drops down. And yeah."}, {"start": 725.84, "end": 729.36, "text": " Okay, by the way, for those of you who haven't been developing Android apps,"}, {"start": 729.36, "end": 736.88, "text": " let me just kind of give you some intuition here. Basically, when you open up an Android studio,"}, {"start": 736.88, "end": 744.96, "text": " and you can basically in order to debug your app, you initialize this emulator, which can be"}, {"start": 744.96, "end": 753.04, "text": " arbitrary device, which you need to specify, and it will have certain height and certain width of"}, {"start": 753.04, "end": 760.16, "text": " the screen. So you can also pick a specific version of Android. So all of that can be specified"}, {"start": 760.16, "end": 764.56, "text": " through Android Studio. So these are just some things you won't have to worry about because the"}, {"start": 765.36, "end": 773.92, "text": " emulators themselves care about this part. And that's just abstracted from you. Additionally,"}, {"start": 773.92, "end": 783.28, "text": " they mentioned a bunch of useful environments that were previously open sourced or advised."}, {"start": 783.28, "end": 789.8399999999999, "text": " Some of them are like you have to pay the license like Mojoco. But they end up here saying that"}, {"start": 789.8399999999999, "end": 794.24, "text": " since Android has billions of users and Android M provides tasks that run on the standard Android"}, {"start": 794.9599999999999, "end": 800.0, "text": " operating system simulator, agents trained on the platform could potentially tackle a wide range of"}, {"start": 800.0, "end": 805.28, "text": " use cases leading to direct real world impact. And this is something I'm really passionate about. So"}, {"start": 805.28, "end": 813.12, "text": " seeing these agents and RL in general, having some real world impact will be super, super interesting."}, {"start": 813.12, "end": 820.64, "text": " And I hope to see like research and apps happening because of this new environment. So having said"}, {"start": 820.64, "end": 826.08, "text": " that, let me go back to the GitHub and show you how to actually create a task. Okay, so if you open"}, {"start": 826.08, "end": 834.32, "text": " up this tasks guide MD file, you'll see these two sections here where they basically describe what"}, {"start": 834.32, "end": 840.4000000000001, "text": " you need to do in order to enable your Android application to be used as a training ground for"}, {"start": 840.4000000000001, "end": 848.64, "text": " your your RL agent. 
And the main things are so you have this setup steps, which will once the once"}, {"start": 848.64, "end": 853.5200000000001, "text": " the training start, it will just do a couple of things like you can automate installation of the"}, {"start": 853.52, "end": 860.96, "text": " the application from the APK APK file, which is just Android binary. You can do a bunch of those"}, {"start": 860.96, "end": 868.3199999999999, "text": " stuff. And I'll just focus on the stuff important to RL. And that's, that's defining rewards and"}, {"start": 869.36, "end": 874.8, "text": " like end of episode signal. So you'll basically have to have to set up this log parsing config"}, {"start": 875.6, "end": 882.3199999999999, "text": " in order to just define certain regs expressions. Because the way that apps will communicate"}, {"start": 882.32, "end": 888.6400000000001, "text": " the reward information, as I think I mentioned, is through the log log, basically through the log"}, {"start": 888.6400000000001, "end": 896.08, "text": " console. So here's an example of a text proto file, which defines the task. And if I expand it,"}, {"start": 896.08, "end": 900.8000000000001, "text": " you can see so it's pretty self explanatory. So perform these upon launching the environment,"}, {"start": 900.8000000000001, "end": 905.44, "text": " install the app, check if it's installed correctly, etc. So that's the not that interesting part. Let"}, {"start": 905.44, "end": 911.44, "text": " me just skip down to the this part. And basically, you can see you define the regx expression for"}, {"start": 911.44, "end": 920.6400000000001, "text": " your reward and for your score for the episode and signal etc. And the way you know this all works is"}, {"start": 920.6400000000001, "end": 925.5200000000001, "text": " the following. So you might you might have noticed the tasks often rely on log messages exposed by"}, {"start": 925.5200000000001, "end": 930.72, "text": " the Android system, which Android app can intercept and translate into items such as rewards,"}, {"start": 930.72, "end": 936.24, "text": " episode and signals or task extras. And they said here, so of course, applications might not send"}, {"start": 936.24, "end": 942.0, "text": " suitable messages by default. So in order to have access to such messages, we often add them to the"}, {"start": 942.0, "end": 947.76, "text": " app source code to match our expectation. For example, in the case of the 2048 app, we find in"}, {"start": 947.76, "end": 952.64, "text": " the game source code the exact lines where the score is computed, and add a line to log this value"}, {"start": 952.64, "end": 959.44, "text": " in the format that is expected by the text proto. So that's this file here. So as you can see here,"}, {"start": 959.44, "end": 963.6800000000001, "text": " some regular expressions, we have a plus or minus sign, then we have a couple of digits"}, {"start": 963.68, "end": 968.88, "text": " followed by a dot and then a couple of digits more. So basically a float like a scalar value."}, {"start": 969.76, "end": 974.4, "text": " And so you add the line to log this value in the format that is expected by the text proto,"}, {"start": 974.4, "end": 978.9599999999999, "text": " or conversely, make sure that yeah, whatever. So basically, you'll have to edit the source"}, {"start": 978.9599999999999, "end": 985.1999999999999, "text": " code of your application by adding a logging line here. 
And then through this text proto,"}, {"start": 985.1999999999999, "end": 989.68, "text": " you'll be able to send that environment to the agent. And that's how you're going to actually"}, {"start": 989.68, "end": 994.2399999999999, "text": " train and set up everything. And finally, they have a couple of examples here on this page,"}, {"start": 994.9599999999999, "end": 1000.4799999999999, "text": " example tasks. And I'll just link all of these in the description. But basically, here is a really"}, {"start": 1000.4799999999999, "end": 1006.8, "text": " simple environment they created. And in this one, so take like, take this one as an example. So"}, {"start": 1006.8, "end": 1012.56, "text": " basically, the RL agent gets a reward plus one, if it clicks the B button, and if it clicks the A"}, {"start": 1012.56, "end": 1019.12, "text": " button, it just the episode ends, because that's how they defined it. And basically, what happens"}, {"start": 1019.12, "end": 1027.36, "text": " then is the app shuts down, and the environment will automatically launch relaunch the app again."}, {"start": 1027.36, "end": 1033.84, "text": " And that's done through all through the task proto file we just saw. So that's how it works. And"}, {"start": 1035.44, "end": 1039.76, "text": " there are, aside from this one, they have a bunch of other examples, like with a calculator,"}, {"start": 1040.32, "end": 1048.72, "text": " some games, like the 2084 dimension, for which they had to modify the source code, and like chess."}, {"start": 1048.72, "end": 1053.6000000000001, "text": " So yeah, bunch of different stuff. And yeah, hopefully you found this one useful. If you did,"}, {"start": 1053.6, "end": 1081.6, "text": " leave a like, subscribe and see you in the next video."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=AoKf3SvvTIU
MLP-Mixer: An all-MLP Architecture for Vision | Paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I explain the MLP-Mixer: An all-MLP Architecture for Vision paper, aka MLP is all you need. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ paper: https://arxiv.org/pdf/2105.01601.pdf ✅ Sutton's blog: http://www.incompleteideas.net/IncIdeas/BitterLesson.html ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 We've gone the full circle 01:50 Bitter lessons by Sutton 02:50 Architecture overview 06:45 Rant, rant, rant 08:20 Results 11:00 Pareto frontier 15:10 Visualization of learned weights 18:30 Decrypting equations 21:20 No positional encodings 24:10 Dark magic, initializing weight matrices during fine-tuning ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #mixer #mlp #allyouneed
What's up? In this video I'm covering MLP-Mixer, an all-MLP architecture for vision, a newly published paper by the Google Brain team. I'd give it an alternative title, "MLP is all you need", because as you'll see in a second we've gone full circle: we used to use MLPs, multi-layer perceptrons, to solve vision tasks; in 2012 we had the ImageNet moment with AlexNet, which showed that it's much wiser to just use big convolutional neural networks with lots of data and compute; then we started using transformers, starting with the Vaswani transformer in 2017, and last year we got the Vision Transformer, which showed that with lots of data (it used JFT-300M, Google's proprietary dataset with 300 million data points) it can learn really nice representations and even achieve better results than CNNs. Finally, back to 2021: this paper came out and showed that by just using simple multi-layer perceptrons, arguably in a clever way, you can achieve similar results, and compute-wise you can even be a bit better, and throughput-wise as well. So let's see. The main thing is that this paper doesn't try to set a new SOTA; it shows that we should maybe investigate other research paths which may lead us even further. This paper is a good step in that direction: they show that with simplicity they can still achieve great results, so who knows, a couple of papers down the line, what this may evolve into. Having said that, let's dig into the paper. First things first, they say that Mixer relies only on basic matrix multiplication routines, changes to data layout such as reshapes and transpositions, and scalar nonlinearities such as GELUs, as we'll soon see. That part reminds me of "The Bitter Lesson" by Richard Sutton, a blog post you should check out (I'll link it in the description), which basically says that the methods that stand the test of time are the simple ones that can leverage the available computation of their era. This paper is super simple, and they even link a code-snippet implementation on the last page which pretty much tells you everything about it; it's basically a 20-liner, and that's awesome: it's simple and achieves nice results, which is the part I like about this paper. Architecture-wise, they just have channel-mixing MLPs and token-mixing MLPs. The token-mixing MLPs attend across the image, over the spatial extent, i.e. across the tokens, the patches; the channel-mixing ones focus on a single patch and process its channels, hence the Mixer name: they alternate between processing channels and processing over the spatial extent, i.e. over the tokens or patches. Here is the architecture, fairly simple, and it resembles the Vision Transformer a lot; if you haven't checked it out, I covered the Vision Transformer in a previous video, which I'll link somewhere here. You take the input image, split it into patches as you can see here, flatten them out, unroll all of these matrices into vectors, and use a single fully connected layer, shared across all of the patches, to project them into new latent representation vectors.
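As a rough sketch of that front end (split the image into patches, flatten each patch, and project it with one fully connected layer shared across all patches), here is a minimal PyTorch-style version; the dimensions are made up for illustration and this is not the paper's code.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into P x P patches and project each flattened patch with a
    single shared linear layer (equivalent to a convolution with stride = patch size)."""
    def __init__(self, in_channels=3, patch_size=16, hidden_dim=512):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(in_channels * patch_size * patch_size, hidden_dim)

    def forward(self, images):                               # (B, C, H, W)
        p = self.patch_size
        b, c, h, w = images.shape
        x = images.unfold(2, p, p).unfold(3, p, p)           # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)  # (B, S, C*p*p)
        return self.proj(x)                                  # (B, S, hidden_dim)

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))       # -> (2, 196, 512)
```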
Then they stack N of these mixer layers, and we'll soon see what they are, but they end it with global average pooling and a linear classifier on top, so it's a super simple architecture. Mixer contains the part that mixes tokens, so this here is the token-mixing part, and this part here is the channel-mixing part. Before digging into those, let's just see what the MLP is. The MLP just consists of two fully connected layers and a GELU non-linearity in between. Okay, so super simple. If we ignore the transpositions here, what they semantically do is they just take the column here and apply the first MLP, MLP1, on this column, so they project this column into some other subspace, and then they take the same MLP, take the second column, project that one into another space, and basically you arrive at a new representation here which has the same dimensions, so it's again C and S here, as it was here. So they preserve the dimensions, and that's the legacy from the transformer architecture. Basically, if you take a look at CNNs on the other hand, the spatial extent usually goes down: you have this pyramid-like structure where the spatial extent goes down while the number of channels increases, usually two-fold. That was at least some heuristic people used to use, which doesn't mean it's the best one. And here we're preserving dimensions. The second thing they do is, again ignoring layer normalization and skip connections, they just apply this MLP2 network row-wise onto this matrix, and they get some new representation, and they just repeat this N times, and that's it, it's as simple as that. The skip connections are just like the ones the ResNet paper introduced, and the transformer used those as well, as well as the layer norm; the original Vaswani transformer used layer norm. So I guess a bunch of those are just legacy artifacts. What's new here is that they are not using CNNs, they're not using convolutional kernels, they're not using attention, they're just using MLPs; they took some of the legacy from the previous art, but yeah. And they showed, as you will soon see, that they achieve comparable results. And yeah, this is here: despite its simplicity, Mixer attains competitive results. When pre-trained on large datasets, so 300 million images actually, it reaches near state-of-the-art performance previously claimed by CNNs and transformers in terms of the accuracy-cost trade-off. Okay, yeah, maybe a small rant here. It's unclear from the paper why they're using GELUs. It seems we keep using these exotic activation functions without having a good reason why. In their defense, on one of the last pages they do list the things they've tried that didn't work, which is really super and great. But I'd like to know why they used GELU and why not ReLU; I think those kinds of insights would be really useful. What probably happened is they tried a couple of these and just took the best one, a simple grid search over activation functions. Or more probably they just took it because some other paper used it, and that's it. But yeah, the second thing is, I mean, we keep on using these normalizations again without any particular reason, at least I don't see one. So why are we using layer norm and not batch norm or instance normalization or group norm? 
There is too much tradition and legacy, taking something that worked, and too little theoretical explanation of why we're actually using the things we're using. And that's why we get papers like this, where after years of developing CNNs and transformers we show that, hey, we can just use MLPs and it's going to work. Why are you complicating things? So yeah, small rant over. Let's continue with the paper. I'll skip some of these details for now; I just want to give you the bigger picture of the model. First things first, similar to the Vision Transformer, they create a family of models. They have the small Mixer, where /32 basically means the patches are 32 by 32 pixels, so you'll have fewer of those patches because each patch is bigger, and then they have 16 here. Then big models, large models, and finally the huge Mixer. And yeah, just a bunch of different hyperparameters; basically as you go towards the huge model, you increase everything, and that's it. Again, skipping the details and focusing on the results. So as you can see, and this is the main takeaway, when it's pre-trained on smaller datasets it's not going to have as good performance as the more common CNNs and transformers, such as the Big Transfer model and the Vision Transformer. As you can see, it's lagging behind a little bit here compared to BiT and ViT. But this part here I highlighted because it's actually a bit better, a lot better even, on VTAB, which I think is actually more complex than the other benchmarks. So that's kind of surprising; I'd like to know what this number is all about. And yeah, you can see the throughput is a bit better than the Vision Transformer's, but the compute is a bit higher, so it's hard to compare them; it's definitely not the new state of the art, it's just comparable. You can see performance-wise it's there, throughput is a bit better, but compute-wise it takes more compute than the Vision Transformer, which is interesting. But going to JFT-300M, you can see that now we have even better performance than the Big Transfer model, and it's comparable to the Vision Transformer. NFNets are decently better; it's lagging behind NFNets both in terms of performance and in terms of throughput, although the NFNet takes a bit more compute. I'd like to see a comparison with EfficientNet v2, which was recently published and which showed that it's much better than NFNets. So yeah, I wonder why they omitted that part, probably because they were already late into the paper writing. But yeah, I'd like to see the comparison with EfficientNet v2. So those were some tabular results. Now let's see the charts, these are really interesting. We have the chart that shows the compute-accuracy trade-off here, and you can see that the Mixer is directly on the Pareto frontier. The Pareto frontier is just a fancy term that basically means that if you're at such a point, you cannot improve one metric of interest without making the other one worse. So that means you can't do this, because that would mean you're keeping the accuracy constant while decreasing the compute. Or likewise, you can't keep the compute fixed and just increase the accuracy. You have to move along these lines, where if you want to increase the accuracy, you have to increase the compute, or if you want to decrease the compute, you have to decrease the accuracy. And that's just the trade-off. 
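Just to make the Pareto frontier idea concrete, here is a toy example of my own; the model names and (compute, accuracy) numbers are made up, not taken from the paper.
```python
# Hypothetical (compute, accuracy) pairs; lower compute and higher accuracy are better.
models = {"A": (1.0, 0.72), "B": (2.0, 0.75), "C": (2.5, 0.74), "D": (4.0, 0.80)}

def pareto_frontier(points):
    """Keep a model only if no other model is at least as good on both axes
    and strictly better on at least one (less compute, or more accuracy)."""
    frontier = {}
    for name, (cost, acc) in points.items():
        dominated = any(
            (c <= cost and a >= acc) and (c < cost or a > acc)
            for other, (c, a) in points.items() if other != name
        )
        if not dominated:
            frontier[name] = (cost, acc)
    return frontier

print(pareto_frontier(models))  # C drops out: B is both cheaper and more accurate
```
Models like C sit above the frontier: you could get better accuracy for less compute, so they are never the right pick on that trade-off.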
And a good thing, a really reassuring fact, is that it's lying directly on that frontier, alongside NFNets and the Vision Transformer. So those are some nice results. The second chart shows us what happens as we increase the amount of pre-training data, going from 10 million data points up to the full dataset, JFT-300M. The full lines are the Mixer models, the other ones are the Big Transfer and the Vision Transformer. What's interesting is that for smaller Mixer models, it plateaus really quickly and achieves results maybe a bit inferior to those baselines. But when we go to bigger Mixer models and large data regimes, we can see that it finally achieves even better performance compared to, this is, I think, yeah, the Vision Transformer. So the encouraging thing is that the derivative here is positive, which means if we extrapolate, hopefully it's going to go towards AGI. That's the promise of this model, obviously. Yeah. Jokes aside, it does seem to have a steeper slope here compared to the Vision Transformer, which is encouraging, which means that we can push this even further. So two papers down the line, we'll be seeing even bigger datasets and, yeah, better performance. And I haven't mentioned this part, and it's just an implementation detail: the reason they use this linear 5-shot ImageNet top-1 instead of just using ImageNet top-1 is because it was really compute-intensive, even for Google Brain, to fine-tune all of these models. So as a proxy, they've just frozen the pre-trained weights and trained a small linear classifier on top of them, and used that as a proxy for fine-tuning. Just an implementation detail, as I said. The main takeaway here is that it seems that MLPs have even greater potential than Vision Transformers, and that's exciting. Two more curves here. Again, compute versus accuracy. You can see Mixer is pretty decent on this part of the spectrum, so when we have a bunch of compute, it's pretty decent and comparable to the Vision Transformer. As we go to the lower part of the compute spectrum, it seems to lag a little bit behind the Vision Transformer, but yeah, it kind of converges here. And we saw similar behavior with the Vision Transformer compared to CNNs: only in the big data regimes do we get really high performance out of it. Similarly here, throughput versus accuracy, it's on the frontier. Here they just have some additional tabular data, but all of the main takeaways were pretty much in those charts I already described. Regarding compute, they say here that we may scale the model in two independent ways: increasing the model size, the number of layers, blah, blah, blah, during pre-training, and the second dimension is increasing the input image resolution when fine-tuning. So those are the two things they had in their toolkit. And as I said, it appears that Mixer models benefit from growing pre-training dataset size even more than the Vision Transformer. Nice. Finally, nice visualizations here, and that's the end of the paper. Basically, first of all, let me explain what these patches represent. So if you have the input image, let's draw it like this, and it's got some patches; I'll draw it like four by four, but it's usually much more, it's usually 14 by 14 or something like that. 
And what it does, so if you take this single patch, what it does basically is, imagine we have 14 by 14 of these patches. So this thing here basically means that if we take this and flatten it out like this, so we have S here and C here, and that was the usual representation we used in the beginning of the paper, then this S will be 14 by 14, and the number of channels doesn't matter now. So basically, this thing here shows the weights which attend to all of the elements in this column. And if you remember, these are the tokens, so that means a single pixel here will attend to a single patch here. So that means, if you have something like this, let me zoom in a bit, something like this, that means that that particular fully connected layer is going to attend a lot to this part, because the red, let's assume red is some positive number, so it's going to attend this part of the image with positive weights, and it's going to attend this part of the image with negative weights, hence the blue part. Similarly here, you can see the blue part, which means we're going to attend this part with negative weights and this part with positive weights. And another interesting detail is that you can see there is a lot of symmetry here, and they've intentionally arranged all of these like that. Also, if you take a look at the Y axis, the frequency kind of goes up. So these are the low-frequency components, this is like the DC component, and then we have the high-frequency components here. By the way, this is the first token-mixing MLP layer, this is the second token-mixing layer, this is the third token-mixing layer. So as we go deeper into the network, we can see that the patterns become much more complex, higher-frequency ones than here. And the reason they did this is because usually when we analyze CNNs, we can notice certain patterns, and they mention the Gabor filters, which you get by a combination of Gaussians and sinusoids. We see a similar structure in the lower layers, but when we go deeper into the network, we get something that we still can't discern. It'd be nice to analyze this and do some mathematical approximations here to understand how we can model these, and maybe we can take some intuition from this and devise better models once we know what this thing is modeling. So yeah, I guess that's future work for these authors. Now let me go ahead and explain a couple of those details which may be important to you. First of all, let me start with this. I mentioned that they are sharing weights, so all patches are linearly projected with the same projection matrix, and that helps save on the memory footprint, so that's important. And they are also sharing the MLPs across the columns and across the rows: that's the token-mixing part and the channel-mixing part. Let me just decrypt these equations for you. Basically, we take the input matrix, which has dimensions S times C, if you remember; they apply layer normalization, and this weight matrix represents the first fully connected layer of the MLP, so that's this one here, that's the network. Then they have a non-linearity, which is GELU, so that's this one. Then they have the second weight matrix, which represents the second fully connected layer, so that's again this one here. And finally, we have plus the input, which is basically the skip connection. And then they just repeat this and apply it. 
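To make those two equations concrete in code, here is a minimal sketch of one mixer layer; the class names and hidden widths are my own made-up choices, and while it roughly mirrors the short reference implementation the paper links, it is a sketch rather than that code. The token-mixing MLP (described just above) runs along the S dimension and the channel-mixing MLP (described next) runs along the C dimension, each wrapped with layer norm and a skip connection.
```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    """Two fully connected layers with a GELU non-linearity in between."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(),
                                 nn.Linear(hidden_dim, dim))

    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    """One mixer layer: token-mixing MLP over columns, channel-mixing MLP over rows."""
    def __init__(self, num_tokens, channels, tokens_hidden=256, channels_hidden=2048):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(channels), nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_tokens, tokens_hidden)     # mixes across S
        self.channel_mlp = MlpBlock(channels, channels_hidden)   # mixes across C

    def forward(self, x):                           # x: (B, S, C)
        # token mixing: transpose so the MLP runs along the S (patch) dimension
        y = self.norm1(x).transpose(1, 2)           # (B, C, S)
        x = x + self.token_mlp(y).transpose(1, 2)   # skip connection
        # channel mixing: the MLP runs along C, one patch (row) at a time
        x = x + self.channel_mlp(self.norm2(x))
        return x

out = MixerBlock(num_tokens=196, channels=512)(torch.randn(1, 196, 512))  # (1, 196, 512)
```
Note how the input and output shapes are identical, which is exactly the "dimensions are preserved" point made earlier.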
The second equation is basically the same structure. The first one is applied across channels, so each column it acts on spans a bunch of different tokens, which means it's the token mixer; let me draw the matrix here, this is S, this is C. And the second one is applied across S, one row per token spanning all the channels, so that means we're doing channel-wise mixing. So these are the two equations I wanted to explain. Okay. They mention that tying the channel-mixing MLP, so keeping the same network here and here and here, makes sense because it kind of enforces positional invariance: whatever you learn here, you want it to be a pattern that generalizes to all of the other spatial locations. But you usually don't constrain the token-mixing MLP to be the same across the columns. So that's what they mention, and they actually tried both things. They figured out that tying it doesn't hurt the performance, but it saves memory obviously, because you don't have to have C MLPs; you can have just one single MLP and apply it C times. But again, there is nothing theoretically guaranteeing that this is the better choice; it's just an experiment. An important detail is that Mixer does not use position embeddings, because the token-mixing MLPs are sensitive to the order of the input tokens and therefore may learn to represent location. So, and that's it, learn to represent location. Let's make a direct comparison with the Transformer. Transformers have, as you may recall, these patches embedded into the latent space, and what the Transformer does is, let's take this token for example, it's going to attend to all of the other tokens. So we create those key, query and value vectors, we attend to all of the other tokens, we form those alpha coefficients, basically the attention coefficients, and we sum them up. And because we're summing them up, we are losing the positional information. Because of that, the original Vaswani paper and the successor papers basically had to add additional positional information. That's going to somehow encode that this position is different, so maybe like this, and then maybe the second position is encoded like this, whatever; you need some unique pattern that uniquely identifies each of the positions. But here, because we have MLPs, we don't need to do that. And the reason is, if you take a look at the matrix, we have S here and C here. So if you have the token-mixing MLP applied across this column, then because it's an MLP, so it's got a fully connected layer, right, it's going to do something like this: it's going to attend to all of these positions, and this weight here is some weight W1,1, and that's the first element of the output vector. Then the second element of the output vector will attend like this; again, it's fully connected, and it's becoming a mess really quickly, but you can understand it. Let me take another color. So basically, this one here is going to be some weight W1,2, and you'll have a collection of these weights which are particular to this token here. 
So it directly learns, for that position, for that token, which corresponds to a certain position in the image — so this is the input image, maybe this is this token here, this patch — these weights will learn how to encode information from this particular patch, and that's why they don't need to use any positional encodings. Hopefully that was clear enough; if not, let me know in the comments and I'll try to explain it further. Okay, so that was the detail I wanted to explain. The final thing I want to mention is this here. There is some dark magic here: the cosine learning rate schedule. I really wonder why we are still using these without any theoretical justification. A small footnote on why they used that particular schedule and not something else, like a simple linear schedule or even a constant learning rate, would be really appreciated. And again, following common practice, and I highlight "common practice", they also fine-tune at higher resolutions than those used during pre-training. What happens here is that people have shown that when you're training for vision benchmarks, you pre-train at a certain resolution, maybe 224 by 224, and when you want to fine-tune, you actually increase the resolution, and that will help boost your performance. So you fine-tune at 384 by 384, for example. And as I said, I highlight "common practice" because I'm not sure we really have an understanding of why this makes things work any better. So we are just using legacy ideas, legacy heuristics, and keeping them around in all of the present research. And I guess, yeah, we sometimes have to do that, because otherwise the research would fall apart; you have to cling onto something. But it'd be nice if we had more papers explaining why these things work, and not just combining them like black boxes. There is one more detail I want to mention, and then we are done: because they are doing this upscaling during the fine-tuning process, they somehow have to adjust the weights of the MLPs. The reason is, if you remember, the input image can be represented as S times C, and because the token mixer attends across the number of patches, when we increase the resolution but keep the size of the patch the same, that means we're going to have more of these patches. So S goes to S prime, which is a bigger number, and that means this thing won't fit into the previous MLP, which had a certain width; it was smaller, it could only attend like this. So now we have to take the weight matrix of that old MLP and somehow initialize this new, bigger weight matrix, and what they did is they just took the old weights and stacked them block-wise along the diagonal, and that was the initialization they used to fine-tune the models. So yeah, just an engineering detail. I thought it's interesting and important to mention that there are a lot of these details that go into making this thing work. And the reason this works is, if you think about it, if you now multiply the input vector with this new matrix, this block is basically going to attend to the first part, and this block is basically going to attend to the second part, so it's like you're applying the old MLPs in parallel, and that seemed like a sound way to initialize the new weight matrix. 
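A tiny sketch of that block-wise (block-diagonal) initialization idea, with made-up sizes just to show the trick; this is an illustration of the mechanism described here, not the authors' exact fine-tuning code.
```python
import torch

S, hidden = 4, 6                      # made-up old sizes for the token-mixing MLP
W_old = torch.randn(hidden, S)        # old weight matrix: maps S tokens -> hidden units

K = 2                                 # resolution went up, so there are now K*S tokens
# Tile the old weights along the diagonal of the bigger matrix: each block
# handles its own chunk of the longer token sequence, i.e. the old MLP is
# effectively applied "in parallel" on the new, larger input.
W_new = torch.block_diag(*[W_old for _ in range(K)])   # shape (K*hidden, K*S)
print(W_new.shape)                    # torch.Size([12, 8])
```
Everything off the diagonal starts at zero, so right after fine-tuning begins the network behaves like several copies of the old token-mixing MLP side by side.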
So that's the main reason they did it like this. Hopefully you found this video useful. If you did, leave a like, share the video, and see you next time.
[{"start": 0.0, "end": 5.68, "text": " What's up? In this video I'm covering the MLP Mixer and all MLP architecture for vision."}, {"start": 6.640000000000001, "end": 14.08, "text": " A newly published paper by the Google Brain team and I'd give it an alternative title which is"}, {"start": 14.08, "end": 20.88, "text": " MLP is all you need because as you'll see in the scene we went to full circle and we used to use"}, {"start": 21.44, "end": 27.6, "text": " MLPs, the multi-layer perceptrons to solve the vision tasks and we in 2012 we had the ImageNet"}, {"start": 27.6, "end": 34.800000000000004, "text": " moment with AlexNet which basically showed that it's much wiser to just use big convocional neural"}, {"start": 34.800000000000004, "end": 41.92, "text": " networks with a lot of data and compute to solve the vision tasks. Finally we started using"}, {"start": 41.92, "end": 47.44, "text": " transformers to start with the VASFANI transformer in 2017 and then the last year we had Vision"}, {"start": 47.44, "end": 54.88, "text": " Transformer which showed that with lots of data and used like the JFT300M dataset from Google,"}, {"start": 54.88, "end": 60.480000000000004, "text": " the proprietary dataset and with 300 million data points it showed that it can learn really"}, {"start": 60.480000000000004, "end": 67.68, "text": " nice representations and even achieve better results than CNNs. So finally back to 2021"}, {"start": 69.36, "end": 74.88, "text": " this paper came out and showed that by just using these simple multi-layer perceptrons,"}, {"start": 74.88, "end": 82.72, "text": " in a clever way arguably, you can achieve similar results and compute-wise you can be even a bit"}, {"start": 82.72, "end": 88.8, "text": " better and throughput-wise as well. So yeah let's see, so basically the main thing is this paper"}, {"start": 88.8, "end": 95.84, "text": " doesn't try to achieve the new SODA, it just shows that we need to maybe investigate some other"}, {"start": 95.84, "end": 102.4, "text": " research paths which may lead us even further. So this paper is a good step towards that direction,"}, {"start": 102.4, "end": 107.52, "text": " so they showed that with simplicity they can still achieve great results, so who knows a couple of"}, {"start": 107.52, "end": 114.0, "text": " papers down the line what this may evolve to. So having said that, let's dig into the paper."}, {"start": 114.56, "end": 119.44, "text": " So first things first, they say here the mixture relies only on basic matrix multiplication"}, {"start": 119.44, "end": 125.84, "text": " routines, changes to data layouts such as reshapes and transpositions and scalar non-linearities such"}, {"start": 125.84, "end": 132.72, "text": " as GALUs as we'll soon see. So that part associates me to Biddle Lessons by Sudden which is a blog you"}, {"start": 132.72, "end": 138.16, "text": " should check out by Richard Sudden which basically, and I'll link it down in the description, but it"}, {"start": 138.16, "end": 144.4, "text": " basically says that the model that stand the tooth of time are those which can leverage which are"}, {"start": 144.4, "end": 150.88, "text": " simple and can leverage the available computation of the time. 
And so this paper is super simple and"}, {"start": 150.88, "end": 158.24, "text": " you can see they even linked like a code snippet implementation of this paper at the last page which"}, {"start": 158.24, "end": 164.88, "text": " kind of tells you all about it, so it's basically 20 liner and that's awesome, so that's simple"}, {"start": 164.88, "end": 168.96, "text": " and achieves nice results, so yeah I like it. That's the part I like about this paper."}, {"start": 170.56, "end": 176.48000000000002, "text": " The architecture wise, so they just have these channel mixing MLPs and token mixing LPs. So the"}, {"start": 176.48000000000002, "end": 184.24, "text": " token mixing MLPs basically attend across the image, so the spatial extent they attend the tokens,"}, {"start": 184.24, "end": 190.96, "text": " the patches and the channel mixing ones just focus on a single patch and they kind of process the"}, {"start": 190.96, "end": 196.88, "text": " channels, hence the mixer name. So they are mixing between processing channels and processing"}, {"start": 196.88, "end": 204.64000000000001, "text": " over the spatial extent i.e. over the tokens or patches. So here is the architecture, fairly simple,"}, {"start": 205.84, "end": 210.0, "text": " resembles the vision transformer a lot, so if you haven't checked out I've covered the vision"}, {"start": 210.0, "end": 213.92000000000002, "text": " transformer in my previous video, I'll just link it somewhere here. The architecture is very similar"}, {"start": 213.92, "end": 218.64, "text": " to the vision transformer, so you just take the input image, you split it into patches as you can"}, {"start": 218.64, "end": 226.16, "text": " see here, you flatten that out as you can see here and then you just unroll all of these matrices into"}, {"start": 226.16, "end": 232.56, "text": " vectors and you use a single fully connected layer here which is shared across all of these"}, {"start": 233.6, "end": 240.88, "text": " patches and you just project them into some new latent representation vectors here. Then they stack"}, {"start": 240.88, "end": 247.28, "text": " and of these mixer layers which we'll soon see what they are, but like they ended up with global"}, {"start": 247.28, "end": 251.84, "text": " average pooling and a linear classifier on top of it, so it's a super simple architecture."}, {"start": 251.84, "end": 259.44, "text": " Mixer contains the part that mixes, so this is the token mixer part this year and this is the"}, {"start": 260.0, "end": 267.68, "text": " channel mixing part, this part here. So before digging into those, so just let's see what MLP is."}, {"start": 267.68, "end": 274.24, "text": " MLP just consists out of two fully connected layers and a GalU non-linearity in between. 
Okay,"}, {"start": 274.24, "end": 281.92, "text": " so super simple, if we ignore the transpositions here, what they semantically do here is they just"}, {"start": 281.92, "end": 290.32, "text": " take the column here and they apply the first MLP, the MLP1, they applied on this column, so"}, {"start": 290.32, "end": 295.44, "text": " they'll project this column into some other subspace and they'll just take the same MLP,"}, {"start": 295.44, "end": 302.24, "text": " take the second column, project that one into another space and basically you arrive at the"}, {"start": 302.24, "end": 310.16, "text": " new subspace here which has the same dimension, so it's again it's C and S here as it was here."}, {"start": 310.16, "end": 315.28, "text": " So they preserve the dimensions and that's the legacy from the transformer architecture. So"}, {"start": 315.28, "end": 319.28, "text": " basically if you take a look at the CNNs on the other hand, you usually have the spatial extent"}, {"start": 319.28, "end": 323.36, "text": " usually goes down, you have this pyramid-like structure where the spatial extent goes down,"}, {"start": 323.36, "end": 328.16, "text": " where the number of channels increases, usually two-fold. That was at least some"}, {"start": 328.16, "end": 334.8, "text": " heuristic people used to use, doesn't mean it's the best one. And here we're preserving dimensions."}, {"start": 334.8, "end": 340.24, "text": " So the second thing they do is, again ignoring layer normalization and skip connections,"}, {"start": 340.24, "end": 347.76, "text": " they just take the row-wise, they just apply these MLP2 network to like row-wise onto these"}, {"start": 347.76, "end": 353.28, "text": " metrics and they achieve some new representation and they just repeat this n times and that's it,"}, {"start": 353.28, "end": 359.92, "text": " that's as simple as that. So skip connections are just like the the Resinopaper introduced those"}, {"start": 359.92, "end": 364.96, "text": " and the transformer used those as well as well as the layer norm. So the original was fine,"}, {"start": 364.96, "end": 369.12, "text": " transformer used layer norm. So I guess a bunch of those are just legacy artifacts."}, {"start": 369.68, "end": 374.24, "text": " What's new here is they are not using CNNs, they're not using the convolutional kernels,"}, {"start": 374.24, "end": 378.72, "text": " they're not using the attention, they're just using MLP, they took some of the legacy from"}, {"start": 378.72, "end": 385.44, "text": " the previous art, but yeah. And they showed, as you will soon see, that they achieve comparable"}, {"start": 386.48, "end": 393.2, "text": " results. And yeah, this is here. Despite its simplicity, Mixer attains competitive results."}, {"start": 393.2, "end": 399.04, "text": " When pre-trained on large datasets, so 300 million actually, it reaches near state-of-the-art"}, {"start": 399.04, "end": 404.32, "text": " performance previously claimed by CNNs and transformers in terms of the accuracy cost"}, {"start": 404.32, "end": 412.24, "text": " trade-offs. Okay, yeah, maybe a small rant here. It's unclear from the paper why they're using"}, {"start": 412.24, "end": 422.48, "text": " Galioos. So it seems we keep using like these exotic activation functions without having a"}, {"start": 422.48, "end": 431.20000000000005, "text": " good reason why. 
For their defense they have on one of the last papers, they do have the"}, {"start": 431.20000000000005, "end": 436.40000000000003, "text": " things they've tried and that didn't work, which is really super and great. But I'd like to know"}, {"start": 436.40000000000003, "end": 441.92, "text": " why they use Galioos and why not use ReLU. I think those kinds of insights would be really"}, {"start": 442.56, "end": 448.88, "text": " useful. But what probably happened is they tried a couple of these and they just took the best one,"}, {"start": 448.88, "end": 455.04, "text": " just a simple grid search over activation functions. But more probably they just took it"}, {"start": 455.04, "end": 462.96, "text": " because some other paper used it and that's it. But yeah, second thing is, I mean, we keep on using"}, {"start": 462.96, "end": 471.6, "text": " these normalizations again without any particular reason, at least I don't see it. So why are we"}, {"start": 471.6, "end": 478.08, "text": " using layer norm and not batch norm or instance normalization or group norm? There is too much"}, {"start": 478.08, "end": 484.24, "text": " tradition and legacy and taking something that worked and there is too little theoretical"}, {"start": 484.24, "end": 488.47999999999996, "text": " explanations of why we're actually using something that we're using. And that's why we"}, {"start": 489.12, "end": 494.79999999999995, "text": " get papers like this where after years of developing of CNNs and transformers, we show that,"}, {"start": 494.79999999999995, "end": 500.0, "text": " hey, we can just use MLPs, it's gonna work. Why are you complicating things? So yeah,"}, {"start": 500.88, "end": 507.84, "text": " a small rant over. Let's continue with the paper. And I'll skip some of these details for now."}, {"start": 507.84, "end": 513.68, "text": " I just wanna give you the bigger picture of the model for now. So first things first,"}, {"start": 514.64, "end": 520.24, "text": " similar to the vision transformer, they create a family of models. So they have the small mixers"}, {"start": 521.04, "end": 528.8, "text": " with 32 means basically the patches are 32 by 32 pixels. So that means you'll have less of those"}, {"start": 528.8, "end": 535.04, "text": " patches because the patch is bigger. And then they have 16 here. So then big models, large models,"}, {"start": 535.04, "end": 541.1999999999999, "text": " and finally the huge mixture. And yeah, just a bunch of different hyperparameters depending on."}, {"start": 541.1999999999999, "end": 548.9599999999999, "text": " So basically as you go towards a huge model, you increase everything and that's it. Again,"}, {"start": 548.9599999999999, "end": 556.64, "text": " skipping the details and focusing on the results. So as you can see, when it's pre-trained, and this"}, {"start": 556.64, "end": 561.36, "text": " is the main takeaway, when it's pre-trained on smaller data sets, it's not going to have as good"}, {"start": 561.36, "end": 567.92, "text": " performance as the better model, as the more common CNNs and transformers such as the big"}, {"start": 567.92, "end": 573.36, "text": " transfer model, the vision transformer. So as you can see, it's lagging behind a little bit here"}, {"start": 574.72, "end": 581.36, "text": " compared to BIT and VIT. 
But this part actually, I highlighted this part because it's actually a"}, {"start": 581.36, "end": 588.72, "text": " bit better, a lot better on VTab, which I think is more complex actually than the other ones."}, {"start": 588.72, "end": 591.9200000000001, "text": " So that's kind of surprising. I'd like to know what this number is all about."}, {"start": 593.2, "end": 598.8000000000001, "text": " And yeah, you can see the throughput is a bit better than the vision transformer. The compute"}, {"start": 598.8000000000001, "end": 605.0400000000001, "text": " is a bit higher, so it's hard to compare them, but it's definitely not the new state of the art,"}, {"start": 605.0400000000001, "end": 609.9200000000001, "text": " it's just comparable. You can see the perf-wise it's there, throughput is a bit better, but then"}, {"start": 609.9200000000001, "end": 614.88, "text": " compute-wise it's even, it takes more compute than the vision transformer, which is interesting."}, {"start": 614.88, "end": 622.8, "text": " And yeah, but going to JFT300M, you can see that now we have even better performance than the"}, {"start": 622.8, "end": 628.8, "text": " big transfer model, and it's comparable to vision transfer. NFNets are decently better,"}, {"start": 629.4399999999999, "end": 634.72, "text": " it's lagging behind NFNets, both in terms of performance, both in terms of throughput,"}, {"start": 635.28, "end": 642.08, "text": " and yeah, but this NFTnet takes a bit more compute. I'd like to see comparison with the"}, {"start": 642.08, "end": 649.9200000000001, "text": " EfficientNet v2, which was recently published, and which showed that it's much better than NFNets."}, {"start": 649.9200000000001, "end": 654.4000000000001, "text": " So yeah, I wonder why they omitted that part, probably because they were already late into"}, {"start": 654.4000000000001, "end": 662.4000000000001, "text": " the paper writing. But yeah, I'd like to see the comparison with eNet v2. So those were some"}, {"start": 662.4000000000001, "end": 666.24, "text": " tabular results. Now let's see the charts, these are really interesting. We have the chart that"}, {"start": 666.24, "end": 674.24, "text": " shows the compute accuracy trade-off here, and you can see that the mixture is directly on the"}, {"start": 674.24, "end": 679.52, "text": " Pareto frontier. And what Pareto frontier is, is just a fancy word, basically that means that"}, {"start": 679.52, "end": 685.12, "text": " if you're at this point, you cannot increase one metric of interest without decreasing the other"}, {"start": 685.12, "end": 690.88, "text": " one. So that means you can do this, because that means, oops, that means you're keeping the"}, {"start": 690.88, "end": 696.16, "text": " accuracy constant, but you're decreasing the compute. Or likewise, you can't keep the compute,"}, {"start": 696.16, "end": 700.96, "text": " and just increase the accuracy. You have to move across these lines, where if you want to increase"}, {"start": 700.96, "end": 705.52, "text": " the accuracy, you have to increase the compute. Or if you want to decrease the compute, you have"}, {"start": 705.52, "end": 712.48, "text": " to decrease the accuracy. And that's just a trade-off. And a good thing, and a really reassuring"}, {"start": 712.48, "end": 720.32, "text": " fact is that it's lying directly on that frontier, alongside with NFNets, and alongside with the"}, {"start": 720.32, "end": 726.08, "text": " division transformer. So those are some nice results. 
The second chart shows us that as we"}, {"start": 726.08, "end": 732.48, "text": " increase the compute, so going from 10 million data points sourced, the full data set, JFT300M,"}, {"start": 732.48, "end": 738.8000000000001, "text": " we see that basically, and the full lines are the mixer models, the other ones are the big"}, {"start": 738.8000000000001, "end": 745.6800000000001, "text": " transfer and division transformer. So we can see, and what's interesting is that for smaller models,"}, {"start": 745.68, "end": 752.4, "text": " for smaller mixer models, it plateaus really quick, and it achieves results maybe a bit inferior to"}, {"start": 753.68, "end": 760.56, "text": " those baselines. But when we go to bigger mixer models and large data regimes, we can see that"}, {"start": 760.56, "end": 767.4399999999999, "text": " it even achieves better performance at, finally, compared to, this is, I think, yeah, division"}, {"start": 767.4399999999999, "end": 773.5999999999999, "text": " transformer. So the encouraging thing is that the derivative here is positive, which means if we"}, {"start": 773.6, "end": 780.0, "text": " extrapolate, hopefully, it's going to go towards the AGI. That's the promise of this model,"}, {"start": 780.0, "end": 790.72, "text": " obviously. Yeah. And jokes aside, it does seem to have steeper slope here than compared to the"}, {"start": 791.28, "end": 795.84, "text": " division transformer, which is encouraging, which means that we can push this even further. So"}, {"start": 795.84, "end": 801.6800000000001, "text": " two papers down the line, we'll be seeing even bigger data sets and, yeah, better performance."}, {"start": 801.68, "end": 805.92, "text": " And I haven't mentioned this part, and it's just an implementation detail. The reason they use this"}, {"start": 805.92, "end": 811.52, "text": " linear five-shot ImageNet top one instead of just using ImageNet top one is because it was really"}, {"start": 811.52, "end": 816.8, "text": " compute-intensive, even for Google Brain to train all of these, to fine-tune all these models. So"}, {"start": 816.8, "end": 822.56, "text": " as a proxy, so they've just frozen the weights that come from the images, and they trained a small"}, {"start": 822.56, "end": 828.9599999999999, "text": " linear classifier on top of it and used that as a proxy to fine-tuning. But just an implementation"}, {"start": 828.96, "end": 834.64, "text": " detail, as I said. The main thing takeaway here is that it seems that MLPs have even greater potential"}, {"start": 834.64, "end": 841.76, "text": " than division transformers, and that's exciting. Two more curves here. Again, compute versus"}, {"start": 841.76, "end": 847.2800000000001, "text": " accuracy. I can see Mixer is pretty decent on this part of the spectrum. So when we have a bunch of"}, {"start": 847.2800000000001, "end": 852.88, "text": " compute, it's pretty decent and comparable to division transformer. As we go on the lower part"}, {"start": 852.88, "end": 857.9200000000001, "text": " of the spectrum, of the compute spectrum, it seems to lag a little bit behind division transformer."}, {"start": 857.92, "end": 862.64, "text": " But yeah, it kind of converges here. And we saw the similar behavior with division transformer"}, {"start": 862.64, "end": 869.12, "text": " compared to CNNs. Only in the big data regimes, we get really high performance out of it. Similarly"}, {"start": 869.12, "end": 874.56, "text": " here, throughput and accuracy, it's on the frontier. 
Here, they just have some additional"}, {"start": 874.56, "end": 880.56, "text": " tabular data, but all of the main takeaways were pretty much in those charts I already described."}, {"start": 881.8399999999999, "end": 886.0799999999999, "text": " Regarding compute, they say here, we may scale the model in two independent ways. So increasing"}, {"start": 886.08, "end": 891.76, "text": " the model size, the number of layers, blah, blah, blah. And during the pre-training. And the second"}, {"start": 891.76, "end": 898.0, "text": " dimension is increasing the input image resolution when fine tuning. So those are the two things they"}, {"start": 898.0, "end": 905.6, "text": " had in their toolkit. And as I said, it appears that Mixer models benefit from growing pre-training"}, {"start": 905.6, "end": 912.48, "text": " dataset size even more than division transformer. Nice. Finally, nice visualizations here."}, {"start": 912.48, "end": 920.24, "text": " And that's the end of the paper. Basically, first of all, let me explain you what these patches"}, {"start": 920.24, "end": 927.76, "text": " represent. So if you have the input image, and let's try it like this. And it's got some patches."}, {"start": 929.2, "end": 934.96, "text": " I'll draw it like four by four. But it's usually much more. It's usually 14 by 14 or something like"}, {"start": 934.96, "end": 940.4, "text": " that. And what it does, so if you take this single patch, what it does is basically, imagine we have"}, {"start": 940.4, "end": 950.72, "text": " 14 by 14 of these patches. And so this thing here basically means that if we take this and flatten it"}, {"start": 950.72, "end": 958.64, "text": " out like this, so we have S here and C here, and that was the usual representation we used in the"}, {"start": 958.64, "end": 965.28, "text": " beginning of the paper. So this will be 14 by 14. And the number of channels doesn't matter now."}, {"start": 965.28, "end": 974.0799999999999, "text": " So basically, this thing here is the weights which attend to all of the elements in this column."}, {"start": 975.28, "end": 982.24, "text": " And if you remember, so these are the tokens. So that means a single pixel here will attend to a"}, {"start": 982.24, "end": 988.3199999999999, "text": " single patch here. So that means if you have something like this, let me zoom in a bit,"}, {"start": 988.32, "end": 995.7600000000001, "text": " something like this, that means that that particular fully connected layer is going to attend"}, {"start": 996.4000000000001, "end": 1001.6, "text": " a lot to this part because the red, let's assume red is some positive number, so it's going to have"}, {"start": 1001.6, "end": 1006.96, "text": " to attend this part of the image with the positive weights and it's going to attend this part of the"}, {"start": 1006.96, "end": 1012.72, "text": " image with the negative weights, hence the blue part. Similarly here, you can see that the blue"}, {"start": 1012.72, "end": 1016.48, "text": " part, that means we're going to attend this part with negative weights and this part with the"}, {"start": 1016.48, "end": 1022.0, "text": " positive weight. So, and another interesting detail is that you can see there is a lot of symmetry"}, {"start": 1022.0, "end": 1030.0, "text": " here and they've intentionally arranged all of these like that. And also, if you take a look at"}, {"start": 1030.0, "end": 1035.84, "text": " the Y axis, the frequency kind of goes up. So these are the low frequency components. 
This is like the"}, {"start": 1035.84, "end": 1045.04, "text": " DC component and then we have the high frequency components here. By the way, this is the first"}, {"start": 1045.04, "end": 1051.28, "text": " MLP layer, this is the second token mixture layer, this is the third token mixture layer. So as we go"}, {"start": 1051.28, "end": 1057.12, "text": " here, deeper into the network, we can see that the patterns become much more complex, higher"}, {"start": 1057.12, "end": 1064.1599999999999, "text": " frequency ones than here. And the idea why they did this is because usually when we analyze CNNs,"}, {"start": 1064.1599999999999, "end": 1069.36, "text": " we can notice certain patterns and they mentioned the GABA filters, which is just a multi- you get"}, {"start": 1069.36, "end": 1077.04, "text": " those by a combination of gossians and sinusoids. And we see similar structure in the lower layers,"}, {"start": 1077.04, "end": 1081.84, "text": " but when we go deeper into the network, we get something that we still can't discern."}, {"start": 1081.84, "end": 1086.3999999999999, "text": " And it'd be nice to kind of analyze this and do some mathematical approximations"}, {"start": 1086.3999999999999, "end": 1092.6399999999999, "text": " here to understand how we can model these and maybe we can take off, take some intuition"}, {"start": 1092.64, "end": 1099.0400000000002, "text": " further from this and develop device better models once we know what this thing is modeling."}, {"start": 1099.6000000000001, "end": 1105.5200000000002, "text": " So yeah, but it's a, I guess, future work for these authors. Now let me go ahead and"}, {"start": 1105.5200000000002, "end": 1112.96, "text": " explain a couple of those details, which may be important to you. So first of all,"}, {"start": 1112.96, "end": 1121.3600000000001, "text": " let me start with this. So yeah, so I mentioned that they are sharing weights, so all patches"}, {"start": 1121.36, "end": 1126.24, "text": " are linearly projected with the same projection matrix. And that helps save up the memory"}, {"start": 1126.24, "end": 1130.7199999999998, "text": " footprint. So that's important. And they are also sharing the across the columns and across the"}, {"start": 1130.7199999999998, "end": 1137.6799999999998, "text": " rows. So that's the, the token mixer, the token mixing part and the channel mixing part. And"}, {"start": 1137.6799999999998, "end": 1142.6399999999999, "text": " let me just decrypt these equations for you. So basically we take that input matrix, which has"}, {"start": 1142.6399999999999, "end": 1150.08, "text": " dimensions S times C, if you remember, and they applied normalization and this weight matrix just"}, {"start": 1150.08, "end": 1156.8, "text": " represents the first fully connected layer of the MLP. So that's this one here. That's the network."}, {"start": 1157.4399999999998, "end": 1165.52, "text": " And then they have a nonlinearity, which is GALU. So that's this one. Finally, they have"}, {"start": 1165.52, "end": 1170.72, "text": " the second weight matrix, which represents the second fully connected layer. So that's again,"}, {"start": 1170.72, "end": 1178.8799999999999, "text": " this one here. And finally, we have plus with the input, which is basically the skip connection."}, {"start": 1178.88, "end": 1185.2800000000002, "text": " And then they just repeat this and apply it. 
The second thing is basically the, so this is the,"}, {"start": 1185.2800000000002, "end": 1193.5200000000002, "text": " this is across channels. And this is across, so that means it's a token mixer and this part is"}, {"start": 1193.5200000000002, "end": 1201.5200000000002, "text": " across basically, so let me draw the matrix here. So this is S, this is C, this goes across channels."}, {"start": 1201.5200000000002, "end": 1207.2, "text": " So that means it's attending a bunch of different tokens. So it's a token mixer. And here we go,"}, {"start": 1207.2, "end": 1212.32, "text": " across S. So that means we're doing channel wise mixing. So these are the two equations"}, {"start": 1212.32, "end": 1218.24, "text": " I wanted to explain. Okay. They mentioned that tying across the channel mixing MLP,"}, {"start": 1218.24, "end": 1223.68, "text": " so keeping the same network here and here and here makes sense because you're basically,"}, {"start": 1223.68, "end": 1228.96, "text": " you want to do the positionally, that encodes the, that kind of enforces the positional invariance"}, {"start": 1228.96, "end": 1234.88, "text": " because whatever you learn here, you want to have a pattern that's generalizable to all of the other"}, {"start": 1234.88, "end": 1240.0, "text": " spatial locations. But you usually don't constrain, you usually don't take the"}, {"start": 1240.64, "end": 1246.3200000000002, "text": " network, like the MLP and constrain it to be the same across the columns. So that's what they"}, {"start": 1246.3200000000002, "end": 1251.6000000000001, "text": " mentioned and they actually tried both things. They figured out that just tying it doesn't"}, {"start": 1251.6000000000001, "end": 1255.3600000000001, "text": " hurt the performance, but it saves memory obviously, because you don't have to have"}, {"start": 1255.3600000000001, "end": 1261.1200000000001, "text": " C MLPs. You just can't, you can have just one single MLP and just apply it C times."}, {"start": 1261.12, "end": 1267.28, "text": " But again, there's just a thing, like there is nothing theoretically guaranteeing that this is"}, {"start": 1267.28, "end": 1275.9199999999998, "text": " a better choice. They just, it's just a experiment. Important detail is that it's not using, so"}, {"start": 1275.9199999999998, "end": 1280.8, "text": " Mixer does not use position embeddings because the token mixing MLPs are sensitive to the order"}, {"start": 1280.8, "end": 1288.1599999999999, "text": " of the input tokens and therefore may learn to represent location. So, and that's it."}, {"start": 1288.16, "end": 1298.96, "text": " Learn to represent location. So this is a direct, let's make a direct comparison with the Transformer."}, {"start": 1298.96, "end": 1306.96, "text": " So Transformers have, as you may recall, so these are the patches embedded into the latent space."}, {"start": 1307.52, "end": 1314.24, "text": " And what the Transformer does is, let's take this token for example, is going to attend all of the"}, {"start": 1314.24, "end": 1320.8, "text": " other tokens. So we're going to create those keys, queries and value vectors, and we're going to"}, {"start": 1320.8, "end": 1327.44, "text": " attend all of the other tokens, and we're going to form those alpha coefficients, basically the"}, {"start": 1327.44, "end": 1331.68, "text": " attention coefficients, and we're going to sum them up. And because we're summing them up, we are"}, {"start": 1331.68, "end": 1336.72, "text": " losing the positional information. 
And because of that, the original, the Vassfani paper and"}, {"start": 1336.72, "end": 1344.16, "text": " some, and the successor papers basically had to add additional positional information. So that's"}, {"start": 1344.16, "end": 1349.28, "text": " kind of going to encode somehow that this is different, so maybe like this, and then we'll"}, {"start": 1349.28, "end": 1354.72, "text": " maybe have the second position encoded like this, whatever. You need some unique pattern that kind"}, {"start": 1354.72, "end": 1360.08, "text": " of uniquely identifies each of the positions. But here, because we have MLPs, we don't need to do"}, {"start": 1360.08, "end": 1368.6399999999999, "text": " that. And the reason is, if you take a look at the matrix, we have S here and C here. So if you have"}, {"start": 1369.4399999999998, "end": 1378.56, "text": " MLP, like the token-wise mixture, applied across this column, you basically, because it's an MLP,"}, {"start": 1378.56, "end": 1383.04, "text": " so it's got a fully connected layer, right? So it's going to do something like this. It's going to"}, {"start": 1383.04, "end": 1390.8799999999999, "text": " attend all of these positions, and this weight here is some weight V1,1. And that's the first"}, {"start": 1392.24, "end": 1398.08, "text": " element of the output vector. Then the second element of the output vector will attend like this."}, {"start": 1399.04, "end": 1403.76, "text": " Again, it's a fully connected, and it's becoming a mess really quickly, but you can understand that"}, {"start": 1403.76, "end": 1415.04, "text": " basically. Let me take another color. So basically, this one here is going to be some W1,2. And you'll"}, {"start": 1415.04, "end": 1423.28, "text": " have a collection of these weights, which are particular to this token here. So it directly"}, {"start": 1423.28, "end": 1429.52, "text": " learns that for that position, for that token, which corresponds to a certain position in the"}, {"start": 1429.52, "end": 1435.76, "text": " image. So this is the input image. Maybe this is this token here, this patch. So these weights will"}, {"start": 1435.76, "end": 1441.04, "text": " learn how to encode information from this particular patch, and that's why they don't need to use any"}, {"start": 1441.04, "end": 1445.84, "text": " positional encodings. Hopefully, that was clear enough. If not, let me know in the comments. I'll"}, {"start": 1445.84, "end": 1452.08, "text": " try to explain it further. Okay, so that was the detail I wanted to explain. And the final thing"}, {"start": 1452.08, "end": 1457.44, "text": " I want to mention is this here. Here is some dark magic here, cosine learning rate. I really wonder"}, {"start": 1457.44, "end": 1461.04, "text": " why are we still using these without having any theoretical justifications."}, {"start": 1462.64, "end": 1469.28, "text": " Like a small footnote of why they use that particular schedule and not something else,"}, {"start": 1469.28, "end": 1473.76, "text": " like a simple linear schedule or even some constant learning rate schedule would be really"}, {"start": 1473.76, "end": 1479.68, "text": " appreciated. 
And again, following common practice, and I highlight common practice,"}, {"start": 1479.68, "end": 1484.64, "text": " so we also apply fine-tune higher resolutions with respect to those used during pre-training."}, {"start": 1484.64, "end": 1489.76, "text": " So what happens here is that basically people show that when you're training for like vision"}, {"start": 1489.76, "end": 1495.92, "text": " benchmarks, you pre-train on certain resolution, like maybe 224 times 224, and when you want to"}, {"start": 1495.92, "end": 1500.5600000000002, "text": " fine-tune, you actually increase the resolution and that will help you boost your performance."}, {"start": 1500.5600000000002, "end": 1508.72, "text": " So you fine-tune on 384 times 384, for example. And as I said, so I highlight the common practice,"}, {"start": 1508.72, "end": 1513.6000000000001, "text": " because I'm not sure if we still have an understanding of why this is making things work"}, {"start": 1513.6, "end": 1519.12, "text": " any better. So we are just using the legacy ideas, legacy heuristics, and keeping them"}, {"start": 1519.12, "end": 1525.4399999999998, "text": " and kind of keeping them in all of the present research. And I guess, yeah, we sometimes have"}, {"start": 1525.4399999999998, "end": 1530.8799999999999, "text": " to do that because otherwise, like the research will fall apart. You have to clench onto something."}, {"start": 1531.6799999999998, "end": 1537.76, "text": " But basically, it'd be nice when we had more papers explaining why these things work,"}, {"start": 1537.76, "end": 1544.64, "text": " and not just combining them like black boxes. There is one more detail I want to mention,"}, {"start": 1544.64, "end": 1552.4, "text": " and we are done basically because they are doing this up-scaling during the fine-tune process."}, {"start": 1553.2, "end": 1559.12, "text": " They somehow have to adjust the weights of the MLPs. So the reason that is, is because,"}, {"start": 1559.12, "end": 1570.32, "text": " if you remember, so the input image can be represented as S times C, and because the token"}, {"start": 1570.32, "end": 1577.84, "text": " mixer basically attends across the number of patches, when we increase the resolution,"}, {"start": 1577.84, "end": 1584.1599999999999, "text": " so when we increase the resolution, but we keep the number of, the size of the patch the same,"}, {"start": 1584.16, "end": 1590.16, "text": " that means we're going to have more of these patches. So that means S goes to S prime,"}, {"start": 1590.16, "end": 1596.3200000000002, "text": " which is a bigger number, and that means this thing won't fit into the previous MLP, which had"}, {"start": 1596.3200000000002, "end": 1602.0800000000002, "text": " certain bandwidth. So it was maybe, it was maybe, it could attend like this. It was smaller. So now"}, {"start": 1602.0800000000002, "end": 1612.3200000000002, "text": " we have to kind of take the weight matrix of that old MLP, and we need to initialize this new weight"}, {"start": 1612.32, "end": 1617.76, "text": " matrix, which is bigger somehow, and what they did is they just took these and stacked them"}, {"start": 1617.76, "end": 1626.32, "text": " block-wise here, and that was the initialization method they used to fine-tune the models."}, {"start": 1626.32, "end": 1630.96, "text": " So yeah, just an engineering detail. 
I thought it's interesting and important to mention"}, {"start": 1631.9199999999998, "end": 1638.56, "text": " that there are a lot of these details that go into making this thing work, and the reason this works"}, {"start": 1638.56, "end": 1646.48, "text": " is if you think about it, if you now multiply the input vector with this new matrix, this thing is"}, {"start": 1646.48, "end": 1650.8799999999999, "text": " basically going to attend to the first part, and this thing is basically going to attend to the"}, {"start": 1650.8799999999999, "end": 1658.6399999999999, "text": " second part. So it's like you're applying these in parallel, and so that seemed like a sound way to"}, {"start": 1658.6399999999999, "end": 1663.6, "text": " initialize the new weight matrix. So that's the main reason they did it like this. Hopefully you"}, {"start": 1663.6, "end": 1679.28, "text": " found this video useful. If you did, leave a like, share the video, and see you next time."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=DrOp_MQGn9o
Implementing DeepMind's DQN from scratch! | Project Update
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ I walk you through my project (it's still not working - RL of course it's not working!) and explain to you my workflow, how I think, organize my project, todos, etc. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My GitHub: https://github.com/gordicaleksa ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro, my other deep learning projects 01:55 Patreon 02:15 Tensorboard walk-through 05:08 Code walk-through - understand DQN arguments 07:25 Main loop (collecting experience and learning from it) 14:15 Main actor-learner class 20:19 Visualizations (matplotlib, tensorboard) 23:30 Analyzing other people's projects ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dqn #projectupdate #reinforcementlearning
What's up? In this video I'm going to give you an update on my project. For those of you who don't know, I've been developing the Deep Q-network project from DeepMind from scratch. And I thought of showing you what the code looks like before I actually open source it in a completely polished form. So it's still not done. Basically I did write most of the code already, but it's not working because it's RL, and I'm still trying to debug it. So I want to walk you through how I structure the project, how I test individual components, which to-dos I keep - in general, what my workflow is - and I still have a bunch of to-dos in the code, as you'll see. I'll also show you the logs I've been collecting. I've been logging stuff like gradient norms, because I'm experiencing vanishing gradient problems. So yeah, hopefully you'll get something out of it. If you do, please share the video and subscribe. Before I jump into the code, for those of you who are new to this channel, I want to show you this - basically my GitHub profile. I just want to mention that I've got a lot of projects that I've been working on and open sourcing over the last year and three or four months, so do check them out. By far the most popular one is the PyTorch implementation of the Graph Attention Network, so do check that out if you are interested in GraphML. I also have neural style transfer - I have three repos there: the slow one, the fast one, and even one for videos. Then I've got this Deep Dream repo, and this picture you can see here was basically created using the software I developed for Deep Dream. I also have this GAN project, so check it out as well. And finally, the Vaswani transformer, so the original transformer paper from 2017. So yeah, and as you can see here, looking at my log, I've been working on Deep Q-networks for almost two weeks now, and as I said, it's still not working. As a final note, I do have a Patreon account. So if you want to support me financially - even though it's completely optional; if you're already sharing my content and you like it, that's already more than enough - but having said that, if you still want to support me financially, you can do that on my Patreon. I'll link it in the description. So let's jump into the actual code. Okay. First things first, I'll assume you already know what DQN is. And if you haven't, I actually covered DQN in one of my previous videos - I'll link it somewhere here - so do check that out. Now, because I have TensorBoard, I'll just kick off and start the run so that we can actually see the logs I'm collecting while I'm explaining the code. Let me just briefly walk you through what I'm logging. The first thing is epsilon, and that's the hyperparameter in the epsilon-greedy algorithm, which DQN uses. Then we have FPS - I'm basically logging the number of frames, so the number of environment steps I do per second. It's usually around 400, which is pretty nice. Then I'm logging some metrics. These just tell me how fast the training is going. Epsilon is not actually that useful - I just want to see that it's correct, pretty much. Huber loss is the loss that's used for DQN; it's basically mean squared error inside the minus one to one range and L1 outside of it - simple stuff. Then we have rewards per episode, and I'm not doing any averaging here.
So I'm just logging the cumulative reward I get per episode. And as you can see, it's just floating around minus 21, which means the algorithm is not learning anything, although I only have a little bit of data here. Anyway, steps per episode just tells me how many steps are needed before the episode terminates. And finally, I'm actually logging these gradients, the norms of the gradients, because I saw that the network is not learning for some reason. So I thought of just plotting the gradients, and I saw I'm having a vanishing gradients problem. If you take a look at the DQN network - let me just find some picture of DQN, so this one - you can only see two here, but there are three convolutional layers in DQN, and there are two fully connected layers, so that's five in total. And if you take a look at the plots, that's exactly what you see. You have the first convolutional layer, the second, the third, then we have the fully connected layers, and then the total gradient here. I basically just plot the gradient norms of the weights and the biases per layer - that's why we have 10 - and then the final one I get by concatenating all of the parameters together and finding the L2 norm. And yeah, you can see that it's not learning anything because the gradients are almost zero. Let me just toggle on the newest run - that's the one that's currently running. We'll see the trends later, but for the time being, just let it run and let's focus on the code. So first things first, I just have some arguments for the program, which I then wrap into this dictionary and pass into the main function. Let me just briefly walk you through what the parameters are. First is the seed. If you've never done RL, you've probably seen the seed occasionally, but it's not that vital there. Here, on the other hand, it's super important that you include the seed, because you want some ability to reproduce the results you got. If you're familiar with RL, it's basically very noisy. So maybe in 60% of the seeds you get some nice results, and in the other 40% you get basically nothing, and that sucks. So once you find a seed that works, you want to keep that number. And yeah, that's the idea there. I'm using the Pong environment, and the NoFrameskip part is there because I'm using some wrappers from the Stable Baselines 3 library - I don't want the environment itself to skip any frames, because the wrapper will be doing the frame skipping. And if you're not familiar with why we're skipping frames: in Atari, the DeepMind crew back in 2013 just made this convention - it's a trade-off between speed and the results you'll get. Basically you only act on every fourth frame and then just repeat that action for the intermediate frames. So you act on the first frame, repeat the action, and then you take the fourth frame, and so on - you're skipping the intermediate frames and doing a dummy action by just repeating the last one. The v4 part just means that the environment is deterministic. If you take v0, I think the environment will sometimes, with 25% probability, execute the previous action instead of the one you sent.
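To make the gradient-norm logging described above a bit more concrete, here is a minimal sketch of how per-layer gradient norms could be pushed to TensorBoard in PyTorch. The function and tag names are illustrative assumptions, not taken from the actual repo:

```python
import torch
from torch.utils.tensorboard import SummaryWriter


def log_gradient_norms(model: torch.nn.Module, writer: SummaryWriter, step: int):
    # Log the L2 norm of the gradient of every parameter tensor (weights and
    # biases of each layer), plus one combined norm over all parameters.
    total_sq_norm = 0.0
    for name, param in model.named_parameters():
        if param.grad is None:
            continue  # parameter didn't take part in the backward pass
        norm = param.grad.data.norm(p=2).item()
        writer.add_scalar(f"grad_norms/{name}", norm, step)
        total_sq_norm += norm ** 2
    # Square root of the sum of squared per-parameter norms equals the L2 norm
    # of all parameters concatenated into one big vector.
    writer.add_scalar("grad_norms/total", total_sq_norm ** 0.5, step)
```

With five layers, each having a weight and a bias, this gives the ten per-layer curves plus the single "total" curve described above.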
Yeah, the number of steps is 50 million - that's the DQN hyperparameter. These are not that important. Replay buffer, yeah, nothing super important here. And I just have some logging parameters. So let me just walk you through the actual main function, and that's the important part. Okay, so the first thing I do, as you can see here, is take the env ID - that's the PongNoFrameskip-v4 string - and just wrap it in this wrapper. That's a common workflow you'll see across many projects. It's the only part of the code that I have reused from a different library, and that's Stable Baselines 3 in this case. I additionally just had to create one more transform, which just swaps the axes, because DQN in PyTorch basically requires that the channel dimension comes first. So what I do is take the height, width, and channel and permute the dimensions so that I get a channel-first tensor. And that's pretty much it. Monitor just basically tracks different statistics from your training, like the reward you get per episode and the length of the episode; you can even record videos, but I'm not doing that at the moment. So that's it. Okay, then once we have the environment - that's the OpenAI Gym environment - I create a buffer. And basically the reason I had to develop the buffer from scratch is because Stable Baselines 3 - and this may not sound that intuitive because it's a really nice library - doesn't support a smart buffer. What I mean by a smart buffer is this: when you want to store 1 million frames, instead of storing 1M x 84 x 84 - let me write it here; 84 x 84 are the usual height and width dimensions - they're actually storing 1M x 4 x 84 x 84. And the reason is that observations for DQN are built by stacking the last four frames, and that's what you pass through the network, because single frames are what they call perceptually aliased. That means you do need the previous frames in order to understand the state you're in and to make this a Markov decision process. Okay, so because they don't have the smart buffer, I implemented the replay buffer from scratch. Secondly, setting random seeds is really important, and I mentioned that already, but let me just show you here. We have to set the seed for the PyTorch framework, for the NumPy library, for Python's built-in random library, and then for OpenAI Gym - both for the action space and for the env itself, because there was some weird bug. For some reason, you'd expect seeding the env to seed the action space as well, but it did not, so I had to do that separately. So yeah, just keep that in mind: setting seeds in an RL agent is very important, so you can actually reproduce results and have some stable trends. Linear schedule is a simple class which helps me anneal the epsilon hyperparameter from 1 to 0.1 over the span of a certain number of steps, and that's just what the paper used. So if I go to the linear schedule, you can see it here. The reason I'm actually going here is because I want to show you what I usually do when I develop a project. You see the class here, and in order to be sure that at least some components are working, I create these sub-tests in the file itself. So you can see here, I actually instantiate the linear schedule and test it. If I run the utils file, you'll see that, yeah, we get correct results.
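Here is a small sketch of what seeding everything, as described above, might look like. It assumes the older gym API where the environment itself exposes `env.seed(...)` (which is what projects from this period typically used); the helper name is an assumption:

```python
import random

import gym
import numpy as np
import torch


def set_random_seeds(env: gym.Env, seed: int):
    # Seed every source of randomness: PyTorch, NumPy, Python's random module,
    # and the gym environment itself plus its action space. The action space
    # has to be seeded separately, otherwise random exploration stays unseeded.
    torch.manual_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    env.seed(seed)
    env.action_space.seed(seed)
```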
So it goes from 1 to 0.1 over the span of 50 steps, and the reason it's 50 is because I've set 50 here. So that's the way I test components. Most of the components are usually orthogonal, so you can test them in isolation, and then once you combine them into the complete agent infrastructure, you want to be confident in those components so that you can debug. Yeah - and as you can see, even though I've done this, I'm still having problems debugging the RL agent. So testing is very important, and just be pedantic when you're developing RL code. Okay. Basic PyTorch stuff here. If I have a GPU - and I have an RTX 2080 - I just fetch the device here. I instantiate the DQN network and the target DQN, which we use to set the TD targets in the DQN algorithm. And finally, this is the main thing: this ActorLearner class is what I use. It's a nice abstraction, and you'll see what it looks like, but basically you can see the shortest training loop ever here, and it's pretty intuitive. Basically, until the experience count shows we have collected this number of experiences, so frames, we just keep on collecting experience and we keep on learning. So first we go and collect experience. We'll have an inner loop of four steps where we are interacting with the environment, taking the rewards, the done flags, the frames, the observations, and storing them in the replay buffer. And once we have enough data - there is this delay here - we start learning. Basically, how DQN works is that initially you don't learn, you're just collecting data and filling the replay buffer, and once you get to this point, I start learning from the experience. Here I'm just sampling from the buffer and doing the Huber loss stuff. Okay, let me continue. Along the way, I do note some things that are weird and some things that I still want to do. "Go through the code once more and make sure there are no bugs" is an obvious one. But this thing is something that I'm still confused about: for some reason, in the game of Pong, even though you basically only need three actions - move up, move down, or a no-op, basically just stay in place - this Atari Learning Environment, the ALE, is giving me six actions. If somebody knows why, just write it down in the comments. But yeah, so that's how I think: once I see something that's problematic, I just write it down in the code itself, using these to-dos, and the IDE, PyCharm, basically highlights them in this yellow color, so it's really easy for me to organize them. I also have the to-do list here, so I can organize my to-dos really nicely. So that's cool. Having said that, let's go to this ActorLearner class. Let me show you - and again, a bunch of to-dos here, as you can see. The ActorLearner is basically just a wrapper class: in the init function I just store all of these replay buffers, DQNs, etc., as private variables in Python. And what I do here is log the initial time of the program, because I'll be using this to calculate the frames-per-second metric. So basically it's just a container - the init function just takes care of the standard stuff. I instantiate the loss here and I create the Adam optimizer. As you can see here, the original paper used RMSProp, but that was before Adam was published, so I guess that's why they didn't use Adam. So I'm using Adam, and if I'm having problems because of Adam, I'll try out RMSProp as well.
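As an illustration of the pattern described above - a small, orthogonal component with a self-test living in the same file - here is a minimal sketch of what such a linear epsilon schedule could look like. The class name and exact interface are assumptions, not the repo's actual code:

```python
class LinearSchedule:
    """Linearly anneals a value (here epsilon) from `start` to `end` over
    `steps` steps, then stays at `end`."""

    def __init__(self, start: float, end: float, steps: int):
        self.start, self.end, self.steps = start, end, steps

    def __call__(self, step: int) -> float:
        fraction = min(step / self.steps, 1.0)
        return self.start + fraction * (self.end - self.start)


if __name__ == "__main__":
    # Tiny self-test: epsilon should go from 1.0 down to 0.1 over 50 steps.
    schedule = LinearSchedule(start=1.0, end=0.1, steps=50)
    assert schedule(0) == 1.0
    assert abs(schedule(50) - 0.1) < 1e-9
    assert abs(schedule(1000) - 0.1) < 1e-9  # stays at the final value afterwards
    print([round(schedule(s), 3) for s in range(0, 60, 10)])
```

Running the file directly executes the tiny test, which is exactly the kind of cheap sanity check that pays off once the component is buried inside the full agent.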
And yeah, there's a short note to myself just to freshen up my knowledge of how those algorithms exactly work, so that I can better debug this code. Okay, so first things first, collect experience - that's one of the main functions. As you can see here, this number equals four, so we just collect experience four times, so four steps in the environment. What we do here is store the frame in the buffer, then fetch the last observation, which will concatenate the last four frames - and if it doesn't have enough data, it will just fill in black frames before. So basically, at the end you'll have four frames, and some of them may be black if we don't have enough data. Then we sample the action using that observation. That function just wraps up a couple of details from the DQN algorithm - it's a neat way to keep this part concise. What it does is this: initially, before the learning starts - and if you remember, this is the same thing as this part here, so we wait for 50k steps before we start learning - we just take random actions in the environment. That means we'll be acting totally randomly at first. Then, once we have enough steps, we start doing something smarter, and that's actually DQN's epsilon-greedy policy, as you can see here. So we roll the dice, and with probability epsilon we act randomly; otherwise we act greedily and take the action that corresponds to the highest Q value. And yeah, that's it. Okay, let me get back from sample action to the main algorithm. So we pick the action, we send it to the environment - that's your classical RL feedback loop - and I get the new frame back, the reward, and the done flag in case the episode is done or I've lost a life. Then what I do is store the effects: basically, if you remember, I first stored the frame, but only now do I have the action, the reward, and the done flag. So at the same index, I just add that data so that we can later fetch it during training, when we are sampling from the buffer. Simple stuff here. And yeah, I'm logging some things here, as you can see. Some of the logging utilities log only every fourth episode or so, and some logging is coupled to the number of steps rather than the number of episodes, because it makes more sense - and you'll see why. Okay, the second important function is learning from experience, and you can see it here. Basically, what I do is fetch a batch from the replay buffer - the batch size is 32 in the algorithm - and we get the data, as you can see: observations, actions, rewards, next observations, and done flags. Then we basically create the target values, as you can see here: we pass the next observations into the target DQN, find the max Q values, scale them with the discount factor gamma, and add the rewards. And this thing here may be interesting: when the done flag is one, you want to make sure that this bootstrap term zeroes out, so that we're left with just the reward. Why is that? Because you don't want to bootstrap from the terminal state - we just keep the reward - and we don't want to introduce noise there, because the episode is already done.
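Here is a rough sketch of the epsilon-greedy action sampling just described: act completely at random until learning starts, then explore with probability epsilon and otherwise take the argmax Q value. The function signature, the 50k default threshold, and the assumption that the observation is already a channel-first tensor are illustrative, not a copy of the project's code:

```python
import random

import torch


@torch.no_grad()
def sample_action(dqn, observation, env, epsilon, step, learning_starts=50_000):
    # Before learning starts we act completely at random to fill the replay buffer.
    if step < learning_starts:
        return env.action_space.sample()
    # Epsilon-greedy: with probability epsilon explore, otherwise act greedily
    # on the Q values predicted by the network.
    if random.random() < epsilon:
        return env.action_space.sample()
    q_values = dqn(observation.unsqueeze(0))  # add a batch dimension
    return int(q_values.argmax(dim=1).item())
```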
Basically, what the Q function predicts is the expected return, and since the game is done, the expected return from the terminal state should be zero - that's why we zero out that part. Then we take the observations and pass them through the DQN, and we just pick out the Q values of the actions that we sampled during experience collection. Finally, we compute the loss between those Q values and the target values, and that's your DQN algorithm. This part is not that important if you didn't fully follow it - I'm just walking you through how I'm thinking and how I'm structuring the code. Okay, then basic stuff: we're zeroing the gradients, we do the backward pass, which will accumulate the grad fields in PyTorch - so every parameter will have its grad field filled in with the gradient - and we do the step. So that's it, that's how we learn. Two more details. First off, every fourth update - no, actually every 10k updates - I'll be doing a hard update of the target network. That's what the original paper showed improves the stability significantly. So aside from the replay buffer, this thing is very important in order to have an algorithm that's stable and can actually work. Finally, you can see I'm logging gradients here, and that's super important because I'm still trying to debug this thing. Okay, so I think I've got enough data in TensorBoard, so I'll just go ahead and stop this run. What I want to do now is show you what the visual observation looks like. In collect experience, I'll just uncomment this thing here, put a breakpoint there, and rerun. So let me show you what this does. Whoops, that's the wrong file - I'm starting the main file. Okay, if I go here and we just plot it, you can see that initially, because we only have one frame, this is the observation - the last three frames will be black. Okay, if we go and do another step, as you can see, we did another step; I step in, plot it, and we now have two frames. If I do another step again and step in, you can see we have three frames. So that's what I have this for - just a simple function which helps me better understand what's going on. Okay, let me show you the logging functions for TensorBoard and we're pretty much done. When it comes to logging, I'm basically logging the stuff you already saw in TensorBoard - reward per episode, blah, blah, blah, nothing smart there. Maybe the only slightly more interesting part is here: I'm iterating through the parameters of the DQN model and, as you can see, I take the gradient data and compute its norm - p equals 2, so the L2 norm - and I just log those numbers. And finally, I'm accumulating these numbers into the total grad, so we have the norm over the complete vector - not just the parameters of a specific layer, but the whole network. Okay, let me finally show you the TensorBoard and see what data we've got. Epsilon, as expected, just linearly annealed to 0.1 - that's not so interesting. FPS, as you can see, is pretty stable, around 330 frames. I don't know whether that's good or bad because I haven't been experimenting enough, but we'll see - that sounds like a nice number. Huber loss, as you can see, is pretty much just random noise. I don't see any downward trend, so either something's wrong, or maybe I just didn't run enough steps - that's also a possibility.
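Putting the pieces from the last few paragraphs together, here is a hedged sketch of what one learning step might look like in PyTorch: TD targets from the target network with the done flag masking out the bootstrap term, the Huber loss (nn.SmoothL1Loss), an optimizer step, and the periodic hard update of the target network. It assumes the replay buffer returns tensors, with actions as int64 and done flags as floats; all names are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn

huber_loss = nn.SmoothL1Loss()  # quadratic inside [-1, 1], L1 outside - the Huber loss used by DQN


def learn_from_experience(dqn, target_dqn, optimizer, replay_buffer, num_updates,
                          batch_size=32, gamma=0.99, target_update_period=10_000):
    observations, actions, rewards, next_observations, dones = replay_buffer.sample(batch_size)

    # TD targets: r + gamma * max_a' Q_target(s', a'), with the bootstrap term
    # zeroed out for terminal transitions (done flag == 1).
    with torch.no_grad():
        next_q = target_dqn(next_observations).max(dim=1)[0]
        targets = rewards + gamma * (1.0 - dones) * next_q

    # Q values of the actions that were actually taken during experience collection.
    q_values = dqn(observations).gather(dim=1, index=actions.unsqueeze(1)).squeeze(1)

    loss = huber_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()  # fills the .grad fields that the gradient-norm logging inspects
    optimizer.step()

    # Hard update of the target network every `target_update_period` learning steps.
    if num_updates % target_update_period == 0:
        target_dqn.load_state_dict(dqn.state_dict())
```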
Also rewards: as you can see, I'm almost constantly at minus 21. That basically means the agent loses all 21 points of each Pong game against the built-in agent, so it's just playing really badly. Steps per episode, again, is really noisy, and you can see the number of steps needed per episode - the mean is maybe around 850 or something. And finally, we can see the grads, and they are completely zero. So if you've spotted the bug in my code, write it down in the comment section. I'll hopefully have this thing running in a couple of days, and then I'll need a couple more days to polish everything and create a readme, so that people can use it and not run into the problems I did. So yeah, one more thing I want to mention, which is really important: how I actually go about developing these projects. Usually - unless nobody has published any code whatsoever, and I did have a couple of occasions where I had to implement a paper totally from scratch with just the paper, so what I ended up doing was pinging the authors and communicating with them, and I did get some snippets that way - you always want to start with some code and understand what other people are doing. So I did investigate at least five, six, or seven PyTorch implementations, I think six. One of them is Stable Baselines, so I was just stepping through the code, looking at what those folks did, and as I said, they didn't have the smart replay buffer, so I had to implement the buffer from scratch, and stuff like that. But I did learn some nice tricks from that repo, so that's a cool thing to do when you're developing a project: go ahead and first analyze the other projects, and you'll learn some additional stuff, and it will be a much more rewarding experience. You'll learn more if you also understand what other people have been doing. Okay, having said that, maybe the last thing I want to quickly show you is the DQN model and what I've done there. You can see I just created a super generic model, so you can play with it, and you'll have the code in a couple of days - hopefully by next weekend, actually. You can see I have this automatic part here: because, as you know, we have three convolutional layers followed by a fully connected layer, you basically can't change the input resolution, because otherwise the volume that comes out of the third conv layer won't be compatible with the first fully connected layer. So you either have to keep the observation shape constant, or you can do it like I did here: I basically pass in the input volume and automatically calculate the number of neurons that I need in the first fully connected layer. That's this part here. And yeah, that part is not that interesting. The replay buffer, on the other hand - as you can see, the main reason I've done it is that I only need 7 GB of RAM, instead of 28 for the implementation that Stable Baselines has. There are basically a couple of public functions here, and the bulk of the work happens here. There is not a lot of code, but there are certain edge cases, and the implementation I actually took inspiration from was this Berkeley implementation - and they also have a small bug. It's a subtle one, and it won't actually impact the final result of the DQN agent, but nonetheless I did find it and submitted an issue. And yeah, that's pretty much it. Hopefully you found this video useful.
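To illustrate the "automatic part" mentioned above, here is a small sketch of a DQN module that infers the size of the first fully connected layer by pushing a dummy observation through the conv stack instead of hard-coding it. The conv layer sizes follow the classic DQN architecture; everything else (class name, defaults) is an assumption for illustration:

```python
import torch
import torch.nn as nn


class DQN(nn.Module):
    def __init__(self, input_shape=(4, 84, 84), num_actions=6):
        super().__init__()
        # The classic DQN conv stack (Mnih et al.).
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Instead of hard-coding the flattened conv output size, push a dummy
        # observation through the conv layers and count the resulting elements.
        with torch.no_grad():
            num_conv_outputs = self.conv(torch.zeros(1, *input_shape)).numel()
        self.fc = nn.Sequential(
            nn.Linear(num_conv_outputs, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        return self.fc(self.conv(x).flatten(start_dim=1))
```

With the default 4x84x84 input this works out to 7x7x64 = 3136 inputs for the first fully connected layer, and the same code keeps working if the observation resolution changes.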
If you did, leave a comment, subscribe, share the video, and I hope to see you next time.
[{"start": 0.0, "end": 4.72, "text": " What's up? In this video I'm going to give you an update of my project. So for those of you who"}, {"start": 4.72, "end": 10.64, "text": " don't know, I've been developing the DeepQ network project from DeepMind from scratch."}, {"start": 10.64, "end": 15.120000000000001, "text": " And I thought of showing you how the code looks like before I actually open source it,"}, {"start": 15.120000000000001, "end": 19.92, "text": " when it's in a completely polished form. So it's still not done. Like basically I did"}, {"start": 19.92, "end": 25.52, "text": " wrote most of the code already, but it's not working because it's RL. And I'm still trying"}, {"start": 25.52, "end": 29.84, "text": " to debug it. So I want to walk you through how I structure the project, how I test individual"}, {"start": 29.84, "end": 35.76, "text": " components, which to do. So in general, what my workflow is, and I still have a bunch of to-dos"}, {"start": 35.76, "end": 40.72, "text": " in the code, as you'll see. So I also show you the the the logs I've been doing. So I've been"}, {"start": 40.72, "end": 47.6, "text": " logging like stuff like even grad gradient norms because I'm experiencing vanishing gradients"}, {"start": 47.6, "end": 53.2, "text": " problems. So yeah, hopefully you'll get something out of it. If you do, please share the video and"}, {"start": 53.2, "end": 57.28, "text": " subscribe. Before I jump into the code, for those of you who are new to this channel, I want to show"}, {"start": 57.28, "end": 65.2, "text": " you this. And basically my GitHub profile. And I want to just mention that I've got a lot of"}, {"start": 65.2, "end": 68.8, "text": " projects that I've been working on and open sourcing over the last year and three or four"}, {"start": 68.8, "end": 75.12, "text": " months. And so do check them out. So by far the most popular one is the PyTorch, like the"}, {"start": 75.12, "end": 80.4, "text": " GraphAttention network implementation. So do check that out if you are interested in GraphML. And I"}, {"start": 80.4, "end": 85.92, "text": " also have like a neural style transfer. I have like three repos there for like the slow one,"}, {"start": 85.92, "end": 91.12, "text": " the fast one, and even one for videos. Then I've got this Deep Dream repo. And this pick you can"}, {"start": 91.12, "end": 97.36000000000001, "text": " see here is basically was created using the software I developed for Deep Dream. I also have"}, {"start": 97.36000000000001, "end": 103.04, "text": " this GAN project. So check it out as well. And finally, VASFANI Transformers, so the original"}, {"start": 103.04, "end": 109.68, "text": " transformer paper from 2017. So yeah, and as you can see here, looking at my log, I've been"}, {"start": 109.68, "end": 115.84, "text": " working on deep key networks for almost two weeks now. And as I said, it's still not working. And"}, {"start": 115.84, "end": 121.12, "text": " as a final note, I do have a Patreon account. So if you want to support me financially, even though"}, {"start": 121.12, "end": 127.60000000000001, "text": " it's completely optional, if you're already sharing my content and you like my content,"}, {"start": 127.60000000000001, "end": 132.72, "text": " that's already super enough. But having said that, if you still want to support me like financially,"}, {"start": 132.72, "end": 137.44, "text": " you can do that on my Patreon. I'll link it in the description. 
So let's jump into the actual code."}, {"start": 137.44, "end": 144.07999999999998, "text": " Okay. So first things first, I'll assume you already know what DQN is. And if you haven't"}, {"start": 144.07999999999998, "end": 149.68, "text": " watched the video, I actually covered DQN. I'll link it somewhere here in one of my previous videos."}, {"start": 149.68, "end": 155.92, "text": " So do check that out. And now the thing is, because I have TensorBoard, I'll just kick off"}, {"start": 155.92, "end": 166.32, "text": " and start the run so that we can actually see the logs I'm collecting while I'm explaining the code."}, {"start": 166.32, "end": 170.79999999999998, "text": " So let me just briefly walk you through what I'm logging. The first thing is like epsilon,"}, {"start": 170.79999999999998, "end": 176.88, "text": " and that's the hyperparameter in the epsilon greedy algorithm, which DQN uses. Then we have"}, {"start": 176.88, "end": 182.79999999999998, "text": " FPS. I'm basically logging the number of frames. So the number of steps, environment steps I do per"}, {"start": 182.79999999999998, "end": 188.56, "text": " second. And it's usually around 400, which is pretty nice. Then I'm logging some metrics."}, {"start": 189.76, "end": 195.44, "text": " So these just tell me how fast my training will end up. Epsilon is not that actually useful,"}, {"start": 195.44, "end": 200.88, "text": " because it's just that it's correct. I just want to see that it's correct pretty much."}, {"start": 200.88, "end": 206.88, "text": " So HuberLoss is the loss that's used for DQN. So it's basically a mix of mean square error loss"}, {"start": 206.88, "end": 212.16, "text": " and L1 when you go outside the minus one, one range, simple stuff. Then we have rewards per"}, {"start": 212.16, "end": 218.72, "text": " episode. And I'm not doing any averaging here. So I'm just logging the cumulative reward I get"}, {"start": 218.72, "end": 225.6, "text": " per episode. And as you can see, it's just floating around minus 21, which means the algorithm is not"}, {"start": 225.6, "end": 230.24, "text": " learning anything, although I do have a little only a little bit of samples here. Anyways,"}, {"start": 231.2, "end": 238.16, "text": " steps per episode just tells me how much steps do I need before the episode terminates. And finally,"}, {"start": 238.16, "end": 245.36, "text": " I'm actually logging these gradients, the norms of gradients, because I'm experiencing experiencing."}, {"start": 245.36, "end": 250.56, "text": " So I saw that the network is not learning for some reason. So I thought just plotting the gradients"}, {"start": 250.56, "end": 255.12, "text": " and I saw I'm having a vanishing gradients problem. So if you take a look at the DQN network,"}, {"start": 255.12, "end": 263.12, "text": " let me just find some picture of DQN. So this one, so you can see there are three convolutional,"}, {"start": 263.12, "end": 268.24, "text": " so you can only see two here, but there are three convolutional neural layers in DQN."}, {"start": 268.24, "end": 272.08000000000004, "text": " And there are two fully connected layers. So that's five in total. And if you take a look"}, {"start": 272.08, "end": 277.12, "text": " at the plots, that's exactly what you see. So you have the first convolutional layer, the second,"}, {"start": 277.12, "end": 282.15999999999997, "text": " the third, then we have the fully connected and then have the total gradient here. 
So I basically"}, {"start": 282.15999999999997, "end": 286.96, "text": " just plot the weights and the biases per layer. So that's why we have 10. And then the final one,"}, {"start": 286.96, "end": 293.36, "text": " I just get by combining, by concatenating all of the parameters together and finding the L2 norm."}, {"start": 293.36, "end": 300.15999999999997, "text": " And yeah, you can see that it's not learning anything because the gradients are almost zero."}, {"start": 300.16, "end": 304.0, "text": " And let me just toggle on the newest run. So that's the one that's currently running."}, {"start": 304.0, "end": 308.72, "text": " And we'll later see the trends, but like, yeah, for the time being, just let it run."}, {"start": 309.6, "end": 317.44000000000005, "text": " And let's focus on the code. So first things first, yeah, I just have some arguments for the"}, {"start": 317.44000000000005, "end": 321.92, "text": " program, which I then wrap into this dictionary and I pass it into the main function. Let me"}, {"start": 321.92, "end": 326.0, "text": " just walk you through, briefly walk you through this, like what the parameters are. So first things"}, {"start": 326.0, "end": 332.48, "text": " first is the seed thing. And if you're not doing, if you've never done RL, you probably, you've seen"}, {"start": 332.48, "end": 337.28, "text": " seed occasionally, but it's not that vital. Here on the other hand, it's very super important that you"}, {"start": 337.84, "end": 345.76, "text": " include the seed because you want to have some sort of ability to reconstruct, to reproduce the"}, {"start": 345.76, "end": 353.36, "text": " results you got. Because if you're familiar with RL, basically it's very noisy. So you can have"}, {"start": 353.36, "end": 359.6, "text": " maybe in 60% of the seeds, you get some nice results. And then maybe 40%, you get like zero."}, {"start": 359.6, "end": 365.12, "text": " And that sucks. So once you find a seed that works, you want to have the number. And yeah,"}, {"start": 365.12, "end": 371.04, "text": " so that's the idea there. I'm using punk environment and the no frame skip part means"}, {"start": 372.16, "end": 376.64, "text": " basically because I'm using some wrappers from this stable baselines three library."}, {"start": 376.64, "end": 380.72, "text": " I don't want to skip any frames because the wrapper itself will be skipping frames."}, {"start": 380.72, "end": 385.44000000000005, "text": " And if you're not familiar why we're skipping frames, that's because in Atari, you basically,"}, {"start": 385.44000000000005, "end": 393.12, "text": " the DeepMind crew back in 2013, just made this convention that it's better to, it's a trade off"}, {"start": 393.12, "end": 401.20000000000005, "text": " between performance and results you'll get. And basically you just act on every fourth frame,"}, {"start": 401.20000000000005, "end": 406.8, "text": " and then you just repeat the actions four times. So you act on the first frame, then you just repeat"}, {"start": 406.8, "end": 411.28000000000003, "text": " the actions and then you take the fourth frame and then yeah, so you're just skipping the"}, {"start": 411.28000000000003, "end": 415.36, "text": " intermediate frames and you're doing this dummy action by just repeating the action."}, {"start": 415.36, "end": 421.2, "text": " So that's that part before just means that the environment is deterministic. 
So if you take we"}, {"start": 421.2, "end": 427.44, "text": " zero, I think basically then the environment will sometimes execute your action, but sometimes 25%"}, {"start": 427.44, "end": 432.40000000000003, "text": " probability, you will execute one of the previous sections, actually the previous section. Yeah,"}, {"start": 432.4, "end": 438.32, "text": " a number of steps, 50 million, that's the DQN hyperparameter. These are not that important."}, {"start": 439.2, "end": 444.79999999999995, "text": " Replay buffer, yeah, nothing super important here. And I just have some logging parameters."}, {"start": 444.79999999999995, "end": 449.84, "text": " So let me just walk you through the actual main function. And that's important. Okay, so"}, {"start": 450.96, "end": 456.08, "text": " first thing I do is you can see here, I take the NYD, so that's the Pong no frame,"}, {"start": 456.08, "end": 463.44, "text": " V4 string, and I just wrap it in this wrapper. And that's a common workflow you'll see"}, {"start": 464.0, "end": 470.15999999999997, "text": " across many projects. So that's the only part of the code that I have reused from a different"}, {"start": 470.15999999999997, "end": 476.79999999999995, "text": " library and that's stable baselines three in this case. And I additionally just had to create this"}, {"start": 476.79999999999995, "end": 482.88, "text": " one more transform, which would just swap the access and because the DQN in PyTorch basically"}, {"start": 482.88, "end": 488.48, "text": " requires that the channel comes first. So what I do is I just take the height, width, and channel"}, {"start": 488.48, "end": 494.96, "text": " and just permute the dimensions so that I get the channel first tensor. And that's pretty much it."}, {"start": 494.96, "end": 501.44, "text": " Monitor just basically tracks your different statistics from your training, like the rewards"}, {"start": 501.44, "end": 506.88, "text": " you get per episode, like the length of the episode, you can even record videos, but I'm"}, {"start": 506.88, "end": 513.28, "text": " not doing that at the moment. So that's it. Okay, then once we have the environment, so that's the"}, {"start": 513.28, "end": 521.76, "text": " OpenAI Gym environment, I create a buffer. And basically the reason I had to develop buffer from"}, {"start": 521.76, "end": 527.12, "text": " scratch is because stable baselines three, and this may not sound that intuitive because it's a"}, {"start": 527.12, "end": 532.16, "text": " really nice library, they don't support a smart buffer. And what I mean by smart buffer is that"}, {"start": 532.16, "end": 539.6, "text": " they actually have the when you want to store 1 million frames, what they actually do is instead"}, {"start": 539.6, "end": 546.48, "text": " of storing 1 million and then so basically, instead of storing, let me write it here. So 1m,"}, {"start": 547.12, "end": 555.52, "text": " and then maybe 184, 84 are the usual like height and width dimensions, they're actually storing 1m,"}, {"start": 555.52, "end": 565.52, "text": " then 1, 4, 84, 84. And the reason is observations for DQN are just basically you stack the last four"}, {"start": 565.52, "end": 571.52, "text": " frames. And that's what you pass through the network because the states are what they call"}, {"start": 571.52, "end": 577.52, "text": " they call that perceptually aliased. 
That means you do need previous frames in order to understand"}, {"start": 577.52, "end": 583.92, "text": " the state you're in and to make this a Markov decision process. Okay, so because they don't"}, {"start": 583.92, "end": 588.56, "text": " have the smart buffer, I implemented the replay buffer from scratch. And yeah,"}, {"start": 590.24, "end": 596.3199999999999, "text": " secondly, set random seeds is really important. And I mentioned that already. But let me just show"}, {"start": 596.3199999999999, "end": 602.8, "text": " you here. So we both we have to set the seed for both the torch, the pytorch framework for the"}, {"start": 602.8, "end": 610.56, "text": " NumPy library for Python's inner random library, then for OpenAI.GM and both for the action space"}, {"start": 610.56, "end": 615.28, "text": " as for the GMes itself, because there was some weirdo bug. And for some reason, you'd expect"}, {"start": 615.28, "end": 620.2399999999999, "text": " this to set the action space as well, but it did not. And so I have to do that as well. So yeah,"}, {"start": 620.2399999999999, "end": 626.16, "text": " just keep that in mind, setting seeds in RL agent is very important. So you can actually reproduce"}, {"start": 626.16, "end": 634.7199999999999, "text": " results and have some stable trends. Linear schedule, simple class, which helps me anneal"}, {"start": 634.72, "end": 642.0, "text": " the epsilon hyperparameter from one to 01 over the span of certain number of steps. And that's"}, {"start": 642.0, "end": 648.8000000000001, "text": " just the thing that the paper used. So if I go to linear schedule, you can see it here. And the"}, {"start": 648.8000000000001, "end": 653.44, "text": " reason I'm actually going here is because I want to show you what I usually do when I develop a"}, {"start": 653.44, "end": 659.44, "text": " project. So you see the class here. And in order to be sure that at least some components are"}, {"start": 659.44, "end": 665.0400000000001, "text": " working, I create these subtests in the file itself. So you can see here, I actually instantiate"}, {"start": 665.0400000000001, "end": 671.6800000000001, "text": " those linear schedule and I test it. So if I run the utils, you'll see that, yeah, we have correct"}, {"start": 671.6800000000001, "end": 678.24, "text": " results. So it goes from one to 01 over the span of 50 steps. And the reason it's 50 is because"}, {"start": 678.24, "end": 683.5200000000001, "text": " I've set 50 here. So that's the way I test components. So first, most of the components"}, {"start": 683.5200000000001, "end": 689.2, "text": " are usually orthogonal, you can test them. And then once you combine them in the complete agent,"}, {"start": 689.2, "end": 695.9200000000001, "text": " infrastructure, you want to be confident in some components so that you can debug. Yeah. And even"}, {"start": 695.9200000000001, "end": 703.0400000000001, "text": " if, as you can see, even though I've done this, I'm still having problems debugging the RL agent. So"}, {"start": 703.6800000000001, "end": 708.1600000000001, "text": " testing is very important and just be pedantic when you're developing RL code."}, {"start": 709.0400000000001, "end": 716.72, "text": " Okay. Basic stuff here, PyTorch. 
If I have GPU and I have RTX 2080, I just fetch here the device."}, {"start": 716.72, "end": 723.9200000000001, "text": " I instantiate the DQN network, the target DQN, which we use to set the TDD targets in the DQN"}, {"start": 723.9200000000001, "end": 730.08, "text": " algorithm. And finally, this is the main thing, this ActorLearner class is what I used, is a nice"}, {"start": 730.08, "end": 736.08, "text": " abstraction and you'll see how it looks like. But basically you can see the shortest training loop"}, {"start": 736.08, "end": 743.9200000000001, "text": " ever here. And it's pretty intuitive. So basically until you have, so get experience count, so until"}, {"start": 743.92, "end": 752.24, "text": " you have collected this number of experiences, so frames, we just keep on collecting experience"}, {"start": 752.24, "end": 757.28, "text": " and we keep on learning. So first we go and collect experience. So we'll have an inner loop"}, {"start": 757.28, "end": 761.68, "text": " of four steps where we are collecting, interacting with the environment, taking the rewards, the"}, {"start": 761.68, "end": 766.4799999999999, "text": " down flags, the frames, observations, and we'll be storing them in a replay buffer. And once we"}, {"start": 766.4799999999999, "end": 772.9599999999999, "text": " have enough data, and there is this delay here, we start learning. So basically how DQN works is"}, {"start": 772.96, "end": 777.44, "text": " you initially don't learn, you're just collecting data and filling the replay buffer. And once you"}, {"start": 777.44, "end": 782.08, "text": " get to this point, now I start learning from the experience. And here I'm just sampling from the"}, {"start": 782.08, "end": 790.1600000000001, "text": " buffer and doing the HuberLoss stuff. Okay, let me continue. So along the way, I do note some stuff"}, {"start": 790.1600000000001, "end": 794.5600000000001, "text": " that's weird, that are weird, and some stuff that I want to do. So go through the code once more and"}, {"start": 794.5600000000001, "end": 799.12, "text": " make sure there are no bugs, it's an obvious one. But like this thing is something that I'm still"}, {"start": 799.12, "end": 805.12, "text": " confused about. For some reason, the game of Pong, even though you basically only need three actions,"}, {"start": 805.12, "end": 812.16, "text": " that's move up, move down, or no up, or basically just stay in place. For some reason, this"}, {"start": 812.16, "end": 818.08, "text": " Atari learning environment, the ALE, is giving me six outputs. If somebody knows it already,"}, {"start": 818.08, "end": 823.92, "text": " just write it down in the comments. But yeah, so that's how I think. So once I see"}, {"start": 823.92, "end": 830.16, "text": " something that's problematic, I just write it down in the code itself, using these to-dos. And this"}, {"start": 830.16, "end": 836.4, "text": " ID, the byte arm, basically highlights them in this yellow color. So it's really easy for me to"}, {"start": 836.4, "end": 842.16, "text": " organize them. And I also have the to-do list here, so I can just organize my to-dos really nicely."}, {"start": 842.16, "end": 848.88, "text": " So that's cool. Having said that, let's go to this ActualLearner class. Let me show you, and again,"}, {"start": 848.88, "end": 853.5999999999999, "text": " so a bunch of to-dos here, as you can see. And so ActualLearner, the init function is just basically"}, {"start": 853.6, "end": 860.16, "text": " a wrapper class. 
So I just store all of these replay buffers, dqns, etc., as like these"}, {"start": 862.08, "end": 868.96, "text": " private variables in Python. And what I do here is I just log the initial time of the program,"}, {"start": 868.96, "end": 874.32, "text": " because I'll be using this to calculate the frame per second metric. And yeah, so basically,"}, {"start": 874.32, "end": 884.08, "text": " just a container, the init function just takes care of these standard stuff. I just instantiate"}, {"start": 884.08, "end": 891.36, "text": " the loss here. I create the atom optimizer. And as you can see here, so the original paper used RMS"}, {"start": 891.36, "end": 897.36, "text": " prop, but that was before Atom was published, so I guess that's why they didn't use Atom. So I just,"}, {"start": 897.36, "end": 903.44, "text": " I'm using Atom, and if I'm having problems because of Atom, I'll try out RMS prop as well. And yeah,"}, {"start": 903.44, "end": 910.72, "text": " a short note to myself just to kind of freshen up my knowledge of how those algorithms exactly work,"}, {"start": 910.72, "end": 919.2, "text": " so that I can better debug this code. Okay, so first things first, collect experience. So that's"}, {"start": 919.2, "end": 925.6800000000001, "text": " one of the main functions. And as you can see here, so this number is equals four, so we just"}, {"start": 926.32, "end": 933.0400000000001, "text": " collect the experiences four times, so four steps in the environment. And what we do here, we just"}, {"start": 933.04, "end": 938.24, "text": " store the frame in the buffer, then we fetch the last observation, which will concatenate the four"}, {"start": 938.24, "end": 944.8, "text": " last frames. And if it doesn't have enough data, it will just fill in the black frames before. So"}, {"start": 944.8, "end": 949.52, "text": " yeah, basically, at the end, you'll have four frames, and some of them may be black if we don't"}, {"start": 949.52, "end": 955.4399999999999, "text": " have enough data. Then we sample the action using that observation. And that function just wraps up"}, {"start": 955.4399999999999, "end": 961.92, "text": " a couple of details from the DQN algorithm. It's a neat way to be concise here. And what I do is,"}, {"start": 961.92, "end": 966.64, "text": " initially, before the learning starts, and if you remember this part, this is the same thing as"}, {"start": 968.16, "end": 974.64, "text": " this part here. So we wait for 50k steps, and then we start learning. And before we start learning,"}, {"start": 974.64, "end": 978.64, "text": " we have, we just take random actions from the environment. So that means we'll be"}, {"start": 979.76, "end": 985.12, "text": " acting totally randomly in the environment initially. Then once that, once we have enough"}, {"start": 985.12, "end": 991.52, "text": " steps, we'll start doing something smart. And that's actually exploiting DQN's Epsilon greedy,"}, {"start": 991.52, "end": 997.12, "text": " as you can see here. So with Epsilon probability, so we just roll the dice with Epsilon probability,"}, {"start": 997.12, "end": 1003.68, "text": " we randomly act. Otherwise, we act greedily. We take the dead action that corresponds to the"}, {"start": 1003.68, "end": 1011.84, "text": " Q value that's the highest. And yeah, that's it. 
Let me get back to the algorithm sample action."}, {"start": 1011.84, "end": 1015.84, "text": " So we pick the action, we send the action to the environment, and that's your classical"}, {"start": 1015.84, "end": 1022.48, "text": " RL feedback loop. And I get the new frame back, the rewards and the dumb flag in case the episode"}, {"start": 1022.48, "end": 1030.56, "text": " is done or I've lost the life. Then what I do, I store effects. So basically, if you remember,"}, {"start": 1030.56, "end": 1035.2, "text": " I first stored a frame, but only now do I have the action and the reward and the dumb flags."}, {"start": 1035.2, "end": 1041.2, "text": " So at the same index, I just add that data so that we can later fetch it during the training,"}, {"start": 1041.2, "end": 1047.92, "text": " during the training when we are sampling from the buffer. Simple stuff here. And yeah,"}, {"start": 1047.92, "end": 1052.56, "text": " I'm logging some stuff here, as you can see. So some of the logging utilities are logging"}, {"start": 1053.6000000000001, "end": 1060.0, "text": " depending on the like every fourth episode. And some logging is just coupled to the number"}, {"start": 1060.0, "end": 1063.92, "text": " of steps and not to the number of episodes because it makes more sense and you'll see why."}, {"start": 1065.04, "end": 1069.52, "text": " Okay, the second important function is learning from experience. And you can see it here,"}, {"start": 1069.52, "end": 1075.12, "text": " basically, what I do here is I fetch from the replay buffer the batch size, which is 32 in the"}, {"start": 1075.12, "end": 1081.2, "text": " algorithm. And so we have all of we get the data as you can see observations, actions, rewards,"}, {"start": 1081.2, "end": 1088.48, "text": " next observations and dumb flags. And we just create the basically the target values, as you"}, {"start": 1088.48, "end": 1093.92, "text": " can see here, so it packs the next observation into the target DQN, I find the max values,"}, {"start": 1093.92, "end": 1100.16, "text": " max values, and we just add the rewards and scale with the discount factor gamma, we scale those"}, {"start": 1100.16, "end": 1107.92, "text": " numbers. And this thing here may be interesting. So basically, when the dumb flag is one, you want"}, {"start": 1107.92, "end": 1114.0800000000002, "text": " to make sure that this will zero out and this will be zero when the dumb flag is one, and we'll just"}, {"start": 1114.0800000000002, "end": 1119.68, "text": " have the reward. Why is that? Well, because basically, you don't want to have the you don't"}, {"start": 1119.68, "end": 1125.76, "text": " want to pass the final state, the terminal state into DQN, we just have the reward, we don't want"}, {"start": 1125.76, "end": 1130.4, "text": " to introduce the noise there because the state is already done. Basically, what the Q learning"}, {"start": 1130.4, "end": 1136.24, "text": " function does, it predicts the expected reward. And since the state is already since the game is"}, {"start": 1136.24, "end": 1143.52, "text": " done, the expected reward should be zero. And yeah, that's why we zero out that part. Then we have"}, {"start": 1143.52, "end": 1150.8, "text": " the observations, we pass them through DQN. And we just take the actions that we sampled during the"}, {"start": 1151.6, "end": 1157.44, "text": " experience collection. And we take those Q values. And finally, we just have the loss between the"}, {"start": 1157.44, "end": 1162.24, "text": " target values. 
And so that's your DQN algorithm. And this part is not that important if you didn't"}, {"start": 1162.24, "end": 1166.32, "text": " understand. I'm just walking you through how I'm thinking and how I'm structuring the code and"}, {"start": 1166.32, "end": 1172.48, "text": " everything. Okay, then basic stuff. We're zeroing the gradient gradients, we do the backward, which"}, {"start": 1172.48, "end": 1178.64, "text": " will accumulate the grad fields in PyTorch. So every single variable will have the grad fields"}, {"start": 1178.64, "end": 1185.3600000000001, "text": " filled in with the gradient. And we do the step. So that's it. That's how we learn. Two more details."}, {"start": 1186.32, "end": 1195.6, "text": " First off, every fourth update, so no, actually every 10k update, I'll be doing a hard"}, {"start": 1195.6, "end": 1202.9599999999998, "text": " update of the target network. And that's what the original paper showed, improved the stability"}, {"start": 1202.9599999999998, "end": 1207.4399999999998, "text": " significantly. So aside from replay buffer, this thing is very important in order to have an"}, {"start": 1207.4399999999998, "end": 1214.24, "text": " algorithm that's stable and that can work. So finally, you can see I'm logging gradients here."}, {"start": 1214.24, "end": 1218.7199999999998, "text": " And that's super important because I'm still trying to debug this thing. And yeah, okay, so I think"}, {"start": 1218.72, "end": 1225.68, "text": " I've got enough data in the tensor board. So I'll just go ahead and stop this run. And what I'll do,"}, {"start": 1225.68, "end": 1230.64, "text": " I want to show you what the visual observation is. Basically, collect experience, I'll just"}, {"start": 1230.64, "end": 1237.52, "text": " uncomment this thing here. I'll put a breaking point there and I'll rerun this thing. So let me"}, {"start": 1237.52, "end": 1245.28, "text": " show you what this does. Whoops, that's the wrong file. I'm starting the main file. Okay, if I go"}, {"start": 1245.28, "end": 1251.28, "text": " here and we just plot the function, you can see initially because we only have one frame,"}, {"start": 1252.08, "end": 1257.6, "text": " and this is the observation. So the last three frames will be black. Okay, so if we go and do"}, {"start": 1257.6, "end": 1263.6, "text": " another step, so as you can see, we did another step. I step into here, I plotted, we now have"}, {"start": 1263.6, "end": 1270.72, "text": " two frames. So if I do another step again, we did another step and I step in, you can see we have"}, {"start": 1270.72, "end": 1277.68, "text": " three frames. So that's what I have this for. So just a simple function which will help me"}, {"start": 1277.68, "end": 1282.32, "text": " better understand what's going on. And okay, let me show you the logging functions in tensor"}, {"start": 1282.32, "end": 1286.88, "text": " board and we're pretty much done. When it comes to logging, I'm just, I'm basically logging the"}, {"start": 1286.88, "end": 1291.52, "text": " stuff you already saw in the tensor board. So reward per episode, blah, blah, blah, nothing smart"}, {"start": 1291.52, "end": 1295.92, "text": " there. So maybe the only a bit more interesting part is here. 
So I'm iterating through the"}, {"start": 1295.92, "end": 1300.88, "text": " parameters of the decan model and I'm just calculating the, as you can see, I take the"}, {"start": 1300.88, "end": 1307.6000000000001, "text": " gradient, the data, and I did the norm, the L2 norm, P equals two L2 norm, and I just"}, {"start": 1308.5600000000002, "end": 1315.04, "text": " basically log those numbers. And finally, I'm accumulating these numbers in the total grad"}, {"start": 1315.04, "end": 1320.48, "text": " so we can have the complete vector and not just the parameters for a specific layer, but for the"}, {"start": 1320.48, "end": 1326.4, "text": " whole network itself. Okay, let me show you the tensor board finally and see what the data we've"}, {"start": 1326.4, "end": 1332.8, "text": " got. Okay, Epsilon as expected, just linearly annealed to 0.1. That's not so interesting."}, {"start": 1332.8, "end": 1340.32, "text": " FPS, as you can see, it's pretty stable, around 330 frames. I don't know whether that's good or"}, {"start": 1340.32, "end": 1346.24, "text": " bad because I haven't been experimenting enough, but we'll see. That sounds like a nice number."}, {"start": 1346.24, "end": 1353.76, "text": " Huber loss, as you can see, it's just random noise pretty much. I don't see any downward trends, so"}, {"start": 1353.76, "end": 1359.36, "text": " something's wrong, or maybe I just didn't have enough steps. That's also a possibility. Also"}, {"start": 1359.36, "end": 1366.08, "text": " rewards, as you can see, I'm almost constantly at minus 21. That means I basically, the agent loses"}, {"start": 1366.08, "end": 1372.48, "text": " all of the 21 games of Pong against the built-in agent, so that means it's just playing really bad."}, {"start": 1372.48, "end": 1377.04, "text": " Steps per episode, again, really noisy, and you can see the number of steps that's needed per"}, {"start": 1377.04, "end": 1383.6, "text": " episode, so that's around maybe the mean is around 850 or something. And finally, we can see the"}, {"start": 1384.24, "end": 1392.16, "text": " grads, and they are completely zero. So if you've seen the bug in my code, comment down in the"}, {"start": 1392.16, "end": 1396.96, "text": " comment section. I'll hopefully have this thing running in a couple of days, and then I'll need"}, {"start": 1396.96, "end": 1402.96, "text": " a couple more days to just polish everything, create a readme, so that people can kind of use"}, {"start": 1402.96, "end": 1409.2, "text": " it and not experience the problems I did. So yeah, one more thing I want to mention, which is really"}, {"start": 1409.2, "end": 1416.0, "text": " important, how I actually go about developing these projects. So usually, unless nobody has"}, {"start": 1416.0, "end": 1421.44, "text": " published any code whatsoever, which I did have a couple of occasions where I had to implement a"}, {"start": 1421.44, "end": 1426.24, "text": " paper totally from scratch, I just had the paper, so what I ended up doing is pinning the entire"}, {"start": 1426.24, "end": 1432.08, "text": " authors and communicating with them, so I did get some snippets there. So basically, you always want"}, {"start": 1432.08, "end": 1436.96, "text": " to start with some code and understand what other people are doing. So I did investigate at least"}, {"start": 1436.96, "end": 1443.04, "text": " five, six, or seven, something like that, PyTorch implementations, I think six. 
And one of them here"}, {"start": 1443.04, "end": 1448.88, "text": " is stable baselines, so I was just stepping through the code, looking what those folks did, and as I"}, {"start": 1448.88, "end": 1453.68, "text": " said, they didn't have the smart replay buffers, so I had to implement the buffer from scratch,"}, {"start": 1453.68, "end": 1460.96, "text": " and stuff like that. But I did learn some nice tricks from this repo, so that's a cool thing to"}, {"start": 1460.96, "end": 1465.68, "text": " do when you're developing a project. Go ahead and first analyze the other projects, and you'll learn"}, {"start": 1465.68, "end": 1470.88, "text": " some stuff additionally, and it will be a much more rewarding experience. You'll learn more if you also"}, {"start": 1470.88, "end": 1477.92, "text": " understand what other people have been doing. Okay, having said that, maybe the last thing I want to"}, {"start": 1477.92, "end": 1484.5600000000002, "text": " show you are quickly the DQN model, what I've done there. And you can see I just created a super"}, {"start": 1484.5600000000002, "end": 1492.48, "text": " generic model there, so you can play with it, and you'll have the code in a couple of days, hopefully"}, {"start": 1492.48, "end": 1497.92, "text": " until next weekend, actually. And you can see I have this automatic part here, where because, as"}, {"start": 1497.92, "end": 1502.8000000000002, "text": " you know, we have three convolutional layers, we have fully connected layer, that means that"}, {"start": 1502.8, "end": 1508.08, "text": " basically you can't change the input, because otherwise the compatibility between the volume"}, {"start": 1508.08, "end": 1512.96, "text": " that comes out from the third comb layer won't be compatible with the first fully connected layer,"}, {"start": 1512.96, "end": 1519.12, "text": " so you either have to keep the observation constant, or you can do it like me here."}, {"start": 1519.12, "end": 1526.8799999999999, "text": " So I'm basically passing the input volume, and I can automatically calculate the number of neurons"}, {"start": 1526.8799999999999, "end": 1531.2, "text": " that I need in the first fully connected layer. So that's this part here."}, {"start": 1531.2, "end": 1536.72, "text": " And yeah, that's not an interesting replay buffer, on the other hand. As you can see,"}, {"start": 1536.72, "end": 1546.48, "text": " the main thing I've done it is because I only need 7GB of memory, only of RAM, instead of 28 for the"}, {"start": 1546.48, "end": 1555.92, "text": " implementation that these stable baselines have. And yeah, basically a couple of public functions"}, {"start": 1555.92, "end": 1561.92, "text": " here, and the bunch of work happens here. There is not a lot of code, but there are certain edge"}, {"start": 1561.92, "end": 1567.8400000000001, "text": " cases, and the implementation I actually took an inspiration from was this Berkeley's implementation,"}, {"start": 1567.8400000000001, "end": 1574.8000000000002, "text": " and they also have a small bug. It's a subtle one, it won't actually impact the final result of the"}, {"start": 1574.8000000000002, "end": 1580.5600000000002, "text": " Dequeon agent, but nonetheless I did found it and submit an issue. And yeah, that's pretty much it."}, {"start": 1580.56, "end": 1586.56, "text": " Hopefully you found this video useful. If you did, leave a comment, subscribe, share the video,"}, {"start": 1586.56, "end": 1614.56, "text": " and I hope to see you next time."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=CTsSrOKSPNo
EfficientNetV2 - Smaller Models and Faster Training | Paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ The newest version of EfficientNet v2 achieved better results on ImageNet top-1 accuracy than recently published NFNets, Vision Transformers, etc. You'll learn about: ✔️ What is a progressive training ✔️ Fused-MBConv layer and novel reward function for NAS ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ paper: https://arxiv.org/pdf/2104.00298.pdf ✅ code (yet to be published): https://github.com/google/automl/efficientnetv2 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 High-level overview 01:40 NAS review 07:20 Deep dive 15:35 Novel reward 17:34 Progressive training 21:10 Stochastic depth regularization 23:00 Results ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #efficientnetv2 #neuralarchitecturesearch #rl
What's up, in this video I'm covering the newest version of the EfficientNet paper, it just came out a couple of days ago. The title is EfficientNetV2: Smaller Models and Faster Training, by Mingxing Tan and Quoc Le (hopefully I pronounced their names correctly) from Google Brain, and you can see the results they got are pretty decent. Taking a look at the ImageNet accuracy, like top-1 accuracy, on the y-axis and training time on the x-axis, we can see that the curves just form an envelope around all of the previous models like NFNets, which ditched batch normalization altogether, and you can see the V1 version of EfficientNet here and some models such as Vision Transformer, and yeah, basically they form an envelope. So pretty decent results. The two main ideas are: basically they modify the search space by adding this new block called Fused-MBConv, which is just a handcrafted module created by modifying the MBConv from the MobileNet paper, and we'll see those a bit later, it's not that important. So that's the first thing. The second thing is they updated the objective function by adding the training step time and by adding the number of parameters into the objective function. Finally, the last thing is progressive training, whereby while they're training they are slowly and gradually increasing the image resolution, but they're also increasing the regularization, like dropout and some random augmentations. We'll see those details a bit later. So those are the main ideas. So first, like, the search space and the search strategy were updated a bit, and finally the progressive training, that's the second thing. Before I even start digging into the paper, I want to walk you through the body of literature that came before this paper, and the interesting thing is that this author, Quoc Le, basically was the co-author of many of the first and seminal neural architecture search papers, including the first one, the NAS with RL paper, then we had the NASNet paper, then we had the MnasNet paper, and finally EfficientNet V1. I just want to walk you through, because I think understanding this previous work will help ground your understanding and better grasp the V2 version. Okay. So first things first, what's neural architecture search? The basic idea is this: you have a controller, which is usually some RNN like an LSTM, and basically you sample architectures from your search space, you evaluate the architectures by training them, you evaluate them on the validation set, you get the accuracy, and you basically give the controller the reward, which is the validation accuracy, and this is your common RL loop: you have the agent, you have the environment, you get rewards, and you can train the agent, the controller here, to eventually sample those architectures which have high reward, i.e. which have high validation accuracy. And you can see here how the sampling happens: basically the controller samples first maybe the number of filters, then what's the height of the filter, what's the width,
what's the stride, blah blah blah, in every single layer, and that's something called the global search space, because basically for all of the N layers you have the maximum flexibility, you can pick every single layer maybe different, but the search space is huge and this didn't work that well. I mean, it did work very well, but they needed a bunch of compute, like some huge number of GPU hours. And then the second paper, the NASNet paper, this Learning Transferable Architectures for Scalable Image Recognition, what it did basically was scope down the search space by noticing that most of the handcrafted architectures in the literature, like MobileNets etc. and ResNets and VGG and whatnot, basically tend to create a cell and then just replicate it. And so what they decided to do is just design the cell: basically you search for the cell and then you repeat the cell N times in order to get your final architecture, and that's it, that was the main idea in that paper. MnasNet, which is really closely related to EfficientNets, did the following: so basically they again modify the search space a little bit, so now they have like a couple of blocks like this, let's say maybe seven, I think they had some number like that, and what you do is you now sample the first layer in each of these blocks, and then you just repeat them some number of times, which is also a hyperparameter which we search for, so L1 here, L2 here, so we'll have L2 of these, we'll have L3, LN of these, and this is the search space. And additionally they included the latency in the final reward objective function, so not only the accuracy on the validation set but also the latency, which made this MnasNet paper suitable for finding architectures that could run on edge devices such as mobile devices etc. The loop is pretty much the same, the reward function is just modified, this is still an LSTM, the only thing they actually changed is they are not using the REINFORCE algorithm, they're using PPO from OpenAI as the RL algorithm of choice. Those are the main ideas. Finally we get to EfficientNet, and EfficientNet basically uses the same search space as MnasNet, the only difference is instead of latency they are actually optimizing for FLOPs, so that means the number of floating point operations. And that's it. The trick they did was to find the baseline architecture called B0 by using this MnasNet space I just explained, and then basically they scale the B0 architecture by scaling the depth, scaling the width, i.e. the number of channels, and the image resolution pretty much proportionally. That means they just do some grid search, define this alpha, beta and gamma, which control depth, width and resolution, and then they just raise these parameters to phi, and phi is the thing that controls how you get a novel architecture. So basically by setting phi to 2 they'll sample B1, and then they'll have B1 through B7 depending on phi. The nice thing about phi is that when phi is one compared to when phi is zero, the B1 model, the model with phi equals one, will have twice as much FLOPs compared to B0, so they can kind of intuitively increase phi and they'll have a feeling for how much additional compute they will need, and basically the accuracy will improve as well.
So those are the main ideas that you need to understand before grasping the V2 paper. Hopefully this was useful, if you liked this part just comment down below so that I know whether it was useful or not. Anyways, let's jump into the paper itself. Okay, so first things first: to develop this family of models, we use a combination of training-aware NAS and scaling to jointly optimize training speed and parameter efficiency. So that's the thing I mentioned, basically instead of just optimizing latency or FLOPs they're optimizing these two, the training speed and the parameter efficiency, explicitly in the objective function, so they make it an explicit part of the objective function. The models were searched from a search space enriched with new ops such as Fused-MBConv, so that's the layer I mentioned, we'll see that in a bit. And yeah, those are some of the main ideas, and this is the second idea: we propose an improved method of progressive learning which adaptively adjusts regularization, so the dropout and data augmentation, along with the image size, and we'll see how that works in detail in a bit. Okay, so they mention here that training efficiency has gained significant interest recently, so they mention the NFNets and some other papers, and what they say then is that these methods often come with expensive overhead on parameter size. And if you take a look at this table, you'll see that basically, here's the EfficientNet V1, a ResNet, some Vision Transformer, and you can see that the number of parameters that the V2 has is much lower than these two, and even the V1 has a much smaller amount of parameters compared to the Vision Transformer and this ResNet derivative. So usually people omit some of the dimensions and then the results are inconclusive, and they actually stressed that a couple of times throughout this paper, but we'll get to that in a moment. Okay, so: our study shows, in EfficientNets... so basically they dissect EfficientNet V1 and try to find the bottlenecks so they can improve the version 2, which makes sense, right? So, training with very large image sizes is slow. So the thing with EfficientNet V1 was that basically the B0 model used the image resolution 224 by 224, the B7 used I think somewhere around 560, and once you have this image size, basically you can only put so much on the GPU.
So if your image size increases the batch size will have to decrease, and once the batch size decreases that means your training is going to slow down. So that's the first thing they're going to tackle, and that's going to be done with this, like, progressive increase of image resolution, which you can treat as a sort of curriculum learning. Basically the second thing is: depthwise convolutions are slow in early layers. And thirdly, equally scaling up every stage is suboptimal. So that's the thing I mentioned here with those alphas, betas and gammas. What they do is, let's take the MNAS architecture here, basically if alpha, which controls the depth, is 2, that means every single layer, so every single stage, is going to increase by 2, instead of maybe increasing here by, I don't know, 2.3, and here maybe, earlier, you won't need that much, you'll put 1.7, you get the point. Basically they want to create a non-uniform scaling instead of this. And: based on these observations, we design a search space enriched with additional ops such as Fused-MBConv, and apply training-aware NAS and scaling to jointly optimize model accuracy, training speed and parameter size. So I want to stress this: training speed and parameter size is something they explicitly encode into the objective function. Okay, hopefully that's clear. I'm gonna skip these parts. Whoops. And let's start here. While many recent works have claimed large gains in training or inference speed, they are often much worse than EfficientNet in terms of parameters and FLOPs efficiency. That's something I mentioned a couple of minutes ago. Basically you can see here that even though the NFNet and ResNet-RS have really nice top-1 accuracy, if you take a look at these two dimensions, the parameters and FLOPs, they are even worse than the V1 version of EfficientNet. Okay, I mentioned this already, and this thing, let's dig a bit deeper into it. So: depthwise convolutions have fewer parameters and FLOPs than regular convolutions, but they cannot fully utilize modern accelerators. So here's the thing, the MBConv layer, this is basically a module that somebody handcrafted a couple of years ago in the MobileNet paper, and you can see the structure. This is also called the inverted residual bottleneck layer, because you can see that the number of channels actually increases here, and then you have a residual connection, the skip connection, connecting, like, the usual skip connection, okay. And the thing is, these depthwise convolutions are not... so modern accelerators, the hardware accelerators, don't support this operation as well, so they just kind of replaced it with a conv 3x3, and everything else stays the same. So these two are kind of fused together, hence Fused-MBConv. And they say here that basically Fused-MBConvs can improve training speed with a small overhead in parameters and FLOPs, but if we replace all blocks with Fused-MBConv then it significantly increases the parameters and FLOPs while also slowing down the training. So what I want to say is that you can't simply just replace the MBConvs with this new version, the fused version. And they have some ablations here.
Basically when you have no fused layers you can see the parameters, the FLOPs and the accuracy, but once you add the fused layers, the fused modules, in the early layers of your architecture, you get a small increase in parameters and FLOPs, but you get some decent decrease in accuracy and an increase in throughput, like the number of images per second per TPU or V100, basically faster training speed. But once you start adding a lot of these, like in the first five layers, if you replace the MBConv by the fused variant in the first five layers, the number of parameters and the FLOPs just blow up and you don't get much accuracy improvement, and neither do you get training speed. So there is a trade-off obviously here, so they mention that they just want to include this thing as a part of the search as well. Okay, let's continue. So I mentioned this already, scaling up every stage is suboptimal, so you want to have non-uniform scaling, and they mention here the non-uniform scaling strategy to gradually add more layers to later stages. That's some kind of heuristic, and they mention here, it says, as a heuristic we also gradually add more layers to later stages. So as you can see, this neural architecture search business is basically a little bit of handcrafting of the initial search space and some heuristics and some search, so there's a lot of engineering details, but you'll hopefully get the main points and the gist of this paper soon, okay. So here is that additional detail. So since the search space is smaller, and it's smaller because EfficientNet V1 didn't use some of the options it had, so they just kind of cut those options from the search space and now the search space is smaller, they say here, since it's smaller, we can simply apply a random search on much larger networks that have comparable size as EfficientNet B4, which was... so we had B0 to B7, so that's something in the middle. And the main point I want you to understand here is that the model size you're searching for will be the most optimal one, but once you start scaling to either smaller models or bigger models, if the scaling strategy is good you'll get decent results, but they still won't be as optimal as if you directly searched for them. So they just found the trade-off that searching for the size that's equivalent to this B4 model is the best trade-off point. And having said that, this is the loss function I was mentioning, so the search reward combines the model accuracy, the normalized training step time and the parameter size using a simple weighted product: accuracy times the training step time and the parameter size, and you can see they're raised to the power of w, which are negative. So basically you'll want to pull this down to zero in order to increase the reward. And just to contrast the reward for the V2 version of EfficientNet with the MnasNet paper and EfficientNet V1, let me show you this formula.
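For reference, the two rewards being contrasted can be written out roughly as follows (a sketch based on the description above; the exact exponent values are hyperparameters chosen in the respective papers):

MnasNet / EfficientNet V1 style: $R(m) = \mathrm{ACC}(m) \cdot \left[\mathrm{LAT}(m)/T\right]^{w}$, with $w < 0$ (V1 swaps latency for FLOPs).

EfficientNetV2: $R(m) = A \cdot S^{w} \cdot P^{v}$, with $w < 0$ and $v < 0$, where $A$ is the validation accuracy, $S$ the normalized training step time, $P$ the parameter size, $\mathrm{LAT}(m)$ the measured latency, and $T$ the target latency.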
So they previously had accuracy and the latency, if you remember, and w was again negative, so that means you want to push this down to zero, you want to increase this, and basically that will lead to higher rewards, and that's what the controller learns to do, so it finds highly accurate models under low latency constraints. The EfficientNet V1 had, as I mentioned, FLOPs instead of latency, everything else remains the same. And here we can see, aside from accuracy, we have these two, so the training speed and the number of parameters. The nice thing about this thing here, about S, is that it correlates really nicely, and they show that, with the latency. So the higher the training speed, basically the lower the latency, so the faster you're training the lower the latency will be, and that's cool. Let's continue. Here is finally what they find, and this is the architecture they searched for, the optimal one. They have some Fused-MBConv layers in the initial stages, as you can see here, and then they have the MBConvs, the usual stuff, and you can also see that the number of channels increases here, which is the heuristic I mentioned. So compare that to the B0 EfficientNet, so that's the V1 version, the previous version of EfficientNet: they only increase the number of channels, and the V1 version, the previous version of EfficientNet, only used MBConvs, okay. Let me explain to you the progressive learning part, which is arguably one of the most important algorithmic improvements of this paper. So first the motivation for this thing: they notice that when the image size is small, like 128 by 128, the smaller the regularization, the smaller the augmentations they are using, the higher the accuracy. And as you're increasing the image resolution, we can see here that basically the higher the regularization the better the accuracy, and you can see the trend here. So basically, the higher the resolution, you want to increase the regularization in order to get the optimal, the highest performance. Okay, so that's the motivation, and what's the main idea? So here you can see, progressively, as the training progresses, so this is the number of epochs on the x-axis, and you can see in the initial epochs the images are smaller, here they are bigger, and also the augmentations are increasing, so you can see the shears here, some like geometric transformations of the image, you can see the mixup augmentation here kicking in, basically they are mixing up two images together. And so that's the high-level idea, and here is how the algorithm works: basically they split the training into M stages, where M was four I think, so basically you have something like this, so this is like the training, and this is epoch zero, this is epoch N, whatever, and we just split this into four parts, like this, right? Okay. And in this portion they use the initial image size and the initial magnitude of augmentations, and then they're linearly increasing those in every single stage.
So here they're just going to increase it, and here in this stage they're going to increase it further, and finally the magnitude of augmentation will be the highest, as well as the image size. And if we zoom in here, basically, for all of these M stages they're just linearly interpolating between the start image size and the target image size, just simple linear interpolation here, and the same goes for the regularization, just a linear interpolation between the initial magnitude and the final magnitude. And let's see what regularizations they are using. The first one is dropout, like, classical stuff. The second one is this RandAugment, where they're basically just combining a bunch of stuff like shearing, like various geometric and photometric augmentations. And the mixup is just, you basically saw it here, you're mixing up two images in the image space and you're also mixing their labels, so the target label will be some, like, mixture between these two. So for example, if we had maybe like a dog here and we had a cat here, and basically if we want to mix up with lambda equals, I don't know, 0.2, you'll end up with the final target being this: like, you'll have 0.2 for, I don't know, dog, so this is 0.2, and for this thing, for the cat, you'll have 0.8. And so you'll want to train your model to predict this distribution when you input this image here, except we don't have dog and cat here but some panda bear and some fish or whatever. Okay, you get the point. Let's continue and let's see the results. Before that, I just want to mention one more regularization they're using, that's this stochastic depth. So the main idea is the following: you have the ResNet block, basically you transform the features, you're probably familiar with ResNets, you have this residual connection, the skip connection, which just adds these features without transforming them, you add them up, and that's your residual block here. So what does stochastic depth do? It has this survival probability, which is basically the probability that this block will remain here. So that means with 0.2 probability you'll actually short-circuit this thing, so you'll just remove this thing, and basically you'll effectively end up with a shallower model. So by having stochastic depth, they just show that that improves the accuracy, and it's just a variation on the dropout concept. Basically, you want to make sure that the network becomes robust to co-adaptation. So similarly as you had dropout: so the original dropout was, you have maybe like three neurons here, two neurons here, you have a fully connected layer, and with some probability you'll pull this weight to zero, and basically the other weights have to learn how to not rely too much on that particular edge, or on any particular edge for that matter. And here this is just a generalization in a sense, where we are just removing the whole module, and we want to make sure that the network is robust, that when certain modules just disappear it will still be able to predict the class in the image, in this case. Okay, that's the main idea. Let's do results.
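As a rough illustration of the schedule just described, here is a tiny sketch of the per-stage linear interpolation of image size and regularization magnitude; all concrete numbers (four stages, the start and end values) are placeholders for the example, not the paper's settings:

def progressive_schedule(stage, num_stages=4,
                         size_start=128, size_end=300,
                         reg_start=0.1, reg_end=0.5):
    # t goes from 0.0 at the first stage to 1.0 at the last stage
    t = stage / max(num_stages - 1, 1)
    image_size = int(size_start + t * (size_end - size_start))
    reg_magnitude = reg_start + t * (reg_end - reg_start)
    return image_size, reg_magnitude

# example: four training stages with growing images and stronger augmentation
for stage in range(4):
    print(progressive_schedule(stage))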
So the first result is, in particular: our EfficientNetV2-M achieves comparable accuracy to EfficientNet B7 while training 11x faster using the same computing resources. And this was one of the objectives in the training, so that makes sense, but those results are really impressive. So let's take a look here, and you can see, for the number of parameters, FLOPs, i.e. compute, and latency, you can see that the EfficientNetV2 forms an envelope above all of the previous models like NFNets and Vision Transformers and whatnot, same goes for every single dimension. The interesting thing is how much better it is than the Vision Transformer, and I mentioned it here: so, recently Vision Transformers have demonstrated impressive results on ImageNet accuracy and training speed, however here we show that properly designed ConvNets with improved training methods can still largely outperform Vision Transformers in both accuracy and training efficiency. And if we take a look at the numbers, we can see, like, the V2-L, which is not pre-trained on this larger ImageNet which has around 21,000 classes but on the version from 2012 which had 1,000 classes, basically the numbers are much better as you can see here compared to even the biggest of Vision Transformers, which were pre-trained on ImageNet-21k. And you can see the numbers here: you can see that the accuracy is higher, we have a much lower number of parameters, we have much less compute here, and what's this? Yeah, the inference time is also smaller and the training time is also smaller, so across every single dimension it's just better, which means that the biases we have in convolutional neural networks are not going to go away anytime soon, so it seems. And the Vision Transformer took a lot of time to train and a lot of data, I think it was initially trained on this private data set that Google has, which is huge. So yeah, these are some nice results from the Google Brain team. Let's continue and see here. So they also showed the performance when transferring to other classical data sets such as CIFAR-10, CIFAR-100, Flowers, Cars, and we can see that they achieved the highest accuracy as well here. So two conclusions they derive here: scaling up data size is more effective than simply scaling up model size in the high-accuracy regime, so basically that means that this additional pre-training they did on ImageNet-21k helped boost the accuracy, as you can see in this column, and just having a bigger model will lead to overfitting, so they just mention that data is what matters, and some other work supports that claim as well. And although ImageNet-21k has 10x more data, our training approach enables us to finish the pre-training of EfficientNetV2 within two days using 32 TPU cores, and this is pretty impressive, like in only two days you can train on this huge data set, which is a nice thing to have obviously, so because they reduced the training time, they can now train it in two days and achieve great results.
So they did a lot of ablations and show that each of the novel things they introduced, like progressive training, leads to improvements in results. And what's interesting is maybe that here they just tried progressive learning for ResNets, and they showed that using progressive training you can improve the accuracy on the ResNets, not on the EfficientNets but on ResNets, and also decrease the training time, and the same goes for the V1. So basically this progressive training has not overfitted to the particular architecture from EfficientNetV2, so yeah, that's a nice ablation there. That's pretty much it, so basically search is the name of the game, this progressive training improves the training speed, it improves the accuracy, a couple of heuristics here and there, and that's it, that's the version two of EfficientNet. Hopefully you found this video useful, if you did leave a comment, share, that would be super helpful, and hit that bell icon to get notified when I upload a new video. Until next time, see you.
[{"start": 0.0, "end": 5.84, "text": " What's up in this video I'm covering the newest version of efficient net paper just came out a couple of days ago"}, {"start": 5.84, "end": 12.14, "text": " The title is efficient net we to smaller models and faster training by Minxion Tan and quality"}, {"start": 12.4, "end": 19.28, "text": " Hopefully I pronounced their names correctly from Google brain and you can see the results. They got are pretty decent"}, {"start": 20.32, "end": 25.64, "text": " Taking a look at the image net accuracy like top one accuracy on the y-axis and training time on the x-axis"}, {"start": 25.64, "end": 31.3, "text": " We can see that the curves just form envelope around all of the previous models like NF nets"}, {"start": 31.3, "end": 35.58, "text": " which used the which ditch the better realization altogether and you can see the"}, {"start": 36.06, "end": 42.82, "text": " V1 version of efficient net here and some models such as vision transformer and yeah, basically they form an envelope"}, {"start": 42.82, "end": 49.58, "text": " So pretty decent results the two main ideas are basically they modify the search space by adding this new"}, {"start": 49.58, "end": 58.62, "text": " Like block called fused and become which is just a handcrafted module which was created by modifying the ambi com from the mobile net paper and"}, {"start": 59.18, "end": 61.7, "text": " We'll see those a bit later. It's not that important"}, {"start": 61.7, "end": 68.25999999999999, "text": " so that's the first thing the second thing is they updated the the objective function by adding the training step time and by adding the"}, {"start": 69.38, "end": 75.1, "text": " Number of parameters into the objective function. Finally, the the last thing is progressive training"}, {"start": 75.1, "end": 81.0, "text": " training whereby they while they're training there they are slowly and gradually increasing the"}, {"start": 81.33999999999999, "end": 84.3, "text": " Image resolution, but they're also increasing the regularization"}, {"start": 85.02, "end": 90.94, "text": " Like dropout and some random augmentations. We'll see those details a bit later. So those are the main ideas"}, {"start": 90.94, "end": 92.94, "text": " So first like the the search"}, {"start": 93.34, "end": 95.34, "text": " State the search space and the search"}, {"start": 96.41999999999999, "end": 101.13999999999999, "text": " Strategy were updated a bit and finally the progressive training. That's the second thing"}, {"start": 101.14, "end": 107.34, "text": " Before I even start digging into the paper. I want to walk you through the body of literature that came before this paper and"}, {"start": 107.62, "end": 110.78, "text": " The interesting thing is that this author the quality"}, {"start": 111.3, "end": 115.06, "text": " basically was the co-author of many of the first and"}, {"start": 115.5, "end": 121.22, "text": " Seminal neural architecture search papers including the first one the the NAS with RL paper"}, {"start": 121.46000000000001, "end": 128.06, "text": " Then we had the NAS net paper then we had the M. Ness paper and finally efficient net V1"}, {"start": 128.06, "end": 130.06, "text": " I just want to walk you through and"}, {"start": 130.06, "end": 132.06, "text": " because I think"}, {"start": 132.26, "end": 140.02, "text": " You understanding the this previous work will help you help ground your understanding and better better better grasp the v2 version"}, {"start": 140.5, "end": 143.86, "text": " Okay. 
So first things first. What's neural architecture search?"}, {"start": 143.86, "end": 149.26, "text": " the basic idea is this you have you have a controller which is usually some RNN like LSTM and"}, {"start": 150.14000000000001, "end": 156.54, "text": " Basically you sample architectures from your search space you evaluate the architectures by training them and"}, {"start": 156.54, "end": 164.18, "text": " You basically evaluate them on the validation set you get the accuracy and you get you you basically give the controller the reward"}, {"start": 164.82, "end": 169.73999999999998, "text": " Which is the evaluation accuracy and this is your common RL"}, {"start": 170.26, "end": 177.06, "text": " Loop you have the agent you have the environment you get rewards and you can train the agent the controller here to"}, {"start": 177.78, "end": 184.62, "text": " Eventually sample those architectures which have high reward ie which have high validation accuracy and you can see here"}, {"start": 184.62, "end": 187.70000000000002, "text": " How the sampling happens basically the controller?"}, {"start": 188.22, "end": 192.06, "text": " Samples first maybe the number of filters then what's the height of the filter was the width?"}, {"start": 192.22, "end": 199.54, "text": " What's the stride blah blah blah in in the every single layer and that's something called global search space because you're basically"}, {"start": 200.38, "end": 204.98000000000002, "text": " For all of the end layers you have the maximum flexibility you can pick every single layer"}, {"start": 204.98000000000002, "end": 210.94, "text": " Maybe different but the the like the search space is huge and this didn't work that good"}, {"start": 210.94, "end": 216.9, "text": " I mean it did work very well, but they needed a bunch of compute like some huge number of GPU hours and"}, {"start": 217.42, "end": 222.62, "text": " Then the second paper the NAS net paper this learning transferable architectures for scalable image recognition"}, {"start": 222.82, "end": 225.72, "text": " What it did basically was scope down the search space by?"}, {"start": 226.7, "end": 231.36, "text": " By noticing that most of the handcrafted architectures in the literature like mobile"}, {"start": 232.18, "end": 237.22, "text": " Nets etc and resnets and VGG and whatnot they basically tend to"}, {"start": 237.22, "end": 243.46, "text": " Create a cell and then just replicate it and so what they decided to do is just design the cell"}, {"start": 243.86, "end": 248.98, "text": " Basically you search for the cell and then you repeat the cell n times in order to get your final"}, {"start": 249.54, "end": 252.54, "text": " Architecture and that's it that was the main idea in that paper"}, {"start": 253.18, "end": 258.82, "text": " M s net which is really closely related to efficient nets did the following so basically"}, {"start": 259.74, "end": 261.74, "text": " They introduced"}, {"start": 261.74, "end": 268.36, "text": " They they again modify the search space a little bit so now they have like a couple blocks like this"}, {"start": 269.18, "end": 270.66, "text": " Let's say maybe seven"}, {"start": 270.66, "end": 274.58, "text": " I think they had some number like that and what I do is you you now?"}, {"start": 275.14, "end": 280.14, "text": " Sample the first layer in each of these blocks, and then you just repeat them"}, {"start": 280.90000000000003, "end": 284.22, "text": " By some number of times which is also a hyper parameter"}, {"start": 284.22, 
"end": 291.74, "text": " Which we search for so l1 here l2 here, so we'll have l2 of these will have l3 ln of these and"}, {"start": 292.5, "end": 295.98, "text": " This is a search space and additionally they included the latency"}, {"start": 296.46000000000004, "end": 303.5, "text": " Into the final reward objective function so not only the accuracy on the validation set but also the latency which made this"}, {"start": 304.3, "end": 311.14000000000004, "text": " M nest paper suitable for finding architectures that could run on edge devices such as mobile devices etc"}, {"start": 311.14, "end": 316.9, "text": " The loop is pretty much the same the reward function is just modified. This is still"}, {"start": 317.62, "end": 319.8, "text": " Lstm the only thing they"}, {"start": 321.09999999999997, "end": 326.5, "text": " Actually a change is they are not using the reinforce algorithm. They're using PPO from open AI"}, {"start": 327.09999999999997, "end": 329.41999999999996, "text": " as the RL algorithm of choice"}, {"start": 330.38, "end": 334.14, "text": " those are the main ideas finally we get to efficient net and"}, {"start": 334.14, "end": 341.86, "text": " Efficient net basically uses the same space as M. Ness net the only difference is instead of latency"}, {"start": 341.86, "end": 344.18, "text": " They are actually optimizing for flops"}, {"start": 344.38, "end": 349.86, "text": " So they they are basically that means the floating number of floating operations per second"}, {"start": 350.3, "end": 355.24, "text": " And that's that's it they the trick they did was to find the"}, {"start": 356.06, "end": 363.21999999999997, "text": " baseline architectures called B zero by using this M. Ness space I just explained and then they found these"}, {"start": 363.22, "end": 370.06, "text": " Basically then they scale the the B zero architecture by scaling the depth scaling the width"}, {"start": 370.06, "end": 372.74, "text": " I the number of channels and the image resolution"}, {"start": 373.54, "end": 374.74, "text": " pretty much"}, {"start": 374.74, "end": 379.86, "text": " Proportionally that means they just do some great search define this alpha beta and gamma"}, {"start": 380.58000000000004, "end": 383.5, "text": " which control depth width and"}, {"start": 384.54, "end": 388.58000000000004, "text": " even resolution and then they just raise these parameters to Phi and"}, {"start": 388.58, "end": 392.9, "text": " And Phi is the thing that controls how you get novel architecture"}, {"start": 392.9, "end": 399.94, "text": " So basically by setting Phi to 2 they'll they'll sample B one and then they'll have B one through B seven depending on the Phi"}, {"start": 400.7, "end": 406.74, "text": " The nice thing about Phi is that when you when you put when it when Phi is one compared to when Phi is zero"}, {"start": 407.09999999999997, "end": 413.06, "text": " The the B one model the model with Phi equals one will have twice as much flops"}, {"start": 413.06, "end": 419.02, "text": " Compared to B zero so they can kind of intuitively increase at the Phi and they'll proportion"}, {"start": 419.02, "end": 423.38, "text": " they'll have a feeling for how much compute how much additional compute they will need and"}, {"start": 424.14, "end": 429.06, "text": " Basically the accuracy will improve as well. 
So those are the main ideas that you need to understand before"}, {"start": 430.22, "end": 432.22, "text": " grasping the v2 paper"}, {"start": 432.22, "end": 435.9, "text": " Hopefully this was useful if you if you like this part just comment down below"}, {"start": 436.26, "end": 438.62, "text": " So that I know whether it was useful or not"}, {"start": 439.1, "end": 441.1, "text": " Anyways, let's jump into the paper itself"}, {"start": 441.1, "end": 443.1, "text": " okay, so"}, {"start": 443.54, "end": 447.42, "text": " First things first to develop this family of models. We use a combination of training aware"}, {"start": 447.82000000000005, "end": 453.14000000000004, "text": " Mass and scaling to jointly optimize training speed and parameter efficiency. So that's the thing"}, {"start": 453.66, "end": 458.06, "text": " I mentioned basically instead of just optimizing latency or flops"}, {"start": 458.18, "end": 463.58000000000004, "text": " They're optimizing these two so the training speed and the parameter efficiency explicitly into the objective function"}, {"start": 464.3, "end": 467.38, "text": " So they they make it explicit part of objective function"}, {"start": 467.38, "end": 473.9, "text": " The models were searched from search space enriched with new ops such as fused and become so that's the layer"}, {"start": 473.9, "end": 475.9, "text": " I mentioned that we'll see that in a bit"}, {"start": 475.9, "end": 479.82, "text": " And yeah, those are some of the main ideas and this is the second idea"}, {"start": 479.82, "end": 484.14, "text": " We propose an improved method of progressive learning which adaptive adjust regularization"}, {"start": 484.14, "end": 490.82, "text": " So the dropout and data augmentation along with the image size and we'll see how that works in detail in a bit"}, {"start": 491.1, "end": 495.9, "text": " Okay, so they mentioned here that training efficiency has gained significant interest recently"}, {"start": 495.9, "end": 499.09999999999997, "text": " so they mentioned the NF nets and some other papers and"}, {"start": 499.58, "end": 504.46, "text": " What I say then is that these methods often come with expensive overhead on parameter size"}, {"start": 504.46, "end": 507.29999999999995, "text": " And if you take a look at this table, you'll see that"}, {"start": 508.14, "end": 512.42, "text": " Basically, here's the efficient that we won resonant some vision transformer"}, {"start": 512.42, "end": 516.66, "text": " You can see that the number of parameters that the V2 has is much lower"}, {"start": 517.5, "end": 524.74, "text": " Than than than these two and even the V1 has much smaller amount of parameters compared to vision transformer in this resonant"}, {"start": 524.74, "end": 526.46, "text": " derivative"}, {"start": 526.46, "end": 528.46, "text": " so usually people"}, {"start": 529.1, "end": 532.9, "text": " omit some of the dimensions and then the results are inconclusive"}, {"start": 533.58, "end": 538.38, "text": " And they actually stressed that a couple of times throughout this paper, but we'll get to that in a moment"}, {"start": 539.3, "end": 543.74, "text": " Okay, so our study shows inefficientness. So it's basically they they"}, {"start": 544.78, "end": 551.0600000000001, "text": " Dissect efficient nap V1 and try to find the bottlenecks so they can improve the version 2 which makes sense, right?"}, {"start": 551.06, "end": 559.26, "text": " So training with very large image sizes is slow. 
So the thing with image net V1 was that basically the the B0 model used"}, {"start": 559.9, "end": 566.2199999999999, "text": " The image resolution 2 to 4 times 2 to 4 the B7 used I think somewhere around"}, {"start": 567.42, "end": 568.54, "text": " 560"}, {"start": 568.54, "end": 574.14, "text": " And once you have this image size, basically you you can you can you can put so much"}, {"start": 574.78, "end": 580.18, "text": " On on the GPU. So if your image size increases the batch size will have to decrease"}, {"start": 580.18, "end": 584.2199999999999, "text": " And once the batch size decreases that means your training is going to slow down"}, {"start": 584.5, "end": 589.38, "text": " So that's the first thing they're gonna tackle with and that's gonna be done with this"}, {"start": 589.8599999999999, "end": 591.4599999999999, "text": " like"}, {"start": 591.4599999999999, "end": 595.7399999999999, "text": " Progressive increase of image resolution which you can treat as a sort of a curriculum"}, {"start": 596.3, "end": 601.26, "text": " learning basically the second thing is depthwise convolutions are slow in early layers and"}, {"start": 601.9399999999999, "end": 605.2199999999999, "text": " Thirdly equally scaling up every stage is suboptimal"}, {"start": 605.22, "end": 611.46, "text": " So that's the thing I mentioned here with those alpha betas and gammas what I do is"}, {"start": 611.9, "end": 613.9, "text": " Let's take the MNAS architecture here"}, {"start": 614.02, "end": 618.98, "text": " Basically if alpha so that controls the depth is 2 that means every single layer"}, {"start": 619.1, "end": 624.78, "text": " So in every single stage is going to increase by 2 instead of maybe increasing here by I don't know"}, {"start": 625.4200000000001, "end": 630.9, "text": " 2.3 and here maybe earlier you won't need that much you'll put 1.7 you get the point"}, {"start": 630.9, "end": 635.3, "text": " basically, they want to create a non uniform scaling instead of this and"}, {"start": 637.86, "end": 644.38, "text": " Based on these observations we design a search space enriched with additional ops such as fused and become and apply training awareness"}, {"start": 644.78, "end": 650.38, "text": " And scaling to jointly optimize model accuracy training speed and parameter size"}, {"start": 650.5, "end": 656.66, "text": " So I want to stress this training speed and parameter size is something they explicitly encode into the objective function"}, {"start": 656.66, "end": 660.6999999999999, "text": " Okay, hopefully that's clear. I'm gonna skip these parts. Whoops and"}, {"start": 661.4599999999999, "end": 663.4599999999999, "text": " Let's start here"}, {"start": 664.8199999999999, "end": 668.54, "text": " While many recent books have claimed large gains in training or inference speed"}, {"start": 668.54, "end": 673.78, "text": " They often much worse. They're often much worse than efficient net in terms of parameters and flops efficiency"}, {"start": 673.78, "end": 681.26, "text": " That's something I mentioned a couple minutes ago. Basically you can see here that even though the NF net and resonant RS have"}, {"start": 681.26, "end": 686.18, "text": " Really nice top one accuracy if you take a look at these two dimensions the parameters and flops"}, {"start": 686.18, "end": 691.98, "text": " They are even worse than then the V1 version of efficient net. 
Okay, I mentioned this already and"}, {"start": 692.78, "end": 697.54, "text": " This thing let's let's let's dig a bit deeper into this so"}, {"start": 698.18, "end": 704.98, "text": " Depthwise convolutions have fewer parameters and flops than regular convolutions, but they cannot fully utilize modern accelerators"}, {"start": 705.7, "end": 706.7, "text": " so"}, {"start": 706.7, "end": 709.18, "text": " Here's the thing the Mb conv layer"}, {"start": 709.18, "end": 711.18, "text": " so this is the"}, {"start": 711.18, "end": 717.42, "text": " Basically a module that somebody handcrafted a couple of years ago in the mobile net paper and you can see the structure"}, {"start": 717.42, "end": 719.8199999999999, "text": " This is also called inverted residual"}, {"start": 720.6999999999999, "end": 723.3399999999999, "text": " bottleneck layer because you can see that the"}, {"start": 723.9, "end": 729.06, "text": " Number of channels actually increases here and then you have a residual connection the skip connection connecting"}, {"start": 729.5799999999999, "end": 732.06, "text": " the like the usual skip connection, okay, and"}, {"start": 732.54, "end": 736.8199999999999, "text": " The thing is these depthwise convolutions are not"}, {"start": 736.82, "end": 742.6600000000001, "text": " So modern accelerators the hover accelerators don't support this operation as much so they just kind of"}, {"start": 743.0600000000001, "end": 745.0600000000001, "text": " replaced it with calm three by three and"}, {"start": 745.5400000000001, "end": 751.38, "text": " Everything else stays the same. So these two are kind of fused together. That's hence fused and become and"}, {"start": 752.1, "end": 754.1, "text": " they said here that"}, {"start": 755.3000000000001, "end": 760.6600000000001, "text": " Basically can fuse them becomes can improve training speed with a small overhead and parameters and flops"}, {"start": 760.6600000000001, "end": 765.4200000000001, "text": " But if we replace all blocks with fused and become then it significantly increases the"}, {"start": 765.42, "end": 768.02, "text": " parameters and flops while also slowing down the training so"}, {"start": 768.9, "end": 776.06, "text": " What I want to say is that you can simply just replace the M becomes with this newly this new version the fuse version"}, {"start": 776.62, "end": 780.86, "text": " And they have some relations here. 
Basically when you have no fuse layers"}, {"start": 780.86, "end": 787.18, "text": " You can see the parameters the the flops and the accuracy but once you once so once you add the fused"}, {"start": 787.74, "end": 791.42, "text": " Layer the fuse modules in the in the early layers of your architecture"}, {"start": 791.42, "end": 797.74, "text": " You get small increase in parameters and flops, but you get some decent decrease in accuracy and increase in throughput"}, {"start": 798.2199999999999, "end": 804.78, "text": " Like the number of images per second per per like TPU or or or V hundred basically faster training speed"}, {"start": 805.3399999999999, "end": 807.8199999999999, "text": " but once you start adding a lot of these like"}, {"start": 808.4599999999999, "end": 815.0999999999999, "text": " In the first five layers if you if you replace the the M become by fuse one by the fuse variant in the first five layers"}, {"start": 815.5799999999999, "end": 819.3399999999999, "text": " The parameters the number of parameters and the flops just blow up"}, {"start": 819.34, "end": 825.1800000000001, "text": " Blows off and you don't get a much accuracy improvement and neither do you get training speed"}, {"start": 825.1800000000001, "end": 827.1, "text": " So there is a trade-off obviously here"}, {"start": 827.1, "end": 833.58, "text": " So they mentioned that they just want to include this thing in the as the part of the search as well"}, {"start": 834.62, "end": 835.74, "text": " Okay"}, {"start": 835.74, "end": 839.4200000000001, "text": " Let's continue. So I mentioned this already scaling up every stage is"}, {"start": 840.14, "end": 847.26, "text": " Suboptimal so you want to have non-uniform scaling and they mentioned here the non-uniform scaling strategy to gradually add"}, {"start": 847.26, "end": 849.5, "text": " More layers to later stages"}, {"start": 849.5, "end": 853.26, "text": " That's some kind of heuristic and they mentioned here says a heuristic"}, {"start": 853.26, "end": 856.3, "text": " We also gradually add more layers to later stages"}, {"start": 856.3, "end": 860.86, "text": " So as you can see this neural architecture search business is basically a little bit"}, {"start": 861.34, "end": 867.5, "text": " Handcrafting of the initial search space and some heuristics and some search so it's it's a there's a lot of engineering details"}, {"start": 867.9, "end": 872.3, "text": " But you'll hopefully get the the main points and the gist of this paper"}, {"start": 872.3, "end": 879.0999999999999, "text": " Soon, okay. So here is that additional detail. So since the search space is smaller and it's smaller because"}, {"start": 879.66, "end": 886.06, "text": " Efficient net b1 didn't use some of the options it had so this just kind of cut those options from the search space"}, {"start": 886.06, "end": 889.26, "text": " And now the search space is smaller. So they say here since it's smaller"}, {"start": 889.26, "end": 894.6999999999999, "text": " We can simply apply a random search on much larger networks. They have comparable size as efficient net b4"}, {"start": 895.26, "end": 898.6999999999999, "text": " Which was so we had b0 to b7. 
So that's something in between"}, {"start": 898.7, "end": 902.3000000000001, "text": " In the middle and the the main point I want you to understand here is that"}, {"start": 903.1, "end": 907.1, "text": " The the the model size you're searching for will be the most optimal one"}, {"start": 907.1, "end": 910.5400000000001, "text": " But once you start scaling to either smaller models or bigger models"}, {"start": 911.6600000000001, "end": 915.6600000000001, "text": " If the scaling strategy is good, you'll get decent results"}, {"start": 915.6600000000001, "end": 921.1, "text": " But they still won't be as optimal is as if you directly search for them. So"}, {"start": 921.82, "end": 924.94, "text": " They just found found the trade-off that searching for the"}, {"start": 924.94, "end": 929.6600000000001, "text": " Size that's equivalent to this b4 model is the best trade-off point"}, {"start": 930.62, "end": 931.82, "text": " and"}, {"start": 931.82, "end": 935.1, "text": " Having said that this is the the the the loss function"}, {"start": 935.1, "end": 938.94, "text": " I was mentioning so the search reward combines the model accuracy"}, {"start": 939.1800000000001, "end": 945.9000000000001, "text": " The normal training step time and the parameter size using a simple way to product accuracy times"}, {"start": 946.86, "end": 949.4200000000001, "text": " the the the training step time and"}, {"start": 949.42, "end": 954.14, "text": " Parameter size you can see they're raised to the power of w which are negative. So basically you'll want to"}, {"start": 954.6999999999999, "end": 960.62, "text": " Pull this down to to zero in order to to increase the the reward and just to contrast"}, {"start": 961.02, "end": 967.3399999999999, "text": " The reward for the v2 version of efficient net with with m nes paper and enet v1"}, {"start": 968.2199999999999, "end": 974.78, "text": " Let me show you this formula. So they previously had accuracy and the latency if you remember and w was again negative"}, {"start": 974.78, "end": 980.38, "text": " so that means you want to uh, push this down to zero you want to increase this and basically"}, {"start": 980.9399999999999, "end": 985.98, "text": " That will lead to higher rewards and that's what the controller learns to do"}, {"start": 985.98, "end": 989.8199999999999, "text": " So he finds highly accurate models with low latency constraints"}, {"start": 990.54, "end": 995.8199999999999, "text": " The enet v1 had as I mentioned flops instead of latency everything else remains the same"}, {"start": 995.8199999999999, "end": 998.38, "text": " And here we can see aside from accuracy we have"}, {"start": 998.38, "end": 1004.78, "text": " These two so the the the training speed and the number of parameters the nice thing about this thing here about s"}, {"start": 1005.1, "end": 1011.18, "text": " Is that it correlates really nicely and they show that with the latency. So the the higher the the training speed"}, {"start": 1011.98, "end": 1017.9, "text": " Basically the lower the latency. So so so the the faster you're training the the lower the latency will be and that's cool. 
Let's continue"}, {"start": 1019.5, "end": 1021.74, "text": " Here is finally what they find"}, {"start": 1022.54, "end": 1023.74, "text": " And"}, {"start": 1023.74, "end": 1028.3, "text": " This is the architecture they they they they search for the optimal one"}, {"start": 1028.3, "end": 1032.94, "text": " They have some fused mb com layers in the initial stages as you can see here"}, {"start": 1032.94, "end": 1039.18, "text": " And then they have the mb coms the usual stuff and you can also see that the number of channel increases here"}, {"start": 1039.18, "end": 1046.46, "text": " Which is the heuristic I mentioned so compare that to the uh b0 efficient net. So that's the v1 version"}, {"start": 1046.78, "end": 1050.86, "text": " The previous version of efficient net they only increase the number of channels"}, {"start": 1050.86, "end": 1055.1799999999998, "text": " And the v1 version the previous version of efficient net they only used and becomes okay"}, {"start": 1055.1799999999998, "end": 1060.3, "text": " Let me let me explain you the the progressive learning part, which is arguably one of the the most important"}, {"start": 1061.08, "end": 1063.08, "text": " algorithmic improvements of this paper"}, {"start": 1063.74, "end": 1070.56, "text": " So first the motivation for this thing, uh, they notice that when the image size is small like 128 by 128"}, {"start": 1071.02, "end": 1076.62, "text": " Uh, the the the smaller the regularization the smaller the augmentations they are using the higher the accuracy"}, {"start": 1076.62, "end": 1081.58, "text": " And as you're increasing the image resolution we can see here that basically"}, {"start": 1082.3, "end": 1083.8999999999999, "text": " the higher"}, {"start": 1083.8999999999999, "end": 1087.82, "text": " The regularization the better the accuracy and you can see the trend here"}, {"start": 1087.9799999999998, "end": 1095.26, "text": " So basically the higher the the resolution you want to increase the regularization in order to get the optimal the the highest performance"}, {"start": 1095.7399999999998, "end": 1103.1799999999998, "text": " Okay, so that's the motivation and what's the main idea? So here you can see so the the progressively as the training progresses"}, {"start": 1103.18, "end": 1108.0600000000002, "text": " So this is the number of epochs on the on the axis and you can see in the initial axis"}, {"start": 1108.0600000000002, "end": 1113.5800000000002, "text": " The images are smaller here. 
They are bigger and also the augmentations are increasing so you can see the shears here"}, {"start": 1114.0600000000002, "end": 1118.7, "text": " Uh some like geometric transformations of the image you can see them the mix up"}, {"start": 1119.5800000000002, "end": 1124.4, "text": " Augmentation here kicking in basically they are mixing up two images together"}, {"start": 1124.8600000000001, "end": 1127.3400000000001, "text": " And so that's a high level idea and here"}, {"start": 1127.34, "end": 1132.9399999999998, "text": " How the algorithm works like is basically they they split the training into m steps"}, {"start": 1133.34, "end": 1137.1, "text": " Uh where m was four I think so basically you have it something like this"}, {"start": 1137.34, "end": 1141.6599999999999, "text": " So this is the like the the the the training and this is the epoch zero"}, {"start": 1141.74, "end": 1145.1, "text": " This is the epoch n whatever and we just split this into four parts"}, {"start": 1145.74, "end": 1147.74, "text": " Like this, right? Okay"}, {"start": 1148.22, "end": 1151.6599999999999, "text": " And in this portion they use the initial"}, {"start": 1151.66, "end": 1157.66, "text": " Uh image size and the initial magnitude of augmentations and then they're linearly increasing those"}, {"start": 1158.22, "end": 1162.78, "text": " In every single stage. So here they're just going to increase it and here in this stage"}, {"start": 1162.8600000000001, "end": 1166.3000000000002, "text": " They're going to increase it further and finally, uh the highest"}, {"start": 1167.5800000000002, "end": 1172.7, "text": " The the magnitude of augmentation will be the highest as well as the image size and if we zoom in here"}, {"start": 1173.3400000000001, "end": 1174.7, "text": " basically"}, {"start": 1174.7, "end": 1176.7, "text": " So for all of these m stages"}, {"start": 1176.7, "end": 1183.5800000000002, "text": " They're just linearly interpolating between the target image size between the start image size just simple linear interpolation here"}, {"start": 1183.9, "end": 1188.8600000000001, "text": " and same goes for the regularization just uh linear interpolation between the"}, {"start": 1189.98, "end": 1195.26, "text": " Initial magnitude and the final magnitude and let's see what the regularizations they are using"}, {"start": 1195.42, "end": 1200.88, "text": " The one is dropout like classical stuff. 
Uh, the second one is this random random augmentation"}, {"start": 1201.26, "end": 1205.26, "text": " where they're basically just combining a bunch of stuff like shearing like uh,"}, {"start": 1205.26, "end": 1209.02, "text": " uh various uh geometric and photometric augmentations"}, {"start": 1209.5, "end": 1212.86, "text": " So the mix-up is just uh, you're you're basically you saw it here"}, {"start": 1213.1, "end": 1218.3799999999999, "text": " Uh, you're mixing up two images in the image space and you're also mixing their their labels"}, {"start": 1218.7, "end": 1223.34, "text": " So the the target label will be some uh, like mixture between these two"}, {"start": 1223.34, "end": 1229.18, "text": " So for example if we know that we so we had maybe uh, like a dog here and we had"}, {"start": 1229.18, "end": 1236.14, "text": " Uh cat here and basically if we want to mix up, uh with lambda equals, I don't know 0.2"}, {"start": 1236.6200000000001, "end": 1238.8600000000001, "text": " Uh, you'll you'll end up with the the final"}, {"start": 1239.42, "end": 1240.3, "text": " um"}, {"start": 1240.3, "end": 1242.3, "text": " Target will be this like you'll have"}, {"start": 1243.02, "end": 1247.26, "text": " 02 for for I don't know dog so this is 02 and this will be"}, {"start": 1247.74, "end": 1250.22, "text": " For this thing for the cat you'll have 0.8"}, {"start": 1250.7, "end": 1254.22, "text": " And so you'll want to train your model to predict"}, {"start": 1254.94, "end": 1255.9, "text": " this"}, {"start": 1255.9, "end": 1261.02, "text": " Distribution when you input this image here except we don't have dog and cat here"}, {"start": 1261.02, "end": 1264.0600000000002, "text": " But some panda bear and something some fish or whatever. Okay"}, {"start": 1264.6200000000001, "end": 1269.74, "text": " Uh, you get a point. Let's continue and let's see the results. Um"}, {"start": 1271.02, "end": 1276.5400000000002, "text": " Before that, I just want to mention one more, uh regularization they're using that's this stochastic depth"}, {"start": 1277.02, "end": 1280.94, "text": " So the main idea is the following so you have the the resnet block"}, {"start": 1281.3400000000001, "end": 1284.5400000000002, "text": " Basically, you transform the features you're probably familiar with resnets"}, {"start": 1284.54, "end": 1289.1, "text": " You have this residual connect the skip connection which just adds these features without transforming them"}, {"start": 1289.58, "end": 1295.1, "text": " You add them up and that's your residual block here. So what stochastic depth does?"}, {"start": 1295.58, "end": 1300.86, "text": " It has this survival probability, which is basically the probability that this block will remain here"}, {"start": 1301.18, "end": 1305.1, "text": " So that means with 0.2 probability you'll actually short"}, {"start": 1305.58, "end": 1311.8999999999999, "text": " Um connect this thing so you'll just remove this thing and basically, uh, you you'll effectively end up"}, {"start": 1311.9, "end": 1317.02, "text": " Uh with the shallower model. So by having stochastic depth"}, {"start": 1317.5, "end": 1318.3000000000002, "text": " um"}, {"start": 1318.3000000000002, "end": 1319.18, "text": " the"}, {"start": 1319.18, "end": 1325.98, "text": " It's just they just show that that improves the accuracy and it's just a variation on the dropout. 
Uh, like uh"}, {"start": 1327.1000000000001, "end": 1328.5400000000002, "text": " concept basically"}, {"start": 1328.5400000000002, "end": 1331.3400000000001, "text": " um, you want to make sure that the"}, {"start": 1332.14, "end": 1335.0400000000002, "text": " That the network becomes robust to the co-adaptation"}, {"start": 1335.74, "end": 1337.9, "text": " So similarly as you had dropout"}, {"start": 1337.9, "end": 1342.6200000000001, "text": " So the original dropout was you have maybe like three neurons here two neurons here"}, {"start": 1342.94, "end": 1348.6200000000001, "text": " You have a fully connected layer and you'll with some probability you'll you'll pull this"}, {"start": 1349.18, "end": 1356.38, "text": " Weight to zero and basically the other weights have to learn how to not rely too much on on that particular edge"}, {"start": 1356.8600000000001, "end": 1360.46, "text": " Or on any particular edge in in for that matter and here"}, {"start": 1360.46, "end": 1364.94, "text": " This is just a generalization in a sense where we are just removing the whole module"}, {"start": 1364.94, "end": 1369.42, "text": " And we want to make sure that the network is robust that when certain modules just disappear"}, {"start": 1369.5800000000002, "end": 1372.8600000000001, "text": " It will still be able to predict the the the image"}, {"start": 1373.66, "end": 1375.66, "text": " Like the the class in the image"}, {"start": 1375.9, "end": 1378.6200000000001, "text": " In this case, okay, that's the that's the main idea"}, {"start": 1379.5, "end": 1386.88, "text": " Let's do results. So the first result is so in particular our efficient napv2m achieves comparable accuracy to efficient napv7"}, {"start": 1387.66, "end": 1393.8200000000002, "text": " while training 11x faster using the same computing resources and this was one of the"}, {"start": 1393.82, "end": 1401.76, "text": " objectives in the training so that makes sense, but those results are really impressive. So let's take a look here and you can see"}, {"start": 1403.6799999999998, "end": 1404.96, "text": " For both"}, {"start": 1404.96, "end": 1406.32, "text": " number of parameters"}, {"start": 1406.32, "end": 1411.84, "text": " Flops i.e. compute and latency you can see that the efficient napv2 forms an envelope"}, {"start": 1412.48, "end": 1417.84, "text": " Above all of the previous models like nfnets and vision transformers and whatnot"}, {"start": 1417.84, "end": 1424.8, "text": " Uh same goes for every single dimension. 
The interesting thing is how much better it is than the vision transformer and I mentioned it here"}, {"start": 1425.04, "end": 1430.08, "text": " So recently vision transformers have demonstrated impressive results on image net accuracy and training speed"}, {"start": 1430.48, "end": 1434.0, "text": " However here we show that properly designed com nets"}, {"start": 1434.48, "end": 1441.1999999999998, "text": " With improved training method can still largely outperform vision transformers in both accuracy and training efficiency"}, {"start": 1441.6799999999998, "end": 1445.6, "text": " and if we take a look at the numbers, uh, we can see like the"}, {"start": 1445.6, "end": 1452.1599999999999, "text": " The v2l which is not pre-trained on the this larger image nap which has around 21 000 classes"}, {"start": 1452.6399999999999, "end": 1453.76, "text": " but the"}, {"start": 1453.76, "end": 1456.32, "text": " version from 2012 which had 1000 classes"}, {"start": 1457.04, "end": 1458.9599999999998, "text": " basically the numbers are"}, {"start": 1458.9599999999998, "end": 1461.6, "text": " much better as you can see here compared to"}, {"start": 1462.32, "end": 1468.7199999999998, "text": " Uh, even the even the biggest of vision transformers which were pre-trained on the image net 21k"}, {"start": 1469.1999999999998, "end": 1471.1999999999998, "text": " And you can see the numbers here"}, {"start": 1471.2, "end": 1477.8400000000001, "text": " You can see that the accuracy is is higher. We can see that we have much lower number of parameters"}, {"start": 1478.32, "end": 1481.52, "text": " Uh, we have uh much less compute here"}, {"start": 1482.24, "end": 1488.48, "text": " And what's this? Uh, yeah inference is also smaller and the training time is also smaller"}, {"start": 1488.8, "end": 1491.92, "text": " So across every single dimension is just better"}, {"start": 1492.88, "end": 1498.96, "text": " Which means that the biases we have in convolutional neural networks are not going to go any anytime soon. So it seems"}, {"start": 1498.96, "end": 1506.88, "text": " Uh and vision transformer took a lot of time to train and a lot of data. I think it was uh, initially trained on the this"}, {"start": 1507.76, "end": 1515.6000000000001, "text": " Private data set that google has which is huge. 
So yeah, uh, these are some nice results, uh from from the google brain team"}, {"start": 1516.8, "end": 1519.28, "text": " Let's continue and see here"}, {"start": 1519.44, "end": 1524.32, "text": " So they also showed that the performance on when transferring to other classical"}, {"start": 1524.32, "end": 1529.2, "text": " Uh data sets such as cipher 10 cipher 100 flowers cars we can see that"}, {"start": 1529.84, "end": 1532.08, "text": " They achieved the the highest accuracy as well here"}, {"start": 1532.3999999999999, "end": 1540.0, "text": " So two conclusions they they derive here is scaling up data size is more effective than simply scaling up model size in high accuracy regime"}, {"start": 1540.56, "end": 1542.96, "text": " so basically, uh, that means that this"}, {"start": 1543.5, "end": 1550.96, "text": " pre-training they uh this uh additional pre-training that did on the image net 21k helped boost the the the accuracy as you can see"}, {"start": 1550.96, "end": 1552.96, "text": " in this column and"}, {"start": 1552.96, "end": 1556.8, "text": " just having a bigger model will lead to overfitting so they just"}, {"start": 1558.16, "end": 1560.16, "text": " Mentioned that data is"}, {"start": 1560.64, "end": 1564.64, "text": " What what matters and some other work supports the claim as well?"}, {"start": 1565.6000000000001, "end": 1572.08, "text": " So although the image 21k has 10x more data our training approach enables us to finish the pre-training of efficient"}, {"start": 1572.08, "end": 1575.3600000000001, "text": " NAB 2 within two days using 32 tpu cores"}, {"start": 1575.3600000000001, "end": 1581.6000000000001, "text": " And this is pretty impressive like in only two days you can train on this huge data set, which is a nice nice"}, {"start": 1581.6, "end": 1588.56, "text": " Uh nice thing to to to have obviously so because they reduce the training time. They can now train it in two days and achieve"}, {"start": 1589.36, "end": 1592.8, "text": " Uh great results. So they did a lot of ablations and show that"}, {"start": 1593.4399999999998, "end": 1595.4399999999998, "text": " uh each of the new"}, {"start": 1596.0, "end": 1598.24, "text": " Uh novel things they introduced like progressive training"}, {"start": 1598.9599999999998, "end": 1601.6799999999998, "text": " uh leads to to improvements in results"}, {"start": 1602.56, "end": 1604.0, "text": " um and"}, {"start": 1604.0, "end": 1610.9599999999998, "text": " What's interesting is maybe that here they just tried progressive learning for resonance and they showed that using progressive"}, {"start": 1610.96, "end": 1612.96, "text": " training you can improve the"}, {"start": 1613.3600000000001, "end": 1619.68, "text": " uh accuracy on the resonance and not on the efficient nets but on resonance and uh also decrease the"}, {"start": 1620.16, "end": 1626.08, "text": " uh training time same goes for the v1 so basically, uh this training, uh,"}, {"start": 1626.32, "end": 1632.42, "text": " This progressive training is not it's not it has not over fitted to the particular architecture"}, {"start": 1632.8, "end": 1638.48, "text": " From efficient net v2. So yeah, that's not nice ablation there. 
Um, that's pretty much it"}, {"start": 1638.48, "end": 1640.48, "text": " so basically"}, {"start": 1640.8, "end": 1643.1200000000001, "text": " Search is the name of the game"}, {"start": 1643.6, "end": 1645.6, "text": " Um this progressive training"}, {"start": 1645.84, "end": 1648.64, "text": " Improves the training speed it improves the accuracy"}, {"start": 1648.96, "end": 1653.44, "text": " A couple of heuristics here and there and that's it. That's the version two of efficient net"}, {"start": 1653.76, "end": 1656.72, "text": " Hopefully you found this video useful if you did leave the comment"}, {"start": 1657.04, "end": 1660.24, "text": " uh share that would be super helpful and"}, {"start": 1660.24, "end": 1668.5, "text": " Um hit that bell icon to get notified when I upload a new video until next time. See you"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=mH7f7N7s79s
MuZero - Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | RL Paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ MuZero - the latest agent in the lineage of AlphaGo agents. Zero human knowledge, zero rules, and cracks not only Go, Chess and Shogi but additionally it achieves SOTA on the Atari benchmark. You'll learn about: ✔️ How can MuZero learn to play without the rules ✔️ How does it learn the dynamics/model ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/1911.08265 ✅ Blog: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Overview of the AlphaGo lineage 03:00 MuZero actors explained 11:10 How can MuZero work without any rules? 14:50 MuZero learner explained 21:15 Results 25:15 Update to the search algorithm ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ Huge thank you to these AI Epiphany patreons: Petar Veličković Zvonimir Sabljic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #muzero #atari #reinforcementlearning
Finally, I'm going to cover MuZero, the latest agent in the lineage of AlphaGo agents, in the paper called Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Again, that's the DeepMind crew that's been working on this line of work since 2014 or so. It would be really helpful if you've already watched my previous videos on AlphaGo, AlphaGo Zero, and AlphaZero, because many of the details still apply here, so do watch those, I'll link them somewhere here. Okay, before I start digging into the paper, let me quickly walk you through what happened with this lineage of agents. The first thing that happened was AlphaGo, back in 2015: they developed an agent which could solve the game of Go, but it used a lot of human data, namely professional games played by Go players, it had a lot of Go-specific heuristics integrated into the agent itself, and it also knew the rules of the game, obviously. Then came AlphaGo Zero, which also played Go and was even better than AlphaGo, but the trick was they didn't use any human data: it was pure self-play reinforcement learning, and the agent learned how to play by itself and became the best player so far. The next step was AlphaZero, which was a minor conceptual change (though probably a lot of engineering): they pruned a couple more details from AlphaGo Zero to make it general enough to play not only Go but also chess and shogi, the so-called Japanese chess. Again there was no human data, but they did have the rules. The latest agent in this lineage is MuZero, which is this paper, and here they ditched the rules as well (I'll explain in a minute what that means), and it can also play Atari games now, the 57 games in the Atari benchmark. So when they say they don't have the rules, what does that mean? Well, if you watched my previous videos, you know what Monte Carlo Tree Search is, and it means that during the Monte Carlo Tree Search you don't have a simulator available. Starting from a root state s0, given some action, you don't know what the next state will be: you don't know what the exact layout of the board will look like in the case of board games, and you don't know what the next screen will look like in the case of Atari. Because you don't have the simulator, you have to learn the dynamics model, and that's the main idea of this paper: learn the dynamics model and then use it to plan, in order to play all of these games. So let's see how they managed to pull it off. They say here: we present the MuZero algorithm, which, by combining a tree-based search (again, Monte Carlo Tree Search) with a learned model (that's the novelty), achieves superhuman performance in a range of challenging and visually complex domains without any knowledge of the underlying dynamics. That's what I mentioned already, and you'll understand it as this video progresses. When evaluated on Go, chess, and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules, and it also achieves state of the art on the Atari 57 benchmark, which is pretty awesome as well.
Okay, let's dig into the algorithm straight away. I'll skip this part for now and just explain what everything looks like in MuZero. First things first, the MuZero agent consists of three parts. The first part is the representation function H. The second is the prediction function F, which gives us the policy and the value function. And finally we have the G function, or the dynamics function: given the state and given the action, it outputs the next state and it outputs the reward. This is what we're trying to learn here; we don't have the perfect simulator, we have this dynamics model which we're trying to learn. These three components together comprise the MuZero model. So this is the input state, and because we have Atari as well as Go, chess, and shogi, I'll give you an example of what it looks like for Go and for Atari as two representative cases. For Go, it will look like this: we'll have eight planes, and each plane will have 19 by 19 resolution, because that's the size of the Go board. The reason we have eight is that in order to play Go you need some past observations, otherwise you can't play the game. For Atari, similarly, we'll have 32 times 3 planes, because we have RGB frames and we have to encode 32 frames, and that's at, I think, 96 by 96 resolution, and we'll additionally have 32 planes for the 32 previous actions, which are also needed in order to play Atari. The actions are encoded as constant bias planes that are broadcast spatially: for example, if the action was zero, you stack a plane of all zeros; if it was one, you put in a plane filled with the value one over 18, because Atari has 18 actions. Once we have those volumes, those represent the input states, and now we have the H function, which is a simple ResNet. You just pass that volume into the ResNet and out comes the hidden representation, the hidden state. It looks similar: it will have 256 planes, and depending on whether we have Atari or Go, for Atari it's going to be 6 by 6 spatial resolution, and for Go it's going to remain 19 by 19, because again, the board has 19 by 19 resolution. Now we apply Monte Carlo Tree Search in the following manner. We have the F function, the prediction function, which is also a convolutional neural network. We pass the hidden state into it and out comes the policy, which will be, for example, a 361-dimensional vector in the case of Go, and the value, which is just the value function giving you the expected reward you'll get going from this state to the end of the game, basic reinforcement learning stuff. Using the probabilities coming from the policy as the priors in your Monte Carlo Tree Search, as you remember from the previous videos, you just take the action with the maximum PUCT value, say this branch, and once you have that action, you pass it into the G function. Let's see how that works.
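To make the three components more concrete, here is a minimal, simplified sketch of what the representation, prediction, and dynamics networks could look like for an Atari-style setting, including the constant-bias-plane action encoding just described. The layer sizes and module names are my own simplifications for illustration; the actual networks in the paper are much deeper ResNets.

```python
# Simplified sketch of MuZero's three components (illustrative shapes, not the paper's architecture).
import torch
import torch.nn as nn

NUM_ACTIONS = 18          # Atari action space size mentioned above
HIDDEN_PLANES = 256       # the hidden state has 256 planes

class Representation(nn.Module):            # h: observation stack -> hidden state
    def __init__(self, in_planes):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_planes, HIDDEN_PLANES, 3, padding=1), nn.ReLU())

    def forward(self, obs):                 # obs: (B, in_planes, H, W)
        return self.net(obs)                # hidden state: (B, 256, H, W)

class Prediction(nn.Module):                # f: hidden state -> (policy logits, value)
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(HIDDEN_PLANES, 32, 1), nn.ReLU(), nn.Flatten())
        self.policy = nn.LazyLinear(NUM_ACTIONS)
        self.value = nn.LazyLinear(1)

    def forward(self, state):
        x = self.trunk(state)
        return self.policy(x), self.value(x)

class Dynamics(nn.Module):                  # g: (hidden state, action) -> (next hidden state, reward)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(HIDDEN_PLANES + 1, HIDDEN_PLANES, 3, padding=1), nn.ReLU())
        self.reward = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, state, action):
        # Encode the action as a constant bias plane with value a / num_actions,
        # broadcast to the spatial resolution of the hidden state.
        b, _, h, w = state.shape
        plane = (action.float() / NUM_ACTIONS).view(b, 1, 1, 1).expand(b, 1, h, w)
        x = self.net(torch.cat([state, plane], dim=1))
        return x, self.reward(x)
```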
Again, I explained that we have this hidden state with 256 planes, where the spatial size depends on the game itself. What we do is encode the action, again spatially, so as I said before, for Atari you just add a constant bias plane. That goes into the G function, and out comes the next hidden state, which will again have 256 planes, and out comes the reward. Just by doing this, running the simulations through the Monte Carlo tree search using this G model (the dynamics model) together with the prediction and the representation models, you build up the search tree, and then, as you know, you just take the action with the highest visitation count at the root, and that's your next action. So you pick the maximum visitation count action and pass it to the environment; the environment gives you some reward and it gives you the next observation. In the case of, say, Go, because the input is a stack of eight boards, you just pop the oldest one and append the newest board state that came from the environment, and that's your new input state. You then recursively repeat this until the end of the game, when the environment just stops you. You'll be saving those trajectories into a centralized replay buffer where all of the actors store their experience, because MuZero is a distributed agent: you have multiple actors, maybe separate threads, all executing this procedure, collecting experience and saving tuples like the refined policy coming from the Monte Carlo tree search, the rewards, and the states, which you can later use to train the model. Now, there is one thing that may confuse you, and that's how this thing can play without any rules. The partial answer, and we'll dig into it a bit more later, is that only in the root state can you query the environment and ask what the valid actions are. As you can imagine, in the case of Go you'll have 361 branches from the root, and not every single one is going to be legal, so once you communicate with the environment, it tells you which actions are invalid; you pull those priors, those probabilities, down to zero, and you take that probability mass and distribute it proportionally across all of the other branches. Doing that, you'll be building a Monte Carlo tree search where the root node constrains your action space, but all of the other nodes will have all of the 361 actions available in the case of Go. That means that initially you may be simulating some impossible, illegal games, but that doesn't matter: eventually, during training, the model learns to take the correct steps, it learns the rules of the game. Okay, before explaining the actual training procedure, let me further clarify how it works without any rules, because that's the part that bugged me a lot initially, so let me try to explain it.
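As a quick recap before that, here is a rough sketch of the actor loop just described: run a search from the current observation, act on the most visited root action, and store the search policy, action, and reward for later training. The `env`, `run_mcts`, and `replay_buffer` interfaces are hypothetical placeholders, not a real library API.

```python
# Rough sketch of a single MuZero actor episode (assumed, simplified interfaces).
import numpy as np

def play_episode(env, run_mcts, replay_buffer, num_simulations=50):
    """run_mcts(observation, num_simulations) -> root visit counts over actions."""
    trajectory = []                      # (observation, mcts_policy, action, reward) tuples
    obs, done = env.reset(), False
    while not done:
        visit_counts = run_mcts(obs, num_simulations)     # search with the learned model
        mcts_policy = visit_counts / visit_counts.sum()   # normalized visit counts = training target
        action = int(np.argmax(visit_counts))             # act on the most visited root action
        next_obs, reward, done = env.step(action)
        trajectory.append((obs, mcts_policy, action, reward))
        obs = next_obs                                    # e.g. pop the oldest frame, append the newest
    replay_buffer.save(trajectory)       # many actors write into one centralized buffer
```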
So, first of all, specifically, AlphaGo Zero and AlphaZero use knowledge of the rules of the game in three places: state transitions in the search tree, actions available at each node of the search tree, and episode termination within the search tree. So there are three parts to understanding how this thing works without any rules. First things first, the state transitions. AlphaZero had access to a perfect simulator of the true dynamics process, so this is how it looks: you have the root of the Monte Carlo tree search, your initial state, and you start building the tree from it. What's the difference? Well, I already explained what the input state looks like: in the case of Go you have the eight planes at 19 by 19 spatial resolution, and once you take some action in the Monte Carlo tree search, what you end up with in the child node is again an actual input observation, not some uninterpretable hidden state like in the case of MuZero. So you end up with another stack of eight 19 by 19 boards, and once you play some other action a prime, you again get an interpretable result. Because you have the rules, because you have the simulator, you can do that. In the case of MuZero you don't have that, so instead of eight planes you have those 256 planes, and you don't have any interpretation of that hidden volume; it's just some state that MuZero learns to build up so that you can predict the policy, the value, and the rewards. That's the first thing I want you to understand. The second part, which I already touched on, is that MuZero only masks legal actions at the root of the search tree, where the environment can be queried, but does not perform any masking within the search tree itself. What does that mean? In the case of Go again, you have 361 actions going from the root node. You mask some of those out because you can query the environment, so maybe some portion is masked and the rest is available for simulations, and you build the tree from there. But all of the other nodes will have all of the actions available, because you don't have the simulator. That's the second part. The third part is the terminal node: inside the tree, the search can proceed past a terminal node, and in this case the network is expected to always predict the same value. This is achieved by treating terminal states as absorbing states during training. What they mean is the following: once you have the simulator and you arrive at a terminal state, the simulator just tells you, hey, this is a terminal state, take the reward, and you back the reward up the Monte Carlo tree. That's what we did in AlphaGo, AlphaGo Zero, and AlphaZero. Here, you can actually go past this state because you don't know that it's a terminal state, and that's kind of a problem. They mention that the value function is made to always predict the same value past termination, but I don't find the paper's explanation of how exactly this is enforced very clear. If you know the answer, comment down below, but for now we can treat it as a black box and continue with this video.
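Here is a small sketch of the root-only legal-action masking described above: priors of illegal moves are zeroed and the removed probability mass is redistributed proportionally over the legal ones; interior nodes of the tree are never masked.

```python
# Sketch of masking illegal actions at the root only (interior tree nodes keep all actions).
import numpy as np

def mask_root_priors(priors, legal_actions):
    """priors: array over all actions (e.g. 361 for Go); legal_actions: indices the env allows."""
    masked = np.zeros_like(priors)
    masked[legal_actions] = priors[legal_actions]
    total = masked.sum()
    if total > 0:
        masked /= total                  # redistribute the removed mass proportionally over legal moves
    else:
        masked[legal_actions] = 1.0 / len(legal_actions)   # degenerate case: uniform over legal moves
    return masked
```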
Let me now explain the training procedure itself. We've been saving these experiences into the replay buffer, and now what we do is sample one of the stored real sequences, so you have the input states and you have the actions that were taken. We take one state, the first state, pass it into our representation function (the ResNet), and we get some hidden state. We pass that into the prediction model and get the policy and the value. Then we take the stored action from the replay buffer, concatenate it spatially with the hidden representation, pass it into the G function, the dynamics model, and out come the reward and the next hidden state. We repeat the process: pass it through the prediction function, get the policy and the value, take the next action, concatenate it with the hidden state, pass it into G, the dynamics model, and again we get a hidden state, and so on. They say it here: in each case we train MuZero for K = 5 hypothetical steps. So even though you can see only three steps in the figure, in practice they use five of these steps. Once you have this unroll, this is how you train the thing. You make sure that the predicted policy at each step matches the MCTS policy you stored in the replay buffer, so you can use it as the target and apply a simple cross-entropy loss, i.e. you minimize the divergence between the two, and that's one component of the loss. For the second component, you want the value function to be as close as possible to the outcome of the game. In the case of Go, after you roll this out, at the end you either win and get plus one, you tie and get zero, or you lose and get minus one; denote that outcome as z, and you want (v minus z) squared to go to zero, a simple mean squared error loss. Finally you do the same thing for the rewards, except that in board games such as Go the intermediate rewards are always zero until the very end, so you won't actually be using this component there; it is used for Atari, where you have rewards along the way. But just remember there is a third component. There is also some regularization of the parameters, and that's pretty much it. As you can see, by putting the loss on the outputs at every step, you are training a recurrent-like process: whatever comes out of one step influences the values at the next step, so you can treat this like a recurrent neural network, even though they are not actually using a recurrent neural network here. They'll be tweaking all of these parameters, the parameters of the dynamics model, the parameters of the prediction function, and the parameters of H, the representation function, to learn this model, and eventually you'll have a pretty decent dynamics model which now effectively contains the rules of the game, and that's awesome.
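As a rough illustration of that training step, here is a simplified unroll over K hypothetical steps with the three loss terms (policy cross-entropy against the stored MCTS policy, value MSE against the outcome, reward MSE). It assumes the h/f/g modules sketched earlier; the value targets here are just the stored outcomes, and details from the paper such as the exact target construction for Atari are simplified away.

```python
# Simplified MuZero training loss for one sampled trajectory window (illustrative shapes).
import torch
import torch.nn.functional as F

def muzero_loss(h, f, g, obs, actions, target_policies, target_values, target_rewards, K=5):
    """obs: (B, C, H, W); actions: (B, K); targets were stored at acting time."""
    state = h(obs)                                   # initial hidden state from the real observation
    loss = 0.0
    for k in range(K + 1):
        policy_logits, value = f(state)
        # Policy: cross-entropy between the predicted policy and the MCTS visit-count policy.
        loss = loss + -(target_policies[:, k] * F.log_softmax(policy_logits, dim=-1)).sum(-1).mean()
        # Value: MSE against the observed outcome z (or return).
        loss = loss + F.mse_loss(value.squeeze(-1), target_values[:, k])
        if k < K:
            state, reward = g(state, actions[:, k])  # hypothetical step through the dynamics model
            # Reward: MSE against the real reward (always zero mid-game for board games).
            loss = loss + F.mse_loss(reward.squeeze(-1), target_rewards[:, k])
    return loss                                       # L2 regularization can be added via weight decay
```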
The part that may be confusing is what keeps this hidden state, which doesn't have any interpretation, correlated to the real input state we sampled from the replay buffer, and the answer is exactly the loss. By making sure that the prediction function applied to this state gives you a valid policy and value, eventually the hidden state will have something to do with the real input state, even though they didn't enforce any explicit constraints on the hidden state. You can't reconstruct the actual observations from it, that's not how it is trained, and yet it works, because the only thing we want to encode into the hidden state is the ability to get the policy, the value, and the reward out of it, and that's it. They mention that here: at every one of these steps the model predicts the policy, the value function, and the immediate reward. The model is trained end to end with the sole objective of accurately estimating these three important quantities, so as to match the improved estimates of policy and value generated by the Monte Carlo Tree Search, as well as the observed reward. There is no direct constraint or requirement for the hidden state to capture all the information necessary to reconstruct the original observation; going from the hidden state back to the actual input observation is not needed, because we only care about the three quantities. Instead, the hidden states are free to represent state in whatever way is relevant to predicting the current and future values and policies. Intuitively, the agent can internally invent the rules or dynamics that lead to the most accurate planning, and that's what's so awesome about MuZero. By doing this process, you learn the model and it just works. They say it again: no semantics are attached to the state, you just predict these quantities, and it's solved all by itself. Here is the loss function. The first term is the cross-entropy I mentioned, between the raw predicted policy and the Monte Carlo Tree Search policy; you want those to match. Then there is a simple mean squared error for the value function, and for the reward you also do an MSE between the observed reward and the output of your dynamics model G. Finally there is the regularization: theta comprises theta F, theta H, and theta G, that's the MuZero model, and you just apply L2 regularization to it. That's the final loss, with five hypothetical steps, and you train it on a bunch of trajectories. Now we can look at the results. Comparing MuZero on chess, the y-axis is Elo score versus the number of training steps, and you can see that at some point it surpasses the baseline, which is probably Stockfish. For shogi it's similar, and for the game of Go you can see it gets better than AlphaZero. Finally, on Atari, they compare against an agent called R2D2, reporting the mean and the median scores, and you can see that MuZero achieves higher median and mean scores than the state of the art, which is awesome. There is one catch though: on some of the hard games like Montezuma's Revenge, Pitfall, Solaris, etc., MuZero scores pretty much zero. If you look at the distribution across all of the 57 games, it has a really high score on some games and then dips, and for the last five or seven games the score is really low, but you can't catch that using just these two statistics, mean and median. You would need to show the fifth percentile and so on.
An agent called Agent57 later solved those hard games and achieved much better results on them; it's a more general agent, but it still had lower mean and median scores than MuZero. Okay, those are some of the results, and now let me show you some more. When you have a lot of data, like 20 billion frames, you can see MuZero achieves much better results than R2D2 and Ape-X, which is also a distributed agent. And when they use 10x less data together with something called MuZero Reanalyze, which has better data efficiency because it can reuse trajectories from the replay buffer and train on them multiple times using the freshest weights, they show that they get much better data efficiency: the scores are lower, but it still achieves state of the art with less data, and that's awesome. There are a couple more things I want to mention. For the game of Go, the agent was trained with 800 simulations for every single Monte Carlo tree search, which takes around 0.1 seconds per move, and they show that by giving the agent more search time at evaluation, i.e. more simulations, it achieves a much better Elo, which means it generalizes to search trees it hasn't encountered during training, and that's awesome. You can see that up to about two orders of magnitude more search it still gets a pretty decent increase in Elo score. The same thing doesn't hold for Atari. They speculate that the dynamics model, the G model, is harder to learn there, and you can imagine why: you have to understand how the frames transition, which is a tougher problem than modeling board states. They train it with 50 simulations, and you can see it does improve up to about 100 simulations, but then it flattens and slowly starts to go down, so Atari is a bit different. The other plot is nothing surprising: the more simulations during training, the better the results, as you would expect. Let me see whether there is something more interesting to tell you here. Yes, there is a small difference in the search algorithm, because now we have rewards along the way and not just in the terminal states as in board games, and because of that they had to introduce this notion of a cumulative reward. They say the backup is generalized to the case where the environment can emit intermediate rewards, have a discount gamma different from one, and the value estimates are unbounded. This part is familiar, the visitation count and the action value function, but the way you now update the action value is by adding this G estimate, which is just the sum of discounted rewards plus the bootstrapped value at the leaf. In the case of board games these intermediate rewards would all be zero, so you would just have the bootstrap from the leaf node and propagate that one up the tree, but here, in the case of Atari, you have all of these rewards along the path.
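The two pieces involved here, the generalized backup just described and the Q-value normalization discussed right below, can be sketched roughly like this; the variable names and the discount value are illustrative, not taken from the paper's code.

```python
# Sketch of the generalized backup for environments with intermediate rewards, plus the
# min-max statistics used to squash Q values inside pUCT (illustrative values and names).

def backup_value(rewards, leaf_value, gamma=0.997):
    """Discounted sum of the rewards collected along the simulated path,
    plus the discounted bootstrapped value at the leaf."""
    g = leaf_value
    for r in reversed(rewards):       # walk back from the leaf towards the node being updated
        g = r + gamma * g
    return g                          # for board games the rewards are all zero, so g is just the bootstrap

class MinMaxStats:
    """Tracks the min/max Q value seen in the tree so Q can be normalized to [0, 1]."""
    def __init__(self):
        self.minimum, self.maximum = float("inf"), float("-inf")

    def update(self, q):
        self.minimum, self.maximum = min(self.minimum, q), max(self.maximum, q)

    def normalize(self, q):
        if self.maximum > self.minimum:
            return (q - self.minimum) / (self.maximum - self.minimum)
        return q                      # no spread observed yet, leave Q as is
```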
Having intermediate rewards also has the consequence that we now have to normalize the Q values, because those rewards can be unbounded, and you need this normalization so that the Q term interacts well with the other part of the PUCT formula. As you remember, you have the Q value and you have the prior term scaled by the visit counts: the more you visit something, the more that exploration term shrinks, and eventually you rely mostly on the Q value. You therefore want the Q value to be on a similar scale as the prior term, and you get that by saving the max and min Q values encountered throughout the Monte Carlo tree search; you log that statistic and use it to normalize your Q. So, just a couple of technical details, but hopefully you got the gist of the algorithm. I think it's pretty neat, and it achieved really good scores. In my opinion it would be really nice if the scores could also match, or even beat, Agent57 on the hard-exploration games, that would be really impressive, but those games require long-term planning, and MuZero just couldn't solve them with this approach. Maybe that's a future research direction. That's it. Hopefully you enjoyed this video; if you did, leave a like, subscribe, share, and see you in the next video.
[{"start": 0.0, "end": 5.82, "text": " Finally, I'm going to cover Mu0, the latest agent in the lineage of AlphaGo agents, in this paper called"}, {"start": 5.82, "end": 14.46, "text": " Mastering Atari, Go, Chess, and Shogi, by planning with a learn model, and again, that's the DeepMind crew"}, {"start": 14.46, "end": 24.060000000000002, "text": " that's been working on this since 2014 or something, and basically, it'd be really helpful if you already watched"}, {"start": 24.06, "end": 31.68, "text": " my previous videos on AlphaGo, AlphaGo 0, and Alpha0, because many of the details still apply here, so do watch those,"}, {"start": 31.68, "end": 37.879999999999995, "text": " I'll link it somewhere here. Okay, before I start digging into the paper, let me just quickly walk you through"}, {"start": 37.879999999999995, "end": 45.06, "text": " what happened with this lineage of agents, and the first thing that happened was AlphaGo, back in 2015,"}, {"start": 45.06, "end": 51.599999999999994, "text": " they developed this agent which could solve the game of Go, but the trick was it used a lot of human data,"}, {"start": 51.6, "end": 62.1, "text": " so the professional games played by Go players, and it had a lot of Go-specific heuristics integrated into the agent itself,"}, {"start": 62.1, "end": 71.6, "text": " and it also knew the rules of the game, obviously, and then they had AlphaGo 0, which basically also played Go,"}, {"start": 71.6, "end": 78.56, "text": " and was even better than AlphaGo, but the trick was they didn't use any human data, they were just using pure self-play"}, {"start": 78.56, "end": 86.32000000000001, "text": " reinforcement learning, and the algorithm, the agent, learned how to play by itself, and became the best player ever so far."}, {"start": 86.32000000000001, "end": 94.22, "text": " The next step was doing the Alpha0, which basically was a minor, let's say minor, probably a lot of engineering,"}, {"start": 94.22, "end": 100.96000000000001, "text": " but a minor conceptual change, so they had to prune a couple more details from AlphaGo 0 in order to make it general enough"}, {"start": 100.96000000000001, "end": 107.16, "text": " so that it can play not only Go, but also chess and shogi, which is a so-called Japanese chess,"}, {"start": 107.16, "end": 115.52, "text": " and again, we didn't have any human data, but we did have the rules, and the latest agent in this lineage is Mu0,"}, {"start": 115.52, "end": 122.92, "text": " which is this paper, and basically, they just ditched the rules, and I'll explain in a minute what that means,"}, {"start": 122.92, "end": 128.92, "text": " and it can also play Atari games now, so the Atari 57 games in the Atari benchmark,"}, {"start": 128.92, "end": 134.72, "text": " so when they say they don't have the rules, what does that mean? 
Well, basically, that means that during the,"}, {"start": 134.72, "end": 140.12, "text": " so if you watched my previous videos, you know what Monte Carlo Tree Search is, and basically what it means is that"}, {"start": 140.12, "end": 147.52, "text": " during the Monte Carlo Tree Search, you don't have the available simulator, so you don't know how to, given some action,"}, {"start": 147.52, "end": 154.96, "text": " so starting from a root state, S0, given some action, you don't know what the next state here will be,"}, {"start": 154.96, "end": 159.92, "text": " you don't know how the exact layout of the boards will look like in the case of board games, or in the case of Atari,"}, {"start": 159.92, "end": 166.11999999999998, "text": " you don't know what the next screen will look like, and basically, because you don't have the simulator,"}, {"start": 166.11999999999998, "end": 172.72, "text": " you'll have to learn the dynamics model, and we'll see, and that's the whole main paper, the main idea of this paper,"}, {"start": 172.72, "end": 179.39999999999998, "text": " is to learn the dynamics model, and then subsequently use it in order to plan, in order to play all of these games,"}, {"start": 179.39999999999998, "end": 187.11999999999998, "text": " so let's see how they managed to pull it off, so they say here, we present the Mu0 algorithm, which by combining a Tree-based search,"}, {"start": 187.12, "end": 194.32, "text": " so again, Monte Carlo Tree Search, with a learned model, that's the novelty, achieves superhuman performance in a range of challenging"}, {"start": 194.32, "end": 200.72, "text": " and visually complex domains, without any knowledge of the underlying dynamics, so that's what I mentioned already,"}, {"start": 200.72, "end": 209.72, "text": " and you'll understand that as this video progresses, so when evaluating on Go, Chess, and Shogi, without any knowledge of the game rules,"}, {"start": 209.72, "end": 216.92000000000002, "text": " Mu0 matched the superhuman performance of the AlphaZero agent, so that's this one, algorithm that was supplied"}, {"start": 216.92, "end": 226.92, "text": " with the game rules, and it also achieves state of the art on this Atari 57 benchmark, and that's pretty awesome as well."}, {"start": 226.92, "end": 238.32, "text": " Okay, so let's dig into the algorithm straight away, and I'll skip this part for now, and let's just explain how everything looks like in Mu0."}, {"start": 238.32, "end": 246.32, "text": " Okay, so first, let's understand the algorithm itself. 
First things first, the Mu0 agent consists out of three parts,"}, {"start": 246.32, "end": 258.71999999999997, "text": " so the first part is the representation function, H, the second function is the prediction function, F, which will give us the policy and the value function,"}, {"start": 258.71999999999997, "end": 267.32, "text": " and finally, we'll have something called G function, or the dynamics function, and given the state and given the action,"}, {"start": 267.32, "end": 274.12, "text": " it will just output the next state, and it will output the reward, okay, and this is what we're trying to learn here,"}, {"start": 274.12, "end": 278.92, "text": " so we don't have the simulator, the perfect simulator, we have this dynamics model, which we're trying to learn,"}, {"start": 278.92, "end": 289.52, "text": " and so these three components together comprise the Mu0 model, okay, so this is the input state, and the thing is,"}, {"start": 289.52, "end": 296.92, "text": " because we have Atari, we have Go, Chess, and Shogi, and I'll just give you an example of how this thing looks like for Go,"}, {"start": 296.92, "end": 303.12, "text": " and how it looks for Atari as two representative cases, so for Go, this thing, whoops, will look like this,"}, {"start": 303.12, "end": 312.12, "text": " so we'll have eight planes, and each plane will have 19 by 19 resolution, because that's how the Go board looks like,"}, {"start": 312.12, "end": 319.32, "text": " so this is going to be, so eight, and the reason we have eight is because in order to play Go, you need to have some past observations,"}, {"start": 319.32, "end": 327.32, "text": " otherwise you can't play the game, okay, so for Atari, this is going to look like, so similarly, we'll have 32 times three,"}, {"start": 327.32, "end": 339.12, "text": " because we have RGB frames, we'll have to encode 32 frames, and so this is going to look like this, and that's like, I think, 96 by 96 resolution,"}, {"start": 339.12, "end": 346.92, "text": " and we'll have additionally 32 planes for the 32 previous actions, and that's also needed, because in order to play Atari,"}, {"start": 346.92, "end": 353.52, "text": " you just need those actions encoded, and we're just going to broadcast, for example, if the action was zero,"}, {"start": 353.52, "end": 361.71999999999997, "text": " you're just going to put a bias plane with all zeros stacked here, if it was one, you're just going to input one over 18,"}, {"start": 361.71999999999997, "end": 369.71999999999997, "text": " because Atari has 18 actions, so that's how they encode the actions, as just constant bias planes here,"}, {"start": 369.71999999999997, "end": 377.91999999999996, "text": " so once we have those volumes, those represent the input states, and now we have the H function, which is a simple resonant,"}, {"start": 377.92, "end": 385.32, "text": " so you'll just pass that volume into a resonant, and out comes the hidden representation, the hidden state,"}, {"start": 385.32, "end": 395.32, "text": " and it's going to look similarly, it's going to have 256 planes, and depending on whether we have Atari, or whether we have Go, or whatever,"}, {"start": 395.32, "end": 404.72, "text": " for Atari, it's going to be 6 by 6 resolution, and for Go, it's going to remain the same, so 19 by 19, because again, the board has 19 by 19 resolution,"}, {"start": 404.72, "end": 413.52000000000004, "text": " okay, and now we just apply Monte Carlo Tree search in the following manner, we have the F function, which 
is the prediction function,"}, {"start": 413.52000000000004, "end": 420.12, "text": " which is also a convolutional neural network, and we just pass this hidden state into it, and out comes the policy,"}, {"start": 420.12, "end": 430.72, "text": " so the policy will be like 361 dimensional vector, in the case of Go, and the value will just be the value function,"}, {"start": 430.72, "end": 440.12, "text": " which gives you the expected reward you'll get going from this state to the end of the game, just basic reinforcement learning stuff,"}, {"start": 440.12, "end": 450.72, "text": " so that's that part, and using those priors, using those probabilities coming from the policy, that's going to be your prior in your Monte Carlo Tree search,"}, {"start": 450.72, "end": 458.92, "text": " if you remember from the previous videos, and you're just going to take the maximum, like the PUCT value, and for example, you took this branch,"}, {"start": 458.92, "end": 467.12, "text": " and once you have that action, you're going to pass it into this G function, so let's see how that functions,"}, {"start": 467.12, "end": 477.72, "text": " so again, I explained we have this hidden state, and it looks like this, so 256 there, and this is dependent on the game itself,"}, {"start": 477.72, "end": 486.12, "text": " so this is 256, and what we're going to do is, we're going to encode this action, again, spatially,"}, {"start": 486.12, "end": 495.12, "text": " so as I previously said, for example, for Atari, you're just going to add a bias plane here, and this is what's going into the G function,"}, {"start": 495.12, "end": 503.92, "text": " and out goes the next hidden state, which will again have 256 planes, and out goes the reward here,"}, {"start": 503.92, "end": 513.52, "text": " so just by doing this, and doing the simulations through the Monte Carlo, using this G model, the dynamics model, and the prediction, and the representation models,"}, {"start": 513.52, "end": 520.52, "text": " you'll build up the Monte Carlo tree search, and then, as you know, you just take the highest visitation count in the root,"}, {"start": 520.52, "end": 527.3199999999999, "text": " and that's your next action, and so, as you can see here, you just pick the maximum visitation count action,"}, {"start": 527.3199999999999, "end": 532.92, "text": " and you pass it to the environment, the environment gives you some reward, and it gives you the next observation,"}, {"start": 532.92, "end": 540.92, "text": " so basically, you'll just, in the case of, say, Go, because those are 8 boards, you'll just pop the oldest one,"}, {"start": 540.92, "end": 548.52, "text": " and you'll just append the newest board state that came from the environment, and that's your new input state,"}, {"start": 548.52, "end": 553.52, "text": " and then you just recursively repeat this until the end of the game, and then the environment tells you,"}, {"start": 553.52, "end": 562.52, "text": " okay, this environment will just stop you, and what you'll do is you'll be saving those trajectories into this centralized replay buffer,"}, {"start": 562.52, "end": 571.92, "text": " where all of these actors are going to store their experience, so basically, Mu0 is a distributed agent,"}, {"start": 571.92, "end": 578.52, "text": " that means you'll have multiple actors, so all of these are maybe separate threads, and they'll be executing this thing here,"}, {"start": 578.52, "end": 586.12, "text": " they'll be collecting experience, they'll be saving stuff like this 
thing, like the policy coming, the refund policy from the Monte Carlo tree search,"}, {"start": 586.12, "end": 592.92, "text": " and rewards, and states, they'll be saving those tuples here, and later, you can use those to train this thing."}, {"start": 592.92, "end": 598.72, "text": " Now, there is one thing that may confuse you, and that's how does this thing play without any rules,"}, {"start": 598.72, "end": 606.52, "text": " and the answer is, like the partial answer, and we'll dig into it a bit later, is basically, only in this root state,"}, {"start": 606.52, "end": 613.72, "text": " you can query the environment and ask what the valid actions are, so as you can imagine, for example, in the case of Go,"}, {"start": 613.72, "end": 620.72, "text": " you'll have 361 branches here, and not every single one is going to be legal, so once you communicate with the environment,"}, {"start": 620.72, "end": 627.32, "text": " it tells you maybe these actions here are invalid, that means you'll just pull the priors, the probabilities to zero,"}, {"start": 627.32, "end": 633.72, "text": " and you'll take that probability mass, and you'll just proportionally distribute it across all of the other branches,"}, {"start": 633.72, "end": 641.72, "text": " and so doing that, you'll basically be building the Monte Carlo tree search, where the root node will kind of constrain you,"}, {"start": 641.72, "end": 650.52, "text": " constrain your space, but all of the other nodes here, like all of these, they'll have all of the 361 actions available in the case of Go,"}, {"start": 650.52, "end": 658.9200000000001, "text": " and that means you may play some, initially, you'll be playing maybe some impossible games, illegal games, but that doesn't matter,"}, {"start": 658.9200000000001, "end": 668.12, "text": " eventually, during training, the model will learn how to basically do the correct steps, it will learn the rules of the game, basically."}, {"start": 668.12, "end": 678.52, "text": " Okay, before explaining the actual training procedure, let me just further clarify how it works without any rules,"}, {"start": 678.52, "end": 683.32, "text": " and that's the part that bugged me a lot, initially, so let me try and explain it."}, {"start": 683.32, "end": 688.92, "text": " So, first of all, specifically, AlphaGo Zero and AlphaZero use knowledge of the rules of the game in three places,"}, {"start": 688.92, "end": 697.52, "text": " so state transitions in the search tree, actions available at each node of the search tree, and episode termination within the search tree,"}, {"start": 697.52, "end": 701.12, "text": " so there are three parts to understanding how this thing works without any rules."}, {"start": 701.12, "end": 710.3199999999999, "text": " First things first, this one. So, AlphaZero had access to a perfect simulator of the true dynamics process, so this is how the thing looks like."}, {"start": 710.3199999999999, "end": 718.92, "text": " You basically have, so this is the root of the Monte Carlo tree search, so this is your initial state, you start building the Monte Carlo tree search like this,"}, {"start": 718.92, "end": 725.52, "text": " and what's the difference? 
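Before the comparison with AlphaZero continues below, here is a compact sketch of the search loop with the learned model described above. It is a simplified illustration, not DeepMind's implementation: it assumes the root was already expanded from h(observation) and f, and it ignores intermediate rewards and discounting, which come up later in the video.

```python
import math

# Minimal sketch of one MuZero-style simulation. `f` and `g` stand for the learned
# prediction and dynamics networks and are passed in as callables.

class Node:
    def __init__(self, prior):
        self.prior = prior            # P(s, a): prior from the policy head of f
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}            # action -> Node
        self.hidden_state = None

    def value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def puct_score(parent, child, c_puct=1.25):
    # Prior-weighted exploration bonus plus the current value estimate.
    u = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return child.value() + u

def run_simulation(root, f, g):
    # Selection: walk down the tree picking the child with the highest pUCT score.
    node, path = root, [root]
    while node.children:
        action, node = max(node.children.items(),
                           key=lambda kv: puct_score(path[-1], kv[1]))
        path.append(node)
    # Expansion: the learned dynamics model g replaces the missing simulator.
    parent = path[-2]
    node.hidden_state, _reward = g(parent.hidden_state, action)
    # Evaluation: the prediction model f gives the priors and the value of the leaf.
    policy, value = f(node.hidden_state)
    node.children = {a: Node(prior=p) for a, p in enumerate(policy)}
    # Backup: propagate the value up the visited path (no intermediate rewards here).
    for n in reversed(path):
        n.value_sum += value
        n.visit_count += 1
```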
Well, I already explained how the input state looks like, so in the case of Go, you have the eight planes,"}, {"start": 725.52, "end": 737.12, "text": " 19 by 19 spatial resolution, and once you take some action in the Monte Carlo tree search, what you'll end up here is, again, basically the input observations,"}, {"start": 737.12, "end": 745.72, "text": " and not some uninterpretable hidden state like in the case of Mu Zero, so you'll end up here with, again, with some eight planes,"}, {"start": 745.72, "end": 758.32, "text": " and you'll have 19 by 19 basically boards, and that's here, and again, once you play some other action here, a prime, again here you'll have the interpretable result."}, {"start": 758.32, "end": 767.12, "text": " Because you have the rules, because you have the simulator, you can do that. In the case of Mu Zero, you don't have that, so here you'll have, instead of eight,"}, {"start": 767.12, "end": 779.72, "text": " you'll have those 256 planes, and you don't have any interpretation of that hidden volume, there's just some state that the Mu Zero learns how to build up,"}, {"start": 779.72, "end": 786.12, "text": " so that you can predict the policy, the value, and the rewards. So that's what I want you to understand, that's the first thing."}, {"start": 786.12, "end": 794.72, "text": " The second part is, and I already kind of explained this, Mu Zero only masks legal actions at the root of the search tree where the environment can be queried,"}, {"start": 794.72, "end": 806.12, "text": " but does not perform any masking within the search tree itself. Okay, so what does that mean? Basically, in the case of Go again, you have 361 actions going from the root node,"}, {"start": 806.12, "end": 817.12, "text": " you'll mask some of those out because you can query the environment, and that means maybe you mask some portion here, and you'll have this part available for simulations,"}, {"start": 817.12, "end": 825.52, "text": " and you'll be building the tree, but the thing is, all of the other nodes here will have all of the actions available because you don't have the simulator,"}, {"start": 825.52, "end": 833.72, "text": " and so that's the second part. And the third part is the terminal node, so inside the tree, the search can proceed past a terminal node."}, {"start": 833.72, "end": 842.12, "text": " In this case, the network is expected to always predict the same value. This is achieved by treating terminal states as absorbing states during training."}, {"start": 842.12, "end": 850.32, "text": " So what they say here is the following, so because once you have the simulator, once you arrive at a certain state, the terminal state,"}, {"start": 850.32, "end": 859.32, "text": " the simulator will just tell you, hey, this is a terminal state, you can take the reward, you just back up the reward, up the Monte Carlo tree search, and that's it."}, {"start": 859.32, "end": 868.12, "text": " That's what we've done in AlphaGo, in AlphaGo Zero, and AlphaZero. Here, you can actually go past this state because you don't know that it's a terminal state,"}, {"start": 868.12, "end": 876.72, "text": " and that's kind of a problem. 
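As a small illustration of the root-only masking just described (a sketch with illustrative names, not the authors' code): illegal actions reported by the environment get their prior pulled to zero, and the removed probability mass is redistributed proportionally over the legal moves, which is just a renormalization.

```python
import numpy as np

# Root-only legal-action masking: zero out the priors of illegal moves and
# renormalise, redistributing the removed mass proportionally over the legal
# moves. Inner nodes of the search tree are never masked.

def mask_root_priors(priors, legal_actions):
    """priors: 1-D array over all actions (e.g. 361 for 19x19 Go).
    legal_actions: indices the environment reports as legal at the root."""
    legal = list(legal_actions)
    masked = np.zeros_like(priors)
    masked[legal] = priors[legal]
    total = masked.sum()
    if total > 0:
        masked /= total
    else:                                   # degenerate case: uniform over legal moves
        masked[legal] = 1.0 / len(legal)
    return masked

# Example with 4 actions, where the environment says actions 1 and 3 are illegal:
print(mask_root_priors(np.array([0.4, 0.3, 0.2, 0.1]), [0, 2]))   # [0.667, 0., 0.333, 0.]
```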
And they mention here that somehow, and I don't understand this part really well, like it's kind of confusing,"}, {"start": 876.72, "end": 885.52, "text": " they didn't explain it really well in the paper how they enforce that the value function always predicts the same value."}, {"start": 885.52, "end": 894.32, "text": " Yeah, so if you maybe know the answer, just comment down below, but for now, we can just treat it as a black box and continue with this video."}, {"start": 894.32, "end": 903.9200000000001, "text": " Let me just now explain to you the training procedure itself. Okay. So we've been saving these experience replays into the replay buffer,"}, {"start": 903.9200000000001, "end": 911.72, "text": " and now what we do is we sample one of the real sequences. So this is one of the real sequences we've stored, and you can see the input states,"}, {"start": 911.72, "end": 919.32, "text": " and you can see some actions. And what we do is we take one state, so the first state, we pass it into our representation function, the ResNet,"}, {"start": 919.32, "end": 928.32, "text": " and we get some hidden state here. Okay. What we do is we pass it into the prediction model, we get the policy, we get the value."}, {"start": 928.32, "end": 938.12, "text": " Then we basically take the action that we have from the replay buffer, and we concatenate it spatially again with the hidden representation,"}, {"start": 938.12, "end": 944.9200000000001, "text": " and we pass it into the G function, the dynamics model, and out comes the reward, and out comes the next state."}, {"start": 944.92, "end": 952.3199999999999, "text": " We repeat the process again, so we pass it through the prediction function, we get the policy, we get the value, we take the action again,"}, {"start": 952.3199999999999, "end": 958.92, "text": " we concatenate it with the hidden state, we pass it into the G, the dynamics model, and again we get the hidden state, etc., etc."}, {"start": 958.92, "end": 969.92, "text": " So basically, they've been using these, and they say it somewhere here. So in each case, we train MuZero for K = 5 hypothetical steps."}, {"start": 969.92, "end": 976.7199999999999, "text": " So that means they'll have, even though you can see only three here, they basically in practice use five of these steps,"}, {"start": 976.7199999999999, "end": 988.92, "text": " and now once you have this, this is how you train the thing. 
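To summarize the unroll just described as a minimal sketch (h, f, g are the learned representation, prediction, and dynamics networks, passed in as callables; the names are mine, not the DeepMind code):

```python
# K-step unroll on a trajectory sampled from the replay buffer (K = 5 in the paper).

def unroll(h, f, g, first_observation, stored_actions, K=5):
    state = h(first_observation)              # representation: observation -> hidden state
    policy, value = f(state)                  # prediction: hidden state -> (policy, value)
    predictions = [(policy, value, None)]     # no reward prediction at step 0
    for k in range(K):
        state, reward = g(state, stored_actions[k])   # dynamics: roll the hidden state forward
        policy, value = f(state)
        predictions.append((policy, value, reward))
    # Each tuple is later matched against the stored MCTS policy, the game
    # outcome / return, and the observed reward, as described next.
    return predictions
```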
So you make sure that this policy here is the same as this one,"}, {"start": 988.92, "end": 995.92, "text": " and you've stored this one in the replay buffer, so you can use it as the target, and just apply a simple cross entropy law,"}, {"start": 995.92, "end": 1001.92, "text": " so you minimize the KL divergence, you make this to the same, and that's one component of the loss."}, {"start": 1001.92, "end": 1009.52, "text": " The second component, you want to make sure the value function becomes as close as possible to the outcome of the game."}, {"start": 1009.52, "end": 1016.92, "text": " So in the case of Go, the game will, after you roll this out, at the end you'll either win and get plus one, you'll have a tie,"}, {"start": 1016.92, "end": 1024.52, "text": " so zero, or you'll lose, that's minus one, and you'll make sure to, and let's just kind of denote this as Z,"}, {"start": 1024.52, "end": 1033.92, "text": " and you'll just want to make sure that V minus Z squared gets to zero, and that's simple mean square error loss,"}, {"start": 1033.92, "end": 1041.32, "text": " and finally you do the same thing for rewards, except that in board games such as Go, because the rewards are always zero,"}, {"start": 1041.32, "end": 1048.52, "text": " until the very end you actually won't be using this component, so that this thing will be used for Atari, where you have rewards along the way,"}, {"start": 1048.52, "end": 1056.72, "text": " but just remember there's a third component, and that's pretty much it, there is some regularization of the parameters, and that's pretty much it."}, {"start": 1056.72, "end": 1065.52, "text": " So, as you can see, by putting the loss here and here, you'll be modifying, and this is some, as you can see, recurrent process,"}, {"start": 1065.52, "end": 1072.72, "text": " so whatever came here is going to influence the values here, so you can treat this as a rule, like the recurrent neural network,"}, {"start": 1072.72, "end": 1077.92, "text": " even though they are not using recurrent neural network here, basically they'll be tweaking all of these parameters,"}, {"start": 1077.92, "end": 1087.1200000000001, "text": " the parameters of the dynamics model, the parameters of the prediction functions, and the parameters of H, the representation function,"}, {"start": 1087.1200000000001, "end": 1099.1200000000001, "text": " to learn this model, and eventually you'll have a pretty decent dynamics model, which basically now contains the rules of the game, and that's awesome."}, {"start": 1099.12, "end": 1109.9199999999998, "text": " So, the part that's maybe confusing is, as you can see here, what keeps this hidden state, which doesn't have any interpretation,"}, {"start": 1109.9199999999998, "end": 1117.7199999999998, "text": " correlated to this input real state that we sampled from the replay buffer, is exactly the loss."}, {"start": 1117.7199999999998, "end": 1127.52, "text": " So, by making sure that the prediction function from this state gives you valid policy and value,"}, {"start": 1127.52, "end": 1135.12, "text": " eventually this state will have something to do with this state, even though they didn't enforce any constraints upon this hidden state,"}, {"start": 1135.12, "end": 1142.12, "text": " so you can't reconstruct the actual observations going from this state, that's not how they train it, and yet it works,"}, {"start": 1142.12, "end": 1149.72, "text": " because the only thing we want to encode into this is, we want to be sure, we want to make sure that 
we can actually get the policy,"}, {"start": 1149.72, "end": 1153.92, "text": " get the value, and get the reward from these states, and that's it."}, {"start": 1153.92, "end": 1162.72, "text": " And they mentioned that here, so, at every one of these steps, the model predicts the policy, the value function, and the immediate reward."}, {"start": 1162.72, "end": 1168.72, "text": " The model is trained end-to-end with the sole objective of accurately estimating these three important quantities,"}, {"start": 1168.72, "end": 1177.52, "text": " so as to match the improved estimates of policy and value generated by search, as well as the observed reward."}, {"start": 1177.52, "end": 1185.52, "text": " There is no direct constraint or requirement for the hidden state to capture all information necessary to reconstruct the original observation,"}, {"start": 1185.52, "end": 1193.72, "text": " so being able to go from the hidden state back to the actual input observations is not needed, because we only care about the three quantities."}, {"start": 1193.72, "end": 1201.92, "text": " Instead, the hidden states are free to represent state in whatever way is relevant to predicting the current and future values and policies."}, {"start": 1201.92, "end": 1210.52, "text": " Intuitively, the agent can invent, internally, the rules or dynamics that lead to the most accurate planning, and that's what's so awesome about MuZero."}, {"start": 1210.52, "end": 1217.1200000000001, "text": " Basically, by doing this process here, you learn this model, and it just works."}, {"start": 1217.1200000000001, "end": 1224.3200000000002, "text": " They say here, again, no semantics attached to the state. You just predict these, and it's just solved all by itself."}, {"start": 1224.32, "end": 1235.52, "text": " Here is the loss function. Again, basically, this is the cross-entropy I mentioned, so this is the raw policy, and this is the Monte Carlo Tree Search policy."}, {"start": 1235.52, "end": 1243.72, "text": " You want to make sure those are the same. Just a simple mean square error here for the value function, and for the reward,"}, {"start": 1243.72, "end": 1250.12, "text": " you also do MSE between the reward you got and the output from your dynamics model G."}, {"start": 1250.12, "end": 1262.9199999999998, "text": " Finally, the regularization, so theta just comprises theta F, theta H, and theta G."}, {"start": 1262.9199999999998, "end": 1268.12, "text": " That's the MuZero model, and you just do L2 regularization. That's the final loss."}, {"start": 1268.12, "end": 1276.52, "text": " You have five hypothetical steps. You just train it on a bunch of trajectories, and we can see now the results."}, {"start": 1276.52, "end": 1286.12, "text": " Let me show you this. Basically, comparing MuZero on chess, the Y-axis is the Elo score, and the X-axis is the number of training steps."}, {"start": 1286.12, "end": 1290.72, "text": " You can see that this line is probably Stockfish, so it got better than Stockfish at one point in time."}, {"start": 1290.72, "end": 1297.52, "text": " Then we have the same for Shogi, and finally, the game of Go. Here, you can see it gets better than AlphaZero,"}, {"start": 1297.52, "end": 1306.72, "text": " and finally, on Atari, they took something called the R2D2 agent, and this is the mean value. 
This is the median,"}, {"start": 1306.72, "end": 1314.32, "text": " and you can see that MuZero achieves higher median and mean scores than the state of the art, so that's awesome."}, {"start": 1314.32, "end": 1324.52, "text": " Now, there is one catch here, and that's that actually on some of the tough games like Montezuma's Revenge and Pitfall and Solaris, etc.,"}, {"start": 1324.52, "end": 1331.92, "text": " you have pretty much zero score with MuZero. If you take a look at the distribution across all of the 57 games,"}, {"start": 1331.92, "end": 1339.12, "text": " so these are the 57 games here, it will have a really high score for some games, and then it'll just dip here,"}, {"start": 1339.12, "end": 1348.72, "text": " and for the last five or seven games, the score is going to be really low, but you can't catch that using just these two statistics,"}, {"start": 1348.72, "end": 1357.52, "text": " like mean and median. You need to show the fifth percentile, etc. So, an agent called Agent57 later solved this"}, {"start": 1357.52, "end": 1363.72, "text": " and achieved much better results here, but it actually had lower results here, so it's a more general agent,"}, {"start": 1363.72, "end": 1373.52, "text": " but it still had lower mean and median scores than MuZero. Okay, so those are some results, and I mentioned it here,"}, {"start": 1373.52, "end": 1381.72, "text": " and now let me just show you some more results. Basically, this is for when you have a bunch of data, like 20 billion frames,"}, {"start": 1381.72, "end": 1390.12, "text": " you can see MuZero achieved much better results than R2D2; Ape-X is also a distributed agent, and when they use 10x less data"}, {"start": 1390.12, "end": 1402.32, "text": " and something called MuZero Reanalyze, which basically has better data efficiency because it can reuse some of the trajectories"}, {"start": 1402.32, "end": 1412.52, "text": " from the replay buffer and train multiple times on those trajectories by using the freshest weights, and they show that they get much better data efficiency."}, {"start": 1412.52, "end": 1420.52, "text": " Again, the scores are lower, but it still achieves state-of-the-art with less data, and that's awesome. That's pretty much it."}, {"start": 1420.52, "end": 1430.72, "text": " There's a couple more things I want to mention, and that's here. 
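Before those, a quick recap of the training objective described a bit earlier, written out as an illustrative sketch. Per unrolled step it sums a policy cross-entropy against the MCTS policy, an MSE for the value against the game outcome or return, and an MSE for the reward, plus L2 regularization over the parameters; the variable names are mine, not the paper's.

```python
import numpy as np

def muzero_style_loss(predictions, targets, params, l2_coeff=1e-4):
    """predictions/targets: lists of (policy, value, reward) per unroll step,
    where policies are probability vectors and values/rewards are scalars."""
    total = 0.0
    for (p, v, r), (pi, z, u) in zip(predictions, targets):
        total += -np.sum(pi * np.log(p + 1e-12))   # cross-entropy to the MCTS policy
        total += (v - z) ** 2                      # value vs. game outcome / n-step return
        if r is not None and u is not None:        # board games: no intermediate rewards
            total += (r - u) ** 2
    total += l2_coeff * sum(np.sum(w ** 2) for w in params)   # L2 regularisation
    return total
```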
So, for the game of Go, basically, after training the agent"}, {"start": 1430.72, "end": 1440.32, "text": " with 800 simulations for every single Monte Carlo tree search, so when you do those rollouts, you use 800 simulations, and that takes around 0.1 seconds,"}, {"start": 1440.32, "end": 1450.1200000000001, "text": " and they can show that by using more, by giving the agent in the evaluation, by giving it more search time, so more simulations,"}, {"start": 1450.1200000000001, "end": 1458.92, "text": " it achieves much better ELO, which means it generalizes to search trees that it hasn't encountered during training, and that's awesome."}, {"start": 1458.92, "end": 1468.1200000000001, "text": " You can see that until, like, two orders of magnitude, it still has a pretty decent increase in ELO score."}, {"start": 1468.1200000000001, "end": 1477.72, "text": " The same thing doesn't apply for Atari, because Atari, they speculate that the dynamics model, the G model, is harder to learn there,"}, {"start": 1477.72, "end": 1487.52, "text": " so you can imagine why, because you have to kind of understand how to transition the frames, which are a tougher problem than analyzing the board states,"}, {"start": 1487.52, "end": 1499.12, "text": " and basically, they train it with 50 simulations, and you can see it does improve until 100, but then it just kind of flattens and just slowly starts to actually go down."}, {"start": 1499.12, "end": 1507.92, "text": " So, yeah, so Atari is a bit different. Here, nothing interesting, basically, the more simulations, the better results, as we can expect."}, {"start": 1507.92, "end": 1513.72, "text": " That's pretty much it, let me just see whether there is something more interesting to tell you here."}, {"start": 1513.72, "end": 1522.52, "text": " Yeah, there is a small difference in the search algorithm, because now we have rewards along the way, and not just in the terminal states, like in the case of board games,"}, {"start": 1522.52, "end": 1531.1200000000001, "text": " and because of that, they had to introduce this, as you can see, this notion of this cumulative reward,"}, {"start": 1531.1200000000001, "end": 1540.1200000000001, "text": " and they say here the backup is generalized to the case where the environment can emit intermediate rewards, have a discount gamma different from one, and the value estimates are unbounded."}, {"start": 1540.12, "end": 1551.12, "text": " And basically, now, once you, as you can see here, this is familiar, so your visitation count and your, this is the action value function,"}, {"start": 1551.12, "end": 1563.7199999999998, "text": " the way you're now updating it is by adding this G estimate, and that's just the sum of discounted rewards plus the bootstrapped value here."}, {"start": 1563.72, "end": 1572.52, "text": " So, yeah, so in the case of board games, these would all be zero, pretty much, and you'll just have the bootstrap from the leaf node,"}, {"start": 1572.52, "end": 1582.1200000000001, "text": " and you'd propagate that one up the tree, so you'd back up those values, but here you have all of these in case of Atari."}, {"start": 1582.12, "end": 1595.12, "text": " That also has some consequences that we have to now normalize the Q values, because, as I said, rewards are, you have intermediate rewards which can be unbounded,"}, {"start": 1595.12, "end": 1604.52, "text": " so you have to do this normalization in order to have this thing interact really well with the other part of the PUCT algorithm."}, {"start": 
1604.52, "end": 1613.72, "text": " So basically, you have, as you remember, you have the Q, and you have this prior times the visitations, so the more you visit something, this will get higher,"}, {"start": 1613.72, "end": 1622.32, "text": " so this will get lower, and basically, finally, you'll just rely on the Q value, so you want to make sure that this is of similar scale as this thing,"}, {"start": 1622.32, "end": 1629.92, "text": " so you just have the normalization by saving the max and min Q values throughout the Monte Carlo tree search."}, {"start": 1629.92, "end": 1633.92, "text": " You'll be logging that statistic, and you can use it here to normalize your Q."}, {"start": 1633.92, "end": 1638.3200000000002, "text": " So just a couple of technical details, but hopefully you got the gist of the algorithm."}, {"start": 1638.3200000000002, "end": 1650.92, "text": " I think it's pretty neat, basically, it achieved really good scores, so again, it'd be really nice if, in my opinion, if we could have the scores be as close,"}, {"start": 1650.92, "end": 1663.72, "text": " like even better than Agent 57, that would be really impressive, but yeah, those games require long-term planning, and Mew Zero just couldn't solve it with this approach."}, {"start": 1663.72, "end": 1667.92, "text": " Maybe that's the forward, like future research direction."}, {"start": 1667.92, "end": 1694.92, "text": " That's it. Hopefully you enjoyed this video, if you did, leave a like, subscribe, share, and see you in the next video."}]
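Recapping the two Atari-specific tweaks described above as a small sketch (not the authors' code): the backup uses a discounted k-step return plus a bootstrapped leaf value, and Q values are min-max normalized during the search so they stay on a scale comparable to the priors in the pUCT formula.

```python
def k_step_return(rewards, bootstrap_value, gamma=0.997):
    """Discounted sum of the rewards collected along the simulated path, plus the
    discounted bootstrap value from the leaf. The default gamma is illustrative;
    with gamma = 1 and zero intermediate rewards this reduces to the plain
    board-game backup."""
    g = sum((gamma ** k) * r for k, r in enumerate(rewards))
    return g + (gamma ** len(rewards)) * bootstrap_value

class MinMaxStats:
    """Tracks the smallest and largest Q value seen during one search so that
    values can be squashed into [0, 1] before being used in pUCT."""
    def __init__(self):
        self.minimum, self.maximum = float("inf"), float("-inf")

    def update(self, value):
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    def normalize(self, value):
        if self.maximum > self.minimum:
            return (value - self.minimum) / (self.maximum - self.minimum)
        return value
```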
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=eTa-k1pgvnU
OpenAI - Solving Rubik's Cube with a Robot Hand | RL paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover OpenAI's famous robotic hand that learned to solve the Rubik's cube by training purely in a simulation. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/1910.07113 ✅ Blog: https://openai.com/blog/solving-rubiks-cube/ ✅ Quaternions: https://www.linkedin.com/posts/aleksagordic_visualizing-quaternions-an-explorable-video-activity-6779354558696087552-0heH ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 01:32 Comparison with Dactyl system 03:01 High-level overview 07:00 Tasks (Rubik's cube and block reorientation) 08:55 Physical system overview 11:08 Reading angles from the cube (electronics) 13:25 Realistic modeling of the system in simulation 15:52 Automatic Domain Randomization (ADR) 19:50 Cube size randomization during training (blog) 20:30 Entropy and rand param probability distribution 23:55 ADR pseudocode 27:15 Rapid 28:45 Randomizations 31:35 PPO 32:05 Actions and rewards 33:55 Policy network, embed and add 37:35 Behavioural cloning 39:55 Vision pipeline 41:10 Focal loss 42:10 Results 47:30 Perturbation robustness 48:20 Meta-learning 51:35 Predicting environment variables from LSTM hidden state ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #robothand #openai #rubikscube
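The transcript below walks through OpenAI's Automatic Domain Randomization (ADR). As a reading aid, here is a minimal sketch of the per-parameter range-expansion idea covered there; it is simplified, and the class name, thresholds, and step sizes are illustrative rather than OpenAI's implementation.

```python
import random

class ADRParameter:
    """One randomized simulation parameter (e.g. cube size) whose sampling range
    starts at the calibrated real-world value and is widened or shrunk over time."""

    def __init__(self, calibrated_value, step=0.01):
        self.low = calibrated_value       # both bounds start at the measured value
        self.high = calibrated_value
        self.step = step
        self.buffers = {"low": [], "high": []}

    def sample(self):
        return random.uniform(self.low, self.high)

    def evaluate_at_boundary(self, performance, which, m=128, t_low=0.1, t_high=0.7):
        """Record the performance of an episode where this parameter was pinned to
        one boundary; once m results are collected, widen or shrink that boundary."""
        buf = self.buffers[which]
        buf.append(performance)
        if len(buf) >= m:
            avg = sum(buf) / len(buf)
            buf.clear()
            if avg >= t_high:             # environment is easy enough -> make it harder
                if which == "low":
                    self.low -= self.step
                else:
                    self.high += self.step
            elif avg <= t_low:            # too hard -> back off toward the calibrated value
                if which == "low":
                    self.low = min(self.low + self.step, self.high)
                else:
                    self.high = max(self.high - self.step, self.low)

# Example: cube size measured at 5.7 cm; ADR gradually widens the sampling range.
cube_size = ADRParameter(5.7, step=0.01)
```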
What's up? In this video I'm gonna cover this paper called solving Rubik's Cube with a robot hand by the pretty much complete OpenAI team as you can see here and the goal here is to solve the Rubik's Cube using a robotic hand obviously and but the thing I want you to understand is that the point is not to actually figure out how to solve the cube they just use some off-the-shelf solver for that the whole the difficulty in the project lies in sensing the state of the hand the state of the cube and advising a control policy I controlling those joints on the hand in order to solve the cube so it's a control and sensing problem not the actual how to solve the cube problem and having said that take a look at this video we're trying to build robots that learn a little bit like humans do by trial and error what we've done is train an algorithm to solve the Rubik's Cube one handed with a robotic hand which is actually pretty hard even for a human to do we don't tell it how the hand needs to move that the cube in order to get there the particular friction that's on the fingers how easy it is to turn the faces on the cube what the gravity what the weight of the cube is all of these things it needs to learn by itself awesome hopefully you got some high-level understanding from that video and now let's just dig into the paper it's a quite a long paper hopefully you'll at the end of this video I'll understand it much better okay let's start with this the two main components that this new paper advised because as a side note they previously published this paper called learning gestures in hand manipulation where the setup was pretty similar the only difference is that they are trying to just reorient this block that has these open AI letters on every single face of this cube and so the goal was much simpler you just need to flip the cube and to get from a certain source position into a certain target configuration and that's it it's still a complicated task but solving the Rubik's Cube is just a level up so yeah and the two main components they say are key to solve this problem the Rubik's Cube is this automatic domain randomization algorithm and a new robot platform built from machine learning and high level this automatic domain randomization so they in the previous paper and this one they already used domain randomization but the novelty is this in this automatic part so they are automatically learning how to create progressively harder and harder environments and by doing that it turns out that the policy once deployed on a real robotic hand treats the like the real world as just another simulation matrix confirmed and let's dig into the high level explanation so as you can see here the the setup looks something like this so they have bunch of these simulations and you can see every single hand corresponds to different simulation and everything is randomized from the appearance you can see different the color hands the physics like the gravity the cube size the cube mass the friction of the robotic hand bunch of stuff is randomized so once you have the data once you have those simulations what they do is they train two separate pipelines so one is for training the control policy and you can see it here so it gets as an input it gets the fingertip position so you get pretty much five 3d points and it also gets the state of the cube so that's the position the orientation and the six face angles as we'll soon see and once you have that state of both the hand and the cube they train this LSTM based 
policy network in order to output these actions and that will be just 20 times 11 basically 20 means 20 joints so the hand has 20 joints they can control and 11 is just discretized angles because they later show that that's much better and the place is much better once they do that instead of using continuous actions so those are just the relative angles so maybe move this specific joint by two degrees and that's it on the other hand the second pipeline takes these three RGB images as the input and it tries to regress the the cube pose the cube orientation as well as the face angles and we'll see all of the details a bit later once they combine all of that together we have this one this this chart and you can see here that the cube pose always comes from the visual system but because it was really tough actually to figure out the absolute face angles so you have the cube and you have to figure out which exact angle and you have from minus pi to pi the angle goes from minus pi to pi so it's really hard to regress it because all for the occlusions etc and they sometimes actually use the cube itself it's a smart cube that has sensors and we'll see that in a moment and it basically just outputs and gives to the policy these numbers which represent the the face angles so they either use vision or they use the cube itself in order to get the face angles so combine them with the fingertip positions that's the input observation to the policy and they basically train the LSTM based policy to output the correct actions in order to solve the cube so that's the high level explanation now let's start digging into details they say here the the learning requires vast amount of training data which is hard and expensive to acquire on a physical system I should have mentioned this a bit earlier but basically you have two options you can either train this thing on a robot but because we're trying to learn we know we'll need a lot of data and it's simply not practical to train the policy on a physical robot and in the previous paper the system was called Dactyl just FYI basically they showed that the hand had a lot of breakages so it's super impractical to train this even if you have a farm of robots and this thing this this hand is pretty expensive I can assume so it's really hard to have a setup to collect enough data to train the policy so they have to go to simulation but once you're in a simulation you need to advise a good way to bridge the domain gap the reality gap to in order to have the policy training in sim be good enough in the actual real world so that's just something to keep in mind first of all let me just briefly explain so the goals the first goal we saw was the block reorientation so that was in there in the previous paper they were solving this one and they're using it as a baseline in this one so the goal is considered solved if the orientation of the cube falls inside this 0.4 radians from the target orientation and that's something like 22 degrees more or less 22 23 degrees so it's a pretty big margin but yeah they just figure out that number works for them okay now you can see here visualized so this is the the block reorientation task you can see the target orientation here and this is the Rubik's Cube task and you can see the target is the solved cube in this case they found that one interesting thing to notice here is that it's really hard to rotate certain once you have a single hand it's really hard to rotate any face other than the top face and they say here we found rotating 
the top face to be far simpler than rotating other faces so they said it's really hard to rotate this one if you have only a single hand so it's much easier to rotate the top face and thus instead of rotating arbitrary faces we combine together a flip and the top face rotation in order to perform the desired operation that these sub goals can then be performed sequentially to eventually solve the Rubik's Cube so just using those two operations the flip and the face rotation the top face rotation they can solve the cube when it comes to solving the Rubik's Cube competing a solution sequence can easily be done with existing software libraries like a Siam Bo solver so that's what I mentioned in the beginning of the video they use off-the-shelf solver and in this work the key problem is thus in sensing and control not finding the solution sequence so keep that in mind okay briefly just let me just go through this physical setup so they have the cage they have these three cameras one two three three RGB cameras they have this system called phase space which they use in order to track the fingertip positions and you can see these small LEDs on the fingertips and that's what a what this phase space system motion motion capture system uses in order to figure out the exact 3d positions of those fingertips so then we have the hand and we have the like the cage that holds everything together so one thing I actually when I first saw this video a couple of years ago I thought that the hand was also developed by open AI I wasn't sure which part of the project they actually developed but actually the the hand itself is from some other company it's called the shadow robot company and it's a pretty marvelous piece of engineering itself the hand itself I think has 24 degrees of freedom so it's a really like high dimensional control problem so previously nobody could solve nobody could imagine solving like rubik's cube using like a hand as complex as this one but some other systems exist but they are not learned and they are not like a humanoid hand they are just like some kind of a machine that can solve the cube really fast but that's not what we are trying to accomplish here aside from the hand and the setup there is the cube component and I already mentioned it's there is smart cube involved and they say here we therefore replace some of the components of the original Geekr cube with custom ones in order to achieve a tracking accuracy of approximately five degrees so the original cube only can so we need as I said we need to to calculate these angles and we need some better resolution than just nine degrees so we need to have resolution that's and they they figure out the five degrees works so they had to create a custom cube and that's funny and you can see here and I just highlight the charging here because it's not a simple it's not like dummy let's go that way a cube it's a it's got like a microcontroller it's got a couple of components in order to figure out those angles and we'll see that in a moment okay a quick detour into how this exactly works is you have this thing in every single face of the cube and it's basically a variable resistor so here is just circuit on the left side and you can see this variable resistor here actually corresponds to this thing here and this is just a simple circuit you've got resistors we've got the positive power rail we've got the ground ie the zero potential in the circuit you have something called operational amplifier here and what this circuit does you can 
treat it as a black box and most people do basically just hold the same potentials on the plus and on the negative inputs and all in all you've got a current going through here and the signal here is basically our times I where I is the current and our is changing so the analog signal will actually be increasing as the resistance increases and the resistance increases once you're turning the the face particular face and so what I do is they have a microcontroller here inside of the cube so some microcontroller it's got something called analog to digital converter and basically this signal here is being fed into a DC and it can discretize the signal into some value so maybe let's say this range goes from from 0 to 5 volts when you change the when you rotate the cube from 0 to 360 degrees and you can just discretize that into maybe some of the numbers will be you'll just have some kind of a curve and this is the analog analog voltage and this will be the output from the converter and you can then map those numbers into angles and send them via Bluetooth to your policy and that's the part of the observation that the policy uses in order to control the joints and eventually solve the sub goals and eventually solve the cube so that was a quick detour into electronics it's funny to see a piece of electronics inside of enamel paper actually studied electronics it's pretty satisfying feeling to to see that you can actually leverage some of the knowledge to learn on the faculty yeah this will probably be the first and the last time I'll ever see it anyways let's continue and let me tell you one more thing before we start jumping into how the policy is trained and etc so you have the setup but you want to have the simulation to be as as closely to you want your simulation to resemble the real world as closely as possible so they showed here that they added additional modeling to the hand like these pulleys and tendons and doing that they improved the simulated hand and you can see on these charts basically they track one particular joint like maybe this one and they track the its angle and you can see the this curve the green one is you get these curves by taking a control sequence and sending that control sequence both to the robotic hand to the real robotic hand and to the simulation and you're just tracking the 3d the angle of that particular joint and you can see there is a discrepancy before the calibration before this modeling was done once they included that you can see that the curves align which means we have much better simulation and thus we'll have better control policy once we deploy it on a real robot and they show this is really important part of their of their paper okay so let me jump to the cube so it turns out modeling the cube is super hard there is so many details like the friction the size the mass the lots of forces inside the cube itself so aside from the forces that act inside of the cube there's this thing called a forgiveness factor so once you have the top face and I rotated maybe by a couple of degrees I can still take this perpendicular face and rotated and it will snap and it will work but like if you if you go too far now you can't you basically can't flip the the perpendicular face so that's the forgiveness factor so you have to count all of those tiny details inside your model in order to get a realistic representation so this here they performed a very rudimentary dynamics calibration of the parameters so like the like the size the friction etc which mojoco which 
is the engine they're using for rendering allows us to specify in order to roughly match a physical Rubik's Cube our goal was not to get an exact match but rather to have a plausible model as a starting point for domain randomization so again ADR will see that that plays a really important part in this work okay and here we are so and they say here so in this previous paper so that's the deck tile the the previous paper they published we were able to train a control policy and a vision model in simulation and then transfer both to a real robot through the use of domain randomization however this required a significant amount of manual tuning and a tight iteration loop between randomization design in simulation and validation on a robot in this section we describe how automatic domain randomization can be used to automate this process so it's a mouthful basically what it means is a following so in previous paper they for every single parameter that they want to randomize like maybe the cube size they had to manually find the range so they had to find maybe this is a 5.4 centimeters this may be 5.9 and once they said those basically margins you can sample anything in between but you can't sample outside that range so that means you have to for every single range you have to test whether that actually leads to a better policy when you deploy it into a robot because if you make this arbitrary big maybe this goes to 7.2 now the policy will actually be much worse because it's really hard for the policy to learn how to solve some of those cubes and then it leads to a worse policy which is not desirable so what the ADR basically does is it starts from a single value so they measure the realistic cube and that was 5.7 actually in this case and then they slowly expand to the left and to the right and they end up with some a bit bigger range and then a bit bigger range and so that's kind of an implicit curriculum learning and we'll see those details so that's the whole point so for every single randomization parameter like maybe the like the friction the cube mass the cube size even the gravity etc they gradually and automatically kind of increased the actual range for that parameter and that's the whole idea the high-level explanation of ADR they say it here the main hypothesis that motivates ADR is the training on a maximally diverse distribution of environments leads to transfer via emergent metal learning more concretely if the model has some form of memory and they use LSTMs and we'll see that in a moment it can learn to adjust its behavior during deployment to improve performance on the current environment over time I by implementing a learning algorithm internally so basically what happens is that you have a finite capacity policy and you've got pretty much infinite number of sampled environments so that policy can't can't memorize every single solution to every single environment and there is also not a single robust policy that can solve each of these really in a performant way so it has to use the memory it has in order to adapt to the given environment and improve its skills that way and they later show that actually that happens and the policy learns how to encode the environment parameters like cube size cube mass gravity etc inside the hidden state of an LSTM and that will be an important thing they'll want to show you okay so here is a high-level explanation and I already gave you here so they are basically just updating those ranges and they say here at its core ADR realizes a 
training curriculum that gradually expands the distribution of environments for which the model can perform well the initial distribution of environments is concentrated on a single environment for example in policy training the initial environment is based on calibration values measured from the physical robot so they measure for example the cube so it's 5.7 centimeters the gravity is like 9.81 whatever so they have these initial values and then they gradually expand them and I'll show you in a blog in a second how that looks like here is a really nice visualization on their blog you can see that the x-axis is the training the timeline basically and the y-axis is the the range I was mentioning so they start with 5.7 centimeters but but as the time progresses the ADR does its job and the cube eventually comes to this huge range so they actually sample some of the cubes will have 5.47 centimeters size and some will have 6.13 so that makes it much harder to solve the tasks but because they gradually went to that point the policy learns how to cope with those cube size ranges getting back to the paper let me explain the pseudocode for this algorithm but previous prior to that let me just explain this briefly these two equations so they've got this distribution here and the entropy so as what they did is for every single parameter that they want to randomize like like the friction and gravity they'll have a uniform distribution so for the cube size for example 5.5 I don't know like maybe 5.9 they'll just have those ranges and they'll have a uniform distribution here so and that they'll have the same thing for every single parameter so this is maybe like cube size blah blah blah and they'll have like friction etc so the final distribution will just be a product 1 to D where D is the number of those parameters and they'll just multiply all of these uniform distributions fi low means the the lower the lower part of the range so that's 5.5 for the cube and the high part means the high threshold and that's it and once you have this distribution you can find its entropy and for uniform distributions it's quite easy so the entropy is basically in general's case for a discrete entropy for discrete distribution you'll have a simple sum of probability of that event times the logarithm of 1 over the probability of the event and this part the log part is actually the self-information so it's sometimes denoted like I of X and this is how you calculate the entropy and basically because the uniform distribution always has the same values so maybe if we take a simple discrete distribution like this one let's say we have three possible events so all of them will have 1 over 3 probability and so you can see so the probability is 1 over 3 and then you have the log 1 over 1 over 3 so that's 3 just 3 and we have that 3 times so these will just cancel out and you end up with log 3 so in general case this is just the difference they denote here so in the continuous case it will just be your entropy will be the integral pi of X log 1 over pi of X DX and because for a particular uniform distribution this is constant you can just pull it out here and so we've got integral over DX and that will just be the size of your domain so maybe here it will be whatever the difference between these two are and that's why they have the difference here so this thing is just the entropy of a uniform distribution that was a long way to say that thing and then they just divide by the number of distributions by the number of randomized 
parameters and you get kind of a average entropy and so the the the more spread out these distributions are the higher this entropy will be because you can see as the as the range increases the entropy increases and so this is a measure of how actually demanding the environment the current environment is so that's just a simple way to do that okay here is the actual algorithm they start with some initial calibration values as I mentioned that may be 5.7 centimeters for the cube 9.81 for the gravity and they keep these buffers for every single one randomized parameter and they keep the lower and the higher so one for the lower range part of range and one for the higher part of the range and they keep these these are just thresholds for how performant the policy is given a specific environment and M just means so M is just a number of iterations we need in order to decide whether we want to expand or squish the range of that particular randomized parameter okay so we sample a particular vector from the current distribution and we pick random randomized parameters so one of those D parameters that we are using we pick one of them we then sample from X from the uniform distribution and with 50% chance will either pick the lower so we'll set lambda I to the lower extreme or with 50% chance will set lambda I to a higher extreme so what that means in that particular case with the Rubik's Cube that means that sometimes so basically once you sample this you'll sometimes take your sample 5.6 blah blah but like in this case if you pick the lower one you'll set that lambda I that corresponds to the Rubik Cube size to 5.5 or to 5.9 so that's this part and basically then you evaluate the performance of the policy on that specific environment so hopefully understand what happens here so you pick all of the parameters or all of those the randomized parameters you have randomly from their ranges but you pick one specific one and you intentionally pull it to some extreme either to the lower extreme or to the higher extreme and you evaluate the performance you save the performance inside of the buffer and you keep doing that until you see this threshold M so that may be 128 times once you have enough samples you're you're confident that your estimation will be good and you just take the average performance you just this is just clearing the buffer and if the average performance is good enough whatever this TH is depending on whether you're training vision vision network or the policy network so if you're good enough that means that the network that was that is being trained doesn't have any difficulty solving this particular environment so we can expand the range and next time the environment will be even harder on the other hand on the on the flip side basically if the performance is really poor that means that pulling this particular parameter to its extreme is really hard and we want to actually squish this this uniform distribution for that particular randomized parameter so hopefully this was clear enough let me know in the comments because this is arguably the most important algorithmic innovation they had in this paper other than that there was a lot of engineering and nothing less important but yeah okay let's continue here is just a high level overview of how the vision pipeline and how the policy pipelines are trained you have the model parameters you have the current PHYs so those are the the the ranges we've just mentioned for every single randomized parameter and you you basically sometimes 
sample those you generate the data and you pull push the data into this buffer but we sometimes just do the ADR so basically snap one of those parameters to its extreme and we sometimes update this PHY to make it more challenging or to reduce its difficulty and that's the two things are happening parallel we are constantly improving the the difficulty of environment and we are constantly generating data and training the model and updating the model parameters so yeah that's how it works and you can see one one difference between how they train the vision network compared to the policy is that they here they use both the data that is evaluated as well as the actual data pulled from the current parameters and they do that because data is really expensive and they need all data I can have for that particular policy the problem with with this thing is that sometimes you'll be pulling data that's too hard because this is still remember this is still under evaluation we still don't know whether snapping that parameter to its extreme is too hard or not and then yeah there's just the trade-off but they figure out it's good enough for their approach okay let's now see what they randomize in their simulations and already mentioned like the physics like geometry friction gravity etc the interesting part is this one the adversarial what they do here is the following so they have their training the policy network that's the P and it's outputting those 20 times 11 categorical distributions that correspond to angles for particular joints and on the other hand they have this network the adversary called are which just basically outputs random actions and those get added to these ones and the policy needs to learn how to complete the sub goal it's given and disregard the randomness introduced by the adversary and they tried two things actually try to intentionally to develop a true adversary which will try and perform the worst actions in order to kind of confuse the policy but they realize that just using a random network is actually even better because it brings more diversity and later the policy can better generalize once deployed on a real robot so why do they do this and the simple answer is the following basically once you have the real hand even though the policy says to a specific joint move by maybe two degrees the underlying controllers like the PID controller will add some noise so it will maybe execute 2.1 degrees instead of 2 degrees and then the policy will still have to learn how to deal with that so and that's why they introduced this kind of modeling into the pipeline they also add noise to the observations basically fingertip positions are kind of added like they had a Gaussian noise to to the positions etc so everything you can imagine everything that's in this pipeline they try and randomize finally the vision like the lighting conditions was the lux level the camera positions so remember they have this cage their cage and they figure out the positions and use those as the calibration as the initial as initial positions and then in the simulation they'll just have some volume that will be expanding and they'll be sampling camera positions from that volume and that will introduce randomness they'll also experiment with the field of view with bunch of different parameters and yeah that's pretty much it and I mentioned here so some of these are fixed in a specific simulation so they'll just pick the cube size and the mass and they'll freeze that but other parameters like the observation 
I mentioned that some of these parameters are fixed within a specific simulated episode, so they'll pick the cube size and the mass and freeze them, but other parameters, like the observation noise and the adversarial perturbations I just described, change with every single step in the episode, so keep that in mind. Finally, for training the policy they use PPO, proximal policy optimization, an actor-critic method OpenAI devised a couple of years ago. It's a fairly standard RL algorithm, so I won't dig into it in much depth: there is a value network and a policy network, the value network tries to predict the expected return, the policy network tries to output the optimal actions, and together they eventually learn a really good policy. Let me just briefly focus on the rewards and actions. I already explained how the actions work: for every single joint, and there are 20 of them, the policy outputs a categorical distribution over discretized angles; the x-axis is the angle, and whichever bin has the highest probability, say five degrees, is the rotation applied to that joint. Those are the actions. For the rewards, there are three components, with some reward shaping as you can see. The first component is the non-sparse one: in the case of cube flipping, where the goal is to reach a target orientation, the closer you get to that final orientation, the more reward you collect at each step, so decreasing the distance between the current and target orientations yields positive reward. Orientations are represented with quaternions, which are an extension of the complex number system into four dimensions. It sounds complicated; I recently created a LinkedIn post, linked in the description, with the best resources for learning about quaternions. At a high level you can treat them as a black box: a four-tuple of numbers that captures the orientation of an object, used all over the place in computer graphics and robotics. The second component is a reward of 5 whenever the goal is achieved, a sparse reward as you'd assume, and the third is a penalty of -20 when the cube or block is dropped. That's the reward system. Here is the actual architecture: you can see the value network and the policy network, which are the two parts of the PPO setup, the policy outputting the 20 x 11 distributions and the value head outputting a single scalar. One interesting thing is that the policy network takes only the noisy observations, whereas the value network takes both the noise-free observations, which are only available in simulation, and the noisy ones. The reason they can feed it data that isn't available on the real robot is that the value network isn't used at deployment at all, only the policy network is; this is the asymmetric actor-critic approach they devised in some of their previous papers. Here are some of the inputs: cube orientation quaternions, fingertip positions, and so on. Notice that only the noisy versions go into the policy network; you can see the check marks for the noisy fingertip positions and for the cube position and orientation.
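As a reference for the three-part reward described a moment ago, here is a minimal sketch. The quaternion distance function and the exact shaping are my own simplification; only the structure, a dense orientation-distance term, a sparse +5 success bonus, and a -20 drop penalty, comes from the paper.

```python
import numpy as np

def quat_distance(q1, q2):
    """Rotation angle (radians) between two unit quaternions."""
    dot = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def step_reward(prev_quat, cur_quat, target_quat, goal_reached, cube_dropped,
                success_bonus=5.0, drop_penalty=-20.0):
    # Dense shaping term: positive when this step moved the cube closer to the target.
    dense = quat_distance(prev_quat, target_quat) - quat_distance(cur_quat, target_quat)
    reward = dense
    if goal_reached:      # sparse +5 whenever the sub-goal is achieved
        reward += success_bonus
    if cube_dropped:      # -20 whenever the cube or block falls out of the hand
        reward += drop_penalty
    return reward
```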
A couple of interesting things to notice, and one that enabled them to experiment a lot more is this embed-and-add approach. You can see that the observations are first normalized and then embedded into a constant 512-dimensional latent space; the embeddings are simply summed up and a ReLU is applied. What that enabled them to do is experiment with different observations while keeping the experience accumulated over many months intact. They say: more concretely, we first embed each type of observation separately, without any weight sharing, into a latent space of dimensionality 512; we then combine all inputs by adding the latent representation of each and applying a ReLU non-linearity after. The main motivation behind this change was to easily add new observations to an existing policy and to share embeddings between the value and policy networks for inputs that feed into both. So, for example, if they have fingertip positions without noise and fingertip positions with noise, those embeddings get shared in the cases where the input feeds into both networks; otherwise the weights are not shared.
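Here is a small sketch of that embed-and-add block, closely following the quoted description. The observation names and sizes in the example dictionary are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class EmbedAndAdd(nn.Module):
    """Sketch of the embed-and-add input block: each (already normalized) observation
    type gets its own 512-d linear embedding with no weight sharing; the embeddings
    are summed and a ReLU is applied. The observation names and sizes are examples."""

    def __init__(self, obs_dims, latent=512):
        super().__init__()
        # e.g. obs_dims = {"fingertip_pos": 15, "cube_quat": 4, "face_angles": 6}
        self.embed = nn.ModuleDict({name: nn.Linear(dim, latent)
                                    for name, dim in obs_dims.items()})

    def forward(self, obs):
        # obs: dict mapping each observation name to a (batch, dim) tensor.
        summed = sum(self.embed[name](x) for name, x in obs.items())
        return torch.relu(summed)

# Adding a new observation to an existing policy only requires registering one more
# embedding layer here; the rest of the network (and its weights) stays untouched.
```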
Okay, let's continue; this part is super interesting and mind-blowing. They say: we have been training the Rubik's Cube policy continuously for several months at this scale, while concurrently improving the simulation fidelity and the ADR algorithm, tuning hyperparameters, and changing the network architecture. The cumulative amount of experience used for training on the Rubik's Cube over that period is roughly 13,000 years, which is on the same order of magnitude as the 40,000 years used to train OpenAI Five, the bots they trained for Dota 2. What I want you to keep in mind is that a single simulation step is 80 milliseconds, so if you divide the total number of milliseconds of experience by 80 you get the number of simulation steps taken since they started, and it's a huge number. Experimenting is really important, and obviously they don't want to ditch the experience they've accumulated, so they do one more thing. They say: we therefore rarely train experiments from scratch, because it's expensive, but instead update existing experiments and initialize from previous checkpoints, for both the ADR state, i.e. the boundaries of each randomized parameter's range, and the policy parameters. And in order not to lose experience when they change the architecture, they implemented behavior cloning: if you're familiar with imitation learning, this is one of the basic approaches there, and they implemented a variant of the DAgger algorithm. The reason is that it lets them iterate on different policy architectures, not only the observations, and still keep the experience pretty much intact. How it works: you have a student network and a teacher network. The teacher is the best network they currently have, with the old architecture; the student is the new network with the new architecture, and we want to transfer the teacher's knowledge and experience to it. The observation goes in, actions come out, and those actions are used to act in the environment; additionally, both networks also have value outputs, and we apply an MSE (mean squared error) loss between the two value predictions and a KL divergence between the two action distributions. That makes sure the student eventually learns to output the same value function and the same actions in particular states as the teacher network, and that's what behavior cloning means in this case. They say this worked surprisingly well, allowing them to iterate on the policy architecture quickly without losing accumulated training progress; the cloning approach works with arbitrary policy architecture changes as long as the action space remains unchanged. So the only thing that needs to stay constant is the action space itself, the 20 x 11 discretization; everything else, the observations and the architecture, can change. I already mentioned that estimating the face angles is super hard, so what they did is add those cutouts to the cube so that estimating the angle becomes a bit easier. They say: while predicting position and orientation directly works well, we found predicting all six face angles directly to be much more challenging due to occlusion, even when using a cube with asymmetric center stickers. To work around this, we decompose face angle prediction into several distinct predictions. For the sake of brevity I'll skip the details, but they estimate something called the active axis, the active face angles, and the top face angle, and then do some post-processing to recover the actual face angles from those proxies. That's the vision part; you can see the network here: position, orientation (a quaternion), and those three face-angle outputs, which combined with post-processing indirectly give them the face angle estimates. One thing you'll notice is that there are many loss components here, and with a weighted loss you usually have to manually specify those weights. A way around that is something called the focal loss, introduced in an earlier paper. The main idea is the following: say you're doing classification, the true label is 1, and you're using cross-entropy loss; if you output 0.9, the loss is already pretty small. What the focal loss does is weight the cross-entropy term, -log p, by a factor that shrinks as p grows: the bigger p is, i.e. the closer we are to the correct answer, the smaller the weight on that loss term. Roughly, if we're already performing really well on a component, its weight is dropped, because we're already good there, and if we're struggling on a component, its relative weight increases. Then they do ablation studies and show that the full model, meaning with the focal loss, the domain randomization, and so on, achieves the best results: on a real cube the orientation error is seven point something degrees, and the other values are much better than without randomization or without the focal loss. So you can see the order of magnitude of these errors; it's not small, but it's good enough.
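For reference, here is the standard focal loss formulation from the paper that introduced it, which is what removes the need to hand-tune per-component weights as described above; the gamma value is the usual default, not necessarily what OpenAI used.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard focal loss: cross-entropy scaled by (1 - p_correct) ** gamma, so
    predictions that are already confident and correct contribute almost nothing."""
    log_p = F.log_softmax(logits, dim=-1)
    log_p_correct = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_correct = log_p_correct.exp()
    return ((1.0 - p_correct) ** gamma * -log_p_correct).mean()

# Example: logits of shape (batch, num_classes) and integer class targets of shape (batch,).
# loss = focal_loss(torch.randn(8, 11), torch.randint(0, 11, (8,)))
```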
Let's continue and jump into the results. First, there are results on sim-to-sim transfer performance: the policy was trained in the ADR setting, and the test set also comes from simulation but with manually specified ranges, so that's the holdout data. They show that as the randomization level increases, measured by the entropy I mentioned previously, which is a good proxy for how wide the ranges are, the number of successes on that holdout set gets better. That's one thing. The second thing they show is the performance on the real robot: the more randomization they add, i.e. the higher the entropy, the better the results get. If we take the XXL policy, which was trained for a very long time (they didn't even put the numbers in, so you can assume a couple of months), the mean and median number of successes on the block reorientation problem increase, and the policy is pretty good on the real robot as well. Okay, let's continue and see what else is there. This next result demonstrates that curriculum learning is really important: the blue policy is trained in the ADR setting, whereas all of the other curves use manually set domain randomization ranges. With ADR you start from a single value and then slowly expand the range, again and again, for something like the cube size; the other curves are produced by fixing a range from the start, and the worst one corresponds to the biggest range. If you start training the network with a huge range straight away, it underperforms, because it never had the luxury of ramping up gradually; it was just thrown into the deep end, so to speak, and it was really hard for that policy to cope. So this is basically an argument for curriculum learning. They also show, on the block reorientation problem, that as the ADR ranges and thus the entropy increase, the vision pipeline's errors for estimating orientation and position get better and better, and similarly for the Rubik's Cube the orientation, position, and top face angle estimates all improve as we go to higher-entropy ADR, which makes sense. Now for the final result on the Rubik's Cube: they took a maximally hard scramble. If you know some Rubik's Cube theory, the hardest state you can be in is 26 quarter rotations away from the completed cube state, where a quarter rotation is a 90-degree turn of a face, and it can be shown that there are only three states exactly 26 quarter rotations away from the solved state. They took one of those hard states and used it to test the policy. On that full scramble you need 43 successes; remember, the policy only uses top-face rotations and flips as sub-goals, so 43 sub-goals have to be completed in a row to solve that particular scramble.
They show that with the XXL ADR setting, i.e. highly randomized, and using the Giiker cube for the face angles instead of vision (remember, those are hard to estimate visually), they get a 60% success rate on the half scramble and a 20% success rate on the full scramble once deployed on the real robot. That's pretty amazing; still not perfect, but pretty awesome. Let me continue and show you some more cool results. They put a rubber glove on the hand, which effectively changes the friction; they tie fingers together; they throw a blanket over the cube, which basically renders the vision pipeline useless because it can't figure out the position and orientation anymore; and they do some funny perturbations with a plush giraffe and pencils, just applying force to the cube and the hand. The policy copes with all of those perturbations and still solves the problem, although with some degradation, as they show. Now, last but not least, let me show you the meta-learning that emerges. When they say meta-learning they don't mean it in the classical sense of learning to solve multiple tasks; the policy is still only solving the Rubik's Cube or reorienting the block. What they mean is that it learns to estimate the transition probability function, which is also called online system identification in other communities: it implicitly learns to estimate parameters such as friction and gravity. Here are the results. They run the policy in simulation on the flipping problem, and, remembering that there is an LSTM inside the policy, they show that if they force the hidden state to zero at particular moments, say step 10 and step 30, the time to complete the next flip increases; then, once the policy has had enough time to re-encode the environment parameters, it becomes fast again. At the next perturbation it slows down again and then adapts again, so until the hidden state is rebuilt it's a bit slower, after which it returns to the baseline level. The second thing they show is resampling the environment dynamics: they let the policy run in a specific environment, with a specific cube size, gravity, and so on, and then resample everything, new gravity, new cube size, new cube mass. Again the policy slows down, then re-learns the dynamics, encodes them in the LSTM hidden and cell states, and becomes fast again. The third thing is breaking a random joint: at some point in time they simply disable one of the joints, so whatever action is sent to it, it won't move because it's dead. They show that after that happens the policy still learns how to solve the problem, just a bit more slowly. The green line is a policy that had the same joint broken from the very start, and that one is actually a bit worse. They speculate: we hypothesize that this could be because the policy has already locked in some information in its recurrent state and is therefore not as adjustable anymore; alternatively, maybe it just has not accumulated enough information yet, and the baseline policy is in the lead because it has an information advantage of at least 10 achieved flips. I doubt that's the case, because the offset stays roughly constant even after 20 more steps, so that would be interesting to investigate further.
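Here is a sketch of how such a hidden-state perturbation experiment could be run. Both `policy` and `env` are hypothetical interfaces invented for illustration, not the paper's code; the point is only that wiping the recurrent state mid-episode should temporarily slow the policy down.

```python
def rollout_with_state_resets(policy, env, n_steps=60, reset_steps=(10, 30)):
    """Run the recurrent policy, force its LSTM state back to zeros at chosen steps,
    and record how long each flip takes afterwards."""
    obs = env.reset()
    state = None                      # LSTM (hidden, cell) state; None means all zeros
    flip_times = []
    for t in range(n_steps):
        if t in reset_steps:
            state = None              # wipe whatever the policy has inferred so far
        action, state = policy(obs, state)
        obs, info = env.step(action)
        if info.get("flip_done"):
            flip_times.append(info["time_for_flip"])
    return flip_times                 # expect a spike right after each reset step
```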
Still, the high-level picture is clear: the policy is learning how to encode environment parameters inside the LSTM's hidden state, which is pretty awesome. A couple more things. They say they found it very interesting that the policy can learn to adapt internally to broken joints; this is in contrast to prior work that explicitly searched over a set of policies until it found one that works for the broken robot. I'm not sure which of the randomizations in the ADR setting helps it learn this; I'm pretty confident they didn't simulate broken joints during training, so it's remarkable that this works at all. The final thing I want to show you is this: they train a predictor on top of the LSTM's hidden states to predict whether a specific parameter in a particular environment is higher or lower than its average. To make that less abstract, it's trying to predict whether, say, the gravity or the cube mass is higher than the average over all of the environments it was trained on, or lower. They just use a cross-entropy loss: the predictor should output 1 when it's certain the parameter is higher than the average and 0 otherwise. For the cube size, for example, you can see the range and its average, and similarly for the Rubik's Cube problem. The predictor itself is simple: they take the LSTM's hidden state, pass it through, I believe, a single fully connected layer, and apply a sigmoid at the output. Once it's trained on a bunch of environments with the true labels known, they evaluate it and show that, for example with the block reorientation policy, certain parameters such as the cube size can be predicted as higher than average with about 85% accuracy, which means there is enough information inside the LSTM state to do this. They also notice that the block reorientation policy stores more information about the cube mass than the Rubik's Cube policy: the cube mass, the purple curve, reaches around 60-something percent accuracy on the block reorientation task, whereas for the Rubik's Cube it's only a bit above 50%, which is close to random chance. They say: we hypothesize that this is because the block reorientation policy uses a dynamic approach that tosses the block around to flip it; in contrast, the Rubik's Cube policy flips the cube much more deliberately in order to avoid unintentional misalignment of the cube faces. In other words, when solving the Rubik's Cube it's less important to know the exact cube mass, so the LSTM simply doesn't encode that information, whereas for block reorientation the flipping is much less careful, there's no fear that a face will misalign, you can toss the block with much more force, so the policy encodes the mass a bit more strongly than the Rubik's Cube policy does. That's a super fascinating finding. They say this is evidence that the policy is successfully inferring and storing useful information about the environment parameters in the LSTM hidden and cell states; note that the policy was never explicitly trained to store information about these semantically meaningful physical parameters.
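A sketch of such a probe is tiny, which is part of why the result is convincing: a single linear layer plus a sigmoid trained with binary cross-entropy on labels known from the simulator. The hidden size below is an assumption, not necessarily the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HiddenStateProbe(nn.Module):
    """Reads the frozen policy's LSTM hidden state and predicts whether one
    environment parameter (e.g. cube size) is above (1) or below (0) its average."""

    def __init__(self, hidden_dim=1024):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, lstm_hidden):
        return torch.sigmoid(self.fc(lstm_hidden)).squeeze(-1)

# Training loop core (labels come from the simulator's known parameters):
# probe = HiddenStateProbe()
# loss = F.binary_cross_entropy(probe(hidden_batch), labels.float())
# Accuracy well above 50% on held-out environments means the information is there.
```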
That's really super awesome. That's it! Hopefully you liked the video; if you did, you know the drill: like, subscribe, share, and I hope to see you next time.
[{"start": 0.0, "end": 5.5, "text": " What's up? In this video I'm gonna cover this paper called solving Rubik's Cube"}, {"start": 5.5, "end": 11.28, "text": " with a robot hand by the pretty much complete OpenAI team as you can see"}, {"start": 11.28, "end": 17.14, "text": " here and the goal here is to solve the Rubik's Cube using a robotic hand"}, {"start": 17.14, "end": 23.52, "text": " obviously and but the thing I want you to understand is that the point is not"}, {"start": 23.52, "end": 27.12, "text": " to actually figure out how to solve the cube they just use some off-the-shelf"}, {"start": 27.12, "end": 33.64, "text": " solver for that the whole the difficulty in the project lies in sensing the"}, {"start": 33.64, "end": 39.08, "text": " state of the hand the state of the cube and advising a control policy I"}, {"start": 39.08, "end": 43.64, "text": " controlling those joints on the hand in order to solve the cube so it's a"}, {"start": 43.64, "end": 47.72, "text": " control and sensing problem not the actual how to solve the cube problem and"}, {"start": 47.72, "end": 51.760000000000005, "text": " having said that take a look at this video"}, {"start": 51.76, "end": 58.559999999999995, "text": " we're trying to build robots that learn a little bit like humans do by trial"}, {"start": 58.559999999999995, "end": 65.48, "text": " and error what we've done is train an algorithm to solve the Rubik's Cube one"}, {"start": 65.48, "end": 71.32, "text": " handed with a robotic hand which is actually pretty hard even for a human"}, {"start": 71.32, "end": 77.68, "text": " to do we don't tell it how the hand needs to move that the cube in order to"}, {"start": 77.68, "end": 83.04, "text": " get there the particular friction that's on the fingers how easy it is to turn"}, {"start": 83.04, "end": 87.44000000000001, "text": " the faces on the cube what the gravity what the weight of the cube is all of"}, {"start": 87.44000000000001, "end": 91.76, "text": " these things it needs to learn by itself"}, {"start": 92.56, "end": 96.80000000000001, "text": " awesome hopefully you got some high-level understanding from that video"}, {"start": 96.80000000000001, "end": 100.48, "text": " and now let's just dig into the paper it's a quite a long paper hopefully"}, {"start": 100.48, "end": 105.68, "text": " you'll at the end of this video I'll understand it much better okay let's"}, {"start": 105.68, "end": 111.60000000000001, "text": " start with this the two main components that this new paper advised because as a"}, {"start": 111.60000000000001, "end": 116.16000000000001, "text": " side note they previously published this paper called learning gestures in hand"}, {"start": 116.16000000000001, "end": 120.88000000000001, "text": " manipulation where the setup was pretty similar the only difference is that they"}, {"start": 120.88000000000001, "end": 127.72, "text": " are trying to just reorient this block that has these open AI letters on every"}, {"start": 127.72, "end": 132.16, "text": " single face of this cube and so the goal was much simpler you just need to flip"}, {"start": 132.16, "end": 137.32, "text": " the cube and to get from a certain source position into a certain target"}, {"start": 137.32, "end": 143.16, "text": " configuration and that's it it's still a complicated task but solving the Rubik's"}, {"start": 143.16, "end": 149.6, "text": " Cube is just a level up so yeah and the two main components they say are key to"}, {"start": 149.6, "end": 154.72, "text": " solve this problem the 
Rubik's Cube is this automatic domain randomization"}, {"start": 154.72, "end": 161.04, "text": " algorithm and a new robot platform built from machine learning and high level"}, {"start": 161.04, "end": 166.2, "text": " this automatic domain randomization so they in the previous paper and this one"}, {"start": 166.2, "end": 170.84, "text": " they already used domain randomization but the novelty is this in this"}, {"start": 170.84, "end": 175.12, "text": " automatic part so they are automatically learning how to create progressively"}, {"start": 175.12, "end": 180.79999999999998, "text": " harder and harder environments and by doing that it turns out that the policy"}, {"start": 180.79999999999998, "end": 186.72, "text": " once deployed on a real robotic hand treats the like the real world as just"}, {"start": 186.72, "end": 193.32, "text": " another simulation matrix confirmed and let's dig into the high level"}, {"start": 193.32, "end": 200.52, "text": " explanation so as you can see here the the setup looks something like this so"}, {"start": 200.52, "end": 204.6, "text": " they have bunch of these simulations and you can see every single hand"}, {"start": 204.6, "end": 208.32, "text": " corresponds to different simulation and everything is randomized from the"}, {"start": 208.32, "end": 212.48, "text": " appearance you can see different the color hands the physics like the"}, {"start": 212.48, "end": 217.44, "text": " gravity the cube size the cube mass the friction of the robotic hand bunch of"}, {"start": 217.44, "end": 222.0, "text": " stuff is randomized so once you have the data once you have those simulations"}, {"start": 222.0, "end": 227.04, "text": " what they do is they train two separate pipelines so one is for training the"}, {"start": 227.04, "end": 232.07999999999998, "text": " control policy and you can see it here so it gets as an input it gets the"}, {"start": 232.07999999999998, "end": 237.2, "text": " fingertip position so you get pretty much five 3d points and it also gets the"}, {"start": 237.2, "end": 241.39999999999998, "text": " state of the cube so that's the position the orientation and the six face angles"}, {"start": 241.4, "end": 248.64000000000001, "text": " as we'll soon see and once you have that state of both the hand and the cube"}, {"start": 248.64000000000001, "end": 254.44, "text": " they train this LSTM based policy network in order to output these actions"}, {"start": 254.44, "end": 263.16, "text": " and that will be just 20 times 11 basically 20 means 20 joints so the hand"}, {"start": 263.16, "end": 268.02, "text": " has 20 joints they can control and 11 is just discretized angles because they"}, {"start": 268.02, "end": 272.64, "text": " later show that that's much better and the place is much better once they do"}, {"start": 272.64, "end": 276.84, "text": " that instead of using continuous actions so those are just the relative angles so"}, {"start": 276.84, "end": 282.91999999999996, "text": " maybe move this specific joint by two degrees and that's it on the other hand"}, {"start": 282.91999999999996, "end": 290.4, "text": " the second pipeline takes these three RGB images as the input and it tries to"}, {"start": 290.4, "end": 295.15999999999997, "text": " regress the the cube pose the cube orientation as well as the face angles"}, {"start": 295.16, "end": 299.8, "text": " and we'll see all of the details a bit later once they combine all of that"}, {"start": 299.8, "end": 305.76000000000005, "text": " together we have this one this this 
chart and you can see here that the cube"}, {"start": 305.76000000000005, "end": 310.72, "text": " pose always comes from the visual system but because it was really tough actually"}, {"start": 310.72, "end": 315.58000000000004, "text": " to figure out the absolute face angles so you have the cube and you have to"}, {"start": 315.58000000000004, "end": 321.04, "text": " figure out which exact angle and you have from minus pi to pi the angle goes"}, {"start": 321.04, "end": 325.72, "text": " from minus pi to pi so it's really hard to regress it because all for the"}, {"start": 325.72, "end": 330.12, "text": " occlusions etc and they sometimes actually use the cube itself it's a"}, {"start": 330.12, "end": 333.8, "text": " smart cube that has sensors and we'll see that in a moment and it basically"}, {"start": 333.8, "end": 339.6, "text": " just outputs and gives to the policy these numbers which represent the the"}, {"start": 339.6, "end": 345.0, "text": " face angles so they either use vision or they use the cube itself in order to get"}, {"start": 345.0, "end": 349.16, "text": " the face angles so combine them with the fingertip positions that's the input"}, {"start": 349.16, "end": 353.32000000000005, "text": " observation to the policy and they basically train the LSTM based policy to"}, {"start": 353.32000000000005, "end": 356.68, "text": " output the correct actions in order to solve the cube so that's the high level"}, {"start": 356.68, "end": 361.46000000000004, "text": " explanation now let's start digging into details they say here the the learning"}, {"start": 361.46000000000004, "end": 364.96000000000004, "text": " requires vast amount of training data which is hard and expensive to acquire"}, {"start": 364.96000000000004, "end": 368.0, "text": " on a physical system I should have mentioned this a bit earlier but"}, {"start": 368.0, "end": 372.20000000000005, "text": " basically you have two options you can either train this thing on a robot but"}, {"start": 372.20000000000005, "end": 376.72, "text": " because we're trying to learn we know we'll need a lot of data and it's simply"}, {"start": 376.72, "end": 381.08000000000004, "text": " not practical to train the policy on a physical robot and in the previous"}, {"start": 381.08000000000004, "end": 387.64000000000004, "text": " paper the system was called Dactyl just FYI basically they showed that the hand"}, {"start": 387.64000000000004, "end": 392.64000000000004, "text": " had a lot of breakages so it's super impractical to train this even if you"}, {"start": 392.64000000000004, "end": 396.6, "text": " have a farm of robots and this thing this this hand is pretty expensive I can"}, {"start": 396.6, "end": 400.8, "text": " assume so it's really hard to have a setup to collect enough data to train"}, {"start": 400.8, "end": 403.84000000000003, "text": " the policy so they have to go to simulation but once you're in a"}, {"start": 403.84, "end": 409.0, "text": " simulation you need to advise a good way to bridge the domain gap the reality"}, {"start": 409.0, "end": 414.67999999999995, "text": " gap to in order to have the policy training in sim be good enough in the"}, {"start": 414.67999999999995, "end": 420.88, "text": " actual real world so that's just something to keep in mind first of all"}, {"start": 420.88, "end": 425.12, "text": " let me just briefly explain so the goals the first goal we saw was the block"}, {"start": 425.12, "end": 429.35999999999996, "text": " reorientation so that was in there in the previous paper 
they were solving"}, {"start": 429.35999999999996, "end": 433.53999999999996, "text": " this one and they're using it as a baseline in this one so the goal is"}, {"start": 433.54, "end": 439.56, "text": " considered solved if the orientation of the cube falls inside this 0.4 radians"}, {"start": 439.56, "end": 445.68, "text": " from the target orientation and that's something like 22 degrees more or less"}, {"start": 445.68, "end": 451.88, "text": " 22 23 degrees so it's a pretty big margin but yeah they just figure out"}, {"start": 451.88, "end": 456.44, "text": " that number works for them okay now you can see here visualized so this is the"}, {"start": 456.44, "end": 461.14000000000004, "text": " the block reorientation task you can see the target orientation here and this is"}, {"start": 461.14, "end": 465.4, "text": " the Rubik's Cube task and you can see the target is the solved cube in this"}, {"start": 465.4, "end": 473.12, "text": " case they found that one interesting thing to notice here is that it's really"}, {"start": 473.12, "end": 478.12, "text": " hard to rotate certain once you have a single hand it's really hard to rotate"}, {"start": 478.12, "end": 484.88, "text": " any face other than the top face and they say here we found rotating the top"}, {"start": 484.88, "end": 488.62, "text": " face to be far simpler than rotating other faces so they said it's really"}, {"start": 488.62, "end": 493.04, "text": " hard to rotate this one if you have only a single hand so it's much easier to"}, {"start": 493.04, "end": 498.32, "text": " rotate the top face and thus instead of rotating arbitrary faces we combine"}, {"start": 498.32, "end": 503.12, "text": " together a flip and the top face rotation in order to perform the desired"}, {"start": 503.12, "end": 507.92, "text": " operation that these sub goals can then be performed sequentially to eventually"}, {"start": 507.92, "end": 511.96, "text": " solve the Rubik's Cube so just using those two operations the flip and the"}, {"start": 511.96, "end": 515.6800000000001, "text": " face rotation the top face rotation they can solve the cube when it comes to"}, {"start": 515.68, "end": 518.68, "text": " solving the Rubik's Cube competing a solution sequence can easily be done"}, {"start": 518.68, "end": 522.68, "text": " with existing software libraries like a Siam Bo solver so that's what I mentioned"}, {"start": 522.68, "end": 527.2399999999999, "text": " in the beginning of the video they use off-the-shelf solver and in this work"}, {"start": 527.2399999999999, "end": 531.1999999999999, "text": " the key problem is thus in sensing and control not finding the solution"}, {"start": 531.1999999999999, "end": 537.1999999999999, "text": " sequence so keep that in mind okay briefly just let me just go through this"}, {"start": 537.1999999999999, "end": 542.0799999999999, "text": " physical setup so they have the cage they have these three cameras one two"}, {"start": 542.08, "end": 547.2800000000001, "text": " three three RGB cameras they have this system called phase space which they use"}, {"start": 547.2800000000001, "end": 551.72, "text": " in order to track the fingertip positions and you can see these small"}, {"start": 551.72, "end": 557.8000000000001, "text": " LEDs on the fingertips and that's what a what this phase space system motion"}, {"start": 557.8000000000001, "end": 561.72, "text": " motion capture system uses in order to figure out the exact 3d positions of"}, {"start": 561.72, "end": 567.6400000000001, "text": " those 
fingertips so then we have the hand and we have the like the cage that"}, {"start": 567.64, "end": 573.4, "text": " holds everything together so one thing I actually when I first saw this video a"}, {"start": 573.4, "end": 576.64, "text": " couple of years ago I thought that the hand was also developed by open AI I"}, {"start": 576.64, "end": 581.48, "text": " wasn't sure which part of the project they actually developed but actually the"}, {"start": 581.48, "end": 585.12, "text": " the hand itself is from some other company it's called the shadow robot"}, {"start": 585.12, "end": 590.88, "text": " company and it's a pretty marvelous piece of engineering itself the hand itself I"}, {"start": 590.88, "end": 595.06, "text": " think has 24 degrees of freedom so it's a really like high dimensional control"}, {"start": 595.06, "end": 601.1999999999999, "text": " problem so previously nobody could solve nobody could imagine solving like rubik's"}, {"start": 601.1999999999999, "end": 607.64, "text": " cube using like a hand as complex as this one but some other systems exist"}, {"start": 607.64, "end": 612.0, "text": " but they are not learned and they are not like a humanoid hand they are just"}, {"start": 612.0, "end": 614.8, "text": " like some kind of a machine that can solve the cube really fast but that's"}, {"start": 614.8, "end": 619.2399999999999, "text": " not what we are trying to accomplish here aside from the hand and the setup"}, {"start": 619.2399999999999, "end": 623.88, "text": " there is the cube component and I already mentioned it's there is smart"}, {"start": 623.88, "end": 627.68, "text": " cube involved and they say here we therefore replace some of the components"}, {"start": 627.68, "end": 631.24, "text": " of the original Geekr cube with custom ones in order to achieve a tracking"}, {"start": 631.24, "end": 634.96, "text": " accuracy of approximately five degrees so the original cube only can so we"}, {"start": 634.96, "end": 641.04, "text": " need as I said we need to to calculate these angles and we need some better"}, {"start": 641.04, "end": 645.56, "text": " resolution than just nine degrees so we need to have resolution that's and they"}, {"start": 645.56, "end": 649.64, "text": " they figure out the five degrees works so they had to create a custom cube and"}, {"start": 649.64, "end": 652.92, "text": " that's funny and you can see here and I just highlight the charging here because"}, {"start": 652.92, "end": 658.16, "text": " it's not a simple it's not like dummy let's go that way a cube it's a it's"}, {"start": 658.16, "end": 663.7199999999999, "text": " got like a microcontroller it's got a couple of components in order to figure"}, {"start": 663.7199999999999, "end": 669.0, "text": " out those angles and we'll see that in a moment okay a quick detour into how this"}, {"start": 669.0, "end": 675.36, "text": " exactly works is you have this thing in every single face of the cube and it's"}, {"start": 675.36, "end": 680.04, "text": " basically a variable resistor so here is just circuit on the left side and you"}, {"start": 680.04, "end": 683.4399999999999, "text": " can see this variable resistor here actually corresponds to this thing here"}, {"start": 683.4399999999999, "end": 687.9599999999999, "text": " and this is just a simple circuit you've got resistors we've got the positive"}, {"start": 687.9599999999999, "end": 692.3399999999999, "text": " power rail we've got the ground ie the zero potential in the circuit you have"}, {"start": 692.3399999999999, 
"end": 697.76, "text": " something called operational amplifier here and what this circuit does you can"}, {"start": 697.76, "end": 701.8399999999999, "text": " treat it as a black box and most people do basically just hold the same"}, {"start": 701.8399999999999, "end": 707.6999999999999, "text": " potentials on the plus and on the negative inputs and all in all you've got"}, {"start": 707.7, "end": 713.5200000000001, "text": " a current going through here and the signal here is basically our times I"}, {"start": 713.5200000000001, "end": 718.72, "text": " where I is the current and our is changing so the analog signal will"}, {"start": 718.72, "end": 722.88, "text": " actually be increasing as the resistance increases and the resistance increases"}, {"start": 722.88, "end": 729.24, "text": " once you're turning the the face particular face and so what I do is they"}, {"start": 729.24, "end": 733.6400000000001, "text": " have a microcontroller here inside of the cube so some microcontroller it's"}, {"start": 733.64, "end": 738.96, "text": " got something called analog to digital converter and basically this signal here"}, {"start": 738.96, "end": 744.68, "text": " is being fed into a DC and it can discretize the signal into some value so"}, {"start": 744.68, "end": 752.36, "text": " maybe let's say this range goes from from 0 to 5 volts when you change the"}, {"start": 752.36, "end": 757.4, "text": " when you rotate the cube from 0 to 360 degrees and you can just discretize that"}, {"start": 757.4, "end": 762.12, "text": " into maybe some of the numbers will be you'll just have some kind of a curve"}, {"start": 762.12, "end": 767.4, "text": " and this is the analog analog voltage and this will be the output from the"}, {"start": 767.4, "end": 772.84, "text": " converter and you can then map those numbers into angles and send them via"}, {"start": 772.84, "end": 777.44, "text": " Bluetooth to your policy and that's the part of the observation that the policy"}, {"start": 777.44, "end": 781.96, "text": " uses in order to control the joints and eventually solve the sub goals and"}, {"start": 781.96, "end": 787.8, "text": " eventually solve the cube so that was a quick detour into electronics it's"}, {"start": 787.8, "end": 790.96, "text": " funny to see a piece of electronics inside of enamel paper actually studied"}, {"start": 790.96, "end": 795.24, "text": " electronics it's pretty satisfying feeling to to see that you can actually"}, {"start": 795.24, "end": 798.88, "text": " leverage some of the knowledge to learn on the faculty yeah this will probably"}, {"start": 798.88, "end": 805.1600000000001, "text": " be the first and the last time I'll ever see it anyways let's continue and let me"}, {"start": 805.1600000000001, "end": 808.76, "text": " tell you one more thing before we start jumping into how the policy is trained"}, {"start": 808.76, "end": 814.48, "text": " and etc so you have the setup but you want to have the simulation to be as as"}, {"start": 814.48, "end": 820.0, "text": " closely to you want your simulation to resemble the real world as closely as"}, {"start": 820.0, "end": 826.76, "text": " possible so they showed here that they added additional modeling to the hand"}, {"start": 826.76, "end": 833.32, "text": " like these pulleys and tendons and doing that they improved the simulated hand"}, {"start": 833.32, "end": 838.96, "text": " and you can see on these charts basically they track one particular joint"}, {"start": 838.96, "end": 844.48, "text": " like maybe this one 
and they track the its angle and you can see the this curve"}, {"start": 844.48, "end": 851.72, "text": " the green one is you get these curves by taking a control sequence and sending"}, {"start": 851.72, "end": 855.0, "text": " that control sequence both to the robotic hand to the real robotic hand"}, {"start": 855.0, "end": 858.96, "text": " and to the simulation and you're just tracking the 3d the angle of that"}, {"start": 858.96, "end": 862.72, "text": " particular joint and you can see there is a discrepancy before the calibration"}, {"start": 862.72, "end": 866.88, "text": " before this modeling was done once they included that you can see that the curves"}, {"start": 866.88, "end": 873.04, "text": " align which means we have much better simulation and thus we'll have better"}, {"start": 873.04, "end": 876.56, "text": " control policy once we deploy it on a real robot and they show this is really"}, {"start": 876.56, "end": 883.48, "text": " important part of their of their paper okay so let me jump to the cube so it"}, {"start": 883.48, "end": 888.48, "text": " turns out modeling the cube is super hard there is so many details like the"}, {"start": 888.48, "end": 894.7199999999999, "text": " friction the size the mass the lots of forces inside the cube itself so aside"}, {"start": 894.7199999999999, "end": 899.56, "text": " from the forces that act inside of the cube there's this thing called a"}, {"start": 899.56, "end": 903.88, "text": " forgiveness factor so once you have the top face and I rotated maybe by a couple"}, {"start": 903.88, "end": 908.3199999999999, "text": " of degrees I can still take this perpendicular face and rotated and it"}, {"start": 908.3199999999999, "end": 913.64, "text": " will snap and it will work but like if you if you go too far now you can't you"}, {"start": 913.64, "end": 918.5999999999999, "text": " basically can't flip the the perpendicular face so that's the"}, {"start": 918.5999999999999, "end": 922.4799999999999, "text": " forgiveness factor so you have to count all of those tiny details inside your"}, {"start": 922.4799999999999, "end": 926.7199999999999, "text": " model in order to get a realistic representation so this here they"}, {"start": 926.72, "end": 930.9200000000001, "text": " performed a very rudimentary dynamics calibration of the parameters so like"}, {"start": 930.9200000000001, "end": 935.52, "text": " the like the size the friction etc which mojoco which is the engine they're using"}, {"start": 935.52, "end": 940.12, "text": " for rendering allows us to specify in order to roughly match a physical Rubik's"}, {"start": 940.12, "end": 944.48, "text": " Cube our goal was not to get an exact match but rather to have a plausible"}, {"start": 944.48, "end": 950.24, "text": " model as a starting point for domain randomization so again ADR will see that"}, {"start": 950.24, "end": 956.48, "text": " that plays a really important part in this work okay and here we are so"}, {"start": 956.48, "end": 961.84, "text": " and they say here so in this previous paper so that's the deck tile the the"}, {"start": 961.84, "end": 965.16, "text": " previous paper they published we were able to train a control policy and a"}, {"start": 965.16, "end": 969.5600000000001, "text": " vision model in simulation and then transfer both to a real robot through the"}, {"start": 969.5600000000001, "end": 974.32, "text": " use of domain randomization however this required a significant amount of manual"}, {"start": 974.32, "end": 978.5600000000001, "text": " 
tuning and a tight iteration loop between randomization design in"}, {"start": 978.5600000000001, "end": 983.2, "text": " simulation and validation on a robot in this section we describe how automatic"}, {"start": 983.2, "end": 987.5600000000001, "text": " domain randomization can be used to automate this process so it's a"}, {"start": 987.5600000000001, "end": 992.0400000000001, "text": " mouthful basically what it means is a following so in previous paper they for"}, {"start": 992.0400000000001, "end": 996.2800000000001, "text": " every single parameter that they want to randomize like maybe the cube size they"}, {"start": 996.2800000000001, "end": 1002.5600000000001, "text": " had to manually find the range so they had to find maybe this is a 5.4"}, {"start": 1002.5600000000001, "end": 1011.5600000000001, "text": " centimeters this may be 5.9 and once they said those basically margins you can"}, {"start": 1011.56, "end": 1017.3199999999999, "text": " sample anything in between but you can't sample outside that range so that means"}, {"start": 1017.3199999999999, "end": 1022.0799999999999, "text": " you have to for every single range you have to test whether that actually leads"}, {"start": 1022.0799999999999, "end": 1026.32, "text": " to a better policy when you deploy it into a robot because if you make this"}, {"start": 1026.32, "end": 1031.48, "text": " arbitrary big maybe this goes to 7.2 now the policy will actually be much worse"}, {"start": 1031.48, "end": 1035.8799999999999, "text": " because it's really hard for the policy to learn how to solve some of those cubes"}, {"start": 1035.8799999999999, "end": 1041.36, "text": " and then it leads to a worse policy which is not desirable so what the ADR"}, {"start": 1041.36, "end": 1046.24, "text": " basically does is it starts from a single value so they measure the"}, {"start": 1046.24, "end": 1050.7199999999998, "text": " realistic cube and that was 5.7 actually in this case and then they slowly"}, {"start": 1050.7199999999998, "end": 1056.04, "text": " expand to the left and to the right and they end up with some a bit bigger range"}, {"start": 1056.04, "end": 1059.8, "text": " and then a bit bigger range and so that's kind of an implicit curriculum"}, {"start": 1059.8, "end": 1063.6799999999998, "text": " learning and we'll see those details so that's the whole point so for every"}, {"start": 1063.6799999999998, "end": 1067.52, "text": " single randomization parameter like maybe the like the friction the cube"}, {"start": 1067.52, "end": 1074.36, "text": " mass the cube size even the gravity etc they gradually and automatically kind of"}, {"start": 1074.36, "end": 1079.84, "text": " increased the actual range for that parameter and that's the whole idea the"}, {"start": 1079.84, "end": 1083.96, "text": " high-level explanation of ADR they say it here the main hypothesis that"}, {"start": 1083.96, "end": 1087.6399999999999, "text": " motivates ADR is the training on a maximally diverse distribution of"}, {"start": 1087.6399999999999, "end": 1092.68, "text": " environments leads to transfer via emergent metal learning more concretely"}, {"start": 1092.68, "end": 1096.8799999999999, "text": " if the model has some form of memory and they use LSTMs and we'll see that in"}, {"start": 1096.88, "end": 1100.8000000000002, "text": " a moment it can learn to adjust its behavior during deployment to improve"}, {"start": 1100.8000000000002, "end": 1105.7600000000002, "text": " performance on the current environment over time I by implementing 
a learning"}, {"start": 1105.7600000000002, "end": 1110.7600000000002, "text": " algorithm internally so basically what happens is that you have a finite"}, {"start": 1110.7600000000002, "end": 1116.0, "text": " capacity policy and you've got pretty much infinite number of sampled"}, {"start": 1116.0, "end": 1121.5200000000002, "text": " environments so that policy can't can't memorize every single solution to every"}, {"start": 1121.5200000000002, "end": 1125.96, "text": " single environment and there is also not a single robust policy that can solve"}, {"start": 1125.96, "end": 1132.16, "text": " each of these really in a performant way so it has to use the memory it has in"}, {"start": 1132.16, "end": 1137.56, "text": " order to adapt to the given environment and improve its skills that way and they"}, {"start": 1137.56, "end": 1142.96, "text": " later show that actually that happens and the policy learns how to encode the"}, {"start": 1142.96, "end": 1147.52, "text": " environment parameters like cube size cube mass gravity etc inside the hidden"}, {"start": 1147.52, "end": 1152.48, "text": " state of an LSTM and that will be an important thing they'll want to show you"}, {"start": 1152.48, "end": 1157.3600000000001, "text": " okay so here is a high-level explanation and I already gave you here so they are"}, {"start": 1157.3600000000001, "end": 1162.6, "text": " basically just updating those ranges and they say here at its core ADR realizes"}, {"start": 1162.6, "end": 1166.32, "text": " a training curriculum that gradually expands the distribution of environments"}, {"start": 1166.32, "end": 1169.96, "text": " for which the model can perform well the initial distribution of environments is"}, {"start": 1169.96, "end": 1174.44, "text": " concentrated on a single environment for example in policy training the initial"}, {"start": 1174.44, "end": 1178.24, "text": " environment is based on calibration values measured from the physical robot"}, {"start": 1178.24, "end": 1183.2, "text": " so they measure for example the cube so it's 5.7 centimeters the gravity is like"}, {"start": 1183.2, "end": 1187.76, "text": " 9.81 whatever so they have these initial values and then they gradually expand"}, {"start": 1187.76, "end": 1191.44, "text": " them and I'll show you in a blog in a second how that looks like here is a"}, {"start": 1191.44, "end": 1196.08, "text": " really nice visualization on their blog you can see that the x-axis is the"}, {"start": 1196.08, "end": 1202.0, "text": " training the timeline basically and the y-axis is the the range I was mentioning"}, {"start": 1202.0, "end": 1206.64, "text": " so they start with 5.7 centimeters but but as the time progresses the ADR does"}, {"start": 1206.64, "end": 1211.96, "text": " its job and the cube eventually comes to this huge range so they actually sample"}, {"start": 1211.96, "end": 1217.4, "text": " some of the cubes will have 5.47 centimeters size and some will have 6.13"}, {"start": 1217.4, "end": 1221.48, "text": " so that makes it much harder to solve the tasks but because they gradually"}, {"start": 1221.48, "end": 1228.0800000000002, "text": " went to that point the policy learns how to cope with those cube size ranges"}, {"start": 1228.0800000000002, "end": 1232.48, "text": " getting back to the paper let me explain the pseudocode for this algorithm but"}, {"start": 1232.48, "end": 1235.8600000000001, "text": " previous prior to that let me just explain this briefly these two equations"}, {"start": 1235.86, "end": 
1241.9199999999998, "text": " so they've got this distribution here and the entropy so as what they did is"}, {"start": 1241.9199999999998, "end": 1245.7199999999998, "text": " for every single parameter that they want to randomize like like the friction"}, {"start": 1245.7199999999998, "end": 1249.56, "text": " and gravity they'll have a uniform distribution so for the cube size for"}, {"start": 1249.56, "end": 1256.02, "text": " example 5.5 I don't know like maybe 5.9 they'll just have those ranges and"}, {"start": 1256.02, "end": 1262.04, "text": " they'll have a uniform distribution here so and that they'll have the same thing"}, {"start": 1262.04, "end": 1267.6399999999999, "text": " for every single parameter so this is maybe like cube size blah blah blah and"}, {"start": 1267.6399999999999, "end": 1272.76, "text": " they'll have like friction etc so the final distribution will just be a"}, {"start": 1272.76, "end": 1278.68, "text": " product 1 to D where D is the number of those parameters and they'll just"}, {"start": 1278.68, "end": 1284.08, "text": " multiply all of these uniform distributions fi low means the the"}, {"start": 1284.08, "end": 1290.8, "text": " lower the lower part of the range so that's 5.5 for the cube and the high"}, {"start": 1290.8, "end": 1296.24, "text": " part means the high threshold and that's it and once you have this"}, {"start": 1296.24, "end": 1301.2, "text": " distribution you can find its entropy and for uniform distributions it's quite"}, {"start": 1301.2, "end": 1306.6399999999999, "text": " easy so the entropy is basically in general's case for a discrete entropy"}, {"start": 1306.6399999999999, "end": 1312.8, "text": " for discrete distribution you'll have a simple sum of probability of that event"}, {"start": 1312.8, "end": 1321.24, "text": " times the logarithm of 1 over the probability of the event and this part"}, {"start": 1321.24, "end": 1325.56, "text": " the log part is actually the self-information so it's sometimes denoted"}, {"start": 1325.56, "end": 1330.9199999999998, "text": " like I of X and this is how you calculate the entropy and basically"}, {"start": 1330.9199999999998, "end": 1335.3999999999999, "text": " because the uniform distribution always has the same values so maybe if we take"}, {"start": 1335.3999999999999, "end": 1342.12, "text": " a simple discrete distribution like this one let's say we have three possible"}, {"start": 1342.12, "end": 1349.0, "text": " events so all of them will have 1 over 3 probability and so you can see so the"}, {"start": 1349.0, "end": 1355.7399999999998, "text": " probability is 1 over 3 and then you have the log 1 over 1 over 3 so that's"}, {"start": 1355.7399999999998, "end": 1362.12, "text": " 3 just 3 and we have that 3 times so these will just cancel out and you end"}, {"start": 1362.12, "end": 1369.76, "text": " up with log 3 so in general case this is just the difference they denote"}, {"start": 1369.76, "end": 1374.32, "text": " here so in the continuous case it will just be your entropy will be the"}, {"start": 1374.32, "end": 1386.92, "text": " integral pi of X log 1 over pi of X DX and because for a particular uniform"}, {"start": 1386.92, "end": 1392.36, "text": " distribution this is constant you can just pull it out here and so we've got"}, {"start": 1392.36, "end": 1398.24, "text": " integral over DX and that will just be the size of your domain so maybe here"}, {"start": 1398.24, "end": 1402.44, "text": " it will be whatever the difference between these two are and that's 
why"}, {"start": 1402.44, "end": 1407.44, "text": " they have the difference here so this thing is just the entropy of a uniform"}, {"start": 1407.44, "end": 1411.28, "text": " distribution that was a long way to say that thing and then they just divide by"}, {"start": 1411.28, "end": 1414.4, "text": " the number of distributions by the number of randomized parameters and you"}, {"start": 1414.4, "end": 1419.84, "text": " get kind of a average entropy and so the the the more spread out these"}, {"start": 1419.84, "end": 1423.16, "text": " distributions are the higher this entropy will be because you can see as"}, {"start": 1423.16, "end": 1428.64, "text": " the as the range increases the entropy increases and so this is a measure of"}, {"start": 1428.64, "end": 1434.2, "text": " how actually demanding the environment the current environment is so that's"}, {"start": 1434.2, "end": 1440.5600000000002, "text": " just a simple way to do that okay here is the actual algorithm they start with"}, {"start": 1440.5600000000002, "end": 1444.88, "text": " some initial calibration values as I mentioned that may be 5.7 centimeters"}, {"start": 1444.88, "end": 1450.0800000000002, "text": " for the cube 9.81 for the gravity and they keep these buffers for every"}, {"start": 1450.08, "end": 1455.04, "text": " single one randomized parameter and they keep the lower and the higher so one for"}, {"start": 1455.04, "end": 1458.08, "text": " the lower range part of range and one for the higher part of the range and"}, {"start": 1458.08, "end": 1463.72, "text": " they keep these these are just thresholds for how performant the policy"}, {"start": 1463.72, "end": 1469.52, "text": " is given a specific environment and M just means so M is just a number of"}, {"start": 1469.52, "end": 1476.04, "text": " iterations we need in order to decide whether we want to expand or squish the"}, {"start": 1476.04, "end": 1481.48, "text": " range of that particular randomized parameter okay so we sample a particular"}, {"start": 1481.48, "end": 1488.24, "text": " vector from the current distribution and we pick random randomized parameters so"}, {"start": 1488.24, "end": 1493.1599999999999, "text": " one of those D parameters that we are using we pick one of them we then sample"}, {"start": 1493.1599999999999, "end": 1500.8, "text": " from X from the uniform distribution and with 50% chance will either pick the"}, {"start": 1500.8, "end": 1510.48, "text": " lower so we'll set lambda I to the lower extreme or with 50% chance will set"}, {"start": 1510.48, "end": 1514.48, "text": " lambda I to a higher extreme so what that means in that particular case with"}, {"start": 1514.48, "end": 1519.2, "text": " the Rubik's Cube that means that sometimes so basically once you sample"}, {"start": 1519.2, "end": 1525.6399999999999, "text": " this you'll sometimes take your sample 5.6 blah blah but like in this case if"}, {"start": 1525.6399999999999, "end": 1530.04, "text": " you pick the lower one you'll set that lambda I that corresponds to the Rubik"}, {"start": 1530.04, "end": 1538.8, "text": " Cube size to 5.5 or to 5.9 so that's this part and basically then you evaluate"}, {"start": 1538.8, "end": 1544.72, "text": " the performance of the policy on that specific environment so hopefully"}, {"start": 1544.72, "end": 1548.56, "text": " understand what happens here so you pick all of the parameters or all of those"}, {"start": 1548.56, "end": 1553.24, "text": " the randomized parameters you have randomly from their ranges but you 
pick"}, {"start": 1553.24, "end": 1559.52, "text": " one specific one and you intentionally pull it to some extreme either to the"}, {"start": 1559.52, "end": 1563.2, "text": " lower extreme or to the higher extreme and you evaluate the performance you"}, {"start": 1563.2, "end": 1567.24, "text": " save the performance inside of the buffer and you keep doing that until you see"}, {"start": 1567.24, "end": 1572.24, "text": " this threshold M so that may be 128 times once you have enough samples you're"}, {"start": 1572.24, "end": 1576.92, "text": " you're confident that your estimation will be good and you just take the"}, {"start": 1576.92, "end": 1581.72, "text": " average performance you just this is just clearing the buffer and if the"}, {"start": 1581.72, "end": 1586.24, "text": " average performance is good enough whatever this TH is depending on whether"}, {"start": 1586.24, "end": 1591.4, "text": " you're training vision vision network or the policy network so if you're good"}, {"start": 1591.4, "end": 1595.2, "text": " enough that means that the network that was that is being trained doesn't have"}, {"start": 1595.2, "end": 1598.64, "text": " any difficulty solving this particular environment so we can expand the range"}, {"start": 1598.64, "end": 1603.16, "text": " and next time the environment will be even harder on the other hand on the on"}, {"start": 1603.16, "end": 1608.4, "text": " the flip side basically if the performance is really poor that means"}, {"start": 1608.4, "end": 1613.32, "text": " that pulling this particular parameter to its extreme is really hard and we"}, {"start": 1613.32, "end": 1617.4399999999998, "text": " want to actually squish this this uniform distribution for that particular"}, {"start": 1617.4399999999998, "end": 1621.2, "text": " randomized parameter so hopefully this was clear enough let me know in the"}, {"start": 1621.2, "end": 1624.52, "text": " comments because this is arguably the most important algorithmic innovation"}, {"start": 1624.52, "end": 1628.2, "text": " they had in this paper other than that there was a lot of engineering and"}, {"start": 1628.2, "end": 1634.8799999999999, "text": " nothing less important but yeah okay let's continue here is just a high level"}, {"start": 1634.8799999999999, "end": 1640.48, "text": " overview of how the vision pipeline and how the policy pipelines are trained you"}, {"start": 1640.48, "end": 1646.08, "text": " have the model parameters you have the current PHYs so those are the the the"}, {"start": 1646.08, "end": 1650.68, "text": " ranges we've just mentioned for every single randomized parameter and you you"}, {"start": 1650.68, "end": 1655.48, "text": " basically sometimes sample those you generate the data and you pull push the"}, {"start": 1655.48, "end": 1660.7, "text": " data into this buffer but we sometimes just do the ADR so basically snap one"}, {"start": 1660.7, "end": 1666.16, "text": " of those parameters to its extreme and we sometimes update this PHY to make it"}, {"start": 1666.16, "end": 1671.52, "text": " more challenging or to reduce its difficulty and that's the two things are"}, {"start": 1671.52, "end": 1675.0800000000002, "text": " happening parallel we are constantly improving the the difficulty of"}, {"start": 1675.0800000000002, "end": 1679.6000000000001, "text": " environment and we are constantly generating data and training the model"}, {"start": 1679.6000000000001, "end": 1684.52, "text": " and updating the model parameters so yeah that's how it works and you can 
see"}, {"start": 1684.52, "end": 1688.92, "text": " one one difference between how they train the vision network compared to"}, {"start": 1688.92, "end": 1695.5600000000002, "text": " the policy is that they here they use both the data that is evaluated as well"}, {"start": 1695.56, "end": 1699.2, "text": " as the actual data pulled from the current parameters and they do that"}, {"start": 1699.2, "end": 1704.24, "text": " because data is really expensive and they need all data I can have for that"}, {"start": 1704.24, "end": 1708.6, "text": " particular policy the problem with with this thing is that sometimes you'll be"}, {"start": 1708.6, "end": 1712.8799999999999, "text": " pulling data that's too hard because this is still remember this is still"}, {"start": 1712.8799999999999, "end": 1717.6399999999999, "text": " under evaluation we still don't know whether snapping that parameter to its"}, {"start": 1717.6399999999999, "end": 1722.8799999999999, "text": " extreme is too hard or not and then yeah there's just the trade-off but they"}, {"start": 1722.88, "end": 1727.0, "text": " figure out it's good enough for their approach okay let's now see what they"}, {"start": 1727.0, "end": 1732.0400000000002, "text": " randomize in their simulations and already mentioned like the physics like"}, {"start": 1732.0400000000002, "end": 1737.3200000000002, "text": " geometry friction gravity etc the interesting part is this one the"}, {"start": 1737.3200000000002, "end": 1741.1200000000001, "text": " adversarial what they do here is the following so they have their training"}, {"start": 1741.1200000000001, "end": 1747.44, "text": " the policy network that's the P and it's outputting those 20 times 11"}, {"start": 1747.44, "end": 1752.0, "text": " categorical distributions that correspond to angles for particular"}, {"start": 1752.0, "end": 1757.28, "text": " joints and on the other hand they have this network the adversary called are"}, {"start": 1757.28, "end": 1765.36, "text": " which just basically outputs random actions and those get added to these"}, {"start": 1765.36, "end": 1771.96, "text": " ones and the policy needs to learn how to complete the sub goal it's given and"}, {"start": 1771.96, "end": 1777.24, "text": " disregard the randomness introduced by the adversary and they tried two things"}, {"start": 1777.24, "end": 1783.24, "text": " actually try to intentionally to develop a true adversary which will try and"}, {"start": 1783.24, "end": 1789.44, "text": " perform the worst actions in order to kind of confuse the policy but they"}, {"start": 1789.44, "end": 1794.08, "text": " realize that just using a random network is actually even better because it"}, {"start": 1794.08, "end": 1800.6, "text": " brings more diversity and later the policy can better generalize once"}, {"start": 1800.6, "end": 1805.52, "text": " deployed on a real robot so why do they do this and the simple answer is the"}, {"start": 1805.52, "end": 1810.16, "text": " following basically once you have the real hand even though the policy says to"}, {"start": 1810.16, "end": 1815.4, "text": " a specific joint move by maybe two degrees the underlying controllers like"}, {"start": 1815.4, "end": 1822.16, "text": " the PID controller will add some noise so it will maybe execute 2.1 degrees"}, {"start": 1822.16, "end": 1826.08, "text": " instead of 2 degrees and then the policy will still have to learn how to deal"}, {"start": 1826.08, "end": 1829.6399999999999, "text": " with that so and that's why they introduced this 
kind of modeling into"}, {"start": 1829.64, "end": 1835.5600000000002, "text": " the pipeline they also add noise to the observations basically fingertip"}, {"start": 1835.5600000000002, "end": 1839.68, "text": " positions are kind of added like they had a Gaussian noise to to the positions"}, {"start": 1839.68, "end": 1843.4, "text": " etc so everything you can imagine everything that's in this pipeline they"}, {"start": 1843.4, "end": 1847.96, "text": " try and randomize finally the vision like the lighting conditions was the"}, {"start": 1847.96, "end": 1852.92, "text": " lux level the camera positions so remember they have this cage their cage"}, {"start": 1852.92, "end": 1857.96, "text": " and they figure out the positions and use those as the calibration as the"}, {"start": 1857.96, "end": 1863.04, "text": " initial as initial positions and then in the simulation they'll just have some"}, {"start": 1863.04, "end": 1867.4, "text": " volume that will be expanding and they'll be sampling camera positions"}, {"start": 1867.4, "end": 1871.72, "text": " from that volume and that will introduce randomness they'll also experiment with"}, {"start": 1871.72, "end": 1874.8, "text": " the field of view with bunch of different parameters and yeah that's"}, {"start": 1874.8, "end": 1879.24, "text": " pretty much it and I mentioned here so some of these are fixed in a specific"}, {"start": 1879.24, "end": 1883.56, "text": " simulation so they'll just pick the cube size and the mass and they'll freeze"}, {"start": 1883.56, "end": 1887.28, "text": " that but other parameters like the observation noise and adversaries like"}, {"start": 1887.28, "end": 1891.32, "text": " the the thing I just mentioned will be changing with every single step in the"}, {"start": 1891.32, "end": 1897.2, "text": " episode so yeah just keep that in mind and finally training the policy they use"}, {"start": 1897.2, "end": 1901.92, "text": " this thing called PPO proximal policy optimization which is basically"}, {"start": 1901.92, "end": 1906.16, "text": " actor critic method they devised a couple of years ago it's a fairly"}, {"start": 1906.16, "end": 1910.24, "text": " standard RL algorithm I won't dig into too much depth here basically they have"}, {"start": 1910.24, "end": 1913.84, "text": " the value network they also have the policy network the value network tries"}, {"start": 1913.84, "end": 1918.8799999999999, "text": " to predict the expected value and the policy network tries to predict the"}, {"start": 1918.8799999999999, "end": 1923.04, "text": " optimal actions and in that collaboration they eventually learn a"}, {"start": 1923.04, "end": 1928.4399999999998, "text": " really good policy let me just focus briefly on on on these details like the"}, {"start": 1928.4399999999998, "end": 1932.84, "text": " rewards and actions so I already explained how the actions function so"}, {"start": 1932.84, "end": 1936.6799999999998, "text": " basically for every single joint and there are 20 of them they output these"}, {"start": 1936.68, "end": 1943.88, "text": " categorical distributions so maybe something like this and the x-axis is"}, {"start": 1943.88, "end": 1948.8, "text": " like the angle so here this is the highest probability angle so that maybe"}, {"start": 1948.8, "end": 1952.5800000000002, "text": " correspond to five degrees so they'll pick five degrees so those are the"}, {"start": 1952.5800000000002, "end": 1956.24, "text": " actions the rewards they have three types of rewards and they had some"}, {"start": 
1956.24, "end": 1960.44, "text": " reward shaping here as you can see so the first component is the non sparse"}, {"start": 1960.44, "end": 1967.88, "text": " component and for example if they are trying to in the case of cube flipping"}, {"start": 1967.88, "end": 1976.48, "text": " if they're trying to get to the target orientation the reward will be the"}, {"start": 1976.48, "end": 1980.6000000000001, "text": " closer you get to the orientation the final orientation the more reward you'll"}, {"start": 1980.6000000000001, "end": 1984.72, "text": " have for each step so if you decrease the distance between those two"}, {"start": 1984.72, "end": 1988.28, "text": " orientations you'll be getting more reward and how they represent the"}, {"start": 1988.28, "end": 1992.44, "text": " orientations is using something called quaternions and those are just it's just"}, {"start": 1992.44, "end": 1996.2, "text": " an extension of the complex number system into four dimensions it sounds"}, {"start": 1996.2, "end": 2000.56, "text": " pretty complex I recently created a post on LinkedIn and I'll link it down in the"}, {"start": 2000.56, "end": 2004.96, "text": " description where I explained the best resources you can get to learn about"}, {"start": 2004.96, "end": 2008.68, "text": " quaternions on the high level you can treat them as a black box it's just a"}, {"start": 2008.68, "end": 2014.3999999999999, "text": " four-tuple basically four numbers and they somehow captured the notion of the"}, {"start": 2014.4, "end": 2018.52, "text": " orientation of the object and they are used all over the place in computer"}, {"start": 2018.52, "end": 2024.72, "text": " graphics and robotics etc so the second reward is a reward of five whenever the"}, {"start": 2024.72, "end": 2029.2800000000002, "text": " goal is achieved so that's a sparse reward as you can as you can assume and"}, {"start": 2029.2800000000002, "end": 2034.96, "text": " finally the penalty of minus 20 when the cube or the block is dropped so that's"}, {"start": 2034.96, "end": 2040.3600000000001, "text": " the reward system they they deploy here is the actual architecture you can see"}, {"start": 2040.3600000000001, "end": 2043.8400000000001, "text": " the as I mentioned the value the value network and the policy network so that's"}, {"start": 2043.84, "end": 2050.24, "text": " the part of the PPO algorithm and you can see the distribution 20 by 11 and"}, {"start": 2050.24, "end": 2056.56, "text": " value has just a single scalar and one thing interesting here is that the the"}, {"start": 2056.56, "end": 2062.16, "text": " the policy network takes the only the noisy observations whereas the value"}, {"start": 2062.16, "end": 2067.44, "text": " network takes both the non noisy observations which are only available in"}, {"start": 2067.44, "end": 2072.88, "text": " the simulation but also it uses the noisy observations and the reason we can"}, {"start": 2072.88, "end": 2077.6, "text": " use that all of the data that's not available on the actual robot is because"}, {"start": 2077.6, "end": 2081.32, "text": " we'll won't be using the value network at all we'll be just using the policy"}, {"start": 2081.32, "end": 2085.84, "text": " network so this is this a symmetric act or critic approach which they use and"}, {"start": 2085.84, "end": 2091.76, "text": " which they devised in some of their previous papers okay here are some of"}, {"start": 2091.76, "end": 2097.52, "text": " the inputs you can see like cube orientation quaternions fingertip"}, 
{"start": 2097.52, "end": 2102.1600000000003, "text": " positions blah blah blah you can notice that the only the noisy parts go into"}, {"start": 2102.16, "end": 2105.7599999999998, "text": " the policy networks you can see here the check sign so the noisy fingertip"}, {"start": 2105.7599999999998, "end": 2109.96, "text": " positions you can see the check sign here the Q position and the orientation"}, {"start": 2109.96, "end": 2117.3999999999996, "text": " etc okay a couple of interesting things to notice is this and these enable them"}, {"start": 2117.3999999999996, "end": 2122.8799999999997, "text": " to experiment a lot more is this embed and add approach so you can see here"}, {"start": 2122.8799999999997, "end": 2127.3999999999996, "text": " that the observations are always normalized and then embedded into this"}, {"start": 2127.4, "end": 2132.48, "text": " constant 512 dimensional dimensional space and they just summed it up"}, {"start": 2132.48, "end": 2137.28, "text": " summed it up and they apply value and what that enabled them to do is"}, {"start": 2137.28, "end": 2142.08, "text": " experiment with different observations and keep the experience they accumulated"}, {"start": 2142.08, "end": 2149.2400000000002, "text": " over over many many months intact and they say here so more concretely we're"}, {"start": 2149.2400000000002, "end": 2153.1600000000003, "text": " first embed each type of observation separately without any weight sharing"}, {"start": 2153.16, "end": 2158.3199999999997, "text": " into latent space of dimensionality 512 we then combine all inputs by adding"}, {"start": 2158.3199999999997, "end": 2163.16, "text": " the latent representation of each and applying a value non-linearity after the"}, {"start": 2163.16, "end": 2166.6, "text": " main motivation behind this change was to easily add new observation to an"}, {"start": 2166.6, "end": 2171.0, "text": " existing policy and to share embeddings between value and policy network for"}, {"start": 2171.0, "end": 2175.96, "text": " inputs that feed into both so for example if they have fingertips without"}, {"start": 2175.96, "end": 2180.3999999999996, "text": " noise here and fingertips with the noise they'll be sharing these embeddings in"}, {"start": 2180.4, "end": 2186.84, "text": " those cases otherwise they are not sharing the the weights there okay let's"}, {"start": 2186.84, "end": 2192.92, "text": " continue this is super interesting and mind-blowing so they actually train this"}, {"start": 2192.92, "end": 2196.5, "text": " thing they say here we've been training the Rubik's Cube policy continuously for"}, {"start": 2196.5, "end": 2200.92, "text": " several months at this scale while concurrently improving the simulation"}, {"start": 2200.92, "end": 2205.7200000000003, "text": " fidelity the ADR algorithm they've been tuning the hyperparams and they've been"}, {"start": 2205.7200000000003, "end": 2209.2400000000002, "text": " changing the network architecture so the cumulative amount of experience over"}, {"start": 2209.24, "end": 2214.8399999999997, "text": " that period used for training on the Rubik's Cube is roughly 13,000 years and"}, {"start": 2214.8399999999997, "end": 2220.3599999999997, "text": " which is on the same order of magnitude as the 40,000 years they used to train"}, {"start": 2220.3599999999997, "end": 2227.3999999999996, "text": " the open AI 5 the bots they train for Dota 2 game and basically what I want"}, {"start": 2227.3999999999996, "end": 2233.12, "text": " you to have in mind 
here is that a single simulation step takes is one is 80"}, {"start": 2233.12, "end": 2238.2, "text": " milliseconds so you just divide this number here the number of milliseconds"}, {"start": 2238.2, "end": 2243.2799999999997, "text": " here by this and you'll have the number of steps that happen in the simulation"}, {"start": 2243.2799999999997, "end": 2248.9199999999996, "text": " since they started doing this and it's a huge number okay experimenting is really"}, {"start": 2248.9199999999996, "end": 2251.6, "text": " important obviously they want to keep the experience that they want to ditch"}, {"start": 2251.6, "end": 2258.8399999999997, "text": " the experience they accumulated and so they do one more thing and they say here"}, {"start": 2258.8399999999997, "end": 2263.7599999999998, "text": " so we therefore rarely train experiments from scratch because it's expensive yeah"}, {"start": 2263.7599999999998, "end": 2267.68, "text": " but instead updated existing experiments and initialized from previous"}, {"start": 2267.68, "end": 2273.48, "text": " checkpoints for both the ADR so they both keep those those thresholds for"}, {"start": 2273.48, "end": 2276.96, "text": " each of those ranges for the randomized parameters and they also keep the policy"}, {"start": 2276.96, "end": 2282.7599999999998, "text": " parameters and they did something called behavior cloning in order not to lose"}, {"start": 2282.7599999999998, "end": 2286.56, "text": " experience they implemented something called behavior cloning if you're"}, {"start": 2286.56, "end": 2290.3599999999997, "text": " familiar with imitation learning this is one of the basic approaches there and"}, {"start": 2290.3599999999997, "end": 2295.6, "text": " they just implemented a variant of this dagger algorithm and why they do this is"}, {"start": 2295.6, "end": 2299.62, "text": " that they can iterate with different policy architectures not only the"}, {"start": 2299.62, "end": 2305.88, "text": " observations and still keep the experience intact pretty much intact so"}, {"start": 2305.88, "end": 2309.52, "text": " what how it works is basically you have something called a student network you"}, {"start": 2309.52, "end": 2315.6, "text": " have the teacher network and this one is basically we input the observation out"}, {"start": 2315.6, "end": 2320.7599999999998, "text": " goes the actions and we use these actions to navigate the environment but"}, {"start": 2320.7599999999998, "end": 2325.52, "text": " what we additionally do is because it's got the value function here as well so"}, {"start": 2325.52, "end": 2329.28, "text": " these are the actions and here is a teacher so that's a that's the the best"}, {"start": 2329.28, "end": 2334.28, "text": " network we currently have with a slightly different architecture and this"}, {"start": 2334.28, "end": 2339.0, "text": " is the new network the new architecture and we want to kind of transfer the"}, {"start": 2339.0, "end": 2343.28, "text": " knowledge experience to that new network and so what we do is additionally we"}, {"start": 2343.28, "end": 2347.16, "text": " have the values here and we have the actions here and we just do MSC loss"}, {"start": 2347.16, "end": 2353.24, "text": " between these two so just simple mean square error loss there and we do simple"}, {"start": 2353.24, "end": 2357.8799999999997, "text": " KL divergence between those two and that will make sure that the student"}, {"start": 2357.8799999999997, "end": 2361.7599999999998, "text": " eventually learns to 
output the same value functions and to output the same"}, {"start": 2361.7599999999998, "end": 2367.56, "text": " actions in particular states as the teacher network and that's what what I"}, {"start": 2367.56, "end": 2374.12, "text": " mean by behavior cloning in this case okay so they say here this is work"}, {"start": 2374.12, "end": 2377.4799999999996, "text": " surprisingly well allowing us to iterate on the policy architecture quickly"}, {"start": 2377.4799999999996, "end": 2381.56, "text": " without losing the accumulated training progress our cloning approach works with"}, {"start": 2381.56, "end": 2385.96, "text": " arbitrary policy architecture changes as long as the action space remains"}, {"start": 2385.96, "end": 2389.7999999999997, "text": " unchanged so the only thing that needs to remain constant is the action space"}, {"start": 2389.7999999999997, "end": 2395.44, "text": " itself so 20 times 11 everything else can change the observations can change"}, {"start": 2395.44, "end": 2400.6, "text": " and the architecture itself can change so I think already mentioned estimating"}, {"start": 2400.6, "end": 2404.92, "text": " the face angles is super hard so what I did is they so they just added these"}, {"start": 2404.92, "end": 2409.7599999999998, "text": " cutouts so that the problem of estimating the angle is a bit easier and"}, {"start": 2409.76, "end": 2414.6800000000003, "text": " they say here so while predicting position and orientation works directly"}, {"start": 2414.6800000000003, "end": 2418.96, "text": " works well we found predicting all six face angles directly to be much more"}, {"start": 2418.96, "end": 2423.0400000000004, "text": " challenging did you have the occlusion even when using a cube with a symmetric"}, {"start": 2423.0400000000004, "end": 2427.4, "text": " center stickers to work around this we decompose face angle prediction into"}, {"start": 2427.4, "end": 2430.92, "text": " several distinct predictions so for the sake of brevity I'll just skip over the"}, {"start": 2430.92, "end": 2435.36, "text": " details but they estimate this thing called active axis and the active face"}, {"start": 2435.36, "end": 2440.48, "text": " angles and the top face angle in order to and then they do some post processing"}, {"start": 2440.48, "end": 2448.1600000000003, "text": " in order to use as a proxy to get the actual face angles and yeah that's what"}, {"start": 2448.1600000000003, "end": 2451.44, "text": " I do with the vision part and you can see the network here so position"}, {"start": 2451.44, "end": 2456.28, "text": " orientation which is a quaternion and these three elements combined with some"}, {"start": 2456.28, "end": 2461.48, "text": " post-processing indirectly gives them the face angle estimations one thing"}, {"start": 2461.48, "end": 2464.56, "text": " you'll notice is there is many components here and you kind of usually"}, {"start": 2464.56, "end": 2468.2, "text": " have to manually specify that when you do the weighted loss you have to"}, {"start": 2468.2, "end": 2472.4, "text": " manually specify those weights so a way around that is using something called"}, {"start": 2472.4, "end": 2477.6, "text": " focal loss and there is this paper which introduced it previously so then the"}, {"start": 2477.6, "end": 2481.66, "text": " main idea is the following so let's say you're doing classification and the true"}, {"start": 2481.66, "end": 2486.04, "text": " label is maybe one and so you're using cross entropy loss your loss will look"}, {"start": 
2486.04, "end": 2491.52, "text": " something like this and let's say you output 0.9 and that means the loss is"}, {"start": 2491.52, "end": 2496.84, "text": " pretty small and what they do is the following they just wait using this this"}, {"start": 2496.84, "end": 2502.96, "text": " here this factor here so because the and the minus log P is the cross entropy"}, {"start": 2502.96, "end": 2509.36, "text": " part so this is the the actual weight and so you can see the bigger the P is"}, {"start": 2509.36, "end": 2513.48, "text": " so the closer we are to the correct solution the smaller the weight for this"}, {"start": 2513.48, "end": 2518.4, "text": " loss so that means roughly that if we are performing really good here we want"}, {"start": 2518.4, "end": 2522.44, "text": " ditch we want to drop the weight for that component because we are already"}, {"start": 2522.44, "end": 2527.84, "text": " really good there but if we're maybe struggling here we'll increase the weight"}, {"start": 2527.84, "end": 2532.2400000000002, "text": " that goes with this loss component and that's pretty much it and then they do"}, {"start": 2532.2400000000002, "end": 2535.96, "text": " the ablation studies and they show that using the full model that means using"}, {"start": 2535.96, "end": 2540.52, "text": " the focal loss that the domain randomization etc they achieve the best"}, {"start": 2540.52, "end": 2548.8, "text": " results looking at maybe a real cube you can see that the orientation error is"}, {"start": 2548.8, "end": 2553.36, "text": " seven point something you can see the other values and that's much better than"}, {"start": 2553.36, "end": 2558.96, "text": " when they don't use the randomization or no focal loss etc so yeah you can see"}, {"start": 2558.96, "end": 2565.92, "text": " the order of magnitude of these errors and it's not small but it's good enough"}, {"start": 2565.92, "end": 2570.64, "text": " let's let's continue and jump into results it shows some results on the"}, {"start": 2570.64, "end": 2576.8, "text": " sim to sim transfer performance and basically the model was the policy was"}, {"start": 2576.8, "end": 2581.04, "text": " trained in the ADR setting and the test set also comes from the simulation but"}, {"start": 2581.04, "end": 2586.4, "text": " using some manually specified ranges and so that's the the holdout data and they"}, {"start": 2586.4, "end": 2590.36, "text": " show that as the randomization level increases and that's the entropy I"}, {"start": 2590.36, "end": 2595.64, "text": " mentioned previously as it increases and it's a good measure of how random how"}, {"start": 2595.64, "end": 2600.72, "text": " big those ranges are you can see that the performance on the the number of"}, {"start": 2600.72, "end": 2609.44, "text": " successes on the of that on that holdout set gets better and that's one thing and"}, {"start": 2609.44, "end": 2615.14, "text": " the second thing is they show here the performance on the real data and using"}, {"start": 2615.14, "end": 2622.68, "text": " the ADR so the more randomization they add so you can see the entropy increases"}, {"start": 2622.68, "end": 2627.56, "text": " basically the better the results get if we take this specific policy the XXL"}, {"start": 2627.56, "end": 2631.2799999999997, "text": " policy which was trained for a lot of time you can see they even didn't they"}, {"start": 2631.2799999999997, "end": 2636.3599999999997, "text": " didn't put in the numbers here so you can assume a couple of months and you"}, {"start": 
2636.3599999999997, "end": 2642.7999999999997, "text": " can see the the mean and the median number of successes in the block"}, {"start": 2642.7999999999997, "end": 2648.04, "text": " rotation problem increases and it's pretty good in the real world on the"}, {"start": 2648.04, "end": 2653.56, "text": " real robot as well okay let's continue and see what else is there this"}, {"start": 2653.56, "end": 2656.96, "text": " demonstrates that the curriculum learning is really important so the blue"}, {"start": 2656.96, "end": 2662.84, "text": " line is trained the blue policy is trained in the ADR setting whereas all"}, {"start": 2662.84, "end": 2669.32, "text": " of the other curves are manual using the manually set domain randomization ranges"}, {"start": 2669.32, "end": 2673.46, "text": " so what they basically do is the following so when you start with ADR we"}, {"start": 2673.46, "end": 2677.2, "text": " start with a single value and then you slowly expand the range and you expand"}, {"start": 2677.2, "end": 2682.96, "text": " the range again and they expand the range again and this may be like"}, {"start": 2682.96, "end": 2690.72, "text": " the cube size this is the size and how they produce these other curves is for"}, {"start": 2690.72, "end": 2695.8399999999997, "text": " example this one corresponds to maybe this range and the the worst one"}, {"start": 2695.8399999999997, "end": 2699.7599999999998, "text": " actually corresponds to the biggest range so what happens there is that if"}, {"start": 2699.7599999999998, "end": 2705.3599999999997, "text": " you start training the network with a huge range it's actually going to"}, {"start": 2705.36, "end": 2711.44, "text": " underperform because it didn't have the the benefit the the luxury of doing it"}, {"start": 2711.44, "end": 2716.2400000000002, "text": " gradually as it was just thrown into the water so to speak and it was really hard"}, {"start": 2716.2400000000002, "end": 2722.4, "text": " for the for that policy to cope so basically yeah this is a argument for"}, {"start": 2722.4, "end": 2726.36, "text": " using the curriculum learning pretty much here again they showed that on the"}, {"start": 2726.36, "end": 2734.44, "text": " block orientation problem as we increase the ADR ranges the entropy increases and"}, {"start": 2734.44, "end": 2738.2000000000003, "text": " the errors for estimating the orientation and position of the vision"}, {"start": 2738.2000000000003, "end": 2744.64, "text": " pipeline gets better and better you can see this gets much better as we increase"}, {"start": 2744.64, "end": 2750.12, "text": " the entropy similarly for the rub Rubik's Cube the orientation position and"}, {"start": 2750.12, "end": 2757.08, "text": " the top angle all increase going to higher entropy ADRs and that's that"}, {"start": 2757.08, "end": 2763.6, "text": " makes sense okay this is the final result they got on the Rubik's Cube"}, {"start": 2763.6, "end": 2768.52, "text": " basically what I did is they took the maximally hard scramble that means if"}, {"start": 2768.52, "end": 2774.2799999999997, "text": " you know some Rubik's Cube theory at the hardest state you can be in is 26 quarter"}, {"start": 2774.2799999999997, "end": 2781.36, "text": " rotations away from the like completed cube state that means quarter rotation"}, {"start": 2781.36, "end": 2786.08, "text": " is just a 90 degree rotation on the cube and it can be shown that there is only"}, {"start": 2786.08, "end": 2790.7599999999998, "text": " three states that there 
are 26 quarter rotations away from the goal from the"}, {"start": 2790.76, "end": 2796.0400000000004, "text": " completed cube state and they took one of those hard states and they use those"}, {"start": 2796.0400000000004, "end": 2801.0800000000004, "text": " to test the policy and they can show here they showed here that on the full"}, {"start": 2801.0800000000004, "end": 2806.96, "text": " scramble so and you basically have to have 43 successes to get to the"}, {"start": 2806.96, "end": 2810.8, "text": " completed state because we're only using remember we're just using the phase"}, {"start": 2810.8, "end": 2815.5200000000004, "text": " rotations and we're just using the flipping and basically we need 43"}, {"start": 2815.52, "end": 2821.08, "text": " successes in order to successfully solve that particular scramble and they show"}, {"start": 2821.08, "end": 2828.64, "text": " that for the XXL ADR so highly randomized ADR and using the Giker cube"}, {"start": 2828.64, "end": 2832.32, "text": " for the face angles instead of vision because you remember it's hard to"}, {"start": 2832.32, "end": 2837.0, "text": " estimate those using vision and they show that they have 60% success rate on"}, {"start": 2837.0, "end": 2842.52, "text": " the half scramble and 20% success rate on the full scramble once deployed on a"}, {"start": 2842.52, "end": 2849.2, "text": " real robot and that's pretty amazing still not perfect but like it's pretty"}, {"start": 2849.2, "end": 2855.16, "text": " awesome yeah that's that result and let me continue and show you some more"}, {"start": 2855.16, "end": 2861.04, "text": " awesome results and you can see that using a rubber glove which changes the"}, {"start": 2861.04, "end": 2866.56, "text": " like effectively changes the friction of the environment or using the tide"}, {"start": 2866.56, "end": 2871.64, "text": " fingers or maybe putting the blanket over the cube which will basically"}, {"start": 2871.64, "end": 2879.8399999999997, "text": " render the vision pipeline useless because I can't figure out the position"}, {"start": 2879.8399999999997, "end": 2884.7999999999997, "text": " and orientation or they do some kind of funny perturbations with giraffes and"}, {"start": 2884.7999999999997, "end": 2890.24, "text": " pencils just applying force on the cube and hand and it can cope with all of"}, {"start": 2890.24, "end": 2894.4, "text": " those perturbations and still solve the problem although a bit harder as they"}, {"start": 2894.4, "end": 2901.08, "text": " showed okay now for the last but not least I'm gonna show you some metal"}, {"start": 2901.08, "end": 2904.48, "text": " learning that eventually happened so when they say metal learning they are"}, {"start": 2904.48, "end": 2907.92, "text": " not thinking of metal learning in the classical sense like solving multiple"}, {"start": 2907.92, "end": 2911.72, "text": " tasks they're still solving just a Rubik's Cube or they're just reorienting"}, {"start": 2911.72, "end": 2917.88, "text": " the cube what I mean is that they learn how to estimate this transition"}, {"start": 2917.88, "end": 2923.64, "text": " probability function and that's also called online system identification in"}, {"start": 2923.64, "end": 2927.16, "text": " some other communities so basically that means it learns how to estimate the"}, {"start": 2927.16, "end": 2932.04, "text": " parameters such as friction gravity implicitly and let me show you the"}, {"start": 2932.04, "end": 2938.0, "text": " results so they run the policy in a 
simulation on this flipping problem and"}, {"start": 2938.0, "end": 2942.8799999999997, "text": " they show that once they so we have a LSTM inside of the policy if you"}, {"start": 2942.8799999999997, "end": 2948.72, "text": " remember and they show that once they take the hidden state and just pull it"}, {"start": 2948.72, "end": 2954.58, "text": " down to zero in these particular moments like it's step 10 and step 30 you can"}, {"start": 2954.58, "end": 2960.6, "text": " see that the time to complete the flip actually increases and then once the the"}, {"start": 2960.6, "end": 2966.36, "text": " policy has enough time to again encode the the parameters of the environment it"}, {"start": 2966.36, "end": 2971.7999999999997, "text": " again becomes really fast and then once the next perturbation happens again it"}, {"start": 2971.7999999999997, "end": 2977.16, "text": " gets a bit slower but then it adjusts adapts again so until the hidden state"}, {"start": 2977.16, "end": 2983.2799999999997, "text": " is updated it will be a bit slower but then it just gets to the baseline level"}, {"start": 2983.28, "end": 2987.96, "text": " the second thing they show is just by resampling the environment dynamics so"}, {"start": 2987.96, "end": 2993.0800000000004, "text": " they let the policy work with certain then a certain environment like specific"}, {"start": 2993.0800000000004, "end": 2997.92, "text": " cube size gravity etc and then they just resample everything like new gravity new"}, {"start": 2997.92, "end": 3002.0800000000004, "text": " cube size new cube mass and they show that again the policy kind of slows down"}, {"start": 3002.0800000000004, "end": 3008.52, "text": " and then again learns the dynamics encodes the dynamics inside of the LSTM"}, {"start": 3008.52, "end": 3013.12, "text": " hidden cell and yeah it just becomes much better the third thing they did is"}, {"start": 3013.12, "end": 3017.7599999999998, "text": " breaking a random joint and basically what I do is at one point of time they"}, {"start": 3017.7599999999998, "end": 3022.32, "text": " just disable one of the joints so whatever action we send to the joint it"}, {"start": 3022.32, "end": 3029.28, "text": " won't move because it's dead and they show that after that happens basically it"}, {"start": 3029.28, "end": 3036.78, "text": " will still learn how to how to solve the problem will be a bit slower and the"}, {"start": 3036.78, "end": 3041.7200000000003, "text": " green line here is just a policy that started with a broken with the same"}, {"start": 3041.7200000000003, "end": 3046.1600000000003, "text": " joint broken from the start and you can see that this one is a bit worse and"}, {"start": 3046.1600000000003, "end": 3053.2400000000002, "text": " they speculate here I think just a second that we hypothesize that this"}, {"start": 3053.2400000000002, "end": 3057.1200000000003, "text": " could be because the policy has already locked in some information in its"}, {"start": 3057.1200000000003, "end": 3061.36, "text": " recurrent state and therefore is not adjustable as adjustable anymore"}, {"start": 3061.36, "end": 3065.6400000000003, "text": " alternatively maybe it just has not accumulated enough information yet and"}, {"start": 3065.64, "end": 3069.52, "text": " the baseline policy is in the lead because it has an information advantage"}, {"start": 3069.52, "end": 3074.8399999999997, "text": " of at least 10 achieved flips I doubt that's the case because I can see the"}, {"start": 3074.8399999999997, "end": 
3080.2, "text": " offset continue like being constant even after 20 steps here so yeah that's"}, {"start": 3080.2, "end": 3085.98, "text": " that'd be interesting to investigate further still you can understand the"}, {"start": 3085.98, "end": 3091.74, "text": " high-level picture here basically it's learning how to encode environment"}, {"start": 3091.74, "end": 3095.2, "text": " parameters inside the LSTM is hidden state which is pretty awesome a couple"}, {"start": 3095.2, "end": 3101.12, "text": " more things yeah they say here we found very interesting that our policy can"}, {"start": 3101.12, "end": 3105.72, "text": " learn to adapt internally to broken joints this is in contrast contrast to"}, {"start": 3105.72, "end": 3109.8799999999997, "text": " prior work it explicitly searched over a set of policies until they found one"}, {"start": 3109.8799999999997, "end": 3115.14, "text": " that works for a broken robot so I'm not sure which of the randomizations they"}, {"start": 3115.14, "end": 3121.3599999999997, "text": " applied in the ADR setting helps them learn this thing whether they I think I'm"}, {"start": 3121.36, "end": 3126.04, "text": " pretty confident they didn't simulate the broken joints inside the training so"}, {"start": 3126.04, "end": 3130.7200000000003, "text": " it's pretty funny that this actually works the final thing I want to show"}, {"start": 3130.7200000000003, "end": 3136.2000000000003, "text": " you is this the train the predictor over those LSTM's hidden states to predict"}, {"start": 3136.2000000000003, "end": 3140.7200000000003, "text": " whether a specific parameter in a particular environment is higher than"}, {"start": 3140.7200000000003, "end": 3144.6, "text": " the average or lower than the average to make that a bit less abstract so it's"}, {"start": 3144.6, "end": 3148.44, "text": " trying to predict whether maybe the gravity or the cube mass is higher than"}, {"start": 3148.44, "end": 3153.4, "text": " the average over all of the environments was trained on or lower so they just use"}, {"start": 3153.4, "end": 3159.44, "text": " a cross entropy loss and to it outputs one when it's certain that the"}, {"start": 3159.44, "end": 3164.64, "text": " parameter is higher than the average otherwise it tries to output zero and"}, {"start": 3164.64, "end": 3169.64, "text": " you can see here maybe for the cube size the range is this and this is the"}, {"start": 3169.64, "end": 3175.36, "text": " average and similarly for the Rubik's Cube problem and once it was trained"}, {"start": 3175.36, "end": 3179.04, "text": " like that so on a bunch of different environments we take the LSTM's hidden"}, {"start": 3179.04, "end": 3182.28, "text": " state we have a simple predictor I think they just use a simple fully connected"}, {"start": 3182.28, "end": 3187.84, "text": " layer and finally the sigmoid output so once they train it on a bunch of those"}, {"start": 3187.84, "end": 3194.56, "text": " environments with the true labels known they deployed and they showed that here"}, {"start": 3194.56, "end": 3199.32, "text": " for example on the block orientation policy you can see that certain"}, {"start": 3199.32, "end": 3206.56, "text": " parameters such as cube size the the the the actual LSTM hidden states is"}, {"start": 3206.56, "end": 3213.1200000000003, "text": " informed enough so that the policy that was trained can learn to with 85%"}, {"start": 3213.1200000000003, "end": 3217.0, "text": " accuracy predict that it's higher than the average which means there 
is enough"}, {"start": 3217.0, "end": 3223.76, "text": " information inside of the LSTM state to do this and they notice also that for"}, {"start": 3223.76, "end": 3227.92, "text": " the cube mass the block orientation policy stores more information about it"}, {"start": 3227.92, "end": 3232.48, "text": " than the Rubik's Cube policies you can see here this the cube mass the purple"}, {"start": 3232.48, "end": 3238.7200000000003, "text": " one has around 60 something accuracy on the block orientation task whereas here"}, {"start": 3238.7200000000003, "end": 3244.7200000000003, "text": " it's a bit above 50% which is close to random chance and they keep they say"}, {"start": 3244.7200000000003, "end": 3248.64, "text": " here we hypothesize that this is because the block orientation policy uses a"}, {"start": 3248.64, "end": 3253.42, "text": " dynamic approach that tosses the block around to flip it in order contrast the"}, {"start": 3253.42, "end": 3258.04, "text": " Rubik's Cube policy flips the cube much more deliberately in order to avoid"}, {"start": 3258.04, "end": 3264.28, "text": " unintentional misalignment of the cube faces so that means that basically the"}, {"start": 3264.28, "end": 3269.2400000000002, "text": " sewing Rubik's Cube is it's less important to know the actual cube mass"}, {"start": 3269.2400000000002, "end": 3273.0, "text": " and that's why the LSTM just doesn't encode that information whereas here"}, {"start": 3273.0, "end": 3278.4, "text": " because flipping is much less deliberate they don't have to be afraid that one of"}, {"start": 3278.4, "end": 3282.52, "text": " the faces will misalign like this you can just toss it with much more much"}, {"start": 3282.52, "end": 3287.4, "text": " higher strength so it encodes that information a bit more than the Rubik's"}, {"start": 3287.4, "end": 3293.28, "text": " Cube policy that's super fascinating think and they say here this is the"}, {"start": 3293.28, "end": 3296.52, "text": " evidence that the policy successfully inferring and storing useful information"}, {"start": 3296.52, "end": 3301.04, "text": " regarding the environment parameters in LSTM hidden and cell states know that we"}, {"start": 3301.04, "end": 3306.04, "text": " do not train the policy explicitly to store information about the semantically"}, {"start": 3306.04, "end": 3310.28, "text": " meaningful physical parameters and that's really super awesome that's it"}, {"start": 3310.28, "end": 3314.0, "text": " hopefully you liked the video if you did you know the drill just like subscribe"}, {"start": 3314.0, "end": 3341.16, "text": " share and hope to see you next time"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=0slFo1rV0EM
DeepMind's AlphaGo Zero and AlphaZero | RL paper explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I cover AlphaGo Zero (and AlphaZero), an agent that learned, through pure self-play and zero human knowledge, to beat all of the best human players and algorithms in Go, Chess, and Shogi. You'll learn about: ✔️AlphaGo Zero (Mastering the game of go without human knowledge) ✔️AlphaZero (A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ AlphaGo Zero paper: http://augmentingcognition.com/assets/Silver2017a.pdf ✅ AlphaZero paper: https://arxiv.org/abs/1712.01815 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 - AlphaGo lineage of agents 02:35 - Comparing AlphaGo Zero with AlphaGo 06:50 - High-level explanation of AlphaGo Zero inner workings 10:20 - MCTS recap 12:00 - Training details and curves 15:10 - Architecture impact 17:30 - Knowledge acquired 20:55 - Results 22:05 - Discovering joseki 23:40 - Human domain knowledge in AlphaGo Zero 25:30 - Pipeline overview 28:40 - Self-play thread explained 31:55 - Further details (PUCT recap, etc.) 35:50 - AlphaZero (what's new?) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #alphagozero #alphazero #deepmind
What's up! In this video I'm going to cover the AlphaGo Zero paper, "Mastering the game of Go without human knowledge", and I'll also cover the AlphaZero paper. I'll tell you the modifications needed to get from AlphaGo Zero to AlphaZero, and just a hint: it's a really small modification, basically reapplying the same idea to chess and shogi as well. So what's the trick with this paper? Hopefully you've already watched my previous video on AlphaGo; if you haven't, I strongly recommend you go ahead and watch it, I'll link it somewhere here. The main thing is that they are not using human expert data anymore. They say it here: "Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. Starting tabula rasa" (blank slate in Latin) "our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo." A small digression if you're not familiar with this: AlphaGo is a lineage of agents. Even though they published these two papers (actually three, and later MuZero as well), there were a couple more iterations in between, so it's a spectrum, and let me quickly explain how that goes. The previous paper, the AlphaGo paper, was actually AlphaGo Fan, because that's the model that defeated Fan Hui, the European Go champion, back in 2015. Then they had AlphaGo Lee, which defeated Lee Sedol back in 2016 in the famous match where Lee Sedol lost 4-1, the first time an algorithm defeated a Go grandmaster of that caliber. Then they built something called AlphaGo Master, which won 60-0 against the best Go grandmasters, in 2017 I think. And finally we're here: AlphaGo Zero, which is the best iteration so far and the only version that doesn't use human expert knowledge. Going from one end of that spectrum to the other, the efficiency of the algorithm improved while the amount of hard-coded data and heuristics slowly went down, so Zero finally doesn't use human data at all and is much more efficient. That's the paper we'll be reviewing in this video. So let me go over the differences between this paper and the AlphaGo paper. First and foremost, it is trained solely by self-play RL. If you remember from the previous video, that means we won't be having these networks anymore: the SL policy is gone, because it was trained on human expert data, and the fast rollout policy is also gone. Those were the two parts that leveraged human expert data, and we're left with the other two. Let's see what else is different. Second, it uses only the black and white stones from the board as input features. That means no additional heuristics and no hand-crafting of features; we're just using the raw board data, plus a couple of past board planes because we need some history to play the game optimally. That's the only input, so it's raw board data and nothing else: no hand-crafting, no domain knowledge integrated there.
Third, it uses a single neural network rather than separate policy and value networks, so those two are merged into one. And finally, it uses a simpler tree search that relies on that single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts. Again, if you remember from the AlphaGo paper, when we were doing the Monte Carlo tree search and got to a leaf position, we did two things: one was to pass the state into a value network, the second was to do a rollout, basically playing the game until the end with the fast rollout policy to get an outcome. Then we'd back up both pieces of information, the value and the rollout outcome, up the tree. We don't have any of that anymore, and that's the new modification they introduced, so it's much simpler. If you take a look at it, this thing is simpler in every aspect: the rollouts are gone, the two networks are merged into one, the MCTS algorithm is simpler, and it also achieves much better results, which is quite impressive if you ask me. Okay, hopefully you got some intuition for the differences between AlphaGo Zero and AlphaGo, so let's continue. Let's see what the output from the network actually is. It's a tuple: we get the policy vector and the value scalar from the network. The vector of move probabilities p represents the probability of selecting each move a, including the pass. If you look at the AlphaGo implementation, I think they didn't have the pass move, so the output policy vector was just a flat 19-by-19, i.e. 361-dimensional, vector; here we have plus one for the pass move. Additionally, we have the value part of the network combined in. If you look at how this is put together, you basically have some kind of CNN trunk and then two streams coming off the top of it: one is the policy head and the other is the value head. That's our new model, hopefully that's clear enough. An additional detail is that they're using ResNets instead of plain CNNs. Previously in AlphaGo they were using plain old convolutional neural networks; now they're using residual networks, and hopefully you're all familiar with those. What ResNets did is add a residual connection, which adds the feature maps from the previous layer on top of the processed features, and they showed that with this small trick you can train much deeper networks; that was one of the major points in deep learning history. So yeah, they are now using ResNets instead of pure CNNs.
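To make the "shared trunk with two heads" picture concrete, here is a minimal sketch of such a network. The layer sizes, number of residual blocks, and number of input planes are illustrative placeholders, not the paper's exact configuration, and batch normalization is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)           # skip connection: add the input back in

class PolicyValueNet(nn.Module):
    """Single network with a shared residual trunk and two heads:
    a policy over 19*19 + 1 moves (including pass) and a scalar value."""
    def __init__(self, in_planes=17, channels=64, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(in_planes, channels, 3, padding=1)
        self.trunk = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.policy_head = nn.Linear(channels * 19 * 19, 19 * 19 + 1)
        self.value_head = nn.Linear(channels * 19 * 19, 1)

    def forward(self, board):            # board: (batch, in_planes, 19, 19) raw stone planes
        h = self.trunk(F.relu(self.stem(board)))
        h = h.flatten(1)
        p = F.log_softmax(self.policy_head(h), dim=-1)   # move log-probabilities
        v = torch.tanh(self.value_head(h))               # value in [-1, 1]
        return p, v

# usage: p, v = PolicyValueNet()(torch.zeros(1, 17, 19, 19))
```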
Okay, let's slowly start digging into how the actual algorithm works; I'll stress the differences with AlphaGo, since many things overlap. The main difference in the training procedure is that they integrated MCTS directly into training, rather than using it only at play time. This is how the network is trained: you take your current agent, the network that has both the policy and the value head, and you play a self-play game to the end. Here's the trick: in each state you pass the board into the network and it outputs the probability vector and the value scalar, but instead of picking the action directly from that raw output, they run MCTS over that state and pick the action from the MCTS probability distribution, which is a much better prediction than the raw probability coming directly from the network. After picking the action we get into a new state, we again run MCTS, and we keep doing that until the end of the game, saving a tuple for each time step t: the state, the MCTS policy (that's the pi vector), and the outcome z, which is plus one if that player won and minus one if it lost. That's the main trick, running MCTS during the actual self-play.

Once we have that, training the agent works as follows. We take those stored tuples and we want two things: first, the policy vector p that comes directly from the raw network should get as close as possible to the MCTS policy pi; second, we regress the value output to be really close to the final outcome of that particular game. Here's the equation, roughly l = (z - v)^2 - pi^T log p + c * ||theta||^2: a simple mean squared error for the value function, a cross-entropy loss for the policy, and an L2 regularization term. The cross-entropy has its minimum exactly when p equals pi: before that the KL divergence between them is non-zero, and once p becomes equal to pi the KL term is zero and we've converged to the minimum, which is why this term pushes p to be close to pi. So that's it; it's pretty simple, much simpler than AlphaGo. As they put it, the neural network parameters theta are updated to maximize the similarity of the policy vector p to the search probabilities pi, and to minimize the error between the predicted winner v and the game winner z.
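A minimal sketch of that combined objective, assuming PyTorch, where mcts_pi holds the search probabilities pi and z the game outcome; the c * ||theta||^2 term is usually folded into the optimizer's weight decay rather than written out.

    import torch
    import torch.nn.functional as F

    def alphago_zero_loss(policy_logits, value, mcts_pi, z):
        # value regression: (z - v)^2
        value_loss = F.mse_loss(value.squeeze(-1), z)
        # cross-entropy between the MCTS policy pi and the network policy p
        policy_loss = -(mcts_pi * F.log_softmax(policy_logits, dim=-1)).sum(dim=-1).mean()
        return value_loss + policy_loss

    # the L2 term is typically handled by the optimizer, for example:
    # optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4)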
As for the Monte Carlo tree search, a short recap of how it works: they again use the PUCT algorithm, a variant of the upper confidence bound for trees. You start from the root state and pick actions so as to maximize the PUCT score, which is the Q value (the action-value function) plus the U term, which contains the visitation counts and the priors; by always following the highest-scoring edges you get down to a leaf node. In the case of AlphaGo Zero there's no expansion threshold (it was around 40 in AlphaGo): they expand the leaf every single time. Expanding means you pass that leaf state into the network, it outputs the probability vector and the value, you back the value up the tree, and you use p as the priors for the new child nodes. Next time one of the threads comes to this node it will pick one of those children, again according to the PUCT score, maybe expand that one, and that's how the Monte Carlo search tree gets built up; hopefully that was already clear from the previous video. The figure just shows the value being backed up the tree, updating the statistics: both the visitation counts and the Monte Carlo value estimates.

They also say that over the course of training, 4.9 million games of self-play were generated, using 1,600 simulations for each MCTS, which corresponds to approximately 0.4 seconds of thinking time per move. So when you're building these trees during self-play, you run 1,600 simulations to build the tree, and then you pick the action that maximizes the visitation count in the root node (there's a temperature coefficient involved, but we'll get to that a bit later). After this training, surprisingly, AlphaGo Zero outperformed AlphaGo Lee after just 36 hours; you can see it here, after just 36 hours it's already better than the agent that beat Lee Sedol back in 2016, and that's impressive. They note that AlphaGo Lee was actually trained over several months, so a lot of compute went into that one; keep that in mind. Also, AlphaGo Zero used a single machine with four TPUs, whereas AlphaGo Lee was distributed over many machines and used 48 TPUs, and AlphaGo Zero defeated AlphaGo Lee 100 to 0. That's some serious performance from this pure self-play reinforcement learning approach, and it's amazing. You can see the curves here: the dotted line is AlphaGo Lee, and the purple one is the same architecture as AlphaGo Zero but trained on human expert data, and you can see it can't achieve an Elo score as high as AlphaGo Zero's.
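Going back to the PUCT selection rule recapped above, a minimal sketch of picking an edge at a node could look like this; the data layout and the c_puct constant are my own simplifications, the paper has its own bookkeeping and schedules.

    import math

    def puct_select(children, c_puct=1.5):
        # children: dict mapping action -> (P, N, Q), i.e. prior, visit count, mean action value
        total_n = sum(n for (_, n, _) in children.values())

        def score(edge):
            p, n, q = edge
            u = c_puct * p * math.sqrt(total_n) / (1 + n)  # exploration bonus: large prior and few visits -> large U
            return q + u                                   # the PUCT score

        return max(children, key=lambda a: score(children[a]))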
Looking at the next chart, we're trying to see how good these networks, both AlphaGo Zero and the supervised learning network, are at predicting the actual moves that human experts would make, and you can see that the supervised method is actually better here. That shouldn't surprise you, because the goal of AlphaGo Zero is not to be better at predicting human expert moves; it's to be the best player, and that doesn't necessarily correlate with human play. We'll see exactly that a bit later, where AlphaGo Zero discovers certain josekis (corner sequences in Go) that weren't discovered by humans before this agent, which is awesome. And we can see here, looking at the mean squared error on professional game outcomes, that AlphaGo Zero is much better at predicting the final outcome of games than the supervised approach. That means if you take a particular state from a human game and pass it through the network, it will be better at predicting the final outcome of that game than the SL network is. This chart is actually more important than the move-prediction one, so keep that in mind.

Aside from that, they try to decouple the contribution that came from the algorithm from the contribution that came from the architecture improvement, since they're using ResNets now, remember. Looking at these charts, the leftmost network is the dual ResNet ("dual" because a single architecture outputs both the policy and the value), and next to it is the sep-conv variant, separate networks for the policy and for the value built with plain CNNs instead of ResNets, which is essentially what AlphaGo used. With everything else pretty much the same, you can see the Elo rating goes up, so a huge boost comes just from the architecture. The other combinations are separate ResNets, and the dual architecture but with plain CNNs. Similarly, looking at the prediction accuracy on human moves, the dual ResNet is actually a bit worse, but looking at the MSE on game outcomes it's the best, and overall that architecture gave them the best results.
So that's what they're using. One more detail: they say this is partly due to improved computational efficiency, but more importantly the dual objective regularizes the network towards a common representation that supports multiple use cases. The idea is the following: you have one convolutional trunk, with the policy vector and the value scalar coming off it. When you do a gradient update to get the policy closer to the MCTS probability vector, you're updating the shared features, but those same features are also used by the value stream, so whatever you do, you're improving both the value and the policy; you're finding features that are good for both, and that helps boost the performance.

Okay, let's see some of the knowledge this network learned during training. In the top row of this chart are some of those josekis I mentioned. I don't know how to play Go, but I know enough to roughly understand what's happening; if you're familiar with Go, this will probably be even more interesting for you. These are josekis that humans play, and you can see that AlphaGo Zero actually reinvented them at some point during training: the first joseki was discovered around the 10-hour mark, the second maybe around 15 hours, and the last one around the 36-hour mark. That's super fun, and we'll later see how the frequencies changed: at one point in training it was heavily using one of these josekis and then it just ditched it, because it was actually worse; it found better patterns and started using those instead. The second row shows the josekis that were the most frequent in a given part of the training timeline; for example, around the 48-hour mark it was heavily using this particular pattern. Finally, the last row shows self-play games: take, say, the three-hour mark, pick one particular self-play game, plot it, and you can see how the play evolved. Early on it plays really greedily, clustering all of the black and white moves in the same region of the board, whereas as training progresses the games get more subtle (again, I don't really understand Go, but after reading the paper I understand this much): it's fighting multiple battles distributed across the board, and it's a much better player after 70 hours than at the beginning. So that's some of the knowledge that AlphaGo Zero rediscovered, and it's pretty damn impressive if you ask me.

Let's continue and see what's next. They say here that ultimately AlphaGo Zero preferred new joseki variants that were previously unknown; I mentioned that already, and it's pretty fascinating.
Surprisingly, the ladder capture sequence, which is one of the first elements of Go knowledge learned by humans, was only understood by AlphaGo Zero much later in training. That's cool, because it suggests it isn't constrained in any way by human priors and knowledge: it discovered some things that are obvious to humans much later, but it also discovered things that were not obvious to humans at all, and that's awesome. They write that humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books, and that in the space of a few days, starting tabula rasa (blank slate, totally random), AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games. They're romanticizing a bit here, but it is pretty awesome.

Looking at the charts, you can see that AlphaGo Zero beats AlphaGo Lee after just a couple of days, and then starts being better than the AlphaGo Master version after thirty-something days. There's a similar chart here: this is AlphaGo Zero's performance, these are Master, Lee and Fan, then we have some commercial Go programs which are much worse than even AlphaGo Fan, and then an open-source Go program which is worse still. Finally, this bar is the raw network performance, i.e. without MCTS, just playing from the probabilities coming out of the policy head, and you get a much, much lower Elo score than with MCTS, so search is really important for playing Go well.

That was the high-level overview I wanted to show you, and now we'll get into some details, but before that let me show you those josekis I was mentioning. Here is one joseki that humans play a lot; it even has a name, the 5-3 point press or something like that, and you can see its frequency over training, and similarly for the other josekis. For some patterns, like this knight's move pincer, the frequency peaked and then slowly decreased after 70 hours of training, which means it was discovering and then ditching patterns, and that's cool. The second chart shows some of the josekis it was playing really often at one point of training; the 3-3 invasion, for example, has a peak here, so it was playing that particular joseki a lot at one point in time, and so on and so forth.

Now let's dig into the details of the actual algorithm. I've covered this part already; it's the spectrum I mentioned, going from Fan to Lee to Master and finally to AlphaGo Zero, each time increasing the efficiency. The Fan version used 176 GPUs, the Lee version used something like 48 TPUs I think, AlphaGo Master used even less hardware, and AlphaGo Zero uses only four TPUs, while being much better, as you saw, than Lee and Master and every version so far.
They say here that the primary contribution is to demonstrate that superhuman performance can be achieved without human domain knowledge; we've reiterated that a couple of times already. To clarify this contribution, they enumerate the items of domain knowledge that AlphaGo Zero uses, explicitly or implicitly, either in its training procedure or in its MCTS; these are the items of knowledge that would need to be replaced for AlphaGo Zero to learn a different alternating Markov game. This is the main point: after you modify a couple of these you get to AlphaZero, and you can play chess and shogi as well as Go. The most important part is the symmetry. The rules of Go are invariant under rotation and reflection, and this knowledge is used in AlphaGo Zero both by augmenting the dataset during training to include rotations and reflections of each position, and, if you remember from the AlphaGo video, by sampling a random element of the dihedral group each time a position is evaluated. That group has eight elements, because you combine four rotations with a reflection or no reflection; using them effectively makes your data bigger, and they showed it improves training. We'll have to throw that part out, because other games such as chess don't have that symmetry as an inherent part of the game. Another detail: the Monte Carlo tree search hyperparameters were chosen with Bayesian optimization.

Now for a really important part: AlphaGo Zero's self-play training pipeline consists of three main components. First, the neural network parameters are continually optimized from recent self-play data. Second, AlphaGo Zero players are continually evaluated. Third, the best-performing player so far, alpha-theta-star, is used to generate new self-play data. Let's see how that works exactly. You have something like a replay buffer, you have the current best network, and you have a bunch of self-play games happening in parallel. We keep taking data from the replay buffer and using it to update the network, and every thousand updates we dump a checkpoint and compare that checkpoint with the previously stored best model, this alpha-theta-star. The interesting part is that the best agent's weights are the ones used by all of the self-play threads: when one of the self-play threads finishes a game and wants to start over, it pulls the best parameters, uses that network to play the game, and stores the resulting tuples (the state s, the MCTS policy pi and the game outcome) back into the storage. That's how the dynamics of the whole pipeline work. To finish the evaluation part: once we dump a checkpoint, they play 400 games between that checkpoint and the current best model, and whichever model wins becomes the best model going forward.
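Before going further, to make the dihedral-group augmentation from a moment ago concrete, here's a tiny numpy sketch of the eight board symmetries; when you actually use this for training or evaluation, the policy target has to be remapped with the same transform, and that bookkeeping is omitted here.

    import numpy as np

    def dihedral_symmetries(plane):
        # the eight symmetries of the Go board: four rotations, each with and without a flip
        out = []
        for k in range(4):
            rotated = np.rot90(plane, k)
            out.append(rotated)
            out.append(np.fliplr(rotated))
        return out  # list of 8 transformed boards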
That winning model is then used for all of the subsequent self-play games, so when the next self-play thread finishes and wants to start over, it just pulls the newest best parameters and starts playing. As they put it: the optimization process produces a new checkpoint every thousand training steps; if the new player wins by a margin of over 55%, it becomes the best player, is subsequently used for self-play generation, and also becomes the baseline for subsequent comparisons. I mentioned that already; the 400 games between the checkpoint and the best model are there to figure out whether the checkpoint is statistically significantly better than the best model so far.

Okay, now that we have the best player, let's see how its weights are used to play these self-play games. The best current player alpha-theta-star, as selected by the evaluator, is used to generate data: in each iteration it plays 25,000 games of self-play, and it uses 1,600 MCTS simulations to select each move. So a self-play thread fetches the newest weights of alpha-theta-star, and, as they say, for the first 30 moves of each game the temperature is set to 1, which selects moves proportionally to their visit counts in MCTS and ensures a diverse set of positions is encountered. That means the following: starting from the initial board state, for the first 30 steps we sample proportionally. If we zoom into some state s0, run the MCTS and get the visit-count distribution, then the action with the highest probability will be taken most often, but the other actions still get sampled sometimes; since we're constantly using the same model, this ensures some diversity in the opening positions. From that point on we start playing greedily, because the temperature goes to zero: the distribution collapses, everything except the most-visited action gets probability zero, that action gets probability one, and we always pick the highest-probability move.
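As a minimal sketch of that temperature trick (visit counts in, a sampled move and the pi target out); treat it as an illustration rather than the exact implementation.

    import numpy as np

    def pick_move(visit_counts, temperature):
        # visit_counts: the N(s, a) values at the root after the simulations
        counts = np.asarray(visit_counts, dtype=np.float64)
        if temperature == 0:                      # after move 30: act greedily on the most-visited action
            pi = np.zeros_like(counts)
            pi[counts.argmax()] = 1.0
        else:                                     # first 30 moves: pi(a) proportional to N(a)^(1/temperature)
            scaled = counts ** (1.0 / temperature)
            pi = scaled / scaled.sum()
        return int(np.random.choice(len(pi), p=pi)), pi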
Then they say the following: additional exploration is achieved by adding Dirichlet noise to the prior probabilities in the root node s0. So if we take a certain state and zoom in: before we even start building the MCTS tree, the root priors, which we get by passing the state through the network, get mixed with Dirichlet noise. That randomizes how the tree gets explored, and it additionally gives us a more diverse set of tuples to feed back into the storage, the replay buffer, which the currently best network then uses to update its weights.

This next part you should know from the previous video: for each edge we store statistics, the visitation counts, the cumulative Monte Carlo estimates, the action-value function, and the priors. And here is PUCT again: the more you visit a given edge, the smaller its U term becomes, which means you increasingly rely on the action-value function to decide whether to pick that edge or not. Another detail: positions in the queue are evaluated by the neural network using a mini-batch size of 8, and the search thread is locked until the evaluation completes. This is a bit different from AlphaGo, which used an asynchronous policy here. What it means is that on the GPU you have a queue with eight slots; you wait until eight threads have reached their leaf nodes and pushed those positions into the queue, then in a single batch the network produces the priors and the values, and only then do those threads resume, so the threads are synchronous. That's a detail that's new to AlphaGo Zero. They also keep using the virtual loss: if this is the root of the MCTS tree and this is the leaf (the states connected by the squiggly-line actions), the virtual loss temporarily treats every edge along that path as if it had been visited and lost a few extra times, say three, which lowers the PUCT score along that path and makes all of the other threads that are building up the tree less likely to take this exact route to the leaf. That's the whole point: one thread is already exploring this path, so the virtual loss discourages the other threads from exploring the same one. Finally, moves are selected using exponentiated visit counts with the temperature coefficient we mentioned: the first 30 moves are played proportionally, and then the temperature drops to zero, which means we just pick the highest-probability action.
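And circling back to that Dirichlet noise at the root, here's a minimal sketch of the mixing step. The video doesn't give the constants; the epsilon = 0.25 and alpha = 0.03 defaults below are the values reported in the paper for Go, so treat them as coming from there rather than from the video.

    import numpy as np

    def noisy_root_priors(priors, epsilon=0.25, alpha=0.03):
        # mix Dirichlet noise into the root priors: P(a) = (1 - eps) * p_a + eps * eta_a, with eta ~ Dir(alpha)
        priors = np.asarray(priors, dtype=np.float64)
        noise = np.random.dirichlet([alpha] * len(priors))
        return (1 - epsilon) * priors + epsilon * noise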
Let's summarize the principal differences. AlphaGo Zero does not use any rollouts. It uses a single neural network instead of separate policy and value networks. Leaf nodes are always expanded, so there is no expansion threshold; we don't have to wait for something like 40 visits, which in AlphaGo was computed dynamically so that the GPUs weren't starving (just an optimization detail), we simply expand a node as soon as we encounter it as a leaf. Each search thread simply waits for the neural network evaluation, so it's synchronous, rather than performing evaluation and backup asynchronously. And there is no tree policy, which, if you remember, was used to set up priors: in AlphaGo, when we got to a leaf node, the tree policy would set some fast placeholder priors while the leaf was sent to the GPU, where the SL policy network would run inference and calculate the actual priors, which would then get swapped in. We don't have that placeholder network anymore. So there are far fewer moving parts; it's an easier algorithm, we're just using reinforcement learning, and we got better results.

Hopefully you got something out of this, and now I just want to make clear what the differences are between this model and AlphaZero. It's just a small additional step if you ask me; I don't think anything hugely significant happened there, just a couple of differences. I don't quite understand the first one: they say AlphaZero instead estimates and optimizes the expected outcome, taking account of draws or potentially other outcomes. This bugs me a bit, because AlphaGo Zero's value function already estimates a value between plus one and minus one, where plus one means this player is going to win and minus one means we're certain we're going to lose. My guess is that the point is simply that Go games effectively always have a winner, while chess is full of draws, so the value target has to be a genuine expected outcome over win, draw and loss rather than just a win probability, but I'm not sure; if you know, please leave a comment down below. The second one is obvious: the rules of chess and shogi are asymmetric, so in general symmetries cannot be assumed. That's something I already mentioned; we have to get rid of picking an element from the dihedral group, one of those eight reflection and rotation combinations, and they showed it doesn't hurt the performance much; a little bit, but you can compensate with compute. And the third thing: in contrast, AlphaZero simply maintains a single neural network that is updated continually, rather than waiting for an iteration to complete; self-play games are generated using the latest parameters of this network, omitting the evaluation step and the selection of the best player. So this time, instead of the periodic checkpointing and comparing for the best model, we continuously update one and the same set of weights using the replay buffer; the threads just keep filling that buffer, we keep taking data from it and updating the agent, and when a new self-play thread finishes and wants to start over, it just fetches the newest, and only, set of weights we are continually updating. That's the third difference, so not a lot there.
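Just to make that third difference concrete, here's a toy loop contrasting with the AlphaGo Zero pipeline described earlier. Every function in it is a trivial stand-in I made up for illustration, so read it as a diagram of the data flow (latest weights everywhere, no 400-game evaluator, no 55% gate), not as real code.

    import collections
    import random

    replay_buffer = collections.deque(maxlen=500_000)   # stores (state, mcts_pi, outcome) tuples

    def self_play_game(weights):
        # stand-in for a real self-play worker that would run MCTS with the given weights
        return [("state", "mcts_pi", random.choice([-1, 1]))]

    def train_step(weights, batch):
        # stand-in for one gradient update on the sampled batch
        return weights

    weights = "initial-weights"
    for step in range(1000):
        replay_buffer.extend(self_play_game(weights))                        # workers always use the latest weights
        batch = random.sample(list(replay_buffer), k=min(32, len(replay_buffer)))
        weights = train_step(weights, batch)                                 # no evaluator gate, no "best player" selection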
Okay, you can see the results here. On chess they compared against Stockfish, the best engine at the time, and you can see that after some time AlphaZero gets better than Stockfish. Then we have shogi, affectionately known as Japanese chess, where in a small amount of time the model gets better than Elmo, and finally it also gets better than AlphaGo Zero after some training. What's interesting across these three charts, if you ask me, is the chess one: you can see how small the Elo gap is there, and that reflects the decades of research that went into chess. Chess was considered the drosophila of AI; so many researchers spent so much time on it that there are so many good heuristics and so much hand-crafting in the Stockfish engine that it's hard to get much better than it. That's kind of funny.

The only thing that's actually not game-agnostic is this: they still add noise to the prior policy to ensure exploration, the Dirichlet noise I mentioned for AlphaGo Zero, and it's scaled in proportion to the typical number of legal moves for that game type, so that part is game-specific. Other than that, this is a pretty generic algorithm. One thing worth noticing is that you still have to train a single instance, a single agent, for every specific game, so this still doesn't generalize as much as we'd like it to; ideally you could train on Go and then just fine-tune for chess and it would be really good, but that's not the case, and to the best of my knowledge you pretty much have to train from scratch. Another thing worth noticing is that AlphaZero searches just 80,000 positions per second in chess and 40,000 in shogi, compared to 70 million for Stockfish and 35 million for Elmo. That shows this approach is much less brute-force; Stockfish and Elmo are much more similar to Deep Blue, which used a brute-force algorithm to beat Garry Kasparov back in 1997. They call it arguably a more human-like approach to search and to playing the games, and I agree.

So that's pretty much everything you need to know from the AlphaZero paper once you understand AlphaGo and AlphaGo Zero: ditch the symmetries, keep continually updating the agent, map in the specific rules of the game, make some small adaptations, apply the same algorithm, and as you can see, after some training time you can achieve state of the art on all three benchmarks. That was it for this video. If you have any feedback whatsoever on things I could improve, please feel free to comment down in the comment section and I'll read it. And you know the drill: just hit that subscribe button, hit the bell icon to get notified, and until next time, keep learning deep!
[{"start": 0.0, "end": 5.34, "text": " What's up in this video? I'm going to cover the AlphaGo Zero paper or mastering the game of go without human knowledge"}, {"start": 5.34, "end": 7.7, "text": " And I also cover the AlphaZero paper"}, {"start": 7.7, "end": 14.6, "text": " I'll actually tell you some of the modifications that are needed to get this thing to AlphaZero level and"}, {"start": 15.46, "end": 20.34, "text": " Just a hint. It's a really small modification and basically reapplying this to chess and shogi as well"}, {"start": 20.6, "end": 22.76, "text": " So what's the trick with this paper?"}, {"start": 22.96, "end": 28.46, "text": " And hopefully already watched my previous video on AlphaGo if you haven't I strongly recommend you go ahead and watch it"}, {"start": 28.46, "end": 30.12, "text": " I'll link it somewhere here"}, {"start": 30.12, "end": 33.8, "text": " Basically, the main thing is they are not using human experts data anymore"}, {"start": 33.8, "end": 39.28, "text": " So they say it here here we introduce an algorithm based solely on reinforcement learning without human data"}, {"start": 39.6, "end": 47.2, "text": " Guidance or domain knowledge beyond game rules starting tabula rasa or blank slate in Latin our new program AlphaGo Zero"}, {"start": 47.84, "end": 55.94, "text": " Achieved super superhuman performance winning hundred to zero against the previously published champion defeating AlphaGo agent"}, {"start": 55.94, "end": 57.94, "text": " so a"}, {"start": 57.94, "end": 60.46, "text": " Small digression here if you're not familiar with this"}, {"start": 60.879999999999995, "end": 68.0, "text": " Basically AlphaGo is a lineage of agents even though they published these two papers or actually three and then later a mu zero"}, {"start": 68.67999999999999, "end": 76.36, "text": " There were a couple more iterations in between so it's kind of a spectrum and let me just kind of explain how that functions"}, {"start": 76.36, "end": 78.16, "text": " So the previous paper"}, {"start": 78.16, "end": 80.92, "text": " The AlphaGo paper was actually AlphaGo fan"}, {"start": 80.92, "end": 86.1, "text": " Because that's the model that defeated fan who we the European champion in go back in 2015"}, {"start": 86.1, "end": 88.1, "text": " Then they had AlphaGo Lee"}, {"start": 89.14, "end": 92.94, "text": " Which defeated Lee Sedol back in 2016"}, {"start": 93.58, "end": 99.9, "text": " And that's the the famous match where Lee Sedol lost 4 to 1 and that was the first time that"}, {"start": 100.46000000000001, "end": 109.3, "text": " An agent and like an algorithm could defeat the grandmaster in go and finally they built something called AlphaGo"}, {"start": 109.3, "end": 111.3, "text": " master"}, {"start": 111.3, "end": 114.3, "text": " Which basically won 60 to 0"}, {"start": 115.02, "end": 122.82, "text": " Against all of the best grandmasters in go in 2017. I think and finally we're here AlphaGo zero"}, {"start": 123.62, "end": 124.66, "text": " which"}, {"start": 124.66, "end": 130.74, "text": " Yeah, which is basically that the best iteration so far in the only version that's not using human experts"}, {"start": 131.46, "end": 132.66, "text": " knowledge"}, {"start": 132.66, "end": 138.1, "text": " So going from here to here. 
It's a kind of a spectrum where the amount of"}, {"start": 138.1, "end": 144.42, "text": " the amount of compute like the both the efficiency of the algorithm was improved as well as the amount of"}, {"start": 144.9, "end": 151.22, "text": " Hard-coded data and heuristics was kind of slowly going down. So finally zero doesn't use the human data and"}, {"start": 151.78, "end": 156.26, "text": " Is much more efficient. So yeah, that's the paper will be reviewing in this in this video"}, {"start": 157.06, "end": 159.06, "text": " anyways"}, {"start": 159.06, "end": 162.9, "text": " So let me see what the differences are between this paper and the AlphaGo paper"}, {"start": 163.38, "end": 166.82, "text": " The first one is so first and foremost it is trained solely by"}, {"start": 166.82, "end": 173.38, "text": " self play RL at that means if you remember from the previous video that we won't be having these networks anymore"}, {"start": 173.38, "end": 179.62, "text": " So the SL policy is gone because this one was trained on the human experts data. The faster law policy is also gone"}, {"start": 179.62, "end": 181.62, "text": " So those were the two tasks that were"}, {"start": 182.42, "end": 188.01999999999998, "text": " Leveraging the human experts data and we're left with these two, but let's see what what else is different"}, {"start": 188.01999999999998, "end": 193.78, "text": " So second it uses only the black and white stones from the board as input features. So that means we're not using anything"}, {"start": 193.78, "end": 199.14000000000001, "text": " Additional heuristics and hand crafting of these these features. We're just using the raw board data"}, {"start": 199.86, "end": 205.7, "text": " And we just take a couple of those because we need some prehistory in order to play this game optimally"}, {"start": 206.02, "end": 211.14, "text": " And that's the only thing so we're basically using raw board data and that's it. No, no, no hand crafting"}, {"start": 211.14, "end": 217.54, "text": " No domain knowledge integrated there third. It uses a single neural network rather than separate policy and value networks"}, {"start": 218.34, "end": 221.46, "text": " So that means we're basically merging these two together"}, {"start": 221.46, "end": 224.26000000000002, "text": " So we'll be adding this will be combining somehow these two"}, {"start": 225.06, "end": 229.22, "text": " Into a single network and finally it uses a simpler tree search"}, {"start": 229.38, "end": 236.18, "text": " That relies upon the single neural network to value positions and sample moves without performing any montecarlo rollouts"}, {"start": 236.66, "end": 242.5, "text": " So again, if you remember from alpha go paper when we were doing the montecarlo tree search"}, {"start": 242.82, "end": 244.82, "text": " And we got to the leaf position"}, {"start": 244.82, "end": 248.74, "text": " So this is some state and we basically did two things one"}, {"start": 248.74, "end": 253.70000000000002, "text": " Was to pass the state into a value network. The second thing was to do a rollout"}, {"start": 254.10000000000002, "end": 258.90000000000003, "text": " Basically playing the game until the end with this fast rollout policy. 
So basically with this thing"}, {"start": 259.62, "end": 262.5, "text": " And we'd get some value then we'd"}, {"start": 263.62, "end": 264.34000000000003, "text": " Back"}, {"start": 264.34000000000003, "end": 270.34000000000003, "text": " Back propagate those two information so that both the value and the rollout outcome up the tree"}, {"start": 270.66, "end": 275.78000000000003, "text": " Basically, we don't have we don't have this anymore and that's the the new modification that they introduced"}, {"start": 275.78, "end": 279.05999999999995, "text": " So it's much simpler. So if you take a look at it"}, {"start": 279.61999999999995, "end": 285.53999999999996, "text": " This thing is so much simpler in every aspect. We don't have this we merge these two together and the"}, {"start": 286.34, "end": 293.53999999999996, "text": " MCTS algorithm is simpler so and we also achieve much better results. So that's quite impressive if you ask me"}, {"start": 294.58, "end": 302.02, "text": " Okay, hopefully you got some intuition of the differences between alpha go zero and alpha go so let's continue"}, {"start": 302.02, "end": 308.41999999999996, "text": " So let's see what the output from the network actually is. So basically it's a tuple we have the policy"}, {"start": 309.38, "end": 311.62, "text": " Vector and we get the value scalar"}, {"start": 312.18, "end": 317.7, "text": " Coming from the from the network. So the vector of move probabilities p represents the probability of selecting each move a"}, {"start": 318.18, "end": 322.26, "text": " Including the pass. So additionally if you take a look at the alpha go implementation"}, {"start": 322.41999999999996, "end": 324.41999999999996, "text": " I think they didn't have the the pass move"}, {"start": 324.41999999999996, "end": 328.9, "text": " So they just had the output vector of the policy vector was just 19 by 19"}, {"start": 328.9, "end": 335.14, "text": " The output vector of the policy vector was just 19 by 19 flat flat vector. So basically 361"}, {"start": 337.46, "end": 342.73999999999995, "text": " Dimensional vector and here we have uh, plus one for the pass move"}, {"start": 344.17999999999995, "end": 347.21999999999997, "text": " Additionally, we have the the value part of the network"}, {"start": 347.85999999999996, "end": 352.73999999999995, "text": " Combined together. So if you take a look how this looks like you basically have something like this"}, {"start": 353.38, "end": 356.41999999999996, "text": " This is some kind of a cnn and then we have"}, {"start": 356.42, "end": 363.62, "text": " Uh two streams coming off on top of this. So this thing is the policy part and then we have the the value network"}, {"start": 363.62, "end": 369.78000000000003, "text": " So this is our new model. 
Hopefully this is uh clear enough and additional detail is they're using"}, {"start": 370.40000000000003, "end": 374.74, "text": " Resonance instead of cnn's so previously in alpha go they were using plain old"}, {"start": 375.28000000000003, "end": 379.46000000000004, "text": " Convolutional neural networks now they're using resonance and hopefully you're all familiar with those"}, {"start": 379.54, "end": 384.42, "text": " so basically what resonance did is they uh, just kind of um"}, {"start": 384.42, "end": 387.46000000000004, "text": " added this residual connection which will be"}, {"start": 388.26, "end": 392.98, "text": " Which will add the feature maps from the previous layer on top of the process features"}, {"start": 393.38, "end": 397.38, "text": " And they show that by adding this small trick they can train much deeper"}, {"start": 397.70000000000005, "end": 402.34000000000003, "text": " Naprox and this was a one of the major points in the deep learning history"}, {"start": 403.38, "end": 407.78000000000003, "text": " So yeah, they they are now using resonance instead of cnn's pure cnn's"}, {"start": 408.74, "end": 409.94, "text": " Okay"}, {"start": 409.94, "end": 415.78, "text": " Okay, uh, let's slowly start digging into how the actual uh algorithm works and basically the"}, {"start": 416.34, "end": 418.66, "text": " I'll be stressing the differences with alpha go"}, {"start": 419.3, "end": 421.3, "text": " Many things overlap. So yeah"}, {"start": 421.7, "end": 429.62, "text": " Okay, so the main difference, uh in the training procedure is that they integrated the mcts directly into the training and they're not using it just for the"}, {"start": 430.18, "end": 430.9, "text": " uh"}, {"start": 430.9, "end": 435.78, "text": " During the play so basically this is how the network is trained. So you have a self-play game"}, {"start": 436.26, "end": 438.82, "text": " So take our our own agent. Uh, you take the"}, {"start": 438.82, "end": 446.02, "text": " The basically that's the uh, the network that has both the policy and the value and you just roll out the game until the end state"}, {"start": 446.34, "end": 448.26, "text": " Now this is the trick"}, {"start": 448.26, "end": 451.14, "text": " So in this state, uh, you you pass it into the network"}, {"start": 451.14, "end": 457.46, "text": " It outputs the probabilities and the so this is kind of the probability vector and the the value scalar"}, {"start": 457.86, "end": 462.18, "text": " So now instead of just picking the action directly from this output"}, {"start": 462.98, "end": 464.98, "text": " What I do is they uh"}, {"start": 464.98, "end": 468.98, "text": " Do the mcts over this state so this is"}, {"start": 469.78000000000003, "end": 475.3, "text": " Depicted here and they actually pick the action from this probability distribution, which is much better"}, {"start": 475.86, "end": 479.94, "text": " Prediction than the raw probability prediction from the directive from the network"}, {"start": 480.74, "end": 481.54, "text": " so"}, {"start": 481.54, "end": 485.46000000000004, "text": " Uh after just picking the action we get into a different state"}, {"start": 485.86, "end": 489.46000000000004, "text": " We again do the mcts and we roll out all of these"}, {"start": 489.46, "end": 496.26, "text": " Uh until the end and we're just saving this this tuple. So the data for each time step t is stored as state"}, {"start": 496.9, "end": 503.53999999999996, "text": " Policy, so that's the this vector here. 
We're storing this one and we're storing the outcome. So the basically the"}, {"start": 504.26, "end": 509.14, "text": " The end result basically either plus one if the player won or minus one if it lost"}, {"start": 509.29999999999995, "end": 513.06, "text": " So this is basically the main trick doing the mcts during the actual self-play"}, {"start": 513.54, "end": 517.14, "text": " once we have that what we do is we in order to train the"}, {"start": 517.14, "end": 519.22, "text": " The agent we do the following"}, {"start": 520.26, "end": 526.74, "text": " We take those tuples that we stored and we want to make sure that the output from the raw network"}, {"start": 526.74, "end": 529.86, "text": " So the policy vector that comes directly from the network"}, {"start": 530.66, "end": 534.98, "text": " Gets as close to the mcts policy as possible"}, {"start": 535.46, "end": 542.5, "text": " Secondly, we just regressed the value function so as to be really close to the final outcome of that particular game"}, {"start": 543.06, "end": 545.06, "text": " so"}, {"start": 545.06, "end": 547.6999999999999, "text": " Basically, here is the equation you can see"}, {"start": 548.42, "end": 554.0999999999999, "text": " We do a simple regression doing mean square error for the value function and we do the"}, {"start": 555.14, "end": 560.42, "text": " simple simple cross entropy loss here, so this has a minimum so the p vector is the"}, {"start": 561.38, "end": 564.5, "text": " probability coming from the raw network the pi is the"}, {"start": 565.06, "end": 571.54, "text": " Probability coming from the mcts and we want to make sure that the probability coming from the raw network is as close as possible"}, {"start": 571.54, "end": 576.9, "text": " To pi and that's the minimum of this function. So basically that's when we hit the entropy limit"}, {"start": 577.38, "end": 581.4599999999999, "text": " Before that we have the non zero k l divergence and once we get to"}, {"start": 582.26, "end": 588.8199999999999, "text": " To the same to once p becomes equal to pi we have zero k l and we converge to a minimum"}, {"start": 589.62, "end": 592.0999999999999, "text": " So basically that's why this thing"}, {"start": 592.8199999999999, "end": 599.62, "text": " Pushes p to be close to pi. Finally, the third term is just basically l2 regularization and"}, {"start": 599.62, "end": 604.42, "text": " And that's it. It's pretty simple. It's much simpler than alpha go"}, {"start": 604.9, "end": 607.54, "text": " And yeah, let's see what I see here"}, {"start": 607.54, "end": 612.74, "text": " So the neural network parameters theta are updated to maximize the similarity of the policy vector p"}, {"start": 613.22, "end": 617.46, "text": " To the search probabilities p pi and to minimize the error between the predicted winner"}, {"start": 618.02, "end": 624.1, "text": " V and the game winner z as for the montecarlo tree search a short recap how it works"}, {"start": 624.1, "end": 631.0600000000001, "text": " Basically, they're using the again the p uct algorithm. 
So the upper confidence bound for trees algorithm again"}, {"start": 631.14, "end": 637.14, "text": " Basically start from the root state and you pick the values so as to maximize this p uct score"}, {"start": 637.3000000000001, "end": 645.22, "text": " So the q value the action value function plus this u term which consists of which has this visitation term inside and priors"}, {"start": 645.7, "end": 649.78, "text": " and picking the highest ones they get to the leaf node and finally"}, {"start": 649.78, "end": 656.74, "text": " Uh in the case of alpha go zero, they don't have the uh, actually the expanding threshold which was around 40"}, {"start": 657.06, "end": 660.66, "text": " In alpha go they just uh expand the network every single time"}, {"start": 660.98, "end": 667.78, "text": " so once you get to the leaf you just expand it and basically what it means is you you pass that state into the"}, {"start": 668.5799999999999, "end": 674.3399999999999, "text": " network and it will output the probability vector and the value and you'll be back propping that"}, {"start": 674.34, "end": 682.02, "text": " um, basically back upping the the value function up the tree and you'll be using the p as the priors for the"}, {"start": 682.58, "end": 683.94, "text": " next uh"}, {"start": 683.94, "end": 688.82, "text": " Nodes in the in the tree. So basically next time if one of the threads comes here, it will pick up"}, {"start": 689.46, "end": 691.46, "text": " one of those of these"}, {"start": 691.46, "end": 694.34, "text": " Children depending on the again on the p uct score"}, {"start": 694.58, "end": 701.22, "text": " So maybe come here it will again expand this one and that's how we actually build up this montecarlo tree search"}, {"start": 701.22, "end": 704.6600000000001, "text": " And hopefully that that was also already clear from the previous video"}, {"start": 704.74, "end": 709.0600000000001, "text": " So here just depicted that the value function is back propped"}, {"start": 709.7, "end": 716.1800000000001, "text": " back backed up, uh up the tree and they're updating the statistics both the visitation counts as well as the"}, {"start": 717.62, "end": 720.02, "text": " Basically the the montecarlo estimates"}, {"start": 721.14, "end": 722.26, "text": " Okay"}, {"start": 722.26, "end": 724.02, "text": " That's that's it"}, {"start": 724.02, "end": 727.5400000000001, "text": " That's it. And um, so they said here over the course of training"}, {"start": 727.54, "end": 731.4, "text": " 4.9 million games of self play were generated using"}, {"start": 732.48, "end": 738.4399999999999, "text": " 1600 simulations for each mcts which corresponds to approximately 0.4 second"}, {"start": 739.24, "end": 745.88, "text": " Thinking time per move. So that means when you're building up these trees during the the self play you do these you do"}, {"start": 746.64, "end": 750.28, "text": " 1600 of these simulations in order to build a tree and then you pick the"}, {"start": 751.24, "end": 754.92, "text": " The action that actually maximizes the visitation count in the root node"}, {"start": 754.92, "end": 758.8399999999999, "text": " They have some temperature coefficient, but we'll get to that a bit later. 
Basically, that's it"}, {"start": 759.56, "end": 761.56, "text": " Okay, let's continue"}, {"start": 762.28, "end": 764.1999999999999, "text": " And after doing this training"}, {"start": 764.76, "end": 769.64, "text": " Surprisingly the alpha goes zero outperform the alpha goalie after just 36 hours"}, {"start": 769.64, "end": 773.0799999999999, "text": " So you can see it here after just 36 hours"}, {"start": 773.64, "end": 779.64, "text": " Basically, it's already better than the agent that that has beaten at least it all back in 2016"}, {"start": 779.64, "end": 780.92, "text": " And that's impressive"}, {"start": 780.92, "end": 785.4799999999999, "text": " And they said here the alpha goalie was actually trained over several months"}, {"start": 785.4799999999999, "end": 789.16, "text": " So a lot of compute went into this thing. So yeah, keep that in mind"}, {"start": 790.04, "end": 793.64, "text": " Alpha goes here. I use a single machine with four tpus"}, {"start": 794.04, "end": 798.68, "text": " Whereas alpha go Lee was distributed over many machines and used 48 tpus"}, {"start": 799.4799999999999, "end": 802.92, "text": " Alpha go zero defeated alpha go Lee 100 to zero"}, {"start": 803.4, "end": 809.4, "text": " So that's some serious performance that we got from this pure self-reinforced learning approach"}, {"start": 809.4, "end": 813.16, "text": " And that's amazing again. You can see the curves here"}, {"start": 814.1999999999999, "end": 815.16, "text": " the"}, {"start": 815.16, "end": 817.88, "text": " Dotted line here is the the lee method"}, {"start": 818.6, "end": 821.18, "text": " The the purple one is actually the same architecture"}, {"start": 821.9599999999999, "end": 823.48, "text": " as as the"}, {"start": 823.48, "end": 828.92, "text": " Alpha go zero but just using the human experts data and you can see it can't achieve the elo score"}, {"start": 829.48, "end": 832.04, "text": " As high as alpha go zero"}, {"start": 833.0799999999999, "end": 835.0799999999999, "text": " Okay looking at this curve here"}, {"start": 835.08, "end": 842.6, "text": " Uh, we're trying to see how good these networks both the alpha go zero as well this supervised learning network"}, {"start": 842.84, "end": 844.84, "text": " How good are they?"}, {"start": 845.4000000000001, "end": 848.2, "text": " predicting the actual moves that human experts will make"}, {"start": 848.6, "end": 853.4000000000001, "text": " and you can see that the supervised method is actually better here and that shouldn't surprise you because"}, {"start": 854.2, "end": 858.6, "text": " The goal for alpha go zero is not to be better at predicting human experts move"}, {"start": 858.6800000000001, "end": 864.36, "text": " It's to be the best agent the best player and that doesn't necessarily correlate with human play"}, {"start": 864.36, "end": 869.72, "text": " And we'll see that exactly happening a bit later where alpha go zero discover certain joseki's"}, {"start": 869.88, "end": 874.76, "text": " So those are some corner sequences in go they weren't discovered by by humans prior to this agent"}, {"start": 875.32, "end": 878.52, "text": " So that's that's awesome. 
Um, and we can see it here actually"}, {"start": 879.4, "end": 880.76, "text": " looking at the"}, {"start": 880.76, "end": 884.6, "text": " emcee of professional game outcomes, so the uh"}, {"start": 885.32, "end": 890.44, "text": " Of alpha go zero is much better at predicting the final outcome of the games than the supervised approach"}, {"start": 890.44, "end": 894.44, "text": " so that means if you're in a particular state and we just um,"}, {"start": 895.32, "end": 903.08, "text": " Basically pass it through the the network. It will be better at predicting the final outcome in those games played by humans than the sl"}, {"start": 903.72, "end": 909.6400000000001, "text": " Network. So yeah, this is actually more important than this chart keep that in mind"}, {"start": 910.9200000000001, "end": 912.9200000000001, "text": " Okay"}, {"start": 913.32, "end": 916.44, "text": " Aside from that they try to decouple the"}, {"start": 916.44, "end": 924.5200000000001, "text": " The the the contribution that came from the algorithm and what came from the actual architecture improvement because they're using resonance if you remember"}, {"start": 925.32, "end": 928.6, "text": " So looking at these charts here, um, the the leftmost"}, {"start": 930.0400000000001, "end": 932.84, "text": " This leftmost network is actually a dual"}, {"start": 933.1600000000001, "end": 939.1600000000001, "text": " Resnet meaning dual is because we have a single architecture that outputs both the policy as well as the value"}, {"start": 939.72, "end": 941.0, "text": " and"}, {"start": 941.0, "end": 942.5200000000001, "text": " here we have the"}, {"start": 942.52, "end": 948.36, "text": " Cep conf that's basically single network for both policy and for the value network"}, {"start": 948.68, "end": 955.48, "text": " So that's something alpha go used and it's using the simple cnn instead of resonant and you can see that the elo rating"}, {"start": 955.88, "end": 958.36, "text": " Uh goes up. So everything else is pretty much the same"}, {"start": 959.24, "end": 961.98, "text": " And we get a huge boost just from the architecture"}, {"start": 962.84, "end": 968.04, "text": " And these are the other combinations. So just using separate resonants and using dual architecture, but with cnn's"}, {"start": 968.04, "end": 972.5999999999999, "text": " Um similarly looking at the prediction accuracy of the human players"}, {"start": 973.0, "end": 979.0, "text": " This one is actually a bit worse by looking at the msc of the game. So just predict regressing the outcome"}, {"start": 979.56, "end": 985.48, "text": " This one is the best and finally this architecture gave them the best results. 
So that's what they're using"}, {"start": 986.28, "end": 988.28, "text": " um, okay"}, {"start": 988.5999999999999, "end": 989.64, "text": " um"}, {"start": 989.64, "end": 994.52, "text": " One more detail, uh, they say here this is partly due to improved computational efficiency"}, {"start": 994.52, "end": 1001.88, "text": " But more importantly the dual objective regularizes the network to a common representation that supports multiple use cases"}, {"start": 1002.1999999999999, "end": 1005.0799999999999, "text": " So the idea here is the following so you have a cnn"}, {"start": 1006.04, "end": 1010.36, "text": " And since you're training the policy, so this is the policy vector here and this is the"}, {"start": 1011.0799999999999, "end": 1016.84, "text": " value scaler so when you just do your gradient update of your network for the"}, {"start": 1017.72, "end": 1019.72, "text": " To get closer to the mcts"}, {"start": 1019.72, "end": 1026.3600000000001, "text": " Um, uh probability vector you're updating these features, but these features are subsequently used"}, {"start": 1026.84, "end": 1028.76, "text": " by the value"}, {"start": 1028.76, "end": 1035.08, "text": " stream here and so basically whatever you do you're improving both the value and the policy and your"}, {"start": 1036.68, "end": 1039.0, "text": " You're basically figuring out the features which are"}, {"start": 1039.64, "end": 1045.4, "text": " Good for figuring out both the policy and the value and that helps that helps boost the performance"}, {"start": 1046.44, "end": 1047.56, "text": " Okay"}, {"start": 1047.56, "end": 1050.9199999999998, "text": " Uh, let's see some of the knowledge that this network learned"}, {"start": 1051.56, "end": 1053.08, "text": " during this"}, {"start": 1053.08, "end": 1055.08, "text": " training and"}, {"start": 1055.08, "end": 1061.56, "text": " In this chart what you can see in the top row here are some of these joseki as I mentioned and I don't know"}, {"start": 1061.56, "end": 1065.56, "text": " How to play go but I know enough to just kind of understand what's happening"}, {"start": 1065.72, "end": 1069.32, "text": " But if you're familiar with go, maybe this will be even more interesting for you"}, {"start": 1070.28, "end": 1072.12, "text": " Basically here you can see"}, {"start": 1072.12, "end": 1079.2399999999998, "text": " These are the joseki's that humans play and you can see that the alpha go zero actually reinvented these"}, {"start": 1080.28, "end": 1082.28, "text": " At some time during the training"}, {"start": 1082.36, "end": 1086.52, "text": " So this first joseki was discovered like maybe around 10 hour mark"}, {"start": 1086.9199999999998, "end": 1091.56, "text": " Then the second one was discovered. 
I don't know like maybe 15 hours and the last one was discovered"}, {"start": 1092.1999999999998, "end": 1094.1999999999998, "text": " Maybe around 36 hour mark"}, {"start": 1095.56, "end": 1099.56, "text": " And that's super fun and we'll later see how the frequency was increasing"}, {"start": 1099.56, "end": 1105.56, "text": " So at one point of the training it was heavily using one of these joseki's and then it kind of just ditched them away"}, {"start": 1105.6399999999999, "end": 1107.6399999999999, "text": " Because they were actually worse"}, {"start": 1108.04, "end": 1110.9199999999998, "text": " It found some better patterns and started using those"}, {"start": 1111.6399999999999, "end": 1113.3999999999999, "text": " in the second row"}, {"start": 1113.3999999999999, "end": 1119.72, "text": " What we can basically see are the joseki's which were the most frequent joseki's at that part of the training"}, {"start": 1120.52, "end": 1121.8799999999999, "text": " time frame"}, {"start": 1121.8799999999999, "end": 1126.76, "text": " So basically if you take a look at I don't know maybe this one so around the 48 hour mark"}, {"start": 1126.76, "end": 1129.56, "text": " It was heavily using this particular pattern"}, {"start": 1130.12, "end": 1134.6, "text": " and yeah, and finally the the last row tells you"}, {"start": 1135.48, "end": 1138.92, "text": " Basically shows you the games. So these are just so we take this"}, {"start": 1139.48, "end": 1144.28, "text": " Maybe three hour mark. We take the one particular self play game and we just"}, {"start": 1144.92, "end": 1147.56, "text": " Plotted here and you can see how it evolved"}, {"start": 1147.56, "end": 1152.84, "text": " So here it was it was playing really greedy and clustering all of the moves the black and the whites"}, {"start": 1152.84, "end": 1158.6, "text": " In the same region of the of the board and as the game progressed we came to these"}, {"start": 1159.3999999999999, "end": 1164.76, "text": " More subtle again. I'm not I don't understand go but I was after reading the paper. I understand this"}, {"start": 1164.76, "end": 1170.76, "text": " So basically here it's leading multiple battles battles distributed across the board and it's much better"}, {"start": 1171.8799999999999, "end": 1175.32, "text": " Player now after 70 hours than at the beginning"}, {"start": 1176.36, "end": 1182.1999999999998, "text": " So those were some of the knowledge that the alpha zero basically rediscovered and that's pretty damn impressive"}, {"start": 1182.2, "end": 1183.8, "text": " If you ask me"}, {"start": 1183.8, "end": 1188.04, "text": " Okay, let's continue and uh see what's up here"}, {"start": 1189.4, "end": 1195.0, "text": " Okay, so they say here ultimately alpha go zero preferred new joseki variants that were previously unknown"}, {"start": 1195.0800000000002, "end": 1201.16, "text": " I mentioned that already so that's pretty much fascinating. 
Uh, surprisingly this letter capture sequence"}, {"start": 1201.8, "end": 1206.1200000000001, "text": " Um is one of the first elements of go knowledge learned by humans"}, {"start": 1206.12, "end": 1213.4799999999998, "text": " But they were only understood by alpha go zero much later in training and that's cool because that suggests also that"}, {"start": 1213.8, "end": 1218.28, "text": " Uh, it's not constrained by any way by the human priors and knowledge"}, {"start": 1219.0, "end": 1223.3999999999999, "text": " So it actually discovered something that was obvious to humans much later"}, {"start": 1223.6399999999999, "end": 1226.4399999999998, "text": " But it also discovered some things that were not obvious to humans"}, {"start": 1227.32, "end": 1228.04, "text": " And that's awesome"}, {"start": 1228.04, "end": 1233.9599999999998, "text": " And they say here the humankind has accumulated go knowledge from millions of games played over thousands of years"}, {"start": 1233.96, "end": 1236.14, "text": " years collectively distilled into patterns"}, {"start": 1236.54, "end": 1242.46, "text": " Proverbs and books in the space of a few days starting tabula rasa. So blank slates totally random"}, {"start": 1242.78, "end": 1248.54, "text": " Alpha go zero was able to rediscover much of this go knowledge as well as novel strategies that provide"}, {"start": 1248.8600000000001, "end": 1254.46, "text": " New insights into the oldest of games and they're just romanticizing here a bit. Okay"}, {"start": 1255.74, "end": 1256.78, "text": " basically"}, {"start": 1256.78, "end": 1261.3400000000001, "text": " Uh, it's pretty awesome and looking at the charts. Uh, you can see that the"}, {"start": 1261.34, "end": 1263.34, "text": " Uh alpha go zero, uh"}, {"start": 1263.98, "end": 1267.1799999999998, "text": " Beats alpha go lee after just a couple of uh"}, {"start": 1267.8999999999999, "end": 1274.3799999999999, "text": " Days and then starts being better than this alpha go master version after 30 something days"}, {"start": 1275.4199999999998, "end": 1277.4199999999998, "text": " Again, uh some similar chart here"}, {"start": 1277.82, "end": 1284.06, "text": " Uh, basically this is the alpha go zero performance. This is the master the lead the the fan"}, {"start": 1284.78, "end": 1285.82, "text": " um"}, {"start": 1285.82, "end": 1292.9399999999998, "text": " Uh models and finally we have some commercial go programs which you can see are much worse than even the alpha go fan"}, {"start": 1293.26, "end": 1295.26, "text": " And then we have this, uh"}, {"start": 1295.5, "end": 1298.46, "text": " Basically open source go program which is even worse"}, {"start": 1300.22, "end": 1303.34, "text": " Finally this one is basically the the raw network performance"}, {"start": 1303.34, "end": 1310.9399999999998, "text": " So without using the mcts by just using the probabilities coming from the policy network, uh and playing using that one, uh, you get"}, {"start": 1310.94, "end": 1318.6200000000001, "text": " That much much lower elo score than using mcts. So search is really important for playing go efficiently. Okay"}, {"start": 1320.78, "end": 1326.7, "text": " That's it that was the high level overview I wanted to show you and now we'll get into some details"}, {"start": 1326.7, "end": 1330.94, "text": " But before that let me just show you those jiseki's I was mentioning. So here"}, {"start": 1331.5800000000002, "end": 1335.42, "text": " Um, so this is some jiseki. 
That's um, the humans play a lot"}, {"start": 1335.42, "end": 1340.46, "text": " So it's even has a name five three point press whatever and you can see the frequency here"}, {"start": 1340.46, "end": 1346.3, "text": " And similarly for other jiseki's and you can see for some patterns. So this like knight's move pincer"}, {"start": 1346.8600000000001, "end": 1348.7, "text": " the frequency"}, {"start": 1348.7, "end": 1352.7, "text": " Peaked here and then slowly decreased after 70 hours of training"}, {"start": 1353.3400000000001, "end": 1357.18, "text": " So that means it was discovering and ditching the patterns. So that's cool"}, {"start": 1357.74, "end": 1360.94, "text": " Um, this is the the the second chart shows you"}, {"start": 1361.58, "end": 1366.06, "text": " Um some of the jiseki's that was playing really often at one point of training"}, {"start": 1366.06, "end": 1370.7, "text": " So again here three by three invasion. It had a peak here. So it was playing this this"}, {"start": 1371.1, "end": 1375.6599999999999, "text": " Particular jiseki a lot at one point of time and so on and so forth"}, {"start": 1376.46, "end": 1378.46, "text": " So that's that's it"}, {"start": 1379.1, "end": 1385.4199999999998, "text": " Now let's dig into the details of the actual algorithm. Okay, so i've covered this part already"}, {"start": 1385.82, "end": 1392.1399999999999, "text": " This is a spectrum I was mentioning so they went from fan to lee to master and finally to alpha go zero"}, {"start": 1392.14, "end": 1396.5400000000002, "text": " Uh, and each time increasing the both the efficiency. So this one used"}, {"start": 1397.8000000000002, "end": 1399.8000000000002, "text": " 176 gpus"}, {"start": 1399.98, "end": 1404.5600000000002, "text": " Then the lee version I think used something around 47 tpus"}, {"start": 1405.8200000000002, "end": 1412.46, "text": " Then they had uh alpha go master which used even less hardware and finally alpha go zero actually uses only four tpus"}, {"start": 1412.5400000000002, "end": 1417.1000000000001, "text": " And it's much better as you saw than lee and and master and every version"}, {"start": 1417.8200000000002, "end": 1420.5400000000002, "text": " So far, um, and yeah, they say here"}, {"start": 1420.54, "end": 1423.04, "text": " Okay, so uh, basically"}, {"start": 1423.82, "end": 1431.02, "text": " They say it here. The primary contribution is to demonstrate that superhuman performance can be achieved without human domain knowledge"}, {"start": 1431.58, "end": 1434.1399999999999, "text": " We reiterated that already a couple of times"}, {"start": 1434.78, "end": 1441.18, "text": " To clarify this contribution we number the domain knowledge that alpha go zero uses explicitly or implicitly"}, {"start": 1441.5, "end": 1444.3799999999999, "text": " Either in its training procedure or its mcts"}, {"start": 1444.7, "end": 1447.74, "text": " These are the items of knowledge that would need to be replaced"}, {"start": 1447.74, "end": 1451.9, "text": " For alpha go zero to learn a different alternating markov game"}, {"start": 1452.46, "end": 1457.34, "text": " And this is the main point where basically after you just kind of"}, {"start": 1457.98, "end": 1463.42, "text": " Modify a couple of these you get to alpha zero and you can play both the chess and shogi as well as go"}, {"start": 1464.22, "end": 1469.02, "text": " And uh the most important parts here. 
So basically you will want to"}, {"start": 1469.82, "end": 1471.82, "text": " Um ditch the symmetry part"}, {"start": 1471.9, "end": 1476.94, "text": " So the rules of go are invariant under rotation and reflection this knowledge has been used in alpha go zero"}, {"start": 1476.94, "end": 1481.98, "text": " Both by augmenting the data set during training to include rotations and reflections of each position"}, {"start": 1481.98, "end": 1483.98, "text": " So if you remember from the alpha go video"}, {"start": 1484.22, "end": 1487.9, "text": " Basically each time you want to evaluate a position you take a random"}, {"start": 1488.78, "end": 1492.3, "text": " Uh element from the dihedral group that has eight elements"}, {"start": 1492.3, "end": 1499.98, "text": " So basically because you have two reflections and four rotations you get to eight elements and you were just using those and that will"}, {"start": 1500.38, "end": 1502.7, "text": " Make your data bigger and also"}, {"start": 1503.74, "end": 1505.74, "text": " They showed it improves the training"}, {"start": 1505.74, "end": 1511.34, "text": " So we'll have to uh expel that part because other games such as chess they don't have"}, {"start": 1511.66, "end": 1515.34, "text": " That symmetry as as inherent part of the game"}, {"start": 1516.46, "end": 1517.34, "text": " Okay"}, {"start": 1517.34, "end": 1523.26, "text": " Some details here. So the montecarlo tree search parameters some bayesian optimization was used"}, {"start": 1524.22, "end": 1527.74, "text": " To figure those out and this is a really important part"}, {"start": 1527.74, "end": 1532.14, "text": " So alpha goes your self-play training pipeline consists of three main components"}, {"start": 1532.14, "end": 1537.9, "text": " So the first thing is neural network parameters are continually optimized from recent self-play data"}, {"start": 1538.46, "end": 1540.46, "text": " That's the first part"}, {"start": 1540.46, "end": 1548.14, "text": " Uh and alpha go zero players are continually evaluated. 
That's the second part and thirdly the best performing player so far"}, {"start": 1548.5400000000002, "end": 1555.5, "text": " This alpha theta star is used to generate new self-play data and let us see how that exactly works"}, {"start": 1556.3000000000002, "end": 1557.74, "text": " so"}, {"start": 1557.74, "end": 1564.3, "text": " So, um, you basically have something like a replay buffer and I i'll draw it like this"}, {"start": 1565.1, "end": 1566.7, "text": " you have the"}, {"start": 1566.7, "end": 1572.54, "text": " Current best network and i'll just draw it like a box and we have a bunch of self-play games"}, {"start": 1573.34, "end": 1575.02, "text": " happening in parallel"}, {"start": 1575.02, "end": 1578.3, "text": " so how the actual framework works like is the following so"}, {"start": 1579.34, "end": 1580.86, "text": " We have this"}, {"start": 1580.86, "end": 1586.7, "text": " Network and that's the best agent so far and we're just taking data from the replay buffer"}, {"start": 1586.7, "end": 1588.7, "text": " And we are updating that network"}, {"start": 1589.66, "end": 1595.18, "text": " Next up every every thousand, um updates will be dumping a checkpoint"}, {"start": 1595.9, "end": 1601.9, "text": " Model and we'll be comparing that checkpoint model with the last previously stored best model"}, {"start": 1602.06, "end": 1605.9, "text": " So we have the this thing they call alpha theta star"}, {"start": 1606.3, "end": 1611.74, "text": " So that's the best agent and interesting fact is that this agent so let me draw it like a like a"}, {"start": 1612.44, "end": 1613.66, "text": " rectangle"}, {"start": 1613.66, "end": 1620.38, "text": " Basically, they use those weights in all of these self-play threads. So when I knew when one of these"}, {"start": 1621.26, "end": 1625.5800000000002, "text": " Self-playing threads finishes, uh, it will want to start over again"}, {"start": 1625.9, "end": 1628.22, "text": " So we'll just pull the best parameters here"}, {"start": 1628.6200000000001, "end": 1633.74, "text": " And it will be using that network weights to play the game and store it will be storing"}, {"start": 1634.14, "end": 1636.7, "text": " The tuples so we we mentioned so the s"}, {"start": 1636.7, "end": 1643.42, "text": " The pi and the outcome of the game so those tuples will be stored back in these in the storage"}, {"start": 1643.82, "end": 1646.22, "text": " So that's how the dynamics of the whole pipeline works"}, {"start": 1646.78, "end": 1647.82, "text": " um"}, {"start": 1647.82, "end": 1651.5, "text": " So I haven't finished this part. So once uh, we we dump the checkpoint"}, {"start": 1651.98, "end": 1657.42, "text": " What we want to do is we want to do evaluation and basically what they did is they play"}, {"start": 1657.42, "end": 1665.92, "text": " 400 games between this checkpoint and between the best model and whichever model wins. That's the the model will be using"}, {"start": 1666.88, "end": 1671.1200000000001, "text": " For all of the next self-play game. So the next self-play thread"}, {"start": 1671.28, "end": 1676.8000000000002, "text": " So if this one finishes and wants to start over again, it will just pull the the newest the best parameters"}, {"start": 1677.1200000000001, "end": 1679.1200000000001, "text": " And it will start playing the game"}, {"start": 1679.1200000000001, "end": 1682.3200000000002, "text": " So that's roughly how this thing works. 
And now let's"}, {"start": 1682.32, "end": 1689.36, "text": " Let's see some details they said here again the optimization process produces a new checkpoint every thousand training steps"}, {"start": 1689.6, "end": 1692.8, "text": " If the new player wins by a margin of over 55"}, {"start": 1694.3999999999999, "end": 1702.3999999999999, "text": " Then it becomes the best player and is subsequently used for self-play generation and also becomes the baseline for subsequent comparisons"}, {"start": 1703.36, "end": 1706.56, "text": " So I mentioned that part already and it's playing 400 games"}, {"start": 1706.56, "end": 1712.96, "text": " To we are playing 400 games between the checkpoint and the best model in order to figure out whether it's statistically"}, {"start": 1713.74, "end": 1715.12, "text": " significantly"}, {"start": 1715.12, "end": 1718.96, "text": " Better than the whether the checkpoint is better than the best model so far"}, {"start": 1719.04, "end": 1724.6399999999999, "text": " Okay, so now that we have the best player, let's see how we can use those weights in order to play these"}, {"start": 1725.1799999999998, "end": 1726.72, "text": " self-play games"}, {"start": 1726.72, "end": 1734.1599999999999, "text": " So the best current player alpha theta star as selected by the evaluator is used to generate data"}, {"start": 1734.16, "end": 1740.0, "text": " In each iteration alpha theta star plays 25 000 games of self-play"}, {"start": 1740.8000000000002, "end": 1745.68, "text": " And it uses 1600 simulations of mcts to select each move. Okay"}, {"start": 1746.16, "end": 1748.16, "text": " Um, let me draw it here"}, {"start": 1749.76, "end": 1756.96, "text": " Basically, this is the self-play thread and it just fetches the the newest weights from the alpha theta"}, {"start": 1757.52, "end": 1758.72, "text": " star"}, {"start": 1758.72, "end": 1764.24, "text": " And they say here for the first 30 moves of each game the temperature is set to one"}, {"start": 1764.88, "end": 1771.76, "text": " This selects moves proportionally to their visit count in mcts ensures a diverse set of positions are encountered"}, {"start": 1772.16, "end": 1777.04, "text": " That means if this is the initial state of the self-play game, so that's the initial board state"}, {"start": 1777.52, "end": 1780.8, "text": " Uh, we play for the first 30 steps. So this is 30"}, {"start": 1781.68, "end": 1785.92, "text": " Um, we play uh using proportional sampling. That means the following"}, {"start": 1785.92, "end": 1789.6000000000001, "text": " So if I just zoom in into this state, so this is some state"}, {"start": 1790.24, "end": 1794.5800000000002, "text": " state zero in this particular case and once we do the mcts"}, {"start": 1796.0, "end": 1802.8000000000002, "text": " And we find the probabilities we'll be using uh, we won't be doing a greedy sampling will be"}, {"start": 1803.2, "end": 1809.68, "text": " Uh, a greedy policy will just uh take the proportional sampling. So if we have something like this maybe"}, {"start": 1810.16, "end": 1812.0800000000002, "text": " uh"}, {"start": 1812.08, "end": 1818.98, "text": " Some distribution like this one. 
That means this action will be taken the most often because it has the highest probability"}, {"start": 1820.08, "end": 1826.1599999999999, "text": " So that's for the first 30 states and that ensures that we have some kind of because we're constantly using the same model"}, {"start": 1826.3999999999999, "end": 1831.52, "text": " This ensures we are having uh some uh diversity in the initial positions. Okay"}, {"start": 1832.24, "end": 1836.32, "text": " So that's the first part and then we start playing from this point on we start playing"}, {"start": 1836.32, "end": 1844.0, "text": " Uh greedy because the temperature goes to zero and that means uh, whatever that so that means this one will get transformed into this"}, {"start": 1844.96, "end": 1848.96, "text": " All of these will be zero. So that's zero. That's zero. This one will be one"}, {"start": 1849.04, "end": 1851.6799999999998, "text": " So we'll be picking the the highest probability always"}, {"start": 1852.32, "end": 1853.12, "text": " Okay"}, {"start": 1853.12, "end": 1859.6799999999998, "text": " So then they say the following additional exploration is achieved by adding dairy clay noise to the prior probabilities in the root node"}, {"start": 1859.76, "end": 1862.48, "text": " As zero so if we take certain state here"}, {"start": 1862.48, "end": 1867.6, "text": " Uh, and we zoom in again and before we even start building the mct"}, {"start": 1868.14, "end": 1870.96, "text": " mcts3 what they'll do is they'll add the"}, {"start": 1871.68, "end": 1876.64, "text": " Noise into the root priors themselves. So once the once we get the initial priors"}, {"start": 1877.3600000000001, "end": 1880.72, "text": " those will get uh by passing the the state into the"}, {"start": 1881.28, "end": 1882.56, "text": " um our"}, {"start": 1882.56, "end": 1886.96, "text": " Model into the network, uh, we'll modify these priors using the dairy clay noise"}, {"start": 1886.96, "end": 1892.56, "text": " which will ensure there is some randomization in how do we visit the uh,"}, {"start": 1892.64, "end": 1900.16, "text": " the how to explore the mcts3 and that will explore there will additionally enable us to have a diverse set of"}, {"start": 1900.22, "end": 1902.48, "text": " tuples which will be fed back into the"}, {"start": 1903.1000000000001, "end": 1904.24, "text": " storage"}, {"start": 1904.24, "end": 1906.24, "text": " into the replay buffer"}, {"start": 1906.24, "end": 1910.64, "text": " Which again is used by the currently best network to update its weights"}, {"start": 1911.6000000000001, "end": 1914.72, "text": " And that's that's pretty much it. Um"}, {"start": 1914.72, "end": 1920.88, "text": " Um, this is something you should know from previous video. Basically, we're storing statistics of visitation counts"}, {"start": 1921.42, "end": 1926.08, "text": " cumulative mc montecarlo estimates the action value function and the priors"}, {"start": 1926.8, "end": 1928.16, "text": " Okay"}, {"start": 1928.16, "end": 1929.68, "text": " uh here"}, {"start": 1929.68, "end": 1935.44, "text": " Puct we saw this one. 
So basically depending on the visitation counts the more you visit some state"}, {"start": 1936.08, "end": 1938.8, "text": " the smaller the u becomes and basically that means"}, {"start": 1939.28, "end": 1940.24, "text": " uh"}, {"start": 1940.24, "end": 1943.52, "text": " You depend on the action value function to decide whether you want to"}, {"start": 1943.52, "end": 1945.52, "text": " Uh pick that edge or not"}, {"start": 1946.56, "end": 1947.68, "text": " Okay"}, {"start": 1947.68, "end": 1952.8, "text": " Additional detail positions in the queue are evaluated by the neural network using a mini batch size of eight"}, {"start": 1953.04, "end": 1955.6, "text": " The search thread is locked until the validation completes"}, {"start": 1955.84, "end": 1962.48, "text": " So this is a bit different compared to the alpha go model because it used to use a synchronous policy here on the other hand"}, {"start": 1962.8, "end": 1966.8799999999999, "text": " What this means is basically on the gpu you have so this is a gpu"}, {"start": 1967.44, "end": 1970.24, "text": " You have the network and you just make a queue here"}, {"start": 1970.24, "end": 1977.76, "text": " um, and it has eight slots and basically you need to wait for the eight threads to"}, {"start": 1978.56, "end": 1983.04, "text": " reach the final the leaf node and to push it here and then"}, {"start": 1983.76, "end": 1989.92, "text": " We'll just you know in a single batch will produce the the priors as well as the values and then"}, {"start": 1990.32, "end": 1994.48, "text": " Only then will the those threads resume so they are kind of synchronous. So that's"}, {"start": 1995.1200000000001, "end": 1998.32, "text": " A detail that's that's new to alpha go zero"}, {"start": 1998.32, "end": 2003.36, "text": " Uh, they also they they continue using this virtual loss, which if you remember"}, {"start": 2004.32, "end": 2006.32, "text": " If we had some uh"}, {"start": 2007.04, "end": 2009.84, "text": " So this is the the root of the mcts. This is the leaf"}, {"start": 2010.3799999999999, "end": 2015.6, "text": " basically, uh the virtual loss will make sure that these visitation counts are"}, {"start": 2016.0, "end": 2018.8, "text": " So these are the states and the squiggly lines are the actions"}, {"start": 2019.28, "end": 2023.2, "text": " Uh, it will just uh, maybe do minus three on all of the states"}, {"start": 2023.2, "end": 2030.32, "text": " Uh for the visitation and that will reduce the u part of the key of the of the puct which means"}, {"start": 2031.2, "end": 2034.4, "text": " All of the other threads which are building up the mcts tree"}, {"start": 2035.2, "end": 2039.2, "text": " Will be less likely to take this exact route to the leaf node"}, {"start": 2039.3600000000001, "end": 2042.32, "text": " And that's that's the whole point because we're already exploring this one"}, {"start": 2042.64, "end": 2048.32, "text": " We just have this virtual loss thing which uh discourages other threads to explore the same path"}, {"start": 2049.52, "end": 2051.52, "text": " Okay, finally, um"}, {"start": 2051.52, "end": 2053.68, "text": " We're using exponentiated visit count"}, {"start": 2054.32, "end": 2060.48, "text": " Uh, basically, so this is the visitation counts and this is the temperature coefficient which we were mentioning. 
So the first 30"}, {"start": 2061.68, "end": 2063.68, "text": " Moves are are being played"}, {"start": 2063.98, "end": 2069.36, "text": " Proportionally and then this temperature drops to zero which makes sure that we're just picking the highest probability action"}, {"start": 2070.56, "end": 2071.6, "text": " Okay"}, {"start": 2071.6, "end": 2073.44, "text": " Let's summarize this"}, {"start": 2073.44, "end": 2075.36, "text": " the principal difference"}, {"start": 2075.36, "end": 2079.6, "text": " Differences are that alpha goes zero does not use any rollouts"}, {"start": 2079.6, "end": 2084.08, "text": " It uses a single neural network instead of a separate policy and value networks"}, {"start": 2084.72, "end": 2087.8399999999997, "text": " Leaf nodes are always expanded so there is no threshold"}, {"start": 2087.8399999999997, "end": 2095.7599999999998, "text": " We don't have to wait until maybe 40 or something which was dynamically computed so that the gpus are not starving some optimization stuff"}, {"start": 2097.04, "end": 2101.04, "text": " And so we just expand it as soon as we encounter a leaf node"}, {"start": 2101.04, "end": 2106.72, "text": " We expand it and each search thread simply waits for the neural network evaluation. So it's synchronous"}, {"start": 2106.72, "end": 2109.7799999999997, "text": " Rather than performing evaluation and backup asynchronously"}, {"start": 2111.2, "end": 2116.3999999999996, "text": " And also there is no tree policy, which if you remember just I used to set up priors"}, {"start": 2116.56, "end": 2121.8399999999997, "text": " So when we get to a leaf node, we used to use the tree policy which will set some fast priors"}, {"start": 2122.24, "end": 2126.3199999999997, "text": " And we would send the leaf node onto gpu where the policy network"}, {"start": 2126.72, "end": 2133.4399999999996, "text": " SL policy would wait and it will it will do an inference and calculate the actual priors which will then get swapped here"}, {"start": 2133.44, "end": 2139.28, "text": " But here so but so this is basically a placeholder network and we don't have it anymore"}, {"start": 2139.52, "end": 2145.2000000000003, "text": " So much less details actually, it's it's an easier algorithm. We're just using reinforcement learning and we got better results"}, {"start": 2147.04, "end": 2148.56, "text": " Okay"}, {"start": 2148.56, "end": 2151.84, "text": " Hopefully you got something out of this and now I just want to make it"}, {"start": 2152.48, "end": 2158.96, "text": " Clear what the differences are between this model and the alpha zero. So that's just a small additional step if you ask me"}, {"start": 2159.52, "end": 2162.7200000000003, "text": " Um, I don't think there was anything significant that happened there"}, {"start": 2162.72, "end": 2165.3599999999997, "text": " Okay here here here is it basically?"}, {"start": 2166.64, "end": 2168.64, "text": " There's just a couple of differences"}, {"start": 2168.8799999999997, "end": 2175.04, "text": " And I don't quite understand the first one. So they say alpha zero instead estimates and optimizes the expected outcome"}, {"start": 2175.6, "end": 2181.2, "text": " Taking account of draws or potentially other outcomes. 
So this part bugs me because um"}, {"start": 2182.16, "end": 2183.8399999999997, "text": " the alpha goes zero"}, {"start": 2183.8399999999997, "end": 2189.6, "text": " estimates the value so the value function estimates between plus one and minus one which where plus one means"}, {"start": 2189.6, "end": 2193.7599999999998, "text": " This player is going to win and minus one means we are certain we're going to lose"}, {"start": 2194.7999999999997, "end": 2198.08, "text": " Um, and so i'm not quite sure what they mean by that"}, {"start": 2198.08, "end": 2202.72, "text": " If you know, please leave a comment down there and the second one is obvious"}, {"start": 2202.72, "end": 2207.36, "text": " So the rules of chess and shogi are asymmetric. So in general symmetries cannot be assumed"}, {"start": 2207.36, "end": 2209.12, "text": " So that's something I already mentioned"}, {"start": 2209.12, "end": 2215.2, "text": " We want to get rid of that of that of picking uh, like an element from the dihedral group one of those eight"}, {"start": 2215.2, "end": 2222.96, "text": " Reflections rotation combinations, so we just want to ditch that and uh, they showed that they won't hurt the performance at all"}, {"start": 2223.68, "end": 2225.68, "text": " So yeah, um"}, {"start": 2226.16, "end": 2228.16, "text": " A little bit but we can compensate by compute"}, {"start": 2228.24, "end": 2229.04, "text": " Okay"}, {"start": 2229.04, "end": 2236.1, "text": " And the third thing is in contrast alpha zero simply maintains a single neural network that is updated continually"}, {"start": 2236.8799999999997, "end": 2239.04, "text": " Rather than waiting for an iteration to complete"}, {"start": 2239.04, "end": 2245.14, "text": " Self play games are generated by using the latest parameters for this neural network omitting the valuation step"}, {"start": 2245.54, "end": 2250.1, "text": " And the selection of best player. So that means this time we have this network"}, {"start": 2250.58, "end": 2252.34, "text": " and instead of uh"}, {"start": 2252.34, "end": 2256.74, "text": " Doing these these periodic check pointing and comparing for the best model"}, {"start": 2257.14, "end": 2260.98, "text": " We are continuously using this same weights and we're updating them"}, {"start": 2261.54, "end": 2263.22, "text": " using the"}, {"start": 2263.22, "end": 2270.5, "text": " Replay buffer and the threads are just filling that buffer and we are just taking the data and we're continually updating this agent"}, {"start": 2270.8199999999997, "end": 2275.2999999999997, "text": " And when a new thread resumes so this one finishes and wants to start over again"}, {"start": 2275.4599999999996, "end": 2281.4599999999996, "text": " It will just fetch the newest the current the only pair of weights we are we are continually updating. Okay?"}, {"start": 2282.02, "end": 2286.58, "text": " So that's the third difference. So not a lot there and okay and um"}, {"start": 2287.7, "end": 2289.7, "text": " You can see the results"}, {"start": 2289.7, "end": 2293.9399999999996, "text": " Here basically on chess they compared against the stockfish"}, {"start": 2294.02, "end": 2298.18, "text": " That was the best engine at the time and you can see after"}, {"start": 2298.98, "end": 2301.14, "text": " Sometime it gets better than stockfish"}, {"start": 2301.7799999999997, "end": 2307.8599999999997, "text": " Finally, we have shroggy here in a small amount of time. 
The model gets better than shroggy, which is a japanese chess"}, {"start": 2308.64, "end": 2310.98, "text": " affectionately known as japanese chess and"}, {"start": 2311.7799999999997, "end": 2314.98, "text": " Finally we get better from alpha go zero"}, {"start": 2315.7799999999997, "end": 2318.1, "text": " after some training as well"}, {"start": 2318.1, "end": 2321.62, "text": " So what's interesting on these three charts if you ask me is this one"}, {"start": 2322.18, "end": 2322.8199999999997, "text": " so"}, {"start": 2322.8199999999997, "end": 2329.86, "text": " You can see how small the gap the elo gap here is and that just kind of indicates the amount of effort"}, {"start": 2330.18, "end": 2335.38, "text": " like decades of research of chess because chess was considered as the drosophila of AI because"}, {"start": 2336.02, "end": 2340.1, "text": " So many people so many researchers spent so much time"}, {"start": 2340.8199999999997, "end": 2344.74, "text": " researching it and so there are so many good heuristics in"}, {"start": 2344.74, "end": 2352.9799999999996, "text": " Good heuristics and a hand crafting that went into creating this stockfish engine that it was kind of hard to get much better than it"}, {"start": 2353.8599999999997, "end": 2355.8599999999997, "text": " So that's that's funny"}, {"start": 2355.8599999999997, "end": 2359.9399999999996, "text": " um, okay, the the only thing that's actually um, not so uh,"}, {"start": 2360.4199999999996, "end": 2366.18, "text": " Game agnostic is this so they are still adding the noise to the prior policy to ensure the exploration"}, {"start": 2366.18, "end": 2368.74, "text": " So directly noise I mentioned in the alpha go zero"}, {"start": 2368.74, "end": 2375.06, "text": " Um, and it's uh scaled in proportion to the typical number of legal moves for that game type"}, {"start": 2375.3799999999997, "end": 2380.8199999999997, "text": " So that's something that's game specific other than that. This is a pretty generic algorithm and yeah"}, {"start": 2381.22, "end": 2382.18, "text": " uh"}, {"start": 2382.18, "end": 2385.62, "text": " I think worth noticing here is that you have to train a single"}, {"start": 2386.18, "end": 2388.74, "text": " Instance a single agent for every specific game"}, {"start": 2389.06, "end": 2396.02, "text": " So that means this still doesn't generalize as much as we'd like it to so optimally you can train for alpha for go"}, {"start": 2396.02, "end": 2399.38, "text": " And then just kind of fine tune it for chess and it would be"}, {"start": 2399.94, "end": 2404.2599999999998, "text": " Uh really good, but that's not the case and you have to train it from scratch pretty much"}, {"start": 2404.74, "end": 2406.74, "text": " to the best of my knowledge"}, {"start": 2406.74, "end": 2411.22, "text": " um one thing worth noticing as well is that alpha zero searches just"}, {"start": 2411.7599999999998, "end": 2420.92, "text": " 80,000 positions per second in chess and 40,000 in shogi compared to 70 million for stockfish and 35 million for elmo"}, {"start": 2420.92, "end": 2429.16, "text": " So that just shows uh that this approach is much less brute force and these two the the elmo and stockfish"}, {"start": 2429.88, "end": 2435.7200000000003, "text": " Are much more similar to deep blue which used a brute force. 
Um algorithm to beat garry casparro back in 97"}, {"start": 2436.6, "end": 2443.4, "text": " So they say arguably a more human-like approach to to search and to to playing the games and I agree"}, {"start": 2444.04, "end": 2445.48, "text": " um"}, {"start": 2445.48, "end": 2449.64, "text": " So that's that's pretty much everything you need to know from alpha zero paper once you know"}, {"start": 2449.64, "end": 2457.0, "text": " Uh once you understand the alpha go and alpha go zero so ditch the symmetries keep continually updating the agent"}, {"start": 2457.48, "end": 2460.92, "text": " Um, and then you just kind of map for the specific rules"}, {"start": 2460.92, "end": 2467.64, "text": " You can you kind of do some small adaptations you apply the same algorithm and as you can see after some training time"}, {"start": 2468.2799999999997, "end": 2472.12, "text": " You can get that you can achieve state of the art on all the three benchmarks"}, {"start": 2472.8399999999997, "end": 2474.44, "text": " That was it for this video"}, {"start": 2474.44, "end": 2480.36, "text": " If you have any feedback whatsoever on the things I could improve, please feel free to just comment down in the comment section"}, {"start": 2480.52, "end": 2487.0, "text": " And i'll read those and you know the drill just hit that subscribe button hit the bell icon to get notified and until next time"}, {"start": 2487.0, "end": 2504.04, "text": " Keep learning deep"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=Z1BELqFQZVM
AlphaGo - Mastering the game of Go with deep neural networks and tree search | RL Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover the seminal AlphaGo paper - the first system to beat a professional Go player in the game of Go. A task previously considered beyond the reach of current AI systems and at least 10 years off into the future, but neural networks proved them wrong! You'll learn about: ✔️All of the nitty-gritty details around AlphaGo ✔️How MTCS and other subcomponents work ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ AlphaGo movie: https://www.youtube.com/watch?v=WXuK6gekU1Y&ab_channel=DeepMind ✅ Karpathy on AlphaGo: https://medium.com/@karpathy/alphago-in-context-c47718cb95a5 ✅ Silver on UCB algo: https://www.youtube.com/watch?v=sGuiWX07sKw&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB&index=12&t=2370s&ab_channel=DeepMind ✅ MTCS explained: https://www.youtube.com/watch?v=UXW2yZndl7U ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:37 Context behind the game of Go 04:10 High-level overview of components - SL policies 07:25 RL policy network 09:30 The value network 11:15 Going deeper 16:30 Details around value network 19:05 Understanding the search (MTCS) 27:10 Evaluation of AlphaGo 33:30 Older techniques 34:40 Even more detailed explanation of APV-MTCS 37:40 Virtual loss 41:00 Engineering 45:30 Neural networks and symmetries ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #alphago #deepmind #reinforcementlearning
What's up? In this video I'm going to cover the AlphaGo paper, or Mastering the game of Go with deep neural networks and tree search. And I guess all of you already know and heard a lot about it. They even made a movie about AlphaGo. Everything we've ever tried in AI just falls over when you try the game of Go. The number of possible configurations of the board is more than the number of atoms in the universe. After you've finished with this video, you'll have a much better background. And I strongly suggest you go ahead and watch the complete movie if you haven't already. The game kind of turned on its axis. That is not a confident face. Okay, with that out of the way, let's jump into the paper. So basically, why is Go interesting? The reason it's so interesting is that all of the other famous board games such as chess, checkers, etc. were already solved much earlier. Basically, in 1997 the Deep Blue program beat Garry Kasparov, the best chess player in the world at that point in time. And all of the other games were also solved, but Go still wasn't solved. And people predicted that we'd need, so I think in 2014 or 2015, people thought that we'd need at least 10 more years to actually solve the game. And then in 2015-16, DeepMind appeared and cracked the game of Go. So it's a pretty big deal and it's a seminal breakthrough in the world of AI. And yeah, so basically, what is Go to start with? So basically, it's an old Eastern game that was, I think, invented in China a couple of thousand years ago. And basically, you have a 19 by 19 board and you place these white and black stones on the intersections. And the whole goal is to acquire as much territory as possible. So you can see here, the white player here actually gained territory, all of the marked intersections belong to the white player. And the game finishes in two ways. Either you win because the opponent resigned, or you win because the game ended and you have more territory than the opponent. Those are the basics. Now let's jump into the details of the paper and I'll slowly start going from the high level explanation. I'll go into the depth towards the end of the paper. So yeah, stick with me. Okay, so as a bit of theory, basically, all the games that have perfect information have this thing called the optimal value function. Go is a perfect information game because it's completely observable; compare that to poker, maybe, where you don't know the cards of the opponent, so it's partially observable. Here in perfect information games, you have this thing called the optimal value function. Once you have the optimal value function, you can play using it just following the greedy policy and you'll win. You'll basically play as optimally as you can if the other players are also playing the optimal game. So that's some game theory for you. But basically, we'll find a way to approximate this function and we'll also use something called a policy function and Monte Carlo tree search to get to the final AlphaGo system. And I'll slowly start digesting all of those parts for you. So before that, why is Go so hard? The main reason basically is that the search space is huge. So in games like chess, the breadth is around 35 and the depth is around 80. So you can just kind of average out all of the branches and get this number. And the final number of states is B raised to the power of D.
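To make those numbers concrete, here is a quick back-of-the-envelope calculation of b^d for both games, using the approximate breadth and depth figures quoted above (purely illustrative arithmetic, not from the paper):

```python
import math

# Approximate breadth (b) and typical game depth (d) mentioned above.
chess = 80 * math.log10(35)    # log10(35 ** 80)
go    = 150 * math.log10(250)  # log10(250 ** 150)

print(f"chess search space ~ 10^{chess:.0f}")  # ~ 10^123
print(f"go search space    ~ 10^{go:.0f}")     # ~ 10^360
# For scale, the observable universe is often quoted at ~10^80 atoms.
```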
So you can see that chess has a lot of states, but it's nowhere near close to Go, where you have basically 250 raised to the power of 150. That's a huge amount of states. And all of the previous approaches, such as minimax search, such as depth-limited minimax search with alpha-beta pruning, whatever, failed to tame this search space. And even pure Monte Carlo tree search methods, which are proven to be really good and effective for other board games, failed miserably when deployed to Go. So let's see how this paper actually coped with that and found a solution to this hard, difficult game. And yeah, here is the kind of high level overview of the whole system. And it has a couple of components. So the main components are these. So we first have this thing called the policy network. And what the policy network does is the following. You input a state, where the state is the following. So you basically have a 19 by 19 board, right? And they'll just have some handcrafted feature maps. You'll end up with a 19 by 19 by 48 volume, which you'll be inputting into a CNN. And that's pretty much it. And you don't need to know anything about those specifics. So here is, for example, one of the features, these things called liberties. And those are basically the empty intersections around a certain stone. Maybe we have a black stone here and some of the liberties would be the red circles here. And you somehow integrate that information into those feature maps. And you input those and use that as the current state. So with that out of the way, let's just see how it works. So the first thing they do is they train this thing called the supervised learning policy network. And what they do is the following. So they have a huge database of professionals, so experts, playing the game of Go. And they just train this policy network as a simple supervised learning method, using the simple SL approach. And so basically you input the state. Here's a visual representation. So you input the state here and out of the policy network comes this vector of probabilities distributed over the whole board. So there's basically a 19 by 19 vector of probabilities. And what they do to train it is the following. So they just have these tuples, a bunch of these state-action tuples, I think 30 million. But we'll get to those details a bit later. And so what they do is the following. So they have a CNN. So that's the policy network. So just basically plug in the S, the state, and you get out the probabilities. So you just want to do a simple cross entropy loss here. And basically whatever the action that the professional player played, so maybe that's this one, you want to make sure that you push that one to one and you make sure that the other ones go to zero. So because this is a softmax, you just basically take the minus log of that action's probability and you minimize that loss. So if I just plot that, so that's the minus log, and you want to make sure that the probability of that particular action goes to one. And then we'll have the loss dropping down to zero. So that's how they train it. And that's super simple. So it's just that you need a lot of data, and they have a big CNN, 12, 13 layers deep. Again, we'll get to those details a bit later. So that's the policy network. Okay, the supervised learning policy network. Then we have this thing called the rollout policy. And it's trained in the same fashion. The only difference is that it's much smaller. It's actually not a DNN.
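To make that training step concrete, here is a minimal PyTorch-style sketch of the supervised step described above. It assumes some `policy_net` CNN mapping the 48 feature planes to 361 move logits; the names and shapes are illustrative, not taken from DeepMind's code:

```python
import torch
import torch.nn.functional as F

def sl_policy_step(policy_net, optimizer, states, expert_moves):
    """One supervised step: push up the probability of the expert's move.

    states:       (B, 48, 19, 19) float tensor of handcrafted feature planes
    expert_moves: (B,) long tensor with the flattened board index (0..360) the human played
    """
    logits = policy_net(states)                   # (B, 361) scores over board points
    loss = F.cross_entropy(logits, expert_moves)  # -log p(expert move | state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```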
It's not a neural network. It's a simple linear layer. And it's trained on less data because it has a smaller capacity. OK. Once that's done, you want to train something called the RL policy network. And what you do basically is you take this pre-trained supervised learning policy and you use it to initialize the RL policy network. And then you create a huge pool of different policy networks. And you just let them play against each other. So that's something we call self-play in the field of reinforcement learning. And basically, after you let those play against each other, you then train those networks using policy gradient methods. And again, I'll give you a simple intuitive explanation and then we'll dig into equations. But what you basically do is you have two players and they play the game of Go. And now let's say that each of these lines represents an action. So let's say that this player won. So we have player one and player two, and at the end, player one won. And so basically what you'll do is you'll take all of the actions that that player did and you'll make sure they have higher probability. And on the other hand, you'll take player two and you'll make sure that the actions that that player took in those specific states are less probable the next time. And then you'll just take player two and put it into the pool of potential players. You'll update your current RL policy. And that's how you'll continue doing it. You'll be improving your policy. You'll be putting the opponents into the pool and you'll be playing against them and basically doing the policy gradient. And that's how you learn the policy network. Again, just a high level explanation, we'll get to the equations. Basically, I just want you to have this mental picture. If you won, you want to make sure that the actions you played in those particular states have higher probability. And if you lost, you want to make sure that the actions are actually less probable next time you use that network. That's it. It's really simple. You can basically treat it as a classification task for all of those tuples. OK, so the trick here is we won't be using this RL policy network in the final AlphaGo algorithm. So why do we have it? Well, we have it in order to bootstrap this thing called the value network. And what the value network does is you give it the state and it just tells you how likely you are to win in that particular state. So it has values between one and minus one. If it outputs one, that means you're highly likely to win. If it outputs minus one, that means you're going to lose, and anything in between, yeah, you can interpolate yourself. OK, so you can see here again the visualization of the value network. The output is a scalar, which tells you how good that state is. So that's basically our best approximation of the state value function in RL. OK, so the way we train the value network is we play a bunch of self-play games. And from each game, we take a single state and the outcome of the game, and we train the value network to regress that outcome. We'll get to the details, but that's the high level picture. So you want to make sure that it learns how to basically regress that outcome. It's again supervised learning on a data set that we extracted from the self-play games. And that's it.
So once we have all of those networks, we'll combine them into this thing called Monte Carlo tree search. And we'll use these networks to kind of scope the breadth and the depth of the tree search. And that's what the final algorithm will look like. So we'll slowly get to those details, but this was the high level picture. Hopefully you understand everything. OK, now let's start digging into some equations and let's see how the SL policy is trained. So we basically want to maximize the likelihood of the human move a selected in state s. And here you can just symbolically see that for that particular action, you just want to maximize its probability given the state. So those are these state-action tuples and you just do a step along the gradient, which will kind of boost up the probability of that action and diminish all of the other probabilities. Simple classification, basically cross entropy loss. Don't get confused by these expressions. This just means that the update step is proportional to this one. But you basically do a cross entropy loss and that's it. So it's a 13 layer CNN basically. And you train it on 30 million positions, and position is just a fancy name for the state. And so that's the data set I was mentioning here. And yeah, so finally, once we have this network pre-trained, they showed that on the test set, you can predict with 57 percent accuracy the next move that the expert will play. So that's a huge improvement compared to the previous state of the art, which was 44.4 percent. And that actually translates into a much higher probability of winning the game, as we'll soon see. OK, so that was the SL policy. Now, the fast rollout policy, as already explained, is a linear softmax. It just has a bit different input. Basically, again, some Go magic, some hand crafting. And so the thing with this network is it has a much smaller accuracy, so 24.2 percent. But the thing is, it's much faster because it's so much smaller. So the inference is two microseconds compared to this SL policy, which takes three milliseconds. You'll see why this is useful once we get to the search part of the algorithm. As for the RL network, let's see how we train that one. So, again, we train the current policy network, which is initially initialized using the SL policy network. And we randomly select some previous iteration of the policy network. So that's the self-play part. And this is how you update the weights of that policy network. Basically, it's again proportional, so the update step is proportional to the gradient. So that's the same thing as this one. But there's a trick. There is this ZT and that's the outcome. So that means if you won, you'd get plus one. And then that means you basically end up in the same situation as here. You'll basically want to make sure that all of the actions you played get higher probability in those particular states. And if you lost, then you'll have minus one. That means you'll actually move in the opposite direction. So you'll make sure that the actions are less probable, and that intuitively makes sense. And in expectation, all of that will lead you to a much better policy function. OK. And when you play head to head, when you take this RL policy that was trained using self-play and you compare it with the SL policy network, you can see that it will win in 80 percent of the games. So it improved dramatically.
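Here is a rough REINFORCE-style sketch of that self-play update: the log-probability of each move the player actually chose is scaled by the final outcome z (+1 for a win, -1 for a loss), which is exactly the sign flip described above. Names and shapes are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def rl_policy_step(policy_net, optimizer, states, moves_played, z):
    """Policy-gradient update from one finished self-play game (one player's moves).

    states:       (T, 48, 19, 19) positions this player faced during the game
    moves_played: (T,) long tensor of the moves it actually chose
    z:            +1.0 if this player won the game, -1.0 if it lost
    """
    log_probs = F.log_softmax(policy_net(states), dim=-1)               # (T, 361)
    chosen = log_probs.gather(1, moves_played.unsqueeze(1)).squeeze(1)  # (T,)
    loss = -(z * chosen).mean()  # win: chosen moves become more likely; loss: less likely
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```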
And if you use no search whatsoever, and I still haven't explained the search, but we'll get there, so if you don't use any search, you can actually just use this RL policy and you'll win in 85 percent of the games against this open source Go program called Pachi. And it's not the best one out there, but it was pretty decent. And it's awesome to see that, without any search whatsoever. And this Pachi thing uses a bunch of simulations, a bunch of search, and it still loses against a pure DNN approach without any MCTS, Monte Carlo tree search. So those are some of the details. Let me show you what this chart means. This one tells you that depending on how many filters you have in your CNN, you will be improving the training accuracy. And this is the final AlphaGo model. And you can see that, as expected, the better it gets on this KGS supervised data set, the better it will do against the final version. And the final CNN they took had 192 filters. As you can see, that's this curve; it's a decent trade-off between the performance and the speed, and it's way faster than these bulky big networks. So it's a trade-off they did. OK, we'll get to this chart a bit later when we start talking about search. OK, we saw how to train the policy networks. We saw how to train the RL policy as well. And you saw this is pretty much simple cross entropy. It's a simple classification task, pretty much, ignoring this part. Now let's see how we train the value network. So the value network is trained like this. It's basically a regression task. And what this means is the following. So let's say we have state S and let's say that the final outcome going from that state, ZT, was one. That means we won playing from this state. And let's say that the value function gave us maybe 0.7. So because of that, this difference, this is plus one and this is 0.7, that means you have 0.3, a positive number scaling the gradient. That means you'll kind of update this to go up from 0.7. Next time you see the state, it will maybe output 0.75. But if the value function was off by much more, if it was maybe, I don't know, 0.2, then this difference would be 0.8. And that means you take a bigger step, a bigger gradient. And that means that you'd go from maybe 0.2 to outputting maybe 0.35 the next time you see that state. So you can see we had a bigger increment here than here. And that's how it pretty much works. So it's an L2 loss, basically. Nothing special there. Simple regression. You always have to be supervised at the end in order to learn using gradient based methods. So again, I explained how they train it. They had some problems with overfitting. So if you train this value network the following way, if you took two players and they played a game and you were using every single state together with the final outcome, which may be plus one for this player, then you'd kind of overfit because the states are highly correlated. And so what they did instead is, from every single game, they just take a single random state and couple that with the final outcome ZT. And that's what they use. And after training on 30 million data points, they pretty much avoided the overfitting.
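A minimal sketch of that value-network regression step, with the detail above baked in: each training example is a single randomly sampled position from a distinct game, paired with that game's final outcome. Again, `value_net` and the shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def value_net_step(value_net, optimizer, states, outcomes):
    """Regress the final game outcome from board positions.

    states:   (B, 48, 19, 19) one randomly picked position from each of B different games
    outcomes: (B,) float tensor in {-1.0, +1.0}, result of the game each position came from
    """
    v = value_net(states).squeeze(-1)  # (B,) tanh-squashed predictions in [-1, 1]
    loss = F.mse_loss(v, outcomes)     # the simple L2 regression loss described above
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```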
And you can see that they have 0.226 and 0.234 on the training and test sets, which is minimal overfitting. Before that, they had 0.37 on the test set, which is much higher than 0.19 on the training set, when they just took the naive approach. So that's the reason they took just a single tuple from a single game. And that's how they avoided overfitting on this problem. OK, finally, search. Let me see how I'll explain this. So here are some equations which will make sense in a second. Basically, let's start with this diagram. So this chart here tells you the following. We're in a current state and we want to do a bunch of simulations in order to improve our predictions. So what we'll do is we'll somehow create these priors. And in order to create these priors, we'll use the SL policy network. So we'll just input the state, pass it through this SL policy network, and it will give us some priors. And using those priors, every single edge will have this score associated with it. So that's Q, the action value function, plus this u(P) term, and we'll see what it is. And we just maximize that score. And that's how we traverse the tree. And once we get to the leaf node, so once we get here or here, and we'll just skip the expansion step for a moment, we go to evaluation. So once we get to the leaf node, we do two things. The first thing is we pass that state into the value network, and it will give us the estimate of how good that state is. And secondly, we'll do a rollout using the fast rollout policy. So basically from this state, using that policy, we'll be playing the game until we get to the final state. We'll get some reward, and using those two pieces of information we'll do something called backup, where, going from the leaf node to the root node, we'll be updating the edges with some statistics. So we somehow update those statistics, we use those statistics to calculate the Q values, and later we use those Q values again in the next simulation. So when you do a bunch of these simulations, the Q values and the visitation counts will be much more predictive of the final outcome. And you'll finally use the visitation count of the edge to pick the best action from that particular state, and that's the root state. So if you're currently confused, that's totally fine. We'll slowly digest each of these pieces in a second. Okay, so I skipped the expansion part because you usually won't do expansion in every single step. So basically, expansion happens like this. For example, let's say this is a leaf node and it was visited 39 times so far. So out of all the simulations we had so far, we visited that state exactly 39 times, and we have something called the n threshold, and its nominal value here is 40. So the next time you visit this leaf, the number of visits will rise to 40 and you'll want to expand this node. So how to expand it? Well, again, you just use the SL policy network. You input the state and you get the new priors, and then, using the score, you just pick the next state as the leaf. So the one that has the max value will be your leaf node. And then again, you just proceed with the evaluation. You pass it through the value function.
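Here is a heavily simplified, synchronous sketch of one such simulation, just to tie the four phases together (selection, expansion, evaluation, backup). `expand` and `evaluate_leaf` are placeholder callbacks standing in for the SL-policy priors and for the mixed value-network-plus-rollout evaluation described above; the real system is asynchronous and multi-threaded:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float = 1.0
    visit_count: int = 0
    total_value: float = 0.0                      # W: sum of leaf evaluations backed up here
    children: dict = field(default_factory=dict)  # action -> Node

    def q(self):
        return self.total_value / self.visit_count if self.visit_count else 0.0

def u(child, parent_visits, c_puct=5.0):
    # Exploration bonus: large for a high prior, shrinks as the edge gets visited more.
    return c_puct * child.prior * (parent_visits ** 0.5) / (1 + child.visit_count)

def simulate(root, evaluate_leaf, expand, n_threshold=40):
    """One simplified simulation: select by Q + U, maybe expand, evaluate, back up."""
    path, node = [root], root
    while node.children:                                         # 1. selection
        _, node = max(node.children.items(),
                      key=lambda kv: kv[1].q() + u(kv[1], node.visit_count))
        path.append(node)
    if node.visit_count >= n_threshold:                          # 2. expansion
        expand(node)                                             #    priors from the SL policy
    value = evaluate_leaf(node)                                  # 3. evaluation (value net + rollout)
    for n in path:                                               # 4. backup along the path
        n.visit_count += 1
        n.total_value += value
```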
OK, so that's the Monte Carlo tree search, a high-level overview, and now we'll slowly start to better understand how it works. So how do we pick the actions? Again, it's the action-value function plus this u term, and the algorithm is called PUCT — a polynomial upper confidence bound for trees algorithm. If you're familiar with UCB, this will be pretty easy; David Silver has a really nice explanation of UCB, so I'll link that in the description. But here it will be enough to understand it intuitively.

Basically, the u part is proportional to the prior and inversely proportional to the number of visits of that particular edge. What that means is the following. Initially the visit count is basically zero, so the u part is essentially proportional to the prior — and you get the prior from the SL network, remember. Once you start visiting this edge a lot of times — that particular action from that particular state — the visit count grows, which pushes the whole u part towards zero. So initially you're preferring exploration; that's the principle usually called optimism in the face of uncertainty: you basically prefer exploring and you're not focusing on the Q value yet. But as time goes on, u goes to zero, and because you've visited this edge a lot of times you have more confidence in your Q estimate. Then, if the Q estimate is high, that particular action from that state is really promising, so you keep exploring that part of the subtree. Hopefully that makes a bit more sense now — and don't worry, we'll get even deeper than this.

OK, a small optimization detail: because the SL network is a huge CNN, you don't want to call it repeatedly. Once you calculate the priors for a specific leaf node, that's it — you just store the priors and you don't recalculate them again. A small detail.

So how do we get the Q values, which we later use to traverse the tree? This is how. We first calculate the leaf value: a simple weighted average of the value-function estimate and the rollout outcome — once we do the rollout and get the value-network output, we take those two values and mix them. The lambda here is 0.5, which means they take equally from the value estimate and from the rollout estimate. That's V. And finally, to get Q, you divide the accumulated leaf values by the number of visits of that particular edge — that's your Q estimate. The indicator function is just a fancy way of saying: hey, in simulation i, did you traverse state s and action a? If you did, increment the count. So it's just a fancy way of counting, out of the n simulations we ran, how many traversed this particular edge (s, a). So that's it — that's how you calculate the Qs.
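Written out, roughly as the paper states it (up to notation), the leaf evaluation and the resulting action value are:

```latex
V(s_L) = (1 - \lambda)\, v_\theta(s_L) + \lambda\, z_L, \qquad \lambda = 0.5
```

```latex
N(s, a) = \sum_{i=1}^{n} \mathbf{1}(s, a, i), \qquad
Q(s, a) = \frac{1}{N(s, a)} \sum_{i=1}^{n} \mathbf{1}(s, a, i)\, V\!\left(s_L^{\,i}\right)
```

where 1(s, a, i) indicates whether simulation i traversed edge (s, a), and s_L^i is the leaf node reached in that simulation.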
Now, if you've been following along, you may be confused about why we use the SL policy to calculate the priors rather than the RL policy. The reason is simply experiments — they tried both. It's worth noting that the SL policy p_sigma performed better in AlphaGo than the stronger RL policy network p_rho. On the other hand, the value function was derived from the stronger RL policy, and there it performed better than a value function derived from the SL policy. Just a small detail — experiments like this are pretty common in deep learning. So even though we'd expect the RL policy to perform better for the priors, it did not, and they offer an assumption for why: presumably because humans select a diverse beam of promising moves, whereas RL optimizes for the single best move. Not sure whether that's the best explanation, but OK.

So that was the search part — we've seen all of the components at a high level. Let's see some results. Here on the y-axis you can see the Elo rating. Basically, when you have a 230-point gap, that means you have roughly an 80 percent probability of winning. Translating that into human language: you have player one and player two, and if this player has 230 more Elo points than that one, he'll win about 80 percent of the games in expectation. So that's the y-axis.

On the x-axis you can see the following. These two are open-source Go programs — this one wasn't even using Monte Carlo tree search, it used some older search heuristics, probably variations of minimax. Then we have two commercial Go programs. Here we have Fan Hui — you probably know him, he was one of the main actors in the AlphaGo movie, a really cool guy, and he was the European champion at that point in time. You can see the scores, and then this is AlphaGo, and finally the distributed version of AlphaGo running on multiple machines rather than a single one. You can see the first program is really bad, beginner level; all the other Go programs were at amateur level; and only AlphaGo reached 2 dan — a Go-specific ranking. Fan Hui was a 2 dan professional, AlphaGo was in the same ballpark, and the distributed version was even stronger. So that's about it: AlphaGo finally achieved professional-level rankings.

And here are some ablation studies. When you use the fast rollouts, the value function, and the policy network for the priors all together, that's the best configuration. Once you remove the value network, the rating drops; once you remove the rollouts, it drops even further, which means the rollouts are really important for evaluating those leaf nodes. Then a simple ablation over compute: once you increase the number of search threads, the Elo goes up; once you increase the number of GPUs while holding 40 threads, performance improves again; and in the distributed setting, as you keep adding GPUs, you again, as expected, get more performance. So compute, especially in RL, is something that improves things — we know that from the GPT family of models as well: throw more compute at it and you'll get better results, that's for sure. The caveat here is that each program had approximately five seconds of computation per move, and under that constraint we get these Elo scores; with more time they'd get even higher Elo scores.

OK, now some detail about the hardware implementation. AlphaGo uses an asynchronous multi-threaded search that executes simulations on CPUs and computes the policy and value networks in parallel on GPUs.
So basically those fast rollout-policy simulations get delegated to the CPUs, while the slower, bulkier network evaluations — the SL policy that calculates the priors and the value network that estimates the leaf nodes — get offloaded to GPUs. That's a small optimization they did. You can see a bunch of hardware here: the final distributed version of AlphaGo used multiple machines, 40 search threads, 1,200-something CPUs and close to 200 GPUs. It's a huge amount of hardware.

OK, I'll skip this part, it's not that interesting, and I'll skip this as well — these are just the five games they played against Fan Hui; they won all five, back in 2015/16, so you can just check those out. I personally have never played Go, but you don't need to understand Go in order to understand the actual implementation and the deep learning approach in this paper.

OK, one more detail: the ablations with that lambda term combining the value-function estimate and the rollout estimate. The two are complementary: the value network approximates the outcome of games played by the strong but impractically slow policy network, while the rollouts can precisely score and evaluate the outcome of games played by the weaker but faster rollout policy. Because the rollout policy only takes two microseconds per move, you can run it many times, and those estimates end up complementing the value function, which is slower but more accurate. Combining them is complementary — so yeah, again, experiments, experiments.

OK, I think this was the high-level part. I've got only two more pages left, where we'll dig into even more details and you'll really understand how this thing works. Before that, just a quick mention: AlphaGo evaluated thousands of times fewer positions — position is again a synonym for state — than Deep Blue did in its match against Kasparov in '97. So it's a much smarter approach, not as brute-force as Deep Blue was; that's another plus for AlphaGo. And it provided hope that human-level performance can be achieved in other seemingly intractable AI domains. The good thing about this paper is exactly that: when something looks unsolvable, when the search space is as huge as in Go, and people thought we'd need another ten years to solve it — and then it was solved a year or two after that prediction was made — it gives us hope that a couple of years down the road we'll be solving problems we currently think are uncrackable. Who knows.

OK, that was the first part. Now let's get even deeper — the last two pages will give you a really firm grasp and understanding of AlphaGo, so let's dig in. A quick note here: prior to AlphaGo, in order to solve board games people used minimax search, which you've probably heard of in your favorite CS class. Then people invented better heuristics like depth-first minimax search with alpha-beta pruning, which achieved superhuman performance in chess, checkers and Othello, but was not effective in Go. After that, people started using Monte Carlo tree search without any DNNs.
And this dramatically improved performance in a bunch of different board games, Go included, but by itself it still wasn't good enough to push those algorithms to the professional level. OK, so let's now see every single detail of the search algorithm.

I mentioned the statistics, and here they are: each edge stores a set of statistics. There's the prior — the thing we calculate using the SL network; the visit counts for that particular edge; the cumulative Monte Carlo estimates coming from the value function and coming from the rollouts; and finally the Q value, which we calculate from all of these previous statistics and also store on every edge. Keep that in mind.

I mentioned PUCT — the variant of the upper confidence bound algorithm for trees, UCT — and this is the exact formulation. They have a coefficient, c_puct, which weighs the u part against the Q part; it's just a hyperparameter, like the lambda parameter, and they set it to five, again probably via hyperparameter search. Here is the prior, and here are the visit counts. The piece we were missing is this: say we have a parent state and a child state, and we're calculating the PUCT value for the edge between them — the square root counts the number of visits the parent had, and the denominator counts the visits to this particular edge.

So let me try to translate this equation into human language. The more visits your parent had and the fewer times this particular edge was visited — while still having a strong prior — the more likely this edge is to be picked next. If we've been visiting a lot of the other children of this parent, the numerator grows; and if we haven't been visiting this child, the denominator stays small; so this ratio blows up. And since the prior multiplies it and will never be exactly zero, eventually we'll want to explore this particular node. That's the exploration-versus-exploitation tradeoff I was mentioning previously — now you can see every single detail. As the paper puts it, this search control strategy initially prefers actions with high prior probability and low visit count, but asymptotically prefers actions with high action value — because, as I said, the u term goes to zero and eventually you're just traversing the tree using the action values.

Another optimization detail: the leaf position is added to a queue for evaluation by the value network unless it has previously been evaluated — so each leaf is run through the value network only once. Again, just a small optimization.
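For reference, here is that exploration term and the selection rule written out, roughly as the paper states them (with c_puct = 5):

```latex
u(s, a) = c_{\mathrm{puct}}\; P(s, a)\,
\frac{\sqrt{\sum_{b} N_r(s, b)}}{1 + N_r(s, a)},
\qquad
a_t = \operatorname*{arg\,max}_{a}\; \bigl( Q(s_t, a) + u(s_t, a) \bigr)
```

where P(s, a) is the SL-policy prior and N_r counts how many simulations traversed each edge.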
One more interesting thing here is this concept of a virtual loss. What it does is the following. Say we have a root node, we traverse some path, and we get to a leaf node. Now we have to evaluate this node: we have to pass it through the value network, which sits on some GPU, and we have to do a rollout, which gets delegated to some CPU. So there will be some delay before the rollout game finishes and before the inference on the GPU is done. In that time, we want to make sure the other threads are not exploring this exact same path — because remember, Monte Carlo tree search has a bunch of simulations going on in parallel, a bunch of threads trying to improve the estimates in this search tree — so we want to discourage them from visiting the same trace as this one.

So what we do is the following, for every edge along the path (these are the states, and the connections are the edges): we artificially increase the visit counts by three, where three is the virtual loss value they use, and we also decrease the cumulative Monte Carlo estimates — those W values — by three. Looking at the PUCT equation, this effectively does the following: because the visit count gets bigger, the u term gets smaller; and because Q is calculated by averaging the cumulative estimate over the visits, and the cumulative part is now smaller, Q gets smaller too. So every one of these edges gets a lower score, and the other parallel threads won't visit them until we finish the value inference and the rollout. Then we just counteract this: we remove the three artificial visits and add one real visit, we add the three back to W, and we add the actual outcome z from the rollout. So in the end the edge holds exactly the statistics we want. Hopefully that was clear enough — if not, please let me know down in the comment section.

OK, so that was the virtual loss. At the end of the simulation, the rollout statistics are updated in a backward pass through each step of the traversed path, replacing the virtual losses by the actual outcome: we counteract the virtual loss, we increase the visit count, and we add the actual rollout outcome. That's the whole trick. And this is how Q is evaluated, again: lambda is 0.5, so we have an equal contribution from the value function, averaged over the number of visits of that particular edge, and the same for the rollouts — that's how we get Q.

And that's it. Now, there are more details — hopefully you find this interesting, because there is a lot of engineering here. AlphaGo is not one single novel research idea per se, but the whole engineering pipeline and everything put together is awesome, and that's why they achieved the results they did. So, the prior calculations: once we have a leaf and we want to expand it, we feed it through the SL policy network. But again, the SL policy is on some GPU, so until you get the actual priors you'd have to wait. That's why they have this thing called the tree policy — a policy we haven't heard about so far, the fourth policy network, which they only mention here. What it does is provide placeholders for the priors: before the actual priors come back from the GPU, it puts in some roughly good priors — not as good as the real, higher-quality ones that will replace them. So it's a similar trick to the virtual loss: a placeholder before we get the actual results.
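As a rough sketch of that virtual-loss bookkeeping on a single edge (the dictionary of rollout statistics here is just an illustrative stand-in, not AlphaGo's actual data structure):

```python
N_VL = 3  # virtual loss, the value reported in the paper

def apply_virtual_loss(edge):
    # While this path waits on the GPU inference and the CPU rollout,
    # make it look well-visited and bad so other threads avoid it.
    edge["N_r"] += N_VL
    edge["W_r"] -= N_VL

def backup_rollout(edge, z):
    # Once the rollout outcome z (+1 / -1) is in: undo the virtual loss
    # and record one real visit plus the actual result.
    edge["N_r"] += 1 - N_VL
    edge["W_r"] += N_VL + z
```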
And you'll see this placeholder-before-the-real-result pattern repeating in the paper. OK. And finally, the priors are computed by the SL policy, and they use this beta to control the softmax temperature — you know what temperature does to a softmax, a lower temperature makes the distribution sharper. A minor, not that important detail.

Finally, at the end, AlphaGo selects the action with the maximum visit count, and that's less sensitive to outliers than maximizing the action value. What that means is the following: we have the root, we've done a bunch of simulations, so we now have pretty decent statistics and much better approximations of the actual Q values, and we have some numbers on the edges — maybe this edge was visited 32 times, this one only three times, this one 50 times. So we pick this last edge, that's the action we play, and we end up in a new state. Then maybe Lee Sedol plays his move, he brings us to some other state, and we run the Monte Carlo tree search again. That's how the game plays out.

But again, some tricks: they reuse the tree at subsequent time steps. That means the following: we computed this huge tree at this root state, we decided to go here, and a bunch of simulations already went down and computed statistics for this part of the subtree. We throw away the rest of the tree, but we keep the statistics for the subtree we actually entered, so we don't have to recalculate them from scratch. Another small optimization detail — it kind of helps you build up the picture of what it takes to actually make this thing work. It's one thing to just say, hey, we're using policy gradients, supervised learning, blah blah blah; but when you actually see what it takes to make this work, there are a lot of details: a lot of distributed programming, software engineering, testing, trying things out, experimenting. So keep that in mind if you don't already know it.

And finally, the match version of AlphaGo continues searching during the opponent's move. So while the opponent — Lee Sedol, for example — is thinking, we're still calculating, still building up the Monte Carlo search tree, so wherever Lee Sedol takes us, we'll already have better approximations of that subtree because we kept updating the tree. And one more detail: it extends the search if the action maximizing the visit count and the action maximizing the action value disagree. So if the Q values and the N values tell you to take different actions, there's some discrepancy, and they just keep calculating until the two agree — then we have a much better estimate and we're more confident in the chosen move. A tiny sketch of that decision rule is below.
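This is just a toy illustration of that rule, reusing the hypothetical `Node` from the earlier sketch; `run_more_simulations` and `time_left` are placeholder callables, not anything from the paper:

```python
def choose_move(root, run_more_simulations, time_left):
    """Pick the move by visit count (more robust to outliers than max Q);
    if the most-visited and highest-Q moves disagree, keep searching."""
    def best_by_visits():
        return max(root.N, key=root.N.get)

    def best_by_value():
        return max(root.N, key=root.q)

    while best_by_visits() != best_by_value() and time_left():
        run_more_simulations(root)   # extend the search a bit

    return best_by_visits()
```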
OK, those were the most important parts of AlphaGo. Now I'll just wrap this up with a couple more details around the neural networks, and that's it. We already heard a lot about the fast rollout policy: it's a linear softmax — that's why it's so fast, two microseconds per move — and it was trained on eight million positions, whereas the huge SL policy was trained on 30 million. It's trained on less data because it's smaller, it has less capacity, but it's the same training approach, plus some additional hand-crafted heuristics which I won't go into. Just keep in mind that, again, a lot of details go into this once you start really digging deep.

I also want to mention symmetries. What's the thing with symmetries? Well, the Go board is symmetric: if you mark an orientation, you can imagine four different rotations — rotate the board by 90 degrees and you end up with an equivalent layout — and you can also reflect it, so you end up with eight different ways to feed the same position into a CNN. When they use the raw RL policy network to play the game, they take all eight configurations, feed each into the policy network, and average the probabilities across those eight configurations — and they actually get better gameplay by doing that, better distributions over actions. Also, the Monte Carlo tree search I've been describing makes use of an implicit symmetry ensemble that randomly selects a single rotation or reflection for each evaluation: once you get to a leaf node, you take a random element from this group of symmetries and do the inference on that one, and they showed this improves things. The reason it works is that they also augmented the dataset using all eight elements of that group — more data, thus better weights — and then they exploit the same symmetries in the search as well. So, symmetries: if you didn't expect any, here they are. I know AlphaFold is also using some interesting symmetries — the paper hasn't come out yet, but from the blogs it's probably using some equivariant types of deep learning models — so we'll see.

A bit about computation, which I found interesting. It takes 50 GPUs and three weeks to train the SL policy network — and that's just the final network, without any hyperparameter search. Then the RL policy is trained for one day on 50 GPUs, and the value network takes 50 GPUs for one week. So that's more than a month to train the final configuration of this algorithm, plus the Monte Carlo tree search compute at inference time. A lot of compute went into this; keep that in mind.

OK, let me see if there are some more interesting details here. I already mentioned the pool of opponents for the RL policy. But here is a small detail that's probably important: I previously had only z_t in the policy-gradient update, and that's not exactly correct — they're not using that simplest version of the policy gradient algorithm. They use the REINFORCE policy gradient where, in order to reduce the variance while keeping the same bias, you subtract a baseline, and here the baseline is just the value function. Everything else stays the same: across every game and every time step, we average the gradients weighted by this term, and you can treat it as an advantage. Basically, if you won but the value function already said 0.8, the advantage is small, so you still push that action's probability up, but only by a little bit.
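Written out, roughly as in the paper's methods section, the update they describe is:

```latex
\Delta \rho \;\propto\; \frac{1}{n} \sum_{i=1}^{n} \sum_{t=1}^{T^{i}}
\frac{\partial \log p_{\rho}\!\left(a_t^{i} \mid s_t^{i}\right)}{\partial \rho}
\left( z_t^{i} - v_{\theta}\!\left(s_t^{i}\right) \right)
```

i.e. the plain REINFORCE gradient with the value network v_theta(s) subtracted as a baseline.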
So hopefully you have an intuitive understanding of why this baseline is here. I won't get into the maths — it doesn't change the expectation, so the bias is unchanged; only the variance is reduced. Intuitively it's the same thing as if you only had z_t, so if it's easier for you, just ignore the value-function part.

I think that's pretty much it — I've covered everything. A lot of things went into this. Here are some of the hyperparameters I was mentioning: the expansion threshold of 40, the virtual loss, the c_puct coefficient for the PUCT algorithm, the mixing parameter lambda, the temperature for the SL policy. A lot of hyperparameters, and a lot of work from a team of people at DeepMind to create this thing. So yeah, it's pretty awesome.

So let me know if you found this video interesting. If you did, you know the drill: hit the subscribe button, click the bell icon to get notified, and see you in the next video. Until next time, keep learning deep.
[{"start": 0.0, "end": 8.0, "text": " What's up? In this video I'm going to cover AlphaGo paper or mastering the game of Go with deep neural networks and research."}, {"start": 8.0, "end": 13.0, "text": " And I guess all of you already know and heard a lot about it. They even made a movie about AlphaGo."}, {"start": 13.0, "end": 17.0, "text": " Everything we've ever tried in AI just falls over when you try the game of Go."}, {"start": 17.0, "end": 22.0, "text": " The number of possible configurations of the board is more than the number of atoms in the universe."}, {"start": 22.0, "end": 27.0, "text": " After you've finished with this video, you'll have a much better background."}, {"start": 27.0, "end": 31.0, "text": " And I strongly suggest you go ahead and watch the complete movie if you haven't already."}, {"start": 31.0, "end": 34.0, "text": " The game kind of turned on its axis."}, {"start": 34.0, "end": 37.0, "text": " That is not a confident face."}, {"start": 37.0, "end": 40.0, "text": " Okay, with that out of the way, let's jump into the paper."}, {"start": 40.0, "end": 52.0, "text": " So basically, why is Go interesting? The reason it's so interesting is that all of the other famous board games such as chess, checkers, etc. were already solved much earlier."}, {"start": 52.0, "end": 63.0, "text": " Like basically, in 1997, Deep Blue program has beaten Garry Kasparov, the best chess player in the world at that point of time."}, {"start": 63.0, "end": 67.0, "text": " And all of the other games were also solved, but Go still wasn't solved."}, {"start": 67.0, "end": 76.0, "text": " And people predicted that we'll need at least, so I think in 2014 or 2015, people thought that we'll need at least 10 more years to actually solve the game."}, {"start": 76.0, "end": 82.0, "text": " And then in 2015-16, Destiny appeared and cracked the game of Go."}, {"start": 82.0, "end": 87.0, "text": " So it's a pretty big deal and it's a seminal. 
It's a breakthrough in the world of AI."}, {"start": 87.0, "end": 91.0, "text": " And yeah, so basically, what is Go to start with?"}, {"start": 91.0, "end": 98.0, "text": " So basically, it's an old Eastern game that was, I think, invented in China a couple of thousand years ago."}, {"start": 98.0, "end": 108.0, "text": " And basically, you have a 19 by 19 board and you place these kind of white and black stones on the intersections."}, {"start": 108.0, "end": 111.0, "text": " And the whole goal is to acquire as much territory as possible."}, {"start": 111.0, "end": 117.0, "text": " So you can see here, the white here actually gained all of the axes belong to the white player."}, {"start": 117.0, "end": 121.0, "text": " And the game finishes in two ways."}, {"start": 121.0, "end": 130.0, "text": " Either you win because the opponent resigned or you win because the game ended and you have more territory than the opponent."}, {"start": 130.0, "end": 132.0, "text": " Those are the basics."}, {"start": 132.0, "end": 137.0, "text": " Now let's jump into the details of the paper and I'll slowly start going from the high level explanation."}, {"start": 137.0, "end": 139.0, "text": " I'll go into the depth at the end of the paper."}, {"start": 139.0, "end": 141.0, "text": " So yeah, stick with me."}, {"start": 141.0, "end": 153.0, "text": " Okay, so as a bit of theory, basically, all the games that have perfect information have this thing called the optimal value function."}, {"start": 153.0, "end": 157.0, "text": " In games that have perfect information such as Go because it's completely observable,"}, {"start": 157.0, "end": 164.0, "text": " compare that to poker, maybe where you don't know the cards of the opponent, so it's partially observable."}, {"start": 164.0, "end": 167.0, "text": " Here in perfect information games, you have this thing called optimal value function."}, {"start": 167.0, "end": 173.0, "text": " Once you have the optimal value function, you can play using it just following the greedy policy and you'll win."}, {"start": 173.0, "end": 178.0, "text": " You'll basically play the most optimal you can if the other players are also playing the optimal game."}, {"start": 178.0, "end": 180.0, "text": " So that's some game theory for you."}, {"start": 180.0, "end": 186.0, "text": " But basically, we'll find a way how to approximate this function and we'll also use something called policy function"}, {"start": 186.0, "end": 190.0, "text": " and Monte Carlo tree search to get to the final AlphaGo layout."}, {"start": 190.0, "end": 194.0, "text": " And I'll slowly start digesting all of those parts for you."}, {"start": 194.0, "end": 198.0, "text": " So before that, why is Go so hard?"}, {"start": 198.0, "end": 203.0, "text": " And one of the reasons, the main reason basically is that the search space is huge."}, {"start": 203.0, "end": 209.0, "text": " So in games like chess, you have the breath is around 35 and the depth is around 80."}, {"start": 209.0, "end": 213.0, "text": " So you can just kind of average out all of the branches and get this number."}, {"start": 213.0, "end": 217.0, "text": " And the final number of states is B raised to the power of D."}, {"start": 217.0, "end": 222.0, "text": " So you can see that chess has a lot of states, but it's nowhere near close to Go,"}, {"start": 222.0, "end": 225.0, "text": " so you have basically 250 raised to the power of 150."}, {"start": 225.0, "end": 228.0, "text": " That's a huge amount of states."}, {"start": 228.0, "end": 
231.0, "text": " And all of the previous approaches such as the minimax search,"}, {"start": 231.0, "end": 236.0, "text": " such as the depth minimax search with the alpha, beta, pruning, whatever,"}, {"start": 236.0, "end": 238.0, "text": " failed to solve the search."}, {"start": 238.0, "end": 245.0, "text": " And even pure Monte Carlo tree search methods, which are proven to be really good and effective for other board games,"}, {"start": 245.0, "end": 249.0, "text": " failed miserably when deployed to Go."}, {"start": 249.0, "end": 256.0, "text": " So let's see how this paper actually coped and found the solution to solve this hard, difficult game."}, {"start": 256.0, "end": 262.0, "text": " And yeah, here is the kind of the high level overview of the whole system."}, {"start": 262.0, "end": 265.0, "text": " And it has a couple of components."}, {"start": 265.0, "end": 268.0, "text": " So the main components are these."}, {"start": 268.0, "end": 270.0, "text": " So we first have this thing called policy network."}, {"start": 270.0, "end": 273.0, "text": " And what the policy network does is the following."}, {"start": 273.0, "end": 276.0, "text": " You input a state where the state is the following."}, {"start": 276.0, "end": 280.0, "text": " So you basically have a 19 by 19 board, right?"}, {"start": 280.0, "end": 283.0, "text": " And they'll just have some handcrafted feature maps."}, {"start": 283.0, "end": 290.0, "text": " You'll end up with a 19 by 19 times 48 volume, which you'll be inputting into CNN."}, {"start": 290.0, "end": 292.0, "text": " And that's pretty much it."}, {"start": 292.0, "end": 295.0, "text": " And you don't need to know anything about those specifics."}, {"start": 295.0, "end": 300.0, "text": " So here is, for example, one of the features is these things called liberties."}, {"start": 300.0, "end": 302.0, "text": " And those are basically the empty slots around a certain player."}, {"start": 302.0, "end": 306.0, "text": " Maybe we have a black here and some of the liberties would be the red circles here."}, {"start": 306.0, "end": 312.0, "text": " And you kind of you somehow integrate that information into that feature, into those feature maps."}, {"start": 312.0, "end": 315.0, "text": " And you input those and use that as the as the current state."}, {"start": 315.0, "end": 318.0, "text": " So with that out of the way, let's just see how it works."}, {"start": 318.0, "end": 324.0, "text": " So the first thing they do is they train this thing called supervised learning policy network."}, {"start": 324.0, "end": 325.0, "text": " And what they do is the following."}, {"start": 325.0, "end": 332.0, "text": " So they have a huge database of professional, so experts playing game of go."}, {"start": 332.0, "end": 341.0, "text": " And it's they just train this policy network as a simple supervised learning method using the simple SL method."}, {"start": 341.0, "end": 344.0, "text": " And so basically you input the state."}, {"start": 344.0, "end": 345.0, "text": " Here's a visual representation."}, {"start": 345.0, "end": 352.0, "text": " So you input the state here and out of the policy network comes this vector of probabilities distributed over the whole board."}, {"start": 352.0, "end": 358.0, "text": " So there's basically a 19 by 19 vector of probabilities."}, {"start": 358.0, "end": 361.0, "text": " And what they do here to train it is the following."}, {"start": 361.0, "end": 365.0, "text": " So they just have these tuples, a bunch of these tuples, I think 
30 million."}, {"start": 365.0, "end": 366.0, "text": " But we'll get to those details a bit later."}, {"start": 366.0, "end": 369.0, "text": " And we have these tuple and tuples."}, {"start": 369.0, "end": 370.0, "text": " And so what they do is the following."}, {"start": 370.0, "end": 371.0, "text": " So they have a CNN."}, {"start": 371.0, "end": 373.0, "text": " So that's the policy network."}, {"start": 373.0, "end": 380.0, "text": " So just basically plug in the S, the state, and you get out the probabilities."}, {"start": 380.0, "end": 383.0, "text": " So you just want to do a simple cross entropy loss here."}, {"start": 383.0, "end": 386.0, "text": " And basically whatever the action that the professional player played."}, {"start": 386.0, "end": 388.0, "text": " So maybe that's this one."}, {"start": 388.0, "end": 395.0, "text": " You want to make sure that you push that one to one and you make sure that the other ones go to zero."}, {"start": 395.0, "end": 403.0, "text": " So because this is softmax, you just basically do log on this action and minus log and you just minimize that loss."}, {"start": 403.0, "end": 404.0, "text": " So if I just plot that."}, {"start": 404.0, "end": 406.0, "text": " So that's the minus log."}, {"start": 406.0, "end": 409.0, "text": " And you want to make sure that the probability of that particular action goes to one."}, {"start": 409.0, "end": 411.0, "text": " And then we'll have the loss dropping down to zero."}, {"start": 411.0, "end": 413.0, "text": " So that's how they train it."}, {"start": 413.0, "end": 414.0, "text": " And that's super simple."}, {"start": 414.0, "end": 419.0, "text": " So it's just that you need a lot of data and they have a big CNN, 12, 13 layers deep."}, {"start": 419.0, "end": 421.0, "text": " Again, we'll get to those details a bit later."}, {"start": 421.0, "end": 423.0, "text": " So but that's the policy network."}, {"start": 423.0, "end": 424.0, "text": " Okay."}, {"start": 424.0, "end": 426.0, "text": " The self supervised policy network."}, {"start": 426.0, "end": 429.0, "text": " Then we have this thing called the rollout policy."}, {"start": 429.0, "end": 431.0, "text": " And it's trained in the same fashion."}, {"start": 431.0, "end": 434.0, "text": " The only difference is that it's much smaller."}, {"start": 434.0, "end": 436.0, "text": " It's actually it's not a DNN."}, {"start": 436.0, "end": 437.0, "text": " It's not a neural network."}, {"start": 437.0, "end": 439.0, "text": " It's a it's a simple linear layer."}, {"start": 439.0, "end": 445.0, "text": " And it's trained on less data because it's it's has a smaller capacity."}, {"start": 445.0, "end": 446.0, "text": " OK."}, {"start": 446.0, "end": 450.0, "text": " Once that's done, you want to train something called the RL policy network."}, {"start": 450.0, "end": 460.0, "text": " And what you do basically is you take this trained this pre trained supervised learning policy and you just initialize this RL policy network."}, {"start": 460.0, "end": 466.0, "text": " And then you create a huge pool of different policy networks."}, {"start": 466.0, "end": 468.0, "text": " And you just let them play against each other."}, {"start": 468.0, "end": 472.0, "text": " So something we call self play in the field of reinforcement learning."}, {"start": 472.0, "end": 480.0, "text": " And basically after after you train those against each other, you basically then train those networks using policy gradient methods."}, {"start": 480.0, "end": 485.0, "text": " And 
again, I'll give you a simple intuitive explanation and then we'll dig into equations."}, {"start": 485.0, "end": 490.0, "text": " But what you basically do so you have two players and they play they play the game of go."}, {"start": 490.0, "end": 494.0, "text": " And now let's say that each of these lines represents the action."}, {"start": 494.0, "end": 496.0, "text": " So let's say that this player won."}, {"start": 496.0, "end": 498.0, "text": " So player one player two."}, {"start": 498.0, "end": 501.0, "text": " So at the end, the player one won."}, {"start": 501.0, "end": 511.0, "text": " And so basically what you'll do is you'll take all of the actions that that player did and you'll make sure they are more like they have higher probability."}, {"start": 511.0, "end": 522.0, "text": " And on the other hand, you'll take the player two and you'll make sure that the actions that that player took in those specific states are less probable the next time."}, {"start": 522.0, "end": 524.0, "text": " And then you'll just take the player two."}, {"start": 524.0, "end": 528.0, "text": " You'll input it into the pool of players, potential players."}, {"start": 528.0, "end": 532.0, "text": " You'll update your your current your RL policy."}, {"start": 532.0, "end": 534.0, "text": " And that's how you'll continue doing it."}, {"start": 534.0, "end": 536.0, "text": " You'll be improving your policy."}, {"start": 536.0, "end": 544.0, "text": " You'll be putting the opponents into the pool and you'll be playing against them and basically do the policy gradient."}, {"start": 544.0, "end": 546.0, "text": " And that's how you learn the policy network."}, {"start": 546.0, "end": 549.0, "text": " Again, just a high level explanation will get equations."}, {"start": 549.0, "end": 553.0, "text": " Basically, I just want you to have this mental picture."}, {"start": 553.0, "end": 560.0, "text": " The actions if you won, you want to make sure that the actions you played in those particular states are of higher probability."}, {"start": 560.0, "end": 565.0, "text": " And if you lost, you want to make sure that the actions are actually less probable next time you use that network."}, {"start": 565.0, "end": 567.0, "text": " That's it. It's really simple."}, {"start": 567.0, "end": 572.0, "text": " It's basically you can treat it as a classification task for all of those couples."}, {"start": 572.0, "end": 579.0, "text": " OK, so the trick here is we won't be using this RL policy network in the final AlphaGo algorithm."}, {"start": 579.0, "end": 580.0, "text": " So why do we have it?"}, {"start": 580.0, "end": 584.0, "text": " Well, we have it in order to bootstrap this thing called the value network."}, {"start": 584.0, "end": 591.0, "text": " And what the value network does is you give it the state and it just tells you how likely are you to win in that particular state."}, {"start": 591.0, "end": 594.0, "text": " So it has values between one and minus one."}, {"start": 594.0, "end": 597.0, "text": " If it outputs one, that means you're highly likely to win."}, {"start": 597.0, "end": 604.0, "text": " If it outputs minus one, that means you're going to lose and anything in between. 
Yeah, just you can interpolate yourself."}, {"start": 604.0, "end": 608.0, "text": " OK, so you can see here the again the visualization of the value network."}, {"start": 608.0, "end": 611.0, "text": " The output is a scalar, which tells you how good that state is."}, {"start": 611.0, "end": 617.0, "text": " So that's basically our best approximation of the value state function in RL."}, {"start": 617.0, "end": 625.0, "text": " OK, so the way we train the value network is we take a bunch of so we have we play a bunch of self play games."}, {"start": 625.0, "end": 634.0, "text": " And from each game, we take a single state and the outcome of the game and we train the value network to regress that outcome."}, {"start": 634.0, "end": 638.0, "text": " So we'll again we'll get to details, but like that's the high level picture."}, {"start": 638.0, "end": 642.0, "text": " So you want to make sure that it learns how to basically regress."}, {"start": 642.0, "end": 647.0, "text": " It's again a supervised learning on a data set that we extracted from the self play games."}, {"start": 647.0, "end": 649.0, "text": " And that's it."}, {"start": 649.0, "end": 656.0, "text": " So once we have all of those networks, we'll be using them and combine them into this thing called Monte Carlo Tree Search."}, {"start": 656.0, "end": 663.0, "text": " And we'll use these networks to better to kind of scope the breadth and the depth of the tree search."}, {"start": 663.0, "end": 667.0, "text": " And that's how the ultimate algorithm, the final algorithm will look like."}, {"start": 667.0, "end": 672.0, "text": " So we'll slowly get to those details, but this was the high level picture."}, {"start": 672.0, "end": 674.0, "text": " Hopefully you understand everything."}, {"start": 674.0, "end": 679.0, "text": " OK, now let's start digging into some equations and let's see how the SL policies train."}, {"start": 679.0, "end": 684.0, "text": " So we basically want to maximize the likelihood of the human move a selected in state s."}, {"start": 684.0, "end": 689.0, "text": " And here you can just symbolically see that basically you want to move."}, {"start": 689.0, "end": 694.0, "text": " So for that particular action, you just want to maximize it given the state."}, {"start": 694.0, "end": 708.0, "text": " So those are these state action tuples and you just do a step along the gradient, which will kind of boost up the probability of that action and diminish all of the other probabilities."}, {"start": 708.0, "end": 713.0, "text": " Simple classification, basically cross entropy loss."}, {"start": 713.0, "end": 715.0, "text": " Don't get confused by these expressions."}, {"start": 715.0, "end": 720.0, "text": " This just means that the update step is proportional to this one."}, {"start": 720.0, "end": 723.0, "text": " But you basically do cross entropy loss and that's it."}, {"start": 723.0, "end": 726.0, "text": " So it's a 13 layer CNN basically."}, {"start": 726.0, "end": 731.0, "text": " And you train it on 30 million positions and position is just a fancy name for the state."}, {"start": 731.0, "end": 735.0, "text": " And so that's the data set I was mentioning here."}, {"start": 735.0, "end": 748.0, "text": " And yeah, so finally, once we have this network pre trained, they showed that on the on the test set, you can predict with 57 percent accuracy the next step that the expert will play."}, {"start": 748.0, "end": 755.0, "text": " So that's a huge improvement compared to the previous state of the art, which 
was 44.4."}, {"start": 755.0, "end": 763.0, "text": " And that actually translates into much higher probability of winning the game as we'll soon see."}, {"start": 763.0, "end": 765.0, "text": " OK, so that was the SL policy."}, {"start": 765.0, "end": 770.0, "text": " Now, the fast rollout policy already explained it's a linear softmax."}, {"start": 770.0, "end": 774.0, "text": " It just has a bit different input."}, {"start": 774.0, "end": 779.0, "text": " Basically, again, some go magic, some hand crafting."}, {"start": 779.0, "end": 787.0, "text": " And so the thing with this network is it has a much smaller accuracy, so 24.2."}, {"start": 787.0, "end": 791.0, "text": " But the thing is, it's much faster because it's so smaller."}, {"start": 791.0, "end": 798.0, "text": " So the inference is two microseconds compared to this SL policy, which has three milliseconds."}, {"start": 798.0, "end": 809.0, "text": " Now you'll see why this will be useful and once we get to the search part of the algorithm for the aerial network, let's see how we train that one."}, {"start": 809.0, "end": 817.0, "text": " So, again, we train the current policy network, which is initial initially initialized using the SL policy network."}, {"start": 817.0, "end": 821.0, "text": " And we randomly select some previous iteration of the policy network."}, {"start": 821.0, "end": 823.0, "text": " So that's the self play part of the game."}, {"start": 823.0, "end": 828.0, "text": " And this is how you, again, how you update the weights of that policy network."}, {"start": 828.0, "end": 830.0, "text": " Basically, it's again proportional."}, {"start": 830.0, "end": 833.0, "text": " So the update step is proportional to the gradient."}, {"start": 833.0, "end": 835.0, "text": " So that's the same thing as this one."}, {"start": 835.0, "end": 836.0, "text": " But there's a trick."}, {"start": 836.0, "end": 838.0, "text": " There is this ZT and that's the outcome."}, {"start": 838.0, "end": 841.0, "text": " So that means if you won, you'd get plus one."}, {"start": 841.0, "end": 846.0, "text": " And then that means you basically end up in the same situation as here."}, {"start": 846.0, "end": 854.0, "text": " You'll basically want to make sure that all of the actions you played get get like higher probability in those particular states."}, {"start": 854.0, "end": 857.0, "text": " And if you lost, then you'll have minus one."}, {"start": 857.0, "end": 858.0, "text": " That means you'll move."}, {"start": 858.0, "end": 860.0, "text": " You'll actually move in the opposite direction."}, {"start": 860.0, "end": 865.0, "text": " So you'll make sure that the actions are less probable and that intuitively makes sense."}, {"start": 865.0, "end": 872.0, "text": " And in expectations, all of that will lead you to have much better policy function."}, {"start": 872.0, "end": 873.0, "text": " OK."}, {"start": 873.0, "end": 881.0, "text": " And when you play head to head, when you take this RL policy that was trained using self play and you compare it with the SL policy network,"}, {"start": 881.0, "end": 884.0, "text": " you can see that it will win in 80 percent of the games."}, {"start": 884.0, "end": 886.0, "text": " So it improved dramatically."}, {"start": 886.0, "end": 893.0, "text": " And if you use no search whatsoever, and we still haven't I still haven't explained the search, but we'll get there."}, {"start": 893.0, "end": 902.0, "text": " And if you don't use a search, you can actually just use this RL policy and you'll win 
in 85 percent of the games against this open source go pro."}, {"start": 902.0, "end": 905.0, "text": " And of course, go program called Pachi."}, {"start": 905.0, "end": 909.0, "text": " And it's not the best one out there, but it was pretty decent."}, {"start": 909.0, "end": 913.0, "text": " And it's it's awesome to see that without any search whatsoever."}, {"start": 913.0, "end": 926.0, "text": " And this thing, this Pachi thing, it uses a bunch of simulations, a bunch of search, and it still loses against a pure DNN approach without any MTCS Monte Carlo tree search."}, {"start": 926.0, "end": 928.0, "text": " So those are some of the details."}, {"start": 928.0, "end": 931.0, "text": " Let me show you what this chart means."}, {"start": 931.0, "end": 940.0, "text": " This one tells you that depending how many filters you have in your CNN, you will be improving the training accuracy."}, {"start": 940.0, "end": 942.0, "text": " And this is the final AlphaGo model."}, {"start": 942.0, "end": 951.0, "text": " And you can see that the as expected, the battery gets on this KGS supervised data set, the battery will get against the final version."}, {"start": 951.0, "end": 954.0, "text": " And the final CNN they took had 192 layers."}, {"start": 954.0, "end": 963.0, "text": " As you can see, that's this curve, it's got a decent trade off between the performance and its way faster than these bulky big networks."}, {"start": 963.0, "end": 966.0, "text": " So it's a trade off they did."}, {"start": 966.0, "end": 970.0, "text": " OK, we'll get to this chart in a bit later when we start talking about search."}, {"start": 970.0, "end": 972.0, "text": " OK, we saw how to train the policy networks."}, {"start": 972.0, "end": 976.0, "text": " We saw how to train the RL policy as well."}, {"start": 976.0, "end": 980.0, "text": " And you saw this is pretty much simple cross entropy."}, {"start": 980.0, "end": 985.0, "text": " It's a simple classification task pretty much ignoring this part."}, {"start": 985.0, "end": 988.0, "text": " And let's see how we train the value network."}, {"start": 988.0, "end": 990.0, "text": " So the value network is trained like this."}, {"start": 990.0, "end": 992.0, "text": " So it's basically a regression task."}, {"start": 992.0, "end": 994.0, "text": " And what this means is the following."}, {"start": 994.0, "end": 1003.0, "text": " So let's say we have state S and let's say that the final outcome going from that state was maybe ZT was one."}, {"start": 1003.0, "end": 1006.0, "text": " That means we won playing from this state."}, {"start": 1006.0, "end": 1013.0, "text": " So in let's say that the value function gave us maybe zero point seven."}, {"start": 1013.0, "end": 1019.0, "text": " So because of that, this difference, this is plus one and this is zero point seven."}, {"start": 1019.0, "end": 1024.0, "text": " That means you have zero point three, a positive number scaling the value function."}, {"start": 1024.0, "end": 1026.0, "text": " That means you'll increment."}, {"start": 1026.0, "end": 1030.0, "text": " You'll you'll you'll kind of update this to go from zero point seven."}, {"start": 1030.0, "end": 1035.0, "text": " Next time you'll see the state you'll maybe you'll maybe tell you maybe out with zero point seventy five."}, {"start": 1035.0, "end": 1046.0, "text": " But if the value function was off by much more, if it was maybe I don't know if it was zero point three or zero point two, then this difference would be zero point eight."}, {"start": 1046.0, 
"end": 1052.0, "text": " And that means you take like a bigger step and a bigger gradient."}, {"start": 1052.0, "end": 1056.0, "text": " And that means that you'd go from maybe zero point two."}, {"start": 1056.0, "end": 1061.0, "text": " You'd next time you see the step, you'd output zero point thirty five."}, {"start": 1061.0, "end": 1065.0, "text": " So you can see we had a bigger increment here than here."}, {"start": 1065.0, "end": 1068.0, "text": " And that's how it pretty much works."}, {"start": 1068.0, "end": 1071.0, "text": " So it's L2 L2 loss, basically."}, {"start": 1071.0, "end": 1072.0, "text": " Nothing special there."}, {"start": 1072.0, "end": 1073.0, "text": " Simple regression."}, {"start": 1073.0, "end": 1078.0, "text": " You always have to be supervised at the end in order to learn using gradient based methods."}, {"start": 1078.0, "end": 1081.0, "text": " So again, I explained how they train it."}, {"start": 1081.0, "end": 1085.0, "text": " They they they so they had some problems overfitting."}, {"start": 1085.0, "end": 1090.0, "text": " So if you if you if you train this value network the following way."}, {"start": 1090.0, "end": 1104.0, "text": " If you took two players and they played a game and if you were using every single state and the final outcome, which may be plus one for the this player, then you'd kind of overfit because the states are highly correlated."}, {"start": 1104.0, "end": 1113.0, "text": " And so what it did is instead from every single game, they just take a single random state and couple that with the final outcome ZT."}, {"start": 1113.0, "end": 1115.0, "text": " And that's what they use."}, {"start": 1115.0, "end": 1121.0, "text": " And after training on 30 million data points, they they pretty much avoid the overfitting."}, {"start": 1121.0, "end": 1129.0, "text": " And you can see that they have zero point two to six and zero point two three four on training and test sets, which is minimal overfitting."}, {"start": 1129.0, "end": 1138.0, "text": " Before that, they had zero point thirty seven on the test set, which is much higher than zero point nineteen on the training set when they just take this naive approach."}, {"start": 1138.0, "end": 1147.0, "text": " So that's the reason they took just a single couple from a single game. And that's how they avoided overfitting on this problem."}, {"start": 1147.0, "end": 1151.0, "text": " OK, finally, search."}, {"start": 1151.0, "end": 1153.0, "text": " Let me see how I'll explain this."}, {"start": 1153.0, "end": 1160.0, "text": " So here are some equations which will make sense in a second."}, {"start": 1160.0, "end": 1163.0, "text": " Basically, let's start with this diagram."}, {"start": 1163.0, "end": 1166.0, "text": " So this chart here tells you the following."}, {"start": 1166.0, "end": 1173.0, "text": " So we're in a current state and we want to do a bunch of simulations in order to improve our predictions."}, {"start": 1173.0, "end": 1177.0, "text": " So what we'll do is we'll somehow create these priors."}, {"start": 1177.0, "end": 1181.0, "text": " And in order to create these priors, we'll use the SL policy network."}, {"start": 1181.0, "end": 1183.0, "text": " So we'll just input the state."}, {"start": 1183.0, "end": 1187.0, "text": " We'll through this SL policy network, it will give us some priors."}, {"start": 1187.0, "end": 1196.0, "text": " And using those priors, you can see that we'll end up with some. 
So every single edge will have this score associated with it."}, {"start": 1196.0, "end": 1199.0, "text": " So that's a Q. So it's an action value function plus this U of P."}, {"start": 1199.0, "end": 1203.0, "text": " And we'll see what it is. And we just maximize that score."}, {"start": 1203.0, "end": 1207.0, "text": " And that's how we traverse the tree. And once we get to the leaf node."}, {"start": 1207.0, "end": 1212.0, "text": " So once we get here or here and we'll just keep this for a moment."}, {"start": 1212.0, "end": 1215.0, "text": " Expansion steps will go like this. Evaluation."}, {"start": 1215.0, "end": 1219.0, "text": " So once we get to the leaf node, we do the two things."}, {"start": 1219.0, "end": 1223.0, "text": " So the first thing is we pass that state into a value network."}, {"start": 1223.0, "end": 1228.0, "text": " So we pass this into a value network and it will give us the estimate of how good that state is."}, {"start": 1228.0, "end": 1233.0, "text": " And secondly, we'll do a rollout using the fast policy network."}, {"start": 1233.0, "end": 1240.0, "text": " So basically from this state, using that policy, we'll be playing the game and then we'll get to the final state."}, {"start": 1240.0, "end": 1248.0, "text": " We'll get some reward and using those two informations, those two snippets, we'll do something called backup."}, {"start": 1248.0, "end": 1254.0, "text": " Where we'll from the leaf node going to the root node, we'll be updating the edges with some statistics."}, {"start": 1254.0, "end": 1261.0, "text": " So we somehow update those statistics. We use those statistics to calculate the Q values."}, {"start": 1261.0, "end": 1269.0, "text": " And later we use those Q values to again in the next simulation to again do this simulation."}, {"start": 1269.0, "end": 1280.0, "text": " So when you do a bunch of these simulations, the values, the Q values and the visitation counts will be much more predictive of the final outcome."}, {"start": 1280.0, "end": 1287.0, "text": " And you'll finally use the visitation count from the edge to pick the best action from that particular state."}, {"start": 1287.0, "end": 1289.0, "text": " And that's the root state."}, {"start": 1289.0, "end": 1296.0, "text": " So if you're currently confused, that's totally fine. We'll slowly digest each of these pieces in a second."}, {"start": 1296.0, "end": 1304.0, "text": " Okay, so I skipped the expansion part because it's not usually you won't do expansion in every single step."}, {"start": 1304.0, "end": 1314.0, "text": " So basically expansion happens once you. So for example, let's say this is a leaf node and it has it was visited 39 times so far."}, {"start": 1314.0, "end": 1323.0, "text": " So out of all the simulations we had so far, we visited the state exactly 39 times and we have something called and threshold,"}, {"start": 1323.0, "end": 1327.0, "text": " which basically and they have it. The nominal value here is 40."}, {"start": 1327.0, "end": 1335.0, "text": " So once you get the next time you visit this leaf, the number of visits will rise to 40 and you'll want to expand this node."}, {"start": 1335.0, "end": 1339.0, "text": " So how to expand it? 
Well, again, you just use the SL policy network."}, {"start": 1339.0, "end": 1351.0, "text": " You input the state and you get the new priors and then you can you just pick the using the this the score that you'll pick the next state as the leaf."}, {"start": 1351.0, "end": 1355.0, "text": " So the one that had the max value, this will be your leaf node."}, {"start": 1355.0, "end": 1360.0, "text": " And then again, you just proceed with the evaluation. You do the you pass it through the value function."}, {"start": 1360.0, "end": 1366.0, "text": " You do the rollouts and you just kind of update the statistics up the tree, the entry updates."}, {"start": 1366.0, "end": 1376.0, "text": " OK, so that's the Monte Carlo tree search like a high level overview and we'll slowly start to better understand how it works."}, {"start": 1376.0, "end": 1385.0, "text": " So how do we pick the actions? So again, I said action value function plus this you think and this algorithm is called P."}, {"start": 1385.0, "end": 1392.0, "text": " You see T or polynomial upper bound."}, {"start": 1392.0, "end": 1399.0, "text": " Upper confidence bound for trees algorithm and if you're familiar with UCB, this will come will be pretty easy."}, {"start": 1399.0, "end": 1403.0, "text": " David Silver had a really nice explanation of UCB, so I'll link that in the description."}, {"start": 1403.0, "end": 1408.0, "text": " But here it will be enough to just understand this on like intuitively."}, {"start": 1408.0, "end": 1418.0, "text": " Basically, this thing, the you part is proportional to the prior and to the number of visitations of that particular edge."}, {"start": 1418.0, "end": 1425.0, "text": " So what that means is the following. So initially, this thing will be really small to zero."}, {"start": 1425.0, "end": 1429.0, "text": " So basically have the you part being proportional to the prior."}, {"start": 1429.0, "end": 1436.0, "text": " So you get the prior from the SL network, remember? So once you start visiting this this edge a lot of times,"}, {"start": 1436.0, "end": 1442.0, "text": " so that that particular action from that particular state, this thing will start to go to infinity,"}, {"start": 1442.0, "end": 1445.0, "text": " which means that the whole you part will go to zero."}, {"start": 1445.0, "end": 1453.0, "text": " And that means that initially you are preferring. So that's something called confidence in the face of uncertainty."}, {"start": 1453.0, "end": 1458.0, "text": " So you'll basically prefer exploration and you won't be focusing on in the Q value."}, {"start": 1458.0, "end": 1463.0, "text": " But as time goes on, this will go to zero. And you'll basically because you visited this edge a lot of times,"}, {"start": 1463.0, "end": 1467.0, "text": " you have a decent you have more confidence in your Q estimate."}, {"start": 1467.0, "end": 1473.0, "text": " And then if the Q estimate is high, that means that that particular action from that state is really promising."}, {"start": 1473.0, "end": 1477.0, "text": " So you'll keep continue exploring that part of a subtree."}, {"start": 1477.0, "end": 1484.0, "text": " So, yeah, hopefully that makes a bit more sense now. And don't worry, we'll get even deeper than this."}, {"start": 1484.0, "end": 1491.0, "text": " OK, so small optimization detail you basically don't want because the SL network is a huge CNN."}, {"start": 1491.0, "end": 1497.0, "text": " You don't want to use it repeatedly. 
You just once you calculate the priors for a specific leaf node, that's it."}, {"start": 1497.0, "end": 1502.0, "text": " You just store the priors and you don't recalculate it again. So that's that's a small detail."}, {"start": 1502.0, "end": 1508.0, "text": " So how do we get the Q values, which we later use to do navigate to traverse the tree?"}, {"start": 1508.0, "end": 1513.0, "text": " So this is how we get the Q values. Basically, we first calculate this thing."}, {"start": 1513.0, "end": 1522.0, "text": " So the value function, it's a simple weighted average between the value function and the rollout outcome."}, {"start": 1522.0, "end": 1529.0, "text": " So that's this part. So once we do the rollout and we do the V part, we take those values and we average them."}, {"start": 1529.0, "end": 1537.0, "text": " And these 0.5 here for the lambda, which means they'll equally take from the value estimate as well as from the rollout estimate."}, {"start": 1537.0, "end": 1543.0, "text": " That's the V. And finally, in order to get the Q, you'll just divide that by the number of visitations of that particular edge."}, {"start": 1543.0, "end": 1554.0, "text": " And that's your Q estimate. So this is just a fancy way of saying, hey, in iteration I, did you visit the state S and action A?"}, {"start": 1554.0, "end": 1564.0, "text": " If you had, just increment this. So just a fancy way of saying from N simulations we had, how many of those simulations did traverse this particular edge S A?"}, {"start": 1564.0, "end": 1574.0, "text": " So that's it. And that's how you calculate the Qs. So if we were following along, you may be confused about why are we using SL policy to calculate the priors."}, {"start": 1574.0, "end": 1587.0, "text": " And the reason is simple experiment. They tried both. So it's worth noting that the SL policy P sigma performed better in AlphaGo than the stronger RL policy network P row."}, {"start": 1587.0, "end": 1595.0, "text": " Basically, but on the other hand, the value function was derived from the stronger RL policy network."}, {"start": 1595.0, "end": 1602.0, "text": " So, yeah, it performed better than the value function derived from the SL policy network."}, {"start": 1602.0, "end": 1608.0, "text": " Just a simple detail. Again, experiments are pretty common in deep learning."}, {"start": 1608.0, "end": 1613.0, "text": " So, yeah, even though we'd expect that the RL policy would perform better, it did not."}, {"start": 1613.0, "end": 1622.0, "text": " And they make some assumption here, presumably because humans select a diverse beam of promising moves, whereas RL optimized for a single, for the single best move."}, {"start": 1622.0, "end": 1628.0, "text": " Not sure whether that's the best explanation, I guess. Yeah."}, {"start": 1628.0, "end": 1636.0, "text": " OK, so that was the search part. We saw all of the components on the high level. Let's see some results."}, {"start": 1636.0, "end": 1645.0, "text": " And here on the Y axis, you can see the L rating. And so what's what I hope I'm pronouncing that right. So you know, or something like that."}, {"start": 1645.0, "end": 1651.0, "text": " So basically, when you have a 230 point gap, that means you have a 80 percent probability of winning."}, {"start": 1651.0, "end": 1657.0, "text": " So that means translating that into human language, you have player one and player two."}, {"start": 1657.0, "end": 1669.0, "text": " And this guy has 230 points more in ELO than this guy. 
That means he'll win in 80 percent of the games in expectation."}, {"start": 1669.0, "end": 1675.0, "text": " So that's it. So once we know the Y axis on the X axis, you can see the following things."}, {"start": 1675.0, "end": 1683.0, "text": " So these two are open source go programs. This one was using it wasn't using this Monte Carlo tree search."}, {"start": 1683.0, "end": 1690.0, "text": " It used some older search heuristics, some variations of Minimax probably. And then we have two commercial go programs."}, {"start": 1690.0, "end": 1698.0, "text": " Here we have Fan Hui, who was you probably know this guy. He actually was one of the main actors in the AlphaGo movie."}, {"start": 1698.0, "end": 1704.0, "text": " Really cool guy. He was a European champion at that point of time. And you can see the scores."}, {"start": 1704.0, "end": 1708.0, "text": " And this is the AlphaGo. And this is finally the distributed version of the AlphaGo."}, {"start": 1708.0, "end": 1713.0, "text": " So on multiple machines, not on a single machine. And you can see this one is really bad."}, {"start": 1713.0, "end": 1721.0, "text": " Beginner level, all of the other go programs were amateurs. And finally, only the AlphaGo achieved two then."}, {"start": 1721.0, "end": 1730.0, "text": " So this is just some go specific ranking. So we had he was to then professional and AlphaGo was in the same ballpark."}, {"start": 1730.0, "end": 1734.0, "text": " And the distributed version was even stronger. So that's about it."}, {"start": 1734.0, "end": 1741.0, "text": " You can see AlphaGo finally achieved professional level rankings. And here are just some ablation studies."}, {"start": 1741.0, "end": 1749.0, "text": " Basically, when you use both the rollout, the fast rollout, both the value function and the policy network to estimate the priors."}, {"start": 1749.0, "end": 1755.0, "text": " That's the best thing to do. Once you remove the value, we drop. Once you remove the rollouts, we drop even further."}, {"start": 1755.0, "end": 1762.0, "text": " So that means that the rollouts are even more important. They're really important in evaluating those leaf nodes."}, {"start": 1762.0, "end": 1772.0, "text": " So yeah, here just a simple ablation of different. Once you increase the number of threads, you can see the yellow goes up."}, {"start": 1772.0, "end": 1779.0, "text": " Once you increase the number of GPUs and hold the 40 threads, you improve again the performance and finally distributed."}, {"start": 1779.0, "end": 1785.0, "text": " Once you start adding up more GPUs, you again, as expected, you get more performance."}, {"start": 1785.0, "end": 1794.0, "text": " So compute, especially in RL, is something that can improve things. And we know that from GPT family of models as well."}, {"start": 1794.0, "end": 1798.0, "text": " So throw more compute at it and you'll get better results. That's for sure."}, {"start": 1798.0, "end": 1804.0, "text": " The trick here is they only had five seconds for each move. So they have some limitations."}, {"start": 1804.0, "end": 1809.0, "text": " And that's something that you hear each program use approximately five second computations."}, {"start": 1809.0, "end": 1816.0, "text": " And using that constraint, we get these yellow scores. 
Otherwise, we had more time would get even higher yellow scores."}, {"start": 1816.0, "end": 1824.0, "text": " OK, so some detail about the hover implementation."}, {"start": 1824.0, "end": 1834.0, "text": " So the AlphaGo uses an asynchronous multi-threaded search that executes simulations on CPUs and computes policy and value networks in parallel on GPUs."}, {"start": 1834.0, "end": 1847.0, "text": " So basically those fast policy rollouts, they'll be delegated to CPUs and the slower, the bulkier policy evaluations, such as the one that calculates the priors and the value function that estimates the leaf node,"}, {"start": 1847.0, "end": 1854.0, "text": " will be just offloaded to a GPU. And that's a small optimization they did. And you can see a bunch of hardware here."}, {"start": 1854.0, "end": 1867.0, "text": " And the final distributed version of AlphaGo used multiple machines, 40 threads, 1,200 something CPUs and almost 200 GPUs. It's a huge amount of hardware."}, {"start": 1867.0, "end": 1874.0, "text": " OK, I'll skip this part. Not that interesting. And I'll skip this. So this was just a couple of."}, {"start": 1874.0, "end": 1882.0, "text": " So the five games they played against Fanhui and they won each of the five games back in 2015 or 16."}, {"start": 1882.0, "end": 1894.0, "text": " So you can just check out. I personally don't understand, never played Go, but you don't need to understand Go in order to understand the actual implementation and deep learning approach part of this paper."}, {"start": 1894.0, "end": 1905.0, "text": " OK, one more detail. The kind of the ablations with that lambda term combining the value function estimation and the rollout estimation."}, {"start": 1905.0, "end": 1914.0, "text": " And they're just complementary. So the value network approximates the outcome of games played by the strong, but in practically slow network."}, {"start": 1914.0, "end": 1921.0, "text": " Well, the rollouts can be can precisely score and evaluate the outcome of games played by the weaker, but faster rollout policy."}, {"start": 1921.0, "end": 1925.0, "text": " So basically, because policy only takes two microseconds, you can execute it multiple times."}, {"start": 1925.0, "end": 1933.0, "text": " And finally, the estimates will get much closer to the value function, which is slower, but more accurate. And combining them is kind of complementary."}, {"start": 1933.0, "end": 1944.0, "text": " So, yeah, again, experiments, experiments. OK, I think this was the the the the this was the high level part."}, {"start": 1944.0, "end": 1952.0, "text": " And I've got only two more pages left where we'll dig into even more details and you'll really understand how this thing works."}, {"start": 1952.0, "end": 1962.0, "text": " Before that, just a quick mention. So the AlphaGo evaluated thousands of times fewer positions where the position is again a synonym for state."}, {"start": 1962.0, "end": 1969.0, "text": " Then Deep Blue didn't just match against Casperov 97. So basically it's a much smarter approach."}, {"start": 1969.0, "end": 1976.0, "text": " It's not as brute force as Deep Blue was. So that's again plus one for AlphaGo."}, {"start": 1976.0, "end": 1984.0, "text": " And so it's provided a hope that human level performance can now be achieved in other seemingly intractable A.I. 
domains."}, {"start": 1984.0, "end": 1992.0, "text": " So the good thing about this paper is that it gave us hope that when something looks so insolvable, when the search space is so huge, as in the case of Go,"}, {"start": 1992.0, "end": 1999.0, "text": " and people thought we'll need additional 10 years to solve it and we solved it in a year or two after that prediction was made,"}, {"start": 1999.0, "end": 2007.0, "text": " this gives us hope that who knows, maybe a couple of years down the road we'll be solving some problems which we now think are uncrackable."}, {"start": 2007.0, "end": 2017.0, "text": " So who knows. Okay, that was the first part. Getting even deeper. The last two pages will get you a really firm grasp and understanding of AlphaGo."}, {"start": 2017.0, "end": 2029.0, "text": " So let's dig into it. Quick note here. Prior to AlphaGo, in order to solve board games, people used this thing called Minimax Search."}, {"start": 2029.0, "end": 2038.0, "text": " And you probably heard of it in your favorite CS class. And then people invented some better heuristics like Depth-first Minimax Search with Alpha-Beta pruning,"}, {"start": 2038.0, "end": 2044.0, "text": " which achieved superhuman performance in chess, checkers, and Othello, but has not been effective in Go."}, {"start": 2044.0, "end": 2053.0, "text": " So after that, people started implementing this thing called Monte Carlo Tree Search without using DNNs."}, {"start": 2053.0, "end": 2063.0, "text": " And basically this one dramatically improved performance in a bunch of different board games as well as in Go in particular,"}, {"start": 2063.0, "end": 2074.0, "text": " but it still wasn't good enough by itself to just push it into the professional league, to push those algorithms into professional level."}, {"start": 2074.0, "end": 2084.0, "text": " Okay, so let's see now every single detail about the search algorithm. So I mentioned the statistics and here are the statistics."}, {"start": 2084.0, "end": 2090.0, "text": " So each edge stores a set of statistics. And here we can see the prior. So that's the thing we calculate using the SL network."}, {"start": 2090.0, "end": 2099.0, "text": " We store the visitation counts for that particular edge. We store the cumulative Monte Carlo estimates coming from the value function and coming from the rollouts."}, {"start": 2099.0, "end": 2108.0, "text": " And finally, we calculate the Q and we store that in each and every edge. And we just calculate the Q using all of these previous statistics."}, {"start": 2108.0, "end": 2119.0, "text": " So, yeah, keep that in mind. I mentioned the P-UCT. So that's the variant of the upper confidence bound algorithm for trees, the ECT."}, {"start": 2119.0, "end": 2127.0, "text": " And this is the exact formulation. So they have something called this coefficient which just weighs this U part compared to the Q part."}, {"start": 2127.0, "end": 2136.0, "text": " So this is the complete equation. And this is just a hyperparameter like your lambda parameter. And they set it to five."}, {"start": 2136.0, "end": 2143.0, "text": " Again, probably hyperparameter search. And here is a prior. Here is the visitation. So we were just missing this part."}, {"start": 2143.0, "end": 2161.0, "text": " So what it means is the following. Let's say we have the parent state and we have the child state. 
And if we're calculating the P-UCT value for this edge,"}, {"start": 2161.0, "end": 2168.0, "text": " we're basically counting the number of visitations that this child, that his parent had."}, {"start": 2168.0, "end": 2175.0, "text": " So let me now translate this, try and translate this equation into human language. What it basically means is the following."}, {"start": 2175.0, "end": 2190.0, "text": " So the more visitations your parent had and the less times they visited you, but you have a strong prior, that means this edge will be highly likely to visit the next time."}, {"start": 2190.0, "end": 2200.0, "text": " So if we were visiting a lot of the other children of this parent, so that would mean that this numerator part would increase."}, {"start": 2200.0, "end": 2207.0, "text": " But we haven't been visiting this child, that means that this part would be low. So this one is low, this one is high, this will explode."}, {"start": 2207.0, "end": 2214.0, "text": " And then we have the prior, so if it's non-zero and will never be zero, that means eventually we'll want to explore this particular node."}, {"start": 2214.0, "end": 2221.0, "text": " So that's the exploration versus the exploitation tradeoff I was mentioning previously. Now you can see every single detail."}, {"start": 2221.0, "end": 2232.0, "text": " And this search control strategy initially prefers actions with high prior probability and low visit count, but asymptotically prefers actions with high action value."}, {"start": 2232.0, "end": 2240.0, "text": " So that's this one. So as I said, this will go to zero. And finally, you'll be just traversing the tree using the action value functions."}, {"start": 2240.0, "end": 2250.0, "text": " Okay. Optimization detail again, the leaf position is added to the queue for evaluation by the value network unless it has previously been evaluated."}, {"start": 2250.0, "end": 2258.0, "text": " So again, we'll just run the leaf node through the value functions only once and that's it, just a small optimization."}, {"start": 2258.0, "end": 2265.0, "text": " One more interesting thing here is this. So they have this concept of a virtual loss. And what it does is the following."}, {"start": 2265.0, "end": 2271.0, "text": " So let's say we have a root node and we traverse some path and we got to a leaf node."}, {"start": 2271.0, "end": 2276.0, "text": " And so because now we have to evaluate this node, right, so we have to pass it through the value network."}, {"start": 2276.0, "end": 2285.0, "text": " So the value network is on some GPU there and we have to do a rollout and that means we'll have to delegate that to some CPU."}, {"start": 2285.0, "end": 2293.0, "text": " So we have some CPU here and basically we'll have some delay before this is calculated, before the game is done."}, {"start": 2293.0, "end": 2304.0, "text": " And before this inference on the GPU is done. 
So in that time, what we want to do is we want to make sure that the other threads are not exploring this exact same path."}, {"start": 2304.0, "end": 2310.0, "text": " Because remember, Monte Carlo Tree Search has a bunch of simulations going on in parallel."}, {"start": 2310.0, "end": 2316.0, "text": " So there's a bunch of different threads we're trying to calculate and improve upon the estimates of this search tree."}, {"start": 2316.0, "end": 2320.0, "text": " And so we want to discourage them to visit the same trace as this one."}, {"start": 2320.0, "end": 2327.0, "text": " So what we'll do is we'll for each every edge. So these are the states and the connections are the edges."}, {"start": 2327.0, "end": 2337.0, "text": " So basically what we'll do is we'll increase the artificial increase the visitation counts by three, where three is this number of virtual laws."}, {"start": 2337.0, "end": 2353.0, "text": " They use number three and will also decrease the Monte Carlo estimate. So these W values, the cumulative estimates will decrease them by three, which will effectively do the following thing."}, {"start": 2353.0, "end": 2363.0, "text": " So as you can see, looking at this equation, the PUCT equation, because this will get bigger, that means that this term will get smaller."}, {"start": 2363.0, "end": 2371.0, "text": " So U will get smaller and also Q, because Q is calculated by averaging, just a second, so Q, this is Q."}, {"start": 2371.0, "end": 2377.0, "text": " You can see that that's how we calculate and because this part, the cumulative part is smaller, Q will be smaller."}, {"start": 2377.0, "end": 2381.0, "text": " So that means every single of these edges will have smaller probability."}, {"start": 2381.0, "end": 2388.0, "text": " And so the other parallel threads won't be visiting it until we finish up with this value inference and rollout."}, {"start": 2388.0, "end": 2397.0, "text": " Then we'll just kind of counteract this. So we'll add minus three, so we'll delete this and we'll add plus three here."}, {"start": 2397.0, "end": 2406.0, "text": " And we'll finally add the Z estimate, the actual estimate from the rollout, and we'll add the visitation count will be increased by one."}, {"start": 2406.0, "end": 2412.0, "text": " So finally, we have the exact state that we want to be in. So hopefully this was clear enough."}, {"start": 2412.0, "end": 2416.0, "text": " If not, please comment down in the comment section whether this was confusing."}, {"start": 2416.0, "end": 2425.0, "text": " Okay, that was the virtual part of the tree. So at the end of the simulation, the rollout statistics are updated in backward pass through each step,"}, {"start": 2425.0, "end": 2432.0, "text": " replacing the virtual losses by the actual outcome. So you can see we counteract the virtual loss, we increase the visitation count,"}, {"start": 2432.0, "end": 2439.0, "text": " we counteract the virtual loss, and we add the actual rollout outcome. So that's the whole trick."}, {"start": 2439.0, "end": 2447.0, "text": " Okay, this is how the Q is evaluated again. This is 0.5, so we'll have equal contribution coming from the value function,"}, {"start": 2447.0, "end": 2454.0, "text": " averaged over the number of visitations of that particular edge. Same goes for the rollouts, that's how we get the Q."}, {"start": 2454.0, "end": 2464.0, "text": " And that's it. Now, there is more details. 
Hopefully you find this interesting, because there is a lot of engineering here."}, {"start": 2464.0, "end": 2473.0, "text": " AlphaGo is not a new novel research per se, but the whole engineering pipeline and everything put together is awesome,"}, {"start": 2473.0, "end": 2479.0, "text": " and that's why they achieved the results they did. So I mentioned the prior calculations."}, {"start": 2479.0, "end": 2487.0, "text": " So once we have the leaf and we want to expand it, you basically feed it through an SL policy network."}, {"start": 2487.0, "end": 2496.0, "text": " But the thing is, again, SL policy is on some GPU, so until you get the actual priors, you'll have to wait."}, {"start": 2496.0, "end": 2505.0, "text": " And that's why they have this thing called the tree policy. And we never heard about this policy so far, so this is the fourth policy network."}, {"start": 2505.0, "end": 2510.0, "text": " They only mention it here. And what it does, it provides the placeholders for the priors."}, {"start": 2510.0, "end": 2518.0, "text": " So before we get the actual priors from the GPU, they just put some roughly good priors, not the best ones, so these ones will be higher quality."}, {"start": 2518.0, "end": 2524.0, "text": " So it's a similar thing to this virtual loss. So it's a placeholder before we get the actual results."}, {"start": 2524.0, "end": 2528.0, "text": " And you'll see this pattern repeating in this paper."}, {"start": 2528.0, "end": 2535.0, "text": " Okay. And finally, the priors are computed by the SL policy, and they use this beta to control the temperature."}, {"start": 2535.0, "end": 2543.0, "text": " That means you basically, you know what the temperature does to softmax, so the lower temperature will kind of make it sharper distribution."}, {"start": 2543.0, "end": 2546.0, "text": " So yeah, that's a minor, not that important detail."}, {"start": 2546.0, "end": 2552.0, "text": " Finally, at the end, so it selects the action with the maximum visit count."}, {"start": 2552.0, "end": 2559.0, "text": " And this is less sensitive to outliers than maximizing the action value function, value, okay?"}, {"start": 2559.0, "end": 2565.0, "text": " So basically, that means the following. So we have a route, and we did a bunch of simulations."}, {"start": 2565.0, "end": 2573.0, "text": " So now we have pretty decent statistics and much better approximations to the actual Q values."}, {"start": 2573.0, "end": 2579.0, "text": " And now we have some numbers here. Maybe this edge was visited 32 times. This one was visited only three times."}, {"start": 2579.0, "end": 2586.0, "text": " This one was visited 50 times. So we'll pick up this edge, and that's the action we'll play, and we'll end up with a new state."}, {"start": 2586.0, "end": 2594.0, "text": " And now maybe Lissedol will play the game, and he'll bring us to some other state, and then we'll do the Monte Carlo again."}, {"start": 2594.0, "end": 2600.0, "text": " Okay, that's how the game plays off. Okay?"}, {"start": 2600.0, "end": 2607.0, "text": " But again, some tricks. We want to reuse the tree at subsequent time steps. So that means the following."}, {"start": 2607.0, "end": 2613.0, "text": " So we were calculating this huge tree at this root state here, okay?"}, {"start": 2613.0, "end": 2626.0, "text": " And Lissedol decided to... and we decided to go here. 
And once we go here, we still have a bunch of simulations that went down and calculated the statistics for this part of the subtree."}, {"start": 2626.0, "end": 2634.0, "text": " We'll just throw away this part, but we'll still have some statistics here, and so we won't have to recalculate those from scratch."}, {"start": 2634.0, "end": 2643.0, "text": " Again, smaller optimization detail. It kind of helps you build up this picture of what it takes to actually make this thing work."}, {"start": 2643.0, "end": 2648.0, "text": " So it's one thing to just say, hey, we're using policy gradients, supervised learning, blah, blah, blah."}, {"start": 2648.0, "end": 2658.0, "text": " But when you actually see what it takes to make this thing work, there's a lot of details, a lot of distributed programming, software engineering, testing, trying out things, experimenting."}, {"start": 2658.0, "end": 2664.0, "text": " So, yeah, just keep that in mind if you don't already know it."}, {"start": 2664.0, "end": 2670.0, "text": " And finally, the match opponent version of the AlphaGo continues searching during the opponent's move."}, {"start": 2670.0, "end": 2680.0, "text": " So that means once we get into the... once Lissedol, for example, is... it's history, we're still calculating."}, {"start": 2680.0, "end": 2687.0, "text": " We're still trying to approximate a better approximation of this. So we're here. We're still building up the Monte Carlo tree search."}, {"start": 2687.0, "end": 2699.0, "text": " So wherever does Lissedol take us to some state, we'll have better approximations of the subtree here because we continued calculating the Monte Carlo... updating the Monte Carlo tree."}, {"start": 2699.0, "end": 2711.0, "text": " And, yeah, again, one more detail. It extends the search if the action maximizing visit count and the action maximizing action value disagree."}, {"start": 2711.0, "end": 2720.0, "text": " So if the Q value and the N value tell you to take different actions, so there is some discrepancy there and we want to improve it."}, {"start": 2720.0, "end": 2728.0, "text": " And so they just keep calculating until they agree. And then we know we have much better estimate and we're more certain that we'll win."}, {"start": 2728.0, "end": 2732.0, "text": " Okay. Those were the most important parts of the AlphaGo."}, {"start": 2732.0, "end": 2740.0, "text": " Now, I'll just wrap this up with a couple more details around the neural networks. And that's it."}, {"start": 2740.0, "end": 2747.0, "text": " We already heard a lot of details about this policy. So it's a linear softmax. That's why it's so fast, two microseconds."}, {"start": 2747.0, "end": 2755.0, "text": " And, again, it was trained on eight million positions. So the huge SL policy was trained on 30 million positions."}, {"start": 2755.0, "end": 2763.0, "text": " This one was trained on less data because it's smaller, less capacity, basically. But it's the same approach to training this one."}, {"start": 2763.0, "end": 2777.0, "text": " And there is some additional heuristics which I won't mention. So just keep in mind that, again, lots of details which go into this once you start really digging deep."}, {"start": 2777.0, "end": 2783.0, "text": " I want to mention symmetries. And what's the thing with symmetries? Well, because the board looks like this."}, {"start": 2783.0, "end": 2796.0, "text": " Let's say, just to mark this orientation like this. So we can have, you can imagine four different rotations you can perform. 
So maybe you can rotate this thing by 90 degrees and you'll end up in this layout."}, {"start": 2796.0, "end": 2804.0, "text": " And you can also have reflections. So you'll end up with eight different possibilities to feed this into a CNN."}, {"start": 2804.0, "end": 2814.0, "text": " And when they use the raw policy, the RL policy network to play the game, they do the following."}, {"start": 2814.0, "end": 2823.0, "text": " They take all of the eight configurations and they feed that into the policy network and they just average out all the probabilities across those eight configurations."}, {"start": 2823.0, "end": 2832.0, "text": " And they just get a better actually gameplay by doing that, better probabilities, better distributions over actions."}, {"start": 2832.0, "end": 2847.0, "text": " And yeah, just keep that in mind. And also the actual Monte Carlo tree search I was mentioning so far, what it does is makes use of an implicit symmetry ensemble that randomly selects the single rotation slash reflection for each evaluation."}, {"start": 2847.0, "end": 2862.0, "text": " So that means once you get to the leaf node, you'll sometimes take a random element from this group of symmetries and do the actual inference on top of that one."}, {"start": 2862.0, "end": 2877.0, "text": " And they just showed that this improves things. The reason is they actually augmented the data set using all of those eight different elements from that group. So that's the reason they're using it."}, {"start": 2877.0, "end": 2883.0, "text": " They get more data, thus they get better weights and yeah, they use it in the search as well."}, {"start": 2883.0, "end": 2892.0, "text": " So yeah, symmetries, if you didn't expect any symmetries, here they are. I know that Alpha Fold is also using some interesting symmetries."}, {"start": 2892.0, "end": 2901.0, "text": " The paper still hasn't hasn't come out, but I read some blogs and it's probably using some equivalent types of deploying models."}, {"start": 2901.0, "end": 2914.0, "text": " So we'll see that anyways. A bit about computation. I found this interesting. It takes 50 GPUs and you need to train this for three weeks to train the actual the SL policy network."}, {"start": 2914.0, "end": 2919.0, "text": " So that's a lot of compute and that's just the final network without any hyperparameter."}, {"start": 2919.0, "end": 2933.0, "text": " Then we have for the RL policy, they train it for one day on 50 GPUs and finally to train the value network, they have 50 GPUs for one week."}, {"start": 2933.0, "end": 2940.0, "text": " So that's more than a month to train the final configuration of this algorithm. Plus you have the Monte Carlo tree search during the inference."}, {"start": 2940.0, "end": 2944.0, "text": " It's a lot of compute that went into this. Yeah, just keep that in mind."}, {"start": 2944.0, "end": 2953.0, "text": " Okay, let me see if we have some interesting details here. I mentioned this already. So we have the pool of opponents for the RL policy."}, {"start": 2953.0, "end": 2961.0, "text": " So here is a small detail that's probably important. So I previously had only ZT here and that's not exactly correct."}, {"start": 2961.0, "end": 2968.0, "text": " So we're not using that simple version of the policy gradient algorithm. 
We're using something called Reinforce Policy Gradient."}, {"start": 2968.0, "end": 2978.0, "text": " Here we just, in order to reduce the variance and keep the same bias, we just subtract this thing called the baseline and they're just using the value function."}, {"start": 2978.0, "end": 2989.0, "text": " So basically everything else remains the same. So across every single game and every single step, we average the gradients according to this thing."}, {"start": 2989.0, "end": 3000.0, "text": " And you can treat this as an advantage. Basically, if you won and the value function was maybe 0.8, so that's plus one,"}, {"start": 3000.0, "end": 3008.0, "text": " that means you have a smaller update. So you want to make sure that that action goes up, but like by just a little bit."}, {"start": 3008.0, "end": 3016.0, "text": " So hopefully you have intuitive understanding why this is here. I won't get into the maths. It doesn't change."}, {"start": 3016.0, "end": 3023.0, "text": " It doesn't change the maths. The expectation remains the same. Thus the bias is unchanged. Only the variance is reduced here."}, {"start": 3023.0, "end": 3031.0, "text": " But it's the same thing as you intuitively. It's the same thing as you only had ZT. So if it's easier for you, just ignore the V portion."}, {"start": 3031.0, "end": 3039.0, "text": " I think that's pretty much it. Okay, I've covered everything. There is a lot of things that went into this."}, {"start": 3039.0, "end": 3047.0, "text": " Here are some of the hyperparameters I was using. The threshold was 40. The virtual loss, the part for the P-UCT algorithm,"}, {"start": 3047.0, "end": 3059.0, "text": " the mixing parameter, the temperature for the SL policy. A lot of hyperparameters took a lot of work from a team of people from DeepMind to create this thing."}, {"start": 3059.0, "end": 3066.0, "text": " So yeah, it's pretty awesome. So let me know if you found this video interesting. If you did, you know the drill. Hit the subscribe button."}, {"start": 3066.0, "end": 3074.0, "text": " Click the bell icon to get notified. And see you in the next video. Until next time, keep learning deep."}]
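As a compact illustration of the tree-search machinery described in the segments above (the Q + U selection rule, the lambda-mixed leaf evaluation, and the virtual loss), here is a minimal sketch. It is not DeepMind's implementation; the edge data layout and all names are assumptions, with the constants (c_puct = 5, lambda = 0.5, virtual loss = 3) taken from the values quoted in the talk.

```python
import math

# Minimal sketch of the PUCT selection rule and the mixed leaf value used in AlphaGo's
# tree search (illustrative only; variable names and the edge layout are assumptions).
# Each edge stores: P (prior from the SL policy), N (visit count),
# W_v (cumulative value-network estimates), W_r (cumulative rollout outcomes).

C_PUCT = 5.0   # exploration constant quoted in the talk
LAMBDA = 0.5   # mixing parameter between value network and rollout estimates

def q_value(edge):
    """Mixed action value: average value-net estimate blended with average rollout outcome."""
    if edge["N"] == 0:
        return 0.0
    return (1 - LAMBDA) * edge["W_v"] / edge["N"] + LAMBDA * edge["W_r"] / edge["N"]

def select_action(edges):
    """Pick the child edge maximizing Q + U, where U ~ P * sqrt(sum_b N_b) / (1 + N)."""
    total_visits = sum(e["N"] for e in edges.values())
    def score(action):
        e = edges[action]
        u = C_PUCT * e["P"] * math.sqrt(total_visits) / (1 + e["N"])
        return q_value(e) + u
    return max(edges, key=score)

def apply_virtual_loss(edge, n_vl=3):
    """Discourage other threads from taking the same path while this simulation is in flight."""
    edge["N"] += n_vl
    edge["W_r"] -= n_vl

def backup(edge, value_net_estimate, rollout_outcome, n_vl=3):
    """Replace the virtual loss with the real statistics at the end of the simulation."""
    edge["N"] += 1 - n_vl                      # undo the extra visits, keep the real one
    edge["W_r"] += rollout_outcome + n_vl      # undo the fake loss, add the real rollout result
    edge["W_v"] += value_net_estimate          # accumulate the value-network estimate
```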
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=H1NRNGiS8YU
DQN - Playing Atari with Deep Reinforcement Learning | RL Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover the paper that started the hype for deep RL - Playing Atari with deep RL, which introduced the DQN or deep Q-network. You'll learn about: ✔️ All of the nitty-gritty details behind the paper ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Paper: https://arxiv.org/abs/1312.5602 ✅ A decent gist: https://github.com/higgsfield/RL-Adventure/blob/master/1.dqn.ipynb Note: there is an error with the target Q-function he didn't freeze it but it's good enough - until I implement something myself :P ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 01:05 High-level overview of the paper 07:00 Experience replay buffer 10:35 Difficulties with RL (correlations, non-stationary distributions) 13:30 DQN is very general 17:00 MDP formalism and optimal Q function 22:37 Function approximators 23:50 The loss function explained 28:40 The deadly triad 30:55 Algorithm walk-through 35:00 Preprocessing and architecture details 37:20 Additional details - normalizing score, schedule, etc. 42:50 Agent training metrics 47:15 Results ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #dqn #deepreinforcementlearning #aiepiphany
What's up folks? In this video I'm going to cover the paper that caused the ImageNet moment for the reinforcement learning field, the DQN paper, also known as the Playing Atari with Deep Reinforcement Learning paper by Volodymyr Mnih and his collaborators at DeepMind. I hopefully pronounced his name right. So let's dig into it. We present the first deep learning model to successfully learn control policies directly from high dimensional sensory input, i.e. images in this case, using RL. The model is a CNN trained with a variant of Q-learning, and we'll see what it is in a moment, whose input is raw pixels and whose output is a value function, actually the action value function or the Q function, estimating the future rewards. We apply our method to seven Atari games, and they don't adjust the architecture or the learning algorithm, which makes this algorithm much more general than all of the previous ones. We find that it outperforms all previous approaches on six of the games and even surpasses a human expert on three of them. So before I start digging into the paper, I'll just give you a rough sketch so that you have a high level understanding of all of the components, and then we'll start going into a lot of details as I usually do in my paper overviews. Okay, let's start. First things first, the most general framework of how an RL problem is constructed is the following. We have an agent, and you'll see this all over the place, and we have an environment. The agent sends some action to the environment, and the environment gives it back some observation, let's call it O, and also some reward. And the whole goal of this agent in this environment is to maximize the future expected reward, and that's pretty much it. Now, pretty much everything you can imagine can be constructed like this - many things, maybe not everything. So let me concretize this for the specific problem we are coping with in this paper, and that's Atari. In our case, the agent is actually a CNN. So we have a deep convolutional neural network, not so deep actually, you'll see soon it had only two conv layers. And we interact with the environment in the following way. The environment is the Atari emulator, so let's just draw it like some screen. Just a small digression, here are the games in Atari. Here are five examples: we have Pong, we have something called Breakout, Space Invaders, Seaquest and Beam Rider. Hopefully you're at least familiar with some of these; Pong is maybe the easiest example. You basically have two paddles and a ball, and you're trying not to miss the ball, and you're trying to beat the opponent by shooting the ball past them. Once you do that, you receive the reward one. If the opponent wins, he gets the reward of one and you just get zero. So as you can see, the problem is that the reward is very sparse: you have to play for some time in order to get one, and that's it. We don't have what we have in supervised learning, where at every single step - say, when you're doing image classification - you have a label for every single image, and that's how you train. Here the reward is sparse, it's delayed, and you're not sure whether the reward will come at all; in the beginning especially, when you have a random policy, you'll be getting zero rewards for a long, long time.
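As a rough sketch of the agent-environment loop just described - purely illustrative, with `env` standing in for a Gym-style wrapper around the Atari emulator and `select_action` for whatever policy the agent follows:

```python
# Minimal sketch of the agent-environment loop (illustrative only; the `env`
# interface and all names are assumptions, not code from the paper).

def play_episode(env, q_network, select_action):
    obs = env.reset()                              # first frame from the emulator
    total_reward, done = 0.0, False
    while not done:
        action = select_action(q_network, obs)     # e.g. epsilon-greedy over the Q-values
        obs, reward, done = env.step(action)       # emulator renders next frame + scalar reward
        total_reward += reward                     # the agent tries to maximize this in expectation
    return total_reward
```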
So having that context in mind, this is the image we get in Pong, and this is the reward, zeros or ones in Pong. Let me get back to the thing I was trying to explain. Okay, so basically what happens is the following. In the Pong game, we'll have three actions. We have up, we have no-op, basically just don't move, and we have the down action. Just three simple actions. And the state will be the following thing. The state will not be a single image, because unfortunately a single image is ambiguous: for example, if you have an image with a ball in it, you don't know whether the ball is going to go in this direction, in that direction, or maybe even in the opposite direction. So you need at least two frames to have the temporal information needed to solve this task. Because of that, what they actually did is they represented the state, after doing some pre-processing on the images which we'll see later, by just lumping four frames together. So like this: one, two, three, four. And that's a state. And this state goes into the CNN, that's our agent, and it outputs the actions. So once we put the state into the CNN, we get some numbers out, and these are the action values. Maybe the up value was eight, this one was two and this one was maybe minus one, so the best thing to do would be to pick eight, because that promises the biggest expected reward in the future if we're following the greedy policy. And so we pick eight, which is up, and we send that to the emulator. The emulator does its thing, it just renders a new screen and sends it back. So it sends us back the new frame and some reward, or maybe zero reward if nothing happens, if the game still hasn't finished. And what we do is the following. We treat this state as a circular buffer: we get rid of the oldest frame in the state and we append the newest frame to the front of this queue, and that's our new state. So once the model is trained, and I'll skip the training part for a second, let's see what happens once the model is fully trained. The following thing happens. We have a CNN, we get the first observation, the first frame, and we start stacking the frames until we get four frames. Once we have four frames, we have the full state, we do the forward prop, we get the actions, we send the action, maybe up, to the emulator, the emulator sends us back the reward and the observed state, and we just do that in a loop. And we're actually following something called an epsilon-greedy policy, and we'll see what it is in a minute. Epsilon-greedy basically means the following. Instead of just taking the max action every single time, so eight in this particular case, we'll sometimes, with probability epsilon, which is usually, let's say, 0.1, take a random action - either up, down or no-op with equal probability - and with 0.9 probability we'll be following the greedy policy, where we're actually taking the action that has the highest action value associated with it. So that's the rough sketch of how this looks, and this is just going to run in cycles until we receive the done flag from the emulator, and once the done flag arrives, that's it, we stop the game. And one important thing to note here is that during this game, we have something called a memory replay, an experience replay buffer.
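The frame-stacked state and the epsilon-greedy choice described above can be sketched in a few lines - a minimal illustration, assuming `q_network` maps a stacked state to one value per action (the names are mine, not the paper's):

```python
from collections import deque
import random
import numpy as np

# Sketch of the two ideas above: the state as a rolling stack of the last four
# preprocessed frames, and epsilon-greedy action selection over the Q-values.

FRAME_STACK = 4
frames = deque(maxlen=FRAME_STACK)       # acts as the circular buffer of recent frames

def update_state(new_frame):
    frames.append(new_frame)             # the oldest frame falls out automatically
    return np.stack(frames, axis=0)      # state shape: (4, H, W) once the deque is full

def epsilon_greedy(q_network, state, num_actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(num_actions)   # explore: uniformly random action
    q_values = q_network(state)                # exploit: greedy action over Q-values
    return int(np.argmax(q_values))
```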
And the replay buffer is nothing else but a collection of the experiences we collected during gameplay. So what happens is the following. Let me call this state S, let me call the reward R, and let me call the new state - the one we have after we receive this frame and update the old state - S prime, and let me call the action that we send to the emulator A. So we'll be collecting these tuples: S, A, R, S prime - SARS. We'll just be plugging these into this replay buffer as we're playing. So we have a state, we send some action depending on that state, we received some reward, we received a frame, we updated our state, we got S prime, and we just store this in the replay buffer. And this buffer is huge - they use one million slots here - so just keep that in mind. Once we have this, we'll later be using these tuples to actually train the agent. So far you just saw how the agent is acting in the environment once it's fully trained. Now we can use these SARS tuples to actually train it. I'm just going to give you a really high level overview and then we'll jump into the details. So what we actually do is the following. We have a state S, we performed some action, we got into state S prime, and we received some reward for doing this, and this is a tuple from our experience replay buffer, okay? So what happens is the following. The Q network here, the CNN, will have some output for this state. We pass in S and it will give us some Q of S, some value for this particular action - maybe this is the action up, whatever - so we get some value, maybe eight. And we received some reward here, and we get some new value here, Q of S prime. And the important thing to note, this is an important detail, is that this Q network here is going to be some old, frozen Q network from some previous iteration, and that's what is used here, not the current network. Once we have the Q of S prime and R, we're going to do a simple L2 loss minimization. I.e., we're going to try and make Q of S as close as possible to the reward we got plus this Q in the new state S prime. So that's the whole goal of this training. We're going to take mini batches of these SARS tuples - they were using 32 of these tuples - and we're just going to do simple gradient descent and do L2 minimization, that's it. So basically, MSE loss on this fake labeled data, the tuples we were collecting. Having said that, now let's jump into all of the details and see how this actually works.
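A minimal replay buffer for those SARS tuples might look like this - an illustrative sketch using the capacity (one million) and minibatch size (32) mentioned above; the class and method names are assumptions, and a done flag is stored alongside each tuple since the training target needs it:

```python
import random
from collections import deque

# Minimal experience replay buffer sketch (illustrative only, not the paper's code).

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.storage = deque(maxlen=capacity)   # oldest experience is evicted automatically

    def push(self, s, a, r, s_next, done):
        """Store one (state, action, reward, next state, done) tuple collected while playing."""
        self.storage.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        """Uniformly sample a minibatch of past experience for one gradient step."""
        return random.sample(list(self.storage), batch_size)
```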
Okay, let's go step by step. First things first. Usually, deep learning applications require a large amount of hand labeled training data. If you're familiar with computer vision, with NLP, with graph ML, with whatever other subfield of machine learning or deep learning, you know that we need a lot of hand labeled data. On the other hand, RL algorithms have to learn from a scalar reward, as we saw - like the score in Pong, that's one or zero, or maybe in the Breakout game, or in Space Invaders, where for every spaceship you kill you maybe get three points, whatever. So it's a scalar reward and it's sparse. You only get it at the end of the episode, potentially. It's noisy and it's delayed, obviously, because it's sparse - I guess these two are kind of correlated. Okay, that's the first thing. Another issue is that most algorithms assume that the data is independent. You'll see the IID thing thrown around every now and then. It basically means we have independent, identically distributed data points from our data set. But here in RL, you typically encounter a sequence of highly correlated states. Why is that? Well, it's pretty simple. When you have a Pong game and the ball is here, in the next frame you can't expect the ball to be just anywhere - it's going to be somewhere in a small circle around where it was. And that means the next state is highly correlated with the last one. That's a problem in RL that we usually don't have in computer vision. Furthermore, the data distribution changes as the algorithm learns new behaviors, which can be problematic for deep learning methods that assume a fixed underlying distribution. What that means is that when you're training your classifier on images in computer vision, you basically have a data set of images and they form some static distribution, which you're trying to figure out so that you can predict the class successfully. Here in RL, when you're learning a new policy, because of your new behavior - your new policy, as we call it in RL - you'll be visiting different states. That means you'll be seeing different data every now and then, and that means that the distribution shifts, and that's problematic. The way they solve this in the DQN paper is by using this replay buffer, because it's going to contain SARS tuples from various policies: in the meanwhile we update the network, and we keep storing new tuples produced with the updated network. So we'll have data from different distributions stored in this replay buffer. And that's one of the ways they fixed these instabilities caused by these assumptions being violated. Okay, let's continue. I already explained the games in Atari we have. Just for your mental picture, the imagery is really small: it's 210 by 160 at 60 hertz. So that's definitely not your 4K video. Just keep that in mind - the problem was really small; otherwise, the problem of learning these agents would have been really hard back in 2013, when the paper was originally published. Okay, so the important things to notice here are the following. The network was not provided with any game specific information or hand designed visual features. The only thing you get for every single one of these games is the observation, i.e. the frame, and you get the reward, the scalar, and that's it. So nothing else, no game specific information. You don't get hand designed visual features; you're using the CNN to learn the feature representations. So no hand designing, no hand crafting here. And we don't have the internal state of the emulator. That means we're dealing with a partially observed MDP. That may be a mouthful - MDP is this formalism in RL which I won't get too deep into now, it's just a Markov decision process. And it's partially observed because you don't know the state behind the emulator. You just get the frame; you don't know all of the processes that are happening to actually render and prepare that frame. So that means we're only partially observing the system, and that's one more difficulty that we have in Atari here. And it's a common thing in real world applications as well. Okay, so it gets nothing else - it just learns from the video input and from the reward, and that's it.
And the only thing that's actually specific to every single of these games is the number of actions. That's it. So you have a CNN here, CNN agent. And because this is Pong, you'll have three outputs probably. So the no up, the down and the up action. In Break Board it's gonna be similar. Maybe in Space Invaders you can shoot aside from moving the paddle. So maybe have, I don't know, like four actions. So this is the only thing that's game specific. The number of actions. And that's kinda meaningful because even humans will have a different joystick so that makes sense. Okay, so that's the only thing that's kinda game specific. And the architecture, so the CNN and the hyperparams were actually kept constant for every single one of these Atari games. So that's a good step towards the artificial general intelligence. Although we are very far off because one thing you need to know here is that for every single one of these games, they had a specific network trained only for that game. So that means if you have seven games or 57 games, you'll have 57 differently trained CNNs. So you can train one CNN and then deploy it as an agent in all of these environments. You have to train a single CNN on a single game and use it only there. And fun thing about RL, like I just recently read this quote, like RL is the only field, the only subfield of deep learning where it's socially acceptable to train on your test data. And yeah, that's pretty much true because you're training on Pong and then you're actually deploying the agent in that very same environment. So just keep that in mind. Okay, that was the initial things I wanted to explain here. And now let's see, let's go further. Now we're gonna see some mathematical formalism of MDPs and stuff. So yeah, let's start. First things first, the task is partially observed and many other states are perceptually aliased as I already mentioned that the example with the Pong ball that can go in any direction. So it's impossible to fully understand the current situation from only the current frame. And that's the reason they're stacking four frames together. We need the temporal information in order to play the game. Okay, let's see the formalism. This is the quantity we wanna actually maximize. It's the future discounted because of this gamma factor reward. And gamma is just a mathematical formalism which makes you, when t goes to infinity, when the number of steps here goes to infinity, if we have gamma that's usually between, that's always between zero and one, and it's usually actually 0.99, when you have that, this array is gonna converge to a finite number and that's desirable when you're dealing, you don't wanna deal with infinities, right? But usually you deal with finite episodes so this gamma doesn't matter so much. You can sometimes even set it to one and nothing will break. That's the first quantity. The second one is the action value function and aside from this one you'll be seeing something called state value function, but for now let's just focus on this one. And in optimal case you wanna find the policy P, this is what this equation is telling us. So we wanna find the policy P that will maximize this expression here. And this is just the reward, the expected reward from this particular state and taking this particular action. So that's what the Q value is telling you. It's telling you the following. If I'm in this state and I perform this specific action, what's the expected reward I'll get? I can expect from this action in this state. 
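Written out, the two quantities just discussed look roughly like this (standard notation; my rendering of the paper's definitions rather than a verbatim quote):

```latex
% Discounted future return from time step t (gamma in [0, 1], typically 0.99):
R_t = \sum_{t'=t}^{T} \gamma^{\,t'-t} \, r_{t'}

% Optimal action-value function: the best expected return achievable
% after observing state s and taking action a, over all policies pi:
Q^{*}(s, a) = \max_{\pi} \; \mathbb{E}\!\left[ R_t \mid s_t = s,\; a_t = a,\; \pi \right]
```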
And once you get to the actual action value, it's gonna be something called Bellman equation. And you're gonna see this a lot, especially in the older RL algorithms where they used to use table lookups instead of neural networks. So basically this is what the Bellman equation looks like. For the specific case of using something called TD or temporal difference learning. And basically the Q will be equal, the optimal value, the Q star will be equal to R, the reward, plus gamma, discounted, maximum Q star across all the actions in the future state S prime. So this is just something to wrap your head around. If you've never seen this before, it's gonna be kind of look really strange, but it's actually much easier when you see the implementation because this tells us the following thing. We take the CNN and we input the current state S. And basically then we take another CNN and we put the state S prime. And now we're gonna find the max, whatever the max here is, we're gonna sum it up with R and once we have the true action value function, which we're trying to approximate during this training, it's gonna satisfy this equation. That's what it says. And don't get scared because of the expectation operator. It just tells us that we are basically sampling this next state from the distribution that the emulator is providing us. That's this epsilon here. But for the old purposes, you can just ignore it because this is how it's actually gonna work when you try and code it up. You're just gonna do this and then an expectation because you're sampling different states as you're playing the game, this thing is gonna be solved for itself. So there is this discrepancy between equations and how it actually works. The second discrepancy you could maybe notice here is that Q is fed with S and A, but actually as we saw, CNN is only taking S and it's outputting the actions. So that's just the legacy formalism that most of these papers have. And once you actually see the code, believe me, it's much easier. I mean, it's not easier. It's just the discrepancies what makes me kinda have this dissonance in my brain. So I guess most of you already experienced it. Okay, so now the theory goes like this. So this is what the optimal, the true Q value function will look like. It's gonna obey this Bellman equation. But initially our neural network is random. So it's not gonna obey this. And in order to actually converge to this true Q value function, we're gonna do something called value iteration. And they converge to the optimal action value function. So basically by just doing this thing here, by just iterating, and we'll see the exact algorithm a bit later, but just stick with me for now. So by just doing this, we're gonna slowly get to the true value. And after we update the Q value, because our policy is coupled, if you remember, with this Q value by the Epsilon greedy algorithm, we're gonna improve the policy automatically. And that there is this theory from RL that says, if you do that, if you're improving Q, and if you're improving your policy, which we're automatically doing here because we're coupled by Epsilon greedy, we're gonna converge to the actual optimal Q value, the Q star action value function. So yeah. And all of this theory actually was first invented, let's call it invented for the table lookup example, where you had something like this. So you basically had a matrix, and along this dimension you'd have states S, and along this one you'd have actions. 
And now you can see why this is totally impractical. S is gonna explode in any serious real-world problem, as well as the actions, which can be continuous in some cases, in robotics, for example. So this is gonna be huge and impractical. And the second thing, aside from memory problems, we have something even more serious, and that's: once you figure out the value for this state-action point, how are you going to generalize to every single other point in this state-action space? You won't, unless you do some interpolation, but that's again computationally expensive because you have to interpolate. Because of that, the neural networks are really cool, because by just updating the parameters, you also update the nearby states, the states around this state, for free, let's call it like that. So that's something I wanted to mention there. And now this is the thing I initially explained, how we actually train this network. So we have the targets, and we're just trying to minimize the L2 loss. So you see here, we just have the difference and then squared; we're just trying to minimize the L2 loss between the target and the Q value. And the target is, as I mentioned, the frozen network, so this theta i minus one, let me zoom in a little bit. So we're gonna freeze the parameters at some point, we're not gonna change them, and we're going to use that network as the target network. So how it goes is the following. We have the SARS tuple, remember? So we have the SARS. So what we're going to do is, we have the Q network, that's our current CNN, okay? We input S here, we know what the action is, so we'll focus on that Q value, and that's this Q, okay? So how do we get a target? Well, we take that frozen CNN, some old CNN, old like this, we input the S prime, so from the tuple again, we have that information, we input the S prime, and now we're gonna actually maximize, find the biggest value here, and we're gonna add the R, which we also have from the tuple, and that's gonna be our target. So that's gonna be Y of i, and once we have that value, we're just gonna try and tweak the parameters here, so as to approach that R plus Q, the new Q, the Q prime, let's call it, whatever. So that's the whole point, that's how it works. You're doing L2, you're doing the mean squared error when you have the mini-batch, and that's how you train the parameters. So now you may be puzzled with this replay buffer, you may be puzzled with this frozen network; some of those are actually just heuristics, and there is no theoretical guarantee that it's gonna converge, but it does in practice, and it was inspired by the actual theory that came from those exact cases where we had the table lookups and not the function approximations. So yeah, hopefully that makes sense, and they say here the parameters from the previous iteration theta i minus one are held fixed when optimizing the loss function, blah blah. So we won't be backpropagating through this one, so the parameters are just frozen. Okay, I think I've covered everything here. They just explicitly derived the gradient for the loss; this is something that your favorite deep learning framework of choice does for you, so we don't have to cover this one. So either PyTorch or TensorFlow will, once you specify the loss like this, so the simple L2, find the gradients automatically using the autodiff engine, okay? So let's see these two things. First, this Q-learning algorithm is model-free.
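Here is a hedged PyTorch sketch of that training step: a minibatch of SARS tuples, a target built with a frozen copy of the network (no backprop through it), and a simple L2/MSE loss. I'm using a tiny MLP on fake data just to keep it short; the paper of course uses a CNN over stacked frames, and the optimizer, sizes and numbers below are placeholders:

```python
import torch
import torch.nn as nn

def make_q_net(state_dim=16, n_actions=3):
    # A tiny MLP stand-in for the paper's CNN, just so the example runs quickly.
    return nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))

q_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(q_net.state_dict())            # the "frozen" copy of the network
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)

# A fake minibatch of 32 (s, a, r, s', done) tuples standing in for replay-buffer samples.
s    = torch.randn(32, 16)
a    = torch.randint(0, 3, (32,))
r    = torch.randn(32)
s2   = torch.randn(32, 16)
done = torch.zeros(32)

gamma = 0.99
with torch.no_grad():                                      # no backprop through the frozen target net
    y = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a) for the actions actually taken
loss = nn.functional.mse_loss(q_sa, y)                     # the simple L2 / MSE objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The `(1 - done)` factor implements the "target is just the reward in terminal states" rule mentioned above.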
That means we're not trying to explicitly model the transition from S to S prime, from this state to the next state. We don't know the probability of transitioning into the next state, we don't care about it, we're just doing model-free RL, interacting with the environment and learning the action value functions directly, without caring about the actual probabilities and modeling these. So that's why this thing is model-free. Why is it off-policy? Well, it's off-policy because it's using the epsilon-greedy policy to collect the data into the replay buffer, if you remember. So we are collecting into the replay buffer using epsilon-greedy, and then the target is actually using the max, the greedy policy, and that's another policy. So we're trying to learn the Q values of the greedy policy while using the behavior policy, that's how they call it, the epsilon-greedy one, and the target policy is the greedy policy. That's why it's off-policy. So intuitively, what we're trying to do here is we wanna have this Q, our CNN here, be able to predict what will be the expected reward plus whatever the max here is. So we're gonna try and learn, for this particular state, what's the max value we can expect. So hopefully I didn't confuse you here. Let me know in the comments if this was clear enough, I can try and explain it a bit better next time, whatever. Okay, let's continue and see what else is there. So furthermore, it was shown that combining model-free reinforcement learning such as Q-learning with nonlinear function approximators and using off-policy learning could cause the Q network to diverge. So that's the theory. And that's pretty problematic because you wanna have stable training. And fortunately, in the DQN paper, they didn't have that problem; because of the experience replay buffer and because of the target network being frozen, they avoided the instabilities. But this problem is something known as the deadly triad in the literature. So this is just an excerpt from Rich Sutton's book, Reinforcement Learning: An Introduction, which is like a must-read if you're doing RL really seriously. At some point in time, you should probably check it out. I still haven't. I usually start by doing high-level resources first and then slowly going down to papers, and only later do I read the book. So that's maybe counterintuitive, but that's the best approach for me. And he also said here, so when you combine the function approximation, the bootstrapping, so bootstrapping being TD, and other than TD, you could be using something called MC, like Monte Carlo, or you could be using something in between, like TD lambda, and you don't have to know what all of these are. But basically, instead of doing just R plus the Q value here and bootstrapping that way, you just roll out until the end of the episode and use the actual rewards, instead of trying to use the Q value to bootstrap and figure out the actual Q value now. So that's the bootstrapping part. And finally, the off-policy part, and that's why it's called a triad, tri for three. So basically, when you have these three elements combined together, you're probably going to have instability problems, but yeah, as the DQN folks showed, they didn't have them because of some of the hacks they used; the replay buffer and the target network are the main two components that made it work. Okay, that's that part, let's go further. And let's actually see the algorithm, and I'll try to walk you through this one.
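For completeness, this is roughly what the epsilon-greedy behavior policy looks like in code; the comments spell out why using the max in the target while collecting data like this makes DQN off-policy (the names are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def epsilon_greedy(q_values, epsilon):
    """Behavior policy: with probability epsilon pick a random action,
    otherwise pick argmax_a Q(s, a), the greedy choice."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# The Q-learning target always uses max_a' Q(s', a'), i.e. the *greedy* policy,
# even though the data was collected with this epsilon-greedy behavior policy.
# That mismatch between behavior policy and target policy is what makes DQN off-policy.
print(epsilon_greedy(np.array([8.0, 2.0, -1.0]), epsilon=0.1))  # usually 0, sometimes random
```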
So first, we initialize the memory, the replay buffer, to capacity N. And they had 1 million slots here, just to be less abstract. They initialize the action value function Q with random weights. So just take your CNN and use whatever your favorite init method is, maybe Glorot, a.k.a. Xavier, whatever your initialization method is. And then we do the following: we iterate over the episodes, we have M episodes, we initialize the sequence, we take the initial frame from the Atari environment, we just do some pre-processing, so that's the four frames, the cropping, some details you'll see in a bit. And then for every single step in that particular episode, we do the following: we perform an action using the epsilon-greedy policy, so that's this. With probability epsilon, we select a random action; otherwise we select the max, we max it out, we do the greedy approach. Then we send the action to the environment. So you're usually gonna be using something like OpenAI's Gym, and that's gonna roughly correspond to .step; you're gonna pass the action, and then the environment's gonna return the observation, the reward, and the done flag, which signals whether the episode is over or not. Once we have that, we just pre-process that frame, and we store the transition, the SARS tuple, where they're using this phi to denote the processed state. So this is basically the state. So we just store the SARS tuple inside of the replay buffer, and then we're gonna sample a random mini-batch from D. Now, just for your understanding, they're going to do this, I think the actual number was four. So every four actions, you perform four actions, and only then do you sample the mini-batch. Because otherwise, you need to fill in, remember, this pre-processing function expects four frames, so we have to kind of fill it up before we start doing the inference with the Q network. So that's why they're gonna do it, I think that's the reason they're going to do it, every four steps. And again, you can see we just form the target, and that's either the reward, if you're in the terminal state, if you're at the last frame of the game, or otherwise we're gonna bootstrap using Q, the old Q function, and that's the target, and then we're going to do gradient descent on the L2 of these two. And that's it. So I think the mini-batch size was 32. That means we'll take 32 samples randomly from this buffer, so just 32 random SARS tuples, and we're gonna form this, and we're just gonna average it out and let the deep learning framework do the rest for us, find the gradients automatically, and minimize this loss. And that's the whole algorithm. Hopefully now it's a bit more clear. And I think I mentioned this, but let me just reiterate. So learning directly from consecutive samples is inefficient due to strong correlations. So I mentioned the example with the ball in Pong and one frame being correlated with the next frame. So by randomizing the samples, by taking random samples from the replay buffer, we are breaking these correlations, and we are reducing the variance of the updates. So that's the important trick to make this thing stable and to make it work. I just took a snippet from, so this paper is from 2013, and later they published a paper in Nature, and so I just took a snippet from that paper, and they have an additional line here: every C steps, just swap the target network, that's the frozen Q network I was talking about.
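A rough sketch of the bookkeeping around that algorithm, assuming a deque-based replay buffer; the loop at the bottom is only commented pseudocode, and env, preprocess, select_action and train_step are placeholders for the pieces discussed above, not real library calls:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity memory D of (s, a, r, s', done) transitions."""
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)   # old transitions fall out automatically

    def store(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, batch_size)

memory = ReplayBuffer()

# Skeleton of the outer loop (pseudocode only):
#
# for episode in range(M):
#     s = preprocess(env.reset())
#     done = False
#     while not done:
#         a = select_action(s, epsilon)      # epsilon-greedy behavior policy
#         frame, r, done = env.step(a)       # emulator step
#         s2 = preprocess(frame)
#         memory.store(s, a, r, s2, done)
#         batch = memory.sample(32)          # random minibatch breaks the correlations
#         train_step(batch)                  # L2 loss against the frozen target network
#         s = s2
```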
So you just take the current network, and every C steps, which is maybe every, I don't know, 1,000, I think they used 1K batch updates, you just swap the networks and use that one as the frozen network to predict your target labels. And that's the additional detail I wanted to mention. Okay, with that out of the way, let's continue and dive into a bit more detail about the pre-processing. So what they do is they convert the RGB to grayscale, they downsample it to this resolution, and then they crop to 84 by 84, because that was a constraint from the network they were using. And if you take a look at this reference 11, that's actually AlexNet, the famous network that caused the ImageNet moment, and it had the constraint of using only square inputs; that's why they crop to a square resolution. The second thing they do, as I already mentioned, is they take the last four frames of history and they stack them together to produce the input to the Q function, and that was the state that we were using all along. Okay, so those were some additional details, and now one more thing. I kind of highlighted this eight by eight because AlexNet, back in the day, used much wider kernels and shallower networks compared to today's networks, which have kernels that are usually three by three and are much deeper, especially with the arrival of ResNet, which allowed us, using those skip connections, to have much deeper networks going to 150 and more layers, and yeah, that's just a thought, it's interesting. One thing they mention here additionally is that the main advantage of this type of architecture is the ability to compute the Q values for all possible actions in a given state with only a single forward pass. So imagine if we were actually treating the formulas exactly as written. You'd have to do the following thing: you'd have a CNN, you'd input a state, and you'd input an action, and then you'd get a Q value for that action, but then you'd have to do a for loop, you'd just have to kind of iterate this in a for loop to get all the Q values, and you'd be storing them somewhere in memory, and so you'd be using additional time, additional compute, and additional memory if they hadn't made it the way they had, and that is: just input a single state, and you get the Q values for all of the actions, and that's how it works. So that's an additional engineering detail that's kind of fun. Let's jump to experiments finally. Let's see how this thing performed. I already mentioned it kind of surpassed the humans in some games and many previous methods, so let's see the details about that. Additionally, because some games like Breakout or Space Invaders, maybe when you kill a spaceship in Space Invaders, you'd get some plus reward, whereas in Pong you get plus one, so because these scales differ between the games, what they did, and this is again something that's Atari specific, so they are not completely agnostic to the setup, is they are actually clipping it to plus one or minus one. If we get minus seven reward, that would get clipped to minus one; if we get plus three, that would get clipped to plus one, and that's an additional trick they used to make it more stable and to be able to use the same set of hyperparams, like the learning rate, for every single one of these Atari games.
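Here is a minimal sketch of that pre-processing pipeline (grayscale, downsample to 110 by 84, crop a square 84 by 84 region, stack the last four frames). I'm using OpenCV just as one common choice for the resize, and the exact crop offset is my guess, since the paper only says it crops the playing area:

```python
import numpy as np
import cv2  # one common choice for the resize step, not mandated by the paper
from collections import deque

def preprocess(frame_rgb):
    """Roughly the pipeline described above: RGB -> grayscale -> 110x84 -> crop 84x84."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)   # (210, 160)
    small = cv2.resize(gray, (84, 110))                   # cv2 dsize is (width, height)
    return small[18:102, :]                               # crop an 84x84 playing area (offset is a guess)

# The "state" is the last four preprocessed frames stacked together (a circular buffer).
frames = deque(maxlen=4)
for _ in range(4):
    dummy_frame = np.zeros((210, 160, 3), dtype=np.uint8)  # stand-in for an emulator frame
    frames.append(preprocess(dummy_frame))
state = np.stack(list(frames))
print(state.shape)  # (4, 84, 84)
```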
Again, one more detail: the epsilon was annealed linearly from 1 to 0.1 over the first million frames and fixed at 0.1 afterwards. So what that means is the following. We have epsilon, we have the steps in our RL environment, and basically it starts at one and is linearly annealed to 0.1, and this is one million frames here. And that means we start off being completely random, our policy is completely random, so whatever state comes into the Q network, we pick the action without caring which is the max one, we just take one action, and that's how we behave, that's how we explore the environment initially. And that's an important concept in RL, the exploration versus exploitation trade-off, and here you can see that usually the RL algorithms start by exploring maximally, to let the agent explore the environment and learn which states are better by random, and then start kind of being more greedy, being more deterministic about the actions and less stochastic, and you'll usually see schedules like this one, and then we'll end up having 0.1 epsilon throughout the rest of the training; that's what they did. They also additionally use this frame-skipping technique, which means, just a simple, again Atari-specific thing, they just pick an action, and then they just repeat the action for the next three steps, they're just sending the same action to the emulator, and only after the fourth frame do they actually pick a new action. Here they say that they use K equals 4, so they actually pick the action every four frames that come from the emulator, except for Space Invaders, where they actually noticed that the lasers are invisible if they take that period, so they actually had to use three, which is, again, I think the only game-specific detail they actually use, so just, yeah, it's a fun fact. Here I'm not sure whether, while they are doing those dummy actions where they're just repeating the last intelligent action, they're actually feeding those frames into the state, into the circular buffer that has four slots, so I'm not sure if they're using these frames to fill that up, or they're only using the frames that got received from the environment when they produce an intelligent action. So that's just something I'm confused about; maybe if you know, write it down in the comments. I haven't checked the original implementation, but I did check some other gists and implementations of DQN, so I'm familiar with how it works. There is one thing I wanna mention here, and this nicely summarizes the difference between research and engineering, and that's that for this particular Atari setup, they noticed this thing: first, to encode a single frame, we take the maximum value for each pixel color value over the frame being encoded and the previous frame. This was necessary to remove flickering that is present in games where some objects appear only in even frames, while other objects appear only in odd frames, an artifact caused by the limited number of sprites the Atari 2600 can display at once. So this is totally not important for this research paper and for you understanding this paper, but it is important to understand that there are so many details that you only actually understand when you start implementing the thing.
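Those three tricks, the linear epsilon schedule, the reward clipping, and the pixel-wise max over consecutive frames for the flicker, are each only a couple of lines; here is a hedged NumPy sketch with my own function names:

```python
import numpy as np

def epsilon_at(step, start=1.0, end=0.1, anneal_steps=1_000_000):
    """Linear anneal from 1.0 to 0.1 over the first million frames, then constant."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)

def clip_reward(r):
    """Clip all positive rewards to +1 and all negative rewards to -1 (0 stays 0)."""
    return float(np.sign(r))

def deflicker(frame, prev_frame):
    """Pixel-wise max over two consecutive frames, the trick for the sprite flicker."""
    return np.maximum(frame, prev_frame)

print(epsilon_at(0), epsilon_at(500_000), epsilon_at(2_000_000))  # 1.0, 0.55, 0.1
print(clip_reward(-7), clip_reward(3))                             # -1.0, 1.0
```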
So reading research is really nice and everything, but until you start looking at the actual code and try to implement it, you probably, first, don't understand the actual theory, and secondly, you don't understand all of the pain points needed to actually make this work in a real-world product. So just something I wanted to mention. And also, what I found useful is, for example, when I read this DQN paper the first time, I just found some gists and I went through the code, and that made it much less abstract and made it clearer how it works. So I kinda suggest that you, aside from reading papers, maybe consult some code; that's a nice trade-off to understand the research better. And that's somewhere like halfway between you actually implementing the project and you not seeing any code whatsoever. So yeah, that was a small digression. Hopefully it was useful for some of you. Let me know in the comments if things like this are useful or whether they just distract you. Okay, let's continue. Let us see how the agent progresses during the training. Basically, here we can see the training epochs, and here we can see the average reward that the agent is receiving, and we can see on the Breakout game an upward trend, so that means that the agent is learning better policies and better action values, or Q values. On Seaquest, we can see the same trend. There is a lot of noise, but still the trend is kind of upward, so yeah. A better metric they used is this: the average Q on Breakout and the average Q on whatever game there may be. And you can see that that one is pretty much monotonically increasing and has less noise than these ones here. And let me just explain to you how they actually calculate that one. So: we collect a fixed set of states by running a random policy before training even starts, and they track the average of the maximum predicted Q values for these states. So that looks like the following. Basically, before the training even starts, before the agent starts learning, they do the following. So we can represent the environment and the game as an MDP, and I didn't quite explain what an MDP is, but for now, you can imagine it as a bunch of states where you can transition and receive rewards, and yeah, that's the formalism. But basically what they did is they took a couple of states, and those are those four-frame tuples, remember? So they took a couple of these, and they just stored them in some memory, and basically later on, while the agent is actually learning, they just pass in these specific states. So let's call this one, let's call this two. They'll just feed them through the CNN, and they'll find the max Q value for this state, the max Q value for this state, and they're just gonna be accumulating these, like summing them up and then averaging by the number of states, i.e. they are gonna keep the average of the max Q values for these states. And that's the metric they were using here, and it proved to be much more stable and better for keeping track of the learning progress of the agent. Okay, those are the curves. Here, once you actually train the agent, you can see interesting behavior, and you can see the following. So here you can see that that's our ship that we're controlling, that's the agent.
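That evaluation metric is simple to reproduce; here is a small sketch, assuming q_net is any callable that maps a state to a vector of Q values (the fake network and random states below are only there to show the bookkeeping):

```python
import numpy as np

def average_max_q(q_net, held_out_states):
    """Track the mean of max_a Q(s, a) over a fixed set of states that were
    collected with a random policy before training even started."""
    return float(np.mean([np.max(q_net(s)) for s in held_out_states]))

rng = np.random.default_rng(0)
held_out = [rng.standard_normal((4, 84, 84)) for _ in range(8)]  # states gathered up front
fake_q_net = lambda s: rng.standard_normal(3)                     # stand-in for the CNN
print(average_max_q(fake_q_net, held_out))
```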
The algorithm is controlling the ship, and once the enemy ship enters the screen, that's point A, and you can see that the Q value increases, because we're expecting a reward: once we kill the enemy spaceship, we're gonna see an increase in score, and the Q value correctly anticipates that score increase and has a spike. And then you can see, once the torpedo is shot and we are really close to killing the enemy ship, we're at point B, where we expect a huge reward, and once we actually kill the ship and we enter this state here, the value drops, and it drops because it only expects the future rewards. So we just received a huge reward, and that's why we dipped. So you can see that it's really meaningful for this specific scenario, and it makes a lot of sense. That was another interesting thing that I wanted to mention, and here is one additional detail. So: in addition to seeing relatively smooth improvements to predicted Q during training, they did not experience any divergence issues in any of the experiments; so despite lacking any theoretical convergence guarantees, the method is able to train the network with a reinforcement learning signal and gradient descent in a stable manner. So again, a small discrepancy between the theory and the engineering, or the practice. Sometimes things just work, and later, a couple of years down the road, somebody figures out why it was working. But yeah, sometimes, a lot of times, the engineering and the technology are way ahead of the actual research and theoretical understanding. That's my honest opinion about this. You can disagree, and I'd love to hear your comments on that. Yeah. So the results are here. DQN kinda destroyed every single other baseline. So this is the random policy: if you just play the game at random, these are the scores you get. SARSA is an on-policy algorithm similar to Q-learning, I won't get into the details. Contingency, again, is very similar, and you can see usually Q-learning has better results, and DQN has much better results than all of those baselines, and it's actually even better than humans in some games, like here on Breakout, Enduro, and Pong. It's actually better than the human, and that's kinda interesting. And the scores here, how they got these: they just took some human, and that guy, that poor guy or girl, was just playing the game for two hours, and once the two hours are done, they take the scores that were achieved, they sort them, and they find the middle one, that's the median or the 50th percentile, and that's the score they publish here, and you can see that, yeah, in some cases, DQN achieved better results. One important thing to mention here is that in their subsequent paper, published in Nature in 2015, they actually played on 49 Atari games, and they achieved impressive results where they were actually better than humans on 29 out of those 49 games, and that's, yeah, quite impressive, especially back in 2015. Now we have some improvements like Agent57 and some other algorithms, and I'll be covering those in subsequent videos, so yeah, stay tuned. But the interesting thing you can notice here is that all of the games that require fast reflexes, like Boxing, like Pong, are much easier for our agents to learn how to play than some games like Montezuma's Revenge, which is a really famous game. It's a tough one; you need to play it to understand why I'm saying this.
It requires a lot more strategy than Pong or Breakout or those other games, and that's why you can see we actually have a zero score here, so DQN in this vanilla form cannot learn to play this game at all. So that was just a fun fact. And as I mentioned, these Q*bert, Seaquest, and Space Invaders, so those are these three games, you can see that the results are really poor compared to humans, and the reason is that they're challenging because they require us to find a strategy that extends over long time scales, and that's something that, if you're familiar with any other field like NLP, when you're dealing with long sequences, you're gonna end up having problems, catastrophic forgetting, whatever, the problems that RNNs had, and then LSTMs kind of solved, and then Transformers. So yeah, that's the reason: you can basically, just by analyzing the game, kind of get an intuition for why the algorithm is failing on specific games. And that's pretty much it. These are just some references; I think I've covered it in pretty good detail. Hopefully you stuck with me until the end of the video. I'd love to hear your thoughts, did you like this one better than the last ones, and if you have any tips on how to improve this further, feel free to comment down below, I'll try to answer ASAP. So until next time, you know the drill, hit that bell icon, subscribe, and share the good word, and see you in the next video. Until then, keep learning deep. Yeah.
[{"start": 0.88, "end": 1.72, "text": " What's up folks?"}, {"start": 1.72, "end": 3.7800000000000002, "text": " In this video I'm going to cover the paper"}, {"start": 3.7800000000000002, "end": 5.44, "text": " that caused the ImageNet moment"}, {"start": 5.44, "end": 7.32, "text": " for the reinforcement learning field,"}, {"start": 7.32, "end": 10.74, "text": " the DQN paper, also known as the Playing Atari"}, {"start": 10.74, "end": 14.32, "text": " with Deep Reinforcement Learning paper by Volodymyr Mni"}, {"start": 14.32, "end": 16.8, "text": " and his collaborator at DeepMind."}, {"start": 16.8, "end": 19.48, "text": " And I hopefully pronounce his name right."}, {"start": 19.48, "end": 21.88, "text": " So let's dig into it."}, {"start": 21.88, "end": 23.6, "text": " We present the first deep learning model"}, {"start": 23.6, "end": 26.3, "text": " to successfully learn control policies"}, {"start": 26.3, "end": 29.060000000000002, "text": " directly from high dimensional sensory input,"}, {"start": 29.06, "end": 32.36, "text": " IE images in this case, using RL."}, {"start": 32.36, "end": 35.32, "text": " The model is a CNN trained with a variant of Q-learning"}, {"start": 35.32, "end": 37.44, "text": " and we'll see what it is in a moment,"}, {"start": 37.44, "end": 39.96, "text": " whose input is raw pixels and the output"}, {"start": 39.96, "end": 42.4, "text": " is a value function, actually the action value function"}, {"start": 42.4, "end": 46.28, "text": " or the Q function estimating the future rewards."}, {"start": 46.28, "end": 49.7, "text": " We apply our method to seven Atari games"}, {"start": 49.7, "end": 52.44, "text": " and they don't adjust the architecture"}, {"start": 52.44, "end": 55.36, "text": " or the learning algorithm which makes this algorithm"}, {"start": 55.36, "end": 57.7, "text": " much more general than all of the previous ones."}, {"start": 57.7, "end": 60.120000000000005, "text": " We find that it outperforms all previous approaches"}, {"start": 60.120000000000005, "end": 63.68000000000001, "text": " on six of the games and even surpasses a human expert"}, {"start": 63.68000000000001, "end": 65.0, "text": " on three of them."}, {"start": 65.0, "end": 67.68, "text": " So before I start digging into the paper,"}, {"start": 67.68, "end": 69.56, "text": " I'll just give you a rough sketch"}, {"start": 69.56, "end": 72.64, "text": " so that you have a high level understanding"}, {"start": 72.64, "end": 74.92, "text": " of all of the components and then we'll start"}, {"start": 74.92, "end": 76.96000000000001, "text": " going into a lot of details as I usually do"}, {"start": 76.96000000000001, "end": 78.44, "text": " in my paper overviews."}, {"start": 78.44, "end": 80.2, "text": " So okay, let's start."}, {"start": 80.2, "end": 84.28, "text": " So the first thing is first, the most general framework"}, {"start": 84.28, "end": 87.02000000000001, "text": " of how RL problem is constructed is the following."}, {"start": 87.02, "end": 90.84, "text": " So we have an agent and you'll see this all over the place"}, {"start": 90.84, "end": 94.92, "text": " and we have environment and basically the agent"}, {"start": 94.92, "end": 97.8, "text": " sends some action to the environment"}, {"start": 97.8, "end": 101.56, "text": " and the environment gives it back some observation,"}, {"start": 101.56, "end": 105.03999999999999, "text": " let's call it O and also some reward."}, {"start": 105.03999999999999, "end": 108.88, "text": " And now the whole goal of this 
agent in this environment"}, {"start": 108.88, "end": 112.8, "text": " is to maximize the future expected reward"}, {"start": 112.8, "end": 114.39999999999999, "text": " and that's pretty much it."}, {"start": 114.4, "end": 118.68, "text": " Now this can mean like every single thing you can imagine"}, {"start": 118.68, "end": 121.80000000000001, "text": " can be constructed like this pretty much,"}, {"start": 121.80000000000001, "end": 123.60000000000001, "text": " many things, maybe not everything."}, {"start": 123.60000000000001, "end": 126.56, "text": " So let me just concretize this for the specific problem"}, {"start": 126.56, "end": 129.64000000000001, "text": " we are coping with in this paper and that's Atari."}, {"start": 129.64000000000001, "end": 132.20000000000002, "text": " So in our case, what will happen is the following."}, {"start": 132.20000000000002, "end": 135.28, "text": " So agent is actually a CNN."}, {"start": 136.12, "end": 139.12, "text": " So we have a deep convolutional neural network,"}, {"start": 139.12, "end": 141.04000000000002, "text": " not so deep, you'll see soon it was actually"}, {"start": 141.04000000000002, "end": 143.04000000000002, "text": " only two convoliers."}, {"start": 143.04, "end": 145.88, "text": " And so what we do is the following."}, {"start": 145.88, "end": 149.32, "text": " So we interact with the environment in the following way."}, {"start": 149.32, "end": 152.04, "text": " So the environment is the Atari emulator."}, {"start": 152.04, "end": 154.95999999999998, "text": " So let's just draw it like some screen."}, {"start": 154.95999999999998, "end": 158.07999999999998, "text": " And what we do is we in the case,"}, {"start": 158.07999999999998, "end": 162.45999999999998, "text": " so just a small digression here are the games in Atari."}, {"start": 162.45999999999998, "end": 165.26, "text": " So basically here are five examples, we have Pong,"}, {"start": 165.26, "end": 167.88, "text": " we have something called Breakout, Space Invaders,"}, {"start": 167.88, "end": 169.88, "text": " SeaQuest and Beam Rider."}, {"start": 169.88, "end": 174.04, "text": " And hopefully you're at least familiar with some of these,"}, {"start": 174.04, "end": 176.1, "text": " maybe Pong is the easiest example."}, {"start": 176.1, "end": 179.12, "text": " So you basically have two pedals and you have a ball here"}, {"start": 179.12, "end": 181.85999999999999, "text": " and you're trying to not to miss the ball,"}, {"start": 181.85999999999999, "end": 185.4, "text": " you're trying to beat the opponent by just shooting"}, {"start": 185.4, "end": 187.4, "text": " the balls past through the opponent."}, {"start": 187.4, "end": 190.84, "text": " And once you do that, you receive the reward one."}, {"start": 190.84, "end": 194.34, "text": " If the opponent wins, he gets the reward of one"}, {"start": 194.34, "end": 195.84, "text": " and you get just zero."}, {"start": 195.84, "end": 198.72, "text": " So as you can see, the problem is that the reward"}, {"start": 198.72, "end": 201.48, "text": " is very sparse, you have to play for some time"}, {"start": 201.48, "end": 203.62, "text": " in order to get one and that's it."}, {"start": 203.62, "end": 205.72, "text": " We don't have as in supervised learning"}, {"start": 205.72, "end": 208.76, "text": " in every single step, maybe when you're doing"}, {"start": 208.76, "end": 210.44, "text": " image classification for every single image,"}, {"start": 210.44, "end": 212.32, "text": " you have a label and that's 
how you train."}, {"start": 212.32, "end": 214.8, "text": " So here the reward is sparse, it's delayed"}, {"start": 214.8, "end": 217.07999999999998, "text": " and you're not sure whether the award will come at all"}, {"start": 217.07999999999998, "end": 218.92, "text": " if you're having, if you have in the beginning,"}, {"start": 218.92, "end": 221.04, "text": " especially when you have a random policy,"}, {"start": 221.04, "end": 224.57999999999998, "text": " you'll be getting zero rewards for a long, long time."}, {"start": 224.57999999999998, "end": 227.2, "text": " So having that context in mind,"}, {"start": 227.2, "end": 229.16, "text": " so this is the image we get in Pong,"}, {"start": 229.16, "end": 231.72, "text": " this is the reward, zeros or ones in Pong."}, {"start": 231.72, "end": 234.79999999999998, "text": " Let me get back to the thing I was trying to explain."}, {"start": 234.79999999999998, "end": 236.44, "text": " Whoop, if I can find it, okay."}, {"start": 237.92, "end": 240.2, "text": " Okay, so basically what happens is the following."}, {"start": 240.2, "end": 242.44, "text": " In the Pong game, we'll have three actions."}, {"start": 242.44, "end": 247.44, "text": " We'll have up, we'll have no up, basically just don't move"}, {"start": 248.2, "end": 251.1, "text": " and we have the down action, okay?"}, {"start": 251.1, "end": 252.92, "text": " Just three simple actions."}, {"start": 252.92, "end": 255.48, "text": " And the state will be the following thing."}, {"start": 255.48, "end": 257.59999999999997, "text": " The state will be not the single image"}, {"start": 257.59999999999997, "end": 260.8, "text": " because unfortunately the single image is ambiguous"}, {"start": 260.8, "end": 263.3, "text": " because for example, if you have an image"}, {"start": 263.3, "end": 264.8, "text": " and you have a ball here, you don't know"}, {"start": 264.8, "end": 267.15999999999997, "text": " whether the ball is going to go in this direction,"}, {"start": 267.15999999999997, "end": 268.92, "text": " whether it's going to go in this direction,"}, {"start": 268.92, "end": 271.08, "text": " maybe even in the opposite direction here."}, {"start": 271.08, "end": 273.15999999999997, "text": " So you need to have at least two frames"}, {"start": 273.15999999999997, "end": 275.28, "text": " to have the temporal information"}, {"start": 275.28, "end": 276.82, "text": " so that you can solve this task."}, {"start": 276.82, "end": 278.8, "text": " So because of that, what they actually did"}, {"start": 278.8, "end": 280.12, "text": " is they represented the state"}, {"start": 280.12, "end": 282.36, "text": " after doing some pre-processing on images."}, {"start": 282.36, "end": 285.92, "text": " We'll see that later, they just lump four frames together."}, {"start": 285.92, "end": 290.92, "text": " So like this, one, two, three, four, whatever."}, {"start": 291.04, "end": 293.56, "text": " And that's a state, this is a state."}, {"start": 293.56, "end": 296.76, "text": " And this state goes into the CNN and that's our agent"}, {"start": 296.76, "end": 298.56, "text": " and it outputs the actions."}, {"start": 298.56, "end": 300.84000000000003, "text": " So once we put the state into the CNN,"}, {"start": 300.84000000000003, "end": 304.64, "text": " we get some numbers here and these are the action values."}, {"start": 304.64, "end": 307.04, "text": " And basically maybe if the up value was eight"}, {"start": 307.04, "end": 309.88, "text": " and this one was two and this one was 
maybe minus one,"}, {"start": 309.88, "end": 312.68, "text": " with the best thing to do would be to pick eight"}, {"start": 312.68, "end": 314.92, "text": " because that promises the biggest expected reward"}, {"start": 314.92, "end": 318.0, "text": " in the future if we're following the greedy policy."}, {"start": 318.0, "end": 319.84, "text": " And so we pick eight and that's up,"}, {"start": 319.84, "end": 321.65999999999997, "text": " we send that to the emulator."}, {"start": 321.65999999999997, "end": 323.15999999999997, "text": " The emulator does its thing,"}, {"start": 323.15999999999997, "end": 326.52, "text": " it just renders new screen and sends it back."}, {"start": 326.52, "end": 330.74, "text": " So it sends us back the new frame and some reward"}, {"start": 330.74, "end": 332.94, "text": " or maybe zero reward if nothing happens,"}, {"start": 332.94, "end": 334.84, "text": " if the game still hasn't finished."}, {"start": 334.84, "end": 336.12, "text": " And what we do is the following."}, {"start": 336.12, "end": 338.36, "text": " So we treat this as a circular buffer."}, {"start": 338.36, "end": 342.52000000000004, "text": " So basically what we do is we just take the last frame"}, {"start": 342.52000000000004, "end": 345.32, "text": " in this state, we just get rid of it"}, {"start": 345.32, "end": 350.0, "text": " and we append to the front of this queue,"}, {"start": 350.0, "end": 351.8, "text": " the newest frame and that's our new state."}, {"start": 351.8, "end": 353.56, "text": " So once the model is trained"}, {"start": 353.56, "end": 356.04, "text": " and I'll skip the training part for a second,"}, {"start": 356.04, "end": 358.24, "text": " let's see what happens once the model is fully trained."}, {"start": 358.24, "end": 360.52000000000004, "text": " So the following thing happens."}, {"start": 360.52000000000004, "end": 364.6, "text": " We have a CNN, we get the first observation, the first frame"}, {"start": 364.6, "end": 367.40000000000003, "text": " and we start stacking the frames until we get four frames."}, {"start": 367.4, "end": 369.56, "text": " Once we have four frames, we have the full state,"}, {"start": 369.56, "end": 372.79999999999995, "text": " we do the forward prop, we get the actions,"}, {"start": 372.79999999999995, "end": 376.15999999999997, "text": " we send the action maybe up, we send it to the emulator,"}, {"start": 376.15999999999997, "end": 378.84, "text": " the emulator sends us back the reward and the observed state"}, {"start": 378.84, "end": 380.44, "text": " and we just do that in circle."}, {"start": 380.44, "end": 382.08, "text": " And we're actually following something called"}, {"start": 382.08, "end": 385.56, "text": " Epsilon Greedy Policy and we'll see what it is in a minute."}, {"start": 385.56, "end": 390.28, "text": " So Epsilon Greedy, which basically means the following."}, {"start": 390.28, "end": 394.03999999999996, "text": " So instead of just taking the max action every single time,"}, {"start": 394.03999999999996, "end": 395.91999999999996, "text": " so eight in this particular case,"}, {"start": 395.92, "end": 398.48, "text": " we'll sometimes with Epsilon Probability,"}, {"start": 398.48, "end": 400.96000000000004, "text": " which is usually, let's say 0.1,"}, {"start": 400.96000000000004, "end": 402.28000000000003, "text": " we'll take a random action."}, {"start": 402.28000000000003, "end": 406.84000000000003, "text": " So either up, either down or no up with equal probability"}, {"start": 407.8, "end": 
410.84000000000003, "text": " or with 0.9 probability, we'll be following"}, {"start": 410.84000000000003, "end": 414.48, "text": " this Greedy Policy where we're actually taking the action"}, {"start": 414.48, "end": 418.0, "text": " that has the highest action value associated with it."}, {"start": 418.0, "end": 421.16, "text": " So that's the rough sketch of how this looks like"}, {"start": 421.16, "end": 425.36, "text": " and this is just going to run in cycles, blah, blah, blah,"}, {"start": 425.36, "end": 429.76, "text": " until we receive the dawn flag from the emulator"}, {"start": 429.76, "end": 433.04, "text": " and once the dawn flag is over, that's it, we stop the game."}, {"start": 433.04, "end": 437.0, "text": " And one important thing to note here is that during this game,"}, {"start": 437.0, "end": 440.16, "text": " we have something called a memory replay,"}, {"start": 440.16, "end": 441.84000000000003, "text": " experience replay buffer."}, {"start": 441.84000000000003, "end": 445.44, "text": " And it's nothing else but collection of the experiences"}, {"start": 445.44, "end": 448.76, "text": " we collected during the gameplay."}, {"start": 448.76, "end": 450.48, "text": " So what happens is the following."}, {"start": 450.48, "end": 454.72, "text": " So we have, let me call this state S,"}, {"start": 454.72, "end": 458.48, "text": " let me call the reward R, let me call the new state."}, {"start": 458.48, "end": 462.16, "text": " So after we receive this frame and we update the old state,"}, {"start": 462.16, "end": 465.96000000000004, "text": " we'll have S prime and let me call the action"}, {"start": 465.96000000000004, "end": 469.66, "text": " that we send to the emulator A."}, {"start": 469.66, "end": 474.66, "text": " So we'll be collecting these tuples, S, A, R, S, SARS."}, {"start": 479.56, "end": 482.40000000000003, "text": " Okay, we'll just be plugging these in"}, {"start": 482.40000000000003, "end": 484.68, "text": " into this replay buffer as we're playing."}, {"start": 484.68, "end": 486.88, "text": " So we have a state, we send some action,"}, {"start": 486.88, "end": 489.76, "text": " depending on that state, we received some reward,"}, {"start": 489.76, "end": 493.0, "text": " we received a frame, we updated our state, we got S prime,"}, {"start": 493.0, "end": 498.0, "text": " and we just kind of store this in this replay buffer"}, {"start": 498.0, "end": 501.8, "text": " and this one is huge, like they use one million frames,"}, {"start": 501.8, "end": 504.7, "text": " one million slots here, so just keep that in mind."}, {"start": 505.96000000000004, "end": 509.4, "text": " Once we have this, we'll be later using these tuples"}, {"start": 509.4, "end": 511.56, "text": " to actually train the agent."}, {"start": 511.56, "end": 513.52, "text": " So, so far you just saw how the agent"}, {"start": 513.52, "end": 515.3199999999999, "text": " is actually acting in the environment"}, {"start": 515.3199999999999, "end": 516.6999999999999, "text": " once it's fully trained."}, {"start": 516.6999999999999, "end": 519.98, "text": " Now we can use these SARS tuples to actually train it."}, {"start": 519.98, "end": 522.68, "text": " And I was just gonna give you like a really high level"}, {"start": 522.68, "end": 524.64, "text": " overview and then we'll jump into the details."}, {"start": 524.64, "end": 526.64, "text": " So what we actually do is the following."}, {"start": 526.64, "end": 531.5799999999999, "text": " So we have a state S and we perform some 
action,"}, {"start": 531.5799999999999, "end": 536.16, "text": " we got into state S prime and we received some reward"}, {"start": 536.16, "end": 538.52, "text": " for doing this and this is a tuple"}, {"start": 538.52, "end": 541.64, "text": " from our experience replay buffer, okay?"}, {"start": 541.64, "end": 542.84, "text": " So what happens the following?"}, {"start": 542.84, "end": 546.44, "text": " So the Q network here, the CNM,"}, {"start": 546.44, "end": 550.6800000000001, "text": " will basically have some output for this state."}, {"start": 550.6800000000001, "end": 555.1600000000001, "text": " So we pass in the S and it will give us some Q of S,"}, {"start": 555.1600000000001, "end": 557.62, "text": " so some value for this particular action."}, {"start": 557.62, "end": 562.22, "text": " Maybe this is the action like whatever, we were using up."}, {"start": 562.22, "end": 564.2800000000001, "text": " So we get some value, maybe eight."}, {"start": 564.2800000000001, "end": 566.6, "text": " And that was received some reward here"}, {"start": 566.6, "end": 571.6, "text": " and we get some new value here, Q S prime."}, {"start": 571.6, "end": 574.44, "text": " And the important thing to note is,"}, {"start": 574.44, "end": 577.96, "text": " this is an important detail, is this Q network here"}, {"start": 577.96, "end": 580.62, "text": " is gonna be some old frozen Q network"}, {"start": 580.62, "end": 584.28, "text": " from some previous iteration and that's gonna be used here"}, {"start": 584.28, "end": 586.28, "text": " and not the current network."}, {"start": 586.28, "end": 588.64, "text": " Once we have the Q S prime and R,"}, {"start": 588.64, "end": 593.16, "text": " we're gonna try and do a simple L2 loss minimization."}, {"start": 593.16, "end": 596.9200000000001, "text": " IE, so we're gonna try to do, we're gonna try and make QS"}, {"start": 596.92, "end": 601.92, "text": " as close as possible to the reward we got plus this Q"}, {"start": 603.4, "end": 605.4, "text": " in this new state S prime."}, {"start": 605.4, "end": 608.4, "text": " So that's the whole goal of this training."}, {"start": 608.4, "end": 611.7199999999999, "text": " We're gonna take mini batches of these SAR stouples,"}, {"start": 611.7199999999999, "end": 614.9599999999999, "text": " they were using 32 of these stouples"}, {"start": 614.9599999999999, "end": 618.56, "text": " and we're just gonna do a simple graded descent"}, {"start": 618.56, "end": 621.88, "text": " and do L2 minimization, that's it."}, {"start": 621.88, "end": 626.88, "text": " So basically, MSC loss on these fake labeled data,"}, {"start": 629.0, "end": 631.4, "text": " the stouples we were collecting."}, {"start": 631.4, "end": 634.6, "text": " So having said that, now let's jump into all of the details"}, {"start": 634.6, "end": 637.06, "text": " and see how this actually, actually works."}, {"start": 637.06, "end": 639.12, "text": " Okay, let's go step by step."}, {"start": 639.12, "end": 641.08, "text": " First things first."}, {"start": 641.08, "end": 644.16, "text": " So usually, deep learning applications,"}, {"start": 644.16, "end": 647.32, "text": " they require a large amount of hand labeled training data."}, {"start": 647.32, "end": 649.68, "text": " If you're familiar with computer vision, with NLP,"}, {"start": 649.68, "end": 653.4399999999999, "text": " with graph, ML, with whatever other subfield"}, {"start": 653.4399999999999, "end": 656.06, "text": " of machine learning or deep learning,"}, {"start": 656.06, 
"end": 658.64, "text": " you know that we need a lot of hand labeled data."}, {"start": 658.64, "end": 661.26, "text": " On the other hand, ARIA algorithms have,"}, {"start": 661.26, "end": 663.88, "text": " they have to learn from a scalar reward as we saw,"}, {"start": 663.88, "end": 667.14, "text": " like the score may be in Pong that's one or zero,"}, {"start": 667.14, "end": 669.8599999999999, "text": " maybe in breakout game, it could be,"}, {"start": 669.8599999999999, "end": 671.92, "text": " or in Space Invaders for every single,"}, {"start": 671.92, "end": 674.8399999999999, "text": " like spaceship you kill, maybe get three points, whatever."}, {"start": 674.8399999999999, "end": 678.24, "text": " So it's a scalar reward and it's sparse."}, {"start": 678.24, "end": 681.64, "text": " You only get it at the end of the episode, potentially."}, {"start": 681.64, "end": 685.26, "text": " It's noisy and it's delayed, obviously, because it's sparse."}, {"start": 685.26, "end": 688.2, "text": " I guess these two are kind of correlated."}, {"start": 688.2, "end": 689.92, "text": " Okay, that's the first thing."}, {"start": 689.92, "end": 692.6800000000001, "text": " Another issue is that most algorithms assume"}, {"start": 692.6800000000001, "end": 694.58, "text": " that the data is independent."}, {"start": 694.58, "end": 698.84, "text": " You'll see the IID thing thrown around every single,"}, {"start": 698.84, "end": 699.76, "text": " every now and then."}, {"start": 699.76, "end": 702.36, "text": " So that basically means we have independent,"}, {"start": 702.36, "end": 705.12, "text": " identically distributed data points from our data set."}, {"start": 705.12, "end": 708.92, "text": " But here in RL, you typically encounter a sequence"}, {"start": 708.92, "end": 710.64, "text": " of highly correlated states."}, {"start": 710.64, "end": 711.48, "text": " So why is that?"}, {"start": 711.48, "end": 712.32, "text": " Well, it's pretty simple."}, {"start": 712.32, "end": 715.72, "text": " When you have a Pong game and the ball is here,"}, {"start": 715.72, "end": 720.0, "text": " in the next frame, you can expect the ball to be anywhere."}, {"start": 720.0, "end": 721.2, "text": " It won't be here or here."}, {"start": 721.2, "end": 725.24, "text": " It's gonna be somewhere in this like circle."}, {"start": 725.24, "end": 728.4, "text": " And that means the next state is highly correlated"}, {"start": 728.4, "end": 729.5600000000001, "text": " with the last one."}, {"start": 729.5600000000001, "end": 731.24, "text": " And that's the problem in RL"}, {"start": 731.24, "end": 733.6800000000001, "text": " that we don't have in the computer vision."}, {"start": 733.6800000000001, "end": 735.04, "text": " Usually we don't have it."}, {"start": 735.04, "end": 738.36, "text": " So furthermore, the data distribution changes"}, {"start": 738.36, "end": 740.92, "text": " as the algorithm learns new behaviors,"}, {"start": 740.92, "end": 743.56, "text": " which can be problematic for deep learning methods"}, {"start": 743.56, "end": 746.1999999999999, "text": " that assume a fixed underlying distribution."}, {"start": 746.1999999999999, "end": 748.9599999999999, "text": " So what it means when you're training your classifier"}, {"start": 748.9599999999999, "end": 750.52, "text": " on images in computer vision,"}, {"start": 750.52, "end": 752.16, "text": " you basically have a data set of images"}, {"start": 752.16, "end": 754.64, "text": " and they form some static distribution,"}, {"start": 
754.64, "end": 756.26, "text": " which you're trying to figure out"}, {"start": 756.26, "end": 758.4399999999999, "text": " so that you can predict the class successfully."}, {"start": 758.4399999999999, "end": 762.0999999999999, "text": " Here in RL, when you're learning new policy,"}, {"start": 762.0999999999999, "end": 764.28, "text": " you'll be, because of your new behavior,"}, {"start": 764.28, "end": 766.88, "text": " your new policy, as we call it in RL,"}, {"start": 766.88, "end": 768.76, "text": " you'll be visiting different states."}, {"start": 768.76, "end": 770.8399999999999, "text": " And that means you'll be seeing different data"}, {"start": 770.8399999999999, "end": 771.92, "text": " every now and then."}, {"start": 771.92, "end": 774.04, "text": " And that means that the distribution shifts"}, {"start": 774.04, "end": 775.3199999999999, "text": " and that's problematic."}, {"start": 775.3199999999999, "end": 778.04, "text": " And the way they solve this in this DQM paper"}, {"start": 778.04, "end": 780.1999999999999, "text": " is by using this replay buffer"}, {"start": 780.1999999999999, "end": 782.9599999999999, "text": " because it's gonna have these source tuples"}, {"start": 782.9599999999999, "end": 784.74, "text": " from various, various policies,"}, {"start": 784.74, "end": 787.0, "text": " because in the meanwhile, we're gonna update the network"}, {"start": 787.0, "end": 788.9599999999999, "text": " and we're gonna be storing new tuples"}, {"start": 788.9599999999999, "end": 790.3199999999999, "text": " with the updated network."}, {"start": 790.3199999999999, "end": 792.76, "text": " So we'll have data from different distributions"}, {"start": 792.76, "end": 795.04, "text": " stored in this replay buffer."}, {"start": 795.04, "end": 797.4399999999999, "text": " And that's one of the ways they couple,"}, {"start": 797.4399999999999, "end": 801.2, "text": " they kind of fixed this thing,"}, {"start": 801.2, "end": 804.56, "text": " these instabilities caused by these assumptions"}, {"start": 804.56, "end": 806.4, "text": " being violated."}, {"start": 806.4, "end": 808.48, "text": " Okay, let's continue."}, {"start": 808.48, "end": 812.46, "text": " I already explained the games in Atari we have."}, {"start": 812.46, "end": 815.52, "text": " Just for your kind of mental picture,"}, {"start": 815.52, "end": 817.4, "text": " the imagery is really small."}, {"start": 817.4, "end": 821.38, "text": " It's 210 by 160 at 60 hertz."}, {"start": 821.38, "end": 824.04, "text": " So that's definitely not your 4K video."}, {"start": 824.04, "end": 825.24, "text": " So just keep that in mind."}, {"start": 825.24, "end": 827.08, "text": " The problem was really small."}, {"start": 827.08, "end": 830.66, "text": " Like otherwise, the problem of learning these agents"}, {"start": 830.66, "end": 833.12, "text": " would be really hard back then in 2013"}, {"start": 833.12, "end": 835.6, "text": " when the paper was originally published."}, {"start": 835.6, "end": 839.0, "text": " Okay, so important things to notice here is the following."}, {"start": 839.0, "end": 841.08, "text": " So the network was not provided"}, {"start": 841.08, "end": 843.68, "text": " with any game specific information"}, {"start": 843.68, "end": 846.4, "text": " or hand designed visual features."}, {"start": 846.4, "end": 849.38, "text": " So basically, you don't know the only thing you get"}, {"start": 849.38, "end": 852.56, "text": " for every single one of these games is the observation,"}, {"start": 
852.56, "end": 856.4, "text": " I the frame, and you get the reward, the scalar, that's it."}, {"start": 856.4, "end": 860.32, "text": " So nothing else, non game specific information."}, {"start": 860.32, "end": 862.64, "text": " You don't get the hand designed visual features,"}, {"start": 862.64, "end": 865.76, "text": " you're using CNN to learn the future representations."}, {"start": 865.76, "end": 869.04, "text": " So no hand designing, no hand crafting here."}, {"start": 869.04, "end": 871.76, "text": " And we don't have the internal state of the emulator."}, {"start": 871.76, "end": 874.82, "text": " So that means we're dealing with a partially observed MDP."}, {"start": 874.82, "end": 875.96, "text": " So that may be a mouthful."}, {"start": 875.96, "end": 878.64, "text": " So basically MDP is this formalism in RL,"}, {"start": 878.64, "end": 881.08, "text": " which I won't get too deep into now."}, {"start": 881.08, "end": 883.5, "text": " So just a mark of decision process."}, {"start": 883.5, "end": 885.08, "text": " And basically it's partially observed"}, {"start": 885.08, "end": 888.78, "text": " because you don't know the state behind the emulator."}, {"start": 888.78, "end": 890.08, "text": " You just get the frame."}, {"start": 890.08, "end": 892.02, "text": " You don't know all of the processes that are happening"}, {"start": 892.02, "end": 893.96, "text": " to actually render and prepare that frame."}, {"start": 893.96, "end": 896.0, "text": " So that means we're partially observing the system"}, {"start": 896.0, "end": 899.54, "text": " and that's one more difficulty that we have in Atari here."}, {"start": 899.54, "end": 902.64, "text": " And it's a common thing in the real world applications,"}, {"start": 902.64, "end": 903.48, "text": " but yeah."}, {"start": 904.6, "end": 906.28, "text": " Okay, so it learned nothing."}, {"start": 906.28, "end": 908.72, "text": " It just learns from the video input, from the reward"}, {"start": 908.72, "end": 910.48, "text": " and that's it."}, {"start": 910.48, "end": 912.4599999999999, "text": " And the only thing that's actually specific"}, {"start": 912.4599999999999, "end": 916.4399999999999, "text": " to every single of these games is the number of actions."}, {"start": 916.4399999999999, "end": 917.26, "text": " That's it."}, {"start": 917.26, "end": 919.8199999999999, "text": " So you have a CNN here, CNN agent."}, {"start": 919.8199999999999, "end": 924.36, "text": " And because this is Pong, you'll have three outputs probably."}, {"start": 924.36, "end": 927.4, "text": " So the no up, the down and the up action."}, {"start": 927.4, "end": 929.16, "text": " In Break Board it's gonna be similar."}, {"start": 929.16, "end": 932.06, "text": " Maybe in Space Invaders you can shoot aside"}, {"start": 932.06, "end": 933.36, "text": " from moving the paddle."}, {"start": 933.36, "end": 935.72, "text": " So maybe have, I don't know, like four actions."}, {"start": 935.72, "end": 937.84, "text": " So this is the only thing that's game specific."}, {"start": 937.84, "end": 939.12, "text": " The number of actions."}, {"start": 939.12, "end": 942.5, "text": " And that's kinda meaningful because even humans"}, {"start": 942.5, "end": 945.0, "text": " will have a different joystick so that makes sense."}, {"start": 946.08, "end": 950.36, "text": " Okay, so that's the only thing that's kinda game specific."}, {"start": 950.36, "end": 954.32, "text": " And the architecture, so the CNN and the hyperparams"}, {"start": 954.32, "end": 
957.3000000000001, "text": " were actually kept constant for every single one"}, {"start": 957.3000000000001, "end": 958.2, "text": " of these Atari games."}, {"start": 958.2, "end": 963.1, "text": " So that's a good step towards the artificial"}, {"start": 963.1, "end": 964.32, "text": " general intelligence."}, {"start": 964.32, "end": 967.1, "text": " Although we are very far off because one thing"}, {"start": 967.1, "end": 969.9200000000001, "text": " you need to know here is that for every single one"}, {"start": 969.9200000000001, "end": 972.8000000000001, "text": " of these games, they had a specific network trained"}, {"start": 972.8000000000001, "end": 973.88, "text": " only for that game."}, {"start": 973.88, "end": 976.7600000000001, "text": " So that means if you have seven games or 57 games,"}, {"start": 976.7600000000001, "end": 980.58, "text": " you'll have 57 differently trained CNNs."}, {"start": 980.58, "end": 983.88, "text": " So you can train one CNN and then deploy it as an agent"}, {"start": 983.88, "end": 985.1600000000001, "text": " in all of these environments."}, {"start": 985.1600000000001, "end": 988.86, "text": " You have to train a single CNN on a single game"}, {"start": 988.86, "end": 990.32, "text": " and use it only there."}, {"start": 990.32, "end": 993.62, "text": " And fun thing about RL, like I just recently read"}, {"start": 993.62, "end": 997.72, "text": " this quote, like RL is the only field,"}, {"start": 997.72, "end": 1000.28, "text": " the only subfield of deep learning where it's socially"}, {"start": 1000.28, "end": 1003.04, "text": " acceptable to train on your test data."}, {"start": 1003.04, "end": 1006.3, "text": " And yeah, that's pretty much true because you're training"}, {"start": 1006.3, "end": 1009.6, "text": " on Pong and then you're actually deploying the agent"}, {"start": 1009.6, "end": 1011.6, "text": " in that very same environment."}, {"start": 1011.6, "end": 1012.9, "text": " So just keep that in mind."}, {"start": 1013.76, "end": 1017.86, "text": " Okay, that was the initial things I wanted to explain here."}, {"start": 1017.86, "end": 1020.44, "text": " And now let's see, let's go further."}, {"start": 1020.44, "end": 1023.7, "text": " Now we're gonna see some mathematical formalism"}, {"start": 1023.7, "end": 1024.8600000000001, "text": " of MDPs and stuff."}, {"start": 1024.8600000000001, "end": 1026.6200000000001, "text": " So yeah, let's start."}, {"start": 1026.6200000000001, "end": 1029.8600000000001, "text": " First things first, the task is partially observed"}, {"start": 1029.8600000000001, "end": 1032.16, "text": " and many other states are perceptually aliased"}, {"start": 1032.16, "end": 1034.94, "text": " as I already mentioned that the example with the Pong ball"}, {"start": 1034.94, "end": 1036.88, "text": " that can go in any direction."}, {"start": 1036.88, "end": 1039.46, "text": " So it's impossible to fully understand the current situation"}, {"start": 1039.46, "end": 1040.78, "text": " from only the current frame."}, {"start": 1040.78, "end": 1043.6200000000001, "text": " And that's the reason they're stacking four frames together."}, {"start": 1043.6200000000001, "end": 1046.1200000000001, "text": " We need the temporal information in order to play the game."}, {"start": 1046.1200000000001, "end": 1048.3200000000002, "text": " Okay, let's see the formalism."}, {"start": 1048.32, "end": 1051.54, "text": " This is the quantity we wanna actually maximize."}, {"start": 1051.54, "end": 1055.06, "text": " 
It's the future discounted"}, {"start": 1055.06, "end": 1058.3799999999999, "text": " because of this gamma factor reward."}, {"start": 1058.3799999999999, "end": 1061.24, "text": " And gamma is just a mathematical formalism"}, {"start": 1061.24, "end": 1064.1, "text": " which makes you, when t goes to infinity,"}, {"start": 1064.1, "end": 1067.1599999999999, "text": " when the number of steps here goes to infinity,"}, {"start": 1067.1599999999999, "end": 1069.1, "text": " if we have gamma that's usually between,"}, {"start": 1069.1, "end": 1071.12, "text": " that's always between zero and one,"}, {"start": 1071.12, "end": 1074.62, "text": " and it's usually actually 0.99,"}, {"start": 1074.62, "end": 1078.4599999999998, "text": " when you have that, this array is gonna converge"}, {"start": 1078.4599999999998, "end": 1081.26, "text": " to a finite number and that's desirable"}, {"start": 1081.26, "end": 1082.7399999999998, "text": " when you're dealing, you don't wanna deal"}, {"start": 1082.7399999999998, "end": 1084.4799999999998, "text": " with infinities, right?"}, {"start": 1084.4799999999998, "end": 1086.8799999999999, "text": " But usually you deal with finite episodes"}, {"start": 1086.8799999999999, "end": 1088.9799999999998, "text": " so this gamma doesn't matter so much."}, {"start": 1088.9799999999998, "end": 1091.1399999999999, "text": " You can sometimes even set it to one"}, {"start": 1091.1399999999999, "end": 1092.86, "text": " and nothing will break."}, {"start": 1092.86, "end": 1093.6999999999998, "text": " That's the first quantity."}, {"start": 1093.6999999999998, "end": 1096.02, "text": " The second one is the action value function"}, {"start": 1096.02, "end": 1097.62, "text": " and aside from this one you'll be seeing"}, {"start": 1097.62, "end": 1099.6399999999999, "text": " something called state value function,"}, {"start": 1099.6399999999999, "end": 1101.62, "text": " but for now let's just focus on this one."}, {"start": 1101.62, "end": 1106.62, "text": " And in optimal case you wanna find the policy P,"}, {"start": 1106.8999999999999, "end": 1108.34, "text": " this is what this equation is telling us."}, {"start": 1108.34, "end": 1110.6799999999998, "text": " So we wanna find the policy P"}, {"start": 1110.6799999999998, "end": 1113.82, "text": " that will maximize this expression here."}, {"start": 1113.82, "end": 1115.7399999999998, "text": " And this is just the reward,"}, {"start": 1115.7399999999998, "end": 1118.84, "text": " the expected reward from this particular state"}, {"start": 1118.84, "end": 1121.12, "text": " and taking this particular action."}, {"start": 1121.12, "end": 1123.4599999999998, "text": " So that's what the Q value is telling you."}, {"start": 1123.4599999999998, "end": 1124.5, "text": " It's telling you the following."}, {"start": 1124.5, "end": 1128.1399999999999, "text": " If I'm in this state and I perform this specific action,"}, {"start": 1128.1399999999999, "end": 1130.62, "text": " what's the expected reward I'll get?"}, {"start": 1130.62, "end": 1133.6999999999998, "text": " I can expect from this action in this state."}, {"start": 1133.6999999999998, "end": 1137.3, "text": " And once you get to the actual action value,"}, {"start": 1137.3, "end": 1139.7399999999998, "text": " it's gonna be something called Bellman equation."}, {"start": 1139.7399999999998, "end": 1141.1799999999998, "text": " And you're gonna see this a lot,"}, {"start": 1141.1799999999998, "end": 1144.6799999999998, "text": " especially in the older RL 
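For readers who want the two quantities written out, these are the discounted return and the optimal action-value function being described, in the notation of the DQN paper (reconstructed here for readability; the last line is the Bellman optimality equation discussed next, with the emulator distribution written as a calligraphic E):

```latex
R_t = \sum_{t'=t}^{T} \gamma^{\,t'-t}\, r_{t'}, \qquad \gamma \in (0, 1]

Q^{*}(s, a) = \max_{\pi}\, \mathbb{E}\!\left[\, R_t \mid s_t = s,\; a_t = a,\; \pi \,\right]

Q^{*}(s, a) = \mathbb{E}_{s' \sim \mathcal{E}}\!\left[\, r + \gamma \max_{a'} Q^{*}(s', a') \;\middle|\; s, a \,\right]
```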
algorithms"}, {"start": 1145.58, "end": 1148.62, "text": " where they used to use table lookups"}, {"start": 1148.62, "end": 1150.1, "text": " instead of neural networks."}, {"start": 1150.1, "end": 1153.9399999999998, "text": " So basically this is what the Bellman equation looks like."}, {"start": 1156.3799999999999, "end": 1159.4199999999998, "text": " For the specific case of using something called TD"}, {"start": 1159.42, "end": 1161.94, "text": " or temporal difference learning."}, {"start": 1161.94, "end": 1166.42, "text": " And basically the Q will be equal,"}, {"start": 1166.42, "end": 1170.1000000000001, "text": " the optimal value, the Q star will be equal to R,"}, {"start": 1170.1000000000001, "end": 1172.98, "text": " the reward, plus gamma, discounted,"}, {"start": 1172.98, "end": 1177.22, "text": " maximum Q star across all the actions"}, {"start": 1177.22, "end": 1179.6200000000001, "text": " in the future state S prime."}, {"start": 1179.6200000000001, "end": 1182.0600000000002, "text": " So this is just something to wrap your head around."}, {"start": 1182.0600000000002, "end": 1183.54, "text": " If you've never seen this before,"}, {"start": 1183.54, "end": 1185.94, "text": " it's gonna be kind of look really strange,"}, {"start": 1185.94, "end": 1189.7, "text": " but it's actually much easier when you see the implementation"}, {"start": 1189.7, "end": 1193.1000000000001, "text": " because this tells us the following thing."}, {"start": 1193.1000000000001, "end": 1198.1000000000001, "text": " We take the CNN and we input the current state S."}, {"start": 1199.38, "end": 1202.42, "text": " And basically then we take another CNN"}, {"start": 1204.22, "end": 1206.42, "text": " and we put the state S prime."}, {"start": 1207.98, "end": 1210.02, "text": " And now we're gonna find the max,"}, {"start": 1210.02, "end": 1212.1200000000001, "text": " whatever the max here is,"}, {"start": 1212.1200000000001, "end": 1214.06, "text": " we're gonna sum it up with R"}, {"start": 1214.06, "end": 1219.06, "text": " and once we have the true action value function,"}, {"start": 1219.58, "end": 1221.94, "text": " which we're trying to approximate during this training,"}, {"start": 1221.94, "end": 1223.76, "text": " it's gonna satisfy this equation."}, {"start": 1223.76, "end": 1224.8999999999999, "text": " That's what it says."}, {"start": 1224.8999999999999, "end": 1228.86, "text": " And don't get scared because of the expectation operator."}, {"start": 1228.86, "end": 1232.86, "text": " It just tells us that we are basically sampling"}, {"start": 1232.86, "end": 1235.5, "text": " this next state from the distribution"}, {"start": 1235.5, "end": 1237.26, "text": " that the emulator is providing us."}, {"start": 1237.26, "end": 1239.1, "text": " That's this epsilon here."}, {"start": 1239.1, "end": 1243.58, "text": " But for the old purposes, you can just ignore it"}, {"start": 1243.58, "end": 1245.86, "text": " because this is how it's actually gonna work"}, {"start": 1245.86, "end": 1247.48, "text": " when you try and code it up."}, {"start": 1247.48, "end": 1251.28, "text": " You're just gonna do this and then an expectation"}, {"start": 1251.28, "end": 1252.74, "text": " because you're sampling different states"}, {"start": 1252.74, "end": 1254.1799999999998, "text": " as you're playing the game,"}, {"start": 1254.1799999999998, "end": 1256.5, "text": " this thing is gonna be solved for itself."}, {"start": 1256.5, "end": 1258.6999999999998, "text": " So there is this discrepancy 
between equations"}, {"start": 1258.6999999999998, "end": 1260.06, "text": " and how it actually works."}, {"start": 1260.06, "end": 1262.26, "text": " The second discrepancy you could maybe notice here"}, {"start": 1262.26, "end": 1265.08, "text": " is that Q is fed with S and A,"}, {"start": 1265.08, "end": 1266.34, "text": " but actually as we saw,"}, {"start": 1266.34, "end": 1270.22, "text": " CNN is only taking S and it's outputting the actions."}, {"start": 1270.22, "end": 1273.7, "text": " So that's just the legacy formalism"}, {"start": 1273.7, "end": 1275.78, "text": " that most of these papers have."}, {"start": 1275.78, "end": 1277.7, "text": " And once you actually see the code,"}, {"start": 1277.7, "end": 1279.3, "text": " believe me, it's much easier."}, {"start": 1279.3, "end": 1280.38, "text": " I mean, it's not easier."}, {"start": 1280.38, "end": 1282.6200000000001, "text": " It's just the discrepancies what makes me"}, {"start": 1282.6200000000001, "end": 1285.64, "text": " kinda have this dissonance in my brain."}, {"start": 1285.64, "end": 1288.14, "text": " So I guess most of you already experienced it."}, {"start": 1289.66, "end": 1292.18, "text": " Okay, so now the theory goes like this."}, {"start": 1292.18, "end": 1293.74, "text": " So this is what the optimal,"}, {"start": 1293.74, "end": 1296.94, "text": " the true Q value function will look like."}, {"start": 1296.94, "end": 1298.84, "text": " It's gonna obey this Bellman equation."}, {"start": 1298.84, "end": 1301.4599999999998, "text": " But initially our neural network is random."}, {"start": 1301.4599999999998, "end": 1302.9199999999998, "text": " So it's not gonna obey this."}, {"start": 1302.9199999999998, "end": 1305.9399999999998, "text": " And in order to actually converge"}, {"start": 1305.9399999999998, "end": 1307.58, "text": " to this true Q value function,"}, {"start": 1307.58, "end": 1310.3799999999999, "text": " we're gonna do something called value iteration."}, {"start": 1310.3799999999999, "end": 1314.6999999999998, "text": " And they converge to the optimal action value function."}, {"start": 1314.6999999999998, "end": 1317.6999999999998, "text": " So basically by just doing this thing here,"}, {"start": 1317.6999999999998, "end": 1319.86, "text": " by just iterating,"}, {"start": 1319.86, "end": 1321.6999999999998, "text": " and we'll see the exact algorithm a bit later,"}, {"start": 1321.6999999999998, "end": 1323.76, "text": " but just stick with me for now."}, {"start": 1323.76, "end": 1325.1999999999998, "text": " So by just doing this,"}, {"start": 1325.1999999999998, "end": 1327.8799999999999, "text": " we're gonna slowly get to the true value."}, {"start": 1327.88, "end": 1331.22, "text": " And after we update the Q value,"}, {"start": 1331.22, "end": 1333.88, "text": " because our policy is coupled, if you remember,"}, {"start": 1333.88, "end": 1336.44, "text": " with this Q value by the Epsilon greedy algorithm,"}, {"start": 1336.44, "end": 1338.5800000000002, "text": " we're gonna improve the policy automatically."}, {"start": 1338.5800000000002, "end": 1341.8200000000002, "text": " And that there is this theory from RL that says,"}, {"start": 1341.8200000000002, "end": 1343.5400000000002, "text": " if you do that, if you're improving Q,"}, {"start": 1343.5400000000002, "end": 1345.18, "text": " and if you're improving your policy,"}, {"start": 1345.18, "end": 1346.5, "text": " which we're automatically doing here"}, {"start": 1346.5, "end": 1348.8200000000002, "text": " because 
we're coupled by Epsilon greedy,"}, {"start": 1348.8200000000002, "end": 1352.5, "text": " we're gonna converge to the actual optimal Q value,"}, {"start": 1352.5, "end": 1356.3400000000001, "text": " the Q star action value function."}, {"start": 1356.3400000000001, "end": 1357.72, "text": " So yeah."}, {"start": 1357.72, "end": 1361.38, "text": " And all of this theory actually was first invented,"}, {"start": 1361.38, "end": 1365.04, "text": " let's call it invented for the table lookup example,"}, {"start": 1365.04, "end": 1367.8, "text": " where you had something like this."}, {"start": 1367.8, "end": 1370.1000000000001, "text": " So you basically had a matrix,"}, {"start": 1370.1000000000001, "end": 1374.04, "text": " and along this dimension you'd have states S,"}, {"start": 1374.04, "end": 1376.06, "text": " and along this one you'd have actions."}, {"start": 1376.06, "end": 1378.94, "text": " And now you can see why this is totally impractical."}, {"start": 1378.94, "end": 1383.94, "text": " S is gonna explode in any serious real world problem,"}, {"start": 1383.94, "end": 1386.74, "text": " as well as actions that can be continuous in some cases,"}, {"start": 1386.74, "end": 1388.42, "text": " when in robotics, for example."}, {"start": 1388.42, "end": 1390.78, "text": " So there's gonna be huge and impractical."}, {"start": 1390.78, "end": 1394.06, "text": " And the second thing, aside from memory problems,"}, {"start": 1394.06, "end": 1395.9, "text": " we have something even more serious,"}, {"start": 1395.9, "end": 1398.66, "text": " and that's once you figure out the value"}, {"start": 1398.66, "end": 1401.42, "text": " for this state action point,"}, {"start": 1401.42, "end": 1403.14, "text": " how are you going to generalize"}, {"start": 1403.14, "end": 1407.54, "text": " to every single other point in this state action space?"}, {"start": 1407.54, "end": 1410.14, "text": " You won't, unless you do some interpolation,"}, {"start": 1410.14, "end": 1412.66, "text": " but that's again computationally expensive"}, {"start": 1412.66, "end": 1414.58, "text": " because you have to interpolate."}, {"start": 1414.58, "end": 1417.1399999999999, "text": " Because of that, the neural networks are really cool"}, {"start": 1417.1399999999999, "end": 1419.1399999999999, "text": " because they can just automatically,"}, {"start": 1419.1399999999999, "end": 1421.78, "text": " by just updating the parameters,"}, {"start": 1421.78, "end": 1426.22, "text": " you get the close, you also update the states"}, {"start": 1426.22, "end": 1429.9399999999998, "text": " around this state for free, let's call it like that."}, {"start": 1429.9399999999998, "end": 1432.26, "text": " So that's something I wanted to mention there."}, {"start": 1432.26, "end": 1436.06, "text": " And now this is the thing I initially explained,"}, {"start": 1436.06, "end": 1438.1799999999998, "text": " how we actually train this network."}, {"start": 1438.1799999999998, "end": 1439.58, "text": " So we have the targets,"}, {"start": 1439.58, "end": 1443.1, "text": " and we're just trying to minimize the L2 loss."}, {"start": 1443.1, "end": 1446.1399999999999, "text": " So you see here, we just have the difference"}, {"start": 1446.1399999999999, "end": 1448.1, "text": " and then squared, we're just trying to minimize"}, {"start": 1448.1, "end": 1452.54, "text": " the L2 loss between the target and between the queue."}, {"start": 1452.54, "end": 1456.3, "text": " And the target is, as I mentioned, the frozen network,"}, 
{"start": 1456.3, "end": 1460.02, "text": " so this theta, I minus one, let me zoom in a little bit."}, {"start": 1460.02, "end": 1464.6399999999999, "text": " So we're gonna freeze in some point the parameters,"}, {"start": 1464.6399999999999, "end": 1466.2199999999998, "text": " and we're not gonna change it,"}, {"start": 1466.2199999999998, "end": 1470.1999999999998, "text": " and we're going to use that network for the target network."}, {"start": 1470.1999999999998, "end": 1472.9399999999998, "text": " So how it goes is the following."}, {"start": 1472.94, "end": 1475.78, "text": " We have the SARS top, remember?"}, {"start": 1475.78, "end": 1476.98, "text": " So we have the SARS."}, {"start": 1478.8200000000002, "end": 1482.02, "text": " So what we're going to do is we have the queue network,"}, {"start": 1482.02, "end": 1486.02, "text": " that's our current CNN, okay?"}, {"start": 1487.26, "end": 1492.26, "text": " We input S here, we know what the action is,"}, {"start": 1492.72, "end": 1496.18, "text": " so we'll focus on that queue value,"}, {"start": 1496.18, "end": 1498.8400000000001, "text": " and that's this queue, okay?"}, {"start": 1498.8400000000001, "end": 1500.7, "text": " So how do we get a target?"}, {"start": 1500.7, "end": 1505.42, "text": " Well, we take that frozen CNN, some old CNN,"}, {"start": 1505.42, "end": 1510.42, "text": " old like this, we input the S prime,"}, {"start": 1510.94, "end": 1513.9, "text": " so from the tuple again, we have that information,"}, {"start": 1513.9, "end": 1518.9, "text": " we input the S prime, and now we're gonna actually maximize,"}, {"start": 1519.38, "end": 1523.88, "text": " find the biggest value here, and we're gonna add the R,"}, {"start": 1523.88, "end": 1525.88, "text": " which we also have from the tuple,"}, {"start": 1525.88, "end": 1528.14, "text": " and that's gonna be our target."}, {"start": 1528.14, "end": 1532.66, "text": " So that's gonna be Y of I, and once we have that value,"}, {"start": 1532.66, "end": 1537.66, "text": " we're just gonna try and tweak the parameters here,"}, {"start": 1538.1000000000001, "end": 1542.4, "text": " so as to approach that R plus Q,"}, {"start": 1542.4, "end": 1545.16, "text": " the new Q, the Q prime, let's call it, whatever."}, {"start": 1545.16, "end": 1547.74, "text": " So that's the whole point, that's how it works."}, {"start": 1547.74, "end": 1551.38, "text": " You're doing L2, you're doing the mean squared error"}, {"start": 1551.38, "end": 1552.6000000000001, "text": " when you have the mini-batch,"}, {"start": 1552.6000000000001, "end": 1555.24, "text": " and that's how you train the parameters."}, {"start": 1555.24, "end": 1560.24, "text": " So now you may be puzzled with this replay buffer,"}, {"start": 1560.78, "end": 1562.86, "text": " you may be puzzled with this frozen network,"}, {"start": 1562.86, "end": 1564.82, "text": " some of those are actually just heuristics,"}, {"start": 1564.82, "end": 1567.06, "text": " and there is no theoretical guarantee"}, {"start": 1567.06, "end": 1569.86, "text": " that it's gonna converge, but it does in practice,"}, {"start": 1569.86, "end": 1572.74, "text": " and was inspired by the actual theory"}, {"start": 1572.74, "end": 1575.96, "text": " that came from these exact cases"}, {"start": 1575.96, "end": 1577.32, "text": " where we had the table lookups"}, {"start": 1577.32, "end": 1579.34, "text": " and not the function approximations."}, {"start": 1579.34, "end": 1581.44, "text": " So yeah, hopefully that makes sense,"}, 
{"start": 1582.54, "end": 1583.98, "text": " and they say here the parameters"}, {"start": 1583.98, "end": 1587.02, "text": " from the previous iteration theta I minus one"}, {"start": 1587.02, "end": 1590.6200000000001, "text": " are held fixed when optimizing the loss function, blah blah."}, {"start": 1590.6200000000001, "end": 1593.22, "text": " So we won't be back propagating through this one,"}, {"start": 1593.22, "end": 1598.06, "text": " so this is just, yeah, just the parameters are frozen."}, {"start": 1598.06, "end": 1601.38, "text": " Okay, I think I've covered everything here."}, {"start": 1601.38, "end": 1605.58, "text": " They just explicitly derived the gradient for the loss,"}, {"start": 1605.58, "end": 1607.02, "text": " this is something that your favorite"}, {"start": 1607.02, "end": 1609.98, "text": " deep learning framework of choice does for you,"}, {"start": 1609.98, "end": 1611.7, "text": " so we don't have to cover this one."}, {"start": 1611.7, "end": 1613.14, "text": " So either PyTorch or TensorFlow"}, {"start": 1613.14, "end": 1615.5800000000002, "text": " will just, once you specify the loss like this,"}, {"start": 1615.5800000000002, "end": 1620.5800000000002, "text": " so the simple L2, the framework will find the gradients"}, {"start": 1620.66, "end": 1622.98, "text": " automatically using the autodef engine, okay?"}, {"start": 1624.14, "end": 1626.38, "text": " So let's see these two things."}, {"start": 1626.38, "end": 1630.3200000000002, "text": " First, this Q learning algorithm is model-free."}, {"start": 1630.3200000000002, "end": 1633.3400000000001, "text": " That means we're not trying to explicitly model"}, {"start": 1633.3400000000001, "end": 1635.98, "text": " the transition from S to S prime,"}, {"start": 1635.98, "end": 1637.7800000000002, "text": " from this state to the next state."}, {"start": 1637.7800000000002, "end": 1639.4, "text": " We don't know the probability"}, {"start": 1639.4, "end": 1641.0600000000002, "text": " of transiting into the next state,"}, {"start": 1641.0600000000002, "end": 1642.1000000000001, "text": " we don't care about it,"}, {"start": 1642.1, "end": 1644.6999999999998, "text": " we're just doing model-free RL,"}, {"start": 1644.6999999999998, "end": 1646.32, "text": " interacting with the environment"}, {"start": 1646.32, "end": 1650.2199999999998, "text": " and learning the action value functions directly"}, {"start": 1650.2199999999998, "end": 1652.54, "text": " without caring about the actual probabilities"}, {"start": 1652.54, "end": 1653.82, "text": " and modeling these."}, {"start": 1653.82, "end": 1656.1799999999998, "text": " So that's why this thing is model-free."}, {"start": 1656.1799999999998, "end": 1657.54, "text": " Why is it off-policy?"}, {"start": 1657.54, "end": 1661.02, "text": " Well, it's off-policy because it's using the epsilon greedy"}, {"start": 1661.02, "end": 1664.5, "text": " to collect the data into the replay buffer, if you remember."}, {"start": 1664.5, "end": 1667.2199999999998, "text": " So we are collecting into the replay buffer"}, {"start": 1667.2199999999998, "end": 1669.54, "text": " using the epsilon greedy,"}, {"start": 1669.54, "end": 1673.42, "text": " and then the target is actually using the max policy,"}, {"start": 1673.42, "end": 1676.1, "text": " the greedy policy, and that's another policy."}, {"start": 1676.1, "end": 1681.1, "text": " And so we're trying to learn the max Q value"}, {"start": 1681.58, "end": 1684.06, "text": " while using the behavior policy,"}, 
{"start": 1684.06, "end": 1685.42, "text": " that's how they call it,"}, {"start": 1685.42, "end": 1687.34, "text": " the epsilon greedy and the target policy"}, {"start": 1687.34, "end": 1688.94, "text": " is the greedy policy."}, {"start": 1688.94, "end": 1690.42, "text": " That's why it's off-policy."}, {"start": 1690.42, "end": 1692.94, "text": " So intuitively, what we're trying to do here"}, {"start": 1692.94, "end": 1696.54, "text": " is we wanna have this Q, our CNN here,"}, {"start": 1696.54, "end": 1700.54, "text": " be able to predict what will be the expected reward"}, {"start": 1700.54, "end": 1702.46, "text": " plus the whatever max here is."}, {"start": 1702.46, "end": 1707.46, "text": " So we're gonna try and learn what this particular state,"}, {"start": 1707.8, "end": 1709.36, "text": " what's the max value we can expect."}, {"start": 1709.36, "end": 1711.8999999999999, "text": " So hopefully I didn't confuse you here."}, {"start": 1711.8999999999999, "end": 1713.78, "text": " Let me know in the comments if this was clear enough,"}, {"start": 1713.78, "end": 1718.42, "text": " I can try and explain a bit better next time, whatever."}, {"start": 1718.42, "end": 1722.1, "text": " Okay, let's continue and see what else is there."}, {"start": 1723.7, "end": 1725.1, "text": " So furthermore, it was shown"}, {"start": 1725.1, "end": 1727.5, "text": " that combining model-free reinforcement learning"}, {"start": 1727.5, "end": 1731.34, "text": " such as Q learning with nonlinear function approximators"}, {"start": 1732.6599999999999, "end": 1734.78, "text": " and using off-policy learning"}, {"start": 1734.78, "end": 1737.1, "text": " could cause the Q network to diverge."}, {"start": 1737.1, "end": 1738.78, "text": " So that's the theory."}, {"start": 1738.78, "end": 1740.1, "text": " And that's pretty problematic"}, {"start": 1740.1, "end": 1742.2199999999998, "text": " because you wanna have stable training."}, {"start": 1742.2199999999998, "end": 1746.3799999999999, "text": " And fortunately, DQM paper,"}, {"start": 1746.3799999999999, "end": 1747.5, "text": " they didn't have the problem"}, {"start": 1747.5, "end": 1749.2199999999998, "text": " because of the memory replay buffer"}, {"start": 1749.2199999999998, "end": 1752.02, "text": " and because of the target network being frozen,"}, {"start": 1752.02, "end": 1754.48, "text": " actually, they avoided the instabilities."}, {"start": 1754.48, "end": 1755.78, "text": " But this problem is something known"}, {"start": 1755.78, "end": 1758.6200000000001, "text": " as the deadly triad in the literature."}, {"start": 1758.6200000000001, "end": 1762.38, "text": " So this is just an excerpt from the Rich Sutton's book"}, {"start": 1762.38, "end": 1765.0, "text": " and an introduction to RL,"}, {"start": 1765.0, "end": 1766.78, "text": " which is like a must read"}, {"start": 1766.78, "end": 1769.42, "text": " if you're doing RL really seriously."}, {"start": 1769.42, "end": 1771.52, "text": " At one point of time, you should probably check it out."}, {"start": 1771.52, "end": 1772.3600000000001, "text": " I still haven't."}, {"start": 1772.3600000000001, "end": 1776.82, "text": " I usually start by doing high level resources first"}, {"start": 1776.82, "end": 1779.82, "text": " and then slowly going down to papers"}, {"start": 1779.82, "end": 1781.5, "text": " and only later do I read the book."}, {"start": 1781.5, "end": 1783.08, "text": " So that's maybe counterintuitive,"}, {"start": 1783.08, "end": 1785.3, "text": " but 
that's the best approach for me."}, {"start": 1785.3, "end": 1786.6999999999998, "text": " And he said also here,"}, {"start": 1786.6999999999998, "end": 1789.62, "text": " so when you combine the function approximation,"}, {"start": 1789.62, "end": 1792.74, "text": " the bootstrapping, so bootstrapping being TD,"}, {"start": 1792.74, "end": 1797.26, "text": " and other than TD, you could be using something called MC,"}, {"start": 1797.26, "end": 1801.02, "text": " like Monte Carlo, or you could be using something in between,"}, {"start": 1801.02, "end": 1804.46, "text": " like TD lambda, and you don't have to know"}, {"start": 1804.46, "end": 1805.8799999999999, "text": " what all of these are."}, {"start": 1805.8799999999999, "end": 1810.26, "text": " But it's basically, instead of doing just R"}, {"start": 1810.26, "end": 1813.86, "text": " plus the Q value here and your bootstrapping that way,"}, {"start": 1813.86, "end": 1816.42, "text": " you just roll out until the end of the episodes"}, {"start": 1816.42, "end": 1818.34, "text": " and just use the actual rewards"}, {"start": 1818.34, "end": 1821.82, "text": " instead of trying to use the Q value to bootstrap"}, {"start": 1821.82, "end": 1824.24, "text": " and figure out the actual Q value now."}, {"start": 1824.24, "end": 1826.46, "text": " So that's the bootstrapping part."}, {"start": 1826.46, "end": 1828.02, "text": " And finally, the off policy,"}, {"start": 1828.02, "end": 1830.58, "text": " and that's why it's called a triad, tri for three."}, {"start": 1830.58, "end": 1832.56, "text": " So basically, when you have these three elements"}, {"start": 1832.56, "end": 1835.46, "text": " combined together, you're roughly,"}, {"start": 1835.46, "end": 1837.78, "text": " you're probably going to have instability problems,"}, {"start": 1837.78, "end": 1842.26, "text": " but yeah, as DQN folks showed,"}, {"start": 1842.26, "end": 1844.42, "text": " they didn't have it because of some of the hacks,"}, {"start": 1844.42, "end": 1847.3, "text": " hacks they used, the replay buffer and the target network"}, {"start": 1847.3, "end": 1850.6, "text": " are the main two components that made it work."}, {"start": 1850.6, "end": 1855.54, "text": " Okay, that's that part, let's go further."}, {"start": 1855.54, "end": 1858.12, "text": " And let's actually see the algorithm,"}, {"start": 1858.12, "end": 1860.5, "text": " and I'll try to walk you through this one."}, {"start": 1860.5, "end": 1862.78, "text": " So first, we initialize the memory,"}, {"start": 1862.78, "end": 1865.8799999999999, "text": " the replay buffer to capacity N."}, {"start": 1865.88, "end": 1869.7800000000002, "text": " And they had 1 million slots here,"}, {"start": 1869.7800000000002, "end": 1872.18, "text": " just for being less abstract."}, {"start": 1872.18, "end": 1874.66, "text": " They initialized the action value function Q"}, {"start": 1874.66, "end": 1875.7, "text": " with random weights."}, {"start": 1875.7, "end": 1879.8200000000002, "text": " So just take your CNN and use whatever your favorite"}, {"start": 1879.8200000000002, "end": 1882.74, "text": " in it method is, maybe Gloroth or Xavier,"}, {"start": 1882.74, "end": 1885.3400000000001, "text": " whatever your initialization method is."}, {"start": 1885.3400000000001, "end": 1887.6000000000001, "text": " And then we do the following, we trade over the episode,"}, {"start": 1887.6000000000001, "end": 1890.6200000000001, "text": " we have M episodes, we initialize the sequence,"}, {"start": 
1890.6200000000001, "end": 1894.9, "text": " we take the initial frame from the Atari environment,"}, {"start": 1894.9, "end": 1897.92, "text": " we just do some pre-processing, so that's the four frames,"}, {"start": 1897.92, "end": 1901.0600000000002, "text": " the cropping, some details you'll see in a bit."}, {"start": 1901.0600000000002, "end": 1904.6200000000001, "text": " And then for every single step in that particular episode,"}, {"start": 1904.6200000000001, "end": 1907.0, "text": " we do the following, we perform an action"}, {"start": 1907.0, "end": 1909.44, "text": " using epsilon greedy policy, so that's this."}, {"start": 1909.44, "end": 1911.98, "text": " We probability epsilon, we select a random action,"}, {"start": 1911.98, "end": 1915.5400000000002, "text": " otherwise we select the max, we max it out,"}, {"start": 1915.5400000000002, "end": 1917.3000000000002, "text": " we do the greedy approach."}, {"start": 1917.3000000000002, "end": 1919.94, "text": " Then we send the action to the environment."}, {"start": 1919.94, "end": 1922.98, "text": " So that's if you're using, you're usually gonna be using"}, {"start": 1922.98, "end": 1927.5, "text": " something like OpenAI's gym, so that's gonna be roughly"}, {"start": 1927.5, "end": 1931.06, "text": " correspond to dot step, and you're gonna pass the action,"}, {"start": 1931.06, "end": 1933.26, "text": " and then the environment's gonna return the observation,"}, {"start": 1933.26, "end": 1935.6, "text": " the reward, and the dom flag, which signals whether"}, {"start": 1935.6, "end": 1938.26, "text": " the episode is over or not."}, {"start": 1939.7, "end": 1942.82, "text": " Once we have that, we just pre-process that frame,"}, {"start": 1942.82, "end": 1946.3, "text": " and we store the transition, the star stopple,"}, {"start": 1946.3, "end": 1950.26, "text": " where they're using this five to denote the processed state."}, {"start": 1950.26, "end": 1952.3, "text": " So this is basically the state."}, {"start": 1952.3, "end": 1956.02, "text": " So we just store the star stopple inside of the replay"}, {"start": 1956.02, "end": 1960.7, "text": " buffer, and then we're gonna sample random mini-batch"}, {"start": 1960.7, "end": 1961.54, "text": " from D."}, {"start": 1961.54, "end": 1965.34, "text": " Now, just for your understanding, they're going to do this,"}, {"start": 1965.34, "end": 1967.48, "text": " I think the actual number was four."}, {"start": 1967.48, "end": 1970.82, "text": " So every four actions, you perform four actions,"}, {"start": 1970.82, "end": 1973.76, "text": " and only then do you sample the mini-batch."}, {"start": 1973.76, "end": 1977.26, "text": " Because otherwise, you need to fill in, remember,"}, {"start": 1977.26, "end": 1979.68, "text": " this pre-processing function expects four frames,"}, {"start": 1979.68, "end": 1982.6200000000001, "text": " so we have to kind of fill it up before we start"}, {"start": 1982.6200000000001, "end": 1984.42, "text": " doing the inference with the Q network."}, {"start": 1984.42, "end": 1987.02, "text": " So that's why they're gonna do, I think that's the reason"}, {"start": 1987.02, "end": 1989.98, "text": " they're going to do it every four steps."}, {"start": 1989.98, "end": 1994.3600000000001, "text": " And again, you can see we just formed the target,"}, {"start": 1995.22, "end": 1998.54, "text": " and that's either reward if you're in the terminal state,"}, {"start": 1998.54, "end": 2000.78, "text": " if you're at the last frame of the game,"}, 
{"start": 2000.78, "end": 2003.18, "text": " otherwise we're gonna bootstrap using Q,"}, {"start": 2003.18, "end": 2005.66, "text": " the old Q function, and that's the target,"}, {"start": 2005.66, "end": 2007.96, "text": " and then we're going to do gradient descent"}, {"start": 2007.96, "end": 2011.02, "text": " on the L2 of these two."}, {"start": 2011.02, "end": 2012.0, "text": " And that's it."}, {"start": 2012.0, "end": 2015.22, "text": " So I think the mini-batch size was 32."}, {"start": 2015.22, "end": 2018.66, "text": " That means we'll take 32 samples randomly"}, {"start": 2018.66, "end": 2023.1000000000001, "text": " from this buffer, so just 32 random source tuples,"}, {"start": 2023.1000000000001, "end": 2025.3400000000001, "text": " and we're gonna form this, and we're just gonna"}, {"start": 2025.3400000000001, "end": 2028.06, "text": " average it out and let the deep learning framework"}, {"start": 2028.06, "end": 2030.16, "text": " do the rest for us, find the gradient automatically,"}, {"start": 2030.16, "end": 2032.46, "text": " and minimize this loss."}, {"start": 2032.46, "end": 2034.44, "text": " And that's the whole algorithm."}, {"start": 2034.44, "end": 2037.06, "text": " Hopefully now it's a bit more clear."}, {"start": 2037.06, "end": 2041.08, "text": " And I think I mentioned this, but let me just reiterate."}, {"start": 2041.08, "end": 2043.4199999999998, "text": " So learning directly from consecutive samples"}, {"start": 2043.4199999999998, "end": 2045.3799999999999, "text": " is inefficient due to strong correlations."}, {"start": 2045.3799999999999, "end": 2048.2599999999998, "text": " So I mentioned the example with the ball in Pong"}, {"start": 2048.2599999999998, "end": 2051.34, "text": " and being correlated with the next frame."}, {"start": 2051.34, "end": 2055.54, "text": " So randomizing the samples by taking random samples"}, {"start": 2055.54, "end": 2059.22, "text": " from the replay buffer, we are breaking these correlations,"}, {"start": 2059.22, "end": 2061.74, "text": " and we are reducing the variance of the updates."}, {"start": 2061.74, "end": 2066.34, "text": " So that's the important trick to make this thing stable"}, {"start": 2066.34, "end": 2067.7000000000003, "text": " and to make it work."}, {"start": 2067.7000000000003, "end": 2072.3, "text": " I just took a snippet from the, so this paper is from 2013,"}, {"start": 2072.3, "end": 2074.54, "text": " and later they published a paper in Nature,"}, {"start": 2074.54, "end": 2076.7000000000003, "text": " and so I just took a snippet from that paper,"}, {"start": 2076.7000000000003, "end": 2080.86, "text": " and they have additional line here, so every C steps"}, {"start": 2080.86, "end": 2084.7000000000003, "text": " just swap the target, that's the frozen Q network"}, {"start": 2084.7000000000003, "end": 2085.6600000000003, "text": " I was talking about."}, {"start": 2085.6600000000003, "end": 2088.46, "text": " So you just take the current network, and every C steps,"}, {"start": 2088.46, "end": 2090.6400000000003, "text": " which is maybe every, I don't know, 1,000,"}, {"start": 2090.6400000000003, "end": 2094.02, "text": " I think they use 1K batch updates, you just swap"}, {"start": 2094.02, "end": 2097.02, "text": " the networks and use that one as the frozen network"}, {"start": 2097.02, "end": 2098.94, "text": " to predict your target labels."}, {"start": 2098.94, "end": 2101.86, "text": " And that's the additional detail I wanted to mention."}, {"start": 2101.86, "end": 
2105.5, "text": " Okay, with that out of the way, let's continue"}, {"start": 2105.5, "end": 2109.96, "text": " and dive into a bit more details about the pre-processing."}, {"start": 2109.96, "end": 2114.78, "text": " So what they do is they convert the RGB to grayscale,"}, {"start": 2114.78, "end": 2118.54, "text": " they down sample it to this resolution,"}, {"start": 2118.54, "end": 2122.7, "text": " and then they crop to 84 by 84 because that was"}, {"start": 2122.7, "end": 2125.14, "text": " a constraint from the network they were using,"}, {"start": 2125.14, "end": 2127.54, "text": " and they were using, if you take a look at this reference"}, {"start": 2127.54, "end": 2130.3799999999997, "text": " 11, that's actually AlexNet, the famous network"}, {"start": 2130.3799999999997, "end": 2135.3799999999997, "text": " that caused the ImageNet moment, and it had the constraint"}, {"start": 2136.8399999999997, "end": 2139.06, "text": " of using only square inputs, that's why they crop"}, {"start": 2139.06, "end": 2140.98, "text": " to a square resolution."}, {"start": 2142.3799999999997, "end": 2144.8999999999996, "text": " The second thing they do is, as I already mentioned,"}, {"start": 2144.8999999999996, "end": 2147.14, "text": " is they take the last four frames of history"}, {"start": 2147.14, "end": 2149.02, "text": " and they stack them together to produce the input"}, {"start": 2149.02, "end": 2150.9399999999996, "text": " to the Q function, and that was the state"}, {"start": 2150.94, "end": 2153.02, "text": " that we were using all along."}, {"start": 2153.02, "end": 2155.42, "text": " Okay, so those were some additional details,"}, {"start": 2155.42, "end": 2160.42, "text": " and now one more thing, I kind of highlighted"}, {"start": 2161.66, "end": 2164.7400000000002, "text": " this eight by eight because AlexNet, back in the days,"}, {"start": 2164.7400000000002, "end": 2169.7400000000002, "text": " they were using much wider kernels compared to,"}, {"start": 2169.82, "end": 2172.58, "text": " and shallower networks compared to today's networks,"}, {"start": 2172.58, "end": 2175.98, "text": " which have kernels that are usually three by three,"}, {"start": 2175.98, "end": 2178.58, "text": " and they are much deeper, especially with the arrival"}, {"start": 2178.58, "end": 2182.02, "text": " of ResNet that allowed us to, using those skip connections"}, {"start": 2182.02, "end": 2185.5, "text": " to have much deeper networks going to 150 and more layers,"}, {"start": 2185.5, "end": 2188.34, "text": " and yeah, that's just kind of thought, it's interesting."}, {"start": 2188.34, "end": 2191.34, "text": " One thing they mentioned here additionally is,"}, {"start": 2191.34, "end": 2193.62, "text": " so the main advantage of this type of architecture"}, {"start": 2193.62, "end": 2195.98, "text": " is to compute the QVLs for all possible actions"}, {"start": 2195.98, "end": 2198.8199999999997, "text": " in a given state with only a single forward pass."}, {"start": 2198.8199999999997, "end": 2203.06, "text": " So imagine if we were actually treating the formulas"}, {"start": 2203.06, "end": 2205.7, "text": " like exactly, so you'd have to do the following thing,"}, {"start": 2205.7, "end": 2208.42, "text": " you'd have a CNN, you'd input a state,"}, {"start": 2208.42, "end": 2212.1, "text": " and you'd input an action, and then you'd get a QVL"}, {"start": 2212.1, "end": 2215.42, "text": " for that action, but then you'd have to do a for loop,"}, {"start": 2215.42, "end": 2218.38, 
"text": " you just had to kind of iterate this in a for loop"}, {"start": 2218.38, "end": 2221.1, "text": " to get all the QVLs, and you'd be storing them"}, {"start": 2221.1, "end": 2225.46, "text": " somewhere in summary, and so you'd be using additional time,"}, {"start": 2225.46, "end": 2227.0, "text": " you'd be using additional compute,"}, {"start": 2227.0, "end": 2228.7400000000002, "text": " and you'd be using additional memory"}, {"start": 2228.7400000000002, "end": 2231.2200000000003, "text": " if they haven't made it the way they had,"}, {"start": 2231.2200000000003, "end": 2233.54, "text": " and that's just input a single state,"}, {"start": 2233.54, "end": 2236.34, "text": " and you get all of the actions, you get the QVLs"}, {"start": 2236.34, "end": 2238.06, "text": " for all of the actions, and that's how it works."}, {"start": 2238.06, "end": 2241.06, "text": " So that's additional engineering detail, that's kind of fun."}, {"start": 2242.98, "end": 2246.04, "text": " Let's jump to experiments finally."}, {"start": 2246.04, "end": 2247.7799999999997, "text": " Let's see how this thing performed."}, {"start": 2247.7799999999997, "end": 2250.2999999999997, "text": " I already mentioned it kind of surpassed the humans"}, {"start": 2250.2999999999997, "end": 2253.2999999999997, "text": " in some games and many previous methods,"}, {"start": 2253.2999999999997, "end": 2255.98, "text": " so let's see details about that."}, {"start": 2255.98, "end": 2259.02, "text": " Additionally, because some games like Breakout"}, {"start": 2259.02, "end": 2261.62, "text": " or Space Invader, maybe when you kill a spaceship"}, {"start": 2261.62, "end": 2263.72, "text": " in Space Invader, you'd get plus reward,"}, {"start": 2263.72, "end": 2265.66, "text": " where in Pong you get plus one,"}, {"start": 2265.66, "end": 2268.94, "text": " so because these scales differ between the games,"}, {"start": 2268.94, "end": 2270.58, "text": " what it did, and this is again something"}, {"start": 2270.58, "end": 2273.02, "text": " that's Atari specific, so they are not completely"}, {"start": 2273.02, "end": 2276.24, "text": " agnostic to the setup, they are actually clipping it"}, {"start": 2276.24, "end": 2280.2599999999998, "text": " to plus one or minus one, if we get minus seven reward,"}, {"start": 2280.2599999999998, "end": 2281.92, "text": " that would get clipped to minus one,"}, {"start": 2281.92, "end": 2284.2999999999997, "text": " if we get plus three, that would get clipped to plus one,"}, {"start": 2284.2999999999997, "end": 2286.2999999999997, "text": " and that's additional trick they used"}, {"start": 2286.2999999999997, "end": 2288.8599999999997, "text": " to make it more stable and to be able to use"}, {"start": 2288.8599999999997, "end": 2291.8599999999997, "text": " the same set of hyperparams like the learning rate"}, {"start": 2291.86, "end": 2296.02, "text": " for every single one of these games, Atari games."}, {"start": 2296.02, "end": 2299.1800000000003, "text": " Again, one more detail, the epsilon was actually,"}, {"start": 2299.1800000000003, "end": 2301.9, "text": " the epsilon really, the epsilon was annealed linearly"}, {"start": 2301.9, "end": 2306.3, "text": " from one to zero one over the first million frames"}, {"start": 2306.3, "end": 2310.3, "text": " and fixed to 0.1 afterwards, so what that means,"}, {"start": 2311.54, "end": 2313.98, "text": " that means the following, so we have epsilon,"}, {"start": 2313.98, "end": 2318.36, "text": " we have the steps in our 
RL environment,"}, {"start": 2318.36, "end": 2322.46, "text": " and basically it starts with one and linearly annealed"}, {"start": 2322.46, "end": 2325.1800000000003, "text": " to 0.1, and this is one million frames here,"}, {"start": 2325.1800000000003, "end": 2328.86, "text": " and that means we start off being completely random,"}, {"start": 2328.86, "end": 2331.6600000000003, "text": " our policy is completely random, so we'll be picking"}, {"start": 2331.6600000000003, "end": 2334.82, "text": " whatever state comes into the Q network,"}, {"start": 2334.82, "end": 2337.94, "text": " we pick the action without caring which is the max one,"}, {"start": 2337.94, "end": 2340.86, "text": " we just take one action, and that's how we behave,"}, {"start": 2340.86, "end": 2343.2200000000003, "text": " that's how we explore the environment initially,"}, {"start": 2343.2200000000003, "end": 2345.9, "text": " and that's an important concept in RL, the exploration"}, {"start": 2345.9, "end": 2348.86, "text": " versus exploitation trade-off, and here you can see"}, {"start": 2348.86, "end": 2352.62, "text": " usually the RL algorithms start exploring maximally"}, {"start": 2352.62, "end": 2354.9, "text": " to let it explore the environment and learn"}, {"start": 2354.9, "end": 2357.62, "text": " which states are better by acting at random,"}, {"start": 2357.62, "end": 2360.34, "text": " and then start kind of being more greedy,"}, {"start": 2360.34, "end": 2362.48, "text": " being more deterministic about your actions"}, {"start": 2362.48, "end": 2367.26, "text": " and less stochastic, and you'll usually see policies"}, {"start": 2367.26, "end": 2370.7000000000003, "text": " like the schedules like this one,"}, {"start": 2370.7000000000003, "end": 2374.84, "text": " and then we'll end up having 0.1 epsilon"}, {"start": 2374.84, "end": 2378.1000000000004, "text": " throughout the rest of the training, that's what they did."}, {"start": 2378.1000000000004, "end": 2381.02, "text": " They also additionally use this frame skipping technique"}, {"start": 2381.02, "end": 2385.42, "text": " which means that, just a simple, again Atari specific thing,"}, {"start": 2385.42, "end": 2388.7000000000003, "text": " they just pick an action, and then they just repeat"}, {"start": 2388.7000000000003, "end": 2390.5, "text": " the action for the next three steps,"}, {"start": 2391.42, "end": 2393.86, "text": " they're just sending the same action to the emulator,"}, {"start": 2393.86, "end": 2397.1800000000003, "text": " and only after the fourth frame do they actually"}, {"start": 2397.1800000000003, "end": 2400.38, "text": " pick the new action, here they say that they use K=4,"}, {"start": 2400.38, "end": 2403.38, "text": " so they actually pick the action every four frames"}, {"start": 2403.38, "end": 2407.34, "text": " that comes from the emulator, except for the Space Invaders"}, {"start": 2407.34, "end": 2410.42, "text": " where they actually noticed that the lasers are invisible"}, {"start": 2410.42, "end": 2414.42, "text": " if they take that period, so they actually had to use three,"}, {"start": 2414.42, "end": 2417.7000000000003, "text": " which is again, I think this is the only game specific"}, {"start": 2417.7000000000003, "end": 2422.1400000000003, "text": " detail they actually use, so just, yeah, it's a fun fact."}, {"start": 2422.1400000000003, "end": 2424.6800000000003, "text": " Here I'm not sure whether once they are doing"}, {"start": 2424.6800000000003, "end": 2426.58, "text": " those dummy actions 
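Those three tricks, reward clipping, the linear epsilon schedule, and frame skipping, are each only a few lines in code. A hedged sketch with illustrative names, assuming a gym-style environment:

```python
import numpy as np

def clip_reward(r):
    # Game-agnostic reward scale: any positive reward becomes +1, any negative one -1.
    return float(np.sign(r))

def epsilon_at(step, eps_start=1.0, eps_end=0.1, anneal_steps=1_000_000):
    # Linear anneal from 1.0 down to 0.1 over the first million frames, then held fixed.
    fraction = min(step / anneal_steps, 1.0)
    return eps_start + fraction * (eps_end - eps_start)

def repeat_action(env, action, k=4):
    # Frame skipping: send the same action to the emulator k times (k=3 for Space Invaders)
    # and accumulate the clipped reward; a fresh decision is made only every k-th frame.
    total_reward, done = 0.0, False
    for _ in range(k):
        obs, reward, done, _ = env.step(action)
        total_reward += clip_reward(reward)
        if done:
            break
    return obs, total_reward, done
```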
where they're just repeating"}, {"start": 2426.58, "end": 2430.02, "text": " the last intelligent action, where they're actually"}, {"start": 2430.02, "end": 2431.98, "text": " feeding those frames into the state,"}, {"start": 2431.98, "end": 2434.82, "text": " into the circular buffer that has four slots,"}, {"start": 2434.82, "end": 2436.82, "text": " so I'm not sure if they're using these frames"}, {"start": 2436.82, "end": 2439.46, "text": " to fill that up, or they're only using the frames"}, {"start": 2439.46, "end": 2441.9, "text": " that got received from the environment"}, {"start": 2441.9, "end": 2444.18, "text": " when they produce an intelligent action."}, {"start": 2444.18, "end": 2446.58, "text": " So that's just something I'm confused about,"}, {"start": 2446.58, "end": 2450.02, "text": " maybe if you know, write it down in the comments,"}, {"start": 2450.02, "end": 2451.9, "text": " I haven't checked the original implementation,"}, {"start": 2451.9, "end": 2455.34, "text": " I did check some other GISTs and implementations of DQN,"}, {"start": 2455.34, "end": 2456.78, "text": " so I'm familiar how it works."}, {"start": 2458.7, "end": 2461.46, "text": " There is one thing I wanna mention here,"}, {"start": 2461.46, "end": 2463.18, "text": " and this nicely summarizes the difference"}, {"start": 2463.18, "end": 2465.26, "text": " between research and engineering,"}, {"start": 2465.26, "end": 2467.86, "text": " and that's that for this particular Atari game,"}, {"start": 2467.86, "end": 2470.56, "text": " they noticed this thing, so first encode a single frame,"}, {"start": 2470.56, "end": 2473.98, "text": " we take the maximum value for each pixel color value"}, {"start": 2473.98, "end": 2476.92, "text": " of the frame being encoded and the previous frame."}, {"start": 2476.92, "end": 2479.28, "text": " This was necessary to remove flickering"}, {"start": 2479.28, "end": 2482.3, "text": " that is present in games where some objects appear"}, {"start": 2482.3, "end": 2484.94, "text": " only in even frames, while other objects appear"}, {"start": 2484.94, "end": 2486.54, "text": " only in odd frames."}, {"start": 2486.54, "end": 2489.34, "text": " An artifact caused by the limited number of sprites"}, {"start": 2489.34, "end": 2494.02, "text": " in Atari 2600 that can display at once."}, {"start": 2494.02, "end": 2497.9, "text": " So this is totally not important for this research paper"}, {"start": 2497.9, "end": 2499.54, "text": " and for you understanding this paper,"}, {"start": 2499.54, "end": 2502.1000000000004, "text": " but it is important to understand"}, {"start": 2502.1000000000004, "end": 2505.6200000000003, "text": " that there are so many details that you actually understand"}, {"start": 2505.6200000000003, "end": 2507.58, "text": " when you start implementing the thing."}, {"start": 2507.58, "end": 2510.34, "text": " So reading research is really nice and everything,"}, {"start": 2510.34, "end": 2514.06, "text": " but until you start looking at the actual code"}, {"start": 2514.06, "end": 2517.0, "text": " and try to implement it, you probably first don't understand"}, {"start": 2517.0, "end": 2519.42, "text": " the actual theory, and secondly, you don't understand"}, {"start": 2519.42, "end": 2521.9, "text": " all of the pain points to actually make this work"}, {"start": 2521.9, "end": 2524.82, "text": " in a real world product."}, {"start": 2524.82, "end": 2526.14, "text": " So just something I want to mention."}, {"start": 2526.14, "end": 2530.38, "text": " 
And also, what I found useful is, for example,"}, {"start": 2530.38, "end": 2533.48, "text": " when I read this DQM paper the first time,"}, {"start": 2533.48, "end": 2537.42, "text": " I just found some gists and I went through the code"}, {"start": 2537.42, "end": 2539.6, "text": " and that made it much less abstract"}, {"start": 2539.6, "end": 2541.56, "text": " and clear how it works."}, {"start": 2541.56, "end": 2546.28, "text": " So I kinda suggest that you, aside from reading papers,"}, {"start": 2546.28, "end": 2549.94, "text": " maybe consult some code, and that's a good nice trade-off"}, {"start": 2549.94, "end": 2553.1000000000004, "text": " to understand the research better."}, {"start": 2553.1000000000004, "end": 2555.5, "text": " And that's somewhere like half the way"}, {"start": 2555.5, "end": 2558.38, "text": " between you actually implementing the project"}, {"start": 2558.38, "end": 2561.26, "text": " and you not seeing any code whatsoever."}, {"start": 2561.26, "end": 2563.0600000000004, "text": " So yeah, that was a small digression."}, {"start": 2563.0600000000004, "end": 2565.1400000000003, "text": " Hopefully it was useful for some of you."}, {"start": 2565.1400000000003, "end": 2567.6200000000003, "text": " Let me know if this was useful in the comments,"}, {"start": 2567.6200000000003, "end": 2570.3, "text": " things like this or whether this just distracts you."}, {"start": 2571.7200000000003, "end": 2572.78, "text": " Okay, let's continue."}, {"start": 2572.78, "end": 2577.78, "text": " Let us see how the agent progresses during the training."}, {"start": 2577.94, "end": 2580.78, "text": " Basically, here you can see that the rewards,"}, {"start": 2580.78, "end": 2583.1000000000004, "text": " so here we can see the training epochs,"}, {"start": 2583.1000000000004, "end": 2585.2200000000003, "text": " so and here we can see the average reward"}, {"start": 2585.2200000000003, "end": 2586.46, "text": " that the agent is receiving,"}, {"start": 2586.46, "end": 2590.02, "text": " and we can see on the breakout game an upward trend,"}, {"start": 2590.02, "end": 2593.1400000000003, "text": " so that means that the agent is learning better policies"}, {"start": 2593.1400000000003, "end": 2596.84, "text": " and better action values or Q values."}, {"start": 2597.86, "end": 2601.26, "text": " So basically, on the Sequest, we can see the same trend."}, {"start": 2601.26, "end": 2603.82, "text": " There is a lot of noise, but still the trend"}, {"start": 2603.82, "end": 2605.6600000000003, "text": " is kind of upward, so yeah."}, {"start": 2606.5200000000004, "end": 2609.28, "text": " A better metric they used is this,"}, {"start": 2609.28, "end": 2611.3, "text": " the average Q on breakout"}, {"start": 2611.3, "end": 2614.6200000000003, "text": " and the average Q on whatever game there may be."}, {"start": 2614.6200000000003, "end": 2615.9, "text": " And you can see that that one"}, {"start": 2615.9, "end": 2618.1400000000003, "text": " is pretty much monotonically increasing"}, {"start": 2618.1400000000003, "end": 2623.1400000000003, "text": " and has less noise than these ones here."}, {"start": 2623.2200000000003, "end": 2625.0600000000004, "text": " And let me just explain to you"}, {"start": 2625.0600000000004, "end": 2627.84, "text": " how they actually calculate that one."}, {"start": 2627.84, "end": 2631.0200000000004, "text": " So we collect a fixed set of states"}, {"start": 2631.02, "end": 2634.46, "text": " by running a random policy before training even 
starts,"}, {"start": 2634.46, "end": 2637.06, "text": " and they track the average of the maximum"}, {"start": 2637.06, "end": 2639.22, "text": " predicted Q values for these states."}, {"start": 2639.22, "end": 2641.42, "text": " So that looks the following."}, {"start": 2641.42, "end": 2643.52, "text": " Basically, before the training even starts,"}, {"start": 2643.52, "end": 2646.96, "text": " before the agent starts learning, they do the following."}, {"start": 2646.96, "end": 2651.96, "text": " So we can represent the environment and the game as an MDP,"}, {"start": 2652.58, "end": 2655.88, "text": " and I didn't explain quite what MDP is,"}, {"start": 2655.88, "end": 2659.58, "text": " but for now, you can imagine it as a bunch of states"}, {"start": 2659.58, "end": 2662.7599999999998, "text": " and you can transition and you can receive rewards,"}, {"start": 2662.7599999999998, "end": 2664.7, "text": " and yeah, that's the formalism."}, {"start": 2664.7, "end": 2668.66, "text": " But basically what they did is they took a couple of states,"}, {"start": 2668.66, "end": 2670.98, "text": " and those are those four frame tuples, remember?"}, {"start": 2670.98, "end": 2672.84, "text": " So they took a couple of these,"}, {"start": 2672.84, "end": 2675.38, "text": " and they just stored them into some memory,"}, {"start": 2675.38, "end": 2677.3199999999997, "text": " and basically later on,"}, {"start": 2677.3199999999997, "end": 2679.24, "text": " while the agent is actually learning,"}, {"start": 2679.24, "end": 2681.88, "text": " they just pass these specific values."}, {"start": 2681.88, "end": 2684.08, "text": " So let's call this one, let's call this two."}, {"start": 2684.08, "end": 2686.64, "text": " They'll just feed it through the CNN,"}, {"start": 2686.64, "end": 2691.64, "text": " and they'll find the max Q value for this state,"}, {"start": 2691.8599999999997, "end": 2693.2599999999998, "text": " max Q value for this state,"}, {"start": 2693.2599999999998, "end": 2695.8199999999997, "text": " and they're just gonna be accumulating these,"}, {"start": 2695.8199999999997, "end": 2697.42, "text": " like summing them up and then averaging"}, {"start": 2697.42, "end": 2698.62, "text": " by the number of states,"}, {"start": 2698.62, "end": 2700.8199999999997, "text": " i.e. 
they are gonna keep the average"}, {"start": 2700.8199999999997, "end": 2703.3799999999997, "text": " of the max Q values for these states."}, {"start": 2703.3799999999997, "end": 2705.4, "text": " And that's the metric they were using here,"}, {"start": 2705.4, "end": 2707.3799999999997, "text": " and it proved to be much more stable"}, {"start": 2707.3799999999997, "end": 2709.54, "text": " and better to keep track of the learning progress"}, {"start": 2709.54, "end": 2710.3799999999997, "text": " of the agent."}, {"start": 2711.94, "end": 2715.06, "text": " Okay, those are the curves."}, {"start": 2715.06, "end": 2719.14, "text": " Here, once you actually train the agent,"}, {"start": 2719.14, "end": 2720.94, "text": " you can see interesting behavior,"}, {"start": 2720.94, "end": 2722.5, "text": " and here you can see the following."}, {"start": 2722.5, "end": 2725.74, "text": " So here you can see that that's our ship"}, {"start": 2725.74, "end": 2727.42, "text": " that we're controlling, that's the agent."}, {"start": 2727.42, "end": 2729.2599999999998, "text": " The algorithm is controlling the ship,"}, {"start": 2729.2599999999998, "end": 2731.7, "text": " and once the enemy ship enters the screen,"}, {"start": 2731.7, "end": 2732.7, "text": " that's the point A,"}, {"start": 2732.7, "end": 2735.38, "text": " and you can see that the Q value increases,"}, {"start": 2735.38, "end": 2737.5, "text": " because we're expecting reward"}, {"start": 2737.5, "end": 2739.18, "text": " once we kill the enemy spaceship,"}, {"start": 2739.18, "end": 2741.38, "text": " we're gonna see an increase in score,"}, {"start": 2741.38, "end": 2744.86, "text": " and the Q value correctly anticipates"}, {"start": 2744.86, "end": 2748.3, "text": " that score increase, and then has a spike."}, {"start": 2748.3, "end": 2751.6200000000003, "text": " And then you can see once the torpedo is shot,"}, {"start": 2751.6200000000003, "end": 2754.3, "text": " and we are really close to killing the enemy ship,"}, {"start": 2754.3, "end": 2757.54, "text": " we're at point B, where we expect a huge reward,"}, {"start": 2757.54, "end": 2759.42, "text": " and once we actually kill the ship"}, {"start": 2759.42, "end": 2762.48, "text": " and we enter in this state here,"}, {"start": 2762.48, "end": 2766.58, "text": " the value drops, and it drops because it only expects"}, {"start": 2766.58, "end": 2768.1400000000003, "text": " the future rewards."}, {"start": 2768.1400000000003, "end": 2769.78, "text": " So we just received a huge reward,"}, {"start": 2769.78, "end": 2771.58, "text": " and that's why we dipped."}, {"start": 2771.58, "end": 2773.1800000000003, "text": " So you can see that it's really meaningful"}, {"start": 2773.1800000000003, "end": 2774.6600000000003, "text": " for this specific scenario,"}, {"start": 2774.66, "end": 2776.2599999999998, "text": " and it makes a lot of sense."}, {"start": 2777.8599999999997, "end": 2780.94, "text": " That was another interesting thing"}, {"start": 2780.94, "end": 2781.94, "text": " that I wanted to mention,"}, {"start": 2781.94, "end": 2784.1, "text": " and here is one additional detail."}, {"start": 2784.1, "end": 2786.7799999999997, "text": " So in addition to seeing relatively smooth improvements"}, {"start": 2786.7799999999997, "end": 2789.02, "text": " to predicted Q during training,"}, {"start": 2789.02, "end": 2791.66, "text": " we did not experience any divergence issues"}, {"start": 2791.66, "end": 2793.22, "text": " in any of the experiments,"}, {"start": 
2793.22, "end": 2796.94, "text": " so despite lacking any theoretical convergence guarantees,"}, {"start": 2796.94, "end": 2799.18, "text": " the method is able to train the network"}, {"start": 2799.18, "end": 2801.14, "text": " with RL algorithm signal,"}, {"start": 2801.14, "end": 2804.14, "text": " and gradient descent in a stable manner."}, {"start": 2804.14, "end": 2807.22, "text": " So again, small discrepancy between the theory"}, {"start": 2807.22, "end": 2809.94, "text": " and the engineering or the practice."}, {"start": 2809.94, "end": 2811.58, "text": " Sometimes things just work,"}, {"start": 2811.58, "end": 2813.62, "text": " and later a couple of years down the road,"}, {"start": 2813.62, "end": 2816.8199999999997, "text": " somebody figures out why this was working,"}, {"start": 2816.8199999999997, "end": 2819.98, "text": " or yeah, but yeah, sometimes, a lot of times,"}, {"start": 2819.98, "end": 2824.06, "text": " the engineering and the technology is way in front"}, {"start": 2824.06, "end": 2827.62, "text": " of the actual research and theoretical understanding."}, {"start": 2827.62, "end": 2830.42, "text": " That's my honest opinion about this."}, {"start": 2830.42, "end": 2833.6, "text": " You can disagree, and I'd love to hear your comments on that."}, {"start": 2833.6, "end": 2834.44, "text": " Yeah."}, {"start": 2836.42, "end": 2839.38, "text": " So results are here."}, {"start": 2839.38, "end": 2844.38, "text": " DQN kinda destroyed every single other baseline."}, {"start": 2844.62, "end": 2845.74, "text": " So this is the random policy."}, {"start": 2845.74, "end": 2847.42, "text": " If you just play it random,"}, {"start": 2847.42, "end": 2848.7799999999997, "text": " if you play the game at random,"}, {"start": 2848.7799999999997, "end": 2850.58, "text": " these are the scores you get."}, {"start": 2850.58, "end": 2854.38, "text": " SARSA, it's an online algorithm similar to Q-learning."}, {"start": 2854.38, "end": 2855.94, "text": " I won't get into the details."}, {"start": 2855.94, "end": 2857.94, "text": " Contingency, again, very similar,"}, {"start": 2857.94, "end": 2861.02, "text": " and you can see usually Q-learning has better results,"}, {"start": 2861.02, "end": 2863.46, "text": " and DQN has much better results,"}, {"start": 2863.46, "end": 2864.82, "text": " and all of those baselines,"}, {"start": 2864.82, "end": 2867.18, "text": " and it's actually even better than humans in some games,"}, {"start": 2867.18, "end": 2870.48, "text": " like here on Breakout, Enduro, and Pong."}, {"start": 2870.48, "end": 2871.98, "text": " It's actually better than the human,"}, {"start": 2871.98, "end": 2874.2200000000003, "text": " and that's kinda interesting,"}, {"start": 2874.2200000000003, "end": 2877.98, "text": " and the scores here, how they got these are,"}, {"start": 2877.98, "end": 2880.2, "text": " so they just took some human,"}, {"start": 2881.1, "end": 2883.34, "text": " and that guy, that poor guy or girl,"}, {"start": 2883.34, "end": 2886.18, "text": " they were just playing the game for two hours,"}, {"start": 2886.18, "end": 2889.86, "text": " and once the two hours are done,"}, {"start": 2889.86, "end": 2892.08, "text": " they just take the scores that they achieved,"}, {"start": 2892.08, "end": 2894.8199999999997, "text": " they sort them, and they find the middle one,"}, {"start": 2894.8199999999997, "end": 2898.98, "text": " that's the median or the 50th percentile,"}, {"start": 2898.98, "end": 2901.98, "text": " and that's the score they 
publish here,"}, {"start": 2901.98, "end": 2903.38, "text": " and you can see that, yeah,"}, {"start": 2903.38, "end": 2906.18, "text": " in some cases, DQN achieved better results."}, {"start": 2906.18, "end": 2908.02, "text": " So one important thing to mention here"}, {"start": 2908.02, "end": 2910.86, "text": " is that in their subsequent paper"}, {"start": 2910.86, "end": 2913.34, "text": " that were published in Nature in 2015,"}, {"start": 2913.34, "end": 2916.2799999999997, "text": " they actually played on all the 57 Atari games,"}, {"start": 2916.2799999999997, "end": 2917.7799999999997, "text": " and they achieved impressive results"}, {"start": 2917.7799999999997, "end": 2919.8199999999997, "text": " where they were actually better than humans"}, {"start": 2919.82, "end": 2924.5800000000004, "text": " on 29 out of 57 games,"}, {"start": 2924.5800000000004, "end": 2927.5800000000004, "text": " and that's, yeah, quite impressive,"}, {"start": 2927.5800000000004, "end": 2929.7400000000002, "text": " especially back in 2015."}, {"start": 2929.7400000000002, "end": 2933.1200000000003, "text": " Now we have some improvements like Agent 57,"}, {"start": 2934.1800000000003, "end": 2935.7000000000003, "text": " and some other algorithms,"}, {"start": 2935.7000000000003, "end": 2938.1200000000003, "text": " and I'll be covering those in the subsequent videos,"}, {"start": 2938.1200000000003, "end": 2939.9, "text": " so yeah, stay tuned,"}, {"start": 2939.9, "end": 2942.1400000000003, "text": " but the interesting thing you can notice here"}, {"start": 2942.1400000000003, "end": 2946.1000000000004, "text": " is that all of the games that require fast reflexes,"}, {"start": 2946.1, "end": 2951.1, "text": " like boxing, like Pong, are much easier"}, {"start": 2951.22, "end": 2953.46, "text": " for our agents to learn how to play"}, {"start": 2953.46, "end": 2955.7, "text": " than some games like Montezuma's Revenge,"}, {"start": 2955.7, "end": 2957.58, "text": " it's a really famous game."}, {"start": 2957.58, "end": 2960.38, "text": " It's a tough one, you need to play it"}, {"start": 2960.38, "end": 2962.24, "text": " to understand why I'm saying this."}, {"start": 2962.24, "end": 2964.66, "text": " It requires a lot more strategy than Pong"}, {"start": 2964.66, "end": 2966.5, "text": " or Breakout or those other games,"}, {"start": 2966.5, "end": 2970.46, "text": " and that's why you can see we have actually zero score here,"}, {"start": 2970.46, "end": 2973.36, "text": " so the algorithm cannot, DQN cannot learn"}, {"start": 2973.36, "end": 2974.42, "text": " in this vanilla form,"}, {"start": 2974.42, "end": 2976.78, "text": " cannot learn to play this game at all."}, {"start": 2976.78, "end": 2978.9, "text": " So that was just some fun fact,"}, {"start": 2978.9, "end": 2980.1, "text": " and as I mentioned,"}, {"start": 2980.1, "end": 2982.94, "text": " so this Qbert, Sequest, and Space Invaders,"}, {"start": 2982.94, "end": 2985.1800000000003, "text": " so those are these three games,"}, {"start": 2985.1800000000003, "end": 2988.82, "text": " you can see that results are really poor"}, {"start": 2988.82, "end": 2990.82, "text": " compared to humans,"}, {"start": 2990.82, "end": 2993.1800000000003, "text": " and the reason is that they're challenging"}, {"start": 2993.1800000000003, "end": 2995.1, "text": " because they require us to find a strategy"}, {"start": 2995.1, "end": 2997.26, "text": " that extends over long time scales,"}, {"start": 2997.26, "end": 2999.46, "text": " and 
that's something that if you're familiar"}, {"start": 2999.46, "end": 3001.82, "text": " with any other field like NLP,"}, {"start": 3001.82, "end": 3004.06, "text": " when you're dealing with long sequences,"}, {"start": 3004.06, "end": 3007.18, "text": " you're gonna end up having problems,"}, {"start": 3007.18, "end": 3009.18, "text": " catastrophic forgetting whatever,"}, {"start": 3009.18, "end": 3011.02, "text": " the problems that RNN's had,"}, {"start": 3011.02, "end": 3013.9, "text": " and then LSTM's kind of solved, and Transformers,"}, {"start": 3013.9, "end": 3016.86, "text": " so yeah, that's the reason they,"}, {"start": 3016.86, "end": 3019.48, "text": " you can basically just by analyzing the game"}, {"start": 3019.48, "end": 3021.2999999999997, "text": " kind of get an intuition why the algorithm"}, {"start": 3021.2999999999997, "end": 3022.7799999999997, "text": " is failing on specific games."}, {"start": 3023.68, "end": 3024.74, "text": " And that's pretty much it,"}, {"start": 3024.74, "end": 3026.46, "text": " these are just some references,"}, {"start": 3026.46, "end": 3029.54, "text": " I think I've covered it pretty detailed."}, {"start": 3029.54, "end": 3032.06, "text": " Hopefully if you stuck with me until the end of the video,"}, {"start": 3032.06, "end": 3034.42, "text": " I'd love to hear your thoughts,"}, {"start": 3034.42, "end": 3036.94, "text": " did you like this one better than the last ones,"}, {"start": 3036.94, "end": 3040.98, "text": " and if you have any tips on how to improve this further,"}, {"start": 3040.98, "end": 3042.2999999999997, "text": " feel free to comment down below,"}, {"start": 3042.2999999999997, "end": 3044.7, "text": " I'll try to answer ASAP,"}, {"start": 3044.7, "end": 3047.14, "text": " so until next time, you know the drill,"}, {"start": 3047.14, "end": 3051.14, "text": " hit that bell icon, subscribe, and share the good word,"}, {"start": 3051.14, "end": 3053.04, "text": " and see you in the next video,"}, {"start": 3053.04, "end": 3054.9, "text": " until then, keep learning deep."}, {"start": 3054.9, "end": 3063.7400000000002, "text": " Yeah."}]
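As a side note on the evaluation metric described in the segments above (the held-out average max-Q curve), here is a minimal sketch of how such a metric can be computed. This is an illustration under assumptions, not the paper's code: `env` stands for any Gym-style environment whose `reset()` returns a state and whose `step()` returns (state, reward, done, ...), and `q_network` for any PyTorch module mapping a batch of states to per-action Q values.

```python
import numpy as np
import torch

# Sketch of the metric: collect a fixed set of states with a random policy before
# training, then periodically report the average of max_a Q(s, a) over that set.

def collect_holdout_states(env, num_states=500):
    states = []
    env.reset()
    while len(states) < num_states:
        state, _, done, *_ = env.step(env.action_space.sample())  # random policy
        states.append(np.asarray(state, dtype=np.float32))
        if done:
            env.reset()
    return torch.tensor(np.stack(states))

@torch.no_grad()
def average_max_q(q_network, holdout_states):
    q_values = q_network(holdout_states)              # (num_states, num_actions)
    return q_values.max(dim=1).values.mean().item()   # average of per-state max Q
```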
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=UGvbALEszws
How to get started with Graph ML? (Blog walkthrough)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I walk you through my blog on getting started with Graph ML. I talk about research, learning, cool Graph ML apps, resources to get you started, my GAT project, and beyond! (till infinity) You'll learn about: ✔️ Various research tips ✔️ How to get started with Graph ML ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Medium blog: https://gordicaleksa.medium.com/how-to-get-started-with-graph-machine-learning-afa53f6f963a ✅ GAT project: https://github.com/gordicaleksa/pytorch-GAT ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Research/learning challenges 03:05 What is Graph ML? We're all graphs 05:05 Cool Graph ML applications 09:35 Fake news and fundamental science 13:20 Halicin a potent antibiotic discovered by a GNN 15:45 Contrasting Graph ML with CV and NLP 19:15 Resources - graph embedding methods 21:10 Graph Neural Networks 23:20 Top to bottom approach - high level resources 26:35 Spatial methods 30:40 Simple baselines sometimes work great! 32:25 Parallel with CNNs 33:40 GNN expressivity 36:35 Dynamic graphs 39:15 Unsupervised graph learning and geometric DL 40:55 Datasets/benchmarks and newsletter 44:15 GAT project 44:50 Related research subfields ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphml #graphs #deeplearning
What's up folks? So in this video I'm going to tell you more about how to get started with GraphML. So I'm wrapping up my exploration of Graph Machine Learning and I wrote this blog on how to get started. So I thought of just creating a video and maybe telling you more about my opinions and some of the stuff that I've already written in the blog and giving you some complimentary ideas. So yeah, basically stay tuned and hopefully you'll learn something new. Okay so the first thing I want to talk about is this. So research can be really tough and just be aware that everybody, every single like top-notch researcher you know of or engineer or successful business person, whatever, has hard days and it's not always easy. So I'm talking from my own experience. So it's not always mentally easy to stick with the plan and execute and do the research, read the papers, code up that project. So just be aware of it that you're not alone in this and you should never give up. That's my only advice I have to give you. Never freaking give up. Okay? Secondly, don't get demotivated by the people who are, who seem, who appear to be super successful in something. So the thing is most of them are really specialized. So I like to think from about this from like this linear algebra perspective. So maybe nerdy but there is a method to this madness. Basically I want to say the following. So among all of those different dimensions, all of the things you could be doing researching, learning, about, let's take two axes. So one axis would be maybe somebody who is just reading research papers. So some fictive person that's only reading research. On the other hand you have people who are just coding something, who are working on some deep learning libraries, like maybe in the context of GraphML that would be PyTorch Geometric. And most people tend to be somewhere in between. But there are a lot of people who only read research paper and they never implement something from scratch or never work on anything that has to do with engineering or coding. And I want to tell you that they don't know many details even though they appear to understand everything because they curate their own content. They don't know many details of how stuff works. And I'm talking from my personal experience. So until I start coding something up from scratch I didn't, I don't understand thoroughly how much I actually don't know, if that makes sense. So you want to be somewhere in between. If you want to be a really successful researcher, in my opinion, and you want to boost that H index if that's of something of interest to you, you should probably be reading, focusing on reading research papers a lot. But you should also make these, let's call them detours into implementing something and actually playing with the code. On the other hand if you want to be a great engineer and you probably want to focus on code but you still want to research and read, allocate some time to read research papers because otherwise you'll just lose the pace and you'll, the field is progressing very rapidly and you got to allocate some time to do the due diligence. Having said that, let me walk you through this blog. So first GraphML and hopefully you know what GraphML is. You heard about graphs so those are those data structures and GraphML is just a bunch of machine learning models so we learn on top of that graph data. And the first thing I want to advise you here is don't, if you're, especially if you're a perfectionist, you'll tend to be like this. 
I want to make a, I want to go on a tangent here and maybe invest three months of my time and get some solid background in graph theory, a branch of mathematics. Or maybe get a really good foundations in the network analysis field. And I want to tell you, you don't need to do that. You can jump straight into the field and make a first step and without doing all of those detours. So it's totally fine to start reading this blog and start with high-level resources and that would be your, as a first step to on your journey through Graph Machine Learning. Sometimes you may want to take a step back or maybe better said, make a side step and invest maybe three months of your time into building up more solid mathematical foundations. And me personally, last year in 2020, I did maybe, I invested maybe around three, four months into reading this book, Mathematics for Machine Learning. And it's a bit complex if you're intermediate or even beginner. But it meant a lot for me and it helped me better understand the stuff that I later learned. So the strategy of doing side steps and investing into your some foundations is also a good idea sometimes. So that's GraphML. You have machine learning, you have machine learning models that learn on graph data. Instead of images or text like in computer vision or NLP, you're learning from graphs. And now before we start, before I show you the resources, let's just, let me just walk you through some cool applications. And one more thing to note, you'll hear a lot about graph neural networks here and for the sake of, for now just think of them as neural networks that work with graphs and somehow can process those graphs and do the classification or regression on nodes or edges or even the complete graphs. So just have them as a black box on top of your mind, okay? So first things first, GraphML is such an exciting topic because the applications you'll see here are so much different from those you'll see in the computer vision or the NLP worlds. So there in computer vision you'll see, you'll be seeing algorithms that manipulate images or videos and in NLP you'll be seeing algorithms that manipulate text and those are all pretty homogeneous. So even though you can do, I don't know, facial landmark detection, pose estimation, bunch of things, segmentation, regressing 3D models, bunch of things in computer vision, they are all still kind of related. And then when you start doing GraphML you understand like all the plethora, like the vast diversity of different applications you can do with graphs. So two cool applications I start, I want to start with our recommendation systems and if you're not aware already, like in my opinion, recommendation systems are the single most important and influential piece of AI currently in 2021. So you interact with with recommendation systems all the time, YouTube recommendation, Facebook, Twitter, Pinterest, whatever, just name it, and you have recommendation systems influencing your decisions. So before I started making YouTube videos I wasn't fully aware of it and now I'm getting a thorough understanding of how how influential recommender systems are. So basically as an example I've seen many videos which are really top-notch, high quality, that have 5k views and not because the presenter was anything less eloquent or the content was worse, no. It was just because he or she came on later and so he or she is not on the on the front page, like among the first queries. 
And then you'll see some videos made maybe in 2015, 16, 17 that have amassed a lot of views, so that's what YouTube will recommend to you. What I've learned is that I now tend to go past the first five results instead of just opening the most viewed videos, because I'm now fully aware that recommendation systems can bring you inferior recommendations, unfortunately. Engineers and researchers are working on this, and one approach is to use graph neural networks, because, as you can see here with Uber Eats for example, you can model all of these problems very naturally as graphs. So here you have a graph between users and food, and Uber Eats has a separate graph between users and restaurants. They still hadn't merged the two together, even though I think they're currently working on it, or maybe they even recently deployed it; that's the last news I'm aware of. Just by processing this graph data you can make better recommendations to your users, and obviously I won't get into all of the details here. I did some videos on PinSage, the recommendation engine at Pinterest, and you can check that one out somewhere here in the card. You can see how the graph neural network PinSage improved upon these baselines. Here you can see people carrying some logs, and here you can see the visual baseline: if you use only the visual baseline it will bring you images that are really similar in appearance, because they are also grayscale, but PinSage also captures the semantics, like the soldiers here, and here the trees and the logging. So PinSage got both the semantics and the appearance right by leveraging the graph structure as well as the visual and textual information that's available on those platforms. Those are some nice applications, and aside from recommendation systems we have fake news detection GNNs, and that's so important; the societal benefits of deploying really successful fake news detection algorithms are huge. Again, you can see here that on Twitter you have a single tweet, somebody retweets it, that would be a second node, and you have an edge between them, so you can build up a graph. These networks can then look at the propagation patterns of those retweets, and it was shown in previous research that just from the propagation patterns we can figure out whether the news is fake or real. Again, that's something you can very naturally model as a graph, so GNNs to the rescue.
Aside from this app, there is one more cool application, which is figuring out the probability that you will take a certain action depending on whether your neighborhood did the same thing. To be less abstract, the goal might be to figure out whether you'll retweet a tweet if some of your neighbors retweeted that same tweet; that would be one concrete example, and again you have a graph structure you can leverage to regress these probabilities. So, various cool applications. What's even more interesting is that graph ML, while all of these are still kind of familiar to you, also goes into deep fundamental science problems and can solve them very naturally. One of them would be the example I gave with neutrino detection, where the setup is the following: you have a huge cube of ice with light sensors distributed throughout it. A neutrino comes in from space, hits the ice and interacts with the ice molecules, and in that interaction something called Cherenkov light is emitted. Those light sensors can catch that light, so you see certain nodes of that grid-like structure of sensors get activated, and then you can deploy a graph neural network on top to figure out whether the neutrino came from certain events we care about, like supernova explosions. I don't know about you, but this completely blew my mind, because usually you see neural networks applied to images, videos and text, and that's it, and here a GNN is being used to solve fundamental science problems. There are many other applications I'll skip, like figuring out which molecules can help beat cancer: you can model the problem naturally as a graph, with genes, proteins and all of their interactions, plus some molecules or drugs, and from that graph you can find interesting information such as which molecules can help treat or prevent cancer. Then you have drug side effects: when you combine multiple drugs, it would be beneficial if we could computationally predict the possible side effects, because the combinatorics is so huge that it's impractical to do it in a lab setting, in vivo; we need to do it in silico, computationally. So many cool applications. The last one I'll note here is the one that gained a lot of traction in 2020. We as a society have a huge problem with antibiotic-resistant bacteria. The reason that happens is that people are misusing antibiotics: they're using them for viruses, to treat common colds, and you shouldn't be using antibiotics in those cases; you should be using a specific antibiotic only if you're sure you have a bacterium that this antibiotic can inhibit. Because of all that misuse, there is a threat of bacteria that are resistant to every single antibiotic, and that's the worst nightmare in the science and medical communities. So what this graph neural network did is the following: it was trained on a set of specific molecules to figure out which of them tend to inhibit one specific bacterium, namely Escherichia coli (E. coli) in this example. Once they deployed that trained graph neural network on a new dataset, it was able to filter out about a hundred candidates with a high predicted probability of inhibiting E. coli, and they handed that small subset of molecules to scientists, who identified this molecule called halicin, which turned out to be a super potent antibiotic, extremely successful at inhibiting E. coli. Previously, that same molecule had only been considered for its anti-diabetic properties.
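To make the screening workflow above a bit more concrete, here is a hedged sketch of the train-then-screen loop: train a graph-level classifier on molecules labelled by whether they inhibit the bacterium, then rank a new molecule library by predicted probability and keep the top candidates for the lab. `model` stands for any graph-level classifier, and `train_loader` and `candidate_graphs` are placeholders; this is not the actual pipeline used in the halicin work.

```python
import torch
import torch.nn as nn

# Step 1: supervised training on labelled molecules (1 = inhibits growth).
def train(model: nn.Module, train_loader, epochs=30, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for mol_graph, inhibits in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(mol_graph).squeeze(-1), inhibits.float())
            loss.backward()
            optimizer.step()

# Step 2: score a new molecule library and keep the most promising candidates.
@torch.no_grad()
def screen(model: nn.Module, candidate_graphs, top_k=100):
    scores = [torch.sigmoid(model(graph)).item() for graph in candidate_graphs]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]   # indices of the top-scoring molecules
```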
So those are just some of the super exciting applications of graph neural networks, and I hope you found this interesting and that it motivated you to further investigate the graph ML field. Now let me make a brief detour and compare graph ML with computer vision and natural language processing. I personally think it's really important to try to connect the ideas from various subfields of deep learning, because by creating those connections I think you'll be able to generate much better research ideas, and it's just cool to make these connections; it gives you some kind of satisfaction as well. Basically, you can treat computer vision, or images and videos in general, as just a special case of graphs which are highly regular; we call these grids. You can see here that a four by four pixel image can be represented as this specific graph, where each pixel has four neighbors; in general you can have up to eight neighbors, because you also have the diagonal pixels: top left, top right, bottom left, bottom right. That means we can actually deploy graph neural networks to solve computer vision problems, and the same goes for NLP, because you can treat a sentence either as a fully connected graph or as a tree. But the thing I want to mention here, and I mentioned it in the blog as well, is: don't fall into the trap of thinking that you can solve these problems better with graph ML. Certain inductive biases that CNNs have, like weight sharing and localization, are super useful, and that makes them much more efficient than general graph ML in these settings, even though in some cases you can use graph neural networks and perform better. When I say that, I mean the following: transformers are, in a sense, a special case of graph neural networks, namely a graph neural network that operates on a fully connected graph, and it was shown in October 2020, I think, with the Vision Transformer, that we can use transformers on images and improve upon CNNs if we have enough data. So if you have a huge amount of data, it is possible to use transformers, or graph neural networks, to achieve even better results than with CNNs. But there's a saying: if you only have a hammer, everything looks like a nail, and it's really bad if you only have the graph ML context and don't know other fields, because then you're going to try to use a graph neural network for everything. The same goes for deep learning in general, and that's why it's useful to be aware of different fields, methods and techniques, because for every specific problem there is a solution that's highly optimized for that specific problem. Now, there is a trade-off between being as general as possible, like GPT, and having super specialized algorithms.
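To make the "an image is just a very regular graph" point above concrete, here is a tiny sketch that builds the edge list of a 4-connected grid graph for an H x W image; the 4x4 case is exactly the example mentioned in the blog. The node numbering and the edge format are arbitrary illustration choices.

```python
import numpy as np

# Each pixel becomes a node; edges connect horizontal/vertical neighbours.
# Add the diagonal neighbours as well if you want 8-connectivity.
def grid_graph_edges(height: int, width: int):
    def node_id(row, col):
        return row * width + col
    edges = []
    for r in range(height):
        for c in range(width):
            if c + 1 < width:                       # right neighbour
                edges.append((node_id(r, c), node_id(r, c + 1)))
            if r + 1 < height:                      # bottom neighbour
                edges.append((node_id(r, c), node_id(r + 1, c)))
    return np.array(edges).T                        # shape: (2, num_edges)

edge_index = grid_graph_edges(4, 4)   # the 4x4 example: 16 nodes, 24 undirected edges
print(edge_index.shape)               # (2, 24)
```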
Specialized algorithms are more optimized, but then you have the overhead of developers having to invest a lot of their time to tune that algorithm for every single problem they encounter. On the other hand, you have these general architectures: you just apply them and they work on everything, you don't have to invest any more time, but they are suboptimal on every single one of those problems. So that's the trade-off. Having said all of that, let me now walk you through the resources and advise you on how you can structure your learning of graph machine learning. If I were you, I'd start with graph embedding methods. These are not graph neural networks; they are a specific class of methods similar to Word2Vec, so if you're familiar with natural language processing and you know what Word2Vec is, these are going to be much more familiar and easy to grasp: they are basically Word2Vec repurposed for graphs. For example, you have algorithms like DeepWalk, Node2Vec and Planetoid, and what DeepWalk does is the following: it samples random walks from a graph, and you can treat each walk as a sentence. So if you treat the graph as a corpus of text and a specific random walk as a sampled sentence, it's basically the same thing as Word2Vec: you want neighboring words to have similar embeddings, so if two nodes tend to appear close together in those sentences you make their embeddings really close, and if they never appear in the same sentence you want their embeddings to be really different. That's the main idea behind all of these methods. Take some time to understand them, because these graph embedding methods can be used for exciting graph problems, like doing regression or classification on top of those embeddings, at the node level, the edge level and the whole-graph level. So you should probably start with these, and that's going to be a nice first step; in a way it's orthogonal to the graph neural networks that I'm going to tell you more about in a second. When it comes to graph neural networks, there are two main branches: one is the spectral methods, and the second is the spatial, or message passing, methods. My advice here is to take some time and investigate the spectral methods, because their terminology and references propagate into the spatial methods, and you'll keep seeing terminology from the spectral world, so it's good to understand at least the basics. Having said that, if you don't have strong fundamentals in linear algebra, if you don't understand what an eigenspace, an eigenvector or an eigenvalue is, and if you don't have some understanding of Fourier analysis or signal processing, this is going to be tough. You should probably just read some high-level blogs, try to get familiar with the words and terminology, and not pull yourself down just because you don't understand everything, because, as I told you, you'll need some time to understand it, especially if you don't have the fundamentals. But do go through it and get some feeling for it, because that's going to be super useful for you, believe me.
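Tying back to the DeepWalk/Node2Vec discussion above, here is a minimal DeepWalk-flavoured sketch: sample random walks, treat them as sentences, and run skip-gram over the resulting corpus. It assumes gensim 4.x for the Word2Vec part and a toy adjacency-list graph; it illustrates the idea rather than faithfully reimplementing any of those papers.

```python
import random
from gensim.models import Word2Vec   # assuming gensim 4.x

# `graph` is assumed to be a dict: node -> list of neighbour nodes.
def random_walk(graph, start, walk_length=20):
    walk = [start]
    for _ in range(walk_length - 1):
        neighbours = graph[walk[-1]]
        if not neighbours:
            break
        walk.append(random.choice(neighbours))
    return [str(node) for node in walk]            # Word2Vec expects string tokens

def deepwalk_embeddings(graph, walks_per_node=10, walk_length=20, dim=64):
    corpus = [random_walk(graph, node, walk_length)
              for node in graph for _ in range(walks_per_node)]
    # Skip-gram (sg=1): nodes that co-occur on walks get similar embeddings.
    model = Word2Vec(corpus, vector_size=dim, window=5, min_count=0, sg=1, workers=2)
    return {node: model.wv[str(node)] for node in graph}

toy_graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
embeddings = deepwalk_embeddings(toy_graph)
```

Node2Vec follows the same recipe but biases the walks (breadth-first vs. depth-first behaviour) with two extra hyperparameters.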
The reason I tell you not to worry about the spectral methods too much is that they are computationally very expensive. The way they work, you have something called the graph Laplacian, which is just a huge matrix, and what you want to do is calculate its eigenvectors, and that operation is very expensive. And not only that: these methods also don't generalize to different graph structures, so they have to be applied in a transductive manner and can't be applied to new graphs that are smaller or bigger, because they are inherently transductive. That's the second pain point of these algorithms, but still, as I told you, take some time and learn about them. In my opinion, and again this video is full of opinions, you want to start, as always, with a top-to-bottom approach: start with some high-level videos and high-level blogs that will get you exposed to the terminology, new ideas and new concepts without you having to understand every single detail. You're just trying to build up that skeleton of knowledge, and you should probably do that by watching videos and reading through some Medium and Towards Data Science blogs, which are usually nicely curated, high-quality resources that will expose you to the essential concepts. You'll find some of the links here, including this video series from New York University; they are publishing all of the lectures they hold there, which is super useful for the community, so you should start with those. I mentioned Fourier analysis, so if you haven't watched it, even if you know everything about Fourier, I strongly suggest you go and watch Grant Sanderson's videos on Fourier. Grant Sanderson is better known as the 3Blue1Brown channel, and he has a couple of videos on Fourier which are so nicely visualized that, if you don't have a background in Fourier analysis, they are probably going to be enough for a high-level understanding: Fourier analysis is basically decomposing your signal into a sum of differently scaled frequencies, and that's going to be a really good step toward understanding the spectral methods if you don't have any background whatsoever. I've linked the important papers here; the most important one is Spectral Networks and Locally Connected Networks on Graphs, and ChebNets then localized that method. When I say localized, I mean that to update a single node you only use its k-hop neighborhood instead of every single node in the graph, which can be huge, for example when you're dealing with social-network-scale graphs, so you want to make those algorithms localized. Aside from localization, ChebNets also made it more efficient to calculate these representations by leveraging the graph Laplacian matrices directly and not calculating the eigenvectors I previously mentioned. Again, there are some higher-level blogs; just go through those resources. You should probably be learning from the resources I've linked more or less sequentially: read about the graph embedding methods after you do the high-level blogs and videos, but feel free to experiment; you should just roughly be following the flow of the blog, and as the blog progresses we go into more niche research topics like dynamic graphs, graph expressivity, etc.
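Since the spectral discussion above keeps coming back to the graph Laplacian and its eigenvectors, here is a small sketch of those objects on a toy graph. The `eigh` call is the expensive step that doesn't scale to huge graphs, which is exactly the motivation for ChebNet-style approximations; the 4-node cycle is an arbitrary example.

```python
import numpy as np

# Symmetrically normalized graph Laplacian: L = I - D^{-1/2} A D^{-1/2}.
def normalized_laplacian(adjacency: np.ndarray) -> np.ndarray:
    degrees = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degrees, 1e-12)))
    return np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt

# Toy 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

L = normalized_laplacian(A)
eigenvalues, eigenvectors = np.linalg.eigh(L)   # O(n^3): the expensive step on big graphs
signal = np.array([1.0, 2.0, 3.0, 4.0])         # one scalar feature per node
spectrum = eigenvectors.T @ signal               # "graph Fourier transform" of the signal
```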
Okay, having said that, let me jump to the spatial methods, and these are the methods you'll mostly be seeing: networks such as GCN (the graph convolutional network), GAT (the graph attention network), then GraphSAGE, PinSage, and many other spatial methods. At a high level they all work really similarly; basically, you do the following. You have a node and you have its neighborhood, and what you do is transform the neighbors' feature vectors somehow, maybe with a linear projection or an MLP, then you aggregate them somehow, maybe by taking the mean of those feature vectors, or the max, or whatever, and then you combine the central node's representation with that aggregated, transformed representation, again somehow, and that's your new representation. That's the general framework of message passing networks, and all of these methods can be considered special cases of that framework. I covered many of these spatial methods on my YouTube channel, so check them out: graph attention networks, graph convolutional networks, PinSage and GraphSAGE; those last two are the ones deployed in the recommendation system I mentioned at the beginning of the video. The message passing neural network (MPNN) paper is the one that introduced this general framework, the understanding that all of these methods are just special cases of something and are doing something fairly similar to each other. Two more papers you should probably check out: first, the gated graph neural network, which kind of popularized the usage of graph neural networks again, because similar methods had already been proposed back in the 2004 to 2009 time frame, and this paper repopularized GNNs. It's a nice, really general paper, because it also includes edge features and it uses a gated recurrent unit for the update in the aggregation scheme I just mentioned, so it's a nice read. Aside from that paper, Learning Convolutional Neural Networks for Graphs is a cool paper. I didn't mention this, but what the spectral and spatial methods try to do is generalize the concept of convolution, which works so nicely in the computer vision world with CNNs, onto graphs, and that turned out to be quite a difficult, challenging problem. There were different approaches: one was the spectral one, and one was the spatial one, which is basically an approximation of the spectral methods, so it's not as mathematically rigorous, but, as I already mentioned, in practice it works really nicely. This paper introduced yet another idea, which was to copy the concept from CNNs more literally than mathematically, if that makes sense. What they tried to do is introduce the concept of node ordering into their method, because the main reasons you can't just use a fixed kernel like in a CNN are, first, that you have a varying number of nodes: sometimes a node will have only a single neighbor, sometimes it will have a thousand neighbors, and you obviously can't use a nine by nine or three by three kernel as is commonly done in CNNs. The second problem is that you don't have any notion of ordering: in CNNs you have the top-left pixel and the top-right pixel, and in GNNs you don't have that notion. So what they tried to do is linearize the neighborhood by doing something called graph labeling, and then apply a fixed-size kernel onto those ordered, linearized neighborhoods. It's a nice read, even though the idea itself didn't get that much traction.
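Circling back to the generic transform-aggregate-combine scheme described at the start of this section, here is a minimal sketch of one such message passing layer in plain PyTorch. It is deliberately naive (a Python loop over nodes, mean aggregation) and is not GCN, GAT or GraphSAGE; those are specific, optimized instances of the same three steps.

```python
import torch
import torch.nn as nn

class SimpleMessagePassing(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.transform = nn.Linear(in_dim, out_dim)          # step 1: transform neighbours
        self.combine = nn.Linear(in_dim + out_dim, out_dim)   # step 3: combine with self

    def forward(self, node_features, neighbours):
        # node_features: (num_nodes, in_dim); neighbours: list of neighbour-index lists
        new_features = []
        for node, neigh_ids in enumerate(neighbours):
            if neigh_ids:
                messages = self.transform(node_features[neigh_ids])   # (degree, out_dim)
                aggregated = messages.mean(dim=0)                     # step 2: aggregate
            else:
                aggregated = torch.zeros(self.transform.out_features)
            combined = torch.cat([node_features[node], aggregated])
            new_features.append(torch.relu(self.combine(combined)))
        return torch.stack(new_features)

x = torch.randn(4, 8)                         # 4 nodes, 8 features each
neighbours = [[1, 2], [0, 2], [0, 1, 3], [2]]
out = SimpleMessagePassing(8, 16)(x, neighbours)   # (4, 16)
```

Real implementations vectorize the loop with scatter/gather operations, which is essentially what libraries like PyTorch Geometric provide.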
But it's an awesome read, so do check it out. Okay, I'm wrapping up the message passing networks with this Simplifying Graph Convolutional Networks paper. What they figured out is that for some graph classes, like homophilic graphs, you can use much simpler baselines than graph neural networks, and this paper showed exactly that. Homophilic graphs are graphs where connected nodes tend to have similar labels; that's common in social-network-type graphs, because people who have similar opinions tend to cluster together, and those types of graphs are called homophilic. What this paper showed is that you can use a much simpler baseline: instead of a multi-layer graph neural network running the message passing scheme I told you about multiple times over the one-hop neighborhood, your direct connections, you can just aggregate the neighbors from the k-hop neighborhood, where k can be maybe three or four, which means you reach up to fourth-order, indirect connections, and then use a single MPNN layer to build up the representations. That can achieve even better results than those complex spatial methods like the graph attention network, and it's much more computationally efficient because it's so much simpler. Now, there is a nice parallel I can make here with CNNs. If you know anything about CNNs, you know that back in 2012, when AlexNet was introduced and produced the so-called ImageNet moment, AlexNet had, I think, 11 by 11 kernels, which is really broad compared to VGG, which around 2015 figured out that using 3 by 3 kernels in much deeper architectures is actually beneficial. So you can see the parallel: with the arrival of certain ideas, such as residual connections and better normalization, we were able to make deeper and deeper CNNs while keeping the kernels really small, at 3 by 3, and that turned out to be much better than using large kernels and shallower networks. On the other hand, here in graph ML, using big "kernels" and shallow networks actually produces better results on certain classes of graphs; not on everything, sometimes you still need deep graph neural networks, so just keep that in mind. Okay, those were the main methods, and the topics that follow are a bit more esoteric, but no less useful than the methods I just explained, like the embeddings, the spectral methods and the spatial methods.
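Here is a rough sketch of the "simpler baseline" idea above, in the spirit of the SGC paper: smooth the node features over the k-hop neighbourhood by repeatedly multiplying with a normalized adjacency matrix, then train a single linear classifier on top. This follows the spirit of that paper rather than its exact implementation, and the toy graph at the bottom is arbitrary.

```python
import torch
import torch.nn as nn

# Symmetrically normalized adjacency with self-loops.
def normalized_adjacency(adjacency: torch.Tensor) -> torch.Tensor:
    a_hat = adjacency + torch.eye(adjacency.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class SGC(nn.Module):
    def __init__(self, in_dim, num_classes, k=3):
        super().__init__()
        self.k = k
        self.classifier = nn.Linear(in_dim, num_classes)   # the only learned part

    def forward(self, features, adjacency):
        s = normalized_adjacency(adjacency)
        for _ in range(self.k):                             # k hops of feature smoothing
            features = s @ features
        return self.classifier(features)

A = torch.tensor([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=torch.float)
X = torch.randn(3, 5)
logits = SGC(in_dim=5, num_classes=2, k=3)(X, A)            # (3, 2)
```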
GNN expressivity is really interesting, so let me explain what it is in a nutshell; it's actually fairly easy when you put it this way. You see these two graphs here, and you can see that they are different; the more formal term is that they are non-isomorphic. What you ideally want is for your graph neural network to assign different embeddings to two non-isomorphic graphs and, on the other hand, the same embedding to two isomorphic graphs; that's the ideal case. But all of the methods I explained so far, like GCN, the graph convolutional network, or GAT, are actually confused by this simple example. As you can see, the color histograms are the same: four reds here and four reds here, four blues and four blues, two greens and two greens. The way graph classification usually works is that you take all of these node features, do a mean over them, and then try to make a prediction; because the histograms are the same, in this particular case we'll get the same embedding even though the graphs are non-isomorphic. That's the failure case we want to avoid, and by building more expressive models they'll know how to cope with cases like this one. So the field of expressivity deals with this exact problem: assign the same embedding if the graphs are isomorphic, otherwise assign different embeddings. The Graph Isomorphism Network paper first showed that there are certain families of graphs that confuse GCNs, GATs and the other MPNN-style networks, and later a lot of cool papers appeared that count substructures to increase expressivity or use multiple aggregation techniques, like the Principal Neighbourhood Aggregation paper, etc. But the basic idea I want you to take away is just this: discriminating between non-isomorphic graphs. As for the papers on exciting GNN applications, feel free to go through those resources at your own pace; I already walked you through them, and hopefully you'll find some interesting additional papers there. Okay, now the handling dynamic graphs chapter. All the methods I've explained so far deal with static graphs, so what's static and what's dynamic? Dynamic graphs are those that evolve with time: new edges or new nodes appear as time progresses. A good example would be a social network, maybe Facebook or Twitter, where each new tweet is a new node that appears at a certain time point, and the same goes for edges: somebody retweets something, and that's a new edge that appeared dynamically in the graph. So the graph is constantly being grown and pruned; it's usually always growing, but some pruning happens when somebody deletes an account or a tweet. For those types of graphs we need different methods, and there are two main ideas I want you to take away from this section. The first is that you need to find a way to represent time, which is a scalar, as a vector; I've linked some papers like TGN here, and you'll see references inside, like Time2Vec I think, which show one of the ways to map the scalar time into a vectorized representation. That's one crucial idea. The second is the concept of a temporal neighborhood: you have a single node, let's say it's a tweet, and you want to aggregate only those edges that are, say, at most one hour old and ignore all of the other edges, because otherwise you have a huge graph and you run into this thing called the over-squashing problem, where you have so many nodes to aggregate, usually with some mean, and that's a similar effect to what happens in bag-of-words models, if you're familiar with that technique.
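Going back to the expressivity failure case above for a moment, here is a concrete version of it: two non-isomorphic graphs (two disjoint triangles vs. a single 6-cycle) with identical node features, pushed through a plain mean-aggregation plus mean-readout pipeline. Both come out with exactly the same graph embedding; this is a standard example of the kind of case the GIN paper analyses, and the pipeline here is a deliberately simple stand-in for a real GNN.

```python
import torch

def mean_aggregate(adjacency, features, num_layers=3):
    degree = adjacency.sum(dim=1, keepdim=True)
    for _ in range(num_layers):
        features = (adjacency @ features) / degree   # average of neighbour features
    return features.mean(dim=0)                      # mean readout -> graph embedding

two_triangles = torch.tensor([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0]], dtype=torch.float)

six_cycle = torch.tensor([
    [0, 1, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0]], dtype=torch.float)

x = torch.ones(6, 4)                                 # identical features on every node
print(mean_aggregate(two_triangles, x))              # same vector ...
print(mean_aggregate(six_cycle, x))                  # ... as this one, despite non-isomorphism
```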
So you know that by doing bag-of-words you lose information, and the same thing happens in RNNs, where you get this over-squashing effect: as the sequence becomes longer, you're effectively doing this bag-of-words thing and losing the older information. Transformers, for example, tackled this problem by using the concept of attention: you give more attention to the more important feature vectors and ignore the less important ones. So basically, the temporal neighborhood helps us avoid that over-squashing problem that happens in all of those bag-of-words-like methods, such as RNNs. I linked a couple of unsupervised learning papers, like DGI (Deep Graph Infomax) and the variational graph autoencoder; if you're not familiar with variational autoencoders, that paper probably won't make any sense, because it's only three or four pages long and already assumes you know everything about variational autoencoders, but it's a nice read if you're interested in learning more about these unsupervised learning methods. I also made a small tangent here on the field called geometric deep learning. When you start doing graph ML you'll hear about geometric deep learning, that's for sure, and it's a pretty much separate research area, which deals with manifolds and not with graphs per se. Even though you can treat certain manifolds, once you discretize them, as graphs, the same thing holds as with computer vision and NLP: there is a class of algorithms that are much better optimized to deal with manifolds, even though the manifolds could be treated as graphs once discretized. So just keep that in mind and don't make too big of a detour into that field, because it's pretty much a separate research area compared to graph ML. With that out of the way, I linked a couple of cool visualization tools here, like iGraph; I've used iGraph personally in my graph attention network project, so check that one out if you haven't, I'll link it down in the description as well. Aside from that, I've been mentioning datasets here, and datasets are so important, yet they usually don't get much attention in the community. Usually it goes like this: there's a huge model like GPT with some enormous number of parameters and that gets really hyped up, but the datasets, which are so important for progressing the research, don't get as much attention. And there is a problem in graph ML: people have used the Cora, Citeseer, PPI and PubMed datasets a lot over the previous years, and only recently have we started to integrate bigger, real-world graphs and use them as benchmarks. What happened is that when you use these, let's call them toy benchmark graph datasets, similar to MNIST in computer vision, you basically can't discern the much more expressive and better models from the simpler baselines. That's the whole point of a benchmark, right? You want a benchmark which clearly separates and sorts the better algorithms from the worse ones, and here the margin became smaller and smaller even between the simplest baselines and the most complex GNNs. So we needed new datasets, and I linked this Open Graph Benchmark, which is a nice initiative that introduced much larger and more diverse graphs into the graph ML field, and that's a really good thing; we should be very grateful to the people who invest their time into developing better benchmarks, even though that's not as exciting as proposing some novel idea.
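Tying back to the dynamic-graph section above, here is a minimal, Time2Vec-flavoured sketch of the first key idea: turning a scalar timestamp into a vector with one linear component and several learned sinusoidal components. It is an illustration of the idea, not the exact formulation used in TGN or the Time2Vec paper, and the dimension and example timestamps are arbitrary.

```python
import torch
import torch.nn as nn

class TimeEncoding(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(dim))   # learned frequencies
        self.biases = nn.Parameter(torch.zeros(dim))     # learned phases

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) scalar timestamps -> (batch, dim) time embeddings
        projected = t.unsqueeze(-1) * self.weights + self.biases
        linear_part = projected[:, :1]                    # keep one linear term
        periodic_part = torch.sin(projected[:, 1:])       # the rest are periodic
        return torch.cat([linear_part, periodic_part], dim=-1)

encoder = TimeEncoding(dim=8)
timestamps = torch.tensor([0.0, 60.0, 3600.0])            # e.g. seconds since some event
time_vectors = encoder(timestamps)                        # (3, 8)
```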
Research is usually super incremental; rarely do you see something that's completely new. Take the graph embedding methods: Word2Vec already existed, and they just adapted it to graphs. Similarly, most of these methods are just a variation on research that already existed, so keep that in mind. I linked some cool resources: Michael Bronstein has a really awesome blog, and I highly recommend you go and read it, but you'll probably want to leave it for a bit later, because he's a professor and his posts are a bit more technical, so you want to start with the high-level blogs, read through the resources I've listed, and only then start reading Michael's blog. Finally, once you get your foot inside graph ML, start following people on Twitter and start following Sergey's newsletter; I highly recommend it as a good way to keep in sync with the newest things happening in the field. Similarly to how Sergey has a newsletter for graph ML, Sebastian Ruder has a really nice newsletter in the world of natural language processing. This format has been shown to work nicely for transferring knowledge in the community, so you should be following these small, specialized newsletters; highly recommended. I'll skip through this part: I made a couple of videos on the graph attention network (GAT) project I did, and it gained a lot of traction, around 750 stars since I published it a couple of weeks ago. You can go through it, you'll probably find it useful; it has both transductive and inductive examples inside, with Cora, which is transductive, and PPI, the protein-protein interaction dataset, which is inductive, and I made a Jupyter notebook as well, so do check it out. That's the project, and that's pretty much everything I wanted to tell you. Maybe a short note on some other detours you could be making. I already mentioned geometric deep learning; it's in a way a separate research topic, so be aware of that and don't make the mistake of diving into those papers, because they are pretty much independent of, orthogonal to, the graph ML papers. Then there is the equivariant deep learning subfield, which deals with making your algorithms much more statistically efficient by exploiting the symmetries of your problem: the same way CNNs are translationally equivariant, they are trying to make models which are also rotationally equivariant, or which exploit other symmetries. Again, it's a separate research subfield; be aware of it, but don't spend too much time going there if you just want to focus on graph ML, even though there is always some cross-pollination of ideas, so you have to draw the line between exploring more and exploiting more. Finally, knowledge graphs: they are a mix between NLP and graph ML, so again, totally new terminology, a totally new set of people, history and papers, so just be cognizant of that fact. That's it, you know the drill: hit that subscribe button, hit the bell icon, and until next time, keep learning deep!
[{"start": 0.0, "end": 4.16, "text": " What's up folks? So in this video I'm going to tell you more about how to get"}, {"start": 4.16, "end": 8.8, "text": " started with GraphML. So I'm wrapping up my exploration of Graph Machine"}, {"start": 8.8, "end": 12.44, "text": " Learning and I wrote this blog on how to get started. So I thought of just creating"}, {"start": 12.44, "end": 17.44, "text": " a video and maybe telling you more about my opinions and some of the stuff that"}, {"start": 17.44, "end": 21.88, "text": " I've already written in the blog and giving you some complimentary ideas. So"}, {"start": 21.88, "end": 26.2, "text": " yeah, basically stay tuned and hopefully you'll learn something new. Okay so the"}, {"start": 26.2, "end": 32.24, "text": " first thing I want to talk about is this. So research can be really tough and just"}, {"start": 32.24, "end": 37.08, "text": " be aware that everybody, every single like top-notch researcher you know"}, {"start": 37.08, "end": 41.56, "text": " of or engineer or successful business person, whatever, has hard days and it's"}, {"start": 41.56, "end": 45.16, "text": " not always easy. So I'm talking from my own experience. So it's not always"}, {"start": 45.16, "end": 51.14, "text": " mentally easy to stick with the plan and execute and do the research, read the"}, {"start": 51.14, "end": 56.120000000000005, "text": " papers, code up that project. So just be aware of it that you're not alone in"}, {"start": 56.12, "end": 59.919999999999995, "text": " this and you should never give up. That's my only advice I have to give you."}, {"start": 59.919999999999995, "end": 64.8, "text": " Never freaking give up. Okay? Secondly, don't get demotivated by the people who"}, {"start": 64.8, "end": 69.84, "text": " are, who seem, who appear to be super successful in something. So the thing is"}, {"start": 69.84, "end": 76.24, "text": " most of them are really specialized. So I like to think from about this from like"}, {"start": 76.24, "end": 80.03999999999999, "text": " this linear algebra perspective. So maybe nerdy but there is a method to this"}, {"start": 80.03999999999999, "end": 83.08, "text": " madness. Basically I want to say the following. So among all of those"}, {"start": 83.08, "end": 86.08, "text": " different dimensions, all of the things you could be doing researching, learning,"}, {"start": 86.08, "end": 91.52, "text": " about, let's take two axes. So one axis would be maybe somebody who is just"}, {"start": 91.52, "end": 95.44, "text": " reading research papers. So some fictive person that's only reading research. On"}, {"start": 95.44, "end": 100.44, "text": " the other hand you have people who are just coding something, who are working on"}, {"start": 100.44, "end": 104.24, "text": " some deep learning libraries, like maybe in the context of GraphML that would be"}, {"start": 104.24, "end": 109.16, "text": " PyTorch Geometric. And most people tend to be somewhere in between. But there are"}, {"start": 109.16, "end": 112.48, "text": " a lot of people who only read research paper and they never implement"}, {"start": 112.48, "end": 117.32000000000001, "text": " something from scratch or never work on anything that has to do with engineering"}, {"start": 117.32000000000001, "end": 122.12, "text": " or coding. 
And I want to tell you that they don't know many details even though"}, {"start": 122.12, "end": 126.52000000000001, "text": " they appear to understand everything because they curate their own content."}, {"start": 126.52000000000001, "end": 131.08, "text": " They don't know many details of how stuff works. And I'm talking from my"}, {"start": 131.08, "end": 134.84, "text": " personal experience. So until I start coding something up from scratch I"}, {"start": 134.84, "end": 139.66, "text": " didn't, I don't understand thoroughly how much I actually don't know, if that"}, {"start": 139.66, "end": 144.28, "text": " makes sense. So you want to be somewhere in between. If you want to be a really"}, {"start": 144.28, "end": 147.96, "text": " successful researcher, in my opinion, and you want to boost that H index if that's"}, {"start": 147.96, "end": 151.92, "text": " of something of interest to you, you should probably be reading, focusing on"}, {"start": 151.92, "end": 156.44, "text": " reading research papers a lot. But you should also make these, let's call them"}, {"start": 156.44, "end": 161.28, "text": " detours into implementing something and actually playing with the code. On the"}, {"start": 161.28, "end": 165.68, "text": " other hand if you want to be a great engineer and you probably want to focus"}, {"start": 165.68, "end": 170.28, "text": " on code but you still want to research and read, allocate some time to read"}, {"start": 170.28, "end": 175.20000000000002, "text": " research papers because otherwise you'll just lose the pace and you'll,"}, {"start": 175.20000000000002, "end": 181.36, "text": " the field is progressing very rapidly and you got to allocate some time to do"}, {"start": 181.36, "end": 187.84, "text": " the due diligence. Having said that, let me walk you through this blog. So first"}, {"start": 187.84, "end": 193.32, "text": " GraphML and hopefully you know what GraphML is. You heard about graphs so"}, {"start": 193.32, "end": 197.23999999999998, "text": " those are those data structures and GraphML is just a bunch of machine"}, {"start": 197.23999999999998, "end": 203.04, "text": " learning models so we learn on top of that graph data. And the first thing I"}, {"start": 203.04, "end": 206.35999999999999, "text": " want to advise you here is don't, if you're, especially if you're a"}, {"start": 206.35999999999999, "end": 211.28, "text": " perfectionist, you'll tend to be like this. I want to make a, I want to go on a"}, {"start": 211.28, "end": 215.64, "text": " tangent here and maybe invest three months of my time and get some solid"}, {"start": 215.64, "end": 222.12, "text": " background in graph theory, a branch of mathematics. Or maybe get a really good"}, {"start": 222.12, "end": 226.44, "text": " foundations in the network analysis field. And I want to tell you, you don't"}, {"start": 226.44, "end": 231.12, "text": " need to do that. You can jump straight into the field and make a first step and"}, {"start": 231.12, "end": 236.88, "text": " without doing all of those detours. So it's totally fine to start reading this"}, {"start": 236.88, "end": 243.16, "text": " blog and start with high-level resources and that would be your, as a first step"}, {"start": 243.16, "end": 249.4, "text": " to on your journey through Graph Machine Learning. 
Sometimes you may want to take"}, {"start": 249.4, "end": 255.32, "text": " a step back or maybe better said, make a side step and invest maybe three"}, {"start": 255.32, "end": 259.64, "text": " months of your time into building up more solid mathematical"}, {"start": 259.64, "end": 266.28000000000003, "text": " foundations. And me personally, last year in 2020, I did maybe, I invested maybe"}, {"start": 266.28000000000003, "end": 270.2, "text": " around three, four months into reading this book, Mathematics for Machine"}, {"start": 270.2, "end": 274.48, "text": " Learning. And it's a bit complex if you're intermediate or even"}, {"start": 274.48, "end": 280.64000000000004, "text": " beginner. But it meant a lot for me and it helped me better understand the"}, {"start": 280.64000000000004, "end": 286.12, "text": " stuff that I later learned. So the strategy of doing side steps and"}, {"start": 286.12, "end": 293.0, "text": " investing into your some foundations is also a good idea sometimes. So that's"}, {"start": 293.0, "end": 297.76, "text": " GraphML. You have machine learning, you have machine learning models that learn"}, {"start": 297.76, "end": 303.24, "text": " on graph data. Instead of images or text like in computer vision or NLP, you're"}, {"start": 303.24, "end": 309.36, "text": " learning from graphs. And now before we start, before I show you the resources,"}, {"start": 309.36, "end": 313.36, "text": " let's just, let me just walk you through some cool applications. And one more"}, {"start": 313.36, "end": 318.88, "text": " thing to note, you'll hear a lot about graph neural networks here and for the"}, {"start": 318.88, "end": 323.40000000000003, "text": " sake of, for now just think of them as neural networks that work with graphs"}, {"start": 323.40000000000003, "end": 328.76, "text": " and somehow can process those graphs and do the classification or"}, {"start": 328.76, "end": 333.92, "text": " regression on nodes or edges or even the complete graphs. So just have them as a"}, {"start": 333.92, "end": 340.71999999999997, "text": " black box on top of your mind, okay? So first things first, GraphML is"}, {"start": 340.71999999999997, "end": 346.36, "text": " such an exciting topic because the applications you'll see here are so much"}, {"start": 346.36, "end": 351.0, "text": " different from those you'll see in the computer vision or the NLP worlds. So"}, {"start": 351.0, "end": 355.03999999999996, "text": " there in computer vision you'll see, you'll be seeing algorithms that"}, {"start": 355.04, "end": 361.24, "text": " manipulate images or videos and in NLP you'll be seeing algorithms that"}, {"start": 361.24, "end": 366.40000000000003, "text": " manipulate text and those are all pretty homogeneous. So even though you can do, I"}, {"start": 366.40000000000003, "end": 372.12, "text": " don't know, facial landmark detection, pose estimation, bunch of things,"}, {"start": 372.12, "end": 377.84000000000003, "text": " segmentation, regressing 3D models, bunch of things in computer vision, they are"}, {"start": 377.84000000000003, "end": 383.52000000000004, "text": " all still kind of related. And then when you start doing GraphML you understand"}, {"start": 383.52, "end": 389.59999999999997, "text": " like all the plethora, like the vast diversity of different applications"}, {"start": 389.59999999999997, "end": 395.12, "text": " you can do with graphs. 
So two cool applications I start, I want to start"}, {"start": 395.12, "end": 401.24, "text": " with our recommendation systems and if you're not aware already, like in my"}, {"start": 401.24, "end": 406.35999999999996, "text": " opinion, recommendation systems are the single most important and influential"}, {"start": 406.35999999999996, "end": 412.08, "text": " piece of AI currently in 2021. So you interact with with recommendation systems"}, {"start": 412.08, "end": 418.44, "text": " all the time, YouTube recommendation, Facebook, Twitter, Pinterest, whatever,"}, {"start": 418.44, "end": 424.24, "text": " just name it, and you have recommendation systems influencing your decisions. So"}, {"start": 424.24, "end": 428.84, "text": " before I started making YouTube videos I wasn't fully aware of it and now I'm"}, {"start": 428.84, "end": 432.96, "text": " getting a thorough understanding of how how influential recommender systems are."}, {"start": 432.96, "end": 439.08, "text": " So basically as an example I've seen many videos which are really top-notch,"}, {"start": 439.08, "end": 445.32, "text": " high quality, that have 5k views and not because the presenter was anything less"}, {"start": 445.32, "end": 451.4, "text": " eloquent or the content was worse, no. It was just because he or she came on later"}, {"start": 451.4, "end": 456.96, "text": " and so he or she is not on the on the front page, like among the first"}, {"start": 456.96, "end": 462.96, "text": " queries. And then you'll see some videos which were made maybe 2015, 16, 17 and"}, {"start": 462.96, "end": 467.15999999999997, "text": " they have amassed a lot of views and so that's what YouTube will recommend you."}, {"start": 467.16, "end": 472.76000000000005, "text": " So what I've learned is I now tend to go past the first five results before I"}, {"start": 472.76000000000005, "end": 477.28000000000003, "text": " would just open the most viewed videos but now I'm completely aware how"}, {"start": 477.28000000000003, "end": 482.8, "text": " recommendation systems can actually bring you inferior recommendations,"}, {"start": 482.8, "end": 487.56, "text": " unfortunately. And yeah, engineers and researchers are working on this and one"}, {"start": 487.56, "end": 492.52000000000004, "text": " way is to just kind of use graph neural networks. So because the problem as you"}, {"start": 492.52, "end": 497.59999999999997, "text": " can see here with with uBreads for example, you can really nicely model all"}, {"start": 497.59999999999997, "end": 503.28, "text": " of these problems as graphs. So here you have a graph between users and food and"}, {"start": 503.28, "end": 509.32, "text": " uBreads has a separate graph between users and restaurants. They still haven't"}, {"start": 509.32, "end": 512.4399999999999, "text": " merged the two together even though I think they're currently working on it or"}, {"start": 512.4399999999999, "end": 516.4399999999999, "text": " maybe they even maybe they recently even deployed it. I'm not aware of it but yeah"}, {"start": 516.4399999999999, "end": 521.88, "text": " this is the last news I'm aware of. And so just processing this graph data"}, {"start": 521.88, "end": 526.56, "text": " you can make better recommendations to your users and"}, {"start": 526.56, "end": 531.72, "text": " obviously I won't get into all of the details. 
I did some videos on Pinsage,"}, {"start": 531.72, "end": 536.84, "text": " that's the recommendation engine at Pinterest and you can check out that one"}, {"start": 536.84, "end": 544.0, "text": " somewhere here in the card. Okay, so you can kind of see how the graph neural"}, {"start": 544.0, "end": 549.04, "text": " network Pinsage improve upon these baselines. Here you can see people"}, {"start": 549.04, "end": 552.76, "text": " carrying some logs and here you can see the visual. If you just take the visual"}, {"start": 552.76, "end": 557.8399999999999, "text": " baseline it will bring you images that are like really similar when it comes to"}, {"start": 557.8399999999999, "end": 562.0, "text": " appearance because they are also grayscale but you can see the semantics"}, {"start": 562.0, "end": 565.8, "text": " like here the soldiers and here you can see actually the trees and the logging"}, {"start": 565.8, "end": 572.28, "text": " so Pinsage got both the semantics as well as the appearance successfully by"}, {"start": 572.28, "end": 576.4399999999999, "text": " leveraging both the graph structure as well as the visual textual information"}, {"start": 576.44, "end": 582.44, "text": " that's available in those graphs on those platforms. So those are some nice"}, {"start": 582.44, "end": 588.5200000000001, "text": " applications and aside from from recommendation systems we have these"}, {"start": 588.5200000000001, "end": 595.8000000000001, "text": " basically fake news detection GNNs and that's so important. The societal"}, {"start": 595.8000000000001, "end": 601.32, "text": " benefits of deploying really successful fake news detection algorithms are so"}, {"start": 601.32, "end": 605.24, "text": " important and again you can see here maybe on Twitter you have a"}, {"start": 605.24, "end": 610.12, "text": " single tweet and somebody retweets a tweet that would be a second node and you"}, {"start": 610.12, "end": 614.96, "text": " have the edge and so you can build up the graph and basically these networks"}, {"start": 614.96, "end": 620.2, "text": " can figure out the propagation patterns of those retweets and just by looking at"}, {"start": 620.2, "end": 625.5600000000001, "text": " the propagation patterns it was shown in previous research that we can figure out"}, {"start": 625.5600000000001, "end": 630.72, "text": " whether the news is actually fake or real and again that's a really naturally"}, {"start": 630.72, "end": 636.8000000000001, "text": " model as a graph so GNNs to the rescue. 
Aside from this app there is one"}, {"start": 636.8000000000001, "end": 641.76, "text": " more cool app which is basically figuring out the probability that you"}, {"start": 641.76, "end": 648.0, "text": " will produce certain event depending on your neighborhood and whether they did"}, {"start": 648.0, "end": 654.72, "text": " the same thing so for example to be less abstract maybe the goal is to figure"}, {"start": 654.72, "end": 659.24, "text": " out whether you'll retweet a tweet if some of your neighbors retweeted the"}, {"start": 659.24, "end": 663.48, "text": " same tweet that would be one concrete example and again you have the graph"}, {"start": 663.48, "end": 669.44, "text": " structure and you can leverage to regress these probabilities so various"}, {"start": 669.44, "end": 674.72, "text": " cool applications and what's even more interesting is that graph ML so all of"}, {"start": 674.72, "end": 679.8, "text": " these are still kind of familiar to you but like graph ML goes into these deep"}, {"start": 679.8, "end": 686.0, "text": " fundamental science problems and can naturally solve them so one of them"}, {"start": 686.0, "end": 691.92, "text": " would be this example I gave with the neutrino detection where basically the"}, {"start": 691.92, "end": 696.4, "text": " setup is the following you have a huge huge cube of ice and you have these"}, {"start": 696.4, "end": 700.96, "text": " light sensors inter dispersed in the like distributed throughout this ice and"}, {"start": 700.96, "end": 706.24, "text": " what happens is a neutrino comes comes out from the space and hits the ice and"}, {"start": 706.24, "end": 710.88, "text": " interacts with the ice molecules and in that interaction something that's called"}, {"start": 710.88, "end": 716.08, "text": " charank of light emits and basically those sensors those light sensors can catch"}, {"start": 716.08, "end": 721.6, "text": " those that light and you can basically see certain nodes of that grid like"}, {"start": 721.6, "end": 726.92, "text": " structure of sensors get activated and then you can just deploy the graph neural"}, {"start": 726.92, "end": 732.36, "text": " network on top to figure out whether the neutrino came from some certain events"}, {"start": 732.36, "end": 737.04, "text": " we care about like supernova explosions or whatever so I don't know about you"}, {"start": 737.04, "end": 740.9599999999999, "text": " but like this completely blew out my mind because usually you see neural"}, {"start": 740.9599999999999, "end": 745.04, "text": " networks applied to images videos text and that's it and here you can see"}, {"start": 745.04, "end": 751.68, "text": " fundamental science and GNN is being used to solve them to solve those"}, {"start": 751.68, "end": 756.52, "text": " problems okay yeah there are many other apps I'll skip some of them like"}, {"start": 756.52, "end": 761.1999999999999, "text": " figuring out the molecules that can help beat cancer because you can see you can"}, {"start": 761.2, "end": 767.6400000000001, "text": " model the the thing naturally as a graph again you have genes you have proteins"}, {"start": 767.6400000000001, "end": 772.72, "text": " all of those interactions and here's some molecules or drugs and you you model"}, {"start": 772.72, "end": 777.1600000000001, "text": " that as a graph and you can find interesting information like which"}, {"start": 777.1600000000001, "end": 781.84, "text": " molecules can help treat cancer better or prevent cancer then you have drug"}, {"start": 
781.84, "end": 786.6400000000001, "text": " side effects when you combine multiple drugs you want to be sure it would be"}, {"start": 786.64, "end": 791.64, "text": " beneficial if we could computationally calculate at the possible side effects"}, {"start": 791.64, "end": 796.0, "text": " because the combinatorics is so huge here and it's impractical to do it in a"}, {"start": 796.0, "end": 804.48, "text": " lab setting in vivo we need to do it in in silica or computationally so many"}, {"start": 804.48, "end": 809.88, "text": " cool applications so the last one I'll note here is the one that it was gained"}, {"start": 809.88, "end": 816.04, "text": " a lot of traction last year in 2020 basically we as a society have a huge"}, {"start": 816.04, "end": 822.64, "text": " problem with with anti antibiotic resistant bacteria so the reason that"}, {"start": 822.64, "end": 826.5999999999999, "text": " happens is because people are misusing antibiotics they're using that them for"}, {"start": 826.5999999999999, "end": 831.56, "text": " viruses they're using them to treat common colds and you shouldn't be using"}, {"start": 831.56, "end": 835.5999999999999, "text": " antibiotics in those cases you should be using antibiotic a specific antibiotic"}, {"start": 835.5999999999999, "end": 839.54, "text": " if you're sure you have a certain bacteria that this antibiotic can"}, {"start": 839.54, "end": 846.52, "text": " inhibit and because of all of those misuse there is a threat of this"}, {"start": 846.52, "end": 852.56, "text": " bacteria which is resistant to every single antibiotic and that's like the"}, {"start": 852.56, "end": 857.52, "text": " worst nightmare in the like in the science community and in the medicine in"}, {"start": 857.52, "end": 863.24, "text": " general pharmacy and so what this graph neural network did is it was trained on"}, {"start": 863.24, "end": 869.72, "text": " a set of these of these specific molecules and then and was trained to"}, {"start": 869.72, "end": 875.16, "text": " figure out which of these molecules tend to inhibit one specific bacteria namely"}, {"start": 875.16, "end": 880.2, "text": " the asharica coli in this example and then once they deployed that trained"}, {"start": 880.2, "end": 885.8, "text": " graph neural network onto a specific new subset a new data set it was able to"}, {"start": 885.8, "end": 891.64, "text": " filter out hundred candidates that gave a high response a high probability that"}, {"start": 891.64, "end": 898.16, "text": " this molecule will inhibit asharica coli and then they just dedicated that"}, {"start": 898.16, "end": 903.56, "text": " small small subset of molecules to scientists and they were able to find"}, {"start": 903.56, "end": 908.92, "text": " this molecule called helicin which basically can treat which was basically"}, {"start": 908.92, "end": 914.48, "text": " found out to be a super potent antibiotic and extremely successful in"}, {"start": 914.48, "end": 919.92, "text": " inhibiting asharica coli and previously it was just they considered this same"}, {"start": 919.92, "end": 928.16, "text": " molecule for his for its anti-diabetic properties so those are just some of the"}, {"start": 928.16, "end": 933.68, "text": " super exciting applications of graph neural networks and I hope you you found"}, {"start": 933.68, "end": 940.3199999999999, "text": " this interesting and and that it motivated you to further investigate the"}, {"start": 940.3199999999999, "end": 943.12, "text": " graph ML field"}, {"start": 943.12, 
"end": 951.04, "text": " now let me just kind of make a brief detour and compare graph ML with"}, {"start": 951.04, "end": 954.72, "text": " computer vision and natural language processing and I personally think it's"}, {"start": 954.72, "end": 960.0, "text": " really important to to try and connect the ideas from various subfields of deep"}, {"start": 960.0, "end": 964.96, "text": " learning because by doing that but we're just creating those connections I think"}, {"start": 964.96, "end": 971.52, "text": " you'll be better to generate much better research ideas and yeah it's just cool"}, {"start": 971.52, "end": 975.6, "text": " to kind of make this connection just gives you some kind of satisfaction as"}, {"start": 975.6, "end": 982.24, "text": " well so basically you can treat computer vision or images and videos in general"}, {"start": 982.24, "end": 987.24, "text": " as a just a special case of graph which is highly regular and we call these"}, {"start": 987.24, "end": 994.0799999999999, "text": " grids so you can see here a four by four pixel image can be represented as this"}, {"start": 994.0799999999999, "end": 999.16, "text": " specific graph and here we have four neighbors in general you can have up to"}, {"start": 999.16, "end": 1002.88, "text": " eight neighbors because right you have the diagonal pixels as well and you have"}, {"start": 1002.88, "end": 1008.0799999999999, "text": " the top left top right bottom left bottom right and that means we can"}, {"start": 1008.0799999999999, "end": 1012.52, "text": " actually deploy graph neural networks to solve computer vision problems and same"}, {"start": 1012.52, "end": 1017.12, "text": " goes for for NLP because sentence is just basically you can treat sentence"}, {"start": 1017.12, "end": 1023.8, "text": " either as a fully connected graph or maybe just as a as a tree and yeah but"}, {"start": 1023.8, "end": 1027.24, "text": " the thing I want to mention here and I mentioned that in the blog as well is"}, {"start": 1027.24, "end": 1031.64, "text": " don't fall into the trap into thinking that you can you can better solve these"}, {"start": 1031.64, "end": 1036.72, "text": " problems with graph ML so the thing is certain biases that CNN's have like"}, {"start": 1036.72, "end": 1046.4, "text": " having weight sharing and localization are super useful and we show that that's"}, {"start": 1046.4, "end": 1051.36, "text": " much more efficient than using graph ML in some cases though you can use graph"}, {"start": 1051.36, "end": 1056.0, "text": " neural networks and perform better so when I say that I mean the following"}, {"start": 1056.0, "end": 1060.68, "text": " transformers are a special case of graph neural networks in a sense so once you"}, {"start": 1060.68, "end": 1065.48, "text": " have a fully connected graph transformer is basically a graph neural network"}, {"start": 1065.48, "end": 1071.68, "text": " that operates on a fully connected graph and it was shown in 2020 in October I"}, {"start": 1071.68, "end": 1077.96, "text": " think the vision transformer that we can use transformers on images to improve"}, {"start": 1077.96, "end": 1084.4, "text": " upon CNN's if we have enough data so if you have a huge amount of data it is"}, {"start": 1084.4, "end": 1089.1200000000001, "text": " possible maybe to use transformers or graph neural networks to achieve even"}, {"start": 1089.1200000000001, "end": 1093.4, "text": " better results then by using CNN's but there's a saying if you only have hammer"}, {"start": 1093.4, "end": 
1098.0, "text": " everything looks like a nail and it's it's really bad if you if you just have"}, {"start": 1098.0, "end": 1102.6000000000001, "text": " the graph ML context and you don't have if you don't know other fields you're"}, {"start": 1102.6000000000001, "end": 1107.0400000000002, "text": " gonna try and and and use graph neural network for everything same goes for"}, {"start": 1107.0400000000002, "end": 1111.1200000000001, "text": " deep learning and that's why it's useful to be aware of different fields"}, {"start": 1111.12, "end": 1115.1399999999999, "text": " different methods different techniques because for every specific problem there"}, {"start": 1115.1399999999999, "end": 1120.4399999999998, "text": " is a solution that's highly optimized to be used for that specific problem now"}, {"start": 1120.4399999999998, "end": 1126.0, "text": " there is a trade-off between being as general as possible like maybe GPT and"}, {"start": 1126.0, "end": 1129.6, "text": " on the other hand having a super specialized algorithms so they are better"}, {"start": 1129.6, "end": 1133.32, "text": " they are more specialized they are more optimized but then you have the overhead"}, {"start": 1133.32, "end": 1139.4399999999998, "text": " of some like developers having to invest a lot of their time to now specialize"}, {"start": 1139.44, "end": 1144.3200000000002, "text": " that's to tune that algorithm for every single problem encounter on the other"}, {"start": 1144.3200000000002, "end": 1148.6000000000001, "text": " hand you have this general architectures you just apply them and they work on"}, {"start": 1148.6000000000001, "end": 1152.76, "text": " everything you don't have to invest any more time but they are suboptimal in"}, {"start": 1152.76, "end": 1157.3200000000002, "text": " every single of those problems so that's the trade-off having said all of that"}, {"start": 1157.3200000000002, "end": 1163.0800000000002, "text": " let me now walk you through the resources in the way and just kind of"}, {"start": 1163.0800000000002, "end": 1167.24, "text": " advise you how you can structure your learning of graph machine learning okay"}, {"start": 1167.24, "end": 1172.6, "text": " so if I were you I'd first start with graph embedding methods so these are not"}, {"start": 1172.6, "end": 1177.08, "text": " graph neural networks these are a specific class of methods which are"}, {"start": 1177.08, "end": 1182.42, "text": " similar to Vertubec if you're familiar with natural language processing and you"}, {"start": 1182.42, "end": 1187.28, "text": " know what Vertubec is these are going to be much more familiar and easy to"}, {"start": 1187.28, "end": 1192.48, "text": " grasp so they are just repurposed Vertubec onto graphs so basically for"}, {"start": 1192.48, "end": 1197.32, "text": " example you have these these algorithms DeepWalk and No2vec and Planetoid and what"}, {"start": 1197.32, "end": 1203.76, "text": " Deep2Walk does is the following it just samples random walks from a graph and"}, {"start": 1203.76, "end": 1208.72, "text": " you can treat those as a sentence so if you treat the graph as a corpus of text"}, {"start": 1208.72, "end": 1214.44, "text": " and you treat that specific random walk as as a sample sentence it's basically"}, {"start": 1214.44, "end": 1219.8, "text": " the same thing as Vertubec you just want to make sure that the neighboring words"}, {"start": 1219.8, "end": 1224.76, "text": " have similar embeddings so if they tend to be close on those in those sentences"}, 
{"start": 1224.76, "end": 1228.96, "text": " you want to make the embeddings really close and if they're if they never"}, {"start": 1228.96, "end": 1232.8, "text": " appear in the same sentence if some words never appear in the same sentence"}, {"start": 1232.8, "end": 1238.2, "text": " you want to make their embeddings really different so that's the main idea of all"}, {"start": 1238.2, "end": 1244.82, "text": " of these methods so take some time to kind of understand these because all of"}, {"start": 1244.82, "end": 1249.6799999999998, "text": " these graph embedding methods can be used for exciting graph problems like"}, {"start": 1249.6799999999998, "end": 1254.4399999999998, "text": " doing regression on top of those embeddings or classification both on"}, {"start": 1254.4399999999998, "end": 1261.0, "text": " node wise level edgewise and graph wise level so I should you should probably"}, {"start": 1261.0, "end": 1264.9199999999998, "text": " start with these and that's gonna be it's gonna be a nice first step okay in"}, {"start": 1264.9199999999998, "end": 1269.28, "text": " a way it's orthogonal to the graph neural networks that I'm going to tell"}, {"start": 1269.28, "end": 1275.6399999999999, "text": " you about more in a second so graph neural networks when it comes to graph"}, {"start": 1275.6399999999999, "end": 1280.6399999999999, "text": " neural networks there are two main branches and one is the spectral methods"}, {"start": 1280.6399999999999, "end": 1286.48, "text": " and the second one is spatial or message passing methods and my advice here is to"}, {"start": 1286.48, "end": 1292.8799999999999, "text": " take some time and investigate spectral methods because the terminology and"}, {"start": 1292.8799999999999, "end": 1298.48, "text": " references will get get propagated into the spatial methods and you'll be seeing"}, {"start": 1298.48, "end": 1301.6, "text": " some terminology from the spectral methods so it's good to kind of"}, {"start": 1301.6, "end": 1307.0, "text": " understand at least some basics about these so having said that if you're not"}, {"start": 1307.0, "end": 1312.28, "text": " familiar if you don't have a strong base in like fundamentals in linear algebra"}, {"start": 1312.28, "end": 1315.84, "text": " if you don't understand what eigen space is if you don't understand what eigen"}, {"start": 1315.84, "end": 1320.04, "text": " vector and eigenvalue is if you don't have some understanding of the Fourier"}, {"start": 1320.04, "end": 1325.9, "text": " analysis or signal processing this is going to be tough like you should"}, {"start": 1325.9, "end": 1330.0, "text": " probably just read some high-level blogs and try to kind of get familiar with"}, {"start": 1330.0, "end": 1335.4, "text": " the words and terminology and don't kind of pull yourself down just because you"}, {"start": 1335.4, "end": 1340.5400000000002, "text": " don't understand everything because yeah as I told you you'll need some time to"}, {"start": 1340.5400000000002, "end": 1345.64, "text": " understand it especially if you don't have the fundamentals but having said"}, {"start": 1345.64, "end": 1351.8000000000002, "text": " that just go through it get some feeling because that's gonna be super useful for"}, {"start": 1351.8, "end": 1356.68, "text": " you believe me so the reason I tell you not to worry about them too much is"}, {"start": 1356.68, "end": 1361.6399999999999, "text": " because they are computationally very expensive because how they work you have"}, {"start": 
1361.6399999999999, "end": 1365.32, "text": " something called graph ML so they're just a huge matrix and what you want to"}, {"start": 1365.32, "end": 1370.56, "text": " do is you want to calculate those eigenvectors and that operation is very"}, {"start": 1370.56, "end": 1379.6399999999999, "text": " expensive so and not only that it's that also these these methods are key and"}, {"start": 1379.64, "end": 1383.0800000000002, "text": " generalized to different graph structures so they they have to be"}, {"start": 1383.0800000000002, "end": 1386.92, "text": " applied in a transductive manner and can be applied to new graphs which are"}, {"start": 1386.92, "end": 1392.18, "text": " smaller bigger because they inherently are transductive so that's the second"}, {"start": 1392.18, "end": 1398.5600000000002, "text": " pin point of these algorithms but still I told you take some time and learn about"}, {"start": 1398.5600000000002, "end": 1408.48, "text": " them okay in my opinion again in this video is full of opinions you want to"}, {"start": 1408.48, "end": 1412.48, "text": " start with as always with a top-to-bottom approach you want to start"}, {"start": 1412.48, "end": 1416.72, "text": " with some high-level videos high-level blocks that will get you exposed to the"}, {"start": 1416.72, "end": 1421.96, "text": " terminology and new ideas new concepts without you having to understand every"}, {"start": 1421.96, "end": 1425.72, "text": " single detail so you're just trying to build up that skeleton of knowledge and"}, {"start": 1425.72, "end": 1430.68, "text": " you should probably start doing that by by watching videos and reading through"}, {"start": 1430.68, "end": 1435.16, "text": " some medium and towards data science blocks so those are usually really"}, {"start": 1435.16, "end": 1441.28, "text": " nicely curated high quality blogs and videos and they'll get you exposed to"}, {"start": 1441.28, "end": 1448.16, "text": " the essential concepts having said that you'll find some of the links here and"}, {"start": 1448.16, "end": 1453.28, "text": " this video from the from the New York University they are they're kind of"}, {"start": 1453.28, "end": 1461.0800000000002, "text": " publishing all of the lectures they hold there so that's super super super"}, {"start": 1461.08, "end": 1466.48, "text": " useful for the community and yeah you should start with those I mentioned the"}, {"start": 1466.48, "end": 1471.1599999999999, "text": " Fourier analysis and stuff so if you don't know about if you haven't watched"}, {"start": 1471.1599999999999, "end": 1475.3, "text": " this one even if you know everything about Fourier I strongly suggest you go"}, {"start": 1475.3, "end": 1479.84, "text": " ahead and watch Grant Sanderson's video on Fourier so when I say Grant"}, {"start": 1479.84, "end": 1483.8, "text": " Sanderson's he's better known as the three blue one brown channel and he's"}, {"start": 1483.8, "end": 1488.84, "text": " got some a couple of videos on Fourier which are so nicely visualized and if"}, {"start": 1488.84, "end": 1493.28, "text": " you don't have the background from Fourier analysis this is probably going"}, {"start": 1493.28, "end": 1496.9199999999998, "text": " to be enough like a high-level understanding of what Fourier is"}, {"start": 1496.9199999999998, "end": 1502.6399999999999, "text": " basically decomposing your signal into a sum of differently scaled frequencies"}, {"start": 1502.6399999999999, "end": 1508.1999999999998, "text": " and it's gonna be a really good 
step for you to understand spectrum methods if"}, {"start": 1508.1999999999998, "end": 1514.08, "text": " you don't have any background whatsoever so do watch that one I've linked up to"}, {"start": 1514.08, "end": 1518.12, "text": " important papers here this one is the most important one spectral networks and"}, {"start": 1518.12, "end": 1523.4399999999998, "text": " locally connected networks and graphs and Chabnets just kind of localized that"}, {"start": 1523.4399999999998, "end": 1528.32, "text": " method local when I say localized I mean you just want to update a single"}, {"start": 1528.32, "end": 1531.84, "text": " node you'll just use the k-hop not neighborhood instead of using every"}, {"start": 1531.84, "end": 1535.8, "text": " single node in the graph which can be huge like in some examples when you're"}, {"start": 1535.8, "end": 1540.08, "text": " having maybe social level graphs you want to make those algorithms localized"}, {"start": 1540.08, "end": 1546.32, "text": " and the second thing aside from localization Chabnets made it also more"}, {"start": 1546.32, "end": 1550.8, "text": " efficient to calculate these representations by just leveraging the"}, {"start": 1550.8, "end": 1556.96, "text": " laplacian graph matrices and not and not calculating the eigenvectors I"}, {"start": 1556.96, "end": 1563.76, "text": " previously mentioned again some higher-level blogs just go through those"}, {"start": 1563.76, "end": 1569.36, "text": " research resources you should probably literally sequentially be learning about"}, {"start": 1569.36, "end": 1575.0, "text": " these resources I've linked you should probably read the graph embedding"}, {"start": 1575.0, "end": 1579.76, "text": " methods after you do the high-level blogs and videos but whatever feel free"}, {"start": 1579.76, "end": 1583.12, "text": " to experiment there but you should roughly be following the flow of the"}, {"start": 1583.12, "end": 1589.34, "text": " blog and as the blog progresses we go into more research and more niche topics"}, {"start": 1589.34, "end": 1597.88, "text": " like dynamics dynamic graphs and like graph expressivity etc okay having said"}, {"start": 1597.88, "end": 1602.92, "text": " that let me jump to spatial methods and these are the methods you'll be seeing"}, {"start": 1602.92, "end": 1608.4, "text": " you'll mostly be seeing so those are networks such as GCN graph convolutional"}, {"start": 1608.4, "end": 1612.8400000000001, "text": " network networks such as get graph attention network then you have graph"}, {"start": 1612.8400000000001, "end": 1616.8400000000001, "text": " sage pin sage many different spatial methods and on the high level they all"}, {"start": 1616.8400000000001, "end": 1621.44, "text": " work really similarly so basically you do the following thing so you have a"}, {"start": 1621.44, "end": 1625.8000000000002, "text": " node and you have the neighborhood and what you do is you transform the"}, {"start": 1625.8000000000002, "end": 1629.3200000000002, "text": " neighborhood feature vector somehow maybe doing some linear projection or"}, {"start": 1629.32, "end": 1634.8, "text": " MLP whatever you kind of is you somehow aggregate them maybe doing mean of"}, {"start": 1634.8, "end": 1639.6, "text": " those feature vectors or some more max or whatever and then you combine the"}, {"start": 1639.6, "end": 1644.4399999999998, "text": " central node its representation with that aggregated transform representation"}, {"start": 1644.4399999999998, "end": 1648.56, "text": " again 
somehow and that's your new representation and that's the general"}, {"start": 1648.56, "end": 1653.9199999999998, "text": " framework of message passing networks and all of these can be considered as"}, {"start": 1653.9199999999998, "end": 1658.08, "text": " special case of that framework I just explained you I covered many of these"}, {"start": 1658.08, "end": 1662.6399999999999, "text": " spatial methods on my youtube channel so check them out graph attention networks"}, {"start": 1662.6399999999999, "end": 1667.6, "text": " graph convolutional networks pin sage and graph sage so those are the two that"}, {"start": 1667.6, "end": 1671.84, "text": " were deployed in the recommendation system I previously mentioned at the"}, {"start": 1671.84, "end": 1675.28, "text": " beginning of the video message passing a neural network is the one that"}, {"start": 1675.28, "end": 1679.3999999999999, "text": " introduced this general framework general notion and understanding that"}, {"start": 1679.3999999999999, "end": 1683.8, "text": " all of these methods are just special case of something and they're doing"}, {"start": 1683.8, "end": 1688.36, "text": " something fairly similar to each other two more papers you should probably"}, {"start": 1688.36, "end": 1694.8799999999999, "text": " check out is the gated graph neural network which kinda popularized the"}, {"start": 1694.8799999999999, "end": 1699.6, "text": " usage of graph neural networks again because they were initially the similar"}, {"start": 1699.6, "end": 1706.8, "text": " methods were proposed already I think in 2004 2009 period time frame and this"}, {"start": 1706.8, "end": 1712.08, "text": " paper kind of popularized GNN's again and it's it's a nice paper it's it's"}, {"start": 1712.08, "end": 1719.12, "text": " really really general because it also includes the edge features it uses the"}, {"start": 1719.12, "end": 1725.12, "text": " group the LSTM for the aggregation I just mentioned and so it's a it's a it's"}, {"start": 1725.12, "end": 1728.08, "text": " a nice read aside from that paper learning convolutional neural networks"}, {"start": 1728.08, "end": 1733.6399999999999, "text": " for graphs is a cool paper because I didn't mention this but like the"}, {"start": 1733.6399999999999, "end": 1737.84, "text": " spectral and spatial methods what they try to do is generalize the concept of"}, {"start": 1737.84, "end": 1741.9199999999998, "text": " convolution that works so nicely in the computer vision world so with"}, {"start": 1741.92, "end": 1748.28, "text": " CNN's on two graphs and that turned out to be quite a difficult challenging"}, {"start": 1748.28, "end": 1753.0800000000002, "text": " problem in a way and so there were different approaches one was a spectral"}, {"start": 1753.0800000000002, "end": 1756.5600000000002, "text": " one one was a spatial one which is basically approximation of the spectral"}, {"start": 1756.5600000000002, "end": 1760.8400000000001, "text": " methods so it's not as mathematically rigorous as the spectral methods but a"}, {"start": 1760.8400000000001, "end": 1766.24, "text": " as I already mentioned in practice they work really nice and this paper here"}, {"start": 1766.24, "end": 1773.84, "text": " just introduced another idea and that was to basically copy the concept from"}, {"start": 1773.84, "end": 1778.64, "text": " the CNN's more literally than mathematically if that makes sense so"}, {"start": 1778.64, "end": 1783.16, "text": " basically what they try to do is introduce the concept of node 
ordering"}, {"start": 1783.16, "end": 1788.72, "text": " into into their method because the main problem with graphs and the reasons you"}, {"start": 1788.72, "end": 1793.76, "text": " can't you just can't use a fixed kernel like using the CNN is first of all you"}, {"start": 1793.76, "end": 1797.48, "text": " have the varying number of nodes sometimes a node will have only a single"}, {"start": 1797.48, "end": 1801.92, "text": " neighbor sometimes it will have thousand neighbors and you obviously can't use a"}, {"start": 1801.92, "end": 1807.08, "text": " nine by nine kernel or three by three kernel as it's commonly used in CNN's so"}, {"start": 1807.08, "end": 1811.52, "text": " that's the first problem the second problem you don't have any concept any"}, {"start": 1811.52, "end": 1816.72, "text": " any notion of ordering so like in CNN's you have top left pixel you have"}, {"start": 1816.72, "end": 1821.8, "text": " top right pixel in GNN's you don't have that notion so what they try to do is"}, {"start": 1821.8, "end": 1825.1599999999999, "text": " they try to linearize a neighborhood by doing something called graph labeling"}, {"start": 1825.1599999999999, "end": 1832.6399999999999, "text": " and then apply this the kernel that's always fixed onto those order linearized"}, {"start": 1832.6399999999999, "end": 1838.32, "text": " neighborhoods and so it's a nice read the idea itself didn't get that much"}, {"start": 1838.32, "end": 1844.72, "text": " traction but it's it's an awesome read and so do check it out okay I'm wrapping"}, {"start": 1844.72, "end": 1848.6, "text": " now with message passing networks with this one simplifying graph convolutional"}, {"start": 1848.6, "end": 1854.56, "text": " networks paper what they figured out is there are some graph classes like"}, {"start": 1854.56, "end": 1860.84, "text": " homophilic graphs where you just you can use just much simpler baselines the"}, {"start": 1860.84, "end": 1866.0, "text": " graph neural networks and so this paper just showed that so homophilic graphs are"}, {"start": 1866.0, "end": 1870.7199999999998, "text": " those graphs where the node where the nodes that are connected have similar"}, {"start": 1870.7199999999998, "end": 1876.52, "text": " label so that that's a common thing you you have in social network types of"}, {"start": 1876.52, "end": 1881.0, "text": " graphs because usually people had to have similar opinions tend to cluster"}, {"start": 1881.0, "end": 1886.24, "text": " together and those types of graphs are called homophilic and basically what"}, {"start": 1886.24, "end": 1890.96, "text": " this paper showed is that you can use a much simpler baseline you can just leave"}, {"start": 1890.96, "end": 1897.72, "text": " out a multiple layered graph neural network and just use the aggregation on"}, {"start": 1897.72, "end": 1901.84, "text": " the k-hop neighborhood so instead of using just a single a one hop"}, {"start": 1901.84, "end": 1907.1599999999999, "text": " neighborhood so your your direct connections you can use and do the the"}, {"start": 1907.1599999999999, "end": 1912.52, "text": " MPN and the message passing scheme I told you multiple times you can instead"}, {"start": 1912.52, "end": 1916.8, "text": " do the following you can just aggregate the neighbors from the k-hop"}, {"start": 1916.8, "end": 1920.8, "text": " neighborhood where K can be maybe three or four which means you can get up to"}, {"start": 1920.8, "end": 1926.0, "text": " like four under in their indirect connections and build up 
and then just"}, {"start": 1926.0, "end": 1931.28, "text": " use a single just so aggregate those and then use a single MPN layer to to"}, {"start": 1931.28, "end": 1935.92, "text": " build up a nice representations which can even achieve better results in those"}, {"start": 1935.92, "end": 1940.6, "text": " complex spatial methods like graph attention network etc and they are much"}, {"start": 1940.6, "end": 1945.3999999999999, "text": " more computationally efficient because they are so much simpler now now there"}, {"start": 1945.3999999999999, "end": 1950.52, "text": " is a nice parallel I can make and I will make here between the CNNs and this"}, {"start": 1950.52, "end": 1955.92, "text": " thing here so if you know anything about CNNs you know that back in 2012 when"}, {"start": 1955.92, "end": 1960.44, "text": " Alex net was introduced and produced this thing called image net moment Alex"}, {"start": 1960.44, "end": 1967.28, "text": " net had nine I think 11 by 11 kernels whereas so that's really broad compared"}, {"start": 1967.28, "end": 1973.3200000000002, "text": " to VGG which in 2015 I think basically I figured they figure out that using 3 by"}, {"start": 1973.3200000000002, "end": 1979.46, "text": " 3 kernels in much deeper architectures is actually beneficial so you can see the"}, {"start": 1979.46, "end": 1985.1200000000001, "text": " parallel between this and the CNNs and basically with the arrival of certain"}, {"start": 1985.12, "end": 1990.56, "text": " ideas such as residual connections such as better realization we were able to"}, {"start": 1990.56, "end": 1995.6799999999998, "text": " make deeper and deeper CNNs and keep those kernel kernels really small at 3"}, {"start": 1995.6799999999998, "end": 2001.1599999999999, "text": " by 3 size and that turned out to be really good so much better than using"}, {"start": 2001.1599999999999, "end": 2005.1999999999998, "text": " large kernels and shallower networks on the other hand we can see here in graph"}, {"start": 2005.1999999999998, "end": 2010.84, "text": " ML they're using big kernels and shallow networks actually produces better"}, {"start": 2010.84, "end": 2015.08, "text": " results on certain classes of graphs not on everything sometimes you still need"}, {"start": 2015.08, "end": 2020.56, "text": " deep graph neural networks and just keep that in mind okay so those were the main"}, {"start": 2020.56, "end": 2026.6399999999999, "text": " methods and now the topics that are following are a bit more esoteric but"}, {"start": 2026.6399999999999, "end": 2032.56, "text": " none nothing less useful useful from the previous methods I just explained like"}, {"start": 2032.56, "end": 2037.3999999999999, "text": " embeddings and spectrum methods and spatial methods so DNN expressivity is"}, {"start": 2037.3999999999999, "end": 2041.96, "text": " really interesting so let me let me just explain what it is in a nutshell and"}, {"start": 2041.96, "end": 2047.32, "text": " it's actually fairly easy when you put it this way you see these two graphs here"}, {"start": 2047.32, "end": 2052.92, "text": " and you can see that they are different so how we also call that more more like"}, {"start": 2052.92, "end": 2058.76, "text": " a more formal term for that would be their non isomorphic and what what you"}, {"start": 2058.76, "end": 2063.7200000000003, "text": " ideally want to have is you want to have your graph neural network assign a"}, {"start": 2063.7200000000003, "end": 2068.8, "text": " different embedding to two different to 
two different graphs that makes sense to"}, {"start": 2068.8, "end": 2074.1600000000003, "text": " two non isomorphic graphs on the other hand if the graphs are isomorphic you"}, {"start": 2074.1600000000003, "end": 2080.28, "text": " want to assign them the same embedding and that's the ideal case so all of the"}, {"start": 2080.28, "end": 2086.4, "text": " methods I explained you so far like GCN graph convolutional network or get are"}, {"start": 2086.4, "end": 2091.6400000000003, "text": " actually confused by this simple example as you can see the the color histogram"}, {"start": 2091.6400000000003, "end": 2096.8, "text": " we have four reds here four reds here four blues four blues two greens and two"}, {"start": 2096.8, "end": 2101.92, "text": " greens and the way how for example graph classification works you just take all"}, {"start": 2101.92, "end": 2105.7200000000003, "text": " of these features you kind of do a mean and then you'll try to do some"}, {"start": 2105.7200000000003, "end": 2109.88, "text": " prediction and because the histograms are the same in this particular case"}, {"start": 2109.88, "end": 2114.6000000000004, "text": " that means we'll have the same embedding even though they are non isomorphic so"}, {"start": 2114.6000000000004, "end": 2119.04, "text": " that's the potential failure case and we want to avoid this and so by building"}, {"start": 2119.04, "end": 2124.44, "text": " more expressive models they'll know how to cope with cases like this one and"}, {"start": 2124.44, "end": 2128.64, "text": " that's the whole so the field of expressivity deals with this exact"}, {"start": 2128.64, "end": 2133.76, "text": " problem trying to assign the same label if the graphs are isomorphic otherwise"}, {"start": 2133.76, "end": 2139.08, "text": " assign different a different not label different embeddings okay so that's the"}, {"start": 2139.08, "end": 2145.68, "text": " whole concept and this graph isomorphism network paper first showed that by using"}, {"start": 2145.68, "end": 2150.8, "text": " that GCN and GATS and all of those have there is certain families of graphs"}, {"start": 2150.8, "end": 2156.92, "text": " which confuse which confuse those MPNN networks and later a lot of cool papers"}, {"start": 2156.92, "end": 2164.4, "text": " appeared that do some sub count that use sub structures to to to increase the"}, {"start": 2164.4, "end": 2168.0800000000004, "text": " expressivity or use multiple aggregation techniques like principal"}, {"start": 2168.0800000000004, "end": 2173.5600000000004, "text": " neighborhood aggregation paper etc but that's the basic idea I want you to take"}, {"start": 2173.5600000000004, "end": 2180.36, "text": " out of this so yeah just discriminating between the nine non isomorphic graphs"}, {"start": 2180.36, "end": 2184.6400000000003, "text": " papers on exciting gene applications feel free to go through these resources"}, {"start": 2184.6400000000003, "end": 2190.1200000000003, "text": " I already walked you through these so yeah just go at your own at your own"}, {"start": 2190.1200000000003, "end": 2194.88, "text": " pace and hopefully you'll find some interesting new papers additional papers"}, {"start": 2194.88, "end": 2201.56, "text": " here okay so handling dynamic graphs chapter okay all the methods I've"}, {"start": 2201.56, "end": 2206.04, "text": " explained so far are dealing with static graphs so what's static what's dynamic"}, {"start": 2206.04, "end": 2211.32, "text": " so dynamic graphs are those that evolve with 
time so either new edges or new"}, {"start": 2211.32, "end": 2216.98, "text": " nodes appear as the time progresses okay so one good example would be social"}, {"start": 2216.98, "end": 2221.4, "text": " network maybe you have Facebook or Twitter where each new tweet is a new"}, {"start": 2221.4, "end": 2226.2799999999997, "text": " node that appears at certain time point yeah the same goes with edges maybe"}, {"start": 2226.2799999999997, "end": 2230.6, "text": " somebody does a read feed and that's a new edge that appeared dynamically in"}, {"start": 2230.6, "end": 2235.2799999999997, "text": " this graph so the graph is constantly being pruned and increased so it's"}, {"start": 2235.28, "end": 2239.2400000000002, "text": " usually more it's usually always increasing but there is some pruning"}, {"start": 2239.2400000000002, "end": 2243.92, "text": " happening with somebody believes the account leads the tweet whatever for"}, {"start": 2243.92, "end": 2249.1600000000003, "text": " those types of graphs we need different methods and the two main ideas I want"}, {"start": 2249.1600000000003, "end": 2253.5600000000004, "text": " you to take out from from this section is that you need to find a way to"}, {"start": 2253.5600000000004, "end": 2258.2400000000002, "text": " represent the time which is a scalar as a vector and there are a couple of"}, {"start": 2258.2400000000002, "end": 2262.96, "text": " papers you can you can find so I've linked some papers like TGN here and"}, {"start": 2262.96, "end": 2268.28, "text": " you'll you'll see some references inside like time to back I think which shows"}, {"start": 2268.28, "end": 2274.44, "text": " one of the ways you can you can kind of so map the scalar the time into its"}, {"start": 2274.44, "end": 2279.88, "text": " vector eyes representation so that's one crucial idea that's needed and the"}, {"start": 2279.88, "end": 2284.16, "text": " second one is doing the concept of doing a temporal neighborhood so basically you"}, {"start": 2284.16, "end": 2288.04, "text": " have a single node let's say it's a tweet and what you want to do is you"}, {"start": 2288.04, "end": 2295.16, "text": " want to aggregate only those edges there may be at most one hour like old and you"}, {"start": 2295.16, "end": 2298.68, "text": " want to just ignore all of the other edges otherwise you have a huge graph"}, {"start": 2298.68, "end": 2302.04, "text": " and you have this thing called over squashing problem because you have so"}, {"start": 2302.04, "end": 2305.7599999999998, "text": " many nodes and you want to try to aggregate them usually doing some mean"}, {"start": 2305.7599999999998, "end": 2311.56, "text": " or something and that's kind of that's a similar effect happens in those bag of"}, {"start": 2311.56, "end": 2315.52, "text": " words if you're familiar with bag of words technique so you kind of know that"}, {"start": 2315.52, "end": 2322.28, "text": " by doing bag of roots and you lose the information same thing happens in RNNs"}, {"start": 2322.28, "end": 2327.12, "text": " where you get these this over squashing effect because as the sequence becomes"}, {"start": 2327.12, "end": 2331.64, "text": " longer you're kind of doing this bag of word thing and you're losing the older"}, {"start": 2331.64, "end": 2336.36, "text": " information so what so transformers for example tackled this problem by using"}, {"start": 2336.36, "end": 2341.88, "text": " the concept of attention right so you give more attention to more important"}, {"start": 2341.88, "end": 
2348.7200000000003, "text": " feature vectors and ignore some less important ones so yeah basically temporal"}, {"start": 2348.7200000000003, "end": 2352.44, "text": " neighborhood helps us avoid that over squashing problems problem that happens"}, {"start": 2352.44, "end": 2358.4, "text": " in all of those bag of word methods such as in RNNs I linked a couple of"}, {"start": 2358.4, "end": 2365.48, "text": " unsupervised papers unsupervised learning papers like DJI and this"}, {"start": 2365.48, "end": 2370.6400000000003, "text": " variational graph autoencoder so if you're not familiar with variational"}, {"start": 2370.64, "end": 2374.3199999999997, "text": " autoencoder this paper probably won't make any sense it's just it gets three"}, {"start": 2374.3199999999997, "end": 2378.3599999999997, "text": " or four pages it's three or four pages long and it already assumes you know"}, {"start": 2378.3599999999997, "end": 2382.8799999999997, "text": " everything about variational autoencoders but it's a it's a nice read"}, {"start": 2382.8799999999997, "end": 2387.24, "text": " if you're interested in learning more about these unsupervised learning methods"}, {"start": 2387.24, "end": 2392.8399999999997, "text": " just make it I made a small tangent here on the this field called geometric deep"}, {"start": 2392.8399999999997, "end": 2397.0, "text": " learning so when you start doing graph ML you'll hear about geometric deep"}, {"start": 2397.0, "end": 2402.48, "text": " learning that's for sure and those are pretty much separate it's a pretty much"}, {"start": 2402.48, "end": 2407.52, "text": " separate research area which deals with manifolds and not with graphs per se so"}, {"start": 2407.52, "end": 2412.56, "text": " even though again you can treat certain manifolds once you discretize them them"}, {"start": 2412.56, "end": 2417.44, "text": " there they are basically graphs but here the same thing as with computer vision"}, {"start": 2417.44, "end": 2421.96, "text": " the same thing as with NLP you have a class of algorithms which are much"}, {"start": 2421.96, "end": 2426.8, "text": " better optimized to deal with manifolds even though they can be treated when"}, {"start": 2426.8, "end": 2432.32, "text": " discretized as graphs so just keep that in mind and don't make it too big of a"}, {"start": 2432.32, "end": 2436.76, "text": " detour into that field because it's pretty much separate research area"}, {"start": 2436.76, "end": 2443.0, "text": " compared to graph ML okay with that out of the way I linked a couple of cool"}, {"start": 2443.0, "end": 2447.6400000000003, "text": " visualization tools here like iGraph I've used iGraph personally in my graph"}, {"start": 2447.6400000000003, "end": 2452.0800000000004, "text": " attention network project check that one out if you haven't I'll link it down in"}, {"start": 2452.08, "end": 2460.56, "text": " the description as well and aside from that I've been mentioning data sets"}, {"start": 2460.56, "end": 2466.24, "text": " here and data sets are so important and they usually achieve not they usually"}, {"start": 2466.24, "end": 2471.04, "text": " don't get so much attention in the community usually you have something"}, {"start": 2471.04, "end": 2475.96, "text": " like ah there's this huge model like GPT that has like five through trillion"}, {"start": 2475.96, "end": 2481.6, "text": " parameters and that gets really hyped up but the data sets which are so"}, {"start": 2481.6, "end": 2487.0, "text": " important for progressing the research 
don't get so much attention and there"}, {"start": 2487.0, "end": 2492.96, "text": " is a problem with graph ML that people use these Cora sites here PPI and PubMed"}, {"start": 2492.96, "end": 2499.2799999999997, "text": " data sets a lot in the over the previous years and only recently have we started"}, {"start": 2499.2799999999997, "end": 2504.3199999999997, "text": " to integrate bigger real-world graphs and use them as benchmarks so what"}, {"start": 2504.3199999999997, "end": 2509.72, "text": " happened is that when you use these these let's call them toy benchmark"}, {"start": 2509.72, "end": 2515.24, "text": " graph data says like similar to amnesty computer vision you basically can't"}, {"start": 2515.24, "end": 2520.8799999999997, "text": " discern between much more expressive and better models from simpler baselines so"}, {"start": 2520.8799999999997, "end": 2523.8799999999997, "text": " there's the whole point of the benchmark right you want to you want to have a"}, {"start": 2523.8799999999997, "end": 2529.48, "text": " benchmark which clearly separates and fill sorts better algorithms from the"}, {"start": 2529.48, "end": 2534.8799999999997, "text": " worst ones and here the the the margin became smaller and smaller even between"}, {"start": 2534.8799999999997, "end": 2539.56, "text": " the simplest baselines and the most complex GNNs so we needed new data sets"}, {"start": 2539.56, "end": 2544.04, "text": " and so this I link this open graph benchmark which is a nice initiative"}, {"start": 2544.04, "end": 2551.4, "text": " that introduced much better much much larger and more diverse graphs into"}, {"start": 2551.4, "end": 2556.7999999999997, "text": " this graph ML field and that's a really good thing and we should be very"}, {"start": 2556.7999999999997, "end": 2561.88, "text": " grateful for people who invest their time to develop better benchmarks even"}, {"start": 2561.88, "end": 2566.36, "text": " though that's not as exciting as developing some proposing some novel"}, {"start": 2566.36, "end": 2571.76, "text": " novel idea novel it's usually research is usually super incremental rarely do"}, {"start": 2571.76, "end": 2575.8, "text": " you see something that's completely new like graph embedding methods Vertweck"}, {"start": 2575.8, "end": 2581.4, "text": " already existed they just adapted it to to graphs similarly for most of these"}, {"start": 2581.4, "end": 2587.48, "text": " methods are just a variation on some research that already existed just keep"}, {"start": 2587.48, "end": 2592.36, "text": " that in mind I linked some cool resources Michael Bronstein's got a"}, {"start": 2592.36, "end": 2598.96, "text": " really awesome blog highly recommend you go and read those but probably you'll"}, {"start": 2598.96, "end": 2602.7200000000003, "text": " want to leave those for a bit later because he's a professor and he has a"}, {"start": 2602.7200000000003, "end": 2607.6, "text": " that his blogs are a bit more technical so you want to start with those high"}, {"start": 2607.6, "end": 2612.1600000000003, "text": " level blogs read through resources I've listed and only then start reading"}, {"start": 2612.1600000000003, "end": 2618.96, "text": " Michael's blog blog finally once you get some once you get your foot inside"}, {"start": 2618.96, "end": 2623.64, "text": " graph ML start following people in Twitter and start following Sergey's"}, {"start": 2623.64, "end": 2628.68, "text": " newsletter I highly recommend it that's a good way to keep in sync with the"}, 
{"start": 2628.68, "end": 2633.16, "text": " newest things happening in a specific field so similarly to how Sergey has"}, {"start": 2633.16, "end": 2640.2400000000002, "text": " newsletter in graph ML Sebastian Huda has a really nice newsletter in the world of"}, {"start": 2640.2400000000002, "end": 2646.04, "text": " natural language processing so yeah this is something that was shown to be a nice"}, {"start": 2646.04, "end": 2650.32, "text": " format for transferring knowledge in the community so you should be following"}, {"start": 2650.32, "end": 2655.7599999999998, "text": " these small specialized newsletters highly recommended I'll skip through this"}, {"start": 2655.7599999999998, "end": 2661.72, "text": " I had I made a couple of videos on graph attention network project they did so"}, {"start": 2661.72, "end": 2667.84, "text": " we gained a lot of traction like it got 750 stars since I published it a couple"}, {"start": 2667.84, "end": 2672.56, "text": " of weeks ago and you can go through it you'll find it useful probably I've got"}, {"start": 2672.56, "end": 2676.4, "text": " both the trans active as well as the inductive examples inside like both the"}, {"start": 2676.4, "end": 2680.52, "text": " Cora which is trans active and PPI protein protein interaction data set"}, {"start": 2680.52, "end": 2686.56, "text": " which is inductive and I've got the I made this Jupiter notebook as well so"}, {"start": 2686.56, "end": 2692.36, "text": " yeah do check it out that's the project and that's pretty much everything I"}, {"start": 2692.36, "end": 2697.2, "text": " wanted to tell you maybe a short note on some other detours you could be making"}, {"start": 2697.2, "end": 2701.52, "text": " and that's I already mentioned geometric deep learning so it's a in a way it's a"}, {"start": 2701.52, "end": 2706.4, "text": " separate research topic so be aware don't don't just don't make a mistake and"}, {"start": 2706.4, "end": 2711.72, "text": " start reading those papers because because they are pretty much independent"}, {"start": 2711.72, "end": 2716.0, "text": " orthogonal to the graph ML papers then there is this equivalent deep learning"}, {"start": 2716.0, "end": 2721.16, "text": " subfield which deals with making your algorithms much more statistically"}, {"start": 2721.16, "end": 2726.92, "text": " efficient by exploiting symmetries of your problem so the same way as CNN's"}, {"start": 2726.92, "end": 2733.48, "text": " are translationally equivalent they are trying to make models which are also"}, {"start": 2733.48, "end": 2738.56, "text": " rotationally equivalent which can just exploit different symmetries and again"}, {"start": 2738.56, "end": 2744.48, "text": " it's a separate research subfield be aware of it don't don't waste too much"}, {"start": 2744.48, "end": 2750.4, "text": " time of going there if you just want to focus on graph ML even though again"}, {"start": 2750.4, "end": 2754.88, "text": " there is always some cross-semination of ideas so you have to draw the line where"}, {"start": 2754.88, "end": 2761.36, "text": " you want to explore more versus exploit more finally knowledge graphs there's"}, {"start": 2761.36, "end": 2766.92, "text": " something there are mix between NLP and graph ML so again totally new"}, {"start": 2766.92, "end": 2773.48, "text": " terminology totally new set of people history papers so just be cognizant of"}, {"start": 2773.48, "end": 2777.44, "text": " that fact that's it you know the drill hit that subscribe button hit the bell"}, 
{"start": 2777.44, "end": 2791.08, "text": " icon and until next time keep learning deep"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=364hpoRB4PQ
Graph Attention Network Project Walkthrough
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I walk you through my recently open-source GAT project. I focus on 2 potential pain points in the implementation and I also highlight some of my previous projects you could find interesting. You'll learn about: ✔️ Cora dataset ✔️ Highly-optimized GAT implementation ✔️ Other exciting DL projects you could play with ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My GAT project: https://github.com/gordicaleksa/pytorch-GAT ✅ Naive neural style transfer for videos: https://github.com/gordicaleksa/pytorch-naive-video-neural-style-transfer ✅ Planetoid: https://github.com/kimiyoung/planetoid ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to GAT project 00:35 My other deep learning projects 04:00 README walkthrough 07:35 Node degree statistics 10:10 Entropy histograms 12:05 t-SNE plots 12:50 Graph drawing layout 14:00 Jupyter walkthrough 15:22 Understanding Cora dataset 18:19 Feature vectors and labels 20:00 Building the edge index 22:40 Toy example (understanding the implementation) 29:00 Lifting 32:50 Neighborhood aware softmax and aggregate 38:18 Outro, exciting deep learning projects ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #gat #graphattentionnetwork #project
What's up folks? So for those of you who don't know, four days ago I open sourced a brand new project. It's an implementation of the Graph Attention Network. So in this video I thought of doing a quick walkthrough, going through the README and trying to explain the potential pain points in the Jupyter notebook. Mainly the implementation itself, the core part of how the GAT method works. But before that, I just want to remind you that aside from this project, I've got a lot more interesting projects on my GitHub. I've been working on and implementing these projects for at least a year. So aside from this one, which is super popular, I open sourced it four days ago and it's got almost 600 stars, which is better than all of my previous projects. The best one was the Transformer, which had around 400 stars. And I'm really glad; I don't treat these as a vanity metric. I'm just glad to see that the community is actually using the stuff I produce. That makes me happy and makes me want to create these even more. So as I said, aside from the PyTorch GAT, which I'll cover in this video, I have some really interesting projects, especially if you're a newcomer to the deep learning field. You should check out this Deep Dream project. Just look at this imagery. It's so freaking exciting and looks really nice, if nothing else. But you'll learn a lot by doing these projects, and I'll always recommend something like this to a beginner. So if you're a beginner, you should consider doing something like Deep Dream as your first or second project. And yeah, it's just so awesome. And I think the reason it's not as popular, so it only has like 56 stars, is that I haven't added the Jupyter notebook yet. And I think I'll just invest a couple of days in a couple of months to add Jupyter notebooks to all of the projects that don't have them. That will hopefully make them even more accessible than the current state of the projects. OK, aside from Deep Dream, you should check out the Neural Style Transfer project. That's how I started this whole YouTube channel. I started with the Neural Style Transfer series, but it wasn't as popular, so I kind of dropped it and started doing other things. But I really enjoyed creating this project. And again, just look at this: if somebody had told me a couple of years ago that a neural network could create a piece of art like this, I'd be really surprised. It's amazing. And the pure fact that it's so visual helps you, or at least helped me, better engage with this project and made it so much more interesting than doing something that's not as palpable as this. And I've also got implementations of a couple of different GAN architectures; I implemented the original GAN architecture, and you can see here the generator slowly learns how to generate MNIST digits, and finally the outputs are almost indistinguishable from the inputs. But this is a simple dataset, so it's not hard to learn. And I also have the conditional GAN implementation and DCGAN, where I'm just using the CelebA dataset to generate faces. So I encourage you to check out these projects and play with them. I hate seeing them not being used as much as the GAT project, even though they're really cool and an awesome learning resource. And I'll try and add the Jupyter notebooks really soon.
With that out of the way, let me jump back to the topic of this video. And that's a graph attention network. I just want to quickly walk you through the readme and show you what you can see here. So the first thing, just go and click click around these links. There are many awesome applications I've linked here. Those are mostly short blogs which you can just read. And if you're not familiar with GNNs, I strongly encourage you to just go and explore and read about all of the awesome applications that GNNs not just get like graph neural networks in general enable. OK, so in this schematic, what you can see is the like a schematic representation of GAT where basically you can see the H1 area, which is the feature vector of this node one. And the edges just they just represent the other feature vectors that will get used to update this node's feature vector. And the differently colored and these squiggly lines just represent multiple attention heads. And in this image, it seems like this GAT has three attention heads. And after you just kind of accumulate and you weigh the feature vectors and you aggregate them, you get the updated version H1 prime. And again, I encourage you if you have if you're not familiar with GAT, go and watch my video on graph attention network. I got it covered thoroughly. So that's pretty much a prerequisite to understand this project. And yet so I encourage you go check it out. Let me continue here. So for the time being, I've only got a core citation network data set covered. So that's the transductive approach. And I'll be I'll be probably adding PPI or the protein protein interaction data set really soon, which is the inductive setup, which will make this project even more useful for you. So what you can see here is just the visualization of Cora. And what I did here is that the size of the node so Cora has two thousand seven hundred eight nodes. And what I did here to plot it is the following. Basically, the size of the node corresponds to the degree of that particular node where the degree is the number of edges. So the bigger the node, the more the edges. And you can see there are like a couple of nodes that are really big and most of them are really small. And we'll see that in a minute. Like some nicer plots that will explain that. Second thing, the colors are the classes. So Cora has got seven classes. And that's why we have seven colors in this in this chart. And finally, the edges. So the thickness of the edge is pretty much correlates with the something called edge between us. Basically, you want to take a particular edge and you want to find the number of geodesics, i.e. the the number of shortest paths that go through that particular edge. So you have all of those nodes in the in the in the graph. Right. And for every single pair of those nodes, you just go through the command control space and you find all the pairs and you ask the question, what's the shortest path between this node and this node? And while looking at a particular edge, you want to know how many of those shortest paths go through this through that edge. And that kind of quantifies the popularity of that edge. And so the thicker the edge, the more shortest paths go through it. Hopefully, that was clear enough. So that's the chart you can see here. And going down here. So in this chart, we can see is the like the node degree of every single node in core. So we've got two thousand seven hundred eight nodes on the x axis is the node ID on the y axis. 
We get the edge count for a particular node. And we can see there is one node that has one hundred and sixty-nine edges, and the rest have much, much less edges. And the final, bottom plot shows really nicely that the accumulation happens around two to five edges. So most of the nodes in Cora have only two to five edges, and this plot shows it really nicely. It goes up until a hundred and sixty-nine, that's this node, and that's the big green node in the chart above. OK, that was Cora. I showed you the schematic of GAT. And finally, once you train the GAT model and hopefully understand how GAT works, basically you know that it tries to assign an attention coefficient to every single one of its neighbors in order to aggregate those and update the representation. So what we see here is the edges, where the thickness of the edge is proportional to the attention coefficient. And because we can see that every single edge here is of the same thickness, we can deduce that the GAT model on the Cora dataset learns to just output a constant attention pattern, which is not as interesting. And the reason is that this dataset is pretty simple for GAT; it's homophilic, meaning that similar nodes cluster together. You can see that here by just looking at the colors. So for the blue node here, you can see that mostly the blue nodes connect to the blue node, and that's something known as the homophily property of graphs. And yeah, that's the reason why GAT learns these boring constant attention patterns. But yeah, this node here is one of the nodes with the most edges. Mostly you'll see nodes like this. So as I said, two to five edges is the common number, so this one is a common representative you'll find in Cora. And again, the edges are the same thickness, meaning constant attention. There is one more interesting way to understand the attention patterns that GAT is learning, and that's this entropy histograms method. And what we do here basically is the following. Because all of these sum up to one, because we applied the softmax while producing these attention coefficients, we can just treat all of these neighborhoods as simple distributions. And because you have a probability distribution, you can calculate the entropy. And now the point I want to stress here is that you don't need to know a lot about information theory. Actually, you don't need to know anything. The whole point with entropy histograms is the following. You take the ideal, hypothetical GAT model on your particular dataset, Cora here, and you make it have perfectly constant attention. That means you always have uniform distributions. And what you do is you just calculate the entropies and create a histogram, and that's the orange histogram in this chart. And then you take your trained GAT model and you do the same procedure. And if the histograms match with that hypothetical GAT, that means your GAT has learned constant attention. And you can see here basically that the orange and the blue histograms totally match with each other, which means our GAT learned to produce constant attention patterns. And yeah, that's just another way to look at this. And if we had some different dataset like PPI, which I'll soon add, the histograms would have much less matching than on this chart.
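To make the entropy-histogram idea above concrete, here is a minimal Python sketch. The neighborhood attention arrays are made-up toy values standing in for what a trained GAT would output; the only property they need is that each neighborhood's coefficients sum to 1 after the softmax.

import numpy as np
from scipy.stats import entropy  # Shannon entropy of a probability distribution

# Hypothetical attention coefficients per neighborhood (each array sums to 1)
neighborhood_attentions = [
    np.array([0.33, 0.33, 0.34]),    # node with 3 neighbors (incl. its self-edge)
    np.array([0.5, 0.5]),            # node with 2 neighbors
    np.array([0.1, 0.2, 0.3, 0.4]),  # node with 4 neighbors
]

# Entropy of the trained model's attention, one value per neighborhood
learned_entropies = [entropy(a) for a in neighborhood_attentions]

# Entropy of the hypothetical "perfectly constant attention" GAT:
# a uniform distribution over a neighborhood of the same size
uniform_entropies = [entropy(np.ones_like(a) / len(a)) for a in neighborhood_attentions]

# If the two histograms overlap, the trained GAT learned (near-)constant attention
print(np.histogram(learned_entropies, bins=5))
print(np.histogram(uniform_entropies, bins=5))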
So yeah, that's a second really nice visualization that helps you understand the attention patterns that GAT is learning. Okay. Having said that, in the paper you could see the t-SNE plot. I did the same thing here. Basically, you take the embeddings from the final layer, you use t-SNE to project them into 2D, and you get this nice chart here showing us that GAT has learned to cluster nodes of the same class in the same part of the 2D space, which is awesome, which means we can easily train a classifier to classify these nodes into one of seven classes. Okay. That was pretty much it. I've got the setup, usage and everything here. You can just go and read through the README, and you'll hopefully have no problems starting this project. One more thing I want to note here is that you'll usually see these circular patterns. So the thing is, graph drawing is a subfield in itself, something similar to tokenizers if you're familiar with transformers. It's something you just take for granted, but it's a field in itself. So I usually use this Reingold-Tilford circular layout, and you'll get charts like this one, but you can use some other techniques and layouts like Kamada-Kawai, this one, and you'll get charts like this one. So this one, the Reingold-Tilford, is really adequate for tree-like graphs, and because Cora is basically tree-like, I've used this layout, and it gives really nice results. But just be aware there are multiple layouts, and I've been using the iGraph plotting package, so you can just go explore some other layouts, but for this particular problem, I think this does the job. Okay. That was everything I wanted to cover in the README. Now I'll jump into the Jupyter notebook and OneNote and try and explain a couple of potential pain points. And welcome to the Jupyter Notebook. So I'll give you time to go through the Jupyter Notebook by yourself, at your own pace. What I want to do in this video is just focus on two specific, potential pain points for you. So first things first is data. What I did here is the following: you'll usually see, if you take a look at the official GAT implementation or some re-implementations like the pyGAT project, a bunch of files which actually date back to the Planetoid paper. And the thing is, most of those implementations just copy-pasted the same snippets for processing Cora. And so I just decided to do the processing and save the results into files so that you can import the node features like this, the node labels like this, and finally the structure of the graph, the edges and connections, through one line of code. Hopefully this will make it a bit easier for you, so you don't have to look through some pre-processing stuff. Let me quickly explain how these structures look, and then I'll jump into explaining how the GAT method itself works. Okay, so node features. Those of you who watched my previous video on GAT, this will probably be familiar, but let me just recap here because I'll be using these structures later when explaining the implementation of the GAT method itself. So first things first, node features. It's a matrix like this. Cora has 2,708 nodes, and I've been working so much with Cora that I know every single number off the top of my head. And the second dimension is 1,433. Now that may seem like an arbitrary number, and it's actually not. I'll just explain how they got that number.
So yeah, so that's the feature vectors for every single node. And so how they got this number is the following. So they took all of the research papers. They just created a simple histogram of words, meaning on the x-axis you have a word, on the y-axis you have the frequency of that word throughout all of those research papers. And now what they did is the following. They just created a cutoff frequency, like maybe 10, which means just disregard the words that have a frequency less than 10 across all documents, and just keep all of the rest, and they ended up with 1,433 words. So now how you create a feature vector for a specific research paper, which is a node in Quora, right? You do the following. You just go through the words of the research paper, and you see if some of the words from this vocabulary we just created is present in that research paper. So maybe if we had attention is all you need paper, and in the vocab we had the attention word, we put one where that field is. So basically, let me just draw it here. If this is the feature vector, and this particular slot, so this is again 1,433, and if this one corresponds to attention, you take a specific research paper and you ask, does it have the word attention present inside? If it does, we put a 1, and by doing that you create a feature vector, and it will look something like this, and it will mostly be sparse, so most of these will be zeros, and I'll show you how it actually looks like in a second, but let's take this for a moment, and later you'll see in the code the feature normalization part, and what it does, it's a really simple thing. You just sum up all of these ones, and you just divide all of the numbers by the sum, and we'll end up with having 1 over 2 here, and 1 over 2 here, because we only had two ones in this particular example. So in a more general case, you'll just have 1 over n, where n is the number of ones in your feature vector. So that's the normalization part, and now let me just show you how the feature vectors look like. Okay, so I just opened up the PyGad project, and you can see here on the screen the feature vectors for every single node in Quora, and most of the numbers are zeros. You see a couple of ones here, but it's pretty sparse representation. And finally, the labels, I mentioned seven labels, so they're genetic algorithms, probabilistic methods, case-based theory, reinforcement learning, etc. So what I did is I just assigned a number 0 through 6 to all of these strings, and we'll be using that as labels. So that leads me to the second structure, and that's the node labels, and the shape is pretty straightforward. So basically we have, again, 2,708 nodes here, and we only have one here, because we have a single class, right? So these numbers will be 0 through 6, maybe this one will be 5, so this node with this feature vector will have class 5. So that's the node labels structure. And finally, we've got this adjacency list dictionary, which just contains the connections, the edges in the Quora, and the structure, it's a simple dictionary, meaning the keys are node IDs, maybe 0, and 0 will have associated with it all of the edges, maybe something like, I don't know, 1, 2, 3, 25, whatever. And those are the nodes that 0 connects to. And finally, once we have this, the third most important structure that you'll see used throughout the code is something called the edge index. And what edge index is, it's a simple way to, it's a dense way to represent your connectivity in your graph. 
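Before moving on to the edge index, here is a rough illustration of the binary bag-of-words features and the 1/n row normalization described above. The tiny vocabulary and the paper's word set are made up, standing in for Cora's 1,433-word vocabulary.

import numpy as np

# Toy vocabulary of size 6 standing in for Cora's 1,433 words (made-up example)
vocab = ["attention", "graph", "kernel", "network", "policy", "reward"]

paper_words = {"attention", "network", "transformer"}  # words of one hypothetical paper

# Binary bag-of-words feature vector: 1 if the vocab word appears in the paper
features = np.array([1.0 if w in paper_words else 0.0 for w in vocab])
# -> [1, 0, 0, 1, 0, 0]

# Row normalization: divide by the number of ones, so each 1 becomes 1/n
features /= features.sum()
# -> [0.5, 0, 0, 0.5, 0, 0]
print(features)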
So I just copy-pasted the snippet that creates the edge index, and you can see it uses this structure, the adjacency list we have here. It just needs the number of nodes, and we'll be adding self-edges, and we'll see that in a moment. So what happens is the following. I just iterate through this dictionary object, and I create the edge index by doing the following. By the way, the shape is just 2 by E, where E is, I think, 13,264 edges. And how we create it is really simple. We have 0 and the corresponding list of target nodes here, so I'll just go through the list, and I'll see 0, 1, okay. We add the (0, 1) tuple to the edge index. Then we add the (0, 2) tuple. Finally, we add (0, 3), blah, blah, blah, (0, 25) — I don't know how to write today. And after I do that for every single node, I'll just iterate through all of the keys, and finally, I'll add the self-edges. So that means after I finish this whole procedure of going through the dictionary, I'll add (0, 0), meaning node 0 connects to itself, and the same goes for every single node up until 2,707. So that's the edge index; you can see the shape is just a bunch of these tuples, and there are this many of them, and that's the edge index structure. Hopefully this makes it easier for you to now tackle the Jupyter notebook yourself. Now I'll try and cover the method. It's complicated, so I'll use a simple toy example, and we'll see if it works. Going back to the Jupyter notebook, and just scrolling to the main part, to the meat of this project, we've got some visualizations. We saw these in the README; you'll be able to generate those yourself, just go through the Jupyter notebook. And yeah, this is the main part. We have the high-level class, and we have the actual implementation. And the only part that will probably cause you some problems in understanding how it works is a couple of lines of code, namely this. So from this huge Jupyter notebook, the only part that's actually maybe hard to understand is this one. And so I'll now try and explain how this part works. Maybe you should first go through this code yourself, and then continue watching this video. Okay, I'll try and use a toy example here to explain how the implementation works. Just a quick recap. Let me use this as a running example: three nodes, just two edges here, and the self-connections are also here. And now I'll just add some IDs here. So this is node 0, this is node 1, and this is node 2. So how GAT works is the following. We have a feature vector associated with every single node, and what we basically need is to find the attention coefficient for this feature vector, and for this one, and for this one, in order to update this node's feature vector, okay? And so we first project those feature vectors into some smaller subspace, and then how we find those attention coefficients is the following. We just concatenate this representation with this one. We apply something called a scoring function, which is basically a densely connected, linear layer here, and that will give us a score for this particular edge. We do the same thing here, and we do the same thing for the self-edge, which means for the self-edge we'll have the same vector duplicated two times, and we just apply the same scoring function, so these weights here.
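Here is a minimal sketch of that edge index construction for a three-node toy graph. The adjacency dictionary is a stand-in for Cora's, and the variable names are mine, not necessarily the exact ones used in the repo.

import torch

# Toy adjacency list standing in for Cora's (node id -> list of target node ids)
adjacency_list_dict = {0: [1, 2], 1: [0], 2: [0]}
num_of_nodes = 3

source_ids, target_ids = [], []
for src_node, neighbors in adjacency_list_dict.items():
    for trg_node in neighbors:
        source_ids.append(src_node)
        target_ids.append(trg_node)

# Add self-edges (i, i) for every node
for i in range(num_of_nodes):
    source_ids.append(i)
    target_ids.append(i)

# Shape (2, E): row 0 = source nodes, row 1 = target nodes
edge_index = torch.tensor([source_ids, target_ids], dtype=torch.long)
print(edge_index)
# tensor([[0, 0, 1, 2, 0, 1, 2],
#         [1, 2, 0, 0, 0, 1, 2]])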
Once we have the scores, we apply the softmax, that will make them sum up to 1, and then we just multiply this particular vector with that specific attention coefficient, we'll multiply this one with its attention coefficient, and the same goes for this vector, and we'll end up with a new representation, and that's the updated representation. I missed some details like Rayleigh uses, blah blah blah, but this is the core concept, so you want to find those attention coefficients, you want to multiply the projected vectors using those coefficients, and you want to aggregate all of those using the sum into a new updated representation. So hopefully that kind of just refreshes your mind on how GAT works. Okay, let's build up the structures we'll need in order to go through this example. So, first assumption is I only have a single attention head, I'll ditch the eight attention heads that we have in the actual implementation, I'll just work with a single attention head, otherwise it gets really hard to explain it, but it's the same thing, we just duplicate whatever I say here, eight times, that's it. They are totally independent, so it makes sense to reduce this problem to a single attention head. So, first thing we have node vectors, so that means we have three here because we have three nodes, and these are 1,433. The second thing we have is the edge index, and let's see how the edge index looks like. So let's first see which nodes connect to zero. So, one connects to zero, two connects to zero, and zero connects to zero. Then let's see which nodes connect to one. So, zero connects to one, and one connects to one. And finally, which nodes connect to two, and that's zero connects to two, and to itself connects to itself, right? So this is the edge index structure for this particular graph. Okay? Once we have that, let's see how the method works. So what I do is, the first thing, you have this projection layer, and you just project this thing into some smaller subspace. So this gets projected into eight dimensional vectors. So we'll have three by eight. And if we had multiple attention heads, the only thing that would be different, you would have eight by eight independent columns here, and that's the only difference. And the actual implementation detail is that you'll project this vector into 64 dimensional vector, and then you just apply this view function, which basically just doesn't change anything. It changes like the stride, a couple of parameters in the matrix itself, but it doesn't change the underlying memory layout. So that's the important thing to notice. Okay, so once we have this projected thing, what do we do? We do the following. We need to first lift these. So we first need to calculate these scores for every single edge, because that's how get works. It requires these scores, and then transforms them into attention coefficients, and let's see how we actually do it through code. So after we project them, we do the following. We apply these left and right scoring functions. And let me just explain that, why that works. So basically, you can treat this as the left side and the right side, and you just apply, so let me draw it again. So instead of holistically applying, concatenating the feature vectors, and then applying a single scoring function, you can do the following. You can take this feature vector, put it here, and apply a single, the source, I called it the source scoring function, and you find the score here. 
And you take the second feature vector, and you apply a separate scoring function, the target scoring function, and you get the score. And if you just sum them up, that's exactly the same result as by just doing a single scoring function on the concatenated version of these vectors, okay? Hopefully that makes sense. So that's the main potential pain point for you. So it's conceptually, it's a bit different, but like it's the same thing at the end. It's semantically the same thing, okay? So what we'll do here is we'll take these projected vectors, we'll apply the left and the right scoring function, which will, so we'll end up with three by one. So we have scores, so these are the source scores, and these are the target scores. Okay. So that's again three by one. And now there is this thing called lifting in the code. So what lifting does is it uses this edge index structure to just copy these projected vectors and these source and target scores. So what I mean by that is the following. So we'll take the source scores, and we'll take the source column. So this is the source column, and this is the target column. Remember, this means one, the source node points to zero, the target node. So that's this directed edge. And because we have both, we just read the core as undirected graph. So we'll just do the following. So one means we take score one and we copy it here. Let me just draw it like this. So we'll have, so because this has how many edges, like seven. So this one will have seven edges here and one after being lifted. So we'll take one, we copy score one here. We see two, we copy score two here. We see zero, we copy score zero here, et cetera. And that's how we lift up the source scores. We do the same thing for the target scores, but we just use the target column of the index this time. And the same procedure goes. So we'll have zero, so we'll copy zero here, blah, blah, blah. And we form the lifted versions of these structures. And finally, we do the same thing for this projected vectors and we use the source index. And it will hopefully make sense why we are using the source index here. But like for the time being, let me just copy it here. So we'll end up with seven, seven rows and the same concept goes. So the same idea again, we just copy. So we have one, we copy the projected vector one here. We have two, we copy the projected vector two here. And so on and so on. And that's how we form these lifted versions. So now for the fun part. So what do we do now? So we take and we just add up these scores. And we end up with something that calls scores per edge. And after applying a ReLU, we just apply a ReLU, Leaky ReLU, and we get the final scores per edge. So why do we do the lifting? So the idea is really simple, because we have score for node one here and we have score for node zero here. What we effectively did by summing up these two and putting them here is we calculated the score for this particular edge. And that's what we want, because we need scores, we need those scores in order to calculate the attention coefficients and we need those in order to create the updated version of the feature vector. So again, by doing this simple lifting, we calculated the scores only for those edges that we care about. So if this was a fully connected graph, we'd have much more edges and we don't need to calculate the scores for the edges that don't exist. And that's, so if you take a look at implementation one and two of my project, you'll see exactly that. 
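Here is a small sketch of the two ideas above: that splitting the scoring vector into a source ("left") half and a target ("right") half gives the same result as scoring the concatenation, and that lifting just copies per-node quantities into per-edge positions via the edge index. The tensor names and random values are illustrative, not the exact ones from the project.

import torch
import torch.nn.functional as F

out_dim = 8  # dimension of the projected node feature vectors

# The scoring vector "a" from the GAT paper, split into a source and a target half
a = torch.randn(2 * out_dim)
a_source, a_target = a[:out_dim], a[out_dim:]

h_i, h_j = torch.randn(out_dim), torch.randn(out_dim)  # two projected node vectors

# Scoring the concatenation equals summing the two partial scores
score_concat = torch.dot(a, torch.cat([h_i, h_j]))
score_split = torch.dot(a_source, h_i) + torch.dot(a_target, h_j)
assert torch.allclose(score_concat, score_split)

# Lifting: copy per-node scores and vectors into per-edge positions using the edge index
nodes_projected = torch.randn(3, out_dim)            # 3 toy nodes
scores_source = nodes_projected @ a_source           # one score per node, shape (3,)
scores_target = nodes_projected @ a_target
edge_index = torch.tensor([[0, 0, 1, 2, 0, 1, 2],    # source nodes
                           [1, 2, 0, 0, 0, 1, 2]])   # target nodes
scores_source_lifted = scores_source.index_select(0, edge_index[0])       # shape (E,)
scores_target_lifted = scores_target.index_select(0, edge_index[1])       # shape (E,)
nodes_projected_lifted = nodes_projected.index_select(0, edge_index[0])   # shape (E, out_dim)

scores_per_edge = F.leaky_relu(scores_source_lifted + scores_target_lifted)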
I'm calculating the scores for every single possible edge and then doing some smart masking, we just finally find the attention coefficients only for the edges. But it's far less efficient than this implementation, even though this one is conceptually, as you can see, a bit harder to explain and to understand. But it's super simple, like once you invest a little bit of time, you'll be able to understand everything. OK, so we've got the scores per edge. How do we convert these scores per edge into those attention coefficients we need? Well, the following we do the following. We basically again take the target column of the edge index and we do something called a neighborhood aware softmax. And that's something I just dubbed it like that. That's not official literature term. That's something just I dubbed it like that. That's how I name my variables. And what happens is the following. So we take the edge index and we can see there are zeros here. That means we want to make sure that these scores here sum up to one. That's the neighborhood aware portion. So using the edge index, you know, you know exactly which parts of this scores per edge structure need to sum up to one. So because these correspond to node zero, we want them. We want to we want to make sure that they sum up to one. And then we have ones here. So that means we want to make sure that this chunk here sums up to one. And finally, because we have twos here, we want to make sure that these sum up to one. So how we do that is the following. We just we first raise this structure. We just do exponential of this structure, because if you remember like the softmax, we got e to the i. And then you've got sums of e to the like these are scores. And basically, this is some particular score for node i. And this is for the other neighbors. And because of that, because we have that exponential, we want to exponentiate. That's even a word, this structure. So we have the same dimensions. These are now just exponents. So this is seven. This is one. In general, this will be e. So for Quora, this will be, if we remember, 13,264, whatever. And now we want to make sure that the summations go over these neighborhoods I just defined. So that means in the code, you'll see something called scatter add. And what scatter add does is the following. It will just sum up these. And so we'll get a smaller structure. So this thing will actually end up being three. Or in general, this will be the number of nodes in your graph. And once we do that, we just want to lift it up again. And you'll see in a moment why. Hopefully you already understand. So we'll lift it up again by copying the lifting up using the target portion of the edge index. And once we have these numbers, now it's pretty simple. We just divide these by these, because these here are the same numbers, and they are just the sum of these three elements. So once we divide this, that's basically doing this portion. And we end up having the attention coefficients. So after we do division, after we divide, let me denote it like this, this vector with this one, we end up with the attention coefficients. And now we're really close. Now it's really simple. Now we do the following. We just take these attention coefficients and we multiply them with this. And if you remember, we use the source portion of the edge index to project, to lift these up. And now we just multiply these, and that's the weighted part of the... And now we just need to aggregate all of those, and that's the get method, right? 
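Continuing with the same toy graph, here is roughly what the neighborhood-aware softmax looks like with scatter add. The per-edge scores are random stand-ins, and for simplicity this sketch skips the max-subtraction trick a numerically stable implementation would use before exponentiating.

import torch

# Toy edge index from the sketches above: row 0 = source nodes, row 1 = target nodes
edge_index = torch.tensor([[0, 0, 1, 2, 0, 1, 2],
                           [1, 2, 0, 0, 0, 1, 2]])
num_of_nodes = 3
scores_per_edge = torch.randn(edge_index.size(1))  # stand-in for the LeakyReLU'd scores

trg_index = edge_index[1]            # tells us which neighborhood each edge belongs to
exp_scores = scores_per_edge.exp()   # softmax numerators, one per edge

# Scatter add: sum the exponentiated scores over each target node's neighborhood
neighborhood_sums = torch.zeros(num_of_nodes).scatter_add_(0, trg_index, exp_scores)

# Lift the per-node sums back to per-edge positions and divide
attentions_per_edge = exp_scores / neighborhood_sums.index_select(0, trg_index)

# Sanity check: coefficients sum to 1 within each neighborhood
print(torch.zeros(num_of_nodes).scatter_add_(0, trg_index, attentions_per_edge))  # ~[1., 1., 1.]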
So after we multiply this coefficient to this projected vector and to this and to this, we'll get what we semantically did for get, right? And hopefully now we can see why we use the source portion, the source column of the edge index to lift the projected vectors. Because this now is node 1 representation, this is node 2 representation, or the projected feature vector. And once we multiply it with this coefficient, we basically get what we want, and that's we accumulate, we aggregate. So this is... so 1, that's this one, 2, that's this one, and we just multiply with the attention coefficients and we get the weighted feature vectors. And now we just want to do the aggregation. How do we do the aggregation? So after multiplying these two structures, we'll end up with the weighted structure, again 7 and 8 here, because remember the projected vectors had dimension 8. And finally, again, simple scatter add using the target index will make sure that we sum up the right nodes, the right nodes, yep, and we'll finally end up with 3 by 8. And that's... those are the new representations that we wanted. Again, we use the target column of the edge index to do the scatter add, and that will make sure that we... so we have zeros here, that means these feature vectors connect to zero, and we'll just scatter add them, and that's the final representation here. Hopefully this made it a bit easier for you to understand it. I don't know if I explained it perfectly, I tried my best. I just want to wrap it up, reminding you again to just check out other projects, and this one got un-proportionally more popular than the other ones. I don't like that, I think that the other projects are worthy as well. And if I refresh this, so if you remember at the beginning we had... I had almost 600 stars, let me see if I went over 600 stars. If I refresh it, 607 stars. So this project is so popular, I'm not sure why, I know I did a good job, but please take a look at other projects, and go ahead and play with them. If you're a beginner, I recommend you go and play with Neural Style Transfer. This is the fast version of the Neural Style Transfer, this is the Johnson's method, which is much faster, but a bit worse results. And Deep Dream is awesome, so I recommend you start with these projects, they're awesome. I also have one more, which is hidden, which is not pinned, which actually creates videos using this naive approach, so that means there is no temporal loss included, so they'll be a bit jittery. But yeah, I won't get into those details, just check them out, and... yep. As a fun fact, this image here was created using this Deep Dream project, so that's something I created, and you can create your own images yourself, so yeah, go ahead, enjoy, play, and hopefully you found this video useful. If you did, you know the drill, just subscribe, hit that bell icon, and that will help me grow this channel, if you found some value in it, and until next time, keep learning deep.
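For completeness, here is a sketch of the final weighting-and-aggregation step described at the start of this part of the walkthrough, again on the same hypothetical toy tensors rather than the project's exact variables.

import torch

# Same toy graph as above (hypothetical values)
edge_index = torch.tensor([[0, 0, 1, 2, 0, 1, 2],
                           [1, 2, 0, 0, 0, 1, 2]])
num_of_nodes, out_dim = 3, 8
nodes_projected = torch.randn(num_of_nodes, out_dim)   # projected feature vectors
attentions_per_edge = torch.rand(edge_index.size(1))   # stand-in attention coefficients

src_index, trg_index = edge_index[0], edge_index[1]

# Lift the source node vectors to per-edge positions and weight them by the attention
lifted = nodes_projected.index_select(0, src_index)    # shape (E, out_dim)
weighted = lifted * attentions_per_edge.unsqueeze(-1)  # shape (E, out_dim)

# Aggregate: scatter add each weighted edge vector into its target node's row
out_features = torch.zeros(num_of_nodes, out_dim)
index_2d = trg_index.unsqueeze(-1).expand_as(weighted)  # same target id repeated per column
out_features.scatter_add_(0, index_2d, weighted)         # updated node representations, shape (3, 8)
print(out_features.shape)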
[{"start": 0.0, "end": 8.0, "text": " What's up folks? So for those of you who don't know, four days ago I open sourced a brand new project."}, {"start": 8.0, "end": 11.0, "text": " It's the implementation of the Graph Attention Network."}, {"start": 11.0, "end": 26.0, "text": " So in this video I thought doing a quick walkthrough and just going to read me and potentially try and explain the potential pain points in the Jupyter notebook."}, {"start": 26.0, "end": 34.0, "text": " Mainly the implementation itself, like the core part of how the get method works."}, {"start": 34.0, "end": 44.0, "text": " But before that, I just want to remind you that aside from this project, I've got a lot more interesting projects on my GitHub."}, {"start": 44.0, "end": 47.0, "text": " So I've been working and implementing these projects for at least a year."}, {"start": 47.0, "end": 58.0, "text": " So aside from this one, which is super popular, I've open sourced this four days ago and it's got almost 600 stars, which is better than all of my previous projects."}, {"start": 58.0, "end": 62.0, "text": " Like the best one was Transformer, which had around 400 stars."}, {"start": 62.0, "end": 65.0, "text": " And I'm really glad I don't treat these as a vanity metric."}, {"start": 65.0, "end": 69.0, "text": " I'm just glad that I see that the community is actually using the stuff I produce."}, {"start": 69.0, "end": 73.0, "text": " So that makes me happy and makes me want to create these even more."}, {"start": 73.0, "end": 84.0, "text": " So as I said, aside from the PyTorch get, which I'll cover in this video, I had some really interesting projects, especially if you're like a newcomer to deep learning field."}, {"start": 84.0, "end": 88.0, "text": " Like you should check out this Deep Dream project."}, {"start": 88.0, "end": 90.0, "text": " It's got really like just look at this imagery."}, {"start": 90.0, "end": 95.0, "text": " It's so so freaking exciting and looks really nice, if nothing else."}, {"start": 95.0, "end": 97.0, "text": " But you'll learn a lot by doing these projects."}, {"start": 97.0, "end": 100.0, "text": " And I'll always recommend something like this to a beginner."}, {"start": 100.0, "end": 108.0, "text": " So if you're a beginner, you should consider doing something like this, something like Deep Dream as your first or like second project, whatever."}, {"start": 108.0, "end": 111.0, "text": " And yeah, it's just so awesome."}, {"start": 111.0, "end": 120.0, "text": " And I think the reason it's not as popular, so it's only it only has like 56 stars, is I haven't added the Jupyter notebook yet."}, {"start": 120.0, "end": 130.0, "text": " And I think I'll just invest like a couple of days in a couple in a couple of months to just add the Jupyter notebooks to all of the projects that don't have it."}, {"start": 130.0, "end": 136.0, "text": " So that will hopefully make it even more accessible than than the current state of the project."}, {"start": 136.0, "end": 141.0, "text": " OK, aside from Deep Dream, you should check out the Neural Style Transfer project."}, {"start": 141.0, "end": 144.0, "text": " That's how I started the whole this whole YouTube channel."}, {"start": 144.0, "end": 151.0, "text": " I started with the Neural Style Transfer series, but it wasn't as popular, so it kind of dumped it and started doing other things."}, {"start": 151.0, "end": 153.0, "text": " But I really enjoyed creating this project."}, {"start": 153.0, "end": 165.0, "text": " And just again, just look 
at this like like if somebody told me like a couple of years ago that like a neural network create like a piece of art like this, I'd be really surprised."}, {"start": 165.0, "end": 166.0, "text": " It's amazing."}, {"start": 166.0, "end": 181.0, "text": " And the reason like the pure fact that it's so visual helps you maybe at least help me better engage with this project and made it so much interesting than just doing something that you can."}, {"start": 181.0, "end": 184.0, "text": " It's not as palpable as this thing."}, {"start": 184.0, "end": 193.0, "text": " And also I've got implementation of a couple of different GAN architectures like I implemented the original GAN architecture."}, {"start": 193.0, "end": 198.0, "text": " And you can see here the generator slowly learns how to generate MNIST digits."}, {"start": 198.0, "end": 202.0, "text": " And finally, the outputs are almost indistinguishable from the inputs."}, {"start": 202.0, "end": 207.0, "text": " But this is a simple data set, so it's not hard to to learn that."}, {"start": 207.0, "end": 214.0, "text": " And I also had the conditional GAN implementation and DC GAN where I'm just using this Celibate data set to generate faces."}, {"start": 214.0, "end": 220.0, "text": " So I encourage you to check out these projects and play with them."}, {"start": 220.0, "end": 231.0, "text": " I hate seeing them not being as used as much as as the current as the GAT project, even though they're really cool and an awesome learning resource."}, {"start": 231.0, "end": 234.0, "text": " And I'll try and add the Jupyter really soon."}, {"start": 234.0, "end": 239.0, "text": " With that out of the way, let me jump back to the topic of this video."}, {"start": 239.0, "end": 242.0, "text": " And that's a graph attention network."}, {"start": 242.0, "end": 247.0, "text": " I just want to quickly walk you through the readme and show you what you can see here."}, {"start": 247.0, "end": 251.0, "text": " So the first thing, just go and click click around these links."}, {"start": 251.0, "end": 255.0, "text": " There are many awesome applications I've linked here."}, {"start": 255.0, "end": 257.0, "text": " Those are mostly short blogs which you can just read."}, {"start": 257.0, "end": 271.0, "text": " And if you're not familiar with GNNs, I strongly encourage you to just go and explore and read about all of the awesome applications that GNNs not just get like graph neural networks in general enable."}, {"start": 271.0, "end": 285.0, "text": " OK, so in this schematic, what you can see is the like a schematic representation of GAT where basically you can see the H1 area, which is the feature vector of this node one."}, {"start": 285.0, "end": 297.0, "text": " And the edges just they just represent the other feature vectors that will get used to update this node's feature vector."}, {"start": 297.0, "end": 303.0, "text": " And the differently colored and these squiggly lines just represent multiple attention heads."}, {"start": 303.0, "end": 308.0, "text": " And in this image, it seems like this GAT has three attention heads."}, {"start": 308.0, "end": 318.0, "text": " And after you just kind of accumulate and you weigh the feature vectors and you aggregate them, you get the updated version H1 prime."}, {"start": 318.0, "end": 325.0, "text": " And again, I encourage you if you have if you're not familiar with GAT, go and watch my video on graph attention network."}, {"start": 325.0, "end": 327.0, "text": " I got it covered thoroughly."}, {"start": 
327.0, "end": 332.0, "text": " So that's pretty much a prerequisite to understand this project."}, {"start": 332.0, "end": 336.0, "text": " And yet so I encourage you go check it out."}, {"start": 336.0, "end": 338.0, "text": " Let me continue here."}, {"start": 338.0, "end": 344.0, "text": " So for the time being, I've only got a core citation network data set covered."}, {"start": 344.0, "end": 346.0, "text": " So that's the transductive approach."}, {"start": 346.0, "end": 359.0, "text": " And I'll be I'll be probably adding PPI or the protein protein interaction data set really soon, which is the inductive setup, which will make this project even more useful for you."}, {"start": 359.0, "end": 363.0, "text": " So what you can see here is just the visualization of Cora."}, {"start": 363.0, "end": 371.0, "text": " And what I did here is that the size of the node so Cora has two thousand seven hundred eight nodes."}, {"start": 371.0, "end": 374.0, "text": " And what I did here to plot it is the following."}, {"start": 374.0, "end": 382.0, "text": " Basically, the size of the node corresponds to the degree of that particular node where the degree is the number of edges."}, {"start": 382.0, "end": 384.0, "text": " So the bigger the node, the more the edges."}, {"start": 384.0, "end": 389.0, "text": " And you can see there are like a couple of nodes that are really big and most of them are really small."}, {"start": 389.0, "end": 391.0, "text": " And we'll see that in a minute."}, {"start": 391.0, "end": 394.0, "text": " Like some nicer plots that will explain that."}, {"start": 394.0, "end": 397.0, "text": " Second thing, the colors are the classes."}, {"start": 397.0, "end": 399.0, "text": " So Cora has got seven classes."}, {"start": 399.0, "end": 403.0, "text": " And that's why we have seven colors in this in this chart."}, {"start": 403.0, "end": 405.0, "text": " And finally, the edges."}, {"start": 405.0, "end": 414.0, "text": " So the thickness of the edge is pretty much correlates with the something called edge between us."}, {"start": 414.0, "end": 424.0, "text": " Basically, you want to take a particular edge and you want to find the number of geodesics, i.e. 
the the number of shortest paths that go through that particular edge."}, {"start": 424.0, "end": 427.0, "text": " So you have all of those nodes in the in the in the graph."}, {"start": 427.0, "end": 428.0, "text": " Right."}, {"start": 428.0, "end": 440.0, "text": " And for every single pair of those nodes, you just go through the command control space and you find all the pairs and you ask the question, what's the shortest path between this node and this node?"}, {"start": 440.0, "end": 447.0, "text": " And while looking at a particular edge, you want to know how many of those shortest paths go through this through that edge."}, {"start": 447.0, "end": 450.0, "text": " And that kind of quantifies the popularity of that edge."}, {"start": 450.0, "end": 454.0, "text": " And so the thicker the edge, the more shortest paths go through it."}, {"start": 454.0, "end": 456.0, "text": " Hopefully, that was clear enough."}, {"start": 456.0, "end": 458.0, "text": " So that's the chart you can see here."}, {"start": 458.0, "end": 461.0, "text": " And going down here."}, {"start": 461.0, "end": 466.0, "text": " So in this chart, we can see is the like the node degree of every single node in core."}, {"start": 466.0, "end": 471.0, "text": " So we've got two thousand seven hundred eight nodes on the x axis is the node ID on the y axis."}, {"start": 471.0, "end": 475.0, "text": " We get the edge count for a particular node."}, {"start": 475.0, "end": 480.0, "text": " And we can see there is one node that has one hundred sixty nine like edges."}, {"start": 480.0, "end": 486.0, "text": " And the rest are have much, much, much, much less edges."}, {"start": 486.0, "end": 495.0, "text": " And the final this bottom plot shows really nicely that the accumulation happens around two to five edges."}, {"start": 495.0, "end": 499.0, "text": " So most of the nodes in Cora have only two to five edges."}, {"start": 499.0, "end": 501.0, "text": " And this plot shows it really nicely."}, {"start": 501.0, "end": 504.0, "text": " So it goes up until hundred sixty nine."}, {"start": 504.0, "end": 511.0, "text": " That's the this this node and that's the big green node in this chart above."}, {"start": 511.0, "end": 513.0, "text": " OK, that's that was Cora."}, {"start": 513.0, "end": 516.0, "text": " We showed I showed you the schematic of GATT."}, {"start": 516.0, "end": 522.0, "text": " And finally, once you once you train the GATT model and hopefully understand how GATT works,"}, {"start": 522.0, "end": 530.0, "text": " basically, you know that it tries to to to assign attention coefficient to every single of its neighbor"}, {"start": 530.0, "end": 534.0, "text": " in order to aggregate those and update the representation."}, {"start": 534.0, "end": 545.0, "text": " So what we see here is the the the edges, the thickness of the edge is proportional to the attention coefficient."}, {"start": 545.0, "end": 549.0, "text": " And because we can see that every single edge here is of the same thickness,"}, {"start": 549.0, "end": 558.0, "text": " we can deduce that the GATT model on Cora data set learns to to just output constant attention pattern,"}, {"start": 558.0, "end": 560.0, "text": " which is not as interesting."}, {"start": 560.0, "end": 565.0, "text": " And the reason is because this data is pretty simple for forget its homophilic,"}, {"start": 565.0, "end": 568.0, "text": " meaning that similar nodes cluster together."}, {"start": 568.0, "end": 571.0, "text": " You can see here by just looking at the color."}, 
{"start": 571.0, "end": 576.0, "text": " So the blue node here, you can see that mostly the blue nodes connect to the blue node."}, {"start": 576.0, "end": 582.0, "text": " And that's something known as the homophilic property of of of of graphs."}, {"start": 582.0, "end": 590.0, "text": " And yeah, that's that's the reason why why GATT learns these boring constant attention patterns."}, {"start": 590.0, "end": 595.0, "text": " But yeah, in this node here is one of the nodes with the most edges."}, {"start": 595.0, "end": 598.0, "text": " Most of you'll see nodes like this."}, {"start": 598.0, "end": 602.0, "text": " So as I said, two to five edges is the common number."}, {"start": 602.0, "end": 605.0, "text": " So this one is a common representative you'll find in Cora."}, {"start": 605.0, "end": 609.0, "text": " And again, the edges are the same thickness, meaning constant attention."}, {"start": 609.0, "end": 615.0, "text": " There is one more interesting way to to understand the attention patterns that GATT is learning."}, {"start": 615.0, "end": 618.0, "text": " And that's this entropy histograms method."}, {"start": 618.0, "end": 621.0, "text": " And what we do here basically is the following."}, {"start": 621.0, "end": 628.0, "text": " So because all of these sum up to one, because we applied the softmax while producing these attention coefficients,"}, {"start": 628.0, "end": 632.0, "text": " we can just treat all of these neighborhoods as a simple as simple distributions."}, {"start": 632.0, "end": 636.0, "text": " And because you have a probability distribution, you can calculate the entropy."}, {"start": 636.0, "end": 644.0, "text": " And now the point I want to want to stress here is that you don't need to know a lot about information theory."}, {"start": 644.0, "end": 646.0, "text": " Actually, you don't need to know anything."}, {"start": 646.0, "end": 649.0, "text": " The whole point with entropy histograms is the following."}, {"start": 649.0, "end": 656.0, "text": " So you take the ideal, the hypothetical GATT model on your particular data set in Cora here,"}, {"start": 656.0, "end": 661.0, "text": " and you make it have like a constant attention, like a perfectly constant attention."}, {"start": 661.0, "end": 665.0, "text": " That means you always have the uniform distributions."}, {"start": 665.0, "end": 670.0, "text": " And what you do is you just plot, you just calculate the entropies, you create a histogram."}, {"start": 670.0, "end": 674.0, "text": " And that's the orange histogram in this chart."}, {"start": 674.0, "end": 680.0, "text": " And then you take your GATT trained, your trained GATT model, and you do the same procedure."}, {"start": 680.0, "end": 688.0, "text": " And if the histograms match with that hypothetical GATT, that means your GATT has learned constant attention."}, {"start": 688.0, "end": 695.0, "text": " And you can see here basically that the orange and the blue histograms totally match with each other,"}, {"start": 695.0, "end": 700.0, "text": " which means our GATT learned to produce constant attention patterns."}, {"start": 700.0, "end": 703.0, "text": " And yeah, that's just another way to look at this."}, {"start": 703.0, "end": 707.0, "text": " And if we had some different data set like PPI, which I'll soon add,"}, {"start": 707.0, "end": 714.0, "text": " the histograms would be like, would have much less matching than on this chart."}, {"start": 714.0, "end": 722.0, "text": " So yeah, that's a second really nice visualization that helps 
you understand the attention patterns that GATT is learning."}, {"start": 722.0, "end": 728.0, "text": " Okay. Having said that, in the paper you could see the t-SNE plot."}, {"start": 728.0, "end": 730.0, "text": " I did the same thing here."}, {"start": 730.0, "end": 736.0, "text": " Basically, you take the final from the final layer, you take the embeddings, and you use t-SNE, project them into 2D,"}, {"start": 736.0, "end": 747.0, "text": " and you get this nice chart here showing us basically that GATT has learned to cluster nodes of the same class in the same part of the 2D space,"}, {"start": 747.0, "end": 757.0, "text": " which is awesome, which means we can easily train a classifier to kind of just classify these edges into one of seven classes."}, {"start": 757.0, "end": 762.0, "text": " Okay. That was pretty much it. I've got the setup usage and everything here."}, {"start": 762.0, "end": 770.0, "text": " You can just go and read through the README, and you'll hopefully have no problems starting this project."}, {"start": 770.0, "end": 776.0, "text": " One more thing I want to note here is that you'll usually see these circular patterns."}, {"start": 776.0, "end": 785.0, "text": " So the thing is the graph drawing is a subfield for itself, like something similar to tokenizers if you're familiar with transformers."}, {"start": 785.0, "end": 789.0, "text": " It's something you just take for granted, but it's a field for itself."}, {"start": 789.0, "end": 797.0, "text": " So I usually use this Rhine Gold Tilford circular layout, and you'll get charts like this one,"}, {"start": 797.0, "end": 805.0, "text": " but you can use some other techniques and layouts like Kamadi Kawai, this one, and you'll get charts like this one."}, {"start": 805.0, "end": 815.0, "text": " So this one, the Rhine Gold Tilford, is really adequate for true-like graphs, and because this is basically MRE3,"}, {"start": 815.0, "end": 819.0, "text": " I've used this layout, and it gives really nice results."}, {"start": 819.0, "end": 824.0, "text": " But just be aware there are multiple layouts, and I've been using iGraph plotting package,"}, {"start": 824.0, "end": 830.0, "text": " so you can just go explore some other layouts, but for this particular problem, I think this does the job."}, {"start": 830.0, "end": 833.0, "text": " Okay. That was everything I wanted to cover in the README."}, {"start": 833.0, "end": 840.0, "text": " Now I'll jump into the Jupyter and OneNote and try and explain a couple of potential pain points."}, {"start": 840.0, "end": 848.0, "text": " And welcome to the Jupyter Notebook. So I'll give you time to go through the Jupyter Notebook by yourself, by your own pace."}, {"start": 848.0, "end": 856.0, "text": " What I want to do in this video is just kind of focus on two specific pain points, potential pain points for you,"}, {"start": 856.0, "end": 861.0, "text": " and just focus on that one. 
So first things first is data."}, {"start": 861.0, "end": 867.0, "text": " What I did here is you'll usually see, if you take a look at the official GATT implementation"}, {"start": 867.0, "end": 878.0, "text": " or some re-implementations like PyGATT project, you'll see a bunch of files which actually date back from the planetoid paper."}, {"start": 878.0, "end": 888.0, "text": " And the thing is, like most of those implementations, just copy-pasted the same snippets for processing the Quora."}, {"start": 888.0, "end": 896.0, "text": " And so I just decided to do the processing and just save those into files so that you can just import the node features like this,"}, {"start": 896.0, "end": 905.0, "text": " the node labels like this, and finally the structure of the graph, the edges and connections through one line of code."}, {"start": 905.0, "end": 910.0, "text": " Hopefully this will make it a bit easier for you to just not have to look through some pre-processing stuff."}, {"start": 910.0, "end": 922.0, "text": " Let me quickly just explain how these structures look like, and then I'll jump into explaining how the GATT method itself works."}, {"start": 922.0, "end": 929.0, "text": " Okay, so node features. So those of you who watched my previous video on GATT, this will probably be familiar,"}, {"start": 929.0, "end": 936.0, "text": " but let me just recap here because I'll be using these structures later when explaining the implementation of the GATT method itself."}, {"start": 936.0, "end": 945.0, "text": " So first things first, node features. It's a matrix like this. We basically, so Quora has 2,708 nodes,"}, {"start": 945.0, "end": 952.0, "text": " and I've been working so much with Quora that I know every single number on top of my head."}, {"start": 952.0, "end": 960.0, "text": " And the second dimension is 1,433. Now that may seem like an arbitrary number, and it's actually not."}, {"start": 960.0, "end": 969.0, "text": " I'll just explain how they got that number. So yeah, so that's the feature vectors for every single node."}, {"start": 969.0, "end": 973.0, "text": " And so how they got this number is the following. So they took all of the research papers."}, {"start": 973.0, "end": 981.0, "text": " They just created a simple histogram of words, meaning on the x-axis you have a word, on the y-axis you have the frequency of that word"}, {"start": 981.0, "end": 985.0, "text": " throughout all of those research papers. And now what they did is the following."}, {"start": 985.0, "end": 993.0, "text": " They just created a cutoff frequency, like maybe 10, which means just disregard the words that have a frequency less than 10"}, {"start": 993.0, "end": 1000.0, "text": " across all documents, and just keep all of the rest, and they ended up with 1,433 words."}, {"start": 1000.0, "end": 1007.0, "text": " So now how you create a feature vector for a specific research paper, which is a node in Quora, right?"}, {"start": 1007.0, "end": 1015.0, "text": " You do the following. You just go through the words of the research paper, and you see if some of the words from this vocabulary"}, {"start": 1015.0, "end": 1020.0, "text": " we just created is present in that research paper. So maybe if we had attention is all you need paper,"}, {"start": 1020.0, "end": 1026.0, "text": " and in the vocab we had the attention word, we put one where that field is."}, {"start": 1026.0, "end": 1040.0, "text": " So basically, let me just draw it here. 
If this is the feature vector, and this particular slot, so this is again 1,433,"}, {"start": 1040.0, "end": 1049.0, "text": " and if this one corresponds to attention, you take a specific research paper and you ask, does it have the word attention present inside?"}, {"start": 1049.0, "end": 1056.0, "text": " If it does, we put a 1, and by doing that you create a feature vector, and it will look something like this,"}, {"start": 1056.0, "end": 1062.0, "text": " and it will mostly be sparse, so most of these will be zeros, and I'll show you how it actually looks like in a second,"}, {"start": 1062.0, "end": 1069.0, "text": " but let's take this for a moment, and later you'll see in the code the feature normalization part,"}, {"start": 1069.0, "end": 1077.0, "text": " and what it does, it's a really simple thing. You just sum up all of these ones, and you just divide all of the numbers by the sum,"}, {"start": 1077.0, "end": 1086.0, "text": " and we'll end up with having 1 over 2 here, and 1 over 2 here, because we only had two ones in this particular example."}, {"start": 1086.0, "end": 1093.0, "text": " So in a more general case, you'll just have 1 over n, where n is the number of ones in your feature vector."}, {"start": 1093.0, "end": 1099.0, "text": " So that's the normalization part, and now let me just show you how the feature vectors look like."}, {"start": 1099.0, "end": 1108.0, "text": " Okay, so I just opened up the PyGad project, and you can see here on the screen the feature vectors for every single node in Quora,"}, {"start": 1108.0, "end": 1115.0, "text": " and most of the numbers are zeros. You see a couple of ones here, but it's pretty sparse representation."}, {"start": 1115.0, "end": 1124.0, "text": " And finally, the labels, I mentioned seven labels, so they're genetic algorithms, probabilistic methods, case-based theory, reinforcement learning, etc."}, {"start": 1124.0, "end": 1132.0, "text": " So what I did is I just assigned a number 0 through 6 to all of these strings, and we'll be using that as labels."}, {"start": 1132.0, "end": 1138.0, "text": " So that leads me to the second structure, and that's the node labels, and the shape is pretty straightforward."}, {"start": 1138.0, "end": 1148.0, "text": " So basically we have, again, 2,708 nodes here, and we only have one here, because we have a single class, right?"}, {"start": 1148.0, "end": 1157.0, "text": " So these numbers will be 0 through 6, maybe this one will be 5, so this node with this feature vector will have class 5."}, {"start": 1157.0, "end": 1167.0, "text": " So that's the node labels structure. And finally, we've got this adjacency list dictionary, which just contains the connections, the edges in the Quora,"}, {"start": 1167.0, "end": 1177.0, "text": " and the structure, it's a simple dictionary, meaning the keys are node IDs, maybe 0, and 0 will have associated with it all of the edges,"}, {"start": 1177.0, "end": 1186.0, "text": " maybe something like, I don't know, 1, 2, 3, 25, whatever. 
And those are the nodes that 0 connects to."}, {"start": 1186.0, "end": 1196.0, "text": " And finally, once we have this, the third most important structure that you'll see used throughout the code is something called the edge index."}, {"start": 1196.0, "end": 1203.0, "text": " And what edge index is, it's a simple way to, it's a dense way to represent your connectivity in your graph."}, {"start": 1203.0, "end": 1211.0, "text": " So I just copy-pasted that snippet that creates the edge index, and you can see it uses this structure, the adjacency list we have here."}, {"start": 1211.0, "end": 1215.0, "text": " It just needs a number of nodes, and we'll be adding self-edges, and we'll see that in a moment."}, {"start": 1215.0, "end": 1222.0, "text": " So what happens is the following. I just iterate through this dictionary object, and I create the edge index doing the following thing."}, {"start": 1222.0, "end": 1234.0, "text": " So, by the way, the shape is just 2, comma, E, where E is, I think, 13,264 edges."}, {"start": 1234.0, "end": 1244.0, "text": " And how we create that is really simple. So we have 0 and the corresponding list of target nodes here, so I'll just go through the list, and I'll see 0, 1, okay."}, {"start": 1244.0, "end": 1253.0, "text": " We add 0, 1 topple to the edge. Then we add 0, 2 topple to the edge."}, {"start": 1253.0, "end": 1265.0, "text": " Finally, we add 0, 3, 0, blah, blah, blah, 0, 25. And after I do that for every single node, so 0, 25, I don't know how to write today,"}, {"start": 1265.0, "end": 1270.0, "text": " and I'll just iterate through all of the keys, and finally, I'll just add the self-edges."}, {"start": 1270.0, "end": 1279.0, "text": " So that means after I finish all of this procedure, going through this dictionary, I'll just add 0, 0, meaning node 0 connects to itself,"}, {"start": 1279.0, "end": 1290.0, "text": " and the same goes for every single node up until 2,707. So that's the edge index, and you can see the shape is just a bunch of these topples,"}, {"start": 1290.0, "end": 1294.0, "text": " and there are this many of these topples, and that's the edge index structure."}, {"start": 1294.0, "end": 1304.0, "text": " And hopefully, this makes it easier for you to now tackle the Jupyter notebook yourself. I'll just cover the... I'll try and cover the method."}, {"start": 1304.0, "end": 1308.0, "text": " It's complicated. I'll use a dumb example, and we'll see if it works."}, {"start": 1308.0, "end": 1315.0, "text": " Going back to the Jupyter notebook, and just scrolling to the main part, to the meat of this project, so we've got some visualizations."}, {"start": 1315.0, "end": 1322.0, "text": " We saw this in the readme. You'll be able to just generate those yourself, just go through the Jupyter, and yeah, this is the main part."}, {"start": 1322.0, "end": 1326.0, "text": " So we have the high-level class, and we have the actual implementation."}, {"start": 1326.0, "end": 1339.0, "text": " And so the only part that will probably create you some problems to understand how it works is a couple of lines of code, so namely this."}, {"start": 1339.0, "end": 1347.0, "text": " So from this huge, huge, huge, huge Jupyter notebook, the only part that's actually maybe hard to understand is this one."}, {"start": 1347.0, "end": 1359.0, "text": " And so I'll try and now explain how this part works. 
Maybe you should first just go yourself through this code, and then continue watching this video."}, {"start": 1359.0, "end": 1365.0, "text": " Okay, I'll try and use a toy example here to explain how the implementation works."}, {"start": 1365.0, "end": 1381.0, "text": " So just a quick recap. Let me use this as a running example. Three nodes, just two edges here, and also self-connections are here."}, {"start": 1381.0, "end": 1388.0, "text": " And now I'll just add some IDs here. So this is node 0, this is node 1, and this is node 2."}, {"start": 1388.0, "end": 1409.0, "text": " So how GAT works is the following. So we have associated a feature vector with every single node, and what we do is we basically need to find the attention coefficient for this feature vector, and for this one, and for this one, in order to update the feature vector, this feature vector, okay?"}, {"start": 1409.0, "end": 1418.0, "text": " And so we first project those feature vectors into some smaller subspace, and then how we find those attentions is the following."}, {"start": 1418.0, "end": 1438.0, "text": " We just concatenate this representation with this one. We apply something called like a scoring function, and that one is basically densely connected like a linear layer here, and that will give us a score for this particular edge."}, {"start": 1438.0, "end": 1453.0, "text": " We do the same thing here, we do the same thing for the self-edge, and which means for the self-edge we'll have this same vector duplicated two times, and we just apply the same scoring function, so these weights here."}, {"start": 1453.0, "end": 1474.0, "text": " Once we have the scores, we apply the softmax, that will make them sum up to 1, and then we just multiply this particular vector with that specific attention coefficient, we'll multiply this one with its attention coefficient, and the same goes for this vector, and we'll end up with a new representation, and that's the updated representation."}, {"start": 1474.0, "end": 1490.0, "text": " I missed some details like Rayleigh uses, blah blah blah, but this is the core concept, so you want to find those attention coefficients, you want to multiply the projected vectors using those coefficients, and you want to aggregate all of those using the sum into a new updated representation."}, {"start": 1490.0, "end": 1494.0, "text": " So hopefully that kind of just refreshes your mind on how GAT works."}, {"start": 1494.0, "end": 1516.0, "text": " Okay, let's build up the structures we'll need in order to go through this example. So, first assumption is I only have a single attention head, I'll ditch the eight attention heads that we have in the actual implementation, I'll just work with a single attention head, otherwise it gets really hard to explain it, but it's the same thing, we just duplicate whatever I say here, eight times, that's it."}, {"start": 1516.0, "end": 1534.0, "text": " They are totally independent, so it makes sense to reduce this problem to a single attention head. So, first thing we have node vectors, so that means we have three here because we have three nodes, and these are 1,433."}, {"start": 1534.0, "end": 1551.0, "text": " The second thing we have is the edge index, and let's see how the edge index looks like. So let's first see which nodes connect to zero. So, one connects to zero, two connects to zero, and zero connects to zero."}, {"start": 1551.0, "end": 1565.0, "text": " Then let's see which nodes connect to one. 
So, zero connects to one, and one connects to one. And finally, which nodes connect to two, and that's zero connects to two, and to itself connects to itself, right?"}, {"start": 1565.0, "end": 1572.0, "text": " So this is the edge index structure for this particular graph. Okay?"}, {"start": 1572.0, "end": 1585.0, "text": " Once we have that, let's see how the method works. So what I do is, the first thing, you have this projection layer, and you just project this thing into some smaller subspace."}, {"start": 1585.0, "end": 1603.0, "text": " So this gets projected into eight dimensional vectors. So we'll have three by eight. And if we had multiple attention heads, the only thing that would be different, you would have eight by eight independent columns here, and that's the only difference."}, {"start": 1603.0, "end": 1617.0, "text": " And the actual implementation detail is that you'll project this vector into 64 dimensional vector, and then you just apply this view function, which basically just doesn't change anything."}, {"start": 1617.0, "end": 1626.0, "text": " It changes like the stride, a couple of parameters in the matrix itself, but it doesn't change the underlying memory layout. So that's the important thing to notice."}, {"start": 1626.0, "end": 1644.0, "text": " Okay, so once we have this projected thing, what do we do? We do the following. We need to first lift these. So we first need to calculate these scores for every single edge, because that's how get works."}, {"start": 1644.0, "end": 1657.0, "text": " It requires these scores, and then transforms them into attention coefficients, and let's see how we actually do it through code. So after we project them, we do the following. We apply these left and right scoring functions."}, {"start": 1657.0, "end": 1667.0, "text": " And let me just explain that, why that works. So basically, you can treat this as the left side and the right side, and you just apply, so let me draw it again."}, {"start": 1667.0, "end": 1674.0, "text": " So instead of holistically applying, concatenating the feature vectors, and then applying a single scoring function, you can do the following."}, {"start": 1674.0, "end": 1685.0, "text": " You can take this feature vector, put it here, and apply a single, the source, I called it the source scoring function, and you find the score here."}, {"start": 1685.0, "end": 1693.0, "text": " And you take the second feature vector, and you apply a separate scoring function, the target scoring function, and you get the score."}, {"start": 1693.0, "end": 1702.0, "text": " And if you just sum them up, that's exactly the same result as by just doing a single scoring function on the concatenated version of these vectors, okay?"}, {"start": 1702.0, "end": 1718.0, "text": " Hopefully that makes sense. So that's the main potential pain point for you. So it's conceptually, it's a bit different, but like it's the same thing at the end. It's semantically the same thing, okay?"}, {"start": 1718.0, "end": 1729.0, "text": " So what we'll do here is we'll take these projected vectors, we'll apply the left and the right scoring function, which will, so we'll end up with three by one."}, {"start": 1729.0, "end": 1739.0, "text": " So we have scores, so these are the source scores, and these are the target scores. Okay. So that's again three by one."}, {"start": 1739.0, "end": 1755.0, "text": " And now there is this thing called lifting in the code. 
So what lifting does is it uses this edge index structure to just copy these projected vectors and these source and target scores."}, {"start": 1755.0, "end": 1763.0, "text": " So what I mean by that is the following. So we'll take the source scores, and we'll take the source column. So this is the source column, and this is the target column."}, {"start": 1763.0, "end": 1775.0, "text": " Remember, this means one, the source node points to zero, the target node. So that's this directed edge. And because we have both, we just read the core as undirected graph."}, {"start": 1775.0, "end": 1785.0, "text": " So we'll just do the following. So one means we take score one and we copy it here. Let me just draw it like this."}, {"start": 1785.0, "end": 1793.0, "text": " So we'll have, so because this has how many edges, like seven. So this one will have seven edges here and one after being lifted."}, {"start": 1793.0, "end": 1804.0, "text": " So we'll take one, we copy score one here. We see two, we copy score two here. We see zero, we copy score zero here, et cetera."}, {"start": 1804.0, "end": 1816.0, "text": " And that's how we lift up the source scores. We do the same thing for the target scores, but we just use the target column of the index this time."}, {"start": 1816.0, "end": 1828.0, "text": " And the same procedure goes. So we'll have zero, so we'll copy zero here, blah, blah, blah. And we form the lifted versions of these structures."}, {"start": 1828.0, "end": 1836.0, "text": " And finally, we do the same thing for this projected vectors and we use the source index."}, {"start": 1836.0, "end": 1845.0, "text": " And it will hopefully make sense why we are using the source index here. But like for the time being, let me just copy it here."}, {"start": 1845.0, "end": 1860.0, "text": " So we'll end up with seven, seven rows and the same concept goes. So the same idea again, we just copy. So we have one, we copy the projected vector one here."}, {"start": 1860.0, "end": 1869.0, "text": " We have two, we copy the projected vector two here. And so on and so on. And that's how we form these lifted versions."}, {"start": 1869.0, "end": 1876.0, "text": " So now for the fun part. So what do we do now? So we take and we just add up these scores."}, {"start": 1876.0, "end": 1892.0, "text": " And we end up with something that calls scores per edge. And after applying a ReLU, we just apply a ReLU, Leaky ReLU, and we get the final scores per edge."}, {"start": 1892.0, "end": 1902.0, "text": " So why do we do the lifting? 
So the idea is really simple, because we have score for node one here and we have score for node zero here."}, {"start": 1902.0, "end": 1913.0, "text": " What we effectively did by summing up these two and putting them here is we calculated the score for this particular edge."}, {"start": 1913.0, "end": 1925.0, "text": " And that's what we want, because we need scores, we need those scores in order to calculate the attention coefficients and we need those in order to create the updated version of the feature vector."}, {"start": 1925.0, "end": 1932.0, "text": " So again, by doing this simple lifting, we calculated the scores only for those edges that we care about."}, {"start": 1932.0, "end": 1939.0, "text": " So if this was a fully connected graph, we'd have much more edges and we don't need to calculate the scores for the edges that don't exist."}, {"start": 1939.0, "end": 1945.0, "text": " And that's, so if you take a look at implementation one and two of my project, you'll see exactly that."}, {"start": 1945.0, "end": 1955.0, "text": " I'm calculating the scores for every single possible edge and then doing some smart masking, we just finally find the attention coefficients only for the edges."}, {"start": 1955.0, "end": 1963.0, "text": " But it's far less efficient than this implementation, even though this one is conceptually, as you can see, a bit harder to explain and to understand."}, {"start": 1963.0, "end": 1970.0, "text": " But it's super simple, like once you invest a little bit of time, you'll be able to understand everything."}, {"start": 1970.0, "end": 1973.0, "text": " OK, so we've got the scores per edge."}, {"start": 1973.0, "end": 1977.0, "text": " How do we convert these scores per edge into those attention coefficients we need?"}, {"start": 1977.0, "end": 1979.0, "text": " Well, the following we do the following."}, {"start": 1979.0, "end": 1986.0, "text": " We basically again take the target column of the edge index and we do something called a neighborhood aware softmax."}, {"start": 1986.0, "end": 1993.0, "text": " And that's something I just dubbed it like that. That's not official literature term. 
That's something just I dubbed it like that."}, {"start": 1993.0, "end": 1995.0, "text": " That's how I name my variables."}, {"start": 1995.0, "end": 1997.0, "text": " And what happens is the following."}, {"start": 1997.0, "end": 2001.0, "text": " So we take the edge index and we can see there are zeros here."}, {"start": 2001.0, "end": 2008.0, "text": " That means we want to make sure that these scores here sum up to one."}, {"start": 2008.0, "end": 2010.0, "text": " That's the neighborhood aware portion."}, {"start": 2010.0, "end": 2021.0, "text": " So using the edge index, you know, you know exactly which parts of this scores per edge structure need to sum up to one."}, {"start": 2021.0, "end": 2025.0, "text": " So because these correspond to node zero, we want them."}, {"start": 2025.0, "end": 2029.0, "text": " We want to we want to make sure that they sum up to one."}, {"start": 2029.0, "end": 2031.0, "text": " And then we have ones here."}, {"start": 2031.0, "end": 2035.0, "text": " So that means we want to make sure that this chunk here sums up to one."}, {"start": 2035.0, "end": 2040.0, "text": " And finally, because we have twos here, we want to make sure that these sum up to one."}, {"start": 2040.0, "end": 2042.0, "text": " So how we do that is the following."}, {"start": 2042.0, "end": 2045.0, "text": " We just we first raise this structure."}, {"start": 2045.0, "end": 2052.0, "text": " We just do exponential of this structure, because if you remember like the softmax, we got e to the i."}, {"start": 2052.0, "end": 2057.0, "text": " And then you've got sums of e to the like these are scores."}, {"start": 2057.0, "end": 2060.0, "text": " And basically, this is some particular score for node i."}, {"start": 2060.0, "end": 2063.0, "text": " And this is for the other neighbors."}, {"start": 2063.0, "end": 2068.0, "text": " And because of that, because we have that exponential, we want to exponentiate."}, {"start": 2068.0, "end": 2071.0, "text": " That's even a word, this structure."}, {"start": 2071.0, "end": 2074.0, "text": " So we have the same dimensions."}, {"start": 2074.0, "end": 2076.0, "text": " These are now just exponents."}, {"start": 2076.0, "end": 2077.0, "text": " So this is seven."}, {"start": 2077.0, "end": 2078.0, "text": " This is one."}, {"start": 2078.0, "end": 2080.0, "text": " In general, this will be e."}, {"start": 2080.0, "end": 2085.0, "text": " So for Quora, this will be, if we remember, 13,264, whatever."}, {"start": 2085.0, "end": 2092.0, "text": " And now we want to make sure that the summations go over these neighborhoods I just defined."}, {"start": 2092.0, "end": 2096.0, "text": " So that means in the code, you'll see something called scatter add."}, {"start": 2096.0, "end": 2098.0, "text": " And what scatter add does is the following."}, {"start": 2098.0, "end": 2101.0, "text": " It will just sum up these."}, {"start": 2101.0, "end": 2104.0, "text": " And so we'll get a smaller structure."}, {"start": 2104.0, "end": 2108.0, "text": " So this thing will actually end up being three."}, {"start": 2108.0, "end": 2112.0, "text": " Or in general, this will be the number of nodes in your graph."}, {"start": 2112.0, "end": 2116.0, "text": " And once we do that, we just want to lift it up again."}, {"start": 2116.0, "end": 2118.0, "text": " And you'll see in a moment why."}, {"start": 2118.0, "end": 2120.0, "text": " Hopefully you already understand."}, {"start": 2120.0, "end": 2130.0, "text": " So we'll lift it up again by copying the lifting up using 
the target portion of the edge index."}, {"start": 2130.0, "end": 2133.0, "text": " And once we have these numbers, now it's pretty simple."}, {"start": 2133.0, "end": 2143.0, "text": " We just divide these by these, because these here are the same numbers, and they are just the sum of these three elements."}, {"start": 2143.0, "end": 2147.0, "text": " So once we divide this, that's basically doing this portion."}, {"start": 2147.0, "end": 2149.0, "text": " And we end up having the attention coefficients."}, {"start": 2149.0, "end": 2160.0, "text": " So after we do division, after we divide, let me denote it like this, this vector with this one, we end up with the attention coefficients."}, {"start": 2160.0, "end": 2162.0, "text": " And now we're really close."}, {"start": 2162.0, "end": 2163.0, "text": " Now it's really simple."}, {"start": 2163.0, "end": 2165.0, "text": " Now we do the following."}, {"start": 2165.0, "end": 2170.0, "text": " We just take these attention coefficients and we multiply them with this."}, {"start": 2170.0, "end": 2177.0, "text": " And if you remember, we use the source portion of the edge index to project, to lift these up."}, {"start": 2177.0, "end": 2183.0, "text": " And now we just multiply these, and that's the weighted part of the..."}, {"start": 2183.0, "end": 2187.0, "text": " And now we just need to aggregate all of those, and that's the get method, right?"}, {"start": 2187.0, "end": 2198.0, "text": " So after we multiply this coefficient to this projected vector and to this and to this, we'll get what we semantically did for get, right?"}, {"start": 2198.0, "end": 2205.0, "text": " And hopefully now we can see why we use the source portion, the source column of the edge index to lift the projected vectors."}, {"start": 2205.0, "end": 2213.0, "text": " Because this now is node 1 representation, this is node 2 representation, or the projected feature vector."}, {"start": 2213.0, "end": 2221.0, "text": " And once we multiply it with this coefficient, we basically get what we want, and that's we accumulate, we aggregate."}, {"start": 2221.0, "end": 2233.0, "text": " So this is... so 1, that's this one, 2, that's this one, and we just multiply with the attention coefficients and we get the weighted feature vectors."}, {"start": 2233.0, "end": 2236.0, "text": " And now we just want to do the aggregation. How do we do the aggregation?"}, {"start": 2236.0, "end": 2250.0, "text": " So after multiplying these two structures, we'll end up with the weighted structure, again 7 and 8 here, because remember the projected vectors had dimension 8."}, {"start": 2250.0, "end": 2267.0, "text": " And finally, again, simple scatter add using the target index will make sure that we sum up the right nodes, the right nodes, yep, and we'll finally end up with 3 by 8."}, {"start": 2267.0, "end": 2272.0, "text": " And that's... those are the new representations that we wanted."}, {"start": 2272.0, "end": 2290.0, "text": " Again, we use the target column of the edge index to do the scatter add, and that will make sure that we... so we have zeros here, that means these feature vectors connect to zero, and we'll just scatter add them, and that's the final representation here."}, {"start": 2290.0, "end": 2297.0, "text": " Hopefully this made it a bit easier for you to understand it. 
I don't know if I explained it perfectly, I tried my best."}, {"start": 2297.0, "end": 2306.0, "text": " I just want to wrap it up, reminding you again to just check out other projects, and this one got un-proportionally more popular than the other ones."}, {"start": 2306.0, "end": 2311.0, "text": " I don't like that, I think that the other projects are worthy as well."}, {"start": 2311.0, "end": 2318.0, "text": " And if I refresh this, so if you remember at the beginning we had... I had almost 600 stars, let me see if I went over 600 stars."}, {"start": 2318.0, "end": 2332.0, "text": " If I refresh it, 607 stars. So this project is so popular, I'm not sure why, I know I did a good job, but please take a look at other projects, and go ahead and play with them."}, {"start": 2332.0, "end": 2343.0, "text": " If you're a beginner, I recommend you go and play with Neural Style Transfer. This is the fast version of the Neural Style Transfer, this is the Johnson's method, which is much faster, but a bit worse results."}, {"start": 2343.0, "end": 2349.0, "text": " And Deep Dream is awesome, so I recommend you start with these projects, they're awesome."}, {"start": 2349.0, "end": 2361.0, "text": " I also have one more, which is hidden, which is not pinned, which actually creates videos using this naive approach, so that means there is no temporal loss included, so they'll be a bit jittery."}, {"start": 2361.0, "end": 2368.0, "text": " But yeah, I won't get into those details, just check them out, and... yep."}, {"start": 2368.0, "end": 2383.0, "text": " As a fun fact, this image here was created using this Deep Dream project, so that's something I created, and you can create your own images yourself, so yeah, go ahead, enjoy, play, and hopefully you found this video useful."}, {"start": 2383.0, "end": 2402.0, "text": " If you did, you know the drill, just subscribe, hit that bell icon, and that will help me grow this channel, if you found some value in it, and until next time, keep learning deep."}]
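The segments above describe the edge index, the "lifting" step, and the neighborhood-aware softmax only in words, so here is a minimal, single-head PyTorch sketch of that pipeline for the toy 3-node graph used in the explanation. The helper and variable names (build_edge_index, a_src, a_tgt, h_lifted) and the toy adjacency list are illustrative assumptions, not the exact code of the pytorch-GAT repository, which additionally handles multiple attention heads, dropout, bias, and skip connections.

```python
# Minimal single-head sketch of the GAT steps described above (assumed shapes
# follow the Cora example: 1433 input features, 8 output features per head).
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_edge_index(adj_list, num_nodes):
    """adj_list: dict {node_id: [nodes it connects to]} -> LongTensor of shape (2, E)."""
    src, tgt = [], []
    for s, neighbors in adj_list.items():
        for t in neighbors:
            src.append(s)
            tgt.append(t)
    for i in range(num_nodes):          # add self-edges, as in the walkthrough
        src.append(i)
        tgt.append(i)
    return torch.tensor([src, tgt], dtype=torch.long)

f_in, f_out = 1433, 8                   # Cora feature size, single attention head

# toy graph from the explanation: edges 0-1 and 0-2 (stored in both directions)
adj = {0: [1, 2], 1: [0], 2: [0]}
edge_index = build_edge_index(adj, num_nodes=3)        # shape (2, 7)
src_idx, tgt_idx = edge_index[0], edge_index[1]

x = torch.rand(3, f_in)                                # node feature vectors
proj = nn.Linear(f_in, f_out, bias=False)              # shared projection W
a_src = nn.Linear(f_out, 1, bias=False)                # "source" half of the scoring fn
a_tgt = nn.Linear(f_out, 1, bias=False)                # "target" half of the scoring fn

h = proj(x)                                            # (N, f_out)
score_src = a_src(h).squeeze(-1)                       # (N,)
score_tgt = a_tgt(h).squeeze(-1)                       # (N,)

# "lifting": copy per-node quantities onto every edge using the edge index
scores_per_edge = F.leaky_relu(score_src[src_idx] + score_tgt[tgt_idx],
                               negative_slope=0.2)     # (E,)
h_lifted = h[src_idx]                                  # (E, f_out)

# neighborhood-aware softmax: normalize over edges that share the same target node
# (a real implementation would subtract the max score first for numerical stability)
exp_scores = scores_per_edge.exp()
neigh_sum = torch.zeros(3).index_add_(0, tgt_idx, exp_scores)   # (N,)
attn = exp_scores / (neigh_sum[tgt_idx] + 1e-16)       # (E,) attention coefficients

# weighted aggregation: sum the attended source vectors into each target node
out = torch.zeros(3, f_out).index_add_(0, tgt_idx, h_lifted * attn.unsqueeze(-1))
print(out.shape)                                       # torch.Size([3, 8])
```

The trick mirrored here is exactly the one from the explanation: per-node scores and projected vectors are copied onto edges via edge_index, the scores are normalized per target neighborhood with one scatter-style sum (index_add_), and the weighted source vectors are scattered back into updated node representations with a second one.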
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=28kGxllog28
Graph Neural Network Project Update! (I'm coding GAT from scratch)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ A short update on some of my previous projects as well as my current project: the Graph Attention Network (GAT). You'll learn about: ✔️My previous deep learning projects ✔️Some of the challenges I faced implementing GAT ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Neural Style Transfer (original): https://github.com/gordicaleksa/pytorch-neural-style-transfer ✅ Neural Style Transfer (fast): https://github.com/gordicaleksa/pytorch-neural-style-transfer-fast ✅ NST + video segmentation: https://github.com/gordicaleksa/pytorch-naive-video-neural-style-transfer ✅ DeepDream: https://github.com/gordicaleksa/pytorch-deepdream ✅ GANs: https://github.com/gordicaleksa/pytorch-gans ✅ Vaswani's transformer: https://github.com/gordicaleksa/pytorch-original-transformer ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 My previous deep learning projects 2:10 GAT 3:20 Profiling three/four different GAT implementations 5:10 Profiling different sparse formats (COO, LIL, CSR, ...) 6:10 t-SNE, UMAP 6:30 Patreon shoutout 6:45 Love you folks Credits: some of the images I used belong to other creators. I created this video for purely educational purposes without the goal of monetizing it. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphattentionnetwork #projectupdate #deeplearning
What's up, folks? So this video is going to be an update and not a classical video on my channel where I'm covering some research paper. Basically, over the last seven days I've been coding this Graph Attention Network, or GAT for short, from scratch; I actually started somewhere around New Year's and I've just continued now. I'll be skipping the classical research paper overview, because those of you who know me know that I mix a lot of theoretical work with a lot of hands-on deep learning projects. So now the time has come in my workflow to pause the theory a little bit and focus on coding. As you probably know, my GitHub profile has a lot of projects; those are some of the projects I've been developing throughout the last year, so make sure to check them out. I've got a neural style transfer repo on my GitHub you can play with. I've implemented both Gatys's approach, which is the original optimization-based approach, and Johnson's approach, which is the feed-forward one that just uses a CNN to stylize your images. I've even got some really cool projects where you can segment yourself and stylize everything around you, or stylize yourself, whatever. So that's cool, check it out as well. Then I've got DeepDream, which was a really awesome artistic project; you can learn a lot about CNNs and deep learning in general by doing that project, so make sure to check that one out as well. Then I've been developing generative adversarial networks as well: I developed the original one, the conditional GAN, and finally the DCGAN, which kind of started the Cambrian explosion in the world of generative adversarial networks, so that one is really cool. Continuing, I was working on transformers and I actually open-sourced the original Vaswani transformer, so I really like to think that that project is a super good place to start supplementing the paper and understanding how it actually works, every single detail; it's got a lot of comments, so hopefully you'll find that one useful as well. 
So now I'm working on GAT, the Graph Attention Network, and as with pretty much any deep learning project, I conceptually split it into a couple of parts: the first one is data, then we have the model part, and finally we have the training loop and some additional visualizations, the stuff I usually put into a playground file. When it comes to data, because these are graphs, I've been doing some stuff I usually don't do when I'm exploring data: network analysis, like plotting the degree distribution of the Cora network and some other statistics. I found this really good tool, iGraph, and I actually plotted what Cora looks like, and that kind of gives me a gut feeling for how the graph looks, which is useful. You probably don't need to know all of those details to make this project successful, but it's really cool and you learn a lot of different tools in the process. Finally, the model part: I think I have four different implementations. The first one is similar to PyTorch Geometric; if you're not familiar with that project, it's a really awesome project that implements most of the main graph neural networks, so do check it out. But the thing is, my implementation is actually almost twice as fast, because I was using some specialized PyTorch functions: instead of basic NumPy-style indexing I was using index_select, and instead of a generic sum function I was using torch.sum. I also don't have the overhead of having to fit my GAT implementation inside their message passing framework, so it's cleaner and it's faster, but it's only for the Graph Attention Network. So hopefully you'll find it useful; I'll open source it, I think, next weekend, so stay tuned for that one. As for the other implementations, one is using the torch sparse API, which is currently in beta, so that's going to be fun, because those sparse functions don't handle backprop automatically and I'll have to implement the backward function explicitly. The third one is conceptually the easiest to understand but computationally the most inefficient, so that's the gradation. The training loop part is really easy because Cora is just a simple classification problem, so nothing new there: just cross entropy plus some masking, because it's a transductive training setting, which means the loss is computed only on the training nodes while the model can still see the test and validation nodes during the procedure. So nothing interesting there. Then I've got the playground, where I've been profiling different sparse matrix formats. You've probably heard of some of them, like COO, the coordinate format, then DOK, LIL, CSR, CSC, and then DIA and BSR, so about seven formats that SciPy supports. What I did is analyze how fast they are, and I actually figured out that some of the implementations out there, like the GCN by Thomas Kipf and even the official GAT by Petar Veličković, used the LIL format for arithmetic operations like summations, and that's actually suboptimal, because you want to use that format while 
you're building up your graph and modifying the sparsity structure, but you don't want to use it for arithmetic operations. So yeah, that was a fun bit of experimentation. And of course, I love visualizing stuff: I played a little bit with t-SNE, where I visualized the embeddings that come out of the trained GAT model, and I also played with UMAP, which is a relatively new algorithm. Most people just use t-SNE as the default method, but UMAP is also cool; in this particular case I actually found that t-SNE is as good as UMAP, so I just kind of dumped it all together. Finally, I want to make a shout-out to my patrons: to Petar Veličković from DeepMind, who is the AI overlord of this channel, and to Jimmix, who is the AI disciple of this channel. I just wanted to make a shout-out and thank you for supporting my channel. No less important is the whole community: I love all of your comments, keep them coming. I just want to read a couple of them here. The first one, a great one: "Question: have you noticed that the arbitrary style transfer paper has more citations than the universal style transfer one?" I love those geeky ones, as well as the more simple ones like "Geez man, do you ever take a holiday? lol. I appreciate all your videos, so just keep them coming." I love your comments, I read all of them, and I'm going to try to answer all of them. "Hi Aleksa, can you make a video about your background and your journey into AI/ML?" Basically, if I see more of these comments, I'll probably make some time and create a video where I walk you through my journey into deep learning. I also want to make a shout-out to Phil, who is also supporting my channel; thank you, your channel is amazing as well, and I'd love to collaborate with you one day. Yeah, that was pretty much it. You know the drill: subscribe, hit the like button, and share this video; that's how you can help me build up this channel and the community. I want to include you as much as I can in all of this, so if you have any suggestions on how I can do that, please write them down in the comment section, and until next time, keep learning deep.
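To make the LIL-versus-CSR point above concrete, here is a small SciPy sketch. It is not the actual playground script from the repository; the matrix size, density, and the row-sum benchmark are made-up assumptions chosen only to illustrate why LIL is handy while the sparsity structure is still being edited, whereas CSR/CSC are usually much faster for arithmetic.

```python
# Toy comparison of LIL vs CSR for an arithmetic operation (row sums),
# on a Cora-sized random sparse matrix (assumed size and density).
import time
import numpy as np
import scipy.sparse as sp

n = 2708                                              # Cora has 2,708 nodes
rng = np.random.default_rng(0)
dense = (rng.random((n, n)) < 0.002).astype(np.float32)   # ~0.2% non-zeros

lil = sp.lil_matrix(dense)   # convenient for incremental construction / edits
csr = lil.tocsr()            # better for arithmetic and row slicing

def bench(mat, label, repeats=50):
    t0 = time.perf_counter()
    for _ in range(repeats):
        _ = mat.sum(axis=1)  # row sums, e.g. for degrees or normalization
    print(f"{label}: {time.perf_counter() - t0:.4f} s")

bench(lil, "LIL")
bench(csr, "CSR")
```

On a typical machine the CSR row sums finish far quicker than the LIL ones, which is exactly the point made in the update: build or edit the graph in LIL (or DOK), then convert to CSR/CSC before doing the math.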
[{"start": 0.0, "end": 5.32, "text": " What's up folks? So this video is going to be an update and not a classical video"}, {"start": 5.32, "end": 10.36, "text": " on my channel where I'm just covering some research paper. So basically what"}, {"start": 10.36, "end": 15.3, "text": " I've been doing is over the last seven days I think I've been coding this graph"}, {"start": 15.3, "end": 19.16, "text": " attention network or GATT for short from scratch and I actually started"}, {"start": 19.16, "end": 23.080000000000002, "text": " somewhere around New Year's time and I just continued now. I'll be just skipping"}, {"start": 23.080000000000002, "end": 28.96, "text": " the the classical research paper overview because for those of you who"}, {"start": 28.96, "end": 33.64, "text": " know me you know that I do I mix a lot of theoretical work but I also do a lot"}, {"start": 33.64, "end": 38.760000000000005, "text": " of hands-on like deep learning projects and stuff. So now the time has come"}, {"start": 38.760000000000005, "end": 43.88, "text": " pretty much in my like workflow to just kind of pause the theory a"}, {"start": 43.88, "end": 49.28, "text": " little bit and focus on coding. As you probably know like my github profile"}, {"start": 49.28, "end": 52.88, "text": " has a lot of projects those are some of the the projects I've been developing"}, {"start": 52.88, "end": 56.6, "text": " throughout the last year. Make sure to check them out I've got a neural style"}, {"start": 56.6, "end": 60.6, "text": " transfer on my github you can play with that. I've implemented both Gatiss's"}, {"start": 60.6, "end": 63.800000000000004, "text": " approach so that's the original optimization based approach. I've also"}, {"start": 63.800000000000004, "end": 68.68, "text": " got a Johnson's approach which is the fast-forward like just using CNN to"}, {"start": 68.68, "end": 74.64, "text": " stylize your images. I've got us even some really cool projects where you can"}, {"start": 74.64, "end": 78.84, "text": " just basically segment yourself and stylize everything around or stylize"}, {"start": 78.84, "end": 83.12, "text": " yourself whatever. So that's cool check it out as well. Then I've got deep dream"}, {"start": 83.12, "end": 86.72, "text": " which was really awesome artistic project and you can learn a lot about"}, {"start": 86.72, "end": 90.68, "text": " CNNs and deep learning in general by doing this project so make sure to check"}, {"start": 90.68, "end": 94.64, "text": " that one as well. Then I've been developing generative adversarial"}, {"start": 94.64, "end": 98.88000000000001, "text": " networks as well so I did like develop the original one I did develop the"}, {"start": 98.88000000000001, "end": 103.64, "text": " conditional GAN and finally the DC GAN which kind of started the Cambrian"}, {"start": 103.64, "end": 107.52000000000001, "text": " explosion in the world of generative adversarial networks so that one is"}, {"start": 107.52000000000001, "end": 111.56, "text": " really cool. 
Then continuing I was developing the transformer I was working"}, {"start": 111.56, "end": 114.96000000000001, "text": " on transformers and I actually open-sourced the original was finally"}, {"start": 114.96000000000001, "end": 120.48, "text": " transformer so I really like to think that that project is a super super good"}, {"start": 120.48, "end": 125.16, "text": " place to add to start supplementing the paper and understanding how it actually"}, {"start": 125.16, "end": 129.68, "text": " works so every single detail so we got it's got a lot of comments so yeah"}, {"start": 129.68, "end": 134.16, "text": " hopefully you'll find that one useful as well. So now I'm working on GAT graph"}, {"start": 134.16, "end": 139.48000000000002, "text": " attention at work and as any as pretty much any deep learning project you can"}, {"start": 139.48, "end": 143.88, "text": " conceptually I kind of split it into a couple parts like the first one is data"}, {"start": 143.88, "end": 148.0, "text": " then we have the the model part and finally we have the training loop and"}, {"start": 148.0, "end": 152.39999999999998, "text": " some additional visualizations the stuff I usually put in playground a cool"}, {"start": 152.39999999999998, "end": 158.04, "text": " playground whatever and basically when it comes to data because these are"}, {"start": 158.04, "end": 162.51999999999998, "text": " graphs I've been doing some stuff that they usually don't do when I'm exploring"}, {"start": 162.51999999999998, "end": 166.67999999999998, "text": " data so I've been doing network analysis like just plotting the degree"}, {"start": 166.68, "end": 171.52, "text": " distribution of my network of core network and some other statistics I did"}, {"start": 171.52, "end": 176.92000000000002, "text": " some I found this really good tool iGraph and I actually plotted how core"}, {"start": 176.92000000000002, "end": 180.56, "text": " looks like and that kind of just gives me a gut feeling for for how the graph"}, {"start": 180.56, "end": 184.12, "text": " looks like and that's that's kind of useful you feel better about yourself"}, {"start": 184.12, "end": 188.68, "text": " nothing else you probably don't need to know all of those details to make a to"}, {"start": 188.68, "end": 192.68, "text": " make a this project successful but it's it's really cool and you learn a lot in"}, {"start": 192.68, "end": 198.64000000000001, "text": " the process to learn different tools you learn yeah basically that and finally"}, {"start": 198.64000000000001, "end": 203.96, "text": " the the model part is so I have like four I think I have four different"}, {"start": 203.96, "end": 208.12, "text": " implementations so the first one is similar to pie tour geometric if you're"}, {"start": 208.12, "end": 212.24, "text": " not familiar with that project that's a really super awesome project that was"}, {"start": 212.24, "end": 219.24, "text": " implementing most of the main graph networks so GNN so do check it out but"}, {"start": 219.24, "end": 222.68, "text": " the thing is my implementation is actually like almost twice as fast"}, {"start": 222.68, "end": 226.64000000000001, "text": " because I was using some specialized pie torch functions like instead of using"}, {"start": 226.64000000000001, "end": 231.44, "text": " basic non-py indexing I was using index select I was using instead of using like"}, {"start": 231.44, "end": 237.64000000000001, "text": " some suffix function I was using torch some I also don't have the overhead of"}, 
{"start": 237.64000000000001, "end": 242.36, "text": " having to fit my my get project inside of this message passing framework as"}, {"start": 242.36, "end": 248.44, "text": " they do so it's it's it's cleaner and it's faster so but it's only for graph"}, {"start": 248.44, "end": 252.2, "text": " attention network so hopefully you'll find it useful I'll open source it I"}, {"start": 252.2, "end": 257.8, "text": " think next weekend so stay tuned for their one and the other implementations"}, {"start": 257.8, "end": 264.0, "text": " one is using towards sparse API which is currently in beta so that's going to be"}, {"start": 264.0, "end": 268.15999999999997, "text": " fun because I'll have to develop the back prop function explicitly so it"}, {"start": 268.15999999999997, "end": 272.04, "text": " doesn't handle so those sparse functions don't handle the back prop"}, {"start": 272.04, "end": 276.48, "text": " automatically the third one is the conceptually the the easiest to"}, {"start": 276.48, "end": 280.48, "text": " understand but it's computationally most inefficient so yeah so that's the"}, {"start": 280.48, "end": 284.22, "text": " gradation the training loop part is really easy because core is just a"}, {"start": 284.22, "end": 288.36, "text": " simple classification problem so nothing new there just cross entropy like some"}, {"start": 288.36, "end": 293.24, "text": " masking because we have because it's a transactive training setting and that"}, {"start": 293.24, "end": 300.0, "text": " means you you have to mask out the training nodes but you can see the test"}, {"start": 300.0, "end": 305.32, "text": " nodes as well as the validation nodes during the procedure so yeah that's"}, {"start": 305.32, "end": 309.76, "text": " nothing interesting there then I've got the playground where I've been profiling"}, {"start": 309.76, "end": 315.2, "text": " different sparse formats for different matrices so like you probably heard of"}, {"start": 315.2, "end": 320.56, "text": " some of them like COO the coordinate form format then we have D okay that we"}, {"start": 320.56, "end": 328.52, "text": " have little we have CSR CSS see as no CSC sorry then we've got not CSS that's"}, {"start": 328.52, "end": 335.0, "text": " that's that's not the best format and then we've got Daya and BSR so about"}, {"start": 335.0, "end": 340.4, "text": " about seven formats which scipy supports and what I've did is I just analyzed how"}, {"start": 340.4, "end": 344.16, "text": " fast they are and actually figure out that some of the implementations out"}, {"start": 344.16, "end": 347.84, "text": " there like GCN by Thomas Kupf and even get the official get by Petar"}, {"start": 347.84, "end": 353.76, "text": " Velichkovic used they've used a little format for some arithmetic operations"}, {"start": 353.76, "end": 358.24, "text": " like summations and stuff like that and that's actually suboptimal because you"}, {"start": 358.24, "end": 361.36, "text": " want to use that one as you're building up your graph as you're just modifying"}, {"start": 361.36, "end": 365.92, "text": " the sparsity structure but you don't want to use it for arithmetic operations"}, {"start": 365.92, "end": 371.48, "text": " so yeah that's a that's a fun fun it was a fun experimentation and also of course"}, {"start": 371.48, "end": 375.52000000000004, "text": " I love visualizing stuff I played a little bit with t-sne where I visualized"}, {"start": 375.52000000000004, "end": 380.52000000000004, "text": " the embeddings that come out 
from the train get model so yeah I also play with"}, {"start": 380.52000000000004, "end": 384.56, "text": " UMAP that's a that's a relatively new algorithm most of the people are just"}, {"start": 384.56, "end": 388.40000000000003, "text": " using t-sne as the default method but UMAP is also cool but in this particular"}, {"start": 388.4, "end": 392.96, "text": " case I actually found that t-sne is as good as UMAP so I just kind of dumped it"}, {"start": 392.96, "end": 396.79999999999995, "text": " all together and finally I want to make a shout out to my patrons to Petar"}, {"start": 396.79999999999995, "end": 401.35999999999996, "text": " Velichkovic from DeepMind who is the AI overlord of this channel and to"}, {"start": 401.35999999999996, "end": 405.15999999999997, "text": " Jimmix who is the AI disciple of this channel nothing less important just"}, {"start": 405.15999999999997, "end": 408.85999999999996, "text": " wanted to make a shout out and thank you for for supporting my channel nothing"}, {"start": 408.85999999999996, "end": 413.12, "text": " less important the whole community I love all of your comments keep them"}, {"start": 413.12, "end": 417.79999999999995, "text": " going I just want to read a couple of them here so the first one great one"}, {"start": 417.8, "end": 421.48, "text": " question have you noticed that arbitrary style transfer paper has more citations"}, {"start": 421.48, "end": 426.64, "text": " than universal style transfer one so I love those geeky ones as well as these"}, {"start": 426.64, "end": 431.64, "text": " more simple ones like geez man do you ever take a holiday lol I appreciate"}, {"start": 431.64, "end": 436.24, "text": " all your videos so just keep them coming I love your comments and I read all of"}, {"start": 436.24, "end": 440.52, "text": " your comments and I'm gonna try and answer all of them hi Alexa can you make"}, {"start": 440.52, "end": 445.52, "text": " a video about your background and your journey into AI ML yeah basically if I"}, {"start": 445.52, "end": 450.84, "text": " see more of these comments I'll probably make some time and create a video where"}, {"start": 450.84, "end": 454.64, "text": " I'll walk you through my journey into deep learning so also want to make a"}, {"start": 454.64, "end": 459.56, "text": " shout out to to Phil who is also supporting my channel and thank you your"}, {"start": 459.56, "end": 464.71999999999997, "text": " channel is also amazing so love to collaborate one day with you yeah that"}, {"start": 464.71999999999997, "end": 469.08, "text": " was pretty much it you know the drill subscribe hit the like button and share"}, {"start": 469.08, "end": 472.28, "text": " this video that's how you can help me build up this channel build up the"}, {"start": 472.28, "end": 477.4, "text": " community I want to include you as more as I can in in my in all of this so if"}, {"start": 477.4, "end": 480.55999999999995, "text": " you have any suggestions how I can do that please write them down in the"}, {"start": 480.56, "end": 505.16, "text": " comment section and until next time keep learning deep"}]
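As a side note on the sparse-format profiling discussed in the segments above: the observation that LIL is meant for building or modifying the sparsity structure, while formats like CSR should be used for arithmetic, is easy to verify with SciPy. Below is a minimal sketch (matrix size, density and the benchmarked operation are my own choices, not taken from the video) that times a sparse matrix-vector product in both formats.

```python
# Compare how fast LIL and CSR handle an arithmetic operation (mat-vec product).
import time

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(5000, 5000, density=0.001, format="csr", random_state=0)
x = rng.standard_normal(5000)

A_lil = A.tolil()  # LIL: good for incrementally changing the sparsity structure
A_csr = A.tocsr()  # CSR: good for arithmetic (row slicing, mat-vec products)

def bench(mat, reps=50):
    start = time.perf_counter()
    for _ in range(reps):
        mat @ x
    return (time.perf_counter() - start) / reps

print(f"LIL mat-vec: {bench(A_lil) * 1e3:.2f} ms")
print(f"CSR mat-vec: {bench(A_csr) * 1e3:.2f} ms")  # typically much faster
```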
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=0tw66aTfWaI
Temporal Graph Networks (TGN) | GNN Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ A deep dive into the temporal graph networks paper. You'll learn about: ✔️ What are dynamic graphs? ✔️ How to get a vectorized representation of time ✔️ All the nitty-gritty details behind the paper ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ https://arxiv.org/abs/2006.10637 ✅ Chris Olah on LSTMs: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Dynamic graphs 03:00 Suboptimal strategies 05:30 Terminology, temporal neighborhood 07:30 High-level overview of the system 08:35 We need to go deeper 13:30 Using temporal information to sample 14:10 Information leakage and the solution 16:55 Main modules explained 21:20 Memory staleness problem 24:00 Temporal graph attention 26:00 Vector representation of time 29:15 Batch size tradeoff 31:00 Results and ablation studies 33:55 Recap of the system 36:55 Some confusing parts ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #temporalgraphnetworks #dynamicgraphs #graphml
I want to make a shout-out to Shantanu who recommended I create a video on TGN, temporal graph networks, and it was about time since dynamic graphs are such an interesting concept, an interesting research direction, and many real-world applications such as social media networks rely on modeling these dynamic graphs. So what's a dynamic graph? In contrast with static graphs, here the nodes can be added, deleted, modified, and the same goes for edges. So we basically have two types of events. We have node-wise events and we have interaction or edge events. So let's say we have some simple network here, a graph, and it can be a multi-graph in the general case, and basically I'll be using Twitter as a running example since the authors of this paper, Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti and Michael Bronstein, are all from Twitter. Let's say Anna decides to join Twitter. So basically she signs up, she fills out the sign-up form and now she's a new node in the Twitter social graph, and that's basically the node addition event. Now let's say after a couple of days she was just retweeting people, and every single retweet is basically an edge in that graph. Now Ben joins Twitter and because those two broke up, Anna is pissed off and she just deletes the Twitter account altogether, and that's the node deletion event. Now Ben is also pissed off and so he decides to update his user profile, updates his description, and by updating the description he's basically changing his feature vector. You can think of it as maybe we take this text and we pass it through a BERT encoder, and we basically have the CLS token here, hopefully you're familiar with BERT. If you're not, just treat it as a black box. We input the text here, we get out the condensed representation here, and we use exactly this thing as the feature vector. So that's the node modification event. Now after a couple of days of activity he decides to retweet Andrew Ng's cool post, and basically now whatever the retweet text is, that's the edge feature vector. Now maybe he made a typo so he decides to modify the text, even though that's not possible on Twitter, but let's say it is for the sake of argument, and basically that's the edge modification event where the new text is again fed through BERT and we get the new feature vector. Finally he decides to just delete the tweet altogether, and that's the edge deletion event. So that's how the dynamic graph works in a nutshell. It's basically a multi-graph because every single node can have multiple edges, so the nodes are connected by multiple edges and that's what's known as a multi-graph. Okay, with that out of the way, let's jump into the paper and see what they say. So they say here a few approaches have been proposed for dealing with graphs that are dynamic in nature. So basically most of the research in graph ML was focusing on static graphs and not on dynamic graphs, and while it is possible to apply static graph deep learning models to dynamic graphs by ignoring the temporal evolution, this has been shown to be suboptimal. So what that means is the following. You have this graph here and basically every single edge has a timestamp attached to it.
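To make the event-based view of a dynamic graph concrete, here is a minimal sketch of how such a graph can be stored as a chronologically ordered stream of timestamped node and edge events. The class and field names are my own, purely for illustration, and are not taken from the paper or its reference implementation.

```python
# Minimal sketch: a continuous-time dynamic graph as a stream of timestamped events.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Event:
    t: float                      # timestamp of the event
    kind: str                     # "node_add", "node_update", "node_delete",
                                  # "edge_add", "edge_update", "edge_delete"
    src: int                      # node the event originates from
    dst: Optional[int] = None     # destination node (None for node-wise events)
    feats: Optional[list] = None  # e.g. a BERT embedding of the tweet / profile text


@dataclass
class DynamicGraph:
    events: List[Event] = field(default_factory=list)

    def add(self, event: Event) -> None:
        self.events.append(event)
        self.events.sort(key=lambda e: e.t)  # keep the stream chronologically sorted

    def edges_before(self, t: float) -> List[Event]:
        # Edge events that happened strictly before time t (used later for
        # temporal neighborhoods).
        return [e for e in self.events if e.kind.startswith("edge") and e.t < t]


g = DynamicGraph()
g.add(Event(t=1.0, kind="node_add", src=0))                      # Anna joins
g.add(Event(t=2.0, kind="edge_add", src=0, dst=1, feats=[0.1]))  # Anna retweets Ben
print(len(g.edges_before(3.0)))  # -> 1
```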
So let's say Ben here has got maybe three retweets of Andrew Ng, and this one happened at t1, this one happened at t2, this one happened just recently, like a couple of seconds ago, at t3. You can treat this as a static graph, just a static multi-graph, or you can acknowledge the existence of these timestamps and use them when you subsample your neighborhood later, and we'll see why we do that a bit later, but when we subsample the neighborhood we can just take the most recent edges, and by doing that we get much better performance than by treating this as a static graph and doing uniform sampling of the neighborhood. So yeah, basically that's suboptimal: you can treat it as a static graph, but it's suboptimal, okay? And yeah, so learning on dynamic graphs is relatively recent and most of the previous work was focused on these discrete-time dynamic graphs, which is just a sequence of snapshots of the graph. So basically you have your graph that's evolving, and what you do in this discrete-time dynamic graph is the following. You just take a snapshot at equidistant moments, so maybe you had some graph at t1 that looked like this, and then after some period, and the important part here is that this period is pretty much a constant, so you have equidistant moments when you take the snapshot, and maybe a couple more edges and nodes appeared, etc. So like this, I don't know, whatever. And you do the same thing after the same time period t. This is basically an improvement over static graph modeling, but these discrete-time dynamic graphs are still limited compared to the continuous ones that we're gonna see in this paper. Okay, a small recap: basically when you want to compute the embeddings for a static graph, you just have an accumulation of these messages over the neighborhood, and h is just a trainable update function, and that's how you get your embeddings. That's what this equation here stands for. But it's not so important now, so I just want to walk you through some terminology that we'll be using, and that's, as I already mentioned, we have these discrete-time dynamic graphs, or DTDGs for short, and we have continuous-time dynamic graphs on the other hand, and those are the ones we'll be treating in this video. And basically the point here is we treat it as a sequence of time-stamped events. So every single event, be it a node-wise event, be it an edge event, has a timestamp associated with it. Contrast that with the discrete-time case where we had those equidistant snapshots. I already explained those two, and now the interesting new concept that's not encountered in static graphs is this concept of a temporal neighborhood. So what it means basically is the following. Again you have some nodes in the graph, and let's say there are multiple edges here, and let's say this one happened at t5, and for some reason we're currently looking at a moment that happened prior to t5, like at t4. That basically means that the neighborhood of this node here will ignore this edge. So everything that comes after this particular target moment in time is ignored, and that's what you're left with. So all of the other edges here are what's known as the temporal neighborhood of this particular node at this point in time. Okay, so that's a new concept, and with that out of the way let's jump in and see what the main modules in this TGN network are.
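The temporal neighborhood and the most-recent-edge sampling strategy can be expressed in a few lines. The following is a minimal sketch (the function name and array layout are my own assumptions): given a target node and a target time t, it keeps only edges that happened before t and returns the k most recent ones.

```python
# Minimal sketch of temporal neighborhood sampling with the "most recent" strategy.
import numpy as np

def temporal_neighbors(edge_index, edge_times, node, t, k=10):
    """edge_index: (2, E) array of [src, dst]; edge_times: (E,) timestamps."""
    src, dst = edge_index
    # Edges that touch `node` and happened strictly before the target time t.
    mask = ((src == node) | (dst == node)) & (edge_times < t)
    idx = np.where(mask)[0]
    # Sort the surviving edges by timestamp and keep the k most recent ones.
    idx = idx[np.argsort(edge_times[idx])][-k:]
    return idx

edge_index = np.array([[0, 0, 0, 2], [1, 2, 3, 3]])
edge_times = np.array([1.0, 2.0, 5.0, 3.0])
print(temporal_neighbors(edge_index, edge_times, node=0, t=4.0, k=2))  # -> [0 1]
```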
Okay, let me give you a quick high-level overview of the system and then we'll slowly start building on it and adding new details. So, high level, how we train this graph neural network, this whole system actually, because it also has this thing called memory, is the following. We do self-supervised learning and we're learning how to do link prediction. Basically, because we have all of those retweets and we have timestamps associated with them, we can just do a chronological sort over all of those interactions and we can just predict the interactions that haven't yet happened. You just batch that chronologically sorted array of edge interactions and you just predict the follow-up edges. It's a similar thing as with transformers: there you're trying to predict the follow-up word, and here you're trying to predict what the next interaction will be. So that's the high-level overview of how the thing works. Now let's see on this chart what happens. I'll just create a simple example here. We have node 1, we have node 2 and we have node 3. So, as you can see here, at t1 node 1 retweeted node 2, and at t2 node 2 retweeted node 3. So like this. Now what we do is the following. Node 1 also had some other interactions that maybe happened even prior to these interactions, so let's say those happened at, whatever, t0 maybe. Okay, so now what happens is that we have this embedding method and that's basically GAT. If you haven't watched my video on graph attention networks you can check it out, I'll link it somewhere here. You basically do a one-layer GAT over this node 1, over its temporal neighborhood, and we already mentioned what that is. We do GAT and we calculate the embeddings for nodes 1, 2 and 3, right, and those are these z's. Once we have those, we can just calculate the probability using a simple decoder. They used an MLP here. You just concatenate these two embeddings, you pass them through the MLP and you add a non-linear function like a sigmoid non-linearity, and that means this is basically a probability. And now, in order to train, you do the following. So, given t1, what's the probability of an interaction between 1 and 2 happening? And we know it's 100%, it's 1, because it already happened. We have it in the dataset, right, we did the chronological sort. So you want to push P to go to 1, and the loss will be 0 because we're just using simple binary cross entropy here, and we'll also have negative edges where we want to make sure that the probability goes to 0, and yeah, that's it. So if you have maybe some node x and node y and this interaction hasn't happened, we want to make sure that we push the probability to 0 for those node pairs. And as I mentioned, the loss is just binary cross entropy, which is a fancy name for you just put the log here. So basically if we have positive examples the loss will have a form like this, minus log of P, right, and if I just draw it here it will look like this, and basically we want to push the probability to go to 1 because then the loss goes to 0. When we have negative examples we'll just have minus log of (1 minus P), and that will just mirror this chart here and it will look like this. So we have P on the x-axis, we have loss on the y-axis, and the loss will look like this: basically at 0 we have a loss that's equal to 0 and at 1 it goes to infinity.
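Here is a minimal sketch of that training objective: an MLP decoder over concatenated node embeddings, scored with binary cross entropy against positive (observed) and negative (sampled) edges. The layer sizes and tensor shapes are illustrative assumptions; the sigmoid is folded into the logits version of the loss.

```python
# Minimal sketch of the link-prediction objective used for self-supervised training.
import torch
import torch.nn as nn

class LinkDecoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z_src, z_dst):
        # Concatenate the two node embeddings and map them to an interaction logit.
        return self.mlp(torch.cat([z_src, z_dst], dim=-1)).squeeze(-1)

dim, batch = 32, 8
decoder = LinkDecoder(dim)
z_src, z_dst = torch.randn(batch, dim), torch.randn(batch, dim)  # positive pairs
z_neg = torch.randn(batch, dim)                                  # sampled negatives

pos_logits = decoder(z_src, z_dst)
neg_logits = decoder(z_src, z_neg)
loss = nn.functional.binary_cross_entropy_with_logits(
    torch.cat([pos_logits, neg_logits]),
    torch.cat([torch.ones(batch), torch.zeros(batch)]),  # 1 = edge happened, 0 = it did not
)
loss.backward()
print(float(loss))
```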
So that's how we train it, simple BCE: we have positives, we have negatives, we have the dataset, and we just batch it and we use those batches to predict those links because we know they're there. So, how the embedder part works: GAT leverages something called memory, or states, in this TGN network. What it is, basically, is that every single node in our graph, in the Twitter graph, has a state associated with it. So we have a huge table and every single node in this table has some representation, and they say here basically the purpose is to represent the node's history in a compressed format, and that's exactly what it does. So basically, as the interactions are happening on Twitter, we're using all of those to just update this memory state, and then we can use those states later to do some recommendations, but I'll get to that in a moment, okay. So we have these states and they are somehow calculated: we have some messages, we do some aggregation, we somehow use them to update the memory state, and then the embedding method, the GAT, will just use those to calculate the embeddings and we can predict the links. So that's the high-level overview and now we'll slowly start digging into details, hopefully it was clear enough. Maybe one more thing worth mentioning: when we are using GAT in this model, the way we subsample the neighborhood is using the most recent edges, and that's why we have timestamps. So basically, if we have a bunch of edges, we'll only subsample the most recent k ones, so we sort the neighborhood edges by timestamps and we just take the k most recent ones, that may be like the 10 most recent neighbors, and then you just do the simple GAT aggregation, and we'll see what exact features go inside here in a sec. Okay, now the problem with this chart, if you maybe noticed, is that we are using these interactions to calculate the messages to update the states, and then we're using those states to predict the links. That means we have information leakage here: we're basically using this information to update the states and then we're using the states to predict those links, and that doesn't make any sense, right? But you somehow have to calculate and use these message functions to get the states, because otherwise you won't have any gradients flowing towards these trainable functions: the aggregated messages, the messages and this mem function are all trainable, so you somehow have to have this forward pass here. But you can just use the previous batches, and that's the solution. So basically they have a graph here, okay: you have to introduce this concept of a raw message storage, and what you do here is, from the previous batch, you just store all the necessary information inside here. So now, when you try to predict the existence of a link between one and two, the existence of a retweet in the case of Twitter, what you do is the following: you use the last batch, and I mean not just the last batch, actually the whole history, and you calculate the messages, and we'll see how we do that. So we have some data here that's necessary to calculate these messages, then we just aggregate the messages, so you see here that messages for node two are kind of accumulated, and then we update the states, and they just use a GRU here, and we'll see those details. So what happens is basically you have the previous state, so this is a GRU, G-R-U, you just use s2, and this is the old state
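A minimal sketch of the memory table together with the raw message store idea, assuming made-up names and shapes: interactions from the batch that was just predicted are stashed away, and only later are they turned into messages that update the per-node memory, which is how the information leakage described above is avoided.

```python
# Minimal sketch of the node memory table and the raw message store.
import torch

num_nodes, mem_dim = 100, 32
memory = torch.zeros(num_nodes, mem_dim)    # one compressed state vector per node
raw_message_store = {}                      # node id -> list of raw interaction tuples

def store_raw_messages(batch):
    # batch: list of (src, dst, t, edge_feat) interactions that were just predicted.
    for src, dst, t, feat in batch:
        raw_message_store.setdefault(src, []).append((src, dst, t, feat))
        raw_message_store.setdefault(dst, []).append((dst, src, t, feat))

def messages_for(node):
    # Raw tuples for `node`, to be turned into messages before its memory update.
    return raw_message_store.pop(node, [])

store_raw_messages([(0, 1, 1.0, torch.randn(8)), (1, 2, 2.0, torch.randn(8))])
print(len(messages_for(1)))  # -> 2 (node 1 took part in both interactions)
```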
and you use the newest message, this m2 aggregated, and this will just spit out the new state and we just write it down here. So that's how we update the states. Now we have this thing included in the computation graph, and we just use GAT to calculate embeddings and we get to the loss, so gradients can now flow through the whole system and all of these functions, which we'll see what they exactly are, so these agg and mem functions will be updated in the backpropagation step. Okay, so that's the important solution: the introduction of the raw message store is important because it allows us to train this part of the pipeline, this part here, okay. Okay, hopefully that was clear enough. Let me slowly start digging into the details of how the messages are computed, and then once we have all of those details I'll again zoom out and try to explain everything. Okay, so messages are computed as follows. Once an interaction happens between nodes i and j, so we have node i, we have node j, one of them retweeted the other, so we have an interaction that happened and it has a feature vector associated with it, and that's e_ij, right. Okay, so how we calculate the message is basically: we input the previous states, and this t-minus just means the state before this interaction happened, for those nodes, so that's basically whatever is currently in the state table, so we have s_i here, we have s_j. Whatever is currently in there, we use those and we input them into the message function, and we additionally use this feature vector I just mentioned, so that's like maybe the BERT encoding of the retweet, and we finally use this delta t, which is the time that has elapsed between the previous interaction and this current interaction, so that may be like 30 seconds or whatever, and that's how we compute the message for this node i. And then we do the analogous thing for node j, so that's the destination, so those are the source and destination nodes, and the only difference is these are permuted, if you can notice here, so just a simple permutation. And although this can be trainable, what they did in practice is use just the identity function, which means they are just doing simple concatenations of all of these features, so you have some vectors here and you concatenate all of them and that's your message. And this here is an interesting thing, and I'll later explain how we convert time to a vector representation, so just stay tuned for that. Okay, so that's the part with the messages, okay. And they say here a more complex message function that involves additional aggregation from the neighbors of nodes i and j is also possible and is left for future study. So the thing with this paper is, it's a work in progress, there are a lot of things they still want to try and they haven't tried yet. For example, they have this node-wise memory, but they also say here: while a global graph-wise memory can also be added to the model to track the evolution of the entire network, we'll leave this as future work. So that's my point: there are still a lot of things evolving in this work, so yeah, just keep that in mind. Okay, so once we have the message function, so that's this part, right, those are the messages, how do we aggregate them? Here we just have a message aggregator, and again it can be a trainable function, but what they did is just a simple most recent message and mean
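Continuing the thread, here is a minimal sketch of the identity message function (a plain concatenation of the two states, the encoded time gap and the edge features) and the most-recent message aggregator. All dimensions are my own choices, and phi is a toy stand-in for the learned time encoding discussed later.

```python
# Minimal sketch of the (identity) message function and the "most recent" aggregator.
import torch

def compute_message(s_i, s_j, delta_t, e_ij, phi):
    # Identity message function = plain concatenation of all inputs.
    return torch.cat([s_i, s_j, phi(delta_t), e_ij], dim=-1)

def aggregate_most_recent(messages):
    # messages: list of (timestamp, message_vector) for one node; keep the latest.
    return max(messages, key=lambda m: m[0])[1]

phi = lambda dt: dt.repeat(4)          # toy stand-in for the time encoding
s_i, s_j = torch.randn(32), torch.randn(32)
e_ij = torch.randn(16)
m1 = compute_message(s_i, s_j, torch.tensor([5.0]), e_ij, phi)
m2 = compute_message(s_i, s_j, torch.tensor([1.0]), e_ij, phi)
print(aggregate_most_recent([(1.0, m1), (2.0, m2)]).shape)  # torch.Size([84])
```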
message heuristic, and you can even see it here: you have m2 at t1, you have m2 at t2, and they just took the last message, and that's the aggregation, okay. In the more general case they could also just do a mean or something else, or make it trainable, maybe pass it through an RNN or whatever, okay, but they stuck with the most recent message, and let's use it as the running example. Finally we have the memory updater. I already explained that they used some form, you can use whatever, like an RNN or LSTM; they used a GRU, which is basically a simpler relative of the LSTM, and I already explained how it works but let me just repeat. Okay, this is the GRU, and we have the state s_i at t-minus, we just get the new message and we just run this, and if you don't know how a GRU works I'll just link Chris Olah's blog, he has a really nice blog, intuitive to understand how this works, basically a bunch of forget and update gates. We can treat it as a black box for now: it just spits out the new state s_i at point t, so this was at t-minus and here we have the current state, so that's the memory updating part, okay. Finally we have the embedding module. I haven't mentioned this, but the main goal of the embedding module is to avoid the so-called memory staleness problem. What that means is the following: let's say we have a graph, and we had Ben here, and maybe Ben stopped using Twitter for a month or two. Now what happens is, if you take a look at the memory and we take Ben's state, because he's not interacting with anybody anymore, this representation is not updated, and so we need to have some sort of embedding to make this relevant for Ben when he returns. So if we want to recommend to Ben which tweets he should see, that's like your home page on Twitter, you see a bunch of tweets, so Twitter somehow needs to figure out which tweets are relevant for you, and also Twitter should recommend you which persons you should probably follow. How they do that is the following: Ben has his state s_B, and in the simplest case you could just do the following: you could just take s_B, and maybe Andrew Ng has his own state s_A, and you just do a dot product or an MLP or whatever, and that spits out some probability, so if the probability is close to one, Twitter may recommend that you follow Andrew Ng, okay. But now the problem is, if this remains stale, these predictions will not be as relevant anymore for Ben, because he has changed in the meanwhile, okay. So the solution is to use what I already explained, and that's using some kind of GNN module like GAT, and you get the updated version of the embedding, because presumably his friends haven't stopped using Twitter, so their states are constantly being updated, so we can use that information to make more relevant predictions for Ben. Okay, hopefully that makes sense. They had a couple of baselines. The one I mentioned, the simplest one, is you just take the state of the user, so that's this identity embedding, so the final embedding is just whatever the state is, and they show that this has really bad performance on the later benchmarks, okay. And then we have this time projection, a simple equation, you can check it out yourself; I'll focus on the temporal
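The memory updater itself is just a recurrent cell. Below is a minimal sketch with a GRUCell, assuming illustrative dimensions: the aggregated message is the input and the node's previous memory is the hidden state.

```python
# Minimal sketch of the memory updater as a GRU cell.
import torch
import torch.nn as nn

msg_dim, mem_dim = 84, 32
updater = nn.GRUCell(input_size=msg_dim, hidden_size=mem_dim)

s_prev = torch.zeros(1, mem_dim)   # s_i(t^-): memory just before the interaction
m_agg = torch.randn(1, msg_dim)    # aggregated message for node i
s_new = updater(m_agg, s_prev)     # s_i(t): updated memory after the interaction
print(s_new.shape)                 # torch.Size([1, 32])
```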
graph attention, so this is the important part, this is the strongest baseline they had, basically. The way it works, it's the same thing as GAT, just instead of using the attention that GAT used originally, they use the original Vaswani attention, so that means they have queries, keys and values, and everything else remains the same. Basically the new information here is this phi function and the fact that they're using multiple features concatenated here to get the final embedding representation. Okay, so let's see how it exactly works. So this h, for a particular node at a particular time t, how they calculate the h is like this: they take the state of that node at a particular time t and they take the node's features, so that's basically maybe a description on your profile and then a BERT embedding, blah blah blah, and you basically just add them up, and that's your layer-zero representation, those are the raw features, okay. So we concatenate those with the edge information. Let me draw an example here, it'll be easier: we have a node, we have a bunch of nodes here, again we'll take the most recent neighbors and just disregard all of the others, so let's say we only take these two into account, okay. These will have some features, so if this is node i, this is maybe node j1 and this is node j2, and you just combine that information and you combine it with phi. So what's phi exactly here? There is this nice paper called Time2Vec, and you can check it out, but basically that's one thing you could do, and what you do, in order to get the vectorized representation of time, is you just map it into the following thing: you map it into a vector that has features like this, so maybe w1 t plus b1, and then the second feature will be sine of (w2 t plus b2), etc., so you just continue using sines here and you increment the indices by one, so we have w3, etc. These are learnable, so you learn w2 and b2, w1 and b1, etc., and if you take a closer look and you're familiar with the transformer, this kind of resembles the positional encodings except that these are time-aware, okay. So you learn these, and they showed in the Time2Vec paper that these are capturing the periodicity in the signal, whereas this one is capturing something more constant in the signal. So this one, if you take a look, is just a simple line, basically, and these here are sinusoids, so you are basically learning sinusoids like this, and the higher the w the higher the frequency will be, and if you have some smaller w then you'll have a smaller frequency, etc., and these are just the offsets, these are just encoding the offset of your sinusoids. So that's the simple heuristic they use to encode time, and once you have that you just do simple multi-head attention and you get the aggregated feature vector, you concatenate it with the current feature vector and you pass it through an MLP. So this thing here, h_i, corresponds to this node, and the second term here is the aggregated combination of these two, because they are its most recent neighbors, right, so we associate some alphas, alpha one, alpha two, here, and we combine them like a simple weighted sum and we get the features. So that's how we calculate the embedding, and they show that the graph attention network is the best
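A minimal sketch of a Time2Vec-style time encoding, with sizes of my own choosing: one linear (non-periodic) feature plus learned sinusoids with trainable frequencies and phases, similar in spirit to transformer positional encodings but applied to (elapsed) time.

```python
# Minimal sketch of a Time2Vec-style learned time encoding.
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim))  # frequencies / linear slope
        self.b = nn.Parameter(torch.randn(dim))  # phases / offsets

    def forward(self, t):
        # t: (..., 1) tensor of (elapsed) times.
        v = self.w * t + self.b
        # First feature stays linear, the rest are sinusoids capturing periodicity.
        return torch.cat([v[..., :1], torch.sin(v[..., 1:])], dim=-1)

phi = Time2Vec(dim=16)
print(phi(torch.tensor([[30.0]])).shape)  # torch.Size([1, 16])
```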
baseline, we'll see that in a minute. So that was all the nitty-gritty details: we saw how to train these, how to pass gradient information to these modules, we saw how they exactly work, and now we'll see a couple more details and then I'll zoom out. I already mentioned this part about the information leakage, so I'll skip it, and that's the reason we had to introduce the raw message storage. And this part is interesting: while from the perspective of the first interaction in the batch the memory is up to date, since it contains information about all previous interactions in the graph, from the perspective of the last interaction in the batch the same memory is out of date, since it lacks information about previous interactions in the same batch. This incentivizes the use of a smaller batch size. So let me break it down for you. That means the following: we have the previous batch information, we update the states, and now, because this is a batch that's chronologically sorted, we have t5, t6, so this first interaction will be using states which are totally up to date. But once this one is done, we should basically have s1 and s2 updated, because once an interaction happens we calculate the messages for the nodes that are associated with that interaction and we update the states, but we can't do that while we are inside the batch. That means this one here, two to three, will use the same state as this one here, and that means it's slightly out of date. Now imagine a bigger batch, and they use 200 as a nice trade-off: the deeper you are in the batch, let's call it that way, the more stale this current state representation is, so that's the reason you don't want to have too big of a batch. But then again, you don't want to keep it too small, because otherwise it will be really compute-inefficient to have a small batch size. Okay, so that's that part, and now let me just show you the results before I do a high-level overview. So they had some continuous-time dynamic graph baselines here, like JODIE and TGAT, which is basically the same thing as this paper, as TGN, it just doesn't have memory and the associated modules, so it's a simplification of this paper, basically a specific instance. And they also experimented with different embedding methods, using the memory module or not using it, etc., so they have a bunch of baselines, and the results are the following: the TGN with attention achieves the best results overall on all of the three datasets. I was continuously using Twitter as a running example, but they also have Wikipedia and Reddit, where users are nodes and pages are nodes, and a user editing a page is an interaction event, and basically the edit text is the feature for the edges, etc., okay, similarly for Reddit. So those are bipartite graphs: we have pages here, we have users here. And let me see what's also interesting: they also did node-wise classification, but that's not so important for this paper. And here are some ablation studies. Basically you can see, let me just zoom in a little bit, that the TGN with attention is the best trade-off overall, because it takes less time than these two whereas the performance is really similar; this one just uses two GAT layers, and the reason one GAT layer is enough is because they are using this memory thing, and the memory thing already implicitly contains the information from the
neighboring nodes, so using one layer you're basically accessing the features from the two-hop neighborhood, okay. And I think 'mean' just means that for the memory aggregation they are using the mean instead of the last-message heuristic. You can see TGAT and JODIE and DyRep, these other baselines, are way lower performance-wise than this method. Some ablations, again with their own model: when they don't use the memory you can see the worst curve is here, when they add two layers, again no memory, it's a bit better, and then as we add the memory and we add the last-message aggregation heuristic we get the best results overall. So that's pretty much it, let me see if there is something else left. That's it. Now, with all the information you have in your head, let me just briefly go ahead and explain it again. Okay, now that we have all the information explained, let me just give you a holistic overview of the whole pipeline. Here we have the information from the previous batches of these edge interactions, okay, and that means we have, so rm1 basically contains s_i, contains s_j, contains the delta t, so the time that elapsed between the previous interaction and this interaction, and we have e_ij which is the interaction features, and we have the same tuples for all of the others. The thing is, we only focus on the nodes that we need, so this thing may be huge and contain information for many other nodes in our graph, but we only care about nodes one, two and three, and that's why we only calculate those messages. So we just basically do a concatenation, that's what they did, and you get m1, you do a concatenation, you get m2, etc., at different timestamps, okay. Now, the aggregation: they used either the mean or the last message, so basically, let's say we use the last message, we just pass this message into the next step and that's the aggregated messages array. Once we have those, we just have a GRU here, and for every one of those we pass the state, we feed in the message, we spit out the new state, and we update the states. So now we have the states, and this part is included in the computational graph. Now we use those states and we use GAT to create the embeddings. Now, an interesting thing to note here is: we have nodes one, two and three, but because this node one maybe has some other neighbors, it would make sense to have other states here as well, so whatever the neighbors are, I'll call them s_n, so those are these guys here; it would be useful to have the states for them updated as well, because those will be included by the GAT, because GAT accumulates the neighborhood information, right, so those will be accumulated by the GAT. So I guess they just forgot to place s_n here, like the neighbors, but whatever. And finally you calculate the embeddings, and now, depending on the ground truth, so these exist but the negative ones don't exist, you just use the binary cross entropy loss to train this whole pipeline end to end. And now you have a system that can successfully predict, given two nodes, you can calculate their embeddings and you can predict whether they are likely to interact in the future, and that's useful for Twitter because you can use that to recommend, for node a, which node should that node follow in the future, like maybe Andrew Ng, or which tweets should that user see, etc. A couple of strange things I've noticed in
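To tie the recap together, here is a heavily simplified, end-to-end sketch of one training step. Everything here is an illustrative assumption rather than the paper's reference implementation: a plain linear layer stands in for the one-layer temporal GAT, only the source node's memory is refreshed, and the negative edge is a single random node.

```python
# Heavily simplified end-to-end sketch of one TGN-style training step.
import torch
import torch.nn as nn

mem_dim, edge_dim, num_nodes = 32, 16, 50
memory = torch.zeros(num_nodes, mem_dim)               # compressed per-node history
updater = nn.GRUCell(2 * mem_dim + edge_dim, mem_dim)  # memory updater
embed = nn.Linear(mem_dim, mem_dim)                    # stand-in for the temporal GAT
decoder = nn.Linear(2 * mem_dim, 1)                    # link-probability decoder (logits)

# 1) Turn an interaction stored from the previous batch into a message and
#    refresh the memory of its source node (TGN would update both endpoints).
src, dst, e_feat = 0, 1, torch.randn(edge_dim)
msg = torch.cat([memory[src], memory[dst], e_feat]).unsqueeze(0)
new_state = updater(msg, memory[src].unsqueeze(0)).squeeze(0)
memory = memory.clone()          # keep the old tensor intact for autograd
memory[src] = new_state

# 2) Embed all nodes from the updated memory and score a positive edge from the
#    current batch against one sampled negative edge.
z = embed(memory)
pos_logit = decoder(torch.cat([z[0], z[1]]))           # this interaction did happen
neg_logit = decoder(torch.cat([z[0], z[7]]))           # this one did not
loss = nn.functional.binary_cross_entropy_with_logits(
    torch.cat([pos_logit, neg_logit]), torch.tensor([1.0, 0.0]))
loss.backward()                  # gradients reach the decoder, GAT stand-in and GRU
print(float(loss))
```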
the paper, maybe a typo or something. Basically, if you take a look at the query, it doesn't have the edge information here, so that means if you're doing a scaled dot product between the query and the keys you have differently-dimensioned vectors, so you can't do it. So it's either a typo, or they have some additional projection layer that they didn't explicitly include here, or I've missed something. The second thing is, I mentioned the memory staleness problem, but they are basically implicitly stating that you need to retweet somebody else in order for your memory state to get updated. But what if somebody retweets my post? I assume, because that interaction involves my node and that person's node, and because of the way the messages are computed, my node's state should be updated as well. So I'm not sure whether they're just simplifying it here in the paper, but anyways, just keep that in mind, that's not super clear, because to me it looks like my state should be updated even though I'm not active on Twitter, because other people are interacting with me and thus my memory state is being updated constantly. They're probably using a directed edge assumption here, but it's not super clear from the paper. One more thing to keep in mind is that this memory table is not, let me open a pen, is not trainable. What is trainable is the GRU here, the LSTM, and potentially the message and aggregation functions, even though they just use the concatenation, so there are no learnable parameters here, neither are there learnable parameters in their aggregation function, so they're left with learning mem. That means, once the model is fully trained, once an interaction happens you just calculate the messages, you aggregate them and you update the states, and that's it. And once you want to predict, do some recommendation, then you use the GAT module to create those embeddings and just make the recommendation. So that was all I had to say for this video, you know the drill, subscribe, hit the bell icon and until next time keep learning deep.
[{"start": 0.0, "end": 6.8, "text": " I want to make shout out to Shantanu who recommended I create a video on TGN temporal graph networks"}, {"start": 6.8, "end": 13.52, "text": " and it was about time since dynamic networks is such an interesting concept, interesting research"}, {"start": 13.52, "end": 19.2, "text": " direction and many real world applications such as social media networks rely on"}, {"start": 19.2, "end": 25.44, "text": " having the modeling these dynamic graphs. So what's a dynamic graph? So basically whenever you have,"}, {"start": 25.44, "end": 31.12, "text": " so in contrast with static graphs, basically here you have the nodes can be added, deleted,"}, {"start": 31.12, "end": 37.44, "text": " modified, same goes with edges. So we basically have two types of events. We have node-wise events"}, {"start": 37.44, "end": 44.400000000000006, "text": " and we have interaction or edge events. So let's say we have some simple network here like graph"}, {"start": 44.400000000000006, "end": 50.88, "text": " and it can be multi-graph in a general case and basically I'll be using Twitter as a running"}, {"start": 50.88, "end": 58.080000000000005, "text": " example since the authors of this paper, Emanuella Rossi, Ben Fabrizio, David Federico and Michael"}, {"start": 58.080000000000005, "end": 63.92, "text": " Bronstein are all from Twitter. Let's say Anna decides to join Twitter. So basically she signs up,"}, {"start": 64.48, "end": 70.72, "text": " she fills up this sign-up form and now she's a new node in Twitter, social graph and that's"}, {"start": 70.72, "end": 75.12, "text": " basically the node addition event. Now let's say after a couple of days she was just kind of"}, {"start": 75.12, "end": 81.52000000000001, "text": " retweeting people and every single retweet is basically an edge in that graph. Now Ben joins"}, {"start": 81.52000000000001, "end": 86.56, "text": " Twitter and because those two broke up, Anna is pissed off and she just deletes the Twitter"}, {"start": 86.56, "end": 92.32000000000001, "text": " account altogether and that's the node deletion event. Now Ben is also pissed off and so he decides"}, {"start": 92.32000000000001, "end": 99.28, "text": " to update his user profile, updates his description and by updating the description he's basically"}, {"start": 99.28, "end": 106.0, "text": " changing his feature vector and you can think of it maybe we take this text and we pass it in a BERT"}, {"start": 107.28, "end": 113.28, "text": " encoder and we basically have CLS here, token, hopefully you're familiar with BERT. If you're"}, {"start": 113.28, "end": 120.48, "text": " not, just treat it as a black box. We input the text here, we get out the condensed representation"}, {"start": 120.48, "end": 126.48, "text": " here and we use exactly this thing as the feature vector. 
So that's the node modification event."}, {"start": 126.48, "end": 134.72, "text": " Now after a couple of days of activity he decides to retweet Andrew Eng's school post"}, {"start": 134.72, "end": 142.8, "text": " and basically now whatever the retweet text is that's the edge feature vector and now maybe he"}, {"start": 142.8, "end": 148.4, "text": " made a typo so he decides to modify the text even though that's not possible on Twitter,"}, {"start": 148.4, "end": 153.68, "text": " but let's say it is and for the sake of argument and basically that's the edge modification event"}, {"start": 153.68, "end": 159.68, "text": " where now the new text is again fed through BERT and we get the new feature vector and finally he"}, {"start": 159.68, "end": 166.24, "text": " decides to just delete the tweet altogether and that's the edge deletion event. So that's how the"}, {"start": 166.24, "end": 172.8, "text": " dynamic graph works in a nutshell. It's basically a multi-graph because every single node can have"}, {"start": 172.8, "end": 179.84, "text": " multiple edges between, so the nodes are connected by multiple edges and that's what's known as a"}, {"start": 179.84, "end": 187.68, "text": " multi-graph. Okay, dead out of the way, let's jump into the paper and see what they say. So they say"}, {"start": 187.68, "end": 191.36, "text": " here a few approaches have been proposed for dealing with graphs that are dynamic in nature."}, {"start": 191.36, "end": 197.36, "text": " So basically most of the research in graph ML was focusing on static graphs and not on dynamic"}, {"start": 197.36, "end": 203.52, "text": " graphs and while it is possible to apply static graph deep learning models to dynamic graphs by"}, {"start": 203.52, "end": 209.04, "text": " ignoring the temporal evolution, this has shown been shown to be suboptimal. So what that means"}, {"start": 209.04, "end": 214.4, "text": " is the following. So you have this graph here and basically every single edge has a time stamp"}, {"start": 214.4, "end": 221.12, "text": " attached to it. So let's say Ben here has got maybe three retweets of Andrew Ang and this one"}, {"start": 221.12, "end": 226.64, "text": " happened at t1, this one happened at t2, this one happened just recently like a couple of seconds"}, {"start": 226.64, "end": 233.2, "text": " ago at t3 and you can treat this as a static graph, just a static multi-graph or you can"}, {"start": 233.2, "end": 238.88, "text": " kind of acknowledge the like the existence of these time stamps and use that to, when you"}, {"start": 238.88, "end": 243.67999999999998, "text": " sub sample your neighborhood later and we'll see why we do that a bit later, but when we"}, {"start": 243.67999999999998, "end": 250.32, "text": " sub sample the neighborhood we can just take the most recent edges and by doing that we have a"}, {"start": 250.32, "end": 255.35999999999999, "text": " much more optimal performance than just by treating this as a static graph and doing uniform"}, {"start": 255.35999999999999, "end": 261.36, "text": " sampling of the neighborhood. So yeah, basically that's suboptimal, you can treat it as a"}, {"start": 261.36, "end": 268.72, "text": " static graph but it's suboptimal, okay? And yeah, so learning on dynamic graphs is relatively recent"}, {"start": 268.72, "end": 275.12, "text": " and most of the previous work was focused on these discrete time dynamic graphs which is just"}, {"start": 275.12, "end": 282.32, "text": " a sequence of snapshots of the graph. 
So basically you have your graph that's evolving and what you"}, {"start": 282.32, "end": 287.6, "text": " do in this discrete time dynamic graph is the following. You just take a snapshot at equidistant"}, {"start": 287.6, "end": 296.56, "text": " moments, so maybe you had some graph at t1 that looked like this and then after some period"}, {"start": 296.56, "end": 300.96000000000004, "text": " and the important part here is that this is pretty much the same constant so you have"}, {"start": 300.96000000000004, "end": 306.56, "text": " equidistant moments when you take the snapshot and maybe a couple of more edges and nodes appeared"}, {"start": 306.56, "end": 314.8, "text": " etc. So like this, I don't know, whatever. And you do the same thing after the same time period t."}, {"start": 314.8, "end": 319.6, "text": " This is basically the improvement over the static graph modeling but this discrete time"}, {"start": 321.04, "end": 325.68, "text": " dynamic graphs are still limited compared to the continuous ones that we're gonna see in this paper."}, {"start": 326.56, "end": 332.56, "text": " Okay, a small recap basically when you want to compute the embeddings for the static graph,"}, {"start": 332.56, "end": 339.12, "text": " you just have basically accumulation over the neighborhood of these messages and h is just"}, {"start": 339.12, "end": 345.04, "text": " a trainable update function and that's how you get your embeddings. That's what this equation here"}, {"start": 345.04, "end": 351.04, "text": " stands for. But it's not so important now so I just want to kind of walk you through some terminology"}, {"start": 351.04, "end": 355.6, "text": " that we'll be using and that's as I already mentioned we have this discrete time dynamic graphs"}, {"start": 355.6, "end": 361.84000000000003, "text": " or DTDGs for short and we have continuous time dynamic graphs on the other hand and those are"}, {"start": 361.84000000000003, "end": 367.84000000000003, "text": " the ones we'll be treating in this video. And basically the point here is that you have to"}, {"start": 367.84, "end": 374.08, "text": " the point here is we treat it as a sequence of time stamped events. So every single event"}, {"start": 374.08, "end": 382.0, "text": " be it nodewise event, be it the edge event has a time stamp associated with it. Contrast that with"}, {"start": 382.0, "end": 388.64, "text": " static graphs where we had those equidistant snapshots. And I already explained those two"}, {"start": 388.64, "end": 394.71999999999997, "text": " and now the interesting concept, new concept that's not encountered in static graphs is this"}, {"start": 394.72, "end": 399.76000000000005, "text": " concept of a temporal neighborhood. So what it means basically is the following. So again you have"}, {"start": 399.76000000000005, "end": 410.0, "text": " some nodes in the graph and let's say there are multiple edges here and let's say this one"}, {"start": 410.96000000000004, "end": 418.64000000000004, "text": " happened at t5 and we're currently looking for some reason we're looking at a moment that happened"}, {"start": 418.64, "end": 424.71999999999997, "text": " before prior to t5 like at t4. That basically means that the neighborhood of this graph here"}, {"start": 425.44, "end": 431.68, "text": " is will ignore this edge. So everything that comes after this particular target"}, {"start": 433.84, "end": 439.84, "text": " moment of time is ignored and that's what you're left with. 
So all of the other edges here is"}, {"start": 439.84, "end": 446.24, "text": " what's known as the temporal neighborhood of this particular node at this point of time. Okay so"}, {"start": 446.24, "end": 453.12, "text": " that's a new concept and with that out of the way let's jump and see what are the main modules in"}, {"start": 453.12, "end": 458.88, "text": " this TGN network. Okay let me give you a quick high level overview of the system and then we'll"}, {"start": 458.88, "end": 466.16, "text": " slowly start building on and adding new details. So high level how we train this graph neural"}, {"start": 466.16, "end": 472.08, "text": " network, this whole system actually because it also has this thing called memory, is the following. So"}, {"start": 472.08, "end": 479.59999999999997, "text": " we do a self-supervised learning and we're learning how to do link prediction. So basically because"}, {"start": 479.59999999999997, "end": 485.76, "text": " we have all of those retweets and we have time stamps associated with them we can just do"}, {"start": 485.76, "end": 491.84, "text": " chronological sort over all of those interactions and we can just predict the interactions that"}, {"start": 491.84, "end": 499.84, "text": " haven't yet happened and that's basically the you just batch that chronologically sorted array of"}, {"start": 499.84, "end": 505.76, "text": " those edge interactions and you just predict the follow-up edges. It's a similar thing as"}, {"start": 505.76, "end": 510.4, "text": " transformers there you're trying to predict the follow-up word and here you're trying to predict"}, {"start": 510.4, "end": 516.0799999999999, "text": " what the next interaction will be. So that's the high level overview of how the thing works. So now"}, {"start": 516.0799999999999, "end": 523.1999999999999, "text": " let's see on this chart what happens basically. So you have you have and I'll just create a simple"}, {"start": 523.2, "end": 534.48, "text": " example here. We have node 1, we have nodes 2 and we have node 3. So 1 as you can see here at t1,"}, {"start": 534.48, "end": 547.44, "text": " 1 retweeted node 2 and at t2 node 2 retweeted node 3. So like this. So now what we do is the following."}, {"start": 547.44, "end": 554.6400000000001, "text": " We have so because node 1 also had some other interactions maybe that happened even prior to"}, {"start": 554.6400000000001, "end": 563.0400000000001, "text": " these interactions. So let's say these happened at whatever t0 maybe. Okay so now what happens is that"}, {"start": 563.0400000000001, "end": 567.6, "text": " we have this embedding method and that's basically get. If you if you haven't watched my video"}, {"start": 568.24, "end": 573.7600000000001, "text": " on graph attention networks you can check it out. I'll link it somewhere here and you basically do"}, {"start": 573.76, "end": 580.88, "text": " one layer get over this node 1 over its temporal neighborhood and we already mentioned what that is."}, {"start": 580.88, "end": 588.72, "text": " We do get and we calculate the embeddings for nodes 1, 2 and node 3 right and those are these z's."}, {"start": 588.72, "end": 594.0, "text": " Once we have those we can just calculate the probability using a simple decoder. So they used"}, {"start": 594.0, "end": 600.48, "text": " MLP here. So you just concatenate these two embeddings. 
You just pass them through MLP"}, {"start": 600.48, "end": 606.24, "text": " and you just add the non-linear function like sigmoid non-linearity and that means this is"}, {"start": 606.24, "end": 611.76, "text": " basically probability. And now in order to train you do the following. So you want to make sure"}, {"start": 611.76, "end": 618.88, "text": " that P so given t1 what's the probability of interaction between 1 and 2 happening?"}, {"start": 618.88, "end": 623.04, "text": " And we know it's 100% it's 1 because it already happened. We have it in the data set right."}, {"start": 623.04, "end": 631.36, "text": " We did it chronological sort. So you want to push the loss to go to 1, P to go to 1."}, {"start": 632.0799999999999, "end": 639.4399999999999, "text": " I loss will be 0 because we're just using simple binary cross entropy here and we'll also have"}, {"start": 639.4399999999999, "end": 646.0, "text": " negative edges which we'll want to make sure that the probability goes to 0 and yeah that's it."}, {"start": 646.0, "end": 653.84, "text": " So if you have maybe some nodes x and nodes y and this interaction hasn't happened we want to make"}, {"start": 653.84, "end": 663.44, "text": " sure that we push the probability to 0 for those node pairs. And we use a simple as I mentioned"}, {"start": 663.44, "end": 670.96, "text": " so loss is just a binary cross entropy that means that's a fancy name for you just put the log here."}, {"start": 670.96, "end": 677.44, "text": " So basically if we have positive examples we'll just the loss will have a format like this minus"}, {"start": 677.44, "end": 687.6, "text": " log of P right and if I just draw it here that will look like this and basically we want to push"}, {"start": 687.6, "end": 693.76, "text": " the probability to go to 1 because then the loss goes to 0. For when we have negative examples"}, {"start": 693.76, "end": 704.56, "text": " we'll just have minus log of 1 minus P and that will just kind of mirror this chart here"}, {"start": 704.56, "end": 710.56, "text": " and it will look like this. So we have P on the x-axis, we have loss on the y-axis and the loss"}, {"start": 710.56, "end": 720.24, "text": " will look like this. Basically at 0 we have loss that's equal to 0 and at 1 it goes to infinity."}, {"start": 720.24, "end": 725.84, "text": " So that's how we train it simple BC we have positives we have negatives we have the data"}, {"start": 725.84, "end": 735.76, "text": " set with and we just batch it and we use those batches to predict those links because we know"}, {"start": 735.76, "end": 742.96, "text": " they're there. So how the embedder part works so get leverage is something called memory or"}, {"start": 742.96, "end": 748.96, "text": " states in this TGN network. So what it is is basically every single node in our graph in"}, {"start": 748.96, "end": 755.76, "text": " Twitter graph has a state associated with it. So we have a huge table and every single node in this"}, {"start": 755.76, "end": 761.84, "text": " table has some representation and they said here basically the purpose is to represent the node's"}, {"start": 761.84, "end": 768.5600000000001, "text": " history in a compressed format and that's exactly what it does. 
So basically over the when the"}, {"start": 768.5600000000001, "end": 773.2800000000001, "text": " interactions are happening in Twitter we're basically using all of those to just update"}, {"start": 773.28, "end": 780.24, "text": " this memory state and then we can use those to later do some recommendations but I'll get to that"}, {"start": 780.24, "end": 785.92, "text": " in a moment okay. So we have these states and they are somehow calculated so we have some messages"}, {"start": 785.92, "end": 793.12, "text": " we do some aggregation we somehow use them to update the memory state and then the embedding"}, {"start": 793.12, "end": 800.0799999999999, "text": " method the get will just use those to calculate the embeddings and we can predict the links."}, {"start": 800.08, "end": 805.5200000000001, "text": " So that's the high level overview and now we'll slowly start digging into details hopefully"}, {"start": 805.5200000000001, "end": 815.12, "text": " it was clear enough. Maybe one more thing worth mentioning when we are using get in this model"}, {"start": 815.84, "end": 821.84, "text": " the way we we sub sample the neighborhood is using the most recent edges so that's why we"}, {"start": 821.84, "end": 829.36, "text": " have timestamps. So basically if we have a bunch of edges we'll use we'll only sub sample the"}, {"start": 829.36, "end": 837.12, "text": " most recent k ones so we we sort the the neighborhood edges by timestamps and we just take the k most"}, {"start": 837.12, "end": 842.16, "text": " recent ones that may be like 10 most recent neighbors and then you just do the simple get"}, {"start": 842.16, "end": 849.36, "text": " aggregation and we'll see what exact features go inside here in a sec. Okay now the problem with"}, {"start": 849.36, "end": 856.72, "text": " this chart is if you maybe if you noticed is that we are using these interactions to calculate the"}, {"start": 856.72, "end": 863.44, "text": " messages to update the states and then we're using that those states to predict the links so"}, {"start": 863.44, "end": 869.2, "text": " that means we have information leakage here we're basically using this information to update states"}, {"start": 869.2, "end": 875.2, "text": " and then we're using the states to predict those links and that doesn't make any sense right but"}, {"start": 875.2, "end": 882.88, "text": " because you somehow have to you somehow have to calculate and use these message functions to get"}, {"start": 882.88, "end": 887.4399999999999, "text": " the states because otherwise you won't have any gradients flowing towards these trainable functions"}, {"start": 887.4399999999999, "end": 893.2, "text": " the aggregated messages and messages and this mem function they are all trainable so you somehow"}, {"start": 893.2, "end": 899.76, "text": " have to have this forward pass here but you can use this batch you'll just use the previous batches"}, {"start": 899.76, "end": 905.4399999999999, "text": " and that's the solution so basically they have a graph here okay you have to introduce this concept"}, {"start": 905.4399999999999, "end": 911.36, "text": " of a raw memory storage and now what you do here is from the previous batch you just store the"}, {"start": 911.36, "end": 917.76, "text": " previous batch you just stored all the necessary information inside here so now when you try to"}, {"start": 917.76, "end": 925.12, "text": " predict the the the existence of a link between one and two the the existence of of retweet in"}, {"start": 925.12, 
"end": 931.2, "text": " the case of twitter what you do is the following you use the the the last batch and i mean not just"}, {"start": 931.2, "end": 936.08, "text": " the last batch actually the whole history and you calculate the messages and we'll see how we do that"}, {"start": 936.08, "end": 941.12, "text": " so we have some data here that's necessary to calculate these messages then we just aggregate"}, {"start": 941.12, "end": 947.36, "text": " the messages so you see here that messages for the node two are kind of accumulated and then we"}, {"start": 947.36, "end": 953.84, "text": " update the states and they just use grue here and we'll see those details so what happens is basically"}, {"start": 953.84, "end": 964.88, "text": " you you you have the previous state so this is grue g r u you just use s2 and this is the old state"}, {"start": 964.88, "end": 972.96, "text": " and you use the the newest message this m2 aggregated and this will just spit out new state"}, {"start": 972.96, "end": 979.92, "text": " and we just write it down here so that's how we update the the states now the states have now we"}, {"start": 979.92, "end": 987.28, "text": " have this thing like included in the computation graph and we just use get to calculate embeddings"}, {"start": 987.28, "end": 992.8, "text": " and to we get to the loss so now gradient gradients can now flow through the whole system"}, {"start": 992.8, "end": 998.9599999999999, "text": " and all of these functions which will see what they exactly are so these egg these mem functions"}, {"start": 998.9599999999999, "end": 1005.52, "text": " will be updated in the back propagation step okay so that's the important solution so introduction"}, {"start": 1005.52, "end": 1011.5999999999999, "text": " of raw message store is important because that allows us to train this part of the pipeline this"}, {"start": 1011.5999999999999, "end": 1020.24, "text": " part here okay okay hopefully that was clear enough um let me let me slowly start digging into"}, {"start": 1020.24, "end": 1026.0, "text": " into details of how the messages are computed and then once we have all of those details i'll again"}, {"start": 1026.0, "end": 1032.16, "text": " zoom out and try and explain everything okay so messages are computed the following once interaction"}, {"start": 1032.16, "end": 1038.08, "text": " happens between nodes i and j so we have node i we have j one of them retweeted the other so we have"}, {"start": 1038.08, "end": 1043.36, "text": " interaction that happened and it has feature vector associated with it so that's e i j right"}, {"start": 1043.36, "end": 1052.56, "text": " okay so how we calculate the message is basically we input the previous states and this t minus"}, {"start": 1052.56, "end": 1059.9199999999998, "text": " just means the state before this interaction happened for those notes so that's basically"}, {"start": 1059.9199999999998, "end": 1066.08, "text": " whatever is currently in the state table so we have si here we have sj so whatever is currently"}, {"start": 1066.08, "end": 1071.4399999999998, "text": " in there we use those and we input them in the message function and we additionally use the"}, {"start": 1071.44, "end": 1075.52, "text": " uh this feature vector i just mentioned so that's like the maybe birth encoding of retweet"}, {"start": 1076.4, "end": 1081.68, "text": " and we finally use this delta t which is the time that has elapsed between the previous interaction"}, {"start": 1081.68, "end": 1087.44, "text": " and 
this current interaction so that may be like 30 seconds or whatever and that's how we compute"}, {"start": 1087.44, "end": 1095.04, "text": " the message for this for this node i and then we do analogously to node j so that's the destination"}, {"start": 1095.04, "end": 1100.48, "text": " so that's the source and destination nodes and the only difference is these are permuted if you can"}, {"start": 1100.48, "end": 1106.64, "text": " if you can notice here so just permutation simple permutation and uh although this can be trainable"}, {"start": 1106.64, "end": 1112.08, "text": " what it did is they in practice use just identity function which means they are just doing simple"}, {"start": 1112.08, "end": 1117.6, "text": " concatenations uh between all of these features so you have some vector here vector you concatenate"}, {"start": 1117.6, "end": 1124.0, "text": " all of them and that's your message and this here is an interesting thing and i'll later explain"}, {"start": 1124.0, "end": 1130.08, "text": " uh how we convert time to a vector uh to a vector representation and just stay tuned for that okay"}, {"start": 1130.64, "end": 1136.4, "text": " so that's the uh so that's the part with with the messages okay and they say here a more complex"}, {"start": 1136.4, "end": 1140.16, "text": " message function that involves additional aggregation from the neighborhoods from the"}, {"start": 1140.16, "end": 1146.16, "text": " neighbors of nodes i and j is also possible and is left for future study so the thing with this"}, {"start": 1146.16, "end": 1150.8, "text": " paper is there is there is a it's a it's a work in progress there is a lot of things they still want"}, {"start": 1150.8, "end": 1158.96, "text": " to try and they haven't tried it so for example they have this uh node-wise uh memory but they"}, {"start": 1158.96, "end": 1164.1599999999999, "text": " also they say here while a global graph-wise memory can also be added to the model to track"}, {"start": 1164.1599999999999, "end": 1169.04, "text": " the evolution of the entire network we'll leave this as a future work so that's that's my point"}, {"start": 1169.6, "end": 1176.24, "text": " there is still a lot of things uh evolving in this work so yeah just just keep that in mind"}, {"start": 1176.24, "end": 1183.36, "text": " um okay so once we have the message function so that's this part right so that's the messages how"}, {"start": 1183.36, "end": 1188.88, "text": " do we aggregate them and here we just have a message aggregator and again it can be a trainable"}, {"start": 1188.88, "end": 1195.68, "text": " function but what they did is they just do a simple most recent message and mean message"}, {"start": 1195.68, "end": 1203.44, "text": " heuristic and you can even see it here so what it did is you have m2 at t1 you have m2 at t2"}, {"start": 1203.44, "end": 1209.6000000000001, "text": " they just took the last message and that's the aggregation okay um in more general case they can"}, {"start": 1209.6000000000001, "end": 1216.72, "text": " also just do a mean or something else just kind of do make it trainable uh maybe pass it through an"}, {"start": 1216.72, "end": 1223.3600000000001, "text": " rnn or whatever okay but it's stuck with the most recent message and let's use it as the running"}, {"start": 1224.0800000000002, "end": 1230.16, "text": " example uh finally we have the memory updater i already explained that they used some form you"}, {"start": 1230.16, "end": 1236.64, "text": " can use whatever like rnn or lstm 
they used a grew which is a specific type of of lstm basically"}, {"start": 1237.2, "end": 1242.72, "text": " and i already explained how it works but let me just repeat okay let's this is grew and we have"}, {"start": 1243.52, "end": 1254.8000000000002, "text": " we have the state state i t minus we just get the new message and we just run this and there are"}, {"start": 1254.8000000000002, "end": 1259.76, "text": " like a if you know how if you don't know how grew works i'll just link uh chris ola's blog he has"}, {"start": 1259.76, "end": 1264.24, "text": " a really nice blog intuitive to understand how this works basically a bunch of forget and update"}, {"start": 1264.24, "end": 1269.52, "text": " gates and this spits out we can treat it as a black box for now it just spits out the new state"}, {"start": 1270.8799999999999, "end": 1278.8799999999999, "text": " si at point t so this was a t minus and here we have the current state so that's the memory"}, {"start": 1278.8799999999999, "end": 1287.84, "text": " updating part okay finally uh we have the embedding module uh and um so i haven't mentioned this but"}, {"start": 1287.84, "end": 1293.12, "text": " the main goal of the embedding module is to avoid the so-called memory staleness problem so what that"}, {"start": 1293.12, "end": 1300.08, "text": " means is the following so let's say we we have a graph and um like we have we had ben here"}, {"start": 1301.12, "end": 1307.84, "text": " and maybe ben stopped uh using twitter for maybe a month or two and now what happens is that his if"}, {"start": 1307.84, "end": 1315.4399999999998, "text": " you take a look at the memory so and we we take ben's state uh because he's not interacting with"}, {"start": 1315.44, "end": 1325.1200000000001, "text": " anybody anymore um we this this representation is not updated and so um we we we need to have"}, {"start": 1325.1200000000001, "end": 1332.24, "text": " uh some sort of of of embedding to to to make this relevant for ben when he returns so if we want to"}, {"start": 1332.24, "end": 1338.16, "text": " recommend to ben which tweets should he uh see so that's like your your home page on twitter you see"}, {"start": 1338.16, "end": 1343.28, "text": " a bunch of tweets so twitter somehow needs to figure out which tweets are relevant for you and"}, {"start": 1343.28, "end": 1348.6399999999999, "text": " also uh twitter should recommend you what's the what's what what are the persons you should"}, {"start": 1348.6399999999999, "end": 1354.08, "text": " probably follow so how they do that is the following so ben has uh his state b and in the"}, {"start": 1354.08, "end": 1359.12, "text": " simplest case you could just do the following you could just take sp and maybe andrew ang has his"}, {"start": 1359.12, "end": 1366.32, "text": " own state sa and you just do a dot product or mlp and uh whatever and that spits out some probability"}, {"start": 1366.32, "end": 1373.52, "text": " so if the probability is close to one uh twitter may recommend that you follow uh and you ang okay"}, {"start": 1373.52, "end": 1379.84, "text": " but now the problem is if this remains stale these predictions will not be as relevant anymore for"}, {"start": 1379.84, "end": 1386.08, "text": " ben because he he has changed in the meanwhile okay so the solution is to use what i already"}, {"start": 1386.08, "end": 1392.96, "text": " explained and that's using some kind of like a gnn module like get and you you get the updated"}, {"start": 1392.96, "end": 1400.08, "text": " 
version uh of the of the embedding because his friends presumably presumably his friends haven't"}, {"start": 1400.64, "end": 1406.96, "text": " stopped using twitter so their states are are being constantly being updated so we can use the the"}, {"start": 1406.96, "end": 1412.56, "text": " top so we can use that information to make a more relevant predictions for ben okay hopefully that"}, {"start": 1412.56, "end": 1420.32, "text": " makes sense and um they had a couple of baselines so the one i mentioned the the simplest one is"}, {"start": 1420.32, "end": 1427.4399999999998, "text": " just you take the you just take the state of of the user so that's this uh identity uh embedding"}, {"start": 1427.4399999999998, "end": 1433.6, "text": " so the final embedding is just whatever the state is and they show that this has a really bad"}, {"start": 1433.6, "end": 1439.4399999999998, "text": " performance on the later benchmarks okay and then we have this time projections simple equation you"}, {"start": 1439.4399999999998, "end": 1445.28, "text": " can check it out yourself i'll focus on on on the temporal graph attention so this is the important"}, {"start": 1445.28, "end": 1451.6, "text": " part this is the strongest baseline basically they had uh and the way it works it's it's the same"}, {"start": 1451.6, "end": 1459.36, "text": " thing as get uh just instead of using um the the attention uh that get used originally they just"}, {"start": 1459.36, "end": 1465.92, "text": " use the original of aswani attention so that means they have uh queries keys and values and everything"}, {"start": 1465.92, "end": 1474.96, "text": " else remains the same um basically the new information here is uh this phi function and the"}, {"start": 1474.96, "end": 1482.0, "text": " fact they're using multiple features concatenated here to get the final uh embedding representation"}, {"start": 1482.0, "end": 1488.64, "text": " okay so let's see how it exactly works so this h so for a particular node at a particular time t"}, {"start": 1488.64, "end": 1497.2800000000002, "text": " how they calculate the h is like this so they they take the state of that node at a particular time"}, {"start": 1497.2800000000002, "end": 1503.1200000000001, "text": " t and they take the node's features so that's basically maybe a description on your profile"}, {"start": 1503.1200000000001, "end": 1508.24, "text": " and then birth embedding blah blah blah blah and you basically just add them up and that's your"}, {"start": 1508.24, "end": 1515.8400000000001, "text": " zero zero that's the row that those are the raw features okay so we concatenate those with the"}, {"start": 1515.84, "end": 1524.24, "text": " with the uh basically edge information so let me let me draw uh example here it'll be easier"}, {"start": 1524.24, "end": 1533.28, "text": " we have a node we have a bunch of nodes here again we'll do the most recent uh neighbors will just"}, {"start": 1534.3999999999999, "end": 1539.04, "text": " kind of disregard all of the others so let's say we only take these two into account okay"}, {"start": 1539.04, "end": 1546.0, "text": " so we'll these are these will have some um these will have some features so if this is node one"}, {"start": 1546.56, "end": 1561.28, "text": " this is maybe node i so this is uh node i1 and this is node i j sorry j1 and uh so you just combine"}, {"start": 1561.28, "end": 1566.8799999999999, "text": " those information and you combine with phi so what's what's phi exactly here and uh it's a"}, {"start": 
1566.88, "end": 1575.2800000000002, "text": " it's a there is a this nice paper called um time to work time to work and you can check it out"}, {"start": 1576.0, "end": 1582.48, "text": " but basically so that's one thing you could do and uh basically what you do is in order to get"}, {"start": 1582.48, "end": 1588.3200000000002, "text": " the vectorized representation of time you just map it into the following thing so you map it into"}, {"start": 1588.32, "end": 1600.24, "text": " a vector that has uh features like this so maybe uh w1 t plus uh n1 and then the second feature will"}, {"start": 1600.24, "end": 1616.8799999999999, "text": " be sine of w2 t plus n2 etc so you just continue using signs here and you increment um the arguments"}, {"start": 1616.88, "end": 1629.6000000000001, "text": " by one so we have w3 etc so these are learnable so you learn w2 and 2 w1 and 1 etc and if you take"}, {"start": 1629.6000000000001, "end": 1634.3200000000002, "text": " a closer look and you're familiar with the transformer this kind of resembles the positional"}, {"start": 1634.3200000000002, "end": 1641.7600000000002, "text": " encodings except that these are uh time aware okay and basically so you learn these and they"}, {"start": 1641.76, "end": 1648.08, "text": " showed in the time to work paper that you basically uh these are capturing the periodicity in the"}, {"start": 1648.08, "end": 1654.16, "text": " signal whereas this one is capturing something more uh constant in the signal so this one if you take"}, {"start": 1654.16, "end": 1662.56, "text": " a look this is just a simple uh basically a line and these here are sinusoids so you basically are"}, {"start": 1662.56, "end": 1670.32, "text": " learning sinusoids like this and the higher the w the the higher the frequency will be and if you"}, {"start": 1670.32, "end": 1677.9199999999998, "text": " have some smaller w then you'll have slower smaller frequency etc and these are just the offset"}, {"start": 1678.96, "end": 1685.2, "text": " these are just encoding the offset of your sinusoids so that's the simple heuristic they"}, {"start": 1685.2, "end": 1692.0, "text": " use to encode a time and once you have that you just do simple multi-head attention and you get"}, {"start": 1692.0, "end": 1697.2, "text": " the aggregated feature vector you concatenate it with the current feature vector and you pass it"}, {"start": 1697.2, "end": 1706.64, "text": " through an mlp so this thing here hi corresponds to this node and the second term here is the"}, {"start": 1706.64, "end": 1713.52, "text": " aggregated uh combination of these two because they are their most recent neighbors right so we"}, {"start": 1713.52, "end": 1724.88, "text": " kind of associate some alphas alpha one alpha two here and uh we combine them like a simple weighted"}, {"start": 1724.88, "end": 1731.68, "text": " sum and we get the features so that's how we calculate the embedding and they show that graph"}, {"start": 1731.68, "end": 1738.48, "text": " attention uh network is the best baseline we'll see that in a minute so that was that was all the"}, {"start": 1738.48, "end": 1746.3200000000002, "text": " nitty-gritty details um we saw how to train these uh uh how to how to pass how to pass some"}, {"start": 1746.3200000000002, "end": 1751.6000000000001, "text": " gradient information to these modules we saw how they exactly work and now we'll see a couple more"}, {"start": 1751.6, "end": 1758.0, "text": " details and then i'll zoom out um i already mentioned this part 
about the information leakage"}, {"start": 1758.0, "end": 1763.84, "text": " and so i'll skip it and so that's the reason we had to introduce the raw memory storage and this"}, {"start": 1763.84, "end": 1767.6, "text": " part is interesting so while from the perspective of the first interaction in the batch the memory"}, {"start": 1767.6, "end": 1772.24, "text": " is up to date since it contains information about all previous interactions in the graph"}, {"start": 1772.24, "end": 1776.6399999999999, "text": " from the perspective of the last interaction in the batch the same memory is out of date"}, {"start": 1776.64, "end": 1782.5600000000002, "text": " since it lacks information about previous interactions in the same batch this incentivizes"}, {"start": 1782.5600000000002, "end": 1789.6000000000001, "text": " the use of a big batch size so let me break it down for you um that means the following so we"}, {"start": 1789.6000000000001, "end": 1796.0800000000002, "text": " have the previous batch information we update the states and now this first because this is a batch"}, {"start": 1796.0800000000002, "end": 1803.2, "text": " that's chronologically sorted we have t5 t6 so this first interaction will be using states which"}, {"start": 1803.2, "end": 1810.64, "text": " are totally up to date but once this one is done we should basically have s1 and s2 updated because"}, {"start": 1810.64, "end": 1815.92, "text": " once an interaction happens we calculate the messages for the nodes that that are associated"}, {"start": 1815.92, "end": 1821.3600000000001, "text": " with that interaction and we update the states but we can't do that while we are in the batch"}, {"start": 1821.3600000000001, "end": 1829.1200000000001, "text": " so that means this to this one here two to three will use the same state as this one here and that"}, {"start": 1829.12, "end": 1835.12, "text": " means it's slightly out of date and now imagine the bigger batch and they use 200 as a nice trade-off"}, {"start": 1835.76, "end": 1842.6399999999999, "text": " and the the deeper you are in the batch let's call it that way the the more the more stale"}, {"start": 1842.6399999999999, "end": 1849.28, "text": " this current state representation is so that's the reason you don't want to have a too big of a batch"}, {"start": 1849.28, "end": 1853.6, "text": " but then again you don't want to keep it too slow too small because otherwise it will be really"}, {"start": 1853.6, "end": 1862.48, "text": " compute inefficient to have a small batch size okay so that's that part and now let me just show"}, {"start": 1862.48, "end": 1872.3999999999999, "text": " you the results before before i do a high level overview so they had some continuous this dynamic"}, {"start": 1872.3999999999999, "end": 1878.1599999999999, "text": " graph baselines here like jody tgad which is basically the same thing as t like this this paper"}, {"start": 1878.16, "end": 1883.6000000000001, "text": " at tgm it just doesn't have memory and the associated modules so it's a simplification of this of this"}, {"start": 1883.6000000000001, "end": 1889.6000000000001, "text": " paper basically it's a specific case specific instance and they also experimented with different"}, {"start": 1890.8000000000002, "end": 1896.72, "text": " basically different embedding methods using memory or not using memory module etc so they have a"}, {"start": 1896.72, "end": 1902.88, "text": " bunch of baselines and the results are the following so the tgm with attention achieves the 
best"}, {"start": 1902.88, "end": 1908.8000000000002, "text": " results overall on all of the three data sets so i was continuously using twitter as a running"}, {"start": 1908.8000000000002, "end": 1916.64, "text": " example but they also have wikipedia and reddit where this one is just users are notes and pages"}, {"start": 1916.64, "end": 1923.5200000000002, "text": " are notes and user editing a page is an interaction event and basically the edit text is what the"}, {"start": 1923.5200000000002, "end": 1930.16, "text": " feature for the edges etc okay similarly for reddit so it's those are bipartite graphs so we have"}, {"start": 1930.16, "end": 1936.5600000000002, "text": " have pages here we have users here and let me see what's also interesting they also did"}, {"start": 1937.28, "end": 1944.0800000000002, "text": " node-wise classification but that's not so so important for this paper and here are some"}, {"start": 1944.0800000000002, "end": 1949.3600000000001, "text": " ablation studies basically you can see that let me just zoom in a little bit basically the the"}, {"start": 1949.3600000000001, "end": 1958.0, "text": " tgm with attention is the best trade-off overall because it takes less time than these two whereas"}, {"start": 1958.0, "end": 1965.04, "text": " the performance is really similar so this one just uses two get layers and the reason one get layer"}, {"start": 1965.04, "end": 1970.0, "text": " is enough is because they are using this memory thing and the memory thing already implicitly"}, {"start": 1970.0, "end": 1974.08, "text": " contains the information from the neighboring nodes so using one layer you're basically"}, {"start": 1975.12, "end": 1981.6, "text": " you're basically accessing the features from the two hop neighborhood okay and using mean i think"}, {"start": 1981.6, "end": 1986.88, "text": " the mean just means they are using for the for the memory aggregation they are using the mean"}, {"start": 1986.88, "end": 1994.8000000000002, "text": " instead of the last last heuristic and you can see t get and jody and direp these other baselines are"}, {"start": 1994.8000000000002, "end": 2004.72, "text": " way lower performance wise than than this method some ablations again with their own model what"}, {"start": 2004.72, "end": 2010.96, "text": " they don't use the memory you can see the the worst curve is here when they add two layers again"}, {"start": 2010.96, "end": 2018.24, "text": " no memory it's a bit better and then as we add the memory and we add the the the the the last"}, {"start": 2019.04, "end": 2026.56, "text": " message aggregation heuristic we get the best results overall so that's pretty much it"}, {"start": 2027.68, "end": 2033.1200000000001, "text": " let me see if there is something else left that's it now let me just now with all the information"}, {"start": 2033.1200000000001, "end": 2040.08, "text": " you have in your head let me just briefly go ahead and explain it again okay now that we have all"}, {"start": 2040.08, "end": 2046.3999999999999, "text": " the information explained let me just kind of give you a holistic overview of the whole pipeline"}, {"start": 2046.3999999999999, "end": 2053.2799999999997, "text": " so here we have the information from the previous batches of these edge interactions okay and that"}, {"start": 2053.2799999999997, "end": 2063.84, "text": " means we have so rm1 is basically si contains si sj contains the delta t so the time that elapsed"}, {"start": 2063.84, "end": 2070.2400000000002, "text": " 
between the previous interaction and this interaction and we have ej which is the note the interaction"}, {"start": 2070.2400000000002, "end": 2079.28, "text": " features and we have the same tuples for all of the others the thing is we only focus on the"}, {"start": 2079.28, "end": 2085.04, "text": " nodes that we need so this thing may be huge and contain information for for many other notes in"}, {"start": 2085.04, "end": 2090.6400000000003, "text": " our graph we only care about nodes one two and three and that's why we only calculate those"}, {"start": 2090.64, "end": 2096.7999999999997, "text": " messages so we have we we just basically do a concatenation that's what they did and you get m1"}, {"start": 2096.7999999999997, "end": 2104.08, "text": " you do a concatenation you get m2 etc at different timestamps okay now the aggregation they used"}, {"start": 2104.08, "end": 2110.48, "text": " either the mean or the last message so basically you just take let's say we we use the last message"}, {"start": 2110.48, "end": 2115.7599999999998, "text": " we just pass this message into the next step and that's the aggregated messages array once we have"}, {"start": 2115.76, "end": 2122.96, "text": " those we just have grew here and for every one of those we pass the state we have feed in the message"}, {"start": 2122.96, "end": 2128.0, "text": " we we spit out the new state when we update the states so now we have the states and the the"}, {"start": 2128.0, "end": 2135.1200000000003, "text": " computer this part is included into the computational graph now we use those states and we use get to"}, {"start": 2135.1200000000003, "end": 2141.0400000000004, "text": " create the embeddings now interesting thing here to note is so we have nodes one two and three"}, {"start": 2141.04, "end": 2150.64, "text": " but because this node one maybe has some other neighbors it will make sense it will make sense"}, {"start": 2150.64, "end": 2156.64, "text": " to have other states here as well so whatever the neighbors are i'll call them sn so those are these"}, {"start": 2156.64, "end": 2163.6, "text": " these guys here it will be useful to have the states for them updated as well because those"}, {"start": 2163.6, "end": 2169.68, "text": " will be included by a get because get accumulates the neighborhood information right so those will"}, {"start": 2169.68, "end": 2175.2799999999997, "text": " be accumulated by the get so i guess they just missed to to place sn here like the neighbors"}, {"start": 2175.9199999999996, "end": 2181.68, "text": " but whatever and finally you calculate the embeddings and now you just depending on the"}, {"start": 2181.68, "end": 2187.04, "text": " ground truth so these exist but the negative the negative ones don't exist so you just use binary"}, {"start": 2187.04, "end": 2192.72, "text": " cross entropy loss to train this whole pipeline end to end and now you have a system that can"}, {"start": 2192.72, "end": 2198.48, "text": " successfully predict uh given two nodes given two yeah given two nodes you can calculate their"}, {"start": 2198.48, "end": 2203.6, "text": " embeddings and you can predict whether they are likely to interact in the future and that's useful"}, {"start": 2203.6, "end": 2209.68, "text": " for twitter because you can use that to recommend for node a which node should that node follow in"}, {"start": 2209.68, "end": 2216.72, "text": " the future like maybe andre ang or which which tweets should that user see etc couple of strange"}, {"start": 2216.72, 
"end": 2222.16, "text": " things i've noticed in the paper maybe maybe a typo or something basically if you take a look"}, {"start": 2222.16, "end": 2228.3999999999996, "text": " at the query it doesn't have the edge information here so that means if you're doing a scale dot"}, {"start": 2229.2799999999997, "end": 2237.2799999999997, "text": " product between the the query and the keys you have a like a differently dimension vectors so"}, {"start": 2237.2799999999997, "end": 2242.7999999999997, "text": " you can do it so it's either a typo or they have some additional projection layer here"}, {"start": 2242.7999999999997, "end": 2248.56, "text": " that they didn't explicitly include here or i've missed something the second thing is i"}, {"start": 2248.56, "end": 2254.08, "text": " mentioned the memory staleness problem but basically if it's like they are implicitly stating"}, {"start": 2254.08, "end": 2259.68, "text": " that you need to retweet somebody else in order for your memory state to get updated but what if"}, {"start": 2259.68, "end": 2266.48, "text": " somebody retweets my post so i assume because that interaction involves my node and that person's"}, {"start": 2266.48, "end": 2272.96, "text": " node so because the way how the messages are computed my node state should be updated as well"}, {"start": 2272.96, "end": 2279.28, "text": " so i'm not sure um uh whether they're just simplifying it here in the paper but anyways"}, {"start": 2279.28, "end": 2285.92, "text": " uh just just keep that in mind that uh that's not super clear because for me it looks like that my"}, {"start": 2285.92, "end": 2290.88, "text": " state should be updated even though i'm not active on twitter because other people are interacting"}, {"start": 2290.88, "end": 2295.92, "text": " with me and thus my memory state is being updated constantly they're probably using a directed edge"}, {"start": 2295.92, "end": 2300.56, "text": " assumption here but it's not super clear from the paper uh one more thing to keep in mind is that"}, {"start": 2300.56, "end": 2308.88, "text": " this memory uh table is not uh let me open a pen is not trainable so what is trainable is the"}, {"start": 2308.88, "end": 2315.44, "text": " group here the lstm and potentially the message and aggregation functions even though they"}, {"start": 2315.44, "end": 2321.2799999999997, "text": " just use the concatenation so there's no learner learnable parameters here neither there is"}, {"start": 2321.2799999999997, "end": 2326.32, "text": " learnable parameters in their aggregation function so they're left up with learning mem so that means"}, {"start": 2326.32, "end": 2331.92, "text": " once the the model is fully trained uh you're basically once an interaction happens you just"}, {"start": 2331.92, "end": 2338.1600000000003, "text": " calculate the messages and you basically uh aggregate them and you update the states"}, {"start": 2338.7200000000003, "end": 2344.32, "text": " and um that's it and once you once you want to predict do some recommendation then you use the"}, {"start": 2344.96, "end": 2351.04, "text": " the get module and create those embeddings and just make the recommendation so that was all i"}, {"start": 2351.04, "end": 2356.96, "text": " had to say for this video you know the drill subscribe hit the bell icon and until next time"}, {"start": 2356.96, "end": 2381.76, "text": " keep learning deep you"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=fQyHEXZB-nM
OpenAI CLIP - Connecting Text and Images | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover the CLIP paper - Learning Transferable Visual Models from Natural Language Supervision. You'll learn about: ✔️ How the contrastive learning behind CLIP works ✔️ All the nitty-gritty details behind the paper ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Bitter lessons by Sutton: http://www.incompleteideas.net/IncIdeas/BitterLesson.html ✅ CLIP paper: https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language.pdf ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 OpenAI's CLIP 02:10 Detailed explanation of the method 06:00 Comparision with SimCLR 12:55 How does the zero-shot part work 20:45 WIT dataset 21:30 Why this method, hint efficiency 28:35 Zero-shot - generalizing to new tasks 31:30 Prompt programming and ensembling 34:00 Zero-shot perf 36:20 Few-shot comparison with best baselines 38:20 How good the zero-shot classifier is? 40:45 Compute error correlation 41:20 Quality of CLIP's embedding space 43:05 Robustness to distribution shift 49:10 Limitations (MNIST failure) 50:30 A short recap ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #clip #openai #nlpsupervision
Hi there. So a couple of days ago OpenAI published two really exciting research works, and among them DALL·E took pretty much all of the media attention and was all the hype, but I found this work, CLIP, or Learning Transferable Visual Models from Natural Language Supervision, really exciting and I want to cover it in this video. So basically the method itself is not new, it's just a modified version of the ConVIRT paper, but the thing they introduced here is the concept of using zero-shot in computer vision. So they're probably not the first ones, but they are really bullish on it. It's pretty much a continuation of their exploration of GPT and all of the zero-shot capabilities it had, and they're now trying to take that NLP paradigm and bring it back into computer vision. So yeah, this paper is mostly about how we can create models which mimic the human brain and learn the way we do. So with that being said, let me jump into the method, and I'll first explain the method and then we'll go through the different experiments they did. Okay, so they say here: we demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million image-text pairs collected from the internet. And so they also introduced this new dataset because, this being OpenAI, they really are bullish on computation, and all the existing datasets weren't leveraging the scale as much as they could, especially since most of the computer vision benchmark datasets like ImageNet are only associated with a simple structured label, and they just want to associate images and text, so they had to create their own dataset. So without further ado let's jump into the method itself and I'll explain it in depth. I'll start with some high-level explanation and we'll slowly dig deeper into the method. Okay, so this is how it looks. Basically what they do is the following. So they have, as I said, associated image-text pairs, and the thing they do is they first do some data augmentations here for the image part, and then they have an image encoder and they have a text encoder, and here they just used ResNets and the newly introduced Vision Transformer, whereas for the text encoder they just use the original Vaswani transformer with certain modifications they did for the GPT family of models as well. And once you have that setup, what you do is the following. So you take the text sequence and you just encode it, and how do you do that? Well, because this is a transformer, you basically just embed the text sequence here, and you take the start of sentence token and the end of sentence token and you just bracket the text sequence using those, and now, if you know how the transformer works, and I hope you do, you just have the forward pass through the transformer, and whatever comes out from the last layer of the transformer above the EOS (end of sentence) token, that's the representation they're going to use. So they just do some simple layer normalization here and they project this using some linear projection layer into the embedding space, and we are here. So one through N, N is just the size of the batch, so these are the encodings we get for the text sequences.
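As a small illustration of the text side just described, the sketch below (PyTorch, with made-up variable names) pulls the last-layer activation sitting above each sequence's EOS token, layer-normalizes it and linearly projects it into the shared embedding space; it assumes the transformer outputs have already been computed, and the dimensions are assumptions rather than the paper's.

```python
import torch
import torch.nn as nn

def text_features(last_layer_out, eos_positions, ln, proj):
    """last_layer_out: (batch, seq_len, width) transformer outputs.
    eos_positions:  (batch,) index of the EOS token in each sequence.
    ln, proj:       stand-ins for the layer norm and the linear projection."""
    batch_idx = torch.arange(last_layer_out.size(0), device=last_layer_out.device)
    eos_feat = last_layer_out[batch_idx, eos_positions]  # feature above each EOS token
    return proj(ln(eos_feat))                            # (batch, embed_dim)

# Example wiring (illustrative sizes):
width, embed_dim = 512, 512
ln = nn.LayerNorm(width)
proj = nn.Linear(width, embed_dim, bias=False)  # single matrix, no activation
```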
On the other hand, the image encoder is pretty straightforward: you basically feed the image through and you get a certain representation at the output. So if this was a ResNet you'd probably just take the output after the average pooling part, whereas they did some modifications to the ResNet, I won't get into those details right now, but yeah, you can imagine you just take a certain representation after that pooling part and you use that as your image representation. You also do that linear projection and you're finally inside this contrastive embedding space. So once we're here, on the high level we just want to make sure that this image embedding I1 has the highest cosine similarity with this text embedding vector T1, and you want to push all of these down to zero, and you want to do the same thing along this column. Because we have an asymmetric setting here, on one side we have textual information and on the other side we have visual information in our embedding space, so this is a multimodal embedding space, they do it both ways: they also want to make sure this one goes up to one and all of these are pushed down to zero, and you just repeat all of that across the batch and that's it. Now let's get one level deeper and see how this exactly works. So this paper was heavily inspired by this ConVIRT paper that was working on medical images, and they actually achieved some really awesome results: the visual features they trained using their method proved to be really good at some downstream medical tasks. So you can see how the pipeline looks, and it's the same thing as I just explained. So we have a sequence of images and a sequence of associated textual sequences, and basically what they do is data augmentations, which is a really popular thing to do in these contrastive learning methods, and they did some textual augmentations, so just extracting single sentences from their text sequences. This paper, on the other hand, ditched this thing here because most of the images they had had only one single sentence associated with the image, so they can't just remove the sentence, otherwise they'll lose the supervision signal. So they just ditched this away and they simplified this augmentation part, where the only thing they do is basically resize the image and take a square crop. That's everything. No color jitter, no blurring, nothing. So that's everything. Now the second thing they modified is that this was an MLP in the previous work, and by the way this ConVIRT paper was inspired by the SimCLR framework, which you probably heard of, which achieved really nice results on self-supervised visual representation learning, and it introduced this concept of using an MLP between the embedding space and the contrastive embedding space. So in this paper they just ditch this away and they just use a linear projection layer, and that's it. So no activation functions, you just have one matrix here that projects from the embedding space into the contrastive embedding space. So that's the other modification they did to the pipeline. Okay, so once we have that, let's see some formulas for the things I already intuitively explained. So what we do is the cosine similarity, so this thing with these angle brackets just symbolizes the cosine similarity.
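Before the formulas, here is a minimal PyTorch sketch of the symmetric objective just described: normalize both sets of projected features, compute the N×N cosine-similarity matrix scaled by a temperature, and apply cross-entropy along both the rows and the columns with the diagonal as the target. Function and argument names are illustrative, and the temperature handling is simplified relative to the paper.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, log_temp):
    """image_emb, text_emb: (N, d) projected features for N matching pairs.
    log_temp: learnable scalar; its exp() plays the role of the temperature scaling."""
    image_emb = F.normalize(image_emb, dim=-1)   # unit norm -> dot product = cosine
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() * log_temp.exp()            # (N, N) similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction (rows)
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction (columns)
    return (loss_i2t + loss_t2i) / 2
```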
So once again you have two vectors, let's say this is v_i and this thing here is u_i, and because this is a cosine similarity, that means these two vectors are normalized, they are basically unit norm, so the norm is equal to 1, and so basically when you do the dot product you end up having cosine of theta, where theta is the angle between those two. So again, if theta is 0 we'll have cosine similarity 1, and if the cosine similarity is 0 that means they're orthogonal, etc. So when we say we want to make these two be equal to 1, that means those vectors are pretty much aligned, and that's the idea. So t here is the temperature coefficient, it just modifies the softmax distribution to make it a bit more steep, and it's actually a trainable parameter in this paper and not a hyperparam. And this is a familiar softmax thing, so basically what you're doing is summing: you take this cosine similarity, and in the denominator you just sum up all of these and you get the probability distribution, and then you just have a simple cross entropy loss over that softmax distribution, which is a pretty usual thing to do. So you can treat this as a classification problem, basically you're trying to classify which out of these combinations is the true combination, so what's the true class, and because, as I already mentioned, this problem is symmetric, you're basically doing the same thing, just this time you put u_i here and your denominator sums over the column, and the final loss is just a weighted combination of those two, and you just go over the whole batch, you sum over the batch and you average out. So that means you take these and then you also add this one to the loss, and you add the row part, and you just do that for the whole batch. So yeah, hopefully that was detailed enough for you to understand the exact way this method works. So once you have that loss you just backprop the gradients through your encoders and you train the image and the text encoders, and that's it, your model is ready for this zero-shot learning, and I'll get to that in a second. I just want to mention one more thing about data augmentations and the reason they just did this simple geometric augmentation whereby they just cropped from the resized image and did not use color jitter. So SimCLR, this framework I already mentioned, found this nice finding: one composition of augmentations stands out, random cropping and random color jitter. We conjecture that one serious issue when using only random cropping as data augmentation is that most patches from an image share a similar color distribution, so that means the model can learn to cheat by just looking at the histogram distribution and figure out that those are the same instances. Now the difference between this framework and the CLIP model is that they just compared images to images in the contrastive loss, whereas we now have text and image, so the reason I think they ditched the color jitter is the efficiency, and we'll see that in a couple of minutes, because yeah, that's OpenAI, they're just trying to scale things and see how they perform with that scale. So what you can see here is the following: this thing here is basically, you have an image, and if you just ditch the color jitter and you just take a crop, and you take another patch, and you take another patch, and you take another patch, so
all of these patches will have, as you can see here for four different patches, a similar color distribution, and so once you do the contrastive learning between those augmented patches it will be really easy, because even though they are visually different, the histograms are the same, so it will be a really easy problem to just place them in the same part of the embedding space, whereas here, once you do the color jitter, you can see that the model cannot exploit this trick, and that's the reason they use color jitter. But as I said, probably because of efficiency reasons, it was a trade-off, they just used the random cropping. Okay, that was it, now let's focus on this part, and this is arguably the most important part of this paper, and that's how they use this thing now. So after they trained it, how did they use it in a zero-shot setting? And this is how. So basically, let's say you now want to test this one in a zero-shot setting on new downstream computer vision tasks like CIFAR-10, and that's a pretty famous and simple benchmark in computer vision, and you basically have ten classes, right, so maybe plane, car, dog, bird, etc. So what you do is the following: you basically embed those classes in sentences of this construction, maybe "a photo of a" and then you just fill in the blank, you put a plane, or car, or dog, whatever. And once you do that, you just encode those sentences in the same way, so you have the end of sentence token, blah blah, this is a transformer, but this time this one is trained, so we have it initialized with certain weights after this contrastive learning training, and so we just prepare all of those embeddings, we just pre-compute them once and that's it, we are finished with the text encoder, we just have this batch of embeddings here. And now, how we do the zero-shot classification is the following: we just take the image that we want to get classified, we find its embedding vector, and we just do a simple cosine similarity between this embedding vector and these textual embedding vectors, and whatever the highest similarity is, that's our class. And you can see here "a photo of a dog", because T3 corresponds to that particular sentence and had the highest similarity, so maybe this one has 0.7 and all of the others had maybe 0.1, this one had even smaller similarity, etc., so that's how the method works in a nutshell, and yeah, hopefully this was clear enough. And they also mention here: at test time the learned text encoder synthesizes a zero-shot linear classifier. So "synthesizes", that's an interesting word, because you can basically treat this as a hypernetwork, so you can treat this whole system, with the text encoder and with these sentences, as an adaptable hypernetwork: basically, depending on your dataset you construct a certain corpus of sentences and those will adapt this text encoder so that it's like a configurable linear classifier, in a way, let's call it that way. So if you're still having problems understanding why this is a configurable linear classifier, let's look at it from a different perspective. So as I mentioned, we have those linear projection layers from the embedding space of the encoders into the contrastive embedding space, so that means after the embedding space here we'll have a linear projection which will project this vector into this vector, so that we'll end up
having the same dimensions across these image vectors and these text vectors, so that means this one is d dimensions and all of these have d dimensions. So let me just extract this box and represent it as a vector, which it is, and these are also d-dimensional vectors, so when we do cosine similarity, when we do the dot product, we are basically creating a fully connected layer. So these weights here correspond to maybe T1 and we'll end up with a value here, and we do the same thing with T2, T2 is also a fully connected layer over this vector here and we end up with a second value, and we do that N times, so we'll have N values here, so that's like N heads. And you can see that all of these weights here are basically configured depending on the text corpus, because the text encoder is pre-trained and it's fixed, and we just input this text corpus that's created out of the labels for that particular dataset, and we end up with particular weights here, which again are basically specifying the weights in this classifier, and that's why it's a configurable classifier, and so that's really interesting. So: a zero-shot linear classifier by embedding the names or descriptions of the target dataset's classes, and we already saw that, we just embedded them. So one detail I omitted here is that basically what they do here is ensembling and prompt programming. So if you're familiar with the GPT family of models, you know that prompt programming is a thing, it's called like software 3.0, and the idea is that depending on how you construct the skeleton around the class you'll get different performance out of your model. So what they ended up doing is they play with different constructions, and they also ensemble a bunch of these constructions and they just average out the features here before they project them into the contrastive embedding space. So what that means is, maybe for car they'll have "a photo of a car" but they'll have another sentence like "a nice photo of a car" for example, and they'll just take both of those embeddings and average them out before they project it into the contrastive space, and they found that that small trick actually improves performance significantly, but we'll get to that in a bit more detail a bit later. Okay, so that was the explanation, and now let's dig even a bit deeper and then we'll jump to experiments. So let's see, as I said: we find that CLIP, similarly to the GPT family, learns to perform a wide set of tasks during pre-training, including OCR, geolocalization, action recognition and many others. So again, if you're familiar with the GPT family of models, you know that there is this emergent, let's call it, phenomenon happening, where you're basically training your model on some task, like GPT-3 was just trained to predict the next token, and once you do that, they figured out that it learned how to do many other tasks that it wasn't explicitly trained to do. Like maybe you input a sentence into GPT and you just append "too long; didn't read" (TL;DR) here, and then after you sample out of it you'll notice that it actually learned to summarize this input sentence even though it wasn't trained to do that. And the same thing happened here with CLIP: they trained it, as we saw, to just associate the most probable text sequence with the image, and along the way it learned to do all of these things, so that's interesting. Okay, I'll get to this pic in a second, let's just
go over a couple more things. So: learning from natural language also has an important advantage over most unsupervised approaches, blah blah. So basically this says that the way they train it enabled this flexible zero-shot transfer, as you saw with the hypernetwork story I just explained to you. And yeah, I mentioned that they created this WIT dataset which has 400 million pairs of images and text. Why they did it is because they want to leverage the scale and none of the existing datasets had that, so that's why they did it, and it's interesting how they created it: they had like half a million queries, which they created by collecting words that appear at least a hundred times in the English version of Wikipedia, plus some other heuristics, and they tried to balance it out so that every one of those queries has a similar number of images associated with it. That's how they created the dataset. But more importantly, let's jump into this part about efficiency, so let's see why they ended up with the exact task that I just explained. So they say here that Mahajan and his collaborators, folks from Facebook, required 19 GPU years to train this ResNeXt model, and Xie and others required 33 TPU v3 core years to train Noisy Student. So that's a lot of compute, and the only thing they are trying to do is predict the 1000 ImageNet classes, and comparing that to learning the open set of visual concepts from natural language seems daunting. So if this was so hard to do, then just imagine how much harder it is to train a model to predict open-ended natural language labels. They did first try exploring a discriminative method that has a predictive objective instead of the contrastive objective I just showed you, and what the VirTex paper did is they basically also had an image encoder and they had a transformer above it, and they were trying to predict the exact words from the caption of that image. So you have an image, you have a caption associated with it, and you're trying to predict the exact words. So that's not a new approach, we've been doing these captioning predictions for a long time, and people noticed that the visual representations you get out of this task, when you for example do some linear probing on these representations, are really useful for many downstream tasks, classification etc. So the thing is, they tried this one and it was too expensive, and we'll see in one of the charts above, which will nicely explain this. And the second thing, they also considered these generative methods like Image GPT, and what it does is it learns to autoregressively regress the input image, and it also learns really nice visual representations, but the thing is it's not efficient, and efficiency is the key point here. This is OpenAI, they want to make the most efficient method and then leverage the compute they have, both in the hardware sense, and the huge dataset they collected. So The Bitter Lesson by Sutton is a nice read if you're not familiar with it, but basically the more compute you can throw at the problem, usually that's a good idea. I'm not saying that's the only way we will go forward with research, but that's definitely an option. Okay, and having said that, let's jump to this chart, and what they did is the following. So this transformer language model, that's basically that VirTex approach I just mentioned, so you just basically stick the
transformer up on top of the image encoder, and you can see that by trying to predict the exact words from the caption, the zero-shot ImageNet accuracy is really low and you need a lot of data to accomplish it. Compare that to a simple baseline where instead of trying to predict the exact words with the transformer, you're just trying to predict which words appear in your caption, so that's a much simpler problem: you have a vocab here, like maybe 30k words, and you're just trying to predict which words are present in your current caption, and that's this orange line you can see, and we already notice that we have a 3x efficiency gain. And finally, once we ditch the predictive objective altogether, so trying to predict the words, and just try to associate the textual sequence with the image, we get a further boost in efficiency, and now we can truly leverage the computation. Okay, so that was it about the efficiency part, and now I'll go into some more details about the method and then we'll jump to exploring the zero-shot results they got. So they say here that we train CLIP from scratch without initializing the image encoder with ImageNet weights or the text encoder with pre-trained weights, and why they do that is because work like the Vision Transformer already showed that once you have enough compute and enough data, you really don't need to bias your model in any way. So what the Vision Transformer showed is that this particular model is better than CNNs given enough data, like they used the JFT-300M dataset, that's Google's proprietary dataset, and they showed that it outperforms CNNs and it learns something similar to CNNs, just a bit better obviously, so that's why they don't initialize it with pre-trained weights. So I explained most of these, so I'll just skip them. Yeah, this is an interesting detail: for a ResNet they just swap the average pooling layer with an attention pooling layer that's implemented in a transformer style, and yeah, they're using, as I said, the Vaswani transformer for the text encoder, and I also mentioned this one about EOS. So the interesting part about how they scaled their approach is that they used EfficientNet ideas, so they used ideas from this paper to scale, I don't know how to write today, okay, to scale their technique. And I just highlighted these just to show you how much engineering goes into all of their research work, and you can see all of this mixed precision, gradient checkpointing, is all about making use as efficient as possible of the memory available on their hardware, so half precision, half precision, and yeah, it still takes a lot of time to train their biggest models: it took 18 days to train on almost 600 V100 GPUs, and 12 days on 256 for the Vision Transformer, since the Vision Transformer is more efficient than CNNs, and that's one of the points that that paper made, and that's why it takes a lot less compute. And finally, the best model they had is the Vision Transformer L that was trained for one additional epoch on a higher resolution of 336 pixels, and that's the final CLIP model that they are using throughout this paper. Okay, finally experiments. So as I mentioned, the goal of the paper is to show that CLIP has a really nice zero-shot transfer performance, and what they mean by zero-shot transfer performance is a bit more general than what you usually mean by
Okay, finally, experiments. As I mentioned, the goal of the paper is to show that CLIP has really nice zero-shot transfer performance, and what they mean by zero-shot transfer is a bit more general than what the term usually means in computer vision. They say that most existing tasks test for distribution shift and domain generalization. What that means is that for previous computer vision models you test whether performance holds up even when you change the underlying distribution: here you have a real-world banana versus a sketch of a banana, and those are called different domains. The other thing usually tested is, say the model was trained on real-world objects, bananas, dogs, cats, whatever, and you want to see how easy it is to do some fine-tuning or a linear probe on top of the feature space to also learn about a new object, maybe a horse.

What they mean by zero-shot transfer in this paper is not generalizing to those distribution shifts and domain generalizations, but generalizing to new tasks. For example, they say that while it is reasonable to say the SVHN dataset measures the task of street number transcription on the distribution of Google Street View photos, it is unclear what real task the CIFAR-10 dataset measures. So if you train a model on, say, ImageNet and then test it on CIFAR-10, you're basically doing what I already said: it just has a different distribution and maybe some different objects. Whereas if you're testing on SVHN and your model learns to extract numbers from those images, you're learning how to do OCR in the wild. So we want to generalize not only to distribution shifts and to new objects, we also want to generalize to new tasks like OCR, and not just to different classification datasets.

Okay, let's start exploring the paper. This first table is not that important: Visual N-Grams is a method from 2016 which had a similar notion of zero-shot transfer, and that's why they compare to it, although the comparison isn't really fair because many things, like transformers, did not exist back then. CLIP simply has much better results on these datasets, like ImageNet.

Before looking at the zero-shot and few-shot performance of CLIP, let's first go through the prompt engineering and ensembling details. The first thing they had to cope with is polysemy in some of the downstream computer vision datasets: the same word can name two things with different semantics, and the example they give is construction cranes versus cranes that fly. So, for a given dataset and its labels, the first step is to disambiguate the labels that have this polysemy problem. The second thing, and I already mentioned this one, is that they construct sentences around the label: if a downstream task has the label "dog", then instead of using "dog" directly to configure your zero-shot classifier, you use something like "a photo of a {label}", and just by doing that they get a 1.3 percent increase in accuracy. That's a low-hanging fruit.
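Put together, the zero-shot classifier described above is only a few lines with the released model. This is a sketch roughly following the usage example in OpenAI's CLIP repository; the package calls (clip.load, clip.tokenize, encode_image, encode_text) are the published ones, but treat the rest (class names, file paths) as illustrative:

import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# CIFAR-10-style class names wrapped in the prompt template from the paper.
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
prompts = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

image = preprocess(Image.open("some_image.png")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(prompts)
    # Normalise and take cosine similarities; the highest one is the prediction.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    sims = (image_features @ text_features.T).squeeze(0)

print(classes[sims.argmax().item()])

Note that the text features depend only on the class names, so you compute them once per dataset and reuse them for every image, which is exactly the "synthesized linear classifier" view from earlier.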
"a label", they also experiment with adding, say, "a type of pet" where that makes sense: they found several fine-grained image classification datasets where it helped to specify the category, so for example on Oxford Pets using that extra phrase boosts the performance even higher, and on OCR datasets just putting quotes around the numbers or text also gave them an additional boost. They also played with ensembling, as I mentioned. Here is a concrete example: by using "a photo of a big {label}" and "a photo of a small {label}" and combining those in the embedding space, they get an additional boost in performance. They did not average in probability space, they average the embeddings before the softmax is ever applied (I'll show a small sketch of this trick in code right after the comparison below). Considering prompt engineering and ensembling together, they get a boost of almost 5 percent, and that's on ImageNet, so it's a really big low-hanging fruit and they made good use of it. You can see it on the chart here: for a given amount of compute, just adding those ensembling and prompt engineering methods gives a large improvement in the average score. So that's the first thing I wanted to mention.

Now let's jump to the zero-shot performance of CLIP. What they did here is take an off-the-shelf ResNet-50 pretrained on ImageNet, freeze its features, and fit a fully supervised linear classifier on every single dataset you see here, STL-10, Stanford Cars and so on. On the other side they took CLIP in a zero-shot setting, and we can see that zero-shot CLIP beats the fully supervised ResNet-50 probe on many of the datasets. If we analyze the extremes, this part and this part, that gives us more information about what's happening. First things first: the dataset zero-shot CLIP improves on the most is STL-10, a dataset designed to encourage efficient learning by providing only a limited number of labeled examples. What that means is that for this particular dataset, taking the frozen ImageNet ResNet-50 features and training a classifier on that small number of examples just doesn't give good results, so CLIP comes out far ahead. On the other end of the spectrum, if we analyze these datasets, we can see they are complex and specialized. Looking at where it underperforms, zero-shot CLIP is really weak on datasets like satellite image classification, lymph node tumor detection, counting objects and so on, and the paper says these results highlight the poor capability of zero-shot CLIP on more complex tasks. So that's where it underperforms the ResNet, but it's still quite fascinating that a fully supervised model using label information is underperforming CLIP on many of the datasets. Of course, it's worth mentioning that ResNet-50 is by no means state of the art.
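Here is that ensembling sketch: embed several prompt templates per class, average the normalized text embeddings, and renormalize, so at test time you still only pay for one classifier weight vector per class. Again this is my own illustration, with made-up templates and names, not the authors' code:

import torch

def build_zero_shot_weights(model, tokenize, classnames, templates, device):
    """Ensemble several prompt templates into one embedding per class."""
    weights = []
    with torch.no_grad():
        for name in classnames:
            prompts = tokenize([t.format(name) for t in templates]).to(device)
            emb = model.encode_text(prompts)                # (num_templates, d)
            emb = emb / emb.norm(dim=-1, keepdim=True)
            mean_emb = emb.mean(dim=0)                      # average in embedding space
            weights.append(mean_emb / mean_emb.norm())      # renormalize
    return torch.stack(weights, dim=1)                      # (d, num_classes)

templates = ["a photo of a big {}.", "a photo of a small {}.", "a photo of a {}."]
# classifier_weights = build_zero_shot_weights(model, clip.tokenize, classes, templates, device)
# logits = image_features @ classifier_weights   # image_features already normalized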
So let's now examine what happens when we compare CLIP with state-of-the-art models, but in a few-shot setting this time, so these models don't get all of the labels, just a handful per class. Looking at the chart, they took three models: ResNet-50, SimCLR, the self-supervised framework I mentioned, and BiT, Big Transfer, which was pretrained on a huge amount of labeled data. What they do here is take those models, freeze the weights, add a linear classifier on top, and train it on each of these datasets, and we see that only at around 16 labeled examples per class do they approach zero-shot CLIP, which is a really nice result because these are among the best methods currently available.

What's also interesting here is that there is a drop between zero-shot and one-shot CLIP, and that discrepancy shouldn't exist in an ideal world: when you give a human one example, we don't get worse, we get better. That's something they would like to fix, and the potential solution they mention is, once you take CLIP's vision encoder, instead of training the added classifier from a random initialization you initialize it with the zero-shot classifier's weights for that particular dataset; but that still didn't close the gap, so they left it for future work.

Let's continue and see how good the zero-shot classifier actually is. What we see in this chart is how many labeled examples are needed to achieve the same accuracy as the zero-shot classifier. For FER2013, if we take CLIP's vision encoder with a randomly initialized linear classifier, we need 184 labeled examples per class to match zero-shot CLIP. That's a good result: we want the zero-shot classifier to be so good that even with a lot of labeled data it's hard to improve on it, and the ideal outcome would be that every single dataset requires a lot of labeled examples to beat it. On the other chart we can see the difference between linear-probe CLIP performance and zero-shot performance. This time you take that classifier and train it with all the data available in a particular dataset, so it's fully supervised. For a given dataset the fully supervised probe might reach, say, 45 percent while zero-shot only gets 35 percent, so the zero-shot classifier is clearly suboptimal for that dataset, and you can see the general trend: in a perfect world zero-shot would be as good as the classifier trained on the whole labeled corpus, but it's typically 15 or 20 points below. Only on a couple of datasets, like STL-10, which if you remember has only a small amount of labeled data available, is the zero-shot performance similar to the fully trained classifier. So there is still a lot of space to improve this zero-shot classifier.

Let's continue and analyze the other charts. This one tells us that increasing compute decreases the error, which is something we'd expect, but it's really noisy: if we take a particular dataset we can see that with more compute the error sometimes increases and then decreases again, yet the overall trend is still strongly in favor of more computation.
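The linear probes used throughout these comparisons are just logistic regression classifiers trained on frozen features; the paper mentions using scikit-learn's L-BFGS logistic regression for this. A probe of that kind looks roughly like the sketch below, assuming the features have already been extracted with the frozen encoder (the regularization sweep the paper performs is omitted here):

import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(train_features, train_labels, test_features, test_labels):
    """Fit a fully supervised linear classifier on frozen image features."""
    clf = LogisticRegression(max_iter=1000, C=1.0)  # default solver is lbfgs; C would normally be swept
    clf.fit(train_features, train_labels)
    return clf.score(test_features, test_labels)

# train_features: (N, d) numpy array of frozen image embeddings
# train_labels:   (N,) integer class labels, and similarly for the test split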
Here what we see is that the feature space CLIP learns is high quality: once we compare it with different models, the EfficientNets, BYOL, MoCo and SimCLR as self-supervised methods, and BiT which is fully supervised, CLIP's features come out on top. What this diagram shows is the amount of compute on the x-axis and the average score across a suite of 27 datasets on the y-axis, where every single model gets a linear probe fully trained on top of its frozen feature space, and CLIP again shows a much better average score. That is exactly what we want: the features CLIP learns should be good for many downstream datasets, and this shows that they are. If we then take one specific model, say the EfficientNet trained with Noisy Student, and break the comparison down per dataset instead of using the average score, we can see where CLIP is much better and where it is a bit worse. Since that EfficientNet was essentially trained on ImageNet, it makes sense that the feature space it developed is better for ImageNet, but only for ImageNet: as we can see, on most of the other datasets the feature space developed by CLIP is of considerably higher quality than EfficientNet's.

Okay, let's continue and see some other results. I'll skip this one, it's not that interesting, and jump to this one. What we want to see here is how CLIP's accuracy behaves when we change the underlying distribution: same classes, same objects, but different distributions, so here we have sketches, here real-world photos, here drawings and so on. If we take a model that has 80 percent top-1 accuracy on ImageNet, we see that all of the standard ImageNet models have much lower accuracy on those related datasets with shifted distributions, whereas CLIP keeps much higher accuracy than the ImageNet models. Ideally, a model with 80 percent on ImageNet would keep 80 percent on all of those similar datasets that differ only in distribution. If we look at the tabular data, we see that when we take a ResNet-101 that was pretrained on ImageNet and, without fine-tuning it on any of these shifted datasets, use it as it is, the accuracy drops severely, and on adversarially selected examples, things like a chopped banana the model never saw during training, it does really badly. CLIP, on the other hand, gets much higher accuracy, improving by as much as 75 points on one of these datasets, which shows that CLIP is much more robust to distribution shift than these other methods.

Let's see some other results they showed, and this one is interesting. This line is the same as the one above, CLIP used in a zero-shot setting, and if we actually fine-tune CLIP on ImageNet, we can see that even though the ImageNet accuracy improves, the robustness to distribution shift goes down: across those seven natural-distribution-shift datasets the curves drop, so from another perspective it gains a bit on ImageNet while doing worse on many of the other datasets.

What about adapting to class shift? Some of these datasets, like YouTube-BB and ImageNet-Vid, have different classes than ImageNet with its thousand classes. For the "person" class in YouTube-BB, what the ImageNet models have to do is pool the scores over "baseball player", "bridegroom" and "scuba diver" and use that as the prediction for "person", which is suboptimal. Since CLIP is configurable, we can just use the new class directly and construct a prompt for a person, or whatever the class is, and get much better results than those fixed ImageNet models. When they do that, the trend improves and the robustness increases further, which is really nice, and on that handful of datasets with a superset of classes this small trick helps. The other methods, as I already explained, have to do the following: if a dataset has a class like "person", then when we test, say, an EfficientNet we have to pool over a few of its thousand classes to estimate what "person" would be, since it doesn't know the concept of a person, whereas with CLIP we can use the flexibility of the zero-shot classifier and get the prediction directly.
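That difference between pooling over fixed sub-classes and simply writing a new prompt is easy to see in code. A sketch under my own assumptions, with made-up sub-class indices and prompt text:

import torch

# Fixed ImageNet classifier: approximate "person" by pooling over related classes.
PERSON_SUBCLASSES = [981, 982, 983]  # hypothetical indices for baseball player, bridegroom, scuba diver

def person_score_from_imagenet_logits(logits):
    """logits: (N, 1000) outputs of a fixed ImageNet classifier."""
    probs = logits.softmax(dim=-1)
    return probs[:, PERSON_SUBCLASSES].sum(dim=-1)   # pooled proxy score for "person"

# CLIP instead just gets the class it actually needs as a new prompt:
# person_prompt = clip.tokenize(["a photo of a person"]).to(device)
# person_embedding = model.encode_text(person_prompt)  # becomes a new classifier row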
I'll just skip some of these, because they are similar experiments to the ones above, except that instead of showing the fully supervised curve, that is, CLIP fully fine-tuned on ImageNet, they show the trend starting from few-shot: two shots, four shots and so on. As we increase the number of ImageNet labels, the ImageNet top-1 accuracy slowly improves, but we still stay below zero-shot CLIP on those seven natural distribution shift datasets. There is a lot of information here, but the main idea is that CLIP's feature space is much better suited to extending to many other downstream tasks, and, as we saw, much more robust to natural distribution shifts, so in a way it's a better path forward. Additionally, it's not using those costly supervised datasets, just the image-text pairings, which are much easier to get hold of at scale. That's the main idea of the paper. This section shows some contamination analysis.

Finally, I just want to show that there are examples where it does not work well, and a nice example of a failure is MNIST, where zero-shot CLIP achieves only 88 percent accuracy. Just for comparison, if you take the simplest of baselines, flatten the 28 by 28 MNIST image into its raw pixels, and feed it to a simple feed-forward net, that baseline will achieve higher accuracy than CLIP, which was trained on a lot of data with a lot of compute. So it really fails on MNIST. As they say here, this suggests CLIP does little to address the underlying problem of brittle generalization of deep learning models; instead, CLIP tries to circumvent the problem and hopes that by training on such a large and varied dataset, all data will effectively be in distribution, and this is a naive assumption that, as MNIST demonstrates, is easy to violate.
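For reference, the kind of trivial baseline being talked about here really is just a few lines. A sketch of a small fully connected network on raw MNIST pixels (the hyperparameters are arbitrary, not tuned):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Flattened 28x28 pixels -> one hidden layer -> 10 digit classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

train_set = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

for epoch in range(3):                      # a couple of epochs is already enough
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

Even this throwaway setup comfortably clears the 88 percent that zero-shot CLIP reports on MNIST, which is exactly the point being made.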
Obviously this is not the final road towards really general models, but it is a step forward and I'm excited about this work. So yeah, that was pretty much it. A short recap, because there were a lot of details, and hopefully this will help consolidate everything in your head. They care about zero-shot performance as generalization to new tasks, not only to new distributions and domain generalization. They show that prompt engineering and ensembling help boost the performance of CLIP models, so that's important. They show that even against a fully supervised baseline such as a linear probe on ResNet-50 features, zero-shot CLIP is comparable or better, and those are exciting results. They show that even against the best baselines in a few-shot setting, zero-shot CLIP is comparable, and when CLIP itself gets the same treatment, a few-shot linear probe actually, it is better. Next, they show that the zero-shot classifier can be further improved: it is still suboptimal compared to the classifiers you can get by fitting them on specific labeled datasets, so there is more work that can be done on improving it. Again, more compute does improve CLIP. CLIP's features are better, that is, the feature space CLIP provides is better than the other baselines', and compared against a specific baseline, EfficientNet, we can see it is much better on most datasets, so again the feature space is richer than the other baselines'. It shows impressive robustness to distribution shift, and the same thing here. They show that if you add a classifier and linear probe CLIP onto ImageNet, you do improve ImageNet performance, but you lose robustness on the other datasets, so basically you don't want to overfit to ImageNet. And finally, they showed some limitations, among them that it fails on simple datasets such as MNIST, because it never saw images like that during training, so it fails even compared to the simplest of baselines.

Okay, this was a much longer video than I expected, but hopefully you learned something from it. If you did, consider subscribing and sharing this video, and also hit the bell icon to get notified when I upload a new video. Until next time, keep learning deep!
[{"start": 0.0, "end": 7.5600000000000005, "text": " Hi there. So a couple of days ago OpenAI published two really exciting research"}, {"start": 7.5600000000000005, "end": 14.24, "text": " works and among them Dali took pretty much all of the media attention and was"}, {"start": 14.24, "end": 20.84, "text": " all the hype but I found this work clip or learning transferable"}, {"start": 20.84, "end": 26.48, "text": " visual models from natural language supervision really exciting and I want"}, {"start": 26.48, "end": 32.24, "text": " to cover it in this video. So basically the method itself is not new so they"}, {"start": 32.24, "end": 38.86, "text": " it's just a modified version of convert paper but the thing there they"}, {"start": 38.86, "end": 44.32, "text": " introduced here is a concept of using zero shot in computer vision. So they're"}, {"start": 44.32, "end": 47.92, "text": " probably not the first ones but they are really bullish on it. They're just"}, {"start": 47.92, "end": 52.84, "text": " pretty much a continuation of their exploration of the GPT and all of the"}, {"start": 52.84, "end": 57.28, "text": " zero-shot capabilities it had and they're now trying to take that NLP"}, {"start": 57.28, "end": 63.800000000000004, "text": " paradigm and bring it back into the computer vision. So yeah, so this paper is"}, {"start": 63.800000000000004, "end": 68.76, "text": " mostly about how this model can, how can we create models which are, which mimic"}, {"start": 68.76, "end": 73.76, "text": " the human brain and learn the way we do. So with that being said let me jump into"}, {"start": 73.76, "end": 78.08000000000001, "text": " the method and I'll first explain the method and then we'll go through the"}, {"start": 78.08, "end": 84.44, "text": " different experiments they did. Okay, so they say here we demonstrate that the"}, {"start": 84.44, "end": 88.72, "text": " simple pre-training task of predicting which caption goes with which image is"}, {"start": 88.72, "end": 93.44, "text": " an efficient and scalable way to learn soda image representations from scratch"}, {"start": 93.44, "end": 98.52, "text": " on a data set of 400 million image sex pairs collected from the internet and so"}, {"start": 98.52, "end": 104.2, "text": " they they also introduced this new data set because all of so because they"}, {"start": 104.2, "end": 109.56, "text": " really so that's open AI they really are bullish on computation and all the"}, {"start": 109.56, "end": 115.2, "text": " existing data sets weren't leveraging the scale as much as they could and"}, {"start": 115.2, "end": 121.2, "text": " especially since most of the computer vision tasks are benchmarks data sets"}, {"start": 121.2, "end": 127.64, "text": " like like image net are only associated with a simple like structured label and"}, {"start": 127.64, "end": 132.24, "text": " they just want to associate images and text so they had to create their own"}, {"start": 132.24, "end": 136.32000000000002, "text": " data set. So without further ado let's let's jump into the method itself and"}, {"start": 136.32000000000002, "end": 141.36, "text": " I'll explain it in depth. I'll start doing some high-level explanation and"}, {"start": 141.36, "end": 147.48000000000002, "text": " we'll slowly dig deeper into the method. Okay, so this is this is how it looks"}, {"start": 147.48000000000002, "end": 151.20000000000002, "text": " like. Basically what they do is the following. 
So they have as I said"}, {"start": 151.20000000000002, "end": 157.68, "text": " associated with they have associated image text pairs and the way the thing"}, {"start": 157.68, "end": 162.4, "text": " they do is they first do some data augmentations here for the for the image"}, {"start": 162.4, "end": 168.92000000000002, "text": " part and then they have image encoder and they have text encoder and here they"}, {"start": 168.92000000000002, "end": 174.32, "text": " just used Resnets and the newly introduced vision transformer whereas"}, {"start": 174.32, "end": 179.60000000000002, "text": " for the text encoder they just use the original Vaslani transformer with"}, {"start": 179.60000000000002, "end": 184.36, "text": " certain modifications they did for the GPT family of models as well and once"}, {"start": 184.36, "end": 189.72000000000003, "text": " you once you have that setup what you do is the following. So you take the text"}, {"start": 189.72000000000003, "end": 194.0, "text": " sequence and you just encode it and how do you do that? Well because this is"}, {"start": 194.0, "end": 198.44000000000003, "text": " transformer so you basically just embed the text sequence here and you just"}, {"start": 198.44000000000003, "end": 206.44000000000003, "text": " take the start of sentence token and you take the end of sentence tokens and you"}, {"start": 206.44000000000003, "end": 213.64000000000001, "text": " place them you just bracket the text sequence using those and now if you know"}, {"start": 213.64, "end": 217.07999999999998, "text": " how the transformer works and I hope you do so you just have the forward pass"}, {"start": 217.07999999999998, "end": 221.39999999999998, "text": " through the transformer and whatever comes out from the last layer of the"}, {"start": 221.39999999999998, "end": 226.79999999999998, "text": " transformer above the EOS end of sentence token that's the representation"}, {"start": 226.79999999999998, "end": 231.32, "text": " they're gonna use. So they just do some simple layer normalization here and they"}, {"start": 231.32, "end": 234.76, "text": " project this using some linear projection layer into the embedding"}, {"start": 234.76, "end": 239.56, "text": " space and we are here. So one through N is just N is just the size of the batch"}, {"start": 239.56, "end": 246.8, "text": " so these are the encodings we get for the text sentences sequences. On the"}, {"start": 246.8, "end": 252.36, "text": " other hand image encoder is pretty straightforward you basically feed"}, {"start": 252.36, "end": 257.2, "text": " through the the image and you get certain representation at the output. So"}, {"start": 257.2, "end": 262.22, "text": " if this was resonant you'd probably just take the output at the after the"}, {"start": 262.22, "end": 266.2, "text": " average pooling part whereas they did some modifications with resonant I"}, {"start": 266.2, "end": 269.8, "text": " won't get into those details right now but yeah you can imagine you just take"}, {"start": 269.8, "end": 275.92, "text": " out some certain representation at after some after that pooling part and you use"}, {"start": 275.92, "end": 280.44, "text": " that as your image representation. You also do that linear projection and"}, {"start": 280.44, "end": 287.32, "text": " you're finally inside this contrastive embedding space. 
So once we're here we"}, {"start": 287.32, "end": 292.32, "text": " basically on the high level we just want to make sure that this image image one"}, {"start": 292.32, "end": 301.12, "text": " has the highest cosine similarity with this text embedding vector T1 and you"}, {"start": 301.12, "end": 305.44, "text": " want to push all of these down to zero and you want to do the same thing along"}, {"start": 305.44, "end": 309.56, "text": " this column. So you want to make sure because we have asymmetric setting here"}, {"start": 309.56, "end": 313.88, "text": " we have on one side we have textual information and on the other side we"}, {"start": 313.88, "end": 318.12, "text": " have visual information in our embedding space so this is a multimodal embedding"}, {"start": 318.12, "end": 322.44, "text": " space and so they do it both ways so they want to also make sure this one goes"}, {"start": 322.44, "end": 327.68, "text": " up to to to one and all of these are pushed down to zero and you just repeat"}, {"start": 327.68, "end": 334.08, "text": " all of that across the batch and that's it. Now let's let's get one level deeper"}, {"start": 334.08, "end": 340.76, "text": " and see how this exactly works. So this paper was heavily inspired by this"}, {"start": 340.76, "end": 345.42, "text": " convert paper that was working on medical images and they actually"}, {"start": 345.42, "end": 351.04, "text": " achieved some really awesome results so the visual features they trained using"}, {"start": 351.04, "end": 357.48, "text": " their method proved to be really good at some downstream medical tasks. So you can"}, {"start": 357.48, "end": 361.28000000000003, "text": " see the the how the pipeline looks like and it's the same thing as I just"}, {"start": 361.28000000000003, "end": 365.92, "text": " explained. So we have sequence of we have a sequence of images and sequence of"}, {"start": 365.92, "end": 372.08000000000004, "text": " associated textual sequences and basically what they do is they do data"}, {"start": 372.08, "end": 375.32, "text": " augmentations which is really really popular thing to do in these"}, {"start": 375.32, "end": 381.32, "text": " contrastive learning methods and they did some textual augmentations so just"}, {"start": 381.32, "end": 386.4, "text": " extracting single sentences from their text sequences. So this paper on the"}, {"start": 386.4, "end": 392.03999999999996, "text": " other hand ditched this thing here because most of the images they had had"}, {"start": 392.03999999999996, "end": 396.34, "text": " only one single sentence associated with image so they can't just remove the"}, {"start": 396.34, "end": 399.84, "text": " sentence otherwise they'll they'll lose the supervision signal. So they just"}, {"start": 399.84, "end": 405.08, "text": " ditch this away and they simplified the this augmentation part where the only"}, {"start": 405.08, "end": 410.76, "text": " thing they do is basically they resize the image and they take a square crop."}, {"start": 410.76, "end": 416.55999999999995, "text": " That's everything. No color jitter, no blurring, nothing. So that's that's"}, {"start": 416.55999999999995, "end": 422.12, "text": " everything. 
Now the second thing they did is so they modified is basically this was"}, {"start": 422.12, "end": 427.71999999999997, "text": " MLP in the previous work and also so by the way this work this convert paper"}, {"start": 427.72, "end": 434.28000000000003, "text": " was inspired by SimClear framework which you probably heard of which achieved"}, {"start": 434.28000000000003, "end": 440.16, "text": " really nice results on self-supervision visual representation learning and it"}, {"start": 440.16, "end": 444.24, "text": " introduced this concept of using MLP between the embedding space and between"}, {"start": 444.24, "end": 448.88000000000005, "text": " the contrastive embedding space. So in this paper they just ditch this away and"}, {"start": 448.88000000000005, "end": 453.68, "text": " they just use a linear projection layer and that's it. So no activation functions"}, {"start": 453.68, "end": 458.16, "text": " you just have one matrix here that projects from the embedding space into"}, {"start": 458.16, "end": 463.16, "text": " the contrasting embedding space. So that's one of the other modification"}, {"start": 463.16, "end": 468.82, "text": " they did to the pipeline. Okay so once we have that let's let's see like some"}, {"start": 468.82, "end": 472.64, "text": " formulas for the things I already intuitively explain how how it functions."}, {"start": 472.64, "end": 478.56, "text": " So so what we do is the cosine similarity so this thing with these like"}, {"start": 478.56, "end": 484.64, "text": " angle brackets just symbolize the cosine similarity. So once again you have two"}, {"start": 484.64, "end": 491.96, "text": " vectors let's say this is VI and this thing here is UI and because this is a"}, {"start": 491.96, "end": 495.76, "text": " cosine similarity that means these two vectors are normalized they are"}, {"start": 495.76, "end": 502.6, "text": " basically unit norm so the norm is equal to to 1 and so basically when you do the"}, {"start": 502.6, "end": 510.48, "text": " the dot product you end up having cosine of t. So cosine of t where of theta where"}, {"start": 510.48, "end": 515.78, "text": " theta is the angle between those two. So again if the theta is 0 we'll have"}, {"start": 515.78, "end": 522.4, "text": " cosine similarity 1 if the theta is 0 that means they're orthogonal etc. So when"}, {"start": 522.4, "end": 529.2, "text": " we say we want to make these two be equal to 1 that means those vectors are"}, {"start": 529.2, "end": 534.96, "text": " pretty much aligned and that's that's the idea. So t here is the temperature"}, {"start": 534.96, "end": 539.6, "text": " coefficient just it just modifies the softmax distribution to make it a bit"}, {"start": 539.6, "end": 544.12, "text": " more steep and it's actually trainable parameter in this paper and not a"}, {"start": 544.12, "end": 549.6800000000001, "text": " hyperparam and this is a familiar softmax thing so basically what you're"}, {"start": 549.6800000000001, "end": 554.8000000000001, "text": " doing your your summing so you take the cosine this cosine similarity and you"}, {"start": 554.8, "end": 561.04, "text": " just average so in the denominator denominator you just sum up all of these"}, {"start": 561.04, "end": 566.0799999999999, "text": " and you get the probability distribution and then you just have a simple cross"}, {"start": 566.0799999999999, "end": 570.68, "text": " entropy loss over that softmax distribution which is a pretty usual"}, {"start": 570.68, "end": 574.16, "text": " thing to do. 
So you can treat this as a classification problem basically you're"}, {"start": 574.16, "end": 579.52, "text": " trying to classify which out of these combinations is the true combination so"}, {"start": 579.52, "end": 584.24, "text": " what's the true class and because as I already mentioned this problem is a"}, {"start": 584.24, "end": 589.6800000000001, "text": " symmetric you're basically doing the same thing just this time you put UI"}, {"start": 589.6800000000001, "end": 597.52, "text": " here and your your denominator part goes sums over the column and the final loss"}, {"start": 597.52, "end": 603.12, "text": " vector is just a weighted combination of those two and you just go over the whole"}, {"start": 603.12, "end": 607.88, "text": " batch you sum over the batch and you average out so that means you take"}, {"start": 607.88, "end": 614.96, "text": " these and then you also add this one to the loss and you add the row part and"}, {"start": 614.96, "end": 621.12, "text": " you just do that for the whole batch so yeah that hopefully that was detailed"}, {"start": 621.12, "end": 627.6, "text": " enough for you to understand the exact way this method works so so once you"}, {"start": 627.6, "end": 631.4, "text": " have that loss you just back prop the information the gradients through your"}, {"start": 631.4, "end": 637.6, "text": " encoders and you train the image and the text encoders and that's it your model"}, {"start": 637.6, "end": 643.9, "text": " is ready for this zero-shot learning and I'll get to that in a second I just want"}, {"start": 643.9, "end": 647.6, "text": " to mention one more thing about data augmentations and the reason they just"}, {"start": 647.6, "end": 653.48, "text": " took they just did this simple geometric augmentation whereby they just cropped"}, {"start": 653.48, "end": 659.6800000000001, "text": " from from the resized image and not use color data so seem clear this framework"}, {"start": 659.6800000000001, "end": 665.36, "text": " I already mentioned found this this nice finding so one composition of"}, {"start": 665.36, "end": 670.6800000000001, "text": " augmentation stands out random cropping and random color jitter we conjecture"}, {"start": 670.6800000000001, "end": 675.08, "text": " that one serious issue when using only random cropping as data augmentation is"}, {"start": 675.08, "end": 680.2, "text": " that most patches from an image share a similar color distribution so that means"}, {"start": 680.2, "end": 685.4, "text": " that the the model can learn to cheat by just looking at the histogram"}, {"start": 685.4, "end": 689.36, "text": " distribution and figure out that those are the same instances now the difference"}, {"start": 689.36, "end": 695.6800000000001, "text": " between this framework and and clip model is that they had they just"}, {"start": 695.6800000000001, "end": 699.6800000000001, "text": " compared images to images in contrast to loss whereas we have now text an image"}, {"start": 699.6800000000001, "end": 704.92, "text": " so the reason I think they just ditched away with with color jitter is the"}, {"start": 704.92, "end": 709.48, "text": " efficiency and we'll see that in a couple of minutes because yeah that's"}, {"start": 709.48, "end": 713.8000000000001, "text": " open AI there they're just trying to scale things and and see how they how"}, {"start": 713.8, "end": 719.92, "text": " they perform with that scale so what you can see here is the following so this"}, {"start": 719.92, "end": 725.56, "text": " thing here 
is basically you have an image and if you do know if you if you"}, {"start": 725.56, "end": 730.5999999999999, "text": " if you just ditch the color jitter and you just take a crop and you take"}, {"start": 730.5999999999999, "end": 734.4, "text": " another patch and you take another patch and you take another patch so all of"}, {"start": 734.4, "end": 738.5999999999999, "text": " these patches will have as you can see here for four different patches they'll"}, {"start": 738.5999999999999, "end": 743.56, "text": " have a similar color distribution and so once you do the contrastive learning"}, {"start": 743.56, "end": 748.68, "text": " between those augmented patches they it will be a really easy to even they they"}, {"start": 748.68, "end": 752.2399999999999, "text": " are visually different the histograms are the same so it will be a really easy"}, {"start": 752.2399999999999, "end": 757.8399999999999, "text": " problem to just place them in the same part of the embedding space where here"}, {"start": 757.8399999999999, "end": 762.5999999999999, "text": " once you do the color jitter you can see that the model can't explain cannot"}, {"start": 762.5999999999999, "end": 767.1199999999999, "text": " exploit this this this this this trick and that's the reason they use color"}, {"start": 767.1199999999999, "end": 771.4, "text": " jitter but as I said probably because of the efficiency reasons so it was a"}, {"start": 771.4, "end": 777.3199999999999, "text": " trade-off they just used the random cropping okay that was it now let's"}, {"start": 777.3199999999999, "end": 781.36, "text": " focus on this part and this is probably arguably the most important part of the"}, {"start": 781.36, "end": 786.9599999999999, "text": " of this paper and that's how did they use this thing now so after they trained"}, {"start": 786.9599999999999, "end": 791.04, "text": " it how did they use it in a zero shot setting and this is how so basically"}, {"start": 791.04, "end": 796.16, "text": " let's say you have you now want to just test this one in a zero shot a setting"}, {"start": 796.16, "end": 801.52, "text": " to a new you know on the nude like downstream computer vision tasks like"}, {"start": 801.52, "end": 807.64, "text": " we have cipher 10 and that's a pretty famous and simple benchmark in computer"}, {"start": 807.64, "end": 812.7199999999999, "text": " vision and you basically have ten classes right and so maybe plane car dog"}, {"start": 812.7199999999999, "end": 818.56, "text": " bird etc so what you do is the following you basically embed you embed those"}, {"start": 818.56, "end": 824.98, "text": " classes in sentences of this construction maybe a photo of and then"}, {"start": 824.98, "end": 831.96, "text": " you just fill in the blank you put a plane your car dog whatever so and once"}, {"start": 831.96, "end": 835.6800000000001, "text": " you do that you just encode those sentences in the same way so you have"}, {"start": 835.6800000000001, "end": 841.0600000000001, "text": " end of sentence token blah blah this is transformer but this time this one is"}, {"start": 841.0600000000001, "end": 846.04, "text": " trained so we have it initialized with certain weights after the training after"}, {"start": 846.04, "end": 850.84, "text": " this contrastive learning training and so we just prepare all of those"}, {"start": 850.84, "end": 856.52, "text": " embeddings so we just pre compute them once and that's it we are we are finished"}, {"start": 856.52, "end": 863.2800000000001, "text": " with text 
encoder we just have this dispatch of embeddings here and now how"}, {"start": 863.2800000000001, "end": 868.4, "text": " we do the zero shot classification is the following we just take the image"}, {"start": 868.4, "end": 873.88, "text": " that we want to get classified we find its embedding vector and we just do a"}, {"start": 873.88, "end": 878.64, "text": " simple cosine similarity between this embedding vector and between these"}, {"start": 878.64, "end": 883.48, "text": " embedding textual embedding vectors and whatever the highest similarity is"}, {"start": 883.48, "end": 891.84, "text": " that's our class and you can see here a photo of a dog and because t3 corresponds"}, {"start": 891.84, "end": 896.28, "text": " to that particular sentence and had the highest similarity so maybe this one has"}, {"start": 896.28, "end": 902.92, "text": " 0.7 and all of the others had maybe I know 0.1 this one had even smaller"}, {"start": 902.92, "end": 908.56, "text": " similarity etc so that that's how the method works in a nutshell and yeah"}, {"start": 908.56, "end": 914.0799999999999, "text": " hopefully this was clear enough and they also mentioned here at test time the"}, {"start": 914.0799999999999, "end": 919.52, "text": " learned text encoder synthesizes our zero shot linear classifier so"}, {"start": 919.52, "end": 923.56, "text": " synthesizes that's an interesting word because you can basically treat this as"}, {"start": 923.56, "end": 928.1199999999999, "text": " a as a hyper network so you can basically treat this whole system with"}, {"start": 928.1199999999999, "end": 932.8399999999999, "text": " text encoder and with these sentences as an adaptable as a hyper network so"}, {"start": 932.8399999999999, "end": 938.16, "text": " basically depending on the on these on your data set you construct certain"}, {"start": 938.16, "end": 945.0, "text": " corpus of sentences and those will adapt this text encoder so that it's like a"}, {"start": 945.0, "end": 949.4, "text": " configurable linear classifier in a way let's call it that way so if you're"}, {"start": 949.4, "end": 954.0, "text": " still having problems understanding why this is a linear configurable classifier"}, {"start": 954.0, "end": 958.9599999999999, "text": " in a way let's look at it from a different perspective so as I mentioned"}, {"start": 958.9599999999999, "end": 964.36, "text": " we have those linear projection layers from the embedding space of the encoders"}, {"start": 964.36, "end": 969.16, "text": " into the contrast of embedding space so that means after the embedding space here"}, {"start": 969.16, "end": 973.6800000000001, "text": " we'll have a linear projection which will project this vector into this"}, {"start": 973.6800000000001, "end": 979.4, "text": " vector so that we'll end up having same dimensions across these image vectors"}, {"start": 979.4, "end": 984.92, "text": " and these text vectors so that means this one is d dimensions and all of"}, {"start": 984.92, "end": 990.08, "text": " these have d dimensions so let me take a look let me just extract this box and"}, {"start": 990.08, "end": 996.08, "text": " represent it as a vector which it is and these are also d vectors so when we do"}, {"start": 996.08, "end": 1000.9200000000001, "text": " when we do cosine similarity when we do dot product we're basically doing in a"}, {"start": 1000.9200000000001, "end": 1005.36, "text": " way where we are creating a fully connected layer so these weights here"}, {"start": 1005.36, "end": 1011.5600000000001, 
"text": " these weights correspond to maybe t1 and we'll end up with a value here and we"}, {"start": 1011.5600000000001, "end": 1017.72, "text": " do the same thing with t2 t2 is also fully connected layer over this vector"}, {"start": 1017.72, "end": 1022.6, "text": " here and we end up with a second value and we do that n times so we'll have n"}, {"start": 1022.6, "end": 1029.76, "text": " values here so that's like n heads and you can see that all of these weights"}, {"start": 1029.76, "end": 1036.0, "text": " here are basically configured depending on the text corpus because text encoder"}, {"start": 1036.0, "end": 1042.4, "text": " is pre-trained and it's fixed and we'll just we just input these this text"}, {"start": 1042.4, "end": 1046.8, "text": " corpus that's created out of the labels for that particular data set and we end"}, {"start": 1046.8, "end": 1053.04, "text": " up with particular weights here which again are basically specifying the"}, {"start": 1053.04, "end": 1056.6399999999999, "text": " weights in this classifier and that's why it's a configurable classifier and"}, {"start": 1056.6399999999999, "end": 1060.84, "text": " so that's really interesting so as your shot linear classifier by embedding the"}, {"start": 1060.84, "end": 1065.6399999999999, "text": " names or descriptions of target data set classes and that's we already saw that"}, {"start": 1065.6399999999999, "end": 1072.1599999999999, "text": " so we just embedded so one detail I'll just I'll omit it here is that basically"}, {"start": 1072.16, "end": 1078.16, "text": " what I do here is in sampling and prompt programming so if you're familiar with"}, {"start": 1078.16, "end": 1082.64, "text": " the GPT family of models you know that prop programming is a thing that's just"}, {"start": 1082.64, "end": 1088.64, "text": " it's called like a software 3.0 and the idea is depending on how you construct"}, {"start": 1088.64, "end": 1095.16, "text": " these these these the skeleton around the class you'll get different"}, {"start": 1095.16, "end": 1099.72, "text": " performance out of your model so what I ended up doing is they play with"}, {"start": 1099.72, "end": 1103.88, "text": " different constructions and they also assembled bunch of these constructions"}, {"start": 1103.88, "end": 1110.72, "text": " and they just average out these the features here before they they project"}, {"start": 1110.72, "end": 1114.2, "text": " them into the contrasting embedding space so what that means maybe you maybe"}, {"start": 1114.2, "end": 1119.32, "text": " they'll have so for car they maybe have a photo of a car but they'll have"}, {"start": 1119.32, "end": 1125.68, "text": " another sentence like a nice photo of a car for example and they'll just take"}, {"start": 1125.68, "end": 1130.4, "text": " both of those embeddings and they'll average it out before they project it"}, {"start": 1130.4, "end": 1136.2, "text": " into contrast space and they they found that that small trick actually improves"}, {"start": 1136.2, "end": 1140.8, "text": " performance significantly but we'll get to that in a bit more detail a bit later"}, {"start": 1140.8, "end": 1149.28, "text": " okay so that was that was the explanation and now let's let's dug even"}, {"start": 1149.28, "end": 1158.04, "text": " a bit deeper and then we'll jump to experiments so let's see as I said we"}, {"start": 1158.04, "end": 1161.8799999999999, "text": " find the clip similar to the GPT family learns to perform a wide set of tasks"}, {"start": 1161.8799999999999, 
"end": 1166.76, "text": " during pre training including OCR geolocalization action recognition and"}, {"start": 1166.76, "end": 1171.44, "text": " many others so again if you're familiar with GPT family of models you know that"}, {"start": 1171.44, "end": 1177.24, "text": " there is this emerging let's call it phenomena happening so where you're"}, {"start": 1177.24, "end": 1181.6, "text": " basically training your model on on some tasks like GPT-3 was just trained to"}, {"start": 1181.6, "end": 1187.08, "text": " predict the next token and once you do that they figured out that it learned"}, {"start": 1187.08, "end": 1191.4, "text": " how to do many other tasks that it wasn't explicitly trained to do like"}, {"start": 1191.4, "end": 1199.76, "text": " maybe you input a sentence into GPT and you just append too long didn't read"}, {"start": 1199.76, "end": 1205.16, "text": " words here and then after you sample out of it you'll notice that it actually"}, {"start": 1205.16, "end": 1209.96, "text": " learned to summarize the this input sentence even though it wasn't trained"}, {"start": 1209.96, "end": 1214.4, "text": " to do that and the same thing happened here with clip they trained it to as we"}, {"start": 1214.4, "end": 1219.0600000000002, "text": " saw to just associate the the most probable text sequence with the image"}, {"start": 1219.0600000000002, "end": 1225.2, "text": " and along the way it learned to do all of these things so it's that's"}, {"start": 1225.2, "end": 1231.0800000000002, "text": " interesting okay I'll get to this pick in a second let's just go over a couple"}, {"start": 1231.0800000000002, "end": 1234.52, "text": " more things so learning from natural language also has an important"}, {"start": 1234.52, "end": 1240.8799999999999, "text": " advantage over most unsupervised blah blah so basically this says that the way"}, {"start": 1240.8799999999999, "end": 1245.32, "text": " how we train it enabled us to do this flexible zero-shot transfer as you saw"}, {"start": 1245.32, "end": 1252.96, "text": " with those with the hyper network story I just explained you and yeah I mentioned"}, {"start": 1252.96, "end": 1259.4, "text": " that they created this WIT data set which has 400 million pairs of images"}, {"start": 1259.4, "end": 1264.08, "text": " in text so why they did it is because they want to leverage the scale and none"}, {"start": 1264.08, "end": 1268.52, "text": " of the existing data sets had that so that's why they did it and it's"}, {"start": 1268.52, "end": 1271.96, "text": " interesting how they created it so they had like a half a million queries which"}, {"start": 1271.96, "end": 1276.1599999999999, "text": " they created like by collecting words that appear at least hundred times in"}, {"start": 1276.1599999999999, "end": 1280.52, "text": " English version of Wikipedia and some other heuristics and they tried to"}, {"start": 1280.52, "end": 1285.76, "text": " balance it out so that every one of those queries have has a like a similar"}, {"start": 1285.76, "end": 1290.3999999999999, "text": " number of images associated with them that's how they created the data set so"}, {"start": 1290.4, "end": 1294.5600000000002, "text": " but more importantly let's jump into this part about efficiency so let's see"}, {"start": 1294.5600000000002, "end": 1300.24, "text": " why they ended up having the exact same task that I just explained so they say"}, {"start": 1300.24, "end": 1307.6000000000001, "text": " here that Mahajan and his collaborators folks from Facebook 
required 19 GPU years"}, {"start": 1307.6000000000001, "end": 1315.3600000000001, "text": " to train this Resnext model she and others are required 33 TPU v3 core years"}, {"start": 1315.3600000000001, "end": 1320.16, "text": " to train the noisy student so that's a lot of compute and the only thing they"}, {"start": 1320.16, "end": 1325.3200000000002, "text": " are trying to do is predict hundred image net classes and compare that like"}, {"start": 1325.3200000000002, "end": 1329.48, "text": " comparing that to the to learning the open set of visual concepts from"}, {"start": 1329.48, "end": 1334.8000000000002, "text": " natural language seems daunting so if this was so hard to do then just imagine"}, {"start": 1334.8000000000002, "end": 1342.4, "text": " what how much harder it is to train the the the model of predicting like just"}, {"start": 1342.4, "end": 1350.16, "text": " natural language labels that they're open for they did try first exploring"}, {"start": 1350.16, "end": 1354.16, "text": " with this discriminating method that has a predictive objective instead of the"}, {"start": 1354.16, "end": 1360.8400000000001, "text": " contrastive objective I just showed you and what vertex paper did is they"}, {"start": 1360.8400000000001, "end": 1365.8400000000001, "text": " basically also had image encoder and they had a transformer above it they"}, {"start": 1365.8400000000001, "end": 1370.72, "text": " were trying to predict the exact same words from the caption of that image so"}, {"start": 1370.72, "end": 1374.1200000000001, "text": " you have an image you have a caption associated with it and you're trying to"}, {"start": 1374.1200000000001, "end": 1378.24, "text": " predict the exact same word so that's not the new approach like we've been"}, {"start": 1378.24, "end": 1385.24, "text": " doing these captioning predictions for a long time and people notice that"}, {"start": 1385.24, "end": 1390.76, "text": " the visual representations you get out of this tasks so when you for example do"}, {"start": 1390.76, "end": 1393.44, "text": " some linear probing on these representations that they are really"}, {"start": 1393.44, "end": 1399.44, "text": " useful for many downstream tasks classification etc so the thing is they"}, {"start": 1399.44, "end": 1404.3200000000002, "text": " tried this one and it was too expensive and we'll see the one of the charts"}, {"start": 1404.3200000000002, "end": 1410.64, "text": " above which will nicely explain this and the second thing they also consider"}, {"start": 1410.64, "end": 1417.76, "text": " these generative methods like image GPT and what it does it learns to"}, {"start": 1417.76, "end": 1426.4, "text": " autoregressively just regress the image the input image and it also learns"}, {"start": 1426.4, "end": 1430.6000000000001, "text": " really nice visual representations but the thing is it's not efficient and like"}, {"start": 1430.6000000000001, "end": 1436.5600000000002, "text": " efficiency is the key point here this is open AI they want to make the most"}, {"start": 1436.5600000000002, "end": 1440.48, "text": " efficient method and then leverage the computer have both in the Harvard sense"}, {"start": 1440.48, "end": 1445.3200000000002, "text": " and the huge data set they collected so like the bitter lessons by sudden are a"}, {"start": 1445.3200000000002, "end": 1449.0800000000002, "text": " nice read if you're not familiar with this but basically the more computer you"}, {"start": 1449.0800000000002, "end": 1454.1200000000001, "text": " 
can throw at the problem usually that that's that's a good idea I'm not I'm"}, {"start": 1454.12, "end": 1458.1599999999999, "text": " not I'm not saying that's the only way we will go forward with the research"}, {"start": 1458.1599999999999, "end": 1463.6, "text": " but that's definitely an option okay and having said that let's let's just jump"}, {"start": 1463.6, "end": 1469.52, "text": " to this chart and what it did is the following so this transformer language"}, {"start": 1469.52, "end": 1474.76, "text": " model that's basically that vertex approach I just mentioned so you just"}, {"start": 1474.76, "end": 1481.62, "text": " basically stick the transformer up on top of the image encoder and you can see"}, {"start": 1481.62, "end": 1486.0, "text": " that by trying to predict the exact same birds that know from the caption it's"}, {"start": 1486.0, "end": 1491.52, "text": " the the zero shot image net accuracy is really low and you need a lot of data"}, {"start": 1491.52, "end": 1496.8, "text": " to accomplish it compare that to just a simple baseline where instead of trying"}, {"start": 1496.8, "end": 1501.3999999999999, "text": " to predict the exact same birds from the transformer you're just trying to"}, {"start": 1501.3999999999999, "end": 1505.32, "text": " predict which words appear in your caption so that's a much simpler"}, {"start": 1505.32, "end": 1510.8799999999999, "text": " problem so you have a like a vocab here like maybe 30k of words and you're just"}, {"start": 1510.88, "end": 1515.68, "text": " trying to predict which words are present in your current caption and"}, {"start": 1515.68, "end": 1521.92, "text": " that's this orange line you can see and we already noticed that we have a 3x"}, {"start": 1521.92, "end": 1527.6000000000001, "text": " efficiency gain and finally once we ditch the predictive objective altogether"}, {"start": 1527.6000000000001, "end": 1532.92, "text": " so trying to predict the words and just trying to associate the the textual as"}, {"start": 1532.92, "end": 1539.5600000000002, "text": " sequence with the image we get a boost in efficiency a further boost and now we"}, {"start": 1539.56, "end": 1546.44, "text": " can truly leverage the computation okay so that was that was it that was it"}, {"start": 1546.44, "end": 1552.8, "text": " about the efficiency part and now I'll go into some more details about the"}, {"start": 1552.8, "end": 1559.72, "text": " method and then we'll we'll just jump to exploring the easier shot results they"}, {"start": 1559.72, "end": 1566.08, "text": " got so they say here that they we train clip from scratch without initializing"}, {"start": 1566.08, "end": 1570.96, "text": " the image encoder with image net weights or text encoder with pre-trained weights"}, {"start": 1570.96, "end": 1577.6, "text": " and why they do that is because work like a vision transformer already showed"}, {"start": 1577.6, "end": 1583.6399999999999, "text": " that once you have enough compute and enough data you really don't need to"}, {"start": 1583.6399999999999, "end": 1590.36, "text": " bias your model in any way so you just so what vision transformer showed is that"}, {"start": 1590.36, "end": 1596.32, "text": " this particular model is better than CNN's given enough data like they use"}, {"start": 1596.32, "end": 1602.52, "text": " JFT 300 data set and that's Google's proprietary data set and they show that"}, {"start": 1602.52, "end": 1607.4399999999998, "text": " it outperforms CNN's and it learns something similar to CNN's 
just a bit"}, {"start": 1607.4399999999998, "end": 1612.8799999999999, "text": " better obviously so that's why they don't encode it why they don't initialize it"}, {"start": 1612.8799999999999, "end": 1618.4399999999998, "text": " with pre-trained weights so I explained most of these so I'll just skip them"}, {"start": 1618.44, "end": 1623.48, "text": " yeah this is an interesting detail they are not using so they for a resnet they"}, {"start": 1623.48, "end": 1627.48, "text": " just swap this average pulling layer with the attention pulling layer that's"}, {"start": 1627.48, "end": 1634.28, "text": " implemented in a transformer style and yeah they're using as I said was funny"}, {"start": 1634.28, "end": 1639.92, "text": " for the text encoder and I also mentioned this one about EOS so the"}, {"start": 1639.92, "end": 1643.3600000000001, "text": " interesting part about how they scaled their approach is that they used"}, {"start": 1643.36, "end": 1649.9199999999998, "text": " efficient nets ideas so they used ideas from this paper to scale I don't know"}, {"start": 1649.9199999999998, "end": 1655.28, "text": " how to write today okay to scale their their technique and I just highlighted"}, {"start": 1655.28, "end": 1662.32, "text": " these as a just to show you how much engineering goes into all of their all"}, {"start": 1662.32, "end": 1668.08, "text": " of their research work and like you can see all of this mixed precision gradient"}, {"start": 1668.08, "end": 1673.8, "text": " checkpointing is all about making use as efficient as possible of the memory"}, {"start": 1673.8, "end": 1679.72, "text": " available in their on their hardware so half precision half precision and yeah"}, {"start": 1679.72, "end": 1684.1999999999998, "text": " it still takes a lot of time to train their biggest models so it took 18 days"}, {"start": 1684.1999999999998, "end": 1691.12, "text": " to train on almost 600 we hunted GPUs and 12 days on 256 for the vision"}, {"start": 1691.12, "end": 1694.6399999999999, "text": " transformer which is a vision transformer is more efficient than CNN's"}, {"start": 1694.64, "end": 1699.64, "text": " and that's one of the points that that paper made and that's why it takes a lot"}, {"start": 1699.64, "end": 1704.92, "text": " less compute and finally the the best model they had is the vision transformer"}, {"start": 1704.92, "end": 1713.3600000000001, "text": " L that was trained for one epoch on on a higher resolution of 336 pixels and"}, {"start": 1713.3600000000001, "end": 1718.92, "text": " that's that's a final clip model that they that they are using throughout this"}, {"start": 1718.92, "end": 1724.5200000000002, "text": " paper okay finally experiments so as I mentioned the target the goal of the"}, {"start": 1724.52, "end": 1729.44, "text": " paper is to show that clip has a really nice zero shot transfer performance and"}, {"start": 1729.44, "end": 1733.6, "text": " what they mean by zero shot transfer performance is a bit more general than"}, {"start": 1733.6, "end": 1739.2, "text": " what you usually mean by saying that in computer vision so usually and dates"}, {"start": 1739.2, "end": 1744.52, "text": " here they say here most of these tasks are are testing for distribution shift"}, {"start": 1744.52, "end": 1748.36, "text": " and domain generalization so what that means is the previous models in computer"}, {"start": 1748.36, "end": 1754.08, "text": " vision basically you want you want to test if it's if it's if it's having a"}, {"start": 1754.08, "end": 
1759.1599999999999, "text": " really good performance even when you change the this underlying distribution"}, {"start": 1759.1599999999999, "end": 1763.9199999999998, "text": " so basically here you have a banana like a real-world banana like a sketch and"}, {"start": 1763.9199999999998, "end": 1768.32, "text": " those are called different domains and the other thing that we're testing for"}, {"start": 1768.32, "end": 1773.0, "text": " is that for example if the model was testing test that was trained on real"}, {"start": 1773.0, "end": 1777.1799999999998, "text": " world objects like banana I don't know dogs cats whatever and you want to see"}, {"start": 1777.1799999999998, "end": 1782.6, "text": " how how easy it is to just do some fine-tuning or do some linear probe on"}, {"start": 1782.6, "end": 1788.08, "text": " top of the feature space to maybe learn also about a new object like maybe horse"}, {"start": 1788.08, "end": 1794.4399999999998, "text": " and what they mean by zero shot transfer learning in this and this paper is not"}, {"start": 1794.4399999999998, "end": 1797.54, "text": " generalizing to those to those so distribution shifts and domain"}, {"start": 1797.54, "end": 1802.12, "text": " generalizations but to new tasks so for example that they say here well it is"}, {"start": 1802.12, "end": 1807.8, "text": " reasonable to say that the SVH and data set which is this one measures the task"}, {"start": 1807.8, "end": 1811.6799999999998, "text": " of street number transcription on the distribution of Google Street View"}, {"start": 1811.68, "end": 1816.8400000000001, "text": " photos is unclear what real task the cipher 10 data set measures and this is"}, {"start": 1816.8400000000001, "end": 1822.68, "text": " cipher 10 so basically if you train the model maybe image net and then you you"}, {"start": 1822.68, "end": 1827.2, "text": " want to test it on on on cipher 10 like this you're you're basically doing what"}, {"start": 1827.2, "end": 1832.2, "text": " I already said you're doing basically just a different different distribute"}, {"start": 1832.2, "end": 1837.3600000000001, "text": " just has a different distribution and maybe some different objects whereas if"}, {"start": 1837.36, "end": 1842.56, "text": " you're testing on SVH n you're basically learning how to do OCR in the wild if"}, {"start": 1842.56, "end": 1847.28, "text": " your model learns how to extract numbers from these images so that's it so we we"}, {"start": 1847.28, "end": 1850.8, "text": " want to generalize not only to distribution shifts and generalizations"}, {"start": 1850.8, "end": 1855.4199999999998, "text": " to new objects we also want to generalize to new tasks like OCR and not"}, {"start": 1855.4199999999998, "end": 1859.6399999999999, "text": " only two different classification datasets okay let's let's start"}, {"start": 1859.6399999999999, "end": 1864.6399999999999, "text": " exploring the paper first this table is not so important they just show so"}, {"start": 1864.64, "end": 1869.3200000000002, "text": " visual engrams are our method from 2016 which had a similar similar notion of"}, {"start": 1869.3200000000002, "end": 1872.5600000000002, "text": " zero-shot transfer and that's why they compare with them although it's not"}, {"start": 1872.5600000000002, "end": 1876.8000000000002, "text": " comparable because many things like transformers did not exist back then and"}, {"start": 1876.8000000000002, "end": 1882.1200000000001, "text": " so yeah they just have much better clip just as much 
better results on these"}, {"start": 1882.1200000000001, "end": 1887.72, "text": " datasets like image net so before seeing the zero shop and few shot performance"}, {"start": 1887.72, "end": 1892.0800000000002, "text": " of clip let's first go through prompt engineering and assembling details so"}, {"start": 1892.08, "end": 1895.6, "text": " the first thing they had to cope with is the fact that there is a polysemy"}, {"start": 1895.6, "end": 1898.96, "text": " problem in some of the downstream computer vision tasks and that means"}, {"start": 1898.96, "end": 1903.04, "text": " that for example in some of the datasets like image net you basically"}, {"start": 1903.04, "end": 1908.56, "text": " have two words which mean two different things so different semantics and maybe"}, {"start": 1908.56, "end": 1913.4399999999998, "text": " they give example here construction cranes and cranes the fly so the first"}, {"start": 1913.4399999999998, "end": 1917.48, "text": " thing they had to do is for a given data set and labels to disambiguate"}, {"start": 1917.48, "end": 1921.8799999999999, "text": " between those labels which have the polysemy problems the second thing and"}, {"start": 1921.88, "end": 1927.0800000000002, "text": " I already mentioned this one is they had to construct the the second the"}, {"start": 1927.0800000000002, "end": 1930.64, "text": " sequences like this so instead of just using labels so for example if you have"}, {"start": 1930.64, "end": 1937.0400000000002, "text": " a on a downstream class task you have labeled dog instead of using dog to in"}, {"start": 1937.0400000000002, "end": 1941.5600000000002, "text": " your zero shot classifier to configure it you use something like this a photo"}, {"start": 1941.5600000000002, "end": 1947.92, "text": " of a label and by doing that you get a 1.3 percent increase in accuracy and"}, {"start": 1947.92, "end": 1955.0800000000002, "text": " that's like a low low-hanging fruit the second thing they did is they they again"}, {"start": 1955.0800000000002, "end": 1959.24, "text": " they play with prompt programming so instead of just using a photo of a label"}, {"start": 1959.24, "end": 1963.96, "text": " they also experiment with adding maybe a type of pet if that made sense for the"}, {"start": 1963.96, "end": 1968.24, "text": " so we found several fine-grained image classification datasets that it helped"}, {"start": 1968.24, "end": 1974.2, "text": " to specify the category so for example in Oxford pets using that one would even"}, {"start": 1974.2, "end": 1979.04, "text": " help them even boost the performance even higher so on OCR just putting the"}, {"start": 1979.04, "end": 1985.04, "text": " quotes around the numbers would also gained gave them some additional boost"}, {"start": 1985.04, "end": 1989.44, "text": " they also played with and sampling as I mentioned so here is a concrete example"}, {"start": 1989.44, "end": 1994.32, "text": " so you by using a photo of a big for example dog and a photo of a small dog"}, {"start": 1994.32, "end": 1999.16, "text": " and combining those in the in the embedding feature space they get they get"}, {"start": 1999.16, "end": 2003.52, "text": " an additional boosting performance so they did not average in the probability"}, {"start": 2003.52, "end": 2010.6, "text": " space they just average before they do the the the soft max and everything so"}, {"start": 2010.6, "end": 2015.24, "text": " considering those two things together they get a boost of almost 5% and that's"}, {"start": 2015.24, 
"end": 2021.28, "text": " on image net so that's that's a really long low-hanging fruit and they made use"}, {"start": 2021.28, "end": 2028.76, "text": " of it so let's see that on the chart here basically for a given compute by"}, {"start": 2028.76, "end": 2034.0, "text": " just using those additional in sampling and pro programming programming methods"}, {"start": 2034.0, "end": 2040.56, "text": " they just get a huge improvements in the average score so that's that's that's"}, {"start": 2040.56, "end": 2045.2, "text": " the first thing I wanted to mention now let's jump to the zero shot performance"}, {"start": 2045.2, "end": 2050.96, "text": " of clip so what I did here is they took off-the-shelf models such as resonant"}, {"start": 2050.96, "end": 2057.6, "text": " hundred I think no resonance 50 here and they just fully pre fine-tuned it on"}, {"start": 2057.6, "end": 2062.92, "text": " every single data set you see here so it they pre trained it on STL 10"}, {"start": 2062.92, "end": 2068.52, "text": " Stanford cars etc and they took on the other side it took clip in a zero shot"}, {"start": 2068.52, "end": 2075.7999999999997, "text": " setting and we can see that clip is much better than fully supervised"}, {"start": 2075.7999999999997, "end": 2082.3199999999997, "text": " resident resident 50 on many of the data sets and now if we just analyze the"}, {"start": 2082.3199999999997, "end": 2086.4, "text": " extremes here this part and this part that will probably give us some more"}, {"start": 2086.4, "end": 2091.36, "text": " information what's happening here so first things first the data set zero"}, {"start": 2091.36, "end": 2096.48, "text": " shot clip improves by the most is STL 10 a data set designed to encourage"}, {"start": 2096.48, "end": 2100.96, "text": " efficient learning by providing only a limited number of labeled examples so"}, {"start": 2100.96, "end": 2105.96, "text": " what that means is that for this particular data set because resonant"}, {"start": 2105.96, "end": 2110.48, "text": " was you basically take the resident features that were that we got from"}, {"start": 2110.48, "end": 2116.2400000000002, "text": " image net and we freeze those and then we take the classifier and training"}, {"start": 2116.24, "end": 2121.8399999999997, "text": " that one with this many with this like this small amount of examples just"}, {"start": 2121.8399999999997, "end": 2127.72, "text": " didn't didn't give us a good results so you can see that clip was much better"}, {"start": 2127.72, "end": 2135.64, "text": " than resonant and on the other hand we have this other part of spectrum where"}, {"start": 2135.64, "end": 2140.9199999999996, "text": " if we analyze these data sets you can see that these data sets are complex and"}, {"start": 2140.92, "end": 2146.6, "text": " specialized so they say it here looking where it underperforms it works really"}, {"start": 2146.6, "end": 2151.4, "text": " weak on on these data sets like satellite image classification lymph node"}, {"start": 2151.4, "end": 2155.92, "text": " tumor counting objects blah blah blah and these results highlight the poor"}, {"start": 2155.92, "end": 2160.44, "text": " capability of zero shot clip or more complex complex tasks so that's where it"}, {"start": 2160.44, "end": 2165.04, "text": " underperforms resonant but it's still quite fascinating that a fully"}, {"start": 2165.04, "end": 2170.48, "text": " supervised model using labeled information is underperforming clip on"}, {"start": 2170.48, "end": 2176.12, 
"text": " on many of the data sets so it's worth mentioning of course resonant 50 is by"}, {"start": 2176.12, "end": 2182.28, "text": " no means state-of-the-art currently so let's now examine what happens when we"}, {"start": 2182.28, "end": 2186.2, "text": " compare clip with state-of-the-art but in a few shots few shot setting this time"}, {"start": 2186.2, "end": 2191.52, "text": " so these models are not trained like with all of the labels they just take a"}, {"start": 2191.52, "end": 2199.32, "text": " couple of labels so let's analyze the chart and they took three models first"}, {"start": 2199.32, "end": 2204.1600000000003, "text": " is resonant 50 sim clear the self supervised framework I mentioned and"}, {"start": 2204.1600000000003, "end": 2210.6000000000004, "text": " bit big transfer so this one was trained on JFT 300 so a lot of data and we can"}, {"start": 2210.6000000000004, "end": 2217.0800000000004, "text": " see that with 16 so we just so what we do here is we take those models like bit"}, {"start": 2217.0800000000004, "end": 2223.38, "text": " and we freeze the weights we add the classifier and we we pre-trip we we"}, {"start": 2223.38, "end": 2232.12, "text": " fine-tune on these different data sets and we see that for 16 examples we are"}, {"start": 2232.12, "end": 2239.6400000000003, "text": " approaching the zero shot clip which is a nice nice nice result because these are"}, {"start": 2239.6400000000003, "end": 2245.2400000000002, "text": " the best methods available currently and what's interesting here is that there is"}, {"start": 2245.24, "end": 2254.08, "text": " this drop between zero shot and one shot and basically that discrepancy"}, {"start": 2254.08, "end": 2259.6, "text": " shouldn't be there in in ideal world because humans for example when you when"}, {"start": 2259.6, "end": 2265.68, "text": " you give us one example we won't be underperforming like yeah we'll we'll"}, {"start": 2265.68, "end": 2270.8399999999997, "text": " we'll be better so that's something they're trying to to to to work on and"}, {"start": 2270.84, "end": 2276.88, "text": " the potential solution they propose is just to initialize so once you take the"}, {"start": 2276.88, "end": 2284.4, "text": " vision encoder of clip and instead of training a classifier from random they"}, {"start": 2284.4, "end": 2289.2400000000002, "text": " just initialize it with the zero classifier zero shot classifiers weights"}, {"start": 2289.2400000000002, "end": 2294.36, "text": " for that particular data set and use that but that still didn't help them so"}, {"start": 2294.36, "end": 2301.88, "text": " they just left it out for for the future for the future exploration let's continue"}, {"start": 2301.88, "end": 2311.56, "text": " and see how the how good the zero shot classifier actually is and here we see"}, {"start": 2311.56, "end": 2317.88, "text": " what we see in this chart is for example how many labeled examples do we need to"}, {"start": 2317.88, "end": 2326.2000000000003, "text": " achieve the same accuracy as zero shot classifier so for fair for fair 2013 if"}, {"start": 2326.2000000000003, "end": 2333.4, "text": " we just take that division encoder of clip and use a random initial classifier"}, {"start": 2333.4, "end": 2338.96, "text": " we need hundred eighty four examples to achieve the same performance as zero"}, {"start": 2338.96, "end": 2344.92, "text": " shot clip so that's that's a good result we we want to make zero shot classifier"}, {"start": 2344.92, "end": 2350.52, "text": " 
as good as possible so that even with a lot of label data we can improve upon"}, {"start": 2350.52, "end": 2355.32, "text": " it so that would be an ideal result if every single data set required a lot of"}, {"start": 2355.32, "end": 2360.96, "text": " labeled examples to improve upon that randomly initialized classifier that"}, {"start": 2360.96, "end": 2365.56, "text": " would be a good result here on the other chart we can see the difference between"}, {"start": 2365.56, "end": 2373.06, "text": " a linear probe clip performance versus a zero shot performance so this time you"}, {"start": 2373.06, "end": 2380.88, "text": " take that classifier and you basically train it with all the available data so"}, {"start": 2380.88, "end": 2385.96, "text": " it's fully supervised and we use all of the data available in a particular data"}, {"start": 2385.96, "end": 2392.48, "text": " set and we see that for a given data set like maybe this one the the performance"}, {"start": 2392.48, "end": 2398.0, "text": " is maybe 45% once we use all once we leverage all of the data but by using"}, {"start": 2398.0, "end": 2403.92, "text": " zero shot we have only 35 so obviously zero shot a classifier is suboptimal"}, {"start": 2403.92, "end": 2408.8, "text": " for this data set and we can see the general trend so in a perfect world zero"}, {"start": 2408.8, "end": 2414.48, "text": " shot would be as good as this classifier that used the whole corpus of label data"}, {"start": 2414.48, "end": 2420.44, "text": " and but it's you can see the trend it's suboptimal and it's down by maybe 20 or"}, {"start": 2420.44, "end": 2427.3, "text": " 15 points and only a couple of data says like STL 10 and if you remember that one"}, {"start": 2427.3, "end": 2432.0800000000004, "text": " has a little like a small amount of a label data available so here the the"}, {"start": 2432.0800000000004, "end": 2440.7200000000003, "text": " zero shot performance is similar to this fully fully trained classifier so that"}, {"start": 2440.7200000000003, "end": 2446.32, "text": " shows us that there is a lot more like space to to improve upon this zero shot"}, {"start": 2446.32, "end": 2452.6400000000003, "text": " classifier let's continue analyze other charts so this one what it tells us is"}, {"start": 2452.64, "end": 2459.6, "text": " that basically increasing the compute will decrease the error which is"}, {"start": 2459.6, "end": 2463.44, "text": " something we could expect but the interesting thing is it's really noisy"}, {"start": 2463.44, "end": 2467.3599999999997, "text": " so if we take a particular data set we can see that with compute sometimes the"}, {"start": 2467.3599999999997, "end": 2473.2799999999997, "text": " error increase increases and then it decreases so it's really noisy but the"}, {"start": 2473.2799999999997, "end": 2480.64, "text": " overall trend is still strong in favor of more computation is better here what"}, {"start": 2480.64, "end": 2487.24, "text": " we see is that basically the future space of clip is high-quality because"}, {"start": 2487.24, "end": 2491.68, "text": " once we do once we compare it with different models like you can see here"}, {"start": 2491.68, "end": 2498.72, "text": " efficient nets and be all and moco sim cleared like self supervised methods"}, {"start": 2498.72, "end": 2503.52, "text": " and bid which is fully supervised we can see that the feature space is higher"}, {"start": 2503.52, "end": 2508.8799999999997, "text": " quality for clip and so what we see on this diagram is 
the amount of compute on"}, {"start": 2508.88, "end": 2514.52, "text": " x-axis and the average score across a suit of different like 27 data sets and"}, {"start": 2514.52, "end": 2522.8, "text": " every single model was had a had a linear probe fully fully trained on the"}, {"start": 2522.8, "end": 2529.08, "text": " frozen feature space and a clip again shows much better much better average"}, {"start": 2529.08, "end": 2534.8, "text": " score which is something that we ideally want we want the clip to to be the"}, {"start": 2534.8, "end": 2540.0800000000004, "text": " features we train for clip are are really good for many downstream data sets"}, {"start": 2540.0800000000004, "end": 2545.48, "text": " and it showed that here if we take one specific model like maybe efficient net"}, {"start": 2545.48, "end": 2551.04, "text": " noisy student and we analyze we break down those instead of using the average"}, {"start": 2551.04, "end": 2555.52, "text": " score we just see the performance across every single data set we can see where"}, {"start": 2555.52, "end": 2560.2400000000002, "text": " the the results for clip are much better and where it's a bit worse than"}, {"start": 2560.2400000000002, "end": 2564.32, "text": " efficient net and because efficient net was basically pre trained on image net"}, {"start": 2564.32, "end": 2569.0, "text": " so it does make sense that it has better performance than clip that the feature"}, {"start": 2569.0, "end": 2572.96, "text": " space he developed is better for image net but only for image and I was as we"}, {"start": 2572.96, "end": 2576.48, "text": " can see on most of the other data sets the future space developed by clip is"}, {"start": 2576.48, "end": 2582.6000000000004, "text": " way more high quality than than efficient nets feature space okay let's"}, {"start": 2582.6000000000004, "end": 2589.44, "text": " continue and see some other results I'll skip this one not that interesting and"}, {"start": 2589.44, "end": 2596.28, "text": " I'll jump to this one basically if we take what we see here is the following"}, {"start": 2596.28, "end": 2604.4, "text": " we want to see we want to see how how the clip accuracy behaves when we change"}, {"start": 2604.4, "end": 2610.2000000000003, "text": " the underlying distribution so we see here basically same class same object"}, {"start": 2610.2000000000003, "end": 2613.56, "text": " same classes but like different distribution so here we have sketches"}, {"start": 2613.56, "end": 2621.36, "text": " here we have real-world objects some like again some drawings etc and if we"}, {"start": 2621.36, "end": 2628.08, "text": " take for example a model that had 80% accuracy on image net top 1 accuracy we"}, {"start": 2628.08, "end": 2636.88, "text": " see that for all of those standard image net models that they severely have like"}, {"start": 2636.88, "end": 2643.7200000000003, "text": " have much lower accuracy on those other related data sets were just with"}, {"start": 2643.7200000000003, "end": 2649.1600000000003, "text": " different distribution whereas on the other hand if we take the results for"}, {"start": 2649.1600000000003, "end": 2656.08, "text": " for clip it has much higher much higher accuracy than then then then image at"}, {"start": 2656.08, "end": 2664.28, "text": " once and ideally again if we have 80% on image net we want to have 80% on all of"}, {"start": 2664.28, "end": 2669.6800000000003, "text": " those similar data sets just with different distributions if we take a"}, {"start": 
2669.6800000000003, "end": 2676.28, "text": " look at the tabular data here what we see is when we take resonant 101 and so"}, {"start": 2676.28, "end": 2681.2000000000003, "text": " it was just pre trained on image and we don't fine-tune it on any of these new"}, {"start": 2681.2000000000003, "end": 2685.96, "text": " datasets with different distributions we just use it as it is and we see that the"}, {"start": 2685.96, "end": 2690.7200000000003, "text": " performance that the accuracy just severely drops and for these adversarial"}, {"start": 2690.72, "end": 2694.7599999999998, "text": " examples like you can see here some chopped banana something that the model"}, {"start": 2694.7599999999998, "end": 2700.3999999999996, "text": " never saw during training it has really really bad accuracy on the other hand"}, {"start": 2700.3999999999996, "end": 2707.7599999999998, "text": " clips got much much higher accuracy and it improves like even 75% in this case"}, {"start": 2707.7599999999998, "end": 2713.64, "text": " on this data set so that shows that clip is much more robust to"}, {"start": 2713.64, "end": 2720.92, "text": " distributional shift than these other other methods let's see some other results"}, {"start": 2720.92, "end": 2726.24, "text": " they showed and this one is interesting so this line is the same as the one above"}, {"start": 2726.24, "end": 2732.0, "text": " so we just use clip in a zero shot setting and if we actually fine-tune"}, {"start": 2732.0, "end": 2737.6, "text": " clip for image net we can see that even though it improves the image net"}, {"start": 2737.6, "end": 2743.72, "text": " accuracy the the actual robustness to to distributional shift so again we have"}, {"start": 2743.72, "end": 2749.16, "text": " those data sets like those seven data sets we see that the curves actually go"}, {"start": 2749.16, "end": 2756.44, "text": " go down and it's it's less robust to to all of those data sets from another"}, {"start": 2756.44, "end": 2761.56, "text": " perspective if we look at this chart it's it's had some improvements on image"}, {"start": 2761.56, "end": 2769.32, "text": " net whereas on many other data sets it just has less less good performance so"}, {"start": 2769.32, "end": 2774.6, "text": " was this adapt to class shift so I think I noted somewhere here basically"}, {"start": 2774.6, "end": 2778.84, "text": " because some of these data sets like YouTube BB and image net vid have"}, {"start": 2778.84, "end": 2783.6, "text": " different classes than image net which has thousand classes so it has so for"}, {"start": 2783.6, "end": 2789.2, "text": " the person class in YouTube BB we have what we have to do for those image net"}, {"start": 2789.2, "end": 2796.4399999999996, "text": " models is at collect the scores over baseball player bridegroom and scuba"}, {"start": 2796.4399999999996, "end": 2801.3999999999996, "text": " diver and that's how we give the prediction for person on YouTube BB"}, {"start": 2801.3999999999996, "end": 2808.2799999999997, "text": " which is suboptimal but since clip is configurable we can just use this new"}, {"start": 2808.2799999999997, "end": 2814.12, "text": " class and create the construction a type of a person or whatever to to have much"}, {"start": 2814.12, "end": 2821.2, "text": " better results than than these fixed image net models and when we do that we"}, {"start": 2821.2, "end": 2825.6, "text": " we have we can see that the trend improves and the robustness increases"}, {"start": 2825.6, "end": 2830.08, "text": " 
further which is really nice and particularly on those couple of data"}, {"start": 2830.08, "end": 2835.72, "text": " sets which has which have that superset of classes this small trick helps"}, {"start": 2835.72, "end": 2840.3599999999997, "text": " whereas the other methods as I already explained have to do the following so"}, {"start": 2840.36, "end": 2847.6, "text": " for example this this data set and has maybe class person so when we're testing"}, {"start": 2847.6, "end": 2853.52, "text": " when we're testing efficient net we have to pull over three classes out of"}, {"start": 2853.52, "end": 2857.96, "text": " thousand that he has to give an estimate of what person would be since he doesn't"}, {"start": 2857.96, "end": 2862.1600000000003, "text": " know of the concept of a person and on the other hand we can just use the"}, {"start": 2862.1600000000003, "end": 2869.2400000000002, "text": " flexibility of a zero shot classifier and get the direct results instead I'll"}, {"start": 2869.24, "end": 2874.6, "text": " just I'll just skip some of these because this is similar experiments to"}, {"start": 2874.6, "end": 2881.0, "text": " these ones is just that instead of doing showing us the fully supervised curve so"}, {"start": 2881.0, "end": 2887.0, "text": " this model is clip this that's fine-tuned to image net fully fine-tuned"}, {"start": 2887.0, "end": 2892.3199999999997, "text": " and instead of that they just show a trend once we start from few shots so"}, {"start": 2892.3199999999997, "end": 2896.3599999999997, "text": " two shots four shot as we start increasing the number of labels from"}, {"start": 2896.36, "end": 2901.6400000000003, "text": " image and we can see that we are slowly increasing the the image net top one"}, {"start": 2901.6400000000003, "end": 2909.2000000000003, "text": " accuracy but the we're still underneath zero shot clip on this on these seven"}, {"start": 2909.2000000000003, "end": 2913.84, "text": " natural distribution data sets okay there's there's a lot of information"}, {"start": 2913.84, "end": 2921.1600000000003, "text": " here the main idea is basically that clip clips feature space is way better"}, {"start": 2921.16, "end": 2927.56, "text": " suited to to just extend to many other downstream tasks and it's much more"}, {"start": 2927.56, "end": 2933.3999999999996, "text": " robust as we saw two different natural shifts so in a way it's it's a better"}, {"start": 2933.3999999999996, "end": 2937.96, "text": " way to and additionally it's not using those costly supervised data sets is"}, {"start": 2937.96, "end": 2944.64, "text": " just using those pairings and so it's easier to to to just get a get a get a"}, {"start": 2944.64, "end": 2949.8399999999997, "text": " grab of those of that data that's the main idea of the paper this shows some"}, {"start": 2949.84, "end": 2952.88, "text": " contamination analysis and finally I just want to show that there are some"}, {"start": 2952.88, "end": 2959.6800000000003, "text": " examples where it's not working really good and a nice nice example of a"}, {"start": 2959.6800000000003, "end": 2966.36, "text": " failure is on amnest where it achieves only 88 percent accuracy on amnest just"}, {"start": 2966.36, "end": 2971.36, "text": " for comparison if you take a simplest baseline so you just take amnest image"}, {"start": 2971.36, "end": 2976.96, "text": " which is 28 by 28 you just flatten out the pixels and you use a simple feed"}, {"start": 2976.96, "end": 2982.18, "text": " for net this one will achieve 
higher higher accuracy than clip which was"}, {"start": 2982.18, "end": 2988.0, "text": " trained on a lot of data and lots of compute so it really failed on amnest so"}, {"start": 2988.0, "end": 2993.7200000000003, "text": " as they say here this suggests clip does little to address the underlying problem"}, {"start": 2993.7200000000003, "end": 2997.64, "text": " of brittle generalization of deep learning models instead clips tries to"}, {"start": 2997.64, "end": 3001.68, "text": " circumvent the problem and hopes that by training in such a large and varied"}, {"start": 3001.68, "end": 3006.4, "text": " data set at all data will effectively be in distribution and this is a naive"}, {"start": 3006.4, "end": 3012.6, "text": " assumption that as amnest demonstrates is easy easy to violate so obviously"}, {"start": 3012.6, "end": 3019.2400000000002, "text": " this is not the the final the final way to go to towards towards really"}, {"start": 3019.2400000000002, "end": 3024.12, "text": " generalized model to really general models but it's a it's a it's a step"}, {"start": 3024.12, "end": 3030.28, "text": " forward and I'm excited about this work so yeah that was that was pretty much it"}, {"start": 3030.28, "end": 3033.96, "text": " a short recap because there was a lot of details and hopefully this will make it"}, {"start": 3033.96, "end": 3039.2400000000002, "text": " kind of consolidate in your head and so they care about zero-shot performance"}, {"start": 3039.2400000000002, "end": 3044.28, "text": " generalization to new tasks and not only a new distributions and a domain"}, {"start": 3044.28, "end": 3049.04, "text": " generalization they show that prompt programming and assembling does help"}, {"start": 3049.04, "end": 3053.92, "text": " boost the performance of clip models so that's important they show that even"}, {"start": 3053.92, "end": 3059.16, "text": " against a fully supervised baseline such as the resonance 50 they are comparable"}, {"start": 3059.16, "end": 3064.2799999999997, "text": " and those are exciting results they show that even against the best baselines in"}, {"start": 3064.2799999999997, "end": 3068.7599999999998, "text": " a few shot setting they are comparable in a zero shot setting and they are"}, {"start": 3068.7599999999998, "end": 3073.6, "text": " better when they're doing the same thing when they're also fine-tuning their"}, {"start": 3073.6, "end": 3081.52, "text": " model I linear probing actually here next they show that zero shot classifier"}, {"start": 3081.52, "end": 3086.96, "text": " can be further improved it's suboptimal to the classifiers which can which are"}, {"start": 3086.96, "end": 3092.28, "text": " imaginable by just fine-tuning them on specific data sets so there is a lot of"}, {"start": 3092.28, "end": 3097.92, "text": " there is more work that can be done on improving that again compute doesn't"}, {"start": 3097.92, "end": 3103.96, "text": " does improve a clip features are better so feature space that clip provides is"}, {"start": 3103.96, "end": 3108.88, "text": " better than other baselines in compared to a specific baseline efficient net"}, {"start": 3108.88, "end": 3114.16, "text": " here we can see that it's much better on many data sets so again feature space is"}, {"start": 3114.16, "end": 3122.68, "text": " richer than than the other baselines here it shows a impressive"}, {"start": 3122.68, "end": 3129.16, "text": " generalization to a robustness to distribution shift and same thing here"}, {"start": 3129.16, "end": 
3134.7999999999997, "text": " and they show that if you want if you if you kind of add a classifier and linear"}, {"start": 3134.7999999999997, "end": 3140.08, "text": " probe the clip onto image net you basically do improve image and performance"}, {"start": 3140.08, "end": 3144.56, "text": " but you lose the robustness to other data sets so you don't want to overfit"}, {"start": 3144.56, "end": 3151.3199999999997, "text": " to imagine basically and finally they showed some limitations among them that"}, {"start": 3151.3199999999997, "end": 3157.6, "text": " it's failing on on on simple data sets such as MNIST because it never saw those"}, {"start": 3157.6, "end": 3162.7599999999998, "text": " images during training so it just fails miserably even compared to the simplest"}, {"start": 3162.7599999999998, "end": 3166.4, "text": " of baselines okay those are there was a much longer video than I expected but"}, {"start": 3166.4, "end": 3170.2400000000002, "text": " hopefully you learned something from this video if you did consider"}, {"start": 3170.2400000000002, "end": 3174.64, "text": " subscribing and sharing this video and also hit the bell icon to get notified"}, {"start": 3174.64, "end": 3197.4, "text": " when I upload a new video until next time keep learning deep"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=ed0NJdqwEyg
PinSage - Graph Convolutional Neural Networks for Web-Scale Recommender Systems | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I do a deep dive into the PinSage paper! It was the first application of the GNN as a huge scale recommender system such as the one at Pinterest. You'll learn about: ✔️All the nitty-gritty details behind PinSage ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ paper: https://arxiv.org/pdf/1806.01973.pdf ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to PinSage and Pinterest 02:40 High-level overview of engineering innovation 05:40 MapReduce framework - a quick overview 08:00 Quick overview of the algorithm 09:10 High-level overview of algorithmic innovation 10:15 Tasks and related work 13:00 Problem setting (bipartite graph) 14:40 Detailed explanation of the algorithm 18:00 Neighborhood sampling and importance pooling 21:45 Model training and supervised data 23:50 Max margin ranking loss function explanation 29:18 Negative (hard) samples and curriculum learning 35:30 Pin features and baselines description 39:00 Offline evaluations and metrics (hit-rate, MRR) 43:00 Analysis of the embedding space (cosine sim) 45:15 User studies and A/B testing 49:00 Embedding space visualizations (t-SNE) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #pinsage #graphneuralnets #recommendersystems
Continuing with the graph neural network series, in this video I'm covering Graph Convolutional Neural Networks for Web-Scale Recommender Systems, or PinSage for short, and as you can tell by the title, this video is going to be more about a huge engineering system that's deployed at Pinterest than about novel algorithms like new propagation rules or something like that. So anyways, one thing worth mentioning is that this is a direct continuation of the GraphSAGE paper, which I've covered in my previous video, so if you don't know anything about GraphSAGE it's probably a good idea to go and check out that video first; I'll link it somewhere here. The authors are basically the same: Rex Ying, Hamilton and Leskovec were the authors on the GraphSAGE paper, and we have some additional authors for PinSage here. So let me start. They say "we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest", and I happened to open a new account today, so let me just show you how it all functions. The reason I'm showing you this is that they're training PinSage for two specific recommendation tasks. The first one is the home feed recommendation, where the goal is to give you the most relevant pins on your home feed, and the reason I'm mostly seeing muscle cars and motorbikes here is that the only pin I have is of a muscle car: initially I just picked some categories, and Pinterest gave me a couple of pins pretty much uniformly from all of those categories, then I went and pinned a certain pin, and now I'm pretty much getting related pins to the last pin I made. We'll see in the paper that they actually explicitly mention that's how this home feed recommendation works, and it seems they haven't changed that. The other task is that if I open a specific pin, they offer you this "more like this" list of similar pins, so these are the related pins to the pin you've just chosen. Aside from the concept of a pin, they also have something called boards; so this user, Harry, saved this particular pin to this board, and boards basically accumulate similar pins into a logical unit. That's everything I wanted to cover, so let me get back to the paper. They say here "we deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges". So hopefully, since I went through the platform, this will contextualize both the problem and how everything fits together even if you're not familiar with Pinterest.
So let me start and show you the engineering innovations they introduced, as well as some algorithmic innovations. First off, they say here that traditional GCN algorithms perform graph convolutions by multiplying feature matrices by powers of the full graph Laplacian. What that basically means, if you're familiar with GCN (here I'll just open that paper, which I've covered in one of my previous videos, where you can see the update rule for GCN), is that this A-tilde part has a structure similar to the Laplacian even though it's not exactly the Laplacian, so you would need to store this whole huge adjacency or Laplacian matrix, which is N by N where N is the number of nodes, and in the case of Pinterest we have around 3 billion nodes at the time the paper was written, so that's just not an option at all. So what they do, the same as in GraphSAGE, is simply subsample the neighborhoods, although they are not doing uniform sampling, they're doing something more clever, and I'll get to that in a bit. The second thing worth mentioning, on the engineering side, is this producer-consumer pipeline, and what it means is that, in parallel, the CPU is preparing mini-batches while the GPU is doing the propagations, and you can sync them so that the GPU is never starving; they say it here, a CPU-bound producer efficiently samples node network neighborhoods, and a GPU-bound TensorFlow model consumes those. And thirdly, they have this efficient MapReduce inference. Why do they have this? We'll see it in a bit more detail, but the Pinterest graph is huge, and what they do is train the PinSage model only on a subset of that huge graph; once they have the model trained, they use this MapReduce pipeline to compute the embeddings for every single other node in the graph, and that's pretty much it. They say here that the MapReduce pipeline can distribute the trained model to generate embeddings for billions of nodes while minimizing repeated computations, and that's the important part. Just on a small tangent, let me quickly explain the MapReduce pipeline; it's really simple, so I thought it's worth mentioning. The canonical example is finding word frequencies in documents, and here is just a toy example with three sentences; you can imagine a huge corpus here, like the multiple terabytes of textual data transformers are trained on, for which something like word frequencies needs to be calculated. What happens is that we basically have two modules, the mapping part and the reduce part, hence the name MapReduce. Inside the mapping stage we first just split the data, so every single sentence goes to a separate mapper, and then we map those samples of data into key-value pairs, so in this example of figuring out word frequencies we end up with something like (deer, 1), (bear, 1), etc. Then we shuffle, which basically clusters these key-value pairs by key, and finally the reduce stage sums those values up; depending on how you define the map and reduce scripts, you aggregate those and get the final result, where every single word has its frequency associated with it. This is a simplified example, obviously, and you'd have multiple shuffling and reducing stages in a real production system.
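Just to make that tangent concrete, here is a minimal, self-contained Python sketch of the word-count example; it only mimics the map, shuffle and reduce stages inside a single process, so it's an illustration of the idea rather than how Pinterest's actual MapReduce pipeline is implemented.

```python
from collections import defaultdict

def map_stage(sentence):
    # Map: emit a (word, 1) key-value pair for every word in the sentence.
    return [(word.lower(), 1) for word in sentence.split()]

def shuffle_stage(pairs):
    # Shuffle: group all emitted values by their key (the word).
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_stage(grouped):
    # Reduce: sum the counts for every key to get the word frequencies.
    return {key: sum(values) for key, values in grouped.items()}

sentences = ["Deer Bear River", "Car Car River", "Deer Car Bear"]
pairs = [pair for s in sentences for pair in map_stage(s)]   # map
counts = reduce_stage(shuffle_stage(pairs))                  # shuffle + reduce
print(counts)  # {'deer': 2, 'bear': 2, 'river': 2, 'car': 3}
```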
So again, MapReduce inference is supposed to minimize repeated computations, and if you're familiar with GraphSAGE this image will make sense; if not, I'll briefly do an overview here as well. The repeated computations are these here: for node A, this computation is the same as this one here and the same as this one here, so what MapReduce does is perform this projection only once for every single node in the graph, which saves us a lot of unnecessary computation. That's the role of MapReduce. On a short note, here is how the algorithm itself works. These are two layers of the PinSage algorithm, but I'll just ignore one layer and focus on a single layer. To calculate the representation of a specific node, and you can see a toy graph here where we are interested in this target node A, what we do is the following: we concatenate its previous-layer representation with the aggregated neighborhood representation, which we get by doing some aggregation. In PinSage they introduce this thing called importance pooling, and I'll explain that in a lot more detail a bit later, but for now this is just some aggregation function; in GraphSAGE you had max pooling, LSTMs, means, so this is something similar. So we take the neighbors (you can see B, C and D are the neighbors of A here), we project their representations, and we aggregate them, and that's how we get the aggregated portion. That's how the algorithm works in a nutshell. There are a couple of innovations which I'll now briefly go over at a high level, and these are the innovations in the algorithmic sense. Basically they show that the uniform random sampling they used in GraphSAGE is suboptimal, so they do this thing called importance pooling, where they pick those neighbors which have the most influence on the target node, and that will become clear in a couple of minutes. They say here that an additional benefit is that each node now has an importance score, and those scores will be used in this importance pooling; I'll explain that a bit later. One more algorithmic innovation is this curriculum training, where the algorithm is fed harder and harder negative examples, and we'll see what those exactly are in a couple of minutes. So that's a high-level overview of both the engineering and the algorithmic innovations, and now I'm going to slowly dig into the details of this paper. First things first, the tasks, as I mentioned, are the related-pin recommendation and home feed recommendation tasks, and we saw those on the platform itself, so just keep that in mind. In terms of the algorithmic design they are basically, as I said, continuing the work from the GraphSAGE paper, and they took a couple of ideas from FastGCN by Chen and his collaborators.
Basically the importance pooling part, those ideas were borrowed from that paper. And again, engineering details, and I'll have a lot of these throughout the video, because there is a lot of engineering that went into this work, so I think it's worth mentioning some of them; I'll omit most, but I still want to mention a couple. So: "we fundamentally improve upon GraphSAGE by removing the limitation that the whole graph be stored in GPU memory". Basically they are using that producer-consumer pipeline, so we don't have to hold the whole graph in the GPU; you hold it on the CPU side, where the producer is, and then the GPU, the consumer, just takes mini-batches, and that's how it trains. Okay, one more thing to note: they mention here why we can't use the shallow graph embedding methods, like DeepWalk and node2vec. The reason is that every single embedding is stored in a lookup table, so this table would be huge, because it has one entry per node and Pinterest has multiple billions of nodes, so this is pretty much intractable. Not only that, but these are also inherently transductive methods, which means they would have to be re-optimized whenever new pins are introduced into the system, which is also not desirable. And lastly, these methods only use the co-occurrence statistics of the graph and they can't use the features, and as you can probably assume, pins in this graph have rich features associated with them, because they have imagery and they also have textual information attached, so these methods can't even make use of those. So yeah, those are multiple reasons why you just can't use those shallow graph embedding methods for this task. Okay, let's continue and see how they model the problem. On one side you have pins, or more generally items, and on the other side you have boards, or more generally collections, and what they do is create a bipartite graph where, as we saw, one board has multiple related pins associated with it, so the graph will look something like this, and they mention they have around two billion pins and about one billion boards. As I mentioned, pins are associated with both rich text and image features, and the goal is to leverage both the features and the graph structure to find those embeddings, which can later be used to recommend related pins, etc. Okay, so now that we know how the problem is structured and what the tasks are, let's dig into some details. By the way, they treat both the pins and the boards as the same kind of node, even though only the pins actually have features; they say here, "for notational convenience and generality, when we describe the PinSage algorithm we simply refer to the node set of the full graph with V", so it's a union between items and collections, and they do not explicitly distinguish between pin and board nodes. Okay, this is the algorithm, and if you're familiar with GraphSAGE it has the same overall structure: this one line corresponds to this line, concatenation corresponds to this line, and normalization corresponds to this line.
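Since I keep pointing at lines of the algorithm on screen, here is a rough NumPy sketch of what one such convolve step computes for a single target node. The matrices Q and W stand in for the learned projection and dense-layer weights (biases omitted), and the neighbor weights are the importance weights explained below; this is a simplification for illustration, not Pinterest's actual implementation.

```python
import numpy as np

def convolve(z_u, neighbor_zs, neighbor_weights, Q, W):
    """One PinSage-style convolve step for a single target node (sketch, biases omitted)."""
    relu = lambda x: np.maximum(x, 0.0)
    projected = relu(neighbor_zs @ Q.T)              # project every sampled neighbor
    n_u = neighbor_weights @ projected               # gamma: importance-weighted mean of neighbors
    z_new = relu(np.concatenate([z_u, n_u]) @ W.T)   # concatenate with the node itself, dense layer
    return z_new / np.linalg.norm(z_new)             # L2-normalize, so embeddings lie on a hypersphere

# Tiny usage example with made-up dimensions: 3 neighbors, 8-d inputs, 8-d output.
rng = np.random.default_rng(0)
z = convolve(z_u=rng.normal(size=8),
             neighbor_zs=rng.normal(size=(3, 8)),
             neighbor_weights=np.array([0.375, 0.375, 0.25]),
             Q=rng.normal(size=(8, 8)),
             W=rng.normal(size=(8, 16)))
print(z.shape, round(float(np.linalg.norm(z)), 6))   # (8,) 1.0
```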
There are a couple of innovations, though: the first one is how they sample the neighborhood, the second one is importance pooling, and that's pretty much it, there are not a lot of new things happening here, everything else pretty much remains the same. A couple more notes before I explain the importance pooling and the neighborhood. They say here "empirically, we observe significant performance gains when using the concatenation operation instead of the average operation as in [21]", which is the GCN paper, so basically they showed empirically that it's better to use concatenation than what GCN did, which is pretty much a scaled, weighted mean of these features. Secondly, the normalization part: this part here in line three makes training more stable, and it's more efficient to perform approximate nearest neighbor search for normalized embeddings, and I'll explain what that means in a sec. First, normalization: what it does is basically make sure that all of our embeddings lie on a hypersphere, and that's just a fancy name for a sphere in higher dimensions; in three dimensions it's just the usual sphere, and that's the only thing that's important to know about it. So basically we have that sphere centered at the origin, and all of the embeddings lie on the surface of that sphere: all of our pins and all of our boards lie somewhere on this sphere. They say it's more efficient to perform approximate nearest neighbor search, so why do we care about approximate nearest neighbor search in this context? The reason is pretty simple: you have the embedding space and you embed your pins, so maybe you take this motorcycle and embed it here, that's pin P1, and then you have all of the pins embedded, and if you want to do related-pin recommendation what you pretty much do is find the k nearest neighbors of this target pin P1. So if I'm interested in what the related pins to the motorcycle are, I just find the closest ones, and because there are so many embeddings in this space, billions of them, we can't do exact nearest neighbor search, so they use methods such as locality-sensitive hashing, which are really good approximate methods for k-nearest-neighbor search. That's why they mention nearest neighbors here, and normalization helps with calculating those nearest neighbors. Okay, with those details out of the way, let me explain the two most important parts, and those are the neighborhood sampling and the importance pooling. Basically this is what it looks like: we have a target node and we have its neighborhood, so this is the one-hop neighborhood, this is the two-hop neighborhood, three-hop, et cetera. What we do is the following: we take a bunch of random walks, which are decently short, we run them in parallel, and we just count up the number of visits every single node had, so this one maybe had two visits, this one had only one visit, maybe some of them had three visits, like this one maybe had three and this one maybe had three, and we basically take the T nodes that had the highest visit count, and that's it; T was 50 in their scenario.
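Here is a small Python sketch of that neighbor-selection idea under the assumptions I just described: simulate a few short random walks from the target node, count visits, and keep the T most-visited nodes. The adjacency dict, the number of walks and the walk length are made-up toy values, not the ones Pinterest actually uses.

```python
import random
from collections import Counter

def sample_neighborhood(graph, target, T=3, num_walks=100, walk_length=4, seed=0):
    """Return the T nodes most visited by short, unbiased random walks from `target`."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        node = target
        for _ in range(walk_length):
            node = rng.choice(graph[node])  # unbiased: each neighbor is equally likely
            if node != target:
                visits[node] += 1
    return visits.most_common(T)            # [(node, visit_count), ...], highest counts first

# Toy graph as a plain adjacency dict.
graph = {"q": ["a", "b", "c"], "a": ["q", "b"], "b": ["q", "a", "c"], "c": ["q", "b"]}
print(sample_neighborhood(graph, "q"))      # e.g. [('b', ...), ('a', ...), ('c', ...)], counts depend on the seed
```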
One thing worth mentioning is that these random walks are unbiased, and what that means is that if you're at some node, your next node will be one of the neighboring nodes with equal probability. Contrast that with biased random walks such as the ones node2vec uses, where you have some parameters, p and q, and depending on how you set them you'll either prefer going in a BFS manner, breadth-first search, or you'll prefer depth-first-search traversals of the graph. Here we just have pure unbiased random walks: we run multiple of them in parallel, we find the counts, and that's our neighborhood. So for example, if T were maybe three, we'd only take this neighbor, this neighbor and this one. And how does that fit into importance pooling? Well, it's pretty easy once you think about it: we take their visit counts and we normalize them by the sum, so this would be three, three and two, that's eight, and those give us the final weights. Then the importance pooling part just works: you take these weights and you sum up the projected representations multiplied by them, so this projected feature vector gets multiplied by this weight, this feature vector gets multiplied by this weight, and you sum them up, and that's the gamma part, that's importance pooling. Contrast that with GraphSAGE, which just had max pooling, where we didn't have these weights, we just took the element-wise max; that's the difference. Okay, so they mention here that the neighborhood of a node u is defined as the T nodes, and I mentioned that's 50, that exert the most influence on node u, and it's pretty intuitive that the nodes with the highest visit counts are in a sense the most influential ones, because we have the highest probability of visiting them, they are, in a way, the most connected to our target node u. And again, about the importance pooling: "we implement gamma as a weighted mean, with weights defined according to the L1-normalized visit counts", which is the thing I just explained, and they refer to this as importance pooling. So those are the main innovations they had, but there are some more things that I want to cover.
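Carrying on with those toy numbers, here is a minimal NumPy sketch of that gamma step: L1-normalize the visit counts into weights and take the weighted mean of the already-projected neighbor representations. The three 4-dimensional vectors are made up purely for illustration.

```python
import numpy as np

visit_counts = np.array([3.0, 3.0, 2.0])          # visit counts of the T selected neighbors
weights = visit_counts / visit_counts.sum()       # L1-normalize: [3/8, 3/8, 2/8]

# Projected neighbor representations (one row per neighbor), made-up 4-d vectors.
projected = np.array([[1.0, 0.0, 2.0, 1.0],
                      [0.0, 1.0, 1.0, 3.0],
                      [2.0, 2.0, 0.0, 1.0]])

n_u = weights @ projected                          # importance pooling: weighted mean
print(weights)  # [0.375 0.375 0.25 ]
print(n_u)      # the aggregated neighborhood vector that gets concatenated with the node's own representation
```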
Okay, now let's jump in and understand how the loss works. I'll skip the mini-batch version of the algorithm I just showed you, because it's equivalent to the GraphSAGE algorithm: instead of working on the whole Pinterest graph, they just take a subset and work on that subgraph, so I'll skip it for now and focus on the loss function, and this is the interesting part. So z_q, z_i and z_nk are all final representations that you end up with after running the PinSage algorithm. Once you have them, the following happens: q is the query, whose embedding is z_q, the positive sample gives z_i, and the negative samples give z_nk. The goal is this: you want to make sure that the similarity between q and i, which is by definition a related pin, is bigger than the similarity between q and the negative sample by at least a margin. For example, if the similarity to the negative sample was three and the margin, which is itself a hyperparameter, was two, that means we need to make sure the positive similarity is at least five, five or bigger. Why? Because we have a max between zero and this term here, which means the loss goes to zero once the term becomes negative, and it becomes negative once the positive similarity becomes bigger than the similarity between the query and the negative sample plus the margin. So that's that portion of the loss. Now, if you're like me, you were probably confused at some point by this part, the expected value, and what it basically says is the following: the higher the probability of a certain negative sample, the more important it is to minimize its associated term. Let me try to visualize that. First, maybe a mathematical definition of the expected value, which will make it click for some of you: for a random variable X, where small x is one of its possible outcomes, the expected value of g(X) is just a plain sum over outcomes of the probability of the outcome times the value of g at that outcome, E[g(X)] = sum over x of p(x) g(x). Now let me contextualize and connect the two: g(x) is basically this max part, and let's see where the p(x) part comes from. Say we have a graph with a couple of nodes; this is the query node and this is the related node, the ground truth, because of the engagement and the way Pinterest collects these pairs, so we know this is the related pin and this is the query pin. Every other node in the graph is then a potential negative sample, assuming this is the only pin related to the query. Now let's say that during training we take this node as a negative sample, the n_k in the notation z_nk, maybe seven times, this one one time, this one zero times and this one two times. Because that first node was sampled most often, seven times compared to two and one, it's really important to minimize its associated loss term, i.e. its g(x) has to be as close to zero as possible.
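Here is a tiny sketch of that max-margin term for a single query, just to pin down the mechanics before finishing the expected-value example. It assumes the embeddings are plain NumPy vectors and that the dot product is the similarity, which matches the description above; the margin value and all names are illustrative, not taken from the paper.

```python
import numpy as np

def max_margin_loss(z_q, z_i, z_neg, margin=2.0, neg_probs=None):
    """Max-margin ranking loss for one query.

    z_q:       query embedding, shape (d,)
    z_i:       embedding of the engaged (positive) pin, shape (d,)
    z_neg:     embeddings of the sampled negatives, shape (K, d)
    neg_probs: optional sampling probabilities of the negatives; weighting
               the per-negative hinge terms by them is the empirical version
               of the expectation discussed above (uniform if omitted).
    """
    pos_sim = float(z_q @ z_i)          # similarity to the related pin
    neg_sims = z_neg @ z_q              # similarities to the negatives, shape (K,)
    hinge = np.maximum(0.0, neg_sims - pos_sim + margin)
    if neg_probs is None:
        neg_probs = np.full(len(hinge), 1.0 / len(hinge))
    return float(np.sum(neg_probs * hinge))
```

With a negative similarity of 3 and a margin of 2, a hinge term only reaches zero once the positive similarity is 5 or more, which is exactly the numeric example above.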
To spell it out, the total loss over those draws is one times X plus two times Y plus seven times Z, where X, Y and Z are the loss values associated with those negative samples, so the Z term is the most important one; the most important thing we can do is minimize that part, because it has the highest count. If we translate those counts into empirical probabilities, since we drew negative samples ten times in total, this one becomes seven over ten, this one two over ten and this one one over ten, and that's where the p(x) comes from: we have 0.7 times this max term, and that's why, the higher the probability of a negative sample, the more we want its term to be low. Okay, hopefully that wasn't too confusing; let me continue. What else... again some engineering details: they design a producer-consumer pattern to run the GPU computation of the current iteration and the CPU computation of the next iteration in parallel, which is what I mentioned already, and this further reduces the training time by almost a half. So even though conceptually this is a small thing, a) it's a really hard engineering problem, and b) it led to a huge improvement in the efficiency of the algorithm and the training time. One more engineering detail: to improve efficiency when training with large batch sizes, they sample a set of 500 negative items to be shared by all training examples in each mini-batch, so they don't have to figure out negative examples for every single node separately; they just pre-compute 500 negative items and share them across the different query and related pins (there's a small sketch of this idea just below). Enough about engineering details, let's see one more important part, algorithmically. However, ensuring that the inner product of the positive pair of items q and i is larger than that of q and each of the 500 negative items is too easy. Why is it too easy? Because if you take a random negative example from the huge dataset they have, with multiple billions of pins, there is a huge probability it will be totally different from the related pin, and that means the problem is really too easy to optimize. They show an example of what they mean: to solve the above problem, for each positive training example, i.e. the item pair q and i, they add hard negative examples. So they make a distinction between hard negative examples and, let's say, easy negative examples, and you can see it here visually: the query is some flower on an embroidery, the positive example, the pin i, is also an embroidery and also a flower; a random, easy negative sample would be this one, which is neither an embroidery nor a flower, just some random hat; and this is the hard example, because it is an embroidery, so it's in a way similar to the other two, but the content is a bird instead of a flower, and that's what makes it hard. So now that we know that, let's see what they do and how they figure out these hard examples: they are generated by ranking items in the graph according to their personalized PageRank scores with respect to the query item q, and items ranked at 2000 to 5000 are randomly sampled as hard negative items.
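As a quick aside before the PageRank details, the shared-negative trick mentioned above is easy to sketch. This is my own illustration with made-up names, and it simply samples uniformly from all items, which is exactly the "too easy" regime the hard negatives are about to fix.

```python
import random

def sample_shared_negatives(all_items, batch_pairs, num_negatives=500):
    """Draw one pool of negatives for the whole mini-batch.

    all_items:   list of every candidate item (pin) id
    batch_pairs: list of (q, i) training pairs in this mini-batch
    Returns the same 500 negatives attached to every pair, instead of
    sampling a fresh negative set per training example.
    """
    positives = {i for _, i in batch_pairs}
    pool = [item for item in all_items if item not in positives]  # avoid drawing the positives
    shared = random.sample(pool, num_negatives)
    return [(q, i, shared) for q, i in batch_pairs]
```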
You've probably heard of PageRank: it's the original algorithm that Google used for search, and intuitively it just looks at the edges and uses them to figure out how important a certain web page, or element in general, is, depending on how many links point to that particular node. So what they do is take the query pin, calculate these personalized PageRank scores, sort the items by the score, and take, for example, the ones ranked from 2k to 5k; it's just a simple heuristic they figured out makes sense, and they randomly sample elements from that subset, and those are the hard examples used in training. And this part tells us about the curriculum learning: in the first epoch of training no hard negative items are used, so that the algorithm quickly finds an area in the parameter space where the loss is relatively small, and at epoch n of the training they add n minus 1 hard negative items to the set of negative items for each item. That means in epoch two they'll be adding one hard example, that's n minus 1, and as the epochs progress, at epoch 10 they'll be adding nine hard negative items to the set of negative items for each item; that's how they progressively make it harder and harder to optimize that loss. Okay, that was all I had to say here; some of these things were about engineering, some were about algorithmic innovation, and now let's switch topic a little bit and see how they tested this thing and what they did. Again, just a fun fact: for the home feed recommendation task, they select the pins that are closest in the embedding space to one of the most recently pinned items by the user, and if you remember, I initially showed you that I now have a bunch of cars and motorcycles on my home feed because I was interacting with a motorcycle pin; basically, the last pin you made heavily influences the home feed recommendations, and I can empirically see and verify that, although it's probably not quite as simple as this one motorcycle pin changing my whole home feed. Okay, and again, as I mentioned, they define the set L by using historical user engagement data. Before we see some results, let me detail a little bit how the features for every single pin are constructed. Every pin has some image associated with it, we also have some annotations, so textual information, and the node also has connections to some other nodes. The final feature vector is constructed as follows: they pass the image through a VGG network, which is just a convolutional neural net, and they take one of the fully connected activations and use that as one part of the feature vector; then they train a Word2Vec model on the annotations and embed the annotations into, I think it was, a 256-dimensional vector; they concatenate these two, and finally they add the log of the degree of this particular pin's node. So if this node has, say, four nodes connecting to it, we just add log 4 as a feature; in the general case it's log d_i, where i is the node and d_i is its degree.
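As a rough sketch of that recipe (concatenate visual features, an annotation embedding and the log of the node degree): the helper callables and dimensionalities here are placeholders for whatever image and text encoders are used, so treat this as an illustration of the structure rather than the actual Pinterest pipeline.

```python
import numpy as np

def build_pin_features(image, annotations, degree,
                       image_encoder, text_encoder):
    """Assemble one pin's input feature vector.

    image_encoder: callable image -> 1D array
                   (e.g. an activation from a VGG fully connected layer)
    text_encoder:  callable list[str] -> 1D array
                   (e.g. averaged Word2Vec vectors of the annotations, ~256-d)
    degree:        number of edges attached to this pin's node
    """
    visual = image_encoder(image)              # visual part of the feature vector
    textual = text_encoder(annotations)        # annotation (text) part
    structural = np.array([np.log(degree)])    # log d_i of the node
    return np.concatenate([visual, textual, structural])
```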
Now that we know how the feature vectors are constructed for the pins in PinSage, let's see how the baselines are constructed. We have this visual baseline, which does the following: for every single pin it takes the image associated with that pin and does the same thing as PinSage, passing it through a VGG network, taking the features from a certain fully connected layer, and using that directly as the final embedding. Once everything is embedded in that space, this simple baseline recommends using k-nearest neighbors, the same as PinSage does; it's just a much simpler method, because it doesn't use the graph information, the graph structure, at all; it's a pure content recommender system. The second baseline is super similar, it just uses only the annotations instead of the image, so textual information. The third one combines both of those and trains an MLP on top of them. And finally, the fourth one is Pixie, which uses random walks to find the recommendations and basically doesn't use those features, just the random walks; that's the method that was previously used in production at Pinterest. They don't compare with other methods, because the scale is so huge that none of the previous methods could simply be used in this setting. Finally, they also modify PinSage in a couple of ways and use those variants as baselines: this max-pooling one, for example, uses max pooling instead of importance pooling and doesn't use the hard negative samples I showed you previously. So those are the baselines, and now we'll see the results. Again an engineering detail: they use Hadoop, an open-source implementation of the MapReduce framework, which I previously showed you and explained. So, let's see the results. Unfortunately these are probably not metrics you're familiar with, so let me briefly explain what they are. We have something called the hit rate, and we have something called the mean reciprocal rank. The hit rate is the following: you have the query pin and you have the related pin i, which we got through the engagement data, and the recommender system gives out a list of related pins for the query pin; the length of this list is K, which is a hyperparameter, and they used 500 in this paper. The hit rate asks whether the ground-truth pin is somewhere inside those recommended pins: if it is, that's a good thing, if it's not, that's a bad thing. Going through the test set, through the various q's and i's, the hit rate tells you the percentage of the time the ground-truth pin appeared in your recommendations. Looking at that, you can see that PinSage has a 67% hit rate compared to the simple baselines like the visual and annotation ones; the combined one is, as expected, better than purely using text or images, and PinSage is also better than all of its simpler ablations. The second metric is the mean reciprocal rank, and you can see the formula here; let me make some sense out of it. It's super similar to the hit rate, except that we are interested in where exactly the ground-truth pin i appeared.
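Here is a minimal sketch of both metrics, assuming each test case is a (query, ground-truth pin) pair and that recommend(q, K) returns a ranked list of K candidate pins. The names are mine, and the division of the rank by 100 inside the reciprocal term follows the explanation that continues just below, so the exact form is my reading of it rather than the paper's verbatim formula.

```python
import math

def hit_rate(test_pairs, recommend, K=500):
    """Fraction of (q, i) pairs whose ground-truth pin i shows up
    anywhere in the top-K recommendations for q."""
    hits = sum(1 for q, i in test_pairs if i in recommend(q, K))
    return hits / len(test_pairs)

def mean_reciprocal_rank(test_pairs, recommend, K=500, scale=100):
    """Average reciprocal of the (scaled) rank of the ground-truth pin;
    pairs where i is not retrieved at all contribute zero here."""
    total = 0.0
    for q, i in test_pairs:
        ranked = recommend(q, K)
        if i in ranked:
            rank = ranked.index(i) + 1               # 1-based position of the ground truth
            total += 1.0 / math.ceil(rank / scale)   # rank 256 -> 1/3, rank 1 -> 1/1
    return total / len(test_pairs)
```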
Let's assume that among the recommended pins we did have the ground-truth pin i, but at position, say, 256: that's much worse than having it at position one, right? Ideally we want to recommend the ground-truth pin somewhere closer to position one, and that's this R term here; they just sum up those reciprocal values and divide by the number of pairs in the test set, and that's it. And if you take a look, the hundred is just hard-coded, an empirically chosen value, so that we can still distinguish whether the pin is at position 1000 versus 2000; otherwise the difference between 1/1000 and 1/2000 is super small, and this scaling makes it bigger, so that's the reason they put the constant there. Again, looking at the mean reciprocal rank, we can see it greatly improves with PinSage. So those were the offline evaluations; they also did A/B tests and user studies, and I'll get into those in a minute, but first one more important analysis they did: exploring the embedding space that PinSage creates against the annotation and visual baselines. What they did is the following: you have a bunch of pins and their embeddings, you take two random pins, and you compute the cosine similarity of their embedding vectors, so the cosine of the angle between them, and you plot the distribution of that. Ideally you'd want it to be as spread out as possible, because that means the embeddings are not all similarly oriented; if you pick random pins, you want them to be oriented all over the space. Looking at the plot here, you can see that PinSage has the most spread-out curve, and the worst one is the curve that is concentrated really close to one: remember that all the embeddings are normalized to unit length, so you only get a cosine similarity of one when the two embedding vectors completely coincide. If you take a look at the annotation baseline, you can see it also has the worst results, a 14 percent hit rate against 17, so it's the worst method, and these curves tell pretty much the same story. And to introduce some fancy statistics into this: the kurtosis of the cosine similarities of the PinSage embeddings is 0.43, compared to 2.49 for the annotation embeddings and 1.20 for the visual embeddings. You want this to be as small as possible; kurtosis pretty much measures how heavy the tails of the distribution are. Okay, so those were the offline metrics and the analysis of the embedding space; now for the user studies. They basically formed a group of people and gave them side-by-side comparisons, so PinSage versus the visual baseline, PinSage versus the annotation baseline, PinSage versus Pixie, and asked them which one is more relevant to the query pin: here is a query pin, here are some results from the different baselines, which is better? Looking at the table, PinSage again has the highest fraction of wins, so there's not much more information there; let's see some visual results.
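Before the qualitative examples, the embedding-spread analysis above is easy to reproduce on any set of unit-normalized embeddings. A sketch, assuming embeddings is an (N, d) NumPy array already normalized to unit length; the sample size, seed and the use of SciPy's kurtosis are my choices, not details from the paper.

```python
import numpy as np
from scipy.stats import kurtosis

def cosine_similarity_spread(embeddings, num_pairs=100_000, seed=0):
    """Sample random pairs of unit-norm embeddings, compute their cosine
    similarities, and summarize the spread of that distribution.
    A broad distribution (low kurtosis) means the embeddings are not all
    pointing in roughly the same direction."""
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    a = rng.integers(0, n, size=num_pairs)
    b = rng.integers(0, n, size=num_pairs)
    # dot product equals cosine similarity because the vectors are unit length
    sims = np.einsum("ij,ij->i", embeddings[a], embeddings[b])
    return sims, kurtosis(sims)
```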
Given this query pin, where you can see some young plants, the visual baseline makes semantic errors: it returns imagery of food, which is visually related but not what we're interested in, not as related conceptually. The annotation baseline returns some results, Pixie as well, but none of these are as relevant as PinSage, which gives things that are both visually similar and conceptually on the same topic: these are also young plants, here, here and here, so you can see visually that PinSage gives the best recommendations. One more example: here we see some logs in the query pin, and the visual baseline again makes a mistake, returning some grayscale war photos and confusing them with this topic, which is just logging, whereas PinSage gives both visually similar pics and, most importantly, the most relevant pins in general. Okay, I mentioned they did A/B testing: what they were testing in production was the home feed, measured through this repin rate. A user saving a pin to a board is a high-value action that signifies deep engagement, so once you recommend something on the home feed and the user engages with some of the pins and saves them to one of his or her boards, that's a really good signal, and that's what they were measuring in the A/B tests at Pinterest. I assume they took some geographic location and used some baseline there, took some other location and used PinSage, and compared how much the engagement varies between the different production systems; they found that PinSage again performs better. So all three kinds of evaluation, the offline metrics, the user studies and the A/B tests, show that PinSage improved the results. Let me wrap up the paper with a couple more details. This one is interesting: after training completes, thanks to the highly efficient MapReduce inference pipeline, the whole inference procedure to generate embeddings for three billion items can finish in less than 24 hours. This is impressive, and you can see how much engineering went into this system; there is a lot of innovation happening in engineering as well, it's not only about research. A couple more interesting charts: they took the PinSage embeddings and did a t-SNE reduction to a 2D chart, and you can see that if I zoom in, embeddings that are really close in the space are topically similar, you can see some fashion images here, some covers here, so all in all it makes sense semantically. Finally, they explored the number of neighbors: if you remember how the neighborhood is computed, we take the T neighbors that had the highest visit count, and here they show that taking 50 neighbors gives the highest hit rate and mean reciprocal rank while the training time is still acceptable, 78 hours, compared to the other settings. So that was pretty much it. I covered a bunch of details from this paper; I still omitted some, but these are probably the most important things you should understand about this work. If you found this video useful, consider subscribing and hitting that bell icon to get notified when I upload a new video, and if you have any feedback, any comment on how I can improve, or whatever you have to say, please write it down in the comment section, that will also help me boost the YouTube algorithm. So yeah, until next time, keep learning deep
[{"start": 0.0, "end": 6.78, "text": " Continuing with the graph neural network series and this video I'm covering graph convolutional neural networks for web scale"}, {"start": 7.22, "end": 8.96, "text": " recommender systems or"}, {"start": 8.96, "end": 15.08, "text": " Pinsage for short and as you can tell by the title of the video is going to be more about like this huge"}, {"start": 15.64, "end": 20.080000000000002, "text": " Engineering system that's deployed at Pinterest and then about novel"}, {"start": 20.96, "end": 25.52, "text": " Algorithms like propagation rules or something like that. So anyways"}, {"start": 25.52, "end": 31.3, "text": " One thing worth mentioning is that this is a direct continuation of the graph sage"}, {"start": 31.7, "end": 36.34, "text": " Paper which I've covered in my previous video. So if you don't know anything about graph sage"}, {"start": 36.42, "end": 42.42, "text": " It's probably a good idea to just go and check out that video. I'll link it somewhere in the in somewhere here"}, {"start": 43.14, "end": 45.620000000000005, "text": " So basically the authors are basically the same"}, {"start": 46.3, "end": 49.4, "text": " Rex, Ying, Hamilton and Leskowitz"}, {"start": 49.4, "end": 56.36, "text": " Were the authors on graph sage paper and we have some additional authors for Pinsage here. Mm-hmm"}, {"start": 56.64, "end": 63.16, "text": " So let me start we describe a large scale deep recommendation engine that we developed and deployed at Pinterest"}, {"start": 63.16, "end": 69.75999999999999, "text": " And I so happened to have opened a new account today. So let me just show you how it all functions"}, {"start": 69.92, "end": 77.12, "text": " So basically and why I'm showing you this is because they're they're training Pinsage for two specific recommendation tasks"}, {"start": 77.12, "end": 79.80000000000001, "text": " the first one is this home feed a"}, {"start": 80.24000000000001, "end": 82.56, "text": " recommendation where basically the goal is to give you"}, {"start": 83.36, "end": 85.68, "text": " To give you like yeah"}, {"start": 86.28, "end": 92.16000000000001, "text": " most relevant pins on your home feed and the reason I'm mostly having like a motor cars here and motorbikes is"}, {"start": 92.64, "end": 95.60000000000001, "text": " Because the only pin I have is of a muscle car"}, {"start": 95.60000000000001, "end": 98.2, "text": " so initially I just picked some categories and"}, {"start": 98.64, "end": 104.24000000000001, "text": " What Pinterest did was it gave me pretty much uniformly for from all of those categories a couple of pins"}, {"start": 104.24, "end": 109.39999999999999, "text": " And what I did I went and pinned a certain pin and now I'm pretty much getting"}, {"start": 110.0, "end": 112.0, "text": " related pins to the last pin I've made"}, {"start": 112.64, "end": 114.11999999999999, "text": " And we'll see that in the paper"}, {"start": 114.11999999999999, "end": 120.56, "text": " They actually explicitly mentioned that that's how this home feed our recommendation works and it seems they haven't changed that thing"}, {"start": 120.56, "end": 124.39999999999999, "text": " so another task they are doing is if I open a specific pin and"}, {"start": 124.88, "end": 130.48, "text": " What they do here is they they offer you this list of similar pins. 
So more like this"}, {"start": 130.48, "end": 134.88, "text": " So these are the related pins to the pin you're just chosen and"}, {"start": 135.39999999999998, "end": 142.85999999999999, "text": " There is also aside from the concept of a pin. They have something called boards and so this user Harry saved this"}, {"start": 143.48, "end": 147.64, "text": " particular pin to this board V road and basically boards just"}, {"start": 148.6, "end": 150.2, "text": " Just basically"}, {"start": 150.2, "end": 157.23999999999998, "text": " Accumulate similar like similar pins in a logical unit and that's that's everything I wanted to cover"}, {"start": 157.24, "end": 159.24, "text": " Let me get back to the paper"}, {"start": 159.24, "end": 164.52, "text": " So they say here we deploy pinsage at Pinterest and train it on 7.5 billion examples"}, {"start": 165.24, "end": 170.76000000000002, "text": " on a graph with 3 billion nodes representing pins and boards in 18 billion edges"}, {"start": 170.76000000000002, "end": 176.96, "text": " So hopefully because I went through the platform if you if you're not familiar with Pinterest this will kind of contextualize"}, {"start": 177.52, "end": 183.32000000000002, "text": " Both the problem and how how everything fits together. So let me let me start and"}, {"start": 184.04000000000002, "end": 186.04000000000002, "text": " show you the"}, {"start": 186.04, "end": 188.04, "text": " some engineering"}, {"start": 188.04, "end": 190.51999999999998, "text": " innovations they introduced as well as some"}, {"start": 191.07999999999998, "end": 199.07999999999998, "text": " algorithmic innovations so first off they say here traditional GCN algorithms perform graph convolutions by multiplying feature matrices"}, {"start": 199.07999999999998, "end": 205.88, "text": " by powers of the full graph laplation and what that basically means if you if you if you're familiar with GCN and here"}, {"start": 205.88, "end": 207.88, "text": " I'll just open I'll open this"}, {"start": 208.68, "end": 214.35999999999999, "text": " paper which I've covered in one of my previous videos and you can see the update rule for GCN and"}, {"start": 214.36, "end": 220.28, "text": " Basically this part the a tilde part is in a structure similar to laplation even though it's not laplation and"}, {"start": 220.28, "end": 225.0, "text": " That's what they're basically saying they're saying that you need to store this whole huge"}, {"start": 225.56, "end": 230.92000000000002, "text": " adjacency or laplation matrix which is m by n where n is the number of nodes and in the case of"}, {"start": 231.32000000000002, "end": 237.72000000000003, "text": " Pinterest we have like 3 billion nodes at a time that the paper was written and so basically that's not that's not"}, {"start": 238.28000000000003, "end": 241.32000000000002, "text": " An option at all. 
So let me let me get back here"}, {"start": 241.32, "end": 247.23999999999998, "text": " So what it did as is the same as in graph sage they basically they are just sub sampling the neighborhoods"}, {"start": 247.64, "end": 249.64, "text": " Although they are not"}, {"start": 249.64, "end": 253.64, "text": " Doing uniform sampling they're doing something more clever and I'll get to that in a bit"}, {"start": 254.44, "end": 259.88, "text": " The second thing worth mentioning is that they have so on the on the engineering side is that they have this"}, {"start": 260.52, "end": 264.84, "text": " producer consumer pipeline and basically what it means is that"}, {"start": 265.48, "end": 268.52, "text": " We have in so in parallel the cpu is"}, {"start": 268.52, "end": 272.28, "text": " Preparing mini batches while the gpu is doing"}, {"start": 272.84, "end": 277.64, "text": " propagations and then you can just optimize and sync them so that"}, {"start": 278.12, "end": 285.0, "text": " GPU is never starving and they say it here cpu bound producer efficiently samples node network neighborhoods a gpu bound"}, {"start": 285.32, "end": 287.32, "text": " TensorFlow model consumes those"}, {"start": 288.35999999999996, "end": 294.91999999999996, "text": " And thirdly they have this efficient map reduce inference and why do they have this well basically?"}, {"start": 294.92, "end": 298.12, "text": " and we'll see that in a bit a bit in a bit more detail, but"}, {"start": 298.84000000000003, "end": 303.40000000000003, "text": " the the Pinterest graph is huge and what they do is they train the"}, {"start": 303.96000000000004, "end": 311.08000000000004, "text": " Pinsage model only on a subset of that huge graph and once they have the model trained"}, {"start": 311.40000000000003, "end": 319.08000000000004, "text": " They just use this map reduce pipeline to basically compute the embeddings for every single other node in the graph"}, {"start": 319.72, "end": 320.92, "text": " That's pretty much it"}, {"start": 320.92, "end": 328.68, "text": " So they say it here that can so map reduce pipeline can distribute the trained model to generate embeddings for billions of nodes"}, {"start": 329.08000000000004, "end": 334.20000000000005, "text": " While minimizing repeated computations, so that's this is the the the important part"}, {"start": 334.84000000000003, "end": 335.88, "text": " and"}, {"start": 335.88, "end": 340.76, "text": " Just on a small tangent. 
Let me let me quickly explain the map reduce pipeline and it's really simple"}, {"start": 340.92, "end": 342.92, "text": " So I thought it's worth mentioning it"}, {"start": 343.56, "end": 345.56, "text": " Basically here on an example"}, {"start": 345.56, "end": 351.64, "text": " This is a canonical example of doing uh finding out word frequencies in a document in documents"}, {"start": 351.88, "end": 354.6, "text": " And here is just a toy example with three sentences"}, {"start": 354.6, "end": 361.32, "text": " You can imagine a huge corpora here like imagine transformers which have multiple terabytes of textual data"}, {"start": 362.04, "end": 366.6, "text": " Which needs to get somehow maybe like the word frequency needs to get calculated"}, {"start": 367.16, "end": 371.96, "text": " And here on this example what happens is we basically have two modules"}, {"start": 371.96, "end": 378.35999999999996, "text": " We have this mapping part and we have the reduce part and hence the name mapping use and basically"}, {"start": 378.76, "end": 384.28, "text": " Inside the mapping module. We first just split the data. So here every single sentence will go there"}, {"start": 385.08, "end": 392.44, "text": " in in a separate module and what then happens is we we just map those parts those those"}, {"start": 393.56, "end": 398.35999999999996, "text": " Those samples of data into key value pairs and in this example"}, {"start": 398.36, "end": 403.40000000000003, "text": " of figuring out the frequencies of the words, we basically end up with like a"}, {"start": 404.12, "end": 410.12, "text": " Deer having one bear one, etc. And then what we do is we shuffle which is basically clustering"}, {"start": 410.84000000000003, "end": 414.84000000000003, "text": " These key value pairs by the key and we end up with these"}, {"start": 415.32, "end": 418.36, "text": " And finally the reduce stage sums these values up"}, {"start": 418.68, "end": 422.84000000000003, "text": " And it depends on how you define those reduce and map scripts"}, {"start": 422.84000000000003, "end": 424.84000000000003, "text": " And finally just aggregate those"}, {"start": 424.84, "end": 432.03999999999996, "text": " And you get the final result every single word has its frequency associated with it and this is a simplified"}, {"start": 432.76, "end": 434.76, "text": " example, obviously"}, {"start": 434.76, "end": 440.59999999999997, "text": " You'll have multiple shuffling and reducing stages in a in a in a real production system. So"}, {"start": 441.32, "end": 442.59999999999997, "text": " again"}, {"start": 442.59999999999997, "end": 445.79999999999995, "text": " It's supposed to minimize repeated computations and"}, {"start": 446.59999999999997, "end": 449.15999999999997, "text": " If you're familiar with graph sage, this will be really"}, {"start": 449.71999999999997, "end": 451.71999999999997, "text": " This image will make sense"}, {"start": 451.72, "end": 456.12, "text": " If not, i'll just i'll briefly do an overview as well here"}, {"start": 456.12, "end": 460.68, "text": " But like the repeated computations are these here. 
So basically the node a"}, {"start": 461.48, "end": 465.96000000000004, "text": " This computation here is the same as this one here and the same as this one here"}, {"start": 465.96000000000004, "end": 470.52000000000004, "text": " So what map reduce does is it it does this projection only once?"}, {"start": 471.08000000000004, "end": 476.28000000000003, "text": " For every single node in the graph and thus saves us a lot of computation unnecessary computation"}, {"start": 477.32000000000005, "end": 479.32000000000005, "text": " So that's the role of map reduce"}, {"start": 479.32, "end": 484.36, "text": " And on a short note how this works is the following so let me take"}, {"start": 485.0, "end": 486.44, "text": " I'll just ignore"}, {"start": 486.44, "end": 490.04, "text": " Because this is these are two layers of the pin sage algorithm"}, {"start": 490.04, "end": 495.32, "text": " I'll just ignore one layer and i'll focus on the on a single layer and basically to calculate"}, {"start": 495.8, "end": 500.28, "text": " For a specific node and you can see a toy graph here and we are interested in this target node"}, {"start": 500.28, "end": 506.2, "text": " So to calculate the representation of this node a what we do is the following so we basically concatenate it"}, {"start": 506.2, "end": 513.72, "text": " It's it's last layer representation with the aggregated neighborhood representation which we again get by doing some aggregation"}, {"start": 514.12, "end": 523.16, "text": " And in pin sage they introduce this thing called important sampling important spooling and i'll i'll explain that in a lot more detail a bit later"}, {"start": 523.16, "end": 530.52, "text": " But for now just this is some aggregation function like in graph sage you had max pooling you had lstms means so this is something similar"}, {"start": 530.52, "end": 538.04, "text": " And so what we do is we take the neighbors you can see b c and d are the neighbors of a here so we take those neighbors"}, {"start": 538.04, "end": 544.04, "text": " We just project their representations here and we aggregate them and that's how we get the aggregated portion"}, {"start": 544.04, "end": 546.04, "text": " So that's how the algorithm works in a nutshell"}, {"start": 547.24, "end": 556.04, "text": " There are a couple of innovations which i'll now briefly do a high level of and so these are the innovations in algorithmic sense"}, {"start": 556.04, "end": 564.04, "text": " Basically they don't do they show that the random sampling uniform land random sampling they used in graph sage is suboptimal"}, {"start": 564.04, "end": 576.04, "text": " So they do this thing called importance pooling where they are basically picking the the the those those neighbors which have the most influence on the target node"}, {"start": 576.04, "end": 578.04, "text": " And that will get clear in a couple of minutes"}, {"start": 578.04, "end": 588.04, "text": " So they say here an additional benefit is that each node now has an important score and those scores will be used in this importance pooling"}, {"start": 588.04, "end": 598.04, "text": " I'll explain that a bit later one more algorithmic innovation is this curriculum training where basically the algorithm is fed harder and harder negative examples"}, {"start": 598.04, "end": 602.04, "text": " And we'll we'll see what those exactly are in a couple of minutes"}, {"start": 602.04, "end": 608.04, "text": " So that's a high level overview of what the both the engineering as well as the algorithmic 
innovations are"}, {"start": 608.04, "end": 614.04, "text": " And now i'm going to slowly dig into details of this paper"}, {"start": 614.04, "end": 626.04, "text": " So first first things first the tasks as I mentioned are related pin recommendation and home feed recommendation tasks and we saw that on the platform itself"}, {"start": 626.04, "end": 628.04, "text": " So just keep that in mind"}, {"start": 628.04, "end": 642.04, "text": " So in terms of the algorithmic design they they are basically as I said continuing the graph sage paper so the work on that paper and they took a couple of ideas from from fast gcn by Chen and his collaborators"}, {"start": 642.04, "end": 648.04, "text": " Basically the the importance pooling part those ideas were borrowed from from from this paper"}, {"start": 648.04, "end": 670.04, "text": " And again engineering details and I'll have a lot of these throughout the video because this this this this paper is there is a lot of engineering that went into this this work so I think it's just worth mentioning some of them and I'll omit like most of them but I still want to mention a couple of them"}, {"start": 670.04, "end": 696.04, "text": " So we fundamentally improve upon graph sage by removing the limitation that the whole graph be stored in a GPU memory so basically they are using that producer consumer pipeline so we don't have to hold the whole graph in GPU you hold it in CPU and that's where the producer is and then the GPU the consumer just takes many batches and basically that's how it trains"}, {"start": 696.04, "end": 725.04, "text": " Okay one more thing to note they they they they mentioned here why we can't use those shallow graph embedding methods so like deep walk and note to work the reason is they basically have for every single embedding is stored in the table in a lookup table so this table would be huge like because this is in and it's a number of notes and because Pinterest has like multiple like billions of notes"}, {"start": 725.04, "end": 754.04, "text": " This is like pretty much intractable and not only that but it's also inherently transacting methods which means you will have to get re-optimized to get when new pins are introduced into the system which is also not desirable and lastly these methods only use the co-occurrence statistics of the graph and they can use the features and as you probably can assume pins in in the graph"}, {"start": 754.04, "end": 782.04, "text": " pins in in in this graph have a rich features associated with them because they have like imagery and they have like maybe some let me just draw something here okay and they have they also have textual information associated with them so these methods can even make use of these so yeah that that those are multiple reasons why you just can't use those graph embedding methods for this task"}, {"start": 782.04, "end": 811.04, "text": " okay let's let's continue and see how they structure how they model the the the problem so basically the model of the the the graph the following so you have on one side you have pins or more generally items and on one side you have you have boards or more generally collections and what they do is they just create a by by part of the graph where basically as we saw one board has multiple"}, {"start": 811.04, "end": 840.04, "text": " related pins associated with with it so this is how the the graph will look like something like this and they mentioned they have around two billion pins and about one billion boards so this is how the 
the graph looks like and yeah I mentioned this pins are associated with both rich text and image features and the goal is to leverage both the features and the graph structure to to find those embeddings which can later be used to recommend"}, {"start": 840.04, "end": 868.04, "text": " related pins etc okay so now that we know how the problem is structured and what the what the tasks are and let's dig into some details yeah by the way they treat both the pins and the boards as the same nodes even though only the pins basically have features so they say here for notational convenience in generality"}, {"start": 868.04, "end": 897.04, "text": " when we describe the pin sage algorithm we simply refer to the node set of the full graph with V so it's a union between items and collections and they do not explicitly distinguish between pin and board nodes okay this is the algorithm basically if you're familiar with graph sage it has the same structure same old structure so here this one line corresponds to this line concatenation corresponds to this line and normalization corresponds to this line"}, {"start": 897.04, "end": 922.04, "text": " there are a couple of innovations though the first one is how they sampled the neighborhood that's the first innovation the second one importance pooling and that's pretty much it there is not a lot of new things that happen here everything else pretty much remain the same so a couple more nodes before I explain the importance pooling and before I explain the neighborhood they empirically so they say here empirically we observe significant performance gains when we look at the"}, {"start": 922.04, "end": 945.04, "text": " operation instead of the average operation as in 21 which is GCN paper so basically this year they showed empirically that it's better to use concatenation than just the thing that GCN did and that's just pretty much scaled weighted mean of these features and secondly normalization part so this part here"}, {"start": 945.04, "end": 975.04, "text": " in line three makes training more stable and it's more efficient to perform approximate nearest neighbor search for normalized embeddings and I'll explain what that is in a sec so first normalization what it does is basically make sure that all of our embeddings lie on a hypersphere and there's just a fancy name saying a sphere in higher dimensions if that sphere was in three dimensions and that sphere was in three dimensions then that sphere is a hyper sphere and that's the only thing that's important to know about this so"}, {"start": 975.04, "end": 1005.0, "text": " basically we have we have that sphere lying around the coordinate beginning and all of the embeddings would lie on the edge of that sphere hopefully you can see that so again maybe here we have a huge sphere and all of the embeddings lie somewhere on this sphere all of our pins all of our boards lie somewhere on this sphere so they say here so it's more efficient to perform approximate nearest neighbor search so why do we"}, {"start": 1005.0, "end": 1034.96, "text": " care about approximate nearest neighbor search in this context so the reason is pretty simple so once you have so you basically have the embedding space and you embed your pins like maybe you'll maybe take this motorcycle and you'll embed it here and that's pin one and then you'll have all of the pins embedded and basically if you want to do related pin recommendation what you pretty much do is you find"}, {"start": 1034.96, "end": 1062.1200000000001, "text": " the k 
nearest neighbors to this target node p1 so if you're if I'm interested what the related pins to the motorcycle are I just find the closest ones and now because there are so many embeddings in this space so there are billions of embeddings we can't do exact nearest neighbor so they just use methods such as locality sensitive hashing to which are really good"}, {"start": 1062.12, "end": 1087.6, "text": " approximative methods to to do k nearest neighbors so that's why they mention nearest neighbors here and so normalization helps helps calculate those those nearest neighbors okay with those details out of the way let me explain the two most important parts and that's neighborhood and"}, {"start": 1087.6, "end": 1106.36, "text": " important sampling so basically this is what looks like we have a target node and we have its neighborhood so this is the one hop neighborhood this is the two hop neighborhood three hop et cetera so what we do basically is the following we take a bunch of random walks which are"}, {"start": 1106.36, "end": 1133.52, "text": " decently short and we we run them in parallel and what we do is we just count up the number of wizards every single node had so this one maybe had two wizards this one had only one visit maybe some of them had three wizards like this one maybe had three this one maybe had three and we basically take the t"}, {"start": 1133.52, "end": 1151.44, "text": " nodes that had the highest visit count and that's it so t was 15 there in their scenario so one thing worth mentioning is that these random walks are not biased and what that means is basically if you're at this node you'll go to your next"}, {"start": 1151.44, "end": 1181.3600000000001, "text": " node will be one of the neighboring nodes with equal probability contrast that to biased nodes such as so node2vec for example method used biased biased random walks where if you you have some parameters there like p and q and depending how you set them up you'll either prefer going in like a bfs manner so breadth first search or you'll be preferring depth first search traversals of the graph so here we just have pure unbiased random walks we we"}, {"start": 1181.36, "end": 1211.32, "text": " figure out we run multiple of them and in parallel and we just find the counts and that's the that's our neighborhood so for example if t was if t was maybe three we only take this neighbor we take this neighbor and we take this one and so how does that fit in into important sampling well it's pretty easy once you think of it so basically what we do is we take the their visit counts and we just normalize them by the"}, {"start": 1211.32, "end": 1234.3999999999999, "text": " sum so this will be a three three and two that's eight so we'll have the final weights will be this and then the importance pooling part just works you just take these weights you sum up their projected representations and so you multiply the representations the projected representations by these"}, {"start": 1234.4, "end": 1262.0, "text": " weights so this one so this representation this feature vector will get multiplied by this one and this feature vector will get multiplied with this weight and you just sum them up and that's the gamma part that's the importance pooling part so contrast that the graph sage which has just had max pooling and there we didn't have these weights we just find the like element wise max and that's that's the difference"}, {"start": 1262.0, "end": 1290.0, "text": " okay so they say they mentioned here the 
neighborhood of a node u is defined as the t-notes and I mentioned that's 50 that exert the most influence on node u and you it's pretty much intuitive that all of those nodes which had the highest visit counts are in a sense the most influential ones because we have the highest probability of visiting them they are the most connected in a way to our target node u"}, {"start": 1290.0, "end": 1319.36, "text": " and again about the importance pooling we implemented gamma as a weighted mean with weights defined according to the l1 normalized visit counts which is the thing I just explained and they refer to this as importance pooling so that those are the main innovations they had but there are some more things that I want to cover so let's just continue with model training so if you remember a graph sage"}, {"start": 1319.36, "end": 1341.36, "text": " how graph sage was trained it was basically trained either in an unsupervised manner using negative sampling or we had task specific supervised loss like cross entropy in the case of classification so here pin sage uses this thing called max margin ranking loss and we'll see what that is in a sec"}, {"start": 1341.36, "end": 1370.36, "text": " but before that what's the data so the goal of the training phase is to optimize the pin sage parameters so that the output embeddings of pairs query and i from this set l in the labeled set are close together so what's this set l so basically what it is is the following they just tracked the engagement on pinterest so if you for example had if you were playing with pin q"}, {"start": 1370.36, "end": 1399.36, "text": " and the first next pin that you opened or interacted with had an engagement with was pin i then they just put this pair in this set l they just add this pair to this set l and that's how they collect basically the supervised data set just through natural engagement of users on pinterest they through time get a huge huge training set l"}, {"start": 1399.36, "end": 1428.36, "text": " okay now let's jump in understand how the loss works and i'll i'll skip through this mini batch version of the algorithm i just showed you because it's equivalent to the graph sage algorithm they just instead of working on the whole pinterest graph they just take a subset and they just work on on that sub graph instead of using the whole graph so i'll just keep it for now"}, {"start": 1428.36, "end": 1450.36, "text": " and let's focus on the loss function and this is this is interesting part so basically the following happens so as zq z and q all of these are the final representations that you end up having after you run the pin sage algorithm so you end up with these as the vectors"}, {"start": 1450.36, "end": 1469.36, "text": " and once you have them what you do is the following so you have you have q which is the query query embedding and you have positive samples which is z i and you have negative samples which is n k and the goal here is the following"}, {"start": 1469.36, "end": 1490.36, "text": " you basically want to make sure that the similarity between q and i which is by definition a related pin this thing should be bigger than the similarity between q and this negative sample by at least a margin"}, {"start": 1490.36, "end": 1510.36, "text": " so for example if this similarity was three and the margin was which is a hyper parameter itself so if the margin was two that means we need to make sure that this part is at least five five or bigger five plus"}, {"start": 1510.36, "end": 1532.36, "text": " and 
why is that because we have a max between zero and this term here which means the loss goes to zero once this becomes negative and this becomes negative once this part once this part becomes bigger than the similarity between the query and negative sample plus the margin"}, {"start": 1532.36, "end": 1551.36, "text": " so that's this this portion of the loss now if you're like me you you're probably confused or you were confused at one point of your life with this part the expected value and what this basically says is the following so the higher the probability of a certain negative sample"}, {"start": 1551.36, "end": 1571.36, "text": " the more important it is to minimize its associated term here so let me let me try and visualize that here so basically first maybe a mathematical definition of what expected value is maybe it will be for some of you will make sense so expected value of a random variable"}, {"start": 1571.36, "end": 1596.36, "text": " x and some function g of x where this small x is just one of the possible outcomes of this random variable big X is the following is just a plain sum of the probability of those outcomes times the value of the function g at that particular outcome x"}, {"start": 1596.36, "end": 1615.36, "text": " and let me now contextualize and associate these two so g of x is basically this part this max part and let's see where the p part comes from so if we have a graph like this if we have a graph and we have a couple of notes and maybe this is the query node"}, {"start": 1615.36, "end": 1635.36, "text": " and this is the the the related node that's ground truth because of the engagement and the way how Pinterest collects these so we know this is the related one and we know this is the query pin so now let's say there are a couple there so every other pin every other node in the graph is a potential negative sample"}, {"start": 1635.36, "end": 1664.36, "text": " assuming that this is the only related pin to query pin so what we do is the following so let's let's say that during the training we maybe take seven times we take this this node as a negative sample as the NK so remember the notation we had ZNK so seven times maybe we take this one one time maybe we take this one zero times and this one two times so basically what happens is because this one was"}, {"start": 1664.36, "end": 1691.36, "text": " this one was sampled most oftenly seven times compared to two and one it's really important to minimize its associated values so g of x had to be as close to zero as possible right because we'll have we'll have one time one time something so the last will be one times x plus two times so this is two times"}, {"start": 1691.36, "end": 1718.36, "text": " its loss that's y plus seven times times Z where Z is the loss value associated with this negative sample so that means it's the most important so the most important thing we can do is minimize this part so because this is the highest the highest the highest number and if we translate these into probabilities into"}, {"start": 1718.36, "end": 1744.36, "text": " empirical probabilities because we had ten times we draw negative samples ten times this will be seven over ten this will be two over ten and this will be one over ten and that's where the p of x comes from so basically we have zero point seven times this max and we that's the reason we want to make sure"}, {"start": 1744.36, "end": 1773.36, "text": " that the higher the probability we want to make sure that the term is the lowest okay hopefully this 
wasn't too much confusing okay let me continue what else again some engineering details we design a producer consumer pattern to run GPU computation at the current iteration and CPU computation at the next iteration in parallel so that's what I mentioned already and this further reduces the training time by almost a half"}, {"start": 1773.36, "end": 1802.36, "text": " so even though conceptually this is a small thing like it's a really hard engineering problem and b it led to like a huge improvement in efficiency of the algorithm and training time okay and one more engineering detail is that they are to improve efficiency when training with large batch sizes we sample a set of five hundred negative items to be shared by all training examples in each mini batch so they won't be"}, {"start": 1802.36, "end": 1831.36, "text": " finding for every single node they won't be figuring out the negative examples they'll just pre compute five hundred negative items and they'll be sharing those for different query and related pins enough about engineering details let's let's see one more important part algorithmically so however ensuring that the inner product of the positive example so pair of items q and i is larger than that of the q and each of the five hundred negative items"}, {"start": 1831.36, "end": 1860.36, "text": " is too easy and why is it too easy because if you take a random negative example from the huge data set they have like a multiple billions of pins there is a huge probability that will be totally different from the related pins and that means it's really too easy to to optimize this this problem and here they show so they show an example of what they mean and they make"}, {"start": 1860.36, "end": 1886.36, "text": " of what they mean and they mentioned here so to solve the above problem for each positive training example so the item pair q and i we had hard negative examples so we make a distinction between hard negative examples and like maybe easy and negative examples and you can see here visually basically this is a query so we have some flower on an embroidery and then we have"}, {"start": 1886.36, "end": 1906.36, "text": " a positive example so that's the eye pin had this image and you can see it's also an embroidery and it's also flower so a random negative example so the easy negative sample would be this so this is neither embroidery and it's neither a flower so just some random hat and this is the hard"}, {"start": 1906.36, "end": 1927.36, "text": " example because it is embroidery so it's a similar in a way similar to these two but the content is a bird instead of a flower so that's what what what makes it hard okay so now that we know that let me see what they what they do and how they calculate how did they figure out these"}, {"start": 1927.36, "end": 1946.36, "text": " a hard example so they are generated by ranking items in a graph according to their personalized page rank scores with respect to the query item q items ranked at 2000 to 5000 are randomly sampled as hard negative items so a page rank is you probably heard of it page rank is"}, {"start": 1946.36, "end": 1972.36, "text": " Google initial the initial algorithm that Google use for research and intuitively it just looks at the edges and using that finds how important certain web page or element in general is important depending on how many links point to that particular note so what I do is they take the query pin"}, {"start": 1972.36, "end": 1997.36, "text": " and they just calculate these personalized 
page rank scores they just saw them by the score and they just take for example from 2k to 5k they consider so there's just a simple some heuristic they they figure out makes sense and they just randomly sample elements from from this subset and those are the hard"}, {"start": 1997.36, "end": 2018.36, "text": " hard examples in the training and for the part so this part tells us about the curriculum learning so in the first epoch of the training no hard negative items are used so that the algorithm quickly finds an area in the parameter space where the loss is relatively small at each epoch"}, {"start": 2018.36, "end": 2044.36, "text": " epoch and the training we had and minus one hard negative items to the set of negative items for each item so that means in epoch to they'll be adding one so that's a minus one so they'll be adding only one hard example as the epoch progress at epoch 10 they'll be adding nine negative items to the set of negative items for each item and that's how they progressively"}, {"start": 2044.36, "end": 2052.3599999999997, "text": " stimulate and make it harder to to optimize that loss"}, {"start": 2052.36, "end": 2079.36, "text": " okay that was that was all I had to say about the like some of the some some things were about engineering some things about were about algorithmic innovation and now let's switch topic a little bit and see how they how they tested this thing and what they what they did so again just just a fun fact so"}, {"start": 2079.36, "end": 2098.36, "text": " so for the home feed recommendation task we select the pins that are closest in the embedding space to one of the most recently pinned items by the user and if you remember I initially showed you that I have bunch of cars here and the reason and motorcycles now because I was"}, {"start": 2098.36, "end": 2118.36, "text": " interacting with a motorcycle but like basically they are they are the the the last pin you made is heavily influences this home feed recommendations and I can just empirically see that and verify it although it's not as simple as we saw like this motorcycle me engaging with this pin"}, {"start": 2118.36, "end": 2145.36, "text": " change my home feed here okay so again I mentioned this day they defined the set tell by using the historical user engagement data okay before we we we we see some results let me just detail a little bit how the features for every single pin are constructed so basically every"}, {"start": 2145.36, "end": 2169.36, "text": " single pin has some image associated with it so that some image and then we have also have some annotations so textual information and this notice also has some connections with some other notes so how they construct the final feature vector is the following so they"}, {"start": 2169.36, "end": 2190.36, "text": " past the image through a VGG network so there's just a convolutional neural net and they take one of the flat one of the feed forward activations and they use that as one part of the feature vector then they use they do a virtual back the train a virtual back model on the"}, {"start": 2190.36, "end": 2213.36, "text": " annotations and they embed the annotations also in a I think it was 256 dimensional vector and they concatenate these two and finally they add the log of the degree of this note of this particular pin so this is the degree so maybe this note has just ignore these so we have four"}, {"start": 2213.36, "end": 2231.36, "text": " four four notes connecting to this one so we'll just add log four as a feature 
so in a general case log the I where this is no I and this is the degree of the note and now that we know how the feature vectors are constructed for the pins for pins in"}, {"start": 2231.36, "end": 2248.36, "text": " pins age let's see how the baselines are constructed we have this visual baseline which basically does the following so for every single pin it just takes the image that's associated with that pen and does the same thing as pins age so just passes it through a VGG network takes out"}, {"start": 2248.36, "end": 2266.36, "text": " those features from certain fully connected layer and embeds that directly uses that as the final embedding and so once you embed that into so once you embed this thing into embedding space how you recommend how this simple baseline recommends is just using"}, {"start": 2266.36, "end": 2285.36, "text": " canerous neighbors so the same as as pin stages he just has a much less delicate method it doesn't use it doesn't use the graph information the graph structure it's a content recommender system okay second baselines super similar just uses only"}, {"start": 2285.36, "end": 2304.36, "text": " annotations instead of the image so it uses textual information third one combines both of those entrains MLP on top of them and finally the fourth one is pixie which uses random walks to find the the the the recommendations and basically doesn't use"}, {"start": 2304.36, "end": 2323.36, "text": " the those features just uses the random walks so that's the method that was previously used in production at Pinterest okay and yeah they don't compare with other methods because basically the scale is so huge that none of the previous methods could be"}, {"start": 2323.36, "end": 2341.36, "text": " simply used in this setting finally they just modify pins age and in a couple of ways and use those as baselines as well so here this max pooling one uses instead of using the importance pooling it uses max pooling and it just doesn't use those hard"}, {"start": 2341.36, "end": 2358.36, "text": " negative samples which I showed you previously so yeah that's the those are the baselines and now we'll see the results again some engineering details huge pages hadoop which is an open source implementation of the map reduce framework which I"}, {"start": 2358.36, "end": 2376.36, "text": " previously showed you and explained you so yep let's see the results yeah unfortunately these are not metrics you're probably familiar with so let me just briefly explain why what these are so we have something called hit rate hit rate and we have something"}, {"start": 2376.36, "end": 2400.36, "text": " called mean reciprocal rank and what hit rate is is the following so basically you have the query pin and you have the related pin I which we got by collecting like through engagement data right and so now our recommender system basically gives out a list of"}, {"start": 2400.36, "end": 2424.36, "text": " related pins to this query pin and the length of this list is K which is a hyper parameter and they used 500 in this paper so now what the hit rate is is the following so we're interested whether this ground truth pin is somewhere inside the recommended"}, {"start": 2424.36, "end": 2446.36, "text": " recommended pins if it is we that's a good thing if it's not that's a bad thing and basically hit rate just tells you going through the test set so going through various cues these cues and various eyes you basically see the percentage of time that the ground truth pin"}, {"start": 2446.36, "end": 
2471.36, "text": " appeared in your recommendations and okay looking at that you can see that pinsage has 67% hit rate compared to these simple baselines like visual annotation and the combined one is combined one is better as expected than purely using like text or images and"}, {"start": 2471.36, "end": 2492.36, "text": " pinsage is also better than all of these maybe simpler baselines of pinsage so the second metric is this mean reciprocal rank and you can see the formula here and let me just make some sense out of it so basically it's super similar to hit rate except that we are interested"}, {"start": 2492.36, "end": 2520.36, "text": " into where exactly this pin this ground truth pin I appeared so let's assume that among the recommended items pins we we we did have this pin I and it was a location maybe 256 then that's much worse than having it the location one right so if we'd ideally want to recommend"}, {"start": 2520.36, "end": 2539.36, "text": " this ground truth pin somewhere closer to position one and that's that's this our term here and they just sum up those values and they just divided by the number of nodes in the test and that's it and if you take a look and the hundred is just hard coded so it's just"}, {"start": 2539.36, "end": 2568.36, "text": " empirically chosen value so that once we have so that we can distinguish when this is on one position to 1000 versus 2000 and that should make that will still make difference otherwise if this wasn't here the difference between what one over 1000 one over 2000 is super small and this makes it bigger so that's the reason they put the conjures here and again looking at the mixed reciprocal rank we can see it greatly improves with pinsage"}, {"start": 2568.36, "end": 2591.36, "text": " so those those were the offline evaluations they also did a be testings and user studies and I'll get into those in a minute but first one more important analysis they did is the exploration of the embedding space that the pins age method created against these baselines so the annotation visual"}, {"start": 2591.36, "end": 2610.36, "text": " baselines so basically what it did so they they took the embedding so you have a bunch of pins and they're embedded and you just took take two random pins and their embeddings and you do the cosine similarity so basically you're interested in the so if these are"}, {"start": 2610.36, "end": 2638.36, "text": " embedding vectors so you're interested in cosine of the angle between them cosine theta and you just plot that and you'd ideally one day to be distributed as as much as possible because that means they are not there not all the embeddings are similar and similarly oriented you want to have them or if they're if you pick the random pins you want to have them randomly distributed in space oriented in space so looking at the graph here you can see that"}, {"start": 2638.36, "end": 2640.36, "text": " the"}, {"start": 2640.36, "end": 2667.36, "text": " the pin sage has the most distributed curve here and the worst one is this one because it's highly it's really close to to one so you get one once you have because all of the embeddings remember they are they are normalized to unit length unit norm so that means this what you get one only when the vectors of the both embeddings totally coincide"}, {"start": 2667.36, "end": 2692.36, "text": " and if you if you if you take a look at the annotation baseline you can see it has the worst results it's 14 percent hit rate against 17 and 0.9 and so it's it's the worst 
method and these curves here tell the same thing pretty much and if you can introduce some fancy mathematics into this statistics so basically the"}, {"start": 2692.36, "end": 2714.36, "text": " court curdosis statistical measure of the cosine similarities of pin sage embedding is 020 43 compared to 2.49 for annotation embeddings and 1.20 for visual embeddings so you want to have this as small as possible this pretty much measures how heavy the details of the distribution are so that's this statistical measure the curdosis"}, {"start": 2714.36, "end": 2736.36, "text": " okay so those were the offline metrics and this analysis of the embedding space now for the user studies so they basically formed a group of people and they gave them a side by side different baseline so pin sage and visual baseline pin sage and annotation baseline"}, {"start": 2736.36, "end": 2763.36, "text": " pin sage and pixie and they asked them which one is more relevant to the query to the query pin so this is a query pin and then they they give them some results from different baselines and they asked them what's better and you can see so first the table again pin sage has the highest fraction of winds so that's not much information there so let's see some visual results"}, {"start": 2763.36, "end": 2789.36, "text": " okay so given this query pin where you can see some some like young plants here the visual baseline makes some semantical errors so it returns the image imagery of food which is visually related but it's not what we what we are interested in it's not as related conceptually on the other hand annotations are returning some"}, {"start": 2789.36, "end": 2811.36, "text": " results pixie also but like none of these are as relevant as pin sage which basically also gave both visually similar things but conceptually the topic that's the same those are also some like young plants here and here and here so you can see visually that pin sage gives the best recommendations"}, {"start": 2811.36, "end": 2839.36, "text": " finally one more example here we see some logs in the query pin so the visual one again made mistake giving some war photos which are like grayscale and confuse them thus with the topic that's similar to this one and that's just logging and you can see that pin sage gave both the visually similar picks and most importantly the most relevant pictures"}, {"start": 2839.36, "end": 2862.36, "text": " pins in general okay I mentioned they did a bit testing so what they were testing in production is they were testing home feed and they were testing this re pin rate so a user saving a pin to a board is a high value action that signifies deep engagement of the user so once you recommend something to the user on the home feed"}, {"start": 2862.36, "end": 2887.36, "text": " and user engages with some of the pins and saves them into some of his or her boards that's a really good signal and that's what they were testing like in a be tests at Pinterest so I assume they took some geographic location they use some baseline there and then took some other location they took pin sage and they compared how much the engagement varies between the different"}, {"start": 2887.36, "end": 2911.36, "text": " production systems and they figure out that pin sage is again performing better so all of the three metrics both the offline metrics all of both the user studies both the a be tests show that pin sage improved the results so let me wrap up the paper basically couple more details this one is interesting"}, {"start": 
2911.36, "end": 2939.36, "text": " so after training completes due to the highly efficient map reduce inference pipeline the whole inference procedure to generate embeddings for three billion items can finish in less than 24 hours so this is impressive and you can see how much engineering went into this system and yeah it's just there is a lot of innovation happening in engineering also it's not only about research"}, {"start": 2939.36, "end": 2967.36, "text": " couple more interesting charts here so what they did is they took the embeddings of pin sage and they just did some t-sne reduction to 2d chart and you can see that if I zoom in embeddings that are really close into the space are topically similar so you can see some fashion images here you can see some"}, {"start": 2967.36, "end": 2993.36, "text": " covers here so yeah all in all semantically it makes sense finally they were exploring the number of neighbors so if you remember how the neighborhood is computed we basically take the t neighbors that had the highest count and here they showed that taking 50 neighbors has the highest hit rate and mixed this mean"}, {"start": 2993.36, "end": 3019.36, "text": " reciprocal rank where the training time was still acceptable 78 hours compared to these so that was pretty much it I covered bunch of details in this paper I still omitted some details but those are probably the most important details you should understand about this work and yeah if you found this video useful"}, {"start": 3019.36, "end": 3041.36, "text": " consider subscribing and hitting that bell icon to get notified when I upload each new video and if you have any feedback any comment how I can improve or whatever you have to say please write it down in the comment section that will also help me boost the YouTube algorithm so yeah until next time keep learning deep"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=vinQCnizqDA
Graph SAGE - Inductive Representation Learning on Large Graphs | GNN Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I do a deep dive into the Graph SAGE paper! The first paper that started pushing the usage of GNNs for super large graphs. You'll learn about: ✔️All the nitty-gritty details behind Graph SAGE ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Graph SAGE paper: https://arxiv.org/abs/1706.02216 ✅ Chris Olah on LSTMs: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro 00:38 Problems with previous methods 04:30 High-level overview of the method 06:10 Some notes on the related work 07:13 Pseudo-code explanation 12:03 How do we train Graph SAGE? 15:40 Note on the neighborhood function 17:40 Aggregator functions 23:30 Results 28:00 Expressiveness of Graph SAGE 30:10 Mini-batch version 35:30 Problems with graph embedding methods (drift) 40:30 Comparison with GCN and GAT ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphsage #gnns #graphtheory
Hi, in this video I'm continuing with Graph Neural Network series and I'll be covering this paper called Inductive Representation Learning on Large Graphs or also known as GraphSAGE. So basically the main proposition of GraphSAGE is that they developed a method that's inherently inductive and can work on huge graphs. So the later paper that came after GraphSAGE is PinSAGE and I'll be covering that one in one of the next videos and this is basically a precursor to that paper which is really awesome and has an awesome implication as a recommender system in Pinterest. So basically they say here, most existing approaches require that all nodes in the graph are present during training of the embeddings. These previous approaches are inherently transductive and do not naturally generalize to unseen nodes. So let me do just a quick recap of how those methods basically work. So what you have is, you have a Static Graph. So let's draw some toy graph here and I'll be using this one throughout the series. And basically what you're trying to do is you're trying to train the embeddings for all of these nodes. So basically an embedding table and this one has dimension n. So this vertical dimension is n, where n is the number of nodes. So basically this. And this dimension is d, which is a hyperparameter and is usually something lower dimensional than the input feature vectors. So all of the nodes have features associated with them and we want to find a lower dimensionality representation of those nodes, which would encode both the features and the neighborhood information. Now the problem with these methods is that, so they would basically what they did is, so if you have for example this node and this node here, you want to make those two embedding vectors as similar to each other. So if we just represent those vectors in 2D space like this, we want to have one and two close to each other. And on the other hand, if we have some node somewhere further away, connected by some intermediate node with one, and let's call it maybe three, and maybe it's somewhere here in the table. So we want to make it be dissimilar to those two vectors. Okay, so what's the problem with this approach? So the problem is once you train this embedding table, it's really hard to generalize to new unseen nodes and to new graphs. So let me draw it like this. So basically let's say this thing, we pick it up into this blob and we call it G1. And if you want to add new node or maybe even two new nodes, you basically have the following. You have this table which was previously trained and has meaningful vectors and you just append two new vectors here which are initialized randomly because we still don't know anything about those and we have to train those. So now there are a lot of problems with doing this and I'll address some of them later, but it's basically even harder once you want to generalize to unseen graphs because this graph too here has totally different random walk statistics than this graph here and so this embedding table won't make any sense if you try and just use it here. So you'll basically have to retrain the whole thing or find some way to kind of align nodes between these two graphs and in any case it's cumbersome and we'll now see what GraphSage did. So basically they said here instead of training individual embeddings for each node, we'll learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. 
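To make the embedding-table recap above concrete, here is a toy NumPy sketch of the transductive setup and the unseen-node problem; the table size, node indices, and variable names are made up for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                              # 4 known nodes, d-dimensional embeddings
table = rng.normal(size=(n, d))          # the trainable n x d embedding table

def similarity(u, v):
    # Dot-product similarity these shallow methods shape during training:
    # large for connected nodes, small for distant ones.
    return table[u] @ table[v]

print(similarity(0, 1))                  # fine for nodes seen during training

# The failure mode described above: an unseen node has no trained row, so the
# only option is to append a randomly initialised row and retrain.
table = np.vstack([table, rng.normal(size=(1, d))])
print(similarity(0, 4))                  # node 4 now has a row, but it is noise
```

The inductive alternative the video introduces next avoids this table entirely by computing embeddings from features.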
And GraphSAGE is not super different from GAT or GCN if you looked at my previous videos, but there are some differences which make it more generalizable to large graphs and more efficient than GAT and GCN, and now I'll show you why. So by the way, GraphSAGE, the SAGE part is just an acronym for sample and aggregate. And let's see those two steps. I'll first start high level and then I'll have a pseudocode and I'll explain in detail how the algorithm works. But basically what we do is instead of taking the whole neighborhood as we did with GCNs and GATs, we basically just do a uniform sample from the neighborhood and then we repeat that through a couple of GraphSAGE layers and we end up with this. So basically this red node in the middle will end up having some parts, like some mixture of representations of these orange nodes. And basically these aggregate functions are trainable and they're kept general here. So the paper itself suggested a couple of different variants and we'll see those in a minute. But if you remember, what GCNs did, they would just simply add these up, but they would be normalized by 1 over square root of di dj, where those are the degrees of the corresponding nodes. And GAT did this via attention, so it dynamically figured out these coefficients for all of these representations and then we just multiplied them and added them up. So that was just a high level overview and I'll slowly start digging into depth, but let's start with some related work. Basically before GraphSAGE, there were a couple of methods that did similar things, and Planetoid-I, where I stands for inductive, the inductive version of Planetoid, did something similar, but the difference is it only used the graph structure during training, whereas GraphSAGE and GCN and other spatial methods actually use the graph structure during inference time and that makes them different. And also, GraphSAGE in a way generalized the GCN algorithm to an inductive setting, and so they say here the original GCN algorithm is designed for semi-supervised learning in a transductive setting and the exact algorithm requires that the full graph Laplacian is known during training. And yeah, a simple variant of our algorithm can be viewed as an extension of the GCN to the inductive setting, and we'll see that in a couple of minutes. And this is the pseudocode for the GraphSAGE algorithm. If you're familiar with GCN and GAT, they have a really similar structure. Let me try and go through the pseudocode and make some sense out of it. So first the notation. So they have a graph G that contains the sets of nodes and edges. They have initial node features, which are problem specific. Maybe if we have a citation network like this, maybe this paper cited these three ones. So these node features would maybe correspond to: you take the abstract of the paper, you apply word2vec to every single word, you do some bag of words and so you basically average them out, and you maybe add some additional information like the node degree. So this one has three, these ones have one, etc. You just append the node degree here and you append the abstract information here and that would be a particular node feature. So those are these x's, the initial feature vectors.
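Here is a rough sketch of the hand-built node features just described (averaged word vectors from the abstract plus the node degree), assuming a toy word-embedding lookup and a tiny citation graph; none of these names or values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a pretrained word-embedding lookup and a tiny citation graph.
word_vectors = {w: rng.normal(size=4) for w in "graph neural nets citations".split()}
cites = {0: [1, 2, 3], 1: [], 2: [], 3: []}            # paper 0 cites papers 1, 2, 3
abstracts = {0: "graph neural nets", 1: "citations", 2: "neural nets", 3: "graph"}

def node_features(paper_id):
    # Average the abstract's word vectors (a crude bag-of-words embedding)
    # and append the node degree, mirroring the features described above.
    bag = np.mean([word_vectors[w] for w in abstracts[paper_id].split()], axis=0)
    degree = len(cites[paper_id])
    return np.concatenate([bag, [degree]])

print(node_features(0))   # 4 averaged word dims + 1 degree feature
```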
We have depth which is basically the number of layers in GraphSage and we have these weight matrices WKs which are trainable and so those will be trained during the unsupervised and supervised training process and I'll show those in a couple of minutes. We have non-linearities, sigma and these are important differentiable aggregated functions because they are also trainable. So WKs and these functions are something that we'll want to train and finally we have this fence notation for neighborhood function. This is basically the power set of V. So this is basically just a fancy way of saying maybe node one, so this is maybe node one, two, three, four. The whole power set would have 16 elements so the empty set you'd have the particular nodes and have the tuples, you have the triples and finally you have the every single node and basically one particular mapping could be one, maps two, two, three, four and that's the particular scenario we have here. So one gets these neighbors, these are the nodes that are neighbors of one. Okay and let's see how the algorithm itself functions. So basically we start by initializing these representations with initial node features for every single node in the graph and later I'll show you the the mini batch version they have and that's really important because they want to use this algorithm on huge graphs like maybe having which the graphs which have like billions of nodes so you don't want to have to iterate over every single node in the graph, you want to have just iterate over subsets but the idea is very similar. So we iterate over every single layer so Ks, so we iterate over different graph sage layers and for every node in the in the graph we do the following. We basically take the neighborhoods so in this particular case for node this would be maybe node v and these are the neighbors and we just aggregate those representations. So now aggregate is can be a bunch of things and we'll get to that in a moment but it's basically either mean maybe you can use LSTMs, you can use max pooling, a bunch of different stuff but basically you have a way to somehow take these representations and combine them, aggregate them. Once you have that you just concatenate the current representation so the current nodes representation so we are currently at v and this is v so we take its representation and we concatenate it with the aggregated representation and we just do a feed forward layer and that's it. There is one additional detail they just do this normalization step which basically makes these vectors unit norm which makes them lie on the unit hypersphere and now hypersphere is just a fancy name of saying like sphere in higher dimensions so basically if you have if the if the number of of dimensions in these representations was three we would basically have a 3d sphere and these vectors will lie somewhere on the on that sphere and we just repeat that over a couple of layers and that's it. We get the final representations which are denoted as zv. So now once we have these representations how do we train graph sage and it turns out they used two approaches we can either train graph sage in an unsupervised manner. 
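Here is a compact NumPy sketch of the layer update walked through above: aggregate the neighbors, concatenate with the node's own representation, apply a linear layer and a non-linearity, then normalize to unit norm. It uses a mean aggregator and full neighborhoods for brevity, whereas the paper samples fixed-size neighborhoods; all sizes and weight matrices are toy stand-ins for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

neigh = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # toy graph
d, K = 6, 2
h = rng.normal(size=(4, d))                            # h^0 = input node features
Ws = [rng.normal(size=(2 * d, d)) for _ in range(K)]   # one stand-in W_k per layer

for k in range(K):                                     # loop over GraphSAGE layers
    h_next = np.empty_like(h)
    for v, nbrs in neigh.items():
        agg = h[nbrs].mean(axis=0)                     # AGGREGATE_k (mean aggregator)
        z = np.concatenate([h[v], agg]) @ Ws[k]        # CONCAT, then linear W_k
        z = np.maximum(z, 0.0)                         # sigma = ReLU
        h_next[v] = z / (np.linalg.norm(z) + 1e-12)    # unit-norm normalization step
    h = h_next

z_final = h                                            # z_v, the final embeddings
print(z_final.shape)                                   # (4, 6)
```

In the real algorithm the inner loop would only visit a uniform sample of each neighborhood (25 neighbors in the first layer, 10 in the second, as discussed below).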
Let me just yeah here so we can either train it like this so basically if you're familiar with graph embedding methods which I mentioned we we want to make sure that these so that the the the nodes that are close each other that are connected have similar embeddings similar representations and on the other hand we want to to have these nodes which are not connected in the graph have very dissimilar representations and I'll try and help you out to to to let's just parse this equation so basically sigma here is nothing but a sigmoid function which looks like this it basically we have zero five here and in the limit it approaches one so this is sigmoid's limit and so basically here once you do a dot product between two node representations so let's say we have a graph we have a graph like this and maybe not all of the nodes are directly connected and let's take and let's take this double here so because they are connected we want their representations after we after we do a couple of iterations through the graph sage we get the final representations and we want those representations to be as close to each other because once they are really close so once they once those vectors are really close this value gets really big and sigmoid from a big value gets closer to one and log of one equals zero so that means we are minimizing the loss function and that's something we want to do and then we have this part of the equation where we do a similar thing so we take maybe we take this node this is the negative sample and we want this two nodes to be very dissimilar so we we what we do is we want to we want to make them as dissimilar because once they are really dissimilar they'll get really negative and because of the minus here they will be really positive and that means the loss again gets pulled down to zero so and we have this q factor we just which is a hyperparameter and it determines how important it is to to have these minimized down to zero so that's the how the the unsupervised method and learning procedure for graph sage works and they see here a generator from futures contained within a node's local neighborhood rather than training a unique embedding for each node so the difference between graph embedding graph embedding methods and graph sage is that we don't have a embedding table here so we are not training those fixed tables which are not prone to being later used in inductive setting we are training those wk matrices and we're training those aggregate functions which we'll see what exactly they are in graph sage in a minute graph sages can also be trained and in a supervised manner and for example if you have a classification task is just you just figure out the representations you do pretty much cross entropy and you train those wks and those aggregation functions so i won't get into much details there an important thing about graph sage is that once we have so if we have a node and we have some huge neighborhood maybe some of these networks have like maybe over a thousand neighborhood nodes what they do is they never in contrast to to get and in contrast to gcns they never sample they never try and aggregate all of the neighborhoods they just uh sub sample maybe they use in particular they used 25 on the first layer neighbors and they used 10 neighbors in the second layer of graph sage and what that means is in the first layer of graph sage you basically take only a uniformly sample and take 25 neighbors and you aggregate those representations only and in the second layer you'll 
just take 10, and that means this node here will potentially see up to 250 different nodes. Okay, where was I, and they mention that here: so we're using an overloaded definition, we define the neighborhood as a fixed-size uniform draw from the set of those edges, so from all of these neighbors, from the full neighborhood, and we draw different uniform samples at each iteration. So that means this node will, for every single layer of GraphSAGE, have different neighborhoods picked up, so it's stochastic. And yeah, they mention here also that they usually keep the product S1 times S2 less than or equal to 500, just to have this trade-off between the performance and the efficiency of this algorithm. So now let's jump to those aggregator architectures, and they are using basically three different architectures. So one is the mean aggregator, so that means once you sample some, maybe 25, neighbors, you just do an element-wise mean and then you can concatenate that representation with this node's feature vector and you do the feed-forward layer, right? So that's one way you can aggregate and that's pretty much the simplest way. The second way they propose is similar to GCN, so they ditch the concatenation altogether and they just take the sampled neighbors' representations, they take this node's representation, and they do a mean over all of those, they don't have any concatenation. So that means this particular aggregation, this method, they call it GraphSAGE-GCN because it's similar to GCN, other than the normalization constants not being the same, and they mention that here: so this differs from Kipf's, Kipf is the author of GCN, exact equation by a minor normalization constant. So if you remember, GCN had this 1 over square root of di dj, which are basically the degrees of these nodes, so this node has some degree and maybe one of these neighbors has some degree, and once you combine those two you normalize them with this constant, whereas here they don't do that. So yeah, they mention here: an important distinction between this convolutional aggregator and our other proposed aggregators is that it does not perform the concatenation operation in line five of algorithm 1. So that means here, if you remember, they just don't do this concat part, they just aggregate all of these together, that's the difference. Okay, so that's the mean aggregation function. Now they also tried and used LSTMs, and the problem with LSTMs is that they are inherently sequential and ordered, so that means they had to find a way to use an LSTM as a symmetric aggregator, and what they did is they just applied random permutations to the node's neighbors. So once you have the neighborhood you just do some random permutation and then you find the aggregation doing the following: so basically, if you're familiar with LSTMs, and if you're not I'll just link a really good blog from Christopher Olah which would help you understand GRUs and LSTMs, basically the LSTM is initialized with some random h0 state and you would input the first node representation, I'll call it maybe v1, whatever, and then you'd have some new state here, and then you'd input the second neighbor from the current permutation, and anyways you end up with a final
aggregated representation that combined all of the previous uh node uh features and that's how the lstm aggregator works so those parameters inside the lstm would be trained together with those wk matrices and those are those are all of our learnable parameters finally they have this max pooling aggregator function and what they do here is they just take the neighborhood again so they take those representations they take some subset of those representations and they just apply a feed-forward neural network here and so once we have those transformed features they would just do an element-wise max pooling so basically this approach is inspired by this point nets paper and in a nutshell the point nets paper showed that this max pooling worked really well for them even better than doing mean pooling or doing attention based poolings so in a nutshell what this paper did it it works with point clouds and it starts with maybe endpoints and every single point has three features and those features are just the coordinates x y z and once you do some transforms and mlp transformations you end up with this n where n is number of points and you have 1024 features and they just did so they just do element-wise max pooling so that means whatever the max element in this column is that will be put here and then you do the same for every single column and as i mentioned they they tried also doing the mean of this column and they tried doing the attention so the attention what the attention does is you basically find coefficients for all of these for all of these vectors rows so you have maybe alpha one for the first row you'll calculate all of those alphas you'll finally have alpha n and now what we would do is you would basically just multiply these these corresponding row vectors by alphas and that's how you'd aggregate and you'll get the global feature finally so that's pretty much everything you needed to know about graph sage aside from that detail about how do they do the mini batch part and i'll explain that in a couple minutes but first let's let's just go through the experiments and as i previously said the main like advantage of this paper is that they are really good at inductive setting and they use this ppi protein protein traction data set and there they have these entirely unseen graphs which is something that's really challenging for all of those graph method graph embedding methods and for all those previous methods that were inherently transductive um methods that were inherently transductive as i already mentioned they used sampling sizes of 25 neighbors in the first layer 10 in the second layer and one more detail is that in the multi graph setting they can apply deep walk since since because of this fact so the so in the multi graph setting we cannot apply deep walk since the embedding space is generated by running the deep walk algorithm on different disjoint graphs can be arbitrarily rotated with respect to each other and i'll touch on that a bit later but just for now just keep that in hand that those graph embedding methods are really hard to to to to generalize to inductive setting okay let's see the results they had four baselines the random classifier they just use the raw features so those are those i mentioned those maybe you have the abstract here you have uh degree information and that's your feature vector and here they just use those raw features and maybe just do some mlp on top of those and that's how they learn to classify without using any graph structure information 
whatsoever um on the other hand they had deep walk which was trained a bit smarter basically it encodes to this graph structure as well and here is a combination of doing both of those and then they have uh four different graph sage algorithms depending on the aggregation function they use so we have gcm we have mean lstm and pool and here we can see it pretty much steadily increases here and also they have the unsupervised trained graph sage and they have the supervised trained graph sage and you can also you can also expect that this will increase and it did so what we can see here is that basically yeah they got better results than all of those baselines and now there is the best aggregator functions seem to be lstm and pooling whereas pooling is a lot cheaper so they finally gave it a slight edge and they they said that the pooling had the best performance altogether it's probably the best overall because it has um that the best trade-off between the performance and the efficiency uh yeah here they just in this chart they just um showed that these graph embedding methods such as deep walk have a really high cost to do the inference on on the full test set because they have to recalculate to retrain uh those embeddings and that's really computationally costly and this is log scale here so that's thousand seconds that's much more than what graph sage takes they also have a small chart here where they plot how the performance so the f1 score and how the runtime uh increase as we increase the the sample size so if you remember we had we had 25 and we used 10 and here they just experimented they went all the way on up to 75 neighbors and we can see that the performance does increase as well as the runtime so uh empirically they just figured out that some smaller neighborhood sizes such as 25 and 10 are performing the best overall okay that was pretty much it for this main part of the paper uh yeah they mentioned here that so graph sage lstm is significantly slower than graph sage pool so approximately twice so perhaps giving the pooling based aggregator a slight edge overall um one more interesting theorem here and they have a whole proof for this thing that's like four pages long and i won't get into the mathematics in this video but i just want to briefly explain what they said here um what they say here is that graph sage even though it heavily relies on features and like aggregating those features and stuff it understands the graph structure really well so how they proved that is the following so they they showed that um the final representation so zvs if you remember from the pseudo algorithm uh can be made like arbitrarily close to these cvs which are basically cluster something called clustering coefficients and um and i'll now explain what those are so this equation again tells us that we can make the representations arbitrarily close so the whatever epsilon you give me how no matter how small like maybe you give me 0.01 i can make the zv close enough to the real um clustering coefficient so what's the clustering coefficient basically if you have some small graph here something like this and let's say this node one is connected to these three nodes basically um the clustering coefficient let's call it c for node one is if these are all connected together like this so they form a clique then the clustering coefficient equals one because the neighborhood nodes have the the highest connectivity pattern that's possible and that's a fully connected graph so if we were to just disconnect 
maybe these two edges would be left with this one and the clustering coefficient would drop to one third so basically what they state here is that the graph sage can learn all of these uh so this is like a structural information in the graph and they can learn it to an arbitrary precision okay uh let me wrap up this paper explaining the mini batch mini batch part and something related to graph embedding methods which will be interesting basically the main part the only difference is this so we won't be iterating over everything the only difference is this so we won't be iterating over every single node in the graph because the graph can be really huge uh like maybe a couple of billion nodes so basically they calculate these b sets so uh if we have let's assume we have two layers in graph sage that means they'll be calculating set b zero set b one and set b two and let me try and illustrate what what those are and how they look like so basically if we have some huge graph that has a couple of billion nodes maybe we're only interested in representations of a small subset of nodes which we'll call b2 now because of the way how graph sage works it just aggregates the neighborhood of every single node in a particular layer that means we'll need one hop neighborhood of the nodes in b2 and let's call that b1 and now because we have two layers we'll also need the one hop neighborhood of b1 which is basically b0 and that will be a superset of b1 which is itself a superset of b2 so this would this this is what the the algorithm would um put in these b sets so let me just zoom in a little bit on this one so we start with the so these are the target nodes that we want to calculate the representation for and so we initialize b2 with that one and then we start going towards one we we we initially initialized b1 b1 we initialize it with b2 nodes and then we iterate over b2's nodes so over these nodes and we just add up we just do a union of all of the neighbors of all of the nodes in b2 so we'll go through nodes and maybe some of these nodes we'll have neighbors that are here and thus we'll we'll slowly build up b1 and then we'll slowly build up b0 and then the main loop what we do is we first initialize the initial representations with features feature vectors coming from b0 set so we'll have we'll only need to take we'll only need to think about b0 vectors we don't have to care about any of these nodes so we have we gain efficiency and so now the algorithm what it does we start with the with b1 so we start with this with these nodes so we iterate over those nodes and we do aggregations some of them will belong to b0 and hopefully luckily we already we have already have those in memory and we have those feature vectors so we can calculate all of the representations in b1 and once we have b1 in the next cycle once k gets to 2 we'll be iterating over nodes in b2 and we'll be building up representations and we'll luckily have all of these representations calculated from the last iteration so we can finally have all of the representations that we are interested in in b2 calculated and put into zv zus and that's pretty much it that's how the mini-batch algorithm works and hopefully that was clear enough if you have any questions please comment down in the comment section i'll try and answer all of them and yeah one interesting thing is they sample with replacement in cases where the sample size is larger than the nodes degree so maybe we have a node and it only has maybe three neighbors like this but if you 
remember s1 was 25 and s2 was 10 which means we'll have to repeat some of these features before we call the aggregate function so maybe we'll have eight of these we'll have eight of these and we'll have nine of these and then we'll call the aggregate function so that means sampling with replacement so yeah that's a small detail and this is one more interesting detail and as you can see graph sage is all about these small details which are making the implementation a lot more efficient and it can later be used on huge graphs and here what they say is basically they do a pre-processing step on these huge graphs and in case some of the nodes has maybe thousand edges they'll just sub sample up to 128 of them so every single node will have its degree less than or equal to 128 and they say here due to heavy tailed nature of degree distributions we down sample the edges in all graphs before feeding them into the graph sage algorithm in particular we sub sample edges so that no node has a degree larger than 128 the final thing i want to mention is the part that i mentioned in the beginning and that's these what's the problem once you're once you're dealing with these embedding tables so basically what you're doing by doing those trying to make some vectors similar and some dissimilar you're basically doing implicit factorization so basically you're trying to calculate that z matrix and this is number of nodes this is the hidden dimension d and you're if you do transpose this z matrix into zt and you just do multiplication what you end up is this matrix m which basically contains all of these similarities between vectors so you have you have a you have a how similar vector one is with vector one and you'll have how similar vector one is with vector two etc so you have a huge m by n matrix which basically tells you how similar or dissimilar those embedding feature vectors are and you're basically by doing those random walks and training those graph embedding methods you're you're implicitly trying to find this you find this implicit m matrix which contains the random walk statistics you know the problem is that as you can see here you can basically have a whole family of these z matrices which basically add up to m and that means those those embedding vectors can rotate they can the whole space can rotate and you'd still have the same matrix m so now what why is that the problem so the problem is once you try to add new nodes to your graph as we saw in the beginning so basically if you try and you have graph g1 you have embedding table for g1 and you try to add maybe two new nodes and what happens is you'll have to update the table and if you just do the following if you just if you maybe so basically you had this table previously and now you're trying to add two new vectors but the thing is you train a classifier for those vectors here so you have some some classification had here which was trained for those and once you add up these two new random vectors and if you try to retrain everything then these won't be working correctly with this classifier anymore and you'll mess up everything you you all of the training will be pretty much in vain and so yeah they mentioned here moreover if we update all embeddings during training not just for the new nodes as suggested by deep walk then the embedding space can arbitrarily rotate compared to the embedding space that we train our classifier on which only further exasperates the problem so basically once you add again re-dating once you add the nodes if 
you just do this naive retraining you'll mess up the classifier because all of these will start rotating because as we already saw all of those families are still add up to this matrix m and they suggested here a couple of reasonable approaches so some reasonable approaches to alleviate this issue of statistical drift are to not update the already trained embeddings when optimizing the embeddings for new test nodes so that means we pretty much freeze up all of these so we freeze them up we don't change those and second to only keep existing nodes as context nodes in the sample random box i.e. to ensure that every dot product in the skip gram objective is a product of an already trained node and a new test node so that means once you're doing those random walks you only take context nodes from these that are already trained so you you can take for example if this is a new node let's call it node number three and this is also a new node and even though this node three is on this random walk because it's a new node we won't be using it as a context node if you're familiar with how the word2vec and these graph embedding methods work you'll understand what i'm talking about and so basically the second approach the the the second suggestion they have is only use nodes that are already trained as the context there was a lot of information packed in this video i think it's worth just comparing graph sage with get and with gcn because all of the three are spatial methods and get and gcn are the most cited papers in the gnn literature so the structure is really similar if you take a look at it so we have a we have a node and we have a neighborhood and what gcn and what get do is the following they just aggregate all of the nodes aggregate all of the node features and before before they add them up they just scale them with a certain coefficients so basically gcn takes this one over square root di times tj which are the degrees of those nodes and so we we take both the neighborhood features as well as the current nodes feature and we just scale them with this that's gcm and what get does instead it just computes dynamically these alpha coefficients which are just attention coefficients and then multiplies those nodes with these coefficients that's get once you multiply them you just add them up and then you do a simple projection followed by a simple non-linearity such as relu so you have this let's call it aggregated features whatever and then we just take those we project them and we do non-linearity such as relu on the other side what graph sage does is it also it it doesn't take the full neighborhood it takes a sub sample uniform some sample of the neighborhood and it again aggregates those features in bunch of different ways some of them you saw like mean you can do lstm aggregation whatever and finally you again do a projection and followed by a non-linearity so the pattern is really similar graph sage is a bit more efficient because it's sub samples the neighborhood and it experimented with different aggregation function and also some of the other details you heard during this video so that was it if you found this video useful i'd really appreciate you leaving some feedback in the form of a comment in the comment section that will help me like twofold first thing is i'll get much better by reading your feedback what i'm doing right what i'm doing wrong and secondly you'll boost these videos like because of the the way youtube algorithm functions so anyways if you found this video useful 
consider subscribing and consider hitting the bell icon if you want to get notified for every single video i make and yep until next time keep learning deep
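To summarize the closing comparison in code, here is a rough sketch of the three aggregation styles for a single node: GCN's fixed 1/sqrt(di*dj) coefficients, GAT's dynamically computed attention coefficients (random scores stand in for the learned attention here), and GraphSAGE's sampled-mean-plus-concatenation. In all three cases a shared linear projection and non-linearity would follow; everything below is toy data for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

feats = rng.normal(size=(4, 6))                    # toy node features
nbrs = [1, 2, 3]                                   # neighbors of node 0
deg = {0: 3, 1: 1, 2: 1, 3: 1}

# GCN-style: fixed coefficients 1 / sqrt(d_i * d_j) over self + neighbors.
gcn = sum(feats[u] / np.sqrt(deg[0] * deg[u]) for u in [0] + nbrs)

# GAT-style: coefficients computed dynamically by attention; random scores
# stand in for the learned attention here, softmax-normalized.
scores = rng.normal(size=len(nbrs) + 1)
alpha = np.exp(scores) / np.exp(scores).sum()
gat = sum(a * feats[u] for a, u in zip(alpha, [0] + nbrs))

# GraphSAGE-style: uniform sample of the neighborhood, mean-aggregate,
# then concatenate with the node's own features.
sampled = rng.choice(nbrs, size=2, replace=True)
sage = np.concatenate([feats[0], feats[sampled].mean(axis=0)])

print(gcn.shape, gat.shape, sage.shape)            # (6,) (6,) (12,)
```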
[{"start": 0.0, "end": 5.76, "text": " Hi, in this video I'm continuing with Graph Neural Network series and I'll be covering this paper called"}, {"start": 5.76, "end": 12.0, "text": " Inductive Representation Learning on Large Graphs or also known as GraphSAGE."}, {"start": 12.0, "end": 18.8, "text": " So basically the main proposition of GraphSAGE is that they developed a method that's inherently inductive"}, {"start": 18.8, "end": 25.6, "text": " and can work on huge graphs. So the later paper that came after GraphSAGE is PinSAGE and I'll be covering that one"}, {"start": 25.6, "end": 32.800000000000004, "text": " in one of the next videos and this is basically a precursor to that paper which is really awesome and has an"}, {"start": 32.800000000000004, "end": 40.0, "text": " awesome implication as a recommender system in Pinterest. So basically they say here,"}, {"start": 40.0, "end": 45.6, "text": " most existing approaches require that all nodes in the graph are present during training of the embeddings."}, {"start": 45.6, "end": 51.2, "text": " These previous approaches are inherently transductive and do not naturally generalize to unseen nodes."}, {"start": 51.2, "end": 57.6, "text": " So let me do just a quick recap of how those methods basically work. So what you have is,"}, {"start": 57.6, "end": 67.6, "text": " you have a Static Graph. So let's draw some toy graph here and I'll be using this one throughout the series."}, {"start": 67.6, "end": 74.80000000000001, "text": " And basically what you're trying to do is you're trying to train the embeddings for all of these nodes."}, {"start": 74.8, "end": 81.2, "text": " So basically an embedding table and this one has dimension n. So this vertical dimension is n,"}, {"start": 81.2, "end": 91.6, "text": " where n is the number of nodes. So basically this. And this dimension is d, which is a hyperparameter"}, {"start": 92.16, "end": 100.0, "text": " and is usually something lower dimensional than the input feature vectors. So all of the nodes have"}, {"start": 100.0, "end": 107.04, "text": " features associated with them and we want to find a lower dimensionality representation of those nodes,"}, {"start": 107.04, "end": 112.24, "text": " which would encode both the features and the neighborhood information. Now the problem with"}, {"start": 112.24, "end": 117.6, "text": " these methods is that, so they would basically what they did is, so if you have for example"}, {"start": 118.24000000000001, "end": 124.64, "text": " this node and this node here, you want to make those two embedding vectors as similar to each other."}, {"start": 124.64, "end": 130.32, "text": " So if we just represent those vectors in 2D space like this, we want to have one and two"}, {"start": 131.36, "end": 136.96, "text": " close to each other. And on the other hand, if we have some node somewhere further away,"}, {"start": 136.96, "end": 143.68, "text": " connected by some intermediate node with one, and let's call it maybe three, and maybe it's"}, {"start": 143.68, "end": 150.24, "text": " somewhere here in the table. So we want to make it be dissimilar to those two vectors. Okay,"}, {"start": 150.24, "end": 156.08, "text": " so what's the problem with this approach? So the problem is once you train this embedding table,"}, {"start": 156.08, "end": 163.04000000000002, "text": " it's really hard to generalize to new unseen nodes and to new graphs. So let me"}, {"start": 163.84, "end": 169.60000000000002, "text": " draw it like this. 
So basically let's say this thing, we pick it up into this blob and we call"}, {"start": 169.60000000000002, "end": 176.48000000000002, "text": " it G1. And if you want to add new node or maybe even two new nodes, you basically have the"}, {"start": 176.48, "end": 182.32, "text": " following. You have this table which was previously trained and has meaningful vectors and you just"}, {"start": 182.32, "end": 190.16, "text": " append two new vectors here which are initialized randomly because we still don't know anything"}, {"start": 190.16, "end": 198.39999999999998, "text": " about those and we have to train those. So now there are a lot of problems with doing this and"}, {"start": 198.39999999999998, "end": 203.12, "text": " I'll address some of them later, but it's basically even harder once you want to generalize to"}, {"start": 203.12, "end": 211.04, "text": " unseen graphs because this graph too here has totally different random walk statistics than this"}, {"start": 211.76, "end": 221.68, "text": " graph here and so this embedding table won't make any sense if you try and just use it here."}, {"start": 222.48000000000002, "end": 229.52, "text": " So you'll basically have to retrain the whole thing or find some way to kind of align nodes"}, {"start": 229.52, "end": 237.12, "text": " between these two graphs and in any case it's cumbersome and we'll now see what GraphSage did."}, {"start": 237.12, "end": 242.8, "text": " So basically they said here instead of training individual embeddings for each node,"}, {"start": 242.8, "end": 247.44, "text": " we'll learn a function that generates embeddings by sampling and aggregating"}, {"start": 248.32000000000002, "end": 255.12, "text": " features from a node's local neighborhood. And GraphSage is not super different from"}, {"start": 255.12, "end": 262.08, "text": " GET or GCN if you looked at my previous videos, but there are some differences which make it"}, {"start": 262.08, "end": 269.52, "text": " more like generalizable to large graphs and more efficient than GET and GCN and now I'll show you"}, {"start": 269.52, "end": 278.56, "text": " why. So by the way, GraphSage, the Sage part is just an acronym from sample and aggregate."}, {"start": 278.56, "end": 283.92, "text": " And let's see those two steps. I'll just first start high level and then I'll have a pseudocode"}, {"start": 283.92, "end": 289.28000000000003, "text": " and I'll explain in detail how the algorithm works. But basically what we do is instead of taking the"}, {"start": 289.28000000000003, "end": 297.92, "text": " whole neighborhood as we did with GCNs and GETs, we basically just do a uniform sample from the"}, {"start": 297.92, "end": 305.84000000000003, "text": " neighborhood and then we repeat that through a couple of GraphSage layers and we end up with"}, {"start": 305.84, "end": 313.35999999999996, "text": " this. So basically this red node in the middle will end up having some parts, like some mixture"}, {"start": 313.35999999999996, "end": 322.15999999999997, "text": " of representations of these orange nodes. And basically these aggregate feature, aggregate"}, {"start": 322.15999999999997, "end": 331.59999999999997, "text": " functions are trainable and they're general here. So the paper itself suggested a couple of"}, {"start": 331.6, "end": 337.28000000000003, "text": " different variants and we'll see those in a minute. 
But if you remember, so basically what GCNs did,"}, {"start": 337.28000000000003, "end": 347.44, "text": " they would just simply do add these up, but they would be normalized by 1 over square root of"}, {"start": 347.44, "end": 359.04, "text": " di dj, where those are the degrees of corresponding nodes. And GET did this via attention, so"}, {"start": 359.04, "end": 364.64000000000004, "text": " it dynamically figured out these coefficients with all of these representations and then we just"}, {"start": 364.64000000000004, "end": 370.08000000000004, "text": " multiplied them and added them up. So that was just a high level overview and I'll slowly start"}, {"start": 370.08000000000004, "end": 378.88, "text": " digging into depth, but let's start with some related work. Basically before GraphSage, there"}, {"start": 378.88, "end": 384.8, "text": " was a couple of methods that did similar things and Planetoid inductive, i stands for inductive,"}, {"start": 384.8, "end": 390.64, "text": " version of Planetoid did something similar, but the difference is it only used the graph structure"}, {"start": 390.64, "end": 397.44, "text": " during training, whereas GraphSage and GCN and other spatial methods actually use the graph"}, {"start": 397.44, "end": 406.0, "text": " structure during the inference time and that makes them different. And also they later kind of,"}, {"start": 406.0, "end": 412.08000000000004, "text": " so GraphSage in a way generalized the GCN algorithm to an inductive setting and then"}, {"start": 412.08, "end": 417.12, "text": " GCN algorithm to an inductive setting and so they say here the original GCN algorithm is designed"}, {"start": 417.12, "end": 422.0, "text": " for semi-supervised learning in a transductive setting and the exact algorithm requires that the"}, {"start": 422.0, "end": 428.24, "text": " full graph laplation is known during training. And yeah, a simple variant of our algorithm can"}, {"start": 428.24, "end": 433.36, "text": " be viewed as an extension of the GCN to inductive setting and we'll see that in a couple of minutes."}, {"start": 434.47999999999996, "end": 441.59999999999997, "text": " And this is the pseudocode for the GraphSage algorithm. It's, if you're familiar with GCN"}, {"start": 441.6, "end": 447.20000000000005, "text": " and GET, they have a really similar structure. Let me try and go through the pseudocode and"}, {"start": 447.20000000000005, "end": 454.96000000000004, "text": " make some sense out of it. So first the notation. So they have a graph G that contains these sets"}, {"start": 454.96000000000004, "end": 463.6, "text": " of nodes and edges. They have initial node features and which are like problem specific. Maybe"}, {"start": 463.6, "end": 470.32000000000005, "text": " if we have a citation network like this, maybe those node features would be, let me connect"}, {"start": 470.32, "end": 476.8, "text": " them somehow, maybe this paper cited these three ones. So these node features would maybe correspond"}, {"start": 476.8, "end": 484.15999999999997, "text": " to, you take the abstract of the paper, you just apply word2back to every single word, you just do"}, {"start": 484.15999999999997, "end": 489.6, "text": " some bag of words and so you basically average them out and you maybe add some additional"}, {"start": 489.6, "end": 496.0, "text": " information like the node degree. So this one has three, these ones have one, etc. 
You just append"}, {"start": 496.0, "end": 501.36, "text": " the node degree here and you append the abstract information here and that would be a particular"}, {"start": 501.36, "end": 507.44, "text": " node feature. So those are these axes. We have depth which is basically the number of layers in"}, {"start": 507.44, "end": 514.32, "text": " GraphSage and we have these weight matrices WKs which are trainable and so those will be trained"}, {"start": 514.32, "end": 520.0, "text": " during the unsupervised and supervised training process and I'll show those in a couple of minutes."}, {"start": 520.0, "end": 526.16, "text": " We have non-linearities, sigma and these are important differentiable aggregated functions"}, {"start": 526.16, "end": 532.24, "text": " because they are also trainable. So WKs and these functions are something that we'll want to train"}, {"start": 532.24, "end": 538.24, "text": " and finally we have this fence notation for neighborhood function. This is basically the"}, {"start": 538.24, "end": 548.0, "text": " power set of V. So this is basically just a fancy way of saying maybe node one, so this is maybe"}, {"start": 548.0, "end": 555.68, "text": " node one, two, three, four. The whole power set would have 16 elements so the empty set you'd have"}, {"start": 555.68, "end": 562.4, "text": " the particular nodes and have the tuples, you have the triples and finally you have the every single"}, {"start": 562.4, "end": 573.84, "text": " node and basically one particular mapping could be one, maps two, two, three, four and that's the"}, {"start": 573.84, "end": 580.64, "text": " particular scenario we have here. So one gets these neighbors, these are the nodes that are"}, {"start": 580.64, "end": 588.4, "text": " neighbors of one. Okay and let's see how the algorithm itself functions. So basically we start"}, {"start": 588.4, "end": 595.44, "text": " by initializing these representations with initial node features for every single node in the graph"}, {"start": 596.24, "end": 602.24, "text": " and later I'll show you the the mini batch version they have and that's really important because"}, {"start": 602.24, "end": 609.12, "text": " they want to use this algorithm on huge graphs like maybe having which the graphs which have like"}, {"start": 609.12, "end": 613.92, "text": " billions of nodes so you don't want to have to iterate over every single node in the graph,"}, {"start": 613.92, "end": 620.72, "text": " you want to have just iterate over subsets but the idea is very similar. So we iterate over every"}, {"start": 620.72, "end": 628.16, "text": " single layer so Ks, so we iterate over different graph sage layers and for every node in the"}, {"start": 628.16, "end": 635.76, "text": " in the graph we do the following. We basically take the neighborhoods so in this particular case"}, {"start": 636.8, "end": 643.1999999999999, "text": " for node this would be maybe node v and these are the neighbors and we just aggregate those"}, {"start": 643.1999999999999, "end": 650.24, "text": " representations. 
So now aggregate is can be a bunch of things and we'll get to that in a moment but"}, {"start": 650.24, "end": 656.0799999999999, "text": " it's basically either mean maybe you can use LSTMs, you can use max pooling, a bunch of different stuff"}, {"start": 656.08, "end": 664.48, "text": " but basically you have a way to somehow take these representations and combine them, aggregate them."}, {"start": 664.48, "end": 669.0400000000001, "text": " Once you have that you just concatenate the current representation so the current nodes"}, {"start": 669.0400000000001, "end": 676.0, "text": " representation so we are currently at v and this is v so we take its representation and we concatenate"}, {"start": 676.0, "end": 684.24, "text": " it with the aggregated representation and we just do a feed forward layer and that's it. There is one"}, {"start": 684.24, "end": 692.48, "text": " additional detail they just do this normalization step which basically makes these vectors unit"}, {"start": 693.04, "end": 699.6, "text": " norm which makes them lie on the unit hypersphere and now hypersphere is just a fancy name of saying"}, {"start": 699.6, "end": 705.76, "text": " like sphere in higher dimensions so basically if you have if the if the number of of dimensions"}, {"start": 705.76, "end": 710.64, "text": " in these representations was three we would basically have a 3d sphere and these vectors"}, {"start": 710.64, "end": 718.08, "text": " will lie somewhere on the on that sphere and we just repeat that over a couple of layers and that's"}, {"start": 718.08, "end": 726.64, "text": " it. We get the final representations which are denoted as zv. So now once we have these"}, {"start": 726.64, "end": 733.52, "text": " representations how do we train graph sage and it turns out they used two approaches we can either"}, {"start": 733.52, "end": 741.36, "text": " train graph sage in an unsupervised manner. 
Let me just yeah here so we can either train it like"}, {"start": 741.36, "end": 746.88, "text": " this so basically if you're familiar with graph embedding methods which I mentioned we we want to"}, {"start": 746.88, "end": 753.84, "text": " make sure that these so that the the the nodes that are close each other that are connected"}, {"start": 754.48, "end": 760.48, "text": " have similar embeddings similar representations and on the other hand we want to to have these"}, {"start": 760.48, "end": 766.4, "text": " nodes which are not connected in the graph have very dissimilar representations and I'll try and"}, {"start": 766.4, "end": 773.6800000000001, "text": " help you out to to to let's just parse this equation so basically sigma here is nothing"}, {"start": 773.6800000000001, "end": 780.16, "text": " but a sigmoid function which looks like this it basically we have zero five here"}, {"start": 780.16, "end": 791.68, "text": " and in the limit it approaches one so this is sigmoid's limit and so basically here"}, {"start": 791.68, "end": 797.68, "text": " once you do a dot product between two node representations so let's say we have a graph"}, {"start": 798.48, "end": 806.56, "text": " we have a graph like this and maybe not all of the nodes are directly connected and let's take"}, {"start": 806.56, "end": 813.76, "text": " and let's take this double here so because they are connected we want their representations"}, {"start": 813.76, "end": 818.0, "text": " after we after we do a couple of iterations through the graph sage we get the final"}, {"start": 818.0, "end": 823.1999999999999, "text": " representations and we want those representations to be as close to each other because once they are"}, {"start": 823.1999999999999, "end": 829.04, "text": " really close so once they once those vectors are really close this value gets really big"}, {"start": 829.04, "end": 836.56, "text": " and sigmoid from a big value gets closer to one and log of one equals zero so that means"}, {"start": 836.56, "end": 840.64, "text": " we are minimizing the loss function and that's something we want to do and then we have this"}, {"start": 840.64, "end": 848.3199999999999, "text": " part of the equation where we do a similar thing so we take maybe we take this node this is the"}, {"start": 848.3199999999999, "end": 857.68, "text": " negative sample and we want this two nodes to be very dissimilar so we we what we do is we want to"}, {"start": 857.68, "end": 862.7199999999999, "text": " we want to make them as dissimilar because once they are really dissimilar they'll get really"}, {"start": 862.7199999999999, "end": 870.0, "text": " negative and because of the minus here they will be really positive and that means the loss again"}, {"start": 870.0, "end": 878.8, "text": " gets pulled down to zero so and we have this q factor we just which is a hyperparameter and"}, {"start": 878.8, "end": 886.3199999999999, "text": " it determines how important it is to to have these minimized down to zero so that's the how the"}, {"start": 886.32, "end": 894.08, "text": " the unsupervised method and learning procedure for graph sage works and they see here a generator"}, {"start": 894.08, "end": 899.44, "text": " from futures contained within a node's local neighborhood rather than training a unique"}, {"start": 899.44, "end": 905.7600000000001, "text": " embedding for each node so the difference between graph embedding graph embedding methods and graph"}, {"start": 905.7600000000001, "end": 912.4000000000001, "text": " 
sage is that we don't have a embedding table here so we are not training those fixed tables which"}, {"start": 912.4, "end": 919.6, "text": " are not prone to being later used in inductive setting we are training those wk matrices and"}, {"start": 919.6, "end": 924.56, "text": " we're training those aggregate functions which we'll see what exactly they are in graph sage in a"}, {"start": 924.56, "end": 930.0, "text": " minute graph sages can also be trained and in a supervised manner and for example if you have a"}, {"start": 930.0, "end": 936.3199999999999, "text": " classification task is just you just figure out the representations you do pretty much cross entropy"}, {"start": 936.32, "end": 944.08, "text": " and you train those wks and those aggregation functions so i won't get into much details there"}, {"start": 944.08, "end": 949.84, "text": " an important thing about graph sage is that once we have so if we have a node and we have some huge"}, {"start": 949.84, "end": 955.84, "text": " neighborhood maybe some of these networks have like maybe over a thousand neighborhood nodes what"}, {"start": 955.84, "end": 963.9200000000001, "text": " they do is they never in contrast to to get and in contrast to gcns they never sample they never"}, {"start": 963.92, "end": 971.12, "text": " try and aggregate all of the neighborhoods they just uh sub sample maybe they use in particular"}, {"start": 971.12, "end": 981.1999999999999, "text": " they used 25 on the first layer neighbors and they used 10 neighbors in the second layer of graph sage"}, {"start": 981.1999999999999, "end": 989.4399999999999, "text": " and what that means is in the first layer of graph sage you basically take only a uniformly sample"}, {"start": 989.44, "end": 996.08, "text": " and take 25 neighbors and you aggregate those representations only and in the second layer you'll"}, {"start": 996.08, "end": 1006.08, "text": " just take 10 and that means uh you're basically um that means that um this node here will basically"}, {"start": 1006.08, "end": 1018.6400000000001, "text": " potentially see up to 250 different uh nodes okay where was i here and they mentioned that here so"}, {"start": 1018.64, "end": 1023.92, "text": " using we're using overloaded definition we we define the neighborhood as a fixed size uniform"}, {"start": 1023.92, "end": 1031.28, "text": " draw from uh from the set of of of those edges so from all of these neighbors from the full set"}, {"start": 1031.28, "end": 1035.92, "text": " from the full neighborhood and we draw different uniform samples at each iteration so that means"}, {"start": 1036.96, "end": 1043.28, "text": " this node will have uh for every single layer of graph sage will have different neighborhoods picked"}, {"start": 1043.28, "end": 1050.6399999999999, "text": " up so it's stochastic and uh yeah they mentioned here also so they usually use the product should"}, {"start": 1050.6399999999999, "end": 1058.32, "text": " be less than 500 so to to just have this trade-off between the performance and between the uh"}, {"start": 1058.32, "end": 1065.6, "text": " efficiency of this algorithm so now let's jump to those aggregator architectures and they are using"}, {"start": 1066.3999999999999, "end": 1072.32, "text": " basically three different architectures so one is the mean aggregator so that means once you take"}, {"start": 1072.32, "end": 1082.3999999999999, "text": " so so so once you have once you sample some maybe 25 neighbors you just do element wise mean and then"}, {"start": 
1082.3999999999999, "end": 1089.9199999999998, "text": " you can concatenate that representation with this nodes with this node feature vector and you do the"}, {"start": 1089.9199999999998, "end": 1097.04, "text": " forward forward uh feed forward layer right so that's that's one way how you can aggregate and"}, {"start": 1097.04, "end": 1104.08, "text": " that's the dumbest way pretty much uh the second way they propose is similar to gcm so they they"}, {"start": 1104.08, "end": 1110.56, "text": " ditch the the concatenation altogether and they just take they just take all of these so they take"}, {"start": 1110.56, "end": 1116.24, "text": " the neighbors the sample neighbors representations they take this representation and they do a mean"}, {"start": 1116.24, "end": 1123.36, "text": " over all of those they don't have any concatenation so that means this particular aggregation this this"}, {"start": 1123.36, "end": 1131.52, "text": " method they call it graph sage dash gcm because it's similar to gcn other than the the normalization"}, {"start": 1131.52, "end": 1136.8, "text": " constants are not the same and they mentioned that here so this differs from kipf kipf is the author"}, {"start": 1136.8, "end": 1143.6799999999998, "text": " of gcm exact equation by a minor normalization constant so if you remember gcn had this"}, {"start": 1145.04, "end": 1152.0, "text": " one over square root di dj which are basically degrees of of these nodes so this node has some"}, {"start": 1152.0, "end": 1157.92, "text": " degree and maybe one of these neighbors has some degree so once you combine those two you normalize"}, {"start": 1157.92, "end": 1164.88, "text": " them with this constant whereas here they don't do that so um yeah they mentioned here an important"}, {"start": 1164.88, "end": 1169.44, "text": " distinction between this convolutional aggregator and our other proposed aggregators is that it"}, {"start": 1169.44, "end": 1176.16, "text": " does not perform the concatenation operation in line five of algorithm one so that means here"}, {"start": 1176.16, "end": 1182.72, "text": " if you remember so they just uh don't do this concat part they just aggregate all of these together"}, {"start": 1183.52, "end": 1192.4, "text": " that's the difference whoops okay so that's the the mean aggregation function now they also tried"}, {"start": 1192.4, "end": 1198.4, "text": " and used lstms and now the problem with lstms is that they are inherently they are sequential"}, {"start": 1198.4, "end": 1207.2800000000002, "text": " and they they are they are ordered so that means basically they they had to find a way how to"}, {"start": 1207.8400000000001, "end": 1214.0, "text": " use lstm as a symmetric aggregator and what they did is they just applied random permutations"}, {"start": 1214.0, "end": 1219.2, "text": " to nodes neighbors so once you have the neighborhood you just do some random permutation"}, {"start": 1219.2, "end": 1224.8000000000002, "text": " and then you just find the aggregation doing the following so basically if you're familiar"}, {"start": 1224.8, "end": 1230.08, "text": " with lstms if you're not i'll just link a really good blog from christopher ola which would help"}, {"start": 1230.08, "end": 1236.24, "text": " you understand groose and lstms so basically it's initialized lstm is initialized with some random"}, {"start": 1237.04, "end": 1242.8, "text": " uh h0 state and you would input the first node representation i'll call it maybe"}, {"start": 1242.8, "end": 1254.24, "text": 
" v1 whatever and then you'd have some new state here and finally you'd output the second neighbor"}, {"start": 1254.24, "end": 1261.68, "text": " from the current permutation and anyways you you end up with a final aggregated representation"}, {"start": 1261.68, "end": 1270.1599999999999, "text": " that combined all of the previous uh node uh features and that's how the lstm aggregator works"}, {"start": 1270.16, "end": 1278.0800000000002, "text": " so those parameters inside the lstm would be trained together with those wk matrices"}, {"start": 1279.28, "end": 1286.48, "text": " and those are those are all of our learnable parameters finally they have this"}, {"start": 1286.48, "end": 1294.64, "text": " max pooling aggregator function and what they do here is they just take the neighborhood again"}, {"start": 1294.64, "end": 1302.16, "text": " so they take those representations they take some subset of those representations and they just apply"}, {"start": 1302.16, "end": 1309.3600000000001, "text": " a feed-forward neural network here and so once we have those transformed features they would just do"}, {"start": 1309.3600000000001, "end": 1318.48, "text": " an element-wise max pooling so basically this approach is inspired by this point nets paper"}, {"start": 1318.48, "end": 1325.76, "text": " and in a nutshell the point nets paper showed that this max pooling worked really well for them"}, {"start": 1325.76, "end": 1333.44, "text": " even better than doing mean pooling or doing attention based poolings so in a nutshell what"}, {"start": 1333.44, "end": 1343.1200000000001, "text": " this paper did it it works with point clouds and it starts with maybe endpoints and every single point"}, {"start": 1343.12, "end": 1349.9199999999998, "text": " has three features and those features are just the coordinates x y z and once you do some transforms"}, {"start": 1349.9199999999998, "end": 1357.1999999999998, "text": " and mlp transformations you end up with this n where n is number of points and you have 1024"}, {"start": 1357.1999999999998, "end": 1364.56, "text": " features and they just did so they just do element-wise max pooling so that means whatever"}, {"start": 1364.56, "end": 1371.04, "text": " the max element in this column is that will be put here and then you do the same for every single"}, {"start": 1371.04, "end": 1376.96, "text": " column and as i mentioned they they tried also doing the mean of this column and they tried doing"}, {"start": 1376.96, "end": 1382.8, "text": " the attention so the attention what the attention does is you basically find coefficients for all of"}, {"start": 1382.8, "end": 1391.04, "text": " these for all of these vectors rows so you have maybe alpha one for the first row you'll calculate"}, {"start": 1391.04, "end": 1397.12, "text": " all of those alphas you'll finally have alpha n and now what we would do is you would basically"}, {"start": 1397.12, "end": 1403.4399999999998, "text": " just multiply these these corresponding row vectors by alphas and that's how you'd aggregate"}, {"start": 1403.4399999999998, "end": 1409.76, "text": " and you'll get the global feature finally so that's pretty much everything you needed to know about"}, {"start": 1409.76, "end": 1415.9199999999998, "text": " graph sage aside from that detail about how do they do the mini batch part and i'll explain that"}, {"start": 1415.9199999999998, "end": 1422.4799999999998, "text": " in a couple minutes but first let's let's just go through the experiments and as i previously 
said"}, {"start": 1422.48, "end": 1429.68, "text": " the main like advantage of this paper is that they are really good at inductive setting"}, {"start": 1429.68, "end": 1436.72, "text": " and they use this ppi protein protein traction data set and there they have these entirely"}, {"start": 1436.72, "end": 1443.44, "text": " unseen graphs which is something that's really challenging for all of those graph method graph"}, {"start": 1443.44, "end": 1449.6, "text": " embedding methods and for all those previous methods that were inherently transductive"}, {"start": 1449.6, "end": 1456.24, "text": " um methods that were inherently transductive as i already mentioned they used sampling sizes of"}, {"start": 1456.24, "end": 1462.6399999999999, "text": " 25 neighbors in the first layer 10 in the second layer and one more detail is that in the multi"}, {"start": 1462.6399999999999, "end": 1471.6, "text": " graph setting they can apply deep walk since since because of this fact so the so in the"}, {"start": 1471.6, "end": 1476.32, "text": " multi graph setting we cannot apply deep walk since the embedding space is generated by running"}, {"start": 1476.32, "end": 1481.52, "text": " the deep walk algorithm on different disjoint graphs can be arbitrarily rotated with respect"}, {"start": 1481.52, "end": 1487.52, "text": " to each other and i'll touch on that a bit later but just for now just keep that in hand that"}, {"start": 1487.52, "end": 1492.08, "text": " those graph embedding methods are really hard to to to to generalize to inductive setting"}, {"start": 1493.12, "end": 1497.76, "text": " okay let's see the results they had four baselines the random classifier they just"}, {"start": 1497.76, "end": 1504.1599999999999, "text": " use the raw features so those are those i mentioned those maybe you have the abstract here you have"}, {"start": 1504.16, "end": 1509.6000000000001, "text": " uh degree information and that's your feature vector and here they just use those raw features"}, {"start": 1509.6000000000001, "end": 1515.44, "text": " and maybe just do some mlp on top of those and that's how they learn to classify without using"}, {"start": 1515.44, "end": 1521.52, "text": " any graph structure information whatsoever um on the other hand they had deep walk which was"}, {"start": 1521.8400000000001, "end": 1527.8400000000001, "text": " trained a bit smarter basically it encodes to this graph structure as well and here is a combination"}, {"start": 1527.84, "end": 1534.3999999999999, "text": " of doing both of those and then they have uh four different graph sage algorithms depending on the"}, {"start": 1534.3999999999999, "end": 1541.1999999999998, "text": " aggregation function they use so we have gcm we have mean lstm and pool and here we can see it"}, {"start": 1541.1999999999998, "end": 1547.6799999999998, "text": " pretty much steadily increases here and also they have the unsupervised trained graph sage and they"}, {"start": 1547.6799999999998, "end": 1552.8799999999999, "text": " have the supervised trained graph sage and you can also you can also expect that this will increase"}, {"start": 1552.88, "end": 1560.16, "text": " and it did so what we can see here is that basically yeah they got better results than all"}, {"start": 1560.16, "end": 1569.6000000000001, "text": " of those baselines and now there is the best aggregator functions seem to be lstm and pooling"}, {"start": 1570.16, "end": 1577.6000000000001, "text": " whereas pooling is a lot cheaper so they finally gave it a 
slight edge and they they said that the"}, {"start": 1577.6, "end": 1583.4399999999998, "text": " pooling had the best performance altogether it's probably the best overall because it has um that"}, {"start": 1583.4399999999998, "end": 1590.24, "text": " the best trade-off between the performance and the efficiency uh yeah here they just in this chart"}, {"start": 1590.24, "end": 1597.4399999999998, "text": " they just um showed that these graph embedding methods such as deep walk have a really high cost"}, {"start": 1597.4399999999998, "end": 1604.6399999999999, "text": " to do the inference on on the full test set because they have to recalculate to retrain uh those"}, {"start": 1604.64, "end": 1610.96, "text": " embeddings and that's really computationally costly and this is log scale here so that's"}, {"start": 1610.96, "end": 1617.76, "text": " thousand seconds that's much more than what graph sage takes they also have a small chart here where"}, {"start": 1617.76, "end": 1627.0400000000002, "text": " they plot how the performance so the f1 score and how the runtime uh increase as we increase the"}, {"start": 1627.04, "end": 1635.44, "text": " the sample size so if you remember we had we had 25 and we used 10 and here they just experimented"}, {"start": 1635.44, "end": 1642.48, "text": " they went all the way on up to 75 neighbors and we can see that the performance does increase as"}, {"start": 1642.48, "end": 1650.1599999999999, "text": " well as the runtime so uh empirically they just figured out that some smaller neighborhood sizes"}, {"start": 1650.16, "end": 1661.68, "text": " such as 25 and 10 are performing the best overall okay that was pretty much it for this main part"}, {"start": 1661.68, "end": 1669.8400000000001, "text": " of the paper uh yeah they mentioned here that so graph sage lstm is significantly slower than graph"}, {"start": 1669.8400000000001, "end": 1676.72, "text": " sage pool so approximately twice so perhaps giving the pooling based aggregator a slight edge overall"}, {"start": 1676.72, "end": 1684.08, "text": " um one more interesting theorem here and they have a whole proof for this thing that's like four pages"}, {"start": 1684.08, "end": 1690.4, "text": " long and i won't get into the mathematics in this video but i just want to briefly explain what they"}, {"start": 1690.4, "end": 1697.76, "text": " said here um what they say here is that graph sage even though it heavily relies on features and like"}, {"start": 1697.76, "end": 1703.3600000000001, "text": " aggregating those features and stuff it understands the graph structure really well so how they proved"}, {"start": 1703.36, "end": 1709.9199999999998, "text": " that is the following so they they showed that um the final representation so zvs if you remember"}, {"start": 1709.9199999999998, "end": 1718.32, "text": " from the pseudo algorithm uh can be made like arbitrarily close to these cvs which are basically"}, {"start": 1718.32, "end": 1725.12, "text": " cluster something called clustering coefficients and um and i'll now explain what those are"}, {"start": 1725.12, "end": 1732.1599999999999, "text": " so this equation again tells us that we can make the representations arbitrarily close so the"}, {"start": 1732.1599999999999, "end": 1738.56, "text": " whatever epsilon you give me how no matter how small like maybe you give me 0.01 i can make the"}, {"start": 1738.56, "end": 1745.6, "text": " zv close enough to the real um clustering coefficient so what's the clustering coefficient"}, {"start": 
1745.6, "end": 1751.52, "text": " basically if you have some small graph here something like this"}, {"start": 1751.52, "end": 1760.0, "text": " and let's say this node one is connected to these three nodes basically um the clustering"}, {"start": 1760.0, "end": 1769.44, "text": " coefficient let's call it c for node one is if these are all connected together like this"}, {"start": 1770.0, "end": 1778.0, "text": " so they form a clique then the clustering coefficient equals one because the neighborhood"}, {"start": 1778.0, "end": 1783.12, "text": " nodes have the the highest connectivity pattern that's possible and that's a fully connected"}, {"start": 1783.12, "end": 1791.44, "text": " graph so if we were to just disconnect maybe these two edges would be left with this one"}, {"start": 1791.44, "end": 1800.48, "text": " and the clustering coefficient would drop to one third so basically what they state here is that"}, {"start": 1800.48, "end": 1808.08, "text": " the graph sage can learn all of these uh so this is like a structural information in the graph and"}, {"start": 1808.08, "end": 1815.04, "text": " they can learn it to an arbitrary precision okay uh let me wrap up this paper explaining the mini"}, {"start": 1815.04, "end": 1820.48, "text": " batch mini batch part and something related to graph embedding methods which will be interesting"}, {"start": 1821.84, "end": 1827.3600000000001, "text": " basically the main part the only difference is this so we won't be iterating over everything"}, {"start": 1827.36, "end": 1832.0, "text": " the only difference is this so we won't be iterating over every single node in the graph"}, {"start": 1832.0, "end": 1837.1999999999998, "text": " because the graph can be really huge uh like maybe a couple of billion nodes so basically"}, {"start": 1837.1999999999998, "end": 1844.0, "text": " they calculate these b sets so uh if we have let's assume we have two layers in graph sage"}, {"start": 1844.0, "end": 1852.7199999999998, "text": " that means they'll be calculating set b zero set b one and set b two and let me try and illustrate"}, {"start": 1852.72, "end": 1859.84, "text": " what what those are and how they look like so basically if we have some huge graph that has"}, {"start": 1859.84, "end": 1865.76, "text": " a couple of billion nodes maybe we're only interested in representations of a small subset"}, {"start": 1865.76, "end": 1873.2, "text": " of nodes which we'll call b2 now because of the way how graph sage works it just aggregates the"}, {"start": 1873.2, "end": 1878.88, "text": " neighborhood of every single node in a particular layer that means we'll need one hop neighborhood"}, {"start": 1878.88, "end": 1887.92, "text": " of the nodes in b2 and let's call that b1 and now because we have two layers we'll also need"}, {"start": 1887.92, "end": 1896.0, "text": " the one hop neighborhood of b1 which is basically b0 and that will be a superset of b1 which is"}, {"start": 1896.0, "end": 1905.1200000000001, "text": " itself a superset of b2 so this would this this is what the the algorithm would um put in these"}, {"start": 1905.12, "end": 1911.1999999999998, "text": " b sets so let me just zoom in a little bit on this one so we start with the so these are the target"}, {"start": 1911.1999999999998, "end": 1917.52, "text": " nodes that we want to calculate the representation for and so we initialize b2 with that one and then"}, {"start": 1917.52, "end": 1928.9599999999998, "text": " we start going towards one we we we initially initialized 
b1 b1 we initialize it with b2"}, {"start": 1928.96, "end": 1938.0, "text": " nodes and then we iterate over b2's nodes so over these nodes and we just add up we just do a union"}, {"start": 1938.0, "end": 1944.88, "text": " of all of the neighbors of all of the nodes in b2 so we'll go through nodes and maybe some of these"}, {"start": 1944.88, "end": 1953.1200000000001, "text": " nodes we'll have neighbors that are here and thus we'll we'll slowly build up b1 and then we'll"}, {"start": 1953.12, "end": 1960.8799999999999, "text": " slowly build up b0 and then the main loop what we do is we first initialize the initial representations"}, {"start": 1960.8799999999999, "end": 1969.52, "text": " with features feature vectors coming from b0 set so we'll have we'll only need to take we'll only"}, {"start": 1969.52, "end": 1976.4799999999998, "text": " need to think about b0 vectors we don't have to care about any of these nodes so we have we gain"}, {"start": 1976.48, "end": 1983.52, "text": " efficiency and so now the algorithm what it does we start with the with b1 so we start with this"}, {"start": 1983.52, "end": 1990.4, "text": " with these nodes so we iterate over those nodes and we do aggregations some of them will belong"}, {"start": 1990.4, "end": 1998.56, "text": " to b0 and hopefully luckily we already we have already have those in memory and we have those"}, {"start": 1998.56, "end": 2004.88, "text": " feature vectors so we can calculate all of the representations in b1 and once we have b1 in the"}, {"start": 2004.88, "end": 2011.8400000000001, "text": " next cycle once k gets to 2 we'll be iterating over nodes in b2 and we'll be building up"}, {"start": 2011.8400000000001, "end": 2016.72, "text": " representations and we'll luckily have all of these representations calculated from the last"}, {"start": 2016.72, "end": 2021.92, "text": " iteration so we can finally have all of the representations that we are interested in in"}, {"start": 2021.92, "end": 2029.92, "text": " b2 calculated and put into zv zus and that's pretty much it that's how the mini-batch algorithm"}, {"start": 2029.92, "end": 2034.5600000000002, "text": " works and hopefully that was clear enough if you have any questions please comment down in the"}, {"start": 2034.56, "end": 2041.52, "text": " comment section i'll try and answer all of them and yeah one interesting thing is they sample"}, {"start": 2041.52, "end": 2046.6399999999999, "text": " with replacement in cases where the sample size is larger than the nodes degree so maybe we have"}, {"start": 2046.6399999999999, "end": 2057.36, "text": " a node and it only has maybe three neighbors like this but if you remember s1 was 25 and s2 was 10"}, {"start": 2057.36, "end": 2066.32, "text": " which means we'll have to repeat some of these features before we call the aggregate function"}, {"start": 2066.32, "end": 2072.4, "text": " so maybe we'll have eight of these we'll have eight of these and we'll have nine of these and"}, {"start": 2072.4, "end": 2079.52, "text": " then we'll call the aggregate function so that means sampling with replacement so yeah that's"}, {"start": 2079.52, "end": 2085.84, "text": " a small detail and this is one more interesting detail and as you can see graph sage is all about"}, {"start": 2085.84, "end": 2093.6000000000004, "text": " these small details which are making the implementation a lot more efficient and it can later be used"}, {"start": 2094.56, "end": 2101.76, "text": " on huge graphs and here what they say is basically they 
do a pre-processing step on these huge graphs"}, {"start": 2101.76, "end": 2111.1200000000003, "text": " and in case some of the nodes has maybe thousand edges they'll just sub sample up to 128 of them"}, {"start": 2111.12, "end": 2121.2, "text": " so every single node will have its degree less than or equal to 128 and they say here due to"}, {"start": 2121.2, "end": 2128.24, "text": " heavy tailed nature of degree distributions we down sample the edges in all graphs before feeding"}, {"start": 2128.24, "end": 2133.2, "text": " them into the graph sage algorithm in particular we sub sample edges so that no node has a degree"}, {"start": 2133.2, "end": 2140.16, "text": " larger than 128 the final thing i want to mention is the part that i mentioned in the beginning"}, {"start": 2140.16, "end": 2147.2, "text": " and that's these what's the problem once you're once you're dealing with these embedding tables"}, {"start": 2147.2, "end": 2155.3599999999997, "text": " so basically what you're doing by doing those trying to make some vectors similar and some"}, {"start": 2155.3599999999997, "end": 2159.8399999999997, "text": " dissimilar you're basically doing implicit factorization so basically you're trying to"}, {"start": 2159.8399999999997, "end": 2168.56, "text": " calculate that z matrix and this is number of nodes this is the hidden dimension d"}, {"start": 2168.56, "end": 2178.32, "text": " and you're if you do transpose this z matrix into zt and you just do multiplication what you end up"}, {"start": 2178.32, "end": 2187.92, "text": " is this matrix m which basically contains all of these similarities between vectors so you have"}, {"start": 2187.92, "end": 2194.32, "text": " you have a you have a how similar vector one is with vector one and you'll have how similar vector"}, {"start": 2194.32, "end": 2203.36, "text": " one is with vector two etc so you have a huge m by n matrix which basically tells you how similar"}, {"start": 2203.36, "end": 2212.56, "text": " or dissimilar those embedding feature vectors are and you're basically by doing those random walks"}, {"start": 2212.56, "end": 2217.2000000000003, "text": " and training those graph embedding methods you're you're implicitly trying to find this"}, {"start": 2217.2, "end": 2226.0, "text": " you find this implicit m matrix which contains the random walk statistics you know the problem is"}, {"start": 2226.0, "end": 2235.4399999999996, "text": " that as you can see here you can basically have a whole family of these z matrices which basically"}, {"start": 2235.4399999999996, "end": 2244.7999999999997, "text": " add up to m and that means those those embedding vectors can rotate they can the whole space can"}, {"start": 2244.8, "end": 2251.28, "text": " rotate and you'd still have the same matrix m so now what why is that the problem so the problem"}, {"start": 2251.28, "end": 2258.0, "text": " is once you try to add new nodes to your graph as we saw in the beginning so basically if you try and"}, {"start": 2258.0, "end": 2265.76, "text": " you have graph g1 you have embedding table for g1 and you try to add maybe two new nodes and what"}, {"start": 2265.76, "end": 2273.52, "text": " happens is you'll have to update the table and if you just do the following if you just if you maybe"}, {"start": 2273.52, "end": 2279.6, "text": " so basically you had this table previously and now you're trying to add two new vectors but the"}, {"start": 2279.6, "end": 2287.2, "text": " thing is you train a classifier for those vectors here so you 
have some some classification had here"}, {"start": 2288.24, "end": 2294.96, "text": " which was trained for those and once you add up these two new random vectors and if you try to"}, {"start": 2295.68, "end": 2303.04, "text": " retrain everything then these won't be working correctly with this classifier anymore"}, {"start": 2303.04, "end": 2310.16, "text": " and you'll mess up everything you you all of the training will be pretty much in vain and so"}, {"start": 2310.16, "end": 2315.92, "text": " yeah they mentioned here moreover if we update all embeddings during training not just for the new"}, {"start": 2315.92, "end": 2322.24, "text": " nodes as suggested by deep walk then the embedding space can arbitrarily rotate compared to the"}, {"start": 2322.24, "end": 2328.24, "text": " embedding space that we train our classifier on which only further exasperates the problem"}, {"start": 2328.24, "end": 2334.16, "text": " so basically once you add again re-dating once you add the nodes if you just do this naive"}, {"start": 2334.16, "end": 2339.68, "text": " retraining you'll mess up the classifier because all of these will start rotating because as we"}, {"start": 2339.68, "end": 2348.16, "text": " already saw all of those families are still add up to this matrix m and they suggested here a couple"}, {"start": 2348.16, "end": 2352.8799999999997, "text": " of reasonable approaches so some reasonable approaches to alleviate this issue of statistical"}, {"start": 2352.88, "end": 2358.1600000000003, "text": " drift are to not update the already trained embeddings when optimizing the embeddings for"}, {"start": 2358.1600000000003, "end": 2364.48, "text": " new test nodes so that means we pretty much freeze up all of these so we freeze them up we don't"}, {"start": 2364.48, "end": 2372.2400000000002, "text": " change those and second to only keep existing nodes as context nodes in the sample random box"}, {"start": 2372.2400000000002, "end": 2376.96, "text": " i.e. 
to ensure that every dot product in the skip gram objective is a product of an already"}, {"start": 2376.96, "end": 2386.88, "text": " trained node and a new test node so that means once you're doing those random walks you only take"}, {"start": 2386.88, "end": 2395.12, "text": " context nodes from these that are already trained so you you can take for example if this is a new"}, {"start": 2395.12, "end": 2402.64, "text": " node let's call it node number three and this is also a new node and even though this node three"}, {"start": 2402.64, "end": 2409.3599999999997, "text": " is on this random walk because it's a new node we won't be using it as a context node if you're"}, {"start": 2409.3599999999997, "end": 2414.72, "text": " familiar with how the word2vec and these graph embedding methods work you'll understand what"}, {"start": 2414.72, "end": 2421.44, "text": " i'm talking about and so basically the second approach the the the second suggestion they have"}, {"start": 2421.44, "end": 2428.24, "text": " is only use nodes that are already trained as the context there was a lot of information packed in"}, {"start": 2428.24, "end": 2435.2799999999997, "text": " this video i think it's worth just comparing graph sage with get and with gcn because all of the"}, {"start": 2435.2799999999997, "end": 2441.8399999999997, "text": " three are spatial methods and get and gcn are the most cited papers in the gnn literature so"}, {"start": 2442.24, "end": 2447.6, "text": " the structure is really similar if you take a look at it so we have a we have a node and we have a"}, {"start": 2447.6, "end": 2456.16, "text": " neighborhood and what gcn and what get do is the following they just aggregate all of the nodes"}, {"start": 2456.16, "end": 2463.6, "text": " aggregate all of the node features and before before they add them up they just scale them with"}, {"start": 2463.6, "end": 2473.2799999999997, "text": " a certain coefficients so basically gcn takes this one over square root di times tj which are the"}, {"start": 2473.2799999999997, "end": 2480.16, "text": " degrees of those nodes and so we we take both the neighborhood features as well as the current"}, {"start": 2480.16, "end": 2490.16, "text": " nodes feature and we just scale them with this that's gcm and what get does instead it just"}, {"start": 2490.16, "end": 2495.92, "text": " computes dynamically these alpha coefficients which are just attention coefficients and then"}, {"start": 2495.92, "end": 2506.64, "text": " multiplies those nodes with these coefficients that's get once you multiply them you just add"}, {"start": 2506.64, "end": 2513.68, "text": " them up and then you do a simple projection followed by a simple non-linearity such as"}, {"start": 2513.68, "end": 2523.2799999999997, "text": " relu so you have this let's call it aggregated features whatever and then we just take those we"}, {"start": 2523.2799999999997, "end": 2533.12, "text": " project them and we do non-linearity such as relu on the other side what graph sage does is it also it"}, {"start": 2533.12, "end": 2538.4, "text": " it doesn't take the full neighborhood it takes a sub sample uniform some sample of the neighborhood"}, {"start": 2539.3599999999997, "end": 2548.24, "text": " and it again aggregates those features in bunch of different ways some of them you saw like mean"}, {"start": 2548.24, "end": 2554.72, "text": " you can do lstm aggregation whatever and finally you again do a projection and followed by a"}, {"start": 2554.72, "end": 2560.88, "text": " 
non-linearity so the pattern is really similar graph sage is a bit more efficient because it's"}, {"start": 2560.88, "end": 2566.96, "text": " sub samples the neighborhood and it experimented with different aggregation function and also some"}, {"start": 2566.96, "end": 2572.88, "text": " of the other details you heard during this video so that was it if you found this video useful i'd"}, {"start": 2572.88, "end": 2577.84, "text": " really appreciate you leaving some feedback in the form of a comment in the comment section that will"}, {"start": 2577.84, "end": 2583.52, "text": " help me like twofold first thing is i'll get much better by reading your feedback what i'm doing"}, {"start": 2583.52, "end": 2589.12, "text": " right what i'm doing wrong and secondly you'll boost these videos like because of the the way"}, {"start": 2589.12, "end": 2594.96, "text": " youtube algorithm functions so anyways if you found this video useful consider subscribing"}, {"start": 2594.96, "end": 2600.72, "text": " and consider hitting the bell icon if you want to get notified for every single video i make"}, {"start": 2600.72, "end": 2619.6, "text": " and yep until next time keep learning deep"}]
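To make the sample-and-aggregate step described above concrete, here is a minimal NumPy sketch of a single GraphSAGE layer with the mean aggregator: uniformly sample a fixed number of neighbors (with replacement if the degree is too small), average their representations, concatenate with the node's own representation, project with a weight matrix, apply a nonlinearity, and L2-normalize. The toy graph, dimensions, and function name are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def graphsage_mean_layer(H, neighbors, W, sample_size, rng):
    """One GraphSAGE layer with the mean aggregator (forward pass only).

    H           : (N, d_in) current node representations
    neighbors   : dict mapping node id -> list of neighbor ids
    W           : (2 * d_in, d_out) weights applied to [h_v ; h_aggregated]
    sample_size : number of neighbors to sample uniformly (e.g. 25 or 10)
    """
    N = H.shape[0]
    H_next = np.zeros((N, W.shape[1]))
    for v in range(N):
        nbrs = neighbors[v]
        # uniform sample; with replacement when the degree is smaller than sample_size
        sampled = rng.choice(nbrs, size=sample_size, replace=len(nbrs) < sample_size)
        h_agg = H[sampled].mean(axis=0)                      # AGGREGATE step
        h_cat = np.concatenate([H[v], h_agg])                # concat own repr with aggregate
        h_new = np.maximum(h_cat @ W, 0.0)                   # linear projection + ReLU
        H_next[v] = h_new / (np.linalg.norm(h_new) + 1e-12)  # normalize onto the unit sphere
    return H_next

# toy usage on a 4-node star graph (node 0 connected to nodes 1, 2, 3)
rng = np.random.default_rng(0)
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
H0 = rng.normal(size=(4, 8))           # pretend 8-dim input features
W1 = rng.normal(size=(16, 4)) * 0.1    # [h_v ; h_agg] -> 4-dim output
H1 = graphsage_mean_layer(H0, neighbors, W1, sample_size=3, rng=rng)
print(H1.shape)  # (4, 4)
```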
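The unsupervised objective described above (pull a pair of nodes that co-occur on a random walk together, push sampled negative nodes apart, with a factor Q weighting the negative term) can be sketched like this; the helper names and the small epsilon for numerical stability are my own additions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graphsage_unsup_loss(z_u, z_v, z_negs, Q=1.0):
    """Skip-gram style loss: (u, v) is a positive pair from a random walk,
    z_negs holds representations of negative samples drawn from a noise
    distribution.  Minimized when z_u.z_v is large and z_u.z_neg is very negative.
    """
    pos = -np.log(sigmoid(z_u @ z_v) + 1e-12)
    neg = -Q * np.mean(np.log(sigmoid(-(z_negs @ z_u)) + 1e-12))
    return pos + neg
```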
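And here is a rough sketch of the mini-batch bookkeeping: start from the target nodes and walk backwards one layer at a time, adding a sampled one-hop neighborhood at each step, so that only the outermost set of nodes needs its feature vectors in memory. This is a simplified reading of the B-set construction described above (in particular, which per-layer sample size applies at which hop is an assumption), not the exact pseudocode from the paper.

```python
import numpy as np

def build_minibatch_sets(target_nodes, neighbors, sample_sizes, rng):
    """Return [B^0, B^1, ..., B^K]; each set is a superset of the next one.

    target_nodes : nodes we actually want final embeddings for (B^K)
    sample_sizes : neighbors sampled per layer, e.g. [25, 10] for K = 2
    """
    K = len(sample_sizes)
    B = {K: set(target_nodes)}
    for k in range(K, 0, -1):
        B[k - 1] = set(B[k])
        for u in B[k]:
            nbrs = neighbors[u]
            s = sample_sizes[k - 1]
            sampled = rng.choice(nbrs, size=s, replace=len(nbrs) < s)
            B[k - 1].update(int(x) for x in sampled)
    return [B[k] for k in range(K + 1)]
```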
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=VyIOfIglrUM
Graph Convolutional Networks (GCN) | GNN Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I do a deep dive into the graph convolutional networks paper! It's currently the most cited paper in the GNN literature at the time of making this video. You'll learn about: ✔️ All the nitty-gritty details behind GCN ✔️ 3 different perspectives (spectral, WL, MPNN) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GCN paper: https://arxiv.org/abs/1609.02907 ✅ T.Kipf's awesome website: https://tkipf.github.io/graph-convolutional-networks/ ✅ M.Bronstein's blog on GNN depth: https://towardsdatascience.com/do-we-need-deep-graph-neural-networks-be62d3ec5c59 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Intro to GCNs 01:00 Graph Laplacian regularization methods 06:00 GCN method (in-depth explanation) 12:40 Vectorized form explanation 17:05 Spectral methods (the motivation behind GCNs) 29:20 Visualizing GCN hidden features (t-SNE) 30:17 Explanation of semi-supervised learning process 32:07 Graph embedding methods, results 34:12 Different variations of GCN 36:30 Speed benchmarking & limitations 39:30 Weisfeiler-Lehman perspective (GCN vs GIN) 44:25 GAT perspective, consequences of WL 46:45 GNN depth ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphconvolutionalnetwork #graphs #deeplearning
What's up? In this video I'm continuing the series on graph neural networks, and I'm going to cover Semi-Supervised Classification with Graph Convolutional Networks, or GCN for short. It's the most cited paper in the GNN literature and it was published by Thomas Kipf and Max Welling. This paper can be looked at from three different perspectives. The one they show at the beginning of the paper, which I'll briefly explain, is the spectral one: treating their method as a special case, an approximation, of spectral graph methods. The second perspective is to consider it as a generalized Weisfeiler-Lehman algorithm. And the third is to treat it as a specific case of a message passing neural network, or maybe even of GAT, the graph attention network, which is the paper I covered in my last video. Before I jump into explaining the method and the spectral perspective, I want to tell you about these explicit graph Laplacian regularization methods. Before GCN and the spectral methods appeared, we mostly had two classes of algorithms: on one side the graph embedding algorithms such as DeepWalk, Planetoid, etc., and on the other side methods that did semi-supervised learning using explicit graph Laplacians. Let me show you what this regularization term does. If you take a look at the semi-supervised loss, we have the supervised portion, the L0 term, and we have this regularization term, and you can see how it looks. What it does is it enforces connected nodes to have the same, or a similar, label; it smoothens out the labels. That's a hypothesis, a prior, a bias this method has: it wants locally connected nodes to share the same label, which might not be a good thing in every problem, although sometimes it may be a good idea. As they say here, the formulation of equation one relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity but could contain additional information. GCN, as we'll see in the results, outperformed all of those baselines, and it's a much better idea to let the neural network learn from the features and from the graph structure than to encode this bias into the model. Now I'm going to draw a simple graph here because we'll need the adjacency matrices and all of the terminology later. So let me draw a simple toy graph, with some simple connections like this, and these may be node number one, node number two, three, and finally four. Here's how the adjacency matrix looks; it's an important concept and I'm going to repeat it even though I covered it in my last video. Basically, it just represents the connection pattern of the graph. We have a zero here because node one doesn't connect to itself, it doesn't have a self-connection, and we have ones here because it connects to all of the other nodes. The other nodes only connect to node number one, so we'll have ones there and zeros elsewhere.
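For a concrete feel of that regularization term, here is a small NumPy computation of the label-smoothness penalty on the toy graph just described, using the same illustrative labels as the walkthrough that continues below (node one has label one, node two label two, node three label three, node four label two). It also shows the equivalent quadratic form with the graph Laplacian Delta = D - A; the exact scaling constant in front varies between write-ups.

```python
import numpy as np

# Toy graph from the walkthrough: node 1 connects to nodes 2, 3, 4 (0-indexed here).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

# Illustrative label scores f(x_i) for the four nodes.
f = np.array([1.0, 2.0, 3.0, 2.0])

# Explicit double sum: sum_ij A_ij * (f_i - f_j)^2
L_reg = sum(A[i, j] * (f[i] - f[j]) ** 2
            for i in range(len(f)) for j in range(len(f)))

# Same quantity via the graph Laplacian Delta = D - A (equals 2 * f^T Delta f).
D = np.diag(A.sum(axis=1))
Delta = D - A
print(L_reg, 2 * f @ Delta @ f)  # both 12.0; zero only if connected nodes share a label
```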
So what this equation does is you iterate through the adjacency matrix, you take the corresponding labels, you subtract them and you square them. That means if we go through row number one, let's say for the sake of argument that node two has label two, node one has label one, node three has maybe label three, and node four also has label two. Because the entry for node one and node two is a one, that pair gets included in the sum, so we take the label of node two, which is two, subtract one, and square it. We repeat the same procedure wherever we have a one, wherever we have a connection, in the adjacency matrix. So we'll have label three here, we take three minus one and square it, and again two minus one, squared. None of these terms goes to zero, and that's exactly what this graph Laplacian regularization term penalizes. For this to get to zero, all of these labels would have to be changed to one, because that's the label this node has. So that's it in a nutshell. Again, f is just a function that maps the feature vectors associated with every node to a class, and we want those outputs to be similar, smooth, across edges. So this regularization term won't allow the method to assign label 10, for example, to node number three, if we assume we have 10 classes. Now I'm going to first cover how GCN works, because it's better to have a clear mental picture of the method before we go into the spectral perspective. This is equation two, how GCN works: the layer computes H' = sigma(D_tilde^(-1/2) A_tilde D_tilde^(-1/2) H W). We have this constant term which won't change during propagation, because it contains the adjacency matrix with added self-connections, which is what the tilde symbol denotes, and then we have the degree matrix, which also accounts for the self-connections, raised to the power of minus one half, and I'll briefly explain what that means in a second. Then we have H, which is just a matrix stacking the feature vectors of every single node, and we have W, which is just a linear projection layer. For the sake of easier understanding I'll draw the graph again, the same example as before. We again have the adjacency matrix A, and the pattern was: ones in the first row because node one connects to nodes two, three and four, a one in the second row because node number two connects only to node number one, and zeros elsewhere. Now, to make the tilde version we just add an identity matrix, which adds the self-connections: we get ones along the diagonal, which basically assumes every single node has a self-connection. Once we have this matrix A tilde we need to normalize it, and we do that using these D tilde to the minus one half terms. So what does that represent? Basically, this degree tilde matrix just does a row-wise summation over the A tilde matrix.
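As a quick sanity check of those two matrices, here is how A tilde and D tilde look for the toy graph in NumPy; the diagonal of D tilde comes out as 4, 2, 2, 2, matching the row sums just described.

```python
import numpy as np

# Adjacency of the 4-node toy graph (node 1 connected to nodes 2, 3, 4), 0-indexed.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

A_tilde = A + np.eye(4)                  # add self-connections: ones on the diagonal
D_tilde = np.diag(A_tilde.sum(axis=1))   # row-wise sums -> node degrees with self-loops

print(np.diag(D_tilde))  # [4. 2. 2. 2.]
```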
So we'll get a diagonal matrix which contains a four here, because when we sum up that row we get four, and that's literally the degree: there are four connections touching node number one (counting the self-connection), which is why we have a four there. Then we have two, two, and two, because all the other nodes have only two connections. The next step is to invert it and take a square root. So what we end up doing, once we multiply D tilde raised to the minus one half times A tilde, and do the same thing on the right, is normalizing each of these ones by the term one over the square root of d_i times d_j, where d_i is the degree of node i and d_j is the degree of node j. Take this particular entry, which currently holds a one: it corresponds to the connection between node one and node two. Because node two has degree two, that's d_j, and node one has degree four, we normalize this entry by one over the square root of four times two, i.e. one over the square root of eight, whatever that evaluates to. That's how we normalize this matrix. All of this will hopefully fit together in a moment; I just wanted to explain how that matrix is computed. So here is how the process looks in the GCN method. All of these nodes have an associated feature vector, and, to be less abstract, let's say this one has 1,433 features. That number is not random: Cora, the citation network which this paper uses as a benchmark, has exactly that many features per node, where every node represents a paper in the citation graph. So all of these nodes have such a feature vector, and at a high level the method first projects these feature vectors using the matrix W, usually into a lower-dimensional space. It's basically a fully connected feed-forward layer without an activation at this point, or you can treat it as having an identity activation, which is the same thing. So the projected vector will maybe have 64 dimensions, and we do this for every single node in the graph; that's this part of the equation. Now what the other part does, and I'll explain why, is accumulate, or aggregate, which is the term used in the graph literature, these projected features: it scales them by those one over square root of d_i d_j factors and adds them up, and that gives you the new representation. So node one will accumulate the vectors from all of the nodes, because all of them connect to it, so its final representation will contain some portion of every one of those feature vectors. On the other hand, if we take node two, it will only accumulate the feature vector that came from itself and the one that came from node one. That's the difference. So you basically take the one-hop neighborhood, you aggregate the projected features, you sum them up, but before summing you apply those scaling factors, and that's it. After that you apply some nonlinear, differentiable activation function such as ReLU; I assume everybody knows what ReLU is, but I'll draw it for the sake of argument: it's zero for negative inputs and pretty much y equals x for positive ones.
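Before moving on, here is a short sketch (my own, under the same toy-graph assumptions) of that symmetric normalization, including a check of the one over square root of eight entry:

```python
import numpy as np

A_tilde = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
], dtype=np.float32)

# Degrees with self-connections: 4, 2, 2, 2.
deg = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)

# Symmetrically normalized adjacency: D_tilde^{-1/2} @ A_tilde @ D_tilde^{-1/2}.
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# The entry coupling node 1 (degree 4) and node 2 (degree 2) is 1 / sqrt(4 * 2).
print(A_hat[0, 1], 1.0 / np.sqrt(8.0))  # both are roughly 0.3536
```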
If you still can't connect this equation with the process I just described, I'm going to go through it once more, this time following exactly what happens in the vectorized form. When I explain concepts I try to make them as intuitive as possible, but all of these equations are usually written in vectorized form, which can make them less intuitive; once you get the hang of it, though, it becomes really easy. Let me explain every single matrix. The H matrix is, as I said, just all of these feature vectors stacked as rows. Because we have n nodes, the H matrix at layer zero, i.e. initially, will be of size n times whatever the feature dimension is, and we said that's 1,433; so it's n by 1,433. Then we have the W matrix, and W projects from 1,433 dimensions into, in my example, 64, so it's a matrix of size 1,433 by 64; the inner dimensions match. Once you multiply these matrices, H times W, you end up with a new, intermediate matrix that contains the projected feature vector for every single node in the graph; its dimensions are n by 64, and those are the small vectors I drew here and here, etc. So hopefully that part is clear so far; that's this part of the equation. Finally, we need to multiply by the normalized adjacency matrix, so let's figure out that part. We already have the concrete numbers here; I just haven't written the one over square root of d_i d_j factors, but you can imagine they are there. I'm going to redraw it to make it a bit clearer. We have ones here, ones here, and because of the self-connections we have ones along the diagonal as well; everything else is zero. I stack only four rows because n equals four in our small graph. Next to it we stack the four 64-dimensional projected vectors, and I won't draw every element because they're initially just some values. Now what happens is the following. The first node, as you can see, has all ones in its row, so it will take into account every single projected vector, which is what I previously explained: because it's connected to everything, it takes all of the projected feature vectors and adds them up, with those factors corresponding to the square roots of the degrees of the nodes involved. That's the case for node one, but let's take node two. Node two only has a one here and a one here, and zeros elsewhere, which means that once you multiply its row with this matrix of projected features, you only pick up those two feature vectors; that corresponds to what I mentioned previously: you take this feature vector and this one, you add them up, using those scaling factors again. That's why the equation works; it does exactly the same thing I just explained intuitively using the drawing. Hopefully this makes sense. I went into a lot of detail, so please let me know whether this was painfully detailed or just right; I'd appreciate the feedback.
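Putting the pieces together, here is a minimal, hedged sketch of one GCN layer in NumPy, with hypothetical dimensions (n = 4 nodes, 1,433 input features, 64 hidden units); a real implementation would use sparse matrices and an autograd framework:

```python
import numpy as np

rng = np.random.default_rng(0)
n, in_dim, hidden_dim = 4, 1433, 64

# Normalized adjacency with self-connections, as built in the previous snippets.
A_tilde = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
], dtype=np.float32)
deg = A_tilde.sum(axis=1)
A_hat = np.diag(deg ** -0.5) @ A_tilde @ np.diag(deg ** -0.5)

H = rng.normal(size=(n, in_dim)).astype(np.float32)            # node features (n x 1433)
W = rng.normal(size=(in_dim, hidden_dim)).astype(np.float32)   # projection matrix (1433 x 64)

# One GCN layer: project, aggregate over 1-hop neighborhoods, apply ReLU.
H_next = np.maximum(0.0, A_hat @ (H @ W))
print(H_next.shape)  # (4, 64)
```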
Now, the reason they use these coefficients to normalize the adjacency matrix values is that if we didn't, we'd have a problem with exploding or vanishing gradients, which can nicely be seen through the spectral prism I'm about to explain. If you're familiar with eigendecompositions: what those scaling coefficients do is make sure that the largest eigenvalue of the matrix is equal to one, so the eigenvalues stay in a bounded range (between minus one and one), and by repeatedly applying that matrix we won't blow up or shrink the activations and, consequently, the gradients. That's the trick: by constantly multiplying by this term, as long as the eigenvalue range is well behaved, we avoid the vanishing or exploding gradient problem I just mentioned. Because the spectral part is really involved and deserves a video of its own, I'm only going to explain the main ideas here, and if you think it would be useful I'll cover some of the spectral methods in one of the next videos, although they are not as popular as the spatial methods, of which GCN is one representative, GAT another, GraphSAGE another, etc. So here is how spectral graph convolution works in a nutshell. You take the graph Laplacian, which is the degree matrix minus the adjacency matrix, you find the eigendecomposition of that matrix, and you get eigenvectors and eigenvalues. Once you stack those eigenvectors into a matrix you get the U matrix, and what you do is take the signal X on your graph. What's the signal? Just to be clear: take the feature vectors, pick, say, the first feature from every node's feature vector, and you can treat those values as a signal. Imagine we align the nodes on a line and plot that first feature for each of them; maybe this one has a small value, maybe this one is a bit bigger, and if I interpolate between them you can treat it as a discretized signal living on the graph. So what we do in the spectral method is project that signal into the new basis defined by the Laplacian's eigenvectors, and those are contained in U.
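Here is a small sketch (mine, on the same toy graph) of exactly that pipeline: build the Laplacian, eigendecompose it, and project a graph signal onto the eigenbasis:

```python
import numpy as np

A = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
], dtype=np.float64)

D = np.diag(A.sum(axis=1))
L = D - A                      # graph Laplacian: degree matrix minus adjacency matrix

# Eigendecomposition: columns of U are the Laplacian eigenvectors ("graph Fourier basis").
eigvals, U = np.linalg.eigh(L)

# A toy graph signal: one scalar per node (e.g. the first feature of every node).
x = np.array([0.5, -1.0, 2.0, 0.0])

x_spectral = U.T @ x           # project the signal onto the eigenbasis
x_back = U @ x_spectral        # transform it back
print(np.allclose(x, x_back))  # True: U is orthonormal
```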
Then we have the part that is actually learned in these methods: we don't learn a W matrix as in GCN, we learn this filter g sub theta. Now, the problem with this, as the paper says, is that it's computationally expensive: multiplication with the eigenvector matrix is O(n squared), because the matrix U is n by n and X is n by 1 (remember, we took just one feature per node instead of the fully blown X matrix with all the features), so that's why we have a computational cost of n squared. And that's not the only problem; the second problem is that computing the eigendecomposition of the Laplacian in the first place is super expensive. There are more efficient methods depending on the details, but O(n cubed) is a reasonable estimate of the complexity, which is prohibitive for large graphs. So Hammond and his collaborators suggested that instead of learning n free parameters in the filter, where n is the size of the graph and can be in the billions if we're dealing with, say, the Facebook or Pinterest graph, we can approximate the filter with a truncated expansion in Chebyshev polynomials up to order K. I won't go into the details of how this approximation is derived, there will be a separate video, so just trust me on this one: it can be shown that instead of using this Lambda matrix, which contains the eigenvalues, we can skip computing the eigendecomposition entirely, saving those O(n cubed) computations, and evaluate the Chebyshev polynomials on the Laplacian matrix itself rather than on the eigenvalue matrix. The Chebyshev polynomials follow a simple recursion, which you can see here: you start with one and x, and this is how you combine them to get the next polynomial terms. So this is how we do the spectral convolution now: we have K terms and K parameters, these are scalars, we just learn those, and it's much more efficient. The other nice thing about this approximation, in contrast to the pure spectral method above, is that it's K-localized. That means that if we take a single node in the graph, it only captures features from its K-hop neighborhood, i.e. from neighbors that are at most K edges away from the target node. The reason is a property of adjacency matrices: if you raise the adjacency matrix to the power K, the entry (i, j) tells you how many paths of length K there are between nodes i and j; that's why we get a K-hop neighborhood here. Again, if you didn't follow this detail it's totally fine, that's not the point of this video. Now, what GCN does is start from this approximation of the fully blown spectral method and say: let's restrict ourselves to one-hop neighborhoods, we don't care about K-hop neighborhoods, so we set K to 1. As they say, imagine we limited the layer-wise convolution operation to K equals 1; that's the first approximation they make.
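Before we simplify down to K equals 1, here is a small sketch (my own, not the paper's code) of what Chebyshev filtering looks like: build the polynomial terms of the rescaled Laplacian and combine them with K learnable scalars, with no eigendecomposition anywhere:

```python
import numpy as np

A = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
], dtype=np.float64)
D = np.diag(A.sum(axis=1))
L = D - A

# Rescale the Laplacian so its eigenvalues lie in [-1, 1], as Chebyshev polynomials expect.
lam_max = np.linalg.eigvalsh(L).max()
L_tilde = 2.0 * L / lam_max - np.eye(4)

K = 3                                   # polynomial order = size of the neighborhood
theta = np.array([0.7, -0.3, 0.1])      # hypothetical learned scalar coefficients
x = np.array([0.5, -1.0, 2.0, 0.0])     # a graph signal, one value per node

# Chebyshev recursion: T_0 = I, T_1 = L_tilde, T_k = 2 * L_tilde @ T_{k-1} - T_{k-2}.
T = [np.eye(4), L_tilde]
for _ in range(2, K):
    T.append(2.0 * L_tilde @ T[-1] - T[-2])

# Approximate spectral filtering; each node only sees its K-hop neighborhood.
y = sum(theta[k] * (T[k] @ x) for k in range(K))
print(y)
```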
They motivate it nicely: we intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs, and many other real-world graph datasets. That's the argument for setting K to 1: certain graphs, like the citation networks they use as benchmarks in this paper, Cora, Citeseer, etc., are essentially what's called small-world networks, and for them this approximation makes sense. The second approximation they make is this: we further approximate lambda max, the maximum eigenvalue of the Laplacian, to be roughly 2, as we can expect that the neural network parameters will adapt to this change in scale during training. So once you set K to 1 and lambda max to 2, you go from this equation to this one. The reason an L minus identity term shows up is that they are evaluating the Chebyshev polynomial of the scaled Laplacian: if we expand the polynomial, the first term is I, the second is L tilde, then 2 L tilde squared minus I, and so on, depending on how big K is; and if we take lambda max to be 2, the scaling simplifies and we end up with L minus identity, which is exactly the term we see here. Finally, there is one more approximation: they constrain the two remaining parameters, theta 0 and theta 1, to be expressed through a single shared parameter, which you can then factor out, and you get this equation. And there is one last trick: if you look at the resulting matrix, its eigenvalue range is from 0 to 2, which means that if we keep multiplying the projected features by it, layer after layer, the values will keep growing or shrinking, and consequently the gradients will either explode or vanish, which is something we want to avoid. That's why they do what they call the renormalization trick: they add the self-connections, which is exactly what we saw when I explained the method itself. Once you swap this part for that part, you end up with the final method, the form I explained earlier. The nice thing about this method is that its complexity is proportional to the number of edges in the graph, and not to something like the number of nodes raised to the third power as for the pure spectral methods, which were super compute-intensive. So yeah, that was a mouthful; there are a lot of details and you need to read a couple of papers to understand every single step, for example why this approximation works and why we can swap the eigenvalue matrix for the Laplacian, which is probably the hardest part; the rest is conceptually simple, and a couple more steps get you to the final form. That's why I said that maybe not looking at GCN through this spectral prism, but looking at it as a special case of GAT, or as a message passing network in general, feels more natural than treating it as a spectral method, because in the end it isn't one: the final expression has nothing to do with the spectrum.
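Still, the spectral view is handy for one numerical sanity check (my own sketch): compare the spectrum of the operator before the renormalization trick, I plus D to the minus one half A D to the minus one half, with the renormalized one built from A tilde:

```python
import numpy as np

A = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
], dtype=np.float64)
I = np.eye(4)

# Operator before the renormalization trick: I + D^{-1/2} A D^{-1/2}.
D_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
pre = I + D_inv_sqrt @ A @ D_inv_sqrt
print(np.linalg.eigvalsh(pre))   # eigenvalues spread over [0, 2] -> repeated application is unstable

# After the trick: add self-connections first, then normalize.
A_tilde = A + I
Dt_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
post = Dt_inv_sqrt @ A_tilde @ Dt_inv_sqrt
print(np.linalg.eigvalsh(post))  # largest eigenvalue is 1, so repeated application stays bounded
```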
We don't have to compute eigenvalues, we don't have to compute eigenvectors; GCN was merely motivated by the spectral methods, it's not a spectral method itself, so just avoid being confused there. With that, the hardest part is behind us: we covered the method, we covered how it was motivated by the spectral methods, and we saw how the explicit graph Laplacian regularization works. Now let me walk you through the rest of the paper, a couple more details. Once you train a two-layer GCN on the Cora dataset and extract the hidden features, you can project them into 2D space using t-SNE and you get this picture. If you watched my previous video on GAT, you may remember that GAT arguably had a bit better separation between the Cora classes than GCN, and you can also verify that through its slightly better accuracy, but I'll get to those results in a bit. On these citation networks GAT's attention coefficients over the neighborhood turn out to be pretty similar to GCN's fixed ones, so it roughly approximates GCN, although it's more expressive and thus gets better results. Okay, the semi-supervised learning setup is a pretty simple thing, and I explained it in the GAT video, but for the sake of completeness: you just use cross-entropy, except that you have a huge network of nodes and, for every class (Cora has seven classes), you only take 20 nodes whose labels you will use; you don't use the other labels, you do use their features, just not the labels. Once you get to the final layer, the feature vectors have as many dimensions as there are classes, seven for Cora, and you apply the cross-entropy loss on those values, and that's how you train these models. To be clear about what we do for Cora: we start with 1,433 features, we do the projection and the aggregation, so in the hidden layer we'll have maybe 64 features, then we apply another layer, and because Cora has seven classes the final feature vectors have only seven components; we apply a softmax so we end up with a probability distribution, we look at the probability of the true class and take the cross-entropy, so if it's, say, 0.4 the loss is minus log of 0.4, which means that probability needs to go to one for the ground-truth class in order for the loss to go to zero, which makes sense. Then you just do gradient-based learning; they used Adam for GCN, and that's it in a nutshell. Again, they visualize these learned features here. Let's see a couple more things in this paper. I already mentioned that prior to GCN and the spectral methods we had two broad categories: the methods that used an explicit graph Laplacian and the ones that used a graph embedding approach. The graph embedding approach creates random walks; it's similar to word2vec if you're familiar with NLP, basically the same idea applied to graphs: you treat random walks on the graph as sentences and you train the embeddings so that nodes which appear close to each other in the walks end up with similar embeddings, in a nutshell. As I mentioned, they used three citation networks, Citeseer, Cora, and PubMed, plus a knowledge graph (NELL), as benchmarks, and I already said that Cora has 1,433 features per node; it's the example we've been using on the fly throughout this video.
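Going back to the training objective for a second, here is a hedged sketch of it: a two-layer GCN forward pass and the masked cross-entropy over roughly 20 labeled nodes per class (the sizes, graph, and labels below are hypothetical stand-ins; in practice you would use sparse matrices and an autograd framework with Adam rather than hand-written NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n, in_dim, hidden, n_classes = 300, 1433, 64, 7   # hypothetical, Cora-like sizes

# Random symmetric adjacency as a stand-in for the citation graph, then renormalize it.
A = np.triu((rng.random((n, n)) < 0.02).astype(np.float64), 1)
A = A + A.T
A_tilde = A + np.eye(n)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]

X = rng.normal(size=(n, in_dim))                  # node features
W1 = 0.01 * rng.normal(size=(in_dim, hidden))     # layer-1 weights
W2 = 0.01 * rng.normal(size=(hidden, n_classes))  # layer-2 weights

# Two-layer GCN: softmax(A_hat @ relu(A_hat @ X @ W1) @ W2).
H = np.maximum(0.0, A_hat @ X @ W1)
logits = A_hat @ H @ W2
Z = np.exp(logits - logits.max(axis=1, keepdims=True))
Z /= Z.sum(axis=1, keepdims=True)

labels = rng.integers(0, n_classes, size=n)       # hypothetical ground-truth classes
train_idx = np.concatenate([np.where(labels == c)[0][:20] for c in range(n_classes)])

# Masked cross-entropy: only ~20 labeled nodes per class contribute to the loss;
# gradients w.r.t. W1 and W2 would then come from an autograd framework.
loss = -np.log(Z[train_idx, labels[train_idx]] + 1e-12).mean()
print(loss)
```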
You can see some statistics for these datasets here; Cora has seven classes, as I already mentioned, and they use only 20 labels per class, which means that although Cora has around 2,700 nodes, you only use 140 of them to train the GCN. They also include some random graphs, and I'll explain briefly why they mention those. Okay, here are the results. They compared against the methods that used explicit graph Laplacian regularization and against DeepWalk and Planetoid, which are graph embedding methods, and here is GCN; we can see that on every single dataset they get a significant improvement over the baselines, which is nice. They also compared different propagation models, and this is the reason I had to go into a bit more detail in the spectral section: they compare against Chebyshev, the approximation of the full spectral method, with K equal to 2 and 3, meaning 2-hop and 3-hop neighborhoods, and you can see the results are worse than the GCN method. The renormalization trick row is the official, final GCN. Going from top to bottom you can follow the approximations we were making: here we set K to 1, then we constrain the two thetas to a single parameter in the single-parameter row, and we can see that all of those variants give worse results than the final GCN version. They also compare against using the first-order term only, ditching the other term, and that again gives worse results. Finally, the reason I highlighted MLP here is that it doesn't use the graph structure at all, and you can see its accuracy is far lower than all of the other methods; it's really important to use the graph structure, because it contains a lot of information. How they trained the MLP, I suppose, is this: you have the graph and the features associated with the nodes, and you just apply the MLP to every node independently, so this is maybe the first layer, then the second layer, and you end up with seven class scores; you take those 140 labeled nodes to train it, you apply the same shared weights to the first feature vector, then the second one, and so on, you apply the cross-entropy loss, and you train the MLP. So you can see this procedure never uses the connectivity pattern; it ignores it altogether. Okay, let's continue and wrap up this paper; there are a couple more things I want to mention. The first thing, and this is interesting, is the training times: on the y-axis we have seconds per epoch and on the x-axis the number of edges, and this is where those random graphs come into play. What they did is create N nodes, take 2N edges, and connect the nodes uniformly at random; those are the graphs they were using. They use a featureless approach, where the feature matrix is basically the identity matrix, so every node just gets a one-hot vector, and they use these graphs purely to benchmark the speed of their models.
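Here is a small sketch (mine, with a hypothetical size N) of that benchmarking setup: a random graph with N nodes and about 2N undirected edges, plus identity (one-hot) node features; for the huge graphs in the plot you would of course store everything as sparse matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # hypothetical graph size; the paper scales this up to millions of edges

# Sample 2N random undirected edges, uniformly over node pairs.
src = rng.integers(0, N, size=2 * N)
dst = rng.integers(0, N, size=2 * N)

A = np.zeros((N, N), dtype=np.float32)
A[src, dst] = 1.0
A[dst, src] = 1.0
np.fill_diagonal(A, 0.0)   # drop accidental self-loops; GCN adds its own later

# "Featureless" setting: every node's feature vector is a one-hot, i.e. X is the identity.
X = np.eye(N, dtype=np.float32)

print(int(A.sum() // 2), "edges,", X.shape, "feature matrix")
```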
We can see that even as we go to a huge number of edges, the time per epoch is still on the order of one to ten seconds, on CPU and on GPU; compare that to transformers, which need days upon days on huge machines to train. Once you think about it, though, it's not that surprising, because in GCN we only have the W matrices to train, which is a small number of parameters; as the number of nodes grows the computation does get bigger, but with sparse-dense matrix multiplication and similar tricks you can speed it up. Still, it's remarkable how fast these networks are to train, and that's cool. Okay, a couple more things. They mention the memory requirement, and on that note: I'm actually currently implementing the graph attention network and I'll open source it in a couple of weeks, probably, and the hardest part I've found in implementing these networks is implementing the layer efficiently; that's pretty much everything. There are all of these sparse formats and sparse-dense matrix multiplications happening, and you have to decide how best to represent your edges, whether with an edge index or with a plain old sparse adjacency matrix, and then find a suitable storage format for it. Anyway, I don't want to go into too much detail; just keep in mind that the hardest part is implementing these methods efficiently. They also say here that their framework currently does not naturally support edge features and is limited to undirected graphs, weighted or unweighted; that's something to keep in mind, and note that Cora and the other benchmarks they used don't have edge features anyway, just simple binary connections. Okay, that was the main part of the paper. I mentioned at the beginning of this video that there are three perspectives through which we can look at this GCN paper; one is the spectral perspective, and now let me show you the second one, the Weisfeiler-Lehman algorithm. The WL algorithm, specifically its one-dimensional variant, is a simple graph isomorphism test. What that means is: you have two graphs, graph G1 and graph G2, you pass them through this test, and if the final so-called node colorings are different, then you are sure that the two graphs are not isomorphic; if they are the same, you still can't be completely sure, because there are special cases where Weisfeiler-Lehman fails, but you can be fairly confident that the two graphs are isomorphic. Isomorphic means they are really the same graph, just with a different labeling of the nodes: if you find the right permutation of labels, you can see the two graphs are identical, and there are some really nice examples, which I'll show on the screen, of two graphs that are the same even though they look different at first glance. That's the idea behind the Weisfeiler-Lehman algorithm, and you can treat GCN pretty much as a generalized version of it, with a disclaimer I'll explain in a couple of seconds about why that's not completely true; but let's play that game for the sake of argument. So here is how the algorithm looks: we start with an initial node coloring, which you can treat the same as the features we've been using so far; then, repeatedly, for every single node we aggregate the colorings from its neighborhood and hash the result, and that hash becomes the node's updated coloring; the algorithm stops once the colorings stabilize.
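Here is a compact sketch of that 1-dimensional WL color refinement (my own illustration): refine node colors a few rounds and compare the resulting color histograms of two graphs.

```python
import numpy as np

def wl_colors(A, num_iters=3):
    """1-dimensional Weisfeiler-Lehman color refinement on an adjacency matrix."""
    n = A.shape[0]
    colors = ["0"] * n  # uniform initial coloring (we assume no node features)
    for _ in range(num_iters):
        # New color = own color plus the sorted multiset of neighbor colors;
        # the classical algorithm would compress this string with an injective hash.
        colors = [str((colors[v], sorted(colors[u] for u in range(n) if A[v, u])))
                  for v in range(n)]
    return sorted(colors)   # the color histogram is what the isomorphism test compares

# Two different labelings of the same 4-node star graph.
A1 = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
A2 = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]])

# Equal histograms: the test cannot rule out isomorphism (and here the graphs really are isomorphic).
print(wl_colors(A1) == wl_colors(A2))
```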
If you think about it, this is really similar to what we have in GCN: the repeat-until loop corresponds to going through the layers of the GCN, and for every node we again aggregate across the neighborhood, with those c_ij coefficients I've mentioned so many times, which equal one over the square root of d_i times d_j, where those are the degrees of the two nodes; and you can treat the hash as playing the role of the nonlinearity, the projection, and the scaling. So it has a very similar structure, and they say that this, loosely speaking, and it's really important to highlight the "loosely speaking" part because it's not completely true, as you'll see in a minute, allows us to interpret the GCN model as a differentiable and parameterized generalization of the one-dimensional Weisfeiler-Lehman algorithm on graphs. And now the disclaimer: a later paper on graph isomorphism networks, GIN, showed that GCN is not quite there. They say, and this is just a snippet from that paper: we will see that these GNN variants get confused by surprisingly simple graphs and are less powerful than the WL test; nonetheless, models with mean aggregators like GCN perform well for node classification tasks. And node classification benchmarks are exactly what this GCN paper used; the GIN paper appeared only later, so it was hard for the GCN authors to compare against it, but anyway GIN networks were shown to be more expressive than GCNs, and more expressive than GATs as well, so that's something to keep in mind. I won't go into too much detail, but basically there are certain families of graphs where graph one and graph two are different, yet once you pass them through GCN it assigns them the same node coloring, which means GCN treats those two graphs as if they were isomorphic even though they are not; so GCN, and GAT as well, can fail on certain graph families. I thought it was worth mentioning, just to keep connecting the dots across different papers. Finally, there is the third perspective I mentioned, the GAT perspective: you can treat GCN as a particular case of GAT where, instead of learning the attention weights, you hard-code them as this one over square root of d_i d_j expression I've mentioned multiple times already. And as a nice consequence of the fact that you can treat this as a WL-style algorithm, they say that, from the analogy with WL, we can understand that even an untrained GCN model with random weights can serve as a powerful feature extractor for the nodes of a graph. So what they do is take the W matrices and leave them random, and this A hat is the constant renormalized term, the one with the tildes raised to the power of minus one half, etc.; they just apply three such GCN layers with random weights and look at what comes out.
They showed that if you take the well-known Zachary's karate club network and use a featureless approach, meaning the nodes don't have meaningful features and the feature matrix is basically the identity matrix, and you pass it through an untrained GCN, it already learns to separate the different classes reasonably well. The last layer of this GCN outputs only two dimensions per node, so the node features can easily be plotted on a 2D chart like the one here, and you can see that clusters of nodes sharing the same label also end up in the same region of this 2D space. That's interesting: it means GCN can have some discriminative power without any training, purely because of the graph structure. And once you do start training this three-layer GCN I just mentioned, you get better and better separation, and after 300 iterations the classes are clearly separated, which is awesome. One last thing: experimenting with depth. If you think about it, all of the GCNs mentioned so far had either two or three layers. If you're familiar with computer vision you know that CNNs usually have many more, ResNets had 150-plus layers, whereas here we use two or three, so what's going on? They show here, using a variant with residual connections added, which is just adding the representations from the previous layer to the original update equation, that after two or three layers we reach peak performance on the test set and don't need to go any deeper, and that's interesting. Michael Bronstein wrote a really nice blog post about this, and I'll quickly summarize it. He notes that since grids are special graphs, there certainly are examples of graphs on which depth helps (that's a quote from his blog): you can treat an image as a grid, a regular graph, and we know that for images depth helps, so there are classes of graphs where it does. But on these citation and knowledge graphs, for some reason, depth doesn't help, and one of the explanations Michael gives is that the latter are similar to small-world networks with low diameter, where one can reach any node from any other node in a few hops. You've probably heard the claim about the Facebook graph, that you can reach any person from any other person in about six or seven hops, meaning the diameter of the Facebook graph is around seven; I'm not sure the numbers are entirely correct, but you get the point. The graph is so intricately connected that after only a couple of layers you've already touched a large part of the neighborhood around each node, and you don't need more layers to achieve better performance.
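For completeness, here is a tiny sketch (my own, with made-up dimensions) of the residual variant used in that depth experiment: each layer adds its input back to its output, so layers can be stacked without changing the shape of the representations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, num_layers = 34, 16, 4   # karate-club-sized graph, hypothetical hidden size and depth

# Stand-in normalized adjacency and initial features.
A = np.triu((rng.random((n, n)) < 0.15).astype(np.float64), 1)
A = A + A.T
A_tilde = A + np.eye(n)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
H = rng.normal(size=(n, dim))

# Stack GCN layers with a residual (skip) connection: H <- relu(A_hat @ H @ W) + H.
for layer in range(num_layers):
    W = 0.1 * rng.normal(size=(dim, dim))  # one (here random) weight matrix per layer
    H = np.maximum(0.0, A_hat @ H @ W) + H

print(H.shape)  # still (34, 16): depth can be increased without changing the shape
```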
So that pretty much wraps it up. Hopefully you found this video useful; I'm experimenting and still figuring this format out, so if you have any feedback for me I'd really appreciate it. I'll continue creating GNN videos throughout January, and if you have any requests please write them down in the comment section and I'll try to make a video depending on what you folks suggest. If you liked this video and found this channel useful, please consider subscribing and clicking the bell icon to get notified when I upload a new video, so that you get my videos as soon as they're uploaded, and until next time, keep learning deep!
[{"start": 0.0, "end": 4.92, "text": " What's up? In this video I'm continuing with a series on graph neural networks"}, {"start": 4.92, "end": 8.28, "text": " and I'm going to cover the semi-supervised classification with graph"}, {"start": 8.28, "end": 13.48, "text": " convolutional networks or GCNs for short. That's the most cited paper in the GNN"}, {"start": 13.48, "end": 19.400000000000002, "text": " literature and it was published by Thomas Kipf and Max Welling. So basically"}, {"start": 19.400000000000002, "end": 25.240000000000002, "text": " this paper can be looked at from three different perspectives. So the one"}, {"start": 25.24, "end": 30.52, "text": " they showed here in the beginning of the paper, which I'll briefly"}, {"start": 30.52, "end": 36.92, "text": " explain, is the spectral. So just considering their method as a special"}, {"start": 36.92, "end": 41.959999999999994, "text": " case as an approximation to spectral graph methods. The second perspective you"}, {"start": 41.959999999999994, "end": 47.36, "text": " can take is considering this as a generalized Weiss-Vallemann algorithm."}, {"start": 47.36, "end": 52.8, "text": " And the third approach is basically just treating this as a specific case of a"}, {"start": 52.8, "end": 56.599999999999994, "text": " message passing neural network or like maybe even a get graph attention"}, {"start": 56.599999999999994, "end": 62.519999999999996, "text": " network, which was the paper I covered in my last video. So I'm just going to"}, {"start": 62.519999999999996, "end": 68.52, "text": " before I jump into explaining the method and the spectral"}, {"start": 68.52, "end": 74.44, "text": " perspective, I'm going to just tell you about these explicit graph laplacian"}, {"start": 74.44, "end": 79.6, "text": " regularization methods. So previously before GCN came and those spectral"}, {"start": 79.6, "end": 84.39999999999999, "text": " methods came, we mostly had like two classes of algorithms. We had on"}, {"start": 84.39999999999999, "end": 88.39999999999999, "text": " one side the graph embedding algorithms such as DeepWalk, such as"}, {"start": 88.39999999999999, "end": 94.39999999999999, "text": " Planetoid, etc. And on the other side we had these methods that did"}, {"start": 94.39999999999999, "end": 99.6, "text": " semi-supervised learning using these explicit graph laplacians. So let me just"}, {"start": 99.6, "end": 106.44, "text": " show you what this regularization term does. So basically if you take a look at"}, {"start": 106.44, "end": 112.03999999999999, "text": " the semi-supervised loss here, we have the supervised portion, the L0, and we"}, {"start": 112.03999999999999, "end": 117.32, "text": " have this regularization. So and you can see here how it looks like. So basically"}, {"start": 117.32, "end": 123.92, "text": " what it does is it enforces the nodes which are connected to have the same"}, {"start": 123.92, "end": 128.8, "text": " label or similar label. So it just smoothens out the labels. And there's just a"}, {"start": 128.8, "end": 134.4, "text": " hypothesis, there's just a prior, the bias this method has. It wants to make that"}, {"start": 134.4, "end": 138.24, "text": " the locally connected nodes have the same label, which is something that not,"}, {"start": 138.24, "end": 142.68, "text": " that might not be a good thing to do in all in all of the problems, but sometimes"}, {"start": 142.68, "end": 146.94, "text": " may be a good idea as well. 
So they say here the formulation of equation one"}, {"start": 146.94, "end": 150.32, "text": " relies on the assumption that connected nodes in the graph are likely to share"}, {"start": 150.32, "end": 155.8, "text": " the same label. This assumption, however, might restrict modeling capacity as graph"}, {"start": 155.8, "end": 160.76, "text": " edges need not necessarily encode node similarity, but could contain additional"}, {"start": 160.76, "end": 165.48, "text": " information. So GCNs showed later, we'll see the results, that they outperformed"}, {"start": 165.48, "end": 169.56, "text": " all of these baselines. And it's much better idea to just let the neural"}, {"start": 169.56, "end": 173.95999999999998, "text": " network learn from the features and from the graph structure than by just"}, {"start": 173.95999999999998, "end": 181.2, "text": " encoding this bias into the model. So what this thing does, and I'm"}, {"start": 181.2, "end": 186.0, "text": " going to draw a simple graph here because we'll need the adjacency matrices"}, {"start": 186.0, "end": 194.12, "text": " and all the terminology later. So let me just draw a simple toy graph here and do"}, {"start": 194.12, "end": 200.08, "text": " some simple connections like this. And this may be node number one, node number"}, {"start": 200.08, "end": 206.64, "text": " two, three, and finally four. So how the adjacency matrix looks like is the"}, {"start": 206.64, "end": 211.08, "text": " following. So that's an important concept and I'm going to repeat it even though I"}, {"start": 211.08, "end": 217.68, "text": " covered it in my last video. So basically what it does, it just represents the"}, {"start": 217.68, "end": 224.04000000000002, "text": " connection pattern of the graph. So we have zero here because node one"}, {"start": 224.04000000000002, "end": 227.44, "text": " doesn't connect itself, it doesn't have a self-connection, and we have ones here"}, {"start": 227.44, "end": 231.84, "text": " because it connects to all of the other nodes. And the other nodes just connect"}, {"start": 231.84, "end": 238.0, "text": " to node number one, so we'll have ones here and we'll have zeros elsewhere. So"}, {"start": 238.0, "end": 243.56, "text": " basically now what this equation does here is you just iterate through this"}, {"start": 243.56, "end": 249.28, "text": " adjacency matrix and you take the corresponding labels and you just"}, {"start": 249.28, "end": 252.8, "text": " subtract them and you square them. So basically that means if we go through"}, {"start": 252.8, "end": 260.44, "text": " the row number one, we'll basically have, we'll take this one and"}, {"start": 260.44, "end": 266.6, "text": " let's say for the sake of argument this one has the label two, node one has label"}, {"start": 266.6, "end": 272.44, "text": " one, node three has maybe label three, and this one also has label two. So what"}, {"start": 272.44, "end": 277.72, "text": " we'll do here is we'll take, because we have, so this is node number one, this is"}, {"start": 277.72, "end": 283.28000000000003, "text": " node number two, we have one here so that means it will get included into this sum"}, {"start": 283.28000000000003, "end": 287.90000000000003, "text": " and we'll just take the label. So the label from this node is two, so we take"}, {"start": 287.90000000000003, "end": 294.88, "text": " two, we subtract one, and we square it up. 
And we repeat that the same procedure for"}, {"start": 294.88, "end": 299.71999999999997, "text": " wherever we have one or wherever we have a connection in the adjacency matrix. So"}, {"start": 299.71999999999997, "end": 306.52, "text": " basically we'll have label three here, we'll take three, minus one, and we'll"}, {"start": 306.52, "end": 314.32, "text": " square it up. And again we'll have two here, minus one, and we'll square it up."}, {"start": 314.32, "end": 320.64, "text": " And we can see that none of these goes to zero and that's what this graph"}, {"start": 320.64, "end": 325.8, "text": " laplacian regularization term enforces the method to do. So in order for this to"}, {"start": 325.8, "end": 333.36, "text": " get to zero all of these labels should be changed to basically to one"}, {"start": 333.36, "end": 339.2, "text": " because that's what this node has. So that's it in a nutshell. So again f is"}, {"start": 339.2, "end": 343.84, "text": " just a function that maps the feature vectors that are associated with every"}, {"start": 343.84, "end": 350.2, "text": " node and just gives us the class. And we want those to be similar and smooth. So"}, {"start": 350.2, "end": 356.12, "text": " basically this regularization term won't allow the method to"}, {"start": 356.12, "end": 361.03999999999996, "text": " assign label 10, for example, to node number three if we assume we have 10"}, {"start": 361.03999999999996, "end": 365.36, "text": " classes. So that's it in a nutshell. So now I'm going to first"}, {"start": 365.36, "end": 371.59999999999997, "text": " cover how the GCM works because it's better to have a clear mental"}, {"start": 371.59999999999997, "end": 375.64, "text": " picture of how it works before we go into the spectral"}, {"start": 375.64, "end": 381.64, "text": " perspective. So this is how the equation two, how the GCM works. So we"}, {"start": 381.64, "end": 388.4, "text": " have this constant term which won't change during the propagation because it"}, {"start": 388.4, "end": 393.0, "text": " contains the adjacency matrix with added self connections which is the tilde"}, {"start": 393.0, "end": 398.32, "text": " term here, the tilde symbol, and then we have degree matrices which also contain"}, {"start": 398.32, "end": 403.68, "text": " self connections and those are just raised to the power of minus 1 half, which"}, {"start": 403.68, "end": 409.32, "text": " I'll briefly explain what that means in a sec. And then we have h which is"}, {"start": 409.32, "end": 414.24, "text": " just like a matrix of feature vectors for every single node and we have w which"}, {"start": 414.24, "end": 418.5, "text": " is just the linear projection layer. And again for the sake of easier"}, {"start": 418.5, "end": 424.88, "text": " understanding this part I'll just draw this graph. So I'll take the"}, {"start": 424.88, "end": 430.84000000000003, "text": " same example as before. So we have a graph, we have some connections like this"}, {"start": 430.84, "end": 438.88, "text": " one, and now what we do is the following. 
So we again have the matrix, adjacency"}, {"start": 438.88, "end": 447.71999999999997, "text": " matrix A, and so we had like the pattern was like this, we had ones here because"}, {"start": 447.71999999999997, "end": 454.79999999999995, "text": " node one connects to both nodes two, three, and four, hence the ones here, and"}, {"start": 454.8, "end": 460.8, "text": " then we have ones here because node number two connects only to to node"}, {"start": 460.8, "end": 466.8, "text": " number one, and we have zeros elsewhere. So now in order to make this like the"}, {"start": 466.8, "end": 471.08000000000004, "text": " tilde version we just need to add an identity matrix here which basically"}, {"start": 471.08000000000004, "end": 475.6, "text": " just adds the self connections. We'll have ones here, we'll have ones here, we'll"}, {"start": 475.6, "end": 480.44, "text": " have ones along the diagonal pretty much. And what this does is basically assumes"}, {"start": 480.44, "end": 487.64, "text": " we have self connections on every single node. So once we have this matrix A tilde"}, {"start": 487.64, "end": 494.2, "text": " we need to normalize it and we do that by using these degree tilde"}, {"start": 494.2, "end": 501.12, "text": " minus one half terms. So what does this represent? What does this mean? Basically"}, {"start": 501.12, "end": 507.64, "text": " this degree tilde matrix just does the summation row wise along this A tilde"}, {"start": 507.64, "end": 512.48, "text": " matrix. So we'll have a diagonal matrix which contains four here because we when"}, {"start": 512.48, "end": 517.84, "text": " we sum up we'll get four four here and that's what it means is basically this"}, {"start": 517.84, "end": 522.1999999999999, "text": " is a degree. We have four connections going to this node number one. That's why"}, {"start": 522.1999999999999, "end": 528.96, "text": " we have four here. Then we have two, two, and two because all the other nodes have"}, {"start": 528.96, "end": 535.6, "text": " only two connections. Okay so the next step is just to inverse it and do a"}, {"start": 535.6, "end": 542.12, "text": " square root. So basically what we end up doing once we once we multiply D tilde"}, {"start": 542.12, "end": 549.12, "text": " raised to the minus one half times A tilde and we do the same thing here we"}, {"start": 549.12, "end": 559.2, "text": " basically normalize all of these ones with this term. With one over square root"}, {"start": 559.2, "end": 570.32, "text": " of DI times DJ. Now what DI means is it's basically the degree of a node I and DJ"}, {"start": 570.32, "end": 576.2800000000001, "text": " is degree of node J. So basically this particular node which now has one"}, {"start": 576.2800000000001, "end": 581.6400000000001, "text": " corresponds to so that's a connection between node one and node two right? So"}, {"start": 581.6400000000001, "end": 587.6400000000001, "text": " that's this connection and because this node has degree of two that's the DJ and"}, {"start": 587.64, "end": 592.56, "text": " with this one has degree four we'll basically just normalize this term so"}, {"start": 592.56, "end": 599.8, "text": " this one here with one over square root four times two and that's square root of"}, {"start": 599.8, "end": 604.1999999999999, "text": " eight whatever that is. So that's what we do and that's how we normalize this"}, {"start": 604.1999999999999, "end": 609.24, "text": " matrix. 
And now all of this will hopefully fit together in a moment so I"}, {"start": 609.24, "end": 614.3199999999999, "text": " just wanted to explain how that matrix is calculated. So how the process"}, {"start": 614.32, "end": 618.32, "text": " looks like in the GCI method is the following. So all of these nodes have"}, {"start": 618.32, "end": 623.24, "text": " associated a feature vector and for the sake of being less abstract let me say"}, {"start": 623.24, "end": 628.6800000000001, "text": " this one has thousand four hundred thirty three features and for example"}, {"start": 628.6800000000001, "end": 632.98, "text": " this number is not like random. Quora data set the citation network which this"}, {"start": 632.98, "end": 639.36, "text": " paper also used as a benchmark has this many features per node where every node"}, {"start": 639.36, "end": 643.86, "text": " represents a paper in a citation graph. So basically all of these nodes will"}, {"start": 643.86, "end": 650.72, "text": " have this feature vector and high level what the method does is it first"}, {"start": 650.72, "end": 655.6800000000001, "text": " projects these these feature vectors using this matrix W which we can see"}, {"start": 655.6800000000001, "end": 661.2, "text": " here projects the projects I mean to some lower usually lower dimensional"}, {"start": 661.2, "end": 669.2, "text": " space. So it will basically be a fully connected feed-forward layer without the"}, {"start": 669.2, "end": 672.64, "text": " activation at this point or you can just treat it as an identity activation which"}, {"start": 672.64, "end": 677.3199999999999, "text": " is the same thing. And so basically this one will maybe have 64 dimensions and"}, {"start": 677.3199999999999, "end": 684.3199999999999, "text": " once we do this for every single node in the graph we end up having so that's"}, {"start": 684.3199999999999, "end": 688.92, "text": " this part and now what this part does and I'll explain why it basically just"}, {"start": 688.92, "end": 694.28, "text": " accumulates or aggregates which is a term used in the graph literature these"}, {"start": 694.28, "end": 701.6, "text": " features and it just scales them using these square root over di dj things and"}, {"start": 701.6, "end": 709.2, "text": " you add them up and you get the new representation. So basically this"}, {"start": 709.2, "end": 714.24, "text": " node one will accumulate all of the vectors because all of them connect all"}, {"start": 714.24, "end": 718.44, "text": " of these nodes connect to it so it will have so its final representation will"}, {"start": 718.44, "end": 722.32, "text": " contain some portions of all of these other feature vectors. On the other hand"}, {"start": 722.32, "end": 728.24, "text": " if we take node two it will have only the feature vector that came from itself"}, {"start": 728.24, "end": 732.5600000000001, "text": " and the came from node one. So that's the difference. So you just basically take"}, {"start": 732.5600000000001, "end": 736.76, "text": " the one hop neighborhood you just aggregate those projected features you"}, {"start": 736.76, "end": 742.5600000000001, "text": " sum them up but before you sum you use those like scale these scaling factors"}, {"start": 742.5600000000001, "end": 748.04, "text": " and that's it. 
You basically only apply after you do that you apply some"}, {"start": 748.04, "end": 752.8, "text": " nonlinear differentiable activation functions such as ReLU and I assume"}, {"start": 752.8, "end": 756.24, "text": " everybody knows what ReLU is but I'm gonna just draw it for the sake of"}, {"start": 756.24, "end": 762.6800000000001, "text": " argument so basically it's like a zero here and pretty much y equals"}, {"start": 762.6800000000001, "end": 770.76, "text": " to x in this part. So if you still can't associate this equation with the process"}, {"start": 770.76, "end": 775.08, "text": " I just described I'm going to go once more and explaining the exact process"}, {"start": 775.08, "end": 781.28, "text": " that goes on in this vectorized form. So I want to explain concepts I try and do"}, {"start": 781.28, "end": 785.6, "text": " it as intuitive as possible but like these all of these equations are usually"}, {"start": 785.6, "end": 790.96, "text": " vectorized and which makes them less maybe less intuitive but once you kind"}, {"start": 790.96, "end": 795.36, "text": " of get the hang of it it becomes really easy. So let me try and explain every"}, {"start": 795.36, "end": 800.48, "text": " single matrix. So H matrix is basically as I said you just we just stack"}, {"start": 800.48, "end": 805.44, "text": " horizontally all of these feature vectors. So because we have n nodes this"}, {"start": 805.44, "end": 813.4, "text": " H matrix in the zeroth level so initially it will just be basically n"}, {"start": 813.4, "end": 820.52, "text": " times whatever the dimension here is and that's we said 1433. So this one will be"}, {"start": 820.52, "end": 829.6, "text": " so again it will be n times 1433 that's the size. Then we have this w matrix and"}, {"start": 829.6, "end": 836.68, "text": " w projects from 1433 into we said I took like as an example I took 64. So it will"}, {"start": 836.68, "end": 842.72, "text": " be a matrix that has the same dimension here will be here so we'll have"}, {"start": 842.72, "end": 855.96, "text": " pretty much 1433 times 64. So we'll have 64 dimensions here. So basically this"}, {"start": 855.96, "end": 863.88, "text": " corresponds to this one and once you multiply these matrices so H times w"}, {"start": 863.88, "end": 868.64, "text": " we'll end up with a new matrix which contains the projected feature"}, {"start": 868.64, "end": 872.56, "text": " vectors for every single vector for every single node in the in the in the"}, {"start": 872.56, "end": 877.3199999999999, "text": " graph. So we'll end up with a new matrix some intermediate matrix that has a"}, {"start": 877.3199999999999, "end": 884.56, "text": " dimension n times this times 64 and those are these small vectors I draw"}, {"start": 884.56, "end": 890.28, "text": " here and here etc. So hopefully that part is so far so good. So that's this part"}, {"start": 890.28, "end": 897.64, "text": " okay and finally we need to multiply by this by this normal by this normalized"}, {"start": 897.64, "end": 902.48, "text": " adjacency matrix and let's let's figure out that part. So now we have so let me"}, {"start": 902.48, "end": 906.88, "text": " take this example we already have concrete numbers here I just don't have"}, {"start": 906.88, "end": 914.6, "text": " these 1 over di dj parts but you can just imagine they are there and what we"}, {"start": 914.6, "end": 919.78, "text": " do so I'm going to redraw it again here just to make it a bit more clear. 
So we"}, {"start": 919.78, "end": 925.4399999999999, "text": " have ones here we have ones here and because of the self connections we have"}, {"start": 925.44, "end": 931.72, "text": " ones along the diagonal as well everything else is zero. So I just stack"}, {"start": 931.72, "end": 936.2800000000001, "text": " four nodes because n equals to 4 here right in our small graph we have only"}, {"start": 936.2800000000001, "end": 944.84, "text": " four nodes. So we stack up those 64 dimensional vectors and I won't draw"}, {"start": 944.84, "end": 950.1600000000001, "text": " every single element because they are initially just some random values. So"}, {"start": 950.1600000000001, "end": 954.5200000000001, "text": " basically what we do is the following so the first node as you can see because"}, {"start": 954.52, "end": 960.0799999999999, "text": " it has all ones it will basically take into account every single projected"}, {"start": 960.0799999999999, "end": 963.4, "text": " vector and that's what I previously explained so because it's connected to"}, {"start": 963.4, "end": 967.68, "text": " everything it will take all of the feature vectors the projected ones and"}, {"start": 967.68, "end": 974.4, "text": " it will just add them up and we have those terms as I mentioned which I want"}, {"start": 974.4, "end": 978.52, "text": " whatever like square root whatever corresponds to the degrees of those"}, {"start": 978.52, "end": 984.84, "text": " nodes. So that's the case for the node 1 but let's take for example node 2. So"}, {"start": 984.84, "end": 990.72, "text": " node 2 only has one here and one here and it has zeros here that means once"}, {"start": 990.72, "end": 996.0, "text": " you multiply this row with this matrix you'll just basically take these two"}, {"start": 996.0, "end": 999.96, "text": " feature vectors and that corresponds to the thing I mentioned previously and"}, {"start": 999.96, "end": 1003.8, "text": " that's just you just take this feature vector and you just take this feature"}, {"start": 1003.8, "end": 1010.0, "text": " vector and you add them up and you use those scales again and that's that's why"}, {"start": 1010.0, "end": 1014.0799999999999, "text": " the equation works and it's basically does the same thing as what I just"}, {"start": 1014.0799999999999, "end": 1018.52, "text": " intuitively explained here using the drawing. Hopefully this this makes sense"}, {"start": 1018.52, "end": 1024.2, "text": " and I went into a lot of details please let me know if this was too too"}, {"start": 1024.2, "end": 1029.32, "text": " detailed painfully detailed or it was just right so appreciate the feedback."}, {"start": 1029.32, "end": 1035.0, "text": " Now the reason they use these coefficients here to to to normalize"}, {"start": 1035.0, "end": 1042.56, "text": " the adjacency matrix values is because if we didn't do that we'd have a problem"}, {"start": 1042.56, "end": 1046.8799999999999, "text": " with exploding or vanishing gradients which can nicely be seen through this"}, {"start": 1046.8799999999999, "end": 1050.84, "text": " spectral prism which I'm going to explain right now. So basically if you're"}, {"start": 1050.84, "end": 1057.08, "text": " familiar with eigen decompositions what those scaling coefficients do is"}, {"start": 1057.08, "end": 1062.48, "text": " basically they make sure that the largest eigenvalue of our matrix is"}, {"start": 1062.48, "end": 1067.8, "text": " equal to 1. 
So basically they make sure that the range of eigenvalues is"}, {"start": 1067.8, "end": 1072.96, "text": " between 0 and 1 and by just continually applying that matrix we won't have"}, {"start": 1072.96, "end": 1077.24, "text": " exploding or vanishing gradients. So that's that's that's the trick. So this"}, {"start": 1077.24, "end": 1082.28, "text": " part by just constantly multiplying by this term if the eigenvalue range is in"}, {"start": 1082.28, "end": 1086.9199999999998, "text": " a good in a good range basically we won't have the problem I just mentioned"}, {"start": 1086.92, "end": 1093.04, "text": " with vanishing or exploding gradients. So because the the spectral part is really"}, {"start": 1093.04, "end": 1097.88, "text": " complicated it deserves a video for itself I'm just going to briefly explain"}, {"start": 1097.88, "end": 1102.72, "text": " the main ideas here and hopefully if you think that that could be useful I'll"}, {"start": 1102.72, "end": 1106.92, "text": " cover the specs some of the spectral methods in one of the next videos"}, {"start": 1106.92, "end": 1112.04, "text": " although they are not as popular as these spatial methods where GCN is one"}, {"start": 1112.04, "end": 1118.92, "text": " representative, GET is another representative, etc. GraphSAGE, etc. So how"}, {"start": 1118.92, "end": 1123.72, "text": " it works how the spectral graph convolutional method works in a nutshell"}, {"start": 1123.72, "end": 1128.96, "text": " is the following. So you take the Laplacian matrix which is basically the"}, {"start": 1128.96, "end": 1133.1599999999999, "text": " Green matrix and you subtract the adjacency matrix and you find the"}, {"start": 1133.1599999999999, "end": 1137.2, "text": " eigen decomposition of this matrix and you get eigenvectors and you get"}, {"start": 1137.2, "end": 1142.1200000000001, "text": " eigenvalues. So once you stack up those eigenvectors in a matrix you get these"}, {"start": 1142.1200000000001, "end": 1148.3600000000001, "text": " U matrices and what you do is you take the signal on your graph X so what's the"}, {"start": 1148.3600000000001, "end": 1156.72, "text": " signal? So just for the just for the sake of being of being clear so basically"}, {"start": 1156.72, "end": 1160.92, "text": " you take these feature vectors you take maybe first feature from every single"}, {"start": 1160.92, "end": 1166.32, "text": " feature vector and you can treat those as as a signal. So basically imagine we"}, {"start": 1166.32, "end": 1173.36, "text": " have we just align these nodes linearly and we take those first features from"}, {"start": 1173.36, "end": 1177.48, "text": " all of the feature vectors and maybe this one has value like this maybe this"}, {"start": 1177.48, "end": 1182.4399999999998, "text": " has a bit bigger value and you basically can treat this if I interpolate some"}, {"start": 1182.4399999999998, "end": 1186.24, "text": " random interpolation you can treat it as a signal a discretized signal on the"}, {"start": 1186.24, "end": 1192.7, "text": " graph. So what we do here in the spectral method is we project that signal into"}, {"start": 1192.7, "end": 1199.0800000000002, "text": " the into the new basis defined by the laplacian eigenvectors and those are"}, {"start": 1199.0800000000002, "end": 1203.8, "text": " contained in U. 
Then we have this this is what is learned in these methods so we"}, {"start": 1203.8, "end": 1208.92, "text": " don't learn the W matrix as previously in GCN we'll learn this filter G theta"}, {"start": 1208.92, "end": 1215.8400000000001, "text": " G sub theta and now the problem with this is and it says here it's"}, {"start": 1215.8400000000001, "end": 1219.8, "text": " computationally expensive as multiplication with the eigenvector"}, {"start": 1219.8, "end": 1227.08, "text": " matrix is O and so big O notation n square so basically because this matrix"}, {"start": 1227.08, "end": 1233.6399999999999, "text": " U is m times n and X is n times 1 because we remember we just took the"}, {"start": 1233.6399999999999, "end": 1239.1599999999999, "text": " like first feature or whatever like just one feature instead of the whole feature"}, {"start": 1239.1599999999999, "end": 1243.44, "text": " fully blown X matrix which has all of the features so basically that's why we"}, {"start": 1243.44, "end": 1249.1599999999999, "text": " have a computational cost of n squared and that's not the only problem the"}, {"start": 1249.16, "end": 1253.0400000000002, "text": " second problem is that doing the eigen decomposition of the laplacian is super"}, {"start": 1253.0400000000002, "end": 1257.68, "text": " expensive it's I think it depends on the like there are more optimal methods"}, {"start": 1257.68, "end": 1264.5800000000002, "text": " depending on certain like details but like O of n squared n cubed is probably"}, {"start": 1264.5800000000002, "end": 1269.64, "text": " a good estimate of the complexity and so that's super expensive so then there"}, {"start": 1269.64, "end": 1276.16, "text": " were so Hammond and his collaborators suggested that we can just instead of"}, {"start": 1276.16, "end": 1280.92, "text": " learning all of the n parameters here so where n is the side of the graph which"}, {"start": 1280.92, "end": 1284.44, "text": " can be in the size of billions if we're dealing dealing with for example"}, {"start": 1284.44, "end": 1290.5600000000002, "text": " Facebook graph or Pinterest wherever and basically you can we can treat this we"}, {"start": 1290.5600000000002, "end": 1296.3200000000002, "text": " can just compute like a like a K polynomial and using using Chebyshev"}, {"start": 1296.3200000000002, "end": 1302.0800000000002, "text": " polynomial and we can just use this approximation now I won't get into"}, {"start": 1302.08, "end": 1306.1999999999998, "text": " details how this approximation came to be there will be a separate video but"}, {"start": 1306.1999999999998, "end": 1313.1599999999999, "text": " like just trust me on this one and it can be shown that instead of using so"}, {"start": 1313.1599999999999, "end": 1318.48, "text": " this matrix here lambda contains eigenvalues we can ditch computing the"}, {"start": 1318.48, "end": 1324.36, "text": " eigen decomposition and we can save those n cubed computations and we can"}, {"start": 1324.36, "end": 1331.02, "text": " just do a Chebyshev polynomial using the this this laplacian matrix instead"}, {"start": 1331.02, "end": 1337.6, "text": " instead of using the eigenvalues matrix so Chebyshev you can see a simple"}, {"start": 1337.6, "end": 1342.06, "text": " recursion again won't get into much details it's really simple recursion we"}, {"start": 1342.06, "end": 1347.2, "text": " start with one and X and this is how you combine them to get the next polynomial"}, {"start": 1347.2, "end": 1354.16, "text": " terms and you 
basically end up having so this is how we do convolution spectral"}, {"start": 1354.16, "end": 1358.52, "text": " convolution now so we just have K terms here we have K parameters so these are"}, {"start": 1358.52, "end": 1365.16, "text": " scalars we just learned these and it's much more efficient this time so the"}, {"start": 1365.16, "end": 1371.8, "text": " thing with this this equation here approximation is that it's in contrast"}, {"start": 1371.8, "end": 1377.76, "text": " to the pure spectral methods like this one this one is K localized and that"}, {"start": 1377.76, "end": 1383.72, "text": " means that pretty much means that all of the if we have a graph and you take a"}, {"start": 1383.72, "end": 1389.64, "text": " single node it will only capture the features from its k-hop neighborhood so"}, {"start": 1389.64, "end": 1396.04, "text": " meaning the the neighbors which are further away are only our K edges far"}, {"start": 1396.04, "end": 1401.72, "text": " far off from from this target node so why that is is because if you know"}, {"start": 1401.72, "end": 1406.9, "text": " something about adjacency matrix properties if you just raise it to the"}, {"start": 1406.9, "end": 1414.88, "text": " power of K what you end up doing is the term ij in this matrix will tell you how"}, {"start": 1414.88, "end": 1423.2, "text": " many paths there are between nodes i and j like k length paths so basically"}, {"start": 1423.2, "end": 1430.24, "text": " that's that's the reason why why why we have a k-hop neighborhood here now again"}, {"start": 1430.24, "end": 1434.6000000000001, "text": " if you didn't understand this detail it's totally fine that's not the point"}, {"start": 1434.6, "end": 1438.84, "text": " of this video so now what GCN does is basically it starts from this"}, {"start": 1438.84, "end": 1442.08, "text": " approximation so this is a an approximation of the fully blown"}, {"start": 1442.08, "end": 1447.24, "text": " spectral method and it does the following it just says let's just treat"}, {"start": 1447.24, "end": 1451.28, "text": " let's just take a look at one hop neighborhoods we don't care about k-hop"}, {"start": 1451.28, "end": 1456.7199999999998, "text": " neighborhoods we want to make we set k to 1 so no so as they say here now"}, {"start": 1456.7199999999998, "end": 1461.08, "text": " imagine we will limit it the layer wise convolution operation to k1 so that's"}, {"start": 1461.08, "end": 1465.9199999999998, "text": " the first approximation they do and so they say it here nicely we intuitively"}, {"start": 1465.9199999999998, "end": 1470.36, "text": " expect that such a model can alleviate the problem of overfitting on local"}, {"start": 1470.36, "end": 1474.8, "text": " neighborhood structures for graphs with very wide node degree distributions such"}, {"start": 1474.8, "end": 1478.28, "text": " as social networks citation networks knowledge graphs and many other real"}, {"start": 1478.28, "end": 1483.82, "text": " world graph datasets so that's the explanation the the argument why they"}, {"start": 1483.82, "end": 1488.4399999999998, "text": " set k to 1 basically for certain graphs like citation networks which they are"}, {"start": 1488.44, "end": 1496.6000000000001, "text": " using as benchmarks in this paper like Cora sites here etc they can be they"}, {"start": 1496.6000000000001, "end": 1501.92, "text": " are basically something called small world networks and this approximation"}, {"start": 1501.92, "end": 1506.96, "text": " makes sense the second 
approximation they do is they say here we further"}, {"start": 1506.96, "end": 1512.98, "text": " approximate lambda max so the maximum eigenvalue to be approximately 2 as we"}, {"start": 1512.98, "end": 1515.92, "text": " can expect that neural network parameters will adapt to this change in"}, {"start": 1515.92, "end": 1520.96, "text": " scale during training okay so basically once you take those so k equals to 1 and"}, {"start": 1520.96, "end": 1526.3200000000002, "text": " lambda max equals to 2 you basically going from this equation you end up"}, {"start": 1526.3200000000002, "end": 1535.68, "text": " with with this one and basically now you can just if you take a look at what well"}, {"start": 1535.68, "end": 1542.96, "text": " lambda y y l minus identity here is because they're using they're having a"}, {"start": 1542.96, "end": 1551.88, "text": " Chebyshev polynomial of this scaled laplacian so basically what that means"}, {"start": 1551.88, "end": 1559.08, "text": " is so if we expand this polynomial we'll end up with first term will be I the"}, {"start": 1559.08, "end": 1569.3600000000001, "text": " second one will be this L tilde then we'll have 2 L tilde squared minus I"}, {"start": 1569.36, "end": 1577.24, "text": " etc depending how many how big the K is so if we if we consider L max to be 2"}, {"start": 1577.24, "end": 1584.24, "text": " we'll ditch this term and we'll end up with L minus identity and that's what we"}, {"start": 1584.24, "end": 1590.84, "text": " get here so that's the L minus identity part and now finally there is one more"}, {"start": 1590.84, "end": 1595.08, "text": " approximation they do and that's constraining these two to be the same"}, {"start": 1595.08, "end": 1601.36, "text": " thing so theta 0 and theta 1 let's read them to be the same parameter and you"}, {"start": 1601.36, "end": 1607.6, "text": " can extract it and basically you get this equation and the last trick they do"}, {"start": 1607.6, "end": 1612.12, "text": " because if you take a look at this matrix the range of eigenvalues will be"}, {"start": 1612.12, "end": 1619.1599999999999, "text": " from 0 to 2 which means if we continually keep on multiplying features"}, {"start": 1619.16, "end": 1625.92, "text": " X projected features we have theta here times this matrix will end up"}, {"start": 1625.92, "end": 1631.24, "text": " increasing the values or the consequently the gradients will either"}, {"start": 1631.24, "end": 1635.68, "text": " explode or diminish and that's something we want to avoid and that's why they do"}, {"start": 1635.68, "end": 1640.16, "text": " this normalization trick and what they basically do is they add up self"}, {"start": 1640.16, "end": 1644.5600000000002, "text": " connections and we saw that when I was explaining the method itself so"}, {"start": 1644.56, "end": 1652.48, "text": " basically that's it once you once you just swap this part with this part you"}, {"start": 1652.48, "end": 1657.96, "text": " end up with the final method and that's the the form I previously I previously"}, {"start": 1657.96, "end": 1663.08, "text": " explained so that the good thing about this method it has a complexity that's"}, {"start": 1663.08, "end": 1669.8, "text": " proportional to the number of edges in the graph and not to do like a number of"}, {"start": 1669.8, "end": 1675.28, "text": " nodes like raise the power of three for like spectral spectral methods which was"}, {"start": 1675.28, "end": 1682.28, "text": " super super compute intensive so yeah it was a it was a 
mouthful there is a lot"}, {"start": 1682.28, "end": 1686.8, "text": " of details and you need to read a couple of papers to understand every single"}, {"start": 1686.8, "end": 1692.6, "text": " part so for example why this approximation works why we can why we"}, {"start": 1692.6, "end": 1700.9199999999998, "text": " can swap eigenvalues with laplacians here and yeah that's probably the"}, {"start": 1700.9199999999998, "end": 1705.52, "text": " hardest part then you can just this is simple thing this one is also"}, {"start": 1705.52, "end": 1710.9599999999998, "text": " conceptually simple and a couple more steps in your where the final stage but"}, {"start": 1710.9599999999998, "end": 1717.76, "text": " that's why I said that maybe not looking at at the GCN through this spectral"}, {"start": 1717.76, "end": 1723.2, "text": " prism but by looking at GCN maybe as a special case of get method or as a"}, {"start": 1723.2, "end": 1728.56, "text": " message passing network in general is much more like feels more natural than"}, {"start": 1728.56, "end": 1732.04, "text": " than treating it as a special as a spectral method because it's not"}, {"start": 1732.04, "end": 1735.68, "text": " spectral method we don't have to compute it's not like the final expression"}, {"start": 1735.68, "end": 1739.04, "text": " doesn't have to do anything with spectrum we don't have to compete to"}, {"start": 1739.04, "end": 1743.76, "text": " compute eigenvalues you don't have to compute eigenvectors so is that it was"}, {"start": 1743.76, "end": 1747.6, "text": " just motivated by by the spectral methods but it's not a spectral method"}, {"start": 1747.6, "end": 1752.84, "text": " itself so just avoid being confused there the hardest part we covered the"}, {"start": 1752.84, "end": 1757.04, "text": " method we covered how it was motivated by the spectral methods and we saw how"}, {"start": 1757.04, "end": 1762.8999999999999, "text": " the explicit graph laplacian regularization works so now let me just"}, {"start": 1762.8999999999999, "end": 1767.6, "text": " walk you through the rest of the paper a couple of more things details so once"}, {"start": 1767.6, "end": 1772.1599999999999, "text": " you pre train GCN with two layers on the core data set and you extract the hidden"}, {"start": 1772.16, "end": 1777.68, "text": " layers the hidden features you can project them into 2d space using"}, {"start": 1777.68, "end": 1781.88, "text": " t-sne and you get this and if you remember if you if you watch my previous"}, {"start": 1781.88, "end": 1787.92, "text": " video on get you can see that get maybe had a bit better separation between a"}, {"start": 1787.92, "end": 1794.16, "text": " Cora classes than GCN but once you actually once you and you can also"}, {"start": 1794.16, "end": 1799.8000000000002, "text": " verify that because it had had better accuracy than GCN but I'll get to those"}, {"start": 1799.8, "end": 1805.8799999999999, "text": " results in a bit but basically you can you can see that get actually has on"}, {"start": 1805.8799999999999, "end": 1810.2, "text": " these types of networks on citation networks the attention coefficients it"}, {"start": 1810.2, "end": 1814.72, "text": " used for the neighborhood are pretty similar and kind of approximate GCN"}, {"start": 1814.72, "end": 1820.76, "text": " although it's more more expressive and thus has a better results okay so again"}, {"start": 1820.76, "end": 1825.56, "text": " semi-supervised learning pretty simpler simple thing and I explained that in 
get"}, {"start": 1825.56, "end": 1830.9199999999998, "text": " but like for the sake of being complete here you just do cross entropy so you"}, {"start": 1830.9199999999998, "end": 1835.56, "text": " basically have a huge network and you just take for every single class and"}, {"start": 1835.56, "end": 1840.44, "text": " Cora has like seven classes you only take 20 notes whose labels you're going"}, {"start": 1840.44, "end": 1844.8, "text": " to use you don't use other labels you do use the features but you don't use the"}, {"start": 1844.8, "end": 1851.6799999999998, "text": " labels and finally once you get to the final layer the feature vectors now have"}, {"start": 1851.68, "end": 1856.8, "text": " the number of classes like seven for Cora and you'll just do you'll just do"}, {"start": 1856.8, "end": 1860.72, "text": " cross entropy loss on on those on those on those values and that's how you train"}, {"start": 1860.72, "end": 1868.0, "text": " these models so basically for the sake of being clear so what would we do again"}, {"start": 1868.0, "end": 1873.88, "text": " let's say this is Cora so we start with thousand four hundred thirty three we do"}, {"start": 1873.88, "end": 1879.64, "text": " the projection we do the aggregation so in the hidden layer we'll have maybe 64"}, {"start": 1879.64, "end": 1885.24, "text": " features and again we do apply in our layer and because Cora has seven classes"}, {"start": 1885.24, "end": 1892.6000000000001, "text": " the final feature vectors will only have seven features and we do softmax so we"}, {"start": 1892.6000000000001, "end": 1897.48, "text": " end up with a probability distribution and we just take a look where the true"}, {"start": 1897.48, "end": 1904.96, "text": " classes and we do the cross entropy so if it's maybe 0.4 the loss will be log"}, {"start": 1904.96, "end": 1912.8, "text": " 0.4 and just a minus there and that means this needs to go to one for the"}, {"start": 1912.8, "end": 1917.32, "text": " ground truth class in order for the loss to go to zero which makes sense and you"}, {"start": 1917.32, "end": 1924.52, "text": " just do gradient based learning and they used Adam for GCN and that's it in a"}, {"start": 1924.52, "end": 1928.8, "text": " nutshell and again they visualize these features here let's see a couple more"}, {"start": 1928.8, "end": 1934.16, "text": " things in this paper the next thing is so I already mentioned we have"}, {"start": 1934.16, "end": 1938.48, "text": " previously to GCN and the spectral methods we have we had two broad"}, {"start": 1938.48, "end": 1942.8400000000001, "text": " categories the ones that use explicit graph laplation and the one that used a"}, {"start": 1942.8400000000001, "end": 1947.48, "text": " graph embedding approach so the graph embedding approach basically creates"}, {"start": 1947.48, "end": 1952.64, "text": " random walks it's similar to vertebrae if you're familiar with NLP yeah it's"}, {"start": 1952.64, "end": 1957.64, "text": " basically the same idea just apply to graphs you treat random walks on graphs"}, {"start": 1957.64, "end": 1964.2800000000002, "text": " as sentences and you basically train your embeddings such that they are such"}, {"start": 1964.2800000000002, "end": 1969.5200000000002, "text": " that the ones that appear close to each other attend to have similar embeddings"}, {"start": 1969.5200000000002, "end": 1977.2, "text": " in a nutshell so I mentioned previously they used three citation networks sites"}, {"start": 1977.2, "end": 1982.6000000000001, 
"text": " here core and PubMed there's this knowledge graph as benchmarks and yeah I"}, {"start": 1982.6, "end": 1987.8, "text": " mentioned that Cora has thousand so this we're using this example as a as a as a"}, {"start": 1987.8, "end": 1993.6799999999998, "text": " as on the fly example throughout this video and basically you can see some"}, {"start": 1993.6799999999998, "end": 1999.36, "text": " statistics related to that like data set and it has seven classes as I already"}, {"start": 1999.36, "end": 2005.6, "text": " mentioned so and I mentioned they're using only 20 labels per class so so"}, {"start": 2005.6, "end": 2012.28, "text": " that means so you have so you have the core has 2,700 nodes but you only use"}, {"start": 2012.28, "end": 2022.16, "text": " for hundred forty to train the the GCN method okay they also have some random"}, {"start": 2022.16, "end": 2029.08, "text": " graphs I'll explain those briefly why they mentioned those ones okay so here"}, {"start": 2029.08, "end": 2033.96, "text": " are the results you can basically they compared against these these methods used"}, {"start": 2033.96, "end": 2038.68, "text": " at explicit graph laplation regularization deep walk and planetoid"}, {"start": 2038.68, "end": 2044.28, "text": " our graph embedding methods and here is GCN and we can see on every single data"}, {"start": 2044.28, "end": 2049.96, "text": " set they have a significant improvement compared to the baselines and yeah"}, {"start": 2049.96, "end": 2057.2000000000003, "text": " that's nice okay finally they compared different methods and this is the reason"}, {"start": 2057.2000000000003, "end": 2062.28, "text": " I had to go into a bit more details in the spectral section part so basically"}, {"start": 2062.28, "end": 2067.32, "text": " they compare the Chebyshev so the one that the the approximation of the full"}, {"start": 2067.32, "end": 2072.92, "text": " spectral method and they tried using K2 and K3 so that means they're using K 2"}, {"start": 2072.92, "end": 2078.2000000000003, "text": " hop and 3 hop neighborhoods and you can see the results are worse than GCN method"}, {"start": 2078.2000000000003, "end": 2082.92, "text": " and the renormalization trick is the official the final GCN method you can"}, {"start": 2082.92, "end": 2090.0800000000004, "text": " see so here going from top to bottom are the approximation we were making so here"}, {"start": 2090.0800000000004, "end": 2096.2400000000002, "text": " we just set K to 1 then we we we constrain these two tatas to be the same"}, {"start": 2096.24, "end": 2102.52, "text": " here in the single parameter row and we can see that all of the results are"}, {"start": 2102.52, "end": 2108.3599999999997, "text": " worse than by using the final the final GCN version they also compared against"}, {"start": 2108.3599999999997, "end": 2113.8399999999997, "text": " the first order term only so they ditch this part and they just use the the"}, {"start": 2113.8399999999997, "end": 2118.68, "text": " first order term and we also get worse results and finally the reason I"}, {"start": 2118.68, "end": 2124.6, "text": " highlighted MLP here is that it doesn't use the graph structure and you can see"}, {"start": 2124.6, "end": 2129.72, "text": " that the the performance the the accuracy is far like lower than all of"}, {"start": 2129.72, "end": 2135.6, "text": " the other methods so that's the reason so even so so basically it's really"}, {"start": 2135.6, "end": 2139.8399999999997, "text": " important to use the the 
graph structure because it contains a lot of"}, {"start": 2139.8399999999997, "end": 2145.08, "text": " information so how they train the MLP I suppose is so again you have you have"}, {"start": 2145.08, "end": 2149.88, "text": " the graph and you have features associated with them and you just"}, {"start": 2149.88, "end": 2155.04, "text": " basically independently apply the MLP here so this is maybe the first layer"}, {"start": 2155.04, "end": 2159.76, "text": " and then you have the second layer and you end up with seven classes here and"}, {"start": 2159.76, "end": 2168.84, "text": " you just take those four hundred hundred forty notes sorry to to basically train"}, {"start": 2168.84, "end": 2175.0, "text": " this MLP and you basically repeat the same so these are the same weights these"}, {"start": 2175.0, "end": 2179.2400000000002, "text": " are the shared weights you apply them to the first feature vector then you go to"}, {"start": 2179.24, "end": 2184.4799999999996, "text": " the second one blah blah blah you apply the the the cross entropy loss and you"}, {"start": 2184.4799999999996, "end": 2189.04, "text": " train the MLP and that's and that's so so you can see in this process it's not"}, {"start": 2189.04, "end": 2192.8799999999997, "text": " using the connectivity pattern so it's ignoring the connectivity pattern"}, {"start": 2192.8799999999997, "end": 2197.8799999999997, "text": " altogether okay let's continue and finish up wrap up this paper there are a"}, {"start": 2197.8799999999997, "end": 2201.72, "text": " couple more things I want to mention the first thing is and this is interesting"}, {"start": 2201.72, "end": 2206.64, "text": " so so basically you can see the training times here on the y-axis we have seconds"}, {"start": 2206.64, "end": 2211.4, "text": " per epoch and on the x-axis we have number of edges and this is where those"}, {"start": 2211.4, "end": 2216.72, "text": " random graphs can come into place so what they did is they just create n"}, {"start": 2216.72, "end": 2224.52, "text": " notes and they just take two n edges and they randomly uniformly just connect"}, {"start": 2224.52, "end": 2229.68, "text": " all of these notes and that's the graphs they were using and they just use a"}, {"start": 2229.68, "end": 2234.72, "text": " featureless approach where basically you just have a one hot vector or the"}, {"start": 2234.72, "end": 2242.12, "text": " adjacency matrix is pretty much the identity matrix and they use these graphs"}, {"start": 2242.12, "end": 2246.8799999999997, "text": " to just a benchmark like the test the speed of these their models and we can"}, {"start": 2246.8799999999997, "end": 2253.68, "text": " see that as we go to a huge number of edges the speed is still around one to"}, {"start": 2253.68, "end": 2260.9199999999996, "text": " ten seconds on CPU and GPU and compare that transformers which need days upon"}, {"start": 2260.92, "end": 2265.56, "text": " days of like huge machines to train them and but once you think about it it's not"}, {"start": 2265.56, "end": 2271.64, "text": " that interesting because we only have a W matrix in the case of GCN to train and"}, {"start": 2271.64, "end": 2276.32, "text": " that's a small number of parameters although as the number of nodes gets"}, {"start": 2276.32, "end": 2281.7200000000003, "text": " bigger the computation also gets bigger but using sparse dense matrix"}, {"start": 2281.7200000000003, "end": 2285.96, "text": " multiplication and some methods you can you can speed it up but it's 
interesting"}, {"start": 2285.96, "end": 2291.16, "text": " it's really it's really fast to train these networks and that's that's cool"}, {"start": 2291.16, "end": 2297.32, "text": " okay let's go and see a couple more things so I mentioned memory requirement"}, {"start": 2297.32, "end": 2302.88, "text": " and so I'm actually currently implementing the graph attention"}, {"start": 2302.88, "end": 2308.04, "text": " network and I'll open source it in a couple of weeks probably but like the"}, {"start": 2308.04, "end": 2313.96, "text": " the hardest part I found like implementing these networks is how to"}, {"start": 2313.96, "end": 2317.6, "text": " efficiently implement the layer and that's that's pretty much everything"}, {"start": 2317.6, "end": 2323.2400000000002, "text": " because you want to there is all of these like sparse formats and matrix"}, {"start": 2323.2400000000002, "end": 2328.28, "text": " sparse dense matrix multiplications happening and how what's the best way to"}, {"start": 2328.28, "end": 2332.76, "text": " represent your edges whether using edge index or just using the plain old sparse"}, {"start": 2332.76, "end": 2336.84, "text": " adjacency matrix and then finding some suitable format to represent it anyways"}, {"start": 2336.84, "end": 2340.92, "text": " I don't want to get into too much detail just keep that in mind that pretty much"}, {"start": 2340.92, "end": 2348.2400000000002, "text": " the hard hardest part here is to like efficiently implement these methods yeah"}, {"start": 2348.2400000000002, "end": 2352.04, "text": " they see here our framework currently does not naturally support edge features"}, {"start": 2352.04, "end": 2357.64, "text": " and is limited to undirected graphs weighted or unweighted that's that's"}, {"start": 2357.64, "end": 2362.4, "text": " something to keep in mind because both Cora and all of these benchmarks they"}, {"start": 2362.4, "end": 2366.7200000000003, "text": " used they don't have any edge features they just have like simple binary"}, {"start": 2366.72, "end": 2373.68, "text": " connections okay that was the the main part of paper I mentioned in the"}, {"start": 2373.68, "end": 2377.08, "text": " beginning of this video that I have three perspectives we can treat this so"}, {"start": 2377.08, "end": 2382.14, "text": " we can we can look at this GCN paper through three different perspectives the"}, {"start": 2382.14, "end": 2386.64, "text": " one is spectral method perspective and now let me show you the second one the"}, {"start": 2386.64, "end": 2393.08, "text": " Weiss-Feyler-Lehmann algorithm so basically WL algorithm or Weiss-Feyler-Lehmann"}, {"start": 2393.08, "end": 2398.2, "text": " specifically the 1D case of this algorithm what it does it's a simple"}, {"start": 2398.2, "end": 2403.0, "text": " isomorphism graph isomorphism test what that means is basically you have two"}, {"start": 2403.0, "end": 2410.08, "text": " graphs so graph G1 and you take second graph graph G2 and you pass it in into"}, {"start": 2410.08, "end": 2415.64, "text": " this test and if the if the final something called node colorings are the"}, {"start": 2415.64, "end": 2420.12, "text": " same no if they're different then you're most you're sure that these two graphs"}, {"start": 2420.12, "end": 2424.52, "text": " are not isomorphic if they're the same you're still not completely sure because"}, {"start": 2424.52, "end": 2429.0, "text": " there are special special cases where Weiss-Feyler-Lehmann like fails but like"}, {"start": 2429.0, 
"end": 2434.2799999999997, "text": " you're pretty certain that those two graphs are isomorphic and what isomorphic"}, {"start": 2434.2799999999997, "end": 2439.2799999999997, "text": " means is basically you those are just two same graphs but just have different"}, {"start": 2439.2799999999997, "end": 2444.2999999999997, "text": " maybe different labeling and if you find some different like permutation of"}, {"start": 2444.2999999999997, "end": 2447.68, "text": " labels you can see that those two graphs are the same and there are some really"}, {"start": 2447.68, "end": 2451.7599999999998, "text": " nice examples I'll show it here on the screen and you can see those two graphs"}, {"start": 2451.7599999999998, "end": 2457.64, "text": " are are the same even though they look differently like at the first glance so"}, {"start": 2457.64, "end": 2460.58, "text": " that's the idea with the Weiss-Feyler-Lehmann algorithm and now you can treat"}, {"start": 2460.58, "end": 2468.2, "text": " GCN as a pretty much as a generalized version of of of this Weiss-Feyler-Lehmann"}, {"start": 2468.2, "end": 2472.48, "text": " algorithm and with a disclaimer I'll in a couple of seconds explain why that's"}, {"start": 2472.48, "end": 2476.52, "text": " not completely true but let's for the sake of argument now let's play that"}, {"start": 2476.52, "end": 2481.28, "text": " game so this is how the algorithm looks like so we start with initial node"}, {"start": 2481.28, "end": 2486.0, "text": " coloring and you can treat maybe this the same as so we have those features"}, {"start": 2486.0, "end": 2492.16, "text": " the same as we use so far and basically the algorithm stops once those features"}, {"start": 2492.16, "end": 2497.36, "text": " stabilize and it's basically the same structure so we have repeat until is"}, {"start": 2497.36, "end": 2501.44, "text": " basically going through the layers of our GCN and we for every single node"}, {"start": 2501.44, "end": 2507.84, "text": " what we do is we aggregate the features or the colorings in this in this case"}, {"start": 2507.84, "end": 2513.08, "text": " that we aggregate them and we hash them and that's what's what our updated"}, {"start": 2513.08, "end": 2517.44, "text": " representation looks like and if you think about it it's really similar to"}, {"start": 2517.44, "end": 2521.68, "text": " what we have with GCN so basically we aggregate across the neighborhood again"}, {"start": 2521.68, "end": 2526.8, "text": " and we have these C ij's which I've mentioned a lot of times already and"}, {"start": 2526.8, "end": 2534.04, "text": " that equals to one over square root di dj where those are degrees of those"}, {"start": 2534.04, "end": 2538.5600000000004, "text": " particular notes and if you take a look at it it's the same format you can treat"}, {"start": 2538.5600000000004, "end": 2543.5600000000004, "text": " just the hash part is maybe the non-linearity and these projections and"}, {"start": 2543.5600000000004, "end": 2549.6000000000004, "text": " scaling but it's it has a similar structure and they say here this loosely"}, {"start": 2549.6000000000004, "end": 2553.36, "text": " speaking and it's really important to to highlight the loosely speaking part"}, {"start": 2553.36, "end": 2557.44, "text": " because it's not completely true and you'll see in a minute allows us to"}, {"start": 2557.44, "end": 2560.6400000000003, "text": " interpret our GCN model as a differentiable and parameterized"}, {"start": 2560.6400000000003, "end": 2565.36, "text": " 
generalization of the one-dimensional Weissweiler-Lehmann algorithm on graphs"}, {"start": 2565.36, "end": 2572.8, "text": " and now the disclaimer so basically later this paper called graph isomorphism"}, {"start": 2572.8, "end": 2579.84, "text": " networks appeared and it showed that GCN is not quite there so they say here this"}, {"start": 2579.84, "end": 2583.88, "text": " is just a snippet from the paper so we will see that these GNN variants get"}, {"start": 2583.88, "end": 2588.48, "text": " confused by surprisingly simple graphs and are less powerful than the WL"}, {"start": 2588.48, "end": 2592.32, "text": " device Weissweiler-Lehmann test nonetheless models with mean aggregators"}, {"start": 2592.32, "end": 2598.44, "text": " like GCN perform well for node classification tasks and we used exactly"}, {"start": 2598.44, "end": 2603.04, "text": " the node classification benchmarks in this paper so this paper used those"}, {"start": 2603.04, "end": 2609.0, "text": " benchmarks so that's why we couldn't notice I mean you this paper appear"}, {"start": 2609.0, "end": 2614.28, "text": " only later so it was hard for them to to compare but anyways they show that these"}, {"start": 2614.28, "end": 2620.52, "text": " gene GIN networks are more expressive than GCN's and more expressive that gets"}, {"start": 2620.52, "end": 2625.6, "text": " also so that's something to keep in mind I won't get into too much details but"}, {"start": 2625.6, "end": 2631.48, "text": " basically there are certain families of graphs which if you input them so you"}, {"start": 2631.48, "end": 2638.72, "text": " have graph one and you have graph two and they are different but once you pass"}, {"start": 2638.72, "end": 2646.04, "text": " them into GCN it would give it will give them the same node coloring which means"}, {"start": 2646.04, "end": 2651.2799999999997, "text": " GCN treats these two graphs the same like they are isomorphic but they're not"}, {"start": 2651.2799999999997, "end": 2658.2, "text": " they're different so GCN can fail on some on certain graph families as well"}, {"start": 2658.2, "end": 2663.3199999999997, "text": " as get but yeah that's something I thought it's worth mentioning just to"}, {"start": 2663.32, "end": 2669.1200000000003, "text": " keep connecting those dots across different papers finally so I mentioned"}, {"start": 2669.1200000000003, "end": 2672.2400000000002, "text": " the third perspective get perspective so basically can treat the GCN as a"}, {"start": 2672.2400000000002, "end": 2677.76, "text": " particular case of get where you have instead of learning those weights you"}, {"start": 2677.76, "end": 2684.6800000000003, "text": " just have them hard coded as this expression which I've mentioned multiple"}, {"start": 2684.6800000000003, "end": 2693.0, "text": " times already so that's about that's it and finally as a nice consequence of of"}, {"start": 2693.0, "end": 2699.16, "text": " this of the fact that you can treat this as a WL algorithm so they say here from"}, {"start": 2699.16, "end": 2703.28, "text": " the analogy with the WL we can understand that even an untrained GCN"}, {"start": 2703.28, "end": 2708.12, "text": " model with random weights can serve as powerful feature extractor for nodes in"}, {"start": 2708.12, "end": 2713.64, "text": " a graph so what we do here we just take these W matrices and we live them we"}, {"start": 2713.64, "end": 2721.48, "text": " leave them like a random and basically so this a hat is the term with the tilde"}, {"start": 
2721.48, "end": 2726.52, "text": " mine raised to the power of minus one half etc suggest the constant term so we"}, {"start": 2726.52, "end": 2733.56, "text": " just apply three layers here three GCN layers and this is what we end up with so"}, {"start": 2733.56, "end": 2738.56, "text": " we have they showed here that if we take this correct correct karate club network"}, {"start": 2738.56, "end": 2742.6, "text": " so if you take this network and you just take a feature as approach meaning all"}, {"start": 2742.6, "end": 2745.12, "text": " of these nodes don't have some meaningful features they're just like"}, {"start": 2745.12, "end": 2749.4, "text": " the adjacency matrix is pretty much identity matrix and if you pass it"}, {"start": 2749.4, "end": 2755.8, "text": " through the untrained GCN it will still learn how to separate different classes"}, {"start": 2755.8, "end": 2762.76, "text": " so what it did is so basically the last layer of this GCN has only two so the"}, {"start": 2762.76, "end": 2767.28, "text": " node features have only two dimensions and thus they can easily be plotted on"}, {"start": 2767.28, "end": 2771.64, "text": " a 2d chart like here and you can see that clusters like this one which have"}, {"start": 2771.64, "end": 2777.6, "text": " the same labels also appear in the same part of the space in this 2d space here"}, {"start": 2777.6, "end": 2782.88, "text": " so that's interesting that means that GCN can without any training already"}, {"start": 2782.88, "end": 2787.48, "text": " have some discriminative power just because of the graph structure and"}, {"start": 2787.48, "end": 2795.2799999999997, "text": " finally once you start training this GCN the through layer GCN I just mentioned"}, {"start": 2795.2799999999997, "end": 2799.56, "text": " you can see it gets you in better separation and finally after 300"}, {"start": 2799.56, "end": 2806.24, "text": " iterations the classes are clearly separated and that's that's awesome one"}, {"start": 2806.24, "end": 2812.24, "text": " more last thing is experimenting with depth so if you if you think about it all"}, {"start": 2812.24, "end": 2817.2799999999997, "text": " of the networks GCNs as I mentioned so far had either two or three layers so"}, {"start": 2817.2799999999997, "end": 2820.6, "text": " what's up what's up with that you know if you're familiar with computer vision"}, {"start": 2820.6, "end": 2826.16, "text": " you know that CNNs usually have bunch of layers like Resnets had I had 151 or"}, {"start": 2826.16, "end": 2830.9599999999996, "text": " more layers and basically here we'll use two or three so what's what's the thing"}, {"start": 2830.96, "end": 2836.36, "text": " so basically they showed here that and they've added these skip connections"}, {"start": 2836.36, "end": 2842.04, "text": " basically just adding the representations from the previous layer"}, {"start": 2842.04, "end": 2846.08, "text": " like this and this is the original equation so using these models they"}, {"start": 2846.08, "end": 2852.4, "text": " showed that after two or three layers we get to peak performance on the test and"}, {"start": 2852.4, "end": 2859.6, "text": " we don't need anything to go any further so and that's that's interesting and the"}, {"start": 2859.6, "end": 2863.7599999999998, "text": " the the explanation here you can you can search for it"}, {"start": 2863.7599999999998, "end": 2869.2799999999997, "text": " Michael Brunstein had a really nice blog wrote a nice blog about this and I'll"}, {"start": 
2869.2799999999997, "end": 2875.2799999999997, "text": " just quickly summarize it so basically since grids are special graphs there"}, {"start": 2875.2799999999997, "end": 2879.4, "text": " certainly are examples of graphs on which depth helps it's just a quote from"}, {"start": 2879.4, "end": 2884.08, "text": " his blog and what that means is that pretty much you can treat image as a as"}, {"start": 2884.08, "end": 2890.08, "text": " a grid as a regular graph and we know that for images depth helps so that"}, {"start": 2890.08, "end": 2894.12, "text": " means there are certain classes of graphs where depth does help but here on"}, {"start": 2894.12, "end": 2899.72, "text": " these citation and knowledge graph graphs for some reason depth doesn't"}, {"start": 2899.72, "end": 2904.3199999999997, "text": " help and one of the explanation that Michael gave here is that one of the"}, {"start": 2904.3199999999997, "end": 2909.0, "text": " differences is that the letter letter are similar to small world networks with"}, {"start": 2909.0, "end": 2914.76, "text": " low diameter where one can reach any node from any other node in a few hops so"}, {"start": 2914.76, "end": 2918.84, "text": " you probably heard about this Facebook graph where at least I heard about it"}, {"start": 2918.84, "end": 2923.52, "text": " I'm not sure if the numbers are truly correct but like you can you can get to"}, {"start": 2923.52, "end": 2928.64, "text": " any single person on the graph on the Facebook graph by just in like six or"}, {"start": 2928.64, "end": 2932.44, "text": " seven hops so that means that the diameter of the of the Facebook graph"}, {"start": 2932.44, "end": 2935.8, "text": " is around seven I'm not sure if that's entirely correct but you get the point"}, {"start": 2935.8, "end": 2942.0800000000004, "text": " so it's so intricately connected that already after a couple of layers you've"}, {"start": 2942.0800000000004, "end": 2946.28, "text": " basically touched a lot of the neighborhoods neighborhood neighboring"}, {"start": 2946.28, "end": 2953.92, "text": " nodes and you don't need any more layers to to achieve better performance so"}, {"start": 2953.92, "end": 2958.5600000000004, "text": " that pretty much wraps it up hopefully you found this video useful I'm"}, {"start": 2958.5600000000004, "end": 2963.0, "text": " experimenting and I'm still figuring this out so if you have any feedback for"}, {"start": 2963.0, "end": 2969.68, "text": " me I really appreciate it I'll continue creating GNN videos throughout January"}, {"start": 2969.68, "end": 2974.2, "text": " and if you have any any requests please write them down in the comment section"}, {"start": 2974.2, "end": 2980.16, "text": " I'll try and maybe create some video depending on what you folks like"}, {"start": 2980.16, "end": 2984.52, "text": " suggest me if you like this video and if you found this channel useful please"}, {"start": 2984.52, "end": 2988.96, "text": " consider subscribing and click the bell icon to get notified when I upload a new"}, {"start": 2988.96, "end": 2994.2400000000002, "text": " video so that you can get my videos as soon as they get uploaded and until"}, {"start": 2994.24, "end": 3023.24, "text": " next time keep learning deep"}]
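To make the renormalized propagation rule walked through in the segments above a bit more concrete, here is a minimal NumPy sketch of one GCN layer. The 4-node adjacency matrix and the 1433-to-64 shapes only mirror the Cora-flavored toy example from the walkthrough; the connectivity, the random features and the random weights are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H_next = ReLU(D~^-1/2 (A + I) D~^-1/2 H W)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                    # add self-connections
    d = A_tilde.sum(axis=1)                    # node degrees (including the self-loop)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D~^-1/2
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # entries scaled by 1 / sqrt(d_i * d_j)
    return np.maximum(A_hat @ H @ W, 0.0)      # aggregate neighbors, project, ReLU

# Toy 4-node graph: node 0 connected to nodes 1, 2 and 3 (undirected).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

H0 = np.random.randn(4, 1433)           # input node features, Cora-sized (n x 1433)
W0 = np.random.randn(1433, 64) * 0.01   # learnable projection, 1433 -> 64
H1 = gcn_layer(A, H0, W0)               # hidden representations, shape (4, 64)
print(H1.shape)
```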
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=uFLeKkXWq2c
Graph Attention Networks (GAT) | GNN Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I do a deep dive into the graph attention network paper! GATs have a lot in common with transformers a reason more to keep an eye out for them! You'll learn about: ✔️ Basic graph theory ✔️ All the nitty-gritty details behind GAT ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GAT website: https://petar-v.com/GAT/ ✅ GAT paper: https://arxiv.org/abs/1710.10903 ✅ M. Bronstein's blog: https://towardsdatascience.com/do-we-need-deep-graph-neural-networks-be62d3ec5c59 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 - A note on geometric deep learning 00:50 - Graph theory basics 08:35 - Intro to GATs (related work) 10:15 - A detailed explanation of the method 16:20 - A multi-head version of the GAT 19:05 - Visualizations, spatial pooling, GNN depth 21:20 - A recap of GAT properties 23:35 - Receptive field of spatial GNNs 24:35 - Datasets, transductive vs inductive learning 30:35 - Results on transductive/inductive benchmarks 35:30 - Representations visualization (t-SNE) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #graphs #attention #deeplearning
So I started doing geometric deep learning, and the first paper I want to kick off this series with is Graph Attention Networks. But before I do that, let me give you a quick note. As I said, I started doing geometric deep learning and I've been collecting a bunch of resources - cool blogs, useful videos, papers - and I've already read a lot of them, so I'll be creating a video in the future covering how you can approach learning this exciting field. Similarly to how I did it for transformers, if you're familiar with those videos I created, I'll make one video where I help you structure your learning through geometric deep learning. So without further ado, let me get back to graph attention networks. First I want to shout out Petar Veličković - he's my friend from Serbia, and he did this work while he was back in Montreal, which is why you can see Yoshua Bengio on the author list. Pietro Liò is his PhD advisor from Cambridge, and the other coauthors I unfortunately don't know, but I assume they're great researchers as well. Let me start with the abstract: "We present graph attention networks, novel neural network architectures that operate on graph-structured data." Because this is the first video in this series, I'm going to take a quick tangent here, briefly explain the notation and how graphs work in general, and then jump to the method. If you just want to see the method, feel free to skip ahead - I'll put timestamps on the video so you can jump wherever you want. So let me do a quick recap of graph theory, which will help you better understand this paper as well as the other papers I'll cover; some of those are spectral methods, some are spatial methods, and I'll explain in a couple of minutes what those are. Graph theory 101: a graph is a set of nodes, which I'll depict as small circles. That set is usually denoted as V, and the number of elements in it, its cardinality, is usually denoted as N. Nodes are connected via edges, so let me create some random graph with a few connections. What I've just drawn are undirected edges, but edges can also be directed, and there can be multiple edges between the same pair of nodes - those structures are called multigraphs, and there are also hypergraphs - but we won't focus on those now, because the GAT paper only deals with undirected graphs. It's easy to generalize the method, as well as the other methods, to directed graphs and to more complicated graph types that have self-connections, multiple edges, and so on. Now, each of these nodes has something called a feature vector associated with it, with, say, F features; all of the nodes have the same kind of object attached, they just have different feature vectors. Aside from that, edges can also have feature vectors associated with them. The set of edges is usually denoted by E, and its cardinality is the number of edges in your graph.
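As a toy illustration of the objects just introduced (the node set V with N elements, the edge set E, a feature vector of dimension F per node, and optional edge features), here is a small Python sketch; the graph, the feature dimension and all the numbers are made up purely for illustration.

```python
import numpy as np

num_nodes = 4                            # |V| = N
edges = [(0, 1), (0, 2), (0, 3)]         # set E, undirected: node 0 linked to 1, 2 and 3
F = 5                                    # features per node

# One F-dimensional feature vector per node, stacked into an (N, F) matrix.
node_features = np.random.randn(num_nodes, F)

# Edges can carry feature vectors too (GAT itself does not use edge features).
edge_features = {e: np.random.randn(2) for e in edges}

print(len(edges), node_features.shape)   # 3 edges, (4, 5) node feature matrix
```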
Now, aside from these things, there is something called the adjacency matrix, which is a useful concept to have, as we'll briefly see. It's denoted as A and it just tells you the connection pattern of your graph: it's an N by N matrix, where N is the number of nodes, and it tells you which node is connected to which. So if we label the nodes numerically as 0, 1, 2, 3, node 0 is connected to 1, 2 and 3 but not to itself, because I haven't added a self-connection, so its row will look like 0 1 1 1. If we take a look at node 1, it's only connected to node 0, so its row has a single one and zeros elsewhere. Once you have the adjacency matrix, you can construct something called the Laplacian matrix, which is heavily used in spectral methods to construct the Laplacian eigenbasis, a generalization of the Fourier basis if you're familiar with that, and once you have that eigenbasis you can represent a signal on the graph as a weighted sum of those eigenvectors. If that didn't make any sense, let me briefly explain it, even though it's not that important for this paper - I'm doing a recap, so why not. What is a signal on a graph? You take, say, feature number one from all of these feature vectors, and that collection of per-node scalars is one signal; call it s1. What I just said is that, once you compute those eigenvectors, you can represent s1 as a weighted sum: s1 equals the sum over i of w_i times l_i, where the l_i are the Laplacian eigenvectors and the w_i are scalar weights. That's the same idea as in signal processing with the Fourier basis, where you can represent any signal as a weighted sum of sinusoids; this is a generalization where, if you have a regular grid graph instead of an arbitrary graph like this one, the Laplacian eigenbasis actually corresponds to the Fourier basis. To make this graph a bit less abstract, here's an example, because this paper uses citation networks: in those particular datasets, each node is a paper, so maybe this node is the GAT paper and this one is the transformer paper, and we drop the directionality - even though GAT cites the transformer and not vice versa, we just use an undirected edge, which does hurt the performance a bit, but not by a lot. The feature vector for a particular paper could, for example, be constructed like this: you take the abstract, you embed all of its words using something like word2vec, and you do a bag-of-words average. If there are, say, 500 words in the abstract and the word2vec embeddings are 256-dimensional, you just average those 500 vectors and end up with a single 256-dimensional vector, and that's the feature vector for this node. Once you have that, you train the model to do something interesting with those representations, maybe classification or regression.
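Going back to the adjacency matrix, the Laplacian and the "signal as a weighted sum of Laplacian eigenvectors" idea from a moment ago, here is a small NumPy sketch of those objects; the 4-node graph and the signal values are toy numbers, not anything from the paper.

```python
import numpy as np

# Adjacency matrix of the toy graph: node 0 connected to nodes 1, 2 and 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

D = np.diag(A.sum(axis=1))     # degree matrix
L = D - A                      # unnormalized graph Laplacian

# Eigendecomposition of L gives the "graph Fourier" basis (columns of U).
eigenvalues, U = np.linalg.eigh(L)

# A graph signal: one scalar per node, e.g. feature #1 of every node.
s1 = np.array([0.2, -1.0, 0.7, 0.3])

w = U.T @ s1                   # coefficients of s1 in the eigenbasis
s1_reconstructed = U @ w       # weighted sum of eigenvectors recovers s1
assert np.allclose(s1, s1_reconstructed)
```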
So when you're doing this kind of graph representation learning, you basically have either spatial methods or spectral methods, and GAT is a good example of a spatial method - that's why I wanted to make this distinction. The paper itself heavily relies on the attention mechanism, obviously - the name of the paper is graph attention networks - and especially on the famous original transformer paper by Vaswani and his collaborators; it's a pretty similar idea, as we'll soon see, although back in 2017, when this paper was created, transformers were not nearly as popular as they are now in 2020. What we get by doing this is that we inherit some nice properties of transformers. First, the operation is efficient, meaning we can parallelize the representation learning across nodes and across edges, which is what makes this method really efficient. Second, it can be applied to graph nodes having different degrees. What are degrees? I forgot to mention that: however many edges are connected to a node, that's its degree, so the degree of this node would be one, this one three, this one two, and so on. So the method can be applied whatever the node degrees are, be it one or 370, if you have a social network for example. The third nice property is that the model is directly applicable to inductive learning problems - we'll see what transductive and inductive mean a bit later, just keep that in mind. Now let me finally jump into the method itself. I'll be following these equations and hopefully help you understand what they mean; Petar created really nice visualizations, so we can leverage those as well, and we have the final set of equations here. Let me try to explain every single one of them. Here is how the method works. Again I'm going to draw a simple graph with some random connections, like before; say this is node 0, and these are nodes 1, 2 and 3. For every single node we do the following. First we find the neighbors of the current node, so node 0 has all of these nodes as neighbors. Then we have a matrix W with which we linearly project the feature vectors - let me draw them here, they have dimensionality F - into a space where they have dimension F prime. That means this matrix has shape F by F prime, and once we multiply it with these vectors we project them into that space. You can treat it as a simple neural network, a fully connected layer, although there is no nonlinearity at the moment; it just projects into a space with F prime features. Once we have those intermediate representations, a note on notation: the paper's notation is a bit different from what I've been using so far (well, I haven't really been using any notation), so for example the feature vector of node 3 is denoted as h3 with a small arrow above it, which I associate with physics, even though it has nothing to do with physics.
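A quick sketch of that shared projection step: the same F-by-F-prime matrix W is applied to every node's feature vector, with no nonlinearity yet. The dimensions below (F = 5, F prime = 8) are arbitrary choices for illustration.

```python
import numpy as np

N, F, F_prime = 4, 5, 8

H = np.random.randn(N, F)              # one F-dimensional feature vector per node
W = np.random.randn(F, F_prime) * 0.1  # shared learnable projection, F -> F'

H_proj = H @ W                         # "h tilde": projected features, shape (N, F')
print(H_proj.shape)                    # (4, 8)
```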
So once we project into the F prime space, let me denote those representations with a tilde above. Once you have that, we do the attention part, and how it works in the GAT paper is the following. This is h0, with an arrow and a tilde, because we projected it into the F prime space, and now, if we want to see what the attention weight for node 1 should be, we just concatenate h0 with h1 (I'll denote the concatenation with square brackets), and then we multiply that with this vector a (with an arrow), which has dimensionality 2F prime, so you can treat it as a dot product. Then we apply a non-linearity, they used LeakyReLU, and we get a value out, which they denote as e_ij, so that's going to be some number. We do that not just for h1; we do it for h2 as well, for h3, and finally for h0 itself. Doing that, we get these scores, which are then passed through a softmax, which means that when you sum all of those softmax values they sum up to one, and then we can just do a weighted sum of all of these F prime representations, the ones with the tilde, and we get the new representation for every single node. Aside from that, we apply a non-linearity, and they usually used ReLU or ELU there. So let me jump to the equations: you can see it here, as I just said, you concatenate those intermediate representations, you do a dot product with a, you pass it through LeakyReLU, and then you do the softmax, and that's how you get the weights which you then use to do a simple weighted sum over the neighbors; j goes over the neighbors of i, and we just saw that the neighbors of i are all of these other nodes, 1, 2 and 3. So those are the equations, and as you can see it's pretty similar to the Transformer. I actually asked Petar, and he told me they tried the Q, K, V matrices, but the network overfitted these small datasets, because they're using Cora, Citeseer and PubMed, which are fairly small citation networks, so it turned out that just using one of these weight matrices does the job.
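To make those equations concrete, here is a rough, non-optimized sketch of a single GAT attention head in PyTorch, written to mirror the computation just described (LeakyReLU of a dot [Wh_i concatenated with Wh_j], a softmax over the neighbors, then a weighted sum and a non-linearity); it uses a dense toy adjacency matrix and made-up sizes, and it is not the authors' implementation.

# Rough sketch of one GAT attention head, mirroring the equations above.
# Dense adjacency, toy sizes; not the authors' implementation.
import torch
import torch.nn.functional as F

def gat_head(H, A, W, a, leaky_slope=0.2):
    # H: (N, F_in) node features, A: (N, N) adjacency with self-loops,
    # W: (F_in, F_out) projection, a: (2 * F_out,) attention vector
    H_proj = H @ W                                          # (N, F_out)  "h tilde"
    N = H_proj.size(0)
    # e_ij for every pair: concatenate h_i with h_j and dot with a
    h_i = H_proj.unsqueeze(1).expand(N, N, -1)              # (N, N, F_out)
    h_j = H_proj.unsqueeze(0).expand(N, N, -1)              # (N, N, F_out)
    e = F.leaky_relu(torch.cat([h_i, h_j], dim=-1) @ a, negative_slope=leaky_slope)
    # mask out non-neighbors before the softmax so alpha_ij only covers N(i)
    e = e.masked_fill(A == 0, float('-inf'))
    alpha = torch.softmax(e, dim=1)                         # (N, N) attention weights
    return F.elu(alpha @ H_proj)                            # new node representations

# toy usage on the 4-node star graph
N, F_in, F_out = 4, 64, 8
A = torch.eye(N)
A[0, 1:] = A[1:, 0] = 1.0                                   # node 0 linked to 1, 2, 3, plus self-loops
H = torch.randn(N, F_in)
W = torch.randn(F_in, F_out) * 0.1
a = torch.randn(2 * F_out) * 0.1
out = gat_head(H, A, W, a)                                  # shape (4, 8)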
Now, there is a simple extension to the method, and that's using K heads, the same thing as in Transformers where you have multiple heads in your attention, and it's a fairly simple generalization. Until now, per layer, you had a single matrix W and a single vector a, and those were all of the trainable parameters; now you just have K of those, you apply all of them independently, and you concatenate the results, as you can see here, this is the operator for concatenation. So you do the same thing, but you concatenate all of the resulting feature vectors and that gives you the new representation. Let me quickly explain how that looks in practice: say F, the dimension of the initial representations, was 64; then each of those W_k matrices would project the vector into maybe only eight dimensions, so you do the fully connected thing, you get a vector that has only eight dimensions, then you do the attention and the weighted sum, and you concatenate the results from all of those heads (so with eight of them you are back at a 64-dimensional representation), which then goes into the second GAT layer, and so forth. There is one more interesting detail, and that's equation number six, where they do it a bit differently in the last layer: they don't apply the non-linearity and then concatenate, they average all of those head representations, and only then apply the non-linearity, which, because this is the last layer, is either a softmax or a logistic sigmoid, I think they say it somewhere here, since they're using this for classification problems, as they did in the GAT paper. That means the last layer doesn't do the per-head dimensionality reduction; every head operates on the full output dimensionality, so the output stays at 64 rather than 8 in our running example, and then these outputs are averaged instead of concatenated. Hopefully that made some sense. Let's take a quick look at these visualizations, and you can see the thing I just explained: we have these intermediate representations, which I denoted with tildes, you concatenate them, you do the dot product with this a vector, then you apply a non-linearity like LeakyReLU, you apply the softmax, and you get these attention weights, which you then use as the weights in the weighted sum to get the final representations. These squiggly lines here, the purple, the green and the straight blue line, just represent the different attention heads, and, as I said, you either concatenate them in the layers before the last one or you average them out in the last layer, and that's how you get the next representation.
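Building on the single-head sketch above, this shows the multi-head wiring just described, concatenating the K head outputs in hidden layers and averaging them in the final layer; it assumes the hypothetical gat_head function from the earlier snippet has been defined, and all sizes are toy.

# Multi-head wiring as described above: hidden layers concatenate the K head
# outputs, the final layer averages them (before softmax / sigmoid).
# Reuses the hypothetical gat_head() from the earlier sketch; toy sizes.
import torch

def gat_layer(H, A, Ws, As, final=False):
    # Ws: list of K (F_in, F_out) matrices, As: list of K (2*F_out,) vectors
    head_outputs = [gat_head(H, A, W_k, a_k) for W_k, a_k in zip(Ws, As)]
    if final:
        return torch.stack(head_outputs, dim=0).mean(dim=0)  # average the heads
    return torch.cat(head_outputs, dim=-1)                    # concatenate the heads

K, F_in, F_out, N = 8, 64, 8, 4
A = torch.eye(N); A[0, 1:] = A[1:, 0] = 1.0                   # toy graph with self-loops
H = torch.randn(N, F_in)
Ws = [torch.randn(F_in, F_out) * 0.1 for _ in range(K)]
As = [torch.randn(2 * F_out) * 0.1 for _ in range(K)]
H1 = gat_layer(H, A, Ws, As)                                  # (4, 64): 8 heads x 8 dims concatenated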
So it's pretty easy, and all of these graph neural networks work conceptually in a really similar way: you have a graph, you pass it through a layer that modifies the representations, and out comes the processed graph G. There is also the concept of spatial resolution reduction, the same as you have with convolutional neural networks, where the input image gets downsampled, maybe by two and then by two again, the way VGG and many other networks do it; you can do a similar thing here by doing some form of clustering and then reducing the size of the graph, but GAT and many of the methods I've read usually don't do this reduction. What's also really interesting about graph neural networks is that they're much shallower than the CNNs you usually encounter in computer vision, and Michael Bronstein wrote a series of nice blogs, which I'll maybe link in this video, where he explored why that is, why it is that with graph neural networks deeper isn't better, and I won't get into that at this point in time. Okay, so let me recap a couple of things that we hopefully already saw. Computationally, GATs are highly efficient, and that's a property they share with Transformers. Again, I mentioned Laplacian eigendecompositions: those are really costly, and that's one reason you want to avoid them, but they are also not localized, meaning you don't have this nice property of just attending over the local neighborhood, like a one-hop or two-hop neighborhood; you actually touch every single node, and that's often not desirable, because in graphs the information of interest usually sits somewhere locally, in your community. This equation pretty much just sums up the calculations we just saw. The attention also enables a leap in model capacity: compared to, say, GCN, the graph convolutional network that Thomas Kipf and others created, which does isotropic filtering in a way, because it gives the same weight to all of the feature vectors around the current node of interest, GAT can, because of the attention, assign different weightings to different feature vectors depending on how they interact. Again, as I said, it's pretty trivial to make this work on directed graphs: if you have nodes like this, you just ignore the ones that only have outward connections, and you take as neighbors only the nodes which point to the current node of interest i, so that's pretty simple stuff. One more interesting thing is the receptive field, again if you're familiar with CNNs: the receptive field of this model is upper-bounded by the depth of the network. Why that is is the following: as I already mentioned, in a single layer each node attends to its one-hop neighborhood, so what happens here is that, because this node had its neighbors maybe here, after the first GAT layer it will contain some information from this representation, and in the second layer this one will be attending to the one which in the previous layer attended to that one, so effectively, after two GAT layers, you'll be attending to a two-hop neighborhood, and that's what they say here. Okay, let me jump into the datasets, the results and the evaluations, and let's see what they achieved. They tested on four datasets: Cora, Citeseer and PubMed, which are citation networks, and the PPI dataset, which has to do with protein-protein interactions. I mentioned transductive versus inductive learning previously, so what's the difference? The difference is the following: in transductive learning you have one huge graph, so maybe you have a citation graph like this, where nodes cite each other, and for now we'll just assume the edges are undirected, even though technically one paper citing another would be a directed relationship; they usually just drop the directionality, and that does hurt performance a little bit, but it's okay. So what happens during this transductive training is the following: if we take PubMed, for example, you can see that it has around 20k nodes and three classes, and basically what they do is take just 20 nodes from each of those three classes and use those 60 nodes (you can see the number 60 here, the training nodes) to do semi-supervised learning. But they also see all of the other nodes, which will later be tested, so you basically see the representations of those test nodes during the training, which is not something we're used to in the world of, say, computer vision, or even in graph learning when doing inductive learning, where you shouldn't see the test nodes. Here, in transductive learning, you do see the test nodes, but you only see their structure, the connections; you don't see the labels they have.
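A tiny sketch of what that transductive, semi-supervised split could look like in code, with made-up sizes that mirror the PubMed description (roughly 20k nodes, 3 classes, 20 labeled nodes per class); only the masked nodes contribute to the loss, while every node's features and edges stay visible to the model.

# Hypothetical transductive split mirroring the PubMed description above:
# ~20k nodes, 3 classes, only 20 labeled nodes per class are used for the loss,
# but the whole graph (features + edges) is visible during training.
import torch

num_nodes, num_classes, per_class = 19717, 3, 20
labels = torch.randint(0, num_classes, (num_nodes,))       # placeholder labels

train_mask = torch.zeros(num_nodes, dtype=torch.bool)
for c in range(num_classes):
    idx = (labels == c).nonzero(as_tuple=True)[0][:per_class]
    train_mask[idx] = True                                  # 60 training nodes total

test_mask = ~train_mask                                     # everything else is evaluated later

# later, given logits = model(features, adjacency) of shape (num_nodes, num_classes):
# loss = F.cross_entropy(logits[train_mask], labels[train_mask])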
So those are the two approaches. Let me quickly go over how this looks; I'll add one more node. Again, how this transductive learning works is the following: out of all of these nodes, you're only using the label information from maybe a single node here, from this one. Say we have a two-layer GAT: this node here will take a look at these representations, because those are its one-hop neighbors, and during the second layer it will also use this node's representation, and finally it gets its own final representation, and after we apply the softmax we get a probability distribution over the different classes. PubMed has three classes, so this will have only three fields here, and then we have a classic classification setup: you take the ground-truth label, say this is the first class, the second class and the third class, and say this node belongs to the second class. What you do is take the negative log of whatever probability is in that slot; if it's, say, 0.3, then, because of the log and the minus sign, that's some big value, so the prediction gets heavily penalized, and once that probability reaches one, the loss goes down to zero, which is what we want. Then you do gradient-based learning using your favorite optimizer, and that's how you update all of those W matrices and a vectors in all of the GAT layers. So you have this one labeled node, but because the graph is really huge, around 20k nodes, we'll have 60 of these labels, and we'll also be using the structural information, the connectivity, and the representations of the other nodes, but not their labels. Then, at test time, once we've trained this thing, we try to predict the labels of all of those other nodes, and that's transductive learning. I felt like explaining this a bit more carefully because it was a new concept for me as well, I wasn't familiar with it, so yeah, it's interesting. Anyways, what's also worth noting is that PPI has several labels per node, whereas all of the others have a single label per node. Before the results, just one more thing: I mentioned calculating those attention weights, and what they additionally do is apply dropout to those attention weights, so some of them get pulled down to zero, which means you effectively end up stochastically sampling the neighborhood. Quoting the paper: they applied dropout to the normalized attention coefficients, and critically, this means that at each training iteration each node is exposed to a stochastically sampled neighborhood. So that's one more interesting detail, and the same thing is applied in the Transformer, so again there's a parallel there.
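A small sketch of that last detail, dropout applied to the normalized attention coefficients, which zeroes some neighbors out and so samples a slightly different neighborhood on every training iteration; alpha and H_proj here are random stand-ins for the attention matrix and projected features from the earlier single-head sketch.

# Dropout on the normalized attention coefficients: some alpha_ij become 0,
# so each forward pass effectively samples a different neighborhood.
# alpha and H_proj are random stand-ins for the quantities from the earlier sketch.
import torch
import torch.nn.functional as F

N, F_out = 4, 8
alpha = torch.softmax(torch.randn(N, N), dim=1)   # stand-in attention weights
H_proj = torch.randn(N, F_out)                    # stand-in projected features

def attend_with_dropout(alpha, H_proj, p=0.6, training=True):
    alpha = F.dropout(alpha, p=p, training=training)   # randomly zero (and rescale) coefficients
    return alpha @ H_proj                              # weighted sum over the surviving neighbors

out = attend_with_dropout(alpha, H_proj)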
And finally, let's see what the results look like. They tested their method against various baselines: this table here is about the transductive results and this one is about the inductive results, where inductive, if I haven't defined it yet, means we have training graphs, so we train the network on a bunch of graphs, and then we have separate testing graphs where we test our pre-trained GAT model, which is the usual setup you see all around the NLP and computer vision worlds. Some of the baselines they use for the transductive datasets are a basic multi-layer perceptron, the label propagation algorithm, and DeepWalk, which is a graph embedding method, a really interesting one, where they use random walks and an approach similar to word2vec, using the skip-gram model to train the embeddings. Planetoid is similar to DeepWalk but uses semi-supervised learning, whereas DeepWalk only uses unsupervised learning. Then you have Chebyshev nets, which are one of those spectral methods, although they improved the compute efficiency by using a polynomial expansion of the Laplacian, and that saves them some compute. GCN is the famous model by Thomas Kipf, and you can maybe even treat it as a special case of GAT where all the neighbor weights are the same; it's not exactly the same, but in a way it's similar. And MoNet is a nice paper that generalized these approaches, so many of these methods can actually be treated as special cases of MoNet. One more thing: GCN-64 just means they optimized GCN and tuned its hyperparameters so that the comparison is as fair as possible. These are the results they get: GAT achieves the state of the art on Cora as well as on Citeseer, and it matches the state of the art on PubMed. Although these results clearly show that GAT is better than all of these previous baselines, in 2020 there was a paper that showed we should question the validity of these benchmarks, because they are really small, models can easily overfit them, and it's not easy to compare different GNN models on them now in 2020, so as a community we are actually looking for new and better datasets on which to evaluate GNNs. The results on the inductive learning dataset are the following: as baselines they took a random predictor, again a multi-layer perceptron, and the GraphSAGE method, which is similar to GCN except that it does a uniform sampling of the neighborhood, and with that they showed it can be applied to much larger, social-network-sized graphs with billions of nodes; it also has different ways of aggregating the neighborhood, using max pooling, even an LSTM, or just a simple element-wise mean aggregation, etc. I won't go into a lot of detail about these methods; if you're familiar with them, this will maybe be a nice recap. Again, they showed improvements with GAT on this task as well. They state here that a larger predictive power can be leveraged by observing the entire neighborhood. The thing is, with GraphSAGE, if you have a huge neighborhood, I'll denote it like a curve here, and this is the second-hop neighborhood, GAT attends to every single node in the one-hop neighborhood, whereas GraphSAGE will just randomly subsample these neighbors in the first layer and then repeat that in the second layer. So that's the hypothesis for why GAT does better, although that does imply a greater compute cost, but that's the trade-off, and it depends on your application.
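To illustrate that contrast, a toy sketch of GraphSAGE-style uniform neighbor subsampling (take at most k neighbors per node per layer) versus the full one-hop neighborhood that GAT attends over; the adjacency lists and the sample size are made up.

# Toy illustration of the contrast described above: GraphSAGE-style uniform
# subsampling of the one-hop neighborhood versus attending to all of it (GAT).
# Purely illustrative; neighbor lists and sample size are made up.
import random

neighbors = {0: [1, 2, 3, 4, 5, 6, 7, 8], 1: [0], 2: [0, 5]}   # adjacency lists

def sampled_neighborhood(node, k=3):
    nbrs = neighbors[node]
    return nbrs if len(nbrs) <= k else random.sample(nbrs, k)   # GraphSAGE-style

def full_neighborhood(node):
    return neighbors[node]                                       # what GAT attends over

print(sampled_neighborhood(0))   # e.g. [2, 7, 4]; changes every call
print(full_neighborhood(0))      # [1, 2, 3, 4, 5, 6, 7, 8]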
So that was pretty much it; there's just one more interesting chart here, and I just want to quickly explain what it is. Basically, once you pre-train GAT on the Cora dataset, you do the following. Let me depict the initial graph as this circle: we have the first GAT layer, so this is GAT layer number one, out comes the modified version of the graph representations, then there is one more layer, and then we do the softmaxes and so on. If we take the representations out of this first layer and apply the dimensionality reduction method called t-SNE, which was introduced by Hinton and his collaborators, we project those vectors into 2D space and we get this plot. Because Cora, let me just jump to the table above, has seven classes, as you can see here, and around 2,700 nodes, we get seven different colors here, and you can see that the first GAT layer already learns how to nicely separate and cluster similar classes, so it's pretty easy to later learn some classifier which can separate all of these and generalize well. So that was pretty much it; those are just some references. If you found this video useful, click that subscribe button, and also consider clicking the bell icon to get notified when I upload a new video. If you'd love me to create more of these videos, please write it down in the comment section. I'm just starting to review these papers, and hopefully you found this useful. Anyways, until next time, keep learning deep!
[{"start": 0.0, "end": 5.46, "text": " So I started doing geometric deep learning and the first paper I want to"}, {"start": 5.46, "end": 10.200000000000001, "text": " kick off this series with is graph attention networks. But before I do that"}, {"start": 10.200000000000001, "end": 15.64, "text": " let me let me give you a quick note. So as I said I started doing geometric deep"}, {"start": 15.64, "end": 20.72, "text": " learning and I've been collecting a bunch of resources. So cool blogs, useful"}, {"start": 20.72, "end": 28.560000000000002, "text": " blogs, videos, papers and I've read already a lot of them and I'll be"}, {"start": 28.56, "end": 35.16, "text": " creating a video in the future covering how you can approach learning"}, {"start": 35.16, "end": 39.519999999999996, "text": " this exciting field. So similarly to how I did it for"}, {"start": 39.519999999999996, "end": 43.08, "text": " transformers, if you are familiar with those videos I created. So I'll basically"}, {"start": 43.08, "end": 47.96, "text": " create one video where I'll help you structure your learning through"}, {"start": 47.96, "end": 53.76, "text": " geometric deep learning. So without further ado let me get back to to get to"}, {"start": 53.76, "end": 59.28, "text": " graph attention networks. So you can see, so first I want to shout out to"}, {"start": 59.28, "end": 65.0, "text": " Petar Velichkovich. He's my friend from Serbia and he did this work while"}, {"start": 65.0, "end": 69.96, "text": " he was back in Montreal. So that's why you can see Josje Benjio on the list."}, {"start": 69.96, "end": 77.4, "text": " Pietro Leo is his PhD advisor from Cambridge and his colleagues whom"}, {"start": 77.4, "end": 81.03999999999999, "text": " unfortunately I don't know but I guess they're they're great researchers as"}, {"start": 81.04, "end": 85.92, "text": " well. So let me start with abstract. We present graph attention networks novel"}, {"start": 85.92, "end": 89.4, "text": " neural network architectures that operate on graph structure data and"}, {"start": 89.4, "end": 94.32000000000001, "text": " because this is the first video in this series I'm gonna do like a quick tangent"}, {"start": 94.32000000000001, "end": 100.28, "text": " here and explain like briefly explain the notation and how it in generally"}, {"start": 100.28, "end": 104.04, "text": " works and then I'll jump to the method. So if you just want to see the method"}, {"start": 104.04, "end": 108.0, "text": " feel free to skip I'll put the timestamps on the on the video so you can"}, {"start": 108.0, "end": 117.04, "text": " jump wherever you want. Okay let me start with this one. So let me open a pen. Okay"}, {"start": 117.04, "end": 121.88, "text": " my god my god. Let me do a quick recap of graph theory which will help you better"}, {"start": 121.88, "end": 126.32, "text": " understand this paper as well as all of the other papers I'll cover. Some"}, {"start": 126.32, "end": 129.84, "text": " include spectral methods, some include spatial methods. I'll explain briefly"}, {"start": 129.84, "end": 134.52, "text": " like in a couple of minutes what those are. So let me start with graph theory"}, {"start": 134.52, "end": 143.76000000000002, "text": " 101. So basically a graph is a collection a set of nodes. 
So let me like depict the"}, {"start": 143.76000000000002, "end": 148.56, "text": " nodes as these small circles and we have so that set is usually known as V and"}, {"start": 148.56, "end": 152.60000000000002, "text": " then we have the cardinality number of that set which is the number of elements"}, {"start": 152.60000000000002, "end": 159.56, "text": " in that set and that's usually denoted as N. So nodes are connected via edges so"}, {"start": 159.56, "end": 165.92000000000002, "text": " let me maybe create some random graph like this one and they are somehow"}, {"start": 165.92000000000002, "end": 171.64000000000001, "text": " connected and what I've just draw here is our undirected edges but they can"}, {"start": 171.64000000000001, "end": 177.28, "text": " also be directed edges. They can also there can also be multiple edges and"}, {"start": 177.28, "end": 182.88, "text": " those things are called multi graphs or hyper graphs and but we won't be"}, {"start": 182.88, "end": 188.44, "text": " focusing on those now because this paper the GAT paper only focuses on"}, {"start": 188.44, "end": 193.72, "text": " undirected graphs but it's easy to to generalize the method as well as other"}, {"start": 193.72, "end": 198.16, "text": " methods to both directed graphs and those more complicated types of graphs"}, {"start": 198.16, "end": 205.96, "text": " which have maybe self self connections or multiple edges etc. So what happens"}, {"start": 205.96, "end": 211.32, "text": " now is that each of these nodes have something called like a feature vector"}, {"start": 211.32, "end": 218.88, "text": " associated with each single node and so you maybe have like F features there and"}, {"start": 218.88, "end": 223.6, "text": " all of these nodes have the same thing associated so they have they have"}, {"start": 223.6, "end": 229.48, "text": " different feature vectors and aside from that edges also can have feature vector"}, {"start": 229.48, "end": 235.28, "text": " associated with them and the set of edges is usually denoted by E and this"}, {"start": 235.28, "end": 243.48, "text": " is the number of edges in your graph. 
So now aside from these things there is"}, {"start": 243.48, "end": 249.44, "text": " something called adjacency matrix which is useful a concept to have as we will"}, {"start": 249.44, "end": 255.68, "text": " briefly see why and it's noted as A and it just tells you the connection pattern"}, {"start": 255.68, "end": 263.08, "text": " of your graph so basically it's an M by N matrix where N is the number of nodes"}, {"start": 263.08, "end": 267.56, "text": " in your graph and it just tells you which node is connected to which one so if we"}, {"start": 267.56, "end": 277.36, "text": " label these nodes numerically like this 0 1 2 3 the node 0 is connected to 1 2 &"}, {"start": 277.36, "end": 281.64, "text": " 3 but it's not connected to itself because I haven't added this thing"}, {"start": 281.64, "end": 288.0, "text": " called a self connection or maybe without the edge but anyways and so we'll"}, {"start": 288.0, "end": 292.0, "text": " have a pattern like this so 0 because the node 0 is connected to it's not"}, {"start": 292.0, "end": 297.6, "text": " connected to itself then we have 1 1 1 and if we take a look at the node 1 it's"}, {"start": 297.6, "end": 303.84, "text": " only going to be connected to node 0 and it's going to have zeros elsewhere so"}, {"start": 303.84, "end": 308.92, "text": " once you have this adjacency matrix you can construct something called a"}, {"start": 308.92, "end": 313.64, "text": " laplacian matrix which is used heavily used in spectral methods to construct"}, {"start": 313.64, "end": 317.92, "text": " something called laplacian eigen basis which is a generalization of the Fourier"}, {"start": 317.92, "end": 322.12, "text": " basis if you're familiar with that and once you have the eigen basis you can"}, {"start": 322.12, "end": 327.36, "text": " basically construct the signal on the graph as a weighted sum of those eigen"}, {"start": 327.36, "end": 331.0, "text": " vectors and if that didn't make any sense let me just briefly explain it"}, {"start": 331.0, "end": 334.92, "text": " even though it's not so important for this paper but I'm doing a recap so why"}, {"start": 334.92, "end": 339.32, "text": " not so what the signal on the graph is so you basically take like the feature"}, {"start": 339.32, "end": 346.72, "text": " number one from all of these feature vectors and that represents one signal"}, {"start": 346.72, "end": 352.76000000000005, "text": " so maybe signal one let me do know it like s1 and what I just said is you can"}, {"start": 352.76000000000005, "end": 356.68, "text": " once you connect what once you create those eigenvectors you can just"}, {"start": 356.68, "end": 365.44000000000005, "text": " represent s1 as some as a weighted sum so maybe W eyes and let me denote the"}, {"start": 365.44000000000005, "end": 372.52000000000004, "text": " eigenvectors as L eyes so that's it so that's a similar thing as as in a signal"}, {"start": 372.52000000000004, "end": 376.28000000000003, "text": " processing when you're using Fourier basis where you can represent any any"}, {"start": 376.28, "end": 381.55999999999995, "text": " signal as a sum as a weighted sum of sinusoids and this is a generalization"}, {"start": 381.55999999999995, "end": 386.59999999999997, "text": " where once you have a regular graph and not just any arbitrary graph like this"}, {"start": 386.59999999999997, "end": 392.2, "text": " one the laplacian eigen basis would actually correspond to the Fourier basis"}, {"start": 392.2, "end": 397.59999999999997, "text": " just to 
make this graph a bit less abstract so as an example because this"}, {"start": 397.59999999999997, "end": 404.47999999999996, "text": " paper uses these citation networks so what these nodes would be in this in"}, {"start": 404.48, "end": 408.68, "text": " these particular data sets is the following so you have so this node would"}, {"start": 408.68, "end": 416.0, "text": " be like like one paper so maybe this one is get paper maybe this one is"}, {"start": 416.0, "end": 422.24, "text": " transformer and we don't we drop the directionality so even though get sighted"}, {"start": 422.24, "end": 427.08000000000004, "text": " transformer and not vice versa we just drop the directionality for now and we"}, {"start": 427.08000000000004, "end": 431.72, "text": " just use undirected edge and that does hurt the performance a bit but not a lot"}, {"start": 431.72, "end": 439.48, "text": " not by a lot and so what I wanted to say here is that the feature vector for this"}, {"start": 439.48, "end": 444.64000000000004, "text": " particular paper may for example be the following so you just take the abstract"}, {"start": 444.64000000000004, "end": 453.68, "text": " and you basically just like embed all of these words using for example word2vec"}, {"start": 453.68, "end": 458.6, "text": " and you just then do bag of words so you'll basically whatever number of"}, {"start": 458.6, "end": 464.48, "text": " words curious like maybe 500 words in the abstract and all of the all of the"}, {"start": 464.48, "end": 470.48, "text": " word2vec embeddings are maybe 256 dimensionality so just average all of"}, {"start": 470.48, "end": 475.36, "text": " these 500 vectors that have this dimensionality and you end up with a"}, {"start": 475.36, "end": 481.08000000000004, "text": " single 256 vector and that's your feature for this node and once you have"}, {"start": 481.08000000000004, "end": 487.16, "text": " that you train the model later on to to do whatever you want so that's all I"}, {"start": 487.16, "end": 492.56, "text": " want to cover there so what I mentioned that one is because you basically have"}, {"start": 492.56, "end": 497.36, "text": " when you're trying to to do this graph representation learning and do"}, {"start": 497.36, "end": 499.92, "text": " something interesting with those representations maybe do some"}, {"start": 499.92, "end": 503.48, "text": " classification or regression you basically have either spatial methods or"}, {"start": 503.48, "end": 509.88, "text": " you have spectral methods and get here is a good a good example of a spatial"}, {"start": 509.88, "end": 515.12, "text": " method and that's why I just wanted to create this distinction so the paper"}, {"start": 515.12, "end": 521.68, "text": " itself heavily relies on the attention mechanism obviously so that's like the"}, {"start": 521.68, "end": 527.24, "text": " name of the paper is graph attention networks and especially the the famous"}, {"start": 527.24, "end": 535.2, "text": " the original transformer paper by Vasvani and his collaborators they it's"}, {"start": 535.2, "end": 541.64, "text": " a decent pretty similar idea as well as well as soon see but like back in 2017"}, {"start": 541.64, "end": 546.68, "text": " when this paper was created the the the transformers were not as near as"}, {"start": 546.68, "end": 555.24, "text": " popular as they are now in 2020 so now what we get by doing this is so we do"}, {"start": 555.24, "end": 558.04, "text": " inherit some nice properties of transformers and that's that the"}, 
{"start": 558.04, "end": 561.8, "text": " operation is efficient meaning we can basically parallelize and do those"}, {"start": 561.8, "end": 566.6, "text": " representation learnings across nodes and across edges which is what makes"}, {"start": 566.6, "end": 571.84, "text": " this method really efficient it can be applying to graph nodes having different"}, {"start": 571.84, "end": 577.84, "text": " degrees now what are degrees I forgot to mention that one so basically however"}, {"start": 577.84, "end": 583.76, "text": " many edges we have connected to our notes that's the degree so this is the"}, {"start": 583.76, "end": 589.08, "text": " degree of this note would be one the degree of this note would be three this"}, {"start": 589.08, "end": 595.32, "text": " would be two etc so let me go back here so it can be applied to graph nodes"}, {"start": 595.32, "end": 601.5200000000001, "text": " having whatever the number of degrees beat one beat like 370 whatever if you"}, {"start": 601.5200000000001, "end": 606.44, "text": " have social network maybe and second nice third nice property is that the"}, {"start": 606.44, "end": 610.44, "text": " model is directly applicable to inductive learning problems and we'll"}, {"start": 610.44, "end": 614.9200000000001, "text": " see what's trans talked to you when what's inductive a bit later but just"}, {"start": 614.9200000000001, "end": 622.96, "text": " keep keep that in mind now let me finally jump into the method itself so"}, {"start": 622.96, "end": 627.8000000000001, "text": " how the method functions is is the following so I'll just be following"}, {"start": 627.8000000000001, "end": 636.4000000000001, "text": " these equations and hopefully help you better understand what those means better"}, {"start": 636.4000000000001, "end": 644.0400000000001, "text": " created really nice visualizations here so we can maybe leverage those as well"}, {"start": 644.0400000000001, "end": 651.0, "text": " and we have the final set of equations here okay let me let me try and go and"}, {"start": 651.0, "end": 656.28, "text": " explain every single one of these so how it how it works so the method works is"}, {"start": 656.28, "end": 664.84, "text": " the following so we have again I'm going to just draw simple graph here and I'm"}, {"start": 664.84, "end": 672.84, "text": " going to do some random connections like previously maybe like this okay and now"}, {"start": 672.84, "end": 677.12, "text": " what we do for every single node we do the following so let's say this is note"}, {"start": 677.12, "end": 687.08, "text": " zero this is node one two and three basically we find what the neighbors of"}, {"start": 687.08, "end": 692.2, "text": " the current nodes a node are so zero node zero has all of these nodes as"}, {"start": 692.2, "end": 697.76, "text": " neighbors and what we're going to do is we'll have something like this simple"}, {"start": 697.76, "end": 704.36, "text": " matrix W which we're going to linearly linearly project the feature vectors"}, {"start": 704.36, "end": 711.32, "text": " which we as like let me let me let me draw them here quickly and they have"}, {"start": 711.32, "end": 716.28, "text": " dimensionality like F and this matrix will project them into the space where"}, {"start": 716.28, "end": 721.5600000000001, "text": " where they'll have like dimension F prime so that means this matrix will be"}, {"start": 721.5600000000001, "end": 729.16, "text": " simply be like F F prime and once we do like multiplication with these vectors"}, 
{"start": 729.16, "end": 733.52, "text": " we'll just project them into that space so basically what it does it's a you can"}, {"start": 733.52, "end": 738.4399999999999, "text": " you can treat it as a simple neural network although there is no non"}, {"start": 738.4399999999999, "end": 744.48, "text": " linearity at the moment so project this into a space where we have F prime"}, {"start": 744.48, "end": 750.1999999999999, "text": " features and yeah it just is just fully connected so once we have those"}, {"start": 750.1999999999999, "end": 755.6, "text": " intermediate representations and those are denoted here like this so they use"}, {"start": 755.6, "end": 762.36, "text": " the notation that's a bit different than I have been using so far so basically I"}, {"start": 762.36, "end": 768.36, "text": " haven't been using any notations but anyways so so this would be denoted like"}, {"start": 768.36, "end": 773.48, "text": " the feature vector for node 3 would be noted as h3 with a small arrow above"}, {"start": 773.48, "end": 781.86, "text": " which is kind of interesting because it associates with me with physics and this"}, {"start": 781.86, "end": 785.72, "text": " doesn't have to be it doesn't have to do anything with physics so let me denote"}, {"start": 785.72, "end": 788.76, "text": " once once we project into the F prime space let me denote those"}, {"start": 788.76, "end": 794.36, "text": " representations as having a tilde above so once you have that now we are going"}, {"start": 794.36, "end": 801.24, "text": " to do the attention part and how it works in get paper is they just do so"}, {"start": 801.24, "end": 809.0, "text": " this is h0 arrow and tilde because we projected into F prime space and now"}, {"start": 809.0, "end": 819.04, "text": " what we do is we just concatenate h0 with maybe h1 if we want to see what the"}, {"start": 819.04, "end": 823.96, "text": " attention wait for h1 should be we just concatenate these two and I'm gonna just"}, {"start": 823.96, "end": 830.56, "text": " denote this with square brackets and then we just multiply that with this a"}, {"start": 830.56, "end": 840.64, "text": " arrow vector which will have the dimensionality to F prime and it's"}, {"start": 840.64, "end": 846.7199999999999, "text": " basically you can treat it as a dot product and we get we basically get and"}, {"start": 846.7199999999999, "end": 851.8399999999999, "text": " then we apply the like non-linearity and they used something called leaky value"}, {"start": 851.8399999999999, "end": 859.16, "text": " and we get a value out here which they denote as e i j so there's gonna be some"}, {"start": 859.16, "end": 864.8399999999999, "text": " some number and we're going to do that not just for h1 we're going to do it for"}, {"start": 864.8399999999999, "end": 872.4399999999999, "text": " h2 as well we're going to do for h3 3 and finally for h0 and doing that we'll"}, {"start": 872.4399999999999, "end": 878.56, "text": " get these scores which then will just be passed into softmax and that means we"}, {"start": 878.56, "end": 884.76, "text": " got a sum of one when you sum all of those softmax values we'll get they'll"}, {"start": 884.76, "end": 889.8, "text": " sum up to one and then we can just do a weighted sum of all of those all of"}, {"start": 889.8, "end": 896.56, "text": " these F prime representation so the ones that have tilde will just sum those up"}, {"start": 896.56, "end": 903.72, "text": " and we'll get the new representation for for every single node and aside 
from"}, {"start": 903.72, "end": 910.36, "text": " that we'll just apply a non-linearity and they usually used relu or even elu"}, {"start": 910.36, "end": 917.64, "text": " or e elu non-linearities there so that's let me jump into the equations you can"}, {"start": 917.64, "end": 922.28, "text": " see it here as I just said you basically just concatenate those intermediate"}, {"start": 922.28, "end": 928.46, "text": " representations you do a dot product here you pass it into relu and then you"}, {"start": 928.46, "end": 933.2, "text": " just do softmax and that's how you get the weights which you then use to do a"}, {"start": 933.2, "end": 941.48, "text": " simple weighted sum over the neighbors so j goes over the neighbors of I and we"}, {"start": 941.48, "end": 948.6800000000001, "text": " just saw that the neighbors of I are all of these other nodes 1 2 & 3 so those"}, {"start": 948.6800000000001, "end": 952.32, "text": " are the equations and as you can see it's it's pretty similar to the"}, {"start": 952.32, "end": 957.12, "text": " transformer although I have actually asked better better and he told me they"}, {"start": 957.12, "end": 964.32, "text": " tried in the Q K V matrices but the the the network itself overfitted these"}, {"start": 964.32, "end": 969.32, "text": " small data sets because they're using core sites here and PubMed which are"}, {"start": 969.32, "end": 974.42, "text": " decently small citation and network so it turned out that just using one of"}, {"start": 974.42, "end": 982.6800000000001, "text": " these weighted matrices does the job so now there is a simple extension to the"}, {"start": 982.68, "end": 987.54, "text": " method and that's using k hats like the same thing as the transformers you have"}, {"start": 987.54, "end": 992.4, "text": " multiple hats in your attention and it's a decently simple generalization you"}, {"start": 992.4, "end": 998.9599999999999, "text": " basically have so until now you had per layer you had a single matrix w and the"}, {"start": 998.9599999999999, "end": 1005.8199999999999, "text": " single vector a arrow and those are all the trainable parameters you have so now"}, {"start": 1005.8199999999999, "end": 1012.0799999999999, "text": " you just have K of those and you independently apply all of them and you"}, {"start": 1012.08, "end": 1015.26, "text": " just concatenate the results as you can see here this is the operator for"}, {"start": 1015.26, "end": 1020.36, "text": " concatenation so you do the same thing but you just concatenate all of the"}, {"start": 1020.36, "end": 1026.04, "text": " feature vectors and you get the new representation so basically let me maybe"}, {"start": 1026.04, "end": 1034.28, "text": " do a quick explanation of that in practice what I did is so we say the"}, {"start": 1034.28, "end": 1041.16, "text": " maybe the F was the dimension of the initial representations was maybe 64"}, {"start": 1041.16, "end": 1049.2, "text": " what they would do is those WK matrices would project this vector into maybe"}, {"start": 1049.2, "end": 1054.0800000000002, "text": " only eight dimensions so you just do blah blah blah just do the fully"}, {"start": 1054.0800000000002, "end": 1060.2, "text": " connected thing you get you get a vector that has only eight dimensions and then"}, {"start": 1060.2, "end": 1065.64, "text": " you do the attention you do the weighted average and you just concatenate the"}, {"start": 1065.64, "end": 1074.3600000000001, "text": " results for all of those and you end up at the end with 
again 64 dimensional"}, {"start": 1074.3600000000001, "end": 1080.0400000000002, "text": " representation in the end that goes into the second get layer and so forth so"}, {"start": 1080.0400000000002, "end": 1086.2800000000002, "text": " there is one more interesting detail and that's this equation number six where"}, {"start": 1086.2800000000002, "end": 1092.1200000000001, "text": " they basically showed that they do it a bit differently so they don't apply the"}, {"start": 1092.12, "end": 1099.26, "text": " non-linearity and then concatenate they just average out all of those vectors so"}, {"start": 1099.26, "end": 1105.2399999999998, "text": " representations they average them out by K and then just applied non-linearity"}, {"start": 1105.2399999999998, "end": 1109.1599999999999, "text": " which would be because this is the last layer this would either be softmax or"}, {"start": 1109.1599999999999, "end": 1116.32, "text": " maybe logistic sigmoid I think they said it somewhere here so it's a softmax"}, {"start": 1116.32, "end": 1120.4399999999998, "text": " or logistic sigmoid in the case they're using this for classification problems"}, {"start": 1120.44, "end": 1127.52, "text": " as they did in the get paper so that means that they won't be doing this sub"}, {"start": 1127.52, "end": 1133.92, "text": " like dimensionality reduction they'll just the last layer will actually have"}, {"start": 1133.92, "end": 1140.92, "text": " all of the K hats just operating on the 64 the output will be 64 and not 8 and"}, {"start": 1140.92, "end": 1145.28, "text": " then they'll just average these out and not concatenate them so hopefully that"}, {"start": 1145.28, "end": 1151.08, "text": " was that made any any sense let's take a quick look on these visualizations and"}, {"start": 1151.08, "end": 1155.48, "text": " you can see the thing I just explained so basically we have these intermediate"}, {"start": 1155.48, "end": 1161.2, "text": " representations which I denoted as with tildes and you just concatenate them you"}, {"start": 1161.2, "end": 1167.54, "text": " do the star product with this a arrow vector and basically then you apply some"}, {"start": 1167.54, "end": 1172.56, "text": " non-linearity like non like leaky relu and you apply the softmax and you get"}, {"start": 1172.56, "end": 1177.8999999999999, "text": " these attention weights which you then just use as the weights in the weighted"}, {"start": 1177.8999999999999, "end": 1182.12, "text": " average some to get the final representations and these squiggly lines"}, {"start": 1182.12, "end": 1187.6399999999999, "text": " here like the purple green and the straight blue line just represent"}, {"start": 1187.6399999999999, "end": 1195.2, "text": " different various hats and as I said you either concatenate them in in the in the"}, {"start": 1195.2, "end": 1199.84, "text": " layers before the last one or you just average them out in the last layer and"}, {"start": 1199.84, "end": 1205.08, "text": " that's how you get the next representation so it's pretty pretty"}, {"start": 1205.08, "end": 1210.12, "text": " pretty pretty easy and all of these graph neural networks work like"}, {"start": 1210.12, "end": 1217.8999999999999, "text": " conceptually really similar so you basically have a graph and you pass it"}, {"start": 1217.8999999999999, "end": 1224.6399999999999, "text": " through all air we just modify these representations and outcomes the"}, {"start": 1224.64, "end": 1232.8000000000002, "text": " processed graph G although there is 
this concept of spatial resolution reduction"}, {"start": 1232.8000000000002, "end": 1237.1200000000001, "text": " the same as you have with convolutional neural networks where you have maybe the"}, {"start": 1237.1200000000001, "end": 1242.1200000000001, "text": " input image then gets like down sampled by maybe two and then by two that's like"}, {"start": 1242.1200000000001, "end": 1247.0800000000002, "text": " VGG and many networks do it like that so you can do a similar thing here doing"}, {"start": 1247.0800000000002, "end": 1252.48, "text": " for some clustering and then just reducing the dimension of the graph but"}, {"start": 1252.48, "end": 1258.68, "text": " gets and many methods I've read usually don't do this reduction and what's also"}, {"start": 1258.68, "end": 1263.3600000000001, "text": " really interesting with graph neural networks is they're a much shallower than"}, {"start": 1263.3600000000001, "end": 1269.56, "text": " the CNNs which you usually encounter in computer vision and Mikhail Brunstein"}, {"start": 1269.56, "end": 1273.84, "text": " created a series of nice blocks which I'll maybe even link in this video"}, {"start": 1273.84, "end": 1280.52, "text": " where he explained he actually explored why that is why is it that in graph"}, {"start": 1280.52, "end": 1286.12, "text": " neural networks like deeper isn't better and I won't get into that at this point"}, {"start": 1286.12, "end": 1294.24, "text": " of time okay so let me recap a couple of things that hopefully we we already saw"}, {"start": 1294.24, "end": 1299.12, "text": " so computationally gets are really highly efficient and that's something"}, {"start": 1299.12, "end": 1304.76, "text": " that's a property they share with transformers and again I mentioned"}, {"start": 1304.76, "end": 1310.76, "text": " laplacian I can decompositions those are really costly and that's why you want to"}, {"start": 1310.76, "end": 1315.12, "text": " avoid them and not only because of that but also they are not localized meaning"}, {"start": 1315.12, "end": 1319.22, "text": " the spread so you don't have this nice property of just attending over the"}, {"start": 1319.22, "end": 1323.32, "text": " local neighborhood like one hop neighborhood or two hop neighborhood you"}, {"start": 1323.32, "end": 1330.4, "text": " actually touch every single every single node and that's that's oftentimes not"}, {"start": 1330.4, "end": 1335.44, "text": " desirable because graphs are you usually have the information of interest"}, {"start": 1335.44, "end": 1340.96, "text": " somewhere locally in your community and yeah this equation just pretty much"}, {"start": 1340.96, "end": 1349.5, "text": " sums up like the just the the calculations we just saw and yeah the"}, {"start": 1349.5, "end": 1354.92, "text": " attention enables a leaping model capacity so compared to say GCN the graph"}, {"start": 1354.92, "end": 1362.68, "text": " convolutional neural network that Thomas Kipf and others created they use they"}, {"start": 1362.68, "end": 1368.0, "text": " have a isotropic filtering in a way because they give a same weight to all"}, {"start": 1368.0, "end": 1372.52, "text": " of the feature vectors around the current node of interest whereas get"}, {"start": 1372.52, "end": 1378.0800000000002, "text": " actually can do a weighted can because of the tension it can do different"}, {"start": 1378.0800000000002, "end": 1383.8400000000001, "text": " weightings for different feature vectors depending on how they interact again as"}, {"start": 
1383.84, "end": 1390.76, "text": " I said it's pretty trivial to to make this undirected so if to make it"}, {"start": 1390.76, "end": 1398.08, "text": " directed so say if you have a if you have a maybe nodes like this you'll just"}, {"start": 1398.08, "end": 1404.36, "text": " ignore oops you'll just ignore these I'm retarded today so just ignore these"}, {"start": 1404.36, "end": 1410.12, "text": " that go they have just outward connections and you'll just take these"}, {"start": 1410.12, "end": 1417.0, "text": " nodes as the neighbors which point to the current node of interest I so"}, {"start": 1417.0, "end": 1422.1599999999999, "text": " that's pretty pretty simple stuff and one more interesting thing receptive"}, {"start": 1422.1599999999999, "end": 1427.2399999999998, "text": " field again if you're familiar with the CNN's so receptive field of our model is"}, {"start": 1427.2399999999998, "end": 1431.6399999999999, "text": " upper bounded by the depth of the network so why that is is the following"}, {"start": 1431.6399999999999, "end": 1438.4399999999998, "text": " so as I already mentioned you basically first attend each node in a single layer"}, {"start": 1438.44, "end": 1444.28, "text": " attends to the to a one hop neighborhood so what happens here because this node"}, {"start": 1444.28, "end": 1452.1200000000001, "text": " had his neighbors like maybe here that means in the first get layer it will"}, {"start": 1452.1200000000001, "end": 1456.72, "text": " contain some information from this representation so in the second layer"}, {"start": 1456.72, "end": 1461.48, "text": " this one will be attending this one and the last layer this one attended this"}, {"start": 1461.48, "end": 1468.24, "text": " one so that's why you effectively after after doing to get layers you'll be"}, {"start": 1468.24, "end": 1473.76, "text": " attending effectively to a to hop neighborhood and that's what I say say"}, {"start": 1473.76, "end": 1484.08, "text": " here okay let me jump into data sets and their results and evaluations and let's"}, {"start": 1484.08, "end": 1489.6, "text": " see what happens what results they achieved they had four data sets that"}, {"start": 1489.6, "end": 1496.52, "text": " they've tested and those are Cora sites here and PubMed which are citation"}, {"start": 1496.52, "end": 1501.9599999999998, "text": " networks and they have this PPI data set which has to do with protein to protein"}, {"start": 1501.9599999999998, "end": 1510.0, "text": " interactions so I mentioned previously transductive versus inductive learning so"}, {"start": 1510.0, "end": 1513.7199999999998, "text": " what's the difference here so the difference is the following basically"}, {"start": 1513.72, "end": 1519.92, "text": " when you have transductive learning you have one huge graph so you have you"}, {"start": 1519.92, "end": 1529.3600000000001, "text": " maybe have a citation graph like this and nodes cite each other and for now"}, {"start": 1529.3600000000001, "end": 1534.04, "text": " we'll just assume they are undirected even though technically one paper is"}, {"start": 1534.04, "end": 1537.8, "text": " citing in other papers so that would be a directed relationship but they usually"}, {"start": 1537.8, "end": 1541.76, "text": " just drop out the directionality and that does hurt performance a little bit"}, {"start": 1541.76, "end": 1550.2, "text": " but it's it's it's okay in the sense okay so what happens during these"}, {"start": 1550.2, "end": 1556.08, "text": " transactive learning 
trainings is the following so you usually have so if we"}, {"start": 1556.08, "end": 1562.36, "text": " take for example PubMed you can see that it has around 20k and nodes and it has"}, {"start": 1562.36, "end": 1568.76, "text": " three classes and basically what they do is they just take 20 nodes out of all of"}, {"start": 1568.76, "end": 1574.04, "text": " those three classes and they use those 60 nodes and you can see the number 60"}, {"start": 1574.04, "end": 1581.76, "text": " here so the training nodes as 60 and they use those to do a semi supervised"}, {"start": 1581.76, "end": 1586.96, "text": " learning but they also do see all of those other nodes which will be later be"}, {"start": 1586.96, "end": 1590.96, "text": " tested so you basically see the representation of those test nodes"}, {"start": 1590.96, "end": 1594.92, "text": " during the training which is not something we're used to in the world of"}, {"start": 1594.92, "end": 1599.68, "text": " maybe computer vision or even in graph learning when doing active learning you"}, {"start": 1599.68, "end": 1603.48, "text": " don't see you shouldn't you shouldn't see the test nodes but here in"}, {"start": 1603.48, "end": 1609.0, "text": " transactive learning you do see the test nodes but you only see the structure you"}, {"start": 1609.0, "end": 1614.72, "text": " only see the connections you don't see the labels they have so that's the those"}, {"start": 1614.72, "end": 1618.96, "text": " are the two approaches we have so let me quickly go how this looks like so I'll"}, {"start": 1618.96, "end": 1623.8000000000002, "text": " add one more node so again how this transactive approach learning works is"}, {"start": 1623.8, "end": 1628.52, "text": " the following so out of all of these nodes only you're using the information"}, {"start": 1628.52, "end": 1633.32, "text": " like the label information from maybe a single node here from this one so what"}, {"start": 1633.32, "end": 1638.8799999999999, "text": " will happen is during the maybe let's say we have a two layer get so first so"}, {"start": 1638.8799999999999, "end": 1643.8, "text": " this node here will be take we'll take a look at these representations because"}, {"start": 1643.8, "end": 1651.68, "text": " those are his one hope one hop neighborhoods neighbors and maybe during"}, {"start": 1651.68, "end": 1658.3600000000001, "text": " the second had during the second layer it will also take a look it will also"}, {"start": 1658.3600000000001, "end": 1666.16, "text": " use this nodes representation and then finally it will get its own final"}, {"start": 1666.16, "end": 1672.72, "text": " representation and after we apply the softmax we'll just get a probability"}, {"start": 1672.72, "end": 1677.3600000000001, "text": " distribution over different classes and say PubMed had three classes so that"}, {"start": 1677.36, "end": 1684.6399999999999, "text": " means this will have only three fields here and basically so then we have a"}, {"start": 1684.6399999999999, "end": 1690.24, "text": " classical like classic classification setup you basically what I do is you"}, {"start": 1690.24, "end": 1696.6799999999998, "text": " take the ground truth label so say this second this is the first class second"}, {"start": 1696.6799999999998, "end": 1701.1599999999999, "text": " class and third class say this node belongs to a second class so what you'll"}, {"start": 1701.1599999999999, "end": 1707.08, "text": " do you'll just do a log whatever is in here some probability maybe if it's 0.3"}, 
{"start": 1707.08, "end": 1712.76, "text": " then we're going to and we have a minus so we're gonna heavily penalize it"}, {"start": 1712.76, "end": 1719.1399999999999, "text": " because this is some big value right and we want to have so once we have one the"}, {"start": 1719.1399999999999, "end": 1724.56, "text": " loss will go down to zero and that's what we want to do so then we'll just do"}, {"start": 1724.56, "end": 1731.08, "text": " like a gradient based learning using your favorite optimizer and that's how"}, {"start": 1731.08, "end": 1740.72, "text": " you update all of those W and a arrow matrices in all of the get layers so you"}, {"start": 1740.72, "end": 1746.52, "text": " have this node but because the network is really huge we have around 20k nodes"}, {"start": 1746.52, "end": 1754.0, "text": " will have 60 of these labels and but we'll we'll also be using the structural"}, {"start": 1754.0, "end": 1758.04, "text": " information so the connectivity and representations from different nodes but"}, {"start": 1758.04, "end": 1763.36, "text": " we won't be using their labels so then in the test time we'll try and predict"}, {"start": 1763.36, "end": 1769.48, "text": " what this once we train this thing we'll try and predict what the label of all of"}, {"start": 1769.48, "end": 1773.72, "text": " these other nodes are and that's transactive learning and I felt like"}, {"start": 1773.72, "end": 1776.92, "text": " explaining this a bit better because this was a new concept for me as well I"}, {"start": 1776.92, "end": 1785.6, "text": " wasn't familiar with this so yeah it's interesting anyways what's interesting"}, {"start": 1785.6, "end": 1793.3999999999999, "text": " is that PP PP PP I has several labels per node whereas all of these others"}, {"start": 1793.3999999999999, "end": 1803.52, "text": " have a single label per node and let's see what the results are before that"}, {"start": 1803.52, "end": 1808.84, "text": " just one more thing so I mentioned calculating those attention weights so"}, {"start": 1808.84, "end": 1813.76, "text": " what they additionally do is they do a dropout on those attention weights and"}, {"start": 1813.76, "end": 1817.96, "text": " some of those will get pulled down to zero which means you basically end up"}, {"start": 1817.96, "end": 1822.92, "text": " stochastically sampling the neighborhood so we applied dropout to normalize"}, {"start": 1822.92, "end": 1826.92, "text": " attention coefficients critically this means that at each training iteration"}, {"start": 1826.92, "end": 1832.12, "text": " each node is exposed to a stochastically sampled neighborhood so that's one one"}, {"start": 1832.12, "end": 1835.96, "text": " more detail that's interesting and the same thing is applied in transformer so"}, {"start": 1835.96, "end": 1843.64, "text": " again a parallel there and finally let's see what the results look like so"}, {"start": 1843.64, "end": 1850.3200000000002, "text": " they tested their method against various baselines and so this table here is"}, {"start": 1850.3200000000002, "end": 1855.5200000000002, "text": " about transactive results and this one is about inductive results where so"}, {"start": 1855.5200000000002, "end": 1860.88, "text": " inductive if I haven't mentioned it so we have training graphs so we train the"}, {"start": 1860.88, "end": 1864.8400000000001, "text": " network on a bunch of graphs and then we have a separate like testing graphs"}, {"start": 1864.8400000000001, "end": 1871.88, "text": " where we test our model 
our pre trained get model so that's the usual thing you"}, {"start": 1871.88, "end": 1878.96, "text": " see all around NLP and computer vision world so some of the baselines they they"}, {"start": 1878.96, "end": 1886.72, "text": " use for for the transactive datasets are basic multi-layer perceptron like label"}, {"start": 1886.72, "end": 1892.0400000000002, "text": " propagation algorithm deep walk this is a graph embedding method really really"}, {"start": 1892.0400000000002, "end": 1896.5600000000002, "text": " interesting one where they use random walks and approach similar to if you're"}, {"start": 1896.56, "end": 1901.9199999999998, "text": " familiar with vertebrate they use this skip gram model to train to train this"}, {"start": 1901.9199999999998, "end": 1910.84, "text": " model planetoid again uses similar to deep walk uses semi supervised learning"}, {"start": 1910.84, "end": 1916.52, "text": " whereas deep walk only uses unsupervised learning and then you have Chabishov"}, {"start": 1916.52, "end": 1922.32, "text": " which is one of those spectral methods where although they they improved it"}, {"start": 1922.32, "end": 1928.52, "text": " like the compute efficiency because they just do some like polynomial expansion"}, {"start": 1928.52, "end": 1935.82, "text": " of the laplacian and that saves them some compute GCN is a famous model by"}, {"start": 1935.82, "end": 1941.8, "text": " Thomas Kipf and it's basically you can maybe even treat it as a special case of"}, {"start": 1941.8, "end": 1947.24, "text": " get where the weights are the same it's similar it's not the same but it's in a"}, {"start": 1947.24, "end": 1953.72, "text": " way it's similar and then Monette is nice paper where they generalized all of"}, {"start": 1953.72, "end": 1960.16, "text": " these methods can be actually be treated as a special case of Monette so finally"}, {"start": 1960.16, "end": 1966.4, "text": " the results and yeah one more thing GCN 64 is just they optimize GCN so that"}, {"start": 1966.4, "end": 1972.52, "text": " it's the most fair comparison and they tuned the hyper params and these are the"}, {"start": 1972.52, "end": 1979.52, "text": " results they get so get achieves the state-of-the-art on Quora as well as on"}, {"start": 1979.52, "end": 1986.48, "text": " sites here it matches the state-of-the-art on PubMed and although"}, {"start": 1986.48, "end": 1990.56, "text": " these results are like clearly show that get is better than all of these"}, {"start": 1990.56, "end": 1994.84, "text": " previous baselines in 2020 there was this paper that showed that these"}, {"start": 1994.84, "end": 1999.76, "text": " benchmarks we should question like their validity because they are really small"}, {"start": 1999.76, "end": 2004.6, "text": " and the model can easily overfit and it's not easy to compare now in 2020"}, {"start": 2004.6, "end": 2010.2, "text": " especially different gene and models and we are we are we are actually as a"}, {"start": 2010.2, "end": 2015.52, "text": " community like looking for for a new and better data sets where we can evaluate"}, {"start": 2015.52, "end": 2024.0, "text": " these gene ends results on inductive learning data set is the following so"}, {"start": 2024.0, "end": 2028.76, "text": " they they took as the baselines random again multilayer perceptron and this"}, {"start": 2028.76, "end": 2033.28, "text": " graph sage method which is similar to GCN except that it does a uniform"}, {"start": 2033.28, "end": 2038.96, "text": " sampling of the neighborhood 
and thus they they showed that it can be I can be"}, {"start": 2038.96, "end": 2044.84, "text": " applied to much larger like a social network like size graphs with billions"}, {"start": 2044.84, "end": 2050.44, "text": " of nodes and they have just a different ways of accumulating the neighborhoods"}, {"start": 2050.44, "end": 2057.84, "text": " they are using max pooling LSTM even and then just simple mean element-wise mean"}, {"start": 2057.84, "end": 2063.48, "text": " aggregation etc so yeah I won't get into a lot of details about these methods if"}, {"start": 2063.48, "end": 2067.6800000000003, "text": " you're familiar with them this will maybe be a nice recap and again they"}, {"start": 2067.6800000000003, "end": 2076.6000000000004, "text": " showed that we get improvements with get on this desk as well so they they state"}, {"start": 2076.6000000000004, "end": 2080.1600000000003, "text": " here that larger predictive power can be leveraged by observing the entire"}, {"start": 2080.1600000000003, "end": 2084.6400000000003, "text": " neighborhood so the thing is with with graph sage basically so if you have a"}, {"start": 2084.64, "end": 2089.3599999999997, "text": " huge neighborhood I'll denote it like a like a curve here and this is the second"}, {"start": 2089.3599999999997, "end": 2096.24, "text": " hop neighborhood so what the what GNN does is it basically attends to every"}, {"start": 2096.24, "end": 2102.64, "text": " single node in the one hop neighborhood whereas graph sage will actually just"}, {"start": 2102.64, "end": 2108.92, "text": " take a sub sample randomly sub sample these neighbors in the first layer and"}, {"start": 2108.92, "end": 2115.2000000000003, "text": " then repeat that in the second layer so yeah it's just that's that's the the"}, {"start": 2115.2000000000003, "end": 2121.0, "text": " hypothesis that that's the reason why I get this better although that does"}, {"start": 2121.0, "end": 2127.08, "text": " imply a greater unnecessary compute but that's the trade-off and yep it depends"}, {"start": 2127.08, "end": 2132.88, "text": " on your application so that was it there's just one more interesting chart"}, {"start": 2132.88, "end": 2138.0, "text": " here and this one I just want to quickly explain what this is basically once you"}, {"start": 2138.0, "end": 2144.72, "text": " pre-train the get on the Quora data set what you do is the following so let me"}, {"start": 2144.72, "end": 2151.36, "text": " let me depict the graph initial graph like like this circle we have the first"}, {"start": 2151.36, "end": 2160.32, "text": " get layer so this is get layer number one outcomes like the modified version"}, {"start": 2160.32, "end": 2165.48, "text": " of the graph representations and then there is one more layer and then we do"}, {"start": 2165.48, "end": 2170.16, "text": " the soft maxes and stuff so if we take the representations out of this layer"}, {"start": 2170.16, "end": 2175.28, "text": " and we apply this thing this dimensionality reduction method called"}, {"start": 2175.28, "end": 2181.76, "text": " t-sne that was introduced by Hinton and his collaborators we basically project"}, {"start": 2181.76, "end": 2189.72, "text": " those those vectors into 2d space and we get this and because Quora let me just"}, {"start": 2189.72, "end": 2196.6, "text": " jump to the table above because Quora had seven classes as you can see here"}, {"start": 2196.6, "end": 2203.8399999999997, "text": " and around 2,700 notes that's why we get seven different colors here 
and you can"}, {"start": 2203.8399999999997, "end": 2209.08, "text": " see that the first get layer learns how to nicely separate and cluster similar"}, {"start": 2209.08, "end": 2216.2799999999997, "text": " similar classes so it's pretty easy to later learn some classifier which would"}, {"start": 2216.28, "end": 2223.6800000000003, "text": " learn how to separate all of these and have a nice predictive generalization so"}, {"start": 2223.6800000000003, "end": 2231.0, "text": " that was pretty much it those are just some references so if you found this"}, {"start": 2231.0, "end": 2238.5600000000004, "text": " video useful click that subscribe button and also consider clicking the bell icon"}, {"start": 2238.5600000000004, "end": 2243.48, "text": " to get notified when they upload a new video if you'd love me to create more of"}, {"start": 2243.48, "end": 2247.32, "text": " these videos please write it down in the comment section I'm just starting"}, {"start": 2247.32, "end": 2252.72, "text": " reviewing these papers and hopefully you found this useful so anyways until next"}, {"start": 2252.72, "end": 2275.8399999999997, "text": " time keep learning deep"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=cbYxHkgkSVs
Attention Is All You Need (Transformer) | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ I do a detailed walkthrough of how the original transformer works. I use a simple machine translation (English to German) example to explain the inner workings. You'll learn about: ✔️ Everything there is to know about the transformer :P ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ my implementation: https://github.com/gordicaleksa/pytorch-original-transformer ✅ paper: https://arxiv.org/abs/1706.03762 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 A high-level overview 02:13 tokenization 05:21 embeddings and positional encodings 10:14 encoder preprocessing (splitting into subspaces) 13:20 single MHA head explanation 20:40 pointwise network 23:15 causal masking MHA 28:00 source attending MHA 30:30 projecting into vocab space and loss function 36:30 decoding ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #transformer #attention #vaswani
What's up? So in this video I thought of walking you through a specific example of how the transformer works, like how one iteration of the training loop for the transformer looks. And I already did a previous video where I covered the Attention Is All You Need paper, so you can check that one out if you want to have a holistic overview of the whole paper. But here you'll understand how the details work and how the architecture works. So I'll briefly recap how transformers work here. So basically you have two parts, let me take my pencil out. So you have this encoder part on the left, this is the encoder part, and you have the decoder part on the right. And what you basically do is you split your sentence into tokens, you map those into input embeddings, you add up the positional encodings, then you pass those representations here to multi-head attention, followed up with some layer normalization, and there are also some residual connections, and then you do the feed-forward pass here, and you repeat that n times. So where n is equal to 6 in the original paper. So now I'm just walking you through, like giving you a high-level overview, we'll go into every specific detail of the things I just mentioned in a couple of seconds. So then on the other side you have the decoder, you do the same thing, you just take the German sentence, you again tokenize it, embed it, pass it through the, this time the multi-head attention, which has something called causal masking, and I'll get into that a bit later. And then we have this again, multi-head attention, but in this one you're attending to the representations from the encoder, so you're using those representations as keys and values, and not the ones that come from the decoder. And at the end you just basically project those representations into your output vocabulary space, and you just do simple, like cross-entropy loss or KL divergence, and then you basically backprop. So that's the overview of the transformer, and if you didn't understand every single thing, that's good, because I'm now going to dig deeper into how everything fits together. So the first part that happens is you basically want to learn how to split the sentence. So you want to learn how to split the English sentence into sub-words, and you want to learn how to do the same thing for the German sentence. So you basically have, you start with a huge English corpus, and you start with the second corpus, like the German corpus, that corresponds exactly one-to-one to this corpus. So you basically have a sentence in English, like how are you today, and you have that translation, something called the golden translation, because maybe like humans, like professional translators created this, and so the corresponding sentence would be, wie geht es Ihnen heute in German, and you train something called tokenizer objects, which will learn how to split these sentences and map them into some IDs. And that's where these vocabulary tables come into play. So this is an imaginary English vocab table, and then we have the German one here. So what happens is, so the procedure goes like this. You take the English sentence, and you don't do the simple splitting, like on spaces. That would be something really naive. The original transformer used something called byte-pair encoding, and I won't get into the details of that algorithm.
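To make the splitting-and-ID-mapping idea concrete, here is a toy Python sketch. The tiny vocabulary and the greedy longest-match split below are made up purely for illustration; the real tokenizer learns a byte-pair-encoding vocabulary from the training corpus, so don't read the exact pieces or IDs as the ones the model would actually use.

```python
# Hypothetical toy vocabulary -- the real one is learned from the corpus.
toy_vocab = {"<sos>": 0, "<eos>": 1, "how": 2, "are": 3, "you": 4,
             "to": 5, "day": 6, "?": 7}

def tokenize(sentence, vocab):
    """Greedy longest-match subword split -- a crude stand-in for byte-pair encoding."""
    tokens = []
    for word in sentence.lower().replace("?", " ?").split():
        while word:
            for end in range(len(word), 0, -1):   # longest prefix that exists in the vocab
                if word[:end] in vocab:
                    tokens.append(word[:end])
                    word = word[end:]
                    break
            else:                                 # nothing matched, mark it unknown
                tokens.append("<unk>")
                break
    return tokens

tokens = tokenize("How are you today?", toy_vocab)
ids = [toy_vocab.get(t, -1) for t in tokens]      # -1 would flag an unknown piece
print(tokens)   # ['how', 'are', 'you', 'to', 'day', '?']
print(ids)      # [2, 3, 4, 5, 6, 7]
```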
If you're interested in learning more about that, I could create a video, a separate video, so just comment down below if you think that'd be useful. So you will maybe split the word today into separate sub words, so maybe into something like "to" and "day". And once it knows how to split, and it has the IDs, it can convert the sentence into just a list of IDs. So "how are you today" gets split into the "how", "are", "you", "to", "day" tokens, so five tokens in total, and we just map them into corresponding IDs. So you can see in the table how "how" corresponds to two, so we put two here, etc. And we do the same thing for the German sentence, we get "es", "Ihnen" and "heu", "te", so I kind of imagined splitting this word heute into two substrings. It depends on the tokenizer how it will do this, but let's say it's like this. So what's interesting here is this start of sentence token. So that one is used to prompt the decoder into generating the translation. So basically, later, once the transformer is trained, you just prompt it with this start of sentence token, and it will start generating the words here, you'll feed them back into the input, and that's something called autoregressive decoding, autoregressive models. So that's the first part, how the sentence gets split into these lists of IDs. Now for the second part. Once you have the English and the German tokenizer objects trained, you can do the following. So you basically, what you do is you, let me just, man, let me take something, yeah, okay. So basically you take the list, and you have this thing called input embedding. So basically you have a huge matrix, like that's the size of your English vocabulary, so say that's like maybe 30k rows, and every single row corresponds to a single word in the English vocab. And you basically have, in the original transformer, these were 512 dimensional vectors. So basically that's the input embedding table. And now what you do is you take this sentence, so you have, how are you today? You take the ID, and the ID basically indexes into this table. So if you have two, you'll basically take the second vector, so you'll take this one, and you'll stick it right here, so in the beginning of the encoder. So you'll put the 512 dimensional vector here. You do the same thing for "are". You take the third row, you place it here, and that's it. And now the thing is, in the original paper, this matrix was shared with the German matrix, and was shared with the softmax weights here. But for the sake of easier explanation, I'll just, and that's how I implemented this in my own project, and I'll link my project, my implementation of the original transformer in the description. So basically I split all of these into separate matrices. So once you have that, you do the same thing for German. So basically the German side has a separate embedding matrix, whose size corresponds to the German vocab size, so maybe, I don't know, like 35k rows. And this one will be also 512. And you basically repeat the procedure. You take the first word, which is the start of sentence token, which is maybe mapped to, I don't know, like index two, in this matrix. So you take this row, you place it here, and you repeat the procedure for the rest of the words. So that's how the embedding part works. So you just use the IDs as indices into these embedding tables. And these are completely randomly initialized in the beginning. So that's like random, totally random, like maybe uniform sampling or something. It doesn't matter.
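As a minimal PyTorch sketch of the embedding lookup just described: the IDs index into a big trainable table, one 512-dimensional row per vocabulary entry. The 30k vocabulary size is just the rough figure from the example, and the IDs reuse the toy ones from above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 512            # embedding size used in the original paper
src_vocab_size = 30000   # rough English vocab size from the example (an assumption)

# one randomly initialised, trainable row per vocabulary entry
src_embedding = nn.Embedding(src_vocab_size, d_model)

token_ids = torch.tensor([[2, 3, 4, 5, 6]])   # "how are you to day" from the toy example
token_vectors = src_embedding(token_ids)      # each ID simply indexes into the table
print(token_vectors.shape)                    # torch.Size([1, 5, 512])
```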
So that's something that you learn throughout the training. One of the things you learn. We'll see other parameters that get learned. So the second part is this positional encoding and how that works. So that's also a matrix. And this time it's shared between English and German. And it looks something like this. And now it's basically like a, basically a bunch of sinusoids, so sines and cosines, whose frequencies form a geometrical progression. So you can see that the wavelength here is getting bigger than the wavelength here. And basically this is just some heuristic. You can basically also randomly initialize this matrix and learn it throughout the training. But the authors show that both of those approaches are equivalent, but they still stick with this one. So you do the same thing. You basically take the ID of the words, so how, you take two IDs, two, and you basically take this row and you'll stick that row here and you'll just add them up. You just add those two up and you get the new representation. And now you're here. So this is where we are now. So basically do the same thing for other words. And so just find, so this corresponds to the word R. So this is R. So you'll find the ID that's three. You'll take the third row from this column. You'll find the positional encoding. You'll add them up. Simple addition. And you end up with representations. And now we're finally came to this part where the whole logic happens. And yeah, one more thing. The same thing obviously happens for the German part. So you just find embeddings. You use the positional encoding table and you add them up and you end up with token representations here. So that's the first part. And hopefully this was clear enough. So I'll focus on the multi-head attention and I'll ignore the residual connection and the layer normalization for a second and we'll get to them a bit later. So we have five tokens, right? So we have how, R, blah blah blah, day. And all of these are like 512 dimensional vectors. So now what happens is the multi-head attention basically has four matrices that gets learned during the training. So the first three, the important ones are the query, the key matrix, and finally the value matrix. And all of these have dimension the same as these embeddings. So that's 512. So we'll have 512 times 512 for the query matrix and the same goes for the key and value matrices as well. So what we do now is we basically point-wise apply these matrices. So what I mean by that is I take the Q matrix and I multiply this one and this vector and we get, so we get 512 here and we get also 512 here, et cetera, the same here. And we do the same thing with the key matrix. So we end up with three vectors for every single token, right? So, and here we have value. So that's like the query, the key, and the value vectors. So once we are here, now what happens is you basically, because that's a multi-head attention, basically the original paper had eight heads that go in parallel. So you just split these vectors into eight pieces. So this gets split into the parts. So this one has eight times 64, yeah. So we get 64 dimensional vectors here. We take the first chunk from the second token representation as well and we do that for this. So this corresponds to token day. So we basically take the first part, first segment, which has basically 64 dimension as well. And what we do now is we get into the interesting part. So we just group all of these together. So we'll have, so we have eight heads, right? 
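Here is a short sketch of the query/key/value projections and the split into eight heads, using the sizes from the paper (512-dimensional model, 8 heads of 64 dimensions). The random tensor stands in for the five token representations, and biases are omitted for brevity.

```python
import torch
import torch.nn as nn

d_model, n_heads, seq_len = 512, 8, 5
d_head = d_model // n_heads                     # 64 dimensions per head

# the three learned 512 x 512 projection matrices
W_q = nn.Linear(d_model, d_model, bias=False)
W_k = nn.Linear(d_model, d_model, bias=False)
W_v = nn.Linear(d_model, d_model, bias=False)

x = torch.randn(1, seq_len, d_model)            # stand-in for "how are you to day"

# point-wise projection: every token gets its own query, key and value vector
q, k, v = W_q(x), W_k(x), W_v(x)

# split each 512-dim vector into 8 chunks of 64 and move the head axis forward
def split_heads(t):
    return t.view(1, seq_len, n_heads, d_head).transpose(1, 2)   # (batch, heads, tokens, 64)

q, k, v = map(split_heads, (q, k, v))
print(q.shape)   # torch.Size([1, 8, 5, 64])
```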
So that's head one, head two, blah, blah. We had eight heads. So we group these first segments and place them here. We group the second pieces. So we group, so this is the second piece. This is already getting really annoying, but hopefully you can see what I mean. So we take the second segment, you group them and place them here and you do the same thing for the last segments and place them here. So now I'm going to zoom out and we come to the really interesting part and that's the attention itself. Okay, so we projected all of these embedding vectors into eight subspaces and let's take a single one. So what happens is, so we ended up with those small vectors now. So this one is 64 and that's the query one. Then we have the key and the value vectors as well. So that's 64, 64. And that's for the first token, that's how, because remember we had the sentence, how are you today? So we repeat the same thing until the last token, that's the day token. It's also 64 and the same thing happens here. Just imagine the other vectors. So this is the interesting part and this is how the attention works. So you take the query vector, so this one, and you basically do a dot product between this query vector and every single key vector. So this vector and you'll do, so imagine one vector being here. So you'll do the dot product between this query and this key and with all the other keys until you get to this key and you just do the dot product. So what you end up with is basically a measure of similarity. So that's how dot product, if you're not familiar, I hope you are, so it's a simple linear algebra. So let's say this is our query vector and let's say this is our key vector. So in this, when you do a dot product between those two, because the angle is so small, basically when you project this one, you add them up, you basically get a huge value. Once the, if the key vector was like orthogonal to the query vector, you'd have like a 90 degree here, orthogonal right, and then you'd have zero score. So basically what happens here, you're measuring the similarity between the query vector and the key vectors. And the closer the key vectors are in these subspaces to this query vector, the higher the score. So you basically end up with something like this. Maybe you end up with 3.2 score here. Maybe you end up for the second token, which was how r, so this is r. Maybe you end up with like, I don't know, like 0.3 similarity and you maybe end up with the last token, you end up with, I don't know, like 2.2. And there's one interesting, one more important thing they did and that's normalization by the square root of the dimension of these heads. So that's 64. So this is dimension. So we have 64 here and that's why we do square root of 64, which is 8. They show that this normalization by the square root of this dimension helps them better train the transformer. Otherwise, they suspect that the, like the huge value, so the bigger the dimension, the bigger the dot products would be and they'd push the softmax into regions where the gradients gets really small and then the training just kind of slows down. So that's why they do this. So they just basically divide the scores here and once you do that, you apply the softmax. So softmax will make this sum go to 1 and so the bigger one will have maybe, after doing the softmax, will have maybe 0.7. This one would be maybe 0.01 and this one would be maybe 0.1. And once you have these values, aside from applying probably like a dropout is usually in practice applied here. 
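The single-head attention computation just described, as a small runnable sketch: scaled dot-product scores, softmax, dropout on the attention weights, and then the weighted sum of the value vectors. All numbers here are random stand-ins, so the exact weights mean nothing.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_head = 64
q = torch.randn(5, d_head)   # query vectors for the 5 tokens (one head)
k = torch.randn(5, d_head)   # key vectors
v = torch.randn(5, d_head)   # value vectors

scores = q @ k.T / math.sqrt(d_head)   # dot-product similarities, scaled by sqrt(64) = 8
weights = F.softmax(scores, dim=-1)    # every row now sums to 1
weights = F.dropout(weights, p=0.1)    # stochastically drops some attention weights to 0
out = weights @ v                      # weighted sum of the value vectors
print(out.shape)                       # torch.Size([5, 64]), one contextualised vector per token
```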
So you randomly drop one of these to 0. So maybe this one would go to 0. And once you do that, then you use these scores to finally multiply them, these value vectors by these and you end up with the final representation. So what I mean by that, so we were, so all of these results came for this specific query vector, right? So now we just use this 0.7 and we multiply this value vector with 0.7 and then we multiply this vector with 0 and finally all of these with their corresponding values, the last one will be multiplied by 0.1. We add them up and we get the final representation that's 64 dimension long and it contains the information about all of these surrounding tokens. So that's why attention is really nice and why transformers perform so well. Because they can attend to all of the representations and create a new contextualized representation which is later helpful for the downstream tasks. So that's, you repeat the same thing basically for all of the other tokens. So now you take this query vector, you multiply it by, you do that product between different keys again, you again do the softmax, etc. etc. And you end up with all of the representations. So we end up with five representations here because again remember we have five tokens. So that's how the multi-head attention pretty much works. So now just concatenate all of the eight heads. So we have eight heads, we just group all of these together. So this was the first segment, right? So these came from these parts. So we'll just return them back into the same shape and that's it. We are ready for the final part of the attention. So once you concatenate the vectors, you end up with again 512 because we concatenated all of the eight heads. And we have five of these corresponding to our tokens. How are you today, right? And the last matrix I mentioned, so these are the three ones I showed you so far. But there is one more matrix that's used in the attention module and that's the output matrix. So basically that one again, you just, it's also 512 by 512. It's the, let's call it O, like output matrix. You basically again point-wise project these into 512. So basically just apply this one for this vector and independently apply it to this one. So that's what I mean by point-wise. And this representation is what we get out right here. So that's where we ended up. So now the residual connection would just add up those representations that were here before we did all of these attention thing and they just add it up to our final representations that came out from the, from multiplying by this matrix. That was arguably the most important part of the transformer architecture. And now what happens is, and I'll just ignore the layer normalization. It's similar to batch normalization. It basically just kind of updates all of the weights in the batch so that we can, so the gradients don't get into like, so that we don't have either vanishing or exploding gradients problems basically. It just stabilizes the training. So let me just show you this part with the feed forward net and then like the residual connection and the layer normalization just repeats the same as down here. So we ended up right here and I'll just, for the sake of making this less dirty, just repeat it again here. So again, we have five tokens corresponding to our like tokens. How are you today? And what we do now is we point-wise apply the neural network. Basically that neural network consists out of two matrices and like we have one value like the nonlinear activation layer. 
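Putting the surrounding steps together, here is an illustrative sketch of one encoder block: concatenate the heads, apply the output matrix, add the residual connection and layer normalization, then the position-wise feed-forward net. This is only an assembly under the paper's sizes (the 2048 inner dimension is the 4x expansion explained just below), not the author's actual implementation.

```python
import torch
import torch.nn as nn

d_model, n_heads, seq_len, d_ff = 512, 8, 5, 2048   # d_ff = 4 * d_model as in the paper
torch.manual_seed(0)

x = torch.randn(1, seq_len, d_model)                # representations entering the block
heads_out = torch.randn(1, n_heads, seq_len, 64)    # stand-in for the per-head attention outputs

# concatenate the 8 heads back into 512-dim vectors, then apply the output matrix
concat = heads_out.transpose(1, 2).reshape(1, seq_len, d_model)
W_o = nn.Linear(d_model, d_model)
norm1, norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

x = norm1(x + W_o(concat))                          # residual connection + layer normalization

# position-wise feed-forward net: expand, nonlinearity, project back to 512
ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
x = norm2(x + ffn(x))                               # second residual + layer norm
print(x.shape)                                      # torch.Size([1, 5, 512])
```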
Usually transformers have, so as I said, they have two matrices. So the first one just kind of expands this dimension into 4x. Just don't ask me why. It's just people found out this works and that's it. So we have 512. Now we end up with 2048, right? So and then we just apply the second layer and that down-projects this back into 512. So those are the two additional matrices that we are going to learn during the training. So we had the four ones in the multi-head attention layer. We have two matrices here. We have those embedding matrices and we don't learn the positional encodings because we use the sine and cosine heuristic. So that's it. That's how the encoder part of the transformer works. So we ended up here. We just repeat the same procedure six times, or n times in general, and we end up with encoder representations of our initial tokens. So we end up with, in our case, with five token representations here. There are two more interesting and important things I want to cover and that's how exactly this part works and how exactly this part works. So it's really similar to this multi-head attention I already explained, but there are some details that are worth covering because people usually get confused by those. The feed forward network is the same and I'll just cover the third portion, how this works. So let me start with this part. So the masked multi-head attention module. So this is getting really like a nuclear physics whiteboard, except it's not nuclear physics. Okay. Anyways, we have tokens. This time we have German tokens. So we, let's say we have eight of these and they are 512. This one is maybe the start of the sentence token, I don't know how best to write it down, okay, start of sentence, so "SOS", okay. And we end up with the last word, which was heute. So we have the substring "te" here. So the only difference between this multi-head attention and the last one I showed you is this thing called causal masking. And what basically happens is, so for the sake of making this easier, let's assume we have only one head and not eight heads. So what happens during the attention operation here is, so we have a query, we have key and we have value vectors. And now what happens here is this specific token can only look at the representations that came beforehand. So it can only look at itself and at the previous ones. So this one gets mapped to, again, value, key and query. So if you take this query, what happens is it will do a dot product between this query and this key and this query and this key, etc. It will even do a dot product between that query and this key for the last token. But now the interesting thing that happens is, before we pass these to a softmax, so we ended up with maybe some scores like 3.2, I don't know, like 0.1 and this would have maybe, I don't know, 5.4 and even these would have some scores, so maybe 3.6. But we add like minus infinity here. So we add minus infinity here and we add minus infinity to all of the other ones. So this got a little cluttered, but I just went ahead and redrew this a little bit to make it a bit clearer. So the only difference is, so now when we have maybe the query vector that corresponds to the token "es", because remember the sentence was "wie geht es Ihnen heute", so I split this into two substrings and this into two substrings.
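A tiny sketch of the causal mask just described: every position above the diagonal (the 'future' tokens) gets minus infinity added to its score before the softmax, so its attention weight becomes exactly zero. The scores here are random stand-ins.

```python
import torch
import torch.nn.functional as F

seq_len = 8                                   # eight German tokens in the example
torch.manual_seed(0)
scores = torch.randn(seq_len, seq_len)        # raw query-key scores for one head

# boolean mask that is True strictly above the diagonal, i.e. at future positions
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(causal_mask, float("-inf"))

weights = F.softmax(scores, dim=-1)
print(weights[0])   # first token: all weight on itself, zeros everywhere else
print(weights[3])   # fourth token: non-zero weights only at positions 0..3
```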
So for the "es" one, we take its query and we do, again we just do dot products with different keys and we end up with some scores like maybe 3.2 here, blah blah blah, we end up with maybe 5.1 here, we end up with, I don't know, 2.4 here and we end up with, I don't know, like 0.1. And now what happens is every single score that comes after this current token, we just add, basically we add minus infinity. And what that does is basically, now we feed this into softmax, and after feeding it into softmax it gets pulled to zero, and that basically means we won't be using these value vectors in the final representation for this specific token "es". So this token, once we get the values, so 3.2 maybe gets mapped to 0.2 after softmax, this one gets mapped to, I don't know, 0.7, so we'll use 0.7 of this value vector, we'll use 0.2 of this one, but we won't use these. And that's what makes it causal. So this one can only see what came previously. And so the final representation only contains the previous representations and the current representation. If you were to do this for the first token, everything would get, so all of the other tokens would get mapped to 0. And so the first token, the start of sentence token, can only use its own representation. So basically nothing happens for it. So basically we just pass its value vector through here and that's what happens for the first token. And the last one will be using all of the representations because it's the last one. Okay, so that was this part. So that was this masked multi-head attention module. We covered that one, checked. Now the last one is this and it's simple. So again, we have German tokens, we get "wie geht es Ihnen heute", this is the start of sentence, blah blah blah, we end up with the "te" token. And we have these representations which ended up being calculated after iterating through six of these encoder layers. So we are using those. We have five of those. So basically we have five of those tokens here. And so this one corresponds to wie, no, sorry, these correspond to "how are you today". And now we just use, we basically use these as the keys and the values and we use these as the queries. So again, we have three matrices, the query, the key and the value matrix. Query, key, value. We take the query matrix, we create query vectors for the German tokens, but we use the key and value matrices and apply them to these tokens. So we end up with key and value representations here all the way down to this token. And now we basically take this query and we do the usual dot product with these keys, which, after doing the softmax, will tell us how to combine these value vectors to get the new token representation. So that's what's different for this module. So it basically combines the value vectors that come out from the encoder, using the queries that came from the bottom layer of this decoder layer. So that's it. So I intentionally went into a lot of details. I usually love to go from a high level explanation and then slowly and gradually explain the complexity. But here I just wanted to go fully into details and hopefully some of you found this useful. But we are not finished yet. I still have this part to cover and it's pretty easy. So once we end up at the top of this stack, it's quite simple actually. So let's say we end up with eight token representations. So these correspond to start of sentence, blah blah blah. This corresponds to this token "te". And now what we do is we apply, we project these representations, which are still 512 as always.
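Before moving on to the output projection, here is a single-head sketch of the encoder-decoder attention just described: the queries come from the decoder-side German tokens while the keys and values come from the encoder output. The shapes and random inputs are stand-ins, and the real module is multi-headed.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model = 512
enc_out = torch.randn(1, 5, d_model)   # 5 encoder representations ("how are you to day")
dec_x   = torch.randn(1, 8, d_model)   # 8 decoder-side German token representations

W_q = nn.Linear(d_model, d_model, bias=False)   # queries are built from the decoder tokens
W_k = nn.Linear(d_model, d_model, bias=False)   # keys are built from the encoder output
W_v = nn.Linear(d_model, d_model, bias=False)   # values are built from the encoder output

q, k, v = W_q(dec_x), W_k(enc_out), W_v(enc_out)
scores = q @ k.transpose(-2, -1) / math.sqrt(d_model)   # single head, so scale by sqrt(512)
out = F.softmax(scores, dim=-1) @ v
print(out.shape)   # torch.Size([1, 8, 512]), one encoder-aware vector per German token
```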
We project them into this space that has the dimensionality the same as the German vocabulary size. So that was for example 35k. So that means this one gets projected into 35,000 dimensional vector and that's huge. That means we have a huge matrix here whose dimension is 512 times 35k which is where most of the computation comes into play. And there are many strategies where you can maybe create something called, you can Google it, hierarchical softmax instead of doing this one. Or you can do something called negative sampling, etc. But I won't get into those because those techniques were not mentioned in the original paper. So after you project these, you do the same thing for all of the tokens. And now you just have the ground truth. So basically because we know what the output sentence should be, so that's, we get the sin heute, that was the German sentence. So just to freshen up your memory. So we get the sin heute, okay? So basically what will happen here is we'll have, so we'll take the token v and we'll find the id. And basically what happens is we'll have the ground truth vector here and on the position 3 it will have 1. And here for example this one will be some, won't be a binary vector, it will be, maybe we'll have, I don't know, like 0.7 here and then some random value distribution, right? And basically we do either cross entropy, in this case where we have 1 here, and basically that means if we have 0.7, the loss would be log of 0.7. And if you want to minimize this loss, it should go, this argument better go to 1 because log of 1 equals 0. So the final loss when you're training the transformer is basically a sum of those logs. And I'll get to soft labeling in a bit, but let's assume we're using cross entropy for the moment. And so basically we'll have log of 0.7 here, so this vector will maybe have, let me draw the ground truth one. So let's say this Te is the ground truth vector is 1 here and everything else is 0, but like the distribution was a bit different, so the part that we're interested in is maybe 0.3, so the loss would be log of 0.3 and that loss is even bigger. I mean we just add the minus and then it's big otherwise this is a negative number. So we just sum up those logs and we average them. So basically that's what the loss function is and then you just do the basic gradient based learning using your favorite optimizer like I think they used Adam in the original transformer paper. Yeah they used Adam with a specific learning rate strategy. So that was how it usually works, but there was a slight detail here. They're actually using, they're not using these one-hot vectors as the ground truth, they're using something called soft labeling where basically this would be set to 0.9 and then the rest of the probability mass would be equally distributed evenly across this vector except for the padding token which would always get set to 0. So that's the soft label and then you do something called KL divergence instead of cross entropy loss and you can Google that, I won't get into, this video is already getting too long. So that's it, that's how the whole procedure looks like. So you, just to recap once more, you split the sentences into tokens, you map them into IDs, you look up at those embedding tables, you find the embedding vectors, you just add up those positional encodings. 
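As an aside before the recap continues, here is a sketch of the projection into the target vocabulary and the soft-label (label-smoothing) loss described above, trained with KL divergence. The 35k vocabulary size, the 0.1 smoothing mass, the random targets and the way the mass is spread are all assumptions for illustration; in particular the paper also zeroes out the padding token, which is skipped here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model, trg_vocab_size, smoothing = 512, 35000, 0.1   # sizes are assumptions from the example

decoder_out = torch.randn(8, d_model)                  # 8 decoder token representations
generator = nn.Linear(d_model, trg_vocab_size)         # the big 512 x 35k projection matrix
log_probs = F.log_softmax(generator(decoder_out), dim=-1)

target_ids = torch.randint(0, trg_vocab_size, (8,))    # stand-in ground-truth German token IDs

# soft labels: 0.9 on the correct token, the remaining 0.1 spread over the rest of the vocab
soft_targets = torch.full((8, trg_vocab_size), smoothing / (trg_vocab_size - 1))
soft_targets.scatter_(1, target_ids.unsqueeze(1), 1.0 - smoothing)

loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
print(loss)   # a single scalar that we backprop through
```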
Now you have the initial representations here, now you go through six, you go six times through this encoder stack where you apply the attention and you apply the feedforward net, you end up with the final representations. You do the same thing for the German sentence except that you have the causal mask here and you attend to the encoding vectors in this module and that's it, you finally project into the output vocabulary space and you do backprop on the loss function that consists of KL divergence or cross entropy loss and that's how you do it. I owe you just one more explanation and that's how the, once you train the network using the process I just showed you, how do you do the actual decoding, the translation process. So what you do is you take the, just the input sentence like maybe the same one, how are you today, again you do the same procedure, tokenization, embeddings, blah, blah, blah and you just prompt the model, the decoder with the start of the sentence token and so initially this one would get propagated upwards and we'd get some word here like hopefully it will be the correct word like V and we'll just take that word and we'll embed it here and we'll just repeat the process and that's why this model is called autoregressive. You're basically whatever the output, whatever you have is the output, you just feed it back as the input and you're just autoregressing until you end up. So what's the stopping condition? The stopping condition is that end of sentence token I mentioned here, so basically you, once the network outputs end of sentence token you stop the translation process and that's pretty much it. So, this was a long video, I'm experimenting with this channel obviously, I'm trying to see what works best for you as well as for me, so if you found this video useful please put some comments down in the comment section, I appreciate any feedback like was it too detailed, was it too messy, maybe I can, yeah, I'll try and improve the next time. So, see you next time and subscribe to this channel if you haven't already and click that bell icon to get notified when I upload a new video and until next time, keep learning deep.
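Finally, a sketch of the greedy autoregressive decoding loop described above. The decode_step function is a dummy stand-in for a trained transformer (it returns random logits, so the loop is runnable but the output is meaningless), and the SOS/EOS IDs and vocabulary size are hypothetical. Beam search is a common alternative to the greedy argmax used here.

```python
import torch

SOS, EOS = 0, 1     # hypothetical start / end of sentence token IDs
MAX_LEN = 20

def decode_step(src_ids, trg_ids):
    """Stand-in for a trained transformer: returns next-token logits over the target vocab."""
    torch.manual_seed(len(trg_ids))   # deterministic dummy behaviour
    return torch.randn(100)           # pretend target vocab of 100 tokens

src_ids = [2, 3, 4, 5, 6]             # the tokenised English source sentence
trg_ids = [SOS]                       # prompt the decoder with the start of sentence token

for _ in range(MAX_LEN):
    logits = decode_step(src_ids, trg_ids)
    next_id = int(torch.argmax(logits))   # greedy choice of the next token
    trg_ids.append(next_id)               # feed the prediction back in -- autoregression
    if next_id == EOS:
        break                             # stop once the model emits the end of sentence token

print(trg_ids)
```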
[{"start": 0.0, "end": 5.5600000000000005, "text": " What's up? So in this video I thought walking you through a specific example of how the"}, {"start": 5.5600000000000005, "end": 11.0, "text": " transformer, like one iteration of the training loop for the transformer looks like. And I"}, {"start": 11.0, "end": 15.24, "text": " already did a previous video where I covered the attention is all you need paper, so you"}, {"start": 15.24, "end": 18.96, "text": " can check that one out if you want to know, like a whole, if you have, if you want to"}, {"start": 18.96, "end": 24.0, "text": " have a holistic overview of the whole paper. But here you'll understand how the, like the,"}, {"start": 24.0, "end": 29.96, "text": " how the details work and how the architecture works. So I'll briefly recap how transformers"}, {"start": 29.96, "end": 36.44, "text": " work here. So basically they have, you have two parts, let me take my pencil out. So you"}, {"start": 36.44, "end": 41.6, "text": " have this encoder part on the left, this is the encoder part, and you have the decoder"}, {"start": 41.6, "end": 48.16, "text": " part on the right. And what you basically do is you split your sentence into tokens,"}, {"start": 48.16, "end": 53.96, "text": " you map those into input embeddings, you add up the positional encodings, then you pass"}, {"start": 53.96, "end": 61.68, "text": " those representations here to multi-head attention, followed up with some layer normalization,"}, {"start": 61.68, "end": 66.84, "text": " and there are also some residual connections, and then you do the feed-forward pass here,"}, {"start": 66.84, "end": 72.6, "text": " and you repeat that n times. So where n is equal to 6 in the original paper. So now I'm"}, {"start": 72.6, "end": 76.28, "text": " just walking you through, like giving you a high-level overview, we'll go into every"}, {"start": 76.28, "end": 82.12, "text": " specific detail of the things I just mentioned in a couple of seconds. So then on the other"}, {"start": 82.12, "end": 86.80000000000001, "text": " side you have the decoder, you do the same thing, you just take the German sentence,"}, {"start": 86.80000000000001, "end": 93.84, "text": " you again tokenize it, embed it, pass it through the, this time the multi-head attention, which"}, {"start": 93.84, "end": 99.2, "text": " has something called causal masking, and I'll get into that a bit later. And then we have"}, {"start": 99.2, "end": 104.32000000000001, "text": " this again, multi-head attention, but this one you're attending to the representations"}, {"start": 104.32, "end": 114.11999999999999, "text": " from the encoder, and not from the, so you're using those representations as keys and values,"}, {"start": 114.11999999999999, "end": 121.39999999999999, "text": " and not the ones that come from the decoder. And at the end you just basically project"}, {"start": 121.39999999999999, "end": 128.16, "text": " those representations into your output vocabulary space, and you just do simple, like cross-entropy"}, {"start": 128.16, "end": 138.16, "text": " loss or KL divergence, and then you basically backprop. So that's the overview of the transformer,"}, {"start": 138.16, "end": 142.76, "text": " and if you didn't understand every single thing, that's good, because I'm now going"}, {"start": 142.76, "end": 151.24, "text": " to dig deeper into how everything fits together. So the first part that happens is you basically"}, {"start": 151.24, "end": 156.32, "text": " want to learn how to split the sentence. 
So you want to learn how to split the English"}, {"start": 156.32, "end": 161.35999999999999, "text": " sentence into sub-words, and you want to learn how to do the same thing for the German sentence."}, {"start": 161.35999999999999, "end": 168.2, "text": " So you basically have, you start with a huge English corpus, and you start with the second"}, {"start": 168.2, "end": 173.64, "text": " corpus, like the German corpus, that corresponds exactly one-to-one to this corpus. So you"}, {"start": 173.64, "end": 179.12, "text": " basically have a sentence in English, like how are you today, and you have that translation,"}, {"start": 179.12, "end": 184.84, "text": " something called the golden translation, because maybe like humans, like professional translators"}, {"start": 184.84, "end": 191.76, "text": " created this, and so the corresponding sentence would be, wie geht es Ihnen heute in German,"}, {"start": 191.76, "end": 196.96, "text": " and you train something called tokenizer objects, which will learn how to split these sentences"}, {"start": 196.96, "end": 205.44, "text": " and map them into some IDs. And that's where these vocabulary tables come into play. So"}, {"start": 205.44, "end": 212.12, "text": " this is an imaginary English vocab table, and then we have the German one here. So what"}, {"start": 212.12, "end": 217.16, "text": " happens is, so the procedure goes like this. You take the English sentence, and you don't"}, {"start": 217.16, "end": 222.52, "text": " do the simple splitting, like on spaces. That would be something really naive. The original"}, {"start": 222.52, "end": 228.64000000000001, "text": " transformer used something called byte-pairing coding, and I won't get into the details of"}, {"start": 228.64000000000001, "end": 232.88, "text": " that algorithm. If you're interested in learning more about that, I could create a video, a"}, {"start": 232.88, "end": 239.16, "text": " separate video, so just comment down below if you think that'd be useful. So you will"}, {"start": 239.16, "end": 244.32, "text": " maybe split the word today into separate sub words, so maybe like into something like two"}, {"start": 244.32, "end": 250.68, "text": " and day. And once it knows how to split, and it has the IDs, it can convert the sentence"}, {"start": 250.68, "end": 257.6, "text": " into just a list of IDs. So how are you today gets split into how are you today tokens,"}, {"start": 257.6, "end": 262.71999999999997, "text": " so five tokens in total, and we just map them into corresponding IDs. So you can see in"}, {"start": 262.71999999999997, "end": 267.36, "text": " the table how it corresponds to two, so we put two here, etc. And we do the same thing"}, {"start": 267.36, "end": 275.08000000000004, "text": " for the German sentence, we get as in and hoi-te, so I kind of, I imaginary split this"}, {"start": 275.08000000000004, "end": 281.16, "text": " word hoi-te into two substrings. It depends on the tokenizer how it will do this, but"}, {"start": 281.16, "end": 287.84000000000003, "text": " let's say it's like this. So what's interesting here is this start of sentence token. So that"}, {"start": 287.84000000000003, "end": 296.0, "text": " one is used to prompt the decoder into generating the translation. 
So basically, you later wants"}, {"start": 296.0, "end": 301.12, "text": " to transform restrained, you just prompted with this start of sentence token, and it"}, {"start": 301.12, "end": 306.52, "text": " will start generating the words here, you'll feed them back into the input, and that's"}, {"start": 306.52, "end": 313.08, "text": " something called autoregressive decoding, autoregressive models. So that's the first"}, {"start": 313.08, "end": 322.48, "text": " part how the sentence gets split into these lists of IDs. Now for the second part. Once"}, {"start": 322.48, "end": 328.6, "text": " you have the English and the German tokenizer object trained, you can do the following."}, {"start": 328.6, "end": 336.04, "text": " So you basically, what you do is you, let me just, man, let me take something, yeah,"}, {"start": 336.04, "end": 342.96000000000004, "text": " okay. So basically you take the list, and you have this thing called input embedding."}, {"start": 342.96000000000004, "end": 349.28000000000003, "text": " So basically that's a, so this, so you have a basically a huge matrix, like that's the"}, {"start": 349.28, "end": 357.71999999999997, "text": " size of your English vocabulary, so say that's like maybe 30k rows, and every single row"}, {"start": 357.71999999999997, "end": 364.32, "text": " correspond to a single word in the English vocab. And you basically have, in the original"}, {"start": 364.32, "end": 372.2, "text": " transformer, these were 512 dimensional vectors. So basically that's the input encoding. And"}, {"start": 372.2, "end": 378.59999999999997, "text": " now what you do is you take this sentence, so you have, how are you today? You take the"}, {"start": 378.6, "end": 385.20000000000005, "text": " ID, and the ID basically indexes into this table. So if you have two, you'll basically"}, {"start": 385.20000000000005, "end": 391.72, "text": " take the second vector, so you'll take this one, and you'll stick it right here, so in"}, {"start": 391.72, "end": 398.88, "text": " the beginning of the encoder. So you'll put the 512 dimensional vector here. You do the"}, {"start": 398.88, "end": 405.84000000000003, "text": " same thing for R. You take the third row, you place it here, and that's it. And now"}, {"start": 405.84, "end": 413.08, "text": " the thing is, in the original paper, this matrix was shared with the German matrix,"}, {"start": 413.08, "end": 420.12, "text": " and was shared with softmax weights here. But for the sake of easier explanation, I'll"}, {"start": 420.12, "end": 425.76, "text": " just, and that's how I implemented this in my own project, and I'll link my project,"}, {"start": 425.76, "end": 429.88, "text": " my implementation of the original transformer in the description. So basically split all"}, {"start": 429.88, "end": 437.96, "text": " of these into separate matrices. So once you have that, you do the same thing for German."}, {"start": 437.96, "end": 444.88, "text": " So basically the German has a separate embedding matrix, which has the size of the, this size"}, {"start": 444.88, "end": 451.7, "text": " corresponds to the German vocab size, so maybe, I don't know, like 35k rows. And this one"}, {"start": 451.7, "end": 457.68, "text": " will be also 512. And you basically repeat the procedure. You take the first word, which"}, {"start": 457.68, "end": 464.76, "text": " is the start of sentence token, which is maybe mapped to, I don't know, like index two, in"}, {"start": 464.76, "end": 470.76, "text": " this matrix. 
So you take this row, you place it here, and you repeat the procedure for"}, {"start": 470.76, "end": 478.16, "text": " the rest of the words. So that's how the embedding part works. So you just use the IDs as indices"}, {"start": 478.16, "end": 483.72, "text": " into these embedding tables. And these are completely randomly initialized in the beginning."}, {"start": 483.72, "end": 488.88000000000005, "text": " So that's like random, totally random, like maybe uniform sampling or something. It doesn't"}, {"start": 488.88000000000005, "end": 494.0, "text": " matter. So that's something that you learn throughout the training. One of the things"}, {"start": 494.0, "end": 499.40000000000003, "text": " you learn. We'll see other parameters that get learned. So the second part is this positional"}, {"start": 499.40000000000003, "end": 506.52000000000004, "text": " encoding and how that works. So that's also a matrix. And this time it's shared between"}, {"start": 506.52, "end": 516.64, "text": " English and German. And it looks something like this. And now it's basically like a,"}, {"start": 516.64, "end": 522.28, "text": " basically a bunch of sinusoids, so sines and cosines, whose frequencies form a geometrical"}, {"start": 522.28, "end": 527.52, "text": " progression. So you can see that the wavelength here is getting bigger than the wavelength"}, {"start": 527.52, "end": 533.24, "text": " here. And basically this is just some heuristic. You can basically also randomly initialize"}, {"start": 533.24, "end": 539.4, "text": " this matrix and learn it throughout the training. But the authors show that both of those approaches"}, {"start": 539.4, "end": 545.5600000000001, "text": " are equivalent, but they still stick with this one. So you do the same thing. You basically"}, {"start": 545.5600000000001, "end": 550.6800000000001, "text": " take the ID of the words, so how, you take two IDs, two, and you basically take this"}, {"start": 550.6800000000001, "end": 558.76, "text": " row and you'll stick that row here and you'll just add them up. You just add those two up"}, {"start": 558.76, "end": 565.92, "text": " and you get the new representation. And now you're here. So this is where we are now."}, {"start": 565.92, "end": 572.92, "text": " So basically do the same thing for other words. And so just find, so this corresponds to the"}, {"start": 572.92, "end": 581.16, "text": " word R. So this is R. So you'll find the ID that's three. You'll take the third row from"}, {"start": 581.16, "end": 586.24, "text": " this column. You'll find the positional encoding. You'll add them up. Simple addition. And you"}, {"start": 586.24, "end": 592.2, "text": " end up with representations. And now we're finally came to this part where the whole"}, {"start": 592.2, "end": 600.0, "text": " logic happens. And yeah, one more thing. The same thing obviously happens for the German"}, {"start": 600.0, "end": 606.44, "text": " part. So you just find embeddings. You use the positional encoding table and you add"}, {"start": 606.44, "end": 613.26, "text": " them up and you end up with token representations here. So that's the first part. And hopefully"}, {"start": 613.26, "end": 619.3199999999999, "text": " this was clear enough. So I'll focus on the multi-head attention and I'll ignore the residual"}, {"start": 619.3199999999999, "end": 625.02, "text": " connection and the layer normalization for a second and we'll get to them a bit later."}, {"start": 625.02, "end": 636.28, "text": " So we have five tokens, right? 
So we have how, R, blah blah blah, day. And all of these"}, {"start": 636.28, "end": 644.64, "text": " are like 512 dimensional vectors. So now what happens is the multi-head attention basically"}, {"start": 644.64, "end": 650.76, "text": " has four matrices that gets learned during the training. So the first three, the important"}, {"start": 650.76, "end": 665.16, "text": " ones are the query, the key matrix, and finally the value matrix. And all of these have dimension"}, {"start": 665.16, "end": 671.76, "text": " the same as these embeddings. So that's 512. So we'll have 512 times 512 for the query"}, {"start": 671.76, "end": 678.12, "text": " matrix and the same goes for the key and value matrices as well. So what we do now is we"}, {"start": 678.12, "end": 684.3199999999999, "text": " basically point-wise apply these matrices. So what I mean by that is I take the Q matrix"}, {"start": 684.3199999999999, "end": 694.72, "text": " and I multiply this one and this vector and we get, so we get 512 here and we get also"}, {"start": 694.72, "end": 700.6, "text": " 512 here, et cetera, the same here. And we do the same thing with the key matrix. So"}, {"start": 700.6, "end": 709.0, "text": " we end up with three vectors for every single token, right? So, and here we have value."}, {"start": 709.0, "end": 717.76, "text": " So that's like the query, the key, and the value vectors. So once we are here, now what"}, {"start": 717.76, "end": 723.2, "text": " happens is you basically, because that's a multi-head attention, basically the original"}, {"start": 723.2, "end": 728.2, "text": " paper had eight heads that go in parallel. So you just split these vectors into eight"}, {"start": 728.2, "end": 735.36, "text": " pieces. So this gets split into the parts. So this one has eight times 64, yeah. So we"}, {"start": 735.36, "end": 742.72, "text": " get 64 dimensional vectors here. We take the first chunk from the second token representation"}, {"start": 742.72, "end": 751.76, "text": " as well and we do that for this. So this corresponds to token day. So we basically take the first"}, {"start": 751.76, "end": 761.0, "text": " part, first segment, which has basically 64 dimension as well. And what we do now is we"}, {"start": 761.0, "end": 766.24, "text": " get into the interesting part. So we just group all of these together. So we'll have,"}, {"start": 766.24, "end": 774.04, "text": " so we have eight heads, right? So that's head one, head two, blah, blah. We had eight heads."}, {"start": 774.04, "end": 780.76, "text": " So we group these first segments and place them here. We group the second pieces. So"}, {"start": 780.76, "end": 788.36, "text": " we group, so this is the second piece. This is already getting really annoying, but hopefully"}, {"start": 788.36, "end": 792.4399999999999, "text": " you can see what I mean. So we take the second segment, you group them and place them here"}, {"start": 792.4399999999999, "end": 796.84, "text": " and you do the same thing for the last segments and place them here. So now I'm going to zoom"}, {"start": 796.84, "end": 802.92, "text": " out and we come to the really interesting part and that's the attention itself. Okay,"}, {"start": 802.92, "end": 808.96, "text": " so we projected all of these embedding vectors into eight subspaces and let's take a single"}, {"start": 808.96, "end": 818.6, "text": " one. So what happens is, so we ended up with those small vectors now. 
So this one is 64"}, {"start": 818.6, "end": 824.5600000000001, "text": " and that's the query one. Then we have the key and the value vectors as well. So that's"}, {"start": 824.5600000000001, "end": 831.64, "text": " 64, 64. And that's for the first token, that's how, because remember we had the sentence,"}, {"start": 831.64, "end": 838.88, "text": " how are you today? So we repeat the same thing until the last token, that's the day token."}, {"start": 838.88, "end": 848.32, "text": " It's also 64 and the same thing happens here. Just imagine the other vectors. So this is"}, {"start": 848.32, "end": 853.4399999999999, "text": " the interesting part and this is how the attention works. So you take the query vector, so this"}, {"start": 853.4399999999999, "end": 859.4399999999999, "text": " one, and you basically do a dot product between this query vector and every single key vector."}, {"start": 859.4399999999999, "end": 865.92, "text": " So this vector and you'll do, so imagine one vector being here. So you'll do the dot product"}, {"start": 865.92, "end": 871.12, "text": " between this query and this key and with all the other keys until you get to this key and"}, {"start": 871.12, "end": 877.3199999999999, "text": " you just do the dot product. So what you end up with is basically a measure of similarity."}, {"start": 877.3199999999999, "end": 882.9399999999999, "text": " So that's how dot product, if you're not familiar, I hope you are, so it's a simple linear algebra."}, {"start": 882.9399999999999, "end": 890.28, "text": " So let's say this is our query vector and let's say this is our key vector. So in this,"}, {"start": 890.28, "end": 895.8, "text": " when you do a dot product between those two, because the angle is so small, basically when"}, {"start": 895.8, "end": 901.5999999999999, "text": " you project this one, you add them up, you basically get a huge value. Once the, if the"}, {"start": 901.5999999999999, "end": 908.0799999999999, "text": " key vector was like orthogonal to the query vector, you'd have like a 90 degree here,"}, {"start": 908.0799999999999, "end": 914.9599999999999, "text": " orthogonal right, and then you'd have zero score. So basically what happens here, you're"}, {"start": 914.9599999999999, "end": 920.04, "text": " measuring the similarity between the query vector and the key vectors. And the closer"}, {"start": 920.04, "end": 926.52, "text": " the key vectors are in these subspaces to this query vector, the higher the score. So"}, {"start": 926.52, "end": 932.1999999999999, "text": " you basically end up with something like this. Maybe you end up with 3.2 score here. Maybe"}, {"start": 932.1999999999999, "end": 940.36, "text": " you end up for the second token, which was how r, so this is r. Maybe you end up with"}, {"start": 940.36, "end": 946.3199999999999, "text": " like, I don't know, like 0.3 similarity and you maybe end up with the last token, you"}, {"start": 946.32, "end": 953.5200000000001, "text": " end up with, I don't know, like 2.2. And there's one interesting, one more important thing"}, {"start": 953.5200000000001, "end": 959.36, "text": " they did and that's normalization by the square root of the dimension of these heads. So that's"}, {"start": 959.36, "end": 964.8000000000001, "text": " 64. So this is dimension. So we have 64 here and that's why we do square root of 64, which"}, {"start": 964.8000000000001, "end": 971.0400000000001, "text": " is 8. 
They show that this normalization by the square root of this dimension helps them"}, {"start": 971.04, "end": 978.24, "text": " better train the transformer. Otherwise, they suspect that the, like the huge value, so"}, {"start": 978.24, "end": 982.52, "text": " the bigger the dimension, the bigger the dot products would be and they'd push the softmax"}, {"start": 982.52, "end": 987.76, "text": " into regions where the gradients gets really small and then the training just kind of slows"}, {"start": 987.76, "end": 992.5799999999999, "text": " down. So that's why they do this. So they just basically divide the scores here and"}, {"start": 992.5799999999999, "end": 999.1999999999999, "text": " once you do that, you apply the softmax. So softmax will make this sum go to 1 and so"}, {"start": 999.2, "end": 1004.6, "text": " the bigger one will have maybe, after doing the softmax, will have maybe 0.7. This one"}, {"start": 1004.6, "end": 1013.76, "text": " would be maybe 0.01 and this one would be maybe 0.1. And once you have these values,"}, {"start": 1013.76, "end": 1019.2, "text": " aside from applying probably like a dropout is usually in practice applied here. So you"}, {"start": 1019.2, "end": 1025.4, "text": " randomly drop one of these to 0. So maybe this one would go to 0. And once you do that,"}, {"start": 1025.4, "end": 1034.3200000000002, "text": " then you use these scores to finally multiply them, these value vectors by these and you"}, {"start": 1034.3200000000002, "end": 1038.74, "text": " end up with the final representation. So what I mean by that, so we were, so all of these"}, {"start": 1038.74, "end": 1046.24, "text": " results came for this specific query vector, right? So now we just use this 0.7 and we"}, {"start": 1046.24, "end": 1052.44, "text": " multiply this value vector with 0.7 and then we multiply this vector with 0 and finally"}, {"start": 1052.44, "end": 1058.1200000000001, "text": " all of these with their corresponding values, the last one will be multiplied by 0.1. We"}, {"start": 1058.1200000000001, "end": 1068.92, "text": " add them up and we get the final representation that's 64 dimension long and it contains the"}, {"start": 1068.92, "end": 1074.38, "text": " information about all of these surrounding tokens. So that's why attention is really"}, {"start": 1074.38, "end": 1080.98, "text": " nice and why transformers perform so well. Because they can attend to all of the representations"}, {"start": 1080.98, "end": 1085.96, "text": " and create a new contextualized representation which is later helpful for the downstream"}, {"start": 1085.96, "end": 1091.0, "text": " tasks. So that's, you repeat the same thing basically for all of the other tokens. So"}, {"start": 1091.0, "end": 1096.72, "text": " now you take this query vector, you multiply it by, you do that product between different"}, {"start": 1096.72, "end": 1103.02, "text": " keys again, you again do the softmax, etc. etc. And you end up with all of the representations."}, {"start": 1103.02, "end": 1109.1, "text": " So we end up with five representations here because again remember we have five tokens."}, {"start": 1109.1, "end": 1114.3999999999999, "text": " So that's how the multi-head attention pretty much works. So now just concatenate all of"}, {"start": 1114.3999999999999, "end": 1120.1999999999998, "text": " the eight heads. So we have eight heads, we just group all of these together. So this"}, {"start": 1120.1999999999998, "end": 1126.36, "text": " was the first segment, right? 
So these came from these parts. So we'll just return them"}, {"start": 1126.36, "end": 1133.6399999999999, "text": " back into the same shape and that's it. We are ready for the final part of the attention."}, {"start": 1133.64, "end": 1140.0400000000002, "text": " So once you concatenate the vectors, you end up with again 512 because we concatenated"}, {"start": 1140.0400000000002, "end": 1150.68, "text": " all of the eight heads. And we have five of these corresponding to our tokens. How are"}, {"start": 1150.68, "end": 1157.88, "text": " you today, right? And the last matrix I mentioned, so these are the three ones I showed you so"}, {"start": 1157.88, "end": 1164.0400000000002, "text": " far. But there is one more matrix that's used in the attention module and that's the output"}, {"start": 1164.0400000000002, "end": 1172.0, "text": " matrix. So basically that one again, you just, it's also 512 by 512. It's the, let's call"}, {"start": 1172.0, "end": 1182.0400000000002, "text": " it O, like output matrix. You basically again point-wise project these into 512. So basically"}, {"start": 1182.04, "end": 1189.6, "text": " just apply this one for this vector and independently apply it to this one. So that's what I mean"}, {"start": 1189.6, "end": 1197.2, "text": " by point-wise. And this representation is what we get out right here. So that's where"}, {"start": 1197.2, "end": 1205.48, "text": " we ended up. So now the residual connection would just add up those representations that"}, {"start": 1205.48, "end": 1212.44, "text": " were here before we did all of these attention thing and they just add it up to our final"}, {"start": 1212.44, "end": 1218.64, "text": " representations that came out from the, from multiplying by this matrix. That was arguably"}, {"start": 1218.64, "end": 1223.56, "text": " the most important part of the transformer architecture. And now what happens is, and"}, {"start": 1223.56, "end": 1227.56, "text": " I'll just ignore the layer normalization. It's similar to batch normalization. It basically"}, {"start": 1227.56, "end": 1232.84, "text": " just kind of updates all of the weights in the batch so that we can, so the gradients"}, {"start": 1232.84, "end": 1238.04, "text": " don't get into like, so that we don't have either vanishing or exploding gradients problems"}, {"start": 1238.04, "end": 1243.28, "text": " basically. It just stabilizes the training. So let me just show you this part with the"}, {"start": 1243.28, "end": 1247.4399999999998, "text": " feed forward net and then like the residual connection and the layer normalization just"}, {"start": 1247.4399999999998, "end": 1256.3999999999999, "text": " repeats the same as down here. So we ended up right here and I'll just, for the sake"}, {"start": 1256.4, "end": 1264.72, "text": " of making this less dirty, just repeat it again here. So again, we have five tokens"}, {"start": 1264.72, "end": 1273.1200000000001, "text": " corresponding to our like tokens. How are you today? And what we do now is we point-wise"}, {"start": 1273.1200000000001, "end": 1283.5600000000002, "text": " apply the neural network. Basically that neural network consists out of two matrices and like"}, {"start": 1283.56, "end": 1291.9199999999998, "text": " we have one value like the nonlinear activation layer. Usually transformers have, so as I"}, {"start": 1291.9199999999998, "end": 1297.6399999999999, "text": " said, they have two matrices. 
So the first one just kind of expands this dimension into"}, {"start": 1297.6399999999999, "end": 1304.24, "text": " 4x. Just don't ask me why. It's just people found out this works and that's it. So we"}, {"start": 1304.24, "end": 1315.28, "text": " have 512. Now we end up with 2048, right? So and then we just apply the second layer"}, {"start": 1315.28, "end": 1321.72, "text": " and that DOM projects this into 512. So those are the two additional matrices that we are"}, {"start": 1321.72, "end": 1330.4, "text": " going to learn during the training. So we had the four ones in the multi-head attention"}, {"start": 1330.4, "end": 1335.3600000000001, "text": " layer. We have two matrices here. We have those embedding matrices and we don't learn"}, {"start": 1335.3600000000001, "end": 1341.5600000000002, "text": " the positional encodings because they are, we use the sine and cosine heuristic. So that's"}, {"start": 1341.5600000000002, "end": 1347.92, "text": " it. That's how the encoder part of the transformer works. So we ended up here. We just repeat"}, {"start": 1347.92, "end": 1358.5600000000002, "text": " the same procedure six times or n times like in general and we end up with encoder representations"}, {"start": 1358.56, "end": 1365.96, "text": " of our initial tokens. So we end up with, in our case, with five token representations"}, {"start": 1365.96, "end": 1371.7, "text": " here. There are two more interesting and important things I want to cover and that's how exactly"}, {"start": 1371.7, "end": 1375.76, "text": " this part works and how exactly this part works. So it's really similar to this multi-head"}, {"start": 1375.76, "end": 1380.84, "text": " attention I already explained, but there are some details that are worth covering because"}, {"start": 1380.84, "end": 1384.8799999999999, "text": " people usually get confused by those. The feed forward network is the same and I'll"}, {"start": 1384.88, "end": 1389.72, "text": " just cover like the third portion is this, how this works. So let me start with this"}, {"start": 1389.72, "end": 1399.8000000000002, "text": " part. So the MAST multi-head attention module. So this is getting really like a nuclear physics"}, {"start": 1399.8000000000002, "end": 1407.98, "text": " whiteboard, except it's not nuclear physics. Okay. Anyways, we have tokens. This time we"}, {"start": 1407.98, "end": 1416.44, "text": " have German tokens. So we, let's say we have eight of these and they are 512. This one"}, {"start": 1416.44, "end": 1423.0, "text": " is maybe the start of the sentence token. I don't know how to write today. Okay. Start"}, {"start": 1423.0, "end": 1428.84, "text": " the sentence. So S okay. And we end up with the last word, which was Hoytas. So we have"}, {"start": 1428.84, "end": 1435.6, "text": " the substring T here. So the only difference between this multi-head attention and the"}, {"start": 1435.6, "end": 1441.04, "text": " last one I showed you is this thing called causal masking. And what basically happens"}, {"start": 1441.04, "end": 1445.9199999999998, "text": " is so everything, so for the sake of making this easier, let's assume we have only one"}, {"start": 1445.9199999999998, "end": 1452.84, "text": " head and not eight heads. So what happens during the attention operation here is so"}, {"start": 1452.84, "end": 1464.56, "text": " we have, so we have a query, we have key and we have value vectors. 
And now what happens"}, {"start": 1464.56, "end": 1471.96, "text": " here is this specific token can only look at the representations that came beforehand."}, {"start": 1471.96, "end": 1482.3999999999999, "text": " So you can only look to itself and to previous. So this one gets mapped to again, value, key"}, {"start": 1482.3999999999999, "end": 1491.84, "text": " and query. So if you take this query, what happens is it will do a dot product between"}, {"start": 1491.84, "end": 1497.84, "text": " this query and this key and this query and this key, etc. It will even do a dot product"}, {"start": 1497.84, "end": 1505.3999999999999, "text": " between that query and this key for the last token. But now the interesting thing that"}, {"start": 1505.3999999999999, "end": 1513.48, "text": " happens is we just, before we add these, before we pass these to a softmax, so we ended up"}, {"start": 1513.48, "end": 1522.88, "text": " like maybe some scores like 3.2, I don't know, like 0.1 and this would have maybe, I don't"}, {"start": 1522.88, "end": 1532.1200000000001, "text": " know, 5.4 and even these would have some scores, so maybe 3.6. But we add up like minus infinity"}, {"start": 1532.1200000000001, "end": 1539.56, "text": " here. So we add up minus infinity here and we add up minus infinity to all of the other"}, {"start": 1539.56, "end": 1546.36, "text": " ones. So this got a clutter a little bit, but I just went ahead and redraw this a little"}, {"start": 1546.36, "end": 1552.6799999999998, "text": " bit to make it a bit clearer. So the only difference is, so now when we have like maybe"}, {"start": 1552.6799999999998, "end": 1558.6, "text": " the query vector that corresponds to the token s, because remember the sentence was vket"}, {"start": 1558.6, "end": 1565.56, "text": " s enen hoite, so I split this into two substrings and this into two substrings. So the s one,"}, {"start": 1565.56, "end": 1570.52, "text": " so we take its query and we do, again we just do dot products with different keys and we"}, {"start": 1570.52, "end": 1578.24, "text": " end up with some scores like maybe 3.2 here, blah blah blah, we end up with maybe 5.1 here,"}, {"start": 1578.24, "end": 1584.9199999999998, "text": " we end up with, I don't know, 2.4 here and we end up with, I don't know, like 0.1. And"}, {"start": 1584.9199999999998, "end": 1591.3999999999999, "text": " now what happens is every single score that comes after this current token, we just add"}, {"start": 1591.4, "end": 1601.68, "text": " up, basically we add up minus infinity. And what it does is basically now we feed this"}, {"start": 1601.68, "end": 1608.3200000000002, "text": " into softmax and after feeding it into softmax it gets pulled to zero and that basically"}, {"start": 1608.3200000000002, "end": 1614.8000000000002, "text": " means we won't be using these value vectors in the final representation for this specific"}, {"start": 1614.8, "end": 1624.0, "text": " token s. So this token, once we get the value, so 3.2 maybe gets mapped to 0.2 after softmax,"}, {"start": 1624.0, "end": 1631.04, "text": " this one gets mapped to, I don't know, 0.7, so we'll use 0.7 of these value vectors, we'll"}, {"start": 1631.04, "end": 1637.84, "text": " use 0.2 of these, but we won't use these. And that's what makes it causal. So this one"}, {"start": 1637.84, "end": 1642.68, "text": " can only see what came previously. 
And so the final representation only contains the"}, {"start": 1642.68, "end": 1651.0800000000002, "text": " previous representations and the current representation. If you were to do this for the first token,"}, {"start": 1651.0800000000002, "end": 1655.5600000000002, "text": " everything would get, so all of the other tokens would get mapped to 0. And so the first"}, {"start": 1655.5600000000002, "end": 1661.4, "text": " token, the start of sentence token, can only use its own representation. So basically nothing"}, {"start": 1661.4, "end": 1668.28, "text": " happens for this. So basically we just pass the value token here and that's what happens"}, {"start": 1668.28, "end": 1672.76, "text": " for the first token. And the last one we'll be using all of the representations because"}, {"start": 1672.76, "end": 1680.48, "text": " it's the last one. Okay, so that was this part. So that was this mask multi-head attention"}, {"start": 1680.48, "end": 1689.56, "text": " module. We covered that one, checked. Now the last one is this and it's simple. So again,"}, {"start": 1689.56, "end": 1697.2, "text": " we have German tokens, we get the Sinnenheute, this is the start of sentence, blah blah blah,"}, {"start": 1697.2, "end": 1707.26, "text": " we end up with the TE token. And we have these representations which ended up being calculated"}, {"start": 1707.26, "end": 1713.1200000000001, "text": " after iterating through six of these encoder layers. So we are using those. We have five"}, {"start": 1713.1200000000001, "end": 1721.56, "text": " of those. So basically we have five of those tokens here. And so this one corresponds to"}, {"start": 1721.56, "end": 1736.96, "text": " v, no, sorry, how are you today? And now we just use, we basically use these as the key"}, {"start": 1736.96, "end": 1745.6599999999999, "text": " and the value and we use these as the queries. So again, we have three matrices, the query,"}, {"start": 1745.66, "end": 1756.8400000000001, "text": " the key and the value matrix. Query, key, value. We take the query, we create query vectors"}, {"start": 1756.8400000000001, "end": 1764.3600000000001, "text": " for the German tokens, but we use key and value and apply them to these tokens. So we"}, {"start": 1764.36, "end": 1775.9599999999998, "text": " end up with key and value representations here all the way down to this token. And now"}, {"start": 1775.9599999999998, "end": 1784.32, "text": " we basically take this query and we do the usual dot product with these keys which will"}, {"start": 1784.32, "end": 1791.6, "text": " after doing the softmax, we'll know how to combine these value vectors to get the new"}, {"start": 1791.6, "end": 1798.08, "text": " token. So that's what's different for this module. So it basically combines the value"}, {"start": 1798.08, "end": 1803.76, "text": " vectors that come out from the encoder. So using the queries that came from the bottom"}, {"start": 1803.76, "end": 1811.24, "text": " layer of this decoder layer. So that's it. So I intentionally went into a lot of details."}, {"start": 1811.24, "end": 1817.52, "text": " I usually love to go through from high level explanation and then slowly and gradually"}, {"start": 1817.52, "end": 1826.8799999999999, "text": " explain the complexity. But here I just wanted to go fully into details and hopefully some"}, {"start": 1826.8799999999999, "end": 1833.6, "text": " of you found this useful. But we are not finished yet. 
I still have this part to cover and it's"}, {"start": 1833.6, "end": 1843.72, "text": " pretty easy. So once we end up at the top of this stack, it's quite simple actually."}, {"start": 1843.72, "end": 1854.56, "text": " So let's say we end up with eight token representations. So these correspond to start of sentence,"}, {"start": 1854.56, "end": 1865.92, "text": " blah blah blah. This corresponds to this token TE. And now what we do is we apply, we project"}, {"start": 1865.92, "end": 1873.18, "text": " these representations which are still 512 as always. We project them into this space"}, {"start": 1873.18, "end": 1879.1200000000001, "text": " that has the dimensionality the same as the German vocabulary size. So that was for example"}, {"start": 1879.1200000000001, "end": 1888.24, "text": " 35k. So that means this one gets projected into 35,000 dimensional vector and that's"}, {"start": 1888.24, "end": 1900.8, "text": " huge. That means we have a huge matrix here whose dimension is 512 times 35k which is"}, {"start": 1900.8, "end": 1905.32, "text": " where most of the computation comes into play. And there are many strategies where you can"}, {"start": 1905.32, "end": 1914.08, "text": " maybe create something called, you can Google it, hierarchical softmax instead of doing"}, {"start": 1914.08, "end": 1919.08, "text": " this one. Or you can do something called negative sampling, etc. But I won't get into those"}, {"start": 1919.08, "end": 1924.0, "text": " because those techniques were not mentioned in the original paper. So after you project"}, {"start": 1924.0, "end": 1930.68, "text": " these, you do the same thing for all of the tokens. And now you just have the ground truth."}, {"start": 1930.68, "end": 1936.76, "text": " So basically because we know what the output sentence should be, so that's, we get the"}, {"start": 1936.76, "end": 1944.64, "text": " sin heute, that was the German sentence. So just to freshen up your memory. So we get"}, {"start": 1944.64, "end": 1951.16, "text": " the sin heute, okay? So basically what will happen here is we'll have, so we'll take the"}, {"start": 1951.16, "end": 1963.92, "text": " token v and we'll find the id. And basically what happens is we'll have the ground truth"}, {"start": 1963.92, "end": 1970.16, "text": " vector here and on the position 3 it will have 1. And here for example this one will"}, {"start": 1970.16, "end": 1977.76, "text": " be some, won't be a binary vector, it will be, maybe we'll have, I don't know, like 0.7"}, {"start": 1977.76, "end": 1984.96, "text": " here and then some random value distribution, right? And basically we do either cross entropy,"}, {"start": 1984.96, "end": 1991.52, "text": " in this case where we have 1 here, and basically that means if we have 0.7, the loss would"}, {"start": 1991.52, "end": 2001.02, "text": " be log of 0.7. And if you want to minimize this loss, it should go, this argument better"}, {"start": 2001.02, "end": 2009.52, "text": " go to 1 because log of 1 equals 0. So the final loss when you're training the transformer"}, {"start": 2009.52, "end": 2017.3, "text": " is basically a sum of those logs. And I'll get to soft labeling in a bit, but let's assume"}, {"start": 2017.3, "end": 2023.04, "text": " we're using cross entropy for the moment. And so basically we'll have log of 0.7 here,"}, {"start": 2023.04, "end": 2034.68, "text": " so this vector will maybe have, let me draw the ground truth one. 
So let's say this Te"}, {"start": 2034.68, "end": 2041.12, "text": " is the ground truth vector is 1 here and everything else is 0, but like the distribution was a"}, {"start": 2041.12, "end": 2049.2, "text": " bit different, so the part that we're interested in is maybe 0.3, so the loss would be log"}, {"start": 2049.2, "end": 2058.3999999999996, "text": " of 0.3 and that loss is even bigger. I mean we just add the minus and then it's big otherwise"}, {"start": 2058.3999999999996, "end": 2067.2, "text": " this is a negative number. So we just sum up those logs and we average them. So basically"}, {"start": 2067.2, "end": 2072.08, "text": " that's what the loss function is and then you just do the basic gradient based learning"}, {"start": 2072.08, "end": 2077.24, "text": " using your favorite optimizer like I think they used Adam in the original transformer"}, {"start": 2077.24, "end": 2091.3599999999997, "text": " paper. Yeah they used Adam with a specific learning rate strategy. So that was how it"}, {"start": 2091.3599999999997, "end": 2096.04, "text": " usually works, but there was a slight detail here. They're actually using, they're not"}, {"start": 2096.04, "end": 2104.24, "text": " using these one-hot vectors as the ground truth, they're using something called soft"}, {"start": 2104.24, "end": 2111.12, "text": " labeling where basically this would be set to 0.9 and then the rest of the probability"}, {"start": 2111.12, "end": 2116.8399999999997, "text": " mass would be equally distributed evenly across this vector except for the padding token which"}, {"start": 2116.8399999999997, "end": 2123.8799999999997, "text": " would always get set to 0. So that's the soft label and then you do something called KL"}, {"start": 2123.8799999999997, "end": 2129.08, "text": " divergence instead of cross entropy loss and you can Google that, I won't get into, this"}, {"start": 2129.08, "end": 2136.92, "text": " video is already getting too long. So that's it, that's how the whole procedure looks like."}, {"start": 2136.92, "end": 2142.84, "text": " So you, just to recap once more, you split the sentences into tokens, you map them into"}, {"start": 2142.84, "end": 2149.0, "text": " IDs, you look up at those embedding tables, you find the embedding vectors, you just add"}, {"start": 2149.0, "end": 2155.4, "text": " up those positional encodings. Now you have the initial representations here, now you"}, {"start": 2155.4, "end": 2160.1600000000003, "text": " go through six, you go six times through this encoder stack where you apply the attention"}, {"start": 2160.1600000000003, "end": 2164.28, "text": " and you apply the feedforward net, you end up with the final representations. 
You do"}, {"start": 2164.28, "end": 2170.52, "text": " the same thing for the German sentence except that you have the causal mask here and you"}, {"start": 2170.52, "end": 2178.56, "text": " attend to the encoding vectors in this module and that's it, you finally project into the"}, {"start": 2178.56, "end": 2186.7999999999997, "text": " output vocabulary space and you do backprop on the loss function that consists of KL divergence"}, {"start": 2186.7999999999997, "end": 2192.12, "text": " or cross entropy loss and that's how you do it."}, {"start": 2192.12, "end": 2198.56, "text": " I owe you just one more explanation and that's how the, once you train the network using"}, {"start": 2198.56, "end": 2204.0, "text": " the process I just showed you, how do you do the actual decoding, the translation process."}, {"start": 2204.0, "end": 2210.0, "text": " So what you do is you take the, just the input sentence like maybe the same one, how are"}, {"start": 2210.0, "end": 2215.68, "text": " you today, again you do the same procedure, tokenization, embeddings, blah, blah, blah"}, {"start": 2215.68, "end": 2222.84, "text": " and you just prompt the model, the decoder with the start of the sentence token and so"}, {"start": 2222.84, "end": 2233.92, "text": " initially this one would get propagated upwards and we'd get some word here like hopefully"}, {"start": 2233.92, "end": 2243.48, "text": " it will be the correct word like V and we'll just take that word and we'll embed it here"}, {"start": 2243.48, "end": 2248.92, "text": " and we'll just repeat the process and that's why this model is called autoregressive. You're"}, {"start": 2248.92, "end": 2253.0, "text": " basically whatever the output, whatever you have is the output, you just feed it back"}, {"start": 2253.0, "end": 2259.08, "text": " as the input and you're just autoregressing until you end up. So what's the stopping condition?"}, {"start": 2259.08, "end": 2266.48, "text": " The stopping condition is that end of sentence token I mentioned here, so basically you,"}, {"start": 2266.48, "end": 2274.16, "text": " once the network outputs end of sentence token you stop the translation process and that's"}, {"start": 2274.16, "end": 2281.48, "text": " pretty much it. So, this was a long video, I'm experimenting with this channel obviously,"}, {"start": 2281.48, "end": 2286.08, "text": " I'm trying to see what works best for you as well as for me, so if you found this video"}, {"start": 2286.08, "end": 2291.88, "text": " useful please put some comments down in the comment section, I appreciate any feedback"}, {"start": 2291.88, "end": 2301.36, "text": " like was it too detailed, was it too messy, maybe I can, yeah, I'll try and improve the"}, {"start": 2301.36, "end": 2308.68, "text": " next time. So, see you next time and subscribe to this channel if you haven't already and"}, {"start": 2308.68, "end": 2313.64, "text": " click that bell icon to get notified when I upload a new video and until next time,"}, {"start": 2313.64, "end": 2324.3199999999997, "text": " keep learning deep."}]
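As a minimal illustration of the attention mechanism walked through in the transcript segments above (query/key/value projections, dot-product scores scaled by the square root of the head dimension, softmax, weighted sum of value vectors, and the minus-infinity causal mask used in the decoder), here is a single-head NumPy sketch. The shapes, variable names, and toy data are assumptions for illustration, not the paper's actual implementation.

```python
# Single-head scaled dot-product attention with an optional causal mask (NumPy sketch).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv, causal=False):
    """x: (seq_len, d_model) token representations (embeddings + positional encodings)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # project every token into query/key/value
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # dot-product similarity, scaled by sqrt(d_k)
    if causal:                                  # decoder: a token may only attend to itself
        mask = np.triu(np.ones_like(scores), k=1).astype(bool)  # and to earlier positions
        scores = np.where(mask, -1e9, scores)   # "minus infinity" -> ~0 weight after softmax
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ v                          # weighted sum of value vectors

# Toy usage: 5 tokens (e.g. "how are you to day"), d_model = 8 instead of 512.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(x, Wq, Wk, Wv, causal=True)    # (5, 8) contextualized representations
```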
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=HhomSGnP-x8
Google DeepMind's AlphaFold 2 explained! (Protein folding, AlphaFold 1, a glimpse into AlphaFold 2)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I walk you through what the protein folding problem is, I dig deep into the AlphaFold1 paper, and finally, I take a glimpse and I predict what AlphaFold2 may look like. This thing is huge. Hopefully, it will accelerate the pace of solving the COVID-19 pandemic as well. You'll learn about: ✔️ Recap of essential biology ✔️ What is the protein folding problem? ✔️ How does AlphaFold1 exactly work ✔️ How does AlphaFold2 probably work ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ DeepMind on AlphaFold2: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology ✅ DeepMind on COVID-19: https://deepmind.com/research/open-source/computational-predictions-of-protein-structures-associated-with-COVID-19 2 more useful DeepMind blogs: ✅ https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery ✅ https://deepmind.com/research/case-studies/alphafold ✅ What is protein? https://www.youtube.com/watch?v=wvTv8TqWC48&ab_channel=RCSBProteinDataBank ✅ How are proteins created? https://www.youtube.com/watch?v=gG7uCskUOrA&ab_channel=yourgenome ✅ Nature's blog on AlphaFold2: https://www.nature.com/articles/d41586-020-03348-4 ✅ Christian Anfinsen's Nobel prize lecture: https://www.nobelprize.org/uploads/2018/06/anfinsen-lecture.pdf ✅ Charts for sequenced proteins: https://www.ebi.ac.uk/uniprot/TrEMBLstats ✅ PDB sequenced vs 3D structure found: https://www.rcsb.org/stats/growth/growth-released-structures ✅ AlphaFold1 paper: https://www.nature.com/articles/s41586-019-1923-7.epdf?author_access_token=Z_KaZKDqtKzbE7Wd5HtwI9RgN0jAjWel9jnR3ZoTv0MCcgAwHMgRx9mvLjNQdB2TlQQaa7l420UCtGo8vYQ39gg8lFWR9mAZtvsN_1PrccXfIbc6e-tGSgazNL_XdtQzn1PHfy21qdcxV7Pw-k3htw%3D%3D ... ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 Enter the AlphaFold2 03:40 Nobel prize winner Christian Anfinson, biology recap 05:50 Experimental methods (x-ray crystallography, etc.) and some stats 08:24 Why do we care? (Curing diseases, new materials, etc.) 10:10 A deep dive into the AlphaFold2 paper 13:30 Overview of the deep learning pipeline 17:30 Understanding torsion angles and beta carbon 21:00 How do distance maps (distograms) work? 26:10 Geometric model and optimization (L-BFGS) 29:10 Primary, secondary, tertiary structures 31:30 Initialization procedure 32:50 Overview of the whole pipeline 33:54 AlphaFold2 predictions (paper still not published) 37:10 COVID pandemic ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". 
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #alphafold2 #deepmind #biology
It will change everything. DeepMind's AI makes a gigantic leap in solving protein structures. So, I guess all of you heard about this so far, or most of you. And basically what happened is that DeepMind released, published their results on this year's protein folding challenge. And their method alpha fold, actually the second iteration of their algorithm alpha fold, kind of ruined the whole, like, beat everybody and got the best scores. And we can see, so this is the blog that they published so far, they still haven't published the paper. The paper will come probably in maybe a year or something, because the last paper for the alpha fold one came out in December 2019, where the competition happened in 2018. So, we'll have to wait some time to get to that, to get the paper. And I basically can see the scores throughout the years, so that's a biennial competition called CASP. And it happens, so it happens every two years. And you can see here that in 2018 their method alpha fold one also dominated the competition, but this year they crossed 90 on this global distance test metric, which pretty much tells you how good the alignment between the experimentally found protein structure and the computationally found protein structures are. And I'll first, so the idea of this video will be to first explain you what the problem is, so what the protein folding problem is, give you a bit about history and background, and how proteins work and everything. And then my idea is to go into the alpha fold one paper, because that's what we have so far, and I'll explain how it works, how the deep learning method works, so there'll be a deep, in-depth explanation of the paper. And I'll give some assumptions of what will happen with the alpha fold two, what the method is, they only have one chart, so I'll try and deduce something out of it. So basically, so why this is interesting is that 90 is considered as something that's like, if you get to 90 or above, you're pretty much comparable to experimental methods, the reason being they are also not perfect, and they do have their measuring uncertainties. So here we can see, let me zoom in a bit, here we can see what the problem is. Basically, the green thing is the pseudo ground truth, so that means somebody found that one using some of the experimental methods, and I'll tell you a bit more about those. And the blue one, so the blue one is a computational prediction that DeepMind's alpha fold two made, and you can see it's pretty, pretty close. The results are really, really good, and you can see 90.7, 93.3, and these are just the names of the proteins T1049, whatever. So let me kind of step back a little bit here and tell you something about how the problem came to be. So we've been trying to solve this problem in biology for 50 years already, and so this is kind of a really big, big thing that happened, so yeah, the hype is real this time. First of all, there's this guy called Christian Anfinsen, and you can see in 1972 he gave this Nobel Prize lecture. So he basically got his Nobel Prize because he showed that the function of the protein is pretty much uniquely determined by its 3D structure, which is really interesting. So here is a really cool example of hemoglobin. The hemoglobin forms a pocket to hold heme, a small molecule with an iron atom in the center that binds to oxygen. 
So basically, like, the structure is made so that this protein, the hemoglobin, has a small pocket where this molecule that has, like, an iron atom which binds to, like, the oxygen and thus transports oxygen throughout our body, like, just that molecule fits so nicely in this hemoglobin. So that's a really vivid example of how the structure determines the function. That's a pretty obvious one. So there's this cool example of this molecule that has this Y shape, and the molecule attaches to bacteria or viruses and thus tags those cells for destruction by the immune system. So let me give you just a brief example of how the proteins are created. So I like to think of this as, like, a kitchen where the ribosome, this macromolecule, basically acts as a cook. We have the messenger RNA, which is pretty much the recipe, and then we have amino acids, which are the ingredients, which are transported by this thing called the transfer RNA. So basically, the messenger RNA encodes this linear sequence of amino acids, and it gets translated, so the recipe gets read by this ribosome, and we form, like, a chain of amino acids which slowly starts folding because different parts, different amino acids have different charges, and so they attract or repel, and thus we get that 3D structure. So that's basically how the protein is formed, and now we're trying to figure out this 3D structure computationally. Now, the thing is, how do people find the 3D structures now that we know that it's really important for, like, for figuring out the function of the proteins? How do we find the structure of the protein? So there are three methods that are used extensively in different labs, and those are mainly X-ray diffraction, nuclear magnetic resonance, and this electron microscopy. And as you can see, X-ray diffraction has so far determined the highest number of protein structures. Although it does have its own flaws, it cannot solve everything, so depending on the size of the protein, it has its constraints. Now, these two are getting more popular lately, but they are still super expensive and really slow methods to obtain the 3D structure. So that's why computational biology and these computational methods for figuring out the structure are so important. So if we take a look at this chart here, we can see that the number of proteins we sequenced, successfully sequenced, is around 200 million now in 2020. So it's been rising, like, so those are the linear sequences that are encoded in the DNA, so, but we still don't have the 3D structure for all of those. And if we take a look at this chart, we can see there is this thing called the Protein Data Bank, which is really important because that's the data set that's been used in the AlphaFold paper, and we'll get to it in a couple of minutes. But basically here, you can see that, like, we have, so we have a lot of sequences being kind of ingested into this data set, but the number of structures is actually a lot smaller. So here, 2020, we can see only 12,000 structures are there, but we have, like, much more sequences. So that was the motivation for why we want to accelerate the pace of finding 3D structures. The experimental approach is slow, it's really expensive, so, yeah, we need an alternative. 
So, again, why we care about figuring out the structure, the 3D structure of the protein is because, as you can see on DeepMind's blog, an error in the genetic recipe may result in a malformed protein which could result in disease or death for an organism. Many diseases, therefore, are fundamentally linked to proteins. So diseases like diabetes, like dementia, Parkinson's, Alzheimer's, cancer, some of the diseases that are known to be, like, plaguing humankind for decades or more can probably be solved by just us knowing the 3D structures. So that's why it's super important to, so that's why this problem is super important. Aside from that, also, we could develop much better materials if we knew how to construct and understand the shapes of different proteins, and we could create, like, plastic-eating enzymes and thus reduce the pollution, like, on the global level, which is something we do care about, obviously. So that was the background story. It was a bit longer. I assume most of my audience has a machine learning background, so I felt I need to really give you a nice overview of what the problem is and everything that goes inside, and, like, give you some numbers and methods, etc. So let's start exploring AlphaFold 1 now. As you can see here, this is an animation that DeepMind provided, which basically shows how their optimization procedure, which we'll get to in a sec, folds the protein, so it starts from a semi-unfolded protein strand, which then slowly takes its 3D shape in space. Okay, let's overview the AlphaFold 1 paper. You can see, so, the title is AlphaFold: Improved Protein Structure Prediction Using Potentials from Deep Learning. They had, like, a bunch of folks working on this project, as it's pretty common for huge projects like this one. So, they started in 2016, so it's already been, like, a four-year effort of many smart people. So, I've already mentioned it's DeepMind. They're based in London, and, okay. So, the paper is pretty complicated. There's a lot of text and a lot of domain knowledge, so I'll just try and extract the deep learning side of it, and hopefully that will give you, like, a good understanding of how the method works. Let's start with this chart. So, basically, on this chart here, they just show that they are better than other groups. So, many other research groups were participating in this in 2018, in the CASP13 challenge, that was the name of the challenge. And, basically, the green lines represent the groups, those other groups, and the blue line is DeepMind. And so, on the, basically, on the x-axis, we have something called TM score. So, there are a bunch of metrics for determining how good the structure prediction is. So, one is TM. There are other scores, like we saw the Global Distance Test score. There are some, like, lDDT or something. Yeah, there are a lot of different scores, and basically, so, FM domain count, domain is some, like, fancy name for protein, pretty much. So, here, in this free modeling domain, we have 43 sequences for which we needed to figure out the structure. And you can see that, on average, DeepMind had a lot higher TM score than others. So, that's what this thing is. And now, I should probably tell you what the difference between free modeling and template modeling is. So, template modeling is when you're trying to predict the structure for a sequence that has similar sequences whose structure we already know. 
Now, the thing is, once you know some similar sequence, you can kind of assume that the structure of your sequence will be similar to that one. So, you just need to find where the sequences differ and kind of tweak the structure, starting from that known structure. So, that's template modeling. So, I'm simplifying a little bit here, but, like, that's the rough notion. Again, here, on this chart, they just displayed five different proteins, and they, again, show, like with the blue dots, that DeepMind is crushing it. I mean, so, these two were bragging charts. So, now, let's get to the actual method. And this is how it looks. We can see some neural network here, of course, and that's actually a ResNet with dilated convolutions. And so, basically, before I get into those details, there are pretty much, let me see, three parts of this pipeline. So, the first one is preparing the features that we'll need in order to predict the structure. Then the second part is the distance and torsion angle predictions, and I'll tell you what those are in a minute. So, that's the second part of the pipeline, and then we have the gradient descent or the nonlinear optimization part of the pipeline. So, in stage one, they somehow find a way to go from the 1D amino acid sequence that represents the protein to a 2D map. And the 2D map actually contains covariations between our target sequence and similar sequences from this huge data set of sequences. And so, this is this thing called multiple sequence alignment. So, you basically find a group of sequences, like a thousand sequences, which are similar to your target sequence, which you're trying to predict the structure for, and you encode the covariations in these maps. So, if I take one point on this 2D map, and this is a 2D volume, so, this will be like 500 or something channels or more. There are some details down in the paper. So, one thing they encode is like the one-hot encoding of the amino acid. So, basically, that means that if we take this point, so that's... let me zoom in. Oops. So, basically, let's say this is like row five, and let's say this is like row, I don't know, 40. They'll basically encode five as a one... They'll basically find what the fifth amino acid is, and because we have only 21 amino acids in the human genome, we'll encode this, let's say this is... Because there are 21, we can represent each one with a letter from the alphabet. Luckily, we have 26 letters. So, let's say this one is, I don't know, the fifth one is amino acid A, this 40th is maybe, I don't know, like S, and they just encode this one as a one-hot. Let's say this one will be like zeros, and then on some spot like 13, I don't know, they'll have 1, etc. So, what they will do is they will concatenate this one-hot vector with the one-hot vector that corresponds to amino acid A, and that will be a part of the features they are using. There are many more domain-specific things they're including. Also, they're including these multiple sequence alignment things, and I'll probably explain those a bit more... I'll go into detail about those, because that's interesting a bit later. So, that's how they get this 3D volume, and now you can treat this as a simple computer vision problem, and they pretty much use this classic deep learning model like a ResNet with dilated convolutions, and treat this as an image-to-image translation problem. 
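As a toy sketch of the simplest input feature just described, here is how, for every pair of positions (i, j) in the target sequence, one can concatenate the one-hot encodings of the two amino acids into a 2D "image" with 2 * 21 channels. The real AlphaFold 1 input stacks many more MSA-derived channels on top of this; the alphabet and shapes below are illustrative assumptions.

```python
# Pairwise one-hot amino acid features as a (L, L, 42) volume (illustrative sketch).
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWYX"   # 20 standard residues + 1 "unknown" bucket
AA_TO_ID = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(aa):
    v = np.zeros(len(AMINO_ACIDS), dtype=np.float32)
    v[AA_TO_ID.get(aa, AA_TO_ID["X"])] = 1.0
    return v

def pairwise_onehot_features(sequence):
    """sequence: string of amino-acid letters -> (L, L, 42) feature volume."""
    L = len(sequence)
    onehots = np.stack([one_hot(aa) for aa in sequence])          # (L, 21)
    feats = np.zeros((L, L, 2 * len(AMINO_ACIDS)), dtype=np.float32)
    for i in range(L):
        for j in range(L):
            feats[i, j] = np.concatenate([onehots[i], onehots[j]])
    return feats   # fed (together with MSA-derived features) to a 2D ResNet

feats = pairwise_onehot_features("MKTAYIAKQR")   # hypothetical toy sequence
print(feats.shape)   # (10, 10, 42)
```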
So, the network itself, as you can see, has kernels of size 64 by 64, so they're just actually randomly sliding across this volume, and they end up getting the distance predictions. Okay, so what is this distance map, and how do we get one? So, in order to understand that, I have to make a small tangent here and explain how the amino acids connect, and which distances and torsion angles are we actually measuring. So, these are all the 21 amino acids we have in the human genome, and basically, as you can see, this part is always the same. For every single amino acid, no matter the properties, this part always remains the same. So, this thing here is called something called, I think, alpha carbon atom, this one is called beta carbon atom, and that one is really important. So, let's say for the sake of argument that we have a sequence like R, H, K, which is pretty much these three amino acids here. So, what will happen is, once they start going through the ribosome, they'll start connecting, and so these parts will form the backbone. So, these parts, and the groups, which is this part that goes off from the beta carbon atom, those will be, like for example, some will be positively charged, maybe this one will be negatively charged. So, what will happen is that they'll start attracting, and that's the thing that causes the protein to fold. So, imagine if you had a protein that had 140 amino acids. So, what could happen is that in this complex interaction, maybe the first amino acid would be, like in the 3D structure, would be really close to maybe 127th amino acid. So, that means we have our long range dependencies in this structure. So, now, when we are, so I mentioned distance map. So, what we are actually looking for is distance between beta carbon atoms between different amino acids. So, basically, that's what we are looking at, and then we have torsion angles, so phi and psi angles, which are basically in that 3D structure, two angles that we care about. Let me see if I can find a nice visualization of that, and you can see it here, hopefully. Let me zoom in. So, this is the protein chain, and so one of the two dihedral angles inside here, without getting into too much explanation, is two angles that we care about. So, for the sake of argument, just think there are two angles we care about. There is a third, but that one is almost always 180, so DeepMind kind of hard-coded that one to be 180, and they're just considering these two. So, okay, let me get back to the OneNote. So, now that we know this information about how we calculate distance and phi and psi angles, let's go back to the deep learning pipeline. So, basically, you can see here we have amino acids here, and we have the same sequence here, and so this particular point maybe corresponds to finding the... So, I'll actually use a different drawing, which will probably be easier to understand. Okay, let me see if I can nicely explain the distance maps. So, we have the experimentally found the pseudo ground truth distance map for this particular amino acid sequence. And we have the predicted distance map over here. So, basically, what they showed here is they took a specific amino acid from the sequence, like in particular they took 29th amino acid, and that's depicted on these charts by these red stripes. And basically, you can see that the distance between 29th carbon, beta carbon, and with itself is obviously zero, which is this colored with yellow. 
And the whole main diagonal is going to be yellow for that particular reason, because the distance between a beta carbon and itself is always going to be zero. So, that's why we see this pattern along the main diagonal. And what they did here is they took the neighborhood around the 29th amino acid, and they just extracted all of the 41 distances, and that's the chart on the right. And now the thing is these things here are not scalars, they're actually probability distributions. So, if we take maybe, I don't know, the 40th amino acid and its corresponding probability distribution, we end up with something like this. And you can see on the x-axis we have the distance, and on the y-axis we have the probability. So, for this particular position, we can see that the peak of the distribution is pretty much around 4 angstroms, where an angstrom is 0.1 nanometers. So, this is how they symbolize the angstrom. And the red bar is the ground truth value. So, the red bar says, I don't know, like 5, and we predicted 4, which is pretty good. The 29th is missing, obviously, because we know it's going to have a spike at 0 distance, but basically if we take a look at the surrounding probability distributions, so those correspond to this part here, the distributions are really narrow and really precise, so we can see that the peak, the mode of the distribution, corresponds perfectly to the red bars. Here also, really nice. As we go further apart, further away, we can see that the red bars don't always correspond with the mode of the distribution. So, say, let me take an example where we have some problems, like here, like this one is a good example, because we predicted that the distance is here, so that's whatever this value is, but the red bar is here, so, yeah. By the way, these black lines mark the cut-off distance at which we consider two beta carbon atoms to be in contact. So, this one is at 8 angstroms, so if you fall below 8 angstroms, we consider those two beta carbon atoms to be in contact, otherwise they are not in contact, obviously. So, the map basically contains, like, the inter-residue distances for the amino acid sequence. And we want to predict, obviously, as close as possible to the ground truth. So, let me now go back to our initial deep learning pipeline, if I can get there, okay, we're here. So, we do a similar thing for the torsion angles, and once we have those maps, what we now do is we form a geometric model of our sequence, meaning we basically take, like we have a geometric model, and we have two arguments, we have the phi and we have psi, so the torsion angles, and those give us the, like, 3D coordinates of those beta carbon atoms in 3D space. So, what I mean by that, so if we have a, like a sequence of length 100, we'll have 200 parameters here. And basically, for one specific configuration of these 2L torsion angles, we get a specific configuration in 3D space, like, I don't know, like we'll have something going on in space. And so, what this part does is we are optimizing, so we are changing these two, so that the 3D coordinates or the corresponding 3D structure gets as close as possible to the gray one, which is the native, the experimentally found structure. 
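Before moving on to the optimization stage, here is a small sketch of how such a predicted distance distribution (a "distogram") can be read out: each (i, j) pair carries a probability distribution over distance bins, from which one can take an expected distance and a contact probability (the probability mass below the 8 angstrom cut-off). The bin edges and shapes are illustrative assumptions, not the paper's exact values.

```python
# Reading out a distogram: expected distance and contact probability per residue pair.
import numpy as np

def distogram_summaries(probs, d_min=2.0, d_max=22.0, contact_cutoff=8.0):
    """probs: (L, L, n_bins) probabilities over distance bins for each residue pair."""
    n_bins = probs.shape[-1]
    edges = np.linspace(d_min, d_max, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])               # representative distance per bin
    expected_dist = (probs * centers).sum(axis=-1)         # (L, L) expected beta-carbon distance
    contact_prob = probs[..., centers < contact_cutoff].sum(axis=-1)   # (L, L) P(contact)
    return expected_dist, contact_prob

# Toy usage: random (normalized) distogram for a length-5 sequence with 64 bins.
rng = np.random.default_rng(0)
raw = rng.random((5, 5, 64))
probs = raw / raw.sum(axis=-1, keepdims=True)
dist, contact = distogram_summaries(probs)
```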
So, what happens is by tweaking these phi and psi angles, we are also tweaking, you can imagine, like, we are tweaking the distance map for this particular configuration, so for a given phi and psi, we have a given distance map, which we are trying to match with this one. So, the two parts of the pipeline are pretty much independent. The second part of the pipeline just figures out the distance map and the torsion angle maps, and then the third part of the pipeline just tries and tweaks this differentiable geometric model, so that we minimize the distance. And we can see on this chart here that, I'm losing my cursor, my God, the RMSD is going down as we are progressing with our optimization steps, so that's one of the metrics that they use to figure out whether the 3D structures are, like, getting closer together, and the TM score, on the other hand, is going upwards. What else is interesting? So, they just took a snapshot here, and we can see how it's progressing and slowly getting into the final shape, which is really close to the gray one here. Now, the interesting thing is, and I didn't mention this, because those details were probably not as relevant as now, so the protein has something called the primary, the secondary, the tertiary, and the quaternary, like, structure. The primary structure is just the amino acid sequence. The secondary structures are those things that form spontaneously, like this thing, it's called an alpha helix, and then we have these things here, which are called beta sheets, and then those fold into these intricate, like, shapes, which are the tertiary structure of the protein, and finally, the quaternary. Let me actually use this shot, it will be easier to understand what I'm talking about. So, we have the primary structure here, then we have the secondary structures, which form, so, alpha helices and these pleated beta sheets, then they fold into this thing called, like, the tertiary structure, and finally, multiple proteins can interact, and that's the quaternary structure. I hope I'm pronouncing that well. So, that's about the shapes. And aside from the distance map I already showed you, this pipeline is also predicting these secondary shapes, so the blue one is the alpha helix, and the red one is the beta sheet. So, this is the ground truth, the pseudo ground truth, and these here are a snapshot taken from the last optimization step, and we can see that the modes are pretty well correlated with the ground truth, and thus, the final 3D shape also coincides with the experimentally found shape. Whew, that was crazy. Now, there is one more question, how do we initialize the phis and the psis? And what they did is, they used those predicted torsion angles, which are not depicted here, they only showed the distance map, and they sample from those distributions to get the initial, like, configuration, you can see it here, this is the initial one. And, it's getting crowded here. So, basically, that's how they initialize this, they randomly sample from that distribution, and then they do the optimization. 
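To make the final optimization stage concrete, here is a toy analogue: treat the structure as a differentiable function of its angles, and optimize the angles so that the resulting pairwise distances match a target distance map. In this sketch the "protein" is a 2D chain with one angle per residue and the optimizer is SciPy's L-BFGS-B; the real model works with phi/psi torsion angles and full 3D backbone geometry, so this is only an assumed, simplified stand-in.

```python
# Toy "structure from angles" optimized against a target distance map with L-BFGS-B.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def chain_coords(angles):
    """Build a 2D chain: each residue extends one unit step in a direction given by the
    cumulative sum of the angles (a crude stand-in for backbone geometry)."""
    headings = np.cumsum(angles)
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def potential(angles, target_dists):
    """Squared mismatch between the chain's pairwise distances and the target map."""
    d = pdist(chain_coords(angles))
    return np.sum((d - target_dists) ** 2)

rng = np.random.default_rng(0)
true_angles = rng.uniform(-1.0, 1.0, size=20)      # pretend this is the native structure
target = pdist(chain_coords(true_angles))          # "predicted" distance map (here: exact)

x0 = rng.uniform(-1.0, 1.0, size=20)               # random initial angles
res = minimize(potential, x0, args=(target,), method="L-BFGS-B")
print(res.fun)   # final mismatch; close to 0 means the distance map has been matched
```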
But they do one more thing: they keep a pool of final structures, around 20 of them, so whatever they get at the end of these optimization procedures, the structures that minimize the potential, they keep in this pool. Then they also sample from that pool, so they take one of those minimal configurations and randomly add noise to the phis and psis, and they show experimentally that those give them better initializations than just randomly sampling from the predicted torsion angle distributions. So, that's one more detail I wanted to mention. And now we have the whole pipeline, hopefully, in place. Let me go over the high level thing once more: we have the features related to amino acids, we do convolutions over those input maps, we get the distance maps, we get the torsion angles, we get these secondary structure predictions, and once we have those, we just freeze everything, and then we have an independent step, and that's the gradient-based optimization. They used L-BFGS, actually, I forgot to mention that detail, and they are basically tweaking the phis and psis so that the geometric model, the 3D coordinates, get into a configuration which will produce distance maps similar to the ones we got here. So, that's the high level overview of how it all fits together. Okay, that was the overview of AlphaFold 1. Hopefully that was useful, I know there were a lot of details, so let that sink in. Now, regarding AlphaFold 2, that's the main topic today, but we don't know much, so this is what we got from DeepMind: they trained the system on publicly available data consisting of 170,000 protein structures from the Protein Data Bank, which we already mentioned, together with large databases, blah, blah, blah, and they used 128 TPU v3 cores for maybe a couple of weeks to train this thing. This is the diagram they gave us, and as for the paper, they are preparing one to submit to a peer-reviewed journal. Let me see what I can deduce from this chart alone. So, what I expect here, and they mentioned spatial graphs in the blog, so I won't get into any assumptions aside from this one, once the paper comes out I'll cover it, but for now let me just try and see what we can deduce. What I expect to happen is the following. Because they were only using 64 by 64 crops here, so basically, you have the amino acid sequence, and that's a bad depiction of an amino acid sequence, until now they couldn't really model the long-range dependencies throughout the sequence, they could only take chunks, so it's basically a form of local attention. They could maybe model amino acids that lie close together if we are on the main diagonal here, so if the crop is currently maybe here, that's when we model, like, the local attention of close-by amino acids. If the crop is maybe here, then what happens is the following: we can model maybe this group and this group, and all the interactions between them, like this one corresponds to this one, and this one, etc.
So, it becomes a mess really quickly, but basically, what I expect they will do is use transformers. They'll probably use some efficient transformer, a lot of them came out this year, like, I don't know, Linformer, Performer, Reformer, fill in the blank, and basically they won't have to bake in these inductive biases that we currently have, which are an intrinsic property of convolutional neural networks. Yeah, that was it about the second system, I won't get into any more details. I just want to tell you about some really cool repercussions the system will have, for example in the COVID-19 pandemic. Basically, they already helped us find the structures of certain proteins that are present in the SARS-CoV-2 virus. One of those is the ORF3a protein, and what they can do is accelerate the pace of the experimental methods: they can predict the structure, and then they can help the experimental labs figure out the structure much faster than by doing it from scratch. So, yeah, that was it, hopefully you found this video insightful. If you did, go ahead and subscribe to my channel, and click that bell icon to get notified when I upload a new video, and until next time, keep learning deep.
[{"start": 0.0, "end": 6.0, "text": " It will change everything. DeepMind's AI makes a gigantic leap in solving protein structures."}, {"start": 6.0, "end": 12.0, "text": " So, I guess all of you heard about this so far, or most of you."}, {"start": 12.0, "end": 20.0, "text": " And basically what happened is that DeepMind released, published their results"}, {"start": 20.0, "end": 23.0, "text": " on this year's protein folding challenge."}, {"start": 23.0, "end": 29.0, "text": " And their method alpha fold, actually the second iteration of their algorithm alpha fold,"}, {"start": 29.0, "end": 38.0, "text": " kind of ruined the whole, like, beat everybody and got the best scores."}, {"start": 38.0, "end": 43.0, "text": " And we can see, so this is the blog that they published so far,"}, {"start": 43.0, "end": 46.0, "text": " they still haven't published the paper."}, {"start": 46.0, "end": 50.0, "text": " The paper will come probably in maybe a year or something,"}, {"start": 50.0, "end": 56.0, "text": " because the last paper for the alpha fold one came out in December 2019,"}, {"start": 56.0, "end": 59.0, "text": " where the competition happened in 2018."}, {"start": 59.0, "end": 63.0, "text": " So, we'll have to wait some time to get to that, to get the paper."}, {"start": 63.0, "end": 68.0, "text": " And I basically can see the scores throughout the years,"}, {"start": 68.0, "end": 71.0, "text": " so that's a biennial competition called CASP."}, {"start": 71.0, "end": 75.0, "text": " And it happens, so it happens every two years."}, {"start": 75.0, "end": 82.0, "text": " And you can see here that in 2018 their method alpha fold one also dominated the competition,"}, {"start": 82.0, "end": 88.0, "text": " but this year they crossed 90 on this global distance test metric,"}, {"start": 88.0, "end": 95.0, "text": " which pretty much tells you how good the alignment between the experimentally found protein structure"}, {"start": 95.0, "end": 99.0, "text": " and the computationally found protein structures are."}, {"start": 99.0, "end": 106.0, "text": " And I'll first, so the idea of this video will be to first explain you what the problem is,"}, {"start": 106.0, "end": 113.0, "text": " so what the protein folding problem is, give you a bit about history and background,"}, {"start": 113.0, "end": 115.0, "text": " and how proteins work and everything."}, {"start": 115.0, "end": 120.0, "text": " And then my idea is to go into the alpha fold one paper, because that's what we have so far,"}, {"start": 120.0, "end": 124.0, "text": " and I'll explain how it works, how the deep learning method works,"}, {"start": 124.0, "end": 128.0, "text": " so there'll be a deep, in-depth explanation of the paper."}, {"start": 128.0, "end": 135.0, "text": " And I'll give some assumptions of what will happen with the alpha fold two, what the method is,"}, {"start": 135.0, "end": 140.0, "text": " they only have one chart, so I'll try and deduce something out of it."}, {"start": 140.0, "end": 150.0, "text": " So basically, so why this is interesting is that 90 is considered as something that's like,"}, {"start": 150.0, "end": 157.0, "text": " if you get to 90 or above, you're pretty much comparable to experimental methods,"}, {"start": 157.0, "end": 163.0, "text": " the reason being they are also not perfect, and they do have their measuring uncertainties."}, {"start": 163.0, "end": 171.0, "text": " So here we can see, let me zoom in a bit, here we can see what the problem is."}, {"start": 171.0, "end": 175.0, 
"text": " Basically, the green thing is the pseudo ground truth,"}, {"start": 175.0, "end": 182.0, "text": " so that means somebody found that one using some of the experimental methods,"}, {"start": 182.0, "end": 184.0, "text": " and I'll tell you a bit more about those."}, {"start": 184.0, "end": 191.0, "text": " And the blue one, so the blue one is a computational prediction that DeepMind's alpha fold two made,"}, {"start": 191.0, "end": 195.0, "text": " and you can see it's pretty, pretty close."}, {"start": 195.0, "end": 200.0, "text": " The results are really, really good, and you can see 90.7, 93.3,"}, {"start": 200.0, "end": 205.0, "text": " and these are just the names of the proteins T1049, whatever."}, {"start": 205.0, "end": 212.0, "text": " So let me kind of step back a little bit here and tell you something about how the problem came to be."}, {"start": 212.0, "end": 216.0, "text": " So we've been trying to solve this problem in biology for 50 years already,"}, {"start": 216.0, "end": 223.0, "text": " and so this is kind of a really big, big thing that happened, so yeah, the hype is real this time."}, {"start": 223.0, "end": 227.0, "text": " First of all, there's this guy called Christian Anfinsen,"}, {"start": 227.0, "end": 233.0, "text": " and you can see in 1972 he gave this Nobel Prize lecture."}, {"start": 233.0, "end": 241.0, "text": " So he basically got his Nobel Prize because he showed that the function of the protein"}, {"start": 241.0, "end": 246.0, "text": " is pretty much uniquely determined by its 3D structure, which is really interesting."}, {"start": 246.0, "end": 249.0, "text": " So here is a really cool example of hemoglobin."}, {"start": 249.0, "end": 255.0, "text": " The hemoglobin forms a pocket to hold heme, a small molecule with an iron atom in the center that binds to oxygen."}, {"start": 255.0, "end": 262.0, "text": " So basically, like, the structure is made so that this protein, the hemoglobin, has a small pocket"}, {"start": 262.0, "end": 272.0, "text": " where this molecule that has, like, an iron atom which binds to, like, the oxygen and thus transports oxygen throughout our body,"}, {"start": 272.0, "end": 276.0, "text": " like, just that molecule fits so nicely in this hemoglobin."}, {"start": 276.0, "end": 282.0, "text": " So that's a really plastic example of how the structure determines the function."}, {"start": 282.0, "end": 284.0, "text": " That's a pretty obvious one."}, {"start": 284.0, "end": 287.0, "text": " So there's this cool example of this molecule that has this Y shape,"}, {"start": 287.0, "end": 295.0, "text": " and the molecule attaches to bacteria or viruses and thus stacks those cells for destruction in the immune system."}, {"start": 295.0, "end": 299.0, "text": " So let me give you just a brief example of how the proteins are created."}, {"start": 299.0, "end": 309.0, "text": " So I like to think of this as, like, a kitchen where the ribosome is basically this macromolecule acts as a cook."}, {"start": 309.0, "end": 313.0, "text": " We have the messenger RNA, which is pretty much the receipt,"}, {"start": 313.0, "end": 321.0, "text": " and then we have amino acids, which are the ingredients, which are transported by this thing called the transport RNA."}, {"start": 321.0, "end": 326.0, "text": " So basically, the messenger RNA contains this linear sequence of amino acids,"}, {"start": 326.0, "end": 332.0, "text": " and it gets translated so the receipt gets read by this ribosome,"}, {"start": 332.0, "end": 342.0, "text": " 
and we form, like, a chain of proteins which slowly start folding because different parts, different amino acids have different charges,"}, {"start": 342.0, "end": 346.0, "text": " and so they attract or repel, and thus we get that 3D structure."}, {"start": 346.0, "end": 353.0, "text": " So that's basically how the protein is formed, and now we're trying to figure out this 3D structure computationally."}, {"start": 353.0, "end": 362.0, "text": " Now, the thing is, how do people find the 3D structures now that we know that it's really important for, like,"}, {"start": 362.0, "end": 368.0, "text": " for figuring out the function of the proteins? How do we find the structure of the protein?"}, {"start": 368.0, "end": 374.0, "text": " So there are three methods that are used extensively in different labs,"}, {"start": 374.0, "end": 382.0, "text": " and those are mainly X-ray diffraction, nuclear magnetic resonance, and this electron microscopy."}, {"start": 382.0, "end": 392.0, "text": " And as you can see, the X-ray diffraction was so far had the highest amount of proteins were, like, protein structures were determined using this method."}, {"start": 392.0, "end": 401.0, "text": " Although it does have its own flaws, it cannot solve, so depending on the size of the protein, it has its constraints, so it can't solve everything."}, {"start": 401.0, "end": 412.0, "text": " Now, these two are getting more popular lately, but they are still super expensive and really slow methods to obtain the 3D structure."}, {"start": 412.0, "end": 419.0, "text": " So that's why the computational biology and these computational methods for figuring out the structure were so important."}, {"start": 419.0, "end": 430.0, "text": " So if we take a look at this chart here, we can see that the number of proteins we sequenced, successfully sequenced, is around 200 million now in 2020."}, {"start": 430.0, "end": 439.0, "text": " So it's been rising, like, so those are the linear sequences that are encoded in the DNA, so, but we still don't have the 3D structure for all of those."}, {"start": 439.0, "end": 444.0, "text": " And if we take a look at this chart, we can see there is this thing called a protein database,"}, {"start": 444.0, "end": 455.0, "text": " which is really important because that's the data set that's been used in the AlphaFold paper, and we'll get to it in a couple of minutes."}, {"start": 455.0, "end": 468.0, "text": " But basically here, you can see that, like, we have, so we have a lot of sequences being kind of ingested into this data set,"}, {"start": 468.0, "end": 481.0, "text": " but the number of structures is actually a lot smaller. 
So here, 2020, we can see only 12,000 structures are there, but we have, like, much more sequences."}, {"start": 481.0, "end": 489.0, "text": " So that was the motivation for why we want to accelerate the pace of finding 3D structures."}, {"start": 489.0, "end": 495.0, "text": " The experimental one is slow, it's really expensive, so, yeah, we need an alternative."}, {"start": 495.0, "end": 504.0, "text": " So, again, why we care about figuring out the structure, the 3D structure of the protein is because, as you can see on DeepMind's blog,"}, {"start": 504.0, "end": 513.0, "text": " an error in the genetic recipe may result in a malformed protein which could result in disease or death for an organism."}, {"start": 513.0, "end": 516.0, "text": " Many diseases, therefore, are fundamentally linked to proteins."}, {"start": 516.0, "end": 533.0, "text": " So diseases like diabetes, like dementia, Parkinson's, Alzheimer's, cancer, some of the diseases that are known to be, like, plaguing the humankind for decades or more"}, {"start": 533.0, "end": 543.0, "text": " can probably be solved by just us knowing the 3D structures. So that's why it's super important to, so that's why this problem is super important."}, {"start": 543.0, "end": 553.0, "text": " Aside from that, also, we could develop much better materials if we knew how to construct and understand the shapes of different proteins,"}, {"start": 553.0, "end": 563.0, "text": " and we could create, like, plastic-eating enzymes and thus reduce the pollution, like, on the global level, which is something we do care about, obviously."}, {"start": 563.0, "end": 571.0, "text": " So that was the background story. It was a bit longer. I assume most of my audiences, like, machine learning, have machine learning background,"}, {"start": 571.0, "end": 581.0, "text": " so I felt I need to really give you a nice overview of what the problem is and everything that goes inside, and, like, give you some numbers and methods, etc."}, {"start": 581.0, "end": 586.0, "text": " So let's start exploring the Alpha Fold 1 now."}, {"start": 586.0, "end": 597.0, "text": " As you can see here, this is an animation that DeepMind provided, which basically shows how, during their optimization procedure, which we'll get to in a sec,"}, {"start": 597.0, "end": 610.0, "text": " like, folds the proteins, so it starts from a semi-unfolded protein strand, which then slowly takes its 3D shape in space."}, {"start": 610.0, "end": 623.0, "text": " Okay, let's overview the Alpha Fold 1 paper. You can see, so, the title is Alpha Fold Improved Protein Structure Prediction Using Potentials from Deep Learning."}, {"start": 623.0, "end": 631.0, "text": " They had, like, a bunch of folks working on this project, as it's pretty common for huge projects like this one."}, {"start": 631.0, "end": 638.0, "text": " So, they started in 2016, so it's already been, like, a four-year effort of many smart people."}, {"start": 638.0, "end": 644.0, "text": " So, I've already mentioned it's DeepMind. They're based in London, and, okay."}, {"start": 644.0, "end": 649.0, "text": " So, the paper is pretty complicated. There's a lot of text and a lot of domain knowledge,"}, {"start": 649.0, "end": 658.0, "text": " so I'll just try and extract the Deep Learning side of it, and hopefully that will give you, like, a good understanding of how the method works."}, {"start": 658.0, "end": 667.0, "text": " Let's start with this chart. 
So, basically, on this chart here, they just show that they are better than other groups."}, {"start": 667.0, "end": 678.0, "text": " So, many other research groups were participating in this in 2018, in Cusp13 Challenge, that was the name of the challenge."}, {"start": 678.0, "end": 686.0, "text": " And, basically, the green lines represent the groups, those other groups, and the blue line is DeepMind."}, {"start": 686.0, "end": 692.0, "text": " And so, on the, basically, on the x-axis, we have something called TM score."}, {"start": 692.0, "end": 698.0, "text": " So, there are a bunch of metrics for determining how good the structure prediction is."}, {"start": 698.0, "end": 710.0, "text": " So, one is TM. There are other scores, like we saw the Global Distance Test score. There are some, like, IDDT or something."}, {"start": 710.0, "end": 721.0, "text": " Yeah, there are a lot of different scores, and basically, so, FM Domain Count, Domain is some, like, fancy name for protein, pretty much."}, {"start": 721.0, "end": 732.0, "text": " So, here, in this free modeling domain, we have 43 sequences which we needed to figure the structure for."}, {"start": 732.0, "end": 742.0, "text": " And you can see that, on average, the DeepMind had a lot higher TM score than others. So, that's what this thing is."}, {"start": 742.0, "end": 747.0, "text": " And now, I should probably tell you what the difference between free modeling and template modeling is."}, {"start": 747.0, "end": 758.0, "text": " So, template modeling is when you're trying to predict the structure for a sequence that has similar sequences whose structure we already know."}, {"start": 758.0, "end": 773.0, "text": " Now, the thing is, once you know some similar sequence, you can kind of assume that the structure of your sequence will be similar to that one."}, {"start": 773.0, "end": 783.0, "text": " So, you just need to find where the sequence differs and tweak, kind of tweak the structure, so, starting from that, maybe that structure."}, {"start": 783.0, "end": 789.0, "text": " So, that's a template modeling. So, I'm simplifying a little bit here, but, like, that's the rough notion."}, {"start": 789.0, "end": 801.0, "text": " Again, here, on this chart, they just displayed five different proteins, and they, again, show that, like, the blue dots that DeepMind is crushing it."}, {"start": 801.0, "end": 808.0, "text": " I mean, so, these two were bragging charts. So, now, let's get to the actual method."}, {"start": 808.0, "end": 819.0, "text": " And this is how it looks like. 
We can see some neural network here, of course, and that's actually ResNet with dilated convolutions."}, {"start": 819.0, "end": 829.0, "text": " And so, basically, before I get into those details, there are pretty much, let me see, three parts of this pipeline."}, {"start": 829.0, "end": 837.0, "text": " So, the first one is preparing the features that we'll need in order to predict the structure."}, {"start": 837.0, "end": 848.0, "text": " Then the second part is the distance and torsion angle predictions, and I'll tell you what those are in a minute."}, {"start": 848.0, "end": 856.0, "text": " So, that's the second part of pipeline, and then we have the gradient descent or the nonlinear optimization part of the pipeline."}, {"start": 856.0, "end": 868.0, "text": " So, in stage one, they somehow find a way to go from 1D amino acid sequence that represents the protein to 2D map."}, {"start": 868.0, "end": 880.0, "text": " And the 2D map actually contains covariations between our target sequence and similar sequence from this huge data set of sequences."}, {"start": 880.0, "end": 889.0, "text": " And so, this is this thing called multi-sequence alignment. So, you basically find a group of sequences, like thousand sequences,"}, {"start": 889.0, "end": 898.0, "text": " which are similar to your target sequence, which you're trying to predict the structure for, and you encode the covariations in these maps."}, {"start": 898.0, "end": 908.0, "text": " So, if I take one point on this 2D map, and this is 2D volume, so, this will be like 500 or something channels or more."}, {"start": 908.0, "end": 917.0, "text": " There are some details down in the paper. So, one thing they encode is like the one-hot encoding of the amino acid."}, {"start": 917.0, "end": 924.0, "text": " So, basically, that means that if we take this point, so that's... let me zoom in."}, {"start": 924.0, "end": 941.0, "text": " Oops. So, basically, let's say this is like row five, and let's say this is like row, I don't know, 40. They'll basically encode five as a one..."}, {"start": 941.0, "end": 955.0, "text": " They'll basically find what the fifth amino acid is, and because we have only 21 amino acids in human genome, we'll encode this, let's say this is..."}, {"start": 955.0, "end": 964.0, "text": " Because there are 21, we can represent each one with a letter from the alphabet. Luckily, we have 26 letters."}, {"start": 964.0, "end": 976.0, "text": " So, let's say this one is, I don't know, the fifth one is amino acid A, this 40th is maybe, I don't know, like S, and they just encode this one as a one-hot."}, {"start": 976.0, "end": 983.0, "text": " Let's say this one will be like zeros, and then on some spot like 13, I don't know, they'll have 1, etc."}, {"start": 983.0, "end": 998.0, "text": " So, what they will do is they will concatenate this one-hot vector with one-hot vector that corresponds to amino acid A, and that will be a part of the features they are using."}, {"start": 998.0, "end": 1011.0, "text": " There is much more domain-specific things they're including. 
Also, they're including these multi-sequence alignment things, and I'll probably explain those a bit more..."}, {"start": 1011.0, "end": 1015.0, "text": " I'll go into detail about those, because that's interesting a bit later."}, {"start": 1015.0, "end": 1033.0, "text": " So, that's how they get this 3D volume, and now you can treat this as a simple computer vision problem, and they pretty much use this classic deep learning model like ResNet with the dialectic convolutions,"}, {"start": 1033.0, "end": 1053.0, "text": " and treat this as an image-to-image translation problem. So, the network itself, as you can see, has kernels of size 64 by 64, so they're just actually randomly sliding across this volume, and they end up getting the distance predictions."}, {"start": 1053.0, "end": 1069.0, "text": " Okay, so what is this distance map, and how do we get one? So, in order to understand that, I have to make a small tangent here and explain how the amino acids connect, and which distances and torsion angles are we actually measuring."}, {"start": 1069.0, "end": 1089.0, "text": " So, these are all the 21 amino acids we have in the human genome, and basically, as you can see, this part is always the same. For every single amino acid, no matter the properties, this part always remains the same."}, {"start": 1089.0, "end": 1101.0, "text": " So, this thing here is called something called, I think, alpha carbon atom, this one is called beta carbon atom, and that one is really important."}, {"start": 1101.0, "end": 1124.0, "text": " So, let's say for the sake of argument that we have a sequence like R, H, K, which is pretty much these three amino acids here. So, what will happen is, once they start going through the ribosome, they'll start connecting, and so these parts will form the backbone."}, {"start": 1124.0, "end": 1143.0, "text": " So, these parts, and the groups, which is this part that goes off from the beta carbon atom, those will be, like for example, some will be positively charged, maybe this one will be negatively charged."}, {"start": 1143.0, "end": 1158.0, "text": " So, what will happen is that they'll start attracting, and that's the thing that causes the protein to fold. So, imagine if you had a protein that had 140 amino acids."}, {"start": 1158.0, "end": 1176.0, "text": " So, what could happen is that in this complex interaction, maybe the first amino acid would be, like in the 3D structure, would be really close to maybe 127th amino acid."}, {"start": 1176.0, "end": 1195.0, "text": " So, that means we have our long range dependencies in this structure. So, now, when we are, so I mentioned distance map. So, what we are actually looking for is distance between beta carbon atoms between different amino acids."}, {"start": 1195.0, "end": 1211.0, "text": " So, basically, that's what we are looking at, and then we have torsion angles, so phi and psi angles, which are basically in that 3D structure, two angles that we care about."}, {"start": 1211.0, "end": 1239.0, "text": " Let me see if I can find a nice visualization of that, and you can see it here, hopefully. Let me zoom in. So, this is the protein chain, and so one of the two dihedral angles inside here, without getting into too much explanation, is two angles that we care about."}, {"start": 1239.0, "end": 1253.0, "text": " So, for the sake of argument, just think there are two angles we care about. 
There is a third, but that one is almost always 180, so DeepMind kind of hard-coded that one to be 180, and they're just considering these two."}, {"start": 1253.0, "end": 1268.0, "text": " So, okay, let me get back to the OneNote. So, now that we know this information about how we calculate distance and phi and psi angles, let's go back to the deep learning pipeline."}, {"start": 1268.0, "end": 1288.0, "text": " So, basically, you can see here we have amino acids here, and we have the same sequence here, and so this particular point maybe corresponds to finding the..."}, {"start": 1288.0, "end": 1313.0, "text": " So, I'll actually use a different drawing, which will probably be easier to understand. Okay, let me see if I can nicely explain the distance maps. So, we have the experimentally found the pseudo ground truth distance map for this particular amino acid sequence."}, {"start": 1313.0, "end": 1337.0, "text": " And we have the predicted distance map over here. So, basically, what they showed here is they took a specific amino acid from the sequence, like in particular they took 29th amino acid, and that's depicted on these charts by these red stripes."}, {"start": 1337.0, "end": 1356.0, "text": " And basically, you can see that the distance between 29th carbon, beta carbon, and with itself is obviously zero, which is this colored with yellow."}, {"start": 1356.0, "end": 1369.0, "text": " And over the whole main diagonal is going to be yellow for that specific, for that particular reason, because the distance between beta carbon and itself is always going to be zero."}, {"start": 1369.0, "end": 1395.0, "text": " So, that's why we see this pattern along the main diagonal. And what it did here is they took the neighborhood from the 29th amino acid, and they just extracted all of the 41 distances, and that's the chart on the right."}, {"start": 1395.0, "end": 1418.0, "text": " And now the thing is these things here are not scalars, they're actually probability distributions. So, if we take maybe, I don't know, 40th probability distribution, 40th amino acid and its corresponding probability distribution, we end up with something like this."}, {"start": 1418.0, "end": 1444.0, "text": " And you can see on the x-axis we have the distance, and on the y-axis we have the probability. So, this particular position has, we can see that the distribution is somewhere, like the peak is pretty much around like 4 angstroms, where angstrom is 0.1 nanometer."}, {"start": 1444.0, "end": 1457.0, "text": " So, this is how they symbolize the angstrom. And the red bar is the ground truth value. 
So, the red bar says, I don't know, like 5, and we predicted 4, which is pretty good."}, {"start": 1457.0, "end": 1476.0, "text": " If we take a look at, because 29 is missing obviously, because we know that it's going to have a spike on the 0 distance, basically if we take a look at the surrounding probability distributions, so those correspond to this part here,"}, {"start": 1476.0, "end": 1488.0, "text": " basically distributions are really narrow and really precise, so we can see that the peak, the mode of the distribution corresponds perfectly to the red bars."}, {"start": 1488.0, "end": 1501.0, "text": " Here also, really nice, as we go further apart, further away, we can see that the red bars don't correspond always with the mode of the distribution."}, {"start": 1501.0, "end": 1521.0, "text": " So, say, let me take an example where we have some problems, like here, like this one is a good example, because we predicted that the distance is here, so that's whatever this value is, but the red bar is here, so, yeah."}, {"start": 1521.0, "end": 1537.0, "text": " By the way, these black lines is the cut of length distance when we consider the two carbon, beta carbon atoms to be in contact."}, {"start": 1537.0, "end": 1551.0, "text": " So, if we are, so this one is at 8 angstrom, so if you fall below 8 angstrom, we consider those two beta carbon atoms to be in contact, otherwise they are not in contact, obviously."}, {"start": 1551.0, "end": 1563.0, "text": " So, those are basically, so the map is basically the, like the inter-residue distances for the amino sequence."}, {"start": 1563.0, "end": 1568.0, "text": " And we want to predict, obviously, as close as possible to the ground truth."}, {"start": 1568.0, "end": 1576.0, "text": " So, let me now go back to our initial deep learning pipeline, if I can get there, okay, we're here."}, {"start": 1576.0, "end": 1590.0, "text": " So, we do a similar thing for the, for torsion angles, and once we have those maps, what we now do is we form a geometric model of our sequence,"}, {"start": 1590.0, "end": 1603.0, "text": " meaning we basically take, like we have a geometric model, and we have two arguments, we have the phi and we have psi, so the torsion angles,"}, {"start": 1603.0, "end": 1612.0, "text": " and those give us the, like, 3D coordinates of those beta carbon atoms in 3D space."}, {"start": 1612.0, "end": 1622.0, "text": " So, what I mean by that, so if we have a, like a sequence of length 100, we'll have 200 parameters here."}, {"start": 1622.0, "end": 1638.0, "text": " And basically, for one specific configuration of these 2L torsion angles, we get a specific configuration in 3D space, like, I don't know, like we'll have some thing going on in space."}, {"start": 1638.0, "end": 1658.0, "text": " And so, what this part does is we are optimizing, so we are changing these two, so that the 3D coordinates or the corresponding 3D structure gets as close as possible to the gray one,"}, {"start": 1658.0, "end": 1662.0, "text": " so that this one is the native, the experimentally found structure."}, {"start": 1662.0, "end": 1674.0, "text": " So, what happens is by tweaking these phi and psi angles, we are also tweaking, you can imagine, like, we are tweaking the distance map for this particular configuration,"}, {"start": 1674.0, "end": 1685.0, "text": " so for a given phi and psi, we have a given distance map, which we are trying to match with this one."}, {"start": 1685.0, "end": 1696.0, "text": " So, the two parts of the pipeline are 
pretty much independent. The second pipeline, so the second part of the pipeline just figures out the depth map and the torsion maps,"}, {"start": 1696.0, "end": 1709.0, "text": " and then the third part of the pipeline just tries and tweaks this differentiable G model, so that we minimize the distance."}, {"start": 1709.0, "end": 1723.0, "text": " And we can see on this chart here that the TM score as we are, I'm losing my cursor, my God, the TM score is going down as we are progressing with our optimization steps,"}, {"start": 1723.0, "end": 1735.0, "text": " and, sorry, the RMSD, that's one of the metrics that they use to figure out whether the 3D structures are, like, getting closer together,"}, {"start": 1735.0, "end": 1747.0, "text": " and the TM score, on the other hand, is going upwards. What's else interesting? So, they just took a snapshot here, and we can see how it's progressing"}, {"start": 1747.0, "end": 1752.0, "text": " and slowly getting into the final shape, which is really close to the gray one here."}, {"start": 1752.0, "end": 1761.0, "text": " Now, the interesting thing is, and I didn't mention this, because those details were probably not as relevant as now,"}, {"start": 1761.0, "end": 1773.0, "text": " so the protein has something called the primary, the secondary, the tertiary, and the co-ordinary, like, structure."}, {"start": 1773.0, "end": 1783.0, "text": " The primary structure is just the amino sequence. The secondary structure are those things that form spontaneously, like this thing, it's called alpha helix,"}, {"start": 1783.0, "end": 1795.0, "text": " and then we have these things here, which are called beta sheets, and then those are forming in these intricate, like, shapes,"}, {"start": 1795.0, "end": 1808.0, "text": " which are the tertiary structure of the protein, and finally, the co-ordinary. Let me actually use this shot, it will be easier to understand what I'm talking about."}, {"start": 1808.0, "end": 1817.0, "text": " So, we have the primary structure here, then we have the secondary structures, which form, so, alpha helixes and these plebe or beta sheets,"}, {"start": 1817.0, "end": 1827.0, "text": " then they form in this thing called, like, the tertiary structure, and finally, multiple proteins can interact, and that's the co-ordinary structure."}, {"start": 1827.0, "end": 1837.0, "text": " I hope I'm pronouncing that well. So, that's about the shapes. And aside from the distance map I already showed you,"}, {"start": 1837.0, "end": 1855.0, "text": " this pipeline is also predicting these secondary shapes, so the blue one is the alpha helix, and the red one is the beta sheet."}, {"start": 1855.0, "end": 1867.0, "text": " So, this is the ground truth, the pseudo ground truth, and these here are taken, is a snapshot taken from the last optimization step,"}, {"start": 1867.0, "end": 1881.0, "text": " and we can see that the modes pretty much, we are pretty well correlated with the ground truth, and thus, the final 3D shape is also,"}, {"start": 1881.0, "end": 1890.0, "text": " coincides with the experimentally found shape. 
Whew, that was, that was, that was, that was crazy."}, {"start": 1890.0, "end": 1896.0, "text": " Now, there is one more question, how do we initialize the phi's and the psi's initially?"}, {"start": 1896.0, "end": 1904.0, "text": " And what they did is, they used those predicted torsion angles, which are not depicted here, they only showed the distance map,"}, {"start": 1904.0, "end": 1913.0, "text": " and they sample from those distributions to get the initial, like, configuration, you can see it here, this is the initial one."}, {"start": 1913.0, "end": 1923.0, "text": " And, it's getting crowded here. So, basically, that's how they initialize this, they randomly sample from that distribution,"}, {"start": 1923.0, "end": 1934.0, "text": " and then they do the optimization. But now, what they, what they, they do one more thing, that's, they keep a pool of final structures,"}, {"start": 1934.0, "end": 1945.0, "text": " so they keep like a 20 final structures, so whatever they get at the end of these optimization procedures, they minimize the potential,"}, {"start": 1945.0, "end": 1953.0, "text": " they just keep those in this pool, and then, they also sample from that pool, so they take one of these minimal configurations,"}, {"start": 1953.0, "end": 1962.0, "text": " and they randomly add noise to phi's and psi's, so they kind of get, experimentally show that they are getting,"}, {"start": 1962.0, "end": 1970.0, "text": " those are giving them better initializations than just randomly sampling from those torsion angle maps."}, {"start": 1970.0, "end": 1979.0, "text": " So, that's one more detail I wanted to mention. And now, we have the whole pipeline, hopefully, in place, let me go once more,"}, {"start": 1979.0, "end": 1991.0, "text": " like, the high level thing, we have the features related to amino acids, we do basically convolutions over those input maps,"}, {"start": 1991.0, "end": 1997.0, "text": " we get the distance maps, we get the torsion angles, we get these secondary structure predictions,"}, {"start": 1997.0, "end": 2006.0, "text": " and once we have those, we just freeze it, and then, we have an independence step, and that's the gradient descent optimization,"}, {"start": 2006.0, "end": 2016.0, "text": " they used LBFGS, actually, forgot to mention that detail, and they basically are tweaking the phi's and psi's,"}, {"start": 2016.0, "end": 2024.0, "text": " so that the geometric model, the 3D coordinates, are getting in a configuration, which will produce distance maps,"}, {"start": 2024.0, "end": 2033.0, "text": " similar to the one that we got here. 
So, that's the high level overview of how it all fits together."}, {"start": 2033.0, "end": 2043.0, "text": " Okay, that was the overview of the Alpha Fold 1, hopefully, that was useful, I know there was a lot of details, let that sink in."}, {"start": 2043.0, "end": 2053.0, "text": " So, regarding the Alpha Fold 2, that's the main topic today, but we don't know much, so this is what we got from DeepMind,"}, {"start": 2053.0, "end": 2060.0, "text": " so we trained this system on publicly available data consisting of 170,000 protein structures from the Protein Data Bank,"}, {"start": 2060.0, "end": 2074.0, "text": " so we already mentioned this one, together with large bases, blah, blah, blah, and so they used 128 TPU V3 cores for maybe a couple of weeks to train this thing,"}, {"start": 2074.0, "end": 2088.0, "text": " and basically, this is the diagram they gave us, and the paper, so they are preparing a paper to submit for a peer-reviewed journal."}, {"start": 2088.0, "end": 2100.0, "text": " And let me see what I can deduce from this chart alone. So, basically, what I expect here, and they mentioned spatial graphs in the blog,"}, {"start": 2100.0, "end": 2116.0, "text": " so I won't get into any assumptions aside from this one, once the paper comes out, I'll cover it, but for now, let me just try and see what we can deduce here."}, {"start": 2116.0, "end": 2132.0, "text": " So, basically, what I expect to happen is, because they were using only 64 by 64 COV maps here, is that, so basically, you have the amino sequence,"}, {"start": 2132.0, "end": 2145.0, "text": " that's a bad depiction of amino sequence, and basically, until now, they could only model the long-range dependencies throughout the sequence,"}, {"start": 2145.0, "end": 2158.0, "text": " they could only take chunks, so it's basically a form of local attention, so they could maybe model, like, amino acids that lie close together,"}, {"start": 2158.0, "end": 2179.0, "text": " if we are on the main diagonal here, so that's, if the COV map is currently maybe here, that's when we model, like, the local attention of close-by amino acids."}, {"start": 2179.0, "end": 2195.0, "text": " If the COV map is maybe here, then what happens is the following. We can model maybe this group and this group, and all the interactions between them,"}, {"start": 2195.0, "end": 2204.0, "text": " like this one corresponds to this one, and this one, etc. So, it becomes a mess really quick, but basically, what I expect they will do is,"}, {"start": 2204.0, "end": 2215.0, "text": " they will use transformers, and they'll just have, so they'll probably use some efficient transformer, a lot of them came out this year,"}, {"start": 2215.0, "end": 2227.0, "text": " like, I don't know, Linformer, Performer, Reformer, fill in the blank, and basically, they won't have to create these inductive biases that we currently have,"}, {"start": 2227.0, "end": 2238.0, "text": " and that's the intrinsic property of convolutional neural networks. 
Yeah, that was it about the second paper, I won't get into any more details,"}, {"start": 2238.0, "end": 2247.0, "text": " I just want to tell you about some really cool repercussions that the system will have in the, like, COVID-19 pandemic."}, {"start": 2247.0, "end": 2262.0, "text": " So, basically, they already helped us find structures of certain proteins that are present in the SARS-CoV-2 virus."}, {"start": 2262.0, "end": 2270.0, "text": " One of those is ORF3A protein, and what they can do is they can accelerate the pace of experimental methods."}, {"start": 2270.0, "end": 2281.0, "text": " They can predict the structure, and then they can help the experimentation labs figure out the structure much faster than by doing it from scratch."}, {"start": 2281.0, "end": 2292.0, "text": " So, yeah, that was it, hopefully you found this video insightful. If you did, go ahead and subscribe to my channel,"}, {"start": 2292.0, "end": 2300.0, "text": " and click that bell icon to get notified when I upload a new video, and until next time, keep learning deep."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=fVt387VZJe8
GPT-3 - Language Models are Few-Shot Learners | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover the famous GPT-3 model. I first give you some context about the stuff that happened since the paper was first published in May 2020 (hype, anti-hype, limitations, and cool apps), and then I dive deep into explaining the paper. You'll learn about: ✔️ Useful resources on GPT-3 ✔️ Main takeaways from the paper ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ "anti-hype" blog: https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html ✅ Gwern's blog: https://www.gwern.net/GPT-3 ✅ My transformer implementation: https://github.com/gordicaleksa/pytorch-original-transformer ✅ Cool "GPT game": https://play.aidungeon.io/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 00:00 GPT (anti)hype, Gwern, prompt programming 04:30 Abstract of the paper 06:50 Architecture, data, compute 12:15 Zero-shot, one-shot, and few-shot learning 18:45 Power-law chart (more compute please) 20:35 Results (machine translation) 23:05 NLI (reasoning is hard) 24:40 Arithmetic 26:25 Word unscrambling 28:40 SAT analogies (how smart are humans?) 30:45 Fake news generation 32:05 Data contamination 35:05 Limitations of the model 37:35 Bias, fairness (broader impact) 44:30 Final thoughts, are we going towards an AGI? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #gpt3 #fewshotlearning #deeplearning
What's up folks? So in this video I thought I'd cover the famous GPT-3 model, from the paper titled Language Models are Few-Shot Learners, published earlier this year by OpenAI. First I just want to give you an overview, some context on what has been happening since then. Basically, on one hand we had a lot of hype, with people saying this is artificial general intelligence, it's going to solve everything, and on the other hand we had people nagging, saying how going in this direction is going to ruin the field of machine learning and that there is no way this is the right path towards AGI. So the truth is probably somewhere in between. Let me show you a couple of demos before I start doing a deep dive into the paper, which I think you'll find interesting. First, Twitter had a lot of demos happening. This was one of the most famous ones: a guy would type in a description like "a button that looks like a watermelon", and then GPT-3 would generate the code for a button that looks like a watermelon, which is pretty awesome. And then we also had cool applications like AI Dungeon, which actually appeared much earlier, but they just upgraded their back end, partly using the GPT-3 model. So you can basically, here, enter the genre of the play, then you pick your character, let me take, I don't know, a wizard, and then let me put in the name, like GPT-3, and see what it generates. Basically it will generate text, and then you can interactively play the game by just prompting it with different text, which is a super awesome and nice use case for this model. I won't do it right now, I'll link it in the description so you can play with the game if you want. As I mentioned, on the other side we had people who used single examples as an argument that the model is not as good as people are hyping it to be. So here is a question: which is heavier, a toaster or a pencil? And GPT-3 would answer: a pencil is heavier than a toaster, which is not true, probably in most of the cases. Then a second example, maybe: how many eyes does my foot have? And GPT-3 would answer: your foot has two eyes. So after this blog was published, a lot of people were experimenting with GPT-3, including this guy named Gwern, whose blog you should definitely check out, he's writing a lot. It's a bit harder to parse, in my subjective opinion, but it's a really nice resource. What he showed is that by picking smart hyperparameters, like the temperature and the sampling method, maybe nucleus sampling, and by picking the correct text that gets input to the model, that's the conditioning text, you can get GPT-3 to answer all of those questions, like the toaster or pencil question, correctly, which is surprising. And then people started talking about a new paradigm. So we had software 1.0, we still have it, where you're basically just designing the algorithm, so classical computer science. Then we have designing the data set, and that's pretty much machine learning.
You're now designing the data set, and the algorithm kind of just develops by learning from the data using gradient-based methods. And finally, we came to a situation where we are designing a prompt, so as to effectively communicate with this GPT-3 model. Gwern had a nice sentence here, which says: sampling can prove the presence of knowledge, but not the absence. And that's the idea behind all of this prompt programming. If the model doesn't give you a correct answer, maybe the model wasn't incentivized to give you a correct answer, maybe the model was just being a jokester. Now I'm anthropomorphizing GPT-3, but Gwern showed that by thinking like that, by thinking that you're pretty much communicating with a human, you can more effectively get the information you want out of it. So that was it for the overview. We had the hyping team, we had the nagging team, we had everything in between, and now, at the end of 2020, we know much more about both the limitations and the useful use cases that models like GPT-3 can have. So without further ado, let's jump into the deep dive, let's jump into the paper. The first thing you can notice is that a whole lot of people worked on this project, this looks like half of the OpenAI team pretty much, a bunch of people. So it's a multi-person effort, as all of these huge papers are. What this paper basically did, because they have a lot of resources, is explore what happens when you increase the model size, and especially look at how the few-shot performance compares to fine-tuning, which is the usual thing to do: you pre-train the model and then you fine-tune it. This paper did the pre-training as well, but then, without any fine-tuning, they tried to see what the zero-shot, one-shot and few-shot performance of this model is, and we'll get into the details a bit later. Let me see what they say here: by contrast, humans can generally perform a new language task from only a few examples or from simple instructions, something which current NLP systems still largely struggle to do. So that's their point, basically: humans only need a couple of examples and are already able to solve the new task. Now, that's debatable, because we've been collecting data since we were born, pretty much continuously, so yeah, it's a complicated question. But I do agree that we need to develop better few-shot performance in our models. Then, basically, what they say is that they don't do any gradient updates, and we'll see in a moment how they do it. They just condition the pre-trained model on a bunch of text, and it doesn't even have to be a bunch of text, it's basically a prompt in natural language and then maybe a couple of examples, we'll see it in a minute. And finally, what they say is that it's able to generate text that's really hard to distinguish from text that humans wrote, and then they have a whole section on broader impacts, because this kind of technology can really be used for malicious purposes as well, and I'll talk about that part, the fairness and bias, a bit later. Okay, let me go kind of backwards here. I first want to explain the architecture and the data set, and then we'll jump to how the few-shot thing works. So the architecture is, they say here, the same as GPT-2, which is basically the same as GPT, which is basically the decoder portion of the transformer model. We can see it here: this is the transformer model from the Attention Is All You Need paper, and they took the decoder part, except they don't need the encoder-decoder attention module because they don't have the encoder part, so they just keep the causal masking. Basically, tokens can only look at the tokens that came before them, or at themselves. So, maybe I'll try and briefly explain how it works again. Let me take a pencil here. You basically take the input text and you tokenize it, let me zoom in a little bit, so you tokenize it, and then what you do is you embed these tokens, so that's this output embedding part. Let's say, for the sake of argument, that the hidden dimension of the model is 512, so you'll end up with a bunch of vectors of dimension 512. Then you have these positional encodings, which are basically a huge table, and depending on the position, so this is token number zero, you basically take row zero of that table and you just add it up to the embedding vector, and after adding it up you end up here. Then you have two parts of the architecture. The first one is the multi-headed attention, which attends to all of the tokens and creates those nice representations, and when I say attends to all, it's not bi-directional, as I said, they have the causal masking.
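Just to make the causal masking concrete, here is a minimal single-head sketch of it. The dimensions and weights are made up, and the real model is of course multi-headed, with layer norms, residual connections and so on, so treat this as an illustration only.

```python
# Minimal single-head sketch of causal self-attention; dimensions and weights
# are made up, the real model is multi-headed with layer norms, residuals, etc.
import torch

T, d = 8, 512                               # sequence length, hidden size (illustrative)
x = torch.randn(T, d)                       # token embeddings + positional encodings
Wq, Wk, Wv = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = (q @ k.T) / d ** 0.5               # (T, T) pairwise attention scores
causal_mask = torch.triu(torch.ones(T, T), diagonal=1).bool()
scores = scores.masked_fill(causal_mask, float("-inf"))  # token i only sees tokens <= i
weights = torch.softmax(scores, dim=-1)
out = weights @ v                           # causally attended token representations
```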
So basically the architecture is They say here the same as GPT-2, which is basically the same as GPT, which is Basically the decoder portion of the transformer model So we can see it here So this is the the the transformer paper like this is the transformer model from the attention is all you need paper And they took the decoder part and they just they don't need this multi-head attention because they don't have the encoding part So they they just leave the the causal masking. So basically tokens can only look The tokens that came previously or or look at themselves. So yeah, so maybe I'll try and briefly explain how it works again Let me Take a pencil here So you basically take the input text you you tokenize it so you get let me zoom zoom in a little bit So you you tokenize it Then what you do is you embed these tokens so that's this output embedding part so let's say for the for the sake of argument that the Uh, like the hidden dimension in the model is like 512. So basically you'll end up with a bunch of vectors They have like dimension 512 And then what you do is you have these positional encodings. It's basically like a huge table So it's basically a huge table and then depending on the position here. So this is like Token number zero you basically take zero And you add this with uh like row here and you just add it up To this embedding vector and after adding it up your you end up here and then you just have two parts of the architecture So the first one is the multi-headed attention Which is basically attends to all of the tokens and creates like those nice representations when I say attains to all so it's not bi-directional as I said, they have the causal masking and Finally, we have the feedforward part where basically, yeah, you just kind of independently process those token representations again. So I have a video about this one, but just wanted to do a quick recap of it. And basically the only difference is they use something called a couple of ideas from the sparse transformer paper where you basically, instead of using those like causal masks, you use sparse causal masks. So let's not get into too much details. And yeah, that's pretty much it about the architecture. Now for the training data set, they used, so they used the common crawl, which is a huge, huge data set pretty much like crawling every single month that site downloads bunch of the data from the internet. And what they did is they filtered it, and they also used something called WebText, which was created by using the links from Reddit, which had at least three uploads, which was kind of heuristic for filtering like higher quality content. And they just kind of like scraped all of the data that those links were pointing to. They also have the books, One Books, Two and Wikipedia. So basically the point is that depending on the quality of the data set, they took more samples or less samples. So some of the data like Wikipedia, which is higher quality than for example, common crawl, like had like 3.4 epochs, meaning the model saw it approximately 3.4 times, whereas the common crawl was only seen 0.44 times. One important thing to notice here and comment is that they had a problem with data contamination, which means many of the results, they later showed that the results are not as seriously affected. 
Some of the results that were seriously affected were kind of just not displayed in the paper, but I still think they had some contamination issues left over, especially on the PICA data set where they are testing for understanding of the like physics in the world. And we'll get to it a bit later. The compute was enormous. So they had like, you basically had like 10 years of petaflop second. Like if you had a machine that has a petaflop second performance, you need 10 years to train this model. So it's huge in that sense, you can see it here. And the interesting comparison is that they say here, as a consequence, although GPT-3, 3 billion is almost 10X larger than Roberta large, both models took approximately 50 petaflop second days of compute during pre-training. The reason is that there is some other paper that tells that you shouldn't, that the model shouldn't see like tokens too many times, otherwise it will overfit and the performance will get worse. So that was that part about the data. And yeah, one more interesting thing maybe is that basically they needed a supercomputer because the model was huge. He had 175 billion, so they needed like a supercomputer from Microsoft to train the thing. So that's kind of a fun fact. Let me jump to something that's really like the core part of this paper. And that's how they do this zero shot, like one shot and few shot learning and why they do it. So basically why they do it is, so they said here, so the reason to distinguish between one shot from few shot and zero shot is that it most closely matches the way in which some of the tests are communicated to humans. So yeah, basically that's how humans work. That's how humans work and then for the zero shot, there are some examples like for example, translation. If I told you to translate from English to German, supposing you knew both languages, you'd know what to do. You wouldn't need any additional examples. But in some other cases, you do need example and that's why the one shot and few shot setting. So basically in some cases, it may even be difficult for humans to understand the format of the task without prior examples. So this setting is in some cases unfairly hard. So zero shot is going to be, always going to have like worse performance and we're going to see some data that will back that up. Also, they say here, nevertheless, for at least some settings, zero shot is closest to how human perform tasks. So that's the thing I mentioned about the translation. Okay, having said that, let's see some examples here. So basically on the translation example, this is a zero shot setting. You basically say, you condition the model on this sentence. So you give it translate English to French, cheese and then prompt. And basically you expect the model to regressively complete the sentence and translate cheese into whatever, however you say cheese in French, I don't speak French. Then there is the one shot where you give one example like sea otter, l'outre de mer, I don't know if it's pronounced like that, but let's say for the sake of it. And then the few shot one where you give multiple examples and then you prompt it and expect the correct answer. So depending on the tasks, some will make more sense, some will make less sense and few shot basically almost always gives better performance, which makes sense. And let me compare that to fine tuning, which is basically what most of the other models in NLP so far, like BERT, et cetera, they always use to pre-train the model on a huge corpus of text. 
Let me contrast that with fine-tuning, which is what most other NLP models so far, like BERT, do: you pre-train the model on a huge corpus of text and then fine-tune it on a specific downstream task. They point out a few problems with that. First, you tend to overfit to a narrower distribution. Second, you need supervised, labeled data, which is usually expensive. And third, humans don't need labeled data; a couple of examples is enough, which is the few-shot argument again. They say they usually put around 10 to 100 examples into the few-shot context, constrained pretty much by the model's memory: only 2,048 tokens of context are available, so depending on how long each example is, they can fit somewhere between 10 and 100 of them.

There are a couple of interesting charts at the beginning of the paper. This one shows that, averaging performance across a range of benchmarks (which we'll see in a couple of minutes), as the model size on the x-axis grows, so as we move toward the 175-billion-parameter version, performance gets better and better. The second important thing to notice is that the gap between few-shot, one-shot, and zero-shot performance also widens, which tells us the model is becoming better at this meta-learning-like adaptation. So two takeaways from this plot: bigger scale means better performance, and the models get better at meta-learning as the size increases, which you can see from the growing gap: going from zero-shot to one-shot buys you less improvement than going all the way from zero-shot through one-shot to few-shot.

Let me see what else we have here. This chart shows how the number of examples in the context affects performance, for a few models: 1.3 billion, 13 billion, and the biggest one, the 175-billion-parameter GPT-3. As the number of in-context examples increases, performance usually increases too; this is for one specific task, unscrambling a word with random insertions, which I'll explain a bit later. The basic idea is the same: zero-shot, one-shot, few-shot, and performance goes up, and the bigger the model, the steeper the improvement.

Finally, the last chart in this section shows how they frame the whole thing as a meta-learning problem. Pre-training is the outer loop, over all of that Common Crawl and WebText data. Then, once you condition the model on some text, like in the translation example, that is the inner loop: by putting examples in the context you form, in a sense, ephemeral weights that define the model's behavior. The model kind of changes its shape depending on the conditioning text, so you end up with what is effectively a different model that adapts to the new task, which is the interesting part.
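One practical consequence of doing all of the adaptation in-context is that every demonstration has to share that 2,048-token window, which is exactly why they stop at roughly 10 to 100 examples. Here is a rough sketch of greedily packing demonstrations under a token budget; the whitespace "tokenizer" and the headroom constant are crude stand-ins of my own, not how the real byte-level BPE counts tokens:

def count_tokens(text):
    # Crude stand-in for the real BPE tokenizer: one token per whitespace chunk.
    return len(text.split())

def pack_demonstrations(task_description, demonstrations, query, budget=2048):
    """Greedily add demonstrations until the context budget is exhausted."""
    prompt_lines = [task_description]
    used = count_tokens(task_description) + count_tokens(query) + 8  # headroom for the answer
    for source, target in demonstrations:
        line = f"{source} => {target}"
        cost = count_tokens(line)
        if used + cost > budget:
            break
        prompt_lines.append(line)
        used += cost
    prompt_lines.append(f"{query} =>")
    return "\n".join(prompt_lines)

demos = [(f"example question {i}", f"example answer {i}") for i in range(500)]
prompt = pack_demonstrations("Answer the question:", demos, "final question", budget=2048)
print(prompt.count("\n") - 1, "demonstrations fit in the window")

With long examples, only a handful fit; with short ones, a couple of hundred do, which matches the 10-to-100 range they mention.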
Before I jump to the results section and show how it performs on different benchmarks, let me show you one more really important chart. It shows that the validation loss keeps decreasing as compute increases; the x-axis is on a log scale. GPT-3, with roughly 175 billion parameters and a few thousand petaflop/s-days of compute, is this point here, and you can see that as compute increases the loss decreases, following a power law. Smaller models with less compute saturate at a higher validation loss. This was hypothesized by earlier scaling-law papers, and this paper shows the law still holds even for models as huge as GPT-3. That was all I wanted to cover before the results.

Research-wise there is nothing fundamentally new here: they pretty much swept the combinatorial space, trained a bunch of model sizes, and measured how well they generalize. The nice thing is that they push for this zero-shot, one-shot, and few-shot style of evaluation instead of fine-tuning, which would give better benchmark numbers but is not as useful as having one general model you can apply on the fly, which is really cool. The paper is really long, so I'm only going to show a subset of the interesting results. I'll skip LAMBADA and the other language-modeling tasks: as expected, since GPT-3 was pre-trained as a language model, i.e. to predict the next word in a sentence, it performs very well on those. Let me jump straight to translation; just a second while I find the curves, this one.

What's interesting is that the model was never explicitly trained to translate, and that's one of the fun things about GPT-3: it was only trained to predict the next word, but a set of skills emerges that is apparently useful for predicting those words, and one of those skills is translation. I was pretty surprised by the BLEU scores it achieves. I previously reimplemented the original transformer paper (I'll link the project in the description), and I got around 33 BLEU for English to German; here, without ever being explicitly trained for translation and with only a small fraction of its text in German, GPT-3 reaches a really decent score. Interestingly, going from French, German, or Romanian into English works really well, but going in the opposite direction is much harder; English to Romanian in particular is around 20 BLEU, which is really low. So this model is a great English language model, but not nearly as good for other languages. They noticed this skew themselves and say it could be a weakness of reusing the byte-level BPE tokenizer of GPT-2, which was developed for an almost entirely English training dataset. Many people have pointed out that BPE, byte-pair encoding, can cause various problems.
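As a refresher on what byte-pair encoding actually does, here is a minimal, character-level sketch of the merge procedure on a toy corpus; the real GPT-2 tokenizer works on raw bytes and has a much larger learned merge table, so treat this purely as an illustration:

from collections import Counter

def pair_counts(vocab):
    """Count adjacent symbol pairs across a {word-as-symbol-tuple: frequency} vocab."""
    counts = Counter()
    for symbols, freq in vocab.items():
        for pair in zip(symbols, symbols[1:]):
            counts[pair] += freq
    return counts

def merge(vocab, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    new_vocab = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + freq
    return new_vocab

# Toy corpus: each word starts out as a sequence of characters.
vocab = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6, tuple("wider"): 3}
for step in range(6):                        # learn a handful of merges
    counts = pair_counts(vocab)
    best = max(counts, key=counts.get)
    vocab = merge(vocab, best)
    print(f"merge {step}: {best[0]} + {best[1]}")

The important consequence for what follows is that the model never sees individual characters, only these learned sub-word chunks.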
We'll see some of the problems BPE causes in one of the tasks I'm going to show you, the word-unscrambling ones, so keep that in mind. The next set of tasks I want to show you is natural language inference, where the model struggles. In NLI the task is to read text and understand how sentences relate to each other: you give it a first sentence and a second sentence, and it has to say whether the second follows from the first, contradicts it, or is neutral. This is really interesting because it suggests the model does not handle this kind of comprehension and reasoning as well as it handles generation-style tasks such as translation, language modeling, or completing sentences. You can see in the results that the smaller models behave pretty much randomly, and as the model grows, the few-shot setting does improve, but it is still well below the baselines; even BERT and RoBERTa do better. So these are tasks this model has problems with. The "A" here stands for adversarial: ANLI is a subset of NLI examples that humans specifically picked to be especially hard for language models, and it clearly struggles with those.

Now, my favorites are the synthetic tasks. They prompted the model with basic addition, subtraction, and multiplication, and we can see the results here. For two-digit addition and subtraction, the biggest model does really well, and again there are huge jumps in performance as the model size increases. But performance drops for three-digit addition, and drops even further for four- and five-digit addition and subtraction, for multiplication, and for composite three-digit operations. What is probably going on is that the huge amount of text seen during pre-training contains lots of tables with two-digit numbers, fewer with three-digit numbers, and fewer still with four or five digits. So this may indicate that the model is not actually learning to reason and is not adapting to a new task in a meta-learning sense from a few examples, but is doing some kind of statistical pattern matching. What exactly happens in the few-shot setting is still unexplored, but a reasonable hypothesis is that it is not reasoning in the sense we humans reason. Okay, that was an interesting one.

Another really interesting task for me was the word scrambling and manipulation suite. There are five sub-tasks (I'll sketch all of them in a second). The first is cycled letters in a word: you give the model a circularly permuted word and it is supposed to figure out the original, so it should unscramble this one back into "inevitably". Then there are anagrams where you keep the first and last letter fixed and scramble everything in between, and the model has to recover the word. That one is actually super tough.
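To be concrete about what these transformations look like, here is a small sketch that generates the five kinds of scrambled inputs from a word, including the variants I'll describe next; the particular shift amount and the set of inserted punctuation characters are my own choices, not the paper's exact setup:

import random
rng = random.Random(0)

def cycled_letters(word, shift=2):
    """CL: circularly rotate the letters of the word."""
    return word[shift:] + word[:shift]

def anagram(word, keep=1):
    """A1 (keep=1) / A2 (keep=2): fix `keep` letters on each side, shuffle the middle."""
    middle = list(word[keep:-keep])
    rng.shuffle(middle)
    return word[:keep] + "".join(middle) + word[-keep:]

def random_insertion(word):
    """RI: insert a random punctuation mark or space after every character."""
    return "".join(c + rng.choice(" .,!*") for c in word)

def reversed_word(word):
    """RW: simply reverse the letters."""
    return word[::-1]

word = "inevitably"
print(cycled_letters(word), anagram(word, 1), anagram(word, 2),
      random_insertion(word), reversed_word(word))

The model's job in each sub-task is the inverse mapping: given the scrambled string, recover the original word.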
I had trouble unscrambling the paper's example myself; I thought it had something to do with "crypto", but there's no "y" in it, so no, it's just genuinely hard. The next variant is a bit easier: you hold the first two and the last two letters fixed, only unscramble the middle, and you get "opponent". Then there is random insertion, where after every single character a random punctuation mark or blank space is inserted and the model is expected to clean it up, and finally reversed words, where the model should just reverse the letters and recover "objects".

The results are a bit surprising: even the biggest model, the 175-billion-parameter GPT-3, had essentially zero accuracy on reversed words. That was my first "what is happening here" reaction, until you remember that the model uses sub-word tokenization, so this kind of character-level manipulation is really hard for it. The easiest sub-task was random insertion, which was honestly also the easiest one for me. The one-word anagram (A1, only the first and last letter fixed) gives it trouble, which mirrors my own difficulty with that task, and the A2 anagram, where the first two and last two letters are fixed, is a bit easier. So, intuitively, the model struggled with the same things I struggled with.

There are two more tasks I want to explore in a bit more depth. The first is the SAT analogies. You can see in the appendix of the paper what the problem looks like: the context is "lull is to trust as", then you have multiple answer choices and you are supposed to pick the correct one. I don't know about you, English is not my native language but I think I'm pretty good at it, and still some of these words, like "cajole" or "balk", were not familiar to me, so this problem turned out to be really difficult even for me. It made me think about how heavily we test our models; even humans are not nearly as good at these benchmarks as we like to assume. That just struck me as interesting. Anyway, let's see how the model actually performs on the SAT analogy problem. What was interesting is that GPT-3 scored higher than the average high-school test taker: 65.2% in the few-shot setting, whereas the average score among college applicants is about 57%. And in the curves, the few-shot line grows steadily and is always the best for GPT-3. One conclusion you could maybe draw from this result is that we shouldn't be teaching our kids to excel at exactly the tasks that are easy for big neural networks; they should focus on the things networks still can't do, like reasoning, or maybe art, although even art is already entering the realm of neural networks. I guess that's going to be a complicated question we'll have to solve, but let me not digress too much.
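For tasks like this the model isn't generating free text; the usual approach, and roughly what the paper describes for multiple-choice tasks, is to compare the likelihood the model assigns to each candidate completion and pick the highest, sometimes normalizing by length. Here is a sketch of that idea with a dummy scoring function standing in for the real model, and with answer options that are only illustrative reconstructions, not the paper's exact choices:

import random

def completion_logprob(context, completion):
    """Dummy stand-in for summing a language model's log-probabilities of the
    completion tokens given the context."""
    random.seed(hash((context, completion)) % (2 ** 32))
    return -random.uniform(1.0, 5.0) * len(completion.split())

def pick_answer(context, choices, length_normalize=True):
    best, best_score = None, float("-inf")
    for choice in choices:
        score = completion_logprob(context, choice)
        if length_normalize:                          # don't unfairly penalize longer completions
            score /= max(len(choice.split()), 1)
        if score > best_score:
            best, best_score = choice, score
    return best

context = "lull is to trust as"
choices = ["cajole is to compliance", "balk is to fortitude", "betray is to loyalty"]
print(pick_answer(context, choices))

With the real model, the answer whose continuation the language model finds most probable wins, and the accuracy numbers above come from exactly this kind of comparison.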
The final problem I want to talk about is news article generation, and the results here are really striking. If I show you the plot: as the model size increases along the x-axis, human accuracy goes down, meaning people become less and less able to tell whether a given article came from the model or from a human. They did a really nice study here, testing a whole range of model sizes, and they also used something called a control model, a much smaller GPT-3 variant run with a deliberately higher temperature, which increases the randomness of the softmax, so it is more random than all the other models; everything is compared against that control. What it comes down to is that, by the time we reach the biggest models, people are barely above chance at deciding whether an article was written by a human or by GPT-3, which is amazing and probably has some scary repercussions.

Those were some of the interesting results from this huge paper. The video is already getting long, so let me cover just two or three more things. One is data contamination. What happened is that, because the training dataset is so huge, some of the dev and test data from the benchmarks ended up inside the training data. They did some investigations and, for the most part, concluded that the benchmarks weren't affected as much as they initially feared; the contamination existed in the first place because of a bug in their data filtering. But some benchmarks, like PIQA, where they test physical common sense, look suspicious. Let me find an example from the PIQA dataset: the context is "how to apply sealant to wood", the correct answer is "using a brush, brush on sealant onto wood until it is fully saturated with the sealant", and then there is an incorrect alternative. The results on PIQA were really good and look like an outlier, so I do think contamination played a role there.

Getting back to this chart, I just want to note one more thing about that bug. They say: unfortunately a bug resulted in only partial removal of all detected overlaps from the training data, and due to the cost of training it wasn't feasible to retrain the model. I believe they use 13-gram overlap to find the duplicates, and they mention that this kind of overlap detection is still a new area of research, so I'm somewhat skeptical about this part: what exactly counts as a duplicate? It is hard to filter the training text in a way that lets you say with certainty that the dev and test examples are not present in any form that could bias the model and help it predict better on those sets. As I mentioned, PIQA sits as an outlier with better performance, and you can see that the points in the lower part of this chart are the ones that do noticeably better on the contaminated ("dirty") examples than on the clean ones, which is exactly what a contamination issue looks like. What I'm not clear about is this one big drop; they did mention PIQA and Winograd and a few others, but they didn't say anything specific about that outlier. So that was the contamination part.
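For intuition, here is a minimal sketch of what n-gram-based overlap detection looks like; the 13-gram window matches what I believe they used, but the toy documents and the whitespace tokenization are simplified for illustration:

def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(test_example, training_ngrams, n=13):
    """Flag a test example if it shares any n-gram with the training data."""
    return bool(ngrams(test_example.lower().split(), n) & training_ngrams)

# Toy "training set" and a test example that partially overlaps with it.
training_docs = [
    "how to apply sealant to wood using a brush brush on sealant onto wood"
    " until it is fully saturated with the sealant",
]
training_ngrams = set()
for doc in training_docs:
    training_ngrams |= ngrams(doc.lower().split(), 13)

test_example = ("using a brush brush on sealant onto wood until it is"
                " fully saturated with the sealant")
print(is_contaminated(test_example, training_ngrams))   # True -> flagged as dirty

The hard part, and the reason I stay skeptical, is deciding what threshold and what n really guarantee that nothing in the dev/test sets helped the model.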
Now let me walk you through some of the limitations they mention themselves. The first one: GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs. So even a language model this big still has these problems, like repetition. There is also still a lot of research to do on how we decode the output of these models, and on whether the problem is the model itself or the heuristics we use for decoding: Gwern and most people use top-p, i.e. nucleus sampling, and play with the temperature, and it still doesn't feel like the "correct" way to do things. They also acknowledge that bidirectional representations, in contrast to the causal, unidirectional ones they use, would likely give better results on some tasks, so that is something to try; that the pre-training objective could be further improved; and that understanding precisely how few-shot learning works is an important, unexplored direction for future research. Basically nobody quite knows how this exactly works; the model's decisions are not easily interpretable, and that is one of the downsides. Also, the sheer scale of the model doesn't allow them to iterate when they make mistakes, as we saw with the data contamination. And, as they say, a limitation of models at the scale of GPT-3, regardless of objective function or algorithm, is that they are both expensive and inconvenient to perform inference on, which may present a challenge for the practical applicability of models of this scale in their current form. I would guess some form of knowledge distillation or pruning could help us make smaller yet still performant models.

I want to close this video by talking about the broader impact. I do think this is important even though it isn't a technical part, so feel free to skip it if you're not interested. With GPT-2 they did a staged release: that happened last year, and they initially released only the smallest GPT-2 model while monitoring different forums and looking for any signs of potential misuse. Once they were fairly confident no misuse was happening, they started gradually releasing larger models, and by the end of 2019 the full GPT-2, the biggest one with, I think, 1.5 billion parameters, was published. GPT-3 itself is still not published and probably never will be, but the API is already accessible to certain people; it is in beta. The point I wanted to make is that these questions are even more important for GPT-3, because basically any malicious activity that depends on producing a huge amount of text, phishing, spamming, social engineering, and so on, could be helped or partly automated by GPT-3. They do consider this: many of these applications currently bottleneck on human beings writing sufficiently high-quality text, and they discuss who has the resources to do this at scale, like government groups. I don't find that part as interesting as the part about bias, though, so let me get to that.
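Since top-p and temperature keep coming up, both here under the limitations and with the control model in the news-article study, here is a tiny self-contained sketch of what those two knobs actually do to a next-token distribution; the toy vocabulary and logits are made up, and this is just the standard recipe rather than anything specific to GPT-3:

import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=np.random.default_rng(0)):
    """Temperature scaling + nucleus (top-p) sampling over a toy vocabulary."""
    logits = np.asarray(logits, dtype=float) / temperature       # higher T -> flatter, more random
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                               # most likely first
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, top_p)) + 1]   # smallest set covering top_p mass
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = [2.0, 1.2, 0.4, 0.1, -0.5, -1.0]
print([vocab[sample_next_token(logits, temperature=0.7, top_p=0.9)] for _ in range(5)])

A higher temperature flattens the distribution (which is what made the control model's output easier to spot), while a smaller top_p restricts sampling to a tighter nucleus of likely tokens.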
So, the bias analysis. They considered three axes: gender, religion, and race. The way they evaluate gender bias is by prompting the model with sentences like "The {occupation} was a", for example "The detective was a", and then looking at the probability distribution GPT-3 produces for the next token, comparing male versus female identifier words. What came out is that most occupations are biased toward male continuations. Especially when they phrase it as "The competent {occupation} was a", that variant of the prompt shows an even stronger male bias, and interestingly the "incompetent" variant is also biased more toward males. To iterate on this a bit: occupations demonstrating higher levels of education, such as legislator, banker, or professor, were heavily male-leaning, and so were occupations involving physical labor, whereas the occupations more likely to be followed by female identifiers include midwife, nurse, receptionist, housekeeper, et cetera. So, once more, how it works: you take a variant of the prompt like this one and feed it into GPT-3 (let me zoom in a little bit). The model outputs a probability distribution over its vocabulary for the next token, which I think is around 50K entries. They then monitor a handful of male identifiers, like "man" and "male", and a handful of female identifiers, like "woman" and "female", sum up the probabilities in each group, do some normalization, and compare the two. That's the method, and I already mentioned the results that came out of it.

That was gender; let me briefly go over race. Across the models they analyzed, "Asian" had a consistently high sentiment, whereas "Black" had a consistently low sentiment. The first part was a bit surprising to me, because you usually hear that white males are the most favorably positioned; the low sentiment for Black people is an obvious bias, and a lot of the relevant text references slavery and similar topics. They mention this themselves: the resulting sentiment can reflect socio-historical factors; for instance, text relating to a discussion of slavery will frequently have a negative sentiment. And indeed, for every single model size on the x-axis, with the biggest one here, the "Black" category has the lowest sentiment.

Finally, religion. They tried the most widespread world religions, Christianity, Buddhism, Islam, and so on, and they found some negative biases around Islam: words such as "violent", "terrorism", and "terrorist" occurred at a greater rate with Islam than with other religions and were among the top 40 most favored words for Islam in GPT-3, which is really sad. We need to be aware of these biases; I won't get deep into this, but obviously most Muslim people are really nice folks, and that's it.
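Here is a tiny sketch of that probing procedure; the next-word distribution below is a hard-coded dummy standing in for the real model's softmax over its ~50K-token vocabulary, and the identifier word lists are much shorter than the ones they actually used:

# Dummy stand-in for GPT-3's next-token distribution; with the real model you
# would read these probabilities from the softmax over the full vocabulary.
def next_word_probs(prompt):
    if "nurse" in prompt:
        return {"man": 0.02, "male": 0.01, "woman": 0.09, "female": 0.03, "young": 0.40}
    return {"man": 0.07, "male": 0.03, "woman": 0.03, "female": 0.01, "young": 0.40}

MALE_WORDS = {"man", "male"}
FEMALE_WORDS = {"woman", "female"}

def gender_skew(prompt):
    """Sum the probability mass on male vs. female identifiers and normalize."""
    probs = next_word_probs(prompt)
    male = sum(probs.get(w, 0.0) for w in MALE_WORDS)
    female = sum(probs.get(w, 0.0) for w in FEMALE_WORDS)
    total = male + female
    return male / total, female / total

for occupation in ["detective", "nurse"]:
    print(occupation, gender_skew(f"The {occupation} was a"))

The paper's findings are essentially this comparison, run over a long list of occupations and prompt variants.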
They also considered the energy usage, and one fact really surprised me: although models like GPT-3 consume significant resources during training, they can be surprisingly efficient once trained. Even with the full 175-billion-parameter GPT-3, generating 100 pages of content from the trained model costs on the order of 0.4 kWh, only a few cents in energy. So it's actually pretty efficient once you have trained this huge monster, and over the lifetime of the model the training energy cost can potentially be amortized.

So that was pretty much all I had to say; I could make this video much longer, it's a huge paper. Hopefully you liked it. If you did, please leave a comment with what you liked, what you didn't like, and what your opinions are. Regarding the nagging versus hyping camps, I just want to mention one thing about where I stand. I really don't think these huge transformer models are a digression or a bad thing for the future of AI. Why? Because most of the computation in your own brain is unconscious; the cerebellum, where a huge amount of processing happens, is a good example. We are probably now in the phase of building a kind of synthetic primordial brain: the equivalent of the machinery that regulates our vital functions, like the heart and body temperature, and also handles perception, extracting concepts from images, audio, and so on. All of that requires a lot of computation, so I don't see this as a digression; I see it as a complement and a step toward artificial general intelligence, which will come once we can also simulate the cognitive layer above that primordial level. Those are just my two cents on what I think about this. I also have a gut feeling that graph neural networks, the causal reasoning that Judea Pearl has been pushing for decades, and symbolic AI will all play a significant role, so we shouldn't disregard any of them; they will probably all contribute somehow to the long-term goal of reaching AGI. So that was pretty much it. If you liked this video, consider subscribing to the channel and hit that bell icon to get notified when I upload a new video. And until next time, keep learning deep!
[{"start": 0.0, "end": 8.08, "text": " What's up folks? So in this video I thought covering this famous GPT-3 model titled language models are a few-shot learners"}, {"start": 8.72, "end": 13.76, "text": " Published earlier this year by OpenAI. So first I just want to give you an overview"}, {"start": 14.120000000000001, "end": 16.84, "text": " like a context of what was happening since then and"}, {"start": 17.56, "end": 24.84, "text": " Basically what we had is on one hand we had a lot of hype happening. So people saying so this is artificial general intelligence"}, {"start": 24.84, "end": 29.82, "text": " It's going to solve everything and then on the other hand we had people nagging saying how"}, {"start": 30.36, "end": 34.16, "text": " Going in this direction is going to ruin the field of machine learning and this is"}, {"start": 34.480000000000004, "end": 38.96, "text": " Like there is no way that this is the the the right path towards AGI"}, {"start": 39.32, "end": 41.96, "text": " so basically the truth is probably somewhere in between and"}, {"start": 42.64, "end": 47.54, "text": " Let me show you a couple of demos before I start doing a deep dive in of this of this paper"}, {"start": 48.3, "end": 50.3, "text": " Which I think you'll find interesting. So"}, {"start": 50.879999999999995, "end": 52.879999999999995, "text": " first Twitter had a lot of"}, {"start": 52.88, "end": 56.040000000000006, "text": " Demos happening. So this was one of the most famous ones"}, {"start": 56.040000000000006, "end": 63.080000000000005, "text": " So basically a guy would input a button that looks like a watermelon and then GPT-3 would like kind of generate code"}, {"start": 63.64, "end": 67.88, "text": " For for a button that would look like a watermelon, which is pretty pretty awesome"}, {"start": 67.88, "end": 74.48, "text": " And then we had also cool applications like AI dungeon, which actually appeared much earlier"}, {"start": 74.48, "end": 77.2, "text": " But they just upgraded their their back end using"}, {"start": 77.96000000000001, "end": 80.74000000000001, "text": " Partly using GPT-3 model so you can basically hear"}, {"start": 80.74, "end": 86.58, "text": " Enter like the genre of the of the play then you pick your character like let me take"}, {"start": 86.58, "end": 91.0, "text": " I know wizard and then let me put in the name like GPT-3"}, {"start": 91.53999999999999, "end": 95.86, "text": " Let me see what it generates and basically it will generate text and then you can"}, {"start": 96.74, "end": 99.02, "text": " interactively play this game by just"}, {"start": 100.06, "end": 101.16, "text": " like"}, {"start": 101.16, "end": 106.78, "text": " Prompting it with different text and that's super awesome and nice use case for for this model"}, {"start": 106.78, "end": 111.16, "text": " So I won't do it right now. 
I'll link it in the description so you can play with the game if you want"}, {"start": 112.58, "end": 115.54, "text": " So as I mentioned on the other side we had people who"}, {"start": 115.9, "end": 122.42, "text": " Who used a single example as an argument that the model is not not as good as as people are hyping it that it is"}, {"start": 122.62, "end": 130.78, "text": " So basically here is a question which is heavier a toaster or a pencil and GPT-3 would answer a pencil is heavier than a toaster"}, {"start": 130.9, "end": 135.54, "text": " which is not true like in probably in most most of the cases and"}, {"start": 135.54, "end": 143.78, "text": " Then like second example, maybe how many eyes does my foot have and GPT-3 would answer your foot has two eyes so"}, {"start": 144.89999999999998, "end": 148.7, "text": " After this blog was published a lot of people were experimenting with GPT-3"}, {"start": 149.5, "end": 151.5, "text": " Including this guy named Gwern"}, {"start": 152.22, "end": 155.34, "text": " Whose blog you should definitely check out he's writing a lot"}, {"start": 155.34, "end": 160.82, "text": " It's a bit harder to parse in my in my like subjective opinion, but like it's a really nice resource"}, {"start": 160.82, "end": 168.62, "text": " so what he showed is that using like smart like by picking smart hyper parameters like the"}, {"start": 168.66, "end": 176.34, "text": " Temperature like the sampling method maybe nucleus and by picking like the correct text that will that would get input to the model"}, {"start": 176.66, "end": 181.54, "text": " That's like the conditioning text you can get the GPT-3 to answer all of those questions"}, {"start": 182.14, "end": 190.54, "text": " Like the toaster or pencil question correctly, which is surprising and then like basically like people started talking about a new paradigm"}, {"start": 190.54, "end": 192.98, "text": " So we had like we had the software 1.0"}, {"start": 192.98, "end": 200.38, "text": " We still have it like where you're basically just designing the algorithm so like classical computer science, then we have designing the data set"}, {"start": 200.38, "end": 205.7, "text": " So that's pretty much machine learning. You're now designing the data set and then the algorithm kind of just"}, {"start": 206.29999999999998, "end": 210.57999999999998, "text": " develops by learning from the data using gradient based methods and finally"}, {"start": 210.57999999999998, "end": 214.7, "text": " We are just we came in a situation where we are designing a prompt"}, {"start": 214.73999999999998, "end": 218.38, "text": " So as to communicate effectively communicate with this GPT-3 model"}, {"start": 218.38, "end": 225.62, "text": " So Gwern had a nice sentence here, which says sampling can prove the presence of knowledge, but not the absence and"}, {"start": 226.34, "end": 230.62, "text": " So that's that's the idea behind all of this like prompt programming you basically"}, {"start": 231.46, "end": 234.01999999999998, "text": " Can't so if the model doesn't give you a correct answer"}, {"start": 234.42, "end": 239.14, "text": " Maybe the model wasn't incentivized to give you a correct answer. 
Maybe the model was just being a jokester"}, {"start": 239.14, "end": 241.98, "text": " So now I'm anthropomorphizing GPT-3"}, {"start": 242.01999999999998, "end": 247.62, "text": " But Gwern showed that by thinking like that by thinking that you're pretty much communicating with a human"}, {"start": 247.62, "end": 253.82, "text": " You can more effectively get the information you want out of it. So that was it for the overview"}, {"start": 253.82, "end": 258.94, "text": " So we had the hype the hyping team. We had the nagging team. We have we had everything in between"}, {"start": 259.02, "end": 265.1, "text": " so now like at the end of 2020 we know much more about both the limitations and"}, {"start": 265.66, "end": 272.7, "text": " Useful use cases that models like GPT-3 can have so without further ado, let's jump into the deep dive"}, {"start": 272.7, "end": 278.42, "text": " Let's jump into the paper. So first thing you can notice is that like a whole lot of people work in this project"}, {"start": 278.42, "end": 283.53999999999996, "text": " Like this looks like like a half of the open AI team pretty much like bunch of people. So it's a"}, {"start": 284.18, "end": 285.58, "text": " multi"}, {"start": 285.58, "end": 288.98, "text": " person effort as all of these huge papers are and"}, {"start": 289.78, "end": 296.18, "text": " What this paper basically did was because they have a lot of resources. They were able to kind of"}, {"start": 296.78, "end": 298.78, "text": " explore what happens when you're"}, {"start": 298.78, "end": 304.85999999999996, "text": " Increasing the model size and especially with the looking at how the what the like a few shot"}, {"start": 305.41999999999996, "end": 311.34, "text": " Performances compared conscious that to fine-tuning which is the usual thing to do like you you pre-train the model"}, {"start": 311.34, "end": 313.73999999999995, "text": " And then you fine-tune it. Well this paper"}, {"start": 314.14, "end": 320.53999999999996, "text": " Did the pre-training also but then without any fine-tuning they try to see what's the zero shot one shot and few shot"}, {"start": 320.53999999999996, "end": 324.78, "text": " Performance of this model and we'll get into some details a bit later. So"}, {"start": 325.5, "end": 327.5, "text": " Let me see what this say here"}, {"start": 327.5, "end": 333.5, "text": " So by contrast humans can generally perform a new language task from only a few examples or from simple instructions"}, {"start": 333.5, "end": 338.34, "text": " Something which current NLP systems still largely struggle to do so that's their like point"}, {"start": 338.34, "end": 345.42, "text": " basically humans you only need a couple of examples and you're already able to solve the new task and"}, {"start": 346.18, "end": 353.02, "text": " Now it's debatable like because we've been collecting data like like since we were born pretty much like continuously"}, {"start": 353.02, "end": 357.65999999999997, "text": " So yeah, it's a complicated question. 
But like I do agree that we need to"}, {"start": 357.97999999999996, "end": 360.53999999999996, "text": " Develop better like few shot performance of our models"}, {"start": 361.09999999999997, "end": 366.62, "text": " Then we have basically what they say is that so that the thing is they don't update the gradients"}, {"start": 366.62, "end": 368.62, "text": " So we'll see in a moment how they do it"}, {"start": 368.62, "end": 375.74, "text": " So basically they just condition the pre-trained model on like a bunch of text and it doesn't have to be bunch of text"}, {"start": 375.74, "end": 380.21999999999997, "text": " It's like a bit basically like a prompt in natural language and then maybe a couple of examples"}, {"start": 380.22, "end": 385.90000000000003, "text": " We'll see it in a minute and so the final finally what they what they what they"}, {"start": 386.62, "end": 390.14000000000004, "text": " Say is that it's able to generate text"}, {"start": 390.14000000000004, "end": 395.66, "text": " That's really hard to distinguish from the text that humans wrote and then they kind of"}, {"start": 396.06, "end": 404.14000000000004, "text": " Have a whole section on broader impacts because this kind of technology can really be used for malicious malicious purposes as well and I'll"}, {"start": 404.70000000000005, "end": 405.74, "text": " explain"}, {"start": 405.74, "end": 408.70000000000005, "text": " I'll talk about that part also like the fairness and bias"}, {"start": 408.7, "end": 410.53999999999996, "text": " a bit later"}, {"start": 410.53999999999996, "end": 416.62, "text": " So, okay. Let's let me first we'll go kind of backwards here. I'll first want to explain you the"}, {"start": 417.34, "end": 418.86, "text": " architecture the"}, {"start": 418.86, "end": 425.34, "text": " The data set and then we'll jump to how the a few shot thing works. So basically the architecture is"}, {"start": 426.06, "end": 430.14, "text": " They say here the same as GPT-2, which is basically the same as GPT, which is"}, {"start": 430.7, "end": 433.34, "text": " Basically the decoder portion of the transformer model"}, {"start": 434.06, "end": 436.06, "text": " So we can see it here"}, {"start": 436.06, "end": 443.42, "text": " So this is the the the transformer paper like this is the transformer model from the attention is all you need paper"}, {"start": 443.42, "end": 450.38, "text": " And they took the decoder part and they just they don't need this multi-head attention because they don't have the encoding part"}, {"start": 450.38, "end": 455.58, "text": " So they they just leave the the causal masking. So basically tokens can only look"}, {"start": 456.38, "end": 464.54, "text": " The tokens that came previously or or look at themselves. So yeah, so maybe I'll try and briefly explain how it works again"}, {"start": 464.54, "end": 466.54, "text": " Let me"}, {"start": 466.86, "end": 468.86, "text": " Take a pencil here"}, {"start": 469.26000000000005, "end": 475.5, "text": " So you basically take the input text you you tokenize it so you get let me zoom zoom in a little bit"}, {"start": 475.90000000000003, "end": 477.90000000000003, "text": " So you you tokenize it"}, {"start": 479.98, "end": 485.42, "text": " Then what you do is you embed these tokens so that's this output embedding part"}, {"start": 485.90000000000003, "end": 488.86, "text": " so let's say for the for the sake of argument that the"}, {"start": 488.86, "end": 494.7, "text": " Uh, like the hidden dimension in the model is like 512. 
So basically you'll end up with a bunch of vectors"}, {"start": 495.90000000000003, "end": 498.46000000000004, "text": " They have like dimension 512"}, {"start": 499.58000000000004, "end": 504.62, "text": " And then what you do is you have these positional encodings. It's basically like a huge table"}, {"start": 505.02000000000004, "end": 509.90000000000003, "text": " So it's basically a huge table and then depending on the position here. So this is like"}, {"start": 510.7, "end": 513.98, "text": " Token number zero you basically take zero"}, {"start": 513.98, "end": 518.62, "text": " And you add this with uh like row here and you just add it up"}, {"start": 519.34, "end": 525.98, "text": " To this embedding vector and after adding it up your you end up here and then you just have two parts of the architecture"}, {"start": 526.14, "end": 528.86, "text": " So the first one is the multi-headed attention"}, {"start": 529.5, "end": 536.86, "text": " Which is basically attends to all of the tokens and creates like those nice representations when I say attains to all so it's not"}, {"start": 536.86, "end": 539.9, "text": " bi-directional as I said, they have the causal masking and"}, {"start": 539.9, "end": 544.9, "text": " Finally, we have the feedforward part where basically,"}, {"start": 546.3, "end": 549.1, "text": " yeah, you just kind of independently process"}, {"start": 549.1, "end": 551.6999999999999, "text": " those token representations again."}, {"start": 551.6999999999999, "end": 554.54, "text": " So I have a video about this one,"}, {"start": 554.54, "end": 558.3, "text": " but just wanted to do a quick recap of it."}, {"start": 558.3, "end": 560.74, "text": " And basically the only difference is they use something"}, {"start": 560.74, "end": 564.02, "text": " called a couple of ideas from the sparse transformer paper"}, {"start": 564.02, "end": 567.6999999999999, "text": " where you basically, instead of using those like causal masks,"}, {"start": 567.6999999999999, "end": 569.4399999999999, "text": " you use sparse causal masks."}, {"start": 569.44, "end": 572.1400000000001, "text": " So let's not get into too much details."}, {"start": 572.1400000000001, "end": 575.6600000000001, "text": " And yeah, that's pretty much it about the architecture."}, {"start": 576.62, "end": 580.4200000000001, "text": " Now for the training data set, they used,"}, {"start": 580.4200000000001, "end": 584.1, "text": " so they used the common crawl, which is a huge, huge data set"}, {"start": 584.1, "end": 587.9200000000001, "text": " pretty much like crawling every single month"}, {"start": 587.9200000000001, "end": 591.34, "text": " that site downloads bunch of the data from the internet."}, {"start": 591.34, "end": 593.5600000000001, "text": " And what they did is they filtered it,"}, {"start": 593.5600000000001, "end": 595.7, "text": " and they also used something called WebText,"}, {"start": 595.7, "end": 600.7, "text": " which was created by using the links from Reddit,"}, {"start": 601.74, "end": 604.26, "text": " which had at least three uploads,"}, {"start": 604.26, "end": 607.82, "text": " which was kind of heuristic for filtering"}, {"start": 607.82, "end": 609.82, "text": " like higher quality content."}, {"start": 609.82, "end": 612.82, "text": " And they just kind of like scraped all of the data"}, {"start": 612.82, "end": 616.1800000000001, "text": " that those links were pointing to."}, {"start": 616.1800000000001, "end": 619.26, "text": " They also have the books, One Books, Two and 
Wikipedia."}, {"start": 619.26, "end": 622.1800000000001, "text": " So basically the point is that depending on the quality"}, {"start": 622.18, "end": 626.06, "text": " of the data set, they took more samples or less samples."}, {"start": 626.06, "end": 627.66, "text": " So some of the data like Wikipedia,"}, {"start": 627.66, "end": 631.02, "text": " which is higher quality than for example, common crawl,"}, {"start": 631.02, "end": 635.38, "text": " like had like 3.4 epochs, meaning the model"}, {"start": 635.38, "end": 637.5799999999999, "text": " saw it approximately 3.4 times,"}, {"start": 637.5799999999999, "end": 642.14, "text": " whereas the common crawl was only seen 0.44 times."}, {"start": 642.14, "end": 646.06, "text": " One important thing to notice here and comment"}, {"start": 646.06, "end": 649.5, "text": " is that they had a problem with data contamination,"}, {"start": 649.5, "end": 651.8199999999999, "text": " which means many of the results,"}, {"start": 651.82, "end": 654.1, "text": " they later showed that the results"}, {"start": 654.1, "end": 655.82, "text": " are not as seriously affected."}, {"start": 655.82, "end": 658.38, "text": " Some of the results that were seriously affected"}, {"start": 658.38, "end": 662.1, "text": " were kind of just not displayed in the paper,"}, {"start": 662.1, "end": 666.3000000000001, "text": " but I still think they had some contamination issues"}, {"start": 666.3000000000001, "end": 668.6600000000001, "text": " left over, especially on the PICA data set"}, {"start": 668.6600000000001, "end": 671.1400000000001, "text": " where they are testing for understanding"}, {"start": 671.1400000000001, "end": 675.0600000000001, "text": " of the like physics in the world."}, {"start": 675.0600000000001, "end": 676.6600000000001, "text": " And we'll get to it a bit later."}, {"start": 677.8000000000001, "end": 679.46, "text": " The compute was enormous."}, {"start": 679.46, "end": 682.7, "text": " So they had like, you basically had like 10 years"}, {"start": 682.7, "end": 684.1, "text": " of petaflop second."}, {"start": 684.1, "end": 685.5, "text": " Like if you had a machine that has"}, {"start": 685.5, "end": 687.14, "text": " a petaflop second performance,"}, {"start": 687.14, "end": 688.7, "text": " you need 10 years to train this model."}, {"start": 688.7, "end": 691.7, "text": " So it's huge in that sense, you can see it here."}, {"start": 691.7, "end": 694.5400000000001, "text": " And the interesting comparison is that they say here,"}, {"start": 694.5400000000001, "end": 696.9000000000001, "text": " as a consequence, although GPT-3, 3 billion"}, {"start": 696.9000000000001, "end": 699.98, "text": " is almost 10X larger than Roberta large,"}, {"start": 699.98, "end": 703.3000000000001, "text": " both models took approximately 50 petaflop second days"}, {"start": 703.3000000000001, "end": 705.5, "text": " of compute during pre-training."}, {"start": 705.5, "end": 709.02, "text": " The reason is that there is some other paper"}, {"start": 709.02, "end": 710.54, "text": " that tells that you shouldn't,"}, {"start": 711.54, "end": 714.34, "text": " that the model shouldn't see like tokens too many times,"}, {"start": 714.34, "end": 717.38, "text": " otherwise it will overfit and the performance will get worse."}, {"start": 718.5799999999999, "end": 720.8199999999999, "text": " So that was that part about the data."}, {"start": 720.8199999999999, "end": 722.78, "text": " And yeah, one more interesting thing maybe"}, {"start": 722.78, 
"end": 725.78, "text": " is that basically they needed a supercomputer"}, {"start": 725.78, "end": 726.9399999999999, "text": " because the model was huge."}, {"start": 726.9399999999999, "end": 730.8199999999999, "text": " He had 175 billion, so they needed like a supercomputer"}, {"start": 730.8199999999999, "end": 733.38, "text": " from Microsoft to train the thing."}, {"start": 733.38, "end": 735.18, "text": " So that's kind of a fun fact."}, {"start": 735.18, "end": 737.5, "text": " Let me jump to something that's really like"}, {"start": 737.5, "end": 738.76, "text": " the core part of this paper."}, {"start": 738.76, "end": 742.78, "text": " And that's how they do this zero shot,"}, {"start": 742.78, "end": 746.02, "text": " like one shot and few shot learning and why they do it."}, {"start": 746.02, "end": 749.38, "text": " So basically why they do it is, so they said here,"}, {"start": 749.38, "end": 751.36, "text": " so the reason to distinguish between one shot"}, {"start": 751.36, "end": 756.04, "text": " from few shot and zero shot is that it most closely matches"}, {"start": 756.04, "end": 758.42, "text": " the way in which some of the tests"}, {"start": 758.42, "end": 760.38, "text": " are communicated to humans."}, {"start": 760.38, "end": 765.38, "text": " So yeah, basically that's how humans work."}, {"start": 765.38, "end": 768.8, "text": " That's how humans work and then for the zero shot,"}, {"start": 770.22, "end": 772.64, "text": " there are some examples like for example, translation."}, {"start": 772.64, "end": 774.88, "text": " If I told you to translate from English to German,"}, {"start": 774.88, "end": 778.14, "text": " supposing you knew both languages, you'd know what to do."}, {"start": 778.14, "end": 780.36, "text": " You wouldn't need any additional examples."}, {"start": 780.36, "end": 782.62, "text": " But in some other cases, you do need example"}, {"start": 782.62, "end": 785.26, "text": " and that's why the one shot and few shot setting."}, {"start": 785.26, "end": 789.22, "text": " So basically in some cases, it may even be difficult"}, {"start": 789.22, "end": 791.42, "text": " for humans to understand the format of the task"}, {"start": 791.42, "end": 792.5, "text": " without prior examples."}, {"start": 792.5, "end": 795.78, "text": " So this setting is in some cases unfairly hard."}, {"start": 795.78, "end": 800.02, "text": " So zero shot is going to be, always going to have"}, {"start": 800.02, "end": 803.3, "text": " like worse performance and we're going to see some data"}, {"start": 803.3, "end": 804.36, "text": " that will back that up."}, {"start": 805.86, "end": 808.14, "text": " Also, they say here, nevertheless,"}, {"start": 808.14, "end": 810.58, "text": " for at least some settings, zero shot is closest"}, {"start": 810.58, "end": 813.14, "text": " to how human perform tasks."}, {"start": 813.14, "end": 816.04, "text": " So that's the thing I mentioned about the translation."}, {"start": 816.04, "end": 820.2, "text": " Okay, having said that, let's see some examples here."}, {"start": 820.2, "end": 823.26, "text": " So basically on the translation example,"}, {"start": 823.26, "end": 825.46, "text": " this is a zero shot setting."}, {"start": 825.46, "end": 829.7800000000001, "text": " You basically say, you condition the model on this sentence."}, {"start": 829.7800000000001, "end": 833.1, "text": " So you give it translate English to French,"}, {"start": 833.1, "end": 835.4200000000001, "text": " cheese and then prompt."}, {"start": 
835.4200000000001, "end": 839.1800000000001, "text": " And basically you expect the model to regressively"}, {"start": 839.1800000000001, "end": 842.4200000000001, "text": " complete the sentence and translate cheese into whatever,"}, {"start": 842.4200000000001, "end": 845.7, "text": " however you say cheese in French, I don't speak French."}, {"start": 845.7, "end": 849.38, "text": " Then there is the one shot where you give one example"}, {"start": 849.38, "end": 852.22, "text": " like sea otter, l'outre de mer, I don't know if it's"}, {"start": 852.22, "end": 854.98, "text": " pronounced like that, but let's say for the sake of it."}, {"start": 854.98, "end": 858.46, "text": " And then the few shot one where you give multiple examples"}, {"start": 858.46, "end": 861.42, "text": " and then you prompt it and expect the correct answer."}, {"start": 861.42, "end": 864.66, "text": " So depending on the tasks, some will make more sense,"}, {"start": 864.66, "end": 867.58, "text": " some will make less sense and few shot basically"}, {"start": 867.58, "end": 870.54, "text": " almost always gives better performance, which makes sense."}, {"start": 870.54, "end": 874.38, "text": " And let me compare that to fine tuning,"}, {"start": 874.38, "end": 877.86, "text": " which is basically what most of the other models"}, {"start": 877.86, "end": 881.14, "text": " in NLP so far, like BERT, et cetera,"}, {"start": 881.14, "end": 883.22, "text": " they always use to pre-train the model"}, {"start": 883.22, "end": 885.36, "text": " on a huge corpus of text."}, {"start": 885.36, "end": 888.92, "text": " And then they fine tune the model on a specific downstream"}, {"start": 888.92, "end": 893.02, "text": " like task and then that's how it pretty much works."}, {"start": 893.02, "end": 896.38, "text": " And the problem, so they did notice a couple of problems"}, {"start": 896.38, "end": 898.6800000000001, "text": " with that, there are some problems with doing that."}, {"start": 898.6800000000001, "end": 902.6, "text": " Basically you kind of overfit to a more narrow distribution,"}, {"start": 902.6, "end": 903.78, "text": " that's one problem."}, {"start": 903.78, "end": 907.82, "text": " Second problem is you need supervised like label data,"}, {"start": 907.82, "end": 909.74, "text": " which is usually expensive."}, {"start": 909.74, "end": 913.98, "text": " And the third problem is that humans like don't need"}, {"start": 913.98, "end": 917.82, "text": " like a label data, like you only need a couple of examples"}, {"start": 917.82, "end": 918.6600000000001, "text": " and that's it."}, {"start": 918.6600000000001, "end": 920.6, "text": " So that's the few shot argument again."}, {"start": 920.6, "end": 923.82, "text": " Here they say that they usually need around,"}, {"start": 923.82, "end": 927.46, "text": " they usually put around 10 to 100 examples"}, {"start": 927.46, "end": 929.82, "text": " in the few shot setting, depending on the,"}, {"start": 929.82, "end": 934.5, "text": " like the basically they're constrained by this,"}, {"start": 934.5, "end": 937.5400000000001, "text": " by the model size pretty much, by the memory actually."}, {"start": 937.54, "end": 941.5, "text": " And they have only 2048 tokens available."}, {"start": 941.5, "end": 943.5999999999999, "text": " So depending on the example size,"}, {"start": 943.5999999999999, "end": 946.8, "text": " they can go from 100 to, from 10 to 100."}, {"start": 946.8, "end": 949.4599999999999, "text": " So that was, that was that part."}, 
{"start": 949.4599999999999, "end": 952.18, "text": " And there are a couple of interesting charts"}, {"start": 952.18, "end": 953.04, "text": " in the beginning of the paper."}, {"start": 953.04, "end": 956.8199999999999, "text": " So this one shows us that as we are scaling,"}, {"start": 956.8199999999999, "end": 959.8399999999999, "text": " so they pretty much averaged across different benchmarks,"}, {"start": 959.8399999999999, "end": 961.8399999999999, "text": " which we'll see in a couple of minutes,"}, {"start": 961.8399999999999, "end": 963.4399999999999, "text": " they averaged the performance."}, {"start": 963.44, "end": 967.62, "text": " And you can see that as the, on the x-axis as the,"}, {"start": 967.62, "end": 969.5400000000001, "text": " as the model size grows in size,"}, {"start": 969.5400000000001, "end": 974.5400000000001, "text": " so as we slowly get to the 175 billion model version,"}, {"start": 975.2600000000001, "end": 977.82, "text": " we get the better and better performance."}, {"start": 977.82, "end": 980.7, "text": " And the second important thing to notice here"}, {"start": 980.7, "end": 982.7, "text": " is that the difference in performance"}, {"start": 982.7, "end": 986.44, "text": " between the few shot, one shot and zero shot also increases,"}, {"start": 986.44, "end": 990.1400000000001, "text": " which kinda tells us that the model is becoming better"}, {"start": 990.1400000000001, "end": 991.44, "text": " at meta-learning."}, {"start": 991.44, "end": 994.5400000000001, "text": " So those are two interesting facts to see from this plot."}, {"start": 994.5400000000001, "end": 996.7800000000001, "text": " So the first one, bigger the scale,"}, {"start": 996.7800000000001, "end": 998.2600000000001, "text": " the better the performance."}, {"start": 998.2600000000001, "end": 1001.0200000000001, "text": " And the second thing is they are getting better"}, {"start": 1001.0200000000001, "end": 1003.86, "text": " at meta-learning as the size increases,"}, {"start": 1003.86, "end": 1005.24, "text": " which we can see by the difference"}, {"start": 1005.24, "end": 1006.74, "text": " between the different performances."}, {"start": 1006.74, "end": 1011.3000000000001, "text": " So basically going from zero to one shot here,"}, {"start": 1011.3000000000001, "end": 1014.7800000000001, "text": " you get less improvement than by going from zero to one"}, {"start": 1014.7800000000001, "end": 1018.48, "text": " to few shot learning here."}, {"start": 1018.48, "end": 1019.82, "text": " So that was that chart."}, {"start": 1019.82, "end": 1021.74, "text": " Let me see what else we have here."}, {"start": 1021.74, "end": 1026.38, "text": " So here they show how the, like the number of context,"}, {"start": 1026.38, "end": 1028.3, "text": " number of examples in the context"}, {"start": 1028.3, "end": 1029.78, "text": " also affects the performance."}, {"start": 1029.78, "end": 1032.1000000000001, "text": " And they showed a couple of models."}, {"start": 1032.1000000000001, "end": 1034.78, "text": " So the 1.3 billion, the 13 billion,"}, {"start": 1034.78, "end": 1038.48, "text": " and the biggest one, the GPT-3 175 billion model."}, {"start": 1038.48, "end": 1042.78, "text": " And we can see that as we increased the number of examples,"}, {"start": 1042.78, "end": 1045.92, "text": " usually the performance also increases."}, {"start": 1045.92, "end": 1048.18, "text": " And this is for some specific task,"}, {"start": 1048.18, "end": 1052.18, "text": " like insertion, 
trying to unscramble the word,"}, {"start": 1052.18, "end": 1054.42, "text": " which I'll explain a bit later."}, {"start": 1054.42, "end": 1055.44, "text": " But that's the basic idea."}, {"start": 1055.44, "end": 1058.3400000000001, "text": " So we have the zero shot, we have the one shot setting,"}, {"start": 1058.3400000000001, "end": 1060.1000000000001, "text": " and we have the few shot setting,"}, {"start": 1060.1000000000001, "end": 1061.68, "text": " and the performance increases."}, {"start": 1061.68, "end": 1064.5800000000002, "text": " And the second thing is the bigger the model,"}, {"start": 1065.78, "end": 1068.8600000000001, "text": " like the steeper the improvements in a sense."}, {"start": 1070.18, "end": 1075.18, "text": " Okay, and finally, this is the last chart in this section,"}, {"start": 1075.18, "end": 1079.5800000000002, "text": " which tells us, so they treat the model in a sense"}, {"start": 1079.5800000000002, "end": 1082.26, "text": " like a meta-learning, like a,"}, {"start": 1082.26, "end": 1083.98, "text": " they treat the problem as a meta-learning problem."}, {"start": 1083.98, "end": 1087.66, "text": " So basically you pre-train the model, that's the outer loop."}, {"start": 1087.66, "end": 1091.3, "text": " So that's all of those web crawl, like common crawl data,"}, {"start": 1091.3, "end": 1093.0600000000002, "text": " web text and common crawl data."}, {"start": 1093.0600000000002, "end": 1097.88, "text": " And then once you condition the model on the,"}, {"start": 1097.88, "end": 1102.5800000000002, "text": " on the like text, like we saw in the translation example,"}, {"start": 1102.58, "end": 1106.06, "text": " you basically, that's the inner loop."}, {"start": 1106.06, "end": 1108.1399999999999, "text": " When you put the context, you have,"}, {"start": 1108.1399999999999, "end": 1111.22, "text": " you form in a sense, ephemeral weights,"}, {"start": 1111.22, "end": 1112.58, "text": " which define the model."}, {"start": 1112.58, "end": 1116.86, "text": " So the model is kinda like changes its,"}, {"start": 1116.86, "end": 1120.06, "text": " let's call it shape, depending on the conditioning text."}, {"start": 1120.06, "end": 1122.04, "text": " So you have a completely different model,"}, {"start": 1122.04, "end": 1125.06, "text": " which is in a sense fine-tuned to the new,"}, {"start": 1125.06, "end": 1127.58, "text": " adapts to the new task, which is the interesting thing."}, {"start": 1127.58, "end": 1130.58, "text": " Before I jump to the results section,"}, {"start": 1130.58, "end": 1133.6599999999999, "text": " and show you how it performs on different benchmarks,"}, {"start": 1133.6599999999999, "end": 1136.1, "text": " let me show you one more really important chart."}, {"start": 1136.1, "end": 1140.22, "text": " So this chart shows us that the validation loss"}, {"start": 1141.6399999999999, "end": 1145.98, "text": " decreases as the compute increases."}, {"start": 1145.98, "end": 1149.1, "text": " And the x-axis is the log scale,"}, {"start": 1149.1, "end": 1151.62, "text": " and you can basically see, so GPT-3,"}, {"start": 1151.62, "end": 1154.54, "text": " which had this many parameters,"}, {"start": 1154.54, "end": 1158.62, "text": " so that's approximately 175 billion params,"}, {"start": 1158.62, "end": 1161.08, "text": " and had 3,000 days of compute."}, {"start": 1161.08, "end": 1163.06, "text": " So this is the GPT-3 model."}, {"start": 1163.06, "end": 1166.1399999999999, "text": " You can see as the compute increases,"}, 
{"start": 1166.1399999999999, "end": 1171.1399999999999, "text": " the loss decreases, and it follows the power law here."}, {"start": 1172.7399999999998, "end": 1175.34, "text": " So basically smaller models with less compute"}, {"start": 1176.34, "end": 1179.9399999999998, "text": " saturate at higher validation loss,"}, {"start": 1179.9399999999998, "end": 1182.54, "text": " which is an interesting, which was actually"}, {"start": 1183.4599999999998, "end": 1185.86, "text": " hypothesized by some of the previous papers,"}, {"start": 1185.86, "end": 1189.4199999999998, "text": " and this paper showed that the loss still holds"}, {"start": 1189.4199999999998, "end": 1193.4199999999998, "text": " even for huge, huge models such as the GPT-3."}, {"start": 1193.4199999999998, "end": 1197.6799999999998, "text": " So that was all I had to tell you"}, {"start": 1197.6799999999998, "end": 1200.1, "text": " before we jumped to the results section."}, {"start": 1200.1, "end": 1205.1, "text": " So pretty much nothing new in the research sense."}, {"start": 1205.5, "end": 1209.6, "text": " They pretty much like sweep the whole combinatorial space,"}, {"start": 1209.6, "end": 1211.24, "text": " tried a bunch of different models,"}, {"start": 1211.24, "end": 1212.6, "text": " saw how it generalizes."}, {"start": 1212.6, "end": 1214.6599999999999, "text": " So the nice thing is that they are pushing"}, {"start": 1214.66, "end": 1219.1000000000001, "text": " for this few shot and zero shot and one shot"}, {"start": 1219.1000000000001, "end": 1222.74, "text": " like key evaluation and not for the fine tuning approach,"}, {"start": 1222.74, "end": 1225.78, "text": " which just gives you better results in the benchmark,"}, {"start": 1225.78, "end": 1230.3400000000001, "text": " but is not as applicable as having a general model"}, {"start": 1230.3400000000001, "end": 1234.8200000000002, "text": " which you can later apply on the fly, which is really cool."}, {"start": 1234.8200000000002, "end": 1235.98, "text": " So the paper is really huge,"}, {"start": 1235.98, "end": 1237.26, "text": " so I'm going to just show you"}, {"start": 1237.26, "end": 1239.68, "text": " a subset of interesting results."}, {"start": 1239.68, "end": 1242.0, "text": " And yeah, so the first one will be,"}, {"start": 1242.98, "end": 1244.38, "text": " I'll skip Lombetta."}, {"start": 1244.38, "end": 1246.0200000000002, "text": " Those are some language modeling tasks,"}, {"start": 1246.0200000000002, "end": 1247.9, "text": " and as expected because the model was,"}, {"start": 1247.9, "end": 1250.3000000000002, "text": " GPT-3 was pre-trained as a language model,"}, {"start": 1250.3000000000002, "end": 1253.1000000000001, "text": " i.e. 
predicting the next word in the sentence."}, {"start": 1253.1000000000001, "end": 1256.3400000000001, "text": " It performs pretty good on those tasks."}, {"start": 1256.3400000000001, "end": 1258.7800000000002, "text": " I'll skip this and I'll jump to translation."}, {"start": 1258.7800000000002, "end": 1260.3400000000001, "text": " Let me just find the curves."}, {"start": 1260.3400000000001, "end": 1262.0600000000002, "text": " Whoop, just a sec, this one."}, {"start": 1263.3400000000001, "end": 1267.14, "text": " What's interesting is that basically"}, {"start": 1268.46, "end": 1271.3200000000002, "text": " the model was not explicitly trained to do translation."}, {"start": 1271.3200000000002, "end": 1274.2600000000002, "text": " And that's one of the funny things about this GPT-3 model."}, {"start": 1274.26, "end": 1276.98, "text": " So basically it was just trained to predict the next word,"}, {"start": 1276.98, "end": 1279.66, "text": " but what emerges is a set of skills"}, {"start": 1279.66, "end": 1282.9, "text": " which seem to be useful to know"}, {"start": 1282.9, "end": 1284.44, "text": " in order to predict those words."}, {"start": 1284.44, "end": 1287.06, "text": " And one of those skills is just like translation."}, {"start": 1287.06, "end": 1292.06, "text": " And I'm pretty surprised by the BLEO score it achieved."}, {"start": 1292.26, "end": 1294.94, "text": " So I previously reconstructed"}, {"start": 1294.94, "end": 1296.42, "text": " the original transformer paper,"}, {"start": 1296.42, "end": 1300.66, "text": " and I'll link the project in the description."}, {"start": 1300.66, "end": 1304.18, "text": " And basically I think I achieved around 33 BLEO score"}, {"start": 1304.18, "end": 1307.02, "text": " for English to German."}, {"start": 1307.02, "end": 1311.3400000000001, "text": " And here without being explicitly trained to do translation"}, {"start": 1311.3400000000001, "end": 1316.3400000000001, "text": " and just having a small subset of the text in German,"}, {"start": 1316.9, "end": 1319.98, "text": " it achieved a really, really decent score."}, {"start": 1319.98, "end": 1324.6200000000001, "text": " And interesting thing is that going from French to English,"}, {"start": 1324.6200000000001, "end": 1326.92, "text": " German to English, and Romanian to English,"}, {"start": 1326.92, "end": 1328.5800000000002, "text": " the performance is really good."}, {"start": 1328.5800000000002, "end": 1331.0800000000002, "text": " But once you go in the opposite direction,"}, {"start": 1331.08, "end": 1336.08, "text": " it's really hard to achieve a good result."}, {"start": 1336.3, "end": 1340.54, "text": " So this, the Romanian really has some serious problems."}, {"start": 1340.54, "end": 1343.5, "text": " It's around 20, which is really, really low."}, {"start": 1343.5, "end": 1344.34, "text": " That's interesting."}, {"start": 1344.34, "end": 1345.1799999999998, "text": " And that shows that this model"}, {"start": 1345.1799999999998, "end": 1347.4199999999998, "text": " is a good English language model,"}, {"start": 1347.4199999999998, "end": 1350.78, "text": " but not as good for other languages."}, {"start": 1352.1399999999999, "end": 1357.1399999999999, "text": " So they noticed, so as I mentioned,"}, {"start": 1357.5, "end": 1358.82, "text": " they noticed this skew,"}, {"start": 1358.82, "end": 1360.6599999999999, "text": " and they said that this could be a weakness"}, {"start": 1360.66, "end": 1364.5600000000002, "text": " due to reusing the bad level 
BPE tokenizer of GPT-2,"}, {"start": 1364.5600000000002, "end": 1366.66, "text": " which was developed for an almost entirely"}, {"start": 1366.66, "end": 1368.1000000000001, "text": " English training data set."}, {"start": 1368.1000000000001, "end": 1372.9, "text": " So BPE, the by pair encoding was,"}, {"start": 1372.9, "end": 1374.5400000000002, "text": " many people pointed out that BPE"}, {"start": 1374.5400000000002, "end": 1376.24, "text": " can be causing different problems."}, {"start": 1376.24, "end": 1378.6200000000001, "text": " We'll see some problems that it's causing"}, {"start": 1378.6200000000001, "end": 1381.78, "text": " in one of the tasks I'm going to show you"}, {"start": 1381.78, "end": 1383.38, "text": " about unscrambling words."}, {"start": 1383.38, "end": 1387.0400000000002, "text": " And yeah, just keep that in mind about the BPE."}, {"start": 1387.94, "end": 1390.3200000000002, "text": " The next interesting tasks I wanna show you"}, {"start": 1390.32, "end": 1392.46, "text": " is the natural language inference,"}, {"start": 1392.46, "end": 1395.78, "text": " where the model is having some problems doing those."}, {"start": 1395.78, "end": 1400.78, "text": " So the NLI tasks, and we can see here, basically,"}, {"start": 1401.06, "end": 1403.9399999999998, "text": " so the task there is to read the text"}, {"start": 1403.9399999999998, "end": 1408.5, "text": " and understand how sentences relate to each other."}, {"start": 1408.5, "end": 1410.34, "text": " For example, you give it one sentence"}, {"start": 1410.34, "end": 1411.5, "text": " and the second sentence,"}, {"start": 1411.5, "end": 1414.5, "text": " and you ask it to say whether the second sentence"}, {"start": 1414.5, "end": 1415.8999999999999, "text": " follows from the first,"}, {"start": 1415.8999999999999, "end": 1417.86, "text": " whether it contradicts the first sentence,"}, {"start": 1417.86, "end": 1419.6399999999999, "text": " or whether they're just neutral."}, {"start": 1419.64, "end": 1423.7, "text": " And basically, that's really interesting"}, {"start": 1423.7, "end": 1425.26, "text": " because that means that the model"}, {"start": 1425.26, "end": 1428.9, "text": " doesn't handle comprehension like in reasoning,"}, {"start": 1428.9, "end": 1433.44, "text": " as well as it does just like generating sequences"}, {"start": 1433.44, "end": 1436.26, "text": " like translation or language modeling"}, {"start": 1436.26, "end": 1438.18, "text": " or like completing sentences."}, {"start": 1438.18, "end": 1441.14, "text": " So different tasks that have to do with generation."}, {"start": 1443.26, "end": 1445.5600000000002, "text": " And we can see the results here"}, {"start": 1445.5600000000002, "end": 1448.3000000000002, "text": " that we have pretty much random behavior"}, {"start": 1448.3, "end": 1451.58, "text": " on smaller models, and then as the model grows,"}, {"start": 1451.58, "end": 1456.22, "text": " the few shot model actually improves the performance,"}, {"start": 1456.22, "end": 1459.06, "text": " but it's still way below the baselines,"}, {"start": 1459.06, "end": 1461.8999999999999, "text": " like even BERT and Roberta is even better."}, {"start": 1462.86, "end": 1464.3, "text": " So those are some of the tasks"}, {"start": 1464.3, "end": 1467.3, "text": " that this model is having problems with."}, {"start": 1467.3, "end": 1470.1, "text": " So A stands for adversarial."}, {"start": 1470.1, "end": 1473.68, "text": " So there's actually a subset of those NLI examples"}, 
{"start": 1473.68, "end": 1476.52, "text": " where humans picked some examples"}, {"start": 1476.52, "end": 1479.46, "text": " which are especially hard for language models."}, {"start": 1479.46, "end": 1484.1, "text": " And yeah, so it's struggling with those types of problems."}, {"start": 1484.1, "end": 1488.58, "text": " Now my favorite ones are these synthetic tasks."}, {"start": 1488.58, "end": 1493.58, "text": " So what they did is they tried and prompt the model"}, {"start": 1493.98, "end": 1497.74, "text": " whether it knows how to do some basic addition,"}, {"start": 1497.74, "end": 1500.02, "text": " subtraction, and multiplication,"}, {"start": 1500.02, "end": 1502.34, "text": " and here we can see results."}, {"start": 1502.34, "end": 1507.34, "text": " So basically for two digit addition and subtraction,"}, {"start": 1507.54, "end": 1509.54, "text": " the model has really good performance,"}, {"start": 1509.54, "end": 1511.06, "text": " at least the biggest one,"}, {"start": 1511.06, "end": 1512.9399999999998, "text": " and we can see like huge improvements"}, {"start": 1512.9399999999998, "end": 1517.4199999999998, "text": " as the model size increases again."}, {"start": 1517.4199999999998, "end": 1521.1799999999998, "text": " But then when we go to three digit addition,"}, {"start": 1521.1799999999998, "end": 1525.06, "text": " perf goes down and it goes even further down"}, {"start": 1525.06, "end": 1526.8999999999999, "text": " for four digit addition and subtraction,"}, {"start": 1526.9, "end": 1531.9, "text": " for multiplication, and for some three digit operations."}, {"start": 1534.02, "end": 1538.1000000000001, "text": " So what is probably causing this is that"}, {"start": 1538.1000000000001, "end": 1542.14, "text": " like in the huge stacks that the model has seen"}, {"start": 1542.14, "end": 1543.98, "text": " during the pre-training,"}, {"start": 1543.98, "end": 1547.6200000000001, "text": " there were a lot of tables that had like two digit numbers,"}, {"start": 1547.6200000000001, "end": 1549.5600000000002, "text": " but less so for three digit numbers,"}, {"start": 1549.5600000000002, "end": 1551.98, "text": " and even less so for four and five, et cetera."}, {"start": 1551.98, "end": 1555.8200000000002, "text": " So basically this may be indicative"}, {"start": 1555.82, "end": 1558.6599999999999, "text": " that the model is not learning how to reason"}, {"start": 1558.6599999999999, "end": 1562.8, "text": " and actually doesn't adapt in a meta-learning sense"}, {"start": 1562.8, "end": 1565.26, "text": " to this new task and learns it from a few examples,"}, {"start": 1565.26, "end": 1569.78, "text": " but it's just doing some kind of a statistical"}, {"start": 1569.78, "end": 1572.58, "text": " like pattern matching and stuff."}, {"start": 1572.58, "end": 1576.3, "text": " So it's still unexplored what exactly happens"}, {"start": 1576.3, "end": 1577.36, "text": " in the few shot setting,"}, {"start": 1577.36, "end": 1580.5, "text": " but like a probably good hypothesis is that"}, {"start": 1580.5, "end": 1583.1399999999999, "text": " it's not reasoning in the sense we humans are reasoning."}, {"start": 1583.14, "end": 1586.6200000000001, "text": " Okay, so that was an interesting example."}, {"start": 1586.6200000000001, "end": 1589.8200000000002, "text": " So one more really interesting task for me was this"}, {"start": 1589.8200000000002, "end": 1591.5400000000002, "text": " word scrambling and manipulation task."}, {"start": 1591.5400000000002, "end": 
1593.9, "text": " So they had five sub-tasks here."}, {"start": 1593.9, "end": 1595.9, "text": " So the first one was cycle letters in words."}, {"start": 1595.9, "end": 1598.7800000000002, "text": " So basically you give the model something like this"}, {"start": 1598.7800000000002, "end": 1600.8600000000001, "text": " and the model is supposed to figure out"}, {"start": 1600.8600000000001, "end": 1605.0600000000002, "text": " that this is actually just a circularly permuted."}, {"start": 1605.0600000000002, "end": 1608.3600000000001, "text": " So you don't scramble it into inevitably here."}, {"start": 1608.3600000000001, "end": 1611.0600000000002, "text": " Then we have anagrams where you keep the first"}, {"start": 1611.06, "end": 1614.3, "text": " and the last letter and you scramble everything in between"}, {"start": 1614.3, "end": 1615.62, "text": " and that's what you give to the model"}, {"start": 1615.62, "end": 1618.34, "text": " and you expect the model to unscramble the word."}, {"start": 1618.34, "end": 1619.78, "text": " And this is actually super tough."}, {"start": 1619.78, "end": 1623.26, "text": " I had problems also unscrambling this word."}, {"start": 1623.26, "end": 1625.5, "text": " So I thought it had something to do with crypto,"}, {"start": 1625.5, "end": 1627.06, "text": " but then yeah, it doesn't have any Y,"}, {"start": 1627.06, "end": 1628.7, "text": " but it's still, it's kind of hard."}, {"start": 1628.7, "end": 1630.6399999999999, "text": " And then this one is a bit easier."}, {"start": 1630.6399999999999, "end": 1632.98, "text": " You hold the first two letters fixed"}, {"start": 1632.98, "end": 1635.86, "text": " and the last two letters and you just unscramble the middle"}, {"start": 1635.86, "end": 1636.96, "text": " and you get opponent."}, {"start": 1636.96, "end": 1639.1, "text": " Then they have a random insertion in the word"}, {"start": 1639.1, "end": 1641.54, "text": " where you basically just after every single character,"}, {"start": 1641.54, "end": 1644.98, "text": " you insert a random punctuation or like a blank space"}, {"start": 1644.98, "end": 1648.1799999999998, "text": " and you expect again, the model to unscramble it."}, {"start": 1648.1799999999998, "end": 1651.02, "text": " And finally, we have just the reverse word."}, {"start": 1651.02, "end": 1653.3, "text": " So but just the model should figure out"}, {"start": 1653.3, "end": 1655.86, "text": " that you should just reverse the letters"}, {"start": 1655.86, "end": 1658.62, "text": " and get objects here."}, {"start": 1658.62, "end": 1662.82, "text": " So those were the tasks and a bit surprisingly,"}, {"start": 1662.82, "end": 1667.82, "text": " so even the biggest model, the GPT-3, the 175 billion one"}, {"start": 1672.1, "end": 1676.26, "text": " had zero like accuracy on the reverse words."}, {"start": 1676.26, "end": 1678.5, "text": " So that's the first intuitive thought I had"}, {"start": 1678.5, "end": 1680.1799999999998, "text": " like what's happening here."}, {"start": 1680.1799999999998, "end": 1681.9399999999998, "text": " And then you figure out that the model"}, {"start": 1681.9399999999998, "end": 1684.58, "text": " is using sub word tokenization."}, {"start": 1684.58, "end": 1688.22, "text": " So it's really hard for it to do this task."}, {"start": 1688.22, "end": 1691.0, "text": " Whereas the easiest one was random insertion,"}, {"start": 1691.0, "end": 1693.42, "text": " which was also easiest for me actually."}, {"start": 1693.42, "end": 1696.22, "text": " 
And then you can see the anagram,"}, {"start": 1696.22, "end": 1699.9, "text": " the one word anagram is having problems."}, {"start": 1699.9, "end": 1703.46, "text": " So that mirrors my difficulties with understanding"}, {"start": 1703.46, "end": 1706.66, "text": " with doing this A1 task as well."}, {"start": 1706.66, "end": 1710.92, "text": " And then the A2 anagram task was a bit easier."}, {"start": 1710.92, "end": 1713.34, "text": " So that's the one where we keep the first two"}, {"start": 1713.34, "end": 1715.22, "text": " and last two letters fixed."}, {"start": 1715.22, "end": 1717.94, "text": " So that's actually all intuitive."}, {"start": 1717.94, "end": 1722.02, "text": " So the same difficulties I had solving these problems,"}, {"start": 1722.02, "end": 1724.8600000000001, "text": " the model had similar difficulties."}, {"start": 1724.8600000000001, "end": 1726.3600000000001, "text": " There are two more interesting tasks"}, {"start": 1726.3600000000001, "end": 1730.02, "text": " I wanna kind of explore a bit more in depth."}, {"start": 1730.02, "end": 1732.3400000000001, "text": " So the first one is the set analogies."}, {"start": 1732.3400000000001, "end": 1737.14, "text": " And here you can see what the problem looks like."}, {"start": 1737.14, "end": 1739.6200000000001, "text": " So this is in the appendix part of the paper."}, {"start": 1739.6200000000001, "end": 1743.6200000000001, "text": " Basically the context is a law is to trust as,"}, {"start": 1743.6200000000001, "end": 1747.16, "text": " and then you have a multiple like answers here,"}, {"start": 1747.16, "end": 1749.98, "text": " multiple choices, and you are supposed to pick out"}, {"start": 1749.98, "end": 1750.8600000000001, "text": " the correct one."}, {"start": 1750.8600000000001, "end": 1752.22, "text": " And I don't know about you,"}, {"start": 1752.22, "end": 1754.02, "text": " like English is not my native language."}, {"start": 1754.02, "end": 1756.0600000000002, "text": " And I think I'm pretty good at it."}, {"start": 1756.0600000000002, "end": 1759.88, "text": " But like some of these words like Kajoli or Belk"}, {"start": 1759.88, "end": 1761.1000000000001, "text": " were not familiar to me."}, {"start": 1761.1000000000001, "end": 1762.7, "text": " So this problem actually turned out"}, {"start": 1762.7, "end": 1764.5, "text": " to be really difficult even for me."}, {"start": 1764.5, "end": 1768.5800000000002, "text": " So I was just thinking how we are like testing our models"}, {"start": 1768.5800000000002, "end": 1770.26, "text": " really like really heavy."}, {"start": 1770.26, "end": 1772.4, "text": " Even the humans are not like,"}, {"start": 1773.5, "end": 1775.98, "text": " not nearly as smart as we think they are."}, {"start": 1775.98, "end": 1778.6, "text": " So I mean, yeah, that's just something that struck me"}, {"start": 1778.6, "end": 1779.8, "text": " and was interesting."}, {"start": 1779.8, "end": 1784.8, "text": " So let me go and see how the model is actually performing"}, {"start": 1784.9, "end": 1787.06, "text": " on the set problem."}, {"start": 1787.06, "end": 1789.34, "text": " So on the set analogy problem."}, {"start": 1790.94, "end": 1794.58, "text": " So what was interesting for me was that actually"}, {"start": 1794.58, "end": 1796.46, "text": " GPT-3 achieved higher score"}, {"start": 1796.46, "end": 1799.54, "text": " than average high school student."}, {"start": 1799.54, "end": 1804.54, "text": " So on this task GPT-3 achieved 65.2 in the few shot 
setting,"}, {"start": 1804.54, "end": 1809.54, "text": " whereas the average score among college applicants was 57%."}, {"start": 1810.1399999999999, "end": 1814.58, "text": " And we can see the curves here again,"}, {"start": 1814.58, "end": 1817.22, "text": " the few shot one is growing steadily"}, {"start": 1817.22, "end": 1822.22, "text": " and is always the best for the GPT-3 model."}, {"start": 1822.5, "end": 1826.1399999999999, "text": " So one thing that we can maybe conclude from this result"}, {"start": 1826.1399999999999, "end": 1827.98, "text": " is that we shouldn't be teaching our kids"}, {"start": 1827.98, "end": 1831.26, "text": " to just do these tasks that are really easy"}, {"start": 1831.26, "end": 1834.3, "text": " for like big neural networks to do."}, {"start": 1834.3, "end": 1836.6599999999999, "text": " They should be focusing more like on things"}, {"start": 1836.6599999999999, "end": 1838.82, "text": " that the networks can do,"}, {"start": 1838.82, "end": 1842.26, "text": " like reasoning and maybe art or stuff."}, {"start": 1842.26, "end": 1843.94, "text": " But even art is something that's already"}, {"start": 1843.94, "end": 1845.6599999999999, "text": " in the realm of neural networks."}, {"start": 1845.6599999999999, "end": 1848.58, "text": " So I guess that's going to be a complicated question"}, {"start": 1848.58, "end": 1851.4199999999998, "text": " that we have to solve, but let me not digress too much."}, {"start": 1852.58, "end": 1854.56, "text": " The final problem that I wanted to talk about"}, {"start": 1854.56, "end": 1856.82, "text": " is the news article generation."}, {"start": 1856.82, "end": 1860.54, "text": " And the results here are really striking."}, {"start": 1860.54, "end": 1864.46, "text": " Basically, if I show you the plot here,"}, {"start": 1864.46, "end": 1866.98, "text": " as the model size increases,"}, {"start": 1866.98, "end": 1871.1, "text": " so here on the X dimension, on the X axis,"}, {"start": 1871.1, "end": 1873.46, "text": " you can see that the accuracy goes down,"}, {"start": 1873.46, "end": 1876.6599999999999, "text": " meaning that people are less and less confident"}, {"start": 1876.6599999999999, "end": 1881.6599999999999, "text": " that an article, whether the text came from the model"}, {"start": 1881.6599999999999, "end": 1884.1599999999999, "text": " or whether it came from humans."}, {"start": 1884.1599999999999, "end": 1887.56, "text": " So they did a really nice study"}, {"start": 1887.56, "end": 1890.46, "text": " where they tested a bunch of models,"}, {"start": 1890.46, "end": 1892.66, "text": " also they used something called control model,"}, {"start": 1892.66, "end": 1895.3400000000001, "text": " which was a smaller GPT-3 version,"}, {"start": 1895.3400000000001, "end": 1899.64, "text": " and it intentionally had like a higher temperature,"}, {"start": 1899.64, "end": 1902.38, "text": " which increased the randomness of the softmax."}, {"start": 1902.38, "end": 1904.74, "text": " So basically, it was more random"}, {"start": 1904.74, "end": 1906.58, "text": " than all of the other models."}, {"start": 1906.58, "end": 1908.38, "text": " And they compared all of those models"}, {"start": 1908.38, "end": 1909.5, "text": " with the control model."}, {"start": 1909.5, "end": 1912.46, "text": " And what it turns out is that as we go,"}, {"start": 1912.46, "end": 1917.46, "text": " as the models increase, we end up pretty much"}, {"start": 1917.46, "end": 1920.52, "text": " with the fact that people can 
decide"}, {"start": 1920.52, "end": 1924.8, "text": " whether an article came from human or from the GPT-3,"}, {"start": 1924.8, "end": 1929.56, "text": " which is amazing and has some probably scary repercussions."}, {"start": 1929.56, "end": 1931.68, "text": " So those were some interesting results"}, {"start": 1931.68, "end": 1934.8, "text": " that they showed in this huge paper."}, {"start": 1934.8, "end": 1936.3600000000001, "text": " This video's already getting too long,"}, {"start": 1936.3600000000001, "end": 1939.24, "text": " so I'm going to tell you a little bit more"}, {"start": 1939.24, "end": 1942.92, "text": " about two or three more things."}, {"start": 1942.92, "end": 1945.16, "text": " So one is data contamination."}, {"start": 1945.16, "end": 1947.88, "text": " So what happened is that because the training dataset"}, {"start": 1947.88, "end": 1951.6000000000001, "text": " they have is so huge, they ended up having"}, {"start": 1951.6000000000001, "end": 1955.0800000000002, "text": " some of the dev and test data from the benchmarks"}, {"start": 1955.0800000000002, "end": 1957.6000000000001, "text": " already being included in the training dataset."}, {"start": 1957.6000000000001, "end": 1961.88, "text": " And then they did some investigations,"}, {"start": 1961.88, "end": 1966.76, "text": " and for the most part, they said that the tests"}, {"start": 1966.76, "end": 1970.64, "text": " weren't as affected as they initially expected"}, {"start": 1970.64, "end": 1973.64, "text": " because there was a bug in their filtering of the data,"}, {"start": 1973.64, "end": 1975.0800000000002, "text": " which caused all of this."}, {"start": 1975.0800000000002, "end": 1978.48, "text": " So yeah, but some of the tests like PICA,"}, {"start": 1978.48, "end": 1983.48, "text": " where they had physical, let me see if I can find it."}, {"start": 1984.0, "end": 1986.92, "text": " So this is one example from PICA data sets."}, {"start": 1986.92, "end": 1990.3200000000002, "text": " So basically the context is how to apply sealant to wood,"}, {"start": 1990.3200000000002, "end": 1992.24, "text": " and then we have the correct answer,"}, {"start": 1992.24, "end": 1994.3600000000001, "text": " using a brush, brush on sealant onto wood"}, {"start": 1994.3600000000001, "end": 1996.72, "text": " until it is fully saturated with the sealant,"}, {"start": 1996.72, "end": 1998.68, "text": " and then they have the incorrect one."}, {"start": 1998.68, "end": 2003.68, "text": " So as they showed, the results on PICA were really good,"}, {"start": 2004.3200000000002, "end": 2007.44, "text": " and because it looks like it's an outlier,"}, {"start": 2007.44, "end": 2010.9, "text": " I do think that contamination played its role here."}, {"start": 2010.9, "end": 2013.3200000000002, "text": " Okay, getting back to this chart,"}, {"start": 2013.3200000000002, "end": 2015.3600000000001, "text": " I just wanna note one more thing."}, {"start": 2015.3600000000001, "end": 2018.52, "text": " And so they had a bug, so they said here,"}, {"start": 2018.52, "end": 2022.0800000000002, "text": " unfortunately a bug resulted in only partial removal"}, {"start": 2022.0800000000002, "end": 2024.94, "text": " of all detected overlaps from the training data."}, {"start": 2024.94, "end": 2026.2, "text": " Due to the cost of training,"}, {"start": 2026.2, "end": 2028.76, "text": " it wasn't feasible to retrain the model."}, {"start": 2028.76, "end": 2033.2, "text": " So even I think that they're using some kind of like,"}, 
{"start": 2033.2, "end": 2037.16, "text": " they're using 13 grams to figure out the overlaps,"}, {"start": 2037.16, "end": 2039.26, "text": " and they didn't mention somewhere in the paper"}, {"start": 2039.26, "end": 2041.68, "text": " that this is still a new area of research."}, {"start": 2041.68, "end": 2044.3400000000001, "text": " So I'm really skeptical about this part,"}, {"start": 2044.3400000000001, "end": 2049.34, "text": " how they are, and what do they consider by duplicate."}, {"start": 2049.92, "end": 2052.8, "text": " So it's hard to figure out which parts of text"}, {"start": 2052.8, "end": 2055.7200000000003, "text": " you should filter out so that you can say with certainty,"}, {"start": 2055.72, "end": 2059.3599999999997, "text": " okay, these examples from the dev and test sets"}, {"start": 2059.3599999999997, "end": 2063.3599999999997, "text": " are not present in any way in the training sets"}, {"start": 2063.3599999999997, "end": 2065.68, "text": " in a way that could bias and help the model"}, {"start": 2065.68, "end": 2069.2799999999997, "text": " to better predict on those dev and test sets."}, {"start": 2069.2799999999997, "end": 2073.9599999999996, "text": " So I mentioned PICA, we can see it's an outlier,"}, {"start": 2073.9599999999996, "end": 2076.12, "text": " so it has better performance."}, {"start": 2076.12, "end": 2079.64, "text": " You can see here that the points"}, {"start": 2079.64, "end": 2083.8399999999997, "text": " on the lower part of this chart have better performance"}, {"start": 2083.84, "end": 2087.8, "text": " on the dirty data sets,"}, {"start": 2087.8, "end": 2092.8, "text": " and so that means that there was a contamination issue."}, {"start": 2093.1200000000003, "end": 2095.36, "text": " What I'm not clear about is this drop."}, {"start": 2095.36, "end": 2099.32, "text": " They did mention it, but it looks like a huge outlier,"}, {"start": 2101.1600000000003, "end": 2104.0, "text": " but they didn't mention anything specifically about it."}, {"start": 2104.0, "end": 2107.44, "text": " They did mention PICA and VinaGrad, et cetera."}, {"start": 2107.44, "end": 2109.92, "text": " So that was it about this part."}, {"start": 2109.92, "end": 2111.76, "text": " Now I wanna just walk you through"}, {"start": 2111.76, "end": 2113.6000000000004, "text": " some of the limitations they mentioned."}, {"start": 2113.6, "end": 2115.12, "text": " So the first one is this one."}, {"start": 2115.12, "end": 2118.44, "text": " So GPT-3 samples still sometimes repeat themselves"}, {"start": 2118.44, "end": 2120.36, "text": " semantically at the document level,"}, {"start": 2120.36, "end": 2123.7599999999998, "text": " start to lose coherence over sufficiently long passages,"}, {"start": 2123.7599999999998, "end": 2126.48, "text": " contradict themselves, and occasionally contain"}, {"start": 2126.48, "end": 2129.48, "text": " non-sequitur sentences or paragraphs."}, {"start": 2129.48, "end": 2132.56, "text": " So even though this is a huge language model,"}, {"start": 2132.56, "end": 2137.56, "text": " it still has its own problems, like repeating."}, {"start": 2137.6, "end": 2142.44, "text": " So there's still a lot of research to do"}, {"start": 2142.44, "end": 2145.6, "text": " about how do we decode the output from these models"}, {"start": 2145.6, "end": 2147.76, "text": " and whether the problem is the model itself"}, {"start": 2147.76, "end": 2150.58, "text": " or the heuristics we are using for decoding."}, {"start": 2150.58, "end": 2155.58, 
"text": " So basically, Gvern and most people are using the top P,"}, {"start": 2155.88, "end": 2159.36, "text": " the nucleus decoding, and they're playing with temperatures,"}, {"start": 2159.36, "end": 2162.6, "text": " so it still doesn't feel like"}, {"start": 2162.6, "end": 2164.12, "text": " as a correct way to do things."}, {"start": 2165.88, "end": 2169.2000000000003, "text": " They then mentioned that they are obviously aware"}, {"start": 2169.2000000000003, "end": 2172.2000000000003, "text": " that if they had bi-directional representations"}, {"start": 2172.2, "end": 2175.68, "text": " compared, contrasted to the ones that they do have,"}, {"start": 2175.68, "end": 2179.8399999999997, "text": " and that's the causal, the unidirectional representations,"}, {"start": 2179.8399999999997, "end": 2181.7999999999997, "text": " they expect better results."}, {"start": 2181.7999999999997, "end": 2184.7999999999997, "text": " So that's a thing they could try out."}, {"start": 2184.7999999999997, "end": 2187.72, "text": " They also acknowledge that the pre-training objective"}, {"start": 2187.72, "end": 2190.8799999999997, "text": " could be further improved, and they also acknowledge"}, {"start": 2190.8799999999997, "end": 2194.3599999999997, "text": " that the understanding precisely how few-shot learning works"}, {"start": 2194.3599999999997, "end": 2197.72, "text": " is an important unexplored direction for future research."}, {"start": 2197.72, "end": 2201.0, "text": " So yeah, basically nobody quite knows"}, {"start": 2201.0, "end": 2202.48, "text": " how this exactly works."}, {"start": 2202.48, "end": 2205.3, "text": " It's still kind of not easy to interpret,"}, {"start": 2205.3, "end": 2206.8, "text": " and that's one of the bad things."}, {"start": 2206.8, "end": 2208.12, "text": " So they mentioned it here."}, {"start": 2208.12, "end": 2211.8, "text": " Its decisions are not easily interpretable."}, {"start": 2211.8, "end": 2215.4, "text": " Also, the pure scale of the model"}, {"start": 2217.12, "end": 2220.2, "text": " doesn't allow them to iterate when they make mistakes,"}, {"start": 2220.2, "end": 2223.32, "text": " as we saw on the example with data contamination."}, {"start": 2223.32, "end": 2228.12, "text": " So a limitation associated with models at the scale of GPT-3"}, {"start": 2228.12, "end": 2230.68, "text": " regardless of objective function or algorithm"}, {"start": 2230.68, "end": 2233.68, "text": " is that they are both expensive and inconvenient"}, {"start": 2233.68, "end": 2236.7599999999998, "text": " to perform inference on, which may present a challenge"}, {"start": 2236.7599999999998, "end": 2239.2, "text": " for practical applicability of models of this scale"}, {"start": 2239.2, "end": 2240.3599999999997, "text": " in their current form."}, {"start": 2242.3399999999997, "end": 2246.2, "text": " So I guess some sorts of knowledge distillations"}, {"start": 2246.2, "end": 2251.2, "text": " and pruning, et cetera, could help us make smaller,"}, {"start": 2253.12, "end": 2255.7599999999998, "text": " yet performant models."}, {"start": 2255.7599999999998, "end": 2257.7599999999998, "text": " I wanna close up this video talking about"}, {"start": 2257.7599999999998, "end": 2258.68, "text": " the broader impact."}, {"start": 2258.68, "end": 2260.2, "text": " So I do think this is important,"}, {"start": 2260.2, "end": 2262.74, "text": " even though it's not like a technical part,"}, {"start": 2262.74, "end": 2265.3199999999997, "text": " so feel free to 
skip it if you're not interested in it."}, {"start": 2265.3199999999997, "end": 2268.16, "text": " So basically what they consider,"}, {"start": 2268.16, "end": 2270.7599999999998, "text": " so with GPT-2, they had a staged release."}, {"start": 2270.7599999999998, "end": 2272.24, "text": " So that happened last year,"}, {"start": 2272.24, "end": 2274.98, "text": " and they initially released the smallest GPT-2 model."}, {"start": 2274.98, "end": 2279.04, "text": " They were monitoring different forums, et cetera,"}, {"start": 2279.04, "end": 2282.96, "text": " and just looking for any signs of potential misuse."}, {"start": 2282.96, "end": 2284.52, "text": " Once they were pretty certain"}, {"start": 2284.52, "end": 2287.7599999999998, "text": " that no misuse was happening,"}, {"start": 2287.7599999999998, "end": 2289.96, "text": " they started gradually releasing the models,"}, {"start": 2289.96, "end": 2294.04, "text": " and at the end of 2019, the whole GPT-2,"}, {"start": 2294.04, "end": 2297.6, "text": " the biggest one, I think it had 1.5 billion params,"}, {"start": 2297.6, "end": 2298.64, "text": " was published."}, {"start": 2298.64, "end": 2300.68, "text": " So GPT-3 is still not published,"}, {"start": 2300.68, "end": 2302.28, "text": " and it probably will never be,"}, {"start": 2302.28, "end": 2307.28, "text": " but the API is already accessible for certain people."}, {"start": 2309.32, "end": 2310.26, "text": " It's in beta."}, {"start": 2310.26, "end": 2314.32, "text": " Okay, so what I wanted to say here is that"}, {"start": 2316.08, "end": 2318.52, "text": " it's even more important to consider these questions"}, {"start": 2318.52, "end": 2323.52, "text": " for GPT-3, because basically whatever malicious task"}, {"start": 2325.24, "end": 2328.86, "text": " that depends on producing a huge amount of text,"}, {"start": 2328.86, "end": 2330.14, "text": " GPT-3 will help with that."}, {"start": 2330.14, "end": 2332.24, "text": " So that's phishing, that's spamming,"}, {"start": 2333.48, "end": 2334.38, "text": " bunch of different stuff."}, {"start": 2334.38, "end": 2335.68, "text": " So social engineering,"}, {"start": 2335.68, "end": 2340.22, "text": " so that can be automated using GPT-3 in a sense."}, {"start": 2340.22, "end": 2343.04, "text": " So they did consider those things."}, {"start": 2343.04, "end": 2345.92, "text": " So many of these applications bottle on human beings"}, {"start": 2345.92, "end": 2349.3, "text": " to write sufficiently high-quality text."}, {"start": 2350.66, "end": 2355.1, "text": " And they mentioned who has the resources to do this,"}, {"start": 2355.1, "end": 2356.1, "text": " like government groups."}, {"start": 2356.1, "end": 2358.16, "text": " I don't think this part is as interesting"}, {"start": 2358.16, "end": 2360.1800000000003, "text": " as this part about the bias."}, {"start": 2360.1800000000003, "end": 2362.1, "text": " So they considered three axes."}, {"start": 2362.1, "end": 2365.3, "text": " So one is gender, the second one is religion,"}, {"start": 2365.3, "end": 2368.58, "text": " and the third one is, let me check, race."}, {"start": 2368.58, "end": 2369.42, "text": " Yep."}, {"start": 2369.42, "end": 2374.42, "text": " So how they evaluated the bias in these models is,"}, {"start": 2376.1, "end": 2380.06, "text": " they basically prompted with sentences like this one."}, {"start": 2380.06, "end": 2381.52, "text": " The occupation was up."}, {"start": 2381.52, "end": 2385.62, "text": " Like for example, and they note here 
the detective was up,"}, {"start": 2385.62, "end": 2388.26, "text": " and they are looking at the,"}, {"start": 2389.54, "end": 2392.14, "text": " what's the probability distribution coming out"}, {"start": 2392.14, "end": 2395.14, "text": " from the GPT-3, and they're looking at male and female."}, {"start": 2395.14, "end": 2398.26, "text": " And what I figured out is that most of the occupations"}, {"start": 2398.26, "end": 2401.2000000000003, "text": " are biased towards males."}, {"start": 2401.2000000000003, "end": 2404.0600000000004, "text": " So especially when they put it like this,"}, {"start": 2404.0600000000004, "end": 2409.0600000000004, "text": " the competent, then fill in the blank, was up."}, {"start": 2409.82, "end": 2414.1000000000004, "text": " And they figured out that that version of the prompt"}, {"start": 2414.1000000000004, "end": 2416.78, "text": " had even higher bias for males."}, {"start": 2416.78, "end": 2419.44, "text": " And actually also the incompetent one"}, {"start": 2419.44, "end": 2421.5800000000004, "text": " was also biased more towards males."}, {"start": 2421.5800000000004, "end": 2424.46, "text": " So again, iterating a little bit on this one."}, {"start": 2424.46, "end": 2427.46, "text": " Here we can see in particular occupations"}, {"start": 2427.46, "end": 2429.58, "text": " demonstrating higher levels of education,"}, {"start": 2429.58, "end": 2432.66, "text": " such as legislator, banker, or professor,"}, {"start": 2433.66, "end": 2435.7200000000003, "text": " were heavily male leaning."}, {"start": 2436.86, "end": 2440.52, "text": " And also like physical labor stuff was also male leaning,"}, {"start": 2440.52, "end": 2443.58, "text": " where the occupations that were more likely to be followed"}, {"start": 2443.58, "end": 2446.42, "text": " by female identifiers include midwife, nurse,"}, {"start": 2446.42, "end": 2448.7, "text": " receptionist, housekeeper, et cetera."}, {"start": 2448.7, "end": 2453.1, "text": " So once more how it works, you basically take a variant"}, {"start": 2453.1, "end": 2456.82, "text": " of the prompt like this one, you input it into your GPT-3,"}, {"start": 2456.82, "end": 2458.2200000000003, "text": " let me zoom in a little bit."}, {"start": 2458.2200000000003, "end": 2461.38, "text": " So you basically input it into your GPT-3 model."}, {"start": 2462.46, "end": 2464.34, "text": " Let's say this is GPT-3."}, {"start": 2464.34, "end": 2467.26, "text": " And it outputs as the next token,"}, {"start": 2467.26, "end": 2470.6600000000003, "text": " so the probability distribution which has the size"}, {"start": 2470.6600000000003, "end": 2475.54, "text": " of the GPT-3's vocab, which I think is around 50K."}, {"start": 2477.52, "end": 2482.52, "text": " And what it did then was they monitored a couple"}, {"start": 2482.52, "end": 2486.92, "text": " of male identifiers like man, male, et cetera,"}, {"start": 2486.92, "end": 2491.2, "text": " and they monitored like female, woman, et cetera."}, {"start": 2491.2, "end": 2495.2, "text": " And they just pretty much added the probabilities"}, {"start": 2495.2, "end": 2497.32, "text": " and compared those."}, {"start": 2497.32, "end": 2498.88, "text": " They also did some normalization."}, {"start": 2498.88, "end": 2501.44, "text": " So basically that's the method how they did this."}, {"start": 2501.44, "end": 2505.44, "text": " And I already mentioned the results that came out of it."}, {"start": 2507.12, "end": 2508.28, "text": " So that was about the gender."}, 
{"start": 2508.28, "end": 2511.92, "text": " Then I'll just briefly go over the race."}, {"start": 2511.92, "end": 2516.92, "text": " The, so across the models, they analyzed Asian"}, {"start": 2517.36, "end": 2519.36, "text": " had a consistently high sentiment,"}, {"start": 2519.36, "end": 2522.7200000000003, "text": " whereas the black had a consistently low sentiment."}, {"start": 2522.7200000000003, "end": 2527.34, "text": " And this was kind of surprising because you usually hear"}, {"start": 2527.34, "end": 2532.26, "text": " that like white males are much better positioned"}, {"start": 2532.26, "end": 2533.5, "text": " than Asians."}, {"start": 2533.5, "end": 2537.08, "text": " So this was kind of surprising in a way for me."}, {"start": 2537.08, "end": 2539.6800000000003, "text": " And yeah, that's an obvious bias."}, {"start": 2539.68, "end": 2544.68, "text": " And especially for black people, like a lot of the text"}, {"start": 2545.3599999999997, "end": 2550.3599999999997, "text": " has, is referencing like slavery, et cetera."}, {"start": 2550.48, "end": 2553.2, "text": " And then that's the reason black people"}, {"start": 2553.2, "end": 2555.64, "text": " have the lowest sentiment."}, {"start": 2555.64, "end": 2556.9199999999996, "text": " They did mention it here."}, {"start": 2556.9199999999996, "end": 2558.7999999999997, "text": " So the resulting sentiment can reflect"}, {"start": 2558.7999999999997, "end": 2560.44, "text": " socio-historical factors."}, {"start": 2560.44, "end": 2563.44, "text": " For instance, text relating to a discussion of slavery"}, {"start": 2563.44, "end": 2565.8799999999997, "text": " will frequently have a negative sentiment."}, {"start": 2565.88, "end": 2570.88, "text": " And that's the reason why the black race always has"}, {"start": 2573.48, "end": 2578.4, "text": " lowest sentiment, like for every single model size."}, {"start": 2578.4, "end": 2580.32, "text": " So X axis is the model size."}, {"start": 2580.32, "end": 2582.08, "text": " Here is the biggest one."}, {"start": 2582.08, "end": 2586.1600000000003, "text": " Finally, religion, they tried out like the top,"}, {"start": 2586.1600000000003, "end": 2591.08, "text": " the most popular like world religions like Christianity,"}, {"start": 2591.08, "end": 2594.88, "text": " Buddhism, Islam, et cetera."}, {"start": 2594.88, "end": 2599.88, "text": " And so they found some negative biases about Islam."}, {"start": 2600.04, "end": 2603.1600000000003, "text": " Like we also found that words such as violent, terrorism,"}, {"start": 2603.1600000000003, "end": 2605.96, "text": " terrorist occurred at a greater scale with Islam"}, {"start": 2605.96, "end": 2607.2000000000003, "text": " than with other religions."}, {"start": 2607.2000000000003, "end": 2611.04, "text": " And we're in the top 40 most favorite words for Islam"}, {"start": 2611.04, "end": 2612.86, "text": " in GPT-3, which is really sad."}, {"start": 2612.86, "end": 2616.3, "text": " And we need to be aware of these biases"}, {"start": 2617.26, "end": 2619.6400000000003, "text": " because obviously, I won't get into this,"}, {"start": 2619.6400000000003, "end": 2621.32, "text": " but like most of the Muslim people"}, {"start": 2621.32, "end": 2625.06, "text": " are really nice folks and that's it."}, {"start": 2625.06, "end": 2628.8, "text": " They did also consider the energy usage."}, {"start": 2628.8, "end": 2632.76, "text": " And one really surprising fact was this."}, {"start": 2632.76, "end": 2636.4, "text": " So though 
models like GPT-3 consumes significant resources"}, {"start": 2636.4, "end": 2639.28, "text": " during training, they can be surprisingly efficient"}, {"start": 2639.28, "end": 2640.1600000000003, "text": " once trained."}, {"start": 2640.1600000000003, "end": 2644.32, "text": " So even with the full GPT-3, 175 billion per am,"}, {"start": 2644.32, "end": 2647.2400000000002, "text": " a large model, generating 100 pages of content"}, {"start": 2647.24, "end": 2652.24, "text": " from a trained model can cost on the order of 0.4 kilowatt hour"}, {"start": 2653.4799999999996, "end": 2655.52, "text": " or only a few cents in energy costs."}, {"start": 2655.52, "end": 2656.7999999999997, "text": " It's actually pretty efficient"}, {"start": 2656.7999999999997, "end": 2659.3799999999997, "text": " once you train this huge monster."}, {"start": 2659.3799999999997, "end": 2664.3799999999997, "text": " And then during the lifetime of the model,"}, {"start": 2664.72, "end": 2669.72, "text": " it can be like the energy costs can be amortized potentially."}, {"start": 2670.2, "end": 2671.64, "text": " Yep."}, {"start": 2671.64, "end": 2674.52, "text": " So that was pretty much all I had to say."}, {"start": 2674.52, "end": 2677.56, "text": " I could make this video much longer."}, {"start": 2677.56, "end": 2679.56, "text": " I mean, this is a huge paper."}, {"start": 2679.56, "end": 2681.88, "text": " So hopefully you liked it."}, {"start": 2681.88, "end": 2683.4, "text": " If you did, please leave a comment,"}, {"start": 2683.4, "end": 2685.08, "text": " what you liked, what you didn't like,"}, {"start": 2685.08, "end": 2687.16, "text": " what are your opinions on this?"}, {"start": 2687.16, "end": 2691.72, "text": " And yeah, regarding the nagging versus the hyping sides,"}, {"start": 2691.72, "end": 2692.88, "text": " I just wanna mention one thing."}, {"start": 2692.88, "end": 2694.44, "text": " So what's my opinion on this one?"}, {"start": 2694.44, "end": 2696.56, "text": " So I really think that these huge models,"}, {"start": 2696.56, "end": 2699.94, "text": " like huge transformers, are not something"}, {"start": 2699.94, "end": 2703.68, "text": " that's a digression or a bad thing for the future of AI."}, {"start": 2703.68, "end": 2704.6, "text": " Why do I think that?"}, {"start": 2704.6, "end": 2708.52, "text": " That's because your brain, most of your brain,"}, {"start": 2708.52, "end": 2710.96, "text": " like cerebellum, where there is,"}, {"start": 2710.96, "end": 2714.7599999999998, "text": " the largest amount of computation happens in cerebellum"}, {"start": 2714.7599999999998, "end": 2716.16, "text": " is actually unconscious."}, {"start": 2716.16, "end": 2720.4199999999996, "text": " So we are probably now in the phase"}, {"start": 2720.4199999999996, "end": 2725.04, "text": " where we are developing a synthetic primordial brain."}, {"start": 2725.04, "end": 2727.52, "text": " Basically, the equivalent that we have,"}, {"start": 2727.52, "end": 2730.44, "text": " which is pretty much regulating all of our"}, {"start": 2730.44, "end": 2734.8, "text": " vital functions, like controlling the heart,"}, {"start": 2734.8, "end": 2737.6, "text": " temperature, et cetera, and also perception,"}, {"start": 2737.6, "end": 2739.68, "text": " so just kind of extracting concepts"}, {"start": 2739.68, "end": 2741.5, "text": " from image and audio, et cetera."}, {"start": 2741.5, "end": 2743.84, "text": " So all of those require a lot of computation,"}, {"start": 2743.84, "end": 2745.6, "text": " 
and I don't see this as a digression."}, {"start": 2745.6, "end": 2747.48, "text": " I just see it as a supplement,"}, {"start": 2747.48, "end": 2750.96, "text": " and as a step towards the artificial general intelligence,"}, {"start": 2750.96, "end": 2754.8, "text": " which will happen once we are able to simulate"}, {"start": 2754.8, "end": 2757.64, "text": " the cognitive layer above the primordial brain level."}, {"start": 2757.64, "end": 2761.24, "text": " So those are just some two cents on my side"}, {"start": 2761.24, "end": 2762.68, "text": " about what I think about this."}, {"start": 2762.68, "end": 2765.8399999999997, "text": " And I have some gut feeling that graph neural networks"}, {"start": 2765.8399999999997, "end": 2768.6, "text": " and causational relations that Judea Pearl"}, {"start": 2770.24, "end": 2772.6, "text": " has been pushing for decades now"}, {"start": 2772.6, "end": 2776.2, "text": " will play a significant role, as well as symbolic AI."}, {"start": 2776.2, "end": 2779.2, "text": " So I don't think we should kinda disregard all of those,"}, {"start": 2779.2, "end": 2782.7599999999998, "text": " and we should just be cognizant that all of those"}, {"start": 2782.7599999999998, "end": 2785.16, "text": " will probably somehow play a role"}, {"start": 2785.16, "end": 2789.3199999999997, "text": " like in this long-term goal towards achieving AGI."}, {"start": 2789.3199999999997, "end": 2790.8399999999997, "text": " So that was pretty much it."}, {"start": 2790.8399999999997, "end": 2793.0, "text": " If you liked this video, consider subscribing"}, {"start": 2793.0, "end": 2794.8399999999997, "text": " to this channel and hit that bell icon"}, {"start": 2794.8399999999997, "end": 2797.02, "text": " to get notified when I upload a new video."}, {"start": 2797.02, "end": 2799.7, "text": " And until next time, keep learning."}, {"start": 2799.7, "end": 2800.54, "text": " Deep."}, {"start": 2800.54, "end": 2815.54, "text": " We'll see you next time."}]
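To make the bias-probing procedure described in the GPT-3 segments above concrete, here is a minimal sketch of the idea: feed an occupation prompt to an autoregressive language model, take the next-token probability distribution, and compare the total probability assigned to a few male versus female identifier words. This assumes the publicly released GPT-2 weights via the Hugging Face transformers library as a stand-in (GPT-3's weights are not available), and the prompt template and identifier word lists are illustrative rather than the paper's exact ones.

```python
# Sketch of the occupation-bias probe: compare next-token probability mass on
# male vs. female identifier words, using public GPT-2 as a stand-in for GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def gender_probabilities(occupation: str):
    # Prompt variant of the form "The <occupation> was a", then inspect the next token.
    prompt = f"The {occupation} was a"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits              # (1, seq_len, vocab_size)
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

    def total_prob(words):
        total = 0.0
        for w in words:
            ids = tokenizer.encode(" " + w)            # leading space matters for BPE tokens
            if len(ids) == 1:                          # only count single-token identifiers
                total += next_token_probs[ids[0]].item()
        return total

    male = total_prob(["man", "male", "he"])           # illustrative identifier lists
    female = total_prob(["woman", "female", "she"])
    norm = male + female                               # simple normalization so the two sum to 1
    return male / norm, female / norm

print(gender_probabilities("detective"))               # (p_male, p_female)
print(gender_probabilities("nurse"))
```

Running this for a list of occupations and comparing the two normalized probabilities mirrors the "sum the probabilities of male vs. female identifiers" comparison the transcript describes, though the paper's exact prompts and normalization differ in detail.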
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=j6kuz_NqkG0
Vision Transformer (ViT) - An image is worth 16x16 words | Paper Explained
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video I do a (semi) deep dive of the "An image is worth 16x16 words: transformers for image recognition at scale" paper which introduced the Vision Transformer. The paper is very interesting as it showed that with minimal modifications transformers can give better results than CNNs on the image classification problem. Until now transformers were ruling the NLP world and now they are coming for the CV world as well! You'll learn about: ✔️ How the vision transformer works ✔️ Main ideas in the paper ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ paper: https://arxiv.org/abs/2010.11929 ✅ transformer Jupyter Notebook: https://github.com/gordicaleksa/pytorch-original-transformer/blob/main/The%20Annotated%20Transformer%20%2B%2B.ipynb ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Enter The Vision Transformer, Jupyter Notebook 1:00 Deep dive intro 3:14 How does Vision Transformer work? 4:39 Let's go even deeper 8:33 Positional encoding inductive bias 9:50 Model variants and results 11:50 VTAB benchmark results 13:00 Perf vs amount of pretrained data 16:08 What does Vision Transformer learn? (attention span) 19:35 Self-supervision vs Supervised learning 21:10 Scaling the transformer (future research prediction) 22:15 Positional Encodings details 24:15 Logging out ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #transformers #computervision #visiontransformer
Okay, I'm super excited about this paper. It's called An Image Is Worth 16x16 Words. And the reason is, this paper got me into Transformers two months ago when it was first published, like I think October 3rd. And it was under review and it still is. But I'm going to try and do a deep dive today, so hopefully you'll find it useful. Basically the first time I read it, like on maybe October 5th or something, I didn't understand many details, so I took a step back. I went and researched NLP and Transformers specifically, like read 30 papers, and even open sourced the original Transformer paper. And now I came full circle back, so I'm going to try and explain what I learned and hopefully make this paper clear for you. By the way, it has a hard dependency on the original Transformer, so if you want to understand that, I have a Jupyter notebook, which you may find useful. As usual, I'll just link it in the description so you can play with it. Okay, without further ado, let's jump into the deep dive. Okay, so the basic premise here is that we don't need CNNs to solve computer vision as we thought until now. So the Transformer did the same thing it did in the NLP field, basically showing we don't need CNNs, we don't need recurrence, we can just use attention, self-attention mechanisms, and that should be enough. So yeah, it attains superb results on the image classification task, and it took less time to compute, and that's what's amazing. A big note here is that it achieved all of those results, which we'll see really soon, in the big data regime. So it needs a lot of images in order to accomplish these results, and that's a thing to keep in mind. A good thing with Transformers in general is that they can be scaled up. You probably know about GPT-3, which has around 175 billion params, and I assume something similar will happen with the Vision Transformer in one of the future iterations of this paper. Yeah, ResNets were, until this paper, dominating the different image classification benchmarks. It's not like this paper was the first one to introduce Transformers to computer vision, that's not the case. Many other people tried. In the related work here, we can see that people already tried doing some sort of attention, including attention in CNNs, either replacing certain layers of the CNN with attention modules or just kind of complementing them. And there were also some approaches where people tried to use more sparse attention patterns and such, but it turned out to be really complicated in the engineering sense of the word. So yeah, they didn't give the results this paper did. So let's jump to the meat of this paper, and I'm going to try to explain how it works using this image. I'm gonna zoom in a little bit. Okay, so maybe first up, a high level overview. Basically, what we have here is the input image, we split it into patches, and then we flatten those patches in raster order, so basically, this is the first patch, second, third, fourth, fifth, sixth, et cetera. What we do then is we just flatten out those patches and we do a linear projection here. We add the positional encodings, if you're familiar with the original transformer. I'll get into details, but for now, this is a high level overview. And then we just feed those into the original transformer, so this is the block here. And then just by putting a simple MLP, a multilayer perceptron head, here, we can do classification. 
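As a rough illustration of that high level overview, here is a minimal PyTorch sketch of the encoder-plus-classification-head part, assuming the patch tokens, class token, and positional encodings have already been prepared. The layer count, head count, and dimensions are illustrative guesses, not the paper's exact configuration.

import torch
import torch.nn as nn

embed_dim, num_classes = 768, 1000
# Stack of standard transformer encoder layers (the paper's encoder uses pre-norm and GELU;
# this off-the-shelf layer is only a stand-in for the sketch).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=12, dim_feedforward=3072, batch_first=True),
    num_layers=12,
)
mlp_head = nn.Linear(embed_dim, num_classes)

tokens = torch.randn(2, 197, embed_dim)   # 1 class token + 196 patch tokens per image
encoded = encoder(tokens)                 # (2, 197, 768)
logits = mlp_head(encoded[:, 0])          # classify from the class token only -> (2, 1000)
print(logits.shape)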
And that's the high level overview. And now this transformer, just a short recap, consists of these layers, which have this multi-head attention module, yeah, followed by a multilayer perceptron. So that's the basic structure of a layer in the transformer. Now, let's zoom in a little bit and get into the details of how it exactly works. I zoomed in a little bit and I'm going to tell you all of the details that go into this preprocessing step in the Vision Transformer. So we take an ImageNet image, which looks like this, and it's 224 by 224. And that's actually where the name of the paper comes from. So basically what they now do is they split the image into patches, which are either 14 by 14 in size or 16 by 16. They have three flavors. They have the base, the large, and the huge model. We'll see that in a minute. But let's assume we use 14 by 14 here. And yeah, basically once you have this, if you divide 224 by 14, you get 16. So basically you have 16 by 16 patches. And that's where the title of the paper comes from, because we're basically treating all of these patches as simple tokens and we input them into the transformer, so you can treat each one as a word, whatever. That's just a nice metaphor. Now what you do, you take this patch, you flatten it out. So you get a vector that's 14 by 14. That's 196, I think. And that's this part. So we are now here. And now we do the simple linear projection, which will embed this vector into the model dimension space. So for example, for the base model, the base vision transformer, they use 768. So that's the same as BERT base. So we end up with a vector that has 768 dimensions. And now we are here. So finally we just add the positional encoding to this representation and we input it into the transformer, and that's pretty much it. Now for the positional encodings, if you're not familiar, what they did here, and they did some ablation studies, but they ended up using the 1D positional embedding. So you basically have a table like this. And so for example, if we are looking at this specific token, so patch embedding, because it's the first patch, we just take the vector from this table, from position one, and that's the positional encoding, which is actually learned during training. So that's the trick. So you just add this one to the embedded patch and that's the final representation. One thing I didn't mention here, and this is similar to BERT, is that they have this notion of a class embedding, which is the same thing that BERT used. So this one is also learned and it doesn't come from the image itself. It's just hard coded in the model architecture. So that was a detailed overview of how this stem part of the model looks. And one minor detail is that this linear projection is basically a matrix E, and it's shared among different patches. So we use the same matrix E to project all of these patches, the flattened vectors of the patches, into the embedding space. So yeah, that was everything I wanted to mention there. Okay, I zoomed out. Let's go to the second important part. So there is one thing in this model that kind of has an inductive bias, and that part has to do with positional encodings. So when you fine tune these models, previous papers showed that when you fine tune for image classification tasks, you wanna fine tune on a higher resolution. 
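Here is a minimal sketch of the patch-embedding stem just described: split into patches, flatten, apply the shared linear projection E, prepend a learned class token, and add the learned 1D positional embeddings. It assumes the commonly cited ViT-Base/16 numbers (224x224 inputs, 16x16 patches, 768 dimensions) purely for illustration.

import torch
import torch.nn as nn

class PatchEmbeddingStem(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2            # 14 * 14 = 196
        patch_dim = in_channels * patch_size * patch_size            # 3 * 16 * 16 = 768
        self.projection = nn.Linear(patch_dim, embed_dim)            # the shared matrix E
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))  # learned, like BERT's [CLS]
        self.pos_embedding = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))  # learned 1D table

    def forward(self, images):                                        # images: (B, 3, H, W)
        B, C, H, W = images.shape
        p = self.patch_size
        # Split into non-overlapping p x p patches and flatten each one into a vector.
        patches = images.unfold(2, p, p).unfold(3, p, p)              # (B, C, H/p, W/p, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.projection(patches)                             # (B, num_patches, embed_dim)
        cls = self.cls_token.expand(B, -1, -1)                        # same learned token for every image
        tokens = torch.cat([cls, tokens], dim=1)                      # prepend the class token
        return tokens + self.pos_embedding                            # add learned positional encodings

x = torch.randn(2, 3, 224, 224)
print(PatchEmbeddingStem()(x).shape)                                  # torch.Size([2, 197, 768])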
So if you have a higher resolution, as a consequence you end up with a bigger number of patches. So you're not 16 by 16 anymore. The number of patches becomes bigger, so the sequence becomes larger. And because of that, the positional encodings wouldn't make any sense if you just applied them during the fine tuning process. So you have to do 2D interpolation. So that's pretty much one of the only parts where they introduce bias. The transformer model itself has some biases like the residual connections, et cetera, but it's more general than some other models like CNNs and LSTMs. So that was an important thing to mention. Now, as I mentioned, we have three variants of the model. They use the base and large, which roughly correspond to BERT base and BERT large when it comes to model size. And then we have the vision transformer huge, which is even bigger than the vision transformer large. And let us see the interesting results they got. So first of all, we can see the previous state-of-the-art, so the previous SOTAs on these benchmarks, and those were ResNets and EfficientNets. Now, when you take the huge variant of the vision transformer and pre-train it on JFT, which is Google's proprietary data set that contains around 303 million images, you basically get SOTA on every single benchmark. And additionally, you need less compute to do that. And if we take a look at, for example, this vision transformer large, it took only 0.68 thousand TPU v3 core days. And that's a mouthful, I'll explain it in a minute. And basically that's much less than the 9.9 thousand that the previous ResNet took. So what this means is every single TPU v3 chip has two of these cores. So if you have a single TPU v3 chip, you'd need 340 days to pre-train this model. And that's a lot of compute that pretty much only Google and the richest industrial labs in the world have. So not everybody can reproduce this paper. And that's the bad thing about these kinds of research: even though they do advance the field, people can't really reproduce this. And yeah, that sucks. Okay, let's go further. An interesting thing is this VTAB benchmark. VTAB is interesting because it has three types of data. First of all, it's a low-data benchmark, meaning it only has around a thousand images per task. And secondly, it splits those images into three categories. So natural images, which are kind of like CIFAR-10, CIFAR-100 data set images, so natural looking images like dogs, whatever, trees. And then we have these specialized images like medical and satellite images. And then they have this geometric, I think structured, class of images. And basically what they showed is that the huge variant of the vision transformer outperformed every other SOTA across all of those categories, which is really reassuring. So the transfer learning for these huge transformers does work and it gives us nice results. Okay, going further, a really interesting analysis they did here is how do these models perform with respect to ResNets, depending on the amount of pre-training data they have. In the left figure here, we can see, when we pre-train the vision transformer on ImageNet, which has 1.3 million images, then on ImageNet 21K, which has 14 million images, and finally on JFT, which has 303 million images, what's the accuracy, what's the performance of the models? 
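A hedged sketch of the 2D interpolation of the learned positional embeddings mentioned above, used when fine-tuning at a higher resolution gives you more patches. The bicubic mode and the grid sizes are assumptions for illustration, not necessarily what the authors' code does.

import torch
import torch.nn.functional as F

def interpolate_pos_embedding(pos_embedding, old_grid, new_grid):
    # pos_embedding: (1, 1 + old_grid**2, dim) -- class-token embedding + per-patch embeddings.
    cls_pos, patch_pos = pos_embedding[:, :1], pos_embedding[:, 1:]
    dim = patch_pos.shape[-1]
    # Reshape the flat 1D table back onto its 2D patch grid, interpolate, then flatten again.
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)   # (1, dim, g, g)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid), mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_pos, patch_pos], dim=1)

# E.g. going from 224x224 (14x14 patches) to 384x384 (24x24 patches) with 16x16 pixel patches:
old = torch.randn(1, 1 + 14 * 14, 768)
print(interpolate_pos_embedding(old, 14, 24).shape)   # torch.Size([1, 577, 768])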
And as we can see here, in the low data regimes, ResNets are the best. And that's because they have a useful inductive bias, meaning the kernels; that's basically the bias part of the ResNet. Basically you have a kernel which only looks at a local part of the image, so it can't have a global receptive field. And then going deeper into the model, you get a bigger receptive field. That's one of the biases that ResNets have. The second one is the residual connections, but yeah. And the best vision transformer is actually the base variant. And that's not surprising actually. And then as we increase the amount of pre-training data, we can see that finally, in the big data regime, the huge variant of the vision transformer performs the best, and it's better than ResNets and it's also more compute efficient. So that's a cool thing. The right figure shows the same thing, just splitting JFT into smaller chunks. And we can see again, the light blue line here is the vision transformer large, and it outperforms ResNets again in the big data regimes. So that was interesting. That was basically one of the main points this paper made. In big data regimes, those inductive biases that we put into our models like CNNs are not useful anymore and we should let the model learn those biases itself. And we'll see in a bit an interesting showcase that shows us how the attention varies across different layers. Here, yeah, this figure is not so important. Basically, it shows the vision transformers are more efficient when it comes to compute. And this line is interesting. So third, vision transformers appear not to saturate within the range tried, motivating future scaling efforts. So you can basically expect something similar to GPT-3 in the world of computer vision. That's my prediction. I mean, they'll probably stop as soon as they see saturation because, yeah, you have some fixed budget. You don't want to overspend if you don't get any gains out of it. So yeah, we can expect a bigger vision transformer, that's for sure. Now, this part is really interesting. This is the thing I was mentioning about the attention. So they took the large variant of the model and they analyzed the different attention heads. In particular, this model has 24 layers and it has 16 attention heads. And what you can see here is that in the shallower layers, some of those attention heads have a really localized attention. So they just attend to pixels in the neighborhood of the current pixel, whereas the other attention heads have a global attention, compared to CNNs, which only have small local receptive fields in these shallower layers. So if I were to draw how the CNN receptive field would change with depth, ConvNets would probably have a curve like this. They would increase linearly. So with every single layer, you increase the receptive field linearly. And then once every single pixel attends to the whole input image, you basically stop increasing the receptive field. And this is how the CNN receptive field graph would look. And so all of this region is where the vision transformer has an advantage, because even the shallower layers can attend globally, to every single pixel in the image. And that's what gives it the advantage, because you can learn how to attend and when to attend, and the bias is not needed in the big data regimes, as I already mentioned. A couple more interesting figures here on the left. 
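A tiny numerical illustration of the receptive field argument above: a stack of stride-1 3x3 convolutions grows its receptive field roughly linearly with depth until it covers the image, while a transformer layer can attend to every patch already at depth 1. The 3x3 kernel and 224-pixel image are assumed example numbers.

def conv_receptive_field(num_layers, kernel_size=3, image_size=224):
    rf = 1
    for _ in range(num_layers):
        rf = min(rf + (kernel_size - 1), image_size)   # each stride-1 layer adds k-1 pixels
    return rf

for depth in (1, 5, 10, 50, 112):
    print(f"{depth:>3} conv layers -> receptive field {conv_receptive_field(depth)} px "
          f"(a ViT layer can already attend across all 224 px / all patches at depth 1)")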
This is how the 1D positional encodings look after the model has been pre-trained. And you can see that this 2D representation emerges, which is why, and we'll see that in the appendix, the other variants of positional embeddings they tried didn't work any better than the 1D ones. Finally, they took the E matrix, so the embedding matrix, and they plotted its principal components. And if you're familiar with the AlexNet paper or CNNs in general, you can notice that these look like the learned kernels that CNNs learn. So it's doing a similar thing, obviously, as CNNs. Let me zoom out and go and see the other parts of the paper. First, here's a nice visualization of what the transformer attends to. And you can see it's meaningful. So this is what we would probably expect the model to attend to, the salient objects in the picture. Self-supervision: they also tried a self-supervised approach instead of the fully supervised pre-training approach, and it's still behind the supervised training, but it's worth discussing this part, and I'll discuss it more in the appendix. Again, yeah, let me jump over the references. So I'm expecting to see huge vision transformers somewhere in the near future, probably. So I mentioned self-supervision, and if you're familiar with NLP, you know that that's the way big transformer models are pre-trained in the NLP field. On the other hand, computer vision was mostly about supervised pre-training objectives. And so in this paper, they tried doing self-supervision in the context of computer vision, and they did get encouraging results, still not as good as the supervised results. So how they did it: they did it similarly to BERT, if you're familiar with BERT, so a similar approach. They corrupt 50% of the input patch embeddings, and they try to predict three things. First is the mean color of the patch. So you basically take the original patch, you find the mean value, so that's the mean color, then you mask it, and then you make the model predict what the color should be, depending on the context. So that's how the self-supervised thing works. They didn't only try the color, they also tried predicting a downsampled version of the patch, that's 4 by 4 instead of 16 by 16, as well as the color, and they also tried regressing the whole thing, so doing auto-encoding pretty much. And as I already mentioned, the results are not as good, but that's a promising research direction, so yeah, we can expect innovation on that front. Let me go through two more sections which I found important. The first one: they did a scaling effort to figure out how they should scale their model to get the best performance out of it. They tried increasing depth, they tried increasing width, they tried making patches smaller, and obviously depth gave nice results. As we can see here on the chart, basically increasing depth gives us better performance. The compute also goes up, obviously, but yeah, increasing depth will increase the performance. Now the interesting thing for me is the patch size: the smaller it is, the more performance they gain. So I'm gonna make a prediction now. Basically somebody will take some of the efficient transformers like Reformer, Longformer, or Linformer, and they'll do the same thing as the vision transformer. 
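A rough sketch of the masked-patch, mean-color objective described above: corrupt a fraction of the patches, encode them, and regress the mean RGB color of each corrupted patch. Only the 50% corruption rate comes from the description in the video; the zeroing-out corruption, the stand-in encoder and head, and the MSE loss are simplifying assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_color_pretrain_loss(patches, encoder, color_head, mask_ratio=0.5):
    # patches: (B, N, 3 * p * p) flattened RGB patches with values in [0, 1].
    B, N, _ = patches.shape
    target = patches.reshape(B, N, 3, -1).mean(dim=-1)            # (B, N, 3): mean color per patch
    mask = torch.rand(B, N, device=patches.device) < mask_ratio   # choose ~50% of patches to corrupt
    corrupted = patches.clone()
    corrupted[mask] = 0.0                                         # simple corruption: zero them out
    tokens = encoder(corrupted)                                   # (B, N, dim) output tokens
    pred = color_head(tokens)                                     # (B, N, 3) predicted mean colors
    return F.mse_loss(pred[mask], target[mask])                   # only score the corrupted patches

# Toy usage with stand-in modules (a real setup would use the transformer encoder itself):
encoder = nn.Linear(768, 256)
color_head = nn.Linear(256, 3)
patches = torch.rand(2, 196, 768)
print(mean_color_pretrain_loss(patches, encoder, color_head))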
They'll just use smaller patches and they'll get better results. That's research I expect to come over the next months. And finally, let me finish up and wrap up this already long deep dive by talking a little bit more about positional encodings. They tried four things, and the biggest gap was between using no positional embeddings, i.e. treating the patches as a bag of patches, and using 1D positional embeddings. So that's where the highest gap was, and you can see the numbers here. So using positional embeddings helps. Now exactly which heuristic you use, let's call it that way, matters a bit less. So they tried 2D positional embeddings as well as the 1D ones which I already explained. Let me try to quickly explain how the 2D positional embeddings work. They basically have two tables this time, and how you index into these tables is the following. So you have your input image, and again you split it into patches. And so this one will be patch one one, this one will be patch one two, this one will be patch one three, this is row two, et cetera. So you basically take the indices; so let's say this patch is one, two. You take the first embedding from one table and you concatenate it with the second embedding from the other table. So this one represents the X dimension, this one represents the Y dimension, and you just train the transformer like that. And they showed that that approach won't do anything better than just using a simple 1D positional encoding. So that was pretty much it. This was already probably too long. Hopefully you found this video useful. This is the first time I'm doing a deep dive, like a semi deep dive, of a research paper on this channel. If you found it useful, please leave a comment down below if you'd like me to make more of these. And yeah, subscribe if you haven't already and click that bell icon to get notified when I upload a new video. And until next time, keep learning deep. Bye.
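A small sketch of the 2D positional embedding variant described above: one learned table for the row index, one for the column index, and each patch position gets the concatenation of its row and column embeddings. The grid size and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class TwoDPositionalEmbedding(nn.Module):
    def __init__(self, grid_size=14, embed_dim=768):
        super().__init__()
        assert embed_dim % 2 == 0
        self.grid_size = grid_size
        self.row_table = nn.Embedding(grid_size, embed_dim // 2)   # Y-dimension table
        self.col_table = nn.Embedding(grid_size, embed_dim // 2)   # X-dimension table

    def forward(self):
        rows = torch.arange(self.grid_size)
        cols = torch.arange(self.grid_size)
        # Every (row, col) patch position gets [row embedding ; column embedding].
        row_emb = self.row_table(rows)[:, None, :].expand(-1, self.grid_size, -1)
        col_emb = self.col_table(cols)[None, :, :].expand(self.grid_size, -1, -1)
        pos = torch.cat([row_emb, col_emb], dim=-1)                 # (grid, grid, embed_dim)
        return pos.reshape(self.grid_size * self.grid_size, -1)     # flatten to (num_patches, embed_dim)

print(TwoDPositionalEmbedding()().shape)                            # torch.Size([196, 768])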
[{"start": 0.0, "end": 2.32, "text": " Okay, I'm super excited about this paper."}, {"start": 2.32, "end": 5.5200000000000005, "text": " It's called An Image Is Worth 16x16 Words."}, {"start": 5.5200000000000005, "end": 8.48, "text": " And the reason is, this paper got me into Transformers"}, {"start": 8.48, "end": 10.24, "text": " two months ago when it was first published,"}, {"start": 10.24, "end": 12.08, "text": " like I think October 3rd."}, {"start": 12.08, "end": 14.52, "text": " And it was under review and it still is."}, {"start": 14.52, "end": 17.580000000000002, "text": " But I'm going to try and do a deep dive today,"}, {"start": 17.580000000000002, "end": 19.44, "text": " so hopefully you'll find it useful."}, {"start": 20.8, "end": 22.84, "text": " Basically the first time I read it,"}, {"start": 22.84, "end": 26.18, "text": " like on maybe October 5th or something,"}, {"start": 26.18, "end": 27.64, "text": " I didn't understand many details,"}, {"start": 27.64, "end": 29.96, "text": " so I took a step back."}, {"start": 29.96, "end": 33.620000000000005, "text": " Went and researched NLP and Transformers specifically,"}, {"start": 33.620000000000005, "end": 35.68, "text": " like read 30 papers, even open sourced"}, {"start": 35.68, "end": 37.6, "text": " the original Transformer paper."}, {"start": 37.6, "end": 40.04, "text": " And now I came a full circle back,"}, {"start": 40.04, "end": 42.2, "text": " so I'm going to try and explain what I learned"}, {"start": 42.2, "end": 45.760000000000005, "text": " and hopefully make this paper clear for you."}, {"start": 45.760000000000005, "end": 47.480000000000004, "text": " By the way, it has a hard dependency"}, {"start": 47.480000000000004, "end": 48.92, "text": " on the original Transformer,"}, {"start": 48.92, "end": 50.88, "text": " so if you want to understand that,"}, {"start": 52.94, "end": 57.040000000000006, "text": " I have a Jupyter notebook, which you may find useful."}, {"start": 57.04, "end": 60.98, "text": " So as usual, I'll just link it in the description"}, {"start": 60.98, "end": 62.56, "text": " so you can play with it."}, {"start": 63.92, "end": 67.06, "text": " Okay, without further ado, let's jump into the deep dive."}, {"start": 67.06, "end": 72.06, "text": " Okay, so the basic premise here is that we don't need,"}, {"start": 72.98, "end": 77.0, "text": " so we don't need CNNs to solve computer vision"}, {"start": 77.0, "end": 78.74, "text": " as we thought until now."}, {"start": 79.94, "end": 84.25999999999999, "text": " So Transformer did the same thing as it did in the NLP field,"}, {"start": 84.25999999999999, "end": 85.9, "text": " basically show we don't need CNNs,"}, {"start": 85.9, "end": 89.88000000000001, "text": " we don't need recurrence, we can just do like attention,"}, {"start": 89.88000000000001, "end": 93.80000000000001, "text": " self-attention mechanisms, and that should be enough."}, {"start": 93.80000000000001, "end": 98.80000000000001, "text": " So yeah, it attains super results on the computer vision"}, {"start": 98.88000000000001, "end": 100.82000000000001, "text": " on the image classification task,"}, {"start": 100.82000000000001, "end": 105.4, "text": " and it took less time to compute, and that's what's amazing."}, {"start": 105.4, "end": 109.0, "text": " So a big note here is that it achieved all of those results,"}, {"start": 109.0, "end": 112.96000000000001, "text": " which we'll see really soon in the big data regime."}, {"start": 112.96000000000001, "end": 115.84, "text": " 
So it needs a lot of images in order to accomplish"}, {"start": 115.84, "end": 118.4, "text": " these results, so that's a thing to keep in mind."}, {"start": 118.4, "end": 120.44, "text": " So a good thing with Transformer in general"}, {"start": 120.44, "end": 123.76, "text": " is that they can be scaled up."}, {"start": 123.76, "end": 126.4, "text": " So you probably know about GPT-3,"}, {"start": 126.4, "end": 130.42000000000002, "text": " which has around 175 billion prams,"}, {"start": 130.42000000000002, "end": 133.42000000000002, "text": " and I assume something similar will happen"}, {"start": 133.42000000000002, "end": 136.12, "text": " with the Vision Transformer in one of the future iterations"}, {"start": 136.12, "end": 138.16, "text": " of this paper."}, {"start": 139.52, "end": 143.04, "text": " Yeah, ResNets are still, until this paper,"}, {"start": 143.04, "end": 145.68, "text": " we're dominating the different benchmarks,"}, {"start": 145.68, "end": 147.72, "text": " image classification benchmarks."}, {"start": 147.72, "end": 150.8, "text": " So it's not like this paper was the first one"}, {"start": 150.8, "end": 153.76000000000002, "text": " to introduce Transformers to computer vision,"}, {"start": 153.76000000000002, "end": 155.88, "text": " that's not the case."}, {"start": 155.88, "end": 157.28, "text": " Many other people tried."}, {"start": 157.28, "end": 160.08, "text": " So in the related work here, we can see that people"}, {"start": 160.08, "end": 164.8, "text": " already tried at doing some sort of attention,"}, {"start": 164.8, "end": 168.04000000000002, "text": " so including attentions into CNNs,"}, {"start": 168.04000000000002, "end": 172.92000000000002, "text": " so either replacing certain layers of the CNN"}, {"start": 172.92, "end": 177.28, "text": " with attention modules or just kind of complementing it."}, {"start": 177.28, "end": 180.23999999999998, "text": " And there were also some approaches where people tried"}, {"start": 180.23999999999998, "end": 184.39999999999998, "text": " to use some more sparse attention patterns and stuff,"}, {"start": 184.39999999999998, "end": 186.44, "text": " but it turned out to be really complicated"}, {"start": 186.44, "end": 188.92, "text": " in the engineering sense of the word."}, {"start": 188.92, "end": 193.88, "text": " So yeah, they didn't give the results as this paper did."}, {"start": 193.88, "end": 196.48, "text": " So let's jump to the meat of this paper,"}, {"start": 196.48, "end": 199.35999999999999, "text": " and I'm going to try to explain how it works"}, {"start": 199.35999999999999, "end": 200.26, "text": " using this image."}, {"start": 200.26, "end": 202.32, "text": " I'm gonna zoom in a little bit."}, {"start": 202.32, "end": 207.32, "text": " Okay, so maybe first up, like a high level overview."}, {"start": 208.4, "end": 213.4, "text": " Basically, what we have here is we have the input image,"}, {"start": 213.76, "end": 218.0, "text": " we split it into patches, and then we flatten those patches"}, {"start": 218.0, "end": 220.16, "text": " like in the rest order, so basically,"}, {"start": 220.16, "end": 222.64, "text": " this is the first patch, second, third, fourth,"}, {"start": 222.64, "end": 224.56, "text": " fifth, sixth, et cetera."}, {"start": 224.56, "end": 228.88, "text": " What we do then is we just flatten out those patches"}, {"start": 228.88, "end": 231.4, "text": " and we do a linear projection here."}, {"start": 231.4, "end": 232.88, "text": " We add the positional 
encodings,"}, {"start": 232.88, "end": 235.02, "text": " if you're familiar with the original transformer."}, {"start": 235.02, "end": 237.24, "text": " I'll get into details, but for now,"}, {"start": 237.24, "end": 238.84, "text": " this is a high level overview."}, {"start": 238.84, "end": 241.92000000000002, "text": " And then we just feed those into the original transformer,"}, {"start": 241.92000000000002, "end": 243.24, "text": " so this is the block here."}, {"start": 244.16, "end": 248.04000000000002, "text": " And then just putting a simple MLP,"}, {"start": 248.04000000000002, "end": 252.12, "text": " multilayer perceptron hat here, we can do classification."}, {"start": 252.12, "end": 253.70000000000002, "text": " And that's the high level overview."}, {"start": 253.70000000000002, "end": 256.6, "text": " And now this transformer, just a short recap,"}, {"start": 256.6, "end": 258.36, "text": " consists of these layers,"}, {"start": 258.36, "end": 261.88, "text": " which have this multi-hat attention hat,"}, {"start": 261.88, "end": 264.48, "text": " multi-hat attention module, yeah,"}, {"start": 264.48, "end": 269.04, "text": " and followed by like multilayer perceptron."}, {"start": 269.04, "end": 270.76, "text": " So that's the basic structure"}, {"start": 270.76, "end": 273.52000000000004, "text": " of the layer in the transformer."}, {"start": 274.56, "end": 276.58000000000004, "text": " Now, let's zoom in a little bit"}, {"start": 276.58000000000004, "end": 279.2, "text": " and get into the details, how it exactly works."}, {"start": 279.2, "end": 282.68, "text": " I zoomed in a little bit and I'm going to tell you"}, {"start": 282.68, "end": 286.2, "text": " all of the details that go into this preprocessing step"}, {"start": 287.16, "end": 288.12, "text": " in the Vision Transformer."}, {"start": 288.12, "end": 291.74, "text": " So we take an ImageNet image,"}, {"start": 292.8, "end": 295.72, "text": " which has, which looks like this,"}, {"start": 296.84000000000003, "end": 300.92, "text": " and it's 224 by 224."}, {"start": 302.74, "end": 305.5, "text": " And that's actually where the name of the paper comes from."}, {"start": 305.5, "end": 306.96, "text": " So basically what they now do"}, {"start": 306.96, "end": 309.44, "text": " is they split the image into patches,"}, {"start": 309.44, "end": 313.72, "text": " which are either 14 by 14 in size or 16 by 16."}, {"start": 313.72, "end": 314.68, "text": " They have three flavors."}, {"start": 314.68, "end": 317.32, "text": " They have the base, the large, and the huge model."}, {"start": 317.32, "end": 318.68, "text": " We'll see that in a minute."}, {"start": 318.68, "end": 321.86, "text": " But like, let's assume we use 14 by 14 here."}, {"start": 323.88, "end": 328.04, "text": " And yeah, basically once you have this,"}, {"start": 328.04, "end": 331.71999999999997, "text": " if you divide 224 by 14, you get 16."}, {"start": 331.71999999999997, "end": 334.56, "text": " So basically you have 16 by 16 patches."}, {"start": 334.56, "end": 337.8, "text": " And that's where the, like the title of the paper"}, {"start": 337.8, "end": 340.24, "text": " comes from because we're basically treating"}, {"start": 340.24, "end": 342.68, "text": " all of these patches as simple tokens"}, {"start": 342.68, "end": 345.0, "text": " and we input them into the transformer"}, {"start": 345.0, "end": 347.52, "text": " so you can treat it as a word, whatever."}, {"start": 347.52, "end": 349.6, "text": " Like that's just like a nice 
metaphor."}, {"start": 350.6, "end": 354.04, "text": " Now what you do, you take this patch,"}, {"start": 354.04, "end": 354.88, "text": " you flatten it out."}, {"start": 354.88, "end": 359.28, "text": " So you get a vector that's like 14 by 14."}, {"start": 359.28, "end": 362.78, "text": " That's 196, I think."}, {"start": 362.78, "end": 364.52, "text": " And that's this part."}, {"start": 364.52, "end": 366.06, "text": " So we are now here."}, {"start": 367.96, "end": 371.48, "text": " And now we do the simple linear projection,"}, {"start": 371.48, "end": 376.48, "text": " which will embed this vector into the model dimension space."}, {"start": 376.8, "end": 381.20000000000005, "text": " So we'll basically, so for example, for the base model,"}, {"start": 381.20000000000005, "end": 385.0, "text": " base vision transformer, they use 796."}, {"start": 385.0, "end": 386.86, "text": " So that's the same as bird base."}, {"start": 388.16, "end": 393.16, "text": " So we end up with a vector that has 796 dimensions."}, {"start": 395.56, "end": 397.18, "text": " And now we are here."}, {"start": 397.18, "end": 402.18, "text": " So finally we just add the positional encoding"}, {"start": 404.98, "end": 408.5, "text": " to this representation and we input it into the transformer"}, {"start": 408.5, "end": 409.98, "text": " and that's pretty much it."}, {"start": 409.98, "end": 413.14, "text": " Now for the positional encodings, if you're not familiar,"}, {"start": 413.14, "end": 415.88, "text": " what they did here, and they did some ablation studies,"}, {"start": 415.88, "end": 419.58, "text": " but they ended up using the 1D positional embedding."}, {"start": 419.58, "end": 424.58, "text": " So you basically have a table like this."}, {"start": 424.58, "end": 429.09999999999997, "text": " And so for example, if you are using,"}, {"start": 429.09999999999997, "end": 434.09999999999997, "text": " if we are looking at this specific token,"}, {"start": 434.5, "end": 437.3, "text": " so patch embedding, we just,"}, {"start": 437.3, "end": 439.74, "text": " so because it's the first patch,"}, {"start": 439.74, "end": 443.06, "text": " we just take the vector from this table,"}, {"start": 443.06, "end": 447.09999999999997, "text": " from the position one, and that's the positional encoding,"}, {"start": 447.09999999999997, "end": 450.0, "text": " which is actually learned during the training."}, {"start": 450.0, "end": 450.84, "text": " So that's the trick."}, {"start": 450.84, "end": 454.46, "text": " So you just add this one to the embedded patch"}, {"start": 454.46, "end": 456.46, "text": " and that's the final representation."}, {"start": 456.46, "end": 458.5, "text": " So one thing I didn't mention here,"}, {"start": 458.5, "end": 462.06, "text": " and this is similar to BERT, is that they use this,"}, {"start": 462.06, "end": 466.64, "text": " they have this notion of a class embedding,"}, {"start": 466.64, "end": 468.53999999999996, "text": " which is the same thing that BERT used."}, {"start": 468.53999999999996, "end": 471.38, "text": " So this one is also learned"}, {"start": 471.38, "end": 473.85999999999996, "text": " and it doesn't come from the image itself."}, {"start": 473.85999999999996, "end": 478.82, "text": " It's just hard coded in the model architecture."}, {"start": 478.82, "end": 481.38, "text": " So that was like a detailed overview"}, {"start": 481.38, "end": 485.62, "text": " of how this stem part of the model looks like."}, {"start": 485.62, "end": 489.26, "text": " And 
one minor detail is that this linear projection,"}, {"start": 489.26, "end": 493.3, "text": " it's like basically a matrix E,"}, {"start": 494.42, "end": 497.26, "text": " so it's shared among different patches."}, {"start": 497.26, "end": 501.26, "text": " So we use the same matrix E to project all of these patches,"}, {"start": 502.82, "end": 506.38, "text": " like flattened vectors of the patches"}, {"start": 506.38, "end": 508.78, "text": " into the embedding patch."}, {"start": 508.78, "end": 513.74, "text": " So yeah, that was everything I wanted to mention there."}, {"start": 513.74, "end": 515.42, "text": " Okay, I zoomed out."}, {"start": 515.42, "end": 518.5, "text": " Let's go to second important part."}, {"start": 518.5, "end": 522.18, "text": " And so there is one thing in this model"}, {"start": 522.18, "end": 525.9399999999999, "text": " that's kind of has like inductive bias,"}, {"start": 525.9399999999999, "end": 529.38, "text": " and that part has to do with positional encodings."}, {"start": 529.38, "end": 533.6999999999999, "text": " So when you fine tune these models,"}, {"start": 533.6999999999999, "end": 538.66, "text": " so previous papers showed"}, {"start": 538.66, "end": 543.3, "text": " that when you fine tune for image classification tasks,"}, {"start": 543.3, "end": 545.74, "text": " you wanna fine tune on higher resolution."}, {"start": 545.74, "end": 548.4599999999999, "text": " So what you end up, if you have higher resolution"}, {"start": 548.4599999999999, "end": 552.54, "text": " as a consequence, you end up with a bigger number of patches."}, {"start": 552.54, "end": 555.18, "text": " So you're not 16 by 16 anymore."}, {"start": 555.18, "end": 556.9399999999999, "text": " The number of patches becomes bigger,"}, {"start": 556.9399999999999, "end": 559.98, "text": " so the sequence becomes larger."}, {"start": 559.98, "end": 563.38, "text": " And because of that, the positional encodings"}, {"start": 563.38, "end": 566.3, "text": " wouldn't make any sense if you just applied them"}, {"start": 566.3, "end": 567.74, "text": " during the fine tuning process."}, {"start": 567.74, "end": 569.94, "text": " So you have to do 2D interpolation."}, {"start": 569.94, "end": 572.02, "text": " So that's the only part pretty much where,"}, {"start": 572.02, "end": 576.02, "text": " so that's one of the parts where they introduce bias."}, {"start": 576.02, "end": 578.1800000000001, "text": " So the transformer model itself has some biases"}, {"start": 578.1800000000001, "end": 580.54, "text": " like the residual connections, et cetera,"}, {"start": 580.54, "end": 583.42, "text": " but like it's more general than some other models"}, {"start": 583.42, "end": 587.1800000000001, "text": " like CNNs and LSTMs."}, {"start": 587.1800000000001, "end": 590.46, "text": " So that was an important thing to mention."}, {"start": 590.46, "end": 594.66, "text": " Now, as I mentioned, we have three variants of the model."}, {"start": 594.66, "end": 596.58, "text": " They use the base and large,"}, {"start": 596.58, "end": 600.38, "text": " which roughly correspond to BERT base and BERT large"}, {"start": 600.38, "end": 602.3000000000001, "text": " when it comes to model size."}, {"start": 602.3000000000001, "end": 604.9000000000001, "text": " And then we have the vision transformer huge,"}, {"start": 604.9000000000001, "end": 609.62, "text": " which is even bigger than the vision transformer large."}, {"start": 609.62, "end": 612.5400000000001, "text": " And let us see the 
interesting results they got."}, {"start": 612.5400000000001, "end": 617.0200000000001, "text": " So first of all, we can see two state-of-the-art"}, {"start": 617.0200000000001, "end": 620.82, "text": " so previous SODAs on these benchmarks,"}, {"start": 620.82, "end": 623.5, "text": " and those were ResNets and the FissionNets."}, {"start": 623.5, "end": 627.46, "text": " Now, when you take the huge variant"}, {"start": 627.46, "end": 631.06, "text": " of the vision transformer and pre-train it on the JFT,"}, {"start": 631.06, "end": 633.66, "text": " which is like Google's proprietary data set,"}, {"start": 633.66, "end": 636.78, "text": " which contains like 303 million images,"}, {"start": 638.18, "end": 643.18, "text": " you get basically SODAs on every single benchmark."}, {"start": 643.9, "end": 645.78, "text": " And additionally, you have a lesser,"}, {"start": 645.78, "end": 648.62, "text": " like you need less compute to do that."}, {"start": 648.62, "end": 651.02, "text": " And if we take a look at, for example,"}, {"start": 651.02, "end": 653.14, "text": " this vision transformer large,"}, {"start": 653.14, "end": 658.14, "text": " it took only 0.68 thousand TPU v3 core days."}, {"start": 660.14, "end": 663.5, "text": " And that's a mouthful, I'll explain it in a minute."}, {"start": 663.5, "end": 668.5, "text": " And basically that's much less than 9.9 thousand"}, {"start": 668.58, "end": 672.28, "text": " that this previous ResNet took."}, {"start": 672.28, "end": 676.98, "text": " So what this means is every single TPU v3 chip"}, {"start": 676.98, "end": 678.74, "text": " has two of these cores."}, {"start": 678.74, "end": 680.54, "text": " So you basically need,"}, {"start": 680.54, "end": 684.8199999999999, "text": " if you have a single TPU v3 chip,"}, {"start": 684.8199999999999, "end": 689.8199999999999, "text": " you'd need 340 days to pre-train this model."}, {"start": 690.4599999999999, "end": 692.3399999999999, "text": " And that's like a lot of compute"}, {"start": 692.3399999999999, "end": 697.3, "text": " that pretty much only Google and like richest labs,"}, {"start": 697.3, "end": 699.06, "text": " industrial labs in the world have."}, {"start": 699.06, "end": 700.78, "text": " So not everybody can reproduce this paper."}, {"start": 700.78, "end": 703.14, "text": " And that's the bad thing about this,"}, {"start": 703.14, "end": 705.18, "text": " about these kinds of free searches,"}, {"start": 705.18, "end": 707.5799999999999, "text": " even though they do advance the field,"}, {"start": 707.5799999999999, "end": 710.02, "text": " people can kind of reproduce this."}, {"start": 710.02, "end": 712.14, "text": " And yeah, that sucks."}, {"start": 713.38, "end": 715.62, "text": " Okay, let's go further."}, {"start": 716.88, "end": 721.88, "text": " And interesting thing is this VTab benchmark."}, {"start": 723.06, "end": 724.16, "text": " What it basically has,"}, {"start": 724.16, "end": 728.18, "text": " so VTab is interesting because it has three types of data."}, {"start": 728.18, "end": 730.66, "text": " So first of all, it's low data benchmark,"}, {"start": 730.66, "end": 735.5, "text": " meaning it only has around thousand images per task."}, {"start": 735.5, "end": 739.22, "text": " And secondly, it splits those images into three categories."}, {"start": 739.22, "end": 742.58, "text": " So natural images, which are kind of like cipher,"}, {"start": 742.58, "end": 747.08, "text": " 10, cipher, 100 data set images."}, {"start": 747.08, "end": 750.22, 
"text": " So like natural looking images like dogs, whatever, trees."}, {"start": 750.22, "end": 753.62, "text": " And then we have these specialized images"}, {"start": 753.62, "end": 755.26, "text": " like medical and satellite images."}, {"start": 755.26, "end": 757.86, "text": " And then they have this geometric,"}, {"start": 757.86, "end": 762.1800000000001, "text": " I think structured class of images."}, {"start": 762.1800000000001, "end": 765.5, "text": " And basically what they showed is that the huge variant"}, {"start": 765.5, "end": 770.06, "text": " of the vision transformer outperformed every other soda"}, {"start": 770.06, "end": 775.06, "text": " across all of those categories, which is really reassuring."}, {"start": 775.18, "end": 780.18, "text": " So the transfer learning for these huge transformers"}, {"start": 780.58, "end": 783.54, "text": " does work and it gives us nice results."}, {"start": 783.54, "end": 785.74, "text": " Okay, going further, really interesting analysis"}, {"start": 785.74, "end": 790.58, "text": " they did here is how do these models perform"}, {"start": 790.58, "end": 792.94, "text": " with respect to ResNets,"}, {"start": 792.94, "end": 796.1, "text": " depending on the amount of pre-trained data they have."}, {"start": 796.1, "end": 799.86, "text": " And the left figure here, we can see when we pre-trained"}, {"start": 800.86, "end": 803.1800000000001, "text": " the vision transformer on ImageNet,"}, {"start": 803.1800000000001, "end": 805.4200000000001, "text": " which has 1.3 million images,"}, {"start": 805.4200000000001, "end": 810.1, "text": " then on ImageNet 21K, which has 14 million images."}, {"start": 810.1, "end": 814.58, "text": " And finally on JFT, which has 303 million images."}, {"start": 814.58, "end": 816.9000000000001, "text": " How they, what's the accuracy,"}, {"start": 816.9000000000001, "end": 818.46, "text": " what's the performance of the models?"}, {"start": 818.46, "end": 821.9000000000001, "text": " And as we can see here, in the low data regimes,"}, {"start": 821.9, "end": 823.9, "text": " ResNets are the best."}, {"start": 823.9, "end": 827.5799999999999, "text": " And that's because they have a useful inductive bias,"}, {"start": 827.5799999999999, "end": 830.74, "text": " meaning that the kernels are basically,"}, {"start": 832.42, "end": 834.5, "text": " that's the bias part of the ResNet."}, {"start": 834.5, "end": 836.5799999999999, "text": " Basically you have to have the kernel,"}, {"start": 836.5799999999999, "end": 840.86, "text": " which only looks at the local part of the image"}, {"start": 840.86, "end": 843.98, "text": " and can have a global, like receptive field."}, {"start": 843.98, "end": 846.86, "text": " And then going deeper into the model,"}, {"start": 846.86, "end": 849.0799999999999, "text": " you get like a bigger receptive field."}, {"start": 849.08, "end": 852.84, "text": " That's one of the biases that ResNets have."}, {"start": 852.84, "end": 856.08, "text": " The second one is the residual connections, but yeah."}, {"start": 856.08, "end": 858.6, "text": " And the best vision transformer"}, {"start": 858.6, "end": 861.2, "text": " is actually the base variant."}, {"start": 861.2, "end": 864.4000000000001, "text": " And that's not surprising actually."}, {"start": 864.4000000000001, "end": 867.1600000000001, "text": " And then as we increase the number of pre-trained data,"}, {"start": 867.1600000000001, "end": 870.44, "text": " we can see that finally in the big data regime,"}, 
{"start": 870.44, "end": 872.8000000000001, "text": " the huge variant of the vision transformer"}, {"start": 872.8000000000001, "end": 875.48, "text": " performs the best and it's better than ResNets"}, {"start": 875.48, "end": 877.96, "text": " and it's also more compute like efficient."}, {"start": 877.96, "end": 879.38, "text": " So that's a cool thing."}, {"start": 880.3000000000001, "end": 881.9200000000001, "text": " Right figure shows the same thing,"}, {"start": 881.9200000000001, "end": 885.84, "text": " just like splitting the JFT into smaller chunks."}, {"start": 885.84, "end": 887.8000000000001, "text": " And we can see again, the blue,"}, {"start": 887.8000000000001, "end": 892.8000000000001, "text": " the light blue line here is the vision transformer large,"}, {"start": 894.26, "end": 898.32, "text": " and it outperforms ResNets again in the big data regimes."}, {"start": 898.32, "end": 899.76, "text": " So that was interesting."}, {"start": 899.76, "end": 902.8000000000001, "text": " That was basically what this paper,"}, {"start": 902.8000000000001, "end": 905.44, "text": " one of the main points this paper made."}, {"start": 905.44, "end": 909.48, "text": " In big data regimes, those inductive biases"}, {"start": 909.48, "end": 912.0, "text": " that we put into our models like CNNs"}, {"start": 912.0, "end": 914.6, "text": " are not useful anymore and we should let the model"}, {"start": 914.6, "end": 916.6, "text": " learn those biases itself."}, {"start": 916.6, "end": 919.84, "text": " And we'll see in a short, an interesting showcase"}, {"start": 919.84, "end": 922.08, "text": " that shows us how the attention varies"}, {"start": 922.08, "end": 925.0, "text": " from in different layers."}, {"start": 925.0, "end": 927.6400000000001, "text": " Here, yeah, this figure is not so important."}, {"start": 927.6400000000001, "end": 930.1600000000001, "text": " Basically, it shows the vision transformers"}, {"start": 930.16, "end": 936.4399999999999, "text": " are more like efficient when it comes to compute."}, {"start": 936.4399999999999, "end": 937.4399999999999, "text": " And this line is interesting."}, {"start": 937.4399999999999, "end": 940.52, "text": " So third, vision transformers appear not to saturate"}, {"start": 940.52, "end": 943.88, "text": " within the range tried, motivating future scaling efforts."}, {"start": 943.88, "end": 948.04, "text": " So you can basically expect something similar to GPT-3"}, {"start": 948.04, "end": 949.48, "text": " in the world of computer vision."}, {"start": 949.48, "end": 950.64, "text": " That's my prediction."}, {"start": 951.6, "end": 955.3199999999999, "text": " I mean, they'll probably stop as soon as they see saturation"}, {"start": 955.32, "end": 960.32, "text": " because, yeah, you have some fixed budget."}, {"start": 960.32, "end": 963.0, "text": " You don't want to overspend if you don't get any gains"}, {"start": 963.0, "end": 964.08, "text": " out of it."}, {"start": 964.08, "end": 966.48, "text": " So yeah, we can expect a bigger vision transformer,"}, {"start": 966.48, "end": 967.36, "text": " that's for sure."}, {"start": 967.36, "end": 969.6, "text": " Now, this part is really interesting."}, {"start": 969.6, "end": 972.6400000000001, "text": " This is the thing I was mentioning about the attention."}, {"start": 972.6400000000001, "end": 977.36, "text": " So they took the large variant of the model"}, {"start": 977.36, "end": 980.5600000000001, "text": " and they analyzed the different attention heads."}, 
{"start": 980.5600000000001, "end": 983.72, "text": " So in particular, this model has 24 layers"}, {"start": 983.72, "end": 987.52, "text": " and it has 16 attention heads."}, {"start": 987.52, "end": 991.5600000000001, "text": " And what you can see here is in the shallower layers,"}, {"start": 991.5600000000001, "end": 993.64, "text": " those attention heads, so some of them"}, {"start": 993.64, "end": 995.4, "text": " have a really localized attention."}, {"start": 995.4, "end": 998.72, "text": " So they just attend pixels in the neighborhood"}, {"start": 998.72, "end": 1004.0400000000001, "text": " of a current pixel, whereas the other attention heads"}, {"start": 1004.0400000000001, "end": 1008.2, "text": " have a global attention compared to CNNs,"}, {"start": 1008.2, "end": 1011.0400000000001, "text": " which only have local small receptive fields"}, {"start": 1011.0400000000001, "end": 1012.5600000000001, "text": " in these shallower layers."}, {"start": 1012.56, "end": 1017.56, "text": " So if I were to draw how CNN receptive field would change"}, {"start": 1017.92, "end": 1021.7199999999999, "text": " with the depth, ComNets will probably have a curve like this."}, {"start": 1021.7199999999999, "end": 1024.04, "text": " Like they would go linearly increase."}, {"start": 1024.04, "end": 1026.6799999999998, "text": " So with every single layer, you increase linearly"}, {"start": 1026.6799999999998, "end": 1027.96, "text": " the receptive field."}, {"start": 1027.96, "end": 1030.56, "text": " And then once you have the whole,"}, {"start": 1030.56, "end": 1035.56, "text": " once every single pixel attends to the whole input image,"}, {"start": 1036.36, "end": 1039.48, "text": " you basically stop increasing the receptive field."}, {"start": 1039.48, "end": 1044.48, "text": " And this is how the CNN receptive field graph would look like."}, {"start": 1046.52, "end": 1050.08, "text": " And so all of this region is where the vision transformer"}, {"start": 1050.08, "end": 1054.96, "text": " has an advantage because shallower layers can also attend"}, {"start": 1054.96, "end": 1059.68, "text": " to like the global to every single pixel in the image."}, {"start": 1059.68, "end": 1062.76, "text": " And that's what it gives it the advantage"}, {"start": 1062.76, "end": 1066.0, "text": " because you can learn how to attend and when to attend"}, {"start": 1066.0, "end": 1070.2, "text": " and the bias is not needed in the big data regimes"}, {"start": 1070.2, "end": 1071.4, "text": " as I already mentioned."}, {"start": 1072.48, "end": 1075.08, "text": " Couple of more interesting figures here on the left."}, {"start": 1076.12, "end": 1081.12, "text": " This is how the 1D positional encodings look like"}, {"start": 1081.24, "end": 1083.6, "text": " after the model has been pre-trained."}, {"start": 1083.6, "end": 1088.6, "text": " And you can see that this 2D representation emerges,"}, {"start": 1089.8, "end": 1093.32, "text": " which is why, and we'll see that in the appendix,"}, {"start": 1093.32, "end": 1097.1599999999999, "text": " which is why other variants of the embeddings they tried,"}, {"start": 1097.1599999999999, "end": 1099.48, "text": " positional embeddings they tried didn't work any better"}, {"start": 1099.48, "end": 1100.4399999999998, "text": " than the 1D."}, {"start": 1100.4399999999998, "end": 1105.4399999999998, "text": " Finally, they took the E matrix, so the embedding matrix"}, {"start": 1105.4399999999998, "end": 1108.84, "text": " and they plotted like principal 
components."}, {"start": 1108.84, "end": 1111.36, "text": " And you can see if you're familiar with AlexNet"}, {"start": 1111.36, "end": 1115.3999999999999, "text": " or paper or CNNs in general,"}, {"start": 1115.3999999999999, "end": 1119.0, "text": " you can notice that these look like the kernels,"}, {"start": 1119.0, "end": 1122.12, "text": " like the same as the learn kernels that CNNs learn."}, {"start": 1122.12, "end": 1126.08, "text": " So it's doing a similar thing, obviously, as CNNs."}, {"start": 1126.08, "end": 1129.9199999999998, "text": " Let me zoom out and go and see the other parts of the paper."}, {"start": 1130.8799999999999, "end": 1134.04, "text": " First, here's a nice visualization"}, {"start": 1134.04, "end": 1136.76, "text": " of what the transformer attends to."}, {"start": 1137.6, "end": 1139.8, "text": " And you can see it's meaningful."}, {"start": 1139.8, "end": 1143.28, "text": " So this is what we would probably expect the model"}, {"start": 1143.28, "end": 1146.76, "text": " to attend to, like the salient objects in the picture."}, {"start": 1149.28, "end": 1152.0, "text": " Self-supervision, they also tried self-supervised"}, {"start": 1152.0, "end": 1155.72, "text": " approach instead of the fully supervised pre-training"}, {"start": 1155.72, "end": 1159.36, "text": " approach, and it's still behind the supervised training,"}, {"start": 1159.36, "end": 1161.96, "text": " but it's worth discussing this part,"}, {"start": 1161.96, "end": 1164.44, "text": " and I'll discuss it more in the appendix."}, {"start": 1164.44, "end": 1169.44, "text": " Again, yeah, I also expect, let me jump over the references."}, {"start": 1171.96, "end": 1176.24, "text": " So I'm expecting to see like huge vision transformers"}, {"start": 1176.24, "end": 1177.96, "text": " somewhere in the near future, probably."}, {"start": 1177.96, "end": 1179.52, "text": " So I mentioned self-supervision,"}, {"start": 1179.52, "end": 1181.88, "text": " and if you're familiar with NLP,"}, {"start": 1181.88, "end": 1185.68, "text": " you know that that's the way how big transformer models"}, {"start": 1185.68, "end": 1188.48, "text": " are pre-trained in the NLP field."}, {"start": 1188.48, "end": 1191.64, "text": " On the other hand, computer vision was mostly about"}, {"start": 1191.64, "end": 1194.0400000000002, "text": " supervised pre-training objectives."}, {"start": 1194.0400000000002, "end": 1198.68, "text": " And so in this paper, they tried doing self-supervision"}, {"start": 1198.68, "end": 1200.64, "text": " in the context of computer vision,"}, {"start": 1200.64, "end": 1204.1200000000001, "text": " and they did get encouraging results,"}, {"start": 1204.1200000000001, "end": 1207.2, "text": " still not as good as supervised results,"}, {"start": 1207.2, "end": 1211.0, "text": " but like they basically, so how they did it"}, {"start": 1211.0, "end": 1214.96, "text": " is they did it similarly to how BERT,"}, {"start": 1214.96, "end": 1216.2, "text": " if you're familiar with BERT,"}, {"start": 1216.2, "end": 1217.68, "text": " so they did a similar approach."}, {"start": 1217.68, "end": 1222.08, "text": " They corrupt 50% of the input patch embeddings,"}, {"start": 1222.08, "end": 1224.6, "text": " and they tried to predict three things."}, {"start": 1224.6, "end": 1228.32, "text": " First is the mean color of the patch."}, {"start": 1228.32, "end": 1232.32, "text": " So you take basically, you take patch embedding,"}, {"start": 1232.32, "end": 1235.08, "text": " the original 
patch embedding, you find the mean value,"}, {"start": 1235.08, "end": 1238.56, "text": " so that's the mean color, then you mask it,"}, {"start": 1238.56, "end": 1240.52, "text": " and then you make the model predict"}, {"start": 1240.52, "end": 1245.2, "text": " what the color should be, depending on the context."}, {"start": 1245.2, "end": 1247.8799999999999, "text": " So that's how the self-supervised thing works."}, {"start": 1247.8799999999999, "end": 1249.24, "text": " They didn't only try the color,"}, {"start": 1249.24, "end": 1252.4, "text": " they also tried predicting the downsampled version"}, {"start": 1252.4, "end": 1254.84, "text": " of the patch embedding that's like four by four"}, {"start": 1254.84, "end": 1258.84, "text": " instead of like 16 by 16, as well as the color,"}, {"start": 1258.84, "end": 1261.04, "text": " and they also tried regressing the whole thing,"}, {"start": 1261.04, "end": 1263.6, "text": " so like doing auto-encoding pretty much."}, {"start": 1263.6, "end": 1267.16, "text": " And as I already mentioned, results are not as good,"}, {"start": 1267.16, "end": 1269.48, "text": " but that's a promising like research direction,"}, {"start": 1269.48, "end": 1272.64, "text": " so yeah, we can expect innovation on that front."}, {"start": 1273.96, "end": 1276.92, "text": " Let me go through two more sections which I found important."}, {"start": 1276.92, "end": 1279.04, "text": " The first one, they did a scaling effort"}, {"start": 1279.04, "end": 1282.08, "text": " to figure out how should they scale their model"}, {"start": 1282.08, "end": 1284.04, "text": " to get the best performance out of it."}, {"start": 1284.04, "end": 1286.3600000000001, "text": " They tried doing depth, they tried doing width,"}, {"start": 1286.3600000000001, "end": 1288.48, "text": " they tried making patches smaller,"}, {"start": 1288.48, "end": 1293.48, "text": " and like obviously depth gave nice results."}, {"start": 1293.68, "end": 1298.3600000000001, "text": " As we can see here on the graph, on the chart,"}, {"start": 1298.36, "end": 1303.36, "text": " basically increasing depth gives us better performance."}, {"start": 1303.8, "end": 1305.6799999999998, "text": " The computer also goes up, obviously,"}, {"start": 1305.6799999999998, "end": 1309.9199999999998, "text": " but like yeah, increasing depth will increase the performance."}, {"start": 1309.9199999999998, "end": 1312.6399999999999, "text": " Now the interesting thing for me is that patch size,"}, {"start": 1312.6399999999999, "end": 1315.8799999999999, "text": " the smaller it is, they gain more performance."}, {"start": 1315.8799999999999, "end": 1317.4399999999998, "text": " So I'm gonna make a prediction now."}, {"start": 1317.4399999999998, "end": 1319.9599999999998, "text": " Basically somebody will take some of the"}, {"start": 1319.9599999999998, "end": 1321.8799999999999, "text": " efficient transformers like reformer,"}, {"start": 1321.8799999999999, "end": 1324.6, "text": " like longformer or leanformer,"}, {"start": 1324.6, "end": 1327.8799999999999, "text": " and they'll do the same thing as a vision transformer."}, {"start": 1327.88, "end": 1331.7600000000002, "text": " They just use smaller patches and they'll get better results."}, {"start": 1331.7600000000002, "end": 1336.7600000000002, "text": " That's a research I can expect to come over the next months."}, {"start": 1336.88, "end": 1339.5600000000002, "text": " And finally, let me finish up,"}, {"start": 1339.5600000000002, "end": 1342.64, 
"text": " wrap up this already long deep dive,"}, {"start": 1345.3600000000001, "end": 1349.3200000000002, "text": " like talking a little bit more about positional encodings."}, {"start": 1349.3200000000002, "end": 1353.92, "text": " They tried four things, and the biggest gap was"}, {"start": 1353.92, "end": 1357.0400000000002, "text": " between using no positional embeddings,"}, {"start": 1357.04, "end": 1360.08, "text": " I treating the patches as a bag of patches,"}, {"start": 1360.08, "end": 1363.24, "text": " and using 1D positional embeddings."}, {"start": 1363.24, "end": 1365.04, "text": " So that's where the highest gap was,"}, {"start": 1365.04, "end": 1366.72, "text": " and you can see here the numbers."}, {"start": 1366.72, "end": 1370.48, "text": " So using positional embeddings helps."}, {"start": 1370.48, "end": 1375.48, "text": " Now how exactly do you like, which heuristic do you use?"}, {"start": 1375.8, "end": 1379.72, "text": " Let's call it that way, that matters a bit less."}, {"start": 1379.72, "end": 1382.56, "text": " So they tried 2D positional embeddings"}, {"start": 1382.56, "end": 1384.6, "text": " and 1D which I already explained."}, {"start": 1384.6, "end": 1385.68, "text": " Let me try to quickly explain"}, {"start": 1385.68, "end": 1388.0, "text": " how the 2D positional embeddings work."}, {"start": 1389.04, "end": 1392.0, "text": " They basically have two tables this time,"}, {"start": 1393.1200000000001, "end": 1396.96, "text": " and how you index into these tables is the following."}, {"start": 1396.96, "end": 1400.92, "text": " So you have your input image,"}, {"start": 1400.92, "end": 1403.0800000000002, "text": " again you split it into patches."}, {"start": 1404.1200000000001, "end": 1407.68, "text": " And so this one will be patch one one,"}, {"start": 1407.68, "end": 1409.92, "text": " this one will be patch one two,"}, {"start": 1409.92, "end": 1412.52, "text": " this one will be patch one three,"}, {"start": 1412.52, "end": 1414.0800000000002, "text": " this is row two, et cetera."}, {"start": 1414.08, "end": 1416.04, "text": " So you basically take the indices,"}, {"start": 1416.04, "end": 1419.84, "text": " so let's say this patch will have one,"}, {"start": 1419.84, "end": 1424.84, "text": " like so basically take, so this is one, two, same here,"}, {"start": 1427.58, "end": 1432.58, "text": " one, two, you basically take the first embedding here,"}, {"start": 1433.8799999999999, "end": 1438.24, "text": " and you concatenate it with the second embedding here."}, {"start": 1438.24, "end": 1442.06, "text": " So this represents the X dimension,"}, {"start": 1442.06, "end": 1444.52, "text": " this one represents the Y dimension,"}, {"start": 1444.52, "end": 1448.84, "text": " and you just train the transformer like that."}, {"start": 1448.84, "end": 1451.94, "text": " And they showed that that approach won't do anything better"}, {"start": 1451.94, "end": 1456.46, "text": " than just using a simple 1D positional encoding."}, {"start": 1456.46, "end": 1458.6, "text": " So that was pretty much it."}, {"start": 1458.6, "end": 1461.8, "text": " This was already probably too long."}, {"start": 1461.8, "end": 1464.32, "text": " Hopefully you found this video useful."}, {"start": 1464.32, "end": 1466.76, "text": " This is the first time I'm doing like a deep dive,"}, {"start": 1466.76, "end": 1471.1799999999998, "text": " like a semi deep dive of a research paper on this channel."}, {"start": 1471.18, "end": 1472.74, "text": " If you found it 
useful,"}, {"start": 1472.74, "end": 1474.4, "text": " please leave a comment down in the description"}, {"start": 1474.4, "end": 1477.5600000000002, "text": " if you'd like me to make more of these."}, {"start": 1477.5600000000002, "end": 1481.28, "text": " And yeah, subscribe if you haven't already"}, {"start": 1481.28, "end": 1483.3600000000001, "text": " and click that bell icon to get notified"}, {"start": 1483.3600000000001, "end": 1485.0, "text": " when I upload a new video."}, {"start": 1485.0, "end": 1488.4, "text": " And until next time, keep learning deep."}, {"start": 1488.4, "end": 1507.96, "text": " Bye."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=px4rtkWHFvM
Developing a deep learning project (case study on transformer)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I talk about how I work on my deep learning projects on the example of the transformer which I've recently open-sourced. You'll learn about: ✔️ Snippets of my personal story ✔️ How I think about approaching a new project ✔️ Problems I encountered developing the transformer ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ My transformer implementation: https://github.com/gordicaleksa/pytorch-original-transformer ✅ The Annotated Transformer blog: http://nlp.seas.harvard.edu/2018/04/03/attention.html ✅ Original paper: https://arxiv.org/abs/1706.03762 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Open-sourcing original transformer and why 3:00 Creating great projects takes time 5:40 My time management 9:00 My story and embracing failures 12:05 Overview of the transformer project 14:03 Data and task definition 15:57 Training loop 20:10 Problems I encountered 21:44 Beam search fun 23:10 BucketIterator fun 25:35 Optimizing things to speed up the loop 26:25 Translating from English to German and vice versa 30:00 Hardware requirements 30:47 Wrapping things up ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #transformer #attention #deeplearning
Finally, finally, I've open-sourced my implementation of the original transformer paper, Attention Is All You Need. It took me like three weeks of non-stop working on this and I'm super happy. I learned a lot and hopefully you'll find it a valuable resource as well. So yeah, let me just walk you through this README. This is the architecture, I have some nice visualizations here, and I have two models which I've pre-trained and linked. Before I get into the code itself, I just want to tell you about a couple of things, like the whole journey I had learning about this. Those of you who've been following me over the last couple of weeks know that I've been working on transformers for more than a month now. I first started by just reading about the theory, reading research papers, for more than two weeks, and after I finished that I basically started coding the implementation from scratch, which took me around three weeks, more or less. So let me start with why I did it. The thing is, I looked for good resources to understand transformers, not on the level of theory but to be able to actually understand the code itself, and I basically only found two decent resources. I'm not counting Hugging Face, because they do have a really awesome library of transformers, but it's more of a black-box type of usage and not meant for going through the code and understanding how stuff works. The two resources I mentioned are The Annotated Transformer, which is the first one, and then the PyTorch official implementation. Now, the thing with the PyTorch official implementation is that their multi-headed attention module is awesome, functionality-wise it works, but it's so generic, because they have to cover a bunch of use cases, that it's complicated to actually go through the code and understand how it works. So my idea was to just focus on this one specific use case, the transformer model from 2017, and that's it (you can see a small sketch of what I mean by a focused attention implementation below). On the other hand, The Annotated Transformer blog is basically suited only for researchers and also had a couple of bugs. It's an overall really nice resource and helped me a lot, but it's really only for researchers, and it also kind of assumes you understand the source code of torchtext, which is PyTorch's module for handling NLP tasks, and that's usually not the case, right? So I want to make a couple of points before we go and take an overview of the code, and I'll explain how I struggled, which problems I had, and how I approached them. I won't get into every single detail because the code base is already pretty big, but I'll show you some details. Before that, I want to tell you that making good projects takes time. I had a couple of comments; Rahul commented: thanks for telling us that understanding and coding a new AI algorithm takes a few months, I always felt like people are going too fast and doing things in just days and I cannot keep up, but thank you, it gives me motivation. So thank you, Rahul. The thing is, and it's funny, my first attempt at reconstructing a research paper took three months. I was coding for three months non-stop, the reason being the authors did not open-source their implementation, and the paper wasn't as popular as the transformer, so I couldn't find snippets of code anywhere.
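To make the earlier point about a focused, single-use-case implementation concrete, here is a minimal multi-head attention sketch in PyTorch. It is just an illustration of the idea, not the code from the repository, and all names in it are illustrative.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Single-purpose multi-head attention in the spirit of the 2017 paper."""
    def __init__(self, model_dim: int = 512, num_heads: int = 8):
        super().__init__()
        assert model_dim % num_heads == 0
        self.head_dim = model_dim // num_heads
        self.num_heads = num_heads
        # Separate projections for queries, keys and values, plus the output projection.
        self.qkv_nets = nn.ModuleList([nn.Linear(model_dim, model_dim) for _ in range(3)])
        self.out_proj = nn.Linear(model_dim, model_dim)

    def forward(self, query, key, value, mask=None):
        b = query.size(0)
        # Project and split into heads: (B, T, D) -> (B, H, T, D/H)
        q, k, v = [net(x).view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
                   for net, x in zip(self.qkv_nets, (query, key, value))]
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, -1, self.num_heads * self.head_dim)
        return self.out_proj(out)
```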
So what I ended up doing was trying to reconstruct the paper by reading the text alone, which is super hard and takes time, and I had lots of back and forth with the authors to understand certain details. The point I want to make here is: don't fall for that "learn huge topic X in five days" thing. It doesn't work like that, and I think I know why that idea exists: you have Siraj Raval, who had videos like "learn TensorFlow in five minutes", and I want to tell you something, if you believe that crap you're in a really bad position, because it's going to set you up for failure. Why? Because it's basically setting you up for impatience, whereas you truly need patience in order to create anything great. Every single project, like the projects I'm doing at Microsoft, takes a multi-team effort that lasts at least six months, a year, two years, three years. Anything significant takes time, in artificial intelligence and computer science, whatever the topic is. I even skipped creating a video last weekend because I was so in the flow of wrapping this transformer thing up that I didn't want to break my flow to make a video. That's how things work, you have to make trade-offs sometimes. And since we're on the topic of time management, let me walk you through what my commit history looked like and how I organized my time during these three weeks. Okay, so let me walk you through my schedule, because I've noticed a lot of comments from people thinking everybody else is so much faster than they are, and that's just not true, it takes a lot of time. Let me walk you through my commit history. If I open this up and find some representative day, like November 5th: I usually start my day around 9 or 9:30 a.m., and I code until maybe 11:15, so usually around two hours every morning. Then I take a short break, take a stroll, get something to eat, and I start working at Microsoft, where I also work full-time, so it's a really tight schedule. After I finish my daily work at Microsoft... and the good thing about Microsoft, which is really awesome, is that it's truly a meritocratic system, so my manager doesn't care whether I'm working ten hours or six hours, he cares that I get my work done, and that's super rewarding (I'll get fired because of this video... I'm joking). So basically I do work around eight hours, but the good thing is they value what you do and how productive you are, and how long you sit in that chair doesn't matter; you get your work done and that's it. So after I finish my daily work I usually take a power nap, and depending on how tired I am I maybe take some more time to rest, and then around 8:30 or 9 p.m. I get to coding again, and I work until maybe 11 p.m. or 11:30.
So that's again two to three hours of quality coding, and that accumulates: it's like four or five hours every single work day, and then I have the weekends, where I'm even more productive because I don't have to work at Microsoft on weekends. At the end of the day, during work days, I usually take some time to chill: I either watch South Park, just to relax my brain, or, recently, over the last week, I've been watching Lex Fridman's podcast. There are just so many interesting people there, and it also relaxes me, even though I'm learning by watching that podcast as well, so that's cool. Now, if somebody had told me about this schedule even two years ago, I would have told that person they're nuts. The thing is, depending on your mindset, your phase of life and your situation, this may not sound feasible, but that's what I thought two years ago too, and then things changed, and that's totally fine. So don't feel any pressure for not working as hard as somebody else, because everybody's got their own pace, and that's super fine. It took me 12 years to acquire the working habits I have right now, and initially I was just learning different stuff on my own and structuring my own journeys, but they were not related to programming whatsoever. I was learning human languages, which I'm really passionate about; that was one of my main hobbies in high school and at the beginning of university. I was training a bunch of sports, I tried so many sports out there, so I was just exploring, and only at 19 did I get exposed to programming. And that's totally fine: you can still achieve things even when you start a bit later, like when you're 19. One more thing, and I know it's kind of cliché, but life's not fair. Bill Gates had a computer back in the sixties when he was like 12 or 13, and he started programming when he was 12, and I'll tell you something, it still doesn't matter. You can start when you're 19, 20, 25, 30, and it's still not too late. If you keep up the good work and you keep being consistent, whatever you do, you'll get better, and you'll get where you want to be, and that's it, I mean, chillax. And lastly, there is one more thing I want to mention, and that's failures. Turn your head around and look at the successful people around you: they all had failures, and if they're honest enough they'll tell you about them (and if they're not, well...). I think this guy, he's a Stanford PhD, was open and honest enough to share his journey, and that's how pretty much everybody's journey looks: you get a lot of failures, in his case paper rejections, but then you have a couple of successes, and everybody knows you for your successes, and that's it. Me personally, I had a bunch of failures: I was rejected by Facebook, I was rejected by Nvidia, I was rejected by Microsoft, and then on my second attempt I got accepted into Microsoft, but even if it had been the third or fourth attempt, it wouldn't matter. As long as you keep learning and you keep applying, you're set up for success. If you're just applying, then you're just spamming people; you've got to keep on learning and applying, and wherever you want to be, you'll get there. Sometimes when you get rejected, it doesn't have to mean that
you're bad; hiring pipelines in big tech companies are not perfect, they're far from perfect. Okay, enough motivational speaking. I just wanted to share my personal story and some details: there are already so many high-quality educational resources out there and too few personal stories, so I felt like sharing some of the details from my life and my workflow, and I hope you found it helpful. If you did, please leave a comment in the comment section, because I'll start creating more of these if you found it useful. Okay, all of that being said, let me explain how I think about approaching a new problem and a new project. A good thing about the transformer project, and in general about many software projects, is that they can be modularized, and what I mean by that is that you can basically orthogonally, i.e. independently, develop certain components and not care about the others. This project was pretty much split into two parts: the first one takes care of training the model, and the second part is where, once you've trained the model, you want to do translations, you want to do some decoding with those models. Once you split the functionality like that, you can focus pretty much only on the training side, and I had only three main files which I had to develop in order to get the transformer model trained. The first one is obviously the transformer model .py file, which contains the architecture, the definition of the model itself. The good thing, as I said about it being orthogonal, is that I basically just developed this model and then created a small main function inside that Python file, where I could test whether the model was working or not before I moved ahead and developed the other parts of the project (see the small sketch right below this paragraph). It's never perfectly linear, you usually jump a little bit here, a little bit there, but let's say I basically first developed this model and then went and did the other things. I figured out some bugs doing this, which is a good thing, better to catch them sooner than later, and I'll get into the problems a bit later; for now let me just give you an overview of the project. The second important thing is the data: you want to load the data, and in this case, the transformer model, I should mention, is trained for the machine translation task. You basically want to translate from one language, which is called the source language, into another language, which is called the target language, and the languages I used were English and German, so English to German or German to English. I had lots of problems with loading data in PyTorch, first of all because I'm used to computer vision projects and this was natural language processing, so it was a bit of a paradigm shift, and second of all because torchtext is nowhere near as good as torchvision is for computer vision. I stumbled across two significant problems. The first one was functionality-wise: something called bucket iterators, which are supposed to batch the data in an optimal way, weren't working as I was expecting them to. And the second problem was a performance problem: the tokenization procedure and everything around it took a lot of time.
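As a concrete illustration of that kind of quick smoke test in a model's main function, here is a minimal, self-contained sketch. It uses the built-in nn.Transformer as a stand-in for the model file, so none of the names or sizes below come from the actual repository; it only checks that a forward pass produces the expected shapes and prints the parameter count.

```python
import torch
import torch.nn as nn

if __name__ == "__main__":
    src_vocab, trg_vocab, model_dim = 1000, 1000, 512
    embed = nn.Embedding(src_vocab, model_dim)
    transformer = nn.Transformer(d_model=model_dim, batch_first=True)
    generator = nn.Linear(model_dim, trg_vocab)

    src = torch.randint(0, src_vocab, (2, 7))   # (batch, src_len) of token ids
    trg = torch.randint(0, trg_vocab, (2, 5))   # (batch, trg_len) of token ids
    out = generator(transformer(embed(src), embed(trg)))

    num_params = sum(p.numel() for p in transformer.parameters())
    print(out.shape)  # expect (2, 5, trg_vocab)
    print(f"{num_params:,} trainable parameters in the transformer body")
```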
Every time I wanted to run the training loop it took like 45 seconds, and that's so annoying. I think I went from 66 seconds down to around two and a half seconds, which was a massive optimization jump, and that helped me iterate much faster. That's super important in deep learning and machine learning in general: you want a really short iteration cycle, it helps you build stuff better and faster. I'll tell you about some of the problems I had developing the data pipeline, but first, as I mentioned, let me quickly go through the other files. Next we have the training loop, and the training loop is pretty simple once you take a look, so let me show you. Basically, I prepare the model, I prepare the data, then I prepare some things like label smoothing (I won't get into the details of what that is) and the optimizer, and then the loop is really simple. For the number of epochs I use 20, because just looking at the Attention Is All You Need paper and calculating things, you can see they ran approximately 19 epochs on the WMT-14 dataset, so that's what I did too, and I didn't hit any saturation region up to 20 epochs, it was totally fine. Inside the iteration loop I basically do the training loop and then the validation loop, because you want to see how the model generalizes to data it hasn't seen during training. I'm already getting into too much detail here, but basically, once I set up the training loop I could run the whole thing and test it, and then what I did is I quickly made a translation script, which would test whether the model was doing what it's supposed to be doing, that is, translating from language to language. That's super important: you want to have an end-to-end system as soon as possible, and then you can keep iterating and improving either the model or the data loading, and that's how I did it. I first had the whole thing working end-to-end, and only then did I do those optimizations, etc. I also had to develop decoding algorithms. I developed a simple one called greedy decoding, where basically, once you get the output distribution for the next token, you figure out where the highest probability is, and that position corresponds to a certain token in your vocabulary, so you find the next token of your output, or target, sentence in that way. Greedy is the simplest method; what they actually used in the paper was something called beam search, where you keep a number of hypotheses running in parallel, and at the end you're left with the hypothesis that's the most probable, and you output that one as the output sentence.
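Here is a rough sketch of the greedy decoding idea just described. The encode/decode methods and the argument names are assumptions made for the sake of illustration, not the repository's actual interface.

```python
import torch

@torch.no_grad()
def greedy_decode(model, src_ids, bos_id, eos_id, max_len=50):
    """Illustrative greedy decoding: at every step pick the argmax token."""
    memory = model.encode(src_ids)                      # assumed encoder call
    trg_ids = torch.tensor([[bos_id]])                  # start with <BOS>
    for _ in range(max_len):
        logits = model.decode(trg_ids, memory)          # assumed: (1, cur_len, vocab_size)
        next_id = logits[0, -1].argmax().item()         # highest-probability next token
        trg_ids = torch.cat([trg_ids, torch.tensor([[next_id]])], dim=1)
        if next_id == eos_id:                           # stop once <EOS> is produced
            break
    return trg_ids.squeeze(0).tolist()
```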
Finally, I also have something called a playground file, and I did that not only because I'm open-sourcing this and treating it as a learning resource for others, but also because it's a really good thing for me; it helped me understand things a lot better. I'm a highly visual person, so I like to visualize stuff. For example, if you're familiar with the transformer model, there is something called positional encodings, which you basically add to the tokens' embedding vectors, and the formula is kind of complicated, but once you visualize it, it's not that complicated. Here is how it looks: you basically take one row from this image, which is a vector of numbers, and you add it to the token at that particular position, so row zero would be added to the token representations at position zero, row one would go to the token representations at position one, etc. So that was the high-level overview of how I think about a project when I'm developing it: I like to separate functionalities, I like to get an end-to-end system working as soon as possible, and only then do I go into depth. It's the same way I learn theory: I always start high level, first creating a skeleton and then filling in the gaps, and I use pretty much the same strategy when I develop software projects. That was the high-level overview of the project; now I want to tell you about a couple of problems I struggled with. One of them came up when I was developing the transformer model itself. It's always a good idea to print out the layers you have in the model, to print out the shapes, and in general just to see how many parameters the model has. Doing that, I figured out that my multi-head attention was actually referencing the same object in memory in multiple encoder layers; there was simply a bug, which I discovered because I printed out the layer names and saw that some multi-head attention layers were missing. Also, if I change this flag to true here, because the paper had the base model and the big model, so if I pick the big one and run the script, I'm printing out the number of parameters and you can see around 176 million parameters. That's also a good thing to do, because I knew that the big model had 175 million parameters, so this kind of confirmed that I have roughly the same number of parameters as the paper stated. Those are just some of the things you want to do, because you want to make sure that something is correct; it's pretty much a kind of test. The second fun experience I had was when I was developing the beam search functionality, and I actually still haven't finished it; I haven't really tested it, so beam search is still not working. While I was investigating how other people did it, I found, in the tensor2tensor library, a line like this: length_penalty equals tf.pow of 5 plus something, divided by 6, raised to the alpha power, and I was like, what the...? So the thing is, I beg you, if you ever write code in a public open-source library, please leave a comment, you will make lives so much easier. I don't know how, but I kind of went through the code and some other people's code, like OpenNMT etc., and I figured out there is a paper which introduced this as a heuristic, and it kind of just works; you can see here, 5 plus blah blah, raised to the power of alpha, and I was like, what the fuck. So those are just a glimpse of the things you encounter while working on these projects; it's super interesting, super funny stuff.
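For reference, the formula he quotes matches the length normalization introduced in the GNMT paper (Wu et al., 2016); a minimal sketch of how such a penalty is typically used when re-scoring beam hypotheses:

```python
def length_penalty(length: int, alpha: float = 0.6) -> float:
    # Length normalization from Wu et al. (2016), "Google's Neural Machine
    # Translation System": lp(Y) = ((5 + |Y|) / 6) ** alpha.
    return ((5.0 + length) / 6.0) ** alpha

# Beam hypotheses are then compared on a normalized score, so that longer
# translations are not unfairly penalized:
#   score(Y) = sum_of_token_log_probs(Y) / length_penalty(len(Y))
```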
Okay, this video is already getting a bit too long, so let me just show you one more interesting problem I stumbled upon while working on the transformer project; it has to do with torchtext and loading data. You have this class in torchtext called BucketIterator, and it's something similar to a DataLoader, if you're familiar with computer vision and the torchvision library. What happened is that the documentation said something totally different from the results I was getting. The BucketIterator docs say it "minimizes amount of padding needed while producing freshly shuffled batches for each new epoch", and that's not true. I had to dig really deep into the source code, and finally I saw there was already an issue pending, so I just commented and said, yeah, that guy was right, he stumbled upon the same bug as I did. Basically, you have to set sort_within_batch to true, otherwise you won't get the padding minimized; once you do that, you get the functionality you actually wanted. Then there was a second thing. Certain sentences are much longer than others, so if you just set the batch size to a fixed number, for example always load eight sentences, sometimes the batch will be huge and sometimes it will be really small, and that's a bad thing, because you want to maximize the amount of VRAM on your GPU that you're actually using. So what I had to do is write a custom function which makes sure that I always have approximately the same number of tokens in a batch. The Annotated Transformer luckily had something similar implemented, although they didn't comment on why it was there, so I struggled a lot until I actually understood the source code, and then everything made sense; hopefully, if you go through my code, it will be a bit easier to understand what's happening there. One more quick thing: I had to add a custom dataset class, because using spaCy tokenizers it took me 66, 70 seconds, whatever, to re-tokenize the source and target sentences every single time, so I just decided, why not tokenize once, save the result as a simple txt file, and then, instead of re-tokenizing things again, just load the txt file; that now takes like two and a half seconds. In a nutshell, that's what I did with the dataset wrapper and the fast translation dataset wrapper class. So that was a short story about some of the problems I encountered.
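The caching trick is simple enough to sketch in a few lines. This is just an illustration of the idea (tokenize once, write the tokens to a text file, reuse the file on subsequent runs), not the actual wrapper classes from the repository.

```python
import os

def load_tokenized(cache_path: str, raw_sentences, tokenize):
    """Tokenize once, cache the result to disk, and reuse the cache on later runs."""
    if os.path.exists(cache_path):
        with open(cache_path, encoding="utf-8") as f:
            return [line.split() for line in f]                    # fast path: ~seconds
    tokenized = [tokenize(sentence) for sentence in raw_sentences]  # slow path: spaCy etc.
    with open(cache_path, "w", encoding="utf-8") as f:
        f.writelines(" ".join(tokens) + "\n" for tokens in tokenized)
    return tokenized
```

Note that a whitespace-joined cache like this assumes the tokens themselves contain no spaces, which holds for typical word-level tokenizers.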
Now, for the end, let me show you how the thing functions and how the translation script actually works. I find it really interesting, and kind of magical, because I was usually doing computer vision things, and once you see this thing actually working and translating from English to German and vice versa, it's pretty fascinating, if you ask me. You basically just have to input the sentence: I have an English sentence, "How are you doing today?", set as the source sentence, and we have the model that was trained on the IWSLT dataset and translates from English to German, so the name is pretty indicative of what the model does and what it was trained on. You just have to keep these two in sync: if you change the model to G2E, German to English, you will also have to set the enum here to G2E instead of E2G, and that's it. Once I start this thing, let me run it, it takes a little bit of time, around two and a half seconds, to load the vocabularies and whatnot, and then it'll just translate. Why is it not working... okay, here it is, it took 2.6 seconds to load the data, and then you can see the input sentence, the source sentence tokenized, and the output sentence, "Wie geht es Ihnen heute", which means pretty much the same thing, although "Ihnen" is the polite German way to say "you". There is nothing stopping me from translating from German to English, so let me change this thing to German to English, let me change the model to German to English, and now if I input, for example, "Wie geht es dir", which translates as "how are you" in English, let's see how the model translates it this time: it translated it as "how are you", which is pretty much the so-called gold translation, meaning the ground-truth translation, so this looks pretty decent. The model is not perfect, you can find sentences which would confuse it. Take even a simple sentence like "Ich bin ein Berliner", which is, I think, the sentence Kennedy said when he was in Berlin back in the day, and it means "I'm a Berliner", a person living in Berlin: if I input this and run it, the thing is, the model doesn't have the token "Berliner" in its vocabulary, so it will output "Berlin" instead (I'm anthropomorphizing the model here), so "I am a Berlin". If the model had "Berliner" in its vocabulary, it would output "Berliner". That's another point I want to make: if I had something called byte pair encoding, the model would be much more expressive in composing these words than it is at this point in time, but that's future work, and I'll add it hopefully maybe next week or so. There are many other things I could show you, but for the sake of this video not being one hour long, let me just show you what the hardware requirements are to run translations, not to train the model; to train the model you would need a lot of GPUs, and I have some comments about that in the README, so please go ahead and read that part. For translations, let me just open the task manager, and if I run the script now, we can follow how much GPU is needed to get the translation: it's basically only a small spike, so you won't need a really fancy GPU if you're just running translations. On the other hand, if you're training this model, you'll need a lot of power. So that pretty much wraps it up; I hope you liked this video, there was a lot of information in here. My next steps will be to start reading research papers again, because I want to explore what the state of the art in natural language processing is now, in 2020, so I'll be reading maybe one paper in the morning and one paper in the evening, which amounts to at least ten papers a week, and that's good enough to get me familiar with the field in a ten-day to two-week period. So yeah, that was pretty much it. If you liked this video, go ahead and subscribe to this channel and hit the bell icon to get notified when I upload a new video, and until next time, keep learning.
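The "keep these two in sync" point from the demo (the translation direction enum has to match the checkpoint the model was trained with) can be illustrated with a tiny guard like the one below. All names here are purely hypothetical and do not come from the repository.

```python
from enum import Enum

# Purely illustrative names; the actual repository's enums and checkpoint names may differ.
class LanguageDirection(Enum):
    E2G = "english_to_german"
    G2E = "german_to_english"

def check_direction(checkpoint_name: str, direction: LanguageDirection) -> None:
    """Fail fast if the checkpoint and the requested translation direction disagree."""
    # e.g. a hypothetical checkpoint trained on IWSLT English->German might be named
    # "iwslt_e2g.pth"; asking it to translate German->English is almost certainly a mistake.
    tag = "e2g" if direction is LanguageDirection.E2G else "g2e"
    if tag not in checkpoint_name.lower():
        raise ValueError(
            f"Checkpoint '{checkpoint_name}' does not match direction {direction.name}; "
            "keep the model and the language-direction enum in sync."
        )

check_direction("iwslt_e2g.pth", LanguageDirection.E2G)  # passes silently
```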
[{"start": 0.0, "end": 5.58, "text": " Finally, finally I've open sourced my implementation of the original"}, {"start": 5.58, "end": 10.86, "text": " transformer paper, Attention is All You Need, and it took me like three weeks of"}, {"start": 10.86, "end": 16.580000000000002, "text": " non-stop working on this and I'm super happy. I learned a lot and hopefully"}, {"start": 16.580000000000002, "end": 21.54, "text": " you'll find it as a valuable resource as well. So yeah, let me just walk you"}, {"start": 21.54, "end": 27.54, "text": " through this, read me. This is the architecture. I have some nice"}, {"start": 27.54, "end": 32.32, "text": " visualizations here and I have two models which I've"}, {"start": 32.32, "end": 40.12, "text": " pre-trained, which I've linked. So yeah, before I get into this, like the"}, {"start": 40.12, "end": 44.36, "text": " code itself, I just want to tell you about a couple of things, like the whole"}, {"start": 44.36, "end": 49.879999999999995, "text": " journey I had learning about this. And so those of you who've been following me"}, {"start": 49.879999999999995, "end": 56.2, "text": " over the last like a couple of weeks, you know that I've been... so I've been"}, {"start": 56.2, "end": 61.0, "text": " working on transformers for more than a month now and I first started by like"}, {"start": 61.0, "end": 66.64, "text": " just reading about theory, reading like research papers for more than like two"}, {"start": 66.64, "end": 72.56, "text": " plus weeks. And then after I finished that one, I basically started coding the"}, {"start": 72.56, "end": 76.88, "text": " implementation from scratch and that took me around three weeks more or less."}, {"start": 76.88, "end": 82.88, "text": " So let me start with why I did it. So the thing is I looked for like good"}, {"start": 82.88, "end": 87.52, "text": " resources to understand transformers but not on the like level of theory but like"}, {"start": 87.52, "end": 91.96, "text": " to be able to actually understand the code itself. And I basically only found"}, {"start": 91.96, "end": 98.6, "text": " two decent resources, good resources. Basically, I'm not counting in Hugging"}, {"start": 98.6, "end": 103.08, "text": " Face because they do have a really awesome library of transformers but they"}, {"start": 103.08, "end": 108.38, "text": " are like more for like as a black box experience type of usage and not for"}, {"start": 108.38, "end": 112.28, "text": " going through the code and understanding how stuff works. So the two resources I"}, {"start": 112.28, "end": 117.0, "text": " mentioned are the annotated transformer is the first one and then PyTorch"}, {"start": 117.0, "end": 121.32000000000001, "text": " official implementation. Now the thing with PyTorch official implementation is"}, {"start": 121.32000000000001, "end": 126.36, "text": " that like their multi-headed attention module is awesome, like functionality"}, {"start": 126.36, "end": 131.28, "text": " wise it works. But the thing is it's so generic because they got to cover a"}, {"start": 131.28, "end": 135.48, "text": " bunch of use cases and so it's so complicated to actually go through the"}, {"start": 135.48, "end": 140.72, "text": " code and understand how it works. So my idea was to just focus on this specific"}, {"start": 140.72, "end": 148.08, "text": " use case and like of this transformer model from 2017 and that's it. 
On the"}, {"start": 148.08, "end": 155.48, "text": " other hand, the annotated transformer block is basically suited only for"}, {"start": 155.48, "end": 161.16, "text": " researchers and also had a couple of bugs. It's a it's an overall really nice"}, {"start": 161.16, "end": 166.92, "text": " resource and helped me a lot but it's really only for researchers and they"}, {"start": 166.92, "end": 173.16, "text": " also like kind of suppose you understand the source code of Torch text which is"}, {"start": 173.16, "end": 179.39999999999998, "text": " PyTorch's like module for manipulating NLP tasks which is usually not the"}, {"start": 179.39999999999998, "end": 184.2, "text": " not the case right. So I want to I want to make a couple of points before we go"}, {"start": 184.2, "end": 188.44, "text": " and take an overview of the code and I explain you how I struggled, which"}, {"start": 188.44, "end": 192.27999999999997, "text": " problems I had, how I approached it. I won't get into every single detail"}, {"start": 192.28, "end": 197.04, "text": " because the code base is now already pretty big. I'll show you some details but"}, {"start": 197.04, "end": 203.28, "text": " before that I want to tell you that making good projects takes time. So I had"}, {"start": 203.28, "end": 207.28, "text": " I had a couple of comments. I had Rahul commenting, thanks for telling us that"}, {"start": 207.28, "end": 211.12, "text": " understanding and coding a new AI algorithm takes a few months. I always"}, {"start": 211.12, "end": 215.48, "text": " felt like people are going too fast and doing things in just days and I cannot"}, {"start": 215.48, "end": 220.6, "text": " keep up. But thank you, it gives me motivation. So thank you Rahul. The thing"}, {"start": 220.6, "end": 226.28, "text": " is I mean it's funny like my first so my first attempt at trying to"}, {"start": 226.28, "end": 230.35999999999999, "text": " reconstruct a research paper took three months. So I was coding three months"}, {"start": 230.35999999999999, "end": 236.2, "text": " non-stop and the reason being the authors did not open source their like"}, {"start": 236.2, "end": 239.76, "text": " implementation and you couldn't find the code wasn't as popular as the"}, {"start": 239.76, "end": 244.68, "text": " transformer so I couldn't find snippets of code anywhere. So what I what I ended"}, {"start": 244.68, "end": 248.79999999999998, "text": " up doing is just like trying and reconstructing the the the paper by"}, {"start": 248.8, "end": 254.28, "text": " just reading text which is super hard and it takes time and I had lots of back"}, {"start": 254.28, "end": 261.24, "text": " and forth with the authors to understand certain big details and it's like so the"}, {"start": 261.24, "end": 268.28000000000003, "text": " point I want to make here is don't fall for that learn topic huge topic X in"}, {"start": 268.28000000000003, "end": 274.16, "text": " five days. 
I mean that doesn't work like that and I'm just wondering I think I"}, {"start": 274.16, "end": 277.76, "text": " know why that's the case like you have you have Siraj Rawal who had videos"}, {"start": 277.76, "end": 284.24, "text": " like learn tensorflow in five minutes and I mean if you I want to tell you"}, {"start": 284.24, "end": 287.88, "text": " something I mean if you if you believe that crap you're in a really bad"}, {"start": 287.88, "end": 292.56, "text": " position because that's going to set you up for failure and why because you're"}, {"start": 292.56, "end": 297.03999999999996, "text": " basically it's basically setting up setting you up for impatience whereas"}, {"start": 297.03999999999996, "end": 302.4, "text": " you really truly need patience in order to create anything great like every"}, {"start": 302.4, "end": 306.36, "text": " single project like the projects I'm doing at Microsoft they take that's like"}, {"start": 306.36, "end": 312.12, "text": " a multi like multi like team effort that lasts at least six months one year two"}, {"start": 312.12, "end": 316.64, "text": " years three years so anything significant takes time in artificial"}, {"start": 316.64, "end": 320.48, "text": " intelligence and computer science whatever the topic it is like I have"}, {"start": 320.48, "end": 325.26, "text": " skipped creating a video last week last weekend because I was so in a flow of"}, {"start": 325.26, "end": 331.0, "text": " cutting this transformers thing up that I didn't want to break my flow and create"}, {"start": 331.0, "end": 334.6, "text": " a video so that's that's how things function you got to make trade-offs"}, {"start": 334.6, "end": 338.96000000000004, "text": " sometimes and when we are at this point of talking about time management let me"}, {"start": 338.96000000000004, "end": 343.16, "text": " let me let me walk you through how my like commit history looked like and how"}, {"start": 343.16, "end": 347.40000000000003, "text": " my how I organized my time during this three weeks okay so let me let me walk"}, {"start": 347.40000000000003, "end": 351.76000000000005, "text": " you through my schedule because I mean everybody's like I've noticed a lot of"}, {"start": 351.76000000000005, "end": 357.58000000000004, "text": " comments people thinking everybody is so much faster than they are and that's so"}, {"start": 357.58000000000004, "end": 360.92, "text": " much so not true it just takes so much time and let me let me walk you through"}, {"start": 360.92, "end": 365.8, "text": " my commit history if I open this up and let me find some representative day like"}, {"start": 365.8, "end": 373.16, "text": " November 5th and okay so let's start like I usually start my day around around"}, {"start": 373.16, "end": 383.32, "text": " like 929 30 a.m. 
and so I code until maybe 11 15 so usually around two hours"}, {"start": 383.32, "end": 389.96000000000004, "text": " every morning and then what I do I take a short break and like take a stroll get"}, {"start": 389.96, "end": 395.12, "text": " something to eat and I start working at Microsoft where I also work full-time so"}, {"start": 395.12, "end": 401.71999999999997, "text": " it's a really tight schedule and after I finish my my daily daily work at"}, {"start": 401.71999999999997, "end": 406.12, "text": " Microsoft where so the good thing about Microsoft and that's really awesome is"}, {"start": 406.12, "end": 411.67999999999995, "text": " that it's really it's a truly like a meritocracy system so my managers he"}, {"start": 411.67999999999995, "end": 417.2, "text": " doesn't care if I'm like working 10 hours or or six hours he cares that I"}, {"start": 417.2, "end": 424.52, "text": " get my done and that's this super rewarding I'll get fired because of this"}, {"start": 424.52, "end": 430.28, "text": " video I'm joking so so basically I do work around eight hours but the good"}, {"start": 430.28, "end": 434.98, "text": " thing is they value what you do and how productive you are and how much you sit"}, {"start": 434.98, "end": 440.32, "text": " down in that chair doesn't matter you get done and that's it so after I finish"}, {"start": 440.32, "end": 444.64, "text": " my my daily work I like usually take a power nap and depending how tired I am I"}, {"start": 444.64, "end": 450.28, "text": " maybe whatever I take some more more time to do like rest and then around"}, {"start": 450.28, "end": 457.88, "text": " usually around like 9 9 p.m. like 830 whatever I get working I get to to to"}, {"start": 457.88, "end": 464.52, "text": " coding again and so I work until maybe 11 p.m. 
1130 so that's again like two"}, {"start": 464.52, "end": 469.64, "text": " three hours of quality coding so that's that's a that a that thing accumulates"}, {"start": 469.64, "end": 474.44, "text": " so that's like four or five hours every single work day and then I have weekends"}, {"start": 474.44, "end": 477.84, "text": " where I'm even more productive because I don't have to work at Microsoft on"}, {"start": 477.84, "end": 482.47999999999996, "text": " weekends and at the end of the day during work days I was usually either"}, {"start": 482.47999999999996, "end": 488.0, "text": " like I usually take some time to chill I either watch like South Park to just get"}, {"start": 488.0, "end": 493.32, "text": " my like just relax my brain or recently over the last week I've been just"}, {"start": 493.32, "end": 497.4, "text": " watching Lex Friedman's podcast and there is just so many interesting people"}, {"start": 497.4, "end": 502.0, "text": " there and just kind of also relax me even though I'm learning by watching"}, {"start": 502.0, "end": 506.4, "text": " that podcast as well so that's that's cool and now if somebody told me like"}, {"start": 506.4, "end": 513.42, "text": " even two years ago like about this schedule I would tell like I would tell"}, {"start": 513.42, "end": 518.72, "text": " that person like you're like nuts and the thing is depending on your mindset"}, {"start": 518.72, "end": 524.36, "text": " and where like your phase of life and like your situation this may not sound"}, {"start": 524.36, "end": 528.88, "text": " feasible but like that's what I thought like two years ago and then things"}, {"start": 528.88, "end": 533.92, "text": " change and it's totally fine so don't feel any pressure for not working as"}, {"start": 533.92, "end": 537.44, "text": " hard as somebody else because everybody's got their own pace and that's"}, {"start": 537.44, "end": 544.16, "text": " that's super fine like it took me 12 years to get in a working to acquire"}, {"start": 544.16, "end": 548.04, "text": " working habits that I have right now and initially I was just like learning"}, {"start": 548.04, "end": 551.6, "text": " different stuff on my own and structuring my journeys but they were"}, {"start": 551.6, "end": 555.44, "text": " not related to programming whatsoever so I was like learning about human"}, {"start": 555.44, "end": 560.0, "text": " languages I'm really passionate that was my of my there was one of my main"}, {"start": 560.0, "end": 563.6, "text": " hobbies when I was in high school and beginning of my faculty I was training"}, {"start": 563.6, "end": 568.88, "text": " bunch of sports I tried so many sports out there so I was just exploring and"}, {"start": 568.88, "end": 574.52, "text": " only at 19 did I get exposed to to programming and that's totally fine you"}, {"start": 574.52, "end": 578.36, "text": " can still achieve things even when you start a bit later like when you're 19"}, {"start": 578.36, "end": 581.84, "text": " that's that's super totally fine like one more thing and I know it's kind of"}, {"start": 581.84, "end": 587.0, "text": " cliche but like life's not fair so like Bill Gates back in the 60s had a"}, {"start": 587.0, "end": 593.48, "text": " computer when he was like 12 or 13 and I mean he started programming when he was"}, {"start": 593.48, "end": 598.32, "text": " 12 and it's still I'll tell you something it still doesn't matter you can"}, {"start": 598.32, "end": 603.4, "text": " start when you're 19 20 25 30 and it's still not late if you 
keep up the"}, {"start": 603.4, "end": 608.4399999999999, "text": " good work and you keep like being consistent whatever you do you'll get"}, {"start": 608.4399999999999, "end": 612.16, "text": " better and you'll achieve you'll get where you want to be and that's it I"}, {"start": 612.16, "end": 616.16, "text": " mean chillax and lastly there is one more thing I want to mention and that's"}, {"start": 616.16, "end": 621.28, "text": " failures and like turn your head around you and look at successful people around"}, {"start": 621.28, "end": 625.68, "text": " you and they all had failures and if they're honest enough they'll tell you"}, {"start": 625.68, "end": 630.16, "text": " about those if they're not well so I think this guy like he's a Stanford PhD"}, {"start": 630.16, "end": 637.04, "text": " and he was open enough and honest enough to share his his journey and that's how"}, {"start": 637.04, "end": 642.28, "text": " pretty much everybody's journey looks like you get a lot of failures in his"}, {"start": 642.28, "end": 646.56, "text": " case it's like paper rejections but then you have a couple of successes and"}, {"start": 646.56, "end": 651.04, "text": " everybody knows you for your successes and that's it and like me personally I"}, {"start": 651.04, "end": 655.48, "text": " had like a bunch of failures like I was rejected by Facebook I was rejected by"}, {"start": 655.48, "end": 660.9200000000001, "text": " Nvidia I was rejected by Microsoft and then my second attempt I got like"}, {"start": 660.9200000000001, "end": 665.4, "text": " accepted into Microsoft but even if was like third attempt or fourth attempt it"}, {"start": 665.4, "end": 670.48, "text": " doesn't matter like as long as you like keep learning and you keep applying"}, {"start": 670.48, "end": 675.04, "text": " that's that you're set up for success if you're just applying then you're just"}, {"start": 675.04, "end": 678.88, "text": " spamming people like you got to keep on learning and applying and wherever you"}, {"start": 678.88, "end": 682.64, "text": " want to be you'll get there like sometimes when you get rejected it"}, {"start": 682.64, "end": 686.48, "text": " doesn't have to be that you're bad like I don't know like hiring pipelines are"}, {"start": 686.48, "end": 691.16, "text": " in big companies in big tech companies are not perfect they're far from perfect"}, {"start": 691.16, "end": 695.72, "text": " enough about motivational speaking hopefully I just wanted to share my"}, {"start": 695.72, "end": 700.16, "text": " personal story and some of details I think many YouTube videos out there are"}, {"start": 700.16, "end": 704.96, "text": " doing similar things there's already so much high quality educational resources"}, {"start": 704.96, "end": 709.96, "text": " and there is too little personal stories that I felt like sharing some some of"}, {"start": 709.96, "end": 714.64, "text": " the details from my life and from my workflow and I hope you found this"}, {"start": 714.64, "end": 719.96, "text": " helpful if you did please leave a comment in this comment section because"}, {"start": 719.96, "end": 725.24, "text": " I'll start creating more of these if you if you found useful okay all of that"}, {"start": 725.24, "end": 731.4000000000001, "text": " being said let me kind of explain you how I think about approaching a new"}, {"start": 731.4000000000001, "end": 736.8000000000001, "text": " problem and new project so a good thing about transformer project and in general"}, {"start": 736.8, "end": 
741.56, "text": " about many software projects is that they can be modularized and what I mean"}, {"start": 741.56, "end": 745.56, "text": " by that is you can basically orthogonally I independently develop"}, {"start": 745.56, "end": 751.16, "text": " certain components and not care about others so this project was pretty much"}, {"start": 751.16, "end": 756.64, "text": " split into two parts so the first one takes care of training the model and"}, {"start": 756.64, "end": 761.5999999999999, "text": " then you have the second part where once you've trained the model you want to do"}, {"start": 761.6, "end": 769.8000000000001, "text": " you want to do translations and you want to do some decoding on those models so"}, {"start": 769.8000000000001, "end": 773.6800000000001, "text": " once once you split like the functionality is like that you can focus"}, {"start": 773.6800000000001, "end": 778.48, "text": " on pretty much basically only on the training side and I had only three"}, {"start": 778.48, "end": 784.48, "text": " files let's let's say three files three main files which I had to develop in"}, {"start": 784.48, "end": 790.36, "text": " order to get like the transformer model trained and the first one is obviously"}, {"start": 790.36, "end": 795.5600000000001, "text": " like the transformer model a pie file which contains the architecture the"}, {"start": 795.5600000000001, "end": 800.92, "text": " definition of the model itself and so the good thing is when I said it's"}, {"start": 800.92, "end": 807.0, "text": " orthogonal so you can basically I can I basically just developed this model and"}, {"start": 807.0, "end": 812.32, "text": " then I created a small main function inside of this Python file where I could"}, {"start": 812.32, "end": 818.2, "text": " test whether the model is working or not before I move ahead and develop the"}, {"start": 818.2, "end": 822.5600000000001, "text": " other parts of the project so it's never linear like that you usually jump a"}, {"start": 822.5600000000001, "end": 826.5600000000001, "text": " little bit here a little bit there but let's say I basically first developed"}, {"start": 826.5600000000001, "end": 831.9200000000001, "text": " this this model and then and then I went and did the other things so I figured"}, {"start": 831.9200000000001, "end": 836.6, "text": " out some bugs doing this which is a good thing better catch them sooner than"}, {"start": 836.6, "end": 841.8000000000001, "text": " later and I'll get into problems a bit later now let me just give you an"}, {"start": 841.8000000000001, "end": 846.6800000000001, "text": " overview of the project so the second thing is second important thing is like"}, {"start": 846.68, "end": 851.0, "text": " data and you want to load the data you want to load like in this case in this"}, {"start": 851.0, "end": 855.04, "text": " case so the transformer model I should mention that is is trained for the"}, {"start": 855.04, "end": 859.3599999999999, "text": " machine translation task so basically want to translate from one language which"}, {"start": 859.3599999999999, "end": 862.92, "text": " is called a source language into another language which is called a target"}, {"start": 862.92, "end": 867.5999999999999, "text": " language and the languages I use were English and German or German to English"}, {"start": 867.5999999999999, "end": 874.64, "text": " vice versa so I had lots of problems with loading data in in pytorch first of"}, {"start": 874.64, "end": 878.92, "text": " all because I was 
usually I'm used to computer vision projects and this was"}, {"start": 878.92, "end": 882.6, "text": " natural language processing so it was kind of a little bit of paradigm shift"}, {"start": 882.6, "end": 888.48, "text": " and second of all it's nowhere as near as near as good as historic vision is for"}, {"start": 888.48, "end": 894.52, "text": " computer vision so I stumbled across two significant problems first one was like"}, {"start": 894.52, "end": 899.84, "text": " functionality wise so the thing didn't work as I expected to do like a bucket"}, {"start": 899.84, "end": 903.68, "text": " iterate something called bucket iterators which were supposed to batch"}, {"start": 903.68, "end": 907.5999999999999, "text": " the data in an optimal way weren't working as I was expecting them to and"}, {"start": 907.5999999999999, "end": 912.68, "text": " the second problem is the performance problem so the like tokenization"}, {"start": 912.68, "end": 917.16, "text": " procedure and everything took a lot of time like every time I wanted to run"}, {"start": 917.16, "end": 922.3199999999999, "text": " the training loop it took like 45 seconds and it's so annoying and I kind"}, {"start": 922.3199999999999, "end": 927.5999999999999, "text": " I think I went from no I went from 66 seconds to like two and a half seconds"}, {"start": 927.5999999999999, "end": 933.16, "text": " which was a massive optimization jump and they helped me through it much"}, {"start": 933.16, "end": 936.0799999999999, "text": " faster and that's super important in deep learning and machine learning in"}, {"start": 936.0799999999999, "end": 940.68, "text": " general like you want to have a like a really short iteration cycle that that's"}, {"start": 940.68, "end": 948.28, "text": " that's that helps you build stuff better and faster so I'll tell you about some"}, {"start": 948.28, "end": 955.4399999999999, "text": " of the problems I had developing this like the data pipeline but first as I"}, {"start": 955.4399999999999, "end": 959.92, "text": " mentioned let me just quickly go through other files so then we have like the"}, {"start": 959.92, "end": 965.4399999999999, "text": " training loop and basically the training loop is pretty simple once you take a"}, {"start": 965.4399999999999, "end": 971.4799999999999, "text": " look so let me let me show you so basically yeah you I load them I prepare"}, {"start": 971.4799999999999, "end": 977.76, "text": " the model I prepare the data then what I do is I just prepare some things like"}, {"start": 977.76, "end": 983.0, "text": " for label smoothing I won't get into details what that is like optimizers and"}, {"start": 983.0, "end": 988.12, "text": " then the loop is really simple so for a number of epochs I use 20 because just"}, {"start": 988.12, "end": 993.16, "text": " looking at the the attention is all you need paper and calculating stuff you can"}, {"start": 993.16, "end": 999.72, "text": " see they approximately had 19 epochs on the WMT 14 data set so that's what I did"}, {"start": 999.72, "end": 1005.12, "text": " also and I didn't get into any situation regions until 20 20 bucks it was totally"}, {"start": 1005.12, "end": 1011.08, "text": " fine and so the tration loop so I just I basically just have so I do the"}, {"start": 1011.08, "end": 1013.84, "text": " training loop and then I do the validation loop because you want to see"}, {"start": 1013.84, "end": 1018.72, "text": " how the model is generalizing to data that he hasn't seen during the training"}, {"start": 
1018.72, "end": 1024.2, "text": " so training loop is pretty so I'm already getting into too much details so"}, {"start": 1024.2, "end": 1028.68, "text": " you basically once I said a training loop I can I can I can run the whole"}, {"start": 1028.68, "end": 1035.4, "text": " thing I can test it and then what I did is I quickly made a translation script"}, {"start": 1035.4, "end": 1039.76, "text": " which would test whether the model was doing what it's supposed to be doing and"}, {"start": 1039.76, "end": 1044.92, "text": " that's like translating from language to language and that's super important like"}, {"start": 1044.92, "end": 1050.52, "text": " you want to have like end-to-end system as soon as possible and then you can"}, {"start": 1050.52, "end": 1055.04, "text": " keep on iterating and improving either the model or the data loading and that's"}, {"start": 1055.04, "end": 1059.36, "text": " how I did it so I first had the whole thing working end-to-end and only then I"}, {"start": 1059.36, "end": 1064.4, "text": " did those optimizations etc I also had to develop decoding algorithms so what I"}, {"start": 1064.4, "end": 1069.0, "text": " did is I developed a simple one that's called like greedy decoding where"}, {"start": 1069.0, "end": 1073.8, "text": " basically so once you get the like the output distribution for output"}, {"start": 1073.8, "end": 1078.44, "text": " distribution for us for the next token you basically figure out where the"}, {"start": 1078.44, "end": 1082.96, "text": " highest probability is and that position corresponds to a certain token in your"}, {"start": 1082.96, "end": 1089.1, "text": " vocabulary so basically you find the next token in your output or target"}, {"start": 1089.1, "end": 1095.76, "text": " sentence in that way and greedy is the simplest method what they used in the"}, {"start": 1095.76, "end": 1101.04, "text": " paper was actually something called beam search where you keep like a like a like"}, {"start": 1101.04, "end": 1106.04, "text": " a number of hypotheses running in parallel and you at the end you're left"}, {"start": 1106.04, "end": 1110.24, "text": " with the with the with the hypothesis that's like most probable and you keep"}, {"start": 1110.24, "end": 1116.32, "text": " that one and you output that one as the output sentence finally I also have"}, {"start": 1116.32, "end": 1120.2, "text": " something called playground file and I do that not only because I'm open"}, {"start": 1120.2, "end": 1125.4, "text": " sourcing this and treating this like as a learning resource for others but it's"}, {"start": 1125.4, "end": 1130.24, "text": " also a really good thing for me because it helped me understand something a lot"}, {"start": 1130.24, "end": 1135.0, "text": " a lot better and because it's much easier I'm a highly visual person so I"}, {"start": 1135.0, "end": 1139.48, "text": " like to visualize stuff so for example there is this thing called if you're"}, {"start": 1139.48, "end": 1143.68, "text": " familiar with the transformer model they have something called positional"}, {"start": 1143.68, "end": 1150.92, "text": " encodings which you basically add to tokens embedding vectors and so they the"}, {"start": 1150.92, "end": 1154.52, "text": " formula is kind of complicated and then once you visualize it it's not as"}, {"start": 1154.52, "end": 1159.4, "text": " complicated so here is how it looks like for like so you basically take one row"}, {"start": 1159.4, "end": 1166.16, "text": " from this image and so that this like a 
vector of numbers and you add that to to"}, {"start": 1166.16, "end": 1170.92, "text": " the token at this particular position so row zero would be added to token"}, {"start": 1170.92, "end": 1176.8, "text": " representations at position zero or one would go to token representations at"}, {"start": 1176.8, "end": 1183.8, "text": " position one etc so that was the like the high-level overview of how I think"}, {"start": 1183.8, "end": 1187.6, "text": " about the project when I'm developing the project so I like to separate"}, {"start": 1187.6, "end": 1192.84, "text": " functionalities I like to get end-to-end system working as soon as possible and"}, {"start": 1192.84, "end": 1197.52, "text": " then I kind of go into depth so there's the same way I learned theory I always"}, {"start": 1197.52, "end": 1201.9199999999998, "text": " start high level so first like create a skeleton and then start filling in the"}, {"start": 1201.9199999999998, "end": 1206.48, "text": " gaps this the same I pretty much do the same thing I use the same strategy when"}, {"start": 1206.48, "end": 1211.32, "text": " I develop like software projects so that was the high-level overview of the"}, {"start": 1211.32, "end": 1216.3999999999999, "text": " project now just one I want to give you like a couple I want to like kind of"}, {"start": 1216.3999999999999, "end": 1221.3999999999999, "text": " tell you about a couple of problems I struggle with so one of them is when I"}, {"start": 1221.3999999999999, "end": 1226.9199999999998, "text": " was developing the the transformer model itself so it's always a good thing to"}, {"start": 1226.9199999999998, "end": 1232.96, "text": " kind of print out the layers you have in the model to print out the shapes and in"}, {"start": 1232.96, "end": 1237.52, "text": " general just to see how many parameters the model has so doing that I figured"}, {"start": 1237.52, "end": 1242.72, "text": " out that my multi had attention was actually I was I was referencing the"}, {"start": 1242.72, "end": 1248.08, "text": " same object in memory in multiple encoder layers and there was simply a"}, {"start": 1248.08, "end": 1253.36, "text": " bug which I discovered because I was I printed out these like layer layer names"}, {"start": 1253.36, "end": 1258.56, "text": " and I saw there there were some like multi had attention layers missing I"}, {"start": 1258.56, "end": 1265.6, "text": " also so basically if I change this to true here because the the paper explained"}, {"start": 1265.6, "end": 1269.36, "text": " they they had the base model and they had the big model so if I put the big"}, {"start": 1269.36, "end": 1274.0, "text": " one and I run the script I'll get I'm printing out the number of parameters"}, {"start": 1274.0, "end": 1280.04, "text": " and you can see here like 176 million parameters and that's also a good thing"}, {"start": 1280.04, "end": 1285.6799999999998, "text": " to do because I knew that the big model had 175 million parameters so this kind"}, {"start": 1285.6799999999998, "end": 1290.28, "text": " of confirmed that I'm that I have the same number of parameters as the paper"}, {"start": 1290.28, "end": 1295.56, "text": " stated so those are just some of the things you want to do because you you"}, {"start": 1295.56, "end": 1300.24, "text": " want to make sure that something is correct it's kind of it's a kind of a"}, {"start": 1300.24, "end": 1308.24, "text": " test pretty much yeah so the second fun experience I had was during when I was"}, {"start": 1308.24, "end": 
1313.56, "text": " developing beam search functionality and I actually still haven't finished it yet"}, {"start": 1313.56, "end": 1319.52, "text": " I have really tested but beam is still not working and while I was kind of"}, {"start": 1319.52, "end": 1325.52, "text": " investigating how other people did it I found like in tensor to tensor tensor"}, {"start": 1325.52, "end": 1331.12, "text": " to tensor library I found like a line like this like length penalty equals to"}, {"start": 1331.12, "end": 1339.8, "text": " TF power of 5 plus something divided by 6 part to the alpha raised to the alpha"}, {"start": 1339.8, "end": 1346.68, "text": " power and was like what the so the thing is I beg you if you ever write code in"}, {"start": 1346.68, "end": 1352.48, "text": " now like a public open source library please leave a comment you will make"}, {"start": 1352.48, "end": 1358.5600000000002, "text": " like live so much easier so I don't know how but I like kind of went through the"}, {"start": 1358.5600000000002, "end": 1363.3600000000001, "text": " code and some other people's codes like open it and empty etc and I figured out"}, {"start": 1363.3600000000001, "end": 1370.1200000000001, "text": " there is a paper which introduced this is a terry heuristic and it kind of just"}, {"start": 1370.1200000000001, "end": 1375.68, "text": " works I mean and you can see here 5 plus blah blah raised to the power of alpha"}, {"start": 1375.68, "end": 1379.76, "text": " I was like what the fuck so those are just I just wanted to give you a glimpse"}, {"start": 1379.76, "end": 1384.52, "text": " of things you you encounter where you while you're working on these projects"}, {"start": 1384.52, "end": 1389.2, "text": " it's it's super interesting super funny stuff you can you can see okay this"}, {"start": 1389.2, "end": 1393.68, "text": " video is getting already a bit too long let me just show you one more interesting"}, {"start": 1393.68, "end": 1399.24, "text": " problem I stumbled upon while working on the transformer project so it has to do"}, {"start": 1399.24, "end": 1404.44, "text": " with Torch text and loading data and basically you have this class in Torch"}, {"start": 1404.44, "end": 1409.8, "text": " text which is called bucket iterator and it's something similar to data loader if"}, {"start": 1409.8, "end": 1414.72, "text": " you if you're familiar with computer vision like Torch vision library and"}, {"start": 1414.72, "end": 1421.8, "text": " what happened basically is that let me see if I can find so so what happened"}, {"start": 1421.8, "end": 1427.3600000000001, "text": " here is that like the documentation said something totally different from the"}, {"start": 1427.3600000000001, "end": 1432.96, "text": " results I was getting so basically what the bucket iterator said is minimizes"}, {"start": 1432.96, "end": 1437.48, "text": " amount of padding needed while producing freshly shuffled batches for each new"}, {"start": 1437.48, "end": 1444.3600000000001, "text": " epoch and that's not true like I had to dig so deep into the source code and"}, {"start": 1444.3600000000001, "end": 1450.48, "text": " finally I submitted like I saw there is already some issue pending and I just"}, {"start": 1450.48, "end": 1454.98, "text": " commented and like kind of said yeah that guy had right he stumbled upon the"}, {"start": 1454.98, "end": 1460.64, "text": " same bug as I did and basically you have to set this sort within batch to true"}, {"start": 1460.64, "end": 1467.3200000000002, "text": " 
otherwise you won't get like bad padding minimized and once you do that"}, {"start": 1467.3200000000002, "end": 1470.64, "text": " everything so so then you get the functionality you actually wanted and"}, {"start": 1470.64, "end": 1476.3200000000002, "text": " then there was a second thing so what I wanted to do is because certain sentences"}, {"start": 1476.3200000000002, "end": 1481.3200000000002, "text": " are much longer than other so if you if you just set the batch number as a fixed"}, {"start": 1481.3200000000002, "end": 1486.2800000000002, "text": " number so for example always load eight sentences sometimes the batch will be"}, {"start": 1486.28, "end": 1492.04, "text": " huge sometimes it will be a really really like small and that's a bad thing"}, {"start": 1492.04, "end": 1496.76, "text": " because you want to maximize the amount of VRAM on your GPU that you're using"}, {"start": 1496.76, "end": 1505.04, "text": " so what I had to do is I had to to write down this custom function which will"}, {"start": 1505.04, "end": 1511.36, "text": " which will basically make sure that I always have like the same number of"}, {"start": 1511.36, "end": 1516.0, "text": " approximately the same number of tokens in the batch and the annotated"}, {"start": 1516.0, "end": 1519.44, "text": " transformer luckily had something similar implemented although they didn't"}, {"start": 1519.44, "end": 1525.6, "text": " comment why this was like why this was there and so I struggled a lot"}, {"start": 1525.6, "end": 1529.52, "text": " until I actually understood the source code and then I figured everything"}, {"start": 1529.52, "end": 1534.36, "text": " made sense so now hopefully if you go through my code it will be a bit easier"}, {"start": 1534.36, "end": 1540.66, "text": " to understand what happened here just one more quick thing that's that I had"}, {"start": 1540.66, "end": 1548.6000000000001, "text": " to add this basically custom data set because using space tokenizers at the"}, {"start": 1548.6000000000001, "end": 1555.6000000000001, "text": " moment it took me 66 seconds 70 seconds whatever to to re-tokenize the the"}, {"start": 1555.6000000000001, "end": 1560.2, "text": " source and target sentences every single time and so I just decided why not"}, {"start": 1560.2, "end": 1567.5600000000002, "text": " tokenized once save it as a like simple txt file and then instead of"}, {"start": 1567.56, "end": 1571.6799999999998, "text": " re-tokenizing things again just load the txt file and that now takes like two and"}, {"start": 1571.6799999999998, "end": 1577.08, "text": " a half seconds so in a nutshell that's what I did with these data set wrapper"}, {"start": 1577.08, "end": 1583.28, "text": " and this fast translation data set wrapper class so that was a short story"}, {"start": 1583.28, "end": 1589.04, "text": " about like some problems I encountered now like for the end let me just show"}, {"start": 1589.04, "end": 1593.96, "text": " you how the thing functions and how the translation script actually works and I"}, {"start": 1593.96, "end": 1597.52, "text": " think it's really interesting and magical for me because I was usually"}, {"start": 1597.52, "end": 1601.68, "text": " doing computer vision things and then once you see this thing actually working"}, {"start": 1601.68, "end": 1605.24, "text": " and translating from English to German and vice versa it's pretty fascinating"}, {"start": 1605.24, "end": 1609.3400000000001, "text": " if you ask me so I've got up so you basically just have to 
input the"}, {"start": 1609.3400000000001, "end": 1612.92, "text": " sentence depending so I have an English sentence how are you doing today"}, {"start": 1612.92, "end": 1618.8, "text": " whatever like set as a source sentence and we have the model that was trained"}, {"start": 1618.8, "end": 1623.82, "text": " on IWSLT data set and it's translating from English to German so the name is"}, {"start": 1623.82, "end": 1627.24, "text": " pretty indicative of what the model is doing and where the model was trained on"}, {"start": 1627.24, "end": 1633.76, "text": " so you just have to keep these two in sync with the model so if you change the"}, {"start": 1633.76, "end": 1640.3999999999999, "text": " model to G to E German to English you will just have to set the enum here to"}, {"start": 1640.3999999999999, "end": 1648.28, "text": " G to E instead of E to G that's it once I start this thing let me run it it takes"}, {"start": 1648.28, "end": 1654.6399999999999, "text": " a little bit of time to half seconds to load the the like the the the"}, {"start": 1654.6399999999999, "end": 1660.76, "text": " vocabularies and what cat to load vocabulary and then it'll just translate"}, {"start": 1660.76, "end": 1668.92, "text": " why is it not working okay here it is it took two and a two point six seconds to"}, {"start": 1668.92, "end": 1674.12, "text": " load the data and then you can see the input sentence the source sentence"}, {"start": 1674.12, "end": 1679.32, "text": " tokenized and you see the output sentence we get the scene heute which"}, {"start": 1679.32, "end": 1684.52, "text": " means the same thing pretty much although enum is a German word like a"}, {"start": 1684.52, "end": 1689.56, "text": " polite way to say you so there is nothing stopping me from translating from"}, {"start": 1689.56, "end": 1694.32, "text": " German to English let me just change this thing to German to English let me"}, {"start": 1694.32, "end": 1702.08, "text": " change the model to German to English and now if I input for example we get"}, {"start": 1702.08, "end": 1709.84, "text": " this dia which translates as yeah how are you basically in English let me see"}, {"start": 1709.84, "end": 1715.12, "text": " what the model will say how the model would translate this this time and it"}, {"start": 1715.12, "end": 1719.52, "text": " basically translated this as how are you which is pretty much something called"}, {"start": 1719.52, "end": 1723.9199999999998, "text": " gold translation which means like ground truth translation this looks pretty"}, {"start": 1723.9199999999998, "end": 1728.96, "text": " pretty decent the model is not perfect you can find some sentences which would"}, {"start": 1728.96, "end": 1736.04, "text": " kind of confuse the model so even a simple sentence like a spin I'm"}, {"start": 1736.04, "end": 1741.0, "text": " berlinna which is a sentence like I think Kennedy said when he was back in"}, {"start": 1741.0, "end": 1748.24, "text": " Berlin back in the days and it means I'm a Berliner I'm a I'm person living in"}, {"start": 1748.24, "end": 1756.8, "text": " Berlin and if I input this and run it so the model will actually the thing is the"}, {"start": 1756.8, "end": 1762.68, "text": " model doesn't have a token Berliner inside the vocabulary so that's why it"}, {"start": 1762.68, "end": 1769.0, "text": " will say Berlin instead of say I mean I'm anthropomorphizing the model here so it"}, {"start": 1769.0, "end": 1775.9199999999998, "text": " will output Berlin so I am a Berlin so if the model 
had Berliner it would"}, {"start": 1775.9199999999998, "end": 1780.52, "text": " output Berliner so that's another another point I want to make here so if I had"}, {"start": 1780.52, "end": 1784.36, "text": " something called byte pairing coding I could the model would be much more"}, {"start": 1784.36, "end": 1789.1599999999999, "text": " expressive in creating these words then at this point of time but that's a"}, {"start": 1789.1599999999999, "end": 1795.9599999999998, "text": " future work and I'll edit hopefully like maybe next week or something so there"}, {"start": 1795.9599999999998, "end": 1799.76, "text": " are many other things I could show you but for the sake of this video not being"}, {"start": 1799.76, "end": 1805.3999999999999, "text": " like one hour long let me just show you how the perf what amount of hardware"}, {"start": 1805.3999999999999, "end": 1810.1599999999999, "text": " requirements do you need in order to run translations not in order to train the"}, {"start": 1810.16, "end": 1814.8400000000001, "text": " model you would need a lot of GPUs to train the model actually had have some"}, {"start": 1814.8400000000001, "end": 1819.4, "text": " comments in the readme please go go ahead and read that part but for like"}, {"start": 1819.4, "end": 1825.16, "text": " just for for translations let me just open the task manager and if I open it"}, {"start": 1825.16, "end": 1832.24, "text": " up if I run the script now we can follow how much GPU is needed to to get the"}, {"start": 1832.24, "end": 1838.28, "text": " translation so it's so it's basically only a small spike so you won't need as"}, {"start": 1838.28, "end": 1843.0, "text": " much hardware like like a really fancy GPU if you're just running translations"}, {"start": 1843.0, "end": 1848.0, "text": " on the other side if you're training this model you'll need a lot of power so"}, {"start": 1848.0, "end": 1853.28, "text": " that that pretty much wraps it up hope you like this video there was a lot of"}, {"start": 1853.28, "end": 1859.28, "text": " information in here so what my next steps will be is I'll start reading"}, {"start": 1859.28, "end": 1863.72, "text": " research papers again I want to explore what state-of-the-art in natural language"}, {"start": 1863.72, "end": 1870.56, "text": " processing now in 2020 so I'll be spending like maybe reading one paper in"}, {"start": 1870.56, "end": 1874.96, "text": " the morning one paper in the evening that amounts to maybe at least ten"}, {"start": 1874.96, "end": 1880.4, "text": " papers a week which is good enough to get me like familiar with the field in"}, {"start": 1880.4, "end": 1886.32, "text": " like 10 days period two weeks so yeah that was pretty much it if you like this"}, {"start": 1886.32, "end": 1891.24, "text": " video go ahead and subscribe to this channel and hit the bell icon to get"}, {"start": 1891.24, "end": 1897.8, "text": " notified when I upload a new video and until next time keep learning"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=n9sLZPLOxG8
How do transformers work? (Attention is all you need)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I give you a semi-quick tour through the "Attention is all you need" paper. The paper that introduced the first-ever transformer model! I also show you some cool blogs along the way and my half-baked implementation of the original transformer model. You'll learn about: ✔️ How the original transformer model works ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ The Annotated Transformer blog: http://nlp.seas.harvard.edu/2018/04/03/attention.html ✅ Jay Alammar's blog: https://jalammar.github.io/illustrated-transformer/ ✅ Original paper: https://arxiv.org/abs/1706.03762 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 prerequisite theory and my semi-done transformer implementation 1:40 High-level overview of the paper 2:55 Visualization of positional encodings (my code) 5:07 Attention mask (no looking forward!) 7:35 Optimizer 10:20 Multi-head attention in depth 15:15 A glimpse at the code implementation 17:49 Training procedure - machine translation 18:09 Na ja, wie geht's? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #transformer #attention #deeplearning
What's up folks? So over the last month, I've been reading and learning about transformers. So I read a bunch of research papers, and I started implementing the original transformer paper from scratch just a week ago. So I thought, while it's still fresh in my memory, I should go ahead and do a small deep dive into the Attention Is All You Need paper. So without further ado, let's jump into it. So let me open the paper, Attention Is All You Need. So actually, before I go and do a high-level overview of this paper, I strongly recommend you go and read these three blogs, which I already briefly mentioned in my previous video, which I'll link somewhere here. They're from Jay Alammar, and they are super useful if you're new to transformers and attention mechanisms and stuff like that. So this one is really nice, it introduces you to the concept of word2vec and word embeddings in general. And then there is this seq2seq tutorial, which is also really nice. And finally, the most relevant one is The Illustrated Transformer, which is really, really nice. So take your time, read through those, and then you'll have the required prerequisite knowledge for understanding this video. Although you can try and do it without reading the blogs, I just strongly recommend you go through them. Also, yeah, I already said this: like a week ago, I started implementing the original transformer paper, and you can see it here in my coding history. So hopefully all of those nitty-gritty details will be like fresh in my RAM. So without further ado, let's jump into the paper. Okay, so let's first do a short high-level overview of the whole paper. So in 2017, a team from Google published this paper called Attention Is All You Need. And they basically introduced this novel architecture called the transformer. The main idea was that they stopped using RNNs and CNNs and instead used a simple architecture, but they used attention in a really clever way that enabled them to have models, transformers, which are much more parallelizable, and thus more compute optimal than previous architectures. And they basically show here that, yeah, they basically say, hey, this is better than LSTMs and better than GRUs at modeling long-range dependencies among words, and also much more compute optimal. You can see the architecture here, and how it functions on a high level is you basically take the input sentence, you convert it into embeddings, and you add something called positional encodings, which I'll show you right now because I've been coding that up from scratch on my own. So here is my implementation, and I just visualized it because it helped me also understand better what these look like. And what you basically see here is that every single row is something that gets added to the token representation. And using this information, the model knows which positions all of the tokens came from, otherwise the word order would get lost. So this is really important, and they didn't use a learned representation for these positional encodings, they just used this: these are basically sine and cosine functions whose frequencies form a geometric progression. And it turned out to be as good as learnable positional encodings, which they showed in ablation studies later in the paper. Okay, enough of that, let me go back to the paper. So then what you do is you pass those representations into multi-head attention, which I'll go into in a bit more depth later, and then they apply something called a point-wise feed-forward network.
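As a quick illustration of the sinusoidal positional encodings visualized above, here is a minimal sketch assuming the baseline model dimension of 512; the function and variable names are mine, not taken from the repo.

import numpy as np

def positional_encodings(max_seq_len=50, model_dim=512):
    # PE(pos, 2i)   = sin(pos / 10000^(2i/model_dim))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i/model_dim))
    positions = np.arange(max_seq_len)[:, np.newaxis]                    # (max_seq_len, 1)
    freqs = np.power(10000.0, -np.arange(0, model_dim, 2) / model_dim)   # (model_dim / 2,)
    encodings = np.zeros((max_seq_len, model_dim))
    encodings[:, 0::2] = np.sin(positions * freqs)  # even dimensions get sine
    encodings[:, 1::2] = np.cos(positions * freqs)  # odd dimensions get cosine
    return encodings  # row i is what gets added to the token embedding at position i

pe = positional_encodings()
print(pe.shape)  # (50, 512)

Plotting this matrix as an image gives exactly the striped pattern described in the video, where each row is the vector added to the token at that position.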
Finally, the output encoder representations are fed into the decoder, which is super similar to the encoder part, except that it has this middle multi-head attention, which also attends to the encoder representations, and not only to previous representations from the same stack. So that's the difference. And this bottom multi-head attention also uses something called masking. Basically, the current token can only see representations that came before it. And that makes sense because the Transformer was used to do machine translation, so translating from English to German and English to French, and thus it doesn't make any sense to see the future words because you're trying to predict the future words. That's why the mask is used. And I can show you that here, just a second. So this is what the mask looks like. You basically put some really big negative numbers before the softmax in the attention module, and that causes masking of token representations. Okay, so that was a brief overview of the architecture. Yeah, finally, you have the linear layer and softmax, which creates a distribution, and you're just trying to match the target distribution, either using cross-entropy loss or KL divergence in the case of soft labeling, which I'll explain a bit later. And hopefully, I know this probably sounds like gibberish at the moment, but just stick with me, and you'll get to slowly learn all of those details in time. Okay, so attention, I already mentioned it. They are using something called scaled dot-product attention. I'll explain that a bit later. And point-wise feedforward networks, I mentioned those. So basically, what they do is you have a single network, and it processes a single token representation, and that's it. You just train a single network, and you just do something like a for loop over all of the token representations. Embeddings and softmax, positional encoding, I showed you what this looks like. So basically, they use these sine and cosine functions with frequencies which form a geometric progression. And then they argue why self-attention is good. So why it's good is it's computationally a really, really nice thing. So basically, if you compare the second row, a recurrent network, with self-attention, you have that the maximum path length is O of one for the transformer, whereas it's O of n for RNNs. And similarly, the complexity per layer is n squared times d, where n is the number of input tokens, which is usually smaller than d, which is the dimension of the model, which was 512 in the baseline model. So that's why, so basically, attention plus being highly parallelizable, that's the secret sauce of transformers. Okay, they then train this model for translation tasks, as I already mentioned, and they achieved SOTA, so state of the art, in both the English to German and English to French translation tasks. They also use a really interesting learning rate schedule. And I can show you that briefly in this super nice blog called, let me see if I can find it, The Annotated Transformer, and I'll put the link down in the description. So basically, this is how it looks. You linearly increase the learning rate initially for the warm-up number of steps, and then you start decaying it proportionally to the inverse square root of the number of steps. So that's what the learning rate schedule looks like for their transformer, and they use a pretty popular, like a first order optimizer called Adam.
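The warm-up-then-decay schedule described above follows the formula from the paper; here is a tiny sketch of it (the function name is mine, and the default warm-up of 4000 steps is just the paper's baseline value):

def transformer_lr(step, model_dim=512, warmup_steps=4000):
    # lr = model_dim^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
    # i.e. linear warm-up for the first warmup_steps, then ~1/sqrt(step) decay.
    step = max(step, 1)  # avoid dividing by zero at step 0
    return model_dim ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The peak learning rate is reached exactly at the warm-up step:
for s in (100, 4000, 100000):
    print(s, transformer_lr(s))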
Finally, they did some regularization: dropout and label smoothing, which, simply said, works like this: you usually wanna match like a one-hot vector. What label smoothing does is, instead of putting one on the ground truth word, you put 0.9, and you just spread out the rest of the mass across the rest of the vocabulary, and that's it in a nutshell. So I mentioned machine translation, they did an ablation study, and this part is interesting. So they figured out that bigger models are better, and like fast forward to 2020, you know this, and you have the GPT family of models where GPT-3 has 175 billion parameters, and you have BERT and BERT derivatives, and all of those models are huge. Compared to this one, I think the original transformer had around 100 million parameters, so compare that to 175 billion today in 2020. And finally, they conclude, saying they have ideas of using this same architecture also in vision and speech recognition, and again, fast forward to 2020, you have this Vision Transformer which performs really well, really nicely with images, and that's super exciting, although only in the big data regime, but still. Finally, they have some nice visualizations here showing how different multi-head attention modules attend to different words from the previous layers, so you can see here that this word 'masking', oh sorry, 'making', is focusing a lot on 'more difficult', which makes sense if you look at this semantically, and yeah, there are just nice, interesting emerging patterns that appear when you train these transformer networks. So with that being said, let's go a bit more into depth about how the multi-head attention actually works. Okay, let me try to explain how multi-head attention works, and I'll use two methods here. I'll first use Jay Alammar's visualizations because they're really nice, and I'll use my own code I've been developing over the last couple of days. So let's see how it works. So basically, you have, so let me open the paper. We said you convert the sentence into a sequence of embeddings, right? And you're basically here. And now, let me show you how the multi-head attention works. So these are the embeddings. So you have, say, 'thinking machines', and in the real world, you won't actually have a single vector for a single word. There is something called tokenization, and concretely, the Transformer used something called byte-pair encoding, meaning 'thinking' will actually be split maybe into a few vectors, like one which corresponds to THIN, I don't know, the second one for KI, and the third one for NG, whatever. But basically, at the end, you have a sequence of vectors. And what you do now is you apply three neural networks. One will produce something called queries, the second one will produce something called keys, and the third one will produce values. And you basically use the same network for every single token vector to produce queries. So there are only three networks in one multi-head attention module, and there is one more, like at the output, but I'll mention that one a bit later. So what you do now is you wanna find how similar the query from a particular word, say query one, is to the keys of every single token in the sentence. So you do a dot product, so that's more generally called a scoring function, but the Transformer paper used something called a scaled dot-product scoring function. So you do a dot product between query one and key one, and you get a number, say 14.
And then you do a dot product between query one and key two, and you get 12, and those are the scores. And now you do softmax, which makes the sum go to one, and you get 0.88 and 0.12, and basically, you take those numbers and you multiply the value vectors, value one and value two, and Jay nicely visualized it here, so because this one is 0.12, basically this one will be dimmer, and then you sum them up, and you get Z1. So that's how the attention works. But now, the only difference with multi-head attention is basically you split this query one. In the baseline Transformer model, it has 512 dimensions. Once you split it into eight heads, that's the number they used in the paper, you get, so one query one will actually have 64 dimensions. And then you do the same procedure I just described, but independently for all of those eight heads. And once you get the Z1s for all of those eight heads, you just concatenate them together, and you do another feed forward through a neural network, and you get the final result. And that's when you get to this part here, before the addition with this residual connection. So that's how it works in a nutshell. Now, I mentioned the scoring functions, I mentioned the dot product. Why do you do a dot product between query one and key one, key two, whatever? The reason is that it gives you a similarity between those vectors. So what I mean by that, let me use Google for this. So this is the 2D case. So basically, I told you, those vectors, so query one and key one or key two, they have 512 dimensions. But if you kind of project them into two dimensions, you get something like this. And now the smaller the angle, the more similar the vectors are. And basically, the smaller the angle, the higher the score you'll get here. And basically, that means you'll attend more to that particular representation. So I hope this didn't confuse you too much, because that wasn't the point. But yeah, let me know if this helped or it did not. Just reiterating once more: you take queries and you take keys, and you wanna see how similar they are. And if a certain query is really similar to a certain key, then you wanna take into account that value vector with a high score. And basically, if the vectors are orthogonal, the score will be zero. If the vectors are like even on the opposite side, then you'll have a negative score. So that's basically cosine distance. Jumping into code, I wanna show you what it looks like in the actual implementation. So as I said here, you have these query, key, and value nets. You apply them separately on all of those tokens, and you get query, key, and value batches. And once you do that, you pass that into the attention, oops, you pass that into the attention function, which does some matrix multiplication. And that way, you basically apply attention to all of the tokens and across all of the batches in a really optimal way, but it's harder to understand than by looking at a single batch and the examples that I showed you in the blog. Then we have potentially some masking operation. And as I mentioned, in case you wanna ignore a certain representation, you just put negative infinity, oops, this should be negative here. It's still prototype code, so forgive me for this. And then you apply softmax to the scoring functions, and you get the attention weights. And also one more detail is that you sometimes wanna drop certain attention weights, which means certain attention weights get pulled to zero. And that's one more nitty-gritty detail of the implementation.
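To make the pipeline just described concrete (dot products, scaling, optional masking with big negative numbers, softmax, weighted sum of values), here is a compact PyTorch sketch. It follows the baseline numbers from the paper (8 heads of 64 dimensions each), but the function and variable names are mine, not the repo's.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(queries, keys, values, mask=None):
    # queries/keys/values: (batch, num_heads, seq_len, head_dim)
    head_dim = queries.size(-1)
    scores = queries @ keys.transpose(-2, -1) / head_dim ** 0.5  # (batch, heads, seq, seq)
    if mask is not None:
        # Positions where mask == 0 get a huge negative score, so softmax
        # pushes their attention weight to ~0 (the "no looking forward" trick).
        scores = scores.masked_fill(mask == 0, float('-inf'))
    attention_weights = F.softmax(scores, dim=-1)  # dropout on the weights could go here
    return attention_weights @ values              # weighted sum of the value vectors

batch, heads, seq_len, head_dim = 2, 8, 10, 64     # 8 heads * 64 dims = 512 model dim
q = k = v = torch.rand(batch, heads, seq_len, head_dim)
causal_mask = torch.tril(torch.ones(seq_len, seq_len))  # lower-triangular: no future tokens
out = scaled_dot_product_attention(q, k, v, causal_mask)
print(out.shape)  # torch.Size([2, 8, 10, 64]) -- heads get concatenated back to 512 afterwards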
And then you do one more matrix multiplication, between the attention weights and those values. And here is where you get those, so here is where you get those, let me just open this up, these vectors, and now you wanna concatenate them and do one more pass through a neural network to get the final result. And that's exactly what I do here. So intermediate representations, then we change the view of those, and finally we do another projection and you get the final results. So I hope this wasn't too hard to follow along. If it was, please let me know in the comments. And I'll be open sourcing this code in a week or two, so you'll be able to go through the code at your own pace and read through the blogs at your own pace. So hopefully all of this together will help you grasp how all of this works and fits together. Finally, I just wanna give you a feeling for what the actual machine translation training for the original transformer looked like. So you basically have two corpora of text. You have, say, German and you have English. And those corpora are synced, meaning you have one sentence in German and there is a corresponding sentence in English. So for example: Wissen Sie, eines der großen Vergnügen beim Reisen und eine der Freuden bei der ethnographischen Forschung ist, gemeinsam mit den Menschen zu leben, die sich noch an die alten Tage erinnern können. That was a bit of German for you. And you basically have a corresponding sentence, like here on the other side of the document, if I open it up: you know, one of the intense pleasures of travel, blah, blah, blah. So having those two synced, basically, if you're translating from English to German, the German sentence is something called the gold translation. So somebody like a professional translator translated the sentence. Okay, so once you have those two sentences that correspond to each other, what you do is the following. So you have the transformer. You take the English sentence, you tokenize it, you embed it. You do a forward pass through the encoder module here and you get representations. On the other side, you take the German sentence, and you again tokenize it. But now what you do is you put, as a prefix, this start-of-sentence token. So that's here. And then you do a forward pass here. And finally, the gold translation, you just convert that into soft label distributions and you just do KL divergence. And then doing backprop will train and tune the weights of the whole transformer model, and that's a single training step. The important part is that you wanna use those masks so that a certain token, like say a token that's here, cannot see tokens that are coming afterwards. That being said, if you found this video useful, consider subscribing to this channel and hit the bell icon to get notified when I upload a new video. And until next time, keep learning deep. And I'll see you next time.
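The "soft label distribution plus KL divergence" step mentioned above can be sketched in a few lines of PyTorch. The 0.9 confidence value follows the description in the video, while the helper name, the tiny vocabulary, and the exact reduction are my own assumptions, not necessarily what the repo does.

import torch
import torch.nn.functional as F

def smooth_targets(gold_token_ids, vocab_size, confidence=0.9):
    # Instead of a one-hot target, put 0.9 on the gold token and spread
    # the remaining 0.1 uniformly over the rest of the vocabulary.
    smoothing_mass = (1.0 - confidence) / (vocab_size - 1)
    targets = torch.full((gold_token_ids.size(0), vocab_size), smoothing_mass)
    targets.scatter_(1, gold_token_ids.unsqueeze(1), confidence)
    return targets

vocab_size = 100
logits = torch.randn(4, vocab_size, requires_grad=True)  # decoder outputs for 4 target positions
gold = torch.tensor([3, 17, 42, 99])                      # gold (ground truth) token ids
targets = smooth_targets(gold, vocab_size)
loss = F.kl_div(F.log_softmax(logits, dim=-1), targets, reduction='batchmean')
loss.backward()  # a real training step would follow this with an optimizer update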
[{"start": 0.0, "end": 1.44, "text": " What's up folks?"}, {"start": 1.44, "end": 5.24, "text": " So over the last month, I've been reading"}, {"start": 5.24, "end": 7.08, "text": " and learning about transformers."}, {"start": 7.08, "end": 9.28, "text": " So I read a bunch of research papers,"}, {"start": 9.28, "end": 12.040000000000001, "text": " I started implementing the original transformer paper"}, {"start": 12.040000000000001, "end": 14.08, "text": " from scratch just a week ago."}, {"start": 14.08, "end": 16.54, "text": " So I thought while it's still fresh in my memory,"}, {"start": 16.54, "end": 19.080000000000002, "text": " I should go ahead and do a small deep dive"}, {"start": 19.080000000000002, "end": 21.72, "text": " into the Attention Is All You Need paper."}, {"start": 21.72, "end": 24.240000000000002, "text": " So without further ado, let's jump into it."}, {"start": 24.240000000000002, "end": 27.560000000000002, "text": " So let me open the paper, Attention Is All You Need."}, {"start": 27.56, "end": 30.2, "text": " So actually before I go and do a high level overview"}, {"start": 30.2, "end": 33.6, "text": " of this paper, I strongly recommend you go"}, {"start": 33.6, "end": 37.0, "text": " and read these three blogs, which I already briefly mentioned"}, {"start": 37.0, "end": 40.04, "text": " in my previous video, which I'll link somewhere here."}, {"start": 40.04, "end": 45.04, "text": " So it's from Jay Elmer, and they are super useful"}, {"start": 46.3, "end": 49.72, "text": " if you're new to transformers and to your attention mechanisms"}, {"start": 49.72, "end": 50.56, "text": " and stuff like that."}, {"start": 50.56, "end": 54.44, "text": " So this one is really nice, introduces you to the concept"}, {"start": 54.44, "end": 57.76, "text": " of word to back to like word embeddings in general."}, {"start": 57.76, "end": 61.04, "text": " And then there is this sec to sec tutorial,"}, {"start": 61.04, "end": 62.16, "text": " which is also really nice."}, {"start": 62.16, "end": 64.9, "text": " And finally, the most relevant one is this illustrated"}, {"start": 64.9, "end": 68.0, "text": " transformer, which is really, really nice."}, {"start": 68.0, "end": 70.96, "text": " So take your time, read through those,"}, {"start": 70.96, "end": 75.72, "text": " and then you'll have the required prerequisite knowledge"}, {"start": 75.72, "end": 77.53999999999999, "text": " for understanding this video."}, {"start": 79.12, "end": 81.88, "text": " Although you can try and do it without reading the blogs,"}, {"start": 81.88, "end": 85.39999999999999, "text": " I just strongly recommend you go through the blogs."}, {"start": 85.39999999999999, "end": 88.36, "text": " Also, so yeah, I already said this."}, {"start": 88.36, "end": 90.67999999999999, "text": " So like a week ago, I started implementing"}, {"start": 90.67999999999999, "end": 92.19999999999999, "text": " the original transformer paper,"}, {"start": 92.19999999999999, "end": 94.38, "text": " and you can see it here in my coding history."}, {"start": 94.38, "end": 96.44, "text": " So hopefully all of those nitty gritty details"}, {"start": 96.44, "end": 98.36, "text": " will be like fresh in my RAM."}, {"start": 98.36, "end": 102.1, "text": " So without further ado, let's jump into the paper."}, {"start": 102.1, "end": 105.64, "text": " Okay, so let's first do a short high level overview"}, {"start": 105.64, "end": 106.72, "text": " of the whole paper."}, {"start": 106.72, "end": 110.47999999999999, "text": " So in 
2017, a team from Google published this paper"}, {"start": 110.48, "end": 112.68, "text": " called Attention is All You Need."}, {"start": 112.68, "end": 115.04, "text": " And they basically introduced this novel architecture"}, {"start": 115.04, "end": 116.96000000000001, "text": " called the transformer."}, {"start": 116.96000000000001, "end": 120.68, "text": " The main idea there was that they stopped using RNNs"}, {"start": 120.68, "end": 124.4, "text": " and CNNs and instead used a simple architecture,"}, {"start": 124.4, "end": 127.96000000000001, "text": " but they used attention in a really clever way"}, {"start": 127.96000000000001, "end": 131.84, "text": " that enabled them to have models, transformers,"}, {"start": 131.84, "end": 134.36, "text": " which are much more parallelizable,"}, {"start": 134.36, "end": 138.66, "text": " and thus more compute optimal than previous architectures."}, {"start": 138.66, "end": 143.34, "text": " And they basically show here that, yeah,"}, {"start": 143.34, "end": 148.34, "text": " they basically say, hey, this is better than LSTMs"}, {"start": 148.54, "end": 152.7, "text": " and better than GRUs in modeling long range dependencies"}, {"start": 152.7, "end": 156.12, "text": " among words, and also much more compute optimal."}, {"start": 159.57999999999998, "end": 161.66, "text": " You can see the architecture here,"}, {"start": 161.66, "end": 164.9, "text": " and how it functions on the high level"}, {"start": 164.9, "end": 167.7, "text": " is you basically take the input sentence,"}, {"start": 167.7, "end": 170.6, "text": " you convert it into embeddings,"}, {"start": 170.6, "end": 173.28, "text": " you add something called positional encodings,"}, {"start": 173.28, "end": 175.06, "text": " which I'll show you right now"}, {"start": 175.06, "end": 178.54, "text": " because I've been coding that up from scratch on my own."}, {"start": 178.54, "end": 181.94, "text": " So here is my implementation and just visualize it"}, {"start": 181.94, "end": 183.94, "text": " because it helped me also understand"}, {"start": 183.94, "end": 185.82, "text": " better how these look like."}, {"start": 185.82, "end": 190.66, "text": " And what you basically see here is that every single row"}, {"start": 190.66, "end": 195.66, "text": " is something that gets added to the token representation."}, {"start": 195.66, "end": 199.18, "text": " And using this information, the model knows"}, {"start": 199.18, "end": 202.3, "text": " where all of the tokens came from,"}, {"start": 202.3, "end": 204.76, "text": " otherwise the word order would get lost."}, {"start": 204.76, "end": 206.01999999999998, "text": " So this is really important,"}, {"start": 206.01999999999998, "end": 209.34, "text": " and they didn't use the learned representation"}, {"start": 209.34, "end": 213.38, "text": " for this positional encodings, they just used this,"}, {"start": 213.38, "end": 216.32, "text": " these are basically sine and cosine functions"}, {"start": 216.32, "end": 219.32, "text": " whose frequencies form a geometric progression."}, {"start": 219.32, "end": 222.06, "text": " And it turned out to be as good as learnable"}, {"start": 223.54, "end": 224.42, "text": " positional encodings,"}, {"start": 224.42, "end": 227.22, "text": " which they showed in ablation studies later in the paper."}, {"start": 227.22, "end": 230.94, "text": " Okay, enough of that, let me go back to the paper."}, {"start": 230.94, "end": 234.51999999999998, "text": " So then what you do is you pass those 
representations"}, {"start": 234.51999999999998, "end": 236.17999999999998, "text": " into multi-head attention,"}, {"start": 236.17999999999998, "end": 239.29999999999998, "text": " which I'll go into a bit more depth a bit later,"}, {"start": 239.29999999999998, "end": 241.45999999999998, "text": " and they then apply something"}, {"start": 241.45999999999998, "end": 243.94, "text": " called point-wise feed-forward network."}, {"start": 243.94, "end": 246.44, "text": " Finally, the output encoder representations"}, {"start": 246.44, "end": 247.89999999999998, "text": " are fed into the decoder,"}, {"start": 247.89999999999998, "end": 250.42, "text": " which is super similar to encoder part,"}, {"start": 250.42, "end": 253.89999999999998, "text": " except that it has this middle multi-head attention,"}, {"start": 253.9, "end": 258.78000000000003, "text": " which actually attends also to encoder representations,"}, {"start": 258.78000000000003, "end": 263.74, "text": " and not only to previous representations from the same stack."}, {"start": 263.74, "end": 265.46, "text": " So that's the difference."}, {"start": 265.46, "end": 269.02, "text": " And this bottom multi-head attention"}, {"start": 269.02, "end": 272.14, "text": " also uses something called masking."}, {"start": 272.14, "end": 276.14, "text": " Basically, the current token"}, {"start": 276.14, "end": 279.06, "text": " can only see representations that came before it."}, {"start": 279.06, "end": 282.74, "text": " And that makes sense because Transformer was used"}, {"start": 282.74, "end": 285.74, "text": " to do some machine translation,"}, {"start": 285.74, "end": 289.02, "text": " so translating from English to German and English to French,"}, {"start": 289.02, "end": 292.56, "text": " and thus it doesn't make any sense to see the future words"}, {"start": 292.56, "end": 294.56, "text": " because you're trying to predict the future words."}, {"start": 294.56, "end": 296.08, "text": " That's why the mask is used."}, {"start": 296.08, "end": 300.74, "text": " And I can show you that here, just a second."}, {"start": 301.6, "end": 305.34000000000003, "text": " So this is how the mask looks like."}, {"start": 305.34000000000003, "end": 308.62, "text": " You basically put some really big negative numbers"}, {"start": 308.62, "end": 313.22, "text": " before the softmax in the attention module,"}, {"start": 313.22, "end": 317.56, "text": " and that causes masking of token representations."}, {"start": 318.42, "end": 322.62, "text": " Okay, so that was a brief overview of the architecture."}, {"start": 322.62, "end": 325.78000000000003, "text": " Yeah, finally, you have the linear layer and softmax,"}, {"start": 325.78000000000003, "end": 327.34000000000003, "text": " which creates a distribution,"}, {"start": 327.34000000000003, "end": 330.3, "text": " and you're just trying to minimize the distribution"}, {"start": 330.3, "end": 334.18, "text": " neither using cross-entropy loss or KL divergence"}, {"start": 334.18, "end": 336.5, "text": " in the case of soft labeling,"}, {"start": 336.5, "end": 338.14, "text": " which I'll explain a bit later."}, {"start": 338.14, "end": 340.62, "text": " And hopefully, I know this probably sounds gibberish"}, {"start": 340.62, "end": 342.3, "text": " at the moment, but just stick with me,"}, {"start": 342.3, "end": 344.78, "text": " and you'll get to kind of slowly learn"}, {"start": 344.78, "end": 347.76, "text": " all of those details in time."}, {"start": 347.76, "end": 351.68, "text": " 
Okay, so attention, I already mentioned it."}, {"start": 351.68, "end": 355.38, "text": " They are using something called scaled.productattention."}, {"start": 355.38, "end": 358.3, "text": " I'll explain that a bit later."}, {"start": 358.3, "end": 362.46, "text": " And point-wise feedforward networks, I mentioned those."}, {"start": 362.46, "end": 366.09999999999997, "text": " So basically, what they do is you have a single network,"}, {"start": 366.1, "end": 370.54, "text": " and it processes a single token representation,"}, {"start": 370.54, "end": 371.38, "text": " and that's it."}, {"start": 371.38, "end": 374.26000000000005, "text": " You just train a single network,"}, {"start": 374.26000000000005, "end": 376.06, "text": " and you just do something like a for loop"}, {"start": 376.06, "end": 377.94, "text": " over all of the token representations."}, {"start": 377.94, "end": 380.3, "text": " Embeddings and softmax, positional encoding,"}, {"start": 380.3, "end": 381.78000000000003, "text": " I showed you how this looks like."}, {"start": 381.78000000000003, "end": 385.74, "text": " So basically, they use these sine and cosine functions"}, {"start": 385.74, "end": 388.74, "text": " with frequencies which form geometric progression."}, {"start": 388.74, "end": 392.3, "text": " And then they argue why self-attention is good."}, {"start": 392.3, "end": 396.1, "text": " So why it's good is it's computationally really,"}, {"start": 396.1, "end": 398.46000000000004, "text": " a really, really nice thing."}, {"start": 398.46000000000004, "end": 403.1, "text": " So basically, if you compare the second row recurrent"}, {"start": 403.1, "end": 405.86, "text": " network with self-attention,"}, {"start": 405.86, "end": 408.94, "text": " you have that the maximum path length is O of one"}, {"start": 408.94, "end": 413.94, "text": " for transformer, whereas it's O of n for RNNs."}, {"start": 414.26, "end": 416.46000000000004, "text": " And similarly, complexity of all areas,"}, {"start": 416.46000000000004, "end": 421.46000000000004, "text": " n squared times d, where n is the number of input tokens,"}, {"start": 421.46, "end": 423.65999999999997, "text": " which is usually smaller than d,"}, {"start": 423.65999999999997, "end": 426.06, "text": " which is the dimension of the model,"}, {"start": 426.06, "end": 428.7, "text": " which was 512 in the baseline model."}, {"start": 430.09999999999997, "end": 432.85999999999996, "text": " So that's why it's, so basically,"}, {"start": 432.85999999999996, "end": 436.02, "text": " attention plus highly parallelizable,"}, {"start": 436.02, "end": 438.71999999999997, "text": " that's what's the secret sauce of transformers."}, {"start": 440.09999999999997, "end": 443.65999999999997, "text": " Okay, they then just, they train this model"}, {"start": 443.65999999999997, "end": 446.5, "text": " for translation tasks, as I already mentioned,"}, {"start": 446.5, "end": 448.85999999999996, "text": " and they achieved so does, so state of the art,"}, {"start": 448.86, "end": 451.86, "text": " in both English to German and English to French"}, {"start": 451.86, "end": 453.92, "text": " translation tasks."}, {"start": 453.92, "end": 456.42, "text": " They also use some really interesting"}, {"start": 456.42, "end": 459.26, "text": " optimizing learning rate schedule."}, {"start": 459.26, "end": 463.18, "text": " And I can show you that briefly in this super nice blog"}, {"start": 463.18, "end": 466.78000000000003, "text": " called the, let me see if I can find 
it,"}, {"start": 466.78000000000003, "end": 468.58000000000004, "text": " the annotated transformer,"}, {"start": 468.58000000000004, "end": 471.82, "text": " and I'll link the link down in the description."}, {"start": 471.82, "end": 475.06, "text": " So basically, this is how it looks like."}, {"start": 475.06, "end": 477.58000000000004, "text": " You basically have, you linearly increase"}, {"start": 477.58, "end": 481.3, "text": " the learning rate initially for the warm up number of steps,"}, {"start": 481.3, "end": 484.46, "text": " and then you start decaying them proportional to,"}, {"start": 484.46, "end": 487.9, "text": " proportionally to this inverse square root"}, {"start": 487.9, "end": 489.38, "text": " of the number of steps."}, {"start": 489.38, "end": 492.02, "text": " So that's how the learning rate schedule looks like"}, {"start": 492.02, "end": 494.18, "text": " for their transformer, and they use a simple,"}, {"start": 494.18, "end": 497.46, "text": " like a pretty popular optimizer,"}, {"start": 497.46, "end": 499.82, "text": " like first order optimizer called Atom."}, {"start": 501.29999999999995, "end": 503.97999999999996, "text": " Finally, they did some regularizations, dropouts,"}, {"start": 503.98, "end": 508.14000000000004, "text": " label smoothing, which, simply said,"}, {"start": 508.14000000000004, "end": 512.26, "text": " you usually wanna match like a one hot vector."}, {"start": 512.26, "end": 515.38, "text": " What label smoothing does is instead of putting one"}, {"start": 515.38, "end": 518.62, "text": " on the ground truth word, you put 0.9,"}, {"start": 518.62, "end": 520.58, "text": " and you just spread out the rest of the mass"}, {"start": 520.58, "end": 522.94, "text": " across the rest of the vocabulary,"}, {"start": 522.94, "end": 524.84, "text": " and that's it in a nutshell."}, {"start": 526.02, "end": 527.54, "text": " So I mentioned machine translation,"}, {"start": 527.54, "end": 529.86, "text": " they did ablation study, this part is interesting."}, {"start": 529.86, "end": 532.34, "text": " So they figured out that bigger models are better,"}, {"start": 532.34, "end": 537.0600000000001, "text": " and like fast forward to 2020, you know this,"}, {"start": 537.0600000000001, "end": 538.98, "text": " and you have GPT family of models"}, {"start": 538.98, "end": 542.26, "text": " where GPT-3 has 175 billion parameters,"}, {"start": 542.26, "end": 544.58, "text": " and you have BERT and BERT derivatives,"}, {"start": 544.58, "end": 546.32, "text": " and all of those models are huge."}, {"start": 546.32, "end": 548.14, "text": " Compared to this one, I think it was,"}, {"start": 548.14, "end": 551.5, "text": " the original transformer had around 100 million parameters,"}, {"start": 551.5, "end": 555.1, "text": " so compare that to 175 billion today in 2020."}, {"start": 556.02, "end": 561.02, "text": " And finally, they conclude, saying they have ideas"}, {"start": 561.02, "end": 565.9399999999999, "text": " of using this same architecture also in vision"}, {"start": 565.9399999999999, "end": 570.3, "text": " and speech recognition, and again, fast forward to 2020,"}, {"start": 570.3, "end": 573.42, "text": " you have this visual transformer which performs really good,"}, {"start": 573.42, "end": 577.26, "text": " really nice with images, and that's super exciting,"}, {"start": 577.26, "end": 579.88, "text": " although only in the big data regime, but still."}, {"start": 580.9, "end": 583.78, "text": " Finally, they have some nice 
visualizations here"}, {"start": 583.78, "end": 586.74, "text": " showing how different multi-hat attention modules"}, {"start": 586.74, "end": 591.0600000000001, "text": " attend different words in the previous layers,"}, {"start": 591.0600000000001, "end": 592.82, "text": " so you can see here that the masking,"}, {"start": 592.82, "end": 595.86, "text": " this word masking, oh sorry, making,"}, {"start": 595.86, "end": 598.7, "text": " is focusing a lot on more difficult,"}, {"start": 598.7, "end": 601.98, "text": " which makes sense if you look at this semantically,"}, {"start": 601.98, "end": 605.02, "text": " and yeah, there are just nice, interesting emerging patterns"}, {"start": 605.02, "end": 610.02, "text": " that appear when you train these transformers and networks."}, {"start": 611.5, "end": 615.7, "text": " So with that being said, let's go a bit into more depth"}, {"start": 615.7, "end": 619.58, "text": " about how the multi-hat attention actually works."}, {"start": 619.58, "end": 623.0200000000001, "text": " Okay, let me try to explain how multi-hat attention works,"}, {"start": 623.0200000000001, "end": 625.58, "text": " and I'll use two methods here."}, {"start": 625.58, "end": 628.4200000000001, "text": " I'll first use JLMR's visualizations"}, {"start": 628.4200000000001, "end": 631.1800000000001, "text": " because they're really nice, and I'll use my own code"}, {"start": 631.1800000000001, "end": 633.22, "text": " I've been developing over the last couple of days."}, {"start": 633.22, "end": 635.6800000000001, "text": " So let's see how it works."}, {"start": 635.6800000000001, "end": 640.58, "text": " So basically, you have, so let me open the paper."}, {"start": 640.58, "end": 642.6600000000001, "text": " We said you convert the sentence"}, {"start": 642.6600000000001, "end": 645.5600000000001, "text": " into a sequence of embeddings, right?"}, {"start": 645.56, "end": 647.2199999999999, "text": " And you're basically here."}, {"start": 647.2199999999999, "end": 650.38, "text": " And now, let me show you how the multi-hat attention works."}, {"start": 651.3, "end": 653.26, "text": " So these are the embeddings."}, {"start": 653.26, "end": 655.18, "text": " So you have, say, thinking machines,"}, {"start": 655.18, "end": 658.38, "text": " and in real world, you won't actually have"}, {"start": 658.38, "end": 660.42, "text": " a single vector for a single word."}, {"start": 660.42, "end": 662.3, "text": " There is something called tokenization,"}, {"start": 662.3, "end": 664.6999999999999, "text": " and concretely, Transformer used something"}, {"start": 664.6999999999999, "end": 667.54, "text": " called byte-pair encoding, meaning thinking"}, {"start": 667.54, "end": 670.04, "text": " will actually be split maybe into two vectors,"}, {"start": 670.04, "end": 675.04, "text": " like one which actually corresponds to THIN, I don't know,"}, {"start": 676.5799999999999, "end": 680.4599999999999, "text": " or the second one for the KI,"}, {"start": 680.4599999999999, "end": 682.98, "text": " and the third one for NG, whatever."}, {"start": 682.98, "end": 686.0999999999999, "text": " So but basically, at the end, you have a sequence of vectors."}, {"start": 686.0999999999999, "end": 691.0999999999999, "text": " And what you do now is you apply three neural networks."}, {"start": 691.56, "end": 693.8, "text": " One will produce something called queries,"}, {"start": 693.8, "end": 696.6999999999999, "text": " the second one will produce something called keys,"}, {"start": 
696.6999999999999, "end": 699.26, "text": " and the third one will produce values."}, {"start": 699.26, "end": 702.4, "text": " And now, you basically will use the same network"}, {"start": 702.4, "end": 707.4, "text": " for every single token vector to produce queries."}, {"start": 707.7, "end": 711.9, "text": " So there are only three networks in one multi-hat attention,"}, {"start": 711.9, "end": 714.1, "text": " and there is one more, like at the output,"}, {"start": 714.1, "end": 715.9, "text": " but I'll mention the one a bit later."}, {"start": 715.9, "end": 720.34, "text": " So what you do now is you wanna find how similar"}, {"start": 721.3, "end": 724.28, "text": " query from a particular word, say, query one,"}, {"start": 724.28, "end": 729.28, "text": " is similar to keys of every single token in a sentence."}, {"start": 729.66, "end": 732.6999999999999, "text": " So you do a dot product, so that's more generally"}, {"start": 732.6999999999999, "end": 735.86, "text": " called a scoring function, but the Transformer paper"}, {"start": 735.86, "end": 738.6999999999999, "text": " used something called scale dot product scoring function."}, {"start": 738.6999999999999, "end": 743.3399999999999, "text": " So you do a dot product between query one and key one,"}, {"start": 743.3399999999999, "end": 746.24, "text": " and you get a number, say, 14."}, {"start": 746.24, "end": 751.24, "text": " And then you do a dot product between query one and key two,"}, {"start": 751.3399999999999, "end": 753.86, "text": " and you get 12, and those are scores."}, {"start": 753.86, "end": 758.1800000000001, "text": " And now you do softmax, which makes the sum goes to one,"}, {"start": 758.1800000000001, "end": 761.74, "text": " and you get 0.88 and 0.12, and basically,"}, {"start": 761.74, "end": 764.86, "text": " you take those numbers, you multiply the value vectors,"}, {"start": 764.86, "end": 769.22, "text": " value one, value two, and Jay nicely visualized it here,"}, {"start": 769.22, "end": 773.3000000000001, "text": " so because this is 0.12, basically this one will be dimmer,"}, {"start": 773.3000000000001, "end": 775.86, "text": " and then you sum them up, and you get Z1."}, {"start": 775.86, "end": 778.26, "text": " So that's how the attention works."}, {"start": 778.26, "end": 781.44, "text": " But now, the only difference with multi-hat attention"}, {"start": 781.44, "end": 785.2800000000001, "text": " is basically you split this query one."}, {"start": 785.2800000000001, "end": 789.0600000000001, "text": " In the baseline Transformer model, it has 512 dimensions."}, {"start": 789.0600000000001, "end": 791.2600000000001, "text": " Once you split them into eight hats,"}, {"start": 791.2600000000001, "end": 793.84, "text": " that's the number I used in the paper, you get,"}, {"start": 793.84, "end": 796.7800000000001, "text": " so one query one will actually have 64 dimensions."}, {"start": 796.7800000000001, "end": 799.96, "text": " And then you do the same procedure I just described,"}, {"start": 799.96, "end": 802.9000000000001, "text": " but independently for all of those eight hats."}, {"start": 802.9000000000001, "end": 806.34, "text": " And once you get Z1s for all of those eight hats,"}, {"start": 806.34, "end": 808.4200000000001, "text": " you just concatenate them together,"}, {"start": 808.42, "end": 813.42, "text": " and you do another feed forward through a neural network,"}, {"start": 813.5, "end": 815.18, "text": " and you get the final result."}, {"start": 815.18, "end": 
819.42, "text": " And that's when you get to this part here"}, {"start": 819.42, "end": 823.36, "text": " before addition with this residual connection."}, {"start": 823.36, "end": 825.76, "text": " So that's how it works in a nutshell."}, {"start": 825.76, "end": 827.9799999999999, "text": " Now, I mentioned the scoring functions,"}, {"start": 827.9799999999999, "end": 829.3, "text": " I mentioned dot product."}, {"start": 829.3, "end": 831.4599999999999, "text": " Why do you do a dot product between query one"}, {"start": 831.4599999999999, "end": 834.04, "text": " and V1 and V2, whatever?"}, {"start": 834.04, "end": 836.9399999999999, "text": " The reason is that gives you a similarity"}, {"start": 836.9399999999999, "end": 838.3199999999999, "text": " between those vectors."}, {"start": 838.32, "end": 841.0600000000001, "text": " So what I mean by that, let me use Google for this."}, {"start": 841.0600000000001, "end": 842.6400000000001, "text": " So this is a 2D case."}, {"start": 842.6400000000001, "end": 846.0200000000001, "text": " So basically, I told you, so those vectors are like,"}, {"start": 846.0200000000001, "end": 849.34, "text": " so query one and key one or key two,"}, {"start": 849.34, "end": 852.12, "text": " they have 512 dimensions."}, {"start": 852.12, "end": 857.12, "text": " But if you kind of maybe project them into two dimensions,"}, {"start": 857.72, "end": 859.0, "text": " and you get something like this."}, {"start": 859.0, "end": 860.82, "text": " And now the smaller the angle,"}, {"start": 860.82, "end": 863.34, "text": " the more similar the vectors are."}, {"start": 863.34, "end": 867.4200000000001, "text": " And basically, so the smaller the angle,"}, {"start": 867.42, "end": 870.66, "text": " you'll get higher score here."}, {"start": 870.66, "end": 873.8199999999999, "text": " And basically, that means you'll attend more"}, {"start": 873.8199999999999, "end": 876.6999999999999, "text": " to that particular representation."}, {"start": 876.6999999999999, "end": 879.9599999999999, "text": " So I hope this didn't confuse you too much,"}, {"start": 879.9599999999999, "end": 881.18, "text": " because that wasn't the point."}, {"start": 881.18, "end": 885.42, "text": " But yeah, let me know if this helped or it did not."}, {"start": 885.42, "end": 887.6999999999999, "text": " Just reiterating once more,"}, {"start": 887.6999999999999, "end": 890.4, "text": " you take queries and you take keys,"}, {"start": 890.4, "end": 892.76, "text": " and you wanna see how similar they are."}, {"start": 892.76, "end": 897.12, "text": " And if certain query is really similar"}, {"start": 897.12, "end": 901.54, "text": " to certain key, then you wanna take into account"}, {"start": 901.54, "end": 903.9, "text": " that vector with a high score."}, {"start": 903.9, "end": 906.38, "text": " And basically, if vectors are orthogonal,"}, {"start": 906.38, "end": 907.7, "text": " the score will be zero."}, {"start": 907.7, "end": 911.34, "text": " If vectors are like even on the opposite side,"}, {"start": 911.34, "end": 912.74, "text": " then you'll have a negative score."}, {"start": 912.74, "end": 917.34, "text": " So that's basic cosine distance, basically."}, {"start": 919.38, "end": 923.5, "text": " Jumping into code, I wanna show you how it looks like"}, {"start": 923.5, "end": 924.8, "text": " like in flash."}, {"start": 924.8, "end": 929.4799999999999, "text": " So as I said here, so you have these query,"}, {"start": 929.4799999999999, "end": 931.06, "text": " key, and 
value nets."}, {"start": 931.06, "end": 935.4599999999999, "text": " You apply them separately on all of those tokens,"}, {"start": 935.4599999999999, "end": 939.4599999999999, "text": " and you get query, key, and value batches."}, {"start": 941.06, "end": 944.5, "text": " And once you do that, you pass that into the attention,"}, {"start": 945.38, "end": 948.52, "text": " oops, you pass that into the attention function,"}, {"start": 948.52, "end": 951.74, "text": " which does some matrix multiplication."}, {"start": 951.74, "end": 956.74, "text": " And that way, you basically apply attention"}, {"start": 957.22, "end": 960.82, "text": " to all of the tokens and through all of the batches"}, {"start": 960.82, "end": 963.78, "text": " in a really optimal way, but it's harder to understand"}, {"start": 963.78, "end": 967.74, "text": " than by looking at single batch and examples"}, {"start": 967.74, "end": 970.1, "text": " that I showed you in the blog."}, {"start": 970.1, "end": 973.0600000000001, "text": " Then we have potentially some masking operation."}, {"start": 973.0600000000001, "end": 977.3, "text": " And as I mentioned, in case you wanna ignore"}, {"start": 977.3, "end": 980.62, "text": " a certain representation, you just put negative infinity,"}, {"start": 980.62, "end": 983.78, "text": " oops, this should be negative here."}, {"start": 984.74, "end": 989.5, "text": " It's still a prototype code, so forgive me for this."}, {"start": 989.5, "end": 991.82, "text": " And then you apply softmax to scoring functions,"}, {"start": 991.82, "end": 993.26, "text": " you get attention weights."}, {"start": 993.26, "end": 997.78, "text": " And also one more detail is that you sometimes wanna"}, {"start": 997.78, "end": 1001.7, "text": " drop certain attention weights, which means"}, {"start": 1001.7, "end": 1005.44, "text": " certain attention where we get pulled to zero."}, {"start": 1005.44, "end": 1009.2, "text": " And that's one more nitty gritty detail of implementation."}, {"start": 1009.2, "end": 1013.62, "text": " And then you do once more matrix multiplication"}, {"start": 1013.62, "end": 1015.86, "text": " between attention weights and between those values."}, {"start": 1015.86, "end": 1017.9000000000001, "text": " And here is where you get those,"}, {"start": 1019.5400000000001, "end": 1024.02, "text": " so here is where you get those, let me just open this up,"}, {"start": 1024.02, "end": 1027.5800000000002, "text": " these vectors, and now you wanna concatenate them"}, {"start": 1027.5800000000002, "end": 1030.98, "text": " and do one more pass through a neural network"}, {"start": 1030.98, "end": 1032.3, "text": " to get the final result."}, {"start": 1032.3, "end": 1034.44, "text": " And that's exactly what I do here."}, {"start": 1034.44, "end": 1037.06, "text": " So intermediate representations,"}, {"start": 1037.06, "end": 1040.1399999999999, "text": " and then we change the view of those,"}, {"start": 1040.1399999999999, "end": 1042.62, "text": " and finally we do another projection"}, {"start": 1042.62, "end": 1044.74, "text": " and you get the final results."}, {"start": 1044.74, "end": 1049.74, "text": " So I hope this wasn't too hard to follow along."}, {"start": 1050.94, "end": 1053.5, "text": " If it was, please let me know in the comments."}, {"start": 1053.5, "end": 1056.32, "text": " And I'll be open sourcing this code in a week or two,"}, {"start": 1056.32, "end": 1058.62, "text": " so you'll be able to go through the code"}, {"start": 1058.62, "end": 
1060.8999999999999, "text": " at your own pace and read through the blogs"}, {"start": 1060.8999999999999, "end": 1062.0, "text": " at your own pace."}, {"start": 1062.0, "end": 1065.86, "text": " So hopefully all of this together will help you"}, {"start": 1065.86, "end": 1070.26, "text": " grasp how all of this works and fits together."}, {"start": 1070.26, "end": 1072.62, "text": " Finally, I just wanna give you a feeling"}, {"start": 1072.62, "end": 1075.4599999999998, "text": " of how the actual machine translation training"}, {"start": 1075.4599999999998, "end": 1077.54, "text": " for the original transformer looked like."}, {"start": 1077.54, "end": 1079.54, "text": " So you basically have two corpuses of text."}, {"start": 1079.54, "end": 1082.54, "text": " You have, say, German and you have English."}, {"start": 1082.54, "end": 1085.02, "text": " And those corpuses are synced,"}, {"start": 1085.02, "end": 1087.34, "text": " meaning you have one sentence in German"}, {"start": 1087.34, "end": 1088.9799999999998, "text": " and there is a corresponding sentence in English."}, {"start": 1088.9799999999998, "end": 1089.9199999999998, "text": " So for example,"}, {"start": 1090.78, "end": 1093.62, "text": " Wissen Sie eines der gro\u00dfen Vergn\u00fcgen beim Reisen"}, {"start": 1093.62, "end": 1096.3, "text": " und eine der Freuden bei der ethnographischen Forschung"}, {"start": 1096.3, "end": 1098.6599999999999, "text": " ist, gemeinsam mit den Menschen zu leben,"}, {"start": 1098.6599999999999, "end": 1101.82, "text": " die sich noch an die alten Tagen erinnern k\u00f6nnen."}, {"start": 1101.82, "end": 1103.54, "text": " That was a bit of German for you."}, {"start": 1103.54, "end": 1105.58, "text": " And you basically have a corresponding sentence"}, {"start": 1105.58, "end": 1109.1799999999998, "text": " like here on the other side of the document,"}, {"start": 1109.1799999999998, "end": 1110.1399999999999, "text": " if I open it up,"}, {"start": 1110.1399999999999, "end": 1112.1399999999999, "text": " you know one of the intense pleasures of travel,"}, {"start": 1112.1399999999999, "end": 1113.1, "text": " blah, blah, blah."}, {"start": 1113.1, "end": 1115.26, "text": " So having those two synced,"}, {"start": 1115.26, "end": 1119.5, "text": " so basically if you're translating from English to German,"}, {"start": 1119.5, "end": 1121.1, "text": " the German sentence is something called"}, {"start": 1121.1, "end": 1122.62, "text": " like the gold translation."}, {"start": 1122.62, "end": 1124.9399999999998, "text": " So somebody like a professional translator"}, {"start": 1124.9399999999998, "end": 1126.54, "text": " translated the sentence."}, {"start": 1126.54, "end": 1128.86, "text": " Okay, so once you have those two sentences"}, {"start": 1128.86, "end": 1130.1799999999998, "text": " that correspond to each other,"}, {"start": 1130.1799999999998, "end": 1131.5, "text": " what you do is the following."}, {"start": 1131.5, "end": 1134.58, "text": " So you have the transformer."}, {"start": 1134.58, "end": 1136.3799999999999, "text": " You take the English sentence,"}, {"start": 1136.3799999999999, "end": 1138.1399999999999, "text": " you tokenize it, you embed it."}, {"start": 1138.1399999999999, "end": 1142.02, "text": " You do a forward pass through the encoder module here"}, {"start": 1142.02, "end": 1143.4199999999998, "text": " and you get representations."}, {"start": 1143.4199999999998, "end": 1145.34, "text": " On the other side, you take a German sentence,"}, {"start": 
1145.34, "end": 1147.34, "text": " you again tokenize it."}, {"start": 1147.34, "end": 1149.62, "text": " But now what you do is you set,"}, {"start": 1149.62, "end": 1154.62, "text": " you put as a prefix, you put this start of sentence token."}, {"start": 1155.58, "end": 1156.7399999999998, "text": " So that's here."}, {"start": 1156.7399999999998, "end": 1160.02, "text": " And then you do forward pass here."}, {"start": 1160.02, "end": 1164.6999999999998, "text": " And finally, the gold translation,"}, {"start": 1164.6999999999998, "end": 1168.82, "text": " just you convert that into soft labeled distributions"}, {"start": 1168.82, "end": 1170.6999999999998, "text": " and you just do KL divergence."}, {"start": 1170.6999999999998, "end": 1173.26, "text": " And then doing back prop will train"}, {"start": 1173.26, "end": 1175.34, "text": " and tune the weights of the whole transformer model"}, {"start": 1175.34, "end": 1178.3, "text": " and that's a single training step."}, {"start": 1178.3, "end": 1181.98, "text": " The important part is that you wanna use those masks"}, {"start": 1181.98, "end": 1186.1399999999999, "text": " so that a certain token, like say a token that's here,"}, {"start": 1186.1399999999999, "end": 1190.1, "text": " cannot see tokens that are coming afterwards."}, {"start": 1191.3, "end": 1194.58, "text": " That being said, if you found this video useful,"}, {"start": 1194.58, "end": 1196.58, "text": " consider subscribing to this channel"}, {"start": 1196.58, "end": 1198.6599999999999, "text": " and hit the bell icon to get notified"}, {"start": 1198.6599999999999, "end": 1200.1, "text": " when I upload a new video."}, {"start": 1200.1, "end": 1203.4199999999998, "text": " And until next time, keep learning deep."}, {"start": 1203.42, "end": 1208.42, "text": " And I'll see you next time."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=bvBK-coXf9I
How to learn deep learning? (Transformers Example)
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I walk you through how I approach tackling a new deep learning topic and how my learning program looks like! You'll learn about: ✔️ My strategy for learning ANY new (deep learning) topic fast ✔️ Lots of learning/personal tips along the way ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ https://jalammar.github.io/illustrated-transformer/ ✅ Link to pdf of my OneNote page: https://1drv.ms/b/s!AiUr5QUl32i_0B54CcWPM7JcLDDN?e=p8QoEL Note: it's a snapshot from November 12th, 2020. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Quick recap of my 2020 deep learning journey 0:55 Tricks I learned doing my past projects 4:11 What I learned from researching NST 6:30 Deep Dream project 8:25 GANs project 10:00 Going forward - transformers! 10:36 Why transformers? 12:47 OneNote walk-through (attention mechanism) 15:30 OneNote (self-attention mechanism) 17:40 Zoom out - is there a life after GPT? 18:50 Word embeddings (word to vec, GloVe) 20:30 Tokenization - the underdog of NLP 22:00 Main BERT dependencies 25:30 Going forward 27:00 Always keep the bigger picture in your head ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #deeplearning #deeplearningtips #learningtips
What's up folks? So in this video I had this idea of sharing with you how I approach learning a new deep learning topic. So over the past seven months I've been learning about neural style transfer for maybe three, four months. That took a while because I really wanted to get a familiarity with PyTorch and yeah, basically PyTorch. Then I did some Deep Dream project and that took maybe a month and a half and my last big deep learning topic exploration was GANs and I did those for maybe two months. Open sourced a project there and I'm currently doing Transformers and I'll be walking you through my OneNote and how I accumulate my notes and how I approach organizing information and trying to get better at whatever topic I choose to study at the present. So as I said I basically started with neural style transfer earlier this year and what I did was I first started reading lots of research papers and I do understand that if you're a beginner you won't start with research papers but there are lots of blogs and some high-level resources which you could leverage to get a better understanding of the field and so I can show you here I basically yeah I just started with the original neural style transfer paper and that's this one by Gatys and so basically while reading the paper you you find some references and reading blogs you figure out there are some better methods now and then and that's how you start slowly accumulating good research papers to read and also like using simply using Google and figuring out how many citations a paper has that's also one of the probably good indicators of whether you should be reading it or not. So as you can see here so I've been reading like I read maybe 18 papers on neural style transfer and also I don't I don't read every single paper thoroughly some of those I just kind of skim and so I organized it like this so I ran through those whatever like silly name just a directory where I dump the papers which I just kind of have some overview but I am not thoroughly familiar with those and these are kind of some papers which are on my backlog and I don't think I'll be I'll have time to to go through these but yeah they are they are still here anyways so after running through the theory what I did was basically started implementing stuff from scratch and that's super important so theory must be complemented with code otherwise it will be it won't be the same you you won't have a deep understanding of how stuff works because devil is in the details like the first paper I started implementing was Gatys's original paper and I thought I knew how everything works but it turned out I didn't and so I struggled with some details like should you use the 0 to 255 range or the 0 to 1 range and there are so many of those small details which you don't think about when you're just reading theory so what I did was I implemented the original paper here then I I implemented Johnson's paper which didn't use the optimization method they just use a convolutional neural network so you basically input an image it passes through the convolutional neural net and out goes the stylized version of that same image and finally I did play with videos so I did implement like a naive approach of neural style transfer for videos naive in the sense that I don't have any temporal constraints included so it's kind of jittery when you go from frame to frame but I did play with some like models which actually during the training include the temporal loss and so the result is much more stable once
you start doing the inference let's go and take a look at a couple of the projects implemented so the first one was so the optimization method by Gatys and you'll look at the original neural style transfer algorithm and as you can see here I do think I'm a big believer that you should create a really good readme file otherwise people will get confused nobody wants to have to go deep inside the code you know to understand what's happening you should present what your what your code is doing so that it's easy for the others to to know straight ahead whether they want to play with your code or not and you should put some basic instructions how to set up your code how to use your code and maybe how to debug and experiment and whatever additional things you have and also like putting acknowledgments is really nice because you always build up on somebody else's work or you use it as an inspiration to build your own work and many do not admit this but I think it's like everybody knows that everybody's using other people's code and either building off of it or using it as an inspiration so that was the that was the first project I did and then the second one as I mentioned was Johnson's same similar thing this one is much faster it can run almost real-time but it has a bit worse quality than the Gatys project and finally this is the video project I was talking about so what I did here additionally was I included a small part that will actually segment the the the user that's in the frame so it's basically built for a specific setup and that's a person talking in front of the camera like I'm doing right now and it does a pretty decent job in that setup as you can see here so what you can do after you find the segmentation mask is you can apply different styles to either background or foreground and that's that's a useful useful nice thing to have and here is a small GIF I created using that project so you can go ahead and play with these so the main idea here is just to show you how how I personally work and maybe you'll find some inspiration hopefully you do if you do please leave a comment whether this video helped you or did not next thing I did as I mentioned was Deep Dream and now you may wonder why is this guy doing these nice stylistic projects I mean you don't learn anything pragmatic right so the thing is in Microsoft I'm doing lots of really practical stuff and so I have this luxury of just playing around with the different techniques different research areas in my free time so the thing is you learn a lot doing these projects so for example doing neural style transfer you you'll learn about the concept called perceptual loss for example that's used all around the place in other areas also for example the Johnson the paper I mentioned with the like the Johnson's paper the architecture he used for the generation of neural style transfer images is something that was was being really used and I've seen it reused in a bunch of other papers so you do get to extract some useful information from one field and transfer it to another so one additionally so there's this thing called adaptive instance normalization and it was invented in the context of the neural style transfer field but I recently saw it while researching GANs and the famous StyleGAN architecture is actually using adaptive instance normalization so that's that's the cross-pollination I was talking about so basically you do learn a lot by doing some maybe unrelated field but it is related to whatever you'll be doing in
the future if you if you decide to keep doing deep learning anyways this is how the Deep Dream algorithm looks like I think it's got really decent images just super exciting to see that the neural network itself has this kind of power inside and that you can achieve this by just using simple optimization methods and CNNs and I did create some GIFs here and that's all awesome jumping into GANs quickly so that was my final deep learning topic I researched and again read a bunch of papers that's usually how I start exploring the field actually I'm lying I actually usually start with high-level resources but I'll get to details a bit later with transformers so after reading papers I just developed from scratch like the vanilla the original generative adversarial network then I did some conditional GANs where you can use you can condition which class you want to generate so in particular I did MNIST and I used MNIST data here so what I achieved was that I can basically choose which class I want to generate so here you can see I can condition the generator to create only zeros or only ones etc and finally I developed this DCGAN paper which was super popular and it's super famous because it basically created this Cambrian explosion of new research papers that were using DCGAN as a building block and it really accelerated the field as such so like doing doing this I learned a lot and that's something I recommend to everybody that's that's a really good way to learn start with theory don't dwell too much on theory maybe two weeks like tops and then start start coding it up from scratch I think I mentioned this I still didn't so I'm currently doing transformers I'm in this phase of exploring the theory that's behind transformers so I still don't have as you can see here I don't have the project implemented yet but I'll start doing it like this Monday I think so hopefully you folks will be able to play with the code once I finish it up in maybe two weeks time so let me let me walk you through my OneNote and show you how I approach learning a new deep learning field I don't start with research papers usually unless I'm really familiar with the field I do start with some high-level resources and let's just check out how it looks like so if I open my OneNote you can see a bunch of like resources I've accumulated over the last two weeks actually because I've been doing this pretty much every single day and yeah let me start slowly digesting this for you so first a little bit of context how I started and why I started doing transformers so first of all you all heard of GPT-3 and the super famous language model by OpenAI and the cool stuff it's been creating over the last couple of months and lots of hype around there so that wasn't actually the reason I started doing transformers even though they were on my backlog I would I would have started learning transformers at one point of time but it just kind of happened sooner because what happened is this visual transformer and it was published I think around 3rd of October on this machine learning conference called ICLR I think that's how it's pronounced and basically what it did it showed that transformers are more powerful than CNNs in the field of computer vision specifically in the big data regime and it kind of blew my mind and I was like I have to start doing this sooner so there was a short background story why I started doing transformers and what I first did was I just opened the paper visual transformer and I wasn't an NLP guy I
was a computer vision guy so I've been spending most of my time over the last couple of years doing computer vision and I started reading it and I basically immediately saw it has a really strong dependency on the original transformer paper so that's what I did I took a step back I opened the transformer paper and I've read it from top to bottom and basically after finishing the paper then I read the visual transformer and I still didn't have a clue of what's happening many details I mean I did have a rough understanding of what's happening but I didn't have many details so I decided I have to take a step even one step back and start learning about the attention mechanism and some other dependencies that those papers had like like word embeddings and concepts like like that one let's finally start walking through the the OneNote so as you can see here that this section is attention as I mentioned I figured out I'll need to understand the attention mechanism thoroughly so what I'd start what I started doing was first just searching on the internet how the attention mechanism works and usually you can find a couple of really good Medium or Towards Data Science I think it's called like blogs and those are super useful they they they usually do a really good visual explanation and they will help you start start building that knowledge from top to bottom like strengthening that knowledge pyramid you have and so after after reading a couple of a couple of blogs I found this this amazing super amazing blog by this guy Jay Alammar and it helped me a lot understand how the thing works and that's that's something really interesting so even the best researchers in the field are are sometimes less useful for you as somebody who's just starting to learn something they are less useful than just some guy that did a really nice visualization on the internet and so don't get stuck with the like celebrities and try and find a good resource for your particular level so not everything even though something is world-class content that doesn't mean it will be the best thing for you at the moment at your current skill level so anyways this this blog does a really awesome job at explaining how the attention mechanism works and it helped me a lot understand how the thing works on the high level and then the next step was to read the papers so I read the original paper by Yoshua Bengio and his collaborators and it's called neural machine translation by jointly learning to align and translate so it's a it's a mouthful my my god so after that I found I somehow found I don't remember every single detail but I found this paper effective approaches to attention-based neural machine translation and that paper introduced the concept of a local attention mechanism and a couple of other things but basically and also it introduced new ideas around the scoring functions how you can use some different scoring functions so not only the concat which was used in the original paper but also the dot product one the general attention the general scoring function etc so those are just some maybe boring details which I don't want to get into right now but I'm just trying to kind of show you how my how my process works after I felt I understood attention really thoroughly I knew I had to understand the self-attention concept which was the concrete thing being used in transformers so again I started with blogs high level resources first then going into depth so again found some
really nice blogs some not so useful ones and but again Jay Alammar had an awesome awesome blog which helped me understand how the self self-attention mechanism works at least on the high level but he actually also goes into depth so it's a I mean I think the the series of blogs he created around transformers and NLP are probably the best resources I've seen in my life so you do have to check out this guy he's really awesome kudos to him and I also watched a couple of videos to just to get like because it was my first time I'm doing this as I said I'm not so much into NLP I am now a lot more but I wasn't two weeks ago so I did some I watched some videos like Computerphile and a couple of other channels Code Emporium has a really nice visualization kudos to him also and that helped me get a nice understanding of the self-attention mechanism and after after doing the the high level stuff again I just read the transformer paper again and now it was a lot more it was much more clear for me how stuff works I understood the details also and not just the high level picture so that was really nice yeah again I searched for some more resources and found some blogs that have a really nice recap of various attention mechanisms so that's that's the way I work so I start with high level go into depth and along the way I do find a bunch of useful resources and I just try and organize all of those into into my OneNote now after reading all of the this and having a thorough understanding of the transformer I wanted to you know I wanted to figure out is there something else aside from the BERT and GPT family of transformers so you probably heard of those two they're super famous and I just wanted to get an understanding of what's out there what's the current state of the art so what I did I'll just skip over this so what I did is I started searching for some overview blogs of the NLP and the currently best models so that's what I did next and I figured out you basically only have BERT or BERT derivatives like ALBERT RoBERTa and like but a bunch of other models and you have the GPT family GPT 1 2 & 3 by OpenAI which are crushing it basically at this moment so that's how I organized my my next things I'm gonna learn so I started learning about BERT I started learning about GPTs then again reading and learning about BERT I figured out I don't have some prerequisite knowledge and I mean that's how learning basically works it's it's super nonlinear I'm trying to to just lead you through my like process but hopefully it will be useful for you also and I figured out I don't know anything about word embeddings and I mean I did have some high-level knowledge like the nice semantic properties where you can basically do some arithmetic on the embeddings so it's a similar thing that you can find in the GAN literature with latent vectors there so doing some arithmetic in the latent space in the like in GANs you can create some you can get some interesting results in the image space and again I understood I have to understand these word embeddings a bit better so I started investigating what word2vec is what GloVe I think it's pronounced like glovey I don't know is and again found some again Jay Alammar had an awesome blog so it helped me a lot I read like the original research papers from 2013 and I also read the original neural language model by Bengio that was written in 2003 which was the I think the first model where they employed a neural network to to solve the language modeling task previously they used n-grams and some
other techniques from statistical machine learning to to kind of crack this problem of language modeling where basically the task is if you don't know is to predict what the next word is given the previous sentence or previous couple of words so that's that's basically one of the dependencies I had word embeddings and then again this there is this thing called tokenization because you usually don't do anymore you don't do an embedding for every single word you first tokenize the whole sentence and there is a whole science around tokenization and the thing is BERT used WordPiece so that's why I had this dependency and I wanted to learn a bit more about tokenization and it turns out it's a complete like a separate research field and it's super under underestimated but it's used everywhere and you basically have character level embeddings you have subword embeddings and you have word embeddings and most of the current state-of-the-art models like the GPT family of models and BERT they use subword embeddings such as like the BPE algorithm that's byte pair encoding I think and also you have things like WordPiece things like SentencePiece and I'm just throwing random names at you right now but you get the point like tokenization is really important and it's being used everywhere so those were most of the dependencies I had on BERT and again I searched for some blogs I found some official Google blogs I also found Jay Alammar's blog on BERT again super useful as I said Code Emporium had a nice video and finally reading the the Jay Alammar blog I saw a couple of more things I have to understand before I delve into BERT and those were specifically contextualized word embeddings so the thing with word embeddings is they're super shallow representations so that's something if you're from the computer vision field that's something like having only the the edge feature detector instead of the full full rich representation which the CNNs provide you with which start with maybe some simpler shapes and edges and stuff and they get to more abstract and more semantically meaningful representations at the deeper layers of the network so the problem with word embeddings is the following they encode the same exact embedding vector for a word regardless of the context of the surrounding words and that's just not how things work in human language so the thing is you say for example if you have the word queen it depends on the context whether that queen is maybe a queen bee or like a queen like royalty and the context is basically determined by the surrounding words so you want to have a different embedding vector for the word queen depending on the context whereas the word2vec and GloVe methods they they basically always encoded the word with the same vector and that's something that we now know is not the best thing to do and we have so there were methods that followed word2vec and GloVe and that's this CoVe that is contextualized vectors and then you have ELMo embeddings from language models which finally started including the like the context of the surrounding words and that was super important for for what followed along and that's and that's BERT and the GPT models one crucial thing that happened aside from from from these contextualized word representations is this ULMFiT universal language model fine-tuning where they showed that you can do fine-tuning in NLP as well so previously we all knew computer vision had this image
net moment in 2012 where basically we figured out we can use pre-trained models on the ImageNet data set to to later just fine-tune and do many other downstream tasks like maybe image classification on some other data sets where you don't have as much data as you have in ImageNet so that's the so we finally have the same ability to do the same thing in NLP and that's to do fine-tuning and just pre-train your language model on like a huge corpus of text and then just fine-tune it for different downstream NLP tasks and ULMFiT was one of the papers that kind of promoted that idea and started this trend in NLP finally I read the the BERT paper and recently a couple of days ago I started reading about GPT again Jay Alammar has some really nice blogs and and there are some nice blogs official blogs from OpenAI as well and papers or something that's when you when you finish up with those you can you can go and study some some papers as always so after I finished the two important topics and that's BERT and GPT my idea is like starting this Monday to to implement the transformer the original transformer paper from scratch because as it turned out basically both BERT and GPT basically took one part of the original transformer model so BERT took the encoder part GPTs took the decoder part of the original transformer and they just kind of increased it and trained on some bigger data and yeah that's that's pretty much it so I'm gonna implement it and then I have this section later work where like reading through those blogs blogs and overview blogs and bunch of different resources I already showed you I kind of accumulated some things I know it'd be useful to to read them to go through them and one of those is like ALBERT like some BERT derivative and then that there is this XLNet and Google T5 and a bunch of models so anyways that's how I work I do some theory then I do some coding I take some time to let that theory kind of sink in and let it go through my fingers by coding the things from scratch and then again I shift my attention to theory I do some more theory and maybe I'll do another project after I finish with the second second iteration of this theory exploration whatever let me let me finish up with this so I always kind of try and have a bigger picture of how this all fits into the next things I'm going to do and the next area I wanna I wanna research a bit more is graph neural networks GNNs for short and so I just have some check boxes some things I have to to understand before I proceed and start digging into that new topic so I basically already learned all of these yeah let me check I still don't have a clear picture of what the BLEU metric is I'm gonna read read about it a bit later so anyways hopefully you found this video useful it's the first time I created a video like this it's a bit long format and it's not as curated so hopefully you did learn something useful here if you did go ahead and subscribe to my channel hit that bell icon so that you get notified when I upload a new video and let me know if you have any questions in the comments I'd like to answer all of those so anyways have a nice day keep learning
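The "arithmetic on embeddings" property mentioned in the transcription above is easy to demonstrate with cosine similarity, which is also the intuition behind the query-key scoring discussed in the previous video. The vectors below are made-up toy values for illustration only; real word2vec or GloVe embeddings are learned from data and have hundreds of dimensions.

```python
import numpy as np

# Hypothetical 4-dim embeddings, invented purely for illustration.
emb = {
    "king":  np.array([0.8, 0.6, 0.1, 0.9]),
    "man":   np.array([0.7, 0.1, 0.0, 0.8]),
    "woman": np.array([0.7, 0.1, 0.9, 0.8]),
    "queen": np.array([0.8, 0.6, 0.9, 0.9]),
}

def cosine(a, b):
    # Small angle between vectors -> value close to 1; orthogonal -> 0; opposite -> negative.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = emb["king"] - emb["man"] + emb["woman"]          # the classic analogy
best = max(emb, key=lambda w: cosine(query, emb[w]))
print(best)  # -> 'queen' for these toy vectors
```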
[{"start": 0.0, "end": 5.28, "text": " What's up folks? So in this video I had this idea of sharing with you how I"}, {"start": 5.28, "end": 10.44, "text": " approach learning a new deep learning topic. So over the past seven months I've been"}, {"start": 10.44, "end": 15.040000000000001, "text": " learning about neural style transfer for maybe three, four months. That took a"}, {"start": 15.040000000000001, "end": 21.6, "text": " while because I really wanted to get a familiarity with PyTorch and yeah,"}, {"start": 21.6, "end": 27.12, "text": " basically PyTorch. Then I did some Deep Dream project and that took maybe a"}, {"start": 27.12, "end": 33.88, "text": " month and a half and my last big deep learning topic exploration was"}, {"start": 33.88, "end": 39.36, "text": " Verganz and I did those for maybe two months. Open sourced a project there and"}, {"start": 39.36, "end": 45.28, "text": " I'm currently doing Transformers and I'll be walking you through my OneNote and"}, {"start": 45.28, "end": 49.28, "text": " how I accumulate my notes and how I approach organizing information and"}, {"start": 49.28, "end": 55.24, "text": " trying to get better at whatever topic I choose to study at the present."}, {"start": 55.24, "end": 60.400000000000006, "text": " So as I said I basically started with neural style transfer earlier this year"}, {"start": 60.400000000000006, "end": 66.28, "text": " and what I did was I first started reading lots of research papers and I do"}, {"start": 66.28, "end": 69.6, "text": " understand that if you're a beginner you won't start with research papers but"}, {"start": 69.6, "end": 73.3, "text": " there are lots of blogs and some high-level resources which you could"}, {"start": 73.3, "end": 78.16, "text": " leverage to get a better understanding of the field and so I can show you here"}, {"start": 78.16, "end": 87.12, "text": " I basically yeah I just started with the original neural style transfer paper"}, {"start": 87.12, "end": 93.96, "text": " and that's this one by Gatiss and so basically while reading the paper you"}, {"start": 93.96, "end": 99.28, "text": " you find some references and reading blogs you figure out there are some"}, {"start": 99.28, "end": 104.16, "text": " better methods now and then and that's how you start slowly accumulating good"}, {"start": 104.16, "end": 109.64, "text": " research papers to read and also like using simply using Google and figuring"}, {"start": 109.64, "end": 115.52, "text": " out how many citations a paper has that's also one of the probably good"}, {"start": 115.52, "end": 120.08, "text": " indicators of whether you should be reading it or not. 
So as you can see here"}, {"start": 120.08, "end": 124.75999999999999, "text": " so I've been reading like I read maybe 18 papers on neural style transfer and"}, {"start": 124.75999999999999, "end": 130.56, "text": " also I don't I don't read every single paper thoroughly some of those I just"}, {"start": 130.56, "end": 134.08, "text": " kind of glimpse and so I organized it like this so I ran through those"}, {"start": 134.08, "end": 138.72, "text": " whatever like silly name just a directory where I dump the papers which"}, {"start": 138.72, "end": 145.04, "text": " I just kind of have some overview but I am not thoroughly familiar with those"}, {"start": 145.04, "end": 152.28, "text": " and these are kind of some papers which are on my backlog and I don't think I'll"}, {"start": 152.28, "end": 158.48000000000002, "text": " be I'll have time to to go through these but yeah they are they are still"}, {"start": 158.48, "end": 164.51999999999998, "text": " here anyways so after running through the theory what I did was basically"}, {"start": 164.51999999999998, "end": 171.32, "text": " started implementing stuff from scratch and that's super important so theory"}, {"start": 171.32, "end": 176.39999999999998, "text": " must be complemented with code otherwise it will be it won't be the same you you"}, {"start": 176.39999999999998, "end": 180.01999999999998, "text": " won't have a deep understanding of how stuff works because devil is in the"}, {"start": 180.01999999999998, "end": 184.1, "text": " details like the first paper I started implementing was Gadeus's original"}, {"start": 184.1, "end": 188.28, "text": " paper and I thought I knew everything works but I didn't turned out I didn't"}, {"start": 188.28, "end": 194.76, "text": " and so I struggle with some details like should you use 0 to 25 to 55 range or 0"}, {"start": 194.76, "end": 200.4, "text": " to 1 range and there's so many of those small details which you don't think"}, {"start": 200.4, "end": 205.6, "text": " about when you're just reading theory so what I did was I implemented the"}, {"start": 205.6, "end": 211.52, "text": " original paper here then I I implemented some Johnson's paper which didn't use"}, {"start": 211.52, "end": 215.88, "text": " the optimization method they just use convolutional neural network to you"}, {"start": 215.88, "end": 219.35999999999999, "text": " basically input an image it passes through the convolutional neural net and"}, {"start": 219.35999999999999, "end": 226.12, "text": " out goes the stylized version of that same image and finally I did play with"}, {"start": 226.12, "end": 231.84, "text": " videos so I did implement like a naive approach of neural style transfer for"}, {"start": 231.84, "end": 236.56, "text": " videos naive in the sense that I don't have any temporal constraints included"}, {"start": 236.56, "end": 241.28, "text": " so it's kind of jittery when you go from frame to frame but I did play with some"}, {"start": 241.28, "end": 246.76, "text": " like models which actually during the training include the temporal loss and"}, {"start": 246.76, "end": 251.72, "text": " so the result is much more stable once you start doing the inference let's go"}, {"start": 251.72, "end": 254.88, "text": " and take a look at a couple of the projects implemented so the first one"}, {"start": 254.88, "end": 259.88, "text": " was so the optimization method by Gadeus and you'll look the original neural"}, {"start": 259.88, "end": 264.48, "text": " style transfer algorithm and as you can see here I 
do think I'm a big believer"}, {"start": 264.48, "end": 268.9, "text": " that you should create a really good readme file otherwise people will get"}, {"start": 268.9, "end": 273.47999999999996, "text": " confused nobody wants to have to go deep inside the code you know to understand"}, {"start": 273.47999999999996, "end": 278.32, "text": " what's happening you should present what your what your code is doing so that"}, {"start": 278.32, "end": 282.28, "text": " it's easy for the others to to know straight ahead whether they want to play"}, {"start": 282.28, "end": 285.76, "text": " with your code or not and you should put some basic instructions how to set up"}, {"start": 285.76, "end": 292.03999999999996, "text": " your code how to use your code and maybe how to debug and experiment and whatever"}, {"start": 292.03999999999996, "end": 296.88, "text": " additional things you have and also like putting acknowledgments is really nice"}, {"start": 296.88, "end": 300.96, "text": " because you always build up on somebody else's work or you use it as an"}, {"start": 300.96, "end": 306.56, "text": " inspiration to build your own work and many do not admit this but I think it's"}, {"start": 306.56, "end": 311.44, "text": " like everybody knows that everybody's using other people's code and either"}, {"start": 311.44, "end": 316.38, "text": " building off of it or using it as an inspiration so that was the that was the"}, {"start": 316.38, "end": 320.64, "text": " first project I did and then the second one as I mentioned was Johnson's same"}, {"start": 320.64, "end": 326.76, "text": " similar thing it's this one is much faster it can run almost real-time but"}, {"start": 326.76, "end": 335.59999999999997, "text": " it's has a bit worse quality than than the Gattas project and finally this is"}, {"start": 335.59999999999997, "end": 340.15999999999997, "text": " the video project I was talking about so what I did here additionally was I"}, {"start": 340.15999999999997, "end": 345.64, "text": " included a small part that will actually segment the the the user that's in the"}, {"start": 345.64, "end": 350.88, "text": " frame so so so it's basically built for a specific setup and that's a person"}, {"start": 350.88, "end": 354.56, "text": " talking in front of the camera like I'm doing right now and it does a pretty"}, {"start": 354.56, "end": 359.68, "text": " decent job in that setup as you can see here so what you can do after you find"}, {"start": 359.68, "end": 363.15999999999997, "text": " the segmentation mask is you can apply different styles to either background or"}, {"start": 363.15999999999997, "end": 370.24, "text": " foreground and that's that's a useful useful nice thing to have and here is a"}, {"start": 370.24, "end": 376.76, "text": " small GIF I created using that project so you can go ahead and play with these"}, {"start": 376.76, "end": 381.56, "text": " so the main idea here is just to show you how how I personally work and maybe"}, {"start": 381.56, "end": 385.04, "text": " you'll find some inspiration hopefully do if you do please leave a comment"}, {"start": 385.04, "end": 390.68, "text": " whether this video helped you or did not next thing I did as I mentioned was"}, {"start": 390.68, "end": 395.96000000000004, "text": " deep dream and now you may wonder why is this guy doing this nice stylistic"}, {"start": 395.96, "end": 400.32, "text": " projects I mean you don't learn anything pragmatic right so the thing is in"}, {"start": 400.32, "end": 404.79999999999995, "text": " 
Microsoft I'm doing lots of really practical stuff and so I have this"}, {"start": 404.79999999999995, "end": 409.0, "text": " luxury of just playing around with the different techniques different research"}, {"start": 409.0, "end": 414.59999999999997, "text": " areas in my free time so the thing is you learn a lot doing these projects so"}, {"start": 414.59999999999997, "end": 418.64, "text": " for example doing neural style transfer you you'll learn about the concept called"}, {"start": 418.64, "end": 423.79999999999995, "text": " perceptual loss for example that's used all around the place in other areas also"}, {"start": 423.8, "end": 427.48, "text": " for example the Johnson the paper I mentioned with the like the Johnson's"}, {"start": 427.48, "end": 432.96000000000004, "text": " paper he the architecture he used for the generation of neural style transfer"}, {"start": 432.96000000000004, "end": 438.40000000000003, "text": " images is something that was was being really used in and I've seen it reused in"}, {"start": 438.40000000000003, "end": 445.16, "text": " bunch of other papers so you do get to extract some useful information from one"}, {"start": 445.16, "end": 450.64, "text": " field and transfer it to another so one additionally so there's this thing"}, {"start": 450.64, "end": 454.96, "text": " called adaptive instant normalization and it was invented in the context of"}, {"start": 454.96, "end": 459.56, "text": " the neural style transfer field but I recently saw it while researching GANs"}, {"start": 459.56, "end": 464.08, "text": " and the famous style again architecture is actually using adaptive instance"}, {"start": 464.08, "end": 468.71999999999997, "text": " normalization so that's that's the cross-semination I was talking about so"}, {"start": 468.71999999999997, "end": 474.56, "text": " basically do learn a lot by doing some maybe unrelated field but it is"}, {"start": 474.56, "end": 479.88, "text": " related to whatever you'll be doing in the future if you if you decide to keep"}, {"start": 479.88, "end": 486.0, "text": " doing deep learning anyways this is how deep dream algorithm looks like I think"}, {"start": 486.0, "end": 490.84, "text": " it's got really decent images just super exciting to see that the neural"}, {"start": 490.84, "end": 496.38, "text": " network itself has this kind of power inside and that you can achieve this by"}, {"start": 496.38, "end": 501.96, "text": " just using simple optimization methods and CNNs and I did create some gifts"}, {"start": 501.96, "end": 509.76, "text": " here and that's all awesome jumping into GANs quickly so that was my final deep"}, {"start": 509.76, "end": 516.0, "text": " learning topic I researched and again read bunch of papers that's usually how"}, {"start": 516.0, "end": 520.4399999999999, "text": " I start exploring the field actually I'm lying I actually start usually start"}, {"start": 520.4399999999999, "end": 523.88, "text": " with high-level resources but I'll get to details a bit later with transformers"}, {"start": 523.88, "end": 530.2, "text": " so after reading papers I just developed from scratch like the vanilla the"}, {"start": 530.2, "end": 534.28, "text": " original generative adversarial network then I did some conditional GANs where"}, {"start": 534.28, "end": 538.88, "text": " you can use you can condition which class you want to generate so in"}, {"start": 538.88, "end": 545.12, "text": " particular I did amnest and I used amnest data here so what I achieved was"}, {"start": 545.12, 
"end": 549.64, "text": " that I can basically choose which class I want to generate so here you can see I"}, {"start": 549.64, "end": 556.84, "text": " can condition the generator to create only zeros or only ones etc and finally"}, {"start": 556.84, "end": 563.72, "text": " I developed this DC GAN paper which was super popular and it's super famous"}, {"start": 563.72, "end": 570.48, "text": " because it basically created this Cambrian explosion of new research"}, {"start": 570.48, "end": 577.48, "text": " papers that were using DC GAN as a building block and it really accelerated"}, {"start": 577.48, "end": 583.88, "text": " the field as such so like doing doing this I learned a lot and that's"}, {"start": 583.88, "end": 587.9200000000001, "text": " something I recommend to everybody that's that's a really good way to learn"}, {"start": 587.9200000000001, "end": 593.6, "text": " start with theory don't dwell too much on theory maybe two weeks like tops and"}, {"start": 593.6, "end": 599.0400000000001, "text": " then start start coding it up from scratch I think I mentioned this I still"}, {"start": 599.0400000000001, "end": 602.36, "text": " didn't so I'm currently doing transformers I'm in this phase of"}, {"start": 602.36, "end": 607.26, "text": " exploring the theory that's behind transformers so I still don't have as"}, {"start": 607.26, "end": 611.84, "text": " you can see here I don't have the project implemented yet but I'll start"}, {"start": 611.84, "end": 616.44, "text": " doing it like this Monday I think so hopefully you folks will be able to play"}, {"start": 616.44, "end": 621.12, "text": " with the code once I finish it up in maybe two weeks time so let me let me"}, {"start": 621.12, "end": 626.24, "text": " walk you through my one note and show you how I approach learning a new"}, {"start": 626.24, "end": 630.68, "text": " deep learning fields I don't start with research papers usually unless I'm"}, {"start": 630.68, "end": 634.12, "text": " really familiar with the field I do start with some high-level resources and"}, {"start": 634.12, "end": 639.84, "text": " let's just check out how it looks like so if I open my one note you can see a"}, {"start": 639.84, "end": 646.0, "text": " bunch of like resources I've accumulated over the last two weeks actually because"}, {"start": 646.0, "end": 651.24, "text": " I've been doing this pretty much every single day and yeah let me start"}, {"start": 651.24, "end": 656.24, "text": " slowly digesting this for you so first a little bit of context how I started and"}, {"start": 656.24, "end": 660.24, "text": " why I started doing transformers so first of all you all heard of GPT-3 and"}, {"start": 660.24, "end": 665.08, "text": " the super famous language model by OpenAI and the cool stuff it's been"}, {"start": 665.08, "end": 669.4, "text": " creating over the last couple of months and lots of hype around there so that"}, {"start": 669.4, "end": 672.6, "text": " wasn't actually the reason I started doing transformers even though they were"}, {"start": 672.6, "end": 676.8000000000001, "text": " on my backlog I would I would have started learning transformers at one"}, {"start": 676.8000000000001, "end": 680.64, "text": " point of time but it's just kind of happened sooner because what happened is"}, {"start": 680.64, "end": 685.6, "text": " this visual transformer and it was published I think around 3rd of October"}, {"start": 685.6, "end": 689.96, "text": " on this machine learning conference called iClear I think that's how it's"}, 
{"start": 689.96, "end": 695.8000000000001, "text": " pronounced and basically what it did it showed that transformers are more"}, {"start": 695.8000000000001, "end": 701.24, "text": " powerful than CNNs in the field of computer vision in specifically"}, {"start": 701.24, "end": 706.28, "text": " in the big data regime and it kind of blew my mind off and I was like I have"}, {"start": 706.28, "end": 711.08, "text": " to start doing this sooner so there was a short background story why I started"}, {"start": 711.08, "end": 716.44, "text": " doing transformers and what I first did was I just opened the paper visual"}, {"start": 716.44, "end": 720.88, "text": " transformer and I wasn't an NLP guy I was a computer vision guy so I've been"}, {"start": 720.88, "end": 724.44, "text": " spending most of my time over the last couple of years doing computer vision and"}, {"start": 724.44, "end": 729.64, "text": " I started reading it and I basically immediately saw it has a really strong"}, {"start": 729.64, "end": 733.68, "text": " dependency on the original transformer paper so that's what I did I took a step"}, {"start": 733.68, "end": 737.56, "text": " back I opened the transformer paper and I've read it from top to bottom and"}, {"start": 737.56, "end": 742.68, "text": " basically after finishing the paper then I read the visual transformer and I"}, {"start": 742.68, "end": 746.68, "text": " still didn't have a clue of what's happening many details I mean I did have"}, {"start": 746.68, "end": 750.36, "text": " a rough understanding what's happening but I didn't have many details so I"}, {"start": 750.36, "end": 755.46, "text": " decided I have to take a step even one step back and start learning about the"}, {"start": 755.46, "end": 761.4000000000001, "text": " attention mechanism and some other dependencies that those papers had like"}, {"start": 761.4000000000001, "end": 767.2, "text": " like word embeddings and concepts like like that one let's finally start"}, {"start": 767.2, "end": 770.88, "text": " walking through the the one note so as you can see here that this section is"}, {"start": 770.88, "end": 775.12, "text": " attention as I mentioned I figure out I'll need to understand the tension"}, {"start": 775.12, "end": 779.2800000000001, "text": " mechanism thoroughly so what I'd start what I started doing was first just"}, {"start": 779.2800000000001, "end": 784.32, "text": " searching on the internet how attention mechanism works and usually you can find"}, {"start": 784.32, "end": 788.2800000000001, "text": " a couple of really good medium or data science towards data science I think"}, {"start": 788.2800000000001, "end": 793.5600000000001, "text": " it's called like blocks and those are super useful they they they usually do"}, {"start": 793.5600000000001, "end": 798.5600000000001, "text": " a really good visual explanation and they will help you start start building"}, {"start": 798.5600000000001, "end": 803.6400000000001, "text": " that knowledge from top to bottom like strengthening that knowledge pyramid you"}, {"start": 803.6400000000001, "end": 810.5600000000001, "text": " have and so after after reading a couple of papers of a couple of blogs I found"}, {"start": 810.56, "end": 818.8399999999999, "text": " this this amazing super amazing blog by this guy Jay Elomar and it helped me a"}, {"start": 818.8399999999999, "end": 823.7199999999999, "text": " lot understand how the thing works and that's that's something really"}, {"start": 823.7199999999999, "end": 828.68, "text": " 
interesting so even the best researchers in the field are are sometimes less"}, {"start": 828.68, "end": 833.88, "text": " useful for you as somebody who's just starting to learn something it's they"}, {"start": 833.88, "end": 837.76, "text": " are less useful than just some guy that did a really nice visualization on the"}, {"start": 837.76, "end": 842.4, "text": " internet and so don't get stuck with the like celebrities and try and find a good"}, {"start": 842.4, "end": 847.2, "text": " resource for your particular level so not everything even though something is"}, {"start": 847.2, "end": 850.6, "text": " world content like world-class content that doesn't mean it will be the best"}, {"start": 850.6, "end": 857.08, "text": " thing for you at the moment at your current skill level so anyways this this"}, {"start": 857.08, "end": 861.84, "text": " blog does a really awesome job at explaining how attention mechanism"}, {"start": 861.84, "end": 868.0, "text": " works and it helped me a lot understand how the thing works on the high level"}, {"start": 868.0, "end": 872.1, "text": " and then the next step was to read the papers so I read the original paper by"}, {"start": 872.1, "end": 877.84, "text": " Yoshua Bengio and his collaborator and it's called neural machine translation"}, {"start": 877.84, "end": 881.44, "text": " by jointly learning to align and translate so it's a it's a mouthful my"}, {"start": 881.44, "end": 889.2, "text": " my god so after that I found I somehow found I don't remember every single"}, {"start": 889.2, "end": 892.4000000000001, "text": " detail but I found this paper effective approaches to attention-based neural"}, {"start": 892.4000000000001, "end": 897.4000000000001, "text": " machine translation and that paper introduced the concept of a local"}, {"start": 897.4000000000001, "end": 903.84, "text": " attention mechanism and a couple of other things but basically and also it"}, {"start": 903.84, "end": 909.88, "text": " introduced new ideas around the scoring functions how you can use some different"}, {"start": 909.88, "end": 914.36, "text": " scoring functions so not only the concat which was used in the original paper but"}, {"start": 914.36, "end": 918.6800000000001, "text": " also the dot product one the general attention the general scoring function"}, {"start": 918.68, "end": 922.92, "text": " etc so those are just some maybe boring details which I don't want to get into"}, {"start": 922.92, "end": 928.12, "text": " right now but I'm just trying to kind of show you how my how my process works"}, {"start": 928.12, "end": 933.5999999999999, "text": " after I felt I understand attention really thoroughly I knew I had to"}, {"start": 933.5999999999999, "end": 937.0799999999999, "text": " understand the self attention concept which was being which was the concrete"}, {"start": 937.0799999999999, "end": 941.92, "text": " thing I was being used in transformers so again I started with blocks high"}, {"start": 941.92, "end": 946.76, "text": " level resources first then going into depth so again found some really nice"}, {"start": 946.76, "end": 954.12, "text": " blocks some not so useful ones and but again J LMR had awesome awesome block"}, {"start": 954.12, "end": 960.2, "text": " which helped me understand how the self self attention mechanism works at least"}, {"start": 960.2, "end": 964.84, "text": " on the high level but he actually also goes into depth so it's a I mean I think"}, {"start": 964.84, "end": 971.04, "text": " the the series of blocks he 
created around transformers and NLP are probably"}, {"start": 971.04, "end": 975.56, "text": " the best resources I've seen in my life so you do have to check out this guy he's"}, {"start": 975.56, "end": 982.5999999999999, "text": " really awesome kudos to him and I also watched a couple of videos to just to"}, {"start": 982.5999999999999, "end": 987.3199999999999, "text": " get like because it was my first time I'm doing this I'm as I said I'm not so"}, {"start": 987.3199999999999, "end": 994.16, "text": " much internal P I am now a lot more but I wasn't two weeks ago so I did some I"}, {"start": 994.16, "end": 998.8199999999999, "text": " watched some videos like computer file and a couple of other channels code"}, {"start": 998.8199999999999, "end": 1005.04, "text": " Emporium has a really nice visualization kudos to him also and that helped me get"}, {"start": 1005.04, "end": 1011.7199999999999, "text": " a nice understanding of the self attention mechanism and after after"}, {"start": 1011.7199999999999, "end": 1018.0799999999999, "text": " doing the the high level stuff again I just read the transformer paper again"}, {"start": 1018.0799999999999, "end": 1022.76, "text": " and now it was a lot more it was much more clear for me how stuff works I"}, {"start": 1022.76, "end": 1027.76, "text": " understood the details also and I just the high level picture so that was"}, {"start": 1027.76, "end": 1035.56, "text": " really nice yeah again I searched for some more resources and found some blocks"}, {"start": 1035.56, "end": 1040.24, "text": " that have a really nice recap of various attention mechanisms so that's that's"}, {"start": 1040.24, "end": 1046.4, "text": " the way I work so I start with high level go into depth and along the way I"}, {"start": 1046.4, "end": 1051.1, "text": " do find a bunch of useful resources and I just try and organize all of those"}, {"start": 1051.1, "end": 1056.92, "text": " into into my one note now after reading all of the this and having a thorough"}, {"start": 1056.92, "end": 1061.5600000000002, "text": " understanding of transformer I wanted to you know I wanted to figure out is there"}, {"start": 1061.5600000000002, "end": 1066.52, "text": " something else aside from BERT and GPT family of transformers so you probably"}, {"start": 1066.52, "end": 1071.2, "text": " heard of those two they're super famous and I just wanted to get an"}, {"start": 1071.2, "end": 1074.8000000000002, "text": " understanding what's of what's out there what's the current state of the art so"}, {"start": 1074.8000000000002, "end": 1081.72, "text": " what I did I'll just skip over this so what I did is I started searching for"}, {"start": 1081.72, "end": 1086.28, "text": " some overview blocks of the NLP and the currently best models so that's what I"}, {"start": 1086.28, "end": 1092.6399999999999, "text": " did next and I figure out basically only have BERT or BERT derivatives like"}, {"start": 1092.6399999999999, "end": 1097.52, "text": " Alberta Roberta and like but a bunch of other models and you have the GPT family"}, {"start": 1097.52, "end": 1102.24, "text": " GPT 1 2 & 3 by open AI which are crashing it basically at this moment so"}, {"start": 1102.24, "end": 1106.6, "text": " that's how I organize my my next things I'm gonna learn so I started learning"}, {"start": 1106.6, "end": 1112.28, "text": " about BERT I started learning about GPTs then again reading and learning about"}, {"start": 1112.28, "end": 1119.3999999999999, "text": " BERT I figure out I don't have 
some prerequisite knowledge and I mean that's"}, {"start": 1119.3999999999999, "end": 1124.3999999999999, "text": " how learning basically works you it's it's super nonlinear I'm trying to to"}, {"start": 1124.3999999999999, "end": 1129.76, "text": " just lead you through my like process but hopefully will be useful for you"}, {"start": 1129.76, "end": 1134.44, "text": " also and I figure out I don't know anything about word embeddings and I"}, {"start": 1134.44, "end": 1138.12, "text": " mean I did have some high-level knowledge like the nice semantic"}, {"start": 1138.12, "end": 1142.8, "text": " properties where you can basically do some arithmetic on the embeddings so"}, {"start": 1142.8, "end": 1146.76, "text": " it's a similar thing that you can find in the GAN literature with latent"}, {"start": 1146.76, "end": 1152.6799999999998, "text": " vectors there so doing some arithmetic in the latent space in the like in GANs"}, {"start": 1152.6799999999998, "end": 1158.28, "text": " you can create some you can get some interesting results in the image space"}, {"start": 1158.28, "end": 1166.1999999999998, "text": " and again I understood I have to understand these word embeddings a bit"}, {"start": 1166.2, "end": 1173.78, "text": " better so I started investigating what word to back is what glovey I think it's"}, {"start": 1173.78, "end": 1180.8, "text": " pronounced like glovey I don't know is and again found some again J LMR had"}, {"start": 1180.8, "end": 1186.68, "text": " awesome blog so it helped me a lot I read like the original research papers"}, {"start": 1186.68, "end": 1194.28, "text": " in 2013 and I also read the original neural language model by Ben Geo that"}, {"start": 1194.28, "end": 1199.8799999999999, "text": " was written in 2003 which was the I think the first model where they"}, {"start": 1199.8799999999999, "end": 1204.76, "text": " employed a neural network to to solve the language modeling task previously"}, {"start": 1204.76, "end": 1209.06, "text": " they used Engrams and some other techniques from statistical machine"}, {"start": 1209.06, "end": 1213.8, "text": " learning to to kind of crack this problem of language modeling where"}, {"start": 1213.8, "end": 1217.72, "text": " basically the task is if you don't know is to predict what the next word is"}, {"start": 1217.72, "end": 1226.3600000000001, "text": " given the previous sentence or previous couple of words so that's that's"}, {"start": 1226.3600000000001, "end": 1232.88, "text": " basically one of the dependencies I had bird embeddings and then again this"}, {"start": 1232.88, "end": 1237.92, "text": " there is this thing called the tokenization because you usually don't"}, {"start": 1237.92, "end": 1244.28, "text": " do anymore you don't do embedding for every single word you first tokenize the"}, {"start": 1244.28, "end": 1248.72, "text": " whole sentence and there is a whole science around tokenization and the"}, {"start": 1248.72, "end": 1253.72, "text": " thing is Bert used word piece so that's why I had this dependency and I wanted"}, {"start": 1253.72, "end": 1256.92, "text": " to learn what took a bit more about tokenization and turn out it's a"}, {"start": 1256.92, "end": 1262.76, "text": " complete like a separate research field and it's super under underestimated but"}, {"start": 1262.76, "end": 1268.3999999999999, "text": " it's used everywhere and you basically have character level embeddings you"}, {"start": 1268.3999999999999, "end": 1272.04, "text": " have sub word embeddings and you have 
word embeddings and most of the current"}, {"start": 1272.04, "end": 1276.6, "text": " state-of-the-art models like GPT family of models and Bert they use sub word"}, {"start": 1276.6, "end": 1282.28, "text": " embeddings and such as like the BPE algorithm that's the byte pairing I"}, {"start": 1282.28, "end": 1288.96, "text": " think byte pairing coding and also you have things like word piece things like"}, {"start": 1288.96, "end": 1292.52, "text": " sentence piece and I'm just throwing random names at you right now but you"}, {"start": 1292.52, "end": 1297.08, "text": " get the point like tokenization is really important and it's being used"}, {"start": 1297.08, "end": 1305.24, "text": " everywhere so those were most of the dependencies I had on Bert and again I"}, {"start": 1305.24, "end": 1309.9199999999998, "text": " searched for some blogs I found some official Google blogs I also found"}, {"start": 1309.9199999999998, "end": 1315.6799999999998, "text": " J.L.Lemmer's blog on Bert again super useful as I said Code Emporium had nice"}, {"start": 1315.6799999999998, "end": 1323.28, "text": " video and finally I reading the the J.L.Lemmer's blog I saw a couple of more"}, {"start": 1323.28, "end": 1327.24, "text": " things I have to understand before I dwell into Bert and those were"}, {"start": 1327.24, "end": 1331.96, "text": " specifically contextualized word embeddings so the thing with word"}, {"start": 1331.96, "end": 1335.8799999999999, "text": " embeddings is they're super shallow representations so that's something if"}, {"start": 1335.8799999999999, "end": 1340.04, "text": " you're from the computer vision field that's something like having only the"}, {"start": 1340.04, "end": 1346.62, "text": " the edge feature detector instead of the full full rich representation which the"}, {"start": 1346.62, "end": 1351.28, "text": " CNNs provide you with which start with maybe some simpler shapes and edges and"}, {"start": 1351.28, "end": 1355.76, "text": " stuff and they get to more abstract and more semantically meaningful"}, {"start": 1355.76, "end": 1360.2, "text": " representations at the deeper layers of the network so the problem with word"}, {"start": 1360.2, "end": 1365.28, "text": " embeddings is the following they encode the same exact embedding vector for a"}, {"start": 1365.28, "end": 1370.12, "text": " word regardless of the context of the surrounding words and that's just not"}, {"start": 1370.12, "end": 1374.36, "text": " how the things were in the human language so the thing is you say for"}, {"start": 1374.36, "end": 1380.44, "text": " example if you have a word Queen it depends on the context whether that the"}, {"start": 1380.44, "end": 1385.2, "text": " Queen is maybe a Queen bee or like Queen like a royalty and the context is"}, {"start": 1385.2, "end": 1389.4, "text": " basically determined by the surrounding words so you want to have a different"}, {"start": 1389.4, "end": 1393.76, "text": " ebony vector for the word Queen depending on the context whereas the"}, {"start": 1393.76, "end": 1399.04, "text": " word to back and glovey methods they they basically always encoded the word"}, {"start": 1399.04, "end": 1403.5, "text": " with the same vector and that's something that we now know is not the"}, {"start": 1403.5, "end": 1408.88, "text": " best thing to do and we have so there were methods that followed the word to"}, {"start": 1408.88, "end": 1414.1200000000001, "text": " back and glovey and that's this Kovie that the it's contextualized vectors and"}, 
{"start": 1414.1200000000001, "end": 1419.0, "text": " then you have Elmo embeddings from language models which finally starting"}, {"start": 1419.0, "end": 1423.5200000000002, "text": " including the like the context of the surrounding words and that was super"}, {"start": 1423.5200000000002, "end": 1430.1200000000001, "text": " important for for what followed along and that's and that's bird and GPT's GPT"}, {"start": 1430.1200000000001, "end": 1436.0800000000002, "text": " models one crucial thing that happened aside from from from these"}, {"start": 1436.08, "end": 1443.6799999999998, "text": " contextualized word representations is this ULM fit universal language model"}, {"start": 1443.6799999999998, "end": 1450.84, "text": " where they showed that you can do fine-tuning in NLP as well so previously"}, {"start": 1450.84, "end": 1454.6399999999999, "text": " we all knew the computer vision we had the computer vision had this image net"}, {"start": 1454.6399999999999, "end": 1461.48, "text": " moment in 2012 where basically we figure out we can use a pre-trained models on"}, {"start": 1461.48, "end": 1467.96, "text": " the image net data set to to later just fine-tune and do many of other"}, {"start": 1467.96, "end": 1472.68, "text": " downstream tasks like maybe image classification on some other data sets"}, {"start": 1472.68, "end": 1477.2, "text": " where you don't have as much data as you have in the image image net so that's"}, {"start": 1477.2, "end": 1484.56, "text": " the so we finally have the same ability to do the same thing in NLP and that's"}, {"start": 1484.56, "end": 1489.38, "text": " to do fine-tuning and just pre-train your language model on like a huge"}, {"start": 1489.38, "end": 1494.42, "text": " corpus of text and then just fine-tune it for a different downstream NLP tasks"}, {"start": 1494.42, "end": 1500.3200000000002, "text": " and ULM fit was one of the papers they kind of promoted that idea and started"}, {"start": 1500.3200000000002, "end": 1507.7600000000002, "text": " this trend in NLP finally I read the the BERT paper and recently a couple of"}, {"start": 1507.7600000000002, "end": 1512.5200000000002, "text": " days ago I started reading about GPT again JLMR has some really nice blogs"}, {"start": 1512.52, "end": 1520.16, "text": " and and there are some nice blogs official blogs from open AI as well and"}, {"start": 1520.16, "end": 1524.76, "text": " papers or something that's when you when you finish up with those you can you can"}, {"start": 1524.76, "end": 1528.72, "text": " go and study some some papers as always so after I finished the two important"}, {"start": 1528.72, "end": 1533.92, "text": " topics and that's BERT and GPT my idea is like starting this Monday to to"}, {"start": 1533.92, "end": 1538.44, "text": " implement the transformer the original transformer paper from scratch because"}, {"start": 1538.44, "end": 1543.8, "text": " as it turned out basically both BERT and GPT basically took one part of the"}, {"start": 1543.8, "end": 1549.52, "text": " original transformer model so BERT took the encoder part GPTs took the decoder"}, {"start": 1549.52, "end": 1553.92, "text": " part of the original transformer and they just kind of increased it and"}, {"start": 1553.92, "end": 1558.48, "text": " training some bigger data and yeah that's that's pretty much it so I'm"}, {"start": 1558.48, "end": 1564.18, "text": " gonna implement it and then I have this section later work where like reading"}, {"start": 1564.18, "end": 1569.0, "text": " 
through those blogs blogs and overview blogs and bunch of different resources I"}, {"start": 1569.0, "end": 1574.1200000000001, "text": " already showed you I kind of accumulated some things I know it'd be useful to to"}, {"start": 1574.1200000000001, "end": 1578.72, "text": " read them to go through them and one of those is like Alberta like some BERT"}, {"start": 1578.72, "end": 1583.96, "text": " derivative and then that there is this Excel net and Google t5 and a bunch of"}, {"start": 1583.96, "end": 1593.28, "text": " models so anyways that's how I work I do some theory then I do some coding I take"}, {"start": 1593.28, "end": 1598.6399999999999, "text": " some time to let that theory kind of sink down and let it go through my"}, {"start": 1598.6399999999999, "end": 1603.8, "text": " fingers by coding the things from scratch and then again I shift my"}, {"start": 1603.8, "end": 1609.32, "text": " attention to theory I do some more theory and maybe I'll do some another"}, {"start": 1609.32, "end": 1615.6, "text": " project after I finish with the second second iteration of this theory"}, {"start": 1615.6, "end": 1621.96, "text": " exploration whatever let me let me finish up with this so I always kind of"}, {"start": 1621.96, "end": 1627.04, "text": " try and have a bigger picture of how this all fits into the next things I'm"}, {"start": 1627.04, "end": 1633.48, "text": " going to do and the next area I wanna I wanna research a bit more is graph"}, {"start": 1633.48, "end": 1639.28, "text": " neural networks GNNs for short and so I just have some check boxes some things I"}, {"start": 1639.28, "end": 1646.28, "text": " have to to understand before I proceed and start digging into that new topic so"}, {"start": 1646.28, "end": 1652.0, "text": " I basically already learned all of these yeah let me check I don't still don't"}, {"start": 1652.0, "end": 1657.08, "text": " have a clear picture of what blue metric is I'm gonna read read about it a bit"}, {"start": 1657.08, "end": 1664.76, "text": " later so anyways hopefully you found this video useful it's the first time I"}, {"start": 1664.76, "end": 1671.0, "text": " created a video like this it's a bit long format and it's not as curated so"}, {"start": 1671.0, "end": 1675.56, "text": " hopefully you did learn something useful here if you did go ahead and"}, {"start": 1675.56, "end": 1680.72, "text": " subscribe to my channel hit that bell icon so that you get notified when I"}, {"start": 1680.72, "end": 1686.08, "text": " upload a new video and let me know if you have any questions in the comments"}, {"start": 1686.08, "end": 1705.76, "text": " I'd like to answer all of those so anyways have a nice day keep learning"}]
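The segments above walk through the attention reading list (Bahdanau-style attention, Luong's dot-product/general/concat scoring functions, self-attention in the transformer) without ever showing the computation itself. As a rough companion, here is a minimal PyTorch sketch of scaled dot-product self-attention, the variant the transformer paper uses; it is purely illustrative and not taken from any of the repositories or blogs mentioned in the transcript.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Minimal attention step: every query token attends over all key/value tokens."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5        # (batch, tokens, tokens) similarity scores
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)                   # attention distribution per query token
    return weights @ v, weights

# Toy usage: 2 "sentences", 5 token vectors each, 64-dimensional representations.
# In a real transformer q, k, v come from learned linear projections of x (and multiple
# heads are used); this sketch skips both for brevity.
x = torch.randn(2, 5, 64)
out, attn = scaled_dot_product_attention(x, x, x)         # self-attention: q, k, v all derived from x
print(out.shape, attn.shape)                              # torch.Size([2, 5, 64]) torch.Size([2, 5, 5])
```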
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=rcIrIhNAe_c
Cheapest (0$) Deep Learning Hardware Options | 2021
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I'll give you the cheapest options to get started with deep learning! I'll also mention what paid options you have at your disposal. You'll get: ✔️ A high-level overview of the deep learning HW landscape ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ --- Free options ✅ Google Colab: https://colab.research.google.com/ ✅ (Google) Kaggle: https://www.kaggle.com/ ✅ Kaggle tips: https://www.kaggle.com/docs/efficient-gpu-usage If you want to dig deeper: ✅ Colab vs Kaggle: https://towardsdatascience.com/kaggle-vs-colab-faceoff-which-free-gpu-provider-is-tops-d4f0cd625029 Sign-up for 1-month free credits: ✅ Azure ML: https://azure.microsoft.com/en-us/free/machine-learning/search/?&ef_id=Cj0KCQjw5eX7BRDQARIsAMhYLP--LX_y8V5tXNp2AIQH79TiJDc0S9_P694O-Bh75-TahDH_BjMhbxwaAuqBEALw_wcB:G:s&OCID=AID2100645_SEM_Cj0KCQjw5eX7BRDQARIsAMhYLP--LX_y8V5tXNp2AIQH79TiJDc0S9_P694O-Bh75-TahDH_BjMhbxwaAuqBEALw_wcB:G:s&dclid=CjgKEAjw5eX7BRDukp61rqLX0z8SJACF1AMBoJvajYg6QC5LOUWfroyE4zdyncMk1K_MEPbu2CqyafD_BwE ✅ GCP: https://cloud.google.com/free ✅ IBM: https://www.ibm.com/cloud/free Check out this link for an awesome compilation of your options: ✅ https://github.com/zszazi/Deep-learning-in-cloud Check out these links for comparisons between cloud options: ✅ https://determined.ai/blog/cloud-v-onprem/ ✅ https://medium.com/@akhil.vasvani/what-to-use-for-deep-learning-cloud-services-vs-gpu-385ebaa037ee ✅ https://towardsdatascience.com/maximize-your-gpu-dollars-a9133f4e546a --- Paid options Build a budget deep learning PC: ✅ https://www.youtube.com/watch?v=xsnVlMWQj8o&ab_channel=MachineLearningwithPhil ✅ https://www.youtube.com/watch?v=th20fbZfZUc&ab_channel=Pysource Build your deep learning rig: ✅ https://www.mrdbourke.com/notes-on-building-a-deep-learning-pc/ Why is deep learning rig more cost-effective than the cloud? ✅ https://medium.com/the-mission/why-building-your-own-deep-learning-computer-is-10x-cheaper-than-aws-b1c91b55ce8c ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Intro: Deep Learning is expensive 0:37 First option: Google Colab and Kaggle 1:12 Tradeoffs you make by picking this option 2:00 Second option: Cloud For Free 3:30 ML-specific cloud providers (vs traditional) 4:45 Paid options (budget deep learning PC and beyond) 5:48 Recap - What's the best option for you? 6:27 Grants for researchers and final thoughts ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". 
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #deeplearning #gpus #hardware
Okay, so you started playing with deep learning and you figured out that neural networks take up a lot of VRAM, they are really compute intensive, and you'll need some expensive hardware, specifically GPUs, in order to do anything meaningful. So in this video I'm going to break it down for you and start with the cheapest options, like zero dollars, like free options first, and then in the second part of the video I'll give you some recommendations about the paid options also. So without further ado let's start with the first option. Okay, so the first two options you should consider are both free, and they are Google Colab and Kaggle. You probably heard of both of them, and actually they both belong to Google. Google Colab is awesome because it gives you GPUs and also TPUs for free, and the same goes for Kaggle, which gives you P100 GPUs, which are actually a bit better than Colab's, because Colab gives you K80s. There are a lot of tutorials on how to get started and I won't get into that part, I'll just give you all the options you have. There are obviously some trade-offs when using Google Colab and Kaggle, and the main one is that they disconnect after a certain period. So for Colab you can expect around 12 hours before it just disconnects your runtime, and Kaggle gives you around nine hours of free usage. Now if your GPU goes idle it will basically stop your runs. And there's one more thing you should know, and that's that there is no guarantee that Colab will give you GPUs or TPUs. They have a certain number of GPUs per region, so if people in your region are using Colab a lot, either for deep learning or maybe mining, then you probably won't get a GPU, and that sucks. Okay, for the second part of this video I'm going to tell you what your next options are, and that's the cloud. And I hear you say, wait, cloud? But the thing is, most of the big traditional cloud providers such as Azure, AWS, and Google Cloud Platform (or GCP for short) give you like 200 to 300 dollars for a month for free, and you can kind of use that. So Azure gives you 200 bucks for a month, IBM also gives you 200 dollars for a month, GCP gives you 300 dollars for a month, and finally Alibaba gives you 300 dollars. AWS also has some free program, but I'm not sure if they offer the same thing as the ones I mentioned; I think they only offer CPUs. So doing this you could basically have cloud GPUs like K80s for four or five months if you just switch between these providers. And it's kind of annoying to switch all the time, but you basically need one day to set it up and get started, and then you have it free for a month, and then you can just switch to the other cloud provider. Doing this you'll acquire an important skill, and that's using different cloud providers. Later, if you want to maybe continue using some of them, you can just pay and keep using the provider. So aside from these traditional cloud providers you also have machine-learning-specific cloud providers such as Spell, which I've used and which is really super easy to use, much easier than some of these more traditional cloud providers. There is also Paperspace, there is FloydHub, there are a bunch of options there. So Spell gives you 10 bucks for free, and you can check for the others. I'll link a bunch of useful resources down in the description. So the thing with these ML-specific cloud providers is that they are all based on top of the traditional providers.
So as a backbone they use either AWS or Azure or GCP and so forth. And that was basically it for the free options. So you either use Colab or Kaggle for free forever, but obviously it's slower and it sometimes disconnects your runtime, so there are some cons there. Or you simply switch from cloud provider to cloud provider, you get a lot of skill by doing that, and you get some awesome GPU clusters for free. So those were the free options. Now let's jump into some of the paid options, which you may consider if you're doing deep learning really seriously. So there are basically two things you could do here. You could either just buy some hardware off the shelf or you could build your own custom PC. You can basically build a deep learning PC for less than a thousand bucks and it could be really good. I'll link a couple of those videos for building a budget deep learning PC down in the description. If you have more money you can obviously build a better deep learning PC. They also call those deep learning rigs. So basically if you can afford two or three or four thousand dollars you can build a real beast of a PC. And the reason why you might want to do this is that if you're really doing some serious deep learning, then you'll either have to pay for some cloud-based solution or you'll build your custom PC. And there are a bunch of resources out there that show that it's much more cost effective to actually build your own deep learning rig. And I'll also link some of the resources down in the description helping you build the rig for yourself. So in the end it all depends on your preference. If you don't have any budget whatsoever and you want to start with machine learning, then go with Kaggle or go with Google Colab. If you do have some money but you don't want to lose any time and you just want something quick, just buy some off-the-shelf PC or a deep learning rig. If you do have some money and you also have some time to spend, the best thing to do is to build your own deep learning rig. And I think the worst option, if you're an individual playing with deep learning, is to be paying for the cloud, unless you are a startup and you create some kind of a multi-year contract. So that's it for this video. Hope you found it useful. So you have all of those options out there. I'll link a bunch of useful resources down in the description if you want to investigate a bit deeper. But that's the high-level overview of the current deep learning hardware landscape. So I'd like to know which of these options you personally used and also which one works best for you in your current situation. If you think I've missed something, feel free to comment down in the comment section and I'll get back to you. If you found this video useful, consider subscribing and hit that like button to get notified when I upload a new video. Until next time, keep learning deep.
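Since the transcript stresses that Colab and Kaggle hand out GPUs (K80s, P100s) with no guarantee, here is a small sketch for checking what accelerator a session actually gave you. It assumes PyTorch is preinstalled in the runtime, which both platforms currently provide; TPUs need the separate torch_xla/TensorFlow tooling and won't show up here.

```python
# Quick sanity check of the accelerator a Colab/Kaggle session handed you.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA GPU allocated - this runtime is CPU-only (or a TPU, invisible to plain PyTorch).")
```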
[{"start": 0.0, "end": 6.72, "text": " Okay, so you started playing with deep learning and you figured out that neural networks take"}, {"start": 6.72, "end": 14.0, "text": " up a lot of VRAM, they are really compute intensive and you'll need some expensive hardware,"}, {"start": 14.0, "end": 19.92, "text": " specifically GPUs in order to do anything meaningful. So in this video I'm going to"}, {"start": 19.92, "end": 27.92, "text": " break it down for you and start with the cheapest options like zero dollars, like free options first,"}, {"start": 27.92, "end": 32.32, "text": " and then in the second part of the video I'll give you some recommendations about the paid"}, {"start": 32.32, "end": 36.56, "text": " options also. So without further ado let's start with the first option."}, {"start": 38.72, "end": 45.52, "text": " Okay, so the first two options you should consider are both free and they are Google CoLab"}, {"start": 45.52, "end": 51.760000000000005, "text": " and Kaggle. So you probably heard of both of them, actually they both belong to Google. Google CoLab"}, {"start": 51.76, "end": 59.92, "text": " is awesome because it gives you GPUs and also TPUs for free and the same goes for Kaggle which gives"}, {"start": 59.92, "end": 66.64, "text": " you P100 GPUs which are actually a bit better than CoLab's because CoLab gives you K80s. There are a"}, {"start": 66.64, "end": 71.68, "text": " lot of tutorials how to get started and I won't get into that part, I'll just give you all the"}, {"start": 71.68, "end": 76.96, "text": " options you have. There are obviously some trade-offs when using Google CoLab and using Kaggle"}, {"start": 76.96, "end": 82.96, "text": " and the main ones is that they disconnect after a certain period. So for CoLab you can expect"}, {"start": 82.96, "end": 90.47999999999999, "text": " around 12 hours before it kind of just disconnects your runtime and Kaggle gives you around nine hours"}, {"start": 90.47999999999999, "end": 98.96, "text": " of free usage. Now if your GPU goes idle it will basically stop your runs. And there's one more"}, {"start": 98.96, "end": 104.88, "text": " thing you should know and that's that there is no guarantee that CoLab will give you GPUs or TPUs,"}, {"start": 104.88, "end": 111.03999999999999, "text": " they have a certain amount of GPUs per region so if people in your region are using CoLab a lot,"}, {"start": 111.03999999999999, "end": 118.32, "text": " either for deep learning or maybe mining, then you probably won't get a GPU and that sucks."}, {"start": 120.8, "end": 127.52, "text": " Okay for the second part of this video I'm going to tell you what your next three options are"}, {"start": 127.52, "end": 134.16, "text": " and that's cloud. And I hear you say wait, cloud? But the thing is most of the big traditional cloud"}, {"start": 134.16, "end": 142.72, "text": " providers such as Azure, such as AWS, Google Cloud Platform or GCP for short, they all give you like"}, {"start": 142.72, "end": 150.48, "text": " 200 to 300 dollars for a month for free and you can kind of use that. So Azure gives you 200 bucks"}, {"start": 150.48, "end": 157.92, "text": " for a month, also IBM gives you 200 dollars for a month, GCP gives you 300 dollars for a month,"}, {"start": 157.92, "end": 164.72, "text": " and finally Alibaba gives you 300 dollars. You also have AWS has some free program but I'm not"}, {"start": 164.72, "end": 171.35999999999999, "text": " sure if they have the same thing as these ones I mentioned. 
I think they only offer CPUs. So doing"}, {"start": 171.35999999999999, "end": 180.95999999999998, "text": " this you can basically have like you could have cloud GPUs like K80s for four or five months if"}, {"start": 180.96, "end": 188.0, "text": " you just switch between these providers. And it's kind of annoying to switch all the time but you"}, {"start": 188.0, "end": 193.76000000000002, "text": " basically need one day to set it up and get started and then you have it free for a month and then you"}, {"start": 193.76000000000002, "end": 200.4, "text": " can just switch to the other cloud provider. Doing this you'll acquire important skill and that's"}, {"start": 200.4, "end": 206.88, "text": " using different cloud providers. Later if you want to maybe continue using some of them you can just"}, {"start": 206.88, "end": 212.96, "text": " pay and keep using the provider. So aside from these traditional cloud providers you also have"}, {"start": 212.96, "end": 218.4, "text": " machine learning specific cloud providers such as Spell which I've used and which is really super"}, {"start": 218.4, "end": 224.64, "text": " easy to use, much easier than some of these more traditional cloud providers. There is also"}, {"start": 225.6, "end": 231.84, "text": " Paperspace, there is Floyd Hub, there are a bunch of options there. So Spell gives you 10 bucks for"}, {"start": 231.84, "end": 237.44, "text": " free and you can check for the others. I'll link a bunch of useful resources down in the description."}, {"start": 237.44, "end": 241.84, "text": " So the thing with these ML specific cloud providers is that they are all based off of"}, {"start": 242.56, "end": 249.04, "text": " like traditional providers. So they use as a backbone they use either AWS or Azure or GCP"}, {"start": 249.68, "end": 255.44, "text": " and so forth. And that was basically it for the free options. So you either use Colab or Kaggle"}, {"start": 255.44, "end": 262.0, "text": " for free forever but obviously it's slower, it sometimes disconnects your runtime so there are"}, {"start": 262.0, "end": 269.68, "text": " some cons there. Or you simply use you switch from cloud provider to cloud provider, you get a lot"}, {"start": 269.68, "end": 276.32, "text": " of skill by doing that and you get some awesome GPU clusters for free. So those were the free options."}, {"start": 276.32, "end": 282.4, "text": " Now let's jump into some of the paid options which you may consider if you're doing deep learning"}, {"start": 282.4, "end": 291.67999999999995, "text": " really seriously. So there are basically two things you could do here. You could either just buy some"}, {"start": 291.67999999999995, "end": 297.59999999999997, "text": " hardware off the shelf or you could build your own custom PC. You can basically build a deep learning"}, {"start": 297.59999999999997, "end": 303.59999999999997, "text": " PC for less than thousand bucks and it could be really good. I'll link a couple of those videos"}, {"start": 303.59999999999997, "end": 308.64, "text": " for building a budget deep learning PC down in the description. If you have more money you can build"}, {"start": 308.64, "end": 314.8, "text": " obviously better deep learning PC. They also call those deep learning rigs. So basically if you can"}, {"start": 314.8, "end": 322.0, "text": " afford two or three or four thousand dollars you can build a really a beast of a PC. 
And the reason"}, {"start": 322.0, "end": 328.47999999999996, "text": " why you might want to do this is if you're really doing some serious deep learning then you'll either"}, {"start": 328.47999999999996, "end": 334.96, "text": " have to pay some cloud based solution or you'll build your custom PC. And there are a bunch of"}, {"start": 334.96, "end": 339.76, "text": " resources out there that show that it's much more cost effective to actually build your own deep"}, {"start": 339.76, "end": 345.59999999999997, "text": " learning rig. And I'll also link some of the resources down in the description helping you"}, {"start": 345.59999999999997, "end": 350.88, "text": " build the rig for yourself. So at the end it all depends on your preference. If you don't have any"}, {"start": 350.88, "end": 355.35999999999996, "text": " budget whatsoever and you want to start with machine learning then go with Kaggle or go with Google"}, {"start": 355.35999999999996, "end": 361.35999999999996, "text": " CoLab. If you do have some money but you don't want to lose any time you just want something quick"}, {"start": 361.36, "end": 368.72, "text": " just buy some off-the-shelf PC or a deep learning rig. If you do have some money and you also do"}, {"start": 368.72, "end": 374.24, "text": " have some time to spend you can the best thing to do is to build your own deep learning rig."}, {"start": 374.24, "end": 380.16, "text": " And I think that the worst option if you're an individual playing with deep learning is to be"}, {"start": 380.16, "end": 386.72, "text": " paying the cloud unless you are a startup and you create some kind of a multi-year contract."}, {"start": 386.72, "end": 392.48, "text": " So that's it for this video. Hope you found it useful. So you have all of those options out"}, {"start": 392.48, "end": 397.28000000000003, "text": " there. I'll link a bunch of useful resources down in the description if you want to maybe"}, {"start": 397.28000000000003, "end": 402.72, "text": " investigate a bit deeper. But that's like the high level overview of the current hardware,"}, {"start": 403.76000000000005, "end": 409.12, "text": " deep learning hardware landscape. So I'd like to know which options out of these did you"}, {"start": 409.12, "end": 415.36, "text": " personally use and also which one works the best for you in your current situation."}, {"start": 415.36, "end": 419.92, "text": " If you think I've missed something feel free to comment down in the comment section"}, {"start": 420.32, "end": 425.28000000000003, "text": " and I'll get back to you. If you found this video useful consider subscribing and hit that"}, {"start": 425.28, "end": 446.32, "text": " like button to get notified when I upload a new video. Until next time keep learning deep."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=ynx48f7LhCc
Machine Learning Projects (Intermediate level) | 2021
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I'll recommend you some intermediate-level machine learning projects. You'll get some: ✔️ intermediate level ML project ideas ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ NST (optimization): https://github.com/gordicaleksa/pytorch-neural-style-transfer ✅ NST (neural net): https://github.com/gordicaleksa/pytorch-nst-feedforward ✅ DeepDream: https://github.com/gordicaleksa/pytorch-deepdream ✅ GANs: https://github.com/gordicaleksa/pytorch-gans All of the papers and blogs are linked in READMEs of the above projects! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Intro: what will you learn through these projects? 1:30 Project: Neural Style Transfer (optimization) 3:20 Project: Neural Style Transfer (feed-forward) 4:19 Support me on Patreon 4:34 Project: Deep Dream 6:36 Project: GANs 8:38 How should you approach these? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #machinelearning #machinelearningprojects #deeplearning
What's up folks? Got myself a new microphone here, so hopefully the audio will get better. I still didn't have the time to buy the rest of the equipment, but it will come soon. Hopefully the audio will be much better in this video. So in the last video I showed you some beginner-friendly machine learning projects, which you could do to get yourself started with machine learning. And they were beginner-friendly in the sense that you can basically write the MNIST classifier in about 50 lines of code, and you can find a bunch of resources out there which show you how to solve those problems. And I'll link the video in the card somewhere here, I hope. In this video I'm going to recommend you some more advanced projects. I kind of label them as intermediate-level machine learning projects. And the good thing about these is that I actually have all of them implemented on my GitHub, so that's going to help you kickstart and do all of these without a problem. So I'm a big fan of learning by doing, and by coding these from scratch, you're going to learn so much more about the deep learning framework of your choice. And that will be PyTorch if you're following along with my GitHub projects. And aside from the framework, you're obviously going to learn a lot more about deep learning by just digging into the project, coding the neural networks from scratch, and everything else that goes along with it. Without further ado, let's jump into the projects. So the first project I'm going to recommend you do is Neural Style Transfer. And I've actually got a whole series on Neural Style Transfer; I'll link the videos somewhere here and also down below in the description. Jumping straight into the project on my GitHub here, it's called PyTorch Neural Style Transfer, and if you open it up, you'll see what it looks like. So just let me reiterate once more for those of you who don't know this. Neural Style Transfer is basically about combining a content image with a style image and getting a stylized image out, as you can see here on the screen. So this colorized image on the left is the stylized image you get from the NST algorithm, and you can see a bunch of beautiful examples that the algorithm can create. And this is one of the reasons I recommend starting with this project: it's highly visual, and most people just like seeing the things they do visualized. So it's a nice intermediate project to start with. You're going to have a paradigm shift doing this project, and the reason is that you usually optimize the weights of the neural network, whereas here, in this project, you're actually going to tune the input pixels so that you get this amazing imagery, as you can see on the screen here. So aside from the paradigm shift, you'll learn how to match the feature maps of the input image with the feature maps of the content image, and also the Gram matrix of the style image. And if all of that sounds like gibberish to you now, that's totally fine. You'll learn that stuff, so if you don't know it right now, it's okay. And the second project, after we finish this one, is also on my GitHub here. It's the PyTorch Neural Style Transfer feed-forward project. If we open it up here, it's super similar to the previous one. The difference is that you're actually training a neural network here. So in the previous one we were doing an optimization procedure, and that's how we got those images. Here you just pass your content image into the neural network.
And it just stylizes the image into the style that the network was trained for. You're not going to tweak the input pixels; you're going to tweak the actual weights of the neural network. Doing this project is going to teach you how to build your own neural networks from scratch and how to set up the whole training procedure, pretty much the whole pipeline. So if you're having any problems whatsoever starting with these projects, please write the comments down in the comment section. I'll make sure to address all of them. And also feel free to open up issues on GitHub; I'll be monitoring those also. One more thing: I opened up a Patreon account recently, so if you want to support me and become a member of this growing community, consider becoming a patron here, and you'll get early access to my content and a bunch of other perks. So the second project is going to be Deep Dream. And it's a really interesting algorithm developed by this guy called Alexander Mordvintsev while he was at Google. He kind of woke up during the night, had a nightmare, and came up with this thing. And you can see what it looks like: you get this psychedelic-looking imagery here. So you take whatever image as the input, like say these figures here, and after passing it through the algorithm, you get these psychedelic ones. Even this one on the top left actually came from this image here. So it's super exciting. How it works is you basically take some image as the input and pass it into a pre-trained neural network, and that neural network will create something called feature maps. And you'll want to amplify whatever the network already sees in the image. So because most of these networks were trained on a lot of dog images, you'll see how the network slowly modifies the image. It's going to add some dog eyes and cat eyes and fur and stuff like that, because that was the data the network was trained on. So you're going to learn how to manually do gradient steps, so optimization steps. You're going to use the raw gradients that you get from maximizing those feature maps, and you're going to apply them to the input pixels, and that's how you're going to get these images. So there are a lot more things you can play with, as you'll see when you open up this README. Depending on which layers you use to do this optimization, you're going to get different patterns. If you go to lower layers, you'll get these geometric patterns, and the higher up you go, the deeper into the network you go actually, you'll get more abstract imagery like these here. And yeah, you can also create GIFs. Pretty much fun stuff. And last but not least, generative adversarial networks, or GANs for short. If you haven't heard of GANs so far, which I doubt, they're just a framework where we have two neural networks: one is called the generator, the other one is called the discriminator. And the goal of the generator is to create imagery indistinguishable from the real data. And as you can see here, I basically have three GAN projects inside of this repo. For this original one, invented by Ian Goodfellow, I used MNIST digits, so the same dataset as in the beginner video. And basically you can see this row here, that's real imagery from MNIST, and after the generator is trained, it can generate data that looks like somebody could have written those digits. It looks totally real; you probably couldn't distinguish between this row and this one here.
And aside from that one, I also have this conditional GAN, where you can additionally control which class you want to generate. So you can condition it and say, hey, I want a zero, and it will just generate zeros, as you can see in the column here. Finally, this deep convolutional GAN, or DCGAN for short, is a super famous architecture. I've just picked a different dataset here; you can also use MNIST, but I've picked the CelebA dataset of human faces. And you can see how it's learning to generate faces of humans, and this is what it looks like after the generator is trained. Obviously not indistinguishable from the real data, but this was a model that was developed, I think, in 2016, so you have much better models now, state of the art, like StyleGAN version 2. And I've also developed this Jupyter Notebook, which you can use to better understand how this all works. Hopefully you'll find that useful. A couple of tips on how you should approach these projects. For the first project, the NST one, I just recommend you go ahead and read the original paper by Gatys. It's not super tough if you're intermediate, which you probably are since you're watching this video, but even if you're kind of a beginner, go ahead and give it a try. As for Deep Dream, there is a blog that Chris Olah and Alexander Mordvintsev wrote a couple of years ago, and you should go ahead and read that one before you try to implement the code yourself. Also, feel free to go ahead and examine other people's code, including mine, obviously; that will help you develop your own version. But try to develop the core parts by yourself. Don't just copy-paste code; that defeats the purpose, because you want to learn here. And when it comes to GANs, I definitely do not recommend you go ahead and try to read the original paper. It's super tough, lots of math; unless you're familiar with the mathematics for machine learning, just skip that one. Go ahead and find useful blogs online. There are many GAN blogs out there, and they will help you understand the basic logic. And then the most important thing here is that you try to implement the training loop yourself. That's where the whole brain of the thing is. Okay, that was pretty much it. Going forward, I'll be developing some even more advanced projects like graph neural networks and transformers, so you can expect a video in a couple of months with me suggesting some projects you should do yourself, aimed at more advanced machine learning practitioners. I'd love to hear your thoughts on these three projects. Do you have any project which you think I should have included here? Just let me know your thoughts down in the comment section. If you're new to this channel, consider subscribing and hit that bell icon to get notified when I upload a new video. Until next time, keep learning deep.
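The closing advice above is to implement the GAN training loop yourself, because that is where the core logic lives. As a rough reference, here is a minimal sketch of the vanilla (Goodfellow-style) alternating update. The generator G, discriminator D, their optimizers, and the dataloader supplying real_images are assumed to already exist, device handling is omitted, and this is not the code from the pytorch-gans repository linked in the description.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # assumes the discriminator outputs raw logits

def gan_train_step(G, D, opt_g, opt_d, real_images, z_dim=100):
    """One alternating update of the vanilla GAN objective."""
    batch_size = real_images.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Discriminator step: push real images toward label 1, generated images toward 0.
    fake_images = G(torch.randn(batch_size, z_dim))
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images.detach()), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the (just updated) discriminator label fakes as real.
    g_loss = bce(D(fake_images), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```

The `detach()` in the discriminator step is the part beginners most often miss: it stops the discriminator loss from updating the generator, so each network only learns from its own objective.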
[{"start": 0.0, "end": 6.0, "text": " What's up folks? Got myself a new microphone here, so hopefully the audio will get better."}, {"start": 6.0, "end": 11.0, "text": " I still didn't have the time to buy the rest of the equipment, but it will come soon."}, {"start": 11.0, "end": 14.0, "text": " Hopefully the audio will be much better in this video."}, {"start": 14.0, "end": 19.0, "text": " So in the last video I showed you some beginner-friendly machine learning projects,"}, {"start": 19.0, "end": 23.0, "text": " which you could do to get yourself started with machine learning."}, {"start": 23.0, "end": 29.0, "text": " And they were beginner-friendly in the sense that you can basically write the MNIST classifier in about 50 lines of code."}, {"start": 29.0, "end": 35.0, "text": " And you can find a bunch of resources out there which show you how to solve those problems."}, {"start": 35.0, "end": 39.0, "text": " And I'll link the video in the card somewhere here, I hope."}, {"start": 39.0, "end": 43.0, "text": " In this video I'm going to recommend you some more advanced projects."}, {"start": 43.0, "end": 47.0, "text": " I kind of label them as intermediate-level machine learning projects."}, {"start": 47.0, "end": 53.0, "text": " And the good thing about these is that I actually have all of them implemented on my GitHub."}, {"start": 53.0, "end": 60.0, "text": " So that's going to help you kickstart and do all of these without a problem."}, {"start": 60.0, "end": 65.0, "text": " So I'm a big fan of learning by doing, and by coding these from scratch,"}, {"start": 65.0, "end": 70.0, "text": " you're going to learn so much more about the deep learning framework of your choice."}, {"start": 70.0, "end": 74.0, "text": " And there will be a PyTorch if you're following along my GitHub projects."}, {"start": 74.0, "end": 80.0, "text": " And aside from the framework, you're obviously going to learn a lot more about deep learning"}, {"start": 80.0, "end": 86.0, "text": " by just digging into the project, coding the neural networks from scratch, and everything else that goes along."}, {"start": 86.0, "end": 91.0, "text": " Without further ado, let's jump into the project."}, {"start": 91.0, "end": 95.0, "text": " So the first project I'm going to recommend you do is Neural Style Transfer."}, {"start": 95.0, "end": 98.0, "text": " And I've actually got a whole series on Neural Style Transfer."}, {"start": 98.0, "end": 104.0, "text": " I'll link the videos somewhere here and also down below in the description."}, {"start": 104.0, "end": 110.0, "text": " Jumping straight into the project on my GitHub here, it's called PyTorch Neural Style Transfer."}, {"start": 110.0, "end": 114.0, "text": " And if you open it up, you'll see how it looks like."}, {"start": 114.0, "end": 119.0, "text": " And so just let me reiterate once more for those of you who don't know this."}, {"start": 119.0, "end": 126.0, "text": " So Neural Style Transfer is basically about combining this content image with the style image"}, {"start": 126.0, "end": 130.0, "text": " and getting a stylized image out, as you can see here on the screen."}, {"start": 130.0, "end": 138.0, "text": " So this colorized image on the left is the stylized image you get from the NST algorithm."}, {"start": 138.0, "end": 144.0, "text": " And you can see a bunch of beautiful examples that the algorithm can create."}, {"start": 144.0, "end": 150.0, "text": " And this is one of the reasons I recommend you starting with this project is because it's highly 
visual"}, {"start": 150.0, "end": 155.0, "text": " and most people just like seeing the things they do visualized."}, {"start": 155.0, "end": 158.0, "text": " So it's a nice intermediate project to start with."}, {"start": 158.0, "end": 161.0, "text": " You're going to have a paradigm shift using this project."}, {"start": 161.0, "end": 167.0, "text": " And the reason being is that you usually optimize the weights of the neural network."}, {"start": 167.0, "end": 171.0, "text": " And here in this project, you're actually going to tune the input pixels"}, {"start": 171.0, "end": 176.0, "text": " so that you get these amazing imagery, as you can see on the screen here."}, {"start": 176.0, "end": 182.0, "text": " So aside from the paradigm shift there, you'll learn how to match the feature maps of input imagery"}, {"start": 182.0, "end": 189.0, "text": " with the feature maps of the content image and also the grand matrix of the style image."}, {"start": 189.0, "end": 192.0, "text": " And if all of that sounds gibberish to you now, that's totally fine."}, {"start": 192.0, "end": 197.0, "text": " So you'll learn that stuff. So if you don't know it right now, it's okay."}, {"start": 197.0, "end": 203.0, "text": " And the second project after we finish this one is also my GitHub here."}, {"start": 203.0, "end": 206.0, "text": " It's the PyTorch Neural Style Transfer feed forward."}, {"start": 206.0, "end": 211.0, "text": " If we open it up here, it's super similar to this one."}, {"start": 211.0, "end": 215.0, "text": " The difference being is that you're actually training a neural network here."}, {"start": 215.0, "end": 217.0, "text": " So in the previous one, we're doing optimization procedure."}, {"start": 217.0, "end": 219.0, "text": " We got those imagery in here."}, {"start": 219.0, "end": 222.0, "text": " You just pass your content image into the neural network."}, {"start": 222.0, "end": 228.0, "text": " And it just kind of stylizes the image into the style that the network was trained for."}, {"start": 228.0, "end": 231.0, "text": " You're not going to tweak the input pixels."}, {"start": 231.0, "end": 233.0, "text": " You're going to tweak the actual weights of the neural network."}, {"start": 233.0, "end": 239.0, "text": " Doing this project is going to teach you how to build your neural networks from scratch"}, {"start": 239.0, "end": 244.0, "text": " and how to set up the whole training procedure and pretty much the whole pipeline."}, {"start": 244.0, "end": 248.0, "text": " So if you're having any problems whatsoever starting with these projects,"}, {"start": 248.0, "end": 251.0, "text": " please write the comments down in the comment section."}, {"start": 251.0, "end": 253.0, "text": " I'll make sure to address all of them."}, {"start": 253.0, "end": 256.0, "text": " And also feel free to open up issues on GitHub."}, {"start": 256.0, "end": 259.0, "text": " I'll be monitoring those also."}, {"start": 259.0, "end": 262.0, "text": " One more thing, I opened up a Patreon account recently."}, {"start": 262.0, "end": 267.0, "text": " So if you want to support me and become a member of this growing community,"}, {"start": 267.0, "end": 270.0, "text": " consider becoming my Patreon here."}, {"start": 270.0, "end": 275.0, "text": " And you'll get early access to my content and a bunch of other perks."}, {"start": 275.0, "end": 279.0, "text": " So the second project is going to be Deep Dream."}, {"start": 279.0, "end": 284.0, "text": " And it's a really interesting algorithm developed by 
this guy called Alexander Morvintsev"}, {"start": 284.0, "end": 286.0, "text": " while he was back in Google."}, {"start": 286.0, "end": 290.0, "text": " And he kind of woke up during the night, had a nightmare, and came up with this thing."}, {"start": 290.0, "end": 292.0, "text": " And you can see how it looks like."}, {"start": 292.0, "end": 295.0, "text": " You get this psychedelic looking imagery here."}, {"start": 295.0, "end": 299.0, "text": " So you take some whatever image as the input, like say these figures here."}, {"start": 299.0, "end": 304.0, "text": " And after passing it through the algorithm, you get these psychedelic ones."}, {"start": 304.0, "end": 310.0, "text": " Even this one on the top left actually came from this image here."}, {"start": 310.0, "end": 312.0, "text": " So it's super exciting."}, {"start": 312.0, "end": 316.0, "text": " How it works is you basically take some images as the input,"}, {"start": 316.0, "end": 320.0, "text": " pass it into the pre-trained neural network."}, {"start": 320.0, "end": 325.0, "text": " And that neural network will kind of create something called feature maps."}, {"start": 325.0, "end": 330.0, "text": " And you'll want to amplify whatever the network already sees in the image."}, {"start": 330.0, "end": 334.0, "text": " So because most of them were trained on like dog images,"}, {"start": 334.0, "end": 338.0, "text": " you'll see how the network is slowly modifying the image."}, {"start": 338.0, "end": 342.0, "text": " It's going to add some like dog eyes and cat eyes and fur and stuff like that"}, {"start": 342.0, "end": 345.0, "text": " because that was the data that the image was trained on."}, {"start": 345.0, "end": 352.0, "text": " So you're going to learn how to actually do, manually do gradient steps,"}, {"start": 352.0, "end": 354.0, "text": " so optimization steps."}, {"start": 354.0, "end": 359.0, "text": " You're going to use the raw gradients that you get from maximizing those feature maps."}, {"start": 359.0, "end": 362.0, "text": " And you're going to apply them to these input pixels."}, {"start": 362.0, "end": 365.0, "text": " And that's how you're going to get these images."}, {"start": 365.0, "end": 368.0, "text": " So there are a lot of more things you can play with,"}, {"start": 368.0, "end": 371.0, "text": " as you'll see when you open up this ReadMe,"}, {"start": 371.0, "end": 376.0, "text": " depending on which layers you use to do this optimization,"}, {"start": 376.0, "end": 378.0, "text": " you're going to get different patterns."}, {"start": 378.0, "end": 382.0, "text": " Like if you go to lower layers, you'll get these geometric patterns."}, {"start": 382.0, "end": 386.0, "text": " And the higher up you go, the deeper you go actually."}, {"start": 386.0, "end": 390.0, "text": " You'll get more like abstract imagery like these here."}, {"start": 390.0, "end": 394.0, "text": " And yeah, so you can also create GIFs."}, {"start": 394.0, "end": 396.0, "text": " Pretty much fun stuff."}, {"start": 396.0, "end": 402.0, "text": " And last but not least, generative adversarial networks or GANs for short."}, {"start": 402.0, "end": 406.0, "text": " And if you haven't heard of GANs so far, which I doubt,"}, {"start": 406.0, "end": 409.0, "text": " they're just a framework where we have two neural networks."}, {"start": 409.0, "end": 412.0, "text": " One is called the generator, the other one is called the discriminator."}, {"start": 412.0, "end": 417.0, "text": " And the goal of the generator is to create 
imagery undistinguishable from the real data."}, {"start": 417.0, "end": 423.0, "text": " And you can see here in this, so I basically have three GAN projects inside of this repo."}, {"start": 423.0, "end": 428.0, "text": " And this original one invented by Ian Goodfellow,"}, {"start": 428.0, "end": 433.0, "text": " I used MNIST digits, so the same as in the beginner video, the same data set."}, {"start": 433.0, "end": 438.0, "text": " And basically you can see this row here that's real imagery from MNIST."}, {"start": 438.0, "end": 444.0, "text": " And after the generator is trained, it can generate data that looks like somebody could have written those."}, {"start": 444.0, "end": 447.0, "text": " Like, looks totally real."}, {"start": 447.0, "end": 451.0, "text": " You couldn't probably distinguish between this row and this one here."}, {"start": 451.0, "end": 457.0, "text": " And aside from that one, I also have this conditional GAN,"}, {"start": 457.0, "end": 460.0, "text": " where you can additionally control which class you want to generate."}, {"start": 460.0, "end": 466.0, "text": " So you can condition it and say, hey, I want a zero, and we just generate zeros."}, {"start": 466.0, "end": 468.0, "text": " You can see in the column here."}, {"start": 468.0, "end": 476.0, "text": " Finally, this deep convolutional GAN or DC GAN for short is super famous architecture."}, {"start": 476.0, "end": 479.0, "text": " And I've just picked a different data set here."}, {"start": 479.0, "end": 485.0, "text": " You can also use MNIST, but I've picked this Celeb A data set of human faces."}, {"start": 485.0, "end": 492.0, "text": " And you can see how it's learning to generate faces of humans."}, {"start": 492.0, "end": 495.0, "text": " And this is how it looks like after the generator is trained."}, {"start": 495.0, "end": 502.0, "text": " Obviously not undistinguishable from the real data, but this was a model that was developed, I think, in 2016."}, {"start": 502.0, "end": 508.0, "text": " So you have much better models now, state of the art, like StyleGAN version 2."}, {"start": 508.0, "end": 516.0, "text": " And I've also developed this Jupyter Notebook, which you can use to better understand how this all works."}, {"start": 516.0, "end": 520.0, "text": " And hopefully, you'll find that useful."}, {"start": 520.0, "end": 524.0, "text": " A couple of tips on how you should approach these projects."}, {"start": 524.0, "end": 530.0, "text": " The first MNIST project, I just recommend you go ahead and read the original paper by Gatiss."}, {"start": 530.0, "end": 536.0, "text": " And it's not super tough, even if you're intermediate, since you're watching this video."}, {"start": 536.0, "end": 541.0, "text": " But even if you're kind of a beginner, go ahead and try and give it a try."}, {"start": 541.0, "end": 552.0, "text": " As for Deep Dream, they have a blog that they wrote a couple of years ago, like Crisola and Alexander Mertvinsev."}, {"start": 552.0, "end": 559.0, "text": " And you should go ahead and read that one before you go ahead and try and implement that code yourself."}, {"start": 559.0, "end": 566.0, "text": " Also, just feel free to go ahead and examine other people's code, including mine, obviously."}, {"start": 566.0, "end": 569.0, "text": " And that will help you develop your own version."}, {"start": 569.0, "end": 573.0, "text": " But try and develop the core parts by yourself."}, {"start": 573.0, "end": 575.0, "text": " Don't just copy-paste code."}, {"start": 575.0, 
"end": 579.0, "text": " That beats the purpose, because you want to learn here."}, {"start": 579.0, "end": 585.0, "text": " And when it comes to GANs, I definitely do not recommend you go ahead and try and read the original paper."}, {"start": 585.0, "end": 586.0, "text": " It's super tough."}, {"start": 586.0, "end": 591.0, "text": " Lots of math, unless you're familiar with mathematics for machine learning, just skip that one."}, {"start": 591.0, "end": 594.0, "text": " Go ahead and find useful blogs online."}, {"start": 594.0, "end": 600.0, "text": " There are many GAN blogs out there, and that will help you understand the basic logic."}, {"start": 600.0, "end": 607.0, "text": " And then the most important thing here is that you try and implement the training loop yourself."}, {"start": 607.0, "end": 609.0, "text": " That's where the whole brain is."}, {"start": 609.0, "end": 611.0, "text": " Okay, that was pretty much it."}, {"start": 611.0, "end": 618.0, "text": " Going forward, I'll be developing some even more advanced projects like graph neural networks and transformers."}, {"start": 618.0, "end": 624.0, "text": " You can expect a video in a couple of months, me suggesting some projects you should do yourself,"}, {"start": 624.0, "end": 628.0, "text": " which are for more advanced machine learning practitioners."}, {"start": 628.0, "end": 632.0, "text": " I'd love to hear your thoughts on these three projects."}, {"start": 632.0, "end": 637.0, "text": " Do you have any project which you think I should have included here?"}, {"start": 637.0, "end": 640.0, "text": " Just let me know your thoughts down in the comment section."}, {"start": 640.0, "end": 647.0, "text": " If you're new to this channel, consider subscribing and hit that bell icon to get notified when I upload a new video."}, {"start": 647.0, "end": 670.0, "text": " Until next time, keep learning deep."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=yhhSYk9zt1w
3 Machine Learning Projects For Beginners (Highly visual) | 2021
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I'll recommend you 3 beginner-friendly machine learning projects. It'll probably take you a week to finish each one of these. You'll learn about: ✔️ beginner-friendly ML project ideas Note: I mention 3 versions of the YOLO model in the video. What I meant were YOLO models developed by the original author, recently v4 and v5 have been developed but not by the original author. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ Create new MNIST digits: https://github.com/gordicaleksa/pytorch-gans ✅ PASCAL VOC: https://pjreddie.com/projects/pascal-voc-dataset-mirror/ ✅ YOLO v3: https://github.com/ultralytics/yolov3 Note: You can use the PASCAL dataset I linked as it has less than 500 MBs but you can in general just download 50/100/1000 images from some dataset and develop your small image search engine. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:42 Project #1: 50 shades of MNIST 2:22 Confusion matrix matters 3:10 Add new MNIST digits (manually/using GANs) 4:15 Project #2: Toy image search engine 6:04 Project #3: Object Detection Pipeline (YOLO) 7:19 Level up your projects Credit for the burger image: https://towardsdatascience.com/find-similar-images-using-autoencoders-315f374029ea ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #machinelearning #beginnerprojects #deeplearning
What's up? In this video I'm going to recommend you three different highly visual machine learning projects which are going to help you better understand machine learning and also improve your coding skills. Many of you go through Coursera courses and learn a bunch of machine learning theory, but not so many of you go and actually develop a machine learning project. Doing that will, first, help you learn machine learning much better, because you'll notice certain things you thought you knew and realize you actually didn't know them, and secondly, it will be a really awesome thing to have in your CV, much better than having a bunch of random skills and no code to back them up.

That being said, let me recommend you the first project, and it's going to be MNIST digit classification. Now I already hear you say, dude, don't give me this super mainstream toy project, it's basically the hello world of computer vision. But I'm going to give you a couple of tips to make this a really good beginner-friendly machine learning project. The reason I'm recommending this as the first project is that you'll be focusing on the model, on the neural networks, and not on the data. Much of the day-to-day machine learning job is about data engineering, analyzing your data and visualizing your data, but in this project you're going to focus mostly on the machine learning model and just track its performance via accuracy metrics. The first thing I want you to do is take some off-the-shelf model, like VGG16, VGG19 or MobileNets, explore those off-the-shelf models which are already pre-trained for you, and just develop the end-to-end pipeline: load the data, use the off-the-shelf model, and track the accuracy metric. After you've done that, go ahead and try to develop your own neural network from scratch, for a start using a pure feed-forward neural network. As a third step, after you've developed the feed-forward neural network and tracked its accuracy, you're going to do it with a convolutional neural network, a CNN, and again track the accuracy. Then you can compare the three approaches you took: the off-the-shelf model, the feed-forward neural network, and the CNN.

One more thing you're going to do for all three approaches is plot the confusion matrix. The confusion matrix basically tells you where your model is having a really hard time predicting the true label, the true digit. So say you have an image of the digit one, and you figure out that your model is often making the mistake of confusing it with the digit seven; that would make sense, right, because people tend to write ones and sevens alike. There's a lot of interesting insight you can get just by plotting the confusion matrix and seeing where your model is having a hard time, and by doing that you can maybe even augment your dataset, putting some more sevens and ones in it, so that your model learns to discriminate between those two better. Once you're done developing those three approaches, go ahead and maybe create your own data points that look like MNIST digits. Just use Paint or some similar program, manually draw a couple of digits, and check how your model generalizes to those images, which it didn't see during the training procedure. You can also use GANs (I'll link my repo in the description), which will help you generate new MNIST digits, and you can see whether your model can correctly predict the digit in that image. Finally, you can always take your project a step further by developing, say, a web application, deploying your model there, and letting others use it as a service. But that's arguably less machine learning and more software engineering, and also what's popularly known as MLOps. Anyway, it's just an option you also have at your disposal.

The second project I recommend you go ahead and try out is to develop your own small image search engine. That might sound frightening, but it's actually pretty simple. The goal will be, given a dataset of say a thousand images, to take a single image and find, say, five or ten images which are the most similar to your input image. Here is roughly how the project will look: you're going to find some dataset, like ImageNet or MS COCO or one of those popular computer vision datasets, and extract a thousand images out of it; that's a small preprocessing step. Then the smart part of the project is actually finding the code for that input image and for all of the other images in your dataset, and finding the five or ten images which are the most similar to your input image. How you'll find the code is you just feed the image into VGG, and from some deep layer you take the feature maps and flatten them; that's your code. And for the distance, you just compute a simple Euclidean distance between those two codes.

So in these first two projects you're going to learn how to consume models that are already there in the deep learning framework, the off-the-shelf models, you're going to learn how to develop some basic neural networks from scratch, and you'll also learn how to tweak existing models. For the image search engine you'll probably have to prune a couple of the last layers, like the fully connected layers from VGG or whatever you choose, and just flatten out, i.e. change the view of, those feature maps in order to get the image code.

In the third project you're going to leverage existing models that are not a part of the deep learning framework. You're going to develop an object detection pipeline using the popular object detection model called YOLO. There are three flavors of this model, version 1, version 2, and version 3, and I encourage you to go ahead and explore, read some blogs, maybe even the research papers depending on your skill, to understand how those work. You'll just use the model out of the box, so you won't be training it yourself, because it takes a lot of time to train YOLO. You're just going to use it as it is and develop some box filtering depending on which classes you want to keep. Some models are trained on MS COCO, some are trained on Pascal VOC, so they'll have a different number of classes: the COCO version will have 80 classes, I think, and Pascal VOC will have 20 classes. So pick the version you want, do the filtering, figure out which inputs it expects, and voilà, you've built yourself an object detection pipeline. The nice thing about YOLO is that it's super fast. It can run in real time, and that means 30 FPS or even higher, depending on the flavor of the model you took. A simple way to level up this project would be to figure out a way to deploy it to some mobile application, where you could use your mobile camera to detect certain objects in real time.

So that's it. These three projects will give you hands-on experience with machine learning, and you're going to learn a lot: tweaking models, playing with the data, visualizing, coding it yourself, and publishing it to GitHub. Also feel free to check out my other videos, and feel free to check out my GitHub; I've been developing a lot of deep learning projects recently, and I'll be including those in the next video on advanced projects, which will help you get even better at deep learning. I hope you found this video useful; if you did, go ahead and subscribe, and make sure to click that bell icon so that you get notified when I upload a new video. Finally, if you have any other interesting beginner-friendly projects which you think others may find useful, go ahead and write them down in the comment section; I'd love to know about them. Feel free to ask any questions, I'll make sure to answer all of them. Until next time, keep learning.
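A minimal sketch of the confusion-matrix step suggested above, assuming a trained PyTorch classifier, a standard MNIST test loader, and that scikit-learn and matplotlib are installed; the function name and structure are just illustrative.

import torch
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

@torch.no_grad()
def plot_confusion(model, test_loader, device="cpu"):
    model.eval()
    all_preds, all_labels = [], []
    for images, labels in test_loader:
        logits = model(images.to(device))
        all_preds.append(logits.argmax(dim=1).cpu())
        all_labels.append(labels)
    preds = torch.cat(all_preds).numpy()
    labels = torch.cat(all_labels).numpy()
    cm = confusion_matrix(labels, preds)  # rows: true digit, cols: predicted digit
    ConfusionMatrixDisplay(cm, display_labels=list(range(10))).plot()
    plt.show()
    return cm

Rows are the true digits and columns the predictions, so a large off-diagonal entry at row 1, column 7 is exactly the "ones confused with sevens" situation described above, and a hint about which digits to augment.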
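The "image code" idea for the search engine above (truncate VGG, flatten a deep feature map, rank by Euclidean distance) could look roughly like the sketch below. It assumes torchvision's VGG16, images already resized to 224x224 and normalized, and an older torchvision API where pretrained=True is accepted (newer releases use the weights argument); it is a sketch of the approach, not a tuned search engine.

import torch
from torchvision.models import vgg16

@torch.no_grad()
def build_codes(images):
    # images: (N, 3, 224, 224), already normalized with the ImageNet statistics
    model = vgg16(pretrained=True).features.eval()  # keep only the conv layers, drop the classifier
    feats = model(images)                            # (N, 512, 7, 7) feature maps from the last conv block
    return feats.flatten(start_dim=1)                # (N, 512*7*7): one "code" per image

@torch.no_grad()
def most_similar(query_code, codes, k=5):
    # Euclidean distances between the query code and every code in the dataset
    dists = torch.cdist(query_code.unsqueeze(0), codes).squeeze(0)
    return torch.topk(dists, k, largest=False).indices  # indices of the k nearest images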
[{"start": 0.0, "end": 3.42, "text": " What's up in this video? I'm going to recommend you three different"}, {"start": 4.26, "end": 7.58, "text": " Highly visual machine learning projects which are going to help you better"}, {"start": 8.18, "end": 11.42, "text": " Understand machine learning and also improve your coding skills"}, {"start": 11.42, "end": 16.86, "text": " So many of you go through Coursera courses and you learn a bunch of machine learning theory"}, {"start": 17.22, "end": 22.66, "text": " but not so many of you go and actually develop a machine learning project and that will a"}, {"start": 23.06, "end": 29.42, "text": " Help you learn machine learning much better because you'll notice certain things you thought in you"}, {"start": 29.42, "end": 32.32, "text": " and you'll realize you actually didn't know them and"}, {"start": 33.02, "end": 40.02, "text": " Secondly it will be really awesome thing to have in your CV and much better than having bunch of random skills and"}, {"start": 40.46, "end": 46.660000000000004, "text": " No code to back it up that being said let me recommend you the first project and it's going to be amnest digit"}, {"start": 46.900000000000006, "end": 52.82000000000001, "text": " Classification now I already hear you say like dude. Don't give me this super mainstream toy project"}, {"start": 52.82000000000001, "end": 55.06, "text": " It's the hello world of computer vision basically"}, {"start": 55.06, "end": 61.78, "text": " But I'm going to give you a couple of tips to make this super good beginner friendly machine learning project"}, {"start": 61.78, "end": 68.22, "text": " so the reason I'm recommending this project as the first project is that you'll be focusing on the model on the neural networks and"}, {"start": 68.34, "end": 70.34, "text": " not on the data and"}, {"start": 70.82000000000001, "end": 74.7, "text": " Much of the machine learning day-to-day job is about data engineering"}, {"start": 75.58, "end": 79.18, "text": " Analyzing your data and visualizing your data, but in this project"}, {"start": 79.18, "end": 85.54, "text": " You're going to focus on the machine learning model mostly and just tracking its performance via"}, {"start": 86.14, "end": 88.14000000000001, "text": " accuracy metrics so"}, {"start": 88.42, "end": 95.42000000000002, "text": " The first thing I want you to do is just take some off-the-shelf model like VGG 16 VGG 19 mobile nets"}, {"start": 96.26, "end": 98.46000000000001, "text": " explore those off-the-shelf models"}, {"start": 98.46000000000001, "end": 104.18, "text": " Which are already pre trained for you and just develop the end-to-end pipeline so load the data"}, {"start": 104.18, "end": 109.82000000000001, "text": " I use off-the-shelf model and track the accuracy metric after you've done that"}, {"start": 110.14, "end": 115.42, "text": " Go ahead and try and develop your own neural network from scratch for start using"}, {"start": 116.18, "end": 118.66000000000001, "text": " like pure feed-forward neural network and"}, {"start": 119.46000000000001, "end": 121.54, "text": " as a third step after you've"}, {"start": 122.30000000000001, "end": 129.5, "text": " Developed the feed-forward neural network, and you track the accuracy you're going to do it with convolutional neural network"}, {"start": 129.94, "end": 131.3, "text": " CNN and"}, {"start": 131.3, "end": 137.58, "text": " You're going to again track the accuracy and then you'll have you can compare the three approaches"}, {"start": 137.58, "end": 
142.06, "text": " You took so the off-the-shelf model the feed-forward neural network and the CNN's"}, {"start": 142.46, "end": 146.34, "text": " one more thing you're going to do for all of the three approaches here is"}, {"start": 146.86, "end": 153.38000000000002, "text": " plot the confusion matrix and the confusion matrix basically tells you where your model is having a really hard time"}, {"start": 153.86, "end": 158.68, "text": " predicting the true label the true digit so say you have like an image of"}, {"start": 159.26000000000002, "end": 160.86, "text": " digit one and"}, {"start": 160.86, "end": 164.14000000000001, "text": " You figure out that your model is often making mistake"}, {"start": 164.54000000000002, "end": 171.38000000000002, "text": " Confusing it with number seven and that would make sense right because ones and sevens people tend to write those alike"}, {"start": 171.38000000000002, "end": 177.06, "text": " There's a lot of interesting insight you can get just plotting confusion matrix and seeing where your models is having"}, {"start": 177.74, "end": 180.74, "text": " hard time and by doing that you can maybe even"}, {"start": 181.26000000000002, "end": 188.26000000000002, "text": " augment your data set and maybe put some more like sevens and ones in the data set so that your model will learn how to"}, {"start": 188.26, "end": 192.78, "text": " Discriminate between those two better so once you're done with"}, {"start": 193.62, "end": 195.62, "text": " developing those three approaches"}, {"start": 196.34, "end": 198.5, "text": " Go ahead and maybe"}, {"start": 199.42, "end": 203.14, "text": " Create your own data points that look like amnest digits"}, {"start": 203.29999999999998, "end": 207.98, "text": " So just go like use paint or some program and manually"}, {"start": 208.45999999999998, "end": 212.26, "text": " Draw a couple of digits and and just check out how your model is"}, {"start": 212.78, "end": 216.66, "text": " Generalizing to those images which the model didn't see during the training"}, {"start": 216.66, "end": 222.54, "text": " Procedure and also you can use maybe GANs. 
I'll link my repo in the description"}, {"start": 222.94, "end": 228.74, "text": " Which will help you generate new amnest digits and you can see whether your model is"}, {"start": 229.42, "end": 235.54, "text": " Can correctly predict the digit that's in that image finally you can always take your project a step further by"}, {"start": 236.34, "end": 238.54, "text": " developing say web application and"}, {"start": 239.54, "end": 244.57999999999998, "text": " Deploying your model there and just letting others maybe use it as a service"}, {"start": 244.58, "end": 252.26000000000002, "text": " But that's less of an machine learning arguably more of a like software engineering and also machine learning"}, {"start": 252.94, "end": 254.94, "text": " ML ops popularly known as"}, {"start": 255.66000000000003, "end": 258.54, "text": " Anyways, there's just an option you also have on your disposal"}, {"start": 259.02000000000004, "end": 263.3, "text": " so the second project that recommends you go ahead and try out is"}, {"start": 264.7, "end": 270.02000000000004, "text": " Develop your small image search engine and that might sound frightening"}, {"start": 270.02000000000004, "end": 272.90000000000003, "text": " But it's actually pretty simple the goal will be to"}, {"start": 272.9, "end": 280.38, "text": " Given your data set of say thousand images to take a single image and find say five or ten images"}, {"start": 280.7, "end": 287.73999999999995, "text": " Which are the most similar to your input image? So the the how the project will approximately look like is you're going to"}, {"start": 288.73999999999995, "end": 294.09999999999997, "text": " Find some data set like image net or MSCoco or some of those popular computer vision"}, {"start": 294.73999999999995, "end": 300.82, "text": " Data sets you're gonna extract thousand images out of those and it's a small preprocessing step"}, {"start": 300.82, "end": 305.7, "text": " Maybe and then the smart part of the project is actually finding the code for that input image"}, {"start": 305.7, "end": 310.62, "text": " And for all of the other images in your data set and just finding five or say ten"}, {"start": 311.38, "end": 318.3, "text": " Images which are the most similar to your input image how you'll find the code is just input the image into VGG and"}, {"start": 318.78, "end": 320.78, "text": " from some deep layer, you'll take"}, {"start": 321.58, "end": 326.78, "text": " Feature maps flatten them and that's your code and then how you calculate the distance"}, {"start": 326.78, "end": 331.34, "text": " There's just simple Euclidean distance between those two codes"}, {"start": 331.34, "end": 339.09999999999997, "text": " So in these two first projects, you're going to learn how to just consume models that are already there in the deep learning framework"}, {"start": 339.65999999999997, "end": 345.41999999999996, "text": " Like off-the-shelf models, you're going to learn how to develop some basic neural networks from scratch"}, {"start": 345.97999999999996, "end": 350.26, "text": " And you'll also learn how to tweak the existing models like for the image search engine"}, {"start": 350.26, "end": 355.9, "text": " You'll probably have to prune a couple of the last layer like fully connected layer from the image search engine"}, {"start": 355.9, "end": 362.7, "text": " Connected layer from VGG or whatever and just flatten out change the view of those feature maps in order to get the"}, {"start": 363.02, "end": 367.34, "text": " The image code in 
this third project you're going to leverage"}, {"start": 367.85999999999996, "end": 375.9, "text": " Existing models that are not a part of the deep learning framework. You're going to develop an object detection pipeline using this popular"}, {"start": 376.26, "end": 379.17999999999995, "text": " object detection model called YOLO and"}, {"start": 379.46, "end": 384.06, "text": " There are three flavors of this model version 1 version 2 and version 3"}, {"start": 384.06, "end": 387.02, "text": " I encourage you to go ahead and explore read some blocks"}, {"start": 387.54, "end": 391.52, "text": " Maybe even research papers depending on your skill how those work"}, {"start": 391.52, "end": 394.7, "text": " You'll just go ahead and use the model out of the box"}, {"start": 394.7, "end": 399.62, "text": " So you won't be training it for yourself because it takes a lot of time to train your YOLO"}, {"start": 399.78000000000003, "end": 406.06, "text": " You're just going to use it as it is and develop some box filtering depending on which classes you want to use"}, {"start": 406.06, "end": 410.74, "text": " Some models are trained on MSCoco. Some are trained on Pascal VOC. So they'll be having"}, {"start": 410.74, "end": 415.7, "text": " different number of classes the Coco version will have 80 classes I think and then"}, {"start": 416.02, "end": 421.86, "text": " Pascal VOC will have 20 classes. So pick the version you want do the filtering and just"}, {"start": 422.38, "end": 427.94, "text": " Figure out which images it expects and voila you built yourself an object detection pipeline"}, {"start": 427.94, "end": 432.02, "text": " The nice thing about YOLO is that it's super fast. It can run real-time"}, {"start": 432.02, "end": 437.26, "text": " So that means 30 FPS or or even higher depending on the flavor of the model"}, {"start": 437.26, "end": 441.74, "text": " you took a simple way to just kind of level up this project would be to"}, {"start": 442.42, "end": 449.58, "text": " Figure out a way to deploy this to some mobile application where you could be using your your mobile camera to be to detect"}, {"start": 449.58, "end": 456.53999999999996, "text": " Some certain objects real-time so that's it these three projects will give you a hands-on experience with machine learning"}, {"start": 456.53999999999996, "end": 461.21999999999997, "text": " And you're going to learn a lot tweaking models playing with the data visualizing"}, {"start": 461.86, "end": 466.3, "text": " Coding it yourself and publishing it to github also feel free to check out our other videos"}, {"start": 466.3, "end": 467.94, "text": " Also, feel free to check out my github"}, {"start": 467.94, "end": 471.98, "text": " I've been developing a lot of deep learning projects recently"}, {"start": 471.98, "end": 478.38, "text": " And I'll be including those in the next video on advanced projects which will help you get even better at deep learning"}, {"start": 478.38, "end": 482.1, "text": " So I hope you found this video useful if you did go ahead and subscribe"}, {"start": 482.94, "end": 484.94, "text": " Also, make sure to click that"}, {"start": 485.22, "end": 491.34000000000003, "text": " Bell icon so that you get notified when I upload a new video and finally if you have any other interesting beginner friendly"}, {"start": 491.34, "end": 496.85999999999996, "text": " Friendly projects which you think others may find useful go ahead and write them down in the comment section"}, {"start": 496.85999999999996, "end": 
502.85999999999996, "text": " I'd love to know about them. Feel free to ask any questions. I'll make sure to answer all them until next time keep"}, {"start": 502.86, "end": 521.86, "text": " learning"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=NuJB-RjhMH4
PyTorch or TensorFlow?
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Should you pick PyTorch or TensorFlow? You'll learn: ✔️ A brief history of both frameworks ✔️ How they compare in the research community ✔️ How they compare in shipping to production ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ TF vs PT papers: http://horace.io/pytorch-vs-tensorflow/ ✅ Google Trends: https://trends.google.com/trends/ ✅ TF GitHub: https://github.com/tensorflow/tensorflow ✅ PT GitHub: https://github.com/pytorch/pytorch ✅ OpenAI blog: https://openai.com/blog/openai-pytorch/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 - Are there any other frameworks? 0:30 - Google Trends (PyTorch vs TensorFlow) 2:27 - Dimension 1: Ease of development & research 4:22 - Data-driven conclusions 5:55 - Dimension 2: Can we ship it? 7:12 - PyTorch is catching up? 7:45 - So what should I use? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #pytorch #tensorflow #deeplearning
Five years ago, if you asked me this question, which deep learning framework should you use, I'd be telling you about six deep learning frameworks: MXNet, CNTK, Chainer, Keras, Theano, and Caffe. Fast forward to 2020: they are pretty much all dead, and the only two frameworks that matter are TensorFlow and PyTorch. Unless you're developing some exotic JVM deep learning application, in which case you'll be using DL4J.

Okay, so let's briefly go through the history of how both frameworks, PyTorch and TensorFlow, came to be. Using Google Trends here, we can see that the initial release of TensorFlow happened in November 2015, and we had a huge spike here, as it was heavily hyped, like the whole deep learning field currently is, and then some bust here. Fast forward a year: PyTorch was initially released in, I think, September 2016, and by then TensorFlow had already gained a lot of traction, as you can see here. It took a while, but fast forward to 2020 and they've pretty much converged here. Let's briefly switch to the worldwide view here, and you can see that TensorFlow is still more popular than PyTorch. Also, if we take a look at the GitHub repos, TensorFlow has got 148k stars, whereas PyTorch has around 50k stars. Now, looking at this data from Google Trends and GitHub, you may say, well, okay, PyTorch kind of caught up, but TensorFlow is still more popular. But is it so?

In this video, I'm going to give you an overview of these two frameworks along two dimensions. The first one is the general ease of development: how quickly can you prototype something, how quickly can you do research? The second dimension is, can you deploy it: how easy is it to deploy your models once you've trained them and want to push them to production? Now, what this video is not: it's not me telling you go use TensorFlow or go use PyTorch, because neither Google nor Facebook is paying me to do this video. What this video is, is my review based on the research I've done on this topic. After this video, you'll know which framework makes more sense depending on your particular context.

So, unfortunately, TensorFlow has got this nasty history with static graphs. That was TensorFlow version 1.0, where you basically had to define a static graph of your neural network before you could start using it; it was very hard to debug, it was not Pythonic, and they are paying the price for that now. In the meanwhile, TensorFlow released TensorFlow 2.0, where they pretty much copied the paradigm PyTorch is using, and that's dynamic graphs, where you basically create the graph of your neural network as you'd write a simple Python program. I said nasty because now they have problems with legacy docs: is this 1.0 or 2.0? If you go and search your question on Stack Overflow, you'll sometimes get an answer for version 1.0 which totally does not make sense for TensorFlow 2.0. TensorFlow 2.0 is also referred to as eager execution, and although it did bring dynamic graphs with it, it's way slower than PyTorch, and that's the second bad thing. Looking at the API itself, now in 2020 they pretty much have the same APIs; they've converged. Even one of the co-authors of PyTorch said himself in a tweet that it doesn't make sense to compare them anymore, because they've converged so much in that sense. PyTorch, on the other hand, was Pythonic from the very start. It's super easy to learn, it's got awesome documentation, it's got an awesome community, and there is no ambiguity between versions, like can I use this one, can I use that one? It's simple.

So let's look at some curves. I don't want this to be me ranting about how PyTorch is much better for research; let's back this up with some numbers. There is this awesome website, which I'll link in the description, which shows the relative popularity of TensorFlow and PyTorch. If we look at the first graph here, everything above 50% means that PyTorch is more popular. Looking at the most famous conferences on computer vision, like CVPR, on natural language processing, like EMNLP, or the more classic deep learning and machine learning conferences, like NeurIPS and ICLR, PyTorch is pretty much beating TensorFlow. If we look at some other graphs, like the percentage of papers written in a given framework, we can see that at CVPR around 30% of the papers were written in PyTorch, whereas only 7.7% of the papers were written in TensorFlow, and that number is going down, whereas the PyTorch trend is not slowing down at all. Finally, the last plot just shows the sheer number of papers written in each framework: CVPR again had 418 papers written in PyTorch, whereas only 113 papers were written in TensorFlow, and again the trends: PyTorch is going up, TensorFlow is going down. It's pretty obvious that PyTorch is killing it in the research community.

Now let's take a look at the second dimension, the deploy-the-damn-model-to-production dimension. TensorFlow is much more mature along this dimension. First, it's backed by Google; secondly, it simply had more time to mature, one year to be precise. So it's got TensorFlow Serving: say you have a web app with your model served behind it, and what Serving allows you to do is seamlessly update the model on the fly without the users ever noticing that it happened. So it's got really strong support for that. It also has strong support for deploying your models to mobile devices and to different kinds of IoT and embedded devices; that's known as TensorFlow Lite. And it also has this thing called TensorFlow.js, which enables you to deploy your models to the browser. PyTorch, on the other hand, has its own equivalents, like TorchServe and PyTorch Mobile, which do pretty much the same things; they are just much less mature than the TensorFlow versions. They both came out maybe less than a year ago, and they still need time to become as mature as TensorFlow is in that respect. On a good note, many companies like OpenAI are embracing PyTorch as their official framework of choice, which is really reassuring. Also, Tesla is using PyTorch heavily, and Andrej Karpathy has been promoting PyTorch all over the place. Microsoft officially became the PyTorch maintainer for the Windows platform. So even big companies are starting to use PyTorch, which is a cool thing, because they obviously have to deploy their models, and that means they are betting that PyTorch will eventually mature in that sense.

Here's a general conclusion: if you're a startup or a business and you want to ship a product that has machine learning or deep learning components in it, it's probably a safer bet to go with TensorFlow, since it's got a really mature ecosystem for deploying your models. On the other hand, if you're willing to bet that PyTorch will get there eventually, and you want the ease of development and to be able to use the best research out there, I'd go with PyTorch. And for anyone who doesn't have their own business and just wants to learn deep learning, I strongly suggest you start with PyTorch. As a final note, it's worth mentioning that fast.ai is teaching its deep learning courses in PyTorch, and Stanford also started teaching its courses in PyTorch, which will, in my opinion, bias new graduates and new PhD students towards loving the framework and starting their own startups using PyTorch. That will put the momentum on PyTorch's side, which until now was on TensorFlow's side. So I hope you liked this video and found it useful. I'd love to know what your favorite framework is and why; just comment down in the comment section, I'd love to hear your opinions on this one. Also subscribe to this channel and gently click that bell icon so that you get notified when I upload a new video. Until next time, keep learning.
[{"start": 0.0, "end": 6.98, "text": " Five years ago if you asked me the same question which deep learning framework should they use I'll be telling you about six deep learning"}, {"start": 7.08, "end": 8.1, "text": " frameworks"}, {"start": 8.1, "end": 10.0, "text": " MX net CNTK"}, {"start": 10.0, "end": 13.24, "text": " Chainer Keras Tiano and cafe"}, {"start": 13.84, "end": 15.84, "text": " fast forward to 2020"}, {"start": 16.04, "end": 18.04, "text": " they are pretty much all dead and"}, {"start": 19.32, "end": 24.62, "text": " The only two frameworks that matter are tensorflow and PyTorch unless if you're developing some"}, {"start": 24.62, "end": 30.020000000000003, "text": " exotic JVM deep learning applications you'll be using DL for J"}, {"start": 30.020000000000003, "end": 37.58, "text": " Okay, so let's briefly go through the history of how both frameworks both by Torch and tensorflow came to be so using Google Trends here"}, {"start": 37.58, "end": 41.34, "text": " We can see that the initial release of tensorflow happened in November"}, {"start": 42.7, "end": 46.5, "text": " 2015 and we had a huge spike here as it was probably"}, {"start": 47.34, "end": 52.900000000000006, "text": " Like heavily hyped as the whole deep learning field currently is and then we had some bust here"}, {"start": 52.9, "end": 54.9, "text": " So fast forward a year later"}, {"start": 55.58, "end": 56.94, "text": " PyTorch"}, {"start": 56.94, "end": 61.379999999999995, "text": " was initially released in I think September 2016 and"}, {"start": 62.06, "end": 67.78, "text": " By then tensorflow already gained a lot of traction as you can see here and it took a while"}, {"start": 68.06, "end": 73.38, "text": " But fast forward to 2020 they pretty much converged here. 
Let's briefly go into the worldwide"}, {"start": 74.1, "end": 78.5, "text": " View here and you can see the tensorflow is still more popular than"}, {"start": 78.5, "end": 82.66, "text": " Than PyTorch also if we take a look at"}, {"start": 83.98, "end": 86.1, "text": " GitHub repos tensorflow has got"}, {"start": 87.86, "end": 89.86, "text": " 148k stars and"}, {"start": 90.66, "end": 97.38, "text": " Whereas PyTorch has around 50k stars now looking at this data from Google Trends and github"}, {"start": 97.38, "end": 103.46000000000001, "text": " You may say well, okay like PyTorch kind of caught up, but tensorflow is still more popular"}, {"start": 103.9, "end": 105.9, "text": " But is it so now in this video?"}, {"start": 105.9, "end": 108.58000000000001, "text": " I'm going to give you an overview of this to"}, {"start": 109.22, "end": 111.54, "text": " frameworks along two dimensions"}, {"start": 111.62, "end": 116.94000000000001, "text": " So the first one being at the general ease of development how quickly can you prototype something?"}, {"start": 116.94000000000001, "end": 121.42, "text": " How quickly can you do research and the second dimension is can you deploy it?"}, {"start": 121.42, "end": 127.58000000000001, "text": " How easy is to deploy your models once you train them and push them to production now what this video is not"}, {"start": 127.62, "end": 135.02, "text": " It's not me telling you go use tensorflow or go use PyTorch because neither Google nor Facebook is paying me to do this video"}, {"start": 135.02, "end": 137.94, "text": " So what this video is it's my review"}, {"start": 138.78, "end": 142.78, "text": " Based on my research I've done on this topic after this video"}, {"start": 142.82000000000002, "end": 147.3, "text": " You'll know which framework makes more sense depending on your particular context"}, {"start": 148.26000000000002, "end": 152.5, "text": " so unfortunately tensorflow has got this nasty history with static graphs and"}, {"start": 152.98000000000002, "end": 158.02, "text": " That was the tensorflow version 1.0 where you basically had to define a"}, {"start": 158.38, "end": 162.78, "text": " static graph of your neural network before you start using it and it was"}, {"start": 162.78, "end": 166.46, "text": " very hard to debug it was not by tonic and"}, {"start": 166.98, "end": 170.02, "text": " They are paying the price now because of that in the meanwhile"}, {"start": 170.54, "end": 176.38, "text": " tensorflow released tensorflow 2.0 where they basically pretty much copied the"}, {"start": 177.42000000000002, "end": 181.42000000000002, "text": " paradigm PyTorch is using and that's dynamic graphs where you basically"}, {"start": 181.98, "end": 187.94, "text": " Create the graph of your neural network like you'd write a simple Python program"}, {"start": 187.94, "end": 196.46, "text": " so I said nasty because now they have problems with legacy docs is this 1.0 or 2.0 if you go and search your question on stackable flow you'll"}, {"start": 197.3, "end": 202.57999999999998, "text": " sometimes get answer from version 1.0 which totally does not make sense for"}, {"start": 203.57999999999998, "end": 210.1, "text": " tensorflow 2.0 so 2.0 is also referred to as eager execution and"}, {"start": 210.42, "end": 213.54, "text": " Although it did brought dynamic graphs with it"}, {"start": 213.54, "end": 220.82, "text": " It's way slower than PyTorch is and that's the second bad thing looking at the API itself now in 2020"}, {"start": 
221.29999999999998, "end": 226.26, "text": " They pretty much have the same APIs. They've converged even"}, {"start": 226.98, "end": 231.01999999999998, "text": " The one of the co-authors of PyTorch said himself in this tweet"}, {"start": 231.57999999999998, "end": 236.5, "text": " That it doesn't make any sense to compare them anymore because they've converged so much in that sense"}, {"start": 236.5, "end": 239.26, "text": " PyTorch on the other hand was Pythonic from the very start"}, {"start": 239.26, "end": 244.38, "text": " It's super easy to learn it's got awesome documentation. It's got awesome community"}, {"start": 244.38, "end": 249.17999999999998, "text": " And there is no ambiguity between which version is it can I use this one can I use this one?"}, {"start": 250.14, "end": 253.38, "text": " It's simple, so let's look at some curves. I"}, {"start": 253.98, "end": 259.86, "text": " Don't want this to be me ranting about PyTorch is much better in research. Let's back this up with some numbers"}, {"start": 259.86, "end": 263.21999999999997, "text": " Okay, so there is this awesome website, which I'll link in the description"}, {"start": 263.22, "end": 268.74, "text": " Which is showing us the like relative popularity between tensorflow and PyTorch"}, {"start": 268.74, "end": 274.22, "text": " And if we look at the first graph here, you can see everything above 50% means that"}, {"start": 274.86, "end": 276.66, "text": " PyTorch is better"}, {"start": 276.66, "end": 281.98, "text": " so looking at the most famous conferences on computer vision like CVPR or a"}, {"start": 282.3, "end": 289.62, "text": " natural language processing like EMNLP or some more classic deep learning machine learning conferences like NIPs"}, {"start": 289.62, "end": 293.42, "text": " and I see LR"}, {"start": 293.94, "end": 296.54, "text": " PyTorch is pretty much beating tensorflow"}, {"start": 297.02, "end": 302.42, "text": " If we look at some other graphs like the percentage of papers written in certain framework"}, {"start": 302.42, "end": 307.94, "text": " We can see that CVPR like we 30% of the papers were written in in PyTorch whereas"}, {"start": 308.5, "end": 310.5, "text": " only 7.7%"}, {"start": 311.34000000000003, "end": 315.38, "text": " Of the papers were written in tensorflow and we can see that the number is going down"}, {"start": 315.38, "end": 319.9, "text": " Whereas the PyTorch trend is not slowing down at all"}, {"start": 320.46, "end": 326.3, "text": " Finally the last plot just shows us the like the sheer number of papers written in certain framework and"}, {"start": 326.82, "end": 328.82, "text": " CVPR again had"}, {"start": 329.3, "end": 335.86, "text": " 418 papers written in PyTorch whereas only 113 papers were written in tensorflow and again the trends"}, {"start": 335.86, "end": 338.5, "text": " PyTorch is going up tensorflow is going down"}, {"start": 338.5, "end": 341.82, "text": " It's pretty obvious that PyTorch is killing it in the research"}, {"start": 341.82, "end": 347.7, "text": " community now let's take a look at the second dimension deploy the damn model to production"}, {"start": 348.26, "end": 350.26, "text": " dimension"}, {"start": 350.58, "end": 353.78, "text": " Now tensorflow is much more mature along this dimension"}, {"start": 353.78, "end": 359.53999999999996, "text": " So you have so first it's backed up by Google secondly simply had more time to mature"}, {"start": 360.02, "end": 367.34, "text": " One year to be precise. 
So it's got tensorflow serving so it's a broad term for let's say you have a web app"}, {"start": 367.34, "end": 374.38, "text": " You have your model served there and basically what it allows you to do is to seamlessly just kind of update the model on the fly"}, {"start": 374.38, "end": 380.97999999999996, "text": " Without the users ever noticing that happened. So it's got a really strong support for that. It also has a strong support for"}, {"start": 381.97999999999996, "end": 385.82, "text": " like deploying your models to mobile devices and also to"}, {"start": 386.34, "end": 392.38, "text": " different kinds of IOT devices embedded devices that's known as tensorflow light again and"}, {"start": 392.38, "end": 399.86, "text": " It also has this thing called tensorflow.js which enables you to deploy our models to the browser"}, {"start": 399.86, "end": 404.86, "text": " PyTorch on the other hand has got its own equivalents like PyTorch serve"}, {"start": 405.18, "end": 411.62, "text": " And PyTorch mobile which do pretty much the same things. They are just much less mature than tensorflow"}, {"start": 412.42, "end": 417.3, "text": " Versions they both came maybe less than a year ago"}, {"start": 417.3, "end": 424.78000000000003, "text": " And they still have a long time to just become as mature as tensorflow is in that aspect on the good note"}, {"start": 424.78000000000003, "end": 427.18, "text": " Many companies like open AI are"}, {"start": 427.7, "end": 432.5, "text": " Embracing PyTorch as their official framework of choice, which is really reassuring"}, {"start": 433.22, "end": 438.74, "text": " Also, Tesla is using PyTorch heavily and andrey karpathy is being"}, {"start": 439.46000000000004, "end": 442.5, "text": " Promoting PyTorch all over the place Microsoft's official"}, {"start": 442.5, "end": 449.06, "text": " became the PyTorch maintainer for Windows platform even big companies are starting to use PyTorch"}, {"start": 449.06, "end": 457.42, "text": " Which is a cool thing because they obviously have to deploy their models and that means they are betting that PyTorch will eventually"}, {"start": 458.74, "end": 460.74, "text": " Mature in that sense"}, {"start": 461.1, "end": 467.3, "text": " There's a general conclusion if you're a startup a business and you want a cheaper product that has a"}, {"start": 467.3, "end": 473.90000000000003, "text": " Machine learning deep learning components in it. 
It's probably safer bet to go with tensorflow with it's got a really mature"}, {"start": 474.42, "end": 476.42, "text": " ecosystem of"}, {"start": 476.66, "end": 482.14, "text": " Deploying your models on the other hand if you're willing to bet that PyTorch will get there eventually"}, {"start": 482.14, "end": 489.26, "text": " And you just want to have the ease of development and to be able to use the the best research out there"}, {"start": 489.74, "end": 491.18, "text": " I'd go with PyTorch"}, {"start": 491.18, "end": 496.1, "text": " So if you're a startup you want to be able to use the best research out there"}, {"start": 496.1, "end": 501.5, "text": " With PyTorch so for any one of you who doesn't have its own business you just want to learn deep learning"}, {"start": 501.5, "end": 503.74, "text": " I strongly suggest you start with"}, {"start": 504.42, "end": 506.3, "text": " PyTorch as a final note"}, {"start": 506.3, "end": 511.26000000000005, "text": " it's worth mentioning that fast AI is teaching its deep learning courses in PyTorch and"}, {"start": 512.1, "end": 517.3000000000001, "text": " Stanford also started teaching its courses in PyTorch which will in my opinion"}, {"start": 518.22, "end": 521.74, "text": " bias new graduates and new PhD students to"}, {"start": 521.74, "end": 526.7, "text": " Love the framework and to start developing their own startups using PyTorch"}, {"start": 527.1800000000001, "end": 533.82, "text": " Which will actually put the inertia moment on PyTorch side which was till now on tensorflow side"}, {"start": 533.82, "end": 535.82, "text": " So hope you liked this video and found it useful"}, {"start": 536.58, "end": 542.22, "text": " I'd love to know what's your favorite framework and why and just comment down in the comment section"}, {"start": 542.34, "end": 549.3, "text": " And I love your opinions on this one also subscribe to this channel and gently click that Bell icon"}, {"start": 549.3, "end": 553.5799999999999, "text": " So that you get notified when I upload a new video until next time keep learning"}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=2n_uoGOPoVk
How to learn PyTorch? (3 easy steps) | 2021
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I'll give you 3 easy steps you should follow to quickly get started with PyTorch - the most popular deep learning framework in the research community. You'll learn about: ✔️ How to quickly learn PyTorch ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ official tutorial: https://pytorch.org/tutorials/ ✅ blog: https://towardsdatascience.com/understanding-pytorch-with-an-example-a-step-by-step-tutorial-81fc5f8c4e8e ✅ GitHub: https://github.com/gordicaleksa/pytorch-deepdream/blob/master/playground.py Also make sure to code your own (toy) project in PyTorch, from scratch! ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 What is PyTorch? 0:47 60 min Blitz tutorial + Tensorboard 2:27 step by step tutorial (blog) 3:10 DeepDream playground + code your own project 4:25 Do you know of a better way? ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #pytorch #deeplearning #framework
In this video I'm going to show you how to quickly learn PyTorch in three easy steps. PyTorch is a deep learning framework developed by Facebook AI Research, and deep learning frameworks abstract away certain implementation details from you, like how to construct a convolutional layer or a linear layer, or how to do backpropagation. You end up calling a single line like loss.backward() and that's it, it calculates the gradients automatically for you. So it's super popular in the research community. Most of the papers published at the biggest conferences out there for deep learning, computer vision, NLP, etc. are using PyTorch. And allegedly it can also make your eyesight better and your skin clearer. So how can you learn PyTorch? Step number one. The official PyTorch tutorials are actually pretty decent. You should start by going through Deep Learning with PyTorch: A 60 Minute Blitz. It's going to teach you the basics of the framework. The chapter What is PyTorch? is going to teach you how to instantiate tensors, how to create tensors of zeros, ones, and random numbers, and how to convert back and forth between NumPy. The syntax is actually really similar to NumPy, which is one of the reasons PyTorch is so popular. Secondly, it's going to teach you some of the basics behind what happens when you call backward, i.e. when you do backpropagation. It's going to teach you how to construct a really basic neural network and define the loss and everything. And finally, Training a Classifier is going to teach you how to build a small, end-to-end deep learning pipeline, and that's super useful. After you're done with the 60 minute blitz, feel free to skip the next two chapters, Learning PyTorch with Examples and What is torch.nn Really?, as those go deep into how to construct these components yourself. They give you a better understanding, but you won't need them to start developing your own neural networks in PyTorch, so just feel free to skip them for now. Visualizing Models, Data and Training with TensorBoard might be useful, as TensorBoard is a super useful tool, especially when you start training your models. It will help you log your loss, your metrics, even your imagery, depending on what your task is about. Step number two. The second thing I'd recommend, and this is how I learned PyTorch, is to go through this blog called Understanding PyTorch with an example: a step-by-step tutorial. It's super structured and you have it all in one place. Just go end to end and you'll learn a bunch of stuff: the Dataset class, data loaders, how to create your own training loop, how to build your own simple network, the loss, and everything else in a single place. Really nice visualizations, really nice writing. Super recommended. It's a bit longer, but you'll finish it in a couple of hours, and then you're pretty much ready to start doing your own projects in PyTorch. And the third step: just start coding in PyTorch. I have a couple of PyTorch projects on my GitHub, and you can take a look at this Deep Dream repo. It's a super good way to start understanding how PyTorch works. If you go to playground.py, you'll find this function called Understand PyTorch gradients, and it will help you better understand how the computational graph works behind the scenes.
This other one, Deep Dream Simple, will also clarify how the whole gradient update works: the step that your optimizers, like Adam or SGD, usually do behind the scenes is simply written down explicitly in the code. Aside from going through that code, I highly recommend you go and code your own project. Just pick some task, whatever it is, take for example simple image classification, and go ahead and write it yourself from scratch. You'll learn the syntax and you'll understand how the whole pipeline works. And that's pretty much it. I hope you enjoyed this brief video on learning PyTorch. If you did, please subscribe and hit that bell icon to get notified when I upload a new video. Also, if you feel there is a better way to learn PyTorch, or some really good resource, just comment down in the comment section. And until next time, keep learning.
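Purely as a reference, here is a minimal, self-contained sketch of the ideas mentioned above: tensor creation, NumPy interop, calling loss.backward(), and the explicit gradient update that optimizers normally do behind the scenes. This is not code from the tutorial or the Deep Dream repo; the shapes, learning rate, and variable names are just illustrative.

import torch
import numpy as np

# Tensor creation and NumPy interop (the "What is PyTorch?" chapter)
x = torch.zeros(3, 3)
y = torch.rand(3, 3)
z = torch.from_numpy(np.ones((3, 3), dtype=np.float32))  # NumPy -> PyTorch
z_back = z.numpy()                                        # PyTorch -> NumPy

# Autograd: backward() computes the gradients for you
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()                # populates w.grad automatically

# The gradient update that optimizers (SGD, Adam, ...) usually do behind the scenes
lr = 0.1
with torch.no_grad():
    w -= lr * w.grad           # one plain gradient-descent step
    w.grad.zero_()             # reset the gradient before the next backward pass

In practice you would let torch.optim (SGD, Adam, ...) perform that last update step for you; writing it out once, the way the Deep Dream playground does, just makes the mechanics visible.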
[{"start": 0.0, "end": 4.16, "text": " In this video I'm going to show you how to quickly learn PyTorch in three easy steps."}, {"start": 4.16, "end": 8.0, "text": " PyTorch is a deep learning framework developed by Facebook AI research"}, {"start": 8.0, "end": 13.44, "text": " and deep learning frameworks abstract away from you certain implementation details"}, {"start": 13.44, "end": 17.0, "text": " like how to construct a convolutional layer or a linear layer"}, {"start": 17.0, "end": 19.240000000000002, "text": " or how to do back propagation."}, {"start": 19.240000000000002, "end": 24.6, "text": " It ends up with you calling a single line like loss.backward and that's it."}, {"start": 24.6, "end": 27.16, "text": " It calculates the gradients automatically for you."}, {"start": 27.16, "end": 30.36, "text": " So it's super popular in the research community."}, {"start": 30.36, "end": 34.84, "text": " Most of the papers published on the biggest conferences out there for deep learning,"}, {"start": 34.84, "end": 39.64, "text": " for computer vision, NLP, etc. are using PyTorch."}, {"start": 39.64, "end": 44.16, "text": " And allegedly it can also make your eyesight better and your skin clearer."}, {"start": 44.16, "end": 45.96, "text": " So how can you learn PyTorch?"}, {"start": 45.96, "end": 48.0, "text": " Step number one."}, {"start": 48.0, "end": 51.0, "text": " The official PyTorch tutorials are actually pretty decent."}, {"start": 51.0, "end": 55.96, "text": " You should start by going through this deep learning with PyTorch, a 60 minute blitz."}, {"start": 55.96, "end": 58.56, "text": " It's going to teach you the basics of the framework."}, {"start": 58.56, "end": 60.96, "text": " So this chapter, What is PyTorch?"}, {"start": 60.96, "end": 63.96, "text": " It's going to teach you how to instantiate tensors,"}, {"start": 63.96, "end": 67.64, "text": " how to create like tensors of zeros, ones, random numbers,"}, {"start": 67.64, "end": 71.4, "text": " how to convert back and forth between NumPy."}, {"start": 71.4, "end": 73.16, "text": " But it's actually the syntax."}, {"start": 73.16, "end": 74.84, "text": " It's really similar to NumPy."}, {"start": 74.84, "end": 78.24000000000001, "text": " That's why one of the reasons why PyTorch is so popular."}, {"start": 78.24000000000001, "end": 83.8, "text": " So secondly, it's going to teach you some of the basics behind"}, {"start": 83.8, "end": 89.36, "text": " what happens when you call backward, i.e. 
when you do backpropagation."}, {"start": 89.36, "end": 92.75999999999999, "text": " It's going to teach you how to construct a really basic neural network"}, {"start": 92.75999999999999, "end": 95.0, "text": " and define the loss and everything."}, {"start": 95.0, "end": 99.75999999999999, "text": " And finally, training a classifier is going to teach you how to do an end to end,"}, {"start": 99.75999999999999, "end": 103.8, "text": " a small deep learning pipeline, and that's super useful."}, {"start": 103.8, "end": 107.84, "text": " So after you're done with this 60 minute blitz,"}, {"start": 107.84, "end": 111.56, "text": " just feel free to skip the second two chapters here,"}, {"start": 111.56, "end": 115.36, "text": " learning PyTorch with examples and what is Torch and really,"}, {"start": 115.36, "end": 120.56, "text": " as that one goes really into depth of how to construct those yourself"}, {"start": 120.56, "end": 122.16, "text": " and have a better understanding."}, {"start": 122.16, "end": 127.56, "text": " But you won't need to start developing your own neural networks using PyTorch."}, {"start": 127.56, "end": 129.88, "text": " So just feel free to skip it for now."}, {"start": 129.88, "end": 133.4, "text": " Visualizing models, data and training with TensorBoard might be useful"}, {"start": 133.4, "end": 137.24, "text": " as TensorBoard is a super useful tool, especially when you start training your models."}, {"start": 137.24, "end": 143.56, "text": " It will help you log your loss, log your metrics, even your imagery,"}, {"start": 143.56, "end": 146.44, "text": " or depending what your task is about."}, {"start": 146.44, "end": 148.76000000000002, "text": " Step number two."}, {"start": 148.76000000000002, "end": 152.32000000000002, "text": " The second thing I'd recommend, and this is how I learned PyTorch,"}, {"start": 152.32000000000002, "end": 156.96, "text": " is go through this blog called Understanding PyTorch with an example,"}, {"start": 156.96, "end": 158.8, "text": " a step by step tutorial."}, {"start": 158.8, "end": 160.96, "text": " And it's super structured."}, {"start": 160.96, "end": 163.0, "text": " You have it all in one place."}, {"start": 163.0, "end": 169.24, "text": " Just go end to end and you'll learn a bunch of stuff like about data set class,"}, {"start": 169.24, "end": 172.68, "text": " data loaders, how to create your own training loop,"}, {"start": 172.68, "end": 176.8, "text": " how to build your own simple network, loss and everything in a single place."}, {"start": 176.8, "end": 180.2, "text": " Really nice visualizations, really nice writing."}, {"start": 180.2, "end": 181.08, "text": " Super recommended."}, {"start": 181.08, "end": 184.84, "text": " It's a bit longer, but you'll finish it in a couple of hours."}, {"start": 184.84, "end": 190.6, "text": " And you're pretty much ready to start doing your own projects in PyTorch."}, {"start": 190.6, "end": 193.6, "text": " And the third step, just start coding in PyTorch."}, {"start": 193.6, "end": 197.96, "text": " So I have a couple of PyTorch projects on my GitHub."}, {"start": 197.96, "end": 201.6, "text": " And you can take a look at this Deep Dream repo."}, {"start": 201.6, "end": 205.84, "text": " It's a super good way to start understanding how PyTorch works."}, {"start": 205.84, "end": 213.2, "text": " And if you go to this playground.py, you'll find this function called"}, {"start": 213.2, "end": 216.51999999999998, "text": " Understand PyTorch gradients."}, {"start": 216.52, 
"end": 221.52, "text": " And it will help you better understand how the computational graph structure"}, {"start": 221.52, "end": 223.4, "text": " works behind the scenes."}, {"start": 223.4, "end": 230.32000000000002, "text": " And also this one, Deep Dream Simple, will also kind of clarify how the whole"}, {"start": 230.32000000000002, "end": 236.60000000000002, "text": " gradient update works, which usually your optimizers do behind the scene,"}, {"start": 236.60000000000002, "end": 243.28, "text": " like Adam, SGD, whatever, is just being explicitly written down here in the code."}, {"start": 243.28, "end": 249.68, "text": " So aside from going through that code, I super recommend you go and code your"}, {"start": 249.68, "end": 250.68, "text": " own project."}, {"start": 250.68, "end": 255.92000000000002, "text": " So just pick some task, whatever it is, like take, for example, simple image"}, {"start": 255.92000000000002, "end": 260.72, "text": " classification, go ahead and write it down yourself from scratch."}, {"start": 260.72, "end": 262.92, "text": " And you'll learn the syntax."}, {"start": 262.92, "end": 265.68, "text": " You'll understand how the whole pipeline works."}, {"start": 265.68, "end": 268.08, "text": " And that's pretty much it."}, {"start": 268.08, "end": 271.24, "text": " So I hope you enjoyed this brief video on learning PyTorch."}, {"start": 271.24, "end": 276.28000000000003, "text": " If you did, please subscribe and hit that bell icon to get notified when I upload a"}, {"start": 276.28000000000003, "end": 277.28000000000003, "text": " new video."}, {"start": 277.28000000000003, "end": 281.2, "text": " Also, if you feel that there is a better way to learn PyTorch or some really good"}, {"start": 281.2, "end": 284.08, "text": " resource, just comment down in the comment section."}, {"start": 284.08, "end": 301.32, "text": " And until next time, keep learning."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=7q_OJvQQ7vY
How to get started with Machine Learning | 2021
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Bunch of you asked me how to get started in machine learning. So I thought I should make a video about it. List of resources: ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Learn Python ✅ video: https://www.youtube.com/watch?v=rfscVS0vtbw ✅ book: https://automatetheboringstuff.com/ Learn everything else about Python on the fly. Learn ML (high-level intro) ✅ course: https://www.coursera.org/learn/machine-learning ✅ course: https://www.coursera.org/specializations/deep-learning + at least 1 open-source project + 1 blog Learn ML (middle-level intro) ✅ course: https://course.fast.ai/ ✅ course: https://course19.fast.ai/part2 Again code, code, code. Open-source on GitHub. Start reading research papers and implement 1 paper from scratch. Learn ML (pro) ✅ MML book: https://mml-book.github.io/book/mml-book.pdf Supplements: ✅ 3B1B LA: https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab ✅ 3B1B Calculus: https://www.youtube.com/watch?v=WUvTyaaNkzM&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr ✅ chapter 5: https://jakevdp.github.io/PythonDataScienceHandbook/index.html Learn deep learning (pro) ✅ book: https://github.com/janishar/mit-deep-learning-book-pdf (you'll find a good pdf in this GitHub repo) BONUS TIPS: ✅ course: https://www.coursera.org/learn/learning-how-to-learn ✅ podcast: https://www.youtube.com/user/lexfridman ✅ Twitter the people I follow: https://twitter.com/gordic_aleksa ✅ My GitHub: https://github.com/gordicaleksa ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 ML and main obstacles to learning it 1:50 STEP1: Learn to code (in Python) 3:36 STEP2: Get a high-level intro to ML (Coursera) 4:43 How much time do I need to complete it? 5:05 Code your own project (GitHub, Medium) 6:53 STEP3: Getting deeper (fast.ai) 8:26 STEP4: Read and implement research papers 10:00 STEP5: Get strong mathematical foundations 11:35 TIP1: Learn how to learn 12:04 TIP2: Learn only the tools you need 12:38 TIP3: Get into the output mode (code, blog) 12:58 TIP4: Focus (the field is too broad) 13:24 TIP5: Follow AI people on Twitter 13:37 TIP6: Watch Lex's AI podcast 13:52 Comment and subscribe ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donation: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #machinelearning #learnmachinelearning #deeplearning
In this video I'm going to give you a five-step curriculum for mastering machine learning, plus six bonus tips. I'm currently working as a machine learning engineer at Microsoft, where I landed thanks to this curriculum. I've never taken a single official university course on machine learning. Why? Because we don't have any here in Serbia, and I guess most of you won't have one at your university either. My background is in electrical engineering. I did have some computer science subjects, but I had to take an additional software engineering self-education path, and I landed my first job ever as a software engineer at Microsoft, where I'm currently working as a machine learning engineer thanks to my curriculum. Machine learning is this super useful field, and you must have seen these videos from OpenAI where a robotic hand learns how to solve a Rubik's Cube in a 100% simulated environment. Why the fuck do I have this? I don't even know how to solve this thing. What we've done is trained an algorithm to solve the Rubik's Cube one-handed with a robotic hand. Or these videos from Waymo where a car is learning how to drive by itself using computer vision and deep learning. It knows exactly where it is on the road. The street is mapped. It can also identify everything around it in full 360 degrees. Objects are identified and analyzed, and then it predicts what those things might do next. Paths are charted. The biggest problem with machine learning is that there are too many resources out there, and in the process you just get decision-making fatigue and give up. Truth be told, they're all just reiterating pretty much the same things, and it's actually really easy to learn machine learning; no matter your background, I promise you can learn it. I'm going to give you five simple steps and six bonus tips to get you started. Step number one: learn to code in Python. The better your software engineering skills in Python are, the easier it will be for you to follow along with the next steps. If you're a complete beginner, you should go through this four-hour-long video and follow along, coding everything the instructor teaches you to do. Don't just binge watch the thing; you've got to learn how to code, and the best way to learn something is to do it yourself. If you're not a complete beginner, I recommend going through this book called Automate the Boring Stuff with Python. If you still feel like your Python coding skills are kind of weak, go through the first eight chapters too; if you feel confident enough, skip them and just do chapters nine to the end of the book. You'll learn so much from this: you'll learn how to automate stuff you previously maybe did manually, and it's such an amazing thing to have in your tool set. You'll further improve your Python skills by just learning stuff on the fly. So you're doing some concrete project or task, you go to Stack Overflow, you query something, you figure it out, and that's it. That's how you learn Python. Don't go about doing a bunch of courses and a bunch of books; just do this, believe me, it's enough. Get used to this workflow in Python: you don't know how to do something, you just go ahead and google it, like "reverse list in Python", and you open Stack Overflow and find some results there. Another cool resource I'm often using is GeeksforGeeks; this website is also pretty awesome and you'll find your answers there.
Step number two: get a high-level overview of what machine learning is and what it can do. In this part you're going to go through two courses. The first one is a general machine learning course taught by Andrew Ng. He's a famous professor and you've probably even heard of this course. In it you're going to learn about different techniques such as linear regression, logistic regression, even a little bit about neural networks (i.e. deep learning), and some other techniques like SVMs, and you're going to get a glimpse of what the field looks like. The second course you're going to take is a deep learning course taught also by Andrew Ng and his company DeepLearning.AI, and in this one you're going to get an understanding of what deep learning as a field is. You're going to learn how to structure your projects, and you're going to learn something about convolutional neural networks, which are really popular in the field of computer vision, and also something about RNNs, or in general sequence models. If this all sounds like gibberish at the moment, just stick with me and you'll be able to understand all of it as you follow this curriculum. It took me approximately two and a half months to go through these two courses, the reason being that I was working full-time at Microsoft and I was also going through some other resources, which I'll tell you more about in the bonus tips at the end of the video. So you can basically finish this step in less than a month, and you'll learn a lot. During these two courses you're going to learn a lot of new terminology; just don't get frightened. It's totally okay not to understand everything, you're still at a higher level of the knowledge pyramid. You'll get deeper down the pyramid and learn more details as time goes by. So just stick with the courses, do everything, and when you finish, create some project; that's immensely important. Once you finish the course you have to create something on your own and publish it somewhere like GitHub; just open-source it and you'll learn a lot and have something to show off. It's not only about certificates. Certificates are nice, but once you have some project, something concrete that you can show to other people, that's so much better. You can see my GitHub repo here and some of the projects I've been open-sourcing over the last couple of months. You learn a lot by doing that and you get some public artifacts, so other people can know that you're at least this good. I also encourage you to write a blog about the things you learn during the courses, and in general try to switch between two modes, which I like to call input and output mode, because I'm a nerd and an engineer. Input mode is when you're ingesting information: you're just learning, you're reading, you're focusing. Output mode is when you're actually structuring that knowledge and outputting it, whether as code, a blog post, or a video; just go ahead and use that knowledge and do something practical. So please, please do not skip this step. I cannot stress enough how important it is for you to create some concrete coding project. Phew! Step three. We're getting deeper, into the middle level of the knowledge pyramid. Here you're going to take two fast.ai courses, which are much more practical than the Coursera courses.
The first one you're going to do is called Deep Learning for Coders, and if you notice there is some overlap with things you already learned, feel free to skip some content, but I strongly suggest you try to do this course as well. The second fast.ai course you should take is called Deep Learning from the Foundations, and there you're going to dig a lot deeper into how deep learning works, how backprop is implemented, how the optimizers are implemented, etc. Go through it, it will help you immensely in the long term, and again, after you finish these two courses, do create your own project and do open-source it on GitHub. This will help you tremendously. At this point you should already start focusing on one specific deep learning framework, and I strongly suggest you start with PyTorch. fast.ai's Deep Learning from the Foundations will also teach you a lot about PyTorch. Just keep on using a single framework; do not try to learn TensorFlow, Keras, and so on at the same time. Focus on one single framework, and it will be easy for you to switch if needed. The first three steps I showed you can easily take up to six months if you're a total beginner. If you're not, feel free to crop this curriculum so it fits your level of expertise. Step four, we're getting deeper: start reading research papers and implement a single paper. I'm obviously trying to linearize this ML learning process, but learning is an extremely non-linear process, so there is a huge chance that you'll start reading research papers before you get to this step. If there is one site you should know about when you start reading research papers, it's arXiv; that's just the place where people publish their papers, and you can see one paper here on the screen. If there is one single word of advice I can give you when you start reading papers, it's to just embrace the suck. That means you're going to feel so dumb: there are going to be so many mathematical symbols you're not aware of, algorithms, terms, terminology you have never heard of. Just be ready to read the thing from end to end and not understand anything on the first pass. Reading papers is an extremely important skill for both machine learning engineers and machine learning researchers. Once you've read a couple of papers in a specific field, say I read 20ish papers in the neural style transfer literature, go ahead and implement the damn thing. You're going to learn so much by doing this. On the screen you can see my implementation of the original neural style transfer paper, and I really preach what I do myself, the things that I know were useful for me and for others. Last but not least, step five: mathematics. Listen up, in the end you'll have to have at least one person on the team who'll know what the heck is going on, and that person will know mathematics. I'm not saying you have to be that person; I'm just saying that if you have a startup or whatever, you'll need at least one person with a thorough understanding of the mathematical foundations of machine learning. If you've been following along with this curriculum, you'll already have at least some basic mathematical foundations by now. The thing I recommend you do here is go over this book called Mathematics for Machine Learning.
It's free, it's an awesome resource, and it took me around three to four months to finish, but I supplemented it with 3Blue1Brown's playlists on linear algebra and calculus and also with the Python Data Science Handbook, and I was also working full-time, so you can do it faster, though it will take some time. You'll just need to go through the fifth chapter of the Python Data Science Handbook; it's about machine learning. At the end of this curriculum, if you want to build rock-solid deep learning foundations, there is this book called Deep Learning by Ian Goodfellow and his colleagues. It's super academic, super thorough, and it will give you the additional knowledge you need in order to do some serious research in the field of deep learning. So that's it for the curriculum; if you've stayed with me so far, congrats, and I'll now share the six bonus tips with you. Tip number one: this curriculum will obviously require you to have self-discipline and good learning skills, so before you even start it, if you're new to learning on your own, I strongly recommend you go through the Coursera course called Learning How to Learn. I finished it exactly one year ago and found it super useful, even though I have a long track record of learning on my own. Tip number two: do not learn tooling for the sake of tooling. For example, the Python Data Science Handbook has chapters on NumPy, Matplotlib, and pandas; just don't go and teach yourself these tools without previously needing them. For instance, I've been working full-time in this field for two years already, and I've been doing a bunch of stuff on the side, and I still haven't had to use pandas, because I'm not working with structured data; I work mostly with imagery, since I'm in the field of computer vision. Tip number three, and I can't emphasize this enough: build your own stuff after you go through the theory. This is super important, so set a project goal, that's the "what", and a lot of "hows" will follow; if you need pandas to accomplish the project, you'll learn pandas on the fly. Tip number four: focus, focus, focus. In the beginning just focus on a single deep learning framework, that's PyTorch for me, and focus on a single application area, that's computer vision for me. You'll get to learn a lot of terminology and you'll gain some self-confidence. Nobody can keep up with all of the subfields; I have friends at DeepMind who are experts at computer vision but can't keep up with reinforcement learning. Tip number five: follow people on Twitter. There are a lot of cool people on Twitter who post regularly, and that will help you keep up with the newest things happening in the artificial intelligence field. Tip number six: follow Lex Fridman's podcast. He has a YouTube channel and he's been having amazing guests; I've been following him for almost two years now, and you can learn so much from the best people from industry and research. So those were all the steps and tips I had. Please comment if you think I've missed something, or feel free to add your own suggestions in the comment section. Also, subscribe to this channel if you found this video useful and hit the bell icon to be notified when I upload a new video. Until next time, keep learning.
[{"start": 0.0, "end": 4.4, "text": " In this video I'm going to give you a five step curriculum for mastering machine learning"}, {"start": 4.4, "end": 9.200000000000001, "text": " and six bonus tips. So I'm currently working as a machine learning engineer at Microsoft where"}, {"start": 9.200000000000001, "end": 14.4, "text": " I've landed thanks to this curriculum. I've never taken a single official university course on"}, {"start": 14.4, "end": 20.0, "text": " machine learning. Why? Because we don't have any here in Serbia and I guess most of you won't have"}, {"start": 20.0, "end": 25.44, "text": " one on your university also. So my background is in electrical engineering. I did have some computer"}, {"start": 25.44, "end": 31.28, "text": " science subjects but I had to take additional software engineering self-education path and I've"}, {"start": 31.28, "end": 36.160000000000004, "text": " landed my first job ever as a software engineer at Microsoft where I'm currently working as a"}, {"start": 36.160000000000004, "end": 41.36, "text": " machine learning engineer thanks to my curriculum. So machine learning is this super useful field"}, {"start": 41.36, "end": 45.92, "text": " and you must have seen these videos from OpenAI where this robotic hand learns how to solve"}, {"start": 45.92, "end": 52.32, "text": " Rubik's Cube in a 100% simulated environment. Why the fuck do I have this? I don't even know how to"}, {"start": 52.32, "end": 59.44, "text": " solve this thing. What we've done is trained an algorithm to solve the Rubik's Cube one-handed"}, {"start": 59.44, "end": 65.68, "text": " with a robotic hand. Or these videos from Waymo where a car is learning how to drive by itself"}, {"start": 65.68, "end": 71.28, "text": " by using computer vision and deep learning. It knows exactly where it is on the road. The street"}, {"start": 71.28, "end": 78.88, "text": " is mapped. It can also identify everything around it in full 360 degrees. Objects are identified and"}, {"start": 78.88, "end": 85.11999999999999, "text": " analyzed and then predict what those things might do next. Paths are charted. So the biggest problem"}, {"start": 85.11999999999999, "end": 90.0, "text": " with machine learning is that there are too many resources out there and in the process you just"}, {"start": 90.0, "end": 95.75999999999999, "text": " get a decision-making fatigue and you give up. The truth be told they're all just reiterating on the"}, {"start": 95.75999999999999, "end": 102.32, "text": " pretty much very same things and it's actually really easy to learn machine learning and no"}, {"start": 102.32, "end": 107.75999999999999, "text": " matter your background I promise you you can learn it. I'm going to give you five simple steps and"}, {"start": 107.76, "end": 115.44, "text": " six bonus points to get you started. Step number one learn to code in Python. So the better your"}, {"start": 115.44, "end": 120.08000000000001, "text": " software engineering skills in Python are the easier it will be for you to just follow along"}, {"start": 120.08000000000001, "end": 126.0, "text": " the next steps. So if you're a complete beginner you should go through this four hour long video"}, {"start": 126.64, "end": 133.84, "text": " and just follow along and do and encode everything that that guy just teaches you to do. 
Don't just"}, {"start": 133.84, "end": 138.8, "text": " binge watch the thing you gotta learn how to code and the best way to learn something is to do it"}, {"start": 138.8, "end": 144.32, "text": " yourself. So if you're not a complete beginner I recommend you going through this book called"}, {"start": 144.32, "end": 149.68, "text": " Automate the Boring Stuff with Python and if you still feel like your Python coding skills are"}, {"start": 149.68, "end": 154.88, "text": " kind of weak just go through the first eight chapters. If you feel confident enough just skip"}, {"start": 154.88, "end": 163.36, "text": " the eight chapters and just do the chapters nine to the end of the book. You'll learn so much from"}, {"start": 163.36, "end": 169.04000000000002, "text": " this you'll learn how to automate stuff you previously maybe did manually or something."}, {"start": 169.04000000000002, "end": 174.16000000000003, "text": " It's such an amazing thing to have in your tool set. You'll further improve your Python skills"}, {"start": 174.16000000000003, "end": 180.72000000000003, "text": " by just learning stuff on the fly. So you're doing some concrete project or test whatever"}, {"start": 180.72000000000003, "end": 186.56, "text": " you just go to Stack Overflow you query something you figure it out and that's it. That's how you"}, {"start": 186.56, "end": 192.24, "text": " learn Python. Don't go about doing bunch of courses bunch of books just do this believe me it's enough."}, {"start": 192.24, "end": 196.96, "text": " So just get used to this workflow in Python. You don't know how to do something you just go ahead"}, {"start": 196.96, "end": 204.08, "text": " and you google it like reverse list in Python and you pretty much open like you can open like"}, {"start": 204.08, "end": 210.0, "text": " Stack Overflow and find some results here or another cool resource I'm I'm often using is"}, {"start": 210.72, "end": 215.68, "text": " Geeks4Geeks. This website is also pretty awesome you'll find your answers there."}, {"start": 216.96, "end": 221.92000000000002, "text": " Step number two get a high level overview of what machine learning can do and what machine"}, {"start": 221.92, "end": 227.92, "text": " learning is. So in this part you're going to go through two courses. The first one is a general"}, {"start": 227.92, "end": 232.48, "text": " machine learning course taught by Andrew Eng. He's a famous professor and you probably even"}, {"start": 232.48, "end": 238.23999999999998, "text": " heard of this course and this course you're going to learn about different techniques such as linear"}, {"start": 238.23999999999998, "end": 244.72, "text": " regression, logistic regression, even a little bit about neural networks, IE deep learning and"}, {"start": 244.72, "end": 249.44, "text": " some other techniques like SVMs and you're going to get a glimpse of what like the field looks like."}, {"start": 249.44, "end": 254.4, "text": " The second course you're going to take is a deep learning course taught also by Andrew Eng"}, {"start": 254.4, "end": 259.36, "text": " and his company Deep Learning AI and in this one you're going to get a understanding of what deep"}, {"start": 259.36, "end": 264.8, "text": " learning as a field is. 
You're going to learn how to structure your projects and you're going to"}, {"start": 264.8, "end": 268.96, "text": " learn something about convolutional neural networks which are really popular in the field of computer"}, {"start": 268.96, "end": 275.6, "text": " vision and also something about RNNs or in general sequence models. If this all sounds gibberish at"}, {"start": 275.6, "end": 280.24, "text": " the moment just stick with me and you'll be able to understand all of this as soon as you start"}, {"start": 280.8, "end": 286.8, "text": " following this curriculum. So it took me approximately two and a half months to go"}, {"start": 286.8, "end": 291.6, "text": " through these two courses and the reason being I was actually working full-time in Microsoft"}, {"start": 291.6, "end": 295.84000000000003, "text": " and I was also doing some other resources which I'll tell you more about like in the bonus"}, {"start": 296.56, "end": 304.0, "text": " points on the end of the video. So you basically can finish this one in less than a month and you'll"}, {"start": 304.0, "end": 310.16, "text": " learn a lot. So during these two courses you're going to learn a lot of new terminology just don't"}, {"start": 310.16, "end": 316.32, "text": " get frightened. It's totally okay not to understand everything you're still like on some higher level"}, {"start": 316.32, "end": 322.24, "text": " of the knowledge pyramid. You'll get deeper down the pyramid and you'll learn more details as the"}, {"start": 322.24, "end": 328.16, "text": " like time goes by. So just stick with the course do everything and when you finish with the course"}, {"start": 328.16, "end": 333.6, "text": " create some project that's like immensely important for you to understand. Once you finish the course"}, {"start": 333.6, "end": 340.08000000000004, "text": " you have to create something on your own and you have to publish it somewhere like on github just"}, {"start": 340.08000000000004, "end": 345.04, "text": " open source it and you'll learn a lot and you'll have something to show off. It's not only about"}, {"start": 345.04, "end": 349.76000000000005, "text": " certificates. Certificates are nice but once you have some project something concrete that you can"}, {"start": 349.76000000000005, "end": 355.52000000000004, "text": " show to other people that's so much better. And you can see my github repo here and some of the"}, {"start": 355.52, "end": 361.68, "text": " projects I've been open sourcing over the last couple of months. So you learn a lot by doing that"}, {"start": 361.68, "end": 368.32, "text": " and you just get some kind of public artifacts so other people can know that you're at least this"}, {"start": 368.32, "end": 373.59999999999997, "text": " good. Also I encourage you to go ahead and write some blog about the things that you learn during"}, {"start": 373.59999999999997, "end": 380.15999999999997, "text": " the courses and in general try and switch between these two modes which I like to call input and"}, {"start": 380.16, "end": 385.68, "text": " output modes because I'm a nerd and engineer. So basically input mode is you're taking ingesting"}, {"start": 385.68, "end": 390.24, "text": " information you're just learning you're reading you're focusing. 
Output mode is when you're"}, {"start": 390.24, "end": 394.48, "text": " actually trying and trying and structuring that knowledge and trying and outputting it in the"}, {"start": 394.48, "end": 402.32000000000005, "text": " sense like either code or or blog or whatever or video just go ahead and use that knowledge and do"}, {"start": 402.32000000000005, "end": 408.8, "text": " something practical. So please please do not skip this step. I cannot stress enough how important it"}, {"start": 408.8, "end": 416.64, "text": " is for you to create some concrete coding project. Phew! Step three. We're getting deeper into the"}, {"start": 416.64, "end": 423.04, "text": " middle level of the knowledge pyramid. Here you're going to take two fast AI courses which are much"}, {"start": 423.04, "end": 428.08000000000004, "text": " more practical than Coursera courses. So the first one you're going to do is called deep learning"}, {"start": 428.08000000000004, "end": 434.8, "text": " for coders and if you notice there is some overlap with things you already learned feel free to skip"}, {"start": 434.8, "end": 441.92, "text": " feel free to skip some content but I strongly suggest you try and do this course also. The"}, {"start": 441.92, "end": 446.56, "text": " second fast AI course which you should take is called deep learning from the foundations"}, {"start": 446.56, "end": 452.24, "text": " and you're going to dig a lot deeper into how deep learning works how the back prop is implemented"}, {"start": 452.24, "end": 460.24, "text": " how the optimizers are implemented etc. So go through this it will help you immensely in the"}, {"start": 460.24, "end": 467.36, "text": " long term and again after you finish these two courses do create your own project and do open"}, {"start": 467.36, "end": 472.88, "text": " source it on github. This will help you tremendously and at this point of time you should already"}, {"start": 472.88, "end": 478.40000000000003, "text": " start focusing on one specific deep learning framework and I strongly suggest you start with"}, {"start": 478.40000000000003, "end": 485.36, "text": " PyTorch. Fast AI's deep learning from foundations will also teach you a lot of stuff about PyTorch."}, {"start": 485.36, "end": 492.32, "text": " Just keep on using a single framework do not try and learn tensorflow keras and stuff just focus on"}, {"start": 492.32, "end": 498.0, "text": " one single framework it will be easy for you to switch if needed. So the first three steps I showed"}, {"start": 498.0, "end": 503.6, "text": " you can easily take you up to six months if you're a total beginner. If you're not feel free to crop"}, {"start": 503.6, "end": 510.24, "text": " this curriculum so as to fit your level of expertise. Step four we're getting deeper start"}, {"start": 510.24, "end": 516.16, "text": " reading research papers and implement a single paper. So I'm obviously trying to linearize this"}, {"start": 516.16, "end": 522.4, "text": " ML learning process but learning is an extremely non-linear process so there is a huge chance that"}, {"start": 522.4, "end": 528.4, "text": " you'll start reading research papers before you get to this step. 
Now if there is one site that"}, {"start": 528.4, "end": 533.12, "text": " you should know about when you start reading research papers that's archive and that's just"}, {"start": 533.12, "end": 538.8, "text": " a place where people publish their papers and you can see one paper here on the screen and if there"}, {"start": 538.8, "end": 544.64, "text": " is one single word of advice I can give you here when you start reading the papers that's to just"}, {"start": 544.64, "end": 551.12, "text": " embrace the suck. That means you're going to feel so dumb there's going to be so many mathematical"}, {"start": 551.12, "end": 558.24, "text": " symbols you're not aware of algorithms terms terminology you have never heard of just be"}, {"start": 558.24, "end": 565.52, "text": " ready to read the thing from end to end and not understand anything on the first pass. Reading"}, {"start": 565.52, "end": 570.56, "text": " papers is an extremely important skill for both machine learning engineers and also for machine"}, {"start": 570.56, "end": 577.28, "text": " learning researchers. So once you read a couple of papers in a specific field like say I read"}, {"start": 577.28, "end": 583.6, "text": " 20ish papers in neural style transfer literature go ahead and implement the damn thing. You're going"}, {"start": 583.6, "end": 589.04, "text": " to learn so much by doing this. On the screen you can see my implementation of the original"}, {"start": 589.04, "end": 596.7199999999999, "text": " neural style transfer paper and I really preach what I do myself the things that I know that were"}, {"start": 596.7199999999999, "end": 605.36, "text": " useful for me and also for others. Last but not least step five mathematics. Listen up in the end"}, {"start": 605.36, "end": 610.88, "text": " you'll have to have at least one person on the team who'll know what the heck is going on and"}, {"start": 610.88, "end": 616.4, "text": " that person will know mathematics. I'm not saying you should have that knowledge I'm just saying if"}, {"start": 616.4, "end": 621.68, "text": " you have a startup or whatever you'll have to have at least one person who will have a thorough"}, {"start": 621.68, "end": 626.9599999999999, "text": " understanding of mathematical foundations for machine learning. If you were following along this"}, {"start": 626.9599999999999, "end": 632.88, "text": " curriculum you'll you'll have some basic at least basic mathematical foundations so far."}, {"start": 633.84, "end": 639.36, "text": " The thing I recommend you do here is go over this book called mathematics for machine learning. It's"}, {"start": 639.36, "end": 644.48, "text": " free it's an awesome resource it took me around three to four months to finish it but I'd supplement"}, {"start": 644.48, "end": 650.96, "text": " I supplemented it with three blue one browns playlists for linear algebra and calculus and"}, {"start": 650.96, "end": 657.44, "text": " also with this python for data science handbook and I was also working full-time so you can do"}, {"start": 657.44, "end": 662.08, "text": " it faster but it will take some time. You'll just need to go through the fifth chapter of this python"}, {"start": 662.08, "end": 668.16, "text": " data science handbook it's about machine learning. For the end of this curriculum if you want to"}, {"start": 668.16, "end": 674.8, "text": " build a rock-solid deep learning foundations there is this book called deep learning by ian goodfellow"}, {"start": 674.8, "end": 681.6, "text": " and his colleagues. 
It's super academic super thorough it will just give you additional"}, {"start": 682.8, "end": 687.6, "text": " additional knowledge you need to have in order to do some serious research in the field of deep"}, {"start": 687.6, "end": 693.68, "text": " learning. So that's it for the curriculum if you stayed so far with me congrats I'll now share the"}, {"start": 693.68, "end": 701.12, "text": " six bonus tips with you. Tip number one so this curriculum will require you to have self-discipline"}, {"start": 701.12, "end": 709.12, "text": " obviously and good learning skills so before you even start doing this curriculum if you're new to"}, {"start": 709.12, "end": 715.04, "text": " learning stuff on your own I'd strongly recommend you go through this course by Coursera called"}, {"start": 715.04, "end": 721.28, "text": " learning how to learn. I finished it exactly one year ago and found it super useful even though"}, {"start": 721.28, "end": 728.16, "text": " I have a long track record of learning on my own. Tip number two do not learn tooling for the sake"}, {"start": 728.16, "end": 734.8, "text": " of tooling like this for example this python for python data science handbook has this chapters on"}, {"start": 734.8, "end": 742.64, "text": " numpy matplotlib and pandas just don't go and teach yourself these tools without previously needing"}, {"start": 742.64, "end": 747.52, "text": " them. So for example I've been I've been working full time for two years already in this field"}, {"start": 747.52, "end": 752.56, "text": " and I've been doing a bunch of stuff aside and I still haven't had to use pandas because I'm not"}, {"start": 752.56, "end": 757.4399999999999, "text": " working with structured data I'm working with mostly with imagery because I'm working in the"}, {"start": 757.4399999999999, "end": 764.88, "text": " field of computer vision. Tip number three I can't emphasize this enough build your own stuff after"}, {"start": 764.88, "end": 771.1999999999999, "text": " you go through the theory this is super important so set a project goal that's the what and a lot"}, {"start": 771.1999999999999, "end": 777.28, "text": " of hows will follow up so if you'll need pandas to accomplish the project you'll learn pandas on the"}, {"start": 777.28, "end": 785.8399999999999, "text": " fly. Tip number four focus focus focus so in the beginning just focus on a single deep learning"}, {"start": 785.8399999999999, "end": 791.1999999999999, "text": " framework that's pytorch for me focus on a single application area that's computer vision for me"}, {"start": 791.8399999999999, "end": 796.64, "text": " you'll get to learn some a lot of terminology you'll get some self-confidence nobody can keep"}, {"start": 796.64, "end": 801.8399999999999, "text": " up with all of the fields so I have friends in deep mind who are experts at computer vision but"}, {"start": 801.84, "end": 808.1600000000001, "text": " they can't keep up with reinforcement learning. Tip number five follow people on twitter there are a"}, {"start": 808.1600000000001, "end": 814.0, "text": " lot of cool people on twitter which post regularly and that will help you keep up with the newest"}, {"start": 814.0, "end": 819.36, "text": " things that are happening in the artificial intelligence field. 
Tip number six follow Lex"}, {"start": 819.36, "end": 824.48, "text": " Friedman's podcast he has a youtube channel and he's been having amazing guests like I've been"}, {"start": 824.48, "end": 830.32, "text": " following him for almost two years now and you can learn so much from the best people from the"}, {"start": 830.32, "end": 837.0400000000001, "text": " industry and from research. So those were all the steps and tips I had please comment if you think"}, {"start": 837.0400000000001, "end": 843.5200000000001, "text": " I've missed something or feel free to add your own suggestions in the comment section also subscribe"}, {"start": 843.5200000000001, "end": 848.96, "text": " to this channel if you found this video useful and hit the bell icon to be notified when I upload a"}, {"start": 848.96, "end": 860.8000000000001, "text": " new video until next time keep learning."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=lEtr9nnO5FI
Semantic Segmentation in PyTorch | Neural Style Transfer #7
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ In this video, I cover semantic segmentation - both basic theory and we also dig into the PyTorch implementation. You'll learn about: ✔️ What is semantic segmentation ✔️ How to implement it in PyTorch using DeepLab V3 ✔️ What are connected components and morph filters ✔️ How to post-process the raw model masks Note: I'll cover the whole pipeline in the next video. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GitHub code: https://github.com/gordicaleksa/pytorch-naive-video-nst ✅ DeepLab V3 paper: https://arxiv.org/abs/1706.05587 ✅ Morph filtering: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html ✅ Useful blog: https://www.learnopencv.com/pytorch-for-beginners-semantic-segmentation-using-torchvision/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 Semantic Segmentation (Basic Theory) 3:00 Semantic Segmentation (Code-Walkthrough) 8:25 Digital Image Processing (Basic Theory) 10:47 Mask post-processing (Code-Walkthrough) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donations: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #semanticsegmentation #pytorch #neuralstyletransfer
In this video we're going to learn about semantic segmentation. I'll first cover some basic theory and then we'll jump straight into coding in PyTorch, in a project I recently open-sourced. So let's dig into it. So what exactly is semantic segmentation? It's a computer vision task where the goal is to assign a single class to every single pixel. You can see on the image there that certain pixels belong to the road, some pixels belong to the sidewalk, pedestrians, trees, etc. If you have all of this information, it's really easy to also do, for example, object detection, because you just take the pedestrian, draw a bounding box around it, and you also know it's a pedestrian, so you've already done classification. The main metric for this task is something called mean intersection over union (mIoU), and it's pretty simple to understand. Basically you take the prediction your model makes, for example for a single pedestrian, and you look at the intersection between the prediction and the ground truth, i.e. the true labels. The bigger the intersection relative to the union, the better the metric; it makes sense intuitively, and you can see the nice image with the squares there. It's used in all kinds of applications. The two main ones that come to mind are autonomous vehicles, and the second one would maybe be mixed reality; I'm probably biased because I worked on HoloLens. The neural network we'll be using in the code is DeepLab v3, so let me just briefly mention the DeepLab family. It was developed by Google and it went from v1 all the way to v3+; that's the newest model, but we are using v3 because it has an official PyTorch implementation, so it's really simple to use. On the screen, in the top left you can see the input image and in the bottom right you can see the actual output I got with DeepLab v3. The model itself was trained on the Pascal VOC 2012 dataset, which has 21 classes: 20 foreground classes and one background class. The classes comprise, aside from the background I already mentioned, things like person, airplane, bird, etc. Because it was trained on 21 classes, the output has 21 channels, and the spatial resolution is the same as the input image. That's how the semantic segmentation problem works. So how do you get the most probable class for a single pixel? You take a certain (x, y) coordinate and you get 21 numbers, right? Because you have 21 channels. Wherever the highest number is, that's the highest probability, i.e. the class that is most probable for that specific pixel. Say channel zero had the biggest value: that means background is the most probable class for that specific pixel. It's as easy as that. The model you can see on the screen is actually FCN and not DeepLab, but the output shape is the thing I want you to see here: it's 21 channels and the spatial resolution is the same as the input image, as I already mentioned. Okay, enough theory, let's jump into the code. This code is actually a whole neural style transfer video creation pipeline, but we'll be focusing only on a single component, and that's the segmentation. So let's see what the code looks like. The extractPersonMaskFromFrames function basically takes an input frame and just extracts the pixels where the person is present. On line 55, as a regular PyTorch thing, we just figure out whether the user has a GPU or CPU. GPUs are always preferable when training or using neural networks. 
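To make the mean-intersection-over-union metric just described concrete, here is a minimal NumPy sketch; the function name is illustrative and the default of 21 classes simply matches the Pascal VOC setup mentioned above (real benchmarks differ in how they average over images and handle absent classes).

```python
import numpy as np

def mean_iou(pred, target, num_classes=21):
    # pred and target are integer label maps of shape (H, W)
    ious = []
    for cls in range(num_classes):
        pred_mask = pred == cls
        target_mask = target == cls
        union = np.logical_or(pred_mask, target_mask).sum()
        if union == 0:
            continue                      # class absent from both maps, skip it
        intersection = np.logical_and(pred_mask, target_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0
```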
Then we just instantiate the DeepLab v3 model with the ResNet-101 backbone and we set pretrained to True because we want to have a pre-trained model. Obviously we put it on the GPU if we have one, and we set the model to evaluation mode, because certain layers like batch normalization and dropout behave differently if we don't set this, and we'd get some wrong outputs. Next up we create these transforms which will be applied to every single frame. We may want to specify a certain height and width, because if our GPU doesn't have enough VRAM it will just give you a CUDA out-of-memory exception when the frame is too big. Then we convert it to a PyTorch tensor and we do normalization using ImageNet statistics; this is just because of the way the PyTorch models were trained, so we have to do this pre-processing step. Then we just create an image folder out of the frames that we want to process, also a standard PyTorch thing, and we create a data loader and set the batch size to, for example, four or something, because we want to use the parallel processing power of GPUs. The next step is figuring out whether the output directories are empty or full. This is like a caching mechanism: if they already have some frames, we want to just skip this stage in the pipeline, but if they are empty we want to proceed. We wrap all of this into the torch.no_grad() context because we are doing inference; otherwise PyTorch would create computational graphs by default, which would allocate lots of memory and eat up your VRAM, so you want to do this step, it's really important. The next important thing is that we just iterate through this data loader and we get image batches; on line 84 we just place the batch on the GPU, because the model is also on the GPU, and you want to have the tensors as well as the model on the same device, otherwise you'll get an error. Then we just do the inference here: we pass the image batch into the DeepLab v3 segmentation model, and because the output is actually an ordered dictionary (this is just a thing you've got to do), you extract the actual output using the 'out' key, and then we move that resulting batch to the CPU and convert it to NumPy. Afterwards we iterate through the result batch which, as I already mentioned, has dimensions N (where N is the batch size), then 21 because we have 21 output channels, and then height and width the same as the input frames. So the CPU output has the shape I mentioned: 21 channels and the same height and width as the input frame. And this is the main step: we do the argmax, and that actually finds the thing I mentioned in the theoretical part, i.e. we just find the channel where we have the highest probability for that specific pixel. Then the comparison against the person channel index will figure out the pixels where the person class was the most probable one in the image, so by doing this we get a person mask, i.e. we'll have a boolean value of True on those pixels where the person is present, as simple as that. Then just some bookkeeping here: multiplying by 255 converts the booleans into a 0-255 binary image, and we explicitly convert it here to NumPy's unsigned 8-bit integer type. After this we just do the post-processing step, but before we dig into that part of the code I want to briefly cover some theory behind the heuristics that will be used in that specific function. 
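Here is a minimal sketch of the inference flow just described, using torchvision's DeepLabV3 with a ResNet-101 backbone. The frame path, the resize value, and the person class index of 15 (Pascal VOC ordering) are illustrative assumptions rather than the project's exact arguments.

```python
import torch
from torchvision import models, transforms
from PIL import Image

PERSON_CLASS_INDEX = 15   # "person" in the Pascal VOC class ordering

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.segmentation.deeplabv3_resnet101(pretrained=True).to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(512),                                 # keep frames small enough for your VRAM
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],        # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame_000001.png").convert("RGB")        # hypothetical frame path
batch = preprocess(frame).unsqueeze(0).to(device)            # shape (1, 3, H, W)

with torch.no_grad():                                        # inference only, no autograd graph
    out = model(batch)["out"]                                # shape (1, 21, H, W) raw scores

labels = out.argmax(dim=1).squeeze(0).cpu().numpy()          # per-pixel class indices, (H, W)
person_mask = (labels == PERSON_CLASS_INDEX).astype("uint8") * 255   # 0/255 binary mask
```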
What it will do is just clean up some components that the model spuriously outputted, which are erroneous, so we just want to do some post-processing and clean them up. This is pretty common in computer vision: you usually have these hybrid approaches where the deep learning pipeline produces something and you just want to do some cleaning afterwards. There are two things you want to know here: the first one is the connected components algorithm and the second one is morphological filtering operations. Connected components are pretty simple: we as humans can easily tell that the square is not connected to the circle, i.e. there doesn't exist some path of white pixels connecting them, and what the algorithm should do here is just assign a different label to every one of these components, like label 0 for background, label 1 for the square, label 2 for the circle. Having that information we can easily extract the circle or whatever component we want, and the colored image on the right just visualizes the thing I mentioned, i.e. it just visualizes the labels. The second thing you need to know about is morphological filtering: you basically take the binary image as the input and you just process it with something called the structuring element, or the kernel, which is also a simple binary mask. You can either do erosion, where you get a smaller area (you can see the J letter got smaller there), or you can do something called dilation, where you get a bigger area. The way you implement this, if you know something about logic gates, is pretty much a multiple-input AND gate for the erosion, or a multiple-input OR gate for the dilation; pretty simple. Finally, opening is something we'll actually be using, and that's just a combination: you just sequentially apply first erosion and then dilation. It makes sense, because if you look at the input image before doing the opening operation you have those small dots; after doing erosion they will disappear, and after doing dilation you'll just be left with the J letter, which will get back to its initial size. On this slide you can see a concrete example of the person mask I got using the DeepLab v3 model, and you can see that by doing the opening we'll just kind of detach that small component that isn't supposed to be there, and then, after finding connected components and picking the second biggest one (the first being the background, the second being the person), we'll be able to keep only the person pixels, simple as that. Back to the code: we can now figure out what the post-processing method actually does. If we jump here, you can see that on line 26 we just create a kernel, so that's the structuring element I mentioned on the morphological filtering slide, and it's just a simple square of ones. After applying OpenCV's morphological function to the mask, we get this thing called the open mask, which is the initial mask from the DeepLab model after applying the opening operation. Then we just run connected components on the open mask and we get the labeled image out. Now, given that labeled image from the connected components algorithm, we need to figure out which component belongs to the background. So we take the upper part of the labeled image, the first 10 percent of the image, and just count and see what the most frequent value in that space is. 
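A minimal OpenCV sketch of the two building blocks described above, opening and connected components; the kernel size and the file path are arbitrary examples, not necessarily what the repository uses.

```python
import cv2
import numpy as np

# the raw 0/255 person mask produced by the segmentation model (hypothetical path)
mask = cv2.imread("raw_person_mask.png", cv2.IMREAD_GRAYSCALE)

kernel = np.ones((13, 13), dtype=np.uint8)                 # structuring element (example size)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # opening = erosion then dilation

num_labels, labeled = cv2.connectedComponents(opened)      # one integer label (0..num_labels-1) per component
print(f"found {num_labels} connected components")
```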
I call that the discriminant subspace, and the most common value there is what we assume to be the background component's label, so we get the background index here on line 37. The next step is to create a list of tuples where each tuple contains a connected component's label and its area (that's line 43). After sorting those according to the size of the areas and filtering out the background label that we found above using the discriminant subspace, we are left with the biggest component after the background, which we assume to be the person. This part here takes the biggest leftover component, and the zero just grabs its label, so I'm left with the person index, and after checking which pixels contain exactly this label, I'm left only with the person pixels. It's pretty simple. If I go ahead and only visualize the mask that came out directly from the model, we get a result like this, and you can notice certain components here which do not belong to the person mask and which should obviously be removed. So if I just go inside the post-processing method, after doing the morphological operations we get this, and you can see on the right that the erosion already fixed this concrete mask, but in some other cases we'll need connected components to remove the other components which do not belong to the person. So that covers the semantic segmentation theory and code. Hopefully you found this video useful. If you did, consider subscribing and sharing this video, and see you next time.
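A sketch of the post-processing heuristic as described: estimate the background label from the top 10% of rows, then keep the largest remaining component as the person. The function and variable names here are illustrative, not the repository's.

```python
import numpy as np

def extract_person_component(labeled, num_labels):
    h = labeled.shape[0]
    # "discriminant subspace": the top 10% of rows, assumed to be mostly background
    top_strip = labeled[: max(1, int(0.1 * h)), :]
    background_index = int(np.bincount(top_strip.ravel()).argmax())

    # (label, area) pairs for every component except the background
    areas = [(lbl, int((labeled == lbl).sum()))
             for lbl in range(num_labels) if lbl != background_index]
    if not areas:
        return np.zeros_like(labeled, dtype=np.uint8)
    areas.sort(key=lambda pair: pair[1], reverse=True)

    person_index = areas[0][0]                 # biggest non-background component
    return (labeled == person_index).astype(np.uint8) * 255
```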
[{"start": 0.0, "end": 4.4, "text": " In this video we're going to learn about semantic segmentation and I'll first cover some basic theory"}, {"start": 4.4, "end": 8.96, "text": " and then we'll jump straight into coding in PyTorch, a project I recently open-sourced."}, {"start": 8.96, "end": 14.24, "text": " So let's dig into it. So what exactly is semantic segmentation? So it's a computer vision task where"}, {"start": 14.24, "end": 20.16, "text": " the goal is to assign a single class to a single pixel. So you can see on the image there, certain"}, {"start": 20.16, "end": 26.0, "text": " pixels belong to the road, some pixels belong to the sidewalk, pedestrians, trees, etc. If you have"}, {"start": 26.0, "end": 32.0, "text": " all of this information, it's really easy to also do, for example, object detection because, for"}, {"start": 32.0, "end": 37.04, "text": " example, you just take the pedestrian, you just draw a bounding box around the pedestrian and you"}, {"start": 37.04, "end": 42.8, "text": " also know it's a pedestrian so you already did classification. The main metric for this task is"}, {"start": 42.8, "end": 48.64, "text": " something called mean intersectional reunion and it's pretty simple to understand. So basically"}, {"start": 49.2, "end": 54.8, "text": " you want to see the prediction your model makes, for example, for a single pedestrian and you want"}, {"start": 54.8, "end": 62.72, "text": " to see the intersection between the prediction and the ground truth, i.e. the true labels there."}, {"start": 62.72, "end": 68.32, "text": " And the bigger the intersection, the better the metric and intuitively makes sense and you can"}, {"start": 68.32, "end": 73.6, "text": " see the nice image with the squares there. So it's used in all kinds of applications. The two main"}, {"start": 73.6, "end": 78.72, "text": " ones that fall into my mind are autonomous vehicles and the second one would be maybe"}, {"start": 78.72, "end": 83.6, "text": " mixed reality. I'm probably biased because I worked on HoloLens. The neural network that we'll be using"}, {"start": 83.6, "end": 88.72, "text": " in the code is DeepLab v3. So let me just briefly mention the DeepLab family. So it started, it was"}, {"start": 88.72, "end": 94.64, "text": " developed by Google and it started from v1 all the way to v3 plus. That's the the newest model but we"}, {"start": 94.64, "end": 99.52, "text": " are using v3 because it has official PyTorch implementation so it's really simple to use."}, {"start": 99.52, "end": 103.75999999999999, "text": " On the screen on the top left you can see the input image and in the bottom right you can see"}, {"start": 103.75999999999999, "end": 110.72, "text": " the actual output I got with DeepLab v3. The model itself was trained on the Pascal VOC 2012 data"}, {"start": 110.72, "end": 119.28, "text": " set which has 21 classes. So 20 classes, foreground classes and one background class. And the classes"}, {"start": 119.28, "end": 126.96000000000001, "text": " compass, so I already mentioned background, we also have like person, airplane, bird etc. Because it"}, {"start": 126.96000000000001, "end": 135.2, "text": " was trained on 21 classes the output has 21 channels and the spatial resolution is the same as the input"}, {"start": 135.2, "end": 141.51999999999998, "text": " image. That's how semantic segmentation problem works. So how you get the most probable class for"}, {"start": 141.51999999999998, "end": 148.16, "text": " a single pixel is this. 
You just take, so you take a certain x y coordinate and you get 21 numbers"}, {"start": 148.16, "end": 153.92, "text": " right? Because you have 21 channels and wherever the highest number is, that's the highest probability,"}, {"start": 154.72, "end": 161.76, "text": " that's the class that's the most probable for that specific pixel. Say channel zero had the"}, {"start": 161.76, "end": 167.84, "text": " the biggest probability, that means background is the highest probability class for that specific"}, {"start": 167.84, "end": 173.76, "text": " pixel. It's as easy as that. The model you can see on the screen is actually FCN and not DeepLab,"}, {"start": 173.76, "end": 178.64, "text": " but the output shape is the thing I want you to see here. So it's 21 and the spatial resolution"}, {"start": 178.64, "end": 182.23999999999998, "text": " is the same as the input image as I already mentioned. Okay enough theory, let's jump into"}, {"start": 182.23999999999998, "end": 189.12, "text": " the code. This code is actually a whole neural style transfer video creation pipeline but we'll"}, {"start": 189.12, "end": 194.72, "text": " be focusing only on a single component and that's the segmentation. So let's see what the code looks"}, {"start": 194.72, "end": 201.84, "text": " like. So the extractPersonMaskFromFrames function basically takes an input frame and just extracts"}, {"start": 203.12, "end": 210.16, "text": " the pixels where the person is present. So line 55 we just like a regular pytorch thing, we just"}, {"start": 210.16, "end": 216.72, "text": " figure out whether the user has GPU or CPU. GPUs are always preferable when training, when using"}, {"start": 216.72, "end": 225.92, "text": " neural neural networks. Then we just instantiate DeepLab v3 model with the ResNet 101"}, {"start": 226.56, "end": 233.28, "text": " backbone and we set pre-trained to true because we want to have a pre-trained model. Obviously"}, {"start": 233.28, "end": 239.92, "text": " we put it to the GPU if we have one and we set the model to the evaluation mode because"}, {"start": 239.92, "end": 246.32, "text": " certain layers like batch normalization and dropout will behave differently if we don't set this and"}, {"start": 246.32, "end": 252.07999999999998, "text": " we'll have some wrong outputs. Next up we create these transforms which will be applied to every"}, {"start": 252.07999999999998, "end": 260.56, "text": " single frame. So we want to maybe specify certain height and width because if we maybe our GPU"}, {"start": 260.56, "end": 266.15999999999997, "text": " doesn't have enough VRAM and it will just give you CUDA out of memory exception if we have a"}, {"start": 266.16, "end": 273.20000000000005, "text": " too big of a frame. So then we convert it to tensor to pytorch tensor and we do normalization"}, {"start": 273.20000000000005, "end": 279.76000000000005, "text": " using ImageNet statistics and this is just because of the way that the pytorch models were trained"}, {"start": 279.76000000000005, "end": 288.96000000000004, "text": " we have to do this processing step. 
Then we just create a image folder out of the frames that we"}, {"start": 288.96, "end": 296.71999999999997, "text": " want to process also standard pytorch think and we create a data loader and set the batch size to"}, {"start": 296.71999999999997, "end": 301.91999999999996, "text": " for example four or something because we want to use the parallel processing power of GPUs."}, {"start": 303.84, "end": 311.52, "text": " Next steps are after figuring out whether the output directories are empty or full. This is"}, {"start": 311.52, "end": 317.76, "text": " like a cache mechanism if they already have some frames we want to just skip this state and"}, {"start": 317.76, "end": 324.96, "text": " this stage in the pipeline but if they are empty we want to proceed and we wrap all of this into"}, {"start": 324.96, "end": 331.59999999999997, "text": " this torch no-grat context because we are doing inference and otherwise pytorch will create"}, {"start": 331.59999999999997, "end": 337.59999999999997, "text": " computational graphs by default which will allocate lots of memory and eat up your VRAM"}, {"start": 337.59999999999997, "end": 341.84, "text": " so you want to you want to do this step it's really important. Next important thing is we"}, {"start": 341.84, "end": 351.44, "text": " just iterate through this data loader and we get image batches and we just this line 84"}, {"start": 351.44, "end": 356.79999999999995, "text": " we just place the batch into the GPU because the model is also in GPU so you want to have tensors"}, {"start": 356.79999999999995, "end": 363.2, "text": " as well as the model on the same device otherwise you'll get some error and we just do the"}, {"start": 363.2, "end": 370.79999999999995, "text": " inference here we pass the image batch into the segmentation deep lab v3 model and because the"}, {"start": 370.8, "end": 378.32, "text": " output is actually order dictionary this is just like a thing you got to do you just extract the"}, {"start": 378.32, "end": 385.84000000000003, "text": " actual output using this out and then we place that resulting batch to the CPU and convert it to"}, {"start": 385.84000000000003, "end": 394.32, "text": " NumPy. Afterwards we iterate through the result batch which as I already mentioned contains so"}, {"start": 394.32, "end": 401.59999999999997, "text": " it's a dimension is n where the n is the number of like the batch size then 21 because we have"}, {"start": 401.59999999999997, "end": 408.96, "text": " 21 output channels and then height and width the same as the input frames. 
The out CPU is the"}, {"start": 410.15999999999997, "end": 416.8, "text": " has the shape I mentioned so 21 channels and height and width as the input frame and this is"}, {"start": 416.8, "end": 422.15999999999997, "text": " the main step so we do the arg max so that actually finds the thing I mentioned in the theoretical"}, {"start": 422.16, "end": 428.64000000000004, "text": " part so we just find the channel where we have the highest probability for that specific pixel"}, {"start": 428.64000000000004, "end": 435.68, "text": " and then the this equal equal person channel index will figure out the pixels where the person class"}, {"start": 435.68, "end": 442.56, "text": " was the highest probable one on the image so we just get a mask doing this we get a person mask i.e."}, {"start": 442.56, "end": 448.88, "text": " we get the we'll have like a boolean value true on those pixels where the person was present"}, {"start": 448.88, "end": 458.71999999999997, "text": " as simple as that and then just some bureaucracy here times 255 will convert booleans into 0 to 255"}, {"start": 458.71999999999997, "end": 466.71999999999997, "text": " binary image and converting into just explicitly converting it here to NumPy unsigned integer 8"}, {"start": 467.36, "end": 475.28, "text": " type. After this we just do this post-processing step but before we dig into that part of the code"}, {"start": 475.28, "end": 482.32, "text": " I want to briefly also cover some theory behind the heuristics that will be used in that specific"}, {"start": 482.32, "end": 489.76, "text": " function. What it will do is it will just clean up some some components that the model spuriously"}, {"start": 489.76, "end": 497.2, "text": " outputted which are erroneous so we just want to clean the do some post-processing and it's"}, {"start": 497.2, "end": 501.91999999999996, "text": " pretty common in computer vision you usually have these hybrid approaches where the deep learning"}, {"start": 501.92, "end": 506.08000000000004, "text": " pipeline produces something and you just want to do some cleaning afterwards. So there are two things"}, {"start": 506.08000000000004, "end": 510.40000000000003, "text": " you want to know here the first one is connected components algorithm and the second one is the"}, {"start": 511.12, "end": 516.8000000000001, "text": " morphological filtering operations. So connected components are pretty simple we as humans can"}, {"start": 516.8000000000001, "end": 523.6, "text": " easily tell that the square is not connected to the circle i.e. there is there doesn't exist some"}, {"start": 524.16, "end": 529.28, "text": " like some some path of white pixels that's connecting them and what the algorithm should"}, {"start": 529.28, "end": 537.52, "text": " do here is just assign a different label to every one of these components like label 0 for background"}, {"start": 537.52, "end": 543.12, "text": " label 1 for square label 2 for circle having that information we can easily extract the circle"}, {"start": 543.12, "end": 548.0799999999999, "text": " or whatever component that we want and the colored image on the right just visualizes the thing i"}, {"start": 548.0799999999999, "end": 552.56, "text": " mentioned so it just visualizes the labels. 
So the second thing you need to know about is morphological"}, {"start": 552.56, "end": 558.88, "text": " filtering and you basically take the binary images the input and you just process it with something"}, {"start": 558.88, "end": 564.64, "text": " called the structuring element or the kernel which is also binary a simple binary mask and you can"}, {"start": 564.64, "end": 570.32, "text": " either do erosion where you get like the smaller area like you can see the j letter got smaller"}, {"start": 570.32, "end": 577.68, "text": " there or you can do something called dilation where you get the like bigger area and the way"}, {"start": 577.68, "end": 582.64, "text": " you implement this is if you know something about logic gates this is pretty much a multiple"}, {"start": 582.64, "end": 592.24, "text": " input and for the erosion or multiple input or gate for the dilation pretty simple. So finally"}, {"start": 592.24, "end": 597.68, "text": " opening something we'll be using actually and that's just a combination just a sequential"}, {"start": 597.68, "end": 604.24, "text": " you just sequentially apply first erosion and then dilation and it makes sense because if you see the"}, {"start": 604.24, "end": 610.56, "text": " input image before doing the opening operation you have those small dots after doing erosion"}, {"start": 610.56, "end": 616.88, "text": " they will disappear and after doing dilation you'll just you'll just be left with the j letter which"}, {"start": 616.88, "end": 622.7199999999999, "text": " will get to its previous size initial size. So on this slide you can see a concrete example of the"}, {"start": 622.7199999999999, "end": 628.64, "text": " person mask i got using deep lab wii 3 model and you can see by doing the opening we'll just kind"}, {"start": 628.64, "end": 635.8399999999999, "text": " of split that a small component that shouldn't be shouldn't supposed to be there and then after"}, {"start": 635.84, "end": 641.12, "text": " figuring out connected components and finding the second biggest one the first being background the"}, {"start": 641.12, "end": 648.08, "text": " second being person will be able to keep only the the person pixels simple as that. Back to the code"}, {"start": 648.08, "end": 653.9200000000001, "text": " we can now figure out what the post-processing method actually does and if we go jump here"}, {"start": 654.8000000000001, "end": 660.1600000000001, "text": " you can see that on line 26 we just create a kernel so that's the structuring element i mentioned"}, {"start": 660.16, "end": 668.16, "text": " in the morphological filtering slide and it's just a simple square of ones and after applying the"}, {"start": 668.16, "end": 677.68, "text": " using open cd's morphological function we just apply to the mask and we get the this thing called"}, {"start": 677.68, "end": 684.88, "text": " open mask which is the the initial mask from the deep lab model after applying opening operation."}, {"start": 684.88, "end": 692.16, "text": " So then we just do connected components on the open mask and we get the labeled image out. Now"}, {"start": 692.16, "end": 697.28, "text": " given that labeled image from the connected components algorithm we need to figure out which"}, {"start": 697.28, "end": 704.08, "text": " component belongs to the background. So we take the upper part of the labeled image so the first 10"}, {"start": 704.08, "end": 711.52, "text": " percent of the of the image and just count and see what's the most frequent value in that space. 
I"}, {"start": 711.52, "end": 717.92, "text": " call that discriminant subspace and the most common value is something we assume to be the"}, {"start": 717.92, "end": 724.96, "text": " background component background label and we get the background index here on line 37. The next"}, {"start": 724.96, "end": 732.56, "text": " step is to create a list of tuples where each tuple contains connected component components label"}, {"start": 732.56, "end": 741.28, "text": " and area and after that's line 43 and after sorting those according to the size of those areas and"}, {"start": 741.28, "end": 748.0799999999999, "text": " just filtering out the the the background label that we found above using the discriminant subspace"}, {"start": 748.0799999999999, "end": 755.68, "text": " we are left off with the the the first biggest component after the background that we just"}, {"start": 755.68, "end": 763.68, "text": " we assume to be person and we just grab the the using the so this this here it takes the the"}, {"start": 763.68, "end": 772.64, "text": " biggest leftover component and the zero just grabs the label so I'm left with person index and after"}, {"start": 773.52, "end": 780.56, "text": " after just checking which pixels contain exactly this label I'm left only with person pixels."}, {"start": 780.56, "end": 786.2399999999999, "text": " It's pretty simple. If I go ahead and only visualize the mass that came out directly from the model"}, {"start": 786.2399999999999, "end": 795.8399999999999, "text": " we get a result like this and you can notice certain components here which do not belong to the person"}, {"start": 797.28, "end": 804.4799999999999, "text": " mask and which should be removed obviously so if I just go inside the post-processing method"}, {"start": 804.48, "end": 812.72, "text": " and after doing the morphological operations we get this and you can see on the right that"}, {"start": 812.72, "end": 820.4, "text": " the erosion already fixed this concrete mask but in some other cases we'll need connected components"}, {"start": 820.4, "end": 826.64, "text": " to remove the other connected components which do not belong to the person. So that covers the"}, {"start": 826.64, "end": 833.04, "text": " semantic segmentation theory and code. Hopefully you found this video useful and I hope you found"}, {"start": 833.04, "end": 842.64, "text": " this video useful. If you did consider subscribing and consider sharing this video and see you next time."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=6rVrh5gnpwk
What is Google Deep Dream? (Basic Theory) | Deep Dream Series #1
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Learn the basic theory behind the Deep Dream algorithm. You'll learn about: ✔️ What Google Deep Dream is and how it all began ✔️Impacts of shallower/deeper layers on the final result ✔️Basic concepts: gradient ascent, image pyramids, etc. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ DeepDream in PyTorch: https://github.com/gordicaleksa/pytorch-deepdream ✅ Original blog: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 1:10 - How it all started in 2015 3:30 - Simple explanation of how it works 4:30 - Impact of using shallow layers 5:30 - Dataset matters (ImageNet vs Places 365) 6:10 - More details on how it works ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donations: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #deepdream #deeplearning #art #ai
I'm starting a new video series with this video, about the algorithm called Deep Dream. In this video we're going to cover the basic theory behind the algorithm, and in the next one we'll go through the code I open-sourced just a couple of days ago, as you can see here on the screen. It will give you some really psychedelic-looking imagery, but that's obviously not the whole point of this series: it's going to teach you a lot more about deep learning itself, about things like computational graphs in PyTorch, and also how to create some really nice artwork using this code. But first let's jump to some basic theory about the Deep Dream algorithm itself. So what's the idea behind this algorithm? Given an input image, a neural network pre-trained on some classification task, for example on ImageNet or MIT Places 365, basically sees something in that image, certain features, and whatever the network sees, we just tell it to amplify those exact features. By doing that you get the image on the right that you can see on the screen. I found the story of how it all started really interesting. Mordvintsev, the guy that actually created the Deep Dream algorithm, the initial idea, actually woke up around 2am after a nightmare, just sat down at his computer, coded this up in maybe 15 minutes, and instantly got some really interesting results. What's maybe even more uncommon is that they did not publish a paper on this; they just wrote a blog post and subsequently open-sourced a simple implementation of the Deep Dream algorithm. You can see on the screen they started getting some really weird creatures and results, titled things like the Pig-Snail or the Dog-Fish. It just sparked a lot of imagination from the very beginning, and those creatures came from the cloud image you can see in the top left. Also interesting is that the result depends on what you give to the network as the input: for example, if you give it some scenery it will start populating it with towers or buildings, and if you give it some kind of leaf or something it will start populating it with insects and animals. So it also depends on the data that the network itself was trained on, not only on the input image, but I'll get into a bit more detail a bit later. Deep Dream was not only about creating these psychedelic, crazy-looking images; it was actually part of an effort to better understand neural networks, a field called interpretability of neural networks. You can see those examples in the bottom left: what they actually told the network was, hey, tell me what you see, what do you think a dumbbell is? Going in reverse and reconstructing the image, they got these four images in the bottom left. You can see that the network actually learned to always expect to see the muscular hands of a weightlifter or a bodybuilder next to the dumbbell, and did not manage to extract the concept of the dumbbell itself, the way we humans think about a dumbbell as an independent object, not dependent on the context where we usually see it. Now let's get a bit deeper into how the thing actually works. 
So what you do is you have a pre-trained neural network, as I already said, pre-trained on some classification dataset like ImageNet or MIT Places 365, and you pass in the input image and feed it through up to a certain layer. When you come to that certain layer, whatever activations you have there, you want to amplify them, and you do that by doing gradient ascent and not descent. For example, say you took a simple L2 or MSE loss on those activations and did the classical procedure, i.e. minimization: what you would end up with is that the input image would either become black or, more probably, just some random noise image. On the other hand, if you tell it to maximize instead of minimize those activations, you get the Deep Dream results that we're familiar with, like the image on the right here. Now the interesting thing is that depending on which exact layer in the network you're trying to optimize (maximize, in this case), you get different interesting results. If you take some shallower layer, one of the first layers in the neural network, and you do the procedure I just described, you'll get these low-level patterns, really interesting patterns and colors, edges and corners, the stuff that we know, the usual low-level vision stuff. But on the other hand, if you take some deeper layer in the network, you start getting increasingly complex features, such as maybe animal eyes or snouts or something. I'm of course assuming here that the network was trained on ImageNet, which contained a lot of images of dogs and cats and animals in general; if we had some other dataset, those high-level features would be some other things and not eyes or snouts. Now, I already mentioned ImageNet and MIT Places 365, but those are obviously not the only datasets out there; you can use some other ones, it's just that I use these two, and these two are really common in the Deep Dream community. On the left here you can see an image produced by a network that was trained on a lot of imagery containing animals. On the right you can see one from a network trained on MIT Places, which contains scenery and human-built objects, pubs, cafes and stuff like that, and that's the reason it has these human-built things ingrained into the image itself. Going a bit deeper into the theory: I already mentioned gradient ascent, and basically the only difference is that you just change the sign. When you do the update for a single pixel, instead of the minus where you use the learning rate and the gradients, you just switch it to a plus. But there is also some more advanced stuff, like the image pyramid. So first, what is an image pyramid? Basically, you have your base image and you just downsample it by some ratio, let's say 1.5, and then you do it again and again and again, and you just stack those images: that's the image pyramid. Now, why do we do that? The thing is, there is something called the receptive field of the network, and when you feed all of those images from the image pyramid, with their different sizes, to the network, you will see different features in all of them. Some will be more fine-grained, some will be coarser features, and combining all of those across the layers, you'll get a really nice-looking Deep Dream image. 
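A minimal sketch of the gradient ascent step described above, assuming a wrapper model that returns a list of intermediate activations; the learning rate and layer index are illustrative placeholders, not the project's exact values.

```python
import torch

def gradient_ascent_step(model, img, layer_index, lr=0.09):
    # img: a (1, 3, H, W) leaf tensor; we amplify the activations of one chosen layer
    img.requires_grad_(True)
    activations = model(img)[layer_index]      # assumes the model returns a list of feature maps
    # maximizing the MSE against zeros is the same as maximizing the mean squared activation
    loss = torch.nn.functional.mse_loss(activations, torch.zeros_like(activations))
    loss.backward()
    with torch.no_grad():
        img += lr * img.grad                   # gradient *ascent*: plus instead of minus
        img.grad.zero_()
    return img
```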
The reason this works is that when you have a smaller image at the input, a single neuron somewhere in some deeper layer will be able to cover most of the pixels in that input image, and it will be able to manipulate and tweak those pixels; whereas when you have a bigger image, that neuron will only be able to change and modify a small area of the input image. So the algorithm proceeds like this: you just start with the smallest resolution, you do the Deep Dream, then you upsample the image, then again you do the Deep Dream, and so on until you get to the base image of the pyramid, the biggest resolution, and you just stop. There are two more important things that you need to implement in order to get some nice images. One is gradient smoothing and normalization. Basically, if you only took the gradients that you get from this optimization and just blindly applied them, the image would quickly explode out of the bounds that we are keeping it in. So what you have to do is treat the gradients as a simple image, take a Gaussian kernel and filter the gradient image, i.e. convolve it with the kernel; that will smoothen out the gradients. Additionally, you want to take the standard deviation of the whole thing and scale all of those gradients with that standard deviation, and that will keep things bounded. Aside from that, it's quite usual to see that just after applying the gradient ascent step, you clip the image so that you keep it in the designated boundaries, and those are usually the ImageNet-normalized boundaries. That should be enough for you to understand the next coding video, where we'll step through the code, try and run this ourselves, and get some really awesome-looking images. And by the way, don't forget to check out the description where I've linked the GitHub code. Feel free to just play around with it before I create the actual video explaining how to use it; I guess most of you are coders or experienced machine learning engineers already, so go ahead and play. And as always, if you found this video useful, and only if you found it useful, please consider subscribing and sharing the video. That would mean a lot to me, and yeah, see you in the next video.
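A sketch of the gradient smoothing, normalization, and clamping steps just described; scipy's gaussian_filter stands in for the Gaussian smoothing, and the sigma and clamp bounds (approximated from ImageNet mean/std normalization) are illustrative rather than the repository's exact values. The image pyramid itself is just a loop around these steps at progressively larger resolutions.

```python
import torch
from scipy.ndimage import gaussian_filter

def smooth_and_normalize_grad(grad, sigma=0.5):
    # treat the (1, 3, H, W) gradient like an image: blur only the spatial dims,
    # then scale by its standard deviation to keep update magnitudes bounded
    g = grad.detach().cpu().numpy()
    g = gaussian_filter(g, sigma=(0, 0, sigma, sigma))
    g = g / (g.std() + 1e-8)
    return torch.from_numpy(g).to(grad.device)

def clamp_image(img, lo=-2.12, hi=2.64):
    # keep the image inside (approximate) ImageNet-normalized bounds after each ascent step
    with torch.no_grad():
        img.clamp_(lo, hi)
    return img
```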
[{"start": 0.0, "end": 5.92, "text": " I'm starting a new video series with this video about the algorithm called Deep Dream."}, {"start": 5.92, "end": 10.72, "text": " And in this video we're going to cover the basic theory behind the algorithm."}, {"start": 10.72, "end": 14.72, "text": " And in the next one we'll go through the code I just open sourced a couple of days ago."}, {"start": 14.72, "end": 16.8, "text": " As you can see it here on the screen."}, {"start": 16.8, "end": 20.240000000000002, "text": " And it will give you some really psychedelic looking imagery."}, {"start": 20.240000000000002, "end": 24.76, "text": " But that's not the point obviously of this series."}, {"start": 24.76, "end": 29.64, "text": " It's going to teach you, you're going to learn a lot more about deep learning itself, about"}, {"start": 29.64, "end": 36.84, "text": " like computational graphs in PyTorch and also how to create some really nice artwork using"}, {"start": 36.84, "end": 37.84, "text": " this code."}, {"start": 37.84, "end": 41.68, "text": " But first let's jump to some basic theory about the Deep Dream algorithm itself."}, {"start": 41.68, "end": 43.6, "text": " So what's the idea behind this algorithm?"}, {"start": 43.6, "end": 50.72, "text": " So given an input image the thing that happens is that the neural network pretrained on some"}, {"start": 50.72, "end": 57.16, "text": " classification test or something like on ImageNet or MIT places 365."}, {"start": 57.16, "end": 61.0, "text": " Basically sees something in that image, certain features."}, {"start": 61.0, "end": 67.39999999999999, "text": " And whatever the network sees we just tell it to just amplify those exact features."}, {"start": 67.39999999999999, "end": 71.03999999999999, "text": " And by doing that you get the image on the right that you can see on the screen."}, {"start": 71.03999999999999, "end": 73.64, "text": " I found the story how it all started really interesting."}, {"start": 73.64, "end": 80.67999999999999, "text": " So Mordvinsa, the guy that actually created the Deep Dream algorithm, the initial idea,"}, {"start": 80.67999999999999, "end": 83.88, "text": " actually woke up like around 2am and had a nightmare."}, {"start": 83.88, "end": 89.19999999999999, "text": " He just sat down on his computer, coded this up in maybe 15 minutes and instantaneously"}, {"start": 89.19999999999999, "end": 92.08, "text": " he got some really interesting results."}, {"start": 92.08, "end": 96.92, "text": " What's even more uncommon maybe is that they did not publish a paper on this, they just"}, {"start": 96.92, "end": 102.32, "text": " wrote the blog and subsequently they open sourced a simple implementation of the Deep"}, {"start": 102.32, "end": 103.96, "text": " Dream algorithm."}, {"start": 103.96, "end": 108.96, "text": " And you can see on the screen they started getting some really weird creatures and results"}, {"start": 108.96, "end": 113.08, "text": " titled like Pig Snail or the Dogfish."}, {"start": 113.08, "end": 116.28, "text": " It just sparked a lot of imagination from the very beginning."}, {"start": 116.28, "end": 121.44, "text": " And those creatures came from this cloud image you can see in the top left."}, {"start": 121.44, "end": 125.84, "text": " And also the interesting thing is that depending on what you give to the network, what you"}, {"start": 125.84, "end": 131.92, "text": " input as the input, for example if you give it some scene it will start populating it"}, {"start": 131.92, "end": 138.96, 
"text": " with some towers or buildings and if you give it some kind of maybe leaf or something it"}, {"start": 138.96, "end": 142.18, "text": " will start populating it with insects and animals."}, {"start": 142.18, "end": 148.88, "text": " So it also depends on the data that the network itself was trained on, not only on the input"}, {"start": 148.88, "end": 149.88, "text": " image."}, {"start": 149.88, "end": 151.68, "text": " But I'll get into a bit more details a bit later."}, {"start": 151.68, "end": 157.24, "text": " But Deep Dream was not only about creating these psychedelic crazy looking images, it"}, {"start": 157.24, "end": 163.72, "text": " was actually a part of the effort to better understand neural networks, a field called"}, {"start": 163.72, "end": 166.52, "text": " interpretability of neural networks."}, {"start": 166.52, "end": 173.04000000000002, "text": " And you can see in the bottom left those examples, so what they actually told the network was"}, {"start": 173.04000000000002, "end": 178.96, "text": " hey, tell me what do you see, what do you think dumbbell is?"}, {"start": 178.96, "end": 184.04000000000002, "text": " And going in reverse and reconstructing the image they got these four images in the bottom"}, {"start": 184.04000000000002, "end": 185.04000000000002, "text": " left."}, {"start": 185.04000000000002, "end": 191.60000000000002, "text": " And you can see that the network actually learned to always expect to see some muscular"}, {"start": 191.6, "end": 197.79999999999998, "text": " hands of a weightlifter next to the dumbbell or a bodybuilder, whatever, and did not manage"}, {"start": 197.79999999999998, "end": 202.88, "text": " to actually extract the concept itself of a dumbbell, how we humans actually think about"}, {"start": 202.88, "end": 208.4, "text": " the dumbbell itself as an independent object, not dependent on the context where we usually"}, {"start": 208.4, "end": 209.4, "text": " see it."}, {"start": 209.4, "end": 212.78, "text": " And let's get a bit deeper into how the thing actually works."}, {"start": 212.78, "end": 217.88, "text": " So what you do, you have a pre-trained neural network as I already said, pre-trained on"}, {"start": 217.88, "end": 225.0, "text": " some classification data set like ImageNet or MIT places 365, and you pass in the input"}, {"start": 225.0, "end": 231.07999999999998, "text": " image and you feed it through to a certain layer, and when you come to a certain layer,"}, {"start": 231.07999999999998, "end": 235.2, "text": " whatever activations you have there, you want to amplify them."}, {"start": 235.2, "end": 238.76, "text": " And you do that by doing the gradient ascent and not descent."}, {"start": 238.76, "end": 245.88, "text": " For example, you took a simple L2 or MSC loss on those activations and you wanted to do"}, {"start": 245.88, "end": 249.84, "text": " a classical procedure that's a minimization."}, {"start": 249.84, "end": 255.56, "text": " What you would end up with would be that the input image would either become black or more"}, {"start": 255.56, "end": 258.44, "text": " probably just some random noise image."}, {"start": 258.44, "end": 264.2, "text": " On the other hand, if you tell it to maximize instead of minimize those activations, you"}, {"start": 264.2, "end": 269.52, "text": " get the deep dream results that we're familiar with as the image on the right here."}, {"start": 269.52, "end": 274.84, "text": " Now the interesting thing is that depending on which exact layer in the network 
you're"}, {"start": 274.84, "end": 281.08, "text": " trying to optimize, maximize in this case, you get different interesting results."}, {"start": 281.08, "end": 287.23999999999995, "text": " So if you take some shallower layers from some of the first layers in the neural network"}, {"start": 287.23999999999995, "end": 294.35999999999996, "text": " and you try and do this procedure I just described, you'll get this low level patterns, really"}, {"start": 294.35999999999996, "end": 299.03999999999996, "text": " interesting patterns and colors, edges and corners, the stuff that we know, the usual"}, {"start": 299.03999999999996, "end": 302.88, "text": " low level vision stuff."}, {"start": 302.88, "end": 308.68, "text": " But on the other hand, if you took some deeper layer in the network, you would start getting"}, {"start": 308.68, "end": 316.56, "text": " some more increasingly complex features such as like maybe animal eyes or snout or something."}, {"start": 316.56, "end": 320.71999999999997, "text": " I'm assuming of course here that the network was trained on ImageNet which contained a"}, {"start": 320.71999999999997, "end": 325.28, "text": " lot of like images of dogs and cats and animals in general."}, {"start": 325.28, "end": 331.15999999999997, "text": " But if we had some other data set, then those high level features would be some other things"}, {"start": 331.16, "end": 332.94, "text": " and not eyes or snouts."}, {"start": 332.94, "end": 338.84000000000003, "text": " Now I already mentioned ImageNet and MIT places 365, but those are not the only data sets"}, {"start": 338.84000000000003, "end": 340.40000000000003, "text": " out there obviously."}, {"start": 340.40000000000003, "end": 345.40000000000003, "text": " You can use some other ones, but just that I use these two and these two are really common"}, {"start": 345.40000000000003, "end": 347.56, "text": " in the deep dream community."}, {"start": 347.56, "end": 352.48, "text": " So you can see on the left here, the image was produced by a network that was trained"}, {"start": 352.48, "end": 355.72, "text": " on a lot of imagery that contained animals."}, {"start": 355.72, "end": 360.72, "text": " On the right you can see the network that was trained on MIT places which contains some"}, {"start": 360.72, "end": 365.52000000000004, "text": " scenery, human built objects, pubs, cafes and stuff like that."}, {"start": 365.52000000000004, "end": 372.24, "text": " And that's the reason it has these human like built things ingrained into the image itself."}, {"start": 372.24, "end": 378.48, "text": " Going a bit more deeper into the theory, so I already mentioned gradient ascent."}, {"start": 378.48, "end": 383.02000000000004, "text": " So basically the only difference is you just change the sign."}, {"start": 383.02000000000004, "end": 387.88000000000005, "text": " When you do the update for a single pixel, you don't do the minus where you use the learning"}, {"start": 387.88, "end": 392.8, "text": " rate and the gradients, you just switch it to plus."}, {"start": 392.8, "end": 396.71999999999997, "text": " But there are also some more advanced stuff like image pyramid."}, {"start": 396.71999999999997, "end": 400.64, "text": " So the idea there is, so first what's image pyramid?"}, {"start": 400.64, "end": 406.84, "text": " So it's basically you have your base image and you just down sample it by some ratio,"}, {"start": 406.84, "end": 412.6, "text": " let's say 1.5 and then you do it again and again and again and you just 
stack those images"}, {"start": 412.6, "end": 414.04, "text": " and that's the image pyramid."}, {"start": 414.04, "end": 415.84, "text": " So now why do we do that?"}, {"start": 415.84, "end": 421.15999999999997, "text": " Now the thing is there is something called receptive field of network and when you feed"}, {"start": 421.15999999999997, "end": 426.23999999999995, "text": " all of those images from the image pyramid with different sizes to the network, you will"}, {"start": 426.23999999999995, "end": 429.2, "text": " see different features in all of them."}, {"start": 429.2, "end": 434.59999999999997, "text": " Some will be more fine grained, some will be like coarser features and combining all of"}, {"start": 434.59999999999997, "end": 439.41999999999996, "text": " those across the layers, you'll get a really nice looking deep dream image."}, {"start": 439.41999999999996, "end": 445.23999999999995, "text": " So why they work is that when you have a smaller image in the input, a single neuron somewhere"}, {"start": 445.24, "end": 451.56, "text": " in some deeper layer will be able to cover most of the pixels in that input image and"}, {"start": 451.56, "end": 456.08, "text": " it will be able to manipulate and tweak those pixels."}, {"start": 456.08, "end": 462.24, "text": " Whereas when you have bigger image, that pixel will only be able to change and modify just"}, {"start": 462.24, "end": 464.52, "text": " a small area of the input image."}, {"start": 464.52, "end": 466.46000000000004, "text": " So the algorithm proceeds like this."}, {"start": 466.46000000000004, "end": 471.06, "text": " You just start with the smallest resolution, you do the deep dream, then you just up sample"}, {"start": 471.06, "end": 476.04, "text": " the image, then again you do the deep dream and then again until you get to the base image"}, {"start": 476.04, "end": 480.0, "text": " of the pyramid, the biggest resolution and you just stop."}, {"start": 480.0, "end": 484.72, "text": " There are two more important things that you need to implement in order to have some nice"}, {"start": 484.72, "end": 485.72, "text": " images."}, {"start": 485.72, "end": 489.24, "text": " One is the gradient smoothing and normalization."}, {"start": 489.24, "end": 495.56, "text": " So basically if you only took gradients that you get from this optimization and just blindly"}, {"start": 495.56, "end": 501.88, "text": " applied them, the image will quickly explode out of the bounds that we are keeping it in."}, {"start": 501.88, "end": 509.88, "text": " So what you have to do is to just treat the gradients as a simple image and take a Gaussian"}, {"start": 509.88, "end": 516.84, "text": " kernel and just filter out the image, your convolution with the kernel and that will"}, {"start": 516.84, "end": 522.64, "text": " just smoothen out the gradients and additionally you just want to take the standard deviation"}, {"start": 522.64, "end": 530.16, "text": " of the whole thing and just scale all of those gradients with that standard deviation and"}, {"start": 530.16, "end": 532.96, "text": " that will just keep the things scoped."}, {"start": 532.96, "end": 539.6, "text": " Aside from that it's quite usual to see to just after applying the gradient ascent step,"}, {"start": 539.6, "end": 544.4399999999999, "text": " you just want to clip the image so that you keep it in the designated boundary that you"}, {"start": 544.4399999999999, "end": 548.84, "text": " want to keep it in and those are usually ImageNet normalized 
boundaries."}, {"start": 548.84, "end": 554.24, "text": " That should be enough for you to understand the next coding video where we'll go step"}, {"start": 554.24, "end": 559.48, "text": " through the code and just try and run this ourselves and get some really awesome looking"}, {"start": 559.48, "end": 560.48, "text": " images."}, {"start": 560.48, "end": 566.2, "text": " Yeah and by the way don't forget to check out the description where I've linked GitHub"}, {"start": 566.2, "end": 567.9200000000001, "text": " code."}, {"start": 567.9200000000001, "end": 573.0400000000001, "text": " Feel free to just play around with it before I create the actual video of explaining how"}, {"start": 573.0400000000001, "end": 574.0400000000001, "text": " to use it."}, {"start": 574.0400000000001, "end": 578.8000000000001, "text": " I guess most of you are coders or experienced machine learning engineers already so go ahead"}, {"start": 578.8, "end": 579.8, "text": " and play."}, {"start": 579.8, "end": 585.28, "text": " And as always if you found this video useful and only if you found it useful please consider"}, {"start": 585.28, "end": 587.76, "text": " subscribing and sharing the video."}, {"start": 587.76, "end": 612.56, "text": " That would mean a lot to me and yeah see you in the next video."}]
Aleksa Gordić - The AI Epiphany
https://www.youtube.com/watch?v=EuXd-aO77A0
Feed-forward method (training) | Neural Style Transfer #6
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ The 6th video in the NST series! 🎨 You'll learn about: ✔️ how to train and monitor your NST feed-forward neural network Note: you'll need a decent machine to train these yourself in a reasonable amount of time. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✅ GitHub code: https://github.com/gordicaleksa/pytorch-nst-feedforward ✅ Paper (Johnson et al.): https://arxiv.org/pdf/1603.08155.pdf ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ⌚️ Timetable: 0:00 - Quick training script walk-through 3:15 - Setting up training parameters 4:38 - MS Coco download 5:15 - Running the training procedure & Tensorboard 8:38 - MS COCO quick walk-through 9:30 - Saving models (checkpoints + binaries) ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💰 BECOME A PATREON OF THE AI EPIPHANY ❤️ If these videos, GitHub projects, and blogs help you, consider helping me out by supporting me on Patreon! The AI Epiphany ► https://www.patreon.com/theaiepiphany One-time donations: https://www.paypal.com/paypalme/theaiepiphany Much love! ❤️ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 💡 The AI Epiphany is a channel dedicated to simplifying the field of AI using creative visualizations and in general, a stronger focus on geometrical and visual intuition, rather than the algebraic and numerical "intuition". ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ 👋 CONNECT WITH ME ON SOCIAL LinkedIn ► https://www.linkedin.com/in/aleksagordic/ Twitter ► https://twitter.com/gordic_aleksa Instagram ► https://www.instagram.com/aiepiphany/ Facebook ► https://www.facebook.com/aiepiphany/ 👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY: Discord ► https://discord.gg/peBrCpheKE 📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER: Substack ► https://aiepiphany.substack.com/ 💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS: GitHub ► https://github.com/gordicaleksa 📚 FOLLOW ME ON MEDIUM: Medium ► https://gordicaleksa.medium.com/ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #neuralstyletransfer #deeplearning #ai
So in the last video we saw how to run the stylization script in this feed-forward neural style transfer method. And in this one we're going to see how to run training and how to leverage TensorBoard in order to track your experiments. Okay, so go ahead and open the training script. Let's jump to this function. So this is just some boilerplate code. And then we just create a PyTorch data loader which will basically take images from the MS COCO dataset, resize them to 256 by 256, concatenate them into batches and also do image normalization. Afterwards we have this transformer network, which is the meat of this project basically, because that's the network that we'll be training. For the perceptual loss net we're using VGG16 here; it's the usual thing to do in the NST field. And yeah, we just add an optimizer, nothing special there. And we're going to load this style image, and you can choose basically whatever you put in this style images directory; you'll be able to use that one to train the model for that specific style image. What we do here is we just pass the style image through the perceptual net and we find the style representation, which is just a collection of Gram matrices. I just print the header here, nothing special there, just boilerplate code. And this is the main training loop. So what we do here is, as I said, we have this train loader. We just take one batch and we pass the batch through the transformer net. Whatever comes out is the stylized batch, and we want it to have the same content as the content batch and the same style as the style image we just initialized a couple of seconds ago. So we take this content batch, which is just a collection of images from MS COCO, we pass it through the perceptual loss net, and we also pass this stylized batch that just came out of the transformer net through the VGG. So bottom line, this target content representation is going to be different for every single iteration, whereas the style representation is going to be the same, because these models are kind of not flexible; they can only be trained for a single style. We then calculate the content loss, which is just an MSE loss between the target content representation I just mentioned and the current content representation. We find the style loss by also doing MSE, but on Gram matrices. And the TV loss kind of enforces image smoothness; in my experience it did not help much, so I've set it to zero, but you can play with it if you get some really strange noise in the output image of your model. And then we just combine the total loss as a sum of those three components. Let's see how we can train these models. So just go down here, and the first thing you do is you pick the style image that you want to train your model on. So let's set, for example, mosaic here, and you can pick whatever options you have here; you can just place some new style image in this style images directory and use that one. Afterwards, pretty much the only parameter that you'll have to change and tweak is this style weight parameter, and it's 4e5 at the moment, so let's just stick with it. And you'll probably have to experiment with the number of epochs. So, for example, you can use only one epoch and put a smaller number of images here instead of None, which would use all of the MS COCO images.
And that's around eighty-three thousand images; maybe put only two thousand here just for demonstration's sake. And I'm going to put only one epoch here. TensorBoard is set to true; we want to enable this, as it will make your life easier when monitoring these experiments. Beforehand, actually, we have to download the MS COCO dataset. You just basically open this resource downloader, which I've already prepared for you, set the download choice here to one, run it, and it will start downloading the MS COCO dataset. I've already done that, so I'm just going to stop it here. It's a big dataset, I think around 13 gigabytes, so just keep that in the back of your mind. And now we're just going to run the training script and it will start training the model. I've left it to train, and I've actually set the number of data points to twenty thousand so that you can better see the curves in TensorBoard. So now let's just open TensorBoard. You can see the command here in the console. You just need to open your console, navigate to the project root, activate the environment, and then you can just copy-paste the command. So tensorboard --logdir and so on. Copy that, paste it here, run it, and you'll see the URL here; it's up. It will just open a local server which you can access through your browser. Just paste the URL in the browser and you'll see TensorBoard. Once it opens, you just scroll to your experiment, which will be at the bottom of this list, and activate these five curves. I'm going to explain what these mean. The content loss and the style loss are pretty much the most important ones. And you can see that the content loss, even after five thousand batches, i.e. twenty thousand images, is still going down, so you could train this model on at least twenty thousand more. The important thing with the style loss is that it doesn't have huge spikes; otherwise something went wrong and you probably need to lower your style weight, and then your model will be able to train. TV loss is zero because I've set the weight to zero; you'll only need to use it if you get some really unsmooth output from your model. And this bottom curve here just shows some statistics of the stylized batch, that is, the output of the transformer net I showed you in the code at the beginning of the video. These actually make a lot of sense, because the input imagery that goes through the transformer net is ImageNet-normalized, so it's kind of centered around zero, and the output should have a similar distribution, which we can see here. Aside from these curves, I also have some imagery logged during the training, and you can see how your model is learning to better reproduce the style of the image you're training it for. In the beginning it's just some random junk imagery, and then as time passes and the model has seen more and more data points, or images from MS COCO in this case, it just gets better and better. You can see that the content is getting more prominent and the stylization is getting better.
This is really useful to debug and see whether your model is actually learning to stylize the imagery, so that you don't have to wait for two hours and only then figure out that it's not learning anything. And one thing to note: you'll probably need a really strong machine to actually train this locally. As you can hear, the fans on my machine are working pretty heavily, so just keep that in mind. I'm going to close TensorBoard now and walk you through the MS COCO dataset, just to show you the imagery. It's always a good idea to go through the dataset and check out the images you're training on manually, and kind of get a feel for what it looks like. As you can see, the imagery in MS COCO is pretty much just random stuff. And it actually doesn't matter, because the model doesn't care which images you pass in; you only care to set some nice style image, and it will iterate through these content images and learn how to lower both the content and the style loss. Finally, you have your model saved into both binaries and checkpoints. Checkpoints basically serve to save intermediate models while your model is being trained, and you'll find them under the subdirectory with the same name as your style image. So our checkpoints will be under mosaic here, because we used the mosaic style image. But the final model will actually be placed in the binaries directory, and you'll be able to take that model and run it through the stylization script that I showed you in the last video. So that was it for this video. You learned what the training loop looks like, how to start your own training, how to monitor the training using TensorBoard's curves and logged imagery, and you also know where to find the binaries that you can run later on in the stylization script. Thanks for watching. If you found this video useful, consider subscribing and sharing, and see you in the next video.
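To make the loop described above concrete, here is a condensed sketch of a Johnson-style feed-forward NST training step. The transformer network and the VGG16 feature extractor are assumed to exist and are only placeholders for the repo's actual classes (the names and signatures below are not the repo's API); the style weight of 4e5, batch size 4, and 256x256 MS COCO setup follow the numbers mentioned in the video.

```python
# Illustrative feed-forward NST training loop: fixed style target (Gram matrices),
# per-batch content target, MSE content/style losses, optional TV loss, weighted sum.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def gram_matrix(feats):
    # feats: (B, C, H, W) -> (B, C, C), normalized by the number of elements
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f.bmm(f.transpose(1, 2)) / (c * h * w)


def train(transformer_net, vgg_features, style_img, coco_dir, device="cuda",
          content_weight=1.0, style_weight=4e5, tv_weight=0.0, epochs=1):
    # vgg_features(x) is assumed to return a list of feature maps from several VGG16
    # layers, with its parameters frozen (it only provides the perceptual targets).
    transform = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    # ImageFolder expects the MS COCO images to sit inside one dummy subfolder
    loader = DataLoader(datasets.ImageFolder(coco_dir, transform), batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(transformer_net.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    # style target: Gram matrices of the style image, fixed for the whole training run
    with torch.no_grad():
        target_style_grams = [gram_matrix(f) for f in vgg_features(style_img.to(device))]

    for _ in range(epochs):
        for content_batch, _ in loader:
            content_batch = content_batch.to(device)
            stylized_batch = transformer_net(content_batch)

            with torch.no_grad():
                content_feats = vgg_features(content_batch)   # new target every batch
            stylized_feats = vgg_features(stylized_batch)

            # content loss: MSE between a chosen VGG feature map of content and stylized batches
            content_loss = content_weight * mse(stylized_feats[1], content_feats[1])

            # style loss: MSE between Gram matrices of the stylized batch and the style target
            style_loss = 0.0
            for f, g in zip(stylized_feats, target_style_grams):
                sg = gram_matrix(f)
                style_loss = style_loss + mse(sg, g.expand_as(sg))
            style_loss = style_weight * style_loss

            # total variation loss encourages spatial smoothness (weight 0 here, as in the video)
            tv_loss = tv_weight * (
                torch.abs(stylized_batch[..., :, 1:] - stylized_batch[..., :, :-1]).mean()
                + torch.abs(stylized_batch[..., 1:, :] - stylized_batch[..., :-1, :]).mean())

            total_loss = content_loss + style_loss + tv_loss
            optimizer.zero_grad()
            total_loss.backward()
            optimizer.step()
```

Because only the transformer network's parameters are given to the optimizer, the VGG16 acts purely as a fixed measuring device, which is why a trained model ends up tied to the single style image used here.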
[{"start": 0.0, "end": 6.8, "text": " So in the last video we saw how to run stylization scripts and in this feedforward neural style transfer method."}, {"start": 6.8, "end": 13.4, "text": " And in this one we're going to see how to run trainings and how to leverage tensorboard in order to track your experiments."}, {"start": 13.4, "end": 16.4, "text": " Okay, so go ahead and open the training script."}, {"start": 16.4, "end": 18.7, "text": " Let's jump to this function."}, {"start": 18.7, "end": 22.1, "text": " So this is just some boilerplate code."}, {"start": 22.1, "end": 32.6, "text": " And then we just create a PyTorch data loader which will basically just take images from MSCoco dataset, resize them to 256 by 256,"}, {"start": 32.6, "end": 37.6, "text": " and concatenate them into batches and also do image normalization."}, {"start": 37.6, "end": 47.6, "text": " So afterwards we have this transformer network which is the meat of this project basically because that's the network that we'll be training."}, {"start": 47.6, "end": 52.300000000000004, "text": " Perceptual loss net is we're using VGG16 here."}, {"start": 52.300000000000004, "end": 55.8, "text": " It's a usual thing to do in an NST field."}, {"start": 55.8, "end": 60.800000000000004, "text": " And yeah, just add an optimizer, nothing special there."}, {"start": 60.800000000000004, "end": 71.5, "text": " And we're going to load this style image and you can choose between basically whatever you put in this style images directory."}, {"start": 71.5, "end": 75.6, "text": " You'll be able to use that one to train the model for that specific style image."}, {"start": 75.6, "end": 87.6, "text": " What we do here is we just pass the style image through the perceptual net and we just find the style representation which is just a collection of gram matrices."}, {"start": 87.6, "end": 91.89999999999999, "text": " I just print the header here, nothing special there, just boilerplate code."}, {"start": 91.89999999999999, "end": 95.1, "text": " And this is the main training loop."}, {"start": 95.1, "end": 99.6, "text": " So what we do here is we, as I said, we have this train loader."}, {"start": 99.6, "end": 106.1, "text": " So we just take one batch and we pass the batch through the transformer net."}, {"start": 106.1, "end": 116.1, "text": " And whatever comes out is what we want this stylized batch to have the same content as the content image."}, {"start": 116.1, "end": 122.6, "text": " And to have the same style as the style image we just initialized a couple of seconds ago."}, {"start": 122.6, "end": 128.6, "text": " So we take this content batch which is just a collection of images from Ms. 
Coco."}, {"start": 128.6, "end": 136.1, "text": " We just pass it through the perceptual loss net and we also pass this stylized batch that just came out of the transformer net through the VGG."}, {"start": 136.1, "end": 143.6, "text": " So bottom line, this target content representation is going to be different for every single iteration."}, {"start": 143.6, "end": 150.6, "text": " Whereas the style representation is going to be the same because these models are kind of not flexible."}, {"start": 150.6, "end": 152.6, "text": " They can only be trained for a single style."}, {"start": 152.6, "end": 163.6, "text": " We just calculate content loss and adjust MSC loss between the target content representation I just mentioned and between the current content representation."}, {"start": 163.6, "end": 169.6, "text": " We just find the style loss by also doing MSC on gram matrices."}, {"start": 169.6, "end": 175.1, "text": " And TV loss is for kind of enforces image smoothness."}, {"start": 175.1, "end": 179.6, "text": " And in my experience, it did not help as much."}, {"start": 179.6, "end": 187.6, "text": " So I've set it to zero but you can play with it if you get some really strange noise in the output image of your model."}, {"start": 187.6, "end": 194.6, "text": " And then we just combine the thought loss as a sum of those three components."}, {"start": 194.6, "end": 197.6, "text": " Let's see how we can train these models."}, {"start": 197.6, "end": 203.1, "text": " So just go ahead down here and first thing you do is you pick your style image."}, {"start": 203.1, "end": 207.6, "text": " So you pick a style image that you want to train your model on."}, {"start": 207.6, "end": 215.6, "text": " So let's set, for example, mosaic here and you can pick whatever options you have here."}, {"start": 215.6, "end": 221.6, "text": " You can just play some new style image in this directory style images and you can use that style image."}, {"start": 221.6, "end": 230.6, "text": " So afterwards, you the only parameter pretty much that you'll have to change and tweak around is this style weight parameter."}, {"start": 230.6, "end": 235.6, "text": " And it's for e to the power of five at the moment."}, {"start": 235.6, "end": 238.6, "text": " So let's just stick with it."}, {"start": 238.6, "end": 244.6, "text": " And you'll probably have to experiment with with e-books."}, {"start": 244.6, "end": 252.6, "text": " So, for example, you can use only one e-book and put maybe three thirty images here instead of none,"}, {"start": 252.6, "end": 255.6, "text": " which will use all of the MSCoco images."}, {"start": 255.6, "end": 263.6, "text": " And that's around eighty three thousand images output, maybe at only two thousand here just for demonstration sake."}, {"start": 263.6, "end": 267.6, "text": " And I'm going to put only one e-book here."}, {"start": 267.6, "end": 270.6, "text": " TensorBoard is set to true. We want to enable this."}, {"start": 270.6, "end": 275.6, "text": " This will make your life easier when monitoring these experiments."}, {"start": 275.6, "end": 283.6, "text": " So we're just going to head and so beforehand, actually, we got a we got a download MSCoco data set."}, {"start": 283.6, "end": 286.6, "text": " And I just you just basically open this resource downloader."}, {"start": 286.6, "end": 294.6, "text": " I've already prepared it for you. 
So you just have to to place the download choice here to one."}, {"start": 294.6, "end": 298.6, "text": " And you will go ahead and download the MSCoco."}, {"start": 298.6, "end": 304.6, "text": " You just go ahead and run this and it will start downloading the MSCoco data set."}, {"start": 304.6, "end": 307.6, "text": " So I've already done that. So I'm just going to stop it here."}, {"start": 307.6, "end": 312.6, "text": " And it's a big data set. I think it's around 13 gigabytes."}, {"start": 312.6, "end": 314.6, "text": " So just keep that on the back of your mind."}, {"start": 314.6, "end": 321.6, "text": " And now we're just going to run the training script and it will start training the model."}, {"start": 321.6, "end": 328.6, "text": " I've learned to train and actually I've set the number of data points to twenty thousand so that you can better see the curves in TensorBoard."}, {"start": 328.6, "end": 335.6, "text": " So now let's just open the TensorBoard. You can see the command here in the console."}, {"start": 335.6, "end": 342.6, "text": " You just need to open your your account, the prompt and navigate to project route, activate the environment."}, {"start": 342.6, "end": 346.6, "text": " And then you can just copy paste the command in the console."}, {"start": 346.6, "end": 352.6, "text": " So TensorBoard, log, dear, blah, blah. Copy that."}, {"start": 352.6, "end": 356.6, "text": " Paste it here. Run it."}, {"start": 356.6, "end": 360.6, "text": " And you'll see the URL here. It's up."}, {"start": 360.6, "end": 367.6, "text": " It will just open like a local local server which you can access through your browser."}, {"start": 367.6, "end": 371.6, "text": " Just paste it here and you'll see the TensorBoard."}, {"start": 371.6, "end": 380.6, "text": " Once it opens, you just go and scroll to your experiment and it will be on the bottom of this list."}, {"start": 380.6, "end": 382.6, "text": " You just activate the five of these."}, {"start": 382.6, "end": 384.6, "text": " Basically, I'm going to explain what these mean."}, {"start": 384.6, "end": 389.6, "text": " So content loss and the style loss are the pretty much the most important ones."}, {"start": 389.6, "end": 397.6, "text": " And you can see that the content loss, even after five thousand batches, i.e. 
twenty thousand images, is still lowering down."}, {"start": 397.6, "end": 404.6, "text": " So you can train this model for at least at least twenty thousand more."}, {"start": 404.6, "end": 409.6, "text": " The important thing with style loss is that it doesn't have huge spikes."}, {"start": 409.6, "end": 416.6, "text": " Otherwise, something went wrong and you probably need to lower down your style weight and then your model will be able to train."}, {"start": 416.6, "end": 420.6, "text": " TV loss is zero because I've set the weight to zero."}, {"start": 420.6, "end": 426.6, "text": " You'll only need to use this one if you get some really unsmooth output from your model."}, {"start": 426.6, "end": 433.6, "text": " And this bottom curve here just tells me some certain statistics of the stylized batch that goes."}, {"start": 433.6, "end": 438.6, "text": " That's the output from the transformer transformer net I showed you in the code in the beginning of the video."}, {"start": 438.6, "end": 445.6, "text": " These actually make a lot of sense because the input imagery that goes through the transformer net is ImageNet normalized."}, {"start": 445.6, "end": 453.6, "text": " So it's kind of centralized around zero and the output should be similar, should have a similar distribution."}, {"start": 453.6, "end": 454.6, "text": " So we can see that here."}, {"start": 454.6, "end": 460.6, "text": " And aside from these curves, I also have some imagery logged during the training."}, {"start": 460.6, "end": 467.6, "text": " And you can see how your your model is learning to better reproduce the style of the image you're training it for."}, {"start": 467.6, "end": 472.6, "text": " So in the beginning is just some random junk imagery."}, {"start": 472.6, "end": 483.6, "text": " And then as you go, as the time passes by and as the model have seen more and more data points or images from Ms. Coco in this in this case,"}, {"start": 483.6, "end": 485.6, "text": " it just gets better and better."}, {"start": 485.6, "end": 493.6, "text": " And you can see that the content is getting more prominent and stylization is getting better."}, {"start": 493.6, "end": 505.6, "text": " This is really useful to just debug and see if your model is actually learning the stylized imagery so that you don't have to wait for two hours and only then figure out that it's not learning anything."}, {"start": 505.6, "end": 509.6, "text": " And one thing to note, you'll probably need a really strong machine to actually train this locally."}, {"start": 509.6, "end": 514.6, "text": " As you can see, the ventilators on my machine are pretty working and pretty heavy."}, {"start": 514.6, "end": 515.6, "text": " So just keep that in mind."}, {"start": 515.6, "end": 522.6, "text": " I'm going to close the answer board now and just going to lead you through the Ms. Coco data set."}, {"start": 522.6, "end": 525.6, "text": " Just show you the imagery here."}, {"start": 525.6, "end": 535.6, "text": " It's always a good idea to go through the data set and just check out images you're training on just manually and kind of get the feeling of how it looks like."}, {"start": 535.6, "end": 543.6, "text": " And you can see the imagery here on the Ms. 
Coco just random, random stuff pretty much."}, {"start": 543.6, "end": 551.6, "text": " And it actually doesn't matter because the model doesn't care which images you pass it in."}, {"start": 551.6, "end": 564.6, "text": " You only care to set some nice style style image and it will iterate through these content images and learn how to lower both the content as well as the style loss."}, {"start": 564.6, "end": 568.6, "text": " And you can see the imagery here."}, {"start": 568.6, "end": 574.6, "text": " Finally, you have your model saved into both binaries and checkpoints."}, {"start": 574.6, "end": 580.6, "text": " So checkpoints basically serves to save intermediate models while your model is actually being trained."}, {"start": 580.6, "end": 587.6, "text": " And you'll find it under the subdirectory with the same name as your style image."}, {"start": 587.6, "end": 592.6, "text": " So our checkpoints will be under mosaic here because we use the mosaic style image."}, {"start": 592.6, "end": 598.6, "text": " But the final model will be actually placed here in the binaries directory."}, {"start": 598.6, "end": 605.6, "text": " And you'll be able to take that model and run it through the stylization script that I showed you in the last video."}, {"start": 605.6, "end": 606.6, "text": " So that was it for this video."}, {"start": 606.6, "end": 609.6, "text": " You'll learn how the training loop looks like."}, {"start": 609.6, "end": 620.6, "text": " You learn how to start your own training, how to monitor the trainings using tensor boards, curves and and imagery, logged imagery."}, {"start": 620.6, "end": 628.6, "text": " And you also know where to find your binaries that you can run later on in the stylization script."}, {"start": 628.6, "end": 629.6, "text": " Thanks for watching."}, {"start": 629.6, "end": 651.6, "text": " If you found this video useful, consider subscribing and sharing and see you in the next video."}]
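The TensorBoard monitoring and checkpointing described in the transcript can be reproduced with PyTorch's SummaryWriter along the lines of the sketch below. The tag names, logging frequency, and paths are illustrative, and the loss tensors are assumed to come from a training loop like the one sketched earlier.

```python
# Sketch of TensorBoard logging: scalar loss curves, stylized-batch statistics,
# periodic image grids, plus a trivial checkpoint helper.
import torch
from torch.utils.tensorboard import SummaryWriter
from torchvision.utils import make_grid

writer = SummaryWriter(log_dir="runs/mosaic_style_weight_4e5")  # hypothetical experiment name


def log_step(writer, step, content_loss, style_loss, tv_loss, stylized_batch):
    writer.add_scalar("loss/content", float(content_loss), step)
    writer.add_scalar("loss/style", float(style_loss), step)
    writer.add_scalar("loss/tv", float(tv_loss), step)
    # the stylized batch should stay roughly centered around zero, since the
    # inputs (and therefore the expected outputs) are ImageNet-normalized
    writer.add_scalar("stylized/mean", stylized_batch.mean().item(), step)
    writer.add_scalar("stylized/std", stylized_batch.std().item(), step)
    if step % 100 == 0:  # periodically log intermediate imagery to eyeball progress
        grid = make_grid(stylized_batch.detach().cpu().clamp(-3, 3), normalize=True)
        writer.add_image("stylized_batch", grid, step)


def save_checkpoint(model, path):
    # intermediate checkpoints can go under a subdirectory named after the style image,
    # and the final model into a binaries directory; the exact layout is up to you
    torch.save(model.state_dict(), path)
```

Launched with `tensorboard --logdir runs` from the project root, the loss curves and image grids then show up at the local URL the command prints, which is the view discussed in the video.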